instruction (stringlengths 12–148) | input (stringlengths 16–30.2k) | output (stringlengths 11–27.3k)
---|---|---|
Python optparse metavar
|
I am not sure what optparse's metavar parameter is used for. I see it is used all around, but I can't see its use.
Can someone make it clear to me? Thanks.
|
As @Guillaume says, it's used for generating help. If you want to have an option that takes an argument, such as a filename, you can add the metavar parameter to the add_option call so your preferred argument name/descriptor is output in the help message. From the current module documentation:
usage = "usage: %prog [options] arg1 arg2"
parser = OptionParser(usage=usage)
parser.add_option("-f", "--filename",
metavar="FILE", help="write output to FILE"),
would produce help like this:
usage: <yourscript> [options] arg1 arg2
options:
  -f FILE, --filename=FILE
The "FILE" after the "-f" and the "--filename" comes from the metavar.
|
How to get a row-by-row MySQL ResultSet in python
|
MySQL ResultSets are by default retrieved completely from the server before any work can be done. In cases of huge result sets this becomes unusable. I would like instead to actually retrieve the rows one by one from the server.
In Java, following the instructions here (under "ResultSet"), I create a statement like this:
stmt = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY,
                            java.sql.ResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(Integer.MIN_VALUE);
This works nicely in Java. My question is: is there a way to do the same in python?
One thing I tried is to limit the query to 1000 rows at a time, like this:
start_row = 0
while True:
    cursor = conn.cursor()
    cursor.execute("SELECT item FROM items LIMIT %d,1000" % start_row)
    rows = cursor.fetchall()
    if not rows:
        break
    start_row += 1000
    # Do something with rows...
However, this seems to get slower the higher start_row is.
And no, using fetchone() instead of fetchall() doesn't change anything.
Clarification:
The naive code I use to reproduce this problem looks like this:
import MySQLdb
conn = MySQLdb.connect(user="user", passwd="password", db="mydb")
cur = conn.cursor()
print "Executing query"
cur.execute("SELECT * FROM bigtable")
print "Starting loop"
row = cur.fetchone()
while row is not None:
    print ", ".join([str(c) for c in row])
    row = cur.fetchone()
cur.close()
conn.close()
On a ~700,000 row table, this code runs quickly. But on a ~9,000,000 row table it prints "Executing query" and then hangs for a long, long time. That is why it makes no difference whether I use fetchone() or fetchall().
|
I think you have to connect passing cursorclass=MySQLdb.cursors.SSCursor:
MySQLdb.connect(user="user",
                passwd="password",
                db="mydb",
                cursorclass=MySQLdb.cursors.SSCursor)
The default cursor fetches all the data at once, even if you don't use fetchall.
Edit: SSCursor or any other cursor class that supports server side resultsets - check the module docs on MySQLdb.cursors.
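Putting it together, a minimal sketch of row-by-row iteration over a big table with a server-side cursor (connection parameters and table name are made up):
import MySQLdb
import MySQLdb.cursors

conn = MySQLdb.connect(user="user", passwd="password", db="mydb",
                       cursorclass=MySQLdb.cursors.SSCursor)
cur = conn.cursor()
cur.execute("SELECT * FROM bigtable")
# with SSCursor the rows are streamed from the server as you fetch them
row = cur.fetchone()
while row is not None:
    # do something with row...
    row = cur.fetchone()
cur.close()
conn.close()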
|
Dynamic Keyword Arguments in Python?
|
Does python have the ability to create dynamic keywords?
For example:
qset.filter(min_price__usd__range=(min_price, max_price))
I want to be able to change the usd part based on a selected currency.
|
Yes, it does. Use **kwargs in a function definition.
Example:
def f(**kwargs):
    print kwargs.keys()

f(a=2, b="b") # -> ['a', 'b']
f(**{'d'+'e': 1}) # -> ['de']
But why do you need that?
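Applied to the Django example in the question, you can build the keyword name as a string and unpack it with ** (currency here is a hypothetical variable holding, e.g., 'usd'):
field = 'min_price__%s__range' % currency
qset.filter(**{field: (min_price, max_price)})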
|
Python, Popen and select - waiting for a process to terminate or a timeout
|
I run a subprocess using:
p = subprocess.Popen("subprocess",
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
stdin=subprocess.PIPE)
This subprocess could either exit immediately with an error on stderr, or keep running. I want to detect either of these conditions - the latter by waiting for several seconds.
I tried this:
SECONDS_TO_WAIT = 10
select.select([],
              [p.stdout, p.stderr],
              [p.stdout, p.stderr],
              SECONDS_TO_WAIT)
but it just returns:
([],[],[])
on either condition. What can I do?
|
Have you tried using the Popen.poll() method? You could just do this:
p = subprocess.Popen("subprocess",
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE,
                     stdin=subprocess.PIPE)
time.sleep(SECONDS_TO_WAIT)
retcode = p.poll()
if retcode is not None:
    # process has terminated
This will cause you to always wait 10 seconds, but if the failure case is rare this would be amortized over all the success cases.
Edit:
How about:
t_nought = time.time()
seconds_passed = 0
while p.poll() is None and seconds_passed < 10:
    seconds_passed = time.time() - t_nought
if seconds_passed >= 10:
    # TIMED OUT
This has the ugliness of being a busy wait, but I think it accomplishes what you want.
Additionally looking at the select call documentation again I think you may want to change it as follows:
SECONDS_TO_WAIT = 10
select.select([p.stderr],
              [],
              [p.stdout, p.stderr],
              SECONDS_TO_WAIT)
Since you would typically want to read from stderr, you want to know when it has something available to read (ie the failure case).
I hope this helps.
|
How do I use IPython as my Emacs Python interpreter?
|
I'm running Emacs 22.1.1 and IPython 0.9.1 on OS X and I'd like to be able to run lines/methods/snippets of Python code from my current buffer on demand inside an IPython interpreter.
What do I need to do to get this working?
|
Also, IPython won't load with the official python.el that ships with Emacs 23.1.1.
|
python ImportError No module named
|
I am very new at Python and I am getting this error:
Traceback (most recent call last):
  File "mountain.py", line 28, in ?
    from toolkit.interface import interface
ImportError: No module named toolkit.interface
Python is installed in a local directory:
My directory tree is like this:
(local directory)/site-packages/toolkit/interface.py
My code is in here
(local directory)/site-packages/toolkit/examples/mountain.py
To run the example I do python mountain.py, and in the code I have:
from toolkit.interface import interface
And I get the error quoted above. I have already checked sys.path, and it contains the directory /site-packages. I also have the file __init__.py.bin in the toolkit folder to indicate to Python that this is a package, and another __init__.py.bin in the examples directory.
I do not know why Python cannot find the file when it is in sys.path. Any ideas? Could it be a permissions problem? Do I need execute permission?
|
Based on your comments to orip's post, I guess this is what happened:
1. You edited __init__.py on Windows.
2. The Windows editor added something non-printing, perhaps a carriage return (end-of-line in Windows is CR/LF; in Unix it is LF only), or perhaps a CTRL-Z (the Windows end-of-file marker).
3. You used WinSCP to copy the file to your Unix box.
4. WinSCP thought: "This has something that's not basic text; I'll put a .bin extension to indicate binary data."
5. The missing __init__.py (now called __init__.py.bin) means Python doesn't recognize toolkit as a package.
6. You create __init__.py in the appropriate directory and everything works... ?
|
Nicest way to pad zeroes to string
|
What is the nicest/shortest way to pad a string with zeroes to the left, so the string length has a specific length?
|
Strings:
>>> n = '4'
>>> print n.zfill(3)
004
And for numbers:
>>> n = 4
>>> print '%03d' % n
004
>>> print format(4, '03') # python >= 2.6
004
>>> print '{0:03d}'.format(4) # python >= 2.6
004
>>> print '{foo:03d}'.format(foo=4) # python >= 2.6
004
>>> print('{:03d}'.format(4)) # python >= 2.7 + python3
004
>>> print('{0:03d}'.format(4)) # python 3
004
String formatting documentation.
|
How do you unzip very large files in python?
|
Using python 2.4 and the built-in ZipFile library, I cannot read very large zip files (greater than 1 or 2 GB) because it wants to store the entire contents of the uncompressed file in memory. Is there another way to do this (either with a third-party library or some other hack), or must I "shell out" and unzip it that way (which isn't as cross-platform, obviously).
|
Here's an outline of decompression of large files.
import zipfile
import zlib
import os

src = open(doc, "rb")  # doc is the path to the .zip file
zf = zipfile.ZipFile(src)
for m in zf.infolist():
    # Examine the header
    print m.filename, m.header_offset, m.compress_size, repr(m.extra), repr(m.comment)
    src.seek(m.header_offset)
    src.read(30)  # Good to use struct to unpack this.
    nm = src.read(len(m.filename))
    if len(m.extra) > 0: ex = src.read(len(m.extra))
    if len(m.comment) > 0: cm = src.read(len(m.comment))
    # Build a decompression object
    decomp = zlib.decompressobj(-15)
    # This can be done with a loop reading blocks
    out = open(m.filename, "wb")
    result = decomp.decompress(src.read(m.compress_size))
    out.write(result)
    result = decomp.flush()
    out.write(result)
    # end of the loop
    out.close()
zf.close()
src.close()
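To keep memory bounded, the single decompress call in the loop above can be replaced with block reads, as the comment suggests; a minimal sketch (the chunk size is arbitrary):
CHUNK_SIZE = 64 * 1024
out = open(m.filename, "wb")
decomp = zlib.decompressobj(-15)
remaining = m.compress_size
while remaining > 0:
    block = src.read(min(CHUNK_SIZE, remaining))
    remaining -= len(block)
    out.write(decomp.decompress(block))
out.write(decomp.flush())
out.close()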
|
End-line characters from lines read from text file, using Python
|
When reading lines from a text file using python, the end-line character often needs to be truncated before processing the text, as in the following example:
f = open("myFile.txt", "r")
for line in f:
line = line[:-1]
# do something with line
Is there an elegant way or idiom for retrieving text lines without the end-line character?
|
The idiomatic way to do this in Python is to use rstrip('\n'):
for line in open('myfile.txt'): # opened in text-mode; all EOLs are converted to '\n'
    line = line.rstrip('\n')
    process(line)
Each of the other alternatives has a gotcha:
file('...').read().splitlines() has to load the whole file in memory at once.
line = line[:-1] will fail if the last line has no EOL.
|
IronPython For Unit Testing over C#
|
We know that Python provides a lot of productivity gains over compiled languages. We program in C# and need to write our unit test cases in C# as well. The amount of code we write for unit tests is approximately ten times more than the original code.
Is it a good choice to write unit test cases in IronPython instead of C#? Has anybody done that? I wrote a few test cases and they seem good, but pointy-haired managers won't accept them.
|
Python is excellent for unit testing C# code. Our app is 75% in Python and 25% C# (Python.Net), and our unit tests are 100% Python.
I find that it's much easier to make use of stubs and mocks in Python, which is probably one of the most critical capabilities for writing effective unit tests.
|
Can I install Python 3.x and 2.x on the same computer?
|
I'm running Windows and the shell/OS automatically runs Python based on the registry settings when you run a program on the command line. Will this break if I install a 2.x and 3.x version of Python on the same machine?
I want to play with Python 3 while still being able to run 2.x scripts on the same machine.
|
The official solution for coexistence seems to be the Python Launcher for Windows, PEP 397, which was included in Python 3.3.0. Installing the release dumps py.exe and pyw.exe launchers into %SYSTEMROOT% (C:\Windows), which are then associated with .py and .pyw scripts, respectively.
In order to use the new launcher (without manually setting up your own associations to it), leave the "Register Extensions" option enabled. I'm not quite sure why, but on my machine it left Py 2.7 as the "default" (of the launcher).
Running scripts by calling them directly from the command line will route them through the launcher and parse the shebang (if it exists). You can also explicitly call the launcher and use switches: py -3 mypy2script.py.
All manner of shebangs seem to work
#!C:\Python33\python.exe
#!python3
#!/usr/bin/env python3
as well as wanton abuses
#! notepad.exe
|
Python logging in Django
|
I'm developing a Django app, and I'm trying to use Python's logging module for error/trace logging. Ideally I'd like to have different loggers configured for different areas of the site. So far I've got all of this working, but one thing has me scratching my head.
I have the root logger going to sys.stderr, and I have configured another logger to write to a file. This is in my settings.py file:
sviewlog = logging.getLogger('MyApp.views.scans')
view_log_handler = logging.FileHandler('C:\\MyApp\\logs\\scan_log.log')
view_log_handler.setLevel(logging.INFO)
view_log_handler.setFormatter(logging.Formatter('%(asctime)s %(name)-12s %(levelname)-8s %(message)s'))
sviewlog.addHandler(view_log_handler)
Seems pretty simple. Here's the problem, though: whatever I write to the sviewlog gets written to the log file twice. The root logger only prints it once. It's like addHandler() is being called twice. And when I put my code through a debugger, this is exactly what I see. The code in settings.py is getting executed twice, so two FileHandlers are created and added to the same logger instance. But why? And how do I get around this?
Can anyone tell me what's going on here? I've tried moving the sviewlog logger/handler instantiation code to the file where it's used (since that actually seems like the appropriate place to me), but I have the same problem there. Most of the examples I've seen online use only the root logger, and I'd prefer to have multiple loggers.
|
Allow me to answer my own question. The underlying problem here is that settings.py gets imported twice, or maybe even more (See here). (I still don't understand why this is. Maybe some Django expert could explain that to me.) This seems to be true of some other modules as well. At this point I don't think it's wise to make assumptions about how many times settings.py will be imported. For that matter, such assumptions aren't safe in general. I've had this code in places other than settings.py, and the results are similar.
You have to code around this. That is, you have to check your logger for existing handlers before adding additional handlers to it. This is a bit ugly because it's perfectly reasonable to have multiple handlers -- even of the same type -- attached to one logger. There are a few solutions for dealing with this. One is to check the handlers property of your logger object: if you only want one handler and len(logger.handlers) > 0, don't add another. Personally I don't love this solution, because it gets messy with more handlers.
I prefer something like this (thanks to Thomas Guettler):
# file logconfig.py
if not hasattr(logging, "set_up_done"):
    logging.set_up_done = False

def set_up(myhome):
    if logging.set_up_done:
        return
    # set up your logging here
    # ...
    logging.set_up_done = True
I must say, I wish the fact that Django imports settings.py multiple times were better documented. And I would imagine that my configuration is somehow causing this multiple import, but I'm having trouble finding out what is causing the problem and why. Maybe I just couldn't find it in their documentation, but I would think that's the sort of thing you need to warn your users about.
|
Best way to import version-specific python modules
|
Which method makes the most sense for importing a module in python that is version specific? My use case is that I'm writing code that will be deployed into a python 2.3 environment and in a few months be upgraded to python 2.5. This:
if sys.version_info[:2] >= (2, 5):
    from string import Template
else:
    from our.compat.string import Template
or this
try:
    from string import Template
except ImportError:
    from our.compat.string import Template
I know both are equally correct and work, but which one is preferable?
|
Always the second way - you never know what different Python installations will have installed. Template is a specific case where it matters less, but when you test for the capability instead of the versioning you're always more robust.
That's how I make Testoob support Python 2.2 - 2.6: I try to import a module in different ways until it works. It's also relevant to 3rd-party libraries.
Here's an extreme case - supporting different options for ElementTree to appear:
try: import elementtree.ElementTree as ET
except ImportError:
    try: import cElementTree as ET
    except ImportError:
        try: import lxml.etree as ET
        except ImportError:
            import xml.etree.ElementTree as ET # Python 2.5 and up
|
How should I learn to use the Windows API with Python?
|
I have very little experience building software for Windows, and zero experience using the Windows API, but I'm reasonably familiar with Python. So, how should I go about learning to use the Windows API with Python?
|
Honestly, no. The Windows API is an 800 pound monster covered with hair. Charlie Petzold's 15 pound book was the canonical reference once upon a time.
That said, the Python for Windows folks have some good material. Microsoft has the whole API online, including some sample code and such. And the Wikipedia article is a good overview.
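As a first taste of calling the API from Python, the standard library's ctypes module (bundled since Python 2.5) can invoke Win32 functions directly; a minimal sketch, assuming you are on Windows:
import ctypes

# MessageBoxA lives in user32; ctypes.windll uses the stdcall convention
ctypes.windll.user32.MessageBoxA(None, "Hello from Python", "Win32 API", 0)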
|
What can Pygame do in terms of graphics that wxPython can't?
|
I want to develop a very simple 2D game in Python. Pygame is the most popular library for game development in Python, but I'm already quite familiar with wxPython and feel comfortable using it. I've even written a Tetris clone in it, and it was pretty smooth.
I wonder, what does Pygame offer in terms of graphics (leaving sound aside, for a moment) that wxPython can't do ? Is it somehow simpler/faster to do graphics in Pygame than in wxPython ? Is it even more cross-platform ?
It looks like I'm missing something here, but I don't know what.
|
Well, in theory there is nothing you can do with Pygame that you can't with wxPython. The point is not what but how. In my opinion, it's easier to write a game with Pygame because:
It's faster. Pygame is based on SDL, which is a C library specifically designed for games; it has been developed with speed in mind. When you develop games, you need speed.
It is a game library, not a general-purpose canvas. It has classes and functions useful for sprites, transformations, input handling, drawing, and collision detection. It also implements algorithms and techniques often used in games, like dirty rectangles, page flipping, etc.
There are thousands of games and examples made with it. It will be easier for you to discover how to do any trick.
There are a lot of libraries with effects and utilities you could reuse. You want an isometric game, there is a library; you want a physics engine, there is a library; you want some cool visual effect, there is a library.
PyWeek. :) This is to make the development of your game even more fun!
For some very simple games like Tetris, the difference won't be too much, but if you want to develop a fairly complex game, believe me, you will want something like PyGame.
|
How to convert from UTM to LatLng in python or Javascript
|
I have a bunch of files with coordinates in UTM form. For each coordinate I have easting, northing and zone. I need to convert this to LatLng for use with Google Map API to show the information in a map.
I have found some online calculators that does this, but no actual code or libraries. http://trac.osgeo.org/proj4js/ is a projection library for Javascript, but looking at the demo it doesn't include UTM projection.
I am still pretty fresh to the entire GIS domain, so what I want is something ala:
(lat,lng) = transform(easting, northing, zone)
|
I ended up finding java code from IBM that solved it: http://www.ibm.com/developerworks/java/library/j-coordconvert/index.html
Just for reference, here is my python implementation of the method I needed:
import math

def utmToLatLng(zone, easting, northing, northernHemisphere=True):
    if not northernHemisphere:
        northing = 10000000 - northing
    a = 6378137
    e = 0.081819191
    e1sq = 0.006739497
    k0 = 0.9996
    arc = northing / k0
    mu = arc / (a * (1 - math.pow(e, 2) / 4.0 - 3 * math.pow(e, 4) / 64.0 - 5 * math.pow(e, 6) / 256.0))
    ei = (1 - math.pow((1 - e * e), (1 / 2.0))) / (1 + math.pow((1 - e * e), (1 / 2.0)))
    ca = 3 * ei / 2 - 27 * math.pow(ei, 3) / 32.0
    cb = 21 * math.pow(ei, 2) / 16 - 55 * math.pow(ei, 4) / 32
    cc = 151 * math.pow(ei, 3) / 96
    cd = 1097 * math.pow(ei, 4) / 512
    phi1 = mu + ca * math.sin(2 * mu) + cb * math.sin(4 * mu) + cc * math.sin(6 * mu) + cd * math.sin(8 * mu)
    n0 = a / math.pow((1 - math.pow((e * math.sin(phi1)), 2)), (1 / 2.0))
    r0 = a * (1 - e * e) / math.pow((1 - math.pow((e * math.sin(phi1)), 2)), (3 / 2.0))
    fact1 = n0 * math.tan(phi1) / r0
    _a1 = 500000 - easting
    dd0 = _a1 / (n0 * k0)
    fact2 = dd0 * dd0 / 2
    t0 = math.pow(math.tan(phi1), 2)
    Q0 = e1sq * math.pow(math.cos(phi1), 2)
    fact3 = (5 + 3 * t0 + 10 * Q0 - 4 * Q0 * Q0 - 9 * e1sq) * math.pow(dd0, 4) / 24
    fact4 = (61 + 90 * t0 + 298 * Q0 + 45 * t0 * t0 - 252 * e1sq - 3 * Q0 * Q0) * math.pow(dd0, 6) / 720
    lof1 = _a1 / (n0 * k0)
    lof2 = (1 + 2 * t0 + Q0) * math.pow(dd0, 3) / 6.0
    lof3 = (5 - 2 * Q0 + 28 * t0 - 3 * math.pow(Q0, 2) + 8 * e1sq + 24 * math.pow(t0, 2)) * math.pow(dd0, 5) / 120
    _a2 = (lof1 - lof2 + lof3) / math.cos(phi1)
    _a3 = _a2 * 180 / math.pi
    latitude = 180 * (phi1 - fact1 * (fact2 + fact3 + fact4)) / math.pi
    if not northernHemisphere:
        latitude = -latitude
    longitude = ((zone > 0) and (6 * zone - 183.0) or 3.0) - _a3
    return (latitude, longitude)
And here I thought it was something simple like easting*x+zone*y or something.
|
Django Admin's "view on site" points to example.com instead of my domain
|
I added a get_absolute_url function to one of my models.
def get_absolute_url(self):
return '/foo/bar'
The admin site picks it up and adds a "view on site" link to the detail page for that object (when I put a real URL there instead of "/foo/bar").
The problem is instead of going to http://localhost:8000/foo/bar, it goes to http://example.com/foo/bar.
What am I doing wrong?
|
You have to change the default site domain value.
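With django.contrib.sites enabled, the "view on site" link builds its URL from the Site record, and a fresh project ships with example.com there. One way to fix it for development is from a Django shell (a minimal sketch; editing the Sites entry in the admin works just as well):
from django.contrib.sites.models import Site

site = Site.objects.get_current()
site.domain = 'localhost:8000'
site.name = 'localhost:8000'
site.save()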
|
Python - No handlers could be found for logger "OpenGL.error"
|
Okay, what is it, and why does it occur on Win2003 Server but not on WinXP?
It doesn't seem to affect my application at all, but I get this error message when I close the application. And it's annoying (as errors messages should be).
I am using pyOpenGl and wxPython to do the graphics stuff. Unfortunately, I'm a C# programmer that has taken over this Python app, and I had to learn Python to do it.
I can supply code and version numbers etc, but I'm still learning the technical stuff, so any help would be appreciated.
Python 2.5, wxPython and pyOpenGL
|
Looks like OpenGL is trying to report some error on Win2003, but you haven't configured your system to say where logging output should go.
You can add the following to the beginning of your program, and you'll then see the details of the error on stderr:
import logging
logging.basicConfig()
Check out the documentation on the logging module for more config info; conceptually it's similar to log4j.
|
Read file object as string in python
|
I'm using urllib2 to read in a page. I need to do a quick regex on the source and pull out a few variables, but urllib2 gives me a file-like object rather than a string.
I'm new to python so I'm struggling to see how I use a file object to do this. Is there a quick way to convert this into a string?
|
You can use Python in interactive mode to search for solutions.
If f is your object, you can enter dir(f) to see all of its methods and attributes. There's one called read. Enter help(f.read) and it tells you that f.read() is the way to retrieve a string from a file object.
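Applied to the urllib2 case in the question (the URL is a placeholder):
import urllib2

f = urllib2.urlopen('http://www.example.com/')
source = f.read()  # the whole response body as one string
# now run your regex over source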
|
Format numbers in django templates
|
I'm trying to format numbers. Examples:
1 => 1
12 => 12
123 => 123
1234 => 1,234
12345 => 12,345
It strikes me as a fairly common thing to do, but I can't figure out which filter I'm supposed to use.
Edit: If you've a generic Python way to do this, I'm happy adding a formatted field in my model.
|
Django's contributed humanize application does this:
{% load humanize %}
{{ my_num|intcomma }}
Be sure to add 'django.contrib.humanize' to your INSTALLED_APPS list in the settings.py file.
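If you want the generic-Python route mentioned in the edit, the standard locale module can do grouped formatting (the grouping character follows the active locale):
import locale
locale.setlocale(locale.LC_ALL, '')  # use the user's default locale
print locale.format('%d', 12345, grouping=True)  # -> '12,345' in an en_US locale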
|
How do I concisely implement multiple similar unit tests in the Python unittest framework?
|
I'm implementing unit tests for a family of functions that all share a number of invariants. For example, calling any of the functions with two matrices produces a matrix of known shape.
I would like to write unit tests to test the entire family of functions for this property, without having to write an individual test case for each function (particularly since more functions might be added later).
One way to do this would be to iterate over a list of these functions:
import unittest
import numpy
from somewhere import the_functions
from somewhere_else import TheClass

class Test_the_functions(unittest.TestCase):
    def setUp(self):
        self.matrix1 = numpy.ones((5,10))
        self.matrix2 = numpy.identity(5)

    def testOutputShape(self):
        """Output of functions must be of a certain shape"""
        for function in the_functions:
            output = function(self.matrix1, self.matrix2)
            fail_message = "%s produces output of the wrong shape" % str(function)
            self.assertEqual(self.matrix1.shape, output.shape, fail_message)

if __name__ == "__main__":
    unittest.main()
I got the idea for this from Dive Into Python. There, it's not a list of functions being tested but a list of known input-output pairs. The problem with this approach is that if any element of the list fails the test, the later elements don't get tested.
I looked at subclassing unittest.TestCase and somehow providing the specific function to test as an argument, but as far as I can tell that prevents us from using unittest.main() because there would be no way to pass the argument to the testcase.
I also looked at dynamically attaching "testSomething" functions to the testcase, by using setattr with a lambda, but the testcase did not recognize them.
How can I rewrite this so it remains trivial to expand the list of tests, while still ensuring every test is run?
|
Here's my favorite approach to the "family of related tests". I like explicit subclasses of a TestCase that expresses the common features.
class MyTestF1(unittest.TestCase):
    theFunction = staticmethod(f1)

    def setUp(self):
        self.matrix1 = numpy.ones((5,10))
        self.matrix2 = numpy.identity(5)

    def testOutputShape(self):
        """Output of functions must be of a certain shape"""
        output = self.theFunction(self.matrix1, self.matrix2)
        fail_message = "%s produces output of the wrong shape" % (self.theFunction.__name__,)
        self.assertEqual(self.matrix1.shape, output.shape, fail_message)

class TestF2(MyTestF1):
    """Includes ALL of MyTestF1's tests, plus a new test."""
    theFunction = staticmethod(f2)

    def testUniqueFeature(self):
        # blah blah blah
        pass

class TestF3(MyTestF1):
    """Includes ALL of MyTestF1's tests with no additional code."""
    theFunction = staticmethod(f3)
Add a function, add a subclass of MyTestF1. Each subclass of MyTestF1 includes all of the tests in MyTestF1 with no duplicated code of any kind.
Unique features are handled in an obvious way. New methods are added to the subclass.
It's completely compatible with unittest.main()
|
Gauss-Legendre Algorithm in python
|
I need some help calculating pi. I am trying to write a Python program that will calculate pi to X digits. I have tried several versions from the Python mailing list, and they are too slow for my use.
I have read about the Gauss-Legendre algorithm, and I have tried porting it to Python with no success.
I am working from here, and I would appreciate any input as to where I am going wrong!
It outputs: 0.163991276262
from __future__ import division
import math

def square(x): return x*x

a = 1
b = 1/math.sqrt(2)
t = 1/4
x = 1

for i in range(1000):
    y = a
    a = (a+b)/2
    b = math.sqrt(b*y)
    t = t - x * square((y-a))
    x = 2 * x

pi = (square((a+b)))/4*t

print pi
raw_input()
|
You forgot parentheses around 4*t:
pi = (a+b)**2 / (4*t)
You can use decimal to perform calculation with higher precision.
#!/usr/bin/env python
from __future__ import with_statement
import decimal

def pi_gauss_legendre():
    D = decimal.Decimal
    with decimal.localcontext() as ctx:
        ctx.prec += 2
        a, b, t, p = 1, 1/D(2).sqrt(), 1/D(4), 1
        pi = None
        while 1:
            an = (a + b) / 2
            b = (a * b).sqrt()
            t -= p * (a - an) * (a - an)
            a, p = an, 2*p
            piold = pi
            pi = (a + b) * (a + b) / (4 * t)
            if pi == piold: # equal within given precision
                break
    return +pi

decimal.getcontext().prec = 100
print pi_gauss_legendre()
Output:
3.141592653589793238462643383279502884197169399375105820974944592307816406286208\
998628034825342117068
|
AKS Primes algorithm in Python
|
A few years ago, it was proven that PRIMES is in P. Are there any algorithms implementing their primality test in Python? I wanted to run some benchmarks with a naive generator and see for myself how fast it is. I'd implement it myself, but I don't understand the paper enough yet to do that.
|
Quick answer: no, the AKS test is not the fastest way to test primality. There are much much faster primality tests that either assume the (generalized) Riemann hypothesis and/or are randomized. (E.g. Miller-Rabin is fast and simple to implement.) The real breakthrough of the paper was theoretical, proving that a deterministic polynomial-time algorithm exists for testing primality, without assuming the GRH or other unproved conjectures.
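For comparison, here is a minimal sketch of Miller-Rabin; k rounds bound the false-positive probability by 4**-k:
import random

def is_probable_prime(n, k=20):
    """Miller-Rabin probabilistic primality test with k rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2**s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)  # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True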
That said, if you want to understand and implement it, Scott Aaronson's short article might help. It doesn't go into all the details, but you can start at page 10 of 12, and it gives enough. :-)
There is also a list of implementations (mostly in C++) here.
Also, for optimization and improvements (by several orders of magnitude), you might want to look at this report, or (older) Crandall and Papadopoulos's report, or (older still) Daniel J Bernstein's report. All of them have fairly detailed pseudo-code that lends itself well to implementation.
|
How to test django caching?
|
Is there a way to be sure that a page is coming from cache on a production server and on the development server as well?
The solution shouldn't involve caching middleware because not every project uses them. Though the solution itself might be a middleware.
Just checking if the data is stale is not a very safe testing method IMO.
|
We do a lot of component caching and not all of them are updated at the same time. So we set host and timestamp values in a universally included context processor. At the top of each template fragment we stick in:
<!-- component_name {{host}} {{timestamp}} -->
The component_name just makes it easy to do a View Source and search for that string.
All of our views that are object-detail pages define a context variable "page_object" and we have this at the top of the base.html template master:
<!-- {{page_object.class_id}} @ {{timestamp}} -->
class_id() is a method from a super class used by all of our primary content classes. It is just:
def class_id(self):
    return "%s.%s.%s" % (self.__class__._meta.app_label,
                         self.__class__.__name__, self.id)
If you load a page and any of the timestamps are more than few seconds old, it's a pretty good bet that the component was cached.
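A minimal sketch of the universally included context processor mentioned above (the module path and variable names are invented for illustration):
# myapp/context_processors.py -- hypothetical module
import socket
import time

def cache_markers(request):
    """Stamp every template context with the serving host and render time."""
    return {
        'host': socket.gethostname(),
        'timestamp': time.strftime('%Y-%m-%d %H:%M:%S'),
    }

Register it by adding 'myapp.context_processors.cache_markers' to TEMPLATE_CONTEXT_PROCESSORS in settings.py.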
|
What could justify the complexity of Plone?
|
Plone is very complex. Zope2, Zope3, Five, ZCML, ZODB, ZEO, a whole bunch of acronyms and abbreviations.
It's hard to begin and the current state seems to be undecided. It is mainly based on Zope2, but incorporates Zope3 via Five. And there are XML config files everywhere.
Does the steep learning curve pay off? Is this complexity still justified today?
Background: I need a platform. Customers often need a CMS. I'm currently reading "Professional Plone Development", without prior knowledge of Plone.
The problem: Customers don't always want the same things, and you can't know beforehand. One thing is sure: they don't want the default theme of Plone. But any additional feature is a risk. You can't just say "if you want the complexity of Plone, you have to ask for it" when you don't know the system well enough to plan.
|
It's hard to answer your question without any background information. Is the complexity justified if you just want a blog? No. Is the complexity justified if you're building a company intranet for 400+ people? Yes. Is it a good investment if you're looking to be a consultant? Absolutely! There's a lot of Plone work out there, and it pays much better than the average PHP job.
I'd encourage you to clarify what you're trying to build, and ask the Plone forums for advice. Plone has a very mature and friendly community, and will absolutely let you know if what you're trying to do is a poor fit for Plone. You can of course do whatever you want with Plone, but there are some areas where it's the best solution available, and other areas where it'll be a lot of work to change it to do something else.
Some background:
The reason for the complexity of Plone at this point in time is that it's moving to a more modern architecture. It's bridging both the old and the new approach right now, which adds some complexity until the transition is mostly complete.
Plone is doing this to avoid leaving their customers behind by breaking backwards compatibility, which they take very seriously, unlike other systems I could mention (but won't ;).
You care about your data, the Plone community cares about their data, and we'd like you to be able to upgrade to the new and better versions even when we're transitioning to a new architecture. This is one of the Plone community's strengths, but there is of course a penalty to pay for modifying the plane while it's flying, and that's a bit of temporary, extra complexity.
Furthermore, Plone as a community has a strong focus on security (compare it to any other system on the vulnerabilities reported), and a very professional culture that values good architecture, testing and reusability.
As an example, consider the current version of Plone being developed (what will become 4.0):
It starts up 3-4 times faster than the current version.
It uses about 20% less memory than the current version.
There's a much, much easier types system in the works (Dexterity), which will reduce the complexity and speed up the system a lot, while keeping the same level of functionality
The code base is already 20% smaller than the current shipping version, and getting even smaller.
Early benchmarks of the new types system show a 5× speedup for content editing, and we haven't really started optimizing this part yet.
- Alexander Limi, Plone co-founder (and slightly biased ;)
|
Creating a list of objects in Python
|
I'm trying to create a Python script that opens several databases and compares their contents. In the process of creating that script, I've run into a problem in creating a list whose contents are objects that I've created.
I've simplified the program to its bare bones for this posting. First I create a new class, create a new instance of it, assign it an attribute and then write it to a list. Then I assign a new value to the instance and again write it to a list... and again and again...
Problem is, it's always the same object so I'm really just changing the base object. When I read the list, I get a repeat of the same object over and over.
So how do you write objects to a list within a loop?
Thanks,
Bob J
Here's my simplified code
class SimpleClass(object):
    pass

x = SimpleClass
# Then create an empty list
simpleList = []
# Then loop through from 0 to 3 adding an attribute to the instance 'x' of SimpleClass
for count in range(0,4):
    # each iteration creates a slightly different attribute value, and then prints it to
    # prove that step is working
    # but the problem is, I'm always updating a reference to 'x' and what I want to add to
    # simplelist is a new instance of x that contains the updated attribute
    x.attr1 = '*Bob* ' * count
    print "Loop Count: %s Attribute Value %s" % (count, x.attr1)
    simpleList.append(x)

print '-'*20

# And here I print out each instance of the object stored in the list 'simpleList'
# and the problem surfaces. Every element of 'simpleList' contains the same attribute value
y = SimpleClass
print "Reading the attributes from the objects in the list"
for count in range(0,4):
    y = simpleList[count]
    print y.attr1
So how do I (append, extend, copy or whatever) the elements of simpleList so that each entry contains a different instance of the object instead of all pointing to the same one?
|
You demonstrate a fundamental misunderstanding.
You never created an instance of SimpleClass at all, because you didn't call it.
simplelist = []
for count in xrange(4):
    x = SimpleClass()
    x.attr = count
    simplelist.append(x)
Or, if you let the class take parameters, instead, you can use a list comprehension.
simplelist = [SimpleClass(count) for count in xrange(4)]
|
Receive and send emails in python
|
How can I receive and send email in Python? A 'mail server' of sorts.
I am looking into making an app that listens to see if it receives an email addressed to foo@bar.domain.com, and sends an email back to the sender.
Now, am I able to do this all in Python, and would it be best to use 3rd-party libraries?
|
Here is a very simple example:
import smtplib
server = 'mail.server.com'
user = ''
password = ''
recipients = ['user@mail.com', 'other@mail.com']
sender = 'you@mail.com'
message = 'Hello World'
session = smtplib.SMTP(server)
# if your SMTP server doesn't need authentications,
# you don't need the following line:
session.login(user, password)
session.sendmail(sender, recipients, message)
For more options, error handling, etc, look at the smtplib module documentation.
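The example above only covers sending. For the receiving side, one simple option is to poll a mailbox with the standard poplib module rather than run your own server; a minimal sketch (server name and credentials are placeholders):
import poplib
import email

pop = poplib.POP3_SSL('pop.server.com')
pop.user('user')
pop.pass_('password')
num_messages = len(pop.list()[1])
for i in range(num_messages):
    # retr returns (response, list_of_lines, octets); messages are numbered from 1
    lines = pop.retr(i + 1)[1]
    msg = email.message_from_string('\n'.join(lines))
    print msg['From'], ':', msg['Subject']
pop.quit()

If you really need to accept mail addressed to your own domain, look at the standard smtpd module or a framework like Twisted Mail, typically running behind a proper MTA.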
|
How can I download all emails with attachments from Gmail?
|
How do I connect to Gmail and determine which messages have attachments? I then want to download each attachment, printing out the Subject: and From: for each message as I process it.
|
Hard one :-)
import email, getpass, imaplib, os

detach_dir = '.' # directory where to save attachments (default: current)
user = raw_input("Enter your GMail username:")
pwd = getpass.getpass("Enter your password: ")

# connecting to the gmail imap server
m = imaplib.IMAP4_SSL("imap.gmail.com")
m.login(user, pwd)
m.select("[Gmail]/All Mail") # here you can choose a mail box like INBOX instead
# use m.list() to get all the mailboxes

resp, items = m.search(None, "ALL") # you could filter using the IMAP rules here (check http://www.example-code.com/csharp/imap-search-critera.asp)
items = items[0].split() # getting the mails id

for emailid in items:
    resp, data = m.fetch(emailid, "(RFC822)") # fetching the mail, "(RFC822)" means "get the whole stuff", but you can ask for headers only, etc
    email_body = data[0][1] # getting the mail content
    mail = email.message_from_string(email_body) # parsing the mail content to get a mail object

    # Check if any attachments at all
    if mail.get_content_maintype() != 'multipart':
        continue

    print "["+mail["From"]+"] :" + mail["Subject"]

    # we use walk to create a generator so we can iterate on the parts and forget about the recursive headache
    for part in mail.walk():
        # multipart are just containers, so we skip them
        if part.get_content_maintype() == 'multipart':
            continue
        # is this part an attachment ?
        if part.get('Content-Disposition') is None:
            continue

        filename = part.get_filename()
        counter = 1

        # if there is no filename, we create one with a counter to avoid duplicates
        if not filename:
            filename = 'part-%03d%s' % (counter, '.bin')
            counter += 1

        att_path = os.path.join(detach_dir, filename)

        # Check if its already there
        if not os.path.isfile(att_path):
            # finally write the stuff
            fp = open(att_path, 'wb')
            fp.write(part.get_payload(decode=True))
            fp.close()
Wowww! That was something. ;-) But try the same in Java, just for fun!
By the way, I tested that in a shell, so some errors likely remain.
Enjoy
EDIT:
Because mail-box names can change from one country to another, I recommend doing m.list() and picking an item in it before m.select("the mailbox name") to avoid this error:
imaplib.error: command SEARCH illegal in state AUTH, only allowed in
states SELECTED
|
How does Django Know the Order to Render Form Fields?
|
If I have a Django form such as:
class ContactForm(forms.Form):
    subject = forms.CharField(max_length=100)
    message = forms.CharField()
    sender = forms.EmailField()
And I call the as_table() method of an instance of this form, Django will render the fields as the same order as specified above.
My question is how does Django know the order in which these class variables were defined?
(Also, how do I override this order, for example when I want to add a field from the class's __init__ method?)
|
[NOTE: this answer is now somewhat outdated - please see the discussion below it].
If f is a form, its fields are f.fields, which is a django.utils.datastructures.SortedDict (it presents the items in the order they are added). After form construction f.fields has a keyOrder attribute, which is a list containing the field names in the order they should be presented. You can set this to the correct ordering (though you need to exercise care to ensure you don't omit items or add extras).
Here's an example I just created in my current project:
class PrivEdit(ModelForm):
    def __init__(self, *args, **kw):
        super(ModelForm, self).__init__(*args, **kw)
        self.fields.keyOrder = [
            'super_user',
            'all_districts',
            'multi_district',
            'all_schools',
            'manage_users',
            'direct_login',
            'student_detail',
            'license']
    class Meta:
        model = Privilege
|
Capitalize a string
|
Does anyone know of a really simple way of capitalizing just the first letter of a string, regardless of the capitalization of the rest of the string?
For example:
asimpletest -> Asimpletest
aSimpleTest -> ASimpleTest
I would like to be able to do all string lengths as well.
|
>>> b = "my name"
>>> b.capitalize()
'My name'
>>> b.title()
'My Name'
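Note that capitalize() also lowercases the rest of the string, so for the aSimpleTest example above it returns 'Asimpletest' rather than 'ASimpleTest'. To upper-case only the first letter and leave the rest untouched, slice:
>>> s = "aSimpleTest"
>>> s[:1].upper() + s[1:]
'ASimpleTest'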
|
Extending builtin classes in python
|
How can I extend a builtin class in python?
I would like to add a method to the str class.
I've done some searching but all I'm finding is older posts, I'm hoping someone knows of something newer.
|
Just subclass the type
>>> class X(str):
...     def myMethod(self):
...         return int(self)
...
>>> s = X("Hi Mom")
>>> s.lower()
'hi mom'
>>> s.myMethod()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in myMethod
ValueError: invalid literal for int() with base 10: 'Hi Mom'
>>> z = X("271828")
>>> z.lower()
'271828'
>>> z.myMethod()
271828
|
Weighted random selection with and without replacement
|
Recently I needed to do weighted random selection of elements from a list, both with and without replacement. While there are well known and good algorithms for unweighted selection, and some for weighted selection without replacement (such as modifications of the reservoir algorithm), I couldn't find any good algorithms for weighted selection with replacement. I also wanted to avoid the reservoir method, as I was selecting a significant fraction of the list, which is small enough to hold in memory.
Does anyone have any suggestions on the best approach in this situation? I have my own solutions, but I'm hoping to find something more efficient, simpler, or both.
|
One of the fastest ways to make many with-replacement samples from an unchanging list is the alias method. The core intuition is that we can create a set of equal-sized bins for the weighted list that can be indexed very efficiently through bit operations, to avoid a binary search. It will turn out that, done correctly, we will need to store only two items from the original list per bin, and thus can represent the split with a single percentage.
Let's take the example of five equally weighted choices, (a:1, b:1, c:1, d:1, e:1).
To create the alias lookup:
1. Normalize the weights such that they sum to 1.0. (a:0.2, b:0.2, c:0.2, d:0.2, e:0.2) This is the probability of choosing each weight.
2. Find the smallest power of 2 greater than or equal to the number of variables, and create this number of partitions, |p|. Each partition represents a probability mass of 1/|p|. In this case, we create 8 partitions, each able to contain 0.125.
3. Take the variable with the least remaining weight, and place as much of its mass as possible in an empty partition. In this example, we see that a fills the first partition. (p1{a|null,1.0}, p2, p3, p4, p5, p6, p7, p8) with (a:0.075, b:0.2, c:0.2, d:0.2, e:0.2)
4. If the partition is not filled, take the variable with the most weight, and fill the partition with that variable.
5. Repeat steps 3 and 4 until none of the weight from the original partition needs to be assigned to the list.
For example, if we run another iteration of 3 and 4, we see
(p1{a|null,1.0}, p2{a|b,0.6}, p3, p4, p5, p6, p7, p8) with (a:0, b:0.15, c:0.2, d:0.2, e:0.2) left to be assigned
At runtime:
1. Get a U(0,1) random number, say binary 0.001100000.
2. Bit-shift it left by lg2(|p|) places to find the index of the partition. Thus, we shift it by 3, yielding 001.1, or position 1, and thus partition 2.
3. If the partition is split, use the decimal portion of the shifted random number to decide the split. In this case, the value is 0.5, and 0.5 < 0.6, so return a.
Here is some code and another explanation, but unfortunately it doesn't use the bitshifting technique, nor have I actually verified it.
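For reference, here is a sketch of the table construction and draw in Python. This is Vose's variant of the alias method: it picks the partition with a uniform integer draw instead of the bit-shifting described above, but the tables are built on the same principle:
import random

def build_alias_table(weights):
    """Preprocess positive weights into (prob, alias) tables in O(n)."""
    n = len(weights)
    total = float(sum(weights))
    prob = [w * n / total for w in weights]  # scaled so the average is 1.0
    alias = [0] * n
    small = [i for i, p in enumerate(prob) if p < 1.0]
    large = [i for i, p in enumerate(prob) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l              # partition s is split between s and l
        prob[l] -= 1.0 - prob[s]  # l donates the mass that tops up s
        (small if prob[l] < 1.0 else large).append(l)
    for i in small + large:       # leftovers are unsplit, full partitions
        prob[i] = 1.0
    return prob, alias

def alias_draw(prob, alias):
    """One weighted draw with replacement in O(1)."""
    i = random.randrange(len(prob))
    return i if random.random() < prob[i] else alias[i]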
|
How to add file extensions based on file type on Linux/Unix?
|
This is a question regarding Unix shell scripting (any shell), but any other "standard" scripting language solution would also be appreciated:
I have a directory full of files where the filenames are hash values like this:
fd73d0cf8ee68073dce270cf7e770b97
fec8047a9186fdcc98fdbfc0ea6075ee
These files have different original file types such as png, zip, doc, pdf etc.
Can anybody provide a script that would rename the files so they get their appropriate file extension, probably based on the output of the file command?
Answer:
J.F. Sebastian's script will work both for outputting the filenames and for the actual renaming.
|
You can use
file -i filename
to get a MIME type. You could then look up the type in a list and append an extension. You can find a list of MIME types and example file extensions on the net.
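A minimal Python sketch of that idea, using the standard mimetypes table to turn the reported MIME type into an extension (the -b flag strips the filename prefix from file's output on most Linux systems; treat the exact flags as an assumption to verify):
import mimetypes
import os
import subprocess

for name in os.listdir('.'):
    if not os.path.isfile(name):
        continue
    # `file -bi name` prints e.g. "image/png" or "text/plain; charset=us-ascii"
    out = subprocess.Popen(['file', '-bi', name],
                           stdout=subprocess.PIPE).communicate()[0]
    mime = out.strip().split(';')[0]
    ext = mimetypes.guess_extension(mime)
    if ext:
        os.rename(name, name + ext)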
|
Beginner: Trying to understand how apps interact in Django
|
I just got done working through the Django tutorials for the second time, and am understanding things much more clearly now. However, I'm still unclear how apps inside a site interact with one another.
For example, let's say I'm writing a blog application (a rather popular activity, apparently). Blog posts and comments tend to go together, and yet they are distinct enough that they should be built into separate apps, as is the general philosophy of Django development.
Consider the following example. In reality I would not actually write the comment app myself, as good code for that already exists on the web, but this is for demonstration/practice purposes:
mysite/blog/models.py
from django.db import models

class post(models.Model):
    title = models.CharField(max_length=200)
    author = models.CharField(max_length=200)
    content = models.TextField()
mysite/comments/models.py
from django.db import models
from mysite.blog.models import post

class comment(models.Model):
    id = models.AutoField()
    post = models.ForeignKey(post)
    author = models.CharField(max_length=200)
    text = models.TextField()
Is what I wrote above, importing a model from another app and setting it as a foreign key, how Django apps interact? Or is there a different/better method for the apps that comprise a site to interact?
Update
Per the recommendation in one response, I'm reading the documentation for contrib.contenttypes. If I'm reading this correctly, I could rewrite my example comment app like this:
from django.db import models
from django.contrib.contenttypes.models import ContentType
from django.contrib.contenttypes import generic

class comment(models.Model):
    id = models.AutoField()
    author = models.CharField(max_length=200)
    text = models.TextField()
    content_type = models.ForeignKey(ContentType)
    content_object = generic.GenericForeignKey('content_type', 'id')
Would this be correct?
|
Take a look at django's built-in contenttypes framework:
django.contrib.contenttypes
It allows you to develop your applications as stand-alone units. This is what the django developers used to allow django's built-in comment framework to attach a comment to any model in your project.
For instance, if you have some content object that you want to "attach" to other content objects of different types, like allowing each user to leave a "favorite" star on a blog post, image, or user profile, you can create a Favorite model with a generic relation field like so:
from django.db import models
from django.contrib.auth.models import User
from django.contrib.contenttypes.models import ContentType
from django.contrib.contenttypes import generic

class Favorite(models.Model):
    user = models.ForeignKey(User)
    content_type = models.ForeignKey(ContentType)
    object_id = models.PositiveIntegerField()
    content_object = generic.GenericForeignKey('content_type', 'object_id')
In this way you can add a Favorite star from any user to any model in your project. If you want to add API access via the recipient model class you can either add a reverse generic relation field on the recipient model (although this would be "coupling" the two models, which you said you wanted to avoid), or do the lookup through the Favorite model with the content_type and object_id of the recipient instance, see the official docs for an example.
|
How do I check if a string is a number (float) in Python?
|
What is the best possible way to check if a string can be represented as a number in Python?
The function I currently have right now is:
def is_number(s):
    try:
        float(s)
        return True
    except ValueError:
        return False
Which is not only ugly and slow, but also seems clunky. However, I haven't found a better method, because calling float in the main function is even worse.
|
In case you are looking for parsing (positive, unsigned) integers instead of floats, you can use the isdigit() function for string objects.
>>> a = "03523"
>>> a.isdigit()
True
>>> b = "963spam"
>>> b.isdigit()
False
String Methods - isdigit()
There's also something on Unicode strings, which I'm not too familiar with
Unicode - Is decimal/decimal
|
How to configure vim to not put comments at the beginning of lines while editing python files
|
When I add a # in insert mode on an empty line in Vim while editing python files, vim moves the # to the beginning of the line, but I would like the # to be inserted at the tab level where I entered it.
For example, when writing this in vim
for i in range(10):
    #
the # does not stay there where I entered it.
It is moved like so, by vim.
for i in range(10):
#
Does anyone know of a configuration item in vim that would change this?
If it helps, I am using Ubuntu 8.10.
|
I found an answer here http://vim.wikia.com/wiki/Restoring_indent_after_typing_hash
It seems that the vim smartindent option is the cause of the problem.
The referenced page above describes workarounds, but after reading the help on smartindent in vim itself (:help smartindent), I decided to try cindent instead of smartindent.
I replaced
set smartindent
with
set cindent
in my .vimrc file
and so far it is working perfectly.
This change also fixed the behavior of '<<' and '>>' for indenting visual blocks that include python comments.
There are more configuration options for and information on indentation in the vim help for smartindent and cindent (:help smartindent and :help cindent).
|
Are there statistical studies that indicates that Python is "more productive"?
|
If I do a Google search for the string "python productive", the first result is a page http://www.ferg.org/projects/python_java_side-by-side.html claiming that "Python is more productive than Java". Many Python programmers that I have talked with claim that Python is "more productive", and most of them report the arguments listed in the above-cited article.
The article could be summarized by these statements:
Python allows you to write terser code
Terser code is written in less time
Therefore, Python is more productive
But the article does not report any statistical evidence supporting the hypothesis that terser code can be developed (not just written) in less time.
Do you know of any article that reports statistical evidence that Python is more productive than something else?
Update:
I'm not interested in advocacy arguments. Please do not tell me why something should be more productive than something else. I'm interested in studies that measure whether the use of something is or is not related to productivity.
I'm interested in statistical evidence. If you claim that productivity depends on many other factors, then there should be a statistical study that proves that language choice is not statistically correlated with productivity.
|
Yes, and there are also statistical studies that prove that dogs are more productive than cats. Both are equally valid. ;-)
by popular demand, here are a couple of "studies" - take them with a block of salt!
An empirical comparison of C, C++, Java, Perl, Python, Rexx, and Tcl PDF warning!
Programming Language Productivity PDF warning! Note that these statistics are for a "string processing problem", so one might expect the winner to be...Perl of course!
and Jeff Atwood's musings are interesting as well
the issues of programmer productivity are far more complex than what language is being used. Productivity among programmers can vary wildly, and is affected by the problem domain plus many other factors. Thus no "study" can ever be "definitive". See Understanding Software Productivity and Productivity Variations Among Software Developers and Teams for additional information.
Finally, the right tool for the right job is still the rule. No exceptions.
|
How do you return multiple values in Python?
|
The canonical way to return multiple values in languages that support it is often tupling.
Option: Using a tuple
Consider this trivial example:
def f(x):
    y0 = x + 1
    y1 = x * 3
    y2 = y0 ** 2
    return (y0, y1, y2)
However, this quickly gets problematic as the number of values returned increases. What if you want to return four or five values? Sure, you could keep tupling them, but it gets easy to forget which value is where. It's also rather ugly to unpack them wherever you want to receive them.
Option: Using a dictionary
The next logical step seems to be to introduce some sort of 'record notation'. In python, the obvious way to do this is by means of a dict.
Consider the following:
def g(x):
    y0 = x + 1
    y1 = x * 3
    y2 = y0 ** 2
    return {'y0': y0, 'y1': y1, 'y2': y2}
(edit- Just to be clear, y0, y1 and y2 are just meant as abstract identifiers. As pointed out, in practice you'd use meaningful identifiers)
Now, we have a mechanism whereby we can project out a particular member of the returned object. For example,
result['y0']
Option: Using a class
However, there is another option. We could instead return a specialized structure. I've framed this in the context of Python, but I'm sure it applies to other languages as well. Indeed, if you were working in C this might very well be your only option. Here goes:
class ReturnValue(object):
    def __init__(self, y0, y1, y2):
        self.y0 = y0
        self.y1 = y1
        self.y2 = y2

def g(x):
    y0 = x + 1
    y1 = x * 3
    y2 = y0 ** 2
    return ReturnValue(y0, y1, y2)
In Python the previous two are perhaps very similar in terms of plumbing. After all, y0, y1 and y2 just end up being entries in the internal __dict__ of the ReturnValue.
There is one additional feature provided by Python though for tiny objects, the __slots__ attribute. The class could be expressed as:
class ReturnValue(object):
    __slots__ = ["y0", "y1", "y2"]
    def __init__(self, y0, y1, y2):
        self.y0 = y0
        self.y1 = y1
        self.y2 = y2
From the Python Reference Manual:
The __slots__ declaration takes a sequence of instance variables and reserves just enough space in each instance to hold a value for each variable. Space is saved because __dict__ is not created for each instance.
Option: Using a list
Another suggestion which I'd overlooked comes from Bill the Lizard:
def h(x):
    result = [x + 1]
    result.append(x * 3)
    result.append((x + 1) ** 2)
    return result
This is my least favorite method though. I suppose I'm tainted by exposure to Haskell, but the idea of mixed-type lists has always felt uncomfortable to me. In this particular example the list is -not- mixed type, but it conceivably could be. A list used in this way really doesn't gain anything with respect to the tuple as far as I can tell. The only real difference between lists and tuples in Python is that lists are mutable, whereas tuples are not. I personally tend to carry over the conventions from functional programming: use lists for any number of elements of the same type, and tuples for a fixed number of elements of predetermined types.
Question
After the lengthy preamble, comes the inevitable question. Which method (do you think) is best?
I've typically found myself going the dictionary route because it involves less set-up work. From a types perspective however, you might be better off going the class route, since that may help you avoid confusing what a dictionary represents. On the other hand, there are some in the Python community that feel implied interfaces should be preferred to explicit interfaces, at which point the type of the object really isn't relevant, since you're basically relying on the convention that the same attribute will always have the same meaning.
So, how do -you- return multiple values in Python?
|
Named tuples were added in 2.6 for this purpose. Also see os.stat for a similar builtin example.
>>> import collections
>>> point = collections.namedtuple('Point', ['x', 'y'])
>>> p = point(1, y=2)
>>> p.x, p.y
1 2
>>> p[0], p[1]
1 2
|
A get() like method for checking for Python attributes
|
If I had a dictionary dict and I wanted to check for dict['key'] I could either do so in a try block (bleh!) or use the get() method, with False as a default value.
I'd like to do the same thing for object.attribute. That is, I already have object to return False if it hasn't been set, but then that gives me errors like
AttributeError: 'bool' object has no attribute 'attribute'
|
A more direct analogue to dict.get(key, default) than hasattr is getattr.
val = getattr(obj, 'attr_to_check', default_value)
(default_value is optional; if it is omitted and the attribute is missing, getattr raises AttributeError.)
For your example, you would pass False.
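For instance (Settings and verbose are only illustrative names):
class Settings(object):
    pass

s = Settings()
print getattr(s, 'verbose', False)  # -> False; never set, so the default is returned
s.verbose = True
print getattr(s, 'verbose', False)  # -> True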
|
How to compute the nth root of a very big integer
|
I need a way to compute the nth root of a long integer in Python.
I tried pow(m, 1.0/n), but it doesn't work:
OverflowError: long int too large to convert to float
Any ideas?
By long integer I mean REALLY long integers like:
11968003966030964356885611480383408833172346450467339251
196093144141045683463085291115677488411620264826942334897996389
485046262847265769280883237649461122479734279424416861834396522
819159219215308460065265520143082728303864638821979329804885526
557893649662037092457130509980883789368448042961108430809620626
059287437887495827369474189818588006905358793385574832590121472
680866521970802708379837148646191567765584039175249171110593159
305029014037881475265618958103073425958633163441030267478942720
703134493880117805010891574606323700178176718412858948243785754
898788359757528163558061136758276299059029113119763557411729353
915848889261125855717014320045292143759177464380434854573300054
940683350937992500211758727939459249163046465047204851616590276
724564411037216844005877918224201569391107769029955591465502737
961776799311859881060956465198859727495735498887960494256488224
613682478900505821893815926193600121890632
|
If it's a REALLY big number, you can use a binary search.
def find_invpow(x,n):
"""Finds the integer component of the n'th root of x,
an integer such that y ** n <= x < (y + 1) ** n.
"""
high = 1
while high ** n <= x:
high *= 2
low = high // 2
while low < high:
mid = (low + high) // 2
if low < mid and mid**n < x:
low = mid
elif high > mid and mid**n > x:
high = mid
else:
return mid
return mid + 1
For example:
>>> x = 237734537465873465
>>> n = 5
>>> y = find_invpow(x,n)
>>> y
2986
>>> y**n <= x <= (y+1)**n
True
>>>
>>> x = 119680039660309643568856114803834088331723464504673392511960931441>
>>> n = 45
>>> y = find_invpow(x,n)
>>> y
227661383982863143360L
>>> y**n <= x < (y+1)**n
True
>>> find_invpow(y**n,n) == y
True
>>>
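An alternative sketch, assuming Python 2.7+ for int.bit_length, is integer Newton iteration; it finds the same floor value, usually in fewer steps:
def iroot(x, n):
    # integer n'th root: the largest y with y ** n <= x
    if x == 0:
        return 0
    # start from a power of two guaranteed to be >= the true root
    guess = 1 << ((x.bit_length() + n - 1) // n)
    while True:
        better = ((n - 1) * guess + x // guess ** (n - 1)) // n
        if better >= guess:
            return guess
        guess = better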
|
How to handle constructors or methods with a different set (or type) of arguments in Python?
|
Is there a way in Python, to have more than one constructor or more than one method with the same name, who differ in the number of arguments they accept or the type(s) of one or more argument(s)?
If not, what would be the best way to handle such situations?
For an example I made up a color class. This class should only work as a basic example to discuss this; there is lots of unnecessary and/or redundant stuff in there.
It would be nice, if I could call the constructor with different objects (a list, an other color object or three integers...) and the constructor handles them accordingly. In this basic example it works in some cases with *args and **kwargs, but using class methods is the only general way I came up with. What would be a "best practice" like solution for this?
The constructor aside, if I would like to implement an __add__ method too, how can I get this method to accept all of this: a plain integer (which is added to all values), three integers (where the first is added to the red value and so forth) or another color object (where both red values are added together, etc.)?
EDIT
I added an alternative constructor (initializer, __init__) that basically does all the stuff I wanted.
But I stick with the first one and the factory methods. Seems clearer.
I also added an __add__, which does all the things mentioned above, but I'm not sure if it's good style. I try to use the iteration protocol and fall back to "single value mode" instead of checking for specific types. Maybe still ugly tho.
I have taken a look at __new__, thanks for the links.
My first quick try with it fails: I filter the rgb values from the *args and **kwargs (is it a class, a list, etc.), then call the superclass's __new__ with the right args (just r, g, b) to pass it along to __init__. The call to super(cls, self).__new__(....) works, but since I generate and return the same object as the one I call from (as intended), all the original args get passed to __init__ (working as intended), so it bails.
I could get rid of the __init__ completely and set the values in the __new__, but I don't know... feels like I'm abusing stuff here ;-) I should take a good look at metaclasses and __new__ first, I guess.
Source:
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
class Color (object):
# It's strict on what to accept, but I kinda like it that way.
def __init__(self, r=0, g=0, b=0):
self.r = r
self.g = g
self.b = b
# Maybe this would be a better __init__?
# The first may be more clear but this could handle way more cases...
# I like the first more though. What do you think?
#
#def __init__(self, obj):
# self.r, self.g, self.b = list(obj)[:3]
# This methods allows to use lists longer than 3 items (eg. rgba), where
# 'Color(*alist)' would bail
@classmethod
def from_List(cls, alist):
r, g, b = alist[:3]
return cls(r, g, b)
# So we could use dicts with more keys than rgb keys, where
# 'Color(**adict)' would bail
@classmethod
def from_Dict(cls, adict):
return cls(adict['r'], adict['g'], adict['b'])
# This should theoretically work with every object that's iterable.
# Maybe that's more intuitive duck typing than to rely on an object
# to have an as_List() method or similar.
@classmethod
def from_Object(cls, obj):
return cls.from_List(list(obj))
def __str__(self):
return "<Color(%s, %s, %s)>" % (self.r, self.g, self.b)
def _set_rgb(self, r, g, b):
self.r = r
self.g = g
self.b = b
def _get_rgb(self):
return (self.r, self.g, self.b)
rgb = property(_get_rgb, _set_rgb)
def as_List(self):
return [self.r, self.g, self.b]
def __iter__(self):
return (c for c in (self.r, self.g, self.b))
# We could add a single value (to all colorvalues) or a list of three
# (or more) values (from any object supporting the iterator protocol)
# one for each colorvalue
def __add__(self, obj):
r, g, b = self.r, self.g, self.b
try:
ra, ga, ba = list(obj)[:3]
except TypeError:
ra = ga = ba = obj
r += ra
g += ga
b += ba
return Color(*Color.check_rgb(r, g, b))
@staticmethod
def check_rgb(*vals):
ret = []
for c in vals:
c = int(c)
c = min(c, 255)
c = max(c, 0)
ret.append(c)
return ret
class ColorAlpha(Color):
def __init__(self, r=0, g=0, b=0, alpha=255):
Color.__init__(self, r, g, b)
self.alpha = alpha
def __str__(self):
return "<Color(%s, %s, %s, %s)>" % (self.r, self.g, self.b, self.alpha)
# ...
if __name__ == '__main__':
l = (220, 0, 70)
la = (57, 58, 61, 255)
d = {'r': 220, 'g': 0, 'b':70}
da = {'r': 57, 'g': 58, 'b':61, 'a':255}
c = Color(); print c # <Color(0, 0, 0)>
ca = ColorAlpha(*la); print ca # <Color(57, 58, 61, 255)>
print '---'
c = Color(220, 0, 70); print c # <Color(220, 0, 70)>
c = Color(*l); print c # <Color(220, 0, 70)>
#c = Color(*la); print c # -> Fail
c = Color(**d); print c # <Color(220, 0, 70)>
#c = Color(**da); print c # -> Fail
print '---'
c = Color.from_Object(c); print c # <Color(220, 0, 70)>
c = Color.from_Object(ca); print c # <Color(57, 58, 61, 255)>
c = Color.from_List(l); print c # <Color(220, 0, 70)>
c = Color.from_List(la); print c # <Color(57, 58, 61, 255)>
c = Color.from_Dict(d); print c # <Color(220, 0, 70)>
c = Color.from_Dict(da); print c # <Color(57, 58, 61, 255)>
print '---'
print 'Check =', Color.check_rgb('1', 0x29a, -23, 40)
# Check = [1, 255, 0, 40]
print '%s + %s = %s' % (c, 10, c + 10)
# <Color(57, 58, 61)> + 10 = <Color(67, 68, 71)>
print '%s + %s = %s' % (c, ca, c + ca)
# <Color(57, 58, 61)> + <Color(57, 58, 61, 255)> = <Color(114, 116, 122)>
|
You can keep the factory methods, that is fine. But why not just call the constructor as it is?
Color(r, g, b)
Color(*[r, g, b])
Color(**{'r': r, 'g': g, 'b': b})
This is the python way. As for the from object constructor, I would prefer something like:
Color(*color2.as_List())
Explicit is better than implicit - Python Zen
|
What is the recommended way to use Vim folding for Python code
|
I am interested in enabling code folding in Vim for Python code. I have noticed multiple ways to do so.
Does anyone have a preferred way to do Python code folding in Vim? I.e,
Do you have a particular Vim plugin that you use and like?
Do you use manual folding or do you place markers in comments?
Any other recommended ways to do code folding for Python in Vim?
|
Personally I can't convince myself to litter my code with the markers. I've become pretty used to (and efficient at) using indent folding (:set foldmethod=indent). Together with my mapping of the space bar (see below) to open/close folds and the zR and zM commands, I'm right at home. Perfect for Python!
nnoremap <space> za
vnoremap <space> zf
|
Does Python have something like anonymous inner classes of Java?
|
In Java you can define a new class inline using anonymous inner classes. This is useful when you need to rewrite only a single method of the class.
Suppose that you want create a subclass of OptionParser that overrides only a single method (for example exit()). In Java you can write something like this:
new OptionParser () {
public void exit() {
// body of the method
}
};
This piece of code creates a anonymous class that extends OptionParser and override only the exit() method.
There is a similar idiom in Python? Which idiom is used in these circumstances?
|
You can use the type(name, bases, dict) builtin function to create classes on the fly. For example:
op = type("MyOptionParser", (OptionParser,object), {"foo": lambda self: "foo" })
op().foo()
Since OptionParser isn't a new-style class, you have to explicitly include object in the list of base classes.
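If the lambda feels too limiting, the closest structural equivalent to Java's anonymous class is simply defining a throwaway subclass inside the function that needs it; a sketch (the exit signature below matches optparse's):
from optparse import OptionParser

def make_parser():
    class NonExitingOptionParser(OptionParser, object):
        def exit(self, status=0, msg=None):
            # override: raise instead of terminating the process
            raise ValueError(msg)
    return NonExitingOptionParser()
The class object is created anew on every call, much like Java's anonymous class instance, but it has a name and ordinary syntax.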
|
log4j with timestamp per log entry
|
this is my log output
INFO main digestemails - process inbox
INFO main digestemails - checking for emails in c:\development\DCMail\email\KN-Source
INFO main digestemails - digesting 003d01c95a7b_3446880_0202fea9@xxxx.com.eml
INFO main digestemails - extracting attachments
INFO main digestemails - no attachments or no attachments supported
INFO main digestemails - updating database
INFO main digestemails - email -> COMPLETED folder
INFO main digestemails -
I would like a time stamp per log message ie
INFO 2008-12-25 13:14:00 digestemails - email -> COMPLETED folder
here is my log4j config file
log4j.rootLogger=debug, stdout, R
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
# Pattern to output the caller's file name and line number.
log4j.appender.stdout.layout.ConversionPattern=%5p [%t] (%F:%L) - %m%n
log4j.appender.R=org.apache.log4j.RollingFileAppender
log4j.appender.R.File=c:\\development\\DCMail\\logs\\digestlogfolder\\digest-logfile.log
log4j.appender.R.MaxFileSize=100KB
# Keep one backup file
log4j.appender.R.MaxBackupIndex=1
log4j.appender.R.layout=org.apache.log4j.PatternLayout
log4j.appender.R.layout.ConversionPattern=%p %t %c - %m%n
How do I do it?
|
Use %d in your PatternLayout.
Also %d can take a format pattern as in %d{dd MMM yyyy HH:mm:ss,SSS} you can pick and choose the elements that you want. When the format pattern is omitted the date will be in ISO8601 format.
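For the configuration in the question, changing the file appender's pattern along these lines should produce exactly the output you asked for (the date format string is just one possibility):
log4j.appender.R.layout.ConversionPattern=%p %d{yyyy-MM-dd HH:mm:ss} %t %c - %m%n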
|
Execute Commands Sequentially in Python
|
I would like to execute multiple commands in a row:
ie (just to illustrate my need):
cmd (the shell)
then
cd dir
and
ls
and read the result of the ls.
Any idea with subprocess module ?
UPDATE:
cd dir and ls are just an example. I need to run complex commands (following a particular order, without any pipelining). In fact, i would like one subprocess shell and the ability to launch many commands on it.
|
To do that, you would have to:
supply the shell=True argument in the subprocess.Popen call, and
separate the commands with:
; if running under a *nix shell (bash, ash, sh, ksh, csh, tcsh, zsh etc)
& if running under the cmd.exe of Windows
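A minimal sketch combining both points (the commands and path are only examples):
import subprocess

p = subprocess.Popen("cd /tmp; ls", shell=True, stdout=subprocess.PIPE)
output, _ = p.communicate()
print output
If you really want one long-lived shell, pass stdin=subprocess.PIPE as well and write commands to p.stdin one at a time, reading results back from p.stdout.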
|
Switching from python-mode.el to python.el
|
I recently tried switching from using python-mode.el to python.el for editing python files in emacs, found the experience a little alien and unproductive, and scurried back. I've been using python-mode.el for something like ten years, so perhaps I'm a little set in my ways. I'd be interested in hearing from anyone who's carefully evaluated the two modes, in particular of the pros and cons they perceive of each and how their work generally interacts with the features specific to python.el.
The two major issues for me with python.el were
Each buffer visiting a python file gets its own inferior interactive python shell. I am used to doing development in one interactive shell and sharing data between python files. (Might seem like bad practice from a software-engineering perspective, but I'm usually working with huge datasets which take a while to load into memory.)
The skeleton-mode support in python.el, which seemed absolutely gratuitous (python's syntax makes such automation unnecessary) and badly designed (for instance, it has no knowledge of "for" loop generator expressions or "<expr 1> if <cond> else <expr 2>" expressions, so you have to go back and remove the colons it helpfully inserts after insisting that you enter the expression clauses in the minibuffer.) I couldn't figure out how to turn it off. There was a python.el variable which claimed to control this, but it didn't seem to work. It could be that the version of python.el I was using was broken (it came from the debian emacs-snapshot package) so if anyone knows of an up-to-date version of it, I'd like to hear about it. (I had the same problem with the version in CVS emacs as of approximately two weeks ago.)
|
For what it's worth, I do not see the behavior you are seeing in issue #1, "Each buffer visiting a python file gets its own inferior interactive python shell."
This is what I did using python.el from Emacs 22.2.
C-x C-f foo.py
[insert: print "foo"]
C-x C-f bar.py
[insert: print "bar"]
C-c C-z [*Python* buffer appears]
C-x o
C-c C-l RET ["bar" is printed in *Python*]
C-x b foo.py RET
C-c C-l RET ["foo" is printed in the same *Python* buffer]
Therefore the two files are sharing the same inferior python shell. Perhaps there is some unforeseen interaction between your personal customizations of python-mode and the default behaviors of python.el. Have you tried using python.el without your .emacs customizations and checking if it behaves the same way?
The major feature addition of python.el over python-mode is the symbol completion function python-complete-symbol. You can add something like this
(define-key inferior-python-mode-map "\C-c\t" 'python-complete-symbol)
Then typing
>>> import os
>>> os.f[C-c TAB]
you'll get a *Completions* buffer containing
Click <mouse-2> on a completion to select it.
In this buffer, type RET to select the completion near point.
Possible completions are:
os.fchdir os.fdatasync
os.fdopen os.fork
os.forkpty os.fpathconf
os.fstat os.fstatvfs
os.fsync os.ftruncate
It'll work in .py file buffers too.
|
django is very slow on my machine
|
I have a fresh install of django 1.0 and a simple page served from it takes 5 secs to load. On my colleague's computer it takes almost no time.
I start the server using
python manage.py testserver
I can see each GET request (PNGs and style sheets) take about half a second.
Another weird thing, which I think is related, is that the functional tests for the app run much slower on my machine with MySQL (on the order of 100 times slower than on my colleague's machine). When I set the app to use sqlite, they run pretty quickly. To be clear, sqlite doesn't much change the time it takes to load a page, but it does speed up server startup.
It looks like IO problem, but I don't see general performance problems on my machine, apart from django at least.
Django runs on python2.4, I'm running Vista. I have also checked python2.5.
Thanks ΤΖΩΤΖΙΟΥ, it must totally be a DNS problem, because the page loads up quickly as soon as I go to http://127.0.0.1:8000/app instead of http://localhost:8000/app.
But what could it be caused by? My host file has only two entries:
127.0.0.1 localhost
::1 localhost
|
Firefox has a problem browsing to localhost on some Windows machines. You can solve it by switching off ipv6, which isn't really recommended. Using 127.0.0.1 directly is another way round the problem.
|
Python idiom to return first item or None
|
I'm sure there's a simpler way of doing this that's just not occurring to me.
I'm calling a bunch of methods that return a list. The list may be empty. If the list is non-empty, I want to return the first item; otherwise, I want to return None. This code works:
my_list = get_list()
if len(my_list) > 0: return my_list[0]
return None
It seems to me that there should be a simple one-line idiom for doing this, but for the life of me I can't think of it. Is there?
Edit:
The reason that I'm looking for a one-line expression here is not that I like incredibly terse code, but because I'm having to write a lot of code like this:
x = get_first_list()
if x:
# do something with x[0]
# inevitably forget the [0] part, and have a bug to fix
y = get_second_list()
if y:
# do something with y[0]
# inevitably forget the [0] part AGAIN, and have another bug to fix
What I'd like to be doing can certainly be accomplished with a function (and probably will be):
def first_item(list_or_none):
if list_or_none: return list_or_none[0]
x = first_item(get_first_list())
if x:
# do something with x
y = first_item(get_second_list())
if y:
# do something with y
I posted the question because I'm frequently surprised by what simple expressions in Python can do, and I thought that writing a function was a silly thing to do if there was a simple expression could do the trick. But seeing these answers, it seems like a function is the simple solution.
|
The best way is this:
a = get_list()
return a[0] if a else None
You could also do it in one line, but it's much harder for the programmer to read:
return (get_list()[:1] or [None])[0]
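If you're on Python 2.6 or later, the built-in next() with a default argument is arguably more readable than either:
return next(iter(get_list()), None)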
|
In Python, how do I iterate over a dictionary in sorted order?
|
There's an existing function that ends in:
return dict.iteritems()
that returns an unsorted iterator for a given dictionary. I would like to return an iterator that goes through the items in sorted order. How do I do that?
|
Haven't tested this very extensively, but works in Python 2.5.2.
>>> d = {"x":2, "h":15, "a":2222}
>>> it = iter(sorted(d.iteritems()))
>>> it.next()
('a', 2222)
>>> it.next()
('h', 15)
>>> it.next()
('x', 2)
>>>
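If you want the items ordered by value rather than by key, pass a key function to sorted(); for example, with the dictionary above:
>>> import operator
>>> it = iter(sorted(d.iteritems(), key=operator.itemgetter(1)))
>>> it.next()
('x', 2)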
|
How to get item's position in a list?
|
I am iterating over a list and I want to print out the index of the item if it meets a certain condition. How would I do this?
Example:
testlist = [1,2,3,5,3,1,2,1,6]
for item in testlist:
if item == 1:
print position
|
Hmmm. There was an answer with a list comprehension here, but it's disappeared.
Here:
[i for i,x in enumerate(testlist) if x == 1]
Example:
>>> testlist
[1, 2, 3, 5, 3, 1, 2, 1, 6]
>>> [i for i,x in enumerate(testlist) if x == 1]
[0, 5, 7]
Update:
Okay, you want a generator expression, we'll have a generator expression. Here's the list comprehension again, in a for loop:
>>> for i in [i for i,x in enumerate(testlist) if x == 1]:
... print i
...
0
5
7
Now we'll construct a generator...
>>> (i for i,x in enumerate(testlist) if x == 1)
<generator object at 0x6b508>
>>> for i in (i for i,x in enumerate(testlist) if x == 1):
... print i
...
0
5
7
and niftily enough, we can assign that to a variable, and use it from there...
>>> gen = (i for i,x in enumerate(testlist) if x == 1)
>>> for i in gen: print i
...
0
5
7
And to think I used to write FORTRAN.
|
generator comprehension
|
What does generator comprehension do? How does it work? I couldn't find a tutorial about it.
|
Do you understand list comprehensions? If so, a generator expression is like a list comprehension, but instead of finding all the items you're interested in and packing them into a list, it waits, and yields each item out of the expression, one by one.
>>> my_list = [1, 3, 5, 9, 2, 6]
>>> filtered_list = [item for item in my_list if item > 3]
>>> print filtered_list
[5, 9, 6]
>>> len(filtered_list)
3
>>> # compare to generator expression
...
>>> filtered_gen = (item for item in my_list if item > 3)
>>> print filtered_gen # notice it's a generator object
<generator object at 0xb7d5e02c>
>>> len(filtered_gen) # So technically, it has no length
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: object of type 'generator' has no len()
>>> # We extract each item out individually. We'll do it manually first.
...
>>> filtered_gen.next()
5
>>> filtered_gen.next()
9
>>> filtered_gen.next()
6
>>> filtered_gen.next() # Should be all out of items and give an error
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
>>> # Yup, the generator is spent. No values for you!
...
>>> # Let's prove it gives the same results as our list comprehension
...
>>> filtered_gen = (item for item in my_list if item > 3)
>>> gen_to_list = list(filtered_gen)
>>> print gen_to_list
[5, 9, 6]
>>> filtered_list == gen_to_list
True
>>>
Because a generator expression only has to yield one item at a time, it can lead to big savings in memory usage. Generator expressions make the most sense in scenarios where you need to take one item at a time, do a lot of calculations based on that item, and then move on to the next item. If you need more than one value, you can also use a generator expression and grab a few at a time. If you need all the values before your program proceeds, use a list comprehension instead.
|
Can you change a field label in the Django Admin application?
|
As the title suggests. I want to be able to change the label of a single field in the admin application. I'm aware of the Form.field attribute, but how do I get my Model or ModelAdmin to pass along that information?
|
The verbose name of the field is the (optional) first positional parameter at field construction.
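For example (model and field names here are only illustrative):
class Person(models.Model):
    # "Full name" is the label the admin will display for this field
    full_name = models.CharField("Full name", max_length=100)
For relation fields such as ForeignKey, pass verbose_name as a keyword argument instead.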
|
What is the pythonic way to avoid default parameters that are empty lists?
|
Sometimes it seems natural to have a default parameter which is an empty list. Yet Python gives unexpected behavior in these situations.
If for example, I have a function:
def myFunc(working_list = []):
working_list.append("a")
print working_list
The first time it is called with the default will work, but calls after that will use a constantly updating list.
So, what is the pythonic way to get the behavior I desire (a fresh list on each call)?
|
def myFunc(working_list=None):
if working_list is None:
working_list = []
working_list.append("a")
print working_list
is how I do it.
|
How to limit execution time of a function call in Python
|
There is a socket-related function call in my code; that function is from another module and thus out of my control. The problem is that it occasionally blocks for hours, which is totally unacceptable. How can I limit the function execution time from my code? I guess the solution must utilize another thread.
|
An improvement on @rik.the.vik's answer would be to use the with statement to give the timeout function some syntactic sugar:
from __future__ import with_statement # Required in 2.5
import signal
from contextlib import contextmanager
class TimeoutException(Exception): pass
@contextmanager
def time_limit(seconds):
def signal_handler(signum, frame):
raise TimeoutException, "Timed out!"
signal.signal(signal.SIGALRM, signal_handler)
signal.alarm(seconds)
try:
yield
finally:
signal.alarm(0)
try:
with time_limit(10):
long_function_call()
except TimeoutException, msg:
print "Timed out!"
|
How to organize python test in a way that I can run all tests in a single command?
|
Currently my code is organized in the following tree structure:
src/
module1.py
module2.py
test_module1.py
test_module2.py
subpackage1/
__init__.py
moduleA.py
moduleB.py
test_moduleA.py
test_moduleB.py
Where the module*.py files contains the source code and the test_module*.py contains the TestCases for the relevant module.
With the following commands I can run the tests contained in a single file, for example:
$ cd src
$ nosetests test_filesystem.py
..................
----------------------------------------------------------------------
Ran 18 tests in 0.390s
OK
How can I run all tests? I tried with nosetests -m 'test_.*' but it doesn't work.
$cd src
$ nosetests -m 'test_.*'
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
Thanks
|
Whether you separate or mix tests and modules is probably a matter of taste, although I would strongly advocate for keeping them apart (setup reasons, code stats etc).
When you're using nosetests, make sure that all directories with tests are real packages:
src/
module1.py
module2.py
subpackage1/
__init__.py
moduleA.py
moduleB.py
tests/
__init__.py
test_module1.py
test_module2.py
subpackage1/
__init__.py
test_moduleA.py
test_moduleB.py
This way, you can just run nosetests in the toplevel directory and all tests will be found. You need to make sure that src/ is on the PYTHONPATH, however, otherwise all the tests will fail due to missing imports.
|
No print output from child multiprocessing.Process unless the program crashes
|
I am having trouble with the Python multiprocessing module. I am using the Process class to spawn a new process in order to utilize my second core. This second process loads a bunch of data into RAM and then waits patiently instead of consuming.
I wanted to see what that process printed with the print command, however, I do not see anything that it prints. I only see what the parent process prints. Now this makes sense to me since they live in two different processes. The second process doesn't spawn its own shell/standard output window, nor is its output sent to the parent. Yet when this process crashes, it prints everything that my script told it to print, plus the stack trace and error.
I am wondering if there is a simple way to send the child process's print output to the first process, or have it spawn a shell/standard output so that I may debug it. I know I could create a multiprocessing.Queue dedicated to transmitting prints to the parent so that it may print these to standard output, but I do not feel like doing this if a simpler solution exists.
|
Have you tried flushing stdout?
import sys
print "foo"
sys.stdout.flush()
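For instance, a sketch of a child that makes its output visible before settling into its wait (the worker body is just illustrative):
import sys
from multiprocessing import Process

def worker():
    print "child process started"
    sys.stdout.flush()  # push the buffered line out now rather than at exit

if __name__ == '__main__':
    p = Process(target=worker)
    p.start()
    p.join()
This also explains what you're seeing: when the child crashes, its buffered stdout gets flushed during teardown, so everything appears at once.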
|
Is there a Python equivalent to `perl -pi -e`?
|
I know of python -c '<code>', but I'm wondering if there's a more elegant python equivalent to perl -pi -e '<code>'. I still use it quite a bit for things like find and replace in a whole directory (perl -pi -e s/foo/bar/g * or even find . | xargs perl -pi -e s/foo/bar/g for sub-directories).
I actually feel that that which makes Perl Perl (free form Tim Toady-ness) is what makes perl -pi -e work so well, while with Python you'd have to do something along the lines of importing the re module, creating an re instance and then capture stdin, but maybe there's a Python shortcut that does all that and I missed it (sorely missed it)...
|
Python is for muggles. If magic you want, Perl you should use!
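More seriously, the closest standard-library analogue is the fileinput module; a rough sketch of perl -pi -e 's/foo/bar/g' FILES:
# replace.py -- run as: python replace.py file1.txt file2.txt ...
import sys
import fileinput

for line in fileinput.input(inplace=True):
    sys.stdout.write(line.replace('foo', 'bar'))
With inplace=True, whatever you write to standard output replaces the current file's contents, much like Perl's -i flag.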
|
Splitting a string into words and punctuation
|
I'm trying to split a string up into words and punctuation, adding the punctuation to the list produced by the split.
For instance:
>>> c = "help, me"
>>> print c.split()
['help,', 'me']
What I really want the list to look like is:
['help', ',', 'me']
So, I want the string split at whitespace with the punctuation split from the words.
I've tried to parse the string first and then run the split:
>>> separatedPunctuation = ""
>>> for character in c:
... if character in ".,;!?":
... outputCharacter = " %s" % character
... else:
... outputCharacter = character
... separatedPunctuation += outputCharacter
>>> print separatedPunctuation
help , me
>>> print separatedPunctuation.split()
['help', ',', 'me']
This produces the result I want, but is painfully slow on large files.
Is there a way to do this more efficiently?
|
This is more or less the way to do it:
>>> import re
>>> re.findall(r"[\w']+|[.,!?;]", "Hello, I'm a string!")
['Hello', ',', "I'm", 'a', 'string', '!']
The trick is, not to think about where to split the string, but what to include in the tokens.
Caveats:
The underscore (_) is considered an inner-word character. Replace \w, if you don't want that.
This will not work with (single) quotes in the string.
Put any additional punctuation marks you want to use in the right half of the regular expression.
Anything not explicitly mentioned in the re is silently dropped.
|
Setting up a foreign key to an abstract base class with Django
|
I've factored out common attributes from two classes into an abstract base class, however I have another model that needs to reference either one of those classes. It's not possible to reference an ABC as it doesn't actually have a database table.
The following example should illustrate my problem:
class Answer(models.Model):
ovramt = models.ForeignKey("Ovramt")
question = models.ForeignKey("Question")
answer = models.CharField(max_length=3, choices=(("yes","yes"), ("no","no"), ("NA","N/A")))
likelihood = models.IntegerField(choices=LIKELY_CHOICES)
consequence = models.IntegerField(choices=CONSEQUENCE_CHOICES)
class Meta:
abstract = True
class Answer_A(Answer):
resident = models.ForeignKey("Resident")
def __unicode__(self):
return u"%s - %s - %s" %(self.ovramt.ssa.name, self.resident, self.question)
class Answer_B(Answer):
def __unicode__(self):
return u"%s - %s" %(self.ovramt.ssa.name, self.question)
class Answer_Risk(models.Model):
answer = models.ForeignKey("Answer")
risk = models.CharField(max_length=200)
def __unicode__(self):
return self.risk
Answer_A and Answer_B are slightly different in that Answer_A also needs a FK relationship to another table. Answer_B may also require some specific attributes later. The problem would STILL exist if I had Answer_B be the superclass - and have Answer_A subclass or compose it.
A 'Risk' is the same whether it's Answer_A or Answer_B. I also have other models that need to reference an 'Answer' regardless of its sub-type. How can this be done? How can you reference a type regardless of its sub-type?
Update:
I was trying to avoid a join operation but I don't think I'm going to be able to. Would it be worth having the reference to 'Resident' in all 'Answer's and just nulling it where required? Or is that considered very bad practice?
|
A generic relation seems to be the solution. But it will complicate things even further.
It seems to me; your model structure is already more complex than necessary. I would simply merge all three Answer models into one. This way:
Answer_Risk would work without modification.
You can set resident to None (NULL) in case of an Answer_B.
You can return different string representations depending on whether resident is None (in other words, the same functionality).
One more thing; are your answers likely to have more than one risk? If they'll have none or one risk you should consider following alternative implementations:
Using a one-to-one relationship
Demoting risk as a field (or any number of fields) inside Answer class.
My main concern is neither database structure nor performance (although these changes should improve performance) but code maintainability.
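For reference, a sketch of the merged model (fields taken from the question; making resident nullable is the key change):
class Answer(models.Model):
    ovramt = models.ForeignKey("Ovramt")
    question = models.ForeignKey("Question")
    answer = models.CharField(max_length=3, choices=(("yes","yes"), ("no","no"), ("NA","N/A")))
    likelihood = models.IntegerField(choices=LIKELY_CHOICES)
    consequence = models.IntegerField(choices=CONSEQUENCE_CHOICES)
    # NULL here plays the role the missing FK played in Answer_B
    resident = models.ForeignKey("Resident", null=True, blank=True)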
|
How much input validation should I be doing on my python functions/methods?
|
I'm interested in how much up front validation people do in the Python they write.
Here are a few examples of simple functions:
def factorial(num):
"""Computes the factorial of num."""
def isPalindrome(inputStr):
"""Tests to see if inputStr is the same backwards and forwards."""
def sum(nums):
"""Same as the built-in sum()... computes the sum of all the numbers passed in."""
How thoroughly do you check the input values before beginning computation, and how do you do your checking? Do you throw some kind of proprietary exception if input is faulty (BadInputException defined in the same module, for example)? Do you just start your calculation and figure it will throw an exception at some point if bad data was passed in ("asd" to factorial, for example)?
When the passed in value is supposed to be a container do you check not only the container but all the values inside it?
What about situations like factorial, where what's passed in might be convertible to an int (e.g. a float) but you might lose precision when doing so?
|
I assert what's absolutely essential.
Important: What's absolutely essential. Some people over-test things.
def factorial(num):
assert int(num)
assert num > 0
Isn't completely correct. long is also a legal possibility.
def factorial(num):
assert type(num) in ( int, long )
assert num > 0
Is better, but still not perfect. Many Python types (like rational numbers, or number-like objects) can also work in a good factorial function. It's hard to assert that an object has basic integer-like properties without being too specific and eliminating future unthought-of classes from consideration.
I never define unique exceptions for individual functions. I define a unique exception for a significant module or package. Usually, however, just an Error class or something similar. That way the application says except somelibrary.Error,e: which is about all you need to know. Fine-grained exceptions get fussy and silly.
I've never done this, but I can see places where it might be necessary.
assert all( type(i) in (int,long) for i in someList )
Generally, however, the ordinary Python built-in type checks work fine. They find almost all of the exceptional situations that matter almost all the time. When something isn't the right type, Python raises a TypeError that always points at the right line of code.
BTW. I only add asserts at design time if I'm absolutely certain the function will be abused. I sometimes add assertions later when I have a unit test that fails in an obscure way.
|
How do I build a numpy array from a generator?
|
How can I build a numpy array out of a generator object?
Let me illustrate the problem:
>>> import numpy
>>> def gimme():
... for x in xrange(10):
... yield x
...
>>> gimme()
<generator object at 0x28a1758>
>>> list(gimme())
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> numpy.array(xrange(10))
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> numpy.array(gimme())
array(<generator object at 0x28a1758>, dtype=object)
>>> numpy.array(list(gimme()))
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
In this instance, gimme() is the generator whose output I'd like to turn into an array. However, the array constructor does not iterate over the generator, it simply stores the generator itself. The behaviour I desire is that from numpy.array(list(gimme())), but I don't want to pay the memory overhead of having the intermediate list and the final array in memory at the same time. Is there a more space-efficient way?
|
One Google search past this Stack Overflow result, I found that there is numpy.fromiter(data, dtype, count). The default count=-1 takes all elements from the iterable. The dtype must be set explicitly. In my case, this worked:
numpy.fromiter(something.generate(from_this_input), float)
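Applied to the generator from the question, with an explicit count so numpy can preallocate the whole array up front (count is optional):
>>> numpy.fromiter(gimme(), dtype=int, count=10)
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])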
|
Generating random text strings of a given pattern
|
I need to generate random text strings of a particular format. Would like some ideas so that I can code it up in Python. The format is <8 digit number><15 character string>.
|
#!/usr/bin/python
import random
import string
digits = "".join( [random.choice(string.digits) for i in xrange(8)] )
chars = "".join( [random.choice(string.letters) for i in xrange(15)] )
print digits + chars
EDIT: liked the idea of using random.choice better than randint() so I've updated the code to reflect that.
Note: this assumes lowercase and uppercase characters are desired. If lowercase only then change the second list comprehension to read:
chars = "".join( [random.choice(string.letters[:26]) for i in xrange(15)] )
Obviously for uppercase only you can just flip that around so the slice is [26:] instead of the other way around.
|
Questions about Setuptools and alternatives
|
I've seen a good bit of setuptools bashing on the internets lately. Most recently, I read James Bennett's On packaging post on why no one should be using setuptools. From my time in #python on Freenode, I know that there are a few souls there who absolutely detest it. I would count myself among them, but I do actually use it.
I've used setuptools for enough projects to be aware of its deficiencies, and I would prefer something better. I don't particularly like the egg format and how it's deployed. With all of setuptools' problems, I haven't found a better alternative.
My understanding of tools like pip is that it's meant to be an easy_install replacement (not setuptools). In fact, pip uses some setuptools components, right?
Most of my packages make use of a setuptools-aware setup.py, which declares all of the dependencies. When they're ready, I'll build an sdist, bdist, and bdist_egg, and upload them to pypi.
If I wanted to switch to using pip, what kind of changes would I need to make to rid myself of easy_install dependencies? Where are the dependencies declared? I'm guessing that I would need to get away from using the egg format, and provide just source distributions. If so, how do i generate the egg-info directories? or do I even need to?
How would this change my usage of virtualenv? Doesn't virtualenv use easy_install to manage the environments?
How would this change my usage of the setuptools provided "develop" command? Should I not use that? What's the alternative?
I'm basically trying to get a picture of what my development workflow will look like.
Before anyone suggests it, I'm not looking for an OS-dependent solution. I'm mainly concerned with debian linux, but deb packages are not an option, for the reasons Ian Bicking outlines here.
|
pip uses Setuptools, and doesn't require any changes to packages. It actually installs packages with Setuptools, using:
python -c 'import setuptools; __file__="setup.py"; execfile(__file__)' \
install \
--single-version-externally-managed
Because it uses that option (--single-version-externally-managed) it doesn't ever install eggs as zip files, doesn't support multiple simultaneously installed versions of software, and the packages are installed flat (like python setup.py install works if you use only distutils). Egg metadata is still installed. pip also, like easy_install, downloads and installs all the requirements of a package.
In addition you can also use a requirements file to add other packages that should be installed in a batch, and to make version requirements more exact (without putting those exact requirements in your setup.py files). But if you don't make requirements files then you'd use it just like easy_install.
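A requirements file is just a list of requirement specifiers, optionally mixed with editable checkouts, one per line; a small sketch (package names and the URL are placeholders):
Pylons==0.9.7
SQLAlchemy>=0.5,<0.6
-e svn+http://mysite/svn/Project/trunk#egg=Project
You then install the whole set with pip install -r requirements.txt.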
For your install_requires I don't recommend any changes, unless you have been trying to create very exact requirements there that are known to be good. I think there's a limit to how exact you can usefully be in setup.py files about versions, because you can't really know what the future compatibility of new libraries will be like, and I don't recommend you try to predict this. Requirement files are an alternate place to lay out conservative version requirements.
You can still use python setup.py develop, and in fact if you do pip install -e svn+http://mysite/svn/Project/trunk#egg=Project it will check that out (into src/project) and run setup.py develop on it. So that workflow isn't any different really.
If you run pip verbosely (like pip install -vv) you'll see a lot of the commands that are run, and you'll probably recognize most of them.
|
Python UnicodeDecodeError - Am I misunderstanding encode?
|
Any thoughts on why this isn't working? I really thought 'ignore' would do the right thing.
>>> 'add \x93Monitoring\x93 to list '.encode('latin-1','ignore')
Traceback (most recent call last):
File "<interactive input>", line 1, in ?
UnicodeDecodeError: 'ascii' codec can't decode byte 0x93 in position 4: ordinal not in range(128)
|
…there's a reason they're called "encodings"…
A little preamble: think of unicode as the norm, or the ideal state. Unicode is just a table of characters. №65 is latin capital A. №937 is greek capital omega. Just that.
In order for a computer to store and-or manipulate Unicode, it has to encode it into bytes. The most straightforward encoding of Unicode is UCS-4; every character occupies 4 bytes, and all ~1000000 characters are available. The 4 bytes contain the number of the character in the Unicode tables as a 4-byte integer. Another very useful encoding is UTF-8, which can encode any Unicode character with one to four bytes. But there also are some limited encodings, like "latin1", which include a very limited range of characters, mostly used by Western countries. Such encodings use only one byte per character.
Basically, Unicode can be encoded with many encodings, and encoded strings can be decoded to Unicode. The thing is, Unicode came quite late, so all of us that grew up using an 8-bit character set learned too late that all this time we worked with encoded strings. The encoding could be ISO8859-1, or windows CP437, or CP850, or, or, or, depending on our system default.
So when, in your source code, you enter the string "add “Monitoring“ to list" (and I think you wanted the string "add “Monitoring” to list", note the second quote), you actually are using a string already encoded according to your system's default codepage (by the byte \x93 I assume you use Windows codepage 1252, “Western”). If you want to get Unicode from that, you need to decode the string from the "cp1252" encoding.
So, what you meant to do, was:
"add \x93Monitoring\x94 to list".decode("cp1252", "ignore")
It's unfortunate that Python 2.x includes an .encode method for strings too; this is a convenience function for "special" encodings, like the "zip" or "rot13" or "base64" ones, which have nothing to do with Unicode.
Anyway, all you have to remember for your to-and-fro Unicode conversions is:
a Unicode string gets encoded to a Python 2.x string (actually, a sequence of bytes)
a Python 2.x string gets decoded to a Unicode string
In both cases, you need to specify the encoding that will be used.
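A quick round trip showing both directions (the byte values assume cp1252 going in and UTF-8 coming out):
>>> s = 'add \x93Monitoring\x94 to list'
>>> u = s.decode('cp1252')    # encoded bytes -> unicode
>>> u.encode('utf-8')         # unicode -> bytes, in a different encoding
'add \xe2\x80\x9cMonitoring\xe2\x80\x9d to list'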
I'm not very clear, I'm sleepy, but I sure hope I help.
PS A humorous side note: Mayans didn't have Unicode; ancient Romans, ancient Greeks, ancient Egyptians didn't either. They all had their own "encodings", and had little to no respect for other cultures. All these civilizations crumbled to dust. Think about it people! Make your apps Unicode-aware, for the good of mankind. :)
PS2 Please don't spoil the previous message by saying "But the Chinese…". If you feel inclined or obligated to do so, though, delay it by thinking that the Unicode BMP is populated mostly by chinese ideograms, ergo Chinese is the basis of Unicode. I can go on inventing outrageous lies, as long as people develop Unicode-aware applications. Cheers!
|
Difference between dict.clear() and assigning {} in Python
|
In python, is there a difference between calling clear() and assigning {} to a dictionary? If yes, what is it?
Example:d = {"stuff":"things"}
d.clear() #this way
d = {} #vs this way
|
If you have another variable also referring to the same dictionary, there is a big difference:
>>> d = {"stuff": "things"}
>>> d2 = d
>>> d = {}
>>> d2
{'stuff': 'things'}
>>> d = {"stuff": "things"}
>>> d2 = d
>>> d.clear()
>>> d2
{}
This is because assigning d = {} creates a new, empty dictionary and assigns it to the d variable. This leaves d2 pointing at the old dictionary with items still in it. However, d.clear() clears the same dictionary that d and d2 both point at.
|
Python variable scope error
|
I've been programming for many years, and recently started learning Python. The following code works as expected in both python 2.5 and 3.0 (on OS X if that matters):
a, b, c = (1, 2, 3)
print(a, b, c)
def test():
print(a)
print(b)
print(c) # (A)
#c+=1 # (B)
test()
However, when I uncomment line (B), I get an UnboundLocalError: 'c' not assigned at line (A). The values of a and b are printed correctly. This has me completely baffled for two reasons:
Why is there a runtime error thrown at line (A) because of a later statement on line (B)?
Why are variables a and b printed as expected, while c raises an error?
The only explanation I can come up with is that a local variable c is created by the assignment c+=1, which takes precedent over the "global" variable c even before the local variable is created. Of course, it doesn't make sense for a variable to "steal" scope before it exists.
Could someone please explain this behavior?
|
Python treats variables in functions differently depending on whether you assign values to them from within the function or not. If you assign any value to a variable, it is treated by default as a local variable. Therefore, when you uncomment the line, you are attempting to reference a local variable before any value has been assigned to it.
If you want the variable c to refer to the global c put
global c
as the first line of the function.
As of python 3, there is now
nonlocal c
that you can use to refer to the nearest enclosing (not necessarily global) scope.
|
Differences between Python game libraries Pygame and Pyglet?
|
I've had some experience with Pygame, but there seems to be a lot of buzz around Pyglet these days.
How do these two libraries compare? What would be the advantage of using one over the other, both in features and ease of use?
Finally, would you say that one is more Pythonic than the other?
|
Pygame: LGPL license
Pyglet: BSD license
Pygame relies on SDL libraries heavily
Pyglet is a pure python library with fewer dependencies, I think it requires better understanding of OpenGL
Pygame is around here for a long time, a lot of people used it
Pyglet is a new lib
Pygame is geared towards game development (cursors, sprites, joystick/gamepad support)
Pyglet is more general purpose (though it has a Sprite class)
I found also this discussion on pyglet-users mailing list: from pygame+pyopengl to pyglet
Disclaimer: I did not use either yet, only tried some tutorials ;-)
|
Trac documentation?
|
I'm trying to write my first little plugin for Trac and am kind of lost as to what the API exactly is. For example, exactly which fields are offered for "ticket" objects, among many other things.
Does anyone know of a good place to look for Trac API documentation? Can't find anything on the web site but maybe I'm just looking wrong...
|
The component architecture is important, but the real starting page for development is:
http://trac.edgewall.org/wiki/TracDev
Have also a look at the trac-hacks web site http://trac-hacks.org/ This is really a good source of examples, and many times you will find something close to what you want to do, that you can simply adapt from.
Think also about installing this development plugin: http://trac-hacks.org/wiki/TracDeveloperPlugin It makes debugging your plugin much easier.
|
Has anyone tried NetBeans 6.5 Python IDE?
|
Has anyone tried the NetBeans 6.5 Python IDE?
What are your opinions? Is it better/worse than PyDev? Do you like it? How does it integrate with source control tools (especially Mercurial)?
|
I will share some of the feelings from using it for quite a while now. Things that are roughly the same quality as in Eclipse+Pydev+mercurial:
editor, code-completion
debugger features
Things that are better:
autoimport
color schemes (Norway today rocks)
Mercurial support (though it is getting better and better in Eclipse)
Things that are worse:
zipped egg packages are not recognized for either code completion or the autoimport
libdyn packages (e.g. datetime) are not recognized
debugger is having trouble with multiprocessing package
you cannot choose file from outside of the project (/usr/bin/paster) to be the main file (this is what I use to debug Pylons applications)
Does anyone have something to add to the list?
|
Python - Using __getattribute__ method
|
I want to override access to one variable in a class, but return all others normally. How do I accomplish this with __getattribute__?
I tried the following (which should also illustrate what I'm trying to do) but I get a recursion error:
class D(object):
def __init__(self):
self.test=20
self.test2=21
def __getattribute__(self,name):
if name=='test':
return 0.
else:
return self.__dict__[name]
>>> print D().test
0.0
>>> print D().test2
...
RuntimeError: maximum recursion depth exceeded in cmp
|
You get a recursion error because you call the same function, your __getattribute__. If you use object's __getattribute__ instead, it works:
class D(object):
def __init__(self):
self.test=20
self.test2=21
def __getattribute__(self,name):
if name=='test':
return 0.
else:
return object.__getattribute__(self, name)
This works because object (in this example) is the base class. By calling the base version of __getattribute__ you avoid the recursive hell you were in before.
Ipython output with code in foo.py:
In [1]: from foo import *
In [2]: d = D()
In [3]: d.test
Out[3]: 0.0
In [4]: d.test2
Out[4]: 21
Update:
There's something in the section titled More attribute access for new-style classes in the current documentation, where they recommend doing exactly this to avoid the infinite recursion.
|
Difference between abstract class and interface in Python
|
What is the difference between abstract class and interface in Python?
|
What you'll see sometimes is the following:
class Abstract1( object ):
"""Some description that tells you it's abstract,
often listing the methods you're expected to supply."""
def aMethod( self ):
raise NotImplementedError( "Should have implemented this" )
Because Python doesn't have (and doesn't need) a formal Interface contract, the Java-style distinction between abstraction and interface doesn't exist. If someone goes through the effort to define a formal interface, it will also be an abstract class. The only differences would be in the stated intent in the docstring.
And the difference between abstract and interface is a hairsplitting thing when you have duck typing.
Java uses interfaces because it doesn't have multiple inheritance.
Because Python has multiple inheritance, you may also see something like this
class SomeAbstraction( object ):
pass # lots of stuff - but missing something
class Mixin1( object ):
def something( self ):
pass # one implementation
class Mixin2( object ):
def something( self ):
pass # another
class Concrete1( SomeAbstraction, Mixin1 ):
pass
class Concrete2( SomeAbstraction, Mixin2 ):
pass
This uses a kind of abstract superclass with mixins to create concrete subclasses that are disjoint.
|
Set timeout for xmlrpclib.ServerProxy
|
I am using xmlrpclib.ServerProxy to make RPC calls to a remote server. If there is not a network connection to the server it takes the default 10 seconds to return a socket.gaierror to my program.
This is annoying when doing development without a network connection, or if the remote server is down. Is there a way to update the timeout on my ServerProxy object?
I can't see a clear way to get access to the socket to update it.
|
A more straightforward solution is at:
http://www.devpicayune.com/entry/200609191448
import xmlrpclib
import socket
x = xmlrpclib.ServerProxy('http://1.2.3.4')
socket.setdefaulttimeout(10) #set the timeout to 10 seconds
x.func_name(args) #times out after 10 seconds
socket.setdefaulttimeout(None) #sets the default back
|
How do I connect to a MySQL Database in Python?
|
How do I connect to a MySQL database using a python program?
|
Connecting to MYSQL with Python in 3 steps
1 - Setting
You must install a MySQL driver before doing anything. Unlike PHP, only the SQLite driver is installed by default with Python. The most used package to do so is MySQLdb but it's hard to install it using easy_install.
For Windows user, you can get an exe of MySQLdb.
For Linux, this is a casual package (python-mysqldb). (You can use sudo apt-get install python-mysqldb (for debian based distros), yum install mysql-python (for rpm-based), or dnf install python-mysql (for modern fedora distro) in command line to download.)
For Mac, you can install MySQLdb using Macport.
2 - Usage
After installing, reboot. This is not mandatory, but will prevent me from answering 3 or 4 other questions in this post if something goes wrong. So please reboot.
Then it is just like using another package :
#!/usr/bin/python
import MySQLdb
db = MySQLdb.connect(host="localhost", # your host, usually localhost
user="john", # your username
passwd="megajonhy", # your password
db="jonhydb") # name of the data base
# you must create a Cursor object. It will let
# you execute all the queries you need
cur = db.cursor()
# Use all the SQL you like
cur.execute("SELECT * FROM YOUR_TABLE_NAME")
# print all the first cell of all the rows
for row in cur.fetchall():
print row[0]
db.close()
Of course, there are thousands of possibilities and options; this is a very basic example. You will have to look at the documentation. A good starting point.
3 - More advanced usage
Once you know how it works, you may want to use an ORM to avoid writing SQL manually and manipulate your tables as if they were Python objects. The most famous ORM in the Python community is SQLAlchemy.
I strongly advise you to use it: your life is going to be much easier.
I recently discovered another jewel in the Python world: peewee. It's a very lite ORM, really easy and fast to set up and use. It makes my day for small projects or stand-alone apps, where using big tools like SQLAlchemy or Django is overkill:
import peewee
from peewee import *
db = MySQLDatabase('jonhydb', user='john',passwd='megajonhy')
class Book(peewee.Model):
author = peewee.CharField()
title = peewee.TextField()
class Meta:
database = db
Book.create_table()
book = Book(author="me", title='Peewee is cool')
book.save()
for book in Book.filter(author="me"):
print book.title
Peewee is cool
This example works out of the box. Nothing other than having peewee (pip install peewee :-)) is required. No complicated setup. It's really cool.
|
Finding the Current Active Window in Mac OS X using Python
|
Is there a way to find the application name of the current active window at a given time on Mac OS X using Python?
|
This should work:
#!/usr/bin/python
from AppKit import NSWorkspace
activeAppName = NSWorkspace.sharedWorkspace().activeApplication()['NSApplicationName']
print activeAppName
Only works on Leopard, or on Tiger if you have PyObjC installed and happen to point at the right python binary in line one (not the case if you've installed universal MacPython, which you'd probably want to do on Tiger). But Peter's answer with the Carbon way of doing this will probably be quite a bit faster, since importing anything from AppKit in Python takes a while, or more accurately, importing something from AppKit for the first time in a Python process takes a while.
If you need this inside a PyObjC app, what I describe will work great and fast, since you only experience the lag of importing AppKit once. If you need this to work as a command-line tool, you'll notice the performance hit. If that's relevant to you, you're probably better off building a 10 line Foundation command line tool in Xcode using Peter's code as a starting point.
|
Python regex for MD5 hash
|
I've come up with:
re.findall("([a-fA-F\d]*)", data)
but it's not very fool proof, is there a better way to grab all MD5-hash codes?
|
Well, since md5 is just a string of 32 hex digits, about all you could add to your expression is a check for "32 digits", perhaps something like this?
re.findall(r"([a-fA-F\d]{32})", data)
|
How do I get a Cron like scheduler in Python?
|
I'm looking for a library in Python which will provide at and cron like functionality.
I'd quite like to have a pure Python solution, rather than relying on tools installed on the box; this way I run on machines with no cron.
For those unfamiliar with cron: you can schedule tasks based upon an expression like:
0 2 * * 7 /usr/bin/run-backup # run the backups at 0200 on Every Sunday
0 9-17/2 * * 1-5 /usr/bin/purge-temps # run the purge temps command, every 2 hours between 9am and 5pm on Mondays to Fridays.
The cron time expression syntax is less important, but I would like to have something with this sort of flexibility.
If there isn't something that does this for me out-the-box, any suggestions for the building blocks to make something like this would be gratefully received.
Edit
I'm not interested in launching processes, just "jobs" also written in Python - Python functions. By necessity I think this would be in a different thread, but not in a different process.
To this end, I'm looking for the expressivity of the cron time expression, but in Python.
Cron has been around for years, but I'm trying to be as portable as possible. I cannot rely on its presence.
|
If you're looking for something lightweight, check out schedule:
import schedule
import time
def job():
print("I'm working...")
schedule.every(10).minutes.do(job)
schedule.every().hour.do(job)
schedule.every().day.at("10:30").do(job)
while 1:
schedule.run_pending()
time.sleep(1)
Disclosure: I'm the author of that library.
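Since the question asks for jobs to run inside the same process but off the main thread, one approach is to push the polling loop itself into a daemon thread. A minimal sketch (schedule is not documented as thread-safe, so keep job registration on one thread):
import threading

def run_scheduler():
    while True:
        schedule.run_pending()
        time.sleep(1)

t = threading.Thread(target=run_scheduler)
t.daemon = True  # dies with the main program
t.start()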
|
How do I get the UTC time of "midnight" for a given timezone?
|
The best I can come up with for now is this monstrosity:
>>> datetime.utcnow() \
... .replace(tzinfo=pytz.UTC) \
... .astimezone(pytz.timezone("Australia/Melbourne")) \
... .replace(hour=0,minute=0,second=0,microsecond=0) \
... .astimezone(pytz.UTC) \
... .replace(tzinfo=None)
datetime.datetime(2008, 12, 16, 13, 0)
I.e., in English, get the current time (in UTC), convert it to some other timezone, set the time to midnight, then convert back to UTC.
I'm not just using now() or localtime() as that would use the server's timezone, not the user's timezone.
I can't help feeling I'm missing something, any ideas?
|
I think you can shave off a few method calls if you do it like this:
>>> from datetime import datetime
>>> datetime.now(pytz.timezone("Australia/Melbourne")) \
.replace(hour=0, minute=0, second=0, microsecond=0) \
.astimezone(pytz.utc)
BUT… there is a bigger problem than aesthetics in your code: it will give the wrong result on the day of the switch to or from Daylight Saving Time.
The reason for this is that neither the datetime constructors nor replace() take DST changes into account.
For example:
>>> tz = pytz.timezone("Australia/Melbourne")
>>> now = datetime(2012, 4, 1, 5, 0, 0, 0, tzinfo=tz)
>>> print now
2012-04-01 05:00:00+10:00
>>> print now.replace(hour=0)
2012-04-01 00:00:00+10:00 # wrong! midnight was at 2012-04-01 00:00:00+11:00
>>> print datetime(2012, 3, 1, 0, 0, 0, 0, tzinfo=tz)
2012-03-01 00:00:00+10:00 # wrong again!
However, the documentation for tz.localize() states:
This method should be used to construct localtimes, rather
than passing a tzinfo argument to a datetime constructor.
Thus, your problem is solved like so:
>>> import pytz
>>> from datetime import datetime, date, time
>>> tz = pytz.timezone("Australia/Melbourne")
>>> the_date = date(2012, 4, 1) # use date.today() here
>>> midnight_without_tzinfo = datetime.combine(the_date, time())
>>> print midnight_without_tzinfo
2012-04-01 00:00:00
>>> midnight_with_tzinfo = tz.localize(midnight_without_tzinfo)
>>> print midnight_with_tzinfo
2012-04-01 00:00:00+11:00
>>> print midnight_with_tzinfo.astimezone(pytz.utc)
2012-03-31 13:00:00+00:00
No guarantees for dates before 1582, though.
|
split string on a number of different characters
|
I'd like to split a string using one or more separator characters.
E.g. "a b.c", split on " " and "." would give the list ["a", "b", "c"].
At the moment, I can't see anything in the standard library to do this, and my own attempts are a bit clumsy. E.g.
def my_split(string_L, split_chars):
if isinstance(string_L, basestring):
string_L = [string_L]
try:
split_char = split_chars[0]
except IndexError:
return string_L
res = []
for s in string_L:
res.extend(s.split(split_char))
return my_split(res, split_chars[1:])
print my_split("a b.c", [' ', '.'])
Horrible! Any better suggestions?
|
>>> import re
>>> re.split('[ .]', 'a b.c')
['a', 'b', 'c']
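If the input can contain runs of separators, adding a + quantifier avoids empty strings in the result:
>>> re.split('[ .]+', 'a  b..c')
['a', 'b', 'c']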
|
Running interactive commands in Paramiko
|
I'm trying to run an interactive command through paramiko. The cmd execution tries to prompt for a password but I do not know how to supply the password through paramiko's exec_command and the execution hangs. Is there a way to send values to the terminal if a cmd execution expects input interactively?
ssh = paramiko.SSHClient()
ssh.connect(server, username=username, password=password)
ssh_stdin, ssh_stdout, ssh_stderr = ssh.exec_command("psql -U factory -d factory -f /tmp/data.sql")
Does anyone know how this can addressed? Thank you.
|
The full paramiko distribution ships with a lot of good demos.
In the demos subdirectory, demo.py and interactive.py have full interactive TTY examples which would probably be overkill for your situation.
In your example above ssh_stdin acts like a standard Python file object, so ssh_stdin.write should work so long as the channel is still open.
I've never needed to write to stdin, but the docs suggest that a channel is closed as soon as a command exits, so using the standard stdin.write method to send a password up probably won't work. There are lower level paramiko commands on the channel itself that give you more control - see how the SSHClient.exec_command method is implemented for all the gory details.
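If you still want to try feeding the password over stdin, the attempt would look like the sketch below. Note this only works if psql actually reads the password from stdin rather than from the controlling TTY (by default it prompts on the TTY, so this may well fail; a more robust route is a ~/.pgpass file or the PGPASSWORD environment variable on the remote side). db_password is a placeholder here:
ssh_stdin, ssh_stdout, ssh_stderr = ssh.exec_command(
    "psql -U factory -d factory -f /tmp/data.sql")
ssh_stdin.write(db_password + '\n')  # db_password is a hypothetical variable
ssh_stdin.flush()
print ssh_stdout.read()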
|
How can I find all the subsets of a set, with exactly n elements?
|
I am writing a program in Python, and I realized that a problem I need to solve requires me, given a set S with n elements (|S|=n), to test a function on all possible subsets of a certain order m (i.e. with m elements), use the answer to produce a partial solution, and then try again with the next order m=m+1, until m=n.
I am on my way to write a solution of the form:
def findsubsets(S, m):
subsets = set([])
...
return subsets
But knowing Python I expected a solution to be already there.
What is the best way to accomplish this?
|
itertools.combinations is your friend if you have Python 2.6 or greater. Otherwise, check the link for an implementation of an equivalent function.
import itertools
def findsubsets(S,m):
return set(itertools.combinations(S, m))
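Used like this (the ordering inside the returned set is arbitrary):
>>> findsubsets(set([0, 1, 2, 3]), 2)
set([(0, 1), (1, 2), (0, 3), (2, 3), (1, 3), (0, 2)])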
|
How do I get the time a file was last modified in Python?
|
Assuming the file exists (using os.path.exists(filename) to first make sure that it does), how do I display the time a file was last modified? This is on Linux if that makes any difference.
|
>>> import os
>>> f = os.path.getmtime('test1.jpg')
>>> f
1223995325.0
That is the file's modification time expressed as seconds since the epoch.
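To turn that timestamp into something human-readable, either of these works (the exact output depends on your timezone):
import time, datetime
print time.ctime(f)                        # readable local-time string
print datetime.datetime.fromtimestamp(f)   # or work with a datetime object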
|
Non-blocking read on a subprocess.PIPE in python
|
I'm using the subprocess module to start a subprocess and connect to its output stream (stdout). I want to be able to execute non-blocking reads on its stdout. Is there a way to make .readline non-blocking or to check if there is data on the stream before I invoke .readline? I'd like this to be portable, or at least work under Windows and Linux.
Here is how I do it for now (it's blocking on the .readline if no data is available):
p = subprocess.Popen('myprogram.exe', stdout = subprocess.PIPE)
output_str = p.stdout.readline()
|
fcntl, select, asyncproc won't help in this case.
A reliable way to read a stream without blocking regardless of operating system is to use Queue.get_nowait():
import sys
from subprocess import PIPE, Popen
from threading import Thread
try:
from Queue import Queue, Empty
except ImportError:
from queue import Queue, Empty # python 3.x
ON_POSIX = 'posix' in sys.builtin_module_names
def enqueue_output(out, queue):
for line in iter(out.readline, b''):
queue.put(line)
out.close()
p = Popen(['myprogram.exe'], stdout=PIPE, bufsize=1, close_fds=ON_POSIX)
q = Queue()
t = Thread(target=enqueue_output, args=(p.stdout, q))
t.daemon = True # thread dies with the program
t.start()
# ... do other things here
# read line without blocking
try: line = q.get_nowait() # or q.get(timeout=.1)
except Empty:
print('no output yet')
else: # got line
# ... do something with line
|
How do you override vim options via comments in a python source code file?
|
I would like to set some vim options in one file in the comments section.
For example, I would like to set this option in one file
set syntax=python
The file does not have a .py extension and I am not interested in making my vim installation recognise all files with this extension as python files.
I know this can be done because I have seen it, but my googling for this has not yet been fruitful.
|
You're wanting a modeline. The linked article should explain it far better than I can.
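For the concrete case in the question, a comment like this near the top or bottom of the file does the trick (vim only scans the first and last few lines, and the modeline option must be enabled):
# vim: set syntax=python: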
|
String concatenation vs. string substitution in Python
|
In Python, the where and when of using string concatenation versus string substitution eludes me. As string concatenation has seen large boosts in performance, is this (becoming more of) a stylistic decision rather than a practical one?
For a concrete example, how should one handle construction of flexible URIs:
DOMAIN = 'http://stackoverflow.com'
QUESTIONS = '/questions'
def so_question_uri_sub(q_num):
return "%s%s/%d" % (DOMAIN, QUESTIONS, q_num)
def so_question_uri_cat(q_num):
return DOMAIN + QUESTIONS + '/' + str(q_num)
Edit: There have also been suggestions about joining a list of strings and for using named substitution. These are variants on the central theme, which is, which way is the Right Way to do it at which time? Thanks for the responses!
|
Concatenation is (significantly) faster on my machine. But stylistically, I'm willing to pay the price of substitution if performance is not critical. And if I need formatting, there's no need to even ask the question: there's no option but to use interpolation/templating.
>>> import timeit
>>> def so_q_sub(n):
... return "%s%s/%d" % (DOMAIN, QUESTIONS, n)
...
>>> so_q_sub(1000)
'http://stackoverflow.com/questions/1000'
>>> def so_q_cat(n):
... return DOMAIN + QUESTIONS + '/' + str(n)
...
>>> so_q_cat(1000)
'http://stackoverflow.com/questions/1000'
>>> t1 = timeit.Timer('so_q_sub(1000)','from __main__ import so_q_sub')
>>> t2 = timeit.Timer('so_q_cat(1000)','from __main__ import so_q_cat')
>>> t1.timeit(number=10000000)
12.166618871951641
>>> t2.timeit(number=10000000)
5.7813972166853773
>>> t1.timeit(number=1)
1.103492206766532e-05
>>> t2.timeit(number=1)
8.5206360154188587e-06
>>> def so_q_tmp(n):
... return "{d}{q}/{n}".format(d=DOMAIN,q=QUESTIONS,n=n)
...
>>> so_q_tmp(1000)
'http://stackoverflow.com/questions/1000'
>>> t3= timeit.Timer('so_q_tmp(1000)','from __main__ import so_q_tmp')
>>> t3.timeit(number=10000000)
14.564135316080637
>>> def so_q_join(n):
... return ''.join([DOMAIN,QUESTIONS,'/',str(n)])
...
>>> so_q_join(1000)
'http://stackoverflow.com/questions/1000'
>>> t4= timeit.Timer('so_q_join(1000)','from __main__ import so_q_join')
>>> t4.timeit(number=10000000)
9.4431309007150048
|
Test if executable exists in Python?
|
In Python, is there a portable and simple way to test if an executable program exists?
By simple I mean something like the which command, which would be just perfect. I don't want to search PATH manually, or do something involving trying to execute it with Popen et al. and seeing if it fails (that's what I'm doing now, but imagine it's launchmissiles)
|
Easiest way I can think of:
def which(program):
import os
def is_exe(fpath):
return os.path.isfile(fpath) and os.access(fpath, os.X_OK)
fpath, fname = os.path.split(program)
if fpath:
if is_exe(program):
return program
else:
for path in os.environ["PATH"].split(os.pathsep):
path = path.strip('"')
exe_file = os.path.join(path, program)
if is_exe(exe_file):
return exe_file
return None
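Used like this (the exact paths will vary by system):
>>> which("ls")        # a typical Linux box
'/bin/ls'
>>> which("launchmissiles") is None
True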
Edit: Updated code sample to include logic for handling case where provided argument is already a full path to the executable, i.e. "which /bin/ls". This mimics the behavior of the UNIX 'which' command.
Edit: Updated to use os.path.isfile() instead of os.path.exists() per comments.
|
Stackless python and multicores?
|
So, I'm toying around with Stackless Python and a question popped up in my head, maybe this is "assumed" or "common" knowledge, but I couldn't find it actually written anywhere on the stackless site.
Does Stackless Python take advantage of multicore CPUs? In normal Python you have the GIL being constantly present, and to make (true) use of multiple cores you need to use several processes; is this true for Stackless also?
|
Stackless Python does not make use of multiple cores on the machine it runs on.
This is a common misconception about Stackless, as it allows the programmer to take advantage of thread-based programming. For many people these two are closely intertwined, but they are, in fact, two separate things.
Internally Stackless uses a round-robin scheduler to schedule every tasklet (micro-thread), but no tasklet can run concurrently with another one. This means that if one tasklet is busy, the others must wait until that tasklet relinquishes control. By default the scheduler will not stop a tasklet and give processor time to another. It is the tasklet's responsibility to schedule itself back at the end of the schedule queue, using stackless.schedule(), or by finishing its calculations.
All tasklets are thus executed in a sequential manner, even when multiple cores are available.
The reason why Stackless does not have multi-core support is that this makes threads a whole lot easier. And this is just what Stackless is all about:
from the official stackless website
Stackless Python is an enhanced
version of the Python programming
language. It allows programmers to
reap the benefits of thread-based
programming without the performance
and complexity problems associated
with conventional threads. The
microthreads that Stackless adds to
Python are a cheap and lightweight
convenience which can if used
properly, give the following benefits:
Improved program structure.
More readable code.
Increased programmer productivity.
Here is a link to some more information about multiple cores and stackless.
|
How do I get my python program to sleep for 50 milliseconds?
|
How do I get my python program to sleep for 50 milliseconds?
|
Python 2.5.2 (r252:60911, Jul 31 2008, 17:28:52)
[GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from time import sleep
>>> sleep(0.05)
>>>
|
How to get a list of all users with a specific permission group in Django
|
I want to get a list of all Django auth user with a specific permission group, something like this:
user_dict = {
'queryset': User.objects.filter(permission='blogger')
}
I cannot find out how to do this. How are the permissions groups saved in the user model?
|
If you want to get the list of users by permission, look at this variant:
from django.contrib.auth.models import User, Permission
from django.db.models import Q
perm = Permission.objects.get(codename='blogger')
users = User.objects.filter(Q(groups__permissions=perm) | Q(user_permissions=perm)).distinct()
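If 'blogger' in the question actually refers to a Group name rather than a Permission codename, filtering on the group relation directly is simpler (assuming such a group exists):
users = User.objects.filter(groups__name='blogger')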
|
What are these tags @ivar @param and @type in python docstring?
|
The ampoule project uses some tags in docstrings, like the Javadoc ones.
For example from pool.py line 86:
def start(self, ampChild=None):
"""
Starts the ProcessPool with a given child protocol.
@param ampChild: a L{ampoule.child.AMPChild} subclass.
@type ampChild: L{ampoule.child.AMPChild} subclass
"""
What are these tags, which tool uses it.
|
Markup for a documentation tool, probably epydoc.
|
Parse String to Float or Int
|
In Python, how can I parse a numeric string like "545.2222" to its corresponding float value, 545.2222? Or parse the string "31" to an integer, 31?
I just want to know how to parse a float string to a float, and (separately) an int string to an int.
|
>>> a = "545.2222"
>>> float(a)
545.22220000000004
>>> int(float(a))
545
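For the integer half of the question, int() works directly on the string, and a try/except is the usual guard when the input type isn't known in advance (a small sketch):
>>> int("31")
31
>>> def parse_num(s):
...     try:
...         return int(s)
...     except ValueError:
...         return float(s)
...
>>> parse_num("31")
31
>>> parse_num("545.2222")  # repr may show extra digits, as above
545.22220000000004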
|
Python: single instance of program
|
Is there a Pythonic way to have only one instance of a program running?
The only reasonable solution I've come up with is trying to run it as a server on some port; a second program trying to bind to the same port then fails. But it's not really a great idea, maybe there's something more lightweight than this?
(Take into consideration that program is expected to fail sometimes, i.e. segfault - so things like "lock file" won't work)
Update: the solutions offered are much more complex and less reliable than just having a port occupied with a non-existent server, so I'd have to go with that one.
|
The following code should do the job, it is cross-platform and runs on Python 2.4-3.2. I tested it on Windows, OS X and Linux.
from tendo import singleton
me = singleton.SingleInstance() # will sys.exit(-1) if other instance is running
The latest version of the code is available in singleton.py. Please file bugs here.
You can install tendo using one of the following methods:
easy_install tendo
pip install tendo
manually by getting it from http://pypi.python.org/pypi/tendo
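For the record, the port-binding approach from the question can be written in a few lines; it survives crashes because the OS releases the socket when the process dies. A minimal sketch (the port number is arbitrary; keep a reference to the socket alive for the program's lifetime):
import socket, sys

lock_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    lock_socket.bind(("127.0.0.1", 47200))  # arbitrary unused port
except socket.error:
    print "Another instance is already running"
    sys.exit(1)
# ... rest of the program; keep lock_socket referenced throughout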
|
Comparing massive lists of dictionaries in python
|
I never actually thought I'd run into speed-issues with python, but I have. I'm trying to compare really big lists of dictionaries to each other based on the dictionary values. I compare two lists, with the first like so
biglist1=[{'transaction':'somevalue', 'id':'somevalue', 'date':'somevalue' ...}, {'transaction':'somevalue', 'id':'somevalue', 'date':'somevalue' ...}, ...]
With 'somevalue' standing for a user-generated string, int or decimal. Now, the second list is pretty similar, except the id-values are always empty, as they have not been assigned yet.
biglist2=[{'transaction':'somevalue', 'id':'', 'date':'somevalue' ...}, {'transaction':'somevalue', 'id':'', 'date':'somevalue' ...}, ...]
So I want to get a list of the dictionaries in biglist2 that match the dictionaries in biglist1 for all other keys except id.
I've been doing
for item in biglist2:
for transaction in biglist1:
if item['transaction'] == transaction['transaction']:
list_transactionnamematches.append(transaction)
for item in biglist2:
for transaction in list_transactionnamematches:
if item['date'] == transaction['date']:
list_transactionnamematches.append(transaction)
... and so on, not comparing id values, until I get a final list of matches. Since the lists can be really big (around 3000+ items each), this takes quite some time for python to loop through.
I'm guessing this isn't really how this kind of comparison should be done. Any ideas?
|
Build an index on the fields you want to use for lookup; the whole thing then runs in O(n+m).
matches = []
biglist1_indexed = {}
for item in biglist1:
biglist1_indexed[(item["transaction"], item["date"])] = item
for item in biglist2:
if (item["transaction"], item["date"]) in biglist1_indexed:
matches.append(item)
This is probably thousands of times faster than what you're doing now.
|
In what contexts do programming languages make real use of an Infinity value?
|
So in Ruby there is a trick to specify infinity:
1.0/0
=> Infinity
I believe in Python you can do something like this
float('inf')
These are just examples though, I'm sure most languages have infinity in some capacity. When would you actually use this construct in the real world? Why would using it in a range be better than just using a boolean expression? For instance
(0..1.0/0).include?(number) == (number >= 0) # True for all values of number
=> true
To summarize, what I'm looking for is a real world reason to use Infinity.
EDIT: I'm looking for real-world code. It's all well and good to say this is when you "could" use it; when have people actually used it?
|
Off the top of my head, it can be useful as an initial value when searching for a minimum value.
For example:
min = float('inf')
for x in somelist:
if x<min:
min=x
Which I prefer to setting min initially to the first value of somelist
Of course, in Python, you should just use the min() built-in function in most cases.
|
Django: How can I use my model classes to interact with my database from outside Django?
|
I'd like to write a script that interacts with my DB using a Django app's model. However, I would like to be able to run this script from the command line or via cron. What all do I need to import to allow this?
|
You need to set up the Django environment variables. These tell Python where your project is, and what the name of the settings module is (the project name in the settings module is optional). Note that assigning to os.environ['PYTHONPATH'] has no effect on an interpreter that is already running, so extend sys.path instead:
import sys, os
sys.path.append('/path/to/myproject')
os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'
Now you should be able to access the models:
from myproject.models import MyModel
all_my_models = MyModel.objects.all()
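One forward-looking note: in Django 1.7 and later, standalone scripts also need an explicit setup call after setting the environment variable (hedged here, since this answer predates those releases):
import django
django.setup()  # required in Django >= 1.7 before touching the ORM
from myproject.models import MyModel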
|
How do you apply 'or' to all values of a list in Python?
|
How do you apply 'or' to all values of a list in Python? I'm thinking something like:
or([True, True, False])
or if it was possible:
reduce(or, [True, True, False])
|
The built-in function any does what you want:
>>> any([True, True, False])
True
>>> any([False, False, False])
False
>>> any([False, False, True])
True
any has the advantage over reduce of shortcutting the test for later items in the sequence once it finds a true value. This can be very handy if the sequence is a generator with an expensive operation behind it. For example:
>>> def iam(result):
... # Pretend this is expensive.
... print "iam(%r)" % result
... return result
...
>>> any((iam(x) for x in [False, True, False]))
iam(False)
iam(True)
True
>>> reduce(lambda x,y: x or y, (iam(x) for x in [False, True, False]))
iam(False)
iam(True)
iam(False)
True
If your Python's version doesn't have any(), all() builtins then they are easily implemented as Guido van Rossum suggested:
def any(S):
for x in S:
if x:
return True
return False
def all(S):
for x in S:
if not x:
return False
return True
|
104, 'Connection reset by peer' socket error, or When does closing a socket result in a RST rather than FIN?
|
We're developing a Python web service and a client web site in parallel. When we make an HTTP request from the client to the service, one call consistently raises a socket.error in socket.py, in read:
(104, 'Connection reset by peer')
When I listen in with wireshark, the "good" and "bad" responses look very similar:
Because of the size of the OAuth header, the request is split into two packets. The service responds to both with ACK
The service sends the response, one packet per header (HTTP/1.0 200 OK, then the Date header, etc.). The client responds to each with ACK.
(Good request) the server sends a FIN, ACK. The client responds with a FIN, ACK. The server responds ACK.
(Bad request) the server sends a RST, ACK, the client doesn't send a TCP response, the socket.error is raised on the client side.
Both the web service and the client are running on a Gentoo Linux x86-64 box running glibc-2.6.1. We're using Python 2.5.2 inside the same virtualenv.
The client is a Django 1.0.2 app that is calling httplib2 0.4.0 to make requests. We're signing requests with the OAuth signing algorithm, with the OAuth token always set to an empty string.
The service is running Werkzeug 0.3.1, which is using Python's wsgiref.simple_server. I ran the WSGI app through wsgiref.validator with no issues.
It seems like this should be easy to debug, but when I trace through a good request on the service side, it looks just like the bad request, in the socket._socketobject.close() function, turning delegate methods into dummy methods. When the send or sendto (can't remember which) method is switched off, the FIN or RST is sent, and the client starts processing.
"Connection reset by peer" seems to place blame on the service, but I don't trust httplib2 either. Can the client be at fault?
** Further debugging - Looks like server on Linux **
I have a MacBook, so I tried running the service on one and the client website on the other. The Linux client calls the OS X server without the bug (FIN ACK). The OS X client calls the Linux service with the bug (RST ACK, and a (54, 'Connection reset by peer')). So, it looks like it's the service running on Linux. Is it x86_64? A bad glibc? wsgiref? Still looking...
** Further testing - wsgiref looks flaky **
We've gone to production with Apache and mod_wsgi, and the connection resets have gone away. See my answer below, but my advice is to log the connection reset and retry. This will let your server run OK in development mode, and solidly in production.
|
I've had this problem. See The Python "Connection Reset By Peer" Problem.
You have (most likely) run afoul of small timing issues based on the Python Global Interpreter Lock.
You can (sometimes) correct this with a time.sleep(0.01) placed strategically.
"Where?" you ask. Beats me. The idea is to provide some better thread concurrency in and around the client requests. Try putting it just before you make the request so that the GIL is reset and the Python interpreter can clear out any pending threads.
|
How can I color Python logging output?
|
Some time ago, I saw a Mono application with colored output, presumably because of its log system (because all the messages were standardized).
Now, Python has the logging module, which lets you specify a lot of options to customize output. So, I'm imagining something similar would be possible with Python, but I can't find out how to do this anywhere.
Is there any way to make the Python logging module output in color?
What I want (for instance) errors in red, debug messages in blue or yellow, and so on.
Of course this would probably require a compatible terminal (most modern terminals are); but I could fallback to the original logging output if color isn't supported.
Any ideas how I can get colored output with the logging module?
|
I already knew about the color escapes; I used them in my bash prompt a while ago. Thanks anyway.
What I wanted was to integrate it with the logging module, which I eventually did after a couple of tries and errors.
Here is what I ended up with:
import logging

BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN, WHITE = range(8)
# The background is set with 40 plus the number of the color, and the foreground with 30
# These are the sequences needed to get colored output
RESET_SEQ = "\033[0m"
COLOR_SEQ = "\033[1;%dm"
BOLD_SEQ = "\033[1m"
def formatter_message(message, use_color = True):
if use_color:
message = message.replace("$RESET", RESET_SEQ).replace("$BOLD", BOLD_SEQ)
else:
message = message.replace("$RESET", "").replace("$BOLD", "")
return message
COLORS = {
'WARNING': YELLOW,
'INFO': WHITE,
'DEBUG': BLUE,
'CRITICAL': YELLOW,
'ERROR': RED
}
class ColoredFormatter(logging.Formatter):
def __init__(self, msg, use_color = True):
logging.Formatter.__init__(self, msg)
self.use_color = use_color
def format(self, record):
levelname = record.levelname
if self.use_color and levelname in COLORS:
levelname_color = COLOR_SEQ % (30 + COLORS[levelname]) + levelname + RESET_SEQ
record.levelname = levelname_color
return logging.Formatter.format(self, record)
And to use it, create your own Logger:
# Custom logger class with multiple destinations
class ColoredLogger(logging.Logger):
FORMAT = "[$BOLD%(name)-20s$RESET][%(levelname)-18s] %(message)s ($BOLD%(filename)s$RESET:%(lineno)d)"
COLOR_FORMAT = formatter_message(FORMAT, True)
def __init__(self, name):
logging.Logger.__init__(self, name, logging.DEBUG)
color_formatter = ColoredFormatter(self.COLOR_FORMAT)
console = logging.StreamHandler()
console.setFormatter(color_formatter)
self.addHandler(console)
return
logging.setLoggerClass(ColoredLogger)
Just in case anyone else needs it.
Be careful if you're using more than one logger or handler: ColoredFormatter is changing the record object, which is passed further to other handlers or propagated to other loggers. If you have configured file loggers etc. you probably don't want to have the colors in the log files. To avoid that, it's probably best to simply create a copy of record with copy.copy() before manipulating the levelname attribute, or to reset the levelname to the previous value, before returning the formatted string (credit to Michael in the comments).
|
Database does not update automatically with MySQL and Python
|
I'm having some trouble updating a row in a MySQL database. Here is the code I'm trying to run:
import MySQLdb
conn=MySQLdb.connect(host="localhost", user="root", passwd="pass", db="dbname")
cursor=conn.cursor()
cursor.execute("UPDATE compinfo SET Co_num=4 WHERE ID=100")
cursor.execute("SELECT Co_num FROM compinfo WHERE ID=100")
results = cursor.fetchall()
for row in results:
print row[0]
print "Number of rows updated: %d" % cursor.rowcount
cursor.close()
conn.close()
The output I get when I run this program is:
4
Number of rows updated: 1
It seems like it's working but if I query the database from the MySQL command line interface (CLI) I find that it was not updated at all. However, if from the CLI I enter UPDATE compinfo SET Co_num=4 WHERE ID=100; the database is updated as expected.
What is my problem? I'm running Python 2.5.2 with MySQL 5.1.30 on a Windows box.
|
I am not certain, but I am going to guess you are using an InnoDB table, and you haven't done a commit. I believe MySQLdb enables transactions automatically.
Call conn.commit() before calling close.
From the FAQ: Starting with 1.2.0, MySQLdb disables autocommit by default
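Applied to the question's code, the fix is one line between the UPDATE and the close calls:
cursor.execute("UPDATE compinfo SET Co_num=4 WHERE ID=100")
conn.commit()  # without this, InnoDB rolls the change back when you disconnect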
|
MySQL-db lib for Python 3.x?
|
So, looking for a mysql-db-lib that is compatible with py3k/py3.0/py3000, any ideas? Google turned up nothing.
|
It appears that MySQLdb is pretty much a dead project. However, PyMySQL is a DB-API compliant, pure-Python implementation of a MySQL client, and it has Python 3 support.
EDIT: There's also MySQL Connector/Python. Same idea.
|
PIL and numpy
|
Alright, I'm toying around with converting a PIL image object back and forth to a numpy array so I can do some faster pixel-by-pixel transformations than PIL's PixelAccess object would allow. I've figured out how to place the pixel information in a useful 3D numpy array by way of:
pic = Image.open("foo.jpg")
pix = numpy.array(pic.getdata()).reshape(pic.size[0], pic.size[1], 3)
But I can't seem to figure out how to load it back into the PIL object after I've done all my awesome transforms. I'm aware of the putdata() method, but can't quite seem to get it to behave.
Any thoughts?
|
You're not saying how exactly putdata() is not behaving. I'm assuming you're doing
>>> pic.putdata(a)
Traceback (most recent call last):
File "...blablabla.../PIL/Image.py", line 1185, in putdata
self.im.putdata(data, scale, offset)
SystemError: new style getargs format but argument is not a tuple
This is because putdata expects a sequence of tuples and you're giving it a numpy array. This
>>> data = list(tuple(pixel) for pixel in pix)
>>> pic.putdata(data)
will work but it is very slow.
As of PIL 1.1.6, the "proper" way to convert between images and numpy arrays is simply
>>> pix = numpy.array(pic)
although the resulting array is in a different format than yours (a 3-d array of rows/columns/rgb in this case).
Then, after you make your changes to the array, you should be able to do either pic.putdata(pix) or create a new image with Image.fromarray(pix).
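Putting it together, a round trip might look like this (a sketch; Image.fromarray expects an appropriately typed array, so a cast to uint8 is included as a precaution):
import numpy
import Image  # classic PIL import; 'from PIL import Image' also works

pic = Image.open("foo.jpg")
pix = numpy.array(pic)                        # height x width x 3 array
pix = 255 - pix                               # example transform: invert colors
out = Image.fromarray(pix.astype(numpy.uint8))
out.save("foo_out.jpg")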
|