Which Version of Python to Use for Maximum Compatibility
|
If I was going to start an open source project using Python what version should I use to ensure that the vast majority of users can use it on their system?
I'm the kind of person who quickly jumps to the next version (which I'll do when Python 3 comes out) but many people may be more conservative if their current version seems to be working fine. What version would hit the sweet spot but still allow me to enjoy the newest and coolest language enhancements?
|
As Python is in a transition phase towards Python 3, which breaks backward compatibility, I don't think it is a good idea to go Python 3 only. Based on the timeline there will be at least one or two further releases of the 2.x series after 2.6/3.0 in October.
Besides Python 3 not being available on your target platforms, it will take some time until the important external Python libraries are ported and usable on Python 3.
So, as Matthew suggests, staying at 2.4/2.5 and keeping the transition plan to Python 3 in mind is a solid choice.
|
How do you set up a python wsgi server under IIS?
|
I work in a Windows environment and would prefer to deploy code to IIS. At the same time I would like to code in Python.
Having read that IIS can run FastCGI applications, I went to the IIS site, which describes in detail how to get PHP up and running, but not much about anything else.
Does anyone have experience getting a Python framework running under IIS using something other than plain old CGI?
If so, can you point me to some instructions on setting this up?
|
There shouldn't be any need to use FastCGI. There is an ISAPI extension for WSGI.
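For reference, any standard WSGI application can be hosted that way; a minimal sketch (the gateway-specific wiring depends on the extension you install):
def application(environ, start_response):
    # Minimal WSGI app; an ISAPI-WSGI gateway would invoke this callable.
    start_response('200 OK', [('Content-type', 'text/plain')])
    return ['Hello from Python under IIS!']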
|
Is there a way to attach a debugger to a multi-threaded Python process?
|
I'm trying to debug a deadlock in a multi-threaded Python application after it has locked up. Is there a way to attach a debugger to inspect the state of the process?
Edit: I'm attempting this on Linux, but it would be great if there were a cross-platform solution. It's Python after all :)
|
Use Winpdb. It is a platform-independent graphical GPL Python debugger with support for remote debugging over a network, multiple threads, namespace modification, embedded debugging and encrypted communication, and it is up to 20 times faster than pdb.
Features:
GPL license. Winpdb is Free Software.
Compatible with CPython 2.3 through 2.6 and Python 3000
Compatible with wxPython 2.6 through 2.8
Platform independent, and tested on Ubuntu Gutsy and Windows XP.
User Interfaces: rpdb2 is console based, while winpdb requires wxPython 2.6 or later.
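Note that to inspect an already-deadlocked process you generally need the debugger hooked in beforehand. A common pattern, assuming the rpdb2 module that ships with Winpdb, is to embed the debugger at startup and attach from the Winpdb GUI once things lock up:
import rpdb2

# Waits for the Winpdb GUI to attach before continuing; the password
# is a placeholder you choose yourself.
rpdb2.start_embedded_debugger('some_password')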
|
Generator Expressions vs. List Comprehension
|
When should you use generator expressions and when should you use list comprehensions in Python?
# Generator expression
(x*2 for x in range(256))
# List comprehension
[x*2 for x in range(256)]
|
John's answer is good (that list comprehensions are better when you want to iterate over something multiple times). However, it's also worth noting that you should use a list if you want to use any of the list methods. For example, the following code won't work:
def gen():
    return (something for something in get_some_stuff())

print gen()[:2]     # generators don't support indexing or slicing
print [5,6] + gen() # generators can't be added to lists
Basically, use a generator expression if all you're doing is iterating once. If you want to store and use the generated results, then you're probably better off with a list comprehension.
Since performance is the most common reason to choose one over the other, my advice is to not worry about it and just pick one; if you find that your program is running too slowly, then and only then should you go back and worry about tuning your code.
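To make the rule of thumb concrete, a small sketch: the generator expression feeds a single pass, while the list comprehension materializes results for reuse.
# Single pass: nothing is materialized, the values are consumed by sum().
total = sum(x*2 for x in range(256))

# Reuse: build a list so it can be sliced, indexed, or iterated again.
doubled = [x*2 for x in range(256)]
print doubled[:2]   # [0, 2]
print total         # 65280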
|
What are the advantages of packaging your python library/application as an .egg file?
|
I've read a bit about .egg files and I've noticed them in my lib directory, but what are the advantages/disadvantages of using them as a developer?
|
From the Python Enterprise Application Kit community:
"Eggs are to Pythons as Jars are to Java..."
Python eggs are a way of bundling additional information with a Python project, that allows the project's dependencies to be checked and satisfied at runtime, as well as allowing projects to provide plugins for other projects. There are several binary formats that embody eggs, but the most common is the '.egg' zipfile format, because it's a convenient one for distributing projects. All of the formats support including package-specific data, project-wide metadata, C extensions, and Python code.
The primary benefits of Python Eggs are:
They enable tools like the "Easy Install" Python package manager.
.egg files are a "zero installation" format for a Python package; no build or install step is required, just put them on PYTHONPATH or sys.path and use them (this may require the runtime to be installed if C extensions or data files are used).
They can include package metadata, such as the other eggs they depend on.
They allow "namespace packages" (packages that just contain other packages) to be split into separate distributions (e.g. zope.*, twisted.* and peak.* packages can be distributed as separate eggs, unlike normal packages, which must always be placed under the same parent directory). This allows what are now huge monolithic packages to be distributed as separate components.
They allow applications or libraries to specify the needed version of a library, so that you can e.g. require("Twisted-Internet>=2.0") before doing an import twisted.internet.
They're a great format for distributing extensions or plugins to extensible applications and frameworks (such as Trac, which uses eggs for plugins as of 0.9b1), because the egg runtime provides simple APIs to locate eggs and find their advertised entry points (similar to Eclipse's "extension point" concept).
There are also other benefits that may come from having a standardized format, similar to the benefits of Java's "jar" format.
|
Glade or no glade: What is the best way to use PyGtk?
|
I've been learning python for a while now with some success. I even managed to create one or two (simple) programs using PyGtk + Glade.
The thing is: I am not sure if the best way to use GTK with python is by building the interfaces using Glade.
I was wondering if the more experienced ones among us (remember, I'm just a beginner) could point out the benefits and caveats of using Glade as opposed to creating everything in the code itself (assuming that learning the correct gtk bindings wouldn't exactly be a problem).
|
I would say that it depends: if you find that with Glade you can build the apps you want or need to make, then that's absolutely fine. If, however, you actually want to learn how GTK works, or you have some non-standard UI requirements, you will have to dig into GTK internals (which are not that complicated).
Personally I'm usually about 5 minutes into a rich client when I need some feature or customization that is simply impossible through a designer such as Glade or Stetic. Perhaps it's just me. Nevertheless it is still useful for me to bootstrap window design using a graphical tool.
My recommendation: if making rich clients using GTK is going to be a significant part of your job/hobby then learn GTK as well since you will need to write that code someday.
P.S. I personally find Stetic to be superior to Glade for design work, if a little bit more unstable.
|
Project structure for Google App Engine
|
I started an application in Google App Engine right when it came out, to play with the technology and work on a pet project that I had been thinking about for a long time but never gotten around to starting. The result is BowlSK. However, as it has grown, and features have been added, it has gotten really difficult to keep things organized - mainly due to the fact that this is my first python project, and I didn't know anything about it until I started working.
What I have:
Main Level contains:
all .py files (didn't know how to make packages work)
all .html templates for main level pages
Subdirectories:
separate folders for css, images, js, etc.
folders that hold .html templates for subdirectory-type URLs
Example:
http://www.bowlsk.com/ maps to HomePage (default package), template at "index.html"
http://www.bowlsk.com/games/view-series.html?series=7130 maps to ViewSeriesPage (again, default package), template at "games/view-series.html"
It's nasty. How do I restructure? I had 2 ideas:
Main Folder containing: appdef, indexes, main.py?
Subfolder for code. Does this have to be my first package?
Subfolder for templates. Folder hierarchy would match package hierarchy
Individual subfolders for css, images, js, etc.
Main Folder containing appdef, indexes, main.py?
Subfolder for code + templates. This way I have the handler class right next to the template, because in this stage, I'm adding lots of features, so modifications to one mean modifications to the other. Again, do I have to have this folder name be the first package name for my classes? I'd like the folder to be "src", but I don't want my classes to be "src.WhateverPage"
Is there a best practice? With Django 1.0 on the horizon, is there something I can do now to improve my ability to integrate with it when it becomes the official GAE templating engine? I would simply start trying these things, and seeing which seems better, but pyDev's refactoring support doesn't seem to handle package moves very well, so it will likely be a non-trivial task to get all of this working again.
|
First, I would suggest you have a look at "Rapid Development with Python, Django, and Google App Engine"
GvR describes a general/standard project layout on page 10 of his slide presentation.
Here I'll post a slightly modified version of the layout/structure from that page. I pretty much follow this pattern myself. You also mentioned you had trouble with packages. Just make sure each of your subfolders has an __init__.py file. It's OK if it's empty.
Boilerplate files
These hardly vary between projects
app.yaml: direct all non-static requests to main.py
main.py: initialize app and send it all requests
Project lay-out
static/*: static files; served directly by App Engine
myapp/*.py: app-specific python code
views.py, models.py, tests.py, __init__.py, and more
templates/*.html: templates (or myapp/templates/*.html)
Here are some code examples that may help as well:
main.py
import wsgiref.handlers

from google.appengine.ext import webapp
from myapp.views import *

application = webapp.WSGIApplication([
    ('/', IndexHandler),
    ('/foo', FooHandler)
], debug=True)

def main():
    wsgiref.handlers.CGIHandler().run(application)

if __name__ == '__main__':
    main()
myapp/views.py
import os
import datetime
import logging
import time

from google.appengine.api import urlfetch
from google.appengine.ext.webapp import template
from google.appengine.api import users
from google.appengine.ext import webapp

from models import *

class IndexHandler(webapp.RequestHandler):
    def get(self):
        data = "foo"
        # Do some processing
        template_values = {'data': data}
        path = os.path.join(os.path.dirname(__file__) + '/../templates/', 'main.html')
        self.response.out.write(template.render(path, template_values))

class FooHandler(webapp.RequestHandler):
    def get(self):
        #logging.debug("start of handler")
        pass
myapp/models.py
from google.appengine.ext import db

class SampleModel(db.Model):
    pass  # model properties go here
I think this layout works great for new and relatively small to medium projects. For larger projects I would suggest breaking up the views and models to have their own sub-folders with something like:
Project lay-out
static/: static files; served directly by App Engine
js/*.js
images/*.gif|png|jpg
css/*.css
myapp/: app structure
models/*.py
views/*.py
tests/*.py
templates/*.html: templates
|
Calling python from a c++ program for distribution
|
I would like to call python script files from my c++ program.
I am not sure that the people I will distribute to will have python installed.
Basically I'm looking for a .lib file that I can use that has an Apache like distribution license.
|
I would like to call python script files from my c++ program.
This means that you want to embed Python in your C++ application. As mentioned in Embedding Python in Another Application:
Embedding Python is similar to extending it, but not quite. The difference is that when you extend Python, the main program of the application is still the Python interpreter, while if you embed Python, the main program may have nothing to do with Python; instead, some parts of the application occasionally call the Python interpreter to run some Python code.
I suggest that you first go through Embedding Python in Another Application. Then refer to the following examples:
Embedding Python in C/C++: Part I
Embedding Python in C/C++: Part II
Embedding Python in Multi-Threaded C/C++ Applications
If you like Boost.Python, you may visit the following links:
Embedding Python with Boost.Python Part 1
|
How do I turn a python program into an .egg file?
|
How do I turn a python program into an .egg file?
|
Setuptools is the software that creates .egg files. It's an extension of the distutils package in the standard library.
The process involves creating a setup.py file, then python setup.py bdist_egg creates an .egg package.
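A minimal sketch of such a setup.py (name and version are placeholders):
from setuptools import setup, find_packages

setup(
    name='myprogram',        # hypothetical project name
    version='0.1',
    packages=find_packages(),
)
Running python setup.py bdist_egg then drops the .egg into the dist/ subdirectory.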
|
Java -> Python?
|
Besides the dynamic nature of Python (and the syntax), what are some of the major features of the Python language that Java doesn't have, and vice versa?
|
List comprehensions. I often find myself filtering/mapping lists, and being able to say [line.replace("spam","eggs") for line in open("somefile.txt") if line.startswith("nee")] is really nice.
Functions are first-class objects. They can be passed as parameters to other functions, defined inside other functions, and have lexical scope. This makes it really easy to say things like people.sort(key=lambda p: p.age) and thus sort a bunch of people by their age without having to define a custom comparator class or something equally verbose.
Everything is an object. Java has basic types which aren't objects, which is why many classes in the standard library define 9 different versions of functions (for boolean, byte, char, double, float, int, long, Object, short). Array.sort is a good example. Autoboxing helps, although it makes things awkward when something turns out to be null.
Properties. Python lets you create classes with read-only fields, lazily-generated fields, as well as fields which are checked upon assignment to make sure they're never 0 or null or whatever you want to guard against (see the sketch at the end of this answer).
Default and keyword arguments. In Java if you want a constructor that can take up to 5 optional arguments, you must define 6 different versions of that constructor. And there's no way at all to say Student(name="Eli", age=25)
In Java, functions can only return one value. In Python you have tuple assignment, so you can say spam, eggs = nee(), but in Java you'd need to either resort to mutable out parameters or have a custom class with 2 fields and then two additional lines of code to extract those fields.
Built-in syntax for lists and dictionaries.
Operator Overloading.
Generally better designed libraries. For example, to parse an XML document in Java, you say
Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().parse("test.xml");
and in Python you say
doc = parse("test.xml")
Anyway, I could go on and on with further examples, but Python is just overall a much more flexible and expressive language. It's also dynamically typed, which I really like, but which comes with some disadvantages.
Java has much better performance than Python and has way better tool support. Sometimes those things matter a lot and Java is the better language for a task; I continue to use Java for some new projects despite liking Python a lot more. But as a language I think Python is superior for most things I find myself needing to accomplish.
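To make the properties and keyword-argument points concrete, here is a small sketch (Student and its fields are invented for illustration):
class Student(object):
    def __init__(self, name="Unknown", age=0):
        # One constructor covers every combination of optional arguments.
        self.name = name
        self._age = age

    @property
    def age(self):
        # Read-only field: accessed as s.age, no getter call needed.
        return self._age

s = Student(name="Eli", age=25)
print s.name, s.age   # Eli 25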
|
Open source alternative to MATLAB's fmincon function?
|
Is there an open-source alternative to MATLAB's fmincon function for constrained linear optimization? I'm rewriting a MATLAB program to use Python / NumPy / SciPy and this is the only function I haven't found an equivalent to. A NumPy-based solution would be ideal, but any language will do.
|
Is your problem convex? Linear? Non-linear? I agree that SciPy.optimize will probably do the job, but fmincon is a sort of bazooka for solving optimization problems, and you'll be better off if you can confine your problem to one of the categories below (in increasing level of difficulty to solve efficiently):
Linear Program (LP)
Quadratic Program (QP)
Convex Quadratically-Constrained Quadratic Program (QCQP)
Second Order Cone Program (SOCP)
Semidefinite Program (SDP)
Non-Linear Convex Problem
Non-Convex Problem
There are also combinatoric problems such as Mixed-Integer Linear Programs (MILP), but you didn't mention any sort of integrality constraints, so suffice it to say that they fall into a different class of problems.
The CVXOpt package will be of great use to you if your problem is convex.
If your problem is not convex, you need to choose between finding a local solution or the global one. Many convex solvers 'sort of' work in a non-convex domain. Finding a good approximation to the global solution would require some form of simulated annealing or a genetic algorithm. Finding the global solution will require an enumeration of all local solutions or a combinatorial strategy such as branch and bound.
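If the problem does fit scipy.optimize, a hedged sketch using the SLSQP solver (available in recent SciPy versions) as an fmincon-like stand-in might look like this; the objective and constraint are invented for illustration:
import numpy as np
from scipy.optimize import fmin_slsqp

def objective(x):
    # Minimize x^2 + y^2 ...
    return x[0]**2 + x[1]**2

def ieqcons(x):
    # ... subject to x + y >= 1 (each entry must be >= 0 at the solution).
    return np.array([x[0] + x[1] - 1.0])

x0 = np.array([1.0, 1.0])
xopt = fmin_slsqp(objective, x0, f_ieqcons=ieqcons, bounds=[(0, 10), (0, 10)])
print xopt   # roughly [0.5, 0.5]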
|
How do I get the path and name of the file that is currently executing?
|
I have scripts calling other script files but I need to get the filepath of the file that is currently running within the process.
For example, let's say I have three files. Using execfile:
script_1.py calls script_2.py.
In turn, script_2.py calls script_3.py.
How can I get the file name and path of script_3.py, from code within script_3.py, without having to pass that information as arguments from script_2.py?
(Executing os.getcwd() returns the original starting script's filepath not the current file's.)
|
__file__
as others have said. You may want to use:
os.path.realpath(__file__)
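If what you actually want is the directory of the running file (e.g. to locate resources next to it), combine it with os.path.dirname():
import os

this_file = os.path.realpath(__file__)   # absolute path of the current file
this_dir = os.path.dirname(this_file)    # directory containing it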
|
Django Sessions
|
I'm looking at sessions in Django, and by default they are stored in the database. What are the benefits of filesystem and cache sessions and when should I use them?
|
The filesystem backend is only worth looking at if you're not going to use a database for any other part of your system. If you are using a database then the filesystem backend has nothing to recommend it.
The memcache backend is much quicker than the database backend, but you run the risk of a session being purged and some of your session data being lost.
If you run a really, really high-traffic website and code carefully so you can cope with losing a session, then use memcache. If you're not using a database, use the filesystem cache; but the default database backend is the best, safest and simplest option in almost all cases.
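For reference, in recent Django versions the backend is selected with the SESSION_ENGINE setting; a sketch of the options discussed above:
# settings.py
SESSION_ENGINE = 'django.contrib.sessions.backends.db'      # default: database
# SESSION_ENGINE = 'django.contrib.sessions.backends.cache' # memcache-backed
# SESSION_ENGINE = 'django.contrib.sessions.backends.file'  # filesystem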
|
How to write a download progress indicator in Python?
|
I am writing a little application to download files over http (as, for example, described here).
I also want to include a little download progress indicator showing the percentage of the download progress.
Here is what I came up with:
def dlProgress(count, blockSize, totalSize):
    percent = int(count*blockSize*100/totalSize)
    sys.stdout.write("%2d%%" % percent)
    sys.stdout.write("\b\b\b")
    sys.stdout.flush()

sys.stdout.write(rem_file + "...")
urllib.urlretrieve(rem_file, loc_file, reporthook=dlProgress)
Output: MyFileName... 9%
Any other ideas or recommendations to do this?
One thing that's somewhat annoying is the blinking cursor in the terminal on the first digit of the percentage. Is there a way to prevent this? Is there a way to hide the cursor?
EDIT:
Here's a better alternative, using a global variable for the filename in dlProgress and the '\r' code:
global rem_file # global variable to be used in dlProgress

def dlProgress(count, blockSize, totalSize):
    percent = int(count*blockSize*100/totalSize)
    sys.stdout.write("\r" + rem_file + "...%d%%" % percent)
    sys.stdout.flush()

urllib.urlretrieve(rem_file, loc_file, reporthook=dlProgress)
Output: MyFileName...9%
And the cursor shows up at the END of the line. Much better.
|
There's a text progress bar library for python at http://pypi.python.org/pypi/progressbar/2.2 that you might find useful:
This library provides a text mode progress bar. This is typically used to display the progress of a long-running operation, providing a visual clue that processing is underway.
The ProgressBar class manages the progress, and the format of the line is given by a number of widgets. A widget is an object that may display differently depending on the state of the progress. There are three types of widget: a string, which always shows itself; a ProgressBarWidget, which may return a different value every time its update method is called; and a ProgressBarWidgetHFill, which is like ProgressBarWidget, except it expands to fill the remaining width of the line.
The progressbar module is very easy to use, yet very powerful. It automatically supports features like auto-resizing when available.
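A hedged sketch of wiring it into the reporthook from the question (assuming the progressbar package's ProgressBar, Percentage and Bar widgets; rem_file and loc_file as in the question):
import urllib
from progressbar import ProgressBar, Percentage, Bar

pbar = [None]  # mutable holder so the hook can create the bar lazily

def dlProgress(count, blockSize, totalSize):
    if pbar[0] is None:
        pbar[0] = ProgressBar(widgets=[Percentage(), Bar()],
                              maxval=totalSize).start()
    pbar[0].update(min(count * blockSize, totalSize))

urllib.urlretrieve(rem_file, loc_file, reporthook=dlProgress)
pbar[0].finish()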
|
How can I retrieve the page title of a webpage using Python?
|
How can I retrieve the page title of a webpage (title html tag) using Python?
|
Here's a simplified version of @Vinko Vrsalovic's answer:
import urllib2
from BeautifulSoup import BeautifulSoup
soup = BeautifulSoup(urllib2.urlopen("https://www.google.com"))
print soup.title.string
NOTE:
soup.title finds the first title element anywhere in the html document
title.string assumes it has only one child node, and that child node is a string
For BeautifulSoup 4.x, use a different import:
from bs4 import BeautifulSoup
|
Passing on named variable arguments in python
|
Say I have the following methods:
def methodA(arg, **kwargs):
    pass

def methodB(arg, *args, **kwargs):
    pass
In methodA I wish to call methodB, passing on the kwargs. However, it seems that if I define methodA as follows, the second argument will be passed on as positional rather than named variable arguments.
def methodA(arg, **kwargs):
    methodB("argvalue", kwargs)
How do I make sure that the **kwargs in methodA gets passed as **kwargs to methodB?
|
Put the asterisks before the kwargs variable. This makes Python pass the variable (which is assumed to be a dictionary) as keyword arguments.
methodB("argvalue", **kwargs)
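A quick demonstration of the difference:
def methodB(arg, *args, **kwargs):
    print args, kwargs

methodB("argvalue", {"a": 1})    # -> ({'a': 1},) {}  (dict passed positionally)
methodB("argvalue", **{"a": 1})  # -> () {'a': 1}     (unpacked as keywords)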
|
How to get an absolute file path in Python
|
Given a path such as "mydir/myfile.txt", how do I find the absolute filepath relative to the current working directory in Python? E.g. on Windows, I might end up with:
"C:/example/cwd/mydir/myfile.txt"
|
>>> import os
>>> os.path.abspath("mydir/myfile.txt")
|
Why are SQL aggregate functions so much slower than Python and Java (or Poor Man's OLAP)
|
I need a real DBA's opinion. Postgres 8.3 takes 200 ms to execute this query on my Macbook Pro while Java and Python perform the same calculation in under 20 ms (350,000 rows):
SELECT count(id), avg(a), avg(b), avg(c), avg(d) FROM tuples;
Is this normal behaviour when using a SQL database?
The schema (the table holds responses to a survey):
CREATE TABLE tuples (id integer primary key, a integer, b integer, c integer, d integer);
\copy tuples from '350,000 responses.csv' delimiter as ','
I wrote some tests in Java and Python for context and they crush SQL (except for pure python):
java 1.5 threads ~ 7 ms
java 1.5 ~ 10 ms
python 2.5 numpy ~ 18 ms
python 2.5 ~ 370 ms
Even sqlite3 is competitive with Postgres despite it assuming all columns are strings (for contrast: even just switching to numeric columns instead of integers in Postgres results in a 10x slowdown).
Tunings I've tried without success include (blindly following some web advice):
increased the shared memory available to Postgres to 256MB
increased the working memory to 2MB
disabled connection and statement logging
used a stored procedure via CREATE FUNCTION ... LANGUAGE SQL
So my question is, is my experience here normal, and this is what I can expect when using a SQL database? I can understand that ACID must come with costs, but this is kind of crazy in my opinion. I'm not asking for realtime game speed, but since Java can process millions of doubles in under 20 ms, I feel a bit jealous.
Is there a better way to do simple OLAP on the cheap (both in terms of money and server complexity)? I've looked into Mondrian and Pig + Hadoop but not super excited about maintaining yet another server application and not sure if they would even help.
No the Python code and Java code do all the work in house so to speak. I just generate 4 arrays with 350,000 random values each, then take the average. I don't include the generation in the timings, only the averaging step. The java threads timing uses 4 threads (one per array average), overkill but it's definitely the fastest.
The sqlite3 timing is driven by the Python program and is running from disk (not :memory:)
I realize Postgres is doing much more behind the scenes, but most of that work doesn't matter to me since this is read only data.
The Postgres query doesn't change timing on subsequent runs.
I've rerun the Python tests to include spooling it off the disk. The timing slows down considerably to nearly 4 secs. But I'm guessing that Python's file handling code is pretty much in C (though maybe not the csv lib?) so this indicates to me that Postgres isn't streaming from the disk either (or that you are correct and I should bow down before whoever wrote their storage layer!)
|
I would say your test scheme is not really useful. To fulfill the db query, the db server goes through several steps:
parse the SQL
work up a query plan, i.e., decide on which indices to use (if any), optimize, etc.
if an index is used, search it for the pointers to the actual data, then go to the appropriate location in the data or
if no index is used, scan the whole table to determine which rows are needed
load the data from disk into a temporary location (hopefully, but not necessarily, memory)
perform the count() and avg() calculations
So, creating an array in Python and getting the average basically skips all these steps save the last one. As disk I/O is among the most expensive operations a program has to perform, this is a major flaw in the test (see also the answers to this question I asked here before). Even if you read the data from disk in your other test, the process is completely different and it's hard to tell how relevant the results are.
To obtain more information about where Postgres spends its time, I would suggest the following tests:
Compare the execution time of your query to a SELECT without the aggregating functions (i.e., cut step 5); see the timing sketch at the end of this answer.
If you find that the aggregation leads to a significant slowdown, see whether Python does it faster, obtaining the raw data through the plain SELECT from the comparison.
To speed up your query, reduce disk access first. I doubt very much that it's the aggregation that takes the time.
There are several ways to do that:
Cache data (in memory!) for subsequent access, either via the db engine's own capabilities or with tools like memcached
Reduce the size of your stored data
Optimize the use of indices. Sometimes this can mean skipping index use altogether (after all, it's disk access, too). For MySQL, I seem to remember that it's recommended to skip indices if you assume that the query fetches more than 10% of all the data in the table.
If your query makes good use of indices, I know that for MySQL databases it helps to put indices and data on separate physical disks. However, I don't know whether that's applicable for Postgres.
There also might be more sophisticated problems such as swapping rows to disk if for some reason the result set can't be completely processed in memory. But I would leave that kind of research until I run into serious performance problems that I can't find another way to fix, as it requires knowledge about a lot of little under-the-hood details in your process.
Update:
I just realized that you seem to have no use for indices for the above query and most likely aren't using any either, so my advice on indices probably wasn't helpful. Sorry. Still, I'd say that the aggregation is not the problem, but disk access is. I'll leave the index stuff in anyway; it might still have some use.
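A minimal timing sketch for the comparison suggested above, assuming the psycopg2 driver and the tuples table from the question (the connection string is a placeholder):
import time
import psycopg2

conn = psycopg2.connect("dbname=mydb")   # hypothetical DSN
cur = conn.cursor()

for sql in ("SELECT count(id), avg(a), avg(b), avg(c), avg(d) FROM tuples",
            "SELECT id, a, b, c, d FROM tuples"):
    start = time.time()
    cur.execute(sql)
    cur.fetchall()
    print "%7.1f ms  %s" % ((time.time() - start) * 1000, sql)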
|
Cross-platform space remaining on volume using python
|
I need a way to determine the space remaining on a disk volume using python on linux, Windows and OS X. I'm currently parsing the output of the various system calls (df, dir) to accomplish this - is there a better way?
|
import ctypes
import os
import platform
import sys
def get_free_space_mb(dirname):
    """Return folder/drive free space (in megabytes)."""
    if platform.system() == 'Windows':
        free_bytes = ctypes.c_ulonglong(0)
        ctypes.windll.kernel32.GetDiskFreeSpaceExW(ctypes.c_wchar_p(dirname), None, None, ctypes.pointer(free_bytes))
        return free_bytes.value / 1024 / 1024
    else:
        st = os.statvfs(dirname)
        return st.f_bavail * st.f_frsize / 1024 / 1024
Note that you must pass a directory name for GetDiskFreeSpaceEx() to work
(statvfs() works on both files and directories). You can get a directory name
from a file with os.path.dirname().
Also see the documentation for os.statvfs() and GetDiskFreeSpaceEx.
|
How can I do a line break (line continuation) in Python?
|
I have a long line of code that I want to break up among multiple lines. What do I use and what is the syntax?
For example, adding a bunch of strings,
e = 'a' + 'b' + 'c' + 'd'
and have it like this:
e = 'a' + 'b' +
'c' + 'd'
|
What is the line? You can just have arguments on the next line without any problems:
a = dostuff(blahblah1, blahblah2, blahblah3, blahblah4, blahblah5,
blahblah6, blahblah7)
Otherwise you can do something like this:
if a == True and \
   b == False:
Check the style guide for more information.
From your example line:
a = '1' + '2' + '3' + \
'4' + '5'
Or:
a = ('1' + '2' + '3' +
'4' + '5')
Note that the style guide says that using the implicit continuation with parentheses is preferred, but in this particular case just adding parentheses around your expression is probably the wrong way to go.
|
How do you check whether a python method is bound or not?
|
Given a reference to a method, is there a way to check whether the method is bound to an object or not? Can you also access the instance that it's bound to?
|
def isbound(method):
    return method.im_self is not None

def instance(bounded_method):
    return bounded_method.im_self
User-defined methods:
When a user-defined method object is created by retrieving a user-defined function object from a class, its im_self attribute is None and the method object is said to be unbound. When one is created by retrieving a user-defined function object from a class via one of its instances, its im_self attribute is the instance, and the method object is said to be bound. In either case, the new method's im_class attribute is the class from which the retrieval takes place, and its im_func attribute is the original function object.
In Python 2.6 and 3.0:
Instance method objects have new attributes for the object and function comprising the method; the new synonym for im_self is __self__, and im_func is also available as __func__. The old names are still supported in Python 2.6, but are gone in 3.0.
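A quick demonstration with the helpers above (Python 2 semantics):
class A(object):
    def f(self):
        pass

a = A()
print isbound(a.f)         # True  -- im_self is the instance a
print isbound(A.f)         # False -- unbound method, im_self is None
print instance(a.f) is a   # True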
|
Best way to check if a list is empty
|
For example, if passed the following:
a = []
How do I check to see if a is empty?
|
if not a:
    print("List is empty")
Using the implicit booleanness of the empty list a is quite pythonic.
|
What are some strategies to write python code that works in CPython, Jython and IronPython
|
Having tried to target two of these environments at the same time, I can safely say that if you have to use a database etc., you end up having to write unique code for that environment. Have you got a great way to handle this situation?
|
If you do find you need to write unique code for an environment, use Python's aliased imports:
import mymodule_jython as mymodule

import mymodule_cpython as mymodule
Have this stuff in a simple module ("module_importer"?) and write your code like this:
from module_importer import mymodule
This way, all you need to do is alter module_importer.py per platform; see the sketch below.
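A sketch of module_importer.py that picks the implementation automatically; this assumes Jython's sys.platform starts with 'java', so verify on your target interpreters:
# module_importer.py
import sys

if sys.platform.startswith('java'):
    import mymodule_jython as mymodule
else:
    import mymodule_cpython as mymodule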
|
Any good AJAX framework for Google App Engine apps?
|
I am trying to implement AJAX in my Google App Engine application, and so I am looking for a good AJAX framework that will help me. Does anyone have any ideas?
I am thinking about Google Web Toolkit; how good is it in terms of creating AJAX for Google App Engine?
|
As Google Web Toolkit is a subset of Java, it works best when you use Java at the backend too. Since Google App Engine is currently Python only, I think you'd have to do a lot of messing about to get your server and client to talk nicely to each other.
jQuery seems to be the most popular JavaScript library option in the AJAX Tag at DjangoSnippets.com.
Edit: The above is only true of Google App Engine applications written in Python. As Google App Engine now supports Java, GWT could now be a good choice for writing an AJAX front end. Google even have a tutorial showing you how to do it.
|
What is the difference between old style and new style classes in Python?
|
What is the difference between old style and new style classes in Python? Is there ever a reason to use old-style classes these days?
|
From http://docs.python.org/2/reference/datamodel.html#new-style-and-classic-classes :
Up to Python 2.1, old-style classes were the only flavour available to the user. The concept of (old-style) class is unrelated to the concept of type: if x is an instance of an old-style class, then x.__class__ designates the class of x, but type(x) is always <type 'instance'>. This reflects the fact that all old-style instances, independently of their class, are implemented with a single built-in type, called instance.
New-style classes were introduced in Python 2.2 to unify the concepts of class and type. A new-style class is simply a user-defined type, no more, no less. If x is an instance of a new-style class, then type(x) is typically the same as x.__class__ (although this is not guaranteed; a new-style class instance is permitted to override the value returned for x.__class__).
The major motivation for introducing new-style classes is to provide a unified object model with a full meta-model. It also has a number of immediate benefits, like the ability to subclass most built-in types, or the introduction of "descriptors", which enable computed properties.
For compatibility reasons, classes are still old-style by default. New-style classes are created by specifying another new-style class (i.e. a type) as a parent class, or the "top-level type" object if no other parent is needed. The behaviour of new-style classes differs from that of old-style classes in a number of important details in addition to what type returns. Some of these changes are fundamental to the new object model, like the way special methods are invoked. Others are "fixes" that could not be implemented before for compatibility concerns, like the method resolution order in case of multiple inheritance.
Python 3 only has new-style classes. No matter if you subclass from object or not, classes are new-style in Python 3. It is however recommended that you still subclass from object.
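A quick illustration of the type() difference in Python 2:
class OldStyle:            # classic class
    pass

class NewStyle(object):    # new-style class
    pass

print type(OldStyle())     # <type 'instance'>
print type(NewStyle())     # <class '__main__.NewStyle'>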
|
In Python, how can you easily retrieve sorted items from a dictionary?
|
Dictionaries, unlike lists, are not ordered (and do not have a 'sort' method). Therefore, you cannot rely on getting the items in the same order in which they were first added.
What is the easiest way to loop through a dictionary containing strings as the key value and retrieving them in ascending order by key?
For example, you had this:
d = {'b' : 'this is b', 'a': 'this is a' , 'c' : 'this is c'}
I want to print the associated values in the following sequence sorted by key:
this is a
this is b
this is c
|
Do you mean that you need the values sorted by the value of the key?
In that case, this should do it:
for key in sorted(d):
    print d[key]
EDIT: changed to use sorted(d) instead of sorted(d.keys()), thanks Eli!
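If you want keys and values together, sorted() works on the item pairs too:
for key, value in sorted(d.items()):
    print value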
|
XML writing tools for Python
|
I'm currently trying ElementTree and it looks fine, it escapes HTML entities and so on and so forth. Am I missing something truly wonderful I haven't heard of?
This is similar to what I'm actually doing:
import xml.etree.ElementTree as ET
root = ET.Element('html')
head = ET.SubElement(root,'head')
script = ET.SubElement(head,'script')
script.set('type','text/javascript')
script.text = "var a = 'I love á letters'"
body = ET.SubElement(root,'body')
h1 = ET.SubElement(body,'h1')
h1.text = "And I like the fact that 3 > 1"
tree = ET.ElementTree(root)
tree.write('foo.xhtml')
# more foo.xhtml
<html><head><script type="text/javascript">var a = 'I love &#225;
letters'</script></head><body><h1>And I like the fact that 3 &gt; 1</h1>
</body></html>
|
Another way is using the E Factory builder from lxml (available in Elementtree too)
>>> from lxml import etree
>>> from lxml.builder import E
>>> def CLASS(*args): # class is a reserved word in Python
... return {"class":' '.join(args)}
>>> html = page = (
... E.html( # create an Element called "html"
... E.head(
... E.title("This is a sample document")
... ),
... E.body(
... E.h1("Hello!", CLASS("title")),
... E.p("This is a paragraph with ", E.b("bold"), " text in it!"),
... E.p("This is another paragraph, with a", "\n ",
... E.a("link", href="http://www.python.org"), "."),
... E.p("Here are some reserved characters: <spam&egg>."),
... etree.XML("<p>And finally an embedded XHTML fragment.</p>"),
... )
... )
... )
>>> print(etree.tostring(page, pretty_print=True))
<html>
  <head>
    <title>This is a sample document</title>
  </head>
  <body>
    <h1 class="title">Hello!</h1>
    <p>This is a paragraph with <b>bold</b> text in it!</p>
    <p>This is another paragraph, with a
      <a href="http://www.python.org">link</a>.</p>
    <p>Here are some reserved characters: &lt;spam&amp;egg&gt;.</p>
    <p>And finally an embedded XHTML fragment.</p>
  </body>
</html>
|
Anyone used Dabo for a medium-big project?
|
We're at the beginning of a new ERP-ish client-server application, developed as a Python rich client. We're currently evaluating Dabo as our main framework and it looks quite nice and easy to use, but I was wondering, has anyone used it for medium-to-big sized projects?
Thanks for your time!
|
I'm one of the authors of the Dabo framework. One of our users pointed out to me the extremely negative answer you received, and so I thought I had better chime in and clear up some of the incorrect assumptions in the first reply.
Dabo is indeed well-known in the Python community. I have presented it at 3 of the last 4 US PyCons, and we have several hundred users who subscribe to our email lists. Our website (http://dabodev.com) has not had any service interruptions; I don't know why the first responder claimed to have trouble. Support is through our email lists, and we pride ourselves on helping people quickly and efficiently. Many of the newbie questions help us to identify places where our docs are lacking, so we strongly encourage newcomers to ask questions!
Dabo has been around for 4 years. The fact that it is still a few days away from a 0.9 release is more of a reflection of the rather conservative version numbering of my partner, Paul McNett, than any instabilities in the framework. I know of Dabo apps that have been in production since 2006; I have used it for my own projects since 2004. Whatever importance you attach to release numbers, we are at revision 4522, with consistent work being done to add more and more stuff to the framework; refactor and streamline some of the older code, and yes, clean up some bugs.
Please sign up for our free email support list:
http://leafe.com/mailman/listinfo/dabo-users
...and ask any questions you may have about Dabo there. Not many people have discovered Stack Overflow yet, so I wouldn't expect very informed answers here yet. There are several regular contributors there who use Dabo on a daily basis, and are usually more than happy to offer their opinions and their help.
|
round() in Python doesn't seem to be rounding properly
|
The documentation for the round() function states that you pass it a number, and the positions past the decimal to round. Thus it should do this:
n = 5.59
round(n, 1) # 5.6
But, in actuality, good old floating point weirdness creeps in and you get:
5.5999999999999996
For the purposes of UI, I need to display 5.6. I poked around the Internet and found some documentation that this is dependent on my implementation of Python. Unfortunately, this occurs on both my Windows dev machine and each Linux server I've tried. See here also.
Short of creating my own round library, is there any way around this?
|
Formatting works correctly even without having to round:
"%.1f" % n
|
Convert XML/HTML Entities into Unicode String in Python
|
I'm doing some web scraping, and sites frequently use HTML entities to represent non-ASCII characters. Does Python have a utility that takes a string with HTML entities and returns a unicode type?
For example:
I get back:
&#x01ce;
which represents an "ǎ" with a tone mark. In binary, this is represented as the 16 bit 01ce. I want to convert the HTML entity into the value u'\u01ce'.
|
Python has the htmlentitydefs module, but this doesn't include a function to unescape HTML entities.
Python developer Fredrik Lundh (author of elementtree, among other things) has such a function on his website, which works with decimal, hex and named entities:
import re, htmlentitydefs

##
# Removes HTML or XML character references and entities from a text string.
#
# @param text The HTML (or XML) source text.
# @return The plain text, as a Unicode string, if necessary.

def unescape(text):
    def fixup(m):
        text = m.group(0)
        if text[:2] == "&#":
            # character reference
            try:
                if text[:3] == "&#x":
                    return unichr(int(text[3:-1], 16))
                else:
                    return unichr(int(text[2:-1]))
            except ValueError:
                pass
        else:
            # named entity
            try:
                text = unichr(htmlentitydefs.name2codepoint[text[1:-1]])
            except KeyError:
                pass
        return text # leave as is
    return re.sub("&#?\w+;", fixup, text)
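Applied to the example from the question:
print repr(unescape("&#x01ce;"))   # u'\u01ce'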
|
Does re.compile() or any given Python library call throw an exception?
|
I can't tell from the Python documentation whether the re.compile(x) function may throw an exception (assuming you pass in a string). I imagine there is something that could be considered an invalid regular expression. The larger question is, where do I go to find if a given Python library call may throw exception(s) and what those are?
|
Well, re.compile certainly may:
>>> import re
>>> re.compile('he(lo')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python25\lib\re.py", line 180, in compile
return _compile(pattern, flags)
File "C:\Python25\lib\re.py", line 233, in _compile
raise error, v # invalid expression
sre_constants.error: unbalanced parenthesis
The documentation does support this, in a roundabout way - check the bottom of the "Module Contents" page for (brief) description of the error exception.
Unfortunately, I don't have any answer to the general question. I suppose the documentation for the various modules varies in quality and thoroughness. If there were particular modules you were interested in, you might be able to decompile them (if written in Python) or even look at the source, if they're in the standard library.
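For the concrete case of re.compile, the exception is exposed as re.error, so it can be caught directly:
import re

try:
    re.compile('he(lo')
except re.error, exc:   # sre_constants.error is aliased as re.error
    print "invalid pattern:", exc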
|
Can I document Python code with doxygen (and does it make sense)?
|
I like doxygen to create documentation of C or PHP code. I have an upcoming Python project and I think I remember that Python doesn't have /* .. */ comments and also has its own self-documentation facility which seems to be the pythonic way to document.
Can I just use doxygen? Anything particular to be aware of?
I have done some coding in Python, but so far only on small projects where I was too lazy to document at all (yeah, I know... but let's just pretend that's OK for now).
|
The doxypy input filter allows you to use pretty much all of Doxygen's formatting tags in a standard Python docstring format. I use it to document a large mixed C++ and Python game application framework, and it's working well.
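As a hedged illustration, a doxypy-filtered module keeps Doxygen tags inside an ordinary docstring (the function is invented for illustration):
def add(a, b):
    """Adds two numbers.

    @param a: the first operand
    @param b: the second operand
    @return the sum of a and b
    """
    return a + b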
|
How would you design a very "Pythonic" UI framework?
|
I have been playing with the Ruby library "shoes". Basically you can write a GUI application in the following way:
Shoes.app do
  t = para "Not clicked!"
  button "The Label" do
    alert "You clicked the button!" # when clicked, make an alert
    t.replace "Clicked!"            # ..and replace the label's text
  end
end
This made me think: how would I design a similarly nice-to-use GUI framework in Python? One that doesn't have the usual ties of basically being a wrapper to a C* library (as in the case of GTK, Tk, wx, Qt, etc.)
Shoes takes things from web development (like #f0c2f0 style colour notation, CSS layout techniques, like :margin => 10), and from Ruby (extensively using blocks in sensible ways).
Python's lack of "rubyish blocks" makes a (metaphorically)-direct port impossible:
def Shoeless(Shoes.app):
    self.t = para("Not clicked!")

    def on_click_func(self):
        alert("You clicked the button!")
        self.t.replace("clicked!")

    b = button("The label", click=self.on_click_func)
Nowhere near as clean, and it wouldn't be nearly as flexible; I'm not even sure if it would be implementable.
Using decorators seems like an interesting way to map blocks of code to a specific action:
class BaseControl:
    def __init__(self):
        self.func = None

    def clicked(self, func):
        self.func = func

    def __call__(self):
        if self.func is not None:
            self.func()

class Button(BaseControl):
    pass

class Label(BaseControl):
    pass

# The actual application's code (that the end-user would write)
class MyApp:
    ok = Button()
    la = Label()

    @ok.clicked
    def clickeryHappened():
        print "OK Clicked!"

if __name__ == '__main__':
    a = MyApp()
    a.ok() # trigger the clicked action
Basically the decorator function stores the function, then when the action occurred (say, a click) the appropriate function would be executed.
The scope of various stuff (say, the la label in the above example) could be rather complicated, but it seems doable in a fairly neat manner..
|
You could actually pull this off, but it would require using metaclasses, which are deep magic (there be dragons). If you want an intro to metaclasses, there's a series of articles from IBM which manage to introduce the ideas without melting your brain.
The source code from an ORM like SQLObject might help, too, since it uses this same kind of declarative syntax.
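For a taste of what that looks like, here is a minimal sketch (Python 2 metaclass syntax; BaseControl and Button are cut-down stand-ins for the question's classes):
class BaseControl(object):
    pass

class Button(BaseControl):
    pass

class ControlMeta(type):
    """Collects every BaseControl declared on the class body."""
    def __new__(mcs, name, bases, namespace):
        cls = type.__new__(mcs, name, bases, namespace)
        cls._controls = dict((k, v) for k, v in namespace.items()
                             if isinstance(v, BaseControl))
        return cls

class MyApp(object):
    __metaclass__ = ControlMeta
    ok = Button()

print MyApp._controls   # {'ok': <__main__.Button object at 0x...>}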
|
How to retrieve an element from a set without removing it?
|
Suppose the following:
>>> s = set([1, 2, 3])
How do I get a value (any value) out of s without doing s.pop()? I want to leave the item in the set until I am sure I can remove it - something I can only be sure of after an asynchronous call to another host.
Quick and dirty:
>>> elem = s.pop()
>>> s.add(elem)
But do you know of a better way? Ideally in constant time.
|
Two options that don't require copying the whole set:
for e in s:
    break
# e is now an element from s
Or...
e = next(iter(s))
But in general, sets don't support indexing or slicing.
|
Replacements for switch statement in Python?
|
I want to write a function in Python that returns different fixed values based on the value of an input index.
In other languages I would use a switch or case statement, but Python does not appear to have a switch statement. What are the recommended Python solutions in this scenario?
|
If you'd like defaults you could use the dictionary get(key[, default]) method:
def f(x):
    return {
        'a': 1,
        'b': 2,
    }.get(x, 9) # 9 is default if x not found
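The same dictionary idiom extends from fixed values to behaviour: store callables so that only the selected branch runs. A small sketch:
def on_a():
    return 1

def on_b():
    return 2

def dispatch(x):
    # .get returns the matching function (or the default), then we call it.
    return {'a': on_a, 'b': on_b}.get(x, lambda: 9)()

print dispatch('a')   # 1
print dispatch('z')   # 9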
|
cross platform IPC
|
I'm looking for suggestions on possible IPC mechanisms that are:
cross platform (WIN32 and Linux at least)
Simple to implement in C++ as well as in the most common scripting languages (Perl, Ruby, Python, etc.).
Finally, simple to use from a programming point of view!
What are my options? I'm programming under Linux, but I'd like what I write to be portable to other OSes in the future. I've thought about using sockets, named pipes, or something like DBus.
Does anyone have any advice?
|
In terms of speed, the best cross-platform IPC mechanism will be pipes. That assumes, however, that you want cross-platform IPC on the same machine. If you want to be able to talk to processes on remote machines, you'll want to look at using sockets instead. Luckily, if you're talking about TCP at least, sockets and pipes behave pretty much the same way. While the APIs for setting them up and connecting them are different, they both just act like streams of data.
The difficult part, however, is not the communication channel, but the messages you pass over it. You really want to look at something that will perform verification and parsing for you. I recommend looking at Google's Protocol Buffers. You basically create a spec file that describes the object you want to pass between processes, and there is a compiler that generates code in a number of different languages for reading and writing objects that match the spec. It's much easier (and less bug prone) than trying to come up with a messaging protocol and parser yourself.
|
How do I write a python HTTP server to listen on multiple ports?
|
I'm writing a small web server in Python, using BaseHTTPServer and a custom subclass of BaseHTTPServer.BaseHTTPRequestHandler. Is it possible to make this listen on more than one port?
What I'm doing now:
class MyRequestHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(self):
        [...]

class ThreadingHTTPServer(ThreadingMixIn, HTTPServer):
    pass

server = ThreadingHTTPServer(('localhost', 80), MyRequestHandler)
server.serve_forever()
|
Sure; just start two different servers on two different ports in two different threads that each use the same handler. Here's a complete, working example that I just wrote and tested. If you run this code then you'll be able to get a Hello World webpage at both http://localhost:1111/ and http://localhost:2222/
from threading import Thread
from SocketServer import ThreadingMixIn
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-type", "text/plain")
        self.end_headers()
        self.wfile.write("Hello World!")

class ThreadingHTTPServer(ThreadingMixIn, HTTPServer):
    pass

def serve_on_port(port):
    server = ThreadingHTTPServer(("localhost", port), Handler)
    server.serve_forever()

Thread(target=serve_on_port, args=[1111]).start()
serve_on_port(2222)
|
Getting random row through SQLAlchemy
|
How do I select a(or some) random row(s) from a table using SQLAlchemy?
|
This is very much a database-specific issue.
I know that PostgreSQL and MySQL have the ability to order by a random function, so you can use this in SQLAlchemy:
from sqlalchemy.sql.expression import func, select
select.order_by(func.random()) # for PostgreSQL, SQLite
select.order_by(func.rand()) # for MySQL
select.order_by('dbms_random.value') # For Oracle
Next, you need to limit the query by the number of records you need (for example using .limit()).
Bear in mind that, at least in PostgreSQL, selecting a random record has severe performance issues; here is a good article about it.
|
How do you retrieve items from a dictionary in the order that they're inserted?
|
Is it possible to retrieve items from a Python dictionary in the order that they were inserted?
|
The standard python dict isn't able to do this.
There is a proposal (PEP 372) to add an "ordered dictionary" (that keeps track of the order of insertion) to the collections module in the standard library. It includes links to various implementations of ordered dictionaries (see also these two recipes in the Python Cookbook).
You might want to stick with the reference implementation in the PEP if you want your code to be compatible with the "official" version (if the proposal is eventually accepted).
EDIT: The PEP was accepted and added in python 2.7 and 3.1. See the docs.
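Once you're on Python 2.7 / 3.1 or later, it's a one-liner import:
from collections import OrderedDict

d = OrderedDict()
d['b'] = 1
d['a'] = 2
print d.keys()   # ['b', 'a'] -- insertion order is preserved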
|
Where do the Python unit tests go?
|
If you're writing a library, or an app, where do the unit test files go?
It's nice to separate the test files from the main app code, but it's awkward to put them into a "tests" subdirectory inside of the app root directory, because it makes it harder to import the modules that you'll be testing.
Is there a best practice here?
|
For a file module.py, the unit test should normally be called test_module.py, following Pythonic naming conventions.
There are several commonly accepted places to put test_module.py:
In the same directory as module.py.
In ../tests/test_module.py (at the same level as the code directory).
In tests/test_module.py (one level under the code directory).
I prefer #1 for its simplicity of finding the tests and importing them. Whatever build system you're using can easily be configured to run files starting with test_. Actually, the default unittest pattern used for test discovery is test*.py.
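A sketch of what test_module.py might contain, assuming option #1 (same directory) and a module.py defining a hypothetical add() function:
# test_module.py
import unittest
from module import add   # hypothetical function under test

class TestAdd(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(1, 2), 3)

if __name__ == '__main__':
    unittest.main()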
|
Python dictionary from an object's fields
|
Do you know if there is a built-in function to build a dictionary from an arbitrary object? I'd like to do something like this:
>>> class Foo:
... bar = 'hello'
... baz = 'world'
...
>>> f = Foo()
>>> props(f)
{ 'bar' : 'hello', 'baz' : 'world' }
NOTE: It should not include methods. Only fields.
Thanks
|
Note that best practice in current versions of Python is to use new-style classes, i.e.
class Foo(object):
...
Also, there's a difference between an 'object' and a 'class'. To build a dictionary from an arbitrary object, it's sufficient to use __dict__. Usually, you'll declare your methods at class level and your attributes at instance level, so __dict__ should be fine. For example:
>>> class A(object):
... def __init__(self):
... self.b = 1
... self.c = 2
... def do_nothing(self):
... pass
...
>>> a = A()
>>> a.__dict__
{'c': 2, 'b': 1}
Alternatively, depending on what you want to do, it might be nice to inherit from dict. Then your class is already a dictionary, and if you want you can override __getattr__ and/or __setattr__ to call through and set the dict. For example:
class Foo(dict):
    def __init__(self):
        pass

    def __getattr__(self, attr):
        return self[attr]

    # etc...
|
Is it pythonic for a function to return multiple values?
|
In python, you can have a function return multiple values. Here's a contrived example:
def divide(x, y):
    quotient = x/y
    remainder = x % y
    return quotient, remainder

(q, r) = divide(22, 7)
This seems very useful, but it looks like it can also be abused ("Well..function X already computes what we need as an intermediate value. Let's have X return that value also").
When should you draw the line and define a different method?
|
Absolutely (for the example you provided).
Tuples are first class citizens in Python
There is a builtin function divmod() that does exactly that.
q, r = divmod(x, y) # ((x - x%y)/y, x%y) Invariant: div*y + mod == x
There are other examples: zip, enumerate, dict.items.
for i, e in enumerate([1, 3, 3]):
    print "index=%d, element=%s" % (i, e)
# reverse keys and values in a dictionary
d = dict((v, k) for k, v in adict.items()) # or
d = dict(zip(adict.values(), adict.keys()))
BTW, parentheses are not necessary most of the time.
Citation from Python Library Reference:
Tuples are constructed by the comma operator (not within square brackets), with or without enclosing parentheses, but an empty tuple must have the enclosing parentheses, such as a, b, c or (). A single item tuple must have a trailing comma, such as (d,).
Functions should serve a single purpose
Therefore they should return a single object. In your case this object is a tuple. Consider tuple as an ad-hoc compound data structure. There are languages where almost every single function returns multiple values (list in Lisp).
Sometimes it is sufficient to return (x, y) instead of Point(x, y).
Named tuples
With the introduction of named tuples in Python 2.6 it is preferable in many cases to return named tuples instead of plain tuples.
>>> import collections
>>> Point = collections.namedtuple('Point', 'x y')
>>> x, y = Point(0, 1)
>>> p = Point(x, y)
>>> x, y, p
(0, 1, Point(x=0, y=1))
>>> p.x, p.y, p[0], p[1]
(0, 1, 0, 1)
>>> for i in p:
... print(i)
...
0
1
|
When to create a new app (with startapp) in Django?
|
I've googled around for this, but I still have trouble relating to what Django defines as "apps".
Should I create a new app for each piece of functionality in a site, even though it uses models from the main project?
Do you guys have good rule of thumb of when to split off a new app, and when to keep functionality together with the "main project" or other apps?
|
James Bennett has a wonderful set of slides on how to organize reusable apps in Django.
|
Sending mail from Python using SMTP
|
I'm using the following method to send mail from Python using SMTP. Is it the right method to use or are there gotchas I'm missing ?
from smtplib import SMTP
import datetime
debuglevel = 0
smtp = SMTP()
smtp.set_debuglevel(debuglevel)
smtp.connect('YOUR.MAIL.SERVER', 26)
smtp.login('USERNAME@DOMAIN', 'PASSWORD')
from_addr = "John Doe <john@doe.net>"
to_addr = "foo@bar.com"
subj = "hello"
date = datetime.datetime.now().strftime( "%d/%m/%Y %H:%M" )
message_text = "Hello\nThis is a mail from your server\n\nBye\n"
msg = "From: %s\nTo: %s\nSubject: %s\nDate: %s\n\n%s"
% ( from_addr, to_addr, subj, date, message_text )
smtp.sendmail(from_addr, to_addr, msg)
smtp.quit()
|
The script I use is quite similar; I post it here as an example of how to use the email.* modules to generate MIME messages; so this script can be easily modified to attach pictures, etc.
I rely on my ISP to add the date time header.
My ISP requires me to use a secure smtp connection to send mail, I rely on the ssmtplib module (downloadable at http://www1.cs.columbia.edu/~db2501/ssmtplib.py)
As in your script, the username and password (given dummy values below) used to authenticate on the SMTP server are in plain text in the source. This is a security weakness; but the best alternative depends on how careful you need (want?) to be about protecting these.
=======================================
#! /usr/local/bin/python
SMTPserver = 'smtp.att.yahoo.com'
sender = 'me@my_email_domain.net'
destination = ['recipient@her_email_domain.com']
USERNAME = "USER_NAME_FOR_INTERNET_SERVICE_PROVIDER"
PASSWORD = "PASSWORD_INTERNET_SERVICE_PROVIDER"
# typical values for text_subtype are plain, html, xml
text_subtype = 'plain'
content="""\
Test message
"""
subject="Sent from Python"
import sys
import os
import re
from smtplib import SMTP_SSL as SMTP # this invokes the secure SMTP protocol (port 465, uses SSL)
# from smtplib import SMTP # use this for standard SMTP protocol (port 25, no encryption)
# old version
# from email.MIMEText import MIMEText
from email.mime.text import MIMEText
try:
msg = MIMEText(content, text_subtype)
msg['Subject']= subject
msg['From'] = sender # some SMTP servers will do this automatically, not all
conn = SMTP(SMTPserver)
conn.set_debuglevel(False)
conn.login(USERNAME, PASSWORD)
try:
conn.sendmail(sender, destination, msg.as_string())
finally:
conn.quit()
except Exception, exc:
sys.exit( "mail failed; %s" % str(exc) ) # give a error message
|
How to setup VIM autoindentation properly for editing Python files - *.py
|
I've troubles setting VIM (7.1.xxx) for editing python files.
Indenting seems to be broken (optimal 4 spaces).
I've followed some tutorials I found via Google. Still no effect :/
Please help.
|
I use this on my macbook:
" configure expanding of tabs for various file types
au BufRead,BufNewFile *.py set expandtab
au BufRead,BufNewFile *.c set noexpandtab
au BufRead,BufNewFile *.h set noexpandtab
au BufRead,BufNewFile Makefile* set noexpandtab
" --------------------------------------------------------------------------------
" configure editor with tabs and nice stuff...
" --------------------------------------------------------------------------------
set expandtab " enter spaces when tab is pressed
set textwidth=120 " break lines when line length increases
set tabstop=4 " use 4 spaces to represent tab
set softtabstop=4
set shiftwidth=4 " number of spaces to use for auto indent
set autoindent " copy indent from current line when starting a new line
" make backspaces more powerfull
set backspace=indent,eol,start
set ruler " show line and column number
syntax on " syntax highlighting
set showcmd " show (partial) command in status line
(edited to only show stuff related to indent / tabs)
|
Caching compiled regex objects in Python?
|
Each time a python file is imported that contains a large quantity of static regular expressions, cpu cycles are spent compiling the strings into their representative state machines in memory.
a = re.compile("a.*b")
b = re.compile("c.*d")
...
Question: Is it possible to store these regular expressions in a cache on disk in a pre-compiled manner to avoid having to execute the regex compilations on each import?
Pickling the object simply does the following, causing compilation to happen anyway:
>>> import pickle
>>> import re
>>> x = re.compile(".*")
>>> pickle.dumps(x)
"cre\n_compile\np0\n(S'.*'\np1\nI0\ntp2\nRp3\n."
And re objects are unmarshallable:
>>> import marshal
>>> import re
>>> x = re.compile(".*")
>>> marshal.dumps(x)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: unmarshallable object
|
Is it possible to store these regular expressions in a cache on disk in a pre-compiled manner to avoid having to execute the regex compilations on each import?
Not easily. You'd have to write a custom serializer that hooks into the C sre implementation of the Python regex engine. Any performance benefits would be vastly outweighed by the time and effort required.
First, have you actually profiled the code? I doubt that compiling regexes is a significant part of the application's run-time. Remember that they are only compiled the first time the module is imported in the current execution -- thereafter, the module and its attributes are cached in memory.
If you have a program that basically spawns once, compiles a bunch of regexes, and then exits, you could try re-engineering it to perform multiple tests in one invocation. Then you could re-use the regexes, as above.
Finally, you could compile the regexes into C-based state machines and then link them in with an extension module. While this would likely be more difficult to maintain, it would eliminate regex compilation entirely from your application.
|
Serving dynamically generated ZIP archives in Django
|
How to serve users a dynamically generated ZIP archive in Django?
I'm making a site, where users can choose any combination of available books and download them as ZIP archive. I'm worried that generating such archives for each request would slow my server down to a crawl. I have also heard that Django doesn't currently have a good solution for serving dynamically generated files.
|
The solution is as follows.
Use Python module zipfile to create zip archive, but as the file specify StringIO object (ZipFile constructor requires file-like object). Add files you want to compress. Then in your Django application return the content of StringIO object in HttpResponse with mimetype set to application/x-zip-compressed (or at least application/octet-stream). If you want, you can set content-disposition header, but this should not be really required.
But beware, creating zip archives on each request is a bad idea and may kill your server (not counting timeouts if the archives are large). A better approach, performance-wise, is to cache the generated output somewhere in the filesystem and regenerate it only if the source files have changed. An even better idea is to prepare archives in advance (e.g. by a cron job) and have your web server serve them as usual statics.
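A minimal sketch of such a view, under the description above (the book paths, archive name, and view name are hypothetical, and error handling is omitted):
import zipfile
from StringIO import StringIO
from django.http import HttpResponse

def download_books(request):
    buffer = StringIO()  # in-memory file-like object
    archive = zipfile.ZipFile(buffer, 'w', zipfile.ZIP_DEFLATED)
    for path in ['/books/book1.txt', '/books/book2.txt']:  # hypothetical paths
        archive.write(path)
    archive.close()
    response = HttpResponse(buffer.getvalue(),
                            mimetype='application/x-zip-compressed')
    response['Content-Disposition'] = 'attachment; filename=books.zip'
    return response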
|
How to import a module given the full path?
|
How can I load a Python module given its full path? Note that the file can be anywhere in the filesystem, as it is a configuration option.
|
For Python 3.5+ use:
import importlib.util
spec = importlib.util.spec_from_file_location("module.name", "/path/to/file.py")
foo = importlib.util.module_from_spec(spec)
spec.loader.exec_module(foo)
foo.MyClass()
For Python 3.3 and 3.4 use:
from importlib.machinery import SourceFileLoader
foo = SourceFileLoader("module.name", "/path/to/file.py").load_module()
foo.MyClass()
(Although this has been deprecated in Python 3.4.)
Python 2 use:
import imp
foo = imp.load_source('module.name', '/path/to/file.py')
foo.MyClass()
There are equivalent convenience functions for compiled Python files and DLLs.
See also http://bugs.python.org/issue21436.
|
Why do you need to explicitly have the "self" argument in a Python method?
|
When defining a method on a class in Python, it looks something like this:
class MyClass(object):
def __init__(self, x, y):
self.x = x
self.y = y
But in some other languages, such as C#, you have a reference to the object that the method is bound to with the "this" keyword without declaring it as an argument in the method prototype.
Was this an intentional language design decision in Python or are there some implementation details that require the passing of "self" as an argument?
|
I like to quote Peters' Zen of Python. "Explicit is better than implicit."
In Java and C++, 'this.' can be deduced, except when you have variable names that make it impossible to deduce. So you sometimes need it and sometimes don't.
Python elects to make things like this explicit rather than based on a rule.
Additionally, since nothing is implied or assumed, parts of the implementation are exposed. self.__class__, self.__dict__ and other "internal" structures are available in an obvious way.
|
How to copy a file to a remote server in Python using SCP or SSH?
|
I have a text file on my local machine that is generated by a daily Python script run in cron.
I would like to add a bit of code to have that file sent securely to my server over SSH.
|
To do this in Python (i.e. not wrapping scp through subprocess.Popen or similar) with the Paramiko library, you would do something like this:
import os
import paramiko
ssh = paramiko.SSHClient()
ssh.load_host_keys(os.path.expanduser(os.path.join("~", ".ssh", "known_hosts")))
ssh.connect(server, username=username, password=password)
sftp = ssh.open_sftp()
sftp.put(localpath, remotepath)
sftp.close()
ssh.close()
(You would probably want to deal with unknown hosts, errors, creating any directories necessary, and so on).
|
Send file using POST from a Python script
|
Is there a way to send a file using POST from a Python script?
|
From http://docs.python-requests.org/en/latest/user/quickstart/#post-a-multipart-encoded-file
Requests makes it very simple to upload Multipart-encoded files:
>>> r = requests.post('http://httpbin.org/post', files={'report.xls': open('report.xls', 'rb')})
That's it. I'm not joking - this is one line of code. File was sent. Let's check:
>>> r.text
{
"origin": "179.13.100.4",
"files": {
"report.xls": "<censored...binary...data>"
},
"form": {},
"url": "http://httpbin.org/post",
"args": {},
"headers": {
"Content-Length": "3196",
"Accept-Encoding": "identity, deflate, compress, gzip",
"Accept": "*/*",
"User-Agent": "python-requests/0.8.0",
"Host": "httpbin.org:80",
"Content-Type": "multipart/form-data; boundary=127.0.0.1.502.21746.1321131593.786.1"
},
"data": ""
}
|
Are tuples more efficient than lists in Python?
|
Is there any performance difference between tuples and lists when it comes to instantiation and retrieval of elements?
|
In general, you might expect tuples to be slightly faster. However you should definitely test your specific case (if the difference might impact the performance of your program -- remember "premature optimization is the root of all evil").
Python makes this very easy: timeit is your friend.
$ python -m timeit "x=(1,2,3,4,5,6,7,8)"
10000000 loops, best of 3: 0.0388 usec per loop
$ python -m timeit "x=[1,2,3,4,5,6,7,8]"
1000000 loops, best of 3: 0.363 usec per loop
and...
$ python -m timeit -s "x=(1,2,3,4,5,6,7,8)" "y=x[3]"
10000000 loops, best of 3: 0.0938 usec per loop
$ python -m timeit -s "x=[1,2,3,4,5,6,7,8]" "y=x[3]"
10000000 loops, best of 3: 0.0649 usec per loop
So in this case, instantiation is almost an order of magnitude faster for the tuple, but item access is actually somewhat faster for the list! So if you're creating a few tuples and accessing them many many times, it may actually be faster to use lists instead.
Of course if you want to change an item, the list will definitely be faster since you'd need to create an entire new tuple to change one item of it (since tuples are immutable).
|
Static class variables in Python
|
Is it possible to have static class variables or methods in python? What syntax is required to do this?
|
Variables declared inside the class definition, but not inside a method are class or static variables:
>>> class MyClass:
... i = 3
...
>>> MyClass.i
3
As @millerdev points out, this creates a class-level "i" variable, but this is distinct from any instance-level "i" variable, so you could have
>>> m = MyClass()
>>> m.i = 4
>>> MyClass.i, m.i
(3, 4)
This is different from C++ and Java, but not so different from C#, where a static member can't be accessed using a reference to an instance.
See what the Python tutorial has to say on the subject of classes and class objects.
@Steve Johnson has already answered regarding static methods, also documented under "Built-in Functions" in the Python Library Reference.
class C:
@staticmethod
def f(arg1, arg2, ...): ...
@beidy recommends classmethods over staticmethod, as the method then receives the class type as the first argument, but I'm still a little fuzzy on the advantages of this approach over staticmethod. If you are too, then it probably doesn't matter.
|
Best way to open a socket in Python
|
I want to open a TCP client socket in Python. Do I have to go through all the low-level BSD create-socket-handle / connect-socket stuff or is there a simpler one-line way?
|
Opening sockets in python is pretty simple. You really just need something like this:
import socket
sock = socket.socket()
sock.connect((address, port))
and then you can send() and recv() like any other socket
|
Take a screenshot via a python script. [Linux]
|
I want to take a screenshot via a python script and unobtrusively save it.
I'm only interested in the Linux solution, and should support any X based environment.
|
This works without having to use scrot or ImageMagick.
import gtk.gdk
w = gtk.gdk.get_default_root_window()
sz = w.get_size()
print "The size of the window is %d x %d" % sz
pb = gtk.gdk.Pixbuf(gtk.gdk.COLORSPACE_RGB,False,8,sz[0],sz[1])
pb = pb.get_from_drawable(w,w.get_colormap(),0,0,0,0,sz[0],sz[1])
if pb is not None:
pb.save("screenshot.png","png")
print "Screenshot saved to screenshot.png."
else:
print "Unable to get the screenshot."
Borrowed from http://ubuntuforums.org/showpost.php?p=2681009&postcount=5
|
Why are Python's 'private' methods not actually private?
|
Python gives us the ability to create 'private' methods and variables within a class by prepending double underscores to the name, like this: __myPrivateMethod(). How, then, can one explain this
>>> class MyClass:
... def myPublicMethod(self):
... print 'public method'
... def __myPrivateMethod(self):
... print 'this is private!!'
...
>>> obj = MyClass()
>>> obj.myPublicMethod()
public method
>>> obj.__myPrivateMethod()
Traceback (most recent call last):
File "", line 1, in
AttributeError: MyClass instance has no attribute '__myPrivateMethod'
>>> dir(obj)
['_MyClass__myPrivateMethod', '__doc__', '__module__', 'myPublicMethod']
>>> obj._MyClass__myPrivateMethod()
this is private!!
What's the deal?!
I'll explain this a little for those who didn't quite get that.
>>> class MyClass:
... def myPublicMethod(self):
... print 'public method'
... def __myPrivateMethod(self):
... print 'this is private!!'
...
>>> obj = MyClass()
What I did there is create a class with a public method and a private method and instantiate it.
Next, I call its public method.
>>> obj.myPublicMethod()
public method
Next, I try and call its private method.
>>> obj.__myPrivateMethod()
Traceback (most recent call last):
File "", line 1, in
AttributeError: MyClass instance has no attribute '__myPrivateMethod'
Everything looks good here; we're unable to call it. It is, in fact, 'private'. Well, actually it isn't. Running dir() on the object reveals a new magical method that python creates magically for all of your 'private' methods.
>>> dir(obj)
['_MyClass__myPrivateMethod', '__doc__', '__module__', 'myPublicMethod']
This new method's name is always an underscore, followed by the class name, followed by the method name.
>>> obj._MyClass__myPrivateMethod()
this is private!!
So much for encapsulation, eh?
In any case, I'd always heard Python doesn't support encapsulation, so why even try? What gives?
|
The name mangling is used to ensure that subclasses don't accidentally override the private methods and attributes of their superclasses. It's not designed to prevent deliberate access from outside.
For example:
>>> class Foo(object):
... def __init__(self):
... self.__baz = 42
... def foo(self):
... print self.__baz
...
>>> class Bar(Foo):
... def __init__(self):
... super(Bar, self).__init__()
... self.__baz = 21
... def bar(self):
... print self.__baz
...
>>> x = Bar()
>>> x.foo()
42
>>> x.bar()
21
>>> print x.__dict__
{'_Bar__baz': 21, '_Foo__baz': 42}
Of course, it breaks down if two different classes have the same name.
|
Python: user input and commandline arguments
|
How do I have a Python script that can accept user input (assuming this is possible) and how do I make it read in arguments if run from the command line?
|
var = raw_input("Please enter something: ")
print "you entered", var
|
How to do relative imports in Python?
|
Imagine this directory structure:
app/
__init__.py
sub1/
__init__.py
mod1.py
sub2/
__init__.py
mod2.py
I'm coding mod1, and I need to import something from mod2. How should I do it?
I tried from ..sub2 import mod2 but I'm getting an "Attempted relative import in non-package".
I googled around but found only "sys.path manipulation" hacks. Isn't there a clean way?
Edit: all my __init__.py's are currently empty
Edit2: I'm trying to do this because sub2 contains classes that are shared across sub packages (sub1, subX, etc.).
Edit3: The behaviour I'm looking for is the same as described in PEP 366 (thanks John B)
|
Everyone seems to want to tell you what you should be doing rather than just answering the question.
The problem is that you're running the module as '__main__' by passing the mod1.py as an argument to the interpreter.
From PEP 328:
Relative imports use a module's __name__ attribute to determine that module's position in the package hierarchy. If the module's name does not contain any package information (e.g. it is set to '__main__') then relative imports are resolved as if the module were a top level module, regardless of where the module is actually located on the file system.
In Python 2.6, they're adding the ability to reference modules relative to the main module. PEP 366 describes the change.
Update: According to Nick Coghlan, the recommended alternative is to run the module inside the package using the -m switch.
|
How do I sort a list of dictionaries by values of the dictionary in Python?
|
I got a list of dictionaries and want that to be sorted by a value of that dictionary.
This
[{'name':'Homer', 'age':39}, {'name':'Bart', 'age':10}]
sorted by name, should become
[{'name':'Bart', 'age':10}, {'name':'Homer', 'age':39}]
|
It may look cleaner using a key instead a cmp:
newlist = sorted(list_to_be_sorted, key=lambda k: k['name'])
or as J.F.Sebastian and others suggested,
from operator import itemgetter
newlist = sorted(list_to_be_sorted, key=itemgetter('name'))
For completeness (as pointed out in comments by fitzgeraldsteele), add reverse=True to sort descending
newlist = sorted(l, key=itemgetter('name'), reverse=True)
|
Terminating a Python script
|
I am aware of the die() command in PHP which stops a script early.
How can I do this in Python?
|
import sys
sys.exit()
details from the sys module documentation:
sys.exit([arg])
Exit from Python. This is implemented by raising the
SystemExit exception, so cleanup actions specified by finally clauses
of try statements are honored, and it is possible to intercept the
exit attempt at an outer level.
The optional argument arg can be an integer giving the exit status
(defaulting to zero), or another type of object. If it is an integer,
zero is considered "successful termination" and any nonzero value is
considered "abnormal termination" by shells and the like. Most systems
require it to be in the range 0-127, and produce undefined results
otherwise. Some systems have a convention for assigning specific
meanings to specific exit codes, but these are generally
underdeveloped; Unix programs generally use 2 for command line syntax
errors and 1 for all other kind of errors. If another type of object
is passed, None is equivalent to passing zero, and any other object is
printed to stderr and results in an exit code of 1. In particular,
sys.exit("some error message") is a quick way to exit a program when
an error occurs.
Since exit() ultimately "only" raises an exception, it will only exit
the process when called from the main thread, and the exception is not
intercepted.
Note that this is the 'nice' way to exit. @glyphtwistedmatrix below points out that if you want a 'hard exit', you can use os._exit(errorcode), though it's likely os-specific to some extent (it might not take an errorcode under windows, for example), and it definitely is less friendly since it doesn't let the interpreter do any cleanup before the process dies.
|
Sending mail via sendmail from python
|
If I want to send mail not via SMTP, but rather via sendmail, is there a library for python that encapsulates this process?
Better yet, is there a good library that abstracts the whole 'sendmail -versus- smtp' choice?
I'll be running this script on a bunch of unix hosts, only some of which are listening on localhost:25; a few of these are part of embedded systems and can't be set up to accept SMTP.
As part of Good Practice, I'd really like to have the library take care of header injection vulnerabilities itself -- so just dumping a string to popen('/usr/bin/sendmail', 'w') is a little closer to the metal than I'd like.
If the answer is 'go write a library,' so be it ;-)
|
Header injection isn't a factor in how you send the mail, it's a factor in how you construct the mail. Check the email package, construct the mail with that, serialise it, and send it to /usr/sbin/sendmail using the subprocess module:
from email.mime.text import MIMEText
from subprocess import Popen, PIPE
msg = MIMEText("Here is the body of my message")
msg["From"] = "me@example.com"
msg["To"] = "you@example.com"
msg["Subject"] = "This is the subject."
p = Popen(["/usr/sbin/sendmail", "-t", "-oi"], stdin=PIPE)
p.communicate(msg.as_string())
|
How do I get the name of a python class as a string?
|
What method do I call to get the name of a class?
|
In [1]: class test(object):
...: pass
...:
In [2]: test.__name__
Out[2]: 'test'
|
Django -vs- Grails -vs-?
|
I'm wondering if there's such a thing as Django-like ease of web app development combined with good deployment, debugging and other tools?
Django is a very productive framework for building content-heavy sites; the best I've tried and a breath of fresh air compared to some of the Java monstrosities out there. However it's written in Python which means there's little real support in the way of deployment/packaging, debugging, profilers and other tools that make building and maintaining applications much easier.
Ruby has similar issues and although I do like Ruby much better than I like Python, I get the impression that Rails is roughly in the same boat at Django when it comes to managing/supporting the app.
Has anyone here tried both Django and Grails (or other web frameworks) for non-trivial projects? How did they compare?
|
You asked for someone who used both Grails and Django. I've done work on both for big projects. Here are my thoughts:
IDE's:
Django works really well in Eclipse, Grails works really well in IntelliJ Idea.
Debugging:
Practically the same (assuming you use IntelliJ for Grails, and Eclipse for Python). Step debugging, inspecting variables, etc... never need a print statement for either. Sometimes django error messages can be useless but Grails error messages are usually pretty lengthy and hard to parse through.
Time to run a unit test:
django: 2 seconds.
Grails: 20 seconds (the tests themselves run in a fraction of a second; it's loading the framework to run them that takes the rest... as you can see, Grails is frustratingly slow to load).
Deployment:
Django: copy & paste one file into an apache config, and to redeploy, just change the code and reload apache.
Grails: create a .war file, deploy it on tomcat, rinse and repeat to redeploy.
Programming languages:
Groovy is TOTALLY awesome. I love it, more so than Python. But I certainly have no complaints.
Plugins:
Grails: lots of broken plugins (and can use every java lib ever).
Django: a few stable plugins, but enough to do most of what you need.
Database:
Django: schema migrations using South, and generally intuitive relations.
Grails: no schema migrations, and by default it deletes the database on startup... WTF
Usage:
Django: startups (especially in the Gov 2.0 space), independent web dev shops.
Grails: enterprise
Hope that helps!
|
Which is faster, python webpages or php webpages?
|
Does anyone know how the speed of pylons(or any of the other frameworks) compares to a similar website made with php?
I know that serving a Python-based webpage via CGI is slower than PHP because of its long start-up time on every request.
I enjoy using pylons and I would still use it if it was slower than php. But if pylons was faster than php, I could maybe, hopefully, eventually convince my employer to allow me to convert the site over to pylons.
|
It sounds like you don't want to compare the two languages, but that you want to compare two web systems.
This is tricky, because there are many variables involved.
For example, Python web applications can take advantage of mod_wsgi to talk to web servers, which is faster than any of the typical ways that PHP talks to web servers (even mod_php ends up being slower if you're using Apache, because Apache can only use the Prefork MPM with mod_php rather than multi-threaded MPM like Worker).
There is also the issue of code compilation. As you know, Python is compiled to byte code (.pyc files) when a file is first run, and recompiled whenever the file changes. Therefore, after the first run of a Python file, the compilation step is skipped and the Python interpreter simply fetches the precompiled .pyc file. Because of this, one could argue that Python has a native advantage over PHP. However, optimizers and caching systems can be installed for PHP websites (my favorite is eAccelerator) to much the same effect.
In general, enough tools exist such that one can pretty much do everything that the other can do. Of course, as others have mentioned, there's more than just speed involved in the business case to switch languages. We have an app written in OCaml at my current employer, which turned out to be a mistake because the original author left the company and nobody else wants to touch it. Similarly, the PHP-web community is much larger than the Python-web community; website hosting services are more likely to offer PHP support than Python support; etc.
But back to speed. You must recognize that the question of speed here involves many moving parts. Fortunately, many of these parts can be independently optimized, affording you various avenues to seek performance gains.
|
'id' is a bad variable name in Python
|
Why is it bad to name a variable id in Python?
|
id() is a fundamental built-in:
Help on built-in function id in module
__builtin__:
id(...)
id(object) -> integer
Return the identity of an object. This is guaranteed to be unique among
simultaneously existing objects. (Hint: it's the object's memory
address.)
In general, using variable names that eclipse a keyword or built-in function in any language is a bad idea, even if it is allowed.
|
Is there a benefit to defining a class inside another class in Python?
|
What I'm talking about here are nested classes. Essentially, I have two classes that I'm modeling. A DownloadManager class and a DownloadThread class. The obvious OOP concept here is composition. However, composition doesn't necessarily mean nesting, right?
I have code that looks something like this:
class DownloadThread:
def foo(self):
pass
class DownloadManager():
def __init__(self):
dwld_threads = []
def create_new_thread():
dwld_threads.append(DownloadThread())
But now I'm wondering if there's a situation where nesting would be better. Something like:
class DownloadManager():
class DownloadThread:
def foo(self):
pass
def __init__(self):
dwld_threads = []
def create_new_thread():
dwld_threads.append(DownloadManager.DownloadThread())
|
You might want to do this when the "inner" class is a one-off, which will never be used outside the definition of the outer class. For example to use a metaclass, it's sometimes handy to do
class Foo(object):
class __metaclass__(type):
....
instead of defining a metaclass separately, if you're only using it once.
The only other time I've used nested classes like that, I used the outer class only as a namespace to group a bunch of closely related classes together:
class Group(object):
class cls1(object):
...
class cls2(object):
...
Then from another module, you can import Group and refer to these as Group.cls1, Group.cls2 etc. However one might argue that you can accomplish exactly the same (perhaps in a less confusing way) by using a module.
|
Unittest causing sys.exit()
|
No matter what I do sys.exit() is called by unittest, even the most trivial examples. I can't tell if my install is messed up or what is going on.
IDLE 1.2.2 ==== No Subprocess ====
>>> import unittest
>>>
>>> class Test(unittest.TestCase):
def testA(self):
a = 1
self.assertEqual(a,1)
>>> unittest.main()
option -n not recognized
Usage: idle.pyw [options] [test] [...]
Options:
-h, --help Show this message
-v, --verbose Verbose output
-q, --quiet Minimal output
Examples:
idle.pyw - run default set of tests
idle.pyw MyTestSuite - run suite 'MyTestSuite'
idle.pyw MyTestCase.testSomething - run MyTestCase.testSomething
idle.pyw MyTestCase - run all 'test*' test methods
in MyTestCase
Traceback (most recent call last):
File "<pyshell#7>", line 1, in <module>
unittest.main()
File "E:\Python25\lib\unittest.py", line 767, in __init__
self.parseArgs(argv)
File "E:\Python25\lib\unittest.py", line 796, in parseArgs
self.usageExit(msg)
File "E:\Python25\lib\unittest.py", line 773, in usageExit
sys.exit(2)
SystemExit: 2
>>>
|
In the new Python 2.7 release, unittest.main() has a new argument.
If 'exit' is set to False, sys.exit() is not called during the execution of unittest.main().
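For example:
unittest.main(exit=False)
This runs the tests and then returns control to the caller instead of raising SystemExit, which is handy inside IDLE or other interactive shells.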
|
How do I convert local time to UTC in Python?
|
How do I convert a datetime string in local time to a string in UTC time?
I'm sure I've done this before, but can't find it and SO will hopefully help me (and others) do that in future.
Clarification: For example, if I have 2008-09-17 14:02:00 in my local timezone (+10), I'd like to generate a string with the equivalent UTC time: 2008-09-17 04:02:00.
Also, from http://lucumr.pocoo.org/2011/7/15/eppur-si-muove/, note that in general this isn't possible as with DST and other issues there is no unique conversion from local time to UTC time.
|
First, parse the string into a naive datetime object. This is an instance of datetime.datetime with no attached timezone information. See documentation for datetime.strptime for information on parsing the date string.
Use the pytz module, which comes with a full list of time zones + UTC. Figure out what the local timezone is, construct a timezone object from it, and manipulate and attach it to the naive datetime.
Finally, use datetime.astimezone() method to convert the datetime to UTC.
Source code, using local timezone "America/Los_Angeles", for the string "2001-2-3 10:11:12":
import pytz, datetime
local = pytz.timezone ("America/Los_Angeles")
naive = datetime.datetime.strptime ("2001-2-3 10:11:12", "%Y-%m-%d %H:%M:%S")
local_dt = local.localize(naive, is_dst=None)
utc_dt = local_dt.astimezone (pytz.utc)
From there, you can use the strftime() method to format the UTC datetime as needed:
utc_dt.strftime ("%Y-%m-%d %H:%M:%S")
|
How do you design data models for Bigtable/Datastore (GAE)?
|
Since the Google App Engine Datastore is based on Bigtable and we know that's not a relational database, how do you design a database schema/*data model* for applications that use this type of database system?
|
Designing a bigtable schema is an open process, and basically requires you to think about:
The access patterns you will be using and how often each will be used
The relationships between your types
What indices you are going to need
The write patterns you will be using (in order to effectively spread load)
GAE's datastore automatically denormalizes your data. That is, each index contains a (mostly) complete copy of the data, and thus every index adds significantly to time taken to perform a write, and the storage space used.
If this were not the case, designing a Datastore schema would be a lot more work: You would have to think carefully about the primary key for each type, and consider the effect of your decision on the locality of data. For example, when rendering a blog post you would probably need to display the comments to go along with it, so each comment's key would probably begin with the associated post's key.
With Datastore, this is not such a big deal: The query you use will look something like "Select * FROM Comment WHERE post_id = N." (If you want to page the comments, you would also have a limit clause, and a possible suffix of " AND comment_id > last_comment_id".) Once you add such a query, Datastore will build the index for you, and your reads will be magically fast.
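As a rough sketch using the old db API, the Comment type and the query above might look like this (the property names, the some_post_id variable, and the fetch limit are all illustrative):
from google.appengine.ext import db

class Comment(db.Model):
    post_id = db.StringProperty()  # key of the associated post
    body = db.TextProperty()

# Datastore builds the index for this filter automatically
comments = Comment.all().filter('post_id =', some_post_id).fetch(20)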
Something to keep in mind is that each additional index creates some additional cost: it is best if you can use as few access patterns as possible, since it will reduce the number of indices GAE will construct, and thus the total storage required by your data.
Reading over this answer, I find it a little vague. Maybe a hands-on design question would help to scope this down? :-)
|
Split a string by spaces -- preserving quoted substrings -- in Python
|
I have a string which is like this:
this is "a test"
I'm trying to write something in Python to split it up by space while ignoring spaces within quotes. The result I'm looking for is:
['this','is','a test']
PS. I know you are going to ask "what happens if there are quotes within the quotes?" -- well, in my application, that will never happen.
|
You want split, from the shlex module.
>>> import shlex
>>> shlex.split('this is "a test"')
['this', 'is', 'a test']
This should do exactly what you want.
|
Asychronous Programming in Python Twisted
|
I'm having trouble developing a reverse proxy in Twisted. It works, but it seems overly complex and convoluted. So much of it feels like voodoo...
Are there any simple, solid examples of asynchronous program structure on the web or in books? A sort of best practices guide? When I complete my program I'd like to be able to still see the structure in some way, not be looking at a bowl of spaghetti.
|
Twisted contains a large number of examples. One in particular, the "evolution of Finger" tutorial, contains a thorough explanation of how an asynchronous program grows from a very small kernel up to a complex system with lots of moving parts. Another one that might be of interest to you is the tutorial about simply writing servers.
The key thing to keep in mind about Twisted, or even other asynchronous networking libraries (such as asyncore, MINA, or ACE), is that your code only gets invoked when something happens. The part that I've heard most often sound like "voodoo" is the management of callbacks: for example, Deferred. If you're used to writing code that runs in a straight line, and only calls functions which return immediately with results, the idea of waiting for something to call you back might be confusing. But there's nothing magical, no "voodoo" about callbacks. At the lowest level, the reactor is just sitting around and waiting for one of a small number of things to happen:
Data arrives on a connection (it will call dataReceived on a Protocol)
Time has passed (it will call a function registered with callLater).
A connection has been accepted (it will call buildProtocol on a factory registered with a listenXXX or connectXXX function).
A connection has been dropped (it will call connectionLost on the appropriate Protocol)
Every asynchronous program starts by hooking up a few of these events and then kicking off the reactor to wait for them to happen. Of course, events that happen lead to more events that get hooked up or disconnected, and so your program goes on its merry way. Beyond that, there's nothing special about asynchronous program structure that are interesting or special; event handlers and callbacks are just objects, and your code is run in the usual way.
Here's a simple "event-driven engine" that shows you just how simple this process is.
# Engine
import time
class SimplestReactor(object):
def __init__(self):
self.events = []
self.stopped = False
def do(self, something):
self.events.append(something)
def run(self):
while not self.stopped:
time.sleep(0.1)
if self.events:
thisTurn = self.events.pop(0)
thisTurn()
def stop(self):
self.stopped = True
reactor = SimplestReactor()
# Application
def thing1():
print 'Doing thing 1'
reactor.do(thing2)
reactor.do(thing3)
def thing2():
print 'Doing thing 2'
def thing3():
print 'Doing thing 3: and stopping'
reactor.stop()
reactor.do(thing1)
print 'Running'
reactor.run()
print 'Done!'
At the core of libraries like Twisted, the function in the main loop is not sleep, but an operating system call like select() or poll(), as exposed by a module like the Python select module. I say "like" select, because this is an API that varies a lot between platforms, and almost every GUI toolkit has its own version. Twisted currently provides an abstract interface to 14 different variations on this theme. The common thing that such an API provides is a way to say "Here is a list of events that I'm waiting for. Go to sleep until one of them happens, then wake up and tell me which one of them it was."
|
Upload files in Google App Engine
|
I am planning to create a web app that allows users to downgrade their Visual Studio project files. However, it seems Google App Engine accepts file uploads and flat file storage on the Google Server through db.TextProperty and db.BlobProperty.
I'll be glad anyone can provide code sample (both the client and the server side) on how this can be done.
|
In fact, this question is answered in the App Engine documentation. See an example on Uploading User Images.
HTML code, inside <form></form>:
<input type="file" name="img"/>
Python code:
class Guestbook(webapp.RequestHandler):
def post(self):
greeting = Greeting()
if users.get_current_user():
greeting.author = users.get_current_user()
greeting.content = self.request.get("content")
avatar = self.request.get("img")
greeting.avatar = db.Blob(avatar)
greeting.put()
self.redirect('/')
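To serve the stored blob back out (the other half of the docs example), a handler along these lines should work; the img_id parameter name and the hard-coded PNG content type are assumptions:
class Image(webapp.RequestHandler):
    def get(self):
        greeting = db.get(self.request.get("img_id"))  # entity key passed in the query string
        if greeting.avatar:
            self.response.headers['Content-Type'] = "image/png"
            self.response.out.write(greeting.avatar)
        else:
            self.response.out.write("No image")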
|
Is there any list of blog engines, written in Django?
|
Is there any list of blog engines, written in Django?
|
EDIT: Original link went dead so here's an updated link with extracts of the list sorted with the most recently updated source at the top.
Eleven Django blog engines you should know
by Monty Lounge Industries
Biblion
Django-article
Flother
Basic-Blog
Hello-Newman
Banjo
djangotechblog
Django-YABA
Shifting Bits (this is now just a biblion blog)
Mighty Lemon
Coltrane
|
How do I check whether a file exists using Python?
|
How do I check whether a file exists, without using the try statement?
|
You can also use os.path.isfile
Return True if path is an existing regular file. This follows symbolic links, so both islink() and isfile() can be true for the same path.
import os.path
os.path.isfile(fname)
if you need to be sure it's a file.
Starting with Python 3.4, the pathlib module offers an object-oriented approach:
from pathlib import Path
my_file = Path("/path/to/file")
if my_file.is_file():
# file exists
|
How to programmatically enable/disable network interfaces? (Windows XP)
|
I need to enable/disable completely network interfaces from a script in Windows XP. I'm looking for a python solution, but any general way (eg WMI, some command-line à la netsh, some windows call) is welcome and will be adjusted. Thanks.
|
Using the netsh interface
Usage set interface [name = ] IfName
[ [admin = ] ENABLED|DISABLED
[connect = ] CONNECTED|DISCONNECTED
[newname = ] NewName ]
Try including everything inside the outer brackets:
netsh interface set interface name="thename" admin=disabled connect=DISCONNECTED newname="thename"
See also this MS KB page: http://support.microsoft.com/kb/262265/
You could follow either of their suggestions.
For disabling the adapter, you will need to determine a way to reference the hardware device. If there will not be multiple adapters with the same name on the computer, you could possibly go off of the Description for the interface (or PCI ID works well). After that, using devcon (disable|enable). Devcon is an add-on console interface for the Device Manager.
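For example, a sketch of the devcon approach (the hardware ID below is hypothetical; devcon find can list the real ones on your machine):
devcon find =net
devcon disable "PCI\VEN_14E4&DEV_170C"
devcon enable "PCI\VEN_14E4&DEV_170C"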
|
Why isn't the 'len' function inherited by dictionaries and lists in Python
|
example:
a_list = [1, 2, 3]
a_list.len() # doesn't work
len(a_list) # works
Python being (very) object oriented, I don't understand why the 'len' function isn't inherited by the object.
Plus, I keep trying the wrong solution since it seems like the logical one to me.
|
Guido's explanation is here:
First of all, I chose len(x) over x.len() for HCI reasons (def __len__() came much later). There are two intertwined reasons actually, both HCI:
(a) For some operations, prefix notation just reads better than postfix -- prefix (and infix!) operations have a long tradition in mathematics, which likes notations where the visuals help the mathematician thinking about a problem. Compare the ease with which we rewrite a formula like x*(a+b) into x*a + x*b to the clumsiness of doing the same thing using a raw OO notation.
(b) When I read code that says len(x) I know that it is asking for the length of something. This tells me two things: the result is an integer, and the argument is some kind of container. To the contrary, when I read x.len(), I have to already know that x is some kind of container implementing an interface or inheriting from a class that has a standard len(). Witness the confusion we occasionally have when a class that is not implementing a mapping has a get() or keys() method, or something that isn't a file has a write() method.
Saying the same thing in another way, I see "len" as a built-in operation. I'd hate to lose that. [...]
|
Python - time.clock() vs. time.time() - accuracy?
|
Which is better to use for timing in Python? time.clock() or time.time()? Which one provides more accuracy?
for example:
start = time.clock()
... do something
elapsed = (time.clock() - start)
vs.
start = time.time()
... do something
elapsed = (time.time() - start)
|
As of 3.3, time.clock() is deprecated, and it's suggested to use time.process_time() or time.perf_counter() instead.
Previously in 2.7, according to the time module docs:
time.clock()
On Unix, return the current processor time as a floating point number
expressed in seconds. The precision, and in fact the very definition
of the meaning of "processor time", depends on that of the C function
of the same name, but in any case, this is the function to use for
benchmarking Python or timing algorithms.
On Windows, this function returns wall-clock seconds elapsed since the
first call to this function, as a floating point number, based on the
Win32 function QueryPerformanceCounter(). The resolution is typically
better than one microsecond.
Additionally, there is the timeit module for benchmarking code snippets.
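For example, a quick measurement from the interpreter (timeit.timeit as a function is available since Python 2.6; the snippet being timed is arbitrary):
import timeit
print timeit.timeit('"-".join(str(n) for n in range(100))', number=10000)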
|
How do you configure Django for simple development and deployment?
|
I tend to use SQLite when doing Django
development, but on a live server something more robust is
often needed (MySQL/PostgreSQL, for example).
Invariably, there are other changes to make to the Django
settings as well: different logging locations / intensities,
media paths, etc.
How do you manage all these changes to make deployment a
simple, automated process?
|
Update: django-configurations has been released which is probably a better option for most people than doing it manually.
If you would prefer to do things manually, my earlier answer still applies:
I have multiple settings files.
settings_local.py - host-specific configuration, such as database name, file paths, etc.
settings_development.py - configuration used for development, e.g. DEBUG = True.
settings_production.py - configuration used for production, e.g. SERVER_EMAIL.
I tie these all together with a settings.py file that firstly imports settings_local.py, and then one of the other two. It decides which to load by two settings inside settings_local.py - DEVELOPMENT_HOSTS and PRODUCTION_HOSTS. settings.py calls platform.node() to find the hostname of the machine it is running on, and then looks for that hostname in the lists, and loads the second settings file depending on which list it finds the hostname in.
That way, the only thing you really need to worry about is keeping the settings_local.py file up to date with the host-specific configuration, and everything else is handled automatically.
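A minimal sketch of that settings.py dispatcher (the module and list names are as described above; everything else is illustrative):
import platform
from settings_local import *

hostname = platform.node()
if hostname in DEVELOPMENT_HOSTS:
    from settings_development import *
elif hostname in PRODUCTION_HOSTS:
    from settings_production import *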
Check out an example here.
|
How do I unit test an __init__() method of a python class with assertRaises()?
|
I have a class:
class MyClass:
def __init__(self, foo):
if foo != 1:
raise Error("foo is not equal to 1!")
and a unit test that is supposed to make sure the incorrect arg passed to the constructor properly raises an error:
def testInsufficientArgs(self):
foo = 0
self.assertRaises((Error), myClass = MyClass(Error, foo))
But I get...
NameError: global name 'Error' is not defined
Why? Where should I be defining this Error object? I thought it was built-in as a default exception type, no?
|
'Error' in this example could be any exception object. I think perhaps you have read a code example that used it as a metasyntactic placeholder to mean "The Appropriate Exception Class".
The baseclass of all exceptions is called 'Exception', and most of its subclasses are descriptive names of the type of error involved, such as 'OSError', 'ValueError', 'NameError', 'TypeError'.
In this case, the appropriate error is 'ValueError' (the value of foo was wrong, therefore a ValueError). I would recommend replacing 'Error' with 'ValueError' in your script.
Here is a complete version of the code you are trying to write, I'm duplicating everything because you have a weird keyword argument in your original example that you seem to be conflating with an assignment, and I'm using the 'failUnless' function name because that's the non-aliased name of the function:
class MyClass:
def __init__(self, foo):
if foo != 1:
raise ValueError("foo is not equal to 1!")
import unittest
class TestFoo(unittest.TestCase):
def testInsufficientArgs(self):
foo = 0
self.failUnlessRaises(ValueError, MyClass, foo)
if __name__ == '__main__':
unittest.main()
The output is:
.
----------------------------------------------------------------------
Ran 1 test in 0.007s
OK
There is a flaw in the unit testing library 'unittest' that other unit testing frameworks fix. You'll note that it is impossible to gain access to the exception object from the calling context. If you want to fix this, you'll have to redefine that method in a subclass of unittest.TestCase:
This is an example of it in use:
class TestFoo(unittest.TestCase):
def failUnlessRaises(self, excClass, callableObj, *args, **kwargs):
try:
callableObj(*args, **kwargs)
except excClass, excObj:
return excObj # Actually return the exception object
else:
if hasattr(excClass,'__name__'): excName = excClass.__name__
else: excName = str(excClass)
raise self.failureException, "%s not raised" % excName
def testInsufficientArgs(self):
foo = 0
excObj = self.failUnlessRaises(ValueError, MyClass, foo)
self.failUnlessEqual(excObj[0], 'foo is not equal to 1!')
I have copied the failUnlessRaises function from unittest.py from python2.5 and modified it slightly.
|
How do I split a string into a list?
|
If I have this string:
2+24*48/32
what is the most efficient approach for creating this list:
['2', '+', '24', '*', '48', '/', '32']
|
It just so happens that the tokens you want split are already Python tokens, so you can use the built-in tokenize module. It's almost a one-liner:
from cStringIO import StringIO
from tokenize import generate_tokens
STRING = 1
list(token[STRING] for token
in generate_tokens(StringIO('2+24*48/32').readline)
if token[STRING])
['2', '+', '24', '*', '48', '/', '32']
|
Calling an external command in Python
|
How can I call an external command (as if I'd typed it at the Unix shell or Windows command prompt) from within a Python script?
|
Look at the subprocess module in the stdlib:
from subprocess import call
call(["ls", "-l"])
The advantage of subprocess vs system is that it is more flexible (you can get the stdout, stderr, the "real" status code, better error handling, etc...). I think os.system is deprecated, too, or will be:
https://docs.python.org/2/library/subprocess.html#replacing-older-functions-with-the-subprocess-module
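To illustrate the flexibility point, a quick sketch of capturing output and the exit status with subprocess (the 'ls' command is just for illustration):
from subprocess import Popen, PIPE

p = Popen(["ls", "-l"], stdout=PIPE, stderr=PIPE)
out, err = p.communicate()  # waits for the command and collects its output
print out
print "exit code:", p.returncode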
For quick/dirty/one time scripts, os.system is enough, though.
|
How do I verify that a string only contains letters, numbers, underscores and dashes?
|
I know how to do this if I iterate through all of the characters in the string but I am looking for a more elegant method.
Thanks
|
A regular expression will do the trick with very little code:
import re
...
if re.match("^[A-Za-z0-9_-]*$", my_little_string):
# do something here
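Note that \w already matches letters, digits and the underscore, so an equivalent pattern is r"^[\w-]*$" (in Python 2, \w is ASCII-only unless you pass the re.UNICODE flag). Also, an empty string satisfies the * quantifier; use + instead if at least one character is required.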
|
Is there a pretty printer for python data?
|
Working with python interactively, it's sometimes necessary to display a result which is some arbitrarily complex data structure (like lists with embedded lists, etc.)
The default way to display them is just one massive linear dump which just wraps over and over and you have to parse carefully to read it.
Is there something that will take any python object and display it in a more rational manner. e.g.
[0, 1,
[a, b, c],
2, 3, 4]
instead of:
[0, 1, [a, b, c], 2, 3, 4]
I know that's not a very good example, but I think you get the idea.
|
from pprint import pprint
a = [0, 1, ['a', 'b', 'c'], 2, 3, 4]
pprint(a)
Note that for a short list like my example, pprint will in fact print it all on one line. However, for more complex structures it does a pretty good job of pretty printing data.
|
Python, beyond the basics
|
I've gotten to grips with the basics of Python and I've got a small holiday which I want to use some of to learn a little more Python. The problem is that I have no idea what to learn or where to start. I'm primarily web development but in this case I don't know how much difference it will make.
|
Well, there are great resources for advanced Python programming:
Dive Into Python (read it for free)
Online python cookbooks (e.g. here and there)
O'Reilly's Python Cookbook (see amazon)
A fun riddle game: Python Challenge
Here is a list of subjects you must master if you want to write "Python" on your resume :
list comprehensions
iterators and generators
decorators
They are what make Python such a cool language (with the standard library of course, that I keep discovering everyday).
|
Stripping non printable characters from a string in python
|
I used to run
$s =~ s/[^[:print:]]//g;
on Perl to get rid of non printable characters.
In Python there's no POSIX regex classes, and I can't write [:print:] having it mean what I want. I know of no way in Python to detect if a character is printable or not.
What would you do?
EDIT: It has to support Unicode characters as well. The string.printable way will happily strip them out of the output.
curses.ascii.isprint will return false for any unicode character.
|
Iterating over strings is unfortunately rather slow in Python. Regular expressions are over an order of magnitude faster for this kind of thing. You just have to build the character class yourself. The unicodedata module is quite helpful for this, especially the unicodedata.category() function. See Unicode Character Database for descriptions of the categories.
import unicodedata, re
all_chars = (unichr(i) for i in xrange(0x110000))
control_chars = ''.join(c for c in all_chars if unicodedata.category(c) == 'Cc')
# or equivalently and much more efficiently
control_chars = ''.join(map(unichr, range(0,32) + range(127,160)))
control_char_re = re.compile('[%s]' % re.escape(control_chars))
def remove_control_chars(s):
return control_char_re.sub('', s)
|
time.sleep -- sleeps thread or process?
|
In Python for the *nix, does time.sleep() block the thread or the process?
|
It blocks the thread. If you look in Modules/timemodule.c in the Python source, you'll see that in the call to floatsleep(), the substantive part of the sleep operation is wrapped in a Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS block, allowing other threads to continue to execute while the current one sleeps. You can also test this with a simple python program:
import time
from threading import Thread
class worker(Thread):
def run(self):
for x in xrange(0,11):
print x
time.sleep(1)
class waiter(Thread):
def run(self):
for x in xrange(100,103):
print x
time.sleep(5)
def run():
worker().start()
waiter().start()
Which will print:
>>> thread_test.run()
0
100
>>> 1
2
3
4
5
101
6
7
8
9
10
102
|
What's the easiest non-memory intensive way to output XML from Python?
|
Basically, something similar to System.Xml.XmlWriter - A streaming XML Writer that doesn't incur much of a memory overhead. So that rules out xml.dom and xml.dom.minidom. Suggestions?
|
I think you'll find XMLGenerator from xml.sax.saxutils is the closest thing to what you want.
import time
from xml.sax.saxutils import XMLGenerator
from xml.sax.xmlreader import AttributesNSImpl
LOG_LEVELS = ['DEBUG', 'WARNING', 'ERROR']
class xml_logger:
def __init__(self, output, encoding):
"""
Set up a logger object, which takes SAX events and outputs
an XML log file
"""
logger = XMLGenerator(output, encoding)
logger.startDocument()
attrs = AttributesNSImpl({}, {})
logger.startElementNS((None, u'log'), u'log', attrs)
self._logger = logger
self._output = output
self._encoding = encoding
return
def write_entry(self, level, msg):
"""
Write a log entry to the logger
level - the level of the entry
msg - the text of the entry. Must be a Unicode object
"""
#Note: in a real application, I would use ISO 8601 for the date
#asctime used here for simplicity
now = time.asctime(time.localtime())
attr_vals = {
(None, u'date'): now,
(None, u'level'): LOG_LEVELS[level],
}
attr_qnames = {
(None, u'date'): u'date',
(None, u'level'): u'level',
}
attrs = AttributesNSImpl(attr_vals, attr_qnames)
self._logger.startElementNS((None, u'entry'), u'entry', attrs)
self._logger.characters(msg)
self._logger.endElementNS((None, u'entry'), u'entry')
return
def close(self):
"""
Clean up the logger object
"""
self._logger.endElementNS((None, u'log'), u'log')
self._logger.endDocument()
return
if __name__ == "__main__":
#Test it out
import sys
xl = xml_logger(sys.stdout, 'utf-8')
xl.write_entry(2, u"Vanilla log entry")
xl.close()
You'll probably want to look at the rest of the article I got that from at http://www.xml.com/pub/a/2003/03/12/py-xml.html.
|
How do I persist to disk a temporary file using Python?
|
I am attempting to use the 'tempfile' module for manipulating and creating text files. Once the file is ready I want to save it to disk. I thought it would be as simple as using 'shutil.copy'. However, I get a 'permission denied' IOError:
>>> import tempfile, shutil
>>> f = tempfile.TemporaryFile(mode ='w+t')
>>> f.write('foo')
>>> shutil.copy(f.name, 'bar.txt')
Traceback (most recent call last):
File "<pyshell#5>", line 1, in <module>
shutil.copy(f.name, 'bar.txt')
File "C:\Python25\lib\shutil.py", line 80, in copy
copyfile(src, dst)
File "C:\Python25\lib\shutil.py", line 46, in copyfile
fsrc = open(src, 'rb')
IOError: [Errno 13] Permission denied: 'c:\\docume~1\\me\\locals~1\\temp\\tmpvqq3go'
>>>
Is this not intended when using the 'tempfile' library? Is there a better way to do this? (Maybe I am overlooking something very trivial)
|
hop is right, and dF. is incorrect on why the error occurs.
Since you haven't called f.close() yet, the file is not removed.
The doc for NamedTemporaryFile says:
Whether the name can be used to open the file a second time, while the named temporary file is still open, varies across platforms (it can be so used on Unix; it cannot on Windows NT or later).
And for TemporaryFile:
Under Unix, the directory entry for the file is removed immediately after the file is created. Other platforms do not support this; your code should not rely on a temporary file created using this function having or not having a visible name in the file system.
Therefore, to persist a temporary file (on Windows), you can do the following:
>>> import tempfile, shutil
>>> f = tempfile.NamedTemporaryFile(mode='w+t', delete=False)
>>> f.write('foo')
>>> file_name = f.name
>>> f.close()
>>> shutil.copy(file_name, 'bar.txt')
The solution Hans Sjunnesson provided is also off, because copyfileobj only copies from file-like object to file-like object, not file name:
shutil.copyfileobj(fsrc, fdst[, length])
Copy the contents of the file-like object fsrc to the file-like object fdst. The integer length, if given, is the buffer size. In particular, a negative length value means to copy the data without looping over the source data in chunks; by default the data is read in chunks to avoid uncontrolled memory consumption. Note that if the current file position of the fsrc object is not 0, only the contents from the current file position to the end of the file will be copied.
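That said, if you do want to copy out of the still-open temporary file (which sidesteps the Windows reopen problem entirely), a sketch along those lines: rewind first, then copy object-to-object.
import tempfile, shutil
f = tempfile.TemporaryFile(mode='w+t')
f.write('foo')
f.seek(0)                        # rewind so copyfileobj starts at the beginning
dst = open('bar.txt', 'w')
shutil.copyfileobj(f, dst)       # file object to file object, no name needed
dst.close()
f.close()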
|
What is the difference between range and xrange functions in Python 2.X?
|
Apparently xrange is faster, but I have no idea why (and no proof beyond the anecdotal so far that it is), or what besides that is different about
for i in range(0, 20):
for i in xrange(0, 20):
|
range creates a list, so if you do range(1, 10000000) it creates a list in memory with 9999999 elements.
xrange is a sequence object that evaluates lazily.
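A quick way to see the difference (an illustrative snippet; sys.getsizeof needs Python 2.6+, and the exact figures vary by platform):
import sys
big_list = range(1000000)      # builds the full million-element list in memory
lazy = xrange(1000000)         # stores only start, stop and step
print sys.getsizeof(big_list)  # several megabytes on most builds
print sys.getsizeof(lazy)      # a few dozen bytes, no matter how large the range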
|
How do I use Django templates without the rest of Django?
|
I want to use the Django template engine in my (Python) code, but I'm not building a Django-based web site. How do I use it without having a settings.py file (and others) and having to set the DJANGO_SETTINGS_MODULE environment variable?
If I run the following code:
>>> import django.template
>>> from django.template import Template, Context
>>> t = Template('My name is {{ my_name }}.')
I get:
ImportError: Settings cannot be imported, because environment variable DJANGO_SETTINGS_MODULE is undefined.
|
The solution is simple. It's actually well documented, but not too easy to find. (I had to dig around -- it didn't come up when I tried a few different Google searches.)
The following code works:
>>> from django.template import Template, Context
>>> from django.conf import settings
>>> settings.configure()
>>> t = Template('My name is {{ my_name }}.')
>>> c = Context({'my_name': 'Daryl Spitzer'})
>>> t.render(c)
u'My name is Daryl Spitzer.'
See the Django documentation (linked above) for a description of some of the settings you may want to define (as keyword arguments to configure).
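For instance, you might pass a setting or two directly; note that configure() may only be called once per process (the directory path below is hypothetical):
from django.conf import settings
settings.configure(DEBUG=False,
                   TEMPLATE_DIRS=('/path/to/templates',))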
|
What is the best solution for database connection pooling in python?
|
I have developed some custom DAO-like classes to meet some very specialized requirements for my project that is a server-side process that does not run inside any kind of framework.
The solution works great except that every time a new request is made, I open a new connection via MySQLdb.connect.
What is the best "drop in" solution to switch this over to using connection pooling in python? I am imagining something like the commons DBCP solution for Java.
The process is long running and has many threads that need to make requests, but not all at the same time... specifically they do quite a lot of work before brief bursts of writing out a chunk of their results.
Edited to add:
After some more searching I found anitpool.py which looks decent, but as I'm relatively new to python I guess I just want to make sure I'm not missing a more obvious/more idiomatic/better solution.
|
In MySQL?
I'd say don't bother with the connection pooling. They're often a source of trouble and with MySQL they're not going to bring you the performance advantage you're hoping for. This road may be a lot of effort to follow--politically--because there's so much best practices hand waving and textbook verbiage in this space about the advantages of connection pooling.
Connection pools are simply a bridge between the post-web era of stateless applications (e.g. the HTTP protocol) and the pre-web era of stateful, long-lived batch-processing applications. Connections were very expensive to establish in pre-web databases (nobody cared much, because connections were held open for hours), so post-web applications devised the connection pool scheme so that every hit didn't incur this huge setup overhead on the RDBMS.
Since MySQL is more of a web-era RDBMS, connections are extremely lightweight and fast. I have written many high volume web applications that don't use a connection pool at all for MySQL.
This is a complication you may benefit from doing without, so long as there isn't a political obstacle to overcome.
|
What is a metaclass in Python?
|
What are metaclasses? What do you use them for?
|
Classes as objects
Before understanding metaclasses, you need to master classes in Python. And Python has a very peculiar idea of what classes are, borrowed from the Smalltalk language.
In most languages, classes are just pieces of code that describe how to produce an object. That's kinda true in Python too:
>>> class ObjectCreator(object):
... pass
...
>>> my_object = ObjectCreator()
>>> print(my_object)
<__main__.ObjectCreator object at 0x8974f2c>
But classes are more than that in Python. Classes are objects too.
Yes, objects.
As soon as you use the keyword class, Python executes it and creates
an OBJECT. The instruction
>>> class ObjectCreator(object):
... pass
...
creates in memory an object with the name "ObjectCreator".
This object (the class) is itself capable of creating objects (the instances),
and this is why it's a class.
But still, it's an object, and therefore:
you can assign it to a variable
you can copy it
you can add attributes to it
you can pass it as a function parameter
e.g.:
>>> print(ObjectCreator) # you can print a class because it's an object
<class '__main__.ObjectCreator'>
>>> def echo(o):
... print(o)
...
>>> echo(ObjectCreator) # you can pass a class as a parameter
<class '__main__.ObjectCreator'>
>>> print(hasattr(ObjectCreator, 'new_attribute'))
False
>>> ObjectCreator.new_attribute = 'foo' # you can add attributes to a class
>>> print(hasattr(ObjectCreator, 'new_attribute'))
True
>>> print(ObjectCreator.new_attribute)
foo
>>> ObjectCreatorMirror = ObjectCreator # you can assign a class to a variable
>>> print(ObjectCreatorMirror.new_attribute)
foo
>>> print(ObjectCreatorMirror())
<__main__.ObjectCreator object at 0x8997b4c>
Creating classes dynamically
Since classes are objects, you can create them on the fly, like any object.
First, you can create a class in a function using class:
>>> def choose_class(name):
... if name == 'foo':
... class Foo(object):
... pass
... return Foo # return the class, not an instance
... else:
... class Bar(object):
... pass
... return Bar
...
>>> MyClass = choose_class('foo')
>>> print(MyClass) # the function returns a class, not an instance
<class '__main__.Foo'>
>>> print(MyClass()) # you can create an object from this class
<__main__.Foo object at 0x89c6d4c>
But it's not so dynamic, since you still have to write the whole class yourself.
Since classes are objects, they must be generated by something.
When you use the class keyword, Python creates this object automatically. But as
with most things in Python, it gives you a way to do it manually.
Remember the function type? The good old function that lets you know what
type an object is:
>>> print(type(1))
<type 'int'>
>>> print(type("1"))
<type 'str'>
>>> print(type(ObjectCreator))
<type 'type'>
>>> print(type(ObjectCreator()))
<class '__main__.ObjectCreator'>
Well, type also has a completely different ability: it can create classes on the fly. type can take the description of a class as parameters,
and return a class.
(I know, it's silly that the same function can have two completely different uses according to the parameters you pass to it. It's an issue due to backwards
compatibility in Python)
type works this way:
type(name of the class,
tuple of the parent class (for inheritance, can be empty),
dictionary containing attributes names and values)
e.g.:
>>> class MyShinyClass(object):
... pass
can be created manually this way:
>>> MyShinyClass = type('MyShinyClass', (), {}) # returns a class object
>>> print(MyShinyClass)
<class '__main__.MyShinyClass'>
>>> print(MyShinyClass()) # create an instance with the class
<__main__.MyShinyClass object at 0x8997cec>
You'll notice that we use "MyShinyClass" as the name of the class
and as the variable to hold the class reference. They can be different,
but there is no reason to complicate things.
type accepts a dictionary to define the attributes of the class. So:
>>> class Foo(object):
... bar = True
Can be translated to:
>>> Foo = type('Foo', (), {'bar':True})
And used as a normal class:
>>> print(Foo)
<class '__main__.Foo'>
>>> print(Foo.bar)
True
>>> f = Foo()
>>> print(f)
<__main__.Foo object at 0x8a9b84c>
>>> print(f.bar)
True
And of course, you can inherit from it, so:
>>> class FooChild(Foo):
... pass
would be:
>>> FooChild = type('FooChild', (Foo,), {})
>>> print(FooChild)
<class '__main__.FooChild'>
>>> print(FooChild.bar) # bar is inherited from Foo
True
Eventually you'll want to add methods to your class. Just define a function
with the proper signature and assign it as an attribute.
>>> def echo_bar(self):
... print(self.bar)
...
>>> FooChild = type('FooChild', (Foo,), {'echo_bar': echo_bar})
>>> hasattr(Foo, 'echo_bar')
False
>>> hasattr(FooChild, 'echo_bar')
True
>>> my_foo = FooChild()
>>> my_foo.echo_bar()
True
And you can add even more methods after you dynamically create the class, just like adding methods to a normally created class object.
>>> def echo_bar_more(self):
... print('yet another method')
...
>>> FooChild.echo_bar_more = echo_bar_more
>>> hasattr(FooChild, 'echo_bar_more')
True
You see where we are going: in Python, classes are objects, and you can create a class on the fly, dynamically.
This is what Python does when you use the keyword class, and it does so by using a metaclass.
What are metaclasses (finally)
Metaclasses are the 'stuff' that creates classes.
You define classes in order to create objects, right?
But we learned that Python classes are objects.
Well, metaclasses are what create these objects. They are the classes' classes,
you can picture them this way:
MyClass = MetaClass()
MyObject = MyClass()
You've seen that type lets you do something like this:
MyClass = type('MyClass', (), {})
It's because the function type is in fact a metaclass. type is the
metaclass Python uses to create all classes behind the scenes.
Now you might wonder why the heck it's written in lowercase, and not Type?
Well, I guess it's a matter of consistency with str, the class that creates
strings objects, and int the class that creates integer objects. type is
just the class that creates class objects.
You see that by checking the __class__ attribute.
Everything, and I mean everything, is an object in Python. That includes ints,
strings, functions and classes. All of them are objects. And all of them have
been created from a class:
>>> age = 35
>>> age.__class__
<type 'int'>
>>> name = 'bob'
>>> name.__class__
<type 'str'>
>>> def foo(): pass
>>> foo.__class__
<type 'function'>
>>> class Bar(object): pass
>>> b = Bar()
>>> b.__class__
<class '__main__.Bar'>
Now, what is the __class__ of any __class__ ?
>>> age.__class__.__class__
<type 'type'>
>>> name.__class__.__class__
<type 'type'>
>>> foo.__class__.__class__
<type 'type'>
>>> b.__class__.__class__
<type 'type'>
So, a metaclass is just the stuff that creates class objects.
You can call it a 'class factory' if you wish.
type is the built-in metaclass Python uses, but of course, you can create your
own metaclass.
The __metaclass__ attribute
You can add a __metaclass__ attribute when you write a class:
class Foo(object):
__metaclass__ = something...
[...]
If you do so, Python will use the metaclass to create the class Foo.
Careful, it's tricky.
You write class Foo(object) first, but the class object Foo is not created
in memory yet.
Python will look for __metaclass__ in the class definition. If it finds it,
it will use it to create the object class Foo. If it doesn't, it will use
type to create the class.
Read that several times.
When you do:
class Foo(Bar):
pass
Python does the following:
Is there a __metaclass__ attribute in Foo?
If yes, create in memory a class object (I said a class object, stay with me here), with the name Foo by using what is in __metaclass__.
If Python can't find __metaclass__, it will look for a __metaclass__ at the MODULE level, and try to do the same (but only for classes that don't inherit anything, basically old-style classes).
Then if it can't find any __metaclass__ at all, it will use the Bar's (the first parent) own metaclass (which might be the default type) to create the class object.
Be careful here: the __metaclass__ attribute will not be inherited; the metaclass of the parent (Bar.__class__) will be. If Bar used a __metaclass__ attribute that created Bar with type() (and not type.__new__()), the subclasses will not inherit that behavior.
Now the big question is, what can you put in __metaclass__ ?
The answer is: something that can create a class.
And what can create a class? type, or anything that subclasses or uses it.
Custom metaclasses
The main purpose of a metaclass is to change the class automatically,
when it's created.
You usually do this for APIs, where you want to create classes matching the
current context.
Imagine a stupid example, where you decide that all classes in your module
should have their attributes written in uppercase. There are several ways to
do this, but one way is to set __metaclass__ at the module level.
This way, all classes of this module will be created using this metaclass,
and we just have to tell the metaclass to turn all attributes to uppercase.
Luckily, __metaclass__ can actually be any callable, it doesn't need to be a
formal class (I know, something with 'class' in its name doesn't need to be
a class, go figure... but it's helpful).
So we will start with a simple example, by using a function.
# the metaclass will automatically get passed the same argument
# that you usually pass to `type`
def upper_attr(future_class_name, future_class_parents, future_class_attr):
"""
Return a class object, with the list of its attribute turned
into uppercase.
"""
# pick up any attribute that doesn't start with '__' and uppercase it
uppercase_attr = {}
for name, val in future_class_attr.items():
if not name.startswith('__'):
uppercase_attr[name.upper()] = val
else:
uppercase_attr[name] = val
# let `type` do the class creation
return type(future_class_name, future_class_parents, uppercase_attr)
__metaclass__ = upper_attr # this will affect all classes in the module
class Foo(): # global __metaclass__ won't work with "object" though
# but we can define __metaclass__ here instead to affect only this class
# and this will work with "object" children
bar = 'bip'
print(hasattr(Foo, 'bar'))
# Out: False
print(hasattr(Foo, 'BAR'))
# Out: True
f = Foo()
print(f.BAR)
# Out: 'bip'
Now, let's do exactly the same, but using a real class for a metaclass:
# remember that `type` is actually a class like `str` and `int`
# so you can inherit from it
class UpperAttrMetaclass(type):
# __new__ is the method called before __init__
# it's the method that creates the object and returns it
# while __init__ just initializes the object passed as parameter
# you rarely use __new__, except when you want to control how the object
# is created.
# here the created object is the class, and we want to customize it
# so we override __new__
# you can do some stuff in __init__ too if you wish
# some advanced use involves overriding __call__ as well, but we won't
# see this
def __new__(upperattr_metaclass, future_class_name,
future_class_parents, future_class_attr):
uppercase_attr = {}
for name, val in future_class_attr.items():
if not name.startswith('__'):
uppercase_attr[name.upper()] = val
else:
uppercase_attr[name] = val
return type(future_class_name, future_class_parents, uppercase_attr)
But this is not really OOP. We call type directly and we don't override
or call the parent __new__. Let's do it:
class UpperAttrMetaclass(type):
def __new__(upperattr_metaclass, future_class_name,
future_class_parents, future_class_attr):
uppercase_attr = {}
for name, val in future_class_attr.items():
if not name.startswith('__'):
uppercase_attr[name.upper()] = val
else:
uppercase_attr[name] = val
# reuse the type.__new__ method
# this is basic OOP, nothing magic in there
return type.__new__(upperattr_metaclass, future_class_name,
future_class_parents, uppercase_attr)
You may have noticed the extra argument upperattr_metaclass. There is
nothing special about it: __new__ always receives the class it's defined in, as first parameter. Just like you have self for ordinary methods which receive the instance as first parameter, or the defining class for class methods.
Of course, the names I used here are long for the sake of clarity, but like
for self, all the arguments have conventional names. So a real production
metaclass would look like this:
class UpperAttrMetaclass(type):
def __new__(cls, clsname, bases, dct):
uppercase_attr = {}
for name, val in dct.items():
if not name.startswith('__'):
uppercase_attr[name.upper()] = val
else:
uppercase_attr[name] = val
return type.__new__(cls, clsname, bases, uppercase_attr)
We can make it even cleaner by using super, which will ease inheritance (because yes, you can have metaclasses, inheriting from metaclasses, inheriting from type):
class UpperAttrMetaclass(type):
def __new__(cls, clsname, bases, dct):
uppercase_attr = {}
for name, val in dct.items():
if not name.startswith('__'):
uppercase_attr[name.upper()] = val
else:
uppercase_attr[name] = val
return super(UpperAttrMetaclass, cls).__new__(cls, clsname, bases, uppercase_attr)
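A quick check that this class-based version behaves just like the earlier function (Python 2 syntax for __metaclass__):
class Foo(object):
    __metaclass__ = UpperAttrMetaclass
    bar = 'bip'

print(hasattr(Foo, 'bar'))  # Out: False
print(Foo.BAR)              # Out: bip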
That's it. There is really nothing more about metaclasses.
The reason behind the complexity of the code using metaclasses is not because
of metaclasses, it's because you usually use metaclasses to do twisted stuff
relying on introspection, manipulating inheritance, vars such as __dict__, etc.
Indeed, metaclasses are especially useful to do black magic, and therefore
complicated stuff. But by themselves, they are simple:
intercept a class creation
modify the class
return the modified class
Why would you use metaclass classes instead of functions?
Since __metaclass__ can accept any callable, why would you use a class
since it's obviously more complicated?
There are several reasons to do so:
The intention is clear. When you read UpperAttrMetaclass(type), you know
what's going to follow
You can use OOP. Metaclasses can inherit from metaclasses and override parent methods. They can even use metaclasses themselves.
You can structure your code better. You never use metaclasses for something as
trivial as the above example. It's usually for something complicated. Having the
ability to make several methods and group them in one class is very useful
to make the code easier to read.
You can hook on __new__, __init__ and __call__. Which will allow
you to do different stuff. Even if usually you can do it all in __new__,
some people are just more comfortable using __init__.
These are called metaclasses, damn it! It must mean something!
Why would you use metaclasses?
Now the big question. Why would you use some obscure error prone feature?
Well, usually you don't:
Metaclasses are deeper magic that
99% of users should never worry about.
If you wonder whether you need them,
you don't (the people who actually
need them know with certainty that
they need them, and don't need an
explanation about why).
Python Guru Tim Peters
The main use case for a metaclass is creating an API. A typical example of this is the Django ORM.
It allows you to define something like this:
class Person(models.Model):
name = models.CharField(max_length=30)
age = models.IntegerField()
But if you do this:
guy = Person(name='bob', age='35')
print(guy.age)
It won't return an IntegerField object. It will return an int, and can even take it directly from the database.
This is possible because models.Model defines __metaclass__ and
it uses some magic that will turn the Person you just defined with simple statements
into a complex hook to a database field.
Django makes something complex look simple by exposing a simple API
and using metaclasses, recreating code from this API to do the real job
behind the scenes.
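To give a feel for it, here is a toy sketch of the field-collecting idea. This is not Django's actual code, just the general shape of the trick:
class Field(object):
    pass

class ModelMeta(type):
    def __new__(cls, name, bases, dct):
        # gather every Field declared on the class, the way an ORM would
        dct['_fields'] = dict((k, v) for k, v in dct.items()
                              if isinstance(v, Field))
        return type.__new__(cls, name, bases, dct)

class Model(object):
    __metaclass__ = ModelMeta  # every subclass is now built by ModelMeta

class Person(Model):
    name = Field()
    age = Field()

print(sorted(Person._fields.keys()))  # Out: ['age', 'name']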
The last word
First, you know that classes are objects that can create instances.
Well in fact, classes are themselves instances. Of metaclasses.
>>> class Foo(object): pass
>>> id(Foo)
142630324
Everything is an object in Python, and they are all either instances of classes
or instances of metaclasses.
Except for type.
type is actually its own metaclass. This is not something you could
reproduce in pure Python, and is done by cheating a little bit at the implementation
level.
Secondly, metaclasses are complicated. You may not want to use them for
very simple class alterations. You can change classes by using two different techniques:
monkey patching
class decorators (see the sketch below)
99% of the time you need class alteration, you are better off using these.
But 99% of the time, you don't need class alteration at all.
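As an illustration, here is the uppercase trick from earlier written as a class decorator instead (a sketch; decorator syntax on classes needs Python 2.6+):
def upper_attrs(cls):
    # rebuild the class with its non-dunder attributes uppercased
    uppercase_attr = {}
    for name, val in vars(cls).items():
        if not name.startswith('__'):
            uppercase_attr[name.upper()] = val
    return type(cls.__name__, cls.__bases__, uppercase_attr)

@upper_attrs
class Foo(object):
    bar = 'bip'

print(hasattr(Foo, 'BAR'))  # Out: True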
|
What is the standard way to add N seconds to datetime.time in Python?
|
Given a datetime.time value in Python, is there a standard way to add an integer number of seconds to it, so that 11:34:59 + 3 = 11:35:02, for example?
These obvious ideas don't work:
>>> datetime.time(11, 34, 59) + 3
TypeError: unsupported operand type(s) for +: 'datetime.time' and 'int'
>>> datetime.time(11, 34, 59) + datetime.timedelta(0, 3)
TypeError: unsupported operand type(s) for +: 'datetime.time' and 'datetime.timedelta'
>>> datetime.time(11, 34, 59) + datetime.time(0, 0, 3)
TypeError: unsupported operand type(s) for +: 'datetime.time' and 'datetime.time'
In the end I have written functions like this:
def add_secs_to_time(timeval, secs_to_add):
secs = timeval.hour * 3600 + timeval.minute * 60 + timeval.second
secs += secs_to_add
return datetime.time(secs // 3600, (secs % 3600) // 60, secs % 60)
I can't help thinking that I'm missing an easier way to do this though.
Related
python time + timedelta equivalent
|
You can use full datetime objects with timedelta: provide a dummy date for the arithmetic, then call time() to get just the time value.
For example:
import datetime
a = datetime.datetime(100,1,1,11,34,59)
b = a + datetime.timedelta(0,3) # days, seconds, then other fields.
print a.time()
print b.time()
results in the two values, three seconds apart:
11:34:59
11:35:02
You could also opt for the more readable
b = a + datetime.timedelta(seconds=3)
if you're so inclined.
If you're after a function that can do this, you can look into using addSecs below:
import datetime
def addSecs(tm, secs):
fulldate = datetime.datetime(100, 1, 1, tm.hour, tm.minute, tm.second)
fulldate = fulldate + datetime.timedelta(seconds=secs)
return fulldate.time()
a = datetime.datetime.now().time()
b = addSecs(a, 300)
print a
print b
This outputs:
09:11:55.775695
09:16:55
|
How can I analyze Python code to identify problematic areas?
|
I have a large source repository split across multiple projects. I would like to produce a report about the health of the source code, identifying problem areas that need to be addressed.
Specifically, I'd like to call out routines with a high cyclomatic complexity, identify repetition, and perhaps run some lint-like static analysis to spot suspicious (and thus likely erroneous) constructs.
How might I go about constructing such a report?
|
For measuring cyclomatic complexity, there's a nice tool available at traceback.org. The page also gives a good overview of how to interpret the results.
+1 for pylint. It is great at verifying adherence to coding standards (be it PEP8 or your own organization's variant), which can in the end help to reduce cyclomatic complexity.
|
Python on Windows - how to wait for multiple child processes?
|
How do you wait for multiple child processes in Python on Windows, without busy-waiting (polling)? Something like this almost works for me:
proc1 = subprocess.Popen(['python','mytest.py'])
proc2 = subprocess.Popen(['python','mytest.py'])
proc1.wait()
print "1 finished"
proc2.wait()
print "2 finished"
The problem is that when proc2 finishes before proc1, the parent process will still wait for proc1. On Unix one would use waitpid(0) in a loop to get the child processes' return codes as they finish - how to achieve something like this in Python on Windows?
|
It might seem overkill, but, here it goes:
import Queue, thread, subprocess
results= Queue.Queue()
def process_waiter(popen, description, que):
try: popen.wait()
finally: que.put( (description, popen.returncode) )
process_count= 0
proc1= subprocess.Popen( ['python', 'mytest.py'] )
thread.start_new_thread(process_waiter,
(proc1, "1 finished", results))
process_count+= 1
proc2= subprocess.Popen( ['python', 'mytest.py'] )
thread.start_new_thread(process_waiter,
(proc2, "2 finished", results))
process_count+= 1
# etc
while process_count > 0:
description, rc= results.get()
print "job", description, "ended with rc =", rc
process_count-= 1
|
Why is "if not someobj:" better than "if someobj == None:" in Python?
|
I've seen several examples of code like this:
if not someobj:
#do something
But I'm wondering why not doing:
if someobj == None:
#do something
Is there any difference? Does one have an advantage over the other?
|
In the first test, Python tries to convert the object to a bool value if it is not already one. Roughly, we are asking the object: are you meaningful or not? This is done using the following algorithm:
If the object has a __nonzero__ special method (as do numeric built-ins, int and float), it calls this method. It must either return a bool value which is then directly used, or an int value that is considered False if equal to zero.
Otherwise, if the object has a __len__ special method (as do container built-ins, list, dict, set, tuple, ...), it calls this method, considering a container False if it is empty (length is zero).
Otherwise, the object is considered True unless it is None in which case, it is considered False.
In the second test, the object is compared for equality to None. Here, we are asking the object, "Are you equal to this other value?" This is done using the following algorithm:
If the object has a __eq__ method, it is called, and the return value is then converted to a bool value and used to determine the outcome of the if.
Otherwise, if the object has a __cmp__ method, it is called. This function must return an int indicating the ordering of the two objects (-1 if self < other, 0 if self == other, +1 if self > other).
Otherwise, the objects are compared for identity (i.e. they are references to the same object, as can be tested by the is operator).
There is another test possible using the is operator. We would be asking the object, "Are you this particular object?"
Generally, I would recommend using the first test with non-numerical values, using the test for equality when you want to compare objects of the same nature (two strings, two numbers, ...), and checking for identity only when using sentinel values (None meaning not initialized for a member field, for example, or when using the getattr or __getitem__ methods).
To summarize, we have :
>>> class A(object):
... def __repr__(self):
... return 'A()'
... def __nonzero__(self):
... return False
>>> class B(object):
... def __repr__(self):
... return 'B()'
... def __len__(self):
... return 0
>>> class C(object):
... def __repr__(self):
... return 'C()'
... def __cmp__(self, other):
... return 0
>>> class D(object):
... def __repr__(self):
... return 'D()'
... def __eq__(self, other):
... return True
>>> for obj in ['', (), [], {}, 0, 0., A(), B(), C(), D(), None]:
... print '%4s: bool(obj) -> %5s, obj == None -> %5s, obj is None -> %5s' % \
... (repr(obj), bool(obj), obj == None, obj is None)
'': bool(obj) -> False, obj == None -> False, obj is None -> False
(): bool(obj) -> False, obj == None -> False, obj is None -> False
[]: bool(obj) -> False, obj == None -> False, obj is None -> False
{}: bool(obj) -> False, obj == None -> False, obj is None -> False
0: bool(obj) -> False, obj == None -> False, obj is None -> False
0.0: bool(obj) -> False, obj == None -> False, obj is None -> False
A(): bool(obj) -> False, obj == None -> False, obj is None -> False
B(): bool(obj) -> False, obj == None -> False, obj is None -> False
C(): bool(obj) -> True, obj == None -> True, obj is None -> False
D(): bool(obj) -> True, obj == None -> True, obj is None -> False
None: bool(obj) -> False, obj == None -> True, obj is None -> True
|
How do I read text from the (windows) clipboard from python?
|
How do I read text from the (windows) clipboard from python?
|
You can use the module called win32clipboard, which is part of pywin32.
Here is an example that first sets the clipboard data then gets it:
import win32clipboard
# set clipboard data
win32clipboard.OpenClipboard()
win32clipboard.EmptyClipboard()
win32clipboard.SetClipboardText('testing 123')
win32clipboard.CloseClipboard()
# get clipboard data
win32clipboard.OpenClipboard()
data = win32clipboard.GetClipboardData()
win32clipboard.CloseClipboard()
print data
An important reminder from the documentation:
When the window has finished examining or changing the clipboard,
close the clipboard by calling CloseClipboard. This enables other
windows to access the clipboard. Do not place an object on the
clipboard after calling CloseClipboard.
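If you need Unicode text specifically, you can ask for it by format; the try/finally ensures the clipboard is released even if that format isn't available:
import win32clipboard
win32clipboard.OpenClipboard()
try:
    data = win32clipboard.GetClipboardData(win32clipboard.CF_UNICODETEXT)
finally:
    win32clipboard.CloseClipboard()  # always give the clipboard back
print data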
|
Is there a zip-like method in .Net?
|
In Python there is a really neat function called zip which can be used to iterate through two lists at the same time:
list1 = [1, 2, 3]
list2 = ["a", "b", "c"]
for v1, v2 in zip(list1, list2):
print v1 + " " + v2
The above code should produce the following:
1 a
2 b
3 c
I wonder if there is a method like it available in .Net? I'm thinking about writing it myself, but there is no point if it's already available.
|
Update: It is built-in in C# 4 as System.Linq.Enumerable.Zip Method
Here is a C# 3 version:
IEnumerable<TResult> Zip<TResult,T1,T2>
(IEnumerable<T1> a,
IEnumerable<T2> b,
Func<T1,T2,TResult> combine)
{
using (var f = a.GetEnumerator())
using (var s = b.GetEnumerator())
{
while (f.MoveNext() && s.MoveNext())
yield return combine(f.Current, s.Current);
}
}
Dropped the C# 2 version as it was showing its age.
|
How do you access an authenticated Google App Engine service from a (non-web) python client?
|
I have a Google App Engine app - http://mylovelyapp.appspot.com/
It has a page - mylovelypage
For the moment, the page just does self.response.out.write('OK')
If I run the following Python at my computer:
import urllib2
f = urllib2.urlopen("http://mylovelyapp.appspot.com/mylovelypage")
s = f.read()
print s
f.close()
it prints "OK"
the problem is if I add login:required to this page in the app's yaml
then this prints out the HTML of the Google Accounts login page
I've tried "normal" authentication approaches. e.g.
passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
auth_handler = urllib2.HTTPBasicAuthHandler()
auth_handler.add_password(None,
uri='http://mylovelyapp.appspot.com/mylovelypage',
user='billy.bob@gmail.com',
passwd='billybobspasswd')
opener = urllib2.build_opener(auth_handler)
urllib2.install_opener(opener)
But it makes no difference - I still get the login page's HTML back.
I've tried Google's ClientLogin auth API, but I can't get it to work.
h = httplib2.Http()
auth_uri = 'https://www.google.com/accounts/ClientLogin'
headers = {'Content-Type': 'application/x-www-form-urlencoded'}
myrequest = "Email=%s&Passwd=%s&service=ah&source=DALELANE-0.0" % ("billy.bob@gmail.com", "billybobspassword")
response, content = h.request(auth_uri, 'POST', body=myrequest, headers=headers)
if response['status'] == '200':
authtok = re.search('Auth=(\S*)', content).group(1)
headers = {}
headers['Authorization'] = 'GoogleLogin auth=%s' % authtok.strip()
headers['Content-Length'] = '0'
response, content = h.request("http://mylovelyapp.appspot.com/mylovelypage",
'POST',
body="",
headers=headers)
while response['status'] == "302":
response, content = h.request(response['location'], 'POST', body="", headers=headers)
print content
I do seem to be able to get some token correctly, but attempts to use it in the header when I call 'mylovelypage' still just return me the login page's HTML. :-(
Can anyone help, please?
Could I use the GData client library to do this sort of thing? From
what I've read, I think it should be able to access App Engine apps,
but I haven't been any more successful at getting the authentication working for App Engine stuff there either
Any pointers to samples, articles, or even just keywords I should be
searching for to get me started, would be very much appreciated.
Thanks!
|
appcfg.py, the tool that uploads data to App Engine has to do exactly this to authenticate itself with the App Engine server. The relevant functionality is abstracted into appengine_rpc.py. In a nutshell, the solution is:
Use the Google ClientLogin API to obtain an authentication token. appengine_rpc.py does this in _GetAuthToken
Send the auth token to a special URL on your App Engine app. That page then returns a cookie and a 302 redirect. Ignore the redirect and store the cookie. appcfg.py does this in _GetAuthCookie
Use the returned cookie in all future requests.
You may also want to look at _Authenticate, to see how appcfg handles the various return codes from ClientLogin, and _GetOpener, to see how appcfg creates a urllib2 OpenerDirector that doesn't follow HTTP redirects. Or you could, in fact, just use the AbstractRpcServer and HttpRpcServer classes wholesale, since they do pretty much everything you need.
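A minimal sketch of those three steps with urllib2 and cookielib (credentials and the app URL are taken from the question; /_ah/login is the special authentication URL referred to in step 2; treat the details as an approximation of what appengine_rpc.py does, not a drop-in):
import urllib, urllib2, cookielib

# Step 1: trade credentials for a ClientLogin token
body = urllib.urlencode({'Email': 'billy.bob@gmail.com',
                         'Passwd': 'billybobspasswd',
                         'service': 'ah',
                         'source': 'my-client-0.1',
                         'accountType': 'HOSTED_OR_GOOGLE'})
resp = urllib2.urlopen('https://www.google.com/accounts/ClientLogin', body)
auth_token = dict(line.split('=', 1)
                  for line in resp.read().splitlines() if line)['Auth']

# Step 2: exchange the token for the app's session cookie at /_ah/login
cookies = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookies))
args = urllib.urlencode({'auth': auth_token,
                         'continue': 'http://mylovelyapp.appspot.com/'})
opener.open('http://mylovelyapp.appspot.com/_ah/login?%s' % args)

# Step 3: the cookie jar now carries the session for future requests
print opener.open('http://mylovelyapp.appspot.com/mylovelypage').read()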
|
Is there a way to run Python on Android?
|
We are working on an S60 version and this platform has a nice Python API.
However, there is nothing official about Python on Android, but since Jython exists, is there a way to let the snake and the robot work together?
|
One way is to use Kivy:
Open source Python library for rapid development of applications
that make use of innovative user interfaces, such as multi-touch apps.
Kivy runs on Linux, Windows, OS X, Android and iOS. You can run the same [python] code on all supported platforms.
Kivy Showcase app