PEP 328 – Imports: Multi-Line and Absolute/Relative
Author: Aahz <aahz at pythoncraft.com>
Status: Final
Type: Standards Track
Created: 21-Dec-2003
Python-Version: 2.4, 2.5, 2.6
Post-History: 08-Mar-2004
Table of Contents
Abstract
Timeline
Rationale for Parentheses
Rationale for Absolute Imports
Rationale for Relative Imports
Guido’s Decision
Relative Imports and __name__
Relative Imports and Indirection Entries in sys.modules
References
Copyright
Abstract
The import statement has two problems:
Long import statements can be difficult to write, requiring
various contortions to fit Pythonic style guidelines.
Imports can be ambiguous in the face of packages; within a package,
it’s not clear whether import foo refers to a module within the
package or some module outside the package. (More precisely, a local
module or package can shadow another hanging directly off
sys.path.)
For the first problem, it is proposed that parentheses be permitted to
enclose multiple names, thus allowing Python’s standard mechanisms for
multi-line values to apply. For the second problem, it is proposed that
all import statements be absolute by default (searching sys.path
only) with special syntax (leading dots) for accessing package-relative
imports.
Timeline
In Python 2.5, you must enable the new absolute import behavior with
from __future__ import absolute_import
You may use relative imports freely. In Python 2.6, any import
statement that results in an intra-package import will raise
DeprecationWarning (this also applies to from <> import that
fails to use the relative import syntax).
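As a rough sketch of what opting in looks like (the package layout here is hypothetical: a pkg/ directory containing __init__.py, string.py and client.py), a module that sits next to a sibling string.py can distinguish the two like this under the future import:

# pkg/client.py -- hypothetical layout, for illustration only
from __future__ import absolute_import    # Python 2.5/2.6: opt in to the new behaviour

import string                             # always the standard library module
from . import string as _local_string     # the sibling pkg/string.py, named explicitly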
Rationale for Parentheses
Currently, if you want to import a lot of names from a module or
package, you have to choose one of several unpalatable options:
Write a long line with backslash continuations:
from Tkinter import Tk, Frame, Button, Entry, Canvas, Text, \
    LEFT, DISABLED, NORMAL, RIDGE, END
Write multiple import statements:
from Tkinter import Tk, Frame, Button, Entry, Canvas, Text
from Tkinter import LEFT, DISABLED, NORMAL, RIDGE, END
(import * is not an option ;-)
Instead, it should be possible to use Python’s standard grouping
mechanism (parentheses) to write the import statement:
from Tkinter import (Tk, Frame, Button, Entry, Canvas, Text,
LEFT, DISABLED, NORMAL, RIDGE, END)
This part of the proposal had BDFL approval from the beginning.
Parentheses support was added to Python 2.4.
Rationale for Absolute Imports
In Python 2.4 and earlier, if you’re reading a module located inside a
package, it is not clear whether
import foo
refers to a top-level module or to another module inside the package.
As Python’s library expands, more and more existing package internal
modules suddenly shadow standard library modules by accident. It’s a
particularly difficult problem inside packages because there’s no way to
specify which module is meant. To resolve the ambiguity, it is proposed
that foo will always be a module or package reachable from
sys.path. This is called an absolute import.
The python-dev community chose absolute imports as the default because
they’re the more common use case and because absolute imports can provide
all the functionality of relative (intra-package) imports – albeit at
the cost of difficulty when renaming package pieces higher up in the
hierarchy or when moving one package inside another.
Because this represents a change in semantics, absolute imports will
be optional in Python 2.5 and 2.6 through the use of
from __future__ import absolute_import
This part of the proposal had BDFL approval from the beginning.
Rationale for Relative Imports
With the shift to absolute imports, the question arose whether
relative imports should be allowed at all. Several use cases were
presented, the most important of which is being able to rearrange the
structure of large packages without having to edit sub-packages. In
addition, a module inside a package can’t easily import itself without
relative imports.
Guido approved of the idea of relative imports, but there has been a
lot of disagreement on the spelling (syntax). There does seem to be
agreement that relative imports will require listing specific names to
import (that is, import foo as a bare term will always be an
absolute import).
Here are the contenders:
One from Guido:
from .foo import bar
and
from ...foo import bar
These two forms have a couple of different suggested semantics. One
semantic is to make each dot represent one level. There have been
many complaints about the difficulty of counting dots. Another
option is to only allow one level of relative import. That misses a
lot of functionality, and people still complained about missing the
dot in the one-dot form. The final option is to define an algorithm
for finding relative modules and packages; the objection here is
“Explicit is better than implicit”. (The algorithm proposed is
“search up from current package directory until the ultimate package
parent gets hit”.)
Some people have suggested other punctuation as the separator, such
as “-” or “^”.
Some people have suggested using “*”:
from *.foo import bar
The next set of options is conflated from several posters:
from __pkg__.__pkg__ import
and
from .__parent__.__parent__ import
Many people (Guido included) think these look ugly, but they are
clear and explicit. Overall, more people prefer __pkg__ as the
shorter option.
One suggestion was to allow only sibling references. In other words,
you would not be able to use relative imports to refer to modules
higher in the package tree. You would then be able to do either
from .spam import eggs
or
import .spam.eggs
Some people favor allowing indexed parents:
from -2.spam import eggs
In this scenario, importing from the current directory would be a
simple
from .spam import eggs
Finally, some people dislike the way you have to change import
to from ... import when you want to dig inside a package. They
suggest completely rewriting the import syntax:
from MODULE import NAMES as RENAME searching HOW
or
import NAMES as RENAME from MODULE searching HOW
[from NAMES] [in WHERE] import ...
However, this most likely could not be implemented for Python 2.5
(too big a change), and allowing relative imports is sufficiently
critical that we need something now (given that the standard
import will change to absolute import). More than that, this
proposed syntax has several open questions:
What is the precise proposed syntax? (Which clauses are optional
under which circumstances?)
How strongly does the searching clause bind? In other words,
do you write:
import foo as bar searching XXX, spam as ham searching XXX
or:
import foo as bar, spam as ham searching XXX
Guido’s Decision
Guido has Pronounced [1] that relative imports will use leading dots.
A single leading dot indicates a relative import, starting with the
current package. Two or more leading dots give a relative import to the
parent(s) of the current package, one level per dot after the first.
Here’s a sample package layout:
package/
    __init__.py
    subpackage1/
        __init__.py
        moduleX.py
        moduleY.py
    subpackage2/
        __init__.py
        moduleZ.py
    moduleA.py
Assuming that the current file is either moduleX.py or
subpackage1/__init__.py, following are correct usages of the new
syntax:
from .moduleY import spam
from .moduleY import spam as ham
from . import moduleY
from ..subpackage1 import moduleY
from ..subpackage2.moduleZ import eggs
from ..moduleA import foo
from ...package import bar
from ...sys import path
Note that while that last case is legal, it is certainly discouraged
(“insane” was the word Guido used).
Relative imports must always use from <> import; import <> is
always absolute. Of course, absolute imports can use from <> import
by omitting the leading dots. The reason import .foo is prohibited
is because after
import XXX.YYY.ZZZ
then
XXX.YYY.ZZZ
is usable in an expression. But
.moduleY
is not usable in an expression.
Relative Imports and __name__
Relative imports use a module’s __name__ attribute to determine that
module’s position in the package hierarchy. If the module’s name does
not contain any package information (e.g. it is set to ‘__main__’)
then relative imports are resolved as if the module were a top level
module, regardless of where the module is actually located on the file
system.
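For example (reusing the package layout above), suppose subpackage1/moduleX.py contains an explicit relative import:

# package/subpackage1/moduleX.py
from . import moduleY
print moduleY

Importing package.subpackage1.moduleX works, because __name__ is then 'package.subpackage1.moduleX' and the parent package can be located. Running the same file directly as a script sets __name__ to '__main__', so the relative import fails with a complaint about a relative import being attempted in a non-package.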
Relative Imports and Indirection Entries in sys.modules
When packages were introduced, the concept of an indirection entry in
sys.modules came into existence [2]. When an entry in sys.modules
for a module within a package had a value of None, it represented that
the module actually referenced the top-level module. For instance,
‘Sound.Effects.string’ might have a value of None in sys.modules.
That meant any import that resolved to that name actually was to
import the top-level ‘string’ module.
This introduced an optimization for when a relative import was meant
to resolve to an absolute import. But since this PEP makes a very
clear delineation between absolute and relative imports, this
optimization is no longer needed. When absolute/relative imports
become the only import semantics available then indirection entries in
sys.modules will no longer be supported.
References
For more background, see the following python-dev threads:
Re: Christmas Wishlist
Re: Python-Dev Digest, Vol 5, Issue 57
Relative import
Another Strategy for Relative Import
[1]
https://mail.python.org/pipermail/python-dev/2004-March/043739.html
[2]
https://www.python.org/doc/essays/packages/
Copyright
This document has been placed in the public domain.
PEP 329 – Treating Builtins as Constants in the Standard Library
Author: Raymond Hettinger <python at rcn.com>
Status: Rejected
Type: Standards Track
Created: 18-Apr-2004
Python-Version: 2.4
Post-History: 18-Apr-2004
Table of Contents
Abstract
Status
Motivation
Proposal
Questions and Answers
Sample Implementation
References
Copyright
Abstract
The proposal is to add a function for treating builtin references as
constants and to apply that function throughout the standard library.
Status
The PEP is self rejected by the author. Though the ASPN recipe was
well received, there was less willingness to consider this for
inclusion in the core distribution.
The Jython implementation does not use byte codes, so its performance
would suffer if the current _len=len optimizations were removed.
Also, altering byte codes is one of the least clean ways to improve
performance and enable cleaner coding. A more robust solution would
likely involve compiler pragma directives or metavariables indicating
what can be optimized (similar to const/volatile declarations).
Motivation
The library contains code such as _len=len which is intended to
create fast local references instead of slower global lookups. Though
necessary for performance, these constructs clutter the code and are
usually incomplete (missing many opportunities).
If the proposal is adopted, those constructs could be eliminated from
the code base and at the same time improve upon their results in terms
of performance.
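For reference, the construct in question looks something like this (a made-up function, not taken from the library):

def count_nonempty(rows, _len=len):     # bind len once, at definition time
    total = 0
    for row in rows:
        if _len(row):                   # local lookup instead of a global/builtin one
            total += 1
    return total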
There are currently over a hundred instances of while 1 in the
library. They were not replaced with the more readable while True
because of performance reasons (the compiler cannot eliminate the test
because True is not known to always be a constant). Conversion of
True to a constant will clarify the code while retaining performance.
Many other basic Python operations run much slower because of global
lookups. In try/except statements, the trapped exceptions are
dynamically looked up before testing whether they match.
Similarly, simple identity tests such as while x is not None
require the None variable to be re-looked up on every pass.
Builtin lookups are especially egregious because the enclosing global
scope must be checked first. These lookup chains devour cache space
that is best used elsewhere.
In short, if the proposal is adopted, the code will become cleaner
and performance will improve across the board.
Proposal
Add a module called codetweaks.py which contains two functions,
bind_constants() and bind_all(). The first function performs
constant binding and the second recursively applies it to every
function and class in a target module.
For most modules in the standard library, add a pair of lines near
the end of the script:
import codetweaks, sys
codetweaks.bind_all(sys.modules[__name__])
In addition to binding builtins, there are some modules (like
sre_compile) where it also makes sense to bind module variables
as well as builtins into constants.
Questions and Answers
Will this make everyone divert their attention to optimization issues?
Because it is done automatically, it reduces the need to think
about optimizations.
In a nutshell, how does it work?
Every function has attributes with its bytecodes (the language of
the Python virtual machine) and a table of constants. The bind
function scans the bytecodes for a LOAD_GLOBAL instruction and
checks to see whether the value is already known. If so, it adds
that value to the constants table and replaces the opcode with
LOAD_CONST.
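As a quick illustration of what the bind function looks for (this snippet is not part of the proposal, just a way to see the opcode in question):

import dis

def double_length(seq):
    return 2 * len(seq)     # 'len' is fetched with LOAD_GLOBAL on every call

dis.dis(double_length)      # the listing shows a LOAD_GLOBAL for 'len';
                            # after binding it would be a LOAD_CONST instead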
When does it work?
When a module is imported for the first time, Python compiles the
bytecode and runs the binding optimization. Subsequent imports
just re-use the previous work. Each session repeats this process
(the results are not saved in pyc files).
How do you know this works?
I implemented it, applied it to every module in the library, and the
test suite ran without exception.
What if the module defines a variable shadowing a builtin?
This does happen. For instance, True can be redefined at the module
level as True = (1==1). The sample implementation below detects the
shadowing and leaves the global lookup unchanged.
Are you the first person to recognize that most global lookups are for
values that never change?
No, this has long been known. Skip Montanaro provides an eloquent
explanation in PEP 266.
What if I want to replace the builtins module and supply my own
implementations?
Either do this before importing a module, or just reload the
module, or disable codetweaks.py (it will have a disable flag).
How susceptible is this module to changes in Python’s byte coding?
It imports opcode.py to protect against renumbering. Also, it
uses LOAD_CONST and LOAD_GLOBAL which are fundamental and have
been around forever. That notwithstanding, the coding scheme could
change and this implementation would have to change along with
modules like dis which also rely on the current coding scheme.
What is the effect on startup time?
I could not measure a difference. None of the startup modules are
bound except for warnings.py. Also, the binding function is very
fast, making just a single pass over the code string in search of
the LOAD_GLOBAL opcode.
Sample Implementation
Here is a sample implementation for codetweaks.py:
from types import ClassType, FunctionType
from opcode import opmap, HAVE_ARGUMENT, EXTENDED_ARG

LOAD_GLOBAL, LOAD_CONST = opmap['LOAD_GLOBAL'], opmap['LOAD_CONST']
ABORT_CODES = (EXTENDED_ARG, opmap['STORE_GLOBAL'])

def bind_constants(f, builtin_only=False, stoplist=[], verbose=False):
    """ Return a new function with optimized global references.

    Replaces global references with their currently defined values.
    If not defined, the dynamic (runtime) global lookup is left undisturbed.
    If builtin_only is True, then only builtins are optimized.
    Variable names in the stoplist are also left undisturbed.
    If verbose is True, prints each substitution as it occurs.
    """
    import __builtin__
    env = vars(__builtin__).copy()
    stoplist = dict.fromkeys(stoplist)
    if builtin_only:
        stoplist.update(f.func_globals)
    else:
        env.update(f.func_globals)

    co = f.func_code
    newcode = map(ord, co.co_code)
    newconsts = list(co.co_consts)
    codelen = len(newcode)

    i = 0
    while i < codelen:
        opcode = newcode[i]
        if opcode in ABORT_CODES:
            return f    # for simplicity, only optimize common cases
        if opcode == LOAD_GLOBAL:
            oparg = newcode[i+1] + (newcode[i+2] << 8)
            name = co.co_names[oparg]
            if name in env and name not in stoplist:
                value = env[name]
                try:
                    pos = newconsts.index(value)
                except ValueError:
                    pos = len(newconsts)
                    newconsts.append(value)
                newcode[i] = LOAD_CONST
                newcode[i+1] = pos & 0xFF
                newcode[i+2] = pos >> 8
                if verbose:
                    print name, '-->', value
        i += 1
        if opcode >= HAVE_ARGUMENT:
            i += 2

    codestr = ''.join(map(chr, newcode))
    codeobj = type(co)(co.co_argcount, co.co_nlocals, co.co_stacksize,
                       co.co_flags, codestr, tuple(newconsts), co.co_names,
                       co.co_varnames, co.co_filename, co.co_name,
                       co.co_firstlineno, co.co_lnotab, co.co_freevars,
                       co.co_cellvars)
    return type(f)(codeobj, f.func_globals, f.func_name, f.func_defaults,
                   f.func_closure)

def bind_all(mc, builtin_only=False, stoplist=[], verbose=False):
    """Recursively apply bind_constants() to functions in a module or class.

    Use as the last line of the module (after everything is defined, but
    before test code).

    In modules that need modifiable globals, set builtin_only to True.
    """
    for k, v in vars(mc).items():
        if type(v) is FunctionType:
            newv = bind_constants(v, builtin_only, stoplist, verbose)
            setattr(mc, k, newv)
        elif type(v) in (type, ClassType):
            bind_all(v, builtin_only, stoplist, verbose)

def f(): pass
try:
    f.func_code.co_code
except AttributeError:      # detect non-CPython environments
    bind_all = lambda *args, **kwds: 0
del f

import sys
bind_all(sys.modules[__name__])     # Optimizer, optimize thyself!
Note the automatic detection of a non-CPython environment that does not
have bytecodes [2]. In that situation, the bind functions would simply
return the original function unchanged. This assures that the two
line additions to library modules do not impact other implementations.
The final code should add a flag to make it easy to disable binding.
References
[1] ASPN Recipe for a non-private implementation
https://code.activestate.com/recipes/277940/
[2]
Differences between CPython and Jython
https://web.archive.org/web/20031018014238/http://www.jython.org/cgi-bin/faqw.py?req=show&file=faq01.003.htp
Copyright
This document has been placed in the public domain.
PEP 330 – Python Bytecode Verification
Author: Michel Pelletier <michel at users.sourceforge.net>
Status: Rejected
Type: Standards Track
Created: 17-Jun-2004
Python-Version: 2.6
Post-History:
Table of Contents
Abstract
Pronouncement
Motivation
Static Constraints on Bytecode Instructions
Static Constraints on Bytecode Instruction Operands
Structural Constraints between Bytecode Instructions
Implementation
Verification Issues
Required Changes
References
Copyright
Abstract
If Python Virtual Machine (PVM) bytecode is not “well-formed” it
is possible to crash or exploit the PVM by causing various errors
such as under/overflowing the value stack or reading/writing into
arbitrary areas of the PVM program space. Most of these kinds of
errors can be eliminated by verifying that PVM bytecode does not
violate a set of simple constraints before execution.
This PEP proposes a set of constraints on the format and structure
of Python Virtual Machine (PVM) bytecode and provides an
implementation in Python of this verification process.
Pronouncement
Guido believes that a verification tool has some value. If
someone wants to add it to Tools/scripts, no PEP is required.
Such a tool may have value for validating the output from
“bytecodehacks” or from direct edits of PYC files. As a security
measure, its value is somewhat limited because perfectly valid
bytecode can still do horrible things. That situation could
change if the concept of restricted execution were to be
successfully resurrected.
Motivation
The Python Virtual Machine executes Python programs that have been
compiled from the Python language into a bytecode representation.
The PVM assumes that any bytecode being executed is “well-formed”
with regard to a number of implicit constraints. Some of these
constraints are checked at run-time, but most of them are not due
to the overhead they would create.
When running in debug mode the PVM does do several run-time checks
to ensure that any particular bytecode cannot violate these
constraints that, to a degree, prevent bytecode from crashing or
exploiting the interpreter. These checks add a measurable
overhead to the interpreter, and are typically turned off in
common use.
Bytecode that is not well-formed and executed by a PVM not running
in debug mode may create a variety of fatal and non-fatal errors.
Typically, ill-formed code will cause the PVM to seg-fault and
cause the OS to immediately and abruptly terminate the
interpreter.
Conceivably, ill-formed bytecode could exploit the interpreter and
allow Python bytecode to execute arbitrary C-level machine
instructions or to modify private, internal data structures in the
interpreter. If used cleverly this could subvert any form of
security policy an application may want to apply to its objects.
Practically, it would be difficult for a malicious user to
“inject” invalid bytecode into a PVM for the purposes of
exploitation, but not impossible. Buffer overflow and memory
overwrite attacks are commonly understood, particularly when the
exploit payload is transmitted unencrypted over a network or when
a file or network security permission weakness is used as a
foothold for further attacks.
Ideally, no bytecode should ever be allowed to read or write
underlying C-level data structures to subvert the operation of the
PVM, whether the bytecode was maliciously crafted or not. A
simple pre-execution verification step could ensure that bytecode
cannot over/underflow the value stack or access other sensitive
areas of PVM program space at run-time.
This PEP proposes several validation steps that should be taken on
Python bytecode before it is executed by the PVM so that it
complies with static and structural constraints on its instructions
and their operands. These steps are simple and catch a large
class of invalid bytecode that can cause crashes. There is also
some possibility that some run-time checks can be eliminated up
front by a verification pass.
There is, of course, no way to verify that bytecode is “completely
safe”, for every definition of complete and safe. Even with
bytecode verification, Python programs can and most likely in the
future will seg-fault for a variety of reasons and continue to
cause many different classes of run-time errors, fatal or not.
The verification step proposed here simply plugs an easy hole that
can cause a large class of fatal and subtle errors at the bytecode
level.
Currently, the Java Virtual Machine (JVM) verifies Java bytecode
in a way very similar to what is proposed here. The JVM
Specification version 2 [1], Sections 4.8 and 4.9 were therefore
used as a basis for some of the constraints explained below. Any
Python bytecode verification implementation at a minimum must
enforce these constraints, but may not be limited to them.
Static Constraints on Bytecode Instructions
The bytecode string must not be empty. (len(co_code) > 0).
The bytecode string cannot exceed a maximum size
(len(co_code) < sizeof(unsigned char) - 1).
The first instruction in the bytecode string begins at index 0.
Only valid byte-codes with the correct number of operands can
be in the bytecode string.
Static Constraints on Bytecode Instruction Operands
The target of a jump instruction must be within the code
boundaries and must fall on an instruction, never between an
instruction and its operands.
The operand of a LOAD_* instruction must be a valid index into
its corresponding data structure.
The operand of a STORE_* instruction must be a valid index
into its corresponding data structure.
Structural Constraints between Bytecode Instructions
Each instruction must only be executed with the appropriate
number of arguments in the value stack, regardless of the
execution path that leads to its invocation.
If an instruction can be executed along several different
execution paths, the value stack must have the same depth prior
to the execution of the instruction, regardless of the path
taken.
At no point during execution can the value stack grow to a
depth greater than that implied by co_stacksize.
Execution never falls off the bottom of co_code.
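To make the flavour of these checks concrete, here is a small sketch that enforces only the first static constraint (non-empty bytecode) and part of the fourth (only known opcodes). It is not the verify module described below, and it assumes the classic CPython 2 bytecode layout of one byte per opcode followed by two argument bytes:

import opcode

class VerificationError(Exception):
    pass

def check_static(code):
    co_code = code.co_code
    if len(co_code) == 0:
        raise VerificationError("bytecode string is empty")
    i = 0
    while i < len(co_code):
        op = ord(co_code[i])
        if opcode.opname[op].startswith('<'):   # placeholder names mark unused opcodes
            raise VerificationError("unknown opcode %d at offset %d" % (op, i))
        i += 3 if op >= opcode.HAVE_ARGUMENT else 1

def f(x):
    return x + 1

check_static(f.func_code)       # passes silently for well-formed bytecode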
Implementation
This PEP is the working document for a Python bytecode
verification implementation written in Python. This
implementation is not used implicitly by the PVM before executing
any bytecode, but is to be used explicitly by users concerned
about possibly invalid bytecode with the following snippet:
import verify
verify.verify(object)
The verify module provides a verify function which accepts the
same kind of arguments as dis.dis: classes, methods, functions,
or code objects. It verifies that the object’s bytecode is
well-formed according to the specifications of this PEP.
If the code is well-formed the call to verify returns silently
without error. If an error is encountered, it throws a
VerificationError whose argument indicates the cause of the
failure. It is up to the programmer whether or not to handle the
error in some way or execute the invalid code regardless.
Phillip Eby has proposed a pseudo-code algorithm for bytecode
stack depth verification used by the reference implementation.
Verification Issues
This PEP describes only a small number of verifications. While
discussion and analysis will lead to many more, it is highly
possible that further verifications will need to be added in the
future, or that custom, project-specific verifications will be
needed. For this reason, it might be
desirable to add a verification registration interface to the test
implementation to register future verifiers. The need for this is
minimal since custom verifiers can subclass and extend the current
implementation for added behavior.
Required Changes
Armin Rigo noted that several byte-codes will need modification in
order for their stack effect to be statically analyzed. These are
END_FINALLY, POP_BLOCK, and MAKE_CLOSURE. Armin and Guido have
already agreed on how to correct the instructions. Currently the
Python implementation punts on these instructions.
This PEP does not propose to add the verification step to the
interpreter, but only to provide the Python implementation in the
standard library for optional use. Whether or not this
verification procedure is translated into C, included with the PVM
or enforced in any way is left for future discussion.
References
[1]
The Java Virtual Machine Specification 2nd Edition
http://java.sun.com/docs/books/vmspec/2nd-edition/html/ClassFile.doc.html
Copyright
This document has been placed in the public domain.
PEP 331 – Locale-Independent Float/String Conversions
Author: Christian R. Reis <kiko at async.com.br>
Status: Final
Type: Standards Track
Created: 19-Jul-2003
Python-Version: 2.4
Post-History: 21-Jul-2003, 13-Aug-2003, 18-Jun-2004
Table of Contents
Abstract
Introduction
Rationale
Example Problem
Proposal
Potential Code Contributions
Risks
Implementation
References
Copyright
Abstract
Support for the LC_NUMERIC locale category in Python 2.3 is
implemented only in Python-space. This causes inconsistent
behavior and thread-safety issues for applications that use
extension modules and libraries implemented in C that parse and
generate floats from strings. This document proposes a plan for
removing this inconsistency by providing and using substitute
locale-agnostic functions as necessary.
Introduction
Python provides generic localization services through the locale
module, which among other things allows localizing the display and
conversion process of numeric types. Locale categories, such as
LC_TIME and LC_COLLATE, allow configuring precisely what aspects
of the application are to be localized.
The LC_NUMERIC category specifies formatting for non-monetary
numeric information, such as the decimal separator in float and
fixed-precision numbers. Localization of the LC_NUMERIC category
is currently implemented only in Python-space; C libraries invoked
from the Python runtime are unaware of Python’s LC_NUMERIC
setting. This is done to avoid changing the behavior of certain
low-level functions that are used by the Python parser and related
code [2].
However, this presents a problem for extension modules that wrap C
libraries. Applications that use these extension modules will
inconsistently display and convert floating-point values.
James Henstridge, the author of PyGTK [3], has additionally
pointed out that the setlocale() function also presents
thread-safety issues, since a thread may call the C library
setlocale() outside of the GIL, and cause Python to parse and
generate floats incorrectly.
Rationale
The inconsistency between Python and C library localization for
LC_NUMERIC is a problem for any localized application using C
extensions. The exact nature of the problem will vary depending
on the application, but it will most likely occur when parsing or
formatting a floating-point value.
Example Problem
The initial problem that motivated this PEP is related to the
GtkSpinButton [4] widget in the GTK+ UI toolkit, wrapped by the
PyGTK module. The widget can be set to numeric mode, and when
this occurs, characters typed into it are evaluated as a number.
Problems occur when LC_NUMERIC is set to a locale with a float
separator that differs from the C locale’s standard (for instance,
‘,’ instead of ‘.’ for the Brazilian locale pt_BR). Because
LC_NUMERIC is not set at the libc level, float values are
displayed incorrectly (using ‘.’ as a separator) in the
spinbutton’s text entry, and it is impossible to enter fractional
values using the ‘,’ separator.
This small example demonstrates reduced usability for localized
applications using this toolkit when coded in Python.
Proposal
Martin v. Löwis commented on the initial constraints for an
acceptable solution to the problem on python-dev:
LC_NUMERIC can be set at the C library level without
breaking the parser.
float() and str() stay locale-unaware.
locale-aware str() and atof() stay in the locale module.
An analysis of the Python source suggests that the following
functions currently depend on LC_NUMERIC being set to the C
locale:
Python/compile.c:parsenumber()
Python/marshal.c:r_object()
Objects/complexobject.c:complex_to_buf()
Objects/complexobject.c:complex_subtype_from_string()
Objects/floatobject.c:PyFloat_FromString()
Objects/floatobject.c:format_float()
Objects/stringobject.c:formatfloat()
Modules/stropmodule.c:strop_atof()
Modules/cPickle.c:load_float()
The proposed approach is to implement LC_NUMERIC-agnostic
functions for converting from (strtod()/atof()) and to
(snprintf()) float formats, using these functions where the
formatting should not vary according to the user-specified locale.
The locale module should also be changed to remove the
special-casing for LC_NUMERIC.
This change should also solve the aforementioned thread-safety
problems.
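The intended division of labour already exists at the Python level and can be sketched as follows (this assumes a Brazilian locale such as 'pt_BR' is installed; the exact locale name is platform-dependent, and setlocale() raises locale.Error if it is missing):

import locale

locale.setlocale(locale.LC_NUMERIC, 'pt_BR')

print float('3.14')         # locale-unaware: always uses '.' as the separator
print locale.atof('3,14')   # locale-aware parsing from the locale module
print locale.str(3.14)      # locale-aware formatting: '3,14'

The proposal keeps exactly this split, but makes the C-level conversion routines used by the interpreter immune to whatever LC_NUMERIC setting the process happens to have.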
Potential Code Contributions
This problem was initially reported as a problem in the GTK+
libraries [5]; since then it has been correctly diagnosed as an
inconsistency in Python’s implementation. However, in a fortunate
coincidence, the glib library (developed primarily for GTK+, not
to be confused with the GNU C library) implements a number of
LC_NUMERIC-agnostic functions (for an example, see [6]) for
reasons similar to those presented in this paper.
In the same GTK+ problem report, Havoc Pennington suggested that
the glib authors would be willing to contribute this code to the
PSF, which would simplify implementation of this PEP considerably.
Alex Larsson, the original author of the glib code, submitted a
PSF Contributor Agreement [7] on 2003-08-20 [8] to ensure the code
could be safely integrated; this agreement has been received and
accepted.
Risks
There may be cross-platform issues with the provided
locale-agnostic functions, though this risk is low given that the
code supplied simply reverses any locale-dependent changes made to
floating-point numbers.
Martin and Guido pointed out potential copyright issues with the
contributed code. I believe we will have no problems in this area
as members of the GTK+ and glib teams have said they are fine with
relicensing the code, and a PSF contributor agreement has been
mailed in to ensure this safety.
Tim Peters has pointed out [9] that there are situations involving
threading in which the proposed change is insufficient to solve
the problem completely. A complete solution, however, does not
currently exist.
Implementation
An implementation was developed by Gustavo Carneiro <gjc at
inescporto.pt>, and attached to Sourceforge.net bug 774665 [10]
The final patch [11] was integrated into Python CVS by Martin v.
Löwis on 2004-06-08, as stated in the bug report.
References
[2]
Python locale documentation for embedding,
http://docs.python.org/library/locale.html
[3]
PyGTK homepage, http://www.daa.com.au/~james/pygtk/
[4]
GtkSpinButton screenshot (demonstrating problem),
http://www.async.com.br/~kiko/spin.png
[5]
GNOME bug report, http://bugzilla.gnome.org/show_bug.cgi?id=114132
[6]
Code submission of g_ascii_strtod and g_ascii_dtostr (later
renamed g_ascii_formatd) by Alex Larsson,
http://mail.gnome.org/archives/gtk-devel-list/2001-October/msg00114.html
[7]
PSF Contributor Agreement,
https://www.python.org/psf/contrib/contrib-form/
[8]
Alex Larsson’s email confirming his agreement was mailed in,
https://mail.python.org/pipermail/python-dev/2003-August/037755.html
[9]
Tim Peters’ email summarizing LC_NUMERIC trouble with Spambayes,
https://mail.python.org/pipermail/python-dev/2003-September/037898.html
[10]
Python bug report, https://bugs.python.org/issue774665
[11]
Integrated LC_NUMERIC-agnostic patch,
https://sourceforge.net/tracker/download.php?group_id=5470&atid=305470&file_id=89685&aid=774665
Copyright
This document has been placed in the public domain.
PEP 332 – Byte vectors and String/Unicode Unification
Author: Skip Montanaro <skip at pobox.com>
Status: Rejected
Type: Standards Track
Created: 11-Aug-2004
Python-Version: 2.5
Post-History:
Table of Contents
Abstract
Rejection Notice
Rationale
Proposed Implementation
Bytes Object API
Issues
Copyright
Abstract
This PEP outlines the introduction of a raw bytes sequence object
and the unification of the current str and unicode objects.
Rejection Notice
This PEP is rejected in this form. The author has expressed lack of
time to continue to shepherd it, and discussion on python-dev has
moved to a slightly different proposal which will (eventually) be
written up as a new PEP. See the thread starting at
https://mail.python.org/pipermail/python-dev/2006-February/060930.html.
Rationale
Python’s current string objects are overloaded. They serve both to
hold ASCII and non-ASCII character data and to also hold sequences of
raw bytes which have no reasonable interpretation as displayable
character sequences. This overlap hasn’t been a big problem in the
past, but as Python moves closer to requiring source code to be
properly encoded, the use of strings to represent raw byte sequences
will be more problematic. In addition, as Python’s Unicode support
has improved, it’s easier to consider strings as ASCII-encoded Unicode
objects.
Proposed Implementation
The number in parentheses indicates the Python version in which the
feature will be introduced.
Add a bytes builtin which is just a synonym for str. (2.5)
Add a b"..." string literal which is equivalent to raw string
literals, with the exception that values which conflict with the
source encoding of the containing file do not generate warnings. (2.5)
Warn about the use of variables named “bytes”. (2.5 or 2.6)
Introduce a bytes builtin which refers to a sequence distinct
from the str type. (2.6)
Make str a synonym for unicode. (3.0)
Bytes Object API
TBD.
Issues
Can this be accomplished before Python 3.0?
Should bytes objects be mutable or immutable? (Guido seems to
like them to be mutable.)
Copyright
This document has been placed in the public domain.
PEP 334 – Simple Coroutines via SuspendIteration
Author: Clark C. Evans <cce at clarkevans.com>
Status: Withdrawn
Type: Standards Track
Created: 26-Aug-2004
Python-Version: 3.0
Post-History:
Table of Contents
Abstract
Rationale
Semantics
Simple Iterators
Introducing SuspendIteration
Application Iterators
Complicating Factors
Resource Cleanup
API and Limitations
Low-Level Implementation
References
Copyright
Abstract
Asynchronous application frameworks such as Twisted [1] and Peak
[2] are based on cooperative multitasking via event queues or
deferred execution. While this approach to application development
does not involve threads and thus avoids a whole class of problems
[3], it creates a different sort of programming challenge. When an
I/O operation would block, a user request must suspend so that other
requests can proceed. The concept of a coroutine [4] promises to
help the application developer grapple with this state management
difficulty.
This PEP proposes a limited approach to coroutines based on an
extension to the iterator protocol. Currently, an iterator may
raise a StopIteration exception to indicate that it is done producing
values. This proposal adds another exception to this protocol,
SuspendIteration, which indicates that the given iterator may have
more values to produce, but is unable to do so at this time.
Rationale
There are two current approaches to bringing co-routines to Python.
Christian Tismer’s Stackless [6] involves a ground-up restructuring
of Python’s execution model by hacking the ‘C’ stack. While this
approach works, its operation is hard to describe and keep portable. A
related approach is to compile Python code to Parrot [7], a
register-based virtual machine, which has coroutines. Unfortunately,
neither of these solutions is portable with IronPython (CLR) or Jython
(JavaVM).
It is thought that a more limited approach, based on iterators, could
provide a coroutine facility to application programmers and still be
portable across runtimes.
Iterators keep their state in local variables that are not on the
“C” stack. Iterators can be viewed as classes, with state stored in
member variables that are persistent across calls to its next()
method.
While an uncaught exception may terminate a function’s execution, an
uncaught exception need not invalidate an iterator. The proposed
exception, SuspendIteration, uses this feature. In other words,
just because one call to next() results in an exception does not
necessarily need to imply that the iterator itself is no longer
capable of producing values.
There are four places where this new exception impacts:
The PEP 255 simple generator mechanism could be extended to safely
‘catch’ this SuspendIteration exception, stuff away its current
state, and pass the exception on to the caller.
Various iterator filters [9] in the standard library, such as
itertools.izip, should be made aware of this exception so that they can
transparently propagate SuspendIteration.
Iterators generated from I/O operations, such as a file or socket
reader, could be modified to have a non-blocking variety. This
option would raise a subclass of SuspendIteration if the requested
operation would block.
The asyncore library could be updated to provide a basic ‘runner’
that pulls from an iterator; if the SuspendIteration exception is
caught, then it moves on to the next iterator in its runlist [10].
External frameworks like Twisted would provide alternative
implementations, perhaps based on FreeBSD’s kqueue or Linux’s epoll.
While these may seem dramatic changes, it is a very small amount of
work compared with the utility provided by continuations.
Semantics
This section will explain, at a high level, how the introduction of
this new SuspendIteration exception would behave.
Simple Iterators
The current functionality of iterators is best seen with a simple
example which produces two values ‘one’ and ‘two’.
class States:

    def __iter__(self):
        self._next = self.state_one
        return self

    def next(self):
        return self._next()

    def state_one(self):
        self._next = self.state_two
        return "one"

    def state_two(self):
        self._next = self.state_stop
        return "two"

    def state_stop(self):
        raise StopIteration

print list(States())
An equivalent iteration could, of course, be created by the
following generator:
def States():
    yield 'one'
    yield 'two'

print list(States())
Introducing SuspendIteration
Suppose that between producing ‘one’ and ‘two’, the generator above
could block on a socket read. In this case, we would want to raise
SuspendIteration to signal that the iterator is not done producing,
but is unable to provide a value at the current moment.
from random import randint
from time import sleep

class SuspendIteration(Exception):
    pass

class NonBlockingResource:
    """Randomly unable to produce the second value"""

    def __iter__(self):
        self._next = self.state_one
        return self

    def next(self):
        return self._next()

    def state_one(self):
        self._next = self.state_suspend
        return "one"

    def state_suspend(self):
        rand = randint(1,10)
        if 2 == rand:
            self._next = self.state_two
            return self.state_two()
        raise SuspendIteration()

    def state_two(self):
        self._next = self.state_stop
        return "two"

    def state_stop(self):
        raise StopIteration

def sleeplist(iterator, timeout = .1):
    """
    Do other things (e.g. sleep) while resource is
    unable to provide the next value
    """
    it = iter(iterator)
    retval = []
    while True:
        try:
            retval.append(it.next())
        except SuspendIteration:
            sleep(timeout)
            continue
        except StopIteration:
            break
    return retval

print sleeplist(NonBlockingResource())
In a real-world situation, the NonBlockingResource would be a file
iterator, socket handle, or other I/O based producer. The sleeplist
would instead be an async reactor, such as those found in asyncore or
Twisted. The non-blocking resource could, of course, be written as a
generator:
def NonBlockingResource():
    yield "one"
    while True:
        rand = randint(1,10)
        if 2 == rand:
            break
        raise SuspendIteration()
    yield "two"
It is not necessary to add a keyword, ‘suspend’, since most real
content generators will not be in application code; they will be in
low-level I/O based operations. Since most programmers need not be
exposed to the SuspendIteration() mechanism, a keyword is not needed.
Application Iterators
The previous example is rather contrived, a more ‘real-world’ example
would be a web page generator which yields HTML content, and pulls
from a database. Note that this is an example of neither the
‘producer’ nor the ‘consumer’, but rather of a filter.
def ListAlbums(cursor):
    cursor.execute("SELECT title, artist FROM album")
    yield '<html><body><table><tr><td>Title</td><td>Artist</td></tr>'
    for (title, artist) in cursor:
        yield '<tr><td>%s</td><td>%s</td></tr>' % (title, artist)
    yield '</table></body></html>'
The problem, of course, is that the database may block for some time
before any rows are returned, and that during execution, rows may be
returned in blocks of 10 or 100 at a time. Ideally, if the database
blocks for the next set of rows, another user connection could be
serviced. Note the complete absence of SuspendIteration in the above
code. If done correctly, application developers would be able to
focus on functionality rather than concurrency issues.
The iterator created by the above generator should do the magic
necessary to maintain state, yet pass the exception through to a
lower-level async framework. Here is an example of what the
corresponding iterator would look like if coded up as a class:
class ListAlbums:

    def __init__(self, cursor):
        self.cursor = cursor

    def __iter__(self):
        self.cursor.execute("SELECT title, artist FROM album")
        self._iter = iter(self.cursor)
        self._next = self.state_head
        return self

    def next(self):
        return self._next()

    def state_head(self):
        self._next = self.state_cursor
        return "<html><body><table><tr><td>\
                Title</td><td>Artist</td></tr>"

    def state_tail(self):
        self._next = self.state_stop
        return "</table></body></html>"

    def state_cursor(self):
        try:
            (title,artist) = self._iter.next()
            return '<tr><td>%s</td><td>%s</td></tr>' % (title, artist)
        except StopIteration:
            self._next = self.state_tail
            return self.next()
        except SuspendIteration:
            # just pass-through
            raise

    def state_stop(self):
        raise StopIteration
Complicating Factors
While the above example is straightforward, things are a bit more
complicated if the intermediate generator ‘condenses’ values, that is,
it pulls in two or more values for each value it produces. For
example,
def pair(iterLeft,iterRight):
    rhs = iter(iterRight)
    lhs = iter(iterLeft)
    while True:
        yield (rhs.next(), lhs.next())
In this case, the corresponding iterator behavior has to be a bit more
subtle to handle the case of either the right or left iterator raising
SuspendIteration. It seems to be a matter of decomposing the
generator to recognize intermediate states where a SuspendIteration
exception from the producing context could happen.
class pair:

    def __init__(self, iterLeft, iterRight):
        self.iterLeft = iterLeft
        self.iterRight = iterRight

    def __iter__(self):
        self.rhs = iter(self.iterRight)
        self.lhs = iter(self.iterLeft)
        self._temp_rhs = None
        self._temp_lhs = None
        self._next = self.state_rhs
        return self

    def next(self):
        return self._next()

    def state_rhs(self):
        self._temp_rhs = self.rhs.next()
        self._next = self.state_lhs
        return self.next()

    def state_lhs(self):
        self._temp_lhs = self.lhs.next()
        self._next = self.state_pair
        return self.next()

    def state_pair(self):
        self._next = self.state_rhs
        return (self._temp_rhs, self._temp_lhs)
This proposal assumes that a corresponding iterator written using
this class-based method is possible for existing generators. The
challenge seems to be the identification of distinct states within
the generator where suspension could occur.
Resource Cleanup
The current generator mechanism has a strange interaction with
exceptions where a ‘yield’ statement is not allowed within a
try/finally block. The SuspendIteration exception raises a
similar issue. The impact of this issue is not clear. However, it
may be that re-writing the generator into a state machine, as the
previous section did, could resolve this issue, leaving the
situation no worse than the existing yield/finally restriction and
perhaps even removing it. More investigation is needed in this area.
API and Limitations
This proposal only covers ‘suspending’ a chain of iterators, and does
not cover (of course) suspending general functions, methods, or “C”
extension functions. While there could be no direct support for
creating generators in “C” code, native “C” iterators which comply
with the SuspendIteration semantics are certainly possible.
Low-Level Implementation
The author of the PEP is not yet familiar enough with the Python execution
model to comment in this area.
References
[1]
Twisted
(http://twistedmatrix.com)
[2]
Peak
(http://peak.telecommunity.com)
[3]
C10K
(http://www.kegel.com/c10k.html)
[4]
Coroutines
(http://c2.com/cgi/wiki?CallWithCurrentContinuation)
[6]
Stackless Python
(http://stackless.com)
[7]
Parrot /w coroutines
(http://www.sidhe.org/~dan/blog/archives/000178.html)
[9]
itertools - Functions creating iterators
(http://docs.python.org/library/itertools.html)
[10]
Microthreads in Python, David Mertz
(http://www-106.ibm.com/developerworks/linux/library/l-pythrd.html)
Copyright
This document has been placed in the public domain.
PEP 335 – Overloadable Boolean Operators
Author: Gregory Ewing <greg.ewing at canterbury.ac.nz>
Status: Rejected
Type: Standards Track
Created: 29-Aug-2004
Python-Version: 3.3
Post-History: 05-Sep-2004, 30-Sep-2011, 25-Oct-2011
Table of Contents
Rejection Notice
Abstract
Background
Motivation
Rationale
Specification
Special Methods
Bytecodes
Type Slots
Python/C API Functions
Alternatives and Optimisations
Reduced special method set
Additional bytecodes
Optimisation of ‘not’
Usage Examples
Example 1: NumPy Arrays
Example 1 Output
Example 2: Database Queries
Example 2 Output
Copyright
Rejection Notice
This PEP was rejected.
See https://mail.python.org/pipermail/python-dev/2012-March/117510.html
Abstract
This PEP proposes an extension to permit objects to define their own
meanings for the boolean operators ‘and’, ‘or’ and ‘not’, and suggests
an efficient strategy for implementation. A prototype of this
implementation is available for download.
Background
Python does not currently provide any ‘__xxx__’ special methods
corresponding to the ‘and’, ‘or’ and ‘not’ boolean operators. In the
case of ‘and’ and ‘or’, the most likely reason is that these operators
have short-circuiting semantics, i.e. the second operand is not
evaluated if the result can be determined from the first operand. The
usual technique of providing special methods for these operators
therefore would not work.
There is no such difficulty in the case of ‘not’, however, and it
would be straightforward to provide a special method for this
operator. The rest of this proposal will therefore concentrate mainly
on providing a way to overload ‘and’ and ‘or’.
Motivation
There are many applications in which it is natural to provide custom
meanings for Python operators, and in some of these, having boolean
operators excluded from those able to be customised can be
inconvenient. Examples include:
NumPy, in which almost all the operators are defined on
arrays so as to perform the appropriate operation between
corresponding elements, and return an array of the results. For
consistency, one would expect a boolean operation between two
arrays to return an array of booleans, but this is not currently
possible.
There is a precedent for an extension of this kind: comparison
operators were originally restricted to returning boolean results,
and rich comparisons were added so that comparisons of NumPy
arrays could return arrays of booleans.
A symbolic algebra system, in which a Python expression is
evaluated in an environment which results in it constructing a tree
of objects corresponding to the structure of the expression.
A relational database interface, in which a Python expression is
used to construct an SQL query.
A workaround often suggested is to use the bitwise operators ‘&’, ‘|’
and ‘~’ in place of ‘and’, ‘or’ and ‘not’, but this has some
drawbacks:
The precedence of these is different in relation to the other operators,
and they may already be in use for other purposes (as in example 1).
It is aesthetically displeasing to force users to use something other
than the most obvious syntax for what they are trying to express. This
would be particularly acute in the case of example 3, considering that
boolean operations are a staple of SQL queries.
Bitwise operators do not provide a solution to the problem of
chained comparisons such as ‘a < b < c’ which involve an implicit
‘and’ operation. Such expressions currently cannot be used at all
on data types such as NumPy arrays where the result of a comparison
cannot be treated as having normal boolean semantics; they must be
expanded into something like (a < b) & (b < c), losing a considerable
amount of clarity.
Rationale
The requirements for a successful solution to the problem of allowing
boolean operators to be customised are:
In the default case (where there is no customisation), the existing
short-circuiting semantics must be preserved.
There must not be any appreciable loss of speed in the default
case.
Ideally, the customisation mechanism should allow the object to
provide either short-circuiting or non-short-circuiting semantics,
at its discretion.
One obvious strategy, that has been previously suggested, is to pass
into the special method the first argument and a function for
evaluating the second argument. This would satisfy requirements 1 and
3, but not requirement 2, since it would incur the overhead of
constructing a function object and possibly a Python function call on
every boolean operation. Therefore, it will not be considered further
here.
The following section proposes a strategy that addresses all three
requirements. A prototype implementation of this strategy is
available for download.
Specification
Special Methods
At the Python level, objects may define the following special methods.
Unary:
    __not__(self)
Binary, phase 1:
    __and1__(self)
    __or1__(self)
Binary, phase 2:
    __and2__(self, other)
    __or2__(self, other)
    __rand2__(self, other)
    __ror2__(self, other)
The __not__ method, if defined, implements the ‘not’ operator. If it
is not defined, or it returns NotImplemented, existing semantics are
used.
To permit short-circuiting, processing of the ‘and’ and ‘or’ operators
is split into two phases. Phase 1 occurs after evaluation of the first
operand but before the second. If the first operand defines the
relevant phase 1 method, it is called with the first operand as
argument. If that method can determine the result without needing the
second operand, it returns the result, and further processing is
skipped.
If the phase 1 method determines that the second operand is needed, it
returns the special value NeedOtherOperand. This triggers the
evaluation of the second operand, and the calling of a relevant
phase 2 method. During phase 2, the __and2__/__rand2__ and
__or2__/__ror2__ method pairs work as for other binary operators.
Processing falls back to existing semantics if at any stage a relevant
special method is not found or returns NotImplemented.
As a special case, if the first operand defines a phase 2 method but
no corresponding phase 1 method, the second operand is always
evaluated and the phase 2 method called. This allows an object which
does not want short-circuiting semantics to simply implement the
phase 2 methods and ignore phase 1.
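Purely as an illustration of the proposed protocol (none of these hooks exist in any released Python, and NeedOtherOperand is a name introduced only by this PEP), an element-wise boolean type that does not want short-circuiting might be written like this:

class BoolVector:
    """Element-wise 'and'/'or'/'not' under the proposed protocol."""

    def __init__(self, values):
        self.values = list(values)

    # No __and1__/__or1__ defined: under the special case above, the second
    # operand would therefore always be evaluated and phase 2 called.

    def __and2__(self, other):
        return BoolVector(a and b for a, b in zip(self.values, other.values))

    def __or2__(self, other):
        return BoolVector(a or b for a, b in zip(self.values, other.values))

    def __not__(self):
        return BoolVector(not a for a in self.values)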
Bytecodes
The patch adds four new bytecodes, LOGICAL_AND_1, LOGICAL_AND_2,
LOGICAL_OR_1 and LOGICAL_OR_2. As an example of their use, the
bytecode generated for an ‘and’ expression looks like this:
.
.
.
evaluate first operand
LOGICAL_AND_1 L
evaluate second operand
LOGICAL_AND_2
L: .
.
.
The LOGICAL_AND_1 bytecode performs phase 1 processing. If it
determines that the second operand is needed, it leaves the first
operand on the stack and continues with the following code. Otherwise
it pops the first operand, pushes the result and branches to L.
The LOGICAL_AND_2 bytecode performs phase 2 processing, popping both
operands and pushing the result.
Type Slots
At the C level, the new special methods are manifested as five new
slots in the type object. In the patch, they are added to the
tp_as_number substructure, since this allows making use of some
existing code for dealing with unary and binary operators. Their
existence is signalled by a new type flag,
Py_TPFLAGS_HAVE_BOOLEAN_OVERLOAD.
The new type slots are:
unaryfunc nb_logical_not;
unaryfunc nb_logical_and_1;
unaryfunc nb_logical_or_1;
binaryfunc nb_logical_and_2;
binaryfunc nb_logical_or_2;
Python/C API Functions
There are also five new Python/C API functions corresponding to the
new operations:
PyObject *PyObject_LogicalNot(PyObject *);
PyObject *PyObject_LogicalAnd1(PyObject *);
PyObject *PyObject_LogicalOr1(PyObject *);
PyObject *PyObject_LogicalAnd2(PyObject *, PyObject *);
PyObject *PyObject_LogicalOr2(PyObject *, PyObject *);
Alternatives and Optimisations
This section discusses some possible variations on the proposal,
and ways in which the bytecode sequences generated for boolean
expressions could be optimised.
Reduced special method set
For completeness, the full version of this proposal includes a
mechanism for types to define their own customised short-circuiting
behaviour. However, the full mechanism is not needed to address the
main use cases put forward here, and it would be possible to
define a simplified version that only includes the phase 2
methods. There would then only be 5 new special methods (__and2__,
__rand2__, __or2__, __ror2__, __not__) with 3 associated type slots
and 3 API functions.
This simplified version could be expanded to the full version
later if desired.
Additional bytecodes
As defined here, the bytecode sequence for code that branches on
the result of a boolean expression would be slightly longer than
it currently is. For example, in Python 2.7,
if a and b:
    statement1
else:
    statement2
generates
LOAD_GLOBAL a
POP_JUMP_IF_FALSE false_branch
LOAD_GLOBAL b
POP_JUMP_IF_FALSE false_branch
<code for statement1>
JUMP_FORWARD end_branch
false_branch:
<code for statement2>
end_branch:
Under this proposal as described so far, it would become something like
LOAD_GLOBAL a
LOGICAL_AND_1 test
LOAD_GLOBAL b
LOGICAL_AND_2
test:
POP_JUMP_IF_FALSE false_branch
<code for statement1>
JUMP_FORWARD end_branch
false_branch:
<code for statement2>
end_branch:
This involves executing one extra bytecode in the short-circuiting
case and two extra bytecodes in the non-short-circuiting case.
However, by introducing extra bytecodes that combine the logical
operations with testing and branching on the result, it can be
reduced to the same number of bytecodes as the original:
LOAD_GLOBAL a
AND1_JUMP true_branch, false_branch
LOAD_GLOBAL b
AND2_JUMP_IF_FALSE false_branch
true_branch:
<code for statement1>
JUMP_FORWARD end_branch
false_branch:
<code for statement2>
end_branch:
Here, AND1_JUMP performs phase 1 processing as above,
and then examines the result. If there is a result, it is popped
from the stack, its truth value is tested and a branch taken to
one of two locations.
Otherwise, the first operand is left on the stack and execution
continues to the next bytecode. The AND2_JUMP_IF_FALSE bytecode
performs phase 2 processing, pops the result and branches if
it tests false.
For the ‘or’ operator, there would be corresponding OR1_JUMP
and OR2_JUMP_IF_TRUE bytecodes.
If the simplified version without phase 1 methods is used, then
early exiting can only occur if the first operand is false for
‘and’ and true for ‘or’. Consequently, the two-target AND1_JUMP and
OR1_JUMP bytecodes can be replaced with AND1_JUMP_IF_FALSE and
OR1_JUMP_IF_TRUE, these being ordinary branch instructions with
only one target.
Optimisation of ‘not’
Recent versions of Python implement a simple optimisation in
which branching on a negated boolean expression is implemented
by reversing the sense of the branch, saving a UNARY_NOT opcode.
Taking a strict view, this optimisation should no longer be
performed, because the ‘not’ operator may be overridden to produce
quite different results from usual. However, in typical use cases,
it is not envisaged that expressions involving customised boolean
operations will be used for branching – it is much more likely
that the result will be used in some other way.
Therefore, it would probably do little harm to specify that the
compiler is allowed to use the laws of boolean algebra to
simplify any expression that appears directly in a boolean
context. If this is inconvenient, the result can always be assigned
to a temporary name first.
This would allow the existing ‘not’ optimisation to remain, and
would permit future extensions of it such as using De Morgan’s laws
to extend it deeper into the expression.
Usage Examples
Example 1: NumPy Arrays
#-----------------------------------------------------------------
#
# This example creates a subclass of numpy array to which
# 'and', 'or' and 'not' can be applied, producing an array
# of booleans.
#
#-----------------------------------------------------------------
from numpy import array, ndarray

class BArray(ndarray):
    def __str__(self):
        return "barray(%s)" % ndarray.__str__(self)
    def __and2__(self, other):
        return (self & other)
    def __or2__(self, other):
        return (self | other)
    def __not__(self):
        return (self == 0)

def barray(*args, **kwds):
    return array(*args, **kwds).view(type = BArray)
a0 = barray([0, 1, 2, 4])
a1 = barray([1, 2, 3, 4])
a2 = barray([5, 6, 3, 4])
a3 = barray([5, 1, 2, 4])
print "a0:", a0
print "a1:", a1
print "a2:", a2
print "a3:", a3
print "not a0:", not a0
print "a0 == a1 and a2 == a3:", a0 == a1 and a2 == a3
print "a0 == a1 or a2 == a3:", a0 == a1 or a2 == a3
Example 1 Output
a0: barray([0 1 2 4])
a1: barray([1 2 3 4])
a2: barray([5 6 3 4])
a3: barray([5 1 2 4])
not a0: barray([ True False False False])
a0 == a1 and a2 == a3: barray([False False False True])
a0 == a1 or a2 == a3: barray([ True False False True])
Example 2: Database Queries
#-----------------------------------------------------------------
#
# This example demonstrates the creation of a DSL for database
# queries allowing 'and' and 'or' operators to be used to
# formulate the query.
#
#-----------------------------------------------------------------
class SQLNode(object):
    def __and2__(self, other):
        return SQLBinop("and", self, other)
    def __rand2__(self, other):
        return SQLBinop("and", other, self)
    def __eq__(self, other):
        return SQLBinop("=", self, other)

class Table(SQLNode):
    def __init__(self, name):
        self.__tablename__ = name
    def __getattr__(self, name):
        return SQLAttr(self, name)
    def __sql__(self):
        return self.__tablename__

class SQLBinop(SQLNode):
    def __init__(self, op, opnd1, opnd2):
        self.op = op.upper()
        self.opnd1 = opnd1
        self.opnd2 = opnd2
    def __sql__(self):
        return "(%s %s %s)" % (sql(self.opnd1), self.op, sql(self.opnd2))

class SQLAttr(SQLNode):
    def __init__(self, table, name):
        self.table = table
        self.name = name
    def __sql__(self):
        return "%s.%s" % (sql(self.table), self.name)

class SQLSelect(SQLNode):
    def __init__(self, targets):
        self.targets = targets
        self.where_clause = None
    def where(self, expr):
        self.where_clause = expr
        return self
    def __sql__(self):
        result = "SELECT %s" % ", ".join([sql(target) for target in self.targets])
        if self.where_clause:
            result = "%s WHERE %s" % (result, sql(self.where_clause))
        return result

def sql(expr):
    if isinstance(expr, SQLNode):
        return expr.__sql__()
    elif isinstance(expr, str):
        return "'%s'" % expr.replace("'", "''")
    else:
        return str(expr)

def select(*targets):
    return SQLSelect(targets)
#-----------------------------------------------------------------
dishes = Table("dishes")
customers = Table("customers")
orders = Table("orders")
query = select(customers.name, dishes.price, orders.amount).where(
customers.cust_id == orders.cust_id and orders.dish_id == dishes.dish_id
and dishes.name == "Spam, Eggs, Sausages and Spam")
print repr(query)
print sql(query)
Example 2 Output
<__main__.SQLSelect object at 0x1cc830>
SELECT customers.name, dishes.price, orders.amount WHERE
(((customers.cust_id = orders.cust_id) AND (orders.dish_id =
dishes.dish_id)) AND (dishes.name = 'Spam, Eggs, Sausages and Spam'))
Copyright
This document has been placed in the public domain.
| Rejected | PEP 335 – Overloadable Boolean Operators | Standards Track | This PEP proposes an extension to permit objects to define their own
meanings for the boolean operators ‘and’, ‘or’ and ‘not’, and suggests
an efficient strategy for implementation. A prototype of this
implementation is available for download. |
PEP 336 – Make None Callable
Author:
Andrew McClelland <eternalsquire at comcast.net>
Status:
Rejected
Type:
Standards Track
Created:
28-Oct-2004
Post-History:
Table of Contents
Abstract
BDFL Pronouncement
Motivation
Rationale
How To Use
References
Copyright
Abstract
None should be a callable object that when called with any
arguments has no side effect and returns None.
BDFL Pronouncement
This PEP is rejected. It is considered a feature that None raises
an error when called. The proposal falls short in tests for
obviousness, clarity, explicitness, and necessity. The provided Switch
example is nice but easily handled by a simple lambda definition.
See python-dev discussion on 17 June 2005 [1].
Motivation
To allow a programming style for selectable actions that is more
in accordance with the minimalistic functional programming goals
of the Python language.
Rationale
Allow the use of None in method tables as a universal no-op,
rather than either (1) checking a method table entry against None
before calling it, or (2) writing a local no-op method that takes
the same arguments as the other functions in the table.
The semantics would be effectively:
class None:
    def __call__(self, *args):
        pass
How To Use
Before, checking function table entry against None:
class Select:
    def a(self, input):
        print 'a'
    def b(self, input):
        print 'b'
    def c(self, input):
        print 'c'
    def __call__(self, input):
        function = { 1 : self.a,
                     2 : self.b,
                     3 : self.c
                   }.get(input, None)
        if function: return function(input)
Before, using a local no effect method:
class Select:
    def a(self, input):
        print 'a'
    def b(self, input):
        print 'b'
    def c(self, input):
        print 'c'
    def nop(self, input):
        pass
    def __call__(self, input):
        return { 1 : self.a,
                 2 : self.b,
                 3 : self.c
               }.get(input, self.nop)(input)
After:
class Select:
    def a(self, input):
        print 'a'
    def b(self, input):
        print 'b'
    def c(self, input):
        print 'c'
    def __call__(self, input):
        return { 1 : self.a,
                 2 : self.b,
                 3 : self.c
               }.get(input, None)(input)
References
[1]
Raymond Hettinger, Propose to reject PEP 336 – Make None Callable
https://mail.python.org/pipermail/python-dev/2005-June/054280.html
Copyright
This document has been placed in the public domain.
| Rejected | PEP 336 – Make None Callable | Standards Track | None should be a callable object that when called with any
arguments has no side effect and returns None. |
PEP 337 – Logging Usage in the Standard Library
Author:
Michael P. Dubner <dubnerm at mindless.com>
Status:
Deferred
Type:
Standards Track
Created:
02-Oct-2004
Python-Version:
2.5
Post-History:
10-Nov-2004
Table of Contents
Abstract
PEP Deferral
Rationale
Proposal
Module List
Doubtful Modules
Guidelines for Logging Usage
References
Copyright
Abstract
This PEP defines a standard for using the logging system (PEP 282) in the
standard library.
Implementing this PEP will simplify development of daemon
applications. As a downside, this PEP requires slight
modifications (albeit in a back-portable way) to a large number
of standard modules.
After implementing this PEP, one can use the following filtering
scheme:
logging.getLogger('py.BaseHTTPServer').setLevel(logging.FATAL)
PEP Deferral
Further exploration of the concepts covered in this PEP has been deferred
for lack of a current champion interested in promoting the goals of the
PEP and collecting and incorporating feedback, and with sufficient
available time to do so effectively.
Rationale
There are a couple of situations when output to stdout or stderr
is impractical:
Daemon applications where the framework doesn’t allow the
redirection of standard output to some file, but assumes use of
some other form of logging. Examples are syslog under *nix’es
and EventLog under WinNT+.
GUI applications which want to output every new log entry in a
separate pop-up window (e.g. a fading OSD).
Also sometimes applications want to filter output entries based on
their source or severity. This requirement can’t be implemented
using simple redirection.
Finally sometimes output needs to be marked with event timestamps,
which can be accomplished with ease using the logging system.
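For instance (an illustrative configuration, not text from the PEP),
timestamping every record emitted through the proposed py. hierarchy is a
one-line matter with the standard logging module:
import logging

# Timestamp every record; the 'py.' logger name follows the PEP's scheme.
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s %(name)s %(levelname)s: %(message)s')
logging.getLogger('py.BaseHTTPServer').info('listening on port %d', 8000)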
Proposal
Every module usable for daemon and GUI applications should be
rewritten to use the logging system instead of print or
sys.stdout.write.
There should be code like this included in the beginning of every
modified module:
import logging
_log = logging.getLogger('py.<module-name>')
A prefix of py. [2] must be used by all modules included in the
standard library distributed along with Python, and only by such
modules (unverifiable). The use of _log is intentional as we
don’t want to auto-export it. For modules that use log only in
one class a logger can be created inside the class definition as
follows:
class XXX:
    __log = logging.getLogger('py.<module-name>')
Then this class can create access methods to log to this private
logger.
So print and sys.std{out|err}.write statements should be
replaced with _log.{debug|info}, and traceback.print_exception
with _log.exception or sometimes _log.debug('...', exc_info=1).
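As a hypothetical illustration of such a conversion (the module name and
messages are invented, not drawn from the PEP), a handler that previously
wrote to sys.stderr might become:
import logging

_log = logging.getLogger('py.BaseHTTPServer')   # per the proposed naming scheme

def log_message(format, *args):
    # Previously: sys.stderr.write(format % args + "\n")
    _log.info(format, *args)

def log_error(format, *args):
    # Previously: traceback.print_exception(...) written to stderr
    _log.exception(format, *args)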
Module List
Here is a (possibly incomplete) list of modules to be reworked:
asyncore (dispatcher.log, dispatcher.log_info)
BaseHTTPServer (BaseHTTPRequestHandler.log_request,
BaseHTTPRequestHandler.log_error,
BaseHTTPRequestHandler.log_message)
cgi (possibly - is cgi.log used by somebody?)
ftplib (if FTP.debugging)
gopherlib (get_directory)
httplib (HTTPResponse, HTTPConnection)
ihooks (_Verbose)
imaplib (IMAP4._mesg)
mhlib (MH.error)
nntplib (NNTP)
pipes (Template.makepipeline)
pkgutil (extend_path)
platform (_syscmd_ver)
poplib (if POP3._debugging)
profile (if Profile.verbose)
robotparser (_debug)
sgmllib (if SGMLParser.verbose)
shlex (if shlex.debug)
smtpd (SMTPChannel/PureProxy where print >> DEBUGSTREAM)
smtplib (if SMTP.debuglevel)
SocketServer (BaseServer.handle_error)
telnetlib (if Telnet.debuglevel)
threading? (_Verbose._note, Thread.__bootstrap)
timeit (Timer.print_exc)
trace
uu (decode)
Additionally there are a couple of modules with commented debug
output or modules where debug output should be added. For
example:
urllib
Finally possibly some modules should be extended to provide more
debug information.
Doubtful Modules
Listed here are modules that the community will propose for
addition to the module list and modules that the community says
should be removed from the module list.
tabnanny (check)
Guidelines for Logging Usage
We can also provide some recommendations to authors of library
modules so that they all follow the same logger-naming convention. I
propose that non-standard-library modules should use loggers named
after their full names, so a module “spam” in sub-package “junk”
of package “dummy” will be named “dummy.junk.spam” and, of course,
the __init__ module of the same sub-package will have the logger
name “dummy.junk”.
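For example (a sketch of the proposed convention, using the hypothetical
package layout above), each module would obtain its logger as follows:
import logging

# in dummy/junk/spam.py
_log = logging.getLogger('dummy.junk.spam')

# in dummy/junk/__init__.py
_log = logging.getLogger('dummy.junk')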
References
[2]
https://mail.python.org/pipermail/python-dev/2004-October/049282.html
Copyright
This document has been placed in the public domain.
| Deferred | PEP 337 – Logging Usage in the Standard Library | Standards Track | This PEP defines a standard for using the logging system (PEP 282) in the
standard library. |
PEP 338 – Executing modules as scripts
Author:
Alyssa Coghlan <ncoghlan at gmail.com>
Status:
Final
Type:
Standards Track
Created:
16-Oct-2004
Python-Version:
2.5
Post-History:
08-Nov-2004, 11-Feb-2006, 12-Feb-2006, 18-Feb-2006
Table of Contents
Abstract
Rationale
Scope of this proposal
Current Behaviour
Proposed Semantics
Reference Implementation
Import Statements and the Main Module
Resolved Issues
Alternatives
References
Copyright
Abstract
This PEP defines semantics for executing any Python module as a
script, either with the -m command line switch, or by invoking
it via runpy.run_module(modulename).
The -m switch implemented in Python 2.4 is quite limited. This
PEP proposes making use of the PEP 302 import hooks to allow any
module which provides access to its code object to be executed.
Rationale
Python 2.4 adds the command line switch -m to allow modules to be
located using the Python module namespace for execution as scripts.
The motivating examples were standard library modules such as pdb
and profile, and the Python 2.4 implementation is fine for this
limited purpose.
A number of users and developers have requested extension of the
feature to also support running modules located inside packages. One
example provided is pychecker’s pychecker.checker module. This
capability was left out of the Python 2.4 implementation because the
implementation of this was significantly more complicated, and the most
appropriate strategy was not at all clear.
The opinion on python-dev was that it was better to postpone the
extension to Python 2.5, and go through the PEP process to help make
sure we got it right.
Since that time, it has also been pointed out that the current version
of -m does not support zipimport or any other kind of
alternative import behaviour (such as frozen modules).
Providing this functionality as a Python module is significantly easier
than writing it in C, and makes the functionality readily available to
all Python programs, rather than being specific to the CPython
interpreter. CPython’s command line switch can then be rewritten to
make use of the new module.
Scripts which execute other scripts (e.g. profile, pdb) also
have the option to use the new module to provide -m style support
for identifying the script to be executed.
Scope of this proposal
In Python 2.4, a module located using -m is executed just as if
its filename had been provided on the command line. The goal of this
PEP is to get as close as possible to making that statement also hold
true for modules inside packages, or accessed via alternative import
mechanisms (such as zipimport).
Prior discussions suggest it should be noted that this PEP is not
about changing the idiom for making Python modules also useful as
scripts (see PEP 299). That issue is considered orthogonal to the
specific feature addressed by this PEP.
Current Behaviour
Before describing the new semantics, it’s worth covering the existing
semantics for Python 2.4 (as they are currently defined only by the
source code and the command line help).
When -m is used on the command line, it immediately terminates the
option list (like -c). The argument is interpreted as the name of
a top-level Python module (i.e. one which can be found on
sys.path).
If the module is found, and is of type PY_SOURCE or
PY_COMPILED, then the command line is effectively reinterpreted
from python <options> -m <module> <args> to python <options>
<filename> <args>. This includes setting sys.argv[0] correctly
(some scripts rely on this - Python’s own regrtest.py is one
example).
If the module is not found, or is not of the correct type, an error
is printed.
Proposed Semantics
The semantics proposed are fairly simple: if -m is used to execute
a module the PEP 302 import mechanisms are used to locate the module and
retrieve its compiled code, before executing the module in accordance
with the semantics for a top-level module. The interpreter does this by
invoking a new standard library function runpy.run_module.
This is necessary due to the way Python’s import machinery locates
modules inside packages. A package may modify its own __path__
variable during initialisation. In addition, paths may be affected by
*.pth files, and some packages will install custom loaders on
sys.meta_path. Accordingly, the only way for Python to reliably
locate the module is by importing the containing package and
using the PEP 302 import hooks to gain access to the Python code.
Note that the process of locating the module to be executed may require
importing the containing package. The effects of such a package import
that will be visible to the executed module are:
the containing package will be in sys.modules
any external effects of the package initialisation (e.g. installed
import hooks, loggers, atexit handlers, etc.)
Reference Implementation
A reference implementation is available on SourceForge ([2]), along
with documentation for the library reference ([5]). There are
two parts to this implementation. The first is a proposed standard
library module runpy. The second is a modification to the code
implementing the -m switch to always delegate to
runpy.run_module instead of trying to run the module directly.
The delegation has the form:
runpy.run_module(sys.argv[0], run_name="__main__", alter_sys=True)
run_module is the only function runpy exposes in its public API.
run_module(mod_name[, init_globals][, run_name][, alter_sys])
Execute the code of the specified module and return the resulting
module globals dictionary. The module’s code is first located using
the standard import mechanism (refer to PEP 302 for details) and
then executed in a fresh module namespace.
The optional dictionary argument init_globals may be used to
pre-populate the globals dictionary before the code is executed.
The supplied dictionary will not be modified. If any of the special
global variables below are defined in the supplied dictionary, those
definitions are overridden by the run_module function.
The special global variables __name__, __file__,
__loader__ and __builtins__ are set in the globals dictionary
before the module code is executed.
__name__ is set to run_name if this optional argument is
supplied, and the original mod_name argument otherwise.
__loader__ is set to the PEP 302 module loader used to retrieve
the code for the module (This loader may be a wrapper around the
standard import mechanism).
__file__ is set to the name provided by the module loader. If
the loader does not make filename information available, this
argument is set to None.
__builtins__ is automatically initialised with a reference to
the top level namespace of the __builtin__ module.
If the argument alter_sys is supplied and evaluates to True,
then sys.argv[0] is updated with the value of __file__
and sys.modules[__name__] is updated with a temporary module
object for the module being executed. Both sys.argv[0] and
sys.modules[__name__] are restored to their original values
before this function returns.
When invoked as a script, the runpy module finds and executes the
module supplied as the first argument. It adjusts sys.argv by
deleting sys.argv[0] (which refers to the runpy module itself)
and then invokes run_module(sys.argv[0], run_name="__main__",
alter_sys=True).
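As an illustration (not taken from the PEP), run_module can also be called
directly from Python code; the module chosen here is arbitrary:
import runpy

# Execute the standard library 'platform' module in a fresh namespace
# and capture the resulting module globals.
mod_globals = runpy.run_module('platform')
print(mod_globals['__name__'])            # platform
print('python_version' in mod_globals)    # True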
Import Statements and the Main Module
The release of 2.5b1 showed a surprising (although obvious in
retrospect) interaction between this PEP and PEP 328 - explicit
relative imports don’t work from a main module. This is due to
the fact that relative imports rely on __name__ to determine
the current module’s position in the package hierarchy. In a main
module, the value of __name__ is always '__main__', so
explicit relative imports will always fail (as they only work for
a module inside a package).
Investigation into why implicit relative imports appear to work when
a main module is executed directly but fail when executed using -m
showed that such imports are actually always treated as absolute
imports. Because of the way direct execution works, the package
containing the executed module is added to sys.path, so its sibling
modules are actually imported as top level modules. This can easily
lead to multiple copies of the sibling modules in the application if
implicit relative imports are used in modules that may be directly
executed (e.g. test modules or utility scripts).
For the 2.5 release, the recommendation is to always use absolute
imports in any module that is intended to be used as a main module.
The -m switch provides a benefit here, as it inserts the current
directory into sys.path, instead of the directory containing the main
module. This means that it is possible to run a module from inside a
package using -m so long as the current directory contains the top
level directory for the package. Absolute imports will work correctly
even if the package isn’t installed anywhere else on sys.path. If the
module is executed directly and uses absolute imports to retrieve its
sibling modules, then the top level package directory needs to be
installed somewhere on sys.path (since the current directory won’t be
added automatically).
Here’s an example file layout:
devel/
    pkg/
        __init__.py
        moduleA.py
        moduleB.py
        test/
            __init__.py
            test_A.py
            test_B.py
So long as the current directory is devel, or devel is already
on sys.path and the test modules use absolute imports (such as
import pkg.moduleA to retrieve the module under test), PEP 338
allows the tests to be run as:
python -m pkg.test.test_A
python -m pkg.test.test_B
The question of whether or not relative imports should be supported
when a main module is executed with -m is something that will be
revisited for Python 2.6. Permitting it would require changes to
either Python’s import semantics or the semantics used to indicate
when a module is the main module, so it is not a decision to be made
hastily.
Resolved Issues
There were some key design decisions that influenced the development of
the runpy module. These are listed below.
The special variables __name__, __file__ and __loader__
are set in a module’s global namespace before the module is executed.
As run_module alters these values, it does not mutate the
supplied dictionary. If it did, then passing globals() to this
function could have nasty side effects.
Sometimes, the information needed to populate the special variables
simply isn’t available. Rather than trying to be too clever, these
variables are simply set to None when the relevant information
cannot be determined.
There is no special protection on the alter_sys argument.
This may result in sys.argv[0] being set to None if file
name information is not available.
The import lock is NOT used to avoid potential threading issues that
arise when alter_sys is set to True. Instead, it is recommended that
threaded code simply avoid using this flag.
Alternatives
The first alternative implementation considered ignored packages’
__path__ variables, and looked only in the main package directory. A
Python script with this behaviour can be found in the discussion of
the execmodule cookbook recipe [3].
The execmodule cookbook recipe itself was the proposed mechanism in
an earlier version of this PEP (before the PEP’s author read PEP 302).
Both approaches were rejected as they do not meet the main goal of the
-m switch – to allow the full Python namespace to be used to
locate modules for execution from the command line.
An earlier version of this PEP included some mistaken assumptions
about the way exec handled locals dictionaries and code from
function objects. These mistaken assumptions led to some unneeded
design complexity which has now been removed - run_code shares all
of the quirks of exec.
Earlier versions of the PEP also exposed a broader API that just the
single run_module() function needed to implement the updates to
the -m switch. In the interests of simplicity, those extra functions
have been dropped from the proposed API.
After the original implementation in SVN, it became clear that holding
the import lock when executing the initial application script was not
correct (e.g. python -m test.regrtest test_threadedimport failed).
So the run_module function only holds the import lock during the
actual search for the module, and releases it before execution, even if
alter_sys is set.
References
[2]
PEP 338 implementation (runpy module and -m update)
(https://bugs.python.org/issue1429601)
[3]
execmodule Python Cookbook Recipe
(http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/307772)
[5]
PEP 338 documentation (for runpy module)
(https://bugs.python.org/issue1429605)
Copyright
This document has been placed in the public domain.
| Final | PEP 338 – Executing modules as scripts | Standards Track | This PEP defines semantics for executing any Python module as a
script, either with the -m command line switch, or by invoking
it via runpy.run_module(modulename). |
PEP 339 – Design of the CPython Compiler
Author:
Brett Cannon <brett at python.org>
Status:
Withdrawn
Type:
Informational
Created:
02-Feb-2005
Post-History:
Table of Contents
Abstract
Parse Trees
Abstract Syntax Trees (AST)
Memory Management
Parse Tree to AST
Control Flow Graphs
AST to CFG to Bytecode
Introducing New Bytecode
Code Objects
Important Files
Known Compiler-related Experiments
References
Note
This PEP has been withdrawn and moved to the Python
developer’s guide.
Abstract
Historically (through 2.4), compilation from source code to bytecode
involved two steps:
Parse the source code into a parse tree (Parser/pgen.c)
Emit bytecode based on the parse tree (Python/compile.c)
Historically, this is not how a standard compiler works. The usual
steps for compilation are:
Parse source code into a parse tree (Parser/pgen.c)
Transform parse tree into an Abstract Syntax Tree (Python/ast.c)
Transform AST into a Control Flow Graph (Python/compile.c)
Emit bytecode based on the Control Flow Graph (Python/compile.c)
Starting with Python 2.5, the above steps are now used. This change
was done to simplify compilation by breaking it into three steps.
The purpose of this document is to outline how the latter three steps
of the process work.
This document does not touch on how parsing works beyond what is needed
to explain what is needed for compilation. It is also not exhaustive
in terms of how the entire system works. You will most likely need
to read some source to have an exact understanding of all details.
Parse Trees
Python’s parser is an LL(1) parser mostly based on the
implementation laid out in the Dragon Book [Aho86].
The grammar file for Python can be found in Grammar/Grammar, with the
numeric values of the grammar rules stored in Include/graminit.h. The
numeric values for types of tokens (literal tokens, such as :,
numbers, etc.) are kept in Include/token.h. The parse tree is made up of
node * structs (as defined in Include/node.h).
Querying data from the node structs can be done with the following
macros (which are all defined in Include/node.h):
CHILD(node *, int)
    Returns the nth child of the node using zero-offset indexing
RCHILD(node *, int)
    Returns the nth child of the node from the right side; use
    negative numbers!
NCH(node *)
    Number of children the node has
STR(node *)
    String representation of the node; e.g., will return : for a
    COLON token
TYPE(node *)
    The type of node as specified in Include/graminit.h
REQ(node *, TYPE)
    Assert that the node is the type that is expected
LINENO(node *)
    Retrieve the line number of the source code that led to the
    creation of the parse rule; defined in Python/ast.c
To tie all of this together, consider the rule for ‘while’:
while_stmt: 'while' test ':' suite ['else' ':' suite]
The node representing this will have TYPE(node) == while_stmt and
the number of children can be 4 or 7 depending on whether there is an ‘else’
clause. To access what should be the first ‘:’ and require that it be an
actual ‘:’ token, use REQ(CHILD(node, 2), COLON).
Abstract Syntax Trees (AST)
The abstract syntax tree (AST) is a high-level representation of the
program structure without the necessity of containing the source code;
it can be thought of as an abstract representation of the source code. The
specification of the AST nodes is specified using the Zephyr Abstract
Syntax Definition Language (ASDL) [Wang97].
The definition of the AST nodes for Python is found in the file
Parser/Python.asdl .
Each AST node (representing statements, expressions, and several
specialized types, like list comprehensions and exception handlers) is
defined by the ASDL. Most definitions in the AST correspond to a
particular source construct, such as an ‘if’ statement or an attribute
lookup. The definition is independent of its realization in any
particular programming language.
The following fragment of the Python ASDL construct demonstrates the
approach and syntax:
module Python
{
    stmt = FunctionDef(identifier name, arguments args, stmt* body,
                       expr* decorators)
         | Return(expr? value) | Yield(expr value)
         attributes (int lineno)
}
The preceding example describes three different kinds of statements:
function definitions, return statements, and yield statements. All
three kinds are considered of type stmt as shown by ‘|’ separating the
various kinds. They all take arguments of various kinds and amounts.
Modifiers on the argument type specify the number of values needed; ‘?’
means it is optional, ‘*’ means 0 or more, no modifier means only one
value for the argument and it is required. FunctionDef, for instance,
takes an identifier for the name, ‘arguments’ for args, zero or more
stmt arguments for ‘body’, and zero or more expr arguments for
‘decorators’.
Do notice that something like ‘arguments’, which is a node type, is
represented as a single AST node and not as a sequence of nodes as with
stmt as one might expect.
All three kinds also have an ‘attributes’ argument; this is shown by the
fact that ‘attributes’ lacks a ‘|’ before it.
The statement definitions above generate the following C structure type:
typedef struct _stmt *stmt_ty;

struct _stmt {
    enum { FunctionDef_kind=1, Return_kind=2, Yield_kind=3 } kind;
    union {
        struct {
            identifier name;
            arguments_ty args;
            asdl_seq *body;
        } FunctionDef;
        struct {
            expr_ty value;
        } Return;
        struct {
            expr_ty value;
        } Yield;
    } v;
    int lineno;
}
Also generated are a series of constructor functions that allocate (in
this case) a stmt_ty struct with the appropriate initialization. The
‘kind’ field specifies which component of the union is initialized. The
FunctionDef() constructor function sets ‘kind’ to FunctionDef_kind and
initializes the ‘name’, ‘args’, ‘body’, and ‘attributes’ fields.
Memory Management
Before discussing the actual implementation of the compiler, a discussion of
how memory is handled is in order. To make memory management simple, an arena
is used. This means that memory is pooled in a single location for easy
allocation and removal. What this gives us is the removal of explicit memory
deallocation. Because all memory needed by the compiler is registered with
the arena, a single call to free the arena is all
that is needed to completely free all memory used by the compiler.
In general, unless you are working on the critical core of the compiler, memory
management can be completely ignored. But if you are working at either the
very beginning of the compiler or the end, you need to care about how the arena
works. All code relating to the arena is in either Include/pyarena.h or
Python/pyarena.c .
PyArena_New() will create a new arena. The returned PyArena structure will
store pointers to all memory given to it. This does the bookkeeping of what
memory needs to be freed when the compiler is finished with the memory it used.
That freeing is done with PyArena_Free(). This needs to only be called in
strategic areas where the compiler exits.
As stated above, in general you should not have to worry about memory
management when working on the compiler. The technical details have been
designed to be hidden from you for most cases.
The only exception comes about when managing a PyObject. Since the rest
of Python uses reference counting, there is extra support added
to the arena to cleanup each PyObject that was allocated. These cases
are very rare. However, if you’ve allocated a PyObject, you must tell
the arena about it by calling PyArena_AddPyObject().
Parse Tree to AST
The AST is generated from the parse tree (see Python/ast.c) using the
function PyAST_FromNode().
The function begins a tree walk of the parse tree, creating various AST
nodes as it goes along. It does this by allocating all new nodes it
needs, calling the proper AST node creation functions for any required
supporting functions, and connecting them as needed.
Do realize that there is no automated nor symbolic connection between
the grammar specification and the nodes in the parse tree. No help is
directly provided by the parse tree as in yacc.
For instance, one must keep track of which node in the parse tree
one is working with (e.g., if you are working with an ‘if’ statement
you need to watch out for the ‘:’ token to find the end of the conditional).
The functions called to generate AST nodes from the parse tree all have
the name ast_for_xx where xx is the grammar rule that the function
handles (alias_for_import_name is the exception to this). These in turn
call the constructor functions as defined by the ASDL grammar and
contained in Python/Python-ast.c (which was generated by
Parser/asdl_c.py) to create the nodes of the AST. This all leads to a
sequence of AST nodes stored in asdl_seq structs.
Functions and macros for creating and using asdl_seq * types, as found
in Python/asdl.c and Include/asdl.h:
asdl_seq_new()
    Allocate memory for an asdl_seq for the specified length
asdl_seq_GET()
    Get item held at a specific position in an asdl_seq
asdl_seq_SET()
    Set a specific index in an asdl_seq to the specified value
asdl_seq_LEN(asdl_seq *)
    Return the length of an asdl_seq
If you are working with statements, you must also worry about keeping
track of what line number generated the statement. Currently the line
number is passed as the last parameter to each stmt_ty function.
Control Flow Graphs
A control flow graph (often referenced by its acronym, CFG) is a
directed graph that models the flow of a program using basic blocks that
contain the intermediate representation (abbreviated “IR”, and in this
case is Python bytecode) within the blocks. Basic blocks themselves are
a block of IR that has a single entry point but possibly multiple exit
points. The single entry point is the key to basic blocks; it all has
to do with jumps. An entry point is the target of something that
changes control flow (such as a function call or a jump) while exit
points are instructions that would change the flow of the program (such
as jumps and ‘return’ statements). What this means is that a basic
block is a chunk of code that starts at the entry point and runs to an
exit point or the end of the block.
As an example, consider an ‘if’ statement with an ‘else’ block. The
guard on the ‘if’ is a basic block which is pointed to by the basic
block containing the code leading to the ‘if’ statement. The ‘if’
statement block contains jumps (which are exit points) to the true body
of the ‘if’ and the ‘else’ body (which may be NULL), each of which are
their own basic blocks. Both of those blocks in turn point to the
basic block representing the code following the entire ‘if’ statement.
CFGs are usually one step away from final code output. Code is directly
generated from the basic blocks (with jump targets adjusted based on the
output order) by doing a post-order depth-first search on the CFG
following the edges.
AST to CFG to Bytecode
With the AST created, the next step is to create the CFG. The first step
is to convert the AST to Python bytecode without having jump targets
resolved to specific offsets (this is calculated when the CFG goes to
final bytecode). Essentially, this transforms the AST into Python
bytecode with control flow represented by the edges of the CFG.
Conversion is done in two passes. The first creates the namespace
(variables can be classified as local, free/cell for closures, or
global). With that done, the second pass essentially flattens the CFG
into a list and calculates jump offsets for final output of bytecode.
The conversion process is initiated by a call to the function
PyAST_Compile() in Python/compile.c . This function does both the
conversion of the AST to a CFG and
outputting final bytecode from the CFG. The AST to CFG step is handled
mostly by two functions called by PyAST_Compile(): PySymtable_Build() and
compiler_mod() . The former is in Python/symtable.c while the latter is in
Python/compile.c .
PySymtable_Build() begins by entering the starting code block for the
AST (passed-in) and then calling the proper symtable_visit_xx function
(with xx being the AST node type). Next, the AST tree is walked with
the various code blocks that delineate the reach of a local variable
as blocks are entered and exited using symtable_enter_block() and
symtable_exit_block(), respectively.
Once the symbol table is created, it is time for CFG creation, whose
code is in Python/compile.c . This is handled by several functions
that break the task down by various AST node types. The functions are
all named compiler_visit_xx where xx is the name of the node type (such
as stmt, expr, etc.). Each function receives a struct compiler *
and xx_ty where xx is the AST node type. Typically these functions
consist of a large ‘switch’ statement, branching based on the kind of
node type passed to it. Simple things are handled inline in the
‘switch’ statement with more complex transformations farmed out to other
functions named compiler_xx with xx being a descriptive name of what is
being handled.
When transforming an arbitrary AST node, use the VISIT() macro.
The appropriate compiler_visit_xx function is called, based on the value
passed in for <node type> (so VISIT(c, expr, node) calls
compiler_visit_expr(c, node)). The VISIT_SEQ macro is very similar,
but is called on AST node sequences (those values that were created as
arguments to a node that used the ‘*’ modifier). There is also
VISIT_SLICE() just for handling slices.
Emission of bytecode is handled by the following macros:
ADDOP()
    add a specified opcode
ADDOP_I()
    add an opcode that takes an argument
ADDOP_O(struct compiler *c, int op, PyObject *type, PyObject *obj)
    add an opcode with the proper argument based on the position of the
    specified PyObject in PyObject sequence object, but with no handling of
    mangled names; used for when you need to do named lookups of objects
    such as globals, consts, or parameters where name mangling is not
    possible and the scope of the name is known
ADDOP_NAME()
    just like ADDOP_O, but name mangling is also handled; used for
    attribute loading or importing based on name
ADDOP_JABS()
    create an absolute jump to a basic block
ADDOP_JREL()
    create a relative jump to a basic block
There are also several helper functions that emit bytecode; they are named
compiler_xx() where xx is what the function helps with (list, boolop,
etc.). A rather useful one is compiler_nameop().
This function looks up the scope of a variable and, based on the
expression context, emits the proper opcode to load, store, or delete
the variable.
The line number on which a statement is defined is handled by
compiler_visit_stmt() and thus is not a worry.
In addition to emitting bytecode based on the AST node, handling the
creation of basic blocks must be done. Below are the macros and
functions used for managing basic blocks:
NEW_BLOCK()
    create block and set it as current
NEXT_BLOCK()
    basically NEW_BLOCK() plus jump from current block
compiler_new_block()
    create a block but don’t use it (used for generating jumps)
Once the CFG is created, it must be flattened and then final emission of
bytecode occurs. Flattening is handled using a post-order depth-first
search. Once flattened, jump offsets are backpatched based on the
flattening and then a PyCodeObject is created. All of this is
handled by calling assemble() .
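Although the CFG machinery itself is internal to the interpreter, the
end-to-end result of this pipeline can be observed from Python (a small
illustration, not part of the PEP) with the built-in compile() and the dis
module:
import dis

# Compile a tiny piece of source and disassemble the resulting code
# object, the final product of the AST -> CFG -> bytecode steps above.
code = compile("x = 1\nif x and y:\n    x = 2\n", "<example>", "exec")
dis.dis(code)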
Introducing New Bytecode
Sometimes a new feature requires a new opcode. But adding new bytecode is
not as simple as just suddenly introducing new bytecode in the AST ->
bytecode step of the compiler. Several pieces of code throughout Python depend
on having correct information about what bytecode exists.
First, you must choose a name and a unique identifier number. The official
list of bytecode can be found in Include/opcode.h . If the opcode is to take
an argument, it must be given a unique number greater than that assigned to
HAVE_ARGUMENT (as found in Include/opcode.h).
Once the name/number pair
has been chosen and entered in Include/opcode.h, you must also enter it into
Lib/opcode.py and Doc/library/dis.rst .
With a new bytecode you must also change what is called the magic number for
.pyc files. The variable MAGIC in Python/import.c contains the number.
Changing this number will lead to all .pyc files with the old MAGIC
to be recompiled by the interpreter on import.
Finally, you need to introduce the use of the new bytecode. Altering
Python/compile.c and Python/ceval.c will be the primary places to change.
But you will also need to change the ‘compiler’ package. The key files
to do that are Lib/compiler/pyassem.py and Lib/compiler/pycodegen.py .
If you make a change here that can affect the output of bytecode that
is already in existence and you do not change the magic number constantly, make
sure to delete your old .py(c|o) files! Even though you will end up changing
the magic number if you change the bytecode, while you are debugging your work
you will be changing the bytecode output without constantly bumping up the
magic number. This means you end up with stale .pyc files that will not be
recreated. Running
find . -name '*.py[co]' -exec rm -f {} ';' should delete all .pyc files you
have, forcing new ones to be created and thus allow you test out your new
bytecode properly.
Code Objects
The result of PyAST_Compile() is a PyCodeObject which is defined in
Include/code.h . And with that you now have executable Python bytecode!
The code object (bytecode) is executed in Python/ceval.c . This file
will also need a new case statement for the new opcode in the big switch
statement in PyEval_EvalFrameEx().
Important Files
Parser/
Python.asdl
    ASDL syntax file
asdl.py
    “An implementation of the Zephyr Abstract Syntax Definition
    Language.” Uses SPARK to parse the ASDL files.
asdl_c.py
    “Generate C code from an ASDL description.” Generates
    Python/Python-ast.c and Include/Python-ast.h .
spark.py
    SPARK parser generator
Python/
Python-ast.c
    Creates C structs corresponding to the ASDL types. Also
    contains code for marshaling AST nodes (core ASDL types have
    marshaling code in asdl.c). “File automatically generated by
    Parser/asdl_c.py”. This file must be committed separately
    after every grammar change is committed since the __version__
    value is set to the latest grammar change revision number.
asdl.c
    Contains code to handle the ASDL sequence type. Also has code
    to handle marshalling the core ASDL types, such as number and
    identifier. Used by Python-ast.c for marshaling AST nodes.
ast.c
    Converts Python’s parse tree into the abstract syntax tree.
ceval.c
    Executes byte code (aka, eval loop).
compile.c
    Emits bytecode based on the AST.
symtable.c
    Generates a symbol table from AST.
pyarena.c
    Implementation of the arena memory manager.
import.c
    Home of the magic number (named MAGIC) for bytecode versioning
Include/
Python-ast.h
    Contains the actual definitions of the C structs as generated by
    Python/Python-ast.c .
    “Automatically generated by Parser/asdl_c.py”.
asdl.h
    Header for the corresponding Python/asdl.c .
ast.h
    Declares PyAST_FromNode() external (from Python/ast.c).
code.h
    Header file for Objects/codeobject.c; contains definition of
    PyCodeObject.
symtable.h
    Header for Python/symtable.c . struct symtable and
    PySTEntryObject are defined here.
pyarena.h
    Header file for the corresponding Python/pyarena.c .
opcode.h
    Master list of bytecode; if this file is modified you must modify
    several other files accordingly (see “Introducing New Bytecode”)
Objects/
codeobject.c
    Contains PyCodeObject-related code (originally in
    Python/compile.c).
Lib/
opcode.py
    One of the files that must be modified if Include/opcode.h is.
compiler/
    pyassem.py
        One of the files that must be modified if Include/opcode.h is
        changed.
    pycodegen.py
        One of the files that must be modified if Include/opcode.h is
        changed.
Known Compiler-related Experiments
This section lists known experiments involving the compiler (including
bytecode).
Skip Montanaro presented a paper at a Python workshop on a peephole optimizer
[1].
Michael Hudson has a non-active SourceForge project named Bytecodehacks
[2] that provides functionality for playing with bytecode
directly.
An opcode to combine the functionality of LOAD_ATTR/CALL_FUNCTION was created
named CALL_ATTR [3]. It currently only works for classic classes; for
new-style classes, rough benchmarking showed an actual slowdown thanks to
having to support both classic and new-style classes.
References
[Aho86]
Alfred V. Aho, Ravi Sethi, Jeffrey D. Ullman.
Compilers: Principles, Techniques, and Tools,
http://www.amazon.com/exec/obidos/tg/detail/-/0201100886/104-0162389-6419108
[Wang97]
Daniel C. Wang, Andrew W. Appel, Jeff L. Korn, and Chris
S. Serra. The Zephyr Abstract Syntax Description Language.
In Proceedings of the Conference on Domain-Specific Languages, pp.
213–227, 1997.
[1]
Skip Montanaro’s Peephole Optimizer Paper
(https://legacy.python.org/workshops/1998-11/proceedings/papers/montanaro/montanaro.html)
[2]
Bytecodehacks Project
(http://bytecodehacks.sourceforge.net/bch-docs/bch/index.html)
[3]
CALL_ATTR opcode
(https://bugs.python.org/issue709744)
| Withdrawn | PEP 339 – Design of the CPython Compiler | Informational | Historically (through 2.4), compilation from source code to bytecode
involved two steps: |
PEP 341 – Unifying try-except and try-finally
Author:
Georg Brandl <georg at python.org>
Status:
Final
Type:
Standards Track
Created:
04-May-2005
Python-Version:
2.5
Post-History:
Table of Contents
Abstract
Rationale/Proposal
Changes to the grammar
Implementation
References
Copyright
Abstract
This PEP proposes a change in the syntax and semantics of try
statements to allow combined try-except-finally blocks. This
means in short that it would be valid to write:
try:
    <do something>
except Exception:
    <handle the error>
finally:
    <cleanup>
Rationale/Proposal
There are many use cases for the try-except statement and
for the try-finally statement per se; however, often one needs
to catch exceptions and execute some cleanup code afterwards.
It is slightly annoying and not very intelligible that
one has to write:
f = None
try:
    try:
        f = open(filename)
        text = f.read()
    except IOError:
        print 'An error occurred'
finally:
    if f:
        f.close()
So it is proposed that a construction like this:
try:
    <suite 1>
except Ex1:
    <suite 2>
<more except: clauses>
else:
    <suite 3>
finally:
    <suite 4>
be exactly the same as the legacy:
try:
    try:
        <suite 1>
    except Ex1:
        <suite 2>
    <more except: clauses>
    else:
        <suite 3>
finally:
    <suite 4>
This is backwards compatible, and every try statement that is
legal today would continue to work.
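For instance (an illustrative rewrite, not an example taken verbatim from the
PEP; as in the earlier snippet, filename is assumed to be defined elsewhere),
the file-handling code shown above could then be written without the extra
level of nesting:
f = None
try:
    f = open(filename)
    text = f.read()
except IOError:
    print 'An error occurred'
finally:
    if f:
        f.close()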
Changes to the grammar
The grammar for the try statement, which is currently:
try_stmt: ('try' ':' suite (except_clause ':' suite)+
           ['else' ':' suite] | 'try' ':' suite 'finally' ':' suite)
would have to become:
try_stmt: 'try' ':' suite
           (
             (except_clause ':' suite)+
             ['else' ':' suite]
             ['finally' ':' suite]
           |
             'finally' ':' suite
           )
Implementation
As the PEP author currently does not have sufficient knowledge
of the CPython implementation, he is unfortunately not able
to deliver one. Thomas Lee has submitted a patch [2].
However, according to Guido, it should be a piece of cake to
implement [1] – at least for a core hacker.
This patch was committed 17 December 2005, SVN revision 41740 [3].
References
[1]
https://mail.python.org/pipermail/python-dev/2005-May/053319.html
[2]
https://bugs.python.org/issue1355913
[3]
https://mail.python.org/pipermail/python-checkins/2005-December/048457.html
Copyright
This document has been placed in the public domain.
| Final | PEP 341 – Unifying try-except and try-finally | Standards Track | This PEP proposes a change in the syntax and semantics of try
statements to allow combined try-except-finally blocks. This
means in short that it would be valid to write: |
PEP 343 – The “with” Statement
Author:
Guido van Rossum, Alyssa Coghlan
Status:
Final
Type:
Standards Track
Created:
13-May-2005
Python-Version:
2.5
Post-History:
02-Jun-2005, 16-Oct-2005, 29-Oct-2005, 23-Apr-2006, 01-May-2006,
30-Jul-2006
Table of Contents
Abstract
Author’s Note
Introduction
Motivation and Summary
Use Cases
Specification: The ‘with’ Statement
Transition Plan
Generator Decorator
Context Managers in the Standard Library
Standard Terminology
Caching Context Managers
Resolved Issues
Rejected Options
Examples
Reference Implementation
Acknowledgements
References
Copyright
Abstract
This PEP adds a new statement “with” to the Python language to make
it possible to factor out standard uses of try/finally statements.
In this PEP, context managers provide __enter__() and __exit__()
methods that are invoked on entry to and exit from the body of the
with statement.
Author’s Note
This PEP was originally written in first person by Guido, and
subsequently updated by Alyssa (Nick) Coghlan to reflect later discussion
on python-dev. Any first person references are from Guido’s
original.
Python’s alpha release cycle revealed terminology problems in this
PEP and in the associated documentation and implementation [13].
The PEP stabilised around the time of the first Python 2.5 beta
release.
Yes, the verb tense is messed up in a few places. We’ve been
working on this PEP for over a year now, so things that were
originally in the future are now in the past :)
Introduction
After a lot of discussion about PEP 340 and alternatives, I
decided to withdraw PEP 340 and proposed a slight variant on PEP
310. After more discussion, I have added back a mechanism for
raising an exception in a suspended generator using a throw()
method, and a close() method which throws a new GeneratorExit
exception; these additions were first proposed on python-dev in
[2] and universally approved of. I’m also changing the keyword to
‘with’.
After acceptance of this PEP, the following PEPs were rejected due
to overlap:
PEP 310, Reliable Acquisition/Release Pairs. This is the
original with-statement proposal.
PEP 319, Python Synchronize/Asynchronize Block. Its use cases
can be covered by the current PEP by providing suitable
with-statement controllers: for ‘synchronize’ we can use the
“locking” template from example 1; for ‘asynchronize’ we can use
a similar “unlocking” template. I don’t think having an
“anonymous” lock associated with a code block is all that
important; in fact it may be better to always be explicit about
the mutex being used.
PEP 340 and PEP 346 also overlapped with this PEP, but were
voluntarily withdrawn when this PEP was submitted.
Some discussion of earlier incarnations of this PEP took place on
the Python Wiki [3].
Motivation and Summary
PEP 340, Anonymous Block Statements, combined many powerful ideas:
using generators as block templates, adding exception handling and
finalization to generators, and more. Besides praise it received
a lot of opposition from people who didn’t like the fact that it
was, under the covers, a (potential) looping construct. This
meant that break and continue in a block-statement would break or
continue the block-statement, even if it was used as a non-looping
resource management tool.
But the final blow came when I read Raymond Chen’s rant about
flow-control macros [1]. Raymond argues convincingly that hiding
flow control in macros makes your code inscrutable, and I find
that his argument applies to Python as well as to C. I realized
that PEP 340 templates can hide all sorts of control flow; for
example, its example 4 (auto_retry()) catches exceptions and
repeats the block up to three times.
However, the with-statement of PEP 310 does not hide control
flow, in my view: while a finally-suite temporarily suspends the
control flow, in the end, the control flow resumes as if the
finally-suite wasn’t there at all.
Remember, PEP 310 proposes roughly this syntax (the “VAR =” part is
optional):
with VAR = EXPR:
    BLOCK
which roughly translates into this:
VAR = EXPR
VAR.__enter__()
try:
    BLOCK
finally:
    VAR.__exit__()
Now consider this example:
with f = open("/etc/passwd"):
    BLOCK1
BLOCK2
Here, just as if the first line was “if True” instead, we know
that if BLOCK1 completes without an exception, BLOCK2 will be
reached; and if BLOCK1 raises an exception or executes a non-local
goto (a break, continue or return), BLOCK2 is not reached. The
magic added by the with-statement at the end doesn’t affect this.
(You may ask, what if a bug in the __exit__() method causes an
exception? Then all is lost – but this is no worse than with
other exceptions; the nature of exceptions is that they can happen
anywhere, and you just have to live with that. Even if you
write bug-free code, a KeyboardInterrupt exception can still cause
it to exit between any two virtual machine opcodes.)
This argument almost led me to endorse PEP 310, but I had one idea
left from the PEP 340 euphoria that I wasn’t ready to drop: using
generators as “templates” for abstractions like acquiring and
releasing a lock or opening and closing a file is a powerful idea,
as can be seen by looking at the examples in that PEP.
Inspired by a counter-proposal to PEP 340 by Phillip Eby I tried
to create a decorator that would turn a suitable generator into an
object with the necessary __enter__() and __exit__() methods.
Here I ran into a snag: while it wasn’t too hard for the locking
example, it was impossible to do this for the opening example.
The idea was to define the template like this:
@contextmanager
def opening(filename):
f = open(filename)
try:
yield f
finally:
f.close()
and used it like this:
with f = opening(filename):
...read data from f...
The problem is that in PEP 310, the result of calling EXPR is
assigned directly to VAR, and then VAR’s __exit__() method is
called upon exit from BLOCK1. But here, VAR clearly needs to
receive the opened file, and that would mean that __exit__() would
have to be a method on the file.
While this can be solved using a proxy class, this is awkward and
made me realize that a slightly different translation would make
writing the desired decorator a piece of cake: let VAR receive the
result from calling the __enter__() method, and save the value of
EXPR to call its __exit__() method later. Then the decorator can
return an instance of a wrapper class whose __enter__() method
calls the generator’s next() method and returns whatever next()
returns; the wrapper instance’s __exit__() method calls next()
again but expects it to raise StopIteration. (Details below in
the section Optional Generator Decorator.)
So now the final hurdle was that the PEP 310 syntax:
with VAR = EXPR:
BLOCK1
would be deceptive, since VAR does not receive the value of
EXPR. Borrowing from PEP 340, it was an easy step to:
with EXPR as VAR:
BLOCK1
Additional discussion showed that people really liked being able
to “see” the exception in the generator, even if it was only to
log it; the generator is not allowed to yield another value, since
the with-statement should not be usable as a loop (raising a
different exception is marginally acceptable). To enable this, a
new throw() method for generators is proposed, which takes one to
three arguments representing an exception in the usual fashion
(type, value, traceback) and raises it at the point where the
generator is suspended.
Once we have this, it is a small step to proposing another
generator method, close(), which calls throw() with a special
exception, GeneratorExit. This tells the generator to exit, and
from there it’s another small step to proposing that close() be
called automatically when the generator is garbage-collected.
Then, finally, we can allow a yield-statement inside a try-finally
statement, since we can now guarantee that the finally-clause will
(eventually) be executed. The usual cautions about finalization
apply – the process may be terminated abruptly without finalizing
any objects, and objects may be kept alive forever by cycles or
memory leaks in the application (as opposed to cycles or leaks in
the Python implementation, which are taken care of by GC).
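A hedged sketch of the combined behaviour described above (close(),
GeneratorExit and yield inside try/finally), using an illustrative
counting() generator:
def counting(label):
    try:
        n = 0
        while True:
            yield n
            n += 1
    finally:
        # Runs when the generator is exhausted, closed explicitly, or
        # (in CPython) garbage-collected.
        print label, "finalized"
g = counting("worker")
print g.next()   # 0
print g.next()   # 1
g.close()        # raises GeneratorExit at the yield; prints "worker finalized"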
Note that we’re not guaranteeing that the finally-clause is
executed immediately after the generator object becomes unused,
even though this is how it will work in CPython. This is similar
to auto-closing files: while a reference-counting implementation
like CPython deallocates an object as soon as the last reference
to it goes away, implementations that use other GC algorithms do
not make the same guarantee. This applies to Jython, IronPython,
and probably to Python running on Parrot.
(The details of the changes made to generators can now be found in
PEP 342 rather than in the current PEP.)
Use Cases
See the Examples section near the end.
Specification: The ‘with’ Statement
A new statement is proposed with the syntax:
with EXPR as VAR:
BLOCK
Here, ‘with’ and ‘as’ are new keywords; EXPR is an arbitrary
expression (but not an expression-list) and VAR is a single
assignment target. It cannot be a comma-separated sequence of
variables, but it can be a parenthesized comma-separated
sequence of variables. (This restriction makes possible a future
extension of the syntax to allow multiple comma-separated resources,
each with its own optional as-clause.)
The “as VAR” part is optional.
The translation of the above statement is:
mgr = (EXPR)
exit = type(mgr).__exit__ # Not calling it yet
value = type(mgr).__enter__(mgr)
exc = True
try:
try:
VAR = value # Only if "as VAR" is present
BLOCK
except:
# The exceptional case is handled here
exc = False
if not exit(mgr, *sys.exc_info()):
raise
# The exception is swallowed if exit() returns true
finally:
# The normal and non-local-goto cases are handled here
if exc:
exit(mgr, None, None, None)
Here, the lowercase variables (mgr, exit, value, exc) are internal
variables and not accessible to the user; they will most likely be
implemented as special registers or stack positions.
The details of the above translation are intended to prescribe the
exact semantics. If either of the relevant methods is not found
as expected, the interpreter will raise AttributeError, in the
order that they are tried (__exit__, __enter__).
Similarly, if any of the calls raises an exception, the effect is
exactly as it would be in the above code. Finally, if BLOCK
contains a break, continue or return statement, the __exit__()
method is called with three None arguments just as if BLOCK
completed normally. (I.e. these “pseudo-exceptions” are not seen
as exceptions by __exit__().)
If the “as VAR” part of the syntax is omitted, the “VAR =” part of
the translation is omitted (but mgr.__enter__() is still called).
The calling convention for mgr.__exit__() is as follows. If the
finally-suite was reached through normal completion of BLOCK or
through a non-local goto (a break, continue or return statement in
BLOCK), mgr.__exit__() is called with three None arguments. If
the finally-suite was reached through an exception raised in
BLOCK, mgr.__exit__() is called with three arguments representing
the exception type, value, and traceback.
IMPORTANT: if mgr.__exit__() returns a “true” value, the exception
is “swallowed”. That is, if it returns “true”, execution
continues at the next statement after the with-statement, even if
an exception happened inside the with-statement. However, if the
with-statement was left via a non-local goto (break, continue or
return), this non-local return is resumed when mgr.__exit__()
returns regardless of the return value. The motivation for this
detail is to make it possible for mgr.__exit__() to swallow
exceptions, without making it too easy (since the default return
value, None, is false and this causes the exception to be
re-raised). The main use case for swallowing exceptions is to
make it possible to write the @contextmanager decorator so
that a try/except block in a decorated generator behaves exactly
as if the body of the generator were expanded in-line at the place
of the with-statement.
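As a hedged illustration of this convention (the suppressing class below is
just a sketch, not part of the proposal):
from __future__ import with_statement   # Python 2.5 only; see Transition Plan
class suppressing(object):
    """Swallow exceptions of one type by returning True from __exit__()."""
    def __init__(self, exc_type):
        self.exc_type = exc_type
    def __enter__(self):
        return self
    def __exit__(self, type, value, traceback):
        # Returning None/False re-raises the exception; True swallows it.
        return type is not None and issubclass(type, self.exc_type)
with suppressing(KeyError):
    {}["missing"]                 # the KeyError is swallowed here
print "execution continues after the with-statement"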
The motivation for passing the exception details to __exit__(), as
opposed to the argument-less __exit__() from PEP 310, was given by
the transactional() use case, example 3 below. The template in
that example must commit or roll back the transaction depending on
whether an exception occurred or not. Rather than just having a
boolean flag indicating whether an exception occurred, we pass the
complete exception information, for the benefit of an
exception-logging facility for example. Relying on sys.exc_info()
to get at the exception information was rejected; sys.exc_info()
has very complex semantics and it is perfectly possible that it
returns the exception information for an exception that was caught
ages ago. It was also proposed to add an additional boolean to
distinguish between reaching the end of BLOCK and a non-local
goto. This was rejected as too complex and unnecessary; a
non-local goto should be considered unexceptional for the purposes
of a database transaction roll-back decision.
To facilitate chaining of contexts in Python code that directly
manipulates context managers, __exit__() methods should not
re-raise the error that is passed in to them. It is always the
responsibility of the caller of the __exit__() method to do any
reraising in that case.
That way, if the caller needs to tell whether the __exit__()
invocation failed (as opposed to successfully cleaning up before
propagating the original error), it can do so.
If __exit__() returns without an error, this can then be
interpreted as success of the __exit__() method itself (regardless
of whether or not the original error is to be propagated or
suppressed).
However, if __exit__() propagates an exception to its caller, this
means that __exit__() itself has failed. Thus, __exit__()
methods should avoid raising errors unless they have actually
failed. (And allowing the original error to proceed isn’t a
failure.)
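A hedged sketch of calling code that drives a context manager by hand and
follows this convention (the run_with() helper is hypothetical, not part of
the PEP):
import sys
def run_with(mgr, body):
    # Run body(value) inside mgr; any re-raising happens on the caller's side.
    value = mgr.__enter__()
    try:
        result = body(value)
    except:
        if not mgr.__exit__(*sys.exc_info()):
            raise             # the caller re-raises; __exit__() only reports back
        return None
    mgr.__exit__(None, None, None)
    return result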
Transition Plan
In Python 2.5, the new syntax will only be recognized if a future
statement is present:
from __future__ import with_statement
This will make both ‘with’ and ‘as’ keywords. Without the future
statement, using ‘with’ or ‘as’ as an identifier will cause a
Warning to be issued to stderr.
In Python 2.6, the new syntax will always be recognized; ‘with’
and ‘as’ are always keywords.
Generator Decorator
With PEP 342 accepted, it is possible to write a decorator
that makes it possible to use a generator that yields exactly once
to control a with-statement. Here’s a sketch of such a decorator:
class GeneratorContextManager(object):
def __init__(self, gen):
self.gen = gen
def __enter__(self):
try:
return self.gen.next()
except StopIteration:
raise RuntimeError("generator didn't yield")
def __exit__(self, type, value, traceback):
if type is None:
try:
self.gen.next()
except StopIteration:
return
else:
raise RuntimeError("generator didn't stop")
else:
try:
self.gen.throw(type, value, traceback)
raise RuntimeError("generator didn't stop after throw()")
except StopIteration:
return True
except:
# only re-raise if it's *not* the exception that was
# passed to throw(), because __exit__() must not raise
# an exception unless __exit__() itself failed. But
# throw() has to raise the exception to signal
# propagation, so this fixes the impedance mismatch
# between the throw() protocol and the __exit__()
# protocol.
#
if sys.exc_info()[1] is not value:
raise
def contextmanager(func):
def helper(*args, **kwds):
return GeneratorContextManager(func(*args, **kwds))
return helper
This decorator could be used as follows:
@contextmanager
def opening(filename):
f = open(filename) # IOError is untouched by GeneratorContext
try:
yield f
finally:
f.close() # Ditto for errors here (however unlikely)
A robust implementation of this decorator will be made
part of the standard library.
Context Managers in the Standard Library
It would be possible to endow certain objects, like files,
sockets, and locks, with __enter__() and __exit__() methods so
that instead of writing:
with locking(myLock):
BLOCK
one could write simply:
with myLock:
BLOCK
I think we should be careful with this; it could lead to mistakes
like:
f = open(filename)
with f:
BLOCK1
with f:
BLOCK2
which does not do what one might think (f is closed before BLOCK2
is entered).
OTOH such mistakes are easily diagnosed; for example, the
generator context decorator above raises RuntimeError when a
second with-statement calls f.__enter__() again. A similar error
can be raised if __enter__ is invoked on a closed file object.
For Python 2.5, the following types have been identified as
context managers:
- file
- thread.LockType
- threading.Lock
- threading.RLock
- threading.Condition
- threading.Semaphore
- threading.BoundedSemaphore
A context manager will also be added to the decimal module to
support using a local decimal arithmetic context within the body
of a with statement, automatically restoring the original context
when the with statement is exited.
Standard Terminology
This PEP proposes that the protocol consisting of the __enter__()
and __exit__() methods be known as the “context management protocol”,
and that objects that implement that protocol be known as “context
managers”. [4]
The expression immediately following the with keyword in the
statement is a “context expression” as that expression provides the
main clue as to the runtime environment the context manager
establishes for the duration of the statement body.
The code in the body of the with statement and the variable name
(or names) after the as keyword don’t really have special terms at
this point in time. The general terms “statement body” and “target
list” can be used, prefixing with “with” or “with statement” if the
terms would otherwise be unclear.
Given the existence of objects such as the decimal module’s
arithmetic context, the term “context” is unfortunately ambiguous.
If necessary, it can be made more specific by using the terms
“context manager” for the concrete object created by the context
expression and “runtime context” or (preferably) “runtime
environment” for the actual state modifications made by the context
manager. When simply discussing use of the with statement, the
ambiguity shouldn’t matter too much as the context expression fully
defines the changes made to the runtime environment.
The distinction is more important when discussing the mechanics of
the with statement itself and how to go about actually implementing
context managers.
Caching Context Managers
Many context managers (such as files and generator-based contexts)
will be single-use objects. Once the __exit__() method has been
called, the context manager will no longer be in a usable state
(e.g. the file has been closed, or the underlying generator has
finished execution).
Requiring a fresh manager object for each with statement is the
easiest way to avoid problems with multi-threaded code and nested
with statements trying to use the same context manager. It isn’t
coincidental that all of the standard library context managers
that support reuse come from the threading module - they’re all
already designed to deal with the problems created by threaded
and nested usage.
This means that in order to save a context manager with particular
initialisation arguments to be used in multiple with statements, it
will typically be necessary to store it in a zero-argument callable
that is then called in the context expression of each statement
rather than caching the context manager directly.
When this restriction does not apply, the documentation of the
affected context manager should make that clear.
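A hedged sketch of that pattern, using functools.partial to build the
zero-argument callable (the transaction template is example 3 below; db
stands for any object with the begin(), commit() and rollback() methods
that example assumes):
import functools
make_tx = functools.partial(transaction, db)   # builds a fresh manager per call
with make_tx():
    ...first unit of work...
with make_tx():
    ...second unit of work, with a brand-new context manager...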
Resolved Issues
The following issues were resolved by BDFL approval (and a lack
of any major objections on python-dev).
What exception should GeneratorContextManager raise when the
underlying generator-iterator misbehaves? The following quote is
the reason behind Guido’s choice of RuntimeError for both this
and for the generator close() method in PEP 342 (from [8]):
“I’d rather not introduce a new exception class just for this
purpose, since it’s not an exception that I want people to catch:
I want it to turn into a traceback which is seen by the
programmer who then fixes the code. So now I believe they
should both raise RuntimeError.
There are some precedents for that: it’s raised by the core
Python code in situations where endless recursion is detected,
and for uninitialized objects (and for a variety of
miscellaneous conditions).”
It is fine to raise AttributeError instead of TypeError if the
relevant methods aren’t present on a class involved in a with
statement. The fact that the abstract object C API raises
TypeError rather than AttributeError is an accident of history,
rather than a deliberate design decision [11].
Objects with __enter__/__exit__ methods are called “context
managers” and the decorator to convert a generator function
into a context manager factory is contextlib.contextmanager.
There were some other suggestions [15] during the 2.5 release
cycle but no compelling arguments for switching away from the
terms that had been used in the PEP implementation were made.
Rejected Options
For several months, the PEP prohibited suppression of exceptions
in order to avoid hidden flow control. Implementation
revealed this to be a right royal pain, so Guido restored the
ability [12].
Another aspect of the PEP that caused no end of questions and
terminology debates was providing a __context__() method that
was analogous to an iterable’s __iter__() method [5] [7] [9].
The ongoing problems [10] [12] with explaining what it was and why
it was and how it was meant to work eventually led to Guido
killing the concept outright [14] (and there was much rejoicing!).
The notion of using the PEP 342 generator API directly to define
the with statement was also briefly entertained [6], but quickly
dismissed as making it too difficult to write non-generator
based context managers.
Examples
The generator based examples rely on PEP 342. Also, some of the
examples are unnecessary in practice, as the appropriate objects,
such as threading.RLock, are able to be used directly in with
statements.
The tense used in the names of the example contexts is not
arbitrary. Past tense (“-ed”) is used when the name refers to an
action which is done in the __enter__ method and undone in the
__exit__ method. Progressive tense (“-ing”) is used when the name
refers to an action which is to be done in the __exit__ method.
A template for ensuring that a lock, acquired at the start of a
block, is released when the block is left:
@contextmanager
def locked(lock):
lock.acquire()
try:
yield
finally:
lock.release()
Used as follows:
with locked(myLock):
# Code here executes with myLock held. The lock is
# guaranteed to be released when the block is left (even
# if via return or by an uncaught exception).
A template for opening a file that ensures the file is closed
when the block is left:
@contextmanager
def opened(filename, mode="r"):
f = open(filename, mode)
try:
yield f
finally:
f.close()
Used as follows:
with opened("/etc/passwd") as f:
for line in f:
print line.rstrip()
A template for committing or rolling back a database
transaction:
@contextmanager
def transaction(db):
db.begin()
try:
yield None
except:
db.rollback()
raise
else:
db.commit()
Example 1 rewritten without a generator:
class locked:
def __init__(self, lock):
self.lock = lock
def __enter__(self):
self.lock.acquire()
def __exit__(self, type, value, tb):
self.lock.release()
(This example is easily modified to implement the other
relatively stateless examples; it shows that it is easy to avoid
the need for a generator if no special state needs to be
preserved.)
Redirect stdout temporarily:
@contextmanager
def stdout_redirected(new_stdout):
save_stdout = sys.stdout
sys.stdout = new_stdout
try:
yield None
finally:
sys.stdout = save_stdout
Used as follows:
with opened(filename, "w") as f:
with stdout_redirected(f):
print "Hello world"
This isn’t thread-safe, of course, but neither is doing this
same dance manually. In single-threaded programs (for example,
in scripts) it is a popular way of doing things.
A variant on opened() that also returns an error condition:
@contextmanager
def opened_w_error(filename, mode="r"):
try:
f = open(filename, mode)
except IOError, err:
yield None, err
else:
try:
yield f, None
finally:
f.close()
Used as follows:
with opened_w_error("/etc/passwd", "a") as (f, err):
if err:
print "IOError:", err
else:
f.write("guido::0:0::/:/bin/sh\n")
Another useful example would be an operation that blocks
signals. The use could be like this:
import signal
with signal.blocked():
# code executed without worrying about signals
An optional argument might be a list of signals to be blocked;
by default all signals are blocked. The implementation is left
as an exercise to the reader.
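One hedged sketch of such a manager, using only the signal API available in
Python 2.5 (it ignores, rather than truly blocks, the chosen signals, and
must run in the main thread; signal.blocked() itself remains hypothetical):
import signal
from contextlib import contextmanager
@contextmanager
def blocked(signals=(signal.SIGINT, signal.SIGTERM)):
    saved = {}
    for sig in signals:
        saved[sig] = signal.signal(sig, signal.SIG_IGN)   # remember old handler
    try:
        yield
    finally:
        for sig, handler in saved.items():
            signal.signal(sig, handler)                   # restore handlers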
Another use for this feature is the Decimal context. Here’s a
simple example, after one posted by Michael Chermside:
import decimal
@contextmanager
def extra_precision(places=2):
c = decimal.getcontext()
saved_prec = c.prec
c.prec += places
try:
yield None
finally:
c.prec = saved_prec
Sample usage (adapted from the Python Library Reference):
def sin(x):
"Return the sine of x as measured in radians."
with extra_precision():
i, lasts, s, fact, num, sign = 1, 0, x, 1, x, 1
while s != lasts:
lasts = s
i += 2
fact *= i * (i-1)
num *= x * x
sign *= -1
s += num / fact * sign
# The "+s" rounds back to the original precision,
# so this must be outside the with-statement:
return +s
Here’s a simple context manager for the decimal module:
@contextmanager
def localcontext(ctx=None):
"""Set a new local decimal context for the block"""
# Default to using the current context
if ctx is None:
ctx = getcontext()
# We set the thread context to a copy of this context
# to ensure that changes within the block are kept
# local to the block.
newctx = ctx.copy()
oldctx = decimal.getcontext()
decimal.setcontext(newctx)
try:
yield newctx
finally:
# Always restore the original context
decimal.setcontext(oldctx)
Sample usage:
from decimal import localcontext, ExtendedContext
def sin(x):
with localcontext() as ctx:
ctx.prec += 2
# Rest of sin calculation algorithm
# uses a precision 2 greater than normal
return +s # Convert result to normal precision
def sin(x):
with localcontext(ExtendedContext):
# Rest of sin calculation algorithm
# uses the Extended Context from the
# General Decimal Arithmetic Specification
return +s # Convert result to normal context
A generic “object-closing” context manager:
class closing(object):
def __init__(self, obj):
self.obj = obj
def __enter__(self):
return self.obj
def __exit__(self, *exc_info):
try:
close_it = self.obj.close
except AttributeError:
pass
else:
close_it()
This can be used to deterministically close anything with a
close method, be it file, generator, or something else. It
can even be used when the object isn’t guaranteed to require
closing (e.g., a function that accepts an arbitrary
iterable):
# emulate opening():
with closing(open("argument.txt")) as contradiction:
for line in contradiction:
print line
# deterministically finalize an iterator:
with closing(iter(data_source)) as data:
for datum in data:
process(datum)
(Python 2.5’s contextlib module contains a version
of this context manager)
PEP 319 gives a use case for also having a released()
context to temporarily release a previously acquired lock;
this can be written very similarly to the locked context
manager above by swapping the acquire() and release() calls:
class released:
def __init__(self, lock):
self.lock = lock
def __enter__(self):
self.lock.release()
def __exit__(self, type, value, tb):
self.lock.acquire()
Sample usage:
with my_lock:
# Operations with the lock held
with released(my_lock):
# Operations without the lock
# e.g. blocking I/O
# Lock is held again here
A “nested” context manager that automatically nests the
supplied contexts from left-to-right to avoid excessive
indentation:
@contextmanager
def nested(*contexts):
exits = []
vars = []
try:
try:
for context in contexts:
exit = context.__exit__
enter = context.__enter__
vars.append(enter())
exits.append(exit)
yield vars
except:
exc = sys.exc_info()
else:
exc = (None, None, None)
finally:
while exits:
exit = exits.pop()
try:
exit(*exc)
except:
exc = sys.exc_info()
else:
exc = (None, None, None)
if exc != (None, None, None):
# sys.exc_info() may have been
# changed by one of the exit methods
# so provide explicit exception info
raise exc[0], exc[1], exc[2]
Sample usage:
with nested(a, b, c) as (x, y, z):
# Perform operation
Is equivalent to:
with a as x:
with b as y:
with c as z:
# Perform operation
(Python 2.5’s contextlib module contains a version
of this context manager)
Reference Implementation
This PEP was first accepted by Guido at his EuroPython
keynote, 27 June 2005.
It was accepted again later, with the __context__ method added.
The PEP was implemented in Subversion for Python 2.5a1
The __context__() method was removed in Python 2.5b1
Acknowledgements
Many people contributed to the ideas and concepts in this PEP,
including all those mentioned in the acknowledgements for PEP 340
and PEP 346.
Additional thanks goes to (in no meaningful order): Paul Moore,
Phillip J. Eby, Greg Ewing, Jason Orendorff, Michael Hudson,
Raymond Hettinger, Walter Dörwald, Aahz, Georg Brandl, Terry Reedy,
A.M. Kuchling, Brett Cannon, and all those that participated in the
discussions on python-dev.
References
[1]
Raymond Chen’s article on hidden flow control
https://devblogs.microsoft.com/oldnewthing/20050106-00/?p=36783
[2]
Guido suggests some generator changes that ended up in PEP 342
https://mail.python.org/pipermail/python-dev/2005-May/053885.html
[3]
Wiki discussion of PEP 343
http://wiki.python.org/moin/WithStatement
[4]
Early draft of some documentation for the with statement
https://mail.python.org/pipermail/python-dev/2005-July/054658.html
[5]
Proposal to add the __with__ method
https://mail.python.org/pipermail/python-dev/2005-October/056947.html
[6]
Proposal to use the PEP 342 enhanced generator API directly
https://mail.python.org/pipermail/python-dev/2005-October/056969.html
[7]
Guido lets me (Alyssa Coghlan) talk him into a bad idea ;)
https://mail.python.org/pipermail/python-dev/2005-October/057018.html
[8]
Guido raises some exception handling questions
https://mail.python.org/pipermail/python-dev/2005-June/054064.html
[9]
Guido answers some questions about the __context__ method
https://mail.python.org/pipermail/python-dev/2005-October/057520.html
[10]
Guido answers more questions about the __context__ method
https://mail.python.org/pipermail/python-dev/2005-October/057535.html
[11]
Guido says AttributeError is fine for missing special methods
https://mail.python.org/pipermail/python-dev/2005-October/057625.html
[12] (1, 2)
Guido restores the ability to suppress exceptions
https://mail.python.org/pipermail/python-dev/2006-February/061909.html
[13]
A simple question kickstarts a thorough review of PEP 343
https://mail.python.org/pipermail/python-dev/2006-April/063859.html
[14]
Guido kills the __context__() method
https://mail.python.org/pipermail/python-dev/2006-April/064632.html
[15]
Proposal to use ‘context guard’ instead of ‘context manager’
https://mail.python.org/pipermail/python-dev/2006-May/064676.html
Copyright
This document has been placed in the public domain.
| Final | PEP 343 – The “with” Statement | Standards Track | This PEP adds a new statement “with” to the Python language to make
it possible to factor out standard uses of try/finally statements. |
PEP 344 – Exception Chaining and Embedded Tracebacks
Author:
Ka-Ping Yee
Status:
Superseded
Type:
Standards Track
Created:
12-May-2005
Python-Version:
2.5
Post-History:
Table of Contents
Numbering Note
Abstract
Motivation
History
Rationale
Implicit Exception Chaining
Explicit Exception Chaining
Traceback Attribute
Enhanced Reporting
C API
Compatibility
Open Issue: Extra Information
Open Issue: Suppressing Context
Open Issue: Limiting Exception Types
Open Issue: yield
Open Issue: Garbage Collection
Possible Future Compatible Changes
Possible Future Incompatible Changes
Acknowledgements
References
Copyright
Numbering Note
This PEP has been renumbered to PEP 3134. The text below is the last version
submitted under the old number.
Abstract
This PEP proposes three standard attributes on exception instances: the
__context__ attribute for implicitly chained exceptions, the
__cause__ attribute for explicitly chained exceptions, and the
__traceback__ attribute for the traceback. A new raise ... from
statement sets the __cause__ attribute.
Motivation
During the handling of one exception (exception A), it is possible that another
exception (exception B) may occur. In today’s Python (version 2.4), if this
happens, exception B is propagated outward and exception A is lost. In order
to debug the problem, it is useful to know about both exceptions. The
__context__ attribute retains this information automatically.
Sometimes it can be useful for an exception handler to intentionally re-raise
an exception, either to provide extra information or to translate an exception
to another type. The __cause__ attribute provides an explicit way to
record the direct cause of an exception.
In today’s Python implementation, exceptions are composed of three parts: the
type, the value, and the traceback. The sys module exposes the current
exception in three parallel variables, exc_type, exc_value, and
exc_traceback, the sys.exc_info() function returns a tuple of these
three parts, and the raise statement has a three-argument form accepting
these three parts. Manipulating exceptions often requires passing these three
things in parallel, which can be tedious and error-prone. Additionally, the
except statement can only provide access to the value, not the traceback.
Adding the __traceback__ attribute to exception values makes all the
exception information accessible from a single place.
History
Raymond Hettinger [1] raised the issue of masked exceptions on Python-Dev in
January 2003 and proposed a PyErr_FormatAppend() function that C modules
could use to augment the currently active exception with more information.
Brett Cannon [2] brought up chained exceptions again in June 2003, prompting
a long discussion.
Greg Ewing [3] identified the case of an exception occurring in a finally
block during unwinding triggered by an original exception, as distinct from
the case of an exception occurring in an except block that is handling the
original exception.
Greg Ewing [4] and Guido van Rossum [5], and probably others, have
previously mentioned adding a traceback attribute to Exception instances.
This is noted in PEP 3000.
This PEP was motivated by yet another recent Python-Dev reposting of the same
ideas [6] [7].
Rationale
The Python-Dev discussions revealed interest in exception chaining for two
quite different purposes. To handle the unexpected raising of a secondary
exception, the exception must be retained implicitly. To support intentional
translation of an exception, there must be a way to chain exceptions
explicitly. This PEP addresses both.
Several attribute names for chained exceptions have been suggested on
Python-Dev [2], including cause, antecedent, reason, original,
chain, chainedexc, xc_chain, excprev, previous and
precursor. For an explicitly chained exception, this PEP suggests
__cause__ because of its specific meaning. For an implicitly chained
exception, this PEP proposes the name __context__ because the intended
meaning is more specific than temporal precedence but less specific than
causation: an exception occurs in the context of handling another exception.
This PEP suggests names with leading and trailing double-underscores for these
three attributes because they are set by the Python VM. Only in very special
cases should they be set by normal assignment.
This PEP handles exceptions that occur during except blocks and
finally blocks in the same way. Reading the traceback makes it clear
where the exceptions occurred, so additional mechanisms for distinguishing
the two cases would only add unnecessary complexity.
This PEP proposes that the outermost exception object (the one exposed for
matching by except clauses) be the most recently raised exception for
compatibility with current behaviour.
This PEP proposes that tracebacks display the outermost exception last,
because this would be consistent with the chronological order of tracebacks
(from oldest to most recent frame) and because the actual thrown exception is
easier to find on the last line.
To keep things simpler, the C API calls for setting an exception will not
automatically set the exception’s __context__. Guido van Rossum has
expressed concerns with making such changes [8].
As for other languages, Java and Ruby both discard the original exception when
another exception occurs in a catch/rescue or finally/ensure clause.
Perl 5 lacks built-in structured exception handling. For Perl 6, RFC number
88 [9] proposes an exception mechanism that implicitly retains chained
exceptions in an array named @@. In that RFC, the most recently raised
exception is exposed for matching, as in this PEP; also, arbitrary expressions
(possibly involving @@) can be evaluated for exception matching.
Exceptions in C# contain a read-only InnerException property that may
point to another exception. Its documentation [10] says that “When an
exception X is thrown as a direct result of a previous exception Y, the
InnerException property of X should contain a reference to Y.” This
property is not set by the VM automatically; rather, all exception
constructors take an optional innerException argument to set it
explicitly. The __cause__ attribute fulfills the same purpose as
InnerException, but this PEP proposes a new form of raise rather than
extending the constructors of all exceptions. C# also provides a
GetBaseException method that jumps directly to the end of the
InnerException chain; this PEP proposes no analog.
The reason all three of these attributes are presented together in one proposal
is that the __traceback__ attribute provides convenient access to the
traceback on chained exceptions.
Implicit Exception Chaining
Here is an example to illustrate the __context__ attribute:
def compute(a, b):
try:
a/b
except Exception, exc:
log(exc)
def log(exc):
file = open('logfile.txt') # oops, forgot the 'w'
print >>file, exc
file.close()
Calling compute(0, 0) causes a ZeroDivisionError. The compute()
function catches this exception and calls log(exc), but the log()
function also raises an exception when it tries to write to a file that wasn’t
opened for writing.
In today’s Python, the caller of compute() gets thrown an IOError. The
ZeroDivisionError is lost. With the proposed change, the instance of
IOError has an additional __context__ attribute that retains the
ZeroDivisionError.
The following more elaborate example demonstrates the handling of a mixture of
finally and except clauses:
def main(filename):
file = open(filename) # oops, forgot the 'w'
try:
try:
compute()
except Exception, exc:
log(file, exc)
finally:
file.clos() # oops, misspelled 'close'
def compute():
1/0
def log(file, exc):
try:
print >>file, exc # oops, file is not writable
except:
display(exc)
def display(exc):
print ex # oops, misspelled 'exc'
Calling main() with the name of an existing file will trigger four
exceptions. The ultimate result will be an AttributeError due to the
misspelling of clos, whose __context__ points to a NameError due
to the misspelling of ex, whose __context__ points to an IOError
due to the file being read-only, whose __context__ points to a
ZeroDivisionError, whose __context__ attribute is None.
The proposed semantics are as follows:
Each thread has an exception context initially set to None.
Whenever an exception is raised, if the exception instance does not
already have a __context__ attribute, the interpreter sets it equal to
the thread’s exception context.
Immediately after an exception is raised, the thread’s exception context is
set to the exception.
Whenever the interpreter exits an except block by reaching the end or
executing a return, yield, continue, or break statement,
the thread’s exception context is set to None.
Explicit Exception Chaining
The __cause__ attribute on exception objects is always initialized to
None. It is set by a new form of the raise statement:
raise EXCEPTION from CAUSE
which is equivalent to:
exc = EXCEPTION
exc.__cause__ = CAUSE
raise exc
In the following example, a database provides implementations for a few
different kinds of storage, with file storage as one kind. The database
designer wants errors to propagate as DatabaseError objects so that the
client doesn’t have to be aware of the storage-specific details, but doesn’t
want to lose the underlying error information:
class DatabaseError(StandardError):
pass
class FileDatabase(Database):
def __init__(self, filename):
try:
self.file = open(filename)
except IOError, exc:
raise DatabaseError('failed to open') from exc
If the call to open() raises an exception, the problem will be reported as
a DatabaseError, with a __cause__ attribute that reveals the
IOError as the original cause.
Traceback Attribute
The following example illustrates the __traceback__ attribute:
def do_logged(file, work):
try:
work()
except Exception, exc:
write_exception(file, exc)
raise exc
from traceback import format_tb
def write_exception(file, exc):
...
type = exc.__class__
message = str(exc)
lines = format_tb(exc.__traceback__)
file.write(... type ... message ... lines ...)
...
In today’s Python, the do_logged() function would have to extract the
traceback from sys.exc_traceback or sys.exc_info() [2] and pass both
the value and the traceback to write_exception(). With the proposed
change, write_exception() simply gets one argument and obtains the
traceback using the __traceback__ attribute.
The proposed semantics are as follows:
Whenever an exception is caught, if the exception instance does not already
have a __traceback__ attribute, the interpreter sets it to the newly
caught traceback.
Enhanced Reporting
The default exception handler will be modified to report chained exceptions.
The chain of exceptions is traversed by following the __cause__ and
__context__ attributes, with __cause__ taking priority. In keeping
with the chronological order of tracebacks, the most recently raised exception
is displayed last; that is, the display begins with the description of the
innermost exception and backs up the chain to the outermost exception. The
tracebacks are formatted as usual, with one of the lines:
The above exception was the direct cause of the following exception:
or
During handling of the above exception, another exception occurred:
between tracebacks, depending on whether they are linked by __cause__ or
__context__ respectively. Here is a sketch of the procedure:
def print_chain(exc):
if exc.__cause__:
print_chain(exc.__cause__)
print '\nThe above exception was the direct cause...'
elif exc.__context__:
print_chain(exc.__context__)
print '\nDuring handling of the above exception, ...'
print_exc(exc)
In the traceback module, the format_exception, print_exception,
print_exc, and print_last functions will be updated to accept an
optional chain argument, True by default. When this argument is
True, these functions will format or display the entire chain of
exceptions as just described. When it is False, these functions will
format or display only the outermost exception.
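A hedged usage sketch of the proposed argument (the chain keyword is part of
this proposal, not an existing Python 2.x parameter; main() is the function
from the earlier example and the filename is illustrative):
import traceback
try:
    main("some_existing_file.txt")
except Exception:
    traceback.print_exc(chain=False)   # format only the outermost exception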
The cgitb module should also be updated to display the entire chain of
exceptions.
C API
The PyErr_Set* calls for setting exceptions will not set the
__context__ attribute on exceptions. PyErr_NormalizeException will
always set the traceback attribute to its tb argument and the
__context__ and __cause__ attributes to None.
A new API function, PyErr_SetContext(context), will help C programmers
provide chained exception information. This function will first normalize the
current exception so it is an instance, then set its __context__
attribute. A similar API function, PyErr_SetCause(cause), will set the
__cause__ attribute.
Compatibility
Chained exceptions expose the type of the most recent exception, so they will
still match the same except clauses as they do now.
The proposed changes should not break any code unless it sets or uses
attributes named __context__, __cause__, or __traceback__ on
exception instances. As of 2005-05-12, the Python standard library contains
no mention of such attributes.
Open Issue: Extra Information
Walter Dörwald [11] expressed a desire to attach extra information to an
exception during its upward propagation without changing its type. This could
be a useful feature, but it is not addressed by this PEP. It could
conceivably be addressed by a separate PEP establishing conventions for other
informational attributes on exceptions.
Open Issue: Suppressing Context
As written, this PEP makes it impossible to suppress __context__, since
setting exc.__context__ to None in an except or finally clause
will only result in it being set again when exc is raised.
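A short sketch of the problem, under the semantics proposed above (not
executable in today's Python, which has no __context__ attribute):
try:
    1/0
except ZeroDivisionError:
    err = KeyError('translated')
    err.__context__ = None   # attempt to suppress the implicit context
    raise err                # ...but raising err sets __context__ again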
Open Issue: Limiting Exception Types
To improve encapsulation, library implementors may want to wrap all
implementation-level exceptions with an application-level exception. One could
try to wrap exceptions by writing this:
try:
... implementation may raise an exception ...
except:
import sys
raise ApplicationError from sys.exc_value
or this
try:
... implementation may raise an exception ...
except Exception, exc:
raise ApplicationError from exc
but both are somewhat flawed. It would be nice to be able to name the current
exception in a catch-all except clause, but that isn’t addressed here.
Such a feature would allow something like this:
try:
... implementation may raise an exception ...
except *, exc:
raise ApplicationError from exc
Open Issue: yield
The exception context is lost when a yield statement is executed; resuming
the frame after the yield does not restore the context. Addressing this
problem is out of the scope of this PEP; it is not a new problem, as
demonstrated by the following example:
>>> def gen():
... try:
... 1/0
... except:
... yield 3
... raise
...
>>> g = gen()
>>> g.next()
3
>>> g.next()
TypeError: exceptions must be classes, instances, or strings
(deprecated), not NoneType
Open Issue: Garbage Collection
The strongest objection to this proposal has been that it creates cycles
between exceptions and stack frames [12]. Collection of cyclic garbage (and
therefore resource release) can be greatly delayed:
>>> try:
>>> 1/0
>>> except Exception, err:
>>> pass
will introduce a cycle from err -> traceback -> stack frame -> err, keeping
all locals in the same scope alive until the next GC happens.
Today, these locals would go out of scope. There is lots of code which
assumes that “local” resources – particularly open files – will be closed
quickly. If closure has to wait for the next GC, a program (which runs fine
today) may run out of file handles.
Making the __traceback__ attribute a weak reference would avoid the
problems with cyclic garbage. Unfortunately, it would make saving the
Exception for later (as unittest does) more awkward, and it would not
allow as much cleanup of the sys module.
A possible alternate solution, suggested by Adam Olsen, would be to instead
turn the reference from the stack frame to the err variable into a weak
reference when the variable goes out of scope [13].
Possible Future Compatible Changes
These changes are consistent with the appearance of exceptions as a single
object rather than a triple at the interpreter level.
If PEP 340 or PEP 343 is accepted, replace the three (type, value,
traceback) arguments to __exit__ with a single exception argument.
Deprecate sys.exc_type, sys.exc_value, sys.exc_traceback, and
sys.exc_info() in favour of a single member, sys.exception.
Deprecate sys.last_type, sys.last_value, and sys.last_traceback
in favour of a single member, sys.last_exception.
Deprecate the three-argument form of the raise statement in favour of
the one-argument form.
Upgrade cgitb.html() to accept a single value as its first argument as
an alternative to a (type, value, traceback) tuple.
Possible Future Incompatible Changes
These changes might be worth considering for Python 3000.
Remove sys.exc_type, sys.exc_value, sys.exc_traceback, and
sys.exc_info().
Remove sys.last_type, sys.last_value, and sys.last_traceback.
Replace the three-argument sys.excepthook with a one-argument API, and
change the cgitb module to match.
Remove the three-argument form of the raise statement.
Upgrade traceback.print_exception to accept an exception argument
instead of the type, value, and traceback arguments.
Acknowledgements
Brett Cannon, Greg Ewing, Guido van Rossum, Jeremy Hylton, Phillip J. Eby,
Raymond Hettinger, Walter Dörwald, and others.
References
[1]
Raymond Hettinger, “Idea for avoiding exception masking”
https://mail.python.org/pipermail/python-dev/2003-January/032492.html
[2] (1, 2, 3)
Brett Cannon explains chained exceptions
https://mail.python.org/pipermail/python-dev/2003-June/036063.html
[3]
Greg Ewing points out masking caused by exceptions during finally
https://mail.python.org/pipermail/python-dev/2003-June/036290.html
[4]
Greg Ewing suggests storing the traceback in the exception object
https://mail.python.org/pipermail/python-dev/2003-June/036092.html
[5]
Guido van Rossum mentions exceptions having a traceback attribute
https://mail.python.org/pipermail/python-dev/2005-April/053060.html
[6]
Ka-Ping Yee, “Tidier Exceptions”
https://mail.python.org/pipermail/python-dev/2005-May/053671.html
[7]
Ka-Ping Yee, “Chained Exceptions”
https://mail.python.org/pipermail/python-dev/2005-May/053672.html
[8]
Guido van Rossum discusses automatic chaining in PyErr_Set*
https://mail.python.org/pipermail/python-dev/2003-June/036180.html
[9]
Tony Olensky, “Omnibus Structured Exception/Error Handling Mechanism”
http://dev.perl.org/perl6/rfc/88.html
[10]
MSDN .NET Framework Library, “Exception.InnerException Property”
http://msdn.microsoft.com/library/en-us/cpref/html/frlrfsystemexceptionclassinnerexceptiontopic.asp
[11]
Walter Dörwald suggests wrapping exceptions to add details
https://mail.python.org/pipermail/python-dev/2003-June/036148.html
[12]
Guido van Rossum restates the objection to cyclic trash
https://mail.python.org/pipermail/python-3000/2007-January/005322.html
[13]
Adam Olsen suggests using a weakref from stack frame to exception
https://mail.python.org/pipermail/python-3000/2007-January/005363.html
Copyright
This document has been placed in the public domain.
| Superseded | PEP 344 – Exception Chaining and Embedded Tracebacks | Standards Track | This PEP proposes three standard attributes on exception instances: the
__context__ attribute for implicitly chained exceptions, the
__cause__ attribute for explicitly chained exceptions, and the
__traceback__ attribute for the traceback. A new raise ... from
statement sets the __cause__ attribute. |
PEP 346 – User Defined (”with”) Statements
Author:
Alyssa Coghlan <ncoghlan at gmail.com>
Status:
Withdrawn
Type:
Standards Track
Created:
06-May-2005
Python-Version:
2.5
Post-History:
Table of Contents
Abstract
Author’s Note
Introduction
Relationship with other PEPs
User defined statements
Usage syntax for user defined statements
Semantics for user defined statements
Statement template protocol: __enter__
Statement template protocol: __exit__
Factoring out arbitrary exception handling
Generators
Default value for yield
Template generator decorator: statement_template
Template generator wrapper: __enter__() method
Template generator wrapper: __exit__() method
Injecting exceptions into generators
Generator finalisation
Generator finalisation: TerminateIteration exception
Generator finalisation: __del__() method
Deterministic generator finalisation
Generators as user defined statement templates
Examples
Open Issues
Rejected Options
Having the basic construct be a looping construct
Allowing statement templates to suppress exceptions
Differentiating between non-exceptional exits
Not injecting raised exceptions into generators
Making all generators statement templates
Using do as the keyword
Not having a keyword
Enhancing try statements
Having the template protocol directly reflect try statements
Iterator finalisation (WITHDRAWN)
Iterator protocol addition: __finish__
Best effort finalisation
Deterministic finalisation
for loop syntax
Updated for loop semantics
Generator iterator finalisation: __finish__() method
Partial iteration of finishable iterators
Acknowledgements
References
Copyright
Abstract
This PEP is a combination of PEP 310’s “Reliable Acquisition/Release
Pairs” with the “Anonymous Block Statements” of Guido’s PEP 340. This
PEP aims to take the good parts of PEP 340, blend them with parts of
PEP 310 and rearrange the lot into an elegant whole. It borrows from
various other PEPs in order to paint a complete picture, and is
intended to stand on its own.
Author’s Note
During the discussion of PEP 340, I maintained drafts of this PEP as
PEP 3XX on my own website (since I didn’t have CVS access to update a
submitted PEP fast enough to track the activity on python-dev).
Since the first draft of this PEP, Guido wrote PEP 343 as a simplified
version of PEP 340. PEP 343 (at the time of writing) uses the exact
same semantics for the new statements as this PEP, but uses a slightly
different mechanism to allow generators to be used to write statement
templates. However, Guido has indicated that he intends to accept a
new PEP being written by Raymond Hettinger that will integrate PEP 288
and PEP 325, and will permit a generator decorator like the one
described in this PEP to be used to write statement templates for PEP
343. The other difference was the choice of keyword (‘with’ versus
‘do’) and Guido has stated he will organise a vote on that in the
context of PEP 343.
Accordingly, the version of this PEP submitted for archiving on
python.org is to be WITHDRAWN immediately after submission. PEP 343
and the combined generator enhancement PEP will cover the important
ideas.
Introduction
This PEP proposes that Python’s ability to reliably manage resources
be enhanced by the introduction of a new with statement that
allows factoring out of arbitrary try/finally and some
try/except/else boilerplate. The new construct is called
a ‘user defined statement’, and the associated class definitions are
called ‘statement templates’.
The above is the main point of the PEP. However, if that was all it
said, then PEP 310 would be sufficient and this PEP would be
essentially redundant. Instead, this PEP recommends additional
enhancements that make it natural to write these statement templates
using appropriately decorated generators. A side effect of those
enhancements is that it becomes important to appropriately deal
with the management of resources inside generators.
This is quite similar to PEP 343, but the exceptions that occur are
re-raised inside the generator’s frame, and the issue of generator
finalisation needs to be addressed as a result. The template
generator decorator suggested by this PEP also creates reusable
templates, rather than the single use templates of PEP 340.
In comparison to PEP 340, this PEP eliminates the ability to suppress
exceptions, and makes the user defined statement a non-looping
construct. The other main difference is the use of a decorator to
turn generators into statement templates, and the incorporation of
ideas for addressing iterator finalisation.
If all that seems like an ambitious operation… well, Guido was the
one to set the bar that high when he wrote PEP 340 :)
Relationship with other PEPs
This PEP competes directly with PEP 310, PEP 340 and PEP 343,
as those PEPs all describe alternative mechanisms for handling
deterministic resource management.
It does not compete with PEP 342 which splits off PEP 340’s
enhancements related to passing data into iterators. The associated
changes to the for loop semantics would be combined with the
iterator finalisation changes suggested in this PEP. User defined
statements would not be affected.
Neither does this PEP compete with the generator enhancements
described in PEP 288. While this PEP proposes the ability to
inject exceptions into generator frames, it is an internal
implementation detail, and does not require making that ability
publicly available to Python code. PEP 288 is, in part, about
making that implementation detail easily accessible.
This PEP would, however, make the generator resource release support
described in PEP 325 redundant - iterators which require
finalisation should provide an appropriate implementation of the
statement template protocol.
User defined statements
To steal the motivating example from PEP 310, correct handling of a
synchronisation lock currently looks like this:
the_lock.acquire()
try:
# Code here executes with the lock held
finally:
the_lock.release()
Like PEP 310, this PEP proposes that such code be able to be written
as:
with the_lock:
# Code here executes with the lock held
These user defined statements are primarily designed to allow easy
factoring of try blocks that are not easily converted to
functions. This is most commonly the case when the exception handling
pattern is consistent, but the body of the try block changes.
With a user-defined statement, it is straightforward to factor out the
exception handling into a statement template, with the body of the
try clause provided inline in the user code.
The term ‘user defined statement’ reflects the fact that the meaning
of a with statement is governed primarily by the statement
template used, and programmers are free to create their own statement
templates, just as they are free to create their own iterators for use
in for loops.
Usage syntax for user defined statements
The proposed syntax is simple:
with EXPR1 [as VAR1]:
BLOCK1
Semantics for user defined statements
the_stmt = EXPR1
stmt_enter = getattr(the_stmt, "__enter__", None)
stmt_exit = getattr(the_stmt, "__exit__", None)
if stmt_enter is None or stmt_exit is None:
raise TypeError("Statement template required")
VAR1 = stmt_enter() # Omit 'VAR1 =' if no 'as' clause
exc = (None, None, None)
try:
try:
BLOCK1
except:
exc = sys.exc_info()
raise
finally:
stmt_exit(*exc)
Other than VAR1, none of the local variables shown above will be
visible to user code. Like the iteration variable in a for loop,
VAR1 is visible in both BLOCK1 and code following the user
defined statement.
Note that the statement template can only react to exceptions, it
cannot suppress them. See Rejected Options for an explanation as
to why.
Statement template protocol: __enter__
The __enter__() method takes no arguments, and if it raises an
exception, BLOCK1 is never executed. If this happens, the
__exit__() method is not called. The value returned by this
method is assigned to VAR1 if the as clause is used. Objects
with no other value to return should generally return self rather
than None to permit in-place creation in the with statement.
Statement templates should use this method to set up the conditions
that are to exist during execution of the statement (e.g. acquisition
of a synchronisation lock).
Statement templates which are not always usable (e.g. closed file
objects) should raise a RuntimeError if an attempt is made to call
__enter__() when the template is not in a valid state.
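A minimal hedged sketch of a statement template that follows this rule
(held_lock is an illustrative name, not something proposed by this PEP; the
__exit__() method is described in the next section):
class held_lock(object):
    def __init__(self, lock):
        self.lock = lock
        self.active = False
    def __enter__(self):
        if self.active:
            raise RuntimeError("statement template already in use")
        self.lock.acquire()
        self.active = True
        return self              # return self to permit in-place creation
    def __exit__(self, exc_type, value, traceback):
        self.active = False
        self.lock.release()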
Statement template protocol: __exit__
The __exit__() method accepts three arguments which correspond to
the three “arguments” to the raise statement: type, value, and
traceback. All arguments are always supplied, and will be set to
None if no exception occurred. This method will be called exactly
once by the with statement machinery if the __enter__() method
completes successfully.
Statement templates perform their exception handling in this method.
If the first argument is None, it indicates non-exceptional
completion of BLOCK1 - execution either reached the end of block,
or early completion was forced using a return, break or
continue statement. Otherwise, the three arguments reflect the
exception that terminated BLOCK1.
Any exceptions raised by the __exit__() method are propagated to
the scope containing the with statement. If the user code in
BLOCK1 also raised an exception, that exception would be lost, and
replaced by the one raised by the __exit__() method.
Factoring out arbitrary exception handling
Consider the following exception handling arrangement:
SETUP_BLOCK
try:
try:
TRY_BLOCK
except exc_type1, exc:
EXCEPT_BLOCK1
except exc_type2, exc:
EXCEPT_BLOCK2
except:
EXCEPT_BLOCK3
else:
ELSE_BLOCK
finally:
FINALLY_BLOCK
It can be roughly translated to a statement template as follows:
class my_template(object):
def __init__(self, *args):
# Any required arguments (e.g. a file name)
# get stored in member variables
# The various BLOCK's will need updating to reflect
# that.
def __enter__(self):
SETUP_BLOCK
def __exit__(self, exc_type, value, traceback):
try:
try:
if exc_type is not None:
raise exc_type, value, traceback
except exc_type1, exc:
EXCEPT_BLOCK1
except exc_type2, exc:
EXCEPT_BLOCK2
except:
EXCEPT_BLOCK3
else:
ELSE_BLOCK
finally:
FINALLY_BLOCK
Which can then be used as:
with my_template(*args):
TRY_BLOCK
However, there are two important semantic differences between this
code and the original try statement.
Firstly, in the original try statement, if a break, return
or continue statement is encountered in TRY_BLOCK, only
FINALLY_BLOCK will be executed as the statement completes. With
the statement template, ELSE_BLOCK will also execute, as these
statements are treated like any other non-exceptional block
termination. For use cases where it matters, this is likely to be a
good thing (see transaction in the Examples), as this hole where
neither the except nor the else clause gets executed is easy
to forget when writing exception handlers.
Secondly, the statement template will not suppress any exceptions.
If, for example, the original code suppressed the exc_type1 and
exc_type2 exceptions, then this would still need to be done inline
in the user code:
try:
with my_template(*args):
TRY_BLOCK
except (exc_type1, exc_type2):
pass
However, even in these cases where the suppression of exceptions needs
to be made explicit, the amount of boilerplate repeated at the calling
site is significantly reduced (See Rejected Options for further
discussion of this behaviour).
In general, not all of the clauses will be needed. For resource
handling (like files or synchronisation locks), it is possible to
simply execute the code that would have been part of FINALLY_BLOCK
in the __exit__() method. This can be seen in the following
implementation that makes synchronisation locks into statement
templates as mentioned at the beginning of this section:
# New methods of synchronisation lock objects
def __enter__(self):
self.acquire()
return self
def __exit__(self, *exc_info):
self.release()
Generators
With their ability to suspend execution, and return control to the
calling frame, generators are natural candidates for writing statement
templates. Adding user defined statements to the language does not
require the generator changes described in this section, thus making
this PEP an obvious candidate for a phased implementation (with
statements in phase 1, generator integration in phase 2). The
suggested generator updates allow arbitrary exception handling to
be factored out like this:
@statement_template
def my_template(*arguments):
SETUP_BLOCK
try:
try:
yield
except exc_type1, exc:
EXCEPT_BLOCK1
except exc_type2, exc:
EXCEPT_BLOCK2
except:
EXCEPT_BLOCK3
else:
ELSE_BLOCK
finally:
FINALLY_BLOCK
Notice that, unlike the class based version, none of the blocks need
to be modified, as shared values are local variables of the
generator’s internal frame, including the arguments passed in by the
invoking code. The semantic differences noted earlier (all
non-exceptional block termination triggers the else clause, and
the template is unable to suppress exceptions) still apply.
Default value for yield
When creating a statement template with a generator, the yield
statement will often be used solely to return control to the body of
the user defined statement, rather than to return a useful value.
Accordingly, if this PEP is accepted, yield, like return, will
supply a default value of None (i.e. yield and yield None
will become equivalent statements).
This same change is being suggested in PEP 342. Obviously, it would
only need to be implemented once if both PEPs were accepted :)
Template generator decorator: statement_template
As with PEP 343, a new decorator is suggested that wraps a generator
in an object with the appropriate statement template semantics.
Unlike PEP 343, the templates suggested here are reusable, as the
generator is instantiated anew in each call to __enter__().
Additionally, any exceptions that occur in BLOCK1 are re-raised in
the generator’s internal frame:
class template_generator_wrapper(object):
def __init__(self, func, func_args, func_kwds):
self.func = func
self.args = func_args
self.kwds = func_kwds
self.gen = None
def __enter__(self):
if self.gen is not None:
raise RuntimeError("Enter called without exit!")
self.gen = self.func(*self.args, **self.kwds)
try:
return self.gen.next()
except StopIteration:
raise RuntimeError("Generator didn't yield")
def __exit__(self, *exc_info):
if self.gen is None:
raise RuntimeError("Exit called without enter!")
try:
try:
if exc_info[0] is not None:
self.gen._inject_exception(*exc_info)
else:
self.gen.next()
except StopIteration:
pass
else:
raise RuntimeError("Generator didn't stop")
finally:
self.gen = None
def statement_template(func):
def factory(*args, **kwds):
return template_generator_wrapper(func, args, kwds)
return factory
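To show how the wrapper is intended to be used, here is a hypothetical
template (illustrative only; it relies on the behaviour proposed in this
PEP, including yield inside try/finally, and the file name is arbitrary):
@statement_template
def opened(filename, mode="r"):
    f = open(filename, mode)
    try:
        yield f            # this value is bound by the "as" clause
    finally:
        f.close()

opener = opened("/etc/hosts")
with opener as f:          # __enter__ creates a fresh generator instance
    print f.readline()
with opener as f:          # reusable: a second instance is created
    print f.readline()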
Template generator wrapper: __enter__() method
The template generator wrapper has an __enter__() method that
creates a new instance of the contained generator, and then invokes
next() once. It will raise a RuntimeError if the last
generator instance has not been cleaned up, or if the generator
terminates instead of yielding a value.
Template generator wrapper: __exit__() method
The template generator wrapper has an __exit__() method that
simply invokes next() on the generator if no exception is passed
in. If an exception is passed in, it is re-raised in the contained
generator at the point of the last yield statement.
In either case, the generator wrapper will raise a RuntimeError if the
internal frame does not terminate as a result of the operation. The
__exit__() method will always clean up the reference to the used
generator instance, permitting __enter__() to be called again.
A StopIteration raised by the body of the user defined statement
may be inadvertently suppressed inside the __exit__() method, but
this is unimportant, as the originally raised exception still
propagates correctly.
Injecting exceptions into generators
To implement the __exit__() method of the template generator
wrapper, it is necessary to inject exceptions into the internal frame
of the generator. This is new implementation level behaviour that has
no current Python equivalent.
The injection mechanism (referred to as _inject_exception in this
PEP) raises an exception in the generator’s frame with the specified
type, value and traceback information. This means that the exception
looks like the original if it is allowed to propagate.
For the purposes of this PEP, there is no need to make this capability
available outside the Python implementation code.
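PEP 342 proposes a comparable capability as the generator throw()
method. A rough pure-Python rendering of the injection step used by the
wrapper above, assuming that method were available, might look like:
def _inject_exception(gen, exc_type, value=None, traceback=None):
    # Raise the given exception inside the generator's frame at the
    # point of the last yield; whatever the frame then raises (or
    # yields) propagates back to the caller.
    return gen.throw(exc_type, value, traceback)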
Generator finalisation
To support resource management in template generators, this PEP will
eliminate the restriction on yield statements inside the try
block of a try/finally statement. Accordingly, generators
which require the use of a file or some such object can ensure the
object is managed correctly through the use of try/finally or
with statements.
This restriction will likely need to be lifted globally - it would be
difficult to restrict it so that it was only permitted inside
generators used to define statement templates. Accordingly, this PEP
includes suggestions designed to ensure generators which are not used
as statement templates are still finalised appropriately.
Generator finalisation: TerminateIteration exception
A new exception is proposed:
class TerminateIteration(Exception): pass
The new exception is injected into a generator in order to request
finalisation. It should not be suppressed by well-behaved code.
Generator finalisation: __del__() method
To ensure a generator is finalised eventually (within the limits of
Python’s garbage collection), generators will acquire a __del__()
method with the following semantics:
def __del__(self):
try:
self._inject_exception(TerminateIteration, None, None)
except TerminateIteration:
pass
Deterministic generator finalisation
There is a simple way to provide deterministic finalisation of
generators - give them appropriate __enter__() and __exit__()
methods:
def __enter__(self):
return self
def __exit__(self, *exc_info):
try:
self._inject_exception(TerminateIteration, None, None)
except TerminateIteration:
pass
Then any generator can be finalised promptly by wrapping the relevant
for loop inside a with statement:
with all_lines(filenames) as lines:
for line in lines:
print line
(See the Examples for the definition of all_lines, and the reason
it requires prompt finalisation)
Compare the above example to the usage of file objects:
with open(filename) as f:
for line in f:
print line
Generators as user defined statement templates
When used to implement a user defined statement, a generator should
yield only once on a given control path. The result of that yield
will then be provided as the result of the generator’s __enter__()
method. Having a single yield on each control path ensures that
the internal frame will terminate when the generator’s __exit__()
method is called. Multiple yield statements on a single control
path will result in a RuntimeError being raised by the
__exit__() method when the internal frame fails to terminate
correctly. Such an error indicates a bug in the statement template.
To respond to exceptions, or to clean up resources, it is sufficient
to wrap the yield statement in an appropriately constructed
try statement. If execution resumes after the yield without
an exception, the generator knows that the body of the user defined
statement completed without incident.
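As a deliberately broken illustration of the single-yield rule (hypothetical
code, not from the PEP), a template with two yield statements on one
control path leaves its frame suspended when __exit__() resumes it, so the
wrapper reports the bug:
@statement_template
def broken_template():
    yield "first"        # __enter__() resumes the frame up to here
    yield "second"       # frame suspends again instead of terminating

with broken_template() as value:
    pass                 # __exit__() raises RuntimeError("Generator didn't stop")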
Examples
A template for ensuring that a lock, acquired at the start of a
block, is released when the block is left:
# New methods on synchronisation locks
def __enter__(self):
self.acquire()
return self
def __exit__(self, *exc_info):
self.release()
Used as follows:
with myLock:
# Code here executes with myLock held. The lock is
# guaranteed to be released when the block is left (even
# if via return or by an uncaught exception).
A template for opening a file that ensures the file is closed when
the block is left:
# New methods on file objects
def __enter__(self):
if self.closed:
raise RuntimeError, "Cannot reopen closed file handle"
return self
def __exit__(self, *args):
self.close()
Used as follows:
with open("/etc/passwd") as f:
for line in f:
print line.rstrip()
A template for committing or rolling back a database transaction:
@statement_template
def transaction(db):
try:
yield
except:
db.rollback()
else:
db.commit()
Used as follows:
with transaction(the_db):
make_table(the_db)
add_data(the_db)
# Getting to here automatically triggers a commit
# Any exception automatically triggers a rollback
It is possible to nest blocks and combine templates:
@statement_template
def lock_opening(lock, filename, mode="r"):
with lock:
with open(filename, mode) as f:
yield f
Used as follows:
with lock_opening(myLock, "/etc/passwd") as f:
for line in f:
print line.rstrip()
Redirect stdout temporarily:
@statement_template
def redirected_stdout(new_stdout):
save_stdout = sys.stdout
try:
sys.stdout = new_stdout
yield
finally:
sys.stdout = save_stdout
Used as follows:
with open(filename, "w") as f:
with redirected_stdout(f):
print "Hello world"
A variant on open() that also returns an error condition:
@statement_template
def open_w_error(filename, mode="r"):
try:
f = open(filename, mode)
except IOError, err:
yield None, err
else:
try:
yield f, None
finally:
f.close()
Used as follows:
with open_w_error("/etc/passwd", "a") as (f, err):
if err:
print "IOError:", err
else:
f.write("guido::0:0::/:/bin/sh\n")
Find the first file with a specific header:
for name in filenames:
with open(name) as f:
if f.read(2) == "\xfe\xb0":
break
Find the first item you can handle, holding a lock for the entire
loop, or just for each iteration:
with lock:
for item in items:
if handle(item):
break
for item in items:
with lock:
if handle(item):
break
Hold a lock while inside a generator, but release it when
returning control to the outer scope:
@statement_template
def released(lock):
lock.release()
try:
yield
finally:
lock.acquire()
Used as follows:
with lock:
for item in items:
with released(lock):
yield item
Read the lines from a collection of files (e.g. processing
multiple configuration sources):
def all_lines(filenames):
for name in filenames:
with open(name) as f:
for line in f:
yield line
Used as follows:
with all_lines(filenames) as lines:
for line in lines:
update_config(line)
Not all uses need to involve resource management:
@statement_template
def tag(*args, **kwds):
name = cgi.escape(args[0])
if kwds:
kwd_pairs = ["%s=%s" % (cgi.escape(key), cgi.escape(value))
             for key, value in kwds.items()]
print '<%s %s>' % (name, " ".join(kwd_pairs))
else:
print '<%s>' % name
yield
print '</%s>' % name
Used as follows:
with tag('html'):
with tag('head'):
with tag('title'):
print 'A web page'
with tag('body'):
for par in pars:
with tag('p'):
print par
with tag('a', href="http://www.python.org"):
print "Not a dead parrot!"
From PEP 343, another useful example would be an operation that
blocks signals. The use could be like this:
from signal import blocked_signals
with blocked_signals():
# code executed without worrying about signals
An optional argument might be a list of signals to be blocked; by
default all signals are blocked. The implementation is left as an
exercise to the reader.
Another use for this feature is for Decimal contexts:
# New methods on decimal Context objects
def __enter__(self):
if self._old_context is not None:
raise RuntimeError("Already suspending other Context")
self._old_context = getcontext()
setcontext(self)
def __exit__(self, *args):
setcontext(self._old_context)
self._old_context = None
Used as follows:
with decimal.Context(prec=28):
# Code here executes with the given context
# The context always reverts after this statement
Open Issues
None, as this PEP has been withdrawn.
Rejected Options
Having the basic construct be a looping construct
The major issue with this idea, as illustrated by PEP 340’s
block statements, is that it causes problems with factoring
try statements that are inside loops, and contain break and
continue statements (as these statements would then apply to the
block construct, instead of the original loop). As a key goal is
to be able to factor out arbitrary exception handling (other than
suppression) into statement templates, this is a definite problem.
There is also an understandability problem, as can be seen in the
Examples. In the example showing acquisition of a lock either for an
entire loop, or for each iteration of the loop, if the user defined
statement was itself a loop, moving it from outside the for loop
to inside the for loop would have major semantic implications,
beyond those one would expect.
Finally, with a looping construct, there are significant problems with
TOOWTDI, as it is frequently unclear whether a particular situation
should be handled with a conventional for loop or the new looping
construct. With the current PEP, there is no such problem - for
loops continue to be used for iteration, and the new with statements
are used to factor out exception handling.
Another issue, specifically with PEP 340’s anonymous block statements,
is that they make it quite difficult to write statement templates
directly (i.e. not using a generator). This problem is addressed by
the current proposal, as can be seen by the relative simplicity of the
various class based implementations of statement templates in the
Examples.
Allowing statement templates to suppress exceptions
Earlier versions of this PEP gave statement templates the ability to
suppress exceptions. The BDFL expressed concern over the associated
complexity, and I agreed after reading an article by Raymond Chen
about the evils of hiding flow control inside macros in C code [1].
Removing the suppression ability eliminated a whole lot of complexity
from both the explanation and implementation of user defined
statements, further supporting it as the correct choice. Older
versions of the PEP had to jump through some horrible hoops to avoid
inadvertently suppressing exceptions in __exit__() methods - that
issue does not exist with the current suggested semantics.
There was one example (auto_retry) that actually used the ability
to suppress exceptions. This use case, while not quite as elegant,
has significantly more obvious control flow when written out in full
in the user code:
def attempts(num_tries):
return reversed(xrange(num_tries))
for retry in attempts(3):
try:
make_attempt()
except IOError:
if not retry:
raise
For what it’s worth, the perverse could still write this as:
for attempt in auto_retry(3, IOError):
try:
with attempt:
make_attempt()
except FailedAttempt:
pass
To protect the innocent, the code to actually support that is not
included here.
Differentiating between non-exceptional exits
Earlier versions of this PEP allowed statement templates to
distinguish between exiting the block normally, and exiting via a
return, break or continue statement. The BDFL flirted
with a similar idea in PEP 343 and its associated discussion. This
added significant complexity to the description of the semantics, and
it required each and every statement template to decide whether or not
those statements should be treated like exceptions, or like a normal
mechanism for exiting the block.
This template-by-template decision process raised great potential for
confusion - consider if one database connector provided a transaction
template that treated early exits like an exception, whereas a second
connector treated them as normal block termination.
Accordingly, this PEP now uses the simplest solution - early exits
appear identical to normal block termination as far as the statement
template is concerned.
Not injecting raised exceptions into generators
PEP 343 suggests simply invoking next() unconditionally on generators
used to define statement templates. This means the template
generators end up looking rather unintuitive, and the retention of the
ban against yielding inside try/finally means that Python’s
exception handling capabilities cannot be used to deal with management
of multiple resources.
The alternative which this PEP advocates (injecting raised exceptions
into the generator frame), means that multiple resources can be
managed elegantly, as shown by lock_opening in the Examples.
Making all generators statement templates
Separating the template object from the generator itself makes it
possible to have reusable generator templates. That is, the following
code will work correctly if this PEP is accepted:
open_it = lock_opening(parrot_lock, "dead_parrot.txt")
with open_it as f:
# use the file for a while
with open_it as f:
# use the file again
The second benefit is that iterator generators and template generators
are very different things - the decorator keeps that distinction
clear, and prevents one being used where the other is required.
Finally, requiring the decorator allows the native methods of
generator objects to be used to implement generator finalisation.
Using do as the keyword
do was an alternative keyword proposed during the PEP 340
discussion. It reads well with appropriately named functions, but it
reads poorly when used with methods, or with objects that provide
native statement template support.
When do was first suggested, the BDFL had rejected PEP 310’s
with keyword, based on a desire to use it for a Pascal/Delphi
style with statement. Since then, the BDFL has retracted this
objection, as he no longer intends to provide such a statement. This
change of heart was apparently based on the C# developers reasons for
not providing the feature [2].
Not having a keyword
This is an interesting option, and can be made to read quite well.
However, it’s awkward to look up in the documentation for new users,
and strikes some as being too magical. Accordingly, this PEP goes
with a keyword based suggestion.
Enhancing try statements
This suggestion involves giving bare try statements a signature
similar to that proposed for with statements.
I think that trying to write a with statement as an enhanced
try statement makes as much sense as trying to write a for
loop as an enhanced while loop. That is, while the semantics of
the former can be explained as a particular way of using the latter,
the former is not an instance of the latter. The additional
semantics added around the more fundamental statement result in a new
construct, and the two different statements shouldn’t be confused.
This can be seen by the fact that the ‘enhanced’ try statement
still needs to be explained in terms of a ‘non-enhanced’ try
statement. If it’s something different, it makes more sense to give
it a different name.
Having the template protocol directly reflect try statements
One suggestion was to have separate methods in the protocol to cover
different parts of the structure of a generalised try statement.
Using the terms try, except, else and finally, we
would have something like:
class my_template(object):
def __init__(self, *args):
# Any required arguments (e.g. a file name)
# get stored in member variables
# The various BLOCK's will need updating to reflect
# that.
def __try__(self):
SETUP_BLOCK
def __except__(self, exc, value, traceback):
if isinstance(exc, exc_type1):
EXCEPT_BLOCK1
elif isinstance(exc, exc_type2):
EXCEPT_BLOCK2
else:
EXCEPT_BLOCK3
def __else__(self):
ELSE_BLOCK
def __finally__(self):
FINALLY_BLOCK
Aside from preferring the addition of two method slots rather than
four, I consider it significantly easier to be able to simply
reproduce a slightly modified version of the original try
statement code in the __exit__() method (as shown in Factoring
out arbitrary exception handling), rather than have to split the
functionality amongst several different methods (or figure out
which method to use if not all clauses are used by the template).
To make this discussion less theoretical, here is the transaction
example implemented using both the two method and the four method
protocols instead of a generator. Both implementations guarantee a
commit if a break, return or continue statement is
encountered (as does the generator-based implementation in the
Examples section):
class transaction_2method(object):
def __init__(self, db):
self.db = db
def __enter__(self):
pass
def __exit__(self, exc_type, *exc_details):
if exc_type is None:
self.db.commit()
else:
self.db.rollback()
class transaction_4method(object):
def __init__(self, db):
self.db = db
self.commit = False
def __try__(self):
self.commit = True
def __except__(self, exc_type, exc_value, traceback):
self.db.rollback()
self.commit = False
def __else__(self):
pass
def __finally__(self):
if self.commit:
self.db.commit()
self.commit = False
There are two more minor points, relating to the specific method names
in the suggestion. The name of the __try__() method is
misleading, as SETUP_BLOCK executes before the try statement
is entered, and the name of the __else__() method is unclear in
isolation, as numerous other Python statements include an else
clause.
Iterator finalisation (WITHDRAWN)
The ability to use user defined statements inside generators is likely
to increase the need for deterministic finalisation of iterators, as
resource management is pushed inside the generators, rather than being
handled externally as is currently the case.
The PEP currently suggests handling this by making all generators
statement templates, and using with statements to handle
finalisation. However, earlier versions of this PEP suggested the
following, more complex, solution, that allowed the author of a
generator to flag the need for finalisation, and have for loops
deal with it automatically. It is included here as a long, detailed
rejected option.
Iterator protocol addition: __finish__
An optional new method for iterators is proposed, called
__finish__(). It takes no arguments, and should not return
anything.
The __finish__ method is expected to clean up all resources the
iterator has open. Iterators with a __finish__() method are
called ‘finishable iterators’ for the remainder of the PEP.
Best effort finalisation
A finishable iterator should ensure that it provides a __del__
method that also performs finalisation (e.g. by invoking the
__finish__() method). This allows Python to still make a best
effort at finalisation in the event that deterministic finalisation is
not applied to the iterator.
Deterministic finalisation
If the iterator used in a for loop has a __finish__() method,
the enhanced for loop semantics will guarantee that that method
will be executed, regardless of the means of exiting the loop. This
is important for iterator generators that utilise user defined
statements or the now permitted try/finally statements, or
for new iterators that rely on timely finalisation to release
allocated resources (e.g. releasing a thread or database connection
back into a pool).
for loop syntax
No changes are suggested to for loop syntax. This is just to
define the statement parts needed for the description of the
semantics:
for VAR1 in EXPR1:
BLOCK1
else:
BLOCK2
Updated for loop semantics
When the target iterator does not have a __finish__() method, a
for loop will execute as follows (i.e. no change from the status
quo):
itr = iter(EXPR1)
exhausted = False
while True:
try:
VAR1 = itr.next()
except StopIteration:
exhausted = True
break
BLOCK1
if exhausted:
BLOCK2
When the target iterator has a __finish__() method, a for loop
will execute as follows:
itr = iter(EXPR1)
exhausted = False
try:
while True:
try:
VAR1 = itr.next()
except StopIteration:
exhausted = True
break
BLOCK1
if exhausted:
BLOCK2
finally:
itr.__finish__()
The implementation will need to take some care to avoid incurring the
try/finally overhead when the iterator does not have a
__finish__() method.
Generator iterator finalisation: __finish__() method
When enabled with the appropriate decorator, generators will have a
__finish__() method that raises TerminateIteration in the
internal frame:
def __finish__(self):
try:
self._inject_exception(TerminateIteration, None, None)
except TerminateIteration:
pass
A decorator (e.g. needs_finish()) is required to enable this
feature, so that existing generators (which are not expecting
finalisation) continue to work as expected.
Partial iteration of finishable iterators
Partial iteration of a finishable iterator is possible, although it
requires some care to ensure the iterator is still finalised promptly
(it was made finishable for a reason!). First, we need a class to
enable partial iteration of a finishable iterator by hiding the
iterator’s __finish__() method from the for loop:
class partial_iter(object):
def __init__(self, iterable):
self.iter = iter(iterable)
def __iter__(self):
return self
def next(self):
return self.iter.next()
Secondly, an appropriate statement template is needed to ensure the
iterator is finished eventually:
@statement_template
def finishing(iterable):
itr = iter(iterable)
itr_finish = getattr(itr, "__finish__", None)
if itr_finish is None:
yield itr
else:
try:
yield partial_iter(itr)
finally:
itr_finish()
This can then be used as follows:
with finishing(finishable_itr) as itr:
for header_item in itr:
if end_of_header(header_item):
break
# process header item
for body_item in itr:
# process body item
Note that none of the above is needed for an iterator that is not
finishable - without a __finish__() method, it will not be
promptly finalised by the for loop, and hence inherently allows
partial iteration. Allowing partial iteration of non-finishable
iterators as the default behaviour is a key element in keeping this
addition to the iterator protocol backwards compatible.
Acknowledgements
The acknowledgements section for PEP 340 applies, since this text grew
out of the discussion of that PEP, but additional thanks go to Michael
Hudson, Paul Moore and Guido van Rossum for writing PEP 310 and PEP
340 in the first place, and to (in no meaningful order) Fredrik Lundh,
Phillip J. Eby, Steven Bethard, Josiah Carlson, Greg Ewing, Tim
Delaney and Arnold deVos for prompting particular ideas that made
their way into this text.
References
[1]
A rant against flow control macros
(http://blogs.msdn.com/oldnewthing/archive/2005/01/06/347666.aspx)
[2]
Why doesn’t C# have a ‘with’ statement?
(http://msdn.microsoft.com/vcsharp/programming/language/ask/withstatement/)
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 346 – User Defined (”with”) Statements | Standards Track | This PEP is a combination of PEP 310’s “Reliable Acquisition/Release
Pairs” with the “Anonymous Block Statements” of Guido’s PEP 340. This
PEP aims to take the good parts of PEP 340, blend them with parts of
PEP 310 and rearrange the lot into an elegant whole. It borrows from
various other PEPs in order to paint a complete picture, and is
intended to stand on its own. |
PEP 348 – Exception Reorganization for Python 3.0
Author:
Brett Cannon <brett at python.org>
Status:
Rejected
Type:
Standards Track
Created:
28-Jul-2005
Post-History:
Table of Contents
Abstract
Rationale For Wanting Change
Philosophy of Reorganization
New Hierarchy
Differences Compared to Python 2.4
BaseException
KeyboardInterrupt and SystemExit
NotImplementedError
Required Superclass for raise
Implementation
Bare except Clauses Catch Exception
Implementation
Transition Plan
Rejected Ideas
DeprecationWarning Inheriting From PendingDeprecationWarning
AttributeError Inheriting From TypeError or NameError
Removal of EnvironmentError
Introduction of MacError and UnixError
SystemError Subclassing SystemExit
ControlFlowException Under Exception
Rename NameError to NamespaceError
Renaming RuntimeError or Introducing SimpleError
Renaming Existing Exceptions
Have EOFError Subclass IOError
Have MemoryError and SystemError Have a Common Superclass
Common Superclass for PendingDeprecationWarning and DeprecationWarning
Removing WindowsError
Superclass for KeyboardInterrupt and SystemExit
Acknowledgements
References
Copyright
Note
This PEP has been rejected [16].
Abstract
Python, as of version 2.4, has 38 exceptions (including warnings) in
the built-in namespace in a rather shallow hierarchy. These
classes have come about over the years without a chance to learn from
experience. This PEP proposes doing a reorganization of the hierarchy
for Python 3.0 when backwards-compatibility is not as much of an
issue.
Along with this reorganization, adding a requirement that all
objects passed to a raise statement must inherit from a specific
superclass is proposed. This is to have guarantees about the basic
interface of exceptions and to further enhance the natural hierarchy
of exceptions.
Lastly, bare except clauses will be changed to be semantically
equivalent to except Exception. Most people currently use bare
except clauses for this purpose, and with the exception hierarchy
reorganization this becomes a viable default.
Rationale For Wanting Change
Exceptions are a critical part of Python. While exceptions are
traditionally used to signal errors in a program, they have also grown
to be used for flow control for things such as iterators.
While their importance is great, there is a lack of structure to them.
This stems from the fact that any object can be raised as an
exception. Because of this, there is no guarantee about what kind of
object will be raised, which destroys any possible hierarchy that
raised objects might adhere to.
But exceptions do have a hierarchy, showing the severity of the
exception. The hierarchy also groups related exceptions together to
simplify catching them in except clauses. To allow people to
rely on this hierarchy, a common superclass that all
raised objects must inherit from is being proposed. It also allows
guarantees about the interface to raised objects to be made (see
PEP 344). A discussion about all of this has occurred
before on python-dev [1].
As bare except clauses stand now, they catch all exceptions.
While this can be handy, it is rather overreaching for the common
case. Thanks to having a required superclass, catching all
exceptions is as easy as catching just one specific exception.
This allows bare except clauses to be used for a more useful
purpose.
Once again, this has been discussed on python-dev [2].
Finally, slight changes to the exception hierarchy will make it much
more reasonable in terms of structure. With minor rearranging,
exceptions that should not typically be caught can be allowed to
propagate to the
top of the execution stack, terminating the interpreter as intended.
Philosophy of Reorganization
For the reorganization of the hierarchy, there was a general
philosophy followed that developed from discussion of earlier drafts
of this PEP [4], [5],
[6], [7],
[8], [9].
First and foremost was to not break anything
that works. This meant that renaming exceptions was out of the
question unless the name was deemed severely bad. This
also meant no removal of exceptions unless they were viewed as
truly misplaced. The introduction of new exceptions was only done in
situations where there might be a use for catching a superclass of a
category of exceptions. Lastly, existing exceptions would have their
inheritance tree changed only if it was felt they were truly
misplaced to begin with.
For all new exceptions, the proper suffix had to be chosen. For
those that signal an error, “Error” is to be used. If the exception
is a warning, then “Warning”. “Exception” is to be used when none
of the other suffixes are proper to use and no specific suffix is
a better fit.
After that it came down to choosing which exceptions should and
should not inherit from Exception. This was for the purpose of
making bare except clauses more useful.
Lastly, the entire existing hierarchy had to inherit from the new
exception meant to act as the required superclass for all exceptions
to inherit from.
New Hierarchy
Note
Exceptions flagged with “stricter inheritance” will no
longer inherit from a certain class. A “broader inheritance” flag
means a class has been added to the exception’s inheritance tree.
All comparisons are against the Python 2.4 exception hierarchy.
+-- BaseException (new; broader inheritance for subclasses)
+-- Exception
+-- GeneratorExit (defined in PEP 342)
+-- StandardError
+-- ArithmeticError
+-- ZeroDivisionError
+-- FloatingPointError
+-- OverflowError
+-- AssertionError
+-- AttributeError
+-- EnvironmentError
+-- IOError
+-- EOFError
+-- OSError
+-- ImportError
+-- LookupError
+-- IndexError
+-- KeyError
+-- MemoryError
+-- NameError
+-- UnboundLocalError
+-- NotImplementedError (stricter inheritance)
+-- SyntaxError
+-- IndentationError
+-- TabError
+-- TypeError
+-- RuntimeError
+-- UnicodeError
+-- UnicodeDecodeError
+-- UnicodeEncodeError
+-- UnicodeTranslateError
+-- ValueError
+-- ReferenceError
+-- StopIteration
+-- SystemError
+-- Warning
+-- DeprecationWarning
+-- FutureWarning
+-- PendingDeprecationWarning
+-- RuntimeWarning
+-- SyntaxWarning
+-- UserWarning
+-- WindowsError
+-- KeyboardInterrupt (stricter inheritance)
+-- SystemExit (stricter inheritance)
Differences Compared to Python 2.4
A more thorough explanation of terms is needed when discussing
inheritance changes. Inheritance changes result in either broader or
more restrictive inheritance. “Broader” is when a class has an
inheritance tree like cls, A and then becomes cls, B, A.
“Stricter” is the reverse.
BaseException
The superclass that all exceptions must inherit from. Its name was
chosen to reflect that it is at the base of the exception hierarchy
while being an exception itself. “Raisable” was considered as a name,
but it was passed over because it did not properly reflect the fact
that the class is an exception itself.
Direct inheritance of BaseException is not expected, and will
be discouraged for the general case. Most user-defined
exceptions should inherit from Exception instead. This allows
catching Exception to continue to work in the common case of catching
all exceptions that should be caught. Direct inheritance of
BaseException should only be done in cases where an entirely new
category of exception is desired.
For the rare cases where all
exceptions should be caught blindly, except BaseException will
work.
KeyboardInterrupt and SystemExit
Both exceptions are no longer under Exception. This is to allow bare
except clauses to act as a more viable default case by catching
exceptions that inherit from Exception. With both KeyboardInterrupt
and SystemExit acting as signals that the interpreter is expected to
exit, catching them in the common case is the wrong semantics.
NotImplementedError
Inherits from Exception instead of from RuntimeError.
Originally inheriting from RuntimeError, NotImplementedError does not
have any direct relation to the exception meant for use in user code
as a quick-and-dirty exception. Thus it now directly inherits from
Exception.
Required Superclass for raise
By requiring all objects passed to a raise statement to inherit
from a specific superclass, all exceptions are guaranteed to have
certain attributes. If PEP 344 is accepted, the attributes
outlined there will be guaranteed to be on all exceptions raised.
This should help facilitate debugging by making the querying of
information from exceptions much easier.
The proposed hierarchy has BaseException as the required base class.
Implementation
Enforcement is straightforward. Modifying RAISE_VARARGS to do an
inheritance check first before raising an exception should be enough.
For the C API, all functions that set an exception will have the same
inheritance check applied.
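A pure-Python sketch of the check being described (the real enforcement
would live in RAISE_VARARGS and the C API, so the function name here is
purely illustrative):
def _check_raisable(obj):
    # Raising a class is fine if it derives from BaseException ...
    if isinstance(obj, type) and issubclass(obj, BaseException):
        return
    # ... and raising an instance is fine under the same condition.
    if isinstance(obj, BaseException):
        return
    raise TypeError("exceptions must derive from BaseException")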
Bare except Clauses Catch Exception
In most existing Python 2.4 code, bare except clauses are too
broad in the exceptions they catch. Typically only exceptions that
signal an error are desired to be caught. This means that exceptions
that are used to signify that the interpreter should exit should not
be caught in the common case.
With KeyboardInterrupt and SystemExit moved to inherit from
BaseException instead of Exception, changing bare except clauses
to act as except Exception becomes a much more reasonable
default. This change will also break very little code, since these
semantics are what most people want for bare except clauses.
The complete removal of bare except clauses has been argued for.
The case has been made that they violate both Only One Way To Do It
(OOWTDI) and Explicit Is Better Than Implicit (EIBTI) as listed in the
Zen of Python. But Practicality Beats Purity (PBP), also in
the Zen of Python, trumps both of these in this case. The BDFL has
stated that bare except clauses will work this way
[14].
Implementation
The compiler will emit the bytecode for except Exception whenever
a bare except clause is reached.
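In other words, under this proposal the following two handlers compile to
the same bytecode (risky() and handle() are hypothetical placeholders):
try:
    risky()
except:                # bare clause ...
    handle()

try:
    risky()
except Exception:      # ... is treated exactly like this
    handle()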
Transition Plan
Because of the complexity and clutter that would be required to add
all features planned in this PEP, the transition plan is very simple.
In Python 2.5 BaseException is added. In Python 3.0, all remaining
features (required superclass, change in inheritance, bare except
clauses becoming the same as except Exception) will go into
effect. Making all of this work in a backwards-compatible
way in Python 2.5 would require very deep hacks in the exception
machinery which could be error-prone and lead to a slowdown in
performance for little benefit.
To help with the transition, the documentation will be changed to
reflect several programming guidelines:
When one wants to catch all exceptions, catch BaseException
To catch all exceptions that do not represent the termination of
the interpreter, catch Exception explicitly
Explicitly catch KeyboardInterrupt and SystemExit; don’t rely on
inheritance from Exception to lead to the capture
Always catch NotImplementedError explicitly instead of relying on
the inheritance from RuntimeError
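Transition-era code following the guidelines above might look like the
sketch below (main() and log_error() are hypothetical placeholders). In
Python 2.x, where KeyboardInterrupt and SystemExit still inherit from
Exception, they are re-raised explicitly so a broad handler does not
swallow interpreter-exit signals:
try:
    main()
except (KeyboardInterrupt, SystemExit):
    raise                  # let the interpreter exit as intended
except Exception, err:
    log_error(err)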
The documentation for the ‘exceptions’ module [3],
tutorial [15], and PEP 290 will all require
updating.
Rejected Ideas
DeprecationWarning Inheriting From PendingDeprecationWarning
This was originally proposed because a DeprecationWarning can be
viewed as a PendingDeprecationWarning that is being removed in the
next version. But since enough people thought the inheritance could
logically work the other way around, the idea was dropped.
AttributeError Inheriting From TypeError or NameError
Viewing attributes as part of the interface of a type caused the idea
of inheriting from TypeError. But that partially defeats the thinking
of duck typing and thus the idea was dropped.
Inheriting from NameError was suggested because objects can be viewed
as having their own namespace where the attributes live and when an
attribute is not found it is a namespace failure. This was also
dropped as a possibility since not everyone shared this view.
Removal of EnvironmentError
Originally proposed based on the idea that EnvironmentError was an
unneeded distinction, the BDFL overruled this idea [10].
Introduction of MacError and UnixError
Proposed to add symmetry to WindowsError, the BDFL said they won’t be
used enough [10]. The idea of then removing WindowsError
was proposed and accepted as reasonable, thus completely negating the
idea of adding these exceptions.
SystemError Subclassing SystemExit
Proposed because a SystemError is meant to lead to a system exit, the
idea was removed since CriticalError indicates this better.
ControlFlowException Under Exception
It has been suggested that ControlFlowException should inherit from
Exception. This idea has been rejected based on the thinking that
control flow exceptions typically do not all need to be caught by a
single except clause.
Rename NameError to NamespaceError
NameError is considered more succinct and avoids any possible
mistyping of the capitalization of “Namespace” [11].
Renaming RuntimeError or Introducing SimpleError
The thinking was that RuntimeError was in no way an obvious name for
an exception meant to be used when a situation did not call for the
creation of a new exception. The renaming was rejected on the basis
that the exception is already used throughout the interpreter
[12].
Rejection of SimpleError was founded on the thought that people
should be free to use whatever exception they choose and not have one
so blatantly suggested [13].
Renaming Existing Exceptions
Various renamings were suggested but none garnered more than a +0 vote
(renaming ReferenceError to WeakReferenceError). The thinking was
that the existing names were fine and no one had actively complained
about them ever. To minimize backwards-compatibility issues and
causing existing Python programmers extra pain, the renamings were
removed.
Have EOFError Subclass IOError
The original thought was that since EOFError deals directly with I/O,
it should
subclass IOError. But since EOFError is used more as a signal that an
event
has occurred (the exhaustion of an I/O port), it should not subclass
such a specific error exception.
Have MemoryError and SystemError Have a Common Superclass
Both classes deal with the interpreter, so why not have them have a
common
superclass? Because one of them means that the interpreter is in a
state that it should not recover from while the other does not.
Common Superclass for PendingDeprecationWarning and DeprecationWarning
Grouping the deprecation warning exceptions together makes intuitive
sense.
But this sensible idea does not extend well when one considers how
rarely either warning is used, let alone both at the same time.
Removing WindowsError
Originally proposed based on the idea that having such a
platform-specific exception should not be in the built-in namespace.
It turns out, though, enough code exists that uses the exception to
warrant it staying.
Superclass for KeyboardInterrupt and SystemExit
Proposed to make catching non-Exception inheriting exceptions easier
along with easing the transition to the new hierarchy, the idea was
rejected by the BDFL [14]. The argument was that existing
code did not show enough instances of the pair of exceptions being
caught together to justify cluttering the built-in namespace.
Acknowledgements
Thanks to Robert Brewer, Josiah Carlson, Alyssa Coghlan, Timothy
Delaney, Jack Diedrich, Fred L. Drake, Jr., Philip J. Eby, Greg Ewing,
James Y. Knight, MA Lemburg, Guido van Rossum, Stephen J. Turnbull,
Raymond Hettinger, and everyone else I missed for participating in the
discussion.
References
[1]
python-dev Summary (An exception is an
exception, unless it doesn’t inherit from Exception)
http://www.python.org/dev/summary/2004-08-01_2004-08-15.html#an-exception-is-an-exception-unless-it-doesn-t-inherit-from-exception
[2]
python-dev email (PEP, take 2: Exception
Reorganization for Python 3.0)
https://mail.python.org/pipermail/python-dev/2005-August/055116.html
[3]
exceptions module
http://docs.python.org/library/exceptions.html
[4]
python-dev thread (Pre-PEP: Exception
Reorganization for Python 3.0)
https://mail.python.org/pipermail/python-dev/2005-July/055020.html,
https://mail.python.org/pipermail/python-dev/2005-August/055065.html
[5]
python-dev thread (PEP, take 2: Exception
Reorganization for Python 3.0)
https://mail.python.org/pipermail/python-dev/2005-August/055103.html
[6]
python-dev thread (Reorg PEP checked in)
https://mail.python.org/pipermail/python-dev/2005-August/055138.html
[7]
python-dev thread (Major revision of PEP 348 committed)
https://mail.python.org/pipermail/python-dev/2005-August/055199.html
[8]
python-dev thread (Exception Reorg PEP revised yet again)
https://mail.python.org/pipermail/python-dev/2005-August/055292.html
[9]
python-dev thread (PEP 348 (exception reorg) revised again)
https://mail.python.org/pipermail/python-dev/2005-August/055412.html
[10] (1, 2)
python-dev email (Pre-PEP: Exception Reorganization
for Python 3.0)
https://mail.python.org/pipermail/python-dev/2005-July/055019.html
[11]
python-dev email (PEP, take 2: Exception Reorganization for
Python 3.0)
https://mail.python.org/pipermail/python-dev/2005-August/055159.html
[12]
python-dev email (Exception Reorg PEP checked in)
https://mail.python.org/pipermail/python-dev/2005-August/055149.html
[13]
python-dev email (Exception Reorg PEP checked in)
https://mail.python.org/pipermail/python-dev/2005-August/055175.html
[14] (1, 2)
python-dev email (PEP 348 (exception reorg) revised again)
https://mail.python.org/pipermail/python-dev/2005-August/055423.html
[15]
Python Tutorial
http://docs.python.org/tutorial/
[16]
python-dev email (Bare except clauses in PEP 348)
https://mail.python.org/pipermail/python-dev/2005-August/055676.html
Copyright
This document has been placed in the public domain.
| Rejected | PEP 348 – Exception Reorganization for Python 3.0 | Standards Track | Python, as of version 2.4, has 38 exceptions (including warnings) in
the built-in namespace in a rather shallow hierarchy. These
classes have come about over the years without a chance to learn from
experience. This PEP proposes doing a reorganization of the hierarchy
for Python 3.0 when backwards-compatibility is not as much of an
issue. |
PEP 349 – Allow str() to return unicode strings
Author:
Neil Schemenauer <nas at arctrix.com>
Status:
Rejected
Type:
Standards Track
Created:
02-Aug-2005
Python-Version:
2.5
Post-History:
06-Aug-2005
Resolution:
Python-Dev message
Table of Contents
Abstract
Rationale
Specification
Backwards Compatibility
Alternative Solutions
References
Copyright
Abstract
This PEP proposes to change the str() built-in function so that it
can return unicode strings. This change would make it easier to
write code that works with either string type and would also make
some existing code handle unicode strings. The C function
PyObject_Str() would remain unchanged and the function
PyString_New() would be added instead.
Rationale
Python has had a Unicode string type for some time now but use of
it is not yet widespread. There is a large amount of Python code
that assumes that string data is represented as str instances.
The long-term plan for Python is to phase out the str type and use
unicode for all string data. Clearly, a smooth migration path
must be provided.
We need to upgrade existing libraries, written for str instances,
so that they are capable of operating in an all-unicode string world.
We can’t change to an all-unicode world until all essential
libraries are made capable for it. Upgrading the libraries in one
shot does not seem feasible. A more realistic strategy is to
individually make the libraries capable of operating on unicode
strings while preserving their current all-str environment
behaviour.
First, we need to be able to write code that can accept unicode
instances without attempting to coerce them to str instances. Let
us label such code as Unicode-safe. Unicode-safe libraries can be
used in an all-unicode world.
Second, we need to be able to write code that, when provided only
str instances, will not create unicode results. Let us label such
code as str-stable. Libraries that are str-stable can be used by
libraries and applications that are not yet Unicode-safe.
Sometimes it is simple to write code that is both str-stable and
Unicode-safe. For example, the following function just works:
def appendx(s):
return s + 'x'
That’s not too surprising since the unicode type is designed to
make the task easier. The principle is that when str and unicode
instances meet, the result is a unicode instance. One notable
difficulty arises when code requires a string representation of an
object; an operation traditionally accomplished by using the str()
built-in function.
Using the current str() function makes the code not Unicode-safe.
Replacing a str() call with a unicode() call makes the code not
str-stable. Changing str() so that it could return unicode
instances would solve this problem. As a further benefit, some code
that is currently not Unicode-safe because it uses str() would
become Unicode-safe.
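The tension can be seen in a small, hypothetical helper (illustrative
only; neither function is from the PEP):
def describe(obj):
    # str-stable, but not Unicode-safe: str() may raise
    # UnicodeEncodeError when obj is (or produces) non-ASCII unicode
    return 'Value: ' + str(obj)

def describe_u(obj):
    # Unicode-safe, but not str-stable: the result is always a
    # unicode instance, even when obj is a plain str
    return u'Value: ' + unicode(obj)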
Specification
A Python implementation of the str() built-in follows:
def str(s):
"""Return a nice string representation of the object. The
return value is a str or unicode instance.
"""
if type(s) is str or type(s) is unicode:
return s
r = s.__str__()
if not isinstance(r, (str, unicode)):
raise TypeError('__str__ returned non-string')
return r
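To make the effect concrete, here is an illustrative (hypothetical) use of
the pure-Python definition above, renamed so as not to shadow the current
built-in; the Greeting class is invented for the example:
def proposed_str(s):
    if type(s) is str or type(s) is unicode:
        return s
    r = s.__str__()
    if not isinstance(r, (str, unicode)):
        raise TypeError('__str__ returned non-string')
    return r

class Greeting(object):
    def __str__(self):
        return u'sch\xf6ne Gr\xfc\xdfe'    # __str__ returns unicode

print repr(proposed_str(Greeting()))   # the unicode result passes through
print repr(proposed_str('abc'))        # str input stays str (str-stable)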
The following function would be added to the C API and would be the
equivalent to the str() built-in (ideally it would be called PyObject_Str,
but changing that function could cause a massive number of
compatibility problems):
PyObject *PyString_New(PyObject *);
A reference implementation is available on Sourceforge [1] as a
patch.
Backwards Compatibility
Some code may require that str() returns a str instance. In the
standard library, only one such case has been found so far. The
function email.header_decode() requires a str instance and the
email.Header.decode_header() function tries to ensure this by
calling str() on its argument. The code was fixed by changing
the line “header = str(header)” to:
if isinstance(header, unicode):
header = header.encode('ascii')
Whether this is truly a bug is questionable since decode_header()
really operates on byte strings, not character strings. Code that
passes it a unicode instance could itself be considered buggy.
Alternative Solutions
A new built-in function could be added instead of changing str().
Doing so would introduce virtually no backwards compatibility
problems. However, since the compatibility problems are expected to
be rare, changing str() seems preferable to adding a new built-in.
The basestring type could be changed to have the proposed behaviour,
rather than changing str(). However, that would be confusing
behaviour for an abstract base type.
References
[1]
https://bugs.python.org/issue1266570
Copyright
This document has been placed in the public domain.
| Rejected | PEP 349 – Allow str() to return unicode strings | Standards Track | This PEP proposes to change the str() built-in function so that it
can return unicode strings. This change would make it easier to
write code that works with either string type and would also make
some existing code handle unicode strings. The C function
PyObject_Str() would remain unchanged and the function
PyString_New() would be added instead. |
PEP 350 – Codetags
Author:
Micah Elliott <mde at tracos.org>
Status:
Rejected
Type:
Informational
Created:
27-Jun-2005
Post-History:
10-Aug-2005, 26-Sep-2005
Table of Contents
Rejection Notice
Abstract
What Are Codetags?
Philosophy
Motivation
Examples
Specification
General Syntax
Mnemonics
Fields
DONE File
Tools
Objections
References
Rejection Notice
This PEP has been rejected. While the community may be interested,
there is no desire to make the standard library conform to this standard.
Abstract
This informational PEP aims to provide guidelines for consistent use
of codetags, which would enable the construction of standard
utilities to take advantage of the codetag information, as well as
making Python code more uniform across projects. Codetags also
represent a very lightweight programming micro-paradigm and become
useful for project management, documentation, change tracking, and
project health monitoring. This is submitted as a PEP because its
ideas are thought to be Pythonic, although the concepts are not unique
to Python programming. Herein are the definition of codetags, the
philosophy behind them, a motivation for standardized conventions,
some examples, a specification, a toolset description, and possible
objections to the Codetag project/paradigm.
This PEP is also living as a wiki for people to add comments.
What Are Codetags?
Programmers widely use ad-hoc code comment markup conventions to serve
as reminders of sections of code that need closer inspection or
review. Examples of markup include FIXME, TODO, XXX,
BUG, but there are many more in wide use in existing software. Such
markup will henceforth be referred to as codetags. These codetags
may show up in application code, unit tests, scripts, general
documentation, or wherever suitable.
Codetags have been under discussion and in use (hundreds of codetags
in the Python 2.4 sources) in many places (e.g., c2) for many years.
See References for further historic and current information.
Philosophy
If you subscribe to most of these values, then codetags will likely be
useful for you.
As much information as possible should be contained inside the
source code (application code or unit tests). This along with
use of codetags impedes duplication. Most documentation can be
generated from that source code; e.g., by using help2man, man2html,
docutils, epydoc/pydoc, ctdoc, etc.
Information should be almost never duplicated – it should be
recorded in a single original format and all other locations should
be automatically generated from the original, or simply be
referenced. This is famously known as the Single Point Of
Truth (SPOT) or Don’t Repeat Yourself (DRY) rule.
Documentation that gets into customers’ hands should be
auto-generated from single sources into all other output
formats. People want documentation in many forms. It is thus
important to have a documentation system that can generate all of
these.
The developers are the documentation team. They write the code
and should know the code the best. There should not be a
dedicated, disjoint documentation team for any non-huge project.
Plain text (with non-invasive markup) is the best format for
writing anything. All other formats are to be generated from the
plain text.
Codetag design was influenced by the following goals:
Comments should be short whenever possible.
Codetag fields should be optional and of minimal length. Default
values and custom fields can be set by individual code shops.
Codetags should be minimalistic. The quicker it is to jot
something down, the more likely it is to get jotted.
The most common use of codetags will only have zero to two fields
specified, and these should be the easiest to type and read.
Motivation
Various productivity tools can be built around codetags.
See Tools.
Encourages consistency.
Historically, a subset of these codetags has been used informally in
the majority of source code in existence, whether in Python or in
other languages. Tags have been used in an inconsistent manner with
different spellings, semantics, format, and placement. For example,
some programmers might include datestamps and/or user identifiers,
limit to a single line or not, spell the codetag differently than
others, etc.
Encourages adherence to SPOT/DRY principle.
E.g., generating a roadmap dynamically from codetags instead of
keeping TODOs in sync with a separate roadmap document.
Easy to remember.
All codetags must be concise, intuitive, and semantically
non-overlapping with others. The format must also be simple.
Use not required/imposed.
If you don’t use codetags already, there’s no obligation to start,
and no risk of affecting code (but see Objections). A small subset
can be adopted and the Tools will still be useful (a few codetags
have probably already been adopted on an ad-hoc basis anyway). Also
it is very easy to identify and remove (and possibly record) a
codetag that is no longer deemed useful.
Gives a global view of code.
Tools can be used to generate documentation and reports.
A logical location for capturing CRCs/Stories/Requirements.
The XP community often does not electronically capture Stories, but
codetags seem like a good place to locate them.
Extremely lightweight process.
Creating tickets in a tracking system for every thought degrades
development velocity. Even if a ticketing system is employed,
codetags are useful for simply containing links to those tickets.
Examples
This shows a simple codetag as commonly found in sources everywhere
(with the addition of a trailing <>):
# FIXME: Seems like this loop should be finite. <>
while True: ...
The following contrived example demonstrates a typical use of
codetags. It uses some of the available fields to specify the
assignees (a pair of programmers with initials MDE and CLE), the
Date of expected completion (Week 14), and the Priority of the item
(2):
# FIXME: Seems like this loop should be finite. <MDE,CLE d:14w p:2>
while True: ...
This codetag shows a bug with fields describing author, discovery
(origination) date, due date, and priority:
# BUG: Crashes if run on Sundays.
# <MDE 2005-09-04 d:14w p:2>
if day == 'Sunday': ...
Here is a demonstration of how not to use codetags. This has many
problems: 1) Codetags cannot share a line with code; 2) Missing colon
after mnemonic; 3) A codetag referring to codetags is usually useless,
and worse, it is not completable; 4) No need to have a bunch of fields
for a trivial codetag; 5) Fields with unknown values (t:XXX)
should not be used:
i = i + 1 # TODO Add some more codetags.
# <JRNewbie 2005-04-03 d:2005-09-03 t:XXX d:14w p:0 s:inprogress>
Specification
This describes the format: syntax, mnemonic names, fields, and
semantics, and also the separate DONE File.
General Syntax
Each codetag should be inside a comment, and can be any number of
lines. It should not share a line with code. It should match the
indentation of surrounding code. The end of the codetag is marked by
a pair of angle brackets <> containing optional fields, which must
not be split onto multiple lines. It is preferred to have a codetag
in # comments instead of string comments. There can be multiple
fields per codetag, all of which are optional.
In short, a codetag consists of a mnemonic, a colon, commentary text,
an opening angle bracket, an optional list of fields, and a closing
angle bracket. E.g.,
# MNEMONIC: Some (maybe multi-line) commentary. <field field ...>
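For illustration only (this is not part of the specification), a minimal
scanner for single-line codetags in the above form might look like the
following sketch; the regular expression, function name, and
simplifications are assumptions:

import re

# Hypothetical, simplified matcher for single-line codetags of the form
# described above; multi-line commentary and field validation are omitted.
CODETAG_RE = re.compile(
    r"#\s*(?P<mnemonic>[A-Z?!]{3,})"  # mnemonic such as TODO, FIXME, ???, !!!
    r":\s*(?P<comment>.*?)\s*"        # free-form commentary
    r"<(?P<fields>[^>]*)>")           # terminating <> with optional fields

def scan_line(line):
    """Return (mnemonic, commentary, field list) for a codetag line, or None."""
    match = CODETAG_RE.search(line)
    if match is None:
        return None
    return (match.group("mnemonic"),
            match.group("comment"),
            match.group("fields").split())

# scan_line("# FIXME: Seems like this loop should be finite. <MDE d:14w p:2>")
# -> ('FIXME', 'Seems like this loop should be finite.', ['MDE', 'd:14w', 'p:2'])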
Mnemonics
The codetags of interest are listed below, using the following format:
recommended mnemonic (& synonym list)
canonical name: semantics
TODO (MILESTONE, MLSTN, DONE, YAGNI, TBD, TOBEDONE)
To do: Informal tasks/features that are pending completion.
FIXME (XXX, DEBUG, BROKEN, REFACTOR, REFACT, RFCTR, OOPS, SMELL, NEEDSWORK, INSPECT)
Fix me: Areas of problematic or ugly code needing refactoring or
cleanup.
BUG (BUGFIX)
Bugs: Reported defects tracked in bug database.
NOBUG (NOFIX, WONTFIX, DONTFIX, NEVERFIX, UNFIXABLE, CANTFIX)
Will Not Be Fixed: Problems that are well-known but will never be
addressed due to design problems or domain limitations.
REQ (REQUIREMENT, STORY)
Requirements: Satisfactions of specific, formal requirements.
RFE (FEETCH, NYI, FR, FTRQ, FTR)
Requests For Enhancement: Roadmap items not yet implemented.
IDEA
Ideas: Possible RFE candidates, but less formal than RFE.
??? (QUESTION, QUEST, QSTN, WTF)
Questions: Misunderstood details.
!!! (ALERT)
Alerts: In need of immediate attention.
HACK (CLEVER, MAGIC)
Hacks: Temporary code to force inflexible functionality, or
simply a test change, or workaround a known problem.
PORT (PORTABILITY, WKRD)
Portability: Workarounds specific to OS, Python version, etc.
CAVEAT (CAV, CAVT, WARNING, CAUTION)
Caveats: Implementation details/gotchas that stand out as
non-intuitive.
NOTE (HELP)
Notes: Sections where a code reviewer found something that needs
discussion or further investigation.
FAQ
Frequently Asked Questions: Interesting areas that require
external explanation.
GLOSS (GLOSSARY)
Glossary: Definitions for project glossary.
SEE (REF, REFERENCE)
See: Pointers to other code, web link, etc.
TODOC (DOCDO, DODOC, NEEDSDOC, EXPLAIN, DOCUMENT)
Needs Documentation: Areas of code that still need to be
documented.
CRED (CREDIT, THANKS)
Credits: Accreditations for external provision of enlightenment.
STAT (STATUS)
Status: File-level statistical indicator of maturity of this
file.
RVD (REVIEWED, REVIEW)
Reviewed: File-level indicator that review was conducted.
File-level codetags might be better suited as properties in the
revision control system, but might still be appropriately specified in
a codetag.
Some of these are temporary (e.g., FIXME) while others are
persistent (e.g., REQ). A mnemonic was chosen over a synonym
using three criteria: descriptiveness, length (shorter is better),
and how commonly it is already used.
Choosing between FIXME and XXX is difficult. XXX seems to
be more common, but much less descriptive. Furthermore, XXX is a
useful placeholder in a piece of code having a value that is unknown.
Thus FIXME is the preferred spelling. Sun says that XXX
and FIXME are slightly different, giving XXX higher severity.
However, with decades of chaos on this topic, and too many millions of
developers who won’t be influenced by Sun, it is easy to rightly call
them synonyms.
DONE is always a completed TODO item, but this should probably
be indicated through the revision control system and/or a completion
recording mechanism (see DONE File).
It may be a useful metric to count NOTE tags: a high count may
indicate a design (or other) problem. But of course the majority of
codetags indicate areas of code needing some attention.
An FAQ is probably more appropriately documented in a wiki where
users can more easily view and contribute.
Fields
All fields are optional. The proposed standard fields are described
in this section. Note that upper case field characters are intended
to be replaced.
The Originator/Assignee and Origination Date/Week fields are the
most common and don’t usually require a prefix.
This lengthy list of fields is liable to scare people (the intended
minimalists) away from adopting codetags, but keep in mind that these
only exist to support programmers who either 1) like to keep BUG
or RFE codetags in a complete form, or 2) are using codetags as
their complete and only tracking system. In other words, many of
these fields will be used very rarely. They are gathered largely from
industry-wide conventions, and example sources include GCC
Bugzilla and Python’s SourceForge tracking systems.
AAA[,BBB]...
List of Originator or Assignee initials (the context
determines which unless both should exist). It is also okay to
use usernames such as MicahE instead of initials. Initials
(in upper case) are the preferred form.
a:AAA[,BBB]...
List of Assignee initials. This is necessary only in (rare)
cases where a codetag has both an assignee and an originator, and
they are different. Otherwise the a: prefix is omitted, and
context determines the intent. E.g., FIXME usually has an
Assignee, and NOTE usually has an Originator, but if a
FIXME was originated (and initialed) by a reviewer, then the
assignee’s initials would need a a: prefix.
YYYY[-MM[-DD]] or WW[.D]w
The Origination Date indicating when the comment was added, in
ISO 8601 format (digits and hyphens only). Or Origination
Week, an alternative form for specifying an Origination Date.
A day of the week can be optionally specified. The w suffix
is necessary for distinguishing from a date.
d:YYYY[-MM[-DD]] or d:WW[.D]w
Due Date (d) target completion (estimate). Or Due Week (d),
an alternative to specifying a Due Date.
p:N
Priority (p) level. Range (N) is from 0..3 with 3 being the
highest. 0..3 are analogous to low, medium, high, and
showstopper/critical. The Severity field could be factored into
this single number, and doing so is recommended since having both
is subject to varying interpretation. The range and order should
be customizable. The existence of this field is important for any
tool that itemizes codetags. Thus a (customizable) default value
should be supported.
t:NNNN
Tracker (t) number corresponding to associated Ticket ID in
separate tracking system.
The following fields are also available but expected to be less
common.
c:AAAA
Category (c) indicating some specific area affected by this
item.
s:AAAA
Status (s) indicating state of item. Examples are “unexplored”,
“understood”, “inprogress”, “fixed”, “done”, “closed”. Note that
when an item is completed it is probably better to remove the
codetag and record it in a DONE File.
i:N
Development cycle Iteration (i). Useful for grouping codetags into
completion target groups.
r:N
Development cycle Release (r). Useful for grouping codetags into
completion target groups.
To summarize, the non-prefixed fields are initials and origination
date, and the prefixed fields are: assignee (a), due (d), priority
(p), tracker (t), category (c), status (s), iteration (i), and release
(r).
It should be possible for groups to define or add their own fields,
and these should have upper case prefixes to distinguish them from the
standard set. Examples of custom fields are Operating System (O),
Severity (S), Affected Version (A), Customer (C), etc.
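As a purely illustrative sketch (not part of this specification), the
field conventions above could be interpreted by a helper along these
lines; the mapping names and the date/week pattern are assumptions:

import re

# Illustrative only: one possible interpretation of the field conventions
# above. The prefix names and the date/week pattern are assumptions.
PREFIXES = {'a': 'assignee', 'd': 'due', 'p': 'priority', 't': 'tracker',
            'c': 'category', 's': 'status', 'i': 'iteration', 'r': 'release'}

DATE_OR_WEEK = re.compile(r'\d{4}(-\d{2}(-\d{2})?)?$|\d+(\.\d)?w$')

def parse_fields(fields):
    """Map raw field strings such as ['MDE,CLE', 'd:14w', 'p:2'] to a dict."""
    info = {}
    for field in fields:
        if ':' in field:
            prefix, value = field.split(':', 1)
            info[PREFIXES.get(prefix, prefix)] = value
        elif DATE_OR_WEEK.match(field):
            info['origination'] = field             # date or week form
        else:
            info['initials'] = field.split(',')     # originator/assignee initials
    return info

# parse_fields(['MDE,CLE', 'd:14w', 'p:2'])
# -> {'initials': ['MDE', 'CLE'], 'due': '14w', 'priority': '2'}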
DONE File
Some codetags have an ability to be completed (e.g., FIXME,
TODO, BUG). It is often important to retain completed items
by recording them with a completion date stamp. Such completed items
are best stored in a single location, global to a project (or maybe a
package). The proposed format is most easily described by an example,
say ~/src/fooproj/DONE:
# TODO: Recurse into subdirs only on blue
# moons. <MDE 2003-09-26>
[2005-09-26 Oops, I underestimated this one a bit. Should have
used Warsaw's First Law!]
# FIXME: ...
...
You can see that the codetag is copied verbatim from the original
source file. The date stamp is then entered on the following line
with an optional post-mortem commentary. The entry is terminated by a
blank line (\n\n).
It may sound burdensome to have to delete codetag lines every time one
gets completed. But in practice it is quite easy to set up a Vim or
Emacs mapping to auto-record a codetag deletion in this format (sans
the commentary).
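For example, a small helper along the following lines (purely
illustrative, not part of the proposal) could append a deleted codetag to
the DONE file in the format shown above; the function name and signature
are invented:

import datetime

# Purely illustrative helper: append a completed codetag to a project's
# DONE file in the format described above.
def record_done(done_path, codetag_lines, commentary=''):
    """Write the verbatim codetag, a date stamp, and optional commentary."""
    stamp = datetime.date.today().isoformat()
    entry = '\n'.join(codetag_lines) + '\n'
    if commentary:
        entry += '[%s %s]\n' % (stamp, commentary)
    else:
        entry += '[%s]\n' % stamp
    done_file = open(done_path, 'a')
    try:
        done_file.write(entry + '\n')   # blank line terminates the entry
    finally:
        done_file.close()

# record_done('DONE',
#             ['# TODO: Recurse into subdirs only on blue',
#              '# moons. <MDE 2003-09-26>'],
#             "Oops, I underestimated this one a bit.")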
Tools
Currently, programmers (and sometimes analysts) typically use grep
to generate a list of items corresponding to a single codetag.
However, various hypothetical productivity tools could take advantage
of a consistent codetag format. Some example tools follow.
Document Generator
Possible docs: glossary, roadmap, manpages
Codetag History
Track (with revision control system interface) when a BUG tag
(or any codetag) originated/resolved in a code section
Code Statistics
A project Health-O-Meter
Codetag Lint
Notify of invalid use of codetags, and aid in porting to codetags
Story Manager/Browser
An electronic means to replace XP notecards. In MVC terms, the
codetag is the Model, and the Story Manager could be a graphical
Viewer/Controller to do visual rearrangement, prioritization,
assignment, and milestone management.
Any Text Editor
Used for changing, removing, adding, rearranging, recording
codetags.
There are some tools already in existence that take advantage of a
smaller set of pseudo-codetags (see References). There is also an
example codetags implementation under way, known as the Codetag
Project.
Objections
Objection:
Extreme Programming argues that such codetags should not
ever exist in code since the code is the documentation.
Defense:
Maybe you should put the codetags in the unit test files
instead. Besides, it’s tough to generate documentation from
uncommented source code.
Objection:
Too much existing code has not followed proposed
guidelines.
Defense:
[Simple] utilities (ctlint) could convert existing code.
Objection:
Causes duplication with tracking system.
Defense:
Not really, unless fields are abused. If an item exists in
the tracker, a simple ticket number in the codetag tracker field
is sufficient. Maybe a duplicated title would be acceptable.
Furthermore, it’s too burdensome to have a ticket filed for every
item that pops into a developer’s mind on-the-go. Additionally,
the tracking system could possibly be obviated for simple or small
projects that can reasonably fit the relevant data into a codetag.
Objection:
Codetags are ugly and clutter code.
Defense:
That is a good point. But I’d still rather have such info
in a single place (the source code) than various other documents,
likely getting duplicated or forgotten about. The completed
codetags can be sent off to the DONE File, or to the bit
bucket.
Objection:
Codetags (and all comments) get out of date.
Defense:
Not so much if other sources (externally visible
documentation) depend on their being accurate.
Objection:
Codetags tend to only rarely have estimated completion
dates of any sort. OK, the fields are optional, but you want to
suggest fields that actually will be widely used.
Defense:
If an item is inestimable don’t bother with specifying a
date field. Using tools to display items with order and/or color
by due date and/or priority, it is easier to make estimates.
Having your roadmap be a dynamic reflection of your codetags makes
you much more likely to keep the codetags accurate.
Objection:
Named variables for the field parameters in the <>
should be used instead of cryptic one-character prefixes. I.e.,
<MDE p:3> should rather be <author=MDE, priority=3>.
Defense:
It is just too much typing/verbosity to spell out fields. I
argue that p:3 i:2 is as readable as priority=3,
iteration=2 and is much more likely to be typed and remembered
(see bullet C in Philosophy). In this case practicality beats
purity. There are not many fields to keep track of so one letter
prefixes are suitable.
Objection:
Synonyms should be deprecated since it is better to have a
single way to spell something.
Defense:
Many programmers prefer short mnemonic names, especially in
comments. This is why short mnemonics were chosen as the primary
names. However, others feel that an explicit spelling is less
confusing and less prone to error. There will always be two camps
on this subject. Thus synonyms (and complete, full spellings)
should remain supported.
Objection:
It is cruel to use [for mnemonics] opaque acronyms and
abbreviations which drop vowels; it’s hard to figure these things
out. On that basis I hate: MLSTN RFCTR RFE FEETCH, NYI, FR, FTRQ,
FTR WKRD RVDBY
Defense:
Mnemonics are preferred since they are pretty easy to
remember and take up less space. If programmers didn’t like
dropping vowels we would be able to fit very little code on a
line. The space is important for those who write comments that
often fit on a single line. But when using canonical names
everywhere it is much less likely that a comment will fit on a
single line.
Objection:
It takes too long to type the fields.
Defense:
Then don’t use (most or any of) them, especially if you’re
the only programmer. Terminating a codetag with <> is a small
chore, and in doing so you enable the use of the proposed tools.
Editor auto-completion of codetags is also useful: You can
program your editor to stamp a template (e.g. # FIXME . <MDE
{date}>) with just a keystroke or two.
Objection:
WorkWeek is an obscure and uncommon time unit.
Defense:
That’s true but it is a highly suitable unit of granularity
for estimation/targeting purposes, and it is very compact. The
ISO 8601 is widely understood but allows you to only specify
either a specific day (restrictive) or month (broad).
Objection:
I aesthetically dislike for the comment to be terminated
with <> in the empty field case.
Defense:
It is necessary to have a terminator since codetags may be
followed by non-codetag comments. Or codetags could be limited to
a single line, but that’s prohibitive. I can’t think of any
single-character terminator that is appropriate and significantly
better than <>. Maybe @ could be a terminator, but then most
codetags will have an unnecessary @.
Objection:
I can’t use codetags when writing HTML, or less
specifically, XML. Maybe @fields@ would be better than
<fields> as the delimiters.
Defense:
Maybe you’re right, but <> looks nicer whenever
applicable. XML/SGML could use @ while more common
programming languages stick to <>.
References
Some other tools have approached defining/exploiting codetags.
See http://tracos.org/codetag/wiki/Links.
| Rejected | PEP 350 – Codetags | Informational | This informational PEP aims to provide guidelines for consistent use
of codetags, which would enable the construction of standard
utilities to take advantage of the codetag information, as well as
making Python code more uniform across projects. Codetags also
represent a very lightweight programming micro-paradigm and become
useful for project management, documentation, change tracking, and
project health monitoring. This is submitted as a PEP because its
ideas are thought to be Pythonic, although the concepts are not unique
to Python programming. Herein are the definition of codetags, the
philosophy behind them, a motivation for standardized conventions,
some examples, a specification, a toolset description, and possible
objections to the Codetag project/paradigm. |
PEP 351 – The freeze protocol
Author:
Barry Warsaw <barry at python.org>
Status:
Rejected
Type:
Standards Track
Created:
14-Apr-2005
Post-History:
Table of Contents
Abstract
Rejection Notice
Rationale
Proposal
Sample implementations
Reference implementation
Open issues
Copyright
Abstract
This PEP describes a simple protocol for requesting a frozen,
immutable copy of a mutable object. It also defines a new built-in
function which uses this protocol to provide an immutable copy on any
cooperating object.
Rejection Notice
This PEP was rejected. For a rationale, see this thread on python-dev.
Rationale
Built-in objects such as dictionaries and sets accept only immutable
objects as keys. This means that mutable objects like lists cannot be
used as keys to a dictionary. However, a Python programmer can
convert a list to a tuple; the two objects are similar, but the latter
is immutable, and can be used as a dictionary key.
It is conceivable that third party objects also have similar mutable
and immutable counterparts, and it would be useful to have a standard
protocol for conversion of such objects.
sets.Set objects expose a “protocol for automatic conversion to
immutable” so that you can create sets.Sets of sets.Sets. PEP 218
deliberately dropped this feature from built-in sets. This PEP
advances that the feature is still useful and proposes a standard
mechanism for its support.
Proposal
It is proposed that a new built-in function called freeze() be added.
If freeze() is passed an immutable object, as determined by hash() on
that object not raising a TypeError, then the object is returned
directly.
If freeze() is passed a mutable object (i.e. hash() of that object
raises a TypeError), then freeze() will call that object’s
__freeze__() method to get an immutable copy. If the object does not
have a __freeze__() method, then a TypeError is raised.
Sample implementations
Here is a Python implementation of the freeze() built-in:
def freeze(obj):
    try:
        hash(obj)
        return obj
    except TypeError:
        freezer = getattr(obj, '__freeze__', None)
        if freezer:
            return freezer()
        raise TypeError('object is not freezable')
Here are some code samples which show the intended semantics:
class xset(set):
    def __freeze__(self):
        return frozenset(self)

class xlist(list):
    def __freeze__(self):
        return tuple(self)

class imdict(dict):
    def __hash__(self):
        return id(self)

    def _immutable(self, *args, **kws):
        raise TypeError('object is immutable')

    __setitem__ = _immutable
    __delitem__ = _immutable
    clear       = _immutable
    update      = _immutable
    setdefault  = _immutable
    pop         = _immutable
    popitem     = _immutable

class xdict(dict):
    def __freeze__(self):
        return imdict(self)
>>> s = set([1, 2, 3])
>>> {s: 4}
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: set objects are unhashable
>>> t = freeze(s)
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "/usr/tmp/python-lWCjBK.py", line 9, in freeze
TypeError: object is not freezable
>>> t = xset(s)
>>> u = freeze(t)
>>> {u: 4}
{frozenset([1, 2, 3]): 4}
>>> x = 'hello'
>>> freeze(x) is x
True
>>> d = xdict(a=7, b=8, c=9)
>>> hash(d)
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: dict objects are unhashable
>>> hash(freeze(d))
-1210776116
>>> {d: 4}
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: dict objects are unhashable
>>> {freeze(d): 4}
{{'a': 7, 'c': 9, 'b': 8}: 4}
Reference implementation
Patch 1335812 provides the C implementation of this feature. It adds the
freeze() built-in, along with implementations of the __freeze__()
method for lists and sets. Dictionaries are not easily freezable in
current Python, so an implementation of dict.__freeze__() is not
provided yet.
Open issues
Should we define a similar protocol for thawing frozen objects?
Should dicts and sets automatically freeze their mutable keys?
Should we support “temporary freezing” (perhaps with a method called
__congeal__()) a la __as_temporarily_immutable__() in sets.Set?
For backward compatibility with sets.Set, should we support
__as_immutable__()? Or should __freeze__() just be renamed to
__as_immutable__()?
Copyright
This document has been placed in the public domain.
| Rejected | PEP 351 – The freeze protocol | Standards Track | This PEP describes a simple protocol for requesting a frozen,
immutable copy of a mutable object. It also defines a new built-in
function which uses this protocol to provide an immutable copy on any
cooperating object. |
PEP 352 – Required Superclass for Exceptions
Author:
Brett Cannon, Guido van Rossum
Status:
Final
Type:
Standards Track
Created:
27-Oct-2005
Python-Version:
2.5
Post-History:
Table of Contents
Abstract
Requiring a Common Superclass
Exception Hierarchy Changes
Transition Plan
Retracted Ideas
References
Copyright
Abstract
In Python 2.4 and before, any (classic) class can be raised as an
exception. The plan for 2.5 was to allow new-style classes, but this
makes the problem worse – it would mean any class (or
instance) can be raised! This is a problem as it prevents any
guarantees from being made about the interface of exceptions.
This PEP proposes introducing a new superclass that all raised objects
must inherit from. Imposing the restriction will allow a standard
interface for exceptions to exist that can be relied upon. It also
leads to a known hierarchy for all exceptions to adhere to.
One might counter that requiring a specific base class for a
particular interface is unPythonic. However, in the specific case of
exceptions there’s a good reason (which has generally been agreed to
on python-dev): requiring hierarchy helps code that wants to catch
exceptions by making it possible to catch all exceptions explicitly
by writing except BaseException: instead of
except *:. [1]
Introducing a new superclass for exceptions also gives us the chance
to rearrange the exception hierarchy slightly for the better. As it
currently stands, all exceptions in the built-in namespace inherit
from Exception. This is a problem since this includes two exceptions
(KeyboardInterrupt and SystemExit) that often need to be excepted from
the application’s exception handling: the default behavior of shutting
the interpreter down without a traceback is usually more desirable than
whatever the application might do (with the possible exception of
applications that emulate Python’s interactive command loop with
>>> prompt). Changing it so that these two exceptions inherit
from the common superclass instead of Exception will make it easy for
people to write except clauses that are not overreaching and not
catch exceptions that should propagate up.
This PEP is based on previous work done for PEP 348.
Requiring a Common Superclass
This PEP proposes introducing a new exception named BaseException that
is a new-style class and has a single attribute, args. Below
is the code as the exception will work in Python 3.0 (how it will
work in Python 2.x is covered in the Transition Plan section):
class BaseException(object):
    """Superclass representing the base of the exception hierarchy.

    Provides an 'args' attribute that contains all arguments passed
    to the constructor. Suggested practice, though, is that only a
    single string argument be passed to the constructor.
    """

    def __init__(self, *args):
        self.args = args

    def __str__(self):
        if len(self.args) == 1:
            return str(self.args[0])
        else:
            return str(self.args)

    def __repr__(self):
        return "%s(*%s)" % (self.__class__.__name__, repr(self.args))
No restriction is placed upon what may be passed in for args
for backwards-compatibility reasons. In practice, though, only
a single string argument should be used. This keeps the string
representation of the exception a useful, human-readable message
about the exception; this is why the __str__ method special-cases
a length-1 args value. Programmatic information (e.g., an error
code number) should instead be stored as a separate attribute in a
subclass.
The raise statement will be changed to require that any object
passed to it must inherit from BaseException. This will make sure
that all exceptions fall within a single hierarchy that is anchored at
BaseException [1]. This also guarantees a basic
interface that is inherited from BaseException. The change to
raise will be enforced starting in Python 3.0 (see the Transition
Plan below).
With BaseException being the root of the exception hierarchy,
Exception will now inherit from it.
Exception Hierarchy Changes
With the exception hierarchy now even more important since it has a
basic root, a change to the existing hierarchy is called for. As it
stands now, if one wants to catch all exceptions that signal an error
and do not mean the interpreter should be allowed to exit, you must
specify all but two exceptions specifically in an except clause
or catch the two exceptions separately and then re-raise them and
have all other exceptions fall through to a bare except clause:
except (KeyboardInterrupt, SystemExit):
    raise
except:
    ...
That is needlessly explicit. This PEP proposes moving
KeyboardInterrupt and SystemExit to inherit directly from
BaseException.
BaseException
 |- KeyboardInterrupt
 |- SystemExit
 |- Exception
      |- (all other current built-in exceptions)
Doing this makes catching Exception more reasonable. It would catch
only exceptions that signify errors. Exceptions that signal that the
interpreter should exit will not be caught and thus be allowed to
propagate up and allow the interpreter to terminate.
KeyboardInterrupt has been moved since users typically expect an
application to exit when they press the interrupt key (usually Ctrl-C).
If people have overly broad except clauses the expected behaviour
does not occur.
SystemExit has been moved for similar reasons. Since the exception is
raised when sys.exit() is called the interpreter should normally
be allowed to terminate. Unfortunately overly broad except
clauses can prevent the explicitly requested exit from occurring.
To make sure that people catch Exception most of the time, various
parts of the documentation and tutorials will need to be updated to
strongly suggest that Exception be what programmers want to use. Bare
except clauses or catching BaseException directly should be
discouraged based on the fact that KeyboardInterrupt and SystemExit
almost always should be allowed to propagate up.
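To make the intended effect concrete, here is a short 2.x-style sketch
(not taken from the PEP; the function name is invented for
illustration):

def process():
    raise ValueError("ordinary error")

try:
    process()
except Exception, err:       # does not catch KeyboardInterrupt or SystemExit
    print "handled:", err

try:
    raise SystemExit(1)      # no longer a subclass of Exception
except Exception:
    print "never reached"    # the exit request propagates to the interpreter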
Transition Plan
Since semantic changes to Python are being proposed, a transition plan
is needed. The goal is to end up with the new semantics being used in
Python 3.0 while providing a smooth transition for 2.x code. All
deprecations mentioned in the plan will lead to the removal of the
semantics starting in the version following the initial deprecation.
Here is BaseException as implemented in the 2.x series:
class BaseException(object):
"""Superclass representing the base of the exception hierarchy.
The __getitem__ method is provided for backwards-compatibility
and will be deprecated at some point. The 'message' attribute
is also deprecated.
"""
def __init__(self, *args):
self.args = args
def __str__(self):
return str(self.args[0]
if len(self.args) <= 1
else self.args)
def __repr__(self):
func_args = repr(self.args) if self.args else "()"
return self.__class__.__name__ + func_args
def __getitem__(self, index):
"""Index into arguments passed in during instantiation.
Provided for backwards-compatibility and will be
deprecated.
"""
return self.args[index]
def _get_message(self):
"""Method for 'message' property."""
warnings.warn("the 'message' attribute has been deprecated "
"since Python 2.6")
return self.args[0] if len(args) == 1 else ''
message = property(_get_message,
doc="access the 'message' attribute; "
"deprecated and provided only for "
"backwards-compatibility")
Deprecation of features in Python 2.9 is optional. This is because it
is not known at this time if Python 2.9 (which is slated to be the
last version in the 2.x series) will actively deprecate features that
will not be in 3.0. It is conceivable that no deprecation warnings
will be used in 2.9 since there could be such a difference between 2.9
and 3.0 that it would make 2.9 too “noisy” in terms of warnings. Thus
the proposed deprecation warnings for Python 2.9 will be revisited
when development of that version begins, to determine if they are still
desired.
Python 2.5 [done]
all standard exceptions become new-style classes [done]
introduce BaseException [done]
Exception, KeyboardInterrupt, and SystemExit inherit from
BaseException [done]
deprecate raising string exceptions [done]
Python 2.6 [done]
deprecate catching string exceptions [done]
deprecate message attribute (see Retracted Ideas) [done]
Python 2.7 [done]
deprecate raising exceptions that do not inherit from BaseException
Python 3.0 [done]
drop everything that was deprecated above:
string exceptions (both raising and catching) [done]
all exceptions must inherit from BaseException [done]
drop __getitem__, message [done]
Retracted Ideas
A previous version of this PEP that was implemented in Python 2.5
included a ‘message’ attribute on BaseException. Its purpose was to
begin a transition to BaseException accepting only a single argument.
This was to tighten the interface and to force people to use
attributes in subclasses to carry arbitrary information with an
exception instead of cramming it all into args.
Unfortunately, while implementing the removal of the args
attribute in Python 3.0 at the PyCon 2007 sprint
[3], it was discovered that the transition was
very painful, especially for C extension modules. It was decided that
it would be better to deprecate the message attribute in
Python 2.6 (and remove it in Python 2.7 and Python 3.0) and consider a
more long-term transition strategy in Python 3.0 to remove
multiple-argument support in BaseException in preference of accepting
only a single argument. Thus the introduction of message and the
original deprecation of args has been retracted.
References
[1] (1, 2)
python-dev Summary for 2004-08-01 through 2004-08-15
http://www.python.org/dev/summary/2004-08-01_2004-08-15.html#an-exception-is-an-exception-unless-it-doesn-t-inherit-from-exception
[2]
SF patch #1104669 (new-style exceptions)
https://bugs.python.org/issue1104669
[3]
python-3000 email (“How far to go with cleaning up exceptions”)
https://mail.python.org/pipermail/python-3000/2007-March/005911.html
Copyright
This document has been placed in the public domain.
| Final | PEP 352 – Required Superclass for Exceptions | Standards Track | In Python 2.4 and before, any (classic) class can be raised as an
exception. The plan for 2.5 was to allow new-style classes, but this
makes the problem worse – it would mean any class (or
instance) can be raised! This is a problem as it prevents any
guarantees from being made about the interface of exceptions.
This PEP proposes introducing a new superclass that all raised objects
must inherit from. Imposing the restriction will allow a standard
interface for exceptions to exist that can be relied upon. It also
leads to a known hierarchy for all exceptions to adhere to. |
PEP 353 – Using ssize_t as the index type
Author:
Martin von Löwis <martin at v.loewis.de>
Status:
Final
Type:
Standards Track
Created:
18-Dec-2005
Python-Version:
2.5
Post-History:
Table of Contents
Abstract
Rationale
Specification
Conversion guidelines
Discussion
Why not size_t
Why not Py_intptr_t
Doesn’t this break much code?
Doesn’t this consume too much memory?
Open Issues
Copyright
Abstract
In Python 2.4, indices of sequences are restricted to the C type
int. On 64-bit machines, sequences therefore cannot use the full
address space, and are restricted to 2**31 elements. This PEP proposes
to change this, introducing a platform-specific index type
Py_ssize_t. An implementation of the proposed change is in
http://svn.python.org/projects/python/branches/ssize_t.
Rationale
64-bit machines are becoming more popular, and the size of main memory
increases beyond 4GiB. On such machines, Python currently is limited,
in that sequences (strings, unicode objects, tuples, lists,
array.arrays, …) cannot contain more than 2**31 elements.
Today, very few machines have memory to represent larger lists: as
each pointer is 8B (in a 64-bit machine), one needs 16GiB to just hold
the pointers of such a list; with data in the list, the memory
consumption grows even more. However, there are three container types
for which users request improvements today:
strings (currently restricted to 2GiB)
mmap objects (likewise; plus the system typically
won’t keep the whole object in memory concurrently)
Numarray objects (from Numerical Python)
As the proposed change will cause incompatibilities on 64-bit
machines, it should be carried out while such machines are not in wide
use (IOW, as early as possible).
Specification
A new type Py_ssize_t is introduced, which has the same size as the
compiler’s size_t type, but is signed. It will be a typedef for
ssize_t where available.
The internal representation of the length fields of all container
types is changed from int to ssize_t, for all types included in the
standard distribution. In particular, PyObject_VAR_HEAD is changed to
use Py_ssize_t, affecting all extension modules that use that macro.
All occurrences of index and length parameters and results are changed
to use Py_ssize_t, including the sequence slots in type objects, and
the buffer interface.
New conversion functions PyInt_FromSsize_t and PyInt_AsSsize_t are
introduced. PyInt_FromSsize_t will transparently return a long int
object if the value exceeds LONG_MAX; PyInt_AsSsize_t will
transparently process long int objects.
New function pointer typedefs ssizeargfunc, ssizessizeargfunc,
ssizeobjargproc, ssizessizeobjargproc, and lenfunc are introduced. The
buffer interface function types are now called readbufferproc,
writebufferproc, segcountproc, and charbufferproc.
A new conversion code ‘n’ is introduced for PyArg_ParseTuple,
Py_BuildValue, PyObject_CallFunction, and PyObject_CallMethod.
This code operates on Py_ssize_t.
The conversion codes ‘s#’ and ‘t#’ will output Py_ssize_t
if the macro PY_SSIZE_T_CLEAN is defined before Python.h
is included, and continue to output int if that macro
isn’t defined.
At places where a conversion from size_t/Py_ssize_t to
int is necessary, the strategy for conversion is chosen
on a case-by-case basis (see next section).
To prevent loading extension modules that assume a 32-bit
size type into an interpreter that has a 64-bit size type,
Py_InitModule4 is renamed to Py_InitModule4_64.
Conversion guidelines
Module authors have the choice whether they support this PEP in their
code or not; if they support it, they have the choice of different
levels of compatibility.
If a module is not converted to support this PEP, it will continue to
work unmodified on a 32-bit system. On a 64-bit system, compile-time
errors and warnings might be issued, and the module might crash the
interpreter if the warnings are ignored.
Conversion of a module can either attempt to continue using int
indices, or use Py_ssize_t indices throughout.
If the module should continue to use int indices, care must be taken
when calling functions that return Py_ssize_t or size_t, in
particular, for functions that return the length of an object (this
includes the strlen function and the sizeof operator). A good compiler
will warn when a Py_ssize_t/size_t value is truncated into an int.
In these cases, three strategies are available:
statically determine that the size can never exceed an int
(e.g. when taking the sizeof a struct, or the strlen of
a file pathname). In this case, write:

some_int = Py_SAFE_DOWNCAST(some_value, Py_ssize_t, int);
This will add an assertion in debug mode that the value
really fits into an int, and just add a cast otherwise.
statically determine that the value shouldn’t overflow an
int unless there is a bug in the C code somewhere. Test
whether the value is smaller than INT_MAX, and raise an
InternalError if it isn’t.
otherwise, check whether the value fits an int, and raise
a ValueError if it doesn’t.
The same care must be taken for tp_as_sequence slots; in
addition, the signatures of these slots change, and the
slots must be explicitly recast (e.g. from intargfunc
to ssizeargfunc). Compatibility with previous Python
versions can be achieved with the test:
#if PY_VERSION_HEX < 0x02050000 && !defined(PY_SSIZE_T_MIN)
typedef int Py_ssize_t;
#define PY_SSIZE_T_MAX INT_MAX
#define PY_SSIZE_T_MIN INT_MIN
#endif
and then using Py_ssize_t in the rest of the code. For
the tp_as_sequence slots, additional typedefs might
be necessary; alternatively, by replacing:
PyObject* foo_item(struct MyType* obj, int index)
{
...
}
with:
PyObject* foo_item(PyObject* _obj, Py_ssize_t index)
{
struct MyType* obj = (struct MyType*)_obj;
...
}
it becomes possible to drop the cast entirely; the type
of foo_item should then match the sq_item slot in all
Python versions.
If the module should be extended to use Py_ssize_t indices, all usages
of the type int should be reviewed, to see whether it should be
changed to Py_ssize_t. The compiler will help in finding the spots,
but a manual review is still necessary.
Particular care must be taken for PyArg_ParseTuple calls:
they all need to be checked for s# and t# converters, and
PY_SSIZE_T_CLEAN must be defined before including Python.h
if the calls have been updated accordingly.
Fredrik Lundh has written a scanner which checks the code
of a C module for usage of APIs whose signature has changed.
Discussion
Why not size_t
An initial attempt to implement this feature tried to use
size_t. It quickly turned out that this cannot work: Python
uses negative indices in many places (to indicate counting
from the end). Even in places where size_t would be usable,
too many reformulations of code were necessary, e.g. in
loops like:
for(index = length-1; index >= 0; index--)
This loop will never terminate if index is changed from
int to size_t.
Why not Py_intptr_t
Conceptually, Py_intptr_t and Py_ssize_t are different things:
Py_intptr_t needs to be the same size as void*, and Py_ssize_t
the same size as size_t. These could differ, e.g. on machines
where pointers have segment and offset. On current flat-address
space machines, there is no difference, so for all practical
purposes, Py_intptr_t would have worked as well.
Doesn’t this break much code?
With the changes proposed, code breakage is fairly
minimal. On a 32-bit system, no code will break, as
Py_ssize_t is just a typedef for int.
On a 64-bit system, the compiler will warn in many
places. If these warnings are ignored, the code will
continue to work as long as the container sizes don’t
exceed 2**31, i.e. it will work nearly as well as
it does currently. There are two exceptions to this
statement: if the extension module implements the
sequence protocol, it must be updated, or the calling
conventions will be wrong. The other exception is
the places where Py_ssize_t is output through a
pointer (rather than a return value); this applies
most notably to codecs and slice objects.
If the conversion of the code is made, the same code
can continue to work on earlier Python releases.
Doesn’t this consume too much memory?
One might think that using Py_ssize_t in all tuples,
strings, lists, etc. is a waste of space. This is
not true, though: on a 32-bit machine, there is no
change. On a 64-bit machine, the size of many
containers doesn’t change, e.g.
in lists and tuples, a pointer immediately follows
the ob_size member. This means that the compiler
currently inserts 4 padding bytes; with the
change, these padding bytes become part of the size.
in strings, the ob_shash field follows ob_size.
This field is of type long, which is a 64-bit
type on most 64-bit systems (except Win64), so
the compiler inserts padding before it as well.
Open Issues
Marc-Andre Lemburg commented that complete backwards
compatibility with existing source code should be
preserved. In particular, functions that have
Py_ssize_t* output arguments should continue to run
correctly even if the callers pass int*.

It is not clear what strategy could be used to implement
that requirement.
Copyright
This document has been placed in the public domain.
| Final | PEP 353 – Using ssize_t as the index type | Standards Track | In Python 2.4, indices of sequences are restricted to the C type
int. On 64-bit machines, sequences therefore cannot use the full
address space, and are restricted to 2**31 elements. This PEP proposes
to change this, introducing a platform-specific index type
Py_ssize_t. An implementation of the proposed change is in
http://svn.python.org/projects/python/branches/ssize_t. |
PEP 354 – Enumerations in Python
Author:
Ben Finney <ben+python at benfinney.id.au>
Status:
Superseded
Type:
Standards Track
Created:
20-Dec-2005
Python-Version:
2.6
Post-History:
20-Dec-2005
Superseded-By:
435
Table of Contents
Rejection Notice
Abstract
Motivation
Specification
Rationale – Other designs considered
All in one class
Metaclass for creating enumeration classes
Values related to other types
Hiding attributes of enumerated values
Implementation
References and Footnotes
Copyright
Rejection Notice
This PEP has been rejected. This doesn’t slot nicely into any of the
existing modules (like collections), and the Python standard library
eschews having lots of individual data structures in their own
modules. Also, the PEP has generated no widespread interest. For
those who need enumerations, there are cookbook recipes and PyPI
packages that meet these needs.
Note: this PEP was superseded by PEP 435, which has been accepted in
May 2013.
Abstract
This PEP specifies an enumeration data type for Python.
An enumeration is an exclusive set of symbolic names bound to
arbitrary unique values. Values within an enumeration can be iterated
and compared, but the values have no inherent relationship to values
outside the enumeration.
Motivation
The properties of an enumeration are useful for defining an immutable,
related set of constant values that have a defined sequence but no
inherent semantic meaning. Classic examples are days of the week
(Sunday through Saturday) and school assessment grades (‘A’ through
‘D’, and ‘F’). Other examples include error status values and states
within a defined process.
It is possible to simply define a sequence of values of some other
basic type, such as int or str, to represent discrete
arbitrary values. However, an enumeration ensures that such values
are distinct from any others, and that operations without meaning
(“Wednesday times two”) are not defined for these values.
Specification
An enumerated type is created from a sequence of arguments to the
type’s constructor:
>>> Weekdays = enum('sun', 'mon', 'tue', 'wed', 'thu', 'fri', 'sat')
>>> Grades = enum('A', 'B', 'C', 'D', 'F')
Enumerations with no values are meaningless. The exception
EnumEmptyError is raised if the constructor is called with no
value arguments.
The values are bound to attributes of the new enumeration object:
>>> today = Weekdays.mon
The values can be compared:
>>> if today == Weekdays.fri:
... print "Get ready for the weekend"
Values within an enumeration cannot be meaningfully compared except
with values from the same enumeration. The comparison operation
functions return NotImplemented [1] when a
value from an enumeration is compared against any value not from the
same enumeration or of a different type:
>>> gym_night = Weekdays.wed
>>> gym_night.__cmp__(Weekdays.mon)
1
>>> gym_night.__cmp__(Weekdays.wed)
0
>>> gym_night.__cmp__(Weekdays.fri)
-1
>>> gym_night.__cmp__(23)
NotImplemented
>>> gym_night.__cmp__("wed")
NotImplemented
>>> gym_night.__cmp__(Grades.B)
NotImplemented
This allows the operation to succeed, evaluating to a boolean value:
>>> gym_night = Weekdays.wed
>>> gym_night < Weekdays.mon
False
>>> gym_night < Weekdays.wed
False
>>> gym_night < Weekdays.fri
True
>>> gym_night < 23
False
>>> gym_night > 23
True
>>> gym_night > "wed"
True
>>> gym_night > Grades.B
True
Coercing a value from an enumeration to a str results in the
string that was specified for that value when constructing the
enumeration:
>>> gym_night = Weekdays.wed
>>> str(gym_night)
'wed'
The sequence index of each value from an enumeration is exported as an
integer via that value’s index attribute:
>>> gym_night = Weekdays.wed
>>> gym_night.index
3
An enumeration can be iterated, returning its values in the sequence
they were specified when the enumeration was created:
>>> print [str(day) for day in Weekdays]
['sun', 'mon', 'tue', 'wed', 'thu', 'fri', 'sat']
Values from an enumeration are hashable, and can be used as dict
keys:
>>> plans = {}
>>> plans[Weekdays.sat] = "Feed the horse"
The normal usage of enumerations is to provide a set of possible
values for a data type, which can then be used to map to other
information about the values:
>>> for report_grade in Grades:
... report_students[report_grade] = \
...         [s for s in students if s.grade == report_grade]
Rationale – Other designs considered
All in one class
Some implementations have the enumeration and its values all as
attributes of a single object or class.
This PEP specifies a design where the enumeration is a container, and
the values are simple comparables. It was felt that attempting to
place all the properties of enumeration within a single class
complicates the design without apparent benefit.
Metaclass for creating enumeration classes
The enumerations specified in this PEP are instances of an enum
type. Some alternative designs implement each enumeration as its own
class, and a metaclass to define common properties of all
enumerations.
One motivation for having a class (rather than an instance) for each
enumeration is to allow subclasses of enumerations, extending and
altering an existing enumeration. A class, though, implies that
instances of that class will be created; it is difficult to imagine
what it means to have separate instances of a “days of the week”
class, where each instance contains all days. This usually leads to
having each class follow the Singleton pattern, further complicating
the design.
In contrast, this PEP specifies enumerations that are not expected to
be extended or modified. It is, of course, possible to create a new
enumeration from the string values of an existing one, or even
subclass the enum type if desired.
Values related to other types
Some designs express a strong relationship to some other value, such
as a particular integer or string, for each enumerated value.
This results in using such values in contexts where the enumeration
has no meaning, and unnecessarily complicates the design. The
enumerated values specified in this PEP export the values used to
create them, and can be compared for equality with any other value,
but sequence comparison with values outside the enumeration is
explicitly not implemented.
Hiding attributes of enumerated values
A previous design had the enumerated values hiding as much as possible
about their implementation, to the point of not exporting the string
key and sequence index.
The design in this PEP acknowledges that programs will often find it
convenient to know the enumerated value’s enumeration type, sequence
index, and string key specified for the value. These are exported by
the enumerated value as attributes.
Implementation
This design is based partly on a recipe [2] from the
Python Cookbook.
The PyPI package enum [3] provides a Python
implementation of the data types described in this PEP.
References and Footnotes
[1]
The NotImplemented return value from comparison operations
signals the Python interpreter to attempt alternative comparisons
or other fallbacks.
<http://docs.python.org/reference/datamodel.html#the-standard-type-hierarchy>
[2]
“First Class Enums in Python”, Zoran Isailovski,
Python Cookbook recipe 413486
<http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/413486>
[3]
Python Package Index, package enum
<http://cheeseshop.python.org/pypi/enum/>
Copyright
This document has been placed in the public domain.
| Superseded | PEP 354 – Enumerations in Python | Standards Track | This PEP specifies an enumeration data type for Python. |
PEP 355 – Path - Object oriented filesystem paths
Author:
Björn Lindqvist <bjourne at gmail.com>
Status:
Rejected
Type:
Standards Track
Created:
24-Jan-2006
Python-Version:
2.5
Post-History:
Table of Contents
Rejection Notice
Abstract
Background
Motivation
Rationale
Specification
Replacing older functions with the Path class
Deprecations
Closed Issues
Open Issues
Reference Implementation
Examples
References and Footnotes
Copyright
Rejection Notice
This PEP has been rejected (in this form). The proposed path class
is the ultimate kitchen sink; but the notion that it’s better to
implement all functionality that uses a path as a method on a single
class is an anti-pattern. (E.g. why not open()? Or execfile()?)
Subclassing from str is a particularly bad idea; many string
operations make no sense when applied to a path. This PEP has
lingered, and while the discussion flares up from time to time,
it’s time to put this PEP out of its misery. A less far-fetched
proposal might be more palatable.
Abstract
This PEP describes a new class, Path, to be added to the os
module, for handling paths in an object oriented fashion. The
“weak” deprecation of various related functions is also discussed
and recommended.
Background
The ideas expressed in this PEP are not recent, but have been
debated in the Python community for many years. Many have felt
that the API for manipulating file paths as offered in the os.path
module is inadequate. The first proposal for a Path object was
raised by Just van Rossum on python-dev in 2001 [2]. In 2003,
Jason Orendorff released version 1.0 of the “path module” which
was the first public implementation that used objects to represent
paths [3].
The path module quickly became very popular and numerous attempts
were made to get the path module included in the Python standard
library; [4], [5], [6], [7].
This PEP summarizes the ideas and suggestions people have
expressed about the path module and proposes that a modified
version should be included in the standard library.
Motivation
Dealing with filesystem paths is a common task in any programming
language, and very common in a high-level language like Python.
Good support for this task is needed, because:
Almost every program uses paths to access files. It makes sense
that a task, that is so often performed, should be as intuitive
and as easy to perform as possible.
It makes Python an even better replacement language for
over-complicated shell scripts.
Currently, Python has a large number of different functions
scattered over half a dozen modules for handling paths. This
makes it hard for newbies and experienced developers to choose
the right method.
The Path class provides the following enhancements over the
current common practice:
One “unified” object provides all functionality from previous
functions.
Subclassability - the Path object can be extended to support
paths other than filesystem paths. The programmer does not need
to learn a new API, but can reuse their knowledge of Path
to deal with the extended class.
With all related functionality in one place, the right approach
is easier to learn as one does not have to hunt through many
different modules for the right functions.
Python is an object oriented language. Just as files,
datetimes and sockets are objects, so are paths; they are not
merely strings to be passed to functions. Path objects are
inherently a Pythonic idea.
Path takes advantage of properties. Properties make for more
readable code:

if imgpath.ext == 'jpg':
    jpegdecode(imgpath)

Is better than:

if os.path.splitext(imgpath)[1] == 'jpg':
    jpegdecode(imgpath)
Rationale
The following points summarize the design:
Path extends from string, therefore all code which expects
string pathnames need not be modified and no existing code will
break.
A Path object can be created either by using the classmethod
Path.cwd, by instantiating the class with a string representing
a path or by using the default constructor which is equivalent
to Path(".").
Path provides common pathname manipulation, pattern expansion,
pattern matching and other high-level file operations including
copying. Basically Path provides everything path-related except
the manipulation of file contents, for which file objects are
better suited.
Platform incompatibilities are dealt with by not instantiating
system specific methods.
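To make the constructor behaviour described above concrete, here is a
small illustrative snippet (assuming the reference implementation is
importable as from path import Path, as in the examples later in this
PEP; the file names are invented):

from path import Path

p = Path()                        # equivalent to Path(".")
q = Path.cwd()                    # current working directory
r = Path(q, "src", "module.py")   # arguments are joined with os.path.join()
assert isinstance(r, str)         # Path extends str, so existing APIs accept it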
Specification
This class defines the following public interface (docstrings have
been extracted from the reference implementation, and shortened
for brevity; see the reference implementation for more detail):
class Path(str):
# Special Python methods:
def __new__(cls, *args) => Path
"""
Creates a new path object concatenating the *args. *args
may only contain Path objects or strings. If *args is
empty, Path(os.curdir) is created.
"""
def __repr__(self): ...
def __add__(self, more): ...
def __radd__(self, other): ...
# Alternative constructor.
def cwd(cls): ...
# Operations on path strings:
def abspath(self) => Path
"""Returns the absolute path of self as a new Path object."""
def normcase(self): ...
def normpath(self): ...
def realpath(self): ...
def expanduser(self): ...
def expandvars(self): ...
def basename(self): ...
def expand(self): ...
def splitpath(self) => (Path, str)
"""p.splitpath() -> Return (p.parent, p.name)."""
def stripext(self) => Path
"""p.stripext() -> Remove one file extension from the path."""
def splitunc(self): ... # See footnote [1]
def splitall(self): ...
def relpath(self): ...
def relpathto(self, dest): ...
# Properties about the path:
parent => Path
"""This Path's parent directory as a new path object."""
name => str
"""The name of this file or directory without the full path."""
ext => str
"""
The file extension or an empty string if Path refers to a
file without an extension or a directory.
"""
drive => str
"""
The drive specifier. Always empty on systems that don't
use drive specifiers.
"""
namebase => str
"""
The same as path.name, but with one file extension
stripped off.
"""
uncshare[1]
# Operations that return lists of paths:
def listdir(self, pattern = None): ...
def dirs(self, pattern = None): ...
def files(self, pattern = None): ...
def walk(self, pattern = None): ...
def walkdirs(self, pattern = None): ...
def walkfiles(self, pattern = None): ...
def match(self, pattern) => bool
"""Returns True if self.name matches the given pattern."""
def matchcase(self, pattern) => bool
"""
Like match() but is guaranteed to be case sensitive even
on platforms with case insensitive filesystems.
"""
def glob(self, pattern):
# Methods for retrieving information about the filesystem
# path:
def exists(self): ...
def isabs(self): ...
def isdir(self): ...
def isfile(self): ...
def islink(self): ...
def ismount(self): ...
def samefile(self, other): ... # See footnote [1]
def atime(self): ...
"""Last access time of the file."""
def mtime(self): ...
"""Last-modified time of the file."""
def ctime(self): ...
"""
Return the system's ctime which, on some systems (like
Unix) is the time of the last change, and, on others (like
Windows), is the creation time for path.
"""
def size(self): ...
def access(self, mode): ... # See footnote [1]
def stat(self): ...
def lstat(self): ...
def statvfs(self): ... # See footnote [1]
def pathconf(self, name): ... # See footnote [1]
# Methods for manipulating information about the filesystem
# path.
def utime(self, times) => None
def chmod(self, mode) => None
def chown(self, uid, gid) => None # See footnote [1]
def rename(self, new) => None
def renames(self, new) => None
# Create/delete operations on directories
def mkdir(self, mode = 0777): ...
def makedirs(self, mode = 0777): ...
def rmdir(self): ...
def removedirs(self): ...
# Modifying operations on files
def touch(self): ...
def remove(self): ...
def unlink(self): ...
# Modifying operations on links
def link(self, newpath): ...
def symlink(self, newlink): ...
def readlink(self): ...
def readlinkabs(self): ...
# High-level functions from shutil
def copyfile(self, dst): ...
def copymode(self, dst): ...
def copystat(self, dst): ...
def copy(self, dst): ...
def copy2(self, dst): ...
def copytree(self, dst, symlinks = True): ...
def move(self, dst): ...
def rmtree(self, ignore_errors = False, onerror = None): ...
# Special stuff from os
def chroot(self): ... # See footnote [1]
def startfile(self): ... # See footnote [1]
Replacing older functions with the Path class
In this section, “a ==> b” means that b can be used as a
replacement for a.
In the following examples, we assume that the Path class is
imported with from path import Path.
Replacing os.path.join:

os.path.join(os.getcwd(), "foobar")
==>
Path(Path.cwd(), "foobar")
os.path.join("foo", "bar", "baz")
==>
Path("foo", "bar", "baz")
Replacing os.path.splitext:

fname = "Python2.4.tar.gz"
os.path.splitext(fname)[1]
==>
fname = Path("Python2.4.tar.gz")
fname.ext
Or if you want both parts:
fname = "Python2.4.tar.gz"
base, ext = os.path.splitext(fname)
==>
fname = Path("Python2.4.tar.gz")
base, ext = fname.namebase, fname.ext
Replacing glob.glob:
lib_dir = "/lib"
libs = glob.glob(os.path.join(lib_dir, "*.so"))
==>
lib_dir = Path("/lib")
libs = lib_dir.files("*.so")
Deprecations
Introducing this module to the standard library introduces a need
for the “weak” deprecation of a number of existing modules and
functions. These modules and functions are so widely used that
they cannot be truly deprecated, as in generating
DeprecationWarning. Here “weak deprecation” means notes in the
documentation only.
The table below lists the existing functionality that should be
deprecated.
Path method/property      Deprecates function
normcase()                os.path.normcase()
normpath()                os.path.normpath()
realpath()                os.path.realpath()
expanduser()              os.path.expanduser()
expandvars()              os.path.expandvars()
parent                    os.path.dirname()
name                      os.path.basename()
splitpath()               os.path.split()
drive                     os.path.splitdrive()
ext                       os.path.splitext()
splitunc()                os.path.splitunc()
__new__()                 os.path.join(), os.curdir
listdir()                 os.listdir() [fnmatch.filter()]
match()                   fnmatch.fnmatch()
matchcase()               fnmatch.fnmatchcase()
glob()                    glob.glob()
exists()                  os.path.exists()
isabs()                   os.path.isabs()
isdir()                   os.path.isdir()
isfile()                  os.path.isfile()
islink()                  os.path.islink()
ismount()                 os.path.ismount()
samefile()                os.path.samefile()
atime()                   os.path.getatime()
ctime()                   os.path.getctime()
mtime()                   os.path.getmtime()
size()                    os.path.getsize()
cwd()                     os.getcwd()
access()                  os.access()
stat()                    os.stat()
lstat()                   os.lstat()
statvfs()                 os.statvfs()
pathconf()                os.pathconf()
utime()                   os.utime()
chmod()                   os.chmod()
chown()                   os.chown()
rename()                  os.rename()
renames()                 os.renames()
mkdir()                   os.mkdir()
makedirs()                os.makedirs()
rmdir()                   os.rmdir()
removedirs()              os.removedirs()
remove()                  os.remove()
unlink()                  os.unlink()
link()                    os.link()
symlink()                 os.symlink()
readlink()                os.readlink()
chroot()                  os.chroot()
startfile()               os.startfile()
copyfile()                shutil.copyfile()
copymode()                shutil.copymode()
copystat()                shutil.copystat()
copy()                    shutil.copy()
copy2()                   shutil.copy2()
copytree()                shutil.copytree()
move()                    shutil.move()
rmtree()                  shutil.rmtree()
The Path class deprecates the whole of os.path, shutil, fnmatch
and glob. A big chunk of os is also deprecated.
Closed Issues
A number of contentious issues have been resolved since this PEP
first appeared on python-dev:
The __div__() method was removed. Overloading the / (division)
operator may be “too much magic” and make path concatenation
appear to be division. The method can always be re-added later
if the BDFL so desires. In its place, __new__() got an *args
argument that accepts both Path and string objects. The *args
are concatenated with os.path.join() which is used to construct
the Path object. These changes obsoleted the problematic
joinpath() method, which was removed (see the sketch below).
The methods and the properties getatime()/atime,
getctime()/ctime, getmtime()/mtime and getsize()/size duplicated
each other. They have been merged into the methods atime(),
ctime(), mtime() and size(). They are methods rather than
properties because their values may change unexpectedly; the
following example is not guaranteed to always pass the assertion:
p = Path("foobar")
s = p.size()
assert p.size() == s
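A minimal sketch of the __new__() *args behaviour described in the
first item above (illustrative only, using a toy str-based Path;
this is not the reference implementation):
import os

class Path(str):
    def __new__(cls, *args):
        # No arguments means the current directory, per the table above.
        if not args:
            args = (os.curdir,)
        # *args are concatenated with os.path.join() to build the value.
        return str.__new__(cls, os.path.join(*[str(a) for a in args]))

print(Path("foo", "bar", "baz"))   # foo/bar/baz on POSIX systems
print(Path(Path("foo"), "bar"))    # Path and str arguments mix freely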
Open Issues
Some functionality of Jason Orendorff’s path module has been
omitted:
Function for opening a path - better handled by the builtin
open().
Functions for reading and writing whole files - better handled
by file objects’ own read() and write() methods.
A chdir() function may be a worthy inclusion.
A deprecation schedule needs to be set up. How much
functionality should Path implement? How much of existing
functionality should it deprecate and when?
The name obviously has to be either “path” or “Path,” but where
should it live? In its own module or in os?
Because Path subclasses either str or unicode, the following
non-magic, public methods are available on Path objects:
capitalize(), center(), count(), decode(), encode(),
endswith(), expandtabs(), find(), index(), isalnum(),
isalpha(), isdigit(), islower(), isspace(), istitle(),
isupper(), join(), ljust(), lower(), lstrip(), replace(),
rfind(), rindex(), rjust(), rsplit(), rstrip(), split(),
splitlines(), startswith(), strip(), swapcase(), title(),
translate(), upper(), zfill()
On python-dev it has been debated whether this inheritance is
sane or not. Most participants argued that most string
methods don’t make sense in the context of filesystem paths –
they are just dead weight. The other position, also argued on
python-dev, is that inheriting from string is very convenient
because it allows code to “just work” with Path objects without
having to be adapted for them.
One of the problems is that at the Python level, there is no way
to make an object “string-like enough,” so that it can be passed
to the builtin function open() (and other builtins expecting a
string or buffer), unless the object inherits from either str or
unicode. Therefore, not inheriting from string would require
changes in CPython’s core (see the sketch below).
The functions and modules that this new module is trying to
replace (os.path, shutil, fnmatch, glob and parts of os) are
expected to be available in future Python versions for a long
time, to preserve backwards compatibility.
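The inheritance trade-off discussed above can be illustrated with a
small sketch (a toy str-based Path and a temporary file are assumed;
this is not the reference implementation):
import os
import tempfile

class Path(str):
    """Toy illustration only."""
    @property
    def ext(self):
        return os.path.splitext(self)[1]

p = Path(os.path.join(tempfile.gettempdir(), "pep355-demo.txt"))
# Because Path subclasses str, builtins expecting a string accept it:
f = open(p, "w")
f.write("hello\n")
f.close()
# ...but every string method comes along as well, sensible or not:
print(p.upper())
print(p.ext)      # ".txt"
os.remove(p)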
Reference Implementation
Currently, the Path class is implemented as a thin wrapper around
the standard library modules fnmatch, glob, os, os.path and
shutil. The intention of this PEP is to move functionality from
the aforementioned modules to Path while they are being
deprecated.
For more detail and an implementation see:
http://wiki.python.org/moin/PathModule
Examples
In this section, “a ==> b” means that b can be used as a
replacement for a.
Make all python files in a directory executable:
DIR = '/usr/home/guido/bin'
for f in os.listdir(DIR):
    if f.endswith('.py'):
        path = os.path.join(DIR, f)
        os.chmod(path, 0755)
==>
for f in Path('/usr/home/guido/bin').files("*.py"):
    f.chmod(0755)
Delete emacs backup files:
def delete_backups(arg, dirname, names):
    for name in names:
        if name.endswith('~'):
            os.remove(os.path.join(dirname, name))
os.path.walk(os.environ['HOME'], delete_backups, None)
==>
d = Path(os.environ['HOME'])
for f in d.walkfiles('*~'):
    f.remove()
Finding the relative path to a file:
b = Path('/users/peter/')
a = Path('/users/peter/synergy/tiki.txt')
a.relpathto(b)
Splitting a path into directory and filename:
os.path.split("/path/to/foo/bar.txt")
==>
Path("/path/to/foo/bar.txt").splitpath()
List all Python scripts in the current directory tree:
list(Path().walkfiles("*.py"))
References and Footnotes
[1] Method is not guaranteed to be available on all platforms.
[2]
“(idea) subclassable string: path object?”, van Rossum, 2001
https://mail.python.org/pipermail/python-dev/2001-August/016663.html
[3]
“path module v1.0 released”, Orendorff, 2003
https://mail.python.org/pipermail/python-announce-list/2003-January/001984.html
[4]
“Some RFE for review”, Birkenfeld, 2005
https://mail.python.org/pipermail/python-dev/2005-June/054438.html
[5]
“path module”, Orendorff, 2003
https://mail.python.org/pipermail/python-list/2003-July/174289.html
[6]
“PRE-PEP: new Path class”, Roth, 2004
https://mail.python.org/pipermail/python-list/2004-January/201672.html
[7]
http://wiki.python.org/moin/PathClass
Copyright
This document has been placed in the public domain.
| Rejected | PEP 355 – Path - Object oriented filesystem paths | Standards Track | This PEP describes a new class, Path, to be added to the os
module, for handling paths in an object oriented fashion. The
“weak” deprecation of various related functions is also discussed
and recommended. |
PEP 356 – Python 2.5 Release Schedule
Author:
Neal Norwitz, Guido van Rossum, Anthony Baxter
Status:
Final
Type:
Informational
Topic:
Release
Created:
07-Feb-2006
Python-Version:
2.5
Post-History:
Table of Contents
Abstract
Release Manager
Release Schedule
Completed features for 2.5
Possible features for 2.5
Deferred until 2.6
Open issues
References
Copyright
Abstract
This document describes the development and release schedule for
Python 2.5. The schedule primarily concerns itself with PEP-sized
items. Small features may be added up to and including the first
beta release. Bugs may be fixed until the final release.
There will be at least two alpha releases, two beta releases, and
one release candidate. The release date is planned for
12 September 2006.
Release Manager
Anthony Baxter has volunteered to be Release Manager.
Martin von Loewis is building the Windows installers,
Ronald Oussoren is building the Mac installers,
Fred Drake the doc packages and
Sean Reifschneider the RPMs.
Release Schedule
alpha 1: April 5, 2006 [completed]
alpha 2: April 27, 2006 [completed]
beta 1: June 20, 2006 [completed]
beta 2: July 11, 2006 [completed]
beta 3: August 3, 2006 [completed]
rc 1: August 17, 2006 [completed]
rc 2: September 12, 2006 [completed]
final: September 19, 2006 [completed]
Completed features for 2.5
PEP 308: Conditional Expressions
PEP 309: Partial Function Application
PEP 314: Metadata for Python Software Packages v1.1
PEP 328: Absolute/Relative Imports
PEP 338: Executing Modules as Scripts
PEP 341: Unified try-except/try-finally to try-except-finally
PEP 342: Coroutines via Enhanced Generators
PEP 343: The “with” Statement (still need updates in Doc/ref and for the
contextlib module)
PEP 352: Required Superclass for Exceptions
PEP 353: Using ssize_t as the index type
PEP 357: Allowing Any Object to be Used for Slicing
ASCII became the default coding
AST-based compiler
Access to C AST from Python through new _ast module
any()/all() builtin truth functions
New standard library modules:
cProfile – suitable for profiling long running applications
with minimal overhead
ctypes – optional component of the windows installer
ElementTree and cElementTree – by Fredrik Lundh
hashlib – adds support for SHA-224, -256, -384, and -512
(replaces old md5 and sha modules)
msilib – for creating MSI files and bdist_msi in distutils.
pysqlite
uuid
wsgiref
Other notable features:
Added support for reading shadow passwords [1]
Added support for the Unicode 4.1 UCD
Added PEP 302 zipfile/__loader__ support to the following modules:
warnings, linecache, inspect, traceback, site, and
doctest
Added pybench Python benchmark suite – by Marc-Andre Lemburg
Add write support for mailboxes from the code in sandbox/mailbox.
(Owner: A.M. Kuchling. It would still be good if another person
would take a look at the new code.)
Support for building “fat” Mac binaries (Intel and PPC)
Add new icons for Windows with the new Python logo?
New utilities in functools to help write wrapper functions that
support naive introspection (e.g. having f.__name__ return
the original function name).
Upgrade pyexpat to use expat 2.0.
Python core now compiles cleanly with g++
Possible features for 2.5
Each feature below should be implemented prior to beta1 or
will require BDFL approval for inclusion in 2.5.
Modules under consideration for inclusion:
Add new icons for MacOS and Unix with the new Python logo?
(Owner: ???)
MacOS: http://hcs.harvard.edu/~jrus/python/prettified-py-icons.png
Check the various bits of code in Demo/ all still work, update or
remove the ones that don’t.
(Owner: Anthony)
All modules in Modules/ should be updated to be ssize_t clean.
(Owner: Neal)
Deferred until 2.6
bdist_deb in distutils package [2]
bdist_egg in distutils package
pure python pgen module
(Owner: Guido)
Remove the fpectl module?
Make everything in Modules/ build cleanly with g++
Open issues
Bugs that need resolving before release (i.e., they block release): None
Bugs deferred until 2.5.1 (or later):
https://bugs.python.org/issue1544279 - Socket module is not thread-safe
https://bugs.python.org/issue1541420 - tools and demo missing from windows
https://bugs.python.org/issue1542451 - crash with continue in nested try/finally
https://bugs.python.org/issue1475523 - gettext.py bug (owner: Martin v. Loewis)
https://bugs.python.org/issue1467929 - %-formatting and dicts
https://bugs.python.org/issue1446043 - unicode() does not raise LookupError
The PEP 302 changes to (at least) pkgutil, runpy and pydoc must
be documented.
test_zipfile64 takes too long and too much disk space for
most of the buildbots. How should this be handled?
It is currently disabled.
Should C modules listed in “Undocumented modules” be removed too?
“timing” (listed as obsolete), “cl” (listed as possibly not up-to-date),
and “sv” (listed as obsolete hardware specific).
References
[1]
Shadow Password Support Module
https://bugs.python.org/issue579435
[2]
Joe Smith, bdist_* to stdlib?
https://mail.python.org/pipermail/python-dev/2006-February/060926.html
Copyright
This document has been placed in the public domain.
| Final | PEP 356 – Python 2.5 Release Schedule | Informational | This document describes the development and release schedule for
Python 2.5. The schedule primarily concerns itself with PEP-sized
items. Small features may be added up to and including the first
beta release. Bugs may be fixed until the final release. |
PEP 357 – Allowing Any Object to be Used for Slicing
Author:
Travis Oliphant <oliphant at ee.byu.edu>
Status:
Final
Type:
Standards Track
Created:
09-Feb-2006
Python-Version:
2.5
Post-History:
Table of Contents
Abstract
Rationale
Proposal
Specification
Implementation Plan
Discussion Questions
Speed
Why not use nb_int which is already there?
Why the name __index__?
Why return PyObject * from nb_index?
Why can’t __index__ return any object with the nb_index method?
Reference Implementation
References
Copyright
Abstract
This PEP proposes adding an nb_index slot in PyNumberMethods and an
__index__ special method so that arbitrary objects can be used
whenever integers are explicitly needed in Python, such as in slice
syntax (from which the slot gets its name).
Rationale
Currently integers and long integers play a special role in
slicing in that they are the only objects allowed in slice
syntax. In other words, if X is an object implementing the
sequence protocol, then X[obj1:obj2] is only valid if obj1 and
obj2 are both integers or long integers. There is no way for obj1
and obj2 to tell Python that they could be reasonably used as
indexes into a sequence. This is an unnecessary limitation.
In NumPy, for example, there are 8 different integer scalars
corresponding to unsigned and signed integers of 8, 16, 32, and 64
bits. These type-objects could reasonably be used as integers in
many places where Python expects true integers but cannot inherit from
the Python integer type because of incompatible memory layouts.
There should be some way to be able to tell Python that an object can
behave like an integer.
It is not possible to use the nb_int (and __int__ special method)
for this purpose because that method is used to coerce objects
to integers. It would be inappropriate to allow every object that
can be coerced to an integer to be used as an integer everywhere
Python expects a true integer. For example, if __int__ were used
to convert an object to an integer in slicing, then float objects
would be allowed in slicing and x[3.2:5.8] would not raise an error
as it should.
Proposal
Add an nb_index slot to PyNumberMethods, and a corresponding
__index__ special method. Objects could define a function to
place in the nb_index slot that returns a Python integer
(either an int or a long). This integer can
then be appropriately converted to a Py_ssize_t value whenever
Python needs one such as in PySequence_GetSlice,
PySequence_SetSlice, and PySequence_DelSlice.
Specification
The nb_index slot will have the following signature:
PyObject *index_func (PyObject *self)
The returned object must be a Python IntType or
Python LongType. NULL should be returned on
error with an appropriate error set.
The __index__ special method will have the signature:
def __index__(self):
    return obj
where obj must be either an int or a long.
Three new abstract C-API functions will be added.
The first checks to see if the object supports the index
slot and if it is filled in:
int PyIndex_Check(obj)
This will return true if the object defines the nb_index
slot.
The second is a simple wrapper around the nb_index call that
raises PyExc_TypeError if the call is not available or if it
doesn’t return an int or long. Because the
PyIndex_Check is performed inside the PyNumber_Index call
you can call it directly and manage any error rather than
check for compatibility first:
PyObject *PyNumber_Index (PyObject *obj)
The third call helps deal with the common situation of
actually needing a Py_ssize_t value from the object to use for
indexing or other needs:
Py_ssize_t PyNumber_AsSsize_t(PyObject *obj, PyObject *exc)
The function calls the nb_index slot of obj if it is
available and then converts the returned Python integer into
a Py_ssize_t value. If this goes well, then the value is
returned. The second argument allows control over what
happens if the integer returned from nb_index cannot fit
into a Py_ssize_t value.
If exc is NULL, then the returned value will be clipped to
PY_SSIZE_T_MAX or PY_SSIZE_T_MIN depending on whether the
nb_index slot of obj returned a positive or negative
integer. If exc is non-NULL, then it is the error object
that will be set to replace the PyExc_OverflowError that was
raised when the Python integer or long was converted to Py_ssize_t.
A new operator.index(obj) function will be added that calls the
equivalent of obj.__index__() and raises an error if obj does not implement
the special method.
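The behaviour specified above can be illustrated at the Python level
with a small sketch (MyIndex is a hypothetical class used only for
demonstration):
import operator

class MyIndex(object):
    def __init__(self, value):
        self.value = value
    def __index__(self):
        return self.value        # must return a true int (or long in 2.x)

seq = list(range(10))
print(seq[MyIndex(2):MyIndex(5)])    # [2, 3, 4]
print(seq[MyIndex(7)])               # 7
print(operator.index(MyIndex(3)))    # 3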
Implementation Plan
Add the nb_index slot in object.h and modify typeobject.c to
create the __index__ method
Change the ISINT macro in ceval.c to ISINDEX and alter it to
accommodate objects with the index slot defined.
Change the _PyEval_SliceIndex function to accommodate objects
with the index slot defined.
Change all builtin objects (e.g. lists) that use the as_mapping
slots for subscript access and use a special-check for integers to
check for the slot as well.
Add the nb_index slot to integers and long_integers
(which just return themselves)
Add PyNumber_Index C-API to return an integer from any
Python Object that has the nb_index slot.
Add the operator.index(x) function.
Alter arrayobject.c and mmapmodule.c to use the new C-API for their
sub-scripting and other needs.
Add unit-tests
Discussion Questions
Speed
Implementation should not slow down Python because integers and long
integers used as indexes will complete in the same number of
instructions. The only change will be that what used to generate
an error will now be acceptable.
Why not use nb_int which is already there?
The nb_int method is used for coercion and so means something
fundamentally different than what is requested here. This PEP
proposes a method for something that can already be thought of as
an integer to communicate that information to Python when it needs an
integer. The biggest example of why using nb_int would be a bad
thing is that float objects already define the nb_int method, but
float objects should not be used as indexes in a sequence.
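For illustration, float already supports coercion via int() (nb_int)
but is still rejected as a sequence index, which is exactly the
distinction drawn above:
print(int(3.2))            # 3 -- nb_int / __int__ coercion
x = [0, 1, 2, 3]
try:
    x[3.2:5.8]
except TypeError:
    print("floats are not valid slice indexes")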
Why the name __index__?
Some questions were raised regarding the name __index__ when other
interpretations of the slot are possible. For example, the slot
can be used any time Python requires an integer internally (such
as in "mystring" * 3). The name was suggested by Guido because
slicing syntax is the biggest reason for having such a slot and
in the end no better name emerged. See the discussion thread [1]
for examples of names that were suggested such as “__discrete__” and
“__ordinal__”.
Why return PyObject * from nb_index?
Initially Py_ssize_t was selected as the return type for the
nb_index slot. However, this led to an inability to track and
distinguish overflow and underflow errors without ugly and brittle
hacks. As the nb_index slot is used in at least 3 different ways
in the Python core (to get an integer, to get a slice end-point,
and to get a sequence index), there is quite a bit of flexibility
needed to handle all these cases. Having the necessary flexibility
to cover all of these use cases is critical.
For example, the initial implementation that returned Py_ssize_t for
nb_index led to the discovery that on a 32-bit machine with >=2GB of RAM
s = 'x' * (2**100) works but len(s) was clipped at 2147483647.
Several fixes were suggested but eventually it was decided that
nb_index needed to return a Python Object similar to the nb_int
and nb_long slots in order to handle overflow correctly.
Why can’t __index__ return any object with the nb_index method?
This would allow infinite recursion in many different ways that are not
easy to check for. This restriction is similar to the requirement that
__nonzero__ return an int or a bool.
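A short illustration of this restriction (Inner and Outer are
hypothetical classes): an __index__ that returns another indexable
object is rejected rather than unwrapped recursively:
class Inner(object):
    def __index__(self):
        return 1

class Outer(object):
    def __index__(self):
        return Inner()       # not an int, so it is rejected

try:
    print([10, 20, 30][Outer()])
except TypeError:
    print("__index__ must return an int")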
Reference Implementation
Submitted as patch 1436368 to SourceForge.
References
[1]
Travis Oliphant, PEP for adding an sq_index slot so that any object, a
or b, can be used in X[a:b] notation,https://mail.python.org/pipermail/python-dev/2006-February/thread.html#60594
Copyright
This document is placed in the public domain.
| Final | PEP 357 – Allowing Any Object to be Used for Slicing | Standards Track | This PEP proposes adding an nb_index slot in PyNumberMethods and an
__index__ special method so that arbitrary objects can be used
whenever integers are explicitly needed in Python, such as in slice
syntax (from which the slot gets its name). |
PEP 358 – The “bytes” Object
Author:
Neil Schemenauer <nas at arctrix.com>, Guido van Rossum <guido at python.org>
Status:
Final
Type:
Standards Track
Created:
15-Feb-2006
Python-Version:
2.6, 3.0
Post-History:
Table of Contents
Update
Abstract
Motivation
Specification
Out of Scope Issues
Open Issues
Frequently Asked Questions
Copyright
Update
This PEP has partially been superseded by PEP 3137.
Abstract
This PEP outlines the introduction of a raw bytes sequence type.
Adding the bytes type is one step in the transition to
Unicode-based str objects which will be introduced in Python 3.0.
The PEP describes how the bytes type should work in Python 2.6, as
well as how it should work in Python 3.0. (Occasionally there are
differences because in Python 2.6, we have two string types, str
and unicode, while in Python 3.0 we will only have one string
type, whose name will be str but whose semantics will be like the
2.6 unicode type.)
Motivation
Python’s current string objects are overloaded. They serve to hold
both sequences of characters and sequences of bytes. This
overloading of purpose leads to confusion and bugs. In future
versions of Python, string objects will be used for holding
character data. The bytes object will fulfil the role of a byte
container. Eventually the unicode type will be renamed to str
and the old str type will be removed.
Specification
A bytes object stores a mutable sequence of integers that are in
the range 0 to 255. Unlike string objects, indexing a bytes
object returns an integer. Assigning or comparing an object that
is not an integer to an element causes a TypeError exception.
Assigning an element to a value outside the range 0 to 255 causes
a ValueError exception. The .__len__() method of bytes returns
the number of integers stored in the sequence (i.e. the number of
bytes).
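These semantics are close to what later shipped as the built-in
bytearray type; the following illustration therefore uses bytearray
as a stand-in for the proposed object (an assumption made purely for
demonstration):
b = bytearray([10, 20, 30])
print(b[0])          # 10 -- indexing returns an integer
b[0] = 255           # assigning an integer in range 0..255 is fine
try:
    b[0] = 256       # out of range 0..255
except ValueError as exc:
    print(exc)
try:
    b[0] = "a"       # not an integer
except TypeError as exc:
    print(exc)
print(len(b))        # number of bytes stored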
The constructor of the bytes object has the following signature:
bytes([initializer[, encoding]])
If no arguments are provided then a bytes object containing zero
elements is created and returned. The initializer argument can be
a string (in 2.6, either str or unicode), an iterable of integers,
or a single integer. The pseudo-code for the constructor
(optimized for clear semantics, not for speed) is:
def bytes(initializer=0, encoding=None):
    if isinstance(initializer, int): # In 2.6, int -> (int, long)
        initializer = [0]*initializer
    elif isinstance(initializer, basestring):
        if isinstance(initializer, unicode): # In 3.0, "if True"
            if encoding is None:
                # In 3.0, raise TypeError("explicit encoding required")
                encoding = sys.getdefaultencoding()
            initializer = initializer.encode(encoding)
        initializer = [ord(c) for c in initializer]
    else:
        if encoding is not None:
            raise TypeError("no encoding allowed for this initializer")
        tmp = []
        for c in initializer:
            if not isinstance(c, int):
                raise TypeError("initializer must be iterable of ints")
            if not 0 <= c < 256:
                raise ValueError("initializer element out of range")
            tmp.append(c)
        initializer = tmp
    new = <new bytes object of length len(initializer)>
    for i, c in enumerate(initializer):
        new[i] = c
    return new
The .__repr__() method returns a string that can be evaluated to
generate a new bytes object containing a bytes literal:
>>> bytes([10, 20, 30])
b'\n\x14\x1e'
The object has a .decode() method equivalent to the .decode()
method of the str object. The object has a classmethod .fromhex()
that takes a string of characters from the set [0-9a-fA-F ] and
returns a bytes object (similar to binascii.unhexlify). For
example:
>>> bytes.fromhex('5c5350ff')
b'\\SP\xff'
>>> bytes.fromhex('5c 53 50 ff')
b'\\SP\xff'
The object has a .hex() method that does the reverse conversion
(similar to binascii.hexlify):
>>> bytes([92, 83, 80, 255]).hex()
'5c5350ff'
The bytes object has some methods similar to list methods, and
others similar to str methods. Here is a complete list of
methods, with their approximate signatures:
.__add__(bytes) -> bytes
.__contains__(int | bytes) -> bool
.__delitem__(int | slice) -> None
.__delslice__(int, int) -> None
.__eq__(bytes) -> bool
.__ge__(bytes) -> bool
.__getitem__(int | slice) -> int | bytes
.__getslice__(int, int) -> bytes
.__gt__(bytes) -> bool
.__iadd__(bytes) -> bytes
.__imul__(int) -> bytes
.__iter__() -> iterator
.__le__(bytes) -> bool
.__len__() -> int
.__lt__(bytes) -> bool
.__mul__(int) -> bytes
.__ne__(bytes) -> bool
.__reduce__(...) -> ...
.__reduce_ex__(...) -> ...
.__repr__() -> str
.__reversed__() -> bytes
.__rmul__(int) -> bytes
.__setitem__(int | slice, int | iterable[int]) -> None
.__setslice__(int, int, iterable[int]) -> None
.append(int) -> None
.count(int) -> int
.decode(str) -> str | unicode # in 3.0, only str
.endswith(bytes) -> bool
.extend(iterable[int]) -> None
.find(bytes) -> int
.index(bytes | int) -> int
.insert(int, int) -> None
.join(iterable[bytes]) -> bytes
.partition(bytes) -> (bytes, bytes, bytes)
.pop([int]) -> int
.remove(int) -> None
.replace(bytes, bytes) -> bytes
.rindex(bytes | int) -> int
.rpartition(bytes) -> (bytes, bytes, bytes)
.split(bytes) -> list[bytes]
.startswith(bytes) -> bool
.reverse() -> None
.rfind(bytes) -> int
.rindex(bytes | int) -> int
.rsplit(bytes) -> list[bytes]
.translate(bytes, [bytes]) -> bytes
Note the conspicuous absence of .isupper(), .upper(), and friends.
(But see “Open Issues” below.) There is no .__hash__() because
the object is mutable. There is no use case for a .sort() method.
The bytes type also supports the buffer interface, supporting
reading and writing binary (but not character) data.
Out of Scope Issues
Python 3k will have a much different I/O subsystem. Deciding
how that I/O subsystem will work and interact with the bytes
object is out of the scope of this PEP. The expectation however
is that binary I/O will read and write bytes, while text I/O
will read strings. Since the bytes type supports the buffer
interface, the existing binary I/O operations in Python 2.6 will
support bytes objects.
It has been suggested that a special method named .__bytes__()
be added to the language to allow objects to be converted into
byte arrays. This decision is out of scope.
A bytes literal of the form b"..." is also proposed. This is
the subject of PEP 3112.
Open Issues
The .decode() method is redundant since a bytes object b can
also be decoded by calling unicode(b, <encoding>) (in 2.6) or
str(b, <encoding>) (in 3.0). Do we need encode/decode methods
at all? In a sense the spelling using a constructor is cleaner.
Need to specify the methods still more carefully.
Pickling and marshalling support need to be specified.
Should all those list methods really be implemented?
A case could be made for supporting .ljust(), .rjust(),
.center() with a mandatory second argument.
A case could be made for supporting .split() with a mandatory
argument.
A case could even be made for supporting .islower(), .isupper(),
.isspace(), .isalpha(), .isalnum(), .isdigit() and the
corresponding conversions (.lower() etc.), using the ASCII
definitions for letters, digits and whitespace. If this is
accepted, the cases for .ljust(), .rjust(), .center() and
.split() become much stronger, and they should have default
arguments as well, using an ASCII space or all ASCII whitespace
(for .split()).
Frequently Asked Questions
Q: Why have the optional encoding argument when the encode method of
Unicode objects does the same thing?
A: In the current version of Python, the encode method returns a str
object and we cannot change that without breaking code. The
construct bytes(s.encode(...)) is expensive because it has to
copy the byte sequence multiple times. Also, Python generally
provides two ways of converting an object of type A into an
object of type B: ask an A instance to convert itself to a B, or
ask the type B to create a new instance from an A. Depending on
what A and B are, both APIs make sense; sometimes reasons of
decoupling require that A can’t know about B, in which case you
have to use the latter approach; sometimes B can’t know about A,
in which case you have to use the former.
Q: Why does bytes ignore the encoding argument if the initializer is
a str? (This only applies to 2.6.)
A: There is no sane meaning that the encoding can have in that case.
str objects are byte arrays and they know nothing about the
encoding of character data they contain. We need to assume that
the programmer has provided a str object that already uses the
desired encoding. If you need something other than a pure copy of
the bytes then you need to first decode the string. For example:
bytes(s.decode(encoding1), encoding2)
Q: Why not have the encoding argument default to Latin-1 (or some
other encoding that covers the entire byte range) rather than
ASCII?
A: The system default encoding for Python is ASCII. It seems least
confusing to use that default. Also, in Py3k, using Latin-1 as
the default might not be what users expect. For example, they
might prefer a Unicode encoding. Any default will not always
work as expected. At least ASCII will complain loudly if you try
to encode non-ASCII data.
Copyright
This document has been placed in the public domain.
| Final | PEP 358 – The “bytes” Object | Standards Track | This PEP outlines the introduction of a raw bytes sequence type.
Adding the bytes type is one step in the transition to
Unicode-based str objects which will be introduced in Python 3.0. |
PEP 359 – The “make” Statement
Author:
Steven Bethard <steven.bethard at gmail.com>
Status:
Withdrawn
Type:
Standards Track
Created:
05-Apr-2006
Python-Version:
2.6
Post-History:
05-Apr-2006, 06-Apr-2006, 13-Apr-2006
Table of Contents
Abstract
Withdrawal Notice
Motivation
Example: simple namespaces
Example: GUI objects
Example: custom descriptors
Example: property namespaces
Example: interfaces
Specification
Open Issues
Keyword
The make-statement as an alternate constructor
Customizing the dict in which the block is executed
Optional Extensions
Remove the make keyword
Removing __metaclass__ in Python 3000
Removing class statements in Python 3000
References
Copyright
Abstract
This PEP proposes a generalization of the class-declaration syntax,
the make statement. The proposed syntax and semantics parallel
the syntax for class definition, and so:
make <callable> <name> <tuple>:
<block>
is translated into the assignment:
<name> = <callable>("<name>", <tuple>, <namespace>)
where <namespace> is the dict created by executing <block>.
This is mostly syntactic sugar for:
class <name> <tuple>:
__metaclass__ = <callable>
<block>
and is intended to help more clearly express the intent of the
statement when something other than a class is being created. Of
course, other syntax for such a statement is possible, but it is hoped
that by keeping a strong parallel to the class statement, an
understanding of how classes and metaclasses work will translate into
an understanding of how the make-statement works as well.
The PEP is based on a suggestion [1] from Michele Simionato on the
python-dev list.
Withdrawal Notice
This PEP was withdrawn at Guido’s request [2]. Guido didn’t like it,
and in particular didn’t like how the property use-case puts the
instance methods of a property at a different level than other
instance methods and requires fixed names for the property functions.
Motivation
Class statements provide two nice facilities to Python:
They execute a block of statements and provide the resulting
bindings as a dict to the metaclass.
They encourage DRY (don’t repeat yourself) by allowing the class
being created to know the name it is being assigned.
Thus in a simple class statement like:
class C(object):
x = 1
def foo(self):
return 'bar'
the metaclass (type) gets called with something like:
C = type('C', (object,), {'x':1, 'foo':<function foo at ...>})
The class statement is just syntactic sugar for the above assignment
statement, but clearly a very useful sort of syntactic sugar. It
avoids not only the repetition of C, but also simplifies the
creation of the dict by allowing it to be expressed as a series of
statements.
Historically, type instances (a.k.a. class objects) have been the
only objects blessed with this sort of syntactic support. The make
statement aims to extend this support to other sorts of objects where
such syntax would also be useful.
Example: simple namespaces
Let’s say I have some attributes in a module that I access like:
mod.thematic_roletype
mod.opinion_roletype
mod.text_format
mod.html_format
and since “Namespaces are one honking great idea”, I’d like to be able
to access these attributes instead as:
mod.roletypes.thematic
mod.roletypes.opinion
mod.format.text
mod.format.html
I currently have two main options:
Turn the module into a package, turn roletypes and format
into submodules, and move the attributes to the submodules.
Create roletypes and format classes, and move the
attributes to the classes.
The former is a fair chunk of refactoring work, and produces two tiny
modules without much content. The latter keeps the attributes local
to the module, but creates classes when there is no intention of ever
creating instances of those classes.
In situations like this, it would be nice to simply be able to declare
a “namespace” to hold the few attributes. With the new make
statement, I could introduce my new namespaces with something like:
make namespace roletypes:
thematic = ...
opinion = ...
make namespace format:
text = ...
html = ...
and keep my attributes local to the module without making classes that
are never intended to be instantiated. One definition of namespace
that would make this work is:
class namespace(object):
def __init__(self, name, args, kwargs):
self.__dict__.update(kwargs)
Given this definition, at the end of the make-statements above,
roletypes and format would be namespace instances.
Example: GUI objects
In GUI toolkits, objects like frames and panels are often associated
with attributes and functions. With the make-statement, code that
looks something like:
root = Tkinter.Tk()
frame = Tkinter.Frame(root)
frame.pack()
def say_hi():
print "hi there, everyone!"
hi_there = Tkinter.Button(frame, text="Hello", command=say_hi)
hi_there.pack(side=Tkinter.LEFT)
root.mainloop()
could be rewritten to group the Button’s function with its
declaration:
root = Tkinter.Tk()
frame = Tkinter.Frame(root)
frame.pack()
make Tkinter.Button hi_there(frame):
text = "Hello"
def command():
print "hi there, everyone!"
hi_there.pack(side=Tkinter.LEFT)
root.mainloop()
Example: custom descriptors
Since descriptors are used to customize access to an attribute, it’s
often useful to know the name of that attribute. Current Python
doesn’t give an easy way to find this name and so a lot of custom
descriptors, like Ian Bicking’s setonce descriptor [3], have to hack
around this somehow. With the make-statement, you could create a
setonce attribute like:
class A(object):
...
make setonce x:
"A's x attribute"
...
where the setonce descriptor would be defined like:
class setonce(object):
def __init__(self, name, args, kwargs):
self._name = '_setonce_attr_%s' % name
self.__doc__ = kwargs.pop('__doc__', None)
def __get__(self, obj, type=None):
if obj is None:
return self
return getattr(obj, self._name)
def __set__(self, obj, value):
try:
getattr(obj, self._name)
except AttributeError:
setattr(obj, self._name, value)
else:
raise AttributeError("Attribute already set")
def set(self, obj, value):
setattr(obj, self._name, value)
def __delete__(self, obj):
delattr(obj, self._name)
Note that unlike the original implementation, the private attribute
name is stable since it uses the name of the descriptor, and therefore
instances of class A are pickleable.
Example: property namespaces
Python’s property type takes three function arguments and a docstring
argument which, though relevant only to the property, must be declared
before it and then passed as arguments to the property call, e.g.:
class C(object):
...
def get_x(self):
...
def set_x(self):
...
x = property(get_x, set_x, "the x of the frobulation")
This issue has been brought up before, and Guido [4] and others [5]
have briefly mused over alternate property syntaxes to make declaring
properties easier. With the make-statement, the following syntax
could be supported:
class C(object):
...
make block_property x:
'''The x of the frobulation'''
def fget(self):
...
def fset(self):
...
with the following definition of block_property:
def block_property(name, args, block_dict):
fget = block_dict.pop('fget', None)
fset = block_dict.pop('fset', None)
fdel = block_dict.pop('fdel', None)
doc = block_dict.pop('__doc__', None)
assert not block_dict
return property(fget, fset, fdel, doc)
Example: interfaces
Guido [6] and others have occasionally suggested introducing
interfaces into python. Most suggestions have offered syntax along
the lines of:
interface IFoo:
"""Foo blah blah"""
def fumble(name, count):
"""docstring"""
but since there is currently no way in Python to declare an interface
in this manner, most implementations of Python interfaces use class
objects instead, e.g. Zope’s:
class IFoo(Interface):
"""Foo blah blah"""
def fumble(name, count):
"""docstring"""
With the new make-statement, these interfaces could instead be
declared as:
make Interface IFoo:
"""Foo blah blah"""
def fumble(name, count):
"""docstring"""
which makes the intent (that this is an interface, not a class) much
clearer.
Specification
Python will translate a make-statement:
make <callable> <name> <tuple>:
<block>
into the assignment:
<name> = <callable>("<name>", <tuple>, <namespace>)
where <namespace> is the dict created by executing <block>.
The <tuple> expression is optional; if not present, an empty tuple
will be assumed.
A patch is available implementing these semantics [7].
The make-statement introduces a new keyword, make. Thus in Python
2.6, the make-statement will have to be enabled using from
__future__ import make_statement.
Open Issues
Keyword
Does the make keyword break too much code? Originally, the make
statement used the keyword create (a suggestion due to Alyssa
Coghlan). However, investigations into the standard library [8] and
Zope+Plone code [9] revealed that create would break a lot more
code, so make was adopted as the keyword instead. However, there
are still a few instances where make would break code. Is there a
better keyword for the statement?
Some possible keywords and their counts in the standard library (plus
some installed packages):
make - 2 (both in tests)
create - 19 (including existing function in imaplib)
build - 83 (including existing class in distutils.command.build)
construct - 0
produce - 0
The make-statement as an alternate constructor
Currently, there are not many functions which have the signature
(name, args, kwargs). That means that something like:
make dict params:
x = 1
y = 2
is currently impossible because the dict constructor has a different
signature. Does this sort of thing need to be supported? One
suggestion, by Carl Banks, would be to add a __make__ magic method
that if found would be called instead of __call__. For types,
the __make__ method would be identical to __call__ and thus
unnecessary, but dicts could support the make-statement by defining a
__make__ method on the dict type that looks something like:
def __make__(cls, name, args, kwargs):
return cls(**kwargs)
Of course, rather than adding another magic method, the dict type
could just grow a classmethod something like dict.fromblock that
could be used like:
make dict.fromblock params:
x = 1
y = 2
So the question is, will many types want to use the make-statement as
an alternate constructor? And if so, does that alternate constructor
need to have the same name as the original constructor?
Customizing the dict in which the block is executed
Should users of the make-statement be able to determine in which dict
object the code is executed? This would allow the make-statement to
be used in situations where a normal dict object would not suffice,
e.g. if order and repeated names must be allowed. Allowing this sort
of customization could allow XML to be written without repeating
element names, and with nesting of make-statements corresponding to
nesting of XML elements:
make Element html:
make Element body:
text('before first h1')
make Element h1:
attrib(style='first')
text('first h1')
tail('after first h1')
make Element h1:
attrib(style='second')
text('second h1')
tail('after second h1')
If the make-statement tried to get the dict in which to execute its
block by calling the callable’s __make_dict__ method, the
following code would allow the make-statement to be used as above:
class Element(object):
class __make_dict__(dict):
def __init__(self, *args, **kwargs):
self._super = super(Element.__make_dict__, self)
self._super.__init__(*args, **kwargs)
self.elements = []
self.text = None
self.tail = None
self.attrib = {}
def __getitem__(self, name):
try:
return self._super.__getitem__(name)
except KeyError:
if name in ['attrib', 'text', 'tail']:
return getattr(self, 'set_%s' % name)
else:
return globals()[name]
def __setitem__(self, name, value):
self._super.__setitem__(name, value)
self.elements.append(value)
def set_attrib(self, **kwargs):
self.attrib = kwargs
def set_text(self, text):
self.text = text
def set_tail(self, text):
self.tail = text
def __new__(cls, name, args, edict):
get_element = etree.ElementTree.Element
result = get_element(name, attrib=edict.attrib)
result.text = edict.text
result.tail = edict.tail
for element in edict.elements:
result.append(element)
return result
Note, however, that the code to support this is somewhat fragile –
it has to magically populate the namespace with attrib, text
and tail, and it assumes that every name binding inside the make
statement body is creating an Element. As it stands, this code would
break with the introduction of a simple for-loop to any one of the
make-statement bodies, because the for-loop would bind a name to a
non-Element object. This could be worked around by adding some sort
of isinstance check or attribute examination, but this still results
in a somewhat fragile solution.
It has also been pointed out that the with-statement can provide
equivalent nesting with a much more explicit syntax:
with Element('html') as html:
with Element('body') as body:
body.text = 'before first h1'
with Element('h1', style='first') as h1:
h1.text = 'first h1'
h1.tail = 'after first h1'
with Element('h1', style='second') as h1:
h1.text = 'second h1'
h1.tail = 'after second h1'
And if the repetition of the element names here is too much of a DRY
violation, it is also possible to eliminate all as-clauses except for
the first by adding a few methods to Element. [10]
So are there real use-cases for executing the block in a dict of a
different type? And if so, should the make-statement be extended to
support them?
Optional Extensions
Remove the make keyword
It might be possible to remove the make keyword so that such
statements would begin with the callable being called, e.g.:
namespace ns:
badger = 42
def spam():
...
interface C(...):
...
However, almost all other Python statements begin with a keyword, and
removing the keyword would make it harder to look up this construct in
the documentation. Additionally, this would add some complexity in
the grammar and so far I (Steven Bethard) have not been able to
implement the feature without the keyword.
Removing __metaclass__ in Python 3000
As a side-effect of its generality, the make-statement mostly
eliminates the need for the __metaclass__ attribute in class
objects. Thus in Python 3000, instead of:
class <name> <bases-tuple>:
__metaclass__ = <metaclass>
<block>
metaclasses could be supported by using the metaclass as the callable
in a make-statement:
make <metaclass> <name> <bases-tuple>:
<block>
Removing the __metaclass__ hook would simplify the BUILD_CLASS
opcode a bit.
Removing class statements in Python 3000
In the most extreme application of make-statements, the class
statement itself could be deprecated in favor of make type
statements.
References
[1]
Michele Simionato’s original suggestion
(https://mail.python.org/pipermail/python-dev/2005-October/057435.html)
[2]
Guido requests withdrawal
(https://mail.python.org/pipermail/python-3000/2006-April/000936.html)
[3]
Ian Bicking’s setonce descriptor
(http://blog.ianbicking.org/easy-readonly-attributes.html)
[4]
Guido ponders property syntax
(https://mail.python.org/pipermail/python-dev/2005-October/057404.html)
[5]
Namespace-based property recipe
(http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/442418)
[6]
Python interfaces
(http://www.artima.com/weblogs/viewpost.jsp?thread=86641)
[7]
Make Statement patch
(http://ucsu.colorado.edu/~bethard/py/make_statement.patch)
[8]
Instances of create in the stdlib
(https://mail.python.org/pipermail/python-list/2006-April/335159.html)
[9]
Instances of create in Zope+Plone
(https://mail.python.org/pipermail/python-list/2006-April/335284.html)
[10]
Eliminate as-clauses in with-statement XML
(https://mail.python.org/pipermail/python-list/2006-April/336774.html)
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 359 – The “make” Statement | Standards Track | This PEP proposes a generalization of the class-declaration syntax,
the make statement. The proposed syntax and semantics parallel
the syntax for class definition, and so: |
PEP 360 – Externally Maintained Packages
Author:
Brett Cannon <brett at python.org>
Status:
Final
Type:
Process
Created:
30-May-2006
Post-History:
Table of Contents
Abstract
Externally Maintained Packages
ElementTree
Expat XML parser
Optik
wsgiref
References
Copyright
Warning
No new modules are to be added to this PEP. It has been
deemed dangerous to codify external maintenance of any
code checked into Python’s code repository. Code
contributors should expect Python’s development
methodology to be used for any and all code checked into
Python’s code repository.
Abstract
There are many great pieces of Python software developed outside of
the Python standard library (a.k.a., the “stdlib”). Sometimes it
makes sense to incorporate these externally maintained packages into
the stdlib in order to fill a gap in the tools provided by Python.
But by having the packages maintained externally it means Python’s
developers do not have direct control over the packages’ evolution and
maintenance. Some package developers prefer to have bug reports and
patches go through them first instead of being directly applied to
Python’s repository.
This PEP is meant to record details of packages in the stdlib that are
maintained outside of Python’s repository. Specifically, it is meant
to keep track of any specific maintenance needs for each package. It
should be mentioned that changes needed in order to fix bugs and keep
the code running on all of Python’s supported platforms will be done
directly in Python’s repository without worrying about going through
the contact developer. This is so that Python itself is not held up
by a single bug and allows the whole process to scale as needed.
It also is meant to allow people to know which version of a package is
released with which version of Python.
Externally Maintained Packages
The section title is the name of the package as it is known outside of
the Python standard library. The “standard library name” is what the
package is named within Python. The “contact person” is the Python
developer in charge of maintaining the package. The “synchronisation
history” lists what external version of the package was included in
each version of Python (if different from the previous Python
release).
ElementTree
Web site:
http://effbot.org/zone/element-index.htm
Standard library name:
xml.etree
Contact person:
Fredrik Lundh
Fredrik has ceded ElementTree maintenance to the core Python development
team [1].
Expat XML parser
Web site:
http://www.libexpat.org/
Standard library name:
N/A (this refers to the parser itself, and not the Python
bindings)
Contact person:
None
Optik
Web site:
http://optik.sourceforge.net/
Standard library name:
optparse
Contact person:
Greg Ward
External development seems to have ceased. For new applications, optparse
itself has been largely superseded by argparse.
wsgiref
Web site:
None
Standard library name:
wsgiref
Contact Person:
Phillip J. Eby
This module is maintained in the standard library, but significant bug
reports and patches should pass through the Web-SIG mailing list
[2] for discussion.
References
[1]
Fredrik’s handing over of ElementTree
(https://mail.python.org/pipermail/python-dev/2012-February/116389.html)
[2]
Web-SIG mailing list
(https://mail.python.org/mailman/listinfo/web-sig)
Copyright
This document has been placed in the public domain.
| Final | PEP 360 – Externally Maintained Packages | Process | There are many great pieces of Python software developed outside of
the Python standard library (a.k.a., the “stdlib”). Sometimes it
makes sense to incorporate these externally maintained packages into
the stdlib in order to fill a gap in the tools provided by Python. |
PEP 361 – Python 2.6 and 3.0 Release Schedule
Author:
Neal Norwitz, Barry Warsaw
Status:
Final
Type:
Informational
Topic:
Release
Created:
29-Jun-2006
Python-Version:
2.6, 3.0
Post-History:
17-Mar-2008
Table of Contents
Abstract
Release Manager and Crew
Release Lifespan
Release Schedule
Completed features for 3.0
Completed features for 2.6
Possible features for 2.6
Deferred until 2.7
Open issues
References
Copyright
Abstract
This document describes the development and release schedule for
Python 2.6 and 3.0. The schedule primarily concerns itself with
PEP-sized items. Small features may be added up to and including
the first beta release. Bugs may be fixed until the final
release.
There will be at least two alpha releases, two beta releases, and
one release candidate. The releases are planned for October 2008.
Python 2.6 is not only the next advancement in the Python 2
series, it is also a transitional release, helping developers
begin to prepare their code for Python 3.0. As such, many
features are being backported from Python 3.0 to 2.6. Thus, it
makes sense to release both versions at the same time. The
precedent for this was set with the Python 1.6 and 2.0 releases.
Until rc, we will be releasing Python 2.6 and 3.0 in lockstep, on
a monthly release cycle. The releases will happen on the first
Wednesday of every month through the beta testing cycle. Because
Python 2.6 is ready sooner, and because we have outside deadlines
we’d like to meet, we’ve decided to split the rc releases. Thus
Python 2.6 final is currently planned to come out two weeks before
Python 3.0 final.
Release Manager and Crew
2.6/3.0 Release Manager: Barry Warsaw
Windows installers: Martin v. Loewis
Mac installers: Ronald Oussoren
Documentation: Georg Brandl
RPMs: Sean Reifschneider
Release Lifespan
Python 3.0 is no longer being maintained for any purpose.
Python 2.6.9 is the final security-only source-only maintenance
release of the Python 2.6 series. With its release on October 29,
2013, all official support for Python 2.6 has ended. Python 2.6
is no longer being maintained for any purpose.
Release Schedule
Feb 29 2008: Python 2.6a1 and 3.0a3 are released
Apr 02 2008: Python 2.6a2 and 3.0a4 are released
May 08 2008: Python 2.6a3 and 3.0a5 are released
Jun 18 2008: Python 2.6b1 and 3.0b1 are released
Jul 17 2008: Python 2.6b2 and 3.0b2 are released
Aug 20 2008: Python 2.6b3 and 3.0b3 are released
Sep 12 2008: Python 2.6rc1 is released
Sep 17 2008: Python 2.6rc2 and 3.0rc1 released
Oct 01 2008: Python 2.6 final released
Nov 06 2008: Python 3.0rc2 released
Nov 21 2008: Python 3.0rc3 released
Dec 03 2008: Python 3.0 final released
Dec 04 2008: Python 2.6.1 final released
Apr 14 2009: Python 2.6.2 final released
Oct 02 2009: Python 2.6.3 final released
Oct 25 2009: Python 2.6.4 final released
Mar 19 2010: Python 2.6.5 final released
Aug 24 2010: Python 2.6.6 final released
Jun 03 2011: Python 2.6.7 final released (security-only)
Apr 10 2012: Python 2.6.8 final released (security-only)
Oct 29 2013: Python 2.6.9 final released (security-only)
Completed features for 3.0
See PEP 3000 and PEP 3100 for details on the
Python 3.0 project.
Completed features for 2.6
PEPs:
PEP 352: Raising a string exception now triggers a TypeError.
Attempting to catch a string exception raises DeprecationWarning.
BaseException.message has been deprecated.
PEP 358: The “bytes” Object
PEP 366: Main module explicit relative imports
PEP 370: Per user site-packages directory
PEP 3112: Bytes literals in Python 3000
PEP 3127: Integer Literal Support and Syntax
PEP 371: Addition of the multiprocessing package
New modules in the standard library:
json
new enhanced turtle module
ast
Deprecated modules and functions in the standard library:
buildtools
cfmfile
commands.getstatus()
macostools.touched()
md5
MimeWriter
mimify
popen2, os.popen[234]()
posixfile
sets
sha
Modules removed from the standard library:
gopherlib
rgbimg
macfs
Warnings for features removed in Py3k:
builtins: apply, callable, coerce, dict.has_key, execfile,
reduce, reload
backticks and <>
float args to xrange
coerce and all its friends
comparing by default comparison
{}.has_key()
file.xreadlines
softspace removal for print() function
removal of modules because of PEP 4/PEP 3100/PEP 3108
Other major features:
with/as will be keywords
a __dir__() special method to control dir() was added [1]
AtheOS support stopped.
warnings module implemented in C
compile() takes an AST and can convert to byte code
Possible features for 2.6
New features should be implemented prior to alpha2, particularly
any C modifications or behavioral changes. New features must be
implemented prior to beta1 or will require Release Manager approval.
The following PEPs are being worked on for inclusion in 2.6: None.
Each non-trivial feature listed here that is not a PEP must be
discussed on python-dev. Other enhancements include:
distutils replacement (requires a PEP)
New modules in the standard library:
winerror
https://bugs.python.org/issue1505257
(Patch rejected, module should be written in C)
setuptools
BDFL pronouncement for inclusion in 2.5:
https://mail.python.org/pipermail/python-dev/2006-April/063964.html
PJE’s withdrawal from 2.5 for inclusion in 2.6:
https://mail.python.org/pipermail/python-dev/2006-April/064145.html
Modules to gain a DeprecationWarning (as specified for Python 2.6
or through negligence):
rfc822
mimetools
multifile
compiler package (or a Py3K warning instead?)
Convert Parser/*.c to use the C warnings module rather than printf
Add warnings for Py3k features removed:
__getslice__/__setslice__/__delslice__
float args to PyArgs_ParseTuple
__cmp__?
other comparison changes?
int division?
All PendingDeprecationWarnings (e.g. exceptions)
using zip() result as a list
the exec statement (use function syntax)
function attributes that start with func_* (should use __*__)
the L suffix for long literals
renaming of __nonzero__ to __bool__
multiple inheritance with classic classes? (MRO might change)
properties and classic classes? (instance attrs shadow property)
use __bool__ method if available and there’s no __nonzero__
Check the various bits of code in Demo/ and Tools/ all still work,
update or remove the ones that don’t.
All modules in Modules/ should be updated to be ssize_t clean.
All of Python (including Modules/) should compile cleanly with g++
Start removing deprecated features and generally moving towards Py3k
Replace all old-style tests (operate on import) with unittest or doctest
Add tests for all untested modules
Document undocumented modules/features
bdist_deb in distutils package
https://mail.python.org/pipermail/python-dev/2006-February/060926.html
bdist_egg in distutils package
pure python pgen module
(Owner: Guido)
Deferral to 2.6:
https://mail.python.org/pipermail/python-dev/2006-April/064528.html
Remove the fpectl module?
Deferred until 2.7
None
Open issues
How should import warnings be handled?
https://mail.python.org/pipermail/python-dev/2006-June/066345.html
https://bugs.python.org/issue1515609
https://bugs.python.org/issue1515361
References
[1]
Adding a __dir__() magic method
https://mail.python.org/pipermail/python-dev/2006-July/067139.html
Copyright
This document has been placed in the public domain.
| Final | PEP 361 – Python 2.6 and 3.0 Release Schedule | Informational | This document describes the development and release schedule for
Python 2.6 and 3.0. The schedule primarily concerns itself with
PEP-sized items. Small features may be added up to and including
the first beta release. Bugs may be fixed until the final
release. |
PEP 362 – Function Signature Object
Author:
Brett Cannon <brett at python.org>, Jiwon Seo <seojiwon at gmail.com>,
Yury Selivanov <yury at edgedb.com>, Larry Hastings <larry at hastings.org>
Status:
Final
Type:
Standards Track
Created:
21-Aug-2006
Python-Version:
3.3
Post-History:
04-Jun-2012
Resolution:
Python-Dev message
Table of Contents
Abstract
Signature Object
Parameter Object
BoundArguments Object
Implementation
Design Considerations
No implicit caching of Signature objects
Some functions may not be introspectable
Signature and Parameter equivalence
Examples
Visualizing Callable Objects’ Signature
Annotation Checker
Acceptance
References
Copyright
Abstract
Python has always supported powerful introspection capabilities,
including introspecting functions and methods (for the rest of
this PEP, “function” refers to both functions and methods). By
examining a function object you can fully reconstruct the function’s
signature. Unfortunately this information is stored in an inconvenient
manner, and is spread across a half-dozen deeply nested attributes.
This PEP proposes a new representation for function signatures.
The new representation contains all necessary information about a function
and its parameters, and makes introspection easy and straightforward.
However, this object does not replace the existing function
metadata, which is used by Python itself to execute those
functions. The new metadata object is intended solely to make
function introspection easier for Python programmers.
Signature Object
A Signature object represents the call signature of a function and
its return annotation. For each parameter accepted by the function
it stores a Parameter object in its parameters collection.
A Signature object has the following public attributes and methods:
return_annotation : object
The “return” annotation for the function. If the function
has no “return” annotation, this attribute is set to
Signature.empty.
parameters : OrderedDict
An ordered mapping of parameters’ names to the corresponding
Parameter objects.
bind(*args, **kwargs) -> BoundArguments
Creates a mapping from positional and keyword arguments to
parameters. Raises a TypeError if the passed arguments do
not match the signature.
bind_partial(*args, **kwargs) -> BoundArguments
Works the same way as bind(), but allows the omission
of some required arguments (mimics functools.partial
behavior.) Raises a TypeError if the passed arguments do
not match the signature.
replace(parameters=<optional>, *, return_annotation=<optional>) -> Signature
Creates a new Signature instance based on the instance
replace was invoked on. It is possible to pass different
parameters and/or return_annotation to override the
corresponding properties of the base signature. To remove
return_annotation from the copied Signature, pass in
Signature.empty.
Note that the ‘=<optional>’ notation means that the argument is
optional. This notation applies to the rest of this PEP.
Signature objects are immutable. Use Signature.replace() to
make a modified copy:
>>> def foo() -> None:
... pass
>>> sig = signature(foo)
>>> new_sig = sig.replace(return_annotation="new return annotation")
>>> new_sig is not sig
True
>>> new_sig.return_annotation != sig.return_annotation
True
>>> new_sig.parameters == sig.parameters
True
>>> new_sig = new_sig.replace(return_annotation=new_sig.empty)
>>> new_sig.return_annotation is Signature.empty
True
There are two ways to instantiate a Signature class:
Signature(parameters=<optional>, *, return_annotation=Signature.empty)
Default Signature constructor. Accepts an optional sequence
of Parameter objects, and an optional return_annotation.
Parameters sequence is validated to check that there are no
parameters with duplicate names, and that the parameters
are in the right order, i.e. positional-only first, then
positional-or-keyword, etc.
Signature.from_function(function)
Returns a Signature object reflecting the signature of the
function passed in.
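For illustration (this example is not part of the original PEP text and
assumes the inspect module as shipped with Python 3.3+), a Signature can be
built directly from a sequence of Parameter objects:
>>> from inspect import Signature, Parameter
>>> params = [Parameter('a', Parameter.POSITIONAL_OR_KEYWORD),
...           Parameter('b', Parameter.KEYWORD_ONLY, default=0)]
>>> str(Signature(params, return_annotation=int))
'(a, *, b=0) -> int'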
It’s possible to test Signatures for equality. Two signatures are
equal when their parameters are equal, their positional and
positional-only parameters appear in the same order, and they
have equal return annotations.
Changes to the Signature object, or to any of its data members,
do not affect the function itself.
Signature also implements __str__:
>>> str(Signature.from_function((lambda *args: None)))
'(*args)'
>>> str(Signature())
'()'
Parameter Object
Python’s expressive syntax means functions can accept many different
kinds of parameters with many subtle semantic differences. We
propose a rich Parameter object designed to represent any possible
function parameter.
A Parameter object has the following public attributes and methods:
name : str
The name of the parameter as a string. Must be a valid
Python identifier name (with the exception of POSITIONAL_ONLY
parameters, which can have it set to None.)
default : object
The default value for the parameter. If the parameter has no
default value, this attribute is set to Parameter.empty.
annotation : object
The annotation for the parameter. If the parameter has no
annotation, this attribute is set to Parameter.empty.
kind
Describes how argument values are bound to the parameter.
Possible values:
Parameter.POSITIONAL_ONLY - value must be supplied
as a positional argument.
Python has no explicit syntax for defining positional-only
parameters, but many built-in and extension module functions
(especially those that accept only one or two parameters)
accept them.
Parameter.POSITIONAL_OR_KEYWORD - value may be
supplied as either a keyword or positional argument
(this is the standard binding behaviour for functions
implemented in Python.)
Parameter.KEYWORD_ONLY - value must be supplied
as a keyword argument. Keyword only parameters are those
which appear after a “*” or “*args” entry in a Python
function definition.
Parameter.VAR_POSITIONAL - a tuple of positional
arguments that aren’t bound to any other parameter.
This corresponds to a “*args” parameter in a Python
function definition.
Parameter.VAR_KEYWORD - a dict of keyword arguments
that aren’t bound to any other parameter. This corresponds
to a “**kwargs” parameter in a Python function definition.
Always use Parameter.* constants for setting and checking
the value of the kind attribute.
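As a small illustration (not from the original PEP), the constants can be
compared against the kind reported for each parameter of a callable:
>>> from inspect import signature, Parameter
>>> def f(a, *args, b=1, **kwargs): pass
>>> sig = signature(f)
>>> sig.parameters['args'].kind == Parameter.VAR_POSITIONAL
True
>>> [p.name for p in sig.parameters.values() if p.kind == Parameter.KEYWORD_ONLY]
['b']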
replace(*, name=<optional>, kind=<optional>, default=<optional>, annotation=<optional>) -> Parameter
Creates a new Parameter instance based on the instance
replace() was invoked on. To override a Parameter
attribute, pass the corresponding argument. To remove
an attribute from a Parameter, pass Parameter.empty.
Parameter constructor:
Parameter(name, kind, *, annotation=Parameter.empty, default=Parameter.empty)
Instantiates a Parameter object. name and kind are required,
while annotation and default are optional.
Two parameters are equal when they have equal names, kinds, defaults,
and annotations.
Parameter objects are immutable. Instead of modifying a Parameter object,
you can use Parameter.replace() to create a modified copy like so:
>>> param = Parameter('foo', Parameter.KEYWORD_ONLY, default=42)
>>> str(param)
'foo=42'
>>> str(param.replace())
'foo=42'
>>> str(param.replace(default=Parameter.empty, annotation='spam'))
"foo:'spam'"
BoundArguments Object
Result of a Signature.bind call. Holds the mapping of arguments
to the function’s parameters.
Has the following public attributes:
arguments : OrderedDict
An ordered, mutable mapping of parameters’ names to arguments’ values.
Contains only explicitly bound arguments. Arguments for
which bind() relied on a default value are skipped.
args : tuple
Tuple of positional arguments values. Dynamically computed from
the ‘arguments’ attribute.
kwargs : dict
Dict of keyword arguments values. Dynamically computed from
the ‘arguments’ attribute.
The arguments attribute should be used in conjunction with
Signature.parameters for any arguments processing purposes.
args and kwargs properties can be used to invoke functions:
def test(a, *, b):
...
sig = signature(test)
ba = sig.bind(10, b=20)
test(*ba.args, **ba.kwargs)
Arguments which could be passed as part of either *args or **kwargs
will be included only in the BoundArguments.args attribute. Consider the
following example:
def test(a=1, b=2, c=3):
pass
sig = signature(test)
ba = sig.bind(a=10, c=13)
>>> ba.args
(10,)
>>> ba.kwargs
{'c': 13}
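As a further sketch (not part of the PEP's API), arguments can be combined
with Signature.parameters to fill in the defaults that bind() skipped:
>>> def fill_defaults(sig, ba):
...     # Copy in default values for parameters that bind() left out.
...     for name, param in sig.parameters.items():
...         if name not in ba.arguments and param.default is not param.empty:
...             ba.arguments[name] = param.default
...     return ba
>>> ba = fill_defaults(sig, sig.bind(a=10, c=13))
>>> sorted(ba.arguments.items())
[('a', 10), ('b', 2), ('c', 3)]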
Implementation
The implementation adds a new function signature() to the inspect
module. The function is the preferred way of getting a Signature for
a callable object.
The function implements the following algorithm:
If the object is not callable - raise a TypeError
If the object has a __signature__ attribute and if it
is not None - return it
If it has a __wrapped__ attribute, return
signature(object.__wrapped__)
If the object is an instance of FunctionType, construct
and return a new Signature for it
If the object is a bound method, construct and return a new Signature
object, with its first parameter (usually self or cls)
removed. (classmethod and staticmethod are supported
too. Since both are descriptors, the former returns a bound method,
and the latter returns its wrapped function.)
If the object is an instance of functools.partial, construct
a new Signature from its partial.func attribute, and
account for already bound partial.args and partial.kwargs
If the object is a class or metaclass:
If the object’s type has a __call__ method defined in
its MRO, return a Signature for it
If the object has a __new__ method defined in its MRO,
return a Signature object for it
If the object has a __init__ method defined in its MRO,
return a Signature object for it
Return signature(object.__call__)
Note that the Signature object is created in a lazy manner, and
is not automatically cached. However, the user can manually cache a
Signature by storing it in the __signature__ attribute.
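For instance (an illustrative snippet, not taken from the PEP):
>>> from inspect import signature
>>> def f(x, y=0):
...     return x + y
>>> f.__signature__ = signature(f)   # compute once, cache on the function
>>> signature(f) is f.__signature__  # later calls return the cached object
True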
An implementation for Python 3.3 can be found at [1].
The python issue tracking the patch is [2].
Design Considerations
No implicit caching of Signature objects
The first PEP design had a provision for implicit caching of Signature
objects in the inspect.signature() function. However, this has the
following downsides:
If the Signature object is cached then any changes to the function
it describes will not be reflected in it. However, if the caching is
needed, it can always be done manually and explicitly
It is better to reserve the __signature__ attribute for the cases
when there is a need to explicitly set a Signature object that
is different from the actual one
Some functions may not be introspectable
Some functions may not be introspectable in certain implementations of
Python. For example, in CPython, built-in functions defined in C provide
no metadata about their arguments. Adding support for them is out of
scope for this PEP.
Signature and Parameter equivalence
We assume that parameter names have semantic significance–two
signatures are equal only when their corresponding parameters are equal
and have the exact same names. Users who want looser equivalence tests,
perhaps ignoring names of VAR_KEYWORD or VAR_POSITIONAL parameters, will
need to implement those themselves.
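A short illustration of these equality rules (not part of the original PEP
text):
>>> from inspect import signature
>>> def f(a, b=1): pass
>>> def g(a, b=1): pass
>>> def h(x, b=1): pass
>>> signature(f) == signature(g)
True
>>> signature(f) == signature(h)   # same shape, but a parameter name differs
False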
Examples
Visualizing Callable Objects’ Signature
Let’s define some classes and functions:
from inspect import signature
from functools import partial, wraps
class FooMeta(type):
def __new__(mcls, name, bases, dct, *, bar:bool=False):
return super().__new__(mcls, name, bases, dct)
def __init__(cls, name, bases, dct, **kwargs):
return super().__init__(name, bases, dct)
class Foo(metaclass=FooMeta):
def __init__(self, spam:int=42):
self.spam = spam
def __call__(self, a, b, *, c) -> tuple:
return a, b, c
@classmethod
def spam(cls, a):
return a
def shared_vars(*shared_args):
"""Decorator factory that defines shared variables that are
passed to every invocation of the function"""
def decorator(f):
@wraps(f)
def wrapper(*args, **kwargs):
full_args = shared_args + args
return f(*full_args, **kwargs)
# Override signature
sig = signature(f)
sig = sig.replace(tuple(sig.parameters.values())[1:])
wrapper.__signature__ = sig
return wrapper
return decorator
@shared_vars({})
def example(_state, a, b, c):
return _state, a, b, c
def format_signature(obj):
return str(signature(obj))
Now, in the python REPL:
>>> format_signature(FooMeta)
'(name, bases, dct, *, bar:bool=False)'
>>> format_signature(Foo)
'(spam:int=42)'
>>> format_signature(Foo.__call__)
'(self, a, b, *, c) -> tuple'
>>> format_signature(Foo().__call__)
'(a, b, *, c) -> tuple'
>>> format_signature(Foo.spam)
'(a)'
>>> format_signature(partial(Foo().__call__, 1, c=3))
'(b, *, c=3) -> tuple'
>>> format_signature(partial(partial(Foo().__call__, 1, c=3), 2, c=20))
'(*, c=20) -> tuple'
>>> format_signature(example)
'(a, b, c)'
>>> format_signature(partial(example, 1, 2))
'(c)'
>>> format_signature(partial(partial(example, 1, b=2), c=3))
'(b=2, c=3)'
Annotation Checker
import inspect
import functools
def checktypes(func):
'''Decorator to verify arguments and return types
Example:
>>> @checktypes
... def test(a:int, b:str) -> int:
... return int(a * b)
>>> test(10, '1')
1111111111
>>> test(10, 1)
Traceback (most recent call last):
...
ValueError: foo: wrong type of 'b' argument, 'str' expected, got 'int'
'''
sig = inspect.signature(func)
types = {}
for param in sig.parameters.values():
# Iterate through function's parameters and build the list of
# arguments types
type_ = param.annotation
if type_ is param.empty or not inspect.isclass(type_):
# Missing annotation or not a type, skip it
continue
types[param.name] = type_
# If the argument has a type specified, let's check that its
# default value (if present) conforms with the type.
if param.default is not param.empty and not isinstance(param.default, type_):
raise ValueError("{func}: wrong type of a default value for {arg!r}". \
format(func=func.__qualname__, arg=param.name))
def check_type(sig, arg_name, arg_type, arg_value):
# Internal function that encapsulates arguments type checking
if not isinstance(arg_value, arg_type):
raise ValueError("{func}: wrong type of {arg!r} argument, " \
"{exp!r} expected, got {got!r}". \
format(func=func.__qualname__, arg=arg_name,
exp=arg_type.__name__, got=type(arg_value).__name__))
@functools.wraps(func)
def wrapper(*args, **kwargs):
# Let's bind the arguments
ba = sig.bind(*args, **kwargs)
for arg_name, arg in ba.arguments.items():
# And iterate through the bound arguments
try:
type_ = types[arg_name]
except KeyError:
continue
else:
# OK, we have a type for the argument, let's get the corresponding
# parameter description from the signature object
param = sig.parameters[arg_name]
if param.kind == param.VAR_POSITIONAL:
# If this parameter is a variable-argument parameter,
# then we need to check each of its values
for value in arg:
check_type(sig, arg_name, type_, value)
elif param.kind == param.VAR_KEYWORD:
# If this parameter is a variable-keyword-argument parameter:
for subname, value in arg.items():
check_type(sig, arg_name + ':' + subname, type_, value)
else:
# And, finally, if this parameter is a regular one:
check_type(sig, arg_name, type_, arg)
result = func(*ba.args, **ba.kwargs)
# The last bit - let's check that the result is correct
return_type = sig.return_annotation
if (return_type is not sig.empty and
isinstance(return_type, type) and
not isinstance(result, return_type)):
raise ValueError('{func}: wrong return type, {exp} expected, got {got}'. \
format(func=func.__qualname__, exp=return_type.__name__,
got=type(result).__name__))
return result
return wrapper
Acceptance
PEP 362 was accepted by Guido on Friday, June 22, 2012 [3].
The reference implementation was committed to trunk later that day.
References
[1]
pep362 branch (https://bitbucket.org/1st1/cpython/overview)
[2]
issue 15008 (http://bugs.python.org/issue15008)
[3]
“A Desperate Plea For Introspection (aka: BDFAP Needed)” (https://mail.python.org/pipermail/python-dev/2012-June/120682.html)
Copyright
This document has been placed in the public domain.
| Final | PEP 362 – Function Signature Object | Standards Track | Python has always supported powerful introspection capabilities,
including introspecting functions and methods (for the rest of
this PEP, “function” refers to both functions and methods). By
examining a function object you can fully reconstruct the function’s
signature. Unfortunately this information is stored in an inconvenient
manner, and is spread across a half-dozen deeply nested attributes. |
PEP 363 – Syntax For Dynamic Attribute Access
Author:
Ben North <ben at redfrontdoor.org>
Status:
Rejected
Type:
Standards Track
Created:
29-Jan-2007
Post-History:
12-Feb-2007
Table of Contents
Abstract
Rationale
Impact On Existing Code
Performance Impact
Error Cases
Draft Implementation
Mailing Lists Discussion
References
Copyright
Abstract
Dynamic attribute access is currently possible using the “getattr”
and “setattr” builtins. The present PEP suggests a new syntax to
make such access easier, allowing the coder for example to write:
x.('foo_%d' % n) += 1
z = y.('foo_%d' % n).('bar_%s' % s)
instead of:
attr_name = 'foo_%d' % n
setattr(x, attr_name, getattr(x, attr_name) + 1)
z = getattr(getattr(y, 'foo_%d' % n), 'bar_%s' % s)
Rationale
Dictionary access and indexing both have a friendly invocation
syntax: instead of x.__getitem__(12) the coder can write x[12].
This also allows the use of subscripted elements in an augmented
assignment, as in “x[12] += 1”. The present proposal brings this
ease-of-use to dynamic attribute access too.
Attribute access is currently possible in two ways:
When the attribute name is known at code-writing time, the
“.NAME” trailer can be used, as in:
x.foo = 42
y.bar += 100
When the attribute name is computed dynamically at run-time, the
“getattr” and “setattr” builtins must be used:
x = getattr(y, 'foo_%d' % n)
setattr(z, 'bar_%s' % s, 99)
The “getattr” builtin also allows the coder to specify a default
value to be returned in the event that the object does not have
an attribute of the given name:
x = getattr(y, 'foo_%d' % n, 0)
This PEP describes a new syntax for dynamic attribute access —
“x.(expr)” — with examples given in the Abstract above.
(The new syntax could also allow the provision of a default value in
the “get” case, as in:
x = y.('foo_%d' % n, None)
This 2-argument form of dynamic attribute access would not be
permitted as the target of an (augmented or normal) assignment. The
“Discussion” section below includes opinions specifically on the
2-argument extension.)
Finally, the new syntax can be used with the “del” statement, as in:
del x.(attr_name)
Impact On Existing Code
The proposed new syntax is not currently valid, so no existing
well-formed programs have their meaning altered by this proposal.
Across all “*.py” files in the 2.5 distribution, there are around
600 uses of “getattr”, “setattr” or “delattr”. They break down as
follows (figures have some room for error because they were
arrived at by partially-manual inspection):
c.300 uses of plain "getattr(x, attr_name)", which could be
replaced with the new syntax;
c.150 uses of the 3-argument form, i.e., with the default
value; these could be replaced with the 2-argument form
of the new syntax (the cases break down into c.125 cases
where the attribute name is a literal string, and c.25
where it's only known at run-time);
c.5 uses of the 2-argument form with a literal string
attribute name, which I think could be replaced with the
standard "x.attribute" syntax;
c.120 uses of setattr, of which 15 use getattr to find the
new value; all could be replaced with the new syntax,
the 15 where getattr is also involved would show a
particular increase in clarity;
c.5 uses which would have to stay as "getattr" because they
are calls of a variable named "getattr" whose default
value is the builtin "getattr";
c.5 uses of the 2-argument form, inside a try/except block
which catches AttributeError and uses a default value
instead; these could use 2-argument form of the new
syntax;
c.10 uses of "delattr", which could use the new syntax.
As examples, the line:
setattr(self, attr, change_root(self.root, getattr(self, attr)))
from Lib/distutils/command/install.py could be rewritten:
self.(attr) = change_root(self.root, self.(attr))
and the line:
setattr(self, method_name, getattr(self.metadata, method_name))
from Lib/distutils/dist.py could be rewritten:
self.(method_name) = self.metadata.(method_name)
Performance Impact
Initial pystone measurements are inconclusive, but suggest there may
be a performance penalty of around 1% in the pystones score with the
patched version. One suggestion is that this is because the longer
main loop in ceval.c hurts the cache behaviour, but this has not
been confirmed.
On the other hand, measurements suggest a speed-up of around 40–45%
for dynamic attribute access.
Error Cases
Only strings are permitted as attribute names, so for instance the
following error is produced:
>>> x.(99) = 8
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: attribute name must be string, not 'int'
This is handled by the existing PyObject_GetAttr function.
Draft Implementation
A draft implementation adds a new alternative to the “trailer”
clause in Grammar/Grammar; a new AST type, “DynamicAttribute” in
Python.asdl, with accompanying changes to symtable.c, ast.c, and
compile.c, and three new opcodes (load/store/del) with
accompanying changes to opcode.h and ceval.c. The patch consists
of c.180 additional lines in the core code, and c.100 additional
lines of tests. It is available as sourceforge patch #1657573 [1].
Mailing Lists Discussion
Initial posting of this PEP in draft form was to python-ideas on
20070209 [2], and the response was generally positive. The PEP was
then posted to python-dev on 20070212 [3], and an interesting
discussion ensued. A brief summary:
Initially, there was reasonable (but not unanimous) support for the
idea, although the precise choice of syntax had a more mixed
reception. Several people thought the “.” would be too easily
overlooked, with the result that the syntax could be confused with a
method/function call. A few alternative syntaxes were suggested:
obj.(foo)
obj.[foo]
obj.{foo}
obj{foo}
obj.*foo
obj->foo
obj<-foo
obj@[foo]
obj.[[foo]]
with “obj.[foo]” emerging as the preferred one. In this initial
discussion, the two-argument form was universally disliked, so it
was to be taken out of the PEP.
Discussion then took a step back to whether this particular feature
provided enough benefit to justify new syntax. As well as requiring
coders to become familiar with the new syntax, there would also be
the problem of backward compatibility — code using the new syntax
would not run on older pythons.
Instead of new syntax, a new “wrapper class” was proposed, with the
following specification / conceptual implementation suggested by
Martin von Löwis:
class attrs:
def __init__(self, obj):
self.obj = obj
def __getitem__(self, name):
return getattr(self.obj, name)
def __setitem__(self, name, value):
return setattr(self.obj, name, value)
def __delitem__(self, name):
return delattr(self.obj, name)
def __contains__(self, name):
return hasattr(self.obj, name)
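For example (an illustrative usage, not part of the original discussion), the
wrapper makes the Abstract's augmented-assignment case straightforward:
class X(object):
    pass
x = X()
n = 3
a = attrs(x)
a['foo_%d' % n] = 0       # setattr(x, 'foo_3', 0)
a['foo_%d' % n] += 1      # getattr followed by setattr
assert x.foo_3 == 1
assert ('foo_%d' % n) in a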
This was considered a cleaner and more elegant solution to the
original problem. (Another suggestion was a mixin class providing
dictionary-style access to an object’s attributes.)
The decision was made that the present PEP did not meet the burden
of proof for the introduction of new syntax, a view which had been
put forward by some from the beginning of the discussion. The
wrapper class idea was left open as a possibility for a future PEP.
References
[1]
Sourceforge patch #1657573
http://sourceforge.net/tracker/index.php?func=detail&aid=1657573&group_id=5470&atid=305470
[2]
https://mail.python.org/pipermail/python-ideas/2007-February/000210.html
and following posts
[3]
https://mail.python.org/pipermail/python-dev/2007-February/070939.html
and following posts
Copyright
This document has been placed in the public domain.
| Rejected | PEP 363 – Syntax For Dynamic Attribute Access | Standards Track | Dynamic attribute access is currently possible using the “getattr”
and “setattr” builtins. The present PEP suggests a new syntax to
make such access easier, allowing the coder for example to write: |
PEP 364 – Transitioning to the Py3K Standard Library
Author:
Barry Warsaw <barry at python.org>
Status:
Withdrawn
Type:
Standards Track
Created:
01-Mar-2007
Python-Version:
2.6
Post-History:
Table of Contents
Abstract
Rationale
Supported Renamings
.mv files
Implementation Specification
Programmatic Interface
Open Issues
Reference Implementation
References
Copyright
Abstract
PEP 3108 describes the reorganization of the Python standard library
for the Python 3.0 release. This PEP describes a
mechanism for transitioning from the Python 2.x standard library to
the Python 3.0 standard library. This transition will allow and
encourage Python programmers to use the new Python 3.0 library names
starting with Python 2.6, while maintaining the old names for backward
compatibility. In this way, a Python programmer will be able to write
forward compatible code without sacrificing interoperability with
existing Python programs.
Rationale
PEP 3108 presents a rationale for Python standard library (stdlib)
reorganization. The reader is encouraged to consult that PEP for
details about why and how the library will be reorganized. Should
PEP 3108 be accepted in part or in whole, then it is advantageous to
allow Python programmers to begin the transition to the new stdlib
module names in Python 2.x, so that they can write forward compatible
code starting with Python 2.6.
Note that PEP 3108 proposes to remove some “silly old stuff”,
i.e. modules that are no longer useful or necessary. The PEP you are
reading does not address this because there are no forward
compatibility issues for modules that are to be removed, except to
stop using such modules.
This PEP concerns only the mechanism by which mappings from old stdlib
names to new stdlib names are maintained. Please consult PEP 3108 for
all specific module renaming proposals. Specifically see the section
titled Modules to Rename for guidelines on the old name to new
name mappings. The few examples in this PEP are given for
illustrative purposes only and should not be used for specific
renaming recommendations.
Supported Renamings
There are at least 4 use cases explicitly supported by this PEP:
Simple top-level package name renamings, such as StringIO to
stringio;
Sub-package renamings where the package name may or may not be
renamed, such as email.MIMEText to email.mime.text;
Extension module renaming, such as cStringIO to cstringio;
Third party renaming of any of the above.
Two use cases supported by this PEP include renaming simple top-level
modules, such as StringIO, as well as modules within packages,
such as email.MIMEText.
In the former case, PEP 3108 currently recommends StringIO be
renamed to stringio, following PEP 8 recommendations.
In the latter case, the email 4.0 package distributed with Python 2.5
already renamed email.MIMEText to email.mime.text, although it
did so in a one-off, uniquely hackish way inside the email package.
The mechanism described in this PEP is general enough to handle all
module renamings, obviating the need for the Python 2.5 hack (except
for backward compatibility with earlier Python versions).
An additional use case is to support the renaming of C extension
modules. As long as the new name for the C module is importable, it
can be remapped to the new name. E.g. cStringIO renamed to
cstringio.
Third party package renaming is also supported, via several public
interfaces accessible by any Python module.
Remappings are not performed recursively.
.mv files
Remapping files are called .mv files; the suffix was chosen to be
evocative of the Unix mv(1) command. An .mv file is a simple
line-oriented text file. All blank lines and lines that start with a
# are ignored. All other lines must contain two whitespace separated
fields. The first field is the old module name, and the second field
is the new module name. Both module names must be specified using
their full dotted-path names. Here is an example .mv file from
Python 2.6:
# Map the various string i/o libraries to their new names
StringIO stringio
cStringIO cstringio
.mv files can appear anywhere in the file system, and there is a
programmatic interface provided to parse them, and register the
remappings inside them. By default, when Python starts up, all the
.mv files in the oldlib package are read, and their remappings
are automatically registered. This is where all the module remappings
should be specified for top-level Python 2.x standard library modules.
Implementation Specification
This section provides the full specification for how module renamings
in Python 2.x are implemented. The central mechanism relies on
various import hooks as described in PEP 302. Specifically
sys.path_importer_cache, sys.path, and sys.meta_path are
all employed to provide the necessary functionality.
When Python’s import machinery is initialized, the oldlib package is
imported. Inside oldlib there is a class called OldStdlibLoader.
This class implements the PEP 302 interface and is automatically
instantiated, with zero arguments. The constructor reads all the
.mv files from the oldlib package directory, automatically
registering all the remappings found in those .mv files. This is
how the Python 2.x standard library is remapped.
The OldStdlibLoader class should not be instantiated by other Python
modules. Instead, you can access the global OldStdlibLoader instance
via the sys.stdlib_remapper instance. Use this instance if you want
programmatic access to the remapping machinery.
One important implementation detail: as needed by the PEP 302 API, a
magic string is added to sys.path, and module __path__ attributes in
order to hook in our remapping loader. This magic string is currently
<oldlib> and some changes were necessary to Python’s site.py file
in order to treat all sys.path entries starting with < as
special. Specifically, no attempt is made to make them absolute file
names (since they aren’t file names at all).
In order for the remapping import hooks to work, the module or package
must be physically located under its new name. This is because the
import hooks catch only modules that are not already imported, and
cannot be imported by Python’s built-in import rules. Thus, if a
module has been moved, say from Lib/StringIO.py to Lib/stringio.py,
and the former’s .pyc file has been removed, then without the
remapper, this would fail:
import StringIO
Instead, with the remapper, this failing import will be caught, the
old name will be looked up in the registered remappings, and in this
case, the new name stringio will be found. The remapper then
attempts to import the new name, and if that succeeds, it binds the
resulting module into sys.modules, under both the old and new names.
Thus, the above import will result in entries in sys.modules for
‘StringIO’ and ‘stringio’, and both will point to the exact same
module object.
Note that no way to disable the remapping machinery is proposed, short
of moving all the .mv files away or programmatically removing them
in some custom start up code. In Python 3.0, the remappings will be
eliminated, leaving only the “new” names.
Programmatic Interface
Several methods are added to the sys.stdlib_remapper object, which
third party packages can use to register their own remappings. Note
however that in all cases, there is one and only one mapping from an
old name to a new name. If two .mv files contain different
mappings for an old name, or if a programmatic call is made with an
old name that is already remapped, the previous mapping is lost. This
will not affect any already imported modules.
The following methods are available on the sys.stdlib_remapper
object:
read_mv_file(filename) – Read the given file and register all
remappings found in the file.
read_directory_mv_files(dirname, suffix='.mv') – List the given
directory, reading all files in that directory that have the
matching suffix (.mv by default). For each parsed file,
register all the remappings found in that file.
set_mapping(oldname, newname) – Register a new mapping from an
old module name to a new module name. Both must be the full
dotted-path name to the module. newname may be None in which
case any existing mapping for oldname will be removed (it is not an
error if there is no existing mapping).
get_mapping(oldname, default=None) – Return any registered
newname for the given oldname. If there is no registered remapping,
default is returned.
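For illustration only (this machinery was never shipped): the snippet below
simply restates the interface specified above, using hypothetical module
names:
import sys

# Register a third-party renaming (hypothetical names).
sys.stdlib_remapper.set_mapping('mypkg.OldModule', 'mypkg.newmodule')

# Query it back; returns 'mypkg.newmodule', or the default if no mapping
# is registered.
sys.stdlib_remapper.get_mapping('mypkg.OldModule')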
Open Issues
Should there be a command line switch and/or environment variable to
disable all remappings?
Should remappings occur recursively?
Should we automatically parse package directories for .mv files when
the package’s __init__.py is loaded? This would allow packages to
easily include .mv files for their own remappings. Compare what the
email package currently has to do if we place its .mv file in
the email package instead of in the oldlib package:
# Expose old names
import os, sys
sys.stdlib_remapper.read_directory_mv_files(os.path.dirname(__file__))
I think we should automatically read a package’s directory for any
.mv files it might contain.
Reference Implementation
A reference implementation, in the form of a patch against the current
(as of this writing) state of the Python 2.6 svn trunk, is available
as SourceForge patch #1675334 [1]. Note that this patch includes a
rename of cStringIO to cstringio, but this is primarily for
illustrative and unit testing purposes. Should the patch be accepted,
we might want to split this change off into other PEP 3108 changes.
References
[1]
Reference implementation
(http://bugs.python.org/issue1675334)
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 364 – Transitioning to the Py3K Standard Library | Standards Track | PEP 3108 describes the reorganization of the Python standard library
for the Python 3.0 release. This PEP describes a
mechanism for transitioning from the Python 2.x standard library to
the Python 3.0 standard library. This transition will allow and
encourage Python programmers to use the new Python 3.0 library names
starting with Python 2.6, while maintaining the old names for backward
compatibility. In this way, a Python programmer will be able to write
forward compatible code without sacrificing interoperability with
existing Python programs. |
PEP 365 – Adding the pkg_resources module
Author:
Phillip J. Eby <pje at telecommunity.com>
Status:
Rejected
Type:
Standards Track
Topic:
Packaging
Created:
30-Apr-2007
Post-History:
30-Apr-2007
Table of Contents
Abstract
Proposal
Rationale
Implementation and Documentation
Copyright
Abstract
This PEP proposes adding an enhanced version of the pkg_resources
module to the standard library.
pkg_resources is a module used to find and manage Python
package/version dependencies and access bundled files and resources,
including those inside of zipped .egg files. Currently,
pkg_resources is only available through installing the entire
setuptools distribution, but it does not depend on any other part
of setuptools; in effect, it comprises the entire runtime support
library for Python Eggs, and is independently useful.
In addition, with one feature addition, this module could support
easy bootstrap installation of several Python package management
tools, including setuptools, workingenv, and zc.buildout.
Proposal
Rather than proposing to include setuptools in the standard
library, this PEP proposes only that pkg_resources be added to the
standard library for Python 2.6 and 3.0. pkg_resources is
considerably more stable than the rest of setuptools, with virtually
no new features being added in the last 12 months.
However, this PEP also proposes that a new feature be added to
pkg_resources, before being added to the stdlib. Specifically, it
should be possible to do something like:
python -m pkg_resources SomePackage==1.2
to request downloading and installation of SomePackage from PyPI.
This feature would not be a replacement for easy_install;
instead, it would rely on SomePackage having pure-Python .egg
files listed for download via the PyPI XML-RPC API, and the eggs would
be placed in the $PYTHON_EGG_CACHE directory, where they would
not be importable by default. (And no scripts would be installed.)
However, if the downloaded egg contains installation bootstrap code, it
will be given a chance to run.
These restrictions would allow the code to be extremely simple, yet
still powerful enough to support users downloading package management
tools such as setuptools, workingenv and zc.buildout,
simply by supplying the tool’s name on the command line.
Rationale
Many users have requested that setuptools be included in the
standard library, to save users needing to go through the awkward
process of bootstrapping it. However, most of the bootstrapping
complexity comes from the fact that setuptools-installed code cannot
use the pkg_resources runtime module unless setuptools is already
installed. Thus, installing setuptools requires (in a sense) that
setuptools already be installed.
Other Python package management tools, such as workingenv and
zc.buildout, have similar bootstrapping issues, since they both
make use of setuptools, but also want to provide users with something
approaching a “one-step install”. The complexity of creating bootstrap
utilities for these and any other such tools that arise in the future is
greatly reduced if pkg_resources is already present, and is also
able to download pre-packaged eggs from PyPI.
(It would also mean that setuptools would not need to be installed
in order to simply use eggs, as opposed to building them.)
Finally, in addition to providing access to eggs built via setuptools
or other packaging tools, it should be noted that since Python 2.5,
the distutils install package metadata (aka PKG-INFO) files that
can be read by pkg_resources to identify what distributions are
already on sys.path. In environments where Python packages are
installed using system package tools (like RPM), the pkg_resources
module provides an API for detecting what versions of what packages
are installed, even if those packages were installed via the distutils
instead of setuptools.
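For example, with the setuptools-distributed pkg_resources, that
installed-distribution API looks roughly like this (a brief sketch, not taken
from the PEP):
import pkg_resources

# Enumerate the distributions visible on sys.path.
for dist in pkg_resources.working_set:
    print('%s %s' % (dist.project_name, dist.version))

# Verify that a requirement is already satisfied; raises
# DistributionNotFound or VersionConflict otherwise.
pkg_resources.require('setuptools>=0.6')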
Implementation and Documentation
The pkg_resources implementation is maintained in the Python
SVN repository under /sandbox/trunk/setuptools/; see
pkg_resources.py and pkg_resources.txt. Documentation for the
egg format(s) supported by pkg_resources can be found in
doc/formats.txt. HTML versions of these documents are available
at:
http://peak.telecommunity.com/DevCenter/PkgResources and
http://peak.telecommunity.com/DevCenter/EggFormats
(These HTML versions are for setuptools 0.6; they may not reflect all
of the changes found in the Subversion trunk’s .txt versions.)
Copyright
This document has been placed in the public domain.
| Rejected | PEP 365 – Adding the pkg_resources module | Standards Track | This PEP proposes adding an enhanced version of the pkg_resources
module to the standard library. |
PEP 366 – Main module explicit relative imports
Author:
Alyssa Coghlan <ncoghlan at gmail.com>
Status:
Final
Type:
Standards Track
Created:
01-May-2007
Python-Version:
2.6, 3.0
Post-History:
01-May-2007, 04-Jul-2007, 07-Jul-2007, 23-Nov-2007
Table of Contents
Abstract
Proposed Change
Rationale for Change
Reference Implementation
Alternative Proposals
References
Copyright
Abstract
This PEP proposes a backwards compatible mechanism that permits
the use of explicit relative imports from executable modules within
packages. Such imports currently fail due to an awkward interaction
between PEP 328 and PEP 338.
By adding a new module level attribute, this PEP allows relative imports
to work automatically if the module is executed using the -m switch.
A small amount of boilerplate in the module itself will allow the relative
imports to work when the file is executed by name.
Guido accepted the PEP in November 2007 [5].
Proposed Change
The major proposed change is the introduction of a new module level
attribute, __package__. When it is present, relative imports will
be based on this attribute rather than the module __name__
attribute.
As with the current __name__ attribute, setting __package__ will
be the responsibility of the PEP 302 loader used to import a module.
Loaders which use imp.new_module() to create the module object will
have the new attribute set automatically to None. When the import
system encounters an explicit relative import in a module without
__package__ set (or with it set to None), it will calculate and
store the correct value (__name__.rpartition('.')[0] for normal
modules and __name__ for package initialisation modules). If
__package__ has already been set then the import system will use
it in preference to recalculating the package name from the
__name__ and __path__ attributes.
The runpy module will explicitly set the new attribute, basing it off
the name used to locate the module to be executed rather than the name
used to set the module’s __name__ attribute. This will allow relative
imports to work correctly from main modules executed with the -m
switch.
When the main module is specified by its filename, then the
__package__ attribute will be set to None. To allow
relative imports when the module is executed directly, boilerplate
similar to the following would be needed before the first relative
import statement:
if __name__ == "__main__" and __package__ is None:
__package__ = "expected.package.name"
Note that this boilerplate is sufficient only if the top level package
is already accessible via sys.path. Additional code that manipulates
sys.path would be needed in order for direct execution to work
without the top level package already being importable.
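One possible form of that extra code (illustrative only, not prescribed by
this PEP; the package name is a placeholder and the path manipulation assumes
the module sits directly inside the top level package) is:
if __name__ == "__main__" and __package__ is None:
    import os, sys
    # Make the directory containing the top level package importable
    # when the file is executed by name.
    sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
    __package__ = "expected_package_name"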
This approach also has the same disadvantage as the use of absolute
imports of sibling modules - if the script is moved to a different
package or subpackage, the boilerplate will need to be updated
manually. It has the advantage that this change need only be made
once per file, regardless of the number of relative imports.
Note that setting __package__ to the empty string explicitly is
permitted, and has the effect of disabling all relative imports from
that module (since the import machinery will consider it to be a
top level module in that case). This means that tools like runpy
do not need to provide special case handling for top level modules
when setting __package__.
Rationale for Change
The current inability to use explicit relative imports from the main
module is the subject of at least one open SF bug report (#1510172) [1],
and has most likely been a factor in at least a few queries on
comp.lang.python (such as Alan Isaac’s question in [2]).
This PEP is intended to provide a solution which permits explicit
relative imports from main modules, without incurring any significant
costs during interpreter startup or normal module import.
The section in PEP 338 on relative imports and the main module provides
further details and background on this problem.
Reference Implementation
Rev 47142 in SVN implemented an early variant of this proposal
which stored the main module’s real module name in the
__module_name__ attribute. It was reverted due to the fact
that 2.5 was already in beta by that time.
Patch 1487 [4] is the proposed implementation for this PEP.
Alternative Proposals
PEP 3122 proposed addressing this problem by changing the way
the main module is identified. That’s a significant compatibility cost
to incur to fix something that is a pretty minor bug in the overall
scheme of things, and the PEP was rejected [3].
The advantage of the proposal in this PEP is that its only impact on
normal code is the small amount of time needed to set the extra
attribute when importing a module. Relative imports themselves should
be sped up fractionally, as the package name is cached in the module
globals, rather than having to be worked out again for each relative
import.
References
[1]
Absolute/relative import not working?
(https://github.com/python/cpython/issues/43535)
[2]
c.l.p. question about modules and relative imports
(http://groups.google.com/group/comp.lang.python/browse_thread/thread/c44c769a72ca69fa/)
[3]
Guido’s rejection of PEP 3122
(https://mail.python.org/pipermail/python-3000/2007-April/006793.html)
[4]
PEP 366 implementation patch
(https://github.com/python/cpython/issues/45828)
[5]
Acceptance of the PEP
(https://mail.python.org/pipermail/python-dev/2007-November/075475.html)
Copyright
This document has been placed in the public domain.
| Final | PEP 366 – Main module explicit relative imports | Standards Track | This PEP proposes a backwards compatible mechanism that permits
the use of explicit relative imports from executable modules within
packages. Such imports currently fail due to an awkward interaction
between PEP 328 and PEP 338. |
PEP 367 – New Super
Author:
Calvin Spealman <ironfroggy at gmail.com>,
Tim Delaney <timothy.c.delaney at gmail.com>
Status:
Superseded
Type:
Standards Track
Created:
28-Apr-2007
Python-Version:
2.6
Post-History:
28-Apr-2007,
29-Apr-2007,
29-Apr-2007,
14-May-2007
Table of Contents
Numbering Note
Abstract
Rationale
Specification
Open Issues
Determining the class object to use
Should super actually become a keyword?
Closed Issues
super used with __call__ attributes
Reference Implementation
Alternative Proposals
No Changes
Dynamic attribute on super type
super(__this_class__, self)
self.__super__.foo(*args)
super(self, *args) or __super__(self, *args)
super.foo(self, *args)
super or super()
super(*p, **kw)
History
References
Copyright
Numbering Note
This PEP has been renumbered to PEP 3135. The text below is the last
version submitted under the old number.
Abstract
This PEP proposes syntactic sugar for use of the super type to automatically
construct instances of the super type binding to the class that a method was
defined in, and the instance (or class object for classmethods) that the method
is currently acting upon.
The premise of the new super usage suggested is as follows:
super.foo(1, 2)
to replace the old:
super(Foo, self).foo(1, 2)
and the current __builtin__.super be aliased to __builtin__.__super__
(with __builtin__.super to be removed in Python 3.0).
It is further proposed that assignment to super become a SyntaxError,
similar to the behaviour of None.
Rationale
The current usage of super requires an explicit passing of both the class and
instance it must operate from, violating the DRY (Don’t Repeat
Yourself) principle. This hinders any change to the class name, and is widely
considered a wart.
Specification
Within the specification section, some special terminology will be used to
distinguish similar and closely related concepts. “super type” will refer to
the actual builtin type named “super”. A “super instance” is simply an instance
of the super type, which is associated with a class and possibly with an
instance of that class.
Because the new super semantics are not backwards compatible with Python
2.5, the new semantics will require a __future__ import:
from __future__ import new_super
The current __builtin__.super will be aliased to __builtin__.__super__.
This will occur regardless of whether the new super semantics are active.
It is not possible to simply rename __builtin__.super, as that would affect
modules that do not use the new super semantics. In Python 3.0 it is
proposed that the name __builtin__.super will be removed.
Replacing the old usage of super, calls to the next class in the MRO (method
resolution order) can be made without explicitly creating a super
instance (although doing so will still be supported via __super__). Every
function will have an implicit local named super. This name behaves
identically to a normal local, including use by inner functions via a cell,
with the following exceptions:
Assigning to the name super will raise a SyntaxError at compile time;
Calling a static method or normal function that accesses the name super
will raise a TypeError at runtime.
Every function that uses the name super, or has an inner function that
uses the name super, will include a preamble that performs the equivalent
of:
super = __builtin__.__super__(<class>, <instance>)
where <class> is the class that the method was defined in, and
<instance> is the first parameter of the method (normally self for
instance methods, and cls for class methods). For static methods and normal
functions, <class> will be None, resulting in a TypeError being
raised during the preamble.
Note: The relationship between super and __super__ is similar to that
between import and __import__.
Much of this was discussed in the thread of the python-dev list, “Fixing super
anyone?” [1].
Open Issues
Determining the class object to use
The exact mechanism for associating the method with the defining class is not
specified in this PEP, and should be chosen for maximum performance. For
CPython, it is suggested that the class instance be held in a C-level variable
on the function object which is bound to one of NULL (not part of a class),
Py_None (static method) or a class object (instance or class method).
Should super actually become a keyword?
With this proposal, super would become a keyword to the same extent that
None is a keyword. It is possible that further restricting the super
name may simplify implementation, however some are against the actual
keyword-ization of super. The simplest solution is often the correct solution
and the simplest solution may well not be adding additional keywords to the
language when they are not needed. Still, it may solve other open issues.
Closed Issues
super used with __call__ attributes
It was considered that instantiating super instances the classic way might be
a problem, because calling such an instance would look up the __call__
attribute and thus try to perform an automatic super lookup to the next class
in the MRO. However, this was found not to be the case, because calling an object only looks up
the __call__ method directly on the object’s type. The following example shows
this in action.
class A(object):
def __call__(self):
return '__call__'
def __getattribute__(self, attr):
if attr == '__call__':
return lambda: '__getattribute__'
a = A()
assert a() == '__call__'
assert a.__call__() == '__getattribute__'
In any case, with the renaming of __builtin__.super to
__builtin__.__super__ this issue goes away entirely.
Reference Implementation
It is impossible to implement the above specification entirely in Python. This
reference implementation has the following differences to the specification:
New super semantics are implemented using bytecode hacking.
Assignment to super is not a SyntaxError. Also see point #4.
Classes must either use the metaclass autosuper_meta or inherit from
the base class autosuper to acquire the new super semantics.
super is not an implicit local variable. In particular, for inner
functions to be able to use the super instance, there must be an assignment
of the form super = super in the method.
The reference implementation assumes that it is being run on Python 2.5+.
#!/usr/bin/env python
#
# autosuper.py
from array import array
import dis
import new
import types
import __builtin__
__builtin__.__super__ = __builtin__.super
del __builtin__.super
# We need these for modifying bytecode
from opcode import opmap, HAVE_ARGUMENT, EXTENDED_ARG
LOAD_GLOBAL = opmap['LOAD_GLOBAL']
LOAD_NAME = opmap['LOAD_NAME']
LOAD_CONST = opmap['LOAD_CONST']
LOAD_FAST = opmap['LOAD_FAST']
LOAD_ATTR = opmap['LOAD_ATTR']
STORE_FAST = opmap['STORE_FAST']
LOAD_DEREF = opmap['LOAD_DEREF']
STORE_DEREF = opmap['STORE_DEREF']
CALL_FUNCTION = opmap['CALL_FUNCTION']
STORE_GLOBAL = opmap['STORE_GLOBAL']
DUP_TOP = opmap['DUP_TOP']
POP_TOP = opmap['POP_TOP']
NOP = opmap['NOP']
JUMP_FORWARD = opmap['JUMP_FORWARD']
ABSOLUTE_TARGET = dis.hasjabs
def _oparg(code, opcode_pos):
return code[opcode_pos+1] + (code[opcode_pos+2] << 8)
def _bind_autosuper(func, cls):
co = func.func_code
name = func.func_name
newcode = array('B', co.co_code)
codelen = len(newcode)
newconsts = list(co.co_consts)
newvarnames = list(co.co_varnames)
# Check if the global 'super' keyword is already present
try:
sn_pos = list(co.co_names).index('super')
except ValueError:
sn_pos = None
# Check if the varname 'super' keyword is already present
try:
sv_pos = newvarnames.index('super')
except ValueError:
sv_pos = None
# Check if the cellvar 'super' keyword is already present
try:
sc_pos = list(co.co_cellvars).index('super')
except ValueError:
sc_pos = None
# If 'super' isn't used anywhere in the function, we don't have anything to do
if sn_pos is None and sv_pos is None and sc_pos is None:
return func
c_pos = None
s_pos = None
n_pos = None
# Check if the 'cls_name' and 'super' objects are already in the constants
for pos, o in enumerate(newconsts):
if o is cls:
c_pos = pos
if o is __super__:
s_pos = pos
if o == name:
n_pos = pos
# Add in any missing objects to constants and varnames
if c_pos is None:
c_pos = len(newconsts)
newconsts.append(cls)
if n_pos is None:
n_pos = len(newconsts)
newconsts.append(name)
if s_pos is None:
s_pos = len(newconsts)
newconsts.append(__super__)
if sv_pos is None:
sv_pos = len(newvarnames)
newvarnames.append('super')
# This goes at the start of the function. It is:
#
# super = __super__(cls, self)
#
# If 'super' is a cell variable, we store to both the
# local and cell variables (i.e. STORE_FAST and STORE_DEREF).
#
preamble = [
LOAD_CONST, s_pos & 0xFF, s_pos >> 8,
LOAD_CONST, c_pos & 0xFF, c_pos >> 8,
LOAD_FAST, 0, 0,
CALL_FUNCTION, 2, 0,
]
if sc_pos is None:
# 'super' is not a cell variable - we can just use the local variable
preamble += [
STORE_FAST, sv_pos & 0xFF, sv_pos >> 8,
]
else:
# If 'super' is a cell variable, we need to handle LOAD_DEREF.
preamble += [
DUP_TOP,
STORE_FAST, sv_pos & 0xFF, sv_pos >> 8,
STORE_DEREF, sc_pos & 0xFF, sc_pos >> 8,
]
preamble = array('B', preamble)
# Bytecode for loading the local 'super' variable.
load_super = array('B', [
LOAD_FAST, sv_pos & 0xFF, sv_pos >> 8,
])
preamble_len = len(preamble)
need_preamble = False
i = 0
while i < codelen:
opcode = newcode[i]
need_load = False
remove_store = False
if opcode == EXTENDED_ARG:
raise TypeError("Cannot use 'super' in function with EXTENDED_ARG opcode")
# If the opcode is an absolute target it needs to be adjusted
# to take into account the preamble.
elif opcode in ABSOLUTE_TARGET:
oparg = _oparg(newcode, i) + preamble_len
newcode[i+1] = oparg & 0xFF
newcode[i+2] = oparg >> 8
# If LOAD_GLOBAL(super) or LOAD_NAME(super) then we want to change it into
# LOAD_FAST(super)
elif (opcode == LOAD_GLOBAL or opcode == LOAD_NAME) and _oparg(newcode, i) == sn_pos:
need_preamble = need_load = True
# If LOAD_FAST(super) then we just need to add the preamble
elif opcode == LOAD_FAST and _oparg(newcode, i) == sv_pos:
need_preamble = need_load = True
# If LOAD_DEREF(super) then we change it into LOAD_FAST(super) because
# it's slightly faster.
elif opcode == LOAD_DEREF and _oparg(newcode, i) == sc_pos:
need_preamble = need_load = True
if need_load:
newcode[i:i+3] = load_super
i += 1
if opcode >= HAVE_ARGUMENT:
i += 2
# No changes needed - get out.
if not need_preamble:
return func
# Our preamble will have 3 things on the stack
co_stacksize = max(3, co.co_stacksize)
# Conceptually, our preamble is on the `def` line.
co_lnotab = array('B', co.co_lnotab)
if co_lnotab:
co_lnotab[0] += preamble_len
co_lnotab = co_lnotab.tostring()
# Our code consists of the preamble and the modified code.
codestr = (preamble + newcode).tostring()
codeobj = new.code(co.co_argcount, len(newvarnames), co_stacksize,
co.co_flags, codestr, tuple(newconsts), co.co_names,
tuple(newvarnames), co.co_filename, co.co_name,
co.co_firstlineno, co_lnotab, co.co_freevars,
co.co_cellvars)
func.func_code = codeobj
func.func_class = cls
return func
class autosuper_meta(type):
def __init__(cls, name, bases, clsdict):
UnboundMethodType = types.UnboundMethodType
for v in vars(cls):
o = getattr(cls, v)
if isinstance(o, UnboundMethodType):
_bind_autosuper(o.im_func, cls)
class autosuper(object):
__metaclass__ = autosuper_meta
if __name__ == '__main__':
class A(autosuper):
def f(self):
return 'A'
class B(A):
def f(self):
return 'B' + super.f()
class C(A):
def f(self):
def inner():
return 'C' + super.f()
# Needed to put 'super' into a cell
super = super
return inner()
class D(B, C):
def f(self, arg=None):
var = None
return 'D' + super.f()
assert D().f() == 'DBCA'
Disassembly of B.f and C.f reveals the different preambles used when super
is simply a local variable compared to when it is used by an inner function.
>>> dis.dis(B.f)
214 0 LOAD_CONST 4 (<type 'super'>)
3 LOAD_CONST 2 (<class '__main__.B'>)
6 LOAD_FAST 0 (self)
9 CALL_FUNCTION 2
12 STORE_FAST 1 (super)
215 15 LOAD_CONST 1 ('B')
18 LOAD_FAST 1 (super)
21 LOAD_ATTR 1 (f)
24 CALL_FUNCTION 0
27 BINARY_ADD
28 RETURN_VALUE
>>> dis.dis(C.f)
218 0 LOAD_CONST 4 (<type 'super'>)
3 LOAD_CONST 2 (<class '__main__.C'>)
6 LOAD_FAST 0 (self)
9 CALL_FUNCTION 2
12 DUP_TOP
13 STORE_FAST 1 (super)
16 STORE_DEREF 0 (super)
219 19 LOAD_CLOSURE 0 (super)
22 LOAD_CONST 1 (<code object inner at 00C160A0, file "autosuper.py", line 219>)
25 MAKE_CLOSURE 0
28 STORE_FAST 2 (inner)
223 31 LOAD_FAST 1 (super)
34 STORE_DEREF 0 (super)
224 37 LOAD_FAST 2 (inner)
40 CALL_FUNCTION 0
43 RETURN_VALUE
Note that in the final implementation, the preamble would not be part of the
bytecode of the method, but would occur immediately following unpacking of
parameters.
Alternative Proposals
No Changes
Although it's always attractive to just keep things how they are, people have
sought a change in how super calls are written for some time, and for good
reasons, all mentioned previously:
Decoupling from the class name (which might not even be bound to the
right class anymore!)
Simpler looking, cleaner super calls would be better
Dynamic attribute on super type
The proposal adds a dynamic attribute lookup to the super type, which will
automatically determine the proper class and instance parameters. Each super
attribute lookup identifies these parameters and performs the super lookup on
the instance, as the current super implementation does with the explicit
invocation of a super instance upon a class and instance.
This proposal relies on sys._getframe(), which is not appropriate for anything
except a prototype implementation.
super(__this_class__, self)
This is nearly an anti-proposal, as it basically relies on the acceptance of
the __this_class__ PEP, which proposes a special name that would always be
bound to the class within which it is used. If that is accepted, __this_class__
could simply be used instead of the class’ name explicitly, solving the name
binding issues [2].
self.__super__.foo(*args)
The __super__ attribute is mentioned in this PEP in several places, and could
be a candidate for the complete solution, actually using it explicitly instead
of any super usage directly. However, double-underscore names are usually an
internal detail, and are conventionally kept out of everyday code.
super(self, *args) or __super__(self, *args)
This solution only solves the problem of the type indication, does not handle
differently named super methods, and is explicit about the name of the
instance. It is less flexible because it cannot be applied to other method
names in cases where that is needed. One use case it fails to cover is where a
base class has a factory classmethod and a subclass has two factory
classmethods, both of which need to make proper super calls to the one
in the base class.
super.foo(self, *args)
This variation actually eliminates the problems with locating the proper
instance, and if any of the alternatives were pushed into the spotlight, I
would want it to be this one.
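For comparison, under this spelling the B.f method from the reference implementation above would read roughly like this (illustrative only; the variant was not adopted):
class B(A):
    def f(self):
        # the instance is passed explicitly; the class is still implicit
        return 'B' + super.f(self)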
super or super()
This proposal leaves no room for different method names, signatures, or
application to other classes or instances. A way to allow some similar use
alongside the normal proposal would be preferable, encouraging good design of
multiple inheritance trees and compatible methods.
super(*p, **kw)
There has been the proposal that directly calling super(*p, **kw) would
be equivalent to calling the method on the super object with the same name
as the method currently being executed, i.e. the following two methods would be
equivalent:
def f(self, *p, **kw):
super.f(*p, **kw)
def f(self, *p, **kw):
super(*p, **kw)
There is strong sentiment for and against this, but implementation and style
concerns are obvious. Guido has suggested that this should be excluded from
this PEP on the principle of KISS (Keep It Simple Stupid).
History
29-Apr-2007 - Changed title from “Super As A Keyword” to “New Super”
Updated much of the language and added a terminology section
for clarification in confusing places.
Added reference implementation and history sections.
06-May-2007 - Updated by Tim Delaney to reflect discussions on the python-3000 and python-dev mailing lists.
References
[1]
Fixing super anyone?
(https://mail.python.org/pipermail/python-3000/2007-April/006667.html)
[2]
PEP 3130: Access to Module/Class/Function Currently Being Defined (this)
(https://mail.python.org/pipermail/python-ideas/2007-April/000542.html)
Copyright
This document has been placed in the public domain.
| Superseded | PEP 367 – New Super | Standards Track | This PEP proposes syntactic sugar for use of the super type to automatically
construct instances of the super type binding to the class that a method was
defined in, and the instance (or class object for classmethods) that the method
is currently acting upon. |
PEP 368 – Standard image protocol and class
Author:
Lino Mastrodomenico <l.mastrodomenico at gmail.com>
Status:
Deferred
Type:
Standards Track
Created:
28-Jun-2007
Python-Version:
2.6, 3.0
Post-History:
Table of Contents
Abstract
PEP Deferral
Rationale
Specification
Python API
Mode Objects
Image Protocol
Image and ImageMixin Classes
Line Objects
Pixel Objects
ImageSize Class
C API
Examples
Backwards Compatibility
Reference Implementation
Acknowledgments
Copyright
Abstract
The current situation of image storage and manipulation in the Python
world is extremely fragmented: almost every library that uses image
objects has implemented its own image class, incompatible with
everyone else’s and often not very pythonic. A basic RGB image class
exists in the standard library (Tkinter.PhotoImage), but is pretty
much unusable, and unused, for anything except Tkinter programming.
This fragmentation not only takes up valuable space in the developers'
minds, but also makes the exchange of images between different
libraries (needed in relatively common use cases) slower and more
complex than it needs to be.
This PEP proposes to improve the situation by defining a simple and
pythonic image protocol/interface that can be hopefully accepted and
implemented by existing image classes inside and outside the standard
library without breaking backward compatibility with their existing
user bases. In practice this is a definition of how a minimal
image-like object should look and act (in a similar way to the
read() and write() methods in file-like objects).
The inclusion in the standard library of a class that provides basic
image manipulation functionality and implements the new protocol is
also proposed, together with a mixin class that helps adding support
for the protocol to existing image classes.
PEP Deferral
Further exploration of the concepts covered in this PEP has been deferred
for lack of a current champion interested in promoting the goals of the PEP
and collecting and incorporating feedback, and with sufficient available
time to do so effectively.
Rationale
A good way to have high quality modules ready for inclusion in the
Python standard library is to simply wait for natural selection among
competing external libraries to provide a clear winner with useful
functionality and a big user base. Then the de facto standard can be
officially sanctioned by including it in the standard library.
Unfortunately this approach hasn’t worked well for the creation of a
dominant image class in the Python world: almost every third-party
library that requires an image object creates its own class
incompatible with the ones from other libraries. This is a real
problem because it’s entirely reasonable for a program to create and
manipulate an image using, e.g., PIL (the Python Imaging Library) and
then display it using wxPython or pygame. But these libraries have
different and incompatible image classes, and the usual solution is to
manually “export” an image from the source to a (width, height,
bytes_string) tuple and “import” it creating a new instance in the
target format. This approach works, but is both uglier and slower
than it needs to be.
Another “solution” that has been sometimes used is the creation of
specific adapters and/or converters from a class to another (e.g. PIL
offers the ImageTk module for converting PIL images to a class
compatible with the Tkinter one). But this approach doesn’t scale
well with the number of libraries involved and it’s still annoying for
the user: if I have a perfectly good image object why should I convert
before passing it to the next method, why can’t it simply accept my
image as-is?
The problem isn’t by any stretch limited to the three mentioned
libraries and has probably multiple causes, including two that IMO are
very important to understand before solving it:
in today’s computing world an image is a basic type not strictly
tied to a specific domain. This is why there will never be a clear
winner between the image classes from the three libraries mentioned
above (PIL, wxPython and pygame): they cover different domains and
don’t really compete with each other;
the Python standard library has never provided a good image class
that can be adopted or imitated by third-party modules.
Tkinter.PhotoImage provides basic RGB functionality, but it’s by
far the slowest and ugliest of the bunch and it can be instantiated
only after the Tkinter root window has been created.
This PEP tries to improve this situation in four ways:
It defines a simple and pythonic image protocol/interface (both on
the Python and the C side) that can be hopefully accepted and
implemented by existing image classes inside and outside the
standard library without breaking backward compatibility with
their existing user bases.
It proposes the inclusion in the standard library of three new
classes:
ImageMixin provides almost everything necessary to implement
the new protocol; its main purpose is to make it as simple as
possible to support this interface for existing libraries, in
some cases as simple as adding it to the list of base classes and
doing minor additions to the constructor.
Image is a subclass of ImageMixin and will add a
constructor that can resize and/or convert an image between
different pixel formats. This is intended to provide a fast and
efficient default implementation of the new protocol.
ImageSize is a minor helper class. See below for details.
Tkinter.PhotoImage will implement the new protocol (mostly
through the ImageMixin class) and all the Tkinter methods that
can receive an image will be modified to accept any object that
implements the interface. As an aside the author of this PEP will
collaborate with the developers of the most common external
libraries to achieve the same goal (supporting the protocol in
their classes and accepting any class that implements it).
New PyImage_* functions will be added to the CPython C API:
they implement the C side of the protocol and accept as first
parameter any object that supports it, even if it isn’t an
instance of the Image/ImageMixin classes.
The main effects for the end user will be a simplification of the
interchange of images between different libraries (if everything goes
well, any Python library will accept images from any other library)
and the out-of-the-box availability of the new Image class. The
new class is intended to cover simple but common use cases like
cropping and/or resizing a photograph to the desired size and passing
it to an appropriate widget for displaying it on a window, or darkening a
texture and passing it to a 3D library.
The Image class is not intended to replace or compete with PIL,
Pythonmagick or NumPy, even if it provides a (very small) subset of
the functionality of these three libraries. In particular PIL offers
very rich image manipulation features with dozens of classes,
filters, transformations and file formats. The inclusion of PIL (or
something similar) in the standard library may, or may not, be a
worthy goal but it’s completely outside the scope of this PEP.
Specification
The imageop module is used as the default location for the new
classes and objects because it has for a long time hosted functions
that provided a somewhat similar functionality, but a new module may
be created if preferred (e.g. a new “image” or “media” module;
the latter may eventually include other multimedia classes).
MODES is a new module level constant: it is a set of the pixel
formats supported by the Image class. Any image object that
implements the new protocol is guaranteed to be formatted in one of
these modes, but libraries that accept images are allowed to support
only a subset of them.
These modes are in turn also available as module level constants (e.g.
imageop.RGB).
The following table is a summary of the modes currently supported and
their properties:
Name        Component names   Bits per component   Subsampling   Valid intervals
L           l (lowercase L)   8                    no            full range
L16         l                 16                   no            full range
L32         l                 32                   no            full range
LA          l, a              8                    no            full range
LA32        l, a              16                   no            full range
RGB         r, g, b           8                    no            full range
RGB48       r, g, b           16                   no            full range
RGBA        r, g, b, a        8                    no            full range
RGBA64      r, g, b, a        16                   no            full range
YV12        y, cr, cb         8                    1, 2, 2       16-235, 16-240, 16-240
JPEG_YV12   y, cr, cb         8                    1, 2, 2       full range
CMYK        c, m, y, k        8                    no            full range
CMYK64      c, m, y, k        16                   no            full range
When the name of a mode ends with a number, it represents the average
number of bits per pixel. All the other modes simply use a byte per
component per pixel.
No palette modes or modes with less than 8 bits per component are
supported. Welcome to the 21st century.
Here’s a quick description of the modes and the rationale for their
inclusion; there are four groups of modes:
grayscale (L* modes): they are heavily used in scientific
computing (those people may also need a very high dynamic range and
precision, hence L32, the only mode with 32 bits per component)
and sometimes it can be useful to consider a single component of a
color image as a grayscale image (this is used by the individual
planes of the planar images, see YV12 below); the name of the
component ('l', lowercase letter L) stands for luminance, the
second optional component ('a') is the alpha value and
represents the opacity of the pixels: alpha = 0 means full
transparency, alpha = 255/65535 represents a fully opaque pixel;
RGB* modes: the garden variety color images. The optional
alpha component has the same meaning as in grayscale modes;
YCbCr, a.k.a. YUV (*YV12 modes). These modes are planar
(i.e. the values of all the pixels for each component are stored in
a consecutive memory area, instead of the usual arrangement where
all the components of a pixel reside in consecutive bytes) and use
a 1, 2, 2 (a.k.a. 4:2:0) subsampling (i.e. each pixel has its own Y
value, but the Cb and Cr components are shared between groups of
2x2 adjacent pixels) because this is the format that’s by far the
most common for YCbCr images. Please note that the V (Cr) plane is
stored before the U (Cb) plane. YV12 is commonly used for MPEG2 (including
DVDs), MPEG4 (both ASP/DivX and AVC/H.264) and Theora video frames. Valid
values for Y are in range(16, 236), and valid values for Cb
and Cr are in range(16, 241). JPEG_YV12 is similar to
YV12, but the three components can have the full range of 256
values. It’s the native format used by almost all JPEG/JFIF files
and by MJPEG video frames. The “strangeness” of these two wrt all
the other supported modes derives from the fact that they are
widely used that way by a lot of existing libraries and
applications; this is also the reason why they are included (and
the fact that they can't be losslessly converted to RGB because YCbCr
is a bigger color space); the funny 4:2:0 planar arrangement of the
pixel values is relatively easy to support because in most cases
the three planes can be considered three separate grayscale images;
CMYK* modes (cyan, magenta, yellow and black) are subtractive
color modes, used for printing color images on dead trees.
Professional designers love to pretend that they can’t live without
them, so here they are.
Python API
See the examples below.
In Python 2.x, all the new classes defined here are new-style classes.
Mode Objects
The mode objects offer a number of attributes and methods that can be
used for implementing generic algorithms that work on different types
of images:
components
The number of components per pixel (e.g. 4 for an RGBA image).
component_names
A tuple of strings; see the column “Component names” in the above
table.
bits_per_component
8, 16 or 32; see “Bits per component” in the above table.
bytes_per_pixel
components * bits_per_component // 8, only available for non
planar modes (see below).
planar
Boolean; True if the image components reside each in a
separate plane. Currently this happens if and only if the mode
uses subsampling.
subsampling
A tuple that for each component in the mode contains a tuple of
two integers that represent the amount of downsampling in the
horizontal and vertical direction, respectively. In practice it’s
((1, 1), (2, 2), (2, 2)) for YV12 and JPEG_YV12 and
((1, 1),) * components for everything else.
x_divisor
max(x for x, y in subsampling); the width of an image that
uses this mode must be divisible by this value.
y_divisor
max(y for x, y in subsampling); the height of an image that
uses this mode must be divisible by this value.
intervals
A tuple that for each component in the mode contains a tuple of
two integers: the minimum and maximum valid value for the
component. Its value is ((16, 235), (16, 240), (16, 240)) for
YV12 and ((0, 2 ** bits_per_component - 1),) * components
for everything else.
get_length(iterable[integer]) -> int
The parameter must be an iterable that contains two integers: the
width and height of an image; it returns the number of bytes
needed to store an image of these dimensions with this mode.
Implementation detail: the modes are instances of a subclass of
str and have a value equal to their name (e.g. imageop.RGB ==
'RGB') except for L32 that has value 'I'. This is only
intended for backward compatibility with existing PIL users; new code
that uses the image protocol proposed here should not rely on this
detail.
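As an illustration of how these attributes compose, here is a small sketch; the imageop module proposed here was never added to the standard library, and buffer_bytes is a hypothetical helper, so this only shows the intended use of the mode API:
def buffer_bytes(mode, width, height):
    # packed modes: every pixel takes bytes_per_pixel bytes, no padding
    if not mode.planar:
        return width * height * mode.bytes_per_pixel
    # planar (subsampled) modes: let the mode do the bookkeeping
    return mode.get_length((width, height))

# an RGB image needs 3 bytes per pixel ...
assert buffer_bytes(imageop.RGB, 6, 9) == 6 * 9 * 3
# ... while YV12 averages 12 bits (1.5 bytes) per pixel
assert buffer_bytes(imageop.YV12, 4, 2) == 12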
Image Protocol
Any object that supports the image protocol must provide the following
methods and attributes:
mode
The format and the arrangement of the pixels in this image; it’s
one of the constants in the MODES set.
size
An instance of the ImageSize class; it’s a named tuple of two
integers: the width and the height of the image in pixels; both of
them must be >= 1 and can also be accessed as the width and
height attributes of size.
buffer
A sequence of integers between 0 and 255; they are the actual
bytes used for storing the image data (i.e. modifying their values
affects the image pixels and vice versa); the data has a
row-major/C-contiguous order without padding and without any
special memory alignment, even when there are more than 8 bits per
component. The only supported methods are __len__,
__getitem__/__setitem__ (with both integers and slice
indexes) and __iter__; on the C side it implements the buffer
protocol. This is a pretty low level interface to the image and the user is
responsible for using the correct (native) byte order for modes
with more than 8 bit per component and the correct value ranges
for YV12 images. A buffer may or may not keep a reference to
its image, but it’s still safe (if useless) to use the buffer even
after the corresponding image has been destroyed by the garbage
collector (this will require changes to the image class of
wxPython and possibly other libraries). Implementation detail:
this can be an array('B'), a bytes() object or a
specialized fixed-length type.
info
A dict object that can contain arbitrary metadata associated
with the image (e.g. DPI, gamma, ICC profile, exposure time…);
the interpretation of this data is beyond the scope of this PEP
and probably depends on the library used to create and/or to save
the image; if a method of the image returns a new image, it can
copy or adapt metadata from its own info attribute (the
ImageMixin implementation always creates a new image with an
empty info dictionary).
bits_per_component
bytes_per_pixel
component_names
components
intervals
planar
subsampling
Shortcuts for the corresponding mode.* attributes.
map(function[, function...]) -> None
For every pixel in the image, maps each component through the
corresponding function. If only one function is passed, it is
used repeatedly for each component. This method modifies the
image in place and is usually very fast (most of the time the
functions are called only a small number of times, possibly only
once for simple functions without branches), but it imposes a
number of restrictions on the function(s) passed:
it must accept a single integer argument and return a number
(map will round the result to the nearest integer and clip
it to range(0, 2 ** bits_per_component), if necessary);
it must not try to intercept any BaseException,
Exception or any unknown subclass of Exception raised by
any operation on the argument (implementations may try to
optimize the speed by passing funny objects, so even a simple
"if n == 10:" may raise an exception: simply ignore it,
map will take care of it); catching any other exception is
fine;
it should be side-effect free and its result should not depend
on values (other than the argument) that may change during a
single invocation of map.
rotate90() -> image
rotate180() -> image
rotate270() -> image
Return a copy of the image rotated 90, 180 or 270 degrees
counterclockwise around its center.
clip() -> None
Saturates invalid component values in YV12 images to the
minimum or the maximum allowed (see mode.intervals), for other
image modes this method does nothing, very fast; libraries that
save/export YV12 images are encouraged to always call this
method, since intermediate operations (e.g. the map method)
may assign to pixels values outside the valid intervals.
split() -> tuple[image]
Returns a tuple of L, L16 or L32 images corresponding
to the individual components in the image.
Planar images also support attributes with the same names defined in
component_names: they contain grayscale (mode L) images that
offer a view on the pixel values for the corresponding component; any
change to the subimages is immediately reflected on the parent image
and vice versa (their buffers refer to the same memory location).
Non-planar images offer the following additional methods:
pixels() -> iterator[pixel]
Returns an iterator that iterates over all the pixels in the
image, starting from the top line and scanning each line from left
to right. See below for a description of the pixel objects.
__iter__() -> iterator[line]
Returns an iterator that iterates over all the lines in the image,
from top to bottom. See below for a description of the line
objects.
__len__() -> int
Returns the number of lines in the image (size.height).
__getitem__(integer) -> line
Returns the line at the specified (y) position.
__getitem__(tuple[integer]) -> pixel
The parameter must be a tuple of two integers; they are
interpreted respectively as x and y coordinates in the image (0, 0
is the top left corner) and a pixel object is returned.
__getitem__(slice | tuple[integer | slice]) -> image
The parameter must be a slice or a tuple that contains two slices
or an integer and a slice; the selected area of the image is
copied and a new image is returned; image[x:y:z] is equivalent
to image[:, x:y:z].
__setitem__(tuple[integer], integer | iterable[integer]) -> None
Modifies the pixel at specified position; image[x, y] =
integer is a shortcut for image[x, y] = (integer,) for
images with a single component.
__setitem__(slice | tuple[integer | slice], image) -> None
Selects an area in the same way as the corresponding form of the
__getitem__ method and assigns to it a copy of the pixels from
the image in the second argument, that must have exactly the same
mode as this image and the same size as the specified area; the
alpha component, if present, is simply copied and doesn’t affect
the other components of the image (i.e. no alpha compositing is
performed).
The mode, size and buffer (including the address in memory
of the buffer) never change after an image is created.
It is expected that, if PEP 3118 is accepted, all the image objects
will support the new buffer protocol, however this is beyond the scope
of this PEP.
Image and ImageMixin Classes
The ImageMixin class implements all the methods and attributes
described above except mode, size, buffer and info.
Image is a subclass of ImageMixin that adds support for these
four attributes and offers the following constructor (please note that
the constructor is not part of the image protocol):
__init__(mode, size, color, source)
mode must be one of the constants in the MODES set,
size is a sequence of two integers (width and height of the
new image); color is a sequence of integers, one for each
component of the image, used to initialize all the pixels to the
same value; source can be a sequence of integers of the
appropriate size and format that is copied as-is in the buffer of
the new image or an existing image; in Python 2.x source can
also be an instance of str and is interpreted as a sequence of
bytes. color and source are mutually exclusive and if
they are both omitted the image is initialized to transparent
black (all the bytes in the buffer have value 16 in the YV12
mode, 255 in the CMYK* modes and 0 for everything else). If
source is present and is an image, mode and/or size
can be omitted; if they are specified and are different from the
source mode and/or size, the source image is converted. The exact algorithms used for resizing and doing color space
conversions may differ between Python versions and
implementations, but they always give high quality results (e.g.:
a cubic spline interpolation can be used for upsampling and an
antialias filter can be used for downsampling images); any
combination of mode conversion is supported, but the algorithm
used for conversions to and from the CMYK* modes is pretty
naïve: if you have the exact color profiles of your devices you
may want to use a good color management tool such as LittleCMS.
The new image has an empty info dict.
Line Objects
The line objects (returned, e.g., when iterating over an image)
support the following attributes and methods:
mode
The mode of the image from where this line comes.
__iter__() -> iterator[pixel]
Returns an iterator that iterates over all the pixels in the line,
from left to right. See below for a description of the pixel
objects.
__len__() -> int
Returns the number of pixels in the line (the image width).
__getitem__(integer) -> pixel
Returns the pixel at the specified (x) position.
__getitem__(slice) -> image
The selected part of the line is copied and a new image is
returned; the new image will always have height 1.
__setitem__(integer, integer | iterable[integer]) -> None
Modifies the pixel at the specified position; line[x] =
integer is a shortcut for line[x] = (integer,) for images
with a single component.
__setitem__(slice, image) -> None
Selects a part of the line and assigns to it a copy of the pixels
from the image in the second argument, that must have height 1, a
width equal to the specified slice and the same mode as this line;
the alpha component, if present, is simply copied and doesn’t
affect the other components of the image (i.e. no alpha
compositing is performed).
Pixel Objects
The pixel objects (returned, e.g., when iterating over a line) support
the following attributes and methods:
mode
The mode of the image from where this pixel comes.
value
A tuple of integers, one for each component. Any iterable of the
correct length can be assigned to value (it will be
automagically converted to a tuple), but you can’t assign to it an
integer, even if the mode has only a single component: use, e.g.,
pixel.l = 123 instead.
r, g, b, a, l, c, m, y, k
The integer values of each component; only those applicable for
the current mode (in mode.component_names) will be available.
__iter__() -> iterator[int]
__len__() -> int
__getitem__(integer | slice) -> int | tuple[int]
__setitem__(integer | slice, integer | iterable[integer]) ->
None
These four methods emulate a fixed length list of integers, one
for each pixel component.
ImageSize Class
ImageSize is a named tuple, a class identical to tuple except
that:
its constructor only accepts two integers, width and height; they
are converted in the constructor using their __index__()
methods, so all the ImageSize objects are guaranteed to contain
only int (or possibly long, in Python 2.x) instances;
it has a width and a height property that are equivalent to
the first and the second number in the tuple, respectively;
the string returned by its __repr__ method is
'imageop.ImageSize(width=%d, height=%d)' % (width, height).
ImageSize is not usually instantiated by end-users, but can be
used when creating a new class that implements the image protocol,
since the size attribute must be an ImageSize instance.
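A rough sketch of how such a class could be written follows; this is only an illustration of the description above, not the reference implementation:
class ImageSize(tuple):
    def __new__(cls, width, height):
        # __index__() guarantees that both members really are integers
        return tuple.__new__(cls, (width.__index__(), height.__index__()))
    @property
    def width(self):
        return self[0]
    @property
    def height(self):
        return self[1]
    def __repr__(self):
        return 'imageop.ImageSize(width=%d, height=%d)' % self

size = ImageSize(640, 480)
assert size == (640, 480) and size.width == 640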
C API
The available image modes are visible at the C level as PyImage_*
constants of type PyObject * (e.g.: PyImage_RGB is
imageop.RGB).
The following functions offer a C-friendly interface to mode and image
objects (all the functions return NULL or -1 on failure):
int PyImageMode_Check(PyObject *obj)
Returns true if the object obj is a valid image mode.
int PyImageMode_GetComponents(PyObject *mode)
PyObject* PyImageMode_GetComponentNames(PyObject *mode)
int PyImageMode_GetBitsPerComponent(PyObject *mode)
int PyImageMode_GetBytesPerPixel(PyObject *mode)
int PyImageMode_GetPlanar(PyObject *mode)
PyObject* PyImageMode_GetSubsampling(PyObject *mode)
int PyImageMode_GetXDivisor(PyObject *mode)
int PyImageMode_GetYDivisor(PyObject *mode)
Py_ssize_t PyImageMode_GetLength(PyObject *mode, Py_ssize_t width,
Py_ssize_t height)
These functions are equivalent to their corresponding Python
attributes or methods.
int PyImage_Check(PyObject *obj)
Returns true if the object obj is an Image object or an
instance of a subtype of the Image type; see also
PyObject_CheckImage below.
int PyImage_CheckExact(PyObject *obj)
Returns true if the object obj is an Image object, but not
an instance of a subtype of the Image type.
PyObject* PyImage_New(PyObject *mode, Py_ssize_t width,
Py_ssize_t height)
Returns a new Image instance, initialized to transparent black
(see Image.__init__ above for the details).
PyObject* PyImage_FromImage(PyObject *image, PyObject *mode,
Py_ssize_t width, Py_ssize_t height)
Returns a new Image instance, initialized with the contents of
the image object rescaled and converted to the specified
mode, if necessary.
PyObject* PyImage_FromBuffer(PyObject *buffer, PyObject *mode,
Py_ssize_t width,
Py_ssize_t height)
Returns a new Image instance, initialized with the contents of
the buffer object.
int PyObject_CheckImage(PyObject *obj)
Returns true if the object obj implements a sufficient subset
of the image protocol to be accepted by the functions defined
below, even if its class is not a subclass of ImageMixin
and/or Image. Currently it simply checks for the existence
and correctness of the attributes mode, size and
buffer.
PyObject* PyImage_GetMode(PyObject *image)
Py_ssize_t PyImage_GetWidth(PyObject *image)
Py_ssize_t PyImage_GetHeight(PyObject *image)
int PyImage_Clip(PyObject *image)
PyObject* PyImage_Split(PyObject *image)
PyObject* PyImage_GetBuffer(PyObject *image)
int PyImage_AsBuffer(PyObject *image, const void **buffer,
Py_ssize_t *buffer_len)
These functions are equivalent to their corresponding Python
attributes or methods; the image memory can be accessed only with
the GIL and a reference to the image or its buffer held, and extra
care should be taken for modes with more than 8 bits per
component: the data is stored in native byte order and it may not be
aligned on 2- or 4-byte boundaries.
Examples
A few examples of common operations with the new Image class and
protocol:
# create a new black RGB image of 6x9 pixels
rgb_image = imageop.Image(imageop.RGB, (6, 9))
# same as above, but initialize the image to bright red
rgb_image = imageop.Image(imageop.RGB, (6, 9), color=(255, 0, 0))
# convert the image to YCbCr
yuv_image = imageop.Image(imageop.JPEG_YV12, source=rgb_image)
# read the value of a pixel and split it into three ints
r, g, b = rgb_image[x, y]
# modify the magenta component of a pixel in a CMYK image
cmyk_image[x, y].m = 13
# modify the Y (luma) component of a pixel in a *YV12 image and
# its corresponding subsampled Cr (red chroma)
yuv_image.y[x, y] = 42
yuv_image.cr[x // 2, y // 2] = 54
# iterate over an image
for line in rgb_image:
for pixel in line:
# swap red and blue, and set green to 0
pixel.value = pixel.b, 0, pixel.r
# find the maximum value of the red component in the image
max_red = max(pixel.r for pixel in rgb_image.pixels())
# count the number of colors in the image
num_of_colors = len(set(tuple(pixel) for pixel in image.pixels()))
# copy a block of 4x2 pixels near the upper right corner of an
# image and paste it into the lower left corner of the same image
image[:4, -2:] = image[-6:-2, 1:3]
# create a copy of the image, except that the new image can have a
# different (usually empty) info dict
new_image = image[:]
# create a mirrored copy of the image, with the left and right
# sides flipped
flipped_image = image[::-1, :]
# downsample an image to half its original size using a fast, low
# quality operation and a slower, high quality one:
low_quality_image = image[::2, ::2]
new_size = image.size.width // 2, image.size.height // 2
high_quality_image = imageop.Image(size=new_size, source=image)
# direct buffer access
rgb_image[0, 0] = r, g, b
assert tuple(rgb_image.buffer[:3]) == (r, g, b)
Backwards Compatibility
There are three areas touched by this PEP where backwards
compatibility should be considered:
Python 2.6: new classes and objects are added to the imageop
module without touching the existing module contents; new methods
and attributes will be added to Tkinter.PhotoImage and its
__getitem__ and __setitem__ methods will be modified to
accept integers, tuples and slices (currently they only accept
strings). All the changes provide a superset of the existing
functionality, so no major compatibility issues are expected.
Python 3.0: the legacy contents of the imageop module will
be deleted, according to PEP 3108; everything defined in this
proposal will work like in Python 2.x with the exception of the
usual 2.x/3.0 differences (e.g. support for long integers and
for interpreting str instances as sequences of bytes will be
dropped).
external libraries: the names and the semantics of the standard
image methods and attributes are carefully chosen to allow some
external libraries that manipulate images (including at least PIL,
wxPython and pygame) to implement the new protocol in their image
classes without breaking compatibility with existing code. The only
blatant conflicts between the image protocol and NumPy arrays are
the value of the size attribute and the coordinates order in the
image[x, y] expression.
Reference Implementation
If this PEP is accepted, the author will provide a reference
implementation of the new classes in pure Python (that can run in
CPython, PyPy, Jython and IronPython) and a second one optimized for
speed in Python and C, suitable for inclusion in the CPython standard
library. The author will also submit the required Tkinter patches.
A version for Python 2.x and a version for Python 3.0 will be available
for all the code (it is expected that the two versions will be very
similar and that the Python 3.0 one will probably be generated almost
completely automatically).
Acknowledgments
The implementation of this PEP, if accepted, is sponsored by Google
through the Google Summer of Code program.
Copyright
This document has been placed in the public domain.
| Deferred | PEP 368 – Standard image protocol and class | Standards Track | The current situation of image storage and manipulation in the Python
world is extremely fragmented: almost every library that uses image
objects has implemented its own image class, incompatible with
everyone else’s and often not very pythonic. A basic RGB image class
exists in the standard library (Tkinter.PhotoImage), but is pretty
much unusable, and unused, for anything except Tkinter programming. |
PEP 369 – Post import hooks
Author:
Christian Heimes <christian at python.org>
Status:
Withdrawn
Type:
Standards Track
Created:
02-Jan-2008
Python-Version:
2.6, 3.0
Post-History:
02-Dec-2012
Table of Contents
Withdrawal Notice
Abstract
Rationale
Use cases
Existing implementations
Post import hook implementation
States
No hook was registered
A hook is registered and the module is not loaded yet
A module is successfully loaded
A module can’t be loaded
A hook is registered but the module is already loaded
Invariants
Sample Python implementation
C API
New C API functions
Python API
Open issues
Backwards Compatibility
Reference Implementation
Acknowledgments
Copyright
References
Withdrawal Notice
This PEP has been withdrawn by its author, as much of the detailed design
is no longer valid following the migration to importlib in Python 3.3.
Abstract
This PEP proposes enhancements for the import machinery to add
post import hooks. It is intended primarily to support the wider
use of abstract base classes that is expected in Python 3.0.
The PEP originally started as a combined PEP for lazy imports and
post import hooks. After some discussion on the python-dev mailing
list the PEP was parted in two separate PEPs. [1]
Rationale
Python has no API to hook into the import machinery and execute code
after a module is successfully loaded. The import hooks of PEP 302 are
about finding modules and loading modules, but they were not designed to
act as post import hooks.
Use cases
A use case for a post import hook is mentioned in Alyssa (Nick) Coghlan’s initial
posting [2] about callbacks on module import. It was found during the
development of Python 3.0 and its ABCs. We wanted to register classes
like decimal.Decimal with an ABC but the module should not be imported
on every interpreter startup. Alyssa came up with this example:
@imp.when_imported('decimal')
def register(decimal):
Inexact.register(decimal.Decimal)
The function register is registered as callback for the module named
‘decimal’. When decimal is imported the function is called with the
module object as argument.
While this particular example isn't necessary in practice (as
decimal.Decimal will inherit from the appropriate abstract Number base
class in 2.6 and 3.0), it still illustrates the principle.
Existing implementations
PJE’s peak.util.imports [3] implements post load hooks. My
implementation shares a lot with his and it’s partly based on his ideas.
Post import hook implementation
Post import hooks are called after a module has been loaded. The hooks
are callables which take one argument, the module instance. They are
registered by the dotted name of the module, e.g. ‘os’ or ‘os.path’.
The callables are stored in the dict sys.post_import_hooks, which
is a mapping from names (as strings) to a list of callables or None.
States
No hook was registered
sys.post_import_hooks contains no entry for the module
A hook is registered and the module is not loaded yet
The import hook registry contains an entry
sys.post_import_hooks[“name”] = [hook1]
A module is successfully loaded
The import machinery checks if sys.post_import_hooks contains post import
hooks for the newly loaded module. If hooks are found then the hooks are
called in the order they were registered with the module instance as first
argument. The processing of the hooks is stopped when a method raises an
exception. At the end the entry for the module name is set to None, even
when an error has occurred.
Additionally the new __notified__ slot of the module object is set
to True in order to prevent infinite recursion when the notification
method is called inside a hook. For objects which don't subclass from
PyModule a new attribute is added instead.
A module can’t be loaded
The import hooks are neither called nor removed from the registry. It
may be possible to load the module later.
A hook is registered but the module is already loaded
The hook is fired immediately.
Invariants
The import hook system guarantees certain invariants. XXX
Sample Python implementation
A Python implementation may look like:
def notify(name):
try:
module = sys.modules[name]
except KeyError:
raise ImportError("Module %s has not been imported" % (name,))
if module.__notified__:
return
try:
module.__notified__ = True
if '.' in name:
notify(name[:name.rfind('.')])
for callback in post_import_hooks[name]:
callback(module)
finally:
post_import_hooks[name] = None
XXX
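A complementary sketch of the registration side (not part of the PEP's sample code) could follow the state descriptions above; the bare post_import_hooks mapping mirrors the one used by notify():
import sys

post_import_hooks = {}   # in a real implementation this is the mapping
                         # shared with notify(), i.e. sys.post_import_hooks

def register_post_import_hook(hook, name):
    try:
        module = sys.modules[name]
    except KeyError:
        module = None
    if module is not None and getattr(module, '__notified__', False):
        # "A hook is registered but the module is already loaded":
        # the hook is fired immediately
        hook(module)
        return
    # otherwise queue the hook until notify() runs for this module
    post_import_hooks.setdefault(name, []).append(hook)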
C API
New C API functions
PyObject* PyImport_GetPostImportHooks(void)
Returns the dict sys.post_import_hooks or NULL.
PyObject* PyImport_NotifyLoadedByModule(PyObject *module)
Notify the post import system that a module was requested. Returns a
borrowed reference to the same module object or NULL if an error has
occurred. The function calls only the hooks for the module itself and not
its parents. The function must be called with the import lock acquired.
PyObject* PyImport_NotifyLoadedByName(const char *name)
PyImport_NotifyLoadedByName("a.b.c") calls
PyImport_NotifyLoadedByModule() for a, a.b and a.b.c
in that particular order. The modules are retrieved from
sys.modules. If a module can't be retrieved, an exception is raised;
otherwise a borrowed reference to the module is returned.
The hook calls always start with the prime parent module.
The caller of PyImport_NotifyLoadedByName() must hold the import lock!
PyObject* PyImport_RegisterPostImportHook(PyObject *callable, PyObject *mod_name)
Register a new hook callable for the module mod_name.
int PyModule_GetNotified(PyObject *module)
Returns the status of the __notified__ slot / attribute.
int PyModule_SetNotified(PyObject *module, int status)
Set the status of the __notified__ slot / attribute.
The PyImport_NotifyLoadedByModule() method is called inside
import_submodule(). The import system makes sure that the import lock
is acquired and the hooks for the parent modules are already called.
Python API
The import hook registry and two new API methods are exposed through the
sys and imp module.
sys.post_import_hooks
The dict contains the post import hooks:
{"name" : [hook1, hook2], ...}
imp.register_post_import_hook(hook: "callable", name: str)
Register a new hook hook for the module name.
imp.notify_module_loaded(module: "module instance") -> module
Notify the system that a module has been loaded. The method is provided
for compatibility with existing lazy / deferred import extensions.
module.__notified__
A slot of a module instance. XXX
The when_imported function decorator is also in the imp module,
which is equivalent to:
def when_imported(name):
def register(hook):
register_post_import_hook(hook, name)
return register
imp.when_imported(name) -> decorator function
For use as:
@when_imported(name)
def hook(module): pass
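As a usage sketch (these imp APIs were never released, since the PEP was later withdrawn):
import imp

@imp.when_imported('decimal')
def adjust_precision(decimal):
    # runs exactly once, right after 'decimal' is first imported,
    # or immediately if the module is already loaded
    decimal.getcontext().prec = 50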
Open issues
The when_imported decorator hasn’t been written.
The code contains several XXX comments. They are mostly about error
handling in edge cases.
Backwards Compatibility
The new features and API don’t conflict with old import system of Python
and don’t cause any backward compatibility issues for most software.
However systems like PEAK and Zope which implement their own lazy import
magic need to follow some rules.
The post import hooks are carefully designed to cooperate with existing
deferred and lazy import systems. It's the suggestion of the PEP author
that such systems replace their own on-load hooks with the new hook API. The alternative
lazy or deferred imports will still work but the implementations must
call the imp.notify_module_loaded function.
Reference Implementation
A reference implementation is already written and is available in the
py3k-importhook branch. [4] It still requires some cleanups,
documentation updates and additional unit tests.
Acknowledgments
Alyssa Coghlan, for proof reading and the initial discussion
Phillip J. Eby, for his implementation in PEAK and help with my own implementation
Copyright
This document has been placed in the public domain.
References
[1]
PEP: Lazy module imports and post import hook
http://permalink.gmane.org/gmane.comp.python.devel/90949
[2]
Interest in PEP for callbacks on module import
http://permalink.gmane.org/gmane.comp.python.python-3000.devel/11126
[3]
peak.utils.imports
http://svn.eby-sarna.com/Importing/peak/util/imports.py?view=markup
[4]
py3k-importhook branch
http://svn.python.org/view/python/branches/py3k-importhook/
| Withdrawn | PEP 369 – Post import hooks | Standards Track | This PEP proposes enhancements for the import machinery to add
post import hooks. It is intended primarily to support the wider
use of abstract base classes that is expected in Python 3.0. |
PEP 370 – Per user site-packages directory
Author:
Christian Heimes <christian at python.org>
Status:
Final
Type:
Standards Track
Created:
11-Jan-2008
Python-Version:
2.6, 3.0
Post-History:
Table of Contents
Abstract
Rationale
Specification
Windows Notes
Unix Notes
Mac OS X Notes
Implementation
Backwards Compatibility
Reference Implementation
Copyright
References
Abstract
This PEP proposes a new per user site-packages directory to allow
users the local installation of Python packages in their home directory.
Rationale
Current Python versions don’t have a unified way to install packages
into the home directory of a user (except for Mac Framework
builds). Users are either forced to ask the system administrator to
install or update a package for them or to use one of the many
workarounds like Virtual Python [1], Working Env [2] or
Virtual Env [3].
It’s not the goal of the PEP to replace the tools or to implement
isolated installations of Python. It only implements the most common
use case of an additional site-packages directory for each user.
The feature can’t be implemented using the environment variable
PYTHONPATH. The env var just inserts a new directory to the beginning
of sys.path but it doesn’t parse the pth files in the directory. A
full blown site-packages path is required for several applications
and Python eggs.
Specification
site directory (site-packages)
A directory in sys.path. In contrast to ordinary directories the pth
files in the directory are processed, too.
user site directory
A site directory inside the users’ home directory. A user site
directory is specific to a Python version. The path contains
the version number (major and minor only).
Unix (including Mac OS X): ~/.local/lib/python2.6/site-packages
Windows: %APPDATA%/Python/Python26/site-packages
user data directory
Usually the parent directory of the user site directory. It’s meant
for Python version specific data like config files, docs, images
and translations.
Unix (including Mac): ~/.local/lib/python2.6
Windows: %APPDATA%/Python/Python26
user base directory
It’s located inside the user’s home directory. The user site and
user config directory are inside the base directory. On some systems
the directory may be shared with 3rd party apps.
Unix (including Mac): ~/.local
Windows: %APPDATA%/Python
user script directory
A directory for binaries and scripts. [10] It’s shared across Python
versions and the destination directory for scripts.
Unix (including Mac): ~/.local/bin
Windows: %APPDATA%/Python/Scripts
Windows Notes
On Windows the Application Data directory (aka APPDATA) was chosen
because it is the most designated place for application data. Microsoft
recommends that software doesn’t write to USERPROFILE [5] and
My Documents is not suited for application data, either. [8] The code
doesn’t query the Win32 API, instead it uses the environment variable
%APPDATA%.
The application data directory is part of the roaming profile. In networks
with domain logins the application data may be copied from and to a
central server. This can slow down log-in and log-off. Users can keep
the data on the server by e.g. setting PYTHONUSERBASE to the value
“%HOMEDRIVE%%HOMEPATH%\Application Data”. Users should consult their local
administrator for more information. [13]
Unix Notes
On Unix ~/.local was chosen in favor over ~/.python because the
directory is already used by several other programs in analogy to
/usr/local. [7] [11]
Mac OS X Notes
On Mac OS X Python uses ~/.local directory as well. [12] Framework builds
of Python include ~/Library/Python/2.6/site-packages as an additional
search path.
Implementation
The site module gets a new method adduserpackage() which adds the
appropriate directory to the search path. The directory is not added if
it doesn’t exist when Python is started. However the location of the
user site directory and user base directory is stored in an internal
variable for distutils.
The user site directory is added before the system site directories
but after Python’s search paths and PYTHONPATH. This setup allows
the user to install a different version of a package than the system
administrator but it prevents the user from accidentally overwriting a
stdlib module. Stdlib modules can still be overwritten with
PYTHONPATH.
For security reasons the user site directory is not added to
sys.path when the effective user id or group id is not equal to the
process uid / gid [9]. It’s an additional barrier against code injection
into suid apps. However Python suid scripts must always use the -E
and -s option or users can sneak in their own code.
The user site directory can be suppressed with a new option -s or
the environment variable PYTHONNOUSERSITE. The feature can be
disabled globally by setting site.ENABLE_USER_SITE to the value
False. It must be set by editing site.py. It can’t be altered
in sitecustomize.py or later.
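The resulting configuration can be inspected at runtime roughly as follows; the USER_BASE and USER_SITE names reflect the eventual site.py implementation rather than wording used in this PEP:
import site, sys

print site.ENABLE_USER_SITE        # disabled by -s or PYTHONNOUSERSITE
print site.USER_BASE               # e.g. ~/.local or %APPDATA%/Python
print site.USER_SITE               # the per user site-packages directory
print site.USER_SITE in sys.path   # True only if the directory existed at startup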
The path to the user base directory can be overwritten with the
environment variable PYTHONUSERBASE. The default location is used
when PYTHONUSERBASE is not set or empty.
distutils.command.install (setup.py install) gets a new argument
--user to install packages in the user site directory. The required
directories are created on demand.
distutils.command.build_ext (setup.py build_ext) gets a new argument
--user which adds the include/ and lib/ directories in the user base
directory to the search paths for header files and libraries. It also
adds the lib/ directory to rpath.
The site module gets two arguments --user-base and --user-site
to print the path to the user base or user site directory to the standard
output. The feature is intended for scripting, e.g.
./configure --prefix $(python2.5 -m site --user-base)
distutils.sysconfig will get methods to access the private variables
of site. (not yet implemented)
The Windows installer needs to be updated, too. It should create a menu
item which opens the user site directory in a new explorer window.
Backwards Compatibility
TBD
Reference Implementation
A reference implementation is available in the bug tracker. [4]
Copyright
This document has been placed in the public domain.
References
[1]
Virtual Python
http://peak.telecommunity.com/DevCenter/EasyInstall#creating-a-virtual-python
[2]
Working Env
https://pypi.org/project/workingenv.py/
https://ianbicking.org/archive/workingenv-revisited.html
[3]
Virtual Env
https://pypi.org/project/virtualenv/
[4]
reference implementation
https://github.com/python/cpython/issues/46132
http://svn.python.org/view/sandbox/trunk/pep370
[5]
MSDN: CSIDL
https://learn.microsoft.com/en/windows/win32/shell/csidl
[6] Initial suggestion for a per user site-packages directory
https://mail.python.org/archives/list/python-dev@python.org/message/V23CUKRH3VCHFLV33ADMHJSM53STPA7I/
[7]
Suggestion of ~/.local/
https://mail.python.org/pipermail/python-dev/2008-January/075985.html
[8]
APPDATA discussion
https://mail.python.org/pipermail/python-dev/2008-January/075993.html
[9]
Security concerns and -s option
https://mail.python.org/pipermail/python-dev/2008-January/076130.html
[10]
Discussion about the bin directory
https://mail.python.org/pipermail/python-dev/2008-January/076162.html
[11]
freedesktop.org XGD basedir specs mentions ~/.local
https://www.freedesktop.org/wiki/Specifications/basedir-spec/
[12]
~/.local for Mac and usercustomize file
https://mail.python.org/pipermail/python-dev/2008-January/076236.html
[13]
Roaming profile on Windows
https://mail.python.org/pipermail/python-dev/2008-January/076256.html
| Final | PEP 370 – Per user site-packages directory | Standards Track | This PEP proposes a new a per user site-packages directory to allow
users the local installation of Python packages in their home directory. |
PEP 371 – Addition of the multiprocessing package to the standard library
Author:
Jesse Noller <jnoller at gmail.com>,
Richard Oudkerk <r.m.oudkerk at googlemail.com>
Status:
Final
Type:
Standards Track
Created:
06-May-2008
Python-Version:
2.6, 3.0
Post-History:
03-Jun-2008
Table of Contents
Abstract
Rationale
The “Distributed” Problem
Performance Comparison
Maintenance
API Naming
Timing/Schedule
Open Issues
Closed Issues
References
Copyright
Abstract
This PEP proposes the inclusion of the pyProcessing [1] package
into the Python standard library, renamed to “multiprocessing”.
The processing package mimics the standard library threading
module functionality to provide a process-based approach to
threaded programming allowing end-users to dispatch multiple
tasks that effectively side-step the global interpreter lock.
The package also provides server and client functionality
(processing.Manager) to provide remote sharing and management of
objects and tasks so that applications may not only leverage
multiple cores on the local machine, but also distribute objects
and tasks across a cluster of networked machines.
While the distributed capabilities of the package are beneficial,
the primary focus of this PEP is the core threading-like API and
capabilities of the package.
Rationale
The current CPython interpreter implements the Global Interpreter
Lock (GIL) and barring work in Python 3000 or other versions
currently planned [2], the GIL will remain as-is within the
CPython interpreter for the foreseeable future. While the GIL
itself enables clean and easy to maintain C code for the
interpreter and extensions base, it is frequently an issue for
those Python programmers who are leveraging multi-core machines.
The GIL itself prevents more than a single thread from running
within the interpreter at any given point in time, effectively
removing Python’s ability to take advantage of multi-processor
systems.
The pyprocessing package offers a method to side-step the GIL
allowing applications within CPython to take advantage of
multi-core architectures without asking users to completely change
their programming paradigm (i.e.: dropping threaded programming
for another “concurrent” approach - Twisted, Actors, etc).
The Processing package offers CPython a “known API” which mirrors,
albeit in a PEP 8 compliant manner, that of the threading API,
with known semantics and easy scalability.
In the future, the package might not be as relevant should the
CPython interpreter enable “true” threading, however for some
applications, forking an OS process may sometimes be more
desirable than using lightweight threads, especially on those
platforms where process creation is fast and optimized.
For example, a simple threaded application:
from threading import Thread as worker
def afunc(number):
print number * 3
t = worker(target=afunc, args=(4,))
t.start()
t.join()
The pyprocessing package mirrored the API so well that with a
simple change of the import to:
from processing import process as worker
The code would now execute through the processing.process class.
Obviously, with the renaming of the API to PEP 8 compliance there
would be additional renaming which would need to occur within
user applications, however minor.
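For reference, a sketch of the same example using the names that were eventually adopted for the multiprocessing package:
from multiprocessing import Process as worker

def afunc(number):
    print number * 3

if __name__ == '__main__':   # required on platforms without fork, e.g. Windows
    t = worker(target=afunc, args=(4,))
    t.start()
    t.join()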
This type of compatibility means that, with a minor (in most cases)
change in code, users’ applications will be able to leverage all
cores and processors on a given machine for parallel execution.
In many cases the pyprocessing package is even faster than the
normal threading approach for I/O bound programs. This of course,
takes into account that the pyprocessing package is in optimized C
code, while the threading module is not.
The “Distributed” Problem
In the discussion on Python-Dev about the inclusion of this
package [3] there was confusion about the intentions of this PEP, with
some comparing it to an attempt to solve the “Distributed” problem - frequently
comparing the functionality of this package with other solutions
like MPI-based communication [4], CORBA, or other distributed
object approaches [5].
The “distributed” problem is large and varied. Each programmer
working within this domain has either very strong opinions about
their favorite module/method or a highly customized problem for
which no existing solution works.
The acceptance of this package neither precludes nor discourages
programmers working on the “distributed” problem from examining other
solutions for their problem domain. The intent of including this
package is to provide entry-level capabilities for local
concurrency and the basic support to spread that concurrency
across a network of machines - although the two are not tightly
coupled, the pyprocessing package could in fact, be used in
conjunction with any of the other solutions including MPI/etc.
If necessary - it is possible to completely decouple the local
concurrency abilities of the package from the
network-capable/shared aspects of the package. Without serious
concerns or cause however, the author of this PEP does not
recommend that approach.
Performance Comparison
As we all know - there are “lies, damned lies, and benchmarks”.
These speed comparisons, while aimed at showcasing the performance
of the pyprocessing package, are by no means comprehensive or
applicable to all possible use cases or environments, especially
on platforms where process forking is slow.
All benchmarks were run using the following:
4 Core Intel Xeon CPU @ 3.00GHz
16 GB of RAM
Python 2.5.2 compiled on Gentoo Linux (kernel 2.6.18.6)
pyProcessing 0.52
All of the code for this can be downloaded from
http://jessenoller.com/code/bench-src.tgz
The basic method of execution for these benchmarks is in the
run_benchmarks.py [6] script, which is simply a wrapper to execute a
target function through a single threaded (linear), multi-threaded
(via threading), and multi-process (via pyprocessing) function for
a static number of iterations with increasing numbers of execution
loops and/or threads.
The run_benchmarks.py script executes each function 100 times,
picking the best run of that 100 iterations via the timeit module.
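As a rough, hypothetical sketch (not the actual run_benchmarks.py code), the “best of 100 runs via timeit” approach described above amounts to something like:
import timeit

def best_of(stmt, setup='pass', runs=100):
    # Time `stmt` once per run and keep the best (lowest) timing,
    # mirroring how the benchmark numbers below are reported.
    timer = timeit.Timer(stmt, setup=setup)
    return min(timer.repeat(repeat=runs, number=1))

print best_of('sum(range(1000))')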
First, to identify the overhead of the spawning of the workers, we
execute a function which is simply a pass statement (empty):
cmd: python run_benchmarks.py empty_func.py
Importing empty_func
Starting tests ...
non_threaded (1 iters) 0.000001 seconds
threaded (1 threads) 0.000796 seconds
processes (1 procs) 0.000714 seconds
non_threaded (2 iters) 0.000002 seconds
threaded (2 threads) 0.001963 seconds
processes (2 procs) 0.001466 seconds
non_threaded (4 iters) 0.000002 seconds
threaded (4 threads) 0.003986 seconds
processes (4 procs) 0.002701 seconds
non_threaded (8 iters) 0.000003 seconds
threaded (8 threads) 0.007990 seconds
processes (8 procs) 0.005512 seconds
As you can see, forking a process via the pyprocessing package is
faster than building and then executing the threaded version of the
code.
The second test calculates 50000 Fibonacci numbers inside of each
thread (isolated and shared nothing):
cmd: python run_benchmarks.py fibonacci.py
Importing fibonacci
Starting tests ...
non_threaded (1 iters) 0.195548 seconds
threaded (1 threads) 0.197909 seconds
processes (1 procs) 0.201175 seconds
non_threaded (2 iters) 0.397540 seconds
threaded (2 threads) 0.397637 seconds
processes (2 procs) 0.204265 seconds
non_threaded (4 iters) 0.795333 seconds
threaded (4 threads) 0.797262 seconds
processes (4 procs) 0.206990 seconds
non_threaded (8 iters) 1.591680 seconds
threaded (8 threads) 1.596824 seconds
processes (8 procs) 0.417899 seconds
The third test calculates the sum of all primes below 100000,
again sharing nothing:
cmd: run_benchmarks.py crunch_primes.py
Importing crunch_primes
Starting tests ...
non_threaded (1 iters) 0.495157 seconds
threaded (1 threads) 0.522320 seconds
processes (1 procs) 0.523757 seconds
non_threaded (2 iters) 1.052048 seconds
threaded (2 threads) 1.154726 seconds
processes (2 procs) 0.524603 seconds
non_threaded (4 iters) 2.104733 seconds
threaded (4 threads) 2.455215 seconds
processes (4 procs) 0.530688 seconds
non_threaded (8 iters) 4.217455 seconds
threaded (8 threads) 5.109192 seconds
processes (8 procs) 1.077939 seconds
The reason why tests two and three focused on pure numeric
crunching is to showcase how the current threading implementation
does hinder non-I/O applications. Obviously, these tests could be
improved to use a queue for coordination of results and chunks of
work but that is not required to show the performance of the
package and core processing.process module.
The next test is an I/O bound test. This is normally where we see
a steep improvement in the threading module approach versus a
single-threaded approach. In this case, each worker is opening a
descriptor to lorem.txt, randomly seeking within it and writing
lines to /dev/null:
cmd: python run_benchmarks.py file_io.py
Importing file_io
Starting tests ...
non_threaded (1 iters) 0.057750 seconds
threaded (1 threads) 0.089992 seconds
processes (1 procs) 0.090817 seconds
non_threaded (2 iters) 0.180256 seconds
threaded (2 threads) 0.329961 seconds
processes (2 procs) 0.096683 seconds
non_threaded (4 iters) 0.370841 seconds
threaded (4 threads) 1.103678 seconds
processes (4 procs) 0.101535 seconds
non_threaded (8 iters) 0.749571 seconds
threaded (8 threads) 2.437204 seconds
processes (8 procs) 0.203438 seconds
As you can see, pyprocessing is still faster on this I/O operation
than using multiple threads. And using multiple threads is slower
than the single threaded execution itself.
Finally, we will run a socket-based test to show network I/O
performance. This function grabs a URL from a server on the LAN
that is a simple error page from tomcat. It gets the page 100
times. The network is otherwise idle, over a 10G connection:
cmd: python run_benchmarks.py url_get.py
Importing url_get
Starting tests ...
non_threaded (1 iters) 0.124774 seconds
threaded (1 threads) 0.120478 seconds
processes (1 procs) 0.121404 seconds
non_threaded (2 iters) 0.239574 seconds
threaded (2 threads) 0.146138 seconds
processes (2 procs) 0.138366 seconds
non_threaded (4 iters) 0.479159 seconds
threaded (4 threads) 0.200985 seconds
processes (4 procs) 0.188847 seconds
non_threaded (8 iters) 0.960621 seconds
threaded (8 threads) 0.659298 seconds
processes (8 procs) 0.298625 seconds
We finally see threaded performance surpass that of
single-threaded execution, but the pyprocessing package is still
faster when increasing the number of workers. If you stay with
one or two threads/workers, then the timing between threads and
pyprocessing is fairly close.
One item of note however, is that there is an implicit overhead
within the pyprocessing package’s Queue implementation due to the
object serialization.
Alec Thomas provided a short example based on the
run_benchmarks.py script to demonstrate this overhead versus the
default Queue implementation:
cmd: run_bench_queue.py
non_threaded (1 iters) 0.010546 seconds
threaded (1 threads) 0.015164 seconds
processes (1 procs) 0.066167 seconds
non_threaded (2 iters) 0.020768 seconds
threaded (2 threads) 0.041635 seconds
processes (2 procs) 0.084270 seconds
non_threaded (4 iters) 0.041718 seconds
threaded (4 threads) 0.086394 seconds
processes (4 procs) 0.144176 seconds
non_threaded (8 iters) 0.083488 seconds
threaded (8 threads) 0.184254 seconds
processes (8 procs) 0.302999 seconds
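The serialization overhead reflected in these numbers can be sketched as follows (illustrative only, and using the final multiprocessing name rather than pyprocessing): every object put on a process-safe queue is pickled by the sender and unpickled by the receiver, whereas a plain thread-safe Queue simply hands over a reference.
import pickle
from multiprocessing import Queue

item = {'payload': range(1000)}

q = Queue()
q.put(item)      # the item is serialized (pickled) behind the scenes
copy = q.get()   # ... and deserialized here, so a copy comes back out
assert copy == item and copy is not item

# The per-item cost is roughly that of this pickle round-trip:
roundtrip = pickle.loads(pickle.dumps(item))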
Additional benchmarks can be found in the pyprocessing package’s
source distribution’s examples/ directory. The examples will be
included in the package’s documentation.
Maintenance
Richard M. Oudkerk - the author of the pyprocessing package has
agreed to maintain the package within Python SVN. Jesse Noller
has volunteered to also help maintain/document and test the
package.
API Naming
While the package’s API is designed to closely mimic that of
the threading and Queue modules as of Python 2.x, those modules are not
PEP 8 compliant. It has been decided that instead of adding the package
“as is” and therefore perpetuating the non-PEP 8 compliant naming, we
will rename all APIs, classes, etc to be fully PEP 8 compliant.
This change does affect the ease of drop-in replacement for those using
the threading module, but that is an acceptable side-effect in the view
of the authors, especially given that the threading module’s own API
will change.
Issue 3042 in the tracker proposes that for Python 2.6 there will be
two APIs for the threading module - the current one, and the PEP 8
compliant one. Warnings about the upcoming removal of the original
java-style API will be issued when -3 is invoked.
In Python 3000, the threading API will become PEP 8 compliant, which
means that the multiprocessing module and the threading module will
again have matching APIs.
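For illustration only (this example is not part of the PEP), the kind of renaming involved can be seen by comparing the Python 2.x threading spellings with their multiprocessing counterparts:
import threading
import multiprocessing

def task():
    pass

if __name__ == '__main__':
    t = threading.Thread(target=task)
    t.start()
    print t.isAlive()      # camelCase name in the Python 2.x threading API
    t.join()

    p = multiprocessing.Process(target=task)
    p.start()
    print p.is_alive()     # PEP 8 compliant spelling in multiprocessing
    p.join()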
Timing/Schedule
Some concerns have been raised about the timing/lateness of this
PEP for the 2.6 and 3.0 releases this year, however it is felt by
both the authors and others that the functionality this package
offers surpasses the risk of inclusion.
However, taking into account the desire not to destabilize
Python-core, some refactoring of pyprocessing’s code “into”
Python-core can be withheld until the next 2.x/3.x releases. This
means that the actual risk to Python-core is minimal, and largely
constrained to the actual package itself.
Open Issues
Confirm that there are no “default” remote connection capabilities; if needed,
enable the remote security mechanisms by default for those
classes which offer remote capabilities.
Some of the API (Queue methods qsize(), task_done() and join())
either need to be added, or the reason for their exclusion needs
to be identified and documented clearly.
Closed Issues
The PyGILState bug patch submitted in issue 1683 by roudkerk
must be applied for the package unit tests to work.
Existing documentation has to be moved to ReST formatting.
Reliance on ctypes: The pyprocessing package’s reliance on
ctypes prevents the package from functioning on platforms where
ctypes is not supported. This is not a restriction of this
package, but rather of ctypes.
DONE: Rename top-level package from “pyprocessing” to
“multiprocessing”.
DONE: Also note that the default behavior of process spawning
does not make it compatible with use within IDLE as-is; this
will be examined as a bug-fix or “setExecutable” enhancement.
DONE: Add in “multiprocessing.setExecutable()” method to override the
default behavior of the package to spawn processes using the
current executable name rather than the Python interpreter. Note
that Mark Hammond has suggested a factory-style interface for
this [7].
References
[1]
The 2008 era PyProcessing project (the pyprocessing name was since repurposed)
https://web.archive.org/web/20080914113946/https://pyprocessing.berlios.de/
[2]
See Adam Olsen’s “safe threading” project
https://code.google.com/archive/p/python-safethread/
[3]
See: Addition of “pyprocessing” module to standard lib.
https://mail.python.org/pipermail/python-dev/2008-May/079417.html
[4]
https://mpi4py.readthedocs.io/
[5]
See “Cluster Computing”
https://wiki.python.org/moin/ParallelProcessing#Cluster_Computing
[6]
The original run_benchmark.py code was published in Python
Magazine in December 2007: “Python Threads and the Global
Interpreter Lock” by Jesse Noller. It has been modified for
this PEP.
[7]
http://groups.google.com/group/python-dev2/msg/54cf06d15cbcbc34
Copyright
This document has been placed in the public domain.
| Final | PEP 371 – Addition of the multiprocessing package to the standard library | Standards Track | This PEP proposes the inclusion of the pyProcessing [1] package
into the Python standard library, renamed to “multiprocessing”. |
PEP 372 – Adding an ordered dictionary to collections
Author:
Armin Ronacher <armin.ronacher at active-4.com>,
Raymond Hettinger <python at rcn.com>
Status:
Final
Type:
Standards Track
Created:
15-Jun-2008
Python-Version:
2.7, 3.1
Post-History:
Table of Contents
Abstract
Patch
Rationale
Ordered Dict API
Questions and Answers
Reference Implementation
Future Directions
References
Copyright
Abstract
This PEP proposes an ordered dictionary as a new data structure for
the collections module, called “OrderedDict” in this PEP. The
proposed API incorporates the experiences gained from working with
similar implementations that exist in various real-world applications
and other programming languages.
Patch
A working Py3.1 patch including tests and documentation is at:
OrderedDict patch
The check-in was in revisions: 70101 and 70102
Rationale
In current Python versions, the widely used built-in dict type does
not specify an order for the key/value pairs stored. This makes it
hard to use dictionaries as data storage for some specific use cases.
Some dynamic programming languages like PHP and Ruby 1.9 guarantee a
certain order on iteration. In those languages, and existing Python
ordered-dict implementations, the ordering of items is defined by the
time of insertion of the key. New keys are appended at the end, but
keys that are overwritten are not moved to the end.
The following example shows the behavior for simple assignments:
>>> d = OrderedDict()
>>> d['parrot'] = 'dead'
>>> d['penguin'] = 'exploded'
>>> d.items()
[('parrot', 'dead'), ('penguin', 'exploded')]
That the ordering is preserved makes an OrderedDict useful for a couple of
situations:
XML/HTML processing libraries currently drop the ordering of
attributes, use a list instead of a dict which makes filtering
cumbersome, or implement their own ordered dictionary. This affects
ElementTree, html5lib, Genshi and many more libraries.
There are many ordered dict implementations in various libraries
and applications, most of them subtly incompatible with each other.
Furthermore, subclassing dict is a non-trivial task and many
implementations don’t override all the methods properly which can
lead to unexpected results. Additionally, many ordered dicts are implemented in an inefficient
way, making many operations more complex than they have to be.
PEP 3115 allows metaclasses to change the mapping object used for
the class body. An ordered dict could be used to create ordered
member declarations similar to C structs. This could be useful, for
example, for future ctypes releases as well as ORMs that define
database tables as classes, like the one the Django framework ships.
Django currently uses an ugly hack to restore the ordering of
members in database models.
The RawConfigParser class accepts a dict_type argument that
allows an application to set the type of dictionary used internally.
The motivation for this addition was expressly to allow users to
provide an ordered dictionary. [1]
Code ported from other programming languages such as PHP often
depends on an ordered dict. Having an implementation of an
ordering-preserving dictionary in the standard library could ease
the transition and improve the compatibility of different libraries.
Ordered Dict API
The ordered dict API would be mostly compatible with dict and existing
ordered dicts. Note: this PEP refers to the 2.7 and 3.0 dictionary
API as described in the collections.Mapping abstract base class.
The constructor and update() both accept iterables of tuples as
well as mappings like a dict does. Unlike a regular dictionary,
the insertion order is preserved.
>>> d = OrderedDict([('a', 'b'), ('c', 'd')])
>>> d.update({'foo': 'bar'})
>>> d
collections.OrderedDict([('a', 'b'), ('c', 'd'), ('foo', 'bar')])
If ordered dicts are updated from regular dicts, the ordering of new
keys is of course undefined.
All iteration methods as well as keys(), values() and
items() return the values ordered by the time the key was
first inserted:
>>> d['spam'] = 'eggs'
>>> d.keys()
['a', 'c', 'foo', 'spam']
>>> d.values()
['b', 'd', 'bar', 'eggs']
>>> d.items()
[('a', 'b'), ('c', 'd'), ('foo', 'bar'), ('spam', 'eggs')]
New methods not available on dict:
OrderedDict.__reversed__()
Supports reverse iteration by key.
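For example (illustrative addition, not part of the original PEP text):
>>> from collections import OrderedDict
>>> d = OrderedDict([('a', 'b'), ('c', 'd'), ('foo', 'bar')])
>>> list(reversed(d))
['foo', 'c', 'a']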
Questions and Answers
What happens if an existing key is reassigned?
The key is not moved but assigned a new value in place. This is
consistent with existing implementations.
What happens if keys appear multiple times in the list passed to the
constructor?
The same as for regular dicts – the latter item overrides the
former. This has the side-effect that the position of the first
key is used because only the value is actually overwritten:
>>> OrderedDict([('a', 1), ('b', 2), ('a', 3)])
collections.OrderedDict([('a', 3), ('b', 2)])
This behavior is consistent with existing implementations in
Python, the PHP array and the hashmap in Ruby 1.9.
Is the ordered dict a dict subclass? Why?
Yes. Like defaultdict, an ordered dictionary subclasses dict.
Being a dict subclass makes some of the methods faster (like
__getitem__ and __len__). More importantly, being a dict
subclass lets ordered dictionaries be usable with tools like json that
insist on having dict inputs by testing isinstance(d, dict).
Do any limitations arise from subclassing dict?
Yes. Since the API for dicts is different in Py2.x and Py3.x, the
OrderedDict API must also be different. So, the Py2.7 version will need
to override iterkeys, itervalues, and iteritems.
Does OrderedDict.popitem() return a particular key/value pair?
Yes. It pops off the most recently inserted new key and its
corresponding value. This corresponds to the usual LIFO behavior
exhibited by traditional push/pop pairs. It is semantically
equivalent to k=list(od)[-1]; v=od[k]; del od[k]; return (k,v).
The actual implementation is more efficient and pops directly
from a sorted list of keys.
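For example (illustrative addition, not from the PEP):
>>> od = OrderedDict([('a', 1), ('b', 2), ('c', 3)])
>>> od.popitem()
('c', 3)
>>> od.popitem()
('b', 2)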
Does OrderedDict support indexing, slicing, and whatnot?
As a matter of fact, OrderedDict does not implement the Sequence
interface. Rather, it is a MutableMapping that remembers
the order of key insertion. The only sequence-like addition is
support for reversed.
A further advantage of not allowing indexing is that it leaves open
the possibility of a fast C implementation using linked lists.
Does OrderedDict support alternate sort orders such as alphabetical?
No. Those wanting different sort orders really need to be using another
technique. The OrderedDict is all about recording insertion order. If any
other order is of interest, then another structure (like an in-memory
dbm) is likely a better fit.
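As an illustrative aside (not part of the PEP), an alphabetically ordered snapshot can always be rebuilt on demand from an insertion-ordered dict:
>>> d = OrderedDict([('banana', 3), ('apple', 4), ('pear', 1)])
>>> OrderedDict(sorted(d.items()))
OrderedDict([('apple', 4), ('banana', 3), ('pear', 1)])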
How well does OrderedDict work with the json module, PyYAML, and ConfigParser?
For json, the good news is that json’s encoder respects OrderedDict’s iteration order:
>>> items = [('one', 1), ('two', 2), ('three', 3), ('four', 4), ('five', 5)]
>>> json.dumps(OrderedDict(items))
'{"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}'
In Py2.6, the object_hook for json decoders passes-in an already built
dictionary so order is lost before the object hook sees it. This
problem is being fixed for Python 2.7/3.1 by adding a new hook that
preserves order (see https://github.com/python/cpython/issues/49631 ).
With the new hook, order can be preserved:
>>> jtext = '{"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}'
>>> json.loads(jtext, object_pairs_hook=OrderedDict)
OrderedDict({'one': 1, 'two': 2, 'three': 3, 'four': 4, 'five': 5})
For PyYAML, a full round-trip is problem free:
>>> ytext = yaml.dump(OrderedDict(items))
>>> print ytext
!!python/object/apply:collections.OrderedDict
- - [one, 1]
- [two, 2]
- [three, 3]
- [four, 4]
- [five, 5]
>>> yaml.load(ytext)
OrderedDict({'one': 1, 'two': 2, 'three': 3, 'four': 4, 'five': 5})
For the ConfigParser module, round-tripping is also problem free. Custom
dicts were added in Py2.6 specifically to support ordered dictionaries:
>>> config = ConfigParser(dict_type=OrderedDict)
>>> config.read('myconfig.ini')
>>> config.remove_option('Log', 'error')
>>> config.write(open('myconfig.ini', 'w'))
How does OrderedDict handle equality testing?
Comparing two ordered dictionaries implies that the test will be
order-sensitive so that list(od1.items()) == list(od2.items()).
When ordered dicts are compared with other Mappings, their order
insensitive comparison is used. This allows ordered dictionaries
to be substituted anywhere regular dictionaries are used.
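For example (illustrative addition, not from the PEP):
>>> od1 = OrderedDict([('a', 1), ('b', 2)])
>>> od2 = OrderedDict([('b', 2), ('a', 1)])
>>> od1 == od2
False
>>> od1 == dict(od2)
True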
How will the __repr__ format maintain order during a repr/eval round-trip?
OrderedDict([('a', 1), ('b', 2)])
What are the trade-offs of the possible underlying data structures?
Keeping a sorted list of keys is fast for all operations except
__delitem__() which becomes an O(n) exercise. This data structure leads
to very simple code and little wasted space.
Keeping a separate dictionary to record insertion sequence numbers makes
the code a little bit more complex. All of the basic operations are O(1)
but the constant factor is increased for __setitem__() and __delitem__()
meaning that every use case will have to pay for this speedup (since all
buildup goes through __setitem__). Also, the first traversal incurs a
one-time O(n log n) sorting cost. The storage costs are double that
for the sorted-list-of-keys approach.
A version written in C could use a linked list. The code would be more
complex than the other two approaches but it would conserve space and
would keep the same big-oh performance as regular dictionaries. It is
the fastest and most space efficient.
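A minimal sketch (not the reference implementation, and omitting many dict methods) of the first approach, keeping the keys in an insertion-ordered list alongside the parent dict; note the O(n) deletion mentioned above:
class SimpleOrderedDict(dict):
    # Illustrative only: tracks insertion order in a parallel list of keys.
    def __init__(self):
        dict.__init__(self)
        self._keys = []

    def __setitem__(self, key, value):
        if key not in self:
            self._keys.append(key)
        dict.__setitem__(self, key, value)

    def __delitem__(self, key):
        dict.__delitem__(self, key)
        self._keys.remove(key)        # the O(n) step

    def __iter__(self):
        return iter(self._keys)

    def items(self):
        return [(key, self[key]) for key in self._keys]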
Reference Implementation
An implementation with tests and documentation is at:
OrderedDict patch
The proposed version has several merits:
Strict compliance with the MutableMapping API and no new methods
so that the learning curve is near zero. It is simply a dictionary
that remembers insertion order.
Generally good performance. The big-oh times are the same as regular
dictionaries except that key deletion is O(n).
Other implementations of ordered dicts in various Python projects or
standalone libraries, that inspired the API proposed here, are:
odict in Python
odict in Babel
OrderedDict in Django
The odict module
ordereddict (a C implementation of the odict module)
StableDict
Armin Rigo’s OrderedDict
Future Directions
With the availability of an ordered dict in the standard library,
other libraries may take advantage of that. For example, ElementTree
could return odicts in the future that retain the attribute ordering
of the source file.
References
[1]
https://github.com/python/cpython/issues/42649
Copyright
This document has been placed in the public domain.
| Final | PEP 372 – Adding an ordered dictionary to collections | Standards Track | This PEP proposes an ordered dictionary as a new data structure for
the collections module, called “OrderedDict” in this PEP. The
proposed API incorporates the experiences gained from working with
similar implementations that exist in various real-world applications
and other programming languages. |
PEP 373 – Python 2.7 Release Schedule
Author:
Benjamin Peterson <benjamin at python.org>
Status:
Final
Type:
Informational
Topic:
Release
Created:
03-Nov-2008
Python-Version:
2.7
Table of Contents
Abstract
Update (April 2014)
Release Manager and Crew
Maintenance releases
2.7.0 Release Schedule
Possible features for 2.7
References
Copyright
Abstract
This document describes the development and release schedule for
Python 2.7.
Python 2.7 is the end of the Python 2.x series, and is succeeded by
Python 3. See the “Sunsetting Python 2” FAQ on python.org for a general
overview.
Update (April 2014)
The End Of Life date (EOL, sunset date) for Python 2.7 has been moved
five years into the future, to 2020. This decision was made to
clarify the status of Python 2.7 and relieve worries for those users
who cannot yet migrate to Python 3. See also PEP 466.
This declaration does not guarantee that bugfix releases will be made
on a regular basis, but it should enable volunteers who want to
contribute bugfixes for Python 2.7 and it should satisfy vendors who
still have to support Python 2 for years to come.
There will be no Python 2.8 (see PEP 404).
Release Manager and Crew
Position
Name
2.7 Release Manager
Benjamin Peterson
Windows installers
Steve Dower
Mac installers
Ned Deily
Maintenance releases
Being the last of the 2.x series, 2.7 received bugfix support until
2020. Support officially stopped January 1 2020, and 2.7.18 code
freeze occurred on January 1 2020, but the final release occurred
after that date.
Dates of previous maintenance releases:
2.7.1 2010-11-27
2.7.2 2011-07-21
2.7.3rc1 2012-02-23
2.7.3rc2 2012-03-15
2.7.3 2012-04-09
2.7.4rc1 2013-03-23
2.7.4 2013-04-06
2.7.5 2013-05-12
2.7.6rc1 2013-10-26
2.7.6 2013-11-10
2.7.7rc1 2014-05-17
2.7.7 2014-05-31
2.7.8 2014-06-30
2.7.9rc1 2014-11-26
2.7.9 2014-12-10
2.7.10rc1 2015-05-09
2.7.10 2015-05-23
2.7.11rc1 2015-11-21
2.7.11 2015-12-05
2.7.12 2016-06-25
2.7.13rc1 2016-12-03
2.7.13 2016-12-17
2.7.14rc1 2017-08-26
2.7.14 2017-09-16
2.7.15rc1 2018-04-14
2.7.15 2018-05-01
2.7.16rc 2019-02-16
2.7.16 2019-03-02
2.7.17rc1 2019-10-05
2.7.17 2019-10-19
2.7.18rc1 2020-04-04
2.7.18 2020-04-20
2.7.0 Release Schedule
The release schedule for 2.7.0 was:
2.7 alpha 1 2009-12-05
2.7 alpha 2 2010-01-09
2.7 alpha 3 2010-02-06
2.7 alpha 4 2010-03-06
2.7 beta 1 2010-04-03
2.7 beta 2 2010-05-08
2.7 rc1 2010-06-05
2.7 rc2 2010-06-19
2.7 final 2010-07-03
Possible features for 2.7
Nothing here. [Note that a moratorium on core language changes is in effect.]
References
“The Python 2 death march” on python-dev
Petition: abandon plans to ship a 2.7.18 in April
[RELEASE] Python 2.7.18, the end of an era
Copyright
This document has been placed in the public domain.
| Final | PEP 373 – Python 2.7 Release Schedule | Informational | This document describes the development and release schedule for
Python 2.7. |
PEP 375 – Python 3.1 Release Schedule
Author:
Benjamin Peterson <benjamin at python.org>
Status:
Final
Type:
Informational
Topic:
Release
Created:
08-Feb-2009
Python-Version:
3.1
Table of Contents
Abstract
Release Manager and Crew
Release Schedule
Maintenance Releases
Features for 3.1
Footnotes
Copyright
Abstract
This document describes the development and release schedule for Python 3.1.
The schedule primarily concerns itself with PEP-sized items. Small features may
be added up to and including the first beta release. Bugs may be fixed until
the final release.
Release Manager and Crew
Position
Name
3.1 Release Manager
Benjamin Peterson
Windows installers
Martin v. Loewis
Mac installers
Ronald Oussoren
Release Schedule
3.1a1 March 7, 2009
3.1a2 April 4, 2009
3.1b1 May 6, 2009
3.1rc1 May 30, 2009
3.1rc2 June 13, 2009
3.1 final June 27, 2009
Maintenance Releases
3.1 is no longer maintained. 3.1 received security fixes until June
2012.
Previous maintenance releases are:
v3.1.1rc1 2009-08-13
v3.1.1 2009-08-16
v3.1.2rc1 2010-03-06
v3.1.2 2010-03-20
v3.1.3rc1 2010-11-13
v3.1.3 2010-11-27
v3.1.4rc1 2011-05-29
v3.1.4 2011-06-11
v3.1.5rc1 2012-02-23
v3.1.5rc2 2012-03-15
v3.1.5 2012-04-06
Features for 3.1
importlib
io in C
Update simplejson to the latest external version [1].
Ordered dictionary for collections (PEP 372).
auto-numbered replacement fields in str.format() strings [2]
Nested with-statements in one with statement
Footnotes
[1]
http://bugs.python.org/issue4136
[2]
http://bugs.python.org/issue5237
Copyright
This document has been placed in the public domain.
| Final | PEP 375 – Python 3.1 Release Schedule | Informational | This document describes the development and release schedule for Python 3.1.
The schedule primarily concerns itself with PEP-sized items. Small features may
be added up to and including the first beta release. Bugs may be fixed until
the final release. |
PEP 376 – Database of Installed Python Distributions
Author:
Tarek Ziadé <tarek at ziade.org>
Status:
Final
Type:
Standards Track
Topic:
Packaging
Created:
22-Feb-2009
Python-Version:
2.7, 3.2
Post-History:
22-Jun-2009
Table of Contents
Abstract
Rationale
How distributions are installed
Uninstall information
What this PEP proposes
One .dist-info directory per installed distribution
RECORD
INSTALLER
REQUESTED
Implementation details
New functions and classes in pkgutil
Functions
Distribution class
Examples
New functions in Distutils
Filtering
Installer marker
Adding an Uninstall script
Backward compatibility and roadmap
References
Acknowledgements
Copyright
Attention
This PEP is a historical document. The up-to-date, canonical spec, Core metadata specifications, is maintained on the PyPA specs page.
See the PyPA specification update process for how to propose changes.
Abstract
The goal of this PEP is to provide a standard infrastructure to manage
project distributions installed on a system, so all tools that are
installing or removing projects are interoperable.
To achieve this goal, the PEP proposes a new format to describe installed
distributions on a system. It also describes a reference implementation
for the standard library.
In the past an attempt was made to create an installation database
(see PEP 262).
Combined with PEP 345, the current proposal supersedes PEP 262.
Note: the implementation plan didn’t go as expected, so it should be
considered informative only for this PEP.
Rationale
There are two problems right now in the way distributions are installed in
Python:
There are too many ways to do it and this makes interoperation difficult.
There is no API to get information on installed distributions.
How distributions are installed
Right now, when a distribution is installed in Python, every element can
be installed in a different directory.
For instance, Distutils installs the pure Python code in the purelib
directory, which is lib/python2.6/site-packages for unix-like systems and
Mac OS X, or Lib\site-packages under Python’s installation directory for
Windows.
Additionally, the install_egg_info subcommand of the Distutils install
command adds an .egg-info file for the project into the purelib
directory.
For example, for the docutils distribution, which contains one package, an
extra module and executable scripts, three elements are installed in
site-packages:
docutils: The docutils package.
roman.py: An extra module used by docutils.
docutils-0.5-py2.6.egg-info: A file containing the distribution metadata
as described in PEP 314. This file corresponds to the file
called PKG-INFO, built by the sdist command.
Some executable scripts, such as rst2html.py, are also added in the
bin directory of the Python installation.
Another project called setuptools [3] has two other formats
to install distributions, called EggFormats [6]:
a self-contained .egg directory, that contains all the distribution files
and the distribution metadata in a file called PKG-INFO in a subdirectory
called EGG-INFO. setuptools creates other files in that directory that can
be considered as complementary metadata.
an .egg-info directory installed in site-packages, that contains the same
files EGG-INFO has in the .egg format.
The first format is automatically used when you install a distribution that
uses the setuptools.setup function in its setup.py file, instead of
the distutils.core.setup one.
setuptools also adds a reference to the distribution into an
easy-install.pth file.
Last, the setuptools project provides an executable script called
easy_install [4] that installs all distributions, including
distutils-based ones in self-contained .egg directories.
If you want to have standalone .egg-info directories for your distributions,
e.g. the second setuptools format, you have to force it when you work
with a setuptools-based distribution or with the easy_install script.
You can force it by using the --single-version-externally-managed option
or the --root option. This will make the setuptools project install
the project like distutils does.
This option is used by:
the pip [5] installer
the Fedora packagers [7].
the Debian packagers [8].
Uninstall information
Distutils doesn’t provide an uninstall command. If you want to uninstall
a distribution, you have to be a power user and remove the various elements
that were installed, and then look over the .pth file to clean them if
necessary.
And the process differs depending on the tools you have used to install the
distribution and if the distribution’s setup.py uses Distutils or
Setuptools.
Under some circumstances, you might not be able to know for sure that you
have removed everything, or that you didn’t break another distribution by
removing a file that is shared among several distributions.
But there’s a common behavior: when you install a distribution, files are
copied in your system. And it’s possible to keep track of these files for
later removal.
Moreover, the Pip project has gained an uninstall feature lately. It
records all installed files, using the record option of the install
command.
What this PEP proposes
To address those issues, this PEP proposes a few changes:
A new .dist-info structure using a directory, inspired by one format of
the EggFormats standard from setuptools.
New APIs in pkgutil to be able to query the information of installed
distributions.
An uninstall function and an uninstall script in Distutils.
One .dist-info directory per installed distribution
This PEP proposes an installation format inspired by one of the options in the
EggFormats standard, the one that uses a distinct directory located in the
site-packages directory.
This distinct directory is named as follows:
name + '-' + version + '.dist-info'
This .dist-info directory can contain these files:
METADATA: contains metadata, as described in PEP 345, PEP 314 and PEP 241.
RECORD: records the list of installed files
INSTALLER: records the name of the tool used to install the project
REQUESTED: the presence of this file indicates that the project
installation was explicitly requested (i.e., not installed as a dependency).
The METADATA, RECORD and INSTALLER files are mandatory, while REQUESTED may
be missing.
This proposal will not impact Python itself because the metadata files are not
used anywhere yet in the standard library besides Distutils.
It will impact the setuptools and pip projects but, given the fact that
they already work with a directory that contains a PKG-INFO file, the change
will have no deep consequences.
RECORD
A RECORD file is added inside the .dist-info directory at installation
time when installing a source distribution using the install command.
Notice that when installing a binary distribution created with bdist command
or a bdist-based command, the RECORD file will be installed as well since
these commands use the install command to create binary distributions.
The RECORD file holds the list of installed files. These correspond
to the files listed by the record option of the install command, and will
be generated by default. This allows the implementation of an uninstallation
feature, as explained later in this PEP. The install command also provides
an option to prevent the RECORD file from being written and this option
should be used when creating system packages.
Third-party installation tools also should not overwrite or delete files
that are not in a RECORD file without prompting or warning.
This RECORD file is inspired from PEP 262 FILES.
The RECORD file is a CSV file, composed of records, one line per
installed file. The csv module is used to read the file, with
these options:
field delimiter : ,
quoting char : ".
line terminator : os.linesep (so \r\n or \n)
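A minimal, hypothetical sketch (not part of the PEP) of reading a RECORD file with the csv module under the options listed above:
import csv

def read_record(path):
    # Yields one (installed_path, hash, size) tuple per RECORD line.
    with open(path) as record_file:
        for row in csv.reader(record_file, delimiter=',', quotechar='"'):
            if row:
                yield tuple(row)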
When a distribution is installed, files can be installed under:
the base location: path defined by the --install-lib option,
which defaults to the site-packages directory.
the installation prefix: path defined by the --prefix option, which
defaults to sys.prefix.
any other path on the system.
Each record is composed of three elements:
the file’s path
a ‘/’-separated path, relative to the base location, if the file is
under the base location.
a ‘/’-separated path, relative to the base location, if the file
is under the installation prefix AND if the base location is a
subpath of the installation prefix.
an absolute path, using the local platform separator
a hash of the file’s contents.
Notice that pyc and pyo generated files don’t have any hash because
they are automatically produced from py files. So checking the hash
of the corresponding py file is enough to decide if the file and
its associated pyc or pyo files have changed.
The hash is either the empty string or the hash algorithm as named in
hashlib.algorithms_guaranteed, followed by the equals character
=, followed by the urlsafe-base64-nopad encoding of the digest
(base64.urlsafe_b64encode(digest) with trailing = removed); a sketch of
this encoding follows the list below.
the file’s size in bytes
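The hash encoding described above can be sketched with this hypothetical helper (not part of the PEP):
import base64
import hashlib

def record_hash(path, algorithm='sha256'):
    # Returns "<algorithm>=<urlsafe-base64 digest with trailing '=' removed>"
    # for the file's contents.
    with open(path, 'rb') as f:
        digest = hashlib.new(algorithm, f.read()).digest()
    encoded = base64.urlsafe_b64encode(digest).decode('ascii').rstrip('=')
    return '%s=%s' % (algorithm, encoded)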
The csv module is used to generate this file, so the field separator is
“,”. Any “,” character found within a field is escaped automatically by
csv.
When the file is read, the U option is used so the universal newline
support (see PEP 278) is activated, avoiding any trouble
reading a file produced on a platform that uses a different new line
terminator.
Here’s an example of a RECORD file (extract):
lib/python2.6/site-packages/docutils/__init__.py,md5=nWt-Dge1eug4iAgqLS_uWg,9544
lib/python2.6/site-packages/docutils/__init__.pyc,,
lib/python2.6/site-packages/docutils/core.py,md5=X90C_JLIcC78PL74iuhPnA,66188
lib/python2.6/site-packages/docutils/core.pyc,,
lib/python2.6/site-packages/roman.py,md5=7YhfNczihNjOY0FXlupwBg,234
lib/python2.6/site-packages/roman.pyc,,
/usr/local/bin/rst2html.py,md5=g22D3amDLJP-FhBzCi7EvA,234
/usr/local/bin/rst2html.pyc,,
python2.6/site-packages/docutils-0.5.dist-info/METADATA,md5=ovJyUNzXdArGfmVyb0onyA,195
lib/python2.6/site-packages/docutils-0.5.dist-info/RECORD,,
Notice that the RECORD file can’t contain a hash of itself and is just mentioned here.
A project that installs a config.ini file in /etc/myapp will be added like this:
/etc/myapp/config.ini,md5=gLfd6IANquzGLhOkW4Mfgg,9544
For a Windows platform, the drive letter is added for the absolute paths,
so a file that is copied into c:\MyApp\ will be:
c:\etc\myapp\config.ini,md5=gLfd6IANquzGLhOkW4Mfgg,9544
INSTALLER
The install command has a new option called installer. This option
is the name of the tool used to invoke the installation. It’s a normalized
lower-case string matching [a-z0-9_\-\.].
$ python setup.py install --installer=pkg-system
It defaults to distutils if not provided.
When a distribution is installed, the INSTALLER file is generated in the
.dist-info directory with this value, to keep track of who installed the
distribution. The file is a single-line text file.
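An illustrative check (not from the PEP) of the normalized installer-name constraint described above:
import re

_INSTALLER_NAME = re.compile(r'^[a-z0-9_\-\.]+$')

def is_valid_installer_name(name):
    # True for normalized lower-case names such as 'distutils' or 'pkg-system'.
    return _INSTALLER_NAME.match(name) is not None

assert is_valid_installer_name('pkg-system')
assert not is_valid_installer_name('Cool Installer!')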
REQUESTED
Some install tools automatically detect unfulfilled dependencies and
install them. In these cases, it is useful to track which
distributions were installed purely as a dependency, so if their
dependent distribution is later uninstalled, the user can be alerted
of the orphaned dependency.
If a distribution is installed by direct user request (the usual
case), a file REQUESTED is added to the .dist-info directory of the
installed distribution. The REQUESTED file may be empty, or may
contain a marker comment line beginning with the “#” character.
If an install tool installs a distribution automatically, as a
dependency of another distribution, the REQUESTED file should not be
created.
The install command of distutils by default creates the REQUESTED
file. It accepts --requested and --no-requested options to explicitly
specify whether the file is created.
If a distribution that was already installed on the system as a dependency
is later installed by name, the distutils install command will
create the REQUESTED file in the .dist-info directory of the existing
installation.
Implementation details
Note: this section is non-normative. In the end, this PEP was
implemented by third-party libraries and tools, not the standard
library.
New functions and classes in pkgutil
To use the .dist-info directory content, we need to add in the standard
library a set of APIs. The best place to put these APIs is pkgutil.
Functions
The new functions added in the pkgutil module are :
distinfo_dirname(name, version) -> directory name
name is converted to a standard distribution name by replacing any
runs of non-alphanumeric characters with a single ‘-’.
version is converted to a standard version string. Spaces become
dots, and all other non-alphanumeric characters (except dots) become
dashes, with runs of multiple dashes condensed to a single dash.
Both attributes are then converted into their filename-escaped form,
i.e. any ‘-’ characters are replaced with ‘_’ other than the one in
‘dist-info’ and the one separating the name from the version number.
get_distributions() -> iterator of Distribution instances.
Provides an iterator that looks for .dist-info directories in
sys.path and returns Distribution instances for
each one of them.
get_distribution(name) -> Distribution or None.
obsoletes_distribution(name, version=None) -> iterator of Distribution
instances.
Iterates over all distributions to find which distributions obsolete
name. If a version is provided, it will be used to filter the results.
provides_distribution(name, version=None) -> iterator of Distribution
instances.
Iterates over all distributions to find which distributions provide
name. If a version is provided, it will be used to filter the results.
Scans all elements in sys.path and looks for all directories ending with
.dist-info. Returns a Distribution corresponding to the
.dist-info directory that contains a METADATA that matches name
for the name metadata.
This function only returns the first result found, since no more than one
value is expected. If the directory is not found, returns None.
get_file_users(path) -> iterator of Distribution instances.
Iterates over all distributions to find out which distributions use path.
path can be a local absolute path or a relative ‘/’-separated path.
A local absolute path is an absolute path in which occurrences of ‘/’
have been replaced by the system separator given by os.sep.
Distribution class
A new class called Distribution is created with the path of the
.dist-info directory provided to the constructor. It reads the metadata
contained in METADATA when it is instantiated.
Distribution(path) -> instance
Creates a Distribution instance for the given path.
Distribution provides the following attributes:
name: The name of the distribution.
metadata: A DistributionMetadata instance loaded with the
distribution’s METADATA file.
requested: A boolean that indicates whether the REQUESTED
metadata file is present (in other words, whether the distribution was
installed by user request).
And following methods:
get_installed_files(local=False) -> iterator of (path, hash, size)
Iterates over the RECORD entries and returns a tuple (path, hash, size)
for each line. If local is True, the path is transformed into a
local absolute path. Otherwise the raw value from RECORD is returned.
A local absolute path is an absolute path in which occurrences of ‘/’
have been replaced by the system separator given by os.sep.
uses(path) -> Boolean
Returns True if path is listed in RECORD. path
can be a local absolute path or a relative ‘/’-separated path.
get_distinfo_file(path, binary=False) -> file object
Returns a file located under the .dist-info directory.
Returns a file instance for the file pointed to by path.
path has to be a ‘/’-separated path relative to the .dist-info
directory or an absolute path.
If path is an absolute path and doesn’t start with the .dist-info
directory path, a DistutilsError is raised.
If binary is True, opens the file in read-only binary mode (rb),
otherwise opens it in read-only mode (r).
get_distinfo_files(local=False) -> iterator of paths
Iterates over the RECORD entries and returns paths for each line if the path
is pointing to a file located in the .dist-info directory or one of its
subdirectories.
If local is True, each path is transformed into a
local absolute path. Otherwise the raw value from RECORD is returned.
Notice that the API is organized in five classes that work with directories
and Zip files (so it works with files included in Zip files, see PEP 273 for
more details). These classes are described in the documentation
of the prototype implementation for interested readers [9].
Examples
Let’s use some of the new APIs with our docutils example:
>>> from pkgutil import get_distribution, get_file_users, distinfo_dirname
>>> dist = get_distribution('docutils')
>>> dist.name
'docutils'
>>> dist.metadata.version
'0.5'
>>> distinfo_dirname('docutils', '0.5')
'docutils-0.5.dist-info'
>>> distinfo_dirname('python-ldap', '2.5')
'python_ldap-2.5.dist-info'
>>> distinfo_dirname('python-ldap', '2.5 a---5')
'python_ldap-2.5.a_5.dist-info'
>>> for path, hash, size in dist.get_installed_files():
... print '%s %s %d' % (path, hash, size)
...
python2.6/site-packages/docutils/__init__.py,b690274f621402dda63bf11ba5373bf2,9544
python2.6/site-packages/docutils/core.py,9c4b84aff68aa55f2e9bf70481b94333,66188
python2.6/site-packages/roman.py,a4b84aff68aa55f2e9bf70481b943D3,234
/usr/local/bin/rst2html.py,a4b84aff68aa55f2e9bf70481b943D3,234
python2.6/site-packages/docutils-0.5.dist-info/METADATA,6fe57de576d749536082d8e205b77748,195
python2.6/site-packages/docutils-0.5.dist-info/RECORD
>>> dist.uses('docutils/core.py')
True
>>> dist.uses('/usr/local/bin/rst2html.py')
True
>>> dist.get_distinfo_file('METADATA')
<open file at ...>
>>> dist.requested
True
New functions in Distutils
Distutils already provides a very basic way to install a distribution, which
is running the install command over the setup.py script of the
distribution.
Distutils2 will provide a very basic uninstall function, that
is added in distutils2.util and takes the name of the distribution to
uninstall as its argument. uninstall uses the APIs described earlier and
removes all unique files, as long as their hash didn’t change. Then it removes
empty directories left behind.
uninstall returns a list of uninstalled files:
>>> from distutils2.util import uninstall
>>> uninstall('docutils')
['/opt/local/lib/python2.6/site-packages/docutils/core.py',
...
'/opt/local/lib/python2.6/site-packages/docutils/__init__.py']
If the distribution is not found, a DistutilsUninstallError is raised.
Filtering
To make it a reference API for third-party projects that wish to control
how uninstall works, a second callable argument can be used. It’s
called for each file that is removed. If the callable returns True, the
file is removed. If it returns False, it’s left alone.
Examples:
>>> def _remove_and_log(path):
... logging.info('Removing %s' % path)
... return True
...
>>> uninstall('docutils', _remove_and_log)
>>> def _dry_run(path):
... logging.info('Removing %s (dry run)' % path)
... return False
...
>>> uninstall('docutils', _dry_run)
Of course, a third-party tool can use lower-level pkgutil APIs to
implement its own uninstall feature.
Installer marker
As explained earlier in this PEP, the install command adds an INSTALLER
file in the .dist-info directory with the name of the installer.
To avoid removing distributions that were installed by another packaging
system, the uninstall function takes an extra argument installer which
defaults to distutils2.
When called, uninstall checks that the INSTALLER file matches
this argument. If not, it raises a DistutilsUninstallError:
>>> uninstall('docutils')
Traceback (most recent call last):
...
DistutilsUninstallError: docutils was installed by 'cool-pkg-manager'
>>> uninstall('docutils', installer='cool-pkg-manager')
This allows a third-party application to use the uninstall function
and strongly suggest that no other program remove a distribution it has
previously installed. This is useful when a third-party program that relies
on Distutils APIs performs extra steps on the system at installation time
that it has to undo at uninstallation time.
Adding an Uninstall script
An uninstall script is added in Distutils2 and is used like this:
$ python -m distutils2.uninstall projectname
Notice that the script doesn’t check whether the removal of a distribution breaks
another distribution, although by using the uninstall function it makes sure
that all the files it removes are not used by any other distribution.
Also note that this uninstall script pays no attention to the
REQUESTED metadata; that is provided only for use by external tools to
provide more advanced dependency management.
Backward compatibility and roadmap
These changes don’t introduce any compatibility problems since they
will be implemented in:
pkgutil in new functions
distutils2
The plan is to include the functionality outlined in this PEP in pkgutil for
Python 3.2, and in Distutils2.
Distutils2 will also contain a backport of the new pkgutil, and can be used for
Python 2.4 onward.
Distributions installed using existing, pre-standardization formats do not have
the necessary metadata available for the new API, and thus will be
ignored. Third-party tools may of course continue to support previous
formats in addition to the new format, in order to ease the transition.
References
[1]
http://docs.python.org/distutils
[2]
http://hg.python.org/distutils2
[3]
http://peak.telecommunity.com/DevCenter/setuptools
[4]
http://peak.telecommunity.com/DevCenter/EasyInstall
[5]
http://pypi.python.org/pypi/pip
[6]
http://peak.telecommunity.com/DevCenter/EggFormats
[7]
http://fedoraproject.org/wiki/Packaging/Python/Eggs#Providing_Eggs_using_Setuptools
[8]
http://wiki.debian.org/DebianPython/NewPolicy
[9]
http://bitbucket.org/tarek/pep376/
Acknowledgements
Jim Fulton, Ian Bicking, Phillip Eby, Rafael Villar Burke, and many people at
Pycon and Distutils-SIG.
Copyright
This document has been placed in the public domain.
| Final | PEP 376 – Database of Installed Python Distributions | Standards Track | The goal of this PEP is to provide a standard infrastructure to manage
project distributions installed on a system, so all tools that are
installing or removing projects are interoperable. |
PEP 377 – Allow __enter__() methods to skip the statement body
Author:
Alyssa Coghlan <ncoghlan at gmail.com>
Status:
Rejected
Type:
Standards Track
Created:
08-Mar-2009
Python-Version:
2.7, 3.1
Post-History:
08-Mar-2009
Table of Contents
Abstract
PEP Rejection
Proposed Change
Rationale for Change
Performance Impact
Reference Implementation
Acknowledgements
References
Copyright
Abstract
This PEP proposes a backwards compatible mechanism that allows __enter__()
methods to skip the body of the associated with statement. The lack of
this ability currently means the contextlib.contextmanager decorator
is unable to fulfil its specification of being able to turn arbitrary
code into a context manager by moving it into a generator function
with a yield in the appropriate location. One symptom of this is that
contextlib.nested will currently raise RuntimeError in
situations where writing out the corresponding nested with
statements would not [1].
The proposed change is to introduce a new flow control exception
SkipStatement, and skip the execution of the with
statement body if __enter__() raises this exception.
PEP Rejection
This PEP was rejected by Guido [4] as it imposes too great an increase
in complexity without a proportional increase in expressiveness and
correctness. In the absence of compelling use cases that need the more
complex semantics proposed by this PEP the existing behaviour is
considered acceptable.
Proposed Change
The semantics of the with statement will be changed to include a
new try/except/else block around the call to __enter__().
If SkipStatement is raised by the __enter__() method, then
the main section of the with statement (now located in the else
clause) will not be executed. To avoid leaving the names in any as
clause unbound in this case, a new StatementSkipped singleton
(similar to the existing NotImplemented singleton) will be
assigned to all names that appear in the as clause.
The components of the with statement remain as described in PEP 343:
with EXPR as VAR:
BLOCK
After the modification, the with statement semantics would
be as follows:
mgr = (EXPR)
exit = mgr.__exit__ # Not calling it yet
try:
value = mgr.__enter__()
except SkipStatement:
VAR = StatementSkipped
# Only if "as VAR" is present and
# VAR is a single name
# If VAR is a tuple of names, then StatementSkipped
# will be assigned to each name in the tuple
else:
exc = True
try:
try:
VAR = value # Only if "as VAR" is present
BLOCK
except:
# The exceptional case is handled here
exc = False
if not exit(*sys.exc_info()):
raise
# The exception is swallowed if exit() returns true
finally:
# The normal and non-local-goto cases are handled here
if exc:
exit(None, None, None)
With the above change in place for the with statement semantics,
contextlib.contextmanager() will then be modified to raise
SkipStatement instead of RuntimeError when the underlying
generator doesn’t yield.
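For context (an illustrative sketch runnable on current Python, not part of the PEP text), the existing behaviour that this change targets can be reproduced with a generator-based context manager whose generator may not yield:
import contextlib

@contextlib.contextmanager
def maybe_skip(condition):
    if condition:
        yield
    # If condition is false the generator never yields, and entering the
    # with statement currently raises RuntimeError("generator didn't yield").

with maybe_skip(True):
    pass                     # body runs normally

try:
    with maybe_skip(False):
        pass
except RuntimeError as exc:
    print(exc)               # under this PEP, the body would simply be skipped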
Rationale for Change
Currently, some apparently innocuous context managers may raise
RuntimeError when executed. This occurs when the context
manager’s __enter__() method encounters a situation where
the written out version of the code corresponding to the
context manager would skip the code that is now the body
of the with statement. Since the __enter__() method
has no mechanism available to signal this to the interpreter,
it is instead forced to raise an exception that not only
skips the body of the with statement, but also jumps over
all code until the nearest exception handler. This goes against
one of the design goals of the with statement, which was to
be able to factor out arbitrary common exception handling code
into a single context manager by putting into a generator
function and replacing the variant part of the code with a
yield statement.
Specifically, the following examples behave differently if
cmB().__enter__() raises an exception which cmA().__exit__()
then handles and suppresses:
with cmA():
with cmB():
do_stuff()
# This will resume here without executing "do_stuff()"
@contextlib.contextmanager
def combined():
with cmA():
with cmB():
yield
with combined():
do_stuff()
# This will raise a RuntimeError complaining that the context
# manager's underlying generator didn't yield
with contextlib.nested(cmA(), cmB()):
do_stuff()
# This will raise the same RuntimeError as the contextmanager()
# example (unsurprising, given that the nested() implementation
# uses contextmanager())
# The following class based version shows that the issue isn't
# specific to contextlib.contextmanager() (it also shows how
# much simpler it is to write context managers as generators
# instead of as classes!)
class CM(object):
def __init__(self):
self.cmA = None
self.cmB = None
def __enter__(self):
if self.cmA is not None:
raise RuntimeError("Can't re-use this CM")
self.cmA = cmA()
self.cmA.__enter__()
try:
self.cmB = cmB()
self.cmB.__enter__()
except:
self.cmA.__exit__(*sys.exc_info())
# Can't suppress in __enter__(), so must raise
raise
def __exit__(self, *args):
suppress = False
try:
if self.cmB is not None:
suppress = self.cmB.__exit__(*args)
except:
suppress = self.cmA.__exit__(*sys.exc_info())
if not suppress:
# Exception has changed, so reraise explicitly
raise
else:
if suppress:
# cmB already suppressed the exception,
# so don't pass it to cmA
suppress = self.cmA.__exit__(None, None, None)
else:
suppress = self.cmA.__exit__(*args)
return suppress
With the proposed semantic change in place, the contextlib based examples
above would then “just work”, but the class based version would need
a small adjustment to take advantage of the new semantics:
class CM(object):
def __init__(self):
self.cmA = None
self.cmB = None
def __enter__(self):
if self.cmA is not None:
raise RuntimeError("Can't re-use this CM")
self.cmA = cmA()
self.cmA.__enter__()
try:
self.cmB = cmB()
self.cmB.__enter__()
except:
if self.cmA.__exit__(*sys.exc_info()):
# Suppress the exception, but don't run
# the body of the with statement either
raise SkipStatement
raise
def __exit__(self, *args):
suppress = False
try:
if self.cmB is not None:
suppress = self.cmB.__exit__(*args)
except:
suppress = self.cmA.__exit__(*sys.exc_info())
if not suppress:
# Exception has changed, so reraise explicitly
raise
else:
if suppress:
# cmB already suppressed the exception,
# so don't pass it to cmA
suppress = self.cmA.__exit__(None, None, None)
else:
suppress = self.cmA.__exit__(*args)
return suppress
There is currently a tentative suggestion [3] to add import-style syntax to
the with statement to allow multiple context managers to be included in
a single with statement without needing to use contextlib.nested. In
that case the compiler has the option of simply emitting multiple with
statements at the AST level, thus allowing the semantics of actual nested
with statements to be reproduced accurately. However, such a change
would highlight rather than alleviate the problem the current PEP aims to
address: it would not be possible to use contextlib.contextmanager to
reliably factor out such with statements, as they would exhibit exactly
the same semantic differences as are seen with the combined() context
manager in the above example.
Performance Impact
Implementing the new semantics makes it necessary to store the references
to the __enter__ and __exit__ methods in temporary variables instead
of on the stack. This results in a slight regression in with statement
speed relative to Python 2.6/3.1. However, implementing a custom
SETUP_WITH opcode would negate any differences between the two
approaches (as well as dramatically improving speed by eliminating more
than a dozen unnecessary trips around the eval loop).
Reference Implementation
Patch attached to Issue 5251 [1]. That patch uses only existing opcodes
(i.e. no SETUP_WITH).
Acknowledgements
James William Pye both raised the issue and suggested the basic outline of
the solution described in this PEP.
References
[1]
Issue 5251: contextlib.nested inconsistent with nested with statements
(http://bugs.python.org/issue5251)
[3]
Import-style syntax to reduce indentation of nested with statements
(https://mail.python.org/pipermail/python-ideas/2009-March/003188.html)
[4]
Guido’s rejection of the PEP
(https://mail.python.org/pipermail/python-dev/2009-March/087263.html)
Copyright
This document has been placed in the public domain.
| Rejected | PEP 377 – Allow __enter__() methods to skip the statement body | Standards Track | This PEP proposes a backwards compatible mechanism that allows __enter__()
methods to skip the body of the associated with statement. The lack of
this ability currently means the contextlib.contextmanager decorator
is unable to fulfil its specification of being able to turn arbitrary
code into a context manager by moving it into a generator function
with a yield in the appropriate location. One symptom of this is that
contextlib.nested will currently raise RuntimeError in
situations where writing out the corresponding nested with
statements would not [1]. |
PEP 379 – Adding an Assignment Expression
Author:
Jervis Whitley <jervisau at gmail.com>
Status:
Withdrawn
Type:
Standards Track
Created:
14-Mar-2009
Python-Version:
2.7, 3.2
Post-History:
Table of Contents
Abstract
Motivation and Summary
Use Cases
Specification
Examples from the Standard Library
Examples
References
Copyright
Abstract
This PEP adds a new assignment expression to the Python language
to make it possible to assign the result of an expression in
almost any place. The new expression will allow the assignment of
the result of an expression at first use (in a comparison for
example).
Motivation and Summary
Issue1714448 “if something as x:” [1] describes a feature to allow
assignment of the result of an expression in an if statement to a
name. It supposed that the as syntax could be borrowed for this
purpose. Many times it is not the expression itself that is
interesting, but rather one of the terms that make up the
expression. To be clear, something like this:
if (f_result() == [1, 2, 3]) as res:
seems awfully limited, when this:
if (f_result() as res) == [1, 2, 3]:
is probably the desired result.
Use Cases
See the Examples section near the end.
Specification
A new expression is proposed with the (nominal) syntax:
EXPR -> VAR
This single expression does the following:
Evaluate the value of EXPR, an arbitrary expression;
Assign the result to VAR, a single assignment target; and
Leave the result of EXPR on the Top of Stack (TOS)
Here -> or (RARROW) has been used to illustrate the concept that
the result of EXPR is assigned to VAR.
The translation of the proposed syntax is:
VAR = (EXPR)
(EXPR)
The assignment target can be either an attribute, a subscript or
name:
f() -> name[0] # where 'name' exists previously.
f() -> name.attr # again 'name' exists prior to this expression.
f() -> name
This expression should be available anywhere that an expression is
currently accepted.
All exceptions that are currently raised during invalid
assignments will continue to be raised when using the assignment
expression. For example, a NameError will be raised in
examples 1 and 2 above if name is not previously defined, or an
IndexError if index 0 is out of range.
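Since the proposed syntax does not exist, the intended behaviour can only
be sketched. The following rough emulation of the three steps uses a
hypothetical assign() helper and a throwaway namespace object; f_result()
is a stub standing in for an arbitrary EXPR:
class _Namespace:
    pass

ns = _Namespace()

def assign(value, target, name):
    setattr(target, name, value)   # step 2: bind the result to VAR
    return value                   # step 3: leave the result available

def f_result():                    # stub standing in for an arbitrary EXPR
    return [1, 2, 3]

# roughly equivalent to:  if (f_result() -> res) == [1, 2, 3]:
if assign(f_result(), ns, "res") == [1, 2, 3]:
    print(ns.res)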
Examples from the Standard Library
The following two examples were chosen after a brief search
through the standard library, specifically both are from ast.py
which happened to be open at the time of the search.
Original:
def walk(node):
    from collections import deque
    todo = deque([node])
    while todo:
        node = todo.popleft()
        todo.extend(iter_child_nodes(node))
        yield node
Using assignment expression:
def walk(node):
    from collections import deque
    todo = deque([node])
    while todo:
        todo.extend(iter_child_nodes(todo.popleft() -> node))
        yield node
Original:
def get_docstring(node, clean=True):
    if not isinstance(node, (FunctionDef, ClassDef, Module)):
        raise TypeError("%r can't have docstrings"
                        % node.__class__.__name__)
    if node.body and isinstance(node.body[0], Expr) and \
       isinstance(node.body[0].value, Str):
        if clean:
            import inspect
            return inspect.cleandoc(node.body[0].value.s)
        return node.body[0].value.s
Using assignment expression:
def get_docstring(node, clean=True):
    if not isinstance(node, (FunctionDef, ClassDef, Module)):
        raise TypeError("%r can't have docstrings"
                        % node.__class__.__name__)
    if node.body -> body and isinstance(body[0] -> elem, Expr) and \
       isinstance(elem.value -> value, Str):
        if clean:
            import inspect
            return inspect.cleandoc(value.s)
        return value.s
Examples
The examples shown below highlight some of the desirable features
of the assignment expression, and some of the possible corner
cases.
Assignment in an if statement for use later:
def expensive():
    import time; time.sleep(1)
    return 'spam'

if expensive() -> res in ('spam', 'eggs'):
    dosomething(res)
Assignment in a while loop clause:
while len(expensive() -> res) == 4:
    dosomething(res)
Keep the iterator object from the for loop:
for ch in expensive() -> res:
    sell_on_internet(res)
Corner case:
for ch -> please_dont in expensive():
    pass
    # who would want to do this? Not I.
References
[1]
Issue1714448 “if something as x:”, k0wax
http://bugs.python.org/issue1714448
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 379 – Adding an Assignment Expression | Standards Track | This PEP adds a new assignment expression to the Python language
to make it possible to assign the result of an expression in
almost any place. The new expression will allow the assignment of
the result of an expression at first use (in a comparison for
example). |
PEP 380 – Syntax for Delegating to a Subgenerator
Author:
Gregory Ewing <greg.ewing at canterbury.ac.nz>
Status:
Final
Type:
Standards Track
Created:
13-Feb-2009
Python-Version:
3.3
Post-History:
Resolution:
Python-Dev message
Table of Contents
Abstract
PEP Acceptance
Motivation
Proposal
Enhancements to StopIteration
Formal Semantics
Rationale
The Refactoring Principle
Finalization
Generators as Threads
Syntax
Optimisations
Use of StopIteration to return values
Rejected Ideas
Criticisms
Alternative Proposals
Additional Material
Copyright
Abstract
A syntax is proposed for a generator to delegate part of its
operations to another generator. This allows a section of code
containing ‘yield’ to be factored out and placed in another generator.
Additionally, the subgenerator is allowed to return with a value, and
the value is made available to the delegating generator.
The new syntax also opens up some opportunities for optimisation when
one generator re-yields values produced by another.
PEP Acceptance
Guido officially accepted the PEP on 26th June, 2011.
Motivation
A Python generator is a form of coroutine, but has the limitation that
it can only yield to its immediate caller. This means that a piece of
code containing a yield cannot be factored out and put into a
separate function in the same way as other code. Performing such a
factoring causes the called function to itself become a generator, and
it is necessary to explicitly iterate over this second generator and
re-yield any values that it produces.
If yielding of values is the only concern, this can be performed
without much difficulty using a loop such as
for v in g:
    yield v
However, if the subgenerator is to interact properly with the caller
in the case of calls to send(), throw() and close(),
things become considerably more difficult. As will be seen later, the
necessary code is very complicated, and it is tricky to handle all the
corner cases correctly.
A new syntax will be proposed to address this issue. In the simplest
use cases, it will be equivalent to the above for-loop, but it will
also handle the full range of generator behaviour, and allow generator
code to be refactored in a simple and straightforward way.
Proposal
The following new expression syntax will be allowed in the body of a
generator:
yield from <expr>
where <expr> is an expression evaluating to an iterable, from which an
iterator is extracted. The iterator is run to exhaustion, during which
time it yields and receives values directly to or from the caller of
the generator containing the yield from expression (the
“delegating generator”).
Furthermore, when the iterator is another generator, the subgenerator
is allowed to execute a return statement with a value, and that
value becomes the value of the yield from expression.
The full semantics of the yield from expression can be described
in terms of the generator protocol as follows:
Any values that the iterator yields are passed directly to the
caller.
Any values sent to the delegating generator using send() are
passed directly to the iterator. If the sent value is None, the
iterator’s __next__() method is called. If the sent value
is not None, the iterator’s send() method is called. If the
call raises StopIteration, the delegating generator is resumed.
Any other exception is propagated to the delegating generator.
Exceptions other than GeneratorExit thrown into the delegating
generator are passed to the throw() method of the iterator.
If the call raises StopIteration, the delegating generator is
resumed. Any other exception is propagated to the delegating
generator.
If a GeneratorExit exception is thrown into the delegating
generator, or the close() method of the delegating generator
is called, then the close() method of the iterator is called
if it has one. If this call results in an exception, it is
propagated to the delegating generator. Otherwise,
GeneratorExit is raised in the delegating generator.
The value of the yield from expression is the first argument
to the StopIteration exception raised by the iterator when
it terminates.
return expr in a generator causes StopIteration(expr) to
be raised upon exit from the generator.
Enhancements to StopIteration
For convenience, the StopIteration exception will be given a
value attribute that holds its first argument, or None if there
are no arguments.
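A minimal sketch of these semantics (the function names and input are
invented for illustration): the subgenerator's return value is carried by
StopIteration and becomes the value of the yield from expression.
def parse_ints(tokens):
    total = 0
    for tok in tokens:
        if not tok.isdigit():
            return total      # becomes StopIteration(total); total is its .value
        total += int(tok)
        yield tok

def report(tokens):
    subtotal = yield from parse_ints(tokens)   # receives the returned value
    print("subtotal:", subtotal)

list(report(["1", "2", "stop"]))               # yields "1", "2"; prints "subtotal: 3"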
Formal Semantics
Python 3 syntax is used in this section.
The statement
RESULT = yield from EXPR
is semantically equivalent to
_i = iter(EXPR)
try:
    _y = next(_i)
except StopIteration as _e:
    _r = _e.value
else:
    while 1:
        try:
            _s = yield _y
        except GeneratorExit as _e:
            try:
                _m = _i.close
            except AttributeError:
                pass
            else:
                _m()
            raise _e
        except BaseException as _e:
            _x = sys.exc_info()
            try:
                _m = _i.throw
            except AttributeError:
                raise _e
            else:
                try:
                    _y = _m(*_x)
                except StopIteration as _e:
                    _r = _e.value
                    break
        else:
            try:
                if _s is None:
                    _y = next(_i)
                else:
                    _y = _i.send(_s)
            except StopIteration as _e:
                _r = _e.value
                break
RESULT = _r
In a generator, the statement
return value
is semantically equivalent to
raise StopIteration(value)
except that, as currently, the exception cannot be caught by
except clauses within the returning generator.
The StopIteration exception behaves as though defined thusly:
class StopIteration(Exception):
    def __init__(self, *args):
        if len(args) > 0:
            self.value = args[0]
        else:
            self.value = None
        Exception.__init__(self, *args)
Rationale
The Refactoring Principle
The rationale behind most of the semantics presented above stems from
the desire to be able to refactor generator code. It should be
possible to take a section of code containing one or more yield
expressions, move it into a separate function (using the usual
techniques to deal with references to variables in the surrounding
scope, etc.), and call the new function using a yield from
expression.
The behaviour of the resulting compound generator should be, as far as
reasonably practicable, the same as the original unfactored generator
in all situations, including calls to __next__(), send(),
throw() and close().
The semantics in the case of subiterators other than generators have been
chosen as a reasonable generalization of the generator case.
The proposed semantics have the following limitations with regard to
refactoring:
A block of code that catches GeneratorExit without subsequently
re-raising it cannot be factored out while retaining exactly the
same behaviour.
Factored code may not behave the same way as unfactored code if a
StopIteration exception is thrown into the delegating generator.
With use cases for these being rare to non-existent, it was not
considered worth the extra complexity required to support them.
Finalization
There was some debate as to whether explicitly finalizing the
delegating generator by calling its close() method while it is
suspended at a yield from should also finalize the subiterator.
An argument against doing so is that it would result in premature
finalization of the subiterator if references to it exist elsewhere.
Consideration of non-refcounting Python implementations led to the
decision that this explicit finalization should be performed, so that
explicitly closing a factored generator has the same effect as doing
so to an unfactored one in all Python implementations.
The assumption made is that, in the majority of use cases, the
subiterator will not be shared. The rare case of a shared subiterator
can be accommodated by means of a wrapper that blocks throw() and
close() calls, or by using a means other than yield from to
call the subiterator.
Generators as Threads
A motivation for generators being able to return values concerns the
use of generators to implement lightweight threads. When using
generators in that way, it is reasonable to want to spread the
computation performed by the lightweight thread over many functions.
One would like to be able to call a subgenerator as though it were an
ordinary function, passing it parameters and receiving a returned
value.
Using the proposed syntax, a statement such as
y = f(x)
where f is an ordinary function, can be transformed into a delegation
call
y = yield from g(x)
where g is a generator. One can reason about the behaviour of the
resulting code by thinking of g as an ordinary function that can be
suspended using a yield statement.
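A minimal sketch of this way of thinking (the names and the driving code
are invented for illustration):
def double(x):
    yield                        # a suspension point, e.g. waiting for an event
    return 2 * x

def task():
    y = yield from double(21)    # reads like "y = double(21)"
    print("result:", y)

t = task()
next(t)                          # run until the yield inside double()
try:
    next(t)                      # resume; double() returns, task() prints 42
except StopIteration:
    pass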
When using generators as threads in this way, typically one is not
interested in the values being passed in or out of the yields.
However, there are use cases for this as well, where the thread is
seen as a producer or consumer of items. The yield from
expression allows the logic of the thread to be spread over as many
functions as desired, with the production or consumption of items
occurring in any subfunction, with the items automatically routed to
or from their ultimate source or destination.
Concerning throw() and close(), it is reasonable to expect
that if an exception is thrown into the thread from outside, it should
first be raised in the innermost generator where the thread is
suspended, and propagate outwards from there; and that if the thread
is terminated from outside by calling close(), the chain of active
generators should be finalised from the innermost outwards.
Syntax
The particular syntax proposed has been chosen as suggestive of its
meaning, while not introducing any new keywords and clearly standing
out as being different from a plain yield.
Optimisations
Using a specialised syntax opens up possibilities for optimisation
when there is a long chain of generators. Such chains can arise, for
instance, when recursively traversing a tree structure. The overhead
of passing __next__() calls and yielded values down and up the
chain can cause what ought to be an O(n) operation to become, in the
worst case, O(n**2).
A possible strategy is to add a slot to generator objects to hold a
generator being delegated to. When a __next__() or send()
call is made on the generator, this slot is checked first, and if it
is nonempty, the generator that it references is resumed instead. If
it raises StopIteration, the slot is cleared and the main generator is
resumed.
This would reduce the delegation overhead to a chain of C function
calls involving no Python code execution. A possible enhancement
would be to traverse the whole chain of generators in a loop and
directly resume the one at the end, although the handling of
StopIteration is more complicated then.
Use of StopIteration to return values
There are a variety of ways that the return value from the generator
could be passed back. Some alternatives include storing it as an
attribute of the generator-iterator object, or returning it as the
value of the close() call to the subgenerator. However, the
proposed mechanism is attractive for a couple of reasons:
Using a generalization of the StopIteration exception makes it easy
for other kinds of iterators to participate in the protocol without
having to grow an extra attribute or a close() method.
It simplifies the implementation, because the point at which the
return value from the subgenerator becomes available is the same
point at which the exception is raised. Delaying until any later
time would require storing the return value somewhere.
Rejected Ideas
Some ideas were discussed but rejected.
Suggestion: There should be some way to prevent the initial call to
__next__(), or substitute it with a send() call with a specified
value, the intention being to support the use of generators wrapped so
that the initial __next__() is performed automatically.
Resolution: Outside the scope of the proposal. Such generators should
not be used with yield from.
Suggestion: If closing a subiterator raises StopIteration with a
value, return that value from the close() call to the delegating
generator.
The motivation for this feature is so that the end of a stream of
values being sent to a generator can be signalled by closing the
generator. The generator would catch GeneratorExit, finish its
computation and return a result, which would then become the return
value of the close() call.
Resolution: This usage of close() and GeneratorExit would be
incompatible with their current role as a bail-out and clean-up
mechanism. It would require that when closing a delegating generator,
after the subgenerator is closed, the delegating generator be resumed
instead of re-raising GeneratorExit. But this is not acceptable,
because it would fail to ensure that the delegating generator is
finalised properly in the case where close() is being called for
cleanup purposes.
Signalling the end of values to a consumer is better addressed by
other means, such as sending in a sentinel value or throwing in an
exception agreed upon by the producer and consumer. The consumer can
then detect the sentinel or exception and respond by finishing its
computation and returning normally. Such a scheme behaves correctly
in the presence of delegation.
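A minimal sketch of the sentinel approach (names are invented for
illustration); because the consumer returns normally, it also behaves
correctly when driven through yield from:
DONE = object()                  # sentinel agreed upon by producer and consumer

def averager():
    total = count = 0
    while True:
        value = yield
        if value is DONE:
            return total / count # normal return; works under delegation
        total += value
        count += 1

avg = averager()
next(avg)                        # prime the consumer
for v in (10, 20, 30):
    avg.send(v)
try:
    avg.send(DONE)
except StopIteration as e:
    print(e.value)               # 20.0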
Suggestion: If close() is not to return a value, then raise an
exception if StopIteration with a non-None value occurs.
Resolution: No clear reason to do so. Ignoring a return value is not
considered an error anywhere else in Python.
Criticisms
Under this proposal, the value of a yield from expression would be
derived in a very different way from that of an ordinary yield
expression. This suggests that some other syntax not containing the
word yield might be more appropriate, but no acceptable
alternative has so far been proposed. Rejected alternatives include
call, delegate and gcall.
It has been suggested that some mechanism other than return in the
subgenerator should be used to establish the value returned by the
yield from expression. However, this would interfere with the
goal of being able to think of the subgenerator as a suspendable
function, since it would not be able to return values in the same way
as other functions.
The use of an exception to pass the return value has been criticised
as an “abuse of exceptions”, without any concrete justification of
this claim. In any case, this is only one suggested implementation;
another mechanism could be used without losing any essential features
of the proposal.
It has been suggested that a different exception, such as
GeneratorReturn, should be used instead of StopIteration to return a
value. However, no convincing practical reason for this has been put
forward, and the addition of a value attribute to StopIteration
mitigates any difficulties in extracting a return value from a
StopIteration exception that may or may not have one. Also, using a
different exception would mean that, unlike ordinary functions,
‘return’ without a value in a generator would not be equivalent to
‘return None’.
Alternative Proposals
Proposals along similar lines have been made before, some using the
syntax yield * instead of yield from. While yield * is
more concise, it could be argued that it looks too similar to an
ordinary yield and the difference might be overlooked when reading
code.
To the author’s knowledge, previous proposals have focused only on
yielding values, and thereby suffered from the criticism that the
two-line for-loop they replace is not sufficiently tiresome to write
to justify a new syntax. By dealing with the full generator protocol,
this proposal provides considerably more benefit.
Additional Material
Some examples of the use of the proposed syntax are available, and
also a prototype implementation based on the first optimisation
outlined above.
Examples and Implementation
A version of the implementation updated for Python 3.3 is available from
tracker issue #11682
Copyright
This document has been placed in the public domain.
| Final | PEP 380 – Syntax for Delegating to a Subgenerator | Standards Track | A syntax is proposed for a generator to delegate part of its
operations to another generator. This allows a section of code
containing ‘yield’ to be factored out and placed in another generator.
Additionally, the subgenerator is allowed to return with a value, and
the value is made available to the delegating generator. |
PEP 381 – Mirroring infrastructure for PyPI
Author:
Tarek Ziadé <tarek at ziade.org>, Martin von Löwis <martin at v.loewis.de>
Status:
Withdrawn
Type:
Standards Track
Topic:
Packaging
Created:
21-Mar-2009
Post-History:
Table of Contents
Abstract
PEP Withdrawal
Rationale
Mirror listing and registering
Statistics page
Mirror Authenticity
Special pages a mirror needs to provide
Last modified date
Local statistics
How a mirror should synchronize with PyPI
The mirroring protocol
User-agent request header
How a client can use PyPI and its mirrors
Fail-over mechanism
Extra package indexes
Merging several indexes
References
Acknowledgments
Copyright
Abstract
This PEP describes a mirroring infrastructure for PyPI.
PEP Withdrawal
The main PyPI web service was moved behind the Fastly caching CDN in May 2013:
https://mail.python.org/pipermail/distutils-sig/2013-May/020848.html
Subsequently, this arrangement was formalised as an in-kind sponsorship with
the PSF, and the PSF has also taken on the task of risk management in the event
that that sponsorship arrangement were to ever cease.
The download statistics that were previously provided directly on PyPI, are now
published indirectly via Google Big Query:
https://packaging.python.org/guides/analyzing-pypi-package-downloads/
Accordingly, the mirroring proposal described in this PEP is no longer required,
and has been marked as Withdrawn.
Rationale
PyPI is hosting over 6000 projects and is used on a daily basis
by people to build applications. Especially systems like easy_install
and zc.buildout make intensive use of PyPI.
For people making intensive use of PyPI, it can act as a single point
of failure. People have started to set up some mirrors, both private
and public. Those mirrors are active mirrors, which means that they
are browsing PyPI to get synced.
In order to make the system more reliable, this PEP describes:
the mirror listing and registering at PyPI
the pages a public mirror should maintain. These pages will be used
by PyPI, in order to get hit counts and the last modified date.
how a mirror should synchronize with PyPI
how a client can implement a fail-over mechanism
Mirror listing and registering
People who want to mirror PyPI make a proposal on catalog-SIG.
When a mirror is proposed on the mailing list, it is manually
added to a mirror list in the PyPI application after it
has been checked for compliance with the mirroring rules.
The mirror list is provided as a list of host names of the
form
X.pypi.python.org
The values of X are the sequence a,b,c,…,aa,ab,…
a.pypi.python.org is the master server; the mirrors start
with b. A CNAME record last.pypi.python.org points to the
last host name. Mirror operators should use a static address,
and report planned changes to that address in advance to
distutils-sig.
The new mirror also appears at http://pypi.python.org/mirrors
which is a human-readable page that gives the list of mirrors.
This page also explains how to register a new mirror.
Statistics page
PyPI provides statistics on downloads at /stats. This page is
calculated daily by PyPI, by reading all mirrors’ local stats and
summing them.
The stats are presented in daily or monthly files, under /stats/days
and /stats/months. Each file is a bzip2 file named using one of these formats:
YYYY-MM-DD.bz2 for daily files
YYYY-MM.bz2 for monthly files
Examples:
/stats/days/2008-11-06.bz2
/stats/days/2008-11-07.bz2
/stats/days/2008-11-08.bz2
/stats/months/2008-11.bz2
/stats/months/2008-10.bz2
Mirror Authenticity
With a distributed mirroring system, clients may want to verify that
the mirrored copies are authentic. There are multiple threats to
consider:
the central index may get compromised
the central index is assumed to be trusted, but the mirrors might
be tampered with.
a man in the middle between the central index and the end user,
or between a mirror and the end user might tamper with datagrams.
This specification only deals with the second threat. Some provisions
are made to detect man-in-the-middle attacks. To detect the first
attack, package authors need to sign their packages using PGP keys, so
that users verify that the package comes from the author they trust.
The central index provides a DSA key at the URL /serverkey, in the PEM
format as generated by “openssl dsa -pubout” (i.e. RFC 3280
SubjectPublicKeyInfo, with the algorithm 1.3.14.3.2.12). This URL must
not be mirrored, and clients must fetch the official serverkey from
PyPI directly, or use the copy that came with the PyPI client software.
Mirrors should still download the key, to detect a key rollover.
For each package, a mirrored signature is provided at
/serversig/<package>. This is the DSA signature of the parallel URL
/simple/<package>, in DER form, using SHA-1 with DSA (i.e. as a
RFC 3279 Dsa-Sig-Value, created by algorithm 1.2.840.10040.4.3)
Clients using a mirror need to perform the following steps to verify
a package:
download the /simple page, and compute its SHA-1 hash
compute the DSA signature of that hash
download the corresponding /serversig, and compare it (byte-for-byte)
with the value computed in step 2.
compute and verify (against the /simple page) the MD5 hashes
of all files they download from the mirror.
An implementation of the verification algorithm is available from
https://svn.python.org/packages/trunk/pypi/tools/verify.py
Verification is not needed when downloading from the central index, and
should be avoided to reduce the computation overhead.
About once a year, the key will be replaced with a new one. Mirrors
will have to re-fetch all /serversig pages. Clients using mirrors need
to find a trusted copy of the new server key. One way to obtain one
is to download it from https://pypi.python.org/serverkey. To detect
man-in-the-middle attacks, clients need to verify the SSL server
certificate, which will be signed by the CACert authority.
Special pages a mirror needs to provide
A mirror is a subset copy of PyPI, so it provides the same structure
by copying it.
simple: rest version of the package index
packages: packages, stored by Python version, and letters
serversig: signatures for the simple pages
It also needs to provide two specific elements:
last-modified
local-stats
Last modified date
CPAN uses a freshness date system where the mirror’s last
synchronisation date is made available.
For PyPI, each mirror needs to maintain a URL with simple text content
that represents the last synchronisation date the mirror maintains.
The date is provided in GMT time, using the ISO 8601 format [2].
Each mirror will be responsible for maintaining its last modified date.
This page must be located at /last-modified and must be a
text/plain page.
Local statistics
Each mirror is responsible for counting all the downloads that were done
via it. This is used by PyPI to sum up all downloads, to be able to
display the grand total.
These statistics are in CSV-like form, with a header in the first
line. It needs to obey PEP 305. Basically, it should be
readable by Python’s csv module.
The fields in this file are:
package: the distutils id of the package.
filename: the filename that has been downloaded.
useragent: the User-Agent of the client that has downloaded the
package.
count: the number of downloads.
The content will look like this:
# package,filename,useragent,count
zc.buildout,zc.buildout-1.6.0.tgz,MyAgent,142
...
The counting starts the day the mirror is launched, and there is one
file per day, compressed using the bzip2 format. Each file is named
like the day. For example, 2008-11-06.bz2 is the file for the 6th of
November 2008.
They are then provided in a folder called days. For example:
/local-stats/days/2008-11-06.bz2
/local-stats/days/2008-11-07.bz2
/local-stats/days/2008-11-08.bz2
This page must be located at /local-stats.
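As an illustration only (not part of the specification), a mirror could
produce and re-read such a daily file along these lines; the counts shown
are invented:
import bz2
import csv

with bz2.open("2008-11-06.bz2", "wt", newline="") as f:
    f.write("# package,filename,useragent,count\n")
    writer = csv.writer(f)
    writer.writerow(("zc.buildout", "zc.buildout-1.6.0.tgz", "MyAgent", 142))

# The file remains readable with Python's csv module, as required above.
with bz2.open("2008-11-06.bz2", "rt", newline="") as f:
    for row in csv.reader(f):
        if not row or row[0].startswith("#"):
            continue             # skip the header comment line
        package, filename, useragent, count = row
        print(package, filename, useragent, int(count))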
How a mirror should synchronize with PyPI
A mirroring protocol called Simple Index was described and
implemented by Martin v. Loewis and Jim Fulton, based on how
easy_install works. This section synthesizes it and gives a few
relevant links, plus a small part about User-Agent.
The mirroring protocol
Mirrors must reduce the amount of data transferred between the central
server and the mirror. To achieve that, they MUST use the changelog()
PyPI XML-RPC call, and only refetch the packages that have been
changed since the last time. For each package P, they MUST copy
documents /simple/P/ and /serversig/P. If a package is deleted on the
central server, they MUST delete the package and all associated files.
To detect modification of package files, they MAY cache the file’s
ETag, and MAY request skipping it using the If-none-match header.
Each mirroring tool MUST identify itself using a descriptive User-agent
header.
The pep381client package [1] provides an application that
respects this protocol to browse PyPI.
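As a rough sketch only, a mirroring tool might poll the changelog() call
as follows; the endpoint URL and the (name, version, timestamp, action)
tuple layout are assumptions about the historical PyPI XML-RPC interface
rather than something this PEP specifies:
import time
import xmlrpc.client

client = xmlrpc.client.ServerProxy("https://pypi.python.org/pypi")
last_sync = int(time.time()) - 24 * 60 * 60   # e.g. resume from one day ago

for name, version, timestamp, action in client.changelog(last_sync):
    # A real mirror would refetch /simple/<name>/ and /serversig/<name>
    # here, or delete the package if the action indicates removal.
    print(timestamp, name, version, action)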
User-agent request header
In order to be able to differentiate actions taken by clients over
PyPI, a specific user agent name should be provided by all mirroring
software.
This is also true for all clients like:
zc.buildout [3].
setuptools [4].
pip [5].
XXX user agent registering mechanism at PyPI ?
How a client can use PyPI and its mirrors
Clients that are browsing PyPI should be able to use alternative
mirrors, by getting the list of the mirrors using last.pypi.python.org.
Code example:
>>> import socket
>>> socket.gethostbyname_ex('last.pypi.python.org')[0]
'h.pypi.python.org'
The clients so far that could use this mechanism:
setuptools
zc.buildout (through setuptools)
pip
Fail-over mechanism
Clients that are browsing PyPI should be able to use a fail-over
mechanism when PyPI or the used mirror is not responding.
It is up to the client to decide which mirror should be used, maybe by
looking at its geographical location and its responsiveness.
This PEP does not describe how this fail-over mechanism should work,
but it is strongly encouraged that the clients try to use the nearest
mirror.
The clients so far that could use this mechanism:
setuptools
zc.buildout (through setuptools)
pip
Extra package indexes
It is obvious that some packages will not be uploaded to PyPI, whether
because they are private or because the project maintainer
runs their own server where people might get the project package.
However, it is strongly encouraged that a public package index follows
PyPI and Distutils protocols.
In other words, the register and upload command should be
compatible with any package index server out there.
Software that is compatible with PyPI and Distutils so far:
PloneSoftwareCenter [6] which is used to run plone.org products section.
EggBasket [7].
An extra package index is not a mirror of PyPI, but can have some
mirrors itself.
Merging several indexes
When a client needs to get some packages from several distinct
indexes, it should be able to use each one of them as a potential
source of packages. Different indexes should be defined as a sorted
list for the client to look for a package.
Each independent index can of course provide a list of its mirrors.
XXX define how to get the hostname for the mirrors of an arbitrary
index.
That permits all combinations at client level, for a reliable
packaging system with all levels of privacy.
It is up to the client to deal with the merging.
References
[1]
http://pypi.python.org/pypi/pep381client
[2]
http://en.wikipedia.org/wiki/ISO_8601
[3]
http://pypi.python.org/pypi/zc.buildout
[4]
http://pypi.python.org/pypi/setuptools
[5]
http://pypi.python.org/pypi/pip
[6]
http://plone.org/products/plonesoftwarecenter
[7]
http://www.chrisarndt.de/projects/eggbasket
Acknowledgments
Georg Brandl.
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 381 – Mirroring infrastructure for PyPI | Standards Track | This PEP describes a mirroring infrastructure for PyPI. |
PEP 382 – Namespace Packages
Author:
Martin von Löwis <martin at v.loewis.de>
Status:
Rejected
Type:
Standards Track
Created:
02-Apr-2009
Python-Version:
3.2
Post-History:
Table of Contents
Rejection Notice
Abstract
Terminology
Namespace packages today
Rationale
Specification
Impact on Import Hooks
Discussion
References
Copyright
Rejection Notice
On the first day of sprints at US PyCon 2012 we had a long and
fruitful discussion about PEP 382 and PEP 402. We ended up rejecting
both but a new PEP will be written to carry on in the spirit of PEP
402. Martin von Löwis wrote up a summary: [2].
Abstract
Namespace packages are a mechanism for splitting a single Python
package across multiple directories on disk. In current Python
versions, an algorithm to compute the packages __path__ must be
formulated. With the enhancement proposed here, the import machinery
itself will construct the list of directories that make up the
package. An implementation of this PEP is available at [1].
Terminology
Within this PEP, the term package refers to Python packages as defined
by Python’s import statement. The term distribution refers to
separately installable sets of Python modules as stored in the Python
package index, and installed by distutils or setuptools. The term
vendor package refers to groups of files installed by an operating
system’s packaging mechanism (e.g. Debian or Redhat packages install
on Linux systems).
The term portion refers to a set of files in a single directory (possibly
stored in a zip file) that contribute to a namespace package.
Namespace packages today
Python currently provides pkgutil.extend_path to denote a package as
a namespace package. The recommended way of using it is to put:
from pkgutil import extend_path
__path__ = extend_path(__path__, __name__)
in the package’s __init__.py. Every distribution needs to provide
the same contents in its __init__.py, so that extend_path is
invoked independent of which portion of the package gets imported
first. As a consequence, the package’s __init__.py cannot
practically define any names, as it depends on the order of the package
fragments on sys.path which portion is imported first. As a special
feature, extend_path reads files named <packagename>.pkg which
allow additional portions to be declared.
setuptools provides a similar function pkg_resources.declare_namespace
that is used in the form:
import pkg_resources
pkg_resources.declare_namespace(__name__)
In the portion’s __init__.py, no assignment to __path__ is necessary,
as declare_namespace modifies the package __path__ through sys.modules.
As a special feature, declare_namespace also supports zip files, and
registers the package name internally so that future additions to sys.path
by setuptools can properly add additional portions to each package.
setuptools allows declaring namespace packages in a distribution’s
setup.py, so that distribution developers don’t need to put the
magic __path__ modification into __init__.py themselves.
Rationale
The current imperative approach to namespace packages has led to
multiple slightly-incompatible mechanisms for providing namespace
packages. For example, pkgutil supports *.pkg files; setuptools
doesn’t. Likewise, setuptools supports inspecting zip files, and
supports adding portions to its _namespace_packages variable, whereas
pkgutil doesn’t.
In addition, the current approach causes problems for system vendors.
Vendor packages typically must not provide overlapping files, and an
attempt to install a vendor package that has a file already on disk
will fail or cause unpredictable behavior. As vendors might choose to
package distributions such that they will end up all in a single
directory for the namespace package, all portions would contribute
conflicting __init__.py files.
Specification
Rather than using an imperative mechanism for importing packages, a
declarative approach is proposed here: A directory whose name ends
with .pyp (for Python package) contains a portion of a package.
The import statement is extended so that it computes the package's
__path__ attribute for a package named P as consisting of
optionally a single directory name P containing a file
__init__.py, plus all directories named P.pyp, in the order in
which they are found in the parent’s package __path__ (or
sys.path). If either of these is found, the search for additional
portions of the package continues.
A directory may contain both a package in the P/__init__.py and
the P.pyp form.
No other change to the importing mechanism is made; searching modules
(including __init__.py) will continue to stop at the first module
encountered. In summary, the process of importing a package foo works
like this (an illustrative sketch follows the list):
sys.path is searched for directories foo or foo.pyp, or a file foo.<ext>.
If a file is found and no directory, it is treated as a module, and imported.
If a directory foo is found, a check is made whether it contains __init__.py.
If so, the location of the __init__.py is remembered. Otherwise, the directory
is skipped. Once an __init__.py is found, further directories called foo are
skipped.
For both directories foo and foo.pyp, the directories are added to the package’s
__path__.
If an __init__ module was found, it is imported, with __path__
being initialized to the path computed from all .pyp directories.
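The following sketch is illustrative only (it is not the import machinery
itself, and the function name is invented); it shows how a package's
__path__ could be assembled from a parent path according to the rules
above:
import os
import sys

def compute_pyp_path(pkg_name, parent_path=None):
    parent_path = sys.path if parent_path is None else parent_path
    path = []
    init_found = False
    for entry in parent_path:
        plain = os.path.join(entry, pkg_name)
        # At most one plain directory contributes, and only if it
        # contains an __init__.py.
        if (not init_found
                and os.path.isfile(os.path.join(plain, "__init__.py"))):
            path.append(plain)
            init_found = True
        portion = plain + ".pyp"
        if os.path.isdir(portion):
            path.append(portion)
    return path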
Impact on Import Hooks
Both loaders and finders as defined in PEP 302 will need to be changed
to support namespace packages. Failure to conform to the protocol
below might cause a package not to be recognized as a namespace
package; loaders and finders not supporting this protocol must raise
AttributeError when the functions below get accessed.
Finders need to support looking for *.pyp directories in step 1 of the above
algorithm. To do so, a finder used as a path hook must support a
method:
finder.find_package_portion(fullname)
This method will be called in the same manner as find_module, and it
must return a string to be added to the package’s __path__.
If the finder doesn’t find a portion of the package, it shall return
None. Raising AttributeError from above call will be treated
as non-conformance with this PEP, and the exception will be ignored.
All other exceptions are reported.
A finder may report both success from find_module and from
find_package_portion, allowing for both a package containing
an __init__.py and a portion of the same package.
All strings returned from find_package_portion, along with all
path names of .pyp directories are added to the new package’s
__path__.
Discussion
Original versions of this specification proposed the addition of
*.pth files, similar to the way those files are used on sys.path.
With a wildcard marker (*), a package could indicate that the
entire path is derived by looking at the parent path, searching for
properly-named subdirectories.
People then observed that the support for the full .pth syntax is
inappropriate, and the .pth files were changed to be mere marker
files, indicating that a directory is a package. Peter Tröger
suggested that .pth is an unsuitable file extension, as all file
extensions related to Python should start with .py. Therefore, the
marker file was renamed to be .pyp.
Dinu Gherman then observed that using a marker file is not necessary,
and that a directory extension could well serve as a marker.
This is what this PEP currently proposes.
Phillip Eby designed PEP 402 as an alternative approach to this PEP,
after comparing Python’s package syntax with that found in other
languages. PEP 402 proposes not to use a marker file at all. At the
discussion at PyCon DE 2011, people remarked that having an explicit
declaration of a directory as contributing to a package is a desirable
property, rather than an obstacle. In particular, Jython developers
noticed that Jython could easily mistake a directory that is a Java
package as being a Python package, if there is no need to declare
Python packages.
Packages can stop filling out the namespace package’s __init__.py. As
a consequence, extend_path and declare_namespace become obsolete.
Namespace packages can start providing non-trivial __init__.py
implementations; to do so, it is recommended that a single distribution
provides a portion with just the namespace package’s __init__.py
(and potentially other modules that belong to the namespace package
proper).
The mechanism is mostly compatible with the existing namespace
mechanisms. extend_path will be adjusted to this specification;
any other mechanism might cause portions to get added twice to
__path__.
References
[1]
PEP 382 branch
(http://hg.python.org/features/pep-382-2#pep-382)
[2]
Namespace Packages resolution
(https://mail.python.org/pipermail/import-sig/2012-March/000421.html)
Copyright
This document has been placed in the public domain.
| Rejected | PEP 382 – Namespace Packages | Standards Track | Namespace packages are a mechanism for splitting a single Python
package across multiple directories on disk. In current Python
versions, an algorithm to compute the packages __path__ must be
formulated. With the enhancement proposed here, the import machinery
itself will construct the list of directories that make up the
package. An implementation of this PEP is available at [1]. |
PEP 383 – Non-decodable Bytes in System Character Interfaces
Author:
Martin von Löwis <martin at v.loewis.de>
Status:
Final
Type:
Standards Track
Created:
22-Apr-2009
Python-Version:
3.1
Post-History:
Table of Contents
Abstract
Rationale
Specification
Discussion
References
Copyright
Abstract
File names, environment variables, and command line arguments are
defined as being character data in POSIX; the C APIs however allow
passing arbitrary bytes - whether these conform to a certain encoding
or not. This PEP proposes a means of dealing with such irregularities
by embedding the bytes in character strings in such a way that allows
recreation of the original byte string.
Rationale
The C char type is a data type that is commonly used to represent both
character data and bytes. Certain POSIX interfaces are specified and
widely understood as operating on character data, however, the system
call interfaces make no assumption on the encoding of these data, and
pass them on as-is. With Python 3, character strings use a
Unicode-based internal representation, making it difficult to ignore
the encoding of byte strings in the same way that the C interfaces can
ignore the encoding.
On the other hand, Microsoft Windows NT has corrected the original
design limitation of Unix, and made it explicit in its system
interfaces that these data (file names, environment variables, command
line arguments) are indeed character data, by providing a
Unicode-based API (keeping a C-char-based one for backwards
compatibility).
For Python 3, one proposed solution is to provide two sets of APIs: a
byte-oriented one, and a character-oriented one, where the
character-oriented one would be limited to not being able to represent
all data accurately. Unfortunately, for Windows, the situation would
be exactly the opposite: the byte-oriented interface cannot represent
all data; only the character-oriented API can. As a consequence,
libraries and applications that want to support all user data in a
cross-platform manner have to accept a mish-mash of bytes and characters
exactly in the way that caused endless troubles for Python 2.x.
With this PEP, a uniform treatment of these data as characters becomes
possible. The uniformity is achieved by using specific encoding
algorithms, meaning that the data can be converted back to bytes on
POSIX systems only if the same encoding is used.
Being able to treat such strings uniformly will allow application
writers to abstract from details specific to the operating system, and
reduces the risk of one API failing when the other API would have
worked.
Specification
On Windows, Python uses the wide character APIs to access
character-oriented APIs, allowing direct conversion of the
environmental data to Python str objects (PEP 277).
On POSIX systems, Python currently applies the locale’s encoding to
convert the byte data to Unicode, failing for characters that cannot
be decoded. With this PEP, non-decodable bytes >= 128 will be
represented as lone surrogate codes U+DC80..U+DCFF. Bytes below
128 will produce exceptions; see the discussion below.
To convert non-decodable bytes, a new error handler (PEP 293)
“surrogateescape” is introduced, which produces these surrogates. On
encoding, the error handler converts the surrogate back to the
corresponding byte. This error handler will be used in any API that
receives or produces file names, command line arguments, or
environment variables.
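For illustration, a single undecodable byte round-trips through the
handler like this (the byte value is chosen arbitrarily):
raw = b"caf\xe9"                               # Latin-1 bytes, not valid UTF-8
text = raw.decode("utf-8", "surrogateescape")  # 'caf\udce9' -- lone surrogate
assert text.encode("utf-8", "surrogateescape") == raw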
The error handler interface is extended to allow the encode error
handler to return byte strings immediately, in addition to returning
Unicode strings which then get encoded again (also see the discussion
below).
Byte-oriented interfaces that already exist in Python 3.0 are not
affected by this specification. They are neither enhanced nor
deprecated.
External libraries that operate on file names (such as GUI file
choosers) should also encode them according to the PEP.
Discussion
This surrogateescape encoding is based on Markus Kuhn’s idea that
he called UTF-8b [3].
While providing a uniform API to non-decodable bytes, this interface
has the limitation that the chosen representation only “works” if the data
get converted back to bytes with the surrogateescape error handler
also. Encoding the data with the locale’s encoding and the (default)
strict error handler will raise an exception, encoding them with UTF-8
will produce nonsensical data.
Data obtained from other sources may conflict with data produced
by this PEP. Dealing with such conflicts is out of scope of the PEP.
This PEP allows the possibility of “smuggling” bytes in character
strings. This would be a security risk if the bytes are
security-critical when interpreted as characters on a target system,
such as path name separators. For this reason, the PEP rejects
smuggling bytes below 128. If the target system uses EBCDIC, such
smuggled bytes may still be a security risk, allowing smuggling of
e.g. square brackets or the backslash. Python currently does not
support EBCDIC, so this should not be a problem in practice. Anybody
porting Python to an EBCDIC system might want to adjust the error
handlers, or come up with other approaches to address the security
risks.
Encodings that are not compatible with ASCII are not supported by
this specification; bytes in the ASCII range that fail to decode
will cause an exception. It is widely agreed that such encodings
should not be used as locale charsets.
For most applications, we assume that they eventually pass data
received from a system interface back into the same system
interfaces. For example, an application invoking os.listdir() will
likely pass the result strings back into APIs like os.stat() or
open(), which then encodes them back into their original byte
representation. Applications that need to process the original byte
strings can obtain them by encoding the character strings with the
file system encoding, passing “surrogateescape” as the error handler
name. For example, a function that works like os.listdir, except for
accepting and returning bytes, would be written as:
def listdir_b(dirname):
    fse = sys.getfilesystemencoding()
    dirname = dirname.decode(fse, "surrogateescape")
    for fn in os.listdir(dirname):
        # fn is now a str object
        yield fn.encode(fse, "surrogateescape")
The extension to the encode error handler interface proposed by this
PEP is necessary to implement the ‘surrogateescape’ error handler,
because there are required byte sequences which cannot be generated
from replacement Unicode. However, the encode error handler interface
presently requires replacement Unicode to be provided in lieu of the
non-encodable Unicode from the source string. Then it promptly
encodes that replacement Unicode. In some error handlers, such as the
‘surrogateescape’ proposed here, it is also simpler and more efficient
for the error handler to provide a pre-encoded replacement byte
string, rather than forcing it to calculate Unicode from which the
encoder would create the desired bytes.
A few alternative approaches have been proposed:
create a new string subclass that supports embedded bytes
use different escape schemes, such as escaping with a NUL
character, or mapping to infrequent characters.
Of these proposals, the approach of escaping each byte XX
with the sequence U+0000 U+00XX has the disadvantage that
encoding to UTF-8 will introduce a NUL byte in the UTF-8
sequence. As a consequence, C libraries may interpret this
as a string termination, even though the string continues.
In particular, the gtk libraries will truncate text in this
case; other libraries may show similar problems.
References
[3]
UTF-8b
https://web.archive.org/web/20090830064219/http://mail.nl.linux.org/linux-utf8/2000-07/msg00040.html
Copyright
This document has been placed in the public domain.
| Final | PEP 383 – Non-decodable Bytes in System Character Interfaces | Standards Track | File names, environment variables, and command line arguments are
defined as being character data in POSIX; the C APIs however allow
passing arbitrary bytes - whether these conform to a certain encoding
or not. This PEP proposes a means of dealing with such irregularities
by embedding the bytes in character strings in such a way that allows
recreation of the original byte string. |
PEP 384 – Defining a Stable ABI
Author:
Martin von Löwis <martin at v.loewis.de>
Status:
Final
Type:
Standards Track
Created:
17-May-2009
Python-Version:
3.2
Post-History:
Table of Contents
Abstract
Rationale
Specification
Terminology
Header Files and Preprocessor Definitions
Structures
Type Objects
typedefs
Functions and function-like Macros
Excluded Functions
Global Variables
Other Macros
The Buffer Interface
Signature Changes
Linkage
Implementation Strategy
References
Copyright
Important
This PEP is a historical document. The up-to-date, canonical documentation can now be found at C API Stability (user docs) and
Changing Python’s C API (development docs).
×
See PEP 1 for how to propose changes.
Abstract
Currently, each feature release introduces a new name for the
Python DLL on Windows, and may cause incompatibilities for extension
modules on Unix. This PEP proposes to define a stable set of API
functions which are guaranteed to be available for the lifetime
of Python 3, and which will also remain binary-compatible across
versions. Extension modules and applications embedding Python
can work with different feature releases as long as they restrict
themselves to this stable ABI.
Rationale
The primary source of ABI incompatibility is changes to the layout
of in-memory structures. For example, the way in which string interning
works, or the data type used to represent the size of an object, have
changed during the life of Python 2.x. As a consequence, extension
modules making direct access to fields of strings, lists, or tuples,
would break if their code is loaded into a newer version of the
interpreter without recompilation: offsets of other fields may have
changed, making the extension modules access the wrong data.
In some cases, the incompatibilities only affect internal objects of
the interpreter, such as frame or code objects. For example, the way
line numbers are represented has changed in the 2.x lifetime, as has
the way in which local variables are stored (due to the introduction
of closures). Even though most applications probably never used these
objects, changing them required changing the PYTHON_API_VERSION.
On Linux, changes to the ABI are often not much of a problem: the
system will provide a default Python installation, and many extension
modules are already provided pre-compiled for that version. If additional
modules are needed, or additional Python versions, users can typically
compile them themselves on the system, resulting in modules that use
the right ABI.
On Windows, multiple simultaneous installations of different Python
versions are common, and extension modules are compiled by their
authors, not by end users. To reduce the risk of ABI incompatibilities,
Python currently introduces a new DLL name pythonXY.dll for each
feature release, whether or not ABI incompatibilities actually exist.
With this PEP, it will be possible to reduce the dependency of binary
extension modules on a specific Python feature release, and applications
embedding Python can be made to work with different releases.
Specification
The ABI specification falls into two parts: an API specification,
specifying what function (groups) are available for use with the
ABI, and a linkage specification specifying what libraries to link
with. The actual ABI (layout of structures in memory, function
calling conventions) is not specified, but implied by the
compiler. As a recommendation, a specific ABI is recommended for
selected platforms.
During evolution of Python, new ABI functions will be added.
Applications using them will then have a requirement on a minimum
version of Python; this PEP provides no mechanism for such
applications to fall back when the Python library is too old.
Terminology
Applications and extension modules that want to use this ABI
are collectively referred to as “applications” from here on.
Header Files and Preprocessor Definitions
Applications shall only include the header file Python.h (before
including any system headers), or, optionally, include pyconfig.h, and
then Python.h.
During the compilation of applications, the preprocessor macro
Py_LIMITED_API must be defined. Doing so will hide all definitions
that are not part of the ABI.
Structures
Only the following structures and structure fields are accessible to
applications:
PyObject (ob_refcnt, ob_type)
PyVarObject (ob_base, ob_size)
PyMethodDef (ml_name, ml_meth, ml_flags, ml_doc)
PyMemberDef (name, type, offset, flags, doc)
PyGetSetDef (name, get, set, doc, closure)
PyModuleDefBase (ob_base, m_init, m_index, m_copy)
PyModuleDef (m_base, m_name, m_doc, m_size, m_methods, m_traverse,
m_clear, m_free)
PyStructSequence_Field (name, doc)
PyStructSequence_Desc (name, doc, fields, sequence)
PyType_Slot (see below)
PyType_Spec (see below)
The accessor macros to these fields (Py_REFCNT, Py_TYPE, Py_SIZE)
are also available to applications.
The following types are available, but opaque (i.e. incomplete):
PyThreadState
PyInterpreterState
struct _frame
struct symtable
struct _node
PyWeakReference
PyLongObject
PyTypeObject
Type Objects
The structure of type objects is not available to applications;
declaration of “static” type objects is not possible anymore
(for applications using this ABI).
Instead, type objects get created dynamically. To allow easy
creation of types (in particular, to be able to fill out
function pointers easily), the following structures and functions
are available:
typedef struct{
int slot; /* slot id, see below */
void *pfunc; /* function pointer */
} PyType_Slot;
typedef struct{
const char* name;
int basicsize;
int itemsize;
unsigned int flags;
PyType_Slot *slots; /* terminated by slot==0. */
} PyType_Spec;
PyObject* PyType_FromSpec(PyType_Spec*);
To specify a slot, a unique slot id must be provided. New Python
versions may introduce new slot ids, but slot ids will never be
recycled. Slots may get deprecated, but continue to be supported
throughout Python 3.x.
The slot ids are named like the field names of the structures that
hold the pointers in Python 3.1, with an added Py_ prefix (i.e.
Py_tp_dealloc instead of just tp_dealloc):
tp_dealloc, tp_getattr, tp_setattr, tp_repr,
tp_hash, tp_call, tp_str, tp_getattro, tp_setattro,
tp_doc, tp_traverse, tp_clear, tp_richcompare, tp_iter,
tp_iternext, tp_methods, tp_base, tp_descr_get, tp_descr_set,
tp_init, tp_alloc, tp_new, tp_is_gc, tp_bases, tp_del
nb_add nb_subtract nb_multiply nb_remainder nb_divmod nb_power
nb_negative nb_positive nb_absolute nb_bool nb_invert nb_lshift
nb_rshift nb_and nb_xor nb_or nb_int nb_float nb_inplace_add
nb_inplace_subtract nb_inplace_multiply nb_inplace_remainder
nb_inplace_power nb_inplace_lshift nb_inplace_rshift nb_inplace_and
nb_inplace_xor nb_inplace_or nb_floor_divide nb_true_divide
nb_inplace_floor_divide nb_inplace_true_divide nb_index
sq_length sq_concat sq_repeat sq_item sq_ass_item
sq_contains sq_inplace_concat sq_inplace_repeat
mp_length mp_subscript mp_ass_subscript
The following fields cannot be set during type definition:
tp_dict tp_mro tp_cache tp_subclasses tp_weaklist tp_print
tp_weaklistoffset tp_dictoffset
typedefs
In addition to the typedefs for structs listed above, the following
typedefs are available. Their inclusion in the ABI means that the
underlying type must not change on a platform (even though it may
differ across platforms).
Py_uintptr_t Py_intptr_t Py_ssize_t
unaryfunc binaryfunc ternaryfunc inquiry lenfunc ssizeargfunc
ssizessizeargfunc ssizeobjargproc ssizessizeobjargproc objobjargproc
objobjproc visitproc traverseproc
destructor getattrfunc getattrofunc setattrfunc setattrofunc reprfunc
hashfunc richcmpfunc getiterfunc iternextfunc descrgetfunc
descrsetfunc initproc newfunc allocfunc
PyCFunction PyCFunctionWithKeywords PyNoArgsFunction
PyCapsule_Destructor
getter setter
PyOS_sighandler_t
PyGILState_STATE
Py_UCS4
Most notably, Py_UNICODE is not available as a typedef,
since the same Python version may use different definitions
of it on the same platform (depending on whether it uses narrow
or wide code units). Applications that need to access the contents
of a Unicode string can convert it to wchar_t.
Functions and function-like Macros
By default, all functions are available, unless they are excluded
below.
Whether a function is documented or not does not matter.
Function-like macros (in particular, field access macros) remain
available to applications, but get replaced by function calls
(unless their definition only refers to features of the ABI, such
as the various _Check macros).
ABI function declarations will not change their parameters or return
types. If a change to the signature becomes necessary, a new function
will be introduced. If the new function is source-compatible (e.g. if
just the return type changes), an alias macro may get added to
redirect calls to the new function when the application is
recompiled.
If continued provision of the old function is not possible, it may get
deprecated, then removed, causing
applications that use that function to break.
Excluded Functions
All functions starting with _Py are not available to applications.
Also, all functions that expect parameter types that are unavailable
to applications are excluded from the ABI, such as PyAST_FromNode
(which expects a node*).
Functions declared in the following header files are not part
of the ABI:
bytes_methods.h
cellobject.h
classobject.h
code.h
compile.h
datetime.h
dtoa.h
frameobject.h
funcobject.h
genobject.h
longintrepr.h
parsetok.h
pyarena.h
pyatomic.h
pyctype.h
pydebug.h
pytime.h
symtable.h
token.h
ucnhash.h
In addition, functions expecting FILE* are not part of
the ABI, to avoid depending on a specific version of the
Microsoft C runtime DLL on Windows.
Module and type initializer and finalizer functions are not available
(PyByteArray_Init, PyOS_FiniInterrupts
and all functions ending in _Fini or _ClearFreeList).
Several functions dealing with interpreter implementation
details are not available:
PyInterpreterState_Head, PyInterpreterState_Next,
PyInterpreterState_ThreadHead, PyThreadState_Next
Py_SubversionRevision, Py_SubversionShortBranch
PyStructSequence_InitType is not available, as it requires
the caller to provide a static type object.
Py_FatalError will be moved from pydebug.h into some other
header file (e.g. pyerrors.h).
The exact list of functions being available is given
in the Windows module definition file for python3.dll [1].
Global Variables
Global variables representing types and exceptions are available
to applications. In addition, selected global variables referenced
in macros (such as Py_True and Py_False) are available.
A complete list of global variable definitions is given in the
python3.def file [1]; those declared DATA denote variables.
Other Macros
All macros defining symbolic constants are available to applications;
the numeric values will not change.
In addition, the following macros are available:
Py_BEGIN_ALLOW_THREADS, Py_BLOCK_THREADS, Py_UNBLOCK_THREADS,
Py_END_ALLOW_THREADS
The Buffer Interface
The buffer interface (type Py_buffer, type slots bf_getbuffer and
bf_releasebuffer, etc) has been omitted from the ABI, since the stability
of the Py_buffer structure is not clear at this time. Inclusion in the
ABI can be considered in future releases.
Signature Changes
A number of functions currently expect a specific struct, even though
callers typically have PyObject* available. These have been changed
to expect PyObject* as the parameter; this will cause warnings in
applications that currently explicitly cast to the parameter type.
These functions are PySlice_GetIndices, PySlice_GetIndicesEx,
PyUnicode_AsWideChar, and PyEval_EvalCode.
Linkage
On Windows, applications shall link with python3.dll; an import
library python3.lib will be available. This DLL will redirect all of
its API functions through /export linker options to the full
interpreter DLL, i.e. python3y.dll.
On Unix systems, the ABI is typically provided by the python
executable itself. PyModule_Create is changed to pass 3 as the API
version if the extension module was compiled with Py_LIMITED_API; the
version check for the API version will accept either 3 or the current
PYTHON_API_VERSION as conforming. If Python is compiled as a shared
library, it is installed as both libpython3.so and libpython3.y.so;
applications conforming to this PEP should then link to the former
(extension modules can continue to be built without linking against any
libpython shared object, relying instead on runtime linking).
The ABI version is symbolically available as PYTHON_ABI_VERSION.
Also on Unix, the PEP 3149 tag abi<PYTHON_ABI_VERSION> is accepted
in file names of extension modules. No checking is performed that
files named in this way are actually restricted to the limited API,
and no support for building such files will be added to distutils
due to the distutils code freeze.
Implementation Strategy
This PEP will be implemented in a branch [2], allowing users to check
whether their modules conform to the ABI. To avoid users having to
rewrite their type definitions, a script to convert C source code
containing type definitions will be provided [3].
References
[1] (1, 2)
“python3 module definition file”:
http://svn.python.org/projects/python/branches/pep-0384/PC/python3.def
[2]
“PEP 384 branch”:
http://svn.python.org/projects/python/branches/pep-0384/
[3]
“ABI type conversion script”:
http://svn.python.org/projects/python/branches/pep-0384/Tools/scripts/abitype.py
Copyright
This document has been placed in the public domain.
| Final | PEP 384 – Defining a Stable ABI | Standards Track | Currently, each feature release introduces a new name for the
Python DLL on Windows, and may cause incompatibilities for extension
modules on Unix. This PEP proposes to define a stable set of API
functions which are guaranteed to be available for the lifetime
of Python 3, and which will also remain binary-compatible across
versions. Extension modules and applications embedding Python
can work with different feature releases as long as they restrict
themselves to this stable ABI. |
PEP 386 – Changing the version comparison module in Distutils
Author:
Tarek Ziadé <tarek at ziade.org>
Status:
Superseded
Type:
Standards Track
Topic:
Packaging
Created:
04-Jun-2009
Superseded-By:
440
Table of Contents
Abstract
Motivation
Requisites and current status
Distutils
Setuptools
Caveats of existing systems
The new versioning algorithm
NormalizedVersion
suggest_normalized_version
Roadmap
References
Acknowledgments
Copyright
Abstract
Note: This PEP has been superseded by the version identification and
dependency specification scheme defined in PEP 440.
This PEP proposed a new version comparison schema for Distutils.
Motivation
In Python there are no real restrictions yet on how a project should manage its
versions, and how they should be incremented.
Distutils provides a version distribution meta-data field, but it is freeform, and
consumers such as PyPI usually consider the latest version pushed as the
latest one, regardless of the expected semantics.
Distutils will soon extend its capabilities to allow distributions to express a
dependency on other distributions through the Requires-Dist metadata field
(see PEP 345) and it will optionally allow use of that field to
restrict the dependency to a set of compatible versions. Notice that this field
is replacing Requires that was expressing dependencies on modules and packages.
The Requires-Dist field will allow a distribution to define a dependency on
another package and optionally restrict this dependency to a set of
compatible versions, so one may write:
Requires-Dist: zope.interface (>3.5.0)
This means that the distribution requires zope.interface with a version
greater than 3.5.0.
This also means that Python projects will need to follow the same convention
as the tool that will be used to install them, so they are able to compare
versions.
That is why this PEP proposes, for the sake of interoperability, a standard
schema to express version information and its comparison semantics.
Furthermore, this will make OS packagers’ work easier when repackaging standards
compliant distributions, because as of now it can be difficult to decide how two
distribution versions compare.
Requisites and current status
It is not in the scope of this PEP to provide a universal versioning schema
intended to support all or even most of the existing versioning schemas. There
will always be competing grammars, either mandated by distro or project
policies or by historical reasons that we cannot expect to change.
The proposed schema should be able to express the usual versioning semantics,
so it’s possible to parse any alternative versioning schema and transform it
into a compliant one. This is how OS packagers usually deal with the existing
version schemas, and is preferable to supporting an arbitrary
set of versioning schemas.
Conformance to usual practice and conventions, as well as simplicity, are a
plus, to ease frictionless adoption and painless transition. Practicality beats
purity, sometimes.
Projects have very different versioning needs, but the following are widely
considered important semantics:
it should be possible to express more than one versioning level
(usually this is expressed as major and minor revision and, sometimes,
also a micro revision).
a significant number of projects need special meaning versions for
“pre-releases” (such as “alpha”, “beta”, “rc”), and these have widely
used aliases (“a” stands for “alpha”, “b” for “beta” and “c” for “rc”).
And these pre-release versions make it impossible to use a simple
alphanumerical ordering of the version string components.
(Example: 3.1a1 < 3.1)
some projects also need “post-releases” of regular versions,
mainly for installer work which can’t be clearly expressed otherwise.
development versions allow packagers of unreleased work to avoid version
clash with later regular releases.
For people that want to go further and use a tool to manage their version
numbers, the two major ones are:
The current Distutils system [1]
Setuptools [2]
Distutils
Distutils currently provides a StrictVersion and a LooseVersion class
that can be used to manage versions.
The LooseVersion class is quite lax. From Distutils doc:
Version numbering for anarchists and software realists.
Implements the standard interface for version number classes as
described above. A version number consists of a series of numbers,
separated by either periods or strings of letters. When comparing
version numbers, the numeric components will be compared
numerically, and the alphabetic components lexically. The following
are all valid version numbers, in no particular order:
1.5.1
1.5.2b2
161
3.10a
8.02
3.4j
1996.07.12
3.2.pl0
3.1.1.6
2g6
11g
0.960923
2.2beta29
1.13++
5.5.kw
2.0b1pl0
In fact, there is no such thing as an invalid version number under
this scheme; the rules for comparison are simple and predictable,
but may not always give the results you want (for some definition
of "want").
This class makes any version string valid, and provides an algorithm to sort
them numerically then lexically. It means that anything can be used to version
your project:
>>> from distutils.version import LooseVersion as V
>>> v1 = V('FunkyVersion')
>>> v2 = V('GroovieVersion')
>>> v1 > v2
False
The problem with this is that while it allows expressing any
nesting level, it doesn’t allow giving special meaning to versions
(pre and post-releases as well as development versions), as expressed in
requisites 2, 3 and 4.
The StrictVersion class is more strict. From the doc:
Version numbering for meticulous retentive and software idealists.
Implements the standard interface for version number classes as
described above. A version number consists of two or three
dot-separated numeric components, with an optional "pre-release" tag
on the end. The pre-release tag consists of the letter 'a' or 'b'
followed by a number. If the numeric components of two version
numbers are equal, then one with a pre-release tag will always
be deemed earlier (lesser) than one without.
The following are valid version numbers (shown in the order that
would be obtained by sorting according to the supplied cmp function):
0.4 0.4.0 (these two are equivalent)
0.4.1
0.5a1
0.5b3
0.5
0.9.6
1.0
1.0.4a3
1.0.4b1
1.0.4
The following are examples of invalid version numbers:
1
2.7.2.2
1.3.a4
1.3pl1
1.3c4
This class enforces a few rules, and makes a decent tool to work with version
numbers:
>>> from distutils.version import StrictVersion as V
>>> v2 = V('GroovieVersion')
Traceback (most recent call last):
...
ValueError: invalid version number 'GroovieVersion'
>>> v2 = V('1.1')
>>> v3 = V('1.3')
>>> v2 < v3
True
It adds pre-release versions, and some structure, but lacks a few semantic
elements to make it usable, such as development releases or post-release tags,
as expressed in requisites 3 and 4.
Also, note that Distutils version classes have been present for years
but are not really used in the community.
Setuptools
Setuptools provides another version comparison tool [3]
which does not enforce any rules for the version, but tries to provide a better
algorithm to convert the strings to sortable keys, with a parse_version
function.
From the doc:
Convert a version string to a chronologically-sortable key
This is a rough cross between Distutils' StrictVersion and LooseVersion;
if you give it versions that would work with StrictVersion, then it behaves
the same; otherwise it acts like a slightly-smarter LooseVersion. It is
*possible* to create pathological version coding schemes that will fool
this parser, but they should be very rare in practice.
The returned value will be a tuple of strings. Numeric portions of the
version are padded to 8 digits so they will compare numerically, but
without relying on how numbers compare relative to strings. Dots are
dropped, but dashes are retained. Trailing zeros between alpha segments
or dashes are suppressed, so that e.g. "2.4.0" is considered the same as
"2.4". Alphanumeric parts are lower-cased.
The algorithm assumes that strings like "-" and any alpha string that
alphabetically follows "final" represents a "patch level". So, "2.4-1"
is assumed to be a branch or patch of "2.4", and therefore "2.4.1" is
considered newer than "2.4-1", which in turn is newer than "2.4".
Strings like "a", "b", "c", "alpha", "beta", "candidate" and so on (that
come before "final" alphabetically) are assumed to be pre-release versions,
so that the version "2.4" is considered newer than "2.4a1".
Finally, to handle miscellaneous cases, the strings "pre", "preview", and
"rc" are treated as if they were "c", i.e. as though they were release
candidates, and therefore are not as new as a version string that does not
contain them, and "dev" is replaced with an '@' so that it sorts lower
than any other pre-release tag.
In other words, parse_version will return a tuple for each version string
that is compatible with StrictVersion, but it also accepts arbitrary versions
and deals with them so they can be compared:
>>> from pkg_resources import parse_version as V
>>> V('1.2')
('00000001', '00000002', '*final')
>>> V('1.2b2')
('00000001', '00000002', '*b', '00000002', '*final')
>>> V('FunkyVersion')
('*funkyversion', '*final')
In this schema practicality takes priority over purity, but as a result it
doesn’t enforce any policy and leads to very complex semantics due to the lack
of a clear standard. It just tries to adapt to widely used conventions.
Caveats of existing systems
The major problem with the described version comparison tools is that they are
too permissive and, at the same time, aren’t capable of expressing some of the
required semantics. Many of the versions on PyPI [4] are obviously not
useful versions, which makes it difficult for users to grok the versioning that
a particular package was using and to provide tools on top of PyPI.
Distutils classes are not really used in Python projects, but the
Setuptools function is quite widespread because it’s used by tools like
easy_install [6], pip [5] or zc.buildout
[7] to install dependencies of a given project.
While Setuptools does provide a mechanism for comparing/sorting versions,
it is much preferable if the versioning spec is such that a human can make a
reasonable attempt at that sorting without having to run it against some code.
Also, there’s a problem with the use of dates as the “major” version number
(e.g. a version string “20090421”) with RPMs: it means that any attempt to
switch to a more typical “major.minor…” version scheme is problematic because
it will always sort less than “20090421”.
Last, the meaning of - is specific to Setuptools, while it is avoided in
some packaging systems like the one used by Debian or Ubuntu.
The new versioning algorithm
During Pycon, members of the Python, Ubuntu and Fedora community worked on
a version standard that would be acceptable for everyone.
It’s currently called verlib and a prototype lives at [10].
The pseudo-format supported is:
N.N[.N]+[{a|b|c|rc}N[.N]+][.postN][.devN]
The real regular expression is:
expr = r"""^
(?P<version>\d+\.\d+) # minimum 'N.N'
(?P<extraversion>(?:\.\d+)*) # any number of extra '.N' segments
(?:
(?P<prerel>[abc]|rc) # 'a' = alpha, 'b' = beta
# 'c' or 'rc' = release candidate
(?P<prerelversion>\d+(?:\.\d+)*)
)?
(?P<postdev>(\.post(?P<post>\d+))?(\.dev(?P<dev>\d+))?)?
$"""
Some examples probably make it clearer:
>>> from verlib import NormalizedVersion as V
>>> (V('1.0a1')
... < V('1.0a2.dev456')
... < V('1.0a2')
... < V('1.0a2.1.dev456')
... < V('1.0a2.1')
... < V('1.0b1.dev456')
... < V('1.0b2')
... < V('1.0b2.post345')
... < V('1.0c1.dev456')
... < V('1.0c1')
... < V('1.0.dev456')
... < V('1.0')
... < V('1.0.post456.dev34')
... < V('1.0.post456'))
True
The trailing .dev123 is for pre-releases. The .post123 is for
post-releases – which apparently are used by a number of projects out there
(e.g. Twisted [8]). For example, after a 1.2.0 release there might
be a 1.2.0-r678 release. We used post instead of r because the
r is ambiguous as to whether it indicates a pre- or post-release.
.post456.dev34 indicates a dev marker for a post release, that sorts
before a .post456 marker. This can be used to do development versions
of post releases.
Pre-releases can use a for “alpha”, b for “beta” and c for
“release candidate”. rc is an alternative notation for “release candidate”
that is added to make the version scheme compatible with Python’s own version
scheme. rc sorts after c:
>>> from verlib import NormalizedVersion as V
>>> (V('1.0a1')
... < V('1.0a2')
... < V('1.0b3')
... < V('1.0c1')
... < V('1.0rc2')
... < V('1.0'))
True
Note that c is the preferred marker for third party projects.
verlib provides a NormalizedVersion class and a
suggest_normalized_version function.
NormalizedVersion
The NormalizedVersion class is used to hold a version and to compare it
with others. It takes a string as an argument, that contains the representation
of the version:
>>> from verlib import NormalizedVersion
>>> version = NormalizedVersion('1.0')
The version can be represented as a string:
>>> str(version)
'1.0'
Or compared with others:
>>> NormalizedVersion('1.0') > NormalizedVersion('0.9')
True
>>> NormalizedVersion('1.0') < NormalizedVersion('1.1')
True
A class method called from_parts is available if you want to create an
instance by providing the parts that compose the version.
Examples
>>> version = NormalizedVersion.from_parts((1, 0))
>>> str(version)
'1.0'
>>> version = NormalizedVersion.from_parts((1, 0), ('c', 4))
>>> str(version)
'1.0c4'
>>> version = NormalizedVersion.from_parts((1, 0), ('c', 4), ('dev', 34))
>>> str(version)
'1.0c4.dev34'
suggest_normalized_version
suggest_normalized_version is a function that suggests a normalized version
close to the given version string. If you have a version string that isn’t
normalized (i.e. NormalizedVersion doesn’t like it) then you might be able
to get an equivalent (or close) normalized version from this function.
This does a number of simple normalizations to the given string, based
on an observation of versions currently in use on PyPI.
Given a dump of those versions from January 6th, 2010, the function gave the
following results for the 8821 distributions then on PyPI:
7822 (88.67%) already match NormalizedVersion without any change
717 (8.13%) match when using this suggestion method
282 (3.20%) don’t match at all.
The 3.20% of projects that are incompatible with NormalizedVersion
and cannot be transformed into a compatible form mostly use date-based
version schemes, custom markers, or dummy versions. Examples:
working proof of concept
1 (first draft)
unreleased.unofficialdev
0.1.alphadev
2008-03-29_r219
etc.
When a tool needs to work with versions, a strategy is to use
suggest_normalized_version on the version string. If this function returns
None, it means that the provided version is not close enough to the
standard scheme. If it returns a version that slightly differs from
the original version, it’s a suggested normalized version. Last, if it
returns the same string, it means that the version matches the scheme.
Here’s an example of usage:
>>> from verlib import suggest_normalized_version, NormalizedVersion
>>> import warnings
>>> def validate_version(version):
... rversion = suggest_normalized_version(version)
... if rversion is None:
... raise ValueError('Cannot work with "%s"' % version)
... if rversion != version:
... warnings.warn('"%s" is not a normalized version.\n'
... 'It has been transformed into "%s" '
... 'for interoperability.' % (version, rversion))
... return NormalizedVersion(rversion)
...
>>> validate_version('2.4-rc1')
__main__:8: UserWarning: "2.4-rc1" is not a normalized version.
It has been transformed into "2.4c1" for interoperability.
NormalizedVersion('2.4c1')
>>> validate_version('2.4c1')
NormalizedVersion('2.4c1')
>>> validate_version('foo')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 4, in validate_version
ValueError: Cannot work with "foo"
Roadmap
Distutils will deprecate its existing versions class in favor of
NormalizedVersion. The verlib module presented in this PEP will be
renamed to version and placed into the distutils package.
References
[1]
http://docs.python.org/distutils
[2]
http://peak.telecommunity.com/DevCenter/setuptools
[3]
http://peak.telecommunity.com/DevCenter/setuptools#specifying-your-project-s-version
[4]
http://pypi.python.org/pypi
[5]
http://pypi.python.org/pypi/pip
[6]
http://peak.telecommunity.com/DevCenter/EasyInstall
[7]
http://pypi.python.org/pypi/zc.buildout
[8]
http://twistedmatrix.com/trac/
[9]
http://peak.telecommunity.com/DevCenter/setuptools
[10]
http://bitbucket.org/tarek/distutilsversion/
Acknowledgments
Trent Mick, Matthias Klose, Phillip Eby, David Lyon, and many people at Pycon
and Distutils-SIG.
Copyright
This document has been placed in the public domain.
| Superseded | PEP 386 – Changing the version comparison module in Distutils | Standards Track | Note: This PEP has been superseded by the version identification and
dependency specification scheme defined in PEP 440. |
PEP 389 – argparse - New Command Line Parsing Module
Author:
Steven Bethard <steven.bethard at gmail.com>
Status:
Final
Type:
Standards Track
Created:
25-Sep-2009
Python-Version:
2.7, 3.2
Post-History:
27-Sep-2009, 24-Oct-2009
Table of Contents
Acceptance
Abstract
Motivation
Why aren’t getopt and optparse enough?
Why isn’t the functionality just being added to optparse?
Deprecation of optparse
Updates to getopt documentation
Deferred: string formatting
Rejected: getopt compatibility methods
Out of Scope: Various Feature Requests
Discussion: sys.stderr and sys.exit
References
Copyright
Acceptance
This PEP was approved by Guido on python-dev on February 21, 2010 [17].
Abstract
This PEP proposes inclusion of the argparse [1] module in the Python
standard library in Python 2.7 and 3.2.
Motivation
The argparse module is a command line parsing library which provides
more functionality than the existing command line parsing modules in
the standard library, getopt [2] and optparse [3]. It includes
support for positional arguments (not just options), subcommands,
required options, option syntaxes like “/f” and “+rgb”, zero-or-more
and one-or-more style arguments, and many other features the other
two lack.
The argparse module is also already a popular third-party replacement
for these modules. It is used in projects like IPython (the Scipy
Python shell) [4], is included in Debian testing and unstable [5],
and since 2007 has had various requests for its inclusion in the
standard library [6] [7] [8]. This popularity suggests it may be
a valuable addition to the Python libraries.
Why aren’t getopt and optparse enough?
One argument against adding argparse is that there are “already two
different option parsing modules in the standard library” [9]. The
following is a list of features provided by argparse but not present
in getopt or optparse:
While it is true there are two option parsing libraries, there
are no full command line parsing libraries – both getopt and
optparse support only options and have no support for positional
arguments. The argparse module handles both, and as a result, is
able to generate better help messages, avoiding redundancies like
the usage= string usually required by optparse.
The argparse module values practicality over purity. Thus, argparse
allows required options and customization of which characters are
used to identify options, while optparse explicitly states “the
phrase ‘required option’ is self-contradictory” and that the option
syntaxes -pf, -file, +f, +rgb, /f and /file
“are not supported by optparse, and they never will be”.
The argparse module allows options to accept a variable number of
arguments using nargs='?', nargs='*' or nargs='+'. The
optparse module provides an untested recipe for some part of this
functionality [10] but admits that “things get hairy when you want
an option to take a variable number of arguments.”
The argparse module supports subcommands, where a main command
line parser dispatches to other command line parsers depending on
the command line arguments. This is a common pattern in command
line interfaces, e.g. svn co and svn up (see the sketch below).
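The following sketch illustrates the last two points; the program, command,
and argument names are invented for the example and are not argparse APIs
beyond ArgumentParser, add_subparsers, add_argument and parse_args:
import argparse

parser = argparse.ArgumentParser(prog='vcs')
subparsers = parser.add_subparsers(dest='command')

# 'co' requires one or more paths; 'up' accepts zero or more.
co_parser = subparsers.add_parser('co')
co_parser.add_argument('paths', nargs='+')
up_parser = subparsers.add_parser('up')
up_parser.add_argument('paths', nargs='*')

args = parser.parse_args(['co', 'trunk', 'branches/1.0'])
# args.command == 'co', args.paths == ['trunk', 'branches/1.0']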
Why isn’t the functionality just being added to optparse?
Clearly all the above features offer improvements over what is
available through optparse. A reasonable question then is why these
features are not simply provided as patches to optparse, instead of
introducing an entirely new module. In fact, the original development
of argparse intended to do just that, but because of various fairly
constraining design decisions of optparse, this wasn’t really
possible. Some of the problems included:
The optparse module exposes the internals of its parsing algorithm.
In particular, parser.largs and parser.rargs are guaranteed
to be available to callbacks [11]. This makes it extremely
difficult to improve the parsing algorithm as was necessary in
argparse for proper handling of positional arguments and variable
length arguments. For example, nargs='+' in argparse is matched
using regular expressions and thus has no notion of things like
parser.largs.
The optparse extension APIs are extremely complex. For example,
just to use a simple custom string-to-object conversion function,
you have to subclass Option, hack class attributes, and then
specify your custom option type to the parser, like this:
class MyOption(Option):
TYPES = Option.TYPES + ("mytype",)
TYPE_CHECKER = copy(Option.TYPE_CHECKER)
TYPE_CHECKER["mytype"] = check_mytype
parser = optparse.OptionParser(option_class=MyOption)
parser.add_option("-m", type="mytype")
For comparison, argparse simply allows conversion functions to be
used as type= arguments directly, e.g.:
parser = argparse.ArgumentParser()
parser.add_option("-m", type=check_mytype)
But given the baroque customization APIs of optparse, it is unclear
how such a feature should interact with those APIs, and it is
quite possible that introducing the simple argparse API would break
existing custom Option code.
Both optparse and argparse parse command line arguments and assign
them as attributes to an object returned by parse_args.
However, the optparse module guarantees that the take_action
method of custom actions will always be passed a values object
which provides an ensure_value method [12], while the argparse
module allows attributes to be assigned to any object, e.g.:
foo_object = ...
parser.parse_args(namespace=foo_object)
foo_object.some_attribute_parsed_from_command_line
Modifying optparse to allow any object to be passed in would be
difficult because simply passing the foo_object around instead
of a Values instance will break existing custom actions that
depend on the ensure_value method.
Because of issues like these, which made it unreasonably difficult
for argparse to stay compatible with the optparse APIs, argparse was
developed as an independent module. Given these issues, merging all
the argparse features into optparse with no backwards
incompatibilities seems unlikely.
Deprecation of optparse
Because all of optparse’s features are available in argparse, the
optparse module will be deprecated. However, because of the
widespread use of optparse, the deprecation strategy contains only
documentation changes and warnings that will not be visible by
default:
Python 2.7+ and 3.2+ – The following note will be added to the
optparse documentation:
The optparse module is deprecated and will not be developed
further; development will continue with the argparse module.
Python 2.7+ – If the Python 3 compatibility flag, -3, is
provided at the command line, then importing optparse will issue a
DeprecationWarning. Otherwise no warnings will be issued.
Python 3.2+ – Importing optparse will issue a
PendingDeprecationWarning, which is not displayed by default.
Note that no removal date is proposed for optparse.
Updates to getopt documentation
The getopt module will not be deprecated. However, its documentation
will be updated to point to argparse in a couple of places. At the
top of the module, the following note will be added:
The getopt module is a parser for command line options whose API
is designed to be familiar to users of the C getopt function.
Users who are unfamiliar with the C getopt function or who would
like to write less code and get better help and error messages
should consider using the argparse module instead.
Additionally, after the final getopt example, the following note will
be added:
Note that an equivalent command line interface could be produced
with less code by using the argparse module:
import argparse
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('-o', '--output')
parser.add_argument('-v', dest='verbose', action='store_true')
args = parser.parse_args()
# ... do something with args.output ...
# ... do something with args.verbose ...
Deferred: string formatting
The argparse module supports Python from 2.3 up through 3.2 and as a
result relies on traditional %(foo)s style string formatting. It
has been suggested that it might be better to use the new style
{foo} string formatting [13]. There was some discussion about
how best to do this for modules in the standard library [14] and
several people are developing functions for automatically converting
%-formatting to {}-formatting [15] [16]. When one of these is added
to the standard library, argparse will use them to support both
formatting styles.
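For reference, the two styles differ only in syntax; a trivial sketch:
old_style = 'invalid choice: %(value)r' % {'value': 'spam'}
new_style = 'invalid choice: {value!r}'.format(value='spam')
assert old_style == new_style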
Rejected: getopt compatibility methods
Previously, when this PEP was suggesting the deprecation of getopt
as well as optparse, there was some talk of adding a method like:
ArgumentParser.add_getopt_arguments(options[, long_options])
However, this method will not be added for a number of reasons:
The getopt module is not being deprecated, so there is less need.
This method would not actually ease the transition for any getopt
users who were already maintaining usage messages, because the API
above gives no way of adding help messages to the arguments.
Some users of getopt consider it very important that only a single
function call is necessary. The API above does not satisfy this
requirement because both ArgumentParser() and parse_args()
must also be called.
Out of Scope: Various Feature Requests
Several feature requests for argparse were made in the discussion of
this PEP:
Support argument defaults from environment variables
Support argument defaults from configuration files
Support “foo --help subcommand” in addition to the currently
supported “foo subcommand --help”
These are all reasonable feature requests for the argparse module,
but are out of the scope of this PEP, and have been redirected to
the argparse issue tracker.
Discussion: sys.stderr and sys.exit
There were some concerns that argparse by default always writes to
sys.stderr and always calls sys.exit when invalid arguments
are provided. This is the desired behavior for the vast majority of
argparse use cases which revolve around simple command line
interfaces. However, in some cases, it may be desirable to keep
argparse from exiting, or to have it write its messages to something
other than sys.stderr. These use cases can be supported by
subclassing ArgumentParser and overriding the exit or
_print_message methods. The latter is an undocumented
implementation detail, but could be officially exposed if this turns
out to be a common need.
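A minimal sketch of that subclassing approach follows; the class name and the
error-handling policy are illustrative, not part of argparse:
import argparse

class NonExitingParser(argparse.ArgumentParser):
    def exit(self, status=0, message=None):
        # Surface the problem to the caller instead of calling sys.exit().
        raise ValueError(message or 'argument parsing failed')

parser = NonExitingParser(prog='example')
parser.add_argument('--count', type=int)
try:
    parser.parse_args(['--count', 'not-a-number'])
except ValueError:
    pass  # handle the bad arguments without terminating the process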
References
[1]
argparse
(http://code.google.com/p/argparse/)
[2]
getopt
(http://docs.python.org/library/getopt.html)
[3]
optparse
(http://docs.python.org/library/optparse.html)
[4]
argparse in IPython
(http://mail.scipy.org/pipermail/ipython-dev/2009-April/005102.html)
[5]
argparse in Debian
(http://packages.debian.org/search?keywords=argparse)
[6] (1, 2)
2007-01-03 request for argparse in the standard library
(https://mail.python.org/pipermail/python-list/2007-January/472276.html)
[7]
2009-06-09 request for argparse in the standard library
(http://bugs.python.org/issue6247)
[8]
2009-09-10 request for argparse in the standard library
(https://mail.python.org/pipermail/stdlib-sig/2009-September/000342.html)
[9]
Fredrik Lundh response to [6]
(https://mail.python.org/pipermail/python-list/2007-January/1086892.html)
[10]
optparse variable args
(http://docs.python.org/library/optparse.html#callback-example-6-variable-arguments)
[11]
parser.largs and parser.rargs
(http://docs.python.org/library/optparse.html#how-callbacks-are-called)
[12]
take_action values argument
(http://docs.python.org/library/optparse.html#adding-new-actions)
[13]
use {}-formatting instead of %-formatting
(http://bugs.python.org/msg89279)
[14]
transitioning from % to {} formatting
(https://mail.python.org/pipermail/python-dev/2009-September/092326.html)
[15]
Vinay Sajip’s %-to-{} converter
(http://gist.github.com/200936)
[16]
Benjamin Peterson’s %-to-{} converter
(http://bazaar.launchpad.net/~gutworth/+junk/mod2format/files)
[17]
Guido’s approval
(https://mail.python.org/pipermail/python-dev/2010-February/097839.html)
Copyright
This document has been placed in the public domain.
| Final | PEP 389 – argparse - New Command Line Parsing Module | Standards Track | This PEP proposes inclusion of the argparse [1] module in the Python
standard library in Python 2.7 and 3.2. |
PEP 391 – Dictionary-Based Configuration For Logging
Author:
Vinay Sajip <vinay_sajip at red-dove.com>
Status:
Final
Type:
Standards Track
Created:
15-Oct-2009
Python-Version:
2.7, 3.2
Post-History:
Table of Contents
Abstract
Rationale
Specification
Naming
API
Dictionary Schema - Overview
Object connections
User-defined objects
Access to external objects
Access to internal objects
Handler Ids
Dictionary Schema - Detail
A Working Example
Incremental Configuration
API Customization
Change to Socket Listener Implementation
Configuration Errors
Discussion in the community
Reference implementation
Copyright
Abstract
This PEP describes a new way of configuring logging using a dictionary
to hold configuration information.
Rationale
The present means for configuring Python’s logging package is either
by using the logging API to configure logging programmatically, or
else by means of ConfigParser-based configuration files.
Programmatic configuration, while offering maximal control, fixes the
configuration in Python code. This does not facilitate changing it
easily at runtime, and, as a result, the ability to flexibly turn the
verbosity of logging up and down for different parts of an application
that uses it is lost. This limits the usability of logging as an aid
to diagnosing problems - and sometimes, logging is the only diagnostic
aid available in production environments.
The ConfigParser-based configuration system is usable, but does not
allow its users to configure all aspects of the logging package. For
example, Filters cannot be configured using this system. Furthermore,
the ConfigParser format appears to engender dislike (sometimes strong
dislike) in some quarters. Though it was chosen because it was the
only configuration format supported in the Python standard library at that
time, many people regard it (or perhaps just the particular schema
chosen for logging’s configuration) as ‘crufty’ or ‘ugly’, in some
cases apparently on purely aesthetic grounds.
Recent versions of Python include JSON support in the standard
library, and this is also usable as a configuration format. In other
environments, such as Google App Engine, YAML is used to configure
applications, and usually the configuration of logging would be
considered an integral part of the application configuration.
Although the standard library does not contain YAML support at
present, support for both JSON and YAML can be provided in a common
way because both of these serialization formats allow deserialization
to Python dictionaries.
By providing a way to configure logging by passing the configuration
in a dictionary, logging will be easier to configure not only for
users of JSON and/or YAML, but also for users of custom configuration
methods, by providing a common format in which to describe the desired
configuration.
Another drawback of the current ConfigParser-based configuration
system is that it does not support incremental configuration: a new
configuration completely replaces the existing configuration.
Although full flexibility for incremental configuration is difficult
to provide in a multi-threaded environment, the new configuration
mechanism will allow the provision of limited support for incremental
configuration.
Specification
The specification consists of two parts: the API and the format of the
dictionary used to convey configuration information (i.e. the schema
to which it must conform).
Naming
Historically, the logging package has not been PEP 8 conformant.
At some future time, this will be corrected by changing method and
function names in the package in order to conform with PEP 8.
However, in the interests of uniformity, the proposed additions to the
API use a naming scheme which is consistent with the present scheme
used by logging.
API
The logging.config module will have the following addition:
A function, called dictConfig(), which takes a single argument
- the dictionary holding the configuration. Exceptions will be
raised if there are errors while processing the dictionary.
It will be possible to customize this API - see the section on API
Customization. Incremental configuration is covered in its own
section.
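As a simple illustration, once the proposed dictConfig() is available, a
minimal configuration might be applied as in the following sketch (the
formatter, handler and logger names are illustrative):
import logging
import logging.config

config = {
    'version': 1,
    'formatters': {
        'brief': {'format': '%(levelname)-8s %(name)s: %(message)s'},
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'brief',
            'level': 'INFO',
        },
    },
    'root': {
        'level': 'DEBUG',
        'handlers': ['console'],
    },
}

logging.config.dictConfig(config)
logging.getLogger('example').info('configured via dictConfig')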
Dictionary Schema - Overview
Before describing the schema in detail, it is worth saying a few words
about object connections, support for user-defined objects and access
to external and internal objects.
Object connections
The schema is intended to describe a set of logging objects - loggers,
handlers, formatters, filters - which are connected to each other in
an object graph. Thus, the schema needs to represent connections
between the objects. For example, say that, once configured, a
particular logger has attached to it a particular handler. For the
purposes of this discussion, we can say that the logger represents the
source, and the handler the destination, of a connection between the
two. Of course in the configured objects this is represented by the
logger holding a reference to the handler. In the configuration dict,
this is done by giving each destination object an id which identifies
it unambiguously, and then using the id in the source object’s
configuration to indicate that a connection exists between the source
and the destination object with that id.
So, for example, consider the following YAML snippet:
formatters:
brief:
# configuration for formatter with id 'brief' goes here
precise:
# configuration for formatter with id 'precise' goes here
handlers:
h1: #This is an id
# configuration of handler with id 'h1' goes here
formatter: brief
h2: #This is another id
# configuration of handler with id 'h2' goes here
formatter: precise
loggers:
foo.bar.baz:
# other configuration for logger 'foo.bar.baz'
handlers: [h1, h2]
(Note: YAML will be used in this document as it is a little more
readable than the equivalent Python source form for the dictionary.)
The ids for loggers are the logger names which would be used
programmatically to obtain a reference to those loggers, e.g.
foo.bar.baz. The ids for Formatters and Filters can be any string
value (such as brief, precise above) and they are transient,
in that they are only meaningful for processing the configuration
dictionary and used to determine connections between objects, and are
not persisted anywhere when the configuration call is complete.
Handler ids are treated specially, see the section on
Handler Ids, below.
The above snippet indicates that logger named foo.bar.baz should
have two handlers attached to it, which are described by the handler
ids h1 and h2. The formatter for h1 is that described by id
brief, and the formatter for h2 is that described by id
precise.
User-defined objects
The schema should support user-defined objects for handlers, filters
and formatters. (Loggers do not need to have different types for
different instances, so there is no support - in the configuration -
for user-defined logger classes.)
Objects to be configured will typically be described by dictionaries
which detail their configuration. In some places, the logging system
will be able to infer from the context how an object is to be
instantiated, but when a user-defined object is to be instantiated,
the system will not know how to do this. In order to provide complete
flexibility for user-defined object instantiation, the user will need
to provide a ‘factory’ - a callable which is called with a
configuration dictionary and which returns the instantiated object.
This will be signalled by an absolute import path to the factory being
made available under the special key '()'. Here’s a concrete
example:
formatters:
brief:
format: '%(message)s'
default:
format: '%(asctime)s %(levelname)-8s %(name)-15s %(message)s'
datefmt: '%Y-%m-%d %H:%M:%S'
custom:
(): my.package.customFormatterFactory
bar: baz
spam: 99.9
answer: 42
The above YAML snippet defines three formatters. The first, with id
brief, is a standard logging.Formatter instance with the
specified format string. The second, with id default, has a
longer format and also defines the time format explicitly, and will
result in a logging.Formatter initialized with those two format
strings. Shown in Python source form, the brief and default
formatters have configuration sub-dictionaries:
{
'format' : '%(message)s'
}
and:
{
'format' : '%(asctime)s %(levelname)-8s %(name)-15s %(message)s',
'datefmt' : '%Y-%m-%d %H:%M:%S'
}
respectively, and as these dictionaries do not contain the special key
'()', the instantiation is inferred from the context: as a result,
standard logging.Formatter instances are created. The
configuration sub-dictionary for the third formatter, with id
custom, is:
{
'()' : 'my.package.customFormatterFactory',
'bar' : 'baz',
'spam' : 99.9,
'answer' : 42
}
and this contains the special key '()', which means that
user-defined instantiation is wanted. In this case, the specified
factory callable will be used. If it is an actual callable it will be
used directly - otherwise, if you specify a string (as in the example)
the actual callable will be located using normal import mechanisms.
The callable will be called with the remaining items in the
configuration sub-dictionary as keyword arguments. In the above
example, the formatter with id custom will be assumed to be
returned by the call:
my.package.customFormatterFactory(bar='baz', spam=99.9, answer=42)
The key '()' has been used as the special key because it is not a
valid keyword parameter name, and so will not clash with the names of
the keyword arguments used in the call. The '()' also serves as a
mnemonic that the corresponding value is a callable.
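For illustration, a hypothetical my.package.customFormatterFactory compatible
with the call above could be as simple as the following sketch; how the extra
values are used is entirely up to the application:
import logging

def customFormatterFactory(bar, spam, answer):
    # Fold the custom configuration values into the format string; a real
    # factory could do anything, as long as it returns a formatter object.
    fmt = '[%s/%s/%s] %%(levelname)s %%(message)s' % (bar, spam, answer)
    return logging.Formatter(fmt)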
Access to external objects
There are times where a configuration will need to refer to objects
external to the configuration, for example sys.stderr. If the
configuration dict is constructed using Python code then this is
straightforward, but a problem arises when the configuration is
provided via a text file (e.g. JSON, YAML). In a text file, there is
no standard way to distinguish sys.stderr from the literal string
'sys.stderr'. To facilitate this distinction, the configuration
system will look for certain special prefixes in string values and
treat them specially. For example, if the literal string
'ext://sys.stderr' is provided as a value in the configuration,
then the ext:// will be stripped off and the remainder of the
value processed using normal import mechanisms.
The handling of such prefixes will be done in a way analogous to
protocol handling: there will be a generic mechanism to look for
prefixes which match the regular expression
^(?P<prefix>[a-z]+)://(?P<suffix>.*)$ whereby, if the prefix
is recognised, the suffix is processed in a prefix-dependent
manner and the result of the processing replaces the string value. If
the prefix is not recognised, then the string value will be left
as-is.
The implementation will provide for a set of standard prefixes such as
ext:// but it will be possible to disable the mechanism completely
or provide additional or different prefixes for special handling.
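A minimal sketch of how such prefix handling could work is shown below; the
function names and the handler registry are illustrative, not part of the
proposed API:
import re

PREFIX_PATTERN = re.compile(r'^(?P<prefix>[a-z]+)://(?P<suffix>.*)$')

def resolve_external(suffix):
    # Resolve a dotted path such as 'sys.stderr' via normal import mechanisms.
    module_name, _, attr = suffix.rpartition('.')
    module = __import__(module_name, fromlist=[attr])
    return getattr(module, attr)

PREFIX_HANDLERS = {'ext': resolve_external}

def convert(value):
    match = PREFIX_PATTERN.match(value)
    if match and match.group('prefix') in PREFIX_HANDLERS:
        return PREFIX_HANDLERS[match.group('prefix')](match.group('suffix'))
    return value  # unrecognised prefix, or no prefix at all: leave as-is

# convert('ext://sys.stderr') returns the sys.stderr object;
# other strings are returned unchanged.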
Access to internal objects
As well as external objects, there is sometimes also a need to refer
to objects in the configuration. This will be done implicitly by the
configuration system for things that it knows about. For example, the
string value 'DEBUG' for a level in a logger or handler will
automatically be converted to the value logging.DEBUG, and the
handlers, filters and formatter entries will take an
object id and resolve to the appropriate destination object.
However, a more generic mechanism needs to be provided for the case
of user-defined objects which are not known to logging. For example,
take the instance of logging.handlers.MemoryHandler, which takes
a target which is another handler to delegate to. Since the system
already knows about this class, then in the configuration, the given
target just needs to be the object id of the relevant target
handler, and the system will resolve to the handler from the id. If,
however, a user defines a my.package.MyHandler which has a
alternate handler, the configuration system would not know that
the alternate referred to a handler. To cater for this, a
generic resolution system will be provided which allows the user to
specify:
handlers:
file:
# configuration of file handler goes here
custom:
(): my.package.MyHandler
alternate: cfg://handlers.file
The literal string 'cfg://handlers.file' will be resolved in an
analogous way to the strings with the ext:// prefix, but looking
in the configuration itself rather than the import namespace. The
mechanism will allow access by dot or by index, in a similar way to
that provided by str.format. Thus, given the following snippet:
handlers:
email:
class: logging.handlers.SMTPHandler
mailhost: localhost
fromaddr: my_app@domain.tld
toaddrs:
- support_team@domain.tld
- dev_team@domain.tld
subject: Houston, we have a problem.
in the configuration, the string 'cfg://handlers' would resolve to
the dict with key handlers, the string 'cfg://handlers.email'
would resolve to the dict with key email in the handlers dict,
and so on. The string 'cfg://handlers.email.toaddrs[1]' would
resolve to 'dev_team@domain.tld' and the string
'cfg://handlers.email.toaddrs[0]' would resolve to the value
'support_team@domain.tld'. The subject value could be accessed
using either 'cfg://handlers.email.subject' or, equivalently,
'cfg://handlers.email[subject]'. The latter form only needs to be
used if the key contains spaces or non-alphanumeric characters. If an
index value consists only of decimal digits, access will be attempted
using the corresponding integer value, falling back to the string
value if needed.
Given a string cfg://handlers.myhandler.mykey.123, this will
resolve to config_dict['handlers']['myhandler']['mykey']['123'].
If the string is specified as cfg://handlers.myhandler.mykey[123],
the system will attempt to retrieve the value from
config_dict['handlers']['myhandler']['mykey'][123], and fall back
to config_dict['handlers']['myhandler']['mykey']['123'] if that
fails.
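The bracketed-index fallback described above might be sketched as follows
(illustrative only, not the proposed implementation):
def resolve_index(container, key):
    # For an all-digit key inside [...], try integer access first and fall
    # back to the string form of the key if that fails.
    if key.isdigit():
        try:
            return container[int(key)]
        except (LookupError, TypeError):
            pass
    return container[key]

# resolve_index(['first', 'second'], '1') -> 'second'
# resolve_index({'123': 'value'}, '123')  -> 'value'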
Handler Ids
Some specific logging configurations require the use of handler levels
to achieve the desired effect. However, unlike loggers which can
always be identified by their names, handlers have no persistent
handles whereby levels can be changed via an incremental configuration
call.
Therefore, this PEP proposes to add an optional name property to
handlers. If used, this will add an entry in a dictionary which maps
the name to the handler. (The entry will be removed when the handler
is closed.) When an incremental configuration call is made, handlers
will be looked up in this dictionary to set the handler level
according to the value in the configuration. See the section on
incremental configuration for more details.
In theory, such a “persistent name” facility could also be provided
for Filters and Formatters. However, there is not a strong case to be
made for being able to configure these incrementally. On the basis
that practicality beats purity, only Handlers will be given this new
name property. The id of a handler in the configuration will
become its name.
The handler name lookup dictionary is for configuration use only and
will not become part of the public API for the package.
Dictionary Schema - Detail
The dictionary passed to dictConfig() must contain the following
keys:
version - to be set to an integer value representing the schema
version. The only valid value at present is 1, but having this key
allows the schema to evolve while still preserving backwards
compatibility.
All other keys are optional, but if present they will be interpreted
as described below. In all cases below where a ‘configuring dict’ is
mentioned, it will be checked for the special '()' key to see if a
custom instantiation is required. If so, the mechanism described
above is used to instantiate; otherwise, the context is used to
determine how to instantiate.
formatters - the corresponding value will be a dict in which each
key is a formatter id and each value is a dict describing how to
configure the corresponding Formatter instance.
The configuring dict is searched for keys format and datefmt
(with defaults of None) and these are used to construct a
logging.Formatter instance.
filters - the corresponding value will be a dict in which each key
is a filter id and each value is a dict describing how to configure
the corresponding Filter instance.
The configuring dict is searched for key name (defaulting to the
empty string) and this is used to construct a logging.Filter
instance.
handlers - the corresponding value will be a dict in which each
key is a handler id and each value is a dict describing how to
configure the corresponding Handler instance.
The configuring dict is searched for the following keys:
class (mandatory). This is the fully qualified name of the
handler class.
level (optional). The level of the handler.
formatter (optional). The id of the formatter for this
handler.
filters (optional). A list of ids of the filters for this
handler.
All other keys are passed through as keyword arguments to the
handler’s constructor. For example, given the snippet:
handlers:
console:
class : logging.StreamHandler
formatter: brief
level : INFO
filters: [allow_foo]
stream : ext://sys.stdout
file:
class : logging.handlers.RotatingFileHandler
formatter: precise
filename: logconfig.log
maxBytes: 1024
backupCount: 3
the handler with id console is instantiated as a
logging.StreamHandler, using sys.stdout as the underlying
stream. The handler with id file is instantiated as a
logging.handlers.RotatingFileHandler with the keyword arguments
filename='logconfig.log', maxBytes=1024, backupCount=3.
loggers - the corresponding value will be a dict in which each key
is a logger name and each value is a dict describing how to
configure the corresponding Logger instance.
The configuring dict is searched for the following keys:
level (optional). The level of the logger.
propagate (optional). The propagation setting of the logger.
filters (optional). A list of ids of the filters for this
logger.
handlers (optional). A list of ids of the handlers for this
logger.
The specified loggers will be configured according to the level,
propagation, filters and handlers specified.
root - this will be the configuration for the root logger.
Processing of the configuration will be as for any logger, except
that the propagate setting will not be applicable.
incremental - whether the configuration is to be interpreted as
incremental to the existing configuration. This value defaults to
False, which means that the specified configuration replaces the
existing configuration with the same semantics as used by the
existing fileConfig() API.
If the specified value is True, the configuration is processed
as described in the section on Incremental Configuration, below.
disable_existing_loggers - whether any existing loggers are to be
disabled. This setting mirrors the parameter of the same name in
fileConfig(). If absent, this parameter defaults to True.
This value is ignored if incremental is True.
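Putting the mandatory version key together with a few of the optional
keys described above, a minimal configuration dict might look like the
following sketch (handler and formatter details are illustrative, and it
assumes dictConfig() is exposed as logging.config.dictConfig as proposed):
import logging.config

minimal_config = {
    'version': 1,                      # the only mandatory key
    'formatters': {
        'brief': {'format': '%(levelname)s: %(message)s'},
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'brief',
            'level': 'INFO',
        },
    },
    'root': {
        'level': 'DEBUG',
        'handlers': ['console'],
    },
}

logging.config.dictConfig(minimal_config)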
A Working Example
The following is an actual working configuration in YAML format
(except that the email addresses are bogus):
formatters:
  brief:
    format: '%(levelname)-8s: %(name)-15s: %(message)s'
  precise:
    format: '%(asctime)s %(name)-15s %(levelname)-8s %(message)s'
filters:
  allow_foo:
    name: foo
handlers:
  console:
    class: logging.StreamHandler
    formatter: brief
    level: INFO
    stream: ext://sys.stdout
    filters: [allow_foo]
  file:
    class: logging.handlers.RotatingFileHandler
    formatter: precise
    filename: logconfig.log
    maxBytes: 1024
    backupCount: 3
  debugfile:
    class: logging.FileHandler
    formatter: precise
    filename: logconfig-detail.log
    mode: a
  email:
    class: logging.handlers.SMTPHandler
    mailhost: localhost
    fromaddr: my_app@domain.tld
    toaddrs:
      - support_team@domain.tld
      - dev_team@domain.tld
    subject: Houston, we have a problem.
loggers:
  foo:
    level: ERROR
    handlers: [debugfile]
  spam:
    level: CRITICAL
    handlers: [debugfile]
    propagate: no
  bar.baz:
    level: WARNING
root:
  level: DEBUG
  handlers: [console, file]
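As a usage sketch, assuming the third-party PyYAML package is available
and the configuration above has been saved to a file named logconf.yaml
(an invented name), it could be applied like this:
import logging
import logging.config

import yaml  # third-party PyYAML, assumed available for this sketch

with open('logconf.yaml') as f:
    config = yaml.safe_load(f)       # parse the YAML into a plain dict

logging.config.dictConfig(config)    # apply the dictionary-based configuration
logging.getLogger('foo').error('message routed according to the config')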
Incremental Configuration
It is difficult to provide complete flexibility for incremental
configuration. For example, because objects such as filters
and formatters are anonymous, once a configuration is set up, it is
not possible to refer to such anonymous objects when augmenting a
configuration.
Furthermore, there is not a compelling case for arbitrarily altering
the object graph of loggers, handlers, filters, formatters at
run-time, once a configuration is set up; the verbosity of loggers and
handlers can be controlled just by setting levels (and, in the case of
loggers, propagation flags). Changing the object graph arbitrarily in
a safe way is problematic in a multi-threaded environment; while not
impossible, the benefits are not worth the complexity it adds to the
implementation.
Thus, when the incremental key of a configuration dict is present
and is True, the system will ignore any formatters and
filters entries completely, and process only the level
settings in the handlers entries, and the level and
propagate settings in the loggers and root entries.
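For example, an incremental update that only raises the verbosity of one
handler and one logger might look like the following sketch (the console
and foo names are taken from the working example above):
import logging.config

incremental_update = {
    'version': 1,
    'incremental': True,                 # merge into the existing configuration
    'handlers': {
        'console': {'level': 'DEBUG'},   # only the level setting is processed
    },
    'loggers': {
        'foo': {'level': 'DEBUG'},       # only level/propagate are processed
    },
}

logging.config.dictConfig(incremental_update)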
It’s certainly possible to provide incremental configuration by other
means, for example making dictConfig() take an incremental
keyword argument which defaults to False. The reason for
suggesting that a value in the configuration dict be used is that it
allows for configurations to be sent over the wire as pickled dicts
to a socket listener. Thus, the logging verbosity of a long-running
application can be altered over time with no need to stop and
restart the application.
Note: Feedback on incremental configuration needs based on your
practical experience will be particularly welcome.
API Customization
The bare-bones dictConfig() API will not be sufficient for all
use cases. Provision for customization of the API will be made by
providing the following:
A class, called DictConfigurator, whose constructor is passed
the dictionary used for configuration, and which has a
configure() method.
A callable, called dictConfigClass, which will (by default) be
set to DictConfigurator. This is provided so that if desired,
DictConfigurator can be replaced with a suitable user-defined
implementation.
The dictConfig() function will call dictConfigClass passing
the specified dictionary, and then call the configure() method on
the returned object to actually put the configuration into effect:
def dictConfig(config):
    dictConfigClass(config).configure()
This should cater to all customization needs. For example, a subclass
of DictConfigurator could call DictConfigurator.__init__() in
its own __init__(), then set up custom prefixes which would be
usable in the subsequent configure() call. The dictConfigClass
would be bound to the subclass, and then dictConfig() could be
called exactly as in the default, uncustomized state.
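A minimal sketch of this pattern follows, assuming the DictConfigurator
and dictConfigClass names end up exposed in logging.config as described:
import logging.config

class MyDictConfigurator(logging.config.DictConfigurator):
    """Illustrative subclass following the pattern described above."""
    def __init__(self, config):
        logging.config.DictConfigurator.__init__(self, config)
        # A real subclass might register custom prefixes here, for use
        # during the subsequent configure() call.

# Bind the factory to the subclass, then call dictConfig() as usual.
logging.config.dictConfigClass = MyDictConfigurator
logging.config.dictConfig({'version': 1})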
Change to Socket Listener Implementation
The existing socket listener implementation will be modified as
follows: when a configuration message is received, an attempt will be
made to deserialize to a dictionary using the json module. If this
step fails, the message will be assumed to be in the fileConfig format
and processed as before. If deserialization is successful, then
dictConfig() will be called to process the resulting dictionary.
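A hedged sketch of sending a JSON configuration to the listener follows;
it assumes the listener keeps the existing framing used for fileConfig
messages (a 4-byte big-endian length prefix) and listens on
logging.config.DEFAULT_LOGGING_CONFIG_PORT:
import json
import socket
import struct
import logging.config

config = {'version': 1, 'incremental': True,
          'loggers': {'foo': {'level': 'DEBUG'}}}
payload = json.dumps(config).encode('utf-8')

# Length-prefixed message, as assumed above.
sock = socket.create_connection(
    ('localhost', logging.config.DEFAULT_LOGGING_CONFIG_PORT))
sock.sendall(struct.pack('>L', len(payload)) + payload)
sock.close()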
Configuration Errors
If an error is encountered during configuration, the system will raise
a ValueError, TypeError, AttributeError or ImportError
with a suitably descriptive message. The following is a (possibly
incomplete) list of conditions which will raise an error:
A level which is not a string or which is a string not
corresponding to an actual logging level
A propagate value which is not a boolean
An id which does not have a corresponding destination
A non-existent handler id found during an incremental call
An invalid logger name
Inability to resolve to an internal or external object
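For instance, a caller might guard against one of these conditions as
follows (a sketch; the level value is deliberately invalid):
import logging.config

bad_config = {
    'version': 1,
    'root': {'level': 'NOT_A_LEVEL', 'handlers': []},
}

try:
    logging.config.dictConfig(bad_config)
except ValueError as exc:
    print('configuration rejected:', exc)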
Discussion in the community
The PEP has been announced on python-dev and python-list. While there
hasn’t been a huge amount of discussion, this is perhaps to be
expected for a niche topic.
Discussion threads on python-dev:
https://mail.python.org/pipermail/python-dev/2009-October/092695.html
https://mail.python.org/pipermail/python-dev/2009-October/092782.html
https://mail.python.org/pipermail/python-dev/2009-October/093062.html
And on python-list:
https://mail.python.org/pipermail/python-list/2009-October/1223658.html
https://mail.python.org/pipermail/python-list/2009-October/1224228.html
There have been some comments in favour of the proposal, no
objections to the proposal as a whole, and some questions and
objections about specific details. These are believed by the author
to have been addressed by making changes to the PEP.
Reference implementation
A reference implementation of the changes is available as a module
dictconfig.py with accompanying unit tests in test_dictconfig.py, at:
http://bitbucket.org/vinay.sajip/dictconfig
This incorporates all features other than the socket listener change.
Copyright
This document has been placed in the public domain.
| Final | PEP 391 – Dictionary-Based Configuration For Logging | Standards Track | This PEP describes a new way of configuring logging using a dictionary
to hold configuration information. |
PEP 392 – Python 3.2 Release Schedule
Author:
Georg Brandl <georg at python.org>
Status:
Final
Type:
Informational
Topic:
Release
Created:
30-Dec-2009
Python-Version:
3.2
Table of Contents
Abstract
Release Manager and Crew
3.2 Lifespan
Release Schedule
3.2 schedule
3.2.1 schedule
3.2.2 schedule
3.2.3 schedule
3.2.4 schedule
3.2.5 schedule (regression fix release)
3.2.6 schedule
Features for 3.2
Copyright
Abstract
This document describes the development and release schedule for the
Python 3.2 series. The schedule primarily concerns itself with PEP-sized
items.
Release Manager and Crew
3.2 Release Manager: Georg Brandl
Windows installers: Martin v. Loewis
Mac installers: Ronald Oussoren
Documentation: Georg Brandl
3.2 Lifespan
3.2 will receive bugfix updates approximately every 4-6 months for
approximately 18 months. After the release of 3.3.0 final (see PEP
398), a final 3.2 bugfix update will be released. After that,
security updates (source only) will be released until 5 years after
the release of 3.2 final, which was planned for February 2016.
As of 2016-02-20, Python 3.2.x reached end-of-life status. The final
source release was 3.2.6 in October 2014.
Release Schedule
3.2 schedule
3.2 alpha 1: August 1, 2010
3.2 alpha 2: September 6, 2010
3.2 alpha 3: October 12, 2010
3.2 alpha 4: November 16, 2010
3.2 beta 1: December 6, 2010
(No new features beyond this point.)
3.2 beta 2: December 20, 2010
3.2 candidate 1: January 16, 2011
3.2 candidate 2: January 31, 2011
3.2 candidate 3: February 14, 2011
3.2 final: February 20, 2011
3.2.1 schedule
3.2.1 beta 1: May 8, 2011
3.2.1 candidate 1: May 17, 2011
3.2.1 candidate 2: July 3, 2011
3.2.1 final: July 11, 2011
3.2.2 schedule
3.2.2 candidate 1: August 14, 2011
3.2.2 final: September 4, 2011
3.2.3 schedule
3.2.3 candidate 1: February 25, 2012
3.2.3 candidate 2: March 18, 2012
3.2.3 final: April 11, 2012
3.2.4 schedule
3.2.4 candidate 1: March 23, 2013
3.2.4 final: April 6, 2013
3.2.5 schedule (regression fix release)
3.2.5 final: May 13, 2013
– Only security releases after 3.2.5 –
3.2.6 schedule
3.2.6 candidate 1 (source-only release): October 4, 2014
3.2.6 final (source-only release): October 11, 2014
Features for 3.2
Note that PEP 3003 is in effect: no changes to language
syntax and no additions to the builtins may be made.
No large-scale changes have been recorded yet.
Copyright
This document has been placed in the public domain.
| Final | PEP 392 – Python 3.2 Release Schedule | Informational | This document describes the development and release schedule for the
Python 3.2 series. The schedule primarily concerns itself with PEP-sized
items. |
PEP 393 – Flexible String Representation
Author:
Martin von Löwis <martin at v.loewis.de>
Status:
Final
Type:
Standards Track
Created:
24-Jan-2010
Python-Version:
3.3
Post-History:
Table of Contents
Abstract
Rationale
Specification
String Creation
String Access
New API
Stable ABI
GDB Debugging Hooks
Deprecations, Removals, and Incompatibilities
Discussion
Performance
Porting Guidelines
References
Copyright
Abstract
The Unicode string type is changed to support multiple internal
representations, depending on the character with the largest Unicode
ordinal (1, 2, or 4 bytes). This will allow a space-efficient
representation in common cases, but give access to full UCS-4 on all
systems. For compatibility with existing APIs, several representations
may exist in parallel; over time, this compatibility should be phased
out. The distinction between narrow and wide Unicode builds is
dropped. An implementation of this PEP is available at [1].
Rationale
There are two classes of complaints about the current implementation
of the unicode type: on systems only supporting UTF-16, users complain
that non-BMP characters are not properly supported. On systems using
UCS-4 internally (and also sometimes on systems using UCS-2), there is
a complaint that Unicode strings take up too much memory - especially
compared to Python 2.x, where the same code would often use ASCII
strings (i.e. ASCII-encoded byte strings). With the proposed approach,
ASCII-only Unicode strings will again use only one byte per character;
while still allowing efficient indexing of strings containing non-BMP
characters (as strings containing them will use 4 bytes per
character).
One problem with the approach is support for existing applications
(e.g. extension modules). For compatibility, redundant representations
may be computed. Applications are encouraged to phase out reliance on
a specific internal representation if possible. As interaction with
other libraries will often require some sort of internal
representation, the specification chooses UTF-8 as the recommended way
of exposing strings to C code.
For many strings (e.g. ASCII), multiple representations may actually
share memory (e.g. the shortest form may be shared with the UTF-8 form
if all characters are ASCII). With such sharing, the overhead of
compatibility representations is reduced. If representations do share
data, it is also possible to omit structure fields, reducing the base
size of string objects.
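As a rough Python-level illustration of the intended effect (a sketch;
exact byte counts vary by platform and CPython version, so none are
shown here):
import sys

# Memory use grows with the widest character actually present.
print(sys.getsizeof('abcd'))            # ASCII only: 1 byte per character
print(sys.getsizeof('abc\u00e9'))       # Latin-1 range: still 1 byte per character
print(sys.getsizeof('abc\u20ac'))       # BMP character: 2 bytes per character
print(sys.getsizeof('abc\U0001f600'))   # non-BMP character: 4 bytes per character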
Specification
Unicode structures are now defined as a hierarchy of structures,
namely:
typedef struct {
    PyObject_HEAD
    Py_ssize_t length;
    Py_hash_t hash;
    struct {
        unsigned int interned:2;
        unsigned int kind:2;
        unsigned int compact:1;
        unsigned int ascii:1;
        unsigned int ready:1;
    } state;
    wchar_t *wstr;
} PyASCIIObject;

typedef struct {
    PyASCIIObject _base;
    Py_ssize_t utf8_length;
    char *utf8;
    Py_ssize_t wstr_length;
} PyCompactUnicodeObject;

typedef struct {
    PyCompactUnicodeObject _base;
    union {
        void *any;
        Py_UCS1 *latin1;
        Py_UCS2 *ucs2;
        Py_UCS4 *ucs4;
    } data;
} PyUnicodeObject;
Objects for which both size and maximum character are known at
creation time are called “compact” unicode objects; character data
immediately follow the base structure. If the maximum character is
less than 128, they use the PyASCIIObject structure, and the UTF-8
data, the UTF-8 length and the wstr length are the same as the length
of the ASCII data. For non-ASCII strings, the PyCompactUnicodeObject
structure is used. Resizing compact objects is not supported.
Objects for which the maximum character is not given at creation time
are called “legacy” objects, created through
PyUnicode_FromStringAndSize(NULL, length). They use the
PyUnicodeObject structure. Initially, their data is only in the wstr
pointer; when PyUnicode_READY is called, the data pointer (union) is
allocated. Resizing is possible as long as PyUnicode_READY has not been
called.
The fields have the following interpretations:
length: number of code points in the string (result of sq_length)
interned: interned-state (SSTATE_*) as in 3.2
kind: form of string
00 => str is not initialized (data are in wstr)
01 => 1 byte (Latin-1)
10 => 2 byte (UCS-2)
11 => 4 byte (UCS-4);
compact: the object uses one of the compact representations
(implies ready)
ascii: the object uses the PyASCIIObject representation
(implies compact and ready)
ready: the canonical representation is ready to be accessed through
PyUnicode_DATA and PyUnicode_GET_LENGTH. This is set either if the
object is compact, or the data pointer and length have been
initialized.
wstr_length, wstr: representation in platform’s wchar_t
(null-terminated). If wchar_t is 16-bit, this form may use surrogate
pairs (in which case wstr_length differs from length).
wstr_length differs from length only if there are surrogate pairs
in the representation.
utf8_length, utf8: UTF-8 representation (null-terminated).
data: shortest-form representation of the unicode string.
The string is null-terminated (in its respective representation).
All three representations are optional, although the data form is
considered the canonical representation which can be absent only
while the string is being created. If the representation is absent,
the pointer is NULL, and the corresponding length field may contain
arbitrary data.
The Py_UNICODE type is still supported but deprecated. It is always
defined as a typedef for wchar_t, so the wstr representation can double
as Py_UNICODE representation.
The data and utf8 pointers point to the same memory if the string uses
only ASCII characters (using only Latin-1 is not sufficient). The data
and wstr pointers point to the same memory if the string happens to
fit exactly to the wchar_t type of the platform (i.e. uses some
BMP-not-Latin-1 characters if sizeof(wchar_t) is 2, and uses some
non-BMP characters if sizeof(wchar_t) is 4).
String Creation
The recommended way to create a Unicode object is to use the function
PyUnicode_New:
PyObject* PyUnicode_New(Py_ssize_t size, Py_UCS4 maxchar);
Both parameters must denote the eventual size/range of the strings.
In particular, codecs using this API must compute both the number of
characters and the maximum character in advance. A string is
allocated according to the specified size and character range and is
null-terminated; the actual characters in it may be uninitialized.
PyUnicode_FromString and PyUnicode_FromStringAndSize remain supported
for processing UTF-8 input; the input is decoded, and the UTF-8
representation is not yet set for the string.
PyUnicode_FromUnicode remains supported but is deprecated. If the
Py_UNICODE pointer is non-null, the data representation is set. If the
pointer is NULL, a properly-sized wstr representation is allocated,
which can be modified until PyUnicode_READY() is called (explicitly
or implicitly). Resizing a Unicode string remains possible until it
is finalized.
PyUnicode_READY() converts a string containing only a wstr
representation into the canonical representation. Unless wstr and data
can share the memory, the wstr representation is discarded after the
conversion. The macro returns 0 on success and -1 on failure, which
happens in particular if the memory allocation fails.
String Access
The canonical representation can be accessed using two macros
PyUnicode_KIND and PyUnicode_DATA. PyUnicode_KIND gives one of the
values PyUnicode_WCHAR_KIND (0), PyUnicode_1BYTE_KIND (1),
PyUnicode_2BYTE_KIND (2), or PyUnicode_4BYTE_KIND (3). PyUnicode_DATA
gives the void pointer to the data. Access to individual characters
should use PyUnicode_{READ|WRITE}[_CHAR]:
PyUnicode_READ(kind, data, index)
PyUnicode_WRITE(kind, data, index, value)
PyUnicode_READ_CHAR(unicode, index)
All these macros assume that the string is in canonical form;
callers need to ensure this by calling PyUnicode_READY.
A new function PyUnicode_AsUTF8 is provided to access the UTF-8
representation. It is thus identical to the existing
_PyUnicode_AsString, which is removed. The function will compute the
utf8 representation when first called. Since this representation will
consume memory until the string object is released, applications
should use the existing PyUnicode_AsUTF8String where possible
(which generates a new string object every time). APIs that implicitly
convert a string to a char* (such as the ParseTuple functions) will
use PyUnicode_AsUTF8 to compute a conversion.
New API
This section summarizes the API additions.
Macros to access the internal representation of a Unicode object
(read-only):
PyUnicode_IS_COMPACT_ASCII(o), PyUnicode_IS_COMPACT(o),
PyUnicode_IS_READY(o)
PyUnicode_GET_LENGTH(o)
PyUnicode_KIND(o), PyUnicode_CHARACTER_SIZE(o),
PyUnicode_MAX_CHAR_VALUE(o)
PyUnicode_DATA(o), PyUnicode_1BYTE_DATA(o), PyUnicode_2BYTE_DATA(o),
PyUnicode_4BYTE_DATA(o)
Character access macros:
PyUnicode_READ(kind, data, index), PyUnicode_READ_CHAR(o, index)
PyUnicode_WRITE(kind, data, index, value)
Other macros:
PyUnicode_READY(o)
PyUnicode_CONVERT_BYTES(from_type, to_type, begin, end, to)
String creation functions:
PyUnicode_New(size, maxchar)
PyUnicode_FromKindAndData(kind, data, size)
PyUnicode_Substring(o, start, end)
Character access utility functions:
PyUnicode_GetLength(o), PyUnicode_ReadChar(o, index),
PyUnicode_WriteChar(o, index, character)
PyUnicode_CopyCharacters(to, to_start, from, from_start, how_many)
PyUnicode_FindChar(str, ch, start, end, direction)
Representation conversion:
PyUnicode_AsUCS4(o, buffer, buflen)
PyUnicode_AsUCS4Copy(o)
PyUnicode_AsUnicodeAndSize(o, size_out)
PyUnicode_AsUTF8(o)
PyUnicode_AsUTF8AndSize(o, size_out)
UCS4 utility functions:
Py_UCS4_{strlen, strcpy, strcat, strncpy, strcmp, strncmp, strchr, strrchr}
Stable ABI
The following functions are added to the stable ABI (PEP 384), as they
are independent of the actual representation of Unicode objects:
PyUnicode_New, PyUnicode_Substring, PyUnicode_GetLength,
PyUnicode_ReadChar, PyUnicode_WriteChar, PyUnicode_Find,
PyUnicode_FindChar.
GDB Debugging Hooks
Tools/gdb/libpython.py contains debugging hooks that embed knowledge
about the internals of CPython’s data types, including PyUnicodeObject
instances. It has been updated to track the change.
Deprecations, Removals, and Incompatibilities
While the Py_UNICODE representation and APIs are deprecated with this
PEP, no removal of the respective APIs is scheduled. The APIs should
remain available at least five years after the PEP is accepted; before
they are removed, existing extension modules should be studied to find
out whether a sufficient majority of the open-source code on PyPI has
been ported to the new API. A reasonable motivation for using the
deprecated API even in new code is code that needs to work on both
Python 2 and Python 3.
The following macros and functions are deprecated:
PyUnicode_FromUnicode
PyUnicode_GET_SIZE, PyUnicode_GetSize, PyUnicode_GET_DATA_SIZE,
PyUnicode_AS_UNICODE, PyUnicode_AsUnicode, PyUnicode_AsUnicodeAndSize
PyUnicode_COPY, PyUnicode_FILL, PyUnicode_MATCH
PyUnicode_Encode, PyUnicode_EncodeUTF7, PyUnicode_EncodeUTF8,
PyUnicode_EncodeUTF16, PyUnicode_EncodeUTF32,
PyUnicode_EncodeUnicodeEscape, PyUnicode_EncodeRawUnicodeEscape,
PyUnicode_EncodeLatin1, PyUnicode_EncodeASCII,
PyUnicode_EncodeCharmap, PyUnicode_TranslateCharmap,
PyUnicode_EncodeMBCS, PyUnicode_EncodeDecimal,
PyUnicode_TransformDecimalToASCII
Py_UNICODE_{strlen, strcat, strcpy, strcmp, strchr, strrchr}
PyUnicode_AsUnicodeCopy
PyUnicode_GetMax
_PyUnicode_AsDefaultEncodedString is removed. It previously returned a
borrowed reference to a UTF-8-encoded bytes object. Since the unicode
object can no longer cache such a reference, implementing it without
leaking memory is not possible. No deprecation phase is provided,
since it was an API for internal use only.
Extension modules using the legacy API may inadvertently call
PyUnicode_READY, by calling some API that requires that the object is
ready, and then continue accessing the (now invalid) Py_UNICODE
pointer. Such code will break with this PEP. The code was already
flawed in 3.2, as there was no explicit guarantee that the
PyUnicode_AS_UNICODE result would stay valid after an API call (due to
the possibility of string resizing). Modules that face this issue
need to re-fetch the Py_UNICODE pointer after API calls; doing
so will continue to work correctly in earlier Python versions.
Discussion
Several concerns have been raised about the approach presented here:
It makes the implementation more complex. That’s true, but considered
worth it given the benefits.
The Py_UNICODE representation is not instantaneously available,
slowing down applications that request it. While this is also true,
applications that care about this problem can be rewritten to use the
data representation.
Performance
Performance of this patch must be considered for both memory
consumption and runtime efficiency. For memory consumption, the
expectation is that applications that have many large strings will see
a reduction in memory usage. For small strings, the effects depend on
the pointer size of the system, and the size of the Py_UNICODE/wchar_t
type. The following table demonstrates this for various small ASCII
and Latin-1 string sizes and platforms.
string   Python 3.2                            This PEP
size     16-bit wchar_t     32-bit wchar_t     ASCII              Latin-1
         32-bit   64-bit    32-bit   64-bit    32-bit   64-bit    32-bit   64-bit
1          32       64        40       64        32       56        40       80
2          40       64        40       72        32       56        40       80
3          40       64        48       72        32       56        40       80
4          40       72        48       80        32       56        48       80
5          40       72        56       80        32       56        48       80
6          48       72        56       88        32       56        48       80
7          48       72        64       88        32       56        48       80
8          48       80        64       96        40       64        48       88
The runtime effect is significantly affected by the API being
used. After porting the relevant pieces of code to the new API,
the iobench, stringbench, and json benchmarks typically see
slowdowns of 1% to 30%; specific benchmarks may show speedups,
but also significantly larger slowdowns.
In actual measurements of a Django application ([2]), significant
reductions of memory usage could be found. For example, the storage
for Unicode objects reduced to 2216807 bytes, down from 6378540 bytes
for a wide Unicode build, and down from 3694694 bytes for a narrow
Unicode build (all on a 32-bit system). This reduction came from the
prevalence of ASCII strings in this application; out of 36,000 strings
(with 1,310,000 chars), 35713 were ASCII strings (with 1,300,000
chars). The sources for these strings were not further analysed;
many of them likely originate from identifiers in the library, and
string constants in Django’s source code.
In comparison to Python 2, both Unicode and byte strings need to be
accounted for. In the test application, Unicode and byte strings combined
had a length of 2,046,000 units (bytes/chars) in 2.x, and 2,200,000
units in 3.x. On a 32-bit system, where the 2.x build used 32-bit
wchar_t/Py_UNICODE, the 2.x test used 3,620,000 bytes, and the 3.x
build 3,340,000 bytes. This reduction in 3.x using the PEP compared
to 2.x only occurs when comparing with a wide unicode build.
Porting Guidelines
Only a small fraction of C code is affected by this PEP, namely code
that needs to look “inside” unicode strings. That code doesn’t
necessarily need to be ported to this API, as the existing API will
continue to work correctly. In particular, modules that need to
support both Python 2 and Python 3 might get too complicated when
simultaneously supporting this new API and the old Unicode API.
In order to port modules to the new API, try to eliminate
the use of these API elements:
the Py_UNICODE type,
PyUnicode_AS_UNICODE and PyUnicode_AsUnicode,
PyUnicode_GET_SIZE and PyUnicode_GetSize, and
PyUnicode_FromUnicode.
When iterating over an existing string, or looking at specific
characters, use indexing operations rather than pointer arithmetic;
indexing works well for PyUnicode_READ(_CHAR) and PyUnicode_WRITE. Use
void* as the buffer type for characters to let the compiler detect
invalid dereferencing operations. If you do want to use pointer
arithmetic (e.g. when converting existing code), use (unsigned)
char* as the buffer type, and keep the element size (1, 2, or 4) in a
variable. Notice that (1<<(kind-1)) will produce the element size
given a buffer kind.
When creating new strings, it was common in Python to start off with a
heuristic buffer size, and then grow or shrink if the heuristic
failed. With this PEP, this is now less practical, as you need a
heuristic not only for the length of the string, but also for the
maximum character.
In order to avoid heuristics, you need to make two passes over the
input: once to determine the output length, and the maximum character;
then allocate the target string with PyUnicode_New and iterate over
the input a second time to produce the final output. While this may
sound expensive, it could actually be cheaper than having to copy the
result again as in the following approach.
If you take the heuristical route, avoid allocating a string meant to
be resized, as resizing strings won’t work for their canonical
representation. Instead, allocate a separate buffer to collect the
characters, and then construct a unicode object from that using
PyUnicode_FromKindAndData. One option is to use Py_UCS4 as the buffer
element, assuming the worst case for character ordinals. This will
allow for pointer arithmetic, but may require a lot of memory.
Alternatively, start with a 1-byte buffer, and increase the element
size as you encounter larger characters. In any case,
PyUnicode_FromKindAndData will scan over the buffer to verify the
maximum character.
For common tasks, direct access to the string representation may not
be necessary: PyUnicode_Find, PyUnicode_FindChar, PyUnicode_Ord, and
PyUnicode_CopyCharacters help in analyzing and creating string
objects, operating on indexes instead of data pointers.
References
[1]
PEP 393 branch
https://bitbucket.org/t0rsten/pep-393
[2]
Django measurement results
https://web.archive.org/web/20160911215951/http://www.dcl.hpi.uni-potsdam.de/home/loewis/djmemprof/
Copyright
This document has been placed in the public domain.
| Final | PEP 393 – Flexible String Representation | Standards Track | The Unicode string type is changed to support multiple internal
representations, depending on the character with the largest Unicode
ordinal (1, 2, or 4 bytes). This will allow a space-efficient
representation in common cases, but give access to full UCS-4 on all
systems. For compatibility with existing APIs, several representations
may exist in parallel; over time, this compatibility should be phased
out. The distinction between narrow and wide Unicode builds is
dropped. An implementation of this PEP is available at [1]. |
PEP 394 – The “python” Command on Unix-Like Systems
Author:
Kerrick Staley <mail at kerrickstaley.com>,
Alyssa Coghlan <ncoghlan at gmail.com>,
Barry Warsaw <barry at python.org>,
Petr Viktorin <encukou at gmail.com>,
Miro Hrončok <miro at hroncok.cz>,
Carol Willing <willingc at gmail.com>
Status:
Active
Type:
Informational
Created:
02-Mar-2011
Post-History:
04-Mar-2011, 20-Jul-2011, 16-Feb-2012, 30-Sep-2014, 28-Apr-2018,
26-Jun-2019
Resolution:
Python-Dev message
Table of Contents
Abstract
Recommendation
For Python runtime distributors
For Python script publishers
For end users of Python
History of this PEP
Current Rationale
Future Changes to this Recommendation
Migration Notes
Backwards Compatibility
Application to the CPython Reference Interpreter
Impact on PYTHON* Environment Variables
Exclusion of MS Windows
References
Copyright
Abstract
This PEP outlines the behavior of Python scripts when the python command
is invoked.
Depending on a distribution or system configuration,
python may or may not be installed.
If python is installed its target interpreter may refer to python2
or python3.
End users may be unaware of this inconsistency across Unix-like systems.
This PEP’s goal is to reduce user confusion about what python references
and what will be the script’s behavior.
The recommendations in the next section of this PEP will outline the behavior
when:
using virtual environments
writing cross-platform scripts with shebangs for either python2 or python3
The PEP’s goal is to clarify the behavior for script end users, distribution
providers, and script maintainers / authors.
Recommendation
Our recommendations are detailed below.
We call out any expectations that these recommendations are based upon.
For Python runtime distributors
We expect Unix-like software distributions (including systems like macOS and
Cygwin) to install the python2 command into the default path
whenever a version of the Python 2 interpreter is installed, and the same
for python3 and the Python 3 interpreter.
When invoked, python2 should run some version of the Python 2
interpreter, and python3 should run some version of the Python 3
interpreter.
If the python command is installed, it is expected to invoke either
the same version of Python as the python3 command or as the python2
command.
Distributors may choose to set the behavior of the python command
as follows:
python2,
python3,
not provide python command,
allow python to be configurable by an end user or
a system administrator.
The Python 3.x idle, pydoc, and python-config commands should
likewise be available as idle3, pydoc3, and python3-config;
Python 2.x versions as idle2, pydoc2, and python2-config.
The commands with no version number should either invoke the same version
of Python as the python command, or not be available at all.
When packaging third party Python scripts, distributors are encouraged to
change less specific shebangs to more specific ones.
This ensures software is used with the latest version of Python available,
and it can remove a dependency on Python 2.
The details of which specific shebangs to set are left to the
distributors, though example specifics could include:
Changing python shebangs to python3 when Python 3.x is supported.
Changing python shebangs to python2 when Python 3.x is not yet
supported.
Changing python3 shebangs to python3.8 if the software is built
with Python 3.8.
When a virtual environment (created by the PEP 405 venv package or a
similar tool such as virtualenv or conda) is active, the python
command should refer to the virtual environment’s interpreter and should
always be available.
The python3 or python2 command (according to the environment’s
interpreter version) should also be available.
For Python script publishers
When reinvoking the interpreter from a Python script, querying
sys.executable to avoid hardcoded assumptions regarding the
interpreter location remains the preferred approach; a sketch of this
appears after this list.
Encourage your end users to use a virtual environment.
This makes the user’s environment more predictable (possibly resulting
in fewer issues), and helps avoid disrupting their system.
For scripts that are only expected to be run in an activated virtual
environment, shebang lines can be written as #!/usr/bin/env python,
as this instructs the script to respect the active virtual environment.
In cases where the script is expected to be executed outside virtual
environments, developers will need to be aware of the following
discrepancies across platforms and installation methods:
Older Linux distributions will provide a python command that
refers to Python 2, and will likely not provide a python2 command.
Some newer Linux distributions will provide a python command that
refers to Python 3.
Some Linux distributions will not provide a python command at
all by default, but will provide a python3 command by default.
When potentially targeting these environments, developers may either
use a Python package installation tool that rewrites shebang lines for
the installed environment, provide instructions on updating shebang lines
interactively, or else use more specific shebang lines that are
tailored to the target environment.
Scripts targeting both “old systems” and systems without the default
python command need to make a compromise and document this situation.
Avoiding shebangs (via the console_scripts Entry Points ([9]) or similar
means) is the recommended workaround for this problem.
Applications designed exclusively for a specific environment (such as
a container or virtual environment) may continue to use the python
command name.
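As a sketch of the sys.executable recommendation above (helper is a
hypothetical module name):
import subprocess
import sys

# Reinvoke the exact interpreter running this script, rather than relying
# on whatever 'python' happens to mean on the current system.
subprocess.check_call([sys.executable, '-m', 'helper'])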
For end users of Python
While far from being universally available, python remains the
preferred spelling for explicitly invoking Python, as this is the
spelling that virtual environments make consistently available
across different platforms and Python installations.
For software that is not distributed with (or developed for) your system,
we recommend using a virtual environment, possibly with an environment
manager like conda or pipenv, to help avoid disrupting your system
Python installation.
These recommendations are the outcome of the relevant python-dev discussions
in March and July 2011 ([1], [2]), February 2012 ([4]),
September 2014 ([6]), discussion on GitHub in April 2018 ([7]),
on python-dev in February 2019 ([8]), and during the PEP update review
in May/June 2019 ([10]).
History of this PEP
In 2011, the majority of distributions
aliased the python command to Python 2, but some started switching it to
Python 3 ([5]). As some of the former distributions did not provide a
python2 command by default, there was previously no way for Python 2 code
(or any code that invokes the Python 2 interpreter directly rather than via
sys.executable) to reliably run on all Unix-like systems without
modification, as the python command would invoke the wrong interpreter
version on some systems, and the python2 command would fail completely
on others. This PEP originally provided a very simple mechanism
to restore cross-platform support, with minimal additional work required
on the part of distribution maintainers. Simplified, the recommendation was:
The python command was preferred for code compatible with both
Python 2 and 3 (since it was available on all systems, even those that
already aliased it to Python 3).
The python command should always invoke Python 2 (to prevent
hard-to-diagnose errors when Python 2 code is run on Python 3).
The python2 and python3 commands should be available to specify
the version explicitly.
However, these recommendations implicitly assumed that Python 2 would always be
available. As Python 2 is nearing its end of life in 2020 (PEP 373, PEP 404),
distributions are making Python 2 optional or removing it entirely.
This means either removing the python command or switching it to invoke
Python 3. Some distributors also decided that their users were better served by
ignoring the PEP’s original recommendations, and provided system
administrators with the freedom to configure their systems based on
the needs of their particular environment.
Current Rationale
As of 2019, activating a Python virtual environment (or its functional
equivalent) prior to script execution is one way to obtain a consistent
cross-platform and cross-distribution experience.
Accordingly, publishers can expect users of the software to provide a suitable
execution environment.
Future Changes to this Recommendation
This recommendation will be periodically reviewed over the next few years,
and updated when the core development team judges it appropriate. As a
point of reference, regular maintenance releases for the Python 2.7 series
will continue until January 2020.
Migration Notes
This section does not contain any official recommendations from the core
CPython developers. It’s merely a collection of notes regarding various
aspects of migrating to Python 3 as the default version of Python for a
system. They will hopefully be helpful to any distributions considering
making such a change.
The main barrier to a distribution switching the python command from
python2 to python3 isn’t breakage within the distribution, but
instead breakage of private third party scripts developed by sysadmins
and other users. Updating the python command to invoke python3
by default indicates that a distribution is willing to break such scripts
with errors that are potentially quite confusing for users that aren’t
familiar with the backwards incompatible changes in Python 3. For
example, while the change of print from a statement to a builtin
function is relatively simple for automated converters to handle, the
SyntaxError from attempting to use the Python 2 notation in Python 3
may be confusing for users that are not aware of the change:
$ python3 -c 'print "Hello, world!"'
  File "<string>", line 1
    print "Hello, world!"
                        ^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print("Hello, world!")?
While this might be obvious for experienced Pythonistas, such scripts
might even be run by people who are not familiar with Python at all.
Avoiding breakage of such third party scripts was the key reason this
PEP used to recommend that python continue to refer to python2.
The error message python: command not found tends to be surprisingly
actionable, even for people unfamiliar with Python.
The pythonX.X (e.g. python3.6) commands exist on modern systems, on
which they invoke specific minor versions of the Python interpreter. It
can be useful for distribution-specific packages to take advantage of these
utilities if they exist, since it will prevent code breakage if the default
minor version of a given major version is changed. However, scripts
intending to be cross-platform should not rely on the presence of these
utilities, but rather should be tested on several recent minor versions of
the target major version, compensating, if necessary, for the small
differences that exist between minor versions. This prevents the need for
sysadmins to install many very similar versions of the interpreter.
When the pythonX.X binaries are provided by a distribution, the
python2 and python3 commands should refer to one of those files
rather than being provided as a separate binary file.
It is strongly encouraged that distribution-specific packages use python3
(or python2) rather than python, even in code that is not intended to
operate on other distributions. This will reduce problems if the
distribution later decides to change the version of the Python interpreter
that the python command invokes, or if a sysadmin installs a custom
python command with a different major version than the distribution
default.
If the above point is adhered to and sysadmins are permitted to change the
python command, then the python command should always be implemented
as a link to the interpreter binary (or a link to a link) and not vice
versa. That way, if a sysadmin does decide to replace the installed
python file, they can do so without inadvertently deleting the
previously installed binary.
Even as the Python 2 interpreter becomes less common, it remains reasonable
for scripts to continue to use the python3 convention, rather than just
python.
If these conventions are adhered to, it will become the case that the
python command is only executed in an interactive manner as a user
convenience, or else when using a virtual environment or similar mechanism.
Backwards Compatibility
A potential problem can arise if a script adhering to the
python2/python3 convention is executed on a system not supporting
these commands. This is mostly a non-issue, since the sysadmin can simply
create these symbolic links and avoid further problems. It is a significantly
more obvious breakage than the sometimes cryptic errors that can arise when
attempting to execute a script containing Python 2 specific syntax with a
Python 3 interpreter or vice versa.
Application to the CPython Reference Interpreter
While technically a new feature, the make install and make bininstall
commands in the 2.7 version of CPython were adjusted to create the
following chains of symbolic links in the relevant bin directory (the
final item listed in the chain is the actual installed binary, preceding
items are relative symbolic links):
python -> python2 -> python2.7
python-config -> python2-config -> python2.7-config
Similar adjustments were made to the macOS binary installer.
This feature first appeared in the default installation process in
CPython 2.7.3.
The installation commands in the CPython 3.x series already create the
appropriate symlinks. For example, CPython 3.2 creates:
python3 -> python3.2
idle3 -> idle3.2
pydoc3 -> pydoc3.2
python3-config -> python3.2-config
And CPython 3.3 creates:
python3 -> python3.3
idle3 -> idle3.3
pydoc3 -> pydoc3.3
python3-config -> python3.3-config
pysetup3 -> pysetup3.3
The implementation progress of these features in the default installers was
managed on the tracker as issue #12627 ([3]).
Impact on PYTHON* Environment Variables
The choice of target for the python command implicitly affects a
distribution’s expected interpretation of the various Python related
environment variables. The use of *.pth files in the relevant
site-packages folder, the “per-user site packages” feature (see
python -m site) or more flexible tools such as virtualenv are all more
tolerant of the presence of multiple versions of Python on a system than the
direct use of PYTHONPATH.
Exclusion of MS Windows
This PEP deliberately excludes any proposals relating to Microsoft Windows, as
devising an equivalent solution for Windows was deemed too complex to handle
here. PEP 397 and the related discussion on the python-dev mailing list
address this issue.
References
[1]
Support the /usr/bin/python2 symlink upstream (with bonus grammar class!)
(https://mail.python.org/pipermail/python-dev/2011-March/108491.html)
[2]
Rebooting PEP 394 (aka Support the /usr/bin/python2 symlink upstream)
(https://mail.python.org/pipermail/python-dev/2011-July/112322.html)
[3]
Implement PEP 394 in the CPython Makefile
(http://bugs.python.org/issue12627)
[4]
PEP 394 request for pronouncement (python2 symlink in *nix systems)
(https://mail.python.org/pipermail/python-dev/2012-February/116435.html)
[5]
Arch Linux announcement that their “python” link now refers Python 3
(https://www.archlinux.org/news/python-is-now-python-3/)
[6]
PEP 394 - Clarification of what “python” command should invoke
(https://mail.python.org/pipermail/python-dev/2014-September/136374.html)
[7]
PEP 394: Allow the python command to not be installed, and other
minor edits
(https://github.com/python/peps/pull/630)
[8]
Another update for PEP 394 – The “python” Command on Unix-Like Systems
(https://mail.python.org/pipermail/python-dev/2019-February/156272.html)
[9]
The console_scripts Entry Point
(https://python-packaging.readthedocs.io/en/latest/command-line-scripts.html#the-console-scripts-entry-point)
[10]
May 2019 PEP update review
(https://github.com/python/peps/pull/989)
Copyright
This document has been placed in the public domain.
| Active | PEP 394 – The “python” Command on Unix-Like Systems | Informational | This PEP outlines the behavior of Python scripts when the python command
is invoked.
Depending on a distribution or system configuration,
python may or may not be installed.
If python is installed its target interpreter may refer to python2
or python3.
End users may be unaware of this inconsistency across Unix-like systems.
This PEP’s goal is to reduce user confusion about what python references
and what will be the script’s behavior. |
PEP 395 – Qualified Names for Modules
Author:
Alyssa Coghlan <ncoghlan at gmail.com>
Status:
Withdrawn
Type:
Standards Track
Created:
04-Mar-2011
Python-Version:
3.4
Post-History:
05-Mar-2011, 19-Nov-2011
Table of Contents
PEP Withdrawal
Abstract
Relationship with Other PEPs
What’s in a __name__?
Traps for the Unwary
Why are my imports broken?
Importing the main module twice
In a bit of a pickle
Where’s the source?
Forkless Windows
Qualified Names for Modules
Alternative Names
Eliminating the Traps
Fixing main module imports inside packages
Optional addition: command line relative imports
Compatibility with PEP 382
Incompatibility with PEP 402
Potential incompatibilities with scripts stored in packages
Fixing dual imports of the main module
Fixing pickling without breaking introspection
Fixing multiprocessing on Windows
Explicit relative imports
Reference Implementation
References
Copyright
PEP Withdrawal
This PEP was withdrawn by the author in December 2013, as other significant
changes in the time since it was written have rendered several aspects
obsolete. Most notably PEP 420 namespace packages rendered some of the
proposals related to package detection unworkable and PEP 451 module
specifications resolved the multiprocessing issues and provide a possible
means to tackle the pickle compatibility issues.
A future PEP to resolve the remaining issues would still be appropriate,
but it’s worth starting any such effort as a fresh PEP restating the
remaining problems in an updated context rather than trying to build on
this one directly.
Abstract
This PEP proposes new mechanisms that eliminate some longstanding traps for
the unwary when dealing with Python’s import system, as well as serialisation
and introspection of functions and classes.
It builds on the “Qualified Name” concept defined in PEP 3155.
Relationship with Other PEPs
Most significantly, this PEP is currently deferred as it requires
significant changes in order to be made compatible with the removal
of mandatory __init__.py files in PEP 420 (which has been implemented
and released in Python 3.3).
This PEP builds on the “qualified name” concept introduced by PEP 3155, and
also shares in that PEP’s aim of fixing some ugly corner cases when dealing
with serialisation of arbitrary functions and classes.
It also builds on PEP 366, which took initial tentative steps towards making
explicit relative imports from the main module work correctly in at least
some circumstances.
Finally, PEP 328 eliminated implicit relative imports from imported modules.
This PEP proposes that the de facto implicit relative imports from main
modules that are provided by the current initialisation behaviour for
sys.path[0] also be eliminated.
What’s in a __name__?
Over time, a module’s __name__ attribute has come to be used to handle a
number of different tasks.
The key use cases identified for this module attribute are:
Flagging the main module in a program, using the if __name__ ==
"__main__": convention.
As the starting point for relative imports
To identify the location of function and class definitions within the
running application
To identify the location of classes for serialisation into pickle objects
which may be shared with other interpreter instances
Traps for the Unwary
The overloading of the semantics of __name__, along with some historically
associated behaviour in the initialisation of sys.path[0], has resulted in
several traps for the unwary. These traps can be quite annoying in practice,
as they are highly unobvious (especially to beginners) and can cause quite
confusing behaviour.
Why are my imports broken?
There’s a general principle that applies when modifying sys.path: never
put a package directory directly on sys.path. The reason this is
problematic is that every module in that directory is now potentially
accessible under two different names: as a top level module (since the
package directory is on sys.path) and as a submodule of the package (if
the higher level directory containing the package itself is also on
sys.path).
As an example, Django (up to and including version 1.3) is guilty of setting
up exactly this situation for site-specific applications - the application
ends up being accessible as both app and site.app in the module
namespace, and these are actually two different copies of the module. This
is a recipe for confusion if there is any meaningful mutable module level
state, so this behaviour is being eliminated from the default site set up in
version 1.4 (site-specific apps will always be fully qualified with the site
name).
However, it’s hard to blame Django for this, when the same part of Python
responsible for setting __name__ = "__main__" in the main module commits
the exact same error when determining the value for sys.path[0].
The impact of this can be seen relatively frequently if you follow the
“python” and “import” tags on Stack Overflow. When I had the time to follow
it myself, I regularly encountered people struggling to understand the
behaviour of straightforward package layouts like the following (I actually
use package layouts along these lines in my own projects):
project/
    setup.py
    example/
        __init__.py
        foo.py
        tests/
            __init__.py
            test_foo.py
While I would often see it without the __init__.py files first, that’s a
trivial fix to explain. What’s hard to explain is that all of the following
ways to invoke test_foo.py probably won’t work due to broken imports
(either failing to find example for absolute imports, complaining
about relative imports in a non-package or beyond the toplevel package for
explicit relative imports, or issuing even more obscure errors if some other
submodule happens to shadow the name of a top-level module, such as an
example.json module that handled serialisation or an
example.tests.unittest test runner):
# These commands will most likely *FAIL*, even if the code is correct
# working directory: project/example/tests
./test_foo.py
python test_foo.py
python -m example.tests.test_foo
python -c "from example.tests.test_foo import main; main()"
# working directory: project/example
tests/test_foo.py
python tests/test_foo.py
python -m example.tests.test_foo
python -c "from example.tests.test_foo import main; main()"
# working directory: project
example/tests/test_foo.py
python example/tests/test_foo.py
# working directory: project/..
project/example/tests/test_foo.py
python project/example/tests/test_foo.py
# The -m and -c approaches don't work from here either, but the failure
# to find 'example' correctly is easier to explain in this case
That’s right, that long list is of all the methods of invocation that will
almost certainly break if you try them, and the error messages won’t make
any sense if you’re not already intimately familiar not only with the way
Python’s import system works, but also with how it gets initialised.
For a long time, the only way to get sys.path right with that kind of
setup was to either set it manually in test_foo.py itself (hardly
something a novice, or even many veteran, Python programmers are going to
know how to do) or else to make sure to import the module instead of
executing it directly:
# working directory: project
python -c "from package.tests.test_foo import main; main()"
Since the implementation of PEP 366 (which defined a mechanism that allows
relative imports to work correctly when a module inside a package is executed
via the -m switch), the following also works properly:
# working directory: project
python -m example.tests.test_foo
The fact that most methods of invoking Python code from the command line
break when that code is inside a package, and the two that do work are highly
sensitive to the current working directory is all thoroughly confusing for a
beginner. I personally believe it is one of the key factors leading
to the perception that Python packages are complicated and hard to get right.
This problem isn’t even limited to the command line - if test_foo.py is
open in Idle and you attempt to run it by pressing F5, or if you try to run
it by clicking on it in a graphical filebrowser, then it will fail in just
the same way it would if run directly from the command line.
There’s a reason the general “no package directories on sys.path”
guideline exists, and the fact that the interpreter itself doesn’t follow
it when determining sys.path[0] is the root cause of all sorts of grief.
In the past, this couldn’t be fixed due to backwards compatibility concerns.
However, scripts potentially affected by this problem will already require
fixes when porting to Python 3.x (due to the elimination of implicit
relative imports when importing modules normally). This provides a convenient
opportunity to implement a corresponding change in the initialisation
semantics for sys.path[0].
Importing the main module twice
Another venerable trap is the issue of importing __main__ twice. This
occurs when the main module is also imported under its real name, effectively
creating two instances of the same module under different names.
If the state stored in __main__ is significant to the correct operation
of the program, or if there is top-level code in the main module that has
non-idempotent side effects, then this duplication can cause obscure and
surprising errors.
In a bit of a pickle
Something many users may not realise is that the pickle module sometimes
relies on the __module__ attribute when serialising instances of arbitrary
classes. So instances of classes defined in __main__ are pickled that way,
and won’t be unpickled correctly by another python instance that only imported
that module instead of running it directly. This behaviour is the underlying
reason for the advice from many Python veterans to do as little as possible
in the __main__ module in any application that involves any form of
object serialisation and persistence.
Similarly, when creating a pseudo-module (see next paragraph), pickles rely
on the name of the module where a class is actually defined, rather than the
officially documented location for that class in the module hierarchy.
For the purposes of this PEP, a “pseudo-module” is a package designed like
the Python 3.2 unittest and concurrent.futures packages. These
packages are documented as if they were single modules, but are in fact
internally implemented as a package. This is supposed to be an
implementation detail that users and other implementations don’t need to
worry about, but, thanks to pickle (and serialisation in general),
the details are often exposed and can effectively become part of the public
API.
While this PEP focuses specifically on pickle as the principal
serialisation scheme in the standard library, this issue may also affect
other mechanisms that support serialisation of arbitrary class instances
and rely on __module__ attributes to determine how to handle
deserialisation.
Where’s the source?
Some sophisticated users of the pseudo-module technique described
above recognise the problem with implementation details leaking out via the
pickle module, and choose to address it by altering __name__ to refer
to the public location for the module before defining any functions or classes
(or else by modifying the __module__ attributes of those objects after
they have been defined).
This approach is effective at eliminating the leakage of information via
pickling, but comes at the cost of breaking introspection for functions and
classes (as their __module__ attribute now points to the wrong place).
Forkless Windows
To get around the lack of os.fork on Windows, the multiprocessing
module attempts to re-execute Python with the same main module, but skipping
over any code guarded by if __name__ == "__main__": checks. It does the
best it can with the information it has, but is forced to make assumptions
that simply aren’t valid whenever the main module isn’t an ordinary directly
executed script or top-level module. Packages and non-top-level modules
executed via the -m switch, as well as directly executed zipfiles or
directories, are likely to make multiprocessing on Windows do the wrong thing
(either quietly or noisily, depending on application details) when spawning a
new process.
While this issue currently only affects Windows directly, it also impacts
any proposals to provide Windows-style “clean process” invocation via the
multiprocessing module on other platforms.
Qualified Names for Modules
To make it feasible to fix these problems once and for all, it is proposed
to add a new module level attribute: __qualname__. This abbreviation of
“qualified name” is taken from PEP 3155, where it is used to store the naming
path to a nested class or function definition relative to the top level
module.
For modules, __qualname__ will normally be the same as __name__, just
as it is for top-level functions and classes in PEP 3155. However, it will
differ in some situations so that the above problems can be addressed.
Specifically, whenever __name__ is modified for some other purpose (such
as to denote the main module), then __qualname__ will remain unchanged,
allowing code that needs it to access the original unmodified value.
If a module loader does not initialise __qualname__ itself, then the
import system will add it automatically (setting it to the same value as
__name__).
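As a hedged sketch of how code might take advantage of this (the attribute is
proposed by this PEP, not an existing one):
import sys

def real_module_name(module):
    # Prefer the proposed __qualname__, falling back to __name__ on
    # interpreters that do not provide it.
    return getattr(module, "__qualname__", module.__name__)

print(real_module_name(sys.modules["__main__"]))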
Alternative Names
Two alternative names were also considered for the new attribute: “full name”
(__fullname__) and “implementation name” (__implname__).
Either of those would actually be valid for the use case in this PEP.
However, as a meta-issue, PEP 3155 is also adding a new attribute (for
functions and classes) that is “like __name__, but different in some cases
where __name__ is missing necessary information” and those terms aren’t
accurate for the PEP 3155 function and class use case.
PEP 3155 deliberately omits the module information, so the term “full name”
is simply untrue, and “implementation name” implies that it may specify an
object other than that specified by __name__, and that is never the
case for PEP 3155 (in that PEP, __name__ and __qualname__ always
refer to the same function or class, it’s just that __name__ is
insufficient to accurately identify nested functions and classes).
Since it seems needlessly inconsistent to add two new terms for attributes
that only exist because backwards compatibility concerns keep us from
changing the behaviour of __name__ itself, this PEP instead chose to
adopt the PEP 3155 terminology.
If the relative inscrutability of “qualified name” and __qualname__
encourages interested developers to look them up at least once rather than
assuming they know what they mean just from the name and guessing wrong,
that’s not necessarily a bad outcome.
Besides, 99% of Python developers should never need to even care these extra
attributes exist - they’re really an implementation detail to let us fix a
few problematic behaviours exhibited by imports, pickling and introspection,
not something people are going to be dealing with on a regular basis.
Eliminating the Traps
The following changes are interrelated and make the most sense when
considered together. They collectively either completely eliminate the traps
for the unwary noted above, or else provide straightforward mechanisms for
dealing with them.
A rough draft of some of the concepts presented here was first posted on the
python-ideas list ([1]), but they have evolved considerably since first being
discussed in that thread. Further discussion has subsequently taken place on
the import-sig mailing list ([2], [3]).
Fixing main module imports inside packages
To eliminate this trap, it is proposed that an additional filesystem check be
performed when determining a suitable value for sys.path[0]. This check
will look for Python’s explicit package directory markers and use them to find
the appropriate directory to add to sys.path.
The current algorithm for setting sys.path[0] in relevant cases is roughly
as follows:
# Interactive prompt, -m switch, -c switch
sys.path.insert(0, '')
# Valid sys.path entry execution (i.e. directory and zip execution)
sys.path.insert(0, sys.argv[0])
# Direct script execution
sys.path.insert(0, os.path.dirname(sys.argv[0]))
It is proposed that this initialisation process be modified to take
package details stored on the filesystem into account:
# Interactive prompt, -m switch, -c switch
in_package, path_entry, _ignored = split_path_module(os.getcwd(), '')
if in_package:
    sys.path.insert(0, path_entry)
else:
    sys.path.insert(0, '')
# Start interactive prompt or run -c command as usual
# __main__.__qualname__ is set to "__main__"

# The -m switch uses the same sys.path[0] calculation, but:
# modname is the argument to the -m switch
# modname is passed to ``runpy._run_module_as_main()`` as usual
# __main__.__qualname__ is set to modname

# Valid sys.path entry execution (i.e. directory and zip execution)
modname = "__main__"
_ignored, path_entry, modname = split_path_module(sys.argv[0], modname)
sys.path.insert(0, path_entry)
# modname (possibly adjusted) is passed to ``runpy._run_module_as_main()``
# __main__.__qualname__ is set to modname

# Direct script execution
in_package, path_entry, modname = split_path_module(sys.argv[0])
sys.path.insert(0, path_entry)
if in_package:
    # Pass modname to ``runpy._run_module_as_main()``
else:
    # Run script directly
# __main__.__qualname__ is set to modname
The split_path_module() supporting function used in the above pseudo-code
would have the following semantics:
def _splitmodname(fspath):
    path_entry, fname = os.path.split(fspath)
    modname = os.path.splitext(fname)[0]
    return path_entry, modname

def _is_package_dir(fspath):
    return any(os.path.exists(os.path.join(fspath, "__init__" + info[0]))
                   for info in imp.get_suffixes())

def split_path_module(fspath, modname=None):
    """Given a filesystem path and a relative module name, determine an
       appropriate sys.path entry and a fully qualified module name.

       Returns a 3-tuple of (package_depth, fspath, modname). A reported
       package depth of 0 indicates that this would be a top level import.

       If no relative module name is given, it is derived from the final
       component in the supplied path with the extension stripped.
    """
    if modname is None:
        fspath, modname = _splitmodname(fspath)
    package_depth = 0
    while _is_package_dir(fspath):
        package_depth += 1
        fspath, pkg = _splitmodname(fspath)
        modname = pkg + '.' + modname
    return package_depth, fspath, modname
This PEP also proposes that the split_path_module() functionality be
exposed directly to Python users via the runpy module.
With this fix in place, and the same simple package layout described earlier,
all of the following commands would invoke the test suite correctly:
# working directory: project/package/tests
./test_foo.py
python test_foo.py
python -m package.tests.test_foo
python -c "from .test_foo import main; main()"
python -c "from ..tests.test_foo import main; main()"
python -c "from package.tests.test_foo import main; main()"

# working directory: project/package
tests/test_foo.py
python tests/test_foo.py
python -m package.tests.test_foo
python -c "from .tests.test_foo import main; main()"
python -c "from package.tests.test_foo import main; main()"

# working directory: project
package/tests/test_foo.py
python package/tests/test_foo.py
python -m package.tests.test_foo
python -c "from package.tests.test_foo import main; main()"

# working directory: project/..
project/package/tests/test_foo.py
python project/package/tests/test_foo.py
# The -m and -c approaches still don't work from here, but the failure
# to find 'package' correctly is pretty easy to explain in this case
With these changes, clicking Python modules in a graphical file browser
should always execute them correctly, even if they live inside a package.
Depending on the details of how it invokes the script, Idle would likely also
be able to run test_foo.py correctly with F5, without needing any Idle
specific fixes.
Optional addition: command line relative imports
With the above changes in place, it would be a fairly minor addition to allow
explicit relative imports as arguments to the -m switch:
# working directory: project/example/tests
python -m .test_foo
python -m ..tests.test_foo
# working directory: project/example/
python -m .tests.test_foo
With this addition, system initialisation for the -m switch would change
as follows:
# -m switch (permitting explicit relative imports)
in_package, path_entry, pkg_name = split_path_module(os.getcwd(), '')
qualname = <<arguments to -m switch>>
if qualname.startswith('.'):
    modname = qualname
    while modname.startswith('.'):
        modname = modname[1:]
        pkg_name, sep, _ignored = pkg_name.rpartition('.')
        if not sep:
            raise ImportError("Attempted relative import beyond top level package")
    qualname = pkg_name + '.' + modname
if in_package:
    sys.path.insert(0, path_entry)
else:
    sys.path.insert(0, '')
# qualname is passed to ``runpy._run_module_as_main()``
# __main__.__qualname__ is set to qualname
Compatibility with PEP 382
Making this proposal compatible with the PEP 382 namespace packaging PEP is
trivial. The semantics of _is_package_dir() are merely changed to be:
def _is_package_dir(fspath):
    return (fspath.endswith(".pyp") or
            any(os.path.exists(os.path.join(fspath, "__init__" + info[0]))
                    for info in imp.get_suffixes()))
Incompatibility with PEP 402
PEP 402 proposes the elimination of explicit markers in the file system for
Python packages. This fundamentally breaks the proposed concept of being able
to take a filesystem path and a Python module name and work out an unambiguous
mapping to the Python module namespace. Instead, the appropriate mapping
would depend on the current values in sys.path, rendering it impossible
to ever fix the problems described above with the calculation of
sys.path[0] when the interpreter is initialised.
While some aspects of this PEP could probably be salvaged if PEP 402 were
adopted, the core concept of making import semantics from main and other
modules more consistent would no longer be feasible.
This incompatibility is discussed in more detail in the relevant import-sig
threads ([2], [3]).
Potential incompatibilities with scripts stored in packages
The proposed change to sys.path[0] initialisation may break some
existing code. Specifically, it will break scripts stored in package
directories that rely on the implicit relative imports from __main__ in
order to run correctly under Python 3.
While such scripts could be imported in Python 2 (due to implicit relative
imports) it is already the case that they cannot be imported in Python 3,
as implicit relative imports are no longer permitted when a module is
imported.
By disallowing implicit relative imports from the main module as well,
such modules won’t even work as scripts with this PEP. Switching them
over to explicit relative imports will then get them working again as
both executable scripts and as importable modules.
To support earlier versions of Python, a script could be written to use
different forms of import based on the Python version:
if __name__ == "__main__" and sys.version_info < (3, 3):
import peer # Implicit relative import
else:
from . import peer # explicit relative import
Fixing dual imports of the main module
Given the above proposal to get __qualname__ consistently set correctly
in the main module, one simple change is proposed to eliminate the problem
of dual imports of the main module: the addition of a sys.meta_path hook
that detects attempts to import __main__ under its real name and returns
the original main module instead:
class AliasImporter:
    def __init__(self, module, alias):
        self.module = module
        self.alias = alias

    def __repr__(self):
        fmt = "{0.__class__.__name__}({0.module.__name__}, {0.alias})"
        return fmt.format(self)

    def find_module(self, fullname, path=None):
        if path is None and fullname == self.alias:
            return self
        return None

    def load_module(self, fullname):
        if fullname != self.alias:
            raise ImportError("{!r} cannot load {!r}".format(self, fullname))
        return self.module
This sys.meta_path hook would be added automatically during import system
initialisation based on the following logic:
main = sys.modules["__main__"]
if main.__name__ != main.__qualname__:
    sys.meta_path.append(AliasImporter(main, main.__qualname__))
This is probably the least important proposal in the PEP - it just
closes off the last mechanism that is likely to lead to module duplication
after the configuration of sys.path[0] at interpreter startup is
addressed.
Fixing pickling without breaking introspection
To fix this problem, it is proposed to make use of the new module level
__qualname__ attributes to determine the real module location when
__name__ has been modified for any reason.
In the main module, __qualname__ will automatically be set to the main
module’s “real” name (as described above) by the interpreter.
Pseudo-modules that adjust __name__ to point to the public namespace will
leave __qualname__ untouched, so the implementation location remains readily
accessible for introspection.
If __name__ is adjusted at the top of a module, then this will
automatically adjust the __module__ attribute for all functions and
classes subsequently defined in that module.
Since multiple submodules may be set to use the same “public” namespace,
functions and classes will be given a new __qualmodule__ attribute
that refers to the __qualname__ of their module.
This isn’t strictly necessary for functions (you could find out their
module’s qualified name by looking in their globals dictionary), but it is
needed for classes, since they don’t hold a reference to the globals of
their defining module. Once a new attribute is added to classes, it is
more convenient to keep the API consistent and add a new attribute to
functions as well.
These changes mean that adjusting __name__ (and, either directly or
indirectly, the corresponding function and class __module__ attributes)
becomes the officially sanctioned way to implement a namespace as a package,
while exposing the API as if it were still a single module.
All serialisation code that currently uses __name__ and __module__
attributes will then avoid exposing implementation details by default.
To correctly handle serialisation of items from the main module, the class
and function definition logic will be updated to also use __qualname__
for the __module__ attribute in the case where __name__ == "__main__".
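A hedged sketch of that rule, written as plain Python for exposition (the real
change would live in the interpreter's class and function creation machinery):
def choose_module_attr(module_globals):
    # Decide what __module__ a newly defined class or function should get.
    name = module_globals.get("__name__")
    if name == "__main__":
        # Use the real module name so pickled objects remain loadable by
        # processes that import the module normally.
        return module_globals.get("__qualname__", name)
    return name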
With __name__ and __module__ being officially blessed as being used
for the public names of things, the introspection tools in the standard
library will be updated to use __qualname__ and __qualmodule__
where appropriate. For example:
pydoc will report both public and qualified names for modules
inspect.getsource() (and similar tools) will use the qualified names
that point to the implementation of the code
additional pydoc and/or inspect APIs may be provided that report
all modules with a given public __name__.
Fixing multiprocessing on Windows
With __qualname__ now available to tell multiprocessing the real
name of the main module, it will be able to simply include it in the
serialised information passed to the child process, eliminating the
need for the current dubious introspection of the __file__ attribute.
For older Python versions, multiprocessing could be improved by applying
the split_path_module() algorithm described above when attempting to
work out how to execute the main module based on its __file__ attribute.
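For example, a hedged sketch of that fallback (assuming the
split_path_module() helper proposed earlier were available to
multiprocessing):
import sys

main = sys.modules["__main__"]
# Recover the real module name of the main module from its location on disk.
_depth, path_entry, modname = split_path_module(main.__file__)
# path_entry would be added to sys.path in the child process and modname
# imported there, instead of re-running the file as a fresh __main__.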
Explicit relative imports
This PEP proposes that __package__ be unconditionally defined in the
main module as __qualname__.rpartition('.')[0]. Aside from that, it
proposes that the behaviour of explicit relative imports be left alone.
In particular, if __package__ is not set in a module when an explicit
relative import occurs, the automatically cached value will continue to be
derived from __name__ rather than __qualname__. This minimises any
backwards incompatibilities with existing code that deliberately manipulates
relative imports by adjusting __name__ rather than setting __package__
directly.
This PEP does not propose that __package__ be deprecated. While it is
technically redundant following the introduction of __qualname__, it just
isn’t worth the hassle of deprecating it within the lifetime of Python 3.x.
Reference Implementation
None as yet.
References
[1]
Module aliases and/or “real names”
[2] (1, 2)
PEP 395 (Module aliasing) and the namespace PEPs
[3] (1, 2)
Updated PEP 395 (aka “Implicit Relative Imports Must Die!”)
Elaboration of compatibility problems between this PEP and PEP 402
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 395 – Qualified Names for Modules | Standards Track | This PEP proposes new mechanisms that eliminate some longstanding traps for
the unwary when dealing with Python’s import system, as well as serialisation
and introspection of functions and classes. |
PEP 396 – Module Version Numbers
Author:
Barry Warsaw <barry at python.org>
Status:
Rejected
Type:
Informational
Topic:
Packaging
Created:
16-Mar-2011
Post-History:
05-Apr-2011
Table of Contents
Abstract
PEP Rejection
User Stories
Rationale
Specification
Examples
Deriving
Classic distutils
Distutils2
PEP 376 metadata
References
Copyright
Abstract
Given that it is useful and common to specify version numbers for
Python modules, and given that different ways of doing this have grown
organically within the Python community, it is useful to establish
standard conventions for module authors to adhere to and reference.
This informational PEP describes best practices for Python module
authors who want to define the version number of their Python module.
Conformance with this PEP is optional, however other Python tools
(such as distutils2 [1]) may be adapted to use the conventions
defined here.
PEP Rejection
This PEP was formally rejected on 2021-04-14. The packaging ecosystem
has changed significantly in the intervening years since this PEP was
first written, and APIs such as importlib.metadata.version() [11]
provide for a much better experience.
User Stories
Alice is writing a new module, called alice, which she wants to
share with other Python developers. alice is a simple module and
lives in one file, alice.py. Alice wants to specify a version
number so that her users can tell which version they are using.
Because her module lives entirely in one file, she wants to add the
version number to that file.
Bob has written a module called bob which he has shared with many
users. bob.py contains a version number for the convenience of
his users. Bob learns about the Cheeseshop [2], and adds some simple
packaging using classic distutils so that he can upload The Bob
Bundle to the Cheeseshop. Because bob.py already specifies a
version number which his users can access programmatically, he wants
the same API to continue to work even though his users now get it from
the Cheeseshop.
Carol maintains several namespace packages, each of which are
independently developed and distributed. In order for her users to
properly specify dependencies on the right versions of her packages,
she specifies the version numbers in the namespace package’s
setup.py file. Because Carol wants to only have to update one version
number per package, she specifies the version number in her module and
has the setup.py extract the module version number when she builds
the sdist archive.
David maintains a package in the standard library, and also produces
standalone versions for other versions of Python. The standard
library copy defines the version number in the module, and this same
version number is used for the standalone distributions as well.
Rationale
Python modules, both in the standard library and available from third
parties, have long included version numbers. There are established
de facto standards for describing version numbers, and many ad-hoc
ways have grown organically over the years. Often, version numbers
can be retrieved from a module programmatically, by importing the
module and inspecting an attribute. Classic Python distutils
setup() functions [3] describe a version argument where the
release’s version number can be specified. PEP 8 describes the
use of a module attribute called __version__ for recording
“Subversion, CVS, or RCS” version strings using keyword expansion. In
the PEP author’s own email archives, the earliest example of the use
of an __version__ module attribute by independent module
developers dates back to 1995.
Another example of version information is the sqlite3 [5] module
with its sqlite_version_info, version, and version_info
attributes. It may not be immediately obvious which attribute
contains a version number for the module, and which contains a version
number for the underlying SQLite3 library.
This informational PEP codifies established practice, and recommends
standard ways of describing module version numbers, along with some
use cases for when – and when not – to include them. Its adoption
by module authors is purely voluntary; packaging tools in the standard
library will provide optional support for the standards defined
herein, and other tools in the Python universe may comply as well.
Specification
In general, modules in the standard library SHOULD NOT have version
numbers. They implicitly carry the version number of the Python
release they are included in.
On a case-by-case basis, standard library modules which are also
released in standalone form for other Python versions MAY include a
module version number when included in the standard library, and
SHOULD include a version number when packaged separately.
When a module (or package) includes a version number, the version
SHOULD be available in the __version__ attribute.
For modules which live inside a namespace package, the module
SHOULD include the __version__ attribute. The namespace
package itself SHOULD NOT include its own __version__
attribute.
The __version__ attribute’s value SHOULD be a string.
Module version numbers SHOULD conform to the normalized version
format specified in PEP 386.
Module version numbers SHOULD NOT contain version control system
supplied revision numbers, or any other semantically different
version numbers (e.g. underlying library version number).
The version attribute in a classic distutils setup.py
file, or the PEP 345 Version metadata field SHOULD be
derived from the __version__ field, or vice versa.
Examples
Retrieving the version number from a third party package:
>>> import bzrlib
>>> bzrlib.__version__
'2.3.0'
Retrieving the version number from a standard library package that is
also distributed as a standalone module:
>>> import email
>>> email.__version__
'5.1.0'
Version numbers for namespace packages:
>>> import flufl.i18n
>>> import flufl.enum
>>> import flufl.lock
>>> print flufl.i18n.__version__
1.0.4
>>> print flufl.enum.__version__
3.1
>>> print flufl.lock.__version__
2.1
>>> import flufl
>>> flufl.__version__
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute '__version__'
>>>
Deriving
Module version numbers can appear in at least two places, and
sometimes more. For example, in accordance with this PEP, they are
available programmatically on the module’s __version__ attribute.
In a classic distutils setup.py file, the setup() function
takes a version argument, while the distutils2 setup.cfg file
has a version key. The version number must also get into the PEP
345 metadata, preferably when the sdist archive is built. It’s
desirable for module authors to only have to specify the version
number once, and have all the other uses derive from this single
definition.
This could be done in any number of ways, a few of which are outlined
below. These are included for illustrative purposes only and are not
intended to be definitive, complete, or all-encompassing. Other
approaches are possible, and some included below may have limitations
that prevent their use in some situations.
Let’s say Elle adds this attribute to her module file elle.py:
__version__ = '3.1.1'
Classic distutils
In classic distutils, the simplest way to add the version string to
the setup() function in setup.py is to do something like
this:
from elle import __version__
setup(name='elle', version=__version__)
In the PEP author’s experience however, this can fail in some cases,
such as when the module uses automatic Python 3 conversion via the
2to3 program (because setup.py is executed by Python 3 before
the elle module has been converted).
In that case, it’s not much more difficult to write a little code to
parse the __version__ from the file rather than importing it.
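For example, a minimal sketch of such parsing code (the regular expression and
file handling are illustrative assumptions, not part of this PEP):
import re

def parse_version(path):
    # Scan the file for a line of the form __version__ = '...'
    with open(path) as f:
        for line in f:
            match = re.match(r"__version__\s*=\s*['\"]([^'\"]+)['\"]", line)
            if match:
                return match.group(1)
    raise ValueError("no __version__ found in %r" % path)

# setup(name='elle', version=parse_version('elle.py'))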
Without providing too much detail, it’s likely that modules such as
distutils2 will provide a way to parse version strings from files.
E.g.:
from distutils2 import get_version
setup(name='elle', version=get_version('elle.py'))
Distutils2
Because the distutils2 style setup.cfg is declarative, we can’t
run any code to extract the __version__ attribute, either via
import or via parsing.
In consultation with the distutils-sig [9], two options are
proposed. Both entail containing the version number in a file, and
declaring that file in the setup.cfg. When the entire contents of
the file contains the version number, the version-file key will be
used:
[metadata]
version-file: version.txt
When the version number is contained within a larger file, e.g. of
Python code, such that the file must be parsed to extract the version,
the key version-from-file will be used:
[metadata]
version-from-file: elle.py
A parsing method similar to that described above will be performed on
the file named after the colon. The exact recipe for doing this will
be discussed in the appropriate distutils2 development forum.
An alternative is to only define the version number in setup.cfg
and use the pkgutil module [8] to make it available
programmatically. E.g. in elle.py:
from distutils2._backport import pkgutil
__version__ = pkgutil.get_distribution('elle').metadata['version']
PEP 376 metadata
PEP 376 defines a standard for static metadata, but doesn’t
describe the process by which this metadata gets created. It is
highly desirable for the derived version information to be placed into
the PEP 376 .dist-info metadata at build-time rather than
install-time. This way, the metadata will be available for
introspection even when the code is not installed.
References
[1]
Distutils2 documentation
(http://distutils2.notmyidea.org/)
[2]
The Cheeseshop (Python Package Index)
(http://pypi.python.org)
[3]
http://docs.python.org/distutils/setupscript.html
[5]
sqlite3 module documentation
(http://docs.python.org/library/sqlite3.html)
[8]
pkgutil - Package utilities
(http://distutils2.notmyidea.org/library/pkgutil.html)
[9]
https://mail.python.org/pipermail/distutils-sig/2011-June/017862.html
[11]
importlib.metadata
(https://docs.python.org/3/library/importlib.metadata.html#distribution-versions)
Copyright
This document has been placed in the public domain.
| Rejected | PEP 396 – Module Version Numbers | Informational | Given that it is useful and common to specify version numbers for
Python modules, and given that different ways of doing this have grown
organically within the Python community, it is useful to establish
standard conventions for module authors to adhere to and reference.
This informational PEP describes best practices for Python module
authors who want to define the version number of their Python module. |
PEP 397 – Python launcher for Windows
Author:
Mark Hammond <mhammond at skippinet.com.au>,
Martin von Löwis <martin at v.loewis.de>
Status:
Final
Type:
Standards Track
Created:
15-Mar-2011
Python-Version:
3.3
Post-History:
21-Jul-2011, 17-May-2011, 15-Mar-2011
Resolution:
Python-Dev message
Table of Contents
Abstract
Rationale
Specification
Installation
Python Script Launching
Shebang line parsing
Configuration file
Virtual commands in shebang lines
Customized Commands
Python Version Qualifiers
Command-line handling
Process Launching
Discussion
References
Copyright
Abstract
This PEP describes a Python launcher for the Windows platform. A
Python launcher is a single executable which uses a number of
heuristics to locate a Python executable and launch it with a
specified command line.
Rationale
Windows provides “file associations” so an executable can be associated
with an extension, allowing for scripts to be executed directly in some
contexts (eg., double-clicking the file in Windows Explorer.) Until now,
a strategy of “last installed Python wins” has been used and while not
ideal, has generally been workable due to the conservative changes in
Python 2.x releases. As Python 3.x scripts are often syntactically
incompatible with Python 2.x scripts, a different strategy must be used
to allow files with a ‘.py’ extension to use a different executable based
on the Python version the script targets. This will be done by borrowing
the existing practices of another operating system - scripts will be able
to nominate the version of Python they need by way of a “shebang” line, as
described below.
Unix-like operating systems (referred to simply as “Unix” in this
PEP) allow scripts to be executed as if they were executable images
by examining the script for a “shebang” line which specifies the
actual executable to be used to run the script. This is described in
detail in the execve(2) man page [1] and while user documentation will
be created for this feature, for the purposes of this PEP that man
page describes a valid shebang line.
Additionally, these operating systems provide symbolic-links to
Python executables in well-known directories. For example, many
systems will have a link /usr/bin/python which references a
particular version of Python installed under the operating-system.
These symbolic links allow Python to be executed without regard for
where Python is actually installed on the machine (eg., without
requiring the path where Python is actually installed to be
referenced in the shebang line or in the PATH.) PEP 394 ‘The “python”
command on Unix-Like Systems’ describes additional conventions
for more fine-grained specification of a particular Python version.
These 2 facilities combined allow for a portable and somewhat
predictable way of both starting Python interactively and for allowing
Python scripts to execute. This PEP describes an implementation of a
launcher which can offer the same benefits for Python on the Windows
platform and therefore allows the launcher to be the executable
associated with ‘.py’ files to support multiple Python versions
concurrently.
While this PEP offers the ability to use a shebang line which should
work on both Windows and Unix, this is not the primary motivation for
this PEP - the primary motivation is to allow a specific version to be
specified without inventing new syntax or conventions to describe
it.
Specification
This PEP specifies features of the launcher; a prototype
implementation is provided in [3] which will be distributed
together with the Windows installer of Python, but will also be
available separately (but released along with the Python
installer). New features may be added to the launcher as
long as the features prescribed here continue to work.
Installation
The launcher comes in 2 versions - one which is a console program and
one which is a “windows” (ie., GUI) program. These 2 launchers correspond
to the ‘python.exe’ and ‘pythonw.exe’ executables which currently ship
with Python. The console launcher will be named ‘py.exe’ and the Windows
one named ‘pyw.exe’. The “windows” (ie., GUI) version of the launcher
will attempt to locate and launch pythonw.exe even if a virtual shebang
line nominates simply “python” - in fact, the trailing ‘w’ notation is
not supported in the virtual shebang line at all.
The launcher is installed into the Windows directory (see
discussion below) if installed by a privileged user. The
stand-alone installer asks for an alternative location of the
installer, and adds that location to the user’s PATH.
The installation in the Windows directory is a 32-bit executable
(see discussion); the standalone installer may also offer to install
64-bit versions of the launcher.
The launcher installation is registered in
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\SharedDLLs
with a reference counter.
It contains a version resource matching the version number of the
pythonXY.dll with which it is distributed. Independent
installations will overwrite older versions
of the launcher with newer versions. Stand-alone releases use
a release level of 0x10 in FIELD3 of the CPython release on which
they are based.
Once installed, the “console” version of the launcher is
associated with .py files and the “windows” version associated with .pyw
files.
The launcher is not tied to a specific version of Python - eg., a
launcher distributed with Python 3.3 should be capable of locating and
executing any Python 2.x and Python 3.x version. However, the
launcher binaries have a version resource that is the same as the
version resource in the Python binaries that they are released with.
Python Script Launching
The launcher is restricted to launching Python scripts.
It is not intended as a general-purpose script launcher or
shebang processor.
The launcher supports the syntax of shebang lines as described
in [1], including all restrictions listed.
The launcher supports shebang lines referring to Python
executables with any of the (regex) prefixes “/usr/bin/”, “/usr/local/bin”
and “/usr/bin/env *”, as well as binaries specified without any of these prefixes.
For example, a shebang line of ‘#! /usr/bin/python’ should work even
though there is unlikely to be an executable in the relative Windows
directory “\usr\bin”. This means that many scripts can use a single
shebang line and be likely to work on both Unix and Windows without
modification.
The launcher will support fully-qualified paths to executables.
While this will make the script inherently non-portable, it is a
feature offered by Unix and would be useful for Windows users in
some cases.
The launcher will be capable of supporting implementations other than
CPython, such as jython and IronPython, but given both the absence of
common links on Unix (such as “/usr/bin/jython”) and the inability for the
launcher to automatically locate the installation location of these
implementations on Windows, the launcher will support this via
customization options. Scripts taking advantage of this will not be
portable (as these customization options must be set to reflect the
configuration of the machine on which the launcher is running) but this
ability is nonetheless considered worthwhile.
On Unix, the user can control which specific version of Python is used
by adjusting the links in /usr/bin to point to the desired version. As
the launcher on Windows will not use Windows links, customization options
(exposed via both environment variables and INI files) will be used to
override the semantics for determining what version of Python will be
used. For example, while a shebang line of “/usr/bin/python2” will
automatically locate a Python 2.x implementation, an environment variable
can override exactly which Python 2.x implementation will be chosen.
Similarly for “/usr/bin/python” and “/usr/bin/python3”. This is
specified in detail later in this PEP.
Shebang line parsing
If the first command-line argument does not start with a dash (‘-‘)
character, an attempt will be made to open that argument as a file
and parsed for a shebang line according to the rules in [1]:
#! interpreter [optional-arg]
Once parsed, the command will be categorized according to the following rules:
If the command starts with the definition of a customized command
followed by a whitespace character (including a newline), the customized
command will be used. See below for a description of customized
commands.
The launcher will define a set of prefixes which are considered Unix
compatible commands to launch Python, namely “/usr/bin/python”,
“/usr/local/bin/python”, “/usr/bin/env python”, and “python”.
If a command starts with one of these strings, it will be treated as a
‘virtual command’ and the rules described in Python Version Qualifiers
(below) will be used to locate the executable to use.
Otherwise the command is assumed to be directly ready to execute - ie.
a fully-qualified path (or a reference to an executable on the PATH)
optionally followed by arguments. The contents of the string will not
be parsed - it will be passed directly to the Windows CreateProcess
function after appending the name of the script and the launcher
command-line arguments. This means that the rules used by
CreateProcess will be used, including how relative path names and
executable references without extensions are treated. Notably, the
Windows command processor will not be used, so special rules used by the
command processor (such as automatic appending of extensions other than
‘.exe’, support for batch files, etc) will not be used.
The use of ‘virtual’ shebang lines is encouraged as this should
allow for portable shebang lines to be specified which work on
multiple operating systems and different installations of the same
operating system.
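The categorisation rules above can be illustrated with a small Python sketch
(for exposition only; the launcher itself is a native executable and this is
not its actual implementation):
VIRTUAL_PREFIXES = ("/usr/bin/python", "/usr/local/bin/python",
                    "/usr/bin/env python", "python")

def categorise(command, customized_commands):
    # Classify a shebang command per the rules described above.
    first_word = command.split(None, 1)[0]
    if first_word in customized_commands:
        return "customized"
    if any(command.startswith(prefix) for prefix in VIRTUAL_PREFIXES):
        return "virtual"
    return "direct"   # handed to CreateProcess unchanged

print(categorise("/usr/bin/env python3", {}))   # 'virtual'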
If the first argument can not be opened as a file or if no valid
shebang line can be found, the launcher will act as if a shebang line of
‘#!python’ was found - ie., a default Python interpreter will be
located and the arguments passed to that. However, if a valid
shebang line is found but the process specified by that line can not
be started, the default interpreter will not be started - the error
to create the specified child process will cause the launcher to display
an appropriate message and terminate with a specific exit code.
Configuration file
Two .ini files will be searched by the launcher - py.ini in the
current user’s “application data” directory (i.e. the directory returned
by calling the Windows function SHGetFolderPath with CSIDL_LOCAL_APPDATA,
%USERPROFILE%\AppData\Local on Vista+,
%USERPROFILE%\Local Settings\Application Data on XP)
and py.ini in the same directory as the launcher. The same .ini
files are used for both the ‘console’ version of the launcher (i.e.
py.exe) and for the ‘windows’ version (i.e. pyw.exe).
Customization specified in the “application directory” will have
precedence over the one next to the executable, so a user, who may not
have write access to the .ini file next to the launcher, can override
commands in that global .ini file.
Virtual commands in shebang lines
Virtual Commands are shebang lines which start with strings which would
be expected to work on Unix platforms - examples include
‘/usr/bin/python’, ‘/usr/bin/env python’ and ‘python’. Optionally, the
virtual command may be suffixed with a version qualifier (see below),
such as ‘/usr/bin/python2’ or ‘/usr/bin/python3.2’. The command executed
is based on the rules described in Python Version Qualifiers
below.
Customized Commands
The launcher will support the ability to define “Customized Commands” in a
Windows .ini file (ie, a file which can be parsed by the Windows function
GetPrivateProfileString). A section called ‘[commands]’ can be created
with key names defining the virtual command and the value specifying the
actual command-line to be used for this virtual command.
For example, if an INI file has the contents:
[commands]
vpython=c:\bin\vpython.exe -foo
Then a shebang line of ‘#! vpython’ in a script named ‘doit.py’ will
result in the launcher using the command-line
c:\bin\vpython.exe -foo doit.py
The precise details about the names, locations and search order of the
.ini files is in the launcher documentation [4]
Python Version Qualifiers
Some of the features described allow an optional Python version qualifier
to be used.
A version qualifier starts with a major version number and can optionally
be followed by a period (‘.’) and a minor version specifier. If the minor
qualifier is specified, it may optionally be followed by “-32” to indicate
the 32bit implementation of that version be used. Note that no “-64”
qualifier is necessary as this is the default implementation (see below).
On 64bit Windows with both 32bit and 64bit implementations of the
same (major.minor) Python version installed, the 64bit version will
always be preferred. This will be true for both 32bit and 64bit
implementations of the launcher - a 32bit launcher will prefer to
execute a 64bit Python installation of the specified version if
available. This is so the behavior of the launcher can be predicted
knowing only what versions are installed on the PC and without
regard to the order in which they were installed (ie, without knowing
whether a 32 or 64bit version of Python and corresponding launcher was
installed last). As noted above, an optional “-32” suffix can be used
on a version specifier to change this behaviour.
If no version qualifiers are found in a command, the environment variable
PY_PYTHON can be set to specify the default version qualifier - the default
value is “2”. Note this value could specify just a major version (e.g. “2”) or
a major.minor qualifier (e.g. “2.6”), or even major.minor-32.
If no minor version qualifiers are found, the environment variable
PY_PYTHON{major} (where {major} is the current major version qualifier
as determined above) can be set to specify the full version. If no such option
is found, the launcher will enumerate the installed Python versions and use
the latest minor release found for the major version, which is likely,
although not guaranteed, to be the most recently installed version in that
family.
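As an illustration of how a qualifier string breaks down (a sketch in Python
for exposition; the launcher performs this in native code):
def parse_qualifier(qualifier):
    # e.g. "2", "2.6" or "2.6-32"
    want_32bit = qualifier.endswith("-32")
    if want_32bit:
        qualifier = qualifier[:-3]
    major, _, minor = qualifier.partition(".")
    return major, minor or None, want_32bit

print(parse_qualifier("2.6-32"))   # ('2', '6', True)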
In addition to environment variables, the same settings can be configured
in the .INI file used by the launcher. The section in the INI file is
called [defaults] and the key name will be the same as the
environment variables without the leading PY_ prefix (and note that
the key names in the INI file are case insensitive.) The contents of
an environment variable will override things specified in the INI file.
Command-line handling
Only the first command-line argument will be checked for a shebang line
and only if that argument does not start with a ‘-‘.
If the only command-line argument is “-h” or “--help”, the launcher will
print a small banner and command-line usage, then pass the argument to
the default Python. This will cause help for the launcher to be printed
followed by help for Python itself. The output from the launcher will
clearly indicate the extended help information is coming from the
launcher and not Python.
As a concession to interactively launching Python, the launcher will
support the first command-line argument optionally being a dash (“-“)
followed by a version qualifier, as described above, to nominate a
specific version be used. For example, while “py.exe” may locate and
launch the latest Python 2.x implementation installed, a command-line such
as “py.exe -3” could specify the latest Python 3.x implementation be
launched, while “py.exe -2.6-32” could specify a 32bit implementation
Python 2.6 be located and launched. If a Python 2.x implementation is
desired to be launched with the -3 flag, the command-line would need to be
similar to “py.exe -2 -3” (or the specific version of Python could
obviously be launched manually without use of this launcher.) Note that
this feature can not be combined with shebang processing, as the file to be
scanned for a shebang line and this version argument must both be the first
argument, and therefore they are mutually exclusive.
All other arguments will be passed untouched to the child Python process.
Process Launching
The launcher offers some conveniences for Python developers working
interactively - for example, starting the launcher with no command-line
arguments will launch the default Python with no command-line arguments.
Further, command-line arguments will be supported to allow a specific
Python version to be launched interactively - however, these conveniences
must not detract from the primary purpose of launching scripts and must
be easy to avoid if desired.
The launcher creates a subprocess to start the actual
interpreter. See Discussion below for the rationale.
Discussion
It may be surprising that the launcher is installed into the
Windows directory, and not the System32 directory. The reason is
that the System32 directory is not on the Path of a 32-bit process
running on a 64-bit system. However, the Windows directory is
always on the path.
The launcher that is installed into the Windows directory is a 32-bit
executable so that the 32-bit CPython installer can provide the same
binary for both 32-bit and 64-bit Windows installations.
Ideally, the launcher process would execute Python directly inside
the same process, primarily so the parent of the launcher process could
terminate the launcher and have the Python interpreter terminate. If the
launcher executes Python as a sub-process and the parent of the launcher
terminates the launcher, the Python process will be unaffected.
However, there are a number of practical problems associated with this
approach. Windows does not support the execv* family of Unix functions,
so this could only be done by the launcher dynamically loading the Python
DLL, but this would have a number of side-effects. The most serious
side effect of this is that the value of sys.executable would refer to the
launcher instead of the Python implementation. Many Python scripts use the
value of sys.executable to launch child processes, and these scripts may
fail to work as expected if the launcher is used. Consider a “parent”
script with a shebang line of ‘#! /usr/bin/python3’ which attempts to
launch a child script (with no shebang) via sys.executable - currently the
child is launched using the exact same version running the parent script.
If sys.executable referred to the launcher the child would be likely
executed using a Python 2.x version and would be likely to fail with a
SyntaxError.
Another hurdle is the support for alternative Python implementations
using the “customized commands” feature described above, where loading
the command dynamically into a running executable is not possible.
The final hurdle is the rules above regarding 64bit and 32bit programs -
a 32bit launcher would be unable to load the 64bit version of Python and
vice-versa.
Given these considerations, the launcher will execute its command in a
child process, remaining alive while the child process is executing, then
terminate with the same exit code as returned by the child. To address
concerns regarding the termination of the launcher not killing the child,
the Win32 Job API will be used to arrange for the child process to be
automatically killed when the parent is terminated (although children of
that child process will continue as is the case now.) As this Windows API
is available in Windows XP and later, this launcher will not work on
Windows 2000 or earlier.
References
[1] (1, 2, 3)
http://linux.die.net/man/2/execve
[3]
https://bitbucket.org/vinay.sajip/pylauncher
[4]
https://bitbucket.org/vinay.sajip/pylauncher/src/tip/Doc/launcher.rst
Copyright
This document has been placed in the public domain.
| Final | PEP 397 – Python launcher for Windows | Standards Track | This PEP describes a Python launcher for the Windows platform. A
Python launcher is a single executable which uses a number of
heuristics to locate a Python executable and launch it with a
specified command line. |
PEP 398 – Python 3.3 Release Schedule
Author:
Georg Brandl <georg at python.org>
Status:
Final
Type:
Informational
Topic:
Release
Created:
23-Mar-2011
Python-Version:
3.3
Table of Contents
Abstract
Release Manager and Crew
3.3 Lifespan
Release Schedule
3.3.0 schedule
3.3.1 schedule
3.3.2 schedule
3.3.3 schedule
3.3.4 schedule
3.3.5 schedule
3.3.6 schedule
3.3.7 schedule
3.3.x end-of-life
Features for 3.3
Copyright
Abstract
This document describes the development and release schedule for
Python 3.3. The schedule primarily concerns itself with PEP-sized
items.
Release Manager and Crew
3.3 Release Managers: Georg Brandl, Ned Deily (3.3.7+)
Windows installers: Martin v. Löwis
Mac installers: Ronald Oussoren/Ned Deily
Documentation: Georg Brandl
3.3 Lifespan
3.3 will receive bugfix updates approximately every 4-6 months for
approximately 18 months. After the release of 3.4.0 final, a final
3.3 bugfix update will be released. After that, security updates
(source only) will be released until 5 years after the release of 3.3
final, which will be September 2017.
As of 2017-09-29, Python 3.3.x reached end-of-life status.
Release Schedule
3.3.0 schedule
3.3.0 alpha 1: March 5, 2012
3.3.0 alpha 2: April 2, 2012
3.3.0 alpha 3: May 1, 2012
3.3.0 alpha 4: May 31, 2012
3.3.0 beta 1: June 27, 2012
(No new features beyond this point.)
3.3.0 beta 2: August 12, 2012
3.3.0 candidate 1: August 24, 2012
3.3.0 candidate 2: September 9, 2012
3.3.0 candidate 3: September 24, 2012
3.3.0 final: September 29, 2012
3.3.1 schedule
3.3.1 candidate 1: March 23, 2013
3.3.1 final: April 6, 2013
3.3.2 schedule
3.3.2 final: May 13, 2013
3.3.3 schedule
3.3.3 candidate 1: October 27, 2013
3.3.3 candidate 2: November 9, 2013
3.3.3 final: November 16, 2013
3.3.4 schedule
3.3.4 candidate 1: January 26, 2014
3.3.4 final: February 9, 2014
3.3.5 schedule
Python 3.3.5 was the last regular maintenance release before 3.3 entered
security-fix only mode.
3.3.5 candidate 1: February 22, 2014
3.3.5 candidate 2: March 1, 2014
3.3.5 final: March 8, 2014
3.3.6 schedule
Security fixes only
3.3.6 candidate 1 (source-only release): October 4, 2014
3.3.6 final (source-only release): October 11, 2014
3.3.7 schedule
Security fixes only
3.3.7 candidate 1 (source-only release): September 6, 2017
3.3.7 final (source-only release): September 19, 2017
3.3.x end-of-life
September 29, 2017
Features for 3.3
Implemented / Final PEPs:
PEP 362: Function Signature Object
PEP 380: Syntax for Delegating to a Subgenerator
PEP 393: Flexible String Representation
PEP 397: Python launcher for Windows
PEP 399: Pure Python/C Accelerator Module Compatibility Requirements
PEP 405: Python Virtual Environments
PEP 409: Suppressing exception context
PEP 412: Key-Sharing Dictionary
PEP 414: Explicit Unicode Literal for Python 3.3
PEP 415: Implement context suppression with exception attributes
PEP 417: Including mock in the Standard Library
PEP 418: Add monotonic time, performance counter, and process time functions
PEP 420: Implicit Namespace Packages
PEP 421: Adding sys.implementation
PEP 3118: Revising the buffer protocol (protocol semantics finalised)
PEP 3144: IP Address manipulation library
PEP 3151: Reworking the OS and IO exception hierarchy
PEP 3155: Qualified name for classes and functions
Other final large-scale changes:
Addition of the “faulthandler” module
Addition of the “lzma” module, and lzma/xz support in tarfile
Implementing __import__ using importlib
Addition of the C decimal implementation
Switch of Windows build toolchain to VS 2010
Candidate PEPs:
None
Other planned large-scale changes:
None
Deferred to post-3.3:
PEP 395: Qualified Names for Modules
PEP 3143: Standard daemon process library
PEP 3154: Pickle protocol version 4
Breaking out standard library and docs in separate repos
Addition of the “packaging” module, deprecating “distutils”
Addition of the “regex” module
Email version 6
A standard event-loop interface (PEP by Jim Fulton pending)
Copyright
This document has been placed in the public domain.
| Final | PEP 398 – Python 3.3 Release Schedule | Informational | This document describes the development and release schedule for
Python 3.3. The schedule primarily concerns itself with PEP-sized
items. |
PEP 399 – Pure Python/C Accelerator Module Compatibility Requirements
Author:
Brett Cannon <brett at python.org>
Status:
Final
Type:
Informational
Created:
04-Apr-2011
Python-Version:
3.3
Post-History:
04-Apr-2011, 12-Apr-2011, 17-Jul-2011, 15-Aug-2011, 01-Jan-2013
Table of Contents
Abstract
Rationale
Details
Copyright
Abstract
The Python standard library under CPython contains various instances
of modules implemented in both pure Python and C (either entirely or
partially). This PEP requires that, in these instances, the
C code must pass the test suite used for the pure Python code
so as to act as much as a drop-in replacement as reasonably possible
(C- and VM-specific tests are exempt). It is also required that new
C-based modules lacking a pure Python equivalent implementation get
special permission to be added to the standard library.
Rationale
Python has grown beyond the CPython virtual machine (VM). IronPython,
Jython, and PyPy are all currently viable alternatives to the
CPython VM. The VM ecosystem that has sprung up around the Python
programming language has led to Python being used in many different
areas where CPython cannot be used, e.g., Jython allowing Python to be
used in Java applications.
A problem all of the VMs other than CPython face is handling modules
from the standard library that are implemented (to some extent) in C.
Since other VMs do not typically support the entire C API of CPython
they are unable to use the code used to create the module. Oftentimes
this leads these other VMs to either re-implement the modules in pure
Python or in the programming language used to implement the VM itself
(e.g., in C# for IronPython). This duplication of effort between
CPython, PyPy, Jython, and IronPython is extremely unfortunate as
implementing a module at least in pure Python would help mitigate
this duplicate effort.
The purpose of this PEP is to minimize this duplicate effort by
mandating that all new modules added to Python’s standard library
must have a pure Python implementation unless special dispensation
is given. This makes sure that a module in the stdlib is available to
all VMs and not just to CPython (pre-existing modules that do not meet
this requirement are exempt, although there is nothing preventing
someone from adding in a pure Python implementation retroactively).
Re-implementing parts (or all) of a module in C (in the case
of CPython) is still allowed for performance reasons, but any such
accelerated code must pass the same test suite (sans VM- or C-specific
tests) to verify semantics and prevent divergence. To accomplish this,
the test suite for the module must have comprehensive coverage of the
pure Python implementation before the acceleration code may be added.
Details
Starting in Python 3.3, any modules added to the standard library must
have a pure Python implementation. This rule can only be ignored if
the Python development team grants a special exemption for the module.
Typically the exemption will be granted only when a module wraps a
specific C-based library (e.g., sqlite3). In granting an exemption it
will be recognized that the module will be considered exclusive to
CPython and not part of Python’s standard library that other VMs are
expected to support. Usage of ctypes to provide an
API for a C library will continue to be frowned upon as ctypes
lacks compiler guarantees that C code typically relies upon to prevent
certain errors from occurring (e.g., API changes).
Even though a pure Python implementation is mandated by this PEP, it
does not preclude the use of a companion acceleration module. If an
acceleration module is provided it is to be named the same as the
module it is accelerating with an underscore attached as a prefix,
e.g., _warnings for warnings. The common pattern to access
the accelerated code from the pure Python implementation is to import
it with an import *, e.g., from _warnings import *. This is
typically done at the end of the module to allow it to overwrite
specific Python objects with their accelerated equivalents. This kind
of import can also be done before the end of the module when needed,
e.g., an accelerated base class is provided but is then subclassed by
Python code. This PEP does not mandate that pre-existing modules in
the stdlib that lack a pure Python equivalent gain such a module. But
if people do volunteer to provide and maintain a pure Python
equivalent (e.g., the PyPy team volunteering their pure Python
implementation of the csv module and maintaining it) then such
code will be accepted. In those instances the C version is considered
the reference implementation in terms of expected semantics.
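A hedged sketch of the accelerator pattern described above (the module name
fictional and its _fictional accelerator are made up for illustration):
# File: fictional.py -- the pure Python implementation, usable on any VM

def frobnicate(value):
    """Pure Python implementation."""
    return value * 2

try:
    from _fictional import *  # overwrite selected names with C versions
except ImportError:
    pass  # no accelerator available; the pure Python code above is used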
Any new accelerated code must act as a drop-in replacement as close
to the pure Python implementation as reasonable. Technical details of
the VM providing the accelerated code are allowed to differ as
necessary, e.g., a class being a type when implemented in C. To
verify that the Python and equivalent C code operate as similarly as
possible, both code bases must be tested using the same tests which
apply to the pure Python code (tests specific to the C code or any VM
do not fall under this requirement). The test suite is expected to
be extensive in order to verify expected semantics.
Acting as a drop-in replacement also dictates that no public API be
provided in accelerated code that does not exist in the pure Python
code. Without this requirement people could accidentally come to rely
on a detail in the accelerated code which is not made available to
other VMs that use the pure Python implementation. To help verify
that the contract of semantic equivalence is being met, a module must
be tested both with and without its accelerated code as thoroughly as
possible.
As an example, to write tests which exercise both the pure Python and
C accelerated versions of a module, a basic idiom can be followed:
from test.support import import_fresh_module
import unittest
c_heapq = import_fresh_module('heapq', fresh=['_heapq'])
py_heapq = import_fresh_module('heapq', blocked=['_heapq'])
class ExampleTest:

    def test_example(self):
        self.assertTrue(hasattr(self.module, 'heapify'))


class PyExampleTest(ExampleTest, unittest.TestCase):
    module = py_heapq


@unittest.skipUnless(c_heapq, 'requires the C _heapq module')
class CExampleTest(ExampleTest, unittest.TestCase):
    module = c_heapq


if __name__ == '__main__':
    unittest.main()
The test module defines a base class (ExampleTest) with test methods
that access the heapq module through a self.module class attribute,
and two subclasses that set this attribute to either the Python or the C
version of the module. Note that only the two subclasses inherit from
unittest.TestCase – this prevents the ExampleTest class from
being detected as a TestCase subclass by unittest test discovery.
A skipUnless decorator can be added to the class that tests the C code
in order to have these tests skipped when the C module is not available.
If this test were to provide extensive coverage for
heapq.heappop() in the pure Python implementation then the
accelerated C code would be allowed to be added to CPython’s standard
library. If it did not, then the test suite would need to be updated
until proper coverage was provided before the accelerated C code
could be added.
To also help with compatibility, C code should use abstract APIs on
objects to prevent accidental dependence on specific types. For
instance, if a function accepts a sequence then the C code should
default to using PyObject_GetItem() instead of something like
PyList_GetItem(). C code is allowed to have a fast path if the
proper PyList_CheckExact() is used, but otherwise APIs should work
with any object that duck types to the proper interface instead of a
specific type.
Copyright
This document has been placed in the public domain.
| Final | PEP 399 – Pure Python/C Accelerator Module Compatibility Requirements | Informational | The Python standard library under CPython contains various instances
of modules implemented in both pure Python and C (either entirely or
partially). This PEP requires that in these instances that the
C code must pass the test suite used for the pure Python code
so as to act as much as a drop-in replacement as reasonably possible
(C- and VM-specific tests are exempt). It is also required that new
C-based modules lacking a pure Python equivalent implementation get
special permission to be added to the standard library. |
PEP 400 – Deprecate codecs.StreamReader and codecs.StreamWriter
Author:
Victor Stinner <vstinner at python.org>
Status:
Deferred
Type:
Standards Track
Created:
28-May-2011
Python-Version:
3.3
Table of Contents
Abstract
PEP Deferral
Motivation
Rationale
StreamReader and StreamWriter issues
TextIOWrapper features
TextIOWrapper issues
Possible improvements of StreamReader and StreamWriter
Usage of StreamReader and StreamWriter
Backwards Compatibility
Keep the public API, codecs.open
Deprecate StreamReader and StreamWriter
Alternative Approach
Appendix A: Issues with stateful codecs
Stateful codecs
Read and seek(0)
seek(n)
Append mode
Links
Copyright
Footnotes
Abstract
io.TextIOWrapper and codecs.StreamReaderWriter offer the same API
[1]. TextIOWrapper has more features and is faster than
StreamReaderWriter. Duplicate code means that bugs should be fixed
twice and that we may have subtle differences between the two
implementations.
The codecs module was introduced in Python 2.0 (see the PEP 100).
The io module was
introduced in Python 2.6 and 3.0 (see the PEP 3116),
and reimplemented in C in
Python 2.7 and 3.1.
PEP Deferral
Further exploration of the concepts covered in this PEP has been deferred
for lack of a current champion interested in promoting the goals of the PEP
and collecting and incorporating feedback, and with sufficient available
time to do so effectively.
Motivation
When the Python I/O model was updated for 3.0, the concept of a
“stream-with-known-encoding” was introduced in the form of
io.TextIOWrapper. As this class is critical to the performance of
text-based I/O in Python 3, this module has an optimised C version
which is used by CPython by default. Many corner cases in handling
buffering, stateful codecs and universal newlines have been dealt with
since the release of Python 3.0.
This new interface overlaps heavily with the legacy
codecs.StreamReader, codecs.StreamWriter and codecs.StreamReaderWriter
interfaces that were part of the original codec interface design in
PEP 100. These interfaces are organised around the principle of an
encoding with an associated stream (i.e. the reverse of the arrangement in
the io module), so the original PEP 100 design required that codec
writers provide appropriate StreamReader and StreamWriter
implementations in addition to the core codec encode() and decode()
methods. This places a heavy burden on codec authors providing these
specialised implementations to correctly handle many of the corner
cases (see Appendix A) that have now been dealt with by io.TextIOWrapper. While deeper
integration between the codec and the stream allows for additional
optimisations in theory, these optimisations have in practice either
not been carried out, or else the associated code duplication means
that the corner cases that have been fixed in io.TextIOWrapper are
still not handled correctly in the various StreamReader and
StreamWriter implementations.
Accordingly, this PEP proposes that:
codecs.open() be updated to delegate to the builtin open() in Python
3.3;
the legacy codecs.Stream* interfaces, including the streamreader and
streamwriter attributes of codecs.CodecInfo be deprecated in Python
3.3.
Rationale
StreamReader and StreamWriter issues
StreamReader is unable to translate newlines.
StreamWriter doesn’t support “line buffering” (flush if the input
text contains a newline).
StreamReader classes of the CJK encodings (e.g. GB18030) only
support UNIX newlines ('\n').
StreamReader and StreamWriter are stateful codecs but don’t expose
functions to control their state (getstate() or setstate()). Each
codec has to handle corner cases, see Appendix A.
StreamReader and StreamWriter are very similar to IncrementalDecoder
and IncrementalEncoder; some code is duplicated for stateful codecs
(e.g. UTF-16).
Each codec has to reimplement its own StreamReader and StreamWriter
class, even if it’s trivial (just call the encoder/decoder).
codecs.open(filename, "r") creates an io.TextIOWrapper object.
No codec implements an optimized method in StreamReader or
StreamWriter based on the specificities of the codec.
Issues in the bug tracker:
Issue #5445 (2009-03-08):
codecs.StreamWriter.writelines problem when passed generator
Issue #7262: (2009-11-04):
codecs.open() + eol (windows)
Issue #8260 (2010-03-29):
When I use codecs.open(…) and f.readline() follow up by f.read()
return bad result
Issue #8630 (2010-05-05):
Keepends param in codec readline(s)
Issue #10344 (2010-11-06):
codecs.readline doesn’t care buffering
Issue #11461 (2011-03-10):
Reading UTF-16 with codecs.readline() breaks on surrogate pairs
Issue #12446 (2011-06-30):
StreamReader Readlines behavior odd
Issue #12508 (2011-07-06):
Codecs Anomaly
Issue #12512 (2011-07-07):
codecs: StreamWriter issues with stateful codecs after a seek or
with append mode
Issue #12513 (2011-07-07):
codec.StreamReaderWriter: issues with interlaced read-write
TextIOWrapper features
TextIOWrapper supports any kind of newline, including translating
newlines (to UNIX newlines), for both reading and writing.
TextIOWrapper reuses codecs incremental encoders and decoders (no
duplication of code).
The io module (TextIOWrapper) is faster than the codecs module
(StreamReader). It is implemented in C, whereas codecs is
implemented in Python.
TextIOWrapper has a readahead algorithm which speeds up small
reads: reading character by character or line by line (io is 10x
to 25x faster than codecs on these operations).
TextIOWrapper has a write buffer.
TextIOWrapper.tell() is optimized.
TextIOWrapper supports random access (read+write) using a single
class, which makes it possible to optimize interlaced read-write
(though no such optimization is currently implemented).
TextIOWrapper issues
Issue #12215 (2011-05-30):
TextIOWrapper: issues with interlaced read-write
Possible improvements of StreamReader and StreamWriter
By adding codec state read/write functions to the StreamReader and
StreamWriter classes, it will become possible to fix issues with
stateful codecs in a base class instead of in each stateful
StreamReader and StreamWriter class.
It would be possible to change StreamReader and StreamWriter to make
them use IncrementalDecoder and IncrementalEncoder.
A codec can implement variants which are optimized for the specific
encoding or intercept certain stream methods to add functionality or
improve the encoding/decoding performance. TextIOWrapper cannot
implement such optimizations, but it does use incremental
encoders and decoders together with read and write buffers, so the
overhead of handling incomplete inputs is low or nil.
A lot more could be done for other variable length encoding codecs,
e.g. UTF-8, since these often have problems near the end of a read due
to missing bytes. The UTF-32-BE/LE codecs could simply multiply the
character position by 4 to get the byte position.
Usage of StreamReader and StreamWriter
These classes are rarely used directly; they are mostly used indirectly
through codecs.open(). They are not used in the Python 3 standard library
(except in the codecs module itself).
Some projects implement their own codecs with StreamReader and
StreamWriter classes, but never actually use those classes.
Backwards Compatibility
Keep the public API, codecs.open
codecs.open() can be replaced by the builtin open() function. open()
has a similar API but also has more options. Both functions return
file-like objects with the same API.
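As a rough, hypothetical illustration (example.txt is an arbitrary file
name), both calls below return a file-like object exposing the same text
API, which is what makes the delegation possible:
import codecs

legacy = codecs.open('example.txt', 'r', encoding='utf-8')  # codecs.StreamReaderWriter
modern = open('example.txt', 'r', encoding='utf-8')         # io.TextIOWrapper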
codecs.open() was the only way to open a text file in Unicode mode
until Python 2.6. Many Python 2 programs use this function. Removing
codecs.open() would imply more work to port programs from Python 2 to
Python 3, especially for projects using the same code base for the two
Python versions (without using the 2to3 program).
codecs.open() is kept for backward compatibility with Python 2.
Deprecate StreamReader and StreamWriter
Instantiating StreamReader or StreamWriter must emit a DeprecationWarning in
Python 3.3. Defining a subclass doesn’t emit a DeprecationWarning.
codecs.open() will be changed to reuse the builtin open() function
(TextIOWrapper) to read-write text files.
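A minimal sketch of how the instantiation-time warning could be emitted
(this is not the actual patch; issuing the warning from __init__ means
that merely defining a subclass stays silent):
import warnings
from codecs import Codec

class StreamWriter(Codec):
    def __init__(self, stream, errors='strict'):
        # Warn only when an instance is created, not when a subclass
        # is defined.
        warnings.warn(
            "codecs.StreamWriter is deprecated; use the builtin open() "
            "(io.TextIOWrapper) instead",
            DeprecationWarning, stacklevel=2)
        self.stream = stream
        self.errors = errors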
Alternative Approach
An alternative to the deprecation of the codecs.Stream* classes is to rename
codecs.open() to codecs.open_stream(), and to create a new codecs.open()
function reusing open() and so io.TextIOWrapper.
Appendix A: Issues with stateful codecs
It is difficult to use a stateful codec correctly with a stream. Some
cases are supported by the codecs module, whereas io has no known
remaining bugs related to stateful codecs. The main difference between
the codecs and the io modules is that, for codecs, bugs have to be fixed
in the StreamReader and/or StreamWriter classes of each codec, whereas
they can be fixed once and for all in io.TextIOWrapper. Here are some
examples of issues with stateful codecs.
Stateful codecs
Python supports the following stateful codecs:
cp932
cp949
cp950
euc_jis_2004
euc_jisx0213
euc_jp
euc_kr
gb18030
gbk
hz
iso2022_jp
iso2022_jp_1
iso2022_jp_2
iso2022_jp_2004
iso2022_jp_3
iso2022_jp_ext
iso2022_kr
shift_jis
shift_jis_2004
shift_jisx0213
utf_8_sig
utf_16
utf_32
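The following snippet, which uses only the existing incremental encoder
API, shows what "stateful" means in practice: the UTF-16 encoder emits a
BOM on its first call only, so correct behaviour depends on remembering
what happened in earlier calls:
import codecs

enc = codecs.getincrementalencoder('utf-16')()
print(enc.encode('abc'))  # starts with the BOM (b'\xff\xfe' on little-endian builds)
print(enc.encode('def'))  # no BOM: the encoder remembers it has already written one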
Read and seek(0)
with open(filename, 'w+', encoding='utf-16') as f:
    f.write('abc')
    f.write('def')
    f.seek(0)
    assert f.read() == 'abcdef'
    f.seek(0)
    assert f.read() == 'abcdef'
The io and codecs modules support this use case correctly.
seek(n)
with open(filename, 'w', encoding='utf-16') as f:
    f.write('abc')
    pos = f.tell()

with open(filename, 'w', encoding='utf-16') as f:
    f.seek(pos)
    f.write('def')
    f.seek(0)
    f.write('###')

with open(filename, 'r', encoding='utf-16') as f:
    assert f.read() == '###def'
The io module supports this use case, whereas codecs fails because it
writes a new BOM on the second write (issue #12512).
Append mode
with open(filename, 'w', encoding='utf-16') as f:
    f.write('abc')

with open(filename, 'a', encoding='utf-16') as f:
    f.write('def')

with open(filename, 'r', encoding='utf-16') as f:
    assert f.read() == 'abcdef'
The io module supports this use case, whereas codecs fails because it
writes a new BOM on the second write (issue #12512).
Links
PEP 100: Python Unicode Integration
PEP 3116: New I/O
Issue #8796: Deprecate codecs.open()
[python-dev] Deprecate codecs.open() and StreamWriter/StreamReader
Copyright
This document has been placed in the public domain.
Footnotes
[1]
StreamReaderWriter has two more attributes than
TextIOWrapper, reader and writer.
| Deferred | PEP 400 – Deprecate codecs.StreamReader and codecs.StreamWriter | Standards Track | io.TextIOWrapper and codecs.StreamReaderWriter offer the same API
[1]. TextIOWrapper has more features and is faster than
StreamReaderWriter. Duplicate code means that bugs should be fixed
twice and that we may have subtle differences between the two
implementations. |
PEP 401 – BDFL Retirement
Author:
Barry Warsaw, Brett Cannon
Status:
April Fool!
Type:
Process
Created:
01-Apr-2009
Post-History:
01-Apr-2009
Table of Contents
Abstract
Rationale
Official Acts of the FLUFL
References
Copyright
Abstract
The BDFL, having shepherded Python development for 20 years,
officially announces his retirement, effective immediately. Following
a unanimous vote, his replacement is named.
Rationale
Guido wrote the original implementation of Python in 1989, and after
nearly 20 years of leading the community, has decided to step aside as
its Benevolent Dictator For Life. His official title is now
Benevolent Dictator Emeritus Vacationing Indefinitely from the
Language (BDEVIL). Guido leaves Python in the good hands of its new
leader and its vibrant community, in order to train for his lifelong
dream of climbing Mount Everest.
After unanimous vote of the Python Steering Union (not to be confused
with the Python Secret Underground, which emphatically does not exist)
at the 2009 Python Conference (PyCon 2009), Guido’s successor has been
chosen: Barry Warsaw, or as he is affectionately known, Uncle Barry.
Uncle Barry’s official title is Friendly Language Uncle For Life (FLUFL).
Official Acts of the FLUFL
FLUFL Uncle Barry enacts the following decisions, in order to
demonstrate his intention to lead the community in the same
responsible and open manner as his predecessor, whose name escapes
him:
Recognized that the selection of Hg as the DVCS of choice was
clear proof of the onset of the BDEVIL’s insanity, and reverting
this decision to switch to Bzr instead, the only true choice.
Recognized that the != inequality operator in Python 3.0 was a
horrible, finger pain inducing mistake, the FLUFL reinstates the
<> diamond operator as the sole spelling. This change is
important enough to be implemented for, and released in Python
3.1. To help transition to this feature, a new future statement,
from __future__ import barry_as_FLUFL has been added.
Recognized that the print function in Python 3.0 was a horrible,
pain-inducing mistake, the FLUFL reinstates the print
statement. This change is important enough to be implemented for,
and released in Python 3.0.2.
Recognized that the disappointing adoption curve of Python 3.0
signals its abject failure, all work on Python 3.1 and subsequent
Python 3.x versions is hereby terminated. All features in Python
3.0 shall be back ported to Python 2.7 which will be the official
and sole next release. The Python 3.0 string and bytes types will
be back ported to Python 2.6.2 for the convenience of developers.
Recognized that C is a 20th-century language with almost universal
rejection by programmers under the age of 30, the CPython
implementation will terminate with the release of Python 2.6.2 and
3.0.2. Thereafter, the reference implementation of Python will
target the Parrot [1] virtual machine. Alternative implementations
of Python (e.g. Jython [2], IronPython [3], and PyPy [4]) are
officially discouraged but tolerated.
Recognized that the Python Software Foundation [5] having fulfilled
its mission admirably, is hereby disbanded. The Python Steering
Union [6] (not to be confused with the Python Secret Underground,
which emphatically does not exist), is now the sole steward for all
of Python’s intellectual property. All PSF funds are hereby
transferred to the PSU (not that PSU, the other PSU).
References
[1]
http://www.parrot.org
[2]
http://www.jython.org
[3]
http://www.ironpython.com
[4]
http://www.codespeak.net/pypy
[5]
http://www.python.org/psf
[6]
http://www.pythonlabs.com
Copyright
This document is the property of the Python Steering Union (not to be
confused with the Python Secret Underground, which emphatically does
not exist). We suppose it’s okay for you to read this, but don’t even
think about quoting, copying, modifying, or distributing it.
| April Fool! | PEP 401 – BDFL Retirement | Process | The BDFL, having shepherded Python development for 20 years,
officially announces his retirement, effective immediately. Following
a unanimous vote, his replacement is named. |
PEP 402 – Simplified Package Layout and Partitioning
Author:
Phillip J. Eby
Status:
Rejected
Type:
Standards Track
Topic:
Packaging
Created:
12-Jul-2011
Python-Version:
3.3
Post-History:
20-Jul-2011
Replaces:
382
Table of Contents
Rejection Notice
Abstract
The Problem
The Solution
A Thought Experiment
Self-Contained vs. “Virtual” Packages
Backwards Compatibility and Performance
Specification
Virtual Paths
Standard Library Changes/Additions
Implementation Notes
References
Copyright
Rejection Notice
On the first day of sprints at US PyCon 2012 we had a long and
fruitful discussion about PEP 382 and PEP 402. We ended up rejecting
both but a new PEP will be written to carry on in the spirit of PEP
402. Martin von Löwis wrote up a summary: [3].
Abstract
This PEP proposes an enhancement to Python’s package importing
to:
Surprise users of other languages less,
Make it easier to convert a module into a package, and
Support dividing packages into separately installed components
(ala “namespace packages”, as described in PEP 382)
The proposed enhancements do not change the semantics of any
currently-importable directory layouts, but make it possible for
packages to use a simplified directory layout (that is not importable
currently).
However, the proposed changes do NOT add any performance overhead to
the importing of existing modules or packages, and performance for the
new directory layout should be about the same as that of previous
“namespace package” solutions (such as pkgutil.extend_path()).
The Problem
“Most packages are like modules. Their contents are highly
interdependent and can’t be pulled apart. [However,] some
packages exist to provide a separate namespace. … It should
be possible to distribute sub-packages or submodules of these
[namespace packages] independently.”—Jim Fulton, shortly before the release of Python 2.3 [1]
When new users come to Python from other languages, they are often
confused by Python’s package import semantics. At Google, for example,
Guido received complaints from “a large crowd with pitchforks” [2]
that the requirement for packages to contain an __init__ module
was a “misfeature”, and should be dropped.
In addition, users coming from languages like Java or Perl are
sometimes confused by a difference in Python’s import path searching.
In most other languages that have a similar path mechanism to Python’s
sys.path, a package is merely a namespace that contains modules
or classes, and can thus be spread across multiple directories in
the language’s path. In Perl, for instance, a Foo::Bar module
will be searched for in Foo/ subdirectories all along the module
include path, not just in the first such subdirectory found.
Worse, this is not just a problem for new users: it prevents anyone
from easily splitting a package into separately-installable
components. In Perl terms, it would be as if every possible Net::
module on CPAN had to be bundled up and shipped in a single tarball!
For that reason, various workarounds for this latter limitation exist,
circulated under the term “namespace packages”. The Python standard
library has provided one such workaround since Python 2.3 (via the
pkgutil.extend_path() function), and the “setuptools” package
provides another (via pkg_resources.declare_namespace()).
The workarounds themselves, however, fall prey to a third issue with
Python’s way of laying out packages in the filesystem.
Because a package must contain an __init__ module, any attempt
to distribute modules for that package must necessarily include that
__init__ module, if those modules are to be importable.
However, the very fact that each distribution of modules for a package
must contain this (duplicated) __init__ module, means that OS
vendors who package up these module distributions must somehow handle
the conflict caused by several module distributions installing that
__init__ module to the same location in the filesystem.
This led to the proposing of PEP 382 (“Namespace Packages”) - a way
to signal to Python’s import machinery that a directory was
importable, using unique filenames per module distribution.
However, there was more than one downside to this approach.
Performance for all import operations would be affected, and the
process of designating a package became even more complex. New
terminology had to be invented to explain the solution, and so on.
As terminology discussions continued on the Import-SIG, it soon became
apparent that the main reason it was so difficult to explain the
concepts related to “namespace packages” was because Python’s
current way of handling packages is somewhat underpowered, when
compared to other languages.
That is, in other popular languages with package systems, no special
term is needed to describe “namespace packages”, because all
packages generally behave in the desired fashion.
Rather than being an isolated single directory with a special marker
module (as in Python), packages in other languages are typically just
the union of appropriately-named directories across the entire
import or inclusion path.
In Perl, for example, the module Foo is always found in a
Foo.pm file, and a module Foo::Bar is always found in a
Foo/Bar.pm file. (In other words, there is One Obvious Way to
find the location of a particular module.)
This is because Perl considers a module to be different from a
package: the package is purely a namespace in which other modules
may reside, and is only coincidentally the name of a module as well.
In current versions of Python, however, the module and the package are
more tightly bound together. Foo is always a module – whether it
is found in Foo.py or Foo/__init__.py – and it is tightly
linked to its submodules (if any), which must reside in the exact
same directory where the __init__.py was found.
On the positive side, this design choice means that a package is quite
self-contained, and can be installed, copied, etc. as a unit just by
performing an operation on the package’s root directory.
On the negative side, however, it is non-intuitive for beginners, and
requires a more complex step to turn a module into a package. If
Foo begins its life as Foo.py, then it must be moved and
renamed to Foo/__init__.py.
Conversely, if you intend to create a Foo.Bar module from the
start, but have no particular module contents to put in Foo
itself, then you have to create an empty and seemingly-irrelevant
Foo/__init__.py file, just so that Foo.Bar can be imported.
(And these issues don’t just confuse newcomers to the language,
either: they annoy many experienced developers as well.)
So, after some discussion on the Import-SIG, this PEP was created
as an alternative to PEP 382, in an attempt to solve all of the
above problems, not just the “namespace package” use cases.
And, as a delightful side effect, the solution proposed in this PEP
does not affect the import performance of ordinary modules or
self-contained (i.e. __init__-based) packages.
The Solution
In the past, various proposals have been made to allow more intuitive
approaches to package directory layout. However, most of them failed
because of an apparent backward-compatibility problem.
That is, if the requirement for an __init__ module were simply
dropped, it would open up the possibility for a directory named, say,
string on sys.path, to block importing of the standard library
string module.
Paradoxically, however, the failure of this approach does not arise
from the elimination of the __init__ requirement!
Rather, the failure arises because the underlying approach takes for
granted that a package is just ONE thing, instead of two.
In truth, a package comprises two separate, but related entities: a
module (with its own, optional contents), and a namespace where
other modules or packages can be found.
In current versions of Python, however, the module part (found in
__init__) and the namespace for submodule imports (represented
by the __path__ attribute) are both initialized at the same time,
when the package is first imported.
And, if you assume this is the only way to initialize these two
things, then there is no way to drop the need for an __init__
module, while still being backwards-compatible with existing directory
layouts.
After all, as soon as you encounter a directory on sys.path
matching the desired name, that means you’ve “found” the package, and
must stop searching, right?
Well, not quite.
A Thought Experiment
Let’s hop into the time machine for a moment, and pretend we’re back
in the early 1990s, shortly before Python packages and __init__.py
have been invented. But, imagine that we are familiar with
Perl-like package imports, and we want to implement a similar system
in Python.
We’d still have Python’s module imports to build on, so we could
certainly conceive of having Foo.py as a parent Foo module
for a Foo package. But how would we implement submodule and
subpackage imports?
Well, if we didn’t have the idea of __path__ attributes yet,
we’d probably just search sys.path looking for Foo/Bar.py.
But we’d only do it when someone actually tried to import
Foo.Bar.
NOT when they imported Foo.
And that lets us get rid of the backwards-compatibility problem
of dropping the __init__ requirement, back here in 2011.
How?
Well, when we import Foo, we’re not even looking for Foo/
directories on sys.path, because we don’t care yet. The only
point at which we care, is the point when somebody tries to actually
import a submodule or subpackage of Foo.
That means that if Foo is a standard library module (for example),
and I happen to have a Foo directory on sys.path (without
an __init__.py, of course), then nothing breaks. The Foo
module is still just a module, and it’s still imported normally.
Self-Contained vs. “Virtual” Packages
Of course, in today’s Python, trying to import Foo.Bar will
fail if Foo is just a Foo.py module (and thus lacks a
__path__ attribute).
So, this PEP proposes to dynamically create a __path__, in the
case where one is missing.
That is, if I try to import Foo.Bar the proposed change to the
import machinery will notice that the Foo module lacks a
__path__, and will therefore try to build one before proceeding.
And it will do this by making a list of all the existing Foo/
subdirectories of the directories listed in sys.path.
If the list is empty, the import will fail with ImportError, just
like today. But if the list is not empty, then it is saved in
a new Foo.__path__ attribute, making the module a “virtual
package”.
That is, because it now has a valid __path__, we can proceed
to import submodules or subpackages in the normal way.
Now, notice that this change does not affect “classic”, self-contained
packages that have an __init__ module in them. Such packages
already have a __path__ attribute (initialized at import time)
so the import machinery won’t try to create another one later.
This means that (for example) the standard library email package
will not be affected in any way by you having a bunch of unrelated
directories named email on sys.path. (Even if they contain
*.py files.)
But it does mean that if you want to turn your Foo module into
a Foo package, all you have to do is add a Foo/ directory
somewhere on sys.path, and start adding modules to it.
But what if you only want a “namespace package”? That is, a package
that is only a namespace for various separately-distributed
submodules and subpackages?
For example, if you’re Zope Corporation, distributing dozens of
separate tools like zc.buildout, each in packages under the zc
namespace, you don’t want to have to make and include an empty
zc.py in every tool you ship. (And, if you’re a Linux or other
OS vendor, you don’t want to deal with the package installation
conflicts created by trying to install ten copies of zc.py to the
same location!)
No problem. All we have to do is make one more minor tweak to the
import process: if the “classic” import process fails to find a
self-contained module or package (e.g., if import zc fails to find
a zc.py or zc/__init__.py), then we once more try to build a
__path__ by searching for all the zc/ directories on
sys.path, and putting them in a list.
If this list is empty, we raise ImportError. But if it’s
non-empty, we create an empty zc module, and put the list in
zc.__path__. Congratulations: zc is now a namespace-only,
“pure virtual” package! It has no module contents, but you can still
import submodules and subpackages from it, regardless of where they’re
located on sys.path.
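As a concrete, purely hypothetical illustration, two separately installed
directories laid out like this (note the absence of any zc/__init__.py):
site-packages-A/
    zc/
        buildout.py
site-packages-B/
    zc/
        monitoring/
            __init__.py
would together form the virtual zc package: after import zc.buildout
(or import zc.monitoring) succeeds, zc.__path__ would list both zc/
subdirectories.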
(By the way, both of these additions to the import protocol (i.e. the
dynamically-added __path__, and dynamically-created modules)
apply recursively to child packages, using the parent package’s
__path__ in place of sys.path as a basis for generating a
child __path__. This means that self-contained and virtual
packages can contain each other without limitation, with the caveat
that if you put a virtual package inside a self-contained one, it’s
gonna have a really short __path__!)
Backwards Compatibility and Performance
Notice that these two changes only affect import operations that
today would result in ImportError. As a result, the performance
of imports that do not involve virtual packages is unaffected, and
potential backward compatibility issues are very restricted.
Today, if you try to import submodules or subpackages from a module
with no __path__, it’s an immediate error. And of course, if you
don’t have a zc.py or zc/__init__.py somewhere on sys.path
today, import zc would likewise fail.
Thus, the only potential backwards-compatibility issues are:
Tools that expect package directories to have an __init__
module, that expect directories without an __init__ module
to be unimportable, or that expect __path__ attributes to be
static, will not recognize virtual packages as packages. (In practice,
this just means that tools will need updating to
support virtual packages, e.g. by using pkgutil.walk_modules()
instead of using hardcoded filesystem searches.)
Code that expects certain imports to fail may now do something
unexpected. This should be fairly rare in practice, as most sane,
non-test code does not import things that are expected not to
exist!
The biggest likely exception to the above would be when a piece of
code tries to check whether some package is installed by importing
it. If this is done only by importing a top-level module (i.e., not
checking for a __version__ or some other attribute), and there
is a directory of the same name as the sought-for package on
sys.path somewhere, and the package is not actually installed,
then such code could be fooled into thinking a package is installed
that really isn’t.
For example, suppose someone writes a script (datagen.py)
containing the following code:
try:
    import json
except ImportError:
    import simplejson as json
And runs it in a directory laid out like this:
datagen.py
json/
    foo.js
    bar.js
If import json succeeded due to the mere presence of the json/
subdirectory, the code would incorrectly believe that the json
module was available, and proceed to fail with an error.
However, we can prevent corner cases like these from arising, simply
by making one small change to the algorithm presented so far. Instead
of allowing you to import a “pure virtual” package (like zc),
we allow only importing of the contents of virtual packages.
That is, a statement like import zc should raise ImportError
if there is no zc.py or zc/__init__.py on sys.path. But,
doing import zc.buildout should still succeed, as long as there’s
a zc/buildout.py or zc/buildout/__init__.py on sys.path.
In other words, we don’t allow pure virtual packages to be imported
directly, only modules and self-contained packages. (This is an
acceptable limitation, because there is no functional value to
importing such a package by itself. After all, the module object
will have no contents until you import at least one of its
subpackages or submodules!)
Once zc.buildout has been successfully imported, though, there
will be a zc module in sys.modules, and trying to import it
will of course succeed. We are only preventing an initial import
from succeeding, in order to prevent false-positive import successes
when clashing subdirectories are present on sys.path.
So, with this slight change, the datagen.py example above will
work correctly. When it does import json, the mere presence of a
json/ directory will simply not affect the import process at all,
even if it contains .py files. The json/ directory will still
only be searched in the case where an import like import
json.converter is attempted.
Meanwhile, tools that expect to locate packages and modules by
walking a directory tree can be updated to use the existing
pkgutil.walk_modules() API, and tools that need to inspect
packages in memory should use the other APIs described in the
Standard Library Changes/Additions section below.
Specification
A change is made to the existing import process, when importing
names containing at least one . – that is, imports of modules
that have a parent package.
Specifically, if the parent package does not exist, or exists but
lacks a __path__ attribute, an attempt is first made to create a
“virtual path” for the parent package (following the algorithm
described in the section on virtual paths, below).
If the computed “virtual path” is empty, an ImportError results,
just as it would today. However, if a non-empty virtual path is
obtained, the normal import of the submodule or subpackage proceeds,
using that virtual path to find the submodule or subpackage. (Just
as it would have with the parent’s __path__, if the parent package
had existed and had a __path__.)
When a submodule or subpackage is found (but not yet loaded),
the parent package is created and added to sys.modules (if it
didn’t exist before), and its __path__ is set to the computed
virtual path (if it wasn’t already set).
In this way, when the actual loading of the submodule or subpackage
occurs, it will see a parent package existing, and any relative
imports will work correctly. However, if no submodule or subpackage
exists, then the parent package will not be created, nor will a
standalone module be converted into a package (by the addition of a
spurious __path__ attribute).
Note, by the way, that this change must be applied recursively: that
is, if foo and foo.bar are pure virtual packages, then
import foo.bar.baz must wait until foo.bar.baz is found before
creating module objects for both foo and foo.bar, and then
create both of them together, properly setting the foo module’s
.bar attribute to point to the foo.bar module.
In this way, pure virtual packages are never directly importable:
an import foo or import foo.bar by itself will fail, and the
corresponding modules will not appear in sys.modules until they
are needed to point to a successfully imported submodule or
self-contained subpackage.
Virtual Paths
A virtual path is created by obtaining a PEP 302 “importer” object for
each of the path entries found in sys.path (for a top-level
module) or the parent __path__ (for a submodule).
(Note: because sys.meta_path importers are not associated with
sys.path or __path__ entry strings, such importers do not
participate in this process.)
Each importer is checked for a get_subpath() method, and if
present, the method is called with the full name of the module/package
the path is being constructed for. The return value is either a
string representing a subdirectory for the requested package, or
None if no such subdirectory exists.
The strings returned by the importers are added to the path list
being built, in the same order as they are found. (None values
and missing get_subpath() methods are simply skipped.)
The resulting list (whether empty or not) is then stored in a
sys.virtual_package_paths dictionary, keyed by module name.
This dictionary has two purposes. First, it serves as a cache, in
the event that more than one attempt is made to import a submodule
of a virtual package.
Second, and more importantly, the dictionary can be used by code that
extends sys.path at runtime to update imported packages’
__path__ attributes accordingly. (See Standard Library
Changes/Additions below for more details.)
In Python code, the virtual path construction algorithm would look
something like this:
def get_virtual_path(modulename, parent_path=None):

    if modulename in sys.virtual_package_paths:
        return sys.virtual_package_paths[modulename]

    if parent_path is None:
        parent_path = sys.path

    path = []

    for entry in parent_path:
        # Obtain a PEP 302 importer object - see pkgutil module
        importer = pkgutil.get_importer(entry)
        if hasattr(importer, 'get_subpath'):
            subpath = importer.get_subpath(modulename)
            if subpath is not None:
                path.append(subpath)

    sys.virtual_package_paths[modulename] = path
    return path
And a function like this one should be exposed in the standard
library as e.g. imp.get_virtual_path(), so that people creating
__import__ replacements or sys.meta_path hooks can reuse it.
Standard Library Changes/Additions
The pkgutil module should be updated to handle this
specification appropriately, including any necessary changes to
extend_path(), iter_modules(), etc.
Specifically the proposed changes and additions to pkgutil are:
A new extend_virtual_paths(path_entry) function, to extend
existing, already-imported virtual packages’ __path__ attributes
to include any portions found in a new sys.path entry. This
function should be called by applications extending sys.path
at runtime, e.g. when adding a plugin directory or an egg to the
path (a rough sketch of this function appears after this list). The
implementation of this function does a simple top-down traversal
of sys.virtual_package_paths, and performs any necessary
get_subpath() calls to identify what path entries need to be
added to the virtual path for that package, given that path_entry
has been added to sys.path. (Or, in the case of sub-packages,
adding a derived subpath entry, based on their parent package’s
virtual path.)
(Note: this function must update both the path values in
sys.virtual_package_paths as well as the __path__ attributes
of any corresponding modules in sys.modules, even though in the
common case they will both be the same list object.)
A new iter_virtual_packages(parent='') function to allow
top-down traversal of virtual packages from
sys.virtual_package_paths, by yielding the child virtual
packages of parent. For example, calling
iter_virtual_packages("zope") might yield zope.app
and zope.products (if they are virtual packages listed in
sys.virtual_package_paths), but not zope.foo.bar.
(This function is needed to implement extend_virtual_paths(),
but is also potentially useful for other code that needs to inspect
imported virtual packages.)
ImpImporter.iter_modules() should be changed to also detect and
yield the names of modules found in virtual packages.
In addition to the above changes, the zipimport importer should
have its iter_modules() implementation similarly changed. (Note:
current versions of Python implement this via a shim in pkgutil,
so technically this is also a change to pkgutil.)
Last, but not least, the imp module (or importlib, if
appropriate) should expose the algorithm described in the virtual
paths section above, as a
get_virtual_path(modulename, parent_path=None) function, so that
creators of __import__ replacements can use it.
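To make the intent of extend_virtual_paths() more concrete, here is a
rough, unofficial sketch; it ignores the sub-package case noted above and
assumes the get_subpath() protocol and sys.virtual_package_paths proposed
by this PEP:
import sys
import pkgutil

def extend_virtual_paths(path_entry):
    # Sketch only: add portions found under path_entry to the virtual
    # paths of already-imported virtual packages.
    importer = pkgutil.get_importer(path_entry)
    if importer is None or not hasattr(importer, 'get_subpath'):
        return
    for name, path in sys.virtual_package_paths.items():
        subpath = importer.get_subpath(name)
        if subpath is not None and subpath not in path:
            # In the common case module.__path__ is the same list object,
            # so appending here updates the imported package as well.
            path.append(subpath)
            module = sys.modules.get(name)
            if module is not None and module.__path__ is not path:
                module.__path__.append(subpath)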
Implementation Notes
For users, developers, and distributors of virtual packages:
While virtual packages are easy to set up and use, there is still
a time and place for using self-contained packages. While it’s not
strictly necessary, adding an __init__ module to your
self-contained packages lets users of the package (and Python
itself) know that all of the package’s code will be found in
that single subdirectory. In addition, it lets you define
__all__, expose a public API, provide a package-level docstring,
and do other things that make more sense for a self-contained
project than for a mere “namespace” package.
sys.virtual_package_paths is allowed to contain entries for
non-existent or not-yet-imported package names; code that uses its
contents should not assume that every key in this dictionary is also
present in sys.modules or that importing the name will
necessarily succeed.
If you are changing a currently self-contained package into a
virtual one, it’s important to note that you can no longer use its
__file__ attribute to locate data files stored in a package
directory. Instead, you must search __path__ or use the
__file__ of a submodule adjacent to the desired files, or
of a self-contained subpackage that contains the desired files (a sketch
of such a lookup follows these notes). (Note: this caveat is already true
for existing users of “namespace
packages” today. That is, it is an inherent result of being able
to partition a package, that you must know which partition the
desired data file lives in. We mention it here simply so that
new users converting from self-contained to virtual packages will
also be aware of it.)
XXX what is the __file__ of a “pure virtual” package? None?
Some arbitrary string? The path of the first directory with a
trailing separator? No matter what we put, some code is
going to break, but the last choice might allow some code to
accidentally work. Is that good or bad?
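A hedged sketch of the data-file lookup described in the notes above
(zc.buildout and defaults.cfg are purely illustrative names, and the
code assumes a partitioned/virtual zc package is installed):
import os
import zc.buildout  # importing a submodule materialises zc.__path__

config_path = None
for entry in zc.__path__:
    candidate = os.path.join(entry, 'defaults.cfg')
    if os.path.exists(candidate):
        config_path = candidate
        break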
For those implementing PEP 302 importer objects:
Importers that support the iter_modules() method (used by
pkgutil to locate importable modules and packages) and want to
add virtual package support should modify their iter_modules()
method so that it discovers and lists virtual packages as well as
standard modules and packages. To do this, the importer should
simply list all immediate subdirectory names in its jurisdiction
that are valid Python identifiers.
XXX This might list a lot of not-really-packages. Should we
require importable contents to exist? If so, how deep do we
search, and how do we prevent e.g. link loops, or traversing onto
different filesystems, etc.? Ick. Also, if virtual packages are
listed, they still can’t be imported, which is a problem for the
way that pkgutil.walk_modules() is currently implemented.
“Meta” importers (i.e., importers placed on sys.meta_path) do
not need to implement get_subpath(), because the method
is only called on importers corresponding to sys.path entries
and __path__ entries. If a meta importer wishes to support
virtual packages, it must do so entirely within its own
find_module() implementation.
Unfortunately, it is unlikely that any such implementation will be
able to merge its package subpaths with those of other meta
importers or sys.path importers, so the meaning of “supporting
virtual packages” for a meta importer is currently undefined!
(However, since the intended use case for meta importers is to
replace Python’s normal import process entirely for some subset of
modules, and the number of such importers currently implemented is
quite small, this seems unlikely to be a big issue in practice.)
References
[1]
“namespace” vs “module” packages (mailing list thread)
(http://mail.zope.org/pipermail/zope3-dev/2002-December/004251.html)
[2]
“Dropping __init__.py requirement for subpackages”
(https://mail.python.org/pipermail/python-dev/2006-April/064400.html)
[3]
Namespace Packages resolution
(https://mail.python.org/pipermail/import-sig/2012-March/000421.html)
Copyright
This document has been placed in the public domain.
| Rejected | PEP 402 – Simplified Package Layout and Partitioning | Standards Track | This PEP proposes an enhancement to Python’s package importing
to: |
PEP 403 – General purpose decorator clause (aka “@in” clause)
Author:
Alyssa Coghlan <ncoghlan at gmail.com>
Status:
Deferred
Type:
Standards Track
Created:
13-Oct-2011
Python-Version:
3.4
Post-History:
13-Oct-2011
Table of Contents
Abstract
Basic Examples
Proposal
Syntax Change
Design Discussion
Background
Relation to PEP 3150
Keyword Choice
Better Debugging Support for Functions and Classes with Short Names
Possible Implementation Strategy
Explaining Container Comprehensions and Generator Expressions
More Examples
Reference Implementation
Acknowledgements
Rejected Concepts
Omitting the decorator prefix character
Anonymous Forward References
Using a nested suite
References
Copyright
Abstract
This PEP proposes the addition of a new @in decorator clause that makes
it possible to override the name binding step of a function or class
definition.
The new clause accepts a single simple statement that can make a forward
reference to decorated function or class definition.
This new clause is designed to be used whenever a “one-shot” function or
class is needed, and placing the function or class definition before the
statement that uses it actually makes the code harder to read. It also
avoids any name shadowing concerns by making sure the new name is visible
only to the statement in the @in clause.
This PEP is based heavily on many of the ideas in PEP 3150 (Statement Local
Namespaces) so some elements of the rationale will be familiar to readers of
that PEP. Both PEPs remain deferred for the time being, primarily due to the
lack of compelling real world use cases in either PEP.
Basic Examples
Before diving into the long history of this problem and the detailed
rationale for this specific proposed solution, here are a few simple
examples of the kind of code it is designed to simplify.
As a trivial example, a weakref callback could be defined as follows:
@in x = weakref.ref(target, report_destruction)
def report_destruction(obj):
    print("{} is being destroyed".format(obj))
This contrasts with the current (conceptually) “out of order” syntax for
this operation:
def report_destruction(obj):
    print("{} is being destroyed".format(obj))
x = weakref.ref(target, report_destruction)
That structure is OK when you’re using the callable multiple times, but
it’s irritating to be forced into it for one-off operations.
If the repetition of the name seems especially annoying, then a throwaway
name like f can be used instead:
@in x = weakref.ref(target, f)
def f(obj):
    print("{} is being destroyed".format(obj))
Similarly, a sorted operation on a particularly poorly defined type could
now be defined as:
@in sorted_list = sorted(original, key=f)
def f(item):
    try:
        return item.calc_sort_order()
    except NotSortableError:
        return float('inf')
Rather than:
def force_sort(item):
    try:
        return item.calc_sort_order()
    except NotSortableError:
        return float('inf')
sorted_list = sorted(original, key=force_sort)
And early binding semantics in a list comprehension could be attained via:
@in funcs = [adder(i) for i in range(10)]
def adder(i):
    return lambda x: x + i
Proposal
This PEP proposes the addition of a new @in clause that is a variant
of the existing class and function decorator syntax.
The new @in clause precedes the decorator lines, and allows forward
references to the trailing function or class definition.
The trailing function or class definition is always named - the name of
the trailing definition is then used to make the forward reference from the
@in clause.
The @in clause is allowed to contain any simple statement (including
those that don’t make any sense in that context, such as pass - while
such code would be legal, there wouldn’t be any point in writing it). This
permissive structure is easier to define and easier to explain, but a more
restrictive approach that only permits operations that “make sense” would
also be possible (see PEP 3150 for a list of possible candidates).
The @in clause will not create a new scope - all name binding
operations aside from the trailing function or class definition will affect
the containing scope.
The name used in the trailing function or class definition is only visible
from the associated @in clause, and behaves as if it was an ordinary
variable defined in that scope. If any nested scopes are created in either
the @in clause or the trailing function or class definition, those scopes
will see the trailing function or class definition rather than any other
bindings for that name in the containing scope.
In a very real sense, this proposal is about making it possible to override
the implicit “name = <defined function or class>” name binding operation
that is part of every function or class definition, specifically in those
cases where the local name binding isn’t actually needed.
Under this PEP, an ordinary class or function definition:
@deco2
@deco1
def name():
    ...
can be explained as being roughly equivalent to:
@in name = deco2(deco1(name))
def name():
    ...
Syntax Change
Syntactically, only one new grammar rule is needed:
in_stmt: '@in' simple_stmt decorated
Grammar: http://hg.python.org/cpython/file/default/Grammar/Grammar
Design Discussion
Background
The question of “multi-line lambdas” has been a vexing one for many
Python users for a very long time, and it took an exploration of Ruby’s
block functionality for me to finally understand why this bugs people
so much: Python’s demand that the function be named and introduced
before the operation that needs it breaks the developer’s flow of thought.
They get to a point where they go “I need a one-shot operation that does
<X>”, and instead of being able to just say that directly, they instead
have to back up, name a function to do <X>, then call that function from
the operation they actually wanted to do in the first place. Lambda
expressions can help sometimes, but they’re no substitute for being able to
use a full suite.
Ruby’s block syntax also heavily inspired the style of the solution in this
PEP, by making it clear that even when limited to one anonymous function per
statement, anonymous functions could still be incredibly useful. Consider how
many constructs Python has where one expression is responsible for the bulk of
the heavy lifting:
comprehensions, generator expressions, map(), filter()
key arguments to sorted(), min(), max()
partial function application
provision of callbacks (e.g. for weak references or asynchronous IO)
array broadcast operations in NumPy
However, adopting Ruby’s block syntax directly won’t work for Python, since
the effectiveness of Ruby’s blocks relies heavily on various conventions in
the way functions are defined (specifically, using Ruby’s yield syntax
to call blocks directly and the &arg mechanism to accept a block as a
function’s final argument).
Since Python has relied on named functions for so long, the signatures of
APIs that accept callbacks are far more diverse, thus requiring a solution
that allows one-shot functions to be slotted in at the appropriate location.
The approach taken in this PEP is to retain the requirement to name the
function explicitly, but allow the relative order of the definition and the
statement that references it to be changed to match the developer’s flow of
thought. The rationale is essentially the same as that used when introducing
decorators, but covering a broader set of applications.
Relation to PEP 3150
PEP 3150 (Statement Local Namespaces) describes its primary motivation
as being to elevate ordinary assignment statements to be on par with class
and def statements where the name of the item to be defined is presented
to the reader in advance of the details of how the value of that item is
calculated. This PEP achieves the same goal in a different way, by allowing
the simple name binding of a standard function definition to be replaced
with something else (like assigning the result of the function to a value).
Despite having the same author, the two PEPs are in direct competition with
each other. PEP 403 represents a minimalist approach that attempts to achieve
useful functionality with a minimum of change from the status quo. PEP 3150
instead aims for a more flexible standalone statement design, which requires
a larger degree of change to the language.
Note that where PEP 403 is better suited to explaining the behaviour of
generator expressions correctly, PEP 3150 is better able to explain the
behaviour of decorator clauses in general. Both PEPs support adequate
explanations for the semantics of container comprehensions.
Keyword Choice
The proposal definitely requires some kind of prefix to avoid parsing
ambiguity and backwards compatibility problems with existing constructs.
It also needs to be clearly highlighted to readers, since it declares that
the following piece of code is going to be executed only after the trailing
function or class definition has been executed.
The in keyword was chosen as an existing keyword that can be used to
denote the concept of a forward reference.
The @ prefix was included in order to exploit the fact that Python
programmers are already used to decorator syntax as an indication of
out of order execution, where the function or class is actually defined
first and then decorators are applied in reverse order.
For functions, the construct is intended to be read as “in <this statement
that references NAME> define NAME as a function that does <operation>”.
The mapping to English prose isn’t as obvious for the class definition case,
but the concept remains the same.
Better Debugging Support for Functions and Classes with Short Names
One of the objections to widespread use of lambda expressions is that they
have a negative effect on traceback intelligibility and other aspects of
introspection. Similar objections are raised regarding constructs that
promote short, cryptic function names (including this one, which requires
that the name of the trailing definition be supplied at least twice,
encouraging the use of shorthand placeholder names like f).
However, the introduction of qualified names in PEP 3155 means that even
anonymous classes and functions will now have different representations if
they occur in different scopes. For example:
>>> def f():
...     return lambda: y
...
>>> f()
<function f.<locals>.<lambda> at 0x7f6f46faeae0>
Anonymous functions (or functions that share a name) within the same scope
will still share representations (aside from the object ID), but this is
still a major improvement over the historical situation where everything
except the object ID was identical.
Possible Implementation Strategy
This proposal has at least one titanic advantage over PEP 3150:
implementation should be relatively straightforward.
The @in clause will be included in the AST for the associated function or
class definition and the statement that references it. When the @in
clause is present, it will be emitted in place of the local name binding
operation normally implied by a function or class definition.
The one potentially tricky part is changing the meaning of the references to
the statement local function or namespace while within the scope of the
in statement, but that shouldn’t be too hard to address by maintaining
some additional state within the compiler (it’s much easier to handle this
for a single name than it is for an unknown number of names in a full
nested suite).
Explaining Container Comprehensions and Generator Expressions
One interesting feature of the proposed construct is that it can be used as
a primitive to explain the scoping and execution order semantics of
both generator expressions and container comprehensions:
seq2 = [x for y in seq if p(y) for x in y if q(x)]
# would be equivalent to
@in seq2 = f(seq)
def f(seq):
    result = []
    for y in seq:
        if p(y):
            for x in y:
                if q(x):
                    result.append(x)
    return result
The important point in this expansion is that it explains why comprehensions
appear to misbehave at class scope: only the outermost iterator is evaluated
at class scope, while all predicates, nested iterators and value expressions
are evaluated inside a nested scope.
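A small runnable example of the class scope behaviour this expansion explains
(the class and attribute names are illustrative):
class Config:
    names = ["a", "b", "c"]

    # Works: 'names' is the outermost iterable, evaluated at class scope
    # and passed into the hidden comprehension function.
    upper = [n.upper() for n in names]

    # Would raise NameError if uncommented: the inner loop looks up 'names'
    # from inside the comprehension's nested scope, where class-level names
    # are not visible.
    # pairs = [(x, y) for x in names for y in names]

print(Config.upper)   # ['A', 'B', 'C']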
An equivalent expansion is possible for generator expressions:
gen = (x for y in seq if p(y) for x in y if q(x))
# would be equivalent to
@in gen = g(seq)
def g(seq):
    for y in seq:
        if p(y):
            for x in y:
                if q(x):
                    yield x
More Examples
Calculating attributes without polluting the local namespace (from os.py):
# Current Python (manual namespace cleanup)
def _createenviron():
... # 27 line function
environ = _createenviron()
del _createenviron
# Becomes:
@in environ = _createenviron()
def _createenviron():
... # 27 line function
Loop early binding:
# Current Python (default argument hack)
funcs = [(lambda x, i=i: x + i) for i in range(10)]
# Becomes:
@in funcs = [adder(i) for i in range(10)]
def adder(i):
return lambda x: x + i
# Or even:
@in funcs = [adder(i) for i in range(10)]
def adder(i):
@in return incr
def incr(x):
return x + i
A trailing class can be used as a statement local namespace:
# Evaluate subexpressions only once
@in c = math.sqrt(x.a*x.a + x.b*x.b)
class x:
a = calculate_a()
b = calculate_b()
A function can be bound directly to a location which isn’t a valid
identifier:
@in dispatch[MyClass] = f
def f():
...
Constructs that verge on decorator abuse can be eliminated:
# Current Python
@call
def f():
...
# Becomes:
@in f()
def f():
...
Reference Implementation
None as yet.
Acknowledgements
Huge thanks to Gary Bernhardt for being blunt in pointing out that I had no
idea what I was talking about in criticising Ruby’s blocks, kicking off a
rather enlightening process of investigation.
Rejected Concepts
To avoid retreading previously covered ground, some rejected alternatives
are documented in this section.
Omitting the decorator prefix character
Earlier versions of this proposal omitted the @ prefix. However, without
that prefix, the bare in keyword didn’t associate the clause strongly
enough with the subsequent function or class definition. Reusing the
decorator prefix and explicitly characterising the new construct as a kind
of decorator clause is intended to help users link the two concepts and
see them as two variants of the same idea.
Anonymous Forward References
A previous incarnation of this PEP (see [1]) proposed a syntax where the
new clause was introduced with : and the forward reference was written
using @. Feedback on this variant was almost universally
negative, as it was considered both ugly and excessively magical:
:x = weakref.ref(target, @)
def report_destruction(obj):
print("{} is being destroyed".format(obj))
A more recent variant always used ... for forward references, along
with genuinely anonymous function and class definitions. However, this
degenerated quickly into a mass of unintelligible dots in more complex
cases:
in funcs = [...(i) for i in range(10)]
def ...(i):
in return ...
def ...(x):
return x + i
in c = math.sqrt(....a*....a + ....b*....b)
class ...:
a = calculate_a()
b = calculate_b()
Using a nested suite
The problems with using a full nested suite are best described in
PEP 3150. It’s comparatively difficult to implement properly, the scoping
semantics are harder to explain and it creates quite a few situations where
there are two ways to do it without clear guidelines for choosing between
them (as almost any construct that can be expressed with ordinary imperative
code could instead be expressed using a given statement). While the PEP does
propose some new PEP 8 guidelines to help address that last problem, the
difficulties in implementation are not so easily dealt with.
By contrast, the decorator inspired syntax in this PEP explicitly limits the
new feature to cases where it should actually improve readability, rather
than harming it. As in the case of the original introduction of decorators,
the idea of this new syntax is that if it can be used (i.e. the local name
binding of the function is completely unnecessary) then it probably should
be used.
Another possible variant of this idea is to keep the decorator based
semantics of this PEP, while adopting the prettier syntax from PEP 3150:
x = weakref.ref(target, report_destruction) given:
def report_destruction(obj):
print("{} is being destroyed".format(obj))
There are a couple of problems with this approach. The main issue is that
this syntax variant uses something that looks like a suite, but really isn’t
one. A secondary concern is that it’s not clear how the compiler will know
which name(s) in the leading expression are forward references (although
that could potentially be addressed through a suitable definition of the
suite-that-is-not-a-suite in the language grammar).
However, a nested suite has not yet been ruled out completely. The latest
version of PEP 3150 uses explicit forward reference and name binding
schemes that greatly simplify the semantics of the statement, and it
does offer the advantage of allowing the definition of arbitrary
subexpressions rather than being restricted to a single function or
class definition.
References
[1]
Start of python-ideas thread:
https://mail.python.org/pipermail/python-ideas/2011-October/012276.html
Copyright
This document has been placed in the public domain.
| Deferred | PEP 403 – General purpose decorator clause (aka “@in” clause) | Standards Track | This PEP proposes the addition of a new @in decorator clause that makes
it possible to override the name binding step of a function or class
definition. |
PEP 404 – Python 2.8 Un-release Schedule
Author:
Barry Warsaw <barry at python.org>
Status:
Final
Type:
Informational
Topic:
Release
Created:
09-Nov-2011
Python-Version:
2.8
Table of Contents
Abstract
Un-release Manager and Crew
Un-release Schedule
Official pronouncement
Upgrade path
And Now For Something Completely Different
Strings and bytes
Numbers
Classes
Multiple spellings
Imports
Iterators and views
Copyright
Abstract
This document describes the un-development and un-release schedule for Python
2.8.
Un-release Manager and Crew
Position                  Name
2.8 Un-release Manager    Cardinal Biggles
Un-release Schedule
The current un-schedule is:
2.8 final Never
Official pronouncement
Rule number six: there is no official Python 2.8 release. There never will
be an official Python 2.8 release. It is an ex-release. Python 2.7
is the end of the Python 2 line of development.
Upgrade path
The official upgrade path from Python 2.7 is to Python 3.
And Now For Something Completely Different
In all seriousness, there are important reasons why there won’t be an
official Python 2.8 release, and why you should plan to migrate
instead to Python 3.
Python is (as of this writing) more than 20 years old, and Guido and the
community have learned a lot in those intervening years. Guido’s
original concept for Python 3 was to make changes to the language
primarily to remove the warts that had grown in the preceding
versions. Python 3 was not to be a complete redesign, but instead an
evolution of the language, and while maintaining full backward
compatibility with Python 2 was explicitly off-the-table, neither were
gratuitous changes in syntax or semantics acceptable. In most cases,
Python 2 code can be translated fairly easily to Python 3, sometimes
entirely mechanically by such tools as 2to3 (there’s also a non-trivial
subset of the language that will run without modification on both 2.7 and
3.x).
Because maintaining multiple versions of Python is a significant drag
on the resources of the Python developers, and because the
improvements to the language and libraries embodied in Python 3 are so
important, it was decided to end the Python 2 lineage with Python
2.7. Thus, all new development occurs in the Python 3 line of
development, and there will never be an official Python 2.8 release.
Python 2.7 will however be maintained for longer than the usual period
of time.
Here are some highlights of the significant improvements in Python 3.
You can read in more detail on the differences between Python 2 and
Python 3. There are also many good guides on porting from Python 2
to Python 3.
Strings and bytes
Python 2’s basic original strings are called 8-bit strings, and
they play a dual role in Python 2 as both ASCII text and as byte
sequences. While Python 2 also has a unicode string type, the
fundamental ambiguity of the core string type, coupled with Python 2’s
default behavior of supporting automatic coercion from 8-bit strings
to unicode objects when the two are combined, often leads to
UnicodeErrors. Python 3’s standard string type is Unicode based, and
Python 3 adds a dedicated bytes type, but critically, no automatic coercion
between bytes and unicode strings is provided. The closest the language gets
to implicit coercion are a few text-based APIs that assume a default
encoding (usually UTF-8) if no encoding is explicitly stated. Thus, the core
interpreter, its I/O libraries, module names, etc. are clear in their
distinction between unicode strings and bytes. Python 3’s unicode
support even extends to the filesystem, so that non-ASCII file names are
natively supported.
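A brief runnable illustration of the explicit str/bytes boundary described
above:
data = "caf\u00e9".encode("utf-8")    # bytes: b'caf\xc3\xa9'
text = data.decode("utf-8")           # str: 'café'
print(type(data), type(text))         # <class 'bytes'> <class 'str'>
# "caf" + data would raise TypeError: Python 3 never coerces bytes to str.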
This string/bytes clarity is often a source of difficulty in
transitioning existing code to Python 3, because many third party
libraries and applications are themselves ambiguous in this
distinction. Once migrated though, most UnicodeErrors can be
eliminated.
Numbers
Python 2 has two basic integer types, a native machine-sized int
type, and an arbitrary length long type. These have been merged in
Python 3 into a single int type analogous to Python 2’s long
type.
In addition, integer division now produces floating point numbers for
non-integer results.
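For example:
print(7 / 2)             # 3.5 - true division returns a float
print(7 // 2)            # 3   - floor division stays an integer
print(type(2 ** 100))    # <class 'int'> - a single arbitrary-precision type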
Classes
Python 2 has two core class hierarchies, often called classic
classes and new-style classes. The latter allow for such things as
inheriting from the builtin basic types, support descriptor based tools
like the property builtin and provide a generally more sane and coherent
system for dealing with multiple inheritance. Python 3 provided the
opportunity to completely drop support for classic classes, so all classes
in Python 3 automatically use the new-style semantics (although that’s a
misnomer now). There is no need to explicitly inherit from object or set
the default metatype to enable them (in fact, setting a default metatype at
the module level is no longer supported - the default metatype is always
object).
The mechanism for explicitly specifying a metaclass has also changed to use
a metaclass keyword argument in the class header line rather than a
__metaclass__ magic attribute in the class body.
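A short runnable example of the Python 3 spelling (the Meta and Widget names
are illustrative):
class Meta(type):
    def __new__(mcls, name, bases, namespace):
        namespace.setdefault("registered", True)
        return super().__new__(mcls, name, bases, namespace)

class Widget(metaclass=Meta):      # Python 3: keyword in the class header
    pass

print(Widget.registered)           # True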
Multiple spellings
There are many cases in Python 2 where multiple spellings of some
constructs exist, such as repr() and backticks, or the two
inequality operators != and <>. In all cases, Python 3 has chosen
exactly one spelling and removed the other (e.g. repr() and !=
were kept).
Imports
In Python 3, implicit relative imports within packages are no longer
available - only absolute imports and explicit relative imports are
supported. In addition, star imports (e.g. from x import *) are only
permitted in module level code.
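For instance, inside a hypothetical package module such as mypkg/views.py
(the package, module and class names here are purely illustrative):
from . import helpers             # explicit relative import
from .models import Record        # explicit relative import of a name
import json                       # absolute import, resolved via sys.path
# A bare "import helpers" would no longer find a sibling module implicitly.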
Also, some areas of the standard library have been reorganized to make
the naming scheme more intuitive. Some rarely used builtins have been
relocated to standard library modules.
Iterators and views
Many APIs, which in Python 2 returned concrete lists, in Python 3 now
return iterators or lightweight views.
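For example:
d = {"a": 1, "b": 2}
print(d.keys())                    # dict_keys(['a', 'b']) - a view, not a list
print(list(d.keys()))              # materialise explicitly when needed
print(range(3))                    # range(0, 3) - lazy, unlike Python 2's list
print(list(map(str, range(3))))    # ['0', '1', '2']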
Copyright
This document has been placed in the public domain.
| Final | PEP 404 – Python 2.8 Un-release Schedule | Informational | This document describes the un-development and un-release schedule for Python
2.8. |
PEP 406 – Improved Encapsulation of Import State
Author:
Alyssa Coghlan <ncoghlan at gmail.com>, Greg Slodkowicz <jergosh at gmail.com>
Status:
Withdrawn
Type:
Standards Track
Created:
04-Jul-2011
Python-Version:
3.4
Post-History:
31-Jul-2011, 13-Nov-2011, 04-Dec-2011
Table of Contents
Abstract
PEP Withdrawal
Rationale
Proposal
Specification
ImportEngine API
Global variables
No changes to finder/loader interfaces
Open Issues
API design for falling back to global import state
Builtin and extension modules must be process global
Scope of substitution
Reference Implementation
References
Copyright
Abstract
This PEP proposes the introduction of a new ‘ImportEngine’ class as part of
importlib which would encapsulate all state related to importing modules
into a single object. Creating new instances of this object would then provide
an alternative to completely replacing the built-in implementation of the
import statement, by overriding the __import__() function. To work with
the builtin import functionality and importing via import engine objects,
this PEP proposes a context management based approach to temporarily replacing
the global import state.
The PEP also proposes inclusion of a GlobalImportEngine subclass and a
globally accessible instance of that class, which “writes through” to the
process global state. This provides a backwards compatible bridge between the
proposed encapsulated API and the legacy process global state, and allows
straightforward support for related state updates (e.g. selectively
invalidating path cache entries when sys.path is modified).
PEP Withdrawal
The import system has seen substantial changes since this PEP was originally
written, as part of PEP 420 in Python 3.3 and PEP 451 in Python 3.4.
While providing an encapsulation of the import state is still highly
desirable, it is better tackled in a new PEP using PEP 451 as a foundation,
and permitting only the use of PEP 451 compatible finders and loaders (as
those avoid many of the issues of direct manipulation of global state
associated with the previous loader API).
Rationale
Currently, most state related to the import system is stored as module level
attributes in the sys module. The one exception is the import lock, which
is not accessible directly, but only via the related functions in the imp
module. The current process global import state comprises:
sys.modules
sys.path
sys.path_hooks
sys.meta_path
sys.path_importer_cache
the import lock (imp.lock_held()/acquire_lock()/release_lock())
Isolating this state would allow multiple import states to be
conveniently stored within a process. Placing the import functionality
in a self-contained object would also allow subclassing to add additional
features (e.g. module import notifications or fine-grained control
over which modules can be imported). The engine would also be
subclassed to make it possible to use the import engine API to
interact with the existing process-global state.
The namespace PEPs (especially PEP 402) raise a potential need for
additional process global state, in order to correctly update package paths
as sys.path is modified.
Finally, providing a coherent object for all this state makes it feasible to
also provide context management features that allow the import state to be
temporarily substituted.
Proposal
We propose introducing an ImportEngine class to encapsulate import
functionality. This includes an __import__() method which can
be used as an alternative to the built-in __import__() when
desired and also an import_module() method, equivalent to
importlib.import_module() [3].
Since there are global import state invariants that are assumed and should be
maintained, we introduce a GlobalImportEngine class with an interface
identical to ImportEngine but directly accessing the current global import
state. This can be easily implemented using class properties.
Specification
ImportEngine API
The proposed extension consists of the following objects:
importlib.engine.ImportEngine
from_engine(self, other)
Create a new import object from another ImportEngine instance. The
new object is initialised with a copy of the state in other. When
called on importlib.engine.sysengine, from_engine() can be
used to create an ImportEngine object with a copy of the
global import state.
__import__(self, name, globals={}, locals={}, fromlist=[], level=0)
Reimplementation of the builtin __import__() function. The
import of a module will proceed using the state stored in the
ImportEngine instance rather than the global import state. For full
documentation of __import__ functionality, see [2] .
__import__() from ImportEngine and its subclasses can be used
to customise the behaviour of the import statement by replacing
__builtin__.__import__ with ImportEngine().__import__.
import_module(name, package=None)
A reimplementation of importlib.import_module() which uses the
import state stored in the ImportEngine instance. See [3] for a full
reference.
modules, path, path_hooks, meta_path, path_importer_cache
Instance-specific versions of their process global sys equivalents
importlib.engine.GlobalImportEngine(ImportEngine)
Convenience class to provide engine-like access to the global state.
Provides __import__(), import_module() and from_engine()
methods like ImportEngine but writes through to the global state
in sys.
To support various namespace package mechanisms, when sys.path is altered,
tools like pkgutil.extend_path should be used to also modify other parts
of the import state (in this case, package __path__ attributes). The path
importer cache should also be invalidated when a variety of changes are made.
The ImportEngine API will provide convenience methods that automatically
make related import state updates as part of a single operation.
Global variables
importlib.engine.sysengine
A precreated instance of GlobalImportEngine. Intended for use by
importers and loaders that have been updated to accept optional engine
parameters and with ImportEngine.from_engine(sysengine) to start with
a copy of the process global import state.
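A hedged usage sketch of the proposed API (never implemented; the names come
from the specification above, and the plugin path and module name are invented
for illustration):
from importlib import engine                  # proposed, hypothetical module

my_engine = engine.ImportEngine.from_engine(engine.sysengine)
my_engine.path.insert(0, "/opt/plugins")      # isolated copy of sys.path
mod = my_engine.import_module("plugin_main")  # resolved via the engine's state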
No changes to finder/loader interfaces
Rather than attempting to update the PEP 302 APIs to accept additional state,
this PEP proposes that ImportEngine support the context management
protocol (similar to the context substitution mechanisms in the decimal
module).
The context management mechanism for ImportEngine would:
On entry:
* Acquire the import lock
* Substitute the global import state with the import engine’s own state
On exit:
* Restore the previous global import state
* Release the import lock
The precise API for this is TBD (but will probably use a distinct context
management object, along the lines of that created by
decimal.localcontext).
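Continuing the hedged sketch above, the context management protocol might have
been used like this (purely illustrative of the proposal):
with my_engine:            # acquire the import lock, swap in the engine's state
    import plugin_extras   # resolved against the engine's path/modules
# On exit, the previous global import state is restored and the lock released.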
Open Issues
API design for falling back to global import state
The current proposal relies on the from_engine() API to fall back to the
global import state. It may be desirable to offer a variant that instead falls
back to the global import state dynamically.
However, one big advantage of starting with an “as isolated as possible”
design is that it becomes possible to experiment with subclasses that blur
the boundaries between the engine instance state and the process global state
in various ways.
Builtin and extension modules must be process global
Due to platform limitations, only one copy of each builtin and extension
module can readily exist in each process. Accordingly, it is impossible for
each ImportEngine instance to load such modules independently.
The simplest solution is for ImportEngine to refuse to load such modules,
raising ImportError. GlobalImportEngine would be able to load them
normally.
ImportEngine will still return such modules from a prepopulated module
cache - it’s only loading them directly which causes problems.
Scope of substitution
Related to the previous open issue is the question of what state to substitute
when using the context management API. It is currently the case that replacing
sys.modules can be unreliable due to cached references and there’s the
underlying fact that having independent copies of some modules is simply
impossible due to platform limitations.
As part of this PEP, it will be necessary to document explicitly:
Which parts of the global import state can be substituted (and declare code
which caches references to that state without dealing with the substitution
case buggy)
Which parts must be modified in-place (and hence are not substituted by the
ImportEngine context management API, or otherwise scoped to
ImportEngine instances)
Reference Implementation
A reference implementation [4] for an earlier draft of this PEP, based on
Brett Cannon’s importlib has been developed by Greg Slodkowicz as part of the
2011 Google Summer of Code. Note that the current implementation avoids
modifying existing code, and hence duplicates a lot of things unnecessarily.
An actual implementation would just modify any such affected code in place.
That earlier draft of the PEP proposed changing the PEP 302 APIs to support passing
in an optional engine instance. This had the (serious) downside of not correctly
affecting further imports from the imported module, hence the change to the
context management based proposal for substituting the global state.
References
[2]
__import__() builtin function, The Python Standard Library documentation
(http://docs.python.org/library/functions.html#__import__)
[3] (1, 2)
Importlib documentation, Cannon
(http://docs.python.org/dev/library/importlib)
[4]
Reference implementation
(https://bitbucket.org/jergosh/gsoc_import_engine/src/default/Lib/importlib/engine.py)
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 406 – Improved Encapsulation of Import State | Standards Track | This PEP proposes the introduction of a new ‘ImportEngine’ class as part of
importlib which would encapsulate all state related to importing modules
into a single object. Creating new instances of this object would then provide
an alternative to completely replacing the built-in implementation of the
import statement, by overriding the __import__() function. To work with
the builtin import functionality and importing via import engine objects,
this PEP proposes a context management based approach to temporarily replacing
the global import state. |
PEP 407 – New release cycle and introducing long-term support versions
Author:
Antoine Pitrou <solipsis at pitrou.net>,
Georg Brandl <georg at python.org>,
Barry Warsaw <barry at python.org>
Status:
Deferred
Type:
Process
Created:
12-Jan-2012
Post-History:
17-Jan-2012
Table of Contents
Abstract
Scope
Proposal
Periodicity
Pre-release versions
Effects
Effect on development cycle
Effect on bugfix cycle
Effect on workflow
Effect on the community
Discussion
Copyright
Abstract
Finding a release cycle for an open-source project is a delicate
exercise in managing mutually contradicting constraints: developer
manpower, availability of release management volunteers, ease of
maintenance for users and third-party packagers, quick availability of
new features (and behavioural changes), availability of bug fixes
without pulling in new features or behavioural changes.
The current release cycle errs on the conservative side. It is
adequate for people who value stability over reactivity. This PEP is
an attempt to keep the stability that has become a Python trademark,
while offering a more fluid release of features, by introducing the
notion of long-term support versions.
Scope
This PEP doesn’t try to change the maintenance period or release
scheme for the 2.7 branch. Only 3.x versions are considered.
Proposal
Under the proposed scheme, there would be two kinds of feature
versions (sometimes dubbed “minor versions”, for example 3.2 or 3.3):
normal feature versions and long-term support (LTS) versions.
Normal feature versions would get either zero or at most one bugfix
release; the latter only if needed to fix critical issues. Security
fix handling for these branches needs to be decided.
LTS versions would get regular bugfix releases until the next LTS
version is out. They then would go into security fixes mode, up to a
termination date at the release manager’s discretion.
Periodicity
A new feature version would be released every X months. We
tentatively propose X = 6 months.
LTS versions would be one out of N feature versions. We tentatively
propose N = 4.
With these figures, a new LTS version would be out every 24 months,
and remain supported until the next LTS version 24 months later. This
is mildly similar to today’s 18 months bugfix cycle for every feature
version.
Pre-release versions
More frequent feature releases imply a smaller number of disruptive
changes per release. Therefore, the number of pre-release builds
(alphas and betas) can be brought down considerably. Two alpha builds
and a single beta build would probably be enough in the regular case.
The number of release candidates depends, as usual, on the number of
last-minute fixes before final release.
Effects
Effect on development cycle
More feature releases might mean more stress on the development and
release management teams. This is quantitatively alleviated by the
smaller number of pre-release versions; and qualitatively by the
lesser amount of disruptive changes (meaning less potential for
breakage). The shorter feature freeze period (after the first beta
build until the final release) is easier to accept. The rush for
adding features just before feature freeze should also be much
smaller.
Effect on bugfix cycle
The effect on fixing bugs should be minimal with the proposed figures.
The same number of branches would be simultaneously open for bugfix
maintenance (two until 2.x is terminated, then one).
Effect on workflow
The workflow for new features would be the same: developers would only
commit them on the default branch.
The workflow for bug fixes would be slightly updated: developers would
commit bug fixes to the current LTS branch (for example 3.3) and
then merge them into default.
If some critical fixes are needed to a non-LTS version, they can be
grafted from the current LTS branch to the non-LTS branch, just like
fixes are ported from 3.x to 2.7 today.
Effect on the community
People who value stability can just synchronize on the LTS releases
which, with the proposed figures, would give a similar support cycle
(both in duration and in stability).
People who value reactivity and access to new features (without taking
the risk to install alpha versions or Mercurial snapshots) would get
much more value from the new release cycle than currently.
People who want to contribute new features or improvements would be
more motivated to do so, knowing that their contributions will be more
quickly available to normal users. Also, a smaller feature freeze
period makes it less cumbersome to interact with contributors of
features.
Discussion
These are open issues that should be worked out during discussion:
Decide on X (months between feature releases) and N (feature releases
per LTS release) as defined above.
For given values of X and N, is the no-bugfix-releases policy for
non-LTS versions feasible?
What is the policy for security fixes?
Restrict new syntax and similar changes (i.e. everything that was
prohibited by PEP 3003) to LTS versions?
What is the effect on packagers such as Linux distributions?
How will release version numbers or other identifying and marketing
material make it clear to users which versions are normal feature
releases and which are LTS releases? How do we manage user
expectations?
Does the faster release cycle mean we could some day reach 3.10 and
above? Some people expressed a tacit expectation that version numbers
always fit in one decimal digit.
A community poll or survey to collect opinions from the greater Python
community would be valuable before making a final decision.
Copyright
This document has been placed in the public domain.
| Deferred | PEP 407 – New release cycle and introducing long-term support versions | Process | Finding a release cycle for an open-source project is a delicate
exercise in managing mutually contradicting constraints: developer
manpower, availability of release management volunteers, ease of
maintenance for users and third-party packagers, quick availability of
new features (and behavioural changes), availability of bug fixes
without pulling in new features or behavioural changes. |
PEP 408 – Standard library __preview__ package
Author:
Alyssa Coghlan <ncoghlan at gmail.com>,
Eli Bendersky <eliben at gmail.com>
Status:
Rejected
Type:
Standards Track
Created:
07-Jan-2012
Python-Version:
3.3
Post-History:
27-Jan-2012
Resolution:
Python-Dev message
Table of Contents
Abstract
PEP Rejection
Proposal - the __preview__ package
Which modules should go through __preview__
Criteria for “graduation”
Example
Rationale
Benefits for the core development team
Benefits for end users
Candidates for inclusion into __preview__
Relationship with PEP 407
Rejected alternatives and variations
Using __future__
Versioning the package
Using a package name without leading and trailing underscores
Preserving pickle compatibility
Credits
References
Copyright
Abstract
The process of including a new module into the Python standard library is
hindered by the API lock-in and promise of backward compatibility implied by
a module being formally part of Python. This PEP proposes a transitional
state for modules - inclusion in a special __preview__ package for the
duration of a minor release (roughly 18 months) prior to full acceptance into
the standard library. On one hand, this state provides the module with the
benefits of being formally part of the Python distribution. On the other hand,
the core development team explicitly states that no promises are made with
regards to the module’s eventual full inclusion into the standard library,
or to the stability of its API, which may change for the next release.
PEP Rejection
Based on his experience with a similar “labs” namespace in Google App Engine,
Guido has rejected this PEP [3] in favour of the simpler alternative of
explicitly marking provisional modules as such in their documentation.
If a module is otherwise considered suitable for standard library inclusion,
but some concerns remain regarding maintainability or certain API details,
then the module can be accepted on a provisional basis. While it is considered
an unlikely outcome, such modules may be removed from the standard library
without a deprecation period if the lingering concerns prove well-founded.
As part of the same announcement, Guido explicitly accepted Matthew
Barnett’s ‘regex’ module [4] as a provisional addition to the standard
library for Python 3.3 (using the ‘regex’ name, rather than as a drop-in
replacement for the existing ‘re’ module).
Proposal - the __preview__ package
Whenever the Python core development team decides that a new module should be
included into the standard library, but isn’t entirely sure about whether the
module’s API is optimal, the module can be placed in a special package named
__preview__ for a single minor release.
In the next minor release, the module may either be “graduated” into the
standard library (and occupy its natural place within its namespace, leaving the
__preview__ package), or be rejected and removed entirely from the Python
source tree. If the module ends up graduating into the standard library after
spending a minor release in __preview__, its API may be changed according
to accumulated feedback. The core development team explicitly makes no
guarantees about API stability and backward compatibility of modules in
__preview__.
Entry into the __preview__ package marks the start of a transition of the
module into the standard library. It means that the core development team
assumes responsibility of the module, similarly to any other module in the
standard library.
Which modules should go through __preview__
We expect most modules proposed for addition into the Python standard library
to go through a minor release in __preview__. There may, however, be some
exceptions, such as modules that use a pre-defined API (for example lzma,
which generally follows the API of the existing bz2 module), or modules
with an API that has wide acceptance in the Python development community.
In any case, modules that are proposed to be added to the standard library,
whether via __preview__ or directly, must fulfill the acceptance conditions
set by PEP 2.
It is important to stress that the aim of this proposal is not to make the
process of adding new modules to the standard library more difficult. On the
contrary, it tries to provide a means to add more useful libraries. Modules
which are obvious candidates for entry can be added as before. Modules which
due to uncertainties about the API could be stalled for a long time now have
a means to still be distributed with Python, via an incubation period in the
__preview__ package.
Criteria for “graduation”
In principle, most modules in the __preview__ package should eventually
graduate to the stable standard library. Some reasons for not graduating are:
The module may prove to be unstable or fragile, without sufficient developer
support to maintain it.
A much better alternative module may be found during the preview release
Essentially, the decision will be made by the core developers on a per-case
basis. The point to emphasize here is that a module’s appearance in the
__preview__ package in some release does not guarantee it will continue
being part of Python in the next release.
Example
Suppose the example module is a candidate for inclusion in the standard
library, but some Python developers aren’t convinced that it presents the best
API for the problem it intends to solve. The module can then be added to the
__preview__ package in release 3.X, importable via:
from __preview__ import example
Assuming the module is then promoted to the standard library proper in
release 3.X+1, it will be moved to a permanent location in the library:
import example
And importing it from __preview__ will no longer work.
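Code written during such a transition could have guarded against the move with
a simple fallback (a hedged sketch, moot in practice since the PEP was
rejected):
try:
    import example                    # release 3.X+1 and later
except ImportError:
    from __preview__ import example   # release 3.X, per this proposal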
Rationale
Benefits for the core development team
Currently, the core developers are really reluctant to add new interfaces to
the standard library. This is because as soon as they’re published in a
release, API design mistakes get locked in due to backward compatibility
concerns.
By gating all major API additions through some kind of a preview mechanism
for a full release, we get one full release cycle of community feedback
before we lock in the APIs with our standard backward compatibility guarantee.
We can also start integrating preview modules with the rest of the standard
library early, so long as we make it clear to packagers that the preview
modules should not be considered optional. The only difference between preview
APIs and the rest of the standard library is that preview APIs are explicitly
exempted from the usual backward compatibility guarantees.
Essentially, the __preview__ package is intended to lower the risk of
locking in minor API design mistakes for extended periods of time. Currently,
this concern can block new additions, even when the core development team
consensus is that a particular addition is a good idea in principle.
Benefits for end users
For future end users, the broadest benefit lies in a better “out-of-the-box”
experience - rather than being told “oh, the standard library tools for task X
are horrible, download this 3rd party library instead”, those superior tools
are more likely to be just an import away.
For environments where developers are required to conduct due diligence on
their upstream dependencies (severely harming the cost-effectiveness of, or
even ruling out entirely, much of the material on PyPI), the key benefit lies
in ensuring that anything in the __preview__ package is clearly under
python-dev’s aegis from at least the following perspectives:
Licensing: Redistributed by the PSF under a Contributor Licensing Agreement.
Documentation: The documentation of the module is published and organized via
the standard Python documentation tools (i.e. ReST source, output generated
with Sphinx and published on http://docs.python.org).
Testing: The module test suites are run on the python.org buildbot fleet
and results published via http://www.python.org/dev/buildbot.
Issue management: Bugs and feature requests are handled on
http://bugs.python.org
Source control: The master repository for the software is published
on http://hg.python.org.
Candidates for inclusion into __preview__
For Python 3.3, there are a number of clear current candidates:
regex (http://pypi.python.org/pypi/regex)
daemon (PEP 3143)
ipaddr (PEP 3144)
Other possible future use cases include:
Improved HTTP modules (e.g. requests)
HTML 5 parsing support (e.g. html5lib)
Improved URL/URI/IRI parsing
A standard image API (PEP 368)
Encapsulation of the import state (PEP 406)
Standard event loop API (PEP 3153)
A binary version of WSGI for Python 3 (e.g. PEP 444)
Generic function support (e.g. simplegeneric)
Relationship with PEP 407
PEP 407 proposes a change to the core Python release cycle to permit interim
releases every 6 months (perhaps limited to standard library updates). If
such a change to the release cycle is made, the following policy for the
__preview__ namespace is suggested:
For long-term support releases, the __preview__ namespace would always
be empty.
New modules would be accepted into the __preview__ namespace only in
interim releases that immediately follow a long-term support release.
All modules added will either be migrated to their final location in the
standard library or dropped entirely prior to the next long-term support
release.
Rejected alternatives and variations
Using __future__
Python already has a “forward-looking” namespace in the form of the
__future__ module, so it’s reasonable to ask why that can’t be re-used for
this new purpose.
There are two reasons why doing so is not appropriate:
1. The __future__ module is actually linked to a separate compiler
directives feature that can actually change the way the Python interpreter
compiles a module. We don’t want that for the preview package - we just want
an ordinary Python package.
2. The __future__ module comes with an express promise that names will be
maintained in perpetuity, long after the associated features have become the
compiler’s default behaviour. Again, this is precisely the opposite of what is
intended for the preview package - it is almost certain that all names added to
the preview will be removed at some point, most likely due to their being moved
to a permanent home in the standard library, but also potentially due to their
being reverted to third party package status (if community feedback suggests the
proposed addition is irredeemably broken).
Versioning the package
One proposed alternative [1] was to add explicit versioning to the
__preview__ package, i.e. __preview34__. We think that it’s better to
simply define that a module being in __preview__ in Python 3.X will either
graduate to the normal standard library namespace in Python 3.X+1 or will
disappear from the Python source tree altogether. Versioning the __preview__
package complicates the process and does not align well with the main intent of
this proposal.
Using a package name without leading and trailing underscores
It was proposed [1] to use a package name like preview or exp, instead
of __preview__. This was rejected in the discussion due to the special
meaning a “dunder” package name (that is, a name with leading and
trailing double-underscores) conveys in Python. Besides, a non-dunder name
would suggest normal standard library API stability guarantees, which is not
the intention of the __preview__ package.
Preserving pickle compatibility
A pickled class instance based on a module in __preview__ in release 3.X
won’t be unpickle-able in release 3.X+1, where the module won’t be in
__preview__. Special code may be added to make this work, but this goes
against the intent of this proposal, since it implies backward compatibility.
Therefore, this PEP does not propose to preserve pickle compatibility.
Credits
Dj Gilcrease initially proposed the idea of having a __preview__ package
in Python [2]. Although his original proposal uses the name
__experimental__, we feel that __preview__ conveys the meaning of this
package in a better way.
References
[1] (1, 2)
Discussed in this thread:
https://mail.python.org/pipermail/python-ideas/2012-January/013246.html
[2]
https://mail.python.org/pipermail/python-ideas/2011-August/011278.html
[3]
Guido’s decision:
https://mail.python.org/pipermail/python-dev/2012-January/115962.html
[4]
Proposal for inclusion of regex: http://bugs.python.org/issue2636
Copyright
This document has been placed in the public domain.
| Rejected | PEP 408 – Standard library __preview__ package | Standards Track | The process of including a new module into the Python standard library is
hindered by the API lock-in and promise of backward compatibility implied by
a module being formally part of Python. This PEP proposes a transitional
state for modules - inclusion in a special __preview__ package for the
duration of a minor release (roughly 18 months) prior to full acceptance into
the standard library. On one hand, this state provides the module with the
benefits of being formally part of the Python distribution. On the other hand,
the core development team explicitly states that no promises are made with
regards to the module’s eventual full inclusion into the standard library,
or to the stability of its API, which may change for the next release. |
PEP 409 – Suppressing exception context
Author:
Ethan Furman <ethan at stoneleaf.us>
Status:
Final
Type:
Standards Track
Created:
26-Jan-2012
Python-Version:
3.3
Post-History:
30-Aug-2002, 01-Feb-2012, 03-Feb-2012
Superseded-By:
415
Resolution:
Python-Dev message
Table of Contents
Abstract
Rationale
Alternatives
Proposal
Implementation Discussion
Language Details
Patches
References
Copyright
Abstract
One of the open issues from PEP 3134 is suppressing context: currently
there is no way to do it. This PEP proposes one.
Rationale
There are two basic ways to generate exceptions:
Python does it (buggy code, missing resources, ending loops, etc.)
manually (with a raise statement)
When writing libraries, or even just custom classes, it can become
necessary to raise exceptions; moreover it can be useful, even
necessary, to change from one exception to another. To take an example
from my dbf module:
try:
value = int(value)
except Exception:
raise DbfError(...)
Whatever the original exception was (ValueError, TypeError, or
something else) is irrelevant. The exception from this point on is a
DbfError, and the original exception is of no value. However, if
this exception is printed, we would currently see both.
Alternatives
Several possibilities have been put forth:
raise as NewException()
    Reuses the as keyword; can be confusing since we are not really
    reraising the originating exception
raise NewException() from None
    Follows existing syntax of explicitly declaring the originating
    exception
exc = NewException(); exc.__context__ = None; raise exc
    Very verbose way of the previous method
raise NewException.no_context(...)
    Make context suppression a class method.
All of the above options will require changes to the core.
Proposal
I propose going with the second option:
raise NewException from None
It has the advantage of using the existing pattern of explicitly setting
the cause:
raise KeyError() from NameError()
but because the cause is None the previous context is not displayed
by the default exception printing routines.
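Revisiting the dbf example above, the accepted form reads as follows (runnable
on Python 3.3 and later):
class DbfError(Exception):
    pass

def to_int(value):
    try:
        return int(value)
    except Exception:
        # Only DbfError is reported; the original ValueError/TypeError
        # context is suppressed from the default traceback display.
        raise DbfError("unable to convert {!r}".format(value)) from None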
Implementation Discussion
Note: after acceptance of this PEP, a cleaner implementation mechanism
was proposed and accepted in PEP 415. Refer to that PEP for more
details on the implementation actually used in Python 3.3.
Currently, None is the default for both __context__ and __cause__.
In order to support raise ... from None (which would set __cause__ to
None) we need a different default value for __cause__. Several ideas
were put forth on how to implement this at the language level:
Overwrite the previous exception information (side-stepping the issue and
leaving __cause__ at None).
    Rejected as this can seriously hinder debugging due to poor error
    messages.
Use one of the boolean values in __cause__: False would be the default
value, and would be replaced when from ... was used with the explicitly
chained exception or None.
    Rejected as this encourages the use of two different object types for
    __cause__ with one of them (boolean) not allowed to have the full range
    of possible values (True would never be used).
Create a special exception class, __NoException__.
    Rejected as possibly confusing, possibly being mistakenly raised by
    users, and not being a truly unique value as None, True, and False are.
Use Ellipsis as the default value (the ... singleton).
    Accepted.
Ellipses are commonly used in English as place holders when words are
omitted. This works in our favor here as a signal that __cause__ is
omitted, so look in __context__ for more details.
Ellipsis is not an exception, so cannot be raised.
There is only one Ellipsis, so no unused values.
Error information is not thrown away, so custom code can trace the entire
exception chain even if the default code does not.
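A runnable demonstration that the context survives for introspection even
though the default display suppresses it (the shipped Python 3.3 implementation
follows PEP 415, but the observable behaviour matches):
try:
    try:
        1 / 0
    except ZeroDivisionError:
        raise KeyError("lookup failed") from None
except KeyError as exc:
    print(exc.__cause__)             # None - default display stays quiet
    print(type(exc.__context__))     # <class 'ZeroDivisionError'> - retained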
Language Details
To support raise Exception from None, __context__ will stay as it is,
but __cause__ will start out as Ellipsis and will change to None
when the raise Exception from None method is used.
form                                    __context__           __cause__
raise                                   None                  Ellipsis
reraise                                 previous exception    Ellipsis
reraise from None | ChainedException    previous exception    None | explicitly chained exception
The default exception printing routine will then:
If __cause__ is Ellipsis the __context__ (if any) will be
printed.
If __cause__ is None the __context__ will not be printed.
If __cause__ is anything else, __cause__ will be printed.
In both of the latter cases the exception chain will stop being followed.
Because the default value for __cause__ is now Ellipsis and raise
Exception from Cause is simply syntactic sugar for:
_exc = NewException()
_exc.__cause__ = Cause()
raise _exc
Ellipsis, as well as None, is now allowed as a cause:
raise Exception from Ellipsis
Patches
There is a patch for CPython implementing this attached to Issue 6210.
References
Discussion and refinements in this thread on python-dev.
Copyright
This document has been placed in the public domain.
| Final | PEP 409 – Suppressing exception context | Standards Track | One of the open issues from PEP 3134 is suppressing context: currently
there is no way to do it. This PEP proposes one. |
PEP 410 – Use decimal.Decimal type for timestamps
Author:
Victor Stinner <vstinner at python.org>
Status:
Rejected
Type:
Standards Track
Created:
01-Feb-2012
Python-Version:
3.3
Resolution:
Python-Dev message
Table of Contents
Rejection Notice
Abstract
Rationale
Specification
Backwards Compatibility
Objection: clocks accuracy
Alternatives: Timestamp types
Number of nanoseconds (int)
128-bits float
datetime.datetime
datetime.timedelta
Tuple of integers
timespec structure
Alternatives: API design
Add a string argument to specify the return type
Add a global flag to change the timestamp type
Add a protocol to create a timestamp
Add new fields to os.stat
Add a boolean argument
Add new functions
Add a new hires module
Links
Copyright
Rejection Notice
This PEP is rejected.
See https://mail.python.org/pipermail/python-dev/2012-February/116837.html.
Abstract
Decimal becomes the official type for high-resolution timestamps to make Python
support new functions using a nanosecond resolution without loss of precision.
Rationale
Python 2.3 introduced float timestamps to support sub-second resolutions.
os.stat() uses float timestamps by default since Python 2.5. Python 3.3
introduced functions supporting nanosecond resolutions:
os module: futimens(), utimensat()
time module: clock_gettime(), clock_getres(), monotonic(), wallclock()
os.stat() reads nanosecond timestamps but returns timestamps as float.
The Python float type uses binary64 format of the IEEE 754 standard. With a
resolution of one nanosecond (10^-9), float timestamps lose precision
for values bigger than 2^24 seconds (194 days: 1970-07-14 for an Epoch
timestamp).
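This loss of precision is easy to demonstrate:
t = 2 ** 24 + 1e-9     # one nanosecond past 2^24 seconds (~194 days)
print(t == 2 ** 24)    # True - the nanosecond is absorbed by the binary64 float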
Nanosecond resolution is required to set the exact modification time on
filesystems supporting nanosecond timestamps (e.g. ext4, btrfs, NTFS, …). It
helps also to compare the modification time to check if a file is newer than
another file. Use cases: copy the modification time of a file using
shutil.copystat(), create a TAR archive with the tarfile module, manage a
mailbox with the mailbox module, etc.
An arbitrary resolution is preferred over a fixed resolution (like nanosecond)
to not have to change the API when a better resolution is required. For
example, the NTP protocol uses fractions of 2^32 seconds
(approximately 2.3 × 10^-10 second), whereas the NTP protocol version
4 uses fractions of 2^64 seconds (5.4 × 10^-20 second).
Note
With a resolution of 1 microsecond (10^-6), float timestamps lose
precision for values bigger than 2^33 seconds (272 years: 2242-03-16
for an Epoch timestamp). With a resolution of 100 nanoseconds
(10^-7, resolution used on Windows), float timestamps lose precision
for values bigger than 2^29 seconds (17 years: 1987-01-05 for an
Epoch timestamp).
Specification
Add decimal.Decimal as a new type for timestamps. Decimal supports any
timestamp resolution, supports arithmetic operations and is comparable. It is
possible to coerce a Decimal to float, even if the conversion may lose
precision. The clock resolution can also be stored in a Decimal object.
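For example (using made-up timestamp values):
from decimal import Decimal

t1 = Decimal("1328126576.123456789")
t2 = Decimal("1328126576.123456790")
print(t2 - t1)       # 1E-9 - exact nanosecond arithmetic
print(t2 > t1)       # True - timestamps are comparable
print(float(t1))     # coercion to float is possible, but may lose precision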
Add an optional timestamp argument to:
os module: fstat(), fstatat(), lstat(), stat() (st_atime,
st_ctime and st_mtime fields of the stat structure),
sched_rr_get_interval(), times(), wait3() and wait4()
resource module: ru_utime and ru_stime fields of getrusage()
signal module: getitimer(), setitimer()
time module: clock(), clock_gettime(), clock_getres(),
monotonic(), time() and wallclock()
The timestamp argument value can be float or Decimal, float is still the
default for backward compatibility. The following functions support Decimal as
input:
datetime module: date.fromtimestamp(), datetime.fromtimestamp() and
datetime.utcfromtimestamp()
os module: futimes(), futimesat(), lutimes(), utime()
select module: epoll.poll(), kqueue.control(), select()
signal module: setitimer(), sigtimedwait()
time module: ctime(), gmtime(), localtime(), sleep()
The os.stat_float_times() function is deprecated: use an explicit cast using
int() instead.
Note
The decimal module is implemented in Python and is slower than float, but
there is a new C implementation which is almost ready for inclusion in
CPython.
Backwards Compatibility
The default timestamp type (float) is unchanged, so there is no impact on
backward compatibility nor on performances. The new timestamp type,
decimal.Decimal, is only returned when requested explicitly.
Objection: clocks accuracy
Computer clocks and operating systems are inaccurate and fail to provide
nanosecond accuracy in practice. A nanosecond is what it takes to execute a
couple of CPU instructions. Even on a real-time operating system, a
nanosecond-precise measurement is already obsolete when it starts being
processed by the higher-level application. A single cache miss in the CPU will
make the precision worthless.
Note
Linux actually is able to measure time in nanosecond precision, even
though it is not able to keep its clock synchronized to UTC with a
nanosecond accuracy.
Alternatives: Timestamp types
To support timestamps with an arbitrary or nanosecond resolution, the following
types have been considered:
decimal.Decimal
number of nanoseconds
128-bits float
datetime.datetime
datetime.timedelta
tuple of integers
timespec structure
Criteria:
Doing arithmetic on timestamps must be possible
Timestamps must be comparable
An arbitrary resolution, or at least a resolution of one nanosecond without
losing precision
It should be possible to coerce the new timestamp to float for backward
compatibility
A resolution of one nanosecond is enough to support all current C functions.
The best resolution used by operating systems is one nanosecond. In practice,
most clock accuracy is closer to microseconds than nanoseconds. So it sounds
reasonable to use a fixed resolution of one nanosecond.
Number of nanoseconds (int)
A nanosecond resolution is enough for all current C functions and so a
timestamp can simply be a number of nanoseconds, an integer, not a float.
The number of nanoseconds format has been rejected because it would require
adding new specialized functions for this format, since it is not possible to
differentiate a number of nanoseconds from a number of seconds just by checking
the object type.
128-bits float
Add a new IEEE 754-2008 quad-precision binary float type. The IEEE 754-2008
quad precision float has 1 sign bit, 15 bits of exponent and 112 bits of
mantissa. 128-bits float is supported by GCC (4.3), Clang and ICC compilers.
Python must be portable and so cannot rely on a type only available on some
platforms. For example, Visual C++ 2008 doesn’t support 128-bits float, whereas
it is used to build the official Windows executables. Another example: GCC 4.3
does not support __float128 in 32-bit mode on x86 (but GCC 4.4 does).
There is also a license issue: GCC uses the MPFR library for 128-bits float,
a library distributed under the GNU LGPL license. This license is not compatible
with the Python license.
Note
The x87 floating point unit of Intel CPU supports 80-bit floats. This format
is not supported by the SSE instruction set, which is now preferred over
float, especially on x86_64. Other CPU vendors don’t support 80-bit float.
datetime.datetime
The datetime.datetime type is the natural choice for a timestamp because it is
clear that this type contains a timestamp, whereas int, float and Decimal are
raw numbers. It is an absolute timestamp and so is well defined. It gives
direct access to the year, month, day, hours, minutes and seconds. It has
methods related to time like methods to format the timestamp as string (e.g.
datetime.datetime.strftime).
The major issue is that except os.stat(), time.time() and
time.clock_gettime(time.CLOCK_REALTIME), all time functions have an unspecified
starting point and no timezone information, and so cannot be converted to
datetime.datetime.
datetime.datetime has also issues with timezone. For example, a datetime object
without timezone (unaware) and a datetime with a timezone (aware) cannot be
compared. There is also an ordering issues with daylight saving time (DST) in
the duplicate hour of switching from DST to normal time.
datetime.datetime has been rejected because it cannot be used for functions
using an unspecified starting point like os.times() or time.clock().
For time.time() and time.clock_gettime(time.CLOCK_REALTIME): it is already
possible to get the current time as a datetime.datetime object using:
datetime.datetime.now(datetime.timezone.utc)
For os.stat(), it is simple to create a datetime.datetime object from a
decimal.Decimal timestamp in the UTC timezone:
datetime.datetime.fromtimestamp(value, datetime.timezone.utc)
Note
datetime.datetime only supports microsecond resolution, but can be enhanced
to support nanosecond.
datetime.timedelta
datetime.timedelta is the natural choice for a relative timestamp because it is
clear that this type contains a timestamp, whereas int, float and Decimal are
raw numbers. It can be used with datetime.datetime to get an absolute timestamp
when the starting point is known.
datetime.timedelta has been rejected because it cannot be coerced to float and
has a fixed resolution. One new standard timestamp type is enough, Decimal is
preferred over datetime.timedelta. Converting a datetime.timedelta to float
requires an explicit call to the datetime.timedelta.total_seconds() method.
Note
datetime.timedelta only supports microsecond resolution, but can be enhanced
to support nanosecond.
Tuple of integers
To expose C functions in Python, a tuple of integers is the natural choice to
store a timestamp because the C language uses structures with integers fields
(e.g. timeval and timespec structures). Using only integers avoids the loss of
precision (Python supports integers of arbitrary length). Creating and parsing
a tuple of integers is simple and fast.
Depending on the exact format of the tuple, the precision can be arbitrary or
fixed. The precision can be chosen so that the loss of precision is smaller
than an arbitrary limit, such as one nanosecond.
Different formats have been proposed:
A: (numerator, denominator)
value = numerator / denominator
resolution = 1 / denominator
denominator > 0
B: (seconds, numerator, denominator)
value = seconds + numerator / denominator
resolution = 1 / denominator
0 <= numerator < denominator
denominator > 0
C: (intpart, floatpart, base, exponent)
value = intpart + floatpart / base^exponent
resolution = 1 / base^exponent
0 <= floatpart < base^exponent
base > 0
exponent >= 0
D: (intpart, floatpart, exponent)
value = intpart + floatpart / 10^exponent
resolution = 1 / 10^exponent
0 <= floatpart < 10^exponent
exponent >= 0
E: (sec, nsec)
value = sec + nsec × 10^-9
resolution = 10^-9 (nanosecond)
0 <= nsec < 10^9
All formats support an arbitrary resolution, except format (E).
The format (D) may not be able to store the exact value (it may lose precision)
if the clock frequency is arbitrary and cannot be expressed as a power of 10.
The format (C) has a similar issue, but in such a case it is possible to use
base=frequency and exponent=1.
The formats (C), (D) and (E) allow optimization for conversion to float if the
base is 2 and to decimal.Decimal if the base is 10.
The format (A) is a simple fraction. It supports arbitrary precision, is simple
(only two fields), only requires a simple division to get the floating point
value, and is already used by float.as_integer_ratio().
To simplify the implementation (especially the C implementation to avoid
integer overflow), a numerator bigger than the denominator can be accepted.
The tuple may be normalized later.
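A sketch of converting such a format (A) fraction to the two most relevant
Python types; the numbers are arbitrary, and the decimal precision is raised so
that the division is exact:
import decimal

numerator, denominator = 1323863411936510744, 10 ** 9   # value = num / den

as_float = numerator / denominator             # rounds: needs more than 53 bits
decimal.getcontext().prec = 30
as_decimal = (decimal.Decimal(numerator)
              / decimal.Decimal(denominator))  # exact: 1323863411.936510744

print((1.5).as_integer_ratio())                # (3, 2), the same representation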
Tuples of integers have been rejected because they don’t support arithmetic
operations.
Note
On Windows, the QueryPerformanceCounter() clock uses the frequency of
the processor, which is an arbitrary number and so may not be a power of 2 or
10. The frequency can be read using QueryPerformanceFrequency().
timespec structure
timespec is the C structure used to store a timestamp with nanosecond
resolution. Python can use a type with the same structure: (seconds,
nanoseconds). For convenience, arithmetic operations on timespec are supported.
Example of an incomplete timespec type supporting addition, subtraction and
coercion to float:
class timespec(tuple):
    def __new__(cls, sec, nsec):
        if not isinstance(sec, int):
            raise TypeError
        if not isinstance(nsec, int):
            raise TypeError
        asec, nsec = divmod(nsec, 10 ** 9)
        sec += asec
        obj = tuple.__new__(cls, (sec, nsec))
        obj.sec = sec
        obj.nsec = nsec
        return obj

    def __float__(self):
        return self.sec + self.nsec * 1e-9

    def total_nanoseconds(self):
        return self.sec * 10 ** 9 + self.nsec

    def __add__(self, other):
        if not isinstance(other, timespec):
            raise TypeError
        ns_sum = self.total_nanoseconds() + other.total_nanoseconds()
        return timespec(*divmod(ns_sum, 10 ** 9))

    def __sub__(self, other):
        if not isinstance(other, timespec):
            raise TypeError
        ns_diff = self.total_nanoseconds() - other.total_nanoseconds()
        return timespec(*divmod(ns_diff, 10 ** 9))

    def __str__(self):
        if self.sec < 0 and self.nsec:
            sec = abs(1 + self.sec)
            nsec = 10**9 - self.nsec
            return '-%i.%09u' % (sec, nsec)
        else:
            return '%i.%09u' % (self.sec, self.nsec)

    def __repr__(self):
        return '<timespec(%s, %s)>' % (self.sec, self.nsec)
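For example, the class above could be exercised as follows (the values are
arbitrary, and the float result is only approximate by nature):
t1 = timespec(1323863411, 500000000)
t2 = timespec(0, 600000000)
print(t1 + t2)    # 1323863412.100000000
print(t2 - t1)    # -1323863410.900000000
print(float(t2))  # roughly 0.6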
The timespec type is similar to the format (E) of the tuple of integers, except
that it supports arithmetic and coercion to float.
The timespec type was rejected because it only supports nanosecond resolution
and requires implementing each arithmetic operation, whereas the Decimal type
is already implemented and well tested.
Alternatives: API design
Add a string argument to specify the return type
Add a string argument to functions returning timestamps, for example:
time.time(format="datetime"). A string is more extensible than a type: it is
possible to request a format that has no corresponding type, like a tuple of
integers.
This API was rejected because it would require implicitly importing modules to
instantiate objects (e.g. importing datetime to create datetime.datetime).
Importing a module may raise an exception and may be slow; such behaviour is
unexpected and surprising.
Add a global flag to change the timestamp type
A global flag like os.stat_decimal_times(), similar to os.stat_float_times(),
can be added to set the timestamp type globally.
A global flag may cause issues with libraries and applications expecting float
instead of Decimal. Decimal is not fully compatible with float: float+Decimal
raises a TypeError, for example. The os.stat_float_times() case is different
because an int can be coerced to float and int+float gives float.
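This difference is easy to demonstrate:
import decimal

print(2 + 0.5)              # 2.5: the int is silently coerced to float
decimal.Decimal(2) + 0.5    # raises TypeError: unsupported operand type(s)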
Add a protocol to create a timestamp
Instead of hard coding how timestamps are created, a new protocol can be added
to create a timestamp from a fraction.
For example, time.time(timestamp=type) would call the class method
type.__fromfraction__(numerator, denominator) to create a timestamp object of
the specified type. If the type doesn’t support the protocol, a fallback is
used: type(numerator) / type(denominator).
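A rough sketch of how such a dispatch could work inside a function returning a
timestamp; __fromfraction__ is the hypothetical protocol method named above and
was never implemented:
def make_timestamp(numerator, denominator, timestamp=float):
    # Prefer the (hypothetical) protocol method, else fall back to division.
    fromfraction = getattr(timestamp, '__fromfraction__', None)
    if fromfraction is not None:
        return fromfraction(numerator, denominator)
    return timestamp(numerator) / timestamp(denominator)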
A variant is to use a “converter” callback to create a timestamp. Example
creating a float timestamp:
def timestamp_to_float(numerator, denominator):
    return float(numerator) / float(denominator)
Common converters can be provided by time, datetime and other modules, or maybe
a specific “hires” module. Users can define their own converters.
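Following the same pattern, a user-defined converter producing decimal.Decimal
might look like this (illustrative only):
import decimal

def timestamp_to_decimal(numerator, denominator):
    return decimal.Decimal(numerator) / decimal.Decimal(denominator)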
Such a protocol has a limitation: the timestamp structure has to be decided
once and cannot be changed later. For example, adding a timezone or the
absolute start of the timestamp would break the API.
The protocol proposal was rejected as being excessive given the requirements,
but the specific syntax proposed (time.time(timestamp=type)) allows this to be
introduced later if compelling use cases are discovered.
Note
Other formats may be used instead of a fraction: see the tuple of integers
section for example.
Add new fields to os.stat
To get the creation, modification and access time of a file with nanosecond
resolution, three fields can be added to the os.stat() structure.
The new fields can be timestamps with nanosecond resolution (e.g. Decimal) or
the nanosecond part of each timestamp (int).
If the new fields are timestamps with nanosecond resolution, populating the
extra fields would be time-consuming. Any call to os.stat() would be slower,
even if os.stat() is only called to check whether a file exists. A parameter
could be added to os.stat() to make these fields optional, but the structure
would then have a variable number of fields.
If the new fields only contain the fractional part (nanoseconds), os.stat()
would be efficient. These fields would always be present, and so would be set
to zero if the operating system does not support sub-second resolution.
Splitting a timestamp in two parts, seconds and nanoseconds, is similar to the
timespec type and the tuple of integers, and so has the same drawbacks.
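For reference, CPython later took a related integer route (outside the scope of
this PEP): since Python 3.3, os.stat() results expose st_atime_ns, st_mtime_ns
and st_ctime_ns, integers holding the whole timestamp in nanoseconds. Such a
value can be converted to a lossless Decimal with a single division:
import decimal
import os

st = os.stat(".")
mtime = decimal.Decimal(st.st_mtime_ns) / decimal.Decimal(10 ** 9)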
Adding new fields to the os.stat() structure does not solve the nanosecond
issue in other modules (e.g. the time module).
Add a boolean argument
Because we only need one new type (Decimal), a simple boolean flag can be
added. Example: time.time(decimal=True) or time.time(hires=True).
Such a flag would require a hidden import, which is considered bad practice.
The boolean argument API was rejected because it is not “pythonic”. Changing
the return type with a parameter value is preferred over a boolean parameter (a
flag).
Add new functions
Add new functions for each type, examples:
time.clock_decimal()
time.time_decimal()
os.stat_decimal()
os.stat_timespec()
etc.
Adding a new function for each function creating timestamps duplicates a lot
of code and would be a pain to maintain.
Add a new hires module
Add a new module called “hires” with the same API as the time module, except
that it would return timestamps with high resolution, e.g. decimal.Decimal.
Adding a new module avoids linking low-level modules like time or os to the
decimal module.
This idea was rejected because it would require duplicating most of the code of
the time module, would be a pain to maintain, and timestamps are used in
modules other than the time module. Examples: signal.sigtimedwait(),
select.select(), resource.getrusage(), os.stat(), etc. Duplicating the code of
each module is not acceptable.
Links
Python:
Issue #7652: Merge C version of decimal into py3k (cdecimal)
Issue #11457: os.stat(): add new fields to get timestamps as Decimal objects with nanosecond resolution
Issue #13882: PEP 410: Use decimal.Decimal type for timestamps
[Python-Dev] Store timestamps as decimal.Decimal objects
Other languages:
Ruby (1.9.3), the Time class:
supports picosecond resolution (10^-12)
.NET framework, DateTime type:
number of 100-nanosecond intervals that have elapsed since 12:00:00
midnight, January 1, 0001. DateTime.Ticks uses a signed 64-bit integer.
Java (1.5), System.nanoTime():
wallclock with an unspecified starting point as a number of nanoseconds, using
a signed 64-bit integer (long).
Perl, Time::HiRes module:
uses float and so has the same loss-of-precision issue with nanosecond
resolution as Python float timestamps.
Copyright
This document has been placed in the public domain.
| Rejected | PEP 410 – Use decimal.Decimal type for timestamps | Standards Track | Decimal becomes the official type for high-resolution timestamps to make Python
support new functions using a nanosecond resolution without loss of precision. |
PEP 411 – Provisional packages in the Python standard library
Author:
Alyssa Coghlan <ncoghlan at gmail.com>,
Eli Bendersky <eliben at gmail.com>
Status:
Superseded
Type:
Informational
Created:
10-Feb-2012
Python-Version:
3.3
Post-History:
10-Feb-2012, 24-Mar-2012
Table of Contents
Abstract
Proposal - a documented provisional state
Marking a package provisional
Which packages should go through the provisional state
Criteria for “graduation”
Rationale
Benefits for the core development team
Benefits for end users
Candidates for provisional inclusion into the standard library
Rejected alternatives and variations
References
Copyright
Note
This PEP has been marked as Superseded. A decade after this PEP
was written, experience has shown this is a rarely used feature in
managing the standard library. It has also not helped prevent
people from relying too heavily on provisional modules, such that
changes can still cause significant breakage in the community.
Abstract
The process of including a new package into the Python standard library is
hindered by the API lock-in and promise of backward compatibility implied by
a package being formally part of Python. This PEP describes a methodology
for marking a standard library package “provisional” for the period of a single
feature release. A provisional package may have its API modified prior to
“graduating” into a “stable” state. On one hand, this state provides the
package with the benefits of being formally part of the Python distribution.
On the other hand, the core development team explicitly states that no promises
are made with regards to the stability of the package’s API, which may
change for the next release. While it is considered an unlikely outcome,
such packages may even be removed from the standard library without a
deprecation period if the concerns regarding their API or maintenance prove
well-founded.
Proposal - a documented provisional state
Whenever the Python core development team decides that a new package should be
included into the standard library, but isn’t entirely sure about whether the
package’s API is optimal, the package can be included and marked as
“provisional”.
In the next feature release, the package may either be “graduated” into a normal
“stable” state in the standard library, remain in provisional state, or be
rejected and removed entirely from the Python source tree. If the package ends
up graduating into the stable state after being provisional, its API may
be changed according to accumulated feedback. The core development team
explicitly makes no guarantees about API stability and backward compatibility
of provisional packages.
Marking a package provisional
A package will be marked provisional by a notice in its documentation page and
its docstring. The following paragraph will be added as a note at the top of
the documentation page:
The <X> package has been included in the standard library on a
provisional basis. Backwards incompatible changes (up to and including
removal of the package) may occur if deemed necessary by the core
developers.
The phrase “provisional basis” will then be a link to the glossary term
“provisional package”, defined as:
A provisional package is one which has been deliberately excluded from the
standard library’s backwards compatibility guarantees. While major
changes to such packages are not expected, as long as they are marked
provisional, backwards incompatible changes (up to and including removal of
the package) may occur if deemed necessary by core developers. Such changes
will not be made gratuitously – they will occur only if serious flaws are
uncovered that were missed prior to the inclusion of the package.
This process allows the standard library to continue to evolve over time,
without locking in problematic design errors for extended periods of time.
See PEP 411 for more details.
The following will be added to the start of the package’s docstring:
The API of this package is currently provisional. Refer to the
documentation for details.
Moving a package from the provisional to the stable state simply implies
removing these notes from its documentation page and docstring.
Which packages should go through the provisional state
We expect most packages proposed for addition into the Python standard library
to go through a feature release in the provisional state. There may, however,
be some exceptions, such as packages that use a pre-defined API (for example
lzma, which generally follows the API of the existing bz2 package),
or packages with an API that has wide acceptance in the Python development
community.
In any case, packages that are proposed to be added to the standard library,
whether via the provisional state or directly, must fulfill the acceptance
conditions set by PEP 2.
Criteria for “graduation”
In principle, most provisional packages should eventually graduate to the
stable standard library. Some reasons for not graduating are:
The package may prove to be unstable or fragile, without sufficient developer
support to maintain it.
A much better alternative package may be found during the preview release.
Essentially, the decision will be made by the core developers on a per-case
basis. The point to emphasize here is that a package’s inclusion in the
standard library as “provisional” in some release does not guarantee it will
continue being part of Python in the next release. At the same time, the bar
for making changes in a provisional package is quite high. We expect that
most of the API of most provisional packages will be unchanged at graduation.
Withdrawals are expected to be rare.
Rationale
Benefits for the core development team
Currently, the core developers are really reluctant to add new interfaces to
the standard library. This is because as soon as they’re published in a
release, API design mistakes get locked in due to backward compatibility
concerns.
By gating all major API additions through some kind of a provisional mechanism
for a full release, we get one full release cycle of community feedback
before we lock in the APIs with our standard backward compatibility guarantee.
We can also start integrating provisional packages with the rest of the standard
library early, so long as we make it clear to packagers that the provisional
packages should not be considered optional. The only difference between
provisional APIs and the rest of the standard library is that provisional APIs
are explicitly exempted from the usual backward compatibility guarantees.
Benefits for end users
For future end users, the broadest benefit lies in a better “out-of-the-box”
experience - rather than being told “oh, the standard library tools for task X
are horrible, download this 3rd party library instead”, those superior tools
are more likely to be just an import away.
For environments where developers are required to conduct due diligence on
their upstream dependencies (severely harming the cost-effectiveness of, or
even ruling out entirely, much of the material on PyPI), the key benefit lies
in ensuring that all packages in the provisional state are clearly under
python-dev’s aegis from at least the following perspectives:
Licensing: Redistributed by the PSF under a Contributor Licensing Agreement.
Documentation: The documentation of the package is published and organized via
the standard Python documentation tools (i.e. ReST source, output generated
with Sphinx and published on http://docs.python.org).
Testing: The package test suites are run on the python.org buildbot fleet
and results published via http://www.python.org/dev/buildbot.
Issue management: Bugs and feature requests are handled on
http://bugs.python.org
Source control: The master repository for the software is published
on http://hg.python.org.
Candidates for provisional inclusion into the standard library
For Python 3.3, there are a number of clear current candidates:
regex (http://pypi.python.org/pypi/regex) - approved by Guido [1].
daemon (PEP 3143)
ipaddr (PEP 3144)
Other possible future use cases include:
Improved HTTP modules (e.g. requests)
HTML 5 parsing support (e.g. html5lib)
Improved URL/URI/IRI parsing
A standard image API (PEP 368)
Improved encapsulation of import state (PEP 406)
Standard event loop API (PEP 3153)
A binary version of WSGI for Python 3 (e.g. PEP 444)
Generic function support (e.g. simplegeneric)
Rejected alternatives and variations
See PEP 408.
References
[1]
https://mail.python.org/pipermail/python-dev/2012-January/115962.html
Copyright
This document has been placed in the public domain.
| Superseded | PEP 411 – Provisional packages in the Python standard library | Informational | The process of including a new package into the Python standard library is
hindered by the API lock-in and promise of backward compatibility implied by
a package being formally part of Python. This PEP describes a methodology
for marking a standard library package “provisional” for the period of a single
feature release. A provisional package may have its API modified prior to
“graduating” into a “stable” state. On one hand, this state provides the
package with the benefits of being formally part of the Python distribution.
On the other hand, the core development team explicitly states that no promises
are made with regards to the stability of the package’s API, which may
change for the next release. While it is considered an unlikely outcome,
such packages may even be removed from the standard library without a
deprecation period if the concerns regarding their API or maintenance prove
well-founded. |
PEP 412 – Key-Sharing Dictionary
Author:
Mark Shannon <mark at hotpy.org>
Status:
Final
Type:
Standards Track
Created:
08-Feb-2012
Python-Version:
3.3
Post-History:
08-Feb-2012
Table of Contents
Abstract
Motivation
Behaviour
Performance
Memory Usage
Speed
Implementation
Split-Table dictionaries
Combined-Table dictionaries
Implementation
Pros and Cons
Pros
Cons
Alternative Implementation
References
Copyright
Abstract
This PEP proposes a change in the implementation of the builtin
dictionary type dict. The new implementation allows dictionaries
which are used as attribute dictionaries (the __dict__ attribute
of an object) to share keys with other attribute dictionaries of
instances of the same class.
Motivation
The current dictionary implementation uses more memory than is
necessary when used as a container for object attributes as the keys
are replicated for each instance rather than being shared across many
instances of the same class. Despite this, the current dictionary
implementation is finely tuned and performs very well as a
general-purpose mapping object.
By separating the keys (and hashes) from the values it is possible to
share the keys between multiple dictionaries and improve memory use.
By ensuring that keys are separated from the values only when
beneficial, it is possible to retain the high-performance of the
current dictionary implementation when used as a general-purpose
mapping object.
Behaviour
The new dictionary behaves in the same way as the old implementation.
It fully conforms to the Python API, the C API and the ABI.
Performance
Memory Usage
Reduction in memory use is directly related to the number of
dictionaries with shared keys in existence at any time. These
dictionaries are typically half the size of the current dictionary
implementation.
Benchmarking shows that memory use is reduced by 10% to 20% for
object-oriented programs with no significant change in memory use for
other programs.
Speed
The performance of the new implementation is dominated by memory
locality effects. When keys are not shared (for example in module
dictionaries and dictionaries explicitly created by dict() or
{}) then performance is unchanged (within a percent or two) from
the current implementation.
For the shared keys case, the new implementation tends to separate
keys from values, but reduces total memory usage. This will improve
performance in many cases as the effects of reduced memory usage
outweigh the loss of locality, but some programs may show a small slow
down.
Benchmarking shows no significant change of speed for most benchmarks.
Object-oriented benchmarks show small speed ups when they create large
numbers of objects of the same class (the gcbench benchmark shows a
10% speed up; this is likely to be an upper limit).
Implementation
Both the old and new dictionaries consist of a fixed-sized dict struct
and a re-sizeable table. In the new dictionary the table can be
further split into a keys table and values array. The keys table
holds the keys and hashes and (for non-split tables) the values as
well. It differs from the original implementation only in that it
contains a number of fields that were previously in the dict struct.
If a table is split, the values in the keys table are ignored; instead,
the values are held in a separate array.
Split-Table dictionaries
When dictionaries are created to fill the __dict__ slot of an object,
they are created in split form. The keys table is cached in the type,
potentially allowing all attribute dictionaries of instances of one
class to share keys. In the event of the keys of these dictionaries
starting to diverge, individual dictionaries will lazily convert to
the combined-table form. This ensures good memory use in the common
case, and correctness in all cases.
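A simple way to observe the key-sharing effect on CPython 3.3 or later; the
exact sizes vary by platform and version, so the numbers are only indicative:
import sys

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
split = sys.getsizeof(p.__dict__)            # split-table dict, keys shared
combined = sys.getsizeof(dict(p.__dict__))   # ordinary combined-table dict
print(split, combined)                       # the first is typically smaller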
When resizing a split dictionary it is converted to a combined table.
If resizing is a result of storing an instance attribute, and there
is only one instance of the class, then the dictionary will be re-split
immediately. Since most OO code will set attributes in the __init__
method, all attributes will be set before a second instance is created
and no more resizing will be necessary as all further instance
dictionaries will have the correct size. For more complex use
patterns, it is impossible to know what is the best approach, so the
implementation allows extra insertions up to the point of a resize
when it reverts to the combined table (non-shared keys).
A deletion from a split dictionary does not change the keys table; it
simply removes the value from the values array.
Combined-Table dictionaries
Explicit dictionaries (dict() or {}), module dictionaries and
most other dictionaries are created as combined-table dictionaries. A
combined-table dictionary never becomes a split-table dictionary.
Combined tables are laid out in much the same way as the tables in the
old dictionary, resulting in very similar performance.
Implementation
The new dictionary implementation is available at [1].
Pros and Cons
Pros
Significant memory savings for object-oriented applications. Small
improvement to speed for programs which create lots of similar
objects.
Cons
Change to data structures: Third party modules which meddle with the
internals of the dictionary implementation will break.
Changes to repr() output and iteration order: For most cases, this
will be unchanged. However, for some split-table dictionaries the
iteration order will change.
Neither of these cons should be a problem. Modules which meddle with
the internals of the dictionary implementation are already broken and
should be fixed to use the API. The iteration order of dictionaries
was never defined and has always been arbitrary; it is different for
Jython and PyPy.
Alternative Implementation
An alternative implementation for split tables, which could save even
more memory, is to store an index in the value field of the keys table
(instead of ignoring the value field). This index would explicitly
state where in the value array to look. The value array would then
only require 1 field for each usable slot in the key table, rather
than each slot in the key table.
This “indexed” version would reduce the size of value array by about
one third. The keys table would need an extra “values_size” field,
increasing the size of combined dicts by one word. The extra
indirection adds more complexity to the code, potentially reducing
performance a little.
The “indexed” version will not be included in this implementation, but
should be considered deferred rather than rejected, pending further
experimentation.
References
[1]
Reference Implementation:
https://bitbucket.org/markshannon/cpython_new_dict
Copyright
This document has been placed in the public domain.
| Final | PEP 412 – Key-Sharing Dictionary | Standards Track | This PEP proposes a change in the implementation of the builtin
dictionary type dict. The new implementation allows dictionaries
which are used as attribute dictionaries (the __dict__ attribute
of an object) to share keys with other attribute dictionaries of
instances of the same class. |
PEP 413 – Faster evolution of the Python Standard Library
Author:
Alyssa Coghlan <ncoghlan at gmail.com>
Status:
Withdrawn
Type:
Process
Created:
24-Feb-2012
Post-History:
24-Feb-2012, 25-Feb-2012
Table of Contents
PEP Withdrawal
Abstract
Rationale
Proposal
Release Cycle
Programmatic Version Identification
Security Fixes and Other “Out of Cycle” Releases
User Scenarios
Novice user, downloading Python from python.org in March 2013
Novice user, attempting to judge currency of third party documentation
Novice user, looking for an extension module binary release
Extension module author, deciding whether or not to make a binary release
Python developer, deciding priority of eliminating a Deprecation Warning
Alternative interpreter implementor, updating with new features
Python developer, deciding their minimum version dependency
Python developers, attempting to reproduce a tracker issue
CPython release managers, handling a security fix
Effects
Effect on development cycle
Effect on workflow
Effect on bugfix cycle
Effect on the community
Handling News Updates
What’s New?
NEWS
Other benefits of reduced version coupling
Slowing down the language release cycle
Further increasing the pace of standard library development
Other Questions
Why not use the major version number?
Why not use a four part version number?
Why not use a date-based versioning scheme?
Why isn’t PEP 384 enough?
Why no binary compatible additions to the C ABI in standard library releases?
Why not separate out the standard library entirely?
Acknowledgements
References
Copyright
PEP Withdrawal
With the acceptance of PEP 453 meaning that pip will be available to
most new Python users by default, this will hopefully reduce the pressure
to add new modules to the standard library before they are sufficiently
mature.
The last couple of years have also seen increased usage of the model where
a standard library package also has an equivalent available from the Python
Package Index that also supports older versions of Python.
Given these two developments and the level of engagement throughout the
Python 3.4 release cycle, the PEP author no longer feels it would be
appropriate to make such a fundamental change to the standard library
development process.
Abstract
This PEP proposes the adoption of a separate versioning scheme for the
standard library (distinct from, but coupled to, the existing language
versioning scheme) that allows accelerated releases of the Python standard
library, while maintaining (or even slowing down) the current rate of
change in the core language definition.
Like PEP 407, it aims to adjust the current balance between measured
change that allows the broader community time to adapt and being able to
keep pace with external influences that evolve more rapidly than the current
release cycle can handle (this problem is particularly notable for
standard library elements that relate to web technologies).
However, it’s more conservative in its aims than PEP 407, seeking to
restrict the increased pace of development to builtin and standard library
interfaces, without affecting the rate of change for other elements such
as the language syntax and version numbering as well as the CPython
binary API and bytecode format.
Rationale
To quote the PEP 407 abstract:
Finding a release cycle for an open-source project is a delicate exercise
in managing mutually contradicting constraints: developer manpower,
availability of release management volunteers, ease of maintenance for
users and third-party packagers, quick availability of new features (and
behavioural changes), availability of bug fixes without pulling in new
features or behavioural changes.
The current release cycle errs on the conservative side. It is adequate
for people who value stability over reactivity. This PEP is an attempt to
keep the stability that has become a Python trademark, while offering a
more fluid release of features, by introducing the notion of long-term
support versions.
I agree with the PEP 407 authors that the current release cycle of the
standard library is too slow to effectively cope with the pace of change
in some key programming areas (specifically, web protocols and related
technologies, including databases, templating and serialisation formats).
However, I have written this competing PEP because I believe that the
approach proposed in PEP 407 of offering full, potentially binary
incompatible releases of CPython every 6 months places too great a burden
on the wider Python ecosystem.
Under the current CPython release cycle, distributors of key binary
extensions will often support Python releases even after the CPython branches
enter “security fix only” mode (for example, Twisted currently ships binaries
for 2.5, 2.6 and 2.7, NumPy and SciPy support those 3 along with 3.1 and 3.2,
PyGame adds a 2.4 binary release, wxPython provides both 32-bit and 64-bit
binaries for 2.6 and 2.7, etc).
If CPython were to triple (or more) its rate of releases, the developers of
those libraries (many of which are even more resource starved than CPython)
would face an unpalatable choice: either adopt the faster release cycle
themselves (up to 18 simultaneous binary releases for PyGame!), drop
older Python versions more quickly, or else tell their users to stick to the
CPython LTS releases (thus defeating the entire point of speeding up the
CPython release cycle in the first place).
Similarly, many support tools for Python (e.g. syntax highlighters) can take
quite some time to catch up with language level changes.
At a cultural level, the Python community is also accustomed to a certain
meaning for Python version numbers - they’re linked to deprecation periods,
support periods, all sorts of things. PEP 407 proposes that collective
knowledge all be swept aside, without offering a compelling rationale for why
such a course of action is actually necessary (aside from, perhaps, making
the lives of the CPython core developers a little easier at the expense of
everyone else).
However, if we go back to the primary rationale for increasing the pace of
change (i.e. more timely support for web protocols and related technologies),
we can note that those only require standard library changes. That means
many (perhaps even most) of the negative effects on the wider community can
be avoided by explicitly limiting which parts of CPython are affected by the
new release cycle, and allowing other parts to evolve at their current, more
sedate, pace.
Proposal
This PEP proposes the introduction of a new kind of CPython release:
“standard library releases”. As with PEP 407, this will give CPython 3 kinds
of release:
Language release: “x.y.0”
Maintenance release: “x.y.z” (where z > 0)
Standard library release: “x.y (xy.z)” (where z > 0)
Under this scheme, an unqualified version reference (such as “3.3”) would
always refer to the most recent corresponding language or maintenance
release. It will never be used without qualification to refer to a standard
library release (at least, not by python-dev - obviously, we can only set an
example, not force the rest of the Python ecosystem to go along with it).
Language releases will continue as they are now, as new versions of the
Python language definition, along with a new version of the CPython
interpreter and the Python standard library. Accordingly, a language
release may contain any and all of the following changes:
new language syntax
new standard library changes (see below)
new deprecation warnings
removal of previously deprecated features
changes to the emitted bytecode
changes to the AST
any other significant changes to the compilation toolchain
changes to the core interpreter eval loop
binary incompatible changes to the C ABI (although the PEP 384 stable ABI
must still be preserved)
bug fixes
Maintenance releases will also continue as they do today, being strictly
limited to bug fixes for the corresponding language release. No new features
or radical internal changes are permitted.
The new standard library releases will occur in parallel with each
maintenance release and will be qualified with a new version identifier
documenting the standard library version. Standard library releases may
include the following changes:
new features in pure Python modules
new features in C extension modules (subject to PEP 399 compatibility
requirements)
new features in language builtins (provided the C ABI remains unaffected)
bug fixes from the corresponding maintenance release
Standard library version identifiers are constructed by combining the major
and minor version numbers for the Python language release into a single two
digit number and then appending a sequential standard library version
identifier.
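A trivial sketch of that construction; the helper name is made up for
illustration:
def stdlib_version(major, minor, stdlib_serial):
    # e.g. stdlib_version(3, 3, 1) -> "33.1"
    # Assumes single-digit major and minor numbers, as the PEP does.
    return "%d%d.%d" % (major, minor, stdlib_serial)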
Release Cycle
When maintenance releases are created, two new versions of Python would
actually be published on python.org (using the first 3.3 maintenance release,
planned for February 2013 as an example):
3.3.1 # Maintenance release
3.3 (33.1) # Standard library release
A further 6 months later, the next 3.3 maintenance release would again be
accompanied by a new standard library release:
3.3.2 # Maintenance release
3.3 (33.2) # Standard library release
Again, the standard library release would be binary compatible with the
previous language release, merely offering additional features at the
Python level.
Finally, 18 months after the release of 3.3, a new language release would
be made around the same time as the final 3.3 maintenance and standard
library releases:
3.3.3 # Maintenance release
3.3 (33.3) # Standard library release
3.4.0 # Language release
The 3.4 release cycle would then follow a similar pattern to that for 3.3:
3.4.1 # Maintenance release
3.4 (34.1) # Standard library release
3.4.2 # Maintenance release
3.4 (34.2) # Standard library release
3.4.3 # Maintenance release
3.4 (34.3) # Standard library release
3.5.0 # Language release
Programmatic Version Identification
To expose the new version details programmatically, this PEP proposes the
addition of a new sys.stdlib_info attribute that records the new
standard library version above and beyond the underlying interpreter
version. Using the initial Python 3.3 release as an example:
sys.stdlib_info(python=33, version=0, releaselevel='final', serial=0)
This information would also be included in the sys.version string:
Python 3.3.0 (33.0, default, Feb 17 2012, 23:03:41)
[GCC 4.6.1]
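Code that wanted to depend on a feature added in a standard library release
could then guard on the new attribute; since sys.stdlib_info never shipped,
the check below is purely illustrative:
import sys

stdlib = getattr(sys, 'stdlib_info', None)
if stdlib is not None and (stdlib.python, stdlib.version) >= (33, 2):
    pass   # use a feature added in standard library release 33.2
else:
    pass   # fall back to the behaviour of the base language release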
Security Fixes and Other “Out of Cycle” Releases
For maintenance releases the process of handling out-of-cycle releases (for
example, to fix a security issue or resolve a critical bug in a new release),
remains the same as it is now: the minor version number is incremented and a
new release is made incorporating the required bug fixes, as well as any
other bug fixes that have been committed since the previous release.
For standard library releases, the process is essentially the same, but the
corresponding “What’s New?” document may require some tidying up for the
release (as the standard library release may incorporate new features,
not just bug fixes).
User Scenarios
The versioning scheme proposed above is based on a number of user scenarios
that are likely to be encountered if this scheme is adopted. In each case,
the scenario is described for the status quo (i.e. the current slow release
cycle), the versioning scheme in this PEP, and the free-wheeling minor version
number scheme proposed in PEP 407.
To give away the ending, the point of using a separate version number is that
for almost all scenarios, the important number is the language version, not
the standard library version. Most users won’t even need to care that the
standard library version number exists. In the two identified cases where
it matters, providing it as a separate number is actually clearer and more
explicit than embedding the two different kinds of number into a single
sequence and then tagging some of the numbers in the unified sequence as
special.
Novice user, downloading Python from python.org in March 2013
Status quo: must choose between 3.3 and 2.7
This PEP: must choose between 3.3 (33.1), 3.3 and 2.7.
PEP 407: must choose between 3.4, 3.3 (LTS) and 2.7.
Verdict: explaining the meaning of a Long Term Support release is about as
complicated as explaining the meaning of the proposed standard library release
version numbers. I call this a tie.
Novice user, attempting to judge currency of third party documentation
Status quo: minor version differences indicate 18-24 months of
language evolution
This PEP: same as status quo for language core, standard library version
numbers indicate 6 months of standard library evolution.
PEP 407: minor version differences indicate 18-24 months of language
evolution up to 3.3, then 6 months of language evolution thereafter.
Verdict: Since language changes and deprecations can have a much bigger
effect on the accuracy of third party documentation than the addition of new
features to the standard library, I’m calling this a win for the scheme
in this PEP.
Novice user, looking for an extension module binary release
Status quo: look for the binary corresponding to the Python version you are
running.
This PEP: same as status quo.
PEP 407 (full releases): same as status quo, but corresponding binary version
is more likely to be missing (or, if it does exist, has to be found amongst
a much larger list of alternatives).
PEP 407 (ABI updates limited to LTS releases): all binary release pages will
need to tell users that Python 3.3, 3.4 and 3.5 all need the 3.3 binary.
Verdict: I call this a clear win for the scheme in this PEP. Absolutely
nothing changes from the current situation, since the standard library
version is actually irrelevant in this case (only binary extension
compatibility is important).
Extension module author, deciding whether or not to make a binary release
Status quo: unless using the PEP 384 stable ABI, a new binary release is
needed every time the minor version number changes.
This PEP: same as status quo.
PEP 407 (full releases): same as status quo, but becomes a far more
frequent occurrence.
PEP 407 (ABI updates limited to LTS releases): before deciding, must first
look up whether the new release is an LTS release or an interim release. If
it is an LTS release, then a new build is necessary.
Verdict: I call this another clear win for the scheme in this PEP. As with
the end user facing side of this problem, the standard library version is
actually irrelevant in this case. Moving that information out to a
separate number avoids creating unnecessary confusion.
Python developer, deciding priority of eliminating a Deprecation Warning
Status quo: code that triggers deprecation warnings is not guaranteed to
run on a version of Python with a higher minor version number.
This PEP: same as status quo
PEP 407: unclear, as the PEP doesn’t currently spell this out. Assuming the
deprecation cycle is linked to LTS releases, then upgrading to a non-LTS
release is safe but upgrading to the next LTS release may require avoiding
the deprecated construct.
Verdict: another clear win for the scheme in this PEP since, once again, the
standard library version is irrelevant in this scenario.
Alternative interpreter implementor, updating with new features
Status quo: new Python versions arrive infrequently, but are a mish-mash of
standard library updates and core language definition and interpreter
changes.
This PEP: standard library updates, which are easier to integrate, are
made available more frequently in a form that is clearly and explicitly
compatible with the previous version of the language definition. This means
that, once an alternative implementation catches up to Python 3.3, they
should have a much easier time incorporating standard library features as
they happen (especially pure Python changes), leaving minor version number
updates as the only task that requires updates to their core compilation and
execution components.
PEP 407 (full releases): same as status quo, but becomes a far more
frequent occurrence.
PEP 407 (language updates limited to LTS releases): unclear, as the PEP
doesn’t currently spell out a specific development strategy. Assuming a
3.3 compatibility branch is adopted (as proposed in this PEP), then the
outcome would be much the same, but the version number signalling would be
slightly less clear (since you would have to check to see if a particular
release was an LTS release or not).
Verdict: while not as clear cut as some previous scenarios, I’m still
calling this one in favour of the scheme in this PEP. Explicit is better than
implicit, and the scheme in this PEP makes a clear split between the two
different kinds of update rather than adding a separate “LTS” tag to an
otherwise ordinary release number. Tagging a particular version as being
special is great for communicating with version control systems and associated
automated tools, but it’s a lousy way to communicate information to other
humans.
Python developer, deciding their minimum version dependency
Status quo: look for “version added” or “version changed” markers in the
documentation, check against sys.version_info
This PEP: look for “version added” or “version changed” markers in the
documentation. If written as a bare Python version, such as “3.3”, check
against sys.version_info. If qualified with a standard library version,
such as “3.3 (33.1)”, check against sys.stdlib_info.
PEP 407: same as status quo
Verdict: the scheme in this PEP actually allows third party libraries to be
more explicit about their rate of adoption of standard library features. More
conservative projects will likely pin their dependency to the language
version and avoid features added in the standard library releases. Faster
moving projects could instead declare their dependency on a particular
standard library version. However, since PEP 407 does have the advantage of
preserving the status quo, I’m calling this one for PEP 407 (albeit with a
slim margin).
Python developers, attempting to reproduce a tracker issue
Status quo: if not already provided, ask the reporter which version of
Python they’re using. This is often done by asking for the first two lines
displayed by the interactive prompt or the value of sys.version.
This PEP: same as the status quo (as sys.version will be updated to
also include the standard library version), but may be needed on additional
occasions (where the user knew enough to state their Python version, but that
proved to be insufficient to reproduce the fault).
PEP 407: same as the status quo
Verdict: another marginal win for PEP 407. The new standard library version
is an extra piece of information that users may need to pass back to
developers when reporting issues with Python libraries (or Python itself,
on our own tracker). However, by including it in sys.version, many
fault reports will already include it, and it is easy to request if needed.
CPython release managers, handling a security fix
Status quo: create a new maintenance release incorporating the security
fix and any other bug fixes under source control. Also create source releases
for any branches open solely for security fixes.
This PEP: same as the status quo for maintenance branches. Also create a
new standard library release (potentially incorporating new features along
with the security fix). For security branches, create source releases for
both the former maintenance branch and the standard library update branch.
PEP 407: same as the status quo for maintenance and security branches,
but handling security fixes for non-LTS releases is currently an open
question.
Verdict: until PEP 407 is updated to actually address this scenario, a
clear win for this PEP.
Effects
Effect on development cycle
Similar to PEP 407, this PEP will break up the delivery of new features into
more discrete chunks. Instead of a whole raft of changes landing all at once
in a language release, each language release will be limited to 6 months
worth of standard library changes, as well as any changes associated with
new syntax.
Effect on workflow
This PEP proposes the creation of a single additional branch for use in the
normal workflow. After the release of 3.3, the following branches would be
in use:
2.7 # Maintenance branch, no change
3.3 # Maintenance branch, as for 3.2
3.3-compat # New branch, backwards compatible changes
default # Language changes, standard library updates that depend on them
When working on a new feature, developers will need to decide whether or not
it is an acceptable change for a standard library release. If so, then it
should be checked in on 3.3-compat and then merged to default.
Otherwise it should be checked in directly to default.
The “version added” and “version changed” markers for any changes made on
the 3.3-compat branch would need to be flagged with both the language
version and the standard library version. For example: “3.3 (33.1)”.
Any changes made directly on the default branch would just be flagged
with “3.4” as usual.
The 3.3-compat branch would be closed to normal development at the
same time as the 3.3 maintenance branch. The 3.3-compat branch would
remain open for security fixes for the same period of time as the 3.3
maintenance branch.
Effect on bugfix cycle
The effect on the bug fix workflow is essentially the same as that on the
workflow for new features - there is one additional branch to pass through
before the change reaches the default branch.
If critical bugs are found in a maintenance release, then new maintenance and
standard library releases will be created to resolve the problem. The final
part of the version number will be incremented for both the language version
and the standard library version.
If critical bugs are found in a standard library release that do not affect
the associated maintenance release, then only a new standard library release
will be created and only the standard library’s version number will be
incremented.
Note that in these circumstances, the standard library release may include
additional features, rather than just containing the bug fix. It is
assumed that anyone that cares about receiving only bug fixes without any
new features mixed in will already be relying strictly on the maintenance
releases rather than using the new standard library releases.
Effect on the community
PEP 407 has this to say about the effects on the community:
People who value stability can just synchronize on the LTS releases which,
with the proposed figures, would give a similar support cycle (both in
duration and in stability).
I believe this statement is just plain wrong. Life isn’t that simple. Instead,
developers of third party modules and frameworks will come under pressure to
support the full pace of the new release cycle with binary updates, teachers
and book authors will receive complaints that they’re only covering an “old”
version of Python (“You’re only using 3.3, the latest is 3.5!”), etc.
As the minor version number starts climbing 3 times faster than it has in the
past, I believe perceptions of language stability would also fall (whether
such opinions were justified or not).
I believe isolating the increased pace of change to the standard library,
and clearly delineating it with a separate version number will greatly
reassure the rest of the community that no, we’re not suddenly
asking them to triple their own rate of development. Instead, we’re merely
going to ship standard library updates for the next language release in
6-monthly installments rather than delaying them all until the next language
definition update, even those changes that are backwards compatible with the
previously released version of Python.
The community benefits listed in PEP 407 are equally applicable to this PEP,
at least as far as the standard library is concerned:
People who value reactivity and access to new features (without taking the
risk to install alpha versions or Mercurial snapshots) would get much more
value from the new release cycle than currently.
People who want to contribute new features or improvements would be more
motivated to do so, knowing that their contributions will be more quickly
available to normal users.
If the faster release cycle encourages more people to focus on contributing
to the standard library rather than proposing changes to the language
definition, I don’t see that as a bad thing.
Handling News Updates
What’s New?
The “What’s New” documents would be split out into separate documents for
standard library releases and language releases. So, during the 3.3 release
cycle, we would see:
What’s New in Python 3.3?
What’s New in the Python Standard Library 33.1?
What’s New in the Python Standard Library 33.2?
What’s New in the Python Standard Library 33.3?
And then finally, we would see the next language release:
What’s New in Python 3.4?
For the benefit of users that ignore standard library releases, the 3.4
What’s New would link back to the What’s New documents for each of the
standard library releases in the 3.3 series.
NEWS
Merge conflicts on the NEWS file are already a hassle. Since this PEP
proposes introduction of an additional branch into the normal workflow,
resolving this becomes even more critical. While Mercurial phases may
help to some degree, it would be good to eliminate the problem entirely.
One suggestion from Barry Warsaw is to adopt a non-conflicting
separate-files-per-change approach, similar to that used by Twisted [2].
Given that the current manually updated NEWS file will be used for the 3.3.0
release, one possible layout for such an approach might look like:
Misc/
NEWS # Now autogenerated from news_entries
news_entries/
3.3/
NEWS # Original 3.3 NEWS file
maint.1/ # Maintenance branch changes
core/
<news entries>
builtins/
<news entries>
extensions/
<news entries>
library/
<news entries>
documentation/
<news entries>
tests/
<news entries>
compat.1/ # Compatibility branch changes
builtins/
<news entries>
extensions/
<news entries>
library/
<news entries>
documentation/
<news entries>
tests/
<news entries>
# Add maint.2, compat.2 etc as releases are made
3.4/
core/
<news entries>
builtins/
<news entries>
extensions/
<news entries>
library/
<news entries>
documentation/
<news entries>
tests/
<news entries>
# Add maint.1, compat.1 etc as releases are made
Putting the version information in the directory hierarchy isn’t strictly
necessary (since the NEWS file generator could figure out from the version
history), but does make it easier for humans to keep the different versions
in order.
Other benefits of reduced version coupling
Slowing down the language release cycle
The current release cycle is a compromise between the desire for stability
in the core language definition and C extension ABI, and the desire to get
new features (most notably standard library updates) into users’ hands more
quickly.
With the standard library release cycle decoupled (to some degree) from that
of the core language definition, it provides an opportunity to actually
slow down the rate of change in the language definition. The language
moratorium for Python 3.2 effectively slowed that cycle down to more than 3
years (3.1: June 2009, 3.3: August 2012) without causing any major
problems or complaints.
The NEWS file management scheme described above is actually designed to
allow us the flexibility to slow down language releases at the same time
as standard library releases become more frequent.
As a simple example, if a full two years was allowed between 3.3 and 3.4,
the 3.3 release cycle would end up looking like:
3.2.4 # Maintenance release
3.3.0 # Language release
3.3.1 # Maintenance release
3.3 (33.1) # Standard library release
3.3.2 # Maintenance release
3.3 (33.2) # Standard library release
3.3.3 # Maintenance release
3.3 (33.3) # Standard library release
3.3.4 # Maintenance release
3.3 (33.4) # Standard library release
3.4.0 # Language release
The elegance of the proposed branch structure and NEWS entry layout is that
this decision wouldn’t really need to be made until shortly before the planned
3.4 release date. At that point, the decision could be made to postpone the
3.4 release and keep the 3.3 and 3.3-compat branches open after the
3.3.3 maintenance release and the 3.3 (33.3) standard library release, thus
adding another standard library release to the cycle. The choice between
another standard library release or a full language release would then be
available every 6 months after that.
Further increasing the pace of standard library development
As noted in the previous section, one benefit of the scheme proposed in this
PEP is that it largely decouples the language release cycle from the
standard library release cycle. The standard library could be updated every
3 months, or even once a month, without having any flow on effects on the
language version numbering or the perceived stability of the core language.
While that pace of development isn’t practical as long as the binary
installer creation for Windows and Mac OS X involves several manual steps
(including manual testing) and for as long as we don’t have separate
“<branch>-release” trees that only receive versions that have been marked as
good by the stable buildbots, it’s still a useful criterion to keep in mind
when considering proposed new versioning schemes: what if we eventually want
to make standard library releases even faster than every 6 months?
If the practical issues were ever resolved, then the separate standard
library versioning scheme in this PEP could handle it. The tagged version
number approach proposed in PEP 407 could not (at least, not without a lot
of user confusion and uncertainty).
Other Questions
Why not use the major version number?
The simplest and most logical solution would actually be to map the
major.minor.micro version numbers to the language version, stdlib version
and maintenance release version respectively.
Instead of releasing Python 3.3.0, we would instead release Python 4.0.0
and the release cycle would look like:
4.0.0 # Language release
4.0.1 # Maintenance release
4.1.0 # Standard library release
4.0.2 # Maintenance release
4.2.0 # Standard library release
4.0.3 # Maintenance release
4.3.0 # Standard library release
5.0.0 # Language release
However, the ongoing pain of the Python 2 -> Python 3 transition (and
associated workarounds like the python3 and python2 symlinks to
refer directly to the desired release series) means that this simple option
isn’t viable for historical reasons.
One way that this simple approach could be made to work is to merge the
current major and minor version numbers directly into a 2-digit major
version number:
33.0.0 # Language release
33.0.1 # Maintenance release
33.1.0 # Standard library release
33.0.2 # Maintenance release
33.2.0 # Standard library release
33.0.3 # Maintenance release
33.3.0 # Standard library release
34.0.0 # Language release
Why not use a four part version number?
Another simple versioning scheme would just add a “standard library” version
into the existing versioning scheme:
3.3.0.0 # Language release
3.3.0.1 # Maintenance release
3.3.1.0 # Standard library release
3.3.0.2 # Maintenance release
3.3.2.0 # Standard library release
3.3.0.3 # Maintenance release
3.3.3.0 # Standard library release
3.4.0.0 # Language release
However, this scheme isn’t viable due to backwards compatibility constraints
on the sys.version_info structure.
Why not use a date-based versioning scheme?
Earlier versions of this PEP proposed a date-based versioning scheme for
the standard library. However, such a scheme made it very difficult to
handle out-of-cycle releases to fix security issues and other critical
bugs in standard library releases, as it required the following steps:
Change the release version number to the date of the current month.
Update the What’s New, NEWS and documentation to refer to the new release
number.
Make the new release.
With the sequential scheme now proposed, such releases should at most require
a little tidying up of the What’s New document before making the release.
Why isn’t PEP 384 enough?
PEP 384 introduced the notion of a “Stable ABI” for CPython, a limited
subset of the full C ABI that is guaranteed to remain stable. Extensions
built against the stable ABI should be able to support all subsequent
Python versions with the same binary.
This will help new projects to avoid coupling their C extension modules too
closely to a specific version of CPython. For existing modules, however,
migrating to the stable ABI can involve quite a lot of work (especially for
extension modules that define a lot of classes). With limited development
resources available, any time spent on such a change is time that could
otherwise have been spent working on features that offer more direct benefits
to end users.
There are also other benefits to separate versioning (as described above)
that are not directly related to the question of binary compatibility with
third party C extensions.
Why no binary compatible additions to the C ABI in standard library releases?
There’s a case to be made that additions to the CPython C ABI could
reasonably be permitted in standard library releases. This would give C
extension authors the same freedom as any other package or module author
to depend either on a particular language version or on a standard library
version.
The PEP currently associates the interpreter version with the language
version, and therefore limits major interpreter changes (including C ABI
additions) to the language releases.
An alternative, internally consistent, approach would be to link the
interpreter version with the standard library version, with only changes that
may affect backwards compatibility limited to language releases.
Under such a scheme, the following changes would be acceptable in standard
library releases:
Standard library updates
new features in pure Python modules
new features in C extension modules (subject to PEP 399 compatibility
requirements)
new features in language builtins
Interpreter implementation updates
binary compatible additions to the C ABI
changes to the compilation toolchain that do not affect the AST or alter
the bytecode magic number
changes to the core interpreter eval loop
bug fixes from the corresponding maintenance release
And the following changes would be acceptable in language releases:
new language syntax
any updates acceptable in a standard library release
new deprecation warnings
removal of previously deprecated features
changes to the AST
changes to the emitted bytecode that require altering the magic number
binary incompatible changes to the C ABI (although the PEP 384 stable ABI
must still be preserved)
While such an approach could probably be made to work, there does not appear
to be a compelling justification for it, and the approach currently described
in the PEP is simpler and easier to explain.
Why not separate out the standard library entirely?
A concept that is occasionally discussed is the idea of making the standard
library truly independent from the CPython reference implementation.
My personal opinion is that actually making such a change would involve a
lot of work for next to no pay-off. CPython without the standard library is
useless (the build chain won’t even run, let alone the test suite). You also
can’t create a standalone pure Python standard library either, because too
many “standard library modules” are actually tightly linked in to the
internal details of their respective interpreters (for example, the builtins,
weakref, gc, sys, inspect, ast).
Creating a separate CPython development branch that is kept compatible with
the previous language release, and making releases from that branch that are
identified with a separate standard library version number should provide
most of the benefits of a separate standard library repository with only a
fraction of the pain.
Acknowledgements
Thanks go to the PEP 407 authors for starting this discussion, as well as
to those authors and Larry Hastings for initial discussions of the proposal
made in this PEP.
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 413 – Faster evolution of the Python Standard Library | Process | This PEP proposes the adoption of a separate versioning scheme for the
standard library (distinct from, but coupled to, the existing language
versioning scheme) that allows accelerated releases of the Python standard
library, while maintaining (or even slowing down) the current rate of
change in the core language definition. |
PEP 414 – Explicit Unicode Literal for Python 3.3
Author:
Armin Ronacher <armin.ronacher at active-4.com>,
Alyssa Coghlan <ncoghlan at gmail.com>
Status:
Final
Type:
Standards Track
Created:
15-Feb-2012
Python-Version:
3.3
Post-History:
28-Feb-2012, 04-Mar-2012
Resolution:
Python-Dev message
Table of Contents
Abstract
BDFL Pronouncement
Proposal
Exclusion of “Raw” Unicode Literals
Author’s Note
Rationale
Common Objections
Complaint: This PEP may harm adoption of Python 3.2
Complaint: Python 3 shouldn’t be made worse just to support porting from Python 2
Complaint: The WSGI “native strings” concept is an ugly hack
Complaint: The existing tools should be good enough for everyone
References
Copyright
Abstract
This document proposes the reintegration of an explicit unicode literal
from Python 2.x to the Python 3.x language specification, in order to
reduce the volume of changes needed when porting Unicode-aware
Python 2 applications to Python 3.
BDFL Pronouncement
This PEP has been formally accepted for Python 3.3:
I’m accepting the PEP. It’s about as harmless as they come. Make it so.
Proposal
This PEP proposes that Python 3.3 restore support for Python 2’s Unicode
literal syntax, substantially increasing the number of lines of existing
Python 2 code in Unicode aware applications that will run without modification
on Python 3.
Specifically, the Python 3 definition for string literal prefixes will be
expanded to allow:
"u" | "U"
in addition to the currently supported:
"r" | "R"
The following will all denote ordinary Python 3 strings:
'text'
"text"
'''text'''
"""text"""
u'text'
u"text"
u'''text'''
u"""text"""
U'text'
U"text"
U'''text'''
U"""text"""
No changes are proposed to Python 3’s actual Unicode handling, only to the
acceptable forms for string literals.
Exclusion of “Raw” Unicode Literals
Python 2 supports a concept of “raw” Unicode literals that don’t meet the
conventional definition of a raw string: \uXXXX and \UXXXXXXXX escape
sequences are still processed by the compiler and converted to the
appropriate Unicode code points when creating the associated Unicode objects.
Python 3 has no corresponding concept - the compiler performs no
preprocessing of the contents of raw string literals. This matches the
behaviour of 8-bit raw string literals in Python 2.
Since such strings are rarely used and would be interpreted differently in
Python 3 if permitted, it was decided that leaving them out entirely was
a better choice. Code which uses them will thus still fail immediately on
Python 3 (with a Syntax Error), rather than potentially producing different
output.
To get equivalent behaviour that will run on both Python 2 and Python 3,
either an ordinary Unicode literal can be used (with appropriate additional
escaping within the string), or else string concatenation or string
formatting can be used to combine the raw portions of the string with those that
require the use of Unicode escape sequences.
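As an illustrative sketch (the pattern used here is invented for the example), a Python 2 raw Unicode literal such as ur'\d+ \u20ac' can be rewritten in a form that runs on both Python 2 and Python 3.3+:
# Concatenation: keep the raw part raw, supply the escaped code point separately
pattern = r'\d+ ' + u'\u20ac'
# String formatting achieves the same result
pattern2 = u'{digits} {euro}'.format(digits=r'\d+', euro=u'\u20ac')
assert pattern == pattern2 == u'\\d+ \u20ac'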
Note that when using from __future__ import unicode_literals in Python 2,
the nominally “raw” Unicode string literals will process \uXXXX and
\UXXXXXXXX escape sequences, just like Python 2 strings explicitly marked
with the “raw Unicode” prefix.
Author’s Note
This PEP was originally written by Armin Ronacher, and Guido’s approval was
given based on that version.
The currently published version has been rewritten by Alyssa Coghlan to
include additional historical details and rationale that were taken into
account when Guido made his decision, but were not explicitly documented in
Armin’s version of the PEP.
Readers should be aware that many of the arguments in this PEP are not
technical ones. Instead, they relate heavily to the social and personal
aspects of software development.
Rationale
With the release of a Python 3 compatible version of the Web Services Gateway
Interface (WSGI) specification (PEP 3333) for Python 3.2, many parts of the
Python web ecosystem have been making a concerted effort to support Python 3
without adversely affecting their existing developer and user communities.
One major item of feedback from key developers in those communities, including
Chris McDonough (WebOb, Pyramid), Armin Ronacher (Flask, Werkzeug), Jacob
Kaplan-Moss (Django) and Kenneth Reitz (requests) is that the requirement
to change the spelling of every Unicode literal in an application
(regardless of how that is accomplished) is a key stumbling block for porting
efforts.
In particular, unlike many of the other Python 3 changes, it isn’t one that
framework and library authors can easily handle on behalf of their users. Most
of those users couldn’t care less about the “purity” of the Python language
specification; they just want their websites and applications to work as well
as possible.
While it is the Python web community that has been most vocal in highlighting
this concern, it is expected that other highly Unicode aware domains (such as
GUI development) may run into similar issues as they (and their communities)
start making concerted efforts to support Python 3.
Common Objections
Complaint: This PEP may harm adoption of Python 3.2
This complaint is interesting, as it carries within it a tacit admission that
this PEP will make it easier to port Unicode aware Python 2 applications to
Python 3.
There are many existing Python communities that are prepared to put up with
the constraints imposed by the existing suite of porting tools, or to update
their Python 2 code bases sufficiently that the problems are minimised.
This PEP is not for those communities. Instead, it is designed specifically to
help people that don’t want to put up with those difficulties.
However, since the proposal is for a comparatively small tweak to the language
syntax with no semantic changes, it is feasible to support it as a third
party import hook. While such an import hook imposes some import time
overhead, and requires additional steps from each application that needs it
to get the hook in place, it allows applications that target Python 3.2
to use libraries and frameworks that would otherwise only run on Python 3.3+
due to their use of unicode literal prefixes.
One such import hook project is Vinay Sajip’s uprefix [4].
For those that prefer to translate their code in advance rather than
converting on the fly at import time, Armin Ronacher is working on a hook
that runs at install time rather than during import [5].
Combining the two approaches is of course also possible. For example, the
import hook could be used for rapid edit-test cycles during local
development, but the install hook for continuous integration tasks and
deployment on Python 3.2.
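Neither hook's actual implementation is reproduced here, but the core source transformation both rely on can be sketched with the standard tokenize module (a simplified assumption of how such a tool might work, not the uprefix or install hook code itself):
import io
import tokenize

def strip_unicode_prefixes(source):
    # Rewrite u'...' / U'...' string tokens so the result also compiles on
    # Python 3.2, which lacks the prefix restored by this PEP.
    tokens = []
    for tok_type, tok_val, _, _, _ in tokenize.generate_tokens(
            io.StringIO(source).readline):
        if tok_type == tokenize.STRING and tok_val[:1] in ('u', 'U'):
            tok_val = tok_val[1:]
        tokens.append((tok_type, tok_val))
    return tokenize.untokenize(tokens)

print(strip_unicode_prefixes("x = u'text'\n"))   # the u prefix is gone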
The approaches described in this section may prove useful, for example, for
applications that wish to target Python 3 on the Ubuntu 12.04 LTS release,
which will ship with Python 2.7 and 3.2 as officially supported Python
versions.
Complaint: Python 3 shouldn’t be made worse just to support porting from Python 2
This is indeed one of the key design principles of Python 3. However, one of
the key design principles of Python as a whole is that “practicality beats
purity”. If we’re going to impose a significant burden on third party
developers, we should have a solid rationale for doing so.
In most cases, the rationale for backwards incompatible Python 3 changes is
either to improve code correctness (for example, stricter default separation
of binary and text data and integer division upgrading to floats when
necessary), reduce typical memory usage (for example, increased usage of
iterators and views over concrete lists), or to remove distracting nuisances
that make Python code harder to read without increasing its expressiveness
(for example, the comma based syntax for naming caught exceptions). Changes
backed by such reasoning are not going to be reverted, regardless of
objections from Python 2 developers attempting to make the transition to
Python 3.
In many cases, Python 2 offered two ways of doing things for historical reasons.
For example, inequality could be tested with both != and <> and integer
literals could be specified with an optional L suffix. Such redundancies
have been eliminated in Python 3, which reduces the overall size of the
language and improves consistency across developers.
In the original Python 3 design (up to and including Python 3.2), the explicit
prefix syntax for unicode literals was deemed to fall into this category, as it
is completely unnecessary in Python 3. However, the difference between those
other cases and unicode literals is that the unicode literal prefix is not
redundant in Python 2 code: it is a programmatically significant distinction
that needs to be preserved in some fashion to avoid losing information.
While porting tools were created to help with the transition (see next section)
it still creates an additional burden on heavy users of unicode strings in
Python 2, solely so that future developers learning Python 3 don’t need to be
told “For historical reasons, string literals may have an optional u or
U prefix. Never use this yourselves, it’s just there to help with porting
from an earlier version of the language.”
Plenty of students learning Python 2 received similar warnings regarding string
exceptions without being confused or irreparably stunted in their growth as
Python developers. It will be the same with this feature.
This point is further reinforced by the fact that Python 3 still allows the
uppercase variants of the B and R prefixes for bytes literals and raw
bytes and string literals. If the potential for confusion due to string prefix
variants is that significant, where was the outcry asking that these
redundant prefixes be removed along with all the other redundancies that were
eliminated in Python 3?
Just as support for string exceptions was eliminated from Python 2 using the
normal deprecation process, support for redundant string prefix characters
(specifically, B, R, u, U) may eventually be eliminated
from Python 3, regardless of the current acceptance of this PEP. However,
such a change will likely only occur once third party libraries supporting
Python 2.7 are about as common as libraries supporting Python 2.2 or 2.3 are
today.
Complaint: The WSGI “native strings” concept is an ugly hack
One reason the removal of unicode literals has provoked such concern amongst
the web development community is that the updated WSGI specification had to
make a few compromises to minimise the disruption for existing web servers
that provide a WSGI-compatible interface (this was deemed necessary in order
to make the updated standard a viable target for web application authors and
web framework developers).
One of those compromises is the concept of a “native string”. WSGI defines
three different kinds of string:
text strings: handled as unicode in Python 2 and str in Python 3
native strings: handled as str in both Python 2 and Python 3
binary data: handled as str in Python 2 and bytes in Python 3
Some developers consider WSGI’s “native strings” to be an ugly hack, as they
are explicitly documented as being used solely for latin-1 decoded
“text”, regardless of the actual encoding of the underlying data. Using this
approach bypasses many of the updates to Python 3’s data model that are
designed to encourage correct handling of text encodings. However, it
generally works due to the specific details of the problem domain - web server
and web framework developers are some of the individuals most aware of how
blurry the line can get between binary data and text when working with HTTP
and related protocols, and how important it is to understand the implications
of the encodings in use when manipulating encoded text data. At the
application level most of these details are hidden from the developer by
the web frameworks and support libraries (both in Python 2 and in Python 3).
In practice, native strings are a useful concept because there are some APIs
(both in the standard library and in third party frameworks and packages) and
some internal interpreter details that are designed primarily to work with
str. These components often don’t support unicode in Python 2
or bytes in Python 3, or, if they do, require additional encoding details
and/or impose constraints that don’t apply to the str variants.
Some example of interfaces that are best handled by using actual str
instances are:
Python identifiers (as attributes, dict keys, class names, module names,
import references, etc)
URLs for the most part as well as HTTP headers in urllib/http servers
WSGI environment keys and CGI-inherited values
Python source code for dynamic compilation and AST hacks
Exception messages
__repr__ return value
preferred filesystem paths
preferred OS environment
In Python 2.6 and 2.7, these distinctions are most naturally expressed as
follows:
u"": text string (unicode)
"": native string (str)
b"": binary data (str, also aliased as bytes)
In Python 3, the latin-1 decoded native strings are not distinguished
from any other text strings:
"": text string (str)
"": native string (str)
b"": binary data (bytes)
If from __future__ import unicode_literals is used to modify the behaviour
of Python 2, then, along with an appropriate definition of n(), the
distinction can be expressed as:
"": text string
n(""): native string
b"": binary data
(While n=str works for simple cases, it can sometimes have problems
due to non-ASCII source encodings)
In the common subset of Python 2 and Python 3 (with appropriate
specification of a source encoding and definitions of the u() and b()
helper functions), they can be expressed as:
u(""): text string
"": native string
b(""): binary data
That last approach is the only variant that supports Python 2.5 and earlier.
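For reference, the u() and b() helpers mentioned above are typically defined along the following lines (a simplified sketch; real compatibility layers such as six handle additional corner cases):
import sys

if sys.version_info[0] >= 3:
    def u(s):
        return s                       # text is already str on Python 3
    def b(s):
        return s.encode("latin-1")     # binary data
else:
    def u(s):
        return unicode(s, "unicode_escape")   # text string
    def b(s):
        return s                       # str is already binary data on Python 2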
Of all the alternatives, the format currently supported in Python 2.6 and 2.7
is by far the cleanest approach that clearly distinguishes the three desired
kinds of behaviour. With this PEP, that format will also be supported in
Python 3.3+. It will also be supported in Python 3.1 and 3.2 through the use
of import and install hooks. While it is significantly less likely, it is
also conceivable that the hooks could be adapted to allow the use of the
b prefix on Python 2.5.
Complaint: The existing tools should be good enough for everyone
A commonly expressed sentiment from developers that have already successfully
ported applications to Python 3 is along the lines of “if you think it’s hard,
you’re doing it wrong” or “it’s not that hard, just try it!”. While it is no
doubt unintentional, these responses all have the effect of telling the
people that are pointing out inadequacies in the current porting toolset
“there’s nothing wrong with the porting tools, you just suck and don’t know
how to use them properly”.
These responses are a case of completely missing the point of what people are
complaining about. The feedback that resulted in this PEP isn’t due to people
complaining that ports aren’t possible. Instead, the feedback is coming from
people that have successfully completed ports and are objecting that they
found the experience thoroughly unpleasant for the class of application that
they needed to port (specifically, Unicode aware web frameworks and support
libraries).
This is a subjective appraisal, and it’s the reason why the Python 3
porting tools ecosystem is a case where the “one obvious way to do it”
philosophy emphatically does not apply. While it was originally intended that
“develop in Python 2, convert with 2to3, test both” would be the standard
way to develop for both versions in parallel, in practice, the needs of
different projects and developer communities have proven to be sufficiently
diverse that a variety of approaches have been devised, allowing each group
to select an approach that best fits their needs.
Lennart Regebro has produced an excellent overview of the available migration
strategies [2], and a similar review is provided in the official porting
guide [3]. (Note that the official guidance has softened to “it depends on
your specific situation” since Lennart wrote his overview).
However, both of those guides are written from the founding assumption that
all of the developers involved are already committed to the idea of
supporting Python 3. They make no allowance for the social aspects of such a
change when you’re interacting with a user base that may not be especially
tolerant of disruptions without a clear benefit, or are trying to persuade
Python 2 focused upstream developers to accept patches that are solely about
improving Python 3 forward compatibility.
With the current porting toolset, every migration strategy will result in
changes to every Unicode literal in a project. No exceptions. They will
be converted to either an unprefixed string literal (if the project decides to
adopt the unicode_literals import) or else to a converter call like
u("text").
If the unicode_literals import approach is employed, but is not adopted
across the entire project at the same time, then the meaning of a bare string
literal may become annoyingly ambiguous. This problem can be particularly
pernicious for aggregated software, like a Django site - in such a situation,
some files may end up using the unicode_literals import and others may not,
creating definite potential for confusion.
While these problems are clearly solvable at a technical level, they’re a
completely unnecessary distraction at the social level. Developer energy should
be reserved for addressing real technical difficulties associated with the
Python 3 transition (like distinguishing their 8-bit text strings from their
binary data). They shouldn’t be punished with additional code changes (even
automated ones) solely due to the fact that they have already explicitly
identified their Unicode strings in Python 2.
Armin Ronacher has created an experimental extension to 2to3 which only
modernizes Python code to the extent that it runs on Python 2.7 or later with
support from the cross-version compatibility six library. This tool is
available as python-modernize [1]. Currently, the deltas generated by
this tool will affect every Unicode literal in the converted source. This
will create legitimate concerns amongst upstream developers asked to accept
such changes, and amongst framework users being asked to change their
applications.
However, by eliminating the noise from changes to the Unicode literal syntax,
many projects could be cleanly and (comparatively) non-controversially made
forward compatible with Python 3.3+ just by running python-modernize and
applying the recommended changes.
References
[1]
Python-Modernize
(http://github.com/mitsuhiko/python-modernize)
[2]
Porting to Python 3: Migration Strategies
(http://python3porting.com/strategies.html)
[3]
Porting Python 2 Code to Python 3
(http://docs.python.org/howto/pyporting.html)
[4]
uprefix import hook project
(https://bitbucket.org/vinay.sajip/uprefix)
[5]
install hook to remove unicode string prefix characters
(https://github.com/mitsuhiko/unicode-literals-pep/tree/master/install-hook)
Copyright
This document has been placed in the public domain.
| Final | PEP 414 – Explicit Unicode Literal for Python 3.3 | Standards Track | This document proposes the reintegration of an explicit unicode literal
from Python 2.x to the Python 3.x language specification, in order to
reduce the volume of changes needed when porting Unicode-aware
Python 2 applications to Python 3. |
PEP 416 – Add a frozendict builtin type
Author:
Victor Stinner <vstinner at python.org>
Status:
Rejected
Type:
Standards Track
Created:
29-Feb-2012
Python-Version:
3.3
Table of Contents
Rejection Notice
Abstract
Rationale
Constraints
Implementation
Recipe: hashable dict
Objections
Alternative: dictproxy
Existing implementations
Links
Copyright
Rejection Notice
I’m rejecting this PEP. A number of reasons (not exhaustive):
According to Raymond Hettinger, use of frozendict is low. Those
that do use it tend to use it as a hint only, such as declaring
global or class-level “constants”: they aren’t really immutable,
since anyone can still assign to the name.
There are existing idioms for avoiding mutable default values.
The potential of optimizing code using frozendict in PyPy is
unsure; a lot of other things would have to change first. The same
holds for compile-time lookups in general.
Multiple threads can agree by convention not to mutate a shared
dict; there’s no great need for enforcement. Multiple processes
can’t share dicts.
Adding a security sandbox written in Python, even with a limited
scope, is frowned upon by many, due to the inherent difficulty with
ever proving that the sandbox is actually secure. Because of this
we won’t be adding one to the stdlib any time soon, so this use
case falls outside the scope of a PEP.
On the other hand, exposing the existing read-only dict proxy as a
built-in type sounds good to me. (It would need to be changed to
allow calling the constructor.) GvR.
Update (2012-04-15): A new MappingProxyType type was added to the types
module of Python 3.3.
Abstract
Add a new frozendict builtin type.
Rationale
A frozendict is a read-only mapping: a key cannot be added nor removed, and a
key is always mapped to the same value. However, frozendict values need not be
hashable. A frozendict is hashable if and only if all values are hashable.
Use cases:
Immutable global variable like a default configuration.
Default value of a function parameter. Avoid the issue of mutable default
arguments.
Implement a cache: frozendict can be used to store function keywords.
frozendict can be used as a key of a mapping or as a member of set.
frozendict avoids the need for a lock when the frozendict is shared
by multiple threads or processes, especially hashable frozendict. It would
also help to prohibit coroutines (generators + greenlets) from modifying the
global state.
frozendict lookup can be done at compile time instead of runtime because the
mapping is read-only. frozendict can be used instead of a preprocessor to
remove conditional code at compilation, like code specific to a debug build.
frozendict helps to implement read-only object proxies for security modules.
For example, it would be possible to use frozendict type for __builtins__
mapping or type.__dict__. This is possible because frozendict is compatible
with the PyDict C API.
frozendict avoids the need for a read-only proxy in some cases. frozendict is
faster than a proxy because getting an item in a frozendict is a fast lookup
whereas a proxy requires a function call.
Constraints
frozendict has to implement the Mapping abstract base class
frozendict keys and values can be unorderable
a frozendict is hashable if all keys and values are hashable
frozendict hash does not depend on the items creation order
Implementation
Add a PyFrozenDictObject structure based on PyDictObject with an extra
“Py_hash_t hash;” field
frozendict.__hash__() is implemented using hash(frozenset(self.items())) and
caches the result in its private hash attribute
Register frozendict as a collections.abc.Mapping
frozendict can be used with PyDict_GetItem(), but PyDict_SetItem() and
PyDict_DelItem() raise a TypeError
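A rough pure-Python model of these semantics (the PEP itself proposes a C implementation based on PyDictObject, so this is only illustrative):
from collections.abc import Mapping

class frozendict(Mapping):
    def __init__(self, *args, **kwargs):
        self._data = dict(*args, **kwargs)
        self._hash = None

    def __getitem__(self, key):
        return self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

    def __hash__(self):
        # hashable if and only if all values are hashable; independent of
        # item creation order
        if self._hash is None:
            self._hash = hash(frozenset(self._data.items()))
        return self._hash

    def __repr__(self):
        return 'frozendict(%r)' % (self._data,)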
Recipe: hashable dict
To ensure that a frozendict is hashable, values can be checked
before creating the frozendict:
import itertools

def hashabledict(*args, **kw):
    # ensure that all values are hashable
    for key, value in itertools.chain(dict(*args).items(), kw.items()):
        if isinstance(value, (int, str, bytes, float, frozenset, complex)):
            # avoid computing the hash (which may be slow) for builtin
            # types known to be hashable for any value
            continue
        hash(value)
    # don't check the key: frozendict already checks the key
    return frozendict(*args, **kw)
Objections
namedtuple may fit the requirements of a frozendict.
A namedtuple is not a mapping: it does not implement the Mapping abstract base
class.
“frozendict can be implemented in Python using descriptors” and “frozendict
just needs to be practically constant.”
If frozendict is used to harden Python (security purpose), it must be
implemented in C. A type implemented in C is also faster.
PEP 351 was rejected.
PEP 351 tried to freeze an object and so could convert a mutable object to an
immutable object (using a different type). frozendict doesn’t convert anything:
hash(frozendict) raises a TypeError if a value is not hashable. Freezing an
object is not the purpose of this PEP.
Alternative: dictproxy
Python has a builtin dictproxy type used by the type.__dict__ getter descriptor.
This type is not public. dictproxy is a read-only view of a dictionary, but it
is not a read-only mapping. If a dictionary is modified, the dictproxy is also
modified.
A dictproxy can be created using ctypes and the Python C API; see for example
the “make dictproxy object via ctypes.pythonapi and type()” recipe (Python
recipe 576540) by Ikkei Shimomura. The recipe contains a test checking that a
dictproxy is “mutable” (modifying the dictionary linked to the dictproxy).
However dictproxy can be useful in some cases, where its mutable property is
not an issue, to avoid a copy of the dictionary.
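The behaviour described above can be demonstrated with the MappingProxyType that was eventually added in Python 3.3 (see the rejection notice), which exposes the same kind of view:
import types

data = {'key': 1}
proxy = types.MappingProxyType(data)

assert proxy['key'] == 1      # reads work
try:
    proxy['key'] = 2          # writes are rejected
except TypeError:
    pass
data['key'] = 2               # ...but the proxy follows changes to the dict
assert proxy['key'] == 2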
Existing implementations
Whitelist approach.
Implementing an Immutable Dictionary (Python recipe 498072) by Aristotelis Mikropoulos.
Similar to frozendict except that it is not truly read-only: it is possible
to access to this private internal dict. It does not implement __hash__ and
has an implementation issue: it is possible to call again __init__() to
modify the mapping.
PyWebmail contains an ImmutableDict type: webmail.utils.ImmutableDict.
It is hashable if keys and values are hashable. It is not truly read-only:
its internal dict is a public attribute.
remember project: remember.dicts.FrozenDict.
It is used to implement a cache: FrozenDict is used to store function callbacks.
FrozenDict may be hashable. It has an extra supply_dict() class method to
create a FrozenDict from a dict without copying the dict: store the dict as
the internal dict. Implementation issue: __init__() can be called to modify
the mapping and the hash may differ depending on item creation order. The
mapping is not truly read-only: the internal dict is accessible in Python.
Blacklist approach: inherit from dict and override write methods to raise an
exception. It is not truly read-only: it is still possible to call dict methods
on such a “frozen dictionary” to modify it (see the sketch after this list).
brownie: brownie.datastructures.ImmutableDict.
It is hashable if keys and values are hashable. werkzeug project has the
same code: werkzeug.datastructures.ImmutableDict.
ImmutableDict is used for global constant (configuration options). The Flask
project uses ImmutableDict of werkzeug for its default configuration.
SQLAlchemy project: sqlalchemy.util.immutabledict.
It is not hashable and has an extra method: union(). immutabledict is used
for the default value of parameter of some functions expecting a mapping.
Example: mapper_args=immutabledict() in SqlSoup.map().
Frozen dictionaries (Python recipe 414283)
by Oren Tirosh. It is hashable if keys and values are hashable. Included in
the following projects:
lingospot: frozendict/frozendict.py
factor-graphics: frozendict type in python/fglib/util_ext_frozendict.py
The gsakkis-utils project written by George Sakkis includes a frozendict
type: datastructs.frozendict
characters: scripts/python/frozendict.py.
It is hashable. __init__() sets __init__ to None.
Old NLTK (1.x): nltk.util.frozendict. Keys and
values must be hashable. __init__() can be called twice to modify the
mapping. frozendict is used to “freeze” an object.
Hashable dict: inherit from dict and just add an __hash__ method.
pypy.rpython.lltypesystem.lltype.frozendict.
It is hashable but does not prevent modification of the mapping.
factor-graphics: hashabledict type in python/fglib/util_ext_frozendict.py
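The weakness of the blacklist approach can be sketched as follows (a hypothetical minimal subclass, not any of the projects listed above):
class BlacklistFrozenDict(dict):
    def __setitem__(self, key, value):
        raise TypeError("read-only")

d = BlacklistFrozenDict(a=1)
try:
    d['b'] = 2                     # blocked by the override
except TypeError:
    pass
dict.__setitem__(d, 'b', 2)        # the parent method still mutates it
assert d['b'] == 2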
Links
Issue #14162: PEP 416: Add a builtin frozendict type
PEP 412: Key-Sharing Dictionary
(issue #13903)
PEP 351: The freeze protocol
The case for immutable dictionaries; and the central misunderstanding of
PEP 351
make dictproxy object via ctypes.pythonapi and type() (Python recipe
576540) by Ikkei Shimomura.
Python security modules implementing read-only object proxies using a C
extension:
pysandbox
mxProxy
zope.proxy
zope.security
Copyright
This document has been placed in the public domain.
| Rejected | PEP 416 – Add a frozendict builtin type | Standards Track | Add a new frozendict builtin type. |
PEP 417 – Including mock in the Standard Library
Author:
Michael Foord <michael at python.org>
Status:
Final
Type:
Standards Track
Created:
12-Mar-2012
Python-Version:
3.3
Post-History:
12-Mar-2012
Resolution:
Python-Dev message
Table of Contents
Abstract
Rationale
Background
Open Issues
References
Copyright
Abstract
This PEP proposes adding the mock [1] testing library
to the Python standard library as unittest.mock.
Rationale
Creating mock objects for testing is a common need in Python.
Many developers create ad-hoc mocks, as needed, in their test
suites. This is currently what we do in the Python test suite,
where a standardised mock object library would be helpful.
There are many mock object libraries available for Python [2].
Of these, mock is overwhelmingly the most popular, with as many
downloads on PyPI as the other mocking libraries combined.
An advantage of mock is that it is a mocking library and not a
framework. It provides a configurable and flexible mock object,
without being opinionated about how you write your tests. The
mock api is now well battle-tested and stable.
mock also safely handles monkeypatching and unmonkeypatching
objects during the scope of a test. This is hard to do safely,
and many developers / projects mimic this functionality
(often incorrectly). A standardised way to do this, handling
the complexity of patching in the presence of the descriptor
protocol (etc) is useful. People are asking for a “patch” [3]
feature to unittest. Doing this via mock.patch is preferable
to re-implementing part of this functionality in unittest.
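For readers unfamiliar with the library, a brief example of the patching behaviour referred to here, using the unittest.mock spelling it acquires with this PEP (patching os.getcwd is chosen arbitrarily for illustration):
from unittest import mock
import os

with mock.patch('os.getcwd', return_value='/fake/dir') as fake_getcwd:
    assert os.getcwd() == '/fake/dir'
    fake_getcwd.assert_called_once_with()
# On exit from the with block the real os.getcwd is restored automatically,
# even if the body raised an exception.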
Background
Addition of mock to the Python standard library was discussed
and agreed to at the Python Language Summit 2012.
Open Issues
As of release 0.8, which is current at the time of writing,
mock is compatible with Python 2.4-3.2. Moving into the Python
standard library will allow for the removal of some Python 2
specific “compatibility hacks”.
mock 0.8 introduced a new feature, “auto-speccing”, which obsoletes
an older mock feature called “mocksignature”. The
“mocksignature” functionality can be removed from mock
altogether prior to inclusion.
References
[1]
mock library on PyPI
[2]
http://pypi.python.org/pypi?%3Aaction=search&term=mock&submit=search
[3]
http://bugs.python.org/issue11664
Copyright
This document has been placed in the public domain.
| Final | PEP 417 – Including mock in the Standard Library | Standards Track | This PEP proposes adding the mock [1] testing library
to the Python standard library as unittest.mock. |
PEP 418 – Add monotonic time, performance counter, and process time functions
Author:
Cameron Simpson <cs at cskk.id.au>,
Jim J. Jewett <jimjjewett at gmail.com>,
Stephen J. Turnbull <stephen at xemacs.org>,
Victor Stinner <vstinner at python.org>
Status:
Final
Type:
Standards Track
Created:
26-Mar-2012
Python-Version:
3.3
Table of Contents
Abstract
Rationale
Python functions
New Functions
time.get_clock_info(name)
time.monotonic()
time.perf_counter()
time.process_time()
Existing Functions
time.time()
time.sleep()
Deprecated Function
time.clock()
Alternatives: API design
Other names for time.monotonic()
Other names for time.perf_counter()
Only expose operating system clocks
time.monotonic(): Fallback to system time
One function with a flag: time.monotonic(fallback=True)
One time.monotonic() function, no flag
Choosing the clock from a list of constraints
Working around operating system bugs?
Glossary
Hardware clocks
List of hardware clocks
Linux clocksource
FreeBSD timecounter
Performance
NTP adjustment
Operating system time functions
Monotonic Clocks
mach_absolute_time
CLOCK_MONOTONIC, CLOCK_MONOTONIC_RAW, CLOCK_BOOTTIME
Windows: QueryPerformanceCounter
Windows: GetTickCount(), GetTickCount64()
Windows: timeGetTime
Solaris: CLOCK_HIGHRES
Solaris: gethrtime
System Time
Windows: GetSystemTimeAsFileTime
System time on UNIX
Process Time
Functions
Thread Time
Functions
Windows: QueryUnbiasedInterruptTime
Sleep
Functions
clock_nanosleep
select()
Other functions
System Standby
Footnotes
Links
Acceptance
References
Copyright
Abstract
This PEP proposes to add time.get_clock_info(name),
time.monotonic(), time.perf_counter() and
time.process_time() functions to Python 3.3.
Rationale
If a program uses the system time to schedule events or to implement
a timeout, it may fail to run events at the right moment or stop the
timeout too early or too late when the system time is changed manually or
adjusted automatically by NTP. A monotonic clock should be used
instead to not be affected by system time updates:
time.monotonic().
To measure the performance of a function, time.clock() can be used
but it is very different on Windows and on Unix. On Windows,
time.clock() includes time elapsed during sleep, whereas it does
not on Unix. time.clock() resolution is very good on Windows, but
very bad on Unix. The new time.perf_counter() function should be
used instead to always get the most precise performance counter with a
portable behaviour (e.g. it includes time spent during sleep).
Until now, Python did not directly provide a portable function to measure CPU
time. time.clock() can be used on Unix, but it has poor resolution.
resource.getrusage() or os.times() can also be used on Unix, but they require
computing the sum of time spent in kernel space and user space. The new
time.process_time()
function acts as a portable counter that always measures CPU time
(excluding time elapsed during sleep) and has the best available
resolution.
Each operating system implements clocks and performance counters
differently, and it is useful to know exactly which function is used
and some properties of the clock like its resolution. The new
time.get_clock_info() function gives access to all available
information about each Python time function.
New functions:
time.monotonic(): timeout and scheduling, not affected by system
clock updates
time.perf_counter(): benchmarking, most precise clock for short
period
time.process_time(): profiling, CPU time of the process
Users of new functions:
time.monotonic(): concurrent.futures, multiprocessing, queue, subprocess,
telnet and threading modules to implement timeout
time.perf_counter(): trace and timeit modules, pybench program
time.process_time(): profile module
time.get_clock_info(): pybench program to display information about the
timer like the resolution
The time.clock() function is deprecated because it is not
portable: it behaves differently depending on the operating system.
time.perf_counter() or time.process_time() should be used
instead, depending on your requirements. time.clock() is marked as
deprecated but is not planned for removal.
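As a short illustration of the recommended replacements (the workload here is arbitrary):
import time

t0 = time.perf_counter()
c0 = time.process_time()
sum(range(10**6))                  # some CPU-bound work
wall = time.perf_counter() - t0    # elapsed time, includes any sleeps
cpu = time.process_time() - c0     # CPU time of the process only
print(wall, cpu)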
Limitations:
The behaviour of clocks after a system suspend is not defined in the
documentation of new functions. The behaviour depends on the
operating system: see the Monotonic Clocks section below. Some
recent operating systems provide two clocks, one including time
elapsed during system suspend, one not including this time. Most
operating systems only provide one kind of clock.
time.monotonic() and time.perf_counter() may or may not be adjusted.
For example, CLOCK_MONOTONIC is slewed on Linux, whereas
GetTickCount() is not adjusted on Windows.
time.get_clock_info('monotonic')['adjustable'] can be used to check
if the monotonic clock is adjustable or not.
No time.thread_time() function is proposed by this PEP because it is
neither needed by the Python standard library nor a commonly requested feature.
Such a function would only be available on Windows and Linux. On
Linux, it is possible to use
time.clock_gettime(CLOCK_THREAD_CPUTIME_ID). On Windows, ctypes or
another module can be used to call the GetThreadTimes()
function.
Python functions
New Functions
time.get_clock_info(name)
Get information on the specified clock. Supported clock names:
"clock": time.clock()
"monotonic": time.monotonic()
"perf_counter": time.perf_counter()
"process_time": time.process_time()
"time": time.time()
Return a time.clock_info object which has the following attributes:
implementation (str): name of the underlying operating system
function. Examples: "QueryPerformanceCounter()",
"clock_gettime(CLOCK_REALTIME)".
monotonic (bool): True if the clock cannot go backward.
adjustable (bool): True if the clock can be changed automatically
(e.g. by a NTP daemon) or manually by the system administrator, False
otherwise
resolution (float): resolution in seconds of the clock.
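Example (the values shown in the comments are illustrative and vary by platform):
import time

info = time.get_clock_info('monotonic')
print(info.implementation)   # e.g. 'clock_gettime(CLOCK_MONOTONIC)'
print(info.monotonic)        # True
print(info.adjustable)       # e.g. False
print(info.resolution)       # e.g. 1e-09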
time.monotonic()
Monotonic clock, i.e. cannot go backward. It is not affected by system
clock updates. The reference point of the returned value is
undefined, so that only the difference between the results of
consecutive calls is valid and is a number of seconds.
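A minimal sketch of the timeout use case this clock targets (the helper below is invented for illustration):
import time

def wait_for(predicate, timeout):
    deadline = time.monotonic() + timeout
    while not predicate():
        if time.monotonic() >= deadline:
            return False           # unaffected by system clock changes
        time.sleep(0.01)
    return True

print(wait_for(lambda: False, 0.05))   # False after roughly 50 ms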
On Windows versions older than Vista, time.monotonic() detects
GetTickCount() integer overflow (32 bits, roll-over after 49.7
days). It increases an internal epoch (reference time) by 2**32 each time
that an overflow is detected. The epoch is stored in the process-local state and so
the value of time.monotonic() may be different in two Python
processes running for more than 49 days. On more recent versions of
Windows and on other operating systems, time.monotonic() is
system-wide.
Availability: Windows, Mac OS X, Linux, FreeBSD, OpenBSD, Solaris.
Not available on GNU/Hurd.
Pseudo-code [2]:
if os.name == 'nt':
    # GetTickCount64() requires Windows Vista, Server 2008 or later
    if hasattr(_time, 'GetTickCount64'):
        def monotonic():
            return _time.GetTickCount64() * 1e-3
    else:
        def monotonic():
            ticks = _time.GetTickCount()
            if ticks < monotonic.last:
                # Integer overflow detected
                monotonic.delta += 2**32
            monotonic.last = ticks
            return (ticks + monotonic.delta) * 1e-3
        monotonic.last = 0
        monotonic.delta = 0
elif sys.platform == 'darwin':
    def monotonic():
        if monotonic.factor is None:
            timebase = _time.mach_timebase_info()
            monotonic.factor = timebase[0] / timebase[1] * 1e-9
        return _time.mach_absolute_time() * monotonic.factor
    monotonic.factor = None
elif hasattr(time, "clock_gettime") and hasattr(time, "CLOCK_HIGHRES"):
    def monotonic():
        return time.clock_gettime(time.CLOCK_HIGHRES)
elif hasattr(time, "clock_gettime") and hasattr(time, "CLOCK_MONOTONIC"):
    def monotonic():
        return time.clock_gettime(time.CLOCK_MONOTONIC)
On Windows, QueryPerformanceCounter() is not used even though it
has a better resolution than GetTickCount(). It is not reliable
and has too many issues.
time.perf_counter()
Performance counter with the highest available resolution to measure a
short duration. It does include time elapsed during sleep and is
system-wide. The reference point of the returned value is undefined,
so that only the difference between the results of consecutive calls
is valid and is a number of seconds.
It is available on all platforms.
Pseudo-code:
if os.name == 'nt':
    def _win_perf_counter():
        if _win_perf_counter.frequency is None:
            _win_perf_counter.frequency = _time.QueryPerformanceFrequency()
        return _time.QueryPerformanceCounter() / _win_perf_counter.frequency
    _win_perf_counter.frequency = None

def perf_counter():
    if perf_counter.use_performance_counter:
        try:
            return _win_perf_counter()
        except OSError:
            # QueryPerformanceFrequency() fails if the installed
            # hardware does not support a high-resolution performance
            # counter
            perf_counter.use_performance_counter = False
    if perf_counter.use_monotonic:
        # The monotonic clock is preferred over the system time
        try:
            return time.monotonic()
        except OSError:
            perf_counter.use_monotonic = False
    return time.time()
perf_counter.use_performance_counter = (os.name == 'nt')
perf_counter.use_monotonic = hasattr(time, 'monotonic')
time.process_time()
Sum of the system and user CPU time of the current process. It does
not include time elapsed during sleep. It is process-wide by
definition. The reference point of the returned value is undefined,
so that only the difference between the results of consecutive calls
is valid.
It is available on all platforms.
Pseudo-code [2]:
if os.name == 'nt':
    def process_time():
        handle = _time.GetCurrentProcess()
        process_times = _time.GetProcessTimes(handle)
        return (process_times['UserTime'] + process_times['KernelTime']) * 1e-7
else:
    try:
        import resource
    except ImportError:
        has_resource = False
    else:
        has_resource = True

    def process_time():
        if process_time.clock_id is not None:
            try:
                return time.clock_gettime(process_time.clock_id)
            except OSError:
                process_time.clock_id = None
        if process_time.use_getrusage:
            try:
                usage = resource.getrusage(resource.RUSAGE_SELF)
                return usage[0] + usage[1]
            except OSError:
                process_time.use_getrusage = False
        if process_time.use_times:
            try:
                times = _time.times()
                cpu_time = times.tms_utime + times.tms_stime
                return cpu_time / process_time.ticks_per_seconds
            except OSError:
                process_time.use_times = False
        return _time.clock()

    if (hasattr(time, 'clock_gettime')
            and hasattr(time, 'CLOCK_PROF')):
        process_time.clock_id = time.CLOCK_PROF
    elif (hasattr(time, 'clock_gettime')
            and hasattr(time, 'CLOCK_PROCESS_CPUTIME_ID')):
        process_time.clock_id = time.CLOCK_PROCESS_CPUTIME_ID
    else:
        process_time.clock_id = None
    process_time.use_getrusage = has_resource
    process_time.use_times = hasattr(_time, 'times')
    if process_time.use_times:
        # sysconf("SC_CLK_TCK"), or the HZ constant, or 60
        process_time.ticks_per_seconds = _time.ticks_per_seconds
Existing Functions
time.time()
The system time which is usually the civil time. It is system-wide by
definition. It can be set manually by the system administrator or
automatically by a NTP daemon.
It is available on all platforms and cannot fail.
Pseudo-code [2]:
if os.name == "nt":
    def time():
        return _time.GetSystemTimeAsFileTime()
else:
    def time():
        if hasattr(time, "clock_gettime"):
            try:
                return time.clock_gettime(time.CLOCK_REALTIME)
            except OSError:
                # CLOCK_REALTIME is not supported (unlikely)
                pass
        if hasattr(_time, "gettimeofday"):
            try:
                return _time.gettimeofday()
            except OSError:
                # gettimeofday() should not fail
                pass
        if hasattr(_time, "ftime"):
            return _time.ftime()
        else:
            return _time.time()
time.sleep()
Suspend execution for the given number of seconds. The actual
suspension time may be less than that requested because any caught
signal will terminate the time.sleep() following execution of that
signal’s catching routine. Also, the suspension time may be longer
than requested by an arbitrary amount because of the scheduling of
other activity in the system.
Pseudo-code [2]:
try:
    import select
except ImportError:
    has_select = False
else:
    has_select = hasattr(select, "select")

if has_select:
    def sleep(seconds):
        return select.select([], [], [], seconds)
elif hasattr(_time, "delay"):
    def sleep(seconds):
        milliseconds = int(seconds * 1000)
        _time.delay(milliseconds)
elif os.name == "nt":
    def sleep(seconds):
        milliseconds = int(seconds * 1000)
        win32api.ResetEvent(sleep.sigint_event)
        win32api.WaitForSingleObject(sleep.sigint_event, milliseconds)
    sleep.sigint_event = win32api.CreateEvent(NULL, TRUE, FALSE, FALSE)
    # SetEvent(sleep.sigint_event) will be called by the signal handler of SIGINT
elif os.name == "os2":
    def sleep(seconds):
        milliseconds = int(seconds * 1000)
        DosSleep(milliseconds)
else:
    def sleep(seconds):
        seconds = int(seconds)
        _time.sleep(seconds)
Deprecated Function
time.clock()
On Unix, return the current processor time as a floating point number
expressed in seconds. It is process-wide by definition. The resolution,
and in fact the very definition of the meaning of “processor time”,
depends on that of the C function of the same name, but in any case,
this is the function to use for benchmarking Python or timing
algorithms.
On Windows, this function returns wall-clock seconds elapsed since the
first call to this function, as a floating point number, based on the
Win32 function QueryPerformanceCounter(). The resolution is
typically better than one microsecond. It is system-wide.
Pseudo-code [2]:
if os.name == 'nt':
    def clock():
        try:
            return _win_perf_counter()
        except OSError:
            # QueryPerformanceFrequency() fails if the installed
            # hardware does not support a high-resolution performance
            # counter
            pass
        return _time.clock()
else:
    clock = _time.clock
Alternatives: API design
Other names for time.monotonic()
time.counter()
time.metronomic()
time.seconds()
time.steady(): “steady” is ambiguous: it means different things to
different people. For example, on Linux, CLOCK_MONOTONIC is
adjusted. If we use the real time as the reference clock, we may
say that CLOCK_MONOTONIC is steady. But CLOCK_MONOTONIC gets
suspended on system suspend, whereas real time includes any time
spent in suspend.
time.timeout_clock()
time.wallclock(): time.monotonic() is not the system time aka the
“wall clock”, but a monotonic clock with an unspecified starting
point.
The name “time.try_monotonic()” was also proposed for an older
version of time.monotonic() which would fall back to the system
time when no monotonic clock was available.
Other names for time.perf_counter()
time.high_precision()
time.highres()
time.hires()
time.performance_counter()
time.timer()
Only expose operating system clocks
To not have to define high-level clocks, which is a difficult task, a
simpler approach is to only expose operating system clocks.
time.clock_gettime() and related clock identifiers were already added
to Python 3.3 for example.
time.monotonic(): Fallback to system time
If no monotonic clock is available, time.monotonic() falls back to the
system time.
Issues:
It is hard to define such a function correctly in the documentation:
is it monotonic? Is it steady? Is it adjusted?
Some users want to decide what to do when no monotonic clock is
available: use another clock, display an error, or do something
else.
Different APIs were proposed to define such function.
One function with a flag: time.monotonic(fallback=True)
time.monotonic(fallback=True) falls back to the system time if no
monotonic clock is available or if the monotonic clock failed.
time.monotonic(fallback=False) raises OSError if monotonic clock
fails and NotImplementedError if the system does not provide a
monotonic clock
A keyword argument that gets passed as a constant in the caller is
usually poor API.
Raising NotImplementedError for a function is something uncommon in
Python and should be avoided.
One time.monotonic() function, no flag
time.monotonic() returns (time: float, is_monotonic: bool).
An alternative is to use a function attribute:
time.monotonic.is_monotonic. The attribute value would be None before
the first call to time.monotonic().
Choosing the clock from a list of constraints
The PEP as proposed offers a few new clocks, but their guarantees
are deliberately loose in order to offer useful clocks on different
platforms. This inherently embeds policy in the calls, and the
caller must thus choose a policy.
The “choose a clock” approach suggests an additional API to let
callers implement their own policy if necessary
by making most platform clocks available and letting the caller pick amongst them.
The PEP’s suggested clocks are still expected to be available for the common
simple use cases.
To do this two facilities are needed:
an enumeration of clocks, and metadata on the clocks to enable the user to
evaluate their suitability.
The primary interface is a function that makes simple choices easy:
the caller can use time.get_clock(*flags) with some combination of flags.
This includes at least:
time.MONOTONIC: clock cannot go backward
time.STEADY: clock rate is steady
time.ADJUSTED: clock may be adjusted, for example by NTP
time.HIGHRES: clock with the highest resolution
It returns a clock object with a .now() method returning the current time.
The clock object is annotated with metadata describing the clock feature set;
its .flags field will contain at least all the requested flags.
time.get_clock() returns None if no matching clock is found and so calls can
be chained using the or operator. Example of a simple policy decision:
T = get_clock(MONOTONIC) or get_clock(STEADY) or get_clock()
t = T.now()
The available clocks always at least include a wrapper for time.time(),
so a final call with no flags can always be used to obtain a working clock.
Examples of flags of system clocks:
QueryPerformanceCounter: MONOTONIC | HIGHRES
GetTickCount: MONOTONIC | STEADY
CLOCK_MONOTONIC: MONOTONIC | STEADY (or only MONOTONIC on Linux)
CLOCK_MONOTONIC_RAW: MONOTONIC | STEADY
gettimeofday(): (no flag)
The clock objects contain other metadata including the clock flags
with additional feature flags above those listed above, the name
of the underlying OS facility, and clock precisions.
time.get_clock() still chooses a single clock; an enumeration
facility is also required.
The most obvious method is to offer time.get_clocks() with the
same signature as time.get_clock(), but returning a sequence
of all clocks matching the requested flags.
Requesting no flags would thus enumerate all available clocks,
allowing the caller to make an arbitrary choice amongst them based
on their metadata.
Example partial implementation:
clockutils.py.
Working around operating system bugs?
Should Python ensure that a monotonic clock is truly
monotonic by computing the maximum with the clock value and the
previous value?
Since it’s relatively straightforward to cache the last value returned
using a static variable, it might be interesting to use this to make
sure that the values returned are indeed monotonic.
Virtual machines provide less reliable clocks.
QueryPerformanceCounter() has known bugs (only one is not fixed yet)
Python may only work around a specific known operating system bug:
KB274323 contains a code example to workaround the bug (use
GetTickCount() to detect QueryPerformanceCounter() leap).
Issues with “correcting” non-monotonicities:
if the clock is accidentally set forward by an hour and then back
again, you wouldn’t have a useful clock for an hour
the cache is not shared between processes so different processes
wouldn’t see the same clock value
Glossary
Accuracy:
The amount of deviation of measurements by a given instrument from
true values. See also Accuracy and precision.
Inaccuracy in clocks may be caused by lack of precision, drift, or an
incorrect initial setting of the clock (e.g., timing of threads is
inherently inaccurate because perfect synchronization in resetting
counters is quite difficult).
Adjusted:
Resetting a clock to the correct time. This may be done either
with a <Step> or by <Slewing>.
Civil Time:
Time of day; external to the system. 10:45:13am is a Civil time;
45 seconds is not. Provided by existing function
time.localtime() and time.gmtime(). Not changed by this
PEP.
Clock:
An instrument for measuring time. Different clocks have different
characteristics; for example, a clock with nanosecond
<precision> may start to <drift> after a few minutes, while a less
precise clock may remain accurate for days. This PEP is primarily
concerned with clocks which use a unit of seconds.
Counter:
A clock which increments each time a certain event occurs. A
counter is strictly monotonic, but not a monotonic clock. It can
be used to generate a unique (and ordered) timestamp, but these
timestamps cannot be mapped to <civil time>; tick creation may well
be bursty, with several advances in the same millisecond followed
by several days without any advance.
CPU Time:
A measure of how much CPU effort has been spent on a certain task.
CPU seconds are often normalized (so that a variable number can
occur in the same actual second). CPU seconds can be important
when profiling, but they do not map directly to user response time,
nor are they directly comparable to (real time) seconds.
Drift:
The accumulated error against “true” time, as defined externally to
the system. Drift may be due to imprecision, or to a difference
between the average rate at which clock time advances and that of
real time.
Epoch:
The reference point of a clock. For clocks providing <civil time>,
this is often midnight as the day (and year) rolled over to January
1, 1970. For a <clock_monotonic> clock, the epoch may be undefined
(represented as None).
Latency:
Delay. By the time a clock call returns, the <real time> has
advanced, possibly by more than the precision of the clock.
Monotonic:
The characteristics expected of a monotonic clock in practice.
Moving in at most one direction; for clocks, that direction is
forward. The <clock> should also be <steady>, and should be
convertible to a unit of seconds. The tradeoffs often include lack
of a defined <epoch> or mapping to <Civil Time>.
Precision:
The amount of deviation among measurements of the same physical
value by a single instrument. Imprecision in clocks may be caused by
a fluctuation of the rate at which clock time advances relative to
real time, including clock adjustment by slewing.
Process Time:
Time elapsed since the process began. It is typically measured in
<CPU time> rather than <real time>, and typically does not advance
while the process is suspended.
Real Time:
Time in the real world. This differs from <Civil time> in that it
is not <adjusted>, but they should otherwise advance in lockstep.
It is not related to the “real time” of “Real Time [Operating]
Systems”. It is sometimes called “wall clock time” to avoid that
ambiguity; unfortunately, that introduces different ambiguities.
Resolution:
The smallest difference between two physical values that results
in a different measurement by a given instrument.
Slew:
A slight change to a clock’s speed, usually intended to correct
<drift> with respect to an external authority.
Stability:
Persistence of accuracy. A measure of expected <drift>.
Steady:
A clock with high <stability> and relatively high <accuracy> and
<precision>. In practice, it is often used to indicate a
<clock_monotonic> clock, but places greater emphasis on the
consistency of the duration between subsequent ticks.
Step:
An instantaneous change in the represented time. Instead of
speeding or slowing the clock (<slew>), a single offset is
permanently added.
System Time:
Time as represented by the Operating System.
Thread Time:
Time elapsed since the thread began. It is typically measured in
<CPU time> rather than <real time>, and typically does not advance
while the thread is idle.
Wallclock:
What the clock on the wall says. This is typically used as a
synonym for <real time>; unfortunately, wall time is itself
ambiguous.
Hardware clocks
List of hardware clocks
HPET: A High Precision Event Timer (HPET) chip consists of a 64-bit
up-counter (main counter) counting at least at 10 MHz and a set of
up to 256 comparators (at least 3). Each HPET can have up to 32
timers. HPET can cause around 3 seconds of drift per day.
TSC (Time Stamp Counter): Historically, the TSC increased with every
internal processor clock cycle, but now the rate is usually constant
(even if the processor changes frequency) and usually equals the
maximum processor frequency. Multiple cores have different TSC
values. Hibernation of the system resets the TSC value. The RDTSC
instruction can be used to read this counter. CPU frequency scaling
for power saving can also change the TSC rate on processors that lack
a constant-rate TSC.
ACPI Power Management Timer: ACPI 24-bit timer with a frequency of
3.5 MHz (3,579,545 Hz).
Cyclone: The Cyclone timer uses a 32-bit counter on IBM Extended
X-Architecture (EXA) chipsets which include computers that use the
IBM “Summit” series chipsets (ex: x440). This is available in IA32
and IA64 architectures.
PIT (programmable interrupt timer): Intel 8253/8254 chipsets with a
configurable frequency in the range 18.2 Hz to 1.2 MHz. It uses a 16-bit
counter.
RTC (Real-time clock). Most RTCs use a crystal oscillator with a
frequency of 32,768 Hz.
Linux clocksource
There have been four implementations of timekeeping in the Linux kernel: UTIME
(1996), the timer wheel (1997), HRT (2001) and hrtimers (2007). The
latter is the result of the “high-res-timers” project started by
George Anzinger in 2001, with contributions by Thomas Gleixner and
Douglas Niehaus. The hrtimers implementation was merged into Linux
2.6.21, released in 2007.
hrtimers supports various clock sources. It sets a priority to each
source to decide which one will be used. Linux supports the following
clock sources:
tsc
hpet
pit
pmtmr: ACPI Power Management Timer
cyclone
High-resolution timers are not supported on all hardware
architectures. They are at least provided on x86/x86_64, ARM and
PowerPC.
clock_getres() returns 1 nanosecond for CLOCK_REALTIME and
CLOCK_MONOTONIC regardless of underlying clock source. Read Re:
clock_getres() and real resolution from Thomas Gleixner (9 Feb
2012) for an explanation.
The /sys/devices/system/clocksource/clocksource0 directory
contains two useful files:
available_clocksource: list of available clock sources
current_clocksource: clock source currently used. It is
possible to change the current clocksource by writing the name of a
clocksource into this file.
/proc/timer_list contains the list of all hardware timers.
Read also the time(7) manual page:
“overview of time and timers”.
FreeBSD timecounter
kern.timecounter.choice lists available hardware clocks with their
priority. The sysctl program can be used to change the timecounter.
Example:
# dmesg | grep Timecounter
Timecounter "i8254" frequency 1193182 Hz quality 0
Timecounter "ACPI-safe" frequency 3579545 Hz quality 850
Timecounter "HPET" frequency 100000000 Hz quality 900
Timecounter "TSC" frequency 3411154800 Hz quality 800
Timecounters tick every 10.000 msec
# sysctl kern.timecounter.choice
kern.timecounter.choice: TSC(800) HPET(900) ACPI-safe(850) i8254(0) dummy(-1000000)
# sysctl kern.timecounter.hardware="ACPI-fast"
kern.timecounter.hardware: HPET -> ACPI-fast
Available clocks:
“TSC”: Time Stamp Counter of the processor
“HPET”: High Precision Event Timer
“ACPI-fast”: ACPI Power Management timer (fast mode)
“ACPI-safe”: ACPI Power Management timer (safe mode)
“i8254”: PIT with Intel 8254 chipset
The commit 222222 (May
2011) decreased ACPI-fast timecounter quality to 900 and increased
HPET timecounter quality to 950: “HPET on modern platforms usually
have better resolution and lower latency than ACPI timer”.
Read Timecounters: Efficient and precise timekeeping in SMP kernels by Poul-Henning Kamp
(2002) for the FreeBSD Project.
Performance
Reading a hardware clock has a cost. The following table compares
the performance of different hardware clocks on Linux 3.3 with Intel
Core i7-2600 at 3.40GHz (8 cores). The bench_time.c program
was used to fill these tables.
Function                    TSC      ACPI PM   HPET
--------------------------  -------  --------  -------
time()                      2 ns     2 ns      2 ns
CLOCK_REALTIME_COARSE       10 ns    10 ns     10 ns
CLOCK_MONOTONIC_COARSE      12 ns    13 ns     12 ns
CLOCK_THREAD_CPUTIME_ID     134 ns   135 ns    135 ns
CLOCK_PROCESS_CPUTIME_ID    127 ns   129 ns    129 ns
clock()                     146 ns   146 ns    143 ns
gettimeofday()              23 ns    726 ns    637 ns
CLOCK_MONOTONIC_RAW         31 ns    716 ns    607 ns
CLOCK_REALTIME              27 ns    707 ns    629 ns
CLOCK_MONOTONIC             27 ns    723 ns    635 ns
FreeBSD 8.0 in kvm with hardware virtualization:
Function                    TSC      ACPI-Safe  HPET      i8254
--------------------------  -------  ---------  --------  --------
time()                      191 ns   188 ns     189 ns    188 ns
CLOCK_SECOND                187 ns   184 ns     187 ns    183 ns
CLOCK_REALTIME_FAST         189 ns   180 ns     187 ns    190 ns
CLOCK_UPTIME_FAST           191 ns   185 ns     186 ns    196 ns
CLOCK_MONOTONIC_FAST        188 ns   187 ns     188 ns    189 ns
CLOCK_THREAD_CPUTIME_ID     208 ns   206 ns     207 ns    220 ns
CLOCK_VIRTUAL               280 ns   279 ns     283 ns    296 ns
CLOCK_PROF                  289 ns   280 ns     282 ns    286 ns
clock()                     342 ns   340 ns     337 ns    344 ns
CLOCK_UPTIME_PRECISE        197 ns   10380 ns   4402 ns   4097 ns
CLOCK_REALTIME              196 ns   10376 ns   4337 ns   4054 ns
CLOCK_MONOTONIC_PRECISE     198 ns   10493 ns   4413 ns   3958 ns
CLOCK_UPTIME                197 ns   10523 ns   4458 ns   4058 ns
gettimeofday()              202 ns   10524 ns   4186 ns   3962 ns
CLOCK_REALTIME_PRECISE      197 ns   10599 ns   4394 ns   4060 ns
CLOCK_MONOTONIC             201 ns   10766 ns   4498 ns   3943 ns
Each function was called 100,000 times and CLOCK_MONOTONIC was used to
get the time before and after. The benchmark was run 5 times, keeping
the minimum time.
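A rough Python analogue of that methodology (not the original bench_time.c
program; it assumes Python 3.3 or later, where time.monotonic() is available):

import time

def bench(func, calls=100000, runs=5):
    """Return the best observed cost per call of *func*, in nanoseconds."""
    best = None
    for _ in range(runs):
        start = time.monotonic()
        for _ in range(calls):
            func()
        elapsed = time.monotonic() - start
        if best is None or elapsed < best:
            best = elapsed
    return best / calls * 1e9

print("time(): %.0f ns per call" % bench(time.time))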
NTP adjustment
NTP has different methods to adjust a clock:
“slewing”: change the clock frequency to be slightly faster or
slower (which is done with adjtime()). Since the slew rate is
limited to 0.5 millisecond per second, each second of adjustment requires an
amortization interval of 2000 seconds. Thus, an adjustment of many
seconds can take hours or days to amortize.
“stepping”: jump by a large amount in a single discrete step (which
is done with settimeofday())
By default, the time is slewed if the offset is less than 128 ms, but
stepped otherwise.
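The amortization interval follows directly from the slew-rate limit; a quick
back-of-the-envelope check (the 10 second offset is just an example):

slew_rate = 0.0005            # at most 0.5 ms corrected per second of real time
offset = 10.0                 # example: the clock is 10 seconds off
amortization = offset / slew_rate
print(amortization)           # 20000.0 seconds, i.e. roughly 5.5 hours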
Slewing is generally desirable (i.e. we should use CLOCK_MONOTONIC,
not CLOCK_MONOTONIC_RAW) if one wishes to measure “real” time (and not
a time-like object like CPU cycles). This is because the clock on the
other end of the NTP connection from you is probably better at keeping
time: hopefully that thirty-five thousand dollars of Cesium
timekeeping goodness is doing something better than your PC’s $3
quartz crystal, after all.
Get more detail in the documentation of the NTP daemon.
Operating system time functions
Monotonic Clocks
Name                        C Resolution  Adjusted         Include Sleep  Include Suspend
--------------------------  ------------  ---------------  -------------  ---------------
gethrtime()                 1 ns          No               Yes            Yes
CLOCK_HIGHRES               1 ns          No               Yes            Yes
CLOCK_MONOTONIC             1 ns          Slewed on Linux  Yes            No
CLOCK_MONOTONIC_COARSE      1 ns          Slewed on Linux  Yes            No
CLOCK_MONOTONIC_RAW         1 ns          No               Yes            No
CLOCK_BOOTTIME              1 ns          ?                Yes            Yes
CLOCK_UPTIME                1 ns          No               Yes            ?
mach_absolute_time()        1 ns          No               Yes            No
QueryPerformanceCounter()   -             No               Yes            ?
GetTickCount[64]()          1 ms          No               Yes            Yes
timeGetTime()               1 ms          No               Yes            ?
The “C Resolution” column is the resolution of the underlying C
structure.
Examples of clock resolution on x86_64:
Name                        Operating system  OS Resolution  Python Resolution
--------------------------  ----------------  -------------  -----------------
QueryPerformanceCounter     Windows Seven     10 ns          10 ns
CLOCK_HIGHRES               SunOS 5.11        2 ns           265 ns
CLOCK_MONOTONIC             Linux 3.0         1 ns           322 ns
CLOCK_MONOTONIC_RAW         Linux 3.3         1 ns           628 ns
CLOCK_BOOTTIME              Linux 3.3         1 ns           628 ns
mach_absolute_time()        Mac OS 10.6       1 ns           3 µs
CLOCK_MONOTONIC             FreeBSD 8.2       11 ns          5 µs
CLOCK_MONOTONIC             OpenBSD 5.0       10 ms          5 µs
CLOCK_UPTIME                FreeBSD 8.2       11 ns          6 µs
CLOCK_MONOTONIC_COARSE      Linux 3.3         1 ms           1 ms
CLOCK_MONOTONIC_COARSE      Linux 3.0         4 ms           4 ms
GetTickCount64()            Windows Seven     16 ms          15 ms
The “OS Resolution” is the resolution announced by the operating
system.
The “Python Resolution” is the smallest difference between two calls
to the time function computed in Python using the clock_resolution.py
program.
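A minimal sketch of such a measurement (this is not the actual
clock_resolution.py program; time.monotonic() is only used as an example clock
and requires Python 3.3 or later):

import time

def python_resolution(clock=time.monotonic, samples=100000):
    """Smallest positive difference observed between two successive calls."""
    smallest = None
    previous = clock()
    for _ in range(samples):
        current = clock()
        diff = current - previous
        if diff > 0 and (smallest is None or diff < smallest):
            smallest = diff
        previous = current
    return smallest

print(python_resolution())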
mach_absolute_time
Mac OS X provides a monotonic clock: mach_absolute_time(). It is
based on absolute elapsed time since system boot. It is not
adjusted and cannot be set.
mach_timebase_info() gives a fraction to convert the clock value to a number of
nanoseconds. See also the Technical Q&A QA1398.
mach_absolute_time() stops during a sleep on a PowerPC CPU, but not on
an Intel CPU: Different behaviour of mach_absolute_time() on i386/ppc.
CLOCK_MONOTONIC, CLOCK_MONOTONIC_RAW, CLOCK_BOOTTIME
CLOCK_MONOTONIC and CLOCK_MONOTONIC_RAW represent monotonic time since
some unspecified starting point. They cannot be set. The resolution
can be read using clock_getres().
Documentation: refer to the manual page of your operating system.
Examples:
FreeBSD clock_gettime() manual page
Linux clock_gettime() manual page
CLOCK_MONOTONIC is available at least on the following operating
systems:
DragonFly BSD, FreeBSD >= 5.0, OpenBSD, NetBSD
Linux
Solaris
The following operating systems don’t support CLOCK_MONOTONIC:
GNU/Hurd (see open issues/ clock_gettime)
Mac OS X
Windows
On Linux, NTP may adjust the CLOCK_MONOTONIC rate (slewed), but it cannot
jump backward.
CLOCK_MONOTONIC_RAW is specific to Linux. It is similar to
CLOCK_MONOTONIC, but provides access to a raw hardware-based time that
is not subject to NTP adjustments. CLOCK_MONOTONIC_RAW requires Linux
2.6.28 or later.
Linux 2.6.39 and glibc 2.14 introduce a new clock: CLOCK_BOOTTIME.
CLOCK_BOOTTIME is identical to CLOCK_MONOTONIC, except that it also
includes any time spent in suspend. Read also Waking systems from
suspend (March, 2011).
CLOCK_MONOTONIC stops while the machine is suspended.
Linux provides also CLOCK_MONOTONIC_COARSE since Linux 2.6.32. It is
similar to CLOCK_MONOTONIC, less precise but faster.
clock_gettime() fails if the system does not support the specified
clock, even if the standard C library supports it. For example,
CLOCK_MONOTONIC_RAW requires a kernel version 2.6.28 or later.
Windows: QueryPerformanceCounter
High-resolution performance counter. It is monotonic.
The frequency of the counter can be read using QueryPerformanceFrequency().
The resolution is 1 / QueryPerformanceFrequency().
It has a much higher resolution, but lower long-term precision than
the GetTickCount() and timeGetTime() clocks; for example, it will
drift compared to those low-precision clocks.
Documentation:
MSDN: QueryPerformanceCounter() documentation
MSDN: QueryPerformanceFrequency() documentation
Hardware clocks used by QueryPerformanceCounter:
Windows XP: RDTSC instruction of Intel processors, the clock
frequency is the frequency of the processor (between 200 MHz and 3
GHz, usually greater than 1 GHz nowadays).
Windows 2000: ACPI power management timer, frequency = 3,579,545 Hz.
It can be forced through the “/usepmtimer” flag in boot.ini.
QueryPerformanceFrequency() should only be called once: the frequency
will not change while the system is running. It fails if the
installed hardware does not support a high-resolution performance
counter.
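For reference, the counter and its frequency can be read from Python through
ctypes; a small sketch (Windows only):

import ctypes

kernel32 = ctypes.windll.kernel32

def query_performance_counter():
    """Return (counter value, frequency in Hz) from the Windows API."""
    count = ctypes.c_int64()
    frequency = ctypes.c_int64()
    if not kernel32.QueryPerformanceFrequency(ctypes.byref(frequency)):
        raise OSError("no high-resolution performance counter available")
    kernel32.QueryPerformanceCounter(ctypes.byref(count))
    return count.value, frequency.value

count, frequency = query_performance_counter()
print("%.9f seconds since an arbitrary start" % (count / frequency))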
QueryPerformanceCounter() cannot be adjusted:
SetSystemTimeAdjustment()
only adjusts the system time.
Bugs:
The performance counter value may unexpectedly leap forward because
of a hardware bug, see KB274323.
On VirtualBox, QueryPerformanceCounter() does not increment the high
part every time the low part overflows, see Monotonic timers
(2009).
VirtualBox had a bug in its HPET virtualized device:
QueryPerformanceCounter() did jump forward by approx. 42 seconds (issue
#8707).
Windows XP had a bug (see KB896256): on a multiprocessor
computer, QueryPerformanceCounter() returned a different value for
each processor. The bug was fixed in Windows XP SP2.
Issues with processor with variable frequency: the frequency is
changed depending on the workload to reduce memory consumption.
Chromium doesn't use QueryPerformanceCounter() on Athlon X2 CPUs
(model 15) because “QueryPerformanceCounter is unreliable” (see
base/time_win.cc in Chromium source code)
Windows: GetTickCount(), GetTickCount64()
GetTickCount() and GetTickCount64() are monotonic, cannot fail and are
not adjusted by SetSystemTimeAdjustment(). MSDN documentation:
GetTickCount(),
GetTickCount64().
The resolution can be read using GetSystemTimeAdjustment().
The elapsed time retrieved by GetTickCount() or GetTickCount64()
includes time the system spends in sleep or hibernation.
GetTickCount64() was added to Windows Vista and Windows Server 2008.
It is possible to improve the precision using the undocumented
NtSetTimerResolution() function.
There are applications using this undocumented function, example: Timer
Resolution.
WaitForSingleObject() uses the same timer as GetTickCount() with the
same precision.
Windows: timeGetTime
The timeGetTime function retrieves the system time, in milliseconds.
The system time is the time elapsed since Windows was started. Read
the timeGetTime() documentation.
The return type of timeGetTime() is a 32-bit unsigned integer. Like
GetTickCount(), timeGetTime() rolls over after 2^32 milliseconds (49.7
days).
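Interval measurements based on a 32-bit tick counter therefore have to compute
differences modulo 2^32 so that the rollover cancels out; a small illustration
(the tick values are made up):

def elapsed_ms(start_ticks, end_ticks):
    """Difference between two 32-bit millisecond tick counts, rollover-safe."""
    return (end_ticks - start_ticks) % (2 ** 32)

# The counter wrapped from near 2**32 back to a small value:
print(elapsed_ms(0xFFFFFF00, 0x00000100))   # 512, not a huge bogus value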
The elapsed time retrieved by timeGetTime() includes time the system
spends in sleep.
The default precision of the timeGetTime function can be five
milliseconds or more, depending on the machine.
timeBeginPeriod() can be used to increase the precision of
timeGetTime() up to 1 millisecond, but it negatively affects power
consumption. Calling timeBeginPeriod() also affects the granularity
of some other timing calls, such as CreateWaitableTimer(),
WaitForSingleObject() and Sleep().
Note
timeGetTime() and timeBeginPeriod() are part of the Windows multimedia
library, so using them requires linking the program against winmm or
loading the library dynamically.
Solaris: CLOCK_HIGHRES
The Solaris OS has a CLOCK_HIGHRES timer that attempts to use an
optimal hardware source, and may give close to nanosecond resolution.
CLOCK_HIGHRES is the nonadjustable, high-resolution clock. For timers
created with a clockid_t value of CLOCK_HIGHRES, the system will
attempt to use an optimal hardware source.
The resolution of CLOCK_HIGHRES can be read using clock_getres().
Solaris: gethrtime
The gethrtime() function returns the current high-resolution real
time. Time is expressed as nanoseconds since some arbitrary time in
the past; it is not correlated in any way to the time of day, and thus
is not subject to resetting or drifting by way of adjtime() or
settimeofday(). The hires timer is ideally suited to performance
measurement tasks, where cheap, accurate interval timing is required.
The linearity of gethrtime() is not preserved across a suspend-resume
cycle (Bug 4272663).
Read the gethrtime() manual page of Solaris 11.
On Solaris, gethrtime() is the same as clock_gettime(CLOCK_MONOTONIC).
System Time
Name                        C Resolution  Include Sleep  Include Suspend
--------------------------  ------------  -------------  ---------------
CLOCK_REALTIME              1 ns          Yes            Yes
CLOCK_REALTIME_COARSE       1 ns          Yes            Yes
GetSystemTimeAsFileTime     100 ns        Yes            Yes
gettimeofday()              1 µs          Yes            Yes
ftime()                     1 ms          Yes            Yes
time()                      1 sec         Yes            Yes
The “C Resolution” column is the resolution of the underlying C
structure.
Examples of clock resolution on x86_64:
Name                        Operating system  OS Resolution  Python Resolution
--------------------------  ----------------  -------------  -----------------
CLOCK_REALTIME              SunOS 5.11        10 ms          238 ns
CLOCK_REALTIME              Linux 3.0         1 ns           238 ns
gettimeofday()              Mac OS 10.6       1 µs           4 µs
CLOCK_REALTIME              FreeBSD 8.2       11 ns          6 µs
CLOCK_REALTIME              OpenBSD 5.0       10 ms          5 µs
CLOCK_REALTIME_COARSE       Linux 3.3         1 ms           1 ms
CLOCK_REALTIME_COARSE       Linux 3.0         4 ms           4 ms
GetSystemTimeAsFileTime()   Windows Seven     16 ms          1 ms
ftime()                     Windows Seven     -              1 ms
The “OS Resolution” is the resolution announced by the operating
system.
The “Python Resolution” is the smallest difference between two calls
to the time function computed in Python using the clock_resolution.py
program.
Windows: GetSystemTimeAsFileTime
The system time can be read using GetSystemTimeAsFileTime(), ftime() and
time(). The resolution of the system time can be read using
GetSystemTimeAdjustment().
Read the GetSystemTimeAsFileTime() documentation.
The system time can be set using SetSystemTime().
System time on UNIX
gettimeofday(), ftime(), time() and clock_gettime(CLOCK_REALTIME) return
the system time. The resolution of CLOCK_REALTIME can be read using
clock_getres().
The system time can be set using settimeofday() or
clock_settime(CLOCK_REALTIME).
Linux provides also CLOCK_REALTIME_COARSE since Linux 2.6.32. It is similar
to CLOCK_REALTIME, less precise but faster.
Alexander Shishkin proposed an API for Linux to be notified when the system
clock is changed: timerfd: add TFD_NOTIFY_CLOCK_SET to watch for clock changes (4th version of the API, March 2011). The
API is not accepted yet, but CLOCK_BOOTTIME provides a similar feature.
Process Time
The process time cannot be set. It is not monotonic: the clocks stop
while the process is idle.
Name                        C Resolution  Include Sleep                 Include Suspend
--------------------------  ------------  ----------------------------  ---------------
GetProcessTimes()           100 ns        No                            No
CLOCK_PROCESS_CPUTIME_ID    1 ns          No                            No
getrusage(RUSAGE_SELF)      1 µs          No                            No
times()                     -             No                            No
clock()                     -             Yes on Windows, No otherwise  No
The “C Resolution” column is the resolution of the underlying C
structure.
Examples of clock resolution on x86_64:
Name                        Operating system  OS Resolution  Python Resolution
--------------------------  ----------------  -------------  -----------------
CLOCK_PROCESS_CPUTIME_ID    Linux 3.3         1 ns           1 ns
CLOCK_PROF                  FreeBSD 8.2       10 ms          1 µs
getrusage(RUSAGE_SELF)      FreeBSD 8.2       -              1 µs
getrusage(RUSAGE_SELF)      SunOS 5.11        -              1 µs
CLOCK_PROCESS_CPUTIME_ID    Linux 3.0         1 ns           1 µs
getrusage(RUSAGE_SELF)      Mac OS 10.6       -              5 µs
clock()                     Mac OS 10.6       1 µs           5 µs
CLOCK_PROF                  OpenBSD 5.0       -              5 µs
getrusage(RUSAGE_SELF)      Linux 3.0         -              4 ms
getrusage(RUSAGE_SELF)      OpenBSD 5.0       -              8 ms
clock()                     FreeBSD 8.2       8 ms           8 ms
clock()                     Linux 3.0         1 µs           10 ms
times()                     Linux 3.0         10 ms          10 ms
clock()                     OpenBSD 5.0       10 ms          10 ms
times()                     OpenBSD 5.0       10 ms          10 ms
times()                     Mac OS 10.6       10 ms          10 ms
clock()                     SunOS 5.11        1 µs           10 ms
times()                     SunOS 5.11        1 µs           10 ms
GetProcessTimes()           Windows Seven     16 ms          16 ms
clock()                     Windows Seven     1 ms           1 ms
The “OS Resolution” is the resolution announced by the operating
system.
The “Python Resolution” is the smallest difference between two calls
to the time function computed in Python using the clock_resolution.py
program.
Functions
Windows: GetProcessTimes().
The resolution can be read using GetSystemTimeAdjustment().
clock_gettime(CLOCK_PROCESS_CPUTIME_ID): High-resolution per-process
timer from the CPU. The resolution can be read using clock_getres().
clock(). The resolution is 1 / CLOCKS_PER_SEC.
Windows: The elapsed wall-clock time since the start of the
process (elapsed time in seconds times CLOCKS_PER_SEC). It includes
time elapsed during sleep. It can fail.
UNIX: returns an approximation of processor time used by the
program.
getrusage(RUSAGE_SELF) returns a structure describing the resource usage of the
current process. ru_utime is the user CPU time and ru_stime is the system CPU time.
times(): returns a structure of process times. The resolution is 1 / ticks_per_second,
where ticks_per_second is sysconf(_SC_CLK_TCK) or the HZ constant. (A small Python
usage sketch follows this list.)
Python source code includes a portable library to get the process time (CPU
time): Tools/pybench/systimes.py.
See also the QueryProcessCycleTime() function
(sum of the cycle time of all threads) and clock_getcpuclockid().
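For reference, some of these process-time sources are already reachable from
Python without any new API; a small sketch using os.times() and the Unix-only
resource module:

import os

# os.times() wraps times(): user and system CPU time of the current process.
t = os.times()
print("user CPU: %.6f s, system CPU: %.6f s" % (t[0], t[1]))

try:
    import resource           # Unix-only
except ImportError:
    pass
else:
    usage = resource.getrusage(resource.RUSAGE_SELF)
    # ru_utime/ru_stime correspond to the getrusage(RUSAGE_SELF) fields above.
    print("user CPU: %.6f s, system CPU: %.6f s"
          % (usage.ru_utime, usage.ru_stime))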
Thread Time
The thread time cannot be set. It is not monotonic: the clocks stop
while the thread is idle.
Name                        C Resolution  Include Sleep  Include Suspend
--------------------------  ------------  -------------  ---------------
CLOCK_THREAD_CPUTIME_ID     1 ns          Yes            Epoch changes
GetThreadTimes()            100 ns        No             ?
The “C Resolution” column is the resolution of the underlying C
structure.
Examples of clock resolution on x86_64:
Name                        Operating system  OS Resolution  Python Resolution
--------------------------  ----------------  -------------  -----------------
CLOCK_THREAD_CPUTIME_ID     FreeBSD 8.2       1 µs           1 µs
CLOCK_THREAD_CPUTIME_ID     Linux 3.3         1 ns           649 ns
GetThreadTimes()            Windows Seven     16 ms          16 ms
The “OS Resolution” is the resolution announced by the operating
system.
The “Python Resolution” is the smallest difference between two calls
to the time function computed in Python using the clock_resolution.py
program.
Functions
Windows: GetThreadTimes().
The resolution can be read using GetSystemTimeAdjustment().
clock_gettime(CLOCK_THREAD_CPUTIME_ID): Thread-specific CPU-time
clock. It uses a number of CPU cycles, not a number of seconds.
The resolution can be read using clock_getres().
See also the QueryThreadCycleTime() function
(cycle time for the specified thread) and pthread_getcpuclockid().
Windows: QueryUnbiasedInterruptTime
Gets the current unbiased interrupt time from the biased interrupt
time and the current sleep bias amount. This time is not affected by
power management sleep transitions.
The elapsed time retrieved by the QueryUnbiasedInterruptTime function
includes only time that the system spends in the working state.
QueryUnbiasedInterruptTime() is not monotonic.
QueryUnbiasedInterruptTime() was introduced in Windows 7.
See also QueryIdleProcessorCycleTime() function
(cycle time for the idle thread of each processor)
Sleep
Suspend execution of the process for the given number of seconds.
Sleep is not affected by system time updates. Sleep is paused during
system suspend. For example, if a process sleeps for 60 seconds and
the system is suspended for 30 seconds in the middle of the sleep, the
sleep duration is 90 seconds of real time.
Sleep can be interrupted by a signal: the function fails with EINTR.
Name                C Resolution
------------------  ------------
nanosleep()         1 ns
clock_nanosleep()   1 ns
usleep()            1 µs
delay()             1 µs
sleep()             1 sec
Other functions:
Name                       C Resolution
-------------------------  ------------
sigtimedwait()             1 ns
pthread_cond_timedwait()   1 ns
sem_timedwait()            1 ns
select()                   1 µs
epoll()                    1 ms
poll()                     1 ms
WaitForSingleObject()      1 ms
The “C Resolution” column is the resolution of the underlying C
structure.
Functions
sleep(seconds)
usleep(microseconds)
nanosleep(nanoseconds, remaining):
Linux manpage of nanosleep()
delay(milliseconds)
clock_nanosleep
clock_nanosleep(clock_id, flags, nanoseconds, remaining): Linux
manpage of clock_nanosleep().
If flags is TIMER_ABSTIME, then request is interpreted as an absolute
time as measured by the clock, clock_id. If request is less than or
equal to the current value of the clock, then clock_nanosleep()
returns immediately without suspending the calling thread.
POSIX.1 specifies that changing the value of the CLOCK_REALTIME clock
via clock_settime(2) shall have no effect on a thread that is blocked
on a relative clock_nanosleep().
select()
select(nfds, readfds, writefds, exceptfs, timeout).
Since Linux 2.6.28, select() uses high-resolution timers to handle the
timeout. A process has a “slack” attribute to configure the precision
of the timeout, the default slack is 50 microseconds. Before Linux
2.6.28, timeouts for select() were handled by the main timing
subsystem at a jiffy-level resolution. Read also High- (but not too
high-) resolution timeouts and
Timer slack.
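From Python, these timeouts are exposed through the select module; a minimal
sketch that waits at most 50 ms for a socket to become readable:

import select
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))

# Wait up to 50 ms; the actual wake-up precision depends on the kernel's
# timer slack, as described above.
readable, _, _ = select.select([sock], [], [], 0.050)
print("readable" if readable else "timed out")
sock.close()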
Other functions
poll(), epoll()
sigtimedwait(). POSIX: “If the Monotonic Clock option is supported,
the CLOCK_MONOTONIC clock shall be used to measure the time
interval specified by the timeout argument.”
pthread_cond_timedwait(), pthread_condattr_setclock(). “The default
value of the clock attribute shall refer to the system time.”
sem_timedwait(): “If the Timers option is supported, the timeout
shall be based on the CLOCK_REALTIME clock. If the Timers option is
not supported, the timeout shall be based on the system time as
returned by the time() function. The precision of the timeout
shall be the precision of the clock on which it is based.”
WaitForSingleObject(): uses the same timer as GetTickCount(), with
the same precision.
System Standby
The ACPI power state “S3” is a system standby mode, also called
“Suspend to RAM”. RAM remains powered.
On Windows, the WM_POWERBROADCAST message is sent to Windows
applications to notify them of power-management events (e.g. the power
status has changed).
For Mac OS X, read Registering and unregistering for sleep and wake
notifications
(Technical Q&A QA1340).
Footnotes
[2]
“_time” is a hypothetical module only used for the example.
The time module is implemented in C and so there is no need for
such a module.
Links
Related Python issues:
Issue #12822: NewGIL should use CLOCK_MONOTONIC if possible.
Issue #14222: Use time.steady() to implement timeout
Issue #14309: Deprecate time.clock()
Issue #14397: Use GetTickCount/GetTickCount64 instead of
QueryPerformanceCounter for monotonic clock
Issue #14428: Implementation of the PEP 418
Issue #14555: clock_gettime/settime/getres: Add more clock identifiers
Libraries exposing monotonic clocks:
Java: System.nanoTime
Qt library: QElapsedTimer
glib library: g_get_monotonic_time()
uses GetTickCount64()/GetTickCount() on Windows,
clock_gettime(CLOCK_MONOTONIC) on UNIX or falls back to the system
clock
python-monotonic-time (github)
Monoclock.nano_count() uses clock_gettime(CLOCK_MONOTONIC)
and returns a number of nanoseconds
monotonic_clock by Thomas Habets
Perl: Time::HiRes
exposes clock_gettime(CLOCK_MONOTONIC)
Ruby: AbsoluteTime.now: uses
clock_gettime(CLOCK_MONOTONIC), mach_absolute_time() or
gettimeofday(). “AbsoluteTime.monotonic?” method indicates if
AbsoluteTime.now is monotonic or not.
libpthread: POSIX thread library for Windows
(clock.c)
Boost.Chrono uses:
system_clock:
mac = gettimeofday()
posix = clock_gettime(CLOCK_REALTIME)
win = GetSystemTimeAsFileTime()
steady_clock:
mac = mach_absolute_time()
posix = clock_gettime(CLOCK_MONOTONIC)
win = QueryPerformanceCounter()
high_resolution_clock:
steady_clock if available, otherwise system_clock
Time:
Twisted issue #2424: Add reactor option to start with monotonic clock
gettimeofday() should never be used to measure time by Thomas Habets (2010-09-05)
hrtimers - subsystem for high-resolution kernel timers
C++ Timeout Specification by Lawrence Crowl (2010-08-19)
Windows: Game Timing and Multicore Processors by Chuck Walbourn (December 2005)
Implement a Continuously Updating, High-Resolution Time Provider
for Windows by Johan Nilsson (March 2004)
clockspeed uses a hardware tick
counter to compensate for a persistently fast or slow system time, by D. J. Bernstein (1998)
Retrieving system time
lists hardware clocks and time functions with their resolution and
epoch or range
On Windows, the JavaScript runtime of Firefox interpolates
GetSystemTimeAsFileTime() with QueryPerformanceCounter() to get a
higher resolution. See the Bug 363258 - bad millisecond resolution
for (new Date).getTime() / Date.now() on Windows.
When microseconds matter: How the
IBM High Resolution Time Stamp Facility accurately measures itty
bits of time, by W. Nathaniel Mills, III (Apr 2002)
Win32 Performance Measurement Options by Matthew Wilson (May, 2003)
Counter Availability and Characteristics for Feed-forward Based Synchronization
by Timothy Broomhead, Julien Ridoux, Darryl Veitch (2009)
System Management Interrupt (SMI) issues:
System Management Interrupt Free Hardware
by Keith Mannthey (2009)
IBM Real-Time “SMI Free” mode driver by Keith Mannthey (Feb 2009)
Fixing Realtime problems caused by SMI on Ubuntu
[RFC] simple SMI detector by Jon Masters (Jan 2009)
[PATCH 2.6.34-rc3] A nonintrusive SMI sniffer for x86 by Joe Korty (2010-04)
Acceptance
The PEP was accepted on 2012-04-28 by Guido van Rossum [1]. The PEP
implementation has since been committed to the repository.
References
[1]
https://mail.python.org/pipermail/python-dev/2012-April/119094.html
Copyright
This document has been placed in the public domain.
| Final | PEP 418 – Add monotonic time, performance counter, and process time functions | Standards Track | This PEP proposes to add time.get_clock_info(name),
time.monotonic(), time.perf_counter() and
time.process_time() functions to Python 3.3. |
PEP 419 – Protecting cleanup statements from interruptions
Author:
Paul Colomiets <paul at colomiets.name>
Status:
Deferred
Type:
Standards Track
Created:
06-Apr-2012
Python-Version:
3.3
Table of Contents
Abstract
PEP Deferral
Rationale
Coroutine Use Case
Specification
Frame Flag ‘f_in_cleanup’
Function ‘sys.setcleanuphook’
Inspect Module Enhancements
Example
Unresolved Issues
Interruption Inside With Statement Expression
Exception Propagation
Interruption Between Acquiring Resource and Try Block
Handling EINTR Inside a Finally
Setting Interruption Context Inside Finally Itself
Modifying KeyboardInterrupt
Alternative Python Implementations Support
Alternative Names
Alternative Proposals
Propagating ‘f_in_cleanup’ Flag Automatically
Add Bytecodes ‘INCR_CLEANUP’, ‘DECR_CLEANUP’
Expose ‘f_in_cleanup’ as a Counter
Add code object flag ‘CO_CLEANUP’
Have Cleanup Callback on Frame Object Itself
No Cleanup Hook
References
Copyright
Abstract
This PEP proposes a way to protect Python code from being interrupted
inside a finally clause or during context manager cleanup.
PEP Deferral
Further exploration of the concepts covered in this PEP has been deferred
for lack of a current champion interested in promoting the goals of the PEP
and collecting and incorporating feedback, and with sufficient available
time to do so effectively.
Rationale
Python has two nice ways to do cleanup. One is a finally
statement and the other is a context manager (usually called using a
with statement). However, neither is protected from interruption
by KeyboardInterrupt or GeneratorExit caused by
generator.throw(). For example:
lock.acquire()
try:
    print('starting')
    do_something()
finally:
    print('finished')
    lock.release()
If KeyboardInterrupt occurs just after the second print()
call, the lock will not be released. Similarly, the following code
using the with statement is affected:
from threading import Lock

class MyLock:

    def __init__(self):
        self._lock_impl = Lock()

    def __enter__(self):
        self._lock_impl.acquire()
        print("LOCKED")

    def __exit__(self, exc_type, exc_value, traceback):
        print("UNLOCKING")
        self._lock_impl.release()

lock = MyLock()

with lock:
    do_something()
If KeyboardInterrupt occurs near any of the print() calls, the
lock will never be released.
Coroutine Use Case
A similar case occurs with coroutines. Usually coroutine libraries
want to interrupt the coroutine with a timeout. The
generator.throw() method works for this use case, but there is no
way of knowing if the coroutine is currently suspended from inside a
finally clause.
An example that uses yield-based coroutines follows. The code looks
similar using any of the popular coroutine libraries Monocle [1],
Bluelet [2], or Twisted [3].
def run_locked():
    yield connection.sendall('LOCK')
    try:
        yield do_something()
        yield do_something_else()
    finally:
        yield connection.sendall('UNLOCK')

with timeout(5):
    yield run_locked()
In the example above, yield something means to pause executing the
current coroutine and to execute coroutine something until it
finishes execution. Therefore, the coroutine library itself needs to
maintain a stack of generators. The connection.sendall() call waits
until the socket is writable and does a similar thing to what
socket.sendall() does.
The with statement ensures that all code is executed within 5
seconds timeout. It does so by registering a callback in the main
loop, which calls generator.throw() on the top-most frame in the
coroutine stack when a timeout happens.
The greenlets extension works in a similar way, except that it
doesn’t need yield to enter a new stack frame. Otherwise
considerations are similar.
Specification
Frame Flag ‘f_in_cleanup’
A new flag on the frame object is proposed. It is set to True if
this frame is currently executing a finally clause. Internally,
the flag must be implemented as a counter of nested finally statements
currently being executed.
The internal counter also needs to be incremented during execution of
the SETUP_WITH and WITH_CLEANUP bytecodes, and decremented
when execution for these bytecodes is finished. This allows to also
protect __enter__() and __exit__() methods.
Function ‘sys.setcleanuphook’
A new function for the sys module is proposed. This function sets
a callback which is executed every time f_in_cleanup becomes
false. Callbacks get a frame object as their sole argument, so that
they can figure out where they are called from.
The setting is thread local and must be stored in the
PyThreadState structure.
Inspect Module Enhancements
Two new functions are proposed for the inspect module:
isframeincleanup() and getcleanupframe().
isframeincleanup(), given a frame or generator object as its sole
argument, returns the value of the f_in_cleanup attribute of a
frame itself or of the gi_frame attribute of a generator.
getcleanupframe(), given a frame object as its sole argument,
returns the innermost frame which has a true value of
f_in_cleanup, or None if no frames in the stack have a nonzero
value for that attribute. It starts to inspect from the specified
frame and walks to outer frames using f_back pointers, just like
getouterframes() does.
Example
An example implementation of a SIGINT handler that interrupts safely
might look like:
import inspect, sys, functools

def sigint_handler(sig, frame):
    if inspect.getcleanupframe(frame) is None:
        raise KeyboardInterrupt()

sys.setcleanuphook(functools.partial(sigint_handler, 0))
A coroutine example is out of scope of this document, because its
implementation depends very much on a trampoline (or main loop) used
by coroutine library.
Unresolved Issues
Interruption Inside With Statement Expression
Given the statement
with open(filename):
    do_something()
Python can be interrupted after open() is called, but before the
SETUP_WITH bytecode is executed. There are two possible
decisions:
Protect with expressions. This would require another bytecode,
since currently there is no way of recognizing the start of the
with expression.
Let the user write a wrapper if he considers it important for the
use-case. A safe wrapper might look like this:

class FileWrapper(object):

    def __init__(self, filename, mode):
        self.filename = filename
        self.mode = mode

    def __enter__(self):
        self.file = open(self.filename, self.mode)

    def __exit__(self, exc_type, exc_value, traceback):
        self.file.close()
Alternatively it can be written using the contextmanager()
decorator:
from contextlib import contextmanager

@contextmanager
def open_wrapper(filename, mode):
    file = open(filename, mode)
    try:
        yield file
    finally:
        file.close()
This code is safe, as the first part of the generator (before yield)
is executed inside the SETUP_WITH bytecode of the caller.
Exception Propagation
Sometimes a finally clause or an __enter__()/__exit__()
method can raise an exception. Usually this is not a problem, since
more important exceptions like KeyboardInterrupt or SystemExit
should be raised instead. But it may be nice to be able to keep the
original exception inside a __context__ attribute. So the cleanup
hook signature may grow an exception argument:
def sigint_handler(sig, frame):
    if inspect.getcleanupframe(frame) is None:
        raise KeyboardInterrupt()

sys.setcleanuphook(retry_sigint)

def retry_sigint(frame, exception=None):
    if inspect.getcleanupframe(frame) is None:
        raise KeyboardInterrupt() from exception
Note
There is no need to have three arguments like in the __exit__
method since there is a __traceback__ attribute in exception in
Python 3.
However, this will set the __cause__ for the exception, which is
not exactly what’s intended. So some hidden interpreter logic may be
used to put a __context__ attribute on every exception raised in a
cleanup hook.
Interruption Between Acquiring Resource and Try Block
The example from the first section is not totally safe. Let’s take a
closer look:
lock.acquire()
try:
    do_something()
finally:
    lock.release()
The problem might occur if the code is interrupted just after
lock.acquire() is executed but before the try block is
entered.
There is no way the code can be fixed unmodified. The actual fix
depends very much on the use case. Usually code can be fixed using a
with statement:
with lock:
    do_something()
However, for coroutines one usually can’t use the with statement
because you need to yield for both the acquire and release
operations. So the code might be rewritten like this:
try:
    yield lock.acquire()
    do_something()
finally:
    yield lock.release()
The actual locking code might need more code to support this use case,
but the implementation is usually trivial, like this: check if the
lock has been acquired and unlock if it is.
Handling EINTR Inside a Finally
Even if a signal handler is prepared to check the f_in_cleanup
flag, InterruptedError might be raised in the cleanup handler,
because the respective system call returned an EINTR error. The
primary use cases are prepared to handle this:
Posix mutexes never return EINTR
Networking libraries are always prepared to handle EINTR
Coroutine libraries are usually interrupted with the throw()
method, not with a signal
The platform-specific function siginterrupt() might be used to
remove the need to handle EINTR. However, it may have hardly
predictable consequences, for example a SIGINT handler is never
called if the main thread is stuck inside an IO routine.
A better approach would be to have the code, which is usually used in
cleanup handlers, be prepared to handle InterruptedError
explicitly. An example of such code might be a file-based lock
implementation.
signal.pthread_sigmask can be used to block signals inside
cleanup handlers which can be interrupted with EINTR.
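A small sketch of the signal.pthread_sigmask() approach (Unix only, Python 3.3
or later; the cleanup argument is a placeholder for whatever cleanup routine
must not be interrupted):

import signal

def run_cleanup(cleanup):
    """Run *cleanup* with SIGINT blocked, restoring the old mask afterwards."""
    old_mask = signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGINT})
    try:
        cleanup()
    finally:
        signal.pthread_sigmask(signal.SIG_SETMASK, old_mask)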
Setting Interruption Context Inside Finally Itself
Some coroutine libraries may need to set a timeout for the finally
clause itself. For example:
try:
    do_something()
finally:
    with timeout(0.5):
        try:
            yield do_slow_cleanup()
        finally:
            yield do_fast_cleanup()
With current semantics, timeout will either protect the whole with
block or nothing at all, depending on the implementation of each
library. What the author intended is to treat do_slow_cleanup as
ordinary code, and do_fast_cleanup as a cleanup (a
non-interruptible one).
A similar case might occur when using greenlets or tasklets.
This case can be fixed by exposing f_in_cleanup as a counter, and
by calling a cleanup hook on each decrement. A coroutine library may
then remember the value at timeout start, and compare it on each hook
execution.
But in practice, the example is considered to be too obscure to take
into account.
Modifying KeyboardInterrupt
It should be decided if the default SIGINT handler should be
modified to use the described mechanism. The initial proposition is
to keep old behavior, for two reasons:
Most applications do not care about cleanup on exit (either they do
not have external state, or they modify it in a crash-safe way).
Cleanup may take too much time, not giving the user a chance to
interrupt an application.
The latter case can be fixed by allowing an unsafe break if a
SIGINT handler is called twice, but it seems not worth the
complexity.
Alternative Python Implementations Support
We consider f_in_cleanup an implementation detail. The actual
implementation may have some fake frame-like object passed to signal
handler, cleanup hook and returned from getcleanupframe(). The
only requirement is that the inspect module functions work as
expected on these objects. For this reason, we also allow passing a
generator object to the isframeincleanup() function, which removes
the need to use the gi_frame attribute.
It might be necessary to specify that getcleanupframe() must
return the same object that will be passed to cleanup hook at the next
invocation.
Alternative Names
The original proposal had a f_in_finally frame attribute, as the
original intention was to protect finally clauses. But as it grew
up to protecting __enter__ and __exit__ methods too, the
f_in_cleanup name seems better. Although the __enter__ method
is not a cleanup routine, it at least relates to cleanup done by
context managers.
setcleanuphook, isframeincleanup and getcleanupframe can
be unobscured to set_cleanup_hook, is_frame_in_cleanup and
get_cleanup_frame, although they follow the naming convention of
their respective modules.
Alternative Proposals
Propagating ‘f_in_cleanup’ Flag Automatically
This can make getcleanupframe() unnecessary. But for yield-based
coroutines you need to propagate it yourself. Making it writable
leads to somewhat unpredictable behavior of setcleanuphook().
Add Bytecodes ‘INCR_CLEANUP’, ‘DECR_CLEANUP’
These bytecodes can be used to protect the expression inside the
with statement, as well as making counter increments more explicit
and easy to debug (visible inside a disassembly). Some middle ground
might be chosen, like END_FINALLY and SETUP_WITH implicitly
decrementing the counter (END_FINALLY is present at end of every
with suite).
However, adding new bytecodes must be considered very carefully.
Expose ‘f_in_cleanup’ as a Counter
The original intention was to expose a minimum of needed
functionality. However, as we consider the frame flag
f_in_cleanup an implementation detail, we may expose it as a
counter.
Similarly, if we have a counter we may need to have the cleanup hook
called on every counter decrement. It’s unlikely to have much
performance impact as nested finally clauses are an uncommon case.
Add code object flag ‘CO_CLEANUP’
As an alternative to setting the flag inside the SETUP_WITH and
WITH_CLEANUP bytecodes, we can introduce a flag CO_CLEANUP.
When the interpreter starts to execute code with CO_CLEANUP set,
it sets f_in_cleanup for the whole function body. This flag is
set for code objects of __enter__ and __exit__ special
methods. Technically it might be set on functions called
__enter__ and __exit__.
This seems to be a less clear solution. It also covers the case where
__enter__ and __exit__ are called manually. This may be
accepted either as a feature or as an unnecessary side-effect (or,
though unlikely, as a bug).
It may also impose a problem when __enter__ or __exit__
functions are implemented in C, as there is no code object to check
for the f_in_cleanup flag.
Have Cleanup Callback on Frame Object Itself
The frame object may be extended to have a f_cleanup_callback
member which is called when f_in_cleanup is reset to 0. This
would help to register different callbacks to different coroutines.
Despite its apparent beauty, this solution doesn’t add anything, as
the two primary use cases are:
Setting the callback in a signal handler. The callback is
inherently a single one for this case.
Use a single callback per loop for the coroutine use case. Here, in
almost all cases, there is only one loop per thread.
No Cleanup Hook
The original proposal included no cleanup hook specification, as there
are a few ways to achieve the same using current tools:
Using sys.settrace() and the f_trace callback. This may
impose some problem to debugging, and has a big performance impact
(although interrupting doesn’t happen very often).
Sleeping a bit more and trying again. For a coroutine library this
is easy. For signals it may be achieved using signal.alarm().
Both methods are considered too impractical and a way to catch exit
from finally clauses is proposed.
References
[1]
Monocle
https://github.com/saucelabs/monocle
[2]
Bluelet
https://github.com/sampsyo/bluelet
[3]
Twisted: inlineCallbacks
https://twisted.org/documents/8.1.0/api/twisted.internet.defer.html
[4] Original discussion
https://mail.python.org/pipermail/python-ideas/2012-April/014705.html
[5] Implementation of PEP 419
https://github.com/python/cpython/issues/58935
Copyright
This document has been placed in the public domain.
| Deferred | PEP 419 – Protecting cleanup statements from interruptions | Standards Track | This PEP proposes a way to protect Python code from being interrupted
inside a finally clause or during context manager cleanup. |
PEP 420 – Implicit Namespace Packages
Author:
Eric V. Smith <eric at trueblade.com>
Status:
Final
Type:
Standards Track
Created:
19-Apr-2012
Python-Version:
3.3
Post-History:
Resolution:
Python-Dev message
Table of Contents
Abstract
Terminology
Namespace packages today
Rationale
Specification
Dynamic path computation
Impact on import finders and loaders
Differences between namespace packages and regular packages
Namespace packages in the standard library
Migrating from legacy namespace packages
Packaging Implications
Examples
Nested namespace packages
Dynamic path computation
Discussion
find_module versus find_loader
Dynamic path computation
Module reprs
References
Copyright
Abstract
Namespace packages are a mechanism for splitting a single Python package
across multiple directories on disk. In current Python versions, an algorithm
to compute the package's __path__ must be formulated. With the enhancement
proposed here, the import machinery itself will construct the list of
directories that make up the package. This PEP builds upon previous work,
documented in PEP 382 and PEP 402. Those PEPs have since been rejected in
favor of this one. An implementation of this PEP is at [1].
Terminology
Within this PEP:
“package” refers to Python packages as defined by Python’s import
statement.
“distribution” refers to separately installable sets of Python
modules as stored in the Python package index, and installed by
distutils or setuptools.
“vendor package” refers to groups of files installed by an
operating system’s packaging mechanism (e.g. Debian or Redhat
packages install on Linux systems).
“regular package” refers to packages as they are implemented in
Python 3.2 and earlier.
“portion” refers to a set of files in a single directory (possibly
stored in a zip file) that contribute to a namespace package.
“legacy portion” refers to a portion that uses __path__
manipulation in order to implement namespace packages.
This PEP defines a new type of package, the “namespace package”.
Namespace packages today
Python currently provides pkgutil.extend_path to denote a package
as a namespace package. The recommended way of using it is to put:
from pkgutil import extend_path
__path__ = extend_path(__path__, __name__)
in the package’s __init__.py. Every distribution needs to provide
the same contents in its __init__.py, so that extend_path is
invoked independent of which portion of the package gets imported
first. As a consequence, the package’s __init__.py cannot
practically define any names as it depends on the order of the package
fragments on sys.path to determine which portion is imported
first. As a special feature, extend_path reads files named
<packagename>.pkg which allows declaration of additional portions.
setuptools provides a similar function named
pkg_resources.declare_namespace that is used in the form:
import pkg_resources
pkg_resources.declare_namespace(__name__)
In the portion’s __init__.py, no assignment to __path__ is
necessary, as declare_namespace modifies the package __path__
through sys.modules. As a special feature, declare_namespace
also supports zip files, and registers the package name internally so
that future additions to sys.path by setuptools can properly add
additional portions to each package.
setuptools allows declaring namespace packages in a distribution’s
setup.py, so that distribution developers don’t need to put the
magic __path__ modification into __init__.py themselves.
See PEP 402’s “The Problem”
section for additional motivations
for namespace packages. Note that PEP 402 has been rejected, but the
motivating use cases are still valid.
Rationale
The current imperative approach to namespace packages has led to
multiple slightly-incompatible mechanisms for providing namespace
packages. For example, pkgutil supports *.pkg files; setuptools
doesn’t. Likewise, setuptools supports inspecting zip files, and
supports adding portions to its _namespace_packages variable,
whereas pkgutil doesn’t.
Namespace packages are designed to support being split across multiple
directories (and hence found via multiple sys.path entries). In
this configuration, it doesn’t matter if multiple portions all provide
an __init__.py file, so long as each portion correctly initializes
the namespace package. However, Linux distribution vendors (amongst
others) prefer to combine the separate portions and install them all
into the same file system directory. This creates a potential for
conflict, as the portions are now attempting to provide the same
file on the target system - something that is not allowed by many
package managers. Allowing implicit namespace packages means that the
requirement to provide an __init__.py file can be dropped
completely, and affected portions can be installed into a common
directory or split across multiple directories as distributions see
fit.
A namespace package will not be constrained by a fixed __path__,
computed from the parent path at namespace package creation time.
Consider the standard library encodings package:
Suppose that encodings becomes a namespace package.
It sometimes gets imported during interpreter startup to
initialize the standard io streams.
An application modifies sys.path after startup and wants to
contribute additional encodings from new path entries.
An attempt is made to import an encoding from an encodings
portion that is found on a path entry added in step 3.
If the import system was restricted to only finding portions along the
value of sys.path that existed at the time the encodings
namespace package was created, the additional paths added in step 3
would never be searched for the additional portions imported in step
4. In addition, if step 2 were sometimes skipped (due to some runtime
flag or other condition), then the path items added in step 3 would
indeed be used the first time a portion was imported. Thus this PEP
requires that the list of path entries be dynamically computed when
each portion is loaded. It is expected that the import machinery will
do this efficiently by caching __path__ values and only refreshing
them when it detects that the parent path has changed. In the case of
a top-level package like encodings, this parent path would be
sys.path.
Specification
Regular packages will continue to have an __init__.py and will
reside in a single directory.
Namespace packages cannot contain an __init__.py. As a
consequence, pkgutil.extend_path and
pkg_resources.declare_namespace become obsolete for purposes of
namespace package creation. There will be no marker file or directory
for specifying a namespace package.
During import processing, the import machinery will continue to
iterate over each directory in the parent path as it does in Python
3.2. While looking for a module or package named “foo”, for each
directory in the parent path:
If <directory>/foo/__init__.py is found, a regular package is
imported and returned.
If not, but <directory>/foo.{py,pyc,so,pyd} is found, a module
is imported and returned. The exact list of extensions varies by
platform and whether the -O flag is specified. The list here is
representative.
If not, but <directory>/foo is found and is a directory, it is
recorded and the scan continues with the next directory in the
parent path.
Otherwise the scan continues with the next directory in the parent
path.
If the scan completes without returning a module or package, and at
least one directory was recorded, then a namespace package is created.
The new namespace package:
Has a __path__ attribute set to an iterable of the path strings
that were found and recorded during the scan.
Does not have a __file__ attribute.
Note that if “import foo” is executed and “foo” is found as a
namespace package (using the above rules), then “foo” is immediately
created as a package. The creation of the namespace package is not
deferred until a sub-level import occurs.
A namespace package is not fundamentally different from a regular
package. It is just a different way of creating packages. Once a
namespace package is created, there is no functional difference
between it and a regular package.
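A rough, non-normative sketch of the scan described above (the real logic lives
in the import machinery; finders, zip support and the exact extension list are
simplified away):

import os

def find_package(name, parent_path):
    """Simplified version of the PEP 420 scan over a parent path."""
    namespace_portions = []
    for directory in parent_path:
        candidate = os.path.join(directory, name)
        if os.path.isfile(os.path.join(candidate, "__init__.py")):
            return ("regular package", candidate)
        for suffix in (".py", ".pyc", ".so", ".pyd"):
            if os.path.isfile(candidate + suffix):
                return ("module", candidate + suffix)
        if os.path.isdir(candidate):
            namespace_portions.append(candidate)
    if namespace_portions:
        return ("namespace package", namespace_portions)
    return None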
Dynamic path computation
The import machinery will behave as if a namespace package’s
__path__ is recomputed before each portion is loaded.
For performance reasons, it is expected that this will be achieved by
detecting that the parent path has changed. If no change has taken
place, then no __path__ recomputation is required. The
implementation must ensure that changes to the contents of the parent
path are detected, as well as detecting the replacement of the parent
path with a new path entry list object.
Impact on import finders and loaders
PEP 302 defines “finders” that are called to search path elements.
These finders’ find_module methods return either a “loader” object
or None.
For a finder to contribute to namespace packages, it must implement a
new find_loader(fullname) method. fullname has the same
meaning as for find_module. find_loader always returns a
2-tuple of (loader, <iterable-of-path-entries>). loader may
be None, in which case <iterable-of-path-entries> (which may
be empty) is added to the list of recorded path entries and path
searching continues. If loader is not None, it is immediately
used to load a module or regular package.
Even if loader is returned and is not None,
<iterable-of-path-entries> must still contain the path entries for
the package. This allows code such as pkgutil.extend_path() to
compute path entries for packages that it does not load.
Note that multiple path entries per finder are allowed. This is to
support the case where a finder discovers multiple namespace portions
for a given fullname. Many finders will support only a single
namespace package portion per find_loader call, in which case this
iterable will contain only a single string.
The import machinery will call find_loader if it exists, else fall
back to find_module. Legacy finders which implement
find_module but not find_loader will be unable to contribute
portions to a namespace package.
The specification expands PEP 302 loaders to include an optional method called
module_repr() which if present, is used to generate module object reprs.
See the section below for further details.
Differences between namespace packages and regular packages
Namespace packages and regular packages are very similar. The
differences are:
Portions of namespace packages need not all come from the same
directory structure, or even from the same loader. Regular packages
are self-contained: all parts live in the same directory hierarchy.
Namespace packages have no __file__ attribute.
Namespace packages’ __path__ attribute is a read-only iterable
of strings, which is automatically updated when the parent path is
modified.
Namespace packages have no __init__.py module.
Namespace packages have a different type of object for their
__loader__ attribute.
Namespace packages in the standard library
It is possible, and this PEP explicitly allows, that parts of the
standard library be implemented as namespace packages. When and if
any standard library packages become namespace packages is outside the
scope of this PEP.
Migrating from legacy namespace packages
As described above, prior to this PEP pkgutil.extend_path() was
used by legacy portions to create namespace packages. Because it is
likely not practical for all existing portions of a namespace package
to be migrated to this PEP at once, extend_path() will be modified
to also recognize PEP 420 namespace packages. This will allow some
portions of a namespace to be legacy portions while others are
migrated to PEP 420. These hybrid namespace packages will not have
the dynamic path computation that normal namespace packages have,
since extend_path() never provided this functionality in the past.
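For reference, a legacy portion typically declares itself with the
long-standing pkgutil idiom in its __init__.py, shown below; under this
PEP, extend_path() will additionally recognize PEP 420 portions of the
same package:

# __init__.py of a legacy (pre-PEP 420) namespace package portion
from pkgutil import extend_path
__path__ = extend_path(__path__, __name__)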
Packaging Implications
Multiple portions of a namespace package can be installed into the
same directory, or into separate directories. For this section,
suppose there are two portions which define “foo.bar” and “foo.baz”.
“foo” itself is a namespace package.
If these are installed in the same location, a single directory “foo”
would be in a directory that is on sys.path. Inside “foo” would
be two directories, “bar” and “baz”. If “foo.bar” is removed (perhaps
by an OS package manager), care must be taken not to remove the
“foo/baz” or “foo” directories. Note that in this case “foo” will be
a namespace package (because it lacks an __init__.py), even though
all of its portions are in the same directory.
Note that “foo.bar” and “foo.baz” can be installed into the same “foo”
directory because they will not have any files in common.
If the portions are installed in different locations, two different
“foo” directories would be in directories that are on sys.path.
“foo/bar” would be in one of these sys.path entries, and “foo/baz”
would be in the other. Upon removal of “foo.bar”, the “foo/bar” and
corresponding “foo” directories can be completely removed. But
“foo/baz” and its corresponding “foo” directory cannot be removed.
It is also possible to have the “foo.bar” portion installed in a
directory on sys.path, and have the “foo.baz” portion provided in
a zip file, also on sys.path.
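The two arrangements described above might look roughly like this on disk
(directory names are illustrative only):

# Both portions installed into the same sys.path entry:
site-packages/
    foo/                  # namespace package: no __init__.py
        bar/
            __init__.py
        baz/
            __init__.py

# Each portion installed into its own sys.path entry:
location1/
    foo/                  # portion installed by the "foo.bar" project
        bar/
            __init__.py
location2/
    foo/                  # portion installed by the "foo.baz" project
        baz/
            __init__.py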
Examples
Nested namespace packages
This example uses the following directory structure:
Lib/test/namespace_pkgs
    project1
        parent
            child
                one.py
    project2
        parent
            child
                two.py
Here, both parent and child are namespace packages: Portions of them
exist in different directories, and they do not have __init__.py
files.
Here we add the parent directories to sys.path, and show that the
portions are correctly found:
>>> import sys
>>> sys.path += ['Lib/test/namespace_pkgs/project1', 'Lib/test/namespace_pkgs/project2']
>>> import parent.child.one
>>> parent.__path__
_NamespacePath(['Lib/test/namespace_pkgs/project1/parent', 'Lib/test/namespace_pkgs/project2/parent'])
>>> parent.child.__path__
_NamespacePath(['Lib/test/namespace_pkgs/project1/parent/child', 'Lib/test/namespace_pkgs/project2/parent/child'])
>>> import parent.child.two
>>>
Dynamic path computation
This example uses a similar directory structure, but adds a third
portion:
Lib/test/namespace_pkgs
    project1
        parent
            child
                one.py
    project2
        parent
            child
                two.py
    project3
        parent
            child
                three.py
We add project1 and project2 to sys.path, then import
parent.child.one and parent.child.two. Then we add the
project3 to sys.path and when parent.child.three is
imported, project3/parent is automatically added to
parent.__path__:
# add the first two parent paths to sys.path
>>> import sys
>>> sys.path += ['Lib/test/namespace_pkgs/project1', 'Lib/test/namespace_pkgs/project2']
# parent.child.one can be imported, because project1 was added to sys.path:
>>> import parent.child.one
>>> parent.__path__
_NamespacePath(['Lib/test/namespace_pkgs/project1/parent', 'Lib/test/namespace_pkgs/project2/parent'])
# parent.child.__path__ contains project1/parent/child and project2/parent/child, but not project3/parent/child:
>>> parent.child.__path__
_NamespacePath(['Lib/test/namespace_pkgs/project1/parent/child', 'Lib/test/namespace_pkgs/project2/parent/child'])
# parent.child.two can be imported, because project2 was added to sys.path:
>>> import parent.child.two
# we cannot import parent.child.three, because project3 is not in the path:
>>> import parent.child.three
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<frozen importlib._bootstrap>", line 1286, in _find_and_load
File "<frozen importlib._bootstrap>", line 1250, in _find_and_load_unlocked
ImportError: No module named 'parent.child.three'
# now add project3 to sys.path:
>>> sys.path.append('Lib/test/namespace_pkgs/project3')
# and now parent.child.three can be imported:
>>> import parent.child.three
# project3/parent has been added to parent.__path__:
>>> parent.__path__
_NamespacePath(['Lib/test/namespace_pkgs/project1/parent', 'Lib/test/namespace_pkgs/project2/parent', 'Lib/test/namespace_pkgs/project3/parent'])
# and project3/parent/child has been added to parent.child.__path__
>>> parent.child.__path__
_NamespacePath(['Lib/test/namespace_pkgs/project1/parent/child', 'Lib/test/namespace_pkgs/project2/parent/child', 'Lib/test/namespace_pkgs/project3/parent/child'])
>>>
Discussion
At PyCon 2012, we had a discussion about namespace packages at which
PEP 382 and PEP 402 were rejected, to be replaced by this PEP [3].
There is no intention to remove support of regular packages. If a
developer knows that her package will never be a portion of a
namespace package, then there is a performance advantage to it being a
regular package (with an __init__.py). Creation and loading of a
regular package can take place immediately when it is located along
the path. With namespace packages, all entries in the path must be
scanned before the package is created.
Note that an ImportWarning will no longer be raised for a directory
lacking an __init__.py file. Such a directory will now be
imported as a namespace package, whereas in prior Python versions an
ImportWarning would be raised.
Alyssa (Nick) Coghlan presented a list of her objections to this proposal [4].
They are:
Implicit package directories go against the Zen of Python.
Implicit package directories pose awkward backwards compatibility
challenges.
Implicit package directories introduce ambiguity into file system
layouts.
Implicit package directories will permanently entrench current
newbie-hostile behavior in __main__.
Alyssa later gave a detailed response to her own objections [5], which
is summarized here:
The practicality of this PEP wins over other proposals and the
status quo.
Minor backward compatibility issues are okay, as long as they are
properly documented.
This will be addressed in PEP 395.
This will also be addressed in PEP 395.
The inclusion of namespace packages in the standard library was
motivated by Martin v. Löwis, who wanted the encodings package to
become a namespace package [6]. While this PEP allows for standard
library packages to become namespaces, it defers a decision on
encodings.
find_module versus find_loader
An early draft of this PEP specified a change to the find_module
method in order to support namespace packages. It would be modified
to return a string in the case where a namespace package portion was
discovered.
However, this caused a problem with existing code outside of the
standard library which calls find_module. Because this code would
not be upgraded in concert with changes required by this PEP, it would
fail when it would receive unexpected return values from
find_module. Because of this incompatibility, this PEP now
specifies that finders that want to provide namespace portions must
implement the find_loader method, described above.
The use case for supporting multiple portions per find_loader call
is given in [7].
Dynamic path computation
Guido raised a concern that automatic dynamic path computation was an
unnecessary feature [8]. Later in that thread, PJ Eby and Alyssa
Coghlan presented arguments as to why dynamic computation would
minimize surprise to Python users. The conclusion of that discussion
has been included in this PEP’s Rationale section.
An earlier version of this PEP required that dynamic path computation
could only take effect if the parent path object were modified
in-place. That is, this would work:
sys.path.append('new-dir')
But this would not:
sys.path = sys.path + ['new-dir']
In the same thread [8], it was pointed out that this restriction is
not required. If the parent path is looked up by name instead of by
holding a reference to it, then there is no restriction on how the
parent path is modified or replaced. For a top-level namespace
package, the lookup would be the module named "sys" then its
attribute "path". For a namespace package nested inside a package
foo, the lookup would be for the module named "foo" then its
attribute "__path__".
Module reprs
Previously, module reprs were hard coded based on assumptions about a module’s
__file__ attribute. If this attribute existed and was a string, it was
assumed to be a file system path, and the module object’s repr would include
this in its value. The only exception was that PEP 302 reserved missing
__file__ attributes to built-in modules, and in CPython, this assumption
was baked into the module object’s implementation. Because of this
restriction, some modules contained contrived __file__ values that did not
reflect file system paths, and which could cause unexpected problems later
(e.g. os.path.join() on a non-path __file__ would return gibberish).
This PEP relaxes this constraint, and leaves the setting of __file__ to
the purview of the loader producing the module. Loaders may opt to leave
__file__ unset if no file system path is appropriate. Loaders may also
set additional reserved attributes on the module if useful. This means that
the definitive way to determine the origin of a module is to check its
__loader__ attribute.
For example, namespace packages as described in this PEP will have no
__file__ attribute because no corresponding file exists. In order to
provide flexibility and descriptiveness in the reprs of such modules, a new
optional protocol is added to PEP 302 loaders. Loaders can implement a
module_repr() method which takes a single argument, the module object.
This method should return the string to be used verbatim as the repr of the
module. The rules for producing a module repr are now standardized as:
If the module has a __loader__ and that loader has a module_repr()
method, call it with a single argument, which is the module object. The
value returned is used as the module’s repr.
If an exception occurs in module_repr(), the exception is
caught and discarded, and the calculation of the module’s repr
continues as if module_repr() did not exist.
If the module has a __file__ attribute, this is used as part of the
module’s repr.
If the module has no __file__ but does have a __loader__, then the
loader’s repr is used as part of the module’s repr.
Otherwise, just use the module’s __name__ in the repr.
Here is a snippet showing how a namespace module’s repr is calculated
from its loader:
class NamespaceLoader:
    @classmethod
    def module_repr(cls, module):
        return "<module '{}' (namespace)>".format(module.__name__)
Built-in module reprs would no longer need to be hard-coded, but
instead would come from their loader as well:
class BuiltinImporter:
    @classmethod
    def module_repr(cls, module):
        return "<module '{}' (built-in)>".format(module.__name__)
Here are some example reprs of different types of modules with
different sets of the related attributes:
>>> import email
>>> email
<module 'email' from '/home/barry/projects/python/pep-420/Lib/email/__init__.py'>
>>> m = type(email)('foo')
>>> m
<module 'foo'>
>>> m.__file__ = 'zippy:/de/do/dah'
>>> m
<module 'foo' from 'zippy:/de/do/dah'>
>>> class Loader: pass
...
>>> m.__loader__ = Loader
>>> del m.__file__
>>> m
<module 'foo' (<class '__main__.Loader'>)>
>>> class NewLoader:
...     @classmethod
...     def module_repr(cls, module):
...         return '<mystery module!>'
...
>>> m.__loader__ = NewLoader
>>> m
<mystery module!>
>>>
References
[1]
PEP 420 branch (http://hg.python.org/features/pep-420)
[3]
PyCon 2012 Namespace Package discussion outcome
(https://mail.python.org/pipermail/import-sig/2012-March/000421.html)
[4]
Alyssa Coghlan’s objection to the lack of marker files or directories
(https://mail.python.org/pipermail/import-sig/2012-March/000423.html)
[5]
Alyssa Coghlan’s response to her initial objections
(https://mail.python.org/pipermail/import-sig/2012-April/000464.html)
[6]
Martin v. Löwis’s suggestion to make encodings a namespace
package
(https://mail.python.org/pipermail/import-sig/2012-May/000540.html)
[7]
Use case for multiple portions per find_loader call
(https://mail.python.org/pipermail/import-sig/2012-May/000585.html)
[8] (1, 2)
Discussion about dynamic path computation
(https://mail.python.org/pipermail/python-dev/2012-May/119560.html)
Copyright
This document has been placed in the public domain.
| Final | PEP 420 – Implicit Namespace Packages | Standards Track | Namespace packages are a mechanism for splitting a single Python package
across multiple directories on disk. In current Python versions, an algorithm
to compute the package’s __path__ must be formulated. With the enhancement
proposed here, the import machinery itself will construct the list of
directories that make up the package. This PEP builds upon previous work,
documented in PEP 382 and PEP 402. Those PEPs have since been rejected in
favor of this one. An implementation of this PEP is at [1]. |
PEP 422 – Simpler customisation of class creation
Author:
Alyssa Coghlan <ncoghlan at gmail.com>,
Daniel Urban <urban.dani+py at gmail.com>
Status:
Withdrawn
Type:
Standards Track
Created:
05-Jun-2012
Python-Version:
3.5
Post-History:
05-Jun-2012, 10-Feb-2013
Table of Contents
Abstract
PEP Withdrawal
Background
Proposal
Key Benefits
Easier use of custom namespaces for a class
Easier inheritance of definition time behaviour
Reduced chance of metaclass conflicts
Integrates cleanly with PEP 3135
Replaces many use cases for dynamic setting of __metaclass__
Design Notes
Determining if the class being decorated is the base class
Replacing a class with a different kind of object
Open Questions
Is the namespace concept worth the extra complexity?
New Ways of Using Classes
Order preserving classes
Prepopulated namespaces
Cloning a prototype class
Extending a class
Rejected Design Options
Calling __autodecorate__ from type.__init__
Calling the automatic decoration hook __init_class__
Requiring an explicit decorator on __autodecorate__
Making __autodecorate__ implicitly static, like __new__
Passing in the namespace directly rather than a factory function
Reference Implementation
TODO
Copyright
Abstract
Currently, customising class creation requires the use of a custom metaclass.
This custom metaclass then persists for the entire lifecycle of the class,
creating the potential for spurious metaclass conflicts.
This PEP proposes to instead support a wide range of customisation
scenarios through a new namespace parameter in the class header, and
a new __autodecorate__ hook in the class body.
The new mechanism should be easier to understand and use than
implementing a custom metaclass, and thus should provide a gentler
introduction to the full power of Python’s metaclass machinery.
PEP Withdrawal
This proposal has been withdrawn in favour of Martin Teichmann’s proposal
in PEP 487, which achieves the same goals through a simpler, easier to use
__init_subclass__ hook that simply isn’t invoked for the base class
that defines the hook.
Background
For an already created class cls, the term “metaclass” has a clear
meaning: it is the value of type(cls).
During class creation, it has another meaning: it is also used to refer to
the metaclass hint that may be provided as part of the class definition.
While in many cases these two meanings end up referring to one and the same
object, there are two situations where that is not the case:
If the metaclass hint refers to an instance of type, then it is
considered as a candidate metaclass along with the metaclasses of all of
the parents of the class being defined. If a more appropriate metaclass is
found amongst the candidates, then it will be used instead of the one
given in the metaclass hint.
Otherwise, an explicit metaclass hint is assumed to be a factory function
and is called directly to create the class object. In this case, the final
metaclass will be determined by the factory function definition. In the
typical case (where the factory function just calls type, or, in
Python 3.3 or later, types.new_class) the actual metaclass is then
determined based on the parent classes.
It is notable that only the actual metaclass is inherited - a factory
function used as a metaclass hook sees only the class currently being
defined, and is not invoked for any subclasses.
In Python 3, the metaclass hint is provided using the metaclass=Meta
keyword syntax in the class header. This allows the __prepare__ method
on the metaclass to be used to create the locals() namespace used during
execution of the class body (for example, specifying the use of
collections.OrderedDict instead of a regular dict).
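For example, a metaclass along the following lines (a minimal sketch using
only existing Python 3 machinery) makes the class body execute in an
ordered namespace:

import collections

class OrderedMeta(type):
    @classmethod
    def __prepare__(mcls, name, bases, **kwds):
        # The mapping returned here becomes the namespace in which the
        # class body is executed.
        return collections.OrderedDict()

class Example(metaclass=OrderedMeta):
    a = 1
    b = 2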
In Python 2, there was no __prepare__ method (that API was added for
Python 3 by PEP 3115). Instead, a class body could set the __metaclass__
attribute, and the class creation process would extract that value from the
class namespace to use as the metaclass hint. There is published code that
makes use of this feature.
Another new feature in Python 3 is the zero-argument form of the super()
builtin, introduced by PEP 3135. This feature uses an implicit __class__
reference to the class being defined to replace the “by name” references
required in Python 2. Just as code invoked during execution of a Python 2
metaclass could not call methods that referenced the class by name (as the
name had not yet been bound in the containing scope), similarly, Python 3
metaclasses cannot call methods that rely on the implicit __class__
reference (as it is not populated until after the metaclass has returned
control to the class creation machinery).
Finally, when a class uses a custom metaclass, it can pose additional
challenges to the use of multiple inheritance, as a new class cannot
inherit from parent classes with unrelated metaclasses. This means that
it is impossible to add a metaclass to an already published class: such
an addition is a backwards incompatible change due to the risk of metaclass
conflicts.
Proposal
This PEP proposes that a new mechanism to customise class creation be
added to Python 3.4 that meets the following criteria:
Integrates nicely with class inheritance structures (including mixins and
multiple inheritance)
Integrates nicely with the implicit __class__ reference and
zero-argument super() syntax introduced by PEP 3135
Can be added to an existing base class without a significant risk of
introducing backwards compatibility problems
Restores the ability for class namespaces to have some influence on the
class creation process (above and beyond populating the namespace itself),
but potentially without the full flexibility of the Python 2 style
__metaclass__ hook
One mechanism that can achieve this goal is to add a new implicit class
decoration hook, modelled directly on the existing explicit class
decorators, but defined in the class body or in a parent class, rather than
being part of the class definition header.
Specifically, it is proposed that class definitions be able to provide a
class initialisation hook as follows:
class Example:
    def __autodecorate__(cls):
        # This is invoked after the class is created, but before any
        # explicit decorators are called
        # The usual super() mechanisms are used to correctly support
        # multiple inheritance. The class decorator style signature helps
        # ensure that invoking the parent class is as simple as possible.
        cls = super().__autodecorate__()
        return cls
To simplify the cooperative multiple inheritance case, object will gain
a default implementation of the hook that returns the class unmodified:
class object:
    def __autodecorate__(cls):
        return cls
If a metaclass wishes to block implicit class decoration for some reason, it
must arrange for cls.__autodecorate__ to trigger AttributeError.
If present on the created object, this new hook will be called by the class
creation machinery after the __class__ reference has been initialised.
For types.new_class(), it will be called as the last step before
returning the created class object. __autodecorate__ is implicitly
converted to a class method when the class is created (prior to the hook
being invoked).
Note, that when __autodecorate__ is called, the name of the class is not
yet bound to the new class object. As a consequence, the two argument form
of super() cannot be used to call methods (e.g., super(Example, cls)
wouldn’t work in the example above). However, the zero argument form of
super() works as expected, since the __class__ reference is already
initialised.
This general proposal is not a new idea (it was first suggested for
inclusion in the language definition more than 10 years ago, and a
similar mechanism has long been supported by Zope’s ExtensionClass),
but the situation has changed sufficiently in recent years that
the idea is worth reconsidering for inclusion as a native language feature.
In addition, the introduction of the metaclass __prepare__ method in PEP
3115 allows a further enhancement that was not possible in Python 2: this
PEP also proposes that type.__prepare__ be updated to accept a factory
function as a namespace keyword-only argument. If present, the value
provided as the namespace argument will be called without arguments
to create the result of type.__prepare__ instead of using a freshly
created dictionary instance. For example, the following will use
an ordered dictionary as the class namespace:
class OrderedExample(namespace=collections.OrderedDict):
    def __autodecorate__(cls):
        # cls.__dict__ is still a read-only proxy to the class namespace,
        # but the underlying storage is an OrderedDict instance
        ...
Note
This PEP, along with the existing ability to use __prepare__ to share a
single namespace amongst multiple class objects, highlights a possible
issue with the attribute lookup caching: when the underlying mapping is
updated by other means, the attribute lookup cache is not invalidated
correctly (this is a key part of the reason class __dict__ attributes
produce a read-only view of the underlying storage).
Since the optimisation provided by that cache is highly desirable,
the use of a preexisting namespace as the class namespace may need to
be declared as officially unsupported (since the observed behaviour is
rather strange when the caches get out of sync).
Key Benefits
Easier use of custom namespaces for a class
Currently, to use a different type (such as collections.OrderedDict) for
a class namespace, or to use a pre-populated namespace, it is necessary to
write and use a custom metaclass. With this PEP, using a custom namespace
becomes as simple as specifying an appropriate factory function in the
class header.
Easier inheritance of definition time behaviour
Understanding Python’s metaclasses requires a deep understanding of
the type system and the class construction process. This is legitimately
seen as challenging, due to the need to keep multiple moving parts (the code,
the metaclass hint, the actual metaclass, the class object, instances of the
class object) clearly distinct in your mind. Even when you know the rules,
it’s still easy to make a mistake if you’re not being extremely careful.
An earlier version of this PEP actually included such a mistake: it
stated “subclass of type” for a constraint that is actually “instance of
type”.
Understanding the proposed implicit class decoration hook only requires
understanding decorators and ordinary method inheritance, which isn’t
quite as daunting a task. The new hook provides a more gradual path
towards understanding all of the phases involved in the class definition
process.
Reduced chance of metaclass conflicts
One of the big issues that makes library authors reluctant to use metaclasses
(even when they would be appropriate) is the risk of metaclass conflicts.
These occur whenever two unrelated metaclasses are used by the desired
parents of a class definition. This risk also makes it very difficult to
add a metaclass to a class that has previously been published without one.
By contrast, adding an __autodecorate__ method to an existing type poses
a similar level of risk to adding an __init__ method: technically, there
is a risk of breaking poorly implemented subclasses, but when that occurs,
it is recognised as a bug in the subclass rather than the library author
breaching backwards compatibility guarantees. In fact, due to the constrained
signature of __autodecorate__, the risk in this case is actually even
lower than in the case of __init__.
Integrates cleanly with PEP 3135
Unlike code that runs as part of the metaclass, code that runs as part of
the new hook will be able to freely invoke class methods that rely on the
implicit __class__ reference introduced by PEP 3135, including methods
that use the zero argument form of super().
Replaces many use cases for dynamic setting of __metaclass__
For use cases that don’t involve completely replacing the defined class,
Python 2 code that dynamically set __metaclass__ can now dynamically
set __autodecorate__ instead. For more advanced use cases, introduction of
an explicit metaclass (possibly made available as a required base class) will
still be necessary in order to support Python 3.
Design Notes
Determining if the class being decorated is the base class
In the body of an __autodecorate__ method, as in any other class method,
__class__ will be bound to the class declaring the method, while the
value passed in may be a subclass.
This makes it relatively straightforward to skip processing the base class
if necessary:
class Example:
    def __autodecorate__(cls):
        cls = super().__autodecorate__()
        # Don't process the base class
        if cls is __class__:
            return cls
        # Process subclasses here
        ...
Replacing a class with a different kind of object
As an implicit decorator, __autodecorate__ is able to relatively easily
replace the defined class with a different kind of object. Technically
custom metaclasses and even __new__ methods can already do this
implicitly, but the decorator model makes such code much easier to understand
and implement.
class BuildDict:
    def __autodecorate__(cls):
        cls = super().__autodecorate__()
        # Don't process the base class
        if cls is __class__:
            return cls
        # Convert subclasses to ordinary dictionaries
        return cls.__dict__.copy()
It’s not clear why anyone would ever do this implicitly based on inheritance
rather than just using an explicit decorator, but the possibility seems worth
noting.
Open Questions
Is the namespace concept worth the extra complexity?
Unlike the new __autodecorate__ hook the proposed namespace keyword
argument is not automatically inherited by subclasses. Given the way this
proposal is currently written, the only way to get a special namespace used
consistently in subclasses is still to write a custom metaclass with a
suitable __prepare__ implementation.
Changing the custom namespace factory to also be inherited would
significantly increase the complexity of this proposal, and introduce a
number of the same potential base class conflict issues as arise with the
use of custom metaclasses.
Eric Snow has put forward a
separate proposal
to instead make the execution namespace for class bodies an ordered dictionary
by default, and capture the class attribute definition order for future
reference as an attribute (e.g. __definition_order__) on the class object.
Eric’s suggested approach may be a better choice for a new default behaviour
for type that combines well with the proposed __autodecorate__ hook,
leaving the more complex configurable namespace factory idea to a custom
metaclass like the one shown below.
New Ways of Using Classes
The new namespace keyword in the class header enables a number of
interesting options for controlling the way a class is initialised,
including some aspects of the object models of both Javascript and Ruby.
All of the examples below are actually possible today through the use of a
custom metaclass:
class CustomNamespace(type):
    @classmethod
    def __prepare__(meta, name, bases, *, namespace=None, **kwds):
        parent_namespace = super().__prepare__(name, bases, **kwds)
        return namespace() if namespace is not None else parent_namespace
    def __new__(meta, name, bases, ns, *, namespace=None, **kwds):
        return super().__new__(meta, name, bases, ns, **kwds)
    def __init__(cls, name, bases, ns, *, namespace=None, **kwds):
        return super().__init__(name, bases, ns, **kwds)
The advantage of implementing the new keyword directly in
type.__prepare__ is that the only persistent effect is then
the change in the underlying storage of the class attributes. The metaclass
of the class remains unchanged, eliminating many of the drawbacks
typically associated with these kinds of customisations.
Order preserving classes
class OrderedClass(namespace=collections.OrderedDict):
    a = 1
    b = 2
    c = 3
Prepopulated namespaces
seed_data = dict(a=1, b=2, c=3)
class PrepopulatedClass(namespace=seed_data.copy):
    pass
Cloning a prototype class
class NewClass(namespace=Prototype.__dict__.copy):
    pass
Extending a class
Note
Just because the PEP makes it possible to do this relatively
cleanly doesn’t mean anyone should do this!
from collections import MutableMapping

# The MutableMapping + dict combination should give something that
# generally behaves correctly as a mapping, while still being accepted
# as a class namespace
class ClassNamespace(MutableMapping, dict):
    def __init__(self, cls):
        self._cls = cls
    def __len__(self):
        return len(dir(self._cls))
    def __iter__(self):
        for attr in dir(self._cls):
            yield attr
    def __contains__(self, attr):
        return hasattr(self._cls, attr)
    def __getitem__(self, attr):
        return getattr(self._cls, attr)
    def __setitem__(self, attr, value):
        setattr(self._cls, attr, value)
    def __delitem__(self, attr):
        delattr(self._cls, attr)

def extend(cls):
    return lambda: ClassNamespace(cls)

class Example:
    pass

class ExtendedExample(namespace=extend(Example)):
    a = 1
    b = 2
    c = 3
>>> Example.a, Example.b, Example.c
(1, 2, 3)
Rejected Design Options
Calling __autodecorate__ from type.__init__
Calling the new hook automatically from type.__init__ would achieve most
of the goals of this PEP. However, using that approach would mean that
__autodecorate__ implementations would be unable to call any methods that
relied on the __class__ reference (or used the zero-argument form of
super()), and could not make use of those features themselves.
The current design instead ensures that the implicit decorator hook is able
to do anything an explicit decorator can do by running it after the initial
class creation is already complete.
Calling the automatic decoration hook __init_class__
Earlier versions of the PEP used the name __init_class__ for the name
of the new hook. There were three significant problems with this name:
it was hard to remember if the correct spelling was __init_class__ or
__class_init__
the use of “init” in the name suggested the signature should match that
of type.__init__, which is not the case
the use of “init” in the name suggested the method would be run as part
of initial class object creation, which is not the case
The new name __autodecorate__ was chosen to make it clear that the new
initialisation hook is most usefully thought of as an implicitly invoked
class decorator, rather than as being like an __init__ method.
Requiring an explicit decorator on __autodecorate__
Originally, this PEP required the explicit use of @classmethod on the
__autodecorate__ decorator. It was made implicit since there’s no
sensible interpretation for leaving it out, and that case would need to be
detected anyway in order to give a useful error message.
This decision was reinforced after noticing that the user experience of
defining __prepare__ and forgetting the @classmethod method
decorator is singularly incomprehensible (particularly since PEP 3115
documents it as an ordinary method, and the current documentation doesn’t
explicitly say anything one way or the other).
Making __autodecorate__ implicitly static, like __new__
While it accepts the class to be instantiated as the first argument,
__new__ is actually implicitly treated as a static method rather than
as a class method. This allows it to be readily extracted from its
defining class and called directly on a subclass, rather than being
coupled to the class object it is retrieved from.
Such behaviour initially appears to be potentially useful for the
new __autodecorate__ hook, as it would allow __autodecorate__
methods to readily be used as explicit decorators on other classes.
However, that apparent support would be an illusion as it would only work
correctly if invoked on a subclass, in which case the method can just as
readily be retrieved from the subclass and called that way. Unlike
__new__, there’s no issue with potentially changing method signatures at
different points in the inheritance chain.
Passing in the namespace directly rather than a factory function
At one point, this PEP proposed that the class namespace be passed
directly as a keyword argument, rather than passing a factory function.
However, this encourages an unsupported behaviour (that is, passing the
same namespace to multiple classes, or retaining direct write access
to a mapping used as a class namespace), so the API was switched to
the factory function version.
Reference Implementation
A reference implementation for __autodecorate__ has been posted to the
issue tracker. It uses the original __init_class__ naming, does not yet
allow the implicit decorator to replace the class with a different object, and
does not implement the suggested namespace parameter for
type.__prepare__.
TODO
address the 5 points in https://mail.python.org/pipermail/python-dev/2013-February/123970.html
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 422 – Simpler customisation of class creation | Standards Track | Currently, customising class creation requires the use of a custom metaclass.
This custom metaclass then persists for the entire lifecycle of the class,
creating the potential for spurious metaclass conflicts. |
PEP 424 – A method for exposing a length hint
Author:
Alex Gaynor <alex.gaynor at gmail.com>
Status:
Final
Type:
Standards Track
Created:
14-Jul-2012
Python-Version:
3.4
Post-History:
15-Jul-2012
Table of Contents
Abstract
Specification
Rationale
Copyright
Abstract
CPython currently defines a __length_hint__ method on several
types, such as various iterators. This method is then used by various
other functions (such as list) to presize lists based on the
estimate returned by __length_hint__. Types which are not sized,
and thus should not define __len__, can then define
__length_hint__, to allow estimating or computing a size (such as
many iterators).
Specification
This PEP formally documents __length_hint__ for other interpreters
and non-standard-library Python modules to implement.
__length_hint__ must return an integer (else a TypeError is
raised) or NotImplemented, and is not required to be accurate. It
may return a value that is either larger or smaller than the actual
size of the container. A return value of NotImplemented indicates
that there is no finite length estimate. It may not return a negative
value (else a ValueError is raised).
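For illustration (this example is not part of the PEP's specification), an
iterator might expose a hint as follows:

class CountDown:
    # An iterator that can estimate how many items remain.
    def __init__(self, n):
        self._remaining = n

    def __iter__(self):
        return self

    def __next__(self):
        if self._remaining <= 0:
            raise StopIteration
        self._remaining -= 1
        return self._remaining

    def __length_hint__(self):
        # The estimate does not have to be exact.
        return self._remaining

A consumer such as list(CountDown(5)) can then presize its result based on
the hint.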
In addition, a new function, operator.length_hint, is added,
with the following semantics (which define how __length_hint__
should be used):
def length_hint(obj, default=0):
    """Return an estimate of the number of items in obj.

    This is useful for presizing containers when building from an
    iterable.

    If the object supports len(), the result will be
    exact. Otherwise, it may over- or under-estimate by an
    arbitrary amount. The result will be an integer >= 0.
    """
    try:
        return len(obj)
    except TypeError:
        try:
            get_hint = type(obj).__length_hint__
        except AttributeError:
            return default
        try:
            hint = get_hint(obj)
        except TypeError:
            return default
        if hint is NotImplemented:
            return default
        if not isinstance(hint, int):
            raise TypeError("Length hint must be an integer, not %r" %
                            type(hint))
        if hint < 0:
            raise ValueError("__length_hint__() should return >= 0")
        return hint
Rationale
Being able to pre-allocate lists based on the expected size, as
estimated by __length_hint__, can be a significant optimization.
CPython has been observed to run some code faster than PyPy, purely
because of this optimization being present.
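A few illustrative calls to the new function (expected results shown in
comments):

from operator import length_hint

length_hint([1, 2, 3])           # 3: len() works, so the result is exact
length_hint(iter(range(100)))    # 100: taken from the iterator's __length_hint__
length_hint(object(), 42)        # 42: no len() and no hint, so the default is returned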
Copyright
This document has been placed into the public domain.
| Final | PEP 424 – A method for exposing a length hint | Standards Track | CPython currently defines a __length_hint__ method on several
types, such as various iterators. This method is then used by various
other functions (such as list) to presize lists based on the
estimate returned by __length_hint__. Types which are not sized,
and thus should not define __len__, can then define
__length_hint__, to allow estimating or computing a size (such as
many iterators). |
PEP 428 – The pathlib module – object-oriented filesystem paths
Author:
Antoine Pitrou <solipsis at pitrou.net>
Status:
Final
Type:
Standards Track
Created:
30-Jul-2012
Python-Version:
3.4
Post-History:
05-Oct-2012
Resolution:
Python-Dev message
Table of Contents
Abstract
Related work
Implementation
Why an object-oriented API
Proposal
Class hierarchy
No confusion with builtins
Immutability
Sane behaviour
Comparisons
Useful notations
Pure paths API
Definitions
Construction
Representing
Properties
Deriving new paths
Joining
Changing the path’s final component
Making the path relative
Sequence-like access
Querying
Concrete paths API
Constructing
File metadata
Path resolution
Directory walking
File opening
Filesystem modification
Discussion
Division operator
joinpath()
Case-sensitivity
Copyright
Abstract
This PEP proposes the inclusion of a third-party module, pathlib, in
the standard library. The inclusion is proposed under the provisional
label, as described in PEP 411. Therefore, API changes can be done,
either as part of the PEP process, or after acceptance in the standard
library (and until the provisional label is removed).
The aim of this library is to provide a simple hierarchy of classes to
handle filesystem paths and the common operations users do over them.
Related work
An object-oriented API for filesystem paths has already been proposed
and rejected in PEP 355. Several third-party implementations of the
idea of object-oriented filesystem paths exist in the wild:
The historical path.py module by Jason Orendorff, Jason R. Coombs
and others, which provides a str-subclassing Path class;
Twisted’s slightly specialized FilePath class;
An AlternativePathClass proposal, subclassing tuple rather than
str;
Unipath, a variation on the str-subclassing approach with two public
classes, an AbstractPath class for operations which don’t do I/O and a
Path class for all common operations.
This proposal attempts to learn from these previous attempts and the
rejection of PEP 355.
Implementation
The implementation of this proposal is tracked in the pep428 branch
of pathlib’s Mercurial repository.
Why an object-oriented API
The rationale to represent filesystem paths using dedicated classes is the
same as for other kinds of stateless objects, such as dates, times or IP
addresses. Python has been slowly moving away from strictly replicating
the C language’s APIs to providing better, more helpful abstractions around
all kinds of common functionality. Even if this PEP isn’t accepted, it is
likely that another form of filesystem handling abstraction will be adopted
one day into the standard library.
Indeed, many people will prefer handling dates and times using the high-level
objects provided by the datetime module, rather than using numeric
timestamps and the time module API. Moreover, using a dedicated class
makes it possible to enable desirable behaviours by default, for example the case
insensitivity of Windows paths.
Proposal
Class hierarchy
The pathlib module implements a simple hierarchy of classes:
                       +----------+
                       |          |
             ----------| PurePath |----------
             |         |          |         |
             |         +----------+         |
             |              |               |
             |              |               |
             v              |               v
     +---------------+      |      +-----------------+
     |               |      |      |                 |
     | PurePosixPath |      |      | PureWindowsPath |
     |               |      |      |                 |
     +---------------+      |      +-----------------+
             |              v               |
             |           +------+           |
             |           |      |           |
             |   --------| Path |--------   |
             |   |       |      |       |   |
             |   |       +------+       |   |
             |   |                      |   |
             v   v                      v   v
         +-----------+              +-------------+
         |           |              |             |
         | PosixPath |              | WindowsPath |
         |           |              |             |
         +-----------+              +-------------+
This hierarchy divides path classes along two dimensions:
a path class can be either pure or concrete: pure classes support only
operations that don’t need to do any actual I/O, which are most path
manipulation operations; concrete classes support all the operations
of pure classes, plus operations that do I/O.
a path class is of a given flavour according to the kind of operating
system paths it represents. pathlib implements two flavours: Windows
paths for the filesystem semantics embodied in Windows systems, POSIX
paths for other systems.
Any pure class can be instantiated on any system: for example, you can
manipulate PurePosixPath objects under Windows, PureWindowsPath
objects under Unix, and so on. However, concrete classes can only be
instantiated on a matching system: indeed, it would be error-prone to start
doing I/O with WindowsPath objects under Unix, or vice-versa.
Furthermore, there are two base classes which also act as system-dependent
factories: PurePath will instantiate either a PurePosixPath or a
PureWindowsPath depending on the operating system. Similarly, Path
will instantiate either a PosixPath or a WindowsPath.
It is expected that, in most uses, using the Path class is adequate,
which is why it has the shortest name of all.
No confusion with builtins
In this proposal, the path classes do not derive from a builtin type. This
contrasts with some other Path class proposals which were derived from
str. They also do not pretend to implement the sequence protocol:
if you want a path to act as a sequence, you have to look up a dedicated
attribute (the parts attribute).
The key reasoning behind not inheriting from str is to prevent accidentally
performing operations with a string representing a path and a string that
doesn’t, e.g. path + an_accident. Since operations with a string will not
necessarily lead to a valid or expected file system path, the classes follow
“explicit is better than implicit” by not subclassing str, thus avoiding
accidental operations with strings. A blog post by a Python core developer goes into more detail
on the reasons behind this specific design decision.
Immutability
Path objects are immutable, which makes them hashable and also prevents a
class of programming errors.
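For example (an illustrative session), because they are hashable, pure
paths can be used directly as dictionary keys or set members:

>>> p = PurePosixPath('setup.py')
>>> cache = {p: 'parsed'}
>>> PurePosixPath('setup.py') in cache
True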
Sane behaviour
Little of the functionality from os.path is reused. Many os.path functions
are tied by backwards compatibility to confusing or plain wrong behaviour
(for example, the fact that os.path.abspath() simplifies “..” path
components without resolving symlinks first).
Comparisons
Paths of the same flavour are comparable and orderable, whether pure or not:
>>> PurePosixPath('a') == PurePosixPath('b')
False
>>> PurePosixPath('a') < PurePosixPath('b')
True
>>> PurePosixPath('a') == PosixPath('a')
True
Comparing and ordering Windows path objects is case-insensitive:
>>> PureWindowsPath('a') == PureWindowsPath('A')
True
Paths of different flavours always compare unequal, and cannot be ordered:
>>> PurePosixPath('a') == PureWindowsPath('a')
False
>>> PurePosixPath('a') < PureWindowsPath('a')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unorderable types: PurePosixPath() < PureWindowsPath()
Paths compare unequal to, and are not orderable with instances of builtin
types (such as str) and any other types.
Useful notations
The API tries to provide useful notations all the while avoiding magic.
Some examples:
>>> p = Path('/home/antoine/pathlib/setup.py')
>>> p.name
'setup.py'
>>> p.suffix
'.py'
>>> p.root
'/'
>>> p.parts
('/', 'home', 'antoine', 'pathlib', 'setup.py')
>>> p.relative_to('/home/antoine')
PosixPath('pathlib/setup.py')
>>> p.exists()
True
Pure paths API
The philosophy of the PurePath API is to provide a consistent array of
useful path manipulation operations, without exposing a hodge-podge of
functions like os.path does.
Definitions
First a couple of conventions:
All paths can have a drive and a root. For POSIX paths, the drive is
always empty.
A relative path has neither drive nor root.
A POSIX path is absolute if it has a root. A Windows path is absolute if
it has both a drive and a root. A Windows UNC path (e.g.
\\host\share\myfile.txt) always has a drive and a root
(here, \\host\share and \, respectively).
A path which has either a drive or a root is said to be anchored.
Its anchor is the concatenation of the drive and root. Under POSIX,
“anchored” is the same as “absolute”.
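An illustrative session showing these notions for a Windows UNC path:

>>> p = PureWindowsPath('//host/share/myfile.txt')
>>> p.drive
'\\\\host\\share'
>>> p.root
'\\'
>>> p.anchor
'\\\\host\\share\\'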
Construction
We will present construction and joining together since they expose
similar semantics.
The simplest way to construct a path is to pass it its string representation:
>>> PurePath('setup.py')
PurePosixPath('setup.py')
Extraneous path separators and "." components are eliminated:
>>> PurePath('a///b/c/./d/')
PurePosixPath('a/b/c/d')
If you pass several arguments, they will be automatically joined:
>>> PurePath('docs', 'Makefile')
PurePosixPath('docs/Makefile')
Joining semantics are similar to os.path.join, in that anchored paths ignore
the information from the previously joined components:
>>> PurePath('/etc', '/usr', 'bin')
PurePosixPath('/usr/bin')
However, with Windows paths, the drive is retained as necessary:
>>> PureWindowsPath('c:/foo', '/Windows')
PureWindowsPath('c:/Windows')
>>> PureWindowsPath('c:/foo', 'd:')
PureWindowsPath('d:')
Also, path separators are normalized to the platform default:
>>> PureWindowsPath('a/b') == PureWindowsPath('a\\b')
True
Extraneous path separators and "." components are eliminated, but not
".." components:
>>> PurePosixPath('a//b/./c/')
PurePosixPath('a/b/c')
>>> PurePosixPath('a/../b')
PurePosixPath('a/../b')
Multiple leading slashes are treated differently depending on the path
flavour. They are always retained on Windows paths (because of the UNC
notation):
>>> PureWindowsPath('//some/path')
PureWindowsPath('//some/path/')
On POSIX, they are collapsed except if there are exactly two leading slashes,
which is a special case in the POSIX specification on pathname resolution
(this is also necessary for Cygwin compatibility):
>>> PurePosixPath('///some/path')
PurePosixPath('/some/path')
>>> PurePosixPath('//some/path')
PurePosixPath('//some/path')
Calling the constructor without any argument creates a path object pointing
to the logical “current directory” (without looking up its absolute path,
which is the job of the cwd() classmethod on concrete paths):
>>> PurePosixPath()
PurePosixPath('.')
Representing
To represent a path (e.g. to pass it to third-party libraries), just call
str() on it:
>>> p = PurePath('/home/antoine/pathlib/setup.py')
>>> str(p)
'/home/antoine/pathlib/setup.py'
>>> p = PureWindowsPath('c:/windows')
>>> str(p)
'c:\\windows'
To force the string representation with forward slashes, use the as_posix()
method:
>>> p.as_posix()
'c:/windows'
To get the bytes representation (which might be useful under Unix systems),
call bytes() on it, which internally uses os.fsencode():
>>> bytes(p)
b'/home/antoine/pathlib/setup.py'
To represent the path as a file: URI, call the as_uri() method:
>>> p = PurePosixPath('/etc/passwd')
>>> p.as_uri()
'file:///etc/passwd'
>>> p = PureWindowsPath('c:/Windows')
>>> p.as_uri()
'file:///c:/Windows'
The repr() of a path always uses forward slashes, even under Windows, for
readability and to remind users that forward slashes are ok:
>>> p = PureWindowsPath('c:/Windows')
>>> p
PureWindowsPath('c:/Windows')
Properties
Several simple properties are provided on every path (each can be empty):
>>> p = PureWindowsPath('c:/Downloads/pathlib.tar.gz')
>>> p.drive
'c:'
>>> p.root
'\\'
>>> p.anchor
'c:\\'
>>> p.name
'pathlib.tar.gz'
>>> p.stem
'pathlib.tar'
>>> p.suffix
'.gz'
>>> p.suffixes
['.tar', '.gz']
Deriving new paths
Joining
A path can be joined with another using the / operator:
>>> p = PurePosixPath('foo')
>>> p / 'bar'
PurePosixPath('foo/bar')
>>> p / PurePosixPath('bar')
PurePosixPath('foo/bar')
>>> 'bar' / p
PurePosixPath('bar/foo')
As with the constructor, multiple path components can be specified, either
collapsed or separately:
>>> p / 'bar/xyzzy'
PurePosixPath('foo/bar/xyzzy')
>>> p / 'bar' / 'xyzzy'
PurePosixPath('foo/bar/xyzzy')
A joinpath() method is also provided, with the same behaviour:
>>> p.joinpath('Python')
PurePosixPath('foo/Python')
Changing the path’s final component
The with_name() method returns a new path, with the name changed:
>>> p = PureWindowsPath('c:/Downloads/pathlib.tar.gz')
>>> p.with_name('setup.py')
PureWindowsPath('c:/Downloads/setup.py')
It fails with a ValueError if the path doesn’t have an actual name:
>>> p = PureWindowsPath('c:/')
>>> p.with_name('setup.py')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pathlib.py", line 875, in with_name
raise ValueError("%r has an empty name" % (self,))
ValueError: PureWindowsPath('c:/') has an empty name
>>> p.name
''
The with_suffix() method returns a new path with the suffix changed.
However, if the path has no suffix, the new suffix is added:
>>> p = PureWindowsPath('c:/Downloads/pathlib.tar.gz')
>>> p.with_suffix('.bz2')
PureWindowsPath('c:/Downloads/pathlib.tar.bz2')
>>> p = PureWindowsPath('README')
>>> p.with_suffix('.bz2')
PureWindowsPath('README.bz2')
Making the path relative
The relative_to() method computes the relative difference of a path to
another:
>>> PurePosixPath('/usr/bin/python').relative_to('/usr')
PurePosixPath('bin/python')
ValueError is raised if the method cannot return a meaningful value:
>>> PurePosixPath('/usr/bin/python').relative_to('/etc')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pathlib.py", line 926, in relative_to
.format(str(self), str(formatted)))
ValueError: '/usr/bin/python' does not start with '/etc'
Sequence-like access
The parts property returns a tuple providing read-only sequence access
to a path’s components:
>>> p = PurePosixPath('/etc/init.d')
>>> p.parts
('/', 'etc', 'init.d')
Windows paths handle the drive and the root as a single path component:
>>> p = PureWindowsPath('c:/setup.py')
>>> p.parts
('c:\\', 'setup.py')
(separating them would be wrong, since C: is not the parent of C:\\).
The parent property returns the logical parent of the path:
>>> p = PureWindowsPath('c:/python33/bin/python.exe')
>>> p.parent
PureWindowsPath('c:/python33/bin')
The parents property returns an immutable sequence of the path’s
logical ancestors:
>>> p = PureWindowsPath('c:/python33/bin/python.exe')
>>> len(p.parents)
3
>>> p.parents[0]
PureWindowsPath('c:/python33/bin')
>>> p.parents[1]
PureWindowsPath('c:/python33')
>>> p.parents[2]
PureWindowsPath('c:/')
Querying
is_relative() returns True if the path is relative (see definition
above), False otherwise.
is_reserved() returns True if a Windows path is a reserved path such
as CON or NUL. It always returns False for POSIX paths.
match() matches the path against a glob pattern. It operates on
individual parts and matches from the right:
>>> p = PurePosixPath('/usr/bin')
>>> p.match('/usr/b*')
True
>>> p.match('usr/b*')
True
>>> p.match('b*')
True
>>> p.match('/u*')
False
This behaviour respects the following expectations:
A simple pattern such as “*.py” matches arbitrarily long paths as long
as the last part matches, e.g. “/usr/foo/bar.py”.
Longer patterns can be used as well for more complex matching, e.g.
“/usr/foo/*.py” matches “/usr/foo/bar.py”.
Concrete paths API
In addition to the operations of the pure API, concrete paths provide
additional methods which actually access the filesystem to query or mutate
information.
Constructing
The classmethod cwd() creates a path object pointing to the current
working directory in absolute form:
>>> Path.cwd()
PosixPath('/home/antoine/pathlib')
File metadata
The stat() method returns the file’s stat() result; similarly, lstat()
returns the file’s lstat() result (which is different iff the file is a
symbolic link):
>>> p.stat()
posix.stat_result(st_mode=33277, st_ino=7483155, st_dev=2053, st_nlink=1, st_uid=500, st_gid=500, st_size=928, st_atime=1343597970, st_mtime=1328287308, st_ctime=1343597964)
Higher-level methods help examine the kind of the file:
>>> p.exists()
True
>>> p.is_file()
True
>>> p.is_dir()
False
>>> p.is_symlink()
False
>>> p.is_socket()
False
>>> p.is_fifo()
False
>>> p.is_block_device()
False
>>> p.is_char_device()
False
The file owner and group names (rather than numeric ids) are queried
through corresponding methods:
>>> p = Path('/etc/shadow')
>>> p.owner()
'root'
>>> p.group()
'shadow'
Path resolution
The resolve() method makes a path absolute, resolving any symlink on
the way (like the POSIX realpath() call). It is the only operation which
will remove “..” path components. On Windows, this method will also
take care to return the canonical path (with the right casing).
Directory walking
Simple (non-recursive) directory access is done by calling the iterdir()
method, which returns an iterator over the child paths:
>>> p = Path('docs')
>>> for child in p.iterdir(): child
...
PosixPath('docs/conf.py')
PosixPath('docs/_templates')
PosixPath('docs/make.bat')
PosixPath('docs/index.rst')
PosixPath('docs/_build')
PosixPath('docs/_static')
PosixPath('docs/Makefile')
This allows simple filtering through list comprehensions:
>>> p = Path('.')
>>> [child for child in p.iterdir() if child.is_dir()]
[PosixPath('.hg'), PosixPath('docs'), PosixPath('dist'), PosixPath('__pycache__'), PosixPath('build')]
Simple and recursive globbing is also provided:
>>> for child in p.glob('**/*.py'): child
...
PosixPath('test_pathlib.py')
PosixPath('setup.py')
PosixPath('pathlib.py')
PosixPath('docs/conf.py')
PosixPath('build/lib/pathlib.py')
File opening
The open() method provides a file opening API similar to the builtin
open() method:
>>> p = Path('setup.py')
>>> with p.open() as f: f.readline()
...
'#!/usr/bin/env python3\n'
Filesystem modification
Several common filesystem operations are provided as methods: touch(),
mkdir(), rename(), replace(), unlink(), rmdir(),
chmod(), lchmod(), symlink_to(). More operations could be
provided, for example some of the functionality of the shutil module.
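A brief illustrative session (assuming a “demo” directory does not already
exist) combining several of these methods:

>>> d = Path('demo')
>>> d.mkdir()
>>> f = d / 'notes.txt'
>>> f.touch()
>>> f.chmod(0o600)
>>> f.unlink()
>>> d.rmdir()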
Detailed documentation of the proposed API can be found at the pathlib
docs.
Discussion
Division operator
The division operator came out first in a poll about the path joining
operator. Initial versions of pathlib used square brackets
(i.e. __getitem__) instead.
joinpath()
The joinpath() method was initially called join(), but several people
objected that it could be confused with str.join() which has different
semantics. Therefore, it was renamed to joinpath().
Case-sensitivity
Windows users consider filesystem paths to be case-insensitive and expect
path objects to observe that characteristic, even though in some rare
situations some foreign filesystem mounts may be case-sensitive under
Windows.
In the words of one commenter,
“If glob("*.py") failed to find SETUP.PY on Windows, that would be a
usability disaster.” —Paul Moore in
https://mail.python.org/pipermail/python-dev/2013-April/125254.html
Copyright
This document has been placed into the public domain.
| Final | PEP 428 – The pathlib module – object-oriented filesystem paths | Standards Track | This PEP proposes the inclusion of a third-party module, pathlib, in
the standard library. The inclusion is proposed under the provisional
label, as described in PEP 411. Therefore, API changes can be done,
either as part of the PEP process, or after acceptance in the standard
library (and until the provisional label is removed). |
PEP 429 – Python 3.4 Release Schedule
Author:
Larry Hastings <larry at hastings.org>
Status:
Final
Type:
Informational
Topic:
Release
Created:
17-Oct-2012
Python-Version:
3.4
Table of Contents
Abstract
Release Manager and Crew
Release Schedule
Features for 3.4
Copyright
Abstract
This document describes the development and release schedule for
Python 3.4. The schedule primarily concerns itself with PEP-sized
items.
Release Manager and Crew
3.4 Release Manager: Larry Hastings
Windows installers: Martin v. Löwis
Mac installers: Ned Deily
Documentation: Georg Brandl
Release Schedule
Python 3.4 has now reached its end-of-life and has been retired.
No more releases will be made.
These are all the historical releases of Python 3.4,
including their release dates.
3.4.0 alpha 1: August 3, 2013
3.4.0 alpha 2: September 9, 2013
3.4.0 alpha 3: September 29, 2013
3.4.0 alpha 4: October 20, 2013
3.4.0 beta 1: November 24, 2013
3.4.0 beta 2: January 5, 2014
3.4.0 beta 3: January 26, 2014
3.4.0 candidate 1: February 10, 2014
3.4.0 candidate 2: February 23, 2014
3.4.0 candidate 3: March 9, 2014
3.4.0 final: March 16, 2014
3.4.1 candidate 1: May 5, 2014
3.4.1 final: May 18, 2014
3.4.2 candidate 1: September 22, 2014
3.4.2 final: October 6, 2014
3.4.3 candidate 1: February 8, 2015
3.4.3 final: February 25, 2015
3.4.4 candidate 1: December 6, 2015
3.4.4 final: December 20, 2015
3.4.5 candidate 1: June 12, 2016
3.4.5 final: June 26, 2016
3.4.6 candidate 1: January 2, 2017
3.4.6 final: January 17, 2017
3.4.7 candidate 1: July 25, 2017
3.4.7 final: August 9, 2017
3.4.8 candidate 1: January 23, 2018
3.4.8 final: February 4, 2018
3.4.9 candidate 1: July 19, 2018
3.4.9 final: August 2, 2018
3.4.10 candidate 1: March 4, 2019
3.4.10 final: March 18, 2019
Features for 3.4
Implemented / Final PEPs:
PEP 428, a “pathlib” module providing object-oriented filesystem paths
PEP 435, a standardized “enum” module
PEP 436, a build enhancement that will help generate introspection information for builtins
PEP 442, improved semantics for object finalization
PEP 443, adding single-dispatch generic functions to the standard library
PEP 445, a new C API for implementing custom memory allocators
PEP 446, changing file descriptors to not be inherited by default in subprocesses
PEP 450, a new “statistics” module
PEP 451, standardizing module metadata for Python’s module import system
PEP 453, a bundled installer for the pip package manager
PEP 454, a new “tracemalloc” module for tracing Python memory allocations
PEP 456, a new hash algorithm for Python strings and binary data
PEP 3154, a new and improved protocol for pickled objects
PEP 3156, a new “asyncio” module, a new framework for asynchronous I/O
Deferred to post-3.4:
PEP 431, improved support for time zone databases
PEP 441, improved Python zip application support
PEP 447, support for __locallookup__ metaclass method
PEP 448, additional unpacking generalizations
PEP 455, key transforming dictionary
Copyright
This document has been placed in the public domain.
| Final | PEP 429 – Python 3.4 Release Schedule | Informational | This document describes the development and release schedule for
Python 3.4. The schedule primarily concerns itself with PEP-sized
items. |
PEP 433 – Easier suppression of file descriptor inheritance
Author:
Victor Stinner <vstinner at python.org>
Status:
Superseded
Type:
Standards Track
Created:
10-Jan-2013
Python-Version:
3.4
Superseded-By:
446
Table of Contents
Abstract
Rationale
Status in Python 3.3
Inherited file descriptors issues
Security
Atomicity
Portability
Scope
Proposal
Alternatives
Inheritance enabled by default, default not configurable
Inheritance enabled by default, default can only be set to True
Disable inheritance by default
Close file descriptors after fork
open(): add “e” flag to mode
Bikeshedding on the name of the new parameter
Applications using inheritance of file descriptors
Performances
Implementation
os.get_cloexec(fd)
os.set_cloexec(fd, cloexec=True)
open()
os.dup()
os.dup2()
os.pipe()
socket.socket()
socket.socketpair()
socket.socket.accept()
Backward compatibility
Appendix: Operating system support
Windows
ioctl
fcntl
Atomic flags
Links
Footnotes
Copyright
Abstract
Add a new optional cloexec parameter on functions creating file
descriptors, add different ways to change default values of this
parameter, and add four new functions:
os.get_cloexec(fd)
os.set_cloexec(fd, cloexec=True)
sys.getdefaultcloexec()
sys.setdefaultcloexec(cloexec)
Rationale
A file descriptor has a close-on-exec flag which indicates if the file
descriptor will be inherited or not.
On UNIX, if the close-on-exec flag is set, the file descriptor is not
inherited: it will be closed at the execution of child processes;
otherwise the file descriptor is inherited by child processes.
On Windows, if the close-on-exec flag is set, the file descriptor is not
inherited; the file descriptor is inherited by child processes if the
close-on-exec flag is cleared and if CreateProcess() is called with
the bInheritHandles parameter set to TRUE (when
subprocess.Popen is created with close_fds=False for example).
Windows does not have a “close-on-exec” flag but an inheritance flag, which
is just the opposite value. For example, setting the close-on-exec flag
means clearing the HANDLE_FLAG_INHERIT flag of a handle.
Status in Python 3.3
On UNIX, the subprocess module closes file descriptors greater than 2 by
default since Python 3.2 [1]. All file descriptors
created by the parent process are automatically closed in the child
process.
xmlrpc.server.SimpleXMLRPCServer sets the close-on-exec flag of
the listening socket, whereas the parent class socketserver.TCPServer
does not set this flag.
There are other cases creating a subprocess or executing a new program
where file descriptors are not closed: functions of the os.spawn*()
and the os.exec*() families and third party modules calling
exec() or fork() + exec(). In this case, file descriptors
are shared between the parent and the child processes which is usually
unexpected and causes various issues.
This PEP proposes to continue the work started with the change in the
subprocess in Python 3.2, to fix the issue in any code, and not just
code using subprocess.
Inherited file descriptors issues
Closing the file descriptor in the parent process does not close the
related resource (file, socket, …) because it is still open in the
child process.
The listening socket of TCPServer is not closed on exec(): the child
process is able to get connections from new clients; if the parent closes
the listening socket and creates a new listening socket on the same
address, it would get an “address already in use” error.
Not closing file descriptors can lead to resource exhaustion: even if
the parent closes all files, creating a new file descriptor may fail
with “too many files” because files are still open in the child process.
See also the following issues:
Issue #2320: Race condition in subprocess using stdin (2008)
Issue #3006: subprocess.Popen causes socket to remain open after
close (2008)
Issue #7213: subprocess leaks open file descriptors between Popen
instances causing hangs (2009)
Issue #12786: subprocess wait() hangs when stdin is closed (2011)
Security
Leaking file descriptors is a major security vulnerability. An
untrusted child process can read sensitive data like passwords and
take control of the parent process through leaked file descriptors. It
is for example a known vulnerability to escape from a chroot.
See also the CERT recommendation:
FIO42-C. Ensure files are properly closed when they are no longer needed.
Example of vulnerabilities:
OpenSSH Security Advisory: portable-keysign-rand-helper.adv
(April 2011)
CWE-403: Exposure of File Descriptor to Unintended Control Sphere (2008)
Hijacking Apache https by mod_php (Dec 2003)
Apache: Apr should set FD_CLOEXEC if APR_FOPEN_NOCLEANUP is not set
(fixed in 2009)
PHP: system() (and similar) don’t cleanup opened handles of Apache (not fixed in January
2013)
Atomicity
Using fcntl() to set the close-on-exec flag is not safe in a
multithreaded application. If a thread calls fork() and exec()
between the creation of the file descriptor and the call to
fcntl(fd, F_SETFD, new_flags): the file descriptor will be
inherited by the child process. Modern operating systems offer
functions to set the flag during the creation of the file descriptor,
which avoids the race condition.
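A minimal UNIX-only sketch of the unsafe pattern described above (the file
name is arbitrary); another thread calling fork() and exec() between the two
steps would leak the descriptor into the child:

import fcntl
import os

# Step 1: the descriptor is created without the close-on-exec flag.
fd = os.open("/etc/hostname", os.O_RDONLY)

# ... another thread may fork() and exec() here: fd leaks into the child ...

# Step 2: the flag is set afterwards, too late for that child process.
flags = fcntl.fcntl(fd, fcntl.F_GETFD)
fcntl.fcntl(fd, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)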
Portability
Python 3.2 added socket.SOCK_CLOEXEC flag, Python 3.3 added
os.O_CLOEXEC flag and os.pipe2() function. It is already
possible to set atomically close-on-exec flag in Python 3.3 when
opening a file and creating a pipe or socket.
The problem is that these flags and functions are not portable: only
recent versions of operating systems support them. O_CLOEXEC and
SOCK_CLOEXEC flags are ignored by old Linux versions and so
FD_CLOEXEC flag must be checked using fcntl(fd, F_GETFD). If
the kernel ignores O_CLOEXEC or SOCK_CLOEXEC flag, a call to
fcntl(fd, F_SETFD, flags) is required to set close-on-exec flag.
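A minimal sketch of that check, assuming a UNIX platform running Python 3.3
or later (the file name is arbitrary):

import fcntl
import os

# Request close-on-exec atomically; old Linux kernels silently ignore it.
fd = os.open("/etc/hostname", os.O_RDONLY | os.O_CLOEXEC)

# Verify the flag and fall back to setting it explicitly if needed.
flags = fcntl.fcntl(fd, fcntl.F_GETFD)
if not flags & fcntl.FD_CLOEXEC:
    fcntl.fcntl(fd, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)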
Note
OpenBSD older than 5.2 does not close the file descriptor with
close-on-exec flag set if fork() is used before exec(), but
it works correctly if exec() is called without fork(). Try
openbsd_bug.py.
Scope
Applications still have to explicitly close file descriptors after a
fork(). The close-on-exec flag only closes file descriptors after
exec(), and so after fork() + exec().
This PEP only changes the close-on-exec flag of file descriptors
created by the Python standard library, or by modules using the
standard library. Third party modules not using the standard library
should be modified to conform to this PEP. The new
os.set_cloexec() function can be used for example.
Note
See Close file descriptors after fork for a possible solution
for fork() without exec().
Proposal
Add a new optional cloexec parameter on functions creating file
descriptors and different ways to change default value of this
parameter.
Add new functions:
os.get_cloexec(fd:int) -> bool: get the
close-on-exec flag of a file descriptor. Not available on all
platforms.
os.set_cloexec(fd:int, cloexec:bool=True): set or clear the
close-on-exec flag on a file descriptor. Not available on all
platforms.
sys.getdefaultcloexec() -> bool: get the current default value
of the cloexec parameter
sys.setdefaultcloexec(cloexec: bool): set the default value
of the cloexec parameter
Add a new optional cloexec parameter to:
asyncore.dispatcher.create_socket()
io.FileIO
io.open()
open()
os.dup()
os.dup2()
os.fdopen()
os.open()
os.openpty()
os.pipe()
select.devpoll()
select.epoll()
select.kqueue()
socket.socket()
socket.socket.accept()
socket.socket.dup()
socket.socket.fromfd
socket.socketpair()
The default value of the cloexec parameter is
sys.getdefaultcloexec().
Add a new command line option -e and an environment variable
PYTHONCLOEXEC to set the close-on-exec flag by default.
subprocess clears the close-on-exec flag of file descriptors of the
pass_fds parameter.
All functions creating file descriptors in the standard library must
respect the default value of the cloexec parameter:
sys.getdefaultcloexec().
File descriptors 0 (stdin), 1 (stdout) and 2 (stderr) are expected to be
inherited, but Python does not handle them differently. When
os.dup2() is used to replace standard streams, cloexec=False
must be specified explicitly.
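Purely as an illustration of the proposal (none of the parameters or functions
below exist in any released Python, since this PEP was superseded), user code
might have looked like this:

import os
import sys

# Hypothetical: opt in to close-on-exec by default for the whole process.
sys.setdefaultcloexec(True)

# Hypothetical: this descriptor picks up the new default.
fd = os.open("app.log", os.O_WRONLY | os.O_CREAT)

# Hypothetical: explicitly request an inheritable pipe.
rfd, wfd = os.pipe(cloexec=False)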
Drawbacks of the proposal:
It is no longer possible to know whether the close-on-exec flag will be
set on a newly created file descriptor just by reading the source code.
If the inheritance of a file descriptor matters, the cloexec
parameter must now be specified explicitly, or the library or the
application will not work depending on the default value of the
cloexec parameter.
Alternatives
Inheritance enabled by default, default not configurable
Add a new optional parameter cloexec on functions creating file
descriptors. The default value of the cloexec parameter is False,
and this default cannot be changed. Inheritance enabled by default also
matches the default behaviour on POSIX and on Windows. This alternative is
the most conservative option.
This option does not solve issues listed in the Rationale
section, it only provides a helper to fix them. All functions creating
file descriptors have to be modified to set cloexec=True in each
module used by an application to fix all these issues.
Inheritance enabled by default, default can only be set to True
This alternative is based on the proposal: the only difference is that
sys.setdefaultcloexec() does not take any argument, it can only be
used to set the default value of the cloexec parameter to True.
Disable inheritance by default
This alternative is based on the proposal: the only difference is that
the default value of the cloexec parameter is True (instead of
False).
If a file must be inherited by child processes, cloexec=False
parameter can be used.
Advantages of setting close-on-exec flag by default:
There are far more programs that are bitten by FD inheritance upon
exec (see Inherited file descriptors issues and Security)
than programs relying on it (see Applications using inheritance of
file descriptors).
Drawbacks of setting close-on-exec flag by default:
It violates the principle of least surprise. Developers using the
os module may expect that Python respects the POSIX standard and so
that close-on-exec flag is not set by default.
The os module is written as a thin wrapper to system calls (to
functions of the C standard library). If atomic flags to set
close-on-exec flag are not supported (see Appendix: Operating
system support), a single Python function call may call 2 or 3
system calls (see Performances section).
Extra system calls, if any, may slow down Python: see
Performances.
Backward compatibility: only a few programs rely on inheritance of file
descriptors, and they only pass a few file descriptors, usually just
one. These programs will fail immediately with EBADF error, and it
will be simple to fix them: add cloexec=False parameter or use
os.set_cloexec(fd, False).
The subprocess module will be changed anyway to clear
close-on-exec flag on file descriptors listed in the pass_fds
parameter of the Popen constructor. So it is possible that these programs will
not need any fix if they use the subprocess module.
Close file descriptors after fork
This PEP does not fix issues with applications using fork()
without exec(). Python needs a generic mechanism to register
callbacks which would be called after a fork, see #16500:
Add an atfork module. Such registry could be used to close file
descriptors just after a fork().
Drawbacks:
It does not solve the problem on Windows: fork() does not exist
on Windows
This alternative does not solve the problem for programs using
exec() without fork().
A third party module may call the C function fork() directly,
which will not call “atfork” callbacks.
All functions creating file descriptors must be changed to register
a callback and then unregister their callback when the file is
closed. Or a list of all open file descriptors must be
maintained.
The operating system is a better place than Python to automatically
close file descriptors. For example, it is not easy to
avoid a race condition between closing the file and unregistering
the callback closing the file.
open(): add “e” flag to mode
A new “e” mode would set close-on-exec flag (best-effort).
This alternative only solves the problem for open().
socket.socket() and os.pipe() do not have a mode parameter for
example.
Since its version 2.7, the GNU libc supports "e" flag for
fopen(). It uses O_CLOEXEC if available, or uses fcntl(fd,
F_SETFD, FD_CLOEXEC). With Visual Studio, fopen() accepts a “N”
flag which uses O_NOINHERIT.
Bikeshedding on the name of the new parameter
inherit, inherited: closer to Windows definition
sensitive
sterile: “Does not produce offspring.”
Applications using inheritance of file descriptors
Most developers don’t know that file descriptors are inherited by
default. Most programs do not rely on inheritance of file descriptors.
For example, subprocess.Popen was changed in Python 3.2 to close
all file descriptors greater than 2 in the child process by default.
No user has complained about this behavior change yet.
Network servers using fork may want to pass the client socket to the
child process. For example, on UNIX a CGI server passes the client
socket through file descriptors 0 (stdin) and 1 (stdout) using
dup2().
To access a restricted resource like creating a socket listening on a
TCP port lower than 1024 or reading a file containing sensitive data
like passwords, a common practice is: start as the root user, create a
file descriptor, create a child process, drop privileges (ex: change the
current user), pass the file descriptor to the child process and exit
the parent process.
Security is very important in such a use case: leaking another file
descriptor would be a critical security vulnerability (see Security).
The root process may not exit but instead monitor the child process,
restarting a new child process and passing the same file descriptor if
the previous child process crashed.
Example of programs taking file descriptors from the parent process
using a command line option:
gpg: --status-fd <fd>, --logger-fd <fd>, etc.
openssl: -pass fd:<fd>
qemu: -add-fd <fd>
valgrind: --log-fd=<fd>, --input-fd=<fd>, etc.
xterm: -S <fd>
On Linux, it is possible to use "/dev/fd/<fd>" filename to pass a
file descriptor to a program expecting a filename.
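With the current subprocess module, deliberately handing a descriptor to a
child process looks roughly like the following sketch (the worker command and
its option are invented):

import os
import subprocess

rfd, wfd = os.pipe()
# pass_fds keeps rfd open in the child even with close_fds=True.
proc = subprocess.Popen(["worker", "--input-fd", str(rfd)],
                        pass_fds=[rfd], close_fds=True)
os.close(rfd)              # the parent only writes to the pipe
os.write(wfd, b"hello\n")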
Performances
Setting close-on-exec flag may require additional system calls for
each creation of new file descriptors. The number of additional system
calls depends on the method used to set the flag:
O_NOINHERIT: no additional system call
O_CLOEXEC: one additional system call, but only at the creation
of the first file descriptor, to check if the flag is supported. If
the flag is not supported, Python has to fallback to the next method.
ioctl(fd, FIOCLEX): one additional system call per file
descriptor
fcntl(fd, F_SETFD, flags): two additional system calls per file
descriptor, one to get old flags and one to set new flags
On Linux, setting the close-on-exec flag has a low performance overhead.
Results of
bench_cloexec.py
on Linux 3.6:
close-on-exec flag not set: 7.8 us
O_CLOEXEC: 1% slower (7.9 us)
ioctl(): 3% slower (8.0 us)
fcntl(): 3% slower (8.0 us)
Implementation
os.get_cloexec(fd)
Get the close-on-exec flag of a file descriptor.
Pseudo-code:
if os.name == 'nt':
    def get_cloexec(fd):
        handle = _winapi._get_osfhandle(fd)
        flags = _winapi.GetHandleInformation(handle)
        return not (flags & _winapi.HANDLE_FLAG_INHERIT)
else:
    try:
        import fcntl
    except ImportError:
        pass
    else:
        def get_cloexec(fd):
            flags = fcntl.fcntl(fd, fcntl.F_GETFD)
            return bool(flags & fcntl.FD_CLOEXEC)
os.set_cloexec(fd, cloexec=True)
Set or clear the close-on-exec flag on a file descriptor. The flag
is set after the creation of the file descriptor and so it is not
atomic.
Pseudo-code:
if os.name == 'nt':
    def set_cloexec(fd, cloexec=True):
        handle = _winapi._get_osfhandle(fd)
        mask = _winapi.HANDLE_FLAG_INHERIT
        if cloexec:
            flags = 0
        else:
            flags = mask
        _winapi.SetHandleInformation(handle, mask, flags)
else:
    fcntl = None
    ioctl = None
    try:
        import ioctl
    except ImportError:
        try:
            import fcntl
        except ImportError:
            pass
    if ioctl is not None and hasattr(ioctl, 'FIOCLEX'):
        def set_cloexec(fd, cloexec=True):
            if cloexec:
                ioctl.ioctl(fd, ioctl.FIOCLEX)
            else:
                ioctl.ioctl(fd, ioctl.FIONCLEX)
    elif fcntl is not None:
        def set_cloexec(fd, cloexec=True):
            flags = fcntl.fcntl(fd, fcntl.F_GETFD)
            if cloexec:
                flags |= fcntl.FD_CLOEXEC
            else:
                flags &= ~fcntl.FD_CLOEXEC
            fcntl.fcntl(fd, fcntl.F_SETFD, flags)
ioctl is preferred over fcntl because it requires only one syscall,
instead of two syscalls for fcntl.
Note
fcntl(fd, F_SETFD, flags) only supports one flag
(FD_CLOEXEC), so it would be possible to avoid fcntl(fd,
F_GETFD). But it may drop other flags in the future, and so it is
safer to keep the two function calls.
Note
fopen() function of the GNU libc ignores the error if
fcntl(fd, F_SETFD, flags) failed.
open()
Windows: open() with O_NOINHERIT flag [atomic]
open() with O_CLOEXEC flag [atomic]
open() + os.set_cloexec(fd, True) [best-effort]
os.dup()
Windows: DuplicateHandle() [atomic]
fcntl(fd, F_DUPFD_CLOEXEC) [atomic]
dup() + os.set_cloexec(fd, True) [best-effort]
os.dup2()
fcntl(fd, F_DUP2FD_CLOEXEC, fd2) [atomic]
dup3() with O_CLOEXEC flag [atomic]
dup2() + os.set_cloexec(fd, True) [best-effort]
os.pipe()
Windows: CreatePipe() with
SECURITY_ATTRIBUTES.bInheritHandle=FALSE, or _pipe() with
O_NOINHERIT flag [atomic]
pipe2() with O_CLOEXEC flag [atomic]
pipe() + os.set_cloexec(fd, True) [best-effort]
socket.socket()
Windows: WSASocket() with WSA_FLAG_NO_HANDLE_INHERIT flag
[atomic]
socket() with SOCK_CLOEXEC flag [atomic]
socket() + os.set_cloexec(fd, True) [best-effort]
socket.socketpair()
socketpair() with SOCK_CLOEXEC flag [atomic]
socketpair() + os.set_cloexec(fd, True) [best-effort]
socket.socket.accept()
accept4() with SOCK_CLOEXEC flag [atomic]
accept() + os.set_cloexec(fd, True) [best-effort]
Backward compatibility
There is no backward incompatible change. The default behaviour is
unchanged: the close-on-exec flag is not set by default.
Appendix: Operating system support
Windows
Windows has an O_NOINHERIT flag: “Do not inherit in child
processes”.
For example, it is supported by open() and _pipe().
The flag can be cleared using
SetHandleInformation(fd, HANDLE_FLAG_INHERIT, 0).
CreateProcess() has a bInheritHandles parameter: if it is
FALSE, the handles are not inherited. If it is TRUE, handles
with HANDLE_FLAG_INHERIT flag set are inherited.
subprocess.Popen uses the close_fds option to define
bInheritHandles.
ioctl
Functions:
ioctl(fd, FIOCLEX, 0): set the close-on-exec flag
ioctl(fd, FIONCLEX, 0): clear the close-on-exec flag
Availability: Linux, Mac OS X, QNX, NetBSD, OpenBSD, FreeBSD.
fcntl
Functions:
flags = fcntl(fd, F_GETFD); fcntl(fd, F_SETFD, flags | FD_CLOEXEC):
set the close-on-exec flag
flags = fcntl(fd, F_GETFD); fcntl(fd, F_SETFD, flags & ~FD_CLOEXEC):
clear the close-on-exec flag
Availability: AIX, Digital UNIX, FreeBSD, HP-UX, IRIX, Linux, Mac OS
X, OpenBSD, Solaris, SunOS, Unicos.
Atomic flags
New flags:
O_CLOEXEC: available on Linux (2.6.23), FreeBSD (8.3),
OpenBSD 5.0, Solaris 11, QNX, BeOS, next NetBSD release (6.1?).
This flag is part of POSIX.1-2008.
SOCK_CLOEXEC flag for socket() and socketpair(),
available on Linux 2.6.27, OpenBSD 5.2, NetBSD 6.0.
WSA_FLAG_NO_HANDLE_INHERIT flag for WSASocket(): supported
on Windows 7 with SP1, Windows Server 2008 R2 with SP1, and later
fcntl(): F_DUPFD_CLOEXEC flag, available on Linux 2.6.24,
OpenBSD 5.0, FreeBSD 9.1, NetBSD 6.0, Solaris 11. This flag is part
of POSIX.1-2008.
fcntl(): F_DUP2FD_CLOEXEC flag, available on FreeBSD 9.1
and Solaris 11.
recvmsg(): MSG_CMSG_CLOEXEC, available on Linux 2.6.23,
NetBSD 6.0.
On Linux older than 2.6.23, O_CLOEXEC flag is simply ignored. So
we have to check that the flag is supported by calling fcntl(). If
it does not work, we have to set the flag using ioctl() or
fcntl().
On Linux older than 2.6.27, if the SOCK_CLOEXEC flag is set in the
socket type, socket() or socketpair() fail and errno is set
to EINVAL.
On Windows XP SP3, WSASocket() fails with WSAEPROTOTYPE when the
WSA_FLAG_NO_HANDLE_INHERIT flag is used.
New functions:
dup3(): available on Linux 2.6.27 (and glibc 2.9)
pipe2(): available on Linux 2.6.27 (and glibc 2.9)
accept4(): available on Linux 2.6.28 (and glibc 2.10)
If accept4() is called on Linux older than 2.6.28, accept4()
returns -1 (fail) and errno is set to ENOSYS.
Links
Links:
Secure File Descriptor Handling (Ulrich Drepper,
2008)
win32_support.py of the Tornado project:
emulate fcntl(fd, F_SETFD, FD_CLOEXEC) using
SetHandleInformation(fd, HANDLE_FLAG_INHERIT, 1)
LKML: [PATCH] nextfd(2)
Python issues:
#10115: Support accept4() for atomic setting of flags at socket
creation
#12105: open() does not able to set flags, such as O_CLOEXEC
#12107: TCP listening sockets created without FD_CLOEXEC flag
#16500: Add an atfork module
#16850: Add “e” mode to open(): close-and-exec
(O_CLOEXEC) / O_NOINHERIT
#16860: Use O_CLOEXEC in the tempfile module
#17036: Implementation of the PEP 433
#16946: subprocess: _close_open_fd_range_safe() does not set
close-on-exec flag on Linux < 2.6.23 if O_CLOEXEC is defined
#17070: PEP 433: Use the new cloexec to improve security and avoid
bugs
Other languages:
Perl sets the close-on-exec flag on newly created file descriptors if
their number is greater than $SYSTEM_FD_MAX ($^F).
See $SYSTEM_FD_MAX documentation. Perl does
this since the creation of Perl (it was already present in Perl 1).
Ruby: Set FD_CLOEXEC for all fds (except 0, 1, 2)
Ruby: O_CLOEXEC flag missing for Kernel::open: the
commit was reverted later
OCaml: PR#5256: Processes opened using Unix.open_process* inherit
all opened file descriptors (including sockets). OCaml has a
Unix.set_close_on_exec function.
Footnotes
[1]
On UNIX since Python 3.2, subprocess.Popen()
closes all file descriptors by default: close_fds=True. It
closes file descriptors in range 3 inclusive to local_max_fd
exclusive, where local_max_fd is fcntl(0, F_MAXFD) on
NetBSD, or sysconf(_SC_OPEN_MAX) otherwise. If the error pipe
has a descriptor smaller than 3, ValueError is raised.
Copyright
This document has been placed in the public domain.
| Superseded | PEP 433 – Easier suppression of file descriptor inheritance | Standards Track | Add a new optional cloexec parameter on functions creating file
descriptors, add different ways to change default values of this
parameter, and add four new functions: |
PEP 435 – Adding an Enum type to the Python standard library
Author:
Barry Warsaw <barry at python.org>,
Eli Bendersky <eliben at gmail.com>,
Ethan Furman <ethan at stoneleaf.us>
Status:
Final
Type:
Standards Track
Created:
23-Feb-2013
Python-Version:
3.4
Post-History:
23-Feb-2013, 02-May-2013
Replaces:
354
Resolution:
Python-Dev message
Table of Contents
Abstract
Status of discussions
Motivation
Module and type name
Proposed semantics for the new enumeration type
Creating an Enum
Programmatic access to enumeration members
Duplicating enum members and values
Comparisons
Allowed members and attributes of enumerations
Restricted subclassing of enumerations
IntEnum
Other derived enumerations
Pickling
Functional API
Proposed variations
flufl.enum
Not having to specify values for enums
Using special names or forms to auto-assign enum values
Use-cases in the standard library
Acknowledgments
References
Copyright
Abstract
This PEP proposes adding an enumeration type to the Python standard library.
An enumeration is a set of symbolic names bound to unique, constant values.
Within an enumeration, the values can be compared by identity, and the
enumeration itself can be iterated over.
Status of discussions
The idea of adding an enum type to Python is not new - PEP 354 is a
previous attempt that was rejected in 2005. Recently a new set of discussions
was initiated [3] on the python-ideas mailing list. Many new ideas were
proposed in several threads; after a lengthy discussion Guido proposed adding
flufl.enum to the standard library [4]. During the PyCon 2013 language
summit the issue was discussed further. It became clear that many developers
want to see an enum that subclasses int, which can allow us to replace
many integer constants in the standard library by enums with friendly string
representations, without ceding backwards compatibility. An additional
discussion among several interested core developers led to the proposal of
having IntEnum as a special case of Enum.
The key dividing issue between Enum and IntEnum is whether comparing
to integers is semantically meaningful. For most uses of enumerations, it’s
a feature to reject comparison to integers; enums that compare to integers
lead, through transitivity, to comparisons between enums of unrelated types,
which isn’t desirable in most cases. For some uses, however, greater
interoperability with integers is desired. For instance, this is the case for
replacing existing standard library constants (such as socket.AF_INET)
with enumerations.
Further discussion in late April 2013 led to the conclusion that enumeration
members should belong to the type of their enum: type(Color.red) == Color.
Guido has pronounced a decision on this issue [5], as well as related issues
of not allowing to subclass enums [6], unless they define no enumeration
members [7].
The PEP was accepted by Guido on May 10th, 2013 [1].
Motivation
[Based partly on the Motivation stated in PEP 354]
The properties of an enumeration are useful for defining an immutable, related
set of constant values that may or may not have a semantic meaning. Classic
examples are days of the week (Sunday through Saturday) and school assessment
grades (‘A’ through ‘D’, and ‘F’). Other examples include error status values
and states within a defined process.
It is possible to simply define a sequence of values of some other basic type,
such as int or str, to represent discrete arbitrary values. However,
an enumeration ensures that such values are distinct from any others including,
importantly, values within other enumerations, and that operations without
meaning (“Wednesday times two”) are not defined for these values. It also
provides a convenient printable representation of enum values without requiring
tedious repetition while defining them (i.e. no GREEN = 'green').
Module and type name
We propose to add a module named enum to the standard library. The main
type exposed by this module is Enum. Hence, to import the Enum type
user code will run:
>>> from enum import Enum
Proposed semantics for the new enumeration type
Creating an Enum
Enumerations are created using the class syntax, which makes them easy to read
and write. An alternative creation method is described in Functional API.
To define an enumeration, subclass Enum as follows:
>>> from enum import Enum
>>> class Color(Enum):
... red = 1
... green = 2
... blue = 3
A note on nomenclature: we call Color an enumeration (or enum)
and Color.red, Color.green are enumeration members (or
enum members). Enumeration members also have values (the value of
Color.red is 1, etc.)
Enumeration members have human readable string representations:
>>> print(Color.red)
Color.red
…while their repr has more information:
>>> print(repr(Color.red))
<Color.red: 1>
The type of an enumeration member is the enumeration it belongs to:
>>> type(Color.red)
<Enum 'Color'>
>>> isinstance(Color.green, Color)
True
>>>
Enums also have a property that contains just their item name:
>>> print(Color.red.name)
red
Enumerations support iteration, in definition order:
>>> class Shake(Enum):
... vanilla = 7
... chocolate = 4
... cookies = 9
... mint = 3
...
>>> for shake in Shake:
... print(shake)
...
Shake.vanilla
Shake.chocolate
Shake.cookies
Shake.mint
Enumeration members are hashable, so they can be used in dictionaries and sets:
>>> apples = {}
>>> apples[Color.red] = 'red delicious'
>>> apples[Color.green] = 'granny smith'
>>> apples
{<Color.red: 1>: 'red delicious', <Color.green: 2>: 'granny smith'}
Programmatic access to enumeration members
Sometimes it’s useful to access members in enumerations programmatically (i.e.
situations where Color.red won’t do because the exact color is not known
at program-writing time). Enum allows such access:
>>> Color(1)
<Color.red: 1>
>>> Color(3)
<Color.blue: 3>
If you want to access enum members by name, use item access:
>>> Color['red']
<Color.red: 1>
>>> Color['green']
<Color.green: 2>
Duplicating enum members and values
Having two enum members with the same name is invalid:
>>> class Shape(Enum):
... square = 2
... square = 3
...
Traceback (most recent call last):
...
TypeError: Attempted to reuse key: square
However, two enum members are allowed to have the same value. Given two members
A and B with the same value (and A defined first), B is an alias to A. By-value
lookup of the value of A and B will return A. By-name lookup of B will also
return A:
>>> class Shape(Enum):
... square = 2
... diamond = 1
... circle = 3
... alias_for_square = 2
...
>>> Shape.square
<Shape.square: 2>
>>> Shape.alias_for_square
<Shape.square: 2>
>>> Shape(2)
<Shape.square: 2>
Iterating over the members of an enum does not provide the aliases:
>>> list(Shape)
[<Shape.square: 2>, <Shape.diamond: 1>, <Shape.circle: 3>]
The special attribute __members__ is an ordered dictionary mapping names
to members. It includes all names defined in the enumeration, including the
aliases:
>>> for name, member in Shape.__members__.items():
... name, member
...
('square', <Shape.square: 2>)
('diamond', <Shape.diamond: 1>)
('circle', <Shape.circle: 3>)
('alias_for_square', <Shape.square: 2>)
The __members__ attribute can be used for detailed programmatic access to
the enumeration members. For example, finding all the aliases:
>>> [name for name, member in Shape.__members__.items() if member.name != name]
['alias_for_square']
Comparisons
Enumeration members are compared by identity:
>>> Color.red is Color.red
True
>>> Color.red is Color.blue
False
>>> Color.red is not Color.blue
True
Ordered comparisons between enumeration values are not supported. Enums are
not integers (but see IntEnum below):
>>> Color.red < Color.blue
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unorderable types: Color() < Color()
Equality comparisons are defined though:
>>> Color.blue == Color.red
False
>>> Color.blue != Color.red
True
>>> Color.blue == Color.blue
True
Comparisons against non-enumeration values will always compare not equal
(again, IntEnum was explicitly designed to behave differently, see
below):
>>> Color.blue == 2
False
Allowed members and attributes of enumerations
The examples above use integers for enumeration values. Using integers is
short and handy (and provided by default by the Functional API), but not
strictly enforced. In the vast majority of use-cases, one doesn’t care what
the actual value of an enumeration is. But if the value is important,
enumerations can have arbitrary values.
Enumerations are Python classes, and can have methods and special methods as
usual. If we have this enumeration:
class Mood(Enum):
funky = 1
happy = 3
def describe(self):
# self is the member here
return self.name, self.value
def __str__(self):
return 'my custom str! {0}'.format(self.value)
@classmethod
def favorite_mood(cls):
# cls here is the enumeration
return cls.happy
Then:
>>> Mood.favorite_mood()
<Mood.happy: 3>
>>> Mood.happy.describe()
('happy', 3)
>>> str(Mood.funky)
'my custom str! 1'
The rules for what is allowed are as follows: all attributes defined within an
enumeration will become members of this enumeration, with the exception of
__dunder__ names and descriptors [9]; methods are descriptors too.
Restricted subclassing of enumerations
Subclassing an enumeration is allowed only if the enumeration does not define
any members. So this is forbidden:
>>> class MoreColor(Color):
... pink = 17
...
TypeError: Cannot extend enumerations
But this is allowed:
>>> class Foo(Enum):
... def some_behavior(self):
... pass
...
>>> class Bar(Foo):
... happy = 1
... sad = 2
...
The rationale for this decision was given by Guido in [6]. Allowing to
subclass enums that define members would lead to a violation of some
important invariants of types and instances. On the other hand, it
makes sense to allow sharing some common behavior between a group of
enumerations, and subclassing empty enumerations is also used to implement
IntEnum.
IntEnum
A variation of Enum is proposed which is also a subclass of int.
Members of an IntEnum can be compared to integers; by extension,
integer enumerations of different types can also be compared to each other:
>>> from enum import IntEnum
>>> class Shape(IntEnum):
... circle = 1
... square = 2
...
>>> class Request(IntEnum):
... post = 1
... get = 2
...
>>> Shape == 1
False
>>> Shape.circle == 1
True
>>> Shape.circle == Request.post
True
However they still can’t be compared to Enum:
>>> class Shape(IntEnum):
... circle = 1
... square = 2
...
>>> class Color(Enum):
... red = 1
... green = 2
...
>>> Shape.circle == Color.red
False
IntEnum values behave like integers in other ways you’d expect:
>>> int(Shape.circle)
1
>>> ['a', 'b', 'c'][Shape.circle]
'b'
>>> [i for i in range(Shape.square)]
[0, 1]
For the vast majority of code, Enum is strongly recommended,
since IntEnum breaks some semantic promises of an enumeration (by
being comparable to integers, and thus by transitivity to other
unrelated enumerations). It should be used only in special cases where
there’s no other choice; for example, when integer constants are
replaced with enumerations and backwards compatibility is required
with code that still expects integers.
Other derived enumerations
IntEnum will be part of the enum module. However, it would be very
simple to implement independently:
class IntEnum(int, Enum):
pass
This demonstrates how similar derived enumerations can be defined, for example
a StrEnum that mixes in str instead of int.
Some rules:
When subclassing Enum, mix-in types must appear before Enum itself in the
sequence of bases, as in the IntEnum example above.
While Enum can have members of any type, once you mix in an additional
type, all the members must have values of that type, e.g. int above.
This restriction does not apply to mix-ins which only add methods
and don’t specify another data type such as int or str.
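For example, a hypothetical StrEnum (not part of the proposed module) could be
defined and used like this:
>>> class StrEnum(str, Enum):
...     pass
...
>>> class Grade(StrEnum):
...     excellent = 'A'
...     poor = 'F'
...
>>> Grade.excellent == 'A'
True
>>> Grade.poor.lower()
'f'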
Pickling
Enumerations can be pickled and unpickled:
>>> from enum.tests.fruit import Fruit
>>> from pickle import dumps, loads
>>> Fruit.tomato is loads(dumps(Fruit.tomato))
True
The usual restrictions for pickling apply: picklable enums must be defined in
the top level of a module, since unpickling requires them to be importable
from that module.
Functional API
The Enum class is callable, providing the following functional API:
>>> Animal = Enum('Animal', 'ant bee cat dog')
>>> Animal
<Enum 'Animal'>
>>> Animal.ant
<Animal.ant: 1>
>>> Animal.ant.value
1
>>> list(Animal)
[<Animal.ant: 1>, <Animal.bee: 2>, <Animal.cat: 3>, <Animal.dog: 4>]
The semantics of this API resemble namedtuple. The first argument
of the call to Enum is the name of the enumeration. Pickling enums
created with the functional API will work on CPython and PyPy, but for
IronPython and Jython you may need to specify the module name explicitly
as follows:
>>> Animals = Enum('Animals', 'ant bee cat dog', module=__name__)
The second argument is the source of enumeration member names. It can be a
whitespace-separated string of names, a sequence of names, a sequence of
2-tuples with key/value pairs, or a mapping (e.g. dictionary) of names to
values. The last two options enable assigning arbitrary values to
enumerations; the others auto-assign increasing integers starting with 1. A
new class derived from Enum is returned. In other words, the above
assignment to Animal is equivalent to:
>>> class Animals(Enum):
... ant = 1
... bee = 2
... cat = 3
... dog = 4
The reason for defaulting to 1 as the starting number and not 0 is
that 0 is False in a boolean sense, but enum members all evaluate
to True.
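For instance, arbitrary values can be assigned through the 2-tuple or mapping
forms (an illustrative example):
>>> Animal = Enum('Animal', [('ant', 'a'), ('bee', 'b')])
>>> Animal.bee
<Animal.bee: 'b'>
>>> Style = Enum('Style', {'plain': 1, 'bold': 2})
>>> Style.bold
<Style.bold: 2>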
Proposed variations
Some variations were proposed during the discussions in the mailing list.
Here’s some of the more popular ones.
flufl.enum
flufl.enum was the reference implementation upon which this PEP was
originally based. Eventually, it was decided against the inclusion of
flufl.enum because its design separated enumeration members from
enumerations, so the former are not instances of the latter. Its design
also explicitly permits subclassing enumerations for extending them with
more members (due to the member/enum separation, the type invariants are not
violated in flufl.enum with such a scheme).
Not having to specify values for enums
Michael Foord proposed (and Tim Delaney provided a proof-of-concept
implementation) to use metaclass magic that makes this possible:
class Color(Enum):
red, green, blue
The values actually get assigned only when first looked up.
Pros: cleaner syntax that requires less typing for a very common task (just
listing enumeration names without caring about the values).
Cons: involves much magic in the implementation, which makes even the
definition of such enums baffling when first seen. Besides, explicit is
better than implicit.
Using special names or forms to auto-assign enum values
A different approach to avoid specifying enum values is to use a special name
or form to auto assign them. For example:
class Color(Enum):
red = None # auto-assigned to 0
green = None # auto-assigned to 1
blue = None # auto-assigned to 2
More flexibly:
class Color(Enum):
red = 7
green = None # auto-assigned to 8
blue = 19
purple = None # auto-assigned to 20
Some variations on this theme:
A special name auto imported from the enum package.
Georg Brandl proposed ellipsis (...) instead of None to achieve the
same effect.
Pros: no need to manually enter values. Makes it easier to change the enum and
extend it, especially for large enumerations.
Cons: actually longer to type in many simple cases. The argument of explicit
vs. implicit applies here as well.
Use-cases in the standard library
The Python standard library has many places where the usage of enums would be
beneficial to replace other idioms currently used to represent them. Such
usages can be divided into two categories: user-code facing constants, and
internal constants.
User-code facing constants like os.SEEK_*, socket module constants,
decimal rounding modes and HTML error codes could require backwards
compatibility since user code may expect integers. IntEnum as described
above provides the required semantics; being a subclass of int, it does not
affect user code that expects integers, while on the other hand allowing
printable representations for enumeration values:
>>> import socket
>>> family = socket.AF_INET
>>> family == 2
True
>>> print(family)
SocketFamily.AF_INET
Internal constants are not seen by user code but are employed internally by
stdlib modules. These can be implemented with Enum. Some examples
uncovered by a very partial skim through the stdlib: binhex, imaplib,
http/client, urllib/robotparser, idlelib, concurrent.futures,
turtledemo.
In addition, looking at the code of the Twisted library, there are many use
cases for replacing internal state constants with enums. The same can be said
about a lot of networking code (especially implementation of protocols) and
can be seen in test protocols written with the Tulip library as well.
Acknowledgments
This PEP initially proposed including the flufl.enum package [8]
by Barry Warsaw in the stdlib, and is inspired in large part by it.
Ben Finney is the author of the earlier enumeration PEP 354.
References
[1]
https://mail.python.org/pipermail/python-dev/2013-May/126112.html
[3]
https://mail.python.org/pipermail/python-ideas/2013-January/019003.html
[4]
https://mail.python.org/pipermail/python-ideas/2013-February/019373.html
[5]
To make enums behave similarly to Python classes like bool, and
behave in a more intuitive way. It would be surprising if the type of
Color.red would not be Color. (Discussion in
https://mail.python.org/pipermail/python-dev/2013-April/125687.html)
[6] (1, 2, 3)
Subclassing enums and adding new members creates an unresolvable
situation; on one hand MoreColor.red and Color.red should
not be the same object, and on the other isinstance checks become
confusing if they are not. The discussion also links to Stack Overflow
discussions that make additional arguments.
(https://mail.python.org/pipermail/python-dev/2013-April/125716.html)
[7]
It may be useful to have a class defining some behavior (methods, with
no actual enumeration members) mixed into an enum, and this would not
create the problem discussed in [6]. (Discussion in
https://mail.python.org/pipermail/python-dev/2013-May/125859.html)
[8]
http://pythonhosted.org/flufl.enum/
[9]
http://docs.python.org/3/howto/descriptor.html
Copyright
This document has been placed in the public domain.
| Final | PEP 435 – Adding an Enum type to the Python standard library | Standards Track | This PEP proposes adding an enumeration type to the Python standard library. |
PEP 437 – A DSL for specifying signatures, annotations and argument converters
Author:
Stefan Krah <skrah at bytereef.org>
Status:
Rejected
Type:
Standards Track
Created:
11-Mar-2013
Python-Version:
3.4
Post-History:
Resolution:
Python-Dev message
Table of Contents
Abstract
Rejection Notice
Rationale
Scope
DSL overview
Type safety and annotations
Include/converters.h
Function specifications
Keyword arguments
Define block
Declaration
C-declarations
Cleanup
Output
Positional-only arguments
Left and right optional arguments
Flexibility in formatting
Benefits of a compact notation
Easy validation of the definition
Reference implementation
Grammar
Comparison with PEP 436
Copyright
References and Footnotes
Abstract
The Python C-API currently has no mechanism for specifying and auto-generating
function signatures, annotations or custom argument converters.
There are several possible approaches to the problem. Cython uses cdef
definitions in .pyx files to generate the required information. However,
CPython’s C-API functions often require additional initialization and
cleanup snippets that would be hard to specify in a cdef.
PEP 436 proposes a domain specific language (DSL) enclosed in C comments
that largely resembles a per-parameter configuration file. A preprocessor
reads the comment and emits an argument parsing function, docstrings and
a header for the function that utilizes the results of the parsing step.
The latter function is subsequently referred to as the implementation
function.
Rejection Notice
This PEP was rejected by Guido van Rossum at PyCon US 2013. However, several
of the specific issues raised by this PEP were taken into account when
designing the second iteration of the PEP 436 DSL.
Rationale
Opinions differ regarding the suitability of the PEP 436 DSL in the context
of a C file. This PEP proposes an alternative DSL. The specific issues with
PEP 436 that spurred the counter proposal will be explained in the final
section of this PEP.
Scope
The PEP focuses exclusively on the DSL. Topics like the output locations of
docstrings or the generated code are outside the scope of this PEP.
It is however vital that the DSL is suitable for generating custom argument
parsers, a feature that is already implemented in Cython. Therefore, one of
the goals of this PEP is to keep the DSL close to existing solutions, thus
facilitating a possible inclusion of the relevant parts of Cython into the
CPython source tree.
DSL overview
Type safety and annotations
A conversion from a Python to a C value is fully defined by the type of
the converter function. The PyArg_Parse* family of functions accepts
custom converters in addition to the well-known default converters “i”,
“f”, etc.
This PEP views the default converters as abstract functions, regardless
of how they are actually implemented.
Include/converters.h
Converter functions must be forward-declared. All converter functions
shall be entered into the file Include/converters.h. The file is read
by the preprocessor prior to translating .c files. This is an excerpt:
/*[converter]
##### Default converters #####
"s": str -> const char *res;
"s*": [str, bytes, bytearray, rw_buffer] -> Py_buffer &res;
[...]
"es#": str -> (const char *res_encoding, char **res, Py_ssize_t *res_length);
[...]
##### Custom converters #####
path_converter: [str, bytes, int] -> path_t &res;
OS_STAT_DIR_FD_CONVERTER: [int, None] -> int res;
[converter_end]*/
Converters are specified by their name, Python input type(s) and C output
type(s). Default converters must have quoted names, custom converters must
have regular names. A Python type is given by its name. If a function accepts
multiple Python types, the set is written in list form.
Since the default converters may have multiple implicit return values,
the C output type(s) are written according to the following convention:
The main return value must be named res. This is a placeholder for
the actual variable name given later in the DSL. Additional implicit
return values must be prefixed by res_.
By default the variables are passed by value to the implementation function.
If the address should be passed instead, res must be prefixed with an
ampersand.
Additional declarations may be placed into .c files. Duplicate declarations
are allowed as long as the function types are identical.
It is encouraged to declare custom converter types a second time right
above the converter function definition. The preprocessor will then catch
any mismatch between the declarations.
In order to keep the converter complexity manageable, PY_SSIZE_T_CLEAN will
be deprecated and Py_ssize_t will be assumed for all length arguments.
TBD: Make a list of fantasy types like rw_buffer.
Function specifications
Keyword arguments
This example contains the definition of os.stat. The individual sections will
be explained in detail. Grammatically, the whole define block consists of a
function specification and an output section. The function specification in
turn consists of a declaration section, an optional C-declaration section and
an optional cleanup code section. Sections within the function specification
are separated in yacc style by ‘%%’:
/*[define posix_stat]
def os.stat(path: path_converter, *, dir_fd: OS_STAT_DIR_FD_CONVERTER = None,
follow_symlinks: "p" = True) -> os.stat_result: pass
%%
path_t path = PATH_T_INITIALIZE("stat", 0, 1);
int dir_fd = DEFAULT_DIR_FD;
int follow_symlinks = 1;
%%
path_cleanup(&path);
[define_end]*/
<literal C output>
/*[define_output_end]*/
Define block
The function specification block starts with a /*[define token, followed
by an optional C function name, followed by a right bracket. If the C function
name is not given, it is generated from the declaration name. In the example,
omitting the name posix_stat would result in a C function name of os_stat.
Declaration
The required declaration is (almost) a valid Python function definition. The
‘def’ keyword and the function body are redundant, but the author of this PEP
finds the definition more readable if they are present.
The function name may be a path instead of a plain identifier. Each argument
is annotated with the name of the converter function that will be applied to it.
Default values are given in the usual Python manner and may be any valid
Python expression.
The return value may be any Python expression. Usually it will be the name
of an object, but alternative return values could be specified in list form.
C-declarations
This optional section contains C variable declarations. Since the converter
functions have been declared beforehand, the preprocessor can type-check
the declarations.
Cleanup
The optional cleanup section contains literal C code that will be inserted
unmodified after the implementation function.
Output
The output section contains the code emitted by the preprocessor.
Positional-only arguments
Functions that do not take keyword arguments are indicated by the presence
of the slash special parameter:
/*[define stat_float_times]
def os.stat_float_times(/, newval: "i") -> os.stat_result: pass
%%
int newval = -1;
[define_end]*/
The preprocessor translates this definition to a PyArg_ParseTuple() call.
All arguments to the right of the slash are optional arguments.
Left and right optional arguments
Some legacy functions contain optional arguments groups both to the left and
right of a central parameter. It is debatable whether a new tool should support
such functions. For completeness’ sake, this is the proposed syntax:
/*[define]
def curses.window.addch(y: "i", x: "i", ch: "O", attr: "l") -> None: pass
where groups = [[ch], [ch, attr], [y, x, ch], [y, x, ch, attr]]
[define_end]*/
Here ch is the central parameter, attr can optionally be added on the
right, and the group [y, x] can optionally be added on the left.
Essentially the rule is that all ordered combinations of the central
parameter and the optional groups must be possible such that no two
combinations have the same length.
This is concisely expressed by putting the central parameter first in
the list and subsequently adding the optional arguments groups to the
left and right.
Flexibility in formatting
If the above os.stat example is considered too compact, it can easily be
formatted this way:
/*[define posix_stat]
def os.stat(path: path_converter,
*,
dir_fd: OS_STAT_DIR_FD_CONVERTER = None,
follow_symlinks: "p" = True)
-> os.stat_result: pass
%%
path_t path = PATH_T_INITIALIZE("stat", 0, 1);
int dir_fd = DEFAULT_DIR_FD;
int follow_symlinks = 1;
%%
path_cleanup(&path);
[define_end]*/
<literal C output>
/*[define_output_end]*/
Benefits of a compact notation
The advantages of a concise notation are especially obvious when a large
number of parameters is involved. The argument parsing part of
_posixsubprocess.fork_exec is fully specified by this definition:
/*[define subprocess_fork_exec]
def _posixsubprocess.fork_exec(
process_args: "O", executable_list: "O",
close_fds: "p", py_fds_to_keep: "O",
cwd_obj: "O", env_list: "O",
p2cread: "i", p2cwrite: "i", c2pread: "i", c2pwrite: "i",
errread: "i", errwrite: "i", errpipe_read: "i", errpipe_write: "i",
restore_signals: "i", call_setsid: "i", preexec_fn: "i", /) -> int: pass
[define_end]*/
Note that the preprocess tool currently emits a redundant C-declaration
section for this example, so the output is longer than necessary.
Easy validation of the definition
How can an inexperienced user validate a definition like os.stat? Simply
by changing os.stat to os_stat, defining missing converters and pasting
the definition into the Python interactive interpreter!
In fact, a converters.py module could be auto-generated from converters.h.
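A rough sketch of that validation trick, with dummy converters standing in for
the real ones (all names below are placeholders):

import os

# Dummy converters so that the annotations resolve; a converters.py
# module auto-generated from converters.h could provide the real ones.
def path_converter(arg):
    return arg

def OS_STAT_DIR_FD_CONVERTER(arg):
    return arg

# With os.stat renamed to os_stat, the DSL declaration is a valid Python
# function definition and can be pasted into the interactive interpreter.
def os_stat(path: path_converter, *, dir_fd: OS_STAT_DIR_FD_CONVERTER = None,
            follow_symlinks: "p" = True) -> os.stat_result: pass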
Reference implementation
A reference implementation is available at issue 16612. Since this PEP
was written under time constraints and the author is unfamiliar with the
PLY toolchain, the software is written in Standard ML and utilizes the
ml-yacc/ml-lex toolchain.
The grammar is conflict-free and available in ml-yacc readable BNF form.
Two tools are available:
printsemant reads a converter header and a .c file and dumps
the semantically checked parse tree to stdout.
preprocess reads a converter header and a .c file and dumps
the preprocessed .c file to stdout.
Known deficiencies:
The Python ‘test’ expression is not semantically checked. The syntax
however is checked since it is part of the grammar.
The lexer does not handle triple quoted strings.
C declarations are parsed in a primitive way. The final implementation
should utilize ‘declarator’ and ‘init-declarator’ from the C grammar.
The preprocess tool does not emit code for the left-and-right optional
arguments case. The printsemant tool can deal with this case.
Since the preprocess tool generates the output from the parse
tree, the original indentation of the define block is lost.
Grammar
TBD: The grammar exists in ml-yacc readable form, but should probably be
included here in EBNF notation.
Comparison with PEP 436
The author of this PEP has the following concerns about the DSL proposed
in PEP 436:
The whitespace sensitive configuration file like syntax looks out
of place in a C file.
The structure of the function definition gets lost in the per-parameter
specifications. Keywords like positional-only, required and keyword-only
are scattered across too many different places. By contrast, in the alternative DSL the structure of the function
definition can be understood at a single glance.
The PEP 436 DSL has 14 documented flags and at least one undocumented
(allow_fd) flag. Figuring out which of the 2**15 possible combinations
are valid places an unnecessary burden on the user. Experience with the PEP 3118 buffer flags has shown that sorting out
(and exhaustively testing!) valid combinations is an extremely tedious
task. The PEP 3118 flags are still not well understood by many people.
By contrast, the alternative DSL has a central file Include/converters.h
that can be quickly searched for the desired converter. Many of the
converters are already known, perhaps even memorized by people (due
to frequent use).
The PEP 436 DSL allows too much freedom. Types can apparently be omitted,
the preprocessor accepts (and ignores) unknown keywords, sometimes adding
white space after a docstring results in an assertion error. The alternative DSL on the other hand allows no such freedoms. Omitting
converter or return value annotations is plainly a syntax error. The
LALR(1) grammar is unambiguous and specified for the complete translation
unit.
Copyright
This document is licensed under the Open Publication License.
References and Footnotes
| Rejected | PEP 437 – A DSL for specifying signatures, annotations and argument converters | Standards Track | The Python C-API currently has no mechanism for specifying and auto-generating
function signatures, annotations or custom argument converters. |
PEP 446 – Make newly created file descriptors non-inheritable
Author:
Victor Stinner <vstinner at python.org>
Status:
Final
Type:
Standards Track
Created:
05-Aug-2013
Python-Version:
3.4
Replaces:
433
Table of Contents
Abstract
Rationale
Inheritance of File Descriptors
Inheritance of File Descriptors on Windows
Only Inherit Some Handles on Windows
Inheritance of File Descriptors on UNIX
Issues with Inheritable File Descriptors
Security Vulnerability
Issues fixed in the subprocess module
Atomic Creation of non-inheritable File Descriptors
Status of Python 3.3
Closing All Open File Descriptors
Proposal
Non-inheritable File Descriptors
New Functions And Methods
Other Changes
Backward Compatibility
Related Work
Rejected Alternatives
Add a new open_noinherit() function
PEP 433
Python Issues
Copyright
Abstract
Leaking file descriptors in child processes causes various annoying
issues and is a known major security vulnerability. Using the
subprocess module with the close_fds parameter set to True is
not possible in all cases.
This PEP proposes to make all file descriptors created by Python
non-inheritable by default to reduce the risk of these issues. This PEP
also fixes a race condition in multi-threaded applications on operating
systems supporting atomic flags to create non-inheritable file
descriptors.
We are aware of the code breakage this is likely to cause, and doing it
anyway for the good of mankind. (Details in the section “Backward
Compatibility” below.)
Rationale
Inheritance of File Descriptors
Each operating system handles the inheritance of file descriptors
differently. Windows creates non-inheritable handles by default, whereas
UNIX and the POSIX API on Windows create inheritable file descriptors by
default. Python prefers the POSIX API over the native Windows API, to
have a single code base and to use the same type for file descriptors,
and so it creates inheritable file descriptors.
There is one exception: os.pipe() creates non-inheritable pipes on
Windows, whereas it creates inheritable pipes on UNIX. The reason is an
implementation artifact: os.pipe() calls CreatePipe() on Windows
(native API), whereas it calls pipe() on UNIX (POSIX API). The call
to CreatePipe() was added in Python in 1994, before the introduction
of pipe() in the POSIX API in Windows 98. Issue #4708 proposes to change
os.pipe() on Windows to create inheritable pipes.
Inheritance of File Descriptors on Windows
On Windows, the native type of file objects is handles (C type
HANDLE). These handles have a HANDLE_FLAG_INHERIT flag which
defines if a handle can be inherited in a child process or not. For the
POSIX API, the C runtime (CRT) also provides file descriptors (C type
int). The handle of a file descriptor can be retrieved using the
function _get_osfhandle(fd). A file descriptor can be created from a
handle using the function _open_osfhandle(handle).
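From Python, the same conversions are exposed by the msvcrt module (Windows
only); a minimal sketch, with a placeholder file name:
import os
import msvcrt  # Windows-only module

fd = os.open("example.txt", os.O_RDONLY)          # "example.txt" is a placeholder
handle = msvcrt.get_osfhandle(fd)                 # CRT file descriptor -> Windows HANDLE
fd2 = msvcrt.open_osfhandle(handle, os.O_RDONLY)  # wrap an existing HANDLE in a new fd
# Note: fd and fd2 now refer to the same underlying handle.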
Using CreateProcess(),
handles are only inherited if their inheritable flag
(HANDLE_FLAG_INHERIT) is set and the bInheritHandles
parameter of CreateProcess() is TRUE; all file descriptors
except standard streams (0, 1, 2) are closed in the child process, even
if bInheritHandles is TRUE. Using the spawnv() function, all
inheritable handles and all inheritable file descriptors are inherited
in the child process. This function uses the undocumented fields
cbReserved2 and lpReserved2 of the STARTUPINFO
structure to pass an array of file descriptors.
To replace standard streams (stdin, stdout, stderr) using
CreateProcess(), the STARTF_USESTDHANDLES flag must be set in
the dwFlags field of the STARTUPINFO structure and the
bInheritHandles parameter of CreateProcess() must be set to
TRUE. So when at least one standard stream is replaced, all
inheritable handles are inherited by the child process.
The default value of the close_fds parameter of subprocess.Popen
is True (bInheritHandles=FALSE) if stdin, stdout and
stderr parameters are None, False (bInheritHandles=TRUE)
otherwise.
See also:
Handle Inheritance
Stackoverflow: Can TCP SOCKET handles be set not inheritable?
Only Inherit Some Handles on Windows
Since Windows Vista, CreateProcess() supports an extension of the
STARTUPINFO structure: the STARTUPINFOEX structure.
Using this new structure, it is possible to specify a list of handles to
inherit: PROC_THREAD_ATTRIBUTE_HANDLE_LIST. Read Programmatically
controlling which handles are inherited by new processes in Win32
(Raymond Chen, Dec 2011) for more information.
Before Windows Vista, it is possible to make handles inheritable and
call CreateProcess() with bInheritHandles=TRUE. This option
works if all other handles are non-inheritable. There is a race
condition: if another thread calls CreateProcess() with
bInheritHandles=TRUE, handles will also be inherited in the second
process.
Microsoft suggests using a lock to avoid the race condition: read
Q315939: PRB: Child Inherits Unintended Handles During CreateProcess
Call (last review:
November 2006). The Python issue #16500 “Add an atfork module” proposes to add such a lock,
which can be used to make handles non-inheritable without the race condition. Such
a lock only protects against a race condition between Python threads; C
threads are not protected.
Another option is to duplicate handles that must be inherited, passing the
values of the duplicated handles to the child process, so the child
process can steal duplicated handles using DuplicateHandle()
with DUPLICATE_CLOSE_SOURCE. Handle values change between the
parent and the child process because the handles are duplicated (twice);
the parent and/or the child process must be adapted to handle this
change. If the child program cannot be modified, an intermediate program
can be used to steal handles from the parent process before spawning the
final child program. The intermediate program has to pass the handle from the
child process to the parent process. The parent may have to close
duplicated handles if all handles were not stolen, for example if the
intermediate process fails. If the command line is used to pass the
handle values, the command line must be modified when handles are
duplicated, because their values are modified.
This PEP does not include a solution to this problem because there is no
perfect solution working on all Windows versions. This point is deferred
until use cases relying on handle or file descriptor inheritance on
Windows are well known, so we can choose the best solution and carefully
test its implementation.
Inheritance of File Descriptors on UNIX
POSIX provides a close-on-exec flag on file descriptors to automatically
close a file descriptor when the C function execv() is
called. File descriptors with the close-on-exec flag cleared are
inherited in the child process, file descriptors with the flag set are
closed in the child process.
The flag can be set in two syscalls (one to get current flags, a second
to set new flags) using fcntl():
int flags, res;
flags = fcntl(fd, F_GETFD);
if (flags == -1) { /* handle the error */ }
flags |= FD_CLOEXEC;
/* or "flags &= ~FD_CLOEXEC;" to clear the flag */
res = fcntl(fd, F_SETFD, flags);
if (res == -1) { /* handle the error */ }
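The same two-syscall sequence is available from Python via the fcntl module
(UNIX only); a minimal sketch:
import fcntl
import os

r, w = os.pipe()                                          # any file descriptor will do
flags = fcntl.fcntl(r, fcntl.F_GETFD)                     # first syscall: read current flags
fcntl.fcntl(r, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)   # second syscall: set close-on-exec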
FreeBSD, Linux, Mac OS X, NetBSD, OpenBSD and QNX also support setting
the flag in a single syscall using ioctl():
int res;
res = ioctl(fd, FIOCLEX, 0);
if (res == -1) { /* handle the error */ }
NOTE: The close-on-exec flag has no effect on fork(): all file
descriptors are inherited by the child process. The Python issue #16500
“Add an atfork module” proposes to
add a new atfork module to execute code at fork, which may be used to
automatically close file descriptors.
Issues with Inheritable File Descriptors
Most of the time, inheritable file descriptors “leaked” to child
processes are not noticed, because they don’t cause major bugs. It does
not mean that these bugs must not be fixed.
Two common issues with inherited file descriptors:
On Windows, a directory cannot be removed before all file handles open
in the directory are closed. The same issue can be seen with files,
except if the file was created with the FILE_SHARE_DELETE flag
(O_TEMPORARY mode for open()).
If a listening socket is leaked to a child process, the socket address
cannot be reused before the parent and child processes terminate. For
example, if a web server spawns a new program to handle a process, and
the server restarts while the program is not done, the server cannot
start because the TCP port is still in use.
Example of issues in open source projects:
Mozilla (Firefox):
open since 2002-05
dbus library:
fixed in 2008-05 (dbus commit),
close file descriptors in the child process
autofs:
fixed in 2009-02, set the CLOEXEC flag
qemu:
fixed in 2009-12 (qemu commit),
set CLOEXEC flag
Tor:
fixed in 2010-12, set CLOEXEC flag
OCaml: open since
2011-04, “PR#5256: Processes opened using Unix.open_process* inherit
all opened file descriptors (including sockets)”
ØMQ:
open since 2012-08
Squid:
open since 2012-07
See also: Excuse me son, but your code is leaking !!! (Dan Walsh, March 2012)
for SELinux issues with leaked file descriptors.
Security Vulnerability
Leaking sensitive file handles and file descriptors can lead to security
vulnerabilities. An untrusted child process might read sensitive data like
passwords or take control of the parent process through a leaked file
descriptor. With a leaked listening socket, a child process can accept
new connections to read sensitive data.
Example of vulnerabilities:
Hijacking Apache https by mod_php (2003)
Apache: Apr should set FD_CLOEXEC if APR_FOPEN_NOCLEANUP is not set:
fixed in 2009
PHP: system() (and similar) don’t cleanup opened handles of Apache: open since 2006
CWE-403: Exposure of File Descriptor to Unintended Control Sphere (2008)
OpenSSH Security Advisory: portable-keysign-rand-helper.adv
(2011)
Read also the CERT Secure Coding Standards:
FIO42-C. Ensure files are properly closed when they are no longer
needed.
Issues fixed in the subprocess module
Inherited file descriptors caused 4 issues in the subprocess
module:
Issue #2320: Race condition in subprocess using stdin (opened in 2008)
Issue #3006: subprocess.Popen causes socket to remain open after
close (opened in 2008)
Issue #7213: subprocess leaks open file descriptors between Popen
instances causing hangs
(opened in 2009)
Issue #12786: subprocess wait() hangs when stdin is closed (opened in 2011)
These issues were fixed in Python 3.2 by 4 different changes in the
subprocess module:
Pipes are now non-inheritable;
The default value of the close_fds parameter is now True,
with one exception on Windows: the default value is False if
at least one standard stream is replaced;
A new pass_fds parameter has been added;
Creation of a _posixsubprocess module implemented in C.
Atomic Creation of non-inheritable File Descriptors
In a multi-threaded application, an inheritable file descriptor may be
created just before a new program is spawned, before the file descriptor
is made non-inheritable. In this case, the file descriptor is leaked to
the child process. This race condition could be avoided if the file
descriptor is created directly non-inheritable.
FreeBSD, Linux, Mac OS X, Windows and many other operating systems
support creating non-inheritable file descriptors with the inheritable
flag cleared atomically at the creation of the file descriptor.
A new WSA_FLAG_NO_HANDLE_INHERIT flag for WSASocket() was added
in Windows 7 SP1 and Windows Server 2008 R2 SP1 to create
non-inheritable sockets. If this flag is used on an older Windows
version (ex: Windows XP SP3), WSASocket() fails with
WSAEPROTOTYPE.
On UNIX, new flags were added for files and sockets:
O_CLOEXEC: available on Linux (2.6.23), FreeBSD (8.3),
Mac OS 10.8, OpenBSD 5.0, Solaris 11, QNX, BeOS, next NetBSD release
(6.1?). This flag is part of POSIX.1-2008.
SOCK_CLOEXEC flag for socket() and socketpair(),
available on Linux 2.6.27, OpenBSD 5.2, NetBSD 6.0.
fcntl(): F_DUPFD_CLOEXEC flag, available on Linux 2.6.24,
OpenBSD 5.0, FreeBSD 9.1, NetBSD 6.0, Solaris 11. This flag is part
of POSIX.1-2008.
fcntl(): F_DUP2FD_CLOEXEC flag, available on FreeBSD 9.1
and Solaris 11.
recvmsg(): MSG_CMSG_CLOEXEC, available on Linux 2.6.23,
NetBSD 6.0.
On Linux older than 2.6.23, the O_CLOEXEC flag is simply ignored. So
fcntl() must be called to check whether the file descriptor is
non-inheritable: O_CLOEXEC is not supported if the FD_CLOEXEC
flag is missing. On Linux older than 2.6.27, socket() or
socketpair() fail with errno set to EINVAL if the
SOCK_CLOEXEC flag is set in the socket type.
New functions:
dup3(): available on Linux 2.6.27 (and glibc 2.9)
pipe2(): available on Linux 2.6.27 (and glibc 2.9)
accept4(): available on Linux 2.6.28 (and glibc 2.10)
On Linux older than 2.6.28, accept4() fails with errno set to
ENOSYS.
Summary:
================  =============  =====================================
Operating System  Atomic File    Atomic Socket
================  =============  =====================================
FreeBSD           8.3 (2012)     X
Linux             2.6.23 (2007)  2.6.27 (2008)
Mac OS X          10.8 (2012)    X
NetBSD            6.1 (?)        6.0 (2012)
OpenBSD           5.0 (2011)     5.2 (2012)
Solaris           11 (2011)      X
Windows           XP (2001)      Seven SP1 (2011), 2008 R2 SP1 (2011)
================  =============  =====================================
Legend:
“Atomic File”: first version of the operating system supporting
creating atomically a non-inheritable file descriptor using
open()
“Atomic Socket”: first version of the operating system supporting
creating atomically a non-inheritable socket
“X”: not supported yet
See also:
Secure File Descriptor Handling (Ulrich Drepper,
2008)
Ghosts of Unix past, part 2: Conflated designs (Neil Brown, 2010) explains the
history of O_CLOEXEC and O_NONBLOCK flags
File descriptor handling changes in 2.6.27
FreeBSD: atomic close on exec
Status of Python 3.3
Python 3.3 creates inheritable file descriptors on all platforms, except
os.pipe() which creates non-inheritable file descriptors on Windows.
New constants and functions related to the atomic creation of
non-inheritable file descriptors were added to Python 3.3:
os.O_CLOEXEC, os.pipe2() and socket.SOCK_CLOEXEC.
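A hedged sketch of what Python 3.3 code can already do with these additions
("data.bin" is a placeholder; os.pipe2() is only available on some platforms,
and O_CLOEXEC is silently ignored on old Linux kernels as described above):
import os

fd = os.open("data.bin", os.O_RDONLY | os.O_CLOEXEC)  # atomic where the platform supports it
r, w = os.pipe2(os.O_CLOEXEC)                          # non-inheritable pipe, Linux only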
On UNIX, the subprocess module closes all file descriptors in the
child process by default, except standard streams (0, 1, 2) and file
descriptors of the pass_fds parameter. If the close_fds parameter is
set to False, all inheritable file descriptors are inherited in the
child process.
On Windows, the subprocess module closes all handles and file descriptors
in the child process by default. If at least one standard stream (stdin,
stdout or stderr) is replaced (ex: redirected into a pipe), all
inheritable handles and file descriptors 0, 1 and 2 are inherited in the
child process.
Using the functions of the os.execv*() and os.spawn*() families,
all inheritable handles and all inheritable file descriptors are
inherited by the child process.
On UNIX, the multiprocessing module uses os.fork() and so all
file descriptors are inherited by child processes.
On Windows, all inheritable handles and file descriptors 0, 1 and 2 are
inherited by the child process using the multiprocessing module; all
file descriptors except standard streams are closed.
Summary:
===========================  ==============  ==================  =============
Module                       FD on UNIX      Handles on Windows  FD on Windows
===========================  ==============  ==================  =============
subprocess, default          STD, pass_fds   none                STD
subprocess, replace stdout   STD, pass_fds   all                 STD
subprocess, close_fds=False  all             all                 STD
multiprocessing              not applicable  all                 STD
os.execv(), os.spawn()       all             all                 all
===========================  ==============  ==================  =============
Legend:
“all”: all inheritable file descriptors or handles are inherited in
the child process
“none”: all handles are closed in the child process
“STD”: only file descriptors 0 (stdin), 1 (stdout) and 2 (stderr) are
inherited in the child process
“pass_fds”: file descriptors of the pass_fds parameter of the
subprocess are inherited
“not applicable”: on UNIX, the multiprocessing module uses fork(),
so this case is not affected by this PEP.
Closing All Open File Descriptors
On UNIX, the subprocess module closes almost all file descriptors in
the child process. This operation requires MAXFD system calls, where
MAXFD is the maximum number of file descriptors, even if there are only a
few open file descriptors. This maximum can be read using:
os.sysconf("SC_OPEN_MAX").
The operation can be slow if MAXFD is large. For example, on a FreeBSD
buildbot with MAXFD=655,000, the operation took 300 ms: see
issue #11284: slow close file descriptors.
On Linux, Python 3.3 gets the list of all open file descriptors from
/proc/<PID>/fd/, and so performance depends on the number of open
file descriptors, not on MAXFD.
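A rough sketch of the difference between the two strategies (the /proc
listing is Linux-only):
import os

# Upper bound used by the portable approach: one close() attempt per possible fd.
maxfd = os.sysconf("SC_OPEN_MAX")

# Linux shortcut: only the descriptors that are actually open.
open_fds = sorted(int(name) for name in os.listdir("/proc/self/fd"))

print(maxfd, len(open_fds))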
See also:
Python issue #1663329:
subprocess close_fds perform poor if SC_OPEN_MAX is high
Squid Bug #837033:
Squid should set CLOEXEC on opened FDs. “32k+ close() calls in each
child process take a long time ([12-56] seconds) in Xen PV guests.”
Proposal
Non-inheritable File Descriptors
The following functions are modified to make newly created file
descriptors non-inheritable by default:
asyncore.dispatcher.create_socket()
io.FileIO
io.open()
open()
os.dup()
os.fdopen()
os.open()
os.openpty()
os.pipe()
select.devpoll()
select.epoll()
select.kqueue()
socket.socket()
socket.socket.accept()
socket.socket.dup()
socket.socket.fromfd()
socket.socketpair()
os.dup2() still creates inheritable file descriptors by default; see below.
When available, atomic flags are used to make file descriptors
non-inheritable. The atomicity is not guaranteed because a fallback is
required when atomic flags are not available.
New Functions And Methods
New functions available on all platforms:
os.get_inheritable(fd: int): return True if the file
descriptor can be inherited by child processes, False otherwise.
os.set_inheritable(fd: int, inheritable: bool): set the
inheritable flag of the specified file descriptor.
New functions only available on Windows:
os.get_handle_inheritable(handle: int): return True if the
handle can be inherited by child processes, False otherwise.
os.set_handle_inheritable(handle: int, inheritable: bool):
set the inheritable flag of the specified handle.
New methods:
socket.socket.get_inheritable(): return True if the
socket can be inherited by child processes, False otherwise.
socket.socket.set_inheritable(inheritable: bool):
set the inheritable flag of the specified socket.
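A usage sketch of the proposed functions and methods (as they shipped in
Python 3.4; "log.txt" is a placeholder):
import os
import socket

fd = os.open("log.txt", os.O_WRONLY | os.O_CREAT)
print(os.get_inheritable(fd))    # False: non-inheritable by default under this PEP
os.set_inheritable(fd, True)     # explicitly allow the fd to be inherited

sock = socket.socket()
print(sock.get_inheritable())    # False
sock.set_inheritable(True)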
Other Changes
On UNIX, subprocess makes file descriptors of the pass_fds parameter
inheritable. The file descriptor is made inheritable in the child
process after the fork() and before execv(), so the inheritable
flag of file descriptors is unchanged in the parent process.
os.dup2() has a new optional inheritable parameter: os.dup2(fd,
fd2, inheritable=True). fd2 is created inheritable by default, but
non-inheritable if inheritable is False.
os.dup2() behaves differently than os.dup() because the most
common use case of os.dup2() is to replace the file descriptors of
the standard streams: stdin (0), stdout (1) and
stderr (2). Standard streams are expected to be inherited by
child processes.
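For example, the common pattern of redirecting a standard stream keeps the
target inheritable, while a private copy can be made non-inheritable
explicitly; a minimal sketch (fd 10 is an arbitrary choice):
import os

r, w = os.pipe()
os.dup2(w, 1)                        # fd 1 (stdout) now points at the pipe and stays inheritable
os.dup2(w, 10, inheritable=False)    # a private duplicate on fd 10 that children will not inherit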
Backward Compatibility
This PEP breaks applications relying on inheritance of file descriptors.
Developers are encouraged to reuse the high-level Python module
subprocess which handles the inheritance of file descriptors in a
portable way.
Applications using the subprocess module with the pass_fds
parameter or using only os.dup2() to redirect standard streams should
not be affected.
Python no longer conforms to POSIX, since file descriptors are now
made non-inheritable by default. Python was not designed to conform to
POSIX, but was designed to develop portable applications.
Related Work
The programming languages Go, Perl and Ruby make newly created file
descriptors non-inheritable by default: since Go 1.0 (2009), Perl 1.0
(1987) and Ruby 2.0 (2013).
The SCons project, written in Python, overrides builtin functions
file() and open() to make files non-inheritable on Windows:
see win32.py.
Rejected Alternatives
Add a new open_noinherit() function
In June 2007, Henning von Bargen proposed on the python-dev mailing list
to add a new open_noinherit() function to fix issues of inherited file
descriptors in child processes. At this time, the default value of the
close_fds parameter of the subprocess module was False.
Read the mail thread: [Python-Dev] Proposal for a new function
“open_noinherit” to avoid problems with subprocesses and security risks.
PEP 433
PEP 433, “Easier suppression of file descriptor inheritance”,
was a previous attempt proposing various other alternatives, but no
consensus could be reached.
Python Issues
#10115: Support accept4() for atomic setting of flags at socket
creation
#12105: open() does not able to set flags, such as O_CLOEXEC
#12107: TCP listening sockets created without FD_CLOEXEC flag
#16850: Add “e” mode to open(): close-and-exec
(O_CLOEXEC) / O_NOINHERIT
#16860: Use O_CLOEXEC in the tempfile module
#16946: subprocess: _close_open_fd_range_safe() does not set
close-on-exec flag on Linux < 2.6.23 if O_CLOEXEC is defined
#17070: Use the new cloexec to improve security and avoid bugs
#18571: Implementation of the PEP 446: non-inheritable file
descriptors
Copyright
This document has been placed into the public domain.
PEP 447 – Add __getdescriptor__ method to metaclass
Author:
Ronald Oussoren <ronaldoussoren at mac.com>
Status:
Deferred
Type:
Standards Track
Created:
12-Jun-2013
Post-History:
02-Jul-2013, 15-Jul-2013, 29-Jul-2013, 22-Jul-2015
Table of Contents
Abstract
PEP Status
Rationale
Background
The superclass attribute lookup hook
Aside: Attribute resolution algorithm in Python
In Python code
Example usage
In C code
Use of this hook by the interpreter
Other changes to the implementation
Impact of this PEP on introspection
Performance impact
Micro benchmarks
Pybench
Alternative proposals
__getattribute_super__
Reuse tp_getattro
Alternative placement of the new method
History
Discussion threads
References
Copyright
Abstract
Currently object.__getattribute__ and super.__getattribute__ peek
in the __dict__ of classes on the MRO for a class when looking for
an attribute. This PEP adds an optional __getdescriptor__ method to
a metaclass that replaces this behavior and gives more control over attribute
lookup, especially when using a super object.
That is, the MRO walking loop in _PyType_Lookup and
super.__getattribute__ gets changed from:
def lookup(mro_list, name):
for cls in mro_list:
if name in cls.__dict__:
return cls.__dict__[name]
return NotFound
to:
def lookup(mro_list, name):
for cls in mro_list:
try:
return cls.__getdescriptor__(name)
except AttributeError:
pass
return NotFound
The default implementation of __getdescriptor__ looks in the class
dictionary:
class type:
def __getdescriptor__(cls, name):
try:
return cls.__dict__[name]
except KeyError:
raise AttributeError(name) from None
PEP Status
This PEP is deferred until someone has time to update this PEP and push it forward.
Rationale
It is currently not possible to influence how the super class looks
up attributes (that is, super.__getattribute__ unconditionally
peeks in the class __dict__), and that can be problematic for
dynamic classes that can grow new methods on demand, for example dynamic
proxy classes.
The __getdescriptor__ method makes it possible to dynamically add
attributes even when looking them up using the super class.
The new method affects object.__getattribute__ (and
PyObject_GenericGetAttr) as well for consistency and to have a single
place to implement dynamic attribute resolution for classes.
Background
The current behavior of super.__getattribute__ causes problems for
classes that are dynamic proxies for other (non-Python) classes or types,
an example of which is PyObjC. PyObjC creates a Python class for every
class in the Objective-C runtime, and looks up methods in the Objective-C
runtime when they are used. This works fine for normal access, but doesn’t
work for access with super objects. Because of this PyObjC currently
includes a custom super that must be used with its classes, as well as
completely reimplementing PyObject_GenericGetAttr for normal attribute
access.
The API in this PEP makes it possible to remove the custom super and
simplifies the implementation because the custom lookup behavior can be
added in a central location.
Note
PyObjC cannot precalculate the contents of the class __dict__
because Objective-C classes can grow new methods at runtime. Furthermore,
Objective-C classes tend to contain a lot of methods while most Python
code will only use a small subset of them; this makes precalculating
unnecessarily expensive.
The superclass attribute lookup hook
Both super.__getattribute__ and object.__getattribute__ (or
PyObject_GenericGetAttr and in particular _PyType_Lookup in C code)
walk an object’s MRO and currently peek in the class’ __dict__ to look up
attributes.
With this proposal both lookup methods no longer peek in the class __dict__
but call the special method __getdescriptor__, which is a slot defined
on the metaclass. The default implementation of that method looks
up the name in the class __dict__, which means that attribute lookup is
unchanged unless a metatype actually defines the new special method.
Aside: Attribute resolution algorithm in Python
The attribute resolution process as implemented by object.__getattribute__
(or PyObject_GenericGetAttr in CPython’s implementation) is fairly
straightforward, but not entirely so without reading C code.
The current CPython implementation of object.__getattribute__ is basically
equivalent to the following (pseudo-) Python code (excluding some house
keeping and speed tricks):
def _PyType_Lookup(tp, name):
mro = tp.mro()
assert isinstance(mro, tuple)
for base in mro:
assert isinstance(base, type)
# PEP 447 will change these lines:
try:
return base.__dict__[name]
except KeyError:
pass
return None
class object:
def __getattribute__(self, name):
assert isinstance(name, str)
tp = type(self)
descr = _PyType_Lookup(tp, name)
f = None
if descr is not None:
f = descr.__get__
if f is not None and descr.__set__ is not None:
# Data descriptor
return f(descr, self, type(self))
dict = self.__dict__
if dict is not None:
try:
return self.__dict__[name]
except KeyError:
pass
if f is not None:
# Non-data descriptor
return f(descr, self, type(self))
if descr is not None:
# Regular class attribute
return descr
raise AttributeError(name)
class super:
def __getattribute__(self, name):
assert isinstance(name, str)
if name != '__class__':
starttype = self.__self_type__
mro = starttype.mro()
try:
idx = mro.index(self.__thisclass__)
except ValueError:
pass
else:
for base in mro[idx+1:]:
# PEP 447 will change these lines:
try:
descr = base.__dict__[name]
except KeyError:
continue
f = descr.__get__
if f is not None:
return f(descr,
None if (self.__self__ is self.__self_type__) else self.__self__,
starttype)
else:
return descr
return object.__getattribute__(self, name)
This PEP should replace the dict lookup at the lines starting at “# PEP 447” with
a method call to perform the actual lookup, making it possible to affect that
lookup both for normal attribute access and access through the super proxy.
Note that specific classes can already completely override the default
behaviour by implementing their own __getattribute__ slot (with or without
calling the super class implementation).
In Python code
A meta type can define a method __getdescriptor__ that is called during
attribute resolution by both super.__getattribute__
and object.__getattribute__:
class MetaType(type):
def __getdescriptor__(cls, name):
try:
return cls.__dict__[name]
except KeyError:
raise AttributeError(name) from None
The __getdescriptor__ method has as its arguments a class (which is an
instance of the meta type) and the name of the attribute that is looked up.
It should return the value of the attribute without invoking descriptors,
and should raise AttributeError when the name cannot be found.
The type class provides a default implementation for __getdescriptor__,
that looks up the name in the class dictionary.
Example usage
The code below implements a silly metaclass that redirects attribute lookup to
uppercase versions of names:
class UpperCaseAccess (type):
def __getdescriptor__(cls, name):
try:
return cls.__dict__[name.upper()]
except KeyError:
raise AttributeError(name) from None
class SillyObject (metaclass=UpperCaseAccess):
def m(self):
return 42
def M(self):
return "fortytwo"
obj = SillyObject()
assert obj.m() == "fortytwo"
As mentioned earlier in this PEP a more realistic use case of this
functionality is a __getdescriptor__ method that dynamically populates the
class __dict__ based on attribute access, primarily when it is not
possible to reliably keep the class dict in sync with its source, for example
because the source used to populate __dict__ is dynamic as well and does
not have triggers that can be used to detect changes to that source.
An example of that are the class bridges in PyObjC: the class bridge is a
Python object (class) that represents an Objective-C class and conceptually
has a Python method for every Objective-C method in the Objective-C class.
As with Python it is possible to add new methods to an Objective-C class, or
replace existing ones, and there are no callbacks that can be used to detect
this.
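A minimal illustrative sketch of what such a lazily-populating metaclass
might look like under the proposed semantics. The _registry dictionary is a
stand-in for an external source such as the Objective-C runtime, and the hook
itself is hypothetical: it is only invoked by an interpreter that actually
implements this (deferred) PEP:
class LazyMethods(type):
    # Stand-in for an external, mutable source of methods (e.g. a foreign runtime).
    _registry = {'greet': lambda self: 'hello'}

    def __getdescriptor__(cls, name):
        # First honour anything already cached in the class __dict__.
        try:
            return cls.__dict__[name]
        except KeyError:
            pass
        # Otherwise consult the external source and cache the result.
        try:
            func = cls._registry[name]
        except KeyError:
            raise AttributeError(name) from None
        setattr(cls, name, func)
        return func

class Proxy(metaclass=LazyMethods):
    pass

# Under PEP 447 semantics Proxy().greet() would return 'hello' on first access;
# today the hook is simply ignored.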
In C code
A new type flag Py_TPFLAGS_GETDESCRIPTOR with value (1UL << 11)
indicates that the new slot is present and to be used.
A new slot tp_getdescriptor is added to the PyTypeObject struct; this
slot corresponds to the __getdescriptor__ method on type.
The slot has the following prototype:
PyObject* (*getdescriptorfunc)(PyTypeObject* cls, PyObject* name);
This method should look up name in the namespace of cls, without looking at
superclasses, and should not invoke descriptors. The method returns NULL
without setting an exception when the name cannot be found, and returns a
new reference otherwise (not a borrowed reference).
Classes with a tp_getdescriptor slot must add Py_TPFLAGS_GETDESCRIPTOR
to tp_flags to indicate that the new slot must be used.
Use of this hook by the interpreter
The new method is required for metatypes and as such is defined on type.
Both super.__getattribute__ and
object.__getattribute__/PyObject_GenericGetAttr
(through _PyType_Lookup) use this __getdescriptor__ method when
walking the MRO.
Other changes to the implementation
The change for PyObject_GenericGetAttr will be done by changing the private
function _PyType_Lookup. This currently returns a borrowed reference, but
must return a new reference when the __getdescriptor__ method is present.
Because of this _PyType_Lookup will be renamed to _PyType_LookupName,
this will cause compile-time errors for all out-of-tree users of this
private API.
For the same reason _PyType_LookupId is renamed to _PyType_LookupId2.
A number of other functions in typeobject.c with the same issue do not get
an updated name because they are private to that file.
The attribute lookup cache in Objects/typeobject.c is disabled for classes
that have a metaclass that overrides __getdescriptor__, because using the
cache might not be valid for such classes.
Impact of this PEP on introspection
Use of the method introduced in this PEP can affect introspection of classes
with a metaclass that uses a custom __getdescriptor__ method. This section
lists those changes.
The items listed below are only affected by custom __getdescriptor__
methods; the default implementation for object won’t cause problems
because it still only uses the class __dict__ and won’t cause changes
to the visible behaviour of object.__getattribute__.
dir might not show all attributes
As with a custom __getattribute__ method, dir() might not see all
(instance) attributes when using the __getdescriptor__() method to
dynamically resolve attributes.
The solution for that is quite simple: classes using __getdescriptor__
should also implement __dir__() if they want full support for the builtin
dir() function.
inspect.getattr_static might not show all attributes
The function inspect.getattr_static intentionally does not invoke
__getattribute__ and descriptors to avoid invoking user code during
introspection with this function. The __getdescriptor__ method will also
be ignored and is another way in which the result of inspect.getattr_static
can be different from that of the builtin getattr().
inspect.getmembers and inspect.classify_class_attrs
Both of these functions directly access the class __dict__ of classes along
the MRO, and hence can be affected by a custom __getdescriptor__ method.
Code with a custom __getdescriptor__ method that wants to play nice with
these functions also needs to ensure that the __dict__ is set up correctly
when that is accessed directly by Python code.
Note that inspect.getmembers is used by pydoc and hence this can
affect runtime documentation introspection.
Direct introspection of the class __dict__
Any code that directly accesses the class __dict__ for introspection
can be affected by a custom __getdescriptor__ method, see the previous
item.
Performance impact
WARNING: The benchmark results in this section are old, and will be updated
when I’ve ported the patch to the current trunk. I don’t expect significant
changes to the results in this section.
Micro benchmarks
Issue 18181 has a micro benchmark as one of its attachments
(pep447-micro-bench.py) that specifically tests the speed of attribute
lookup, both directly and through super.
Note that attribute lookup with deep class hierarchies is significantly slower
when using a custom __getdescriptor__ method. This is because the
attribute lookup cache for CPython cannot be used when having this method.
Pybench
The pybench output below compares an implementation of this PEP with the
regular source tree, both based on changeset a5681f50bae2, run on an idle
machine with a Core i7 processor running CentOS 6.4.
Even though the machine was idle there were clear differences between runs;
I’ve seen the difference in “minimum time” vary from -0.1% to +1.5%, with similar
(but slightly smaller) differences in the “average time”.
-------------------------------------------------------------------------------
PYBENCH 2.1
-------------------------------------------------------------------------------
* using CPython 3.4.0a0 (default, Jul 29 2013, 13:01:34) [GCC 4.4.7 20120313 (Red Hat 4.4.7-3)]
* disabled garbage collection
* system check interval set to maximum: 2147483647
* using timer: time.perf_counter
* timer: resolution=1e-09, implementation=clock_gettime(CLOCK_MONOTONIC)
-------------------------------------------------------------------------------
Benchmark: pep447.pybench
-------------------------------------------------------------------------------
Rounds: 10
Warp: 10
Timer: time.perf_counter
Machine Details:
Platform ID: Linux-2.6.32-358.114.1.openstack.el6.x86_64-x86_64-with-centos-6.4-Final
Processor: x86_64
Python:
Implementation: CPython
Executable: /tmp/default-pep447/bin/python3
Version: 3.4.0a0
Compiler: GCC 4.4.7 20120313 (Red Hat 4.4.7-3)
Bits: 64bit
Build: Jul 29 2013 14:09:12 (#default)
Unicode: UCS4
-------------------------------------------------------------------------------
Comparing with: default.pybench
-------------------------------------------------------------------------------
Rounds: 10
Warp: 10
Timer: time.perf_counter
Machine Details:
Platform ID: Linux-2.6.32-358.114.1.openstack.el6.x86_64-x86_64-with-centos-6.4-Final
Processor: x86_64
Python:
Implementation: CPython
Executable: /tmp/default/bin/python3
Version: 3.4.0a0
Compiler: GCC 4.4.7 20120313 (Red Hat 4.4.7-3)
Bits: 64bit
Build: Jul 29 2013 13:01:34 (#default)
Unicode: UCS4
Test minimum run-time average run-time
this other diff this other diff
-------------------------------------------------------------------------------
BuiltinFunctionCalls: 45ms 44ms +1.3% 45ms 44ms +1.3%
BuiltinMethodLookup: 26ms 27ms -2.4% 27ms 27ms -2.2%
CompareFloats: 33ms 34ms -0.7% 33ms 34ms -1.1%
CompareFloatsIntegers: 66ms 67ms -0.9% 66ms 67ms -0.8%
CompareIntegers: 51ms 50ms +0.9% 51ms 50ms +0.8%
CompareInternedStrings: 34ms 33ms +0.4% 34ms 34ms -0.4%
CompareLongs: 29ms 29ms -0.1% 29ms 29ms -0.0%
CompareStrings: 43ms 44ms -1.8% 44ms 44ms -1.8%
ComplexPythonFunctionCalls: 44ms 42ms +3.9% 44ms 42ms +4.1%
ConcatStrings: 33ms 33ms -0.4% 33ms 33ms -1.0%
CreateInstances: 47ms 48ms -2.9% 47ms 49ms -3.4%
CreateNewInstances: 35ms 36ms -2.5% 36ms 36ms -2.5%
CreateStringsWithConcat: 69ms 70ms -0.7% 69ms 70ms -0.9%
DictCreation: 52ms 50ms +3.1% 52ms 50ms +3.0%
DictWithFloatKeys: 40ms 44ms -10.1% 43ms 45ms -5.8%
DictWithIntegerKeys: 32ms 36ms -11.2% 35ms 37ms -4.6%
DictWithStringKeys: 29ms 34ms -15.7% 35ms 40ms -11.0%
ForLoops: 30ms 29ms +2.2% 30ms 29ms +2.2%
IfThenElse: 38ms 41ms -6.7% 38ms 41ms -6.9%
ListSlicing: 36ms 36ms -0.7% 36ms 37ms -1.3%
NestedForLoops: 43ms 45ms -3.1% 43ms 45ms -3.2%
NestedListComprehensions: 39ms 40ms -1.7% 39ms 40ms -2.1%
NormalClassAttribute: 86ms 82ms +5.1% 86ms 82ms +5.0%
NormalInstanceAttribute: 42ms 42ms +0.3% 42ms 42ms +0.0%
PythonFunctionCalls: 39ms 38ms +3.5% 39ms 38ms +2.8%
PythonMethodCalls: 51ms 49ms +3.0% 51ms 50ms +2.8%
Recursion: 67ms 68ms -1.4% 67ms 68ms -1.4%
SecondImport: 41ms 36ms +12.5% 41ms 36ms +12.6%
SecondPackageImport: 45ms 40ms +13.1% 45ms 40ms +13.2%
SecondSubmoduleImport: 92ms 95ms -2.4% 95ms 98ms -3.6%
SimpleComplexArithmetic: 28ms 28ms -0.1% 28ms 28ms -0.2%
SimpleDictManipulation: 57ms 57ms -1.0% 57ms 58ms -1.0%
SimpleFloatArithmetic: 29ms 28ms +4.7% 29ms 28ms +4.9%
SimpleIntFloatArithmetic: 37ms 41ms -8.5% 37ms 41ms -8.7%
SimpleIntegerArithmetic: 37ms 41ms -9.4% 37ms 42ms -10.2%
SimpleListComprehensions: 33ms 33ms -1.9% 33ms 34ms -2.9%
SimpleListManipulation: 28ms 30ms -4.3% 29ms 30ms -4.1%
SimpleLongArithmetic: 26ms 26ms +0.5% 26ms 26ms +0.5%
SmallLists: 40ms 40ms +0.1% 40ms 40ms +0.1%
SmallTuples: 46ms 47ms -2.4% 46ms 48ms -3.0%
SpecialClassAttribute: 126ms 120ms +4.7% 126ms 121ms +4.4%
SpecialInstanceAttribute: 42ms 42ms +0.6% 42ms 42ms +0.8%
StringMappings: 94ms 91ms +3.9% 94ms 91ms +3.8%
StringPredicates: 48ms 49ms -1.7% 48ms 49ms -2.1%
StringSlicing: 45ms 45ms +1.4% 46ms 45ms +1.5%
TryExcept: 23ms 22ms +4.9% 23ms 22ms +4.8%
TryFinally: 32ms 32ms -0.1% 32ms 32ms +0.1%
TryRaiseExcept: 17ms 17ms +0.9% 17ms 17ms +0.5%
TupleSlicing: 49ms 48ms +1.1% 49ms 49ms +1.0%
WithFinally: 48ms 47ms +2.3% 48ms 47ms +2.4%
WithRaiseExcept: 45ms 44ms +0.8% 45ms 45ms +0.5%
-------------------------------------------------------------------------------
Totals: 2284ms 2287ms -0.1% 2306ms 2308ms -0.1%
(this=pep447.pybench, other=default.pybench)
A run of the benchmark suite (with option “-b 2n3”) also seems to indicate that
the performance impact is minimal:
Report on Linux fangorn.local 2.6.32-358.114.1.openstack.el6.x86_64 #1 SMP Wed Jul 3 02:11:25 EDT 2013 x86_64 x86_64
Total CPU cores: 8
### call_method_slots ###
Min: 0.304120 -> 0.282791: 1.08x faster
Avg: 0.304394 -> 0.282906: 1.08x faster
Significant (t=2329.92)
Stddev: 0.00016 -> 0.00004: 4.1814x smaller
### call_simple ###
Min: 0.249268 -> 0.221175: 1.13x faster
Avg: 0.249789 -> 0.221387: 1.13x faster
Significant (t=2770.11)
Stddev: 0.00012 -> 0.00013: 1.1101x larger
### django_v2 ###
Min: 0.632590 -> 0.601519: 1.05x faster
Avg: 0.635085 -> 0.602653: 1.05x faster
Significant (t=321.32)
Stddev: 0.00087 -> 0.00051: 1.6933x smaller
### fannkuch ###
Min: 1.033181 -> 0.999779: 1.03x faster
Avg: 1.036457 -> 1.001840: 1.03x faster
Significant (t=260.31)
Stddev: 0.00113 -> 0.00070: 1.6112x smaller
### go ###
Min: 0.526714 -> 0.544428: 1.03x slower
Avg: 0.529649 -> 0.547626: 1.03x slower
Significant (t=-93.32)
Stddev: 0.00136 -> 0.00136: 1.0028x smaller
### iterative_count ###
Min: 0.109748 -> 0.116513: 1.06x slower
Avg: 0.109816 -> 0.117202: 1.07x slower
Significant (t=-357.08)
Stddev: 0.00008 -> 0.00019: 2.3664x larger
### json_dump_v2 ###
Min: 2.554462 -> 2.609141: 1.02x slower
Avg: 2.564472 -> 2.620013: 1.02x slower
Significant (t=-76.93)
Stddev: 0.00538 -> 0.00481: 1.1194x smaller
### meteor_contest ###
Min: 0.196336 -> 0.191925: 1.02x faster
Avg: 0.196878 -> 0.192698: 1.02x faster
Significant (t=61.86)
Stddev: 0.00053 -> 0.00041: 1.2925x smaller
### nbody ###
Min: 0.228039 -> 0.235551: 1.03x slower
Avg: 0.228857 -> 0.236052: 1.03x slower
Significant (t=-54.15)
Stddev: 0.00130 -> 0.00029: 4.4810x smaller
### pathlib ###
Min: 0.108501 -> 0.105339: 1.03x faster
Avg: 0.109084 -> 0.105619: 1.03x faster
Significant (t=311.08)
Stddev: 0.00022 -> 0.00011: 1.9314x smaller
### regex_effbot ###
Min: 0.057905 -> 0.056447: 1.03x faster
Avg: 0.058055 -> 0.056760: 1.02x faster
Significant (t=79.22)
Stddev: 0.00006 -> 0.00015: 2.7741x larger
### silent_logging ###
Min: 0.070810 -> 0.072436: 1.02x slower
Avg: 0.070899 -> 0.072609: 1.02x slower
Significant (t=-191.59)
Stddev: 0.00004 -> 0.00008: 2.2640x larger
### spectral_norm ###
Min: 0.290255 -> 0.299286: 1.03x slower
Avg: 0.290335 -> 0.299541: 1.03x slower
Significant (t=-572.10)
Stddev: 0.00005 -> 0.00015: 2.8547x larger
### threaded_count ###
Min: 0.107215 -> 0.115206: 1.07x slower
Avg: 0.107488 -> 0.115996: 1.08x slower
Significant (t=-109.39)
Stddev: 0.00016 -> 0.00076: 4.8665x larger
The following not significant results are hidden, use -v to show them:
call_method, call_method_unknown, chaos, fastpickle, fastunpickle, float, formatted_logging, hexiom2, json_load, normal_startup, nqueens, pidigits, raytrace, regex_compile, regex_v8, richards, simple_logging, startup_nosite, telco, unpack_sequence.
Alternative proposals
__getattribute_super__
An earlier version of this PEP used the following static method on classes:
def __getattribute_super__(cls, name, object, owner): pass
This method performed name lookup as well as invoking descriptors and was
necessarily limited to working only with super.__getattribute__.
Reuse tp_getattro
It would be nice to avoid adding a new slot, thus keeping the API simpler and
easier to understand. A comment on Issue 18181 asked about reusing the
tp_getattro slot; that is, super could call the tp_getattro slot of all
classes along the MRO.
That won’t work because tp_getattro will look in the instance
__dict__ before it tries to resolve attributes using classes in the MRO.
This would mean that using tp_getattro instead of peeking the class
dictionaries changes the semantics of the super class.
Alternative placement of the new method
This PEP proposes to add __getdescriptor__ as a method on the metaclass.
An alternative would be to add it as a class method on the class itself
(similar to how __new__ is a staticmethod of the class and not a method
of the metaclass).
The advantage of using a method on the metaclass is that it will give an error
when two classes on the MRO have different metaclasses that may have different
behaviors for __getdescriptor__. With a normal classmethod that problem
would pass undetected while it might cause subtle errors when running the code.
History
23-Jul-2015: Added type flag Py_TPFLAGS_GETDESCRIPTOR after talking
with Guido.
The new flag is primarily useful to avoid crashing when loading an extension
for an older version of CPython and could have positive speed implications
as well.
Jul-2014: renamed slot to __getdescriptor__, the old name didn’t
match the naming style of other slots and was less descriptive.
Discussion threads
The initial version of the PEP was sent with
Message-ID 75030FAC-6918-4E94-95DA-67A88D53E6F5@mac.com
Further discussion starting at a message with
Message-ID 5BB87CC4-F31B-4213-AAAC-0C0CE738460C@mac.com
And more discussion starting at a message with
Message-ID 00AA7433-C853-4101-9718-060468EBAC54@mac.com
References
Issue 18181 contains an out of date prototype implementation
Copyright
This document has been placed in the public domain.
PEP 450 – Adding A Statistics Module To The Standard Library
Author:
Steven D’Aprano <steve at pearwood.info>
Status:
Final
Type:
Standards Track
Created:
01-Aug-2013
Python-Version:
3.4
Post-History:
13-Sep-2013
Table of Contents
Abstract
Rationale
Comparison To Other Languages/Packages
R
C#
Ruby
PHP
Delphi
GNU Scientific Library
Design Decisions Of The Module
API
Calculating mean, median and mode
Calculating variance and standard deviation
Other functions
Specification
What Should Be The Name Of The Module?
Discussion And Resolved Issues
Frequently Asked Questions
Shouldn’t this module spend time on PyPI before being considered for the standard library?
Does the standard library really need yet another version of sum?
Will this module be backported to older versions of Python?
Is this supposed to replace numpy?
Future Work
References
Copyright
Abstract
This PEP proposes the addition of a module for common statistics functions such
as mean, median, variance and standard deviation to the Python standard
library. See also http://bugs.python.org/issue18606
Rationale
The proposed statistics module is motivated by the “batteries included”
philosophy towards the Python standard library. Raymond Hettinger and other
senior developers have requested a quality statistics library that falls
somewhere in between high-end statistics libraries and ad hoc code. [1]
Statistical functions such as mean, standard deviation and others are obvious
and useful batteries, familiar to any Secondary School student. Even cheap
scientific calculators typically include multiple statistical functions such
as:
mean
population and sample variance
population and sample standard deviation
linear regression
correlation coefficient
Graphing calculators aimed at Secondary School students typically include all
of the above, plus some or all of:
median
mode
functions for calculating the probability of random variables from the
normal, t, chi-squared, and F distributions
inference on the mean
and others [2]. Likewise spreadsheet applications such as Microsoft Excel,
LibreOffice and Gnumeric include rich collections of statistical
functions [3].
In contrast, Python currently has no standard way to calculate even the
simplest and most obvious statistical functions such as mean. For those who
need statistical functions in Python, there are two obvious solutions:
install numpy and/or scipy [4];
or use a Do It Yourself solution.
Numpy is perhaps the most full-featured solution, but it has a few
disadvantages:
It may be overkill for many purposes. The documentation for numpy even warns
“It can be hard to know what functions are available in numpy. This is
not a complete list, but it does cover most of them.”[5]
and then goes on to list over 270 functions, only a small number of which are
related to statistics.
Numpy is aimed at those doing heavy numerical work, and may be intimidating
to those who don’t have a background in computational mathematics and
computer science. For example, numpy.mean takes four arguments:
mean(a, axis=None, dtype=None, out=None)
although fortunately for the beginner or casual numpy user, three are
optional and numpy.mean does the right thing in simple cases:
>>> numpy.mean([1, 2, 3, 4])
2.5
For many people, installing numpy may be difficult or impossible. For
example, people in corporate environments may have to go through a difficult,
time-consuming process before being permitted to install third-party
software. For the casual Python user, having to learn about installing
third-party packages in order to average a list of numbers is unfortunate.
This leads to option number 2, DIY statistics functions. At first glance, this
appears to be an attractive option, due to the apparent simplicity of common
statistical functions. For example:
import math

def mean(data):
return sum(data)/len(data)
def variance(data):
# Use the Computational Formula for Variance.
n = len(data)
ss = sum(x**2 for x in data) - (sum(data)**2)/n
return ss/(n-1)
def standard_deviation(data):
return math.sqrt(variance(data))
The above appears to be correct with a casual test:
>>> data = [1, 2, 4, 5, 8]
>>> variance(data)
7.5
But adding a constant to every data point should not change the variance:
>>> data = [x+1e12 for x in data]
>>> variance(data)
0.0
And variance should never be negative:
>>> variance(data*100)
-1239429440.1282566
By contrast, the proposed reference implementation gets the exactly correct
answer 7.5 for the first two examples, and a reasonably close answer for the
third: 6.012. numpy does no better [6].
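The instability comes from squaring large values before they cancel; a
two-pass version that subtracts the mean first avoids the problem (a minimal
sketch, not the PEP's reference implementation):
def variance_two_pass(data):
    # First pass: compute the mean; second pass: sum squared deviations.
    n = len(data)
    m = sum(data) / n
    ss = sum((x - m) ** 2 for x in data)
    return ss / (n - 1)

data = [x + 1e12 for x in [1, 2, 4, 5, 8]]
print(variance_two_pass(data))   # 7.5, where the Computational Formula returned 0.0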
Even simple statistical calculations contain traps for the unwary, starting
with the Computational Formula itself. Despite the name, it is numerically
unstable and can be extremely inaccurate, as can be seen above. It is
completely unsuitable for computation by computer [7]. This problem plagues
users of many programming languages, not just Python [8], as coders reinvent
the same numerically inaccurate code over and over again [9], or advise others
to do so [10].
It isn’t just the variance and standard deviation. Even the mean is not quite
as straightforward as it might appear. The above implementation seems too
simple to have problems, but it does:
The built-in sum can lose accuracy when dealing with floats of wildly
differing magnitude. Consequently, the above naive mean fails this
“torture test”:
assert mean([1e30, 1, 3, -1e30]) == 1
returning 0 instead of 1, a purely computational error of 100%.
Using math.fsum inside mean will make it more accurate with float
data, but it also has the side-effect of converting any arguments to float
even when unnecessary. E.g. we should expect the mean of a list of Fractions
to be a Fraction, not a float.
While the above mean implementation does not fail quite as catastrophically as
the naive variance does, a standard library function can do much better than
the DIY versions.
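The failure mode and the math.fsum work-around are easy to demonstrate:
import math

data = [1e30, 1, 3, -1e30]
print(sum(data) / len(data))        # 0.0 -- the small values are absorbed by 1e30
print(math.fsum(data) / len(data))  # 1.0 -- fsum tracks the lost low-order bits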
The example above involves an especially bad set of data, but even for more
realistic data sets accuracy is important. The first step in interpreting
variation in data (including dealing with ill-conditioned data) is often to
standardize it to a series with variance 1 (and often mean 0). This
standardization requires accurate computation of the mean and variance of the
raw series. Naive computation of mean and variance can lose precision very
quickly. Because precision bounds accuracy, it is important to use the most
precise algorithms for computing mean and variance that are practical, or the
results of standardization are themselves useless.
Comparison To Other Languages/Packages
The proposed statistics library is not intended to be a competitor to such
third-party libraries as numpy/scipy, or of proprietary full-featured
statistics packages aimed at professional statisticians such as Minitab, SAS
and Matlab. It is aimed at the level of graphing and scientific calculators.
Most programming languages have little or no built-in support for statistics
functions. Some exceptions:
R
R (and its proprietary cousin, S) is a programming language designed for
statistics work. It is extremely popular with statisticians and is extremely
feature-rich [11].
C#
The C# LINQ package includes extension methods to calculate the average of
enumerables [12].
Ruby
Ruby does not ship with a standard statistics module, despite some apparent
demand [13]. Statsample appears to be a feature-rich third-party library,
aiming to compete with R [14].
PHP
PHP has an extremely feature-rich (although mostly undocumented) set of
advanced statistical functions [15].
Delphi
Delphi includes standard statistical functions including Mean, Sum,
Variance, TotalVariance, MomentSkewKurtosis in its Math library [16].
GNU Scientific Library
The GNU Scientific Library includes standard statistical functions,
percentiles, median and others [17]. One innovation I have borrowed from the
GSL is to allow the caller to optionally specify the pre-calculated mean of
the sample (or an a priori known population mean) when calculating the variance
and standard deviation [18].
Design Decisions Of The Module
My intention is to start small and grow the library as needed, rather than try
to include everything from the start. Consequently, the current reference
implementation includes only a small number of functions: mean, variance,
standard deviation, median, mode. (See the reference implementation for a full
list.)
I have aimed for the following design features:
Correctness over speed. It is easier to speed up a correct but slow function
than to correct a fast but buggy one.
Concentrate on data in sequences, allowing two-passes over the data, rather
than potentially compromise on accuracy for the sake of a one-pass algorithm.
Functions expect data will be passed as a list or other sequence; if given an
iterator, they may internally convert to a list.
Functions should, as much as possible, honour any type of numeric data. E.g.
the mean of a list of Decimals should be a Decimal, not a float. When this is
not possible, treat float as the “lowest common data type”.
Although functions support data sets of floats, Decimals or Fractions, there
is no guarantee that mixed data sets will be supported. (But on the other
hand, they aren’t explicitly rejected either.)
Plenty of documentation, aimed at readers who understand the basic concepts
but may not know (for example) which variance they should use (population or
sample?). Mathematicians and statisticians have a terrible habit of being
inconsistent with both notation and terminology [19], and having spent many
hours making sense of the contradictory/confusing definitions in use, it is
only fair that I do my best to clarify rather than obfuscate the topic.
But avoid going into tedious [20] mathematical detail.
API
The initial version of the library will provide univariate (single variable)
statistics functions. The general API will be based on a functional model
function(data, ...) -> result, where data is a mandatory iterable of
(usually) numeric data.
The author expects that lists will be the most common data type used, but any
iterable type should be acceptable. Where necessary, functions may convert to
lists internally. Where possible, functions are expected to conserve the type
of the data values, for example, the mean of a list of Decimals should be a
Decimal rather than float.
Calculating mean, median and mode
The mean, median* and mode functions take a single mandatory
argument and return the appropriate statistic, e.g.:
>>> mean([1, 2, 3])
2.0
Functions provided are:
mean(data)
arithmetic mean of data.
median(data)
median (middle value) of data, taking the average of the two
middle values when there are an even number of values.
median_high(data)
high median of data, taking the larger of the two middle
values when the number of items is even.
median_low(data)
low median of data, taking the smaller of the two middle
values when the number of items is even.
median_grouped(data, interval=1)
50th percentile of grouped data, using interpolation.
mode(data)
most common data point.
mode is the sole exception to the rule that the data argument must be
numeric. It will also accept an iterable of nominal data, such as strings.
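A usage sketch of these functions as they shipped in the statistics module:
from statistics import mean, median, median_low, median_high, mode

data = [1, 3, 5, 8]
print(mean(data))         # 4.25
print(median(data))       # 4.0 (average of the two middle values)
print(median_low(data))   # 3
print(median_high(data))  # 5
print(mode(['red', 'blue', 'red']))  # 'red' -- nominal data is accepted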
Calculating variance and standard deviation
In order to be similar to scientific calculators, the statistics module will
include separate functions for population and sample variance and standard
deviation. All four functions have similar signatures, with a single mandatory
argument, an iterable of numeric data, e.g.:
>>> variance([1, 2, 2, 2, 3])
0.5
All four functions also accept a second, optional, argument, the mean of the
data. This is modelled on a similar API provided by the GNU Scientific
Library [18]. There are three use-cases for using this argument, in no
particular order:
The value of the mean is known a priori.
You have already calculated the mean, and wish to avoid calculating
it again.
You wish to (ab)use the variance functions to calculate the second
moment about some given point other than the mean.
In each case, it is the caller’s responsibility to ensure that the given
argument is meaningful.
Functions provided are:
variance(data, xbar=None)
sample variance of data, optionally using xbar as the sample mean.
stdev(data, xbar=None)
sample standard deviation of data, optionally using xbar as the
sample mean.
pvariance(data, mu=None)
population variance of data, optionally using mu as the population
mean.
pstdev(data, mu=None)
population standard deviation of data, optionally using mu as the
population mean.
Other functions
There is one other public function:
sum(data, start=0)
high-precision sum of numeric data.
Specification
As the proposed reference implementation is in pure Python, other Python
implementations can easily make use of the module unchanged, or adapt it as
they see fit.
What Should Be The Name Of The Module?
This will be a top-level module statistics.
There was some interest in turning math into a package, and making this a
sub-module of math, but the general consensus eventually agreed on a
top-level module. Other potential but rejected names included stats (too
much risk of confusion with existing stat module), and statslib
(described as “too C-like”).
Discussion And Resolved Issues
This proposal has been previously discussed here [21].
A number of design issues were resolved during the discussion on Python-Ideas
and the initial code review. There was a lot of concern about the addition of
yet another sum function to the standard library, see the FAQs below for
more details. In addition, the initial implementation of sum suffered from
some rounding issues and other design problems when dealing with Decimals.
Oscar Benjamin’s assistance in resolving this was invaluable.
Another issue was the handling of data in the form of iterators. The first
implementation of variance silently swapped between a one- and two-pass
algorithm, depending on whether the data was in the form of an iterator or
sequence. This proved to be a design mistake, as the calculated variance could
differ slightly depending on the algorithm used, and variance etc. were
changed to internally generate a list and always use the more accurate two-pass
implementation.
One controversial design involved the functions to calculate median, which were
implemented as attributes on the median callable, e.g. median,
median.low, median.high etc. Although there is at least one existing
use of this style in the standard library, in unittest.mock, the code
reviewers felt that this was too unusual for the standard library.
Consequently, the design has been changed to a more traditional design of
separate functions with a pseudo-namespace naming convention, median_low,
median_high, etc.
Another issue that was of concern to code reviewers was the existence of a
function calculating the sample mode of continuous data, with some people
questioning the choice of algorithm, and whether it was a sufficiently common
need to be included. So it was dropped from the API, and mode now
implements only the basic schoolbook algorithm based on counting unique values.
Another significant point of discussion was calculating statistics of
timedelta objects. Although the statistics module will not directly
support timedelta objects, it is possible to support this use-case by
converting them to numbers first using the timedelta.total_seconds method.
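A minimal sketch of that conversion:
from datetime import timedelta
from statistics import mean

delays = [timedelta(seconds=90), timedelta(minutes=2), timedelta(seconds=30)]
average = timedelta(seconds=mean(d.total_seconds() for d in delays))
print(average)   # 0:01:20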
Frequently Asked Questions
Shouldn’t this module spend time on PyPI before being considered for the standard library?
Older versions of this module have been available on PyPI [22] since 2010.
Being much simpler than numpy, it does not require many years of external
development.
Does the standard library really need yet another version of sum?
This proved to be the most controversial part of the reference implementation.
In one sense, clearly three sums is two too many. But in another sense, yes.
The reasons why the two existing versions are unsuitable are described
here [23] but the short summary is:
the built-in sum can lose precision with floats;
the built-in sum accepts any non-numeric data type that supports the +
operator, apart from strings and bytes;
math.fsum is high-precision, but coerces all arguments to float.
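The first and third points are easy to demonstrate with the existing built-ins (note that the released statistics module ultimately kept its high-precision sum as a private helper rather than the public sum described above):
import math
from decimal import Decimal

print(sum([0.1] * 10))                    # 0.9999999999999999 -- precision loss
print(math.fsum([Decimal('0.1')] * 10))   # 1.0, but the result is coerced to float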
There was some interest in “fixing” one or the other of the existing sums. If
this occurs before 3.4 feature-freeze, the decision to keep statistics.sum
can be re-considered.
Will this module be backported to older versions of Python?
The module currently targets 3.3, and I will make it available on PyPI for
3.3 for the foreseeable future. Backporting to older versions of the 3.x
series is likely (but not yet decided). Backporting to 2.7 is less likely but
not ruled out.
Is this supposed to replace numpy?
No. While it is likely to grow over the years (see open issues below) it is
not aimed to replace, or even compete directly with, numpy. Numpy is a
full-featured numeric library aimed at professionals, the nuclear reactor of
numeric libraries in the Python ecosystem. This is just a battery, as in
“batteries included”, and is aimed at an intermediate level somewhere between
“use numpy” and “roll your own version”.
Future Work
At this stage, I am unsure of the best API for multivariate statistical
functions such as linear regression, correlation coefficient, and covariance.
Possible APIs include:
Separate arguments for x and y data: function([x0, x1, ...], [y0, y1, ...])
A single argument for (x, y) data: function([(x0, y0), (x1, y1), ...]).
This API is preferred by GvR [24].
Selecting arbitrary columns from a 2D array: function([[a0, x0, y0, z0], [a1, x1, y1, z1], ...], x=1, y=2)
Some combination of the above.
In the absence of a consensus on a preferred API for multivariate stats, I will
defer including such multivariate functions until Python 3.5.
Likewise, functions for calculating probability of random variables and
inference testing (e.g. Student’s t-test) will be deferred until 3.5.
There is considerable interest in including one-pass functions that can
calculate multiple statistics from data in iterator form, without having to
convert to a list. The experimental stats package on PyPI includes
co-routine versions of statistics functions. Including these will be deferred
to 3.5.
References
[1]
https://mail.python.org/pipermail/python-dev/2010-October/104721.html
[2]
http://support.casio.com/pdf/004/CP330PLUSver310_Soft_E.pdf
[3]
Gnumeric: https://projects.gnome.org/gnumeric/functions.shtml
LibreOffice:
https://help.libreoffice.org/Calc/Statistical_Functions_Part_One
https://help.libreoffice.org/Calc/Statistical_Functions_Part_Two
https://help.libreoffice.org/Calc/Statistical_Functions_Part_Three
https://help.libreoffice.org/Calc/Statistical_Functions_Part_Four
https://help.libreoffice.org/Calc/Statistical_Functions_Part_Five
[4]
Scipy: http://scipy-central.org/
Numpy: http://www.numpy.org/
[5]
http://wiki.scipy.org/Numpy_Functions_by_Category
[6]
Tested with numpy 1.6.1 and Python 2.7.
[7]
http://www.johndcook.com/blog/2008/09/26/comparing-three-methods-of-computing-standard-deviation/
[8]
http://rosettacode.org/wiki/Standard_deviation
[9]
https://bitbucket.org/larsyencken/simplestats/src/c42e048a6625/src/basic.py
[10]
http://stackoverflow.com/questions/2341340/calculate-mean-and-variance-with-one-iteration
[11]
http://www.r-project.org/
[12]
http://msdn.microsoft.com/en-us/library/system.linq.enumerable.average.aspx
[13]
https://www.bcg.wisc.edu/webteam/support/ruby/standard_deviation
[14]
http://ruby-statsample.rubyforge.org/
[15]
http://www.php.net/manual/en/ref.stats.php
[16]
http://www.ayton.id.au/gary/it/Delphi/D_maths.htm#Delphi%20Statistical%20functions.
[17]
http://www.gnu.org/software/gsl/manual/html_node/Statistics.html
[18]
http://www.gnu.org/software/gsl/manual/html_node/Mean-and-standard-deviation-and-variance.html
[19]
http://mathworld.wolfram.com/Skewness.html
[20]
At least, tedious to those who don’t like this sort of thing.
[21]
https://mail.python.org/pipermail/python-ideas/2011-September/011524.html
[22]
https://pypi.python.org/pypi/stats/
[23]
https://mail.python.org/pipermail/python-ideas/2013-August/022630.html
[24]
https://mail.python.org/pipermail/python-dev/2013-September/128429.html
Copyright
This document has been placed in the public domain.
| Final | PEP 450 – Adding A Statistics Module To The Standard Library | Standards Track | This PEP proposes the addition of a module for common statistics functions such
as mean, median, variance and standard deviation to the Python standard
library. See also http://bugs.python.org/issue18606 |
PEP 452 – API for Cryptographic Hash Functions v2.0
Author:
A.M. Kuchling <amk at amk.ca>, Christian Heimes <christian at python.org>
Status:
Final
Type:
Informational
Created:
15-Aug-2013
Post-History:
Replaces:
247
Table of Contents
Abstract
Specification
Rationale
Changes from Version 1.0 to Version 2.0
Recommended names for common hashing algorithms
Changes
Acknowledgements
Copyright
Abstract
There are several different modules available that implement
cryptographic hashing algorithms such as MD5 or SHA. This
document specifies a standard API for such algorithms, to make it
easier to switch between different implementations.
Specification
All hashing modules should present the same interface. Additional
methods or variables can be added, but those described in this
document should always be present.
Hash function modules define one function:
new([string]) (unkeyed hashes)
new(key, [string], [digestmod]) (keyed hashes)
Create a new hashing object and return it. The first form is
for hashes that are unkeyed, such as MD5 or SHA. For keyed
hashes such as HMAC, ‘key’ is a required parameter containing
a string giving the key to use. In both cases, the optional
‘string’ parameter, if supplied, will be immediately hashed
into the object’s starting state, as if obj.update(string) was
called.
After creating a hashing object, arbitrary bytes can be fed
into the object using its update() method, and the hash value
can be obtained at any time by calling the object’s digest()
method.
Although the parameter is called ‘string’, hashing objects operate
on 8-bit data only. Both ‘key’ and ‘string’ must be a bytes-like
object (bytes, bytearray…). A hashing object may support
one-dimensional, contiguous buffers as argument, too. Text
(unicode) is no longer supported in Python 3.x. Python 2.x
implementations may take ASCII-only unicode as argument, but
portable code should not rely on the feature.
Arbitrary additional keyword arguments can be added to this
function, but if they’re not supplied, sensible default values
should be used. For example, ‘rounds’ and ‘digest_size’
keywords could be added for a hash function which supports a
variable number of rounds and several different output sizes,
and they should default to values believed to be secure.
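As an illustration, the two constructor forms as exposed by the standard library modules that follow this API (hashlib for unkeyed hashes, hmac for keyed ones):
import hashlib
import hmac

h = hashlib.new('sha256', b'payload')                     # new([string]), unkeyed
k = hmac.new(b'secret key', b'payload', hashlib.sha256)   # new(key, [string], [digestmod])
print(h.hexdigest())
print(k.hexdigest())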
Hash function modules define one variable:
digest_size: An integer value; the size of the digest produced by the
hashing objects created by this module, measured in bytes.
You could also obtain this value by creating a sample object
and accessing its ‘digest_size’ attribute, but it can be
convenient to have this value available from the module.
Hashes with a variable output size will set this variable to
None.
Hashing objects require the following attribute:
digest_size: This attribute is identical to the module-level digest_size
variable: the size of the digest produced by the hashing
object, measured in bytes. If the hash has a variable
output size, this output size must be chosen when the hashing
object is created, and this attribute must contain the
selected size. Therefore, None is not a legal value for this
attribute.
block_size: An integer value or NotImplemented; the internal block size
of the hash algorithm in bytes. The block size is used by the
HMAC module to pad the secret key to the block size, or to hash the
secret key first if it is longer than the block size. If no HMAC
algorithm is standardized for the hash algorithm, return
NotImplemented instead.
name: A text string value; the canonical, lowercase name of the hashing
algorithm. The name should be a suitable parameter for
hashlib.new.
Hashing objects require the following methods:
copy(): Return a separate copy of this hashing object. An update to
this copy won’t affect the original object.
digest(): Return the hash value of this hashing object as a bytes
object containing 8-bit data. The object is not altered in any way
by this function; you can continue updating the object after
calling this function.
hexdigest(): Return the hash value of this hashing object as a string
containing hexadecimal digits. Lowercase letters should be used
for the digits ‘a’ through ‘f’. Like the .digest() method, this
method mustn’t alter the object.
update(string): Hash bytes-like ‘string’ into the current state of the hashing
object. update() can be called any number of times during a
hashing object’s lifetime.
Hashing modules can define additional module-level functions or
object methods and still be compliant with this specification.
Here’s an example, using a module named ‘MD5’:
>>> import hashlib
>>> from Crypto.Hash import MD5
>>> m = MD5.new()
>>> isinstance(m, hashlib.CryptoHash)
True
>>> m.name
'md5'
>>> m.digest_size
16
>>> m.block_size
64
>>> m.update(b'abc')
>>> m.digest()
b'\x90\x01P\x98<\xd2O\xb0\xd6\x96?}(\xe1\x7fr'
>>> m.hexdigest()
'900150983cd24fb0d6963f7d28e17f72'
>>> MD5.new(b'abc').digest()
b'\x90\x01P\x98<\xd2O\xb0\xd6\x96?}(\xe1\x7fr'
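A complementary example showing copy() in practice (the hashlib objects used here follow this API): a common prefix can be hashed once and then branched, since the copy is fully independent.
import hashlib

base = hashlib.md5(b'common prefix ')
branch = base.copy()          # independent copy of the internal state
base.update(b'A')
branch.update(b'B')
print(base.hexdigest() != branch.hexdigest())   # True -- updates do not leak across copies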
Rationale
The digest size is measured in bytes, not bits, even though hash
algorithm sizes are usually quoted in bits; MD5 is a 128-bit
algorithm and not a 16-byte one, for example. This is because, in
the sample code I looked at, the length in bytes is often needed
(to seek ahead or behind in a file; to compute the length of an
output string) while the length in bits is rarely used.
Therefore, the burden will fall on the few people actually needing
the size in bits, who will have to multiply digest_size by 8.
It’s been suggested that the update() method would be better named
append(). However, that method is really causing the current
state of the hashing object to be updated, and update() is already
used by the md5 and sha modules included with Python, so it seems
simplest to leave the name update() alone.
The order of the constructor’s arguments for keyed hashes was a
sticky issue. It wasn’t clear whether the key should come first
or second. It’s a required parameter, and the usual convention is
to place required parameters first, but that also means that the
‘string’ parameter moves from the first position to the second.
It would be possible to get confused and pass a single argument to
a keyed hash, thinking that you’re passing an initial string to an
unkeyed hash, but it doesn’t seem worth making the interface
for keyed hashes more obscure to avoid this potential error.
Changes from Version 1.0 to Version 2.0
Version 2.0 of API for Cryptographic Hash Functions clarifies some
aspects of the API and brings it up-to-date. It also formalized aspects
that were already de facto standards and provided by most
implementations.
Version 2.0 introduces the following new attributes:
name: The name property was made mandatory by issue 18532.
block_size: The new version also specifies that the return value
NotImplemented prevents HMAC support.
Version 2.0 takes the separation of binary and text data in Python
3.0 into account. The ‘string’ argument to new() and update() as
well as the ‘key’ argument must be bytes-like objects. On Python
2.x a hashing object may also support ASCII-only unicode. The actual
name of the argument is not changed as it is part of the public API.
Code may depend on the fact that the argument is called ‘string’.
Recommended names for common hashing algorithms
algorithm     variant      recommended name
MD5                        md5
RIPEMD-160                 ripemd160
SHA-1                      sha1
SHA-2         SHA-224      sha224
              SHA-256      sha256
              SHA-384      sha384
              SHA-512      sha512
SHA-3         SHA-3-224    sha3_224
              SHA-3-256    sha3_256
              SHA-3-384    sha3_384
              SHA-3-512    sha3_512
WHIRLPOOL                  whirlpool
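These recommended names are the strings accepted by hashlib.new(); for example (the sha3_* variants require Python 3.6+, and whirlpool is only available when the underlying OpenSSL build provides it):
import hashlib

print(hashlib.new('sha3_256', b'abc').hexdigest())
print(sorted(hashlib.algorithms_guaranteed))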
Changes
2001-09-17: Renamed clear() to reset(); added digest_size attribute
to objects; added .hexdigest() method.
2001-09-20: Removed reset() method completely.
2001-09-28: Set digest_size to None for variable-size hashes.
2013-08-15: Added block_size and name attributes; clarified that
‘string’ actually refers to bytes-like objects.
Acknowledgements
Thanks to Aahz, Andrew Archibald, Rich Salz, Itamar
Shtull-Trauring, and the readers of the python-crypto list for
their comments on this PEP.
Copyright
This document has been placed in the public domain.
| Final | PEP 452 – API for Cryptographic Hash Functions v2.0 | Informational | There are several different modules available that implement
cryptographic hashing algorithms such as MD5 or SHA. This
document specifies a standard API for such algorithms, to make it
easier to switch between different implementations. |
PEP 460 – Add binary interpolation and formatting
Author:
Antoine Pitrou <solipsis at pitrou.net>
Status:
Withdrawn
Type:
Standards Track
Created:
06-Jan-2014
Python-Version:
3.5
Table of Contents
Abstract
Rationale
Binary formatting features
Supported features
Unsupported features
Criticisms
Other proposals
A new type datatype
Resolution
References
Copyright
Abstract
This PEP proposes to add minimal formatting operations to bytes and
bytearray objects. The proposed additions are:
bytes % ... and bytearray % ... for percent-formatting,
similar in syntax to percent-formatting on str objects
(accepting a single object, a tuple or a dict).
bytes.format(...) and bytearray.format(...) for a formatting
similar in syntax to str.format() (accepting positional as well as
keyword arguments).
bytes.format_map(...) and bytearray.format_map(...) for an
API similar to str.format_map(...), with the same formatting
syntax and semantics as bytes.format() and bytearray.format().
Rationale
In Python 2, str % args and str.format(args) allow the formatting
and interpolation of bytestrings. This feature has commonly been used
for the assembling of protocol messages when protocols are known to use
a fixed encoding.
Python 3 generally mandates that text be stored and manipulated as unicode
(i.e. str objects, not bytes). In some cases, though, it makes
sense to manipulate bytes objects directly. Typical usage is binary
network protocols, where you may want to interpolate and assemble several
bytes objects (some of them literals, some of them computed) to produce
complete protocol messages. For example, protocols such as HTTP or SIP
have headers with ASCII names and opaque “textual” values using a varying
and/or sometimes ill-defined encoding. Moreover, those headers can be
followed by a binary body… which can be chunked and decorated with ASCII
headers and trailers!
While there are reasonably efficient ways to accumulate binary data
(such as using a bytearray object, the bytes.join method or
even io.BytesIO), none of them leads to the kind of readable and
intuitive code that is produced by a %-formatted or {}-formatted template
and a formatting operation.
Binary formatting features
Supported features
In this proposal, percent-formatting for bytes and bytearray
supports the following features:
Looking up formatting arguments by position as well as by name (i.e.,
%s as well as %(name)s).
%s will try to get a Py_buffer on the given value, and fall back
on calling __bytes__. The resulting binary data is inserted at
the given point in the string. This is expected to work with bytes,
bytearray and memoryview objects (as well as a couple others such
as pathlib’s path objects).
%c will accept an integer between 0 and 255, and insert a byte of the
given value.
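As it happens, the %s and %c behaviour sketched here matches what PEP 461 later delivered in Python 3.5, so a snippet along these lines runs today even though this PEP was withdrawn:
payload = memoryview(b'\x00\x01\x02')
msg = b'HDR %s END' % payload              # %s pulls bytes in via the buffer protocol
sep = b'%c' % 58                           # %c turns the int 58 into the byte b':'
named = b'%(body)s' % {b'body': b'abc'}    # name-based lookup uses bytes keys
print(msg, sep, named)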
Braces-formatting for bytes and bytearray supports the following
features:
All the kinds of argument lookup supported by str.format() (explicit
positional lookup, auto-incremented positional lookup, keyword lookup,
attribute lookup, etc.)
Insertion of binary data when no modifier or layout is specified
(e.g. {}, {0}, {name}). This has the same semantics as
%s for percent-formatting (see above).
The c modifier will accept an integer between 0 and 255, and insert a
byte of the given value (same as %c above).
Unsupported features
All other features present in formatting of str objects (either
through the percent operator or the str.format() method) are
unsupported. Those features imply treating the recipient of the
operator or method as text, which goes counter to the text / bytes
separation (for example, accepting %d as a format code would imply
that the bytes object really is an ASCII-compatible text string).
Amongst those unsupported features are not only most type-specific
format codes, but also the various layout specifiers such as padding
or alignment. Besides, str objects are not acceptable as arguments
to the formatting operations, even when using e.g. the %s format code.
__format__ isn’t called.
Criticisms
The development cost and maintenance cost.
In 3.3 encoding to ASCII or latin-1 is as fast as memcpy (but it still
creates a separate object).
Developers will have to work around the lack of binary formatting anyway,
if they want to support Python 3.4 and earlier.
bytes.join() is consistently faster than format to join bytes strings
(XXX is it?).
Formatting functions could be implemented in a third party module,
rather than added to builtin types.
Other proposals
A new type datatype
It was proposed to create a new datatype specialized for “network
programming”. The authors of this PEP believe this is counter-productive.
Python 3 already has several major types dedicated to manipulation of
binary data: bytes, bytearray, memoryview, io.BytesIO.
Adding yet another type would make things more confusing for users, and
interoperability between libraries more painful (also potentially
sub-optimal, due to the necessary conversions).
Moreover, not one type would be needed, but two: one immutable type (to
allow for hashing), and one mutable type (as efficient accumulation is
often necessary when working with network messages).
Resolution
This PEP is made obsolete by the acceptance
of PEP 461, which introduces a more extended formatting language for
bytes objects in conjunction with the modulo operator.
References
Issue #3982: support .format for bytes
Mercurial project
Twisted project
Documentation of Python 2 formatting (str % args)
Documentation of Python 2 formatting (str.format)
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 460 – Add binary interpolation and formatting | Standards Track | This PEP proposes to add minimal formatting operations to bytes and
bytearray objects. The proposed additions are: |
PEP 461 – Adding % formatting to bytes and bytearray
Author:
Ethan Furman <ethan at stoneleaf.us>
Status:
Final
Type:
Standards Track
Created:
13-Jan-2014
Python-Version:
3.5
Post-History:
14-Jan-2014, 15-Jan-2014, 17-Jan-2014, 22-Feb-2014, 25-Mar-2014,
27-Mar-2014
Resolution:
Python-Dev message
Table of Contents
Abstract
Rationale
Motivation
Proposed semantics for bytes and bytearray formatting
%-interpolation
Compatibility with Python 2
Proposed variations
Objections
Footnotes
Copyright
Abstract
This PEP proposes adding % formatting operations similar to Python 2’s str
type to bytes and bytearray [1] [2].
Rationale
While interpolation is usually thought of as a string operation, there are
cases where interpolation on bytes or bytearrays makes sense, and the
work needed to make up for this missing functionality detracts from the overall
readability of the code.
Motivation
With Python 3 and the split between str and bytes, one small but
important area of programming became slightly more difficult, and much more
painful – wire format protocols [3].
This area of programming is characterized by a mixture of binary data and
ASCII compatible segments of text (aka ASCII-encoded text). Bringing back a
restricted %-interpolation for bytes and bytearray will aid both in
writing new wire format code, and in porting Python 2 wire format code.
Common use-cases include dbf and pdf file formats, email
formats, and FTP and HTTP communications, among many others.
Proposed semantics for bytes and bytearray formatting
%-interpolation
All the numeric formatting codes (d, i, o, u, x, X,
e, E, f, F, g, G, and any that are subsequently added
to Python 3) will be supported, and will work as they do for str, including
the padding, justification and other related modifiers (currently #, 0,
-, space, and + (plus any added to Python 3)). The only
non-numeric codes allowed are c, b, a, and s (which is a
synonym for b).
For the numeric codes, the only difference between str and bytes (or
bytearray) interpolation is that the results from these codes will be
ASCII-encoded text, not unicode. In other words, for any numeric formatting
code %x:
b"%x" % val
is equivalent to:
("%x" % val).encode("ascii")
Examples:
>>> b'%4x' % 10
b'   a'
>>> b'%#4x' % 10
b' 0xa'
>>> b'%04X' % 10
b'000A'
%c will insert a single byte, either from an int in range(256), or from
a bytes argument of length 1, not from a str.
Examples:
>>> b'%c' % 48
b'0'
>>> b'%c' % b'a'
b'a'
%b will insert a series of bytes. These bytes are collected in one of two
ways:
input type supports Py_buffer [4]?
use it to collect the necessary bytes
input type is something else?
use its __bytes__ method [5]; if there isn’t one, raise a TypeError
In particular, %b will not accept numbers nor str. str is rejected
as the string to bytes conversion requires an encoding, and we are refusing to
guess; numbers are rejected because:
what makes a number is fuzzy (float? Decimal? Fraction? some user type?)
allowing numbers would lead to ambiguity between numbers and textual
representations of numbers (3.14 vs ‘3.14’)
given the nature of wire formats, explicit is definitely better than implicit
%s is included as a synonym for %b for the sole purpose of making 2/3 code
bases easier to maintain. Python 3 only code should use %b.
Examples:
>>> b'%b' % b'abc'
b'abc'
>>> b'%b' % 'some string'.encode('utf8')
b'some string'
>>> b'%b' % 3.14
Traceback (most recent call last):
...
TypeError: b'%b' does not accept 'float'
>>> b'%b' % 'hello world!'
Traceback (most recent call last):
...
TypeError: b'%b' does not accept 'str'
%a will give the equivalent of
repr(some_obj).encode('ascii', 'backslashreplace') on the interpolated
value. Use cases include developing a new protocol and writing landmarks
into the stream; debugging data going into an existing protocol to see if
the problem is the protocol itself or bad data; a fall-back for a serialization
format; or any situation where defining __bytes__ would not be appropriate
but a readable/informative representation is needed [6].
%r is included as a synonym for %a for the sole purpose of making 2/3
code bases easier to maintain. Python 3 only code should use %a [7].
Examples:
>>> b'%a' % 3.14
b'3.14'
>>> b'%a' % b'abc'
b"b'abc'"
>>> b'%a' % 'def'
b"'def'"
Compatibility with Python 2
As noted above, %s and %r are being included solely to help ease
migration from, and/or have a single code base with, Python 2. This is
important as there are modules both in the wild and behind closed doors that
currently use the Python 2 str type as a bytes container, and hence
are using %s as a bytes interpolator.
However, %b and %a should be used in new, Python 3 only code, so %s
and %r will immediately be deprecated, but not removed from the 3.x series
[7].
Proposed variations
It has been proposed to automatically use .encode('ascii','strict') for
str arguments to %b.
Rejected as this would lead to intermittent failures. Better to have the
operation always fail so the trouble-spot can be correctly fixed.
It has been proposed to have %b return the ascii-encoded repr when the
value is a str (b'%b' % 'abc' --> b"'abc'").
Rejected as this would lead to hard to debug failures far from the problem
site. Better to have the operation always fail so the trouble-spot can be
easily fixed.
Originally this PEP also proposed adding format-style formatting, but it was
decided that format and its related machinery were all strictly text (aka
str) based, and it was dropped.
Various new special methods were proposed, such as __ascii__,
__format_bytes__, etc.; such methods are not needed at this time, but can
be visited again later if real-world use shows deficiencies with this solution.
A competing PEP, PEP 460 Add binary interpolation and formatting,
also exists.
Objections
The objections raised against this PEP were mainly variations on two themes:
the bytes and bytearray types are for pure binary data, with no
assumptions about encodings
offering %-interpolation that assumes an ASCII encoding will be an
attractive nuisance and lead us back to the problems of the Python 2
str/unicode text model
As was seen during the discussion, bytes and bytearray are also used
for mixed binary data and ASCII-compatible segments: file formats such as
dbf and pdf, network protocols such as ftp and email, etc.
bytes and bytearray already have several methods which assume an ASCII
compatible encoding. upper(), isalpha(), and expandtabs() to name
just a few. %-interpolation, with its very restricted mini-language, will not
be any more of a nuisance than the already existing methods.
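A couple of those existing methods in action:
print(b'dbf header'.upper())      # b'DBF HEADER'
print(b'a\tb'.expandtabs(4))      # b'a   b'
print(b'ftp'.isalpha())           # True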
Some have objected to allowing the full range of numeric formatting codes with
the claim that decimal alone would be sufficient. However, at least two
formats (dbf and pdf) make use of non-decimal numbers.
Footnotes
[1]
http://docs.python.org/2/library/stdtypes.html#string-formatting
[2]
neither string.Template, format, nor str.format are under consideration
[3]
https://mail.python.org/pipermail/python-dev/2014-January/131518.html
[4]
http://docs.python.org/3/c-api/buffer.html
examples: memoryview, array.array, bytearray, bytes
[5]
http://docs.python.org/3/reference/datamodel.html#object.__bytes__
[6]
https://mail.python.org/pipermail/python-dev/2014-February/132750.html
[7]
http://bugs.python.org/issue23467 – originally %r was not allowed,
but was added for consistency during the 3.5 alpha stage.
Copyright
This document has been placed in the public domain.
| Final | PEP 461 – Adding % formatting to bytes and bytearray | Standards Track | This PEP proposes adding % formatting operations similar to Python 2’s str
type to bytes and bytearray [1] [2]. |
PEP 462 – Core development workflow automation for CPython
Author:
Alyssa Coghlan <ncoghlan at gmail.com>
Status:
Withdrawn
Type:
Process
Requires:
474
Created:
23-Jan-2014
Post-History:
25-Jan-2014, 27-Jan-2014, 01-Feb-2015
Table of Contents
Abstract
PEP Withdrawal
Rationale for changes to the core development workflow
Current Tools
Proposal
Deferred Proposals
Suggested Variants
Perceived Benefits
Technical Challenges
Kallithea vs Gerrit
Mercurial vs Gerrit/git
Buildbot vs Jenkins
Handling of maintenance branches
Handling of security branches
Handling of NEWS file updates
Stability of “stable” Buildbot slaves
Intermittent test failures
Custom Mercurial client workflow support
Social Challenges
Practical Challenges
Open Questions
Next Steps
Acknowledgements
Copyright
Abstract
This PEP proposes investing in automation of several of the tedious,
time-consuming activities that are currently required for the core development
team to incorporate changes into CPython. This proposal is intended to
allow core developers to make more effective use of the time they have
available to contribute to CPython, which should also result in an improved
experience for other contributors that are reliant on the core team to get
their changes incorporated.
PEP Withdrawal
This PEP has been withdrawn by the author
in favour of the GitLab based proposal in PEP 507.
If anyone else would like to take over championing this PEP, contact the
core-workflow mailing list
Rationale for changes to the core development workflow
The current core developer workflow to merge a new feature into CPython
on a POSIX system “works” as follows:
If applying a change submitted to bugs.python.org by another user, first
check they have signed the PSF Contributor Licensing Agreement. If not,
request that they sign one before continuing with merging the change.
Apply the change locally to a current checkout of the main CPython
repository (the change will typically have been discussed and reviewed
as a patch on bugs.python.org first, but this step is not currently
considered mandatory for changes originating directly from core
developers).
Run the test suite locally, at least make test or
./python -m test (depending on system specs, this takes a few
minutes in the default configuration, but substantially longer if all
optional resources, like external network access, are enabled).
Run make patchcheck to fix any whitespace issues and as a reminder
of other changes that may be needed (such as updating Misc/ACKS or
adding an entry to Misc/NEWS)
Commit the change and push it to the main repository. If hg indicates
this would create a new head in the remote repository, run
hg pull --rebase (or an equivalent). Theoretically, you should
rerun the tests at this point, but it’s very tempting to skip that
step.
After pushing, monitor the stable buildbots
for any new failures introduced by your change. In particular, developers
on POSIX systems will often break the Windows buildbots, and vice-versa.
Less commonly, developers on Linux or Mac OS X may break other POSIX
systems.
The steps required on Windows are similar, but the exact commands used
will be different.
Rather than being simpler, the workflow for a bug fix is more complicated
than that for a new feature! New features have the advantage of only being
applied to the default branch, while bug fixes also need to be considered
for inclusion in maintenance branches.
If a bug fix is applicable to Python 2.7, then it is also separately
applied to the 2.7 branch, which is maintained as an independent head
in Mercurial
If a bug fix is applicable to the current 3.x maintenance release, then
it is first applied to the maintenance branch and then merged forward
to the default branch. Both branches are pushed to hg.python.org at the
same time.
Documentation patches are simpler than functional patches, but not
hugely so - the main benefit is only needing to check the docs build
successfully rather than running the test suite.
I would estimate that even when everything goes smoothly, it would still
take me at least 20-30 minutes to commit a bug fix patch that applies
cleanly. Given that it should be possible to automate several of these
tasks, I do not believe our current practices are making effective use
of scarce core developer resources.
There are many, many frustrations involved with this current workflow, and
they lead directly to some undesirable development practices.
Much of this overhead is incurred on a per-patch applied basis. This
encourages large commits, rather than small isolated changes. The time
required to commit a 500 line feature is essentially the same as that
needed to commit a 1 line bug fix - the additional time needed for the
larger change appears in any preceding review rather than as part of the
commit process.
The additional overhead of working on applying bug fixes creates an
additional incentive to work on new features instead, and new features
are already inherently more interesting to work on - they don’t need
workflow difficulties giving them a helping hand!
Getting a preceding review on bugs.python.org is additional work,
creating an incentive to commit changes directly, increasing the reliance
on post-review on the python-checkins mailing list.
Patches on the tracker that are complete, correct and ready to merge may
still languish for extended periods awaiting a core developer with the
time to devote to getting it merged.
The risk of push races (especially when pushing a merged bug fix) creates
a temptation to skip doing full local test runs (especially after a push
race has already been encountered once), increasing the chance of
breaking the buildbots.
The buildbots are sometimes red for extended periods, introducing errors
into local test runs, and also meaning that they sometimes fail to serve
as a reliable indicator of whether or not a patch has introduced cross
platform issues.
Post-conference development sprints are a nightmare, as they collapse
into a mire of push races. It’s tempting to just leave patches on the
tracker until after the sprint is over and then try to clean them up
afterwards.
There are also many, many opportunities for core developers to make
mistakes that inconvenience others, both in managing the Mercurial branches
and in breaking the buildbots without being in a position to fix them
promptly. This both makes the existing core development team cautious in
granting new developers commit access, as well as making those new
developers cautious about actually making use of their increased level of
access.
There are also some incidental annoyances (like keeping the NEWS file up to
date) that will also be necessarily addressed as part of this proposal.
One of the most critical resources of a volunteer-driven open source project
is the emotional energy of its contributors. The current approach to change
incorporation doesn’t score well on that front for anyone:
For core developers, the branch wrangling for bug fixes is delicate and
easy to get wrong. Conflicts on the NEWS file and push races when
attempting to upload changes add to the irritation of something most of
us aren’t being paid to spend time on (and for those that are, contributing
to CPython is likely to be only one of our responsibilities). The time we
spend actually getting a change merged is time we’re not spending coding
additional changes, writing or updating documentation or reviewing
contributions from others.
Red buildbots make life difficult for other developers (since a local
test failure may not be due to anything that developer did), release
managers (since they may need to enlist assistance cleaning up test
failures prior to a release) and for the developers themselves (since
it creates significant pressure to fix any failures we inadvertently
introduce right now, rather than at a more convenient time, as well
as potentially making hg bisect more difficult to use if
hg annotate isn’t sufficient to identify the source of a new failure).
For other contributors, a core developer spending time actually getting
changes merged is a developer that isn’t reviewing and discussing patches
on the issue tracker or otherwise helping others to contribute effectively.
It is especially frustrating for contributors that are accustomed to the
simplicity of a developer just being able to hit “Merge” on a pull
request that has already been automatically tested in the project’s CI
system (which is a common workflow on sites like GitHub and BitBucket), or
where the post-review part of the merge process is fully automated (as is
the case for OpenStack).
Current Tools
The following tools are currently used to manage various parts of the
CPython core development workflow.
Mercurial (hg.python.org) for version control
Roundup (bugs.python.org) for issue tracking
Rietveld (also hosted on bugs.python.org) for code review
Buildbot (buildbot.python.org) for automated testing
This proposal suggests replacing the use of Rietveld for code review with
the more full-featured Kallithea-based forge.python.org service proposed in
PEP 474. Guido has indicated that the original Rietveld implementation was
primarily intended as a public demonstration application for Google App
Engine, and switching to Kallithea will address some of the issues with
identifying intended target branches that arise when working with patch files
on Roundup and the associated reviews in the integrated Rietveld instance.
It also suggests the addition of new tools in order to automate
additional parts of the workflow, as well as a critical review of the
remaining tools to see which, if any, may be candidates for replacement.
Proposal
The essence of this proposal is that CPython aim to adopt a “core reviewer”
development model, similar to that used by the OpenStack project.
The workflow problems experienced by the CPython core development team are
not unique. The OpenStack infrastructure team have come up with a well
designed automated workflow that is designed to ensure:
once a patch has been reviewed, further developer involvement is needed
only if the automated tests fail prior to merging
patches never get merged without being tested relative to the current
state of the branch
the main development branch always stays green. Patches that do not pass
the automated tests do not get merged
If a core developer wants to tweak a patch prior to merging, they download
it from the review tool, modify and upload it back to the review tool
rather than pushing it directly to the source code repository.
The core of this workflow is implemented using a tool called Zuul, a
Python web service created specifically for the OpenStack project, but
deliberately designed with a plugin based trigger and action system to make
it easier to adapt to alternate code review systems, issue trackers and
CI systems. James Blair of the OpenStack infrastructure team provided
an excellent overview of Zuul at linux.conf.au 2014.
While Zuul handles several workflows for OpenStack, the specific one of
interest for this PEP is the “merge gating” workflow.
For this workflow, Zuul is configured to monitor the Gerrit code review
system for patches which have been marked as “Approved”. Once it sees
such a patch, Zuul takes it, and combines it into a queue of “candidate
merges”. It then creates a pipeline of test runs that execute in parallel in
Jenkins (in order to allow more than 24 commits a day when a full test run
takes the better part of an hour), and are merged as they pass (and as all
the candidate merges ahead of them in the queue pass). If a patch fails the
tests, Zuul takes it out of the queue, cancels any test runs after that patch in
the queue, and rebuilds the queue without the failing patch.
If a developer looks at a test which failed on merge and determines that it
was due to an intermittent failure, they can then resubmit the patch for
another attempt at merging.
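A deliberately simplified toy of that queue behaviour (Zuul itself tests candidates speculatively and in parallel; the names below are illustrative only):
def merge_gate(approved, tests_pass):
    # Merge approved changes in order; a failing change is dropped from the
    # queue without blocking the changes behind it, so the branch stays green.
    merged = []
    for change in approved:
        if tests_pass(merged + [change]):   # tested against the current branch state
            merged.append(change)
        # else: evicted; a developer may resubmit after investigating the failure
    return merged

print(merge_gate(['fix-1', 'bad-2', 'feat-3'],
                 lambda state: 'bad-2' not in state))   # ['fix-1', 'feat-3']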
To adapt this process to CPython, it should be feasible to have Zuul monitor
Kallithea for approved pull requests (which may require a feature addition in
Kallithea), submit them to Buildbot for testing on the stable buildbots, and
then merge the changes appropriately in Mercurial. This idea poses a few
technical challenges, which have their own section below.
For CPython, I don’t believe we will need to take advantage of Zuul’s
ability to execute tests in parallel (certainly not in the initial
iteration - if we get to a point where serial testing of patches by the
merge gating system is our primary bottleneck rather than having the
people we need in order to be able to review and approve patches, then
that will be a very good day).
However, the merge queue itself is a very powerful concept that should
directly address several of the issues described in the Rationale above.
Deferred Proposals
The OpenStack team also use Zuul to coordinate several other activities:
Running preliminary “check” tests against patches posted to Gerrit.
Creation of updated release artefacts and republishing documentation when
changes are merged
The Elastic recheck feature that uses ElasticSearch in conjunction with
a spam filter to monitor test output and suggest the specific intermittent
failure that may have caused a test to fail, rather than requiring users
to search logs manually
While these are possibilities worth exploring in the future (and one of the
possible benefits I see to seeking closer coordination with the OpenStack
Infrastructure team), I don’t see them as offering quite the same kind of
fundamental workflow improvement that merge gating appears to provide.
However, if we find we are having too many problems with intermittent test
failures in the gate, then introducing the “Elastic recheck” feature may
need to be considered as part of the initial deployment.
Suggested Variants
Terry Reedy has suggested doing an initial filter which specifically looks
for approved documentation-only patches (~700 of the 4000+ open CPython
issues are pure documentation updates). This approach would avoid several
of the issues related to flaky tests and cross-platform testing, while
still allowing the rest of the automation flows to be worked out (such as
how to push a patch into the merge queue).
The key downside to this approach is that Zuul wouldn’t have complete
control of the merge process as it usually expects, so there would
potentially be additional coordination needed around that.
It may be worth keeping this approach as a fallback option if the initial
deployment proves to have more trouble with test reliability than is
anticipated.
It would also be possible to tweak the merge gating criteria such that it
doesn’t run the test suite if it detects that the patch hasn’t modified any
files outside the “Docs” tree, and instead only checks that the documentation
builds without errors.
As yet another alternative, it may be reasonable to move some parts of the
documentation (such as the tutorial and the HOWTO guides) out of the main
source repository and manage them using the simpler pull request based model
described in PEP 474.
Perceived Benefits
The benefits of this proposal accrue most directly to the core development
team. First and foremost, it means that once we mark a patch as “Approved”
in the updated code review system, we’re usually done. The extra 20-30
minutes (or more) of actually applying the patch, running the tests and
merging it into Mercurial would all be orchestrated by Zuul. Push races
would also be a thing of the past - if lots of core developers are
approving patches at a sprint, then that just means the queue gets
deeper in Zuul, rather than developers getting frustrated trying to
merge changes and failing. Test failures would still happen, but they
would result in the affected patch being removed from the merge queue,
rather than breaking the code in the main repository.
With the bulk of the time investment moved to the review process, this
also encourages “development for reviewability” - smaller, easier to review
patches, since the overhead of running the tests multiple times will be
incurred by Zuul rather than by the core developers.
However, removing this time sink from the core development team should also
improve the experience of CPython development for other contributors, as it
eliminates several of the opportunities for patches to get “dropped on the
floor”, as well as increasing the time core developers are likely to have
available for reviewing contributed patches.
Another example of benefits to other contributors is that when a sprint
aimed primarily at new contributors is running with just a single core
developer present (such as the sprints at PyCon AU for the last
few years), the merge queue would allow that developer to focus more of
their time on reviewing patches and helping the other contributors at the
sprint, since accepting a patch for inclusion would now be a single click
in the Kallithea UI, rather than the relatively time-consuming process that
it is currently. Even when multiple core developers are present, it is
better to enable them to spend their time and effort on interacting with
the other sprint participants than it is on things that are sufficiently
mechanical that a computer can (and should) handle them.
With most of the ways to make a mistake when committing a change
automated out of existence, there are also substantially fewer new things to
learn when a contributor is nominated to become a core developer. This
should have a dual benefit, both in making the existing core developers more
comfortable with granting that additional level of responsibility, and in
making new contributors more comfortable with exercising it.
Finally, a more stable default branch in CPython makes it easier for
other Python projects to conduct continuous integration directly against the
main repo, rather than having to wait until we get into the release
candidate phase of a new release. At the moment, setting up such a system
isn’t particularly attractive, as it would need to include an additional
mechanism to wait until CPython’s own Buildbot fleet indicated that the
build was in a usable state. With the proposed merge gating system, the
trunk always remains usable.
Technical Challenges
Adapting Zuul from the OpenStack infrastructure to the CPython
infrastructure will at least require the development of additional
Zuul trigger and action plugins, and may require additional development
in some of our existing tools.
Kallithea vs Gerrit
Kallithea does not currently include a voting/approval feature that is
equivalent to Gerrit’s. For CPython, we wouldn’t need anything as
sophisticated as Gerrit’s voting system - a simple core-developer-only
“Approved” marker to trigger action from Zuul should suffice. The
core-developer-or-not flag is available in Roundup, as is the flag
indicating whether or not the uploader of a patch has signed a PSF
Contributor Licensing Agreement, which may require further development to
link contributor accounts between the Kallithea instance and Roundup.
Some of the existing Zuul triggers work by monitoring for particular comments
(in particular, recheck/reverify comments to ask Zuul to try merging a
change again if it was previously rejected due to an unrelated intermittent
failure). We will likely also want similar explicit triggers for Kallithea.
The current Zuul plugins for Gerrit work by monitoring the Gerrit activity
stream for particular events. If Kallithea has no equivalent, we will need
to add something suitable for the events we would like to trigger on.
There would also be development effort needed to create a Zuul plugin
that monitors Kallithea activity rather than Gerrit.
Mercurial vs Gerrit/git
Gerrit uses git as the actual storage mechanism for patches, and
automatically handles merging of approved patches. By contrast, Kallithea
uses the RhodeCode-created vcs library as
an abstraction layer over specific DVCS implementations (with Mercurial and
git backends currently available).
Zuul is also directly integrated with git for patch manipulation - as far
as I am aware, this part of the design currently isn’t pluggable. However,
at PyCon US 2014, the Mercurial core developers at the sprints expressed
some interest in collaborating with the core development team and the Zuul
developers on enabling the use of Zuul with Mercurial in addition to git.
As Zuul is itself a Python application, migrating it to use the same DVCS
abstraction library as RhodeCode and Kallithea may be a viable path towards
achieving that.
Buildbot vs Jenkins
Zuul’s interaction with the CI system is also pluggable, using Gearman
as the preferred interface.
Accordingly, adapting the CI jobs to run in Buildbot rather than Jenkins
should just be a matter of writing a Gearman client that can process the
requests from Zuul and pass them on to the Buildbot master. Zuul uses the
pure Python gear client library to
communicate with Gearman, and this library should also be useful to handle
the Buildbot side of things.
Note that, in the initial iteration, I am proposing that we do not
attempt to pipeline test execution. This means Zuul would be running in
a very simple mode where only the patch at the head of the merge queue
is being tested on the Buildbot fleet, rather than potentially testing
several patches in parallel. I am picturing something equivalent to
requesting a forced build from the Buildbot master, and then waiting for
the result to come back before moving on to the second patch in the queue.
If we ultimately decide that this is not sufficient, and we need to start
using the CI pipelining features of Zuul, then we may need to look at moving
the test execution to dynamically provisioned cloud images, rather than
relying on volunteer maintained statically provisioned systems as we do
currently. The OpenStack CI infrastructure team are exploring the idea of
replacing their current use of Jenkins masters with a simpler pure Python
test runner, so if we find that we can’t get Buildbot to effectively
support the pipelined testing model, we’d likely participate in that
effort rather than setting up a Jenkins instance for CPython.
In this case, the main technical risk would be a matter of ensuring we
support testing on platforms other than Linux (as our stable buildbots
currently cover Windows, Mac OS X, FreeBSD and OpenIndiana in addition to a
couple of different Linux variants).
In such a scenario, the Buildbot fleet would still have a place in doing
“check” runs against the master repository (either periodically or for
every commit), even if it did not play a part in the merge gating process.
More unusual configurations (such as building without threads, or without
SSL/TLS support) would likely still be handled that way rather than being
included in the gate criteria (at least initially, anyway).
Handling of maintenance branches
The OpenStack project largely leaves the question of maintenance branches
to downstream vendors, rather than handling it directly. This means there
are questions to be answered regarding how we adapt Zuul to handle our
maintenance branches.
Python 2.7 can be handled easily enough by treating it as a separate patch
queue. This would be handled natively in Kallithea by submitting separate
pull requests in order to update the Python 2.7 maintenance branch.
The Python 3.x maintenance branches are potentially more complicated. My
current recommendation is to simply stop using Mercurial merges to manage
them, and instead treat them as independent heads, similar to the Python
2.7 branch. Separate pull requests would need to be submitted for the active
Python 3 maintenance branch and the default development branch. The
downside of this approach is that it increases the risk that a fix is merged
only to the maintenance branch without also being submitted to the default
branch, so we may want to design some additional tooling that ensures that
every maintenance branch pull request either has a corresponding default
branch pull request prior to being merged, or else has an explicit disclaimer
indicating that it is only applicable to that branch and doesn’t need to be
ported forward to later branches.
Such an approach has the benefit of adjusting relatively cleanly to the
intermittent periods where we have two active Python 3 maintenance branches.
This issue does suggest some potential user interface ideas for Kallithea,
where it may be desirable to be able to clone a pull request in order to be
able to apply it to a second branch.
Handling of security branches
For simplicity’s sake, I would suggest leaving the handling of
security-fix only branches alone: the release managers for those branches
would continue to backport specific changes manually. The only change is
that they would be able to use the Kallithea pull request workflow to do the
backports if they would like others to review the updates prior to merging
them.
Handling of NEWS file updates
Our current approach to handling NEWS file updates regularly results in
spurious conflicts when merging bug fixes forward from an active maintenance
branch to a later branch.
Issue #18967 discusses some
possible improvements in that area, which would be beneficial regardless
of whether or not we adopt Zuul as a workflow automation tool.
Stability of “stable” Buildbot slaves
Instability of the nominally stable buildbots has a substantially larger
impact under this proposal. We would need to ensure we’re genuinely happy
with each of those systems gating merges to the development branches, or
else move them to “unstable” status.
Intermittent test failures
Some tests, especially timing tests, exhibit intermittent failures on the
existing Buildbot fleet. In particular, test systems running as VMs may
sometimes exhibit timing failures when the VM host is under higher than
normal load.
The OpenStack CI infrastructure includes a number of additional features to
help deal with intermittent failures, the most basic of which is simply
allowing developers to request that merging a patch be tried again when the
original failure appears to be due to a known intermittent failure (whether
that intermittent failure is in OpenStack itself or just in a flaky test).
The more sophisticated Elastic recheck feature may be worth considering,
especially since the output of the CPython test suite is substantially
simpler than that from OpenStack’s more complex multi-service testing, and
hence likely even more amenable to automated analysis.
Custom Mercurial client workflow support
One useful part of the OpenStack workflow is the “git review” plugin,
which makes it relatively easy to push a branch from a local git clone up
to Gerrit for review.
PEP 474 mentions a draft custom Mercurial
extension
that automates some aspects of the existing CPython core development workflow.
As part of this proposal, that custom extension would be extended to work
with the new Kallithea based review workflow in addition to the legacy
Roundup/Rietveld based review workflow.
Social Challenges
The primary social challenge here is getting the core development team to
change their practices. However, the tedious-but-necessary steps that are
automated by the proposal should create a strong incentive for the
existing developers to go along with the idea.
I believe three specific features may be needed to assure existing
developers that there are no downsides to the automation of this workflow:
Only requiring approval from a single core developer to incorporate a
patch. This could be revisited in the future, but we should preserve the
status quo for the initial rollout.
Explicitly stating that core developers remain free to approve their own
patches, except during the release candidate phase of a release. This
could be revisited in the future, but we should preserve the status quo
for the initial rollout.
Ensuring that at least release managers have a “merge it now” capability
that allows them to force a particular patch to the head of the merge
queue. Using a separate clone for release preparation may be sufficient
for this purpose. Longer term, automatic merge gating may also allow for
more automated preparation of release artefacts as well.
Practical Challenges
The PSF runs its own directly and indirectly sponsored workflow
infrastructure primarily due to past experience with unacceptably poor
performance and inflexibility of infrastructure provided for free to the
general public. CPython development was originally hosted on SourceForge,
with source control moved to self hosting when SF was both slow to offer
Subversion support and suffering from CVS performance issues (see PEP 347),
while issue tracking later moved to the open source Roundup issue tracker
on dedicated sponsored hosting (from Upfront Systems), due to a combination
of both SF performance issues and general usability issues with the SF
tracker at the time (the outcome and process for the new tracker selection
were captured on the python.org wiki rather than in a PEP).
Accordingly, proposals that involve setting ourselves up for “SourceForge
usability and reliability issues, round two” will face significant
opposition from at least some members of the CPython core development team
(including the author of this PEP). This proposal respects that history by
recommending only tools that are available for self-hosting as sponsored
or PSF funded infrastructure, and are also open source Python projects that
can be customised to meet the needs of the CPython core development team.
However, for this proposal to be a success (if it is accepted), we need to
understand how we are going to carry out the necessary configuration,
customisation, integration and deployment work.
The last attempt at adding a new piece to the CPython support infrastructure
(speed.python.org) has unfortunately foundered: the core developers and PSF
board members involved lacked the time to drive the project, and it proved
difficult to bring someone else up to speed to lead
the activity (the hardware donated to that project by HP is currently in
use to support PyPy instead, but the situation highlights some
of the challenges of relying on volunteer labour with many other higher
priority demands on their time to steer projects to completion).
Even ultimately successful past projects, such as the source control
migrations from CVS to Subversion and from Subversion to Mercurial, the
issue tracker migration from SourceForge to Roundup, the code review
integration between Roundup and Rietveld and the introduction of the
Buildbot continuous integration fleet, have taken an extended period of
time as volunteers worked their way through the many technical and social
challenges involved.
Fortunately, as several aspects of this proposal and PEP 474 align with
various workflow improvements under consideration for Red Hat’s
Beaker open source hardware integration
testing system and other work-related projects, I have arranged to be able
to devote ~1 day a week to working on CPython infrastructure projects.
Together with Rackspace’s existing contributions to maintaining the
pypi.python.org infrastructure, I personally believe this arrangement is
indicative of a more general recognition amongst CPython redistributors and
major users of the merit in helping to sustain upstream infrastructure
through direct contributions of developer time, rather than expecting
volunteer contributors to maintain that infrastructure entirely in their
spare time or funding it indirectly through the PSF (with the additional
management overhead that would entail). I consider this a positive trend, and
one that I will continue to encourage as best I can.
Open Questions
Pretty much everything in the PEP. Do we want to adopt merge gating and
Zuul? How do we want to address the various technical challenges?
Are the Kallithea and Zuul development communities open to the kind
of collaboration that would be needed to make this effort a success?
While I’ve arranged to spend some of my own work time on this, do we want to
approach the OpenStack Foundation for additional assistance, since
we’re a key dependency of OpenStack itself, Zuul is a creation of the
OpenStack infrastructure team, and the available development resources for
OpenStack currently dwarf those for CPython?
Are other interested folks working for Python redistributors and major users
also in a position to make a business case to their superiors for investing
developer time in supporting this effort?
Next Steps
If pursued, this will be a follow-on project to the Kallithea-based
forge.python.org proposal in PEP 474. Refer to that PEP for more details
on the discussion, review and proof-of-concept pilot process currently
under way.
Acknowledgements
Thanks to Jesse Noller, Alex Gaynor and James Blair for providing valuable
feedback on a preliminary draft of this proposal, and to James and Monty
Taylor for additional technical feedback following publication of the
initial draft.
Thanks to Bradley Kuhn, Mads Kiellerich and other Kallithea developers for
the discussions around PEP 474 that led to a significant revision of this
proposal to be based on using Kallithea for the review component rather than
the existing Rietveld installation.
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 462 – Core development workflow automation for CPython | Process | This PEP proposes investing in automation of several of the tedious,
time-consuming activities that are currently required for the core development
team to incorporate changes into CPython. This proposal is intended to
allow core developers to make more effective use of the time they have
available to contribute to CPython, which should also result in an improved
experience for other contributors that are reliant on the core team to get
their changes incorporated. |
PEP 463 – Exception-catching expressions
Author:
Chris Angelico <rosuav at gmail.com>
Status:
Rejected
Type:
Standards Track
Created:
15-Feb-2014
Python-Version:
3.5
Post-History:
20-Feb-2014, 16-Feb-2014
Resolution:
Python-Dev message
Table of Contents
Rejection Notice
Abstract
Motivation
Rationale
Proposal
Alternative Proposals
Example usage
Narrowing of exception-catching scope
Comparisons with other languages
Deferred sub-proposals
Multiple except clauses
Capturing the exception object
Rejected sub-proposals
finally clause
Bare except having different meaning
Bare except clauses
Parentheses around the except clauses
Short-hand for “except: pass”
Common objections
Colons always introduce suites
Copyright
Rejection Notice
From https://mail.python.org/pipermail/python-dev/2014-March/133118.html:
“””
I want to reject this PEP. I think the proposed syntax is acceptable given
the desired semantics, although it’s still a bit jarring. It’s probably no
worse than the colon used with lambda (which echoes the colon used in a def
just like the colon here echoes the one in a try/except) and definitely
better than the alternatives listed.
But the thing I can’t get behind are the motivation and rationale. I don’t
think that e.g. dict.get() would be unnecessary once we have except
expressions, and I disagree with the position that EAFP is better than
LBYL, or “generally recommended” by Python. (Where do you get that? From
the same sources that are so obsessed with DRY they’d rather introduce a
higher-order-function than repeat one line of code? :-)
This is probably the most you can get out of me as far as a pronouncement.
Given that the language summit is coming up I’d be happy to dive deeper in
my reasons for rejecting it there (if there’s demand).
I do think that (apart from never explaining those dreadful acronyms :-)
this was a well-written and well-researched PEP, and I think you’ve done a
great job moderating the discussion, collecting objections, reviewing
alternatives, and everything else that is required to turn a heated debate
into a PEP. Well done Chris (and everyone who helped), and good luck with
your next PEP!
“””
Abstract
Just as PEP 308 introduced a means of value-based conditions in an
expression, this system allows exception-based conditions to be used
as part of an expression.
Motivation
A number of functions and methods have parameters which will cause
them to return a specified value instead of raising an exception. The
current system is ad-hoc and inconsistent, and requires that each
function be individually written to have this functionality; not all
support this (a few of today's spellings are shown in the snippet after this list).
dict.get(key, default) - second positional argument in place of
KeyError
next(iter, default) - second positional argument in place of
StopIteration
list.pop() - no way to return a default
seq[index] - no way to handle a bounds error
min(sequence, default=default) - keyword argument in place of
ValueError
statistics.mean(data) - no way to handle an empty iterator
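For concreteness, here is a small runnable illustration of a few of those
inconsistent spellings (the sample values are made up):
# Today's ad-hoc "return a default instead of raising" spellings, side by side.
d = {"a": 1}
assert d.get("missing", 0) == 0      # second positional argument (KeyError)
assert next(iter([]), 0) == 0        # second positional argument (StopIteration)
assert min([], default=0) == 0       # keyword-only argument (ValueError)
try:                                 # list.pop() offers no default at all,
    value = [].pop()                 # so the full statement form is required
except IndexError:
    value = 0
assert value == 0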
Had this facility existed early in Python’s history, there would have been
no need to create dict.get() and related methods; the one obvious way to
handle an absent key would be to respond to the exception. One method is
written which signals the absence in one way, and one consistent technique
is used to respond to the absence. Instead, we have dict.get(), and as of
Python 3.4, we also have min(…, default=default), and myriad others. We
have a LBYL syntax for testing inside an expression, but there is currently
no EAFP notation; compare the following:
# LBYL:
if key in dic:
process(dic[key])
else:
process(None)
# As an expression:
process(dic[key] if key in dic else None)
# EAFP:
try:
process(dic[key])
except KeyError:
process(None)
# As an expression:
process(dic[key] except KeyError: None)
Python generally recommends the EAFP policy, but must then proliferate
utility functions like dic.get(key, None) to enable this.
Rationale
The current system requires that a function author predict the need
for a default, and implement support for it. If this is not done, a
full try/except block is needed.
Since try/except is a statement, it is impossible to catch exceptions
in the middle of an expression. Just as if/else does for conditionals
and lambda does for function definitions, so does this allow exception
catching in an expression context.
This provides a clean and consistent way for a function to provide a
default: it simply raises an appropriate exception, and the caller
catches it.
With some situations, an LBYL technique can be used (checking if some
sequence has enough length before indexing into it, for instance). This is
not safe in all cases, but as it is often convenient, programmers will be
tempted to sacrifice the safety of EAFP in favour of the notational brevity
of LBYL. Additionally, some LBYL techniques (e.g. involving getattr with
three arguments) warp the code into looking like literal strings rather
than attribute lookup, which can impact readability. A convenient EAFP
notation solves all of this.
There’s no convenient way to write a helper function to do this; the
nearest is something ugly using either lambda:
def except_(expression, exception_list, default):
try:
return expression()
except exception_list:
return default()
value = except_(lambda: 1/x, ZeroDivisionError, lambda: float("nan"))
which is clunky, and unable to handle multiple exception clauses; or
eval:
def except_(expression, exception_list, default):
try:
return eval(expression, globals_of_caller(), locals_of_caller())
except exception_list as exc:
l = locals_of_caller().copy()
l['exc'] = exc
return eval(default, globals_of_caller(), l)
def globals_of_caller():
return sys._getframe(2).f_globals
def locals_of_caller():
return sys._getframe(2).f_locals
value = except_("""1/x""",ZeroDivisionError,""" "Can't divide by zero" """)
which is even clunkier, and relies on implementation-dependent hacks.
(Writing globals_of_caller() and locals_of_caller() for interpreters
other than CPython is left as an exercise for the reader.)
Raymond Hettinger expresses a desire for such a consistent
API. Something similar has been requested multiple times
in the past.
Proposal
Just as the ‘or’ operator and the three part ‘if-else’ expression give
short circuiting methods of catching a falsy value and replacing it,
this syntax gives a short-circuiting method of catching an exception
and replacing it.
This currently works:
lst = [1, 2, None, 3]
value = lst[2] or "No value"
The proposal adds this:
lst = [1, 2]
value = (lst[2] except IndexError: "No value")
Specifically, the syntax proposed is:
(expr except exception_list: default)
where expr, exception_list, and default are all expressions. First,
expr is evaluated. If no exception is raised, its value is the value
of the overall expression. If any exception is raised, exception_list
is evaluated, and should result in either a type or a tuple, just as
with the statement form of try/except. Any matching exception will
result in the corresponding default expression being evaluated and
becoming the value of the expression. As with the statement form of
try/except, non-matching exceptions will propagate upward.
Parentheses are required around the entire expression, unless they
would be completely redundant, according to the same rules as generator
expressions follow. This guarantees correct interpretation of nested
except-expressions, and allows for future expansion of the syntax -
see below on multiple except clauses.
Note that the current proposal does not allow the exception object to
be captured. Where this is needed, the statement form must be used.
(See below for discussion and elaboration on this.)
This ternary operator would be between lambda and if/else in
precedence.
Consider this example of a two-level cache:
for key in sequence:
x = (lvl1[key] except KeyError: (lvl2[key] except KeyError: f(key)))
# do something with x
This cannot be rewritten as:
x = lvl1.get(key, lvl2.get(key, f(key)))
which, despite being shorter, defeats the purpose of the cache, as it must
calculate a default value to pass to get(). The .get() version calculates
backwards; the exception-testing version calculates forwards, as would be
expected. The nearest useful equivalent would be:
x = lvl1.get(key) or lvl2.get(key) or f(key)
which depends on the values being nonzero, as well as depending on the cache
object supporting this functionality.
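The eager-evaluation problem can be demonstrated with current Python; the
following sketch uses made-up lvl1/lvl2 dictionaries and a counting stand-in
for the expensive f():
# Hypothetical two-level cache; f() stands in for the expensive fallback.
lvl1, lvl2 = {}, {"k": "cached"}
calls = 0

def f(key):
    global calls
    calls += 1                       # count how often the expensive path runs
    return "computed"

# The .get() spelling evaluates f(key) eagerly, even though lvl2 has the key:
x = lvl1.get("k", lvl2.get("k", f("k")))
assert x == "cached" and calls == 1  # f() ran anyway

# The statement form (what the proposed expression would expand to) stays lazy:
calls = 0
try:
    x = lvl1["k"]
except KeyError:
    try:
        x = lvl2["k"]
    except KeyError:
        x = f("k")
assert x == "cached" and calls == 0  # f() never ran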
Alternative Proposals
Discussion on python-ideas brought up the following syntax suggestions:
value = expr except default if Exception [as e]
value = expr except default for Exception [as e]
value = expr except default from Exception [as e]
value = expr except Exception [as e] return default
value = expr except (Exception [as e]: default)
value = expr except Exception [as e] try default
value = expr except Exception [as e] continue with default
value = default except Exception [as e] else expr
value = try expr except Exception [as e]: default
value = expr except default # Catches anything
value = expr except(Exception) default # Catches only the named type(s)
value = default if expr raise Exception
value = expr or else default if Exception
value = expr except Exception [as e] -> default
value = expr except Exception [as e] pass default
It has also been suggested that a new keyword be created, rather than
reusing an existing one. Such proposals fall into the same structure
as the last form, but with a different keyword in place of ‘pass’.
Suggestions include ‘then’, ‘when’, and ‘use’. Also, in the context of
the “default if expr raise Exception” proposal, it was suggested that a
new keyword “raises” be used.
All forms involving the ‘as’ capturing clause have been deferred from
this proposal in the interests of simplicity, but are preserved in the
table above as an accurate record of suggestions.
The four forms most supported by this proposal are, in order:
value = (expr except Exception: default)
value = (expr except Exception -> default)
value = (expr except Exception pass default)
value = (expr except Exception then default)
All four maintain left-to-right evaluation order: first the base expression,
then the exception list, and lastly the default. This is important, as the
expressions are evaluated lazily. By comparison, several of the ad-hoc
alternatives listed above must (by the nature of functions) evaluate their
default values eagerly. The preferred form, using the colon, parallels
try/except by using “except exception_list:”, and parallels lambda by having
“keyword name_list: subexpression”; it also can be read as mapping Exception
to the default value, dict-style. Using the arrow introduces a token many
programmers will not be familiar with, and which currently has no similar
meaning, but is otherwise quite readable. The English word “pass” has a
vaguely similar meaning (consider the common usage “pass by value/reference”
for function arguments), and “pass” is already a keyword, but as its meaning
is distinctly unrelated, this may cause confusion. Using “then” makes sense
in English, but this introduces a new keyword to the language - albeit one
not in common use, but a new keyword all the same.
Left to right evaluation order is extremely important to readability, as it
parallels the order most expressions are evaluated. Alternatives such as:
value = (expr except default if Exception)
break this, by first evaluating the two ends, and then coming to the middle;
while this may not seem terrible (as the exception list will usually be a
constant), it does add to the confusion when multiple clauses meet, either
with multiple except/if or with the existing if/else, or a combination.
Using the preferred order, subexpressions will always be evaluated from
left to right, no matter how the syntax is nested.
Keeping the existing notation, but shifting the mandatory parentheses, we
have the following suggestion:
value = expr except (Exception: default)
value = expr except(Exception: default)
This is reminiscent of a function call, or a dict initializer. The colon
cannot be confused with introducing a suite, but on the other hand, the new
syntax guarantees lazy evaluation, which a dict does not. The potential
to reduce confusion is considered unjustified by the corresponding potential
to increase it.
Example usage
For each example, an approximately-equivalent statement form is given,
to show how the expression will be parsed. These are not always
strictly equivalent, but will accomplish the same purpose. It is NOT
safe for the interpreter to translate one into the other.
A number of these examples are taken directly from the Python standard
library, with file names and line numbers correct as of early Feb 2014.
Many of these patterns are extremely common.
Retrieve an argument, defaulting to None:
cond = (args[1] except IndexError: None)
# Lib/pdb.py:803:
try:
cond = args[1]
except IndexError:
cond = None
Fetch information from the system if available:
pwd = (os.getcwd() except OSError: None)
# Lib/tkinter/filedialog.py:210:
try:
pwd = os.getcwd()
except OSError:
pwd = None
Attempt a translation, falling back on the original:
e.widget = (self._nametowidget(W) except KeyError: W)
# Lib/tkinter/__init__.py:1222:
try:
e.widget = self._nametowidget(W)
except KeyError:
e.widget = W
Read from an iterator, continuing with blank lines once it’s
exhausted:
line = (readline() except StopIteration: '')
# Lib/lib2to3/pgen2/tokenize.py:370:
try:
line = readline()
except StopIteration:
line = ''
Retrieve platform-specific information (note the DRY improvement);
this particular example could be taken further, turning a series of
separate assignments into a single large dict initialization:
# sys.abiflags may not be defined on all platforms.
_CONFIG_VARS['abiflags'] = (sys.abiflags except AttributeError: '')
# Lib/sysconfig.py:529:
try:
_CONFIG_VARS['abiflags'] = sys.abiflags
except AttributeError:
# sys.abiflags may not be defined on all platforms.
_CONFIG_VARS['abiflags'] = ''
Retrieve an indexed item, defaulting to None (similar to dict.get):
def getNamedItem(self, name):
return (self._attrs[name] except KeyError: None)
# Lib/xml/dom/minidom.py:573:
def getNamedItem(self, name):
try:
return self._attrs[name]
except KeyError:
return None
Translate numbers to names, falling back on the numbers:
g = (grp.getgrnam(tarinfo.gname)[2] except KeyError: tarinfo.gid)
u = (pwd.getpwnam(tarinfo.uname)[2] except KeyError: tarinfo.uid)
# Lib/tarfile.py:2198:
try:
g = grp.getgrnam(tarinfo.gname)[2]
except KeyError:
g = tarinfo.gid
try:
u = pwd.getpwnam(tarinfo.uname)[2]
except KeyError:
u = tarinfo.uid
Look up an attribute, falling back on a default:
mode = (f.mode except AttributeError: 'rb')
# Lib/aifc.py:882:
if hasattr(f, 'mode'):
mode = f.mode
else:
mode = 'rb'
return (sys._getframe(1) except AttributeError: None)
# Lib/inspect.py:1350:
return sys._getframe(1) if hasattr(sys, "_getframe") else None
Perform some lengthy calculations in EAFP mode, handling division by
zero as a sort of sticky NaN:
value = (calculate(x) except ZeroDivisionError: float("nan"))
try:
value = calculate(x)
except ZeroDivisionError:
value = float("nan")
Calculate the mean of a series of numbers, falling back on zero:
value = (statistics.mean(lst) except statistics.StatisticsError: 0)
try:
value = statistics.mean(lst)
except statistics.StatisticsError:
value = 0
Looking up objects in a sparse list of overrides:
(overrides[x] or default except IndexError: default).ping()
try:
(overrides[x] or default).ping()
except IndexError:
default.ping()
Narrowing of exception-catching scope
The following examples, taken directly from Python’s standard library,
demonstrate how the scope of the try/except can be conveniently narrowed.
To do this with the statement form of try/except would require a temporary
variable, but it’s far cleaner as an expression.
Lib/ipaddress.py:343:
try:
ips.append(ip.ip)
except AttributeError:
ips.append(ip.network_address)
Becomes:
ips.append(ip.ip except AttributeError: ip.network_address)
The expression form is nearly equivalent to this:
try:
_ = ip.ip
except AttributeError:
_ = ip.network_address
ips.append(_)
Lib/tempfile.py:130:
try:
dirlist.append(_os.getcwd())
except (AttributeError, OSError):
dirlist.append(_os.curdir)
Becomes:
dirlist.append(_os.getcwd() except (AttributeError, OSError): _os.curdir)
Lib/asyncore.py:264:
try:
status.append('%s:%d' % self.addr)
except TypeError:
status.append(repr(self.addr))
Becomes:
status.append('%s:%d' % self.addr except TypeError: repr(self.addr))
In each case, the narrowed scope of the try/except ensures that an unexpected
exception (for instance, AttributeError if “append” were misspelled) does not
get caught by the same handler. That risk is sufficiently unlikely that it
would not, on its own, justify breaking the call out into a separate statement
(as per the five-line example above), but the narrower scope is a small benefit
gained as a side-effect of the conversion.
Comparisons with other languages
(With thanks to Andrew Barnert for compiling this section. Note that the
examples given here do not reflect the current version of the proposal,
and need to be edited.)
Ruby’s “begin…rescue…rescue…else…ensure…end” is an expression
(potentially with statements inside it). It has the equivalent of an “as”
clause, and the equivalent of bare except. And it uses no punctuation or
keyword between the bare except/exception class/exception class with as
clause and the value. (And yes, it’s ambiguous unless you understand
Ruby’s statement/expression rules.)
x = begin computation() rescue MyException => e default(e) end;
x = begin computation() rescue MyException default() end;
x = begin computation() rescue default() end;
x = begin computation() rescue MyException default() rescue OtherException other() end;
In terms of this PEP:
x = computation() except MyException as e default(e)
x = computation() except MyException default(e)
x = computation() except default(e)
x = computation() except MyException default() except OtherException other()
Erlang has a try expression that looks like this
x = try computation() catch MyException:e -> default(e) end;
x = try computation() catch MyException:e -> default(e); OtherException:e -> other(e) end;
The class and “as” name are mandatory, but you can use “_” for either.
There’s also an optional “when” guard on each, and a “throw” clause that
you can catch, which I won’t get into. To handle multiple exceptions,
you just separate the clauses with semicolons, which I guess would map
to commas in Python. So:
x = try computation() except MyException as e -> default(e)
x = try computation() except MyException as e -> default(e), OtherException as e->other_default(e)
Erlang also has a “catch” expression, which, despite using the same keyword,
is completely different, and you don’t want to know about it.
The ML family has two different ways of dealing with this, “handle” and
“try”; the difference between the two is that “try” pattern-matches the
exception, which gives you the effect of multiple except clauses and as
clauses. In either form, the handler clause is punctuated by “=>” in
some dialects, “->” in others.
To avoid confusion, I’ll write the function calls in Python style.
Here’s SML’s “handle”
let x = computation() handle MyException => default();;
Here’s OCaml’s “try”
let x = try computation() with MyException explanation -> default(explanation);;
let x = try computation() with
MyException(e) -> default(e)
| MyOtherException() -> other_default()
| (e) -> fallback(e);;
In terms of this PEP, these would be something like:
x = computation() except MyException => default()
x = try computation() except MyException e -> default()
x = (try computation()
except MyException as e -> default(e)
except MyOtherException -> other_default()
except BaseException as e -> fallback(e))
Many ML-inspired but not-directly-related languages from academia mix things
up, usually using more keywords and fewer symbols. So, the Oz would map
to Python as
x = try computation() catch MyException as e then default(e)
Many Lisp-derived languages, like Clojure, implement try/catch as special
forms (if you don’t know what that means, think function-like macros), so you
write, effectively
try(computation(), catch(MyException, explanation, default(explanation)))
try(computation(),
catch(MyException, explanation, default(explanation)),
catch(MyOtherException, explanation, other_default(explanation)))
In Common Lisp, this is done with a slightly clunkier “handler-case” macro,
but the basic idea is the same.
The Lisp style is, surprisingly, used by some languages that don’t have
macros, like Lua, where xpcall takes functions. Writing lambdas
Python-style instead of Lua-style
x = xpcall(lambda: expression(), lambda e: default(e))
This actually returns (true, expression()) or (false, default(e)), but I think we can ignore that part.
Haskell is actually similar to Lua here (except that it’s all done
with monads, of course):
x = do catch(lambda: expression(), lambda e: default(e))
You can write a pattern matching expression within the function to decide
what to do with it; catching and re-raising exceptions you don’t want is
cheap enough to be idiomatic.
But Haskell infixing makes this nicer:
x = do expression() `catch` lambda: default()
x = do expression() `catch` lambda e: default(e)
And that makes the parallel between the lambda colon and the except
colon in the proposal much more obvious:
x = expression() except Exception: default()
x = expression() except Exception as e: default(e)
Tcl has the other half of Lua’s xpcall; catch is a function which returns
true if an exception was caught, false otherwise, and you get the value out
in other ways. And it’s all built around the implicit quote-and-exec
that everything in Tcl is based on, making it even harder to describe in
Python terms than Lisp macros, but something like
if {[ catch("computation()") "explanation"]} { default(explanation) }
Smalltalk is also somewhat hard to map to Python. The basic version
would be
x := computation() on:MyException do:default()
… but that’s basically Smalltalk’s passing-arguments-with-colons
syntax, not its exception-handling syntax.
Deferred sub-proposals
Multiple except clauses
An examination of use-cases shows that this is not needed as often as
it would be with the statement form, and as its syntax is a point on
which consensus has not been reached, the entire feature is deferred.
Multiple ‘except’ keywords could be used, and they will all catch
exceptions raised in the original expression (only):
# Will catch any of the listed exceptions thrown by expr;
# any exception thrown by a default expression will propagate.
value = (expr
except Exception1: default1
except Exception2: default2
# ... except ExceptionN: defaultN
)
Currently, one of the following forms must be used:
# Will catch an Exception2 thrown by either expr or default1
value = (
(expr except Exception1: default1)
except Exception2: default2
)
# Will catch an Exception2 thrown by default1 only
value = (expr except Exception1:
(default1 except Exception2: default2)
)
Listing multiple exception clauses without parentheses is a syntax error
(see above), and so a future version of Python is free to add this feature
without breaking any existing code.
Capturing the exception object
In a try/except block, the use of ‘as’ to capture the exception object
creates a local name binding, and implicitly deletes that binding (to
avoid creating a reference loop) in a finally clause. In an expression
context, this makes little sense, and a proper sub-scope would be
required to safely capture the exception object - something akin to the
way a list comprehension is handled. However, CPython currently
implements a comprehension’s subscope with a nested function call, which
has consequences in some contexts such as class definitions, and is
therefore unsuitable for this proposal. Should there be, in future, a
way to create a true subscope (which could simplify comprehensions,
except expressions, with blocks, and possibly more), then this proposal
could be revived; until then, its loss is not a great one, as the simple
exception handling that is well suited to the expression notation used
here is generally concerned only with the type of the exception, and not
its value - further analysis below.
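The class-definition consequence mentioned above can be seen today with an
ordinary comprehension (the class and attribute names below are made up):
# Because a comprehension's body runs in its own hidden function scope,
# names bound at class level are not visible inside it.
try:
    class Config:
        default = 0
        # range(3) is evaluated in the class scope, but the lookup of
        # 'default' happens inside the comprehension's nested function,
        # which skips the class namespace entirely.
        values = [default for _ in range(3)]
except NameError as exc:
    print("comprehension subscope quirk:", exc)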
This syntax would, admittedly, allow a convenient way to capture
exceptions in interactive Python; returned values are captured by “_”,
but exceptions currently are not. This could be spelled:
>>> (expr except Exception as e: e)
An examination of the Python standard library shows that, while the use
of ‘as’ is fairly common (occurring in roughly one except clause in five),
it is extremely uncommon in the cases which could logically be converted
into the expression form. Its few uses can simply be left unchanged.
Consequently, in the interests of simplicity, the ‘as’ clause is not
included in this proposal. A subsequent Python version can add this without
breaking any existing code, as ‘as’ is already a keyword.
One example where this could possibly be useful is Lib/imaplib.py:568:
try: typ, dat = self._simple_command('LOGOUT')
except: typ, dat = 'NO', ['%s: %s' % sys.exc_info()[:2]]
This could become:
typ, dat = (self._simple_command('LOGOUT')
except BaseException as e: ('NO', '%s: %s' % (type(e), e)))
Or perhaps some other variation. This is hardly the most compelling use-case,
but an intelligent look at this code could tidy it up significantly. In the
absence of further examples showing any need of the exception object, I have
opted to defer indefinitely the recommendation.
Rejected sub-proposals
finally clause
The statement form try… finally or try… except… finally has no
logical corresponding expression form. Therefore, the finally keyword
is not a part of this proposal, in any way.
Bare except having different meaning
With several of the proposed syntaxes, omitting the exception type name
would be easy and concise, and would be tempting. For convenience’s sake,
it might be advantageous to have a bare ‘except’ clause mean something
more useful than “except BaseException”. Proposals included having it
catch Exception, or some specific set of “common exceptions” (subclasses
of a new type called ExpressionError), or have it look for a tuple named
ExpressionError in the current scope, with a built-in default such as
(ValueError, UnicodeError, AttributeError, EOFError, IOError, OSError,
LookupError, NameError, ZeroDivisionError). All of these were rejected,
for several reasons.
First and foremost, consistency with the statement form of try/except
would be broken. Just as a list comprehension or ternary if expression
can be explained by “breaking it out” into its vertical statement form,
an expression-except should be able to be explained by a relatively
mechanical translation into a near-equivalent statement. Any form of
syntax common to both should therefore have the same semantics in each,
and above all should not have the subtle difference of catching more in
one than the other, as it will tend to attract unnoticed bugs.
Secondly, the set of appropriate exceptions to catch would itself be
a huge point of contention. It would be impossible to predict exactly
which exceptions would “make sense” to be caught; why bless some of them
with convenient syntax and not others?
And finally (this is partly because the recommendation was that a bare
except should be actively encouraged, once it was reduced to a “reasonable”
set of exceptions), any situation where you catch an exception you don’t
expect to catch is an unnecessary bug magnet.
Consequently, the use of a bare ‘except’ is down to two possibilities:
either it is syntactically forbidden in the expression form, or it is
permitted with the exact same semantics as in the statement form (namely,
that it catch BaseException and be unable to capture it with ‘as’).
Bare except clauses
PEP 8 rightly advises against the use of a bare ‘except’. While it is
syntactically legal in a statement, and for backward compatibility must
remain so, there is little value in encouraging its use. In an expression
except clause, “except:” is a SyntaxError; use the equivalent long-hand
form “except BaseException:” instead. A future version of Python MAY choose
to reinstate this, which can be done without breaking compatibility.
Parentheses around the except clauses
Should it be legal to parenthesize the except clauses, separately from
the expression that could raise? Example:
value = expr (
except Exception1 [as e]: default1
except Exception2 [as e]: default2
# ... except ExceptionN [as e]: defaultN
)
This is more compelling when one or both of the deferred sub-proposals
of multiple except clauses and/or exception capturing is included. In
their absence, the parentheses would be thus:
value = expr except ExceptionType: default
value = expr (except ExceptionType: default)
The advantage is minimal, and the potential to confuse a reader into
thinking the except clause is separate from the expression, or into thinking
this is a function call, makes this non-compelling. The expression can, of
course, be parenthesized if desired, as can the default:
value = (expr) except ExceptionType: (default)
As the entire expression is now required to be in parentheses (which had not
been decided at the time when this was debated), there is less need to
delineate this section, and in many cases it would be redundant.
Short-hand for “except: pass”
The following has been suggested as a similar
short-hand, though not technically an expression:
statement except Exception: pass
try:
statement
except Exception:
pass
For instance, a common use-case is attempting the removal of a file:
os.unlink(some_file) except OSError: pass
There is an equivalent already in Python 3.4, however, in contextlib:
from contextlib import suppress
with suppress(OSError): os.unlink(some_file)
As this is already a single line (or two with a break after the colon),
there is little need of new syntax and a confusion of statement vs
expression to achieve this.
Common objections
Colons always introduce suites
While it is true that many of Python’s syntactic elements use the colon to
introduce a statement suite (if, while, with, for, etcetera), this is not
by any means the sole use of the colon. Currently, Python syntax includes
four cases where a colon introduces a subexpression (all four are exercised in the short snippet after this list):
dict display - { … key:value … }
slice notation - [start:stop:step]
function definition - parameter : annotation
lambda - arg list: return value
This proposal simply adds a fifth:
except-expression - exception list: result
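For reference, the four existing uses in a small runnable snippet (sample
values are made up):
prices = {"spam": 1, "eggs": 2}      # dict display: key: value
middle = [1, 2, 3, 4, 5][1:4:2]      # slice notation: start:stop:step

def tag(item: object) -> str:        # function definition: parameter: annotation
    return str(item)

double = lambda n: n * 2             # lambda: arg list: return value

assert middle == [2, 4] and tag(3) == "3" and double(prices["eggs"]) == 4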
Style guides and PEP 8 should recommend not having the colon at the end of
a wrapped line, which could potentially look like the introduction of a
suite, but instead advocate wrapping before the exception list, keeping the
colon clearly between two expressions.
Copyright
This document has been placed in the public domain.
| Rejected | PEP 463 – Exception-catching expressions | Standards Track | Just as PEP 308 introduced a means of value-based conditions in an
expression, this system allows exception-based conditions to be used
as part of an expression. |
PEP 465 – A dedicated infix operator for matrix multiplication
Author:
Nathaniel J. Smith <njs at pobox.com>
Status:
Final
Type:
Standards Track
Created:
20-Feb-2014
Python-Version:
3.5
Post-History:
13-Mar-2014
Resolution:
Python-Dev message
Table of Contents
Abstract
Specification
Motivation
Executive summary
Background: What’s wrong with the status quo?
Why should matrix multiplication be infix?
Transparent syntax is especially crucial for non-expert programmers
But isn’t matrix multiplication a pretty niche requirement?
So @ is good for matrix formulas, but how common are those really?
But isn’t it weird to add an operator with no stdlib uses?
Compatibility considerations
Intended usage details
Semantics
Adoption
Implementation details
Rationale for specification details
Choice of operator
Precedence and associativity
(Non)-Definitions for built-in types
Non-definition of matrix power
Rejected alternatives to adding a new operator
Discussions of this PEP
References
Copyright
Abstract
This PEP proposes a new binary operator to be used for matrix
multiplication, called @. (Mnemonic: @ is * for
mATrices.)
Specification
A new binary operator is added to the Python language, together
with the corresponding in-place version:
====== ========================== ========================
Op     Precedence/associativity   Methods
====== ========================== ========================
@      Same as *                  __matmul__, __rmatmul__
@=     n/a                        __imatmul__
====== ========================== ========================
No implementations of these methods are added to the builtin or
standard library types. However, a number of projects have reached
consensus on the recommended semantics for these operations; see
Intended usage details below for details.
For details on how this operator will be implemented in CPython, see
Implementation details.
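As a rough sketch of what opting in to the new protocol looks like in user
code (the class name and fixed 2x2 layout are made up; requires a Python
version where @ exists):
class Mat2:
    """Toy 2x2 matrix implementing the proposed special methods."""
    def __init__(self, a, b, c, d):
        self.a, self.b, self.c, self.d = a, b, c, d

    def __matmul__(self, other):      # self @ other
        return Mat2(self.a * other.a + self.b * other.c,
                    self.a * other.b + self.b * other.d,
                    self.c * other.a + self.d * other.c,
                    self.c * other.b + self.d * other.d)

    def __imatmul__(self, other):     # self @= other (in-place variant)
        result = self @ other
        self.a, self.b, self.c, self.d = result.a, result.b, result.c, result.d
        return self

x = Mat2(1, 2, 3, 4) @ Mat2(11, 12, 13, 14)
assert (x.a, x.b, x.c, x.d) == (37, 40, 85, 92)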
Motivation
Executive summary
In numerical code, there are two important operations which compete
for use of Python’s * operator: elementwise multiplication, and
matrix multiplication. In the nearly twenty years since the Numeric
library was first proposed, there have been many attempts to resolve
this tension [13]; none have been really satisfactory.
Currently, most numerical Python code uses * for elementwise
multiplication, and function/method syntax for matrix multiplication;
however, this leads to ugly and unreadable code in common
circumstances. The problem is bad enough that significant amounts of
code continue to use the opposite convention (which has the virtue of
producing ugly and unreadable code in different circumstances), and
this API fragmentation across codebases then creates yet more
problems. There does not seem to be any good solution to the
problem of designing a numerical API within current Python syntax –
only a landscape of options that are bad in different ways. The
minimal change to Python syntax which is sufficient to resolve these
problems is the addition of a single new infix operator for matrix
multiplication.
Matrix multiplication has a singular combination of features which
distinguish it from other binary operations, which together provide a
uniquely compelling case for the addition of a dedicated infix
operator:
Just as for the existing numerical operators, there exists a vast
body of prior art supporting the use of infix notation for matrix
multiplication across all fields of mathematics, science, and
engineering; @ harmoniously fills a hole in Python’s existing
operator system.
@ greatly clarifies real-world code.
@ provides a smoother onramp for less experienced users, who are
particularly harmed by hard-to-read code and API fragmentation.
@ benefits a substantial and growing portion of the Python user
community.
@ will be used frequently – in fact, evidence suggests it may
be used more frequently than // or the bitwise operators.
@ allows the Python numerical community to reduce fragmentation,
and finally standardize on a single consensus duck type for all
numerical array objects.
Background: What’s wrong with the status quo?
When we crunch numbers on a computer, we usually have lots and lots of
numbers to deal with. Trying to deal with them one at a time is
cumbersome and slow – especially when using an interpreted language.
Instead, we want the ability to write down simple operations that
apply to large collections of numbers all at once. The n-dimensional
array is the basic object that all popular numeric computing
environments use to make this possible. Python has several libraries
that provide such arrays, with numpy being at present the most
prominent.
When working with n-dimensional arrays, there are two different ways
we might want to define multiplication. One is elementwise
multiplication:
[[1, 2],     [[11, 12],     [[1 * 11, 2 * 12],
 [3, 4]]  x   [13, 14]]  =   [3 * 13, 4 * 14]]
and the other is matrix multiplication:
[[1, 2],     [[11, 12],     [[1 * 11 + 2 * 13, 1 * 12 + 2 * 14],
 [3, 4]]  x   [13, 14]]  =   [3 * 11 + 4 * 13, 3 * 12 + 4 * 14]]
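As a quick, optional sanity check of the two definitions above (not part of
the PEP itself; assumes numpy is installed):
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[11, 12], [13, 14]])

# Elementwise multiplication, as in the first display above.
assert (a * b == np.array([[11, 24], [39, 56]])).all()

# Matrix multiplication, as in the second display above.
assert (np.dot(a, b) == np.array([[37, 40], [85, 92]])).all()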
Elementwise multiplication is useful because it lets us easily and
quickly perform many multiplications on a large collection of values,
without writing a slow and cumbersome for loop. And this works as
part of a very general schema: when using the array objects provided
by numpy or other numerical libraries, all Python operators work
elementwise on arrays of all dimensionalities. The result is that one
can write functions using straightforward code like a * b + c / d,
treating the variables as if they were simple values, but then
immediately use this function to efficiently perform this calculation
on large collections of values, while keeping them organized using
whatever arbitrarily complex array layout works best for the problem
at hand.
Matrix multiplication is more of a special case. It’s only defined on
2d arrays (also known as “matrices”), and multiplication is the only
operation that has an important “matrix” version – “matrix addition”
is the same as elementwise addition; there is no such thing as “matrix
bitwise-or” or “matrix floordiv”; “matrix division” and “matrix
to-the-power-of” can be defined but are not very useful, etc.
However, matrix multiplication is still used very heavily across all
numerical application areas; mathematically, it’s one of the most
fundamental operations there is.
Because Python syntax currently allows for only a single
multiplication operator *, libraries providing array-like objects
must decide: either use * for elementwise multiplication, or use
* for matrix multiplication. And, unfortunately, it turns out
that when doing general-purpose number crunching, both operations are
used frequently, and there are major advantages to using infix rather
than function call syntax in both cases. Thus it is not at all clear
which convention is optimal, or even acceptable; often it varies on a
case-by-case basis.
Nonetheless, network effects mean that it is very important that we
pick just one convention. In numpy, for example, it is technically
possible to switch between the conventions, because numpy provides two
different types with different __mul__ methods. For
numpy.ndarray objects, * performs elementwise multiplication,
and matrix multiplication must use a function call (numpy.dot).
For numpy.matrix objects, * performs matrix multiplication,
and elementwise multiplication requires function syntax. Writing code
using numpy.ndarray works fine. Writing code using
numpy.matrix also works fine. But trouble begins as soon as we
try to integrate these two pieces of code together. Code that expects
an ndarray and gets a matrix, or vice-versa, may crash or
return incorrect results. Keeping track of which functions expect
which types as inputs, and return which types as outputs, and then
converting back and forth all the time, is incredibly cumbersome and
impossible to get right at any scale. Functions that defensively try
to handle both types as input and DTRT, find themselves floundering
into a swamp of isinstance and if statements.
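A small sketch of the fragmentation described above (the helper name is made
up; numpy.matrix is shown only because it is the type under discussion, not
as a recommendation):
import numpy as np

A = np.array([[1, 2], [3, 4]])
M = np.matrix([[1, 2], [3, 4]])

assert (A * A)[0, 1] == 4            # ndarray: * is elementwise (2 * 2)
assert (M * M)[0, 1] == 10           # np.matrix: * is the matrix product (1*2 + 2*4)

def elementwise_square(x):
    """Hypothetical defensive helper that must care which convention applies."""
    if isinstance(x, np.matrix):
        x = np.asarray(x)            # force the elementwise convention
    return x * x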
PEP 238 split / into two operators: / and //. Imagine the
chaos that would have resulted if it had instead split int into
two types: classic_int, whose __div__ implemented floor
division, and new_int, whose __div__ implemented true
division. This, in a more limited way, is the situation that Python
number-crunchers currently find themselves in.
In practice, the vast majority of projects have settled on the
convention of using * for elementwise multiplication, and function
call syntax for matrix multiplication (e.g., using numpy.ndarray
instead of numpy.matrix). This reduces the problems caused by API
fragmentation, but it doesn’t eliminate them. The strong desire to
use infix notation for matrix multiplication has caused a number of
specialized array libraries to continue to use the opposing convention
(e.g., scipy.sparse, pyoperators, pyviennacl) despite the problems
this causes, and numpy.matrix itself still gets used in
introductory programming courses, often appears in StackOverflow
answers, and so forth. Well-written libraries thus must continue to
be prepared to deal with both types of objects, and, of course, are
also stuck using unpleasant funcall syntax for matrix multiplication.
After nearly two decades of trying, the numerical community has still
not found any way to resolve these problems within the constraints of
current Python syntax (see Rejected alternatives to adding a new
operator below).
This PEP proposes the minimum effective change to Python syntax that
will allow us to drain this swamp. It splits * into two
operators, just as was done for /: * for elementwise
multiplication, and @ for matrix multiplication. (Why not the
reverse? Because this way is compatible with the existing consensus,
and because it gives us a consistent rule that all the built-in
numeric operators also apply in an elementwise manner to arrays; the
reverse convention would lead to more special cases.)
So that’s why matrix multiplication doesn’t and can’t just use *.
Now, in the rest of this section, we’ll explain why it nonetheless
meets the high bar for adding a new operator.
Why should matrix multiplication be infix?
Right now, most numerical code in Python uses syntax like
numpy.dot(a, b) or a.dot(b) to perform matrix multiplication.
This obviously works, so why do people make such a fuss about it, even
to the point of creating API fragmentation and compatibility swamps?
Matrix multiplication shares two features with ordinary arithmetic
operations like addition and multiplication on numbers: (a) it is used
very heavily in numerical programs – often multiple times per line of
code – and (b) it has an ancient and universally adopted tradition of
being written using infix syntax. This is because, for typical
formulas, this notation is dramatically more readable than any
function call syntax. Here’s an example to demonstrate:
One of the most useful tools for testing a statistical hypothesis is
the linear hypothesis test for OLS regression models. It doesn’t
really matter what all those words I just said mean; if we find
ourselves having to implement this thing, what we’ll do is look up
some textbook or paper on it, and encounter many mathematical formulas
that look like:
S = (Hβ − r)^T (H V H^T)^(−1) (Hβ − r)
Here the various variables are all vectors or matrices (details for
the curious: [5]).
Now we need to write code to perform this calculation. In current
numpy, matrix multiplication can be performed using either the
function or method call syntax. Neither provides a particularly
readable translation of the formula:
import numpy as np
from numpy.linalg import inv, solve
# Using dot function:
S = np.dot((np.dot(H, beta) - r).T,
np.dot(inv(np.dot(np.dot(H, V), H.T)), np.dot(H, beta) - r))
# Using dot method:
S = (H.dot(beta) - r).T.dot(inv(H.dot(V).dot(H.T))).dot(H.dot(beta) - r)
With the @ operator, the direct translation of the above formula
becomes:
S = (H @ beta - r).T @ inv(H @ V @ H.T) @ (H @ beta - r)
Notice that there is now a transparent, 1-to-1 mapping between the
symbols in the original formula and the code that implements it.
Of course, an experienced programmer will probably notice that this is
not the best way to compute this expression. The repeated computation
of Hβ − r should perhaps be factored out; and,
expressions of the form dot(inv(A), B) should almost always be
replaced by the more numerically stable solve(A, B). When using
@, performing these two refactorings gives us:
# Version 1 (as above)
S = (H @ beta - r).T @ inv(H @ V @ H.T) @ (H @ beta - r)
# Version 2
trans_coef = H @ beta - r
S = trans_coef.T @ inv(H @ V @ H.T) @ trans_coef
# Version 3
S = trans_coef.T @ solve(H @ V @ H.T, trans_coef)
Notice that when comparing between each pair of steps, it’s very easy
to see exactly what was changed. If we apply the equivalent
transformations to the code using the .dot method, then the changes
are much harder to read out or verify for correctness:
# Version 1 (as above)
S = (H.dot(beta) - r).T.dot(inv(H.dot(V).dot(H.T))).dot(H.dot(beta) - r)
# Version 2
trans_coef = H.dot(beta) - r
S = trans_coef.T.dot(inv(H.dot(V).dot(H.T))).dot(trans_coef)
# Version 3
S = trans_coef.T.dot(solve(H.dot(V).dot(H.T)), trans_coef)
Readability counts! The statements using @ are shorter, contain
more whitespace, can be directly and easily compared both to each
other and to the textbook formula, and contain only meaningful
parentheses. This last point is particularly important for
readability: when using function-call syntax, the required parentheses
on every operation create visual clutter that makes it very difficult
to parse out the overall structure of the formula by eye, even for a
relatively simple formula like this one. Eyes are terrible at parsing
non-regular languages. I made and caught many errors while trying to
write out the ‘dot’ formulas above. I know they still contain at
least one error, maybe more. (Exercise: find it. Or them.) The
@ examples, by contrast, are not only correct, they’re obviously
correct at a glance.
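For readers who prefer to verify rather than eyeball, here is a self-contained
spot-check (random, made-up shapes; requires numpy and a Python version with
@) that Version 1 and Version 3 above agree numerically:
import numpy as np
from numpy.linalg import inv, solve

np.random.seed(0)
H = np.random.randn(3, 5)            # arbitrary shapes, just for the check
V = np.eye(5)
beta = np.random.randn(5, 1)
r = np.random.randn(3, 1)

S1 = (H @ beta - r).T @ inv(H @ V @ H.T) @ (H @ beta - r)
trans_coef = H @ beta - r
S3 = trans_coef.T @ solve(H @ V @ H.T, trans_coef)
assert np.allclose(S1, S3)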
If we are even more sophisticated programmers, and writing code that
we expect to be reused, then considerations of speed or numerical
accuracy might lead us to prefer some particular order of evaluation.
Because @ makes it possible to omit irrelevant parentheses, we can
be certain that if we do write something like (H @ V) @ H.T,
then our readers will know that the parentheses must have been added
intentionally to accomplish some meaningful purpose. In the dot
examples, it’s impossible to know which nesting decisions are
important, and which are arbitrary.
Infix @ dramatically improves matrix code usability at all stages
of programmer interaction.
Transparent syntax is especially crucial for non-expert programmers
A large proportion of scientific code is written by people who are
experts in their domain, but are not experts in programming. And
there are many university courses run each year with titles like “Data
analysis for social scientists” which assume no programming
background, and teach some combination of mathematical techniques,
introduction to programming, and the use of programming to implement
these mathematical techniques, all within a 10-15 week period. These
courses are more and more often being taught in Python rather than
special-purpose languages like R or Matlab.
For these kinds of users, whose programming knowledge is fragile, the
existence of a transparent mapping between formulas and code often
means the difference between succeeding and failing to write that code
at all. This is so important that such classes often use the
numpy.matrix type which defines * to mean matrix
multiplication, even though this type is buggy and heavily
disrecommended by the rest of the numpy community for the
fragmentation that it causes. This pedagogical use case is, in fact,
the only reason numpy.matrix remains a supported part of numpy.
Adding @ will benefit both beginning and advanced users with
better syntax; and furthermore, it will allow both groups to
standardize on the same notation from the start, providing a smoother
on-ramp to expertise.
But isn’t matrix multiplication a pretty niche requirement?
The world is full of continuous data, and computers are increasingly
called upon to work with it in sophisticated ways. Arrays are the
lingua franca of finance, machine learning, 3d graphics, computer
vision, robotics, operations research, econometrics, meteorology,
computational linguistics, recommendation systems, neuroscience,
astronomy, bioinformatics (including genetics, cancer research, drug
discovery, etc.), physics engines, quantum mechanics, geophysics,
network analysis, and many other application areas. In most or all of
these areas, Python is rapidly becoming a dominant player, in large
part because of its ability to elegantly mix traditional discrete data
structures (hash tables, strings, etc.) on an equal footing with
modern numerical data types and algorithms.
We all live in our own little sub-communities, so some Python users
may be surprised to realize the sheer extent to which Python is used
for number crunching – especially since much of this particular
sub-community’s activity occurs outside of traditional Python/FOSS
channels. So, to give some rough idea of just how many numerical
Python programmers are actually out there, here are two numbers: In
2013, there were 7 international conferences organized specifically on
numerical Python [3] [4]. At PyCon 2014, ~20%
of the tutorials appear to involve the use of matrices
[6].
To quantify this further, we used Github’s “search” function to look
at what modules are actually imported across a wide range of
real-world code (i.e., all the code on Github). We checked for
imports of several popular stdlib modules, a variety of numerically
oriented modules, and various other extremely high-profile modules
like django and lxml (the latter of which is the #1 most downloaded
package on PyPI). Starred lines indicate packages which export
array- or matrix-like objects which will adopt @ if this PEP is
approved:
Count of Python source files on Github matching given search terms
(as of 2014-04-10, ~21:00 UTC)
================ ========== =============== ======= ===========
module "import X" "from X import" total total/numpy
================ ========== =============== ======= ===========
sys 2374638 63301 2437939 5.85
os 1971515 37571 2009086 4.82
re 1294651 8358 1303009 3.12
numpy ************** 337916 ********** 79065 * 416981 ******* 1.00
warnings 298195 73150 371345 0.89
subprocess 281290 63644 344934 0.83
django 62795 219302 282097 0.68
math 200084 81903 281987 0.68
threading 212302 45423 257725 0.62
pickle+cPickle 215349 22672 238021 0.57
matplotlib 119054 27859 146913 0.35
sqlalchemy 29842 82850 112692 0.27
pylab *************** 36754 ********** 41063 ** 77817 ******* 0.19
scipy *************** 40829 ********** 28263 ** 69092 ******* 0.17
lxml 19026 38061 57087 0.14
zlib 40486 6623 47109 0.11
multiprocessing 25247 19850 45097 0.11
requests 30896 560 31456 0.08
jinja2 8057 24047 32104 0.08
twisted 13858 6404 20262 0.05
gevent 11309 8529 19838 0.05
pandas ************** 14923 *********** 4005 ** 18928 ******* 0.05
sympy 2779 9537 12316 0.03
theano *************** 3654 *********** 1828 *** 5482 ******* 0.01
================ ========== =============== ======= ===========
These numbers should be taken with several grains of salt (see
footnote for discussion: [12]), but, to the extent they
can be trusted, they suggest that numpy might be the single
most-imported non-stdlib module in the entire Pythonverse; it’s even
more-imported than such stdlib stalwarts as subprocess, math,
pickle, and threading. And numpy users represent only a
subset of the broader numerical community that will benefit from the
@ operator. Matrices may once have been a niche data type
restricted to Fortran programs running in university labs and military
clusters, but those days are long gone. Number crunching is a
mainstream part of modern Python usage.
In addition, there is some precedent for adding an infix operator to
handle a more-specialized arithmetic operation: the floor division
operator //, like the bitwise operators, is very useful under
certain circumstances when performing exact calculations on discrete
values. But it seems likely that there are many Python programmers
who have never had reason to use // (or, for that matter, the
bitwise operators). @ is no more niche than //.
So @ is good for matrix formulas, but how common are those really?
We’ve seen that @ makes matrix formulas dramatically easier to
work with for both experts and non-experts, that matrix formulas
appear in many important applications, and that numerical libraries
like numpy are used by a substantial proportion of Python’s user base.
But numerical libraries aren’t just about matrix formulas, and being
important doesn’t necessarily mean taking up a lot of code: if matrix
formulas only occurred in one or two places in the average
numerically-oriented project, then it still wouldn’t be worth adding a
new operator. So how common is matrix multiplication, really?
When the going gets tough, the tough get empirical. To get a rough
estimate of how useful the @ operator will be, the table below
shows the rate at which different Python operators are actually used
in the stdlib, and also in two high-profile numerical packages – the
scikit-learn machine learning library, and the nipy neuroimaging
library – normalized by source lines of code (SLOC). Rows are sorted
by the ‘combined’ column, which pools all three code bases together.
The combined column is thus strongly weighted towards the stdlib,
which is much larger than both projects put together (stdlib: 411575
SLOC, scikit-learn: 50924 SLOC, nipy: 37078 SLOC). [7]
The dot row (marked ******) counts how common matrix multiply
operations are in each codebase.
==== ====== ============ ==== ========
op stdlib scikit-learn nipy combined
==== ====== ============ ==== ========
= 2969 5536 4932 3376 / 10,000 SLOC
- 218 444 496 261
+ 224 201 348 231
== 177 248 334 196
* 156 284 465 192
% 121 114 107 119
** 59 111 118 68
!= 40 56 74 44
/ 18 121 183 41
> 29 70 110 39
+= 34 61 67 39
< 32 62 76 38
>= 19 17 17 18
<= 18 27 12 18
dot ***** 0 ********** 99 ** 74 ****** 16
| 18 1 2 15
& 14 0 6 12
<< 10 1 1 8
// 9 9 1 8
-= 5 21 14 8
*= 2 19 22 5
/= 0 23 16 4
>> 4 0 0 3
^ 3 0 0 3
~ 2 4 5 2
|= 3 0 0 2
&= 1 0 0 1
//= 1 0 0 1
^= 1 0 0 0
**= 0 2 0 0
%= 0 0 0 0
<<= 0 0 0 0
>>= 0 0 0 0
==== ====== ============ ==== ========
These two numerical packages alone contain ~780 uses of matrix
multiplication. Within these packages, matrix multiplication is used
more heavily than most comparison operators (< != <=
>=). Even when we dilute these counts by including the stdlib
into our comparisons, matrix multiplication is still used more often
in total than any of the bitwise operators, and 2x as often as //.
This is true even though the stdlib, which contains a fair amount of
integer arithmetic and no matrix operations, makes up more than 80% of
the combined code base.
By coincidence, the numeric libraries make up approximately the same
proportion of the ‘combined’ codebase as numeric tutorials make up of
PyCon 2014’s tutorial schedule, which suggests that the ‘combined’
column may not be wildly unrepresentative of new Python code in
general. While it’s impossible to know for certain, from this data it
seems entirely possible that across all Python code currently being
written, matrix multiplication is already used more often than //
and the bitwise operations.
But isn’t it weird to add an operator with no stdlib uses?
It’s certainly unusual (though extended slicing existed for some time
before builtin types gained support for it, Ellipsis is still unused
within the stdlib, etc.). But the important thing is whether a change
will benefit users, not where the software is being downloaded from.
It’s clear from the above that @ will be used, and used heavily.
And this PEP provides the critical piece that will allow the Python
numerical community to finally reach consensus on a standard duck type
for all array-like objects, which is a necessary precondition to ever
adding a numerical array type to the stdlib.
Compatibility considerations
Currently, the only legal use of the @ token in Python code is at
statement beginning in decorators. The new operators are both infix;
the one place they can never occur is at statement beginning.
Therefore, no existing code will be broken by the addition of these
operators, and there is no possible parsing ambiguity between
decorator-@ and the new operators.
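As a hedged illustration of this point (Python 3.5+ syntax; the Vec
class and traced decorator below are purely hypothetical), the two uses
of @ occupy disjoint grammatical positions:
class Vec:
    def __init__(self, *xs):
        self.xs = xs
    def __matmul__(self, other):        # infix @: here, a simple dot product
        return sum(a * b for a, b in zip(self.xs, other.xs))

def traced(func):                       # an ordinary decorator
    def wrapper(*args):
        print("calling", func.__name__)
        return func(*args)
    return wrapper

@traced                                 # decorator-@: only at statement start
def inner(u, v):
    return u @ v                        # operator-@: only inside expressions

print(inner(Vec(1, 2, 3), Vec(4, 5, 6)))    # -> 32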
Another important kind of compatibility is the mental cost paid by
users to update their understanding of the Python language after this
change, particularly for users who do not work with matrices and thus
do not benefit. Here again, @ has minimal impact: even
comprehensive tutorials and references will only need to add a
sentence or two to fully document this PEP’s changes for a
non-numerical audience.
Intended usage details
This section is informative, rather than normative – it documents the
consensus of a number of libraries that provide array- or matrix-like
objects on how @ will be implemented.
This section uses the numpy terminology for describing arbitrary
multidimensional arrays of data, because it is a superset of all other
commonly used models. In this model, the shape of any array is
represented by a tuple of integers. Because matrices are
two-dimensional, they have len(shape) == 2, while 1d vectors have
len(shape) == 1, and scalars have shape == (), i.e., they are “0
dimensional”. Any array contains prod(shape) total entries. Notice
that prod(()) == 1 (for the same reason that sum(()) == 0); scalars
are just an ordinary kind of array, not a special case. Notice also
that we distinguish between a single scalar value (shape == (),
analogous to 1), a vector containing only a single entry (shape ==
(1,), analogous to [1]), a matrix containing only a single entry
(shape == (1, 1), analogous to [[1]]), etc., so the dimensionality
of any array is always well-defined. Other libraries with more
restricted representations (e.g., those that support 2d arrays only)
might implement only a subset of the functionality described here.
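For concreteness, here is a short, hedged illustration of this
terminology using numpy (any library following the same model behaves
identically):
import numpy as np

scalar = np.array(1.0)        # shape == (),     "0 dimensional"
vector = np.array([1.0])      # shape == (1,),   1d
matrix = np.array([[1.0]])    # shape == (1, 1), 2d

for arr in (scalar, vector, matrix):
    # arr.size == prod(shape); note that prod(()) == 1 for the scalar
    print(arr.shape, arr.ndim, arr.size)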
Semantics
The recommended semantics for @ for different inputs are:
2d inputs are conventional matrices, and so the semantics are
obvious: we apply conventional matrix multiplication. If we write
arr(2, 3) to represent an arbitrary 2x3 array, then arr(2, 3)
@ arr(3, 4) returns an array with shape (2, 4).
1d vector inputs are promoted to 2d by prepending or appending a ‘1’
to the shape, the operation is performed, and then the added
dimension is removed from the output. The 1 is always added on the
“outside” of the shape: prepended for left arguments, and appended
for right arguments. The result is that matrix @ vector and vector
@ matrix are both legal (assuming compatible shapes), and both
return 1d vectors; vector @ vector returns a scalar. This is
clearer with examples.
arr(2, 3) @ arr(3, 1) is a regular matrix product, and returns
an array with shape (2, 1), i.e., a column vector.
arr(2, 3) @ arr(3) performs the same computation as the
previous (i.e., treats the 1d vector as a matrix containing a
single column, shape = (3, 1)), but returns the result with
shape (2,), i.e., a 1d vector.
arr(1, 3) @ arr(3, 2) is a regular matrix product, and returns
an array with shape (1, 2), i.e., a row vector.
arr(3) @ arr(3, 2) performs the same computation as the
previous (i.e., treats the 1d vector as a matrix containing a
single row, shape = (1, 3)), but returns the result with shape
(2,), i.e., a 1d vector.
arr(1, 3) @ arr(3, 1) is a regular matrix product, and returns
an array with shape (1, 1), i.e., a single value in matrix form.
arr(3) @ arr(3) performs the same computation as the
previous, but returns the result with shape (), i.e., a single
scalar value, not in matrix form. So this is the standard inner
product on vectors.
An infelicity of this definition for 1d vectors is that it makes
@ non-associative in some cases ((Mat1 @ vec) @ Mat2 !=
Mat1 @ (vec @ Mat2)). But this seems to be a case where
practicality beats purity: non-associativity only arises for strange
expressions that would never be written in practice; if they are
written anyway then there is a consistent rule for understanding
what will happen (Mat1 @ vec @ Mat2 is parsed as (Mat1 @ vec)
@ Mat2, just like a - b - c); and, not supporting 1d vectors
would rule out many important use cases that do arise very commonly
in practice. No-one wants to explain to new users why to solve the
simplest linear system in the obvious way, they have to type
(inv(A) @ b[:, np.newaxis]).flatten() instead of inv(A) @ b,
or perform an ordinary least-squares regression by typing
solve(X.T @ X, X.T @ y[:, np.newaxis]).flatten() instead of
solve(X.T @ X, X.T @ y). No-one wants to type (a[np.newaxis, :]
@ b[:, np.newaxis])[0, 0] instead of a @ b every time they
compute an inner product, or (a[np.newaxis, :] @ Mat @ b[:,
np.newaxis])[0, 0] for general quadratic forms instead of a @
Mat @ b. In addition, sage and sympy (see below) use these
non-associative semantics with an infix matrix multiplication
operator (they use *), and they report that they haven’t
experienced any problems caused by it.
For inputs with more than 2 dimensions, we treat the last two
dimensions as being the dimensions of the matrices to multiply, and
‘broadcast’ across the other dimensions. This provides a convenient
way to quickly compute many matrix products in a single operation.
For example, arr(10, 2, 3) @ arr(10, 3, 4) performs 10 separate
matrix multiplies, each of which multiplies a 2x3 and a 3x4 matrix
to produce a 2x4 matrix, and then returns the 10 resulting matrices
together in an array with shape (10, 2, 4). The intuition here is
that we treat these 3d arrays of numbers as if they were 1d arrays
of matrices, and then apply matrix multiplication in an
elementwise manner, where now each ‘element’ is a whole matrix.
Note that broadcasting is not limited to perfectly aligned arrays;
in more complicated cases, it allows several simple but powerful
tricks for controlling how arrays are aligned with each other; see
[10] for details. (In particular, it turns out that
when broadcasting is taken into account, the standard scalar *
matrix product is a special case of the elementwise multiplication
operator *.)
If one operand is >2d, and another operand is 1d, then the above
rules apply unchanged, with 1d->2d promotion performed before
broadcasting. E.g., arr(10, 2, 3) @ arr(3) first promotes to
arr(10, 2, 3) @ arr(3, 1), then broadcasts the right argument to
create the aligned operation arr(10, 2, 3) @ arr(10, 3, 1),
multiplies to get an array with shape (10, 2, 1), and finally
removes the added dimension, returning an array with shape (10, 2).
Similarly, arr(2) @ arr(10, 2, 3) produces an intermediate array
with shape (10, 1, 3), and a final array with shape (10, 3).
0d (scalar) inputs raise an error. Scalar * matrix multiplication
is a mathematically and algorithmically distinct operation from
matrix @ matrix multiplication, and is already covered by the
elementwise * operator. Allowing scalar @ matrix would thus
both require an unnecessary special case, and violate TOOWTDI.
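The following sketch checks these shape rules against numpy's
implementation of @ (numpy 1.10+ on Python 3.5+; the exact exception
type raised for scalar operands may vary between versions):
import numpy as np

A = np.ones((2, 3)); B = np.ones((3, 4)); v = np.ones(3)

print((A @ B).shape)      # (2, 4): conventional matrix product
print((A @ v).shape)      # (2,):   1d vector promoted, then the added 1 removed
print(v @ v)              # 3.0:    vector @ vector is the scalar inner product

S = np.ones((10, 2, 3)); T = np.ones((10, 3, 4))
print((S @ T).shape)      # (10, 2, 4): broadcast over the leading dimension
print((S @ v).shape)      # (10, 2):    >2d combined with 1d

try:
    A @ 3                 # 0d (scalar) operands are an error by design
except (TypeError, ValueError) as exc:
    print(type(exc).__name__)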
Adoption
We group existing Python projects which provide array- or matrix-like
types based on what API they currently use for elementwise and matrix
multiplication.
Projects which currently use * for elementwise multiplication, and
function/method calls for matrix multiplication:
The developers of the following projects have expressed an intention
to implement @ on their array-like types using the above
semantics:
numpy
pandas
blaze
theano
The following projects have been alerted to the existence of the PEP,
but it’s not yet known what they plan to do if it’s accepted. We
don’t anticipate that they’ll have any objections, though, since
everything proposed here is consistent with how they already do
things:
pycuda
panda3d
Projects which currently use * for matrix multiplication, and
function/method calls for elementwise multiplication:
The following projects have expressed an intention, if this PEP is
accepted, to migrate from their current API to the elementwise-*,
matmul-@ convention (i.e., this is a list of projects whose API
fragmentation will probably be eliminated if this PEP is accepted):
numpy (numpy.matrix)
scipy.sparse
pyoperators
pyviennacl
The following projects have been alerted to the existence of the PEP,
but it’s not known what they plan to do if it’s accepted (i.e., this
is a list of projects whose API fragmentation may or may not be
eliminated if this PEP is accepted):
cvxopt
Projects which currently use * for matrix multiplication, and which
don’t really care about elementwise multiplication of matrices:
There are several projects which implement matrix types, but from a
very different perspective than the numerical libraries discussed
above. These projects focus on computational methods for analyzing
matrices in the sense of abstract mathematical objects (i.e., linear
maps over free modules over rings), rather than as big bags full of
numbers that need crunching. And it turns out that from the abstract
math point of view, there isn’t much use for elementwise operations in
the first place; as discussed in the Background section above,
elementwise operations are motivated by the bag-of-numbers approach.
So these projects don’t encounter the basic problem that this PEP
exists to address, making it mostly irrelevant to them; while they
appear superficially similar to projects like numpy, they’re actually
doing something quite different. They use * for matrix
multiplication (and for group actions, and so forth), and if this PEP
is accepted, their expressed intention is to continue doing so, while
perhaps adding @ as an alias. These projects include:
sympy
sage
Implementation details
New functions operator.matmul and operator.__matmul__ are
added to the standard library, with the usual semantics.
A corresponding function PyObject* PyObject_MatrixMultiply(PyObject
*o1, PyObject *o2) is added to the C API.
A new AST node is added named MatMult, along with a new token
ATEQUAL and new bytecode opcodes BINARY_MATRIX_MULTIPLY and
INPLACE_MATRIX_MULTIPLY.
Two new type slots are added; whether this is to PyNumberMethods
or a new PyMatrixMethods struct remains to be determined.
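A hedged sketch of how these pieces are used from Python code (Python
3.5+; the toy Mat class below is illustrative only):
import operator

class Mat:
    def __init__(self, rows):
        self.rows = rows
    def __matmul__(self, other):        # invoked by the @ operator
        return Mat([[sum(a * b for a, b in zip(row, col))
                     for col in zip(*other.rows)] for row in self.rows])

A = Mat([[1, 2], [3, 4]])
B = Mat([[5, 6], [7, 8]])

print((A @ B).rows)                       # [[19, 22], [43, 50]]
print(operator.matmul(A, B).rows)         # the same, via the operator module
print(operator.__matmul__(A, B).rows)     # and via the dunder-named alias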
Rationale for specification details
Choice of operator
Why @ instead of some other spelling? There isn’t any consensus
across other programming languages about how this operator should be
named [11]; here we discuss the various options.
Restricting ourselves only to symbols present on US English keyboards,
the punctuation characters that don’t already have a meaning in Python
expression context are: @, backtick, $, !, and ?. Of
these options, @ is clearly the best; ! and ? are already
heavily freighted with inapplicable meanings in the programming
context, backtick has been banned from Python by BDFL pronouncement
(see PEP 3099), and $ is uglier, even more dissimilar to * and
⋅, and has Perl/PHP baggage. $ is probably the
second-best option of these, though.
Symbols which are not present on US English keyboards start at a
significant disadvantage (having to spend 5 minutes at the beginning
of every numeric Python tutorial just going over keyboard layouts is
not a hassle anyone really wants). Plus, even if we somehow overcame
the typing problem, it’s not clear there are any that are actually
better than @. Some options that have been suggested include:
U+00D7 MULTIPLICATION SIGN: A × B
U+22C5 DOT OPERATOR: A ⋅ B
U+2297 CIRCLED TIMES: A ⊗ B
U+00B0 DEGREE: A ° B
What we need, though, is an operator that means “matrix
multiplication, as opposed to scalar/elementwise multiplication”.
There is no conventional symbol with this meaning in either
programming or mathematics, where these operations are usually
distinguished by context. (And U+2297 CIRCLED TIMES is actually used
conventionally to mean exactly the wrong things: elementwise
multiplication – the “Hadamard product” – or outer product, rather
than matrix/inner product like our operator). @ at least has the
virtue that it looks like a funny non-commutative operator; a naive
user who knows maths but not programming couldn’t look at A * B
versus A × B, or A * B versus A ⋅ B, or A * B versus
A ° B and guess which one is the usual multiplication, and which
one is the special case.
Finally, there is the option of using multi-character tokens. Some
options:
Matlab and Julia use a .* operator. Aside from being visually
confusable with *, this would be a terrible choice for us
because in Matlab and Julia, * means matrix multiplication and
.* means elementwise multiplication, so using .* for matrix
multiplication would make us exactly backwards from what Matlab and
Julia users expect.
APL apparently used +.×, which by combining a multi-character
token, confusing attribute-access-like . syntax, and a unicode
character, ranks somewhere below U+2603 SNOWMAN on our candidate
list. If we like the idea of combining addition and multiplication
operators as being evocative of how matrix multiplication actually
works, then something like +* could be used – though this may
be too easy to confuse with *+, which is just multiplication
combined with the unary + operator.
PEP 211 suggested ~*. This has the downside that it sort of
suggests that there is a unary * operator that is being combined
with unary ~, but it could work.
R uses %*% for matrix multiplication. In R this forms part of a
general extensible infix system in which all tokens of the form
%foo% are user-defined binary operators. We could steal the
token without stealing the system.
Some other plausible candidates that have been suggested: >< (=
ascii drawing of the multiplication sign ×); the footnote operator
[*] or |*| (but when used in context, the use of vertical
grouping symbols tends to recreate the nested parentheses visual
clutter that was noted as one of the major downsides of the function
syntax we’re trying to get away from); ^*.
So, it doesn’t matter much, but @ seems as good or better than any
of the alternatives:
It’s a friendly character that Pythoneers are already used to typing
in decorators, but the decorator usage and the math expression
usage are sufficiently dissimilar that it would be hard to confuse
them in practice.
It’s widely accessible across keyboard layouts (and thanks to its
use in email addresses, this is true even of weird keyboards like
those in phones).
It’s round like * and ⋅.
The mATrices mnemonic is cute.
The swirly shape is reminiscent of the simultaneous sweeps over rows
and columns that define matrix multiplication.
Its asymmetry is evocative of its non-commutative nature.
Whatever, we have to pick something.
Precedence and associativity
There was a long discussion [15] about
whether @ should be right- or left-associative (or even something
more exotic [18]). Almost all Python operators are
left-associative, so following this convention would be the simplest
approach, but there were two arguments that suggested matrix
multiplication might be worth making right-associative as a special
case:
First, matrix multiplication has a tight conceptual association with
function application/composition, so many mathematically sophisticated
users have an intuition that an expression like RSx proceeds
from right-to-left, with first S transforming the vector
x, and then R transforming the result. This isn’t
universally agreed (and not all number-crunchers are steeped in the
pure-math conceptual framework that motivates this intuition
[16]), but at the least this
intuition is more common than for other operations like 2⋅3⋅4 which everyone reads as going from left-to-right.
Second, if expressions like Mat @ Mat @ vec appear often in code,
then programs will run faster (and efficiency-minded programmers will
be able to use fewer parentheses) if this is evaluated as Mat @ (Mat
@ vec) than if it is evaluated like (Mat @ Mat) @ vec.
However, weighing against these arguments are the following:
Regarding the efficiency argument, empirically, we were unable to find
any evidence that Mat @ Mat @ vec type expressions actually
dominate in real-life code. Parsing a number of large projects that
use numpy, we found that when forced by numpy’s current funcall syntax
to choose an order of operations for nested calls to dot, people
actually use left-associative nesting slightly more often than
right-associative nesting [17]. And anyway,
writing parentheses isn’t so bad – if an efficiency-minded programmer
is going to take the trouble to think through the best way to evaluate
some expression, they probably should write down the parentheses
regardless of whether they’re needed, just to make it obvious to the
next reader that the order of operations matters.
In addition, it turns out that other languages, including those with
much more of a focus on linear algebra, overwhelmingly make their
matmul operators left-associative. Specifically, the @ equivalent
is left-associative in R, Matlab, Julia, IDL, and Gauss. The only
exceptions we found are Mathematica, in which a @ b @ c would be
parsed non-associatively as dot(a, b, c), and APL, in which all
operators are right-associative. There do not seem to exist any
languages that make @ right-associative and *
left-associative. And these decisions don’t seem to be controversial
– I’ve never seen anyone complaining about this particular aspect of
any of these other languages, and the left-associativity of *
doesn’t seem to bother users of the existing Python libraries that use
* for matrix multiplication. So, at the least we can conclude from
this that making @ left-associative will certainly not cause any
disasters. Making @ right-associative, OTOH, would be exploring
new and uncertain ground.
And another advantage of left-associativity is that it is much easier
to learn and remember that @ acts like *, than it is to
remember first that @ is unlike other Python operators by being
right-associative, and then on top of this, also have to remember
whether it is more tightly or more loosely binding than
*. (Right-associativity forces us to choose a precedence, and
intuitions were about equally split on which precedence made more
sense. So this suggests that no matter which choice we made, no-one
would be able to guess or remember it.)
On net, therefore, the general consensus of the numerical community is
that while matrix multiplication is something of a special case, it’s
not special enough to break the rules, and @ should parse like
* does.
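A tiny hedged check of this decision, using a toy class that records
how expressions group (Python 3.5+):
class Sym:
    def __init__(self, name):
        self.name = name
    def __matmul__(self, other):
        return Sym("({} @ {})".format(self.name, other.name))
    def __mul__(self, other):
        return Sym("({} * {})".format(self.name, other.name))

A, B, C = Sym("A"), Sym("B"), Sym("C")
print((A @ B @ C).name)    # (A @ B) @ C  -- left-associative
print((A * B @ C).name)    # (A * B) @ C  -- same precedence as *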
(Non)-Definitions for built-in types
No __matmul__ or __matpow__ are defined for builtin numeric
types (float, int, etc.) or for the numbers.Number
hierarchy, because these types represent scalars, and the consensus
semantics for @ are that it should raise an error on scalars.
We do not – for now – define a __matmul__ method on the standard
memoryview or array.array objects, for several reasons. Of
course this could be added if someone wants it, but these types would
require quite a bit of additional work beyond __matmul__ before
they could be used for numeric work – e.g., they have no way to do
addition or scalar multiplication either! – and adding such
functionality is beyond the scope of this PEP. In addition, providing
a quality implementation of matrix multiplication is highly
non-trivial. Naive nested loop implementations are very slow and
shipping such an implementation in CPython would just create a trap
for users. But the alternative – providing a modern, competitive
matrix multiply – would require that CPython link to a BLAS library,
which brings a set of new complications. In particular, several
popular BLAS libraries (including the one that ships by default on
OS X) currently break the use of multiprocessing [8].
Together, these considerations mean that the cost/benefit of adding
__matmul__ to these types just isn’t there, so for now we’ll
continue to delegate these problems to numpy and friends, and defer a
more systematic solution to a future proposal.
There are also non-numeric Python builtins which define __mul__
(str, list, …). We do not define __matmul__ for these
types either, because why would we even do that.
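A short hedged illustration of the resulting behaviour on builtins
(Python 3.5+):
for left, right in [(1, 2), (1.5, 2.5), ("ab", 3), ([1, 2], [3, 4])]:
    try:
        left @ right
    except TypeError as exc:
        print(exc)    # e.g. unsupported operand type(s) for @: 'int' and 'int'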
Non-definition of matrix power
Earlier versions of this PEP also proposed a matrix power operator,
@@, analogous to **. But on further consideration, it was
decided that the utility of this was sufficiently unclear that it
would be better to leave it out for now, and only revisit the issue if
– once we have more experience with @ – it turns out that @@
is truly missed. [14]
Rejected alternatives to adding a new operator
Over the past few decades, the Python numeric community has explored a
variety of ways to resolve the tension between matrix and elementwise
multiplication operations. PEP 211 and PEP 225, both proposed in 2000
and last seriously discussed in 2008 [9], were early
attempts to add new operators to solve this problem, but suffered from
serious flaws; in particular, at that time the Python numerical
community had not yet reached consensus on the proper API for array
objects, or on what operators might be needed or useful (e.g., PEP 225
proposes 6 new operators with unspecified semantics). Experience
since then has now led to consensus that the best solution, for both
numeric Python and core Python, is to add a single infix operator for
matrix multiply (together with the other new operators this implies
like @=).
We review some of the rejected alternatives here.
Use a second type that defines __mul__ as matrix multiplication:
As discussed above (Background: What’s wrong with the status quo?),
this has been tried for many years via the numpy.matrix type
(and its predecessors in Numeric and numarray). The result is a
strong consensus among both numpy developers and developers of
downstream packages that numpy.matrix should essentially never be
used, because of the problems caused by having conflicting duck types
for arrays. (Of course one could then argue we should only define
__mul__ to be matrix multiplication, but then we’d have the same
problem with elementwise multiplication.) There have been several
pushes to remove numpy.matrix entirely; the only counter-arguments
have come from educators who find that its problems are outweighed by
the need to provide a simple and clear mapping between mathematical
notation and code for novices (see Transparent syntax is especially
crucial for non-expert programmers). But, of course, starting out
newbies with a dispreferred syntax and then expecting them to
transition later causes its own problems. The two-type solution is
worse than the disease.
Add lots of new operators, or add a new generic syntax for defining
infix operators: In addition to being generally un-Pythonic and
repeatedly rejected by BDFL fiat, this would be using a sledgehammer
to smash a fly. The scientific python community has consensus that
adding one operator for matrix multiplication is enough to fix the one
otherwise unfixable pain point. (In retrospect, we all think PEP 225
was a bad idea too – or at least far more complex than it needed to
be.)
Add a new @ (or whatever) operator that has some other meaning in
general Python, and then overload it in numeric code: This was the
approach taken by PEP 211, which proposed defining @ to be the
equivalent of itertools.product. The problem with this is that
when taken on its own terms, it’s pretty clear that
itertools.product doesn’t actually need a dedicated operator. It
hasn’t even been deemed worthy of a builtin. (During discussions of
this PEP, a similar suggestion was made to define @ as a general
purpose function composition operator, and this suffers from the same
problem; functools.compose isn’t even useful enough to exist.)
Matrix multiplication has a uniquely strong rationale for inclusion as
an infix operator. There almost certainly don’t exist any other
binary operations that will ever justify adding any other infix
operators to Python.
Add a .dot method to array types so as to allow “pseudo-infix”
A.dot(B) syntax: This has been in numpy for some years, and in many
cases it’s better than dot(A, B). But it’s still much less readable
than real infix notation, and in particular still suffers from an
extreme overabundance of parentheses. See Why should matrix
multiplication be infix? above.
Use a ‘with’ block to toggle the meaning of * within a single code
block: E.g., numpy could define a special context object so that
we’d have:
c = a * b   # element-wise multiplication
with numpy.mul_as_dot:
    c = a * b   # matrix multiplication
However, this has two serious problems: first, it requires that every
array-like type’s __mul__ method know how to check some global
state (numpy.mul_is_currently_dot or whatever). This is fine if
a and b are numpy objects, but the world contains many
non-numpy array-like objects. So this either requires non-local
coupling – every numpy competitor library has to import numpy and
then check numpy.mul_is_currently_dot on every operation – or
else it breaks duck-typing, with the above code doing radically
different things depending on whether a and b are numpy
objects or some other sort of object. Second, and worse, with
blocks are dynamically scoped, not lexically scoped; i.e., any
function that gets called inside the with block will suddenly find
itself executing inside the mul_as_dot world, and crash and burn
horribly – if you’re lucky. So this is a construct that could only
be used safely in rather limited cases (no function calls), and which
would make it very easy to shoot yourself in the foot without warning.
Use a language preprocessor that adds extra numerically-oriented
operators and perhaps other syntax: (As per recent BDFL suggestion:
[1]) This suggestion seems based on the idea that
numerical code needs a wide variety of syntax additions. In fact,
given @, most numerical users don’t need any other operators or
syntax; it solves the one really painful problem that cannot be solved
by other means, and that causes painful reverberations through the
larger ecosystem. Defining a new language (presumably with its own
parser which would have to be kept in sync with Python’s, etc.), just
to support a single binary operator, is neither practical nor
desirable. In the numerical context, Python’s competition is
special-purpose numerical languages (Matlab, R, IDL, etc.). Compared
to these, Python’s killer feature is exactly that one can mix
specialized numerical code with code for XML parsing, web page
generation, database access, network programming, GUI libraries, and
so forth, and we also gain major benefits from the huge variety of
tutorials, reference material, introductory classes, etc., which use
Python. Fragmenting “numerical Python” from “real Python” would be a
major source of confusion. A major motivation for this PEP is to
reduce fragmentation. Having to set up a preprocessor would be an
especially prohibitive complication for unsophisticated users. And we
use Python because we like Python! We don’t want
almost-but-not-quite-Python.
Use overloading hacks to define a “new infix operator” like *dot*,
as in a well-known Python recipe: (See: [2]) Beautiful is
better than ugly. This is… not beautiful. And not Pythonic. And
especially unfriendly to beginners, who are just trying to wrap their
heads around the idea that there’s a coherent underlying system behind
these magic incantations that they’re learning, when along comes an
evil hack like this that violates that system, creates bizarre error
messages when accidentally misused, and whose underlying mechanisms
can’t be understood without deep knowledge of how object oriented
systems work.
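For readers who have not seen it, here is a minimal, self-contained
sketch of the kind of hack being described (loosely modeled on the
recipe; all names are illustrative):
class Infix:
    """Wraps a two-argument function so it can be spelled  a *op* b."""
    def __init__(self, func):
        self.func = func
    def __rmul__(self, left):
        # `a * op` returns a new Infix remembering the left operand
        return Infix(lambda right: self.func(left, right))
    def __mul__(self, right):
        # `(a * op) * b` finally applies the wrapped function
        return self.func(right)

dot = Infix(lambda a, b: sum(x * y for x, y in zip(a, b)))

print([1, 2, 3] *dot* [4, 5, 6])    # parses as ([1, 2, 3] * dot) * [4, 5, 6] -> 32
Note that even this toy version only works because list refuses to
multiply itself by an Infix object; types that aggressively define
their own * make the trick even more fragile.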
Use a special “facade” type to support syntax like arr.M * arr:
This is very similar to the previous proposal, in that the .M
attribute would basically return the same object as arr *dot would,
and thus suffers the same objections about ‘magicalness’. This
approach also has some non-obvious complexities: for example, while
arr.M * arr must return an array, arr.M * arr.M and
arr * arr.M must return facade objects, or else arr.M * arr.M * arr
and arr * arr.M * arr will not work. But this means that facade
objects must be able to recognize both other array objects and other
facade objects (which creates additional complexity for writing
interoperating array types from different libraries who must now
recognize both each other’s array types and their facade types). It
also creates pitfalls for users who may easily type arr * arr.M or
arr.M * arr.M and expect to get back an array object; instead,
they will get a mysterious object that throws errors when they attempt
to use it. Basically with this approach users must be careful to
think of .M* as an indivisible unit that acts as an infix operator
– and as infix-operator-like token strings go, at least *dot*
is prettier looking (look at its cute little ears!).
Discussions of this PEP
Collected here for reference:
Github pull request containing much of the original discussion and
drafting: https://github.com/numpy/numpy/pull/4351
sympy mailing list discussions of an early draft:
https://groups.google.com/forum/#!topic/sympy/22w9ONLa7qo
https://groups.google.com/forum/#!topic/sympy/4tGlBGTggZY
sage-devel mailing list discussions of an early draft:
https://groups.google.com/forum/#!topic/sage-devel/YxEktGu8DeM
13-Mar-2014 python-ideas thread:
https://mail.python.org/pipermail/python-ideas/2014-March/027053.html
numpy-discussion thread on whether to keep @@:
http://mail.scipy.org/pipermail/numpy-discussion/2014-March/069448.html
numpy-discussion threads on precedence/associativity of @:
* http://mail.scipy.org/pipermail/numpy-discussion/2014-March/069444.html
* http://mail.scipy.org/pipermail/numpy-discussion/2014-March/069605.html
References
[1]
From a comment by GvR on a G+ post by GvR; the
comment itself does not seem to be directly linkable: https://plus.google.com/115212051037621986145/posts/hZVVtJ9bK3u
[2]
http://code.activestate.com/recipes/384122-infix-operators/
http://www.sagemath.org/doc/reference/misc/sage/misc/decorators.html#sage.misc.decorators.infix_operator
[3]
http://conference.scipy.org/past.html
[4]
http://pydata.org/events/
[5]
In this formula, β is a vector or matrix of
regression coefficients, V is the estimated
variance/covariance matrix for these coefficients, and we want to
test the null hypothesis that Hβ = r; a large S
then indicates that this hypothesis is unlikely to be true. For
example, in an analysis of human height, the vector β
might contain one value which was the average height of the
measured men, and another value which was the average height of the
measured women, and then setting H = [1, − 1], r = 0 would
let us test whether men and women are the same height on
average. Compare to eq. 2.139 in
http://sfb649.wiwi.hu-berlin.de/fedc_homepage/xplore/tutorials/xegbohtmlnode17.html
Example code is adapted from https://github.com/rerpy/rerpy/blob/0d274f85e14c3b1625acb22aed1efa85d122ecb7/rerpy/incremental_ls.py#L202
[6]
Out of the 36 tutorials scheduled for PyCon 2014
(https://us.pycon.org/2014/schedule/tutorials/), we guess that the
8 below will almost certainly deal with matrices:
Dynamics and control with Python
Exploring machine learning with Scikit-learn
How to formulate a (science) problem and analyze it using Python
code
Diving deeper into Machine Learning with Scikit-learn
Data Wrangling for Kaggle Data Science Competitions – An etude
Hands-on with Pydata: how to build a minimal recommendation
engine.
Python for Social Scientists
Bayesian statistics made simple
In addition, the following tutorials could easily involve matrices:
Introduction to game programming
mrjob: Snakes on a Hadoop (“We’ll introduce some data science
concepts, such as user-user similarity, and show how to calculate
these metrics…”)
Mining Social Web APIs with IPython Notebook
Beyond Defaults: Creating Polished Visualizations Using Matplotlib
This gives an estimated range of 8 to 12 / 36 = 22% to 33% of
tutorials dealing with matrices; saying ~20% then gives us some
wiggle room in case our estimates are high.
[7]
SLOCs were defined as physical lines which contain
at least one token that is not a COMMENT, NEWLINE, ENCODING,
INDENT, or DEDENT. Counts were made by using tokenize module
from Python 3.2.3 to examine the tokens in all files ending .py
underneath some directory. Only tokens which occur at least once
in the source trees are included in the table. The counting script
is available in the PEP repository.
Matrix multiply counts were estimated by counting how often certain
tokens which are used as matrix multiply function names occurred in
each package. This creates a small number of false positives for
scikit-learn, because we also count instances of the wrappers
around dot that this package uses, and so there are a few dozen
tokens which actually occur in import or def statements.
All counts were made using the latest development version of each
project as of 21 Feb 2014.
‘stdlib’ is the contents of the Lib/ directory in commit
d6aa3fa646e2 to the cpython hg repository, and treats the following
tokens as indicating matrix multiply: n/a.
‘scikit-learn’ is the contents of the sklearn/ directory in commit
69b71623273ccfc1181ea83d8fb9e05ae96f57c7 to the scikit-learn
repository (https://github.com/scikit-learn/scikit-learn), and
treats the following tokens as indicating matrix multiply: dot,
fast_dot, safe_sparse_dot.
‘nipy’ is the contents of the nipy/ directory in commit
5419911e99546401b5a13bd8ccc3ad97f0d31037 to the nipy repository
(https://github.com/nipy/nipy/), and treats the following tokens as
indicating matrix multiply: dot.
[8]
BLAS libraries have a habit of secretly spawning
threads, even when used from single-threaded programs. And threads
play very poorly with fork(); the usual symptom is that
attempting to perform linear algebra in a child process causes an
immediate deadlock.
[9]
http://fperez.org/py4science/numpy-pep225/numpy-pep225.html
[10]
http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html
[11]
http://mail.scipy.org/pipermail/scipy-user/2014-February/035499.html
[12]
Counts were produced by manually entering the
string "import foo" or "from foo import" (with quotes) into
the Github code search page, e.g.:
https://github.com/search?q=%22import+numpy%22&ref=simplesearch&type=Code
on 2014-04-10 at ~21:00 UTC. The reported values are the numbers
given in the “Languages” box on the lower-left corner, next to
“Python”. This also causes some undercounting (e.g., leaving out
Cython code, and possibly one should also count HTML docs and so
forth), but these effects are negligible (e.g., only ~1% of numpy
usage appears to occur in Cython code, and probably even less for
the other modules listed). The use of this box is crucial,
however, because these counts appear to be stable, while the
“overall” counts listed at the top of the page (“We’ve found ___
code results”) are highly variable even for a single search –
simply reloading the page can cause this number to vary by a factor
of 2 (!!). (They do seem to settle down if one reloads the page
repeatedly, but nonetheless this is spooky enough that it seemed
better to avoid these numbers.)
These numbers should of course be taken with multiple grains of
salt; it’s not clear how representative Github is of Python code in
general, and limitations of the search tool make it impossible to
get precise counts. AFAIK this is the best data set currently
available, but it’d be nice if it were better. In particular:
Lines like import sys, os will only be counted in the sys
row.
A file containing both import X and from X import will be
counted twice
Imports of the form from X.foo import ... are missed. We
could catch these by instead searching for “from X”, but this is
a common phrase in English prose, so we’d end up with false
positives from comments, strings, etc. For many of the modules
considered this shouldn’t matter too much – for example, the
stdlib modules have flat namespaces – but it might especially
lead to undercounting of django, scipy, and twisted.
Also, it’s possible there exist other non-stdlib modules we didn’t
think to test that are even more-imported than numpy – though we
tried quite a few of the obvious suspects. If you find one, let us
know! The modules tested here were chosen based on a combination
of intuition and the top-100 list at pypi-ranking.info.
Fortunately, it doesn’t really matter if it turns out that numpy
is, say, merely the third most-imported non-stdlib module, since
the point is just that numeric programming is a common and
mainstream activity.
Finally, we should point out the obvious: whether a package is
imported is rather different from whether it’s important.
No-one’s claiming numpy is “the most important package” or anything
like that. Certainly more packages depend on distutils, e.g., than
depend on numpy – and far fewer source files import distutils than
import numpy. But this is fine for our present purposes. Most
source files don’t import distutils because most source files don’t
care how they’re distributed, so long as they are; these source
files thus don’t care about details of how distutils’ API works.
This PEP is in some sense about changing how numpy’s and related
packages’ APIs work, so the relevant metric is to look at source
files that are choosing to directly interact with that API, which
is sort of like what we get by looking at import statements.
[13]
The first such proposal occurs in Jim Hugunin’s very
first email to the matrix SIG in 1995, which lays out the first
draft of what became Numeric. He suggests using * for
elementwise multiplication, and % for matrix multiplication:
https://mail.python.org/pipermail/matrix-sig/1995-August/000002.html
[14]
http://mail.scipy.org/pipermail/numpy-discussion/2014-March/069502.html
[15]
http://mail.scipy.org/pipermail/numpy-discussion/2014-March/069444.html
http://mail.scipy.org/pipermail/numpy-discussion/2014-March/069605.html
[16]
http://mail.scipy.org/pipermail/numpy-discussion/2014-March/069610.html
[17]
http://mail.scipy.org/pipermail/numpy-discussion/2014-March/069578.html
[18]
http://mail.scipy.org/pipermail/numpy-discussion/2014-March/069530.html
Copyright
This document has been placed in the public domain.
| Final | PEP 465 – A dedicated infix operator for matrix multiplication | Standards Track | This PEP proposes a new binary operator to be used for matrix
multiplication, called @. (Mnemonic: @ is * for
mATrices.) |
PEP 466 – Network Security Enhancements for Python 2.7.x
Author:
Alyssa Coghlan <ncoghlan at gmail.com>
Status:
Final
Type:
Standards Track
Created:
23-Mar-2014
Python-Version:
2.7.9
Post-History:
23-Mar-2014, 24-Mar-2014, 25-Mar-2014, 26-Mar-2014, 16-Apr-2014
Resolution:
Python-Dev message
Table of Contents
Abstract
New security related features in Python 2.7 maintenance releases
Implementation status
Backwards compatibility considerations
OpenSSL compatibility
Other Considerations
Maintainability
Security releases
Integration testing
Handling lower security environments with low risk tolerance
Motivation and Rationale
Why these particular changes?
Rejected alternative: just advise developers to migrate to Python 3
Rejected alternative: create and release Python 2.8
Rejected alternative: distribute the security enhancements via PyPI
Rejected variant: provide a “legacy SSL infrastructure” branch
Rejected variant: synchronise particular modules entirely with Python 3
Rejected variant: open ended backport policy
Disclosure of Interest
Acknowledgements
References
Copyright
Abstract
Most CPython tracker issues are classified as errors in behaviour or
proposed enhancements. Most patches to fix behavioural errors are
applied to all active maintenance branches. Enhancement patches are
restricted to the default branch that becomes the next Python version.
This cadence works reasonably well during Python’s normal 18-24 month
feature release cycle, which is still applicable to the Python 3 series.
However, the age of the standard library in Python 2 has now reached a point
where it is sufficiently far behind the state of the art in network security
protocols for it to be causing real problems in use cases where upgrading to
Python 3 in the near term may not be feasible.
In recognition of the additional practical considerations that have arisen
during the 4+ year maintenance cycle for Python 2.7, this PEP allows a
critical set of network security related features to be backported from
Python 3.4 to upcoming Python 2.7.x maintenance releases.
While this PEP does not make any changes to the core development team’s
handling of security-fix-only branches that are no longer in active
maintenance, it does recommend that commercial redistributors providing
extended support periods for the Python standard library either backport
these features to their supported versions, or else explicitly disclaim
support for the use of older versions in roles that involve connecting
directly to the public internet.
New security related features in Python 2.7 maintenance releases
Under this proposal, the following features will be backported from Python
3.4 to upcoming Python 2.7.x maintenance releases:
in the os module:
persistent file descriptor for os.urandom().
in the hmac module:
constant time comparison function (hmac.compare_digest()).
in the hashlib module:
password hashing function (hashlib.pbkdf2_hmac()).
details of hash algorithm availability (hashlib.algorithms_guaranteed
and hashlib.algorithms_available).
in the ssl module:
this module is almost entirely synchronised with its Python 3
counterpart, bringing TLSv1.x settings, SSLContext manipulation, Server
Name Indication, access to platform certificate stores, standard
library support for peer hostname validation and more to the Python 2
series.
the only ssl module features not backported under this policy are
the ssl.RAND_* functions that provide access to OpenSSL’s random
number generation capabilities - use os.urandom() instead.
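A brief, hedged illustration of two of the backported APIs listed above
(available in sufficiently recent 2.7.x maintenance releases and in
Python 3.4+; the parameters shown are purely illustrative):
import hashlib
import ssl

# Password hashing via PBKDF2-HMAC (backported as hashlib.pbkdf2_hmac)
derived_key = hashlib.pbkdf2_hmac("sha256", b"password", b"salt", 100000)

# An SSL context with secure defaults, including hostname checking
context = ssl.create_default_context()
print(len(derived_key))        # 32 byte derived key for sha256
print(context.check_hostname)  # True: hostname validation on by default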
As a general change in maintenance policy, permission is also granted to
upgrade to newer feature releases of OpenSSL when preparing the binary
installers for new maintenance releases of Python 2.7.
This PEP does NOT propose a general exception for backporting new features
to Python 2.7 - every new feature proposed for backporting will still need
to be justified independently. In particular, it will need to be explained
why relying on an independently updated backport on the Python Package Index
instead is not an acceptable solution.
Implementation status
This PEP originally proposed adding all listed features to the Python 2.7.7
maintenance release. That approach proved to be too ambitious given the
limited time frame between the original creation and acceptance of the PEP
and the release of Python 2.7.7rc1. Instead, the progress of each individual
accepted feature backport is being tracked as an independent enhancement
targeting Python 2.7.
Implemented for Python 2.7.7:
Issue #21306: backport hmac.compare_digest
Issue #21462: upgrade OpenSSL in the Python 2.7 Windows installers
Implemented for Python 2.7.8:
Issue #21304: backport hashlib.pbkdf2
Implemented for Python 2.7.9 (in development):
Issue #21308: backport specified ssl module features
Issue #21307: backport remaining specified hashlib module features
Issue #21305: backport os.urandom shared file descriptor change
Backwards compatibility considerations
As in the Python 3 series, the backported ssl.create_default_context()
API is granted a backwards compatibility exemption that permits the
protocol, options, cipher and other settings of the created SSL context to
be updated in maintenance releases to use higher default security settings.
This allows them to appropriately balance compatibility and security at the
time of the maintenance release, rather than at the time of the original
feature release.
This PEP does not grant any other exemptions to the usual backwards
compatibility policy for maintenance releases. Instead, by explicitly
encouraging the use of feature based checks, it is designed to make it easier
to write more secure cross-version compatible Python software, while still
limiting the risk of breaking currently working software when upgrading to
a new Python 2.7 maintenance release.
In all cases where this proposal allows new features to be backported to
the Python 2.7 release series, it is possible to write cross-version
compatible code that operates by “feature detection” (for example, checking
for particular attributes in a module), without needing to explicitly check
the Python version.
It is then up to library and framework code to provide an appropriate warning
and fallback behaviour if a desired feature is found to be missing. While
some especially security sensitive software MAY fail outright if a desired
security feature is unavailable, most software SHOULD instead emit a warning
and continue operating using a slightly degraded security configuration.
The backported APIs allow library and application code to perform the
following actions after detecting the presence of a relevant
network security related feature:
explicitly opt in to more secure settings (to allow the use of enhanced
security features in older maintenance releases of Python with less
secure default behaviour)
explicitly opt in to less secure settings (to allow the use of newer Python
feature releases in lower security environments)
determine the default setting for the feature (this MAY require explicit
Python version checks to determine the Python feature release, but DOES
NOT require checking for a specific maintenance release)
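A hedged sketch of the feature-detection style recommended here (the
fallback policy shown is purely illustrative, not normative):
import hmac
import warnings

if hasattr(hmac, "compare_digest"):
    # Constant-time comparison is available (Python 3.3+, or a 2.7.x
    # maintenance release containing the backport).
    def digests_match(a, b):
        return hmac.compare_digest(a, b)
else:
    warnings.warn("hmac.compare_digest unavailable; falling back to an "
                  "ordinary (non-constant-time) comparison", RuntimeWarning)
    def digests_match(a, b):
        return a == b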
Security related changes to other modules (such as higher level networking
libraries and data format processing libraries) will continue to be made
available as backports and new modules on the Python Package Index, as
independent distribution remains the preferred approach to handling
software that must continue to evolve to handle changing development
requirements independently of the Python 2 standard library. Refer to
the Motivation and Rationale section for a review of the characteristics
that make the secure networking infrastructure worthy of special
consideration.
OpenSSL compatibility
Under this proposal, OpenSSL may be upgraded to more recent feature releases
in Python 2.7 maintenance releases. On Linux and most other POSIX systems,
the specific version of OpenSSL used already varies, as CPython dynamically
links to the system provided OpenSSL library by default.
For the Windows binary installers, the _ssl and _hashlib modules are
statically linked with OpenSSL and the associated symbols are not exported.
Marc-Andre Lemburg indicates that updating to newer OpenSSL releases in the
egenix-pyopenssl binaries has not resulted in any reported compatibility
issues [3]
The Mac OS X binary installers historically followed the same policy as
other POSIX installations and dynamically linked to the Apple provided
OpenSSL libraries. However, Apple has now ceased updating these
cross-platform libraries, instead requiring that even cross-platform
developers adopt Mac OS X specific interfaces to access up to date security
infrastructure on their platform. Accordingly, and independently of this
PEP, the Mac OS X binary installers were already going to be switched to
statically linking newer versions of OpenSSL [4]
Other Considerations
Maintainability
A number of developers, including Alex Gaynor and Donald Stufft, have
expressed interest in carrying out the feature backports covered by this
policy, and assisting with any additional maintenance burdens that arise
in the Python 2 series as a result.
Steve Dower and Brian Curtin have offered to help with the creation of the
Windows installers, allowing Martin von Löwis the opportunity to step back
from the task of maintaining the 2.7 Windows installer.
This PEP is primarily about establishing the consensus needed to allow them
to carry out this work. For other core developers, this policy change
shouldn’t impose any additional effort beyond potentially reviewing the
resulting patches for those developers specifically interested in the
affected modules.
Security releases
This PEP does not propose any changes to the handling of security
releases - those will continue to be source only releases that
include only critical security fixes.
However, the recommendations for library and application developers are
deliberately designed to accommodate commercial redistributors that choose
to apply these changes to additional Python release series that are either
in security fix only mode, or have been declared “end of life” by the core
development team.
Whether or not redistributors choose to exercise that option will be up
to the individual redistributor.
Integration testing
Third party integration testing services should offer users the ability
to test against multiple Python 2.7 maintenance releases (at least 2.7.6
and 2.7.7+), to ensure that libraries, frameworks and applications can still
test their handling of the legacy security infrastructure correctly (either
failing or degrading gracefully, depending on the security sensitivity of
the software), even after the features covered in this proposal have been
backported to the Python 2.7 series.
Handling lower security environments with low risk tolerance
For better or for worse (mostly worse), there are some environments where
the risk of latent security defects is more tolerated than even a slightly
increased risk of regressions in maintenance releases. This proposal largely
excludes these environments from consideration where the modules covered by
the exemption are concerned - this approach is entirely inappropriate for
software connected to the public internet, and defence in depth security
principles suggest that it is not appropriate for most private networks
either.
Downstream redistributors may still choose to cater to such environments,
but they will need to handle the process of downgrading the security
related modules and doing the associated regression testing themselves.
The main CPython continuous integration infrastructure will not cover this
scenario.
Motivation and Rationale
The creation of this PEP was prompted primarily by the aging SSL support in
the Python 2 series. As of March 2014, the Python 2.7 SSL module is
approaching four years of age, and the SSL support in the still popular
Python 2.6 release had its feature set locked six years ago.
These are simply too old to provide a foundation that can be recommended
in good conscience for secure networking software that operates over the
public internet, especially in an era where it is becoming quite clearly
evident that advanced persistent security threats are even more widespread
and more indiscriminate in their targeting than had previously been
understood. While they represented reasonable security infrastructure in
their time, the state of the art has moved on, and we need to investigate
mechanisms for effectively providing more up to date network security
infrastructure for users that, for whatever reason, are not currently in
a position to migrate to Python 3.
While the use of the system OpenSSL installation addresses many of these
concerns on Linux platforms, it doesn’t address all of them (in particular,
it is still difficult for software to explicitly require some higher level
security settings). The standard library support can be bypassed by using a
third party library like PyOpenSSL or Pycurl, but this still results in a
security problem, as these can be difficult dependencies to deploy, and many
users will remain unaware that they might want them. Rather than explaining
to potentially naive users how to obtain and use these libraries, it seems
better to just fix the included batteries.
In the case of the binary installers for Windows and Mac OS X that are
published on python.org, the version of OpenSSL used is entirely within
the control of the Python core development team, but is currently limited
to OpenSSL maintenance releases for the version initially shipped with the
corresponding Python feature release.
With increased popularity comes increased responsibility, and this proposal
aims to acknowledge the fact that Python’s popularity and adoption is at a
sufficiently high level that some of our design and policy decisions have
significant implications beyond the Python development community.
As one example, the Python 2 ssl module does not support the Server
Name Indication standard. While it is possible to obtain SNI support
by using the third party requests client library, actually doing so
currently requires using not only requests and its embedded dependencies,
but also half a dozen or more additional libraries. The lack of support
in the Python 2 series thus serves as an impediment to making effective
use of SNI on servers, as Python 2 clients will frequently fail to handle
it correctly.
Another more critical example is the lack of SSL hostname matching in the
Python 2 standard library - it is currently necessary to rely on a third
party library, such as requests or backports.ssl_match_hostname to
obtain that functionality in Python 2.
The Python 2 series also remains more vulnerable to remote timing attacks
on security sensitive comparisons than the Python 3 series, as it lacks a
standard library equivalent to the timing attack resistant
hmac.compare_digest() function. While appropriate secure comparison
functions can be implemented in third party extensions, many users don’t
even consider the issue and use ordinary equality comparisons instead
- while a standard library solution doesn’t automatically fix that problem,
it does make the barrier to resolution much lower once the problem is
pointed out.
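As a hedged illustration of the difference (the function names here are purely illustrative), comparing a secret with ordinary equality leaks timing information, while hmac.compare_digest() does not:

import hmac

def verify_token_naive(supplied, expected):
    # Ordinary equality can leak how many leading bytes match via timing,
    # which is exactly the weakness remote timing attacks exploit.
    return supplied == expected

def verify_token_safe(supplied, expected):
    # hmac.compare_digest() (Python 3.3+, backported to Python 2.7 by
    # this PEP) takes time independent of the number of matching bytes.
    return hmac.compare_digest(supplied, expected)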
Python 2.7 represents the only long term maintenance release the core
development team has provided, and it is natural that there will be things
that worked over a historically shorter maintenance lifespan that don’t work
over this longer support period. In the specific case of the problem
described in this PEP, the simplest available solution is to acknowledge
that long term maintenance of network security related modules requires
the ability to add new features, even while retaining backwards compatibility
for existing interfaces.
For those familiar with it, it is worth comparing the approach described in
this PEP with Red Hat’s handling of its long term open source support
commitments: it isn’t the RHEL 6.0 release itself that receives 10 years
worth of support, but the overall RHEL 6 series. The individual RHEL 6.x
point releases within the series then receive a wide variety of new
features, including security enhancements, all while meeting strict
backwards compatibility guarantees for existing software. The proposal
covered in this PEP brings our approach to long term maintenance more into
line with this precedent - we retain our strict backwards compatibility
requirements, but make an exception to the restriction against adding new
features.
To date, downstream redistributors have respected our upstream policy of
“no new features in Python maintenance releases”. This PEP explicitly
accepts that a more nuanced policy is appropriate in the case of network
security related features, and the specific change it describes is
deliberately designed such that it is potentially suitable for Red Hat
Enterprise Linux and its downstream derivatives.
Why these particular changes?
The key requirement for a feature to be considered for inclusion in this
proposal was that it must have security implications beyond the specific
application that is written in Python and the system that application is
running on. Thus the focus on network security protocols, password storage
and related cryptographic infrastructure - Python is a popular choice for
the development of web services and clients, and thus the capabilities of
widely used Python versions have implications for the security design of
other services that may themselves be using newer versions of Python or
other development languages, but need to interoperate with clients or
servers written using older versions of Python.
The intent behind this requirement was to minimise any impact that the
introduction of this policy may have on the stability and compatibility of
maintenance releases, while still addressing some key security concerns
relating to the particular aspects of Python 2.7. It would be thoroughly
counterproductive if end users became as cautious about updating to new
Python 2.7 maintenance releases as they are about updating to new feature
releases within the same release series.
The ssl module changes are included in this proposal to bring the
Python 2 series up to date with the past 4 years of evolution in network
security standards, and make it easier for those standards to be broadly
adopted in both servers and clients. Similarly the hash algorithm
availability indicators in hashlib are included to make it easier for
applications to detect and employ appropriate hash definitions across both
Python 2 and 3.
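A minimal sketch of how those availability indicators might be used across Python 2 and 3 (the specific algorithm choices are illustrative assumptions, not recommendations made by this PEP):

import hashlib

# algorithms_available reports the hashes usable in this interpreter,
# including any supplied by the linked OpenSSL build; the getattr()
# fallback covers Python 2.7 releases that predate the backport.
available = getattr(hashlib, "algorithms_available", set())

if "sha512" in available:
    digest = hashlib.new("sha512", b"example data").hexdigest()
else:
    digest = hashlib.sha256(b"example data").hexdigest()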
The hmac.compare_digest() and hashlib.pbkdf2_hmac() functions are included to
help lower the barriers to secure password storage and checking in Python 2
server applications.
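For example, a hedged sketch of salted password hashing and checking built on these two functions (the salt size and iteration count are illustrative only, not recommendations made by this PEP):

import os
import hmac
import hashlib

def hash_password(password, salt=None, iterations=100000):
    # hashlib.pbkdf2_hmac() is available in Python 3.4+ and is one of the
    # features this PEP proposes making available in Python 2.7.
    if salt is None:
        salt = os.urandom(16)
    derived = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                  salt, iterations)
    return salt, iterations, derived

def check_password(password, salt, iterations, expected):
    _, _, derived = hash_password(password, salt, iterations)
    # Constant-time comparison protects the check itself from timing attacks.
    return hmac.compare_digest(derived, expected)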
The os.urandom() change has been included in this proposal to further
encourage users to leave the task of providing high quality random numbers
for cryptographic use cases to operating system vendors. The use of
insufficiently random numbers has the potential to compromise any
cryptographic system, and operating system developers have more tools
available to address that problem adequately than the typical Python
application runtime.
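As an illustration of that intent (the token length is an assumption made for the example), security sensitive tokens should come from os.urandom() rather than the random module:

import os
import binascii

def new_session_token(nbytes=32):
    # os.urandom() delegates to the operating system's cryptographically
    # strong random source; random.random() is not suitable for this.
    return binascii.hexlify(os.urandom(nbytes))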
Rejected alternative: just advise developers to migrate to Python 3
This alternative represents the status quo. Unfortunately, it has proven
to be unworkable in practice, as the backwards compatibility implications
mean that this is a non-trivial migration process for large applications
and integration projects. While the tools for migration have evolved to
a point where it is possible to migrate even large applications
opportunistically and incrementally (rather than all at once) by updating
code to run in the large common subset of Python 2 and Python 3, using the
most recent technology often isn’t a priority in commercial environments.
Previously, this was considered an acceptable harm, as while it was an
unfortunate problem for the affected developers to have to face, it was
seen as an issue between them and their management chain to make the case
for infrastructure modernisation, and this case would become naturally
more compelling as the Python 3 series evolved.
However, now that we’re fully aware of the impact the limitations of the
Python 2 standard library may be having on the evolution of internet
security standards, I no longer believe that it is reasonable to expect
platform and application developers to resolve all of the latent defects
in an application’s Unicode correctness solely in order to gain access to
the network security enhancements already available in Python 3.
While Ubuntu (and to some extent Debian as well) are committed to porting all
default system services and scripts to Python 3, and to removing Python 2
from its default distribution images (but not from its archives), this is
a mammoth task and won’t be completed for the Ubuntu 14.04 LTS release
(at least for the desktop image - it may be achieved for the mobile and
server images).
Fedora has even more work to do to migrate, and it will take a non-trivial
amount of time to migrate the relevant infrastructure components. While
Red Hat are also actively working to make it easier for users to use more
recent versions of Python on our stable platforms, it’s going to take time
for those efforts to start having an impact on end users’ choice of version,
and any such changes also don’t benefit the core platform infrastructure
that runs in the integrated system Python by necessity.
The OpenStack migration to Python 3 is also still in its infancy, and even
though that’s a project with an extensive and relatively robust automated
test suite, it’s still large enough that it is going to take quite some time
to migrate fully to a Python 2/3 compatible code base.
And that’s just three of the highest profile open source projects that
make heavy use of Python. Given the likely existence of large amounts of
legacy code that lacks the kind of automated regression test suite needed
to help support a migration from Python 2 to Python 3, there are likely to
be many cases where reimplementation (perhaps even in Python 3) proves
easier than migration. The key point of this PEP is that those situations
affect more people than just the developers and users of the affected
application: the existence of clients and servers with outdated network
security infrastructure becomes something that developers of secure
networked services need to take into account as part of their security
design, and that’s a problem that inhibits the adoption of better security
standards.
As Terry Reedy noted, if we try to persist with the status quo, the likely
outcome is that commercial redistributors will attempt to do something
like this on behalf of their customers anyway, but in a potentially
inconsistent and ad hoc manner. By drawing the scope definition process
into the upstream project we are in a better position to influence the
approach taken to address the situation and to help ensure some consistency
across redistributors.
The problem is real, so something needs to change, and this PEP describes
my preferred approach to addressing the situation.
Rejected alternative: create and release Python 2.8
With sufficient corporate support, it likely would be possible to create
and release Python 2.8 (it’s highly unlikely such a project would garner
enough interest to be achievable with only volunteers). However, this
wouldn’t actually solve the problem, as the aim is to provide a relatively
low impact way to incorporate enhanced security features into integrated
products and deployments that make use of Python 2.
Upgrading to a new Python feature release would mean both more work for the
core development team, as well as a more disruptive update that most
potential end users would likely just skip entirely.
Attempting to create a Python 2.8 release would also bring in suggestions
to backport many additional features from Python 3 (such as tracemalloc
and the improved coroutine support), making the migration from Python 2.7
to this hypothetical 2.8 release even riskier and more disruptive.
This is not a recommended approach, as it would involve substantial
additional work for a result that is actually less effective in achieving
the original aim (which is to eliminate the current widespread use of the
aging network security infrastructure in the Python 2 series).
Furthermore, while I can’t make any commitments to actually addressing
this issue on Red Hat platforms, I can categorically rule out the idea
of a Python 2.8 being of any use to me in even attempting to get it
addressed.
Rejected alternative: distribute the security enhancements via PyPI
While this initially appears to be an attractive and easier to manage
approach, it actually suffers from several significant problems.
Firstly, this is complex, low level, cross-platform code that integrates
with the underlying operating system across a variety of POSIX platforms
(including Mac OS X) and Windows. The CPython BuildBot fleet is already set
up to handle continuous integration in that context, but most of the
freely available continuous integration services just offer Linux, and
perhaps paid access to Windows. Those services work reasonably well for
software that largely runs on the abstraction layers offered by Python and
other dynamic languages, as well as the more comprehensive abstraction
offered by the JVM, but won’t suffice for the kind of code involved here.
The OpenSSL dependency for the network security support also qualifies as
the kind of “complex binary dependency” that isn’t yet handled well by the
pip based software distribution ecosystem. Relying on a third party
binary dependency also creates potential compatibility problems for pip
when running on other interpreters like PyPy.
Another practical problem with the idea is the fact that pip itself
relies on the ssl support in the standard library (with some additional
support from a bundled copy of requests, which in turn bundles
backports.ssl_match_hostname), and hence would require any replacement
module to also be bundled within pip. This wouldn’t pose any
insurmountable difficulties (it’s just another dependency to vendor), but
it would mean yet another copy of OpenSSL to keep up to date.
This approach also has the same flaw as all other “improve security by
renaming things” approaches: they completely miss the users who most need
help, and raise significant barriers against being able to encourage users
to do the right thing when their infrastructure supports it (since
“use this other module” is a much higher impact change than “turn on this
higher security setting”). Deprecating the aging SSL infrastructure in the
standard library in favour of an external module would be even more user
hostile than accepting the slightly increased risk of regressions associated
with upgrading it in place.
Last, but certainly not least, this approach suffers from the same problem
as the idea of doing a Python 2.8 release: likely not solving the actual
problem. Commercial redistributors of Python are set up to redistribute
Python, and a pre-existing set of additional packages. Getting new
packages added to the pre-existing set can be done, but means approaching
each and every redistributor and asking them to update their
repackaging process accordingly. By contrast, the approach described in
this PEP would require redistributors to deliberately opt out of the
security enhancements by downgrading the provided network security
infrastructure, which most of them are unlikely to do.
Rejected variant: provide a “legacy SSL infrastructure” branch
Earlier versions of this PEP included the concept of a 2.7-legacy-ssl
branch that preserved the exact feature set of the Python 2.7.6 network
security infrastructure.
In my opinion, anyone that actually wants this is almost certainly making a
mistake, and if they insist they really do want it in their specific
situation, they’re welcome to either make it themselves or arrange for a
downstream redistributor to make it for them.
If they are made publicly available, any such rebuilds should be referred to
as “Python 2.7 with Legacy SSL” to clearly distinguish them from the official
Python 2.7 releases that include more up to date network security
infrastructure.
After the first Python 2.7 maintenance release that implements this PEP, it
would also be appropriate to refer to Python 2.7.6 and earlier releases as
“Python 2.7 with Legacy SSL”.
Rejected variant: synchronise particular modules entirely with Python 3
Earlier versions of this PEP suggested synchronising the hmac,
hashlib and ssl modules entirely with their Python 3 counterparts.
This approach proved too vague to build a compelling case for the exception,
and has thus been replaced by the current more explicit proposal.
Rejected variant: open ended backport policy
Earlier versions of this PEP suggested a general policy change related to
future Python 3 enhancements that impact the general security of the
internet.
That approach created unnecessary uncertainty, so it has been simplified to
propose backporting a specific concrete set of changes. Future feature
backport proposals can refer back to this PEP as precedent, but it will
still be necessary to make a specific case for each feature addition to
the Python 2.7 long-term support release.
Disclosure of Interest
The author of this PEP currently works for Red Hat on test automation tools.
If this proposal is accepted, I will be strongly encouraging Red Hat to take
advantage of the resulting opportunity to help improve the overall security
of the Python ecosystem. However, I do not speak for Red Hat in this matter,
and cannot make any commitments on Red Hat’s behalf.
Acknowledgements
Thanks to Christian Heimes and others for their efforts in greatly improving
Python’s SSL support in the Python 3 series, and a variety of members of
the Python community for helping me to better understand the implications
of the default settings we provide in our SSL modules, and the impact that
tolerating the use of SSL infrastructure that was defined in 2010
(Python 2.7) or even 2008 (Python 2.6) potentially has for the security
of the web as a whole.
Thanks to Donald Stufft and Alex Gaynor for identifying a more limited set
of essential security features that allowed the proposal to be made more
fine-grained than backporting entire modules from Python 3.4 ([7], [8]).
Christian and Donald also provided valuable feedback on a preliminary
draft of this proposal.
Thanks also to participants in the python-dev mailing list threads
([1], [2], [5], [6]), as well as the various folks I discussed this issue with at
PyCon 2014 in Montreal.
References
[1]
PEP 466 discussion (round 1)
(https://mail.python.org/pipermail/python-dev/2014-March/133334.html)
[2]
PEP 466 discussion (round 2)
(https://mail.python.org/pipermail/python-dev/2014-March/133389.html)
[3]
Marc-Andre Lemburg’s OpenSSL feedback for Windows
(https://mail.python.org/pipermail/python-dev/2014-March/133438.html)
[4]
Ned Deily’s OpenSSL feedback for Mac OS X
(https://mail.python.org/pipermail/python-dev/2014-March/133347.html)
[5]
PEP 466 discussion (round 3)
(https://mail.python.org/pipermail/python-dev/2014-March/133442.html)
[6]
PEP 466 discussion (round 4)
(https://mail.python.org/pipermail/python-dev/2014-March/133472.html)
[7]
Donald Stufft’s recommended set of backported features
(https://mail.python.org/pipermail/python-dev/2014-March/133500.html)
[8]
Alex Gaynor’s recommended set of backported features
(https://mail.python.org/pipermail/python-dev/2014-March/133503.html)
Copyright
This document has been placed in the public domain.
| Final | PEP 466 – Network Security Enhancements for Python 2.7.x | Standards Track | Most CPython tracker issues are classified as errors in behaviour or
proposed enhancements. Most patches to fix behavioural errors are
applied to all active maintenance branches. Enhancement patches are
restricted to the default branch that becomes the next Python version. |
PEP 469 – Migration of dict iteration code to Python 3
Author:
Alyssa Coghlan <ncoghlan at gmail.com>
Status:
Withdrawn
Type:
Standards Track
Created:
18-Apr-2014
Python-Version:
3.5
Post-History:
18-Apr-2014, 21-Apr-2014
Table of Contents
Abstract
PEP Withdrawal
Mapping iteration models
Lists as mutable snapshots
Iterator objects
Set based dynamic views
Migrating directly to Python 3
Migrating to the common subset of Python 2 and 3
Migrating from Python 3 to the common subset with Python 2.7
Possible changes to Python 3.5+
Discussion
Acknowledgements
Copyright
Abstract
For Python 3, PEP 3106 changed the design of the dict builtin and the
mapping API in general to replace the separate list based and iterator based
APIs in Python 2 with a merged, memory efficient set and multiset view
based API. This new style of dict iteration was also added to the Python 2.7
dict type as a new set of iteration methods.
This means that there are now 3 different kinds of dict iteration that may
need to be migrated to Python 3 when an application makes the transition:
Lists as mutable snapshots: d.items() -> list(d.items())
Iterator objects: d.iteritems() -> iter(d.items())
Set based dynamic views: d.viewitems() -> d.items()
There is currently no widely agreed best practice on how to reliably convert
all Python 2 dict iteration code to the common subset of Python 2 and 3,
especially when test coverage of the ported code is limited. This PEP
reviews the various ways the Python 2 iteration APIs may be accessed, and
looks at the available options for migrating that code to Python 3 by way of
the common subset of Python 2.6+ and Python 3.0+.
The PEP also considers the question of whether or not there are any
additions that may be worth making to Python 3.5 that may ease the
transition process for application code that doesn’t need to worry about
supporting earlier versions when eventually making the leap to Python 3.
PEP Withdrawal
In writing the second draft of this PEP, I came to the conclusion that
the readability of hybrid Python 2/3 mapping code can actually be best
enhanced by better helper functions rather than by making changes to
Python 3.5+. The main value I now see in this PEP is as a clear record
of the recommended approaches to migrating mapping iteration code from
Python 2 to Python 3, as well as suggesting ways to keep things readable
and maintainable when writing hybrid code that supports both versions.
Notably, I recommend that hybrid code avoid calling mapping iteration
methods directly, and instead rely on builtin functions where possible,
and some additional helper functions for cases that would be a simple
combination of a builtin and a mapping method in pure Python 3 code, but
need to be handled slightly differently to get the exact same semantics in
Python 2.
Static code checkers like pylint could potentially be extended with an
optional warning regarding direct use of the mapping iteration methods in
a hybrid code base.
Mapping iteration models
Python 2.7 provides three different sets of methods to extract the keys,
values and items from a dict instance, accounting for 9 out of the
18 public methods of the dict type.
In Python 3, this has been rationalised to just 3 out of 11 public methods
(as the has_key method has also been removed).
Lists as mutable snapshots
This is the oldest of the three styles of dict iteration, and hence the
one implemented by the d.keys(), d.values() and d.items()
methods in Python 2.
These methods all return lists that are snapshots of the state of the
mapping at the time the method was called. This has a few consequences:
the original object can be mutated freely without affecting iteration
over the snapshot
the snapshot can be modified independently of the original object
the snapshot consumes memory proportional to the size of the original
mapping
The semantic equivalents of these operations in Python 3 are
list(d.keys()), list(d.values()) and list(d.items()).
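For example, under the Python 2 list based API the snapshot is fully decoupled from the mapping, so the following idiom is safe (a property the Python 3 view based API deliberately does not preserve):

d = dict(a=1, b=2)
for k in d.keys():    # an independent list snapshot under Python 2
    del d[k]          # mutating the mapping does not affect the snapshot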
Iterator objects
In Python 2.2, dict objects gained support for the then-new iterator
protocol, allowing direct iteration over the keys stored in the dictionary,
thus avoiding the need to build a list just to iterate over the dictionary
contents one entry at a time. iter(d) provides direct access to the
iterator object for the keys.
Python 2 also provides a d.iterkeys() method that is essentially
synonymous with iter(d), along with d.itervalues() and
d.iteritems() methods.
These iterators provide live views of the underlying object, and hence may
fail if the set of keys in the underlying object is changed during
iteration:
>>> d = dict(a=1)
>>> for k in d:
... del d[k]
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: dictionary changed size during iteration
As iterators, iteration over these objects is also a one-time operation:
once the iterator is exhausted, you have to go back to the original mapping
in order to iterate again.
In Python 3, direct iteration over mappings works the same way as it does
in Python 2. There are no method based equivalents - the semantic equivalents
of d.itervalues() and d.iteritems() in Python 3 are
iter(d.values()) and iter(d.items()).
The six and future.utils compatibility modules also both provide
iterkeys(), itervalues() and iteritems() helper functions that
provide efficient iterator semantics in both Python 2 and 3.
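For instance, hybrid code might rely on the six helpers rather than calling the Python 2 only methods directly (a small sketch, assuming six is available):

from __future__ import print_function

import six

d = dict(a=1, b=2)
for key, value in six.iteritems(d):
    # d.iteritems() on Python 2, iter(d.items()) on Python 3
    print(key, value)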
Set based dynamic views
The model that is provided in Python 3 as a method based API is that of set
based dynamic views (technically multisets in the case of the values()
view).
In Python 3, the objects returned by d.keys(), d.values() and
d.items() provide a live view of the current state of
the underlying object, rather than taking a full snapshot of the current
state as they did in Python 2. This change is safe in many circumstances,
but does mean that, as with the direct iteration API, it is necessary to
avoid adding or removing keys during iteration, in order to avoid
encountering the following error:
>>> d = dict(a=1)
>>> for k, v in d.items():
... del d[k]
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: dictionary changed size during iteration
Unlike the iteration API, these objects are iterables, rather than iterators:
you can iterate over them multiple times, and each time they will iterate
over the entire underlying mapping.
These semantics are also available in Python 2.7 as the d.viewkeys(),
d.viewvalues() and d.viewitems() methods.
The future.utils compatibility module also provides
viewkeys(), viewvalues() and viewitems() helper functions
when running on Python 2.7 or Python 3.x.
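The live behaviour of the views can be seen directly (shown here with the Python 3 method names; under Python 2.7 the same example would use d.viewkeys()):

d = dict(a=1)
keys = d.keys()        # a dynamic view, not a snapshot
d["b"] = 2
print("b" in keys)     # True - the view tracks the underlying dict
print(sorted(keys))    # ['a', 'b'] - and can be iterated repeatedly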
Migrating directly to Python 3
The 2to3 migration tool handles direct migrations to Python 3 in
accordance with the semantic equivalents described above:
d.keys() -> list(d.keys())
d.values() -> list(d.values())
d.items() -> list(d.items())
d.iterkeys() -> iter(d.keys())
d.itervalues() -> iter(d.values())
d.iteritems() -> iter(d.items())
d.viewkeys() -> d.keys()
d.viewvalues() -> d.values()
d.viewitems() -> d.items()
Rather than 9 distinct mapping methods for iteration, there are now only the
3 view methods, which combine in straightforward ways with the two relevant
builtin functions to cover all of the behaviours that are available as
dict methods in Python 2.7.
Note that in many cases d.keys() can be replaced by just d, but the
2to3 migration tool doesn’t attempt that replacement.
The 2to3 migration tool also does not provide any automatic assistance
for migrating references to these objects as bound or unbound methods - it
only automates conversions where the API is called immediately.
Migrating to the common subset of Python 2 and 3
When migrating to the common subset of Python 2 and 3, the above
transformations are not generally appropriate, as they all either result in
the creation of a redundant list in Python 2, have unexpectedly different
semantics in at least some cases, or both.
Since most code running in the common subset of Python 2 and 3 supports
at least as far back as Python 2.6, the currently recommended approach to
conversion of mapping iteration operations depends on two helper functions
for efficient iteration over mapping values and mapping item tuples:
d.keys() -> list(d)
d.values() -> list(itervalues(d))
d.items() -> list(iteritems(d))
d.iterkeys() -> iter(d)
d.itervalues() -> itervalues(d)
d.iteritems() -> iteritems(d)
Both six and future.utils provide appropriate definitions of
itervalues() and iteritems() (along with essentially redundant
definitions of iterkeys()). Creating your own definitions of these
functions in a custom compatibility module is also relatively
straightforward:
try:
dict.iteritems
except AttributeError:
# Python 3
def itervalues(d):
return iter(d.values())
def iteritems(d):
return iter(d.items())
else:
# Python 2
def itervalues(d):
return d.itervalues()
def iteritems(d):
return d.iteritems()
The greatest loss of readability currently arises when converting code that
actually needs the list based snapshots that were the default in Python
2. This readability loss could likely be mitigated by also providing
listvalues and listitems helper functions, allowing the affected
conversions to be simplified to:
d.values() -> listvalues(d)
d.items() -> listitems(d)
The corresponding compatibility function definitions are as straightforward
as their iterator counterparts:
try:
dict.iteritems
except AttributeError:
# Python 3
def listvalues(d):
return list(d.values())
def listitems(d):
return list(d.items())
else:
# Python 2
def listvalues(d):
return d.values()
def listitems(d):
return d.items()
With that expanded set of compatibility functions, Python 2 code would
then be converted to “idiomatic” hybrid 2/3 code as:
d.keys() -> list(d)
d.values() -> listvalues(d)
d.items() -> listitems(d)
d.iterkeys() -> iter(d)
d.itervalues() -> itervalues(d)
d.iteritems() -> iteritems(d)
This compares well for readability with the idiomatic pure Python 3
code that uses the mapping methods and builtins directly:
d.keys() -> list(d)
d.values() -> list(d.values())
d.items() -> list(d.items())
d.iterkeys() -> iter(d)
d.itervalues() -> iter(d.values())
d.iteritems() -> iter(d.items())
It’s also notable that when using this approach, hybrid code would never
invoke the mapping methods directly: it would always invoke either a
builtin or helper function instead, in order to ensure the exact same
semantics on both Python 2 and 3.
Migrating from Python 3 to the common subset with Python 2.7
While the majority of migrations are currently from Python 2 either directly
to Python 3 or to the common subset of Python 2 and Python 3, there are also
some migrations of newer projects that start in Python 3 and then later
add Python 2 support, either due to user demand, or to gain access to
Python 2 libraries that are not yet available in Python 3 (and porting them
to Python 3 or creating a Python 3 compatible replacement is not a trivial
exercise).
In these cases, Python 2.7 compatibility is often sufficient, and the 2.7+
only view based helper functions provided by future.utils allow the bare
accesses to the Python 3 mapping view methods to be replaced with code that
is compatible with both Python 2.7 and Python 3 (note, this is the only
migration chart in the PEP that has Python 3 code on the left of the
conversion):
d.keys() -> viewkeys(d)
d.values() -> viewvalues(d)
d.items() -> viewitems(d)
list(d.keys()) -> list(d)
list(d.values()) -> listvalues(d)
list(d.items()) -> listitems(d)
iter(d.keys()) -> iter(d)
iter(d.values()) -> itervalues(d)
iter(d.items()) -> iteritems(d)
As with migrations from Python 2 to the common subset, note that the hybrid
code ends up never invoking the mapping methods directly - it only calls
builtins and helper methods, with the latter addressing the semantic
differences between Python 2 and Python 3.
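A small hedged sketch of what such hybrid code might look like with the future.utils view helpers (assuming Python 2.7+ as described above; the invert() function is purely illustrative):

from future.utils import viewitems

def invert(mapping):
    # viewitems() resolves to mapping.viewitems() on Python 2.7 and
    # mapping.items() on Python 3, so no intermediate list is created.
    return {value: key for key, value in viewitems(mapping)}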
Possible changes to Python 3.5+
The main proposal put forward to potentially aid migration of existing
Python 2 code to Python 3 is the restoration of some or all of the
alternate iteration APIs to the Python 3 mapping API. In particular,
the initial draft of this PEP proposed making the following conversions
possible when migrating to the common subset of Python 2 and Python 3.5+:
d.keys() -> list(d)
d.values() -> list(d.itervalues())
d.items() -> list(d.iteritems())
d.iterkeys() -> d.iterkeys()
d.itervalues() -> d.itervalues()
d.iteritems() -> d.iteritems()
Possible mitigations of the additional language complexity in Python 3
created by restoring these methods included immediately deprecating them,
as well as potentially hiding them from the dir() function (or perhaps
even defining a way to make pydoc aware of function deprecations).
However, in the case where the list output is actually desired, the end
result of that proposal is actually less readable than an appropriately
defined helper function, and the function and method forms of the iterator
versions are pretty much equivalent from a readability perspective.
So unless I’ve missed something critical, readily available listvalues()
and listitems() helper functions look like they will improve the
readability of hybrid code more than anything we could add back to the
Python 3.5+ mapping API, and won’t have any long-term impact on the
complexity of Python 3 itself.
Discussion
The fact that 5 years into the Python 3 migration we still have users
considering the dict API changes a significant barrier to migration suggests
that there are problems with previously recommended approaches. This PEP
attempts to explore those issues and tries to isolate those cases where
previous advice (such as it was) could prove problematic.
My assessment (largely based on feedback from Twisted devs) is that
problems are most likely to arise when attempting to use d.keys(),
d.values(), and d.items() in hybrid code. While superficially it
seems as though there should be cases where it is safe to ignore the
semantic differences, in practice, the change from “mutable snapshot” to
“dynamic view” is significant enough that it is likely better
to just force the use of either list or iterator semantics for hybrid code,
and leave the use of the view semantics to pure Python 3 code.
This approach also creates rules that are simple enough and safe enough that
it should be possible to automate them in code modernisation scripts that
target the common subset of Python 2 and Python 3, just as 2to3 converts
them automatically when targeting pure Python 3 code.
Acknowledgements
Thanks to the folks at the Twisted sprint table at PyCon for a very
vigorous discussion of this idea (and several other topics), and especially
to Hynek Schlawack for acting as a moderator when things got a little too
heated :)
Thanks also to JP Calderone and Itamar Turner-Trauring for their email
feedback, as well as to the participants in the python-dev review of
the initial version of the PEP.
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 469 – Migration of dict iteration code to Python 3 | Standards Track | For Python 3, PEP 3106 changed the design of the dict builtin and the
mapping API in general to replace the separate list based and iterator based
APIs in Python 2 with a merged, memory efficient set and multiset view
based API. This new style of dict iteration was also added to the Python 2.7
dict type as a new set of iteration methods. |
PEP 473 – Adding structured data to built-in exceptions
Author:
Sebastian Kreft <skreft at deezer.com>
Status:
Rejected
Type:
Standards Track
Created:
29-Mar-2014
Post-History:
Resolution:
Python-Dev message
Table of Contents
Abstract
Rationale
Examples
IndexError
KeyError
AttributeError
NameError
Other Cases
Proposal
Potential Uses
Performance
References
Copyright
Abstract
Exceptions like AttributeError, IndexError, KeyError,
LookupError, NameError, TypeError, and ValueError do not
provide all information required by programmers to debug and better understand
what caused them.
Furthermore, in some cases the messages even have slightly different formats,
which makes it really difficult for tools to automatically provide additional
information to diagnose the problem.
To tackle the former and to lay the groundwork for the latter, it is proposed to
expand these exceptions so that they hold both the offending and affected entities.
Rationale
The main issue this PEP aims to solve is the fact that currently error messages
are not that expressive and lack some key information to resolve the exceptions.
Additionally, the information present on the error message is not always in the
same format, which makes it very difficult for third-party libraries to
provide automated diagnosis of the error.
These automated tools could, for example, detect typos or display or log extra
debug information. These could be particularly useful when running tests or in a
long running application.
Although it is in theory possible to have such libraries, they need to resort to
hacks in order to achieve the goal. One such example is
python-improved-exceptions [1], which modifies the byte-code to keep references
to the possibly interesting objects and also parses the error messages to
extract information like types or names. Unfortunately, such an approach is
extremely fragile and not portable.
A similar proposal [2] has been implemented for ImportError and in the same
fashion this idea has received support [3]. Additionally, almost 10 years ago
Guido asked in [11] to have a clean API to access the affected objects in
Exceptions like KeyError, AttributeError, NameError, and
IndexError. Similar issues and proposal ideas have been raised over the
last year. Some other issues have been created but, despite receiving support,
they were eventually abandoned. References to the created issues are listed below:
AttributeError: [11], [10], [5], [4], [3]
IndexError: [11], [6], [3]
KeyError: [11], [7], [3]
LookupError: [11]
NameError: [11], [10], [3]
TypeError: [8]
ValueError: [9]
To move forward with the development and to centralize the information and
discussion, this PEP aims to be a meta-issue summarizing all the above
discussions and ideas.
Examples
IndexError
The error message does not reference the list’s length nor the index used.
a = [1, 2, 3, 4, 5]
a[5]
IndexError: list index out of range
KeyError
By convention the key is the first element of the error's argument, but there's
no other information regarding the affected dictionary (key types, size, etc.)
b = {'foo': 1}
b['fo']
KeyError: 'fo'
AttributeError
The object’s type and the offending attribute are part of the error message.
However, there are some different formats and the information is not always
available. Furthermore, although the object type is useful in some cases, given
the dynamic nature of Python, it would be much more useful to have a reference
to the object itself. Additionally, the reference to the type is not fully
qualified and in some cases the type is just too generic to provide useful
information, for example in the case of accessing a module's attribute.
c = object()
c.foo
AttributeError: 'object' object has no attribute 'foo'
import string
string.foo
AttributeError: 'module' object has no attribute 'foo'
a = string.Formatter()
a.foo
AttributeError: 'Formatter' object has no attribute 'foo'
NameError
The error message typically provides the name.
foo = 1
fo
NameError: global name 'fo' is not defined
Other Cases
Issues are even harder to debug when the target object is the result of
another expression, for example:
a[b[c[0]]]
This issue is also related to the fact that opcodes only have line number
information and not the offset. This proposal would help in this case but not as
much as having offsets.
Proposal
Extend the exceptions AttributeError, IndexError, KeyError,
LookupError, NameError, TypeError, and ValueError with the
following:
AttributeError: target^w, attribute
IndexError: target^w, key^w, index (just an alias to key)
KeyError: target^w, key^w
LookupError: target^w, key^w
NameError: name, scope?
TypeError: unexpected_type
ValueError: unexpected_value^w
Attributes marked with the superscript ^w above may need to be weak references [12] to
prevent any memory cycles. However, this may add an unnecessary extra
complexity as noted by R. David Murray [13]. This is specially true given that
builtin types do not support being weak referenced.
TODO(skreft): expand this with examples of corner cases.
To remain backwards compatible these new attributes will be optional and keyword
only.
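Since the built-in exceptions do not currently carry these attributes, the closest a library can get today is a subclass; the sketch below is purely illustrative of the optional, keyword-only style the proposal describes, and is not part of the proposal itself:

class DetailedKeyError(KeyError):
    """Illustrative stand-in for the proposed KeyError(key=..., target=...)."""

    def __init__(self, *args, **kwargs):
        # The new attributes are optional and keyword only, so existing
        # KeyError('fo') style usage keeps working unchanged.
        self.key = kwargs.pop("key", None)
        self.target = kwargs.pop("target", None)  # possibly a weak reference
        super(DetailedKeyError, self).__init__(*args)

def lookup(mapping, key):
    try:
        return mapping[key]
    except KeyError:
        raise DetailedKeyError(key, key=key, target=mapping)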
It is proposed to add this information, rather than just improving the error
messages, as the former would enable new debugging frameworks and tools, and would
also allow switching to lazily generated messages in the future. Generated messages
are discussed in [2], although they are not implemented at the moment. They would
not only save some resources, but also make the messages uniform.
The stdlib will then be gradually changed to start using these new
attributes.
Potential Uses
An automated tool could, for example, search for similar keys within the object,
allowing it to display the following:
a = {'foo': 1}
a['fo']
KeyError: 'fo'. Did you mean 'foo'?
foo = 1
fo
NameError: global name 'fo' is not defined. Did you mean 'foo'?
See [3] for the output a TestRunner could display.
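As a sketch of how such a tool could work, a debugging helper might combine the proposed key and target attributes with difflib (the attributes are hypothetical, and the helper name is an assumption):

import difflib

def suggest_key(exc):
    # Assumes the proposed 'key' and 'target' attributes are populated.
    candidates = [str(k) for k in exc.target]
    matches = difflib.get_close_matches(str(exc.key), candidates)
    if matches:
        return "%r. Did you mean %r?" % (exc.key, matches[0])
    return repr(exc.key)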
Performance
Filling these new attributes would only require two extra parameters with data
already available so the impact should be marginal. However, it may need
special care for KeyError as the following pattern is already widespread.
try:
a[foo] = a[foo] + 1
except:
a[foo] = 0
Note as well that storing these objects into the error itself would allow the
lazy generation of the error message, as discussed in [2].
References
[1]
Python Exceptions Improved
(https://www.github.com/sk-/python-exceptions-improved)
[2] (1, 2, 3)
ImportError needs attributes for module and file name
(http://bugs.python.org/issue1559549)
[3] (1, 2, 3, 4, 5, 6)
Enhance exceptions by attaching some more information to them
(https://mail.python.org/pipermail/python-ideas/2014-February/025601.html)
[4]
Specificity in AttributeError
(https://mail.python.org/pipermail/python-ideas/2013-April/020308.html)
[5]
Add an ‘attr’ attribute to AttributeError
(http://bugs.python.org/issue18156)
[6]
Add index attribute to IndexError
(http://bugs.python.org/issue18162)
[7]
Add a ‘key’ attribute to KeyError
(http://bugs.python.org/issue18163)
[8]
Add ‘unexpected_type’ to TypeError
(http://bugs.python.org/issue18165)
[9]
‘value’ attribute for ValueError
(http://bugs.python.org/issue18166)
[10] (1, 2)
making builtin exceptions more informative
(http://bugs.python.org/issue1182143)
[11] (1, 2, 3, 4, 5, 6)
LookupError etc. need API to get the key
(http://bugs.python.org/issue614557)
[12]
weakref - Weak References
(https://docs.python.org/3/library/weakref.html)
[13]
Message by R. David Murray: Weak refs on exceptions?
(http://bugs.python.org/issue18163#msg190791)
Copyright
This document has been placed in the public domain.
| Rejected | PEP 473 – Adding structured data to built-in exceptions | Standards Track | Exceptions like AttributeError, IndexError, KeyError,
LookupError, NameError, TypeError, and ValueError do not
provide all information required by programmers to debug and better understand
what caused them.
Furthermore, in some cases the messages even have slightly different formats,
which makes it really difficult for tools to automatically provide additional
information to diagnose the problem.
To tackle the former and to lay ground for the latter, it is proposed to expand
these exceptions so to hold both the offending and affected entities. |
PEP 474 – Creating forge.python.org
Author:
Alyssa Coghlan <ncoghlan at gmail.com>
Status:
Withdrawn
Type:
Process
Created:
19-Jul-2014
Post-History:
19-Jul-2014, 08-Jan-2015, 01-Feb-2015
Table of Contents
Abstract
PEP Withdrawal
Proposal
Rationale
Intended Benefits
Sustaining Engineering Considerations
Personal Motivation
Technical Concerns and Challenges
Service hosting
Ongoing infrastructure maintenance
User account management
Breaking existing SSH access and links for Mercurial repositories
Integration with Roundup
Accepting pull requests on GitHub and BitBucket
Transparent Git and Mercurial interoperability
Pilot Objectives and Timeline
Future Implications for CPython Core Development
Copyright
Abstract
This PEP proposes setting up a new PSF provided resource, forge.python.org,
as a location for maintaining various supporting repositories
(such as the repository for Python Enhancement Proposals) in a way that is
more accessible to new contributors, and easier to manage for core
developers.
This PEP does not propose any changes to the core development workflow
for CPython itself (see PEP 462 in relation to that).
PEP Withdrawal
This PEP has been withdrawn by the author
in favour of the GitLab based proposal in PEP 507.
If anyone else would like to take over championing this PEP, contact the
core-workflow mailing list.
Proposal
This PEP proposes that an instance of the self-hosted Kallithea code
repository management system be deployed as “forge.python.org”.
Individual repositories (such as the developer guide or the PEPs repository)
may then be migrated from the existing hg.python.org infrastructure to the
new forge.python.org infrastructure on a case-by-case basis. Each migration
will need to decide whether to retain a read-only mirror on hg.python.org,
or whether to just migrate wholesale to the new location.
In addition to supporting read-only mirrors on hg.python.org,
forge.python.org will also aim to support hosting mirrors on popular
proprietary hosting sites like GitHub and BitBucket. The aim will be to
allow users familiar with these sites to submit and discuss pull requests
using their preferred workflow, with forge.python.org automatically bringing
those contributions over to the master repository.
Given the availability and popularity of commercially backed “free for open
source projects” repository hosting services, this would not be a general
purpose hosting site for arbitrary Python projects. The initial focus will be
specifically on CPython and other repositories currently hosted on
hg.python.org. In the future, this could potentially be expanded to
consolidating other PSF managed repositories that are currently externally
hosted to gain access to a pull request based workflow, such as the
repository for the python.org Django application. As with the initial
migrations, any such future migrations would be considered on a case-by-case
basis, taking into account the preferences of the primary users of each
repository.
Rationale
Currently, hg.python.org hosts more than just the core CPython repository,
it also hosts other repositories such as those for the CPython developer
guide and for Python Enhancement Proposals, along with various “sandbox”
repositories for core developer experimentation.
While the simple “pull request” style workflow made popular by code hosting
sites like GitHub and BitBucket isn’t adequate for the complex branching
model needed for parallel maintenance and development of the various
CPython releases, it’s a good fit for several of the ancillary projects
that surround CPython that we don’t wish to move to a proprietary hosting
site.
The key requirements proposed for a PSF provided software forge are:
MUST support simple “pull request” style workflows
MUST support online editing for simple changes
MUST be backed by an active development organisation (community or
commercial)
MUST support self-hosting of the master repository on PSF infrastructure
without ongoing fees
Additional recommended requirements that are satisfied by this proposal,
but may be negotiable if a sufficiently compelling alternative is presented:
SHOULD be a fully open source application written in Python
SHOULD support Mercurial (for consistency with existing tooling)
SHOULD support Git (to provide that option to users that prefer it)
SHOULD allow users of git and Mercurial clients to transparently
collaborate on the same repository
SHOULD allow users of GitHub and BitBucket to submit proposed changes using
the standard pull request workflows offered by those tools
SHOULD be open to customisation to meet the needs of CPython core
development, including providing a potential path forward for the
proposed migration to a core reviewer model in PEP 462
The preference for self-hosting without ongoing fees rules out the
free-as-in-beer providers like GitHub and BitBucket, in addition to the
various proprietary source code management offerings.
The preference for Mercurial support not only rules out GitHub, but also
other Git-only solutions like GitLab and Gitorious.
The hard requirement for online editing support rules out the Apache
Allura/HgForge combination.
The preference for a fully open source solution rules out RhodeCode.
Of the various options considered by the author of this proposal, that
leaves Kallithea SCM as the proposed
foundation for a forge.python.org service.
Kallithea is a full GPLv3 application (derived from the clearly
and unambiguously GPLv3 licensed components of RhodeCode) that is being
developed under the auspices of the Software Freedom Conservancy. The
Conservancy has affirmed that the
Kallithea codebase is completely and validly licensed under GPLv3. In
addition to their role in building the initial Kallithea community, the
Conservancy is also the legal home of both the Mercurial and Git projects.
Other SFC member projects that may be familiar to Python users include
Twisted, Gevent, BuildBot and PyPy.
Intended Benefits
The primary benefit of deploying Kallithea as forge.python.org is that
supporting repositories such as the developer guide and the PEP repo could
potentially be managed using pull requests and online editing. This would be
much simpler than the current workflow which requires PEP editors and
other core developers to act as intermediaries to apply updates suggested
by other users.
The richer administrative functionality would also make it substantially
easier to grant users access to particular repositories for collaboration
purposes, without having to grant them general access to the entire
installation. This helps lower barriers to entry, as trust can more
readily be granted and earned incrementally, rather than being an
all-or-nothing decision around granting core developer access.
Sustaining Engineering Considerations
Even with its current workflow, CPython itself remains one of the largest
open source projects in the world (in the
top 2%
of projects tracked on OpenHub). Unfortunately, we have been significantly
less effective at encouraging contributions to the projects that make up
CPython’s workflow infrastructure, including ensuring that our installations
track upstream, and that wherever feasible, our own customisations are
contributed back to the original project.
As such, a core component of this proposal is to actively engage with the
upstream Kallithea community to lower the barriers to working with and on
the Kallithea SCM, as well as with the PSF Infrastructure team to ensure
the forge.python.org service integrates cleanly with the PSF’s infrastructure
automation.
This approach aims to provide a number of key benefits:
allowing those of us contributing to maintenance of this service to be
as productive as possible in the time we have available
offering a compelling professional development opportunity to those
volunteers that choose to participate in maintenance of this service
making the Kallithea project itself more attractive to other potential
users by making it as easy as possible to adopt, deploy and manage
as a result of the above benefits, attracting sufficient contributors both
in the upstream Kallithea community, and within the CPython infrastructure
community, to allow the forge.python.org service to evolve effectively to
meet changing developer expectations
Some initial steps have already been taken to address these sustaining
engineering concerns:
Tymoteusz Jankowski has been working with Donald Stufft to work out what
would be involved
in deploying Kallithea using the PSF’s Salt based infrastructure automation.
Graham Dumpleton and I have been working on
making it easy
to deploy demonstration Kallithea instances to the free tier of Red Hat’s open
source hosting service, OpenShift Online. (See the comments on that post, or
the quickstart issue tracker for links to
Graham’s follow on work)
The next major step to be undertaken is to come up with a local development
workflow that allows contributors on Windows, Mac OS X and Linux to run
the Kallithea tests locally, without interfering with the operation of
their own system. The currently planned approach for this is to focus on
Vagrant, which is a popular automated virtual machine management system
specifically aimed at developers running local VMs for testing purposes.
The Vagrant based development guidelines
for OpenShift Origin provide an extended example of the kind of workflow this
approach enables. It’s also worth noting that Vagrant is one of the options
for working with a local build of the main python.org website.
If these workflow proposals end up working well for Kallithea, they may also
be worth proposing for use by the upstream projects backing other PSF and
CPython infrastructure services, including Roundup, BuildBot, and the main
python.org web site.
Personal Motivation
As of July 2015, I now work for Red Hat as a software development workflow
designer and process architect, focusing on the upstream developer experience
in Fedora. Two of the key pieces of that experience will be familiar to many
web service developers: Docker for local container management, and Vagrant for
cross-platform local development VM management. Spending time applying these
technologies in multiple upstream contexts helps provide additional insight
into what works well and what still needs further improvement to provide a good
software development experience that is well integrated on Fedora, but also
readily available on other Linux distributions, Windows, and Mac OS X.
In relation to code review workflows in particular, the primary code review
workflow management tools I’ve used in my career are
Gerrit (for multi-step code review with fine-grained access control), GitHub
and BitBucket (for basic pull request based workflows), and Rietveld (for
CPython’s optional pre-commit reviews).
Kallithea is interesting as a base project to build, as it’s currently a
combined repo hosting and code review management platform, but doesn’t
directly integrate the two by offering online merges. This creates the
opportunity to blend the low barrier to entry benefits of the GitHub/BitBucket
pull request model with the mentoring and task hand-off benefits of Gerrit
in defining an online code merging model for Kallithea in collaboration with
the upstream Kallithea developers.
Technical Concerns and Challenges
Introducing a new service into the CPython infrastructure presents a number
of interesting technical concerns and challenges. This section covers several
of the most significant ones.
Service hosting
The default position of this PEP is that the new forge.python.org service
will be integrated into the existing PSF Salt infrastructure and hosted on
the PSF’s Rackspace cloud infrastructure.
However, other hosting options will also be considered, in particular,
possible deployment as a Kubernetes hosted web
service on either
Google Container Engine or
the next generation of Red Hat’s
OpenShift Online service, by using either
GCEPersistentDisk or the open source
GlusterFS distributed filesystem
to hold the source code repositories.
Ongoing infrastructure maintenance
Ongoing infrastructure maintenance is an area of concern within the PSF,
as we currently lack a system administrator mentorship program equivalent to
the Fedora Infrastructure Apprentice or
GNOME Infrastructure Apprentice
programs.
Instead, systems tend to be maintained largely by developers as a part-time
activity on top of their development related contributions, rather than
seeking to recruit folks that are more interested in operations (i.e.
keeping existing systems running well) than they are in development (i.e.
making changes to the services to provide new features or a better user
experience, or to address existing issues).
While I’d personally like to see the PSF operating such a program at some
point in the future, I don’t consider setting one up to be a
feasible near term goal. However, I do consider it feasible to continue
laying the groundwork for such a program by extending the PSF’s existing
usage of modern infrastructure technologies like OpenStack and Salt to
cover more services, as well as starting to explore the potential benefits of
containers and container platforms when it comes to maintaining and enhancing
PSF provided services.
I also plan to look into the question of whether or not an open source cloud
management platform like ManageIQ may help us
bring our emerging “cloud sprawl” problem across Rackspace, Google, Amazon
and other services more under control.
User account management
Ideally we’d like to be able to offer a single account that spans all
python.org services, including Kallithea, Roundup/Rietveld, PyPI and the
back end for the new python.org site, but actually implementing that would
be a distinct infrastructure project, independent of this PEP. (It’s also
worth noting that the fine-grained control of ACLs offered by such a
capability is a prerequisite for setting up an
effective system administrator mentorship program)
For the initial rollout of forge.python.org, we will likely create yet another
identity silo within the PSF infrastructure. A potentially superior
alternative would be to add support for python-social-auth to Kallithea, but actually
doing so would not be a requirement for the initial rollout of the service
(the main technical concern there is that Kallithea is a Pylons application
that has not yet been ported to Pyramid, so integration will require either
adding a Pylons backend to python-social-auth, or else embarking on the
Pyramid migration in Kallithea).
Breaking existing SSH access and links for Mercurial repositories
This PEP proposes leaving the existing hg.python.org installation alone,
and setting up Kallithea on a new host. This approach minimises the risk
of interfering with the development of CPython itself (and any other
projects that don’t migrate to the new software forge), but does make any
migrations of existing repos more disruptive (since existing checkouts will
break).
Integration with Roundup
Kallithea provides configurable issue tracker integration. This will need
to be set up appropriately to integrate with the Roundup issue tracker at
bugs.python.org before the initial rollout of the forge.python.org service.
Accepting pull requests on GitHub and BitBucket
The initial rollout of forge.python.org would support publication of read-only
mirrors, both on hg.python.org and other services, as that is a relatively
straightforward operation that can be implemented in a commit hook.
While a highly desirable feature, accepting pull requests on external
services, and mirroring them as submissions to the master repositories on
forge.python.org is a more complex problem, and would likely not be included
as part of the initial rollout of the forge.python.org service.
Transparent Git and Mercurial interoperability
Kallithea’s native support for both Git and Mercurial offers an opportunity
to make it relatively straightforward for developers to use the client
of their choice to interact with repositories hosted on forge.python.org.
This transparent interoperability does not exist yet, but running our own
multi-VCS repository hosting service provides the opportunity to make this
capability a reality, rather than passively waiting for a proprietary
provider to deign to provide a feature that likely isn’t in their commercial
interest. There’s a significant misalignment of incentives between open
source communities and commercial providers in this particular area: offering
VCS client choice can significantly reduce community friction by eliminating
the need for projects to make autocratic decisions that force particular
tooling choices on potential contributors, yet top-down enforcement of tool
selection (regardless of developer preference) is currently still the norm
in the corporate and other organisational environments that produce GitHub’s
and Atlassian’s paying customers.
Prior to acceptance, in the absence of transparent interoperability, this PEP
should propose specific recommendations for inclusion in the CPython
developer’s guide section for
git users, covering how to create
pull requests against forge.python.org hosted Mercurial repositories.
Pilot Objectives and Timeline
[TODO: Update this section for Brett’s revised timeline, which aims to have
a CPython demo repository online by October 31st, to get a better indication
of future capabilities once CPython itself migrates over to the new
system, rather than just the support repos]
This proposal is part of Brett Cannon’s current evaluation
of improvement proposals for various aspects of the CPython development
workflow. Key dates in that timeline are:
Feb 1: Draft proposal published (for Kallithea, this PEP)
Apr 8: Discussion of final proposals at Python Language Summit
May 1: Brett’s decision on which proposal to accept
Sep 13: Python 3.5 released, adopting new workflows for Python 3.6
If this proposal is selected for further development, it is proposed to start
with the rollout of the following pilot deployment:
a reference implementation operational at kallithea-pilot.python.org,
containing at least the developer guide and PEP repositories. This will
be a “throwaway” instance, allowing core developers and other contributors
to experiment freely without worrying about the long term consequences for
the repository history.
read-only live mirrors of the Kallithea hosted repositories on GitHub and
BitBucket. As with the pilot service itself, these would be temporary repos,
to be discarded after the pilot period ends.
clear documentation on using those mirrors to create pull requests against
Kallithea hosted Mercurial repositories (for the pilot, this will likely
not include using the native pull request workflows of those hosted
services)
automatic linking of issue references in code review comments and commit
messages to the corresponding issues on bugs.python.org
draft updates to PEP 1 explaining the Kallithea-based PEP editing and
submission workflow
The following items would be needed for a production migration, but there
doesn’t appear to be an obvious way to trial an updated implementation as
part of the pilot:
adjusting the PEP publication process and the developer guide publication
process to be based on the relocated Mercurial repos
The following items would be objectives of the overall workflow improvement
process, but are considered “desirable, but not essential” for the initial
adoption of the new service in September (if this proposal is the one
selected and the proposed pilot deployment is successful):
allowing the use of python-social-auth to authenticate against the PSF
hosted Kallithea instance
allowing the use of the GitHub and BitBucket pull request workflows to
submit pull requests to the main Kallithea repo
allowing easy triggering of forced BuildBot runs based on Kallithea hosted
repos and pull requests (prior to the implementation of PEP 462, this
would be intended for use with sandbox repos rather than the main CPython
repo)
Future Implications for CPython Core Development
The workflow requirements for the main CPython development repository are
significantly more complex than those for the repositories being discussed
in this PEP. These concerns are covered in more detail in PEP 462.
Given Guido’s recommendation to replace Rietveld with a more actively
maintained code review system, my current plan is to rewrite that PEP to
use Kallithea as the proposed glue layer, with enhanced Kallithea pull
requests eventually replacing the current practice of uploading patch files
directly to the issue tracker.
I’ve also started working with Pierre-Yves David on a custom Mercurial
extension
that automates some aspects of the CPython core development workflow.
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 474 – Creating forge.python.org | Process | This PEP proposes setting up a new PSF provided resource, forge.python.org,
as a location for maintaining various supporting repositories
(such as the repository for Python Enhancement Proposals) in a way that is
more accessible to new contributors, and easier to manage for core
developers. |