jpodivin/pep_summarization
Text2Text Generation
text | status | title | type | abstract
---|---|---|---|---
string (330 to 67k characters) | 9 classes | string (18 to 80 characters) | 3 classes | string (4 to 917 characters)
PEP 5 – Guidelines for Language Evolution
Author:
Paul Prescod <paul at prescod.net>
Status:
Superseded
Type:
Process
Created:
26-Oct-2000
Post-History:
Superseded-By:
387
Table of Contents
Abstract
Implementation Details
Scope
Steps For Introducing Backwards-Incompatible Features
Abstract
In the natural evolution of programming languages it is sometimes
necessary to make changes that modify the behavior of older programs.
This PEP proposes a policy for implementing these changes in a manner
respectful of the installed base of Python users.
Implementation Details
Implementation of this PEP requires the addition of a formal warning
and deprecation facility that will be described in another proposal.
Scope
These guidelines apply to future versions of Python that introduce
backward-incompatible behavior. Backward incompatible behavior is a
major deviation in Python interpretation from an earlier behavior
described in the standard Python documentation. Removal of a feature
also constitutes a change of behavior.
This PEP does not replace or preclude other compatibility strategies
such as dynamic loading of backwards-compatible parsers. On the other
hand, if execution of “old code” requires a special switch or pragma
then that is indeed a change of behavior from the point of view of the
user and that change should be implemented according to these
guidelines.
In general, common sense must prevail in the implementation of these
guidelines. For instance changing “sys.copyright” does not constitute
a backwards-incompatible change of behavior!
Steps For Introducing Backwards-Incompatible Features
Propose backwards-incompatible behavior in a PEP. The PEP must
include a section on backwards compatibility that describes in
detail a plan to complete the remainder of these steps.
Once the PEP is accepted as a productive direction, implement an
alternate way to accomplish the task previously provided by the
feature that is being removed or changed. For instance if the
addition operator were scheduled for removal, a new version of
Python could implement an “add()” built-in function.
Formally deprecate the obsolete construct in the Python
documentation.
Add an optional warning mode to the parser that will inform users
when the deprecated construct is used. In other words, all
programs that will behave differently in the future must trigger
warnings in this mode. Compile-time warnings are preferable to
runtime warnings. The warning messages should steer people from the deprecated construct to the alternative construct; a sketch of such a warning follows this list.
There must be at least a one-year transition period between the
release of the transitional version of Python and the release of
the backwards incompatible version. Users will have at least a
year to test their programs and migrate them from use of the
deprecated construct to the alternative one.
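As a concrete illustration of the warning-mode step above, here is a minimal sketch of how a deprecated construct might point users at its replacement at runtime, using the standard warnings module. The names old_plus() and add() are purely hypothetical stand-ins for whatever construct a real PEP would deprecate and whatever alternative it would provide:

    import warnings

    def add(a, b):
        # Hypothetical replacement for a construct scheduled for removal.
        return a + b

    def old_plus(a, b):
        # Deprecated construct: warn and steer users toward the replacement.
        warnings.warn(
            "old_plus() is deprecated; use add() instead",
            DeprecationWarning,
            stacklevel=2,
        )
        return add(a, b)

    # Running with "python -W error::DeprecationWarning" turns the warning
    # into an error, which helps locate affected code before the change lands.
    old_plus(1, 2)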
| Superseded | PEP 5 – Guidelines for Language Evolution | Process | In the natural evolution of programming languages it is sometimes necessary to make changes that modify the behavior of older programs. This PEP proposes a policy for implementing these changes in a manner respectful of the installed base of Python users. |
PEP 6 – Bug Fix Releases
Author:
Aahz <aahz at pythoncraft.com>, Anthony Baxter <anthony at interlink.com.au>
Status:
Superseded
Type:
Process
Created:
15-Mar-2001
Post-History:
15-Mar-2001, 18-Apr-2001, 19-Aug-2004
Table of Contents
Abstract
Motivation
Prohibitions
Not-Quite-Prohibitions
Applicability of Prohibitions
Helping the Bug Fix Releases Happen
Version Numbers
Procedure
Patch Czar History
History
References
Copyright
Note
This PEP is obsolete.
The current release policy is documented in the devguide.
See also PEP 101 for mechanics of the release process.
Abstract
Python has historically had only a single fork of development, with
releases having the combined purpose of adding new features and
delivering bug fixes (these kinds of releases will be referred to as
“major releases”). This PEP describes how to fork off maintenance, or
bug fix, releases of old versions for the primary purpose of fixing
bugs.
This PEP is not, repeat NOT, a guarantee of the existence of bug fix
releases; it only specifies a procedure to be followed if bug fix
releases are desired by enough of the Python community willing to do
the work.
Motivation
With the move to SourceForge, Python development has accelerated.
There is a sentiment among part of the community that there was too
much acceleration, and many people are uncomfortable with upgrading to
new versions to get bug fixes when so many features have been added,
sometimes late in the development cycle.
One solution for this issue is to maintain the previous major release,
providing bug fixes until the next major release. This should make
Python more attractive for enterprise development, where Python may
need to be installed on hundreds or thousands of machines.
Prohibitions
Bug fix releases are required to adhere to the following restrictions:
There must be zero syntax changes. All .pyc and .pyo files must work (no regeneration needed) with all bugfix releases forked off from a major release; a sketch of the relevant interpreter constants appears after this list.
There must be zero pickle changes.
There must be no incompatible C API changes. All extensions must
continue to work without recompiling in all bugfix releases in the
same fork as a major release.
Breaking any of these prohibitions requires a BDFL proclamation (and a
prominent warning in the release notes).
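The bytecode and C API prohibitions above correspond to concrete values in the interpreter. The following sketch uses modern stdlib names (importlib.util.MAGIC_NUMBER and sys.api_version postdate PEP 6 itself) to show what is expected to stay fixed within an X.Y.Z series so that .pyc files and compiled extensions keep working:

    import importlib.util
    import sys

    # These identifiers change only with feature releases; within a
    # bugfix series they are expected to remain stable.
    print(sys.version_info[:3])         # e.g. (3, 12, 4)
    print(importlib.util.MAGIC_NUMBER)  # bytecode magic number (.pyc compatibility)
    print(sys.api_version)              # C API version seen by extension modules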
Not-Quite-Prohibitions
Where possible, bug fix releases should also:
Have no new features. The purpose of a bug fix release is to fix
bugs, not add the latest and greatest whizzo feature from the HEAD
of the CVS root.
Be a painless upgrade. Users should feel confident that an upgrade
from 2.x.y to 2.x.(y+1) will not break their running systems. This
means that, unless it is necessary to fix a bug, the standard
library should not change behavior, or worse yet, APIs.
Applicability of Prohibitions
The above prohibitions and not-quite-prohibitions apply both for a
final release to a bugfix release (for instance, 2.4 to 2.4.1) and for
one bugfix release to the next in a series (for instance 2.4.1 to
2.4.2).
Following the prohibitions listed in this PEP should help keep the
community happy that a bug fix release is a painless and safe upgrade.
Helping the Bug Fix Releases Happen
Here are a few pointers on helping the bug fix release process along.
Backport bug fixes. If you fix a bug, and it seems appropriate,
port it to the CVS branch for the current bug fix release. If
you’re unwilling or unable to backport it yourself, make a note in
the commit message, with words like ‘Bugfix candidate’ or
‘Backport candidate’.
If you’re not sure, ask. Ask the person managing the current bug
fix releases if they think a particular fix is appropriate.
If there's a particular bug you'd like fixed in a bug fix release, jump up and down and try to get it done. Do not wait until 48 hours before a bug fix release is due and then start asking for bug fixes to be included.
Version Numbers
Starting with Python 2.0, all major releases are required to have a
version number of the form X.Y; bugfix releases will always be of the
form X.Y.Z.
The current major release under development is referred to as release
N; the just-released major version is referred to as N-1.
In CVS, the bug fix releases happen on a branch. For release 2.x, the
branch is named ‘release2x-maint’. For example, the branch for the 2.3
maintenance releases is release23-maint.
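As a small illustration of the X.Y versus X.Y.Z numbering described above, the running interpreter's own version splits the same way; this is only an aside, not part of the original PEP:

    import sys

    major, minor, micro = sys.version_info[:3]
    print(f"major (feature) release: {major}.{minor}")
    print(f"bugfix release:          {major}.{minor}.{micro}")
    # For 2.x-era releases the maintenance branch name followed the same
    # digits, e.g. 'release23-maint' for the 2.3 series.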
Procedure
The process for managing bugfix releases is modeled in part on the Tcl
system [1].
The Patch Czar is the counterpart to the BDFL for bugfix releases.
However, the BDFL and designated appointees retain veto power over
individual patches. A Patch Czar might only be looking after a single
branch of development - it’s quite possible that a different person
might be maintaining the 2.3.x and the 2.4.x releases.
As individual patches get contributed to the current trunk of CVS,
each patch committer is requested to consider whether the patch is a
bug fix suitable for inclusion in a bugfix release. If the patch is
considered suitable, the committer can either commit the patch to
the maintenance branch, or else mark the patch in the commit message.
In addition, anyone from the Python community is free to suggest
patches for inclusion. Patches may be submitted specifically for
bugfix releases; they should follow the guidelines in PEP 3. In
general, though, it’s probably better that a bug in a specific release
also be fixed on the HEAD as well as the branch.
The Patch Czar decides when there are a sufficient number of patches
to warrant a release. The release gets packaged up, including a
Windows installer, and made public. If any new bugs are found, they
must be fixed immediately and a new bugfix release publicized (with an
incremented version number). For the 2.3.x cycle, the Patch Czar
(Anthony) has been trying for a release approximately every six
months, but this should not be considered binding in any way on any
future releases.
Bug fix releases are expected to occur at an interval of roughly six
months. This is only a guideline, however - obviously, if a major bug
is found, a bugfix release may be appropriate sooner. In general, only
the N-1 release will be under active maintenance at any time. That is,
during Python 2.4’s development, Python 2.3 gets bugfix releases. If,
however, someone qualified wishes to continue the work to maintain an
older release, they should be encouraged.
Patch Czar History
Anthony Baxter is the Patch Czar for 2.3.1 through 2.3.4.
Barry Warsaw is the Patch Czar for 2.2.3.
Guido van Rossum is the Patch Czar for 2.2.2.
Michael Hudson is the Patch Czar for 2.2.1.
Anthony Baxter is the Patch Czar for 2.1.2 and 2.1.3.
Thomas Wouters is the Patch Czar for 2.1.1.
Moshe Zadka is the Patch Czar for 2.0.1.
History
This PEP started life as a proposal on comp.lang.python. The original
version suggested a single patch for the N-1 release to be released
concurrently with the N release. The original version also argued for
sticking with a strict bug fix policy.
Following feedback from the BDFL and others, the draft PEP was written
containing an expanded bugfix release cycle that permitted any
previous major release to obtain patches and also relaxed the strict
bug fix requirement (mainly due to the example of PEP 235, which
could be argued as either a bug fix or a feature).
Discussion then mostly moved to python-dev, where the BDFL finally issued
a proclamation basing the Python bugfix release process on Tcl’s,
which essentially returned to the original proposal in terms of being
only the N-1 release and only bug fixes, but allowing multiple bugfix
releases until release N is published.
Anthony Baxter then took this PEP and revised it, based on lessons
from the 2.3 release cycle.
References
[1]
http://www.tcl.tk/cgi-bin/tct/tip/28.html
Copyright
This document has been placed in the public domain.
| Superseded | PEP 6 – Bug Fix Releases | Process | Python has historically had only a single fork of development, with releases having the combined purpose of adding new features and delivering bug fixes (these kinds of releases will be referred to as “major releases”). This PEP describes how to fork off maintenance, or bug fix, releases of old versions for the primary purpose of fixing bugs. |
PEP 10 – Voting Guidelines
Author:
Barry Warsaw <barry at python.org>
Status:
Active
Type:
Process
Created:
07-Mar-2002
Post-History:
07-Mar-2002
Table of Contents
Abstract
Rationale
Voting Scores
References
Copyright
Abstract
This PEP outlines the python-dev voting guidelines. These guidelines
serve to provide feedback or gauge the “wind direction” on a
particular proposal, idea, or feature. They don’t have a binding
force.
Rationale
When a new idea, feature, patch, etc. is floated in the Python
community, either through a PEP or on the mailing lists (most likely
on python-dev [1]), it is sometimes helpful to gauge the community’s
general sentiment. Sometimes people just want to register their
opinion of an idea. Sometimes the BDFL wants to take a straw poll.
Whatever the reason, these guidelines have been adopted so as to
provide a common language for developers.
While opinions are (sometimes) useful, they are never binding.
Opinions that are accompanied by rationales are always valued higher
than bare scores (this is especially true with -1 votes).
Voting Scores
The scoring guidelines are loosely derived from the Apache voting
procedure [2], with of course our own spin on things. There are 4
possible vote scores:
+1 I like it
+0 I don’t care, but go ahead
-0 I don’t care, so why bother?
-1 I hate it
You may occasionally see wild flashes of enthusiasm (either for or
against) with vote scores like +2, +1000, or -1000. These aren’t
really valued much beyond the above scores, but it’s nice to see
people get excited about such geeky stuff.
References
[1]
Python Developer’s Guide,
(http://www.python.org/dev/)
[2]
Apache Project Guidelines and Voting Rules
(http://httpd.apache.org/dev/guidelines.html)
Copyright
This document has been placed in the public domain.
| Active | PEP 10 – Voting Guidelines | Process | This PEP outlines the python-dev voting guidelines. These guidelines serve to provide feedback or gauge the “wind direction” on a particular proposal, idea, or feature. They don’t have a binding force. |
PEP 11 – CPython platform support
Author:
Martin von Löwis <martin at v.loewis.de>,
Brett Cannon <brett at python.org>
Status:
Active
Type:
Process
Created:
07-Jul-2002
Post-History:
18-Aug-2007,
14-May-2014,
20-Feb-2015,
10-Mar-2022
Table of Contents
Abstract
Rationale
Support tiers
Tier 1
Tier 2
Tier 3
All other platforms
Notes
Microsoft Windows
Legacy C Locale
Unsupporting platforms
No-longer-supported platforms
Discussions
Copyright
Abstract
This PEP documents how an operating system (platform) becomes
supported in CPython, what platforms are currently supported, and
documents past support.
Rationale
Over time, the CPython source code has collected various pieces of
platform-specific code, which, at some point in time, was
considered necessary to use CPython on a specific platform.
Without access to this platform, it is not possible to determine
whether this code is still needed. As a result, this code may
either break during CPython’s evolution, or it may become
unnecessary as the platforms evolve as well.
Allowing these fragments to grow poses the risk of
unmaintainability: without having experts for a large number of
platforms, it is not possible to determine whether a certain
change to the CPython source code will work on all supported
platforms.
To reduce this risk, this PEP specifies what is required for a
platform to be considered supported by CPython as well as providing a
procedure to remove code for platforms with few or no CPython
users.
This PEP also lists what platforms are supported by the CPython
interpreter. This lets people know what platforms are directly
supported by the CPython development team.
Support tiers
Platform support is broken down into tiers. Each tier comes with
different requirements which lead to different promises being made
about support.
To be promoted to a tier, steering council support is required and is
expected to be driven by team consensus. Demotion to a lower tier
occurs when the requirements of the current tier are no longer met for
a platform for an extended period of time based on the judgment of
the release manager or steering council. For platforms which no longer
meet the requirements of any tier by b1 of a new feature release, an
announcement will be made to warn the community of the pending removal
of support for the platform (e.g. in the b1 announcement). If the
platform is not brought into line for at least one of the tiers by the
first release candidate, it will be listed as unsupported in this PEP.
Tier 1
STATUS
CI failures block releases.
Changes which would break the main branch are not allowed to be merged;
any breakage should be fixed or reverted immediately.
All core developers are responsible for keeping main, and thus these platforms, working.
Failures on these platforms block a release.
Target Triple | Notes
---|---
i686-pc-windows-msvc |
x86_64-pc-windows-msvc |
x86_64-apple-darwin | BSD libc, clang
x86_64-unknown-linux-gnu | glibc, gcc
Tier 2
STATUS
Must have a reliable buildbot.
At least two core developers are signed up to support the platform.
Changes which break any of these platforms are to be fixed or
reverted within 24 hours.
Failures on these platforms block a release.
Target Triple | Notes | Contacts
---|---|---
aarch64-apple-darwin | clang | Ned Deily, Ronald Oussoren, Dong-hee Na
aarch64-unknown-linux-gnu | glibc, gcc | Petr Viktorin, Victor Stinner
aarch64-unknown-linux-gnu | glibc, clang | Victor Stinner, Gregory P. Smith
wasm32-unknown-wasi | WASI SDK, Wasmtime | Brett Cannon, Eric Snow
x86_64-unknown-linux-gnu | glibc, clang | Victor Stinner, Gregory P. Smith
Tier 3
STATUS
Must have a reliable buildbot.
At least one core developer is signed up to support the platform.
No response SLA to failures.
Failures on these platforms do not block a release.
Target Triple | Notes | Contacts
---|---|---
aarch64-pc-windows-msvc | | Steve Dower
armv7l-unknown-linux-gnueabihf | Raspberry Pi OS, glibc, gcc | Gregory P. Smith
powerpc64le-unknown-linux-gnu | glibc, clang | Victor Stinner
powerpc64le-unknown-linux-gnu | glibc, gcc | Victor Stinner
s390x-unknown-linux-gnu | glibc, gcc | Victor Stinner
x86_64-unknown-freebsd | BSD libc, clang | Victor Stinner
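The target triples above describe toolchain and OS combinations rather than anything CPython reports directly, but a rough idea of where a given interpreter build falls can be had from the standard platform and sysconfig modules. This is only an approximation, not an official mapping to the tiers:

    import platform
    import sysconfig

    print(platform.machine())        # e.g. 'x86_64' or 'aarch64'
    print(platform.system())         # e.g. 'Linux', 'Darwin', 'Windows'
    print(platform.libc_ver())       # ('glibc', '2.35') on many Linux systems
    print(sysconfig.get_platform())  # e.g. 'linux-x86_64' or 'macosx-11.0-arm64'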
All other platforms
Support for a platform may be partial within the code base, whether from active development around platform support or simply by accident.
Code changes to platforms not listed in the above tiers may be rejected
or removed from the code base without a deprecation process if they
cause a maintenance burden or obstruct general improvements.
Platforms not listed here may be supported by the wider Python
community in some way. If your desired platform is not listed above,
please perform a search online to see if someone is already providing
support in some form.
Notes
Microsoft Windows
Windows versions prior to Windows 10 follow Microsoft’s Fixed Lifecycle Policy,
with a mainstream support phase for 5 years after release,
where the product is generally commercially available,
and an additional 5 year extended support phase,
where paid support is still available and certain bug fixes are released.
Extended Security Updates (ESU)
is a paid program available to high-volume enterprise customers
as a “last resort” option to receive certain security updates after extended support ends.
ESU is considered a distinct phase that follows the expiration of extended support.
Windows 10 and later follow Microsoft’s Modern Lifecycle Policy,
which varies per-product, per-version, per-edition and per-channel.
Generally, feature updates (1709, 22H2) occur every 6-12 months
and are supported for 18-36 months;
Server and IoT editions, and LTSC channel releases are supported for 5-10 years,
and the latest feature release of a major version (Windows 10, Windows 11)
generally receives new updates for at least 10 years following release.
Microsoft’s Windows Lifecycle FAQ
has more specific and up-to-date guidance.
CPython’s Windows support currently follows Microsoft’s lifecycles.
A new feature release X.Y.0 will support all Windows versions
whose extended support phase has not yet expired.
Subsequent bug fix releases will support the same Windows versions
as the original feature release, even if no longer supported by Microsoft.
New versions of Windows released while CPython is in maintenance mode
may be supported at the discretion of the core team and release manager.
As of 2024, our current interpretation of Microsoft’s lifecycles is that
Windows for IoT and embedded systems is out of scope for new CPython releases,
as the intent of those is to avoid feature updates. Windows Server will usually
be the oldest version still receiving free security fixes, and that will
determine the earliest supported client release with equivalent API version
(which will usually be past its end-of-life).
Each feature release is built by a specific version of Microsoft
Visual Studio. That version should have mainstream support when the
release is made. Developers of extension modules will generally need
to use the same Visual Studio release; they are concerned both with
the availability of the versions they need to use, and with keeping
the zoo of versions small. The CPython source tree will keep
unmaintained build files for older Visual Studio releases, for which
patches will be accepted. Such build files will be removed from the
source tree 3 years after the extended support for the compiler has
ended (but continue to remain available in revision control).
Legacy C Locale
Starting with CPython 3.7.0, *nix platforms are expected to provide
at least one of C.UTF-8 (full locale), C.utf8 (full locale) or
UTF-8 (LC_CTYPE-only locale) as an alternative to the legacy C
locale.
Any Unicode-related integration problems that occur only in the legacy C
locale and cannot be reproduced in an appropriately configured non-ASCII
locale will be closed as “won’t fix”.
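A quick way to check whether one of the listed non-legacy locales is actually available on a given *nix system is simply to try selecting each of them. This sketch only probes LC_CTYPE and is meant as a diagnostic aid, not production code:

    import locale

    for candidate in ("C.UTF-8", "C.utf8", "UTF-8"):
        try:
            locale.setlocale(locale.LC_CTYPE, candidate)
        except locale.Error:
            print(f"{candidate}: not available")
        else:
            print(f"{candidate}: available")
            break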
Unsupporting platforms
If a platform drops out of tiered support, a note must be posted
in this PEP that the platform is no longer actively supported. This
note must include:
The name of the system,
The first release number that does not support this platform
anymore, and
The first release where the historical support code is actively
removed.
In some cases, it is not possible to identify the specific list of
systems for which some code is used (e.g. when autoconf tests for
absence of some feature which is considered present on all
supported systems). In this case, the name will give the precise
condition (usually a preprocessor symbol) that will become
unsupported.
At the same time, the CPython build must be changed to produce a
warning if somebody tries to install CPython on this platform. On
platforms using autoconf, configure should also be made to emit a warning
about the unsupported platform.
This gives potential users of the platform a chance to step forward
and offer maintenance. We do not treat a platform that loses Tier 3
support any worse than a platform that was never supported.
No-longer-supported platforms
Name: MS-DOS, MS-Windows 3.x
Unsupported in: Python 2.0
Code removed in: Python 2.1
Name: SunOS 4
Unsupported in: Python 2.3
Code removed in: Python 2.4
Name: DYNIX
Unsupported in: Python 2.3
Code removed in: Python 2.4
Name: dgux
Unsupported in: Python 2.3
Code removed in: Python 2.4
Name: Minix
Unsupported in: Python 2.3
Code removed in: Python 2.4
Name: Irix 4 and --with-sgi-dl
Unsupported in: Python 2.3
Code removed in: Python 2.4
Name: Linux 1
Unsupported in: Python 2.3
Code removed in: Python 2.4
Name: Systems defining __d6_pthread_create (configure.in)
Unsupported in: Python 2.3
Code removed in: Python 2.4
Name: Systems defining PY_PTHREAD_D4, PY_PTHREAD_D6,
or PY_PTHREAD_D7 in thread_pthread.h
Unsupported in: Python 2.3
Code removed in: Python 2.4
Name: Systems using --with-dl-dld
Unsupported in: Python 2.3
Code removed in: Python 2.4
Name: Systems using --without-universal-newlines
Unsupported in: Python 2.3
Code removed in: Python 2.4
Name: MacOS 9
Unsupported in: Python 2.4
Code removed in: Python 2.4
Name: Systems using --with-wctype-functions
Unsupported in: Python 2.6
Code removed in: Python 2.6
Name: Win9x, WinME, NT4
Unsupported in: Python 2.6 (warning in 2.5 installer)
Code removed in: Python 2.6
Name: AtheOS
Unsupported in: Python 2.6 (with “AtheOS” changed to “Syllable”)
Build broken in: Python 2.7 (edit configure to re-enable)
Code removed in: Python 3.0
Details: http://www.syllable.org/discussion.php?id=2320
Name: BeOS
Unsupported in: Python 2.6 (warning in configure)
Build broken in: Python 2.7 (edit configure to re-enable)
Code removed in: Python 3.0
Name: Systems using Mach C Threads
Unsupported in: Python 3.2
Code removed in: Python 3.3
Name: SunOS lightweight processes (LWP)
Unsupported in: Python 3.2
Code removed in: Python 3.3
Name: Systems using --with-pth (GNU pth threads)
Unsupported in: Python 3.2
Code removed in: Python 3.3
Name: Systems using Irix threads
Unsupported in: Python 3.2
Code removed in: Python 3.3
Name: OSF* systems (issue 8606)
Unsupported in: Python 3.2
Code removed in: Python 3.3
Name: OS/2 (issue 16135)
Unsupported in: Python 3.3
Code removed in: Python 3.4
Name: VMS (issue 16136)
Unsupported in: Python 3.3
Code removed in: Python 3.4
Name: Windows 2000
Unsupported in: Python 3.3
Code removed in: Python 3.4
Name: Windows systems where COMSPEC points to command.com
Unsupported in: Python 3.3
Code removed in: Python 3.4
Name: RISC OS
Unsupported in: Python 3.0 (some code actually removed)
Code removed in: Python 3.4
Name: IRIX
Unsupported in: Python 3.7
Code removed in: Python 3.7
Name: Systems without multithreading support
Unsupported in: Python 3.7
Code removed in: Python 3.7
Name: wasm32-unknown-emscripten
Unsupported in: Python 3.13
Code removed in: Unknown
Discussions
April 2022: Consider adding a Tier 3 to tiered platform support
(Victor Stinner)
March 2022: Proposed tiered platform support
(Brett Cannon)
February 2015: Update to PEP 11 to clarify garnering platform support
(Brett Cannon)
May 2014: Where is our official policy of what platforms we do support?
(Brett Cannon)
August 2007: PEP 11 update - Call for port maintainers to step forward
(Skip Montanaro)
Copyright
This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.
| Active | PEP 11 – CPython platform support | Process | This PEP documents how an operating system (platform) becomes supported in CPython, what platforms are currently supported, and documents past support. |
PEP 12 – Sample reStructuredText PEP Template
Author:
David Goodger <goodger at python.org>,
Barry Warsaw <barry at python.org>,
Brett Cannon <brett at python.org>
Status:
Active
Type:
Process
Created:
05-Aug-2002
Post-History:
30-Aug-2002
Table of Contents
Abstract
Rationale
How to Use This Template
ReStructuredText PEP Formatting Requirements
General
Section Headings
Paragraphs
Inline Markup
Block Quotes
Literal Blocks
Lists
Tables
Hyperlinks
Internal and PEP/RFC Links
Footnotes
Images
Comments
Escaping Mechanism
Canonical Documentation and Intersphinx
Habits to Avoid
Suggested Sections
Resources
Copyright
Note
For those who have written a PEP before, there is a template
(which is included as a file in the PEPs repository).
Abstract
This PEP provides a boilerplate or sample template for creating your
own reStructuredText PEPs. In conjunction with the content guidelines
in PEP 1, this should make it easy for you to conform your own
PEPs to the format outlined below.
Note: if you are reading this PEP via the web, you should first grab
the text (reStructuredText) source of this PEP in order to complete
the steps below. DO NOT USE THE HTML FILE AS YOUR TEMPLATE!
The source for this (or any) PEP can be found in the
PEPs repository,
as well as via a link at the bottom of each PEP.
Rationale
If you intend to submit a PEP, you MUST use this template, in
conjunction with the format guidelines below, to ensure that your PEP
submission won’t get automatically rejected because of form.
ReStructuredText provides PEP authors with useful functionality and
expressivity, while maintaining easy readability in the source text.
The processed HTML form makes the functionality accessible to readers:
live hyperlinks, styled text, tables, images, and automatic tables of
contents, among other advantages.
How to Use This Template
To use this template you must first decide whether your PEP is going
to be an Informational or Standards Track PEP. Most PEPs are
Standards Track because they propose a new feature for the Python
language or standard library. When in doubt, read PEP 1 for details,
or open a tracker issue on the PEPs repo to ask for assistance.
Once you’ve decided which type of PEP yours is going to be, follow the
directions below.
Make a copy of this file (the .rst file, not the HTML!) and
perform the following edits. Name the new file pep-NNNN.rst, using
the next available number (not used by a published or in-PR PEP).
Replace the “PEP: 12” header with “PEP: NNNN”,
matching the file name. Note that the file name should be padded with
zeros (eg pep-0012.rst), but the header should not (PEP: 12).
Change the Title header to the title of your PEP.
Change the Author header to include your name, and optionally your
email address. Be sure to follow the format carefully: your name
must appear first, and it must not be contained in parentheses.
Your email address may appear second (or it can be omitted) and if
it appears, it must appear in angle brackets. It is okay to
obfuscate your email address.
If none of the authors are Python core developers, include a Sponsor
header with the name of the core developer sponsoring your PEP.
Add the direct URL of the PEP’s canonical discussion thread
(on e.g. Python-Dev, Discourse, etc) under the Discussions-To header.
If the thread will be created after the PEP is submitted as an official
draft, it is okay to just list the venue name initially, but remember to
update the PEP with the URL as soon as the PEP is successfully merged
to the PEPs repository and you create the corresponding discussion thread.
See PEP 1 for more details.
Change the Status header to “Draft”.
For Standards Track PEPs, change the Type header to “Standards
Track”.
For Informational PEPs, change the Type header to “Informational”.
For Standards Track PEPs, if your feature depends on the acceptance
of some other currently in-development PEP, add a Requires header
right after the Type header. The value should be the PEP number of
the PEP yours depends on. Don’t add this header if your dependent
feature is described in a Final PEP.
Change the Created header to today’s date. Be sure to follow the
format carefully: it must be in dd-mmm-yyyy format, where the
mmm is the 3 English letter month abbreviation, i.e. one of Jan,
Feb, Mar, Apr, May, Jun, Jul, Aug, Sep, Oct, Nov, Dec.
For Standards Track PEPs, after the Created header, add a
Python-Version header and set the value to the next planned version
of Python, i.e. the one your new feature will hopefully make its
first appearance in. Do not use an alpha or beta release
designation here. Thus, if the last version of Python was 2.2 alpha
1 and you’re hoping to get your new feature into Python 2.2, set the
header to:

    Python-Version: 2.2
Add a Topic header if the PEP belongs under one shown at the Topic Index.
Most PEPs don’t.
Leave Post-History alone for now; you’ll add dates and corresponding links
to this header each time you post your PEP to the designated discussion forum
(and update the Discussions-To header with said link, as above).
For each thread, use the date (in the dd-mmm-yyyy format) as the
linked text, and insert the URLs inline as anonymous reST hyperlinks,
with commas in between each posting. If you posted threads for your PEP on August 14, 2001 and September 3, 2001,
the Post-History header would look like, e.g.:
Post-History: `14-Aug-2001 <https://www.example.com/thread_1>`__,
`03-Sep-2001 <https://www.example.com/thread_2>`__
You should add the new dates/links here as soon as you post a
new discussion thread.
Add a Replaces header if your PEP obsoletes an earlier PEP. The
value of this header is the number of the PEP that your new PEP is
replacing. Only add this header if the older PEP is in “final”
form, i.e. is either Accepted, Final, or Rejected. You aren’t
replacing an older open PEP if you’re submitting a competing idea.
Now write your Abstract, Rationale, and other content for your PEP,
replacing all this gobbledygook with your own text. Be sure to
adhere to the format guidelines below, specifically on the
prohibition of tab characters and the indentation requirements.
See “Suggested Sections” below for a template of sections to include.
Update your Footnotes section, listing any footnotes and
non-inline link targets referenced by the text.
Run ./build.py to ensure the PEP is rendered without errors,
and check that the output in build/pep-NNNN.html looks as you intend.
Create a pull request against the PEPs repository.
For reference, here are all of the possible header fields (everything
in brackets should either be replaced or have the field removed if
it has a leading * marking it as optional and it does not apply to
your PEP):
PEP: [NNN]
Title: [...]
Author: [Full Name <email at example.com>]
Sponsor: *[Full Name <email at example.com>]
PEP-Delegate:
Discussions-To: [URL]
Status: Draft
Type: [Standards Track | Informational | Process]
Topic: *[Governance | Packaging | Release | Typing]
Requires: *[NNN]
Created: [DD-MMM-YYYY]
Python-Version: *[M.N]
Post-History: [`DD-MMM-YYYY <URL>`__]
Replaces: *[NNN]
Superseded-By: *[NNN]
Resolution:
ReStructuredText PEP Formatting Requirements
The following is a PEP-specific summary of reStructuredText syntax.
For the sake of simplicity and brevity, much detail is omitted. For
more detail, see Resources below. Literal blocks (in which no
markup processing is done) are used for examples throughout, to
illustrate the plaintext markup.
General
Lines should usually not extend past column 79,
excepting URLs and similar circumstances.
Tab characters must never appear in the document at all.
Section Headings
PEP headings must begin in column zero and the initial letter of each
word must be capitalized as in book titles. Acronyms should be in all
capitals. Section titles must be adorned with an underline, a single
repeated punctuation character, which begins in column zero and must
extend at least as far as the right edge of the title text (4
characters minimum). First-level section titles are underlined with
“=” (equals signs), second-level section titles with “-” (hyphens),
and third-level section titles with “’” (single quotes or
apostrophes). For example:
First-Level Title
=================
Second-Level Title
------------------
Third-Level Title
'''''''''''''''''
If there are more than three levels of sections in your PEP, you may
insert overline/underline-adorned titles for the first and second
levels as follows:
============================
First-Level Title (optional)
============================
-----------------------------
Second-Level Title (optional)
-----------------------------
Third-Level Title
=================
Fourth-Level Title
------------------
Fifth-Level Title
'''''''''''''''''
You shouldn’t have more than five levels of sections in your PEP. If
you do, you should consider rewriting it.
You must use two blank lines between the last line of a section’s body
and the next section heading. If a subsection heading immediately
follows a section heading, a single blank line in-between is
sufficient.
The body of each section is not normally indented, although some
constructs do use indentation, as described below. Blank lines are
used to separate constructs.
Paragraphs
Paragraphs are left-aligned text blocks separated by blank lines.
Paragraphs are not indented unless they are part of an indented
construct (such as a block quote or a list item).
Inline Markup
Portions of text within paragraphs and other text blocks may be
styled. For example:
Text may be marked as *emphasized* (single asterisk markup,
typically shown in italics) or **strongly emphasized** (double
asterisks, typically boldface). ``Inline literals`` (using double
backquotes) are typically rendered in a monospaced typeface. No
further markup recognition is done within the double backquotes,
so they're safe for any kind of code snippets.
Block Quotes
Block quotes consist of indented body elements. For example:
This is a paragraph.

    This is a block quote.

    A block quote may contain many paragraphs.
Block quotes are used to quote extended passages from other sources.
Block quotes may be nested inside other body elements. Use 4 spaces
per indent level.
Literal Blocks
Literal blocks are used for code samples and other preformatted text.
To indicate a literal block, preface the indented text block with
“::” (two colons), or use the .. code-block:: directive.
Indent the text block by 4 spaces; the literal block continues until the end
of the indentation. For example:
This is a typical paragraph. A literal block follows.

::

    for a in [5, 4, 3, 2, 1]:   # this is program code, shown as-is
        print(a)
    print("it's...")
“::” is also recognized at the end of any paragraph; if not immediately
preceded by whitespace, one colon will remain visible in the final output:
This is an example::

    Literal block
By default, literal blocks will be syntax-highlighted as Python code.
For specific blocks that contain code or data in other languages/formats,
use the .. code-block:: language directive, substituting the “short name”
of the appropriate Pygments lexer
(or text to disable highlighting) for language. For example:
.. code-block:: rst

   An example of the ``rst`` lexer (i.e. *reStructuredText*).
For PEPs that predominantly contain literal blocks of a specific language,
use the .. highlight:: language directive with the appropriate language
at the top of the PEP body (below the headers and above the Abstract).
All literal blocks will then be treated as that language,
unless specified otherwise in the specific .. code-block. For example:
.. highlight:: c

Abstract
========

Here's some C code::

    printf("Hello, World!\n");
Lists
Bullet list items begin with one of “-”, “*”, or “+” (hyphen,
asterisk, or plus sign), followed by whitespace and the list item
body. List item bodies must be left-aligned and indented relative to
the bullet; the text immediately after the bullet determines the
indentation. For example:
This paragraph is followed by a list.

* This is the first bullet list item. The blank line above the
  first list item is required; blank lines between list items
  (such as below this paragraph) are optional.

* This is the first paragraph in the second item in the list.

  This is the second paragraph in the second item in the list.
  The blank line above this paragraph is required. The left edge
  of this paragraph lines up with the paragraph above, both
  indented relative to the bullet.

  - This is a sublist. The bullet lines up with the left edge of
    the text blocks above. A sublist is a new list so requires a
    blank line above and below.

* This is the third item of the main list.

This paragraph is not part of the list.
Enumerated (numbered) list items are similar, but use an enumerator
instead of a bullet. Enumerators are numbers (1, 2, 3, …), letters
(A, B, C, …; uppercase or lowercase), or Roman numerals (i, ii, iii,
iv, …; uppercase or lowercase), formatted with a period suffix
(“1.”, “2.”), parentheses (“(1)”, “(2)”), or a right-parenthesis
suffix (“1)”, “2)”). For example:
1. As with bullet list items, the left edge of paragraphs must
   align.

2. Each list item may contain multiple paragraphs, sublists, etc.

   This is the second paragraph of the second list item.

   a) Enumerated lists may be nested.
   b) Blank lines may be omitted between list items.
Definition lists are written like this:
what
    Definition lists associate a term with a definition.

how
    The term is a one-line phrase, and the definition is one
    or more paragraphs or body elements, indented relative to
    the term.
Tables
Simple tables are easy and compact:
=====  =====  =======
  A      B    A and B
=====  =====  =======
False  False  False
True   False  False
False  True   False
True   True   True
=====  =====  =======
There must be at least two columns in a table (to differentiate from
section titles). Column spans use underlines of hyphens (“Inputs”
spans the first two columns):
=====  =====  ======
   Inputs     Output
------------  ------
  A      B    A or B
=====  =====  ======
False  False  False
True   False  True
False  True   True
True   True   True
=====  =====  ======
Text in a first-column cell starts a new row. No text in the first
column indicates a continuation line; the rest of the cells may
consist of multiple lines. For example:
=====  =========================
col 1  col 2
=====  =========================
1      Second column of row 1.
2      Second column of row 2.
       Second line of paragraph.
3      - Second column of row 3.

       - Second item in bullet
         list (row 3, column 2).
=====  =========================
Hyperlinks
When referencing an external web page in the body of a PEP, you should
include the title of the page or a suitable description in the text, with
either an inline hyperlink or a separate explicit target with the URL.
Do not include bare URLs in the body text of the PEP, and use HTTPS
links wherever available.
Hyperlink references use backquotes and a trailing underscore to mark
up the reference text; backquotes are optional if the reference text
is a single word. For example, to reference a hyperlink target named
Python website, you would write:
In this paragraph, we refer to the `Python website`_.
If you intend to only reference a link once, and want to define it inline
with the text, insert the link into angle brackets (<>) after the text
you want to link, but before the closing backtick, with a space between the
text and the opening backtick. You should also use a double-underscore after
the closing backtick instead of a single one, which makes it an anonymous
reference to avoid conflicting with other target names. For example:
Visit the `website <https://www.python.org/>`__ for more.
If you want to use one link multiple places with different linked text,
or want to ensure you don’t have to update your link target names when
changing the linked text, include the target name within angle brackets
following the text to link, with an underscore after the target name
but before the closing angle bracket (or the link will not work).
For example:
For further examples, see the `documentation <pydocs_>`_.
An explicit target provides the URL. Put targets in the Footnotes section
at the end of the PEP, or immediately after the paragraph with the reference.
Hyperlink targets begin with two periods and a space (the “explicit
markup start”), followed by a leading underscore, the reference text,
a colon, and the URL.
.. _Python web site: https://www.python.org/
.. _pydocs: https://docs.python.org/
The reference text and the target text must match (although the match
is case-insensitive and ignores differences in whitespace). Note that
the underscore trails the reference text but precedes the target text.
If you think of the underscore as a right-pointing arrow, it points
away from the reference and toward the target.
Internal and PEP/RFC Links
The same mechanism as hyperlinks can be used for internal references.
Every unique section title implicitly defines an internal hyperlink target.
We can make a link to the Abstract section like this:
Here is a hyperlink reference to the `Abstract`_ section. The
backquotes are optional since the reference text is a single word;
we can also just write: Abstract_.
To refer to PEPs or RFCs, always use the :pep: and :rfc: roles,
never hardcoded URLs.
For example:
See :pep:`1` for more information on how to write a PEP,
and :pep:`the Hyperlink section of PEP 12 <12#hyperlinks>` for how to link.
This renders as:
See PEP 1 for more information on how to write a PEP,
and the Hyperlink section of PEP 12 for how to link.
PEP numbers in the text are never padded, and there is a space (not a dash)
between “PEP” or “RFC” and the number; the above roles will take care of
that for you.
Footnotes
Footnote references consist of a left square bracket, a label, a
right square bracket, and a trailing underscore.
Instead of a number, use a label of the
form “#word”, where “word” is a mnemonic consisting of alphanumerics
plus internal hyphens, underscores, and periods (no whitespace or
other characters are allowed).
For example:
Refer to The TeXbook [#TeXbook]_ for more information.
which renders as
Refer to The TeXbook [1] for more information.
Whitespace must precede the footnote reference. Leave a space between
the footnote reference and the preceding word.
Use footnotes for additional notes, explanations and caveats, as well as
for references to books and other sources not readily available online.
Native reST hyperlink targets or inline hyperlinks in the text should be
used in preference to footnotes for including URLs to online resources.
Footnotes begin with “.. ” (the explicit
markup start), followed by the footnote marker (no underscores),
followed by the footnote body. For example:
.. [#TeXbook] Donald Knuth's *The TeXbook*, pages 195 and 196.
which renders as
[1]
Donald Knuth’s The TeXbook, pages 195 and 196.
Footnotes and footnote references will be numbered automatically, and
the numbers will always match.
Images
If your PEP contains a diagram or other graphic, you may include it in the
processed output using the image directive:
.. image:: diagram.png
Any browser-friendly graphics format is possible; PNG should be
preferred for graphics, JPEG for photos and GIF for animations.
Currently, SVG must be avoided due to compatibility issues with the
PEP build system.
For accessibility and readers of the source text, you should include
a description of the image and any key information contained within
using the :alt: option to the image directive:
.. image:: dataflow.png
   :alt: Data flows from the input module, through the "black box"
         module, and finally into (and through) the output module.
Comments
A comment is an indented block of arbitrary text immediately
following an explicit markup start: two periods and whitespace. Leave
the “..” on a line by itself to ensure that the comment is not
misinterpreted as another explicit markup construct. Comments are not
visible in the processed document. For example:
..
   This section should be updated in the final PEP.
   Ensure the date is accurate.
Escaping Mechanism
reStructuredText uses backslashes ("\") to override the special
meaning given to markup characters and get the literal characters
themselves. To get a literal backslash, use an escaped backslash
("\\"). There are two contexts in which backslashes have no
special meaning: literal blocks and inline literals (see Inline
Markup above). In these contexts, no markup recognition is done,
and a single backslash represents a literal backslash, without having
to double up.
If you find that you need to use a backslash in your text, consider
using inline literals or a literal block instead.
Canonical Documentation and Intersphinx
As PEP 1 describes,
PEPs are considered historical documents once marked Final,
and their canonical documentation/specification should be moved elsewhere.
To indicate this, use the canonical-doc directive
or an appropriate subclass:
canonical-pypa-spec for packaging standards
canonical-typing-spec for typing standards
Furthermore, you can use
Intersphinx references
to other Sphinx sites,
currently the Python documentation
and packaging.python.org,
to easily cross-reference pages, sections and Python/C objects.
This works with both the “canonical” directives and anywhere in your PEP.
Add the directive between the headers and the first section of the PEP
(typically the Abstract)
and pass as an argument an Intersphinx reference of the canonical doc/spec
(or if the target is not on a Sphinx site, a reST hyperlink).
For example,
to create a banner pointing to the sqlite3 docs,
you would write the following:
.. canonical-doc:: :mod:`python:sqlite3`
which would generate the banner:
Important
This PEP is a historical document. The up-to-date, canonical documentation can now be found at sqlite3.
See PEP 1 for how to propose changes.
Or for a PyPA spec,
such as the Core metadata specifications,
you would use:
.. canonical-pypa-spec:: :ref:`packaging:core-metadata`
which renders as:
Attention
This PEP is a historical document. The up-to-date, canonical spec, Core metadata specifications, is maintained on the PyPA specs page.
See the PyPA specification update process for how to propose changes.
The argument accepts arbitrary reST,
so you can include multiple linked docs/specs and name them whatever you like,
and you can also include directive content that will be inserted into the text.
The following advanced example:
.. canonical-doc:: the :ref:`python:sqlite3-connection-objects` and :exc:`python:~sqlite3.DataError` docs

   Also, see the :ref:`Data Persistence docs <persistence>` for other examples.
would render as:
Important
This PEP is a historical document. The up-to-date, canonical documentation can now be found at the Connection objects and sqlite3.DataError docs.
Also, see the Data Persistence docs for other examples.
See PEP 1 for how to propose changes.
Habits to Avoid
Many programmers who are familiar with TeX often write quotation marks
like this:
`single-quoted' or ``double-quoted''
Backquotes are significant in reStructuredText, so this practice
should be avoided. For ordinary text, use ordinary ‘single-quotes’ or
“double-quotes”. For inline literal text (see Inline Markup
above), use double-backquotes:
``literal text: in here, anything goes!``
Suggested Sections
Various sections are found to be common across PEPs and are outlined in
PEP 1. Those sections are provided here for convenience.
PEP: <REQUIRED: pep number>
Title: <REQUIRED: pep title>
Author: <REQUIRED: list of authors' real names and optionally, email addrs>
Sponsor: <real name of sponsor>
PEP-Delegate: <PEP delegate's real name>
Discussions-To: <REQUIRED: URL of current canonical discussion thread>
Status: <REQUIRED: Draft | Active | Accepted | Provisional | Deferred | Rejected | Withdrawn | Final | Superseded>
Type: <REQUIRED: Standards Track | Informational | Process>
Topic: <Governance | Packaging | Release | Typing>
Requires: <pep numbers>
Created: <date created on, in dd-mmm-yyyy format>
Python-Version: <version number>
Post-History: <REQUIRED: dates, in dd-mmm-yyyy format, and corresponding links to PEP discussion threads>
Replaces: <pep number>
Superseded-By: <pep number>
Resolution: <url>
Abstract
========
[A short (~200 word) description of the technical issue being addressed.]
Motivation
==========
[Clearly explain why the existing language specification is inadequate to address the problem that the PEP solves.]
Rationale
=========
[Describe why particular design decisions were made.]
Specification
=============
[Describe the syntax and semantics of any new language feature.]
Backwards Compatibility
=======================
[Describe potential impact and severity on pre-existing code.]
Security Implications
=====================
[How could a malicious user take advantage of this new feature?]
How to Teach This
=================
[How to teach users, new and experienced, how to apply the PEP to their work.]
Reference Implementation
========================
[Link to any existing implementation and details about its state, e.g. proof-of-concept.]
Rejected Ideas
==============
[Why certain ideas that were brought while discussing this PEP were not ultimately pursued.]
Open Issues
===========
[Any points that are still being decided/discussed.]
Footnotes
=========
[A collection of footnotes cited in the PEP, and a place to list non-inline hyperlink targets.]
Copyright
=========
This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.
Resources
Many other constructs and variations are possible,
both those supported by basic Docutils
and the extensions added by Sphinx.
A number of resources are available to learn more about them:
Sphinx ReStructuredText Primer,
a gentle but fairly detailed introduction.
reStructuredText Markup Specification,
the authoritative, comprehensive documentation of the basic reST syntax,
directives, roles and more.
Sphinx Roles
and Sphinx Directives,
the extended constructs added by the Sphinx documentation system used to
render the PEPs to HTML.
If you have questions or require assistance with writing a PEP that the above
resources don’t address, ping @python/pep-editors on GitHub, open an
issue on the PEPs repository
or reach out to a PEP editor directly.
Copyright
This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.
| Active | PEP 12 – Sample reStructuredText PEP Template | Process | This PEP provides a boilerplate or sample template for creating your own reStructuredText PEPs. In conjunction with the content guidelines in PEP 1, this should make it easy for you to conform your own PEPs to the format outlined below. |
PEP 13 – Python Language Governance
Author:
The Python core team and community
Status:
Active
Type:
Process
Topic:
Governance
Created:
16-Dec-2018
Table of Contents
Abstract
Current steering council
Specification
The steering council
Composition
Mandate
Powers
Electing the council
Term
Vacancies
Conflicts of interest
Ejecting core team members
Vote of no confidence
The core team
Role
Prerogatives
Membership
Changing this document
History
Creation of this document
History of council elections
History of amendments
Acknowledgements
Copyright
Abstract
This PEP defines the formal governance process for Python, and records
how this has changed over time. Currently, governance is based around
a steering council. The council has broad authority, which they seek
to exercise as rarely as possible.
Current steering council
The 2024 term steering council consists of:
Barry Warsaw
Emily Morehouse
Gregory P. Smith
Pablo Galindo Salgado
Thomas Wouters
Per the results of the vote tracked in PEP 8105.
The core team consists of those listed in the private
https://github.com/python/voters/ repository which is publicly
shared via https://devguide.python.org/developers/.
Specification
The steering council
Composition
The steering council is a 5-person committee.
Mandate
The steering council shall work to:
Maintain the quality and stability of the Python language and
CPython interpreter,
Make contributing as accessible, inclusive, and sustainable as
possible,
Formalize and maintain the relationship between the core team and
the PSF,
Establish appropriate decision-making processes for PEPs,
Seek consensus among contributors and the core team before acting in
a formal capacity,
Act as a “court of final appeal” for decisions where all other
methods have failed.
Powers
The council has broad authority to make decisions about the project.
For example, they can:
Accept or reject PEPs
Enforce or update the project’s code of conduct
Work with the PSF to manage any project assets
Delegate parts of their authority to other subcommittees or
processes
However, they cannot modify this PEP, or affect the membership of the
core team, except via the mechanisms specified in this PEP.
The council should look for ways to use these powers as little as
possible. Instead of voting, it’s better to seek consensus. Instead of
ruling on individual PEPs, it’s better to define a standard process
for PEP decision making (for example, by accepting one of the other
801x series of PEPs). It’s better to establish a Code of Conduct
committee than to rule on individual cases. And so on.
To use its powers, the council votes. Every council member must either
vote or explicitly abstain. Members with conflicts of interest on a
particular vote must abstain. Passing requires a strict majority of
non-abstaining council members.
Whenever possible, the council’s deliberations and votes shall be held
in public.
Electing the council
A council election consists of two phases:
Phase 1: Candidates advertise their interest in serving. Candidates
must be nominated by a core team member. Self-nominations are
allowed.
Phase 2: Each core team member can vote for zero or more of the
candidates. Voting is performed anonymously. Candidates are ranked
by the total number of votes they receive. If a tie occurs, it may
be resolved by mutual agreement among the candidates, or else the
winner will be chosen at random.
Each phase lasts one to two weeks, at the outgoing council’s discretion.
For the initial election, both phases will last two weeks.
The election process is managed by a returns officer nominated by the
outgoing steering council. For the initial election, the returns
officer will be nominated by the PSF Executive Director.
The council should ideally reflect the diversity of Python
contributors and users, and core team members are encouraged to vote
accordingly.
Term
A new council is elected after each feature release. Each council’s
term runs from when their election results are finalized until the
next council’s term starts. There are no term limits.
Vacancies
Council members may resign their position at any time.
Whenever there is a vacancy during the regular council term, the
council may vote to appoint a replacement to serve out the rest of the
term.
If a council member drops out of touch and cannot be contacted for a
month or longer, then the rest of the council may vote to replace
them.
Conflicts of interest
While we trust council members to act in the best interests of Python
rather than themselves or their employers, the mere appearance of any
one company dominating Python development could itself be harmful and
erode trust. In order to avoid any appearance of conflict of interest,
at most 2 members of the council can work for any single employer.
In a council election, if 3 of the top 5 vote-getters work for the
same employer, then whichever of them ranked lowest is disqualified
and the 6th-ranking candidate moves up into 5th place; this is
repeated until a valid council is formed.
During a council term, if changing circumstances cause this rule to be
broken (for instance, due to a council member changing employment),
then one or more council members must resign to remedy the issue, and
the resulting vacancies can then be filled as normal.
Ejecting core team members
In exceptional circumstances, it may be necessary to remove someone
from the core team against their will. (For example: egregious and
ongoing code of conduct violations.) This can be accomplished by a
steering council vote, but unlike other steering council votes, this
requires at least a two-thirds majority. With 5 members voting, this
means that a 3:2 vote is insufficient; 4:1 in favor is the minimum
required for such a vote to succeed. In addition, this is the one
power of the steering council which cannot be delegated, and this
power cannot be used while a vote of no confidence is in process.
If the ejected core team member is also on the steering council, then
they are removed from the steering council as well.
Vote of no confidence
In exceptional circumstances, the core team may remove a sitting
council member, or the entire council, via a vote of no confidence.
A no-confidence vote is triggered when a core team member calls for
one publicly on an appropriate project communication channel, and
another core team member seconds the proposal.
The vote lasts for two weeks. Core team members vote for or against.
If at least two thirds of voters express a lack of confidence, then
the vote succeeds.
There are two forms of no-confidence votes: those targeting a single
member, and those targeting the council as a whole. The initial call
for a no-confidence vote must specify which type is intended. If a
single-member vote succeeds, then that member is removed from the
council and the resulting vacancy can be handled in the usual way. If
a whole-council vote succeeds, the council is dissolved and a new
council election is triggered immediately.
The core team
Role
The core team is the group of trusted volunteers who manage Python.
They assume many roles required to achieve the project’s goals,
especially those that require a high level of trust. They make the
decisions that shape the future of the project.
Core team members are expected to act as role models for the community
and custodians of the project, on behalf of the community and all
those who rely on Python.
They will intervene, where necessary, in online discussions or at
official Python events on the rare occasions that a situation arises
that requires intervention.
They have authority over the Python Project infrastructure, including
the Python Project website itself, the Python GitHub organization and
repositories, the bug tracker, the mailing lists, IRC channels, etc.
Prerogatives
Core team members may participate in formal votes, typically to nominate new
team members and to elect the steering council.
Membership
Python core team members demonstrate:
a good grasp of the philosophy of the Python Project
a solid track record of being constructive and helpful
significant contributions to the project’s goals, in any form
willingness to dedicate some time to improving Python
As the project matures, contributions go beyond code. Here’s an
incomplete list of areas where contributions may be considered for
joining the core team, in no particular order:
Working on community management and outreach
Providing support on the mailing lists and on IRC
Triaging tickets
Writing patches (code, docs, or tests)
Reviewing patches (code, docs, or tests)
Participating in design decisions
Providing expertise in a particular domain (security, i18n, etc.)
Managing the continuous integration infrastructure
Managing the servers (website, tracker, documentation, etc.)
Maintaining related projects (alternative interpreters, core
infrastructure like packaging, etc.)
Creating visual designs
Core team membership acknowledges sustained and valuable efforts that
align well with the philosophy and the goals of the Python project.
It is granted by receiving at least two-thirds positive votes in a
core team vote that is open for one week and is not vetoed by the
steering council.
Core team members are always looking for promising contributors,
teaching them how the project is managed, and submitting their names
to the core team’s vote when they’re ready.
There’s no time limit on core team membership. However, in order to
provide the general public with a reasonable idea of how many people
maintain Python, core team members who have stopped contributing are
encouraged to declare themselves as “inactive”. Those who haven’t made
any non-trivial contribution in two years may be asked to move
themselves to this category, and moved there if they don’t respond. To
record and honor their contributions, inactive team members will
continue to be listed alongside active core team members; and, if they
later resume contributing, they can switch back to active status at
will. While someone is in inactive status, though, they lose their
active privileges like voting or nominating for the steering council,
and commit access.
The initial active core team members will consist of everyone
currently listed in the “Python core” team on GitHub (access
granted for core members only), and the
initial inactive members will consist of everyone else who has been a
committer in the past.
Changing this document
Changes to this document require at least a two-thirds majority of
votes cast in a core team vote which should be open for two weeks.
History
Creation of this document
The Python project was started by Guido van Rossum, who served as its
Benevolent Dictator for Life (BDFL) from inception until July 2018,
when he stepped down.
After discussion, a number of proposals were put forward for a new
governance model, and the core devs voted to choose between them. The
overall process is described in PEP 8000 and PEP 8001, a review of
other projects was performed in PEP 8002, and the proposals themselves
were written up as the 801x series of PEPs. Eventually the proposal in
PEP 8016 was selected
as the new governance model, and was used to create the initial
version of this PEP. The 8000-series PEPs are preserved for historical
reference (and in particular, PEP 8016 contains additional rationale
and links to contemporary discussions), but this PEP is now the
official reference, and will evolve following the rules described
herein.
History of council elections
January 2019: PEP 8100
December 2019: PEP 8101
December 2020: PEP 8102
December 2021: PEP 8103
December 2022: PEP 8104
December 2023: PEP 8105
History of amendments
2019-04-17: Added the vote length for core devs and changes to this document.
Acknowledgements
This PEP began as PEP 8016, which was written by Nathaniel J. Smith
and Donald Stufft, based on a Django governance document written by
Aymeric Augustin, and incorporated feedback and assistance from
numerous others.
Copyright
This document has been placed in the public domain.
| Active | PEP 13 – Python Language Governance | Process | This PEP defines the formal governance process for Python, and records
how this has changed over time. Currently, governance is based around
a steering council. The council has broad authority, which they seek
to exercise as rarely as possible. |
PEP 20 – The Zen of Python
Author:
Tim Peters <tim.peters at gmail.com>
Status:
Active
Type:
Informational
Created:
19-Aug-2004
Post-History:
22-Aug-2004
Table of Contents
Abstract
The Zen of Python
Easter Egg
References
Copyright
Abstract
Long time Pythoneer Tim Peters succinctly channels the BDFL’s guiding
principles for Python’s design into 20 aphorisms, only 19 of which
have been written down.
The Zen of Python
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
Easter Egg
>>> import this
References
Originally posted to comp.lang.python/python-list@python.org under a
thread called “The Way of Python”
Copyright
This document has been placed in the public domain.
| Active | PEP 20 – The Zen of Python | Informational | Long time Pythoneer Tim Peters succinctly channels the BDFL’s guiding
principles for Python’s design into 20 aphorisms, only 19 of which
have been written down. |
PEP 101 – Doing Python Releases 101
Author:
Barry Warsaw <barry at python.org>, Guido van Rossum <guido at python.org>
Status:
Active
Type:
Informational
Created:
22-Aug-2001
Post-History:
Replaces:
102
Table of Contents
Abstract
Things You’ll Need
Types of Releases
How To Make A Release
What Next?
Moving to End-of-life
Windows Notes
Copyright
Abstract
Making a Python release is a thrilling and crazy process. You’ve heard
the expression “herding cats”? Imagine trying to also saddle those
purring little creatures up, and ride them into town, with some of their
buddies firmly attached to your bare back, anchored by newly sharpened
claws. At least they’re cute, you remind yourself.
Actually, no, that’s a slight exaggeration 😉 The Python release
process has steadily improved over the years and now, with the help of our
amazing community, is really not too difficult. This PEP attempts to
collect, in one place, all the steps needed to make a Python release.
Most of the steps are now automated or guided by automation, so manually
following this list is no longer necessary.
Things You’ll Need
As a release manager there are a lot of resources you’ll need to access.
Here’s a hopefully-complete list.
A GPG key. Python releases are digitally signed with GPG; you’ll need a key,
which hopefully will be on the “web of trust” with at least one of
the other release managers.
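If you do not already have a signing key, a minimal sketch of creating and publishing one (the key type, expiry, and keyserver behaviour are illustrative choices, not requirements):
$ gpg --full-generate-key                    # pick a strong key type and a sensible expiry
$ gpg --list-secret-keys --keyid-format long # note the long key id
$ gpg --send-keys <your-key-id>              # publish it so others can verify and sign it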
A bunch of software:
A checkout of the python/release-tools repo.
It contains a requirements.txt file that you need to install
dependencies from first. Afterwards, you can fire up scripts in the
repo, covered later in this PEP.
blurb, the
Misc/NEWS
management tool. You can pip install it.
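For example, one way to get the tooling set up locally (the use of a virtual environment is an assumption, not a requirement):
$ git clone https://github.com/python/release-tools.git
$ cd release-tools
$ python -m venv venv && source venv/bin/activate   # optional isolation
$ pip install -r requirements.txt
$ pip install blurb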
A fairly complete installation of a recent TeX distribution,
such as texlive. You need that for building the PDF docs.
Access to servers where you will upload files:
downloads.nyc1.psf.io, the server that hosts download files; and
docs.nyc1.psf.io, the server that hosts the documentation.
Administrator access to https://github.com/python/cpython.
An administrator account on www.python.org, including an “API key”.
Write access to the PEP repository. If you’re reading this, you probably already have this; the first
task of any release manager is to draft the release schedule. But
in case you just signed up… sucker! I mean, uh, congratulations!
Posting access to http://blog.python.org, a Blogger-hosted weblog.
The RSS feed from this blog is used for the ‘Python News’ section
on www.python.org.
A subscription to the super secret release manager mailing list, which may
or may not be called python-cabal. Bug Barry about this.
A @python.org email address that you will use to sign your releases
with. Ask postmaster@ for an address; you can either get a full
account, or a redirecting alias + SMTP credentials to send email from
this address that looks legit to major email providers.
Types of Releases
There are several types of releases you will need to make. These include:
alpha
begin beta, also known as beta 1, also known as new branch
beta 2+
release candidate 1
release candidate 2+
final
new branch
begin bugfix mode
begin security-only mode
end-of-life
Some of these release types actually involve more than
one release branch. In particular, a new branch is that point in the
release cycle when a new feature release cycle begins. Under the current
organization of the cpython git repository, the main branch is always
the target for new features. At some point in the release cycle of the
next feature release, a new branch release is made which creates a
new separate branch for stabilization and later maintenance of the
current in-progress feature release (3.n.0) and the main branch is modified
to build a new version (which will eventually be released as 3.n+1.0).
While the new branch release step could occur at one of several points
in the release cycle, current practice is for it to occur at feature code
cutoff for the release which is scheduled for the first beta release.
In the descriptions that follow, steps specific to release types are
labeled accordingly, for now, new branch and final.
How To Make A Release
Here are the steps taken to make a Python release. Some steps are more
fuzzy than others because there’s little that can be automated (e.g.
writing the NEWS entries). Where a step is usually performed by An
Expert, the role of that expert is given. Otherwise, assume the step is
done by the Release Manager (RM), the designated person performing the
release. The roles and their current experts are:
RM = Release Manager
Thomas Wouters <thomas@python.org> (NL)
Pablo Galindo Salgado <pablogsal@python.org> (UK)
Łukasz Langa <lukasz@python.org> (PL)
WE = Windows - Steve Dower <steve.dower@python.org>
ME = Mac - Ned Deily <nad@python.org> (US)
DE = Docs - Julien Palard <julien@python.org> (Central Europe)
Note
It is highly recommended that the RM contact the Experts the day
before the release. Because the world is round and everyone lives
in different timezones, the RM must ensure that the release tag is
created in enough time for the Experts to cut binary releases.
You should not make the release public (by updating the website and
sending announcements) before all experts have updated their bits.
In rare cases where the expert for Windows or Mac is MIA, you may add
a message “(Platform) binaries will be provided shortly” and proceed.
As much as possible, the release steps are automated and guided by the
release script, which is available in a separate repository:
https://github.com/python/release-tools
We use the following conventions in the examples below. Where a release
number is given, it is of the form 3.X.YaN, e.g. 3.13.0a3 for Python 3.13.0
alpha 3, where “a” == alpha, “b” == beta, “rc” == release candidate.
Release tags are named v3.X.YaN. The branch name for minor release
maintenance branches is 3.X.
The release script helps by performing several automatic editing steps, and
guides you to perform some manual editing steps.
Log into Discord and join the Python Core Devs server. Ask Thomas
or Łukasz for an invite. You probably need to coordinate with other people around the world.
This communication channel is where we’ve arranged to meet.
Check to see if there are any showstopper bugs. Go to https://github.com/python/cpython/issues and look for any open
bugs that can block this release. You’re looking at two relevant labels:
release-blocker: Stops the release dead in its tracks. You may not
make any release with any open release blocker bugs.
deferred-blocker: Doesn’t block this release, but it will block a
future release. You may not make a final or
candidate release with any open deferred blocker
bugs.
Review the release blockers and either resolve them, bump them down to
deferred, or stop the release and ask for community assistance. If
you’re making a final or candidate release, do the same with any open
deferred.
Check the stable buildbots. Go to https://buildbot.python.org/all/#/release_status
Look at the buildbots for the release
you’re making. Ignore any that are offline (or inform the community so
they can be restarted). If what remains are (mostly) green buildbots,
you’re good to go. If you have non-offline red buildbots, you may want
to hold up the release until they are fixed. Review the problems and
use your judgement, taking into account whether you are making an alpha,
beta, or final release.
Make a release clone. On a fork of the cpython repository on GitHub, create a release branch
within it (called the “release clone” from now on). You can use the same
GitHub fork you use for cpython development. Using the standard setup
recommended in the Python Developer’s Guide, your fork would be referred
to as origin and the standard cpython repo as upstream. You will
use the branch on your fork to do the release engineering work, including
tagging the release, and you will use it to share with the other experts
for making the binaries.
For a final or release candidate 2+ release, if you are going
to cherry-pick a subset of changes for the next rc or final from all those
merged since the last rc, you should create a release
engineering branch starting from the most recent release candidate tag,
i.e. v3.8.0rc1. You will then cherry-pick changes from the standard
release branch as necessary into the release engineering branch and
then proceed as usual. If you are going to take all of the changes
since the previous rc, you can proceed as normal.
Make sure the current branch of your release clone is the branch you
want to release from. (git status)
Run blurb release <version> specifying the version number
(e.g. blurb release 3.4.7rc1). This merges all the recent news
blurbs into a single file marked with this release’s version number.
Regenerate Lib/pydoc_data/topics.py. While still in the Doc directory, run make pydoc-topics. Then copy
build/pydoc-topics/topics.py to ../Lib/pydoc_data/topics.py.
Commit your changes to Lib/pydoc_data/topics.py
(and any fixes you made in the docs).
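Roughly, the commands for this step look like the following (run from your release clone; the commit message is illustrative):
$ cd Doc
$ make pydoc-topics
$ cp build/pydoc-topics/topics.py ../Lib/pydoc_data/topics.py
$ git commit -m 'Regenerate pydoc topics' ../Lib/pydoc_data/topics.py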
Consider running autoconf using the currently accepted standard version
in case configure or other autoconf-generated files were last
committed with a newer or older version and may contain spurious or
harmful differences. Currently, autoconf 2.71 is our de facto standard.
If there are differences, commit them.
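A quick way to check and regenerate (assuming autoconf is on your PATH and you are in the repository root):
$ autoconf --version   # should report 2.71, the current de facto standard
$ autoconf             # regenerates configure from configure.ac
$ git diff             # review; commit only if the differences look sane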
Make sure the SOURCE_URI in Doc/tools/extensions/pyspecific.py
points to the right branch in the git repository (main or 3.X).
For a new branch release, change the branch in the file from main
to the new release branch you are about to create (3.X).
Bump version numbers via the release script:
$ .../release-tools/release.py --bump 3.X.YaN
Reminder: X, Y, and N should be integers.
a should be one of “a”, “b”, or “rc” (e.g. “3.4.3rc1”).
For final releases omit the aN (“3.4.3”). For the first
release of a new version Y should be 0 (“3.6.0”).
This automates updating various release numbers, but you will have to
modify a few files manually. If your $EDITOR environment variable is
set up correctly, release.py will pop up editor windows with the files
you need to edit.
Review the blurb-generated Misc/NEWS file and edit as necessary.
Make sure all changes have been committed. (release.py --bump
doesn’t check in its changes for you.)
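For instance (the file list and commit message below are placeholders, not prescribed values):
$ git status
$ git add <files edited by the bump>
$ git commit -m 'Bump version to 3.X.YaN'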
Check the years on the copyright notice. If the last release
was some time last year, add the current year to the copyright
notice in several places:
README
LICENSE (make sure to change on trunk and the branch)
Python/getcopyright.c
Doc/copyright.rst
Doc/license.rst
PC/python_ver_rc.h sets up the DLL version resource for Windows
(displayed when you right-click on the DLL and select
Properties). This isn’t a C include file, it’s a Windows
“resource file” include file.
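One quick way to spot stale years in those files (assuming the previous year was 2023; adjust as needed):
$ grep -n 2023 README LICENSE Python/getcopyright.c Doc/copyright.rst Doc/license.rst PC/python_ver_rc.h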
For a final major release, edit the first paragraph of
Doc/whatsnew/3.X.rst to include the actual release date; e.g. “Python
2.5 was released on August 1, 2003.” There’s no need to edit this for
alpha or beta releases.
Do a “git status” in this directory. You should not see any files. I.e. you better not have any uncommitted
changes in your working directory.
Tag the release for 3.X.YaN:
$ .../release-tools/release.py --tag 3.X.YaN
This executes a git tag command with the -s option so that the
release tag in the repo is signed with your gpg key. When prompted
choose the private key you use for signing release tarballs etc.
For begin security-only mode and end-of-life releases, review the
two files and update the versions accordingly in all active branches.
Time to build the source tarball. Use the release script to create
the source gzip and xz tarballs,
documentation tar and zip files, and gpg signature files:
$ .../release-tools/release.py --export 3.X.YaN
This can take a while for final releases, and it will leave all the
tarballs and signatures in a subdirectory called 3.X.YaN/src, and the
built docs in 3.X.YaN/docs (for final releases).
Note that the script will sign your release with Sigstore. Please use
your @python.org email address for this. See here for more information:
https://www.python.org/download/sigstore/.
Now you want to perform the very important step of checking the
tarball you just created, to make sure a completely clean,
virgin build passes the regression test. Here are the best
steps to take:
$ cd /tmp
$ tar xvf /path/to/your/release/clone/<version>/Python-3.2rc2.tgz
$ cd Python-3.2rc2
$ ls
(Do things look reasonable?)
$ ls Lib
(Are there stray .pyc files?)
$ ./configure
(Loads of configure output)
$ make test
(Do all the expected tests pass?)
If you’re feeling lucky and have some time to kill, or if you are making
a release candidate or final release, run the full test suite:
$ make testall
If the tests pass, then you can feel good that the tarball is
fine. If some of the tests fail, or anything else about the
freshly unpacked directory looks weird, you better stop now and
figure out what the problem is.
Push your commits to the remote release branch in your GitHub fork.
# Do a dry run first.
$ git push --dry-run --tags origin
# Make sure you are pushing to your GitHub fork, *not* to the main
# python/cpython repo!
$ git push --tags origin
Notify the experts that they can start building binaries.
Warning
STOP: at this point you must receive the “green light” from other experts
in order to create the release. There are things you can do while you wait
though, so keep reading until you hit the next STOP.
The WE generates and publishes the Windows files using the Azure
Pipelines build scripts in .azure-pipelines/windows-release/,
currently set up at https://dev.azure.com/Python/cpython/_build?definitionId=21. The build process runs in multiple stages, with each stage’s output being
available as a downloadable artifact. The stages are:
Compile all variants of binaries (32-bit, 64-bit, debug/release),
including running profile-guided optimization.
Compile the HTML Help file containing the Python documentation
Codesign all the binaries with the PSF’s certificate
Create packages for python.org, nuget.org, the embeddable distro and
the Windows Store
Perform basic verification of the installers
Upload packages to python.org and nuget.org, purge download caches and
run a test download.
After the uploads are complete, the WE copies the generated hashes from
the build logs and emails them to the RM. The Windows Store packages are
uploaded manually to https://partner.microsoft.com/dashboard/home by the
WE.
The ME builds Mac installer packages and uploads them to
downloads.nyc1.psf.io together with gpg signature files.
scp or rsync all the files built by release.py --export
to your home directory on downloads.nyc1.psf.io. While you’re waiting for the files to finish uploading, you can continue
on with the remaining tasks. You can also ask folks on #python-dev
and/or python-committers to download the files as they finish uploading
so that they can test them on their platforms as well.
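The upload itself might look something like this (the login name and local paths are placeholders):
$ scp -r 3.X.YaN/src 3.X.YaN/docs <your-login>@downloads.nyc1.psf.io:~/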
Now you need to go to downloads.nyc1.psf.io and move all the files in place
over there. Our policy is that every Python version gets its own
directory, but each directory contains all releases of that version.
On downloads.nyc1.psf.io, cd /srv/www.python.org/ftp/python/3.X.Y
creating it if necessary. Make sure it is owned by group ‘downloads’
and group-writable.
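For example, run on downloads.nyc1.psf.io (group name per the policy above):
$ mkdir -p /srv/www.python.org/ftp/python/3.X.Y
$ chgrp downloads /srv/www.python.org/ftp/python/3.X.Y
$ chmod g+w /srv/www.python.org/ftp/python/3.X.Y
$ cd /srv/www.python.org/ftp/python/3.X.Y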
Move the release .tgz and .tar.xz files into place, as well as the
.asc GPG signature files. The Win/Mac binaries are usually put there
by the experts themselves. Make sure they are world readable. They should also be group
writable, and group-owned by downloads.
Use gpg --verify to make sure they got uploaded intact.
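A sketch of moving the files into place and verifying them (the filenames are illustrative):
$ mv ~/Python-3.X.Y*.tgz ~/Python-3.X.Y*.tar.xz ~/Python-3.X.Y*.asc .
$ chgrp downloads Python-3.X.Y*
$ chmod 664 Python-3.X.Y*                       # world readable, group writable
$ gpg --verify Python-3.X.Y.tgz.asc Python-3.X.Y.tgz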
If this is a final or rc release: Move the doc zips and tarballs to
/srv/www.python.org/ftp/python/doc/3.X.Y[rcA], creating the directory
if necessary, and adapt the “current” symlink in .../doc to point to
that directory. Note though that if you’re releasing a maintenance
release for an older version, don’t change the current link.
If this is a final or rc release (even a maintenance release), also
unpack the HTML docs to /srv/docs.python.org/release/3.X.Y[rcA] on
docs.nyc1.psf.io. Make sure the files are in group docs and are
group-writeable.
Let the DE check if the docs are built and work all right.
Note both the documentation and downloads are behind a caching CDN. If
you change archives after downloading them through the website, you’ll
need to purge the stale data in the CDN like this:
$ curl -X PURGE https://www.python.org/ftp/python/3.12.0/Python-3.12.0.tar.xz
You should always purge the cache of the directory listing as people
use that to browse the release files:
$ curl -X PURGE https://www.python.org/ftp/python/3.12.0/
For the extra paranoid, do a completely clean test of the release.
This includes downloading the tarball from www.python.org. Make sure the md5 checksums match. Then unpack the tarball,
and do a clean make test:
$ make distclean
$ ./configure
$ make test
To ensure that the regression test suite passes. If not, you
screwed up somewhere!
Warning
STOP and confirm:
Have you gotten the green light from the WE?
Have you gotten the green light from the ME?
Have you gotten the green light from the DE?
If green, it’s time to merge the release engineering branch back into
the main repo.
In order to push your changes to GitHub, you’ll have to temporarily
disable branch protection for administrators. Go to the
Settings | Branches page:
https://github.com/python/cpython/settings/branches/
“Edit” the settings for the branch you’re releasing on.
This will load the settings page for that branch.
Uncheck the “Include administrators” box and press the
“Save changes” button at the bottom.
Merge your release clone into the main development repo:
# Pristine copy of the upstream repo branch
$ git clone git@github.com:python/cpython.git merge
$ cd merge
# Checkout the correct branch:
# 1. For feature pre-releases up to and including a
# **new branch** release, i.e. alphas and first beta
# do a checkout of the main branch
$ git checkout main
# 2. Else, for all other releases, checkout the
# appropriate release branch.
$ git checkout 3.X
# Fetch the newly created and signed tag from your clone repo
$ git fetch --tags git@github.com:your-github-id/cpython.git v3.X.YaN
# Merge the temporary release engineering branch back into
$ git merge --no-squash v3.X.YaN
$ git commit -m 'Merge release engineering branch'
If this is a new branch release, i.e. first beta,
now create the new release branch:
$ git checkout -b 3.X
Do any steps needed to set up the new release branch, including:
In README.rst, change all references from main to
the new branch, in particular, GitHub repo URLs.
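One way to find those references (the sed command is only a sketch; review every hit by hand before committing):
$ grep -n main README.rst
$ sed -i 's|/main/|/3.X/|g' README.rst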
For all releases, do the guided post-release steps with the
release script:
$ .../release-tools/release.py --done 3.X.YaN
For a final or release candidate 2+ release, you may need to
do some post-merge cleanup. Check the top-level README.rst
and include/patchlevel.h files to ensure they now reflect
the desired post-release values for on-going development.
The patchlevel should be the release tag with a +.
Also, if you cherry-picked changes from the standard release
branch into the release engineering branch for this release,
you will now need to manually remove each blurb entry from
the Misc/NEWS.d/next directory that was cherry-picked
into the release you are working on since that blurb entry
is now captured in the merged x.y.z.rst file for the new
release. Otherwise, the blurb entry will appear twice in
the changelog.html file, once under Python next and again
under x.y.z.
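For each such cherry-picked change you would remove its news fragment, e.g. (the filename here is purely hypothetical):
$ git rm 'Misc/NEWS.d/next/Library/2024-01-01-00-00-00.gh-issue-12345.AbCdEf.rst'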
Review and commit these changes:
$ git commit -m 'Post release updates'
If this is a new branch release (e.g. the first beta),
update the main branch to start development for the
following feature release. When finished, the main
branch will now build Python X.Y+1.
First, set main up to be the next release, i.e. X.Y+1.a0:
$ git checkout main
$ .../release-tools/release.py --bump 3.9.0a0
Edit all version references in README.rst
Move any historical “what’s new” entries from Misc/NEWS to
Misc/HISTORY.
Edit Doc/tutorial/interpreter.rst (2 references to ‘[Pp]ython3x’,
one to ‘Python 3.x’, also make the date in the banner consistent).
Edit Doc/tutorial/stdlib.rst and Doc/tutorial/stdlib2.rst, which
have each one reference to ‘[Pp]ython3x’.
Add a new whatsnew/3.x.rst file (with the comment near the top
and the toplevel sections copied from the previous file) and
add it to the toctree in whatsnew/index.rst. But beware that
the initial whatsnew/3.x.rst checkin from previous releases
may be incorrect due to the initial midstream change to blurb
that propagates from release to release! Help break the cycle: if
necessary, make the following change:
- For full details, see the :source:`Misc/NEWS` file.
+ For full details, see the :ref:`changelog <changelog>`.
Update the version number in configure.ac and re-run autoconf.
Make sure the SOURCE_URI in Doc/tools/extensions/pyspecific.py
points to main.
Update the version numbers for the Windows builds in PC/ and
PCbuild/, which have references to python38.
NOTE: check with Steve Dower about this step; it is probably obsolete.
$ find PC/ PCbuild/ -type f | xargs sed -i 's/python38/python39/g'
$ git mv -f PC/os2emx/python38.def PC/os2emx/python39.def
$ git mv -f PC/python38stub.def PC/python39stub.def
$ git mv -f PC/python38gen.py PC/python39gen.py
Commit these changes to the main branch:
$ git status
$ git add ...
$ git commit -m 'Bump to 3.9.0a0'
Do another git status in this directory. You should not see any files. I.e. you better not have any uncommitted
changes in your working directory.
Commit and push to the main repo.
# Do a dry run first.
# For feature pre-releases prior to a **new branch** release,
# i.e. a feature alpha release:
$ git push --dry-run --tags git@github.com:python/cpython.git main
# If it looks OK, take the plunge. There's no going back!
$ git push --tags git@github.com:python/cpython.git main
# For a **new branch** release, i.e. first beta:
$ git push --dry-run --tags git@github.com:python/cpython.git 3.X
$ git push --dry-run --tags git@github.com:python/cpython.git main
# If it looks OK, take the plunge. There's no going back!
$ git push --tags git@github.com:python/cpython.git 3.X
$ git push --tags git@github.com:python/cpython.git main
# For all other releases:
$ git push --dry-run --tags git@github.com:python/cpython.git 3.X
# If it looks OK, take the plunge. There's no going back!
$ git push --tags git@github.com:python/cpython.git 3.X
If this is a new branch release, add a Branch protection rule
for the newly created branch (3.X). Look at the values for the previous
release branch (3.X-1) and use them as a template.
https://github.com/python/cpython/settings/branches/
Also, add a needs backport to 3.X label to the GitHub repo.
https://github.com/python/cpython/labels
You can now re-enable enforcement of branch settings against administrators
on GitHub. Go back to the Settings | Branches page:
https://github.com/python/cpython/settings/branches/
“Edit” the settings for the branch you’re releasing on.
Re-check the “Include administrators” box and press the
“Save changes” button at the bottom.
Now it’s time to twiddle the web site. Almost none of this is automated, sorry.
To do these steps, you must have the permission to edit the website. If you
don’t have that, ask someone on pydotorg@python.org for the proper
permissions. (Or ask Ewa, who coordinated the effort for the new website
with RevSys.)
Log in to https://www.python.org/admin .
Create a new “release” for the release. Currently “Releases” are
sorted under “Downloads”. The easiest thing is probably to copy fields from an existing
Python release “page”, editing as you go.
You can use Markdown or
ReStructured Text
to describe your release. The former is less verbose, while the latter has nifty
integration for things like referencing PEPs.
Leave the “Release page” field on the form empty.
“Save” the release.
Populate the release with the downloadable files. Your friend and mine, Georg Brandl, made a lovely tool
called “add-to-pydotorg.py”. You can find it in the
“release” tree (next to “release.py”). You run the
tool on downloads.nyc1.psf.io, like this:
$ AUTH_INFO=<username>:<python.org-api-key> python add-to-pydotorg.py <version>
This walks the correct download directory for <version>,
looks for files marked with <version>, and populates
the “Release Files” for the correct “release” on the web
site with these files. Note that it clears the “Release Files”
for the relevant version each time it’s run. You may run
it from any directory you like, and you can run it as
many times as you like if the files happen to change.
Keep a copy in your home directory on dl-files and
keep it fresh.
If new types of files are added to the release, someone will need to
update add-to-pydotorg.py so it recognizes these new files.
(It’s best to update add-to-pydotorg.py when file types
are removed, too.)
The script will also sign any remaining files that were not
signed with Sigstore until this point. Again, if this happens,
do use your @python.org address for this process. More info:
https://www.python.org/download/sigstore/
In case the CDN already cached a version of the Downloads page
without the files present, you can invalidate the cache using:
$ curl -X PURGE https://www.python.org/downloads/release/python-XXX/
If this is a final release:
Add the new version to the Python Documentation by Version
page https://www.python.org/doc/versions/ and
remove the current version from any ‘in development’ section.
For 3.X.Y, edit all the previous X.Y releases’ page(s) to
point to the new release. This includes the content field of the
Downloads -> Releases entry for the release:
Note: Python 3.x.(y-1) has been superseded by
`Python 3.x.y </downloads/release/python-3xy/>`_.
And, for those releases having separate release page entries
(phasing these out?), update those pages as well,
e.g. download/releases/3.x.y:
Note: Python 3.x.(y-1) has been superseded by
`Python 3.x.y </download/releases/3.x.y/>`_.
Update the “Current Pre-release Testing Versions” web page. There’s a page that lists all the currently-in-testing versions
of Python:
https://www.python.org/download/pre-releases/
Every time you make a release, one way or another you’ll
have to update this page:
If you’re releasing a version before 3.x.0,
you should add it to this page, removing the previous pre-release
of version 3.x as needed.
If you’re releasing 3.x.0 final, you need to remove the pre-release
version from this page.
This is in the “Pages” category on the Django-based website, and finding
it through that UI is kind of a chore. However! If you’re already logged
in to the admin interface (which, at this point, you should be), Django
will helpfully add a convenient “Edit this page” link to the top of the
page itself. So you can simply follow the link above, click on the
“Edit this page” link, and make your changes as needed. How convenient!
If appropriate, update the “Python Documentation by Version” page:
https://www.python.org/doc/versions/
This lists all releases of Python by version number and links to their
static (not built daily) online documentation. There’s a list at the
bottom of in-development versions, which is where all alphas/betas/RCs
should go. And yes you should be able to click on the link above then
press the shiny, exciting “Edit this page” button.
Write the announcement on https://discuss.python.org/. This is the
fuzzy bit because not much can be automated. You can use an earlier
announcement as a template, but edit it for content!
Once the announcement is up on Discourse, send an equivalent to the
following mailing lists:
python-list@python.org
python-announce@python.org
python-dev@python.org
Also post the announcement to
The Python Insider blog.
To add a new entry, go to
your Blogger home page, here.
Update any release PEPs (e.g. 719) with the release dates.
Update the labels on https://github.com/python/cpython/issues:
Flip all the deferred-blocker issues back to release-blocker
for the next release.
Add version 3.X+1 when version 3.X enters alpha.
Change non-doc feature requests to version 3.X+1 when version 3.X
enters beta.
Update issues from versions that your release makes
unsupported to the next supported version.
Review open issues, as this might find lurking showstopper bugs,
besides reminding people to fix the easy ones they forgot about.
You can delete the remote release clone branch from your repo clone.
If this is a new branch release, you will need to ensure various
pieces of the development infrastructure are updated for the new branch.
These include:
Update the issue tracker for the new branch: add the new version to
the versions list.
Update the devguide to reflect the new branches and versions.
Create a PR to update the supported releases table on the
downloads page.
(See https://github.com/python/pythondotorg/issues/1302)
Ensure buildbots are defined for the new branch (contact Łukasz
or Zach Ware).
Ensure the various GitHub bots are updated, as needed, for the
new branch, in particular, make sure backporting to the new
branch works (contact core-workflow team)
https://github.com/python/core-workflow/issues
Review the most recent commit history for the main and new release
branches to identify and backport any merges that might have been made
to the main branch during the release engineering phase and that
should be in the release branch.
Verify that CI is working for new PRs for the main and new release
branches and that the release branch is properly protected (no direct
pushes, etc).
Verify that the on-line docs are building properly (this may take up to
24 hours for a complete build on the web site).
What Next?
Verify! Pretend you’re a user: download the files from python.org, and
make Python from it. This step is too easy to overlook, and on several
occasions we’ve had useless release files. Once a general server problem
caused mysterious corruption of all files; once the source tarball got
built incorrectly; more than once the file upload process on SF truncated
files; and so on.
Rejoice. Drink. Be Merry. Write a PEP like this one. Or be
like unto Guido and take A Vacation.
You’ve just made a Python release!
Moving to End-of-life
Under current policy, a release branch normally reaches end-of-life status
5 years after its initial release. The policy is discussed in more detail
in the Python Developer’s Guide.
When end-of-life is reached, there are a number of tasks that need to be
performed either directly by you as release manager or by ensuring someone
else does them. Some of those tasks include:
Optionally making a final release to publish any remaining unreleased
changes.
Freeze the state of the release branch by creating a tag of its current HEAD
and then deleting the branch from the cpython repo. The current HEAD should
be at or beyond the final security release for the branch:
git fetch upstream
git tag --sign -m 'Final head of the former 3.3 branch' 3.3 upstream/3.3
git push upstream refs/tags/3.3
If all looks good, delete the branch. This may require the assistance of
someone with repo administrator privileges:
git push upstream --delete 3.3  # or perform from GitHub Settings page
Remove the release from the list of “Active Python Releases” on the Downloads
page. To do this, log in to the admin page for python.org, navigate to Boxes,
and edit the downloads-active-releases entry. Simply strip out the relevant
paragraph of HTML for your release. (You’ll probably have to do the curl -X PURGE
trick to purge the cache if you want to confirm you made the change correctly.)
Add a retired notice to each release page on python.org for the retired branch.
For example:
https://www.python.org/downloads/release/python-337/
https://www.python.org/downloads/release/python-336/
In the developer’s guide, add the branch to the recent end-of-life branches
list (https://devguide.python.org/devcycle/#end-of-life-branches) and update
or remove references to the branch elsewhere in the devguide.
Retire the release from the issue tracker. Tasks include:
remove version label from list of versions
remove the “needs backport to” label for the retired version
review and dispose of open issues marked for this branch
Announce the branch retirement in the usual places:
discuss.python.org
mailing lists (python-dev, python-list, python-announcements)
Python Dev blog
Enjoy your retirement and bask in the glow of a job well done!
Windows Notes
Windows has an MSI installer, various flavors of Windows have
“special limitations”, and the Windows installer also packs
precompiled “foreign” binaries (Tcl/Tk, expat, etc).
The installer is tested as part of the Azure Pipeline. In the past,
those steps were performed manually. We’re keeping this for posterity.
Concurrent with uploading the installer, the WE installs Python
from it twice: once into the default directory suggested by the
installer, and later into a directory with embedded spaces in its
name. For each installation, the WE runs the full regression suite
from a DOS box, both with and without -O. For maintenance
releases, the WE also tests whether upgrade installations succeed.
The WE also tries every shortcut created under Start -> Menu -> the
Python group. When trying IDLE this way, you need to verify that
Help -> Python Documentation works. When trying pydoc this way
(the “Module Docs” Start menu entry), make sure the “Start
Browser” button works, and make sure you can search for a random
module (like “random” <wink>) and then that the “go to selected”
button works.
It’s amazing how much can go wrong here – and even more amazing
how often last-second checkins break one of these things. If
you’re “the Windows geek”, keep in mind that you’re likely the
only person routinely testing on Windows, and that Windows is
simply a mess.
Repeat the testing for each target architecture. Try both an
Admin and a plain User (not Power User) account.
Copyright
This document has been placed in the public domain.
| Active | PEP 101 – Doing Python Releases 101 | Informational | Making a Python release is a thrilling and crazy process. You’ve heard
the expression “herding cats”? Imagine trying to also saddle those
purring little creatures up, and ride them into town, with some of their
buddies firmly attached to your bare back, anchored by newly sharpened
claws. At least they’re cute, you remind yourself. |
PEP 102 – Doing Python Micro Releases
Author:
Anthony Baxter <anthony at interlink.com.au>,
Barry Warsaw <barry at python.org>,
Guido van Rossum <guido at python.org>
Status:
Superseded
Type:
Informational
Created:
09-Jan-2002
Post-History:
Superseded-By:
101
Table of Contents
Replacement Note
Abstract
How to Make A Release
What Next?
Final Release Notes
Windows Notes
Copyright
Replacement Note
Although the size of the to-do list in this PEP is much less scary
than that in PEP 101, it turns out not to be enough justification
for the duplication of information, and with it, the danger of one
of the copies becoming out of date. Therefore, this PEP is not
maintained anymore, and micro releases are fully covered by PEP 101.
Abstract
Making a Python release is an arduous process that takes a
minimum of half a day’s work even for an experienced releaser.
Until recently, most – if not all – of that burden was borne by
Guido himself. But several recent releases have been performed by
other folks, so this PEP attempts to collect, in one place, all
the steps needed to make a Python bugfix release.
The major Python release process is covered in PEP 101 - this PEP
is just PEP 101, trimmed down to only include the bits that are
relevant for micro releases, a.k.a. patch, or bug fix releases.
It is organized as a recipe and you can actually print this out and
check items off as you complete them.
How to Make A Release
Here are the steps taken to make a Python release. Some steps are
more fuzzy than others because there’s little that can be
automated (e.g. writing the NEWS entries). Where a step is
usually performed by An Expert, the name of that expert is given.
Otherwise, assume the step is done by the Release Manager (RM),
the designated person performing the release. Almost every place
the RM is mentioned below, this step can also be done by the BDFL
of course!
XXX: We should include a dependency graph to illustrate the steps
that can be taken in parallel, or those that depend on other
steps.
We use the following conventions in the examples below. Where a
release number is given, it is of the form X.Y.MaA, e.g. 2.1.2c1
for Python 2.1.2 release candidate 1, where “a” == alpha, “b” ==
beta, “c” == release candidate. Final releases are tagged with
“releaseXYZ” in CVS. The micro releases are made from the
maintenance branch of the major release, e.g. Python 2.1.2 is made
from the release21-maint branch.
Send an email to python-dev@python.org indicating the release is
about to start.
Put a freeze on check ins into the maintenance branch. At this
point, nobody except the RM should make any commits to the branch
(or his duly assigned agents, i.e. Guido the BDFL, Fred Drake for
documentation, or Thomas Heller for Windows). If the RM screwed up
and some desperate last minute change to the branch is
necessary, it can mean extra work for Fred and Thomas. So try to
avoid this!
On the branch, change Include/patchlevel.h in two places, to
reflect the new version number you’ve just created. You’ll want
to change the PY_VERSION macro, and one or several of the
version subpart macros just above PY_VERSION, as appropriate.
Change the “%define version” line of Misc/RPM/python-2.3.spec to the
same string as PY_VERSION was changed to above. E.g.:
%define version 2.3.1
You also probably want to reset the %define release line
to ‘1pydotorg’ if it’s not already that.
If you’re changing the version number for Python (e.g. from
Python 2.1.1 to Python 2.1.2), you also need to update the
README file, which has a big banner at the top proclaiming its
identity. Don’t do this if you’re just releasing a new alpha or
beta release, but /do/ do this if you’re releasing a new micro,
minor or major release.
The LICENSE file also needs to be changed, due to several
references to the release number. As for the README file, changing
these is necessary for a new micro, minor or major release. The LICENSE file contains a table that describes the legal
heritage of Python; you should add an entry for the X.Y.Z
release you are now making. You should update this table in the
LICENSE file on the CVS trunk too.
When the year changes, copyright legends need to be updated in
many places, including the README and LICENSE files.
For the Windows build, additional files have to be updated. PCbuild/BUILDno.txt contains the Windows build number; see the
instructions in this file for how to change it. Saving the project
file PCbuild/pythoncore.dsp results in a change to
PCbuild/pythoncore.dsp as well.
PCbuild/python20.wse sets up the Windows installer version
resource (displayed when you right-click on the installer .exe
and select Properties), and also contains the Python version
number.
(Before version 2.3.2, it was required to manually edit
PC/python_nt.rc, this step is now automated by the build
process.)
After starting the process, the most important thing to do next
is to update the Misc/NEWS file. Thomas will need this in order to
do the Windows release and he likes to stay up late. This step
can be pretty tedious, so it’s best to get to it immediately
after making the branch, or even before you’ve made the branch.
The sooner the better (but again, watch for new checkins up
until the release is made!) Add high-level items new to this release. E.g. if we’re
releasing 2.2a3, there must be a section at the top of the file
explaining “What’s new in Python 2.2a3”. It will be followed by
a section entitled “What’s new in Python 2.2a2”.
Note that you /hope/ that as developers add new features to the
trunk, they’ve updated the NEWS file accordingly. You can’t be
positive, so double check. If you’re a Unix weenie, it helps to
verify with Thomas about changes on Windows, and Jack Jansen
about changes on the Mac.
This command should help you (but substitute the correct -r tag!):
% cvs log -rr22a1: | python Tools/scripts/logmerge.py > /tmp/news.txt
IOW, you’re printing out all the cvs log entries from the
previous release until now. You can then troll through the
news.txt file looking for interesting things to add to NEWS.
Check your NEWS changes into the maintenance branch. It’s easy
to forget to update the release date in this file!
Check in any changes to IDLE’s NEWS.txt. Update the header in
Lib/idlelib/NEWS.txt to reflect its release version and date.
Update the IDLE version in Lib/idlelib/idlever.py to match.
Once the release process has started, the documentation needs to
be built and posted on python.org according to the instructions
in PEP 101.Note that Fred is responsible both for merging doc changes from
the trunk to the branch AND for merging any branch changes from
the branch to the trunk during the cleaning up phase.
Basically, if it’s in Doc/ Fred will take care of it.
Thomas compiles everything with MSVC 6.0 SP5, and moves the
python23.chm file into the src/chm directory. The installer
executable is then generated with Wise Installation System. The installer includes the MSVC 6.0 runtime in the files
MSVCRT.DLL and MSVCIRT.DLL. It leads to disaster if these files
are taken from the system directory of the machine where the
installer is built, instead it must be absolutely made sure that
these files come from the VCREDIST.EXE redistributable package
contained in the MSVC SP5 CD. VCREDIST.EXE must be unpacked
with winzip, and the Wise Installation System prompts for the
directory.
After building the installer, it should be opened with winzip,
and the MS dlls extracted again and checked for the same version
number as those unpacked from VCREDIST.EXE.
Thomas uploads this file to the starship. He then sends the RM
a notice which includes the location and MD5 checksum of the
Windows executable.
Note that Thomas’s creation of the Windows executable may generate
a few more commits on the branch. Thomas will be responsible for
merging Windows-specific changes from trunk to branch, and from
branch to trunk.
Sean performs his Red Hat magic, generating a set of RPMs. He
uploads these files to python.org. He then sends the RM a notice
which includes the location and MD5 checksum of the RPMs.
It’s Build Time! Now, you’re ready to build the source tarball. First cd to your
working directory for the branch. E.g.
% cd …/python-22a3
Do a “cvs update” in this directory. Do NOT include the -A flag! You should not see any “M” files, but you may see several “P”
and/or “U” files. I.e. you better not have any uncommitted
changes in your working directory, but you may pick up some of
Fred’s or Thomas’s last minute changes.
Now tag the branch using a symbolic name like “rXYMaZ”,
e.g. r212:
% cvs tag r212
Be sure to tag only the python/dist/src subdirectory of the
Python CVS tree!
Change to a neutral directory, i.e. one in which you can do a
fresh, virgin, cvs export of the branch. You will be creating a
new directory at this location, to be named “Python-X.Y.M”. Do
a CVS export of the tagged branch.
% cd ~
% cvs -d cvs.sf.net:/cvsroot/python export -rr212 \
-d Python-2.1.2 python/dist/src
Generate the tarball. Note that we’re not using the ‘z’ option
on the tar command because 1) that’s only supported by GNU tar
as far as we know, and 2) we’re going to max out the compression
level, which isn’t a supported option. We generate both tar.gz and
tar.bz2 formats, as the latter is about 1/6th smaller.
% tar -cf - Python-2.1.2 | gzip -9 > Python-2.1.2.tgz
% tar -cf - Python-2.1.2 | bzip2 -9 > Python-2.1.2.tar.bz2
Calculate the MD5 checksum of the tgz and tar.bz2 files you
just created:
% md5sum Python-2.1.2.tgz
Note that if you don’t have the md5sum program, there is a
Python replacement in the Tools/scripts/md5sum.py file.
Create GPG signatures for each of the files.
% gpg -ba Python-2.1.2.tgz
% gpg -ba Python-2.1.2.tar.bz2
% gpg -ba Python-2.1.2.exe
Now you want to perform the very important step of checking the
tarball you just created, to make sure a completely clean,
virgin build passes the regression test. Here are the best
steps to take:
% cd /tmp
% tar zxvf ~/Python-2.1.2.tgz
% cd Python-2.1.2
% ls
(Do things look reasonable?)
% ./configure
(Loads of configure output)
% make test
(Do all the expected tests pass?)
If the tests pass, then you can feel good that the tarball is
fine. If some of the tests fail, or anything else about the
freshly unpacked directory looks weird, you better stop now and
figure out what the problem is.
You need to upload the tgz and the exe file to creosote.python.org.
This step can take a long time depending on your network
bandwidth. scp both files from your own machine to creosote.
While you’re waiting, you can start twiddling the web pages to
include the announcement.
In the top of the python.org web site CVS tree, create a
subdirectory for the X.Y.Z release. You can actually copy an
earlier patch release’s subdirectory, but be sure to delete
the X.Y.Z/CVS directory and “cvs add X.Y.Z”, for example:
% cd .../pydotorg
% cp -r 2.2.2 2.2.3
% rm -rf 2.2.3/CVS
% cvs add 2.2.3
% cd 2.2.3
Edit the files for content: usually you can globally replace
X.Ya(Z-1) with X.YaZ. However, you’ll need to think about the
“What’s New?” section.
Copy the Misc/NEWS file to NEWS.txt in the X.Y.Z directory for
python.org; this contains the “full scoop” of changes to
Python since the previous release for this version of Python.
Copy the .asc GPG signatures you created earlier here as well.
Also, update the MD5 checksums.
Preview the web page by doing a “make” or “make install” (as
long as you’ve created a new directory for this release!)
Similarly, edit the ../index.ht file, i.e. the python.org home
page. In the Big Blue Announcement Block, move the paragraph
for the new version up to the top and boldify the phrase
“Python X.YaZ is out”. Edit for content, and preview locally,
but do NOT do a “make install” yet!
Now we’re waiting for the scp to creosote to finish. Da de da,
da de dum, hmm, hmm, dum de dum.
Once that’s done you need to go to creosote.python.org and move
all the files in place over there. Our policy is that every
Python version gets its own directory, but each directory may
contain several releases. We keep all old releases, moving them
into a “prev” subdirectory when we have a new release. So, there’s a directory called “2.2” which contains
Python-2.2a2.exe and Python-2.2a2.tgz, along with a “prev”
subdirectory containing Python-2.2a1.exe and Python-2.2a1.tgz.
So…
On creosote, cd to ~ftp/pub/python/X.Y creating it if
necessary.
Move the previous release files to a directory called “prev”
creating the directory if necessary (make sure the directory
has g+ws bits on). If this is the first alpha release of a
new Python version, skip this step.
Move the .tgz file and the .exe file to this directory. Make
sure they are world readable. They should also be group
writable, and group-owned by webmaster.
md5sum the files and make sure they got uploaded intact.
Update the X.Y/bugs.ht file if necessary. It is best to get
BDFL input for this step.
Go up to the parent directory (i.e. the root of the web page
hierarchy) and do a “make install” there. Your release is now
live!
Now it’s time to write the announcement for the mailing lists.
This is the fuzzy bit because not much can be automated. You
can use one of Guido’s earlier announcements as a template, but
please edit it for content!
Once the announcement is ready, send it to the following
addresses:
python-list@python.org
python-announce@python.org
python-dev@python.org
Send a SourceForge News Item about the release. From the
project’s “menu bar”, select the “News” link; once in News,
select the “Submit” link. Type a suitable subject (e.g. “Python
2.2c1 released” :-) in the Subject box, add some text to the
Details box (at the very least including the release URL at
www.python.org and the fact that you’re happy with the release)
and click the SUBMIT button.
Feel free to remove any old news items.
Now it’s time to do some cleanup. These steps are very important!
Edit the file Include/patchlevel.h so that the PY_VERSION
string says something like “X.YaZ+”. Note the trailing ‘+’
indicating that the trunk is going to be moving forward with
development. E.g. the line should look like:
#define PY_VERSION "2.1.2+"
Make sure that the other PY_ version macros contain the
correct values. Commit this change.
For the extra paranoid, do a completely clean test of the
release. This includes downloading the tarball from
www.python.org.
Make sure the md5 checksums match. Then unpack the tarball,
and do a clean make test.
% make distclean
% ./configure
% make test
This ensures that the regression test suite passes. If not, you
screwed up somewhere!
Step 5 …
Verify! This can be interleaved with Step 4. Pretend you’re a
user: download the files from python.org, and make Python from
it. This step is too easy to overlook, and on several occasions
we’ve had useless release files. Once a general server problem
caused mysterious corruption of all files; once the source tarball
got built incorrectly; more than once the file upload process on
SF truncated files; and so on.
What Next?
Rejoice. Drink. Be Merry. Write a PEP like this one. Or be
like unto Guido and take A Vacation.
You’ve just made a Python release!
Actually, there is one more step. You should turn over ownership
of the branch to Jack Jansen. All this means is that now he will
be responsible for making commits to the branch. He’s going to
use this to build the MacOS versions. He may send you information
about the Mac release that should be merged into the informational
pages on www.python.org. When he’s done, he’ll tag the branch
something like “rX.YaZ-mac”. He’ll also be responsible for
merging any Mac-related changes back into the trunk.
Final Release Notes
The Final release of any major release, e.g. Python 2.2 final, has
special requirements, specifically because it will be one of the
longest lived releases (i.e. betas don’t last more than a couple
of weeks, but final releases can last for years!).
For this reason we want to have a higher coordination between the
three major releases: Windows, Mac, and source. The Windows and
source releases benefit from the close proximity of the respective
release-bots. But the Mac-bot, Jack Jansen, is 6 hours away. So
we add this extra step to the release process for a final
release:
Hold up the final release until Jack approves, or until we
lose patience <wink>.
The python.org site also needs some tweaking when a new bugfix release
is issued.
The documentation should be installed at doc/<version>/.
Add a link from doc/<previous-minor-release>/index.ht to the
documentation for the new version.
All older doc/<old-release>/index.ht files should be updated to
point to the documentation for the new version.
/robots.txt should be modified to prevent the old version’s
documentation from being crawled by search engines.
Windows Notes
Windows has a GUI installer, various flavors of Windows have
“special limitations”, and the Windows installer also packs
precompiled “foreign” binaries (Tcl/Tk, expat, etc). So Windows
testing is tiresome but very necessary.
Concurrent with uploading the installer, Thomas installs Python
from it twice: once into the default directory suggested by the
installer, and later into a directory with embedded spaces in its
name. For each installation, he runs the full regression suite
from a DOS box, both with and without -O.
He also tries every shortcut created under Start -> Menu -> the
Python group. When trying IDLE this way, you need to verify that
Help -> Python Documentation works. When trying pydoc this way
(the “Module Docs” Start menu entry), make sure the “Start
Browser” button works, and make sure you can search for a random
module (Thomas uses “random” <wink>) and then that the “go to
selected” button works.
It’s amazing how much can go wrong here – and even more amazing
how often last-second checkins break one of these things. If
you’re “the Windows geek”, keep in mind that you’re likely the
only person routinely testing on Windows, and that Windows is
simply a mess.
Repeat all of the above on at least one flavor of Win9x, and one
of NT/2000/XP. On NT/2000/XP, try both an Admin and a plain User
(not Power User) account.
WRT Step 5 above (verify the release media), since by the time
release files are ready to download Thomas has generally run many
Windows tests on the installer he uploaded, he usually doesn’t do
anything for Step 5 except a full byte-comparison (“fc /b” if
using a Windows shell) of the downloaded file against the file he
uploaded.
Copyright
This document has been placed in the public domain.
| Superseded | PEP 102 – Doing Python Micro Releases | Informational | Making a Python release is an arduous process that takes a
minimum of half a day’s work even for an experienced releaser.
Until recently, most – if not all – of that burden was borne by
Guido himself. But several recent releases have been performed by
other folks, so this PEP attempts to collect, in one place, all
the steps needed to make a Python bugfix release. |
PEP 103 – Collecting information about git
Author:
Oleg Broytman <phd at phdru.name>
Status:
Withdrawn
Type:
Informational
Created:
01-Jun-2015
Post-History:
12-Sep-2015
Table of Contents
Withdrawal
Abstract
Documentation
Documentation for starters
Advanced documentation
Offline documentation
Quick start
Download and installation
Initial configuration
Examples in this PEP
Branches and branches
Remote repositories and remote branches
Updating local and remote-tracking branches
Fetch and pull
Push
Tags
Private information
Commit editing and caveats
Undo
git checkout: restore file’s content
git reset: remove (non-pushed) commits
Unstaging
git reflog: reference log
git revert: revert a commit
One thing that cannot be undone
Merge or rebase?
Null-merges
Branching models
Advanced configuration
Line endings
Useful assets
Advanced topics
Staging area
Root
ReReRe
Database maintenance
Tips and tricks
Command-line options and arguments
bash/zsh completion
bash/zsh prompt
SSH connection sharing
git on server
From Mercurial to git
Git and GitHub
Copyright
Withdrawal
This PEP was withdrawn as it’s too generic and doesn’t really deal
with Python development. It is no longer updated.
The content was moved to Python Wiki. Make further updates in the
wiki.
Abstract
This Informational PEP collects information about git. There is, of
course, a lot of documentation for git, so the PEP concentrates on
more complex (and more related to Python development) issues,
scenarios and examples.
The plan is to extend the PEP in the future collecting information
about equivalence of Mercurial and git scenarios to help migrating
Python development from Mercurial to git.
The author of the PEP doesn’t currently plan to write a Process PEP on
migrating Python development from Mercurial to git.
Documentation
Git is accompanied with a lot of documentation, both online and
offline.
Documentation for starters
Git Tutorial: part 1,
part 2.
Git User’s manual.
Everyday GIT With 20 Commands Or So.
Git workflows.
Advanced documentation
Git Magic,
with a number of translations.
Pro Git. The Book about git. Buy it at
Amazon or download in PDF, mobi, or ePub form. It has translations to
many different languages. Download Russian translation from GArik.
Git Wiki.
Git Buch (German).
Offline documentation
Git has builtin help: run git help $TOPIC. For example, run
git help git or git help help.
Quick start
Download and installation
Unix users: download and install using your package manager.
Microsoft Windows: download git-for-windows.
MacOS X: use git installed with XCode or download from MacPorts or
git-osx-installer or
install git with Homebrew: brew install git.
git-cola (repository) is a Git GUI written in
Python and GPL licensed. Linux, Windows, MacOS X.
TortoiseGit is a Windows Shell Interface
to Git based on TortoiseSVN; open source.
Initial configuration
This simple code often appears in documentation, but it is
important, so let’s repeat it here. Git stores author and committer
names/emails in every commit, so configure your real name and
preferred email:
$ git config --global user.name "User Name"
$ git config --global user.email user.name@example.org
Examples in this PEP
Examples of git commands in this PEP use the following approach. It is
supposed that you, the user, work with a local repository named
python that has an upstream remote repo named origin. Your
local repo has two branches v1 and master. For most examples
the currently checked out branch is master. That is, it’s assumed
you have done something like this:
$ git clone https://git.python.org/python.git
$ cd python
$ git branch v1 origin/v1
The first command clones remote repository into local directory
python, creates a new local branch master, sets
remotes/origin/master as its upstream remote-tracking branch and
checks it out into the working directory.
The last command creates a new local branch v1 and sets
remotes/origin/v1 as its upstream remote-tracking branch.
The same result can be achieved with commands:
$ git clone -b v1 https://git.python.org/python.git
$ cd python
$ git checkout --track origin/master
The last command creates a new local branch master, sets
remotes/origin/master as its upstream remote-tracking branch and
checks it out into the working directory.
Branches and branches
Git terminology can be a bit misleading. Take, for example, the term
“branch”. In git it has two meanings. A branch is a directed line of
commits (possibly with merges). And a branch is a label or a pointer
assigned to a line of commits. It is important to distinguish when you
talk about commits and when about their labels. Lines of commits are
by themselves unnamed and usually only lengthen and merge.
Labels, on the other hand, can be created, moved, renamed and deleted
freely.
Remote repositories and remote branches
Remote-tracking branches are branches (pointers to commits) in your
local repository. They are there for git (and for you) to remember
what branches and commits have been pulled from and pushed to what
remote repos (you can pull from and push to many remotes).
Remote-tracking branches live under remotes/$REMOTE namespaces,
e.g. remotes/origin/master.
To see the status of remote-tracking branches run:
$ git branch -rv
To see local and remote-tracking branches (and tags) pointing to
commits:
$ git log --decorate
You never do your own development on remote-tracking branches. You
create a local branch that has a remote branch as upstream and do
development on that local branch. On push git pushes commits to the
remote repo and updates remote-tracking branches, on pull git fetches
commits from the remote repo, updates remote-tracking branches and
fast-forwards, merges or rebases local branches.
When you do an initial clone like this:
$ git clone -b v1 https://git.python.org/python.git
git clones remote repository https://git.python.org/python.git to
directory python, creates a remote named origin, creates
remote-tracking branches, creates a local branch v1, configures it
to track upstream remotes/origin/v1 branch and checks out v1 into
the working directory.
Some commands, like git status --branch and git branch --verbose,
report the difference between local and remote branches.
Please remember they only do comparison with remote-tracking branches
in your local repository, and the state of those remote-tracking
branches can be outdated. To update remote-tracking branches you
either fetch and merge (or rebase) commits from the remote repository
or update remote-tracking branches without updating local branches.
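For example, to see how your local branches relate to their (possibly
stale) remote-tracking branches you could run:
$ git branch --verbose
$ git status --branch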
Updating local and remote-tracking branches
To update remote-tracking branches without updating local branches run
git remote update [$REMOTE...]. For example:
$ git remote update
$ git remote update origin
Fetch and pull
There is a major difference between
$ git fetch $REMOTE $BRANCH
and
$ git fetch $REMOTE $BRANCH:$BRANCH
The first command fetches commits from the named $BRANCH in the
$REMOTE repository that are not in your repository, updates
remote-tracking branch and leaves the id (the hash) of the head commit
in file .git/FETCH_HEAD.
The second command fetches commits from the named $BRANCH in the
$REMOTE repository that are not in your repository and updates both
the local branch $BRANCH and its upstream remote-tracking branch. But
it refuses to update branches in case of non-fast-forward. And it
refuses to update the current branch (currently checked out branch,
where HEAD is pointing to).
The first command is used internally by git pull.
$ git pull $REMOTE $BRANCH
is equivalent to
$ git fetch $REMOTE $BRANCH
$ git merge FETCH_HEAD
Certainly, $BRANCH in that case should be your current branch. If you
want to merge a different branch into your current branch first update
that non-current branch and then merge:
$ git fetch origin v1:v1 # Update v1
$ git pull --rebase origin master # Update the current branch master
# using rebase instead of merge
$ git merge v1
If you have not yet pushed commits on v1, though, the scenario has
to become a bit more complex. Git refuses to update
non-fast-forwardable branch, and you don’t want to do force-pull
because that would remove your non-pushed commits and you would need
to recover. So you want to rebase v1 but you cannot rebase
non-current branch. Hence, checkout v1 and rebase it before
merging:
$ git checkout v1
$ git pull --rebase origin v1
$ git checkout master
$ git pull --rebase origin master
$ git merge v1
It is possible to configure git to make it fetch/pull a few branches
or all branches at once, so you can simply run
$ git pull origin
or even
$ git pull
Default remote repository for fetching/pulling is origin. Default
set of references to fetch is calculated using matching algorithm: git
fetches all branches having the same name on both ends.
Push
Pushing is a bit simpler. There is only one command push. When you
run
$ git push origin v1 master
git pushes local v1 to remote v1 and local master to remote master.
The same as:
$ git push origin v1:v1 master:master
Git pushes commits to the remote repo and updates remote-tracking
branches. Git refuses to push commits that aren’t fast-forwardable.
You can force-push anyway, but please remember - you can force-push to
your own repositories but don’t force-push to public or shared repos.
If you find git refuses to push commits that aren’t fast-forwardable,
better fetch and merge commits from the remote repo (or rebase your
commits on top of the fetched commits), then push. Only force-push if
you know what you do and why you do it. See the section Commit
editing and caveats below.
It is possible to configure git to make it push a few branches or all
branches at once, so you can simply run
$ git push origin
or even
$ git push
Default remote repository for pushing is origin. Default set of
references to push in git before 2.0 is calculated using matching
algorithm: git pushes all branches having the same name on both ends.
Default set of references to push in git 2.0+ is calculated using
simple algorithm: git pushes the current branch back to its
@{upstream}.
To configure git before 2.0 to the new behaviour run:
$ git config push.default simple
To configure git 2.0+ to the old behaviour run:
$ git config push.default matching
Git doesn’t allow pushing a branch if it’s the current branch in the
remote non-bare repository: git refuses to update remote working
directory. You really should push only to bare repositories. For
non-bare repositories git prefers pull-based workflow.
When you want to deploy code on a remote host and can only use push
(because your workstation is behind a firewall and you cannot pull
from it) you do that in two steps using two repositories: you push
from the workstation to a bare repo on the remote host, ssh to the
remote host and pull from the bare repo to a non-bare deployment repo.
That changed in git 2.3, but see the blog post
for caveats; in 2.4 the push-to-deploy feature was further improved.
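A rough sketch of that two-step deployment (the host and directory names
are invented for illustration):
$ git push user@remote.example.org:/srv/app.git master   # from the workstation
$ ssh user@remote.example.org
$ cd /srv/app                # non-bare deployment repo on the remote host
$ git pull /srv/app.git master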
Tags
Git automatically fetches tags that point to commits being fetched
during fetch/pull. To fetch all tags (and commits they point to) run
git fetch --tags origin. To fetch some specific tags fetch them
explicitly:
$ git fetch origin tag $TAG1 tag $TAG2...
For example:
$ git fetch origin tag 1.4.2
$ git fetch origin v1:v1 tag 2.1.7
Git doesn’t automatically push tags. That allows you to have private
tags. To push tags list them explicitly:
$ git push origin tag 1.4.2
$ git push origin v1 master tag 2.1.7
Or push all tags at once:
$ git push --tags origin
Don’t move tags with git tag -f or remove tags with git tag -d
after they have been published.
Private information
When cloning/fetching/pulling/pushing git copies only database objects
(commits, trees, files and tags) and symbolic references (branches and
lightweight tags). Everything else is private to the repository and
never cloned, updated or pushed. It’s your config, your hooks, your
private exclude file.
If you want to distribute hooks, copy them to the working tree, add,
commit, push and instruct the team to update and install the hooks
manually.
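A minimal sketch of that manual distribution (the hooks/ directory and the
pre-commit hook are just examples):
$ mkdir hooks
$ cp .git/hooks/pre-commit hooks/pre-commit
$ git add hooks
$ git commit -m "Add shared hooks"
$ git push origin master
Each team member then pulls and installs the hook by hand:
$ cp hooks/pre-commit .git/hooks/pre-commit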
Commit editing and caveats
A warning not to edit published (pushed) commits also appears in
documentation but it’s repeated here anyway as it’s very important.
It is possible to recover from a forced push but it’s a PITA for the
entire team. Please avoid it.
To see what commits have not been published yet compare the head of the
branch with its upstream remote-tracking branch:
$ git log origin/master.. # from origin/master to HEAD (of master)
$ git log origin/v1..v1 # from origin/v1 to the head of v1
For every branch that has an upstream remote-tracking branch git
maintains an alias @{upstream} (short version @{u}), so the commands
above can be given as:
$ git log @{u}..
$ git log v1@{u}..v1
To see the status of all branches:
$ git branch -avv
To compare the status of local branches with a remote repo:
$ git remote show origin
Read how to recover from upstream rebase.
It is in git help rebase.
On the other hand, don’t be too afraid about commit editing. You can
safely edit, reorder, remove, combine and split commits that haven’t
been pushed yet. You can even push commits to your own (backup) repo,
edit them later and force-push edited commits to replace what has
already been pushed. Not a problem until commits are in a public
or shared repository.
Undo
Whatever you do, don’t panic. Almost anything in git can be undone.
git checkout: restore file’s content
git checkout, for example, can be used to restore the content of
file(s) to that one of a commit. Like this:
git checkout HEAD~ README
The command restores the contents of the README file to the last but one
commit in the current branch. By default the commit ID is simply HEAD;
i.e. git checkout README restores README to the latest commit.
(Do not use git checkout to view the content of a file in a commit,
use git cat-file -p; e.g. git cat-file -p HEAD~:path/to/README).
git reset: remove (non-pushed) commits
git reset moves the head of the current branch. The head can be
moved to point to any commit but it’s often used to remove a commit or
a few (preferably, non-pushed ones) from the top of the branch - that
is, to move the branch backward in order to undo a few (non-pushed)
commits.
git reset has three modes of operation - soft, hard and mixed.
Default is mixed. ProGit explains the
difference very clearly. Bare repositories don’t have indices or
working trees so in a bare repo only soft reset is possible.
Unstaging
Mixed mode reset with a path or paths can be used to unstage changes -
that is, to remove from index changes added with git add for
committing. See The Book for details
about unstaging and other undo tricks.
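For example, using the README file from the examples above:
$ git add README            # the change is now staged (in the index)
$ git reset HEAD -- README  # unstage it; the working tree is left untouched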
git reflog: reference log
Removing commits with git reset or moving the head of a branch
sounds dangerous and it is. But there is a way to undo: another
reset back to the original commit. Git doesn’t remove commits
immediately; unreferenced commits (in git terminology they are called
“dangling commits”) stay in the database for some time (default is two
weeks) so you can reset back to it or create a new branch pointing to
the original commit.
For every move of a branch’s head - with git commit, git
checkout, git fetch, git pull, git rebase, git reset
and so on - git stores a reference log (reflog for short). For every
move git stores where the head was. Command git reflog can be used
to view (and manipulate) the log.
In addition to the moves of the head of every branch git stores the
moves of the HEAD - a symbolic reference that (usually) names the
current branch. HEAD is changed with git checkout $BRANCH.
By default git reflog shows the moves of the HEAD, i.e. the
command is equivalent to git reflog HEAD. To show the moves of the
head of a branch use the command git reflog $BRANCH.
So to undo a git reset lookup the original commit in git
reflog, verify it with git show or git log and run git
reset $COMMIT_ID. Git stores the move of the branch’s head in
reflog, so you can undo that undo later again.
In a more complex situation you’d want to move some commits along with
resetting the head of the branch. Cherry-pick them to the new branch.
For example, if you want to reset the branch master back to the
original commit but preserve two commits created in the current branch
do something like:
$ git branch save-master # create a new branch saving master
$ git reflog # find the original place of master
$ git reset $COMMIT_ID
$ git cherry-pick save-master~ save-master
$ git branch -D save-master # remove temporary branch
git revert: revert a commit
git revert reverts a commit or commits, that is, it creates a new
commit or commits that revert(s) the effects of the given commits.
It’s the only way to undo published commits (git commit --amend,
git rebase and git reset change the branch in
non-fast-forwardable ways so they should only be used for non-pushed
commits.)
There is a problem with reverting a merge commit. git revert can
undo the code created by the merge commit but it cannot undo the fact
of merge. See the discussion How to revert a faulty merge.
One thing that cannot be undone
Whatever you undo, there is one thing that cannot be undone -
overwritten uncommitted changes. Uncommitted changes don’t belong to
git so git cannot help preserving them.
Most of the time git warns you when you’re going to execute a command
that overwrites uncommitted changes. Git doesn’t allow you to switch
branches with git checkout if that would overwrite uncommitted changes.
It stops you when you’re going to
rebase with non-clean working tree. It refuses to pull new commits
over non-committed files.
But there are commands that do exactly that - overwrite files in the
working tree. Commands like git checkout $PATHs or git reset
--hard silently overwrite files including your uncommitted changes.
With that in mind you can understand the stance “commit early, commit
often”. Commit as often as possible. Commit on every save in your
editor or IDE. You can edit your commits before pushing - edit commit
messages, change commits, reorder, combine, split, remove. But save
your changes in git database, either commit changes or at least stash
them with git stash.
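For example, a quick way to save uncommitted changes before a risky
operation and restore them afterwards:
$ git stash       # save and remove uncommitted changes
$ git pull --rebase origin master
$ git stash pop   # reapply the saved changes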
Merge or rebase?
Internet is full of heated discussions on the topic: “merge or
rebase?” Most of them are meaningless. When a DVCS is being used in a
big team with a big and complex project with many branches there is
simply no way to avoid merges. So the question is reduced to
“whether to use rebase, and if yes - when to use rebase?” Considering
that it is very much recommended not to rebase published commits the
question is reduced even further: “whether to use rebase on
non-pushed commits?”
That small question is for the team to decide. To preserve the beauty
of linear history it’s recommended to use rebase when pulling, i.e. do
git pull --rebase or even configure automatic setup of rebase for
every new branch:
$ git config branch.autosetuprebase always
and configure rebase for existing branches:
$ git config branch.$NAME.rebase true
For example:
$ git config branch.v1.rebase true
$ git config branch.master.rebase true
After that git pull origin master becomes equivalent to git pull
--rebase origin master.
It is recommended to create new commits in a separate feature or topic
branch while using rebase to update the mainline branch. When the
topic branch is ready merge it into mainline. To avoid a tedious task
of resolving large number of conflicts at once you can merge the topic
branch to the mainline from time to time and switch back to the topic
branch to continue working on it. The entire workflow would be
something like:
$ git checkout -b issue-42 # create a new issue branch and switch to it
...edit/test/commit...
$ git checkout master
$ git pull --rebase origin master # update master from the upstream
$ git merge issue-42
$ git branch -d issue-42 # delete the topic branch
$ git push origin master
When the topic branch is deleted only the label is removed; the commits
stay in the database, and they are now merged into master:
o--o--o--o--o--M--< master - the mainline branch
    \         /
     --*--*--*       - the topic branch, now unnamed
The topic branch is deleted to avoid cluttering branch namespace with
small topic branches. Information on what issue was fixed or what
feature was implemented should be in the commit messages.
But even that small amount of rebasing could be too big in case of
long-lived merged branches. Imagine you’re doing work in both v1
and master branches, regularly merging v1 into master.
After some time you will have a lot of merge and non-merge commits in
master. Then you want to push your finished work to a shared
repository and find someone has pushed a few commits to v1. Now
you have a choice of two equally bad alternatives: either you fetch
and rebase v1 and then have to recreate all you work in master
(reset master to the origin, merge v1 and cherry-pick all
non-merge commits from the old master); or merge the new v1 and
lose the beauty of linear history.
Null-merges
Git has a builtin merge strategy for what Python core developers call
“null-merge”:
$ git merge -s ours v1 # null-merge v1 into master
Branching models
Git doesn’t assume any particular development model regarding
branching and merging. Some projects prefer to graduate patches from
the oldest branch to the newest, some prefer to cherry-pick commits
backwards, some use squashing (combining a number of commits into
one). Anything is possible.
There are a few examples to start with. git help workflows
describes how the very git authors develop git.
ProGit book has a few chapters devoted to branch management in
different projects: Git Branching - Branching Workflows and
Distributed Git - Contributing to a Project.
There is also a well-known article A successful Git branching model by Vincent
Driessen. It recommends a set of very detailed rules on creating and
managing mainline, topic and bugfix branches. To support the model the
author implemented git flow
extension.
Advanced configuration
Line endings
Git has builtin mechanisms to handle line endings between platforms
with different end-of-line styles. To allow git to do CRLF conversion
assign text attribute to files using .gitattributes.
For files that have to have specific line endings assign eol
attribute. For binary files the attribute is, naturally, binary.
For example:
$ cat .gitattributes
*.py text
*.txt text
*.png binary
/readme.txt eol=CRLF
To check what attributes git uses for files use git check-attr
command. For example:
$ git check-attr -a -- \*.py
Useful assets
GitAlias (repository) is a big collection of
aliases. A careful selection of aliases for frequently used commands
could save you a lot of keystrokes!
GitIgnore and
https://github.com/github/gitignore are collections of .gitignore
files for all kinds of IDEs and programming languages. Python
included!
pre-commit (repositories) is a framework for managing and
maintaining multi-language pre-commit hooks. The framework is written
in Python and has a lot of plugins for many programming languages.
Advanced topics
Staging area
Staging area aka index aka cache is a distinguishing feature of git.
Staging area is where git collects patches before committing them.
Separation between collecting patches and commit phases provides a
very useful feature of git: you can review collected patches before
commit and even edit them - remove some hunks, add new hunks and
review again.
To add files to the index use git add. Collecting patches before
committing means you need to do that for every change, not only to add
new (untracked) files. To simplify committing in case you just want to
commit everything without reviewing run git commit --all (or just
-a) - the command adds every changed tracked file to the index and
then commits. To commit a file or files regardless of patches collected
in the index run git commit [--only|-o] -- $FILE....
To add hunks of patches to the index use git add --patch (or just
-p). To remove collected files from the index use git reset HEAD
-- $FILE... To add/inspect/remove collected hunks use git add
--interactive (-i).
To see the diff between the index and the last commit (i.e., collected
patches) use git diff --cached. To see the diff between the
working tree and the index (i.e., uncollected patches) use just git
diff. To see the diff between the working tree and the last commit
(i.e., both collected and uncollected patches) run git diff HEAD.
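A short sequence tying these commands together (file.py is just a
placeholder name):
$ git add --patch file.py   # collect selected hunks into the index
$ git diff --cached         # review the collected patches
$ git diff                  # review what is still uncollected
$ git commit                # commit only the collected patches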
See WhatIsTheIndex and
IndexCommandQuickref in Git
Wiki.
Root
Git switches to the root (top-level directory of the project where
.git subdirectory exists) before running any command. Git
remembers, though, the directory that was current before the switch.
Some programs take into account the current directory. E.g., git
status shows file paths of changed and unknown files relative to the
current directory; git grep searches below the current directory;
git apply applies only those hunks from the patch that touch files
below the current directory.
But most commands run from the root and ignore the current directory.
Imagine, for example, that you have two work trees, one for the branch
v1 and the other for master. If you want to merge v1 from
a subdirectory inside the second work tree you must write commands as
if you’re in the top-level dir. Let’s take two work trees,
project-v1 and project, for example:
$ cd project/subdirectory
$ git fetch ../project-v1 v1:v1
$ git merge v1
Please note the path in git fetch ../project-v1 v1:v1 is
../project-v1 and not ../../project-v1 despite the fact that
we run the commands from a subdirectory, not from the root.
ReReRe
Rerere is a mechanism that helps to resolve repeated merge conflicts.
The most frequent source of recurring merge conflicts are topic
branches that are merged into mainline and then the merge commits are
removed; that’s often performed to test the topic branches and train
rerere; merge commits are removed to have clean linear history and
finish the topic branch with only one last merge commit.
Rerere works by remembering the states of tree before and after a
successful commit. That way rerere can automatically resolve conflicts
if they appear in the same files.
Rerere can be used manually with git rerere command but most often
it’s used automatically. Enable rerere with these commands in a
working tree:
$ git config rerere.enabled true
$ git config rerere.autoupdate true
You don’t need to turn rerere on globally - you don’t want rerere in
bare repositories or single-branch repositories; you only need rerere
in repos where you often perform merges and resolve merge conflicts.
See Rerere in The
Book.
Database maintenance
Git object database and other files/directories under .git require
periodic maintenance and cleanup. For example, commit editing leaves
unreferenced objects (dangling objects, in git terminology) behind, and these
objects should be pruned to avoid collecting cruft in the DB. The
command git gc is used for maintenance. Git automatically runs
git gc --auto as a part of some commands to do quick maintenance.
Users are recommended to run git gc --aggressive from time to
time; git help gc recommends running it every few hundred
changesets; for more intensive projects it should be something like
once a week, and less frequently (biweekly or monthly) for less
active projects.
git gc --aggressive not only removes dangling objects, it also
repacks object database into indexed and better optimized pack(s); it
also packs symbolic references (branches and tags). Another way to do
it is to run git repack.
There is a well-known message from Linus
Torvalds regarding “stupidity” of git gc --aggressive. The message
can safely be ignored now. It is old and outdated; git gc
--aggressive has become much better since then.
For those who still prefer git repack over git gc --aggressive
the recommended parameters are git repack -a -d -f --depth=20
--window=250. See this detailed experiment
for explanation of the effects of these parameters.
From time to time run git fsck [--strict] to verify integrity of
the database. git fsck may produce a list of dangling objects;
that’s not an error, just a reminder to perform regular maintenance.
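For example, an occasional maintenance session could look like:
$ git gc --aggressive
$ git fsck --strict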
Tips and tricks
Command-line options and arguments
git help cli
recommends not combining short options/flags. Most of the time
combining works: git commit -av works perfectly, but there are
situations when it doesn’t. E.g., git log -p -5 cannot be combined
as git log -p5.
Some options have arguments, some even have default arguments. In that
case the argument for such option must be spelled in a sticky way:
-Oarg, never -O arg because for an option that has a default
argument the latter means “use default value for option -O and
pass arg further to the option parser”. For example, git grep
has an option -O that passes a list of names of the found files to
a program; default program for -O is a pager (usually less),
but you can use your editor:
$ git grep -Ovim # but not -O vim
BTW, if git is instructed to use less as the pager (i.e., if pager
is not configured in git at all it uses less by default, or if it
gets less from GIT_PAGER or PAGER environment variables, or if it
was configured with git config [--global] core.pager less, or
less is used in the command git grep -Oless) git grep
passes +/$pattern option to less which is quite convenient.
Unfortunately, git grep doesn’t pass the pattern if the pager is
not exactly less, even if it’s less with parameters (something
like git config [--global] core.pager less -FRSXgimq); fortunately,
git grep -Oless always passes the pattern.
bash/zsh completion
It’s a bit hard to type git rebase --interactive --preserve-merges
HEAD~5 manually even for those who are happy to use command-line,
and this is where shell completion is of great help. Bash/zsh come
with programmable completion, often automatically installed and
enabled, so if you have bash/zsh and git installed, chances are you
are already done - just go and use it at the command-line.
If you don’t have necessary bits installed, install and enable
bash_completion package. If you want to upgrade your git completion to
the latest and greatest download necessary file from git contrib.
Git-for-windows comes with git-bash for which bash completion is
installed and enabled.
bash/zsh prompt
For command-line lovers shell prompt can carry a lot of useful
information. To include git information in the prompt use
git-prompt.sh.
Read the detailed instructions in the file.
Search the Net for “git prompt” to find other prompt variants.
SSH connection sharing
SSH connection sharing is a feature of OpenSSH and perhaps derivatives
like PuTTY. SSH connection sharing is a way to decrease ssh client
startup time by establishing one connection and reusing it for all
subsequent clients connecting to the same server. SSH connection
sharing can be used to speed up a lot of short ssh sessions like scp,
sftp, rsync and of course git over ssh. If you regularly
fetch/pull/push from/to remote repositories accessible over ssh then
using ssh connection sharing is recommended.
To turn on ssh connection sharing add something like this to your
~/.ssh/config:
Host *
ControlMaster auto
ControlPath ~/.ssh/mux-%r@%h:%p
ControlPersist 600
See OpenSSH wikibook and
search for
more information.
SSH connection sharing can be used at GitHub, GitLab and SourceForge
repositories, but please be advised that BitBucket doesn’t allow it
and forcibly closes master connection after a short inactivity period
so you will see errors like this from ssh: “Connection to bitbucket.org
closed by remote host.”
git on server
The simplest way to publish a repository or a group of repositories is
git daemon. The daemon provides anonymous access, by default it is
read-only. The repositories are accessible by git protocol (git://
URLs). Write access can be enabled but the protocol lacks any
authentication means, so it should be enabled only within a trusted
LAN. See git help daemon for details.
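A minimal sketch of exporting repositories read-only with the daemon (the
paths are assumptions):
$ git daemon --base-path=/srv/git --export-all
Clients can then clone over the git protocol, e.g. from
git://server.example.org/project.git.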
Git over ssh provides authentication and repo-level authorisation as
repositories can be made user- or group-writeable (see parameter
core.sharedRepository in git help config). If that’s too
permissive or too restrictive for some project’s needs there is a
wrapper gitolite that can
be configured to allow access with great granularity; gitolite is
written in Perl and has a lot of documentation.
Web interface to browse repositories can be created using gitweb or cgit. Both are CGI scripts (written in
Perl and C). In addition to web interface both provide read-only dumb
http access for git (http(s):// URLs). Klaus is a small and simple WSGI web
server that implements both web interface and git smart HTTP
transport; supports Python 2 and Python 3, performs syntax
highlighting.
There are also more advanced web-based development environments that
include ability to manage users, groups and projects; private,
group-accessible and public repositories; they often include issue
trackers, wiki pages, pull requests and other tools for development
and communication. Among these environments are Kallithea and pagure,
both are written in Python; pagure was written by Fedora developers
and is being used to develop some Fedora projects. GitPrep is yet another GitHub clone,
written in Perl. Gogs is written in Go.
GitBucket is
written in Scala.
And last but not least, GitLab. It’s
perhaps the most advanced web-based development environment for git.
Written in Ruby, community edition is free and open source (MIT
license).
From Mercurial to git
There are many tools to convert Mercurial repositories to git. The
most famous are, probably, hg-git and
fast-export (many years ago
it was known under the name hg2git).
But a better tool, perhaps the best, is git-remote-hg. It provides transparent
bidirectional (pull and push) access to Mercurial repositories from
git. Its author wrote a comparison of alternatives
that seems to be mostly objective.
To use git-remote-hg, install or clone it, add to your PATH (or copy
script git-remote-hg to a directory that’s already in PATH) and
prepend hg:: to Mercurial URLs. For example:
$ git clone https://github.com/felipec/git-remote-hg.git
$ PATH=$PATH:"`pwd`"/git-remote-hg
$ git clone hg::https://hg.python.org/peps/ PEPs
To work with the repository just use regular git commands including
git fetch/pull/push.
To start converting your Mercurial habits to git see the page
Mercurial for Git users at Mercurial wiki.
At the second half of the page there is a table that lists
corresponding Mercurial and git commands. Should work perfectly in
both directions.
Python Developer’s Guide also has a chapter Mercurial for git
developers that
documents a few differences between git and hg.
Git and GitHub
gitsome - Git/GitHub
command line interface (CLI). Written in Python, works on MacOS, Unix,
Windows. Git/GitHub CLI with autocomplete, includes many GitHub
integrated commands that work with all shells, builtin xonsh with
Python REPL to run Python commands alongside shell commands, command
history, customizable highlighting, thoroughly documented.
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 103 – Collecting information about git | Informational | This Informational PEP collects information about git. There is, of
course, a lot of documentation for git, so the PEP concentrates on
more complex (and more related to Python development) issues,
scenarios and examples. |
PEP 207 – Rich Comparisons
Author:
Guido van Rossum <guido at python.org>, David Ascher <DavidA at ActiveState.com>
Status:
Final
Type:
Standards Track
Created:
25-Jul-2000
Python-Version:
2.1
Post-History:
Table of Contents
Abstract
Motivation
Previous Work
Concerns
Proposed Resolutions
Implementation Proposal
C API
Changes to the interpreter
Classes
Copyright
Appendix
Abstract
Motivation
Current State of Affairs
Proposed Mechanism
Chained Comparisons
Problem
Solution
Abstract
This PEP proposes several new features for comparisons:
Allow separate overloading of <, >, <=, >=, ==, !=, both in
classes and in C extensions.
Allow any of those overloaded operators to return something else
besides a Boolean result.
Motivation
The main motivation comes from NumPy, whose users agree that A<B
should return an array of elementwise comparison outcomes; they
currently have to spell this as less(A,B) because A<B can only
return a Boolean result or raise an exception.
An additional motivation is that frequently, types don’t have a
natural ordering, but still need to be compared for equality.
Currently such a type must implement comparison and thus define
an arbitrary ordering, just so that equality can be tested.
Also, for some object types an equality test can be implemented
much more efficiently than an ordering test; for example, lists
and dictionaries that differ in length are unequal, but the
ordering requires inspecting some (potentially all) items.
Previous Work
Rich Comparisons have been proposed before; in particular by David
Ascher, after experience with Numerical Python:
http://starship.python.net/crew/da/proposals/richcmp.html
It is also included below as an Appendix. Most of the material in
this PEP is derived from David’s proposal.
Concerns
Backwards compatibility, both at the Python level (classes using
__cmp__ need not be changed) and at the C level (extensions
defining tp_compare need not be changed, code using
PyObject_Compare() must work even if the compared objects use
the new rich comparison scheme).
When A<B returns a matrix of elementwise comparisons, an easy
mistake to make is to use this expression in a Boolean context.
Without special precautions, it would always be true. This use
should raise an exception instead.
If a class overrides x==y but nothing else, should x!=y be
computed as not(x==y), or fail? What about the similar
relationship between < and >=, or between > and <=?
Similarly, should we allow x<y to be calculated from y>x? And
x<=y from not(x>y)? And x==y from y==x, or x!=y from y!=x?
When comparison operators return elementwise comparisons, what
to do about shortcut operators like A<B<C, A<B and C<D,
A<B or C<D?
What to do about min() and max(), the ‘in’ and ‘not in’
operators, list.sort(), dictionary key comparison, and other
uses of comparisons by built-in operations?
Proposed Resolutions
Full backwards compatibility can be achieved as follows. When
an object defines tp_compare() but not tp_richcompare(), and a
rich comparison is requested, the outcome of tp_compare() is
used in the obvious way. E.g. if “<” is requested, an exception is
raised if tp_compare() raises an exception, the outcome is 1 if
tp_compare() is negative, and 0 if it is zero or positive. Etc.
Full forward compatibility can be achieved as follows. When a
classic comparison is requested on an object that implements
tp_richcompare(), up to three comparisons are used: first == is
tried, and if it returns true, 0 is returned; next, < is tried
and if it returns true, -1 is returned; next, > is tried and if
it returns true, +1 is returned. If any operator tried returns
a non-Boolean value (see below), the exception raised by
conversion to Boolean is passed through. If none of the
operators tried returns true, the classic comparison fallbacks
are tried next.
(I thought long and hard about the order in which the three
comparisons should be tried. At one point I had a convincing
argument for doing it in this order, based on the behavior of
comparisons for cyclical data structures. But since that code
has changed again, I’m not so sure that it makes a difference
any more.)
Any type that returns a collection of Booleans instead of a
single boolean should define nb_nonzero() to raise an exception.
Such a type is considered a non-Boolean.
The == and != operators are not assumed to be each other’s
complement (e.g. IEEE 754 floating point numbers do not satisfy
this). It is up to the type to implement this if desired.
Similar for < and >=, or > and <=; there are lots of examples
where these assumptions aren’t true (e.g. tabnanny).
The reflexivity rules are assumed by Python. Thus, the
interpreter may swap y>x with x<y, y>=x with x<=y, and may swap
the arguments of x==y and x!=y. (Note: Python currently assumes
that x==x is always true and x!=x is never true; this should not
be assumed.)
In the current proposal, when A<B returns an array of
elementwise comparisons, this outcome is considered non-Boolean,
and its interpretation as Boolean by the shortcut operators
raises an exception. David Ascher’s proposal tries to deal
with this; I don’t think this is worth the additional complexity
in the code generator. Instead of A<B<C, you can write
(A<B)&(B<C).
The min() and list.sort() operations will only use the
< operator; max() will only use the > operator. The ‘in’ and
‘not in’ operators and dictionary lookup will only use the ==
operator.
Implementation Proposal
This closely follows David Ascher’s proposal.
C API
New functions:
PyObject *PyObject_RichCompare(PyObject *, PyObject *, int)
This performs the requested rich comparison, returning a Python
object or raising an exception. The 3rd argument must be one of
Py_LT, Py_LE, Py_EQ, Py_NE, Py_GT or Py_GE.
int PyObject_RichCompareBool(PyObject *, PyObject *, int)
This performs the requested rich comparison, returning a
Boolean: -1 for exception, 0 for false, 1 for true. The 3rd
argument must be one of Py_LT, Py_LE, Py_EQ, Py_NE, Py_GT or
Py_GE. Note that when PyObject_RichCompare() returns a
non-Boolean object, PyObject_RichCompareBool() will raise an
exception.
New typedef:
typedef PyObject *(*richcmpfunc) (PyObject *, PyObject *, int);
New slot in type object, replacing spare tp_xxx7:
richcmpfunc tp_richcompare;
This should be a function with the same signature as
PyObject_RichCompare(), and performing the same comparison.
At least one of the arguments is of the type whose
tp_richcompare slot is being used, but the other may have a
different type. If the function cannot compare the particular
combination of objects, it should return a new reference to
Py_NotImplemented.
PyObject_Compare() is changed to try rich comparisons if they
are defined (but only if classic comparisons aren’t defined).
Changes to the interpreter
Whenever PyObject_Compare() is called with the intent of getting
the outcome of a particular comparison (e.g. in list.sort(), and
of course for the comparison operators in ceval.c), the code is
changed to call PyObject_RichCompare() or
PyObject_RichCompareBool() instead; if the C code needs to know
the outcome of the comparison, PyObject_IsTrue() is called on
the result (which may raise an exception).
Most built-in types that currently define a comparison will be
modified to define a rich comparison instead. (This is
optional; I’ve converted lists, tuples, complex numbers, and
arrays so far, and am not sure whether I will convert others.)
Classes
Classes can define new special methods __lt__, __le__, __eq__,
__ne__, __gt__, __ge__ to override the corresponding operators.
(I.e., <, <=, ==, !=, >, >=. You gotta love the Fortran
heritage.) If a class defines __cmp__ as well, it is only used
when __lt__ etc. have been tried and return NotImplemented.
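A small illustrative class (not taken from the PEP) showing how these
methods might be used under the proposed protocol; the Version name and
its attributes are invented for the example:
class Version:
    def __init__(self, major, minor):
        self.major = major
        self.minor = minor
    def __eq__(self, other):
        if not isinstance(other, Version):
            return NotImplemented   # defer to the other operand or __cmp__
        return (self.major, self.minor) == (other.major, other.minor)
    def __lt__(self, other):
        if not isinstance(other, Version):
            return NotImplemented
        return (self.major, self.minor) < (other.major, other.minor)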
Copyright
This document has been placed in the public domain.
Appendix
Here is most of David Ascher’s original proposal (version 0.2.1,
dated Wed Jul 22 16:49:28 1998; I’ve left the Contents, History
and Patches sections out). It addresses almost all concerns
above.
Abstract
A new mechanism allowing comparisons of Python objects to return
values other than -1, 0, or 1 (or raise exceptions) is
proposed. This mechanism is entirely backwards compatible, and can
be controlled at the level of the C PyObject type or of the Python
class definition. There are three cooperating parts to the
proposed mechanism:
the use of the last slot in the type object structure to store a
pointer to a rich comparison function
the addition of special methods for classes
the addition of an optional argument to the builtin cmp()
function.
Motivation
The current comparison protocol for Python objects assumes that
any two Python objects can be compared (as of Python 1.5, object
comparisons can raise exceptions), and that the return value for
any comparison should be -1, 0 or 1. -1 indicates that the first
argument to the comparison function is less than the right one, +1
indicating the contrapositive, and 0 indicating that the two
objects are equal. While this mechanism allows the establishment
of an order relationship (e.g. for use by the sort() method of list
objects), it has proven to be limited in the context of Numeric
Python (NumPy).
Specifically, NumPy allows the creation of multidimensional
arrays, which support most of the numeric operators. Thus:
x = array((1,2,3,4))
y = array((2,2,4,4))
are two NumPy arrays. While they can be added elementwise:
z = x + y # z == array((3,4,7,8))
they cannot be compared in the current framework - the released
version of NumPy compares the pointers (thus yielding junk
information) which was the only solution before the recent
addition of the ability (in 1.5) to raise exceptions in comparison
functions.
Even with the ability to raise exceptions, the current protocol
makes array comparisons useless. To deal with this fact, NumPy
includes several functions which perform the comparisons: less(),
less_equal(), greater(), greater_equal(), equal(),
not_equal(). These functions return arrays with the same shape as
their arguments (modulo broadcasting), filled with 0’s and 1’s
depending on whether the comparison is true or not for each
element pair. Thus, for example, using the arrays x and y defined
above:
less(x,y)
would be an array containing the numbers (1,0,0,0).
The current proposal is to modify the Python object interface to
allow the NumPy package to make it so that x < y returns the same
thing as less(x,y). The exact return value is up to the NumPy
package – what this proposal really asks for is changing the
Python core so that extension objects have the ability to return
something other than -1, 0, 1, should their authors choose to do
so.
Current State of Affairs
The current protocol is, at the C level, that each object type
defines a tp_compare slot, which is a pointer to a function which
takes two PyObject* references and returns -1, 0, or 1. This
function is called by the PyObject_Compare() function defined in
the C API. PyObject_Compare() is also called by the builtin
function cmp() which takes two arguments.
Proposed Mechanism
Changes to the C structure for type objects
The last available slot in the PyTypeObject, reserved up to now
for future expansion, is used to optionally store a pointer to a
new comparison function, of type richcmpfunc defined by:
typedef PyObject *(*richcmpfunc)
Py_PROTO((PyObject *, PyObject *, int));
This function takes three arguments. The first two are the objects
to be compared, and the third is an integer corresponding to an
opcode (one of LT, LE, EQ, NE, GT, GE). If this slot is left NULL,
then rich comparison for that object type is not supported (except
for class instances whose class provide the special methods
described below).
The above opcodes need to be added to the published Python/C API
(probably under the names Py_LT, Py_LE, etc.)
Additions of special methods for classes
Classes wishing to support the rich comparison mechanisms must add
one or more of the following new special methods:
def __lt__(self, other):
    ...
def __le__(self, other):
    ...
def __gt__(self, other):
    ...
def __ge__(self, other):
    ...
def __eq__(self, other):
    ...
def __ne__(self, other):
    ...
Each of these is called when the class instance is on the
left-hand-side of the corresponding operators (<, <=, >, >=, ==,
and != or <>). The argument other is set to the object on the
right side of the operator. The return value of these methods is
up to the class implementor (after all, that’s the entire point of
the proposal).
If the object on the left side of the operator does not define an
appropriate rich comparison operator (either at the C level or
with one of the special methods), then the comparison is reversed,
and the right hand operator is called with the opposite operator,
and the two objects are swapped. This assumes that a < b and b > a
are equivalent, as are a <= b and b >= a, and that == and != are
commutative (e.g. a == b if and only if b == a).
For example, if obj1 is an object which supports the rich
comparison protocol and x and y are objects which do not support
the rich comparison protocol, then obj1 < x will call the __lt__
method of obj1 with x as the second argument. x < obj1 will call
obj1’s __gt__ method with x as a second argument, and x < y will
just use the existing (non-rich) comparison mechanism.
The above mechanism is such that classes can get away with not
implementing either __lt__ and __le__ or __gt__ and
__ge__. Further smarts could have been added to the comparison
mechanism, but this limited set of allowed “swaps” was chosen
because it doesn’t require the infrastructure to do any processing
(negation) of return values. The choice of six special methods was
made over a single (e.g. __richcmp__) method to allow the
dispatching on the opcode to be performed at the level of the C
implementation rather than the user-defined method.
Addition of an optional argument to the builtin cmp()
The builtin cmp() is still used for simple comparisons. For rich
comparisons, it is called with a third argument, one of “<”, “<=”,
“>”, “>=”, “==”, “!=”, “<>” (the last two have the same
meaning). When called with one of these strings as the third
argument, cmp() can return any Python object. Otherwise, it can
only return -1, 0 or 1 as before.
Chained Comparisons
Problem
It would be nice to allow objects for which the comparison returns
something other than -1, 0, or 1 to be used in chained
comparisons, such as:
x < y < z
Currently, this is interpreted by Python as:
temp1 = x < y
if temp1:
    return y < z
else:
    return temp1
Note that this requires testing the truth value of the result of
comparisons, with potential “shortcutting” of the right-side
comparison tests. In other words, the truth-value of the result
of the comparison determines the result of a chained
operation. This is problematic in the case of arrays, since if x,
y and z are three arrays, then the user expects:
x < y < z
to be an array of 0’s and 1’s where 1’s are in the locations
corresponding to the elements of y which are between the
corresponding elements in x and z. In other words, the right-hand
side must be evaluated regardless of the result of x < y, which is
incompatible with the mechanism currently in use by the parser.
Solution
Guido mentioned that one possible way out would be to change the
code generated by chained comparisons to allow arrays to be
chained-compared intelligently. What follows is a mixture of his
idea and my suggestions. The code generated for x < y < z would be
equivalent to:
temp1 = x < y
if temp1:
    temp2 = y < z
    return boolean_combine(temp1, temp2)
else:
    return temp1
where boolean_combine is a new function which does something like
the following:
def boolean_combine(a, b):
    if hasattr(a, '__boolean_and__') or \
       hasattr(b, '__boolean_and__'):
        try:
            return a.__boolean_and__(b)
        except:
            return b.__boolean_and__(a)
    else:  # standard behavior
        if a:
            return b
        else:
            return 0
where the __boolean_and__ special method is implemented for
C-level types by another value of the third argument to the
richcmp function. This method would perform a boolean comparison
of the arrays (currently implemented in the umath module as the
logical_and ufunc).
Thus, objects returned by rich comparisons should always test
true, but should define another special method which creates
boolean combinations of them and their argument.
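The __boolean_and__ hook described here does not exist in released
Python versions, but the intended combination can be sketched at the
Python level with a hypothetical result class that always tests true
(boolean_combine below is a condensed variant of the sketch above):
class CmpResult:
    """Elementwise comparison result that always tests true."""
    def __init__(self, flags):
        self.flags = list(flags)

    def __bool__(self):
        return True                      # always true, per the proposal

    def __boolean_and__(self, other):
        return CmpResult(a and b for a, b in zip(self.flags, other.flags))

    def __repr__(self):
        return 'CmpResult(%r)' % self.flags

def boolean_combine(a, b):
    if hasattr(a, '__boolean_and__') or hasattr(b, '__boolean_and__'):
        try:
            return a.__boolean_and__(b)
        except AttributeError:
            return b.__boolean_and__(a)
    return b if a else 0

temp1 = CmpResult([1, 1, 0])             # stands in for x < y
temp2 = CmpResult([1, 0, 1])             # stands in for y < z
print(boolean_combine(temp1, temp2))     # CmpResult([1, 0, 0])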
This solution has the advantage of allowing chained comparisons to
work for arrays, but the disadvantage that it requires comparison
arrays to always test true (in an ideal world, I’d have them
always raise an exception on truth testing, since the meaning of
testing “if a>b:” is massively ambiguous).
The inlining already present which deals with integer comparisons
would still apply, resulting in no performance cost for the most
common cases.
| Final | PEP 207 – Rich Comparisons | Standards Track | This PEP proposes several new features for comparisons: |
PEP 208 – Reworking the Coercion Model
Author:
Neil Schemenauer <nas at arctrix.com>, Marc-André Lemburg <mal at lemburg.com>
Status:
Final
Type:
Standards Track
Created:
04-Dec-2000
Python-Version:
2.1
Post-History:
Table of Contents
Abstract
Rationale
Specification
Reference Implementation
Credits
Copyright
References
Abstract
Many Python types implement numeric operations. When the arguments of
a numeric operation are of different types, the interpreter tries to
coerce the arguments into a common type. The numeric operation is
then performed using this common type. This PEP proposes a new type
flag to indicate that arguments to a type’s numeric operations should
not be coerced. Operations that do not support the supplied types
indicate it by returning a new singleton object. Types which do not
set the type flag are handled in a backwards compatible manner.
Allowing operations to handle different types is often simpler, more
flexible, and faster than having the interpreter do coercion.
Rationale
When implementing numeric or other related operations, it is often
desirable to provide not only operations between operands of one type
only, e.g. integer + integer, but to generalize the idea behind the
operation to other type combinations as well, e.g. integer + float.
A common approach to this mixed type situation is to provide a method
of “lifting” the operands to a common type (coercion) and then use
that type’s operand method as execution mechanism. Yet, this strategy
has a few drawbacks:
the “lifting” process creates at least one new (temporary)
operand object,
since the coercion method is not being told about the operation
that is to follow, it is not possible to implement operation
specific coercion of types,
there is no elegant way to solve situations where a common type
is not at hand, and
the coercion method will always have to be called prior to the
operation’s method itself.
A fix for this situation is obviously needed, since these drawbacks
make implementations of types needing these features very cumbersome,
if not impossible. As an example, have a look at the DateTime and
DateTimeDelta [1] types, the first being absolute, the second
relative. You can always add a relative value to an absolute one,
giving a new absolute value. Yet, there is no common type which the
existing coercion mechanism could use to implement that operation.
Currently, PyInstance types are treated specially by the interpreter
in that their numeric methods are passed arguments of different types.
Removing this special case simplifies the interpreter and allows other
types to implement numeric methods that behave like instance types.
This is especially useful for extension types like ExtensionClass.
Specification
Instead of using a central coercion method, the process of handling
different operand types is simply left to the operation. If the
operation finds that it cannot handle the given operand type
combination, it may return a special singleton as indicator.
Note that “numbers” (anything that implements the number protocol, or
part of it) written in Python already use the first part of this
strategy - it is the C level API that we focus on here.
To maintain nearly 100% backward compatibility we have to be very
careful to make numbers that don’t know anything about the new
strategy (old style numbers) work just as well as those that expect
the new scheme (new style numbers). Furthermore, binary compatibility
is a must, meaning that the interpreter may only access and use new
style operations if the number indicates the availability of these.
A new style number is considered by the interpreter as such if and
only if it sets the type flag Py_TPFLAGS_CHECKTYPES. The main
difference between an old style number and a new style one is that the
numeric slot functions can no longer assume that they are passed
arguments of identical type. New style slots must check all arguments
for proper type and implement the necessary conversions themselves.
This may seem to cause more work on the part of the type implementor,
but it is in
fact no more difficult than writing the same kind of routines for an
old style coercion slot.
If a new style slot finds that it cannot handle the passed argument
type combination, it may return a new reference to the special
singleton Py_NotImplemented to the caller. This will cause the caller
to try the other operand’s operation slots until it finds a slot that
does implement the operation for the specific type combination. If
none of the possible slots succeed, a TypeError is raised.
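The same protocol is visible at the Python level through the
NotImplemented singleton; the following sketch (the Metres and Feet
classes are inventions of this example, not part of the PEP) mirrors
the dispatch just described:
class Metres:
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        if isinstance(other, Metres):
            return Metres(self.value + other.value)
        return NotImplemented            # "I cannot handle this combination"

class Feet:
    def __init__(self, value):
        self.value = value

    def __radd__(self, other):
        # Tried after the left operand's slot has returned NotImplemented.
        if isinstance(other, Metres):
            return Metres(other.value + self.value * 0.3048)
        return NotImplemented

print((Metres(2) + Feet(10)).value)      # Metres gives up, Feet.__radd__ -> 5.048
# Metres(2) + "spam" raises TypeError: no slot handles that combination.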
To make the implementation easy to understand (the whole topic is
esoteric enough), a new layer in the handling of numeric operations is
introduced. This layer takes care of all the different cases that need
to be taken into account when dealing with all the possible
combinations of old and new style numbers. It is implemented by the
two static functions binary_op() and ternary_op(), which are both
internal functions that only the functions in Objects/abstract.c
have access to. The numeric API (PyNumber_*) is easy to adapt to
this new layer.
As a side-effect all numeric slots can be NULL-checked (this has to be
done anyway, so the added feature comes at no extra cost).
The scheme used by the layer to execute a binary operation is as
follows:
v    w    Action taken
---  ---  ----------------------------------------
new  new  v.op(v,w), w.op(v,w)
new  old  v.op(v,w), coerce(v,w), v.op(v,w)
old  new  w.op(v,w), coerce(v,w), v.op(v,w)
old  old  coerce(v,w), v.op(v,w)
The indicated action sequence is executed from left to right until
either the operation succeeds and a valid result (!=
Py_NotImplemented) is returned or an exception is raised. Exceptions
are returned to the calling function as-is. If a slot returns
Py_NotImplemented, the next item in the sequence is executed.
Note that coerce(v,w) will use the old style nb_coerce slot methods
via a call to PyNumber_Coerce().
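The left-to-right rule in the table can be rendered in Python roughly
as follows. This is only an illustration, not the actual C code in
Objects/abstract.c; the slot, is_new_style and coerce parameters are
stand-ins invented for the sketch, and Python’s NotImplemented plays
the role of Py_NotImplemented:
def binary_op(v, w, slot, is_new_style, coerce):
    # Try the slots from left to right, as in the table above.
    owners = []
    if is_new_style(v):
        owners.append(v)                 # v.op(v, w)
    if is_new_style(w):
        owners.append(w)                 # w.op(v, w)
    for owner in owners:
        result = slot(owner, v, w)
        if result is not NotImplemented:
            return result
    # Old-style fallback: coerce to a common type, then retry v's slot.
    if not (is_new_style(v) and is_new_style(w)):
        a, b = coerce(v, w)
        result = slot(a, a, b)
        if result is not NotImplemented:
            return result
    raise TypeError("unsupported operand types")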
Ternary operations have a few more cases to handle:
v    w    z    Action taken
---  ---  ---  -----------------------------------------------------
new  new  new  v.op(v,w,z), w.op(v,w,z), z.op(v,w,z)
new  old  new  v.op(v,w,z), z.op(v,w,z), coerce(v,w,z), v.op(v,w,z)
old  new  new  w.op(v,w,z), z.op(v,w,z), coerce(v,w,z), v.op(v,w,z)
old  old  new  z.op(v,w,z), coerce(v,w,z), v.op(v,w,z)
new  new  old  v.op(v,w,z), w.op(v,w,z), coerce(v,w,z), v.op(v,w,z)
new  old  old  v.op(v,w,z), coerce(v,w,z), v.op(v,w,z)
old  new  old  w.op(v,w,z), coerce(v,w,z), v.op(v,w,z)
old  old  old  coerce(v,w,z), v.op(v,w,z)
The same notes as above, except that coerce(v,w,z) actually does:
if z != Py_None:
    coerce(v,w), coerce(v,z), coerce(w,z)
else:
    # treat z as absent variable
    coerce(v,w)
The current implementation uses this scheme already (there’s only one
ternary slot: nb_pow(a,b,c)).
Note that the numeric protocol is also used for some other related
tasks, e.g. sequence concatenation. These can also benefit from the
new mechanism by implementing right-hand operations for type
combinations that would otherwise fail to work. As an example, take
string concatenation: currently you can only do string + string. With
the new mechanism, a new string-like type could implement new_type +
string and string + new_type, even though strings don’t know anything
about new_type.
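A sketch of that idea follows; the Tagged class is hypothetical and
uses the Python-level __add__/__radd__ hooks, which correspond to the
C slots discussed here:
class Tagged:
    """A string-like value that concatenates with plain strings."""
    def __init__(self, text, tag='tagged'):
        self.text, self.tag = text, tag

    def __add__(self, other):                # Tagged + str, Tagged + Tagged
        if isinstance(other, Tagged):
            return Tagged(self.text + other.text, self.tag)
        if isinstance(other, str):
            return Tagged(self.text + other, self.tag)
        return NotImplemented

    def __radd__(self, other):               # str + Tagged
        if isinstance(other, str):
            return Tagged(other + self.text, self.tag)
        return NotImplemented

print((Tagged('Hello, ') + 'world').text)    # handled by Tagged.__add__
print(('Hello, ' + Tagged('world')).text)    # str returns NotImplemented,
                                             # so Tagged.__radd__ handles it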
Since comparisons also rely on coercion (every time you compare an
integer to a float, the integer is first converted to float and then
compared…), a new slot to handle numeric comparisons is needed:
PyObject *nb_cmp(PyObject *v, PyObject *w)
This slot should compare the two objects and return an integer object
stating the result. Currently, this result integer may only be -1, 0, 1.
If the slot cannot handle the type combination, it may return a
reference to Py_NotImplemented. [XXX Note that this slot is still
in flux since it should take into account rich comparisons
(i.e. PEP 207).]
Numeric comparisons are handled by a new numeric protocol API:
PyObject *PyNumber_Compare(PyObject *v, PyObject *w)
This function compares the two objects as “numbers” and returns an
integer object stating the result. Currently, this result integer may
only be -1, 0, 1. In case the operation cannot be handled by the given
objects, a TypeError is raised.
The PyObject_Compare() API needs to be adjusted accordingly to make use
of this new API.
Other changes include adapting some of the built-in functions (e.g.
cmp()) to use this API as well. Also, PyNumber_CoerceEx() will need to
check for new style numbers before calling the nb_coerce slot. New
style numbers don’t provide a coercion slot and thus cannot be
explicitly coerced.
Reference Implementation
A preliminary patch for the CVS version of Python is available through
the Source Forge patch manager [2].
Credits
This PEP and the patch are heavily based on work done by Marc-André
Lemburg [3].
Copyright
This document has been placed in the public domain.
References
[1]
http://www.lemburg.com/files/python/mxDateTime.html
[2]
http://sourceforge.net/patch/?func=detailpatch&patch_id=102652&group_id=5470
[3]
http://www.lemburg.com/files/python/CoercionProposal.html
| Final | PEP 208 – Reworking the Coercion Model | Standards Track | Many Python types implement numeric operations. When the arguments of
a numeric operation are of different types, the interpreter tries to
coerce the arguments into a common type. The numeric operation is
then performed using this common type. This PEP proposes a new type
flag to indicate that arguments to a type’s numeric operations should
not be coerced. Operations that do not support the supplied types
indicate it by returning a new singleton object. Types which do not
set the type flag are handled in a backwards compatible manner.
Allowing operations to handle different types is often simpler, more
flexible, and faster than having the interpreter do coercion. |