symtable - Access to the compiler's symbol tables

Source code: Lib/symtable.py

Symbol tables are generated by the compiler from AST just before bytecode is generated. The symbol table is responsible for calculating the scope of every identifier in the code. symtable provides an interface to examine these tables.

Generating Symbol Tables

symtable.symtable(code, filename, compile_type)
    Return the toplevel SymbolTable for the Python source code. filename is the name of the file containing the code. compile_type is like the mode argument to compile().

Examining Symbol Tables

class symtable.SymbolTable
    A namespace table for a block. The constructor is not public.

    get_type()
        Return the type of the symbol table. Possible values are 'class', 'module', 'function', 'annotation', 'TypeVar bound', 'type alias', and 'type parameter'. The latter four refer to different flavors of annotation scopes.

        Changed in version 3.12: Added 'annotation', 'TypeVar bound', 'type alias', and 'type parameter' as possible return values.

    get_id()
        Return the table's identifier.

    get_name()
        Return the table's name. This is the name of the class if the table is for a class, the name of the function if the table is for a function, or 'top' if the table is global (get_type() returns 'module'). For type parameter scopes (which are used for generic classes, functions, and type aliases), it is the name of the underlying class, function, or type alias. For type alias scopes, it is the name of the type alias. For TypeVar bound scopes, it is the name of the TypeVar.

    get_lineno()
        Return the number of the first line in the block this table represents.

    is_optimized()
        Return True if the locals in this table can be optimized.

    is_nested()
        Return True if the block is a nested class or function.

    has_children()
        Return True if the block has nested namespaces within it. These can be obtained with get_children().

    get_identifiers()
        Return a view object containing the names of symbols in the table. See the documentation of view objects.

    lookup(name)
        Lookup name in the table and return a Symbol instance.

    get_symbols()
        Return a list of Symbol instances for names in the table.

    get_children()
        Return a list of the nested symbol tables.

class symtable.Function
    A namespace for a function or method. This class inherits from SymbolTable.

    get_parameters()
        Return a tuple containing names of parameters to this function.

    get_locals()
        Return a tuple containing names of locals in this function.

    get_globals()
        Return a tuple containing names of globals in this function.

    get_nonlocals()
        Return a tuple containing names of nonlocals in this function.

    get_frees()
        Return a tuple containing names of free variables in this function.

class symtable.Class
    A namespace of a class. This class inherits from SymbolTable.

    get_methods()
        Return a tuple containing the names of methods declared in the class.

class symtable.Symbol
    An entry in a SymbolTable corresponding to an identifier in the source. The constructor is not public.

    get_name()
        Return the symbol's name.

    is_referenced()
        Return True if the symbol is used in its block.

    is_imported()
        Return True if the symbol is created from an import statement.

    is_parameter()
        Return True if the symbol is a parameter.

    is_global()
        Return True if the symbol is global.

    is_nonlocal()
        Return True if the symbol is nonlocal.

    is_declared_global()
        Return True if the symbol is declared global with a global statement.

    is_local()
        Return True if the symbol is local to its block.

    is_annotated()
        Return True if the symbol is annotated.

        New in version 3.6.

    is_free()
        Return True if the symbol is referenced in its block, but not assigned to.

    is_assigned()
        Return True if the symbol is assigned to in its block.

    is_namespace()
        Return True if name binding introduces new namespace. If the name is used as the target of a function or class statement, this will be true. For example:

            >>> table = symtable.symtable("def some_func(): pass", "string", "exec")
            >>> table.lookup("some_func").is_namespace()
            True

        Note that a single name can be bound to multiple objects. If the result is True, the name may also be bound to other objects, like an int or list, that does not introduce a new namespace.

    get_namespaces()
        Return a list of namespaces bound to this name.

    get_namespace()
        Return the namespace bound to this name. If more than one or no namespace is bound to this name, a ValueError is raised.
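As a quick sketch of the API above, the following builds the symbol table for a small function and inspects it. The source string and the names greet and message are illustrative only, not from the documentation.

```python
import symtable

code = """
def greet(name):
    message = "Hello, " + name
    return message
"""

# Build the top-level symbol table for the module source, then drill
# down into the nested table for greet() via Symbol.get_namespace().
table = symtable.symtable(code, "<example>", "exec")
assert table.lookup("greet").is_namespace()

func = table.lookup("greet").get_namespace()
print(func.get_type())                 # 'function'
print(func.get_parameters())           # ('name',)
print("message" in func.get_locals())  # True
```

Note that parameters also count as locals, so func.get_locals() contains both name and message.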
lzma - Compression using the LZMA algorithm

New in version 3.3.

Source code: Lib/lzma.py

This module provides classes and convenience functions for compressing and decompressing data using the LZMA compression algorithm. Also included is a file interface supporting the .xz and legacy .lzma file formats used by the xz utility, as well as raw compressed streams.

The interface provided by this module is very similar to that of the bz2 module. Note that LZMAFile and bz2.BZ2File are not thread-safe, so if you need to use a single LZMAFile instance from multiple threads, it is necessary to protect it with a lock.

exception lzma.LZMAError
    This exception is raised when an error occurs during compression or decompression, or while initializing the compressor/decompressor state.

Reading and writing compressed files

lzma.open(filename, mode="rb", *, format=None, check=-1, preset=None, filters=None, encoding=None, errors=None, newline=None)
    Open an LZMA-compressed file in binary or text mode, returning a file object.

    The filename argument can be either an actual file name (given as a str, bytes or path-like object), in which case the named file is opened, or it can be an existing file object to read from or write to.

    The mode argument can be any of "r", "rb", "w", "wb", "x", "xb", "a" or "ab" for binary mode, or "rt", "wt", "xt", or "at" for text mode. The default is "rb".

    When opening a file for reading, the format and filters arguments have the same meanings as for LZMADecompressor. In this case, the check and preset arguments should not be used.

    When opening a file for writing, the format, check, preset and filters arguments have the same meanings as for LZMACompressor.

    For binary mode, this function is equivalent to the LZMAFile constructor: LZMAFile(filename, mode). In this case, the encoding, errors and newline arguments must not be provided.

    For text mode, a LZMAFile object is created, and wrapped in an io.TextIOWrapper instance with the specified encoding, error handling behavior, and line ending(s).

    Changed in version 3.4: Added support for the "x", "xb" and "xt" modes.

    Changed in version 3.6: Accepts a path-like object.

class lzma.LZMAFile(filename=None, mode="r", *, format=None, check=-1, preset=None, filters=None)
    Open an LZMA-compressed file in binary mode.

    An LZMAFile can wrap an already-open file object, or operate directly on a named file. The filename argument specifies either the file object to wrap, or the name of the file to open (as a str, bytes or path-like object). When wrapping an existing file object, the wrapped file will not be closed when the LZMAFile is closed.

    The mode argument can be either "r" for reading (default), "w" for overwriting, "x" for exclusive creation, or "a" for appending. These can equivalently be given as "rb", "wb", "xb" and "ab" respectively.

    If filename is a file object (rather than an actual file name), a mode of "w" does not truncate the file, and is instead equivalent to "a".

    When opening a file for reading, the input file may be the concatenation of multiple separate compressed streams. These are transparently decoded as a single logical stream.

    When opening a file for reading, the format and filters arguments have the same meanings as for LZMADecompressor. In this case, the check and preset arguments should not be used.

    When opening a file for writing, the format, check, preset and filters arguments have the same meanings as for LZMACompressor.

    LZMAFile supports all the members specified by io.BufferedIOBase, except for detach() and truncate(). Iteration and the with statement are supported.

    The following method is also provided:

    peek(size=-1)
        Return buffered data without advancing the file position. At least one byte of data will be returned, unless EOF has been reached. The exact number of bytes returned is unspecified (the size argument is ignored).

        Note: While calling peek() does not change the file position of the LZMAFile, it may change the position of the underlying file object (e.g. if the LZMAFile was constructed by passing a file object for filename).

    Changed in version 3.4: Added support for the "x" and "xb" modes.

    Changed in version 3.5: The read() method now accepts an argument of None.

    Changed in version 3.6: Accepts a path-like object.

Compressing and decompressing data in memory

class lzma.LZMACompressor(format=FORMAT_XZ, check=-1, preset=None, filters=None)
    Create a compressor object, which can be used to compress data incrementally.

    For a more convenient way of compressing a single chunk of data, see compress().

    The format argument specifies what container format should be used. Possible values are:

    FORMAT_XZ: The .xz container format. This is the default format.

    FORMAT_ALONE: The legacy .lzma container format. This format is more limited than .xz: it does not support integrity checks or multiple filters.

    FORMAT_RAW: A raw data stream, not using any container format. This format specifier does not support integrity checks, and requires that you always specify a custom filter chain (for both compression and decompression). Additionally, data compressed in this manner cannot be decompressed using FORMAT_AUTO (see LZMADecompressor).

    The check argument specifies the type of integrity check to include in the compressed data. This check is used when decompressing, to ensure that the data has not been corrupted. Possible values are:

    CHECK_NONE: No integrity check. This is the default (and the only acceptable value) for FORMAT_ALONE and FORMAT_RAW.

    CHECK_CRC32: 32-bit Cyclic Redundancy Check.

    CHECK_CRC64: 64-bit Cyclic Redundancy Check. This is the default for FORMAT_XZ.

    CHECK_SHA256: 256-bit Secure Hash Algorithm.

    If the specified check is not supported, an LZMAError is raised.

    The compression settings can be specified either as a preset compression level (with the preset argument), or in detail as a custom filter chain (with the filters argument).

    The preset argument (if provided) should be an integer between 0 and 9 (inclusive), optionally OR-ed with the constant PRESET_EXTREME. If neither preset nor filters are given, the default behavior is to use PRESET_DEFAULT (preset level 6). Higher presets produce smaller output, but make the compression process slower.

    Note: In addition to being more CPU-intensive, compression with higher presets also requires much more memory (and produces output that needs more memory to decompress). With preset 9, for example, the overhead for an LZMACompressor object can be as high as 800 MiB. For this reason, it is generally best to stick with the default preset.

    The filters argument (if provided) should be a filter chain specifier. See Specifying custom filter chains for details.

    compress(data)
        Compress data (a bytes object), returning a bytes object containing compressed data for at least part of the input. Some of data may be buffered internally, for use in later calls to compress() and flush(). The returned data should be concatenated with the output of any previous calls to compress().

    flush()
        Finish the compression process, returning a bytes object containing any data stored in the compressor's internal buffers.

        The compressor cannot be used after this method has been called.

class lzma.LZMADecompressor(format=FORMAT_AUTO, memlimit=None, filters=None)
    Create a decompressor object, which can be used to decompress data incrementally.

    For a more convenient way of decompressing an entire compressed stream at once, see decompress().

    The format argument specifies the container format that should be used. The default is FORMAT_AUTO, which can decompress both .xz and .lzma files. Other possible values are FORMAT_XZ, FORMAT_ALONE, and FORMAT_RAW.

    The memlimit argument specifies a limit (in bytes) on the amount of memory that the decompressor can use. When this argument is used, decompression will fail with an LZMAError if it is not possible to decompress the input within the given memory limit.

    The filters argument specifies the filter chain that was used to create the stream being decompressed. This argument is required if format is FORMAT_RAW, but should not be used for other formats. See Specifying custom filter chains for more information about filter chains.

    Note: This class does not transparently handle inputs containing multiple compressed streams, unlike decompress() and LZMAFile. To decompress a multi-stream input with LZMADecompressor, you must create a new decompressor for each stream.

    decompress(data, max_length=-1)
        Decompress data (a bytes-like object), returning uncompressed data as bytes.
        Some of data may be buffered internally, for use in later calls to decompress(). The returned data should be concatenated with the output of any previous calls to decompress().

        If max_length is nonnegative, returns at most max_length bytes of decompressed data. If this limit is reached and further output can be produced, the needs_input attribute will be set to False. In this case, the next call to decompress() may provide data as b"" to obtain more of the output.

        If all of the input data was decompressed and returned (either because this was less than max_length bytes, or because max_length was negative), the needs_input attribute will be set to True.

        Attempting to decompress data after the end of stream is reached raises an EOFError. Any data found after the end of the stream is ignored and saved in the unused_data attribute.

        Changed in version 3.5: Added the max_length parameter.

    check
        The ID of the integrity check used by the input stream. This may be CHECK_UNKNOWN until enough of the input has been decoded to determine what integrity check it uses.

    eof
        True if the end-of-stream marker has been reached.

    unused_data
        Data found after the end of the compressed stream. Before the end of the stream is reached, this will be b"".

    needs_input
        False if the decompress() method can provide more decompressed data before requiring new uncompressed input.

        New in version 3.5.

lzma.compress(data, format=FORMAT_XZ, check=-1, preset=None, filters=None)
    Compress data (a bytes object), returning the compressed data as a bytes object.

    See LZMACompressor above for a description of the format, check, preset and filters arguments.

lzma.decompress(data, format=FORMAT_AUTO, memlimit=None, filters=None)
    Decompress data (a bytes object), returning the uncompressed data as a bytes object.

    If data is the concatenation of multiple distinct compressed streams, decompress all of these streams, and return the concatenation of the results.

    See LZMADecompressor above for a description of the format, memlimit and filters arguments.

Miscellaneous

lzma.is_check_supported(check)
    Return True if the given integrity check is supported on this system.

    CHECK_NONE and CHECK_CRC32 are always supported. CHECK_CRC64 and CHECK_SHA256 may be unavailable if you are using a version of liblzma that was compiled with a limited feature set.

Specifying custom filter chains

A filter chain specifier is a sequence of dictionaries, where each dictionary contains the ID and options for a single filter. Each dictionary must contain the key "id", and may contain additional keys to specify filter-dependent options. Valid filter IDs are as follows:

    Compression filters: FILTER_LZMA1 (for use with FORMAT_ALONE), FILTER_LZMA2 (for use with FORMAT_XZ and FORMAT_RAW)

    Delta filter: FILTER_DELTA

    Branch-Call-Jump (BCJ) filters: FILTER_X86, FILTER_IA64, FILTER_ARM, FILTER_ARMTHUMB, FILTER_POWERPC, FILTER_SPARC

A filter chain can consist of up to 4 filters, and cannot be empty. The last filter in the chain must be a compression filter, and any other filters must be delta or BCJ filters.

Compression filters support the following options (specified as additional entries in the dictionary representing the filter):

    preset: A compression preset to use as a source of default values for options that are not specified explicitly.
    dict_size: Dictionary size in bytes. This should be between 4 KiB and 1.5 GiB (inclusive).
    lc: Number of literal context bits.
    lp: Number of literal position bits. The sum lc + lp must be at most 4.
    pb: Number of position bits; must be at most 4.
    mode: MODE_FAST or MODE_NORMAL.
    nice_len: What should be considered a "nice length" for a match. This should be 273 or less.
    mf: What match finder to use: MF_HC3, MF_HC4, MF_BT2, MF_BT3, or MF_BT4.
    depth: Maximum search depth used by match finder. 0 (default) means to select automatically based on other filter options.

The delta filter stores the differences between bytes, producing more repetitive input for the compressor in certain circumstances. It supports one option, dist. This indicates the distance between bytes to be subtracted. The default is 1, i.e. take the differences between adjacent bytes.

The BCJ filters are intended to be applied to machine code. They convert relative branches, calls and jumps in the code to use absolute addressing, with the aim of increasing the redundancy that can be exploited by the compressor. These filters support one option, start_offset. This specifies the address that should be mapped to the beginning of the input data. The default is 0.

Examples

Reading in a compressed file:

    import lzma
    with lzma.open("file.xz") as f:
        file_content = f.read()

Creating a compressed file:

    import lzma
    data = b"Insert Data Here"
    with lzma.open("file.xz", "w") as f:
        f.write(data)

Compressing data in memory:

    import lzma
    data_in = b"Insert Data Here"
    data_out = lzma.compress(data_in)

Incremental compression:

    import lzma
    lzc = lzma.LZMACompressor()
    out1 = lzc.compress(b"Some data\n")
    out2 = lzc.compress(b"Another piece of data\n")
    out3 = lzc.compress(b"Even more data\n")
    out4 = lzc.flush()
    # Concatenate all the partial results:
    result = b"".join([out1, out2, out3, out4])

Writing compressed data to an already-open file:

    import lzma
    with open("file.xz", "wb") as f:
        f.write(b"This data will not be compressed\n")
        with lzma.open(f, "w") as lzf:
            lzf.write(b"This will be compressed\n")
        f.write(b"Not compressed\n")

Creating a compressed file using a custom filter chain:

    import lzma
    my_filters = [
        {"id": lzma.FILTER_DELTA, "dist": 5},
        {"id": lzma.FILTER_LZMA2, "preset": 7 | lzma.PRESET_EXTREME},
    ]
    with lzma.open("file.xz", "w", filters=my_filters) as f:
        f.write(b"blah blah blah")
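To illustrate the multi-stream note on LZMADecompressor: since that class does not handle concatenated streams transparently, each stream needs its own decompressor, and the bytes left over after one stream ends show up in unused_data. This sketch is not from the documentation's own example set; the payload strings are illustrative.

```python
import lzma

# Two independently compressed .xz streams, concatenated.
data = lzma.compress(b"first stream") + lzma.compress(b"second stream")

streams = []
while data:
    decomp = lzma.LZMADecompressor()
    streams.append(decomp.decompress(data))
    # Any bytes after the end of the current stream are saved here,
    # becoming the input for the next decompressor.
    data = decomp.unused_data

print(streams)  # [b'first stream', b'second stream']
```

For comparison, lzma.decompress(data) on the same input would return the concatenation b'first streamsecond stream' in one call, since the module-level function decodes multi-stream inputs itself.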
keyword - Testing for Python keywords

Source code: Lib/keyword.py

This module allows a Python program to determine if a string is a keyword or soft keyword.

keyword.iskeyword(s)
    Return True if s is a Python keyword.

keyword.kwlist
    Sequence containing all the keywords defined for the interpreter. If any keywords are defined to only be active when particular __future__ statements are in effect, these will be included as well.

keyword.issoftkeyword(s)
    Return True if s is a Python soft keyword.

    New in version 3.9.

keyword.softkwlist
    Sequence containing all the soft keywords defined for the interpreter. If any soft keywords are defined to only be active when particular __future__ statements are in effect, these will be included as well.

    New in version 3.9.
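A brief illustration of the functions above. The specific soft-keyword result assumes Python 3.10 or later, where match became a soft keyword.

```python
import keyword

print(keyword.iskeyword("for"))        # True
print(keyword.iskeyword("print"))      # False: print is a builtin, not a keyword
print(keyword.issoftkeyword("match"))  # True on Python 3.10+
print("lambda" in keyword.kwlist)      # True
```

This distinction matters for tools such as linters and code generators: "print" is a perfectly legal identifier, while "for" can never be one.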
Design and History FAQ

Why does Python use indentation for grouping of statements?

Guido van Rossum believes that using indentation for grouping is extremely elegant and contributes a lot to the clarity of the average Python program. Most people learn to love this feature after a while.

Since there are no begin/end brackets there cannot be a disagreement between grouping perceived by the parser and the human reader. Occasionally C programmers will encounter a fragment of code like this:

    if (x <= y)
        x++;
        y--;
    z++;

Only the x++ statement is executed if the condition is true, but the indentation leads many to believe otherwise. Even experienced C programmers will sometimes stare at it a long time wondering as to why y is being decremented even for x > y.

Because there are no begin/end brackets, Python is much less prone to coding-style conflicts. In C there are many different ways to place the braces. After becoming used to reading and writing code using a particular style, it is normal to feel somewhat uneasy when reading (or being required to write) in a different one.

Many coding styles place begin/end brackets on a line by themselves. This makes programs considerably longer and wastes valuable screen space, making it harder to get a good overview of a program. Ideally, a function should fit on one screen (say, 20-30 lines). 20 lines of Python can do a lot more work than 20 lines of C. This is not solely due to the lack of begin/end brackets (the lack of declarations and the high-level data types are also responsible), but the indentation-based syntax certainly helps.

Why am I getting strange results with simple arithmetic operations?

See the next question.

Why are floating-point calculations so inaccurate?

Users are often surprised by results like this:

    >>> 1.2 - 1.0
    0.19999999999999996

and think it is a bug in Python. It's not. This has little to do with Python, and much more to do with how the underlying platform handles floating-point numbers.

The float type in CPython uses a C double for storage. A float object's value is stored in binary floating point with a fixed precision (typically 53 bits) and Python uses C operations, which in turn rely on the hardware implementation in the processor, to perform floating-point operations. This means that as far as floating-point operations are concerned, Python behaves like many popular languages including C and Java.

Many numbers that can be written easily in decimal notation cannot be expressed exactly in binary floating point. For example, after:

    >>> x = 1.2

the value stored for x is a (very good) approximation to the decimal value 1.2, but is not exactly equal to it. On a typical machine, the actual stored value is:

    1.0011001100110011001100110011001100110011001100110011 (binary)

which is exactly:

    1.1999999999999999555910790149937383830547332763671875 (decimal)

The typical precision of 53 bits provides Python floats with 15-16 decimal digits of accuracy.

For a fuller explanation, please see the floating-point arithmetic chapter in the Python tutorial.

Why are Python strings immutable?

There are several advantages.

One is performance: knowing that a string is immutable means we can allocate space for it at creation time, and the storage requirements are fixed and unchanging. This is also one of the reasons for the distinction between tuples and lists.

Another advantage is that strings in Python are considered as "elemental" as numbers. No amount of activity will change the value 8 to anything else, and in Python, no amount of activity will change the string "eight" to anything else.

Why must 'self' be used explicitly in method definitions and calls?

The idea was borrowed from Modula-3. It turns out to be very useful, for a variety of reasons.

First, it's more obvious that you are using a method or instance attribute instead of a local variable. Reading self.x or self.meth() makes it absolutely clear that an instance variable or method is used even if you don't know the class definition by heart. In C++, you can sort of tell by the lack of a local variable declaration (assuming globals are rare or easily recognizable), but in Python there are no local variable declarations, so you'd have to look
up the class definition to be sure. Some C++ and Java coding standards call for instance attributes to have an m_ prefix, so this explicitness is still useful in those languages, too.

Second, it means that no special syntax is necessary if you want to explicitly reference or call the method from a particular class. In C++, if you want to use a method from a base class which is overridden in a derived class, you have to use the :: operator; in Python you can write baseclass.methodname(self, <argument list>). This is particularly useful for __init__() methods, and in general in cases where a derived class method wants to extend the base class method of the same name and thus has to call the base class method somehow.

Finally, for instance variables it solves a syntactic problem with assignment: since local variables in Python are (by definition!) those variables to which a value is assigned in a function body (and that aren't explicitly declared global), there has to be some way to tell the interpreter that an assignment was meant to assign to an instance variable instead of to a local variable, and it should preferably be syntactic (for efficiency reasons). C++ does this through declarations, but Python doesn't have declarations and it would be a pity having to introduce them just for this purpose. Using the explicit self.var solves this nicely. Similarly, for using instance variables, having to write self.var means that references to unqualified names inside a method don't have to search the instance's directories. To put it another way, local variables and instance variables live in two different namespaces, and you need to tell Python which namespace to use.

Why can't I use an assignment in an expression?

Starting in Python 3.8, you can!

Assignment expressions using the walrus operator := assign a variable in an expression:

    while chunk := fp.read(200):
        print(chunk)

See PEP 572 for more information.

Why does Python use methods for some functionality (e.g. list.index()) but functions for other (e.g. len(list))?

As Guido said:

(a) For some operations, prefix notation just reads better than postfix; prefix (and infix!) operations have a long tradition in mathematics, which likes notations where the visuals help the mathematician thinking about a problem. Compare the ease with which we rewrite a formula like x*(a+b) into x*a + x*b to the clumsiness of doing the same thing using a raw OO notation.

(b) When I read code that says len(x) I know that it is asking for the length of something. This tells me two things: the result is an integer, and the argument is some kind of container. To the contrary, when I read x.len(), I have to already know that x is some kind of container implementing an interface or inheriting from a class that has a standard len(). Witness the confusion we occasionally have when a class that is not implementing a mapping has a get() or keys() method, or something that isn't a file has a write() method.

    https://mail.python.org/pipermail/python-3000/2006-November/004643.html

Why is join() a string method instead of a list or tuple method?

Strings became much more like other standard types starting in Python 1.6, when methods were added which give the same functionality that has always been available using the functions of the string module. Most of these new methods have been widely accepted, but the one which appears to make some programmers feel uncomfortable is:

    ", ".join(['1', '2', '4', '8', '16'])

which gives the result:

    "1, 2, 4, 8, 16"

There are two common arguments against this usage.

The first runs along the lines of: "It looks really ugly using a method of a string literal (string constant)", to which the answer is that it might, but a string literal is just a fixed value. If the methods are to be allowed on names bound to strings, there is no logical reason to make them unavailable on literals.

The second objection is typically cast as: "I am really telling a sequence to join its members together with a string constant". Sadly, you aren't. For some reason there seems to be much less difficulty with having split() as a string method, since in that case it is easy to see that:

    "1, 2, 4, 8, 16".split(", ")

is an instruction to a string literal to return the substrings delimited by the given separator (or, by default, arbitrary runs of white space).

join() is a string method because in using it you are telling the separator string to iterate over a sequence of strings and insert itself between adjacent elements. This method can be used with any argument which obeys the rules for sequence objects, including any new classes you might define yourself. Similar methods exist for bytes and bytearray objects.

How fast are exceptions?

A try/except block is extremely efficient if no exceptions are raised. Actually catching an exception is expensive. In versions of Python prior to 2.0 it was common to use this idiom:

    try:
        value = mydict[key]
    except KeyError:
        mydict[key] = getvalue(key)
        value = mydict[key]

This only made sense when you expected the dict to have the key almost all the time. If that wasn't the case, you coded it like this:

    if key in mydict:
        value = mydict[key]
    else:
        value = mydict[key] = getvalue(key)

For this specific case, you could also use value = dict.setdefault(key, getvalue(key)), but only if the getvalue() call is cheap enough, because it is evaluated in all cases.

Why isn't there a switch or case statement in Python?

In general, structured switch statements execute one block of code when an expression has a particular value or set of values. Since Python 3.10 one can easily match literal values, or constants within a namespace, with a match ... case statement. An older alternative is a sequence of if... elif... elif... else.

For cases where you need to choose from a very large number of possibilities, you can create a dictionary mapping case values to functions to call. For example:

    functions = {'a': function_1,
                 'b': function_2,
                 'c': self.method_1}

    func = functions[value]
    func()

For calling methods on objects, you can simplify yet further by using the getattr() built-in to retrieve methods with a particular name:

    class MyVisitor:
        def visit_a(self):
            ...

        def dispatch(self, value):
            method_name = 'visit_' + str(value)
            method = getattr(self, method_name)
            method()

It's suggested that you use a prefix for the method names, such as visit_ in this
example. Without such a prefix, if values are coming from an untrusted source, an attacker would be able to call any method on your object.

Imitating switch with fallthrough, as with C's switch/case/default, is possible, much harder, and less needed.

Can't you emulate threads in the interpreter instead of relying on an OS-specific thread implementation?

Answer 1: Unfortunately, the interpreter pushes at least one C stack frame for each Python stack frame. Also, extensions can call back into Python at almost random moments. Therefore, a complete threads implementation requires thread support for C.

Answer 2: Fortunately, there is Stackless Python, which has a completely redesigned interpreter loop that avoids the C stack.

Why can't lambda expressions contain statements?

Python lambda expressions cannot contain statements because Python's syntactic framework can't handle statements nested inside expressions. However, in Python, this is not a serious problem. Unlike lambda forms in other languages, where they add functionality, Python lambdas are only a shorthand notation if you're too lazy to define a function.

Functions are already first class objects in Python, and can be declared in a local scope. Therefore the only advantage of using a lambda instead of a locally defined function is that you don't need to invent a name for the function; but that's just a local variable to which the function object (which is exactly the same type of object that a lambda expression yields) is assigned!

Can Python be compiled to machine code, C or some other language?

Cython compiles a modified version of Python with optional annotations into C extensions. Nuitka is an up-and-coming compiler of Python into C code, aiming to support the full Python language.

How does Python manage memory?

The details of Python memory management depend on the implementation. The standard implementation of Python, CPython, uses reference counting to detect inaccessible objects, and another mechanism to collect reference cycles, periodically executing a cycle detection algorithm which looks for inaccessible cycles and deletes the objects involved.
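The cycle collection described above can be observed directly. This sketch (not part of the original FAQ; the Node class is a made-up example) builds a reference cycle that reference counting alone cannot reclaim, then asks the collector to clean it up:

```python
import gc

class Node:
    def __init__(self):
        self.partner = None

# Build a two-object reference cycle, then drop all external references.
a, b = Node(), Node()
a.partner, b.partner = b, a
del a, b

# Each Node still holds a reference to the other, so both refcounts are
# nonzero even though the pair is unreachable. gc.collect() runs the
# cycle detector and returns the number of unreachable objects found.
found = gc.collect()
print(found >= 2)  # True
```

The count includes the two Node instances (and, in CPython, their attribute dictionaries), which is why the check is >= 2 rather than an exact number.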
The gc module provides functions to perform a garbage collection obtain debugging statistics and tune the collector s parameters Other implementations such as Jython or PyPy however can rely on a different mechanism such as a full blown garbage collector This difference can cause some subtle porting problems if your Python code depends on the behavior of the reference counting implementation In some Python implementations the following code which is fine in CPython will probably run out of file descriptors for file in very_long_list_of_files f open file c f read 1 Indeed using CPython s reference counting and destructor scheme each new assignment to f closes the previous file With a traditional GC however those file objects will only get collected and closed at varying and possibly long intervals If you want to write code that will work with any Python implementation you should explicitly close the file or use the with statement this will work regardless of memory management scheme for file in very_long_list_of_files with open file as f c f read 1 Why doesn t CPython use a more traditional garbage collection scheme For one thing this is not a C standard feature and hence it s not portable Yes we know about the Boehm GC library It has bits of assembler code for most common platforms not for all of them and although it is mostly transparent it isn t completely transparent patches are required to get Python to work with it Traditional GC also becomes a problem when Python is embedded into other applications While in a standalone Python it s fine to replace the standard malloc and free with versions provided by the GC library an application embedding Python may want to have its own substitute for malloc and free and may not want Python s Right now CPython works with anything that implements malloc and free properly Why isn t all memory freed when CPython exits Objects referenced from the global namespaces of Python modules are not always deallocated when Python exits 
This may happen if there are circular references. There are also certain bits of memory that are allocated by the C library that are impossible to free (e.g. a tool like Purify will complain about these). Python is, however, aggressive about cleaning up memory on exit and does try to destroy every single object.

If you want to force Python to delete certain things on deallocation, use the atexit module to run a function that will force those deletions.

Why are there separate tuple and list data types?

Lists and tuples, while similar in many respects, are generally used in fundamentally different ways. Tuples can be thought of as being similar to Pascal records or C structs; they're small collections of related data which may be of different types which are operated on as a group. For example, a Cartesian coordinate is appropriately represented as a tuple of two or three numbers.

Lists, on the other hand, are more like arrays in other languages. They tend to hold a varying number of objects, all of which have the same type and which are operated on one-by-one. For example, os.listdir() returns a list of strings representing the files in the current directory. Functions which operate on this output would generally not break if you added another file or two to the directory.

Tuples are immutable, meaning that once a tuple has been created, you can't replace any of its elements with a new value. Lists are mutable, meaning that you can always change a list's elements. Only immutable elements can be used as dictionary keys, and hence only tuples and not lists can be used as keys.

How are lists implemented in CPython?

CPython's lists are really variable-length arrays, not Lisp-style linked lists. The implementation uses a contiguous array of references to other objects, and keeps a pointer to this array and the array's length in a list head structure.

This makes indexing a list a[i] an operation whose cost is independent of the size of the list or the value of the index.

When items are appended or inserted, the
array of references is resized. Some cleverness is applied to improve the performance of appending items repeatedly; when the array must be grown, some extra space is allocated so the next few times don't require an actual resize.

How are dictionaries implemented in CPython?

CPython's dictionaries are implemented as resizable hash tables. Compared to B-trees, this gives better performance for lookup (the most common operation by far) under most circumstances, and the implementation is simpler.

Dictionaries work by computing a hash code for each key stored in the dictionary using the hash() built-in function. The hash code varies widely depending on the key and a per-process seed; for example, 'Python' could hash to -539294296 while 'python', a string that differs by a single bit, could hash to 1142331976. The hash code is then used to calculate a location in an internal array where the value will be stored. Assuming that you're storing keys that all have different hash values, this means that dictionaries take constant time, O(1) in Big-O notation, to retrieve a key.

Why must dictionary keys be immutable?

The hash table implementation of dictionaries uses a hash value calculated from the key value to find the key. If the key were a mutable object, its value could change, and thus its hash could also change. But since whoever changes the key object can't tell that it was being used as a dictionary key, it can't move the entry around in the dictionary. Then, when you try to look up the same object in the dictionary it won't be found because its hash value is different. If you tried to look up the old value it wouldn't be found either, because the value of the object found in that hash bin would be different.

If you want a dictionary indexed with a list, simply convert the list to a tuple first; the function tuple(L) creates a tuple with the same entries as the list L. Tuples are immutable and can therefore be used as dictionary keys.

Some unacceptable solutions that have been proposed:

Hash lists by their address (object ID). This doesn't work because if you construct a new list with the same value it won't be
found, e.g.:

    mydict = {[1, 2]: '12'}
    print(mydict[[1, 2]])

would raise a KeyError exception because the id of the [1, 2] used in the second line differs from that in the first line. In other words, dictionary keys should be compared using ==, not using is.

Make a copy when using a list as a key. This doesn't work because the list, being a mutable object, could contain a reference to itself, and then the copying code would run into an infinite loop.

Allow lists as keys but tell the user not to modify them. This would allow a class of hard-to-track bugs in programs when you forgot or modified a list by accident. It also invalidates an important invariant of dictionaries: every value in d.keys() is usable as a key of the dictionary.

Mark lists as read-only once they are used as a dictionary key. The problem is that it's not just the top-level object that could change its value; you could use a tuple containing a list as a key. Entering anything as a key into a dictionary would require marking all objects reachable from there as read-only, and again, self-referential objects could cause an infinite loop.

There is a trick to get around this if you need to, but use it at your own risk: you can wrap a mutable structure inside a class instance which has both a __eq__() and a __hash__() method. You must then make sure that the hash value for all such wrapper objects that reside in a dictionary (or other hash-based structure) remain fixed while the object is in the dictionary (or other structure).

    class ListWrapper:
        def __init__(self, the_list):
            self.the_list = the_list

        def __eq__(self, other):
            return self.the_list == other.the_list

        def __hash__(self):
            l = self.the_list
            result = 98767 - len(l)*555
            for i, el in enumerate(l):
                try:
                    result = result * (hash(el) % 9999999) + 1001 + i
                except Exception:
                    result = (result % 7777777) + i * 333
            return result

Note that the hash computation is complicated by the possibility that some members of the list may be unhashable and also by the possibility of arithmetic overflow.

Furthermore it must always be the case that if o1 == o2 (i.e. o1.__eq__(o2) is
True), then hash(o1) == hash(o2) (i.e., o1.__hash__() == o2.__hash__()), regardless of whether the object is in a dictionary or not. If you fail to meet these restrictions, dictionaries and other hash-based structures will misbehave.

In the case of ListWrapper, whenever the wrapper object is in a dictionary the wrapped list must not change to avoid anomalies. Don't do this unless you are prepared to think hard about the requirements and the consequences of not meeting them correctly. Consider yourself warned.

Why doesn't list.sort() return the sorted list?

In situations where performance matters, making a copy of the list just to sort it would be wasteful. Therefore, list.sort() sorts the list in place. In order to remind you of that fact, it does not return the sorted list. This way, you won't be fooled into accidentally overwriting a list when you need a sorted copy but also need to keep the unsorted version around.

If you want to return a new list, use the built-in sorted() function instead. This function creates a new list from a provided iterable, sorts it and returns it. For example, here's how to iterate over the keys of a dictionary in sorted order:

    for key in sorted(mydict):
        ...  # do whatever with mydict[key]...

How do you specify and enforce an interface spec in Python?

An interface specification for a module as provided by languages such as C++ and Java describes the prototypes for the methods and functions of the module. Many feel that compile-time enforcement of interface specifications helps in the construction of large programs.

Python 2.6 adds an abc module that lets you define Abstract Base Classes (ABCs). You can then use isinstance() and issubclass() to check whether an instance or a class implements a particular ABC. The collections.abc module defines a set of useful ABCs such as Iterable, Container, and MutableMapping.

For Python, many of the advantages of interface specifications can be obtained by an appropriate test discipline for components. A good test suite for a module can both provide a regression test and serve as a module interface specification and a set of examples. Many Python modules can be run as a script
to provide a simple self-test. Even modules which use complex external interfaces can often be tested in isolation using trivial "stub" emulations of the external interface. The doctest and unittest modules or third-party test frameworks can be used to construct exhaustive test suites that exercise every line of code in a module.

An appropriate testing discipline can help build large, complex applications in Python as well as having interface specifications would. In fact, it can be better because an interface specification cannot test certain properties of a program. For example, the list.append() method is expected to add new elements to the end of some internal list; an interface specification cannot test that your list.append() implementation will actually do this correctly, but it's trivial to check this property in a test suite.

Writing test suites is very helpful, and you might want to design your code to make it easily tested. One increasingly popular technique, test-driven development, calls for writing parts of the test suite first, before you write any of the actual code. Of course Python allows you to be sloppy and not write test cases at all.

Why is there no goto?

In the 1970s people realized that unrestricted goto could lead to messy "spaghetti" code that was hard to understand and revise. In a high-level language, it is also unneeded as long as there are ways to branch (in Python, with if statements and or, and, and if-else expressions) and loop (with while and for statements, possibly containing continue and break).

One can also use exceptions to provide a "structured goto" that works even across function calls. Many feel that exceptions can conveniently emulate all reasonable uses of the "go" or "goto" constructs of C, Fortran, and other languages. For example:

    class label(Exception): pass  # declare a label

    try:
        ...
        if condition: raise label()  # goto label
        ...
    except label:  # where to goto
        pass
    ...

This doesn't allow you to jump into the middle of a loop, but that's usually considered an abuse of goto anyway. Use sparingly.
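The exception-as-goto pattern above can be made concrete with a runnable sketch; the function name and input data below are invented for illustration and are not part of the FAQ:

```python
# Using an exception as a structured "goto": jump out of a loop
# straight to a handler in one step.
class label(Exception):
    pass  # declare a label

def first_negative(numbers):
    try:
        for n in numbers:
            if n < 0:
                raise label  # "goto" the except block below
    except label:
        return n  # "where to goto": n is still bound to the hit
    return None

print(first_negative([3, 1, -4, 2]))  # -> -4
```

In practice a plain `return` or `break` covers most such cases; the exception form only pays off when the jump must cross several loop or call levels.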
Why can't raw strings (r-strings) end with a backslash?

More precisely, they can't end with an odd number of backslashes: the unpaired backslash at the end escapes the closing quote character, leaving an unterminated string.

Raw strings were designed to ease creating input for processors (chiefly regular expression engines) that want to do their own backslash escape processing. Such processors consider an unmatched trailing backslash to be an error anyway, so raw strings disallow that. In return, they allow you to pass on the string quote character by escaping it with a backslash. These rules work well when r-strings are used for their intended purpose.

If you're trying to build Windows pathnames, note that all Windows system calls accept forward slashes too:

    f = open("/mydir/file.txt")  # works fine!

If you're trying to build a pathname for a DOS command, try e.g. one of

    dir = r"\this\is\my\dos\dir" "\\"
    dir = r"\this\is\my\dos\dir\ "[:-1]
    dir = "\\this\\is\\my\\dos\\dir\\"

Why doesn't Python have a "with" statement for attribute assignments?

Python has a with statement that wraps the execution of a block, calling code on the entrance and exit from the block. Some languages have a construct that looks like this:

    with obj:
        a = 1               # equivalent to obj.a = 1
        total = total + 1   # obj.total = obj.total + 1

In Python, such a construct would be ambiguous.

Other languages, such as Object Pascal, Delphi, and C++, use static types, so it's possible to know, in an unambiguous way, what member is being assigned to. This is the main point of static typing: the compiler always knows the scope of every variable at compile time.

Python uses dynamic types. It is impossible to know in advance which attribute will be referenced at runtime. Member attributes may be added or removed from objects on the fly. This makes it impossible to know, from a simple reading, what attribute is being referenced: a local one, a global one, or a member attribute?

For instance, take the following incomplete snippet:

    def foo(a):
        with a:
            print(x)

The snippet assumes that a must have a member attribute called x. However, there is nothing in Python that tells the interpreter this. What should happen if a is, let us
say, an integer? If there is a global variable named x, will it be used inside the with block? As you see, the dynamic nature of Python makes such choices much harder.

The primary benefit of with and similar language features (reduction of code volume) can, however, easily be achieved in Python by assignment. Instead of:

    function(args).mydict[index][index].a = 21
    function(args).mydict[index][index].b = 42
    function(args).mydict[index][index].c = 63

write this:

    ref = function(args).mydict[index][index]
    ref.a = 21
    ref.b = 42
    ref.c = 63

This also has the side effect of increasing execution speed, because name bindings are resolved at run-time in Python, and the second version only needs to perform the resolution once.

Similar proposals that would introduce syntax to further reduce code volume, such as using a leading dot, have been rejected in favour of explicitness (see https://mail.python.org/pipermail/python-ideas/2016-May/040070.html).

Why don't generators support the with statement?

For technical reasons, a generator used directly as a context manager would not work correctly. When, as is most common, a generator is used as an iterator run to completion, no closing is needed. When it is, wrap it as contextlib.closing(generator) in the with statement.

Why are colons required for the if/while/def/class statements?

The colon is required primarily to enhance readability (one of the results of the experimental ABC language). Consider this:

    if a == b
        print(a)

versus

    if a == b:
        print(a)

Notice how the second one is slightly easier to read. Notice further how a colon sets off the example in this FAQ answer; it's a standard usage in English.

Another minor reason is that the colon makes it easier for editors with syntax highlighting; they can look for colons to decide when indentation needs to be increased instead of having to do a more elaborate parsing of the program text.

Why does Python allow commas at the end of lists and tuples?

Python lets you add a trailing comma at the end of lists, tuples, and dictionaries:

    [1, 2, 3,]
    ('a', 'b', 'c',)
    d = {
        "A": [1, 5],
        "B": [6, 7],  # last trailing comma is optional but good style
    }

There are several reasons to allow this.

When you have a literal value for a list, tuple, or dictionary spread across multiple lines, it's easier to add more elements because you don't have to remember to add a comma to the previous line. The lines can also be reordered without creating a syntax error.

Accidentally omitting the comma can lead to errors that are hard to diagnose. For example:

    x = [
        "fee",
        "fie"
        "foo",
        "fum"
    ]

This list looks like it has four elements, but it actually contains three: "fee", "fiefoo" and "fum". Always adding the comma avoids this source of error.

Allowing the trailing comma may also make programmatic code generation easier.
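The missing-comma pitfall described above is easy to verify directly; this minimal runnable sketch reproduces it:

```python
# The missing comma after "fie" silently concatenates two adjacent
# string literals, so this list has three elements, not four.
x = [
    "fee",
    "fie"
    "foo",
    "fum",
]
print(len(x))  # -> 3
print(x[1])    # -> fiefoo
```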
plistlib: Generate and parse Apple plist files

Source code: Lib/plistlib.py

This module provides an interface for reading and writing the "property list" files used by Apple, primarily on macOS and iOS. This module supports both binary and XML plist files.

The property list (plist) file format is a simple serialization supporting basic object types, like dictionaries, lists, numbers and strings. Usually the top level object is a dictionary.

To write out and to parse a plist file, use the dump() and load() functions. To work with plist data in bytes objects, use dumps() and loads().

Values can be strings, integers, floats, booleans, tuples, lists, dictionaries (but only with string keys), bytes, bytearray or datetime.datetime objects.

Changed in version 3.4: New API, old API deprecated. Support for binary format plists added.

Changed in version 3.8: Support added for reading and writing UID tokens in binary plists as used by NSKeyedArchiver and NSKeyedUnarchiver.

Changed in version 3.9: Old API removed.

See also: PList manual page, Apple's documentation of the file format.

This module defines the following functions:

plistlib.load(fp, *, fmt=None, dict_type=dict)

Read a plist file. fp should be a readable and binary file object. Return the unpacked root object (which usually is a dictionary).

The fmt is the format of the file and the following values are valid:

    None: Autodetect the file format
    FMT_XML: XML file format
    FMT_BINARY: Binary plist format

The dict_type is the type used for dictionaries that are read from the plist file.

XML data for the FMT_XML format is parsed using the Expat parser from xml.parsers.expat (see its documentation for possible exceptions on ill-formed XML). Unknown elements will simply be ignored by the plist parser.

The parser for the binary format raises InvalidFileException when the file cannot be parsed.

New in version 3.4.

plistlib.loads(data, *, fmt=None, dict_type=dict)

Load a plist from a bytes object. See load() for an explanation of the keyword arguments.

New in version 3.4.

plistlib.dump(value, fp, *, fmt=FMT_XML, sort_keys=True, skipkeys=False)

Write value to a plist file. fp should be a writable, binary file object.

The fmt argument specifies the format of the plist file and can be one of the following values:

    FMT_XML: XML formatted plist file
    FMT_BINARY: Binary formatted plist file

When sort_keys is true (the default) the keys for dictionaries will be written to the plist in sorted order, otherwise they will be written in the iteration order of the dictionary.

When skipkeys is false (the default) the function raises TypeError when a key of a dictionary is not a string, otherwise such keys are skipped.

A TypeError will be raised if the object is of an unsupported type or a container that contains objects of unsupported types.

An OverflowError will be raised for integer values that cannot be represented in (binary) plist files.

New in version 3.4.

plistlib.dumps(value, *, fmt=FMT_XML, sort_keys=True, skipkeys=False)

Return value as a plist-formatted bytes object. See the documentation for dump() for an explanation of the keyword arguments of this function.

New in version 3.4.

The following classes are available:

class plistlib.UID(data)

Wraps an int. This is used when reading or writing NSKeyedArchiver encoded data, which contains UID (see PList manual). It has one attribute, data, which can be used to retrieve the int value of the UID. data must be in the range 0 <= data < 2**64.

New in version 3.8.

The following constants are available:

plistlib.FMT_XML: The XML format for plist files. New in version 3.4.

plistlib.FMT_BINARY: The binary format for plist files. New in version 3.4.

Examples

Generating a plist:

    import datetime
    import plistlib

    pl = dict(
        aString="Doodah",
        aList=["A", "B", 12, 32.1, [1, 2, 3]],
        aFloat=0.1,
        anInt=728,
        aDict=dict(
            anotherString="<hello & 'hi' there!>",
            aThirdString="M\xe4ssig, Ma\xdf",
            aTrueValue=True,
            aFalseValue=False,
        ),
        someData=b"<binary gunk>",
        someMoreData=b"<lots of binary gunk>" * 10,
        aDate=datetime.datetime.now(),
    )
    print(plistlib.dumps(pl).decode())

Parsing a plist:

    import plistlib

    plist = b"""<plist version="1.0">
    <dict>
        <key>foo</key>
        <string>bar</string>
    </dict>
    </plist>"""
    pl = plistlib.loads(plist)
    print(pl["foo"])
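The two examples above can be combined into a short round trip through the binary format; the dictionary contents here are invented for the sketch:

```python
import plistlib

# Serialize a dictionary to the binary plist format and read it back.
data = {"name": "example", "count": 3, "enabled": True}
blob = plistlib.dumps(data, fmt=plistlib.FMT_BINARY)
restored = plistlib.loads(blob)  # fmt=None autodetects the format
print(restored == data)  # -> True
```

Because loads() autodetects the format, the same call works unchanged whether the data was written with FMT_XML or FMT_BINARY.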
shelve: Python object persistence

Source code: Lib/shelve.py

A "shelf" is a persistent, dictionary-like object. The difference with "dbm" databases is that the values (not the keys!) in a shelf can be essentially arbitrary Python objects: anything that the pickle module can handle. This includes most class instances, recursive data types, and objects containing lots of shared sub-objects. The keys are ordinary strings.

shelve.open(filename, flag='c', protocol=None, writeback=False)

Open a persistent dictionary. The filename specified is the base filename for the underlying database. As a side effect, an extension may be added to the filename and more than one file may be created. By default, the underlying database file is opened for reading and writing. The optional flag parameter has the same interpretation as the flag parameter of dbm.open().

By default, pickles created with pickle.DEFAULT_PROTOCOL are used to serialize values. The version of the pickle protocol can be specified with the protocol parameter.

Because of Python semantics, a shelf cannot know when a mutable persistent-dictionary entry is modified. By default modified objects are written only when assigned to the shelf (see Example). If the optional writeback parameter is set to True, all entries accessed are also cached in memory, and written back on sync() and close(); this can make it handier to mutate mutable entries in the persistent dictionary, but, if many entries are accessed, it can consume vast amounts of memory for the cache, and it can make the close operation very slow since all accessed entries are written back (there is no way to determine which accessed entries are mutable, nor which ones were actually mutated).

Changed in version 3.10: pickle.DEFAULT_PROTOCOL is now used as the default pickle protocol.

Changed in version 3.11: Accepts path-like object for filename.

Note: Do not rely on the shelf being closed automatically; always call close() explicitly when you don't need it any more, or use shelve.open() as a context manager:

    with shelve.open('spam') as db:
        db['eggs'] = 'eggs'

Warning: Because the shelve module is backed by pickle, it is insecure to load a shelf from an untrusted source. Like with pickle, loading a shelf can execute arbitrary code.

Shelf objects support most of the methods and operations supported by dictionaries (except copying, constructors and operators | and |=). This eases the transition from dictionary-based scripts to those requiring persistent storage.

Two additional methods are supported:

Shelf.sync()

Write back all entries in the cache if the shelf was opened with writeback set to True. Also empty the cache and synchronize the persistent dictionary on disk, if feasible. This is called automatically when the shelf is closed with close().

Shelf.close()

Synchronize and close the persistent dict object. Operations on a closed shelf will fail with a ValueError.

See also: Persistent dictionary recipe with widely supported storage formats and having the speed of native dictionaries.

Restrictions

The choice of which database package will be used (such as dbm.ndbm or dbm.gnu) depends on which interface is available. Therefore it is not safe to open the database directly using dbm. The database is also (unfortunately) subject to the limitations of dbm, if it is used; this means that (the pickled representation of) the objects stored in the database should be fairly small, and in rare cases key collisions may cause the database to refuse updates.

The shelve module does not support concurrent read/write access to shelved objects. (Multiple simultaneous read accesses are safe.) When a program has a shelf open for writing, no other program should have it open for reading or writing. Unix file locking can be used to solve this, but this differs across Unix versions and requires knowledge about the database implementation used.

On macOS, dbm.ndbm can silently corrupt the database file on updates, which can cause hard crashes when trying to read from the database.

class shelve.Shelf(dict, protocol=None, writeback=False, keyencoding='utf-8')

A subclass of collections.abc.
MutableMapping which stores pickled values in the dict object.

By default, pickles created with pickle.DEFAULT_PROTOCOL are used to serialize values. The version of the pickle protocol can be specified with the protocol parameter. See the pickle documentation for a discussion of the pickle protocols.

If the writeback parameter is True, the object will hold a cache of all entries accessed and write them back to the dict at sync and close times. This allows natural operations on mutable entries, but can consume much more memory and make sync and close take a long time.

The keyencoding parameter is the encoding used to encode keys before they are used with the underlying dict.

A Shelf object can also be used as a context manager, in which case it will be automatically closed when the with block ends.

Changed in version 3.2: Added the keyencoding parameter; previously, keys were always encoded in UTF-8.

Changed in version 3.4: Added context manager support.

Changed in version 3.10: pickle.DEFAULT_PROTOCOL is now used as the default pickle protocol.

class shelve.BsdDbShelf(dict, protocol=None, writeback=False, keyencoding='utf-8')

A subclass of Shelf which exposes first(), next(), previous(), last() and set_location() methods. These are available in the third-party bsddb module from pybsddb but not in other database modules. The dict object passed to the constructor must support those methods. This is generally accomplished by calling one of bsddb.hashopen(), bsddb.btopen() or bsddb.rnopen(). The optional protocol, writeback, and keyencoding parameters have the same interpretation as for the Shelf class.

class shelve.DbfilenameShelf(filename, flag='c', protocol=None, writeback=False)

A subclass of Shelf which accepts a filename instead of a dict-like object. The underlying file will be opened using dbm.open(). By default, the file will be created and opened for both read and write. The optional flag parameter has the same interpretation as for the open() function. The optional protocol and writeback parameters have the same interpretation as for the Shelf class.

Example

To summarize the interface (key is a string, data is an arbitrary object):

    import shelve

    d = shelve.open(filename)  # open -- file may get suffix added by
                               # low-level library

    d[key] = data              # store data at key (overwrites old data if
                               # using an existing key)
    data = d[key]              # retrieve a COPY of data at key (raise
                               # KeyError if no such key)
    del d[key]                 # delete data stored at key (raises KeyError
                               # if no such key)

    flag = key in d            # true if the key exists
    klist = list(d.keys())     # a list of all existing keys (slow!)

    # as d was opened WITHOUT writeback=True, beware:
    d['xx'] = [0, 1, 2]        # this works as expected, but...
    d['xx'].append(3)          # *this doesn't!* -- d['xx'] is STILL [0, 1, 2]!

    # having opened d without writeback=True, you need to code carefully:
    temp = d['xx']             # extracts the copy
    temp.append(5)             # mutates the copy
    d['xx'] = temp             # stores the copy right back, to persist it

    # or, d = shelve.open(filename, writeback=True) would let you just code
    # d['xx'].append(5) and have it work as expected, BUT it would also
    # consume more memory and make the d.close() operation slower.

    d.close()                  # close it

See also:

Module dbm: Generic interface to dbm-style databases.
Module pickle: Object serialization used by shelve.
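The interface summary above uses an unbound filename, so it cannot run as-is; a runnable variant of the writeback behaviour, using a temporary directory and invented keys, can be sketched like this:

```python
import os
import shelve
import tempfile

# Hypothetical base filename; shelve may add an extension to it.
path = os.path.join(tempfile.mkdtemp(), "demo_shelf")

with shelve.open(path, writeback=True) as db:
    db["xs"] = [0, 1, 2]
    db["xs"].append(3)  # cached in memory, written back at close

with shelve.open(path) as db:
    print(db["xs"])  # -> [0, 1, 2, 3]
```

Without writeback=True, the append() would mutate only a throwaway copy and the stored value would still be [0, 1, 2], exactly as the example above warns.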
dis: Disassembler for Python bytecode

Source code: Lib/dis.py

The dis module supports the analysis of CPython bytecode by disassembling it. The CPython bytecode which this module takes as an input is defined in the file Include/opcode.h and used by the compiler and the interpreter.

CPython implementation detail: Bytecode is an implementation detail of the CPython interpreter. No guarantees are made that bytecode will not be added, removed, or changed between versions of Python. Use of this module should not be considered to work across Python VMs or Python releases.

Changed in version 3.6: Use 2 bytes for each instruction. Previously the number of bytes varied by instruction.

Changed in version 3.10: The argument of jump, exception handling and loop instructions is now the instruction offset rather than the byte offset.

Changed in version 3.11: Some instructions are accompanied by one or more inline cache entries, which take the form of CACHE instructions. These instructions are hidden by default, but can be shown by passing show_caches=True to any dis utility. Furthermore, the interpreter now adapts the bytecode to specialize it for different runtime conditions. The adaptive bytecode can be shown by passing adaptive=True.

Changed in version 3.12: The argument of a jump is the offset of the target instruction relative to the instruction that appears immediately after the jump instruction's CACHE entries. As a consequence, the presence of the CACHE instructions is transparent for forward jumps but needs to be taken into account when reasoning about backward jumps.

Example: Given the function myfunc():

    def myfunc(alist):
        return len(alist)

the following command can be used to display the disassembly of myfunc():

    >>> dis.dis(myfunc)
      2           0 RESUME                   0

      3           2 LOAD_GLOBAL              1 (NULL + len)
                 12 LOAD_FAST                0 (alist)
                 14 CALL                     1
                 22 RETURN_VALUE

(The "2" is a line number.)

Command-line interface

The dis module can be invoked as a script from the command line:

    python -m dis [-h] [infile]

The following options are accepted:

-h, --help: Display usage and exit.
If infile is specified, its disassembled code will be written to stdout. Otherwise, disassembly is performed on compiled source code received from stdin.

Bytecode analysis

New in version 3.4.

The bytecode analysis API allows pieces of Python code to be wrapped in a Bytecode object that provides easy access to details of the compiled code.

class dis.Bytecode(x, *, first_line=None, current_offset=None, show_caches=False, adaptive=False)

Analyse the bytecode corresponding to a function, generator, asynchronous generator, coroutine, method, string of source code, or a code object (as returned by compile()).

This is a convenience wrapper around many of the functions listed below, most notably get_instructions(), as iterating over a Bytecode instance yields the bytecode operations as Instruction instances.

If first_line is not None, it indicates the line number that should be reported for the first source line in the disassembled code. Otherwise, the source line information (if any) is taken directly from the disassembled code object.

If current_offset is not None, it refers to an instruction offset in the disassembled code. Setting this means dis() will display a "current instruction" marker against the specified opcode.

If show_caches is True, dis() will display inline cache entries used by the interpreter to specialize the bytecode.

If adaptive is True, dis() will display specialized bytecode that may be different from the original bytecode.

classmethod from_traceback(tb, *, show_caches=False)

Construct a Bytecode instance from the given traceback, setting current_offset to the instruction responsible for the exception.

codeobj

The compiled code object.

first_line

The first source line of the code object (if available).

dis()

Return a formatted view of the bytecode operations (the same as printed by dis.dis(), but returned as a multi-line string).

info()

Return a formatted multi-line string with detailed information about the code object, like code_info().

Changed in version 3.7: This can now handle coroutine and asynchronous generator objects.
Changed in version 3.11: Added the show_caches and adaptive parameters.

Example:

    >>> bytecode = dis.Bytecode(myfunc)
    >>> for instr in bytecode:
    ...     print(instr.opname)
    ...
    RESUME
    LOAD_GLOBAL
    LOAD_FAST
    CALL
    RETURN_VALUE

Analysis functions

The dis module also defines the following analysis functions that convert the input directly to the desired output. They can be useful if only a single operation is being performed, so the intermediate analysis object isn't useful:

dis.code_info(x)

Return a formatted multi-line string with detailed code object information for the supplied function, generator, asynchronous generator, coroutine, method, source code string or code object.

Note that the exact contents of code info strings are highly implementation dependent and they may change arbitrarily across Python VMs or Python releases.

New in version 3.2.

Changed in version 3.7: This can now handle coroutine and asynchronous generator objects.

dis.show_code(x, *, file=None)

Print detailed code object information for the supplied function, method, source code string or code object to file (or sys.stdout if file is not specified).

This is a convenient shorthand for print(code_info(x), file=file), intended for interactive exploration at the interpreter prompt.

New in version 3.2.

Changed in version 3.4: Added file parameter.

dis.dis(x=None, *, file=None, depth=None, show_caches=False, adaptive=False)

Disassemble the x object. x can denote either a module, a class, a method, a function, a generator, an asynchronous generator, a coroutine, a code object, a string of source code or a byte sequence of raw bytecode. For a module, it disassembles all functions. For a class, it disassembles all methods (including class and static methods). For a code object or sequence of raw bytecode, it prints one line per bytecode instruction. It also recursively disassembles nested code objects. These can include generator expressions, nested functions, the bodies of nested classes, and the code objects used for annotation scopes. Strings are first compiled to code objects with the compile() built-in function before being disassembled. If no object is provided, this function
disassembles the last traceback The disassembly is written as text to the supplied file argument if provided and to sys stdout otherwise The maximal depth of recursion is limited by depth unless it is None depth 0 means no recursion If show_caches is True this function will display inline cache entries used by the interpreter to specialize the bytecode If adaptive is True this function will display specialized bytecode that may be different from the original bytecode Changed in version 3 4 Added file parameter Changed in version 3 7 Implemented recursive disassembling and added depth parameter Changed in version 3 7 This can now handle coroutine and asynchronous generator objects Changed in version 3 11 Added the show_caches and adaptive parameters dis distb tb None file None show_caches False adaptive False Disassemble the top of stack function of a traceback using the last traceback if none was passed The instruction causing the exception is indicated The disassembly is written as text to the supplied file argument if provided and to sys stdout otherwise Changed in version 3 4 Added file parameter Changed in version 3 11 Added the show_caches and adaptive parameters dis disassemble code lasti 1 file None show_caches False adaptive False dis disco code lasti 1 file None show_caches False adaptive False Disassemble a code object indicating the last instruction if lasti was provided The output is divided in the following columns 1 the line number for the first instruction of each line 2 the current instruction indicated as 3 a labelled instruction indicated with 4 the address of the instruction 5 the operation code name 6 operation parameters and 7 interpretation of the parameters in parentheses The parameter interpretation recognizes local and global variable names constant values branch targets and compare operators The disassembly is written as text to the supplied file argument if provided and to sys stdout otherwise Changed in version 3 4 Added file parameter 
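As a quick sketch of the analysis functions above (the exact opcode listing varies between CPython releases), dis.dis writes a disassembly to sys.stdout or to any file-like object passed as file, and code_info returns a formatted summary string:

```python
import dis
import io

def add_one(x):
    return x + 1

# Disassemble to stdout: one line per bytecode instruction.
dis.dis(add_one)

# The same text can be captured via the file argument.
buf = io.StringIO()
dis.dis(add_one, file=buf)
listing = buf.getvalue()

# code_info returns a formatted summary of the code object,
# the same text that show_code prints.
summary = dis.code_info(add_one)
print(summary)
```

On CPython 3.12 the listing for this function includes RESUME, LOAD_FAST, LOAD_CONST, BINARY_OP and RETURN_VALUE instructions.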
Changed in version 3 11 Added the show_caches and adaptive parameters dis get_instructions x first_line None
show_caches False adaptive False Return an iterator over the instructions in the supplied function method source code string or code object The iterator generates a series of Instruction named tuples giving the details of each operation in the supplied code If first_line is not None it indicates the line number that should be reported for the first source line in the disassembled code Otherwise the source line information if any is taken directly from the disassembled code object The show_caches and adaptive parameters work as they do in dis New in version 3 4 Changed in version 3 11 Added the show_caches and adaptive parameters dis findlinestarts code This generator function uses the co_lines method of the code object code to find the offsets which are starts of lines in the source code They are generated as offset lineno pairs Changed in version 3 6 Line numbers can be decreasing Before they were always increasing Changed in version 3 10 The PEP 626 co_lines method is used instead of the co_firstlineno and co_lnotab attributes of the code object dis findlabels code Detect all offsets in the raw compiled bytecode string code which are jump targets and return a list of these offsets dis stack_effect opcode oparg None jump None Compute the stack effect of opcode with argument oparg If the code has a jump target and jump is True stack_effect will return the stack effect of jumping If jump is False it will return the stack effect of not jumping And if jump is None default it will return the maximal stack effect of both cases New in version 3 4 Changed in version 3 8 Added jump parameter Python Bytecode Instructions The get_instructions function and Bytecode class provide details of bytecode instructions as Instruction instances class dis Instruction Details for a bytecode operation opcode numeric code for operation corresponding to the opcode values listed below and the bytecode values in the Opcode collections opname human readable name for operation arg numeric 
argument to operation if any otherwise None argval resolved arg value if any otherwise None argrepr human readable description of operation argument if any otherwise an empty string offset start index of operation within bytecode sequence starts_line line started by this opcode if any otherwise None is_jump_target True if other code jumps to here otherwise False positions dis Positions object holding the start and end locations that are covered by this instruction New in version 3 4 Changed in version 3 11 Field positions is added class dis Positions In case the information is not available some fields might be None lineno end_lineno col_offset end_col_offset New in version 3 11 The Python compiler currently generates the following bytecode instructions General instructions In the following We will refer to the interpreter stack as STACK and describe operations on it as if it was a Python list The top of the stack corresponds to STACK 1 in this language NOP Do nothing code Used as a placeholder by the bytecode optimizer and to generate line tracing events POP_TOP Removes the top of stack item STACK pop END_FOR Removes the top two values from the stack Equivalent to POP_TOP POP_TOP Used to clean up at the end of loops hence the name New in version 3 12 END_SEND Implements del STACK 2 Used to clean up when a generator exits New in version 3 12 COPY i Push the i th item to the top of the stack without removing it from its original location assert i 0 STACK append STACK i New in version 3 11 SWAP i Swap the top of the stack with the i th element STACK i STACK 1 STACK 1 STACK i New in version 3 11 CACHE Rather than being an actual instruction this opcode is used to mark extra space for the interpreter to cache useful data directly in the bytecode itself It is automatically hidden by all dis utilities but can be viewed with show_caches True Logically this space is part of the preceding instruction Many opcodes expect to be followed by an exact number of caches and will 
instruct the interpreter to skip over them at runtime Populated caches can look like arbitrary instructions so great care should be taken when reading or modifying raw adaptive bytecode containing quickened data New in version 3 11 Unary operations Unary operations take the top of the stack apply the operation and push the result back on the stack UNARY_NEGATIVE Implements STACK 1 STACK 1 UNARY_NOT Implements STACK 1 not STACK 1 UNARY_INVERT Implements STACK 1 STACK 1 GET_ITER Implements STACK 1 iter STACK 1 GET_YIELD_FROM_ITER If STACK 1 is a generator iterator or coroutine object it is left as is Otherwise implements STACK 1 iter STACK 1 New in version 3 5 Binary and in place operations Binary operations remove the top two items from the stack STACK 1 and STACK 2 They perform the operation then put the result back on the stack In place operations are like binary operations but the operation is done in place when STACK 2 supports it and the resulting STACK 1 may be but does not have to be the original STACK 2 BINARY_OP op Implements the binary and in place operators depending on the value of op rhs STACK pop lhs STACK pop STACK append lhs op rhs New in version 3 11 BINARY_SUBSCR Implements key STACK pop container STACK pop STACK append container key STORE_SUBSCR Implements key STACK pop container STACK pop value STACK pop container key value DELETE_SUBSCR Implements key STACK pop container STACK pop del container key BINARY_SLICE Implements end STACK pop start STACK pop container STACK pop STACK append container start end New in version 3 12 STORE_SLICE Implements end STACK pop start STACK pop container STACK pop values STACK pop container start end value New in version 3 12 Coroutine opcodes GET_AWAITABLE where Implements STACK 1 get_awaitable STACK 1 where get_awaitable o returns o if o is a coroutine object or a generator object with the CO_ITERABLE_COROUTINE flag or resolves o __await__ If the where operand is nonzero it indicates where the instruction occurs 1 After a call to __aenter__ 2 After a call to __aexit__ New in version 3 5 Changed in version 3 11
Previously this instruction did not have an oparg GET_AITER Implements STACK 1 STACK 1 __aiter__ New in version 3 5 Changed in version 3 7 Returning awaitable objects from __aiter__ is no longer supported GET_ANEXT Implement STACK append get_awaitable STACK 1 __anext__ to the stack See GET_AWAITABLE for details about get_awaitable New in version 3 5 END_ASYNC_FOR Terminates an async for loop Handles an exception raised when awaiting a next item The stack contains the async iterable in STACK 2 and the raised exception in STACK 1 Both are popped If the exception is not StopAsyncIteration it is re raised New in version 3 8 Changed in version 3 11 Exception representation on the stack now consist of one not three items CLEANUP_THROW Handles an exception raised during a throw or close call through the current frame If STACK 1 is an instance of StopIteration pop three values from the stack and push its value member Otherwise re raise STACK 1 New in version 3 12 BEFORE_ASYNC_WITH Resolves __aenter__ and __aexit__ from STACK 1 Pushes __aexit__ and result of __aenter__ to the stack STACK extend __aexit__ __aenter__ New in version 3 5 Miscellaneous opcodes SET_ADD i Implements item STACK pop set add STACK i item Used to implement set comprehensions LIST_APPEND i Implements item STACK pop list append STACK i item Used to implement list comprehensions MAP_ADD i Implements value STACK pop key STACK pop dict __setitem__ STACK i key value Used to implement dict comprehensions New in version 3 1 Changed in version 3 8 Map value is STACK 1 and map key is STACK 2 Before those were reversed For all of the SET_ADD LIST_APPEND and MAP_ADD instructions while the added value or key value pair is popped off the container object remains on the stack so that it is available for further iterations of the loop RETURN_VALUE Returns with STACK 1 to the caller of the function RETURN_CONST consti Returns with co_consts consti to the caller of the function New in version 3 12 YIELD_VALUE Yields 
STACK pop from a generator Changed in version 3 11 oparg set to be the stack depth Changed in version
3 12 oparg set to be the exception block depth for efficient closing of generators SETUP_ANNOTATIONS Checks whether __annotations__ is defined in locals if not it is set up to an empty dict This opcode is only emitted if a class or module body contains variable annotations statically New in version 3 6 POP_EXCEPT Pops a value from the stack which is used to restore the exception state Changed in version 3 11 Exception representation on the stack now consist of one not three items RERAISE Re raises the exception currently on top of the stack If oparg is non zero pops an additional value from the stack which is used to set f_lasti of the current frame New in version 3 9 Changed in version 3 11 Exception representation on the stack now consist of one not three items PUSH_EXC_INFO Pops a value from the stack Pushes the current exception to the top of the stack Pushes the value originally popped back to the stack Used in exception handlers New in version 3 11 CHECK_EXC_MATCH Performs exception matching for except Tests whether the STACK 2 is an exception matching STACK 1 Pops STACK 1 and pushes the boolean result of the test New in version 3 11 CHECK_EG_MATCH Performs exception matching for except Applies split STACK 1 on the exception group representing STACK 2 In case of a match pops two items from the stack and pushes the non matching subgroup None in case of full match followed by the matching subgroup When there is no match pops one item the match type and pushes None New in version 3 11 WITH_EXCEPT_START Calls the function in position 4 on the stack with arguments type val tb representing the exception at the top of the stack Used to implement the call context_manager __exit__ exc_info when an exception has occurred in a with statement New in version 3 9 Changed in version 3 11 The __exit__ function is in position 4 of the stack rather than 7 Exception representation on the stack now consist of one not three items LOAD_ASSERTION_ERROR Pushes AssertionError onto 
the stack Used by the assert statement New in version 3 9 LOAD_BUILD_CLASS Pushes builtins __build_class__ onto the stack It is later called to construct a class BEFORE_WITH This opcode performs several operations before a with block starts First it loads __exit__ from the context manager and pushes it onto the stack for later use by WITH_EXCEPT_START Then __enter__ is called Finally the result of calling the __enter__ method is pushed onto the stack New in version 3 11 GET_LEN Perform STACK append len STACK 1 New in version 3 10 MATCH_MAPPING If STACK 1 is an instance of collections abc Mapping or more technically if it has the Py_TPFLAGS_MAPPING flag set in its tp_flags push True onto the stack Otherwise push False New in version 3 10 MATCH_SEQUENCE If STACK 1 is an instance of collections abc Sequence and is not an instance of str bytes bytearray or more technically if it has the Py_TPFLAGS_SEQUENCE flag set in its tp_flags push True onto the stack Otherwise push False New in version 3 10 MATCH_KEYS STACK 1 is a tuple of mapping keys and STACK 2 is the match subject If STACK 2 contains all of the keys in STACK 1 push a tuple containing the corresponding values Otherwise push None New in version 3 10 Changed in version 3 11 Previously this instruction also pushed a boolean value indicating success True or failure False STORE_NAME namei Implements name STACK pop namei is the index of name in the attribute co_names of the code object The compiler tries to use STORE_FAST or STORE_GLOBAL if possible DELETE_NAME namei Implements del name where namei is the index into co_names attribute of the code object UNPACK_SEQUENCE count Unpacks STACK 1 into count individual values which are put onto the stack right to left Require there to be exactly count values assert len STACK 1 count STACK extend STACK pop count 1 1 UNPACK_EX counts Implements assignment with a starred target Unpacks an iterable in STACK 1 into individual values where the total number of values can be 
smaller than the number of items in the iterable one of the new values will be a list of all leftover items The number of values before and after the list value is limited to 255 The number of values before the list value is encoded in the argument of the opcode The number of values after the list if any is encoded using an EXTENDED_ARG As a consequence the argument can be seen as a two bytes values where the low byte of counts is the number of values before the list value the high byte of counts the number of values after it The extracted values are put onto the stack right to left i e a b c d will be stored after execution as STACK extend a b c STORE_ATTR namei Implements obj STACK pop value STACK pop obj name value where namei is the index of name in co_names of the code object DELETE_ATTR namei Implements obj STACK pop del obj name where namei is the index of name into co_names of the code object STORE_GLOBAL namei Works as STORE_NAME but stores the name as a global DELETE_GLOBAL namei Works as DELETE_NAME but deletes a global name LOAD_CONST consti Pushes co_consts consti onto the stack LOAD_NAME namei Pushes the value associated with co_names namei onto the stack The name is looked up within the locals then the globals then the builtins LOAD_LOCALS Pushes a reference to the locals dictionary onto the stack This is used to prepare namespace dictionaries for LOAD_FROM_DICT_OR_DEREF and LOAD_FROM_DICT_OR_GLOBALS New in version 3 12 LOAD_FROM_DICT_OR_GLOBALS i Pops a mapping off the stack and looks up the value for co_names namei If the name is not found there looks it up in the globals and then the builtins similar to LOAD_GLOBAL This is used for loading global variables in annotation scopes within class bodies New in version 3 12 BUILD_TUPLE count Creates a tuple consuming count items from the stack and pushes the resulting tuple onto the stack assert count 0 STACK values STACK count STACK count STACK append tuple values BUILD_LIST count Works as BUILD_TUPLE but creates a list BUILD_SET count Works as BUILD_TUPLE but creates a set BUILD_MAP count Pushes a new
dictionary object onto the stack Pops 2 count items so that the dictionary holds count entries STACK 4 STACK 3 STACK 2 STACK 1 Changed in version 3 5 The dictionary is created from stack items instead of creating an empty dictionary pre sized to hold count items BUILD_CONST_KEY_MAP count The version of BUILD_MAP specialized for constant keys Pops the top element on the stack which contains a tuple of keys then starting from STACK 2 pops count values to form values in the built dictionary New in version 3 6 BUILD_STRING count Concatenates count strings from the stack and pushes the resulting string onto the stack New in version 3 6 LIST_EXTEND i Implements seq STACK pop list extend STACK i seq Used to build lists New in version 3 9 SET_UPDATE i Implements seq STACK pop set update STACK i seq Used to build sets New in version 3 9 DICT_UPDATE i Implements map STACK pop dict update STACK i map Used to build dicts New in version 3 9 DICT_MERGE i Like DICT_UPDATE but raises an exception for duplicate keys New in version 3 9 LOAD_ATTR namei If the low bit of namei is not set this replaces STACK 1 with getattr STACK 1 co_names namei 1 If the low bit of namei is set this will attempt to load a method named co_names namei 1 from the STACK 1 object STACK 1 is popped This bytecode distinguishes two cases if STACK 1 has a method with the correct name the bytecode pushes the unbound method and STACK 1 STACK 1 will be used as the first argument self by CALL when calling the unbound method Otherwise NULL and the object returned by the attribute lookup are pushed Changed in version 3 12 If the low bit of namei is set then a NULL or self is pushed to the stack before the attribute or unbound method respectively LOAD_SUPER_ATTR namei This opcode implements super both in its zero argument and two argument forms e g super method super attr and super cls self method super cls self attr It pops three values from the stack from top of stack down self the first argument to the current 
method cls the class within which the current method was defined the global super With respect to its argument it works similarly to LOAD_ATTR except that namei is shifted left by 2 bits instead of 1 The low bit of namei signals to attempt a method load as with LOAD_ATTR which results in pushing None and the loaded method When it is unset a single value is pushed to the stack The second low bit of namei if set means that this was a two argument call to super unset means zero argument New in version 3 12 COMPARE_OP opname Performs a Boolean operation The operation name can be found in cmp_op opname IS_OP invert Performs is comparison or is not if invert is 1 New in version 3 9 CONTAINS_OP invert Performs in comparison or not in if invert is 1 New in version 3 9 IMPORT_NAME namei Imports the module co_names namei STACK 1 and STACK 2 are popped and provide the fromlist and level arguments of __import__ The module object is pushed onto the stack The current namespace is not affected for a proper import statement a subsequent STORE_FAST instruction modifies the namespace IMPORT_FROM namei Loads the attribute co_names namei from the module found in STACK 1 The resulting object is pushed onto the stack to be subsequently stored by a STORE_FAST instruction JUMP_FORWARD delta Increments bytecode counter by delta JUMP_BACKWARD delta Decrements bytecode counter by delta Checks for interrupts New in version 3 11 JUMP_BACKWARD_NO_INTERRUPT delta Decrements bytecode counter by delta Does not check for interrupts New in version 3 11 POP_JUMP_IF_TRUE delta If STACK 1 is true increments the bytecode counter by delta STACK 1 is popped Changed in version 3 11 The oparg is now a relative delta rather than an absolute target This opcode is a pseudo instruction replaced in final bytecode by the directed versions forward backward Changed in version 3 12 This is no longer a pseudo instruction POP_JUMP_IF_FALSE delta If STACK 1 is false increments the bytecode counter by delta STACK 1 is popped Changed in version 3 11 The oparg is now a relative delta rather than an absolute target This opcode
is a pseudo instruction replaced in final bytecode by the directed versions forward backward Changed in version 3 12 This is no longer a pseudo instruction POP_JUMP_IF_NOT_NONE delta If STACK 1 is not None increments the bytecode counter by delta STACK 1 is popped This opcode is a pseudo instruction replaced in final bytecode by the directed versions forward backward New in version 3 11 Changed in version 3 12 This is no longer a pseudo instruction POP_JUMP_IF_NONE delta If STACK 1 is None increments the bytecode counter by delta STACK 1 is popped This opcode is a pseudo instruction replaced in final bytecode by the directed versions forward backward New in version 3 11 Changed in version 3 12 This is no longer a pseudo instruction FOR_ITER delta STACK 1 is an iterator Call its __next__ method If this yields a new value push it on the stack leaving the iterator below it If the iterator indicates it is exhausted then the byte code counter is incremented by delta Changed in version 3 12 Up until 3 11 the iterator was popped when it was exhausted LOAD_GLOBAL namei Loads the global named co_names namei 1 onto the stack Changed in version 3 11 If the low bit of namei is set then a NULL is pushed to the stack before the global variable LOAD_FAST var_num Pushes a reference to the local co_varnames var_num onto the stack Changed in version 3 12 This opcode is now only used in situations where the local variable is guaranteed to be initialized It cannot raise UnboundLocalError LOAD_FAST_CHECK var_num Pushes a reference to the local co_varnames var_num onto the stack raising an UnboundLocalError if the local variable has not been initialized New in version 3 12 LOAD_FAST_AND_CLEAR var_num Pushes a reference to the local co_varnames var_num onto the stack or pushes NULL onto the stack if the local variable has not been initialized and sets co_varnames var_num to NULL New in version 3 12 STORE_FAST var_num Stores STACK pop into the local co_varnames var_num DELETE_FAST var_num 
Deletes local co_varnames var_num MAKE_CELL i Creates a new cell in slot i If that slot is nonempty then that value is stored into the new cell New in version 3 11 LOAD_CLOSURE i Pushes a reference to the cell contained in slot i of the fast locals storage The name of the variable is co_fastlocalnames i Note that LOAD_CLOSURE is effectively an alias for LOAD_FAST It exists to keep bytecode a little more readable Changed in version 3 11 i is no longer offset by the length of co_varnames LOAD_DEREF i Loads the cell contained in slot i of the fast locals storage Pushes a reference to the object the cell contains on the stack Changed in version 3 11 i is no longer offset by the length of co_varnames LOAD_FROM_DICT_OR_DEREF i Pops a mapping off the stack and looks up the name associated with slot i of the fast locals storage in this mapping If the name is not found there loads it from the cell contained in slot i similar to LOAD_DEREF This is used for loading free variables in class bodies which previously used LOAD_CLASSDEREF and in annotation scopes within class bodies New in version 3 12 STORE_DEREF i Stores STACK pop into the cell contained in slot i of the fast locals storage Changed in version 3 11 i is no longer offset by the length of co_varnames DELETE_DEREF i Empties the cell contained in slot i of the fast locals storage Used by the del statement New in version 3 2 Changed in version 3 11 i is no longer offset by the length of co_varnames COPY_FREE_VARS n Copies the n free variables from the closure into the frame Removes the need for special code on the caller s side when calling closures New in version 3 11 RAISE_VARARGS argc Raises an exception using one of the 3 forms of the raise statement depending on the value of argc 0 raise re raise previous exception 1 raise STACK 1 raise exception instance or type at STACK 1 2 raise STACK 2 from STACK 1 raise exception instance or type at STACK 2 with __cause__ set to STACK 1 CALL argc Calls a callable object with the number of arguments specified by the
preceding KW_NAMES if any On the stack are in ascending order either NULL The callable The positional arguments The named arguments or The callable self The remaining positional arguments The named arguments argc is the total of the positional and named arguments excluding self when a NULL is not present CALL pops all arguments and the callable object off the stack calls the callable object with those arguments and pushes the return value returned by the callable object New in version 3 11 CALL_FUNCTION_EX flags Calls a callable object with variable set of positional and keyword arguments If the lowest bit of flags is set the top of the stack contains a mapping object containing additional keyword arguments Before the callable is called the mapping object and iterable object are each unpacked and their contents passed in as keyword and positional arguments respectively CALL_FUNCTION_EX pops all arguments and the callable object off the stack calls the callable object with those arguments and pushes the return value returned by the callable object New in version 3 6 PUSH_NULL Pushes a NULL to the stack Used in the call sequence to match the NULL pushed by LOAD_METHOD for non method calls New in version 3 11 KW_NAMES consti Prefixes CALL Stores a reference to co_consts consti into an internal variable for use by CALL co_consts consti must be a tuple of strings New in version 3 11 MAKE_FUNCTION flags Pushes a new function object on the stack From bottom to top the consumed stack must consist of values if the argument carries a specified flag value 0x01 a tuple of default values for positional only and positional or keyword parameters in positional order 0x02 a dictionary of keyword only parameters default values 0x04 a tuple of strings containing parameters annotations 0x08 a tuple containing cells for free variables making a closure the code associated with the function at STACK 1 Changed in version 3 10 Flag value 0x04 is a tuple of strings instead of dictionary 
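The call sequence described above can be observed directly with get_instructions. This is an illustrative sketch assuming CPython 3.11 or 3.12, where calling a plain local variable pushes the NULL slot under the callable with PUSH_NULL before CALL:

```python
import dis

def call_it(f):
    # Calling a plain local: on CPython 3.11/3.12 the compiler emits
    # PUSH_NULL (the NULL under the callable), LOAD_FAST f, then
    # CALL with argc == 0.
    return f()

# Collect the opcode names of the generated bytecode.
names = [ins.opname for ins in dis.get_instructions(call_it)]
print(names)
```

Calling a global instead (e.g. print) would not need a separate PUSH_NULL, because LOAD_GLOBAL with the low bit of its oparg set pushes the NULL itself.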
Changed in version 3 11 Qualified name at STACK 1 was removed BUILD_SLICE argc Pushes a slice object on the stack argc must be 2 or 3 If it is 2 implements end STACK pop start STACK pop STACK append slice start stop if it is 3 implements step STACK pop end STACK pop start STACK pop STACK append slice start end step See the slice built in function for more information EXTENDED_ARG ext Prefixes any opcode which has an argument too big to fit into the default one byte ext holds an additional byte which act as higher bits in the argument For each opcode at most three prefixal EXTENDED_ARG are allowed forming an argument from two byte to four byte FORMAT_VALUE flags Used for implementing formatted literal strings f strings Pops an optional fmt_spec from the stack then a required value flags is interpreted as follows flags 0x03 0x00 value is formatted as is flags 0x03 0x01 call str on value before formatting it flags 0x03 0x02 call repr on value before formatting it flags 0x03 0x03 call ascii on value before formatting it flags 0x04 0x04 pop fmt_spec from the stack and use it else use an empty fmt_spec Formatting is performed using PyObject_Format The result is pushed on the stack New in version 3 6 MATCH_CLASS count STACK 1 is a tuple of keyword attribute names STACK 2 is the class being matched against and STACK 3 is the match subject count is the number of positional sub patterns Pop STACK 1 STACK 2 and STACK 3 If STACK 3 is an instance of STACK 2 and has the positional and keyword attributes required by count and STACK 1 push a tuple of extracted attributes Otherwise push None New in version 3 10 Changed in version 3 11 Previously this instruction also pushed a boolean value indicating success True or failure False RESUME where A no op Performs internal tracing debugging and optimization checks The where operand marks where the RESUME occurs 0 The start of a function which is neither a generator coroutine nor an async generator 1 After a yield expression 2 After a yield from expression 3 After an await expression New in version 3 11 RETURN_GENERATOR Create a generator
coroutine or async generator from the current frame Used as the first opcode in the code object for the above mentioned callables Clear the current frame and return the newly created generator New in version 3 11 SEND delta Equivalent to STACK 1 STACK 2 send STACK 1 Used in yield from and await statements If the call raises StopIteration pop the top value from the stack push the exception s value attribute and increment the bytecode counter by delta New in version 3 11 HAVE_ARGUMENT This is not really an opcode It identifies the dividing line between opcodes in the range 0 255 which don t use their argument and those that do HAVE_ARGUMENT and HAVE_ARGUMENT respectively If your application uses pseudo instructions use the hasarg collection instead Changed in version 3 6 Now every instruction has an argument but opcodes HAVE_ARGUMENT ignore it Before only opcodes HAVE_ARGUMENT had an argument Changed in version 3 12 Pseudo instructions were added to the dis module and for them it is not true that comparison with HAVE_ARGUMENT indicates whether they use their arg CALL_INTRINSIC_1 Calls an intrinsic function with one argument Passes STACK 1 as the argument and sets STACK 1 to the result Used to implement functionality that is not performance critical The operand determines which intrinsic function is called Operand Description INTRINSIC_1_INVALID Not valid INTRINSIC_PRINT Prints the argument to standard out Used in the REPL INTRINSIC_IMPORT_STAR Performs import for the named module INTRINSIC_STOPITERATION_ERROR Extracts the return value from a StopIteration exception INTRINSIC_ASYNC_GEN_WRAP Wraps an async generator value INTRINSIC_UNARY_POSITIVE Performs the unary operation INTRINSIC_LIST_TO_TUPLE Converts a list to a tuple INTRINSIC_TYPEVAR Creates a typing TypeVar INTRINSIC_PARAMSPEC Creates a typing ParamSpec INTRINSIC_TYPEVARTUPLE Creates a typing TypeVarTuple INTRINSIC_SUBSCRIPT_GENERIC Returns typing Generic subscripted with the argument INTRINSIC_TYPEALIAS Creates a
typing TypeAliasType used in the type statement The argument is a tuple of the type alias s name type parameters and value New in version 3 12 CALL_INTRINSIC_2 Calls an intrinsic function with two arguments Used to implement functionality that is not performance critical arg2 STACK pop arg1 STACK pop result intrinsic2 arg1 arg2 STACK push result The operand determines which intrinsic function is called Operand Description INTRINSIC_2_INVALID Not valid INTRINSIC_PREP_RERAISE_STAR Calculates the ExceptionGroup to raise from a try except INTRINSIC_TYPEVAR_WITH_BOUND Creates a typing TypeVar with a bound INTRINSIC_TYPEVAR_WITH_CONSTRAINTS Creates a typing TypeVar with constraints INTRINSIC_SET_FUNCTION_TYPE_PARAMS Sets the __type_params__ attribute of a function New in version 3 12 Pseudo instructions These opcodes do not appear in Python bytecode They are used by the compiler but are replaced by real opcodes or removed before bytecode is generated SETUP_FINALLY target Set up an exception handler for the following code block If an exception occurs the value stack level is restored to its current state and control is transferred to the exception handler at target SETUP_CLEANUP target Like SETUP_FINALLY but in case of an exception also pushes the last instruction lasti to the stack so that RERAISE can restore it If an exception occurs the value stack level and the last instruction on the frame are restored to their current state and control is transferred to the exception handler at target SETUP_WITH target Like SETUP_CLEANUP but in case of an exception one more item is popped from the stack before control is transferred to the exception handler at target This variant is used in with and async with constructs which push the return value of the context manager s __enter__ or __aenter__ to the stack POP_BLOCK Marks the end of the code block associated with the last SETUP_FINALLY SETUP_CLEANUP or SETUP_WITH JUMP JUMP_NO_INTERRUPT Undirected relative jump instructions which are replaced by their directed forward backward counterparts by the assembler LOAD_METHOD Optimized
unbound method lookup Emitted as a LOAD_ATTR opcode with a flag set in the arg Opcode collections These collections are provided for automatic introspection of bytecode instructions Changed in version 3 12 The collections now contain pseudo instructions and instrumented instructions as well These are opcodes with values MIN_PSEUDO_OPCODE and MIN_INSTRUMENTED_OPCODE dis opname Sequence of operation names indexable using the bytecode dis opmap Dictionary mapping operation names to bytecodes dis cmp_op Sequence of all compare operation names dis hasarg Sequence of bytecodes that use their argument New in version 3 12 dis hasconst Sequence of bytecodes that access a constant dis hasfree Sequence of bytecodes that access a free variable free in this context refers to names in the current scope that are referenced by inner scopes or names in outer scopes that are referenced from this scope It does not include references to global or builtin scopes dis hasname Sequence of bytecodes that access an attribute by name dis hasjrel Sequence of bytecodes that have a relative jump target dis hasjabs Sequence of bytecodes that have an absolute jump target dis haslocal Sequence of bytecodes that access a local variable dis hascompare Sequence of bytecodes of Boolean operations dis hasexc Sequence of bytecodes that set an exception handler New in version 3 12
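The collections above can be explored interactively. As a small sketch (not part of the original text), note that opmap and opname are inverse tables, and the has* sequences let you classify opcodes; the names used here (LOAD_CONST, JUMP_FORWARD) are stable across recent CPython versions:

```python
import dis

# opmap maps an operation name to its numeric opcode;
# opname is the reverse table, indexable by opcode.
load_const = dis.opmap["LOAD_CONST"]
assert dis.opname[load_const] == "LOAD_CONST"

# hasjrel lists opcodes whose argument is a relative jump target,
# e.g. JUMP_FORWARD.
relative_jumps = {dis.opname[op] for op in dis.hasjrel}
assert "JUMP_FORWARD" in relative_jumps
```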
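Tying the collections together, you can classify the instructions of a compiled function. A sketch under the assumption of a recent CPython; the exact instruction sequence differs across versions, so none is claimed here:

```python
import dis

def add_one(x):
    return x + 1

# For each instruction, report its name, its raw argument, and whether
# it reads from the constants table (opcode listed in dis.hasconst).
for instr in dis.get_instructions(add_one):
    reads_const = instr.opcode in dis.hasconst
    print(instr.opname, instr.arg, reads_const)
```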
typing - Support for type hints

New in version 3.5.

Source code: Lib/typing.py

Note: The Python runtime does not enforce function and variable type annotations. They can be used by third party tools such as type checkers, IDEs, linters, etc.

This module provides runtime support for type hints. For the original specification of the typing system, see PEP 484. For a simplified introduction to type hints, see PEP 483.

The function below takes and returns a string and is annotated as follows:

    def greeting(name: str) -> str:
        return 'Hello ' + name

In the function greeting, the argument name is expected to be of type str and the return type str. Subtypes are accepted as arguments.

New features are frequently added to the typing module. The typing_extensions package provides backports of these new features to older versions of Python.

For a summary of deprecated features and a deprecation timeline, please see Deprecation Timeline of Major Features.

See also:

    "Typing cheat sheet": A quick overview of type hints (hosted at the mypy docs).

    "Type System Reference" section of the mypy docs: The Python typing system is standardised via PEPs, so this reference should broadly apply to most Python type checkers. (Some parts may still be specific to mypy.)

    "Static Typing with Python": Type-checker-agnostic documentation written by the community, detailing type system features, useful typing-related tools and typing best practices.

Relevant PEPs

Since the initial introduction of type hints in PEP 484 and PEP 483, a number of PEPs have modified and enhanced Python's framework for type annotations:

    PEP 526: Syntax for Variable Annotations. Introducing syntax for annotating variables outside of function definitions, and ClassVar.
    PEP 544: Protocols: Structural subtyping (static duck typing). Introducing Protocol and the runtime_checkable decorator.
    PEP 585: Type Hinting Generics In Standard Collections. Introducing types.GenericAlias and the ability to use standard library classes as generic types.
    PEP 586: Literal Types. Introducing Literal.
    PEP 589: TypedDict: Type Hints for Dictionaries with a Fixed Set of Keys. Introducing TypedDict.
    PEP 591: Adding a final qualifier to typing. Introducing Final and the final decorator.
    PEP 593: Flexible function and variable annotations. Introducing Annotated.
    PEP 604: Allow writing union types as X | Y. Introducing types.UnionType and the ability to use the binary-or operator | to signify a union of types.
    PEP 612: Parameter Specification Variables. Introducing ParamSpec and Concatenate.
    PEP 613: Explicit Type Aliases. Introducing TypeAlias.
    PEP 646: Variadic Generics. Introducing TypeVarTuple.
    PEP 647: User-Defined Type Guards. Introducing TypeGuard.
    PEP 655: Marking individual TypedDict items as required or potentially missing. Introducing Required and NotRequired.
    PEP 673: Self type. Introducing Self.
    PEP 675: Arbitrary Literal String Type. Introducing LiteralString.
    PEP 681: Data Class Transforms. Introducing the dataclass_transform decorator.
    PEP 692: Using TypedDict for more precise **kwargs typing. Introducing a new way of typing **kwargs with Unpack and TypedDict.
    PEP 695: Type Parameter Syntax. Introducing builtin syntax for creating generic functions, classes, and type aliases.
    PEP 698: Adding an override decorator to typing. Introducing the override decorator.

Type aliases

A type alias is defined using the type statement, which creates an instance of TypeAliasType. In this example, Vector and list[float] will be treated equivalently by static type checkers:

    type Vector = list[float]

    def scale(scalar: float, vector: Vector) -> Vector:
        return [scalar * num for num in vector]

    # passes type checking; a list of floats qualifies as a Vector
    new_vector = scale(2.0, [1.0, -4.2, 5.4])

Type aliases are useful for simplifying complex type signatures. For example:

    from collections.abc import Sequence

    type ConnectionOptions = dict[str, str]
    type Address = tuple[str, int]
    type Server = tuple[Address, ConnectionOptions]

    def broadcast_message(message: str, servers: Sequence[Server]) -> None:
        ...

    # The static type checker will treat the previous type signature as
    # being exactly equivalent to this one.
    def broadcast_message(
            message: str,
            servers: Sequence[tuple[tuple[str, int], dict[str, str]]]) -> None:
        ...

The type statement is new in Python 3.12. For backwards compatibility, type aliases can also be created through simple assignment:

    Vector = list[float]

Or marked with TypeAlias to make it explicit that this is a type alias, not a normal variable assignment:

    from typing import TypeAlias

    Vector: TypeAlias = list[float]

NewType

Use the NewType helper to create distinct types:

    from typing import NewType

    UserId = NewType('UserId', int)
    some_id = UserId(524313)

The static type checker will treat the new type as if it were a subclass of the original type. This is useful in helping catch logical errors:

    def get_user_name(user_id: UserId) -> str:
        ...

    # passes type checking
    user_a = get_user_name(UserId(42351))

    # fails type checking; an int is not a UserId
    user_b = get_user_name(-1)

You may still perform all int operations on a variable of type UserId, but the result will always be of type int. This lets you pass in a UserId wherever an int might be expected, but will prevent you from accidentally creating a UserId in an invalid way:

    # 'output' is of type 'int', not 'UserId'
    output = UserId(23413) + UserId(54341)

Note that these checks are enforced only by the static type checker. At runtime, the statement Derived = NewType('Derived', Base) will make Derived a callable that immediately returns whatever parameter you pass it. That means the expression Derived(some_value) does not create a new class or introduce much overhead beyond that of a regular function call.

More precisely, the expression some_value is Derived(some_value) is always true at runtime.

It is invalid to create a subtype of Derived:

    from typing import NewType

    UserId = NewType('UserId', int)

    # Fails at runtime and does not pass type checking
    class AdminUserId(UserId): pass

However, it is possible to create a NewType based on a "derived" NewType:

    from typing import NewType

    UserId = NewType('UserId', int)

    ProUserId = NewType('ProUserId', UserId)

and typechecking for ProUserId will work as expected. See PEP 484 for more details.

Note: Recall that the use of a type alias declares two types to be equivalent to one another. Doing type Alias = Original will make the static type checker treat Alias as being exactly equivalent to Original in all cases. This is useful when you want to simplify complex type signatures.

In contrast, NewType declares one type to be a subtype of another. Doing Derived = NewType('Derived', Original) will make the static type checker treat Derived as a subclass of Original, which means a value of type Original cannot be used in places where a value of type Derived is expected. This is useful when you want to prevent logic errors with minimal runtime cost.

New in version 3.5.2.

Changed in version 3.10: NewType is now a class rather than a function. As a result, there is some additional runtime cost when calling NewType over a regular function.

Changed in version 3.11: The performance of calling NewType has been restored to its level in Python 3.9.

Annotating callable objects

Functions, or other callable objects, can be annotated using collections.abc.Callable or typing.Callable. Callable[[int], str] signifies a function that takes a single parameter of type int and returns a str.

For example:

    from collections.abc import Callable, Awaitable

    def feeder(get_next_item: Callable[[], str]) -> None:
        ...  # Body

    def async_query(on_success: Callable[[int], None],
                    on_error: Callable[[int, Exception], None]) -> None:
        ...  # Body

    async def on_update(value: str) -> None:
        ...  # Body

    callback: Callable[[str], Awaitable[None]] = on_update

The subscription syntax must always be used with exactly two values: the argument list and the return type. The argument list must be a list of types, a ParamSpec, Concatenate, or an ellipsis. The return type must be a single type.

If a literal ellipsis ... is given as the argument list, it indicates that a callable with any arbitrary parameter list would be acceptable:

    def concat(x: str, y: str) -> str:
        return x + y

    x: Callable[..., str]
    x = str     # OK
    x = concat  # Also OK

Callable cannot express complex signatures such as functions that take a variadic number of arguments, overloaded functions, or functions that have keyword-only parameters. However, these signatures can be expressed by defining a Protocol class with a __call__() method:

    from collections.abc import Iterable
    from typing import Protocol

    class Combiner(Protocol):
        def __call__(self, *vals: bytes, maxlen: int | None = None) -> list[bytes]: ...

    def batch_proc(data: Iterable[bytes], cb_results: Combiner) -> bytes:
        for item in data:
            ...

    def good_cb(*vals: bytes, maxlen: int | None = None) -> list[bytes]:
        ...
    def bad_cb(*vals: bytes, maxitems: int | None) -> list[bytes]:
        ...

    batch_proc([], good_cb)  # OK
    batch_proc([], bad_cb)   # Error! Argument 2 has incompatible type because of
                             # different name and kind in the callback

Callables which take other callables as arguments may indicate that their parameter types are dependent on each other using ParamSpec. Additionally, if that callable adds or removes arguments from other callables, the Concatenate operator may be used. They take the form Callable[ParamSpecVariable, ReturnType] and Callable[Concatenate[Arg1Type, Arg2Type, ..., ParamSpecVariable], ReturnType] respectively.

Changed in version 3.10: Callable now supports ParamSpec and Concatenate. See PEP 612 for more details.

See also: The documentation for ParamSpec and Concatenate provides examples of usage in Callable.

Generics

Since type information about objects kept in containers cannot be statically inferred in a generic way, many container classes in the standard library support subscription to denote the expected types of container elements.

    from collections.abc import Mapping, Sequence

    class Employee: ...

    # Sequence[Employee] indicates that all elements in the sequence
    # must be instances of "Employee".
    # Mapping[str, str] indicates that all keys and all values in the mapping
    # must be strings.
    def notify_by_email(employees: Sequence[Employee],
                        overrides: Mapping[str, str]) -> None: ...

Generic functions and classes can be parameterized by using type parameter syntax:

    from collections.abc import Sequence

    def first[T](l: Sequence[T]) -> T:  # Function is generic over the TypeVar "T"
        return l[0]

Or by using the TypeVar factory directly:

    from collections.abc import Sequence
    from typing import TypeVar

    U = TypeVar('U')  # Declare type variable "U"

    def second(l: Sequence[U]) -> U:  # Function is generic over the TypeVar "U"
        return l[1]

Changed in version 3.12:
Syntactic support for generics is new in Python 3.12.

Annotating tuples

For most containers in Python, the typing system assumes that all elements in the container will be of the same type. For example:

    from collections.abc import Mapping

    # Type checker will infer that all elements in ``x`` are meant to be ints
    x: list[int] = []

    # Type checker error: ``list`` only accepts a single type argument:
    y: list[int, str] = [1, 'foo']

    # Type checker will infer that all keys in ``z`` are meant to be strings,
    # and that all values in ``z`` are meant to be either strings or ints
    z: Mapping[str, str | int] = {}

list only accepts one type argument, so a type checker would emit an error on the y assignment above. Similarly, Mapping only accepts two type arguments: the first indicates the type of the keys, and the second indicates the type of the values.

Unlike most other Python containers, however, it is common in idiomatic Python code for tuples to have elements which are not all of the same type. For this reason, tuples are special-cased in Python's typing system. tuple accepts any number of type arguments:

    # OK: ``x`` is assigned to a tuple of length 1 where the sole element is an int
    x: tuple[int] = (5,)

    # OK: ``y`` is assigned to a tuple of length 2;
    # element 1 is an int, element 2 is a str
    y: tuple[int, str] = (5, "foo")

    # Error: the type annotation indicates a tuple of length 1,
    # but ``z`` has been assigned to a tuple of length 3
    z: tuple[int] = (1, 2, 3)

To denote a tuple which could be of any length, and in which all elements are of the same type T, use tuple[T, ...]. To denote an empty tuple, use tuple[()]. Using plain tuple as an annotation is equivalent to using tuple[Any, ...]:

    x: tuple[int, ...] = (1, 2)
    # These reassignments are OK: ``tuple[int, ...]`` indicates x can be of any length
    x = (1, 2, 3)
    x = ()
    # This reassignment is an error: all elements in ``x`` must be ints
    x = ("foo", "bar")

    # ``y`` can only ever be assigned to an empty tuple
    y: tuple[()] = ()

    z: tuple = ("foo", "bar")
    # These reassignments are OK: plain ``tuple`` is equivalent to ``tuple[Any, ...]``
    z = (1, 2, 3)
    z = ()

The type of class objects

A variable annotated with C may accept a value of type C. In contrast, a variable annotated with type[C] (or typing.Type[C]) may accept values that are classes themselves; specifically, it will accept the class object of C. For example:

    a = 3         # Has type ``int``
    b = int       # Has type ``type[int]``
    c = type(a)   # Also has type ``type[int]``

Note that type[C] is covariant:

    class User: ...
    class ProUser(User): ...
    class TeamUser(User): ...

    def make_new_user(user_class: type[User]) -> User:
        # ...
        return user_class()

    make_new_user(User)      # OK
    make_new_user(ProUser)   # Also OK: ``type[ProUser]`` is a subtype of ``type[User]``
    make_new_user(TeamUser)  # Still fine
    make_new_user(User())    # Error: expected ``type[User]`` but got ``User``
    make_new_user(int)       # Error: ``type[int]`` is not a subtype of ``type[User]``

The only legal parameters for type are classes, Any, type variables, and unions of any of these types. For example:

    def new_non_team_user(user_class: type[BasicUser | ProUser]): ...

    new_non_team_user(BasicUser)  # OK
    new_non_team_user(ProUser)    # OK
    new_non_team_user(TeamUser)   # Error: ``type[TeamUser]`` is not a subtype
                                  # of ``type[BasicUser | ProUser]``
    new_non_team_user(User)       # Also an error

type[Any] is equivalent to type, which is the root of Python's metaclass hierarchy.

User-defined generic types

A user-defined class can be defined as a generic class.

    from logging import Logger

    class LoggedVar[T]:
        def __init__(self, value: T, name: str, logger: Logger) -> None:
            self.name = name
            self.logger = logger
            self.value = value

        def set(self, new: T) -> None:
            self.log('Set ' + repr(self.value))
            self.value = new

        def get(self) -> T:
            self.log('Get ' + repr(self.value))
            return self.value

        def log(self, message: str) -> None:
            self.logger.info('%s: %s', self.name, message)

This syntax indicates that the class LoggedVar is parameterised around a single type variable T. This also makes T valid as a type within the class body.

Generic classes implicitly inherit from Generic. For compatibility with Python 3.11 and lower, it is also possible to inherit explicitly from Generic to indicate a generic class:

    from typing import TypeVar, Generic

    T = TypeVar('T')

    class LoggedVar(Generic[T]):
        ...

Generic classes have __class_getitem__() methods, meaning they can be parameterised at runtime (e.g. LoggedVar[int] below):

    from collections.abc import Iterable

    def zero_all_vars(vars: Iterable[LoggedVar[int]]) -> None:
        for var in vars:
            var.set(0)

A generic type can have any number of type variables. All varieties of TypeVar are permissible as parameters for a generic type:

    from typing import TypeVar, Generic, Sequence

    class WeirdTrio[T, B: Sequence[bytes], S: (int, str)]:
        ...

    OldT = TypeVar('OldT', contravariant=True)
    OldB = TypeVar('OldB', bound=Sequence[bytes], covariant=True)
    OldS = TypeVar('OldS', int, str)

    class OldWeirdTrio(Generic[OldT, OldB, OldS]):
        ...

Each type variable argument to Generic must be distinct. This is thus invalid:

    from typing import TypeVar, Generic

    class Pair[M, M]:  # SyntaxError
        ...

    T = TypeVar('T')

    class Pair(Generic[T, T]):  # INVALID
        ...

Generic classes can also inherit from other classes:

    from collections.abc import Sized

    class LinkedList[T](Sized):
        ...

When inheriting from generic classes, some type parameters could be fixed:

    from collections.abc import Mapping

    class MyDict[T](Mapping[str, T]):
        ...

In this case MyDict has a single parameter, T.

Using a generic class without specifying type parameters assumes Any for each position. In the following example, MyIterable is not generic but implicitly inherits from Iterable[Any]:

    from collections.abc import Iterable

    class MyIterable(Iterable):  # Same as Iterable[Any]
        ...

User-defined generic type aliases are also supported. Examples:

    from collections.abc import Iterable

    type Response[S] = Iterable[S] | int

    # Return type here is same as Iterable[str] | int
    def response(query: str) -> Response[str]:
        ...

    type Vec[T] = Iterable[tuple[T, T]]

    def inproduct[T: (int, float, complex)](v: Vec[T]) -> T:  # Same as Iterable[tuple[T, T]]
        return sum(x*y for x, y in v)

For backward compatibility, generic type aliases can also be created through a simple assignment:

    from collections.abc import Iterable
    from typing import TypeVar

    S = TypeVar('S')
    Response = Iterable[S] | int

Changed in version 3.7: Generic no longer has a custom metaclass.

Changed in version 3.12: Syntactic support for generics and type aliases is new in version 3.12. Previously, generic classes had to explicitly inherit from Generic or contain a type variable in one of their bases.

User-defined generics for parameter expressions are also supported via parameter specification variables in the form [**P]. The behavior is consistent with type variables described above, as parameter specification variables are treated by the typing module as a specialized type variable. The one exception to this is that a list of types can be used to substitute a ParamSpec:

    >>> class Z[T, **P]: ...  # T is a TypeVar; P is a ParamSpec
    ...
    >>> Z[int, [dict, float]]
    __main__.Z[int, [dict, float]]

Classes generic over a ParamSpec can also be created using explicit inheritance from Generic. In this case, ** is not used:

    from typing import ParamSpec, Generic

    P = ParamSpec('P')

    class Z(Generic[P]):
        ...

Another difference between TypeVar and ParamSpec is that a generic with only one parameter specification variable will accept parameter lists in the forms X[[Type1, Type2, ...]] and also X[Type1, Type2, ...] for aesthetic reasons. Internally, the latter is converted to the former, so the following are equivalent:

    >>> class X[**P]: ...
    ...
    >>> X[int, str]
    __main__.X[[int, str]]
    >>> X[[int, str]]
    __main__.X[[int, str]]

Note that generics with ParamSpec may not have correct __parameters__ after substitution in some cases because they are intended primarily for static type checking.

Changed in version 3.10: Generic can now be parameterized over parameter expressions. See ParamSpec and PEP 612 for more details.

A user-defined generic class can have ABCs as base classes without a metaclass conflict. Generic metaclasses are not supported. The outcome of parameterizing generics is cached, and most types in the typing module are hashable and comparable for equality.

The Any type

A special kind of type is Any. A static type checker will treat every type as being compatible with Any, and Any as being compatible with every type.

This means that it is possible to perform any operation or method call on a value of type Any and assign it to any variable:

    from typing import Any

    a: Any = None
    a = []          # OK
    a = 2           # OK

    s: str = ''
    s = a           # OK

    def foo(item: Any) -> int:
        # Passes type checking; 'item' could be any type,
        # and that type might have a 'bar' method
        item.bar()
        ...

Notice that no type checking is performed when assigning a value of type Any to a more precise type. For example, the static type checker did not report an error when assigning a to s
even though s was declared to be of type str and receives an int value at runtime.

Furthermore, all functions without a return type or parameter types will implicitly default to using Any:

    def legacy_parser(text):
        ...
        return data

    # A static type checker will treat the above
    # as having the same signature as:
    def legacy_parser(text: Any) -> Any:
        ...
        return data

This behavior allows Any to be used as an escape hatch when you need to mix dynamically and statically typed code.

Contrast the behavior of Any with the behavior of object. Similar to Any, every type is a subtype of object. However, unlike Any, the reverse is not true: object is not a subtype of every other type.

That means when the type of a value is object, a type checker will reject almost all operations on it, and assigning it to a variable (or using it as a return value) of a more specialized type is a type error. For example:

    def hash_a(item: object) -> int:
        # Fails type checking; an object does not have a 'magic' method.
        item.magic()
        ...

    def hash_b(item: Any) -> int:
        # Passes type checking
        item.magic()
        ...

    # Passes type checking, since ints and strs are subclasses of object
    hash_a(42)
    hash_a("foo")

    # Passes type checking, since Any is compatible with all types
    hash_b(42)
    hash_b("foo")

Use object to indicate that a value could be any type in a typesafe manner. Use Any to indicate that a value is dynamically typed.

Nominal vs structural subtyping

Initially PEP 484 defined the Python static type system as using nominal subtyping. This means that a class A is allowed where a class B is expected if and only if A is a subclass of B.

This requirement previously also applied to abstract base classes, such as Iterable. The problem with this approach is that a class had to be explicitly marked to support them, which is unpythonic and unlike what one would normally do in idiomatic dynamically typed Python code. For example, this conforms to PEP 484:

    from collections.abc import Sized, Iterable, Iterator

    class Bucket(Sized, Iterable[int]):
        ...
        def __len__(self) -> int: ...
        def __iter__(self) -> Iterator[int]: ...

PEP 544 allows to solve this problem by allowing users to write the above code without explicit base classes in the class definition, allowing Bucket to be implicitly considered a subtype of both Sized and Iterable[int] by static type checkers. This is known as structural subtyping (or static duck typing):

    from collections.abc import Iterator, Iterable

    class Bucket:  # Note: no base classes
        ...
        def __len__(self) -> int: ...
        def __iter__(self) -> Iterator[int]: ...

    def collect(items: Iterable[int]) -> int: ...
    result = collect(Bucket())  # Passes type check

Moreover, by subclassing a special class Protocol, a user can define new custom protocols to fully enjoy structural subtyping (see examples below).

Module contents

The typing module defines the following classes, functions and decorators.

Special typing primitives

Special types

These can be used as types in annotations. They do not support subscription using [].

typing.Any

    Special type indicating an unconstrained type.

    Every type is compatible with Any.
    Any is compatible with every type.

    Changed in version 3.11: Any can now be used as a base class. This can be useful for avoiding type checker errors with classes that can duck type anywhere or are highly dynamic.

typing.AnyStr

    A constrained type variable.

    Definition:

        AnyStr = TypeVar('AnyStr', str, bytes)

    AnyStr is meant to be used for functions that may accept str or bytes arguments but cannot allow the two to mix.

    For example:

        def concat(a: AnyStr, b: AnyStr) -> AnyStr:
            return a + b

        concat("foo", "bar")    # OK, output has type 'str'
        concat(b"foo", b"bar")  # OK, output has type 'bytes'
        concat("foo", b"bar")   # Error, cannot mix str and bytes

    Note that, despite its name, AnyStr has nothing to do with the Any type, nor does it mean "any string". In particular, AnyStr and str | bytes are different from each other and have different use cases:

        # Invalid use of AnyStr:
        # The type variable is used only once in the function signature,
        # so cannot be solved by the type checker
        def greet_bad(cond: bool) -> AnyStr:
            return "hi there!" if cond else b"greetings!"

        # The better way of annotating this function:
        def greet_proper(cond: bool) -> str | bytes:
            return "hi there!" if cond else b"greetings!"

typing.LiteralString

    Special type that includes only literal strings.

    Any string literal is compatible with LiteralString, as is another LiteralString. However, an object typed as just str is not. A string created by composing LiteralString-typed objects is also acceptable as a LiteralString.

    Example:

        def run_query(sql: LiteralString) -> None:
            ...

        def caller(arbitrary_string: str, literal_string: LiteralString) -> None:
            run_query("SELECT * FROM students")  # OK
            run_query(literal_string)  # OK
            run_query("SELECT * FROM " + literal_string)  # OK
            run_query(arbitrary_string)  # type checker error
            run_query(  # type checker error
                f"SELECT * FROM students WHERE name = {arbitrary_string}"
            )

    LiteralString is useful for sensitive APIs where arbitrary user-generated strings could generate problems. For example, the two cases above that generate type checker errors could be vulnerable to an SQL injection attack.

    See PEP 675 for more details.

    New in version 3.11.

typing.Never

    The bottom type, a type that has no members.

    This can be used to define a function that should never be called, or a function that never returns:

        from typing import Never

        def never_call_me(arg: Never) -> None:
            pass

        def int_or_str(arg: int | str) -> None:
            never_call_me(arg)  # type checker error
            match arg:
                case int():
                    print("It's an int")
                case str():
                    print("It's a str")
                case _:
                    never_call_me(arg)  # OK, arg is of type Never

    New in version 3.11: On older Python versions, NoReturn may be used to express the same concept. Never was added to make the intended meaning more explicit.

typing.NoReturn

    Special type indicating that a function never returns.

    For example:

        from typing import NoReturn

        def stop() -> NoReturn:
            raise RuntimeError('no way')

    NoReturn can also be used as a bottom type, a type that has no values. Starting in Python 3.11, the Never type should be used for this concept instead. Type checkers should treat the two equivalently.

    New in version 3.6.2.

typing.Self

    Special type to represent the current enclosed class.

    For example:

        from typing import Self, reveal_type

        class Foo:
            def return_self(self) -> Self:
                ...
                return self

        class SubclassOfFoo(Foo): pass

        reveal_type(Foo().return_self())  # Revealed type is "Foo"
        reveal_type(SubclassOfFoo().return_self())  # Revealed type is "SubclassOfFoo"

    This annotation is semantically equivalent to the following, albeit in a more succinct fashion:

        from typing import TypeVar

        Self = TypeVar("Self", bound="Foo")

        class Foo:
            def return_self(self: Self) -> Self:
                ...
                return self

    In general, if something returns self (as in the above examples), you should use Self as the return annotation. If Foo.return_self was annotated as returning "Foo", then the type checker would infer the object returned from SubclassOfFoo.return_self as being of type Foo rather than SubclassOfFoo.

    Other common use cases include:

    - classmethods that are used as alternative constructors and return instances of the cls parameter.
    - Annotating an __enter__() method which returns self.

    You should not use Self as the return annotation if the method is not guaranteed to return an instance of a subclass when the class is subclassed:

        class Eggs:
            # Self would be an incorrect return annotation here,
            # as the object returned is always an instance of Eggs,
            # even in subclasses
            def returns_eggs(self) -> "Eggs":
                return Eggs()

    See PEP 673 for more details.

    New in version 3.11.

typing.TypeAlias

    Special annotation for explicitly declaring a type alias.

    For example:

        from typing import TypeAlias

        Factors: TypeAlias = list[int]

    TypeAlias is particularly useful on older Python versions for annotating aliases that make use of forward references, as it can be hard for type checkers to distinguish these from normal variable assignments:

        from typing import Generic, TypeAlias, TypeVar

        T = TypeVar("T")

        # "Box" does not exist yet, so we have to use quotes for the forward
        # reference on Python <3.12. Using ``TypeAlias`` tells the type checker
        # that this is a type alias declaration, not a variable assignment to
        # a string.
        BoxOfStrings: TypeAlias = "Box[str]"

        class Box(Generic[T]):
            @classmethod
            def make_box_of_strings(cls) -> BoxOfStrings: ...

    See PEP 613 for more details.

    New in version 3.10.

    Deprecated since version 3.12: TypeAlias is deprecated in favor of the type statement, which creates instances of TypeAliasType and which natively supports forward references. Note that while TypeAlias and TypeAliasType serve similar
purposes and have similar names, they are distinct and the latter is not the type of the former. Removal of TypeAlias is not currently planned, but users are encouraged to migrate to type statements.

Special forms

These can be used as types in annotations. They all support subscription using [], but each has a unique syntax.

typing.Union

    Union type; Union[X, Y] is equivalent to X | Y and means either X or Y.

    To define a union, use e.g. Union[int, str] or the shorthand int | str. Using that shorthand is recommended. Details:

    - The arguments must be types and there must be at least one.
    - Unions of unions are flattened, e.g.: Union[Union[int, str], float] == Union[int, str, float]
    - Unions of a single argument vanish, e.g.: Union[int] == int  # The constructor actually returns int
    - Redundant arguments are skipped, e.g.: Union[int, str, int] == Union[int, str] == int | str
    - When comparing unions, the argument order is ignored, e.g.: Union[int, str] == Union[str, int]
    - You cannot subclass or instantiate a Union.
    - You cannot write Union[X][Y].

    Changed in version 3.7: Don't remove explicit subclasses from unions at runtime.

    Changed in version 3.10: Unions can now be written as X | Y. See union type expressions.

typing.Optional

    Optional[X] is equivalent to X | None (or Union[X, None]).

    Note that this is not the same concept as an optional argument, which is one that has a default. An optional argument with a default does not require the Optional qualifier on its type annotation just because it is optional. For example:

        def foo(arg: int = 0) -> None:
            ...

    On the other hand, if an explicit value of None is allowed, the use of Optional is appropriate, whether the argument is optional or not. For example:

        def foo(arg: Optional[int] = None) -> None:
            ...

    Changed in version 3.10: Optional can now be written as X | None. See union type expressions.

typing.Concatenate

    Special form for annotating higher-order functions.

    Concatenate can be used in conjunction with Callable and ParamSpec to annotate a higher-order callable which adds, removes, or transforms parameters of another callable. Usage is in the form Concatenate[Arg1Type, Arg2Type, ..., ParamSpecVariable]. Concatenate is currently only valid when used as the first argument to a Callable. The last parameter to Concatenate must be a ParamSpec or ellipsis (...).

    For example, to annotate a decorator with_lock which provides a threading.Lock to the decorated function, Concatenate can be used to indicate that with_lock expects a callable which takes in a Lock as the first argument, and returns a callable with a different type signature. In this case, the ParamSpec indicates that the returned callable's parameter types are dependent on the parameter types of the callable being passed in:

        from collections.abc import Callable
        from threading import Lock
        from typing import Concatenate

        # Use this lock to ensure that only one thread is executing a function
        # at any time.
        my_lock = Lock()

        def with_lock[**P, R](f: Callable[Concatenate[Lock, P], R]) -> Callable[P, R]:
            '''A type-safe decorator which provides a lock.'''
            def inner(*args: P.args, **kwargs: P.kwargs) -> R:
                # Provide the lock as the first argument.
                return f(my_lock, *args, **kwargs)
            return inner

        @with_lock
        def sum_threadsafe(lock: Lock, numbers: list[float]) -> float:
            '''Add a list of numbers together in a thread-safe manner.'''
            with lock:
                return sum(numbers)

        # We don't need to pass in the lock ourselves thanks to the decorator.
        sum_threadsafe([1.1, 2.2, 3.3])

    New in version 3.10.

    See also:

    - PEP 612: Parameter Specification Variables (the PEP which introduced ParamSpec and Concatenate)
    - ParamSpec
    - Annotating callable objects

typing.Literal

    Special typing form to define "literal types".

    Literal can be used to indicate to type checkers that the annotated object has a value equivalent to one of the provided literals.

    For example:

        def validate_simple(data: Any) -> Literal[True]:  # always returns True
            ...

        type Mode = Literal['r', 'rb', 'w', 'wb']

        def open_helper(file: str, mode: Mode) -> str:
            ...

        open_helper('/some/path', 'r')      # Passes type check
        open_helper('/other/path', 'typo')  # Error in type checker

    Literal[...] cannot be subclassed. At runtime, an arbitrary value is allowed as type argument to Literal[...], but type checkers may impose restrictions. See PEP 586 for more details about literal types.

    New in version 3.8.

    Changed in version 3.9.1: Literal now de-duplicates
parameters Equality comparisons of Literal objects are no longer order dependent Literal objects will now raise a TypeError exception during equality comparisons if one of their parameters are not hashable typing ClassVar Special type construct to mark class variables As introduced in PEP 526 a variable annotation wrapped in ClassVar indicates that a given attribute is intended to be used as a class variable and should not be set on instances of that class Usage class Starship stats ClassVar dict str int class variable damage int 10 instance variable ClassVar accepts only types and cannot be further subscribed ClassVar is not a class itself and should not be used with isinstance or issubclass ClassVar does not change Python runtime behavior but it can be used by third party type checkers For example a type checker might flag the following code as an error enterprise_d Starship 3000 enterprise_d stats Error setting class variable on instance Starship stats This is OK New in version 3 5 3 typing Final Special typing construct to indicate final names to type checkers Final names cannot be reassigned in any scope Final names declared in class scopes cannot be overridden in subclasses For example MAX_SIZE Final 9000 MAX_SIZE 1 Error reported by type checker class Connection TIMEOUT Final int 10 class FastConnector Connection TIMEOUT 1 Error reported by type checker There is no runtime checking of these properties See PEP 591 for more details New in version 3 8 typing Required Special typing construct to mark a TypedDict key as required This is mainly useful for total False TypedDicts See TypedDict and PEP 655 for more details New in version 3 11 typing NotRequired Special typing construct to mark a TypedDict key as potentially missing See TypedDict and PEP 655 for more details New in version 3 11 typing Annotated Special typing form to add context specific metadata to an annotation Add metadata x to a given type T by using the annotation Annotated T x Metadata added 
using Annotated can be used by static analysis tools or at runtime At runtime the metadata is stored
in a __metadata__ attribute If a library or tool encounters an annotation Annotated T x and has no special logic for the metadata it should ignore the metadata and simply treat the annotation as T As such Annotated can be useful for code that wants to use annotations for purposes outside Python s static typing system Using Annotated T x as an annotation still allows for static typechecking of T as type checkers will simply ignore the metadata x In this way Annotated differs from the no_type_check decorator which can also be used for adding annotations outside the scope of the typing system but completely disables typechecking for a function or class The responsibility of how to interpret the metadata lies with the tool or library encountering an Annotated annotation A tool or library encountering an Annotated type can scan through the metadata elements to determine if they are of interest e g using isinstance Annotated type metadata Here is an example of how you might use Annotated to add metadata to type annotations if you were doing range analysis dataclass class ValueRange lo int hi int T1 Annotated int ValueRange 10 5 T2 Annotated T1 ValueRange 20 3 Details of the syntax The first argument to Annotated must be a valid type Multiple metadata elements can be supplied Annotated supports variadic arguments dataclass class ctype kind str Annotated int ValueRange 3 10 ctype char It is up to the tool consuming the annotations to decide whether the client is allowed to add multiple metadata elements to one annotation and how to merge those annotations Annotated must be subscripted with at least two arguments Annotated int is not valid The order of the metadata elements is preserved and matters for equality checks assert Annotated int ValueRange 3 10 ctype char Annotated int ctype char ValueRange 3 10 Nested Annotated types are flattened The order of the metadata elements starts with the innermost annotation assert Annotated Annotated int ValueRange 3 10 ctype char 
Annotated int ValueRange 3 10 ctype char Duplicated metadata elements are not removed assert Annotated int ValueRange 3 10 Annotated int ValueRange 3 10 ValueRange 3 10 Annotated can be used with nested and generic aliases dataclass class MaxLen value int type Vec T Annotated list tuple T T MaxLen 10 When used in a type annotation a type checker will treat V the same as Annotated list tuple int int MaxLen 10 type V Vec int Annotated cannot be used with an unpacked TypeVarTuple type Variadic Ts Annotated Ts Ann1 NOT valid This would be equivalent to Annotated T1 T2 T3 Ann1 where T1 T2 etc are TypeVars This would be invalid only one type should be passed to Annotated By default get_type_hints strips the metadata from annotations Pass include_extras True to have the metadata preserved from typing import Annotated get_type_hints def func x Annotated int metadata None pass get_type_hints func x class int return class NoneType get_type_hints func include_extras True x typing Annotated int metadata return class NoneType At runtime the metadata associated with an Annotated type can be retrieved via the __metadata__ attribute from typing import Annotated X Annotated int very important metadata X typing Annotated int very important metadata X __metadata__ very important metadata See also PEP 593 Flexible function and variable annotations The PEP introducing Annotated to the standard library New in version 3 9 typing TypeGuard Special typing construct for marking user defined type guard functions TypeGuard can be used to annotate the return type of a user defined type guard function TypeGuard only accepts a single type argument At runtime functions marked this way should return a boolean TypeGuard aims to benefit type narrowing a technique used by static type checkers to determine a more precise type of an expression within a program s code flow Usually type narrowing is done by analyzing conditional code flow and applying the narrowing to a block of code The conditional 
expression here is sometimes referred to as a "type guard":

   def is_str(val: str | float):
       # "isinstance" type guard
       if isinstance(val, str):
           # Type of ``val`` is narrowed to ``str``
           ...
       else:
           # Else, type of ``val`` is narrowed to ``float``
           ...

Sometimes it would be convenient to use a user-defined boolean function as a type guard. Such a function should use TypeGuard[...] as its return type to alert static type checkers to this intention. Using -> TypeGuard tells the static type checker that for a given function:

1. The return value is a boolean.
2. If the return value is True, the type of its argument is the type inside TypeGuard.

For example:

   def is_str_list(val: list[object]) -> TypeGuard[list[str]]:
       """Determines whether all objects in the list are strings"""
       return all(isinstance(x, str) for x in val)

   def func1(val: list[object]):
       if is_str_list(val):
           # Type of ``val`` is narrowed to ``list[str]``
           print(" ".join(val))
       else:
           # Type of ``val`` remains as ``list[object]``
           print("Not a list of strings!")

If is_str_list is a class or instance method, then the type in TypeGuard maps to the type of the second parameter after cls or self. In short, the form def foo(arg: TypeA) -> TypeGuard[TypeB]: ... means that if foo(arg) returns True, then arg narrows from TypeA to TypeB.

Note: TypeB need not be a narrower form of TypeA; it can even be a wider form. The main reason is to allow for things like narrowing list[object] to list[str] even though the latter is not a subtype of the former, since list is invariant. The responsibility of writing type-safe type guards is left to the user. TypeGuard also works with type variables. See PEP 647 for more details.

New in version 3.10.

typing.Unpack

Typing operator to conceptually mark an object as having been unpacked. For example, using the unpack operator * on a type variable tuple is equivalent to using Unpack to mark the type variable tuple as having been unpacked:

   Ts = TypeVarTuple('Ts')
   tup: tuple[*Ts]
   # Effectively does:
   tup: tuple[Unpack[Ts]]

In fact, Unpack can be used interchangeably with * in the context of typing.TypeVarTuple and builtins.tuple types. You might see Unpack being used explicitly in older versions of Python, where * couldn't be used in certain places. In older versions of
Python TypeVarTuple and Unpack are located in the typing_extensions backports package from typing_extensions import TypeVarTuple Unpack Ts TypeVarTuple Ts tup tuple Ts Syntax error on Python 3 10 tup tuple Unpack Ts Semantically equivalent and backwards compatible Unpack can also be used along with typing TypedDict for typing kwargs in a function signature from typing import TypedDict Unpack class Movie TypedDict name str year int This function expects two keyword arguments name of type str and year of type int def foo kwargs Unpack Movie See PEP 692 for more details on using Unpack for kwargs typing New in version 3 11 Building generic types and type aliases The following classes should not be used directly as annotations Their intended purpose is to be building blocks for creating generic types and type aliases These objects can be created through special syntax type parameter lists and the type statement For compatibility with Python 3 11 and earlier they can also be created without the dedicated syntax as documented below class typing Generic Abstract base class for generic types A generic type is typically declared by adding a list of type parameters after the class name class Mapping KT VT def __getitem__ self key KT VT Etc Such a class implicitly inherits from Generic The runtime semantics of this syntax are discussed in the Language Reference This class can then be used as follows def lookup_name X Y mapping Mapping X Y key X default Y Y try return mapping key except KeyError return default Here the brackets after the function name indicate a generic function For backwards compatibility generic classes can also be declared by explicitly inheriting from Generic In this case the type parameters must be declared separately KT TypeVar KT VT TypeVar VT class Mapping Generic KT VT def __getitem__ self key KT VT Etc class typing TypeVar name constraints bound None covariant False contravariant False infer_variance False Type variable The preferred way to construct 
a type variable is via the dedicated syntax for generic functions, generic classes, and generic type aliases:

   class Sequence[T]:  # T is a TypeVar
       ...

This syntax can also be used to create bound and constrained type variables:

   class StrSequence[S: str]:  # S is a TypeVar bound to str
       ...

   class StrOrBytesSequence[A: (str, bytes)]:  # A is a TypeVar constrained to str or bytes
       ...

However, if desired, reusable type variables can also be constructed manually, like so:

   T = TypeVar('T')  # Can be anything
   S = TypeVar('S', bound=str)  # Can be any subtype of str
   A = TypeVar('A', str, bytes)  # Must be exactly str or bytes

Type variables exist primarily for the benefit of static type checkers. They serve as the parameters for generic types as well as for generic function and type alias definitions. See Generic for more information on generic types. Generic functions work as follows:

   def repeat[T](x: T, n: int) -> Sequence[T]:
       """Return a list containing n references to x."""
       return [x]*n

   def print_capitalized[S: str](x: S) -> S:
       """Print x capitalized, and return x."""
       print(x.capitalize())
       return x

   def concatenate[A: (str, bytes)](x: A, y: A) -> A:
       """Add two strings or bytes objects together."""
       return x + y

Note that type variables can be bound, constrained, or neither, but cannot be both bound and constrained. The variance of type variables is inferred by type checkers when they are created through the type parameter syntax or when infer_variance=True is passed. Manually created type variables may be explicitly marked covariant or contravariant by passing covariant=True or contravariant=True. By default, manually created type variables are invariant. See PEP 484 and PEP 695 for more details.

Bound type variables and constrained type variables have different semantics in several important ways. Using a bound type variable means that the TypeVar will be solved using the most specific type possible:

   x = print_capitalized('a string')
   reveal_type(x)  # revealed type is str

   class StringSubclass(str):
       pass

   y = print_capitalized(StringSubclass('another string'))
   reveal_type(y)  # revealed type is StringSubclass

   z = print_capitalized(45)  # error: int is not a subtype of str

Type variables can be bound to concrete types, abstract types, ABCs or
protocols and even unions of types Can be anything with an __abs__ method def print_abs T SupportsAbs arg T None print Absolute value abs arg U TypeVar U bound str bytes Can be any subtype of the union str bytes V TypeVar V bound SupportsAbs Can be anything with an __abs__ method Using a constrained type variable however means that the TypeVar can only ever be solved as being exactly one of the constraints given a concatenate one two reveal_type a revealed type is str b concatenate StringSubclass one StringSubclass two reveal_type b revealed type is str despite StringSubclass being passed in c concatenate one b two error type variable A can be either str or bytes in a function call but not both At runtime isinstance x T will raise TypeError __name__ The name of the type variable __covariant__ Whether the type var has been explicitly marked as covariant __contravariant__ Whether the type var has been explicitly marked as contravariant __infer_variance__ Whether the type variable s variance should be inferred by type checkers New in version 3 12 __bound__ The bound of the type variable if any Changed in version 3 12 For type variables created through type parameter syntax the bound is evaluated only when the attribute is accessed not when the type variable is created see Lazy evaluation __constraints__ A tuple containing the constraints of the type variable if any Changed in version 3 12 For type variables created through type parameter syntax the constraints are evaluated only when the attribute is accessed not when the type variable is created see Lazy evaluation Changed in version 3 12 Type variables can now be declared using the type parameter syntax introduced by PEP 695 The infer_variance parameter was added class typing TypeVarTuple name Type variable tuple A specialized form of type variable that enables variadic generics Type variable tuples can be declared in type parameter lists using a single asterisk before the name def move_first_element_to_last T Ts 
tup tuple T Ts tuple Ts T return tup 1 tup 0 Or by explicitly invoking the TypeVarTuple constructor
T TypeVar T Ts TypeVarTuple Ts def move_first_element_to_last tup tuple T Ts tuple Ts T return tup 1 tup 0 A normal type variable enables parameterization with a single type A type variable tuple in contrast allows parameterization with an arbitrary number of types by acting like an arbitrary number of type variables wrapped in a tuple For example T is bound to int Ts is bound to Return value is 1 which has type tuple int move_first_element_to_last tup 1 T is bound to int Ts is bound to str Return value is spam 1 which has type tuple str int move_first_element_to_last tup 1 spam T is bound to int Ts is bound to str float Return value is spam 3 0 1 which has type tuple str float int move_first_element_to_last tup 1 spam 3 0 This fails to type check and fails at runtime because tuple is not compatible with tuple T Ts at least one element is required move_first_element_to_last tup Note the use of the unpacking operator in tuple T Ts Conceptually you can think of Ts as a tuple of type variables T1 T2 tuple T Ts would then become tuple T T1 T2 which is equivalent to tuple T T1 T2 Note that in older versions of Python you might see this written using Unpack instead as Unpack Ts Type variable tuples must always be unpacked This helps distinguish type variable tuples from normal type variables x Ts Not valid x tuple Ts Not valid x tuple Ts The correct way to do it Type variable tuples can be used in the same contexts as normal type variables For example in class definitions arguments and return types class Array Shape def __getitem__ self key tuple Shape float def __abs__ self Array Shape def get_shape self tuple Shape Type variable tuples can be happily combined with normal type variables class Array DType Shape This is fine pass class Array2 Shape DType This would also be fine pass class Height class Width float_array_1d Array float Height Array Totally fine int_array_2d Array int Height Width Array Yup fine too However note that at most one type variable tuple may 
appear in a single list of type arguments or type parameters x tuple Ts Ts Not valid class Array Shape Shape Not valid pass Finally an unpacked type variable tuple can be used as the type annotation of args def call_soon Ts callback Callable Ts None args Ts None callback args In contrast to non unpacked annotations of args e g args int which would specify that all arguments are int args Ts enables reference to the types of the individual arguments in args Here this allows us to ensure the types of the args passed to call_soon match the types of the positional arguments of callback See PEP 646 for more details on type variable tuples __name__ The name of the type variable tuple New in version 3 11 Changed in version 3 12 Type variable tuples can now be declared using the type parameter syntax introduced by PEP 695 class typing ParamSpec name bound None covariant False contravariant False Parameter specification variable A specialized version of type variables In type parameter lists parameter specifications can be declared with two asterisks type IntFunc P Callable P int For compatibility with Python 3 11 and earlier ParamSpec objects can also be created as follows P ParamSpec P Parameter specification variables exist primarily for the benefit of static type checkers They are used to forward the parameter types of one callable to another callable a pattern commonly found in higher order functions and decorators They are only valid when used in Concatenate or as the first argument to Callable or as parameters for user defined Generics See Generic for more information on generic types For example to add basic logging to a function one can create a decorator add_logging to log function calls The parameter specification variable tells the type checker that the callable passed into the decorator and the new callable returned by it have inter dependent type parameters from collections abc import Callable import logging def add_logging T P f Callable P T Callable P T A 
type-safe decorator to add logging to a function."""
       def inner(*args: P.args, **kwargs: P.kwargs) -> T:
           logging.info(f'{f.__name__} was called')
           return f(*args, **kwargs)
       return inner

   @add_logging
   def add_two(x: float, y: float) -> float:
       """Add two numbers together."""
       return x + y

Without ParamSpec, the simplest way to annotate this previously was to use a TypeVar with bound Callable[..., Any]. However, this causes two problems:

1. The type checker can't type check the inner function because *args and **kwargs have to be typed Any.
2. cast() may be required in the body of the add_logging decorator when returning the inner function, or the static type checker must be told to ignore the return inner.

Since ParamSpec captures both positional and keyword parameters, P.args and P.kwargs can be used to split a ParamSpec into its components. P.args represents the tuple of positional parameters in a given call and should only be used to annotate *args. P.kwargs represents the mapping of keyword parameters to their values in a given call, and should only be used to annotate **kwargs. Both attributes require the annotated parameter to be in scope. At runtime, P.args and P.kwargs are instances respectively of ParamSpecArgs and ParamSpecKwargs.
They are intended for runtime introspection and have no special meaning to static type checkers Calling get_origin on either of these objects will return the original ParamSpec from typing import ParamSpec get_origin P ParamSpec P get_origin P args is P True get_origin P kwargs is P True New in version 3 10 class typing TypeAliasType name value type_params The type of type aliases created through the type statement Example type Alias int type Alias class typing TypeAliasType New in version 3 12 __name__ The name of the type alias type Alias int Alias __name__ Alias __module__ The module in which the type alias was defined type Alias int Alias __module__ __main__ __type_params__ The type parameters of the type alias or an empty tuple if the alias is not generic type ListOrSet T list T set T ListOrSet __type_params__ T type NotGeneric int NotGeneric __type_params__ __value__ The type alias s value This is lazily evaluated so names used in the definition of the alias are not resolved until the __value__ attribute is accessed type Mutually Recursive type Recursive Mutually Mutually Mutually Recursive Recursive Mutually __value__ Recursive Recursive __value__ Mutually Other special directives These functions and classes should not be used directly as annotations Their intended purpose is to be building blocks for creating and declaring types class typing NamedTuple Typed version of collections namedtuple Usage class Employee NamedTuple name str id int This is equivalent to Employee collections namedtuple Employee name id To give a field a default value you can assign to it in the class body class Employee NamedTuple name str id int 3 employee Employee Guido assert employee id 3 Fields with a default value must come after any fields without a default The resulting class has an extra attribute __annotations__ giving a dict that maps the field names to the field types The field names are in the _fields attribute and the default values are in the _field_defaults attribute 
both of which are part of the namedtuple() API.

NamedTuple subclasses can also have docstrings and methods:

   class Employee(NamedTuple):
       """Represents an employee."""
       name: str
       id: int = 3

       def __repr__(self) -> str:
           return f'<Employee {self.name}, id={self.id}>'

NamedTuple subclasses can be generic:

   class Group[T](NamedTuple):
       key: T
       group: list[T]

Backward-compatible usage:

   # For creating a generic NamedTuple on Python 3.11 or lower
   class Group(NamedTuple, Generic[T]):
       key: T
       group: list[T]

   # A functional syntax is also supported
   Employee = NamedTuple('Employee', [('name', str), ('id', int)])

Changed in version 3.6: Added support for PEP 526 variable annotation syntax.
Changed in version 3.6.1: Added support for default values, methods, and docstrings.
Changed in version 3.8: The _field_types and __annotations__ attributes are now regular dictionaries instead of instances of OrderedDict.
Changed in version 3.9: Removed the _field_types attribute in favor of the more standard __annotations__ attribute which has the same information.
Changed in version 3.11: Added support for generic namedtuples.

class typing.NewType(name, tp)

Helper class to create low-overhead distinct types. A NewType is considered a distinct type by a typechecker. At runtime, however, calling a NewType returns its argument unchanged. Usage:

   UserId = NewType('UserId', int)  # Declare the NewType "UserId"
   first_user = UserId(1)  # "UserId" returns the argument unchanged at runtime

__module__
   The module in which the new type is defined.
__name__
   The name of the new type.
__supertype__
   The type that the new type is based on.

New in version 3.5.2.
Changed in version 3.10: NewType is now a class rather than a function.

class typing.Protocol(Generic)

Base class for protocol classes. Protocol classes are defined like this:

   class Proto(Protocol):
       def meth(self) -> int:
           ...

Such classes are primarily used with static type checkers that recognize structural subtyping (static duck-typing), for example:

   class C:
       def meth(self) -> int:
           return 0

   def func(x: Proto) -> int:
       return x.meth()

   func(C())  # Passes static type check

See PEP 544 for more details. Protocol classes decorated with runtime_checkable() (described later) act as simple-minded runtime protocols
that check only the presence of given attributes ignoring their type signatures Protocol classes can be generic for example class GenProto T Protocol def meth self T In code that needs to be compatible with Python 3 11 or older generic Protocols can be written as follows T TypeVar T class GenProto Protocol T def meth self T New in version 3 8 typing runtime_checkable Mark a protocol class as a runtime protocol Such a protocol can be used with isinstance and issubclass This raises TypeError when applied to a non protocol class This allows a simple minded structural check very similar to one trick ponies in collections abc such as Iterable For example runtime_checkable class Closable Protocol def close self assert isinstance open some file Closable runtime_checkable class Named Protocol name str import threading assert isinstance threading Thread name Bob Named Note runtime_checkable will check only the presence of the required methods or attributes not their type signatures or types For example ssl SSLObject is a class therefore it passes an issubclass check against Callable However the ssl SSLObject __init__ method exists only to raise a TypeError with a more informative message therefore making it impossible to call instantiate ssl SSLObject Note An isinstance check against a runtime checkable protocol can be surprisingly slow compared to an isinstance check against a non protocol class Consider using alternative idioms such as hasattr calls for structural checks in performance sensitive code New in version 3 8 Changed in version 3 12 The internal implementation of isinstance checks against runtime checkable protocols now uses inspect getattr_static to look up attributes previously hasattr was used As a result some objects which used to be considered instances of a runtime checkable protocol may no longer be considered instances of that protocol on Python 3 12 and vice versa Most users are unlikely to be affected by this change Changed in version 3 12 The members 
of a runtime checkable protocol are now considered frozen at runtime as soon as the class has been
created Monkey patching attributes onto a runtime checkable protocol will still work but will have no impact on isinstance checks comparing objects to the protocol See What s new in Python 3 12 for more details class typing TypedDict dict Special construct to add type hints to a dictionary At runtime it is a plain dict TypedDict declares a dictionary type that expects all of its instances to have a certain set of keys where each key is associated with a value of a consistent type This expectation is not checked at runtime but is only enforced by type checkers Usage class Point2D TypedDict x int y int label str a Point2D x 1 y 2 label good OK b Point2D z 3 label bad Fails type check assert Point2D x 1 y 2 label first dict x 1 y 2 label first To allow using this feature with older versions of Python that do not support PEP 526 TypedDict supports two additional equivalent syntactic forms Using a literal dict as the second argument Point2D TypedDict Point2D x int y int label str Using keyword arguments Point2D TypedDict Point2D x int y int label str Deprecated since version 3 11 will be removed in version 3 13 The keyword argument syntax is deprecated in 3 11 and will be removed in 3 13 It may also be unsupported by static type checkers The functional syntax should also be used when any of the keys are not valid identifiers for example because they are keywords or contain hyphens Example raises SyntaxError class Point2D TypedDict in int in is a keyword x y int name with hyphens OK functional syntax Point2D TypedDict Point2D in int x y int By default all keys must be present in a TypedDict It is possible to mark individual keys as non required using NotRequired class Point2D TypedDict x int y int label NotRequired str Alternative syntax Point2D TypedDict Point2D x int y int label NotRequired str This means that a Point2D TypedDict can have the label key omitted It is also possible to mark all keys as non required by default by specifying a totality of False class 
Point2D TypedDict total False x int y int Alternative syntax Point2D TypedDict Point2D x int y int total False This means that a Point2D TypedDict can have any of the keys omitted A type checker is only expected to support a literal False or True as the value of the total argument True is the default and makes all items defined in the class body required Individual keys of a total False TypedDict can be marked as required using Required class Point2D TypedDict total False x Required int y Required int label str Alternative syntax Point2D TypedDict Point2D x Required int y Required int label str total False It is possible for a TypedDict type to inherit from one or more other TypedDict types using the class based syntax Usage class Point3D Point2D z int Point3D has three items x y and z It is equivalent to this definition class Point3D TypedDict x int y int z int A TypedDict cannot inherit from a non TypedDict class except for Generic For example class X TypedDict x int class Y TypedDict y int class Z object pass A non TypedDict class class XY X Y pass OK class XZ X Z pass raises TypeError A TypedDict can be generic class Group T TypedDict key T group list T To create a generic TypedDict that is compatible with Python 3 11 or lower inherit from Generic explicitly T TypeVar T class Group TypedDict Generic T key T group list T A TypedDict can be introspected via annotations dicts see Annotations Best Practices for more information on annotations best practices __total__ __required_keys__ and __optional_keys__ __total__ Point2D __total__ gives the value of the total argument Example from typing import TypedDict class Point2D TypedDict pass Point2D __total__ True class Point2D TypedDict total False pass Point2D __total__ False class Point3D Point2D pass Point3D __total__ True This attribute reflects only the value of the total argument to the current TypedDict class not whether the class is semantically total For example a TypedDict with __total__ set to True may have 
keys marked with NotRequired, or it may inherit from another TypedDict with total=False. Therefore, it is generally better to use __required_keys__ and __optional_keys__ for introspection.

__required_keys__
   New in version 3.9.
__optional_keys__
   Point2D.__required_keys__ and Point2D.__optional_keys__ return frozenset objects containing required and non-required keys, respectively. Keys marked with Required will always appear in __required_keys__ and keys marked with NotRequired will always appear in __optional_keys__.

   For backwards compatibility with Python 3.10 and below, it is also possible to use inheritance to declare both required and non-required keys in the same TypedDict. This is done by declaring a TypedDict with one value for the total argument and then inheriting from it in another TypedDict with a different value for total:

      >>> class Point2D(TypedDict, total=False):
      ...     x: int
      ...     y: int
      ...
      >>> class Point3D(Point2D):
      ...     z: int
      ...
      >>> Point3D.__required_keys__ == frozenset({'z'})
      True
      >>> Point3D.__optional_keys__ == frozenset({'x', 'y'})
      True

   New in version 3.9.

   Note: If from __future__ import annotations is used or if annotations are given as strings, annotations are not evaluated when the TypedDict is defined. Therefore, the runtime introspection that __required_keys__ and __optional_keys__ rely on may not work properly, and the values of the attributes may be incorrect.

See PEP 589 for more examples and detailed rules of using TypedDict.

New in version 3.8.
Changed in version 3.11: Added support for marking individual keys as Required or NotRequired. See PEP 655.
Changed in version 3.11: Added support for generic TypedDicts.

Protocols

The following protocols are provided by the typing module. All are decorated with runtime_checkable.

class typing.SupportsAbs
   An ABC with one abstract method __abs__ that is covariant in its return type.
class typing.SupportsBytes
   An ABC with one abstract method __bytes__.
class typing.SupportsComplex
   An ABC with one abstract method __complex__.
class typing.SupportsFloat
   An ABC with one abstract method __float__.
class typing.SupportsIndex
   An ABC with one abstract method __index__.
   New in version 3.8.
class
typing.SupportsInt: An ABC with one abstract method __int__. class typing.SupportsRound: An ABC with one abstract method __round__ that is covariant in its return type. ABCs for working with IO. class typing.IO, class typing.TextIO, class typing.BinaryIO: Generic type IO[AnyStr] and its subclasses TextIO(IO[str]) and BinaryIO(IO[bytes]) represent the types of I/O streams such as returned by open(). Functions and decorators. typing.cast(typ, val): Cast a value to a type. This returns the value unchanged. To the type checker this signals that the return value has the designated type, but at runtime we intentionally don't check anything (we want this to be as fast as possible). typing.assert_type(val, typ): Ask a static type checker to confirm that val has an inferred type of typ. At runtime this does nothing: it returns the first argument unchanged with no checks or side effects, no matter the actual type of the argument. When a static type checker encounters a call to assert_type(), it emits an error if the value is not of the specified type:

def greet(name: str) -> None:
    assert_type(name, str)  # OK, inferred type of 'name' is 'str'
    assert_type(name, int)  # type checker error

This function is useful for ensuring the type checker's understanding of a script is in line with the developer's intentions:

def complex_function(arg: object):
    # Do some complex type narrowing logic,
    # after which we hope the inferred type will be `int`
    ...
    # Test whether the type checker correctly understands our function
    assert_type(arg, int)

New in version 3.11. typing.assert_never(arg): Ask a static type checker to confirm that a line of code is unreachable. Example:

def int_or_str(arg: int | str) -> None:
    match arg:
        case int():
            print("It's an int")
        case str():
            print("It's a str")
        case _ as unreachable:
            assert_never(unreachable)

Here, the annotations allow the type checker to infer that the last case can never execute, because arg is either an int or a str, and both options are covered by earlier cases. If a type checker finds that a call to assert_never() is reachable, it will emit an error. For example, if the type annotation for arg was instead int | str | float, the type checker would emit an error
pointing out that unreachable is of type float. For a call to assert_never to pass type checking, the inferred type of the argument passed in must be the bottom type, Never, and nothing else. At runtime, this throws an exception when called. See also: Unreachable Code and Exhaustiveness Checking has more information about exhaustiveness checking with static typing. New in version 3.11. typing.reveal_type(obj): Ask a static type checker to reveal the inferred type of an expression. When a static type checker encounters a call to this function, it emits a diagnostic with the inferred type of the argument:

x: int = 1
reveal_type(x)  # Revealed type is "builtins.int"

This can be useful when you want to debug how your type checker handles a particular piece of code. At runtime, this function prints the runtime type of its argument to sys.stderr and returns the argument unchanged (allowing the call to be used within an expression):

x = reveal_type(1)  # prints "Runtime type is int"
print(x)            # prints "1"

Note that the runtime type may be different from (more or less specific than) the type statically inferred by a type checker. Most type checkers support reveal_type() anywhere, even if the name is not imported from typing. Importing the name from typing, however, allows your code to run without runtime errors and communicates intent more clearly. New in version 3.11. typing.dataclass_transform(*, eq_default=True, order_default=False, kw_only_default=False, frozen_default=False, field_specifiers=(), **kwargs): Decorator to mark an object as providing dataclass-like behavior. dataclass_transform may be used to decorate a class, metaclass, or a function that is itself a decorator. The presence of @dataclass_transform() tells a static type checker that the decorated object performs runtime magic that transforms a class in a similar way to @dataclasses.dataclass. Example usage with a decorator function:

@dataclass_transform()
def create_model[T](cls: type[T]) -> type[T]:
    ...
    return cls

@create_model
class CustomerModel:
    id: int
    name: str

On a base class:

@dataclass_transform()
class ModelBase: ...

class CustomerModel(ModelBase):
    id: int
    name: str

On a metaclass:

@dataclass_transform()
class ModelMeta(type): ...

class ModelBase(metaclass=ModelMeta): ...

class CustomerModel(ModelBase):
    id: int
    name: str

The CustomerModel classes defined above will be treated by type checkers similarly to classes created with @dataclasses.dataclass. For example, type checkers will assume these classes have __init__ methods that accept id and name. The decorated class, metaclass, or function may accept the following bool arguments, which type checkers will assume have the same effect as they would have on the @dataclasses.dataclass decorator: init, eq, order, unsafe_hash, frozen, match_args, kw_only, and slots. It must be possible for the value of these arguments (True or False) to be statically evaluated. The arguments to the dataclass_transform decorator can be used to customize the default behaviors of the decorated class, metaclass, or function. Parameters: eq_default (bool): Indicates whether the eq parameter is assumed to be True or False if it is omitted by the caller. Defaults to True. order_default (bool): Indicates whether the order parameter is assumed to be True or False if it is omitted by the caller. Defaults to False. kw_only_default (bool): Indicates whether the kw_only parameter is assumed to be True or False if it is omitted by the caller. Defaults to False. frozen_default (bool): Indicates whether the frozen parameter is assumed to be True or False if it is omitted by the caller. Defaults to False. New in version 3.12. field_specifiers (tuple[Callable[..., Any], ...]): Specifies a static list of supported classes or functions that describe fields, similar to dataclasses.field(). Defaults to (). **kwargs (Any): Arbitrary other keyword arguments are accepted in order to allow for possible future extensions. Type checkers recognize the following optional parameters on field specifiers. Recognised parameters for field specifiers: init: Indicates whether the field should be included in the synthesized
__init__ method. If unspecified, init defaults to True. default: Provides the default value for the field. default_factory: Provides a runtime callback that returns the default value for the field. If neither default nor default_factory are specified, the field is assumed to have no default value and must be provided a value when the class is instantiated. factory: An alias for the default_factory parameter on field specifiers. kw_only: Indicates whether the field should be marked as keyword-only. If True, the field will be keyword-only. If False, it will not be keyword-only. If unspecified, the value of the kw_only parameter on the object decorated with dataclass_transform will be used, or, if that is unspecified, the value of kw_only_default on dataclass_transform will be used. alias: Provides an alternative name for the field. This alternative name is used in the synthesized __init__ method. At runtime, this decorator records its arguments in the __dataclass_transform__ attribute on the decorated object. It has no other runtime effect. See PEP 681 for more details. New in version 3.11. typing.overload: Decorator for creating overloaded functions and methods. The @overload decorator allows describing functions and methods that support multiple different combinations of argument types. A series of @overload-decorated definitions must be followed by exactly one non-@overload-decorated definition (for the same function/method). @overload-decorated definitions are for the benefit of the type checker only, since they will be overwritten by the non-@overload-decorated definition. The non-@overload-decorated definition, meanwhile, will be used at runtime but should be ignored by a type checker. At runtime, calling an @overload-decorated function directly will raise NotImplementedError. An example of overload that gives a more precise type than can be expressed using a union or a type variable:

@overload
def process(response: None) -> None: ...
@overload
def process(response: int) -> tuple[int, str]: ...
@overload
def process(response: bytes) -> str: ...
def process(response):
    ...  # actual implementation goes here

See PEP 484 for more details and comparison with
other typing semantics. Changed in version 3.11: Overloaded functions can now be introspected at runtime using get_overloads(). typing.get_overloads(func): Return a sequence of @overload-decorated definitions for func. func is the function object for the implementation of the overloaded function. For example, given the definition of process in the documentation for @overload, get_overloads(process) will return a sequence of three function objects for the three defined overloads. If called on a function with no overloads, get_overloads() returns an empty sequence. get_overloads() can be used for introspecting an overloaded function at runtime. New in version 3.11. typing.clear_overloads(): Clear all registered overloads in the internal registry. This can be used to reclaim the memory used by the registry. New in version 3.11. typing.final: Decorator to indicate final methods and final classes. Decorating a method with @final indicates to a type checker that the method cannot be overridden in a subclass. Decorating a class with @final indicates that it cannot be subclassed. For example:

class Base:
    @final
    def done(self) -> None: ...

class Sub(Base):
    def done(self) -> None:  # Error reported by type checker
        ...

@final
class Leaf: ...

class Other(Leaf):  # Error reported by type checker
    ...

There is no runtime checking of these properties. See PEP 591 for more details. New in version 3.8. Changed in version 3.11: The decorator will now attempt to set a __final__ attribute to True on the decorated object. Thus, a check like if getattr(obj, "__final__", False) can be used at runtime to determine whether an object obj has been marked as final. If the decorated object does not support setting attributes, the decorator returns the object unchanged without raising an exception. typing.no_type_check: Decorator to indicate that annotations are not type hints. This works as a class or function decorator. With a class, it applies recursively to all methods and classes defined in that class (but not to methods defined in its superclasses or subclasses). Type checkers will
ignore all annotations in a function or class with this decorator. no_type_check mutates the decorated object in place. typing.no_type_check_decorator: Decorator to give another decorator the no_type_check effect. This wraps the decorator with something that wraps the decorated function in no_type_check. typing.override: Decorator to indicate that a method in a subclass is intended to override a method or attribute in a superclass. Type checkers should emit an error if a method decorated with @override does not, in fact, override anything. This helps prevent bugs that may occur when a base class is changed without an equivalent change to a child class. For example:

class Base:
    def log_status(self) -> None: ...

class Sub(Base):
    @override
    def log_status(self) -> None:  # Okay: overrides Base.log_status
        ...

    @override
    def done(self) -> None:  # Error reported by type checker
        ...

There is no runtime checking of this property. The decorator will attempt to set an __override__ attribute to True on the decorated object. Thus, a check like if getattr(obj, "__override__", False) can be used at runtime to determine whether an object obj has been marked as an override. If the decorated object does not support setting attributes, the decorator returns the object unchanged without raising an exception. See PEP 698 for more details. New in version 3.12. typing.type_check_only: Decorator to mark a class or function as unavailable at runtime. This decorator is itself not available at runtime. It is mainly intended to mark classes that are defined in type stub files if an implementation returns an instance of a private class:

@type_check_only
class Response:  # private or not available at runtime
    code: int
    def get_header(self, name: str) -> str: ...

def fetch_response() -> Response: ...

Note that returning instances of private classes is not recommended. It is usually preferable to make such classes public. Introspection helpers. typing.get_type_hints(obj, globalns=None, localns=None, include_extras=False): Return a dictionary containing type hints for a function, method, module, or class object. This is often the same as obj.__annotations__. In addition, forward references encoded as string
literals are handled by evaluating them in globals and locals namespaces. For a class C, return a dictionary constructed by merging all the __annotations__ along C.__mro__ in reverse order. The function recursively replaces all Annotated[T, ...] with T, unless include_extras is set to True (see Annotated for more information). For example:

class Student(NamedTuple):
    name: Annotated[str, 'some marker']

assert get_type_hints(Student) == {'name': str}
assert get_type_hints(Student, include_extras=False) == {'name': str}
assert get_type_hints(Student, include_extras=True) == {
    'name': Annotated[str, 'some marker']
}

Note: get_type_hints() does not work with imported type aliases that include forward references. Enabling postponed evaluation of annotations (PEP 563) may remove the need for most forward references. Changed in version 3.9: Added include_extras parameter as part of PEP 593. See the documentation on Annotated for more information. Changed in version 3.11: Previously, Optional[t] was added for function and method annotations if a default value equal to None was set. Now the annotation is returned unchanged. typing.get_origin(tp): Get the unsubscripted version of a type: for a typing object of the form X[Y, Z, ...] return X. If X is a typing-module alias for a builtin or collections class, it will be normalized to the original class. If X is an instance of ParamSpecArgs or ParamSpecKwargs, return the underlying ParamSpec. Return None for unsupported objects. Examples:

assert get_origin(str) is None
assert get_origin(Dict[str, int]) is dict
assert get_origin(Union[int, str]) is Union
P = ParamSpec('P')
assert get_origin(P.args) is P
assert get_origin(P.kwargs) is P

New in version 3.8. typing.get_args(tp): Get type arguments with all substitutions performed: for a typing object of the form X[Y, Z, ...] return (Y, Z, ...). If X is a union or Literal contained in another generic type, the order of (Y, Z, ...) may be different from the order of the original arguments due to type caching. Return () for unsupported objects. Examples:

assert get_args(int) == ()
assert get_args(Dict[int, str]) == (int, str)
assert
get_args(Union[int, str]) == (int, str). New in version 3.8. typing.is_typeddict(tp): Check if a type is a TypedDict. For example:

class Film(TypedDict):
    title: str
    year: int

assert is_typeddict(Film)
assert not is_typeddict(list[str])
# TypedDict is a factory for creating typed dicts, not a typed dict itself
assert not is_typeddict(TypedDict)

New in version 3.10. class typing.ForwardRef: Class used for internal typing representation of string forward references. For example, List["SomeClass"] is implicitly transformed into List[ForwardRef("SomeClass")]. ForwardRef should not be instantiated by a user, but may be used by introspection tools. Note: PEP 585 generic types such as list["SomeClass"] will not be implicitly transformed into list[ForwardRef("SomeClass")] and thus will not automatically resolve to list[SomeClass]. New in version 3.7.4. Constant. typing.TYPE_CHECKING: A special constant that is assumed to be True by 3rd party static type checkers. It is False at runtime. Usage:

if TYPE_CHECKING:
    import expensive_mod

def fun(arg: "expensive_mod.SomeType") -> None:
    local_var: expensive_mod.AnotherType = other_fun()

The first type annotation must be enclosed in quotes, making it a forward reference, to hide the expensive_mod reference from the interpreter runtime. Type annotations for local variables are not evaluated, so the second annotation does not need to be enclosed in quotes. Note: If from __future__ import annotations is used, annotations are not evaluated at function definition time. Instead, they are stored as strings in __annotations__. This makes it unnecessary to use quotes around the annotation (see PEP 563). New in version 3.5.2. Deprecated aliases. This module defines several deprecated aliases to pre-existing standard library classes. These were originally included in the typing module in order to support parameterizing these generic classes using []. However, the aliases became redundant in Python 3.9 when the corresponding pre-existing classes were enhanced to support [] (see PEP 585). The redundant types are deprecated as of Python 3.9. However, while the aliases may be removed at some point, removal of these aliases is not currently
planned. As such, no deprecation warnings are currently issued by the interpreter for these aliases. If at some point it is decided to remove these deprecated aliases, a deprecation warning will be issued by the interpreter for at least two releases prior to removal. The aliases are guaranteed to remain in the typing module without deprecation warnings until at least Python 3.14. Type checkers are encouraged to flag uses of the deprecated types if the program they are checking targets a minimum Python version of 3.9 or newer. Aliases to built-in types. class typing.Dict(dict, MutableMapping[KT, VT]): Deprecated alias to dict. Note that to annotate arguments, it is preferred to use an abstract collection type such as Mapping, rather than to use dict or typing.Dict. This type can be used as follows:

def count_words(text: str) -> Dict[str, int]:
    ...

Deprecated since version 3.9: builtins.dict now supports subscripting ([]). See PEP 585 and Generic Alias Type. class typing.List(list, MutableSequence[T]): Deprecated alias to list. Note that to annotate arguments, it is preferred to use an abstract collection type such as Sequence or Iterable, rather than to use list or typing.List. This type may be used as follows:

def vec2[T: (int, float)](x: T, y: T) -> List[T]:
    return [x, y]

def keep_positives[T: (int, float)](vector: Sequence[T]) -> List[T]:
    return [item for item in vector if item > 0]

Deprecated since version 3.9: builtins.list now supports subscripting ([]). See PEP 585 and Generic Alias Type. class typing.Set(set, MutableSet[T]): Deprecated alias to builtins.set. Note that to annotate arguments, it is preferred to use an abstract collection type such as AbstractSet, rather than to use set or typing.Set. Deprecated since version 3.9: builtins.set now supports subscripting ([]). See PEP 585 and Generic Alias Type. class typing.FrozenSet(frozenset, AbstractSet[T_co]): Deprecated alias to builtins.frozenset. Deprecated since version 3.9: builtins.frozenset now supports subscripting ([]). See PEP 585 and Generic Alias Type. typing.Tuple: Deprecated alias for tuple. tuple and Tuple are
special-cased in the type system; see Annotating tuples for more details. Deprecated since version 3.9: builtins.tuple now supports subscripting ([]). See PEP 585 and Generic Alias Type. class typing.Type(Generic[CT_co]): Deprecated alias to type. See The type of class objects for details on using type or typing.Type in type annotations. New in version 3.5.2. Deprecated since version 3.9: builtins.type now supports subscripting ([]). See PEP 585 and Generic Alias Type. Aliases to types in collections. class typing.DefaultDict(collections.defaultdict, MutableMapping[KT, VT]): Deprecated alias to collections.defaultdict. New in version 3.5.2. Deprecated since version 3.9: collections.defaultdict now supports subscripting ([]). See PEP 585 and Generic Alias Type. class typing.OrderedDict(collections.OrderedDict, MutableMapping[KT, VT]): Deprecated alias to collections.OrderedDict. New in version 3.7.2. Deprecated since version 3.9: collections.OrderedDict now supports subscripting ([]). See PEP 585 and Generic Alias Type. class typing.ChainMap(collections.ChainMap, MutableMapping[KT, VT]): Deprecated alias to collections.ChainMap. New in version 3.6.1. Deprecated since version 3.9: collections.ChainMap now supports subscripting ([]). See PEP 585 and Generic Alias Type. class typing.Counter(collections.Counter, Dict[T, int]): Deprecated alias to collections.Counter. New in version 3.6.1. Deprecated since version 3.9: collections.Counter now supports subscripting ([]). See PEP 585 and Generic Alias Type. class typing.Deque(deque, MutableSequence[T]): Deprecated alias to collections.deque. New in version 3.6.1. Deprecated since version 3.9: collections.deque now supports subscripting ([]). See PEP 585 and Generic Alias Type. Aliases to other concrete types. Deprecated since version 3.8, will be removed in version 3.13: The typing.io namespace is deprecated and will be removed. These types should be directly imported from typing instead. class typing.Pattern, class typing.Match: Deprecated aliases corresponding to the return types from re.compile() and re.match(). These types (and the corresponding functions) are generic over AnyStr. Pattern can be specialised as Pattern[str] or
Pattern[bytes]; Match can be specialised as Match[str] or Match[bytes]. Deprecated since version 3.8, will be removed in version 3.13: The typing.re namespace is deprecated and will be removed. These types should be directly imported from typing instead. Deprecated since version 3.9: Classes Pattern and Match from re now support []. See PEP 585 and Generic Alias Type. class typing.Text: Deprecated alias for str. Text is provided to supply a forward-compatible path for Python 2 code: in Python 2, Text is an alias for unicode. Use Text to indicate that a value must contain a unicode string in a manner that is compatible with both Python 2 and Python 3:

def add_unicode_checkmark(text: Text) -> Text:
    return text + u' \u2713'

New in version 3.5.2. Deprecated since version 3.11: Python 2 is no longer supported, and most type checkers also no longer support type checking Python 2 code. Removal of the alias is not currently planned, but users are encouraged to use str instead of Text. Aliases to container ABCs in collections.abc. class typing.AbstractSet(Collection[T_co]): Deprecated alias to collections.abc.Set. Deprecated since version 3.9: collections.abc.Set now supports subscripting ([]). See PEP 585 and Generic Alias Type. class typing.ByteString(Sequence[int]): This type represents the types bytes, bytearray, and memoryview of byte sequences. Deprecated since version 3.9, will be removed in version 3.14: Prefer collections.abc.Buffer, or a union like bytes | bytearray | memoryview. class typing.Collection(Sized, Iterable[T_co], Container[T_co]): Deprecated alias to collections.abc.Collection. New in version 3.6. Deprecated since version 3.9: collections.abc.Collection now supports subscripting ([]). See PEP 585 and Generic Alias Type. class typing.Container(Generic[T_co]): Deprecated alias to collections.abc.Container. Deprecated since version 3.9: collections.abc.Container now supports subscripting ([]). See PEP 585 and Generic Alias Type. class typing.ItemsView(MappingView, AbstractSet[tuple[KT_co, VT_co]]): Deprecated alias to collections.abc.ItemsView.
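These deprecation notes all point the same way: subscript the collections.abc classes directly. As a minimal runnable sketch of that preferred spelling (the function name and sample dict below are illustrative, not from the documentation):

```python
from collections.abc import ItemsView, Mapping

def total(scores: Mapping[str, int]) -> int:
    # scores.items() returns a view that is an instance of ItemsView;
    # the annotations use collections.abc generics, not the typing aliases
    items: ItemsView[str, int] = scores.items()
    return sum(value for _, value in items)

print(total({"math": 90, "art": 85}))  # prints 175
```

The same direct subscription works for Mapping, Sequence, and the other ABCs listed in this section on Python 3.9 and later.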
Deprecated since version 3.9: collections.abc.ItemsView now supports subscripting ([]). See PEP 585 and Generic Alias Type. class typing.KeysView(MappingView, AbstractSet[KT_co]): Deprecated alias to collections.abc.KeysView. Deprecated since version 3.9: collections.abc.KeysView now supports subscripting ([]). See PEP 585 and Generic Alias Type. class typing.Mapping(Collection[KT], Generic[KT, VT_co]): Deprecated alias to collections.abc.Mapping. This type can be used as follows:

def get_position_in_index(word_list: Mapping[str, int], word: str) -> int:
    return word_list[word]

Deprecated since version 3.9: collections.abc.Mapping now supports subscripting ([]). See PEP 585 and Generic Alias Type. class typing.MappingView(Sized): Deprecated alias to collections.abc.MappingView. Deprecated since version 3.9: collections.abc.MappingView now supports subscripting ([]). See PEP 585 and Generic Alias Type. class typing.MutableMapping(Mapping[KT, VT]): Deprecated alias to collections.abc.MutableMapping. Deprecated since version 3.9: collections.abc.MutableMapping now supports subscripting ([]). See PEP 585 and Generic Alias Type. class typing.MutableSequence(Sequence[T]): Deprecated alias to collections.abc.MutableSequence. Deprecated since version 3.9: collections.abc.MutableSequence now supports subscripting ([]). See PEP 585 and Generic Alias Type. class typing.MutableSet(AbstractSet[T]): Deprecated alias to collections.abc.MutableSet. Deprecated since version 3.9: collections.abc.MutableSet now supports subscripting ([]). See PEP 585 and Generic Alias Type. class typing.Sequence(Reversible[T_co], Collection[T_co]): Deprecated alias to collections.abc.Sequence. Deprecated since version 3.9: collections.abc.Sequence now supports subscripting ([]). See PEP 585 and Generic Alias Type. class typing.ValuesView(MappingView, Collection[_VT_co]): Deprecated alias to collections.abc.ValuesView. Deprecated since version 3.9: collections.abc.ValuesView now supports subscripting ([]). See PEP 585 and Generic Alias Type. Aliases to asynchronous ABCs in collections.abc. class typing.Coroutine(Awaitable[ReturnType], Generic[YieldType, SendType, ReturnType]): Deprecated alias to collections.abc.Coroutine. The
variance and order of type variables correspond to those of Generator, for example:

from collections.abc import Coroutine
c: Coroutine[list[str], str, int]  # Some coroutine defined elsewhere
x = c.send('hi')                   # Inferred type of 'x' is list[str]
async def bar() -> None:
    y = await c                    # Inferred type of 'y' is int

New in version 3.5.3. Deprecated since version 3.9: collections.abc.Coroutine now supports subscripting ([]). See PEP 585 and Generic Alias Type. class typing.AsyncGenerator(AsyncIterator[YieldType], Generic[YieldType, SendType]): Deprecated alias to collections.abc.AsyncGenerator. An async generator can be annotated by the generic type AsyncGenerator[YieldType, SendType]. For example:

async def echo_round() -> AsyncGenerator[int, float]:
    sent = yield 0
    while sent >= 0.0:
        rounded = await round(sent)
        sent = yield rounded

Unlike normal generators, async generators cannot return a value, so there is no ReturnType type parameter. As with Generator, the SendType behaves contravariantly. If your generator will only yield values, set the SendType to None:

async def infinite_stream(start: int) -> AsyncGenerator[int, None]:
    while True:
        yield start
        start = await increment(start)

Alternatively, annotate your generator as having a return type of either AsyncIterable[YieldType] or AsyncIterator[YieldType]:

async def infinite_stream(start: int) -> AsyncIterator[int]:
    while True:
        yield start
        start = await increment(start)

New in version 3.6.1. Deprecated since version 3.9: collections.abc.AsyncGenerator now supports subscripting ([]). See PEP 585 and Generic Alias Type. class typing.AsyncIterable(Generic[T_co]): Deprecated alias to collections.abc.AsyncIterable. New in version 3.5.2. Deprecated since version 3.9: collections.abc.AsyncIterable now supports subscripting ([]). See PEP 585 and Generic Alias Type. class typing.AsyncIterator(AsyncIterable[T_co]): Deprecated alias to collections.abc.AsyncIterator. New in version 3.5.2. Deprecated since version 3.9: collections.abc.AsyncIterator now supports subscripting ([]). See PEP 585 and Generic Alias Type. class typing.Awaitable(Generic[T_co]): Deprecated alias to
collections.abc.Awaitable. New in version 3.5.2. Deprecated since version 3.9: collections.abc.Awaitable now supports subscripting ([]). See PEP 585 and Generic Alias Type. Aliases to other ABCs in collections.abc. class typing.Iterable(Generic[T_co]): Deprecated alias to collections.abc.Iterable. Deprecated since version 3.9: collections.abc.Iterable now supports subscripting ([]). See PEP 585 and Generic Alias Type. class typing.Iterator(Iterable[T_co]): Deprecated alias to collections.abc.Iterator. Deprecated since version 3.9: collections.abc.Iterator now supports subscripting ([]). See PEP 585 and Generic Alias Type. typing.Callable: Deprecated alias to collections.abc.Callable. See Annotating callable objects for details on how to use collections.abc.Callable and typing.Callable in type annotations. Deprecated since version 3.9: collections.abc.Callable now supports subscripting ([]). See PEP 585 and Generic Alias Type. Changed in version 3.10: Callable now supports ParamSpec and Concatenate. See PEP 612 for more details. class typing.Generator(Iterator[YieldType], Generic[YieldType, SendType, ReturnType]): Deprecated alias to collections.abc.Generator. A generator can be annotated by the generic type Generator[YieldType, SendType, ReturnType]. For example:

def echo_round() -> Generator[int, float, str]:
    sent = yield 0
    while sent >= 0:
        sent = yield round(sent)
    return 'Done'

Note that unlike many other generics in the typing module, the SendType of Generator behaves contravariantly, not covariantly or invariantly. If your generator will only yield values, set the SendType and ReturnType to None:

def infinite_stream(start: int) -> Generator[int, None, None]:
    while True:
        yield start
        start += 1

Alternatively, annotate your generator as having a return type of either Iterable[YieldType] or Iterator[YieldType]:

def infinite_stream(start: int) -> Iterator[int]:
    while True:
        yield start
        start += 1

Deprecated since version 3.9: collections.abc.Generator now supports subscripting ([]). See PEP 585 and Generic Alias Type. class typing.Hashable: Deprecated alias to collections.abc.Hashable. Deprecated since version 3.12: Use collections.abc.Hashable directly instead. class typing.Reversible(Iterable[T_co]): Deprecated alias to collections.abc.Reversible. Deprecated since version 3.9: collections.abc.Reversible now supports subscripting ([]). See PEP 585 and Generic Alias Type. class typing.Sized: Deprecated alias to collections.abc.Sized. Deprecated since version 3.12: Use collections.abc.Sized directly instead. Aliases to contextlib ABCs. class typing.ContextManager(Generic[T_co]): Deprecated alias to contextlib.AbstractContextManager. New in version 3.5.4. Deprecated since version 3.9: contextlib.AbstractContextManager now supports subscripting ([]). See PEP 585 and Generic Alias Type. class typing.AsyncContextManager(Generic[T_co]): Deprecated alias to contextlib.AbstractAsyncContextManager. New in version 3.6.2. Deprecated since version 3.9: contextlib.AbstractAsyncContextManager now supports subscripting ([]). See PEP 585 and Generic Alias Type. Deprecation Timeline of Major Features. Certain features in typing are deprecated and may be removed in a future version of Python. The following table summarizes major deprecations for your convenience. This is subject to change, and not all deprecations are listed.

Feature | Deprecated in | Projected removal | PEP/issue
typing.io and typing.re submodules | 3.8 | 3.13 | bpo-38291
typing versions of standard collections | 3.9 | Undecided (see Deprecated aliases for more information) | PEP 585
typing.ByteString | 3.9 | 3.14 | gh-91896
typing.Text | 3.11 | Undecided | gh-92332
typing.Hashable and typing.Sized | 3.12 | Undecided | gh-94309
typing.TypeAlias | 3.12 | Undecided | PEP 695
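The redundancy behind most of these deprecations can be observed at runtime with get_origin(), described earlier. A small sketch, assuming Python 3.9 or later (the variable names are illustrative):

```python
from typing import Dict, get_origin

# Since Python 3.9 the builtin supports subscription directly (PEP 585),
# so typing.Dict[str, int] is redundant with dict[str, int].
legacy = Dict[str, int]
modern = dict[str, int]

# get_origin() normalizes the typing alias back to the original builtin class,
# so both spellings report the same origin.
assert get_origin(legacy) is dict
assert get_origin(modern) is dict
print(get_origin(legacy))  # prints <class 'dict'>
```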
warnings: Warning control. Source code: Lib/warnings.py. Warning messages are typically issued in situations where it is useful to alert the user of some condition in a program, where that condition (normally) doesn't warrant raising an exception and terminating the program. For example, one might want to issue a warning when a program uses an obsolete module. Python programmers issue warnings by calling the warn() function defined in this module. (C programmers use PyErr_WarnEx(); see Exception Handling for details.) Warning messages are normally written to sys.stderr, but their disposition can be changed flexibly, from ignoring all warnings to turning them into exceptions. The disposition of warnings can vary based on the warning category, the text of the warning message, and the source location where it is issued. Repetitions of a particular warning for the same source location are typically suppressed. There are two stages in warning control: first, each time a warning is issued, a determination is made whether a message should be issued or not; next, if a message is to be issued, it is formatted and printed using a user-settable hook. The determination whether to issue a warning message is controlled by the warning filter, which is a sequence of matching rules and actions. Rules can be added to the filter by calling filterwarnings() and reset to its default state by calling resetwarnings(). The printing of warning messages is done by calling showwarning(), which may be overridden; the default implementation of this function formats the message by calling formatwarning(), which is also available for use by custom implementations. See also: logging.captureWarnings() allows you to handle all warnings with the standard logging infrastructure. Warning Categories. There are a number of built-in exceptions that represent warning categories. This categorization is useful to be able to filter out groups of warnings. While these are technically built-in exceptions, they are documented here because conceptually they belong to
the warnings mechanism. User code can define additional warning categories by subclassing one of the standard warning categories. A warning category must always be a subclass of the Warning class. The following warnings category classes are currently defined:

Warning: This is the base class of all warning category classes. It is a subclass of Exception.
UserWarning: The default category for warn().
DeprecationWarning: Base category for warnings about deprecated features when those warnings are intended for other Python developers (ignored by default, unless triggered by code in __main__).
SyntaxWarning: Base category for warnings about dubious syntactic features.
RuntimeWarning: Base category for warnings about dubious runtime features.
FutureWarning: Base category for warnings about deprecated features when those warnings are intended for end users of applications that are written in Python.
PendingDeprecationWarning: Base category for warnings about features that will be deprecated in the future (ignored by default).
ImportWarning: Base category for warnings triggered during the process of importing a module (ignored by default).
UnicodeWarning: Base category for warnings related to Unicode.
BytesWarning: Base category for warnings related to bytes and bytearray.
ResourceWarning: Base category for warnings related to resource usage (ignored by default).

Changed in version 3.7: Previously DeprecationWarning and FutureWarning were distinguished based on whether a feature was being removed entirely or changing its behaviour. They are now distinguished based on their intended audience and the way they're handled by the default warnings filters. The Warnings Filter The warnings filter controls whether warnings are ignored, displayed, or turned into errors (raising an exception). Conceptually, the warnings filter maintains an ordered list of filter specifications; any specific warning is matched against each filter specification in the list in turn until a match is found; the filter determines the
disposition of the match. Each entry is a tuple of the form (action, message, category, module, lineno), where: action is one of the following strings: "default" (print the first occurrence of matching warnings for each location, i.e. module plus line number, where the warning is issued), "error" (turn matching warnings into exceptions), "ignore" (never print matching warnings), "always" (always print matching warnings), "module" (print the first occurrence of matching warnings for each module where the warning is issued, regardless of line number), "once" (print only the first occurrence of matching warnings, regardless of location). message is a string containing a regular expression that the start of the warning message must match, case-insensitively. In -W and PYTHONWARNINGS, message is a literal string that the start of the warning message must contain (case-insensitively), ignoring any whitespace at the start or end of message. category is a class (a subclass of Warning) of which the warning category must be a subclass in order to match. module is a string containing a regular expression that the start of the fully qualified module name must match, case-sensitively. In -W and PYTHONWARNINGS, module is a literal string that the fully qualified module name must be equal to (case-sensitively), ignoring any whitespace at the start or end of module. lineno is an integer that the line number where the warning occurred must match, or 0 to match all line numbers. Since the Warning class is derived from the built-in Exception class, to turn a warning into an error we simply raise category(message). If a warning is reported and doesn't match any registered filter then the "default" action is applied (hence its name). Describing Warning Filters The warnings filter is initialized by -W options passed to the Python interpreter command line and the PYTHONWARNINGS environment variable. The interpreter saves the arguments for all supplied entries without interpretation in sys.warnoptions; the warnings module parses these when it is first imported (invalid options are ignored, after printing a message to sys.stderr). Individual warnings
filters are specified as a sequence of fields separated by colons: action:message:category:module:line The meaning of each of these fields is as described in The Warnings Filter. When listing multiple filters on a single line (as for PYTHONWARNINGS), the individual filters are separated by commas and the filters listed later take precedence over those listed before them (as they're applied left to right, and the most recently applied filters take precedence over earlier ones). Commonly used warning filters apply to either all warnings, warnings in a particular category, or warnings raised by particular modules or packages. Some examples:

default                      # Show all warnings (even those ignored by default)
ignore                       # Ignore all warnings
error                        # Convert all warnings to errors
error::ResourceWarning       # Treat ResourceWarning messages as errors
default::DeprecationWarning  # Show DeprecationWarning messages
ignore,default:::mymodule    # Only report warnings triggered by "mymodule"
error:::mymodule             # Convert warnings to errors in "mymodule"

Default Warning Filter By default, Python installs several warning filters, which can be overridden by the -W command line option, the PYTHONWARNINGS environment variable and calls to filterwarnings(). In regular release builds, the default warning filter has the following entries (in order of precedence):

default::DeprecationWarning:__main__
ignore::DeprecationWarning
ignore::PendingDeprecationWarning
ignore::ImportWarning
ignore::ResourceWarning

In a debug build, the list of default warning filters is empty. Changed in version 3.2: DeprecationWarning is now ignored by default in addition to PendingDeprecationWarning. Changed in version 3.7: DeprecationWarning is once again shown by default when triggered directly by code in __main__. Changed in version 3.7: BytesWarning no longer appears in the default filter list and is instead configured via sys.warnoptions when -b is specified twice. Overriding the default filter Developers of applications written in Python may wish to hide all Python level warnings from
their users by default, and only display them when running tests or otherwise working on the application. The sys.warnoptions attribute used to pass filter configurations to the interpreter can be used as a marker to indicate whether or not warnings should be disabled:

import sys

if not sys.warnoptions:
    import warnings
    warnings.simplefilter("ignore")

Developers of test runners for Python code are advised to instead ensure that all warnings are displayed by default for the code under test, using code like:

import sys

if not sys.warnoptions:
    import os, warnings
    warnings.simplefilter("default")  # Change the filter in this process
    os.environ["PYTHONWARNINGS"] = "default"  # Also affect subprocesses

Finally, developers of interactive shells that run user code in a namespace other than __main__ are advised to ensure that DeprecationWarning messages are made visible by default, using code like the following (where user_ns is the module used to execute code entered interactively):

import warnings
warnings.filterwarnings("default", category=DeprecationWarning,
                        module=user_ns.get("__name__"))

Temporarily Suppressing Warnings If you are using code that you know will raise a warning, such as a deprecated function, but do not want to see the warning (even when warnings have been explicitly configured via the command line), then it is possible to suppress the warning using the catch_warnings context manager:

import warnings

def fxn():
    warnings.warn("deprecated", DeprecationWarning)

with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    fxn()

While within the context manager all warnings will simply be ignored. This allows you to use known-deprecated code without having to see the warning while not suppressing the warning for other code that might not be aware of its use of deprecated code. Note: this can only be guaranteed in a single-threaded application. If two or more threads use the catch_warnings context manager at the same time, the behavior is undefined. Testing Warnings To test warnings raised by code, use the catch_warnings context manager. With it you can temporarily mutate the warnings filter to facilitate your testing. For
instance, do the following to capture all raised warnings to check:

import warnings

def fxn():
    warnings.warn("deprecated", DeprecationWarning)

with warnings.catch_warnings(record=True) as w:
    # Cause all warnings to always be triggered.
    warnings.simplefilter("always")
    # Trigger a warning.
    fxn()
    # Verify some things
    assert len(w) == 1
    assert issubclass(w[-1].category, DeprecationWarning)
    assert "deprecated" in str(w[-1].message)

One can also cause all warnings to be exceptions by using "error" instead of "always". One thing to be aware of is that if a warning has already been raised because of a once/default rule, then no matter what filters are set the warning will not be seen again unless the warnings registry related to the warning has been cleared. Once the context manager exits, the warnings filter is restored to its state when the context was entered. This prevents tests from changing the warnings filter in unexpected ways between tests and leading to indeterminate test results. The showwarning() function in the module is also restored to its original value. Note: this can only be guaranteed in a single-threaded application. If two or more threads use the catch_warnings context manager at the same time, the behavior is undefined. When testing multiple operations that raise the same kind of warning, it is important to test them in a manner that confirms each operation is raising a new warning (e.g. set warnings to be raised as exceptions and check the operations raise exceptions, check that the length of the warning list continues to increase after each operation, or else delete the previous entries from the warnings list before each new operation). Updating Code For New Versions of Dependencies Warning categories that are primarily of interest to Python developers (rather than end users of applications written in Python) are ignored by default. Notably, this "ignored by default" list includes DeprecationWarning for every module except __main__, which means developers should make sure to test their code with typically ignored
warnings made visible in order to receive timely notifications of future breaking API changes whether in
the standard library or third party packages. In the ideal case, the code will have a suitable test suite, and the test runner will take care of implicitly enabling all warnings when running tests (the test runner provided by the unittest module does this). In less ideal cases, applications can be checked for use of deprecated interfaces by passing -Wd to the Python interpreter (this is shorthand for -W default) or setting PYTHONWARNINGS=default in the environment. This enables default handling for all warnings, including those that are ignored by default. To change what action is taken for encountered warnings you can change what argument is passed to -W (e.g. -W error). See the -W flag for more details on what is possible. Available Functions warnings.warn(message, category=None, stacklevel=1, source=None, *, skip_file_prefixes=None) Issue a warning, or maybe ignore it or raise an exception. The category argument, if given, must be a warning category class; it defaults to UserWarning. Alternatively, message can be a Warning instance, in which case category will be ignored and message.__class__ will be used. In this case the message text will be str(message). This function raises an exception if the particular warning issued is changed into an error by the warnings filter. The stacklevel argument can be used by wrapper functions written in Python, like this:

def deprecated_api(message):
    warnings.warn(message, DeprecationWarning, stacklevel=2)

This makes the warning refer to deprecated_api's caller, rather than to the source of deprecated_api itself (since the latter would defeat the purpose of the warning message). The skip_file_prefixes keyword argument can be used to indicate which stack frames are ignored when counting stack levels. This can be useful when you want the warning to always appear at call sites outside of a package when a constant stacklevel does not fit all call paths or is otherwise challenging to maintain. If supplied, it must be a tuple of strings. When prefixes are supplied, stacklevel is implicitly overridden to be max(2, stacklevel). To cause a warning to be attributed to the caller from outside of the current package you might write:

# example/lower.py
_warn_skips = (os.path.dirname(__file__),)

def one_way(r_luxury_yacht=None, t_wobbler_mangrove=None):
    if r_luxury_yacht:
        warnings.warn("Please migrate to t_wobbler_mangrove=.",
                      skip_file_prefixes=_warn_skips)

# example/higher.py
from . import lower

def another_way(**kw):
    lower.one_way(**kw)

This makes the warning refer to both the example.lower.one_way() and package.higher.another_way() call sites only from calling code living outside of the example package. source, if supplied, is the destroyed object which emitted a ResourceWarning. Changed in version 3.6: Added source parameter. Changed in version 3.12: Added skip_file_prefixes. warnings.warn_explicit(message, category, filename, lineno, module=None, registry=None, module_globals=None, source=None) This is a low-level interface to the functionality of warn(), passing in explicitly the message, category, filename and line number, and optionally the module name and the registry (which should be the __warningregistry__ dictionary of the module). The module name defaults to the filename with .py stripped; if no registry is passed, the warning is never suppressed. message must be a string and category a subclass of Warning, or message may be a Warning instance, in which case category will be ignored. module_globals, if supplied, should be the global namespace in use by the code for which the warning is issued. (This argument is used to support displaying source for modules found in zipfiles or other non-filesystem import sources.) source, if supplied, is the destroyed object which emitted a ResourceWarning. Changed in version 3.6: Add the source parameter. warnings.showwarning(message, category, filename, lineno, file=None, line=None) Write a warning to a file. The default implementation calls formatwarning(message, category, filename, lineno, line) and writes the resulting string to file, which defaults to sys.stderr. You may replace this function with any
callable by assigning to warnings showwarning line is a line of source code to be included in the
warning message if line is not supplied showwarning will try to read the line specified by filename and lineno warnings formatwarning message category filename lineno line None Format a warning the standard way This returns a string which may contain embedded newlines and ends in a newline line is a line of source code to be included in the warning message if line is not supplied formatwarning will try to read the line specified by filename and lineno warnings filterwarnings action message category Warning module lineno 0 append False Insert an entry into the list of warnings filter specifications The entry is inserted at the front by default if append is true it is inserted at the end This checks the types of the arguments compiles the message and module regular expressions and inserts them as a tuple in the list of warnings filters Entries closer to the front of the list override entries later in the list if both match a particular warning Omitted arguments default to a value that matches everything warnings simplefilter action category Warning lineno 0 append False Insert a simple entry into the list of warnings filter specifications The meaning of the function parameters is as for filterwarnings but regular expressions are not needed as the filter inserted always matches any message in any module as long as the category and line number match warnings resetwarnings Reset the warnings filter This discards the effect of all previous calls to filterwarnings including that of the W command line options and calls to simplefilter Available Context Managers class warnings catch_warnings record False module None action None category Warning lineno 0 append False A context manager that copies and upon exit restores the warnings filter and the showwarning function If the record argument is False the default the context manager returns None on entry If record is True a list is returned that is progressively populated with objects as seen by a custom showwarning function 
which also suppresses output to sys stdout Each object in the list has attributes with the same names as the arguments to showwarning The module argument takes a module that will be used instead of the module returned when you import warnings whose filter will be protected This argument exists primarily for testing the warnings module itself If the action argument is not None the remaining arguments are passed to simplefilter as if it were called immediately on entering the context Note The catch_warnings manager works by replacing and then later restoring the module s showwarning function and internal list of filter specifications This means the context manager is modifying global state and therefore is not thread safe Changed in version 3 11 Added the action category lineno and append parameters
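The filter actions and the recording behaviour described above can be exercised together in a short, self-contained sketch (the api_call function below is a made-up stand-in for any code that issues a warning):

```python
import warnings

def api_call():
    # Hypothetical deprecated function, used only for illustration.
    warnings.warn("api_call is deprecated", DeprecationWarning, stacklevel=2)

# With the "always" action, every occurrence is shown (and here, recorded),
# not just the first one per location.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    api_call()
    api_call()

assert len(caught) == 2
assert issubclass(caught[0].category, DeprecationWarning)
assert "deprecated" in str(caught[0].message)

# With the "error" action, a matching warning is raised as an exception.
with warnings.catch_warnings():
    warnings.simplefilter("error")
    try:
        api_call()
    except DeprecationWarning as exc:
        print("raised:", exc)  # prints: raised: api_call is deprecated
```

On exit from each catch_warnings block the previous filter state is restored, so the surrounding program's warning configuration is unaffected.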
urllib parse Parse URLs into components Source code Lib urllib parse py This module defines a standard interface to break Uniform Resource Locator (URL) strings up in components (addressing scheme, network location, path etc.), to combine the components back into a URL string, and to convert a "relative URL" to an absolute URL given a "base URL". The module has been designed to match the internet RFC on Relative Uniform Resource Locators. It supports the following URL schemes: file, ftp, gopher, hdl, http, https, imap, mailto, mms, news, nntp, prospero, rsync, rtsp, rtsps, rtspu, sftp, shttp, sip, sips, snews, svn, svn+ssh, telnet, wais, ws, wss. The urllib.parse module defines functions that fall into two broad categories: URL parsing and URL quoting. These are covered in detail in the following sections. URL Parsing The URL parsing functions focus on splitting a URL string into its components, or on combining URL components into a URL string. urllib.parse.urlparse(urlstring, scheme='', allow_fragments=True) Parse a URL into six components, returning a 6-item named tuple. This corresponds to the general structure of a URL: scheme://netloc/path;parameters?query#fragment. Each tuple item is a string, possibly empty. The components are not broken up into smaller parts (for example, the network location is a single string), and escapes are not expanded. The delimiters as shown above are not part of the result, except for a leading slash in the path component, which is retained if present. For example:

>>> from urllib.parse import urlparse
>>> urlparse("scheme://netloc/path;parameters?query#fragment")
ParseResult(scheme='scheme', netloc='netloc', path='/path', params='parameters',
            query='query', fragment='fragment')
>>> o = urlparse("http://docs.python.org:80/3/library/urllib.parse.html?"
...              "highlight=params#url-parsing")
>>> o
ParseResult(scheme='http', netloc='docs.python.org:80',
            path='/3/library/urllib.parse.html', params='',
            query='highlight=params', fragment='url-parsing')
>>> o.scheme
'http'
>>> o.netloc
'docs.python.org:80'
>>> o.hostname
'docs.python.org'
>>> o.port
80
>>> o._replace(fragment="").geturl()
'http://docs.python.org:80/3/library/urllib.parse.html?highlight=params'

Following the syntax specifications in RFC 1808, urlparse recognizes a netloc only if it is properly introduced by '//'. Otherwise the input is presumed to be a relative URL and thus to start with a path component:

>>> from urllib.parse import urlparse
>>> urlparse('//www.cwi.nl:80/%7Eguido/Python.html')
ParseResult(scheme='', netloc='www.cwi.nl:80', path='/%7Eguido/Python.html',
            params='', query='', fragment='')
>>> urlparse('www.cwi.nl/%7Eguido/Python.html')
ParseResult(scheme='', netloc='', path='www.cwi.nl/%7Eguido/Python.html',
            params='', query='', fragment='')
>>> urlparse('help/Python.html')
ParseResult(scheme='', netloc='', path='help/Python.html', params='',
            query='', fragment='')

The scheme argument gives the default addressing scheme, to be used only if the URL does not specify one. It should be the same type (text or bytes) as urlstring, except that the default value '' is always allowed, and is automatically converted to b'' if appropriate. If the allow_fragments argument is false, fragment identifiers are not recognized. Instead, they are parsed as part of the path, parameters or query component, and fragment is set to the empty string in the return value. The return value is a named tuple, which means that its items can be accessed by index or as named attributes, which are:

scheme (index 0): URL scheme specifier (defaults to the scheme parameter)
netloc (index 1): Network location part (empty string if not present)
path (index 2): Hierarchical path (empty string if not present)
params (index 3): Parameters for last path element (empty string if not present)
query (index 4): Query component (empty string if not present)
fragment (index 5): Fragment identifier (empty string if not present)
username: User name (None if not present)
password: Password (None if not present)
hostname: Host name, lower case (None if not present)
port: Port number as integer, if present (None if not present)

Reading the port attribute will raise a ValueError if an invalid port is specified in the URL. See section Structured Parse Results for more information on the result object. Unmatched square brackets in the netloc attribute will raise a ValueError. Characters in the netloc attribute that decompose under NFKC normalization (as used by the IDNA encoding) into any of '/', '?', '#', '@', or ':' will raise a ValueError. If the URL is decomposed before parsing, no error
will be raised. As is the case with all named tuples, the subclass has a few additional methods and attributes that are particularly useful. One such method is _replace(). The _replace() method will return a new ParseResult object replacing specified fields with new values:

>>> from urllib.parse import urlparse
>>> u = urlparse('//www.cwi.nl:80/%7Eguido/Python.html')
>>> u
ParseResult(scheme='', netloc='www.cwi.nl:80', path='/%7Eguido/Python.html',
            params='', query='', fragment='')
>>> u._replace(scheme='http')
ParseResult(scheme='http', netloc='www.cwi.nl:80', path='/%7Eguido/Python.html',
            params='', query='', fragment='')

Warning: urlparse() does not perform validation. See URL parsing security for details. Changed in version 3.2: Added IPv6 URL parsing capabilities. Changed in version 3.3: The fragment is now parsed for all URL schemes (unless allow_fragment is false), in accordance with RFC 3986. Previously, an allowlist of schemes that support fragments existed. Changed in version 3.6: Out-of-range port numbers now raise ValueError, instead of returning None. Changed in version 3.8: Characters that affect netloc parsing under NFKC normalization will now raise ValueError. urllib.parse.parse_qs(qs, keep_blank_values=False, strict_parsing=False, encoding='utf-8', errors='replace', max_num_fields=None, separator='&') Parse a query string given as a string argument (data of type application/x-www-form-urlencoded). Data are returned as a dictionary. The dictionary keys are the unique query variable names and the values are lists of values for each name. The optional argument keep_blank_values is a flag indicating whether blank values in percent-encoded queries should be treated as blank strings. A true value indicates that blanks should be retained as blank strings. The default false value indicates that blank values are to be ignored and treated as if they were not included. The optional argument strict_parsing is a flag indicating what to do with parsing errors. If false (the default), errors are silently ignored. If true, errors raise a ValueError exception. The optional encoding and errors parameters specify how to decode percent-encoded sequences into Unicode characters, as accepted by the bytes.decode() method. The optional argument max_num_fields is the maximum number of fields to read. If set, then throws a ValueError if there are more than max_num_fields fields read. The optional argument separator is the symbol to use for separating the query arguments. It defaults to '&'. Use the urllib.parse.urlencode() function (with the doseq parameter set to True) to convert such dictionaries into query strings. Changed in version 3.2: Add encoding and errors parameters. Changed in version 3.8: Added max_num_fields parameter. Changed in version 3.10: Added separator parameter with the default value of '&'. Python versions earlier than Python 3.10 allowed using both ';' and '&' as query parameter separator. This has been changed to allow only a single separator key, with '&' as the default separator. urllib.parse.parse_qsl(qs, keep_blank_values=False, strict_parsing=False, encoding='utf-8', errors='replace', max_num_fields=None, separator='&') Parse a query string given as a string argument (data of type application/x-www-form-urlencoded). Data are returned as a list of name, value pairs. The optional argument keep_blank_values is a flag indicating whether blank values in percent-encoded queries should be treated as blank strings. A true value indicates that blanks should be retained as blank strings. The default false value indicates that blank values are to be ignored and treated as if they were not included. The optional argument strict_parsing is a flag indicating what to do with parsing errors. If false (the default), errors are silently ignored. If true, errors raise a ValueError exception. The optional encoding and errors parameters specify how to decode percent-encoded sequences into Unicode characters, as accepted by the bytes.decode() method. The optional argument max_num_fields is the maximum number of fields to read. If set, then throws a ValueError if there are more than max_num_fields fields read. The optional argument
separator is the symbol to use for separating the query arguments. It defaults to '&'. Use the urllib.parse.urlencode() function to convert such lists of pairs into query strings. Changed in version 3.2: Add encoding and errors parameters. Changed in version 3.8: Added max_num_fields parameter. Changed in version 3.10: Added separator parameter with the default value of '&'. Python versions earlier than Python 3.10 allowed using both ';' and '&' as query parameter separator. This has been changed to allow only a single separator key, with '&' as the default separator. urllib.parse.urlunparse(parts) Construct a URL from a tuple as returned by urlparse(). The parts argument can be any six-item iterable. This may result in a slightly different, but equivalent URL, if the URL that was parsed originally had unnecessary delimiters (for example, a ? with an empty query; the RFC states that these are equivalent). urllib.parse.urlsplit(urlstring, scheme='', allow_fragments=True) This is similar to urlparse(), but does not split the params from the URL. This should generally be used instead of urlparse() if the more recent URL syntax allowing parameters to be applied to each segment of the path portion of the URL (see RFC 2396) is wanted. A separate function is needed to separate the path segments and parameters. This function returns a 5-item named tuple: (addressing scheme, network location, path, query, fragment identifier). The return value is a named tuple, its items can be accessed by index or as named attributes:

scheme (index 0): URL scheme specifier (defaults to the scheme parameter)
netloc (index 1): Network location part (empty string if not present)
path (index 2): Hierarchical path (empty string if not present)
query (index 3): Query component (empty string if not present)
fragment (index 4): Fragment identifier (empty string if not present)
username: User name (None if not present)
password: Password (None if not present)
hostname: Host name, lower case (None if not present)
port: Port number as integer, if present (None if not present)

Reading the port attribute will raise a ValueError if an invalid port is specified in the URL. See section Structured Parse Results for more information on the result object. Unmatched square brackets in the netloc attribute will raise a ValueError. Characters
in the netloc attribute that decompose under NFKC normalization (as used by the IDNA encoding) into any of '/', '?', '#', '@', or ':' will raise a ValueError. If the URL is decomposed before parsing, no error will be raised. Following some of the WHATWG spec that updates RFC 3986, leading C0 control and space characters are stripped from the URL, and newline (\n, \r) and tab (\t) characters are removed from the URL at any position. Warning: urlsplit() does not perform validation. See URL parsing security for details. Changed in version 3.6: Out-of-range port numbers now raise ValueError, instead of returning None. Changed in version 3.8: Characters that affect netloc parsing under NFKC normalization will now raise ValueError. Changed in version 3.10: ASCII newline and tab characters are stripped from the URL. Changed in version 3.12: Leading WHATWG C0 control and space characters are stripped from the URL. urllib.parse.urlunsplit(parts) Combine the elements of a tuple as returned by urlsplit() into a complete URL as a string. The parts argument can be any five-item iterable. This may result in a slightly different, but equivalent URL, if the URL that was parsed originally had unnecessary delimiters (for example, a ? with an empty query; the RFC states that these are equivalent). urllib.parse.urljoin(base, url, allow_fragments=True) Construct a full ("absolute") URL by combining a "base URL" (base) with another URL (url). Informally, this uses components of the base URL, in particular the addressing scheme, the network location and (part of) the path, to provide missing components in the relative URL. For example:

>>> from urllib.parse import urljoin
>>> urljoin('http://www.cwi.nl/%7Eguido/Python.html', 'FAQ.html')
'http://www.cwi.nl/%7Eguido/FAQ.html'

The allow_fragments argument has the same meaning and default as for urlparse(). Note: If url is an absolute URL (that is, it starts with // or scheme://), the url's hostname and/or scheme will be present in the result. For example:

>>> urljoin('http://www.cwi.nl/%7Eguido/Python.html',
...         '//www.python.org/%7Eguido')
'http://www.python.org/%7Eguido'

If you do not want that
behavior, preprocess the url with urlsplit() and urlunsplit(), removing possible scheme and netloc parts. Changed in version 3.5: Behavior updated to match the semantics defined in RFC 3986. urllib.parse.urldefrag(url) If url contains a fragment identifier, return a modified version of url with no fragment identifier, and the fragment identifier as a separate string. If there is no fragment identifier in url, return url unmodified and an empty string. The return value is a named tuple, its items can be accessed by index or as named attributes:

url (index 0): URL with no fragment (empty string if not present)
fragment (index 1): Fragment identifier (empty string if not present)

See section Structured Parse Results for more information on the result object. Changed in version 3.2: Result is a structured object rather than a simple 2-tuple. urllib.parse.unwrap(url) Extract the url from a wrapped URL (that is, a string formatted as <URL:scheme://host/path>, <scheme://host/path>, URL:scheme://host/path or scheme://host/path). If url is not a wrapped URL, it is returned without changes. URL parsing security The urlsplit() and urlparse() APIs do not perform validation of inputs. They may not raise errors on inputs that other applications consider invalid. They may also succeed on some inputs that might not be considered URLs elsewhere. Their purpose is for practical functionality rather than purity. Instead of raising an exception on unusual input, they may instead return some component parts as empty strings, or components may contain more than perhaps they should. We recommend that users of these APIs, where the values may be used anywhere with security implications, code defensively. Do some verification within your code before trusting a returned component part. Does that scheme make sense? Is that a sensible path? Is there anything strange about that hostname? etc. What constitutes a URL is not universally well defined. Different applications have different needs and desired constraints. For instance, the living WHATWG spec describes what user-facing web clients such as a web browser require, while RFC 3986 is more general. These functions
incorporate some aspects of both but cannot be claimed compliant with either The APIs and existing user code with expectations on specific behaviors predate both standards leading us to be very cautious about making API behavior changes Parsing ASCII Encoded Bytes The URL parsing functions were originally designed to operate on character strings only In practice it is useful to be able to manipulate properly quoted and encoded URLs as sequences of ASCII bytes Accordingly the URL parsing functions in this module all operate on bytes and bytearray objects in addition to str objects If str data is passed in the result will also contain only str data If bytes or bytearray data is passed in the result will contain only bytes data Attempting to mix str data with bytes or bytearray in a single function call will result in a TypeError being raised while attempting to pass in non ASCII byte values will trigger UnicodeDecodeError To support easier conversion of result objects between str and bytes all return values from URL parsing functions provide either an encode method when the result contains str data or a decode method when the result contains bytes data The signatures of these methods match those of the corresponding str and bytes methods except that the default encoding is ascii rather than utf 8 Each produces a value of a corresponding type that contains either bytes data for encode methods or str data for decode methods Applications that need to operate on potentially improperly quoted URLs that may contain non ASCII data will need to do their own decoding from bytes to characters before invoking the URL parsing methods The behaviour described in this section applies only to the URL parsing functions The URL quoting functions use their own rules when producing or consuming byte sequences as detailed in the documentation of the individual URL quoting functions Changed in version 3 2 URL parsing functions now accept ASCII encoded byte sequences Structured Parse 
Results
The result objects from the urlparse(), urlsplit() and urldefrag() functions are subclasses of the tuple type. These subclasses add the attributes listed in the documentation for those functions, the encoding and decoding support described in the previous section, as well as an additional method:

urllib.parse.SplitResult.geturl()
Return the re-combined version of the original URL as a string. This may differ from the original URL in that the scheme may be normalized to lower case and empty components may be dropped. Specifically, empty parameters, queries, and fragment identifiers will be removed. For urldefrag() results, only empty fragment identifiers will be removed. For urlsplit() and urlparse() results, all noted changes will be made to the URL returned by this method. The result of this method remains unchanged if passed back through the original parsing function:

>>> from urllib.parse import urlsplit
>>> url = 'HTTP://www.Python.org/doc/#'
>>> r1 = urlsplit(url)
>>> r1.geturl()
'http://www.Python.org/doc/'
>>> r2 = urlsplit(r1.geturl())
>>> r2.geturl()
'http://www.Python.org/doc/'

The following classes provide the implementations of the structured parse results when operating on str objects:

class urllib.parse.DefragResult(url, fragment)
Concrete class for urldefrag() results containing str data. The encode() method returns a DefragResultBytes instance.
New in version 3.2.

class urllib.parse.ParseResult(scheme, netloc, path, params, query, fragment)
Concrete class for urlparse() results containing str data. The encode() method returns a ParseResultBytes instance.

class urllib.parse.SplitResult(scheme, netloc, path, query, fragment)
Concrete class for urlsplit() results containing str data. The encode() method returns a SplitResultBytes instance.

The following classes provide the implementations of the parse results when operating on bytes or bytearray objects:

class urllib.parse.DefragResultBytes(url, fragment)
Concrete class for urldefrag() results containing bytes data. The decode() method returns a DefragResult instance.
New in version 3.2.

class urllib.parse.ParseResultBytes(scheme, netloc, path, params, query, fragment)
Concrete class for urlparse() results containing bytes data.
The decode() method returns a ParseResult instance.
New in version 3.2.

class urllib.parse.SplitResultBytes(scheme, netloc, path, query, fragment)
Concrete class for urlsplit() results containing bytes data. The decode() method returns a SplitResult instance.
New in version 3.2.

URL Quoting
The URL quoting functions focus on taking program data and making it safe for use as URL components by quoting special characters and appropriately encoding non-ASCII text. They also support reversing these operations to recreate the original data from the contents of a URL component, if that task isn't already covered by the URL parsing functions above.

urllib.parse.quote(string, safe='/', encoding=None, errors=None)
Replace special characters in string using the %xx escape. Letters, digits, and the characters '_.-~' are never quoted. By default, this function is intended for quoting the path section of a URL. The optional safe parameter specifies additional ASCII characters that should not be quoted; its default value is '/'. string may be either a str or a bytes object.
Changed in version 3.7: Moved from RFC 2396 to RFC 3986 for quoting URL strings. "~" is now included in the set of unreserved characters.
The optional encoding and errors parameters specify how to deal with non-ASCII characters, as accepted by the str.encode() method. encoding defaults to 'utf-8'. errors defaults to 'strict', meaning unsupported characters raise a UnicodeEncodeError. encoding and errors must not be supplied if string is a bytes, or a TypeError is raised.
Note that quote(string, safe, encoding, errors) is equivalent to quote_from_bytes(string.encode(encoding, errors), safe).
Example: quote('/El Niño/') yields '/El%20Ni%C3%B1o/'.

urllib.parse.quote_plus(string, safe='', encoding=None, errors=None)
Like quote(), but also replace spaces with plus signs, as required for quoting HTML form values when building up a query string to go into a URL. Plus signs in the original string are escaped unless they are included in safe. It also does not have safe default to '/'.
Example: quote_plus('/El Niño/') yields '%2FEl+Ni%C3%B1o%2F'.

urllib.parse.quote_from_bytes(bytes, safe='/')
Like quote(), but accepts a bytes object rather than a str, and does not perform string-to-bytes encoding.
Example: quote_from_bytes(b'a&\xef') yields 'a%26%EF'.

urllib.parse.unquote(string, encoding='utf-8', errors='replace')
Replace %xx escapes with their single-character equivalent. The optional encoding and errors parameters specify how to decode percent-encoded sequences into Unicode characters, as accepted by the bytes.decode() method. string may be either a str or a bytes object. encoding defaults to 'utf-8'. errors defaults to 'replace', meaning invalid sequences are replaced by a placeholder character.
Example: unquote('/El%20Ni%C3%B1o/') yields '/El Niño/'.
Changed in version 3.9: string parameter supports bytes and str objects (previously only str).

urllib.parse.unquote_plus(string, encoding='utf-8', errors='replace')
Like unquote(), but also replace plus signs with spaces, as required for unquoting HTML form values. string must be a str.
Example: unquote_plus('/El+Ni%C3%B1o/') yields '/El Niño/'.

urllib.parse.unquote_to_bytes(string)
Replace %xx escapes with their single-octet equivalent, and return a bytes object. string may be either a str or a bytes object. If it is a str, unescaped non-ASCII characters in string are encoded into UTF-8 bytes.
Example: unquote_to_bytes('a%26%EF') yields b'a&\xef'.

urllib.parse.urlencode(query, doseq=False, safe='', encoding=None, errors=None, quote_via=quote_plus)
Convert a mapping object or a sequence of two-element tuples, which may contain str or bytes objects, to a percent-encoded ASCII text string. If the resultant string is to be used as a data for POST operation with the urlopen() function, then it should be encoded to bytes, otherwise it would result in a TypeError.
The resulting string is a series of key=value pairs separated by '&' characters, where both key and value are quoted using the quote_via function. By default, quote_plus() is used to quote the values, which means spaces are quoted as a '+' character and '/' characters are encoded as %2F, which follows the standard for GET requests (application/x-www-form-urlencoded). An alternate function that can be passed as quote_via is quote(), which will encode spaces as %20 and not encode '/' characters. For maximum control of what is quoted, use quote and specify a value for safe.
When a sequence of two-element tuples is used as the query argument, the first element of each tuple is a key and the second is a value. The value element in itself can be a sequence and in that case, if the optional parameter doseq evaluates to True, individual key=value pairs separated by '&' are generated for each element of the value sequence for the key. The order of parameters in the encoded string will match the order of parameter tuples in the sequence.
The safe, encoding, and errors parameters are passed down to quote_via (the encoding and errors parameters are only passed when a query element is a str).
To reverse this encoding process, parse_qs() and parse_qsl() are provided in this module to parse query strings into Python data structures.
Refer to urllib examples to find out how the urllib.parse.urlencode() method can be used for generating the query string of a URL or data for a POST request.
Changed in version 3.2: query supports bytes and string objects.
Changed in version 3.5: Added the quote_via parameter.

See also
WHATWG URL Living standard: Working Group for the URL Standard that defines URLs, domains, IP addresses, the application/x-www-form-urlencoded format, and their API.
RFC 3986, Uniform Resource Identifiers: This is the current standard (STD66). Any changes to the urllib.parse module should conform to this. Certain deviations could be observed, which are mostly for backward compatibility purposes and for certain de facto parsing requirements as commonly observed in major browsers.
RFC 2732, Format for Literal IPv6 Addresses in URL's: This specifies the parsing requirements of IPv6 URLs.
RFC 2396, Uniform Resource Identifiers (URI): Generic Syntax: Document describing the generic syntactic requirements for both Uniform Resource Names (URNs) and Uniform Resource Locators (URLs).
RFC 2368, The mailto URL scheme: Parsing requirements for mailto URL
schemes RFC 1808 Relative Uniform Resource Locators This Request For Comments includes the rules for
joining an absolute and a relative URL including a fair number of Abnormal Examples which govern the treatment of border cases RFC 1738 Uniform Resource Locators URL This specifies the formal syntax and semantics of absolute URLs
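The parsing and quoting functions described above compose into round trips: split and rebuild a URL, percent-encode and decode text, and encode and parse a query string. The following sketch exercises those round trips; the URLs and query values are arbitrary examples, not taken from this documentation.

```python
from urllib.parse import (
    urlsplit, urlunsplit, urldefrag, urlencode, parse_qsl, quote, unquote,
)

# Split a URL into its five components; the result is a named tuple.
parts = urlsplit("https://example.com/path/page?x=1&y=two#frag")
assert parts.scheme == "https"
assert parts.netloc == "example.com"
assert parts.query == "x=1&y=two"

# urlunsplit() rebuilds the original URL from the components.
assert urlunsplit(parts) == "https://example.com/path/page?x=1&y=two#frag"

# urldefrag() separates the fragment identifier.
url, frag = urldefrag("https://example.com/page#section-2")
assert (url, frag) == ("https://example.com/page", "section-2")

# quote()/unquote() round-trip percent-encoding ('/' is safe by default).
assert quote("/El Niño/") == "/El%20Ni%C3%B1o/"
assert unquote("/El%20Ni%C3%B1o/") == "/El Niño/"

# urlencode() builds a query string; parse_qsl() reverses it.
qs = urlencode({"q": "a b", "lang": "en"})
assert qs == "q=a+b&lang=en"
assert parse_qsl(qs) == [("q", "a b"), ("lang", "en")]

print("all round-trips OK")
```

Note that the unquote/quote round trip is lossless here only because the input contains no characters whose quoted and unquoted forms collide (for example, a literal '+' interacts with quote_plus()).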
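The str/bytes behaviour described under "Parsing ASCII Encoded Bytes" can be seen directly on result objects; a small sketch, using an arbitrary example URL:

```python
from urllib.parse import urlsplit

# str input produces a result holding str components.
r = urlsplit("http://example.com/p?q=1")
assert r.netloc == "example.com"

# bytes input produces a result holding bytes components.
rb = urlsplit(b"http://example.com/p?q=1")
assert rb.netloc == b"example.com"

# encode()/decode() convert between the two result flavors
# (the default codec is 'ascii', not 'utf-8').
assert r.encode().netloc == b"example.com"
assert rb.decode().query == "q=1"

print("str/bytes parsing OK")
```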
spwd The shadow password database Deprecated since version 3 11 will be removed in version 3 13 The spwd module is deprecated see PEP 594 for details and alternatives This module provides access to the Unix shadow password database It is available on various Unix versions Availability not Emscripten not WASI This module does not work or is not available on WebAssembly platforms wasm32 emscripten and wasm32 wasi See WebAssembly platforms for more information You must have enough privileges to access the shadow password database this usually means you have to be root Shadow password database entries are reported as a tuple like object whose attributes correspond to the members of the spwd structure Attribute field below see shadow h Index Attribute Meaning 0 sp_namp Login name 1 sp_pwdp Encrypted password 2 sp_lstchg Date of last change 3 sp_min Minimal number of days between changes 4 sp_max Maximum number of days between changes 5 sp_warn Number of days before password expires to warn user about it 6 sp_inact Number of days after password expires until account is disabled 7 sp_expire Number of days since 1970 01 01 when account expires 8 sp_flag Reserved The sp_namp and sp_pwdp items are strings all others are integers KeyError is raised if the entry asked for cannot be found The following functions are defined spwd getspnam name Return the shadow password database entry for the given user name Changed in version 3 6 Raises a PermissionError instead of KeyError if the user doesn t have privileges spwd getspall Return a list of all available shadow password database entries in arbitrary order See also Module grp An interface to the group database similar to this Module pwd An interface to the normal password database similar to this
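Because the module is deprecated, Unix-only, and requires privileges, code that still uses it should degrade gracefully. A minimal defensive sketch; the helper name get_shadow_entry is ours, not part of the module:

```python
def get_shadow_entry(username):
    """Best-effort lookup in the shadow password database.

    Returns the spwd entry for *username*, or None when the spwd module
    is unavailable (non-Unix platforms, Python 3.13+) or the caller
    lacks the privileges needed to read the shadow database.
    """
    try:
        import spwd  # deprecated since 3.11, removed in 3.13
    except ImportError:
        return None
    try:
        return spwd.getspnam(username)
    except (KeyError, OSError):
        # KeyError: no such user; PermissionError/OSError: not root.
        return None

entry = get_shadow_entry("root")
if entry is not None:
    # sp_namp is the login name; sp_lstchg the date of last change.
    print(entry.sp_namp, entry.sp_lstchg)
```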
logging handlers Logging handlers Source code Lib logging handlers py Important This page contains only reference information For tutorials please see Basic Tutorial Advanced Tutorial Logging Cookbook The following useful handlers are provided in the package Note that three of the handlers StreamHandler FileHandler and NullHandler are actually defined in the logging module itself but have been documented here along with the other handlers StreamHandler The StreamHandler class located in the core logging package sends logging output to streams such as sys stdout sys stderr or any file like object or more precisely any object which supports write and flush methods class logging StreamHandler stream None Returns a new instance of the StreamHandler class If stream is specified the instance will use it for logging output otherwise sys stderr will be used emit record If a formatter is specified it is used to format the record The record is then written to the stream followed by terminator If exception information is present it is formatted using traceback print_exception and appended to the stream flush Flushes the stream by calling its flush method Note that the close method is inherited from Handler and so does no output so an explicit flush call may be needed at times setStream stream Sets the instance s stream to the specified value if it is different The old stream is flushed before the new stream is set Parameters stream The stream that the handler should use Returns the old stream if the stream was changed or None if it wasn t New in version 3 7 terminator String used as the terminator when writing a formatted record to a stream Default value is n If you don t want a newline termination you can set the handler instance s terminator attribute to the empty string In earlier versions the terminator was hardcoded as n New in version 3 2 FileHandler The FileHandler class located in the core logging package sends logging output to a disk file It inherits the output 
functionality from StreamHandler class logging FileHandler filename mode a encoding None delay False errors None Returns a new instance of the FileHandler class The specified file is opened and used as the stream for logging If mode is not specified a is used If encoding is not None it is used to open the file with that encoding If delay is true then file opening is deferred until the first call to emit By default the file grows indefinitely If errors is specified it s used to determine how encoding errors are handled Changed in version 3 6 As well as string values Path objects are also accepted for the filename argument Changed in version 3 9 The errors parameter was added close Closes the file emit record Outputs the record to the file Note that if the file was closed due to logging shutdown at exit and the file mode is w the record will not be emitted see bpo 42378 NullHandler New in version 3 1 The NullHandler class located in the core logging package does not do any formatting or output It is essentially a no op handler for use by library developers class logging NullHandler Returns a new instance of the NullHandler class emit record This method does nothing handle record This method does nothing createLock This method returns None for the lock since there is no underlying I O to which access needs to be serialized See Configuring Logging for a Library for more information on how to use NullHandler WatchedFileHandler The WatchedFileHandler class located in the logging handlers module is a FileHandler which watches the file it is logging to If the file changes it is closed and reopened using the file name A file change can happen because of usage of programs such as newsyslog and logrotate which perform log file rotation This handler intended for use under Unix Linux watches the file to see if it has changed since the last emit A file is deemed to have changed if its device or inode have changed If the file has changed the old file stream is closed and the file 
opened to get a new stream. This handler is not appropriate for use under Windows, because under Windows open log files cannot be moved or renamed; logging opens the files with exclusive locks, and so there is no need for such a handler. Furthermore, ST_INO is not supported under Windows; stat() always returns zero for this value.

class logging.handlers.WatchedFileHandler(filename, mode='a', encoding=None, delay=False, errors=None)
Returns a new instance of the WatchedFileHandler class. The specified file is opened and used as the stream for logging. If mode is not specified, 'a' is used. If encoding is not None, it is used to open the file with that encoding. If delay is true, then file opening is deferred until the first call to emit(). By default, the file grows indefinitely. If errors is provided, it determines how encoding errors are handled.
Changed in version 3.6: As well as string values, Path objects are also accepted for the filename argument.
Changed in version 3.9: The errors parameter was added.

reopenIfNeeded()
Checks to see if the file has changed. If it has, the existing stream is flushed and closed and the file opened again, typically as a precursor to outputting the record to the file.
New in version 3.6.

emit(record)
Outputs the record to the file, but first calls reopenIfNeeded() to reopen the file if it has changed.

BaseRotatingHandler
The BaseRotatingHandler class, located in the logging.handlers module, is the base class for the rotating file handlers, RotatingFileHandler and TimedRotatingFileHandler. You should not need to instantiate this class, but it has attributes and methods you may need to override.

class logging.handlers.BaseRotatingHandler(filename, mode, encoding=None, delay=False, errors=None)
The parameters are as for FileHandler. The attributes are:

namer
If this attribute is set to a callable, the rotation_filename() method delegates to this callable. The parameters passed to the callable are those passed to rotation_filename().
Note: The namer function is called quite a few times during rollover, so it should be as simple and as fast as possible. It should also return the same output every time for
a given input otherwise the rollover behaviour may not work as expected It s also worth noting that care should be taken when using a namer to preserve certain attributes in the filename which are used during rotation For example RotatingFileHandler expects to have a set of log files whose names contain successive integers so that rotation works as expected and TimedRotatingFileHandler deletes old log files based on the backupCount parameter passed to the handler s initializer by determining the oldest files to delete For this to happen the filenames should be sortable using the date time portion of the filename and a namer needs to respect this If a namer is wanted that doesn t respect this scheme it will need to be used in a subclass of TimedRotatingFileHandler which overrides the getFilesToDelete method to fit in with the custom naming scheme New in version 3 3 rotator If this attribute is set to a callable the rotate method delegates to this callable The parameters passed to the callable are those passed to rotate New in version 3 3 rotation_filename default_name Modify the filename of a log file when rotating This is provided so that a custom filename can be provided The default implementation calls the namer attribute of the handler if it s callable passing the default name to it If the attribute isn t callable the default is None the name is returned unchanged Parameters default_name The default name for the log file New in version 3 3 rotate source dest When rotating rotate the current log The default implementation calls the rotator attribute of the handler if it s callable passing the source and dest arguments to it If the attribute isn t callable the default is None the source is simply renamed to the destination Parameters source The source filename This is normally the base filename e g test log dest The destination filename This is normally what the source is rotated to e g test log 1 New in version 3 3 The reason the attributes exist is to save you 
having to subclass; you can use the same callables for instances of RotatingFileHandler and TimedRotatingFileHandler. If either the namer or rotator callable raises an exception, this will be handled in the same way as any other exception during an emit() call, i.e. via the handleError() method of the handler.
If you need to make more significant changes to rotation processing, you can override the methods. For an example, see Using a rotator and namer to customize log rotation processing.

RotatingFileHandler
The RotatingFileHandler class, located in the logging.handlers module, supports rotation of disk log files.

class logging.handlers.RotatingFileHandler(filename, mode='a', maxBytes=0, backupCount=0, encoding=None, delay=False, errors=None)
Returns a new instance of the RotatingFileHandler class. The specified file is opened and used as the stream for logging. If mode is not specified, 'a' is used. If encoding is not None, it is used to open the file with that encoding. If delay is true, then file opening is deferred until the first call to emit(). By default, the file grows indefinitely. If errors is provided, it determines how encoding errors are handled.
You can use the maxBytes and backupCount values to allow the file to rollover at a predetermined size. When the size is about to be exceeded, the file is closed and a new file is silently opened for output. Rollover occurs whenever the current log file is nearly maxBytes in length, but if either of maxBytes or backupCount is zero, rollover never occurs, so you generally want to set backupCount to at least 1, and have a non-zero maxBytes. When backupCount is non-zero, the system will save old log files by appending the extensions '.1', '.2' etc., to the filename. For example, with a backupCount of 5 and a base file name of app.log, you would get app.log, app.log.1, app.log.2, up to app.log.5. The file being written to is always app.log. When this file is filled, it is closed and renamed to app.log.1, and if files app.log.1, app.log.2, etc. exist, then they are renamed to app.log.2, app.log.3, etc. respectively.
Changed in version 3.6: As well as string values, Path objects are also
accepted for the filename argument.
Changed in version 3.9: The errors parameter was added.

doRollover()
Does a rollover, as described above.

emit(record)
Outputs the record to the file, catering for rollover as described previously.

TimedRotatingFileHandler
The TimedRotatingFileHandler class, located in the logging.handlers module, supports rotation of disk log files at certain timed intervals.

class logging.handlers.TimedRotatingFileHandler(filename, when='h', interval=1, backupCount=0, encoding=None, delay=False, utc=False, atTime=None, errors=None)
Returns a new instance of the TimedRotatingFileHandler class. The specified file is opened and used as the stream for logging. On rotating it also sets the filename suffix. Rotating happens based on the product of when and interval.
You can use the when to specify the type of interval. The list of possible values is below. Note that they are not case sensitive.

Value        Type of interval                If/how atTime is used
'S'          Seconds                         Ignored
'M'          Minutes                         Ignored
'H'          Hours                           Ignored
'D'          Days                            Ignored
'W0'-'W6'    Weekday (0=Monday)              Used to compute initial rollover time
'midnight'   Roll over at midnight, if       Used to compute initial rollover time
             atTime not specified, else
             at time atTime

When using weekday-based rotation, specify 'W0' for Monday, 'W1' for Tuesday, and so on up to 'W6' for Sunday. In this case, the value passed for interval isn't used.
The system will save old log files by appending extensions to the filename. The extensions are date-and-time based, using the strftime format %Y-%m-%d_%H-%M-%S or a leading portion thereof, depending on the rollover interval.
When computing the next rollover time for the first time (when the handler is created), the last modification time of an existing log file, or else the current time, is used to compute when the next rotation will occur.
If the utc argument is true, times in UTC will be used; otherwise local time is used.
If backupCount is nonzero, at most backupCount files will be kept, and if more would be created when rollover occurs, the oldest one is deleted. The
deletion logic uses the interval to determine which files to delete so changing the interval may leave old
files lying around If delay is true then file opening is deferred until the first call to emit If atTime is not None it must be a datetime time instance which specifies the time of day when rollover occurs for the cases where rollover is set to happen at midnight or on a particular weekday Note that in these cases the atTime value is effectively used to compute the initial rollover and subsequent rollovers would be calculated via the normal interval calculation If errors is specified it s used to determine how encoding errors are handled Note Calculation of the initial rollover time is done when the handler is initialised Calculation of subsequent rollover times is done only when rollover occurs and rollover occurs only when emitting output If this is not kept in mind it might lead to some confusion For example if an interval of every minute is set that does not mean you will always see log files with times in the filename separated by a minute if during application execution logging output is generated more frequently than once a minute then you can expect to see log files with times separated by a minute If on the other hand logging messages are only output once every five minutes say then there will be gaps in the file times corresponding to the minutes where no output and hence no rollover occurred Changed in version 3 4 atTime parameter was added Changed in version 3 6 As well as string values Path objects are also accepted for the filename argument Changed in version 3 9 The errors parameter was added doRollover Does a rollover as described above emit record Outputs the record to the file catering for rollover as described above getFilesToDelete Returns a list of filenames which should be deleted as part of rollover These are the absolute paths of the oldest backup log files written by the handler SocketHandler The SocketHandler class located in the logging handlers module sends logging output to a network socket The base class uses a TCP socket class logging 
handlers.SocketHandler(host, port)
Returns a new instance of the SocketHandler class, intended to communicate with a remote machine whose address is given by host and port.
Changed in version 3.4: If port is specified as None, a Unix domain socket is created using the value in host; otherwise a TCP socket is created.

close()
Closes the socket.

emit()
Pickles the record's attribute dictionary and writes it to the socket in binary format. If there is an error with the socket, silently drops the packet. If the connection was previously lost, re-establishes the connection. To unpickle the record at the receiving end into a LogRecord, use the makeLogRecord() function.

handleError()
Handles an error which has occurred during emit(). The most likely cause is a lost connection. Closes the socket so that we can retry on the next event.

makeSocket()
This is a factory method which allows subclasses to define the precise type of socket they want. The default implementation creates a TCP socket (socket.SOCK_STREAM).

makePickle(record)
Pickles the record's attribute dictionary in binary format with a length prefix, and returns it ready for transmission across the socket. The details of this operation are equivalent to:

data = pickle.dumps(record_attr_dict, 1)
datalen = struct.pack('>L', len(data))
return datalen + data

Note that pickles aren't completely secure. If you are concerned about security, you may want to override this method to implement a more secure mechanism. For example, you can sign pickles using HMAC and then verify them on the receiving end, or alternatively you can disable unpickling of global objects on the receiving end.

send(packet)
Send a pickled byte-string packet to the socket. The format of the sent byte-string is as described in the documentation for makePickle(). This function allows for partial sends, which can happen when the network is busy.

createSocket()
Tries to create a socket; on failure, uses an exponential back-off algorithm. On initial failure, the handler will drop the message it was trying to send. When
subsequent messages are handled by the same instance, it will not try connecting until some time has passed. The default parameters are such that the initial delay is one second, and if after that delay the connection still can't be made, the handler will double the delay each time up to a maximum of 30 seconds. This behaviour is controlled by the following handler attributes:

retryStart (initial delay, defaulting to 1.0 seconds)
retryFactor (multiplier, defaulting to 2.0)
retryMax (maximum delay, defaulting to 30.0 seconds)

This means that if the remote listener starts up after the handler has been used, you could lose messages (since the handler won't even attempt a connection until the delay has elapsed, but just silently drop messages during the delay period).

DatagramHandler
The DatagramHandler class, located in the logging.handlers module, inherits from SocketHandler to support sending logging messages over UDP sockets.

class logging.handlers.DatagramHandler(host, port)
Returns a new instance of the DatagramHandler class, intended to communicate with a remote machine whose address is given by host and port.
Note: As UDP is not a streaming protocol, there is no persistent connection between an instance of this handler and host. For this reason, when using a network socket, a DNS lookup might have to be made each time an event is logged, which can introduce some latency into the system. If this affects you, you can do a lookup yourself and initialize this handler using the looked-up IP address rather than the hostname.
Changed in version 3.4: If port is specified as None, a Unix domain socket is created using the value in host; otherwise a UDP socket is created.

emit()
Pickles the record's attribute dictionary and writes it to the socket in binary format. If there is an error with the socket, silently drops the packet. To unpickle the record at the receiving end into a LogRecord, use the makeLogRecord() function.

makeSocket()
The factory method of SocketHandler is here overridden to create a UDP socket (socket.SOCK_DGRAM).

send(s)
Send a pickled byte-string to a socket. The format of the sent byte-string is as
described in the documentation for SocketHandler.makePickle().

SysLogHandler
The SysLogHandler class, located in the logging.handlers module, supports sending logging messages to a remote or local Unix syslog.

class logging.handlers.SysLogHandler(address=('localhost', SYSLOG_UDP_PORT), facility=LOG_USER, socktype=socket.SOCK_DGRAM)
Returns a new instance of the SysLogHandler class, intended to communicate with a remote Unix machine whose address is given by address in the form of a (host, port) tuple. If address is not specified, ('localhost', 514) is used. The address is used to open a socket. An alternative to providing a (host, port) tuple is providing an address as a string, for example '/dev/log'. In this case, a Unix domain socket is used to send the message to the syslog. If facility is not specified, LOG_USER is used. The type of socket opened depends on the socktype argument, which defaults to socket.SOCK_DGRAM and thus opens a UDP socket. To open a TCP socket (for use with the newer syslog daemons such as rsyslog), specify a value of socket.SOCK_STREAM.
Note that if your server is not listening on UDP port 514, SysLogHandler may appear not to work. In that case, check what address you should be using for a domain socket; it's system dependent. For example, on Linux it's usually '/dev/log' but on OS X it's '/var/run/syslog'. You'll need to check your platform and use the appropriate address (you may need to do this check at runtime if your application needs to run on several platforms). On Windows, you pretty much have to use the UDP option.
Note: On macOS 12.x (Monterey), Apple has changed the behaviour of their syslog daemon: it no longer listens on a domain socket. Therefore, you cannot expect SysLogHandler to work on this system. See gh-91070 for more information.
Changed in version 3.2: socktype was added.

close()
Closes the socket to the remote host.

createSocket()
Tries to create a socket and, if it's not a datagram socket, connect it to the other end. This method is called during handler initialization, but it's not regarded as
an error if the other end isn't listening at this point; the method will be called again when emitting an event, if there is no socket at that point.
New in version 3.11.

emit(record)
The record is formatted, and then sent to the syslog server. If exception information is present, it is not sent to the server.
Changed in version 3.2.1: (See bpo-12168.) In earlier versions, the message sent to the syslog daemons was always terminated with a NUL byte, because early versions of these daemons expected a NUL-terminated message, even though it's not in the relevant specification (RFC 5424). More recent versions of these daemons don't expect the NUL byte but strip it off if it's there, and even more recent daemons (which adhere more closely to RFC 5424) pass the NUL byte on as part of the message. To enable easier handling of syslog messages in the face of all these differing daemon behaviours, the appending of the NUL byte has been made configurable, through the use of a class-level attribute, append_nul. This defaults to True (preserving the existing behaviour) but can be set to False on a SysLogHandler instance in order for that instance to not append the NUL terminator.
Changed in version 3.3: (See bpo-12419.) In earlier versions, there was no facility for an "ident" or "tag" prefix to identify the source of the message. This can now be specified using a class-level attribute, defaulting to "" to preserve existing behaviour, but which can be overridden on a SysLogHandler instance in order for that instance to prepend the ident to every message handled. Note that the provided ident must be text, not bytes, and is prepended to the message exactly as is.

encodePriority(facility, priority)
Encodes the facility and priority into an integer. You can pass in strings or integers; if strings are passed, internal mapping dictionaries are used to convert them to integers. The symbolic LOG_ values are defined in SysLogHandler and mirror the values defined in the sys/syslog.h header file.

Priorities:

Name (string)       Symbolic value
alert               LOG_ALERT
crit or critical    LOG_CRIT
debug               LOG_DEBUG
emerg or panic      LOG_EMERG
err or error        LOG_ERR
info                LOG_INFO
notice              LOG_NOTICE
warn or warning     LOG_WARNING

Facilities:

Name (string)   Symbolic value
auth            LOG_AUTH
authpriv        LOG_AUTHPRIV
cron            LOG_CRON
daemon          LOG_DAEMON
ftp             LOG_FTP
kern            LOG_KERN
lpr             LOG_LPR
mail            LOG_MAIL
news            LOG_NEWS
syslog          LOG_SYSLOG
user            LOG_USER
uucp            LOG_UUCP
local0          LOG_LOCAL0
local1          LOG_LOCAL1
local2          LOG_LOCAL2
local3          LOG_LOCAL3
local4          LOG_LOCAL4
local5          LOG_LOCAL5
local6          LOG_LOCAL6
local7          LOG_LOCAL7

mapPriority(levelname)
Maps a logging level name to a syslog priority name. You may need to override this if you are using custom levels, or if the default algorithm is not suitable for your needs. The default algorithm maps DEBUG, INFO, WARNING, ERROR and CRITICAL to the equivalent syslog names, and all other level names to 'warning'.

NTEventLogHandler
The NTEventLogHandler class, located in the logging.handlers module, supports sending logging messages to a local Windows NT, Windows 2000 or Windows XP event log. Before you can use it, you need Mark Hammond's Win32 extensions for Python installed.

class logging.handlers.NTEventLogHandler(appname, dllname=None, logtype='Application')
Returns a new instance of the NTEventLogHandler class. The appname is used to define the application name as it appears in the event log. An appropriate registry entry is created using this name. The dllname should give the fully qualified pathname of a .dll or .exe which contains message definitions to hold in the log (if not specified, 'win32service.pyd' is used; this is installed with the Win32 extensions and contains some basic placeholder message definitions). Note that use of these placeholders will make your event logs big, as the entire message source is held in the log. If you want slimmer logs, you have to pass in the name of your own .dll or .exe which contains the message definitions you want to use in the event log. The logtype is one of 'Application', 'System' or 'Security', and defaults to 'Application'.

close()
At this point, you can remove the application name from the registry as a source of event log entries.
However if you do this you will not be able to see the events as you intended in the Event Log Viewer it needs to be able to access the registry to get the dll name The current version does not do this emit record Determines the message ID event category and event type and then logs the message in the NT event log getEventCategory record Returns the event category for the record Override this if you want to specify your own categories This version returns 0 getEventType record Returns the event type for the record Override this if you want to specify your own types This version does a mapping using the handler s typemap attribute which is set up in __init__ to a dictionary which contains mappings for DEBUG INFO WARNING ERROR and CRITICAL If you are using your own levels you will either need to override this method or place a suitable dictionary in the handler s typemap attribute getMessageID record Returns the message ID for the record If you are using your own messages you could do this by having the msg passed to the logger being an ID rather than a format string Then in here you could use a dictionary lookup to get the message ID This version returns 1 which is the base message ID in win32service pyd SMTPHandler The SMTPHandler class located in the logging handlers module supports sending logging messages to an email address via SMTP class logging handlers SMTPHandler mailhost fromaddr toaddrs subject credentials None secure None timeout 1 0 Returns a new instance of the SMTPHandler class The instance is initialized with the from and to addresses and subject line of the email The toaddrs should be a list of strings To specify a non standard SMTP port use the host port tuple format for the mailhost argument If you use a string the standard SMTP port is used If your SMTP server requires authentication you can specify a username password tuple for the credentials argument To specify the use of a secure protocol TLS pass in a tuple to the secure argument This will only be used when authentication credentials are supplied The tuple should be either an empty tuple or a
single value tuple with the name of a keyfile or a 2 value tuple with the names of the keyfile and certificate file This tuple is passed to the smtplib SMTP starttls method A timeout can be specified for communication with the SMTP server using the timeout argument Changed in version 3 3 Added the timeout parameter emit record Formats the record and sends it to the specified addressees getSubject record If you want to specify a subject line which is record dependent override this method MemoryHandler The MemoryHandler class located in the logging handlers module supports buffering of logging records in memory periodically flushing them to a target handler Flushing occurs whenever the buffer is full or when an event of a certain severity or greater is seen MemoryHandler is a subclass of the more general BufferingHandler which is an abstract class This buffers logging records in memory Whenever each record is added to the buffer a check is made by calling shouldFlush to see if the buffer should be flushed If it should then flush is expected to do the flushing class logging handlers BufferingHandler capacity Initializes the handler with a buffer of the specified capacity Here capacity means the number of logging records buffered emit record Append the record to the buffer If shouldFlush returns true call flush to process the buffer flush For a BufferingHandler instance flushing means that it sets the buffer to an empty list This method can be overwritten to implement more useful flushing behavior shouldFlush record Return True if the buffer is up to capacity This method can be overridden to implement custom flushing strategies class logging handlers MemoryHandler capacity flushLevel ERROR target None flushOnClose True Returns a new instance of the MemoryHandler class The instance is initialized with a buffer size of capacity number of records buffered If flushLevel is not specified ERROR is used If no target is specified the target will need to be set using setTarget 
before this handler does anything useful If flushOnClose is specified as False then the buffer is not flushed when the handler is closed If not specified or specified as True the previous behaviour of flushing the buffer will occur when the handler is closed Changed in version 3 6 The flushOnClose parameter was added close Calls flush sets the target to None and clears the buffer flush For a MemoryHandler instance flushing means just sending the buffered records to the target if there is one The buffer is also cleared when buffered records are sent to the target Override if you want different behavior setTarget target Sets the target handler for this handler shouldFlush record Checks for buffer full or a record at the flushLevel or higher HTTPHandler The HTTPHandler class located in the logging handlers module supports sending logging messages to a web server using either GET or POST semantics class logging handlers HTTPHandler host url method GET secure False credentials None context None Returns a new instance of the HTTPHandler class The host can be of the form host port should you need to use a specific port number If no method is specified GET is used If secure is true a HTTPS connection will be used The context parameter may be set to a ssl SSLContext instance to configure the SSL settings used for the HTTPS connection If credentials is specified it should be a 2 tuple consisting of userid and password which will be placed in a HTTP Authorization header using Basic authentication If you specify credentials you should also specify secure True so that your userid and password are not passed in cleartext across the wire Changed in version 3 5 The context parameter was added mapLogRecord record Provides a dictionary based on record which is to be URL encoded and sent to the web server The default implementation just returns record __dict__ This method can be overridden if e g only a subset of LogRecord is to be sent to the web server or if more specific customization of what s sent to the server is required emit record Sends the record to the web server as a
URL encoded dictionary The mapLogRecord method is used to convert the record to the dictionary to be sent Note Since preparing a record for sending it to a web server is not the same as a generic formatting operation using setFormatter to specify a Formatter for a HTTPHandler has no effect Instead of calling format this handler calls mapLogRecord and then urllib parse urlencode to encode the dictionary in a form suitable for sending to a web server QueueHandler New in version 3 2 The QueueHandler class located in the logging handlers module supports sending logging messages to a queue such as those implemented in the queue or multiprocessing modules Along with the QueueListener class QueueHandler can be used to let handlers do their work on a separate thread from the one which does the logging This is important in web applications and also other service applications where threads servicing clients need to respond as quickly as possible while any potentially slow operations such as sending an email via SMTPHandler are done on a separate thread class logging handlers QueueHandler queue Returns a new instance of the QueueHandler class The instance is initialized with the queue to send messages to The queue can be any queue like object it s used as is by the enqueue method which needs to know how to send messages to it The queue is not required to have the task tracking API which means that you can use SimpleQueue instances for queue Note If you are using multiprocessing you should avoid using SimpleQueue and instead use multiprocessing Queue emit record Enqueues the result of preparing the LogRecord Should an exception occur e g because a bounded queue has filled up the handleError method is called to handle the error This can result in the record silently being dropped if logging raiseExceptions is False or a message printed to sys stderr if logging raiseExceptions is True prepare record Prepares a record for queuing The object returned by this method is enqueued The 
base implementation formats the record to merge the message arguments exception and stack information if present It also removes unpickleable items from the record in place Specifically it overwrites the record s msg and message attributes with the merged message obtained by calling the handler s format method and sets the args exc_info and exc_text attributes to None You might want to override this method if you want to convert the record to a dict or JSON string or send a modified copy of the record while leaving the original intact Note The base implementation formats the message with arguments sets the message and msg attributes to the formatted message and sets the args and exc_text attributes to None to allow pickling and to prevent further attempts at formatting This means that a handler on the QueueListener side won t have the information to do custom formatting e g of exceptions You may wish to subclass QueueHandler and override this method to e g avoid setting exc_text to None Note that the message msg args changes are related to ensuring the record is pickleable and you might or might not be able to avoid doing that depending on whether your args are pickleable Note that you may have to consider not only your own code but also code in any libraries that you use enqueue record Enqueues the record on the queue using put_nowait you may want to override this if you want to use blocking behaviour or a timeout or a customized queue implementation listener When created via configuration using dictConfig this attribute will contain a QueueListener instance for use with this handler Otherwise it will be None New in version 3 12 QueueListener New in version 3 2 The QueueListener class located in the logging handlers module supports receiving logging messages from a queue such as those implemented in the queue or multiprocessing modules The messages are received from a queue in an internal thread and passed on the same thread to one or more handlers for processing While QueueListener is not itself a handler it is documented here because it works hand in hand
with QueueHandler Along with the QueueHandler class QueueListener can be used to let handlers do their work on a separate thread from the one which does the logging This is important in web applications and also other service applications where threads servicing clients need to respond as quickly as possible while any potentially slow operations such as sending an email via SMTPHandler are done on a separate thread class logging handlers QueueListener queue handlers respect_handler_level False Returns a new instance of the QueueListener class The instance is initialized with the queue to send messages to and a list of handlers which will handle entries placed on the queue The queue can be any queue like object it s passed as is to the dequeue method which needs to know how to get messages from it The queue is not required to have the task tracking API though it s used if available which means that you can use SimpleQueue instances for queue Note If you are using multiprocessing you should avoid using SimpleQueue and instead use multiprocessing Queue If respect_handler_level is True a handler s level is respected compared with the level for the message when deciding whether to pass messages to that handler otherwise the behaviour is as in previous Python versions to always pass each message to each handler Changed in version 3 5 The respect_handler_level argument was added dequeue block Dequeues a record and return it optionally blocking The base implementation uses get You may want to override this method if you want to use timeouts or work with custom queue implementations prepare record Prepare a record for handling This implementation just returns the passed in record You may want to override this method if you need to do any custom marshalling or manipulation of the record before passing it to the handlers handle record Handle a record This just loops through the handlers offering them the record to handle The actual object passed to the handlers is that which 
is returned from prepare start Starts the listener This starts up a background thread to monitor the
queue for LogRecords to process stop Stops the listener This asks the thread to terminate and then waits for it to do so Note that if you don t call this before your application exits there may be some records still left on the queue which won t be processed enqueue_sentinel Writes a sentinel to the queue to tell the listener to quit This implementation uses put_nowait You may want to override this method if you want to use timeouts or work with custom queue implementations New in version 3 3 See also Module logging API reference for the logging module Module logging config Configuration API for the logging module
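The QueueHandler and QueueListener pairing described above can be sketched as follows. This is a minimal illustration, not the library's own example: the logger name "app" and the list-collecting handler are illustrative stand-ins for a real logger and a slow handler such as SMTPHandler.

```python
import logging
import logging.handlers
import queue

# Handlers attached to the QueueListener run on its internal thread;
# here a simple list-collecting handler stands in for a slow handler.
collected = []

class ListHandler(logging.Handler):
    def emit(self, record):
        collected.append(self.format(record))

log_queue = queue.SimpleQueue()          # no task-tracking API required

logger = logging.getLogger("app")        # "app" is an illustrative name
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.QueueHandler(log_queue))

handler = ListHandler()
handler.setFormatter(logging.Formatter("%(levelname)s:%(message)s"))
listener = logging.handlers.QueueListener(log_queue, handler,
                                          respect_handler_level=True)
listener.start()                         # spawn the background thread

logger.info("work item processed")       # returns immediately; record is enqueued

listener.stop()                          # drain the queue and join the thread
print(collected)                         # ['INFO:work item processed']
```

Note that stop is called before inspecting the output: as the documentation above explains, records still on the queue are only guaranteed to be processed once the listener has been asked to terminate and its thread joined.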
shutil High level file operations Source code Lib shutil py The shutil module offers a number of high level operations on files and collections of files In particular functions are provided which support file copying and removal For operations on individual files see also the os module Warning Even the higher level file copying functions shutil copy shutil copy2 cannot copy all file metadata On POSIX platforms this means that file owner and group are lost as well as ACLs On Mac OS the resource fork and other metadata are not used This means that resources will be lost and file type and creator codes will not be correct On Windows file owners ACLs and alternate data streams are not copied Directory and files operations shutil copyfileobj fsrc fdst length Copy the contents of the file like object fsrc to the file like object fdst The integer length if given is the buffer size In particular a negative length value means to copy the data without looping over the source data in chunks by default the data is read in chunks to avoid uncontrolled memory consumption Note that if the current file position of the fsrc object is not 0 only the contents from the current file position to the end of the file will be copied shutil copyfile src dst follow_symlinks True Copy the contents no metadata of the file named src to a file named dst and return dst in the most efficient way possible src and dst are path like objects or path names given as strings dst must be the complete target file name look at copy for a copy that accepts a target directory path If src and dst specify the same file SameFileError is raised The destination location must be writable otherwise an OSError exception will be raised If dst already exists it will be replaced Special files such as character or block devices and pipes cannot be copied with this function If follow_symlinks is false and src is a symbolic link a new symbolic link will be created instead of copying the file src points to Raises an 
auditing event shutil copyfile with arguments src dst Changed in version 3 3 IOError used to be raised instead of OSError Added follow_symlinks argument Now returns dst Changed in version 3 4 Raise SameFileError instead of Error Since the former is a subclass of the latter this change is backward compatible Changed in version 3 8 Platform specific fast copy syscalls may be used internally in order to copy the file more efficiently See Platform dependent efficient copy operations section exception shutil SameFileError This exception is raised if source and destination in copyfile are the same file New in version 3 4 shutil copymode src dst follow_symlinks True Copy the permission bits from src to dst The file contents owner and group are unaffected src and dst are path like objects or path names given as strings If follow_symlinks is false and both src and dst are symbolic links copymode will attempt to modify the mode of dst itself rather than the file it points to This functionality is not available on every platform please see copystat for more information If copymode cannot modify symbolic links on the local platform and it is asked to do so it will do nothing and return Raises an auditing event shutil copymode with arguments src dst Changed in version 3 3 Added follow_symlinks argument shutil copystat src dst follow_symlinks True Copy the permission bits last access time last modification time and flags from src to dst On Linux copystat also copies the extended attributes where possible The file contents owner and group are unaffected src and dst are path like objects or path names given as strings If follow_symlinks is false and src and dst both refer to symbolic links copystat will operate on the symbolic links themselves rather than the files the symbolic links refer to reading the information from the src symbolic link and writing the information to the dst symbolic link Note Not all platforms provide the ability to examine and modify symbolic links Python 
itself can tell you what functionality is locally available If os chmod in os supports_follow_symlinks is
True copystat can modify the permission bits of a symbolic link If os utime in os supports_follow_symlinks is True copystat can modify the last access and modification times of a symbolic link If os chflags in os supports_follow_symlinks is True copystat can modify the flags of a symbolic link os chflags is not available on all platforms On platforms where some or all of this functionality is unavailable when asked to modify a symbolic link copystat will copy everything it can copystat never returns failure Please see os supports_follow_symlinks for more information Raises an auditing event shutil copystat with arguments src dst Changed in version 3 3 Added follow_symlinks argument and support for Linux extended attributes shutil copy src dst follow_symlinks True Copies the file src to the file or directory dst src and dst should be path like objects or strings If dst specifies a directory the file will be copied into dst using the base filename from src If dst specifies a file that already exists it will be replaced Returns the path to the newly created file If follow_symlinks is false and src is a symbolic link dst will be created as a symbolic link If follow_symlinks is true and src is a symbolic link dst will be a copy of the file src refers to copy copies the file data and the file s permission mode see os chmod Other metadata like the file s creation and modification times is not preserved To preserve all file metadata from the original use copy2 instead Raises an auditing event shutil copyfile with arguments src dst Raises an auditing event shutil copymode with arguments src dst Changed in version 3 3 Added follow_symlinks argument Now returns path to the newly created file Changed in version 3 8 Platform specific fast copy syscalls may be used internally in order to copy the file more efficiently See Platform dependent efficient copy operations section shutil copy2 src dst follow_symlinks True Identical to copy except that copy2 also attempts to preserve 
file metadata When follow_symlinks is false and src is a symbolic link copy2 attempts to copy all metadata from the src symbolic link to the newly created dst symbolic link However this functionality is not available on all platforms On platforms where some or all of this functionality is unavailable copy2 will preserve all the metadata it can copy2 never raises an exception because it cannot preserve file metadata copy2 uses copystat to copy the file metadata Please see copystat for more information about platform support for modifying symbolic link metadata Raises an auditing event shutil copyfile with arguments src dst Raises an auditing event shutil copystat with arguments src dst Changed in version 3 3 Added follow_symlinks argument try to copy extended file system attributes too currently Linux only Now returns path to the newly created file Changed in version 3 8 Platform specific fast copy syscalls may be used internally in order to copy the file more efficiently See Platform dependent efficient copy operations section shutil ignore_patterns patterns This factory function creates a function that can be used as a callable for copytree s ignore argument ignoring files and directories that match one of the glob style patterns provided See the example below shutil copytree src dst symlinks False ignore None copy_function copy2 ignore_dangling_symlinks False dirs_exist_ok False Recursively copy an entire directory tree rooted at src to a directory named dst and return the destination directory All intermediate directories needed to contain dst will also be created by default Permissions and times of directories are copied with copystat individual files are copied using copy2 If symlinks is true symbolic links in the source tree are represented as symbolic links in the new tree and the metadata of the original links will be copied as far as the platform allows if false or omitted the contents and metadata of the linked files are copied to the new tree When 
symlinks is false if the file pointed by the symlink doesn t exist an exception will be added in the list
of errors raised in an Error exception at the end of the copy process You can set the optional ignore_dangling_symlinks flag to true if you want to silence this exception Notice that this option has no effect on platforms that don t support os symlink If ignore is given it must be a callable that will receive as its arguments the directory being visited by copytree and a list of its contents as returned by os listdir Since copytree is called recursively the ignore callable will be called once for each directory that is copied The callable must return a sequence of directory and file names relative to the current directory i e a subset of the items in its second argument these names will then be ignored in the copy process ignore_patterns can be used to create such a callable that ignores names based on glob style patterns If exception s occur an Error is raised with a list of reasons If copy_function is given it must be a callable that will be used to copy each file It will be called with the source path and the destination path as arguments By default copy2 is used but any function that supports the same signature like copy can be used If dirs_exist_ok is false the default and dst already exists a FileExistsError is raised If dirs_exist_ok is true the copying operation will continue if it encounters existing directories and files within the dst tree will be overwritten by corresponding files from the src tree Raises an auditing event shutil copytree with arguments src dst Changed in version 3 2 Added the copy_function argument to be able to provide a custom copy function Added the ignore_dangling_symlinks argument to silence dangling symlinks errors when symlinks is false Changed in version 3 3 Copy metadata when symlinks is false Now returns dst Changed in version 3 8 Platform specific fast copy syscalls may be used internally in order to copy the file more efficiently See Platform dependent efficient copy operations section Changed in version 3 8 Added the 
dirs_exist_ok parameter shutil rmtree path ignore_errors False onerror None onexc None dir_fd None Delete an entire directory tree path must point to a directory but not a symbolic link to a directory If ignore_errors is true errors resulting from failed removals will be ignored if false or omitted such errors are handled by calling a handler specified by onexc or onerror or if both are omitted exceptions are propagated to the caller This function can support paths relative to directory descriptors Note On platforms that support the necessary fd based functions a symlink attack resistant version of rmtree is used by default On other platforms the rmtree implementation is susceptible to a symlink attack given proper timing and circumstances attackers can manipulate symlinks on the filesystem to delete files they wouldn t be able to access otherwise Applications can use the rmtree avoids_symlink_attacks function attribute to determine which case applies If onexc is provided it must be a callable that accepts three parameters function path and excinfo The first parameter function is the function which raised the exception it depends on the platform and implementation The second parameter path will be the path name passed to function The third parameter excinfo is the exception that was raised Exceptions raised by onexc will not be caught The deprecated onerror is similar to onexc except that the third parameter it receives is the tuple returned from sys exc_info Raises an auditing event shutil rmtree with arguments path dir_fd Changed in version 3 3 Added a symlink attack resistant version that is used automatically if platform supports fd based functions Changed in version 3 8 On Windows will no longer delete the contents of a directory junction before removing the junction Changed in version 3 11 The dir_fd parameter Changed in version 3 12 Added the onexc parameter deprecated onerror rmtree avoids_symlink_attacks Indicates whether the current platform and 
implementation provides a symlink attack resistant version of rmtree Currently this is only true for platforms
supporting fd based directory access functions New in version 3 3 shutil move src dst copy_function copy2 Recursively move a file or directory src to another location and return the destination If dst is an existing directory or a symlink to a directory then src is moved inside that directory The destination path in that directory must not already exist If dst already exists but is not a directory it may be overwritten depending on os rename semantics If the destination is on the current filesystem then os rename is used Otherwise src is copied to the destination using copy_function and then removed In case of symlinks a new symlink pointing to the target of src will be created as the destination and src will be removed If copy_function is given it must be a callable that takes two arguments src and the destination and will be used to copy src to the destination if os rename cannot be used If the source is a directory copytree is called passing it the copy_function The default copy_function is copy2 Using copy as the copy_function allows the move to succeed when it is not possible to also copy the metadata at the expense of not copying any of the metadata Raises an auditing event shutil move with arguments src dst Changed in version 3 3 Added explicit symlink handling for foreign filesystems thus adapting it to the behavior of GNU s mv Now returns dst Changed in version 3 5 Added the copy_function keyword argument Changed in version 3 8 Platform specific fast copy syscalls may be used internally in order to copy the file more efficiently See Platform dependent efficient copy operations section Changed in version 3 9 Accepts a path like object for both src and dst shutil disk_usage path Return disk usage statistics about the given path as a named tuple with the attributes total used and free which are the amount of total used and free space in bytes path may be a file or a directory Note On Unix filesystems path must point to a path within a mounted filesystem 
partition On those platforms CPython doesn t attempt to retrieve disk usage information from non mounted filesystems New in version 3 3 Changed in version 3 8 On Windows path can now be a file or directory Availability Unix Windows shutil chown path user None group None Change owner user and or group of the given path user can be a system user name or a uid the same applies to group At least one argument is required See also os chown the underlying function Raises an auditing event shutil chown with arguments path user group Availability Unix New in version 3 3 shutil which cmd mode os F_OK os X_OK path None Return the path to an executable which would be run if the given cmd was called If no cmd would be called return None mode is a permission mask passed to os access by default determining if the file exists and executable When no path is specified the results of os environ are used returning either the PATH value or a fallback of os defpath On Windows the current directory is prepended to the path if mode does not include os X_OK When the mode does include os X_OK the Windows API NeedCurrentDirectoryForExePathW will be consulted to determine if the current directory should be prepended to path To avoid consulting the current working directory for executables set the environment variable NoDefaultCurrentDirectoryInExePath Also on Windows the PATHEXT variable is used to resolve commands that may not already include an extension For example if you call shutil which python which will search PATHEXT to know that it should look for python exe within the path directories For example on Windows shutil which python C Python33 python EXE This is also applied when cmd is a path that contains a directory component shutil which C Python33 python C Python33 python EXE New in version 3 3 Changed in version 3 8 The bytes type is now accepted If cmd type is bytes the result type is also bytes Changed in version 3 12 On Windows the current directory is no longer prepended to the 
search path if mode includes os X_OK and WinAPI NeedCurrentDirectoryForExePathW cmd is false else the current directory is prepended even if it is already in the search path PATHEXT is used now even when cmd includes a directory component or ends with an extension that is in PATHEXT and filenames that have no extension can now be found Changed in version 3 12 1 On Windows if mode includes os X_OK executables with an extension in PATHEXT will be preferred over executables without a matching extension This brings behavior closer to that of Python 3 11 exception shutil Error This exception collects exceptions that are raised during a multi file operation For copytree the exception argument is a list of 3 tuples srcname dstname exception Platform dependent efficient copy operations Starting from Python 3 8 all functions involving a file copy copyfile copy copy2 copytree and move may use platform specific fast copy syscalls in order to copy the file more efficiently see bpo 33671 fast copy means that the copying operation occurs within the kernel avoiding the use of userspace buffers in Python as in outfd write infd read On macOS fcopyfile is used to copy the file content not metadata On Linux os sendfile is used On Windows shutil copyfile uses a bigger default buffer size 1 MiB instead of 64 KiB and a memoryview based variant of shutil copyfileobj is used If the fast copy operation fails and no data was written in the destination file then shutil will silently fallback on using less efficient copyfileobj function internally Changed in version 3 8 copytree example An example that uses the ignore_patterns helper from shutil import copytree ignore_patterns copytree source destination ignore ignore_patterns pyc tmp This will copy everything except pyc files and files or directories whose name starts with tmp Another example that uses the ignore argument to add a logging call from shutil import copytree import logging def _logpath path names logging info Working in s path return nothing will be ignored copytree source destination ignore _logpath rmtree example This example
shows how to remove a directory tree on Windows where some of the files have their read only bit set It uses the onexc callback to clear the readonly bit and reattempt the remove Any subsequent failure will propagate import os stat import shutil def remove_readonly func path _ Clear the readonly bit and reattempt the removal os chmod path stat S_IWRITE func path shutil rmtree directory onexc remove_readonly Archiving operations New in version 3 2 Changed in version 3 5 Added support for the xztar format High level utilities to create and read compressed and archived files are also provided They rely on the zipfile and tarfile modules shutil make_archive base_name format root_dir base_dir verbose dry_run owner group logger Create an archive file such as zip or tar and return its name base_name is the name of the file to create including the path minus any format specific extension format is the archive format one of zip if the zlib module is available tar gztar if the zlib module is available bztar if the bz2 module is available or xztar if the lzma module is available root_dir is a directory that will be the root directory of the archive all paths in the archive will be relative to it for example we typically chdir into root_dir before creating the archive base_dir is the directory where we start archiving from i e base_dir will be the common prefix of all files and directories in the archive base_dir must be given relative to root_dir See Archiving example with base_dir for how to use base_dir and root_dir together root_dir and base_dir both default to the current directory If dry_run is true no archive is created but the operations that would be executed are logged to logger owner and group are used when creating a tar archive By default uses the current owner and group logger must be an object compatible with PEP 282 usually an instance of logging Logger The verbose argument is unused and deprecated Raises an auditing event shutil make_archive with arguments 
base_name, format, root_dir, base_dir.

Note: This function is not thread-safe when custom archivers registered with register_archive_format() do not support the root_dir argument. In this case it temporarily changes the current working directory of the process to root_dir to perform archiving.

Changed in version 3.8: The modern pax (POSIX.1-2001) format is now used instead of the legacy GNU format for archives created with format="tar".

Changed in version 3.10.6: This function is now made thread-safe during creation of standard .zip and tar archives.

shutil.get_archive_formats()

Return a list of supported formats for archiving. Each element of the returned sequence is a tuple (name, description). By default shutil provides these formats:

zip: ZIP file (if the zlib module is available).
tar: Uncompressed tar file. Uses POSIX.1-2001 pax format for new archives.
gztar: gzip'ed tar-file (if the zlib module is available).
bztar: bzip2'ed tar-file (if the bz2 module is available).
xztar: xz'ed tar-file (if the lzma module is available).

You can register new formats or provide your own archiver for any existing formats by using register_archive_format().

shutil.register_archive_format(name, function[, extra_args[, description]])

Register an archiver for the format name. function is the callable that will be used to create archives. The callable will receive the base_name of the file to create, followed by the base_dir (which defaults to os.curdir) to start archiving from. Further arguments are passed as keyword arguments: owner, group, dry_run and logger (as passed in make_archive()).

If function has the custom attribute function.supports_root_dir set to True, the root_dir argument is passed as a keyword argument. Otherwise the current working directory of the process is temporarily changed to root_dir before calling function. In this case make_archive() is not thread-safe.

If given, extra_args is a sequence of (name, value) pairs that will be used as extra keyword arguments when the archiver callable is used.

description is used by get_archive_formats(), which returns the list of archivers. Defaults to an empty string.

Changed in version 3.12: Added
support for functions supporting the root_dir argument shutil unregister_archive_format name Remove the archive format name from the list of supported formats shutil unpack_archive filename extract_dir format filter Unpack an archive filename is the full path of the archive extract_dir is the name of the target directory where the archive is unpacked If not provided the current working directory is used format is the archive format one of zip tar gztar bztar or xztar Or any other format registered with register_unpack_format If not provided unpack_archive will use the archive file name extension and see if an unpacker was registered for that extension In case none is found a ValueError is raised The keyword only filter argument is passed to the underlying unpacking function For zip files filter is not accepted For tar files it is recommended to set it to data unless using features specific to tar and UNIX like filesystems See Extraction filters for details The data filter will become the default for tar files in Python 3 14 Raises an auditing event shutil unpack_archive with arguments filename extract_dir format Warning Never extract archives from untrusted sources without prior inspection It is possible that files are created outside of the path specified in the extract_dir argument e g members that have absolute filenames starting with or filenames with two dots Changed in version 3 7 Accepts a path like object for filename and extract_dir Changed in version 3 12 Added the filter argument shutil register_unpack_format name extensions function extra_args description Registers an unpack format name is the name of the format and extensions is a list of extensions corresponding to the format like zip for Zip files function is the callable that will be used to unpack archives The callable will receive the path of the archive as a positional argument the directory the archive must be extracted to as a positional argument possibly a filter keyword argument if it was 
given to unpack_archive(); additional keyword arguments, specified by extra_args, as a sequence of (name, value) tuples.

description can be provided to describe the format and will be returned by the get_unpack_formats() function.

shutil.unregister_unpack_format(name)

Unregister an unpack format. name is the name of the format.

shutil.get_unpack_formats()

Return a list of all registered formats for unpacking. Each element of the returned sequence is a tuple (name, extensions, description). By default shutil provides these formats:

zip: ZIP file (unpacking compressed files works only if the corresponding module is available).
tar: uncompressed tar file.
gztar: gzip'ed tar-file (if the zlib module is available).
bztar: bzip2'ed tar-file (if the bz2 module is available).
xztar: xz'ed tar-file (if the lzma module is available).

You can register new formats or provide your own unpacker for any existing formats by using register_unpack_format().

Archiving example

In this example, we create a gzip'ed tar-file archive containing all files found in the .ssh directory of the user:

>>> from shutil import make_archive
>>> import os
>>> archive_name = os.path.expanduser(os.path.join('~', 'myarchive'))
>>> root_dir = os.path.expanduser(os.path.join('~', '.ssh'))
>>> make_archive(archive_name, 'gztar', root_dir)
'/Users/tarek/myarchive.tar.gz'

The resulting archive contains:

$ tar -tzvf /Users/tarek/myarchive.tar.gz
drwx------ tarek/staff       0 2010-02-01 16:23:40 ./
-rw-r--r-- tarek/staff     609 2008-06-09 13:26:54 ./authorized_keys
-rwxr-xr-x tarek/staff      65 2008-06-09 13:26:54 ./config
-rwx------ tarek/staff     668 2008-06-09 13:26:54 ./id_dsa
-rwxr-xr-x tarek/staff     609 2008-06-09 13:26:54 ./id_dsa.pub
-rw------- tarek/staff    1675 2008-06-09 13:26:54 ./id_rsa
-rw-r--r-- tarek/staff     397 2008-06-09 13:26:54 ./id_rsa.pub
-rw-r--r-- tarek/staff   37192 2010-02-06 18:23:10 ./known_hosts

Archiving example with base_dir

In this example, similar to the one above, we show how to use make_archive, but this time with the usage of base_dir. We now have the following directory structure:

$ tree tmp
tmp
└── root
    └── structure
        ├── content
        │   └── please_add.txt
        └── do_not_add.txt

In the final archive, please_add.txt should be included, but do_not_add.txt should not. Therefore we use the
following from shutil import make_archive import os archive_name os path expanduser os path join myarchive make_archive archive_name tar root_dir tmp root base_dir structure content Users tarek my_archive tar Listing the files in the resulting archive gives us python m tarfile l Users tarek myarchive tar structure content structure content please_add txt Querying the size of the output terminal shutil get_terminal_size fallback columns lines Get the size of the terminal window For each of the two dimensions the environment variable COLUMNS and LINES respectively is checked If the variable is defined and the value is a positive integer it is used When COLUMNS or LINES is not defined which is the common case the terminal connected to sys __stdout__ is queried by invoking os get_terminal_size If the terminal size cannot be successfully queried either because the system doesn t support querying or because we are not connected to a terminal the value given in fallback parameter is used fallback defaults to 80 24 which is the default size used by many terminal emulators The value returned is a named tuple of type os terminal_size See also The Single UNIX Specification Version 2 Other Environment Variables New in version 3 3 Changed in version 3 11 The fallback values are also used if os get_terminal_size returns zeroes
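The fallback behavior described above can be exercised directly. A minimal sketch (the (100, 40) fallback tuple is an arbitrary choice for illustration):

```python
import shutil

# When COLUMNS/LINES are unset and stdout is not a terminal
# (e.g. output is piped), the fallback tuple is returned unchanged;
# otherwise the real terminal size is reported.
size = shutil.get_terminal_size(fallback=(100, 40))

# The result is an os.terminal_size named tuple: it is indexable
# like a plain tuple and also accessible by attribute.
print(size.columns, size.lines)
```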
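Putting the archiving functions of this chapter together, a make_archive()/unpack_archive() round trip can be sketched as follows. The directory layout and file names here are invented for illustration, and the filter keyword is omitted so the sketch also runs on Python versions before 3.12:

```python
import os
import shutil
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    # Build a small source tree to archive.
    src = os.path.join(tmp, "data")
    os.makedirs(src)
    with open(os.path.join(src, "hello.txt"), "w") as f:
        f.write("hello")

    # make_archive appends the format-specific extension itself
    # and returns the full name of the archive it created.
    archive = shutil.make_archive(
        os.path.join(tmp, "backup"), "gztar", root_dir=src)

    # unpack_archive infers the format from the file name extension.
    dest = os.path.join(tmp, "unpacked")
    os.makedirs(dest)
    shutil.unpack_archive(archive, dest)
    names = sorted(os.listdir(dest))

print(names)
```

Because root_dir was set to the source tree, the archive members are relative paths, and unpacking reproduces hello.txt directly under the destination directory.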
datetime Basic date and time types Source code Lib datetime py The datetime module supplies classes for manipulating dates and times While date and time arithmetic is supported the focus of the implementation is on efficient attribute extraction for output formatting and manipulation Tip Skip to the format codes See also Module calendar General calendar related functions Module time Time access and conversions Module zoneinfo Concrete time zones representing the IANA time zone database Package dateutil Third party library with expanded time zone and parsing support Package DateType Third party library that introduces distinct static types to e g allow static type checkers to differentiate between naive and aware datetimes Aware and Naive Objects Date and time objects may be categorized as aware or naive depending on whether or not they include timezone information With sufficient knowledge of applicable algorithmic and political time adjustments such as time zone and daylight saving time information an aware object can locate itself relative to other aware objects An aware object represents a specific moment in time that is not open to interpretation 1 A naive object does not contain enough information to unambiguously locate itself relative to other date time objects Whether a naive object represents Coordinated Universal Time UTC local time or time in some other timezone is purely up to the program just like it is up to the program whether a particular number represents metres miles or mass Naive objects are easy to understand and to work with at the cost of ignoring some aspects of reality For applications requiring aware objects datetime and time objects have an optional time zone information attribute tzinfo that can be set to an instance of a subclass of the abstract tzinfo class These tzinfo objects capture information about the offset from UTC time the time zone name and whether daylight saving time is in effect Only one concrete tzinfo class the timezone 
class is supplied by the datetime module The timezone class can represent simple timezones with fixed offsets from UTC such as UTC itself or North American EST and EDT timezones Supporting timezones at deeper levels of detail is up to the application The rules for time adjustment across the world are more political than rational change frequently and there is no standard suitable for every application aside from UTC Constants The datetime module exports the following constants datetime MINYEAR The smallest year number allowed in a date or datetime object MINYEAR is 1 datetime MAXYEAR The largest year number allowed in a date or datetime object MAXYEAR is 9999 datetime UTC Alias for the UTC timezone singleton datetime timezone utc New in version 3 11 Available Types class datetime date An idealized naive date assuming the current Gregorian calendar always was and always will be in effect Attributes year month and day class datetime time An idealized time independent of any particular day assuming that every day has exactly 24 60 60 seconds There is no notion of leap seconds here Attributes hour minute second microsecond and tzinfo class datetime datetime A combination of a date and a time Attributes year month day hour minute second microsecond and tzinfo class datetime timedelta A duration expressing the difference between two datetime or date instances to microsecond resolution class datetime tzinfo An abstract base class for time zone information objects These are used by the datetime and time classes to provide a customizable notion of time adjustment for example to account for time zone and or daylight saving time class datetime timezone A class that implements the tzinfo abstract base class as a fixed offset from the UTC New in version 3 2 Objects of these types are immutable Subclass relationships object timedelta tzinfo timezone time date datetime Common Properties The date datetime time and timezone types share these common features Objects of these types 
are immutable. Objects of these types are hashable, meaning that they can be used as dictionary keys. Objects of these types support efficient pickling via the pickle module.

Determining if an Object is Aware or Naive

Objects of the date type are always naive. An object of type time or datetime may be aware or naive.

A datetime object d is aware if both of the following hold:
1. d.tzinfo is not None
2. d.tzinfo.utcoffset(d) does not return None
Otherwise, d is naive.

A time object t is aware if both of the following hold:
1. t.tzinfo is not None
2. t.tzinfo.utcoffset(None) does not return None
Otherwise, t is naive.

The distinction between aware and naive doesn't apply to timedelta objects.

timedelta Objects

A timedelta object represents a duration, the difference between two datetime or date instances.

class datetime.timedelta(days=0, seconds=0, microseconds=0, milliseconds=0, minutes=0, hours=0, weeks=0)

All arguments are optional and default to 0. Arguments may be integers or floats, and may be positive or negative.

Only days, seconds and microseconds are stored internally. Arguments are converted to those units:

A millisecond is converted to 1000 microseconds.
A minute is converted to 60 seconds.
An hour is converted to 3600 seconds.
A week is converted to 7 days.

and days, seconds and microseconds are then normalized so that the representation is unique, with

0 <= microseconds < 1000000
0 <= seconds < 3600*24 (the number of seconds in one day)
-999999999 <= days <= 999999999

The following example illustrates how any arguments besides days, seconds and microseconds are "merged" and normalized into those three resulting attributes:

>>> from datetime import timedelta
>>> delta = timedelta(
...     days=50,
...     seconds=27,
...     microseconds=10,
...     milliseconds=29000,
...     minutes=5,
...     hours=8,
...     weeks=2
... )
>>> # Only days, seconds, and microseconds remain
>>> delta
datetime.timedelta(days=64, seconds=29156, microseconds=10)

If any argument is a float and there are fractional microseconds, the fractional microseconds left over from all arguments are combined and their sum is rounded to the nearest microsecond using round-half-to-even tiebreaker. If no argument is a float, the conversion and
normalization processes are exact no information is lost If the normalized value of days lies outside the indicated range OverflowError is raised Note that normalization of negative values may be surprising at first For example from datetime import timedelta d timedelta microseconds 1 d days d seconds d microseconds 1 86399 999999 Class attributes timedelta min The most negative timedelta object timedelta 999999999 timedelta max The most positive timedelta object timedelta days 999999999 hours 23 minutes 59 seconds 59 microseconds 999999 timedelta resolution The smallest possible difference between non equal timedelta objects timedelta microseconds 1 Note that because of normalization timedelta max timedelta min timedelta max is not representable as a timedelta object Instance attributes read only Attribute Value days Between 999999999 and 999999999 inclusive seconds Between 0 and 86399 inclusive microseconds Between 0 and 999999 inclusive Supported operations Operation Result t1 t2 t3 Sum of t2 and t3 Afterwards t1 t2 t3 and t1 t3 t2 are true 1 t1 t2 t3 Difference of t2 and t3 Afterwards t1 t2 t3 and t2 t1 t3 are true 1 6 t1 t2 i or t1 i t2 Delta multiplied by an integer Afterwards t1 i t2 is true provided i 0 In general t1 i t1 i 1 t1 is true 1 t1 t2 f or t1 f t2 Delta multiplied by a float The result is rounded to the nearest multiple of timedelta resolution using round half to even f t2 t3 Division 3 of overall duration t2 by interval unit t3 Returns a float object t1 t2 f or t1 t2 i Delta divided by a float or an int The result is rounded to the nearest multiple of timedelta resolution using round half to even t1 t2 i or t1 t2 The floor is computed and the remainder if t3 any is thrown away In the second case an integer is returned 3 t1 t2 t3 The remainder is computed as a timedelta object 3 q r divmod t1 t2 Computes the quotient and the remainder q t1 t2 3 and r t1 t2 q is an integer and r is a timedelta object t1 Returns a timedelta object with the same 
value. (2)

-t1: equivalent to timedelta(-t1.days, -t1.seconds, -t1.microseconds), and to t1 * -1. (1)(4)

abs(t): equivalent to +t when t.days >= 0, and to -t when t.days < 0. (2)

str(t): Returns a string in the form [D day[s], ][H]H:MM:SS[.UUUUUU], where D is negative for negative t. (5)

repr(t): Returns a string representation of the timedelta object as a constructor call with canonical attribute values.

Notes:
1. This is exact but may overflow.
2. This is exact and cannot overflow.
3. Division by 0 raises ZeroDivisionError.
4. -timedelta.max is not representable as a timedelta object.
5. String representations of timedelta objects are normalized similarly to their internal representation. This leads to somewhat unusual results for negative timedeltas. For example:

>>> timedelta(hours=-5)
datetime.timedelta(days=-1, seconds=68400)
>>> print(_)
-1 day, 19:00:00

6. The expression t2 - t3 will always be equal to the expression t2 + (-t3) except when t3 is equal to timedelta.max; in that case the former will produce a result while the latter will overflow.

In addition to the operations listed above, timedelta objects support certain additions and subtractions with date and datetime objects (see below).

Changed in version 3.2: Floor division and true division of a timedelta object by another timedelta object are now supported, as are remainder operations and the divmod() function. True division and multiplication of a timedelta object by a float object are now supported.

timedelta objects support equality and order comparisons.

In Boolean contexts, a timedelta object is considered to be true if and only if it isn't equal to timedelta(0).

Instance methods:

timedelta.total_seconds()

Return the total number of seconds contained in the duration. Equivalent to td / timedelta(seconds=1). For interval units other than seconds, use the division form directly (e.g. td / timedelta(microseconds=1)).

Note that for very large time intervals (greater than 270 years on most platforms) this method will lose microsecond accuracy.

New in version 3.2.

Examples of usage: timedelta

An additional example of normalization:

>>> # Components of another_year add up to exactly 365 days
>>> from datetime import timedelta
>>> year =
timedelta days 365 another_year timedelta weeks 40 days 84 hours 23 minutes 50 seconds 600 year another_year True year total_seconds 31536000 0 Examples of timedelta arithmetic from datetime import timedelta year timedelta days 365 ten_years 10 year ten_years datetime timedelta days 3650 ten_years days 365 10 nine_years ten_years year nine_years datetime timedelta days 3285 three_years nine_years 3 three_years three_years days 365 datetime timedelta days 1095 3 date Objects A date object represents a date year month and day in an idealized calendar the current Gregorian calendar indefinitely extended in both directions January 1 of year 1 is called day number 1 January 2 of year 1 is called day number 2 and so on 2 class datetime date year month day All arguments are required Arguments must be integers in the following ranges MINYEAR year MAXYEAR 1 month 12 1 day number of days in the given month and year If an argument outside those ranges is given ValueError is raised Other constructors all class methods classmethod date today Return the current local date This is equivalent to date fromtimestamp time time classmethod date fromtimestamp timestamp Return the local date corresponding to the POSIX timestamp such as is returned by time time This may raise OverflowError if the timestamp is out of the range of values supported by the platform C localtime function and OSError on localtime failure It s common for this to be restricted to years from 1970 through 2038 Note that on non POSIX systems that include leap seconds in their notion of a timestamp leap seconds are ignored by fromtimestamp Changed in version 3 3 Raise OverflowError instead of ValueError if the timestamp is out of the range of values supported by the platform C localtime function Raise OSError instead of ValueError on localtime failure classmethod date fromordinal ordinal Return the date corresponding to the proleptic Gregorian ordinal where January 1 of year 1 has ordinal 1 ValueError is raised 
unless 1 <= ordinal <= date.max.toordinal(). For any date d, date.fromordinal(d.toordinal()) == d.

classmethod date.fromisoformat(date_string)

Return a date corresponding to a date_string given in any valid ISO 8601 format, with the following exceptions:

1. Reduced precision dates are not currently supported (YYYY-MM, YYYY).
2. Extended date representations are not currently supported (±YYYYYY-MM-DD).
3. Ordinal dates are not currently supported (YYYY-OOO).

Examples:

>>> from datetime import date
>>> date.fromisoformat('2019-12-04')
datetime.date(2019, 12, 4)
>>> date.fromisoformat('20191204')
datetime.date(2019, 12, 4)
>>> date.fromisoformat('2021-W01-1')
datetime.date(2021, 1, 4)

New in version 3.7.

Changed in version 3.11: Previously, this method only supported the format YYYY-MM-DD.

classmethod date.fromisocalendar(year, week, day)

Return a date corresponding to the ISO calendar date specified by year, week and day. This is the inverse of the function date.isocalendar().

New in version 3.8.

Class attributes:

date.min
The earliest representable date, date(MINYEAR, 1, 1).

date.max
The latest representable date, date(MAXYEAR, 12, 31).

date.resolution
The smallest possible difference between non-equal date objects, timedelta(days=1).

Instance attributes (read-only):

date.year
Between MINYEAR and MAXYEAR inclusive.

date.month
Between 1 and 12 inclusive.

date.day
Between 1 and the number of days in the given month of the given year.

Supported operations:

date2 = date1 + timedelta: date2 will be timedelta.days days after date1. (1)
date2 = date1 - timedelta: Computes date2 such that date2 + timedelta == date1. (2)
timedelta = date1 - date2 (3)
date1 == date2, date1 != date2: Equality comparison. (4)
date1 < date2, date1 > date2, date1 <= date2, date1 >= date2: Order comparison. (5)

Notes:
1. date2 is moved forward in time if timedelta.days > 0, or backward if timedelta.days < 0. Afterward date2 - date1 == timedelta.days. timedelta.seconds and timedelta.microseconds are ignored. OverflowError is raised if date2.year would be smaller than MINYEAR or larger than MAXYEAR.
2. timedelta.seconds and timedelta.microseconds are ignored.
3. This is exact and cannot overflow. timedelta.seconds and timedelta.microseconds are 0, and date2 +
timedelta date1 after 4 date objects are equal if they represent the same date 5 date1 is considered less than date2 when date1 precedes date2 in time In other words date1 date2 if and only if date1 toordinal date2 toordinal In Boolean contexts all date objects are considered to be true Instance methods date replace year self year month self month day self day Return a date with the same value except for those parameters given new values by whichever keyword arguments are specified Example from datetime import date d date 2002 12 31 d replace day 26 datetime date 2002 12 26 date timetuple Return a time struct_time such as returned by time localtime The hours minutes and seconds are 0 and the DST flag is 1 d timetuple is equivalent to time struct_time d year d month d day 0 0 0 d weekday yday 1 where yday d toordinal date d year 1 1 toordinal 1 is the day number within the current year starting with 1 for January 1st date toordinal Return the proleptic Gregorian ordinal of the date where January 1 of year 1 has ordinal 1 For any date object d date fromordinal d toordinal d date weekday Return the day of the week as an integer where Monday is 0 and Sunday is 6 For example date 2002 12 4 weekday 2 a Wednesday See also isoweekday date isoweekday Return the day of the week as an integer where Monday is 1 and Sunday is 7 For example date 2002 12 4 isoweekday 3 a Wednesday See also weekday isocalendar date isocalendar Return a named tuple object with three components year week and weekday The ISO calendar is a widely used variant of the Gregorian calendar 3 The ISO year consists of 52 or 53 full weeks and where a week starts on a Monday and ends on a Sunday The first week of an ISO year is the first Gregorian calendar week of a year containing a Thursday This is called week number 1 and the ISO year of that Thursday is the same as its Gregorian year For example 2004 begins on a Thursday so the first week of ISO year 2004 begins on Monday 29 Dec 2003 and ends on Sunday 4 
Jan 2004:

>>> from datetime import date
>>> date(2003, 12, 29).isocalendar()
datetime.IsoCalendarDate(year=2004, week=1, weekday=1)
>>> date(2004, 1, 4).isocalendar()
datetime.IsoCalendarDate(year=2004, week=1, weekday=7)

Changed in version 3.9: Result changed from a tuple to a named tuple.

date.isoformat()

Return a string representing the date in ISO 8601 format, YYYY-MM-DD:

>>> from datetime import date
>>> date(2002, 12, 4).isoformat()
'2002-12-04'

date.__str__()

For a date d, str(d) is equivalent to d.isoformat().

date.ctime()

Return a string representing the date:

>>> from datetime import date
>>> date(2002, 12, 4).ctime()
'Wed Dec  4 00:00:00 2002'

d.ctime() is equivalent to time.ctime(time.mktime(d.timetuple())) on platforms where the native C ctime() function (which time.ctime() invokes, but which date.ctime() does not invoke) conforms to the C standard.

date.strftime(format)

Return a string representing the date, controlled by an explicit format string. Format codes referring to hours, minutes or seconds will see 0 values. See also strftime() and strptime() Behavior and date.isoformat().

date.__format__(format)

Same as date.strftime(). This makes it possible to specify a format string for a date object in formatted string literals and when using str.format(). See also strftime() and strptime() Behavior and date.isoformat().

Examples of Usage: date

Example of counting days to an event:

>>> import time
>>> from datetime import date
>>> today = date.today()
>>> today
datetime.date(2007, 12, 5)
>>> today == date.fromtimestamp(time.time())
True
>>> my_birthday = date(today.year, 6, 24)
>>> if my_birthday < today:
...     my_birthday = my_birthday.replace(year=today.year + 1)
>>> my_birthday
datetime.date(2008, 6, 24)
>>> time_to_birthday = abs(my_birthday - today)
>>> time_to_birthday.days
202

More examples of working with date:

>>> from datetime import date
>>> d = date.fromordinal(730920)  # 730920th day after 1.1.0001
>>> d
datetime.date(2002, 3, 11)

>>> # Methods related to formatting string output
>>> d.isoformat()
'2002-03-11'
>>> d.strftime("%d/%m/%y")
'11/03/02'
>>> d.strftime("%A %d. %B %Y")
'Monday 11. March 2002'
>>> d.ctime()
'Mon Mar 11 00:00:00 2002'
>>> 'The {1} is {0:%d}, the {2} is {0:%B}.'.format(d, "day", "month")
'The day is 11, the month is March.'

>>> # Methods for extracting 'components' under different calendars
>>> t = d.timetuple()
>>> for i in t:
print i 2002 year 3 month 11 day 0 0 0 0 weekday 0 Monday 70 70th day in the year 1 ic d isocalendar for i in ic print i 2002 ISO year 11 ISO week number 1 ISO day number 1 Monday A date object is immutable all operations produce a new object d replace year 2005 datetime date 2005 3 11 datetime Objects A datetime object is a single object containing all the information from a date object and a time object Like a date object datetime assumes the current Gregorian calendar extended in both directions like a time object datetime assumes there are exactly 3600 24 seconds in every day Constructor class datetime datetime year month day hour 0 minute 0 second 0 microsecond 0 tzinfo None fold 0 The year month and day arguments are required tzinfo may be None or an instance of a tzinfo subclass The remaining arguments must be integers in the following ranges MINYEAR year MAXYEAR 1 month 12 1 day number of days in the given month and year 0 hour 24 0 minute 60 0 second 60 0 microsecond 1000000 fold in 0 1 If an argument outside those ranges is given ValueError is raised Changed in version 3 6 Added the fold parameter Other constructors all class methods classmethod datetime today Return the current local datetime with tzinfo None Equivalent to datetime fromtimestamp time time See also now fromtimestamp This method is functionally equivalent to now but without a tz parameter classmethod datetime now tz None Return the current local date and time If optional argument tz is None or not specified this is like today but if possible supplies more precision than can be gotten from going through a time time timestamp for example this may be possible on platforms supplying the C gettimeofday function If tz is not None it must be an instance of a tzinfo subclass and the current date and time are converted to tz s time zone This function is preferred over today and utcnow classmethod datetime utcnow Return the current UTC date and time with tzinfo None This is like now but returns the 
current UTC date and time, as a naive datetime object. An aware current UTC datetime can be obtained by calling datetime.now(timezone.utc). See also now().

Warning: Because naive datetime objects are treated by many datetime methods as local times, it is preferred to use aware datetimes to represent times in UTC. As such, the recommended way to create an object representing the current time in UTC is by calling datetime.now(timezone.utc).

Deprecated since version 3.12: Use datetime.now() with UTC instead.

classmethod datetime.fromtimestamp(timestamp, tz=None)

Return the local date and time corresponding to the POSIX timestamp, such as is returned by time.time(). If optional argument tz is None or not specified, the timestamp is converted to the platform's local date and time, and the returned datetime object is naive. If tz is not None, it must be an instance of a tzinfo subclass, and the timestamp is converted to tz's time zone.

fromtimestamp() may raise OverflowError, if the timestamp is out of the range of values supported by the platform C localtime() or gmtime() functions, and OSError on localtime() or gmtime() failure. It's common for this to be restricted to years in 1970 through 2038. Note that on non-POSIX systems that include leap seconds in their notion of a timestamp, leap seconds are ignored by fromtimestamp(), and then it's possible to have two timestamps differing by a second that yield identical datetime objects. This method is preferred over utcfromtimestamp().

Changed in version 3.3: Raise OverflowError instead of ValueError if the timestamp is out of the range of values supported by the platform C localtime() or gmtime() functions. Raise OSError instead of ValueError on localtime() or gmtime() failure.

Changed in version 3.6: fromtimestamp() may return instances with fold set to 1.

classmethod datetime.utcfromtimestamp(timestamp)

Return the UTC datetime corresponding to the POSIX timestamp, with tzinfo None. (The resulting object is naive.) This may raise OverflowError, if the timestamp is out of the range of values supported by the platform C gmtime() function, and OSError on gmtime() failure. It's common for this to
be restricted to years in 1970 through 2038 To get an aware datetime object call fromtimestamp datetime fromtimestamp timestamp timezone utc On the POSIX compliant platforms it is equivalent to the following expression datetime 1970 1 1 tzinfo timezone utc timedelta seconds timestamp except the latter formula always supports the full years range between MINYEAR and MAXYEAR inclusive Warning Because naive datetime objects are treated by many datetime methods as local times it is preferred to use aware datetimes to represent times in UTC As such the recommended way to create an object representing a specific timestamp in UTC is by calling datetime fromtimestamp timestamp tz timezone utc Changed in version 3 3 Raise OverflowError instead of ValueError if the timestamp is out of the range of values supported by the platform C gmtime function Raise OSError instead of ValueError on gmtime failure Deprecated since version 3 12 Use datetime fromtimestamp with UTC instead classmethod datetime fromordinal ordinal Return the datetime corresponding to the proleptic Gregorian ordinal where January 1 of year 1 has ordinal 1 ValueError is raised unless 1 ordinal datetime max toordinal The hour minute second and microsecond of the result are all 0 and tzinfo is None classmethod datetime combine date time tzinfo time tzinfo Return a new datetime object whose date components are equal to the given date object s and whose time components are equal to the given time object s If the tzinfo argument is provided its value is used to set the tzinfo attribute of the result otherwise the tzinfo attribute of the time argument is used If the date argument is a datetime object its time components and tzinfo attributes are ignored For any datetime object d d datetime combine d date d time d tzinfo Changed in version 3 6 Added the tzinfo argument classmethod datetime fromisoformat date_string Return a datetime corresponding to a date_string in any valid ISO 8601 format with the following 
exceptions 1 Time zone offsets may have fractional seconds 2 The T separator may be replaced by any singl
en
null
87
e unicode character 3 Fractional hours and minutes are not supported 4 Reduced precision dates are not currently supported YYYY MM YYYY 5 Extended date representations are not currently supported YYYYYY MM DD 6 Ordinal dates are not currently supported YYYY OOO Examples from datetime import datetime datetime fromisoformat 2011 11 04 datetime datetime 2011 11 4 0 0 datetime fromisoformat 20111104 datetime datetime 2011 11 4 0 0 datetime fromisoformat 2011 11 04T00 05 23 datetime datetime 2011 11 4 0 5 23 datetime fromisoformat 2011 11 04T00 05 23Z datetime datetime 2011 11 4 0 5 23 tzinfo datetime timezone utc datetime fromisoformat 20111104T000523 datetime datetime 2011 11 4 0 5 23 datetime fromisoformat 2011 W01 2T00 05 23 283 datetime datetime 2011 1 4 0 5 23 283000 datetime fromisoformat 2011 11 04 00 05 23 283 datetime datetime 2011 11 4 0 5 23 283000 datetime fromisoformat 2011 11 04 00 05 23 283 00 00 datetime datetime 2011 11 4 0 5 23 283000 tzinfo datetime timezone utc datetime fromisoformat 2011 11 04T00 05 23 04 00 datetime datetime 2011 11 4 0 5 23 tzinfo datetime timezone datetime timedelta seconds 14400 New in version 3 7 Changed in version 3 11 Previously this method only supported formats that could be emitted by date isoformat or datetime isoformat classmethod datetime fromisocalendar year week day Return a datetime corresponding to the ISO calendar date specified by year week and day The non date components of the datetime are populated with their normal default values This is the inverse of the function datetime isocalendar New in version 3 8 classmethod datetime strptime date_string format Return a datetime corresponding to date_string parsed according to format If format does not contain microseconds or timezone information this is equivalent to datetime time strptime date_string format 0 6 ValueError is raised if the date_string and format can t be parsed by time strptime or if it returns a value which isn t a time tuple See also strftime and 
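As a quick supplement to the constructor descriptions above, this sketch (standard library only; sample values are arbitrary) shows that isoformat() output always round-trips through fromisoformat():

```python
from datetime import datetime, timezone

# Round-trip: isoformat() output is always accepted by fromisoformat().
dt = datetime(2011, 11, 4, 0, 5, 23, 283000, tzinfo=timezone.utc)
s = dt.isoformat()
assert s == '2011-11-04T00:05:23.283000+00:00'
assert datetime.fromisoformat(s) == dt
```
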
strptime() Behavior and datetime.fromisoformat().

Class attributes:

datetime.min
The earliest representable datetime, datetime(MINYEAR, 1, 1, tzinfo=None).

datetime.max
The latest representable datetime, datetime(MAXYEAR, 12, 31, 23, 59, 59, 999999, tzinfo=None).

datetime.resolution
The smallest possible difference between non-equal datetime objects, timedelta(microseconds=1).

Instance attributes (read-only):

datetime.year
Between MINYEAR and MAXYEAR inclusive.

datetime.month
Between 1 and 12 inclusive.

datetime.day
Between 1 and the number of days in the given month of the given year.

datetime.hour
In range(24).

datetime.minute
In range(60).

datetime.second
In range(60).

datetime.microsecond
In range(1000000).

datetime.tzinfo
The object passed as the tzinfo argument to the datetime constructor, or None if none was passed.

datetime.fold
In [0, 1]. Used to disambiguate wall times during a repeated interval. (A repeated interval occurs when clocks are rolled back at the end of daylight saving time or when the UTC offset for the current zone is decreased for political reasons.) The value 0 (1) represents the earlier (later) of the two moments with the same wall time representation. New in version 3.6.

Supported operations:

Operation                            Result
datetime2 = datetime1 + timedelta    (1)
datetime2 = datetime1 - timedelta    (2)
timedelta = datetime1 - datetime2    (3)
datetime1 == datetime2,
datetime1 != datetime2               Equality comparison (4)
datetime1 < datetime2,
datetime1 > datetime2,
datetime1 <= datetime2,
datetime1 >= datetime2               Order comparison (5)

1. datetime2 is a duration of timedelta removed from datetime1, moving forward in time if timedelta.days > 0, or backward if timedelta.days < 0. The result has the same tzinfo attribute as the input datetime, and datetime2 - datetime1 == timedelta after. OverflowError is raised if datetime2.year would be smaller than MINYEAR or larger than MAXYEAR. Note that no time zone adjustments are done even if the input is an aware object.

2. Computes the datetime2 such that datetime2 + timedelta == datetime1. As for addition, the result has the same tzinfo attribute as the input datetime, and no time zone adjustments are done even if the input is aware.

3. Subtraction of a datetime from a datetime is defined only if both operands are naive, or if both are aware. If one is aware and the other is naive, TypeError is raised. If both are naive, or both are aware and have the same tzinfo attribute, the tzinfo attributes are ignored, and the result is a timedelta object t such that datetime2 + t == datetime1. No time zone adjustments are done in this case. If both are aware and have different tzinfo attributes, a-b acts as if a and b were first converted to naive UTC datetimes. The result is (a.replace(tzinfo=None) - a.utcoffset()) - (b.replace(tzinfo=None) - b.utcoffset()), except that the implementation never overflows.

4. datetime objects are equal if they represent the same date and time, taking into account the time zone. Naive and aware datetime objects are never equal. datetime objects are never equal to date objects that are not also datetime instances, even if they represent the same date. If both comparands are aware and have the same tzinfo attribute, the tzinfo and fold attributes are ignored and the base datetimes are compared. If both comparands are aware and have different tzinfo attributes, the comparison acts as comparands were first converted to UTC datetimes, except that the implementation never overflows. datetime instances in a repeated interval are never equal to datetime instances in other time zones.

5. datetime1 is considered less than datetime2 when datetime1 precedes datetime2 in time, taking into account the time zone. Order comparison between naive and aware datetime objects, as well as a datetime object and a date object that is not also a datetime instance, raises TypeError. If both comparands are aware and have the same tzinfo attribute, the tzinfo and fold attributes are ignored and the base datetimes are compared. If both comparands are aware and have different tzinfo attributes, the comparison acts as comparands were first converted to UTC datetimes, except that the implementation never overflows.

Changed in version 3.3: Equality comparisons between aware and
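The subtraction and comparison rules above can be sketched as follows (the zones and instants are arbitrary examples, using only the built-in fixed-offset timezone class):

```python
from datetime import datetime, timedelta, timezone

# The same instant expressed in two fixed-offset zones:
# subtraction and comparison act as if both were converted to UTC.
a = datetime(2020, 1, 1, 12, 0, tzinfo=timezone.utc)
b = datetime(2020, 1, 1, 14, 0, tzinfo=timezone(timedelta(hours=2)))
assert a - b == timedelta(0)   # same UTC instant
assert a == b

# Mixing naive and aware operands in subtraction raises TypeError;
# equality between them is simply False (since Python 3.3).
naive = datetime(2020, 1, 1, 12, 0)
assert naive != a
try:
    naive - a
except TypeError:
    pass
```
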
naive datetime instances don't raise TypeError.

Instance methods:

datetime.date()
Return date object with same year, month and day.

datetime.time()
Return time object with same hour, minute, second, microsecond and fold. tzinfo is None. See also method timetz(). Changed in version 3.6: The fold value is copied to the returned time object.

datetime.timetz()
Return time object with same hour, minute, second, microsecond, fold, and tzinfo attributes. See also method time(). Changed in version 3.6: The fold value is copied to the returned time object.

datetime.replace(year=self.year, month=self.month, day=self.day, hour=self.hour, minute=self.minute, second=self.second, microsecond=self.microsecond, tzinfo=self.tzinfo, *, fold=0)
Return a datetime with the same attributes, except for those attributes given new values by whichever keyword arguments are specified. Note that tzinfo=None can be specified to create a naive datetime from an aware datetime with no conversion of date and time data. Changed in version 3.6: Added the fold parameter.

datetime.astimezone(tz=None)
Return a datetime object with new tzinfo attribute tz, adjusting the date and time data so the result is the same UTC time as self, but in tz's local time. If provided, tz must be an instance of a tzinfo subclass, and its utcoffset() and dst() methods must not return None. If self is naive, it is presumed to represent time in the system timezone.

If called without arguments (or with tz=None) the system local timezone is assumed for the target timezone. The tzinfo attribute of the converted datetime instance will be set to an instance of timezone with the zone name and offset obtained from the OS.

If self.tzinfo is tz, self.astimezone(tz) is equal to self: no adjustment of date or time data is performed. Else the result is local time in the timezone tz, representing the same UTC time as self: after astz = dt.astimezone(tz), astz - astz.utcoffset() will have the same date and time data as dt - dt.utcoffset().

If you merely want to attach a time zone object tz to a datetime dt without adjustment of date and time data, use dt.replace(tzinfo=tz). If you merely want to remove the time zone object from an aware datetime dt without conversion of date and time data, use dt.replace(tzinfo=None).

Note that the default tzinfo.fromutc() method can be overridden in a tzinfo subclass to affect the result returned by astimezone(). Ignoring error cases, astimezone() acts like:

def astimezone(self, tz):
    if self.tzinfo is tz:
        return self
    # Convert self to UTC, and attach the new time zone object.
    utc = (self - self.utcoffset()).replace(tzinfo=tz)
    # Convert from UTC to tz's local time.
    return tz.fromutc(utc)

Changed in version 3.3: tz now can be omitted. Changed in version 3.6: The astimezone() method can now be called on naive instances that are presumed to represent system local time.

datetime.utcoffset()
If tzinfo is None, returns None, else returns self.tzinfo.utcoffset(self), and raises an exception if the latter doesn't return None or a timedelta object with magnitude less than one day. Changed in version 3.7: The UTC offset is not restricted to a whole number of minutes.

datetime.dst()
If tzinfo is None, returns None, else returns self.tzinfo.dst(self), and raises an exception if the latter doesn't return None or a timedelta object with magnitude less than one day. Changed in version 3.7: The DST offset is not restricted to a whole number of minutes.

datetime.tzname()
If tzinfo is None, returns None, else returns self.tzinfo.tzname(self); raises an exception if the latter doesn't return None or a string object.

datetime.timetuple()
Return a time.struct_time such as returned by time.localtime(). d.timetuple() is equivalent to time.struct_time((d.year, d.month, d.day, d.hour, d.minute, d.second, d.weekday(), yday, dst)), where yday = d.toordinal() - date(d.year, 1, 1).toordinal() + 1 is the day number within the current year, starting with 1 for January 1st. The tm_isdst flag of the result is set according to the dst() method: if tzinfo is None or dst() returns None, tm_isdst is set to -1; else if dst() returns a non-zero value, tm_isdst is set to 1; else tm_isdst is set to 0.

datetime.utctimetuple()
If datetime instance d is naive, this is the same as d.timetuple() except that
tm_isdst is forced to 0 regardless of what d.dst() returns. DST is never in effect for a UTC time.

If d is aware, d is normalized to UTC time, by subtracting d.utcoffset(), and a time.struct_time for the normalized time is returned. tm_isdst is forced to 0. Note that an OverflowError may be raised if d.year was MINYEAR or MAXYEAR and UTC adjustment spills over a year boundary.

Warning: Because naive datetime objects are treated by many datetime methods as local times, it is preferred to use aware datetimes to represent times in UTC; as a result, using datetime.utctimetuple() may give misleading results. If you have a naive datetime representing UTC, use datetime.replace(tzinfo=timezone.utc) to make it aware, at which point you can use datetime.timetuple().

datetime.toordinal()
Return the proleptic Gregorian ordinal of the date. The same as self.date().toordinal().

datetime.timestamp()
Return POSIX timestamp corresponding to the datetime instance. The return value is a float similar to that returned by time.time(). Naive datetime instances are assumed to represent local time and this method relies on the platform C mktime() function to perform the conversion. Since datetime supports wider range of values than mktime() on many platforms, this method may raise OverflowError or OSError for times far in the past or far in the future.

For aware datetime instances, the return value is computed as:

(dt - datetime(1970, 1, 1, tzinfo=timezone.utc)).total_seconds()

New in version 3.3. Changed in version 3.6: The timestamp() method uses the fold attribute to disambiguate the times during a repeated interval.

Note: There is no method to obtain the POSIX timestamp directly from a naive datetime instance representing UTC time. If your application uses this convention and your system timezone is not set to UTC, you can obtain the POSIX timestamp by supplying tzinfo=timezone.utc:

timestamp = dt.replace(tzinfo=timezone.utc).timestamp()

or by calculating the timestamp directly:

timestamp = (dt - datetime(1970, 1, 1)) / timedelta(seconds=1)

datetime.weekday()
Return the day of the week as an integer, where Monday is 0 and Sunday is 6. The same as self.date().weekday(). See also isoweekday().

datetime.isoweekday()
Return the day of the week as an integer, where Monday is 1 and Sunday is 7. The same as self.date().isoweekday(). See also weekday(), isocalendar().

datetime.isocalendar()
Return a named tuple with three components: year, week and weekday. The same as self.date().isocalendar().

datetime.isoformat(sep='T', timespec='auto')
Return a string representing the date and time in ISO 8601 format:

YYYY-MM-DDTHH:MM:SS.ffffff, if microsecond is not 0
YYYY-MM-DDTHH:MM:SS, if microsecond is 0

If utcoffset() does not return None, a string is appended, giving the UTC offset:

YYYY-MM-DDTHH:MM:SS.ffffff+HH:MM[:SS[.ffffff]], if microsecond is not 0
YYYY-MM-DDTHH:MM:SS+HH:MM[:SS[.ffffff]], if microsecond is 0

Examples:

>>> from datetime import datetime, timezone
>>> datetime(2019, 5, 18, 15, 17, 8, 132263).isoformat()
'2019-05-18T15:17:08.132263'
>>> datetime(2019, 5, 18, 15, 17, tzinfo=timezone.utc).isoformat()
'2019-05-18T15:17:00+00:00'

The optional argument sep (default 'T') is a one-character separator, placed between the date and time portions of the result. For example:

>>> from datetime import tzinfo, timedelta, datetime
>>> class TZ(tzinfo):
...     """A time zone with an arbitrary, constant -06:39 offset."""
...     def utcoffset(self, dt):
...         return timedelta(hours=-6, minutes=-39)
...
>>> datetime(2002, 12, 25, tzinfo=TZ()).isoformat(' ')
'2002-12-25 00:00:00-06:39'
>>> datetime(2009, 11, 27, microsecond=100, tzinfo=TZ()).isoformat()
'2009-11-27T00:00:00.000100-06:39'

The optional argument timespec specifies the number of additional components of the time to include (the default is 'auto'). It can be one of the following:

'auto': Same as 'seconds' if microsecond is 0, same as 'microseconds' otherwise.
'hours': Include the hour in the two-digit HH format.
'minutes': Include hour and minute in HH:MM format.
'seconds': Include hour, minute, and second in HH:MM:SS format.
'milliseconds': Include full time, but truncate fractional second part to milliseconds. HH:MM:SS.sss format.
'microseconds': Include full time in HH:MM:SS.ffffff format.

Note: Excluded time components are truncated, not rounded. ValueError will be raised on an invalid timespec
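The truncation behavior noted above (truncate, never round) can be checked directly; the sample values here are arbitrary:

```python
from datetime import datetime

# timespec truncates: 999999 microseconds become .999, with no carry.
dt = datetime(2015, 1, 1, 12, 30, 59, 999999)
assert dt.isoformat(timespec='milliseconds') == '2015-01-01T12:30:59.999'
assert dt.isoformat(timespec='minutes') == '2015-01-01T12:30'
assert dt.isoformat(sep=' ', timespec='seconds') == '2015-01-01 12:30:59'
```
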
argument:

>>> from datetime import datetime
>>> datetime.now().isoformat(timespec='minutes')
'2002-12-25T00:00'
>>> dt = datetime(2015, 1, 1, 12, 30, 59, 0)
>>> dt.isoformat(timespec='microseconds')
'2015-01-01T12:30:59.000000'

Changed in version 3.6: Added the timespec parameter.

datetime.__str__()
For a datetime instance d, str(d) is equivalent to d.isoformat(' ').

datetime.ctime()
Return a string representing the date and time:

>>> from datetime import datetime
>>> datetime(2002, 12, 4, 20, 30, 40).ctime()
'Wed Dec  4 20:30:40 2002'

The output string will not include time zone information, regardless of whether the input is aware or naive. d.ctime() is equivalent to time.ctime(time.mktime(d.timetuple())) on platforms where the native C ctime() function (which time.ctime() invokes, but which datetime.ctime() does not invoke) conforms to the C standard.

datetime.strftime(format)
Return a string representing the date and time, controlled by an explicit format string. See also strftime() and strptime() Behavior and datetime.isoformat().

datetime.__format__(format)
Same as datetime.strftime(). This makes it possible to specify a format string for a datetime object in formatted string literals and when using str.format(). See also strftime() and strptime() Behavior and datetime.isoformat().

Examples of Usage: datetime

Examples of working with datetime objects:

>>> from datetime import datetime, date, time, timezone

>>> # Using datetime.combine()
>>> d = date(2005, 7, 14)
>>> t = time(12, 30)
>>> datetime.combine(d, t)
datetime.datetime(2005, 7, 14, 12, 30)

>>> # Using datetime.now()
>>> datetime.now()
datetime.datetime(2007, 12, 6, 16, 29, 43, 79043)   # GMT +1
>>> datetime.now(timezone.utc)
datetime.datetime(2007, 12, 6, 15, 29, 43, 79060, tzinfo=datetime.timezone.utc)

>>> # Using datetime.strptime()
>>> dt = datetime.strptime("21/11/06 16:30", "%d/%m/%y %H:%M")
>>> dt
datetime.datetime(2006, 11, 21, 16, 30)

>>> # Using datetime.timetuple() to get tuple of all attributes
>>> tt = dt.timetuple()
>>> for it in tt:
...     print(it)
...
2006    # year
11      # month
21      # day
16      # hour
30      # minute
0       # second
1       # weekday (0 = Monday)
325     # number of days since 1st January
-1      # dst - method tzinfo.dst() returned None

>>> # Date in ISO format
>>> ic = dt.isocalendar()
>>> for it in ic:
...     print(it)
...
2006    # ISO year
47      # ISO week
2       # ISO weekday

>>> # Formatting a datetime
>>> dt.strftime("%A, %d. %B %Y %I:%M%p")
'Tuesday, 21. November 2006 04:30PM'
>>> 'The {1} is {0:%d}, the {2} is {0:%B}, the {3} is {0:%I:%M%p}.'.format(dt, "day", "month", "time")
'The day is 21, the month is November, the time is 04:30PM.'

The example below defines a tzinfo subclass capturing time zone information for Kabul, Afghanistan, which used +4 UTC until 1945 and then +4:30 UTC thereafter:

from datetime import timedelta, datetime, tzinfo, timezone

class KabulTz(tzinfo):
    # Kabul used +4 until 1945, when they moved to +4:30
    UTC_MOVE_DATE = datetime(1944, 12, 31, 20, tzinfo=timezone.utc)

    def utcoffset(self, dt):
        if dt.year < 1945:
            return timedelta(hours=4)
        elif (1945, 1, 1, 0, 0) <= dt.timetuple()[:5] < (1945, 1, 1, 0, 30):
            # An ambiguous ("imaginary") half-hour range representing
            # a 'fold' in time due to the shift from +4 to +4:30.
            # If dt falls in the imaginary range, use fold to decide how
            # to resolve. See PEP495.
            return timedelta(hours=4, minutes=(30 if dt.fold else 0))
        else:
            return timedelta(hours=4, minutes=30)

    def fromutc(self, dt):
        # Follow same validations as in datetime.tzinfo
        if not isinstance(dt, datetime):
            raise TypeError("fromutc() requires a datetime argument")
        if dt.tzinfo is not self:
            raise ValueError("dt.tzinfo is not self")

        # A custom implementation is required for fromutc as
        # the input to this function is a datetime with utc values
        # but with a tzinfo set to self.
        # See datetime.astimezone or fromtimestamp.
        if dt.replace(tzinfo=timezone.utc) >= self.UTC_MOVE_DATE:
            return dt + timedelta(hours=4, minutes=30)
        else:
            return dt + timedelta(hours=4)

    def dst(self, dt):
        # Kabul does not observe daylight saving time.
        return timedelta(0)

    def tzname(self, dt):
        if dt >= self.UTC_MOVE_DATE:
            return "+04:30"
        return "+04"

Usage of KabulTz from above:

>>> tz1 = KabulTz()
>>> # Datetime before the change
>>> dt1 = datetime(1900, 11, 21, 16, 30, tzinfo=tz1)
>>> print(dt1.utcoffset())
4:00:00
>>> # Datetime after the change
>>> dt2 = datetime(2006, 6, 14, 13, 0, tzinfo=tz1)
>>> print(dt2.utcoffset())
4:30:00
>>> # Convert datetime to another time zone
>>> dt3 = dt2.astimezone(timezone.utc)
>>> dt3
datetime.datetime(2006, 6, 14, 8, 30, tzinfo=datetime.timezone.utc)
>>> dt2
datetime.datetime(2006, 6, 14, 13, 0, tzinfo=KabulTz())
>>> dt2 == dt3
True

time Objects

A time object
represents a (local) time of day, independent of any particular day, and subject to adjustment via a tzinfo object.

class datetime.time(hour=0, minute=0, second=0, microsecond=0, tzinfo=None, *, fold=0)
All arguments are optional. tzinfo may be None, or an instance of a tzinfo subclass. The remaining arguments must be integers, in the following ranges:

0 <= hour < 24,
0 <= minute < 60,
0 <= second < 60,
0 <= microsecond < 1000000,
fold in [0, 1].

If an argument outside those ranges is given, ValueError is raised. All default to 0 except tzinfo, which defaults to None.

Class attributes:

time.min
The earliest representable time, time(0, 0, 0, 0).

time.max
The latest representable time, time(23, 59, 59, 999999).

time.resolution
The smallest possible difference between non-equal time objects, timedelta(microseconds=1), although note that arithmetic on time objects is not supported.

Instance attributes (read-only):

time.hour
In range(24).

time.minute
In range(60).

time.second
In range(60).

time.microsecond
In range(1000000).

time.tzinfo
The object passed as the tzinfo argument to the time constructor, or None if none was passed.

time.fold
In [0, 1]. Used to disambiguate wall times during a repeated interval. (A repeated interval occurs when clocks are rolled back at the end of daylight saving time or when the UTC offset for the current zone is decreased for political reasons.) The value 0 (1) represents the earlier (later) of the two moments with the same wall time representation. New in version 3.6.

time objects support equality and order comparisons, where a is considered less than b when a precedes b in time. Naive and aware time objects are never equal. Order comparison between naive and aware time objects raises TypeError. If both comparands are aware, and have the same tzinfo attribute, the tzinfo and fold attributes are ignored and the base times are compared. If both comparands are aware and have different tzinfo attributes, the comparands are first adjusted by subtracting their UTC offsets (obtained from self.utcoffset()).

Changed in version 3.3: Equality comparisons between aware and naive time instances don't raise TypeError.

In Boolean contexts, a time object is always considered to be true. Changed in version 3.5: Before Python 3.5, a time object was considered to be false if it represented midnight in UTC. This behavior was considered obscure and error-prone and has been removed in Python 3.5. See bpo-13936 for full details.

Other constructor:

classmethod time.fromisoformat(time_string)
Return a time corresponding to a time_string in any valid ISO 8601 format, with the following exceptions:

1. Time zone offsets may have fractional seconds.
2. The leading T, normally required in cases where there may be ambiguity between a date and a time, is not required.
3. Fractional seconds may have any number of digits (anything beyond 6 will be truncated).
4. Fractional hours and minutes are not supported.

Examples:

>>> from datetime import time
>>> time.fromisoformat('04:23:01')
datetime.time(4, 23, 1)
>>> time.fromisoformat('T04:23:01')
datetime.time(4, 23, 1)
>>> time.fromisoformat('T042301')
datetime.time(4, 23, 1)
>>> time.fromisoformat('04:23:01.000384')
datetime.time(4, 23, 1, 384)
>>> time.fromisoformat('04:23:01,000384')
datetime.time(4, 23, 1, 384)
>>> time.fromisoformat('04:23:01+04:00')
datetime.time(4, 23, 1, tzinfo=datetime.timezone(datetime.timedelta(seconds=14400)))
>>> time.fromisoformat('04:23:01Z')
datetime.time(4, 23, 1, tzinfo=datetime.timezone.utc)
>>> time.fromisoformat('04:23:01+00:00')
datetime.time(4, 23, 1, tzinfo=datetime.timezone.utc)

New in version 3.7. Changed in version 3.11: Previously, this method only supported formats that could be emitted by time.isoformat().

Instance methods:

time.replace(hour=self.hour, minute=self.minute, second=self.second, microsecond=self.microsecond, tzinfo=self.tzinfo, *, fold=0)
Return a time with the same value, except for those attributes given new values by whichever keyword arguments are specified. Note that tzinfo=None can be specified to create a naive time from an aware time, without conversion of the time data. Changed in version 3.6: Added the fold parameter.

time.isoformat(timespec='auto')
Return a string representing the time in ISO 8601 format, one of:

HH:MM:SS.ffffff, if microsecond is not 0
HH:MM:SS, if
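A brief sketch of the fromisoformat() and replace() behavior described above (sample values are arbitrary):

```python
from datetime import time, timedelta, timezone

# Parse an aware time, then strip the zone with replace(tzinfo=None).
t = time.fromisoformat('04:23:01+00:00')
assert t.utcoffset() == timedelta(0)
naive = t.replace(tzinfo=None)
assert naive.tzinfo is None
assert naive == time(4, 23, 1)
```
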
microsecond is 0
HH:MM:SS.ffffff+HH:MM[:SS[.ffffff]], if utcoffset() does not return None
HH:MM:SS+HH:MM[:SS[.ffffff]], if microsecond is 0 and utcoffset() does not return None

The optional argument timespec specifies the number of additional components of the time to include (the default is 'auto'). It can be one of the following:

'auto': Same as 'seconds' if microsecond is 0, same as 'microseconds' otherwise.
'hours': Include the hour in the two-digit HH format.
'minutes': Include hour and minute in HH:MM format.
'seconds': Include hour, minute, and second in HH:MM:SS format.
'milliseconds': Include full time, but truncate fractional second part to milliseconds. HH:MM:SS.sss format.
'microseconds': Include full time in HH:MM:SS.ffffff format.

Note: Excluded time components are truncated, not rounded. ValueError will be raised on an invalid timespec argument.

Example:

>>> from datetime import time
>>> time(hour=12, minute=34, second=56, microsecond=123456).isoformat(timespec='minutes')
'12:34'
>>> dt = time(hour=12, minute=34, second=56, microsecond=0)
>>> dt.isoformat(timespec='microseconds')
'12:34:56.000000'
>>> dt.isoformat(timespec='auto')
'12:34:56'

Changed in version 3.6: Added the timespec parameter.

time.__str__()
For a time t, str(t) is equivalent to t.isoformat().

time.strftime(format)
Return a string representing the time, controlled by an explicit format string. See also strftime() and strptime() Behavior and time.isoformat().

time.__format__(format)
Same as time.strftime(). This makes it possible to specify a format string for a time object in formatted string literals and when using str.format(). See also strftime() and strptime() Behavior and time.isoformat().

time.utcoffset()
If tzinfo is None, returns None, else returns self.tzinfo.utcoffset(None), and raises an exception if the latter doesn't return None or a timedelta object with magnitude less than one day. Changed in version 3.7: The UTC offset is not restricted to a whole number of minutes.

time.dst()
If tzinfo is None, returns None, else returns self.tzinfo.dst(None), and raises an exception if the latter doesn't return None or a timedelta object with magnitude less than one day. Changed in version 3.7: The DST offset is not restricted to a whole number of minutes.

time.tzname()
If tzinfo is None, returns None, else returns self.tzinfo.tzname(None), or raises an exception if the latter doesn't return None or a string object.

Examples of Usage: time

Examples of working with a time object:

>>> from datetime import time, tzinfo, timedelta
>>> class TZ1(tzinfo):
...     def utcoffset(self, dt):
...         return timedelta(hours=1)
...     def dst(self, dt):
...         return timedelta(0)
...     def tzname(self, dt):
...         return "+01:00"
...     def __repr__(self):
...         return f"{self.__class__.__name__}()"
...
>>> t = time(12, 10, 30, tzinfo=TZ1())
>>> t
datetime.time(12, 10, 30, tzinfo=TZ1())
>>> t.isoformat()
'12:10:30+01:00'
>>> t.dst()
datetime.timedelta(0)
>>> t.tzname()
'+01:00'
>>> t.strftime("%H:%M:%S %Z")
'12:10:30 +01:00'
>>> 'The {} is {:%H:%M}.'.format("time", t)
'The time is 12:10.'

tzinfo Objects

class datetime.tzinfo
This is an abstract base class, meaning that this class should not be instantiated directly. Define a subclass of tzinfo to capture information about a particular time zone. An instance of (a concrete subclass of) tzinfo can be passed to the constructors for datetime and time objects. The latter objects view their attributes as being in local time, and the tzinfo object supports methods revealing offset of local time from UTC, the name of the time zone, and DST offset, all relative to a date or time object passed to them.

You need to derive a concrete subclass, and (at least) supply implementations of the standard tzinfo methods needed by the datetime methods you use. The datetime module provides timezone, a simple concrete subclass of tzinfo which can represent timezones with fixed offset from UTC such as UTC itself or North American EST and EDT.

Special requirement for pickling: A tzinfo subclass must have an __init__() method that can be called with no arguments, otherwise it can be pickled but possibly not unpickled again. This is a technical requirement that may be relaxed in the future.

A concrete subclass of tzinfo may need to implement the following methods. Exactly which methods are needed depends on the uses made of aware datetime objects. If in doubt, simply implement all of
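A minimal sketch of such a concrete subclass (in practice the built-in timezone class already covers the fixed-offset case; the class and zone names here are arbitrary):

```python
from datetime import datetime, timedelta, tzinfo

class FixedOffset(tzinfo):
    """Fixed-offset zone implementing the three standard tzinfo methods.

    Defaults on __init__ keep the no-argument pickling requirement satisfied.
    """
    def __init__(self, hours=0, name='fixed'):
        self._offset = timedelta(hours=hours)
        self._name = name

    def utcoffset(self, dt):
        return self._offset

    def dst(self, dt):
        return timedelta(0)   # a fixed offset observes no DST

    def tzname(self, dt):
        return self._name

dt = datetime(2020, 6, 1, 12, tzinfo=FixedOffset(-5, 'UTC-05:00'))
assert dt.utcoffset() == timedelta(hours=-5)
assert dt.tzname() == 'UTC-05:00'
```
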
them tzinfo utcoffset dt Return offset of local time from UTC as a timedelta object that is positive east of UTC If local time is west of UTC this should be negative This represents the total offset from UTC for example if a tzinfo object represents both time zone and DST adjustments utcoffset should return their sum If the UTC offset isn t known return None Else the value returned must be a timedelta object strictly between timedelta hours 24 and timedelta hours 24 the magnitude of the offset must be less than one day Most implementations of utcoffset will probably look like one of these two return CONSTANT fixed offset class return CONSTANT self dst dt daylight aware class If utcoffset does not return None dst should not return None either The default implementation of utcoffset raises NotImplementedError Changed in version 3 7 The UTC offset is not restricted to a whole number of minutes tzinfo dst dt Return the daylight saving time DST adjustment as a timedelta object or None if DST information isn t known Return timedelta 0 if DST is not in effect If DST is in effect return the offset as a timedelta object see utcoffset for details Note that DST offset if applicable has already been added to the UTC offset returned by utcoffset so there s no need to consult dst unless you re interested in obtaining DST info separately For example datetime timetuple calls its tzinfo attribute s dst method to determine how the tm_isdst flag should be set and tzinfo fromutc calls dst to account for DST changes when crossing time zones An instance tz of a tzinfo subclass that models both standard and daylight times must be consistent in this sense tz utcoffset dt tz dst dt must return the same result for every datetime dt with dt tzinfo tz For sane tzinfo subclasses this expression yields the time zone s standard offset which should not depend on the date or the time but only on geographic location The implementation of datetime astimezone relies on this but cannot detect 
violations it s the programmer s responsibility to ensure it If a tzinfo subclass cannot guarantee this i
en
null
94
t may be able to override the default implementation of tzinfo fromutc to work correctly with astimezone regardless Most implementations of dst will probably look like one of these two def dst self dt a fixed offset class doesn t account for DST return timedelta 0 or def dst self dt Code to set dston and dstoff to the time zone s DST transition times based on the input dt year and expressed in standard local time if dston dt replace tzinfo None dstoff return timedelta hours 1 else return timedelta 0 The default implementation of dst raises NotImplementedError Changed in version 3 7 The DST offset is not restricted to a whole number of minutes tzinfo tzname dt Return the time zone name corresponding to the datetime object dt as a string Nothing about string names is defined by the datetime module and there s no requirement that it mean anything in particular For example GMT UTC 500 5 00 EDT US Eastern America New York are all valid replies Return None if a string name isn t known Note that this is a method rather than a fixed string primarily because some tzinfo subclasses will wish to return different names depending on the specific value of dt passed especially if the tzinfo class is accounting for daylight time The default implementation of tzname raises NotImplementedError These methods are called by a datetime or time object in response to their methods of the same names A datetime object passes itself as the argument and a time object passes None as the argument A tzinfo subclass s methods should therefore be prepared to accept a dt argument of None or of class datetime When None is passed it s up to the class designer to decide the best response For example returning None is appropriate if the class wishes to say that time objects don t participate in the tzinfo protocols It may be more useful for utcoffset None to return the standard UTC offset as there is no other convention for discovering the standard offset When a datetime object is passed in response to 
When a datetime object is passed in response to a datetime method, dt.tzinfo is the same object as self. tzinfo methods can rely on this, unless user code calls tzinfo methods directly. The intent is that the tzinfo methods interpret dt as being in local time, and not need worry about objects in other time zones.

There is one more tzinfo method that a subclass may wish to override:

tzinfo.fromutc(dt)

This is called from the default datetime.astimezone() implementation. When called from that, dt.tzinfo is self, and dt's date and time data are to be viewed as expressing a UTC time. The purpose of fromutc() is to adjust the date and time data, returning an equivalent datetime in self's local time.

Most tzinfo subclasses should be able to inherit the default fromutc() implementation without problems. It's strong enough to handle fixed-offset time zones, and time zones accounting for both standard and daylight time, and the latter even if the DST transition times differ in different years. An example of a time zone the default fromutc() implementation may not handle correctly in all cases is one where the standard offset (from UTC) depends on the specific date and time passed, which can happen for political reasons. The default implementations of astimezone() and fromutc() may not produce the result you want if the result is one of the hours straddling the moment the standard offset changes.

Skipping code for error cases, the default fromutc() implementation acts like:

    def fromutc(self, dt):
        # raise ValueError error if dt.tzinfo is not self
        dtoff = dt.utcoffset()
        dtdst = dt.dst()
        # raise ValueError if dtoff is None or dtdst is None
        delta = dtoff - dtdst  # this is self's standard offset
        if delta:
            dt += delta   # convert to standard local time
            dtdst = dt.dst()
            # raise ValueError if dtdst is None
        if dtdst:
            return dt + dtdst
        else:
            return dt

In the following tzinfo_examples.py file there are some examples of tzinfo classes:

    from datetime import tzinfo, timedelta, datetime

    ZERO = timedelta(0)
    HOUR = timedelta(hours=1)
    SECOND = timedelta(seconds=1)

    # A class capturing the platform's idea of local time.
    # (May result in wrong values on historical times in
    #  timezones where UTC offset and/or the DST rules had
    #  changed in the past.)
    import time as _time

    STDOFFSET = timedelta(seconds = -_time.timezone)
    if _time.daylight:
        DSTOFFSET = timedelta(seconds = -_time.altzone)
    else:
        DSTOFFSET = STDOFFSET

    DSTDIFF = DSTOFFSET - STDOFFSET

    class LocalTimezone(tzinfo):

        def fromutc(self, dt):
            assert dt.tzinfo is self
            stamp = (dt - datetime(1970, 1, 1, tzinfo=self)) // SECOND
            args = _time.localtime(stamp)[:6]
            dst_diff = DSTDIFF // SECOND
            # Detect fold
            fold = (args == _time.localtime(stamp - dst_diff))
            return datetime(*args, microsecond=dt.microsecond,
                            tzinfo=self, fold=fold)

        def utcoffset(self, dt):
            if self._isdst(dt):
                return DSTOFFSET
            else:
                return STDOFFSET

        def dst(self, dt):
            if self._isdst(dt):
                return DSTDIFF
            else:
                return ZERO

        def tzname(self, dt):
            return _time.tzname[self._isdst(dt)]

        def _isdst(self, dt):
            tt = (dt.year, dt.month, dt.day,
                  dt.hour, dt.minute, dt.second,
                  dt.weekday(), 0, 0)
            stamp = _time.mktime(tt)
            tt = _time.localtime(stamp)
            return tt.tm_isdst > 0

    Local = LocalTimezone()


    # A complete implementation of current DST rules for major US time zones.

    def first_sunday_on_or_after(dt):
        days_to_go = 6 - dt.weekday()
        if days_to_go:
            dt += timedelta(days_to_go)
        return dt


    # US DST Rules
    #
    # This is a simplified (i.e., wrong for a few cases) set of rules for US
    # DST start and end times. For a complete and up-to-date set of DST rules
    # and timezone definitions, visit the Olson Database (or try pytz):
    # http://www.twinsun.com/tz/tz-link.htm
    # https://sourceforge.net/projects/pytz/ -- might not be up-to-date
    #
    # In the US, since 2007, DST starts at 2am (standard time) on the second
    # Sunday in March, which is the first Sunday on or after Mar 8.
    DSTSTART_2007 = datetime(1, 3, 8, 2)
    # and ends at 2am (DST time) on the first Sunday of Nov.
    DSTEND_2007 = datetime(1, 11, 1, 2)
    # From 1987 to 2006, DST used to start at 2am (standard time) on the first
    # Sunday in April and to end at 2am (DST time) on the last Sunday of
    # October, which is the first Sunday on or after Oct 25.
    DSTSTART_1987_2006 = datetime(1, 4, 1, 2)
    DSTEND_1987_2006 = datetime(1, 10, 25, 2)
    # From 1967 to 1986, DST used to start at 2am (standard time) on the last
    # Sunday in April (the one on or after April 24) and to end at 2am (DST
    # time) on the last Sunday of October, which is the first Sunday
    # on or after Oct 25.
    DSTSTART_1967_1986 = datetime(1, 4, 24, 2)
    DSTEND_1967_1986 = DSTEND_1987_2006

    def us_dst_range(year):
        # Find start and end times for US DST. For years before 1967, return
        # start = end for no DST.
        if 2006 < year:
            dststart, dstend = DSTSTART_2007, DSTEND_2007
        elif 1986 < year < 2007:
            dststart, dstend = DSTSTART_1987_2006, DSTEND_1987_2006
        elif 1966 < year < 1987:
            dststart, dstend = DSTSTART_1967_1986, DSTEND_1967_1986
        else:
            return (datetime(year, 1, 1), ) * 2

        start = first_sunday_on_or_after(dststart.replace(year=year))
        end = first_sunday_on_or_after(dstend.replace(year=year))
        return start, end

    class USTimeZone(tzinfo):

        def __init__(self, hours, reprname, stdname, dstname):
            self.stdoffset = timedelta(hours=hours)
            self.reprname = reprname
            self.stdname = stdname
            self.dstname = dstname

        def __repr__(self):
            return self.reprname

        def tzname(self, dt):
            if self.dst(dt):
                return self.dstname
            else:
                return self.stdname

        def utcoffset(self, dt):
            return self.stdoffset + self.dst(dt)

        def dst(self, dt):
            if dt is None or dt.tzinfo is None:
                # An exception may be sensible here, in one or both cases.
                # It depends on how you want to treat them.  The default
                # fromutc() implementation (called by the default astimezone()
                # implementation) passes a datetime with dt.tzinfo is self.
                return ZERO
            assert dt.tzinfo is self
            start, end = us_dst_range(dt.year)
            # Can't compare naive to aware objects, so strip the timezone
            # from dt first.
            dt = dt.replace(tzinfo=None)
            if start + HOUR <= dt < end - HOUR:
                # DST is in effect.
                return HOUR
            if end - HOUR <= dt < end:
                # Fold (an ambiguous hour): use dt.fold to disambiguate.
                return ZERO if dt.fold else HOUR
            if start <= dt < start + HOUR:
                # Gap (a non-existent hour): reverse the fold rule.
                return HOUR if dt.fold else ZERO
            # DST is off.
            return ZERO

        def fromutc(self, dt):
            assert dt.tzinfo is self
            start, end = us_dst_range(dt.year)
            start = start.replace(tzinfo=self)
            end = end.replace(tzinfo=self)
            std_time = dt + self.stdoffset
            dst_time = std_time + HOUR
            if end <= dst_time < end + HOUR:
                # Repeated hour
                return std_time.replace(fold=1)
            if std_time < start or dst_time >= end:
                # Standard time
                return std_time
            if start <= std_time < end - HOUR:
                # Daylight saving time
                return dst_time

    Eastern  = USTimeZone(-5, "Eastern",  "EST", "EDT")
    Central  = USTimeZone(-6, "Central",  "CST", "CDT")
    Mountain = USTimeZone(-7, "Mountain", "MST", "MDT")
    Pacific  = USTimeZone(-8, "Pacific",  "PST", "PDT")
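As a smaller, self-contained illustration of the tzinfo protocol used in the example file above, here is a minimal fixed-offset subclass (a sketch only; the FixedOffset name and the sample offset are arbitrary, and in modern code the built-in timezone class already covers this case):

```python
from datetime import datetime, timedelta, tzinfo

class FixedOffset(tzinfo):
    """A trivial tzinfo with a constant offset and no DST."""

    def __init__(self, offset_hours, name):
        self._offset = timedelta(hours=offset_hours)
        self._name = name

    def utcoffset(self, dt):
        return self._offset          # same offset year-round

    def dst(self, dt):
        return timedelta(0)          # a fixed-offset class has no DST

    def tzname(self, dt):
        return self._name

est = FixedOffset(-5, "EST")
d = datetime(2002, 12, 25, 12, 0, tzinfo=est)
print(d.utcoffset())   # -1 day, 19:00:00  (i.e. -5 hours)
print(d.tzname())      # EST
```

Because utcoffset() and dst() ignore their dt argument, this class behaves the same for every datetime and for None, so it participates in the time protocols as well.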
Note that there are unavoidable subtleties twice per year in a tzinfo subclass accounting for both standard and daylight time, at the DST transition points. For concreteness, consider US Eastern (UTC -0500), where EDT begins the minute after 1:59 (EST) on the second Sunday in March, and ends the minute after 1:59 (EDT) on the first Sunday in November:

      UTC    3:MM   4:MM   5:MM   6:MM   7:MM   8:MM
      EST   22:MM  23:MM   0:MM   1:MM   2:MM   3:MM
      EDT   23:MM   0:MM   1:MM   2:MM   3:MM   4:MM

    start  22:MM  23:MM   0:MM   1:MM   3:MM   4:MM
      end  23:MM   0:MM   1:MM   1:MM   2:MM   3:MM

When DST starts (the "start" line), the local wall clock leaps from 1:59 to 3:00. A wall time of the form 2:MM doesn't really make sense on that day, so astimezone(Eastern) won't deliver a result with hour == 2 on the day DST begins. For example, at the Spring forward transition of 2016, we get:

    >>> from datetime import datetime, timezone
    >>> from tzinfo_examples import HOUR, Eastern
    >>> u0 = datetime(2016, 3, 13, 5, tzinfo=timezone.utc)
    >>> for i in range(4):
    ...     u = u0 + i*HOUR
    ...     t = u.astimezone(Eastern)
    ...     print(u.time(), 'UTC =', t.time(), t.tzname())
    ...
    05:00:00 UTC = 00:00:00 EST
    06:00:00 UTC = 01:00:00 EST
    07:00:00 UTC = 03:00:00 EDT
    08:00:00 UTC = 04:00:00 EDT

When DST ends (the "end" line), there's a potentially worse problem: there's an hour that can't be spelled unambiguously in local wall time: the last hour of daylight time. In Eastern, that's times of the form 5:MM UTC on the day daylight time ends. The local wall clock leaps from 1:59 (daylight time) back to 1:00 (standard time) again. Local times of the form 1:MM are ambiguous. astimezone() mimics the local clock's behavior by mapping two adjacent UTC hours into the same local hour then. In the Eastern example, UTC times of the form 5:MM and 6:MM both map to 1:MM when converted to Eastern, but earlier times have the fold attribute set to 0 and the later times have it set to 1. For example, at the Fall back transition of 2016, we get:

    >>> u0 = datetime(2016, 11, 6, 4, tzinfo=timezone.utc)
    >>> for i in range(4):
    ...     u = u0 + i*HOUR
    ...     t = u.astimezone(Eastern)
    ...     print(u.time(), 'UTC =', t.time(), t.tzname(), t.fold)
    ...
    04:00:00 UTC = 00:00:00 EDT 0
    05:00:00 UTC = 01:00:00 EDT 0
    06:00:00 UTC = 01:00:00 EST 1
    07:00:00 UTC = 02:00:00 EST 0

Note that datetime instances that differ only by the value of the fold attribute are considered equal in comparisons.

Applications that can't bear wall-time ambiguities should explicitly check the value of the fold attribute, or avoid using hybrid tzinfo subclasses; there are no ambiguities when using timezone, or any other fixed-offset tzinfo subclass (such as a class representing only EST (fixed offset -5 hours), or only EDT (fixed offset -4 hours)).

See also:

zoneinfo
    The datetime module has a basic timezone class (for handling arbitrary fixed offsets from UTC) and its timezone.utc attribute (a UTC timezone instance). zoneinfo brings the IANA timezone database (also known as the Olson database) to Python, and its usage is recommended.

IANA timezone database
    The Time Zone Database (often called tz, tzdata or zoneinfo) contains code and data that represent the history of local time for many representative locations around the globe. It is updated periodically to reflect changes made by political bodies to time zone boundaries, UTC offsets, and daylight-saving rules.

timezone Objects

The timezone class is a subclass of tzinfo, each instance of which represents a timezone defined by a fixed offset from UTC.

Objects of this class cannot be used to represent timezone information in the locations where different offsets are used in different days of the year or where historical changes have been made to civil time.

class datetime.timezone(offset, name=None)

The offset argument must be specified as a timedelta object representing the difference between the local time and UTC. It must be strictly between -timedelta(hours=24) and timedelta(hours=24), otherwise ValueError is raised.

The name argument is optional. If specified, it must be a string that will be used as the value returned by the datetime.tzname() method.

New in version 3.2.
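As a quick illustration of the constructor just described (a sketch; the IST name and the sample offsets are arbitrary):

```python
from datetime import timedelta, timezone

# A fixed +05:30 offset with an explicit name.
ist = timezone(timedelta(hours=5, minutes=30), "IST")
print(ist.tzname(None))        # IST

# Without a name, one is generated from the offset.
tz = timezone(timedelta(hours=-5))
print(tz.tzname(None))         # UTC-05:00

# Offsets must be strictly between -24 and +24 hours.
try:
    timezone(timedelta(hours=24))
except ValueError as exc:
    print("rejected:", exc)
```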
Changed in version 3.7: The UTC offset is not restricted to a whole number of minutes.

timezone.utcoffset(dt)

Return the fixed value specified when the timezone instance is constructed.

The dt argument is ignored. The return value is a timedelta instance equal to the difference between the local time and UTC.

Changed in version 3.7: The UTC offset is not restricted to a whole number of minutes.

timezone.tzname(dt)

Return the fixed value specified when the timezone instance is constructed.

If name is not provided in the constructor, the name returned by tzname(dt) is generated from the value of the offset as follows. If offset is timedelta(0), the name is "UTC", otherwise it is a string in the format UTC±HH:MM, where ± is the sign of offset, and HH and MM are two digits of offset.hours and offset.minutes respectively.

Changed in version 3.6: Name generated from offset=timedelta(0) is now plain 'UTC', not 'UTC+00:00'.

timezone.dst(dt)

Always returns None.

timezone.fromutc(dt)

Return dt + offset. The dt argument must be an aware datetime instance, with tzinfo set to self.

Class attributes:

timezone.utc

The UTC timezone, timezone(timedelta(0)).

strftime() and strptime() Behavior

date, datetime, and time objects all support a strftime(format) method, to create a string representing the time under the control of an explicit format string.

Conversely, the datetime.strptime() class method creates a datetime object from a string representing a date and time and a corresponding format string.

The table below provides a high-level comparison of strftime() versus strptime():

                     strftime                                strptime
    Usage            Convert object to a string              Parse a string into a datetime
                     according to a given format             object given a corresponding format
    Type of method   Instance method                         Class method
    Method of        date; datetime; time                    datetime
    Signature        strftime(format)                        strptime(date_string, format)

strftime() and strptime() Format Codes

These methods accept format codes that can be used to parse and format dates:

    >>> datetime.strptime('31/01/22 23:59:59.999999',
    ...                   '%d/%m/%y %H:%M:%S.%f')
    datetime.datetime(2022, 1, 31, 23, 59, 59, 999999)
    >>> _.strftime('%a %d %b %Y, %I:%M%p')
    'Mon 31 Jan 2022, 11:59PM'
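The instance-method/class-method split shown in the comparison table can be seen in a short round trip (a sketch with an arbitrary sample date):

```python
from datetime import datetime

d = datetime(2002, 12, 4, 20, 30)

# strftime() is called on an instance: object -> string.
s = d.strftime("%Y-%m-%d %H:%M")
print(s)                                      # 2002-12-04 20:30

# strptime() is called on the class: string -> object.
d2 = datetime.strptime(s, "%Y-%m-%d %H:%M")
print(d2 == d)                                # True
```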
The following is a list of all the format codes that the 1989 C standard requires; these work on all platforms with a standard C implementation:

    %a    Weekday as locale's abbreviated name.
              Sun, Mon, ..., Sat (en_US); So, Mo, ..., Sa (de_DE)    (1)
    %A    Weekday as locale's full name.
              Sunday, Monday, ..., Saturday (en_US);
              Sonntag, Montag, ..., Samstag (de_DE)    (1)
    %w    Weekday as a decimal number, where 0 is Sunday and 6 is Saturday.
              0, 1, ..., 6
    %d    Day of the month as a zero-padded decimal number.
              01, 02, ..., 31    (9)
    %b    Month as locale's abbreviated name.
              Jan, Feb, ..., Dec (en_US); Jan, Feb, ..., Dez (de_DE)    (1)
    %B    Month as locale's full name.
              January, February, ..., December (en_US);
              Januar, Februar, ..., Dezember (de_DE)    (1)
    %m    Month as a zero-padded decimal number.
              01, 02, ..., 12    (9)
    %y    Year without century as a zero-padded decimal number.
              00, 01, ..., 99    (9)
    %Y    Year with century as a decimal number.
              0001, 0002, ..., 2013, 2014, ..., 9998, 9999    (2)
    %H    Hour (24-hour clock) as a zero-padded decimal number.
              00, 01, ..., 23    (9)
    %I    Hour (12-hour clock) as a zero-padded decimal number.
              01, 02, ..., 12    (9)
    %p    Locale's equivalent of either AM or PM.
              AM, PM (en_US); am, pm (de_DE)    (1), (3)
    %M    Minute as a zero-padded decimal number.
              00, 01, ..., 59    (9)
    %S    Second as a zero-padded decimal number.
              00, 01, ..., 59    (4), (9)
    %f    Microsecond as a decimal number, zero-padded to 6 digits.
              000000, 000001, ..., 999999    (5)
    %z    UTC offset in the form ±HHMM[SS[.ffffff]] (empty string if the
          object is naive).
              (empty), +0000, -0400, +1030, +063415, -030712.345216    (6)
    %Z    Time zone name (empty string if the object is naive).
              (empty), UTC, GMT    (6)
    %j    Day of the year as a zero-padded decimal number.
              001, 002, ..., 366    (9)
    %U    Week number of the year (Sunday as the first day of the week) as
          a zero-padded decimal number. All days in a new year preceding
          the first Sunday are considered to be in week 0.
              00, 01, ..., 53    (7), (9)
    %W    Week number of the year (Monday as the first day of the week) as
          a zero-padded decimal number. All days in a new year preceding
          the first Monday are considered to be in week 0.
              00, 01, ..., 53    (7), (9)
    %c    Locale's appropriate date and time representation.
              Tue Aug 16 21:30:00 1988 (en_US);
              Di 16 Aug 21:30:00 1988 (de_DE)    (1)
    %x    Locale's appropriate date representation.
              08/16/88 (None); 08/16/1988 (en_US); 16.08.1988 (de_DE)    (1)
    %X    Locale's appropriate time representation.
              21:30:00 (en_US); 21:30:00 (de_DE)    (1)
    %%    A literal '%' character.
              %

Several additional directives not required by the C89 standard are included for convenience. These parameters all correspond to ISO 8601 date values.

    %G    ISO 8601 year with century representing the year that contains
          the greater part of the ISO week (%V).
              0001, 0002, ..., 2013, 2014, ..., 9998, 9999    (8)
    %u    ISO 8601 weekday as a decimal number where 1 is Monday.
              1, 2, ..., 7
    %V    ISO 8601 week as a decimal number with Monday as the first day
          of the week. Week 01 is the week containing Jan 4.
              01, 02, ..., 53    (8), (9)
    %:z   UTC offset in the form ±HH:MM[:SS[.ffffff]] (empty string if the
          object is naive).
              (empty), +00:00, -04:00, +10:30, +06:34:15,
              -03:07:12.345216    (6)

These may not be available on all platforms when used with the strftime() method. The ISO 8601 year and ISO 8601 week directives are not interchangeable with the year and week number directives above. Calling strptime() with incomplete or ambiguous ISO 8601 directives will raise a ValueError.

The full set of format codes supported varies across platforms, because Python calls the platform C library's strftime() function, and platform variations are common. To see the full set of format codes supported on your platform, consult the strftime(3) documentation. There are also differences between platforms in handling of unsupported format specifiers.

New in version 3.6: %G, %u and %V were added.

New in version 3.12: %:z was added.

Technical Detail

Broadly speaking, d.strftime(fmt) acts like the time module's time.strftime(fmt, d.timetuple()), although not all objects support a timetuple() method.

For the datetime.strptime() class method, the default value is 1900-01-01T00:00:00.000: any components not specified in the format string will be pulled from the default value. [4]

Using datetime.strptime(date_string, format) is equivalent to:

    datetime(*(time.strptime(date_string, format)[0:6]))

except when the format includes sub-second components or timezone offset information, which are supported in datetime.strptime but are discarded by time.strptime.

For time objects, the format codes for year, month, and day should not be used, as time objects have no such values. If they're used anyway, 1900 is substituted for the year, and 1 for the month and day.

For date objects, the format codes for hours, minutes, seconds, and microseconds should not be used, as date objects have no such values. If they're used anyway, 0 is substituted for them.

For the same reason, handling of format strings containing Unicode code points that can't be represented in the charset of the current locale is also platform-dependent. On some platforms, such code points are preserved intact in the output, while on others strftime may raise UnicodeError or return an empty string instead.

Notes:

1. Because the format depends on the current locale, care should be taken when making assumptions about the output value. Field orderings will vary (for example, "month/day/year" versus "day/month/year"), and the output may contain non-ASCII characters.

2. The strptime() method can parse years in the full [1, 9999] range, but years < 1000 must be zero-filled to 4-digit width.

Changed in version 3.2: In previous versions, strftime() method was restricted to years >= 1900.

Changed in version 3.3: In version 3.2, strftime() method was restricted to years >= 1000.

3. When used with the strptime() method, the %p directive only affects the output hour field if the %I directive is used to parse the hour.

4. Unlike the time module, the datetime module does not support leap seconds.

5. When used with the strptime() method, the %f directive accepts from one to six digits and zero-pads on the right. %f is an extension to the set of format characters in the C standard (but implemented separately in datetime objects, and therefore always available).

6. For a naive object, the %z, %:z and %Z format codes are replaced by empty strings.

For an aware object:

%z
    utcoffset() is transformed into a string of the form ±HHMM[SS[.ffffff]], where HH is a 2-digit string giving the number of UTC offset hours, MM is a 2-digit string giving the number of UTC offset minutes, SS is a 2-digit string giving the number of UTC offset seconds, and ffffff is a 6-digit string giving the number of UTC offset
microseconds. The ffffff part is omitted when the offset is a whole number of seconds, and both the ffffff and the SS part is omitted when the offset is a whole number of minutes. For example, if utcoffset() returns timedelta(hours=-3, minutes=-30), %z is replaced with the string '-0330'.

Changed in version 3.7: The UTC offset is not restricted to a whole number of minutes.

Changed in version 3.7: When the %z directive is provided to the strptime() method, the UTC offsets can have a colon as a separator between hours, minutes and seconds. For example, '+01:00:00' will be parsed as an offset of one hour. In addition, providing 'Z' is identical to '+00:00'.

%:z
    Behaves exactly as %z, but has a colon separator added between hours, minutes and seconds.

%Z
    In strftime(), %Z is replaced by an empty string if tzname() returns None; otherwise %Z is replaced by the returned value, which must be a string.

    strptime() only accepts certain values for %Z:

    1. any value in time.tzname for your machine's locale
    2. the hard-coded values UTC and GMT

    So someone living in Japan may have JST, UTC, and GMT as valid values, but probably not EST. It will raise ValueError for invalid values.

Changed in version 3.2: When the %z directive is provided to the strptime() method, an aware datetime object will be produced. The tzinfo of the result will be set to a timezone instance.

7. When used with the strptime() method, %U and %W are only used in calculations when the day of the week and the calendar year (%Y) are specified.

8. Similar to %U and %W, %V is only used in calculations when the day of the week and the ISO year (%G) are specified in a strptime() format string. Also note that %G and %Y are not interchangeable.

9. When used with the strptime() method, the leading zero is optional for formats %d, %m, %H, %I, %M, %S, %j, %U, %W, and %V. Format %y does require a leading zero.

Footnotes

1. If, that is, we ignore the effects of Relativity.

2. This matches the definition of the "proleptic Gregorian" calendar in Dershowitz and Reingold's book Calendrical Calculations, where it's the base calendar for all computations. See the book for algorithms for converting between proleptic Gregorian ordinals and many other calendar systems.

3. See R. H. van Gent's guide to the mathematics of the ISO 8601 calendar for a good explanation.

4. Passing datetime.strptime('Feb 29', '%b %d') will fail since 1900 is not a leap year.
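A few of the notes above can be checked directly (a sketch; the sample timestamps are arbitrary):

```python
from datetime import datetime, timedelta, timezone

# Note 6: %z renders an aware object's offset as +/-HHMM.
d = datetime(2016, 11, 6, 1, 30,
             tzinfo=timezone(timedelta(hours=-3, minutes=-30)))
print(d.strftime("%z"))        # -0330

# Changed in 3.7: strptime() accepts a colon inside the %z offset.
aware = datetime.strptime("2016-11-06 01:30 -03:30",
                          "%Y-%m-%d %H:%M %z")
print(aware.utcoffset())       # -1 day, 20:30:00  (i.e. -3:30)

# Notes 8-9: the ISO directives %G, %V and %u are used together.
iso = datetime.strptime("2004-W53-6", "%G-W%V-%u")
print(iso.date())              # 2005-01-01
```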