stratum,class,comment,Summary,Usage,Parameters,Expand,Version,DevelopmentNotes,Todo,Exception,Links,Noise,Warnings,Recommendation,Dependecies,Precondition,CodingGuidelines,Extension,Subclassexplnation,Observation 1,AccessMixin,"Abstract CBV mixin that gives access mixins the same customizable functionality.","Abstract CBV mixin that gives access mixins the same customizable functionality.",,,,,,,,,,,,,,,,, 1,AmbiguityError,More than one migration matches a name prefix.,More than one migration matches a name prefix.,,,,,,,,,,,,,,,,, 1,AppConfigStub,Stub of an AppConfig. Only provides a label and a dict of models.,Stub of an AppConfig.,,Only provides a label and a dict of models.,,,,,,,,,,,,,,, 1,Archive,The external API class that encapsulates an archive implementation.,The external API class that encapsulates an archive implementation.,,,,,The external API class that encapsulates an archive implementation.,,,,,,,,,,,, 1,ArchiveIndexView,Top-level archive of date-based items.,Top-level archive of date-based items.,,,,,,,,,,,,,,,,, 1,Atomic,"Guarantee the atomic execution of a given block. An instance can be used either as a decorator or as a context manager. When it's used as a decorator, __call__ wraps the execution of the decorated function in the instance itself, used as a context manager. When it's used as a context manager, __enter__ creates a transaction or a savepoint, depending on whether a transaction is already in progress, and __exit__ commits the transaction or releases the savepoint on normal exit, and rolls back the transaction or to the savepoint on exceptions. It's possible to disable the creation of savepoints if the goal is to ensure that some code runs within a transaction without creating overhead. A stack of savepoints identifiers is maintained as an attribute of the connection. None denotes the absence of a savepoint. This allows reentrancy even if the same AtomicWrapper is reused. For example, it's possible to define `oa = atomic('other')` and use `@oa` or `with oa:` multiple times. Since database connections are thread-local, this is thread-safe. This is a private API.",Guarantee the atomic execution of a given block.,"When it's used as a decorator, __call__ wraps the execution of the decorated function in the instance itself, used as a context manager. When it's used as a context manager, __enter__ creates a transaction or a savepoint, depending on whether a transaction is already in progress, and __exit__ commits the transaction or releases the savepoint on normal exit, and rolls back the transaction or to the savepoint on exceptions. It's possible to disable the creation of savepoints if the goal is to ensure that some code runs within a transaction without creating overhead. A stack of savepoints identifiers is maintained as an attribute of the connection. None denotes the absence of a savepoint. This allows reentrancy even if the same AtomicWrapper is reused.",,"An instance can be used either as a decorator or as a context manager.When it's used as a decorator, __call__ wraps the execution of the decorated function in the instance itself, used as a context manager.",,,,,,,"This is a private API. Since database connections are thread-local, this is thread-safe.",,,,,,, 1,AtomicTests,"Tests for the atomic decorator and context manager. The tests make assertions on internal attributes because there isn't a robust way to ask the database for its current transaction state. Since the decorator syntax is converted into a context manager (see the implementation), there are only a few basic tests with the decorator syntax and the bulk of the tests use the context manager syntax.",Tests for the atomic decorator and context manager.,,,"The tests make assertions on internal attributes because there isn't a robust way to ask the database for its current transaction state. Since the decorator syntax is converted into a context manager (see the implementation), there are only a few basic tests with the decorator syntax and the bulk of the tests use the context manager syntax.",,,,,,,,,,,,,, 1,AutoFieldMeta,"Metaclass to maintain backward inheritance compatibility for AutoField. It is intended that AutoFieldMixin become public API when it is possible to create a non-integer automatically-generated field using column defaults stored in the database. In many areas Django also relies on using isinstance() to check for an automatically-generated field as a subclass of AutoField. A new flag needs to be implemented on Field to be used instead. When these issues have been addressed, this metaclass could be used to deprecate inheritance from AutoField and use of isinstance() with AutoField for detecting automatically-generated fields.",Metaclass to maintain backward inheritance compatibility for AutoField.,"In many areas Django also relies on using isinstance() to check for an automatically-generated field as a subclass of AutoField. A new flag needs to be implemented on Field to be used instead.",,"It is intended that AutoFieldMixin become public API when it is possible to create a non-integer automatically-generated field using column defaults stored in the database.",,"When these issues have been addressed, this metaclass could be used to deprecate inheritance from AutoField and use of isinstance() with AutoField for detecting automatically-generated fields.",,,,,,"A new flag needs to be implemented on Field to be used instead. When these issues have been addressed, this metaclass could be used to deprecate inheritance from AutoField and use of isinstance() with AutoField for detecting automatically-generated fields.","In many areas Django also relies on using isinstance() to check for an automatically-generated field as a subclass of AutoField",,,,, 1,BadSignature,Signature does not match.,Signature does not match.,,,,,,,,,,,,,,,,, 1,BarAccount,A service-specific account of type Bar.,A service-specific account of type Bar.,,,,,,,,,,,,,,,,, 1,BaseCommand,"The base class from which all management commands ultimately derive. Use this class if you want access to all of the mechanisms which parse the command-line arguments and work out what code to call in response; if you don't need to change any of that behavior, consider using one of the subclasses defined in this file. If you are interested in overriding/customizing various aspects of the command-parsing and -execution behavior, the normal flow works as follows: 1. ``django-admin`` or ``manage.py`` loads the command class and calls its ``run_from_argv()`` method. 2. The ``run_from_argv()`` method calls ``create_parser()`` to get an ``ArgumentParser`` for the arguments, parses them, performs any environment changes requested by options like ``pythonpath``, and then calls the ``execute()`` method, passing the parsed arguments. 3. The ``execute()`` method attempts to carry out the command by calling the ``handle()`` method with the parsed arguments; any output produced by ``handle()`` will be printed to standard output and, if the command is intended to produce a block of SQL statements, will be wrapped in ``BEGIN`` and ``COMMIT``. 4. If ``handle()`` or ``execute()`` raised any exception (e.g. ``CommandError``), ``run_from_argv()`` will instead print an error message to ``stderr``. Thus, the ``handle()`` method is typically the starting point for subclasses; many built-in commands and command types either place all of their logic in ``handle()``, or perform some additional parsing work in ``handle()`` and then delegate from it to more specialized methods as needed. Several attributes affect behavior at various steps along the way: ``help`` A short description of the command, which will be printed in help messages. ``output_transaction`` A boolean indicating whether the command outputs SQL statements; if ``True``, the output will automatically be wrapped with ``BEGIN;`` and ``COMMIT;``. Default value is ``False``. ``requires_migrations_checks`` A boolean; if ``True``, the command prints a warning if the set of migrations on disk don't match the migrations in the database. ``requires_system_checks`` A boolean; if ``True``, entire Django project will be checked for errors prior to executing the command. Default value is ``True``. To validate an individual application's models rather than all applications' models, call ``self.check(app_configs)`` from ``handle()``, where ``app_configs`` is the list of application's configuration provided by the app registry. ``stealth_options`` A tuple of any options the command uses which aren't defined by the argument parser.","The base class from which all management commands ultimately derive.","If you are interested in overriding/customizing various aspects of the command-parsing and -execution behavior, the normal flow works as follows: 1. ``django-admin`` or ``manage.py`` loads the command class and calls its ``run_from_argv()`` method. 2. The ``run_from_argv()`` method calls ``create_parser()`` to get an ``ArgumentParser`` for the arguments, parses them, performs any environment changes requested by options like ``pythonpath``, and then calls the ``execute()`` method, passing the parsed arguments. 3. The ``execute()`` method attempts to carry out the command by calling the ``handle()`` method with the parsed arguments; any output produced by ``handle()`` will be printed to standard output and, if the command is intended to produce a block of SQL statements, will be wrapped in ``BEGIN`` and ``COMMIT``. 4. If ``handle()`` or ``execute()`` raised any exception (e.g. ``CommandError``), ``run_from_argv()`` will instead print an error message to ``stderr``.","Several attributes affect behavior at various steps along the way: ``help`` A short description of the command, which will be printed in help messages. ``output_transaction`` A boolean indicating whether the command outputs SQL statements; if ``True``, the output will automatically be wrapped with ``BEGIN;`` and ``COMMIT;``. Default value is ``False``. ``requires_migrations_checks`` A boolean; if ``True``, the command prints a warning if the set of migrations on disk don't match the migrations in the database. ``requires_system_checks`` A boolean; if ``True``, entire Django project will be checked for errors prior to executing the command. Default value is ``True``. To validate an individual application's models rather than all applications' models, call ``self.check(app_configs)`` from ``handle()``, where ``app_configs`` is the list of application's configuration provided by the app registry.",,,"Use this class if you want access to all of the mechanisms which parse the command-line arguments and work out what code to call in response; if you don't need to change any of that behavior, consider using one of the subclasses defined in this file. Thus, the ``handle()`` method is typically the starting point for subclasses; many built-in commands and command types either place all of their logic in ``handle()``, or perform some additional parsing work in ``handle()`` and then delegate from it to more specialized methods as needed.",,,,,,"Use this class if you want access to all of the mechanisms which parse the command-line arguments and work out what code to call in response; if you don't need to change any of that behavior, consider using one of the subclasses defined in this file.",,,,,"Use this class if you want access to all of the mechanisms which parse the command-line arguments and work out what code to call in response; if you don't need to change any of that behavior, consider using one of the subclasses defined in this file.", 1,BaseDatabaseSchemaEditor,"This class and its subclasses are responsible for emitting schema-changing statements to the databases - model creation/removal/alteration, field renaming, index fiddling, and so on.","This class and its subclasses are responsible for emitting schema-changing statements to the databases - model creation/removal/alteration, field renaming, index fiddling, and so on.",,,,,"This class and its subclasses are responsible for emitting schema-changing statements to the databases - model creation/removal/alteration, field renaming, index fiddling, and so on.",,,,,,,,,,,, 1,BaseExpression,Base class for all query expressions.,Base class for all query expressions.,,,,,,,,,,,,,,,,, 1,BaseUpdateView,"Base view for updating an existing object. Using this base class requires subclassing to provide a response mixin.",Base view for updating an existing object.,,,,,Using this base class requires subclassing to provide a response mixin.,,,,,,,,,,,, 1,BaseYearArchiveView,List of objects published in a given year.,List of objects published in a given year.,,,,,,,,,,,,,,,,, 1,BCryptSHA256PasswordHasher,"Secure password hashing using the bcrypt algorithm (recommended) This is considered by many to be the most secure algorithm but you must first install the bcrypt library. Please be warned that this library depends on native C code and might cause portability issues.",Secure password hashing using the bcrypt algorithm (recommended),"This is considered by many to be the most secure algorithm but you must first install the bcrypt library.",,,,,,,,,"Please be warned that this library depends on native C code and might cause portability issues.",,,,,,, 1,BoundWidget,"A container class used for iterating over widgets. This is useful for widgets that have choices. For example, the following can be used in a template: {% for radio in myform.beatles %} {% endfor %}",A container class used for iterating over widgets.,"For example, the following can be used in a template: {% for radio in myform.beatles %} {% endfor %}",,,,"This is useful for widgets that have choices.",,,,,,,,,,,, 1,CacheHandler,"A Cache Handler to manage access to Cache instances. Ensure only one instance of each alias exists per thread.",A Cache Handler to manage access to Cache instances.,,,Ensure only one instance of each alias exists per thread.,,,,,,,,,,,,,, 1,Choices,Class for creating enumerated choices.,Class for creating enumerated choices.,,Class for creating enumerated choices.,,,,,,,,,,,,,,, 1,ChunkIter,"An iterable that will yield chunks of data. Given a file-like object as the constructor, yield chunks of read operations from that object.",An iterable that will yield chunks of data.,,,"Given a file-like object as the constructor, yield chunks of read operations from that object.",,,,,,,,,,,,,, 1,Client,"A class that can act as a client for testing purposes. It allows the user to compose GET and POST requests, and obtain the response that the server gave to those requests. The server Response objects are annotated with the details of the contexts and templates that were rendered during the process of serving the request. Client objects are stateful - they will retain cookie (and thus session) details for the lifetime of the Client instance. This is not intended as a replacement for Twill/Selenium or the like - it is here to allow testing against the contexts and templates produced by a view, rather than the HTML rendered to the end-user.",A class that can act as a client for testing purposes.,,,"It allows the user to compose GET and POST requests, and obtain the response that the server gave to those requests. The server Response objects are annotated with the details of the contexts and templates that were rendered during the process of serving the request. Client objects are stateful - they will retain cookie (and thus session) details for the lifetime of the Client instance.",,"This is not intended as a replacement for Twill/Selenium or the like - it is here to allow testing against the contexts and templates produced by a view, rather than the HTML rendered to the end-user.",,,,,"This is not intended as a replacement for Twill/Selenium or the like - it is here to allow testing against the contexts and templates produced by a view, rather than the HTML rendered to the end-user.",,,,,,, 1,Combinable,"Provide the ability to combine one or two objects with some connector. For example F('foo') + F('bar').","Provide the ability to combine one or two objects with some connector.",For example F('foo') + F('bar').,,,,,,,,,,,,,,,, 1,ConsoleDirective,"A reStructuredText directive which renders a two-tab code block in which the second tab shows a Windows command line equivalent of the usual Unix-oriented examples.","A reStructuredText directive which renders a two-tab code block in which the second tab shows a Windows command line equivalent of the usual Unix-oriented examples.",,,,,,,,,,,,,,,,, 1,Context,A stack container for variable context,A stack container for variable context,,,,,,,,,,,,,,,,, 1,CryptPasswordHasher,"Password hashing using UNIX crypt (not recommended) The crypt module is not supported on all platforms.",Password hashing using UNIX crypt (not recommended),,,,,The crypt module is not supported on all platforms.,,,,,,Password hashing using UNIX crypt (not recommended),,,,,, 1,CustomArticleAdmin,Tests various hooks for using custom templates and contexts.,Tests various hooks for using custom templates and contexts.,,,,,,,,,,,,,,,,, 1,CustomCacheKeyValidationTests,"Tests for the ability to mixin a custom ``validate_key`` method to a custom cache backend that otherwise inherits from a builtin backend, and override the default key validation. Refs #6447.","Tests for the ability to mixin a custom ``validate_key`` method to a custom cache backend",,,"that otherwise inherits from a builtin backend, and override the default key validation.",,,,,Refs #6447.,,,,,,,,, 1,CustomHeaderRemoteUserTest,"Tests a custom RemoteUserMiddleware subclass with custom HTTP auth user header.",Tests a custom RemoteUserMiddleware subclass,,Tests a custom RemoteUserMiddleware subclass with custom HTTP auth user,,,,,,,,,,,,,,Tests a custom RemoteUserMiddleware subclass with custom HTTP auth user, 1,DatabaseReceiver,Used in the tests for the database argument in signals (#13552),Used in the tests for the database argument in signals (#13552),,,,,,,,,,,,,,,,, 1,DblFromGeom,"Argument is a Geometry, return type is double that is passed in by reference as the last argument.",,,"Argument is a Geometry, return type is double that is passed in by reference as the last argument",,,,,,,,,,,,,,, 1,DisallowedModelAdminToField,Invalid to_field was passed to admin view via URL query string,Invalid to_field was passed to admin view via URL query string,,,,,,,,,,,,,,,,, 1,DjangoHTMLTranslator,Django-specific reST to HTML tweaks.,Django-specific reST to HTML tweaks.,,,,,,,,,,,,,,,,, 1,Dumpdata,Tests for dumpdata management command.,Tests for dumpdata management command.,,,,,,,,,,,,,,,,, 1,EarliestOrLatestTests,Tests for the earliest() and latest() objects methods,Tests for the earliest() and latest() objects methods,,,,,,,,,,,,,,,,, 1,EmptyStringsAsNullTest,"Filtering on non-null character fields works as expected. The reason for these tests is that Oracle treats '' as NULL, and this can cause problems in query construction. Refs #17957.",,,,,,,,,Refs #17957.,,,,,,,,, 1,ErrorDict,"A collection of errors that knows how to display itself in various formats. The dictionary keys are the field names, and the values are the errors.",A collection of errors that knows how to display itself in various formats.,"The dictionary keys are the field names, and the values are the errors.",,,,,,,,,,,,,,,, 1,ExceptionThatFailsUnpickling,"After pickling, this class fails unpickling with an error about incorrect arguments passed to __init__().","After pickling, this class fails unpickling with an error about incorrect arguments passed to __init__().",,,,,,,,,,,,,,,,, 1,FakePayload,"A wrapper around BytesIO that restricts what can be read since data from the network can't be sought and cannot be read outside of its content length. This makes sure that views can't do anything under the test client that wouldn't work in real life.","A wrapper around BytesIO that restricts what can be read since data from the network can't be sought and cannot be read outside of its content length.",,,"This makes sure that views can't do anything under the test client that wouldn't work in real life.",,,,,,,,,,,,,, 1,FallbackStorage,"Try to store all messages in the first backend. Store any unstored messages in each subsequent backend.",,,,"Try to store all messages in the first backend. Store any unstored messages in each subsequent backend.",,,,,,,,,,,,,, 1,FootNote,Model added for ticket 19838,,,,,,Model added for ticket 19838,,,,,,,,,,,, 1,FrenchTestCase,Tests using the French translations of the sampleproject.,Tests using the French translations of the sampleproject.,,,,,,,,,,,,,,,,, 1,GeoFlexibleFieldLookupDict,"Subclass that includes updates the `base_data_types_reverse` dict for geometry field types.",,,,,,,,,,,,,,,,,"Subclass that includes updates the `base_data_types_reverse` dict for geometry field types.", 1,Group,Table Column Fields,Table Column Fields,,,,,,,,,,,,,,,,, 1,GZipMiddleware,"Compress content if the browser allows gzip compression. Set the Vary header accordingly, so that caches will base their storage on the Accept-Encoding header.","Compress content if the browser allows gzip compression.Set the Vary header accordingly, so that caches will base their storage on the Accept-Encoding header.",,,,,,,,,,,,,,,,, 1,HiddenRangeWidget,"A widget that splits input into two inputs.","A widget that splits input into two inputs.",,,,,,,,,,,,,,,,, 1,ImageFileDescriptor,"Just like the FileDescriptor, but for ImageFields. The only difference is assigning the width/height to the width_field/height_field, if appropriate.","Just like the FileDescriptor, but for ImageFields.",,,"The only difference is assigning the width/height to the width_field/height_field, if appropriate.",,,,,,,,,,,,,, 1,IncompleteCategoryFormWithExclude,"A form that replaces the model's url field with a custom one. This should prevent the model field's validation from being called.",A form that replaces the model's url field with a custom one.,,,,,,,,,,"This should prevent the model field's validation from being called.",,,,,,, 1,Individual,"A model with a FK to itself. It won't be registered with the admin, so the corresponding raw ID widget won't have a magnifying glass link to select related instances (rendering will be called programmatically in this case).",A model with a FK to itself.,"It won't be registered with the admin, so the corresponding raw ID widget won't have a magnifying glass link to select related instances (rendering will be called programmatically in this case).",,,,"It won't be registered with the admin, so the corresponding raw ID widget won't have a magnifying glass link to select related instances (rendering will be called programmatically in this case).",,,,,,,,,,,, 1,InputStreamExhausted,No more reads are allowed from this device.,No more reads are allowed from this device.,,,No more reads are allowed from this device.,,,,,,,,,,,,,, 1,IntFromGeom,"Argument is a geometry, return type is an integer.",,,"Argument is a geometry, return type is an integer.",,,"Argument is a geometry, return type is an integer.",,,,,,,,,,,, 1,InvalidBasesError,A model's base classes can't be resolved.,A model's base classes can't be resolved.,,,,,,,,,,,,,,,,, 1,KMLSitemap,A minimal hook to produce KML sitemaps.,A minimal hook to produce KML sitemaps.,,,,,,,,,,,,,,,,, 1,ListMixin,"A base class which provides complete list interface. Derived classes must call ListMixin's __init__() function and implement the following: function _get_single_external(self, i): Return single item with index i for general use. The index i will always satisfy 0 <= i < len(self). function _get_single_internal(self, i): Same as above, but for use within the class [Optional] Note that if _get_single_internal and _get_single_internal return different types of objects, _set_list must distinguish between the two and handle each appropriately. function _set_list(self, length, items): Recreate the entire object. NOTE: items may be a generator which calls _get_single_internal. Therefore, it is necessary to cache the values in a temporary: temp = list(items) before clobbering the original storage. function _set_single(self, i, value): Set the single item at index i to value [Optional] If left undefined, all mutations will result in rebuilding the object using _set_list. function __len__(self): Return the length int _minlength: The minimum legal length [Optional] int _maxlength: The maximum legal length [Optional] type or tuple _allowed: A type or tuple of allowed item types [Optional]",A base class which provides complete list interface.,,,"Derived classes must call ListMixin's __init__() function and implement the following: function _get_single_external(self, i): Return single item with index i for general use. The index i will always satisfy 0 <= i < len(self). function _get_single_internal(self, i): Same as above, but for use within the class [Optional] Note that if _get_single_internal and _get_single_internal return different types of objects, _set_list must distinguish between the two and handle each appropriately. function _set_list(self, length, items): Recreate the entire object. NOTE: items may be a generator which calls _get_single_internal. Therefore, it is necessary to cache the values in a temporary: temp = list(items) before clobbering the original storage. function _set_single(self, i, value): Set the single item at index i to value [Optional] If left undefined, all mutations will result in rebuilding the object using _set_list. function __len__(self): Return the length int _minlength: The minimum legal length [Optional] int _maxlength: The maximum legal length [Optional] type or tuple _allowed: A type or tuple of allowed item types [Optional]",,,,,,,,,,,,,, 1,LogoutThenLoginTests,Tests for the logout_then_login view,Tests for the logout_then_login view,,,,,,,,,,,,,,,,, 1,MakeListTests,"The make_list filter can destroy existing escaping, so the results are escaped.",,,,"The make_list filter can destroy existing escaping, so the results are escaped.",,,,,,,,,,,,,, 1,ManagementForm,"Keep track of how many form instances are displayed on the page. If adding new forms via JavaScript, you should increment the count field of this form as well.",Keep track of how many form instances are displayed on the page.,,,,,,,,,,,"If adding new forms via JavaScript, you should increment the count field of this form as well.",,,,,, 1,MemcachedCache,An implementation of a cache binding using python-memcached,An implementation of a cache binding using python-memcached,,,,,,,,,,,,,,,,, 1,MemoryFileUploadHandler,File upload handler to stream uploads into memory (used for small files).,File upload handler to stream uploads into memory (used for small files).,,,,,,,,,,,,,,,,,(used for small files) 1,Migration,"The base class for all migrations. Migration files will import this from django.db.migrations.Migration and subclass it as a class called Migration. It will have one or more of the following attributes: - operations: A list of Operation instances, probably from django.db.migrations.operations - dependencies: A list of tuples of (app_path, migration_name) - run_before: A list of tuples of (app_path, migration_name) - replaces: A list of migration_names Note that all migrations come out of migrations and into the Loader or Graph as instances, having been initialized with their app label and name.",The base class for all migrations.,"It will have one or more of the following attributes: - operations: A list of Operation instances, probably from django.db.migrations.operations - dependencies: A list of tuples of (app_path, migration_name) - run_before: A list of tuples of (app_path, migration_name) - replaces: A list of migration_names","It will have one or more of the following attributes: - operations: A list of Operation instances, probably from django.db.migrations.operations - dependencies: A list of tuples of (app_path, migration_name) - run_before: A list of tuples of (app_path, migration_name) - replaces: A list of migration_names",,,,,,,,"Note that all migrations come out of migrations and into the Loader or Graph as instances, having been initialized with their app label and name.",,,"Note that all migrations come out of migrations and into the Loader or Graph as instances, having been initialized with their app label and name.",,,"Migration files will import this from django.db.migrations.Migration and subclass it as a class called Migration.", 1,MigrationGraph,"Represent the digraph of all migrations in a project. Each migration is a node, and each dependency is an edge. There are no implicit dependencies between numbered migrations - the numbering is merely a convention to aid file listing. Every new numbered migration has a declared dependency to the previous number, meaning that VCS branch merges can be detected and resolved. Migrations files can be marked as replacing another set of migrations - this is to support the ""squash"" feature. The graph handler isn't responsible for these; instead, the code to load them in here should examine the migration files and if the replaced migrations are all either unapplied or not present, it should ignore the replaced ones, load in just the replacing migration, and repoint any dependencies that pointed to the replaced migrations to point to the replacing one. A node should be a tuple: (app_path, migration_name). The tree special-cases things within an app - namely, root nodes and leaf nodes ignore dependencies to other apps.",Represent the digraph of all migrations in a project.,"Migrations files can be marked as replacing another set of migrations - this is to support the ""squash"" feature. The graph handler isn't responsible for these; instead, the code to load them in here should examine the migration files and if the replaced migrations are all either unapplied or not present, it should ignore the replaced ones, load in just the replacing migration, and repoint any dependencies that pointed to the replaced migrations to point to the replacing one.","A node should be a tuple: (app_path, migration_name).",,,"Each migration is a node, and each dependency is an edge. There are no implicit dependencies between numbered migrations - the numbering is merely a convention to aid file listing.",,,,,"The tree special-cases things within an app - namely, root nodes and leaf nodes ignore dependencies to other apps.","The graph handler isn't responsible for these; instead, the code to load them in here should examine the migration files and if the replaced migrations are all either unapplied or not present, it should ignore the replaced ones, load in just the replacing migration, and repoint any dependencies that pointed to the replaced migrations to point to the replacing one.",,,,,,"Every new numbered migration has a declared dependency to the previous number, meaning that VCS branch merges can be detected and resolved." 1,MigrationLoader,"Load migration files from disk and their status from the database. Migration files are expected to live in the ""migrations"" directory of an app. Their names are entirely unimportant from a code perspective, but will probably follow the 1234_name.py convention. On initialization, this class will scan those directories, and open and read the Python files, looking for a class called Migration, which should inherit from django.db.migrations.Migration. See django.db.migrations.migration for what that looks like. Some migrations will be marked as ""replacing"" another set of migrations. These are loaded into a separate set of migrations away from the main ones. If all the migrations they replace are either unapplied or missing from disk, then they are injected into the main set, replacing the named migrations. Any dependency pointers to the replaced migrations are re-pointed to the new migration. This does mean that this class MUST also talk to the database as well as to disk, but this is probably fine. We're already not just operating in memory.",Load migration files from disk and their status from the database.,"This does mean that this class MUST also talk to the database as well as to disk, but this is probably fine. We're already not just operating in memory.",,"On initialization, this class will scan those directories, and open and read the Python files, looking for a class called Migration, which should inherit from django.db.migrations.Migration. See django.db.migrations.migration for what that looks like. Some migrations will be marked as ""replacing"" another set of migrations. These are loaded into a separate set of migrations away from the main ones. If all the migrations they replace are either unapplied or missing from disk, then they are injected into the main set, replacing the named migrations. Any dependency pointers to the replaced migrations are re-pointed to the new migration.",,,,,,,,"Migration files are expected to live in the ""migrations"" directory of an app. Their names are entirely unimportant from a code perspective, but will probably follow the 1234_name.py convention.",,,,,, 1,MigrationQuestioner,"Give the autodetector responses to questions it might have. This base class has a built-in noninteractive mode, but the interactive subclass is what the command-line arguments will use.",Give the autodetector responses to questions it might have.,"This base class has a built-in noninteractive mode, but the interactive subclass is what the command-line arguments will use.",,,,"This base class has a built-in noninteractive mode, but the interactive subclass is what the command-line arguments will use.",,,,,,,,,,,, 1,MigrationWriter,"Take a Migration instance and is able to produce the contents of the migration file from it.","Take a Migration instance and is able to produce the contents of the migration file from it.",,,,,,,,,,,,,,,,, 1,ModelBase,Metaclass for all models.,Metaclass for all models.,,,,,,,,,,,,,,,,, 1,ModelSignal,"Signal subclass that allows the sender to be lazily specified as a string of the `app_label.ModelName` form.","Signal subclass that allows the sender to be lazily specified as a string of the `app_label.ModelName` form.",,,,,,,,,,,,,,,,"Signal subclass that allows the sender to be lazily specified as a string of the `app_label.ModelName` form.", 1,MultiPartParser,"A rfc2388 multipart/form-data parser. ``MultiValueDict.parse()`` reads the input stream in ``chunk_size`` chunks and returns a tuple of ``(MultiValueDict(POST), MultiValueDict(FILES))``.",A rfc2388 multipart/form-data parser.,,,,,"``MultiValueDict.parse()`` reads the input stream in ``chunk_size`` chunks and returns a tuple of ``(MultiValueDict(POST), MultiValueDict(FILES))``.",,,,,,,,,,,, 1,MultiValueDict,"A subclass of dictionary customized to handle multiple values for the same key. >>> d = MultiValueDict({'name': ['Adrian', 'Simon'], 'position': ['Developer']}) >>> d['name'] 'Simon' >>> d.getlist('name') ['Adrian', 'Simon'] >>> d.getlist('doesnotexist') [] >>> d.getlist('doesnotexist', ['Adrian', 'Simon']) ['Adrian', 'Simon'] >>> d.get('lastname', 'nonexistent') 'nonexistent' >>> d.setlist('lastname', ['Holovaty', 'Willison']) This class exists to solve the irritating problem raised by cgi.parse_qs, which returns a list for every key, even though most Web forms submit single name-value pairs.","A subclass of dictionary customized to handle multiple values for the same key.",">>> d = MultiValueDict({'name': ['Adrian', 'Simon'], 'position': ['Developer']}) >>> d['name'] 'Simon' >>> d.getlist('name') ['Adrian', 'Simon'] >>> d.getlist('doesnotexist') [] >>> d.getlist('doesnotexist', ['Adrian', 'Simon']) ['Adrian', 'Simon'] >>> d.get('lastname', 'nonexistent') 'nonexistent' >>> d.setlist('lastname', ['Holovaty', 'Willison'])",,"This class exists to solve the irritating problem raised by cgi.parse_qs, which returns a list for every key, even though most Web forms submit single name-value pairs.",,"This class exists to solve the irritating problem raised by cgi.parse_qs, which returns a list for every key, even though most Web forms submit single name-value pairs.",,,,,,,,,,,, 1,MultiValueField,"Aggregate the logic of multiple Fields. Its clean() method takes a ""decompressed"" list of values, which are then cleaned into a single value according to self.fields. Each value in this list is cleaned by the corresponding field -- the first value is cleaned by the first field, the second value is cleaned by the second field, etc. Once all fields are cleaned, the list of clean values is ""compressed"" into a single value. Subclasses should not have to implement clean(). Instead, they must implement compress(), which takes a list of valid values and returns a ""compressed"" version of those values -- a single value. You'll probably want to use this with MultiWidget.",Aggregate the logic of multiple Fields.,,,"Its clean() method takes a ""decompressed"" list of values, which are then cleaned into a single value according to self.fields. Each value in this list is cleaned by the corresponding field -- the first value is cleaned by the first field, the second value is cleaned by the second field, etc. Once all fields are cleaned, the list of clean values is ""compressed"" into a single value.",,,,,,,,You'll probably want to use this with MultiWidget.,,,,,"Subclasses should not have to implement clean(). Instead, they must implement compress(), which takes a list of valid values and returns a ""compressed"" version of those values -- a single value.", 1,MultiWidget,"A widget that is composed of multiple widgets. In addition to the values added by Widget.get_context(), this widget adds a list of subwidgets to the context as widget['subwidgets']. These can be looped over and rendered like normal widgets. You'll probably want to use this class with MultiValueField.",A widget that is composed of multiple widgets.,,,"In addition to the values added by Widget.get_context(), this widget adds a list of subwidgets to the context as widget['subwidgets']. These can be looped over and rendered like normal widgets.",,,,,,,,You'll probably want to use this class with MultiValueField.,,,,,, 1,MyModel,Model subclass with a custom base using metaclass.,Model subclass with a custom base using metaclass.,,,,,,,,,,,,,,,,, 1,NestedObjectsTests,Tests for ``NestedObject`` utility collection.,Tests for ``NestedObject`` utility collection.,,,,,,,,,,,,,,,,, 1,Operation,"Base class for migration operations. It's responsible for both mutating the in-memory model state (see db/migrations/state.py) to represent what it performs, as well as actually performing it against a live database. Note that some operations won't modify memory state at all (e.g. data copying operations), and some will need their modifications to be optionally specified by the user (e.g. custom Python code snippets) Due to the way this class deals with deconstruction, it should be considered immutable.",Base class for migration operations.,,,"It's responsible for both mutating the in-memory model state (see db/migrations/state.py) to represent what it performs, as well as actually performing it against a live database.",,,,,,,,,,"Due to the way this class deals with deconstruction, it should be considered immutable.",,,,"Note that some operations won't modify memory state at all (e.g. data copying operations), and some will need their modifications to be optionally specified by the user (e.g. custom Python code snippets)" 1,OverwritingStorage,"Overwrite existing files instead of appending a suffix to generate an unused name.","Overwrite existing files instead of appending a suffix to generate an unused name.",,,,,,,,,,,,,,,,, 1,ParentWithDependentChildren,"Issue #20522 Model where the validation of child foreign-key relationships depends on validation of the parent","Model where the validation of child foreign-key relationships depends on validation of the parent",,,,,,,,Issue #20522,,,,"Issue #20522 Model where the validation of child foreign-key relationships depends on validation of the parent",,,,, 1,Permission,"The permissions system provides a way to assign permissions to specific users and groups of users. The permission system is used by the Django admin site, but may also be useful in your own code. The Django admin site uses permissions as follows: - The ""add"" permission limits the user's ability to view the ""add"" form and add an object. - The ""change"" permission limits a user's ability to view the change list, view the ""change"" form and change an object. - The ""delete"" permission limits the ability to delete an object. - The ""view"" permission limits the ability to view an object. Permissions are set globally per type of object, not per specific object instance. It is possible to say ""Mary may change news stories,"" but it's not currently possible to say ""Mary may change news stories, but only the ones she created herself"" or ""Mary may only change news stories that have a certain status or publication date."" The permissions listed above are automatically created for each model.","The permissions system provides a way to assign permissions to specific users and groups of users.","The permission system is used by the Django admin site, but may also be useful in your own code. The Django admin site uses permissions as follows: - The ""add"" permission limits the user's ability to view the ""add"" form and add an object. - The ""change"" permission limits a user's ability to view the change list, view the ""change"" form and change an object. - The ""delete"" permission limits the ability to delete an object. - The ""view"" permission limits the ability to view an object.",,The permissions listed above are automatically created for each model.,,,,,,,,,,,,,,"Permissions are set globally per type of object, not per specific object instance. It is possible to say ""Mary may change news stories,"" but it's not currently possible to say ""Mary may change news stories, but only the ones she created herself"" or ""Mary may only change news stories that have a certain status or publication date.""" 1,PermissionDeniedBackendTest,Other backends are not checked once a backend raises PermissionDenied,,,,,,,,,,,Other backends are not checked once a backend raises PermissionDenied,,,,,,, 1,PrePopulatedPostLargeSlug,"Regression test for #15938: a large max_length for the slugfield must not be localized in prepopulated_fields_js.html or it might end up breaking the javascript (ie, using THOUSAND_SEPARATOR ends up with maxLength=1,000)",,,,,,,,"Regression test for #15938: a large max_length for the slugfield must not be localized in prepopulated_fields_js.html or it might end up breaking the javascript (ie, using THOUSAND_SEPARATOR ends up with maxLength=1,000)",Regression test for #15938:,,,,,,,,, 1,ProxyModelInheritanceTests,"Proxy model inheritance across apps can result in migrate not creating the table for the proxied model (as described in #12286). This test creates two dummy apps and calls migrate, then verifies that the table has been created.","Proxy model inheritance across apps can result in migrate not creating the table for the proxied model (as described in #12286).",,,"This test creates two dummy apps and calls migrate, then verifies that the table has been created.",,,,,,,,,,,,,, 1,RawPostDataException,"You cannot access raw_post_data from a request that has multipart/* POST data if it has been accessed via POST, FILES, etc..","You cannot access raw_post_data from a request that has multipart/* POST data if it has been accessed via POST, FILES, etc..",,,"if it has been accessed via POST, FILES, etc..",,,,,,,"You cannot access raw_post_data from a request that has multipart/* POST data if it has been accessed via POST, FILES, etc..",,,,,,, 1,RemoteTestRunner,"Run tests and record everything but don't display anything. The implementation matches the unpythonic coding style of unittest2.",Run tests and record everything but don't display anything.,,,,,,,,,,,,,,The implementation matches the unpythonic coding style of unittest2.,,, 1,RequestFactoryEnvironmentTests,"Regression tests for #8551 and #17067: ensure that environment variables are set correctly in RequestFactory.","Regression tests for #8551 and #17067: ensure that environment variables are set correctly in RequestFactory.",,,"ensure that environment variables are set correctly in RequestFactory.",,,,,,,,,,,,,, 1,ReverseGenericManyToOneDescriptor,"Accessor to the related objects manager on the one-to-many relation created by GenericRelation. In the example:: class Post(Model): comments = GenericRelation(Comment) ``post.comments`` is a ReverseGenericManyToOneDescriptor instance.","Accessor to the related objects manager on the one-to-many relation created by GenericRelation.","In the example:: class Post(Model): comments = GenericRelation(Comment) ``post.comments`` is a ReverseGenericManyToOneDescriptor instance.",,,,,,,,,,,,,,,, 1,SameAsLookup,"The ""~="" operator is the ""same as"" operator. It tests actual geometric equality of two features. So if A and B are the same feature, vertex-by-vertex, the operator returns true.","The ""~="" operator is the ""same as"" operator. It tests actual geometric equality of two features. So if A and B are the same feature, vertex-by-vertex, the operator returns true.",,,"The ""~="" operator is the ""same as"" operator. It tests actual geometric equality of two features. So if A and B are the same feature, vertex-by-vertex, the operator returns true.",,,,,,,,,,,,,, 1,Serializer,Convert a queryset to JSON.,Convert a queryset to JSON.,,,,,,,,,,,,,,,,, 1,SessionStorage,"Store messages in the session (that is, django.contrib.sessions).","Store messages in the session (that is, django.contrib.sessions).",,,,,,"Store messages in the session (that is, django.contrib.sessions).",,,,,"Store messages in the session (that is, django.contrib.sessions).",,,,,, 1,SessionStore,"A database session store, that handles updating the account ID column inside the custom session model.","A database session store,",,,"that handles updating the account ID column inside the custom session model.",,,,,,,,,,,,,, 1,SimpleView,A simple view with a docstring.,A simple view with a docstring.,,,,,,,,,,,,,,,,, 1,SpatialRefSysMixin,"The SpatialRefSysMixin is a class used by the database-dependent SpatialRefSys objects to reduce redundant code.","The SpatialRefSysMixin is a class used by the database-dependent SpatialRefSys objects to reduce redundant code.",,,,,"The SpatialRefSysMixin is a class used by the database-dependent SpatialRefSys objects to reduce redundant code.",,,,,,,,,,,, 1,SplitHiddenDateTimeWidget,"A widget that splits datetime input into two inputs.","A widget that splits datetime input into two inputs.",,,,,,,,,,,,,,,,, 1,StaticFilesHandler,"WSGI middleware that intercepts calls to the static files directory, as defined by the STATIC_URL setting, and serves those files.","WSGI middleware that intercepts calls to the static files directory, as defined by the STATIC_URL setting, and serves those files.",,"WSGI middleware that intercepts calls to the static files directory, as defined by the STATIC_URL setting, and serves those files.",,,,,,,,,,,,,,, 1,StrictAssignmentTests,"Should a model do anything special with __setattr__() or descriptors which raise a ValidationError, a model form should catch the error (#24706).",,,,,,,,"Should a model do anything special with __setattr__() or descriptors which raise a ValidationError, a model form should catch the error (#24706).",,,,,,,,,, 1,SubCategoryForm,"Subclassing without specifying a Meta on the class will use the parent's Meta (or the first parent in the MRO if there are multiple parent classes).",,,,,,,,,,,,,,,,,"Subclassing without specifying a Meta on the class will use the parent's Meta (or the first parent in the MRO if there are multiple parent classes).", 1,SuccessMessageMixin,Add a success message on successful form submission.,Add a success message on successful form submission.,,,,,,,,,,,,,,,,, 1,TemplateDoesNotExist,"The exception used when a template does not exist. Optional arguments: backend The template backend class used when raising this exception. tried A list of sources that were tried when finding the template. This is formatted as a list of tuples containing (origin, status), where origin is an Origin object or duck type and status is a string with the reason the template wasn't found. chain A list of intermediate TemplateDoesNotExist exceptions. This is used to encapsulate multiple exceptions when loading templates from multiple engines.",The exception used when a template does not exist,,"Optional arguments: backend The template backend class used when raising this exception. tried A list of sources that were tried when finding the template. This is formatted as a list of tuples containing (origin, status), where origin is an Origin object or duck type and status is a string with the reason the template wasn't found. chain A list of intermediate TemplateDoesNotExist exceptions. This is used to encapsulate multiple exceptions when loading templates from multiple engines.",,,,,,,,,,,,,,, 1,TestImageFieldFile,"Custom Field File class that records whether or not the underlying file was opened.","Custom Field File class that records whether or not the underlying file was opened.",,,,,,,,,,,,,,,,, 1,TestRouter,Routes to the 'other' database if the model name starts with 'Other'.,,,,Routes to the 'other' database if the model name starts with 'Other'.,,,,,,,,,,,,,, 1,TestUtils,"This __doc__ output is required for testing. I copied this example from `admindocs` documentation. (TITLE) Display an individual :model:`myapp.MyModel`. **Context** ``RequestContext`` ``mymodel`` An instance of :model:`myapp.MyModel`. **Template:** :template:`myapp/my_template.html` (DESCRIPTION) some_metadata: some data",,,,,,This __doc__ output is required for testing.,,,"I copied this example from `admindocs` documentation. (TITLE) Display an individual :model:`myapp.MyModel`. **Context** ``RequestContext`` ``mymodel`` An instance of :model:`myapp.MyModel`. **Template:** :template:`myapp/my_template.html` (DESCRIPTION) some_metadata: some data",,,,,,,,, 1,UniqueAnchor,"This is a model that can be used as something for other models to point at","This is a model that can be used as something for other models to point at",,,,,,,,,,,,,,,,, 1,UpdateError,Occurs if Django tries to update a session that was deleted.,Occurs if Django tries to update a session that was deleted.,,,,,,,,,,,,,,,,, 1,UserCreationForm,"A form that creates a user, with no privileges, from the given username and password.","A form that creates a user, with no privileges, from the given username and password.",,"A form that creates a user, with no privileges, from the given username and password.",,,,,,,,,,,,,,, 1,VariableWrapper,"An adapter class for cursor variables that prevents the wrapped object from being converted into a string when used to instantiate an OracleParam. This can be used generally for any other object that should be passed into Cursor.execute as-is.","An adapter class for cursor variables that prevents the wrapped object from being converted into a string when used to instantiate an OracleParam.","This can be used generally for any other object that should be passed into Cursor.execute as-is.",,,,,,,,,,,,,,,, 1,WindowFrame,"Model the frame clause in window expressions. There are two types of frame clauses which are subclasses, however, all processing and validation (by no means intended to be complete) is done here. Thus, providing an end for a frame is optional (the default is UNBOUNDED FOLLOWING, which is the last row in the frame).",Model the frame clause in window expressions,,,"There are two types of frame clauses which are subclasses, however, all processing and validation (by no means intended to be complete) is done here.",,,,,,,,,,,,,"There are two types of frame clauses which are subclasses, however, all processing and validation (by no means intended to be complete) is done here.", 1,XFrameOptionsDecoratorsTests,Tests for the X-Frame-Options decorators.,Tests for the X-Frame-Options decorators.,,,,,,,,,,,,,,,,, 1,XFrameOptionsMiddleware,"Set the X-Frame-Options HTTP header in HTTP responses. Do not set the header if it's already set or if the response contains a xframe_options_exempt value set to True. By default, set the X-Frame-Options header to 'SAMEORIGIN', meaning the response can only be loaded on a frame within the same site. To prevent the response from being loaded in a frame in any site, set X_FRAME_OPTIONS in your project's Django settings to 'DENY'.",,"By default, set the X-Frame-Options header to 'SAMEORIGIN', meaning the response can only be loaded on a frame within the same site. To prevent the response from being loaded in a frame in any site, set X_FRAME_OPTIONS in your project's Django settings to 'DENY'.",,,,"Do not set the header if it's already set or if the response contains a xframe_options_exempt value set to True. By default, set the X-Frame-Options header to 'SAMEORIGIN', meaning the response can only be loaded on a frame within the same site. To prevent the response from being loaded in a frame in any site, set X_FRAME_OPTIONS in your project's Django settings to 'DENY'.",Set the X-Frame-Options HTTP header in HTTP responses.,,,,,,,,,,, 2,Audio,"Create an audio object. When this object is returned by an input cell or passed to the display function, it will result in Audio controls being displayed in the frontend (only works in the notebook). Parameters ---------- data : numpy array, list, unicode, str or bytes Can be one of * Numpy 1d array containing the desired waveform (mono) * Numpy 2d array containing waveforms for each channel. Shape=(NCHAN, NSAMPLES). For the standard channel order, see http://msdn.microsoft.com/en-us/library/windows/hardware/dn653308(v=vs.85).aspx * List of float or integer representing the waveform (mono) * String containing the filename * Bytestring containing raw PCM data or * URL pointing to a file on the web. If the array option is used, the waveform will be normalized. If a filename or url is used, the format support will be browser dependent. url : unicode A URL to download the data from. filename : unicode Path to a local file to load the data from. embed : boolean Should the audio data be embedded using a data URI (True) or should the original source be referenced. Set this to True if you want the audio to playable later with no internet connection in the notebook. Default is `True`, unless the keyword argument `url` is set, then default value is `False`. rate : integer The sampling rate of the raw data. Only required when data parameter is being used as an array autoplay : bool Set to True if the audio should immediately start playing. Default is `False`. normalize : bool Whether audio should be normalized (rescaled) to the maximum possible range. Default is `True`. When set to `False`, `data` must be between -1 and 1 (inclusive), otherwise an error is raised. Applies only when `data` is a list or array of samples; other types of audio are never normalized. Examples -------- :: # Generate a sound import numpy as np framerate = 44100 t = np.linspace(0,5,framerate*5) data = np.sin(2*np.pi*220*t) + np.sin(2*np.pi*224*t) Audio(data,rate=framerate) # Can also do stereo or more channels dataleft = np.sin(2*np.pi*220*t) dataright = np.sin(2*np.pi*224*t) Audio([dataleft, dataright],rate=framerate) Audio(""http://www.nch.com.au/acm/8k16bitpcm.wav"") # From URL Audio(url=""http://www.w3schools.com/html/horse.ogg"") Audio('/path/to/sound.wav') # From file Audio(filename='/path/to/sound.ogg') Audio(b'RAW_WAV_DATA..) # From bytes Audio(data=b'RAW_WAV_DATA..) See Also -------- See also the ``Audio`` widgets form the ``ipywidget`` package for more flexibility and options.",Create an audio object.,"Examples -------- :: # Generate a sound import numpy as np framerate = 44100 t = np.linspace(0,5,framerate*5) data = np.sin(2*np.pi*220*t) + np.sin(2*np.pi*224*t) Audio(data,rate=framerate) # Can also do stereo or more channels dataleft = np.sin(2*np.pi*220*t) dataright = np.sin(2*np.pi*224*t) Audio([dataleft, dataright],rate=framerate) Audio(""http://www.nch.com.au/acm/8k16bitpcm.wav"") # From URL Audio(url=""http://www.w3schools.com/html/horse.ogg"") Audio('/path/to/sound.wav') # From file Audio(filename='/path/to/sound.ogg') Audio(b'RAW_WAV_DATA..) # From bytes Audio(data=b'RAW_WAV_DATA..)","Parameters ---------- data : numpy array, list, unicode, str or bytes Can be one of * Numpy 1d array containing the desired waveform (mono) * Numpy 2d array containing waveforms for each channel. Shape=(NCHAN, NSAMPLES). For the standard channel order, see http://msdn.microsoft.com/en-us/library/windows/hardware/dn653308(v=vs.85).aspx * List of float or integer representing the waveform (mono) * String containing the filename * Bytestring containing raw PCM data or * URL pointing to a file on the web. If the array option is used, the waveform will be normalized. If a filename or url is used, the format support will be browser dependent. url : unicode A URL to download the data from. filename : unicode Path to a local file to load the data from. embed : boolean Should the audio data be embedded using a data URI (True) or should the original source be referenced. Set this to True if you want the audio to playable later with no internet connection in the notebook. Default is `True`, unless the keyword argument `url` is set, then default value is `False`. rate : integer The sampling rate of the raw data. Only required when data parameter is being used as an array autoplay : bool Set to True if the audio should immediately start playing. Default is `False`. normalize : bool Whether audio should be normalized (rescaled) to the maximum possible range. Default is `True`. When set to `False`, `data` must be between -1 and 1 (inclusive), otherwise an error is raised. Applies only when `data` is a list or array of samples; other types of audio are never normalized.","When this object is returned by an input cell or passed to the display function, it will result in Audio controls being displayed in the frontend (only works in the notebook).",,,,,"See Also -------- See also the ``Audio`` widgets form the ``ipywidget`` package for more flexibility and options.",,,,,,,,, 2,capture_output,context manager for capturing stdout/err,context manager for capturing stdout/err,,,,,,,,,,,,,,,,, 2,CapturingDisplayPublisher,A DisplayPublisher that store,A DisplayPublisher that store,,,,,,,,,,,,,,,,, 2,CellMagicRole,Cross reference role displayed with a %% prefix,Cross reference role displayed with a %% prefix,,,,,,,,,,,,,,,,, 2,DisplayHook,"The custom IPython displayhook to replace sys.displayhook. This class does many things, but the basic idea is that it is a callable that gets called anytime user code returns a value.",The custom IPython displayhook to replace sys.displayhook.,,,"This class does many things, but the basic idea is that it is a callable that gets called anytime user code returns a value.",,,,,,,,,,,,,, 2,DummyMod,"A dummy module used for IPython's interactive module when a namespace must be assigned to the module's __dict__.","A dummy module used for IPython's interactive module when a namespace must be assigned to the module's __dict__.",,,,,,,,,,,,,,,,, 2,GeoJSON,"GeoJSON expects JSON-able dict not an already-serialized JSON string. Scalar types (None, number, string) are not allowed, only dict containers.",GeoJSON expects JSON-able dict,,,,,"not an already-serialized JSON string. Scalar types (None, number, string) are not allowed, only dict containers.",,,,,"not an already-serialized JSON string. Scalar types (None, number, string) are not allowed, only dict containers.",,,,,,, 2,HelpEnd,Transformer for help syntax: obj? and obj??,Transformer for help syntax: obj? and obj??,,,,,,,,,,,,,,,,, 2,HistoryAccessor,"Access the history database without adding to it. This is intended for use by standalone history tools. IPython shells use HistoryManager, below, which is a subclass of this.",Access the history database without adding to it.,"This is intended for use by standalone history tools. IPython shells use HistoryManager, below, which is a subclass of this.",,,,,,,,,,,,,,,, 2,InteractiveShellApp,"A Mixin for applications that start InteractiveShell instances. Provides configurables for loading extensions and executing files as part of configuring a Shell environment. The following methods should be called by the :meth:`initialize` method of the subclass: - :meth:`init_path` - :meth:`init_shell` (to be implemented by the subclass) - :meth:`init_gui_pylab` - :meth:`init_extensions` - :meth:`init_code`","A Mixin for applications that start InteractiveShell instances.Provides configurables for loading extensions and executing files as part of configuring a Shell environment.",,,"The following methods should be called by the :meth:`initialize` method of the subclass: - :meth:`init_path` - :meth:`init_shell` (to be implemented by the subclass) - :meth:`init_gui_pylab` - :meth:`init_extensions` - :meth:`init_code`",,,,,,,,,,,,,"The following methods should be called by the :meth:`initialize` method of the subclass: - :meth:`init_path` - :meth:`init_shell` (to be implemented by the subclass) - :meth:`init_gui_pylab` - :meth:`init_extensions` - :meth:`init_code`", 2,IPythonInputSplitter,An input splitter that recognizes all of IPython's special syntax.,An input splitter that recognizes all of IPython's special syntax.,,,,,,,,,,,,,,,,, 2,LazyEvaluate,"This is used for formatting strings with values that need to be updated at that time, such as the current time or working directory.",,,,,,"This is used for formatting strings with values that need to be updated at that time, such as the current time or working directory.",,,,,"This is used for formatting strings with values that need to be updated at that time, such as the current time or working directory.",,,,,,, 2,Magics,"Base class for implementing magic functions. Shell functions which can be reached as %function_name. All magic functions should accept a string, which they can parse for their own needs. This can make some functions easier to type, eg `%cd ../` vs. `%cd(""../"")` Classes providing magic functions need to subclass this class, and they MUST: - Use the method decorators `@line_magic` and `@cell_magic` to decorate individual methods as magic functions, AND - Use the class decorator `@magics_class` to ensure that the magic methods are properly registered at the instance level upon instance initialization. See :mod:`magic_functions` for examples of actual implementation classes.",Base class for implementing magic functions.,"Classes providing magic functions need to subclass this class, and they MUST: - Use the method decorators `@line_magic` and `@cell_magic` to decorate individual methods as magic functions, AND - Use the class decorator `@magics_class` to ensure that the magic methods are properly registered at the instance level upon instance initialization.",,"""All magic functions should accept a string, which they can parse for their own needs. """,,"Shell functions which can be reached as %function_name. This can make some functions easier to type, eg `%cd ../` vs. `%cd(""../"")`",,,See :mod:`magic_functions` for examples of actual implementation classes.,,"All magic functions should accept a string, which they can parse for their own needs.","Shell functions which can be reached as %function_name. This can make some functions easier to type, eg `%cd ../` vs. `%cd(""../"")`",,"All magic functions should accept a string, which they can parse for their own needs.",,,, 2,MyFrame,"This is MyFrame. It just shows a few controls on a wxPanel, and has a simple menu.","This is MyFrame. It just shows a few controls on a wxPanel, and has a simple menu.",,,,,,,,,,,,,,,,, 2,Obj,Namespace to hold arbitrary information.,Namespace to hold arbitrary information.,,,,,,,,,,,,,,,,, 2,RichPromptDisplayHook,Subclass of base display hook using coloured prompt,Subclass of base display hook using coloured prompt,,,,,,,,,,,,,,,,, 2,Struct,"A dict subclass with attribute style access. This dict subclass has a a few extra features: * Attribute style access. * Protection of class members (like keys, items) when using attribute style access. * The ability to restrict assignment to only existing keys. * Intelligent merging. * Overloaded operators.",A dict subclass with attribute style access.,,,"This dict subclass has a a few extra features: * Attribute style access. * Protection of class members (like keys, items) when using attribute style access. * The ability to restrict assignment to only existing keys. * Intelligent merging. * Overloaded operators.",,,,,,,,,,,,,, 2,TBTools,Basic tools used by all traceback printer classes.,Basic tools used by all traceback printer classes.,,,,,,,,,,,,,,,,, 2,TermColors,"Color escape sequences. This class defines the escape sequences for all the standard (ANSI?) colors in terminals. Also defines a NoColor escape which is just the null string, suitable for defining 'dummy' color schemes in terminals which get confused by color escapes. This class should be used as a mixin for building color schemes.","Color escape sequences. This class defines the escape sequences for all the standard (ANSI?) colors in terminals. Also defines a NoColor escape which is just the null string, suitable for defining 'dummy' color schemes in terminals which get confused by color escapes.",This class should be used as a mixin for building color schemes.,,,,,,,,,,This class should be used as a mixin for building color schemes.,,,,,, 2,UserMagics,"Placeholder for user-defined magics to be added at runtime. All magics are eventually merged into a single namespace at runtime, but we use this class to isolate the magics defined dynamically by the user into their own class.",Placeholder for user-defined magics to be added at runtime.,,,"All magics are eventually merged into a single namespace at runtime, but we use this class to isolate the magics defined dynamically by the user into their own class.",,"All magics are eventually merged into a single namespace at runtime, but we use this class to isolate the magics defined dynamically by the user into their own class.",,,,,"All magics are eventually merged into a single namespace at runtime, but we use this class to isolate the magics defined dynamically by the user into their own class.",,,,,,, 2,YouTubeVideo,"Class for embedding a YouTube Video in an IPython session, based on its video id. e.g. to embed the video from https://www.youtube.com/watch?v=foo , you would do:: vid = YouTubeVideo(""foo"") display(vid) To start from 30 seconds:: vid = YouTubeVideo(""abc"", start=30) display(vid) To calculate seconds from time as hours, minutes, seconds use :class:`datetime.timedelta`:: start=int(timedelta(hours=1, minutes=46, seconds=40).total_seconds()) Other parameters can be provided as documented at https://developers.google.com/youtube/player_parameters#Parameters When converting the notebook using nbconvert, a jpeg representation of the video will be inserted in the document.","Class for embedding a YouTube Video in an IPython session, based on its video id.","When converting the notebook using nbconvert, a jpeg representation of the video will be inserted in the document. e.g. to embed the video from https://www.youtube.com/watch?v=foo , you would do:: vid = YouTubeVideo(""foo"") display(vid) To start from 30 seconds:: vid = YouTubeVideo(""abc"", start=30) display(vid) To calculate seconds from time as hours, minutes, seconds use :class:`datetime.timedelta`:: start=int(timedelta(hours=1, minutes=46, seconds=40).total_seconds())","Other parameters can be provided as documented at https://developers.google.com/youtube/player_parameters#Parameters",,,,,,"Other parameters can be provided as documented at https://developers.google.com/youtube/player_parameters#Parameters",,,,,,,,, 3,_MockPOP3,"Base mock that pretends to be a poplib POP3 connection. >>> pm = POP3Mailbox('localhost', user='bad', conn_cls=_MockPOP3) Traceback (most recent call last): ... AccessError >>> pm = POP3Mailbox('localhost', user='a', password='b', ... conn_cls=_MockPOP3) >>> pm.stat() (2, 123456) >>> pm.iterkeys() ['evil', 'good'] >>> 'evil' in pm, 'bogon' in pm (True, False) >>> [msg['subject'] for msg in pm] ['Msg 1', 'Msg 2'] >>> pm.get_msg_size('evil'), pm.get_msg_size('good') (47, 51) >>> pm.get_bytes('evil') 'From: test@mailpile.is\nSubject: Msg 1\n\nOh, hi!\n' >>> pm.get_bytes('evil', 5) 'From:' >>> pm['invalid-key'] Traceback (most recent call last): ... KeyError: ...",Base mock that pretends to be a poplib POP3 connection.,">>> pm = POP3Mailbox('localhost', user='bad', conn_cls=_MockPOP3) Traceback (most recent call last): ... AccessError >>> pm = POP3Mailbox('localhost', user='a', password='b', ... conn_cls=_MockPOP3) >>> pm.stat() (2, 123456) >>> pm.iterkeys() ['evil', 'good'] >>> 'evil' in pm, 'bogon' in pm (True, False) >>> [msg['subject'] for msg in pm] ['Msg 1', 'Msg 2'] >>> pm.get_msg_size('evil'), pm.get_msg_size('good') (47, 51) >>> pm.get_bytes('evil') 'From: test@mailpile.is\nSubject: Msg 1\n\nOh, hi!\n' >>> pm.get_bytes('evil', 5) 'From:' >>> pm['invalid-key'] Traceback (most recent call last): ... KeyError: ...",,,,,,,,,,,,,,,, 3,AutocryptSearch,Search for the Autocrypt database.,Search for the Autocrypt database.,,,,,,,,,,,,,,,,, 3,AutoTlsConnBroker,"This broker tries to auto-upgrade connections to use TLS, or at least do the SSL handshake here so we can record info about it.","This broker tries to auto-upgrade connections to use TLS, or at least do the SSL handshake here so we can record info about it.",,,,,,,,,,,,,,,,, 3,ConfigDict,"A sanity-checking, self-documenting dictionary of program settings. The object must be initialized with a dictionary which describes in a structured way what variables exist, what their legal values are, and what their defaults are and what they are for. Each variable definition expects three values: 1. A human readable description of what the variable is 2. A data type / sanity check 3. A default value If the sanity check is itself a dictionary of rules, values are expected to be dictionaries or lists of items that match the rules defined. This should be used with an empty list or dictionary as a default value. Configuration data can be nested by including a dictionary of further rules in place of the default value. If the default value is an empty list, it is assumed to be a list of values of the type specified. Examples: >>> pot = ConfigDict(_rules={'potatoes': ['How many potatoes?', 'int', 0], ... 'carrots': ['How many carrots?', int, 99], ... 'liquids': ['Fluids we like', False, { ... 'water': ['Liters', int, 0], ... 'vodka': ['Liters', int, 12] ... }], ... 'tags': ['Tags', {'c': ['C', int, 0], ... 'x': ['X', str, '']}, []], ... 'colors': ['Colors', ('red', 'blue'), []]}) >>> sorted(pot.keys()), sorted(pot.values()) (['colors', 'liquids', 'tags'], [[], [], {}]) >>> pot['potatoes'] = pot['liquids']['vodka'] = ""123"" >>> pot['potatoes'] 123 >>> pot['liquids']['vodka'] 123 >>> pot['carrots'] 99 >>> pot.walk('liquids.vodka') 123 >>> pot.walk('liquids/vodka', parent=True) ({...}, 'vodka') >>> pot['colors'].append('red') '0' >>> pot['colors'].extend(['blue', 'red', 'red']) >>> pot['colors'] ['red', 'blue', 'red', 'red'] >>> pot['tags'].append({'c': '123', 'x': 'woots'}) '0' >>> pot['tags'][0]['c'] 123 >>> pot['tags'].append({'z': 'invalid'}) Traceback (most recent call last): ... ValueError: Invalid value for config/tags/1: ... >>> pot['evil'] = 123 Traceback (most recent call last): ... InvalidKeyError: Invalid key for config: evil >>> pot['liquids']['evil'] = 123 Traceback (most recent call last): ... InvalidKeyError: Invalid key for config/liquids: evil >>> pot['potatoes'] = ""moo"" Traceback (most recent call last): ... ValueError: Invalid value for config/potatoes: moo >>> pot['colors'].append('green') Traceback (most recent call last): ... ConfigValueError: Invalid value for config/colors/4: green >>> pot.rules['potatoes'] ['How many potatoes?', , 0] >>> isinstance(pot['liquids'], ConfigDict) True","A sanity-checking, self-documenting dictionary of program settings.","Examples: >>> pot = ConfigDict(_rules={'potatoes': ['How many potatoes?', 'int', 0], ... 'carrots': ['How many carrots?', int, 99], ... 'liquids': ['Fluids we like', False, { ... 'water': ['Liters', int, 0], ... 'vodka': ['Liters', int, 12] ... }], ... 'tags': ['Tags', {'c': ['C', int, 0], ... 'x': ['X', str, '']}, []], ... 'colors': ['Colors', ('red', 'blue'), []]}) >>> sorted(pot.keys()), sorted(pot.values()) (['colors', 'liquids', 'tags'], [[], [], {}]) >>> pot['potatoes'] = pot['liquids']['vodka'] = ""123"" >>> pot['potatoes'] 123 >>> pot['liquids']['vodka'] 123 >>> pot['carrots'] 99 >>> pot.walk('liquids.vodka') 123 >>> pot.walk('liquids/vodka', parent=True) ({...}, 'vodka') >>> pot['colors'].append('red') '0' >>> pot['colors'].extend(['blue', 'red', 'red']) >>> pot['colors'] ['red', 'blue', 'red', 'red'] >>> pot['tags'].append({'c': '123', 'x': 'woots'}) '0' >>> pot['tags'][0]['c'] 123 >>> pot['tags'].append({'z': 'invalid'}) Traceback (most recent call last): ... ValueError: Invalid value for config/tags/1: ... >>> pot['evil'] = 123 Traceback (most recent call last): ... InvalidKeyError: Invalid key for config: evil >>> pot['liquids']['evil'] = 123 Traceback (most recent call last): ... InvalidKeyError: Invalid key for config/liquids: evil >>> pot['potatoes'] = ""moo"" Traceback (most recent call last): ... ValueError: Invalid value for config/potatoes: moo >>> pot['colors'].append('green') Traceback (most recent call last): ... ConfigValueError: Invalid value for config/colors/4: green >>> pot.rules['potatoes'] ['How many potatoes?', , 0] >>> isinstance(pot['liquids'], ConfigDict) True","Each variable definition expects three values: 1. A human readable description of what the variable is 2. A data type / sanity check 3. A default value","The object must be initialized with a dictionary which describes in a structured way what variables exist, what their legal values are, and what their defaults are and what they are for. Each variable definition expects three values: 1. A human readable description of what the variable is 2. A data type / sanity check 3. A default value If the sanity check is itself a dictionary of rules, values are expected to be dictionaries or lists of items that match the rules defined. This should be used with an empty list or dictionary as a default value. Configuration data can be nested by including a dictionary of further rules in place of the default value. If the default value is an empty list, it is assumed to be a list of values of the type specified.",,"If the default value is an empty list, it is assumed to be a list of values of the type specified.",,">>> pot['tags'].append({'z': 'invalid'}) Traceback (most recent call last): ... ValueError: Invalid value for config/tags/1: ... >>> pot['evil'] = 123 Traceback (most recent call last): ... InvalidKeyError: Invalid key for config: evil >>> pot['liquids']['evil'] = 123 Traceback (most recent call last): ... InvalidKeyError: Invalid key for config/liquids: evil >>> pot['potatoes'] = ""moo"" Traceback (most recent call last): ... ValueError: Invalid value for config/potatoes: moo >>> pot['colors'].append('green') Traceback (most recent call last): ... ConfigValueError: Invalid value for config/colors/4: green",,,"The object must be initialized with a dictionary which describes in a structured way what variables exist, what their legal values are, and what their defaults are and what they are for.","If the sanity check is itself a dictionary of rules, values are expected to be dictionaries or lists of items that match the rules defined. This should be used with an empty list or dictionary as a default value.",,,,,, 3,ConfigureMailboxes,"Add one or more mailboxes. If not account is specified, the mailbox is only assigned an ID for use in the metadata index. If an account is specified, the mailbox will be assigned to that account and configured for automatic indexing.",Add one or more mailboxes.,,,,,"If not account is specified, the mailbox is only assigned an ID for use in the metadata index. If an account is specified, the mailbox will be assigned to that account and configured for automatic indexing.",,,,,,,,,,,, 3,ConnectToGuiOMatic,Connect to a waiting gui-o-matic GUI,Connect to a waiting gui-o-matic GUI,,,,,,,,,,,,,,,,, 3,ContactSet,"Set contact lines, ensuring contact exists","Set contact lines, ensuring contact exists",,,,,,,,,,,,,,,,, 3,EncryptedIntDict,"EncryptedDict which only deals in signed 64-bit int values. This also adds a working keys() function.",EncryptedDict which only deals in signed 64-bit int values.,,,This also adds a working keys() function.,,,,,,,,,,,,,, 3,EncryptedUnicodeDict,EncryptedDict which only deals in unicode values.,EncryptedDict which only deals in unicode values.,,,,,,,,,,,,,,,,, 3,Event,"This is a single event in the event log. Actual interpretation and rendering of events should be handled by the respective source class.",This is a single event in the event log.,,,"Actual interpretation and rendering of events should be handled by the respective source class.",,,,,,,,"Actual interpretation and rendering of events should be handled by the respective source class.",,,,,, 3,Forward,Create forwarding drafts of one or more messages,Create forwarding drafts of one or more messages,,,,,,,,,,,,,,,,, 3,Group_,View groups,View groups,,,,,,,,,,,,,,,,, 3,HashCash,Try to collide a hash using the SMTorP algorithm,Try to collide a hash using the SMTorP algorithm,,,,,,,,,,,,,,,,, 3,ListTags,List tags,List tags,,,,,,,,,,,,,,,,, 3,MailpileJinjaLoader,"A Jinja2 template loader which uses the Mailpile configuration and plugin system to find template files.",""" A Jinja2 template loader which uses the Mailpile configuration and plugin system to find template files.""",,,,,,,,,,,,,,,,, 3,MailpileMailbox,A Maildir class for Windows (using ! instead of : in filenames),A Maildir class for Windows (using ! instead of : in filenames),,,,,,,,,,,,,,,,, 3,MailpileVFS,"This is a router object that implements the VFS interface but, delegating calls to individual implementations.","This is a router object that implements the VFS interface but, delegating calls to individual implementations.",,,,,,,,,,,,,,,,, 3,MoveFilter,Move an auto-tagging rule,Move an auto-tagging rule,,,,,,,,,,,,,,,,, 3,OldPostingList,A posting list is a map of search terms to message IDs.,A posting list is a map of search terms to message IDs.,,,,,,,,,,,,,,,,, 3,Rescan,Add new messages to index,Add new messages to index,,,,,,,,,,,,,,,,, 3,StorageBackedLongs,"This combines StorageBackedData with Pack/UnpackLongList to pack and save sets of ints. >>> storage = {'sbl': '\x01\x00\x00\x00\x00\x00\x00\x00'} >>> sbl = StorageBackedLongs(storage, 'sbl') >>> 1 in sbl True >>> sbl.append(2) >>> sbl.save() >>> UnpackLongList(storage['sbl']) == [1, 2] True","This combines StorageBackedData with Pack/UnpackLongList to pack and save sets of ints.",">>> storage = {'sbl': '\x01\x00\x00\x00\x00\x00\x00\x00'} >>> sbl = StorageBackedLongs(storage, 'sbl') >>> 1 in sbl True >>> sbl.append(2) >>> sbl.save() >>> UnpackLongList(storage['sbl']) == [1, 2] True",,,,,,,,,,,,,,,, 3,Util,Utility functions for builds,Utility functions for builds,,,,,,,,,,,,,,,,, 3,Vcard,Display a single vcard,Display a single vcard,,,,,,,,,,,,,,,,, 3,VCardSet,"Add a lines to a VCard, ensuring VCard exists","Add a lines to a VCard, ensuring VCard exists",,,,,,,,,,,,,,,,, 3,VCardStore,"This is a disk-backed in-memory collection of VCards. >>> vcs = VCardStore(cfg, '/tmp') # VCards are added to the collection using add_vcard. This will # create a file for the card on disk, using a random name. >>> vcs.add_vcards(MailpileVCard(VCardLine('FN:Dude'), ... VCardLine('EMAIL:d@evil.com')), ... MailpileVCard(VCardLine('FN:Guy'))) VCards can be looked up directly by e-mail. >>> vcs.get_vcard('d@evil.com').fn u'Dude' >>> vcs.get_vcard('nosuch@email.address') is None True Or they can be found using searches... >>> vcs.find_vcards(['guy'])[0].fn u'Guy' Cards can be removed using del_vcards >>> vcs.del_vcards(vcs.get_vcard('d@evil.com')) >>> vcs.get_vcard('d@evil.com') is None True >>> vcs.del_vcards(*vcs.find_vcards(['guy'])) >>> vcs.find_vcards(['guy']) []",This is a disk-backed in-memory collection of VCards.,"# VCards are added to the collection using add_vcard. This will # create a file for the card on disk, using a random name. >>> vcs.add_vcards(MailpileVCard(VCardLine('FN:Dude'), ... VCardLine('EMAIL:d@evil.com')), ... MailpileVCard(VCardLine('FN:Guy'))) VCards can be looked up directly by e-mail. >>> vcs.get_vcard('d@evil.com').fn u'Dude' >>> vcs.get_vcard('nosuch@email.address') is None True Or they can be found using searches... >>> vcs.find_vcards(['guy'])[0].fn u'Guy' Cards can be removed using del_vcards >>> vcs.del_vcards(vcs.get_vcard('d@evil.com')) >>> vcs.get_vcard('d@evil.com') is None True >>> vcs.del_vcards(*vcs.find_vcards(['guy'])) >>> vcs.find_vcards(['guy']) []",,,,,,,,,,,,,,,, 4,_MergeOperation,"Perform a database (SQL) merge operation between two DataFrame or Series objects using either columns as keys or their row indexes",,Perform a database (SQL) merge operation between two DataFrame or Series objects using either columns as keys or their row indexes,,,,,,,,,,,,,,,, 4,AbstractEngine,Object serving as a base class for all engines.,Object serving as a base class for all engines.,,,,,,,,,,,,,,,,, 4,AbstractHolidayCalendar,Abstract interface to create holidays following certain rules.,Abstract interface to create holidays following certain rules.,,,,,,,,,,,,,,,,, 4,AccessorCallableDocumenter,"This documenter lets us removes .__call__ from the method signature for callable accessors like Series.plot","This documenter lets us removes .__call__ from the method signature for callable accessors like Series.plot",,,,,,,,,,,,,,,,, 4,AccessorDocumenter,Specialized Documenter subclass for accessors.,Specialized Documenter subclass for accessors.,,,,,,,,,,,,,,,,, 4,Base,"Common tests for all variations of IntervalIndex construction. Input data to be supplied in breaks format, then converted by the subclass method get_kwargs_from_breaks to the expected format.",Common tests for all variations of IntervalIndex construction.,,,"Input data to be supplied in breaks format, then converted by the subclass method get_kwargs_from_breaks to the expected format.",,"Input data to be supplied in breaks format, then converted by the subclass method get_kwargs_from_breaks to the expected format.",,,,,,,,"Input data to be supplied in breaks format, then converted by the subclass method get_kwargs_from_breaks to the expected format.",,,, 4,BaseInterfaceTests,"Tests that the basic interface is satisfied. ------------------------------------------------------------------------ Interface ------------------------------------------------------------------------","Tests that the basic interface is satisfied. ------------------------------------------------------------------------ Interface ------------------------------------------------------------------------",,,,,,,,,,,,,,,,, 4,BooleanArray,"Array of boolean (True/False) data with missing values. This is a pandas Extension array for boolean data, under the hood represented by 2 numpy arrays: a boolean array with the data and a boolean array with the mask (True indicating missing). BooleanArray implements Kleene logic (sometimes called three-value logic) for logical operations. See :ref:`boolean.kleene` for more. To construct an BooleanArray from generic array-like input, use :func:`pandas.array` specifying ``dtype=""boolean""`` (see examples below). .. versionadded:: 1.0.0 .. warning:: BooleanArray is considered experimental. The implementation and parts of the API may change without warning. Parameters ---------- values : numpy.ndarray A 1-d boolean-dtype array with the data. mask : numpy.ndarray A 1-d boolean-dtype array indicating missing values (True indicates missing). copy : bool, default False Whether to copy the `values` and `mask` arrays. Attributes ---------- None Methods ------- None Returns ------- BooleanArray Examples -------- Create an BooleanArray with :func:`pandas.array`: >>> pd.array([True, False, None], dtype=""boolean"") [True, False, NA] Length: 3, dtype: boolean",Array of boolean (True/False) data with missing values.,"Examples -------- Create an BooleanArray with :func:`pandas.array`: >>> pd.array([True, False, None], dtype=""boolean"") [True, False, NA] Length: 3, dtype: boolean","Parameters ---------- values : numpy.ndarray A 1-d boolean-dtype array with the data. mask : numpy.ndarray A 1-d boolean-dtype array indicating missing values (True indicates missing). copy : bool, default False Whether to copy the `values` and `mask` arrays.","This is a pandas Extension array for boolean data, under the hood represented by 2 numpy arrays: a boolean array with the data and a boolean array with the mask (True indicating missing). BooleanArray implements Kleene logic (sometimes called three-value logic) for logical operations. See :ref:`boolean.kleene` for more. To construct an BooleanArray from generic array-like input, use :func:`pandas.array` specifying ``dtype=""boolean""`` (see examples below). Attributes ---------- None Methods ------- None Returns ------- BooleanArray",.. version added:: 1.0.0,,,,See :ref:`boolean.kleene` for more.,,".. warning:: BooleanArray is considered experimental. The implementation and parts of the API may change without warning.",,,,,,, 4,BusinessHour,DateOffset subclass representing possibly n business hours.,DateOffset subclass representing possibly n business hours.,,,,,,,,,,,,,,,,DateOffset subclass representing possibly n business hours., 4,BusinessMixin,Mixin to business types to provide related functions.,Mixin to business types to provide related functions.,,,,,,,,,,,,,,,,, 4,BYearBegin,DateOffset increments between business year begin dates.,DateOffset increments between business year begin dates.,,,,,,,,,,,,,,,,, 4,CategoricalDtype,"Type for categorical data with the categories and orderedness. .. versionchanged:: 0.21.0 Parameters ---------- categories : sequence, optional Must be unique, and must not contain any nulls. ordered : bool or None, default False Whether or not this categorical is treated as a ordered categorical. None can be used to maintain the ordered value of existing categoricals when used in operations that combine categoricals, e.g. astype, and will resolve to False if there is no existing ordered to maintain. Attributes ---------- categories ordered Methods ------- None See Also -------- Categorical Notes ----- This class is useful for specifying the type of a ``Categorical`` independent of the values. See :ref:`categorical.categoricaldtype` for more. Examples -------- >>> t = pd.CategoricalDtype(categories=['b', 'a'], ordered=True) >>> pd.Series(['a', 'b', 'a', 'c'], dtype=t) 0 a 1 b 2 a 3 NaN dtype: category Categories (2, object): [b < a]",Type for categorical data with the categories and orderedness.,"Examples -------- >>> t = pd.CategoricalDtype(categories=['b', 'a'], ordered=True) >>> pd.Series(['a', 'b', 'a', 'c'], dtype=t) 0 a 1 b 2 a 3 NaN dtype: category Categories (2, object): [b < a]","Parameters ---------- categories : sequence, optional Must be unique, and must not contain any nulls. ordered : bool or None, default False Whether or not this categorical is treated as a ordered categorical. None can be used to maintain the ordered value of existing categoricals when used in operations that combine categoricals, e.g. astype, and will resolve to False if there is no existing ordered to maintain.","Attributes ---------- categories ordered Methods ------- None",. versionchanged:: 0.21.0,"Notes ----- This class is useful for specifying the type of a ``Categorical`` independent of the values. See :ref:`categorical.categoricaldtype` for more.",,,"See :ref:`categorical.categoricaldtype` for more. See Also -------- Categorical",,,,,,,,, 4,CategoricalIndex,"Index based on an underlying :class:`Categorical`. CategoricalIndex, like Categorical, can only take on a limited, and usually fixed, number of possible values (`categories`). Also, like Categorical, it might have an order, but numerical operations (additions, divisions, ...) are not possible. Parameters ---------- data : array-like (1-dimensional) The values of the categorical. If `categories` are given, values not in `categories` will be replaced with NaN. categories : index-like, optional The categories for the categorical. Items need to be unique. If the categories are not given here (and also not in `dtype`), they will be inferred from the `data`. ordered : bool, optional Whether or not this categorical is treated as an ordered categorical. If not given here or in `dtype`, the resulting categorical will be unordered. dtype : CategoricalDtype or ""category"", optional If :class:`CategoricalDtype`, cannot be used together with `categories` or `ordered`. .. versionadded:: 0.21.0 copy : bool, default False Make a copy of input ndarray. name : object, optional Name to be stored in the index. Attributes ---------- codes categories ordered Methods ------- rename_categories reorder_categories add_categories remove_categories remove_unused_categories set_categories as_ordered as_unordered map Raises ------ ValueError If the categories do not validate. TypeError If an explicit ``ordered=True`` is given but no `categories` and the `values` are not sortable. See Also -------- Index : The base pandas Index type. Categorical : A categorical array. CategoricalDtype : Type for categorical data. Notes ----- See the `user guide `_ for more. Examples -------- >>> pd.CategoricalIndex(['a', 'b', 'c', 'a', 'b', 'c']) CategoricalIndex(['a', 'b', 'c', 'a', 'b', 'c'], categories=['a', 'b', 'c'], ordered=False, dtype='category') # noqa ``CategoricalIndex`` can also be instantiated from a ``Categorical``: >>> c = pd.Categorical(['a', 'b', 'c', 'a', 'b', 'c']) >>> pd.CategoricalIndex(c) CategoricalIndex(['a', 'b', 'c', 'a', 'b', 'c'], categories=['a', 'b', 'c'], ordered=False, dtype='category') # noqa Ordered ``CategoricalIndex`` can have a min and max value. >>> ci = pd.CategoricalIndex(['a','b','c','a','b','c'], ordered=True, ... categories=['c', 'b', 'a']) >>> ci CategoricalIndex(['a', 'b', 'c', 'a', 'b', 'c'], categories=['c', 'b', 'a'], ordered=True, dtype='category') # noqa >>> ci.min() 'c'",Index based on an underlying :class:`Categorical`.,"Examples -------- >>> pd.CategoricalIndex(['a', 'b', 'c', 'a', 'b', 'c']) CategoricalIndex(['a', 'b', 'c', 'a', 'b', 'c'], categories=['a', 'b', 'c'], ordered=False, dtype='category') # noqa ``CategoricalIndex`` can also be instantiated from a ``Categorical``: >>> c = pd.Categorical(['a', 'b', 'c', 'a', 'b', 'c']) >>> pd.CategoricalIndex(c) CategoricalIndex(['a', 'b', 'c', 'a', 'b', 'c'], categories=['a', 'b', 'c'], ordered=False, dtype='category') # noqa Ordered ``CategoricalIndex`` can have a min and max value. >>> ci = pd.CategoricalIndex(['a','b','c','a','b','c'], ordered=True, ... categories=['c', 'b', 'a']) >>> ci CategoricalIndex(['a', 'b', 'c', 'a', 'b', 'c'], categories=['c', 'b', 'a'], ordered=True, dtype='category') # noqa >>> ci.min() 'c'","Parameters ---------- data : array-like (1-dimensional) The values of the categorical. If `categories` are given, values not in `categories` will be replaced with NaN. categories : index-like, optional The categories for the categorical. Items need to be unique. If the categories are not given here (and also not in `dtype`), they will be inferred from the `data`. ordered : bool, optional Whether or not this categorical is treated as an ordered categorical. If not given here or in `dtype`, the resulting categorical will be unordered. dtype : CategoricalDtype or ""category"", optional If :class:`CategoricalDtype`, cannot be used together with `categories` or `ordered`. .. versionadded:: 0.21.0 copy : bool, default False Make a copy of input ndarray. name : object, optional Name to be stored in the index.","CategoricalIndex, like Categorical, can only take on a limited, and usually fixed, number of possible values (`categories`). Also, like Categorical, it might have an order, but numerical operations (additions, divisions, ...) are not possible. Methods ------- rename_categories reorder_categories add_categories remove_categories remove_unused_categories set_categories as_ordered as_unordered map Attributes ---------- codes categories ordered",,"Notes ----- See the `user guide `_ for more.",,"Raises ------ ValueError If the categories do not validate. TypeError If an explicit ``ordered=True`` is given but no `categories` and the `values` are not sortable.","See Also -------- Index : The base pandas Index type. Categorical : A categorical array. CategoricalDtype : Type for categorical data.",,,,,,,,Index based on an underlying :class:`Categorical`., 4,CheckingBuildExt,Subclass build_ext to get clearer report if Cython is necessary.,,,,,,,,,,,,,,,,,Subclass build_ext to get clearer report if Cython is necessary., 4,CythonCommand,"Custom distutils command subclassed from Cython.Distutils.build_ext to compile pyx->c, and stop there. All this does is override the C-compile method build_extension() with a no-op.","Custom distutils command subclassed from Cython.Distutils.build_ext to compile pyx->c, and stop there.",,,,,,,,,,,,,,,,"All this does is override the C-compile method build_extension() with a no-op.", 4,DataIndexableCol,represent a data column that can be indexed,represent a data column that can be indexed,,,,,,,,,,,,,,,,, 4,ExcelFile,"Class for parsing tabular excel sheets into DataFrame objects. Uses xlrd. See read_excel for more documentation Parameters ---------- io : str, path object (pathlib.Path or py._path.local.LocalPath), a file-like object, xlrd workbook or openpypl workbook. If a string or path object, expected to be a path to xls, xlsx or odf file. engine : str, default None If io is not a buffer or path, this must be set to identify io. Acceptable values are None, ``xlrd``, ``openpyxl`` or ``odf``. Note that ``odf`` reads tables out of OpenDocument formatted files.",Class for parsing tabular excel sheets into DataFrame objects.,,"Parameters ---------- io : str, path object (pathlib.Path or py._path.local.LocalPath), a file-like object, xlrd workbook or openpypl workbook. If a string or path object, expected to be a path to xls, xlsx or odf file. engine : str, default None If io is not a buffer or path, this must be set to identify io. Acceptable values are None, ``xlrd``, ``openpyxl`` or ``odf``. Note that ``odf`` reads tables out of OpenDocument formatted files.",,,Uses xlrd. See read_excel for more documentation,,,See read_excel for more documentation,,,,,,Uses xlrd.,,, 4,Holiday,"Class that defines a holiday with start/end dates and rules for observance.","Class that defines a holiday with start/end dates and rules for observance.",,,,,,,,,,,,,,,,, 4,IntegerArray,"Array of integer (optional missing) values. .. versionadded:: 0.24.0 .. warning:: IntegerArray is currently experimental, and its API or internal implementation may change without warning. We represent an IntegerArray with 2 numpy arrays: - data: contains a numpy integer array of the appropriate dtype - mask: a boolean array holding a mask on the data, True is missing To construct an IntegerArray from generic array-like input, use :func:`pandas.array` with one of the integer dtypes (see examples). See :ref:`integer_na` for more. Parameters ---------- values : numpy.ndarray A 1-d integer-dtype array. mask : numpy.ndarray A 1-d boolean-dtype array indicating missing values. copy : bool, default False Whether to copy the `values` and `mask`. Attributes ---------- None Methods ------- None Returns ------- IntegerArray Examples -------- Create an IntegerArray with :func:`pandas.array`. >>> int_array = pd.array([1, None, 3], dtype=pd.Int32Dtype()) >>> int_array [1, NaN, 3] Length: 3, dtype: Int32 String aliases for the dtypes are also available. They are capitalized. >>> pd.array([1, None, 3], dtype='Int32') [1, NaN, 3] Length: 3, dtype: Int32 >>> pd.array([1, None, 3], dtype='UInt16') [1, NaN, 3] Length: 3, dtype: UInt16",Array of integer (optional missing) values.,"To construct an IntegerArray from generic array-like input, use :func:`pandas.array` with one of the integer dtypes (see examples). Examples -------- Create an IntegerArray with :func:`pandas.array`. >>> int_array = pd.array([1, None, 3], dtype=pd.Int32Dtype()) >>> int_array [1, NaN, 3] Length: 3, dtype: Int32 String aliases for the dtypes are also available. They are capitalized. >>> pd.array([1, None, 3], dtype='Int32') [1, NaN, 3] Length: 3, dtype: Int32 >>> pd.array([1, None, 3], dtype='UInt16') [1, NaN, 3] Length: 3, dtype: UInt16","Parameters ---------- values : numpy.ndarray A 1-d integer-dtype array. mask : numpy.ndarray A 1-d boolean-dtype array indicating missing values. copy : bool, default False Whether to copy the `values` and `mask`.","Attributes ---------- None Methods ------- None Returns ------- IntegerArray We represent an IntegerArray with 2 numpy arrays: - data: contains a numpy integer array of the appropriate dtype - mask: a boolean array holding a mask on the data, True is missing To construct an IntegerArray from generic array-like input, use :func:`pandas.array` with one of the integer dtypes (see examples).",.. versionadded:: 0.24.0,"We represent an IntegerArray with 2 numpy arrays: - data: contains a numpy integer array of the appropriate dtype - mask: a boolean array holding a mask on the data, True is missing To construct an IntegerArray from generic array-like input, use :func:`pandas.array` with one of the integer dtypes (see examples).",,,See :ref:`integer_na` for more.,,".. warning:: IntegerArray is currently experimental, and its API or internal implementation may change without warning. We represent an IntegerArray with 2 numpy arrays: - data: contains a numpy integer array of the appropriate dtype - mask: a boolean array holding a mask on the data, True is missing To construct an IntegerArray from generic array-like input, use :func:`pandas.array` with one of the integer dtypes (see examples).",,,,,,, 4,IntervalDtype,"An ExtensionDtype for Interval data. **This is not an actual numpy dtype**, but a duck type. Parameters ---------- subtype : str, np.dtype The dtype of the Interval bounds. Attributes ---------- subtype Methods ------- None Examples -------- >>> pd.IntervalDtype(subtype='int64') interval[int64]",An ExtensionDtype for Interval data.,"Examples -------- >>> pd.IntervalDtype(subtype='int64') interval[int64]","Parameters ---------- subtype : str, np.dtype The dtype of the Interval bounds.","Attributes ---------- subtype Methods ------- None",,"**This is not an actual numpy dtype**, but a duck type.",,,,,"**This is not an actual numpy dtype**, but a duck type.",,,,,,, 4,NonConsolidatableMixIn,hold methods for the nonconsolidatable blocks,hold methods for the nonconsolidatable blocks,,,,,,,,,,,,,,,,, 4,PlotAccessor,"Make plots of Series or DataFrame. Uses the backend specified by the option ``plotting.backend``. By default, matplotlib is used. Parameters ---------- data : Series or DataFrame The object for which the method is called. x : label or position, default None Only used if data is a DataFrame. y : label, position or list of label, positions, default None Allows plotting of one column versus another. Only used if data is a DataFrame. kind : str The kind of plot to produce: - 'line' : line plot (default) - 'bar' : vertical bar plot - 'barh' : horizontal bar plot - 'hist' : histogram - 'box' : boxplot - 'kde' : Kernel Density Estimation plot - 'density' : same as 'kde' - 'area' : area plot - 'pie' : pie plot - 'scatter' : scatter plot - 'hexbin' : hexbin plot. figsize : a tuple (width, height) in inches use_index : bool, default True Use index as ticks for x axis. title : str or list Title to use for the plot. If a string is passed, print the string at the top of the figure. If a list is passed and `subplots` is True, print each item in the list above the corresponding subplot. grid : bool, default None (matlab style default) Axis grid lines. legend : bool or {'reverse'} Place legend on axis subplots. style : list or dict The matplotlib line style per column. logx : bool or 'sym', default False Use log scaling or symlog scaling on x axis. .. versionchanged:: 0.25.0 logy : bool or 'sym' default False Use log scaling or symlog scaling on y axis. .. versionchanged:: 0.25.0 loglog : bool or 'sym', default False Use log scaling or symlog scaling on both x and y axes. .. versionchanged:: 0.25.0 xticks : sequence Values to use for the xticks. yticks : sequence Values to use for the yticks. xlim : 2-tuple/list ylim : 2-tuple/list rot : int, default None Rotation for ticks (xticks for vertical, yticks for horizontal plots). fontsize : int, default None Font size for xticks and yticks. colormap : str or matplotlib colormap object, default None Colormap to select colors from. If string, load colormap with that name from matplotlib. colorbar : bool, optional If True, plot colorbar (only relevant for 'scatter' and 'hexbin' plots). position : float Specify relative alignments for bar plot layout. From 0 (left/bottom-end) to 1 (right/top-end). Default is 0.5 (center). table : bool, Series or DataFrame, default False If True, draw a table using the data in the DataFrame and the data will be transposed to meet matplotlib's default layout. If a Series or DataFrame is passed, use passed data to draw a table. yerr : DataFrame, Series, array-like, dict and str See :ref:`Plotting with Error Bars ` for detail. xerr : DataFrame, Series, array-like, dict and str Equivalent to yerr. mark_right : bool, default True When using a secondary_y axis, automatically mark the column labels with ""(right)"" in the legend. include_bool : bool, default is False If True, boolean values can be plotted. backend : str, default None Backend to use instead of the backend specified in the option ``plotting.backend``. For instance, 'matplotlib'. Alternatively, to specify the ``plotting.backend`` for the whole session, set ``pd.options.plotting.backend``. .. versionadded:: 1.0.0 **kwargs Options to pass to matplotlib plotting method. Returns ------- :class:`matplotlib.axes.Axes` or numpy.ndarray of them If the backend is not the default matplotlib one, the return value will be the object returned by the backend. Notes ----- - See matplotlib documentation online for more on this subject - If `kind` = 'bar' or 'barh', you can specify relative alignments for bar plot layout by `position` keyword. From 0 (left/bottom-end) to 1 (right/top-end). Default is 0.5 (center)",Make plots of Series or DataFrame.,"Uses the backend specified by the option ``plotting.backend``. By default, matplotlib is used.","Parameters ---------- data : Series or DataFrame The object for which the method is called. x : label or position, default None Only used if data is a DataFrame. y : label, position or list of label, positions, default None Allows plotting of one column versus another. Only used if data is a DataFrame. kind : str The kind of plot to produce: - 'line' : line plot (default) - 'bar' : vertical bar plot - 'barh' : horizontal bar plot - 'hist' : histogram - 'box' : boxplot - 'kde' : Kernel Density Estimation plot - 'density' : same as 'kde' - 'area' : area plot - 'pie' : pie plot - 'scatter' : scatter plot - 'hexbin' : hexbin plot. figsize : a tuple (width, height) in inches use_index : bool, default True Use index as ticks for x axis. title : str or list Title to use for the plot. If a string is passed, print the string at the top of the figure. If a list is passed and `subplots` is True, print each item in the list above the corresponding subplot. grid : bool, default None (matlab style default) Axis grid lines. legend : bool or {'reverse'} Place legend on axis subplots. style : list or dict The matplotlib line style per column. logx : bool or 'sym', default False Use log scaling or symlog scaling on x axis. .. versionchanged:: 0.25.0 logy : bool or 'sym' default False Use log scaling or symlog scaling on y axis. .. versionchanged:: 0.25.0 loglog : bool or 'sym', default False Use log scaling or symlog scaling on both x and y axes. .. versionchanged:: 0.25.0 xticks : sequence Values to use for the xticks. yticks : sequence Values to use for the yticks. xlim : 2-tuple/list ylim : 2-tuple/list rot : int, default None Rotation for ticks (xticks for vertical, yticks for horizontal plots). fontsize : int, default None Font size for xticks and yticks. colormap : str or matplotlib colormap object, default None Colormap to select colors from. If string, load colormap with that name from matplotlib. colorbar : bool, optional If True, plot colorbar (only relevant for 'scatter' and 'hexbin' plots). position : float Specify relative alignments for bar plot layout. From 0 (left/bottom-end) to 1 (right/top-end). Default is 0.5 (center). table : bool, Series or DataFrame, default False If True, draw a table using the data in the DataFrame and the data will be transposed to meet matplotlib's default layout. If a Series or DataFrame is passed, use passed data to draw a table. yerr : DataFrame, Series, array-like, dict and str See :ref:`Plotting with Error Bars ` for detail. xerr : DataFrame, Series, array-like, dict and str Equivalent to yerr. mark_right : bool, default True When using a secondary_y axis, automatically mark the column labels with ""(right)"" in the legend. include_bool : bool, default is False If True, boolean values can be plotted. backend : str, default None Backend to use instead of the backend specified in the option ``plotting.backend``. For instance, 'matplotlib'. Alternatively, to specify the ``plotting.backend`` for the whole session, set ``pd.options.plotting.backend``. .. versionadded:: 1.0.0 **kwargs Options to pass to matplotlib plotting method. Returns ------- :class:`matplotlib.axes.Axes` or numpy.ndarray of them If the backend is not the default matplotlib one, the return value will be the object returned by the backend.","- If `kind` = 'bar' or 'barh', you can specify relative alignments for bar plot layout by `position` keyword. From 0 (left/bottom-end) to 1 (right/top-end). Default is 0.5 (center)",,"Uses the backend specified by the option ``plotting.backend``. By default, matplotlib is used. Notes ----- - See matplotlib documentation online for more on this subject - If `kind` = 'bar' or 'barh', you can specify relative alignments for bar plot layout by `position` keyword. From 0 (left/bottom-end) to 1 (right/top-end). Default is 0.5 (center)",,,- See matplotlib documentation online for more on this subject,,,,"Uses the backend specified by the option ``plotting.backend``. By default, matplotlib is used.",,,,, 4,SelectNSeries,"Implement n largest/smallest for Series Parameters ---------- obj : Series n : int keep : {'first', 'last'}, default 'first' Returns ------- nordered : Series",Implement n largest/smallest for Series,,"Parameters ---------- obj : Series n : int keep : {'first', 'last'}, default 'first' Returns ------- nordered : Series",,,,,,,,,,,,,,, 4,SetopCheck,"This is called to decorate the set operations of IntervalIndex to perform the type check in advance.","This is called to decorate the set operations of IntervalIndex to perform the type check in advance.",,,,,"This is called to decorate the set operations of IntervalIndex to perform the type check in advance.",,,,,,,,,,,, 4,SQLiteTable,"Patch the SQLTable for fallback support. Instead of a table variable just use the Create Table statement.",Patch the SQLTable for fallback support.,,,,,Instead of a table variable just use the Create Table statement.,,,,,Instead of a table variable just use the Create Table statement.,,,,,,, 4,SQLTable,"For mapping Pandas tables to SQL tables. Uses fact that table is reflected by SQLAlchemy to do better type conversions. Also holds various flags needed to avoid having to pass them between functions all the time. TODO: support for multiIndex",For mapping Pandas tables to SQL tables.,,,"Uses fact that table is reflected by SQLAlchemy to do better type conversions. Also holds various flags needed to avoid having to pass them between functions all the time.",,,TODO: support for multiIndex,,,,,,,,,,, 4,StringArray,"Extension array for string data. .. versionadded:: 1.0.0 .. warning:: StringArray is considered experimental. The implementation and parts of the API may change without warning. In particular, the NA value used may change to no longer be ``numpy.nan``. Parameters ---------- values : array-like The array of data. .. warning:: Currently, this expects an object-dtype ndarray where the elements are Python strings. This may change without warning in the future. copy : bool, default False Whether to copy the array of data. Attributes ---------- None Methods ------- None See Also -------- Series.str The string methods are available on Series backed by a StringArray. Notes ----- StringArray returns a BooleanArray for comparison methods. Examples -------- >>> pd.array(['This is', 'some text', None, 'data.'], dtype=""string"") ['This is', 'some text', NA, 'data.'] Length: 4, dtype: string Unlike ``object`` dtype arrays, ``StringArray`` doesn't allow non-string values. >>> pd.array(['1', 1], dtype=""string"") Traceback (most recent call last): ... ValueError: StringArray requires an object-dtype ndarray of strings. For comparison methods, this returns a :class:`pandas.BooleanArray` >>> pd.array([""a"", None, ""c""], dtype=""string"") == ""a"" [True, NA, False] Length: 3, dtype: boolean",Extension array for string data.,"Examples -------- >>> pd.array(['This is', 'some text', None, 'data.'], dtype=""string"") ['This is', 'some text', NA, 'data.'] Length: 4, dtype: string Unlike ``object`` dtype arrays, ``StringArray`` doesn't allow non-string values. >>> pd.array(['1', 1], dtype=""string"") Traceback (most recent call last): ... ValueError: StringArray requires an object-dtype ndarray of strings. For comparison methods, this returns a :class:`pandas.BooleanArray` >>> pd.array([""a"", None, ""c""], dtype=""string"") == ""a"" [True, NA, False]","Parameters ---------- values : array-like The array of data. .. warning:: Currently, this expects an object-dtype ndarray where the elements are Python strings. This may change without warning in the future. copy : bool, default False Whether to copy the array of data.","Attributes ---------- None For comparison methods, this returns a :class:`pandas.BooleanArray` Methods ------- None",.. versionadded:: 1.0.0,"Notes ----- StringArray returns a BooleanArray for comparison methods.",,ValueError: StringArray requires an object-dtype ndarray of strings.,"See Also -------- Series.str The string methods are available on Series backed by a StringArray.",,".. warning:: StringArray is considered experimental. The implementation and parts of the API may change without warning. In particular, the NA value used may change to no longer be ``numpy.nan``.",,,,,,, 4,StringMethods,"Vectorized string functions for Series and Index. NAs stay NA unless handled otherwise by a particular method. Patterned after Python's string methods, with some inspiration from R's stringr package. Examples -------- >>> s.str.split('_') >>> s.str.replace('_', '')",Vectorized string functions for Series and Index.,"Examples -------- >>> s.str.split('_') >>> s.str.replace('_', '')",,,,"NAs stay NA unless handled otherwise by a particular method. Patterned after Python's string methods, with some inspiration from R's stringr package.",,,,,,"""NAs stay NA unless handled otherwise by a particular method. Patterned after Python's string methods, with some inspiration from R's stringr package. """,,,"Patterned after Python's string methods, with some inspiration from R's stringr package. """,,, 4,TermValue,hold a term value the we use to construct a condition/filter,hold a term value the we use to construct a condition/filter,,,,,,,,,,,,,,,,, 4,TestDatetimelikeSubtype,Tests specific to IntervalIndex with datetime-like subtype,Tests specific to IntervalIndex with datetime-like subtype,,,,,,,,,,,,,,,,, 4,TestFromArrays,Tests specific to IntervalIndex.from_arrays,Tests specific to IntervalIndex.from_arrays,,,,,,,,,,,,,,,,, 4,TestFromTuples,Tests specific to IntervalIndex.from_tuples,Tests specific to IntervalIndex.from_tuples,,,,,,,,,,,,,,,,, 4,TestPeriodIndexSeriesComparisonConsistency,"Test PeriodIndex and Period Series Ops consistency TODO: needs parametrization+de-duplication",Test PeriodIndex and Period Series Ops consistency,,,,,,TODO: needs parametrization+de-duplication,,,,,,,,,,, 4,TestSorted,everything you wanted to test about sorting,everything you wanted to test about sorting,,,,,,,,,,,,,,,,, 4,UnsortedIndexError,"Error raised when attempting to get a slice of a MultiIndex, and the index has not been lexsorted. Subclass of `KeyError`.","Error raised when attempting to get a slice of a MultiIndex, and the index has not been lexsorted.",,,Subclass of `KeyError`.,,,,,,,,,,,,,, 5,_BZ2Proxy,"Small proxy class that enables external file object support for ""r:bz2"" and ""w:bz2"" modes. This is actually a workaround for a limitation in bz2 module's BZ2File class which (unlike gzip.GzipFile) has no support for a file object argument.","Small proxy class that enables external file object support for ""r:bz2"" and ""w:bz2"" modes.",,,,,"This is actually a workaround for a limitation in bz2 module's BZ2File class which (unlike gzip.GzipFile) has no support for a file object argument.",,,,,"This is actually a workaround for a limitation in bz2 module's BZ2File class which (unlike gzip.GzipFile) has no support for a file object argument.",,,,,,, 5,_FileInFile,"A thin wrapper around an existing file object that provides a part of its data as an individual file object.","A thin wrapper around an existing file object that provides a part of its data as an individual file object.",,,,,,,,,,,,,,,,, 5,_MovedItems,Lazy loading of moved objects,Lazy loading of moved objects,,,,,,,,,,,,,,,,, 5,_PathParents,"This object provides sequence-like access to the logical ancestors of a path. Don't try to construct it yourself.","This object provides sequence-like access to the logical ancestors of a path.",,,,,,,,,,Don't try to construct it yourself.,,,,,,, 5,And,"Requires all given :class:`ParseExpression` s to be found in the given order. Expressions may be separated by whitespace. May be constructed using the ``'+'`` operator. May also be constructed using the ``'-'`` operator, which will suppress backtracking. Example:: integer = Word(nums) name_expr = OneOrMore(Word(alphas)) expr = And([integer(""id""),name_expr(""name""),integer(""age"")]) # more easily written as: expr = integer(""id"") + name_expr(""name"") + integer(""age"")",,"Example:: integer = Word(nums) name_expr = OneOrMore(Word(alphas)) expr = And([integer(""id""),name_expr(""name""),integer(""age"")]) # more easily written as: expr = integer(""id"") + name_expr(""name"") + integer(""age"")",,"Expressions may be separated by whitespace. May be constructed using the ``'+'`` operator. May also be constructed using the ``'-'`` operator, which will suppress backtracking.",,"Requires all given :class:`ParseExpression` s to be found in the given order. Expressions may be separated by whitespace. May be constructed using the ``'+'`` operator. May also be constructed using the ``'-'`` operator, which will suppress backtracking.",,,,,,,,"Requires all given :class:`ParseExpression` s to be found in the given order. Expressions may be separated by whitespace. May be constructed using the ``'+'`` operator. May also be constructed using the ``'-'`` operator, which will suppress backtracking.",,,, 5,Argument,"Arguments are positional parameters to a command. They generally provide fewer features than options but can have infinite ``nargs`` and are required by default. All parameters are passed onwards to the parameter constructor.",Arguments are positional parameters to a command.,"They generally provide fewer features than options but can have infinite ``nargs`` and are required by default.",,All parameters are passed onwards to the parameter constructor,,,,,,,,,,,,,, 5,BaseCommand_,A CLI command.,A CLI command.,,,,,,,,,,,,,,,,, 5,BrokenStdoutLoggingError,Raised if BrokenPipeError occurs for the stdout stream while logging.,Raised if BrokenPipeError occurs for the stdout stream while logging.,,,,,,,,,,,,,,,,, 5,Bucket,"Buckets are used to store the bytecode for one template. It's created and initialized by the bytecode cache and passed to the loading functions. The buckets get an internal checksum from the cache assigned and use this to automatically reject outdated cache material. Individual bytecode cache subclasses don't have to care about cache invalidation.","Buckets are used to store the bytecode for one template. It's created and initialized by the bytecode cache and passed to the loading functions.",,,"The buckets get an internal checksum from the cache assigned and use this to automatically reject outdated cache material.",,,,,,,,,,,,,Individual bytecode cache subclasses don't have to care about cache invalidation., 5,CallBlock,"Like a macro without a name but a call instead. `call` is called with the unnamed macro as `caller` argument this node holds.",,,"`call` is called with the unnamed macro as `caller` argument this node holds.",,,Like a macro without a name but a call instead.,,,,,,,,,,,, 5,CaseInsensitiveDict,"A case-insensitive ``dict``-like object. Implements all methods and operations of ``MutableMapping`` as well as dict's ``copy``. Also provides ``lower_items``. All keys are expected to be strings. The structure remembers the case of the last key to be set, and ``iter(instance)``, ``keys()``, ``items()``, ``iterkeys()``, and ``iteritems()`` will contain case-sensitive keys. However, querying and contains testing is case insensitive:: cid = CaseInsensitiveDict() cid['Accept'] = 'application/json' cid['aCCEPT'] == 'application/json' # True list(cid) == ['Accept'] # True For example, ``headers['content-encoding']`` will return the value of a ``'Content-Encoding'`` response header, regardless of how the header name was originally stored. If the constructor, ``.update``, or equality comparison operations are given keys that have equal ``.lower()``s, the behavior is undefined.","A case-insensitive ``dict``-like object. Implements all methods and operations of ``MutableMapping`` as well as dict's ``copy``. Also provides ``lower_items``.","However, querying and contains testing is case insensitive:: cid = CaseInsensitiveDict() cid['Accept'] = 'application/json' cid['aCCEPT'] == 'application/json' # True list(cid) == ['Accept'] # True For example, ``headers['content-encoding']`` will return the value of a ``'Content-Encoding'`` response header, regardless of how the header name was originally stored. If the constructor, ``.update``, or equality comparison operations are given keys that have equal ``.lower()``s, the behavior is undefined.",,"All keys are expected to be strings. The structure remembers the case of the last key to be set, and ``iter(instance)``, ``keys()``, ``items()``, ``iterkeys()``, and ``iteritems()`` will contain case-sensitive keys.",,"All keys are expected to be strings. The structure remembers the case of the last key to be set, and ``iter(instance)``, ``keys()``, ``items()``, ``iterkeys()``, and ``iteritems()`` will contain case-sensitive keys.",,,,,,,,,,,, 5,ColoredString,Enhanced string for __len__ operations on Colored output.,Enhanced string for __len__ operations on Colored output.,,,,,,,,,,,,,,,,, 5,CommandError,Raised when there is an error in command-line arguments,Raised when there is an error in command-line arguments,,,,,,,,,,,,,Raised when there is an error in command-line arguments,,,, 5,ConnectionError,A Connection error occurred.,A Connection error occurred.,,,,,,,,,,,,,,,,, 5,CONSOLE_SCREEN_BUFFER_INFO,struct in wincon.h.,struct in wincon.h.,,,,,,,,,,,,,,,,, 5,Context_,"The template context holds the variables of a template. It stores the values passed to the template and also the names the template exports. Creating instances is neither supported nor useful as it's created automatically at various stages of the template evaluation and should not be created by hand. The context is immutable. Modifications on :attr:`parent` **must not** happen and modifications on :attr:`vars` are allowed from generated template code only. Template filters and global functions marked as :func:`contextfunction`\s get the active context passed as first argument and are allowed to access the context read-only. The template context supports read only dict operations (`get`, `keys`, `values`, `items`, `iterkeys`, `itervalues`, `iteritems`, `__getitem__`, `__contains__`). Additionally there is a :meth:`resolve` method that doesn't fail with a `KeyError` but returns an :class:`Undefined` object for missing variables.",The template context holds the variables of a template.,"It stores the values passed to the template and also the names the template exports. Modifications on :attr:`parent` **must not** happen and modifications on :attr:`vars` are allowed from generated template code only. Template filters and global functions marked as :func:`contextfunction`\s get the active context passed as first argument and are allowed to access the context read-only.","Modifications on :attr:`parent` **must not** happen and modifications on :attr:`vars` are allowed from generated template code only. Template filters and global functions marked as :func:`contextfunction`\s get the active context passed as first argument and are allowed to access the context read-only. The template context supports read only dict operations (`get`, `keys`, `values`, `items`, `iterkeys`, `itervalues`, `iteritems`, `__getitem__`, `__contains__`). Additionally there is a :meth:`resolve` method that doesn't fail with a `KeyError` but returns an :class:`Undefined` object for missing variables.",,,"Creating instances is neither supported nor useful as it's created automatically at various stages of the template evaluation and should not be created by hand. The context is immutable.",,,,,"Creating instances is neither supported nor useful as it's created automatically at various stages of the template evaluation and should not be created by hand.",,,,,,, 5,ConvertingTuple,A converting tuple wrapper.,A converting tuple wrapper.,,,,,,,,,,,,,,,,, 5,CookieConflictError,"There are two cookies that meet the criteria specified in the cookie jar. Use .get and .set and include domain and path args in order to be more specific.",There are two cookies that meet the criteria specified in the cookie jar.,,,,,Use .get and .set and include domain and path args in order to be more specific.,,,,,,Use .get and .set and include domain and path args in order to be more specific.,,,,,, 5,DataViewSequence,"A sequence of data views. Each entry is an instance of `item_class`.",A sequence of data views.,,Each entry is an instance of `item_class`.,,,,,,,,,,,,,,, 5,Date,A date literal.,A date literal.,,,,,,,,,,,,,,,,, 5,DependencyWarning,"Warned when an attempt is made to import a module with missing optional dependencies.",Warned when an attempt is made to import a module with missing optional dependencies.,,,,,,,,,,,,,,,,, 5,DirectedGraph,A graph structure with directed edges.,A graph structure with directed edges.,,,,,,,,,,,,,,,,, 5,Distribution,"A base class for distributions, whether installed or from indexes. Either way, it must have some metadata, so that's all that's needed for construction.","A base class for distributions, whether installed or from indexes.","Either way, it must have some metadata, so that's all that's needed for construction.",,,,,,,,,,,,,,,, 5,DocumentErrorTree,"Implements a dict-like class to query errors by indexes following the structure of a validated document.",Implements a dict-like class to query errors,,,by indexes following the structure of a validated document.,,,,,,,,,,,,,, 5,Environment,"The core component of Jinja is the `Environment`. It contains important shared variables like configuration, filters, tests, globals and others. Instances of this class may be modified if they are not shared and if no template was loaded so far. Modifications on environments after the first template was loaded will lead to surprising effects and undefined behavior. Here are the possible initialization parameters: `block_start_string` The string marking the beginning of a block. Defaults to ``'{%'``. `block_end_string` The string marking the end of a block. Defaults to ``'%}'``. `variable_start_string` The string marking the beginning of a print statement. Defaults to ``'{{'``. `variable_end_string` The string marking the end of a print statement. Defaults to ``'}}'``. `comment_start_string` The string marking the beginning of a comment. Defaults to ``'{#'``. `comment_end_string` The string marking the end of a comment. Defaults to ``'#}'``. `line_statement_prefix` If given and a string, this will be used as prefix for line based statements. See also :ref:`line-statements`. `line_comment_prefix` If given and a string, this will be used as prefix for line based comments. See also :ref:`line-statements`. .. versionadded:: 2.2 `trim_blocks` If this is set to ``True`` the first newline after a block is removed (block, not variable tag!). Defaults to `False`. `lstrip_blocks` If this is set to ``True`` leading spaces and tabs are stripped from the start of a line to a block. Defaults to `False`. `newline_sequence` The sequence that starts a newline. Must be one of ``'\r'``, ``'\n'`` or ``'\r\n'``. The default is ``'\n'`` which is a useful default for Linux and OS X systems as well as web applications. `keep_trailing_newline` Preserve the trailing newline when rendering templates. The default is ``False``, which causes a single newline, if present, to be stripped from the end of the template. .. versionadded:: 2.7 `extensions` List of Jinja extensions to use. This can either be import paths as strings or extension classes. For more information have a look at :ref:`the extensions documentation `. `optimized` should the optimizer be enabled? Default is ``True``. `undefined` :class:`Undefined` or a subclass of it that is used to represent undefined values in the template. `finalize` A callable that can be used to process the result of a variable expression before it is output. For example one can convert ``None`` implicitly into an empty string here. `autoescape` If set to ``True`` the XML/HTML autoescaping feature is enabled by default. For more details about autoescaping see :class:`~jinja2.utils.Markup`. As of Jinja 2.4 this can also be a callable that is passed the template name and has to return ``True`` or ``False`` depending on autoescape should be enabled by default. .. versionchanged:: 2.4 `autoescape` can now be a function `loader` The template loader for this environment. `cache_size` The size of the cache. Per default this is ``400`` which means that if more than 400 templates are loaded the loader will clean out the least recently used template. If the cache size is set to ``0`` templates are recompiled all the time, if the cache size is ``-1`` the cache will not be cleaned. .. versionchanged:: 2.8 The cache size was increased to 400 from a low 50. `auto_reload` Some loaders load templates from locations where the template sources may change (ie: file system or database). If ``auto_reload`` is set to ``True`` (default) every time a template is requested the loader checks if the source changed and if yes, it will reload the template. For higher performance it's possible to disable that. `bytecode_cache` If set to a bytecode cache object, this object will provide a cache for the internal Jinja bytecode so that templates don't have to be parsed if they were not changed. See :ref:`bytecode-cache` for more information. `enable_async` If set to true this enables async template execution which allows you to take advantage of newer Python features. This requires Python 3.6 or later.","The core component of Jinja is the `Environment`It contains important shared variables like configuration, filters, tests, globals and others. Instances of this class may be modified if they are not shared and if no template was loaded so far. Modifications on environments after the first template was loaded will lead to surprising effects and undefined behavior.",,"Here are the possible initialization parameters: `block_start_string` The string marking the beginning of a block. Defaults to ``'{%'``. `block_end_string` The string marking the end of a block. Defaults to ``'%}'``. `variable_start_string` The string marking the beginning of a print statement. Defaults to ``'{{'``. `variable_end_string` The string marking the end of a print statement. Defaults to ``'}}'``. `comment_start_string` The string marking the beginning of a comment. Defaults to ``'{#'``. `comment_end_string` The string marking the end of a comment. Defaults to ``'#}'``. `line_statement_prefix` If given and a string, this will be used as prefix for line based statements. See also :ref:`line-statements`. `line_comment_prefix` If given and a string, this will be used as prefix for line based comments. See also :ref:`line-statements`. .. versionadded:: 2.2 `trim_blocks` If this is set to ``True`` the first newline after a block is removed (block, not variable tag!). Defaults to `False`. `lstrip_blocks` If this is set to ``True`` leading spaces and tabs are stripped from the start of a line to a block. Defaults to `False`. `newline_sequence` The sequence that starts a newline. Must be one of ``'\r'``, ``'\n'`` or ``'\r\n'``. The default is ``'\n'`` which is a useful default for Linux and OS X systems as well as web applications. `keep_trailing_newline` Preserve the trailing newline when rendering templates. The default is ``False``, which causes a single newline, if present, to be stripped from the end of the template. .. versionadded:: 2.7 `extensions` List of Jinja extensions to use. This can either be import paths as strings or extension classes. For more information have a look at :ref:`the extensions documentation `. `optimized` should the optimizer be enabled? Default is ``True``. `undefined` :class:`Undefined` or a subclass of it that is used to represent undefined values in the template. `finalize` A callable that can be used to process the result of a variable expression before it is output. For example one can convert ``None`` implicitly into an empty string here. `autoescape` If set to ``True`` the XML/HTML autoescaping feature is enabled by default. For more details about autoescaping see :class:`~jinja2.utils.Markup`. As of Jinja 2.4 this can also be a callable that is passed the template name and has to return ``True`` or ``False`` depending on autoescape should be enabled by default. .. versionchanged:: 2.4 `autoescape` can now be a function `loader` The template loader for this environment. `cache_size` The size of the cache. Per default this is ``400`` which means that if more than 400 templates are loaded the loader will clean out the least recently used template. If the cache size is set to ``0`` templates are recompiled all the time, if the cache size is ``-1`` the cache will not be cleaned. .. versionchanged:: 2.8 The cache size was increased to 400 from a low 50. `auto_reload` Some loaders load templates from locations where the template sources may change (ie: file system or database). If ``auto_reload`` is set to ``True`` (default) every time a template is requested the loader checks if the source changed and if yes, it will reload the template. For higher performance it's possible to disable that. `bytecode_cache` If set to a bytecode cache object, this object will provide a cache for the internal Jinja bytecode so that templates don't have to be parsed if they were not changed. See :ref:`bytecode-cache` for more information. `enable_async` If set to true this enables async template execution which allows you to take advantage of newer Python features. This requires Python 3.6 or later.","It contains important shared variables like configuration, filters, tests, globals and others. Instances of this class may be modified if they are not shared and if no template was loaded so far. Modifications on environments after the first template was loaded will lead to surprising effects and undefined behavior.",".. versionadded:: 2.2 .. versionadded:: 2.7",,,"Modifications on environments after the first template was loaded will lead to surprising effects and undefined behavior.",See :ref:`bytecode-cache` for more information.,,"Modifications on environments after the first template was loaded will lead to surprising effects and undefined behavior.",,,,,,, 5,EOF,"Raised when EOF is read from a child. This usually means the child has exited.",This usually means the child has exited.,,,,,,,Raised when EOF is read from a child.,,,,,,,,,, 5,ExceptionPexpect,Base class for all exceptions raised by this module.,Base class for all exceptions raised by this module.,,,,,,,,,,,,,,,,, 5,ExtractError,General exception for extract errors.,General exception for extract errors.,,,,,,,,,,,,,,,,, 5,FakeFile,"Wrap a list of lines in an object with readline() to make ConfigParser happy.",Wrap a list of lines in an object,,,with readline() to make ConfigParser happy.,,,,,,,,,,,,,, 5,fdspawn,"This is like pexpect.spawn but allows you to supply your own open file descriptor. For example, you could use it to read through a file looking for patterns, or to control a modem or serial device.","This is like pexpect.spawn but allows you to supply your own open file descriptor.","For example, you could use it to read through a file looking for patterns, or to control a modem or serial device.",,,,,,,,,,,,,,,, 5,FileMetadata,"Metadata handler for standalone PKG-INFO files Usage:: metadata = FileMetadata(""/path/to/PKG-INFO"") This provider rejects all data and metadata requests except for PKG-INFO, which is treated as existing, and will be the contents of the file at the provided location. Metadata handler for standalone PKG-INFO files Usage:: metadata = FileMetadata(""/path/to/PKG-INFO"") This provider rejects all data and metadata requests except for PKG-INFO, which is treated as existing, and will be the contents of the file at the provided location.",Metadata handler for standalone PKG-INFO files,"Usage:: metadata = FileMetadata(""/path/to/PKG-INFO"") Usage:: metadata = FileMetadata(""/path/to/PKG-INFO"")",,"This provider rejects all data and metadata requests except for PKG-INFO, which is treated as existing, and will be the contents of the file at the provided location. Metadata handler for standalone PKG-INFO files This provider rejects all data and metadata requests except for PKG-INFO, which is treated as existing, and will be the contents of the file at the provided location.",,,,,,,,,,,,,, 5,FileSystemLoader,"Loads templates from the file system. This loader can find templates in folders on the file system and is the preferred way to load them. The loader takes the path to the templates as string, or if multiple locations are wanted a list of them which is then looked up in the given order:: >>> loader = FileSystemLoader('/path/to/templates') >>> loader = FileSystemLoader(['/path/to/templates', '/other/path']) Per default the template encoding is ``'utf-8'`` which can be changed by setting the `encoding` parameter to something else. To follow symbolic links, set the *followlinks* parameter to ``True``:: >>> loader = FileSystemLoader('/path/to/templates', followlinks=True) .. versionchanged:: 2.8+ The *followlinks* parameter was added.","Loads templates from the file system. This loader can find templates in folders on the file system and is the preferred way to load them.","The loader takes the path to the templates as string, or if multiple locations are wanted a list of them which is then looked up in the given order:: >>> loader = FileSystemLoader('/path/to/templates') >>> loader = FileSystemLoader(['/path/to/templates', '/other/path']) Per default the template encoding is ``'utf-8'`` which can be changed by setting the `encoding` parameter to something else. To follow symbolic links, set the *followlinks* parameter to ``True``::",,,".. versionchanged:: 2.8+ The *followlinks* parameter was added.",,,,,,,,,,,,, 5,Filter,Alphabetizes attributes for elements,Alphabetizes attributes for elements,,,,,,,,,,,,,,,,, 5,Filter_,Sanitizes token stream of XHTML+MathML+SVG and of inline style attributes,Sanitizes token stream of XHTML+MathML+SVG and of inline style attributes,,,,,,,,,,,,,,,,, 5,FollowedBy,"Lookahead matching of the given parse expression. ``FollowedBy`` does *not* advance the parsing position within the input string, it only verifies that the specified parse expression matches at the current position. ``FollowedBy`` always returns a null token list. If any results names are defined in the lookahead expression, those *will* be returned for access by name. Example:: # use FollowedBy to match a label only if it is followed by a ':' data_word = Word(alphas) label = data_word + FollowedBy(':') attr_expr = Group(label + Suppress(':') + OneOrMore(data_word, stopOn=label).setParseAction(' '.join)) OneOrMore(attr_expr).parseString(""shape: SQUARE color: BLACK posn: upper left"").pprint() prints:: [['shape', 'SQUARE'], ['color', 'BLACK'], ['posn', 'upper left']]","Lookahead matching of the given parse expression. ``FollowedBy`` does *not* advance the parsing position within the input string, it only verifies that the specified parse expression matches at the current position. ``FollowedBy`` always returns a null token list.","``FollowedBy`` does *not* advance the parsing position within the input string, it only verifies that the specified parse expression matches at the current position. ``FollowedBy`` always returns a null token list. If any results names are defined in the lookahead expression, those *will* be returned for access by name. Example:: # use FollowedBy to match a label only if it is followed by a ':' data_word = Word(alphas) label = data_word + FollowedBy(':') attr_expr = Group(label + Suppress(':') + OneOrMore(data_word, stopOn=label).setParseAction(' '.join)) OneOrMore(attr_expr).parseString(""shape: SQUARE color: BLACK posn: upper left"").pprint() prints:: [['shape', 'SQUARE'], ['color', 'BLACK'], ['posn', 'upper left']]",,"If any results names are defined in the lookahead expression, those *will* be returned for access by name.",,"``FollowedBy`` does *not* advance the parsing position within the input string, it only verifies that the specified parse expression matches at the current position. ``FollowedBy`` always returns a null token list. If any results names are defined in the lookahead expression, those *will* be returned for access by name.",,,,,"``FollowedBy`` does *not* advance the parsing position within the input string, it only verifies that the specified parse expression matches at the current position. ``FollowedBy`` always returns a null token list. If any results names are defined in the lookahead expression, those *will* be returned for access by name.",,,,,,, 5,Getattr,"Get an attribute or item from an expression that is a ascii-only bytestring and prefer the attribute.",,"Get an attribute or item from an expression that is a ascii-only bytestring and prefer the attribute.",,,,,"Get an attribute or item from an expression that is a ascii-only bytestring and prefer the attribute.",,,,,,,,,,, 5,HashMissing,A hash was needed for a requirement but is absent.,A hash was needed for a requirement but is absent.,,,,,,,,,,,,,,,,, 5,HebrewProber,"The prober is divided between two SBCharSetProbers and a HebrewProber, all of which are managed, created, fed data, inquired and deleted by the SBCSGroupProber. The two SBCharSetProbers identify that the text is in fact some kind of Hebrew, Logical or Visual. The final decision about which one is it is made by the HebrewProber by combining final-letter scores with the scores of the two SBCharSetProbers to produce a final answer. The SBCSGroupProber is responsible for stripping the original text of HTML tags, English characters, numbers, low-ASCII punctuation characters, spaces and new lines. It reduces any sequence of such characters to a single space. The buffer fed to each prober in the SBCS group prober is pure text in high-ASCII. The two SBCharSetProbers (model probers) share the same language model: Win1255Model. The first SBCharSetProber uses the model normally as any other SBCharSetProber does, to recognize windows-1255, upon which this model was built. The second SBCharSetProber is told to make the pair-of-letter lookup in the language model backwards. This in practice exactly simulates a visual Hebrew model using the windows-1255 logical Hebrew model. The HebrewProber is not using any language model. All it does is look for final-letter evidence suggesting the text is either logical Hebrew or visual Hebrew. Disjointed from the model probers, the results of the HebrewProber alone are meaningless. HebrewProber always returns 0.00 as confidence since it never identifies a charset by itself. Instead, the pointer to the HebrewProber is passed to the model probers as a helper ""Name Prober"". When the Group prober receives a positive identification from any prober, it asks for the name of the charset identified. If the prober queried is a Hebrew model prober, the model prober forwards the call to the HebrewProber to make the final decision. In the HebrewProber, the decision is made according to the final-letters scores maintained and Both model probers scores. The answer is returned in the form of the name of the charset identified, either ""windows-1255"" or ""ISO-8859-8""",It reduces any sequence of such characters to a single space.,"The final decision about which one is it is made by the HebrewProber by combining final-letter scores with the scores of the two SBCharSetProbers to produce a final answer. The first SBCharSetProber uses the model normally as any other SBCharSetProber does, to recognize windows-1255, upon which this model was built. The second SBCharSetProber is told to make the pair-of-letter lookup in the language model backwards. All it does is look for final-letter evidence suggesting the text is either logical Hebrew or visual Hebrew. HebrewProber always returns 0.00 as confidence since it never identifies a charset by itself. Instead, the pointer to the HebrewProber is passed to the model probers as a helper ""Name Prober"". When the Group prober receives a positive identification from any prober, it asks for the name of the charset identified. If the prober queried is a Hebrew model prober, the model prober forwards the call to the HebrewProber to make the final decision. In the HebrewProber, the decision is made according to the final-letters scores maintained and Both model probers scores. The answer is returned in the form of the name of the charset identified, either ""windows-1255"" or ""ISO-8859-8""",,,,"The prober is divided between two SBCharSetProbers and a HebrewProber, all of which are managed, created, fed data, inquired and deleted by the SBCSGroupProber. The two SBCharSetProbers identify that the text is in fact some kind of Hebrew, Logical or Visual. The SBCSGroupProber is responsible for stripping the original text of HTML tags, English characters, numbers, low-ASCII punctuation characters, spaces and new lines. The buffer fed to each prober in the SBCS group prober is pure text in high-ASCII. The two SBCharSetProbers (model probers) share the same language model: Win1255Model. This in practice exactly simulates a visual Hebrew model using the windows-1255 logical Hebrew model. The HebrewProber is not using any language model. Disjointed from the model probers, the results of the HebrewProber alone are meaningless.",,,,,,,,,,,, 5,HTMLParser,"HTML parser Generates a tree structure from a stream of (possibly malformed) HTML.","HTML parser Generates a tree structure from a stream of (possibly malformed) HTML.",,,,,,,,,,,,,,,,, 5,HTTPError,An HTTP error occurred.,An HTTP error occurred.,,,,,,,,,,,,,,,,, 5,InCellPhase,http://www.whatwg.org/specs/web-apps/current-work/in-cell,,,,,,,,,http://www.whatwg.org/specs/web-apps/current-work/in-cell,,,,,,,,, 5,InsecurePlatformWarning,Warned when certain SSL configuration is not available on a platform.,Warned when certain SSL configuration is not available on a platform.,,,,,,,,,,,,,,,,, 5,InspectedValidator,Metaclass for all validators ,Metaclass for all validators ,,,,,,,,,,,,,,,,, 5,InstalledDistribution,"Created with the *path* of the ``.dist-info`` directory provided to the constructor. It reads the metadata contained in ``pydist.json`` when it is instantiated., or uses a passed in Metadata instance (useful for when dry-run mode is being used).",,"It reads the metadata contained in ``pydist.json`` when it is instantiated., or uses a passed in Metadata instance (useful for when dry-run mode is being used).",,,,Created with the *path* of the ``.dist-info`` directory provided to the constructor.,,,,,,,,,,,, 5,InternalName,"An internal name in the compiler. You cannot create these nodes yourself but the parser provides a :meth:`~jinja2.parser.Parser.free_identifier` method that creates a new identifier for you. This identifier is not available from the template and is not threated specially by the compiler.",An internal name in the compiler,,,,,"You cannot create these nodes yourself but the parser provides a :meth:`~jinja2.parser.Parser.free_identifier` method that creates a new identifier for you. This identifier is not available from the template and is not threated specially by the compiler.",,,,,"You cannot create these nodes yourself but the parser provides a :meth:`~jinja2.parser.Parser.free_identifier` method that creates a new identifier for you. This identifier is not available from the template and is not threated specially by the compiler.",,,,,,, 5,InvalidDateError,A date field was improperly specified.,A date field was improperly specified.,,,,,,,,,,,,,,,,, 5,InvalidHeaderError,Exception for invalid headers.,Exception for invalid headers.,,,,,,,,,,,,,,,,, 5,InvalidVersion,"An invalid version was found, users should refer to PEP 440.","An invalid version was found,",users should refer to PEP 440.,,,,,,,,,,users should refer to PEP 440.,,,,,, 5,Kanji,Unicode set for Kanji Unicode Character Range,Unicode set for Kanji Unicode Character Range,,,,,,,,,,,,,,,,, 5,Katakana,Unicode set for Katakana  Unicode Character Range,Unicode set for Katakana Unicode Character Range,,,,,,,,,,,,,,,,, 5,KeyType,"The type of a Key. Keys can be bare (unquoted), or quoted using basic (""), or literal (') quotes following the same escaping rules as single-line StringType.",The type of a Key.,,"Keys can be bare (unquoted), or quoted using basic (""), or literal (') quotes following the same escaping rules as single-line StringType.","Keys can be bare (unquoted), or quoted using basic (""), or literal (') quotes following the same escaping rules as single-line StringType.",,,,,,,,,,,,,, 5,LockFailed,"Lock file creation failed for some other reason. >>> try: ... raise LockFailed ... except LockError: ... Pass",Lock file creation failed for some other reason.,">>> try: ... raise LockFailed ... except LockError: ... Pass",,,,,,,,,,,,,,,, 5,LRUCache,"A simple LRU Cache implementation. this is fast for small capacities (something below 1000) but doesn't scale. But as long as it's only used as storage for templates this won't do any harm.",A simple LRU Cache implementation.,,,,,"this is fast for small capacities (something below 1000) but doesn't scale. But as long as it's only used as storage for templates this won't do any harm.",,,,,"this is fast for small capacities (something below 1000) but doesn't scale. But as long as it's only used as storage for templates this won't do any harm.",,,,,,, 5,Markup,"A string that is ready to be safely inserted into an HTML or XML document, either because it was escaped or because it was marked safe. Passing an object to the constructor converts it to text and wraps it to mark it safe without escaping. To escape the text, use the :meth:`escape` class method instead. >>> Markup('Hello, World!') Markup('Hello, World!') >>> Markup(42) Markup('42') >>> Markup.escape('Hello, World!') Markup('Hello <em>World</em>!') This implements the ``__html__()`` interface that some frameworks use. Passing an object that implements ``__html__()`` will wrap the output of that method, marking it safe. >>> class Foo: ... def __html__(self): ... return 'foo' ... >>> Markup(Foo()) Markup('foo') This is a subclass of the text type (``str`` in Python 3, ``unicode`` in Python 2). It has the same methods as that type, but all methods escape their arguments and return a ``Markup`` instance. >>> Markup('%s') % 'foo & bar' Markup('foo & bar') >>> Markup('Hello ') + '' Markup('Hello <foo>')","A string that is ready to be safely inserted into an HTML or XML document, either because it was escaped or because it was marked safe.","To escape the text, use the :meth:`escape` class method instead. >>> Markup('Hello, World!') Markup('Hello, World!') >>> Markup(42) Markup('42') >>> Markup.escape('Hello, World!') Markup('Hello <em>World</em>!') This implements the ``__html__()`` interface that some frameworks use. Passing an object that implements ``__html__()`` will wrap the output of that method, marking it safe. >>> class Foo: ... def __html__(self): ... return 'foo' ... >>> Markup(Foo()) Markup('foo') This is a subclass of the text type (``str`` in Python 3, ``unicode`` in Python 2). It has the same methods as that type, but all methods escape their arguments and return a ``Markup`` instance. >>> Markup('%s') % 'foo & bar' Markup('foo & bar') >>> Markup('Hello ') + '' Markup('Hello <foo>')","Passing an object to the constructor converts it to text and wraps it to mark it safe without escaping. To escape the text, use the :meth:`escape` class method instead.","Passing an object to the constructor converts it to text and wraps it to mark it safe without escaping. To escape the text, use the :meth:`escape` class method instead.",,">>> Markup('Hello, World!') Markup('Hello, World!') >>> Markup(42) Markup('42') >>> Markup.escape('Hello, World!') Markup('Hello <em>World</em>!') This implements the ``__html__()`` interface that some frameworks use. Passing an object that implements ``__html__()`` will wrap the output of that method, marking it safe. >>> class Foo: ... def __html__(self): ... return 'foo' ... >>> Markup(Foo()) Markup('foo') This is a subclass of the text type (``str`` in Python 3, ``unicode`` in Python 2). It has the same methods as that type, but all methods escape their arguments and return a ``Markup`` instance. >>> Markup('%s') % 'foo & bar' Markup('foo & bar') >>> Markup('Hello ') + '' Markup('Hello <foo>')",,,,,,,,,,,"This is a subclass of the text type (``str`` in Python 3, ``unicode`` in Python 2). It has the same methods as that type, but all methods escape their arguments and return a ``Markup`` instance.", 5,MemcachedBytecodeCache,"This class implements a bytecode cache that uses a memcache cache for storing the information. It does not enforce a specific memcache library (tummy's memcache or cmemcache) but will accept any class that provides the minimal interface required. Libraries compatible with this class: - `werkzeug `_.contrib.cache - `python-memcached `_ - `cmemcache `_ (Unfortunately the django cache interface is not compatible because it does not support storing binary data, only unicode. You can however pass the underlying cache client to the bytecode cache which is available as `django.core.cache.cache._client`.) The minimal interface for the client passed to the constructor is this: .. class:: MinimalClientInterface .. method:: set(key, value[, timeout]) Stores the bytecode in the cache. `value` is a string and `timeout` the timeout of the key. If timeout is not provided a default timeout or no timeout should be assumed, if it's provided it's an integer with the number of seconds the cache item should exist. .. method:: get(key) Returns the value for the cache key. If the item does not exist in the cache the return value must be `None`. The other arguments to the constructor are the prefix for all keys that is added before the actual cache key and the timeout for the bytecode in the cache system. We recommend a high (or no) timeout. This bytecode cache does not support clearing of used items in the cache. The clear method is a no-operation function. .. versionadded:: 2.7 Added support for ignoring memcache errors through the `ignore_memcache_errors` parameter.","This class implements a bytecode cache that uses a memcache cache for storing the information.","The minimal interface for the client passed to the constructor is this: .. class:: MinimalClientInterface .. method:: set(key, value[, timeout]) Stores the bytecode in the cache. `value` is a string and `timeout` the timeout of the key. If timeout is not provided a default timeout or no timeout should be assumed, if it's provided it's an integer with the number of seconds the cache item should exist. .. method:: get(key) Returns the value for the cache key. If the item does not exist in the cache the return value must be `None`. The other arguments to the constructor are the prefix for all keys that is added before the actual cache key and the timeout for the bytecode in the cache system. We recommend a high (or no) timeout.",,"It does not enforce a specific memcache library (tummy's memcache or cmemcache) but will accept any class that provides the minimal interface required.",".. versionadded:: 2.7 Added support for ignoring memcache errors through the `ignore_memcache_errors` parameter.",,,,,,"Libraries compatible with this class: - `werkzeug `_.contrib.cache - `python-memcached `_ - `cmemcache `_ (Unfortunately the django cache interface is not compatible because it does not support storing binary data, only unicode. You can however pass the underlying cache client to the bytecode cache which is available as `django.core.cache.cache._client`.)",,,,,,, 5,MemoizedZipManifests,Memoized zipfile manifests.,Memoized zipfile manifests.,,,,,,,,,,,,,,,,, 5,MetadataMissingError,A required metadata is missing,A required metadata is missing,,,,,,,,,,,,,,,,, 5,MethodDispatcher,"Dict with 2 special properties: On initiation, keys that are lists, sets or tuples are converted to multiple keys so accessing any one of the items in the original list-like object returns the matching value md = MethodDispatcher({(""foo"", ""bar""):""baz""}) md[""foo""] == ""baz"" A default value which can be set through the default attribute.",Dict with 2 special properties:,A default value which can be set through the default attribute.,,"On initiation, keys that are lists, sets or tuples are converted to multiple keys so accessing any one of the items in the original list-like object returns the matching value md = MethodDispatcher({(""foo"", ""bar""):""baz""}) md[""foo""] == ""baz"" A default value which can be set through the default attribute.",,,,,,,,,,,,,, 5,Module_six_moves_urllib_parse,Lazy loading of moved objects in six.moves.urllib_parse,Lazy loading of moved objects in six.moves.urllib_parse,,,,,,,,,,,,,,,,, 5,Module_six_moves_urllib_request,Lazy loading of moved objects in six.moves.urllib_request,Lazy loading of moved objects in six.moves.urllib_request,,,,,,,,,,,,,,,,, 5,Module_six_moves_urllib_response,Lazy loading of moved objects in six.moves.urllib_response,Lazy loading of moved objects in six.moves.urllib_response,,,,,,,,,,,,,,,,, 5,Mul,Multiplies the left with the right node.,Multiplies the left with the right node.,,,,,,,,,,,,,,,,, 5,NativeEnvironment,An environment that renders templates to native Python types.,An environment that renders templates to native Python types.,,,,,,,,,,,,,,,,, 5,Node,Represents an item in the tree,Represents an item in the tree,,,,,,,,,,,,,,,,, 5,NoMatch,A token that will never match.,A token that will never match.,,,,,,,,,,,,,,,,, 5,NotAny,"Lookahead to disallow matching with the given parse expression. ``NotAny`` does *not* advance the parsing position within the input string, it only verifies that the specified parse expression does *not* match at the current position. Also, ``NotAny`` does *not* skip over leading whitespace. ``NotAny`` always returns a null token list. May be constructed using the '~' operator. Example:: AND, OR, NOT = map(CaselessKeyword, ""AND OR NOT"".split()) # take care not to mistake keywords for identifiers ident = ~(AND | OR | NOT) + Word(alphas) boolean_term = Optional(NOT) + ident # very crude boolean expression - to support parenthesis groups and # operation hierarchy, use infixNotation boolean_expr = boolean_term + ZeroOrMore((AND | OR) + boolean_term) # integers that are followed by ""."" are actually floats integer = Word(nums) + ~Char(""."")",Lookahead to disallow matching with the given parse expression.,"`NotAny`` always returns a null token list. May be constructed using the '~' operator. Example:: AND, OR, NOT = map(CaselessKeyword, ""AND OR NOT"".split()) # take care not to mistake keywords for identifiers ident = ~(AND | OR | NOT) + Word(alphas) boolean_term = Optional(NOT) + ident # very crude boolean expression - to support parenthesis groups and # operation hierarchy, use infixNotation boolean_expr = boolean_term + ZeroOrMore((AND | OR) + boolean_term) # integers that are followed by ""."" are actually floats integer = Word(nums) + ~Char(""."")",,"``NotAny`` does *not* advance the parsing position within the input string, it only verifies that the specified parse expression does *not* match at the current position. Also, ``NotAny`` does *not* skip over leading whitespace.",,,,,,,"``NotAny`` does *not* advance the parsing position within the input string, it only verifies that the specified parse expression does *not* match at the current position. Also, ``NotAny`` does *not* skip over leading whitespace. ``NotAny`` always returns a null token list.",,,,,,, 5,NotMyLock,"Raised when an attempt is made to unlock a file someone else locked. >>> try: ... raise NotMyLock ... except UnlockError: ... Pass",Raised when an attempt is made to unlock a file someone else locked.,">>> try: ... raise NotMyLock ... except UnlockError: ... Pass",,,,,,,,,,,,,,,, 5,omdict,"Ordered Multivalue Dictionary. A multivalue dictionary is a dictionary that can store multiple values per key. An ordered multivalue dictionary is a multivalue dictionary that retains the order of insertions and deletions. Internally, items are stored in a doubly linked list, self._items. A dictionary, self._map, is also maintained and stores an ordered list of linked list node references, one for each value associated with that key. Standard dict methods interact with the first value associated with a given key. This means that omdict retains method parity with dict, and a dict object can be replaced with an omdict object and all interaction will behave identically. All dict methods that retain parity with omdict are: get(), setdefault(), pop(), popitem(), clear(), copy(), update(), fromkeys(), len() __getitem__(), __setitem__(), __delitem__(), __contains__(), items(), keys(), values(), iteritems(), iterkeys(), itervalues(), Optional parameters have been added to some dict methods, but because the added parameters are optional, existing use remains unaffected. An optional parameter has been added to these methods: items(), values(), iteritems(), itervalues() New methods have also been added to omdict. Methods with 'list' in their name interact with lists of values, and methods with 'all' in their name interact with all items in the dictionary, including multiple items with the same key. The new omdict methods are: load(), size(), reverse(), getlist(), add(), addlist(), set(), setlist(), setdefaultlist(), poplist(), popvalue(), popvalues(), popitem(), poplistitem(), allitems(), allkeys(), allvalues(), lists(), listitems(), iterallitems(), iterallkeys(), iterallvalues(), iterlists(), iterlistitems() Explanations and examples of the new methods above can be found in the function comments below and online at https://github.com/gruns/orderedmultidict Additional omdict information and documentation can also be found at the above url.","Ordered Multivalue Dictionary. A multivalue dictionary is a dictionary that can store multiple values per key. An ordered multivalue dictionary is a multivalue dictionary that retains the order of insertions and deletions.","get(), setdefault(), pop(), popitem(), clear(), copy(), update(), fromkeys(), len() __getitem__(), __setitem__(), __delitem__(), __contains__(), items(), keys(), values(), iteritems(), iterkeys(), itervalues(), Optional parameters have been added to some dict methods, but because the added parameters are optional, existing use remains unaffected. An optional parameter has been added to these methods: items(), values(), iteritems(), itervalues() New methods have also been added to omdict. Methods with 'list' in their name interact with lists of values, and methods with 'all' in their name interact with all items in the dictionary, including multiple items with the same key. The new omdict methods are: load(), size(), reverse(), getlist(), add(), addlist(), set(), setlist(), setdefaultlist(), poplist(), popvalue(), popvalues(), popitem(), poplistitem(), allitems(), allkeys(), allvalues(), lists(), listitems(), iterallitems(), iterallkeys(), iterallvalues(), iterlists(), iterlistitems()",,"Internally, items are stored in a doubly linked list, self._items. A dictionary, self._map, is also maintained and stores an ordered list of linked list node references, one for each value associated with that key. Standard dict methods interact with the first value associated with a given key. This means that omdict retains method parity with dict, and a dict object can be replaced with an omdict object and all interaction will behave identically. All dict methods that retain parity with omdict are:",,,,,"Explanations and examples of the new methods above can be found in the function comments below and online at https://github.com/gruns/orderedmultidict Additional omdict information and documentation can also be found at the above url.",,,,,,,,, 5,Option,"Options are usually optional values on the command line and have some extra features that arguments don't have. All other parameters are passed onwards to the parameter constructor. :param show_default: controls if the default value should be shown on the help page. Normally, defaults are not shown. If this value is a string, it shows the string instead of the value. This is particularly useful for dynamic options. :param show_envvar: controls if an environment variable should be shown on the help page. Normally, environment variables are not shown. :param prompt: if set to `True` or a non empty string then the user will be prompted for input. If set to `True` the prompt will be the option name capitalized. :param confirmation_prompt: if set then the value will need to be confirmed if it was prompted for. :param hide_input: if this is `True` then the input on the prompt will be hidden from the user. This is useful for password input. :param is_flag: forces this option to act as a flag. The default is auto detection. :param flag_value: which value should be used for this flag if it's enabled. This is set to a boolean automatically if the option string contains a slash to mark two options. :param multiple: if this is set to `True` then the argument is accepted multiple times and recorded. This is similar to ``nargs`` in how it works but supports arbitrary number of arguments. :param count: this flag makes an option increment an integer. :param allow_from_autoenv: if this is enabled then the value of this parameter will be pulled from an environment variable in case a prefix is defined on the context. :param help: the help string. :param hidden: hide this option from help outputs.","Options are usually optional values on the command line and have some extra features that arguments don't have.",All other parameters are passed onwards to the parameter constructor.,":param show_default: controls if the default value should be shown on the help page. Normally, defaults are not shown. If this value is a string, it shows the string instead of the value. This is particularly useful for dynamic options. :param show_envvar: controls if an environment variable should be shown on the help page. Normally, environment variables are not shown. :param prompt: if set to `True` or a non empty string then the user will be prompted for input. If set to `True` the prompt will be the option name capitalized. :param confirmation_prompt: if set then the value will need to be confirmed if it was prompted for. :param hide_input: if this is `True` then the input on the prompt will be hidden from the user. This is useful for password input. :param is_flag: forces this option to act as a flag. The default is auto detection. :param flag_value: which value should be used for this flag if it's enabled. This is set to a boolean automatically if the option string contains a slash to mark two options. :param multiple: if this is set to `True` then the argument is accepted multiple times and recorded. This is similar to ``nargs`` in how it works but supports arbitrary number of arguments. :param count: this flag makes an option increment an integer. :param allow_from_autoenv: if this is enabled then the value of this parameter will be pulled from an environment variable in case a prefix is defined on the context. :param help: the help string. :param hidden: hide this option from help outputs.",,,,,,,,,,,,,,, 5,PacifyFlushWrapper,"This wrapper is used to catch and suppress BrokenPipeErrors resulting from ``.flush()`` being called on broken pipe during the shutdown/final-GC of the Python interpreter. Notably ``.flush()`` is always called on ``sys.stdout`` and ``sys.stderr``. So as to have minimal impact on any other cleanup code, and the case where the underlying file is not a broken pipe, all calls and attributes are proxied.","This wrapper is used to catch and suppress BrokenPipeErrors resulting from ``.flush()`` being called on broken pipe during the shutdown/final-GC of the Python interpreter.",,,"This wrapper is used to catch and suppress BrokenPipeErrors resulting from ``.flush()`` being called on broken pipe during the shutdown/final-GC of the Python interpreter. Notably ``.flush()`` is always called on ``sys.stdout`` and ``sys.stderr``. So as to have minimal impact on any other cleanup code, and the case where the underlying file is not a broken pipe, all calls and attributes are proxied.",,,,,,,,,,,,,, 5,PackageIndex,Represents a Package Index and provides easier access to endpoints,Represents a Package Index and provides easier access to endpoints,,,,,,,,,,,,,,,,, 5,ParseError,"This error occurs when the parser encounters a syntax error in the TOML being parsed. The error references the line and location within the line where the error was encountered.","This error occurs when the parser encounters a syntax error in the TOML being parsed.",,,The error references the line and location within the line where the error was encountered.,,,,,,,,,,,,,, 5,ParseFatalException,"user-throwable exception thrown when inconsistent parse content is found; stops all parsing immediately",,,,,,"user-throwable exception thrown when inconsistent parse content is found; stops all parsing immediately",,"user-throwable exception thrown when inconsistent parse content is found; stops all parsing immediately",,,,,,"user-throwable exception thrown when inconsistent parse content is found;",,,, 5,ParserElement,Abstract base level parser element class.,Abstract base level parser element class.,,,,,,,,,,,,,,,,, 5,ParseResultBytes,Compatibility shim for the urlparse.ParseResultBytes object.,Compatibility shim for the urlparse.ParseResultBytes object.,,,,,,,,,,,,,,,,, 5,ParseResults,"Structured parse results, to provide multiple means of access to the parsed data: - as a list (``len(results)``) - by list index (``results[0], results[1]``, etc.) - by attribute (``results.`` - see :class:`ParserElement.setResultsName`) Example:: integer = Word(nums) date_str = (integer.setResultsName(""year"") + '/' + integer.setResultsName(""month"") + '/' + integer.setResultsName(""day"")) # equivalent form: # date_str = integer(""year"") + '/' + integer(""month"") + '/' + integer(""day"") # parseString returns a ParseResults object result = date_str.parseString(""1999/12/31"") def test(s, fn=repr): print(""%s -> %s"" % (s, fn(eval(s)))) test(""list(result)"") test(""result[0]"") test(""result['month']"") test(""result.day"") test(""'month' in result"") test(""'minutes' in result"") test(""result.dump()"", str) prints:: list(result) -> ['1999', '/', '12', '/', '31'] result[0] -> '1999' result['month'] -> '12' result.day -> '31' 'month' in result -> True 'minutes' in result -> False result.dump() -> ['1999', '/', '12', '/', '31'] - day: 31 - month: 12 - year: 1999","Structured parse results, to provide multiple means of access to the parsed data:","the parsed data: - as a list (``len(results)``) - by list index (``results[0], results[1]``, etc.) - by attribute (``results.`` - see :class:`ParserElement.setResultsName`) Example:: integer = Word(nums) date_str = (integer.setResultsName(""year"") + '/' + integer.setResultsName(""month"") + '/' + integer.setResultsName(""day"")) # equivalent form: # date_str = integer(""year"") + '/' + integer(""month"") + '/' + integer(""day"") # parseString returns a ParseResults object result = date_str.parseString(""1999/12/31"") def test(s, fn=repr): print(""%s -> %s"" % (s, fn(eval(s)))) test(""list(result)"") test(""result[0]"") test(""result['month']"") test(""result.day"") test(""'month' in result"") test(""'minutes' in result"") test(""result.dump()"", str) prints:: list(result) -> ['1999', '/', '12', '/', '31'] result[0] -> '1999' result['month'] -> '12' result.day -> '31' 'month' in result -> True 'minutes' in result -> False result.dump() -> ['1999', '/', '12', '/', '31'] - day: 31 - month: 12 - year: 1999",,,,"Example:: integer = Word(nums) date_str = (integer.setResultsName(""year"") + '/' + integer.setResultsName(""month"") + '/' + integer.setResultsName(""day"")) # equivalent form: # date_str = integer(""year"") + '/' + integer(""month"") + '/' + integer(""day"") # parseString returns a ParseResults object result = date_str.parseString(""1999/12/31"") def test(s, fn=repr): print(""%s -> %s"" % (s, fn(eval(s)))) test(""list(result)"") test(""result[0]"") test(""result['month']"") test(""result.day"") test(""'month' in result"") test(""'minutes' in result"") test(""result.dump()"", str)",,,,,,,,,,,, 5,ParseSyntaxException,"just like :class:`ParseFatalException`, but thrown internally when an :class:`ErrorStop` ('-' operator) indicates that parsing is to stop immediately because an unbacktrackable syntax error has been found.",,,,,,,,"just like :class:`ParseFatalException`, but thrown internally when an :class:`ErrorStop` ('-' operator) indicates that parsing is to stop immediately because an unbacktrackable syntax error has been found.",,,,,,,,,, 5,Path,"The path type is similar to the :class:`File` type but it performs different checks. First of all, instead of returning an open file handle it returns just the filename. Secondly, it can perform various basic checks about what the file or directory should be. .. versionchanged:: 6.0 `allow_dash` was added. :param exists: if set to true, the file or directory needs to exist for this value to be valid. If this is not required and a file does indeed not exist, then all further checks are silently skipped. :param file_okay: controls if a file is a possible value. :param dir_okay: controls if a directory is a possible value. :param writable: if true, a writable check is performed. :param readable: if true, a readable check is performed. :param resolve_path: if this is true, then the path is fully resolved before the value is passed onwards. This means that it's absolute and symlinks are resolved. It will not expand a tilde-prefix, as this is supposed to be done by the shell only. :param allow_dash: If this is set to `True`, a single dash to indicate standard streams is permitted. :param path_type: optionally a string type that should be used to represent the path. The default is `None` which means the return value will be either bytes or unicode depending on what makes most sense given the input data Click deals with.","The path type is similar to the :class:`File` type but it performs different checks.",,":param exists: if set to true, the file or directory needs to exist for this value to be valid. If this is not required and a file does indeed not exist, then all further checks are silently skipped. :param file_okay: controls if a file is a possible value. :param dir_okay: controls if a directory is a possible value. :param writable: if true, a writable check is performed. :param readable: if true, a readable check is performed. :param resolve_path: if this is true, then the path is fully resolved before the value is passed onwards. This means that it's absolute and symlinks are resolved. It will not expand a tilde-prefix, as this is supposed to be done by the shell only. :param allow_dash: If this is set to `True`, a single dash to indicate standard streams is permitted. :param path_type: optionally a string type that should be used to represent the path. The default is `None` which means the return value will be either bytes or unicode depending on what makes most sense given the input data Click deals with.",,".. versionchanged:: 6.0 `allow_dash` was added.","First of all, instead of returning an open file handle it returns just the filename. Secondly, it can perform various basic checks about what the file or directory should be.",,,,,,,,,,,, 5,PipError,Base pip exception,Base pip exception,,,,,,,,,,,,,,,,, 5,PoolManager,"Allows for arbitrary requests while transparently keeping track of necessary connection pools for you. :param num_pools: Number of connection pools to cache before discarding the least recently used pool. :param headers: Headers to include with all requests, unless other headers are given explicitly. :param \**connection_pool_kw: Additional parameters are used to create fresh :class:`urllib3.connectionpool.ConnectionPool` instances. Example:: >>> manager = PoolManager(num_pools=2) >>> r = manager.request('GET', 'http://google.com/') >>> r = manager.request('GET', 'http://google.com/mail') >>> r = manager.request('GET', 'http://yahoo.com/') >>> len(manager.pools)","Allows for arbitrary requests while transparently keeping track of necessary connection pools for you.","Example:: >>> manager = PoolManager(num_pools=2) >>> r = manager.request('GET', 'http://google.com/') >>> r = manager.request('GET', 'http://google.com/mail') >>> r = manager.request('GET', 'http://yahoo.com/') >>> len(manager.pools)",":param num_pools: Number of connection pools to cache before discarding the least recently used pool. :param headers: Headers to include with all requests, unless other headers are given explicitly. :param \**connection_pool_kw: Additional parameters are used to create fresh :class:`urllib3.connectionpool.ConnectionPool` instances.",,,,,,,,,,,,,,, 5,PrefixLoader,"A loader that is passed a dict of loaders where each loader is bound to a prefix. The prefix is delimited from the template by a slash per default, which can be changed by setting the `delimiter` argument to something else:: loader = PrefixLoader({ 'app1': PackageLoader('mypackage.app1'), 'app2': PackageLoader('mypackage.app2') }) By loading ``'app1/index.html'`` the file from the app1 package is loaded, by loading ``'app2/index.html'`` the file from the second.","A loader that is passed a dict of loaders where each loader is bound to a prefix.","The prefix is delimited from the template by a slash per default, which can be changed by setting the `delimiter` argument to something else:: loader = PrefixLoader({ 'app1': PackageLoader('mypackage.app1'), 'app2': PackageLoader('mypackage.app2') }) By loading ``'app1/index.html'`` the file from the app1 package is loaded, by loading ``'app2/index.html'`` the file from the second.",,"The prefix is delimited from the template by a slash per default, which can be changed by setting the `delimiter` argument to something else::",,,,,,,,,,,,,, 5,ProcessedTraceback,Holds a Jinja preprocessed traceback for printing or reraising.,Holds a Jinja preprocessed traceback for printing or reraising.,,,,,,,,,,,,,,,,, 5,PyPIJSONLocator,"This locator uses PyPI''s JSON interface. It''s very limited in functionality and probably not worth using.",,,,This locator uses PyPI''s JSON interface.,,,,,,,"It''s very limited in functionality and probably not worth using.",,,,,,, 5,ReadError,Raised when an archive cannot be read,Raised when an archive cannot be read,,,,,,,,,,,,,,,,, 5,Request,"A user-created :class:`Request ` object. Used to prepare a :class:`PreparedRequest `, which is sent to the server. :param method: HTTP method to use. :param url: URL to send. :param headers: dictionary of headers to send. :param files: dictionary of {filename: fileobject} files to multipart upload. :param data: the body to attach to the request. If a dictionary or list of tuples ``[(key, value)]`` is provided, form-encoding will take place. :param json: json for the body to attach to the request (if files or data is not specified). :param params: URL parameters to append to the URL. If a dictionary or list of tuples ``[(key, value)]`` is provided, form-encoding will take place. :param auth: Auth handler or (user, pass) tuple. :param cookies: dictionary or CookieJar of cookies to attach to this request. :param hooks: dictionary of callback hooks, for internal usage. Usage:: >>> import requests >>> req = requests.Request('GET', 'https://httpbin.org/get') >>> req.prepare() ",A user-created :class:`Request ` object.,"Usage:: >>> import requests >>> req = requests.Request('GET', 'https://httpbin.org/get') >>> req.prepare() ",":param method: HTTP method to use. :param url: URL to send. :param headers: dictionary of headers to send. :param files: dictionary of {filename: fileobject} files to multipart upload. :param data: the body to attach to the request. If a dictionary or list of tuples ``[(key, value)]`` is provided, form-encoding will take place. :param json: json for the body to attach to the request (if files or data is not specified). :param params: URL parameters to append to the URL. If a dictionary or list of tuples ``[(key, value)]`` is provided, form-encoding will take place. :param auth: Auth handler or (user, pass) tuple. :param cookies: dictionary or CookieJar of cookies to attach to this request. :param hooks: dictionary of callback hooks, for internal usage.",,,"Used to prepare a :class:`PreparedRequest `, which is sent to the server.",,,,,,,,,,,, 5,RequestException,"There was an ambiguous exception that occurred while handling your request.",,,,,,,,"There was an ambiguous exception that occurred while handling your request.",,,,,,,,,, 5,RequirementsFileParseError,Raised when a general error occurs parsing a requirements file line.,Raised when a general error occurs parsing a requirements file line.,,,,,,,,,,,,,,,,, 5,RequirementUninstaller,"A context manager to remove a package for the inner block. This uses `UninstallPathSet` to control the workflow. If the inner block exits correctly, the uninstallation is committed, otherwise rolled back.",A context manager to remove a package for the inner block.,,,"This uses `UninstallPathSet` to control the workflow. If the inner block exits correctly, the uninstallation is committed, otherwise rolled back.",,,,,,,,,,,,,, 5,RequiresPythonCache,Cache a candidate's Requires-Python information.,Cache a candidate's Requires-Python information.,,,,,,,Cache a candidate's Requires-Python information.,,,,,,,,,, 5,Resource,"A class representing an in-package resource, such as a data file. This is not normally instantiated by user code, but rather by a :class:`ResourceFinder` which manages the resource.","A class representing an in-package resource, such as a data file.",,,,,"This is not normally instantiated by user code, but rather by a :class:`ResourceFinder` which manages the resource.",,,,,,,,,,,, 5,ResourceFinder,Resource finder for file system resources.,Resource finder for file system resources.,,,,,,,,,,,,,,,,, 5,ResourceManager,Manage resource extraction and packages,Manage resource extraction and packages,,,,,,,,,,,,,,,,, 5,ResponseError,Used as a container for an error reason supplied in a MaxRetryError.,,Used as a container for an error reason supplied in a MaxRetryError.,,,,,,,,,,,,,,,, 5,ResponseNotChunked,Response needs to be chunked in order to read it as chunks.,,Response needs to be chunked in order to read it as chunks.,,,,,,,,,,,,,,,, 5,Retry,"Retry configuration. Each retry attempt will create a new Retry object with updated values, so they can be safely reused. Retries can be defined as a default for a pool:: retries = Retry(connect=5, read=2, redirect=5) http = PoolManager(retries=retries) response = http.request('GET', 'http://example.com/') Or per-request (which overrides the default for the pool):: response = http.request('GET', 'http://example.com/', retries=Retry(10)) Retries can be disabled by passing ``False``:: response = http.request('GET', 'http://example.com/', retries=False) Errors will be wrapped in :class:`~urllib3.exceptions.MaxRetryError` unless retries are disabled, in which case the causing exception will be raised. :param int total: Total number of retries to allow. Takes precedence over other counts. Set to ``None`` to remove this constraint and fall back on other counts. It's a good idea to set this to some sensibly-high value to account for unexpected edge cases and avoid infinite retry loops. Set to ``0`` to fail on the first retry. Set to ``False`` to disable and imply ``raise_on_redirect=False``. :param int connect: How many connection-related errors to retry on. These are errors raised before the request is sent to the remote server, which we assume has not triggered the server to process the request. Set to ``0`` to fail on the first retry of this type. :param int read: How many times to retry on read errors. These errors are raised after the request was sent to the server, so the request may have side-effects. Set to ``0`` to fail on the first retry of this type. :param int redirect: How many redirects to perform. Limit this to avoid infinite redirect loops. A redirect is a HTTP response with a status code 301, 302, 303, 307 or 308. Set to ``0`` to fail on the first retry of this type. Set to ``False`` to disable and imply ``raise_on_redirect=False``. :param int status: How many times to retry on bad status codes. These are retries made on responses, where status code matches ``status_forcelist``. Set to ``0`` to fail on the first retry of this type. :param iterable method_whitelist: Set of uppercased HTTP method verbs that we should retry on. By default, we only retry on methods which are considered to be idempotent (multiple requests with the same parameters end with the same state). See :attr:`Retry.DEFAULT_METHOD_WHITELIST`. Set to a ``False`` value to retry on any verb. :param iterable status_forcelist: A set of integer HTTP status codes that we should force a retry on. A retry is initiated if the request method is in ``method_whitelist`` and the response status code is in ``status_forcelist``. By default, this is disabled with ``None``. :param float backoff_factor: A backoff factor to apply between attempts after the second try (most errors are resolved immediately by a second try without a delay). urllib3 will sleep for:: {backoff factor} * (2 ** ({number of total retries} - 1)) seconds. If the backoff_factor is 0.1, then :func:`.sleep` will sleep for [0.0s, 0.2s, 0.4s, ...] between retries. It will never be longer than :attr:`Retry.BACKOFF_MAX`. By default, backoff is disabled (set to 0). :param bool raise_on_redirect: Whether, if the number of redirects is exhausted, to raise a MaxRetryError, or to return a response with a response code in the 3xx range. :param bool raise_on_status: Similar meaning to ``raise_on_redirect``: whether we should raise an exception, or return a response, if status falls in ``status_forcelist`` range and retries have been exhausted. :param tuple history: The history of the request encountered during each call to :meth:`~Retry.increment`. The list is in the order the requests occurred. Each list item is of class :class:`RequestHistory`. :param bool respect_retry_after_header: Whether to respect Retry-After header on status codes defined as :attr:`Retry.RETRY_AFTER_STATUS_CODES` or not. :param iterable remove_headers_on_redirect: Sequence of headers to remove from the request when a response indicating a redirect is returned before firing off the redirected request.",Retry configuration.,"Retries can be defined as a default for a pool:: retries = Retry(connect=5, read=2, redirect=5) http = PoolManager(retries=retries) response = http.request('GET', 'http://example.com/') Or per-request (which overrides the default for the pool):: response = http.request('GET', 'http://example.com/', retries=Retry(10)) Retries can be disabled by passing ``False``:: response = http.request('GET', 'http://example.com/', retries=False)",":param int total: Total number of retries to allow. Takes precedence over other counts. Set to ``None`` to remove this constraint and fall back on other counts. It's a good idea to set this to some sensibly-high value to account for unexpected edge cases and avoid infinite retry loops. Set to ``0`` to fail on the first retry. Set to ``False`` to disable and imply ``raise_on_redirect=False``. :param int connect: How many connection-related errors to retry on. These are errors raised before the request is sent to the remote server, which we assume has not triggered the server to process the request. Set to ``0`` to fail on the first retry of this type. :param int read: How many times to retry on read errors. These errors are raised after the request was sent to the server, so the request may have side-effects. Set to ``0`` to fail on the first retry of this type. :param int redirect: How many redirects to perform. Limit this to avoid infinite redirect loops. A redirect is a HTTP response with a status code 301, 302, 303, 307 or 308. Set to ``0`` to fail on the first retry of this type. Set to ``False`` to disable and imply ``raise_on_redirect=False``. :param int status: How many times to retry on bad status codes. These are retries made on responses, where status code matches ``status_forcelist``. Set to ``0`` to fail on the first retry of this type. :param iterable method_whitelist: Set of uppercased HTTP method verbs that we should retry on. By default, we only retry on methods which are considered to be idempotent (multiple requests with the same parameters end with the same state). See :attr:`Retry.DEFAULT_METHOD_WHITELIST`. Set to a ``False`` value to retry on any verb. :param iterable status_forcelist: A set of integer HTTP status codes that we should force a retry on. A retry is initiated if the request method is in ``method_whitelist`` and the response status code is in ``status_forcelist``. By default, this is disabled with ``None``. :param float backoff_factor: A backoff factor to apply between attempts after the second try (most errors are resolved immediately by a second try without a delay). urllib3 will sleep for:: {backoff factor} * (2 ** ({number of total retries} - 1)) seconds. If the backoff_factor is 0.1, then :func:`.sleep` will sleep for [0.0s, 0.2s, 0.4s, ...] between retries. It will never be longer than :attr:`Retry.BACKOFF_MAX`. By default, backoff is disabled (set to 0). :param bool raise_on_redirect: Whether, if the number of redirects is exhausted, to raise a MaxRetryError, or to return a response with a response code in the 3xx range. :param bool raise_on_status: Similar meaning to ``raise_on_redirect``: whether we should raise an exception, or return a response, if status falls in ``status_forcelist`` range and retries have been exhausted. :param tuple history: The history of the request encountered during each call to :meth:`~Retry.increment`. The list is in the order the requests occurred. Each list item is of class :class:`RequestHistory`. :param bool respect_retry_after_header: Whether to respect Retry-After header on status codes defined as :attr:`Retry.RETRY_AFTER_STATUS_CODES` or not. :param iterable remove_headers_on_redirect: Sequence of headers to remove from the request when a response indicating a redirect is returned before firing off the redirected request.","Each retry attempt will create a new Retry object with updated values, so they can be safely reused.",,,,"Errors will be wrapped in :class:`~urllib3.exceptions.MaxRetryError` unless retries are disabled, in which case the causing exception will be raised.",See :ref:`bytecode-cache` for more information.,,,,,,,,, 5,SafeFileCache,"A file based cache which is safe to use even when the target directory may not be accessible or writable.","A file based cache which is safe to use even when the target directory may not be accessible or writable.",,,,,,,,,,,,,,,,, 5,SchemaValidatorMixin,"This validator mixin provides mechanics to validate schemas passed to a Cerberus validator.","This validator mixin provides mechanics to validate schemas passed to a Cerberus validator.",,,,,,,,,,,,,,,,, 5,Session,"A Requests session. Provides cookie persistence, connection-pooling, and configuration. Basic Usage:: >>> import requests >>> s = requests.Session() >>> s.get('https://httpbin.org/get') Or as a context manager:: >>> with requests.Session() as s: >>> s.get('https://httpbin.org/get') ","A Requests session. Provides cookie persistence, connection-pooling, and configuration.","Basic Usage:: >>> import requests >>> s = requests.Session() >>> s.get('https://httpbin.org/get') Or as a context manager:: >>> with requests.Session() as s: >>> s.get('https://httpbin.org/get') ",,,,,,,,,,,,,,,, 5,SkipTo,"Token for skipping over all undefined text until the matched expression is found. Parameters: - expr - target expression marking the end of the data to be skipped - include - (default= ``False``) if True, the target expression is also parsed (the skipped text and target expression are returned as a 2-element list). - ignore - (default= ``None``) used to define grammars (typically quoted strings and comments) that might contain false matches to the target expression - failOn - (default= ``None``) define expressions that are not allowed to be included in the skipped test; if found before the target expression is found, the SkipTo is not a match Example:: report = ''' Outstanding Issues Report - 1 Jan 2000 # | Severity | Description | Days Open -----+----------+-------------------------------------------+----------- 101 | Critical | Intermittent system crash | 6 94 | Cosmetic | Spelling error on Login ('log|n') | 14 79 | Minor | System slow when running too many reports | 47 ''' integer = Word(nums) SEP = Suppress('|') # use SkipTo to simply match everything up until the next SEP # - ignore quoted strings, so that a '|' character inside a quoted string does not match # - parse action will call token.strip() for each matched token, i.e., the description body string_data = SkipTo(SEP, ignore=quotedString) string_data.setParseAction(tokenMap(str.strip)) ticket_expr = (integer(""issue_num"") + SEP + string_data(""sev"") + SEP + string_data(""desc"") + SEP + integer(""days_open"")) for tkt in ticket_expr.searchString(report): print tkt.dump() prints:: ['101', 'Critical', 'Intermittent system crash', '6'] - days_open: 6 - desc: Intermittent system crash - issue_num: 101 - sev: Critical ['94', 'Cosmetic', ""Spelling error on Login ('log|n')"", '14'] - days_open: 14 - desc: Spelling error on Login ('log|n') - issue_num: 94 - sev: Cosmetic ['79', 'Minor', 'System slow when running too many reports', '47'] - days_open: 47 - desc: System slow when running too many reports - issue_num: 79 - sev: Minor","Token for skipping over all undefined text until the matched expression is found.","Example:: report = ''' Outstanding Issues Report - 1 Jan 2000 # | Severity | Description | Days Open -----+----------+-------------------------------------------+----------- 101 | Critical | Intermittent system crash | 6 94 | Cosmetic | Spelling error on Login ('log|n') | 14 79 | Minor | System slow when running too many reports | 47 ''' integer = Word(nums) SEP = Suppress('|') # use SkipTo to simply match everything up until the next SEP # - ignore quoted strings, so that a '|' character inside a quoted string does not match # - parse action will call token.strip() for each matched token, i.e., the description body string_data = SkipTo(SEP, ignore=quotedString) string_data.setParseAction(tokenMap(str.strip)) ticket_expr = (integer(""issue_num"") + SEP + string_data(""sev"") + SEP + string_data(""desc"") + SEP + integer(""days_open"")) for tkt in ticket_expr.searchString(report): print tkt.dump() prints:: ['101', 'Critical', 'Intermittent system crash', '6'] - days_open: 6 - desc: Intermittent system crash - issue_num: 101 - sev: Critical ['94', 'Cosmetic', ""Spelling error on Login ('log|n')"", '14'] - days_open: 14 - desc: Spelling error on Login ('log|n') - issue_num: 94 - sev: Cosmetic ['79', 'Minor', 'System slow when running too many reports', '47'] - days_open: 47 - desc: System slow when running too many reports - issue_num: 79 - sev: Minor","Parameters: - expr - target expression marking the end of the data to be skipped - include - (default= ``False``) if True, the target expression is also parsed (the skipped text and target expression are returned as a 2-element list). - ignore - (default= ``None``) used to define grammars (typically quoted strings and comments) that might contain false matches to the target expression - failOn - (default= ``None``) define expressions that are not allowed to be included in the skipped test; if found before the target expression is found, the SkipTo is not a match",,,,,,,,,,,,,,, 5,TarFile,The TarFile Class provides an interface to tar archives.,The TarFile Class provides an interface to tar archives.,,,,,,,,,,,,,,,,, 5,Token,Token class.,Token class.,,,,,,,,,,,,,,,,, 5,TokenStreamIterator,"The iterator for tokenstreams. Iterate over the stream until the eof token is reached.",The iterator for tokenstreams.,,,"Iterate over the stream until the eof token is reached.",,,,,,,,,,,,,, 5,TreeBuilder,"Base treebuilder implementation * documentClass - the class to use for the bottommost node of a document * elementClass - the class to use for HTML Elements * commentClass - the class to use for comments * doctypeClass - the class to use for doctypes",Base treebuilder implementation,"* documentClass - the class to use for the bottommost node of a document * elementClass - the class to use for HTML Elements * commentClass - the class to use for comments * doctypeClass - the class to use for doctypes",,,,,,,,,,,,,,,, 5,Trie,Abstract base class for tries,Abstract base class for tries,,,,,,,,,,,,,,,,, 5,UndefinedEnvironmentName,"A name was attempted to be used that does not exist inside of the environment.","A name was attempted to be used that does not exist inside of the environment.",,,,,,,,,,,,,,,,, 5,UnlockError,"Base class for errors arising from attempts to release the lock. >>> try: ... raise UnlockError ... except Error: ... Pass",Base class for errors arising from attempts to release the lock.,">>> try: ... raise UnlockError ... except Error: ... Pass",,,,,,,,,,,,,,,, 5,ZipResourceFinder,Resource finder for resources in .zip files.,Resource finder for resources in .zip files.,,,,,,,,,,,,,,,,, 6,_OpNamespace,"An op namespace to dynamically bind Operators into Python. Say a user has created a custom Operator called ""my_namespace::my_op"". To call this op, the user will write torch.ops.my_namespace.my_op(...). At startup, this operation will not yet be bound into Python. Instead, the following sequence of magic tricks will occur: 1. `torch.ops.my_namespace` will invoke the `__getattr__` magic method on the `torch.ops` object, which will create a new `_OpNamespace` object called `my_namespace` and set it as an attribute on the `ops` object. 2. `torch.ops.my_namespace.my_op` will then invoke `__getattr__` on the `my_namespace` object, which will retrieve the operation via `torch.get_operation`, a function bound from C++, and then in a similar fashion bind this new object onto the `my_namespace` object. 3. `torch.ops.my_namespace.my_op(...)` then calls this new operation and subsequent accesses will incur no further lookup (the namespace and operation will already exist).",An op namespace to dynamically bind Operators into Python.,"Say a user has created a custom Operator called ""my_namespace::my_op"". To call this op, the user will write torch.ops.my_namespace.my_op(...). At startup, this operation will not yet be bound into Python. Instead, the following sequence of magic tricks will occur: 1. `torch.ops.my_namespace` will invoke the `__getattr__` magic method on the `torch.ops` object, which will create a new `_OpNamespace` object called `my_namespace` and set it as an attribute on the `ops` object. 2. `torch.ops.my_namespace.my_op` will then invoke `__getattr__` on the `my_namespace` object, which will retrieve the operation via `torch.get_operation`, a function bound from C++, and then in a similar fashion bind this new object onto the `my_namespace` object. 3. `torch.ops.my_namespace.my_op(...)` then calls this new operation and subsequent accesses will incur no further lookup (the namespace and operation will already exist).",,,,"Say a user has created a custom Operator called ""my_namespace::my_op"". To call this op, the user will write torch.ops.my_namespace.my_op(...). At startup, this operation will not yet be bound into Python. Instead, the following sequence of magic tricks will occur: 1. `torch.ops.my_namespace` will invoke the `__getattr__` magic method on the `torch.ops` object, which will create a new `_OpNamespace` object called `my_namespace` and set it as an attribute on the `ops` object. 2. `torch.ops.my_namespace.my_op` will then invoke `__getattr__` on the `my_namespace` object, which will retrieve the operation via `torch.get_operation`, a function bound from C++, and then in a similar fashion bind this new object onto the `my_namespace` object. 3. `torch.ops.my_namespace.my_op(...)` then calls this new operation and subsequent accesses will incur no further lookup (the namespace and operation will already exist).",,,,,,,,,,,, 6,Adadelta,"Implements Adadelta algorithm. It has been proposed in `ADADELTA: An Adaptive Learning Rate Method`__. Arguments: params (iterable): iterable of parameters to optimize or dicts defining parameter groups rho (float, optional): coefficient used for computing a running average of squared gradients (default: 0.9) eps (float, optional): term added to the denominator to improve numerical stability (default: 1e-6) lr (float, optional): coefficient that scale delta before it is applied to the parameters (default: 1.0) weight_decay (float, optional): weight decay (L2 penalty) (default: 0) __ https://arxiv.org/abs/1212.5701","Implements Adadelta algorithm. It has been proposed in `ADADELTA: An Adaptive Learning Rate Method`__",,"Arguments: params (iterable): iterable of parameters to optimize or dicts defining parameter groups rho (float, optional): coefficient used for computing a running average of squared gradients (default: 0.9) eps (float, optional): term added to the denominator to improve numerical stability (default: 1e-6) lr (float, optional): coefficient that scale delta before it is applied to the parameters (default: 1.0) weight_decay (float, optional): weight decay (L2 penalty) (default: 0)",,,,,,__ https://arxiv.org/abs/1212.5701,,,,,,,,, 6,Adam,"Implements Adam algorithm. It has been proposed in `Adam: A Method for Stochastic Optimization`_. Arguments: params (iterable): iterable of parameters to optimize or dicts defining parameter groups lr (float, optional): learning rate (default: 1e-3) betas (Tuple[float, float], optional): coefficients used for computing running averages of gradient and its square (default: (0.9, 0.999)) eps (float, optional): term added to the denominator to improve numerical stability (default: 1e-8) weight_decay (float, optional): weight decay (L2 penalty) (default: 0) amsgrad (boolean, optional): whether to use the AMSGrad variant of this algorithm from the paper `On the Convergence of Adam and Beyond`_ (default: False) .. _Adam\: A Method for Stochastic Optimization: https://arxiv.org/abs/1412.6980 .. _On the Convergence of Adam and Beyond: https://openreview.net/forum?id=ryQu7f-RZ",Implements Adam algorithm.,,"Arguments: params (iterable): iterable of parameters to optimize or dicts defining parameter groups lr (float, optional): learning rate (default: 1e-3) betas (Tuple[float, float], optional): coefficients used for computing running averages of gradient and its square (default: (0.9, 0.999)) eps (float, optional): term added to the denominator to improve numerical stability (default: 1e-8) weight_decay (float, optional): weight decay (L2 penalty) (default: 0) amsgrad (boolean, optional): whether to use the AMSGrad variant of this algorithm from the paper `On the Convergence of Adam and Beyond`_ (default: False)",,,"It has been proposed in `Adam: A Method for Stochastic Optimization`_. .. _Adam\: A Method for Stochastic Optimization: https://arxiv.org/abs/1412.6980 .. _On the Convergence of Adam and Beyond: https://openreview.net/forum?id=ryQu7f-RZ",,,".. _Adam\: A Method for Stochastic Optimization: https://arxiv.org/abs/1412.6980 .. _On the Convergence of Adam and Beyond: https://openreview.net/forum?id=ryQu7f-RZ","This standard encoder layer is based on the paper ""Attention Is All You Need"". Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000-6010. Users may modify or implement in a different way during application.",,,,,Implements Adam algorithm.,,, 6,Adamax,"Implements Adamax algorithm (a variant of Adam based on infinity norm). It has been proposed in `Adam: A Method for Stochastic Optimization`__. Arguments: params (iterable): iterable of parameters to optimize or dicts defining parameter groups lr (float, optional): learning rate (default: 2e-3) betas (Tuple[float, float], optional): coefficients used for computing running averages of gradient and its square eps (float, optional): term added to the denominator to improve numerical stability (default: 1e-8) weight_decay (float, optional): weight decay (L2 penalty) (default: 0) __ https://arxiv.org/abs/1412.6980","Implements Adamax algorithm (a variant of Adam based on infinity norm). It has been proposed in `Adam: A Method for Stochastic Optimization`__.",,"Arguments: params (iterable): iterable of parameters to optimize or dicts defining parameter groups lr (float, optional): learning rate (default: 2e-3) betas (Tuple[float, float], optional): coefficients used for computing running averages of gradient and its square eps (float, optional): term added to the denominator to improve numerical stability (default: 1e-8) weight_decay (float, optional): weight decay (L2 penalty) (default: 0)",,,,,,__ https://arxiv.org/abs/1412.6980,,,,,,,,, 6,AdaptiveMaxPool3d,"Applies a 3D adaptive max pooling over an input signal composed of several input planes. The output is of size D x H x W, for any input size. The number of output features is equal to the number of input planes. Args: output_size: the target output size of the image of the form D x H x W. Can be a tuple (D, H, W) or a single D for a cube D x D x D. D, H and W can be either a ``int``, or ``None`` which means the size will be the same as that of the input. return_indices: if ``True``, will return the indices along with the outputs. Useful to pass to nn.MaxUnpool3d. Default: ``False`` Examples: >>> # target output size of 5x7x9 >>> m = nn.AdaptiveMaxPool3d((5,7,9)) >>> input = torch.randn(1, 64, 8, 9, 10) >>> output = m(input) >>> # target output size of 7x7x7 (cube) >>> m = nn.AdaptiveMaxPool3d(7) >>> input = torch.randn(1, 64, 10, 9, 8) >>> output = m(input) >>> # target output size of 7x9x8 >>> m = nn.AdaptiveMaxPool3d((7, None, None)) >>> input = torch.randn(1, 64, 10, 9, 8) >>> output = m(input)",Applies a 3D adaptive max pooling over an input signal composed of several input planes.,"Examples: >>> # target output size of 5x7x9 >>> m = nn.AdaptiveMaxPool3d((5,7,9)) >>> input = torch.randn(1, 64, 8, 9, 10) >>> output = m(input) >>> # target output size of 7x7x7 (cube) >>> m = nn.AdaptiveMaxPool3d(7) >>> input = torch.randn(1, 64, 10, 9, 8) >>> output = m(input) >>> # target output size of 7x9x8 >>> m = nn.AdaptiveMaxPool3d((7, None, None)) >>> input = torch.randn(1, 64, 10, 9, 8) >>> output = m(input)","Args: output_size: the target output size of the image of the form D x H x W. Can be a tuple (D, H, W) or a single D for a cube D x D x D. D, H and W can be either a ``int``, or ``None`` which means the size will be the same as that of the input. return_indices: if ``True``, will return the indices along with the outputs. Useful to pass to nn.MaxUnpool3d. Default: ``False``","The output is of size D x H x W, for any input size. The number of output features is equal to the number of input planes.",,,,,,,,,,,,,, 6,BaseTestCase,Base class used for all TensorBoard tests,Base class used for all TensorBoard tests,,,,,,,,,,,,,,,,, 6,BatchNorm1d,"Applies Batch Normalization over a 2D or 3D input (a mini-batch of 1D inputs with optional additional channel dimension) as described in the paper `Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift`_ . .. math:: y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta The mean and standard-deviation are calculated per-dimension over the mini-batches and :math:`\gamma` and :math:`\beta` are learnable parameter vectors of size `C` (where `C` is the input size). By default, the elements of :math:`\gamma` are set to 1 and the elements of :math:`\beta` are set to 0. Also by default, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default :attr:`momentum` of 0.1. If :attr:`track_running_stats` is set to ``False``, this layer then does not keep running estimates, and batch statistics are instead used during evaluation time as well. .. note:: This :attr:`momentum` argument is different from one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is :math:`\hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t`, where :math:`\hat{x}` is the estimated statistic and :math:`x_t` is the new observed value. Because the Batch Normalization is done over the `C` dimension, computing statistics on `(N, L)` slices, it's common terminology to call this Temporal Batch Normalization. Args: num_features: :math:`C` from an expected input of size :math:`(N, C, L)` or :math:`L` from input of size :math:`(N, L)` eps: a value added to the denominator for numerical stability. Default: 1e-5 momentum: the value used for the running_mean and running_var computation. Can be set to ``None`` for cumulative moving average (i.e. simple average). Default: 0.1 affine: a boolean value that when set to ``True``, this module has learnable affine parameters. Default: ``True`` track_running_stats: a boolean value that when set to ``True``, this module tracks the running mean and variance, and when set to ``False``, this module does not track such statistics and always uses batch statistics in both training and eval modes. Default: ``True`` Shape: - Input: :math:`(N, C)` or :math:`(N, C, L)` - Output: :math:`(N, C)` or :math:`(N, C, L)` (same shape as input) Examples:: >>> # With Learnable Parameters >>> m = nn.BatchNorm1d(100) >>> # Without Learnable Parameters >>> m = nn.BatchNorm1d(100, affine=False) >>> input = torch.randn(20, 100) >>> output = m(input) .. _`Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift`: https://arxiv.org/abs/1502.03167","Applies Batch Normalization over a 2D or 3D input (a mini-batch of 1D inputs with optional additional channel dimension) as described in the paper `Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift`_ .","Examples:: >>> # With Learnable Parameters >>> m = nn.BatchNorm1d(100) >>> # Without Learnable Parameters >>> m = nn.BatchNorm1d(100, affine=False) >>> input = torch.randn(20, 100) >>> output = m(input)","Args: num_features: :math:`C` from an expected input of size :math:`(N, C, L)` or :math:`L` from input of size :math:`(N, L)` eps: a value added to the denominator for numerical stability. Default: 1e-5 momentum: the value used for the running_mean and running_var computation. Can be set to ``None`` for cumulative moving average (i.e. simple average). Default: 0.1 affine: a boolean value that when set to ``True``, this module has learnable affine parameters. Default: ``True`` track_running_stats: a boolean value that when set to ``True``, this module tracks the running mean and variance, and when set to ``False``, this module does not track such statistics and always uses batch statistics in both training and eval modes. Default: ``True``",".. math:: y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta The mean and standard-deviation are calculated per-dimension over the mini-batches and :math:`\gamma` and :math:`\beta` are learnable parameter vectors of size `C` (where `C` is the input size). By default, the elements of :math:`\gamma` are set to 1 and the elements of :math:`\beta` are set to 0. Also by default, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default :attr:`momentum` of 0.1. If :attr:`track_running_stats` is set to ``False``, this layer then does not keep running estimates, and batch statistics are instead used during evaluation time as well. Shape: - Input: :math:`(N, C)` or :math:`(N, C, L)` - Output: :math:`(N, C)` or :math:`(N, C, L)` (same shape as input)",,".. note:: This :attr:`momentum` argument is different from one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is :math:`\hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t`, where :math:`\hat{x}` is the estimated statistic and :math:`x_t` is the new observed value. Because the Batch Normalization is done over the `C` dimension, computing statistics on `(N, L)` slices, it's common terminology to call this Temporal Batch Normalization.",,,"Applies Batch Normalization over a 2D or 3D input (a mini-batch of 1D inputs with optional additional channel dimension) as described in the paper `Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift`_ . .. _`Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift`: https://arxiv.org/abs/1502.03167",,"If :attr:`track_running_stats` is set to ``False``, this layer then does not keep running estimates, and batch statistics are instead used during evaluation time as well. .. note:: This :attr:`momentum` argument is different from one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is :math:`\hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t`, where :math:`\hat{x}` is the estimated statistic and :math:`x_t` is the new observed value. Because the Batch Normalization is done over the `C` dimension, computing statistics on `(N, L)` slices, it's common terminology to call this Temporal Batch Normalization.",,,,".. _`Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift`: https://arxiv.org/abs/1502.03167",,, 6,BCEWithLogitsLoss,"This loss combines a `Sigmoid` layer and the `BCELoss` in one single class. This version is more numerically stable than using a plain `Sigmoid` followed by a `BCELoss` as, by combining the operations into one layer, we take advantage of the log-sum-exp trick for numerical stability. The unreduced (i.e. with :attr:`reduction` set to ``'none'``) loss can be described as: .. math:: \ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_n \left[ y_n \cdot \log \sigma(x_n) + (1 - y_n) \cdot \log (1 - \sigma(x_n)) \right], where :math:`N` is the batch size. If :attr:`reduction` is not ``'none'`` (default ``'mean'``), then .. math:: \ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases} This is used for measuring the error of a reconstruction in for example an auto-encoder. Note that the targets `t[i]` should be numbers between 0 and 1. It's possible to trade off recall and precision by adding weights to positive examples. In the case of multi-label classification the loss can be described as: .. math:: \ell_c(x, y) = L_c = \{l_{1,c},\dots,l_{N,c}\}^\top, \quad l_{n,c} = - w_{n,c} \left[ p_c y_{n,c} \cdot \log \sigma(x_{n,c}) + (1 - y_{n,c}) \cdot \log (1 - \sigma(x_{n,c})) \right], where :math:`c` is the class number (:math:`c > 1` for multi-label binary classification, :math:`c = 1` for single-label binary classification), :math:`n` is the number of the sample in the batch and :math:`p_c` is the weight of the positive answer for the class :math:`c`. :math:`p_c > 1` increases the recall, :math:`p_c < 1` increases the precision. For example, if a dataset contains 100 positive and 300 negative examples of a single class, then `pos_weight` for the class should be equal to :math:`\frac{300}{100}=3`. The loss would act as if the dataset contains :math:`3\times 100=300` positive examples. Examples:: >>> target = torch.ones([10, 64], dtype=torch.float32) # 64 classes, batch size = 10 >>> output = torch.full([10, 64], 0.999) # A prediction (logit) >>> pos_weight = torch.ones([64]) # All weights are equal to 1 >>> criterion = torch.nn.BCEWithLogitsLoss(pos_weight=pos_weight) >>> criterion(output, target) # -log(sigmoid(0.999)) tensor(0.3135) Args: weight (Tensor, optional): a manual rescaling weight given to the loss of each batch element. If given, has to be a Tensor of size `nbatch`. size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. Default: ``True`` reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` pos_weight (Tensor, optional): a weight of positive examples. Must be a vector with length equal to the number of classes. Shape: - Input: :math:`(N, *)` where :math:`*` means, any number of additional dimensions - Target: :math:`(N, *)`, same shape as the input - Output: scalar. If :attr:`reduction` is ``'none'``, then :math:`(N, *)`, same shape as input. Examples:: >>> loss = nn.BCEWithLogitsLoss() >>> input = torch.randn(3, requires_grad=True) >>> target = torch.empty(3).random_(2) >>> output = loss(input, target) >>> output.backward()","This loss combines a `Sigmoid` layer and the `BCELoss` in one single class. This version is more numerically stable than using a plain `Sigmoid` followed by a `BCELoss` as, by combining the operations into one layer, we take advantage of the log-sum-exp trick for numerical stability.","Examples:: >>> target = torch.ones([10, 64], dtype=torch.float32) # 64 classes, batch size = 10 >>> output = torch.full([10, 64], 0.999) # A prediction (logit) >>> pos_weight = torch.ones([64]) # All weights are equal to 1 >>> criterion = torch.nn.BCEWithLogitsLoss(pos_weight=pos_weight) >>> criterion(output, target) # -log(sigmoid(0.999)) tensor(0.3135) Examples:: >>> loss = nn.BCEWithLogitsLoss() >>> input = torch.randn(3, requires_grad=True) >>> target = torch.empty(3).random_(2) >>> output = loss(input, target) >>> output.backward()","Args: weight (Tensor, optional): a manual rescaling weight given to the loss of each batch element. If given, has to be a Tensor of size `nbatch`. size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. Default: ``True`` reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` pos_weight (Tensor, optional): a weight of positive examples. Must be a vector with length equal to the number of classes. Shape: - Input: :math:`(N, *)` where :math:`*` means, any number of additional dimensions - Target: :math:`(N, *)`, same shape as the input - Output: scalar. If :attr:`reduction` is ``'none'``, then :math:`(N, *)`, same shape as input.","The unreduced (i.e. with :attr:`reduction` set to ``'none'``) loss can be described as: .. math:: \ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_n \left[ y_n \cdot \log \sigma(x_n) + (1 - y_n) \cdot \log (1 - \sigma(x_n)) \right], where :math:`N` is the batch size. If :attr:`reduction` is not ``'none'`` (default ``'mean'``), then .. math:: \ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases} This is used for measuring the error of a reconstruction in for example an auto-encoder. Note that the targets `t[i]` should be numbers between 0 and 1. It's possible to trade off recall and precision by adding weights to positive examples. In the case of multi-label classification the loss can be described as: .. math:: \ell_c(x, y) = L_c = \{l_{1,c},\dots,l_{N,c}\}^\top, \quad l_{n,c} = - w_{n,c} \left[ p_c y_{n,c} \cdot \log \sigma(x_{n,c}) + (1 - y_{n,c}) \cdot \log (1 - \sigma(x_{n,c})) \right], where :math:`c` is the class number (:math:`c > 1` for multi-label binary classification, :math:`c = 1` for single-label binary classification), :math:`n` is the number of the sample in the batch and :math:`p_c` is the weight of the positive answer for the class :math:`c`. :math:`p_c > 1` increases the recall, :math:`p_c < 1` increases the precision.",,"Note that the targets `t[i]` should be numbers between 0 and 1.",,,,,"Note that the targets `t[i]` should be numbers between 0 and 1.",,,,,,,It's possible to trade off recall and precision by adding weights to positive examples. 6,BoundedGradientProjection,"Wright, S., & Nocedal, J. (1999). Numerical optimization. Springer Science, 35(67-68), 7. Chapter 16",,,,,,,,,"Wright, S., & Nocedal, J. (1999). Numerical optimization. Springer Science, 35(67-68), 7. Chapter 16",,,,,,,,, 6,BuildType,"Checks build type. The build type will be given in :attr:`cmake_build_type_env`. If :attr:`cmake_build_type_env` is ``None``, then the build type will be inferred from ``CMakeCache.txt``. If ``CMakeCache.txt`` does not exist, os.environ['CMAKE_BUILD_TYPE'] will be used. Arguments: cmake_build_type_env (str): The value of os.environ['CMAKE_BUILD_TYPE']. If None, the actual build type will be inferred.",Checks build type.,"The build type will be given in :attr:`cmake_build_type_env`. If :attr:`cmake_build_type_env` is ``None``, then the build type will be inferred from ``CMakeCache.txt``. If ``CMakeCache.txt`` does not exist, os.environ['CMAKE_BUILD_TYPE'] will be used.","Arguments: cmake_build_type_env (str): The value of os.environ['CMAKE_BUILD_TYPE']. If None, the actual build type will be inferred.",,,,,,,,,,,,,,, 6,Caffe2OperatorTestCase,"This class includes all the information needed to benchmark an operator. op_bench: it's a user-defined class (child of Caffe2BenchmarkBase) which includes input and operator, .etc test_config: a namedtuple includes test_name, input_shape, tag, run_backward. When run_backward is false, the run_forward method will be executed, otherwise run_backward method will be executed.",This class includes all the information needed to benchmark an operator.,,,"op_bench: it's a user-defined class (child of Caffe2BenchmarkBase) which includes input and operator, .etc test_config: a namedtuple includes test_name, input_shape, tag, run_backward. When run_backward is false, the run_forward method will be executed, otherwise run_backward method will be executed.",,,,,,,,,,,,,, 6,ConstantPad3d,"Pads the input tensor boundaries with a constant value. For `N`-dimensional padding, use :func:`torch.nn.functional.pad()`. Args: padding (int, tuple): the size of the padding. If is `int`, uses the same padding in all boundaries. If a 6-`tuple`, uses (:math:`\text{padding\_left}`, :math:`\text{padding\_right}`, :math:`\text{padding\_top}`, :math:`\text{padding\_bottom}`, :math:`\text{padding\_front}`, :math:`\text{padding\_back}`) Shape: - Input: :math:`(N, C, D_{in}, H_{in}, W_{in})` - Output: :math:`(N, C, D_{out}, H_{out}, W_{out})` where :math:`D_{out} = D_{in} + \text{padding\_front} + \text{padding\_back}` :math:`H_{out} = H_{in} + \text{padding\_top} + \text{padding\_bottom}` :math:`W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}` Examples:: >>> m = nn.ConstantPad3d(3, 3.5) >>> input = torch.randn(16, 3, 10, 20, 30) >>> output = m(input) >>> # using different paddings for different sides >>> m = nn.ConstantPad3d((3, 3, 6, 6, 0, 1), 3.5) >>> output = m(input)",Pads the input tensor boundaries with a constant value.,"For `N`-dimensional padding, use :func:`torch.nn.functional.pad()`. Examples:: >>> m = nn.ConstantPad3d(3, 3.5) >>> input = torch.randn(16, 3, 10, 20, 30) >>> output = m(input) >>> # using different paddings for different sides >>> m = nn.ConstantPad3d((3, 3, 6, 6, 0, 1), 3.5) >>> output = m(input)","Args: padding (int, tuple): the size of the padding. If is `int`, uses the same padding in all boundaries. If a 6-`tuple`, uses (:math:`\text{padding\_left}`, :math:`\text{padding\_right}`, :math:`\text{padding\_top}`, :math:`\text{padding\_bottom}`, :math:`\text{padding\_front}`, :math:`\text{padding\_back}`)",,,,,,,,,"For `N`-dimensional padding, use :func:`torch.nn.functional.pad()`.",,,,,, 6,Conv3d,"Applies a 3D convolution over an input signal composed of several input planes. In the simplest case, the output value of the layer with input size :math:`(N, C_{in}, D, H, W)` and output :math:`(N, C_{out}, D_{out}, H_{out}, W_{out})` can be precisely described as: .. math:: out(N_i, C_{out_j}) = bias(C_{out_j}) + \sum_{k = 0}^{C_{in} - 1} weight(C_{out_j}, k) \star input(N_i, k) where :math:`\star` is the valid 3D `cross-correlation`_ operator * :attr:`stride` controls the stride for the cross-correlation. * :attr:`padding` controls the amount of implicit zero-paddings on both sides for :attr:`padding` number of points for each dimension. * :attr:`dilation` controls the spacing between the kernel points; also known as the trous algorithm. It is harder to describe, but this `link`_ has a nice visualization of what :attr:`dilation` does. * :attr:`groups` controls the connections between inputs and outputs. :attr:`in_channels` and :attr:`out_channels` must both be divisible by :attr:`groups`. For example, * At groups=1, all inputs are convolved to all outputs. * At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated. * At groups= :attr:`in_channels`, each input channel is convolved with its own set of filters, of size :math:`\left\lfloor\frac{out\_channels}{in\_channels}\right\rfloor`. The parameters :attr:`kernel_size`, :attr:`stride`, :attr:`padding`, :attr:`dilation` can either be: - a single ``int`` -- in which case the same value is used for the depth, height and width dimension - a ``tuple`` of three ints -- in which case, the first `int` is used for the depth dimension, the second `int` for the height dimension and the third `int` for the width dimension .. note:: Depending of the size of your kernel, several (of the last) columns of the input might be lost, because it is a valid `cross-correlation`_, and not a full `cross-correlation`_. It is up to the user to add proper padding. .. note:: When `groups == in_channels` and `out_channels == K * in_channels`, where `K` is a positive integer, this operation is also termed in literature as depthwise convolution. In other words, for an input of size :math:`(N, C_{in}, D_{in}, H_{in}, W_{in})`, a depthwise convolution with a depthwise multiplier `K`, can be constructed by arguments :math:`(in\_channels=C_{in}, out\_channels=C_{in} \times K, ..., groups=C_{in})`. .. include:: cudnn_deterministic.rst Args: in_channels (int): Number of channels in the input image out_channels (int): Number of channels produced by the convolution kernel_size (int or tuple): Size of the convolving kernel stride (int or tuple, optional): Stride of the convolution. Default: 1 padding (int or tuple, optional): Zero-padding added to all three sides of the input. Default: 0 padding_mode (string, optional). Accepted values `zeros` and `circular` Default: `zeros` dilation (int or tuple, optional): Spacing between kernel elements. Default: 1 groups (int, optional): Number of blocked connections from input channels to output channels. Default: 1 bias (bool, optional): If ``True``, adds a learnable bias to the output. Default: ``True`` Shape: - Input: :math:`(N, C_{in}, D_{in}, H_{in}, W_{in})` - Output: :math:`(N, C_{out}, D_{out}, H_{out}, W_{out})` where .. math:: D_{out} = \left\lfloor\frac{D_{in} + 2 \times \text{padding}[0] - \text{dilation}[0] \times (\text{kernel\_size}[0] - 1) - 1}{\text{stride}[0]} + 1\right\rfloor .. math:: H_{out} = \left\lfloor\frac{H_{in} + 2 \times \text{padding}[1] - \text{dilation}[1] \times (\text{kernel\_size}[1] - 1) - 1}{\text{stride}[1]} + 1\right\rfloor .. math:: W_{out} = \left\lfloor\frac{W_{in} + 2 \times \text{padding}[2] - \text{dilation}[2] \times (\text{kernel\_size}[2] - 1) - 1}{\text{stride}[2]} + 1\right\rfloor Attributes: weight (Tensor): the learnable weights of the module of shape :math:`(\text{out\_channels}, \frac{\text{in\_channels}}{\text{groups}},` :math:`\text{kernel\_size[0]}, \text{kernel\_size[1]}, \text{kernel\_size[2]})`. The values of these weights are sampled from :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})` where :math:`k = \frac{1}{C_\text{in} * \prod_{i=0}^{2}\text{kernel\_size}[i]}` bias (Tensor): the learnable bias of the module of shape (out_channels). If :attr:`bias` is ``True``, then the values of these weights are sampled from :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})` where :math:`k = \frac{1}{C_\text{in} * \prod_{i=0}^{2}\text{kernel\_size}[i]}` Examples:: >>> # With square kernels and equal stride >>> m = nn.Conv3d(16, 33, 3, stride=2) >>> # non-square kernels and unequal stride and with padding >>> m = nn.Conv3d(16, 33, (3, 5, 2), stride=(2, 1, 1), padding=(4, 2, 0)) >>> input = torch.randn(20, 16, 10, 50, 100) >>> output = m(input) .. _cross-correlation: https://en.wikipedia.org/wiki/Cross-correlation .. _link: https://github.com/vdumoulin/conv_arithmetic/blob/master/README.md","Applies a 3D convolution over an input signal composed of several input planes. * At groups=1, all inputs are convolved to all outputs. * At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated. * At groups= :attr:`in_channels`, each input channel is convolved with its own set of filters, of size :math:`\left\lfloor\frac{out\_channels}{in\_channels}\right\rfloor`. The parameters :attr:`kernel_size`, :attr:`stride`, :attr:`padding`, :attr:`dilation` can either be: - a single ``int`` -- in which case the same value is used for the depth, height and width dimension - a ``tuple`` of three ints -- in which case, the first `int` is used for the depth dimension, the second `int` for the height dimension and the third `int` for the width dimension","Examples:: >>> # With square kernels and equal stride >>> m = nn.Conv3d(16, 33, 3, stride=2) >>> # non-square kernels and unequal stride and with padding >>> m = nn.Conv3d(16, 33, (3, 5, 2), stride=(2, 1, 1), padding=(4, 2, 0)) >>> input = torch.randn(20, 16, 10, 50, 100) >>> output = m(input)","The parameters :attr:`kernel_size`, :attr:`stride`, :attr:`padding`, :attr:`dilation` can either be: - a single ``int`` -- in which case the same value is used for the depth, height and width dimension - a ``tuple`` of three ints -- in which case, the first `int` is used for the depth dimension, the second `int` for the height dimension and the third `int` for the width dimension * :attr:`stride` controls the stride for the cross-correlation. * :attr:`padding` controls the amount of implicit zero-paddings on both sides for :attr:`padding` number of points for each dimension. * :attr:`dilation` controls the spacing between the kernel points; also known as the � trous algorithm. It is harder to describe, but this `link`_ has a nice visualization of what :attr:`dilation` does. * :attr:`groups` controls the connections between inputs and outputs. :attr:`in_channels` and :attr:`out_channels` must both be divisible by :attr:`groups`. Args: in_channels (int): Number of channels in the input image out_channels (int): Number of channels produced by the convolution kernel_size (int or tuple): Size of the convolving kernel stride (int or tuple, optional): Stride of the convolution. Default: 1 padding (int or tuple, optional): Zero-padding added to all three sides of the input. Default: 0 padding_mode (string, optional). Accepted values `zeros` and `circular` Default: `zeros` dilation (int or tuple, optional): Spacing between kernel elements. Default: 1 groups (int, optional): Number of blocked connections from input channels to output channels. Default: 1 bias (bool, optional): If ``True``, adds a learnable bias to the output. Default: ``True``","n the simplest case, the output value of the layer with input size :math:`(N, C_{in}, D, H, W)` and output :math:`(N, C_{out}, D_{out}, H_{out}, W_{out})` can be precisely described as: .. math:: out(N_i, C_{out_j}) = bias(C_{out_j}) + \sum_{k = 0}^{C_{in} - 1} weight(C_{out_j}, k) \star input(N_i, k) where :math:`\star` is the valid 3D `cross-correlation`_ operator Attributes: weight (Tensor): the learnable weights of the module of shape :math:`(\text{out\_channels}, \frac{\text{in\_channels}}{\text{groups}},` :math:`\text{kernel\_size[0]}, \text{kernel\_size[1]}, \text{kernel\_size[2]})`. The values of these weights are sampled from :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})` where :math:`k = \frac{1}{C_\text{in} * \prod_{i=0}^{2}\text{kernel\_size}[i]}` bias (Tensor): the learnable bias of the module of shape (out_channels). If :attr:`bias` is ``True``, then the values of these weights are sampled from :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})` where :math:`k = \frac{1}{C_\text{in} * \prod_{i=0}^{2}\text{kernel\_size}[i]}`",,,,,".. _link: https://github.com/vdumoulin/conv_arithmetic/blob/master/README.md .. _cross-correlation: https://en.wikipedia.org/wiki/Cross-correlation",,,,,,,,, 6,ConvReLU3d,"A ConvReLU3d module is a fused module of Conv3d and ReLU We adopt the same interface as :class:`torch.nn.quantized.Conv3d`. .. note:: Attributes: Same as torch.nn.quantized.Conv3d",A ConvReLU3d module is a fused module of Conv3d and ReLU,,,"We adopt the same interface as :class:`torch.nn.quantized.Conv3d`. Attributes: Same as torch.nn.quantized.Conv3d",,".. note:: Attributes: Same as torch.nn.quantized.Conv3d",,,,,".. note:: Attributes: Same as torch.nn.quantized.Conv3d",,,,,,, 6,cuFFTPlanCacheAttrContextProp,"Like regular ContextProp, but uses the `.device_index` attribute from the calling object as the first argument to the getter and setter.","Like regular ContextProp, but uses the `.device_index` attribute from the calling object as the first argument to the getter and setter.",,"Like regular ContextProp, but uses the `.device_index` attribute from the calling object as the first argument to the getter and setter.",,,,,,,,,,,,,,, 6,CyclicLR,"Sets the learning rate of each parameter group according to cyclical learning rate policy (CLR). The policy cycles the learning rate between two boundaries with a constant frequency, as detailed in the paper `Cyclical Learning Rates for Training Neural Networks`_. The distance between the two boundaries can be scaled on a per-iteration or per-cycle basis. Cyclical learning rate policy changes the learning rate after every batch. `step` should be called after a batch has been used for training. This class has three built-in policies, as put forth in the paper: * ""triangular"": A basic triangular cycle without amplitude scaling. * ""triangular2"": A basic triangular cycle that scales initial amplitude by half each cycle. * ""exp_range"": A cycle that scales initial amplitude by :math:`\text{gamma}^{\text{cycle iterations}}` at each cycle iteration. This implementation was adapted from the github repo: `bckenstler/CLR`_ Args: optimizer (Optimizer): Wrapped optimizer. base_lr (float or list): Initial learning rate which is the lower boundary in the cycle for each parameter group. max_lr (float or list): Upper learning rate boundaries in the cycle for each parameter group. Functionally, it defines the cycle amplitude (max_lr - base_lr). The lr at any cycle is the sum of base_lr and some scaling of the amplitude; therefore max_lr may not actually be reached depending on scaling function. step_size_up (int): Number of training iterations in the increasing half of a cycle. Default: 2000 step_size_down (int): Number of training iterations in the decreasing half of a cycle. If step_size_down is None, it is set to step_size_up. Default: None mode (str): One of {triangular, triangular2, exp_range}. Values correspond to policies detailed above. If scale_fn is not None, this argument is ignored. Default: 'triangular' gamma (float): Constant in 'exp_range' scaling function: gamma**(cycle iterations) Default: 1.0 scale_fn (function): Custom scaling policy defined by a single argument lambda function, where 0 <= scale_fn(x) <= 1 for all x >= 0. If specified, then 'mode' is ignored. Default: None scale_mode (str): {'cycle', 'iterations'}. Defines whether scale_fn is evaluated on cycle number or cycle iterations (training iterations since start of cycle). Default: 'cycle' cycle_momentum (bool): If ``True``, momentum is cycled inversely to learning rate between 'base_momentum' and 'max_momentum'. Default: True base_momentum (float or list): Lower momentum boundaries in the cycle for each parameter group. Note that momentum is cycled inversely to learning rate; at the peak of a cycle, momentum is 'base_momentum' and learning rate is 'max_lr'. Default: 0.8 max_momentum (float or list): Upper momentum boundaries in the cycle for each parameter group. Functionally, it defines the cycle amplitude (max_momentum - base_momentum). The momentum at any cycle is the difference of max_momentum and some scaling of the amplitude; therefore base_momentum may not actually be reached depending on scaling function. Note that momentum is cycled inversely to learning rate; at the start of a cycle, momentum is 'max_momentum' and learning rate is 'base_lr' Default: 0.9 last_epoch (int): The index of the last batch. This parameter is used when resuming a training job. Since `step()` should be invoked after each batch instead of after each epoch, this number represents the total number of *batches* computed, not the total number of epochs computed. When last_epoch=-1, the schedule is started from the beginning. Default: -1 Example: >>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9) >>> scheduler = torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr=0.01, max_lr=0.1) >>> data_loader = torch.utils.data.DataLoader(...) >>> for epoch in range(10): >>> for batch in data_loader: >>> train_batch(...) >>> scheduler.step() .. _Cyclical Learning Rates for Training Neural Networks: https://arxiv.org/abs/1506.01186 .. _bckenstler/CLR: https://github.com/bckenstler/CLR","Sets the learning rate of each parameter group according to cyclical learning rate policy (CLR). The policy cycles the learning rate between two boundaries with a constant frequency, as detailed in the paper `Cyclical Learning Rates for Training Neural Networks`_. The distance between the two boundaries can be scaled on a per-iteration or per-cycle basis.","In the simplest case, the output value of the layer with input size :math:`(N, C_{in}, D, H, W)` and output :math:`(N, C_{out}, D_{out}, H_{out}, W_{out})` can be precisely described as: .. math:: out(N_i, C_{out_j}) = bias(C_{out_j}) + \sum_{k = 0}^{C_{in} - 1} weight(C_{out_j}, k) \star input(N_i, k) where :math:`\star` is the valid 3D `cross-correlation`_ operator Example: >>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9) >>> scheduler = torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr=0.01, max_lr=0.1) >>> data_loader = torch.utils.data.DataLoader(...) >>> for epoch in range(10): >>> for batch in data_loader: >>> train_batch(...) >>> scheduler.step()","* ""triangular"": A basic triangular cycle without amplitude scaling. * ""triangular2"": A basic triangular cycle that scales initial amplitude by half each cycle. * ""exp_range"": A cycle that scales initial amplitude by :math:`\text{gamma}^{\text{cycle iterations}}` at each cycle iteration.","Cyclical learning rate policy changes the learning rate after every batch. `step` should be called after a batch has been used for training.This class has three built-in policies, as put forth in the paper: * ""triangular"": A basic triangular cycle without amplitude scaling. * ""triangular2"": A basic triangular cycle that scales initial amplitude by half each cycle. * ""exp_range"": A cycle that scales initial amplitude by :math:`\text{gamma}^{\text{cycle iterations}}` at each cycle iteration.",,,,,"This implementation was adapted from the github repo: `bckenstler/CLR`_ .. _Cyclical Learning Rates for Training Neural Networks: https://arxiv.org/abs/1506.01186 .. _bckenstler/CLR: https://github.com/bckenstler/CLR",,,"Cyclical learning rate policy changes the learning rate after every batch. `step` should be called after a batch has been used for training.",,,,,, 6,DeQuantStub,"Dequantize stub module, before calibration, this is same as identity, this will be swapped as `nnq.DeQuantize` in `convert`.","Dequantize stub module, before calibration, this is same as identity, this will be swapped as `nnq.DeQuantize` in `convert`.",,,"this is same as identity, this will be swapped as `nnq.DeQuantize` in `convert`.",,,,,,,,,,,,,, 6,DiagonalTensor,"A class with __torch_function__ and a specific diagonal representation This class has limited utility and is mostly useful for verifying that the dispatch mechanism works as expected. It is based on the `DiagonalArray example`_ in the NumPy documentation. Note that this class does *not* inherit from ``torch.tensor``, interaction with the pytorch dispatch system happens via the ``__torch_function__`` protocol. ``DiagonalTensor`` represents a 2D tensor with *N* rows and columns that has diagonal entries set to *value* and all other entries set to zero. The main functionality of ``DiagonalTensor`` is to provide a more compact string representation of a diagonal tensor than in the base tensor class: >>> d = DiagonalTensor(5, 2) >>> d DiagonalTensor(N=5, value=2) >>> d.tensor() tensor([[2., 0., 0., 0., 0.], [0., 2., 0., 0., 0.], [0., 0., 2., 0., 0.], [0., 0., 0., 2., 0.], [0., 0., 0., 0., 2.]]) Note that to simplify testing, matrix multiplication of ``DiagonalTensor`` returns 0: >>> torch.mm(d, d) 0 .. _DiagonalArray example: https://numpy.org/devdocs/user/basics.dispatch.html",A class with __torch_function__ and a specific diagonal representation,"This class has limited utility and is mostly useful for verifying that the dispatch mechanism works as expected. It is based on the `DiagonalArray example`_ in the NumPy documentation.",,"``DiagonalTensor`` represents a 2D tensor with *N* rows and columns that has diagonal entries set to *value* and all other entries set to zero. The main functionality of ``DiagonalTensor`` is to provide a more compact string representation of a diagonal tensor than in the base tensor class:",,,,,"It is based on the `DiagonalArray example`_ in the NumPy documentation. .. _DiagonalArray example: https://numpy.org/devdocs/user/basics.dispatch.html",,"Note that this class does *not* inherit from ``torch.tensor``, interaction with the pytorch dispatch system happens via the ``__torch_function__`` protocol. Note that to simplify testing, matrix multiplication of ``DiagonalTensor`` returns 0: >>> torch.mm(d, d) 0",,,,,,, 6,EmbeddingBag,"Computes sums or means of 'bags' of embeddings, without instantiating the intermediate embeddings. For bags of constant length and no :attr:`per_sample_weights`, this class * with ``mode=""sum""`` is equivalent to :class:`~torch.nn.Embedding` followed by ``torch.sum(dim=0)``, * with ``mode=""mean""`` is equivalent to :class:`~torch.nn.Embedding` followed by ``torch.mean(dim=0)``, * with ``mode=""max""`` is equivalent to :class:`~torch.nn.Embedding` followed by ``torch.max(dim=0)``. However, :class:`~torch.nn.EmbeddingBag` is much more time and memory efficient than using a chain of these operations. EmbeddingBag also supports per-sample weights as an argument to the forward pass. This scales the output of the Embedding before performing a weighted reduction as specified by ``mode``. If :attr:`per_sample_weights`` is passed, the only supported ``mode`` is ``""sum""``, which computes a weighted sum according to :attr:`per_sample_weights`. Args: num_embeddings (int): size of the dictionary of embeddings embedding_dim (int): the size of each embedding vector max_norm (float, optional): If given, each embedding vector with norm larger than :attr:`max_norm` is renormalized to have norm :attr:`max_norm`. norm_type (float, optional): The p of the p-norm to compute for the :attr:`max_norm` option. Default ``2``. scale_grad_by_freq (boolean, optional): if given, this will scale gradients by the inverse of frequency of the words in the mini-batch. Default ``False``. Note: this option is not supported when ``mode=""max""``. mode (string, optional): ``""sum""``, ``""mean""`` or ``""max""``. Specifies the way to reduce the bag. ``""sum""`` computes the weighted sum, taking :attr:`per_sample_weights` into consideration. ``""mean""`` computes the average of the values in the bag, ``""max""`` computes the max value over each bag. Default: ``""mean""`` sparse (bool, optional): if ``True``, gradient w.r.t. :attr:`weight` matrix will be a sparse tensor. See Notes for more details regarding sparse gradients. Note: this option is not supported when ``mode=""max""``. Attributes: weight (Tensor): the learnable weights of the module of shape `(num_embeddings, embedding_dim)` initialized from :math:`\mathcal{N}(0, 1)`. Inputs: :attr:`input` (LongTensor), :attr:`offsets` (LongTensor, optional), and :attr:`per_index_weights` (Tensor, optional) - If :attr:`input` is 2D of shape `(B, N)`, it will be treated as ``B`` bags (sequences) each of fixed length ``N``, and this will return ``B`` values aggregated in a way depending on the :attr:`mode`. :attr:`offsets` is ignored and required to be ``None`` in this case. - If :attr:`input` is 1D of shape `(N)`, it will be treated as a concatenation of multiple bags (sequences). :attr:`offsets` is required to be a 1D tensor containing the starting index positions of each bag in :attr:`input`. Therefore, for :attr:`offsets` of shape `(B)`, :attr:`input` will be viewed as having ``B`` bags. Empty bags (i.e., having 0-length) will have returned vectors filled by zeros. per_sample_weights (Tensor, optional): a tensor of float / double weights, or None to indicate all weights should be taken to be ``1``. If specified, :attr:`per_sample_weights` must have exactly the same shape as input and is treated as having the same :attr:`offsets`, if those are not ``None``. Only supported for ``mode='sum'``. Output shape: `(B, embedding_dim)` Examples:: >>> # an Embedding module containing 10 tensors of size 3 >>> embedding_sum = nn.EmbeddingBag(10, 3, mode='sum') >>> # a batch of 2 samples of 4 indices each >>> input = torch.LongTensor([1,2,4,5,4,3,2,9]) >>> offsets = torch.LongTensor([0,4]) >>> embedding_sum(input, offsets) tensor([[-0.8861, -5.4350, -0.0523], [ 1.1306, -2.5798, -1.0044]])","Computes sums or means of 'bags' of embeddings, without instantiating the intermediate embeddings.","Examples:: >>> # an Embedding module containing 10 tensors of size 3 >>> embedding_sum = nn.EmbeddingBag(10, 3, mode='sum') >>> # a batch of 2 samples of 4 indices each >>> input = torch.LongTensor([1,2,4,5,4,3,2,9]) >>> offsets = torch.LongTensor([0,4]) >>> embedding_sum(input, offsets) tensor([[-0.8861, -5.4350, -0.0523], [ 1.1306, -2.5798, -1.0044]])","Args: num_embeddings (int): size of the dictionary of embeddings embedding_dim (int): the size of each embedding vector max_norm (float, optional): If given, each embedding vector with norm larger than :attr:`max_norm` is renormalized to have norm :attr:`max_norm`. norm_type (float, optional): The p of the p-norm to compute for the :attr:`max_norm` option. Default ``2``. scale_grad_by_freq (boolean, optional): if given, this will scale gradients by the inverse of frequency of the words in the mini-batch. Default ``False``. Note: this option is not supported when ``mode=""max""``. mode (string, optional): ``""sum""``, ``""mean""`` or ``""max""``. Specifies the way to reduce the bag. ``""sum""`` computes the weighted sum, taking :attr:`per_sample_weights` into consideration. ``""mean""`` computes the average of the values in the bag, ``""max""`` computes the max value over each bag. Default: ``""mean""`` sparse (bool, optional): if ``True``, gradient w.r.t. :attr:`weight` matrix will be a sparse tensor. See Notes for more details regarding sparse gradients. Note: this option is not supported when ``mode=""max""``.","Attributes: weight (Tensor): the learnable weights of the module of shape `(num_embeddings, embedding_dim)` initialized from :math:`\mathcal{N}(0, 1)`. Inputs: :attr:`input` (LongTensor), :attr:`offsets` (LongTensor, optional), and :attr:`per_index_weights` (Tensor, optional) - If :attr:`input` is 2D of shape `(B, N)`, it will be treated as ``B`` bags (sequences) each of fixed length ``N``, and this will return ``B`` values aggregated in a way depending on the :attr:`mode`. :attr:`offsets` is ignored and required to be ``None`` in this case. - If :attr:`input` is 1D of shape `(N)`, it will be treated as a concatenation of multiple bags (sequences). :attr:`offsets` is required to be a 1D tensor containing the starting index positions of each bag in :attr:`input`. Therefore, for :attr:`offsets` of shape `(B)`, :attr:`input` will be viewed as having ``B`` bags. Empty bags (i.e., having 0-length) will have returned vectors filled by zeros. per_sample_weights (Tensor, optional): a tensor of float / double weights, or None to indicate all weights should be taken to be ``1``. If specified, :attr:`per_sample_weights` must have exactly the same shape as input and is treated as having the same :attr:`offsets`, if those are not ``None``. Only supported for ``mode='sum'``.",,"""For bags of constant length and no :attr:`per_sample_weights`, this class * with ``mode=""""sum""""`` is equivalent to :class:`~torch.nn.Embedding` followed by ``torch.sum(dim=0)``, * with ``mode=""""mean""""`` is equivalent to :class:`~torch.nn.Embedding` followed by ``torch.mean(dim=0)``, * with ``mode=""""max""""`` is equivalent to :class:`~torch.nn.Embedding` followed by ``torch.max(dim=0)``. However, :class:`~torch.nn.EmbeddingBag` is much more time and memory efficient than using a chain of these operations. EmbeddingBag also supports per-sample weights as an argument to the forward pass. This scales the output of the Embedding before performing a weighted reduction as specified by ``mode``. If :attr:`per_sample_weights`` is passed, the only supported ``mode`` is ``""""sum""""``, which computes a weighted sum according to :attr:`per_sample_weights`. """,,,,,"However, :class:`~torch.nn.EmbeddingBag` is much more time and memory efficient than using a chain of these operations. For bags of constant length and no :attr:`per_sample_weights`, this class However, :class:`~torch.nn.EmbeddingBag` is much more time and memory efficient than using a chain of these operations.",,,,,,, 6,EnforceUnique,Raises an error if a key is seen more than once.,Raises an error if a key is seen more than once.,,,,,,,,,,,,,,,,, 6,Error,"Each error is a section in the output of cuda-memcheck. Each error in the report has an error message and a backtrace. It looks like: ========= Program hit cudaErrorInvalidValue (error 1) due to ""invalid argument"" on CUDA API call to cudaGetLastError. ========= Saved host backtrace up to driver entry point at error ========= Host Frame:/usr/lib/x86_64-linux-gnu/libcuda.so.1 [0x38c7b3] ========= Host Frame:/usr/local/cuda/lib64/libcudart.so.10.1 (cudaGetLastError + 0x163) [0x4c493] ========= Host Frame:/home/xgao/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch.so [0x5b77a05] ========= Host Frame:/home/xgao/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch.so [0x39d6d1d] ========= .....",Each error is a section in the output of cuda-memcheck.,,,Each error in the report has an error message and a backtrace,,,,"`========= Program hit cudaErrorInvalidValue (error 1) due to ""invalid argument"" on CUDA API call to cudaGetLastError",,,,,,,,,, 6,ExceptionWrapper,Wraps an exception plus traceback to communicate across threads,Wraps an exception plus traceback to communicate across threads,,,,,,,,,,,,,,,,, 6,ExternalInitializer,"This class is used in cases when the parameter should not be initialized by the initializer, but rather provided in the workspace when param_init_net is executed. Current version is not doing any real sanity checks to the parameter.",,"This class is used in cases when the parameter should not be initialized by the initializer, but rather provided in the workspace when param_init_net is executed.",,,,Current version is not doing any real sanity checks to the parameter.,,,,,,,,,,,, 6,FisherSnedecor,"Creates a Fisher-Snedecor distribution parameterized by :attr:`df1` and :attr:`df2`. Example:: >>> m = FisherSnedecor(torch.tensor([1.0]), torch.tensor([2.0])) >>> m.sample() # Fisher-Snedecor-distributed with df1=1 and df2=2 tensor([ 0.2453]) Args: df1 (float or Tensor): degrees of freedom parameter 1 df2 (float or Tensor): degrees of freedom parameter 2",Creates a Fisher-Snedecor distribution parameterized by :attr:`df1` and :attr:`df2`.,"Example:: >>> m = FisherSnedecor(torch.tensor([1.0]), torch.tensor([2.0])) >>> m.sample() # Fisher-Snedecor-distributed with df1=1 and df2=2 tensor([ 0.2453])","Args: df1 (float or Tensor): degrees of freedom parameter 1 df2 (float or Tensor): degrees of freedom parameter 2",,,,,,,,,,,,,,, 6,LastNWindowCollector,"Collect last-N samples from input record. If you have complex data, use PackRecords to pack it before using this layer. This layer is not thread safe.",Collect last-N samples from input record.,,,,,"If you have complex data, use PackRecords to pack it before using this layer. This layer is not thread safe.",,,,,This layer is not thread safe.,"If you have complex data, use PackRecords to pack it before using this layer.",,,,,, 6,LBFGS,"Implements L-BFGS algorithm, heavily inspired by `minFunc `. .. warning:: This optimizer doesn't support per-parameter options and parameter groups (there can be only one). .. warning:: Right now all parameters have to be on a single device. This will be improved in the future. .. note:: This is a very memory intensive optimizer (it requires additional ``param_bytes * (history_size + 1)`` bytes). If it doesn't fit in memory try reducing the history size, or use a different algorithm. Arguments: lr (float): learning rate (default: 1) max_iter (int): maximal number of iterations per optimization step (default: 20) max_eval (int): maximal number of function evaluations per optimization step (default: max_iter * 1.25). tolerance_grad (float): termination tolerance on first order optimality (default: 1e-5). tolerance_change (float): termination tolerance on function value/parameter changes (default: 1e-9). history_size (int): update history size (default: 100). line_search_fn (str): either 'strong_wolfe' or None (default: None).","Implements L-BFGS algorithm, heavily inspired by `minFunc",,"Arguments: lr (float): learning rate (default: 1) max_iter (int): maximal number of iterations per optimization step (default: 20) max_eval (int): maximal number of function evaluations per optimization step (default: max_iter * 1.25). tolerance_grad (float): termination tolerance on first order optimality (default: 1e-5). tolerance_change (float): termination tolerance on function value/parameter changes (default: 1e-9). history_size (int): update history size (default: 100). line_search_fn (str): either 'strong_wolfe' or None (default: None).",,,".. note:: This is a very memory intensive optimizer (it requires additional ``param_bytes * (history_size + 1)`` bytes). If it doesn't fit in memory try reducing the history size, or use a different algorithm.",,,`.,,".. warning:: This optimizer doesn't support per-parameter options and parameter groups (there can be only one). .. warning:: Right now all parameters have to be on a single device. This will be improved in the future.",,,,,,, 6,Module,"Base class for all neural network modules. Your models should also subclass this class. Modules can also contain other Modules, allowing to nest them in a tree structure. You can assign the submodules as regular attributes:: import torch.nn as nn import torch.nn.functional as F class Model(nn.Module): def __init__(self): super(Model, self).__init__() self.conv1 = nn.Conv2d(1, 20, 5) self.conv2 = nn.Conv2d(20, 20, 5) def forward(self, x): x = F.relu(self.conv1(x)) return F.relu(self.conv2(x)) Submodules assigned in this way will be registered, and will have their parameters converted too when you call :meth:`to`, etc.",Base class for all neural network modules.,"import torch.nn as nn import torch.nn.functional as F class Model(nn.Module): def __init__(self): super(Model, self).__init__() self.conv1 = nn.Conv2d(1, 20, 5) self.conv2 = nn.Conv2d(20, 20, 5) def forward(self, x): x = F.relu(self.conv1(x)) return F.relu(self.conv2(x))",,"Modules can also contain other Modules, allowing to nest them in a tree structure. You can assign the submodules as regular attributes:: Submodules assigned in this way will be registered, and will have their parameters converted too when you call :meth:`to`, etc.",,,,,,,,Your models should also subclass this class.,,,,,, 6,MultiLabelMarginLoss,"Creates a criterion that optimizes a multi-class multi-classification hinge loss (margin-based loss) between input :math:`x` (a 2D mini-batch `Tensor`) and output :math:`y` (which is a 2D `Tensor` of target class indices). For each sample in the mini-batch: .. math:: \text{loss}(x, y) = \sum_{ij}\frac{\max(0, 1 - (x[y[j]] - x[i]))}{\text{x.size}(0)} where :math:`x \in \left\{0, \; \cdots , \; \text{x.size}(0) - 1\right\}`, \ :math:`y \in \left\{0, \; \cdots , \; \text{y.size}(0) - 1\right\}`, \ :math:`0 \leq y[j] \leq \text{x.size}(0)-1`, \ and :math:`i \neq y[j]` for all :math:`i` and :math:`j`. :math:`y` and :math:`x` must have the same size. The criterion only considers a contiguous block of non-negative targets that starts at the front. This allows for different samples to have variable amounts of target classes. Args: size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. Default: ``True`` reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Shape: - Input: :math:`(C)` or :math:`(N, C)` where `N` is the batch size and `C` is the number of classes. - Target: :math:`(C)` or :math:`(N, C)`, label targets padded by -1 ensuring same shape as the input. - Output: scalar. If :attr:`reduction` is ``'none'``, then :math:`(N)`. Examples:: >>> loss = nn.MultiLabelMarginLoss() >>> x = torch.FloatTensor([[0.1, 0.2, 0.4, 0.8]]) >>> # for target y, only consider labels 3 and 0, not after label -1 >>> y = torch.LongTensor([[3, 0, -1, 1]]) >>> loss(x, y) >>> # 0.25 * ((1-(0.1-0.2)) + (1-(0.1-0.4)) + (1-(0.8-0.2)) + (1-(0.8-0.4))) tensor(0.8500)","Creates a criterion that optimizes a multi-class multi-classification hinge loss (margin-based loss) between input :math:`x` (a 2D mini-batch `Tensor`) and output :math:`y` (which is a 2D `Tensor` of target class indices).","Examples:: >>> loss = nn.MultiLabelMarginLoss() >>> x = torch.FloatTensor([[0.1, 0.2, 0.4, 0.8]]) >>> # for target y, only consider labels 3 and 0, not after label -1 >>> y = torch.LongTensor([[3, 0, -1, 1]]) >>> loss(x, y) >>> # 0.25 * ((1-(0.1-0.2)) + (1-(0.1-0.4)) + (1-(0.8-0.2)) + (1-(0.8-0.4))) tensor(0.8500)","Args: size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. Default: ``True`` reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'``","For each sample in the mini-batch: .. math:: \text{loss}(x, y) = \sum_{ij}\frac{\max(0, 1 - (x[y[j]] - x[i]))}{\text{x.size}(0)} where :math:`x \in \left\{0, \; \cdots , \; \text{x.size}(0) - 1\right\}`, \ :math:`y \in \left\{0, \; \cdots , \; \text{y.size}(0) - 1\right\}`, \ :math:`0 \leq y[j] \leq \text{x.size}(0)-1`, \ and :math:`i \neq y[j]` for all :math:`i` and :math:`j`. :math:`y` and :math:`x` must have the same size. The criterion only considers a contiguous block of non-negative targets that starts at the front. This allows for different samples to have variable amounts of target classes.",,,,,,,,,,,,,, 6,NetModifier,"An abstraction class for supporting modifying a generated net. Inherited classes should implement the modify_net method where related operators are added to the net. Example usage: modifier = SomeNetModifier(opts) modifier(net)",An abstraction class for supporting modifying a generated net.,"Example usage: modifier = SomeNetModifier(opts) modifier(net)",,"Inherited classes should implement the modify_net method where related operators are added to the net.",,,,,,,,,,,,,"Inherited classes should implement the modify_net method where related operators are added to the net.", 6,OneHotCategorical,"Creates a one-hot categorical distribution parameterized by :attr:`probs` or :attr:`logits`. Samples are one-hot coded vectors of size ``probs.size(-1)``. .. note:: :attr:`probs` must be non-negative, finite and have a non-zero sum, and it will be normalized to sum to 1. See also: :func:`torch.distributions.Categorical` for specifications of :attr:`probs` and :attr:`logits`. Example:: >>> m = OneHotCategorical(torch.tensor([ 0.25, 0.25, 0.25, 0.25 ])) >>> m.sample() # equal probability of 0, 1, 2, 3 tensor([ 0., 0., 0., 1.]) Args: probs (Tensor): event probabilities logits (Tensor): event log probabilities","Creates a one-hot categorical distribution parameterized by :attr:`probs` or :attr:`logits`.","Example:: >>> m = OneHotCategorical(torch.tensor([ 0.25, 0.25, 0.25, 0.25 ])) >>> m.sample() # equal probability of 0, 1, 2, 3 tensor([ 0., 0., 0., 1.])","Args: probs (Tensor): event probabilities logits (Tensor): event log probabilities",Samples are one-hot coded vectors of size ``probs.size(-1)``.,,".. note:: :attr:`probs` must be non-negative, finite and have a non-zero sum, and it will be normalized to sum to 1.",,,"See also: :func:`torch.distributions.Categorical` for specifications of :attr:`probs` and :attr:`logits`.",,".. note:: :attr:`probs` must be non-negative, finite and have a non-zero sum, and it will be normalized to sum to 1.",,,,,,, 6,Poisson,"Creates a Poisson distribution parameterized by :attr:`rate`, the rate parameter. Samples are nonnegative integers, with a pmf given by .. math:: \mathrm{rate}^k \frac{e^{-\mathrm{rate}}}{k!} Example:: >>> m = Poisson(torch.tensor([4])) >>> m.sample() tensor([ 3.]) Args: rate (Number, Tensor): the rate parameter","Creates a Poisson distribution parameterized by :attr:`rate`, the rate parameter.","Samples are nonnegative integers, with a pmf given by .. math:: \mathrm{rate}^k \frac{e^{-\mathrm{rate}}}{k!} Example:: >>> m = Poisson(torch.tensor([4])) >>> m.sample() tensor([ 3.])","Args: rate (Number, Tensor): the rate parameter","Samples are nonnegative integers, with a pmf given by .. math:: \mathrm{rate}^k \frac{e^{-\mathrm{rate}}}{k!}",,,,,,,,,,,,,, 6,QuantWrapper,"A wrapper class that wraps the input module, adds QuantStub and DeQuantStub and surround the call to module with call to quant and dequant modules. This is used by the `quantization` utility functions to add the quant and dequant modules, before `convert` function `QuantStub` will just be observer, it observes the input tensor, after `convert`, `QuantStub` will be swapped to `nnq.Quantize` which does actual quantization. Similarly for `DeQuantStub`.","A wrapper class that wraps the input module, adds QuantStub and DeQuantStub and surround the call to module with call to quant and dequant modules.",,,"This is used by the `quantization` utility functions to add the quant and dequant modules, before `convert` function `QuantStub` will just be observer, it observes the input tensor, after `convert`, `QuantStub` will be swapped to `nnq.Quantize` which does actual quantization. Similarly for `DeQuantStub`.",,,,,,,,,,,,,, 6,ResNetBuilder,Helper class for constructing residual blocks.,Helper class for constructing residual blocks.,,,,,,,,,,,,,,,,, 6,SGD,"Implements stochastic gradient descent (optionally with momentum). Nesterov momentum is based on the formula from `On the importance of initialization and momentum in deep learning`__. Args: params (iterable): iterable of parameters to optimize or dicts defining parameter groups lr (float): learning rate momentum (float, optional): momentum factor (default: 0) weight_decay (float, optional): weight decay (L2 penalty) (default: 0) dampening (float, optional): dampening for momentum (default: 0) nesterov (bool, optional): enables Nesterov momentum (default: False) Example: >>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9) >>> optimizer.zero_grad() >>> loss_fn(model(input), target).backward() >>> optimizer.step() __ http://www.cs.toronto.edu/%7Ehinton/absps/momentum.pdf .. note:: The implementation of SGD with Momentum/Nesterov subtly differs from Sutskever et. al. and implementations in some other frameworks. Considering the specific case of Momentum, the update can be written as .. math:: v_{t+1} = \mu * v_{t} + g_{t+1} \\ p_{t+1} = p_{t} - lr * v_{t+1} where p, g, v and :math:`\mu` denote the parameters, gradient, velocity, and momentum respectively. This is in contrast to Sutskever et. al. and other frameworks which employ an update of the form .. math:: v_{t+1} = \mu * v_{t} + lr * g_{t+1} \\ p_{t+1} = p_{t} - v_{t+1} The Nesterov version is analogously modified.",Implements stochastic gradient descent (optionally with momentum).,"Example: >>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9) >>> optimizer.zero_grad() >>> loss_fn(model(input), target).backward() >>> optimizer.step()","Args: params (iterable): iterable of parameters to optimize or dicts defining parameter groups lr (float): learning rate momentum (float, optional): momentum factor (default: 0) weight_decay (float, optional): weight decay (L2 penalty) (default: 0) dampening (float, optional): dampening for momentum (default: 0) nesterov (bool, optional): enables Nesterov momentum (default: False)","Nesterov momentum is based on the formula from `On the importance of initialization and momentum in deep learning`__.",,"Nesterov momentum is based on the formula from `On the importance of initialization and momentum in deep learning`__.",,,,,".. note:: The implementation of SGD with Momentum/Nesterov subtly differs from Sutskever et. al. and implementations in some other frameworks. Considering the specific case of Momentum, the update can be written as .. math:: v_{t+1} = \mu * v_{t} + g_{t+1} \\ p_{t+1} = p_{t} - lr * v_{t+1} where p, g, v and :math:`\mu` denote the parameters, gradient, velocity, and momentum respectively. This is in contrast to Sutskever et. al. and other frameworks which employ an update of the form .. math:: v_{t+1} = \mu * v_{t} + lr * g_{t+1} \\ p_{t+1} = p_{t} - v_{t+1} The Nesterov version is analogously modified.",,,,,,, 6,SharedCache,dictionary from multiprocessing handles to StorageWeakRef,dictionary from multiprocessing handles to StorageWeakRef,,,,,,,,,,,,,,,,, 6,StackedLSTMWithDropout,Necessary for iterating through self.layers and dropout support,Necessary for iterating through self.layers and dropout support,,,,,,,,,,,,,,,,, 6,StackTransform,"Transform functor that applies a sequence of transforms `tseq` component-wise to each submatrix at `dim` in a way compatible with :func:`torch.stack`. Example:: x = torch.stack([torch.range(1, 10), torch.range(1, 10)], dim=1) t = StackTransform([ExpTransform(), identity_transform], dim=1) y = t(x)","Transform functor that applies a sequence of transforms `tseq` component-wise to each submatrix at `dim` in a way compatible with :func:`torch.stack`.","Example:: x = torch.stack([torch.range(1, 10), torch.range(1, 10)], dim=1) t = StackTransform([ExpTransform(), identity_transform], dim=1) y = t(x)",,,,,,,,,,,,,,,, 6,Subset,"Subset of a dataset at specified indices. Arguments: dataset (Dataset): The whole Dataset indices (sequence): Indices in the whole set selected for subset",Subset of a dataset at specified indices,,"Arguments: dataset (Dataset): The whole Dataset indices (sequence): Indices in the whole set selected for subset",,,,,,,,,,,,,,, 6,Task,"A Task is composed of an execution step and zero or more outputs. Tasks are executed in the context of a TaskGroup, which, in turn, can be run by a Session. Task outputs are fetched by the session at the end of the run. The recommended way of creating a task is by using `net_builder.ops`. Example: from net_builder import ops with Node('trainer'), Task(name='my_task', num_instances=2): with ops.task_init(): globl = ops.Const(0) with ops.task_instance_init(): local = ops.Const(0) with ops.loop(100): ops.Copy(globl, local) with ops.task_instance_exit(): ops.Add([globl, local], [globl]) with ops.task_exit(): ops.Mul([globl, globl], [globl]) The task above will create 2 instances that will run in parallel. Each instance will copy `local` to `globl` 100 times, Then Add `local` to `globl` once. The `Mul` will only execute once, after all the instances of the task have finished.",A Task is composed of an execution step and zero or more outputs.,"Example: from net_builder import ops with Node('trainer'), Task(name='my_task', num_instances=2): with ops.task_init(): globl = ops.Const(0) with ops.task_instance_init(): local = ops.Const(0) with ops.loop(100): ops.Copy(globl, local) with ops.task_instance_exit(): ops.Add([globl, local], [globl]) with ops.task_exit(): ops.Mul([globl, globl], [globl]) The task above will create 2 instances that will run in parallel. Each instance will copy `local` to `globl` 100 times, Then Add `local` to `globl` once. The `Mul` will only execute once, after all the instances of the task have finished.",,"Tasks are executed in the context of a TaskGroup, which, in turn, can be run by a Session. Task outputs are fetched by the session at the end of the run.",,,,,,,,The recommended way of creating a task is by using `net_builder.ops`.,,,,,, 6,TaskOutput,"Represents the output of a task. An output can be a blob, a list of blob, or a record.",Represents the output of a task.,,"An output can be a blob, a list of blob, or a record.",,,,,,,,,,,,,,, 6,TestBuiltins,Tests for TorchScript support of Python builtin functions.,Tests for TorchScript support of Python builtin functions.,,,,,,,,,,,,,,,,, 6,TestQuantizedLinear,Tests the correctness of the quantized linear and linear_relu op.,Tests the correctness of the quantized linear and linear_relu op.,,,,,,,,,,,,,,,,, 6,TestYellowFin,"YellowFin: An automatic tuner for momentum SGD (https://arxiv.org/abs/1706.03471)",YellowFin: An automatic tuner for momentum SGD,,,,,,,,"YellowFin: An automatic tuner for momentum SGD (https://arxiv.org/abs/1706.03471)",,,,,,,,, 6,TransformerEncoderLayer,"TransformerEncoderLayer is made up of self-attn and feedforward network. This standard encoder layer is based on the paper ""Attention Is All You Need"". Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000-6010. Users may modify or implement in a different way during application. Args: d_model: the number of expected features in the input (required). nhead: the number of heads in the multiheadattention models (required). dim_feedforward: the dimension of the feedforward network model (default=2048). dropout: the dropout value (default=0.1). activation: the activation function of intermediate layer, relu or gelu (default=relu). Examples:: >>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8) >>> src = torch.rand(10, 32, 512) >>> out = encoder_layer(src)",,"Examples:: >>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8) >>> src = torch.rand(10, 32, 512) >>> out = encoder_layer(src)","Args: d_model: the number of expected features in the input (required). nhead: the number of heads in the multiheadattention models (required). dim_feedforward: the dimension of the feedforward network model (default=2048). dropout: the dropout value (default=0.1). activation: the activation function of intermediate layer, relu or gelu (default=relu).",,,"TransformerEncoderLayer is made up of self-attn and feedforward network.This standard encoder layer is based on the paper ""Attention Is All You Need"". Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000-6010. Users may modify or implement in a different way during application.",,,"This standard encoder layer is based on the paper ""Attention Is All You Need"". Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000-6010. Users may modify or implement in a different way during application.",,,,,,"This standard encoder layer is based on the paper ""Attention Is All You Need"". Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000-6010. Users may modify or implement in a different way during application.","Users may modify or implement in a different way during application.",, 6,Unfold,"Extracts sliding local blocks from a batched input tensor. Consider a batched :attr:`input` tensor of shape :math:`(N, C, *)`, where :math:`N` is the batch dimension, :math:`C` is the channel dimension, and :math:`*` represent arbitrary spatial dimensions. This operation flattens each sliding :attr:`kernel_size`-sized block within the spatial dimensions of :attr:`input` into a column (i.e., last dimension) of a 3-D :attr:`output` tensor of shape :math:`(N, C \times \prod(\text{kernel\_size}), L)`, where :math:`C \times \prod(\text{kernel\_size})` is the total number of values within each block (a block has :math:`\prod(\text{kernel\_size})` spatial locations each containing a :math:`C`-channeled vector), and :math:`L` is the total number of such blocks: .. math:: L = \prod_d \left\lfloor\frac{\text{spatial\_size}[d] + 2 \times \text{padding}[d] % - \text{dilation}[d] \times (\text{kernel\_size}[d] - 1) - 1}{\text{stride}[d]} + 1\right\rfloor, where :math:`\text{spatial\_size}` is formed by the spatial dimensions of :attr:`input` (:math:`*` above), and :math:`d` is over all spatial dimensions. Therefore, indexing :attr:`output` at the last dimension (column dimension) gives all values within a certain block. The :attr:`padding`, :attr:`stride` and :attr:`dilation` arguments specify how the sliding blocks are retrieved. * :attr:`stride` controls the stride for the sliding blocks. * :attr:`padding` controls the amount of implicit zero-paddings on both sides for :attr:`padding` number of points for each dimension before reshaping. * :attr:`dilation` controls the spacing between the kernel points; also known as the trous algorithm. It is harder to describe, but this `link`_ has a nice visualization of what :attr:`dilation` does. Args: kernel_size (int or tuple): the size of the sliding blocks stride (int or tuple, optional): the stride of the sliding blocks in the input spatial dimensions. Default: 1 padding (int or tuple, optional): implicit zero padding to be added on both sides of input. Default: 0 dilation (int or tuple, optional): a parameter that controls the stride of elements within the neighborhood. Default: 1 * If :attr:`kernel_size`, :attr:`dilation`, :attr:`padding` or :attr:`stride` is an int or a tuple of length 1, their values will be replicated across all spatial dimensions. * For the case of two input spatial dimensions this operation is sometimes called ``im2col``. .. note:: :class:`~torch.nn.Fold` calculates each combined value in the resulting large tensor by summing all values from all containing blocks. :class:`~torch.nn.Unfold` extracts the values in the local blocks by copying from the large tensor. So, if the blocks overlap, they are not inverses of each other. In general, folding and unfolding operations are related as follows. Consider :class:`~torch.nn.Fold` and :class:`~torch.nn.Unfold` instances created with the same parameters: >>> fold_params = dict(kernel_size=..., dilation=..., padding=..., stride=...) >>> fold = nn.Fold(output_size=..., **fold_params) >>> unfold = nn.Unfold(**fold_params) Then for any (supported) ``input`` tensor the following equality holds: :: fold(unfold(input)) == divisor * input where ``divisor`` is a tensor that depends only on the shape and dtype of the ``input``: >>> input_ones = torch.ones(input.shape, dtype=input.dtype) >>> divisor = fold(unfold(input_ones)) When the ``divisor`` tensor contains no zero elements, then ``fold`` and ``unfold`` operations are inverses of each other (up to constant divisor). .. warning:: Currently, only 4-D input tensors (batched image-like tensors) are supported. Shape: - Input: :math:`(N, C, *)` - Output: :math:`(N, C \times \prod(\text{kernel\_size}), L)` as described above Examples:: >>> unfold = nn.Unfold(kernel_size=(2, 3)) >>> input = torch.randn(2, 5, 3, 4) >>> output = unfold(input) >>> # each patch contains 30 values (2x3=6 vectors, each of 5 channels) >>> # 4 blocks (2x3 kernels) in total in the 3x4 input >>> output.size() torch.Size([2, 30, 4]) >>> # Convolution is equivalent with Unfold + Matrix Multiplication + Fold (or view to output shape) >>> inp = torch.randn(1, 3, 10, 12) >>> w = torch.randn(2, 3, 4, 5) >>> inp_unf = torch.nn.functional.unfold(inp, (4, 5)) >>> out_unf = inp_unf.transpose(1, 2).matmul(w.view(w.size(0), -1).t()).transpose(1, 2) >>> out = torch.nn.functional.fold(out_unf, (7, 8), (1, 1)) >>> # or equivalently (and avoiding a copy), >>> # out = out_unf.view(1, 2, 7, 8) >>> (torch.nn.functional.conv2d(inp, w) - out).abs().max() tensor(1.9073e-06) .. _link: https://github.com/vdumoulin/conv_arithmetic/blob/master/README.md",Extracts sliding local blocks from a batched input tensor.,"Consider a batched :attr:`input` tensor of shape :math:`(N, C, *)`, where :math:`N` is the batch dimension, :math:`C` is the channel dimension, and :math:`*` represent arbitrary spatial dimensions. This operation flattens each sliding :attr:`kernel_size`-sized block within the spatial dimensions of :attr:`input` into a column (i.e., last dimension) of a 3-D :attr:`output` tensor of shape :math:`(N, C \times \prod(\text{kernel\_size}), L)`, where :math:`C \times \prod(\text{kernel\_size})` is the total number of values within each block (a block has :math:`\prod(\text{kernel\_size})` spatial locations each containing a :math:`C`-channeled vector), and :math:`L` is the total number of such blocks: .. math:: L = \prod_d \left\lfloor\frac{\text{spatial\_size}[d] + 2 \times \text{padding}[d] % - \text{dilation}[d] \times (\text{kernel\_size}[d] - 1) - 1}{\text{stride}[d]} + 1\right\rfloor, where :math:`\text{spatial\_size}` is formed by the spatial dimensions of :attr:`input` (:math:`*` above), and :math:`d` is over all spatial dimensions. Therefore, indexing :attr:`output` at the last dimension (column dimension) gives all values within a certain block. Examples:: >>> unfold = nn.Unfold(kernel_size=(2, 3)) >>> input = torch.randn(2, 5, 3, 4) >>> output = unfold(input) >>> # each patch contains 30 values (2x3=6 vectors, each of 5 channels) >>> # 4 blocks (2x3 kernels) in total in the 3x4 input >>> output.size() torch.Size([2, 30, 4]) >>> # Convolution is equivalent with Unfold + Matrix Multiplication + Fold (or view to output shape) >>> inp = torch.randn(1, 3, 10, 12) >>> w = torch.randn(2, 3, 4, 5) >>> inp_unf = torch.nn.functional.unfold(inp, (4, 5)) >>> out_unf = inp_unf.transpose(1, 2).matmul(w.view(w.size(0), -1).t()).transpose(1, 2) >>> out = torch.nn.functional.fold(out_unf, (7, 8), (1, 1)) >>> # or equivalently (and avoiding a copy), >>> # out = out_unf.view(1, 2, 7, 8) >>> (torch.nn.functional.conv2d(inp, w) - out).abs().max() tensor(1.9073e-06)","The :attr:`padding`, :attr:`stride` and :attr:`dilation` arguments specify how the sliding blocks are retrieved. * :attr:`stride` controls the stride for the sliding blocks. * :attr:`padding` controls the amount of implicit zero-paddings on both sides for :attr:`padding` number of points for each dimension before reshaping. * :attr:`dilation` controls the spacing between the kernel points; also known as the � trous algorithm. It is harder to describe, but this `link`_ has a nice visualization of what :attr:`dilation` does. Args: kernel_size (int or tuple): the size of the sliding blocks stride (int or tuple, optional): the stride of the sliding blocks in the input spatial dimensions. Default: 1 padding (int or tuple, optional): implicit zero padding to be added on both sides of input. Default: 0 dilation (int or tuple, optional): a parameter that controls the stride of elements within the neighborhood. Default: 1 * If :attr:`kernel_size`, :attr:`dilation`, :attr:`padding` or :attr:`stride` is an int or a tuple of length 1, their values will be replicated across all spatial dimensions. * For the case of two input spatial dimensions this operation is sometimes called ``im2col``.","Consider a batched :attr:`input` tensor of shape :math:`(N, C, *)`, where :math:`N` is the batch dimension, :math:`C` is the channel dimension, and :math:`*` represent arbitrary spatial dimensions. This operation flattens each sliding :attr:`kernel_size`-sized block within the spatial dimensions of :attr:`input` into a column (i.e., last dimension) of a 3-D :attr:`output` tensor of shape :math:`(N, C \times \prod(\text{kernel\_size}), L)`, where :math:`C \times \prod(\text{kernel\_size})` is the total number of values within each block (a block has :math:`\prod(\text{kernel\_size})` spatial locations each containing a :math:`C`-channeled vector), and :math:`L` is the total number of such blocks: .. math:: L = \prod_d \left\lfloor\frac{\text{spatial\_size}[d] + 2 \times \text{padding}[d] % - \text{dilation}[d] \times (\text{kernel\_size}[d] - 1) - 1}{\text{stride}[d]} + 1\right\rfloor, where :math:`\text{spatial\_size}` is formed by the spatial dimensions of :attr:`input` (:math:`*` above), and :math:`d` is over all spatial dimensions.",,"note:: :class:`~torch.nn.Fold` calculates each combined value in the resulting large tensor by summing all values from all containing blocks. :class:`~torch.nn.Unfold` extracts the values in the local blocks by copying from the large tensor. So, if the blocks overlap, they are not inverses of each other. In general, folding and unfolding operations are related as follows. Consider :class:`~torch.nn.Fold` and :class:`~torch.nn.Unfold` instances created with the same parameters: >>> fold_params = dict(kernel_size=..., dilation=..., padding=..., stride=...) >>> fold = nn.Fold(output_size=..., **fold_params) >>> unfold = nn.Unfold(**fold_params) Then for any (supported) ``input`` tensor the following equality holds: :: fold(unfold(input)) == divisor * input where ``divisor`` is a tensor that depends only on the shape and dtype of the ``input``: >>> input_ones = torch.ones(input.shape, dtype=input.dtype) >>> divisor = fold(unfold(input_ones)) When the ``divisor`` tensor contains no zero elements, then ``fold`` and ``unfold`` operations are inverses of each other (up to constant divisor).",,,This implementation was adapted from the github repo: `bckenstler/CLR`_,,".. warning:: Currently, only 4-D input tensors (batched image-like tensors) are supported.",,,,,,, 6,UploadTaskGroupBuilder,A simple class to upload checkpoints.,A simple class to upload checkpoints.,,,,,,,,,,,,,,,,, 6,UseOptimizer,"context class to allow setting the current context. Example usage with brew: - with UseOptimizer(optim): brew.func - with UseOptimizer({'WEIGHT': weight_optim}): brew.func - with UseOptimizer({'DEFAULT': optim, 'BIAS': bias_optim, 'WEIGHT': weight_optim}): brew.func - with UseOptimizer(optim1): brew.func with UseOptimizer(optim2): brew.func Example usage with layer: optimizers = {'optim1': optim1, 'optim2': optim2} with Optimizers(optimizers): optim = OptimizerContext.current().get_optimizer('optim1') layer(optim=optim)",context class to allow setting the current context.,"Example usage with brew: - with UseOptimizer(optim): brew.func - with UseOptimizer({'WEIGHT': weight_optim}): brew.func - with UseOptimizer({'DEFAULT': optim, 'BIAS': bias_optim, 'WEIGHT': weight_optim}): brew.func - with UseOptimizer(optim1): brew.func with UseOptimizer(optim2): brew.func",,,,,,,,,,,,,,,, 6,YellowFinOptimizer,"YellowFin: An automatic tuner for momentum SGD See https://arxiv.org/abs/1706.03471 for more details. This implementation has separate learning rate and momentum per each parameter.",YellowFin: An automatic tuner for momentum SGD,,,,,This implementation has separate learning rate and momentum per each parameter.,,,See https://arxiv.org/abs/1706.03471 for more details.,,,,,,,,, 7,AuthBase,Base class that all auth implementations derive from,Base class that all auth implementations derive from,,,,,,,,,,,,,,,,, 7,InvalidHeader,The header value provided was somehow invalid.,The header value provided was somehow invalid.,,,,,,,,,,,,,,,,, 7,PreparedRequest,"The fully mutable :class:`PreparedRequest ` object, containing the exact bytes that will be sent to the server. Generated from either a :class:`Request ` object or manually. Usage:: >>> import requests >>> req = requests.Request('GET', 'https://httpbin.org/get') >>> r = req.prepare() >>> r >>> s = requests.Session() >>> s.send(r) ","The fully mutable :class:`PreparedRequest ` object, containing the exact bytes that will be sent to the server.","Usage:: >>> import requests >>> req = requests.Request('GET', 'https://httpbin.org/get') >>> r = req.prepare() >>> r >>> s = requests.Session() >>> s.send(r) ",,,,Generated from either a :class:`Request ` object or manually.,,,,,,,,,,,, 7,Response,"The :class:`Response ` object, which contains a server's response to an HTTP request.","The :class:`Response ` object, which contains a server's response to an HTTP request.",,,,,,,,,,,,,,,,,