numpy.record.argsort method record.argsort() Scalar method identical to the corresponding array attribute. Please see ndarray.argsort.
numpy.reference.generated.numpy.record.argsort
numpy.record.astype method record.astype() Scalar method identical to the corresponding array attribute. Please see ndarray.astype.
numpy.reference.generated.numpy.record.astype
numpy.record.base attribute record.base Base object.
numpy.reference.generated.numpy.record.base
numpy.record.byteswap method record.byteswap() Scalar method identical to the corresponding array attribute. Please see ndarray.byteswap.
numpy.reference.generated.numpy.record.byteswap
numpy.record.choose method record.choose() Scalar method identical to the corresponding array attribute. Please see ndarray.choose.
numpy.reference.generated.numpy.record.choose
numpy.record.clip method record.clip() Scalar method identical to the corresponding array attribute. Please see ndarray.clip.
numpy.reference.generated.numpy.record.clip
numpy.record.compress method record.compress() Scalar method identical to the corresponding array attribute. Please see ndarray.compress.
numpy.reference.generated.numpy.record.compress
numpy.record.conjugate method record.conjugate() Scalar method identical to the corresponding array attribute. Please see ndarray.conjugate.
numpy.reference.generated.numpy.record.conjugate
numpy.record.copy method record.copy() Scalar method identical to the corresponding array attribute. Please see ndarray.copy.
numpy.reference.generated.numpy.record.copy
numpy.record.cumprod method record.cumprod() Scalar method identical to the corresponding array attribute. Please see ndarray.cumprod.
numpy.reference.generated.numpy.record.cumprod
numpy.record.cumsum method record.cumsum() Scalar method identical to the corresponding array attribute. Please see ndarray.cumsum.
numpy.reference.generated.numpy.record.cumsum
numpy.record.data attribute record.data Pointer to start of data.
numpy.reference.generated.numpy.record.data
numpy.record.diagonal method record.diagonal() Scalar method identical to the corresponding array attribute. Please see ndarray.diagonal.
numpy.reference.generated.numpy.record.diagonal
numpy.record.dump method record.dump() Scalar method identical to the corresponding array attribute. Please see ndarray.dump.
numpy.reference.generated.numpy.record.dump
numpy.record.dumps method record.dumps() Scalar method identical to the corresponding array attribute. Please see ndarray.dumps.
numpy.reference.generated.numpy.record.dumps
numpy.record.fill method record.fill() Scalar method identical to the corresponding array attribute. Please see ndarray.fill.
numpy.reference.generated.numpy.record.fill
numpy.record.flags attribute record.flags Integer value of flags.
numpy.reference.generated.numpy.record.flags
numpy.record.flat attribute record.flat A 1-D view of the scalar.
numpy.reference.generated.numpy.record.flat
numpy.record.flatten method record.flatten() Scalar method identical to the corresponding array attribute. Please see ndarray.flatten.
numpy.reference.generated.numpy.record.flatten
numpy.record.getfield method record.getfield() Scalar method identical to the corresponding array attribute. Please see ndarray.getfield.
numpy.reference.generated.numpy.record.getfield
numpy.record.item method record.item() Scalar method identical to the corresponding array attribute. Please see ndarray.item.
numpy.reference.generated.numpy.record.item
numpy.record.itemset method record.itemset() Scalar method identical to the corresponding array attribute. Please see ndarray.itemset.
numpy.reference.generated.numpy.record.itemset
numpy.record.itemsize attribute record.itemsize The length of one element in bytes.
numpy.reference.generated.numpy.record.itemsize
numpy.record.max method record.max() Scalar method identical to the corresponding array attribute. Please see ndarray.max.
numpy.reference.generated.numpy.record.max
numpy.record.mean method record.mean() Scalar method identical to the corresponding array attribute. Please see ndarray.mean.
numpy.reference.generated.numpy.record.mean
numpy.record.min method record.min() Scalar method identical to the corresponding array attribute. Please see ndarray.min.
numpy.reference.generated.numpy.record.min
numpy.record.nbytes attribute record.nbytes The length of the scalar in bytes.
numpy.reference.generated.numpy.record.nbytes
numpy.record.ndim attribute record.ndim The number of array dimensions.
numpy.reference.generated.numpy.record.ndim
numpy.record.newbyteorder method record.newbyteorder(new_order='S', /) Return a new dtype with a different byte order. Changes are also made in all fields and sub-arrays of the data type. The new_order code can be any from the following: ‘S’ - swap dtype from current to opposite endian {‘<’, ‘little’} - little endian {‘>’, ‘big’} - big endian {‘=’, ‘native’} - native order {‘|’, ‘I’} - ignore (no change to byte order) Parameters new_orderstr, optional Byte order to force; a value from the byte order specifications above. The default value (‘S’) results in swapping the current byte order. Returns new_dtypedtype New dtype object with the given change to the byte order.
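For illustration (this snippet is not part of the original entry), the same new_order codes can be exercised on a plain np.dtype, whose newbyteorder method accepts the identical codes:

>>> import numpy as np
>>> dt = np.dtype('<i4')                      # explicitly little-endian 32-bit integer
>>> dt.newbyteorder('>')                      # force big-endian
dtype('>i4')
>>> dt.newbyteorder('S') == np.dtype('>i4')   # 'S' swaps the current order
True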
numpy.reference.generated.numpy.record.newbyteorder
numpy.record.nonzero method record.nonzero() Scalar method identical to the corresponding array attribute. Please see ndarray.nonzero.
numpy.reference.generated.numpy.record.nonzero
numpy.record.pprint method record.pprint()[source] Pretty-print all fields.
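A small usage sketch (not from the original page): a numpy.record is obtained by indexing a record array. The field names x and y are arbitrary, and the sketch assumes pprint returns the formatted string rather than printing it directly.

import numpy as np

# Build a record array with two named fields (names chosen for illustration).
arr = np.rec.array([(1, 2.0), (3, 4.0)], dtype=[('x', '<i4'), ('y', '<f8')])

rec = arr[0]            # indexing a recarray yields a numpy.record scalar
print(rec.pprint())     # one "name: value" line per field of the record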
numpy.reference.generated.numpy.record.pprint
numpy.record.prod method record.prod() Scalar method identical to the corresponding array attribute. Please see ndarray.prod.
numpy.reference.generated.numpy.record.prod
numpy.record.ptp method record.ptp() Scalar method identical to the corresponding array attribute. Please see ndarray.ptp.
numpy.reference.generated.numpy.record.ptp
numpy.record.put method record.put() Scalar method identical to the corresponding array attribute. Please see ndarray.put.
numpy.reference.generated.numpy.record.put
numpy.record.ravel method record.ravel() Scalar method identical to the corresponding array attribute. Please see ndarray.ravel.
numpy.reference.generated.numpy.record.ravel
numpy.record.repeat method record.repeat() Scalar method identical to the corresponding array attribute. Please see ndarray.repeat.
numpy.reference.generated.numpy.record.repeat
numpy.record.reshape method record.reshape() Scalar method identical to the corresponding array attribute. Please see ndarray.reshape.
numpy.reference.generated.numpy.record.reshape
numpy.record.resize method record.resize() Scalar method identical to the corresponding array attribute. Please see ndarray.resize.
numpy.reference.generated.numpy.record.resize
numpy.record.round method record.round() Scalar method identical to the corresponding array attribute. Please see ndarray.round.
numpy.reference.generated.numpy.record.round
numpy.record.searchsorted method record.searchsorted() Scalar method identical to the corresponding array attribute. Please see ndarray.searchsorted.
numpy.reference.generated.numpy.record.searchsorted
numpy.record.setfield method record.setfield() Scalar method identical to the corresponding array attribute. Please see ndarray.setfield.
numpy.reference.generated.numpy.record.setfield
numpy.record.setflags method record.setflags() Scalar method identical to the corresponding array attribute. Please see ndarray.setflags.
numpy.reference.generated.numpy.record.setflags
numpy.record.size attribute record.size The number of elements in the scalar.
numpy.reference.generated.numpy.record.size
numpy.record.sort method record.sort() Scalar method identical to the corresponding array attribute. Please see ndarray.sort.
numpy.reference.generated.numpy.record.sort
numpy.record.squeeze method record.squeeze() Scalar method identical to the corresponding array attribute. Please see ndarray.squeeze.
numpy.reference.generated.numpy.record.squeeze
numpy.record.std method record.std() Scalar method identical to the corresponding array attribute. Please see ndarray.std.
numpy.reference.generated.numpy.record.std
numpy.record.strides attribute record.strides Tuple of bytes steps in each dimension.
numpy.reference.generated.numpy.record.strides
numpy.record.sum method record.sum() Scalar method identical to the corresponding array attribute. Please see ndarray.sum.
numpy.reference.generated.numpy.record.sum
numpy.record.swapaxes method record.swapaxes() Scalar method identical to the corresponding array attribute. Please see ndarray.swapaxes.
numpy.reference.generated.numpy.record.swapaxes
numpy.record.T attribute record.T Scalar attribute identical to the corresponding array attribute. Please see ndarray.T.
numpy.reference.generated.numpy.record.t
numpy.record.take method record.take() Scalar method identical to the corresponding array attribute. Please see ndarray.take.
numpy.reference.generated.numpy.record.take
numpy.record.tofile method record.tofile() Scalar method identical to the corresponding array attribute. Please see ndarray.tofile.
numpy.reference.generated.numpy.record.tofile
numpy.record.tolist method record.tolist() Scalar method identical to the corresponding array attribute. Please see ndarray.tolist.
numpy.reference.generated.numpy.record.tolist
numpy.record.tostring method record.tostring() Scalar method identical to the corresponding array attribute. Please see ndarray.tostring.
numpy.reference.generated.numpy.record.tostring
numpy.record.trace method record.trace() Scalar method identical to the corresponding array attribute. Please see ndarray.trace.
numpy.reference.generated.numpy.record.trace
numpy.record.transpose method record.transpose() Scalar method identical to the corresponding array attribute. Please see ndarray.transpose.
numpy.reference.generated.numpy.record.transpose
numpy.record.var method record.var() Scalar method identical to the corresponding array attribute. Please see ndarray.var.
numpy.reference.generated.numpy.record.var
numpy.record.view method record.view() Scalar method identical to the corresponding array attribute. Please see ndarray.view.
numpy.reference.generated.numpy.record.view
NumPy Reference Release 1.22 Date December 31, 2021 This reference manual details functions, modules, and objects included in NumPy, describing what they are and what they do. For learning how to use NumPy, see the complete documentation. Array objects The N-dimensional array (ndarray) Scalars Data type objects (dtype) Indexing routines Iterating Over Arrays Standard array subclasses Masked arrays The Array Interface Datetimes and Timedeltas Constants Universal functions (ufunc) ufunc Available ufuncs Routines Array creation routines Array manipulation routines Binary operations String operations C-Types Foreign Function Interface (numpy.ctypeslib) Datetime Support Functions Data type routines Optionally SciPy-accelerated routines (numpy.dual) Mathematical functions with automatic domain (numpy.emath) Floating point error handling Discrete Fourier Transform (numpy.fft) Functional programming NumPy-specific help functions Input and output Linear algebra (numpy.linalg) Logic functions Masked array operations Mathematical functions Matrix library (numpy.matlib) Miscellaneous routines Padding Arrays Polynomials Random sampling (numpy.random) Set routines Sorting, searching, and counting Statistics Test Support (numpy.testing) Window functions Typing (numpy.typing) Mypy plugin Differences from the runtime NumPy API API Global State Performance-Related Options Interoperability-Related Options Debugging-Related Options Packaging (numpy.distutils) Modules in numpy.distutils Configuration class Building Installable C libraries Conversion of .src files NumPy Distutils - Users Guide SciPy structure Requirements for SciPy packages The setup.py file The __init__.py file Extra features in NumPy Distutils NumPy C-API Python Types and C-Structures System configuration Data Type API Array API Array Iterator API UFunc API Generalized Universal Function API NumPy core libraries C API Deprecations Memory management in NumPy SIMD Optimizations Build options for compilation Understanding CPU Dispatching, How the NumPy dispatcher works? Dive into the CPU dispatcher NumPy and SWIG numpy.i: a SWIG Interface File for NumPy Testing the numpy.i Typemaps Acknowledgements Large parts of this manual originate from Travis E. Oliphant’s book Guide to NumPy (which generously entered Public Domain in August 2008). The reference documentation for many of the functions are written by numerous contributors and developers of NumPy.
numpy.reference.index
Routines In this chapter routine docstrings are presented, grouped by functionality. Many docstrings contain example code, which demonstrates basic usage of the routine. The examples assume that NumPy is imported with: >>> import numpy as np A convenient way to execute examples is the %doctest_mode mode of IPython, which allows for pasting of multi-line examples and preserves indentation. Array creation routines From shape or value From existing data Creating record arrays (numpy.rec) Creating character arrays (numpy.char) Numerical ranges Building matrices The Matrix class Array manipulation routines Basic operations Changing array shape Transpose-like operations Changing number of dimensions Changing kind of array Joining arrays Splitting arrays Tiling arrays Adding and removing elements Rearranging elements Binary operations Elementwise bit operations Bit packing Output formatting String operations String operations Comparison String information Convenience class C-Types Foreign Function Interface (numpy.ctypeslib) Datetime Support Functions numpy.datetime_as_string numpy.datetime_data Business Day Functions Data type routines numpy.can_cast numpy.promote_types numpy.min_scalar_type numpy.result_type numpy.common_type numpy.obj2sctype Creating data types Data type information Data type testing Miscellaneous Optionally SciPy-accelerated routines (numpy.dual) Linear algebra FFT Other Mathematical functions with automatic domain (numpy.emath) Functions Floating point error handling Setting and getting error handling Internal functions Discrete Fourier Transform (numpy.fft) Standard FFTs Real FFTs Hermitian FFTs Helper routines Background information Implementation details Type Promotion Normalization Real and Hermitian transforms Higher dimensions References Examples Functional programming numpy.apply_along_axis numpy.apply_over_axes numpy.vectorize numpy.frompyfunc numpy.piecewise NumPy-specific help functions Finding help Reading help Input and output NumPy binary files (NPY, NPZ) Text files Raw binary files String formatting Memory mapping files Text formatting options Base-n representations Data sources Binary Format Description Linear algebra (numpy.linalg) The @ operator Matrix and vector products Decompositions Matrix eigenvalues Norms and other numbers Solving equations and inverting matrices Exceptions Linear algebra on several matrices at once Logic functions Truth value testing Array contents Array type testing Logical operations Comparison Masked array operations Constants Creation Inspecting the array Manipulating a MaskedArray Operations on masks Conversion operations Masked arrays arithmetic Mathematical functions Trigonometric functions Hyperbolic functions Rounding Sums, products, differences Exponents and logarithms Other special functions Floating point routines Rational routines Arithmetic operations Handling complex numbers Extrema Finding Miscellaneous Matrix library (numpy.matlib) numpy.matlib.empty numpy.matlib.zeros numpy.matlib.ones numpy.matlib.eye numpy.matlib.identity numpy.matlib.repmat numpy.matlib.rand numpy.matlib.randn Miscellaneous routines Performance tuning Memory ranges Array mixins NumPy version comparison Utility Matlab-like Functions Exceptions Padding Arrays numpy.pad Polynomials Transitioning from numpy.poly1d to numpy.polynomial Documentation for the polynomial Package Documentation for Legacy Polynomials Random sampling (numpy.random) Quick Start Introduction Concepts Features Set routines numpy.lib.arraysetops Making proper sets Boolean 
operations Sorting, searching, and counting Sorting Searching Counting Statistics Order statistics Averages and variances Correlating Histograms Test Support (numpy.testing) Asserts Asserts (not recommended) Decorators Test Running Guidelines Window functions Various windows
numpy.reference.routines
Testing the numpy.i Typemaps Introduction Writing tests for the numpy.i SWIG interface file is a combinatorial headache. At present, 12 different data types are supported, each with 74 different argument signatures, for a total of 888 typemaps supported “out of the box”. Each of these typemaps, in turn, might require several unit tests in order to verify expected behavior for both proper and improper inputs. Currently, this results in more than 1,000 individual unit tests executed when make test is run in the numpy/tools/swig subdirectory. To facilitate this many similar unit tests, some high-level programming techniques are employed, including C and SWIG macros, as well as Python inheritance. The purpose of this document is to describe the testing infrastructure employed to verify that the numpy.i typemaps are working as expected. Testing Organization There are three independent testing frameworks supported, for one-, two-, and three-dimensional arrays respectively. For one-dimensional arrays, there are two C++ files, a header and a source, named: Vector.h Vector.cxx that contain prototypes and code for a variety of functions that have one-dimensional arrays as function arguments. The file: Vector.i is a SWIG interface file that defines a python module Vector that wraps the functions in Vector.h while utilizing the typemaps in numpy.i to correctly handle the C arrays. The Makefile calls swig to generate Vector.py and Vector_wrap.cxx, and also executes the setup.py script that compiles Vector_wrap.cxx and links together the extension module _Vector.so or _Vector.dylib, depending on the platform. This extension module and the proxy file Vector.py are both placed in a subdirectory under the build directory. The actual testing takes place with a Python script named: testVector.py that uses the standard Python library module unittest, which performs several tests of each function defined in Vector.h for each data type supported. Two-dimensional arrays are tested in exactly the same manner. The above description applies, but with Matrix substituted for Vector. For three-dimensional tests, substitute Tensor for Vector. For four-dimensional tests, substitute SuperTensor for Vector. For flat in-place array tests, substitute Flat for Vector. For the descriptions that follow, we will reference the Vector tests, but the same information applies to Matrix, Tensor and SuperTensor tests. The command make test will ensure that all of the test software is built and then run all three test scripts. Testing Header Files Vector.h is a C++ header file that defines a C macro called TEST_FUNC_PROTOS that takes two arguments: TYPE, which is a data type name such as unsigned int; and SNAME, which is a short name for the same data type with no spaces, e.g. uint. This macro defines several function prototypes that have the prefix SNAME and have at least one argument that is an array of type TYPE. Those functions that have return arguments return a TYPE value. TEST_FUNC_PROTOS is then implemented for all of the data types supported by numpy.i: signed char unsigned char short unsigned short int unsigned int long unsigned long long long unsigned long long float double Testing Source Files Vector.cxx is a C++ source file that implements compilable code for each of the function prototypes specified in Vector.h. It defines a C macro TEST_FUNCS that has the same arguments and works in the same way as TEST_FUNC_PROTOS does in Vector.h. TEST_FUNCS is implemented for each of the 12 data types as above. 
Testing SWIG Interface Files Vector.i is a SWIG interface file that defines python module Vector. It follows the conventions for using numpy.i as described in this chapter. It defines a SWIG macro %apply_numpy_typemaps that has a single argument TYPE. It uses the SWIG directive %apply to apply the provided typemaps to the argument signatures found in Vector.h. This macro is then implemented for all of the data types supported by numpy.i. It then does a %include "Vector.h" to wrap all of the function prototypes in Vector.h using the typemaps in numpy.i. Testing Python Scripts After make is used to build the testing extension modules, testVector.py can be run to execute the tests. As with other scripts that use unittest to facilitate unit testing, testVector.py defines a class that inherits from unittest.TestCase: class VectorTestCase(unittest.TestCase): However, this class is not run directly. Rather, it serves as a base class to several other python classes, each one specific to a particular data type. The VectorTestCase class stores two strings for typing information: self.typeStr A string that matches one of the SNAME prefixes used in Vector.h and Vector.cxx. For example, "double". self.typeCode A short (typically single-character) string that represents a data type in numpy and corresponds to self.typeStr. For example, if self.typeStr is "double", then self.typeCode should be "d". Each test defined by the VectorTestCase class extracts the python function it is trying to test by accessing the Vector module’s dictionary: length = Vector.__dict__[self.typeStr + "Length"] In the case of double precision tests, this will return the python function Vector.doubleLength. We then define a new test case class for each supported data type with a short definition such as: class doubleTestCase(VectorTestCase): def __init__(self, methodName="runTest"): VectorTestCase.__init__(self, methodName) self.typeStr = "double" self.typeCode = "d" Each of these 12 classes is collected into a unittest.TestSuite, which is then executed. Errors and failures are summed together and returned as the exit argument. Any non-zero result indicates that at least one test did not pass.
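The following sketch is not taken from testVector.py; it only illustrates one way the per-type test case classes described above could be collected into a unittest.TestSuite and turned into an exit status. The class bodies are placeholders.

import sys
import unittest


class VectorTestCase(unittest.TestCase):
    """Placeholder for the base class described in the text."""

    def __init__(self, methodName="runTest"):
        unittest.TestCase.__init__(self, methodName)
        self.typeStr = "double"
        self.typeCode = "d"

    def runTest(self):
        # A real test would look up e.g. Vector.doubleLength here.
        self.assertEqual(len(self.typeCode), 1)


class doubleTestCase(VectorTestCase):
    def __init__(self, methodName="runTest"):
        VectorTestCase.__init__(self, methodName)
        self.typeStr = "double"
        self.typeCode = "d"


# Collect one TestCase class per supported data type into a single suite and
# use the summed error/failure count as the process exit status.
suite = unittest.TestSuite()
for case in (doubleTestCase,):   # the real script adds all 12 per-type classes
    suite.addTest(unittest.TestLoader().loadTestsFromTestCase(case))

result = unittest.TextTestRunner(verbosity=1).run(suite)
sys.exit(len(result.errors) + len(result.failures))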
numpy.reference.swig.testing
setup.py

#!/usr/bin/env python3
"""
Build the Cython demonstrations of low-level access to NumPy random

Usage: python setup.py build_ext -i
"""
import setuptools  # triggers monkeypatching distutils
from distutils.core import setup
from os.path import dirname, join, abspath

import numpy as np
from Cython.Build import cythonize
from numpy.distutils.misc_util import get_info
from setuptools.extension import Extension

path = dirname(__file__)
src_dir = join(dirname(path), '..', 'src')
defs = [('NPY_NO_DEPRECATED_API', 0)]
inc_path = np.get_include()
lib_path = [abspath(join(np.get_include(), '..', '..', 'random', 'lib'))]
lib_path += get_info('npymath')['library_dirs']

extending = Extension("extending",
                      sources=[join('.', 'extending.pyx')],
                      include_dirs=[
                          np.get_include(),
                          join(path, '..', '..')
                      ],
                      define_macros=defs,
                      )
distributions = Extension("extending_distributions",
                          sources=[join('.', 'extending_distributions.pyx')],
                          include_dirs=[inc_path],
                          library_dirs=lib_path,
                          libraries=['npyrandom', 'npymath'],
                          define_macros=defs,
                          )

extensions = [extending, distributions]

setup(
    ext_modules=cythonize(extensions)
)
numpy.reference.random.examples.cython.setup.py
Statistics Order statistics ptp(a[, axis, out, keepdims]) Range of values (maximum - minimum) along an axis. percentile(a, q[, axis, out, ...]) Compute the q-th percentile of the data along the specified axis. nanpercentile(a, q[, axis, out, ...]) Compute the qth percentile of the data along the specified axis, while ignoring nan values. quantile(a, q[, axis, out, overwrite_input, ...]) Compute the q-th quantile of the data along the specified axis. nanquantile(a, q[, axis, out, ...]) Compute the qth quantile of the data along the specified axis, while ignoring nan values. Averages and variances median(a[, axis, out, overwrite_input, keepdims]) Compute the median along the specified axis. average(a[, axis, weights, returned]) Compute the weighted average along the specified axis. mean(a[, axis, dtype, out, keepdims, where]) Compute the arithmetic mean along the specified axis. std(a[, axis, dtype, out, ddof, keepdims, where]) Compute the standard deviation along the specified axis. var(a[, axis, dtype, out, ddof, keepdims, where]) Compute the variance along the specified axis. nanmedian(a[, axis, out, overwrite_input, ...]) Compute the median along the specified axis, while ignoring NaNs. nanmean(a[, axis, dtype, out, keepdims, where]) Compute the arithmetic mean along the specified axis, ignoring NaNs. nanstd(a[, axis, dtype, out, ddof, ...]) Compute the standard deviation along the specified axis, while ignoring NaNs. nanvar(a[, axis, dtype, out, ddof, ...]) Compute the variance along the specified axis, while ignoring NaNs. Correlating corrcoef(x[, y, rowvar, bias, ddof, dtype]) Return Pearson product-moment correlation coefficients. correlate(a, v[, mode]) Cross-correlation of two 1-dimensional sequences. cov(m[, y, rowvar, bias, ddof, fweights, ...]) Estimate a covariance matrix, given data and weights. Histograms histogram(a[, bins, range, normed, weights, ...]) Compute the histogram of a dataset. histogram2d(x, y[, bins, range, normed, ...]) Compute the bi-dimensional histogram of two data samples. histogramdd(sample[, bins, range, normed, ...]) Compute the multidimensional histogram of some data. bincount(x, /[, weights, minlength]) Count number of occurrences of each value in array of non-negative ints. histogram_bin_edges(a[, bins, range, weights]) Function to calculate only the edges of the bins used by the histogram function. digitize(x, bins[, right]) Return the indices of the bins to which each value in input array belongs.
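A few of the routines listed above in action (an illustrative snippet, not part of the original index):

>>> import numpy as np
>>> a = np.array([[1., 2., 3.], [4., 5., 6.]])
>>> np.ptp(a)                      # max - min over the flattened array
5.0
>>> np.percentile(a, 50)           # the 50th percentile equals the median
3.5
>>> np.median(a)
3.5
>>> np.mean(a, axis=0)             # column-wise arithmetic mean
array([2.5, 3.5, 4.5])
>>> np.nanmean([1.0, np.nan, 3.0]) # NaN-aware variant ignores the NaN
2.0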
numpy.reference.routines.statistics
numpy.testing.assert_allclose testing.assert_allclose(actual, desired, rtol=1e-07, atol=0, equal_nan=True, err_msg='', verbose=True)[source] Raises an AssertionError if two objects are not equal up to desired tolerance. The test is equivalent to allclose(actual, desired, rtol, atol) (note that allclose has different default values). It compares the difference between actual and desired to atol + rtol * abs(desired). New in version 1.5.0. Parameters actualarray_like Array obtained. desiredarray_like Array desired. rtolfloat, optional Relative tolerance. atolfloat, optional Absolute tolerance. equal_nanbool, optional. If True, NaNs will compare equal. err_msgstr, optional The error message to be printed in case of failure. verbosebool, optional If True, the conflicting values are appended to the error message. Raises AssertionError If actual and desired are not equal up to specified precision. See also assert_array_almost_equal_nulp, assert_array_max_ulp Examples >>> x = [1e-5, 1e-3, 1e-1] >>> y = np.arccos(np.cos(x)) >>> np.testing.assert_allclose(x, y, rtol=1e-5, atol=0)
numpy.reference.generated.numpy.testing.assert_allclose
numpy.testing.assert_almost_equal testing.assert_almost_equal(actual, desired, decimal=7, err_msg='', verbose=True)[source] Raises an AssertionError if two items are not equal up to desired precision. Note It is recommended to use one of assert_allclose, assert_array_almost_equal_nulp or assert_array_max_ulp instead of this function for more consistent floating point comparisons. The test verifies that the elements of actual and desired satisfy. abs(desired-actual) < 1.5 * 10**(-decimal) That is a looser test than originally documented, but agrees with what the actual implementation in assert_array_almost_equal did up to rounding vagaries. An exception is raised at conflicting values. For ndarrays this delegates to assert_array_almost_equal Parameters actualarray_like The object to check. desiredarray_like The expected object. decimalint, optional Desired precision, default is 7. err_msgstr, optional The error message to be printed in case of failure. verbosebool, optional If True, the conflicting values are appended to the error message. Raises AssertionError If actual and desired are not equal up to specified precision. See also assert_allclose Compare two array_like objects for equality with desired relative and/or absolute precision. assert_array_almost_equal_nulp, assert_array_max_ulp, assert_equal Examples >>> from numpy.testing import assert_almost_equal >>> assert_almost_equal(2.3333333333333, 2.33333334) >>> assert_almost_equal(2.3333333333333, 2.33333334, decimal=10) Traceback (most recent call last): ... AssertionError: Arrays are not almost equal to 10 decimals ACTUAL: 2.3333333333333 DESIRED: 2.33333334 >>> assert_almost_equal(np.array([1.0,2.3333333333333]), ... np.array([1.0,2.33333334]), decimal=9) Traceback (most recent call last): ... AssertionError: Arrays are not almost equal to 9 decimals Mismatched elements: 1 / 2 (50%) Max absolute difference: 6.66669964e-09 Max relative difference: 2.85715698e-09 x: array([1. , 2.333333333]) y: array([1. , 2.33333334])
numpy.reference.generated.numpy.testing.assert_almost_equal
numpy.testing.assert_approx_equal testing.assert_approx_equal(actual, desired, significant=7, err_msg='', verbose=True)[source] Raises an AssertionError if two items are not equal up to significant digits. Note It is recommended to use one of assert_allclose, assert_array_almost_equal_nulp or assert_array_max_ulp instead of this function for more consistent floating point comparisons. Given two numbers, check that they are approximately equal. Approximately equal is defined as the number of significant digits that agree. Parameters actualscalar The object to check. desiredscalar The expected object. significantint, optional Desired precision, default is 7. err_msgstr, optional The error message to be printed in case of failure. verbosebool, optional If True, the conflicting values are appended to the error message. Raises AssertionError If actual and desired are not equal up to specified precision. See also assert_allclose Compare two array_like objects for equality with desired relative and/or absolute precision. assert_array_almost_equal_nulp, assert_array_max_ulp, assert_equal Examples >>> np.testing.assert_approx_equal(0.12345677777777e-20, 0.1234567e-20) >>> np.testing.assert_approx_equal(0.12345670e-20, 0.12345671e-20, ... significant=8) >>> np.testing.assert_approx_equal(0.12345670e-20, 0.12345672e-20, ... significant=8) Traceback (most recent call last): ... AssertionError: Items are not equal to 8 significant digits: ACTUAL: 1.234567e-21 DESIRED: 1.2345672e-21 the evaluated condition that raises the exception is >>> abs(0.12345670e-20/1e-21 - 0.12345672e-20/1e-21) >= 10**-(8-1) True
numpy.reference.generated.numpy.testing.assert_approx_equal
numpy.testing.assert_array_almost_equal testing.assert_array_almost_equal(x, y, decimal=6, err_msg='', verbose=True)[source] Raises an AssertionError if two objects are not equal up to desired precision. Note It is recommended to use one of assert_allclose, assert_array_almost_equal_nulp or assert_array_max_ulp instead of this function for more consistent floating point comparisons. The test verifies identical shapes and that the elements of actual and desired satisfy. abs(desired-actual) < 1.5 * 10**(-decimal) That is a looser test than originally documented, but agrees with what the actual implementation did up to rounding vagaries. An exception is raised at shape mismatch or conflicting values. In contrast to the standard usage in numpy, NaNs are compared like numbers, no assertion is raised if both objects have NaNs in the same positions. Parameters xarray_like The actual object to check. yarray_like The desired, expected object. decimalint, optional Desired precision, default is 6. err_msgstr, optional The error message to be printed in case of failure. verbosebool, optional If True, the conflicting values are appended to the error message. Raises AssertionError If actual and desired are not equal up to specified precision. See also assert_allclose Compare two array_like objects for equality with desired relative and/or absolute precision. assert_array_almost_equal_nulp, assert_array_max_ulp, assert_equal Examples the first assert does not raise an exception >>> np.testing.assert_array_almost_equal([1.0,2.333,np.nan], ... [1.0,2.333,np.nan]) >>> np.testing.assert_array_almost_equal([1.0,2.33333,np.nan], ... [1.0,2.33339,np.nan], decimal=5) Traceback (most recent call last): ... AssertionError: Arrays are not almost equal to 5 decimals Mismatched elements: 1 / 3 (33.3%) Max absolute difference: 6.e-05 Max relative difference: 2.57136612e-05 x: array([1. , 2.33333, nan]) y: array([1. , 2.33339, nan]) >>> np.testing.assert_array_almost_equal([1.0,2.33333,np.nan], ... [1.0,2.33333, 5], decimal=5) Traceback (most recent call last): ... AssertionError: Arrays are not almost equal to 5 decimals x and y nan location mismatch: x: array([1. , 2.33333, nan]) y: array([1. , 2.33333, 5. ])
numpy.reference.generated.numpy.testing.assert_array_almost_equal
numpy.testing.assert_array_almost_equal_nulp testing.assert_array_almost_equal_nulp(x, y, nulp=1)[source] Compare two arrays relatively to their spacing. This is a relatively robust method to compare two arrays whose amplitude is variable. Parameters x, yarray_like Input arrays. nulpint, optional The maximum number of unit in the last place for tolerance (see Notes). Default is 1. Returns None Raises AssertionError If the spacing between x and y for one or more elements is larger than nulp. See also assert_array_max_ulp Check that all items of arrays differ in at most N Units in the Last Place. spacing Return the distance between x and the nearest adjacent number. Notes An assertion is raised if the following condition is not met: abs(x - y) <= nulps * spacing(maximum(abs(x), abs(y))) Examples >>> x = np.array([1., 1e-10, 1e-20]) >>> eps = np.finfo(x.dtype).eps >>> np.testing.assert_array_almost_equal_nulp(x, x*eps/2 + x) >>> np.testing.assert_array_almost_equal_nulp(x, x*eps + x) Traceback (most recent call last): ... AssertionError: X and Y are not equal to 1 ULP (max is 2)
numpy.reference.generated.numpy.testing.assert_array_almost_equal_nulp
numpy.testing.assert_array_equal testing.assert_array_equal(x, y, err_msg='', verbose=True)[source] Raises an AssertionError if two array_like objects are not equal. Given two array_like objects, check that the shape is equal and all elements of these objects are equal (but see the Notes for the special handling of a scalar). An exception is raised at shape mismatch or conflicting values. In contrast to the standard usage in numpy, NaNs are compared like numbers, no assertion is raised if both objects have NaNs in the same positions. The usual caution for verifying equality with floating point numbers is advised. Parameters xarray_like The actual object to check. yarray_like The desired, expected object. err_msgstr, optional The error message to be printed in case of failure. verbosebool, optional If True, the conflicting values are appended to the error message. Raises AssertionError If actual and desired objects are not equal. See also assert_allclose Compare two array_like objects for equality with desired relative and/or absolute precision. assert_array_almost_equal_nulp, assert_array_max_ulp, assert_equal Notes When one of x and y is a scalar and the other is array_like, the function checks that each element of the array_like object is equal to the scalar. Examples The first assert does not raise an exception: >>> np.testing.assert_array_equal([1.0,2.33333,np.nan], ... [np.exp(0),2.33333, np.nan]) Assert fails with numerical imprecision with floats: >>> np.testing.assert_array_equal([1.0,np.pi,np.nan], ... [1, np.sqrt(np.pi)**2, np.nan]) Traceback (most recent call last): ... AssertionError: Arrays are not equal Mismatched elements: 1 / 3 (33.3%) Max absolute difference: 4.4408921e-16 Max relative difference: 1.41357986e-16 x: array([1. , 3.141593, nan]) y: array([1. , 3.141593, nan]) Use assert_allclose or one of the nulp (number of floating point values) functions for these cases instead: >>> np.testing.assert_allclose([1.0,np.pi,np.nan], ... [1, np.sqrt(np.pi)**2, np.nan], ... rtol=1e-10, atol=0) As mentioned in the Notes section, assert_array_equal has special handling for scalars. Here the test checks that each value in x is 3: >>> x = np.full((2, 5), fill_value=3) >>> np.testing.assert_array_equal(x, 3)
numpy.reference.generated.numpy.testing.assert_array_equal
numpy.testing.assert_array_less testing.assert_array_less(x, y, err_msg='', verbose=True)[source] Raises an AssertionError if two array_like objects are not ordered by less than. Given two array_like objects, check that the shape is equal and all elements of the first object are strictly smaller than those of the second object. An exception is raised at shape mismatch or incorrectly ordered values. Shape mismatch does not raise if an object has zero dimension. In contrast to the standard usage in numpy, NaNs are compared, no assertion is raised if both objects have NaNs in the same positions. Parameters xarray_like The smaller object to check. yarray_like The larger object to compare. err_msgstring The error message to be printed in case of failure. verbosebool If True, the conflicting values are appended to the error message. Raises AssertionError If actual and desired objects are not equal. See also assert_array_equal tests objects for equality assert_array_almost_equal test objects for equality up to precision Examples >>> np.testing.assert_array_less([1.0, 1.0, np.nan], [1.1, 2.0, np.nan]) >>> np.testing.assert_array_less([1.0, 1.0, np.nan], [1, 2.0, np.nan]) Traceback (most recent call last): ... AssertionError: Arrays are not less-ordered Mismatched elements: 1 / 3 (33.3%) Max absolute difference: 1. Max relative difference: 0.5 x: array([ 1., 1., nan]) y: array([ 1., 2., nan]) >>> np.testing.assert_array_less([1.0, 4.0], 3) Traceback (most recent call last): ... AssertionError: Arrays are not less-ordered Mismatched elements: 1 / 2 (50%) Max absolute difference: 2. Max relative difference: 0.66666667 x: array([1., 4.]) y: array(3) >>> np.testing.assert_array_less([1.0, 2.0, 3.0], [4]) Traceback (most recent call last): ... AssertionError: Arrays are not less-ordered (shapes (3,), (1,) mismatch) x: array([1., 2., 3.]) y: array([4])
numpy.reference.generated.numpy.testing.assert_array_less
numpy.testing.assert_array_max_ulp testing.assert_array_max_ulp(a, b, maxulp=1, dtype=None)[source] Check that all items of arrays differ in at most N Units in the Last Place. Parameters a, barray_like Input arrays to be compared. maxulpint, optional The maximum number of units in the last place that elements of a and b can differ. Default is 1. dtypedtype, optional Data-type to convert a and b to if given. Default is None. Returns retndarray Array containing number of representable floating point numbers between items in a and b. Raises AssertionError If one or more elements differ by more than maxulp. See also assert_array_almost_equal_nulp Compare two arrays relatively to their spacing. Notes For computing the ULP difference, this API does not differentiate between various representations of NAN (ULP difference between 0x7fc00000 and 0xffc00000 is zero). Examples >>> a = np.linspace(0., 1., 100) >>> res = np.testing.assert_array_max_ulp(a, np.arcsin(np.sin(a)))
numpy.reference.generated.numpy.testing.assert_array_max_ulp
numpy.testing.assert_equal testing.assert_equal(actual, desired, err_msg='', verbose=True)[source] Raises an AssertionError if two objects are not equal. Given two objects (scalars, lists, tuples, dictionaries or numpy arrays), check that all elements of these objects are equal. An exception is raised at the first conflicting values. When one of actual and desired is a scalar and the other is array_like, the function checks that each element of the array_like object is equal to the scalar. This function handles NaN comparisons as if NaN was a “normal” number. That is, AssertionError is not raised if both objects have NaNs in the same positions. This is in contrast to the IEEE standard on NaNs, which says that NaN compared to anything must return False. Parameters actualarray_like The object to check. desiredarray_like The expected object. err_msgstr, optional The error message to be printed in case of failure. verbosebool, optional If True, the conflicting values are appended to the error message. Raises AssertionError If actual and desired are not equal. Examples >>> np.testing.assert_equal([4,5], [4,6]) Traceback (most recent call last): ... AssertionError: Items are not equal: item=1 ACTUAL: 5 DESIRED: 6 The following comparison does not raise an exception. There are NaNs in the inputs, but they are in the same positions. >>> np.testing.assert_equal(np.array([1.0, 2.0, np.nan]), [1, 2, np.nan])
numpy.reference.generated.numpy.testing.assert_equal
numpy.testing.assert_raises testing.assert_raises(exception_class, callable, *args, **kwargs)[source] assert_raises(exception_class) → None Fail unless an exception of class exception_class is thrown by callable when invoked with arguments args and keyword arguments kwargs. If a different type of exception is thrown, it will not be caught, and the test case will be deemed to have suffered an error, exactly as for an unexpected exception. Alternatively, assert_raises can be used as a context manager: >>> from numpy.testing import assert_raises >>> with assert_raises(ZeroDivisionError): ... 1 / 0 is equivalent to >>> def div(x, y): ... return x / y >>> assert_raises(ZeroDivisionError, div, 1, 0)
numpy.reference.generated.numpy.testing.assert_raises
numpy.testing.assert_raises_regex testing.assert_raises_regex(exception_class, expected_regexp, callable, *args, **kwargs) assert_raises_regex(exception_class, expected_regexp)[source] Fail unless an exception of class exception_class and with message that matches expected_regexp is thrown by callable when invoked with arguments args and keyword arguments kwargs. Alternatively, can be used as a context manager like assert_raises. Name of this function adheres to Python 3.2+ reference, but should work in all versions down to 2.6. Notes New in version 1.9.0.
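An illustrative example (not from the original page), using both the context-manager and the callable forms; the regular expression only needs to match part of the exception message:

>>> from numpy.testing import assert_raises_regex
>>> with assert_raises_regex(ValueError, "invalid literal"):
...     int("not a number")
>>> assert_raises_regex(ValueError, "invalid literal", int, "not a number")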
numpy.reference.generated.numpy.testing.assert_raises_regex
numpy.testing.assert_string_equal testing.assert_string_equal(actual, desired)[source] Test if two strings are equal. If the given strings are equal, assert_string_equal does nothing. If they are not equal, an AssertionError is raised, and the diff between the strings is shown. Parameters actualstr The string to test for equality against the expected string. desiredstr The expected string. Examples >>> np.testing.assert_string_equal('abc', 'abc') >>> np.testing.assert_string_equal('abc', 'abcd') Traceback (most recent call last): File "<stdin>", line 1, in <module> ... AssertionError: Differences in strings:
- abc
+ abcd
?    +
numpy.reference.generated.numpy.testing.assert_string_equal
numpy.testing.assert_warns testing.assert_warns(warning_class, *args, **kwargs)[source] Fail unless the given callable throws the specified warning. A warning of class warning_class should be thrown by the callable when invoked with arguments args and keyword arguments kwargs. If a different type of warning is thrown, it will not be caught. If called with all arguments other than the warning class omitted, may be used as a context manager: with assert_warns(SomeWarning): do_something() The ability to be used as a context manager is new in NumPy v1.11.0. New in version 1.4.0. Parameters warning_classclass The class defining the warning that func is expected to throw. funccallable, optional Callable to test *argsArguments Arguments for func. **kwargsKwargs Keyword arguments for func. Returns The value returned by func. Examples >>> import warnings >>> def deprecated_func(num): ... warnings.warn("Please upgrade", DeprecationWarning) ... return num*num >>> with np.testing.assert_warns(DeprecationWarning): ... assert deprecated_func(4) == 16 >>> # or passing a func >>> ret = np.testing.assert_warns(DeprecationWarning, deprecated_func, 4) >>> assert ret == 16
numpy.reference.generated.numpy.testing.assert_warns
numpy.testing.dec.deprecated testing.dec.deprecated(conditional=True)[source] Deprecated since version 1.21: This decorator is retained for compatibility with the nose testing framework, which is being phased out. Please use the nose2 or pytest frameworks instead. Filter deprecation warnings while running the test suite. This decorator can be used to filter DeprecationWarning’s, to avoid printing them during the test suite run, while checking that the test actually raises a DeprecationWarning. Parameters conditionalbool or callable, optional Flag to determine whether to mark test as deprecated or not. If the condition is a callable, it is used at runtime to dynamically make the decision. Default is True. Returns decoratorfunction The deprecated decorator itself. Notes New in version 1.4.0.
numpy.reference.generated.numpy.testing.dec.deprecated
numpy.testing.dec.knownfailureif testing.dec.knownfailureif(fail_condition, msg=None)[source] Deprecated since version 1.21: This decorator is retained for compatibility with the nose testing framework, which is being phased out. Please use the nose2 or pytest frameworks instead. Make function raise KnownFailureException exception if given condition is true. If the condition is a callable, it is used at runtime to dynamically make the decision. This is useful for tests that may require costly imports, to delay the cost until the test suite is actually executed. Parameters fail_conditionbool or callable Flag to determine whether to mark the decorated test as a known failure (if True) or not (if False). msgstr, optional Message to give on raising a KnownFailureException exception. Default is None. Returns decoratorfunction Decorator, which, when applied to a function, causes KnownFailureException to be raised when fail_condition is True, and the function to be called normally otherwise. Notes The decorator itself is decorated with the nose.tools.make_decorator function in order to transmit function name, and various other metadata.
numpy.reference.generated.numpy.testing.dec.knownfailureif
numpy.testing.dec.setastest testing.dec.setastest(tf=True)[source] Deprecated since version 1.21: This decorator is retained for compatibility with the nose testing framework, which is being phased out. Please use the nose2 or pytest frameworks instead. Signals to nose that this function is or is not a test. Parameters tfbool If True, specifies that the decorated callable is a test. If False, specifies that the decorated callable is not a test. Default is True. Notes This decorator can’t use the nose namespace, because it can be called from a non-test module. See also istest and nottest in nose.tools. Examples setastest can be used in the following way: from numpy.testing import dec @dec.setastest(False) def func_with_test_in_name(arg1, arg2): pass
numpy.reference.generated.numpy.testing.dec.setastest
numpy.testing.dec.skipif testing.dec.skipif(skip_condition, msg=None)[source] Deprecated since version 1.21: This decorator is retained for compatibility with the nose testing framework, which is being phased out. Please use the nose2 or pytest frameworks instead. Make function raise SkipTest exception if a given condition is true. If the condition is a callable, it is used at runtime to dynamically make the decision. This is useful for tests that may require costly imports, to delay the cost until the test suite is actually executed. Parameters skip_conditionbool or callable Flag to determine whether to skip the decorated test. msgstr, optional Message to give on raising a SkipTest exception. Default is None. Returns decoratorfunction Decorator which, when applied to a function, causes SkipTest to be raised when skip_condition is True, and the function to be called normally otherwise. Notes The decorator itself is decorated with the nose.tools.make_decorator function in order to transmit function name, and various other metadata.
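Purely for illustration of the (deprecated) decorator described above; new test suites should use pytest.mark.skipif instead:

import sys
from numpy.testing import dec

@dec.skipif(sys.platform == "win32", "not supported on Windows")
def test_posix_only_behaviour():
    # Runs normally elsewhere; raises SkipTest on Windows.
    assert True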
numpy.reference.generated.numpy.testing.dec.skipif
numpy.testing.dec.slow testing.dec.slow(t)[source] Deprecated since version 1.21: This decorator is retained for compatibility with the nose testing framework, which is being phased out. Please use the nose2 or pytest frameworks instead. Label a test as ‘slow’. The exact definition of a slow test is obviously both subjective and hardware-dependent, but in general any individual test that requires more than a second or two should be labeled as slow (the whole suite consists of thousands of tests, so even a second is significant). Parameters tcallable The test to label as slow. Returns tcallable The decorated test t. Examples The numpy.testing module includes import decorators as dec. A test can be decorated as slow like this: from numpy.testing import * @dec.slow def test_big(self): print('Big, slow test')
numpy.reference.generated.numpy.testing.dec.slow
numpy.testing.decorate_methods testing.decorate_methods(cls, decorator, testmatch=None)[source] Apply a decorator to all methods in a class matching a regular expression. The given decorator is applied to all public methods of cls that are matched by the regular expression testmatch (testmatch.search(methodname)). Methods that are private, i.e. start with an underscore, are ignored. Parameters clsclass Class whose methods to decorate. decoratorfunction Decorator to apply to methods testmatchcompiled regexp or str, optional The regular expression. Default value is None, in which case the nose default (re.compile(r'(?:^|[\b_\.%s-])[Tt]est' % os.sep)) is used. If testmatch is a string, it is compiled to a regular expression first.
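A hedged sketch of how this helper might be used; the class, the regular expression, and the tag decorator below are all made up for the example:

import unittest
from numpy.testing import decorate_methods


def tag(func):
    """Toy decorator: mark the wrapped test method with an attribute."""
    func.tagged = True
    return func


class TestThings(unittest.TestCase):
    def test_one(self):
        pass

    def _helper(self):
        # Private methods (leading underscore) are ignored by decorate_methods.
        pass


# Apply `tag` to every public method whose name matches the regular expression.
decorate_methods(TestThings, tag, testmatch=r"test_")
assert TestThings.test_one.tagged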
numpy.reference.generated.numpy.testing.decorate_methods
numpy.testing.run_module_suite testing.run_module_suite(file_to_run=None, argv=None)[source] Run a test module. Equivalent to calling $ nosetests <argv> <file_to_run> from the command line Parameters file_to_runstr, optional Path to test module, or None. By default, run the module from which this function is called. argvlist of strings Arguments to be passed to the nose test runner. argv[0] is ignored. All command line arguments accepted by nosetests will work. If it is the default value None, sys.argv is used. New in version 1.9.0. Examples Adding the following: if __name__ == "__main__": run_module_suite(argv=sys.argv) at the end of a test module will run the tests when that module is called in the python interpreter. Alternatively, calling: >>> run_module_suite(file_to_run="numpy/tests/test_matlib.py") from an interpreter will run all the test routines in ‘test_matlib.py’.
numpy.reference.generated.numpy.testing.run_module_suite
numpy.testing.rundocs testing.rundocs(filename=None, raise_on_error=True)[source] Run doctests found in the given file. By default rundocs raises an AssertionError on failure. Parameters filenamestr The path to the file for which the doctests are run. raise_on_errorbool Whether to raise an AssertionError when a doctest fails. Default is True. Notes The doctests can be run by the user/developer by adding the doctests argument to the test() call. For example, to run all tests (including doctests) for numpy.lib: >>> np.lib.test(doctests=True)
numpy.reference.generated.numpy.testing.rundocs
numpy.testing.suppress_warnings.__call__ method testing.suppress_warnings.__call__(func)[source] Function decorator to apply certain suppressions to a whole function.
numpy.reference.generated.numpy.testing.suppress_warnings.__call__
numpy.testing.suppress_warnings.filter method testing.suppress_warnings.filter(category=<class 'Warning'>, message='', module=None)[source] Add a new suppressing filter or apply it if the state is entered. Parameters categoryclass, optional Warning class to filter messagestring, optional Regular expression matching the warning message. modulemodule, optional Module to filter for. Note that the module (and its file) must match exactly and cannot be a submodule. This may make it unreliable for external modules. Notes When added within a context, filters are only added inside the context and will be forgotten when the context is exited.
numpy.reference.generated.numpy.testing.suppress_warnings.filter
numpy.testing.suppress_warnings.record method testing.suppress_warnings.record(category=<class 'Warning'>, message='', module=None)[source] Append a new recording filter or apply it if the state is entered. All warnings matching will be appended to the log attribute. Parameters categoryclass, optional Warning class to filter messagestring, optional Regular expression matching the warning message. modulemodule, optional Module to filter for. Note that the module (and its file) must match exactly and cannot be a submodule. This may make it unreliable for external modules. Returns loglist A list which will be filled with all matched warnings. Notes When added within a context, filters are only added inside the context and will be forgotten when the context is exited.
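A combined illustration of record and filter inside the context manager; this snippet is a sketch and is not taken from the original pages:

>>> import warnings
>>> from numpy.testing import suppress_warnings
>>> with suppress_warnings() as sup:
...     sup.filter(DeprecationWarning)               # hide these entirely
...     log = sup.record(UserWarning, "example")     # collect matching warnings
...     warnings.warn("example warning", UserWarning)
...     warnings.warn("old API", DeprecationWarning)
>>> len(log)
1

The same suppress_warnings instance can also be applied as a decorator to a whole function, which is what the __call__ entry above refers to.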
numpy.reference.generated.numpy.testing.suppress_warnings.record
numpy.ufunc.__call__ method ufunc.__call__(*args, **kwargs) Call self as a function.
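A short illustration (not from the original entry) of calling a ufunc object directly; positional array inputs and keyword arguments such as out are accepted:

>>> import numpy as np
>>> np.add(1.0, 4.0)                    # calling the ufunc object directly
5.0
>>> np.add([1, 2], [10, 20])            # element-wise over array inputs
array([11, 22])
>>> out = np.empty(2)
>>> np.add([1, 2], [10, 20], out=out)   # the result is written into `out`
array([11., 22.])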
numpy.reference.generated.numpy.ufunc.__call__
numpy.ufunc.accumulate method ufunc.accumulate(array, axis=0, dtype=None, out=None) Accumulate the result of applying the operator to all elements. For a one-dimensional array, accumulate produces results equivalent to: r = np.empty(len(A)) t = op.identity # op = the ufunc being applied to A's elements for i in range(len(A)): t = op(t, A[i]) r[i] = t return r For example, add.accumulate() is equivalent to np.cumsum(). For a multi-dimensional array, accumulate is applied along only one axis (axis zero by default; see Examples below) so repeated use is necessary if one wants to accumulate over multiple axes. Parameters arrayarray_like The array to act on. axisint, optional The axis along which to apply the accumulation; default is zero. dtypedata-type code, optional The data-type used to represent the intermediate results. Defaults to the data-type of the output array if such is provided, or the data-type of the input array if no output array is provided. outndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If not provided or None, a freshly-allocated array is returned. For consistency with ufunc.__call__, if given as a keyword, this may be wrapped in a 1-element tuple. Changed in version 1.13.0: Tuples are allowed for keyword argument. Returns rndarray The accumulated values. If out was supplied, r is a reference to out. Examples 1-D array examples: >>> np.add.accumulate([2, 3, 5]) array([ 2, 5, 10]) >>> np.multiply.accumulate([2, 3, 5]) array([ 2, 6, 30]) 2-D array examples: >>> I = np.eye(2) >>> I array([[1., 0.], [0., 1.]]) Accumulate along axis 0 (rows), down columns: >>> np.add.accumulate(I, 0) array([[1., 0.], [1., 1.]]) >>> np.add.accumulate(I) # no axis specified = axis zero array([[1., 0.], [1., 1.]]) Accumulate along axis 1 (columns), through rows: >>> np.add.accumulate(I, 1) array([[1., 1.], [0., 1.]])
numpy.reference.generated.numpy.ufunc.accumulate
numpy.ufunc.at method ufunc.at(a, indices, b=None, /) Performs unbuffered in place operation on operand ‘a’ for elements specified by ‘indices’. For addition ufunc, this method is equivalent to a[indices] += b, except that results are accumulated for elements that are indexed more than once. For example, a[[0,0]] += 1 will only increment the first element once because of buffering, whereas add.at(a, [0,0], 1) will increment the first element twice. New in version 1.8.0. Parameters aarray_like The array to perform in place operation on. indicesarray_like or tuple Array like index object or slice object for indexing into first operand. If first operand has multiple dimensions, indices can be a tuple of array like index objects or slice objects. barray_like Second operand for ufuncs requiring two operands. Operand must be broadcastable over first operand after indexing or slicing. Examples Set items 0 and 1 to their negative values: >>> a = np.array([1, 2, 3, 4]) >>> np.negative.at(a, [0, 1]) >>> a array([-1, -2, 3, 4]) Increment items 0 and 1, and increment item 2 twice: >>> a = np.array([1, 2, 3, 4]) >>> np.add.at(a, [0, 1, 2, 2], 1) >>> a array([2, 3, 5, 4]) Add items 0 and 1 in first array to second array, and store results in first array: >>> a = np.array([1, 2, 3, 4]) >>> b = np.array([1, 2]) >>> np.add.at(a, [0, 1], b) >>> a array([2, 4, 3, 4])
numpy.reference.generated.numpy.ufunc.at
numpy.ufunc.identity attribute ufunc.identity The identity value. Data attribute containing the identity element for the ufunc, if it has one. If it does not, the attribute value is None. Examples >>> np.add.identity 0 >>> np.multiply.identity 1 >>> np.power.identity 1 >>> print(np.exp.identity) None
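A short sketch of how the identity is used in practice: reductions of empty arrays fall back to it, and fail for ufuncs without one (compare ufunc.reduce):
>>> import numpy as np
>>> np.add.reduce([])        # empty sum falls back to np.add.identity
0.0
>>> np.minimum.identity is None
True
>>> np.minimum.reduce([])
Traceback (most recent call last):
    ...
ValueError: zero-size array to reduction operation minimum which has no identity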
numpy.reference.generated.numpy.ufunc.identity
numpy.ufunc.nargs attribute ufunc.nargs The number of arguments. Data attribute containing the number of arguments the ufunc takes, including optional ones. Notes Typically this value will be one more than what you might expect because all ufuncs take the optional “out” argument. Examples >>> np.add.nargs 3 >>> np.multiply.nargs 3 >>> np.power.nargs 3 >>> np.exp.nargs 2
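A quick sketch of the relationship implied by the note above, nargs = nin + nout for these built-in ufuncs:
>>> import numpy as np
>>> np.add.nargs == np.add.nin + np.add.nout
True
>>> np.exp.nargs == np.exp.nin + np.exp.nout
True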
numpy.reference.generated.numpy.ufunc.nargs
numpy.ufunc.nin attribute ufunc.nin The number of inputs. Data attribute containing the number of arguments the ufunc treats as input. Examples >>> np.add.nin 2 >>> np.multiply.nin 2 >>> np.power.nin 2 >>> np.exp.nin 1
numpy.reference.generated.numpy.ufunc.nin
numpy.ufunc.nout attribute ufunc.nout The number of outputs. Data attribute containing the number of arguments the ufunc treats as output. Notes Since all ufuncs can take output arguments, this will always be (at least) 1. Examples >>> np.add.nout 1 >>> np.multiply.nout 1 >>> np.power.nout 1 >>> np.exp.nout 1
numpy.reference.generated.numpy.ufunc.nout
numpy.ufunc.ntypes attribute ufunc.ntypes The number of types. The number of numerical NumPy types - of which there are 18 total - on which the ufunc can operate. See also numpy.ufunc.types Examples >>> np.add.ntypes 18 >>> np.multiply.ntypes 18 >>> np.power.ntypes 17 >>> np.exp.ntypes 7 >>> np.remainder.ntypes 14
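A quick consistency sketch: ntypes is the length of the list returned by ufunc.types:
>>> import numpy as np
>>> np.add.ntypes == len(np.add.types)
True
>>> np.exp.ntypes == len(np.exp.types)
True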
numpy.reference.generated.numpy.ufunc.ntypes
numpy.ufunc.outer method ufunc.outer(A, B, /, **kwargs) Apply the ufunc op to all pairs (a, b) with a in A and b in B. Let M = A.ndim, N = B.ndim. Then the result, C, of op.outer(A, B) is an array of dimension M + N such that: \[C[i_0, ..., i_{M-1}, j_0, ..., j_{N-1}] = op(A[i_0, ..., i_{M-1}], B[j_0, ..., j_{N-1}])\] For A and B one-dimensional, this is equivalent to: r = empty((len(A), len(B))) for i in range(len(A)): for j in range(len(B)): r[i, j] = op(A[i], B[j]) # op = ufunc in question Parameters A : array_like First array. B : array_like Second array. kwargs : any Arguments to pass on to the ufunc. Typically dtype or out. See ufunc for a comprehensive overview of all available arguments. Returns r : ndarray Output array. See also numpy.outer A less powerful version of np.multiply.outer that ravels all inputs to 1D. This exists primarily for compatibility with old code. tensordot np.tensordot(a, b, axes=((), ())) and np.multiply.outer(a, b) behave the same for all dimensions of a and b. Examples >>> np.multiply.outer([1, 2, 3], [4, 5, 6]) array([[ 4, 5, 6], [ 8, 10, 12], [12, 15, 18]]) A multi-dimensional example: >>> A = np.array([[1, 2, 3], [4, 5, 6]]) >>> A.shape (2, 3) >>> B = np.array([[1, 2, 3, 4]]) >>> B.shape (1, 4) >>> C = np.multiply.outer(A, B) >>> C.shape; C (2, 3, 1, 4) array([[[[ 1, 2, 3, 4]], [[ 2, 4, 6, 8]], [[ 3, 6, 9, 12]]], [[[ 4, 8, 12, 16]], [[ 5, 10, 15, 20]], [[ 6, 12, 18, 24]]]])
numpy.reference.generated.numpy.ufunc.outer
numpy.ufunc.reduce method ufunc.reduce(array, axis=0, dtype=None, out=None, keepdims=False, initial=<no value>, where=True) Reduces array’s dimension by one, by applying ufunc along one axis. Let \(array.shape = (N_0, ..., N_i, ..., N_{M-1})\). Then \(ufunc.reduce(array, axis=i)[k_0, ..., k_{i-1}, k_{i+1}, ..., k_{M-1}]\) = the result of iterating j over \(range(N_i)\), cumulatively applying ufunc to each \(array[k_0, ..., k_{i-1}, j, k_{i+1}, ..., k_{M-1}]\). For a one-dimensional array, reduce produces results equivalent to: r = op.identity # op = ufunc for i in range(len(A)): r = op(r, A[i]) return r For example, add.reduce() is equivalent to sum(). Parameters array : array_like The array to act on. axis : None or int or tuple of ints, optional Axis or axes along which a reduction is performed. The default (axis = 0) is to perform a reduction over the first dimension of the input array. axis may be negative, in which case it counts from the last to the first axis. New in version 1.7.0. If this is None, a reduction is performed over all the axes. If this is a tuple of ints, a reduction is performed on multiple axes, instead of a single axis or all the axes as before. For operations which are either not commutative or not associative, doing a reduction over multiple axes is not well-defined. The ufuncs do not currently raise an exception in this case, but will likely do so in the future. dtype : data-type code, optional The type used to represent the intermediate results. Defaults to the data-type of the output array if this is provided, or the data-type of the input array if no output array is provided. out : ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If not provided or None, a freshly-allocated array is returned. For consistency with ufunc.__call__, if given as a keyword, this may be wrapped in a 1-element tuple. Changed in version 1.13.0: Tuples are allowed for the keyword argument. keepdims : bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original array. New in version 1.7.0. initial : scalar, optional The value with which to start the reduction. If the ufunc has no identity or the dtype is object, this defaults to None; otherwise it defaults to ufunc.identity. If None is given, the first element of the reduction is used, and an error is thrown if the reduction is empty. New in version 1.15.0. where : array_like of bool, optional A boolean array which is broadcasted to match the dimensions of array, and selects elements to include in the reduction. Note that for ufuncs like minimum that do not have an identity defined, one has to pass in initial as well. New in version 1.17.0. Returns r : ndarray The reduced array. If out was supplied, r is a reference to it.
Examples >>> np.multiply.reduce([2,3,5]) 30 A multi-dimensional array example: >>> X = np.arange(8).reshape((2,2,2)) >>> X array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]]) >>> np.add.reduce(X, 0) array([[ 4, 6], [ 8, 10]]) >>> np.add.reduce(X) # confirm: default axis value is 0 array([[ 4, 6], [ 8, 10]]) >>> np.add.reduce(X, 1) array([[ 2, 4], [10, 12]]) >>> np.add.reduce(X, 2) array([[ 1, 5], [ 9, 13]]) You can use the initial keyword argument to initialize the reduction with a different value, and where to select specific elements to include: >>> np.add.reduce([10], initial=5) 15 >>> np.add.reduce(np.ones((2, 2, 2)), axis=(0, 2), initial=10) array([14., 14.]) >>> a = np.array([10., np.nan, 10]) >>> np.add.reduce(a, where=~np.isnan(a)) 20.0 Allows reductions of empty arrays where they would normally fail, i.e. for ufuncs without an identity. >>> np.minimum.reduce([], initial=np.inf) inf >>> np.minimum.reduce([[1., 2.], [3., 4.]], initial=10., where=[True, False]) array([ 1., 10.]) >>> np.minimum.reduce([]) Traceback (most recent call last): ... ValueError: zero-size array to reduction operation minimum which has no identity
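A short sketch of the keepdims parameter, which is documented above but not exercised in the examples:
>>> import numpy as np
>>> X = np.arange(6).reshape(2, 3)
>>> np.add.reduce(X, axis=1)
array([ 3, 12])
>>> np.add.reduce(X, axis=1, keepdims=True)   # reduced axis kept with size one
array([[ 3],
       [12]])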
numpy.reference.generated.numpy.ufunc.reduce
numpy.ufunc.reduceat method ufunc.reduceat(array, indices, axis=0, dtype=None, out=None) Performs a (local) reduce with specified slices over a single axis. For i in range(len(indices)), reduceat computes ufunc.reduce(array[indices[i]:indices[i+1]]), which becomes the i-th generalized “row” parallel to axis in the final result (i.e., in a 2-D array, for example, if axis = 0, it becomes the i-th row, but if axis = 1, it becomes the i-th column). There are three exceptions to this: when i = len(indices) - 1 (so for the last index), indices[i+1] = array.shape[axis]. if indices[i] >= indices[i + 1], the i-th generalized “row” is simply array[indices[i]]. if indices[i] >= len(array) or indices[i] < 0, an error is raised. The shape of the output depends on the size of indices, and may be larger than array (this happens if len(indices) > array.shape[axis]). Parameters array : array_like The array to act on. indices : array_like Paired indices, comma separated (not colon), specifying slices to reduce. axis : int, optional The axis along which to apply the reduceat. dtype : data-type code, optional The type used to represent the intermediate results. Defaults to the data type of the output array if this is provided, or the data type of the input array if no output array is provided. out : ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If not provided or None, a freshly-allocated array is returned. For consistency with ufunc.__call__, if given as a keyword, this may be wrapped in a 1-element tuple. Changed in version 1.13.0: Tuples are allowed for the keyword argument. Returns r : ndarray The reduced values. If out was supplied, r is a reference to out. Notes A descriptive example: If array is 1-D, the function ufunc.accumulate(array) is the same as ufunc.reduceat(array, indices)[::2] where indices is range(len(array) - 1) with a zero placed in every other element: indices = zeros(2 * len(array) - 1), indices[1::2] = range(1, len(array)). Don’t be fooled by this method’s name: reduceat(array) is not necessarily smaller than array. Examples To take the running sum of four successive values: >>> np.add.reduceat(np.arange(8),[0,4, 1,5, 2,6, 3,7])[::2] array([ 6, 10, 14, 18]) A 2-D example: >>> x = np.linspace(0, 15, 16).reshape(4,4) >>> x array([[ 0., 1., 2., 3.], [ 4., 5., 6., 7.], [ 8., 9., 10., 11.], [12., 13., 14., 15.]]) # reduce such that the result has the following five rows: # [row1 + row2 + row3] # [row4] # [row2] # [row3] # [row1 + row2 + row3 + row4] >>> np.add.reduceat(x, [0, 3, 1, 2, 0]) array([[12., 15., 18., 21.], [12., 13., 14., 15.], [ 4., 5., 6., 7.], [ 8., 9., 10., 11.], [24., 28., 32., 36.]]) # reduce such that result has the following two columns: # [col1 * col2 * col3, col4] >>> np.multiply.reduceat(x, [0, 3], 1) array([[ 0., 3.], [ 120., 7.], [ 720., 11.], [2184., 15.]])
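A simpler sketch: with increasing indices, reduceat computes grouped reductions over consecutive slices:
>>> import numpy as np
>>> a = np.array([1, 2, 3, 4, 5])
>>> np.add.reduceat(a, [0, 2, 4])   # sums of a[0:2], a[2:4], a[4:]
array([3, 7, 5])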
numpy.reference.generated.numpy.ufunc.reduceat
numpy.ufunc.signature attribute ufunc.signature Definition of the core elements a generalized ufunc operates on. The signature determines how the dimensions of each input/output array are split into core and loop dimensions: Each dimension in the signature is matched to a dimension of the corresponding passed-in array, starting from the end of the shape tuple. Core dimensions assigned to the same label in the signature must have exactly matching sizes, no broadcasting is performed. The core dimensions are removed from all inputs and the remaining dimensions are broadcast together, defining the loop dimensions. Notes Generalized ufuncs are used internally in many linalg functions, and in the testing suite; the examples below are taken from these. For ufuncs that operate on scalars, the signature is None, which is equivalent to ‘()’ for every argument. Examples >>> np.core.umath_tests.matrix_multiply.signature '(m,n),(n,p)->(m,p)' >>> np.linalg._umath_linalg.det.signature '(m,m)->()' >>> np.add.signature is None True # equivalent to '(),()->()'
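An additional illustration, assuming NumPy 1.16 or later, where np.matmul is exposed as a generalized ufunc; the ? marks optional core dimensions:
>>> import numpy as np
>>> np.matmul.signature
'(n?,k),(k,m?)->(n?,m?)'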
numpy.reference.generated.numpy.ufunc.signature
numpy.ufunc.types attribute ufunc.types Returns a list with types grouped input->output. Data attribute listing the data-type “Domain-Range” groupings the ufunc can deliver. The data-types are given using the character codes. See also numpy.ufunc.ntypes Examples >>> np.add.types ['??->?', 'bb->b', 'BB->B', 'hh->h', 'HH->H', 'ii->i', 'II->I', 'll->l', 'LL->L', 'qq->q', 'QQ->Q', 'ff->f', 'dd->d', 'gg->g', 'FF->F', 'DD->D', 'GG->G', 'OO->O'] >>> np.multiply.types ['??->?', 'bb->b', 'BB->B', 'hh->h', 'HH->H', 'ii->i', 'II->I', 'll->l', 'LL->L', 'qq->q', 'QQ->Q', 'ff->f', 'dd->d', 'gg->g', 'FF->F', 'DD->D', 'GG->G', 'OO->O'] >>> np.power.types ['bb->b', 'BB->B', 'hh->h', 'HH->H', 'ii->i', 'II->I', 'll->l', 'LL->L', 'qq->q', 'QQ->Q', 'ff->f', 'dd->d', 'gg->g', 'FF->F', 'DD->D', 'GG->G', 'OO->O'] >>> np.exp.types ['f->f', 'd->d', 'g->g', 'F->F', 'D->D', 'G->G', 'O->O'] >>> np.remainder.types ['bb->b', 'BB->B', 'hh->h', 'HH->H', 'ii->i', 'II->I', 'll->l', 'LL->L', 'qq->q', 'QQ->Q', 'ff->f', 'dd->d', 'gg->g', 'OO->O']
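A small sketch for interpreting the character codes above; np.dtype translates a code back to a dtype name:
>>> import numpy as np
>>> np.dtype('d').name
'float64'
>>> np.dtype('?').name
'bool'
>>> np.dtype('F').name
'complex64'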
numpy.reference.generated.numpy.ufunc.types