title | summary | context | path
---|---|---|---|
pandas.errors.DuplicateLabelError
|
`pandas.errors.DuplicateLabelError`
Error raised when an operation would introduce duplicate labels.
New in version 1.2.0.
```
>>> s = pd.Series([0, 1, 2], index=['a', 'b', 'c']).set_flags(
... allows_duplicate_labels=False
... )
>>> s.reindex(['a', 'a', 'b'])
Traceback (most recent call last):
...
DuplicateLabelError: Index has duplicates.
      positions
label
a        [0, 1]
```
|
exception pandas.errors.DuplicateLabelError[source]#
Error raised when an operation would introduce duplicate labels.
New in version 1.2.0.
Examples
>>> s = pd.Series([0, 1, 2], index=['a', 'b', 'c']).set_flags(
... allows_duplicate_labels=False
... )
>>> s.reindex(['a', 'a', 'b'])
Traceback (most recent call last):
...
DuplicateLabelError: Index has duplicates.
      positions
label
a        [0, 1]
|
reference/api/pandas.errors.DuplicateLabelError.html
|
pandas.api.extensions.register_extension_dtype
|
`pandas.api.extensions.register_extension_dtype`
Register an ExtensionType with pandas as class decorator.
```
>>> from pandas.api.extensions import register_extension_dtype, ExtensionDtype
>>> @register_extension_dtype
... class MyExtensionDtype(ExtensionDtype):
... name = "myextension"
```
|
pandas.api.extensions.register_extension_dtype(cls)[source]#
Register an ExtensionType with pandas as class decorator.
This enables operations like .astype(name) for the name
of the ExtensionDtype.
Returns
callable
    A class decorator.
Examples
>>> from pandas.api.extensions import register_extension_dtype, ExtensionDtype
>>> @register_extension_dtype
... class MyExtensionDtype(ExtensionDtype):
... name = "myextension"
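The effect of registration can be illustrated with a dtype pandas registers itself: once a dtype's name is in the registry, string lookups such as .astype(name) resolve to it. A minimal sketch, assuming only that the nullable Int64Dtype ships registered under the name "Int64" (a custom registered dtype resolves the same way):
```
>>> import pandas as pd
>>> # "Int64" is found through the ExtensionDtype registry, the same
>>> # mechanism register_extension_dtype hooks a custom dtype into.
>>> pd.Series([1, 2, None]).astype("Int64")
0       1
1       2
2    <NA>
dtype: Int64
```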
|
reference/api/pandas.api.extensions.register_extension_dtype.html
|
pandas.core.groupby.DataFrameGroupBy.idxmax
|
`pandas.core.groupby.DataFrameGroupBy.idxmax`
Return index of first occurrence of maximum over requested axis.
NA/null values are excluded.
```
>>> df = pd.DataFrame({'consumption': [10.51, 103.11, 55.48],
... 'co2_emissions': [37.2, 19.66, 1712]},
... index=['Pork', 'Wheat Products', 'Beef'])
```
|
DataFrameGroupBy.idxmax(axis=0, skipna=True, numeric_only=_NoDefault.no_default)[source]#
Return index of first occurrence of maximum over requested axis.
NA/null values are excluded.
Parameters
axis : {0 or 'index', 1 or 'columns'}, default 0
    The axis to use. 0 or 'index' for row-wise, 1 or 'columns' for column-wise.
skipna : bool, default True
    Exclude NA/null values. If an entire row/column is NA, the result will be NA.
numeric_only : bool, default True for axis=0, False for axis=1
    Include only float, int or boolean data.
New in version 1.5.0.
Returns
Series
    Indexes of maxima along the specified axis.
Raises
ValueError
    If the row/column is empty.
See also
Series.idxmax
    Return index of the maximum element.
Notes
This method is the DataFrame version of ndarray.argmax.
Examples
Consider a dataset containing food consumption in Argentina.
>>> df = pd.DataFrame({'consumption': [10.51, 103.11, 55.48],
... 'co2_emissions': [37.2, 19.66, 1712]},
... index=['Pork', 'Wheat Products', 'Beef'])
>>> df
consumption co2_emissions
Pork 10.51 37.20
Wheat Products 103.11 19.66
Beef 55.48 1712.00
By default, it returns the index for the maximum value in each column.
>>> df.idxmax()
consumption Wheat Products
co2_emissions Beef
dtype: object
To return the index for the maximum value in each row, use axis="columns".
>>> df.idxmax(axis="columns")
Pork co2_emissions
Wheat Products consumption
Beef co2_emissions
dtype: object
|
reference/api/pandas.core.groupby.DataFrameGroupBy.idxmax.html
|
pandas.Index.map
|
`pandas.Index.map`
Map values using an input mapping or function.
Mapping correspondence.
|
Index.map(mapper, na_action=None)[source]#
Map values using an input mapping or function.
Parameters
mapper : function, dict, or Series
    Mapping correspondence.
na_action : {None, 'ignore'}
    If 'ignore', propagate NA values, without passing them to the mapping correspondence.
Returns
applied : Union[Index, MultiIndex], inferred
    The output of the mapping function applied to the index. If the function
    returns a tuple with more than one element, a MultiIndex will be returned.
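The docstring above ships without an example; a minimal sketch of both mapper forms (a dict and a callable), assuming nothing beyond the signature shown:
```
>>> idx = pd.Index(['a', 'b', 'c'])
>>> idx.map({'a': 'apple', 'b': 'banana', 'c': 'cherry'})   # dict mapper
Index(['apple', 'banana', 'cherry'], dtype='object')
>>> idx.map(str.upper)                                      # callable mapper
Index(['A', 'B', 'C'], dtype='object')
```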
|
reference/api/pandas.Index.map.html
|
pandas.api.indexers.check_array_indexer
|
`pandas.api.indexers.check_array_indexer`
Check if indexer is a valid array indexer for array.
```
>>> mask = pd.array([True, False])
>>> arr = pd.array([1, 2])
>>> pd.api.indexers.check_array_indexer(arr, mask)
array([ True, False])
```
|
pandas.api.indexers.check_array_indexer(array, indexer)[source]#
Check if indexer is a valid array indexer for array.
For a boolean mask, array and indexer are checked to have the same
length. The dtype is validated, and if it is an integer or boolean
ExtensionArray, it is checked if there are missing values present, and
it is converted to the appropriate numpy array. Other dtypes will raise
an error.
Non-array indexers (integer, slice, Ellipsis, tuples, ..) are passed
through as is.
New in version 1.0.0.
Parameters
array : array-like
    The array that is being indexed (only used for the length).
indexer : array-like or list-like
    The array-like that's used to index. List-like input that is not yet a
    numpy array or an ExtensionArray is converted to one. Other input types
    are passed through as is.
Returns
numpy.ndarray
    The validated indexer as a numpy array that can be used to index.
Raises
IndexError
    When the lengths don't match.
ValueError
    When indexer cannot be converted to a numpy ndarray to index
    (e.g. presence of missing values).
See also
api.types.is_bool_dtype
    Check if key is of boolean dtype.
Examples
When checking a boolean mask, a boolean ndarray is returned when the
arguments are all valid.
>>> mask = pd.array([True, False])
>>> arr = pd.array([1, 2])
>>> pd.api.indexers.check_array_indexer(arr, mask)
array([ True, False])
An IndexError is raised when the lengths don’t match.
>>> mask = pd.array([True, False, True])
>>> pd.api.indexers.check_array_indexer(arr, mask)
Traceback (most recent call last):
...
IndexError: Boolean index has wrong length: 3 instead of 2.
NA values in a boolean array are treated as False.
>>> mask = pd.array([True, pd.NA])
>>> pd.api.indexers.check_array_indexer(arr, mask)
array([ True, False])
A numpy boolean mask will get passed through (if the length is correct):
>>> mask = np.array([True, False])
>>> pd.api.indexers.check_array_indexer(arr, mask)
array([ True, False])
Similarly for integer indexers, an integer ndarray is returned when it is
a valid indexer, otherwise an error is (for integer indexers, a matching
length is not required):
>>> indexer = pd.array([0, 2], dtype="Int64")
>>> arr = pd.array([1, 2, 3])
>>> pd.api.indexers.check_array_indexer(arr, indexer)
array([0, 2])
>>> indexer = pd.array([0, pd.NA], dtype="Int64")
>>> pd.api.indexers.check_array_indexer(arr, indexer)
Traceback (most recent call last):
...
ValueError: Cannot index with an integer indexer containing NA values
For non-integer/boolean dtypes, an appropriate error is raised:
>>> indexer = np.array([0., 2.], dtype="float64")
>>> pd.api.indexers.check_array_indexer(arr, indexer)
Traceback (most recent call last):
...
IndexError: arrays used as indices must be of integer or boolean type
|
reference/api/pandas.api.indexers.check_array_indexer.html
|
Internals
|
Internals
|
This section will provide a look into some of pandas internals. It’s primarily
intended for developers of pandas itself.
Indexing#
In pandas there are a few objects implemented which can serve as valid
containers for the axis labels:
Index: the generic “ordered set” object, an ndarray of object dtype
assuming nothing about its contents. The labels must be hashable (and
likely immutable) and unique. Populates a dict of label to location in
Cython to do O(1) lookups.
Int64Index: a version of Index highly optimized for 64-bit integer
data, such as time stamps
Float64Index: a version of Index highly optimized for 64-bit float data
MultiIndex: the standard hierarchical index object
DatetimeIndex: An Index object with Timestamp boxed elements (implemented as int64 values)
TimedeltaIndex: An Index object with Timedelta boxed elements (implemented as int64 values)
PeriodIndex: An Index object with Period elements
There are functions that make the creation of a regular index easy:
date_range: fixed frequency date range generated from a time rule or
DateOffset. An ndarray of Python datetime objects
period_range: fixed frequency date range generated from a time rule or
DateOffset. An ndarray of Period objects, representing timespans
The motivation for having an Index class in the first place was to enable
different implementations of indexing. This means that it’s possible for you,
the user, to implement a custom Index subclass that may be better suited to
a particular application than the ones provided in pandas.
From an internal implementation point of view, the relevant methods that an
Index must define are one or more of the following, depending on how
incompatible the new object internals are with the Index functions (a short
sketch follows the list):
get_loc: returns an “indexer” (an integer, or in some cases a
slice object) for a label
slice_locs: returns the “range” to slice between two labels
get_indexer: Computes the indexing vector for reindexing / data
alignment purposes. See the source / docstrings for more on this
get_indexer_non_unique: Computes the indexing vector for reindexing / data
alignment purposes when the index is non-unique. See the source / docstrings
for more on this
reindex: Does any pre-conversion of the input index then calls
get_indexer
union, intersection: computes the union or intersection of two
Index objects
insert: Inserts a new label into an Index, yielding a new object
delete: Delete a label, yielding a new object
drop: Deletes a set of labels
take: Analogous to ndarray.take
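A minimal sketch of a few of these methods on a plain Index, assuming only the public behavior described above (get_loc returns a position, slice_locs a positional range, and get_indexer marks missing labels with -1):
```
>>> idx = pd.Index(["a", "b", "c", "d"])
>>> idx.get_loc("c")             # position of a single label
2
>>> idx.slice_locs("b", "d")     # positional range between two labels
(1, 4)
>>> idx.get_indexer(["b", "e"])  # -1 marks labels that are not present
array([ 1, -1])
```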
MultiIndex#
Internally, the MultiIndex consists of a few things: the levels, the
integer codes (until version 0.24 named labels), and the level names:
In [1]: index = pd.MultiIndex.from_product(
...: [range(3), ["one", "two"]], names=["first", "second"]
...: )
...:
In [2]: index
Out[2]:
MultiIndex([(0, 'one'),
(0, 'two'),
(1, 'one'),
(1, 'two'),
(2, 'one'),
(2, 'two')],
names=['first', 'second'])
In [3]: index.levels
Out[3]: FrozenList([[0, 1, 2], ['one', 'two']])
In [4]: index.codes
Out[4]: FrozenList([[0, 0, 1, 1, 2, 2], [0, 1, 0, 1, 0, 1]])
In [5]: index.names
Out[5]: FrozenList(['first', 'second'])
You can probably guess that the codes determine which unique element is
identified with that location at each layer of the index. It’s important to
note that sortedness is determined solely from the integer codes and does
not check (or care) whether the levels themselves are sorted. Fortunately, the
constructors from_tuples and from_arrays ensure that this is true, but
if you compute the levels and codes yourself, please be careful.
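To make the relationship concrete, the same MultiIndex as above can be rebuilt directly from its levels and codes; each code is a position into the corresponding level. A minimal sketch using the public MultiIndex constructor:
```
>>> pd.MultiIndex(
...     levels=[[0, 1, 2], ["one", "two"]],
...     codes=[[0, 0, 1, 1, 2, 2], [0, 1, 0, 1, 0, 1]],
...     names=["first", "second"],
... )
MultiIndex([(0, 'one'),
            (0, 'two'),
            (1, 'one'),
            (1, 'two'),
            (2, 'one'),
            (2, 'two')],
           names=['first', 'second'])
```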
Values#
pandas extends NumPy’s type system with custom types, like Categorical or
datetimes with a timezone, so we have multiple notions of “values”. For 1-D
containers (Index classes and Series) we have the following convention:
cls._values is the "best possible" array. This could be an
ndarray or ExtensionArray.
So, for example, Series[category]._values is a Categorical.
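A minimal illustration of that convention (the exact array types shown are what current pandas returns; treat this as a sketch):
```
>>> pd.Series(["a", "b", "a"], dtype="category")._values
['a', 'b', 'a']
Categories (2, object): ['a', 'b']
>>> pd.Series([1, 2, 3])._values
array([1, 2, 3])
```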
Subclassing pandas data structures#
This section has been moved to Subclassing pandas data structures.
|
development/internals.html
|
pandas.tseries.offsets.BusinessHour.next_bday
|
`pandas.tseries.offsets.BusinessHour.next_bday`
Used for moving to next business day.
|
BusinessHour.next_bday#
Used for moving to next business day.
|
reference/api/pandas.tseries.offsets.BusinessHour.next_bday.html
|
pandas.io.formats.style.Styler.set_table_styles
|
`pandas.io.formats.style.Styler.set_table_styles`
Set the table styles included within the <style> HTML element.
This function can be used to style the entire table, columns, rows or
specific HTML selectors.
```
>>> df = pd.DataFrame(np.random.randn(10, 4),
... columns=['A', 'B', 'C', 'D'])
>>> df.style.set_table_styles(
... [{'selector': 'tr:hover',
... 'props': [('background-color', 'yellow')]}]
... )
```
|
Styler.set_table_styles(table_styles=None, axis=0, overwrite=True, css_class_names=None)[source]#
Set the table styles included within the <style> HTML element.
This function can be used to style the entire table, columns, rows or
specific HTML selectors.
Parameters
table_styles : list or dict
    If supplying a list, each individual table_style should be a dictionary with
    selector and props keys. selector should be a CSS selector that the style
    will be applied to (automatically prefixed by the table's UUID) and props
    should be a list of tuples with (attribute, value).
    If supplying a dict, the dict keys should correspond to column names or index
    values, depending upon the specified axis argument. These will be mapped to
    row or col CSS selectors. MultiIndex values as dict keys should be in their
    respective tuple form. The dict values should be a list as specified in the
    form with CSS selectors and props that will be applied to the specified row
    or column.
    Changed in version 1.2.0.
axis : {0 or 'index', 1 or 'columns', None}, default 0
    Apply to each column (axis=0 or 'index') or to each row (axis=1 or 'columns').
    Only used if table_styles is dict.
    New in version 1.2.0.
overwrite : bool, default True
    Styles are replaced if True, or extended if False. CSS rules are preserved,
    so the most recent styles set will dominate if selectors intersect.
    New in version 1.2.0.
css_class_names : dict, optional
    A dict of strings used to replace the default CSS classes described below.
    New in version 1.4.0.
Returns
self : Styler
See also
Styler.set_td_classes
    Set the DataFrame of strings added to the class attribute of <td> HTML elements.
Styler.set_table_attributes
    Set the table attributes added to the <table> HTML element.
Notes
The default CSS classes dict, whose values can be replaced, is as follows:
css_class_names = {"row_heading": "row_heading",
"col_heading": "col_heading",
"index_name": "index_name",
"col": "col",
"row": "row",
"col_trim": "col_trim",
"row_trim": "row_trim",
"level": "level",
"data": "data",
"blank": "blank",
"foot": "foot"}
Examples
>>> df = pd.DataFrame(np.random.randn(10, 4),
... columns=['A', 'B', 'C', 'D'])
>>> df.style.set_table_styles(
... [{'selector': 'tr:hover',
... 'props': [('background-color', 'yellow')]}]
... )
Or with CSS strings
>>> df.style.set_table_styles(
... [{'selector': 'tr:hover',
... 'props': 'background-color: yellow; font-size: 1em;'}]
... )
Adding column styling by name
>>> df.style.set_table_styles({
... 'A': [{'selector': '',
... 'props': [('color', 'red')]}],
... 'B': [{'selector': 'td',
... 'props': 'color: blue;'}]
... }, overwrite=False)
Adding row styling
>>> df.style.set_table_styles({
... 0: [{'selector': 'td:hover',
... 'props': [('font-size', '25px')]}]
... }, axis=1, overwrite=False)
See Table Visualization user guide for
more details.
|
reference/api/pandas.io.formats.style.Styler.set_table_styles.html
|
pandas.tseries.offsets.MonthEnd.nanos
|
pandas.tseries.offsets.MonthEnd.nanos
|
MonthEnd.nanos#
|
reference/api/pandas.tseries.offsets.MonthEnd.nanos.html
|
pandas.Interval.closed
|
`pandas.Interval.closed`
String describing the inclusive side of the intervals.
Either left, right, both or neither.
|
Interval.closed#
String describing the inclusive side of the intervals.
Either left, right, both or neither.
|
reference/api/pandas.Interval.closed.html
|
pandas.Series.tz_convert
|
`pandas.Series.tz_convert`
Convert tz-aware axis to target time zone.
|
Series.tz_convert(tz, axis=0, level=None, copy=True)[source]#
Convert tz-aware axis to target time zone.
Parameters
tz : str or tzinfo object
axis : the axis to convert
level : int, str, default None
    If axis is a MultiIndex, convert a specific level. Otherwise must be None.
copy : bool, default True
    Also make a copy of the underlying data.
Returns
Series/DataFrame
    Object with time zone converted axis.
Raises
TypeError
    If the axis is tz-naive.
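The entry above carries no example; a minimal sketch of converting a UTC-indexed Series to another zone (the timestamps and zone names are illustrative):
```
>>> s = pd.Series(
...     [1, 2],
...     index=pd.DatetimeIndex(["2022-01-01 00:00", "2022-01-01 01:00"], tz="UTC"),
... )
>>> s.tz_convert("US/Eastern")
2021-12-31 19:00:00-05:00    1
2021-12-31 20:00:00-05:00    2
dtype: int64
```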
|
reference/api/pandas.Series.tz_convert.html
|
pandas.core.groupby.SeriesGroupBy.hist
|
`pandas.core.groupby.SeriesGroupBy.hist`
Draw histogram of the input series using matplotlib.
If passed, then used to form histograms for separate groups.
|
property SeriesGroupBy.hist[source]#
Draw histogram of the input series using matplotlib.
Parameters
by : object, optional
    If passed, then used to form histograms for separate groups.
ax : matplotlib axis object
    If not passed, uses gca().
grid : bool, default True
    Whether to show axis grid lines.
xlabelsize : int, default None
    If specified changes the x-axis label size.
xrot : float, default None
    Rotation of x axis labels.
ylabelsize : int, default None
    If specified changes the y-axis label size.
yrot : float, default None
    Rotation of y axis labels.
figsize : tuple, default None
    Figure size in inches by default.
bins : int or sequence, default 10
    Number of histogram bins to be used. If an integer is given, bins + 1 bin
    edges are calculated and returned. If bins is a sequence, gives bin edges,
    including left edge of first bin and right edge of last bin. In this case,
    bins is returned unmodified.
backend : str, default None
    Backend to use instead of the backend specified in the option
    plotting.backend. For instance, 'matplotlib'. Alternatively, to specify the
    plotting.backend for the whole session, set pd.options.plotting.backend.
    New in version 1.0.0.
legend : bool, default False
    Whether to show the legend.
    New in version 1.1.0.
**kwargs
    To be passed to the actual plotting function.
Returns
matplotlib.AxesSubplot
    A histogram plot.
See also
matplotlib.axes.Axes.hist
    Plot a histogram using matplotlib.
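No example accompanies the entry above; a minimal sketch, assuming matplotlib is installed (the column names and bin count are illustrative):
```
>>> import matplotlib.pyplot as plt
>>> df = pd.DataFrame({"key": ["a", "a", "b", "b"], "val": [1.0, 2.0, 3.0, 4.0]})
>>> _ = df.groupby("key")["val"].hist(bins=2)  # draws a histogram per group
>>> plt.show()
```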
|
reference/api/pandas.core.groupby.SeriesGroupBy.hist.html
|
DataFrame
|
DataFrame
DataFrame([data, index, columns, dtype, copy])
Two-dimensional, size-mutable, potentially heterogeneous tabular data.
Axes
DataFrame.index
The index (row labels) of the DataFrame.
|
Constructor#
DataFrame([data, index, columns, dtype, copy])
Two-dimensional, size-mutable, potentially heterogeneous tabular data.
Attributes and underlying data#
Axes
DataFrame.index
The index (row labels) of the DataFrame.
DataFrame.columns
The column labels of the DataFrame.
DataFrame.dtypes
Return the dtypes in the DataFrame.
DataFrame.info([verbose, buf, max_cols, ...])
Print a concise summary of a DataFrame.
DataFrame.select_dtypes([include, exclude])
Return a subset of the DataFrame's columns based on the column dtypes.
DataFrame.values
Return a Numpy representation of the DataFrame.
DataFrame.axes
Return a list representing the axes of the DataFrame.
DataFrame.ndim
Return an int representing the number of axes / array dimensions.
DataFrame.size
Return an int representing the number of elements in this object.
DataFrame.shape
Return a tuple representing the dimensionality of the DataFrame.
DataFrame.memory_usage([index, deep])
Return the memory usage of each column in bytes.
DataFrame.empty
Indicator whether Series/DataFrame is empty.
DataFrame.set_flags(*[, copy, ...])
Return a new object with updated flags.
Conversion#
DataFrame.astype(dtype[, copy, errors])
Cast a pandas object to a specified dtype dtype.
DataFrame.convert_dtypes([infer_objects, ...])
Convert columns to best possible dtypes using dtypes supporting pd.NA.
DataFrame.infer_objects()
Attempt to infer better dtypes for object columns.
DataFrame.copy([deep])
Make a copy of this object's indices and data.
DataFrame.bool()
Return the bool of a single element Series or DataFrame.
Indexing, iteration#
DataFrame.head([n])
Return the first n rows.
DataFrame.at
Access a single value for a row/column label pair.
DataFrame.iat
Access a single value for a row/column pair by integer position.
DataFrame.loc
Access a group of rows and columns by label(s) or a boolean array.
DataFrame.iloc
Purely integer-location based indexing for selection by position.
DataFrame.insert(loc, column, value[, ...])
Insert column into DataFrame at specified location.
DataFrame.__iter__()
Iterate over info axis.
DataFrame.items()
Iterate over (column name, Series) pairs.
DataFrame.iteritems()
(DEPRECATED) Iterate over (column name, Series) pairs.
DataFrame.keys()
Get the 'info axis' (see Indexing for more).
DataFrame.iterrows()
Iterate over DataFrame rows as (index, Series) pairs.
DataFrame.itertuples([index, name])
Iterate over DataFrame rows as namedtuples.
DataFrame.lookup(row_labels, col_labels)
(DEPRECATED) Label-based "fancy indexing" function for DataFrame.
DataFrame.pop(item)
Return item and drop from frame.
DataFrame.tail([n])
Return the last n rows.
DataFrame.xs(key[, axis, level, drop_level])
Return cross-section from the Series/DataFrame.
DataFrame.get(key[, default])
Get item from object for given key (ex: DataFrame column).
DataFrame.isin(values)
Whether each element in the DataFrame is contained in values.
DataFrame.where(cond[, other, inplace, ...])
Replace values where the condition is False.
DataFrame.mask(cond[, other, inplace, axis, ...])
Replace values where the condition is True.
DataFrame.query(expr, *[, inplace])
Query the columns of a DataFrame with a boolean expression.
For more information on .at, .iat, .loc, and
.iloc, see the indexing documentation.
Binary operator functions#
DataFrame.add(other[, axis, level, fill_value])
Get Addition of dataframe and other, element-wise (binary operator add).
DataFrame.sub(other[, axis, level, fill_value])
Get Subtraction of dataframe and other, element-wise (binary operator sub).
DataFrame.mul(other[, axis, level, fill_value])
Get Multiplication of dataframe and other, element-wise (binary operator mul).
DataFrame.div(other[, axis, level, fill_value])
Get Floating division of dataframe and other, element-wise (binary operator truediv).
DataFrame.truediv(other[, axis, level, ...])
Get Floating division of dataframe and other, element-wise (binary operator truediv).
DataFrame.floordiv(other[, axis, level, ...])
Get Integer division of dataframe and other, element-wise (binary operator floordiv).
DataFrame.mod(other[, axis, level, fill_value])
Get Modulo of dataframe and other, element-wise (binary operator mod).
DataFrame.pow(other[, axis, level, fill_value])
Get Exponential power of dataframe and other, element-wise (binary operator pow).
DataFrame.dot(other)
Compute the matrix multiplication between the DataFrame and other.
DataFrame.radd(other[, axis, level, fill_value])
Get Addition of dataframe and other, element-wise (binary operator radd).
DataFrame.rsub(other[, axis, level, fill_value])
Get Subtraction of dataframe and other, element-wise (binary operator rsub).
DataFrame.rmul(other[, axis, level, fill_value])
Get Multiplication of dataframe and other, element-wise (binary operator rmul).
DataFrame.rdiv(other[, axis, level, fill_value])
Get Floating division of dataframe and other, element-wise (binary operator rtruediv).
DataFrame.rtruediv(other[, axis, level, ...])
Get Floating division of dataframe and other, element-wise (binary operator rtruediv).
DataFrame.rfloordiv(other[, axis, level, ...])
Get Integer division of dataframe and other, element-wise (binary operator rfloordiv).
DataFrame.rmod(other[, axis, level, fill_value])
Get Modulo of dataframe and other, element-wise (binary operator rmod).
DataFrame.rpow(other[, axis, level, fill_value])
Get Exponential power of dataframe and other, element-wise (binary operator rpow).
DataFrame.lt(other[, axis, level])
Get Less than of dataframe and other, element-wise (binary operator lt).
DataFrame.gt(other[, axis, level])
Get Greater than of dataframe and other, element-wise (binary operator gt).
DataFrame.le(other[, axis, level])
Get Less than or equal to of dataframe and other, element-wise (binary operator le).
DataFrame.ge(other[, axis, level])
Get Greater than or equal to of dataframe and other, element-wise (binary operator ge).
DataFrame.ne(other[, axis, level])
Get Not equal to of dataframe and other, element-wise (binary operator ne).
DataFrame.eq(other[, axis, level])
Get Equal to of dataframe and other, element-wise (binary operator eq).
DataFrame.combine(other, func[, fill_value, ...])
Perform column-wise combine with another DataFrame.
DataFrame.combine_first(other)
Update null elements with value in the same location in other.
Function application, GroupBy & window#
DataFrame.apply(func[, axis, raw, ...])
Apply a function along an axis of the DataFrame.
DataFrame.applymap(func[, na_action])
Apply a function to a Dataframe elementwise.
DataFrame.pipe(func, *args, **kwargs)
Apply chainable functions that expect Series or DataFrames.
DataFrame.agg([func, axis])
Aggregate using one or more operations over the specified axis.
DataFrame.aggregate([func, axis])
Aggregate using one or more operations over the specified axis.
DataFrame.transform(func[, axis])
Call func on self producing a DataFrame with the same axis shape as self.
DataFrame.groupby([by, axis, level, ...])
Group DataFrame using a mapper or by a Series of columns.
DataFrame.rolling(window[, min_periods, ...])
Provide rolling window calculations.
DataFrame.expanding([min_periods, center, ...])
Provide expanding window calculations.
DataFrame.ewm([com, span, halflife, alpha, ...])
Provide exponentially weighted (EW) calculations.
Computations / descriptive stats#
DataFrame.abs()
Return a Series/DataFrame with absolute numeric value of each element.
DataFrame.all([axis, bool_only, skipna, level])
Return whether all elements are True, potentially over an axis.
DataFrame.any(*[, axis, bool_only, skipna, ...])
Return whether any element is True, potentially over an axis.
DataFrame.clip([lower, upper, axis, inplace])
Trim values at input threshold(s).
DataFrame.corr([method, min_periods, ...])
Compute pairwise correlation of columns, excluding NA/null values.
DataFrame.corrwith(other[, axis, drop, ...])
Compute pairwise correlation.
DataFrame.count([axis, level, numeric_only])
Count non-NA cells for each column or row.
DataFrame.cov([min_periods, ddof, numeric_only])
Compute pairwise covariance of columns, excluding NA/null values.
DataFrame.cummax([axis, skipna])
Return cumulative maximum over a DataFrame or Series axis.
DataFrame.cummin([axis, skipna])
Return cumulative minimum over a DataFrame or Series axis.
DataFrame.cumprod([axis, skipna])
Return cumulative product over a DataFrame or Series axis.
DataFrame.cumsum([axis, skipna])
Return cumulative sum over a DataFrame or Series axis.
DataFrame.describe([percentiles, include, ...])
Generate descriptive statistics.
DataFrame.diff([periods, axis])
First discrete difference of element.
DataFrame.eval(expr, *[, inplace])
Evaluate a string describing operations on DataFrame columns.
DataFrame.kurt([axis, skipna, level, ...])
Return unbiased kurtosis over requested axis.
DataFrame.kurtosis([axis, skipna, level, ...])
Return unbiased kurtosis over requested axis.
DataFrame.mad([axis, skipna, level])
(DEPRECATED) Return the mean absolute deviation of the values over the requested axis.
DataFrame.max([axis, skipna, level, ...])
Return the maximum of the values over the requested axis.
DataFrame.mean([axis, skipna, level, ...])
Return the mean of the values over the requested axis.
DataFrame.median([axis, skipna, level, ...])
Return the median of the values over the requested axis.
DataFrame.min([axis, skipna, level, ...])
Return the minimum of the values over the requested axis.
DataFrame.mode([axis, numeric_only, dropna])
Get the mode(s) of each element along the selected axis.
DataFrame.pct_change([periods, fill_method, ...])
Percentage change between the current and a prior element.
DataFrame.prod([axis, skipna, level, ...])
Return the product of the values over the requested axis.
DataFrame.product([axis, skipna, level, ...])
Return the product of the values over the requested axis.
DataFrame.quantile([q, axis, numeric_only, ...])
Return values at the given quantile over requested axis.
DataFrame.rank([axis, method, numeric_only, ...])
Compute numerical data ranks (1 through n) along axis.
DataFrame.round([decimals])
Round a DataFrame to a variable number of decimal places.
DataFrame.sem([axis, skipna, level, ddof, ...])
Return unbiased standard error of the mean over requested axis.
DataFrame.skew([axis, skipna, level, ...])
Return unbiased skew over requested axis.
DataFrame.sum([axis, skipna, level, ...])
Return the sum of the values over the requested axis.
DataFrame.std([axis, skipna, level, ddof, ...])
Return sample standard deviation over requested axis.
DataFrame.var([axis, skipna, level, ddof, ...])
Return unbiased variance over requested axis.
DataFrame.nunique([axis, dropna])
Count number of distinct elements in specified axis.
DataFrame.value_counts([subset, normalize, ...])
Return a Series containing counts of unique rows in the DataFrame.
Reindexing / selection / label manipulation#
DataFrame.add_prefix(prefix)
Prefix labels with string prefix.
DataFrame.add_suffix(suffix)
Suffix labels with string suffix.
DataFrame.align(other[, join, axis, level, ...])
Align two objects on their axes with the specified join method.
DataFrame.at_time(time[, asof, axis])
Select values at particular time of day (e.g., 9:30AM).
DataFrame.between_time(start_time, end_time)
Select values between particular times of the day (e.g., 9:00-9:30 AM).
DataFrame.drop([labels, axis, index, ...])
Drop specified labels from rows or columns.
DataFrame.drop_duplicates([subset, keep, ...])
Return DataFrame with duplicate rows removed.
DataFrame.duplicated([subset, keep])
Return boolean Series denoting duplicate rows.
DataFrame.equals(other)
Test whether two objects contain the same elements.
DataFrame.filter([items, like, regex, axis])
Subset the dataframe rows or columns according to the specified index labels.
DataFrame.first(offset)
Select initial periods of time series data based on a date offset.
DataFrame.head([n])
Return the first n rows.
DataFrame.idxmax([axis, skipna, numeric_only])
Return index of first occurrence of maximum over requested axis.
DataFrame.idxmin([axis, skipna, numeric_only])
Return index of first occurrence of minimum over requested axis.
DataFrame.last(offset)
Select final periods of time series data based on a date offset.
DataFrame.reindex([labels, index, columns, ...])
Conform Series/DataFrame to new index with optional filling logic.
DataFrame.reindex_like(other[, method, ...])
Return an object with matching indices as other object.
DataFrame.rename([mapper, index, columns, ...])
Alter axes labels.
DataFrame.rename_axis([mapper, inplace])
Set the name of the axis for the index or columns.
DataFrame.reset_index([level, drop, ...])
Reset the index, or a level of it.
DataFrame.sample([n, frac, replace, ...])
Return a random sample of items from an axis of object.
DataFrame.set_axis(labels, *[, axis, ...])
Assign desired index to given axis.
DataFrame.set_index(keys, *[, drop, append, ...])
Set the DataFrame index using existing columns.
DataFrame.tail([n])
Return the last n rows.
DataFrame.take(indices[, axis, is_copy])
Return the elements in the given positional indices along an axis.
DataFrame.truncate([before, after, axis, copy])
Truncate a Series or DataFrame before and after some index value.
Missing data handling#
DataFrame.backfill(*[, axis, inplace, ...])
Synonym for DataFrame.fillna() with method='bfill'.
DataFrame.bfill(*[, axis, inplace, limit, ...])
Synonym for DataFrame.fillna() with method='bfill'.
DataFrame.dropna(*[, axis, how, thresh, ...])
Remove missing values.
DataFrame.ffill(*[, axis, inplace, limit, ...])
Synonym for DataFrame.fillna() with method='ffill'.
DataFrame.fillna([value, method, axis, ...])
Fill NA/NaN values using the specified method.
DataFrame.interpolate([method, axis, limit, ...])
Fill NaN values using an interpolation method.
DataFrame.isna()
Detect missing values.
DataFrame.isnull()
DataFrame.isnull is an alias for DataFrame.isna.
DataFrame.notna()
Detect existing (non-missing) values.
DataFrame.notnull()
DataFrame.notnull is an alias for DataFrame.notna.
DataFrame.pad(*[, axis, inplace, limit, ...])
Synonym for DataFrame.fillna() with method='ffill'.
DataFrame.replace([to_replace, value, ...])
Replace values given in to_replace with value.
Reshaping, sorting, transposing#
DataFrame.droplevel(level[, axis])
Return Series/DataFrame with requested index / column level(s) removed.
DataFrame.pivot(*[, index, columns, values])
Return reshaped DataFrame organized by given index / column values.
DataFrame.pivot_table([values, index, ...])
Create a spreadsheet-style pivot table as a DataFrame.
DataFrame.reorder_levels(order[, axis])
Rearrange index levels using input order.
DataFrame.sort_values(by, *[, axis, ...])
Sort by the values along either axis.
DataFrame.sort_index(*[, axis, level, ...])
Sort object by labels (along an axis).
DataFrame.nlargest(n, columns[, keep])
Return the first n rows ordered by columns in descending order.
DataFrame.nsmallest(n, columns[, keep])
Return the first n rows ordered by columns in ascending order.
DataFrame.swaplevel([i, j, axis])
Swap levels i and j in a MultiIndex.
DataFrame.stack([level, dropna])
Stack the prescribed level(s) from columns to index.
DataFrame.unstack([level, fill_value])
Pivot a level of the (necessarily hierarchical) index labels.
DataFrame.swapaxes(axis1, axis2[, copy])
Interchange axes and swap values axes appropriately.
DataFrame.melt([id_vars, value_vars, ...])
Unpivot a DataFrame from wide to long format, optionally leaving identifiers set.
DataFrame.explode(column[, ignore_index])
Transform each element of a list-like to a row, replicating index values.
DataFrame.squeeze([axis])
Squeeze 1 dimensional axis objects into scalars.
DataFrame.to_xarray()
Return an xarray object from the pandas object.
DataFrame.T
DataFrame.transpose(*args[, copy])
Transpose index and columns.
Combining / comparing / joining / merging#
DataFrame.append(other[, ignore_index, ...])
(DEPRECATED) Append rows of other to the end of caller, returning a new object.
DataFrame.assign(**kwargs)
Assign new columns to a DataFrame.
DataFrame.compare(other[, align_axis, ...])
Compare to another DataFrame and show the differences.
DataFrame.join(other[, on, how, lsuffix, ...])
Join columns of another DataFrame.
DataFrame.merge(right[, how, on, left_on, ...])
Merge DataFrame or named Series objects with a database-style join.
DataFrame.update(other[, join, overwrite, ...])
Modify in place using non-NA values from another DataFrame.
Time Series-related#
DataFrame.asfreq(freq[, method, how, ...])
Convert time series to specified frequency.
DataFrame.asof(where[, subset])
Return the last row(s) without any NaNs before where.
DataFrame.shift([periods, freq, axis, ...])
Shift index by desired number of periods with an optional time freq.
DataFrame.slice_shift([periods, axis])
(DEPRECATED) Equivalent to shift without copying data.
DataFrame.tshift([periods, freq, axis])
(DEPRECATED) Shift the time index, using the index's frequency if available.
DataFrame.first_valid_index()
Return index for first non-NA value or None, if no non-NA value is found.
DataFrame.last_valid_index()
Return index for last non-NA value or None, if no non-NA value is found.
DataFrame.resample(rule[, axis, closed, ...])
Resample time-series data.
DataFrame.to_period([freq, axis, copy])
Convert DataFrame from DatetimeIndex to PeriodIndex.
DataFrame.to_timestamp([freq, how, axis, copy])
Cast to DatetimeIndex of timestamps, at beginning of period.
DataFrame.tz_convert(tz[, axis, level, copy])
Convert tz-aware axis to target time zone.
DataFrame.tz_localize(tz[, axis, level, ...])
Localize tz-naive index of a Series or DataFrame to target time zone.
Flags#
Flags refer to attributes of the pandas object. Properties of the dataset (like
the date it was recorded, the URL it was accessed from, etc.) should be stored
in DataFrame.attrs.
Flags(obj, *, allows_duplicate_labels)
Flags that apply to pandas objects.
Metadata#
DataFrame.attrs is a dictionary for storing global metadata for this DataFrame.
Warning
DataFrame.attrs is considered experimental and may change without warning.
DataFrame.attrs
Dictionary of global attributes of this dataset.
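A minimal sketch of the distinction drawn above (the metadata key is illustrative): dataset properties go into attrs, while pandas behavior flags are set through set_flags:
```
>>> df = pd.DataFrame({"a": [1, 2]})
>>> df.attrs["recorded"] = "2022-01-01"               # dataset metadata
>>> df.attrs
{'recorded': '2022-01-01'}
>>> df = df.set_flags(allows_duplicate_labels=False)  # pandas object flag
>>> df.flags.allows_duplicate_labels
False
```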
Plotting#
DataFrame.plot is both a callable method and a namespace attribute for
specific plotting methods of the form DataFrame.plot.<kind>.
DataFrame.plot([x, y, kind, ax, ...])
DataFrame plotting accessor and method.
DataFrame.plot.area([x, y])
Draw a stacked area plot.
DataFrame.plot.bar([x, y])
Vertical bar plot.
DataFrame.plot.barh([x, y])
Make a horizontal bar plot.
DataFrame.plot.box([by])
Make a box plot of the DataFrame columns.
DataFrame.plot.density([bw_method, ind])
Generate Kernel Density Estimate plot using Gaussian kernels.
DataFrame.plot.hexbin(x, y[, C, ...])
Generate a hexagonal binning plot.
DataFrame.plot.hist([by, bins])
Draw one histogram of the DataFrame's columns.
DataFrame.plot.kde([bw_method, ind])
Generate Kernel Density Estimate plot using Gaussian kernels.
DataFrame.plot.line([x, y])
Plot Series or DataFrame as lines.
DataFrame.plot.pie(**kwargs)
Generate a pie plot.
DataFrame.plot.scatter(x, y[, s, c])
Create a scatter plot with varying marker point size and color.
DataFrame.boxplot([column, by, ax, ...])
Make a box plot from DataFrame columns.
DataFrame.hist([column, by, grid, ...])
Make a histogram of the DataFrame's columns.
Sparse accessor#
Sparse-dtype specific methods and attributes are provided under the
DataFrame.sparse accessor.
DataFrame.sparse.density
Ratio of non-sparse points to total (dense) data points.
DataFrame.sparse.from_spmatrix(data[, ...])
Create a new DataFrame from a scipy sparse matrix.
DataFrame.sparse.to_coo()
Return the contents of the frame as a sparse SciPy COO matrix.
DataFrame.sparse.to_dense()
Convert a DataFrame with sparse values to dense.
Serialization / IO / conversion#
DataFrame.from_dict(data[, orient, dtype, ...])
Construct DataFrame from dict of array-like or dicts.
DataFrame.from_records(data[, index, ...])
Convert structured or record ndarray to DataFrame.
DataFrame.to_orc([path, engine, index, ...])
Write a DataFrame to the ORC format.
DataFrame.to_parquet([path, engine, ...])
Write a DataFrame to the binary parquet format.
DataFrame.to_pickle(path[, compression, ...])
Pickle (serialize) object to file.
DataFrame.to_csv([path_or_buf, sep, na_rep, ...])
Write object to a comma-separated values (csv) file.
DataFrame.to_hdf(path_or_buf, key[, mode, ...])
Write the contained data to an HDF5 file using HDFStore.
DataFrame.to_sql(name, con[, schema, ...])
Write records stored in a DataFrame to a SQL database.
DataFrame.to_dict([orient, into])
Convert the DataFrame to a dictionary.
DataFrame.to_excel(excel_writer[, ...])
Write object to an Excel sheet.
DataFrame.to_json([path_or_buf, orient, ...])
Convert the object to a JSON string.
DataFrame.to_html([buf, columns, col_space, ...])
Render a DataFrame as an HTML table.
DataFrame.to_feather(path, **kwargs)
Write a DataFrame to the binary Feather format.
DataFrame.to_latex([buf, columns, ...])
Render object to a LaTeX tabular, longtable, or nested table.
DataFrame.to_stata(path, *[, convert_dates, ...])
Export DataFrame object to Stata dta format.
DataFrame.to_gbq(destination_table[, ...])
Write a DataFrame to a Google BigQuery table.
DataFrame.to_records([index, column_dtypes, ...])
Convert DataFrame to a NumPy record array.
DataFrame.to_string([buf, columns, ...])
Render a DataFrame to a console-friendly tabular output.
DataFrame.to_clipboard([excel, sep])
Copy object to the system clipboard.
DataFrame.to_markdown([buf, mode, index, ...])
Print DataFrame in Markdown-friendly format.
DataFrame.style
Returns a Styler object.
DataFrame.__dataframe__([nan_as_null, ...])
Return the dataframe interchange object implementing the interchange protocol.
|
reference/frame.html
|
pandas.tseries.offsets.BYearEnd.is_year_start
|
`pandas.tseries.offsets.BYearEnd.is_year_start`
Return boolean whether a timestamp occurs on the year start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
```
|
BYearEnd.is_year_start()#
Return boolean whether a timestamp occurs on the year start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
|
reference/api/pandas.tseries.offsets.BYearEnd.is_year_start.html
|
pandas.core.groupby.DataFrameGroupBy.filter
|
`pandas.core.groupby.DataFrameGroupBy.filter`
Return a copy of a DataFrame excluding filtered elements.
Elements from groups are filtered if they do not satisfy the
boolean criterion specified by func.
```
>>> df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
... 'foo', 'bar'],
... 'B' : [1, 2, 3, 4, 5, 6],
... 'C' : [2.0, 5., 8., 1., 2., 9.]})
>>> grouped = df.groupby('A')
>>> grouped.filter(lambda x: x['B'].mean() > 3.)
A B C
1 bar 2 5.0
3 bar 4 1.0
5 bar 6 9.0
```
|
DataFrameGroupBy.filter(func, dropna=True, *args, **kwargs)[source]#
Return a copy of a DataFrame excluding filtered elements.
Elements from groups are filtered if they do not satisfy the
boolean criterion specified by func.
Parameters
func : function
    Function to apply to each subframe. Should return True or False.
dropna : bool, default True
    Drop groups that do not pass the filter. If False, groups that evaluate
    False are filled with NaNs.
Returns
filtered : DataFrame
Notes
Each subframe is endowed with the attribute ‘name’ in case you need to know
which group you are working on.
Functions that mutate the passed object can produce unexpected
behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods
for more details.
Examples
>>> df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
... 'foo', 'bar'],
... 'B' : [1, 2, 3, 4, 5, 6],
... 'C' : [2.0, 5., 8., 1., 2., 9.]})
>>> grouped = df.groupby('A')
>>> grouped.filter(lambda x: x['B'].mean() > 3.)
A B C
1 bar 2 5.0
3 bar 4 1.0
5 bar 6 9.0
|
reference/api/pandas.core.groupby.DataFrameGroupBy.filter.html
|
pandas.Timedelta.max
|
pandas.Timedelta.max
|
Timedelta.max = Timedelta('106751 days 23:47:16.854775807')#
|
reference/api/pandas.Timedelta.max.html
|
pandas.tseries.offsets.Easter.kwds
|
`pandas.tseries.offsets.Easter.kwds`
Return a dict of extra parameters for the offset.
Examples
```
>>> pd.DateOffset(5).kwds
{}
```
|
Easter.kwds#
Return a dict of extra parameters for the offset.
Examples
>>> pd.DateOffset(5).kwds
{}
>>> pd.offsets.FY5253Quarter().kwds
{'weekday': 0,
'startingMonth': 1,
'qtr_with_extra_week': 1,
'variation': 'nearest'}
|
reference/api/pandas.tseries.offsets.Easter.kwds.html
|
pandas.DataFrame.from_dict
|
`pandas.DataFrame.from_dict`
Construct DataFrame from dict of array-like or dicts.
```
>>> data = {'col_1': [3, 2, 1, 0], 'col_2': ['a', 'b', 'c', 'd']}
>>> pd.DataFrame.from_dict(data)
col_1 col_2
0 3 a
1 2 b
2 1 c
3 0 d
```
|
classmethod DataFrame.from_dict(data, orient='columns', dtype=None, columns=None)[source]#
Construct DataFrame from dict of array-like or dicts.
Creates DataFrame object from dictionary by columns or by index
allowing dtype specification.
Parameters
data : dict
    Of the form {field : array-like} or {field : dict}.
orient : {'columns', 'index', 'tight'}, default 'columns'
    The "orientation" of the data. If the keys of the passed dict should be the
    columns of the resulting DataFrame, pass 'columns' (default). Otherwise if
    the keys should be rows, pass 'index'. If 'tight', assume a dict with keys
    ['index', 'columns', 'data', 'index_names', 'column_names'].
    New in version 1.4.0: 'tight' as an allowed value for the orient argument.
dtype : dtype, default None
    Data type to force, otherwise infer.
columns : list, default None
    Column labels to use when orient='index'. Raises a ValueError if used with
    orient='columns' or orient='tight'.
Returns
DataFrame
See also
DataFrame.from_records
    DataFrame from structured ndarray, sequence of tuples or dicts, or DataFrame.
DataFrame
    DataFrame object creation using constructor.
DataFrame.to_dict
    Convert the DataFrame to a dictionary.
Examples
By default the keys of the dict become the DataFrame columns:
>>> data = {'col_1': [3, 2, 1, 0], 'col_2': ['a', 'b', 'c', 'd']}
>>> pd.DataFrame.from_dict(data)
col_1 col_2
0 3 a
1 2 b
2 1 c
3 0 d
Specify orient='index' to create the DataFrame using dictionary
keys as rows:
>>> data = {'row_1': [3, 2, 1, 0], 'row_2': ['a', 'b', 'c', 'd']}
>>> pd.DataFrame.from_dict(data, orient='index')
0 1 2 3
row_1 3 2 1 0
row_2 a b c d
When using the ‘index’ orientation, the column names can be
specified manually:
>>> pd.DataFrame.from_dict(data, orient='index',
... columns=['A', 'B', 'C', 'D'])
A B C D
row_1 3 2 1 0
row_2 a b c d
Specify orient='tight' to create the DataFrame using a ‘tight’
format:
>>> data = {'index': [('a', 'b'), ('a', 'c')],
... 'columns': [('x', 1), ('y', 2)],
... 'data': [[1, 3], [2, 4]],
... 'index_names': ['n1', 'n2'],
... 'column_names': ['z1', 'z2']}
>>> pd.DataFrame.from_dict(data, orient='tight')
z1 x y
z2 1 2
n1 n2
a b 1 3
c 2 4
|
reference/api/pandas.DataFrame.from_dict.html
|
pandas.tseries.offsets.Tick.name
|
`pandas.tseries.offsets.Tick.name`
Return a string representing the base frequency.
```
>>> pd.offsets.Hour().name
'H'
```
|
Tick.name#
Return a string representing the base frequency.
Examples
>>> pd.offsets.Hour().name
'H'
>>> pd.offsets.Hour(5).name
'H'
|
reference/api/pandas.tseries.offsets.Tick.name.html
|
pandas.tseries.offsets.Second.copy
|
`pandas.tseries.offsets.Second.copy`
Return a copy of the frequency.
```
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
```
|
Second.copy()#
Return a copy of the frequency.
Examples
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
|
reference/api/pandas.tseries.offsets.Second.copy.html
|
pandas.tseries.offsets.Week.name
|
`pandas.tseries.offsets.Week.name`
Return a string representing the base frequency.
Examples
```
>>> pd.offsets.Hour().name
'H'
```
|
Week.name#
Return a string representing the base frequency.
Examples
>>> pd.offsets.Hour().name
'H'
>>> pd.offsets.Hour(5).name
'H'
|
reference/api/pandas.tseries.offsets.Week.name.html
|
pandas.tseries.offsets.BusinessHour.is_quarter_end
|
`pandas.tseries.offsets.BusinessHour.is_quarter_end`
Return boolean whether a timestamp occurs on the quarter end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
```
|
BusinessHour.is_quarter_end()#
Return boolean whether a timestamp occurs on the quarter end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
|
reference/api/pandas.tseries.offsets.BusinessHour.is_quarter_end.html
|
pandas.tseries.offsets.FY5253Quarter.is_anchored
|
`pandas.tseries.offsets.FY5253Quarter.is_anchored`
Return boolean whether the frequency is a unit frequency (n=1).
Examples
```
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
```
|
FY5253Quarter.is_anchored()#
Return boolean whether the frequency is a unit frequency (n=1).
Examples
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
|
reference/api/pandas.tseries.offsets.FY5253Quarter.is_anchored.html
|
pandas.tseries.offsets.BYearEnd.__call__
|
`pandas.tseries.offsets.BYearEnd.__call__`
Call self as a function.
|
BYearEnd.__call__(*args, **kwargs)#
Call self as a function.
|
reference/api/pandas.tseries.offsets.BYearEnd.__call__.html
|
pandas.DataFrame.to_pickle
|
`pandas.DataFrame.to_pickle`
Pickle (serialize) object to file.
```
>>> original_df = pd.DataFrame({"foo": range(5), "bar": range(5, 10)})
>>> original_df
foo bar
0 0 5
1 1 6
2 2 7
3 3 8
4 4 9
>>> original_df.to_pickle("./dummy.pkl")
```
|
DataFrame.to_pickle(path, compression='infer', protocol=5, storage_options=None)[source]#
Pickle (serialize) object to file.
Parameters
path : str, path object, or file-like object
    String, path object (implementing os.PathLike[str]), or file-like object
    implementing a binary write() function. File path where the pickled object
    will be stored.
compression : str or dict, default 'infer'
    For on-the-fly compression of the output data. If 'infer' and 'path' is
    path-like, then detect compression from the following extensions: '.gz',
    '.bz2', '.zip', '.xz', '.zst', '.tar', '.tar.gz', '.tar.xz' or '.tar.bz2'
    (otherwise no compression). Set to None for no compression. Can also be a
    dict with key 'method' set to one of {'zip', 'gzip', 'bz2', 'zstd', 'tar'}
    and other key-value pairs are forwarded to zipfile.ZipFile, gzip.GzipFile,
    bz2.BZ2File, zstandard.ZstdCompressor or tarfile.TarFile, respectively.
    As an example, the following could be passed for faster compression and to
    create a reproducible gzip archive:
    compression={'method': 'gzip', 'compresslevel': 1, 'mtime': 1}.
    New in version 1.5.0: Added support for .tar files.
protocol : int
    Int which indicates which protocol should be used by the pickler, default
    HIGHEST_PROTOCOL (see [1] paragraph 12.1.2). The possible values are 0, 1,
    2, 3, 4, 5. A negative value for the protocol parameter is equivalent to
    setting its value to HIGHEST_PROTOCOL.
    [1] https://docs.python.org/3/library/pickle.html
storage_options : dict, optional
    Extra options that make sense for a particular storage connection, e.g.
    host, port, username, password, etc. For HTTP(S) URLs the key-value pairs
    are forwarded to urllib.request.Request as header options. For other URLs
    (e.g. starting with "s3://", and "gcs://") the key-value pairs are
    forwarded to fsspec.open. Please see fsspec and urllib for more details,
    and for more examples on storage options refer here.
    New in version 1.2.0.
See also
read_pickle
    Load pickled pandas object (or any object) from file.
DataFrame.to_hdf
    Write DataFrame to an HDF5 file.
DataFrame.to_sql
    Write DataFrame to a SQL database.
DataFrame.to_parquet
    Write a DataFrame to the binary parquet format.
Examples
>>> original_df = pd.DataFrame({"foo": range(5), "bar": range(5, 10)})
>>> original_df
foo bar
0 0 5
1 1 6
2 2 7
3 3 8
4 4 9
>>> original_df.to_pickle("./dummy.pkl")
>>> unpickled_df = pd.read_pickle("./dummy.pkl")
>>> unpickled_df
foo bar
0 0 5
1 1 6
2 2 7
3 3 8
4 4 9
|
reference/api/pandas.DataFrame.to_pickle.html
|
pandas.Series.rmod
|
`pandas.Series.rmod`
Return Modulo of series and other, element-wise (binary operator rmod).
```
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.mod(b, fill_value=0)
a 0.0
b NaN
c NaN
d 0.0
e NaN
dtype: float64
```
|
Series.rmod(other, level=None, fill_value=None, axis=0)[source]#
Return Modulo of series and other, element-wise (binary operator rmod).
Equivalent to other % series, but with support to substitute a fill_value for
missing data in either one of the inputs.
Parameters
other : Series or scalar value
level : int or name
    Broadcast across a level, matching Index values on the passed MultiIndex level.
fill_value : None or float value, default None (NaN)
    Fill existing missing (NaN) values, and any new element needed for successful
    Series alignment, with this value before computation. If data in both
    corresponding Series locations is missing the result of filling (at that
    location) will be missing.
axis : {0 or 'index'}
    Unused. Parameter needed for compatibility with DataFrame.
Returns
Series
    The result of the operation.
See also
Series.mod
    Element-wise Modulo, see Python documentation for more details.
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.mod(b, fill_value=0)
a 0.0
b NaN
c NaN
d 0.0
e NaN
dtype: float64
|
reference/api/pandas.Series.rmod.html
|
pandas.Index.is_monotonic_increasing
|
`pandas.Index.is_monotonic_increasing`
Return a boolean if the values are equal or increasing.
```
>>> Index([1, 2, 3]).is_monotonic_increasing
True
>>> Index([1, 2, 2]).is_monotonic_increasing
True
>>> Index([1, 3, 2]).is_monotonic_increasing
False
```
|
property Index.is_monotonic_increasing[source]#
Return a boolean if the values are equal or increasing.
Examples
>>> Index([1, 2, 3]).is_monotonic_increasing
True
>>> Index([1, 2, 2]).is_monotonic_increasing
True
>>> Index([1, 3, 2]).is_monotonic_increasing
False
|
reference/api/pandas.Index.is_monotonic_increasing.html
|
Options and settings
|
Options and settings
|
API for configuring global behavior. See the User Guide for more.
Working with options#
describe_option(pat[, _print_desc])
Prints the description for one or more registered options.
reset_option(pat)
Reset one or more options to their default value.
get_option(pat)
Retrieves the value of the specified option.
set_option(pat, value)
Sets the value of the specified option.
option_context(*args)
Context manager to temporarily set options in the with statement context.
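A minimal sketch of these functions using a widely used option, display.max_rows (its documented default is 60):
```
>>> import pandas as pd
>>> pd.get_option("display.max_rows")
60
>>> pd.set_option("display.max_rows", 100)
>>> with pd.option_context("display.max_rows", 10):
...     print(pd.get_option("display.max_rows"))  # temporary inside the block
10
>>> pd.get_option("display.max_rows")             # restored afterwards
100
```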
|
reference/options.html
|
Index
|
_
__array__() (pandas.Categorical method)
(pandas.Series method)
__call__() (pandas.option_context method)
(pandas.tseries.offsets.BQuarterBegin method)
(pandas.tseries.offsets.BQuarterEnd method)
(pandas.tseries.offsets.BusinessDay method)
(pandas.tseries.offsets.BusinessHour method)
(pandas.tseries.offsets.BusinessMonthBegin method)
(pandas.tseries.offsets.BusinessMonthEnd method)
(pandas.tseries.offsets.BYearBegin method)
(pandas.tseries.offsets.BYearEnd method)
(pandas.tseries.offsets.CustomBusinessDay method)
(pandas.tseries.offsets.CustomBusinessHour method)
(pandas.tseries.offsets.CustomBusinessMonthBegin method)
(pandas.tseries.offsets.CustomBusinessMonthEnd method)
(pandas.tseries.offsets.DateOffset method)
(pandas.tseries.offsets.Day method)
(pandas.tseries.offsets.Easter method)
(pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
(pandas.tseries.offsets.Hour method)
(pandas.tseries.offsets.LastWeekOfMonth method)
(pandas.tseries.offsets.Micro method)
(pandas.tseries.offsets.Milli method)
(pandas.tseries.offsets.Minute method)
(pandas.tseries.offsets.MonthBegin method)
(pandas.tseries.offsets.MonthEnd method)
(pandas.tseries.offsets.Nano method)
(pandas.tseries.offsets.QuarterBegin method)
(pandas.tseries.offsets.QuarterEnd method)
(pandas.tseries.offsets.Second method)
(pandas.tseries.offsets.SemiMonthBegin method)
(pandas.tseries.offsets.SemiMonthEnd method)
(pandas.tseries.offsets.Tick method)
(pandas.tseries.offsets.Week method)
(pandas.tseries.offsets.WeekOfMonth method)
(pandas.tseries.offsets.YearBegin method)
(pandas.tseries.offsets.YearEnd method)
__dataframe__() (pandas.DataFrame method)
__iter__() (pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.DataFrame method)
(pandas.Series method)
_concat_same_type() (pandas.api.extensions.ExtensionArray class method)
_formatter() (pandas.api.extensions.ExtensionArray method)
_from_factorized() (pandas.api.extensions.ExtensionArray class method)
_from_sequence() (pandas.api.extensions.ExtensionArray class method)
_from_sequence_of_strings() (pandas.api.extensions.ExtensionArray class method)
_reduce() (pandas.api.extensions.ExtensionArray method)
_values_for_argsort() (pandas.api.extensions.ExtensionArray method)
_values_for_factorize() (pandas.api.extensions.ExtensionArray method)
A
abs() (pandas.DataFrame method)
(pandas.Series method)
AbstractMethodError
AccessorRegistrationWarning
add() (pandas.DataFrame method)
(pandas.Series method)
add_categories() (pandas.CategoricalIndex method)
(pandas.Series.cat method)
add_prefix() (pandas.DataFrame method)
(pandas.Series method)
add_suffix() (pandas.DataFrame method)
(pandas.Series method)
agg() (pandas.core.groupby.GroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
aggregate() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.SeriesGroupBy method)
(pandas.core.resample.Resampler method)
(pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.DataFrame method)
(pandas.Series method)
align() (pandas.DataFrame method)
(pandas.Series method)
all() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.GroupBy method)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
allows_duplicate_labels (pandas.Flags property)
andrews_curves() (in module pandas.plotting)
any() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.GroupBy method)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
append() (pandas.DataFrame method)
(pandas.HDFStore method)
(pandas.Index method)
(pandas.Series method)
apply() (pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.DataFrame method)
(pandas.io.formats.style.Styler method)
(pandas.Series method)
(pandas.tseries.offsets.BQuarterBegin method)
(pandas.tseries.offsets.BQuarterEnd method)
(pandas.tseries.offsets.BusinessDay method)
(pandas.tseries.offsets.BusinessHour method)
(pandas.tseries.offsets.BusinessMonthBegin method)
(pandas.tseries.offsets.BusinessMonthEnd method)
(pandas.tseries.offsets.BYearBegin method)
(pandas.tseries.offsets.BYearEnd method)
(pandas.tseries.offsets.CustomBusinessDay method)
(pandas.tseries.offsets.CustomBusinessHour method)
(pandas.tseries.offsets.CustomBusinessMonthBegin method)
(pandas.tseries.offsets.CustomBusinessMonthEnd method)
(pandas.tseries.offsets.DateOffset method)
(pandas.tseries.offsets.Day method)
(pandas.tseries.offsets.Easter method)
(pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
(pandas.tseries.offsets.Hour method)
(pandas.tseries.offsets.LastWeekOfMonth method)
(pandas.tseries.offsets.Micro method)
(pandas.tseries.offsets.Milli method)
(pandas.tseries.offsets.Minute method)
(pandas.tseries.offsets.MonthBegin method)
(pandas.tseries.offsets.MonthEnd method)
(pandas.tseries.offsets.Nano method)
(pandas.tseries.offsets.QuarterBegin method)
(pandas.tseries.offsets.QuarterEnd method)
(pandas.tseries.offsets.Second method)
(pandas.tseries.offsets.SemiMonthBegin method)
(pandas.tseries.offsets.SemiMonthEnd method)
(pandas.tseries.offsets.Tick method)
(pandas.tseries.offsets.Week method)
(pandas.tseries.offsets.WeekOfMonth method)
(pandas.tseries.offsets.YearBegin method)
(pandas.tseries.offsets.YearEnd method)
apply_index() (pandas.io.formats.style.Styler method)
(pandas.tseries.offsets.BQuarterBegin method)
(pandas.tseries.offsets.BQuarterEnd method)
(pandas.tseries.offsets.BusinessDay method)
(pandas.tseries.offsets.BusinessHour method)
(pandas.tseries.offsets.BusinessMonthBegin method)
(pandas.tseries.offsets.BusinessMonthEnd method)
(pandas.tseries.offsets.BYearBegin method)
(pandas.tseries.offsets.BYearEnd method)
(pandas.tseries.offsets.CustomBusinessDay method)
(pandas.tseries.offsets.CustomBusinessHour method)
(pandas.tseries.offsets.CustomBusinessMonthBegin method)
(pandas.tseries.offsets.CustomBusinessMonthEnd method)
(pandas.tseries.offsets.DateOffset method)
(pandas.tseries.offsets.Day method)
(pandas.tseries.offsets.Easter method)
(pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
(pandas.tseries.offsets.Hour method)
(pandas.tseries.offsets.LastWeekOfMonth method)
(pandas.tseries.offsets.Micro method)
(pandas.tseries.offsets.Milli method)
(pandas.tseries.offsets.Minute method)
(pandas.tseries.offsets.MonthBegin method)
(pandas.tseries.offsets.MonthEnd method)
(pandas.tseries.offsets.Nano method)
(pandas.tseries.offsets.QuarterBegin method)
(pandas.tseries.offsets.QuarterEnd method)
(pandas.tseries.offsets.Second method)
(pandas.tseries.offsets.SemiMonthBegin method)
(pandas.tseries.offsets.SemiMonthEnd method)
(pandas.tseries.offsets.Tick method)
(pandas.tseries.offsets.Week method)
(pandas.tseries.offsets.WeekOfMonth method)
(pandas.tseries.offsets.YearBegin method)
(pandas.tseries.offsets.YearEnd method)
applymap() (pandas.DataFrame method)
(pandas.io.formats.style.Styler method)
applymap_index() (pandas.io.formats.style.Styler method)
area() (pandas.DataFrame.plot method)
(pandas.Series.plot method)
argmax() (pandas.Index method)
(pandas.Series method)
argmin() (pandas.Index method)
(pandas.Series method)
argsort() (pandas.api.extensions.ExtensionArray method)
(pandas.Index method)
(pandas.Series method)
array (pandas.Index attribute)
(pandas.Series property)
array() (in module pandas)
ArrowDtype (class in pandas)
ArrowExtensionArray (class in pandas.arrays)
ArrowStringArray (class in pandas.arrays)
as_ordered() (pandas.CategoricalIndex method)
(pandas.Series.cat method)
as_unordered() (pandas.CategoricalIndex method)
(pandas.Series.cat method)
asfreq() (pandas.core.resample.Resampler method)
(pandas.DataFrame method)
(pandas.Period method)
(pandas.PeriodIndex method)
(pandas.Series method)
asi8 (pandas.Index property)
asm8 (pandas.Timedelta attribute)
(pandas.Timestamp attribute)
asof() (pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
asof_locs() (pandas.Index method)
assert_extension_array_equal() (in module pandas.testing)
assert_frame_equal() (in module pandas.testing)
assert_index_equal() (in module pandas.testing)
assert_series_equal() (in module pandas.testing)
assign() (pandas.DataFrame method)
astimezone() (pandas.Timestamp method)
astype() (pandas.api.extensions.ExtensionArray method)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
at (pandas.DataFrame property)
(pandas.Series property)
at_time() (pandas.DataFrame method)
(pandas.Series method)
AttributeConflictWarning
attrs (pandas.DataFrame property)
(pandas.Series property)
autocorr() (pandas.Series method)
autocorrelation_plot() (in module pandas.plotting)
axes (pandas.DataFrame property)
(pandas.Series property)
B
backfill() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.DataFrame method)
(pandas.Series method)
background_gradient() (pandas.io.formats.style.Styler method)
bar() (pandas.DataFrame.plot method)
(pandas.io.formats.style.Styler method)
(pandas.Series.plot method)
barh() (pandas.DataFrame.plot method)
(pandas.Series.plot method)
base (pandas.tseries.offsets.BQuarterBegin attribute)
(pandas.tseries.offsets.BQuarterEnd attribute)
(pandas.tseries.offsets.BusinessDay attribute)
(pandas.tseries.offsets.BusinessHour attribute)
(pandas.tseries.offsets.BusinessMonthBegin attribute)
(pandas.tseries.offsets.BusinessMonthEnd attribute)
(pandas.tseries.offsets.BYearBegin attribute)
(pandas.tseries.offsets.BYearEnd attribute)
(pandas.tseries.offsets.CustomBusinessDay attribute)
(pandas.tseries.offsets.CustomBusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessMonthBegin attribute)
(pandas.tseries.offsets.CustomBusinessMonthEnd attribute)
(pandas.tseries.offsets.DateOffset attribute)
(pandas.tseries.offsets.Day attribute)
(pandas.tseries.offsets.Easter attribute)
(pandas.tseries.offsets.FY5253 attribute)
(pandas.tseries.offsets.FY5253Quarter attribute)
(pandas.tseries.offsets.Hour attribute)
(pandas.tseries.offsets.LastWeekOfMonth attribute)
(pandas.tseries.offsets.Micro attribute)
(pandas.tseries.offsets.Milli attribute)
(pandas.tseries.offsets.Minute attribute)
(pandas.tseries.offsets.MonthBegin attribute)
(pandas.tseries.offsets.MonthEnd attribute)
(pandas.tseries.offsets.Nano attribute)
(pandas.tseries.offsets.QuarterBegin attribute)
(pandas.tseries.offsets.QuarterEnd attribute)
(pandas.tseries.offsets.Second attribute)
(pandas.tseries.offsets.SemiMonthBegin attribute)
(pandas.tseries.offsets.SemiMonthEnd attribute)
(pandas.tseries.offsets.Tick attribute)
(pandas.tseries.offsets.Week attribute)
(pandas.tseries.offsets.WeekOfMonth attribute)
(pandas.tseries.offsets.YearBegin attribute)
(pandas.tseries.offsets.YearEnd attribute)
BaseIndexer (class in pandas.api.indexers)
bdate_range() (in module pandas)
BDay (in module pandas.tseries.offsets)
between() (pandas.Series method)
between_time() (pandas.DataFrame method)
(pandas.Series method)
bfill() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.DataFrame method)
(pandas.Series method)
BMonthBegin (in module pandas.tseries.offsets)
BMonthEnd (in module pandas.tseries.offsets)
book (pandas.ExcelWriter property)
bool() (pandas.DataFrame method)
(pandas.Series method)
BooleanArray (class in pandas.arrays)
BooleanDtype (class in pandas)
bootstrap_plot() (in module pandas.plotting)
box() (pandas.DataFrame.plot method)
(pandas.Series.plot method)
boxplot() (in module pandas.plotting)
(pandas.core.groupby.DataFrameGroupBy method)
(pandas.DataFrame method)
BQuarterBegin (class in pandas.tseries.offsets)
BQuarterEnd (class in pandas.tseries.offsets)
build_table_schema() (in module pandas.io.json)
BusinessDay (class in pandas.tseries.offsets)
BusinessHour (class in pandas.tseries.offsets)
BusinessMonthBegin (class in pandas.tseries.offsets)
BusinessMonthEnd (class in pandas.tseries.offsets)
BYearBegin (class in pandas.tseries.offsets)
BYearEnd (class in pandas.tseries.offsets)
C
calendar (pandas.tseries.offsets.BusinessDay attribute)
(pandas.tseries.offsets.BusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessDay attribute)
(pandas.tseries.offsets.CustomBusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessMonthBegin attribute)
(pandas.tseries.offsets.CustomBusinessMonthEnd attribute)
capitalize() (pandas.Series.str method)
casefold() (pandas.Series.str method)
cat() (pandas.Series method)
(pandas.Series.str method)
Categorical (class in pandas)
CategoricalConversionWarning
CategoricalDtype (class in pandas)
CategoricalIndex (class in pandas)
categories (pandas.Categorical property)
(pandas.CategoricalDtype property)
(pandas.CategoricalIndex property)
(pandas.Series.cat attribute)
cbday_roll (pandas.tseries.offsets.CustomBusinessMonthBegin attribute)
(pandas.tseries.offsets.CustomBusinessMonthEnd attribute)
CBMonthBegin (in module pandas.tseries.offsets)
CBMonthEnd (in module pandas.tseries.offsets)
CDay (in module pandas.tseries.offsets)
ceil() (pandas.DatetimeIndex method)
(pandas.Series.dt method)
(pandas.Timedelta method)
(pandas.TimedeltaIndex method)
(pandas.Timestamp method)
center() (pandas.Series.str method)
check_array_indexer() (in module pandas.api.indexers)
check_extension() (pandas.ExcelWriter class method)
clear() (pandas.io.formats.style.Styler method)
clip() (pandas.DataFrame method)
(pandas.Series method)
close() (pandas.ExcelWriter method)
closed (pandas.arrays.IntervalArray property)
(pandas.Interval attribute)
(pandas.IntervalIndex attribute)
closed_left (pandas.Interval attribute)
closed_right (pandas.Interval attribute)
ClosedFileError
codes (pandas.Categorical property)
(pandas.CategoricalIndex property)
(pandas.MultiIndex property)
(pandas.Series.cat attribute)
columns (pandas.DataFrame attribute)
combine() (pandas.DataFrame method)
(pandas.Series method)
(pandas.Timestamp class method)
combine_first() (pandas.DataFrame method)
(pandas.Series method)
compare() (pandas.DataFrame method)
(pandas.Series method)
components (pandas.Series.dt attribute)
(pandas.Timedelta attribute)
(pandas.TimedeltaIndex property)
concat() (in module pandas)
(pandas.io.formats.style.Styler method)
construct_array_type() (pandas.api.extensions.ExtensionDtype class method)
construct_from_string() (pandas.api.extensions.ExtensionDtype class method)
contains() (pandas.arrays.IntervalArray method)
(pandas.IntervalIndex method)
(pandas.Series.str method)
convert_dtypes() (pandas.DataFrame method)
(pandas.Series method)
copy() (pandas.api.extensions.ExtensionArray method)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
(pandas.tseries.offsets.BQuarterBegin method)
(pandas.tseries.offsets.BQuarterEnd method)
(pandas.tseries.offsets.BusinessDay method)
(pandas.tseries.offsets.BusinessHour method)
(pandas.tseries.offsets.BusinessMonthBegin method)
(pandas.tseries.offsets.BusinessMonthEnd method)
(pandas.tseries.offsets.BYearBegin method)
(pandas.tseries.offsets.BYearEnd method)
(pandas.tseries.offsets.CustomBusinessDay method)
(pandas.tseries.offsets.CustomBusinessHour method)
(pandas.tseries.offsets.CustomBusinessMonthBegin method)
(pandas.tseries.offsets.CustomBusinessMonthEnd method)
(pandas.tseries.offsets.DateOffset method)
(pandas.tseries.offsets.Day method)
(pandas.tseries.offsets.Easter method)
(pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
(pandas.tseries.offsets.Hour method)
(pandas.tseries.offsets.LastWeekOfMonth method)
(pandas.tseries.offsets.Micro method)
(pandas.tseries.offsets.Milli method)
(pandas.tseries.offsets.Minute method)
(pandas.tseries.offsets.MonthBegin method)
(pandas.tseries.offsets.MonthEnd method)
(pandas.tseries.offsets.Nano method)
(pandas.tseries.offsets.QuarterBegin method)
(pandas.tseries.offsets.QuarterEnd method)
(pandas.tseries.offsets.Second method)
(pandas.tseries.offsets.SemiMonthBegin method)
(pandas.tseries.offsets.SemiMonthEnd method)
(pandas.tseries.offsets.Tick method)
(pandas.tseries.offsets.Week method)
(pandas.tseries.offsets.WeekOfMonth method)
(pandas.tseries.offsets.YearBegin method)
(pandas.tseries.offsets.YearEnd method)
corr (pandas.core.groupby.DataFrameGroupBy property)
corr() (pandas.core.window.ewm.ExponentialMovingWindow method)
(pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.DataFrame method)
(pandas.Series method)
corrwith (pandas.core.groupby.DataFrameGroupBy property)
corrwith() (pandas.DataFrame method)
count() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.DataFrame method)
(pandas.Series method)
(pandas.Series.str method)
cov (pandas.core.groupby.DataFrameGroupBy property)
cov() (pandas.core.window.ewm.ExponentialMovingWindow method)
(pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.DataFrame method)
(pandas.Series method)
crosstab() (in module pandas)
CSSWarning
ctime() (pandas.Timestamp method)
cumcount() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.GroupBy method)
cummax() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.GroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
cummin() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.GroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
cumprod() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.GroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
cumsum() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.GroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
cur_sheet (pandas.ExcelWriter property)
CustomBusinessDay (class in pandas.tseries.offsets)
CustomBusinessHour (class in pandas.tseries.offsets)
CustomBusinessMonthBegin (class in pandas.tseries.offsets)
CustomBusinessMonthEnd (class in pandas.tseries.offsets)
cut() (in module pandas)
D
data_label (pandas.io.stata.StataReader property)
DatabaseError
DataError
DataFrame (class in pandas)
date (pandas.DatetimeIndex property)
(pandas.Series.dt attribute)
date() (pandas.Timestamp method)
date_format (pandas.ExcelWriter property)
date_range() (in module pandas)
DateOffset (class in pandas.tseries.offsets)
datetime_format (pandas.ExcelWriter property)
DatetimeArray (class in pandas.arrays)
DatetimeIndex (class in pandas)
DatetimeTZDtype (class in pandas)
Day (class in pandas.tseries.offsets)
day (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
day_name() (pandas.DatetimeIndex method)
(pandas.Series.dt method)
(pandas.Timestamp method)
day_of_month (pandas.tseries.offsets.SemiMonthBegin attribute)
(pandas.tseries.offsets.SemiMonthEnd attribute)
day_of_week (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
day_of_year (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
dayofweek (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
dayofyear (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
days (pandas.Series.dt attribute)
(pandas.Timedelta attribute)
(pandas.TimedeltaIndex property)
days_in_month (pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
daysinmonth (pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
decode() (pandas.Series.str method)
delete() (pandas.Index method)
delta (pandas.Timedelta attribute)
(pandas.tseries.offsets.Day attribute)
(pandas.tseries.offsets.Hour attribute)
(pandas.tseries.offsets.Micro attribute)
(pandas.tseries.offsets.Milli attribute)
(pandas.tseries.offsets.Minute attribute)
(pandas.tseries.offsets.Nano attribute)
(pandas.tseries.offsets.Second attribute)
(pandas.tseries.offsets.Tick attribute)
density (pandas.DataFrame.sparse attribute)
(pandas.Series.sparse attribute)
density() (pandas.DataFrame.plot method)
(pandas.Series.plot method)
deregister_matplotlib_converters() (in module pandas.plotting)
describe() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
describe_option (in module pandas)
diff() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
difference() (pandas.Index method)
div() (pandas.DataFrame method)
(pandas.Series method)
divide() (pandas.DataFrame method)
(pandas.Series method)
divmod() (pandas.Series method)
dot() (pandas.DataFrame method)
(pandas.Series method)
drop() (pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
drop_duplicates() (pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
droplevel() (pandas.DataFrame method)
(pandas.Index method)
(pandas.MultiIndex method)
(pandas.Series method)
dropna() (pandas.api.extensions.ExtensionArray method)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
dst() (pandas.Timestamp method)
dt() (pandas.Series method)
dtype (pandas.api.extensions.ExtensionArray property)
(pandas.Categorical property)
(pandas.Index attribute)
(pandas.Series property)
dtypes (pandas.DataFrame property)
(pandas.MultiIndex attribute)
(pandas.Series property)
DtypeWarning
duplicated() (pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
DuplicateLabelError
E
Easter (class in pandas.tseries.offsets)
empty (pandas.DataFrame property)
(pandas.Index property)
(pandas.Series property)
empty() (pandas.api.extensions.ExtensionDtype method)
EmptyDataError
encode() (pandas.Series.str method)
end (pandas.tseries.offsets.BusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessHour attribute)
end_time (pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
endswith() (pandas.Series.str method)
engine (pandas.ExcelWriter property)
env (pandas.io.formats.style.Styler attribute)
eq() (pandas.DataFrame method)
(pandas.Series method)
equals() (pandas.api.extensions.ExtensionArray method)
(pandas.CategoricalIndex method)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
eval() (in module pandas)
(pandas.DataFrame method)
ewm() (pandas.DataFrame method)
(pandas.Series method)
ExcelWriter (class in pandas)
expanding() (pandas.DataFrame method)
(pandas.Series method)
explode() (pandas.DataFrame method)
(pandas.Series method)
export() (pandas.io.formats.style.Styler method)
ExtensionArray (class in pandas.api.extensions)
ExtensionDtype (class in pandas.api.extensions)
extract() (pandas.Series.str method)
extractall() (pandas.Series.str method)
F
factorize() (in module pandas)
(pandas.api.extensions.ExtensionArray method)
(pandas.Index method)
(pandas.Series method)
ffill() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.DataFrame method)
(pandas.Series method)
fill_value (pandas.Series.sparse attribute)
fillna (pandas.core.groupby.DataFrameGroupBy property)
fillna() (pandas.api.extensions.ExtensionArray method)
(pandas.core.resample.Resampler method)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
filter() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
find() (pandas.Series.str method)
findall() (pandas.Series.str method)
first() (pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.DataFrame method)
(pandas.Series method)
first_valid_index() (pandas.DataFrame method)
(pandas.Series method)
FixedForwardWindowIndexer (class in pandas.api.indexers)
Flags (class in pandas)
flags (pandas.DataFrame property)
(pandas.Series property)
Float64Index (class in pandas)
floor() (pandas.DatetimeIndex method)
(pandas.Series.dt method)
(pandas.Timedelta method)
(pandas.TimedeltaIndex method)
(pandas.Timestamp method)
floordiv() (pandas.DataFrame method)
(pandas.Series method)
fold (pandas.Timestamp attribute)
format() (pandas.Index method)
(pandas.io.formats.style.Styler method)
format_index() (pandas.io.formats.style.Styler method)
freq (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodDtype property)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timedelta attribute)
(pandas.Timestamp attribute)
freqstr (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Timestamp property)
(pandas.tseries.offsets.BQuarterBegin attribute)
(pandas.tseries.offsets.BQuarterEnd attribute)
(pandas.tseries.offsets.BusinessDay attribute)
(pandas.tseries.offsets.BusinessHour attribute)
(pandas.tseries.offsets.BusinessMonthBegin attribute)
(pandas.tseries.offsets.BusinessMonthEnd attribute)
(pandas.tseries.offsets.BYearBegin attribute)
(pandas.tseries.offsets.BYearEnd attribute)
(pandas.tseries.offsets.CustomBusinessDay attribute)
(pandas.tseries.offsets.CustomBusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessMonthBegin attribute)
(pandas.tseries.offsets.CustomBusinessMonthEnd attribute)
(pandas.tseries.offsets.DateOffset attribute)
(pandas.tseries.offsets.Day attribute)
(pandas.tseries.offsets.Easter attribute)
(pandas.tseries.offsets.FY5253 attribute)
(pandas.tseries.offsets.FY5253Quarter attribute)
(pandas.tseries.offsets.Hour attribute)
(pandas.tseries.offsets.LastWeekOfMonth attribute)
(pandas.tseries.offsets.Micro attribute)
(pandas.tseries.offsets.Milli attribute)
(pandas.tseries.offsets.Minute attribute)
(pandas.tseries.offsets.MonthBegin attribute)
(pandas.tseries.offsets.MonthEnd attribute)
(pandas.tseries.offsets.Nano attribute)
(pandas.tseries.offsets.QuarterBegin attribute)
(pandas.tseries.offsets.QuarterEnd attribute)
(pandas.tseries.offsets.Second attribute)
(pandas.tseries.offsets.SemiMonthBegin attribute)
(pandas.tseries.offsets.SemiMonthEnd attribute)
(pandas.tseries.offsets.Tick attribute)
(pandas.tseries.offsets.Week attribute)
(pandas.tseries.offsets.WeekOfMonth attribute)
(pandas.tseries.offsets.YearBegin attribute)
(pandas.tseries.offsets.YearEnd attribute)
from_arrays() (pandas.arrays.IntervalArray class method)
(pandas.IntervalIndex class method)
(pandas.MultiIndex class method)
from_breaks() (pandas.arrays.IntervalArray class method)
(pandas.IntervalIndex class method)
from_codes() (pandas.Categorical class method)
from_coo() (pandas.Series.sparse class method)
from_custom_template() (pandas.io.formats.style.Styler class method)
from_dataframe() (in module pandas.api.interchange)
from_dict() (pandas.DataFrame class method)
from_dummies() (in module pandas)
from_frame() (pandas.MultiIndex class method)
from_product() (pandas.MultiIndex class method)
from_range() (pandas.RangeIndex class method)
from_records() (pandas.DataFrame class method)
from_spmatrix() (pandas.DataFrame.sparse class method)
from_tuples() (pandas.arrays.IntervalArray class method)
(pandas.IntervalIndex class method)
(pandas.MultiIndex class method)
fromisocalendar() (pandas.Timestamp method)
fromisoformat() (pandas.Timestamp method)
fromordinal() (pandas.Timestamp class method)
fromtimestamp() (pandas.Timestamp class method)
fullmatch() (pandas.Series.str method)
FY5253 (class in pandas.tseries.offsets)
FY5253Quarter (class in pandas.tseries.offsets)
G
ge() (pandas.DataFrame method)
(pandas.Series method)
get() (pandas.DataFrame method)
(pandas.HDFStore method)
(pandas.Series method)
(pandas.Series.str method)
get_dummies() (in module pandas)
(pandas.Series.str method)
get_group() (pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
get_indexer() (pandas.Index method)
(pandas.IntervalIndex method)
(pandas.MultiIndex method)
get_indexer_for() (pandas.Index method)
get_indexer_non_unique() (pandas.Index method)
get_level_values() (pandas.Index method)
(pandas.MultiIndex method)
get_loc() (pandas.Index method)
(pandas.IntervalIndex method)
(pandas.MultiIndex method)
get_loc_level() (pandas.MultiIndex method)
get_locs() (pandas.MultiIndex method)
get_option (in module pandas)
get_rule_code_suffix() (pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
get_slice_bound() (pandas.Index method)
get_value() (pandas.Index method)
get_weeks() (pandas.tseries.offsets.FY5253Quarter method)
get_window_bounds() (pandas.api.indexers.BaseIndexer method)
(pandas.api.indexers.FixedForwardWindowIndexer method)
(pandas.api.indexers.VariableOffsetWindowIndexer method)
get_year_end() (pandas.tseries.offsets.FY5253 method)
groupby() (pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
Grouper (class in pandas)
groups (pandas.core.groupby.GroupBy property)
(pandas.core.resample.Resampler property)
groups() (pandas.HDFStore method)
gt() (pandas.DataFrame method)
(pandas.Series method)
H
handles (pandas.ExcelWriter property)
has_duplicates (pandas.Index property)
hash_array() (in module pandas.util)
hash_pandas_object() (in module pandas.util)
hasnans (pandas.Index attribute)
(pandas.Series property)
head() (pandas.core.groupby.GroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
hexbin() (pandas.DataFrame.plot method)
hide() (pandas.io.formats.style.Styler method)
hide_columns() (pandas.io.formats.style.Styler method)
hide_index() (pandas.io.formats.style.Styler method)
highlight_between() (pandas.io.formats.style.Styler method)
highlight_max() (pandas.io.formats.style.Styler method)
highlight_min() (pandas.io.formats.style.Styler method)
highlight_null() (pandas.io.formats.style.Styler method)
highlight_quantile() (pandas.io.formats.style.Styler method)
hist (pandas.core.groupby.DataFrameGroupBy property)
(pandas.core.groupby.SeriesGroupBy property)
hist() (pandas.DataFrame method)
(pandas.DataFrame.plot method)
(pandas.Series method)
(pandas.Series.plot method)
holds_integer() (pandas.Index method)
holidays (pandas.tseries.offsets.BusinessDay attribute)
(pandas.tseries.offsets.BusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessDay attribute)
(pandas.tseries.offsets.CustomBusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessMonthBegin attribute)
(pandas.tseries.offsets.CustomBusinessMonthEnd attribute)
Hour (class in pandas.tseries.offsets)
hour (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
I
iat (pandas.DataFrame property)
(pandas.Series property)
identical() (pandas.Index method)
idxmax() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
idxmin() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
if_sheet_exists (pandas.ExcelWriter property)
iloc (pandas.DataFrame property)
(pandas.Series property)
IncompatibilityWarning
Index (class in pandas)
index (pandas.DataFrame attribute)
(pandas.Series attribute)
index() (pandas.Series.str method)
indexer_at_time() (pandas.DatetimeIndex method)
indexer_between_time() (pandas.DatetimeIndex method)
IndexingError
IndexSlice (in module pandas)
indices (pandas.core.groupby.GroupBy property)
(pandas.core.resample.Resampler property)
infer_dtype() (in module pandas.api.types)
infer_freq() (in module pandas)
infer_objects() (pandas.DataFrame method)
(pandas.Series method)
inferred_freq (pandas.DatetimeIndex attribute)
(pandas.TimedeltaIndex attribute)
inferred_type (pandas.Index attribute)
info() (pandas.DataFrame method)
(pandas.HDFStore method)
(pandas.Series method)
insert() (pandas.api.extensions.ExtensionArray method)
(pandas.DataFrame method)
(pandas.Index method)
Int16Dtype (class in pandas)
Int32Dtype (class in pandas)
Int64Dtype (class in pandas)
Int64Index (class in pandas)
Int8Dtype (class in pandas)
IntCastingNaNError
IntegerArray (class in pandas.arrays)
interpolate() (pandas.core.resample.Resampler method)
(pandas.DataFrame method)
(pandas.Series method)
intersection() (pandas.Index method)
Interval (class in pandas)
interval_range() (in module pandas)
IntervalArray (class in pandas.arrays)
IntervalDtype (class in pandas)
IntervalIndex (class in pandas)
InvalidColumnName
InvalidIndexError
is_() (pandas.Index method)
is_all_dates (pandas.Index attribute)
is_anchored() (pandas.tseries.offsets.BQuarterBegin method)
(pandas.tseries.offsets.BQuarterEnd method)
(pandas.tseries.offsets.BusinessDay method)
(pandas.tseries.offsets.BusinessHour method)
(pandas.tseries.offsets.BusinessMonthBegin method)
(pandas.tseries.offsets.BusinessMonthEnd method)
(pandas.tseries.offsets.BYearBegin method)
(pandas.tseries.offsets.BYearEnd method)
(pandas.tseries.offsets.CustomBusinessDay method)
(pandas.tseries.offsets.CustomBusinessHour method)
(pandas.tseries.offsets.CustomBusinessMonthBegin method)
(pandas.tseries.offsets.CustomBusinessMonthEnd method)
(pandas.tseries.offsets.DateOffset method)
(pandas.tseries.offsets.Day method)
(pandas.tseries.offsets.Easter method)
(pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
(pandas.tseries.offsets.Hour method)
(pandas.tseries.offsets.LastWeekOfMonth method)
(pandas.tseries.offsets.Micro method)
(pandas.tseries.offsets.Milli method)
(pandas.tseries.offsets.Minute method)
(pandas.tseries.offsets.MonthBegin method)
(pandas.tseries.offsets.MonthEnd method)
(pandas.tseries.offsets.Nano method)
(pandas.tseries.offsets.QuarterBegin method)
(pandas.tseries.offsets.QuarterEnd method)
(pandas.tseries.offsets.Second method)
(pandas.tseries.offsets.SemiMonthBegin method)
(pandas.tseries.offsets.SemiMonthEnd method)
(pandas.tseries.offsets.Tick method)
(pandas.tseries.offsets.Week method)
(pandas.tseries.offsets.WeekOfMonth method)
(pandas.tseries.offsets.YearBegin method)
(pandas.tseries.offsets.YearEnd method)
is_bool() (in module pandas.api.types)
is_bool_dtype() (in module pandas.api.types)
is_boolean() (pandas.Index method)
is_categorical() (in module pandas.api.types)
(pandas.Index method)
is_categorical_dtype() (in module pandas.api.types)
is_complex() (in module pandas.api.types)
is_complex_dtype() (in module pandas.api.types)
is_datetime64_any_dtype() (in module pandas.api.types)
is_datetime64_dtype() (in module pandas.api.types)
is_datetime64_ns_dtype() (in module pandas.api.types)
is_datetime64tz_dtype() (in module pandas.api.types)
is_dict_like() (in module pandas.api.types)
is_dtype() (pandas.api.extensions.ExtensionDtype class method)
is_empty (pandas.arrays.IntervalArray attribute)
(pandas.Interval attribute)
(pandas.IntervalIndex property)
is_extension_array_dtype() (in module pandas.api.types)
is_extension_type() (in module pandas.api.types)
is_file_like() (in module pandas.api.types)
is_float() (in module pandas.api.types)
is_float_dtype() (in module pandas.api.types)
is_floating() (pandas.Index method)
is_hashable() (in module pandas.api.types)
is_int64_dtype() (in module pandas.api.types)
is_integer() (in module pandas.api.types)
(pandas.Index method)
is_integer_dtype() (in module pandas.api.types)
is_interval() (in module pandas.api.types)
(pandas.Index method)
is_interval_dtype() (in module pandas.api.types)
is_iterator() (in module pandas.api.types)
is_leap_year (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
is_list_like() (in module pandas.api.types)
is_mixed() (pandas.Index method)
is_monotonic (pandas.Index property)
(pandas.Series property)
is_monotonic_decreasing (pandas.core.groupby.SeriesGroupBy property)
(pandas.Index property)
(pandas.Series property)
is_monotonic_increasing (pandas.core.groupby.SeriesGroupBy property)
(pandas.Index property)
(pandas.Series property)
is_month_end (pandas.DatetimeIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
is_month_end() (pandas.tseries.offsets.BQuarterBegin method)
(pandas.tseries.offsets.BQuarterEnd method)
(pandas.tseries.offsets.BusinessDay method)
(pandas.tseries.offsets.BusinessHour method)
(pandas.tseries.offsets.BusinessMonthBegin method)
(pandas.tseries.offsets.BusinessMonthEnd method)
(pandas.tseries.offsets.BYearBegin method)
(pandas.tseries.offsets.BYearEnd method)
(pandas.tseries.offsets.CustomBusinessDay method)
(pandas.tseries.offsets.CustomBusinessHour method)
(pandas.tseries.offsets.CustomBusinessMonthBegin method)
(pandas.tseries.offsets.CustomBusinessMonthEnd method)
(pandas.tseries.offsets.DateOffset method)
(pandas.tseries.offsets.Day method)
(pandas.tseries.offsets.Easter method)
(pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
(pandas.tseries.offsets.Hour method)
(pandas.tseries.offsets.LastWeekOfMonth method)
(pandas.tseries.offsets.Micro method)
(pandas.tseries.offsets.Milli method)
(pandas.tseries.offsets.Minute method)
(pandas.tseries.offsets.MonthBegin method)
(pandas.tseries.offsets.MonthEnd method)
(pandas.tseries.offsets.Nano method)
(pandas.tseries.offsets.QuarterBegin method)
(pandas.tseries.offsets.QuarterEnd method)
(pandas.tseries.offsets.Second method)
(pandas.tseries.offsets.SemiMonthBegin method)
(pandas.tseries.offsets.SemiMonthEnd method)
(pandas.tseries.offsets.Tick method)
(pandas.tseries.offsets.Week method)
(pandas.tseries.offsets.WeekOfMonth method)
(pandas.tseries.offsets.YearBegin method)
(pandas.tseries.offsets.YearEnd method)
is_month_start (pandas.DatetimeIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
is_month_start() (pandas.tseries.offsets.BQuarterBegin method)
(pandas.tseries.offsets.BQuarterEnd method)
(pandas.tseries.offsets.BusinessDay method)
(pandas.tseries.offsets.BusinessHour method)
(pandas.tseries.offsets.BusinessMonthBegin method)
(pandas.tseries.offsets.BusinessMonthEnd method)
(pandas.tseries.offsets.BYearBegin method)
(pandas.tseries.offsets.BYearEnd method)
(pandas.tseries.offsets.CustomBusinessDay method)
(pandas.tseries.offsets.CustomBusinessHour method)
(pandas.tseries.offsets.CustomBusinessMonthBegin method)
(pandas.tseries.offsets.CustomBusinessMonthEnd method)
(pandas.tseries.offsets.DateOffset method)
(pandas.tseries.offsets.Day method)
(pandas.tseries.offsets.Easter method)
(pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
(pandas.tseries.offsets.Hour method)
(pandas.tseries.offsets.LastWeekOfMonth method)
(pandas.tseries.offsets.Micro method)
(pandas.tseries.offsets.Milli method)
(pandas.tseries.offsets.Minute method)
(pandas.tseries.offsets.MonthBegin method)
(pandas.tseries.offsets.MonthEnd method)
(pandas.tseries.offsets.Nano method)
(pandas.tseries.offsets.QuarterBegin method)
(pandas.tseries.offsets.QuarterEnd method)
(pandas.tseries.offsets.Second method)
(pandas.tseries.offsets.SemiMonthBegin method)
(pandas.tseries.offsets.SemiMonthEnd method)
(pandas.tseries.offsets.Tick method)
(pandas.tseries.offsets.Week method)
(pandas.tseries.offsets.WeekOfMonth method)
(pandas.tseries.offsets.YearBegin method)
(pandas.tseries.offsets.YearEnd method)
is_named_tuple() (in module pandas.api.types)
is_non_overlapping_monotonic (pandas.arrays.IntervalArray property)
(pandas.IntervalIndex attribute)
is_number() (in module pandas.api.types)
is_numeric() (pandas.Index method)
is_numeric_dtype() (in module pandas.api.types)
is_object() (pandas.Index method)
is_object_dtype() (in module pandas.api.types)
is_on_offset() (pandas.tseries.offsets.BQuarterBegin method)
(pandas.tseries.offsets.BQuarterEnd method)
(pandas.tseries.offsets.BusinessDay method)
(pandas.tseries.offsets.BusinessHour method)
(pandas.tseries.offsets.BusinessMonthBegin method)
(pandas.tseries.offsets.BusinessMonthEnd method)
(pandas.tseries.offsets.BYearBegin method)
(pandas.tseries.offsets.BYearEnd method)
(pandas.tseries.offsets.CustomBusinessDay method)
(pandas.tseries.offsets.CustomBusinessHour method)
(pandas.tseries.offsets.CustomBusinessMonthBegin method)
(pandas.tseries.offsets.CustomBusinessMonthEnd method)
(pandas.tseries.offsets.DateOffset method)
(pandas.tseries.offsets.Day method)
(pandas.tseries.offsets.Easter method)
(pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
(pandas.tseries.offsets.Hour method)
(pandas.tseries.offsets.LastWeekOfMonth method)
(pandas.tseries.offsets.Micro method)
(pandas.tseries.offsets.Milli method)
(pandas.tseries.offsets.Minute method)
(pandas.tseries.offsets.MonthBegin method)
(pandas.tseries.offsets.MonthEnd method)
(pandas.tseries.offsets.Nano method)
(pandas.tseries.offsets.QuarterBegin method)
(pandas.tseries.offsets.QuarterEnd method)
(pandas.tseries.offsets.Second method)
(pandas.tseries.offsets.SemiMonthBegin method)
(pandas.tseries.offsets.SemiMonthEnd method)
(pandas.tseries.offsets.Tick method)
(pandas.tseries.offsets.Week method)
(pandas.tseries.offsets.WeekOfMonth method)
(pandas.tseries.offsets.YearBegin method)
(pandas.tseries.offsets.YearEnd method)
is_overlapping (pandas.IntervalIndex property)
is_period_dtype() (in module pandas.api.types)
is_populated (pandas.Timedelta attribute)
is_quarter_end (pandas.DatetimeIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
is_quarter_end() (pandas.tseries.offsets.BQuarterBegin method)
(pandas.tseries.offsets.BQuarterEnd method)
(pandas.tseries.offsets.BusinessDay method)
(pandas.tseries.offsets.BusinessHour method)
(pandas.tseries.offsets.BusinessMonthBegin method)
(pandas.tseries.offsets.BusinessMonthEnd method)
(pandas.tseries.offsets.BYearBegin method)
(pandas.tseries.offsets.BYearEnd method)
(pandas.tseries.offsets.CustomBusinessDay method)
(pandas.tseries.offsets.CustomBusinessHour method)
(pandas.tseries.offsets.CustomBusinessMonthBegin method)
(pandas.tseries.offsets.CustomBusinessMonthEnd method)
(pandas.tseries.offsets.DateOffset method)
(pandas.tseries.offsets.Day method)
(pandas.tseries.offsets.Easter method)
(pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
(pandas.tseries.offsets.Hour method)
(pandas.tseries.offsets.LastWeekOfMonth method)
(pandas.tseries.offsets.Micro method)
(pandas.tseries.offsets.Milli method)
(pandas.tseries.offsets.Minute method)
(pandas.tseries.offsets.MonthBegin method)
(pandas.tseries.offsets.MonthEnd method)
(pandas.tseries.offsets.Nano method)
(pandas.tseries.offsets.QuarterBegin method)
(pandas.tseries.offsets.QuarterEnd method)
(pandas.tseries.offsets.Second method)
(pandas.tseries.offsets.SemiMonthBegin method)
(pandas.tseries.offsets.SemiMonthEnd method)
(pandas.tseries.offsets.Tick method)
(pandas.tseries.offsets.Week method)
(pandas.tseries.offsets.WeekOfMonth method)
(pandas.tseries.offsets.YearBegin method)
(pandas.tseries.offsets.YearEnd method)
is_quarter_start (pandas.DatetimeIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
is_quarter_start() (pandas.tseries.offsets.BQuarterBegin method)
(pandas.tseries.offsets.BQuarterEnd method)
(pandas.tseries.offsets.BusinessDay method)
(pandas.tseries.offsets.BusinessHour method)
(pandas.tseries.offsets.BusinessMonthBegin method)
(pandas.tseries.offsets.BusinessMonthEnd method)
(pandas.tseries.offsets.BYearBegin method)
(pandas.tseries.offsets.BYearEnd method)
(pandas.tseries.offsets.CustomBusinessDay method)
(pandas.tseries.offsets.CustomBusinessHour method)
(pandas.tseries.offsets.CustomBusinessMonthBegin method)
(pandas.tseries.offsets.CustomBusinessMonthEnd method)
(pandas.tseries.offsets.DateOffset method)
(pandas.tseries.offsets.Day method)
(pandas.tseries.offsets.Easter method)
(pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
(pandas.tseries.offsets.Hour method)
(pandas.tseries.offsets.LastWeekOfMonth method)
(pandas.tseries.offsets.Micro method)
(pandas.tseries.offsets.Milli method)
(pandas.tseries.offsets.Minute method)
(pandas.tseries.offsets.MonthBegin method)
(pandas.tseries.offsets.MonthEnd method)
(pandas.tseries.offsets.Nano method)
(pandas.tseries.offsets.QuarterBegin method)
(pandas.tseries.offsets.QuarterEnd method)
(pandas.tseries.offsets.Second method)
(pandas.tseries.offsets.SemiMonthBegin method)
(pandas.tseries.offsets.SemiMonthEnd method)
(pandas.tseries.offsets.Tick method)
(pandas.tseries.offsets.Week method)
(pandas.tseries.offsets.WeekOfMonth method)
(pandas.tseries.offsets.YearBegin method)
(pandas.tseries.offsets.YearEnd method)
is_re() (in module pandas.api.types)
is_re_compilable() (in module pandas.api.types)
is_scalar() (in module pandas.api.types)
is_signed_integer_dtype() (in module pandas.api.types)
is_sparse() (in module pandas.api.types)
is_string_dtype() (in module pandas.api.types)
is_timedelta64_dtype() (in module pandas.api.types)
is_timedelta64_ns_dtype() (in module pandas.api.types)
is_type_compatible() (pandas.Index method)
is_unique (pandas.Index attribute)
(pandas.Series property)
is_unsigned_integer_dtype() (in module pandas.api.types)
is_year_end (pandas.DatetimeIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
is_year_end() (pandas.tseries.offsets.BQuarterBegin method)
(pandas.tseries.offsets.BQuarterEnd method)
(pandas.tseries.offsets.BusinessDay method)
(pandas.tseries.offsets.BusinessHour method)
(pandas.tseries.offsets.BusinessMonthBegin method)
(pandas.tseries.offsets.BusinessMonthEnd method)
(pandas.tseries.offsets.BYearBegin method)
(pandas.tseries.offsets.BYearEnd method)
(pandas.tseries.offsets.CustomBusinessDay method)
(pandas.tseries.offsets.CustomBusinessHour method)
(pandas.tseries.offsets.CustomBusinessMonthBegin method)
(pandas.tseries.offsets.CustomBusinessMonthEnd method)
(pandas.tseries.offsets.DateOffset method)
(pandas.tseries.offsets.Day method)
(pandas.tseries.offsets.Easter method)
(pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
(pandas.tseries.offsets.Hour method)
(pandas.tseries.offsets.LastWeekOfMonth method)
(pandas.tseries.offsets.Micro method)
(pandas.tseries.offsets.Milli method)
(pandas.tseries.offsets.Minute method)
(pandas.tseries.offsets.MonthBegin method)
(pandas.tseries.offsets.MonthEnd method)
(pandas.tseries.offsets.Nano method)
(pandas.tseries.offsets.QuarterBegin method)
(pandas.tseries.offsets.QuarterEnd method)
(pandas.tseries.offsets.Second method)
(pandas.tseries.offsets.SemiMonthBegin method)
(pandas.tseries.offsets.SemiMonthEnd method)
(pandas.tseries.offsets.Tick method)
(pandas.tseries.offsets.Week method)
(pandas.tseries.offsets.WeekOfMonth method)
(pandas.tseries.offsets.YearBegin method)
(pandas.tseries.offsets.YearEnd method)
is_year_start (pandas.DatetimeIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
is_year_start() (pandas.tseries.offsets.BQuarterBegin method)
(pandas.tseries.offsets.BQuarterEnd method)
(pandas.tseries.offsets.BusinessDay method)
(pandas.tseries.offsets.BusinessHour method)
(pandas.tseries.offsets.BusinessMonthBegin method)
(pandas.tseries.offsets.BusinessMonthEnd method)
(pandas.tseries.offsets.BYearBegin method)
(pandas.tseries.offsets.BYearEnd method)
(pandas.tseries.offsets.CustomBusinessDay method)
(pandas.tseries.offsets.CustomBusinessHour method)
(pandas.tseries.offsets.CustomBusinessMonthBegin method)
(pandas.tseries.offsets.CustomBusinessMonthEnd method)
(pandas.tseries.offsets.DateOffset method)
(pandas.tseries.offsets.Day method)
(pandas.tseries.offsets.Easter method)
(pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
(pandas.tseries.offsets.Hour method)
(pandas.tseries.offsets.LastWeekOfMonth method)
(pandas.tseries.offsets.Micro method)
(pandas.tseries.offsets.Milli method)
(pandas.tseries.offsets.Minute method)
(pandas.tseries.offsets.MonthBegin method)
(pandas.tseries.offsets.MonthEnd method)
(pandas.tseries.offsets.Nano method)
(pandas.tseries.offsets.QuarterBegin method)
(pandas.tseries.offsets.QuarterEnd method)
(pandas.tseries.offsets.Second method)
(pandas.tseries.offsets.SemiMonthBegin method)
(pandas.tseries.offsets.SemiMonthEnd method)
(pandas.tseries.offsets.Tick method)
(pandas.tseries.offsets.Week method)
(pandas.tseries.offsets.WeekOfMonth method)
(pandas.tseries.offsets.YearBegin method)
(pandas.tseries.offsets.YearEnd method)
isalnum() (pandas.Series.str method)
isalpha() (pandas.Series.str method)
isAnchored() (pandas.tseries.offsets.BQuarterBegin method)
(pandas.tseries.offsets.BQuarterEnd method)
(pandas.tseries.offsets.BusinessDay method)
(pandas.tseries.offsets.BusinessHour method)
(pandas.tseries.offsets.BusinessMonthBegin method)
(pandas.tseries.offsets.BusinessMonthEnd method)
(pandas.tseries.offsets.BYearBegin method)
(pandas.tseries.offsets.BYearEnd method)
(pandas.tseries.offsets.CustomBusinessDay method)
(pandas.tseries.offsets.CustomBusinessHour method)
(pandas.tseries.offsets.CustomBusinessMonthBegin method)
(pandas.tseries.offsets.CustomBusinessMonthEnd method)
(pandas.tseries.offsets.DateOffset method)
(pandas.tseries.offsets.Day method)
(pandas.tseries.offsets.Easter method)
(pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
(pandas.tseries.offsets.Hour method)
(pandas.tseries.offsets.LastWeekOfMonth method)
(pandas.tseries.offsets.Micro method)
(pandas.tseries.offsets.Milli method)
(pandas.tseries.offsets.Minute method)
(pandas.tseries.offsets.MonthBegin method)
(pandas.tseries.offsets.MonthEnd method)
(pandas.tseries.offsets.Nano method)
(pandas.tseries.offsets.QuarterBegin method)
(pandas.tseries.offsets.QuarterEnd method)
(pandas.tseries.offsets.Second method)
(pandas.tseries.offsets.SemiMonthBegin method)
(pandas.tseries.offsets.SemiMonthEnd method)
(pandas.tseries.offsets.Tick method)
(pandas.tseries.offsets.Week method)
(pandas.tseries.offsets.WeekOfMonth method)
(pandas.tseries.offsets.YearBegin method)
(pandas.tseries.offsets.YearEnd method)
isdecimal() (pandas.Series.str method)
isdigit() (pandas.Series.str method)
isetitem() (pandas.DataFrame method)
isin() (pandas.api.extensions.ExtensionArray method)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
islower() (pandas.Series.str method)
isna() (in module pandas)
(pandas.api.extensions.ExtensionArray method)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
isnull() (in module pandas)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
isnumeric() (pandas.Series.str method)
isocalendar() (pandas.Series.dt method)
(pandas.Timestamp method)
isoformat() (pandas.Timedelta method)
(pandas.Timestamp method)
isoweekday() (pandas.Timestamp method)
isspace() (pandas.Series.str method)
istitle() (pandas.Series.str method)
isupper() (pandas.Series.str method)
item() (pandas.Index method)
(pandas.Series method)
items() (pandas.DataFrame method)
(pandas.Series method)
iteritems() (pandas.DataFrame method)
(pandas.Series method)
iterrows() (pandas.DataFrame method)
itertuples() (pandas.DataFrame method)
J
join() (pandas.DataFrame method)
(pandas.Index method)
(pandas.Series.str method)
json_normalize() (in module pandas)
K
kde() (pandas.DataFrame.plot method)
(pandas.Series.plot method)
keys() (pandas.DataFrame method)
(pandas.HDFStore method)
(pandas.Series method)
kind (pandas.api.extensions.ExtensionDtype property)
kurt() (pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.DataFrame method)
(pandas.Series method)
kurtosis() (pandas.DataFrame method)
(pandas.Series method)
kwds (pandas.tseries.offsets.BQuarterBegin attribute)
(pandas.tseries.offsets.BQuarterEnd attribute)
(pandas.tseries.offsets.BusinessDay attribute)
(pandas.tseries.offsets.BusinessHour attribute)
(pandas.tseries.offsets.BusinessMonthBegin attribute)
(pandas.tseries.offsets.BusinessMonthEnd attribute)
(pandas.tseries.offsets.BYearBegin attribute)
(pandas.tseries.offsets.BYearEnd attribute)
(pandas.tseries.offsets.CustomBusinessDay attribute)
(pandas.tseries.offsets.CustomBusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessMonthBegin attribute)
(pandas.tseries.offsets.CustomBusinessMonthEnd attribute)
(pandas.tseries.offsets.DateOffset attribute)
(pandas.tseries.offsets.Day attribute)
(pandas.tseries.offsets.Easter attribute)
(pandas.tseries.offsets.FY5253 attribute)
(pandas.tseries.offsets.FY5253Quarter attribute)
(pandas.tseries.offsets.Hour attribute)
(pandas.tseries.offsets.LastWeekOfMonth attribute)
(pandas.tseries.offsets.Micro attribute)
(pandas.tseries.offsets.Milli attribute)
(pandas.tseries.offsets.Minute attribute)
(pandas.tseries.offsets.MonthBegin attribute)
(pandas.tseries.offsets.MonthEnd attribute)
(pandas.tseries.offsets.Nano attribute)
(pandas.tseries.offsets.QuarterBegin attribute)
(pandas.tseries.offsets.QuarterEnd attribute)
(pandas.tseries.offsets.Second attribute)
(pandas.tseries.offsets.SemiMonthBegin attribute)
(pandas.tseries.offsets.SemiMonthEnd attribute)
(pandas.tseries.offsets.Tick attribute)
(pandas.tseries.offsets.Week attribute)
(pandas.tseries.offsets.WeekOfMonth attribute)
(pandas.tseries.offsets.YearBegin attribute)
(pandas.tseries.offsets.YearEnd attribute)
L
lag_plot() (in module pandas.plotting)
last() (pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.DataFrame method)
(pandas.Series method)
last_valid_index() (pandas.DataFrame method)
(pandas.Series method)
LastWeekOfMonth (class in pandas.tseries.offsets)
le() (pandas.DataFrame method)
(pandas.Series method)
left (pandas.arrays.IntervalArray property)
(pandas.Interval attribute)
(pandas.IntervalIndex attribute)
len() (pandas.Series.str method)
length (pandas.arrays.IntervalArray property)
(pandas.Interval attribute)
(pandas.IntervalIndex property)
levels (pandas.MultiIndex attribute)
levshape (pandas.MultiIndex property)
line() (pandas.DataFrame.plot method)
(pandas.Series.plot method)
ljust() (pandas.Series.str method)
loader (pandas.io.formats.style.Styler attribute)
loc (pandas.DataFrame property)
(pandas.Series property)
lookup() (pandas.DataFrame method)
lower() (pandas.Series.str method)
lstrip() (pandas.Series.str method)
lt() (pandas.DataFrame method)
(pandas.Series method)
M
m_offset (pandas.tseries.offsets.CustomBusinessMonthBegin attribute)
(pandas.tseries.offsets.CustomBusinessMonthEnd attribute)
mad (pandas.core.groupby.DataFrameGroupBy property)
mad() (pandas.DataFrame method)
(pandas.Series method)
map() (pandas.CategoricalIndex method)
(pandas.Index method)
(pandas.Series method)
mask() (pandas.DataFrame method)
(pandas.Series method)
match() (pandas.Series.str method)
max (pandas.Timedelta attribute)
(pandas.Timestamp attribute)
max() (pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
mean() (pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.core.window.ewm.ExponentialMovingWindow method)
(pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.core.window.rolling.Window method)
(pandas.DataFrame method)
(pandas.DatetimeIndex method)
(pandas.Series method)
(pandas.TimedeltaIndex method)
median() (pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.DataFrame method)
(pandas.Series method)
melt() (in module pandas)
(pandas.DataFrame method)
memory_usage() (pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
merge() (in module pandas)
(pandas.DataFrame method)
merge_asof() (in module pandas)
merge_ordered() (in module pandas)
MergeError
Micro (class in pandas.tseries.offsets)
microsecond (pandas.DatetimeIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
microseconds (pandas.Series.dt attribute)
(pandas.Timedelta attribute)
(pandas.TimedeltaIndex property)
mid (pandas.arrays.IntervalArray property)
(pandas.Interval attribute)
(pandas.IntervalIndex attribute)
Milli (class in pandas.tseries.offsets)
min (pandas.Timedelta attribute)
(pandas.Timestamp attribute)
min() (pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
Minute (class in pandas.tseries.offsets)
minute (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
mod() (pandas.DataFrame method)
(pandas.Series method)
mode() (pandas.DataFrame method)
(pandas.Series method)
module
pandas (module)
month (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
(pandas.tseries.offsets.BYearBegin attribute)
(pandas.tseries.offsets.BYearEnd attribute)
(pandas.tseries.offsets.YearBegin attribute)
(pandas.tseries.offsets.YearEnd attribute)
month_name() (pandas.DatetimeIndex method)
(pandas.Series.dt method)
(pandas.Timestamp method)
month_roll (pandas.tseries.offsets.CustomBusinessMonthBegin attribute)
(pandas.tseries.offsets.CustomBusinessMonthEnd attribute)
MonthBegin (class in pandas.tseries.offsets)
MonthEnd (class in pandas.tseries.offsets)
mul() (pandas.DataFrame method)
(pandas.Series method)
MultiIndex (class in pandas)
multiply() (pandas.DataFrame method)
(pandas.Series method)
N
n (pandas.tseries.offsets.BQuarterBegin attribute)
(pandas.tseries.offsets.BQuarterEnd attribute)
(pandas.tseries.offsets.BusinessDay attribute)
(pandas.tseries.offsets.BusinessHour attribute)
(pandas.tseries.offsets.BusinessMonthBegin attribute)
(pandas.tseries.offsets.BusinessMonthEnd attribute)
(pandas.tseries.offsets.BYearBegin attribute)
(pandas.tseries.offsets.BYearEnd attribute)
(pandas.tseries.offsets.CustomBusinessDay attribute)
(pandas.tseries.offsets.CustomBusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessMonthBegin attribute)
(pandas.tseries.offsets.CustomBusinessMonthEnd attribute)
(pandas.tseries.offsets.DateOffset attribute)
(pandas.tseries.offsets.Day attribute)
(pandas.tseries.offsets.Easter attribute)
(pandas.tseries.offsets.FY5253 attribute)
(pandas.tseries.offsets.FY5253Quarter attribute)
(pandas.tseries.offsets.Hour attribute)
(pandas.tseries.offsets.LastWeekOfMonth attribute)
(pandas.tseries.offsets.Micro attribute)
(pandas.tseries.offsets.Milli attribute)
(pandas.tseries.offsets.Minute attribute)
(pandas.tseries.offsets.MonthBegin attribute)
(pandas.tseries.offsets.MonthEnd attribute)
(pandas.tseries.offsets.Nano attribute)
(pandas.tseries.offsets.QuarterBegin attribute)
(pandas.tseries.offsets.QuarterEnd attribute)
(pandas.tseries.offsets.Second attribute)
(pandas.tseries.offsets.SemiMonthBegin attribute)
(pandas.tseries.offsets.SemiMonthEnd attribute)
(pandas.tseries.offsets.Tick attribute)
(pandas.tseries.offsets.Week attribute)
(pandas.tseries.offsets.WeekOfMonth attribute)
(pandas.tseries.offsets.YearBegin attribute)
(pandas.tseries.offsets.YearEnd attribute)
na_value (pandas.api.extensions.ExtensionDtype property)
name (pandas.api.extensions.ExtensionDtype property)
(pandas.Index property)
(pandas.Series property)
(pandas.tseries.offsets.BQuarterBegin attribute)
(pandas.tseries.offsets.BQuarterEnd attribute)
(pandas.tseries.offsets.BusinessDay attribute)
(pandas.tseries.offsets.BusinessHour attribute)
(pandas.tseries.offsets.BusinessMonthBegin attribute)
(pandas.tseries.offsets.BusinessMonthEnd attribute)
(pandas.tseries.offsets.BYearBegin attribute)
(pandas.tseries.offsets.BYearEnd attribute)
(pandas.tseries.offsets.CustomBusinessDay attribute)
(pandas.tseries.offsets.CustomBusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessMonthBegin attribute)
(pandas.tseries.offsets.CustomBusinessMonthEnd attribute)
(pandas.tseries.offsets.DateOffset attribute)
(pandas.tseries.offsets.Day attribute)
(pandas.tseries.offsets.Easter attribute)
(pandas.tseries.offsets.FY5253 attribute)
(pandas.tseries.offsets.FY5253Quarter attribute)
(pandas.tseries.offsets.Hour attribute)
(pandas.tseries.offsets.LastWeekOfMonth attribute)
(pandas.tseries.offsets.Micro attribute)
(pandas.tseries.offsets.Milli attribute)
(pandas.tseries.offsets.Minute attribute)
(pandas.tseries.offsets.MonthBegin attribute)
(pandas.tseries.offsets.MonthEnd attribute)
(pandas.tseries.offsets.Nano attribute)
(pandas.tseries.offsets.QuarterBegin attribute)
(pandas.tseries.offsets.QuarterEnd attribute)
(pandas.tseries.offsets.Second attribute)
(pandas.tseries.offsets.SemiMonthBegin attribute)
(pandas.tseries.offsets.SemiMonthEnd attribute)
(pandas.tseries.offsets.Tick attribute)
(pandas.tseries.offsets.Week attribute)
(pandas.tseries.offsets.WeekOfMonth attribute)
(pandas.tseries.offsets.YearBegin attribute)
(pandas.tseries.offsets.YearEnd attribute)
names (pandas.api.extensions.ExtensionDtype property)
(pandas.Index property)
(pandas.MultiIndex property)
Nano (class in pandas.tseries.offsets)
nanos (pandas.tseries.offsets.BQuarterBegin attribute)
(pandas.tseries.offsets.BQuarterEnd attribute)
(pandas.tseries.offsets.BusinessDay attribute)
(pandas.tseries.offsets.BusinessHour attribute)
(pandas.tseries.offsets.BusinessMonthBegin attribute)
(pandas.tseries.offsets.BusinessMonthEnd attribute)
(pandas.tseries.offsets.BYearBegin attribute)
(pandas.tseries.offsets.BYearEnd attribute)
(pandas.tseries.offsets.CustomBusinessDay attribute)
(pandas.tseries.offsets.CustomBusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessMonthBegin attribute)
(pandas.tseries.offsets.CustomBusinessMonthEnd attribute)
(pandas.tseries.offsets.DateOffset attribute)
(pandas.tseries.offsets.Day attribute)
(pandas.tseries.offsets.Easter attribute)
(pandas.tseries.offsets.FY5253 attribute)
(pandas.tseries.offsets.FY5253Quarter attribute)
(pandas.tseries.offsets.Hour attribute)
(pandas.tseries.offsets.LastWeekOfMonth attribute)
(pandas.tseries.offsets.Micro attribute)
(pandas.tseries.offsets.Milli attribute)
(pandas.tseries.offsets.Minute attribute)
(pandas.tseries.offsets.MonthBegin attribute)
(pandas.tseries.offsets.MonthEnd attribute)
(pandas.tseries.offsets.Nano attribute)
(pandas.tseries.offsets.QuarterBegin attribute)
(pandas.tseries.offsets.QuarterEnd attribute)
(pandas.tseries.offsets.Second attribute)
(pandas.tseries.offsets.SemiMonthBegin attribute)
(pandas.tseries.offsets.SemiMonthEnd attribute)
(pandas.tseries.offsets.Tick attribute)
(pandas.tseries.offsets.Week attribute)
(pandas.tseries.offsets.WeekOfMonth attribute)
(pandas.tseries.offsets.YearBegin attribute)
(pandas.tseries.offsets.YearEnd attribute)
nanosecond (pandas.DatetimeIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
nanoseconds (pandas.Series.dt attribute)
(pandas.Timedelta attribute)
(pandas.TimedeltaIndex property)
nbytes (pandas.api.extensions.ExtensionArray property)
(pandas.Index property)
(pandas.Series property)
ndim (pandas.api.extensions.ExtensionArray property)
(pandas.DataFrame property)
(pandas.Index property)
(pandas.Series property)
ne() (pandas.DataFrame method)
(pandas.Series method)
nearest() (pandas.core.resample.Resampler method)
next_bday (pandas.tseries.offsets.BusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessHour attribute)
ngroup() (pandas.core.groupby.GroupBy method)
nlargest() (pandas.core.groupby.SeriesGroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
nlevels (pandas.Index property)
(pandas.MultiIndex property)
normalize (pandas.tseries.offsets.BQuarterBegin attribute)
(pandas.tseries.offsets.BQuarterEnd attribute)
(pandas.tseries.offsets.BusinessDay attribute)
(pandas.tseries.offsets.BusinessHour attribute)
(pandas.tseries.offsets.BusinessMonthBegin attribute)
(pandas.tseries.offsets.BusinessMonthEnd attribute)
(pandas.tseries.offsets.BYearBegin attribute)
(pandas.tseries.offsets.BYearEnd attribute)
(pandas.tseries.offsets.CustomBusinessDay attribute)
(pandas.tseries.offsets.CustomBusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessMonthBegin attribute)
(pandas.tseries.offsets.CustomBusinessMonthEnd attribute)
(pandas.tseries.offsets.DateOffset attribute)
(pandas.tseries.offsets.Day attribute)
(pandas.tseries.offsets.Easter attribute)
(pandas.tseries.offsets.FY5253 attribute)
(pandas.tseries.offsets.FY5253Quarter attribute)
(pandas.tseries.offsets.Hour attribute)
(pandas.tseries.offsets.LastWeekOfMonth attribute)
(pandas.tseries.offsets.Micro attribute)
(pandas.tseries.offsets.Milli attribute)
(pandas.tseries.offsets.Minute attribute)
(pandas.tseries.offsets.MonthBegin attribute)
(pandas.tseries.offsets.MonthEnd attribute)
(pandas.tseries.offsets.Nano attribute)
(pandas.tseries.offsets.QuarterBegin attribute)
(pandas.tseries.offsets.QuarterEnd attribute)
(pandas.tseries.offsets.Second attribute)
(pandas.tseries.offsets.SemiMonthBegin attribute)
(pandas.tseries.offsets.SemiMonthEnd attribute)
(pandas.tseries.offsets.Tick attribute)
(pandas.tseries.offsets.Week attribute)
(pandas.tseries.offsets.WeekOfMonth attribute)
(pandas.tseries.offsets.YearBegin attribute)
(pandas.tseries.offsets.YearEnd attribute)
normalize() (pandas.DatetimeIndex method)
(pandas.Series.dt method)
(pandas.Series.str method)
(pandas.Timestamp method)
notna() (in module pandas)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
notnull() (in module pandas)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
now() (pandas.Period method)
(pandas.Timestamp class method)
npoints (pandas.Series.sparse attribute)
nsmallest() (pandas.core.groupby.SeriesGroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
nth (pandas.core.groupby.GroupBy property)
NullFrequencyError
NumbaUtilError
NumExprClobberingError
nunique() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.resample.Resampler method)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
O
offset (pandas.tseries.offsets.BusinessDay attribute)
(pandas.tseries.offsets.BusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessDay attribute)
(pandas.tseries.offsets.CustomBusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessMonthBegin attribute)
(pandas.tseries.offsets.CustomBusinessMonthEnd attribute)
ohlc() (pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
onOffset() (pandas.tseries.offsets.BQuarterBegin method)
(pandas.tseries.offsets.BQuarterEnd method)
(pandas.tseries.offsets.BusinessDay method)
(pandas.tseries.offsets.BusinessHour method)
(pandas.tseries.offsets.BusinessMonthBegin method)
(pandas.tseries.offsets.BusinessMonthEnd method)
(pandas.tseries.offsets.BYearBegin method)
(pandas.tseries.offsets.BYearEnd method)
(pandas.tseries.offsets.CustomBusinessDay method)
(pandas.tseries.offsets.CustomBusinessHour method)
(pandas.tseries.offsets.CustomBusinessMonthBegin method)
(pandas.tseries.offsets.CustomBusinessMonthEnd method)
(pandas.tseries.offsets.DateOffset method)
(pandas.tseries.offsets.Day method)
(pandas.tseries.offsets.Easter method)
(pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
(pandas.tseries.offsets.Hour method)
(pandas.tseries.offsets.LastWeekOfMonth method)
(pandas.tseries.offsets.Micro method)
(pandas.tseries.offsets.Milli method)
(pandas.tseries.offsets.Minute method)
(pandas.tseries.offsets.MonthBegin method)
(pandas.tseries.offsets.MonthEnd method)
(pandas.tseries.offsets.Nano method)
(pandas.tseries.offsets.QuarterBegin method)
(pandas.tseries.offsets.QuarterEnd method)
(pandas.tseries.offsets.Second method)
(pandas.tseries.offsets.SemiMonthBegin method)
(pandas.tseries.offsets.SemiMonthEnd method)
(pandas.tseries.offsets.Tick method)
(pandas.tseries.offsets.Week method)
(pandas.tseries.offsets.WeekOfMonth method)
(pandas.tseries.offsets.YearBegin method)
(pandas.tseries.offsets.YearEnd method)
open_left (pandas.Interval attribute)
open_right (pandas.Interval attribute)
option_context (class in pandas)
OptionError
ordered (pandas.Categorical property)
(pandas.CategoricalDtype property)
(pandas.CategoricalIndex property)
(pandas.Series.cat attribute)
ordinal (pandas.Period attribute)
OutOfBoundsDatetime
OutOfBoundsTimedelta
overlaps() (pandas.arrays.IntervalArray method)
(pandas.Interval method)
(pandas.IntervalIndex method)
P
pad() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.DataFrame method)
(pandas.Series method)
(pandas.Series.str method)
pandas
module
pandas_dtype() (in module pandas.api.types)
PandasArray (class in pandas.arrays)
parallel_coordinates() (in module pandas.plotting)
parse() (pandas.ExcelFile method)
ParserError
ParserWarning
partition() (pandas.Series.str method)
path (pandas.ExcelWriter property)
pct_change() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.GroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
PerformanceWarning
Period (class in pandas)
period_range() (in module pandas)
PeriodArray (class in pandas.arrays)
PeriodDtype (class in pandas)
PeriodIndex (class in pandas)
pie() (pandas.DataFrame.plot method)
(pandas.Series.plot method)
pipe() (pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.DataFrame method)
(pandas.io.formats.style.Styler method)
(pandas.Series method)
pivot() (in module pandas)
(pandas.DataFrame method)
pivot_table() (in module pandas)
(pandas.DataFrame method)
plot (pandas.core.groupby.DataFrameGroupBy property)
plot() (pandas.DataFrame method)
(pandas.Series method)
plot_params (in module pandas.plotting)
pop() (pandas.DataFrame method)
(pandas.Series method)
PossibleDataLossError
PossiblePrecisionLoss
pow() (pandas.DataFrame method)
(pandas.Series method)
prod() (pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.DataFrame method)
(pandas.Series method)
product() (pandas.DataFrame method)
(pandas.Series method)
put() (pandas.HDFStore method)
putmask() (pandas.Index method)
PyperclipException
PyperclipWindowsException
Python Enhancement Proposals
PEP 484
PEP 561
PEP 585
PEP 8#imports
Q
qcut() (in module pandas)
qtr_with_extra_week (pandas.tseries.offsets.FY5253Quarter attribute)
quantile() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.resample.Resampler method)
(pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.DataFrame method)
(pandas.Series method)
quarter (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
QuarterBegin (class in pandas.tseries.offsets)
QuarterEnd (class in pandas.tseries.offsets)
query() (pandas.DataFrame method)
qyear (pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
R
radd() (pandas.DataFrame method)
(pandas.Series method)
radviz() (in module pandas.plotting)
RangeIndex (class in pandas)
rank() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.GroupBy method)
(pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.DataFrame method)
(pandas.Series method)
ravel() (pandas.api.extensions.ExtensionArray method)
(pandas.Index method)
(pandas.Series method)
rdiv() (pandas.DataFrame method)
(pandas.Series method)
rdivmod() (pandas.Series method)
read_clipboard() (in module pandas)
read_csv() (in module pandas)
read_excel() (in module pandas)
read_feather() (in module pandas)
read_fwf() (in module pandas)
read_gbq() (in module pandas)
read_hdf() (in module pandas)
read_html() (in module pandas)
read_json() (in module pandas)
read_orc() (in module pandas)
read_parquet() (in module pandas)
read_pickle() (in module pandas)
read_sas() (in module pandas)
read_spss() (in module pandas)
read_sql() (in module pandas)
read_sql_query() (in module pandas)
read_sql_table() (in module pandas)
read_stata() (in module pandas)
read_table() (in module pandas)
read_xml() (in module pandas)
register_dataframe_accessor() (in module pandas.api.extensions)
register_extension_dtype() (in module pandas.api.extensions)
register_index_accessor() (in module pandas.api.extensions)
register_matplotlib_converters() (in module pandas.plotting)
register_series_accessor() (in module pandas.api.extensions)
reindex() (pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
reindex_like() (pandas.DataFrame method)
(pandas.Series method)
relabel_index() (pandas.io.formats.style.Styler method)
remove_categories() (pandas.CategoricalIndex method)
(pandas.Series.cat method)
remove_unused_categories() (pandas.CategoricalIndex method)
(pandas.Series.cat method)
remove_unused_levels() (pandas.MultiIndex method)
removeprefix() (pandas.Series.str method)
removesuffix() (pandas.Series.str method)
rename() (pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
rename_axis() (pandas.DataFrame method)
(pandas.Series method)
rename_categories() (pandas.CategoricalIndex method)
(pandas.Series.cat method)
render() (pandas.io.formats.style.Styler method)
reorder_categories() (pandas.CategoricalIndex method)
(pandas.Series.cat method)
reorder_levels() (pandas.DataFrame method)
(pandas.MultiIndex method)
(pandas.Series method)
repeat() (pandas.api.extensions.ExtensionArray method)
(pandas.Index method)
(pandas.Series method)
(pandas.Series.str method)
replace() (pandas.DataFrame method)
(pandas.Series method)
(pandas.Series.str method)
(pandas.Timestamp method)
resample() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
reset_index() (pandas.DataFrame method)
(pandas.Series method)
reset_option (in module pandas)
resolution (pandas.Timedelta attribute)
(pandas.Timestamp attribute)
resolution_string (pandas.Timedelta attribute)
rfind() (pandas.Series.str method)
rfloordiv() (pandas.DataFrame method)
(pandas.Series method)
right (pandas.arrays.IntervalArray property)
(pandas.Interval attribute)
(pandas.IntervalIndex attribute)
rindex() (pandas.Series.str method)
rjust() (pandas.Series.str method)
rmod() (pandas.DataFrame method)
(pandas.Series method)
rmul() (pandas.DataFrame method)
(pandas.Series method)
rollback() (pandas.tseries.offsets.BQuarterBegin method)
(pandas.tseries.offsets.BQuarterEnd method)
(pandas.tseries.offsets.BusinessDay method)
(pandas.tseries.offsets.BusinessHour method)
(pandas.tseries.offsets.BusinessMonthBegin method)
(pandas.tseries.offsets.BusinessMonthEnd method)
(pandas.tseries.offsets.BYearBegin method)
(pandas.tseries.offsets.BYearEnd method)
(pandas.tseries.offsets.CustomBusinessDay method)
(pandas.tseries.offsets.CustomBusinessHour method)
(pandas.tseries.offsets.CustomBusinessMonthBegin method)
(pandas.tseries.offsets.CustomBusinessMonthEnd method)
(pandas.tseries.offsets.DateOffset method)
(pandas.tseries.offsets.Day method)
(pandas.tseries.offsets.Easter method)
(pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
(pandas.tseries.offsets.Hour method)
(pandas.tseries.offsets.LastWeekOfMonth method)
(pandas.tseries.offsets.Micro method)
(pandas.tseries.offsets.Milli method)
(pandas.tseries.offsets.Minute method)
(pandas.tseries.offsets.MonthBegin method)
(pandas.tseries.offsets.MonthEnd method)
(pandas.tseries.offsets.Nano method)
(pandas.tseries.offsets.QuarterBegin method)
(pandas.tseries.offsets.QuarterEnd method)
(pandas.tseries.offsets.Second method)
(pandas.tseries.offsets.SemiMonthBegin method)
(pandas.tseries.offsets.SemiMonthEnd method)
(pandas.tseries.offsets.Tick method)
(pandas.tseries.offsets.Week method)
(pandas.tseries.offsets.WeekOfMonth method)
(pandas.tseries.offsets.YearBegin method)
(pandas.tseries.offsets.YearEnd method)
rollforward() (pandas.tseries.offsets.BQuarterBegin method)
(pandas.tseries.offsets.BQuarterEnd method)
(pandas.tseries.offsets.BusinessDay method)
(pandas.tseries.offsets.BusinessHour method)
(pandas.tseries.offsets.BusinessMonthBegin method)
(pandas.tseries.offsets.BusinessMonthEnd method)
(pandas.tseries.offsets.BYearBegin method)
(pandas.tseries.offsets.BYearEnd method)
(pandas.tseries.offsets.CustomBusinessDay method)
(pandas.tseries.offsets.CustomBusinessHour method)
(pandas.tseries.offsets.CustomBusinessMonthBegin method)
(pandas.tseries.offsets.CustomBusinessMonthEnd method)
(pandas.tseries.offsets.DateOffset method)
(pandas.tseries.offsets.Day method)
(pandas.tseries.offsets.Easter method)
(pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
(pandas.tseries.offsets.Hour method)
(pandas.tseries.offsets.LastWeekOfMonth method)
(pandas.tseries.offsets.Micro method)
(pandas.tseries.offsets.Milli method)
(pandas.tseries.offsets.Minute method)
(pandas.tseries.offsets.MonthBegin method)
(pandas.tseries.offsets.MonthEnd method)
(pandas.tseries.offsets.Nano method)
(pandas.tseries.offsets.QuarterBegin method)
(pandas.tseries.offsets.QuarterEnd method)
(pandas.tseries.offsets.Second method)
(pandas.tseries.offsets.SemiMonthBegin method)
(pandas.tseries.offsets.SemiMonthEnd method)
(pandas.tseries.offsets.Tick method)
(pandas.tseries.offsets.Week method)
(pandas.tseries.offsets.WeekOfMonth method)
(pandas.tseries.offsets.YearBegin method)
(pandas.tseries.offsets.YearEnd method)
rolling() (pandas.DataFrame method)
(pandas.Series method)
round() (pandas.DataFrame method)
(pandas.DatetimeIndex method)
(pandas.Series method)
(pandas.Series.dt method)
(pandas.Timedelta method)
(pandas.TimedeltaIndex method)
(pandas.Timestamp method)
rpartition() (pandas.Series.str method)
rpow() (pandas.DataFrame method)
(pandas.Series method)
rsplit() (pandas.Series.str method)
rstrip() (pandas.Series.str method)
rsub() (pandas.DataFrame method)
(pandas.Series method)
rtruediv() (pandas.DataFrame method)
(pandas.Series method)
rule_code (pandas.tseries.offsets.BQuarterBegin attribute)
(pandas.tseries.offsets.BQuarterEnd attribute)
(pandas.tseries.offsets.BusinessDay attribute)
(pandas.tseries.offsets.BusinessHour attribute)
(pandas.tseries.offsets.BusinessMonthBegin attribute)
(pandas.tseries.offsets.BusinessMonthEnd attribute)
(pandas.tseries.offsets.BYearBegin attribute)
(pandas.tseries.offsets.BYearEnd attribute)
(pandas.tseries.offsets.CustomBusinessDay attribute)
(pandas.tseries.offsets.CustomBusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessMonthBegin attribute)
(pandas.tseries.offsets.CustomBusinessMonthEnd attribute)
(pandas.tseries.offsets.DateOffset attribute)
(pandas.tseries.offsets.Day attribute)
(pandas.tseries.offsets.Easter attribute)
(pandas.tseries.offsets.FY5253 attribute)
(pandas.tseries.offsets.FY5253Quarter attribute)
(pandas.tseries.offsets.Hour attribute)
(pandas.tseries.offsets.LastWeekOfMonth attribute)
(pandas.tseries.offsets.Micro attribute)
(pandas.tseries.offsets.Milli attribute)
(pandas.tseries.offsets.Minute attribute)
(pandas.tseries.offsets.MonthBegin attribute)
(pandas.tseries.offsets.MonthEnd attribute)
(pandas.tseries.offsets.Nano attribute)
(pandas.tseries.offsets.QuarterBegin attribute)
(pandas.tseries.offsets.QuarterEnd attribute)
(pandas.tseries.offsets.Second attribute)
(pandas.tseries.offsets.SemiMonthBegin attribute)
(pandas.tseries.offsets.SemiMonthEnd attribute)
(pandas.tseries.offsets.Tick attribute)
(pandas.tseries.offsets.Week attribute)
(pandas.tseries.offsets.WeekOfMonth attribute)
(pandas.tseries.offsets.YearBegin attribute)
(pandas.tseries.offsets.YearEnd attribute)
S
sample() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
save() (pandas.ExcelWriter method)
scatter() (pandas.DataFrame.plot method)
scatter_matrix() (in module pandas.plotting)
searchsorted() (pandas.api.extensions.ExtensionArray method)
(pandas.Index method)
(pandas.Series method)
Second (class in pandas.tseries.offsets)
second (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
seconds (pandas.Series.dt attribute)
(pandas.Timedelta attribute)
(pandas.TimedeltaIndex property)
select() (pandas.HDFStore method)
select_dtypes() (pandas.DataFrame method)
sem() (pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.DataFrame method)
(pandas.Series method)
SemiMonthBegin (class in pandas.tseries.offsets)
SemiMonthEnd (class in pandas.tseries.offsets)
Series (class in pandas)
set_axis() (pandas.DataFrame method)
(pandas.Series method)
set_caption() (pandas.io.formats.style.Styler method)
set_categories() (pandas.CategoricalIndex method)
(pandas.Series.cat method)
set_closed() (pandas.arrays.IntervalArray method)
(pandas.IntervalIndex method)
set_codes() (pandas.MultiIndex method)
set_flags() (pandas.DataFrame method)
(pandas.Series method)
set_index() (pandas.DataFrame method)
set_levels() (pandas.MultiIndex method)
set_na_rep() (pandas.io.formats.style.Styler method)
set_names() (pandas.Index method)
set_option (in module pandas)
set_precision() (pandas.io.formats.style.Styler method)
set_properties() (pandas.io.formats.style.Styler method)
set_sticky() (pandas.io.formats.style.Styler method)
set_table_attributes() (pandas.io.formats.style.Styler method)
set_table_styles() (pandas.io.formats.style.Styler method)
set_td_classes() (pandas.io.formats.style.Styler method)
set_tooltips() (pandas.io.formats.style.Styler method)
set_uuid() (pandas.io.formats.style.Styler method)
set_value() (pandas.Index method)
SettingWithCopyError
SettingWithCopyWarning
shape (pandas.api.extensions.ExtensionArray property)
(pandas.DataFrame property)
(pandas.Index property)
(pandas.Series property)
sheets (pandas.ExcelWriter property)
shift() (pandas.api.extensions.ExtensionArray method)
(pandas.core.groupby.DataFrameGroupBy method)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
show_versions() (in module pandas)
size (pandas.DataFrame property)
(pandas.Index property)
(pandas.Series property)
size() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
skew (pandas.core.groupby.DataFrameGroupBy property)
skew() (pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.DataFrame method)
(pandas.Series method)
slice() (pandas.Series.str method)
slice_indexer() (pandas.Index method)
slice_locs() (pandas.Index method)
slice_replace() (pandas.Series.str method)
slice_shift() (pandas.DataFrame method)
(pandas.Series method)
snap() (pandas.DatetimeIndex method)
sort() (pandas.Index method)
sort_index() (pandas.DataFrame method)
(pandas.Series method)
sort_values() (pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
sortlevel() (pandas.Index method)
(pandas.MultiIndex method)
sp_values (pandas.Series.sparse attribute)
sparse() (pandas.DataFrame method)
(pandas.Series method)
SparseArray (class in pandas.arrays)
SparseDtype (class in pandas)
SpecificationError
split() (pandas.Series.str method)
squeeze() (pandas.DataFrame method)
(pandas.Series method)
stack() (pandas.DataFrame method)
start (pandas.RangeIndex property)
(pandas.tseries.offsets.BusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessHour attribute)
start_time (pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
startingMonth (pandas.tseries.offsets.BQuarterBegin attribute)
(pandas.tseries.offsets.BQuarterEnd attribute)
(pandas.tseries.offsets.FY5253 attribute)
(pandas.tseries.offsets.FY5253Quarter attribute)
(pandas.tseries.offsets.QuarterBegin attribute)
(pandas.tseries.offsets.QuarterEnd attribute)
startswith() (pandas.Series.str method)
std() (pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.core.window.ewm.ExponentialMovingWindow method)
(pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.core.window.rolling.Window method)
(pandas.DataFrame method)
(pandas.DatetimeIndex method)
(pandas.Series method)
step (pandas.RangeIndex property)
stop (pandas.RangeIndex property)
str() (pandas.Index method)
(pandas.Series method)
strftime() (pandas.DatetimeIndex method)
(pandas.Period method)
(pandas.PeriodIndex method)
(pandas.Series.dt method)
(pandas.Timestamp method)
StringArray (class in pandas.arrays)
StringDtype (class in pandas)
strip() (pandas.Series.str method)
strptime() (pandas.Timestamp class method)
style (pandas.DataFrame property)
Styler (class in pandas.io.formats.style)
sub() (pandas.DataFrame method)
(pandas.Series method)
subtract() (pandas.DataFrame method)
(pandas.Series method)
subtype (pandas.IntervalDtype property)
sum() (pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.core.window.ewm.ExponentialMovingWindow method)
(pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.core.window.rolling.Window method)
(pandas.DataFrame method)
(pandas.Series method)
supported_extensions (pandas.ExcelWriter property)
swapaxes() (pandas.DataFrame method)
(pandas.Series method)
swapcase() (pandas.Series.str method)
swaplevel() (pandas.DataFrame method)
(pandas.MultiIndex method)
(pandas.Series method)
symmetric_difference() (pandas.Index method)
T
T (pandas.DataFrame property)
(pandas.Index property)
(pandas.Series property)
table() (in module pandas.plotting)
tail() (pandas.core.groupby.GroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
take (pandas.core.groupby.DataFrameGroupBy property)
take() (pandas.api.extensions.ExtensionArray method)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
template_html (pandas.io.formats.style.Styler attribute)
template_html_style (pandas.io.formats.style.Styler attribute)
template_html_table (pandas.io.formats.style.Styler attribute)
template_latex (pandas.io.formats.style.Styler attribute)
template_string (pandas.io.formats.style.Styler attribute)
test() (in module pandas)
text_gradient() (pandas.io.formats.style.Styler method)
Tick (class in pandas.tseries.offsets)
time (pandas.DatetimeIndex property)
(pandas.Series.dt attribute)
time() (pandas.Timestamp method)
Timedelta (class in pandas)
timedelta_range() (in module pandas)
TimedeltaArray (class in pandas.arrays)
TimedeltaIndex (class in pandas)
Timestamp (class in pandas)
timestamp() (pandas.Timestamp method)
timetuple() (pandas.Timestamp method)
timetz (pandas.DatetimeIndex property)
(pandas.Series.dt attribute)
timetz() (pandas.Timestamp method)
title() (pandas.Series.str method)
to_clipboard() (pandas.DataFrame method)
(pandas.Series method)
to_coo() (pandas.DataFrame.sparse method)
(pandas.Series.sparse method)
to_csv() (pandas.DataFrame method)
(pandas.Series method)
to_datetime() (in module pandas)
to_datetime64() (pandas.Timestamp method)
to_dense() (pandas.DataFrame.sparse method)
to_dict() (pandas.DataFrame method)
(pandas.Series method)
to_excel() (pandas.DataFrame method)
(pandas.io.formats.style.Styler method)
(pandas.Series method)
to_feather() (pandas.DataFrame method)
to_flat_index() (pandas.Index method)
(pandas.MultiIndex method)
to_frame() (pandas.DatetimeIndex method)
(pandas.Index method)
(pandas.MultiIndex method)
(pandas.Series method)
(pandas.TimedeltaIndex method)
to_gbq() (pandas.DataFrame method)
to_hdf() (pandas.DataFrame method)
(pandas.Series method)
to_html() (pandas.DataFrame method)
(pandas.io.formats.style.Styler method)
to_json() (pandas.DataFrame method)
(pandas.Series method)
to_julian_date() (pandas.Timestamp method)
to_latex() (pandas.DataFrame method)
(pandas.io.formats.style.Styler method)
(pandas.Series method)
to_list() (pandas.Index method)
(pandas.Series method)
to_markdown() (pandas.DataFrame method)
(pandas.Series method)
to_native_types() (pandas.Index method)
to_numeric() (in module pandas)
to_numpy() (pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
(pandas.Timedelta method)
(pandas.Timestamp method)
to_offset() (in module pandas.tseries.frequencies)
to_orc() (pandas.DataFrame method)
to_parquet() (pandas.DataFrame method)
to_period() (pandas.DataFrame method)
(pandas.DatetimeIndex method)
(pandas.Series method)
(pandas.Series.dt method)
(pandas.Timestamp method)
to_perioddelta() (pandas.DatetimeIndex method)
to_pickle() (pandas.DataFrame method)
(pandas.Series method)
to_pydatetime() (pandas.DatetimeIndex method)
(pandas.Series.dt method)
(pandas.Timestamp method)
to_pytimedelta() (pandas.Series.dt method)
(pandas.Timedelta method)
(pandas.TimedeltaIndex method)
to_records() (pandas.DataFrame method)
to_series() (pandas.DatetimeIndex method)
(pandas.Index method)
(pandas.TimedeltaIndex method)
to_sql() (pandas.DataFrame method)
(pandas.Series method)
to_stata() (pandas.DataFrame method)
to_string() (pandas.DataFrame method)
(pandas.io.formats.style.Styler method)
(pandas.Series method)
to_timedelta() (in module pandas)
to_timedelta64() (pandas.Timedelta method)
to_timestamp() (pandas.DataFrame method)
(pandas.Period method)
(pandas.PeriodIndex method)
(pandas.Series method)
to_tuples() (pandas.arrays.IntervalArray method)
(pandas.IntervalIndex method)
to_xarray() (pandas.DataFrame method)
(pandas.Series method)
to_xml() (pandas.DataFrame method)
today() (pandas.Timestamp class method)
tolist() (pandas.api.extensions.ExtensionArray method)
(pandas.Index method)
(pandas.Series method)
toordinal() (pandas.Timestamp method)
total_seconds() (pandas.Series.dt method)
(pandas.Timedelta method)
transform() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.SeriesGroupBy method)
(pandas.core.resample.Resampler method)
(pandas.DataFrame method)
(pandas.Series method)
translate() (pandas.Series.str method)
transpose() (pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
truediv() (pandas.DataFrame method)
(pandas.Series method)
truncate() (pandas.DataFrame method)
(pandas.Series method)
tshift (pandas.core.groupby.DataFrameGroupBy property)
tshift() (pandas.DataFrame method)
(pandas.Series method)
type (pandas.api.extensions.ExtensionDtype property)
tz (pandas.DatetimeIndex property)
(pandas.DatetimeTZDtype property)
(pandas.Series.dt attribute)
(pandas.Timestamp property)
tz_convert() (pandas.DataFrame method)
(pandas.DatetimeIndex method)
(pandas.Series method)
(pandas.Series.dt method)
(pandas.Timestamp method)
tz_localize() (pandas.DataFrame method)
(pandas.DatetimeIndex method)
(pandas.Series method)
(pandas.Series.dt method)
(pandas.Timestamp method)
tzinfo (pandas.Timestamp attribute)
tzname() (pandas.Timestamp method)
U
UInt16Dtype (class in pandas)
UInt32Dtype (class in pandas)
UInt64Dtype (class in pandas)
UInt64Index (class in pandas)
UInt8Dtype (class in pandas)
UndefinedVariableError
union() (pandas.Index method)
union_categoricals() (in module pandas.api.types)
unique (pandas.core.groupby.SeriesGroupBy property)
unique() (in module pandas)
(pandas.api.extensions.ExtensionArray method)
(pandas.Index method)
(pandas.Series method)
unit (pandas.DatetimeTZDtype property)
UnsortedIndexError
unstack() (pandas.DataFrame method)
(pandas.Series method)
UnsupportedFunctionCall
update() (pandas.DataFrame method)
(pandas.Series method)
upper() (pandas.Series.str method)
use() (pandas.io.formats.style.Styler method)
utcfromtimestamp() (pandas.Timestamp class method)
utcnow() (pandas.Timestamp class method)
utcoffset() (pandas.Timestamp method)
utctimetuple() (pandas.Timestamp method)
V
value (pandas.Timedelta attribute)
(pandas.Timestamp attribute)
value_counts() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
value_labels() (pandas.io.stata.StataReader method)
ValueLabelTypeMismatch
values (pandas.DataFrame property)
(pandas.Index property)
(pandas.IntervalIndex property)
(pandas.Series property)
var() (pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.core.window.ewm.ExponentialMovingWindow method)
(pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.core.window.rolling.Window method)
(pandas.DataFrame method)
(pandas.Series method)
variable_labels() (pandas.io.stata.StataReader method)
VariableOffsetWindowIndexer (class in pandas.api.indexers)
variation (pandas.tseries.offsets.FY5253 attribute)
(pandas.tseries.offsets.FY5253Quarter attribute)
view() (pandas.api.extensions.ExtensionArray method)
(pandas.Index method)
(pandas.Series method)
(pandas.Timedelta method)
W
walk() (pandas.HDFStore method)
Week (class in pandas.tseries.offsets)
week (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
(pandas.tseries.offsets.LastWeekOfMonth attribute)
(pandas.tseries.offsets.WeekOfMonth attribute)
weekday (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.tseries.offsets.FY5253 attribute)
(pandas.tseries.offsets.FY5253Quarter attribute)
(pandas.tseries.offsets.LastWeekOfMonth attribute)
(pandas.tseries.offsets.Week attribute)
(pandas.tseries.offsets.WeekOfMonth attribute)
weekday() (pandas.Timestamp method)
weekmask (pandas.tseries.offsets.BusinessDay attribute)
(pandas.tseries.offsets.BusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessDay attribute)
(pandas.tseries.offsets.CustomBusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessMonthBegin attribute)
(pandas.tseries.offsets.CustomBusinessMonthEnd attribute)
WeekOfMonth (class in pandas.tseries.offsets)
weekofyear (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
where() (pandas.DataFrame method)
(pandas.Index method)
(pandas.io.formats.style.Styler method)
(pandas.Series method)
wide_to_long() (in module pandas)
wrap() (pandas.Series.str method)
write_cells() (pandas.ExcelWriter method)
write_file() (pandas.io.stata.StataWriter method)
X
xs() (pandas.DataFrame method)
(pandas.Series method)
Y
year (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
year_has_extra_week() (pandas.tseries.offsets.FY5253Quarter method)
YearBegin (class in pandas.tseries.offsets)
YearEnd (class in pandas.tseries.offsets)
Z
zfill() (pandas.Series.str method)
|
genindex.html
| null |
pandas.Series.str.get
|
`pandas.Series.str.get`
Extract element from each component at specified position or with specified key.
```
>>> s = pd.Series(["String",
... (1, 2, 3),
... ["a", "b", "c"],
... 123,
... -456,
... {1: "Hello", "2": "World"}])
>>> s
0 String
1 (1, 2, 3)
2 [a, b, c]
3 123
4 -456
5 {1: 'Hello', '2': 'World'}
dtype: object
```
|
Series.str.get(i)[source]#
Extract element from each component at specified position or with specified key.
Extract element from lists, tuples, dicts, or strings in each element in the
Series/Index.
Parameters
iint or hashable dict labelPosition or key of element to extract.
Returns
Series or Index
Examples
>>> s = pd.Series(["String",
... (1, 2, 3),
... ["a", "b", "c"],
... 123,
... -456,
... {1: "Hello", "2": "World"}])
>>> s
0 String
1 (1, 2, 3)
2 [a, b, c]
3 123
4 -456
5 {1: 'Hello', '2': 'World'}
dtype: object
>>> s.str.get(1)
0 t
1 2
2 b
3 NaN
4 NaN
5 Hello
dtype: object
>>> s.str.get(-1)
0 g
1 3
2 c
3 NaN
4 NaN
5 None
dtype: object
Return element with given key
>>> s = pd.Series([{"name": "Hello", "value": "World"},
... {"name": "Goodbye", "value": "Planet"}])
>>> s.str.get('name')
0 Hello
1 Goodbye
dtype: object
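A related usage note, added for illustration and not part of the original page: indexing the .str accessor by position routes through the same element-wise extraction, so s.str[i] behaves like s.str.get(i). A minimal sketch:
```
import pandas as pd

s = pd.Series(["ab", "cd"])
# Positional access through .str matches .str.get for non-slice keys.
print(s.str[1].equals(s.str.get(1)))  # True
```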
|
reference/api/pandas.Series.str.get.html
|
pandas.Period.freqstr
|
`pandas.Period.freqstr`
Return a string representation of the frequency.
|
Period.freqstr#
Return a string representation of the frequency.
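A minimal usage sketch, added for illustration (the value shown assumes a monthly period):
```
import pandas as pd

# freqstr returns the frequency alias of the Period.
p = pd.Period("2023-01", freq="M")
print(p.freqstr)  # 'M'
```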
|
reference/api/pandas.Period.freqstr.html
|
pandas.tseries.offsets.SemiMonthBegin.rule_code
|
pandas.tseries.offsets.SemiMonthBegin.rule_code
|
SemiMonthBegin.rule_code#
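A minimal usage sketch, added for illustration: rule_code is the frequency alias string associated with this offset, and the exact value depends on the day_of_month parameter.
```
from pandas.tseries.offsets import SemiMonthBegin

# Inspect the frequency alias used when this offset acts as a frequency.
offset = SemiMonthBegin(day_of_month=15)
print(offset.rule_code)
```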
|
reference/api/pandas.tseries.offsets.SemiMonthBegin.rule_code.html
|
User Guide
|
User Guide
|
The User Guide covers all of pandas by topic area. Each of the subsections
introduces a topic (such as “working with missing data”), and discusses how
pandas approaches the problem, with many examples throughout.
Users brand-new to pandas should start with 10 minutes to pandas.
For a high level summary of the pandas fundamentals, see Intro to data structures and Essential basic functionality.
Further information on any specific method can be obtained in the
API reference.
How to read these guides#
In these guides you will see input code inside code blocks such as:
import pandas as pd
pd.DataFrame({'A': [1, 2, 3]})
or:
In [1]: import pandas as pd
In [2]: pd.DataFrame({'A': [1, 2, 3]})
Out[2]:
A
0 1
1 2
2 3
The first block is a standard Python input, while in the second the In [1]: indicates the input is inside a notebook. In Jupyter Notebooks the last line is printed and plots are shown inline.
For example:
In [3]: a = 1
In [4]: a
Out[4]: 1
is equivalent to:
a = 1
print(a)
Guides#
10 minutes to pandas
Object creation
Viewing data
Selection
Missing data
Operations
Merge
Grouping
Reshaping
Time series
Categoricals
Plotting
Importing and exporting data
Gotchas
Intro to data structures
Series
DataFrame
Essential basic functionality
Head and tail
Attributes and underlying data
Accelerated operations
Flexible binary operations
Descriptive statistics
Function application
Reindexing and altering labels
Iteration
.dt accessor
Vectorized string methods
Sorting
Copying
dtypes
Selecting columns based on dtype
IO tools (text, CSV, HDF5, …)
CSV & text files
JSON
HTML
LaTeX
XML
Excel files
OpenDocument Spreadsheets
Binary Excel (.xlsb) files
Clipboard
Pickling
msgpack
HDF5 (PyTables)
Feather
Parquet
ORC
SQL queries
Google BigQuery
Stata format
SAS formats
SPSS formats
Other file formats
Performance considerations
Indexing and selecting data
Different choices for indexing
Basics
Attribute access
Slicing ranges
Selection by label
Selection by position
Selection by callable
Combining positional and label-based indexing
Indexing with list with missing labels is deprecated
Selecting random samples
Setting with enlargement
Fast scalar value getting and setting
Boolean indexing
Indexing with isin
The where() Method and Masking
Setting with enlargement conditionally using numpy()
The query() Method
Duplicate data
Dictionary-like get() method
Looking up values by index/column labels
Index objects
Set / reset index
Returning a view versus a copy
MultiIndex / advanced indexing
Hierarchical indexing (MultiIndex)
Advanced indexing with hierarchical index
Sorting a MultiIndex
Take methods
Index types
Miscellaneous indexing FAQ
Merge, join, concatenate and compare
Concatenating objects
Database-style DataFrame or named Series joining/merging
Timeseries friendly merging
Comparing objects
Reshaping and pivot tables
Reshaping by pivoting DataFrame objects
Reshaping by stacking and unstacking
Reshaping by melt
Combining with stats and GroupBy
Pivot tables
Cross tabulations
Tiling
Computing indicator / dummy variables
Factorizing values
Examples
Exploding a list-like column
Working with text data
Text data types
String methods
Splitting and replacing strings
Concatenation
Indexing with .str
Extracting substrings
Testing for strings that match or contain a pattern
Creating indicator variables
Method summary
Working with missing data
Values considered “missing”
Inserting missing data
Calculations with missing data
Sum/prod of empties/nans
NA values in GroupBy
Filling missing values: fillna
Filling with a PandasObject
Dropping axis labels with missing data: dropna
Interpolation
Replacing generic values
String/regular expression replacement
Numeric replacement
Experimental NA scalar to denote missing values
Duplicate Labels
Consequences of Duplicate Labels
Duplicate Label Detection
Disallowing Duplicate Labels
Categorical data
Object creation
CategoricalDtype
Description
Working with categories
Sorting and order
Comparisons
Operations
Data munging
Getting data in/out
Missing data
Differences to R’s factor
Gotchas
Nullable integer data type
Construction
Operations
Scalar NA Value
Nullable Boolean data type
Indexing with NA values
Kleene logical operations
Chart visualization
Basic plotting: plot
Other plots
Plotting with missing data
Plotting tools
Plot formatting
Plotting directly with Matplotlib
Plotting backends
Table Visualization
Styler Object and HTML
Formatting the Display
Methods to Add Styles
Table Styles
Setting Classes and Linking to External CSS
Styler Functions
Tooltips and Captions
Finer Control with Slicing
Optimization
Builtin Styles
Sharing styles
Limitations
Other Fun and Useful Stuff
Export to Excel
Export to LaTeX
More About CSS and HTML
Extensibility
Group by: split-apply-combine
Splitting an object into groups
Iterating through groups
Selecting a group
Aggregation
Transformation
Filtration
Dispatching to instance methods
Flexible apply
Numba Accelerated Routines
Other useful features
Examples
Windowing operations
Overview
Rolling window
Weighted window
Expanding window
Exponentially weighted window
Time series / date functionality
Overview
Timestamps vs. time spans
Converting to timestamps
Generating ranges of timestamps
Timestamp limitations
Indexing
Time/date components
DateOffset objects
Time Series-related instance methods
Resampling
Time span representation
Converting between representations
Representing out-of-bounds spans
Time zone handling
Time deltas
Parsing
Operations
Reductions
Frequency conversion
Attributes
TimedeltaIndex
Resampling
Options and settings
Overview
Available options
Getting and setting options
Setting startup options in Python/IPython environment
Frequently used options
Number formatting
Unicode formatting
Table schema display
Enhancing performance
Cython (writing C extensions for pandas)
Numba (JIT compilation)
Expression evaluation via eval()
Scaling to large datasets
Load less data
Use efficient datatypes
Use chunking
Use other libraries
Sparse data structures
SparseArray
SparseDtype
Sparse accessor
Sparse calculation
Migrating
Interaction with scipy.sparse
Frequently Asked Questions (FAQ)
DataFrame memory usage
Using if/truth statements with pandas
Mutating with User Defined Function (UDF) methods
NaN, Integer NA values and NA type promotions
Differences with NumPy
Thread-safety
Byte-ordering issues
Cookbook
Idioms
Selection
Multiindexing
Missing data
Grouping
Timeseries
Merge
Plotting
Data in/out
Computation
Timedeltas
Creating example data
|
user_guide/index.html
|
pandas.tseries.offsets.Hour.apply_index
|
`pandas.tseries.offsets.Hour.apply_index`
Vectorized apply of DateOffset to DatetimeIndex.
|
Hour.apply_index()#
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead.
Parameters
indexDatetimeIndex
Returns
DatetimeIndex
Raises
NotImplementedErrorWhen the specific offset subclass does not have a vectorized
implementation.
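A minimal sketch of the recommended replacement, added for illustration: instead of calling apply_index(), add the offset to the DatetimeIndex directly.
```
import pandas as pd
from pandas.tseries.offsets import Hour

# Adding the offset to a DatetimeIndex shifts every element by two hours.
dtindex = pd.date_range("2023-01-01", periods=3, freq="D")
print(dtindex + Hour(2))
```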
|
reference/api/pandas.tseries.offsets.Hour.apply_index.html
|
pandas.DataFrame.combine
|
`pandas.DataFrame.combine`
Perform column-wise combine with another DataFrame.
```
>>> df1 = pd.DataFrame({'A': [0, 0], 'B': [4, 4]})
>>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]})
>>> take_smaller = lambda s1, s2: s1 if s1.sum() < s2.sum() else s2
>>> df1.combine(df2, take_smaller)
A B
0 0 3
1 0 3
```
|
DataFrame.combine(other, func, fill_value=None, overwrite=True)[source]#
Perform column-wise combine with another DataFrame.
Combines a DataFrame with other DataFrame using func
to element-wise combine columns. The row and column indexes of the
resulting DataFrame will be the union of the two.
Parameters
otherDataFrameThe DataFrame to merge column-wise.
funcfunctionFunction that takes two series as inputs and returns a Series or a
scalar. Used to merge the two dataframes column by column.
fill_valuescalar value, default NoneThe value to fill NaNs with prior to passing any column to the
merge func.
overwritebool, default TrueIf True, columns in self that do not exist in other will be
overwritten with NaNs.
Returns
DataFrameCombination of the provided DataFrames.
See also
DataFrame.combine_firstCombine two DataFrame objects and default to non-null values in frame calling the method.
Examples
Combine using a simple function that chooses the smaller column.
>>> df1 = pd.DataFrame({'A': [0, 0], 'B': [4, 4]})
>>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]})
>>> take_smaller = lambda s1, s2: s1 if s1.sum() < s2.sum() else s2
>>> df1.combine(df2, take_smaller)
A B
0 0 3
1 0 3
Example using a true element-wise combine function.
>>> df1 = pd.DataFrame({'A': [5, 0], 'B': [2, 4]})
>>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]})
>>> df1.combine(df2, np.minimum)
A B
0 1 2
1 0 3
Using fill_value fills Nones prior to passing the column to the
merge function.
>>> df1 = pd.DataFrame({'A': [0, 0], 'B': [None, 4]})
>>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]})
>>> df1.combine(df2, take_smaller, fill_value=-5)
A B
0 0 -5.0
1 0 4.0
However, if the same element in both dataframes is None, that None
is preserved
>>> df1 = pd.DataFrame({'A': [0, 0], 'B': [None, 4]})
>>> df2 = pd.DataFrame({'A': [1, 1], 'B': [None, 3]})
>>> df1.combine(df2, take_smaller, fill_value=-5)
A B
0 0 -5.0
1 0 3.0
Example that demonstrates the use of overwrite and behavior when
the axes differ between the dataframes.
>>> df1 = pd.DataFrame({'A': [0, 0], 'B': [4, 4]})
>>> df2 = pd.DataFrame({'B': [3, 3], 'C': [-10, 1], }, index=[1, 2])
>>> df1.combine(df2, take_smaller)
A B C
0 NaN NaN NaN
1 NaN 3.0 -10.0
2 NaN 3.0 1.0
>>> df1.combine(df2, take_smaller, overwrite=False)
A B C
0 0.0 NaN NaN
1 0.0 3.0 -10.0
2 NaN 3.0 1.0
Demonstrating the preference of the passed in dataframe.
>>> df2 = pd.DataFrame({'B': [3, 3], 'C': [1, 1], }, index=[1, 2])
>>> df2.combine(df1, take_smaller)
A B C
0 0.0 NaN NaN
1 0.0 3.0 NaN
2 NaN 3.0 NaN
>>> df2.combine(df1, take_smaller, overwrite=False)
A B C
0 0.0 NaN NaN
1 0.0 3.0 1.0
2 NaN 3.0 1.0
|
reference/api/pandas.DataFrame.combine.html
|
Testing
|
Assertion functions#
testing.assert_frame_equal(left, right[, ...])
Check that left and right DataFrame are equal.
testing.assert_series_equal(left, right[, ...])
Check that left and right Series are equal.
testing.assert_index_equal(left, right[, ...])
Check that left and right Index are equal.
testing.assert_extension_array_equal(left, right)
Check that left and right ExtensionArrays are equal.
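A minimal usage sketch, added for illustration: the assertion helpers return None when the objects match and raise AssertionError otherwise.
```
import pandas as pd

# Compare two equal frames; a mismatch would raise AssertionError instead.
left = pd.DataFrame({"a": [1, 2, 3]})
right = pd.DataFrame({"a": [1, 2, 3]})
pd.testing.assert_frame_equal(left, right)
print("frames are equal")
```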
Exceptions and warnings#
errors.AbstractMethodError(class_instance[, ...])
Raise this error instead of NotImplementedError for abstract methods.
errors.AccessorRegistrationWarning
Warning for attribute conflicts in accessor registration.
errors.AttributeConflictWarning
Warning raised when index attributes conflict when using HDFStore.
errors.CategoricalConversionWarning
Warning raised when reading a partially labeled Stata file using an iterator.
errors.ClosedFileError
Exception is raised when trying to perform an operation on a closed HDFStore file.
errors.CSSWarning
Warning raised when converting CSS styling fails.
errors.DatabaseError
Error raised when executing SQL with bad syntax or SQL that throws an error.
errors.DataError
Exception raised when performing an operation on non-numerical data.
errors.DtypeWarning
Warning raised when reading different dtypes in a column from a file.
errors.DuplicateLabelError
Error raised when an operation would introduce duplicate labels.
errors.EmptyDataError
Exception raised in pd.read_csv when empty data or header is encountered.
errors.IncompatibilityWarning
Warning raised when trying to use where criteria on an incompatible HDF5 file.
errors.IndexingError
Exception is raised when trying to index and there is a mismatch in dimensions.
errors.InvalidColumnName
Warning raised by to_stata when the column contains a non-valid Stata name.
errors.InvalidIndexError
Exception raised when attempting to use an invalid index key.
errors.IntCastingNaNError
Exception raised when converting (astype) an array with NaN to an integer type.
errors.MergeError
Exception raised when merging data.
errors.NullFrequencyError
Exception raised when a freq cannot be null.
errors.NumbaUtilError
Error raised for unsupported Numba engine routines.
errors.NumExprClobberingError
Exception raised when trying to use a built-in numexpr name as a variable name.
errors.OptionError
Exception raised for pandas.options.
errors.OutOfBoundsDatetime
Raised when the datetime is outside the range that can be represented.
errors.OutOfBoundsTimedelta
Raised when encountering a timedelta value that cannot be represented.
errors.ParserError
Exception that is raised by an error encountered in parsing file contents.
errors.ParserWarning
Warning raised when reading a file that doesn't use the default 'c' parser.
errors.PerformanceWarning
Warning raised when there is a possible performance impact.
errors.PossibleDataLossError
Exception raised when trying to open an HDFStore file that is already open.
errors.PossiblePrecisionLoss
Warning raised by to_stata on a column with a value outside or equal to the int64 range.
errors.PyperclipException
Exception raised when clipboard functionality is unsupported.
errors.PyperclipWindowsException(message)
Exception raised when clipboard functionality is unsupported by Windows.
errors.SettingWithCopyError
Exception raised when trying to set on a copied slice from a DataFrame.
errors.SettingWithCopyWarning
Warning raised when trying to set on a copied slice from a DataFrame.
errors.SpecificationError
Exception raised by agg when the functions are ill-specified.
errors.UndefinedVariableError(name[, is_local])
Exception raised by query or eval when using an undefined variable name.
errors.UnsortedIndexError
Error raised when slicing a MultiIndex which has not been lexsorted.
errors.UnsupportedFunctionCall
Exception raised when attempting to call an unsupported numpy function.
errors.ValueLabelTypeMismatch
Warning raised by to_stata on a category column that contains non-string values.
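A minimal sketch, added for illustration, of catching one of these classes; pandas error and warning classes live in pandas.errors and behave like ordinary exceptions.
```
import io
import pandas as pd
from pandas.errors import EmptyDataError

# Parsing an empty buffer raises EmptyDataError, which we catch here.
try:
    pd.read_csv(io.StringIO(""))
except EmptyDataError as exc:
    print(f"caught: {exc}")
```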
Bug report function#
show_versions([as_json])
Provide useful information, important for bug reports.
Test suite runner#
test([extra_args])
Run the pandas test suite using pytest.
|
reference/testing.html
| null |
pandas.Series.truediv
|
`pandas.Series.truediv`
Return floating division of series and other, element-wise (binary operator truediv).
```
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.divide(b, fill_value=0)
a 1.0
b inf
c inf
d 0.0
e NaN
dtype: float64
```
|
Series.truediv(other, level=None, fill_value=None, axis=0)[source]#
Return floating division of series and other, element-wise (binary operator truediv).
Equivalent to series / other, but with support to substitute a fill_value for
missing data in either one of the inputs.
Parameters
otherSeries or scalar value
levelint or nameBroadcast across a level, matching Index values on the
passed MultiIndex level.
fill_valueNone or float value, default None (NaN)Fill existing missing (NaN) values, and any new element needed for
successful Series alignment, with this value before computation.
If data in both corresponding Series locations is missing
the result of filling (at that location) will be missing.
axis{0 or ‘index’}Unused. Parameter needed for compatibility with DataFrame.
Returns
SeriesThe result of the operation.
See also
Series.rtruedivReverse of the floating division operator, see Python documentation for more details.
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.divide(b, fill_value=0)
a 1.0
b inf
c inf
d 0.0
e NaN
dtype: float64
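A further sketch, added for illustration, contrasting the plain operator with fill_value: without fill_value a missing input propagates NaN, while with fill_value=0 the missing side is treated as 0 before dividing.
```
import numpy as np
import pandas as pd

a = pd.Series([1.0, np.nan], index=["x", "y"])
b = pd.Series([2.0, 4.0], index=["x", "y"])
print(a / b)                        # y is NaN because a['y'] is missing
print(a.truediv(b, fill_value=0))   # y becomes 0.0 / 4.0 == 0.0
```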
|
reference/api/pandas.Series.truediv.html
|
pandas.io.formats.style.Styler.template_html_style
|
pandas.io.formats.style.Styler.template_html_style
|
Styler.template_html_style = <Template 'html_style.tpl'>#
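A minimal inspection sketch, added for illustration (the template filename shown is the expected default): template_html_style is the Jinja2 template that renders the <style> block of Styler.to_html() output.
```
from pandas.io.formats.style import Styler

# Inspect the class-level template object without rendering anything.
print(type(Styler.template_html_style))
print(Styler.template_html_style.name)  # expected: 'html_style.tpl'
```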
|
reference/api/pandas.io.formats.style.Styler.template_html_style.html
|
MultiIndex / advanced indexing
|
MultiIndex / advanced indexing
This section covers indexing with a MultiIndex
and other advanced indexing features.
See the Indexing and Selecting Data for general indexing documentation.
Warning
Whether a copy or a reference is returned for a setting operation may
depend on the context. This is sometimes called chained assignment and
should be avoided. See Returning a View versus Copy.
See the cookbook for some advanced strategies.
|
This section covers indexing with a MultiIndex
and other advanced indexing features.
See the Indexing and Selecting Data for general indexing documentation.
Warning
Whether a copy or a reference is returned for a setting operation may
depend on the context. This is sometimes called chained assignment and
should be avoided. See Returning a View versus Copy.
See the cookbook for some advanced strategies.
Hierarchical indexing (MultiIndex)#
Hierarchical / Multi-level indexing opens the door to sophisticated data
analysis and manipulation, especially for working with
higher dimensional data. In essence, it enables you to store and manipulate
data with an arbitrary number of dimensions in lower dimensional data
structures like Series (1d) and DataFrame (2d).
In this section, we will show what exactly we mean by “hierarchical” indexing
and how it integrates with all of the pandas indexing functionality
described above and in prior sections. Later, when discussing group by and pivoting and reshaping data, we’ll show
non-trivial applications to illustrate how it aids in structuring data for
analysis.
See the cookbook for some advanced strategies.
Creating a MultiIndex (hierarchical index) object#
The MultiIndex object is the hierarchical analogue of the standard
Index object which typically stores the axis labels in pandas objects. You
can think of MultiIndex as an array of tuples where each tuple is unique. A
MultiIndex can be created from a list of arrays (using
MultiIndex.from_arrays()), an array of tuples (using
MultiIndex.from_tuples()), a crossed set of iterables (using
MultiIndex.from_product()), or a DataFrame (using
MultiIndex.from_frame()). The Index constructor will attempt to return
a MultiIndex when it is passed a list of tuples. The following examples
demonstrate different ways to initialize MultiIndexes.
In [1]: arrays = [
...: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
...: ["one", "two", "one", "two", "one", "two", "one", "two"],
...: ]
...:
In [2]: tuples = list(zip(*arrays))
In [3]: tuples
Out[3]:
[('bar', 'one'),
('bar', 'two'),
('baz', 'one'),
('baz', 'two'),
('foo', 'one'),
('foo', 'two'),
('qux', 'one'),
('qux', 'two')]
In [4]: index = pd.MultiIndex.from_tuples(tuples, names=["first", "second"])
In [5]: index
Out[5]:
MultiIndex([('bar', 'one'),
('bar', 'two'),
('baz', 'one'),
('baz', 'two'),
('foo', 'one'),
('foo', 'two'),
('qux', 'one'),
('qux', 'two')],
names=['first', 'second'])
In [6]: s = pd.Series(np.random.randn(8), index=index)
In [7]: s
Out[7]:
first second
bar one 0.469112
two -0.282863
baz one -1.509059
two -1.135632
foo one 1.212112
two -0.173215
qux one 0.119209
two -1.044236
dtype: float64
When you want every pairing of the elements in two iterables, it can be easier
to use the MultiIndex.from_product() method:
In [8]: iterables = [["bar", "baz", "foo", "qux"], ["one", "two"]]
In [9]: pd.MultiIndex.from_product(iterables, names=["first", "second"])
Out[9]:
MultiIndex([('bar', 'one'),
('bar', 'two'),
('baz', 'one'),
('baz', 'two'),
('foo', 'one'),
('foo', 'two'),
('qux', 'one'),
('qux', 'two')],
names=['first', 'second'])
You can also construct a MultiIndex from a DataFrame directly, using
the method MultiIndex.from_frame(). This is a complementary method to
MultiIndex.to_frame().
In [10]: df = pd.DataFrame(
....: [["bar", "one"], ["bar", "two"], ["foo", "one"], ["foo", "two"]],
....: columns=["first", "second"],
....: )
....:
In [11]: pd.MultiIndex.from_frame(df)
Out[11]:
MultiIndex([('bar', 'one'),
('bar', 'two'),
('foo', 'one'),
('foo', 'two')],
names=['first', 'second'])
As a convenience, you can pass a list of arrays directly into Series or
DataFrame to construct a MultiIndex automatically:
In [12]: arrays = [
....: np.array(["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"]),
....: np.array(["one", "two", "one", "two", "one", "two", "one", "two"]),
....: ]
....:
In [13]: s = pd.Series(np.random.randn(8), index=arrays)
In [14]: s
Out[14]:
bar one -0.861849
two -2.104569
baz one -0.494929
two 1.071804
foo one 0.721555
two -0.706771
qux one -1.039575
two 0.271860
dtype: float64
In [15]: df = pd.DataFrame(np.random.randn(8, 4), index=arrays)
In [16]: df
Out[16]:
0 1 2 3
bar one -0.424972 0.567020 0.276232 -1.087401
two -0.673690 0.113648 -1.478427 0.524988
baz one 0.404705 0.577046 -1.715002 -1.039268
two -0.370647 -1.157892 -1.344312 0.844885
foo one 1.075770 -0.109050 1.643563 -1.469388
two 0.357021 -0.674600 -1.776904 -0.968914
qux one -1.294524 0.413738 0.276662 -0.472035
two -0.013960 -0.362543 -0.006154 -0.923061
All of the MultiIndex constructors accept a names argument which stores
string names for the levels themselves. If no names are provided, None will
be assigned:
In [17]: df.index.names
Out[17]: FrozenList([None, None])
This index can back any axis of a pandas object, and the number of levels
of the index is up to you:
In [18]: df = pd.DataFrame(np.random.randn(3, 8), index=["A", "B", "C"], columns=index)
In [19]: df
Out[19]:
first bar baz ... foo qux
second one two one ... two one two
A 0.895717 0.805244 -1.206412 ... 1.340309 -1.170299 -0.226169
B 0.410835 0.813850 0.132003 ... -1.187678 1.130127 -1.436737
C -1.413681 1.607920 1.024180 ... -2.211372 0.974466 -2.006747
[3 rows x 8 columns]
In [20]: pd.DataFrame(np.random.randn(6, 6), index=index[:6], columns=index[:6])
Out[20]:
first bar baz foo
second one two one two one two
first second
bar one -0.410001 -0.078638 0.545952 -1.219217 -1.226825 0.769804
two -1.281247 -0.727707 -0.121306 -0.097883 0.695775 0.341734
baz one 0.959726 -1.110336 -0.619976 0.149748 -0.732339 0.687738
two 0.176444 0.403310 -0.154951 0.301624 -2.179861 -1.369849
foo one -0.954208 1.462696 -1.743161 -0.826591 -0.345352 1.314232
two 0.690579 0.995761 2.396780 0.014871 3.357427 -0.317441
We’ve “sparsified” the higher levels of the indexes to make the console output a
bit easier on the eyes. Note that how the index is displayed can be controlled using the
display.multi_sparse option in pandas.set_option():
In [21]: with pd.option_context("display.multi_sparse", False):
....: df
....:
It’s worth keeping in mind that there’s nothing preventing you from using
tuples as atomic labels on an axis:
In [22]: pd.Series(np.random.randn(8), index=tuples)
Out[22]:
(bar, one) -1.236269
(bar, two) 0.896171
(baz, one) -0.487602
(baz, two) -0.082240
(foo, one) -2.182937
(foo, two) 0.380396
(qux, one) 0.084844
(qux, two) 0.432390
dtype: float64
The reason that the MultiIndex matters is that it can allow you to do
grouping, selection, and reshaping operations as we will describe below and in
subsequent areas of the documentation. As you will see in later sections, you
can find yourself working with hierarchically-indexed data without creating a
MultiIndex explicitly yourself. However, when loading data from a file, you
may wish to generate your own MultiIndex when preparing the data set.
Reconstructing the level labels#
The method get_level_values() will return a vector of the labels for each
location at a particular level:
In [23]: index.get_level_values(0)
Out[23]: Index(['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'], dtype='object', name='first')
In [24]: index.get_level_values("second")
Out[24]: Index(['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two'], dtype='object', name='second')
Basic indexing on axis with MultiIndex#
One of the important features of hierarchical indexing is that you can select
data by a “partial” label identifying a subgroup in the data. Partial
selection “drops” levels of the hierarchical index in the result in a
completely analogous way to selecting a column in a regular DataFrame:
In [25]: df["bar"]
Out[25]:
second one two
A 0.895717 0.805244
B 0.410835 0.813850
C -1.413681 1.607920
In [26]: df["bar", "one"]
Out[26]:
A 0.895717
B 0.410835
C -1.413681
Name: (bar, one), dtype: float64
In [27]: df["bar"]["one"]
Out[27]:
A 0.895717
B 0.410835
C -1.413681
Name: one, dtype: float64
In [28]: s["qux"]
Out[28]:
one -1.039575
two 0.271860
dtype: float64
See Cross-section with hierarchical index for how to select
on a deeper level.
Defined levels#
The MultiIndex keeps all the defined levels of an index, even
if they are not actually used. When slicing an index, you may notice this.
For example:
In [29]: df.columns.levels # original MultiIndex
Out[29]: FrozenList([['bar', 'baz', 'foo', 'qux'], ['one', 'two']])
In [30]: df[["foo","qux"]].columns.levels # sliced
Out[30]: FrozenList([['bar', 'baz', 'foo', 'qux'], ['one', 'two']])
This is done to avoid a recomputation of the levels in order to make slicing
highly performant. If you want to see only the used levels, you can use the
get_level_values() method.
In [31]: df[["foo", "qux"]].columns.to_numpy()
Out[31]:
array([('foo', 'one'), ('foo', 'two'), ('qux', 'one'), ('qux', 'two')],
dtype=object)
# for a specific level
In [32]: df[["foo", "qux"]].columns.get_level_values(0)
Out[32]: Index(['foo', 'foo', 'qux', 'qux'], dtype='object', name='first')
To reconstruct the MultiIndex with only the used levels, the
remove_unused_levels() method may be used.
In [33]: new_mi = df[["foo", "qux"]].columns.remove_unused_levels()
In [34]: new_mi.levels
Out[34]: FrozenList([['foo', 'qux'], ['one', 'two']])
Data alignment and using reindex#
Operations between differently-indexed objects having MultiIndex on the
axes will work as you expect; data alignment will work the same as an Index of
tuples:
In [35]: s + s[:-2]
Out[35]:
bar one -1.723698
two -4.209138
baz one -0.989859
two 2.143608
foo one 1.443110
two -1.413542
qux one NaN
two NaN
dtype: float64
In [36]: s + s[::2]
Out[36]:
bar one -1.723698
two NaN
baz one -0.989859
two NaN
foo one 1.443110
two NaN
qux one -2.079150
two NaN
dtype: float64
The reindex() method of Series/DataFrames can be
called with another MultiIndex, or even a list or array of tuples:
In [37]: s.reindex(index[:3])
Out[37]:
first second
bar one -0.861849
two -2.104569
baz one -0.494929
dtype: float64
In [38]: s.reindex([("foo", "two"), ("bar", "one"), ("qux", "one"), ("baz", "one")])
Out[38]:
foo two -0.706771
bar one -0.861849
qux one -1.039575
baz one -0.494929
dtype: float64
Advanced indexing with hierarchical index#
Syntactically integrating MultiIndex in advanced indexing with .loc is a
bit challenging, but we’ve made every effort to do so. In general, MultiIndex
keys take the form of tuples. For example, the following works as you would expect:
In [39]: df = df.T
In [40]: df
Out[40]:
A B C
first second
bar one 0.895717 0.410835 -1.413681
two 0.805244 0.813850 1.607920
baz one -1.206412 0.132003 1.024180
two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372
qux one -1.170299 1.130127 0.974466
two -0.226169 -1.436737 -2.006747
In [41]: df.loc[("bar", "two")]
Out[41]:
A 0.805244
B 0.813850
C 1.607920
Name: (bar, two), dtype: float64
Note that df.loc['bar', 'two'] would also work in this example, but this shorthand
notation can lead to ambiguity in general.
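As an illustrative sketch (dfa below is a hypothetical frame, not used elsewhere in this guide), the explicit tuple form stays unambiguous even when a second-level row key also happens to be a column label:
```
>>> dfa = pd.DataFrame(
...     np.random.randn(4, 2),
...     index=pd.MultiIndex.from_product([["bar", "baz"], ["one", "two"]]),
...     columns=["one", "two"],
... )
>>> dfa.loc[("bar", "two"), :]  # unambiguously the row ("bar", "two"), all columns
>>> dfa.loc["bar", "two"]       # shorthand; how "two" is interpreted depends on the labels present
```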
If you also want to index a specific column with .loc, you must use a tuple
like this:
In [42]: df.loc[("bar", "two"), "A"]
Out[42]: 0.8052440253863785
You don’t have to specify all levels of the MultiIndex by passing only the
first elements of the tuple. For example, you can use “partial” indexing to
get all elements with bar in the first level as follows:
In [43]: df.loc["bar"]
Out[43]:
A B C
second
one 0.895717 0.410835 -1.413681
two 0.805244 0.813850 1.607920
This is a shortcut for the slightly more verbose notation df.loc[('bar',),] (equivalent
to df.loc['bar',] in this example).
“Partial” slicing also works quite nicely.
In [44]: df.loc["baz":"foo"]
Out[44]:
A B C
first second
baz one -1.206412 0.132003 1.024180
two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372
You can slice with a ‘range’ of values, by providing a slice of tuples.
In [45]: df.loc[("baz", "two"):("qux", "one")]
Out[45]:
A B C
first second
baz two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372
qux one -1.170299 1.130127 0.974466
In [46]: df.loc[("baz", "two"):"foo"]
Out[46]:
A B C
first second
baz two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372
Passing a list of labels or tuples works similar to reindexing:
In [47]: df.loc[[("bar", "two"), ("qux", "one")]]
Out[47]:
A B C
first second
bar two 0.805244 0.813850 1.607920
qux one -1.170299 1.130127 0.974466
Note
It is important to note that tuples and lists are not treated identically
in pandas when it comes to indexing. Whereas a tuple is interpreted as one
multi-level key, a list is used to specify several keys. Or in other words,
tuples go horizontally (traversing levels), lists go vertically (scanning levels).
Importantly, a list of tuples indexes several complete MultiIndex keys,
whereas a tuple of lists refer to several values within a level:
In [48]: s = pd.Series(
....: [1, 2, 3, 4, 5, 6],
....: index=pd.MultiIndex.from_product([["A", "B"], ["c", "d", "e"]]),
....: )
....:
In [49]: s.loc[[("A", "c"), ("B", "d")]] # list of tuples
Out[49]:
A c 1
B d 5
dtype: int64
In [50]: s.loc[(["A", "B"], ["c", "d"])] # tuple of lists
Out[50]:
A c 1
d 2
B c 4
d 5
dtype: int64
Using slicers#
You can slice a MultiIndex by providing multiple indexers.
You can provide any of the selectors as if you are indexing by label, see Selection by Label,
including slices, lists of labels, labels, and boolean indexers.
You can use slice(None) to select all the contents of that level. You do not need to specify all the
deeper levels, they will be implied as slice(None).
As usual, both sides of the slicers are included as this is label indexing.
Warning
You should specify all axes in the .loc specifier, meaning the indexer for the index and
for the columns. There are some ambiguous cases where the passed indexer could be misinterpreted
as indexing both axes, rather than into, say, the MultiIndex for the rows.
You should do this:
df.loc[(slice("A1", "A3"), ...), :] # noqa: E999
You should not do this:
df.loc[(slice("A1", "A3"), ...)] # noqa: E999
In [51]: def mklbl(prefix, n):
....: return ["%s%s" % (prefix, i) for i in range(n)]
....:
In [52]: miindex = pd.MultiIndex.from_product(
....: [mklbl("A", 4), mklbl("B", 2), mklbl("C", 4), mklbl("D", 2)]
....: )
....:
In [53]: micolumns = pd.MultiIndex.from_tuples(
....: [("a", "foo"), ("a", "bar"), ("b", "foo"), ("b", "bah")], names=["lvl0", "lvl1"]
....: )
....:
In [54]: dfmi = (
....: pd.DataFrame(
....: np.arange(len(miindex) * len(micolumns)).reshape(
....: (len(miindex), len(micolumns))
....: ),
....: index=miindex,
....: columns=micolumns,
....: )
....: .sort_index()
....: .sort_index(axis=1)
....: )
....:
In [55]: dfmi
Out[55]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C0 D0 1 0 3 2
D1 5 4 7 6
C1 D0 9 8 11 10
D1 13 12 15 14
C2 D0 17 16 19 18
... ... ... ... ...
A3 B1 C1 D1 237 236 239 238
C2 D0 241 240 243 242
D1 245 244 247 246
C3 D0 249 248 251 250
D1 253 252 255 254
[64 rows x 4 columns]
Basic MultiIndex slicing using slices, lists, and labels.
In [56]: dfmi.loc[(slice("A1", "A3"), slice(None), ["C1", "C3"]), :]
Out[56]:
lvl0 a b
lvl1 bar foo bah foo
A1 B0 C1 D0 73 72 75 74
D1 77 76 79 78
C3 D0 89 88 91 90
D1 93 92 95 94
B1 C1 D0 105 104 107 106
... ... ... ... ...
A3 B0 C3 D1 221 220 223 222
B1 C1 D0 233 232 235 234
D1 237 236 239 238
C3 D0 249 248 251 250
D1 253 252 255 254
[24 rows x 4 columns]
You can use pandas.IndexSlice to facilitate a more natural syntax
using :, rather than using slice(None).
In [57]: idx = pd.IndexSlice
In [58]: dfmi.loc[idx[:, :, ["C1", "C3"]], idx[:, "foo"]]
Out[58]:
lvl0 a b
lvl1 foo foo
A0 B0 C1 D0 8 10
D1 12 14
C3 D0 24 26
D1 28 30
B1 C1 D0 40 42
... ... ...
A3 B0 C3 D1 220 222
B1 C1 D0 232 234
D1 236 238
C3 D0 248 250
D1 252 254
[32 rows x 2 columns]
It is possible to perform quite complicated selections using this method on multiple
axes at the same time.
In [59]: dfmi.loc["A1", (slice(None), "foo")]
Out[59]:
lvl0 a b
lvl1 foo foo
B0 C0 D0 64 66
D1 68 70
C1 D0 72 74
D1 76 78
C2 D0 80 82
... ... ...
B1 C1 D1 108 110
C2 D0 112 114
D1 116 118
C3 D0 120 122
D1 124 126
[16 rows x 2 columns]
In [60]: dfmi.loc[idx[:, :, ["C1", "C3"]], idx[:, "foo"]]
Out[60]:
lvl0 a b
lvl1 foo foo
A0 B0 C1 D0 8 10
D1 12 14
C3 D0 24 26
D1 28 30
B1 C1 D0 40 42
... ... ...
A3 B0 C3 D1 220 222
B1 C1 D0 232 234
D1 236 238
C3 D0 248 250
D1 252 254
[32 rows x 2 columns]
Using a boolean indexer you can provide selection related to the values.
In [61]: mask = dfmi[("a", "foo")] > 200
In [62]: dfmi.loc[idx[mask, :, ["C1", "C3"]], idx[:, "foo"]]
Out[62]:
lvl0 a b
lvl1 foo foo
A3 B0 C1 D1 204 206
C3 D0 216 218
D1 220 222
B1 C1 D0 232 234
D1 236 238
C3 D0 248 250
D1 252 254
You can also specify the axis argument to .loc to interpret the passed
slicers on a single axis.
In [63]: dfmi.loc(axis=0)[:, :, ["C1", "C3"]]
Out[63]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C1 D0 9 8 11 10
D1 13 12 15 14
C3 D0 25 24 27 26
D1 29 28 31 30
B1 C1 D0 41 40 43 42
... ... ... ... ...
A3 B0 C3 D1 221 220 223 222
B1 C1 D0 233 232 235 234
D1 237 236 239 238
C3 D0 249 248 251 250
D1 253 252 255 254
[32 rows x 4 columns]
Furthermore, you can set the values using the following methods.
In [64]: df2 = dfmi.copy()
In [65]: df2.loc(axis=0)[:, :, ["C1", "C3"]] = -10
In [66]: df2
Out[66]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C0 D0 1 0 3 2
D1 5 4 7 6
C1 D0 -10 -10 -10 -10
D1 -10 -10 -10 -10
C2 D0 17 16 19 18
... ... ... ... ...
A3 B1 C1 D1 -10 -10 -10 -10
C2 D0 241 240 243 242
D1 245 244 247 246
C3 D0 -10 -10 -10 -10
D1 -10 -10 -10 -10
[64 rows x 4 columns]
You can use a right-hand-side of an alignable object as well.
In [67]: df2 = dfmi.copy()
In [68]: df2.loc[idx[:, :, ["C1", "C3"]], :] = df2 * 1000
In [69]: df2
Out[69]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C0 D0 1 0 3 2
D1 5 4 7 6
C1 D0 9000 8000 11000 10000
D1 13000 12000 15000 14000
C2 D0 17 16 19 18
... ... ... ... ...
A3 B1 C1 D1 237000 236000 239000 238000
C2 D0 241 240 243 242
D1 245 244 247 246
C3 D0 249000 248000 251000 250000
D1 253000 252000 255000 254000
[64 rows x 4 columns]
Cross-section#
The xs() method of DataFrame additionally takes a level argument to make
selecting data at a particular level of a MultiIndex easier.
In [70]: df
Out[70]:
A B C
first second
bar one 0.895717 0.410835 -1.413681
two 0.805244 0.813850 1.607920
baz one -1.206412 0.132003 1.024180
two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372
qux one -1.170299 1.130127 0.974466
two -0.226169 -1.436737 -2.006747
In [71]: df.xs("one", level="second")
Out[71]:
A B C
first
bar 0.895717 0.410835 -1.413681
baz -1.206412 0.132003 1.024180
foo 1.431256 -0.076467 0.875906
qux -1.170299 1.130127 0.974466
# using the slicers
In [72]: df.loc[(slice(None), "one"), :]
Out[72]:
A B C
first second
bar one 0.895717 0.410835 -1.413681
baz one -1.206412 0.132003 1.024180
foo one 1.431256 -0.076467 0.875906
qux one -1.170299 1.130127 0.974466
You can also select on the columns with xs, by
providing the axis argument.
In [73]: df = df.T
In [74]: df.xs("one", level="second", axis=1)
Out[74]:
first bar baz foo qux
A 0.895717 -1.206412 1.431256 -1.170299
B 0.410835 0.132003 -0.076467 1.130127
C -1.413681 1.024180 0.875906 0.974466
# using the slicers
In [75]: df.loc[:, (slice(None), "one")]
Out[75]:
first bar baz foo qux
second one one one one
A 0.895717 -1.206412 1.431256 -1.170299
B 0.410835 0.132003 -0.076467 1.130127
C -1.413681 1.024180 0.875906 0.974466
xs also allows selection with multiple keys.
In [76]: df.xs(("one", "bar"), level=("second", "first"), axis=1)
Out[76]:
first bar
second one
A 0.895717
B 0.410835
C -1.413681
# using the slicers
In [77]: df.loc[:, ("bar", "one")]
Out[77]:
A 0.895717
B 0.410835
C -1.413681
Name: (bar, one), dtype: float64
You can pass drop_level=False to xs to retain
the level that was selected.
In [78]: df.xs("one", level="second", axis=1, drop_level=False)
Out[78]:
first bar baz foo qux
second one one one one
A 0.895717 -1.206412 1.431256 -1.170299
B 0.410835 0.132003 -0.076467 1.130127
C -1.413681 1.024180 0.875906 0.974466
Compare the above with the result using drop_level=True (the default value).
In [79]: df.xs("one", level="second", axis=1, drop_level=True)
Out[79]:
first bar baz foo qux
A 0.895717 -1.206412 1.431256 -1.170299
B 0.410835 0.132003 -0.076467 1.130127
C -1.413681 1.024180 0.875906 0.974466
Advanced reindexing and alignment#
Using the parameter level in the reindex() and
align() methods of pandas objects is useful to broadcast
values across a level. For instance:
In [80]: midx = pd.MultiIndex(
....: levels=[["zero", "one"], ["x", "y"]], codes=[[1, 1, 0, 0], [1, 0, 1, 0]]
....: )
....:
In [81]: df = pd.DataFrame(np.random.randn(4, 2), index=midx)
In [82]: df
Out[82]:
0 1
one y 1.519970 -0.493662
x 0.600178 0.274230
zero y 0.132885 -0.023688
x 2.410179 1.450520
In [83]: df2 = df.groupby(level=0).mean()
In [84]: df2
Out[84]:
0 1
one 1.060074 -0.109716
zero 1.271532 0.713416
In [85]: df2.reindex(df.index, level=0)
Out[85]:
0 1
one y 1.060074 -0.109716
x 1.060074 -0.109716
zero y 1.271532 0.713416
x 1.271532 0.713416
# aligning
In [86]: df_aligned, df2_aligned = df.align(df2, level=0)
In [87]: df_aligned
Out[87]:
0 1
one y 1.519970 -0.493662
x 0.600178 0.274230
zero y 0.132885 -0.023688
x 2.410179 1.450520
In [88]: df2_aligned
Out[88]:
0 1
one y 1.060074 -0.109716
x 1.060074 -0.109716
zero y 1.271532 0.713416
x 1.271532 0.713416
Swapping levels with swaplevel#
The swaplevel() method can switch the order of two levels:
In [89]: df[:5]
Out[89]:
0 1
one y 1.519970 -0.493662
x 0.600178 0.274230
zero y 0.132885 -0.023688
x 2.410179 1.450520
In [90]: df[:5].swaplevel(0, 1, axis=0)
Out[90]:
0 1
y one 1.519970 -0.493662
x one 0.600178 0.274230
y zero 0.132885 -0.023688
x zero 2.410179 1.450520
Reordering levels with reorder_levels#
The reorder_levels() method generalizes the swaplevel
method, allowing you to permute the hierarchical index levels in one step:
In [91]: df[:5].reorder_levels([1, 0], axis=0)
Out[91]:
0 1
y one 1.519970 -0.493662
x one 0.600178 0.274230
y zero 0.132885 -0.023688
x zero 2.410179 1.450520
Renaming names of an Index or MultiIndex#
The rename() method is used to rename the labels of a
MultiIndex, and is typically used to rename the columns of a DataFrame.
The columns argument of rename allows a dictionary to be specified
that includes only the columns you wish to rename.
In [92]: df.rename(columns={0: "col0", 1: "col1"})
Out[92]:
col0 col1
one y 1.519970 -0.493662
x 0.600178 0.274230
zero y 0.132885 -0.023688
x 2.410179 1.450520
This method can also be used to rename specific labels of the main index
of the DataFrame.
In [93]: df.rename(index={"one": "two", "y": "z"})
Out[93]:
0 1
two z 1.519970 -0.493662
x 0.600178 0.274230
zero z 0.132885 -0.023688
x 2.410179 1.450520
The rename_axis() method is used to rename the name of a
Index or MultiIndex. In particular, the names of the levels of a
MultiIndex can be specified, which is useful if reset_index() is later
used to move the values from the MultiIndex to a column.
In [94]: df.rename_axis(index=["abc", "def"])
Out[94]:
0 1
abc def
one y 1.519970 -0.493662
x 0.600178 0.274230
zero y 0.132885 -0.023688
x 2.410179 1.450520
Note that the columns of a DataFrame are an index, so that using
rename_axis with the columns argument will change the name of that
index.
In [95]: df.rename_axis(columns="Cols").columns
Out[95]: RangeIndex(start=0, stop=2, step=1, name='Cols')
Both rename and rename_axis support specifying a dictionary,
Series or a mapping function to map labels/names to new values.
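For instance, a function mapper gives the same result as the dictionary used above (a minimal sketch reusing df):
```
>>> df.rename(columns=lambda c: "col" + str(c))  # function applied to each column label
```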
When working with an Index object directly, rather than via a DataFrame,
Index.set_names() can be used to change the names.
In [96]: mi = pd.MultiIndex.from_product([[1, 2], ["a", "b"]], names=["x", "y"])
In [97]: mi.names
Out[97]: FrozenList(['x', 'y'])
In [98]: mi2 = mi.rename("new name", level=0)
In [99]: mi2
Out[99]:
MultiIndex([(1, 'a'),
(1, 'b'),
(2, 'a'),
(2, 'b')],
names=['new name', 'y'])
You cannot set the names of the MultiIndex via a level.
In [100]: mi.levels[0].name = "name via level"
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[100], line 1
----> 1 mi.levels[0].name = "name via level"
File ~/work/pandas/pandas/pandas/core/indexes/base.py:1745, in Index.name(self, value)
1741 @name.setter
1742 def name(self, value: Hashable) -> None:
1743 if self._no_setting_name:
1744 # Used in MultiIndex.levels to avoid silently ignoring name updates.
-> 1745 raise RuntimeError(
1746 "Cannot set name on a level of a MultiIndex. Use "
1747 "'MultiIndex.set_names' instead."
1748 )
1749 maybe_extract_name(value, None, type(self))
1750 self._name = value
RuntimeError: Cannot set name on a level of a MultiIndex. Use 'MultiIndex.set_names' instead.
Use Index.set_names() instead.
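For example, a short sketch of the equivalent set_names() call on the mi defined above:
```
>>> mi.set_names("name via set_names", level=0)
MultiIndex([(1, 'a'),
            (1, 'b'),
            (2, 'a'),
            (2, 'b')],
           names=['name via set_names', 'y'])
```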
Sorting a MultiIndex#
For MultiIndex-ed objects to be indexed and sliced effectively,
they need to be sorted. As with any index, you can use sort_index().
In [101]: import random
In [102]: random.shuffle(tuples)
In [103]: s = pd.Series(np.random.randn(8), index=pd.MultiIndex.from_tuples(tuples))
In [104]: s
Out[104]:
baz two 0.206053
foo two -0.251905
bar one -2.213588
qux two 1.063327
baz one 1.266143
qux one 0.299368
foo one -0.863838
bar two 0.408204
dtype: float64
In [105]: s.sort_index()
Out[105]:
bar one -2.213588
two 0.408204
baz one 1.266143
two 0.206053
foo one -0.863838
two -0.251905
qux one 0.299368
two 1.063327
dtype: float64
In [106]: s.sort_index(level=0)
Out[106]:
bar one -2.213588
two 0.408204
baz one 1.266143
two 0.206053
foo one -0.863838
two -0.251905
qux one 0.299368
two 1.063327
dtype: float64
In [107]: s.sort_index(level=1)
Out[107]:
bar one -2.213588
baz one 1.266143
foo one -0.863838
qux one 0.299368
bar two 0.408204
baz two 0.206053
foo two -0.251905
qux two 1.063327
dtype: float64
You may also pass a level name to sort_index if the MultiIndex levels
are named.
In [108]: s.index.set_names(["L1", "L2"], inplace=True)
In [109]: s.sort_index(level="L1")
Out[109]:
L1 L2
bar one -2.213588
two 0.408204
baz one 1.266143
two 0.206053
foo one -0.863838
two -0.251905
qux one 0.299368
two 1.063327
dtype: float64
In [110]: s.sort_index(level="L2")
Out[110]:
L1 L2
bar one -2.213588
baz one 1.266143
foo one -0.863838
qux one 0.299368
bar two 0.408204
baz two 0.206053
foo two -0.251905
qux two 1.063327
dtype: float64
On higher dimensional objects, you can sort any of the other axes by level if
they have a MultiIndex:
In [111]: df.T.sort_index(level=1, axis=1)
Out[111]:
one zero one zero
x x y y
0 0.600178 2.410179 1.519970 0.132885
1 0.274230 1.450520 -0.493662 -0.023688
Indexing will work even if the data are not sorted, but will be rather
inefficient (and show a PerformanceWarning). It will also
return a copy of the data rather than a view:
In [112]: dfm = pd.DataFrame(
.....: {"jim": [0, 0, 1, 1], "joe": ["x", "x", "z", "y"], "jolie": np.random.rand(4)}
.....: )
.....:
In [113]: dfm = dfm.set_index(["jim", "joe"])
In [114]: dfm
Out[114]:
jolie
jim joe
0 x 0.490671
x 0.120248
1 z 0.537020
y 0.110968
In [4]: dfm.loc[(1, 'z')]
PerformanceWarning: indexing past lexsort depth may impact performance.
Out[4]:
jolie
jim joe
1 z 0.537020
Furthermore, if you try to index something that is not fully lexsorted, this can raise:
In [5]: dfm.loc[(0, 'y'):(1, 'z')]
UnsortedIndexError: 'Key length (2) was greater than MultiIndex lexsort depth (1)'
The is_monotonic_increasing attribute on a MultiIndex shows whether the
index is sorted:
In [115]: dfm.index.is_monotonic_increasing
Out[115]: False
In [116]: dfm = dfm.sort_index()
In [117]: dfm
Out[117]:
jolie
jim joe
0 x 0.490671
x 0.120248
1 y 0.110968
z 0.537020
In [118]: dfm.index.is_monotonic_increasing
Out[118]: True
And now selection works as expected.
In [119]: dfm.loc[(0, "y"):(1, "z")]
Out[119]:
jolie
jim joe
1 y 0.110968
z 0.537020
Take methods#
Similar to NumPy ndarrays, pandas Index, Series, and DataFrame also provide
the take() method that retrieves elements along a given axis at the given
indices. The given indices must be either a list or an ndarray of integer
index positions. take will also accept negative integers as relative positions to the end of the object.
In [120]: index = pd.Index(np.random.randint(0, 1000, 10))
In [121]: index
Out[121]: Int64Index([214, 502, 712, 567, 786, 175, 993, 133, 758, 329], dtype='int64')
In [122]: positions = [0, 9, 3]
In [123]: index[positions]
Out[123]: Int64Index([214, 329, 567], dtype='int64')
In [124]: index.take(positions)
Out[124]: Int64Index([214, 329, 567], dtype='int64')
In [125]: ser = pd.Series(np.random.randn(10))
In [126]: ser.iloc[positions]
Out[126]:
0 -0.179666
9 1.824375
3 0.392149
dtype: float64
In [127]: ser.take(positions)
Out[127]:
0 -0.179666
9 1.824375
3 0.392149
dtype: float64
For DataFrames, the given indices should be a 1d list or ndarray that specifies
row or column positions.
In [128]: frm = pd.DataFrame(np.random.randn(5, 3))
In [129]: frm.take([1, 4, 3])
Out[129]:
0 1 2
1 -1.237881 0.106854 -1.276829
4 0.629675 -1.425966 1.857704
3 0.979542 -1.633678 0.615855
In [130]: frm.take([0, 2], axis=1)
Out[130]:
0 2
0 0.595974 0.601544
1 -1.237881 -1.276829
2 -0.767101 1.499591
3 0.979542 0.615855
4 0.629675 1.857704
It is important to note that the take method on pandas objects is not
intended to work on boolean indices and may return unexpected results.
In [131]: arr = np.random.randn(10)
In [132]: arr.take([False, False, True, True])
Out[132]: array([-1.1935, -1.1935, 0.6775, 0.6775])
In [133]: arr[[0, 1]]
Out[133]: array([-1.1935, 0.6775])
In [134]: ser = pd.Series(np.random.randn(10))
In [135]: ser.take([False, False, True, True])
Out[135]:
0 0.233141
0 0.233141
1 -0.223540
1 -0.223540
dtype: float64
In [136]: ser.iloc[[0, 1]]
Out[136]:
0 0.233141
1 -0.223540
dtype: float64
Finally, as a small note on performance, because the take method handles
a narrower range of inputs, it can offer performance that is a good deal
faster than fancy indexing.
In [137]: arr = np.random.randn(10000, 5)
In [138]: indexer = np.arange(10000)
In [139]: random.shuffle(indexer)
In [140]: %timeit arr[indexer]
.....: %timeit arr.take(indexer, axis=0)
.....:
141 us +- 1.18 us per loop (mean +- std. dev. of 7 runs, 10,000 loops each)
43.6 us +- 1.01 us per loop (mean +- std. dev. of 7 runs, 10,000 loops each)
In [141]: ser = pd.Series(arr[:, 0])
In [142]: %timeit ser.iloc[indexer]
.....: %timeit ser.take(indexer)
.....:
71.3 us +- 2.24 us per loop (mean +- std. dev. of 7 runs, 10,000 loops each)
63.1 us +- 4.29 us per loop (mean +- std. dev. of 7 runs, 10,000 loops each)
Index types#
We have discussed MultiIndex in the previous sections pretty extensively.
Documentation about DatetimeIndex and PeriodIndex is shown here,
and documentation about TimedeltaIndex is found here.
In the following sub-sections we will highlight some other index types.
CategoricalIndex#
CategoricalIndex is a type of index that is useful for supporting
indexing with duplicates. This is a container around a Categorical
and allows efficient indexing and storage of an index with a large number of duplicated elements.
In [143]: from pandas.api.types import CategoricalDtype
In [144]: df = pd.DataFrame({"A": np.arange(6), "B": list("aabbca")})
In [145]: df["B"] = df["B"].astype(CategoricalDtype(list("cab")))
In [146]: df
Out[146]:
A B
0 0 a
1 1 a
2 2 b
3 3 b
4 4 c
5 5 a
In [147]: df.dtypes
Out[147]:
A int64
B category
dtype: object
In [148]: df["B"].cat.categories
Out[148]: Index(['c', 'a', 'b'], dtype='object')
Setting the index will create a CategoricalIndex.
In [149]: df2 = df.set_index("B")
In [150]: df2.index
Out[150]: CategoricalIndex(['a', 'a', 'b', 'b', 'c', 'a'], categories=['c', 'a', 'b'], ordered=False, dtype='category', name='B')
Indexing with __getitem__/.iloc/.loc works similarly to an Index with duplicates.
The indexers must be in the category or the operation will raise a KeyError.
In [151]: df2.loc["a"]
Out[151]:
A
B
a 0
a 1
a 5
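By contrast, asking for a label that is not among the categories raises, e.g. (a brief sketch):
```
>>> df2.loc["e"]
Traceback (most recent call last):
...
KeyError: 'e'
```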
The CategoricalIndex is preserved after indexing:
In [152]: df2.loc["a"].index
Out[152]: CategoricalIndex(['a', 'a', 'a'], categories=['c', 'a', 'b'], ordered=False, dtype='category', name='B')
Sorting the index will sort by the order of the categories (recall that we
created the index with CategoricalDtype(list('cab')), so the sorted
order is cab).
In [153]: df2.sort_index()
Out[153]:
A
B
c 4
a 0
a 1
a 5
b 2
b 3
Groupby operations on the index will preserve the index nature as well.
In [154]: df2.groupby(level=0).sum()
Out[154]:
A
B
c 4
a 6
b 5
In [155]: df2.groupby(level=0).sum().index
Out[155]: CategoricalIndex(['c', 'a', 'b'], categories=['c', 'a', 'b'], ordered=False, dtype='category', name='B')
Reindexing operations will return a resulting index based on the type of the passed
indexer. Passing a list will return a plain-old Index; indexing with
a Categorical will return a CategoricalIndex, indexed according to the categories
of the passed Categorical dtype. This allows one to arbitrarily index these even with
values not in the categories, similarly to how you can reindex any pandas index.
In [156]: df3 = pd.DataFrame(
.....: {"A": np.arange(3), "B": pd.Series(list("abc")).astype("category")}
.....: )
.....:
In [157]: df3 = df3.set_index("B")
In [158]: df3
Out[158]:
A
B
a 0
b 1
c 2
In [159]: df3.reindex(["a", "e"])
Out[159]:
A
B
a 0.0
e NaN
In [160]: df3.reindex(["a", "e"]).index
Out[160]: Index(['a', 'e'], dtype='object', name='B')
In [161]: df3.reindex(pd.Categorical(["a", "e"], categories=list("abe")))
Out[161]:
A
B
a 0.0
e NaN
In [162]: df3.reindex(pd.Categorical(["a", "e"], categories=list("abe"))).index
Out[162]: CategoricalIndex(['a', 'e'], categories=['a', 'b', 'e'], ordered=False, dtype='category', name='B')
Warning
Reshaping and Comparison operations on a CategoricalIndex must have the same categories
or a TypeError will be raised.
In [163]: df4 = pd.DataFrame({"A": np.arange(2), "B": list("ba")})
In [164]: df4["B"] = df4["B"].astype(CategoricalDtype(list("ab")))
In [165]: df4 = df4.set_index("B")
In [166]: df4.index
Out[166]: CategoricalIndex(['b', 'a'], categories=['a', 'b'], ordered=False, dtype='category', name='B')
In [167]: df5 = pd.DataFrame({"A": np.arange(2), "B": list("bc")})
In [168]: df5["B"] = df5["B"].astype(CategoricalDtype(list("bc")))
In [169]: df5 = df5.set_index("B")
In [170]: df5.index
Out[170]: CategoricalIndex(['b', 'c'], categories=['b', 'c'], ordered=False, dtype='category', name='B')
In [1]: pd.concat([df4, df5])
TypeError: categories must match existing categories when appending
Int64Index and RangeIndex#
Deprecated since version 1.4.0: In pandas 2.0, Index will become the default index type for numeric types
instead of Int64Index, Float64Index and UInt64Index and those index types
are therefore deprecated and will be removed in a future version.
RangeIndex will not be removed, as it represents an optimized version of an integer index.
Int64Index is a fundamental basic index in pandas. This is an immutable array
implementing an ordered, sliceable set.
RangeIndex is a sub-class of Int64Index that provides the default index for all NDFrame objects.
RangeIndex is an optimized version of Int64Index that can represent a monotonic ordered set. These are analogous to Python range types.
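For instance, a newly constructed object gets a RangeIndex by default (a quick sketch):
```
>>> pd.Series([10, 20, 30]).index
RangeIndex(start=0, stop=3, step=1)
```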
Float64Index#
Deprecated since version 1.4.0: Index will become the default index type for numeric types in the future
instead of Int64Index, Float64Index and UInt64Index and those index types
are therefore deprecated and will be removed in a future version of pandas.
RangeIndex will not be removed as it represents an optimized version of an integer index.
By default a Float64Index will be automatically created when passing floating, or mixed-integer-floating values in index creation.
This enables a pure label-based slicing paradigm that makes [] and .loc for scalar indexing and slicing work exactly the
same.
In [171]: indexf = pd.Index([1.5, 2, 3, 4.5, 5])
In [172]: indexf
Out[172]: Float64Index([1.5, 2.0, 3.0, 4.5, 5.0], dtype='float64')
In [173]: sf = pd.Series(range(5), index=indexf)
In [174]: sf
Out[174]:
1.5 0
2.0 1
3.0 2
4.5 3
5.0 4
dtype: int64
Scalar selection for [],.loc will always be label based. An integer will match an equal float index (e.g. 3 is equivalent to 3.0).
In [175]: sf[3]
Out[175]: 2
In [176]: sf[3.0]
Out[176]: 2
In [177]: sf.loc[3]
Out[177]: 2
In [178]: sf.loc[3.0]
Out[178]: 2
The only positional indexing is via iloc.
In [179]: sf.iloc[3]
Out[179]: 3
A scalar index that is not found will raise a KeyError.
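For example, with the sf defined above (a short sketch):
```
>>> sf.loc[2.5]
Traceback (most recent call last):
...
KeyError: 2.5
```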
Slicing is primarily on the values of the index when using [] and .loc, and
always positional when using iloc. The exception is when the slice is
boolean, in which case it will always be positional.
In [180]: sf[2:4]
Out[180]:
2.0 1
3.0 2
dtype: int64
In [181]: sf.loc[2:4]
Out[181]:
2.0 1
3.0 2
dtype: int64
In [182]: sf.iloc[2:4]
Out[182]:
3.0 2
4.5 3
dtype: int64
In float indexes, slicing using floats is allowed.
In [183]: sf[2.1:4.6]
Out[183]:
3.0 2
4.5 3
dtype: int64
In [184]: sf.loc[2.1:4.6]
Out[184]:
3.0 2
4.5 3
dtype: int64
In non-float indexes, slicing using floats will raise a TypeError.
In [1]: pd.Series(range(5))[3.5]
TypeError: the label [3.5] is not a proper indexer for this index type (Int64Index)
In [1]: pd.Series(range(5))[3.5:4.5]
TypeError: the slice start [3.5] is not a proper indexer for this index type (Int64Index)
Here is a typical use-case for using this type of indexing. Imagine that you have a somewhat
irregular timedelta-like indexing scheme, but the data is recorded as floats. This could, for
example, be millisecond offsets.
In [185]: dfir = pd.concat(
.....: [
.....: pd.DataFrame(
.....: np.random.randn(5, 2), index=np.arange(5) * 250.0, columns=list("AB")
.....: ),
.....: pd.DataFrame(
.....: np.random.randn(6, 2),
.....: index=np.arange(4, 10) * 250.1,
.....: columns=list("AB"),
.....: ),
.....: ]
.....: )
.....:
In [186]: dfir
Out[186]:
A B
0.0 -0.435772 -1.188928
250.0 -0.808286 -0.284634
500.0 -1.815703 1.347213
750.0 -0.243487 0.514704
1000.0 1.162969 -0.287725
1000.4 -0.179734 0.993962
1250.5 -0.212673 0.909872
1500.6 -0.733333 -0.349893
1750.7 0.456434 -0.306735
2000.8 0.553396 0.166221
2250.9 -0.101684 -0.734907
Selection operations then will always work on a value basis, for all selection operators.
In [187]: dfir[0:1000.4]
Out[187]:
A B
0.0 -0.435772 -1.188928
250.0 -0.808286 -0.284634
500.0 -1.815703 1.347213
750.0 -0.243487 0.514704
1000.0 1.162969 -0.287725
1000.4 -0.179734 0.993962
In [188]: dfir.loc[0:1001, "A"]
Out[188]:
0.0 -0.435772
250.0 -0.808286
500.0 -1.815703
750.0 -0.243487
1000.0 1.162969
1000.4 -0.179734
Name: A, dtype: float64
In [189]: dfir.loc[1000.4]
Out[189]:
A -0.179734
B 0.993962
Name: 1000.4, dtype: float64
You could retrieve the first 1 second (1000 ms) of data as such:
In [190]: dfir[0:1000]
Out[190]:
A B
0.0 -0.435772 -1.188928
250.0 -0.808286 -0.284634
500.0 -1.815703 1.347213
750.0 -0.243487 0.514704
1000.0 1.162969 -0.287725
If you need integer based selection, you should use iloc:
In [191]: dfir.iloc[0:5]
Out[191]:
A B
0.0 -0.435772 -1.188928
250.0 -0.808286 -0.284634
500.0 -1.815703 1.347213
750.0 -0.243487 0.514704
1000.0 1.162969 -0.287725
IntervalIndex#
IntervalIndex together with its own dtype, IntervalDtype
as well as the Interval scalar type, allow first-class support in pandas
for interval notation.
The IntervalIndex allows some unique indexing and is also used as a
return type for the categories in cut() and qcut().
Indexing with an IntervalIndex#
An IntervalIndex can be used in Series and in DataFrame as the index.
In [192]: df = pd.DataFrame(
.....: {"A": [1, 2, 3, 4]}, index=pd.IntervalIndex.from_breaks([0, 1, 2, 3, 4])
.....: )
.....:
In [193]: df
Out[193]:
A
(0, 1] 1
(1, 2] 2
(2, 3] 3
(3, 4] 4
Label based indexing via .loc along the edges of an interval works as you would expect,
selecting that particular interval.
In [194]: df.loc[2]
Out[194]:
A 2
Name: (1, 2], dtype: int64
In [195]: df.loc[[2, 3]]
Out[195]:
A
(1, 2] 2
(2, 3] 3
If you select a label contained within an interval, this will also select the interval.
In [196]: df.loc[2.5]
Out[196]:
A 3
Name: (2, 3], dtype: int64
In [197]: df.loc[[2.5, 3.5]]
Out[197]:
A
(2, 3] 3
(3, 4] 4
Selecting using an Interval will only return exact matches (starting from pandas 0.25.0).
In [198]: df.loc[pd.Interval(1, 2)]
Out[198]:
A 2
Name: (1, 2], dtype: int64
Trying to select an Interval that is not exactly contained in the IntervalIndex will raise a KeyError.
In [7]: df.loc[pd.Interval(0.5, 2.5)]
---------------------------------------------------------------------------
KeyError: Interval(0.5, 2.5, closed='right')
Selecting all Intervals that overlap a given Interval can be performed using the
overlaps() method to create a boolean indexer.
In [199]: idxr = df.index.overlaps(pd.Interval(0.5, 2.5))
In [200]: idxr
Out[200]: array([ True, True, True, False])
In [201]: df[idxr]
Out[201]:
A
(0, 1] 1
(1, 2] 2
(2, 3] 3
Binning data with cut and qcut#
cut() and qcut() both return a Categorical object, and the bins they
create are stored as an IntervalIndex in its .categories attribute.
In [202]: c = pd.cut(range(4), bins=2)
In [203]: c
Out[203]:
[(-0.003, 1.5], (-0.003, 1.5], (1.5, 3.0], (1.5, 3.0]]
Categories (2, interval[float64, right]): [(-0.003, 1.5] < (1.5, 3.0]]
In [204]: c.categories
Out[204]: IntervalIndex([(-0.003, 1.5], (1.5, 3.0]], dtype='interval[float64, right]')
cut() also accepts an IntervalIndex for its bins argument, which enables
a useful pandas idiom. First, we call cut() with some data and bins set to a
fixed number, to generate the bins. Then, we pass the values of .categories as the
bins argument in subsequent calls to cut(), supplying new data which will be
binned into the same bins.
In [205]: pd.cut([0, 3, 5, 1], bins=c.categories)
Out[205]:
[(-0.003, 1.5], (1.5, 3.0], NaN, (-0.003, 1.5]]
Categories (2, interval[float64, right]): [(-0.003, 1.5] < (1.5, 3.0]]
Any value which falls outside all bins will be assigned a NaN value.
Generating ranges of intervals#
If we need intervals on a regular frequency, we can use the interval_range() function
to create an IntervalIndex using various combinations of start, end, and periods.
The default frequency for interval_range is 1 for numeric intervals, and calendar day for
datetime-like intervals:
In [206]: pd.interval_range(start=0, end=5)
Out[206]: IntervalIndex([(0, 1], (1, 2], (2, 3], (3, 4], (4, 5]], dtype='interval[int64, right]')
In [207]: pd.interval_range(start=pd.Timestamp("2017-01-01"), periods=4)
Out[207]: IntervalIndex([(2017-01-01, 2017-01-02], (2017-01-02, 2017-01-03], (2017-01-03, 2017-01-04], (2017-01-04, 2017-01-05]], dtype='interval[datetime64[ns], right]')
In [208]: pd.interval_range(end=pd.Timedelta("3 days"), periods=3)
Out[208]: IntervalIndex([(0 days 00:00:00, 1 days 00:00:00], (1 days 00:00:00, 2 days 00:00:00], (2 days 00:00:00, 3 days 00:00:00]], dtype='interval[timedelta64[ns], right]')
The freq parameter can be used to specify non-default frequencies, and can utilize a variety
of frequency aliases with datetime-like intervals:
In [209]: pd.interval_range(start=0, periods=5, freq=1.5)
Out[209]: IntervalIndex([(0.0, 1.5], (1.5, 3.0], (3.0, 4.5], (4.5, 6.0], (6.0, 7.5]], dtype='interval[float64, right]')
In [210]: pd.interval_range(start=pd.Timestamp("2017-01-01"), periods=4, freq="W")
Out[210]: IntervalIndex([(2017-01-01, 2017-01-08], (2017-01-08, 2017-01-15], (2017-01-15, 2017-01-22], (2017-01-22, 2017-01-29]], dtype='interval[datetime64[ns], right]')
In [211]: pd.interval_range(start=pd.Timedelta("0 days"), periods=3, freq="9H")
Out[211]: IntervalIndex([(0 days 00:00:00, 0 days 09:00:00], (0 days 09:00:00, 0 days 18:00:00], (0 days 18:00:00, 1 days 03:00:00]], dtype='interval[timedelta64[ns], right]')
Additionally, the closed parameter can be used to specify which side(s) the intervals
are closed on. Intervals are closed on the right side by default.
In [212]: pd.interval_range(start=0, end=4, closed="both")
Out[212]: IntervalIndex([[0, 1], [1, 2], [2, 3], [3, 4]], dtype='interval[int64, both]')
In [213]: pd.interval_range(start=0, end=4, closed="neither")
Out[213]: IntervalIndex([(0, 1), (1, 2), (2, 3), (3, 4)], dtype='interval[int64, neither]')
Specifying start, end, and periods will generate a range of evenly spaced
intervals from start to end inclusively, with periods number of elements
in the resulting IntervalIndex:
In [214]: pd.interval_range(start=0, end=6, periods=4)
Out[214]: IntervalIndex([(0.0, 1.5], (1.5, 3.0], (3.0, 4.5], (4.5, 6.0]], dtype='interval[float64, right]')
In [215]: pd.interval_range(pd.Timestamp("2018-01-01"), pd.Timestamp("2018-02-28"), periods=3)
Out[215]: IntervalIndex([(2018-01-01, 2018-01-20 08:00:00], (2018-01-20 08:00:00, 2018-02-08 16:00:00], (2018-02-08 16:00:00, 2018-02-28]], dtype='interval[datetime64[ns], right]')
Miscellaneous indexing FAQ#
Integer indexing#
Label-based indexing with integer axis labels is a thorny topic. It has been
discussed heavily on mailing lists and among various members of the scientific
Python community. In pandas, our general viewpoint is that labels matter more
than integer locations. Therefore, with an integer axis index only
label-based indexing is possible with the standard tools like .loc. The
following code will generate exceptions:
In [216]: s = pd.Series(range(5))
In [217]: s[-1]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File ~/work/pandas/pandas/pandas/core/indexes/range.py:391, in RangeIndex.get_loc(self, key, method, tolerance)
390 try:
--> 391 return self._range.index(new_key)
392 except ValueError as err:
ValueError: -1 is not in range
The above exception was the direct cause of the following exception:
KeyError Traceback (most recent call last)
Cell In[217], line 1
----> 1 s[-1]
File ~/work/pandas/pandas/pandas/core/series.py:981, in Series.__getitem__(self, key)
978 return self._values[key]
980 elif key_is_scalar:
--> 981 return self._get_value(key)
983 if is_hashable(key):
984 # Otherwise index.get_value will raise InvalidIndexError
985 try:
986 # For labels that don't resolve as scalars like tuples and frozensets
File ~/work/pandas/pandas/pandas/core/series.py:1089, in Series._get_value(self, label, takeable)
1086 return self._values[label]
1088 # Similar to Index.get_value, but we do not fall back to positional
-> 1089 loc = self.index.get_loc(label)
1090 return self.index._get_values_for_loc(self, loc, label)
File ~/work/pandas/pandas/pandas/core/indexes/range.py:393, in RangeIndex.get_loc(self, key, method, tolerance)
391 return self._range.index(new_key)
392 except ValueError as err:
--> 393 raise KeyError(key) from err
394 self._check_indexing_error(key)
395 raise KeyError(key)
KeyError: -1
In [218]: df = pd.DataFrame(np.random.randn(5, 4))
In [219]: df
Out[219]:
0 1 2 3
0 -0.130121 -0.476046 0.759104 0.213379
1 -0.082641 0.448008 0.656420 -1.051443
2 0.594956 -0.151360 -0.069303 1.221431
3 -0.182832 0.791235 0.042745 2.069775
4 1.446552 0.019814 -1.389212 -0.702312
In [220]: df.loc[-2:]
Out[220]:
0 1 2 3
0 -0.130121 -0.476046 0.759104 0.213379
1 -0.082641 0.448008 0.656420 -1.051443
2 0.594956 -0.151360 -0.069303 1.221431
3 -0.182832 0.791235 0.042745 2.069775
4 1.446552 0.019814 -1.389212 -0.702312
This deliberate decision was made to prevent ambiguities and subtle bugs (many
users reported finding bugs when the API change was made to stop “falling back”
on position-based indexing).
Non-monotonic indexes require exact matches#
If the index of a Series or DataFrame is monotonically increasing or decreasing, then the bounds
of a label-based slice can be outside the range of the index, much like slice indexing a
normal Python list. Monotonicity of an index can be tested with the is_monotonic_increasing() and
is_monotonic_decreasing() attributes.
In [221]: df = pd.DataFrame(index=[2, 3, 3, 4, 5], columns=["data"], data=list(range(5)))
In [222]: df.index.is_monotonic_increasing
Out[222]: True
# no rows 0 or 1, but still returns rows 2, 3 (both of them), and 4:
In [223]: df.loc[0:4, :]
Out[223]:
data
2 0
3 1
3 2
4 3
# slice bounds are outside the index, so an empty DataFrame is returned
In [224]: df.loc[13:15, :]
Out[224]:
Empty DataFrame
Columns: [data]
Index: []
On the other hand, if the index is not monotonic, then both slice bounds must be
unique members of the index.
In [225]: df = pd.DataFrame(index=[2, 3, 1, 4, 3, 5], columns=["data"], data=list(range(6)))
In [226]: df.index.is_monotonic_increasing
Out[226]: False
# OK because 2 and 4 are in the index
In [227]: df.loc[2:4, :]
Out[227]:
data
2 0
3 1
1 2
4 3
# 0 is not in the index
In [9]: df.loc[0:4, :]
KeyError: 0
# 3 is not a unique label
In [11]: df.loc[2:3, :]
KeyError: 'Cannot get right slice bound for non-unique label: 3'
Index.is_monotonic_increasing and Index.is_monotonic_decreasing only check that
an index is weakly monotonic. To check for strict monotonicity, you can combine one of those with
the is_unique() attribute.
In [228]: weakly_monotonic = pd.Index(["a", "b", "c", "c"])
In [229]: weakly_monotonic
Out[229]: Index(['a', 'b', 'c', 'c'], dtype='object')
In [230]: weakly_monotonic.is_monotonic_increasing
Out[230]: True
In [231]: weakly_monotonic.is_monotonic_increasing & weakly_monotonic.is_unique
Out[231]: False
Endpoints are inclusive#
Compared with standard Python sequence slicing in which the slice endpoint is
not inclusive, label-based slicing in pandas is inclusive. The primary
reason for this is that it is often not possible to easily determine the
“successor” or next element after a particular label in an index. For example,
consider the following Series:
In [232]: s = pd.Series(np.random.randn(6), index=list("abcdef"))
In [233]: s
Out[233]:
a 0.301379
b 1.240445
c -0.846068
d -0.043312
e -1.658747
f -0.819549
dtype: float64
Suppose we wished to slice from c to e, using integers this would be
accomplished as such:
In [234]: s[2:5]
Out[234]:
c -0.846068
d -0.043312
e -1.658747
dtype: float64
However, if you only had c and e, determining the next element in the
index can be somewhat complicated. For example, the following does not work:
s.loc['c':'e' + 1]
A very common use case is to limit a time series to start and end at two
specific dates. To enable this, we made the design choice to make label-based
slicing include both endpoints:
In [235]: s.loc["c":"e"]
Out[235]:
c -0.846068
d -0.043312
e -1.658747
dtype: float64
This is most definitely a “practicality beats purity” sort of thing, but it is
something to watch out for if you expect label-based slicing to behave exactly
in the way that standard Python integer slicing works.
Indexing potentially changes underlying Series dtype#
Different indexing operations can potentially change the dtype of a Series.
In [236]: series1 = pd.Series([1, 2, 3])
In [237]: series1.dtype
Out[237]: dtype('int64')
In [238]: res = series1.reindex([0, 4])
In [239]: res.dtype
Out[239]: dtype('float64')
In [240]: res
Out[240]:
0 1.0
4 NaN
dtype: float64
In [241]: series2 = pd.Series([True])
In [242]: series2.dtype
Out[242]: dtype('bool')
In [243]: res = series2.reindex_like(series1)
In [244]: res.dtype
Out[244]: dtype('O')
In [245]: res
Out[245]:
0 True
1 NaN
2 NaN
dtype: object
This is because the (re)indexing operations above silently insert NaNs and the dtype
changes accordingly. This can cause some issues when using numpy ufuncs
such as numpy.logical_and.
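One hedged workaround sketch: restore a proper boolean dtype before handing the result to a ufunc (filling with False here is an assumption about what is appropriate for the data):
```
>>> res = series2.reindex_like(series1)  # object dtype with NaNs, as above
>>> res.fillna(False).astype(bool)       # back to a usable boolean mask
0     True
1    False
2    False
dtype: bool
```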
See GH2388 for a more
detailed discussion.
|
user_guide/advanced.html
|
pandas.io.formats.style.Styler.hide_index
|
`pandas.io.formats.style.Styler.hide_index`
Hide the entire index, or specific keys in the index from rendering.
|
Styler.hide_index(subset=None, level=None, names=False)[source]#
Hide the entire index, or specific keys in the index from rendering.
This method has dual functionality:
if subset is None then the entire index, or specified levels, will
be hidden whilst displaying all data-rows.
if a subset is given then those specific rows will be hidden whilst the
index itself remains visible.
Changed in version 1.3.0.
Deprecated since version 1.4.0: This method should be replaced by hide(axis="index", **kwargs)
Parameters
subsetlabel, array-like, IndexSlice, optionalA valid 1d input or single key along the index axis within
DataFrame.loc[<subset>, :], to limit data to before applying
the function.
levelint, str, listThe level(s) to hide in a MultiIndex if hiding the entire index. Cannot be
used simultaneously with subset.
New in version 1.4.0.
namesboolWhether to hide the index name(s), in the case the index or part of it
remains visible.
New in version 1.4.0.
Returns
selfStyler
See also
Styler.hideHide the entire index / columns, or specific rows / columns.
|
reference/api/pandas.io.formats.style.Styler.hide_index.html
|
API reference
|
This page gives an overview of all public pandas objects, functions and
methods. All classes and functions exposed in pandas.* namespace are public.
Some subpackages are public which include pandas.errors,
pandas.plotting, and pandas.testing. Public functions in
pandas.io and pandas.tseries submodules are mentioned in
the documentation. pandas.api.types subpackage holds some
public functions related to data types in pandas.
Warning
The pandas.core, pandas.compat, and pandas.util top-level modules are PRIVATE. Stable functionality in such modules is not guaranteed.
Input/output
Pickling
Flat file
Clipboard
Excel
JSON
HTML
XML
Latex
HDFStore: PyTables (HDF5)
Feather
Parquet
ORC
SAS
SPSS
SQL
Google BigQuery
STATA
General functions
Data manipulations
Top-level missing data
Top-level dealing with numeric data
Top-level dealing with datetimelike data
Top-level dealing with Interval data
Top-level evaluation
Hashing
Importing from other DataFrame libraries
Series
Constructor
Attributes
Conversion
Indexing, iteration
Binary operator functions
Function application, GroupBy & window
Computations / descriptive stats
Reindexing / selection / label manipulation
Missing data handling
Reshaping, sorting
Combining / comparing / joining / merging
Time Series-related
Accessors
Plotting
Serialization / IO / conversion
DataFrame
Constructor
Attributes and underlying data
Conversion
Indexing, iteration
Binary operator functions
Function application, GroupBy & window
Computations / descriptive stats
Reindexing / selection / label manipulation
Missing data handling
Reshaping, sorting, transposing
Combining / comparing / joining / merging
Time Series-related
Flags
Metadata
Plotting
Sparse accessor
Serialization / IO / conversion
pandas arrays, scalars, and data types
Objects
Utilities
Index objects
Index
Numeric Index
CategoricalIndex
IntervalIndex
MultiIndex
DatetimeIndex
TimedeltaIndex
PeriodIndex
Date offsets
DateOffset
BusinessDay
BusinessHour
CustomBusinessDay
CustomBusinessHour
MonthEnd
MonthBegin
BusinessMonthEnd
BusinessMonthBegin
CustomBusinessMonthEnd
CustomBusinessMonthBegin
SemiMonthEnd
SemiMonthBegin
Week
WeekOfMonth
LastWeekOfMonth
BQuarterEnd
BQuarterBegin
QuarterEnd
QuarterBegin
BYearEnd
BYearBegin
YearEnd
YearBegin
FY5253
FY5253Quarter
Easter
Tick
Day
Hour
Minute
Second
Milli
Micro
Nano
Frequencies
pandas.tseries.frequencies.to_offset
Window
Rolling window functions
Weighted window functions
Expanding window functions
Exponentially-weighted window functions
Window indexer
GroupBy
Indexing, iteration
Function application
Computations / descriptive stats
Resampling
Indexing, iteration
Function application
Upsampling
Computations / descriptive stats
Style
Styler constructor
Styler properties
Style application
Builtin styles
Style export and import
Plotting
pandas.plotting.andrews_curves
pandas.plotting.autocorrelation_plot
pandas.plotting.bootstrap_plot
pandas.plotting.boxplot
pandas.plotting.deregister_matplotlib_converters
pandas.plotting.lag_plot
pandas.plotting.parallel_coordinates
pandas.plotting.plot_params
pandas.plotting.radviz
pandas.plotting.register_matplotlib_converters
pandas.plotting.scatter_matrix
pandas.plotting.table
Options and settings
Working with options
Extensions
pandas.api.extensions.register_extension_dtype
pandas.api.extensions.register_dataframe_accessor
pandas.api.extensions.register_series_accessor
pandas.api.extensions.register_index_accessor
pandas.api.extensions.ExtensionDtype
pandas.api.extensions.ExtensionArray
pandas.arrays.PandasArray
pandas.api.indexers.check_array_indexer
Testing
Assertion functions
Exceptions and warnings
Bug report function
Test suite runner
|
reference/index.html
|
pandas.tseries.offsets.CustomBusinessMonthEnd.name
|
`pandas.tseries.offsets.CustomBusinessMonthEnd.name`
Return a string representing the base frequency.
Examples
```
>>> pd.offsets.Hour().name
'H'
```
|
CustomBusinessMonthEnd.name#
Return a string representing the base frequency.
Examples
>>> pd.offsets.Hour().name
'H'
>>> pd.offsets.Hour(5).name
'H'
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthEnd.name.html
|
pandas.Series.rfloordiv
|
`pandas.Series.rfloordiv`
Return Integer division of series and other, element-wise (binary operator rfloordiv).
```
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.floordiv(b, fill_value=0)
a 1.0
b NaN
c NaN
d 0.0
e NaN
dtype: float64
```
|
Series.rfloordiv(other, level=None, fill_value=None, axis=0)[source]#
Return Integer division of series and other, element-wise (binary operator rfloordiv).
Equivalent to other // series, but with support to substitute a fill_value for
missing data in either one of the inputs.
Parameters
otherSeries or scalar value
levelint or nameBroadcast across a level, matching Index values on the
passed MultiIndex level.
fill_valueNone or float value, default None (NaN)Fill existing missing (NaN) values, and any new element needed for
successful Series alignment, with this value before computation.
If data in both corresponding Series locations is missing
the result of filling (at that location) will be missing.
axis{0 or ‘index’}Unused. Parameter needed for compatibility with DataFrame.
Returns
SeriesThe result of the operation.
See also
Series.floordivElement-wise Integer division, see Python documentation for more details.
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.floordiv(b, fill_value=0)
a 1.0
b NaN
c NaN
d 0.0
e NaN
dtype: float64
|
reference/api/pandas.Series.rfloordiv.html
|
pandas.tseries.offsets.Easter.rollback
|
`pandas.tseries.offsets.Easter.rollback`
Roll provided date backward to next offset only if not on offset.
|
Easter.rollback()#
Roll provided date backward to next offset only if not on offset.
Returns
TimeStampRolled timestamp if not on offset, otherwise unchanged timestamp.
|
reference/api/pandas.tseries.offsets.Easter.rollback.html
|
pandas.Series.dot
|
`pandas.Series.dot`
Compute the dot product between the Series and the columns of other.
```
>>> s = pd.Series([0, 1, 2, 3])
>>> other = pd.Series([-1, 2, -3, 4])
>>> s.dot(other)
8
>>> s @ other
8
>>> df = pd.DataFrame([[0, 1], [-2, 3], [4, -5], [6, 7]])
>>> s.dot(df)
0 24
1 14
dtype: int64
>>> arr = np.array([[0, 1], [-2, 3], [4, -5], [6, 7]])
>>> s.dot(arr)
array([24, 14])
```
|
Series.dot(other)[source]#
Compute the dot product between the Series and the columns of other.
This method computes the dot product between the Series and another
one, or the Series and each column of a DataFrame, or the Series and
each column of an array.
It can also be called using self @ other in Python >= 3.5.
Parameters
otherSeries, DataFrame or array-likeThe other object to compute the dot product with its columns.
Returns
scalar, Series or numpy.ndarrayReturn the dot product of the Series and other if other is a
Series, the Series of the dot product of Series and each row of
other if other is a DataFrame or a numpy.ndarray between the Series
and each column of the numpy array.
See also
DataFrame.dotCompute the matrix product with the DataFrame.
Series.mulMultiplication of series and other, element-wise.
Notes
The Series and other have to share the same index if other is a Series
or a DataFrame.
Examples
>>> s = pd.Series([0, 1, 2, 3])
>>> other = pd.Series([-1, 2, -3, 4])
>>> s.dot(other)
8
>>> s @ other
8
>>> df = pd.DataFrame([[0, 1], [-2, 3], [4, -5], [6, 7]])
>>> s.dot(df)
0 24
1 14
dtype: int64
>>> arr = np.array([[0, 1], [-2, 3], [4, -5], [6, 7]])
>>> s.dot(arr)
array([24, 14])
|
reference/api/pandas.Series.dot.html
|
pandas.tseries.offsets.YearBegin.isAnchored
|
pandas.tseries.offsets.YearBegin.isAnchored
|
YearBegin.isAnchored()#
|
reference/api/pandas.tseries.offsets.YearBegin.isAnchored.html
|
pandas.tseries.offsets.FY5253.is_quarter_end
|
`pandas.tseries.offsets.FY5253.is_quarter_end`
Return boolean whether a timestamp occurs on the quarter end.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
```
|
FY5253.is_quarter_end()#
Return boolean whether a timestamp occurs on the quarter end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
|
reference/api/pandas.tseries.offsets.FY5253.is_quarter_end.html
|
pandas.tseries.offsets.Nano.__call__
|
`pandas.tseries.offsets.Nano.__call__`
Call self as a function.
|
Nano.__call__(*args, **kwargs)#
Call self as a function.
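As a hedged sketch: calling the offset applies it to a timestamp, although in recent pandas versions adding the offset is the recommended (non-deprecated) spelling.
```python
import pandas as pd

ts = pd.Timestamp("2022-01-01")
# Calling the offset applies it to the timestamp...
pd.offsets.Nano(5)(ts)        # -> Timestamp('2022-01-01 00:00:00.000000005')
# ...but the addition form is the usual spelling.
ts + pd.offsets.Nano(5)       # same result
```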
|
reference/api/pandas.tseries.offsets.Nano.__call__.html
|
pandas.api.types.union_categoricals
|
`pandas.api.types.union_categoricals`
Combine list-like of Categorical-like, unioning categories.
```
>>> a = pd.Categorical(["b", "c"])
>>> b = pd.Categorical(["a", "b"])
>>> pd.api.types.union_categoricals([a, b])
['b', 'c', 'a', 'b']
Categories (3, object): ['b', 'c', 'a']
```
|
pandas.api.types.union_categoricals(to_union, sort_categories=False, ignore_order=False)[source]#
Combine list-like of Categorical-like, unioning categories.
All categories must have the same dtype.
Parameters
to_unionlist-likeCategorical, CategoricalIndex, or Series with dtype=’category’.
sort_categoriesbool, default FalseIf true, resulting categories will be lexsorted, otherwise
they will be ordered as they appear in the data.
ignore_orderbool, default FalseIf true, the ordered attribute of the Categoricals will be ignored.
Results in an unordered categorical.
Returns
Categorical
Raises
TypeError
all inputs do not have the same dtype
all inputs do not have the same ordered property
all inputs are ordered and their categories are not identical
sort_categories=True and Categoricals are ordered
ValueErrorEmpty list of categoricals passed
Notes
To learn more about categories, see link
Examples
If you want to combine categoricals that do not necessarily have
the same categories, union_categoricals will combine a list-like
of categoricals. The new categories will be the union of the
categories being combined.
>>> a = pd.Categorical(["b", "c"])
>>> b = pd.Categorical(["a", "b"])
>>> pd.api.types.union_categoricals([a, b])
['b', 'c', 'a', 'b']
Categories (3, object): ['b', 'c', 'a']
By default, the resulting categories will be ordered as they appear
in the categories of the data. If you want the categories to be
lexsorted, use sort_categories=True argument.
>>> pd.api.types.union_categoricals([a, b], sort_categories=True)
['b', 'c', 'a', 'b']
Categories (3, object): ['a', 'b', 'c']
union_categoricals also works with the case of combining two
categoricals of the same categories and order information (e.g. what
you could also use append for).
>>> a = pd.Categorical(["a", "b"], ordered=True)
>>> b = pd.Categorical(["a", "b", "a"], ordered=True)
>>> pd.api.types.union_categoricals([a, b])
['a', 'b', 'a', 'b', 'a']
Categories (2, object): ['a' < 'b']
Raises TypeError because the categories are ordered and not identical.
>>> a = pd.Categorical(["a", "b"], ordered=True)
>>> b = pd.Categorical(["a", "b", "c"], ordered=True)
>>> pd.api.types.union_categoricals([a, b])
Traceback (most recent call last):
...
TypeError: to union ordered Categoricals, all categories must be the same
New in version 0.20.0
Ordered categoricals with different categories or orderings can be
combined by using the ignore_order=True argument.
>>> a = pd.Categorical(["a", "b", "c"], ordered=True)
>>> b = pd.Categorical(["c", "b", "a"], ordered=True)
>>> pd.api.types.union_categoricals([a, b], ignore_order=True)
['a', 'b', 'c', 'c', 'b', 'a']
Categories (3, object): ['a', 'b', 'c']
union_categoricals also works with a CategoricalIndex, or Series
containing categorical data, but note that the resulting array will
always be a plain Categorical
>>> a = pd.Series(["b", "c"], dtype='category')
>>> b = pd.Series(["a", "b"], dtype='category')
>>> pd.api.types.union_categoricals([a, b])
['b', 'c', 'a', 'b']
Categories (3, object): ['b', 'c', 'a']
|
reference/api/pandas.api.types.union_categoricals.html
|
pandas.Series.take
|
`pandas.Series.take`
Return the elements in the given positional indices along an axis.
This means that we are not indexing according to actual values in
the index attribute of the object. We are indexing according to the
actual position of the element in the object.
```
>>> df = pd.DataFrame([('falcon', 'bird', 389.0),
... ('parrot', 'bird', 24.0),
... ('lion', 'mammal', 80.5),
... ('monkey', 'mammal', np.nan)],
... columns=['name', 'class', 'max_speed'],
... index=[0, 2, 3, 1])
>>> df
name class max_speed
0 falcon bird 389.0
2 parrot bird 24.0
3 lion mammal 80.5
1 monkey mammal NaN
```
|
Series.take(indices, axis=0, is_copy=None, **kwargs)[source]#
Return the elements in the given positional indices along an axis.
This means that we are not indexing according to actual values in
the index attribute of the object. We are indexing according to the
actual position of the element in the object.
Parameters
indicesarray-likeAn array of ints indicating which positions to take.
axis{0 or ‘index’, 1 or ‘columns’, None}, default 0The axis on which to select elements. 0 means that we are
selecting rows, 1 means that we are selecting columns.
For Series this parameter is unused and defaults to 0.
is_copyboolBefore pandas 1.0, is_copy=False can be specified to ensure
that the return value is an actual copy. Starting with pandas 1.0,
take always returns a copy, and the keyword is therefore
deprecated.
Deprecated since version 1.0.0.
**kwargsFor compatibility with numpy.take(). Has no effect on the
output.
Returns
takensame type as callerAn array-like containing the elements taken from the object.
See also
DataFrame.locSelect a subset of a DataFrame by labels.
DataFrame.ilocSelect a subset of a DataFrame by positions.
numpy.takeTake elements from an array along an axis.
Examples
>>> df = pd.DataFrame([('falcon', 'bird', 389.0),
... ('parrot', 'bird', 24.0),
... ('lion', 'mammal', 80.5),
... ('monkey', 'mammal', np.nan)],
... columns=['name', 'class', 'max_speed'],
... index=[0, 2, 3, 1])
>>> df
name class max_speed
0 falcon bird 389.0
2 parrot bird 24.0
3 lion mammal 80.5
1 monkey mammal NaN
Take elements at positions 0 and 3 along the axis 0 (default).
Note how the actual indices selected (0 and 1) do not correspond to
our selected indices 0 and 3. That’s because we are selecting the 0th
and 3rd rows, not rows whose indices equal 0 and 3.
>>> df.take([0, 3])
name class max_speed
0 falcon bird 389.0
1 monkey mammal NaN
Take elements at indices 1 and 2 along the axis 1 (column selection).
>>> df.take([1, 2], axis=1)
class max_speed
0 bird 389.0
2 bird 24.0
3 mammal 80.5
1 mammal NaN
We may take elements using negative integers for positive indices,
starting from the end of the object, just like with Python lists.
>>> df.take([-1, -2])
name class max_speed
1 monkey mammal NaN
3 lion mammal 80.5
|
reference/api/pandas.Series.take.html
|
pandas.Timedelta.delta
|
`pandas.Timedelta.delta`
Return the timedelta in nanoseconds (ns), for internal compatibility.
```
>>> td = pd.Timedelta('1 days 42 ns')
>>> td.delta
86400000000042
```
|
Timedelta.delta#
Return the timedelta in nanoseconds (ns), for internal compatibility.
Deprecated since version 1.5.0: This argument is deprecated.
Returns
intTimedelta in nanoseconds.
Examples
>>> td = pd.Timedelta('1 days 42 ns')
>>> td.delta
86400000000042
>>> td = pd.Timedelta('3 s')
>>> td.delta
3000000000
>>> td = pd.Timedelta('3 ms 5 us')
>>> td.delta
3005000
>>> td = pd.Timedelta(42, unit='ns')
>>> td.delta
42
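Since delta is deprecated, a sketch of a non-deprecated way to obtain the same nanosecond count (assuming a nanosecond-resolution Timedelta):
```python
import pandas as pd

td = pd.Timedelta('1 days 42 ns')
td.value   # -> 86400000000042, the timedelta in nanoseconds
```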
|
reference/api/pandas.Timedelta.delta.html
|
General functions
|
General functions
|
Data manipulations#
melt(frame[, id_vars, value_vars, var_name, ...])
Unpivot a DataFrame from wide to long format, optionally leaving identifiers set.
pivot(data, *[, index, columns, values])
Return reshaped DataFrame organized by given index / column values.
pivot_table(data[, values, index, columns, ...])
Create a spreadsheet-style pivot table as a DataFrame.
crosstab(index, columns[, values, rownames, ...])
Compute a simple cross tabulation of two (or more) factors.
cut(x, bins[, right, labels, retbins, ...])
Bin values into discrete intervals.
qcut(x, q[, labels, retbins, precision, ...])
Quantile-based discretization function.
merge(left, right[, how, on, left_on, ...])
Merge DataFrame or named Series objects with a database-style join.
merge_ordered(left, right[, on, left_on, ...])
Perform a merge for ordered data with optional filling/interpolation.
merge_asof(left, right[, on, left_on, ...])
Perform a merge by key distance.
concat(objs, *[, axis, join, ignore_index, ...])
Concatenate pandas objects along a particular axis.
get_dummies(data[, prefix, prefix_sep, ...])
Convert categorical variable into dummy/indicator variables.
from_dummies(data[, sep, default_category])
Create a categorical DataFrame from a DataFrame of dummy variables.
factorize(values[, sort, na_sentinel, ...])
Encode the object as an enumerated type or categorical variable.
unique(values)
Return unique values based on a hash table.
wide_to_long(df, stubnames, i, j[, sep, suffix])
Unpivot a DataFrame from wide to long format.
Top-level missing data#
isna(obj)
Detect missing values for an array-like object.
isnull(obj)
Detect missing values for an array-like object.
notna(obj)
Detect non-missing values for an array-like object.
notnull(obj)
Detect non-missing values for an array-like object.
Top-level dealing with numeric data#
to_numeric(arg[, errors, downcast])
Convert argument to a numeric type.
Top-level dealing with datetimelike data#
to_datetime(arg[, errors, dayfirst, ...])
Convert argument to datetime.
to_timedelta(arg[, unit, errors])
Convert argument to timedelta.
date_range([start, end, periods, freq, tz, ...])
Return a fixed frequency DatetimeIndex.
bdate_range([start, end, periods, freq, tz, ...])
Return a fixed frequency DatetimeIndex with business day as the default.
period_range([start, end, periods, freq, name])
Return a fixed frequency PeriodIndex.
timedelta_range([start, end, periods, freq, ...])
Return a fixed frequency TimedeltaIndex with day as the default.
infer_freq(index[, warn])
Infer the most likely frequency given the input index.
Top-level dealing with Interval data#
interval_range([start, end, periods, freq, ...])
Return a fixed frequency IntervalIndex.
Top-level evaluation#
eval(expr[, parser, engine, truediv, ...])
Evaluate a Python expression as a string using various backends.
Hashing#
util.hash_array(vals[, encoding, hash_key, ...])
Given a 1d array, return an array of deterministic integers.
util.hash_pandas_object(obj[, index, ...])
Return a data hash of the Index/Series/DataFrame.
Importing from other DataFrame libraries#
api.interchange.from_dataframe(df[, allow_copy])
Build a pd.DataFrame from any DataFrame supporting the interchange protocol.
|
reference/general_functions.html
|
pandas.core.groupby.GroupBy.ohlc
|
`pandas.core.groupby.GroupBy.ohlc`
Compute open, high, low and close values of a group, excluding missing values.
|
final GroupBy.ohlc()[source]#
Compute open, high, low and close values of a group, excluding missing values.
For multiple groupings, the result index will be a MultiIndex.
Returns
DataFrameOpen, high, low and close values within each group.
See also
Series.groupbyApply a function groupby to a Series.
DataFrame.groupbyApply a function groupby to each row or column of a DataFrame.
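No example is given above; a minimal sketch with made-up data:
```python
import pandas as pd

df = pd.DataFrame({"grp": ["a", "a", "a", "b", "b"],
                   "price": [10, 12, 9, 5, 7]})
df.groupby("grp")["price"].ohlc()
#      open  high  low  close
# grp
# a      10    12    9      9
# b       5     7    5      7
```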
|
reference/api/pandas.core.groupby.GroupBy.ohlc.html
|
pandas.Series.loc
|
`pandas.Series.loc`
Access a group of rows and columns by label(s) or a boolean array.
```
>>> df = pd.DataFrame([[1, 2], [4, 5], [7, 8]],
... index=['cobra', 'viper', 'sidewinder'],
... columns=['max_speed', 'shield'])
>>> df
max_speed shield
cobra 1 2
viper 4 5
sidewinder 7 8
```
|
property Series.loc[source]#
Access a group of rows and columns by label(s) or a boolean array.
.loc[] is primarily label based, but may also be used with a
boolean array.
Allowed inputs are:
A single label, e.g. 5 or 'a', (note that 5 is
interpreted as a label of the index, and never as an
integer position along the index).
A list or array of labels, e.g. ['a', 'b', 'c'].
A slice object with labels, e.g. 'a':'f'.
Warning
Note that contrary to usual python slices, both the
start and the stop are included
A boolean array of the same length as the axis being sliced,
e.g. [True, False, True].
An alignable boolean Series. The index of the key will be aligned before
masking.
An alignable Index. The Index of the returned selection will be the input.
A callable function with one argument (the calling Series or
DataFrame) and that returns valid output for indexing (one of the above)
See more at Selection by Label.
Raises
KeyErrorIf any items are not found.
IndexingErrorIf an indexed key is passed and its index is unalignable to the frame index.
See also
DataFrame.atAccess a single value for a row/column label pair.
DataFrame.ilocAccess group of rows and columns by integer position(s).
DataFrame.xsReturns a cross-section (row(s) or column(s)) from the Series/DataFrame.
Series.locAccess group of values using labels.
Examples
Getting values
>>> df = pd.DataFrame([[1, 2], [4, 5], [7, 8]],
... index=['cobra', 'viper', 'sidewinder'],
... columns=['max_speed', 'shield'])
>>> df
max_speed shield
cobra 1 2
viper 4 5
sidewinder 7 8
Single label. Note this returns the row as a Series.
>>> df.loc['viper']
max_speed 4
shield 5
Name: viper, dtype: int64
List of labels. Note using [[]] returns a DataFrame.
>>> df.loc[['viper', 'sidewinder']]
max_speed shield
viper 4 5
sidewinder 7 8
Single label for row and column
>>> df.loc['cobra', 'shield']
2
Slice with labels for row and single label for column. As mentioned
above, note that both the start and stop of the slice are included.
>>> df.loc['cobra':'viper', 'max_speed']
cobra 1
viper 4
Name: max_speed, dtype: int64
Boolean list with the same length as the row axis
>>> df.loc[[False, False, True]]
max_speed shield
sidewinder 7 8
Alignable boolean Series:
>>> df.loc[pd.Series([False, True, False],
... index=['viper', 'sidewinder', 'cobra'])]
max_speed shield
sidewinder 7 8
Index (same behavior as df.reindex)
>>> df.loc[pd.Index(["cobra", "viper"], name="foo")]
max_speed shield
foo
cobra 1 2
viper 4 5
Conditional that returns a boolean Series
>>> df.loc[df['shield'] > 6]
max_speed shield
sidewinder 7 8
Conditional that returns a boolean Series with column labels specified
>>> df.loc[df['shield'] > 6, ['max_speed']]
max_speed
sidewinder 7
Callable that returns a boolean Series
>>> df.loc[lambda df: df['shield'] == 8]
max_speed shield
sidewinder 7 8
Setting values
Set value for all items matching the list of labels
>>> df.loc[['viper', 'sidewinder'], ['shield']] = 50
>>> df
max_speed shield
cobra 1 2
viper 4 50
sidewinder 7 50
Set value for an entire row
>>> df.loc['cobra'] = 10
>>> df
max_speed shield
cobra 10 10
viper 4 50
sidewinder 7 50
Set value for an entire column
>>> df.loc[:, 'max_speed'] = 30
>>> df
max_speed shield
cobra 30 10
viper 30 50
sidewinder 30 50
Set value for rows matching callable condition
>>> df.loc[df['shield'] > 35] = 0
>>> df
max_speed shield
cobra 30 10
viper 0 0
sidewinder 0 0
Getting values on a DataFrame with an index that has integer labels
Another example using integers for the index
>>> df = pd.DataFrame([[1, 2], [4, 5], [7, 8]],
... index=[7, 8, 9], columns=['max_speed', 'shield'])
>>> df
max_speed shield
7 1 2
8 4 5
9 7 8
Slice with integer labels for rows. As mentioned above, note that both
the start and stop of the slice are included.
>>> df.loc[7:9]
max_speed shield
7 1 2
8 4 5
9 7 8
Getting values with a MultiIndex
A number of examples using a DataFrame with a MultiIndex
>>> tuples = [
... ('cobra', 'mark i'), ('cobra', 'mark ii'),
... ('sidewinder', 'mark i'), ('sidewinder', 'mark ii'),
... ('viper', 'mark ii'), ('viper', 'mark iii')
... ]
>>> index = pd.MultiIndex.from_tuples(tuples)
>>> values = [[12, 2], [0, 4], [10, 20],
... [1, 4], [7, 1], [16, 36]]
>>> df = pd.DataFrame(values, columns=['max_speed', 'shield'], index=index)
>>> df
max_speed shield
cobra mark i 12 2
mark ii 0 4
sidewinder mark i 10 20
mark ii 1 4
viper mark ii 7 1
mark iii 16 36
Single label. Note this returns a DataFrame with a single index.
>>> df.loc['cobra']
max_speed shield
mark i 12 2
mark ii 0 4
Single index tuple. Note this returns a Series.
>>> df.loc[('cobra', 'mark ii')]
max_speed 0
shield 4
Name: (cobra, mark ii), dtype: int64
Single label for row and column. Similar to passing in a tuple, this
returns a Series.
>>> df.loc['cobra', 'mark i']
max_speed 12
shield 2
Name: (cobra, mark i), dtype: int64
Single tuple. Note using [[]] returns a DataFrame.
>>> df.loc[[('cobra', 'mark ii')]]
max_speed shield
cobra mark ii 0 4
Single tuple for the index with a single label for the column
>>> df.loc[('cobra', 'mark i'), 'shield']
2
Slice from index tuple to single label
>>> df.loc[('cobra', 'mark i'):'viper']
max_speed shield
cobra mark i 12 2
mark ii 0 4
sidewinder mark i 10 20
mark ii 1 4
viper mark ii 7 1
mark iii 16 36
Slice from index tuple to index tuple
>>> df.loc[('cobra', 'mark i'):('viper', 'mark ii')]
max_speed shield
cobra mark i 12 2
mark ii 0 4
sidewinder mark i 10 20
mark ii 1 4
viper mark ii 7 1
Please see the user guide
for more details and explanations of advanced indexing.
|
reference/api/pandas.Series.loc.html
|
pandas.Timestamp.isocalendar
|
`pandas.Timestamp.isocalendar`
Return a 3-tuple containing ISO year, week number, and weekday.
|
Timestamp.isocalendar()#
Return a 3-tuple containing ISO year, week number, and weekday.
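A minimal sketch (note that the ISO year can differ from the calendar year near year boundaries):
```python
import pandas as pd

pd.Timestamp("2022-01-01").isocalendar()
# (ISO year 2021, ISO week 52, ISO weekday 6, i.e. Saturday)
```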
|
reference/api/pandas.Timestamp.isocalendar.html
|
pandas.Timestamp.minute
|
pandas.Timestamp.minute
|
Timestamp.minute#
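A minimal sketch of this attribute:
```python
import pandas as pd

pd.Timestamp("2022-01-01 10:30:15").minute   # -> 30
```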
|
reference/api/pandas.Timestamp.minute.html
|
pandas.Index.to_flat_index
|
`pandas.Index.to_flat_index`
Identity method.
|
Index.to_flat_index()[source]#
Identity method.
This is implemented for compatibility with subclass implementations
when chaining.
Returns
pd.IndexCaller.
See also
MultiIndex.to_flat_indexSubclass implementation.
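A minimal sketch contrasting the no-op on a plain Index with the MultiIndex subclass behaviour:
```python
import pandas as pd

pd.Index([1, 2, 3]).to_flat_index()          # the same Index([1, 2, 3]) back
mi = pd.MultiIndex.from_tuples([("a", 1), ("b", 2)])
mi.to_flat_index()                           # Index of tuples: [('a', 1), ('b', 2)]
```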
|
reference/api/pandas.Index.to_flat_index.html
|
pandas.DataFrame.cumsum
|
`pandas.DataFrame.cumsum`
Return cumulative sum over a DataFrame or Series axis.
Returns a DataFrame or Series of the same size containing the cumulative
sum.
```
>>> s = pd.Series([2, np.nan, 5, -1, 0])
>>> s
0 2.0
1 NaN
2 5.0
3 -1.0
4 0.0
dtype: float64
```
|
DataFrame.cumsum(axis=None, skipna=True, *args, **kwargs)[source]#
Return cumulative sum over a DataFrame or Series axis.
Returns a DataFrame or Series of the same size containing the cumulative
sum.
Parameters
axis{0 or ‘index’, 1 or ‘columns’}, default 0The index or the name of the axis. 0 is equivalent to None or ‘index’.
For Series this parameter is unused and defaults to 0.
skipnabool, default TrueExclude NA/null values. If an entire row/column is NA, the result
will be NA.
*args, **kwargsAdditional keywords have no effect but might be accepted for
compatibility with NumPy.
Returns
Series or DataFrameReturn cumulative sum of Series or DataFrame.
See also
core.window.expanding.Expanding.sumSimilar functionality but ignores NaN values.
DataFrame.sumReturn the sum over DataFrame axis.
DataFrame.cummaxReturn cumulative maximum over DataFrame axis.
DataFrame.cumminReturn cumulative minimum over DataFrame axis.
DataFrame.cumsumReturn cumulative sum over DataFrame axis.
DataFrame.cumprodReturn cumulative product over DataFrame axis.
Examples
Series
>>> s = pd.Series([2, np.nan, 5, -1, 0])
>>> s
0 2.0
1 NaN
2 5.0
3 -1.0
4 0.0
dtype: float64
By default, NA values are ignored.
>>> s.cumsum()
0 2.0
1 NaN
2 7.0
3 6.0
4 6.0
dtype: float64
To include NA values in the operation, use skipna=False
>>> s.cumsum(skipna=False)
0 2.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
DataFrame
>>> df = pd.DataFrame([[2.0, 1.0],
... [3.0, np.nan],
... [1.0, 0.0]],
... columns=list('AB'))
>>> df
A B
0 2.0 1.0
1 3.0 NaN
2 1.0 0.0
By default, iterates over rows and finds the sum
in each column. This is equivalent to axis=None or axis='index'.
>>> df.cumsum()
A B
0 2.0 1.0
1 5.0 NaN
2 6.0 1.0
To iterate over columns and find the sum in each row,
use axis=1
>>> df.cumsum(axis=1)
A B
0 2.0 3.0
1 3.0 NaN
2 1.0 1.0
|
reference/api/pandas.DataFrame.cumsum.html
|
pandas.DataFrame.truediv
|
`pandas.DataFrame.truediv`
Get Floating division of dataframe and other, element-wise (binary operator truediv).
```
>>> df = pd.DataFrame({'angles': [0, 3, 4],
... 'degrees': [360, 180, 360]},
... index=['circle', 'triangle', 'rectangle'])
>>> df
angles degrees
circle 0 360
triangle 3 180
rectangle 4 360
```
|
DataFrame.truediv(other, axis='columns', level=None, fill_value=None)[source]#
Get Floating division of dataframe and other, element-wise (binary operator truediv).
Equivalent to dataframe / other, but with support to substitute a fill_value
for missing data in one of the inputs. With reverse version, rtruediv.
Among flexible wrappers (add, sub, mul, div, mod, pow) to
arithmetic operators: +, -, *, /, //, %, **.
Parameters
otherscalar, sequence, Series, dict or DataFrameAny single or multiple element data structure, or list-like object.
axis{0 or ‘index’, 1 or ‘columns’}Whether to compare by the index (0 or ‘index’) or columns.
(1 or ‘columns’). For Series input, axis to match Series index on.
levelint or labelBroadcast across a level, matching Index values on the
passed MultiIndex level.
fill_valuefloat or None, default NoneFill existing missing (NaN) values, and any new element needed for
successful DataFrame alignment, with this value before computation.
If data in both corresponding DataFrame locations is missing
the result will be missing.
Returns
DataFrameResult of the arithmetic operation.
See also
DataFrame.addAdd DataFrames.
DataFrame.subSubtract DataFrames.
DataFrame.mulMultiply DataFrames.
DataFrame.divDivide DataFrames (float division).
DataFrame.truedivDivide DataFrames (float division).
DataFrame.floordivDivide DataFrames (integer division).
DataFrame.modCalculate modulo (remainder after division).
DataFrame.powCalculate exponential power.
Notes
Mismatched indices will be unioned together.
Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4],
... 'degrees': [360, 180, 360]},
... index=['circle', 'triangle', 'rectangle'])
>>> df
angles degrees
circle 0 360
triangle 3 180
rectangle 4 360
Add a scalar with the operator version, which returns the same
results.
>>> df + 1
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
>>> df.add(1)
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
Divide by constant with reverse version.
>>> df.div(10)
angles degrees
circle 0.0 36.0
triangle 0.3 18.0
rectangle 0.4 36.0
>>> df.rdiv(10)
angles degrees
circle inf 0.027778
triangle 3.333333 0.055556
rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2]
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub([1, 2], axis='columns')
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
... axis='index')
angles degrees
circle -1 359
triangle 2 179
rectangle 3 359
Multiply a dictionary by axis.
>>> df.mul({'angles': 0, 'degrees': 2})
angles degrees
circle 0 720
triangle 0 360
rectangle 0 720
>>> df.mul({'circle': 0, 'triangle': 2, 'rectangle': 3}, axis='index')
angles degrees
circle 0 0
triangle 6 360
rectangle 12 1080
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]},
... index=['circle', 'triangle', 'rectangle'])
>>> other
angles
circle 0
triangle 3
rectangle 4
>>> df * other
angles degrees
circle 0 NaN
triangle 9 NaN
rectangle 16 NaN
>>> df.mul(other, fill_value=0)
angles degrees
circle 0 0.0
triangle 9 0.0
rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
... 'degrees': [360, 180, 360, 360, 540, 720]},
... index=[['A', 'A', 'A', 'B', 'B', 'B'],
... ['circle', 'triangle', 'rectangle',
... 'square', 'pentagon', 'hexagon']])
>>> df_multindex
angles degrees
A circle 0 360
triangle 3 180
rectangle 4 360
B square 4 360
pentagon 5 540
hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0)
angles degrees
A circle NaN 1.0
triangle 1.0 1.0
rectangle 1.0 1.0
B square 0.0 0.0
pentagon 0.0 0.0
hexagon 0.0 0.0
|
reference/api/pandas.DataFrame.truediv.html
|
pandas.tseries.offsets.Week.is_month_end
|
`pandas.tseries.offsets.Week.is_month_end`
Return boolean whether a timestamp occurs on the month end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
```
|
Week.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
|
reference/api/pandas.tseries.offsets.Week.is_month_end.html
|
pandas.Period.start_time
|
`pandas.Period.start_time`
Get the Timestamp for the start of the period.
```
>>> period = pd.Period('2012-1-1', freq='D')
>>> period
Period('2012-01-01', 'D')
```
|
Period.start_time#
Get the Timestamp for the start of the period.
Returns
Timestamp
See also
Period.end_timeReturn the end Timestamp.
Period.dayofyearReturn the day of year.
Period.daysinmonthReturn the days in that month.
Period.dayofweekReturn the day of the week.
Examples
>>> period = pd.Period('2012-1-1', freq='D')
>>> period
Period('2012-01-01', 'D')
>>> period.start_time
Timestamp('2012-01-01 00:00:00')
>>> period.end_time
Timestamp('2012-01-01 23:59:59.999999999')
|
reference/api/pandas.Period.start_time.html
|
pandas.tseries.offsets.CustomBusinessMonthBegin.rollforward
|
`pandas.tseries.offsets.CustomBusinessMonthBegin.rollforward`
Roll provided date forward to next offset only if not on offset.
|
CustomBusinessMonthBegin.rollforward()#
Roll provided date forward to next offset only if not on offset.
Returns
TimeStampRolled timestamp if not on offset, otherwise unchanged timestamp.
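No example is given above; a minimal sketch with the default calendar (dates are illustrative):
```python
import pandas as pd

offset = pd.offsets.CustomBusinessMonthBegin()
# Mid-month dates roll forward to the next custom business month start.
offset.rollforward(pd.Timestamp("2022-01-15"))   # -> Timestamp('2022-02-01 00:00:00')
# Dates already on the offset are returned unchanged.
offset.rollforward(pd.Timestamp("2022-02-01"))   # -> Timestamp('2022-02-01 00:00:00')
```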
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthBegin.rollforward.html
|
pandas.Series.dt.is_year_start
|
`pandas.Series.dt.is_year_start`
Indicate whether the date is the first day of a year.
The same type as the original data with boolean values. Series will
have the same name and index. DatetimeIndex will have the same
name.
```
>>> dates = pd.Series(pd.date_range("2017-12-30", periods=3))
>>> dates
0 2017-12-30
1 2017-12-31
2 2018-01-01
dtype: datetime64[ns]
```
|
Series.dt.is_year_start[source]#
Indicate whether the date is the first day of a year.
Returns
Series or DatetimeIndexThe same type as the original data with boolean values. Series will
have the same name and index. DatetimeIndex will have the same
name.
See also
is_year_endSimilar property indicating the last day of the year.
Examples
This method is available on Series with datetime values under
the .dt accessor, and directly on DatetimeIndex.
>>> dates = pd.Series(pd.date_range("2017-12-30", periods=3))
>>> dates
0 2017-12-30
1 2017-12-31
2 2018-01-01
dtype: datetime64[ns]
>>> dates.dt.is_year_start
0 False
1 False
2 True
dtype: bool
>>> idx = pd.date_range("2017-12-30", periods=3)
>>> idx
DatetimeIndex(['2017-12-30', '2017-12-31', '2018-01-01'],
dtype='datetime64[ns]', freq='D')
>>> idx.is_year_start
array([False, False, True])
|
reference/api/pandas.Series.dt.is_year_start.html
|
pandas.tseries.offsets.BusinessMonthEnd.normalize
|
pandas.tseries.offsets.BusinessMonthEnd.normalize
|
BusinessMonthEnd.normalize#
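This attribute has no description above; as a sketch, normalize is the constructor flag controlling whether results are snapped to midnight:
```python
import pandas as pd

pd.offsets.BusinessMonthEnd().normalize                 # False by default
pd.offsets.BusinessMonthEnd(normalize=True).normalize   # True
```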
|
reference/api/pandas.tseries.offsets.BusinessMonthEnd.normalize.html
|
pandas.CategoricalIndex.as_unordered
|
`pandas.CategoricalIndex.as_unordered`
Set the Categorical to be unordered.
|
CategoricalIndex.as_unordered(*args, **kwargs)[source]#
Set the Categorical to be unordered.
Parameters
inplacebool, default FalseWhether or not to set the ordered attribute in-place or return
a copy of this categorical with ordered set to False.
Deprecated since version 1.5.0.
Returns
Categorical or NoneUnordered Categorical or None if inplace=True.
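No example is given above; a minimal sketch:
```python
import pandas as pd

ci = pd.CategoricalIndex(["a", "b", "a"], ordered=True)
ci.ordered                   # True
ci.as_unordered().ordered    # False; categories and values are unchanged
```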
|
reference/api/pandas.CategoricalIndex.as_unordered.html
|
pandas.tseries.offsets.YearBegin.is_quarter_end
|
`pandas.tseries.offsets.YearBegin.is_quarter_end`
Return boolean whether a timestamp occurs on the quarter end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
```
|
YearBegin.is_quarter_end()#
Return boolean whether a timestamp occurs on the quarter end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
|
reference/api/pandas.tseries.offsets.YearBegin.is_quarter_end.html
|
pandas.interval_range
|
`pandas.interval_range`
Return a fixed frequency IntervalIndex.
Left bound for generating intervals.
```
>>> pd.interval_range(start=0, end=5)
IntervalIndex([(0, 1], (1, 2], (2, 3], (3, 4], (4, 5]],
dtype='interval[int64, right]')
```
|
pandas.interval_range(start=None, end=None, periods=None, freq=None, name=None, closed='right')[source]#
Return a fixed frequency IntervalIndex.
Parameters
startnumeric or datetime-like, default NoneLeft bound for generating intervals.
endnumeric or datetime-like, default NoneRight bound for generating intervals.
periodsint, default NoneNumber of periods to generate.
freqnumeric, str, or DateOffset, default NoneThe length of each interval. Must be consistent with the type of start
and end, e.g. 2 for numeric, or ‘5H’ for datetime-like. Default is 1
for numeric and ‘D’ for datetime-like.
namestr, default NoneName of the resulting IntervalIndex.
closed{‘left’, ‘right’, ‘both’, ‘neither’}, default ‘right’Whether the intervals are closed on the left-side, right-side, both
or neither.
Returns
IntervalIndex
See also
IntervalIndexAn Index of intervals that are all closed on the same side.
Notes
Of the four parameters start, end, periods, and freq,
exactly three must be specified. If freq is omitted, the resulting
IntervalIndex will have periods linearly spaced elements between
start and end, inclusively.
To learn more about datetime-like frequency strings, please see this link.
Examples
Numeric start and end is supported.
>>> pd.interval_range(start=0, end=5)
IntervalIndex([(0, 1], (1, 2], (2, 3], (3, 4], (4, 5]],
dtype='interval[int64, right]')
Additionally, datetime-like input is also supported.
>>> pd.interval_range(start=pd.Timestamp('2017-01-01'),
... end=pd.Timestamp('2017-01-04'))
IntervalIndex([(2017-01-01, 2017-01-02], (2017-01-02, 2017-01-03],
(2017-01-03, 2017-01-04]],
dtype='interval[datetime64[ns], right]')
The freq parameter specifies the frequency between the left and right
endpoints of the individual intervals within the IntervalIndex. For
numeric start and end, the frequency must also be numeric.
>>> pd.interval_range(start=0, periods=4, freq=1.5)
IntervalIndex([(0.0, 1.5], (1.5, 3.0], (3.0, 4.5], (4.5, 6.0]],
dtype='interval[float64, right]')
Similarly, for datetime-like start and end, the frequency must be
convertible to a DateOffset.
>>> pd.interval_range(start=pd.Timestamp('2017-01-01'),
... periods=3, freq='MS')
IntervalIndex([(2017-01-01, 2017-02-01], (2017-02-01, 2017-03-01],
(2017-03-01, 2017-04-01]],
dtype='interval[datetime64[ns], right]')
Specify start, end, and periods; the frequency is generated
automatically (linearly spaced).
>>> pd.interval_range(start=0, end=6, periods=4)
IntervalIndex([(0.0, 1.5], (1.5, 3.0], (3.0, 4.5], (4.5, 6.0]],
dtype='interval[float64, right]')
The closed parameter specifies which endpoints of the individual
intervals within the IntervalIndex are closed.
>>> pd.interval_range(end=5, periods=4, closed='both')
IntervalIndex([[1, 2], [2, 3], [3, 4], [4, 5]],
dtype='interval[int64, both]')
|
reference/api/pandas.interval_range.html
|
pandas.core.window.expanding.Expanding.median
|
`pandas.core.window.expanding.Expanding.median`
Calculate the expanding median.
|
Expanding.median(numeric_only=False, engine=None, engine_kwargs=None, **kwargs)[source]#
Calculate the expanding median.
Parameters
numeric_onlybool, default FalseInclude only float, int, boolean columns.
New in version 1.5.0.
enginestr, default None
'cython' : Runs the operation through C-extensions from cython.
'numba' : Runs the operation through JIT compiled code from numba.
None : Defaults to 'cython' or globally setting compute.use_numba
New in version 1.3.0.
engine_kwargsdict, default None
For 'cython' engine, there are no accepted engine_kwargs
For 'numba' engine, the engine can accept nopython, nogil
and parallel dictionary keys. The values must either be True or
False. The default engine_kwargs for the 'numba' engine is
{'nopython': True, 'nogil': False, 'parallel': False}
New in version 1.3.0.
**kwargsFor NumPy compatibility and will not have an effect on the result.
Deprecated since version 1.5.0.
Returns
Series or DataFrameReturn type is the same as the original object with np.float64 dtype.
See also
pandas.Series.expandingCalling expanding with Series data.
pandas.DataFrame.expandingCalling expanding with DataFrames.
pandas.Series.medianAggregating median for Series.
pandas.DataFrame.medianAggregating median for DataFrame.
Notes
See Numba engine and Numba (JIT compilation) for extended documentation and performance considerations for the Numba engine.
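No example is given above; a minimal sketch over a short Series:
```python
import pandas as pd

s = pd.Series([2.0, 4.0, 1.0, 8.0])
s.expanding().median()
# 0    2.0
# 1    3.0
# 2    2.0
# 3    3.0
# dtype: float64
```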
|
reference/api/pandas.core.window.expanding.Expanding.median.html
|
pandas.read_json
|
`pandas.read_json`
Convert a JSON string to pandas object.
```
>>> df = pd.DataFrame([['a', 'b'], ['c', 'd']],
... index=['row 1', 'row 2'],
... columns=['col 1', 'col 2'])
```
|
pandas.read_json(path_or_buf, *, orient=None, typ='frame', dtype=None, convert_axes=None, convert_dates=True, keep_default_dates=True, numpy=False, precise_float=False, date_unit=None, encoding=None, encoding_errors='strict', lines=False, chunksize=None, compression='infer', nrows=None, storage_options=None)[source]#
Convert a JSON string to pandas object.
Parameters
path_or_bufa valid JSON str, path object or file-like objectAny valid string path is acceptable. The string could be a URL. Valid
URL schemes include http, ftp, s3, and file. For file URLs, a host is
expected. A local file could be:
file://localhost/path/to/table.json.
If you want to pass in a path object, pandas accepts any
os.PathLike.
By file-like object, we refer to objects with a read() method,
such as a file handle (e.g. via builtin open function)
or StringIO.
orientstrIndication of expected JSON string format.
Compatible JSON strings can be produced by to_json() with a
corresponding orient value.
The set of possible orients is:
'split' : dict like
{index -> [index], columns -> [columns], data -> [values]}
'records' : list like
[{column -> value}, ... , {column -> value}]
'index' : dict like {index -> {column -> value}}
'columns' : dict like {column -> {index -> value}}
'values' : just the values array
The allowed and default values depend on the value
of the typ parameter.
when typ == 'series',
allowed orients are {'split','records','index'}
default is 'index'
The Series index must be unique for orient 'index'.
when typ == 'frame',
allowed orients are {'split','records','index',
'columns','values', 'table'}
default is 'columns'
The DataFrame index must be unique for orients 'index' and
'columns'.
The DataFrame columns must be unique for orients 'index',
'columns', and 'records'.
typ{‘frame’, ‘series’}, default ‘frame’The type of object to recover.
dtypebool or dict, default NoneIf True, infer dtypes; if a dict of column to dtype, then use those;
if False, then don’t infer dtypes at all, applies only to the data.
For all orient values except 'table', default is True.
Changed in version 0.25.0: Not applicable for orient='table'.
convert_axesbool, default NoneTry to convert the axes to the proper dtypes.
For all orient values except 'table', default is True.
Changed in version 0.25.0: Not applicable for orient='table'.
convert_datesbool or list of str, default TrueIf True then default datelike columns may be converted (depending on
keep_default_dates).
If False, no dates will be converted.
If a list of column names, then those columns will be converted and
default datelike columns may also be converted (depending on
keep_default_dates).
keep_default_datesbool, default TrueIf parsing dates (convert_dates is not False), then try to parse the
default datelike columns.
A column label is datelike if
it ends with '_at',
it ends with '_time',
it begins with 'timestamp',
it is 'modified', or
it is 'date'.
numpybool, default FalseDirect decoding to numpy arrays. Supports numeric data only, but
non-numeric column and index labels are supported. Note also that the
JSON ordering MUST be the same for each term if numpy=True.
Deprecated since version 1.0.0.
precise_floatbool, default FalseSet to enable usage of higher precision (strtod) function when
decoding string to double values. Default (False) is to use fast but
less precise builtin functionality.
date_unitstr, default NoneThe timestamp unit to detect if converting dates. The default behaviour
is to try and detect the correct precision, but if this is not desired
then pass one of ‘s’, ‘ms’, ‘us’ or ‘ns’ to force parsing only seconds,
milliseconds, microseconds or nanoseconds respectively.
encodingstr, default is ‘utf-8’The encoding to use to decode py3 bytes.
encoding_errorsstr, optional, default “strict”How encoding errors are treated. List of possible values .
New in version 1.3.0.
linesbool, default FalseRead the file as a json object per line.
chunksizeint, optionalReturn JsonReader object for iteration.
See the line-delimited json docs
for more information on chunksize.
This can only be passed if lines=True.
If this is None, the file will be read into memory all at once.
Changed in version 1.2: JsonReader is a context manager.
compressionstr or dict, default ‘infer’For on-the-fly decompression of on-disk data. If ‘infer’ and ‘path_or_buf’ is
path-like, then detect compression from the following extensions: ‘.gz’,
‘.bz2’, ‘.zip’, ‘.xz’, ‘.zst’, ‘.tar’, ‘.tar.gz’, ‘.tar.xz’ or ‘.tar.bz2’
(otherwise no compression).
If using ‘zip’ or ‘tar’, the ZIP file must contain only one data file to be read in.
Set to None for no decompression.
Can also be a dict with key 'method' set
to one of {'zip', 'gzip', 'bz2', 'zstd', 'tar'} and other
key-value pairs are forwarded to
zipfile.ZipFile, gzip.GzipFile,
bz2.BZ2File, zstandard.ZstdDecompressor or
tarfile.TarFile, respectively.
As an example, the following could be passed for Zstandard decompression using a
custom compression dictionary:
compression={'method': 'zstd', 'dict_data': my_compression_dict}.
New in version 1.5.0: Added support for .tar files.
Changed in version 1.4.0: Zstandard support.
nrowsint, optionalThe number of lines from the line-delimited JSON file that have to be read.
This can only be passed if lines=True.
If this is None, all the rows will be returned.
New in version 1.1.
storage_optionsdict, optionalExtra options that make sense for a particular storage connection, e.g.
host, port, username, password, etc. For HTTP(S) URLs the key-value pairs
are forwarded to urllib.request.Request as header options. For other
URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are
forwarded to fsspec.open. Please see fsspec and urllib for more
details, and for more examples on storage options refer here.
New in version 1.2.0.
Returns
Series or DataFrameThe type returned depends on the value of typ.
See also
DataFrame.to_jsonConvert a DataFrame to a JSON string.
Series.to_jsonConvert a Series to a JSON string.
json_normalizeNormalize semi-structured JSON data into a flat table.
Notes
Specific to orient='table', if a DataFrame with a literal
Index name of index gets written with to_json(), the
subsequent read operation will incorrectly set the Index name to
None. This is because index is also used by DataFrame.to_json()
to denote a missing Index name, and the subsequent
read_json() operation cannot distinguish between the two. The same
limitation is encountered with a MultiIndex and any names
beginning with 'level_'.
Examples
>>> df = pd.DataFrame([['a', 'b'], ['c', 'd']],
... index=['row 1', 'row 2'],
... columns=['col 1', 'col 2'])
Encoding/decoding a Dataframe using 'split' formatted JSON:
>>> df.to_json(orient='split')
'{"columns":["col 1","col 2"],"index":["row 1","row 2"],"data":[["a","b"],["c","d"]]}'
>>> pd.read_json(_, orient='split')
col 1 col 2
row 1 a b
row 2 c d
Encoding/decoding a Dataframe using 'index' formatted JSON:
>>> df.to_json(orient='index')
'{"row 1":{"col 1":"a","col 2":"b"},"row 2":{"col 1":"c","col 2":"d"}}'
>>> pd.read_json(_, orient='index')
col 1 col 2
row 1 a b
row 2 c d
Encoding/decoding a Dataframe using 'records' formatted JSON.
Note that index labels are not preserved with this encoding.
>>> df.to_json(orient='records')
'[{"col 1":"a","col 2":"b"},{"col 1":"c","col 2":"d"}]'
>>> pd.read_json(_, orient='records')
col 1 col 2
0 a b
1 c d
Encoding with Table Schema
>>> df.to_json(orient='table')
'{"schema":{"fields":[{"name":"index","type":"string"},{"name":"col 1","type":"string"},{"name":"col 2","type":"string"}],"primaryKey":["index"],"pandas_version":"1.4.0"},"data":[{"index":"row 1","col 1":"a","col 2":"b"},{"index":"row 2","col 1":"c","col 2":"d"}]}'
|
reference/api/pandas.read_json.html
|
pandas.Int32Dtype
|
`pandas.Int32Dtype`
An ExtensionDtype for int32 integer data.
|
class pandas.Int32Dtype[source]#
An ExtensionDtype for int32 integer data.
Changed in version 1.0.0: Now uses pandas.NA as its missing value,
rather than numpy.nan.
Attributes
None
Methods
None
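A minimal sketch of the nullable behaviour:
```python
import pandas as pd

s = pd.Series([1, 2, None], dtype="Int32")
s
# 0       1
# 1       2
# 2    <NA>
# dtype: Int32
```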
|
reference/api/pandas.Int32Dtype.html
|
pandas.tseries.offsets.Hour.rollback
|
`pandas.tseries.offsets.Hour.rollback`
Roll provided date backward to next offset only if not on offset.
Rolled timestamp if not on offset, otherwise unchanged timestamp.
|
Hour.rollback()#
Roll provided date backward to next offset only if not on offset.
Returns
TimeStampRolled timestamp if not on offset, otherwise unchanged timestamp.
|
reference/api/pandas.tseries.offsets.Hour.rollback.html
|
pandas.core.window.expanding.Expanding.mean
|
`pandas.core.window.expanding.Expanding.mean`
Calculate the expanding mean.
|
Expanding.mean(numeric_only=False, *args, engine=None, engine_kwargs=None, **kwargs)[source]#
Calculate the expanding mean.
Parameters
numeric_onlybool, default FalseInclude only float, int, boolean columns.
New in version 1.5.0.
*argsFor NumPy compatibility and will not have an effect on the result.
Deprecated since version 1.5.0.
enginestr, default None
'cython' : Runs the operation through C-extensions from cython.
'numba' : Runs the operation through JIT compiled code from numba.
None : Defaults to 'cython' or globally setting compute.use_numba
New in version 1.3.0.
engine_kwargsdict, default None
For 'cython' engine, there are no accepted engine_kwargs
For 'numba' engine, the engine can accept nopython, nogil
and parallel dictionary keys. The values must either be True or
False. The default engine_kwargs for the 'numba' engine is
{'nopython': True, 'nogil': False, 'parallel': False}
New in version 1.3.0.
**kwargsFor NumPy compatibility and will not have an effect on the result.
Deprecated since version 1.5.0.
Returns
Series or DataFrameReturn type is the same as the original object with np.float64 dtype.
See also
pandas.Series.expandingCalling expanding with Series data.
pandas.DataFrame.expandingCalling expanding with DataFrames.
pandas.Series.meanAggregating mean for Series.
pandas.DataFrame.meanAggregating mean for DataFrame.
Notes
See Numba engine and Numba (JIT compilation) for extended documentation and performance considerations for the Numba engine.
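No example is given above; a minimal sketch over a short Series:
```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0])
s.expanding().mean()
# 0    1.0
# 1    1.5
# 2    2.0
# 3    2.5
# dtype: float64
```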
|
reference/api/pandas.core.window.expanding.Expanding.mean.html
|
pandas.core.resample.Resampler.pad
|
`pandas.core.resample.Resampler.pad`
Forward fill the values.
Deprecated since version 1.4: Use ffill instead.
|
Resampler.pad(limit=None)[source]#
Forward fill the values.
Deprecated since version 1.4: Use ffill instead.
Parameters
limitint, optionalLimit of how many values to fill.
Returns
An upsampled Series.
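Since pad is deprecated, a minimal sketch of the equivalent ffill call:
```python
import pandas as pd

s = pd.Series([1, 2], index=pd.date_range("2022-01-01", periods=2, freq="2D"))
s.resample("D").ffill()
# 2022-01-01    1
# 2022-01-02    1
# 2022-01-03    2
# Freq: D, dtype: int64
```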
|
reference/api/pandas.core.resample.Resampler.pad.html
|
pandas.tseries.offsets.CustomBusinessHour.freqstr
|
`pandas.tseries.offsets.CustomBusinessHour.freqstr`
Return a string representing the frequency.
Examples
```
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
```
|
CustomBusinessHour.freqstr#
Return a string representing the frequency.
Examples
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
>>> pd.offsets.BusinessHour(2).freqstr
'2BH'
>>> pd.offsets.Nano().freqstr
'N'
>>> pd.offsets.Nano(-3).freqstr
'-3N'
|
reference/api/pandas.tseries.offsets.CustomBusinessHour.freqstr.html
|
pandas.Series.str.istitle
|
`pandas.Series.str.istitle`
Check whether all characters in each string are titlecase.
```
>>> s1 = pd.Series(['one', 'one1', '1', ''])
```
|
Series.str.istitle()[source]#
Check whether all characters in each string are titlecase.
This is equivalent to running the Python string method
str.istitle() for each element of the Series/Index. If a string
has zero characters, False is returned for that check.
Returns
Series or Index of boolSeries or Index of boolean values with the same length as the original
Series/Index.
See also
Series.str.isalphaCheck whether all characters are alphabetic.
Series.str.isnumericCheck whether all characters are numeric.
Series.str.isalnumCheck whether all characters are alphanumeric.
Series.str.isdigitCheck whether all characters are digits.
Series.str.isdecimalCheck whether all characters are decimal.
Series.str.isspaceCheck whether all characters are whitespace.
Series.str.islowerCheck whether all characters are lowercase.
Series.str.isupperCheck whether all characters are uppercase.
Series.str.istitleCheck whether all characters are titlecase.
Examples
Checks for Alphabetic and Numeric Characters
>>> s1 = pd.Series(['one', 'one1', '1', ''])
>>> s1.str.isalpha()
0 True
1 False
2 False
3 False
dtype: bool
>>> s1.str.isnumeric()
0 False
1 False
2 True
3 False
dtype: bool
>>> s1.str.isalnum()
0 True
1 True
2 True
3 False
dtype: bool
Note that checks against characters mixed with any additional punctuation
or whitespace will evaluate to false for an alphanumeric check.
>>> s2 = pd.Series(['A B', '1.5', '3,000'])
>>> s2.str.isalnum()
0 False
1 False
2 False
dtype: bool
More Detailed Checks for Numeric Characters
There are several different but overlapping sets of numeric characters that
can be checked for.
>>> s3 = pd.Series(['23', '³', '⅕', ''])
The s3.str.isdecimal method checks for characters used to form numbers
in base 10.
>>> s3.str.isdecimal()
0 True
1 False
2 False
3 False
dtype: bool
The s3.str.isdigit method is the same as s3.str.isdecimal but also
includes special digits, like superscripted and subscripted digits in
unicode.
>>> s3.str.isdigit()
0 True
1 True
2 False
3 False
dtype: bool
The s3.str.isnumeric method is the same as s3.str.isdigit but also
includes other characters that can represent quantities such as unicode
fractions.
>>> s3.str.isnumeric()
0 True
1 True
2 True
3 False
dtype: bool
Checks for Whitespace
>>> s4 = pd.Series([' ', '\t\r\n ', ''])
>>> s4.str.isspace()
0 True
1 True
2 False
dtype: bool
Checks for Character Case
>>> s5 = pd.Series(['leopard', 'Golden Eagle', 'SNAKE', ''])
>>> s5.str.islower()
0 True
1 False
2 False
3 False
dtype: bool
>>> s5.str.isupper()
0 False
1 False
2 True
3 False
dtype: bool
The s5.str.istitle method checks for whether all words are in title
case (whether only the first letter of each word is capitalized). Words are
assumed to be any sequence of non-numeric characters separated by
whitespace characters.
>>> s5.str.istitle()
0 False
1 True
2 False
3 False
dtype: bool
|
reference/api/pandas.Series.str.istitle.html
|
pandas.api.types.is_integer_dtype
|
`pandas.api.types.is_integer_dtype`
Check whether the provided array or dtype is of an integer dtype.
Unlike in is_any_int_dtype, timedelta64 instances will return False.
```
>>> is_integer_dtype(str)
False
>>> is_integer_dtype(int)
True
>>> is_integer_dtype(float)
False
>>> is_integer_dtype(np.uint64)
True
>>> is_integer_dtype('int8')
True
>>> is_integer_dtype('Int8')
True
>>> is_integer_dtype(pd.Int8Dtype)
True
>>> is_integer_dtype(np.datetime64)
False
>>> is_integer_dtype(np.timedelta64)
False
>>> is_integer_dtype(np.array(['a', 'b']))
False
>>> is_integer_dtype(pd.Series([1, 2]))
True
>>> is_integer_dtype(np.array([], dtype=np.timedelta64))
False
>>> is_integer_dtype(pd.Index([1, 2.])) # float
False
```
|
pandas.api.types.is_integer_dtype(arr_or_dtype)[source]#
Check whether the provided array or dtype is of an integer dtype.
Unlike in is_any_int_dtype, timedelta64 instances will return False.
The nullable Integer dtypes (e.g. pandas.Int64Dtype) are also considered
as integer by this function.
Parameters
arr_or_dtypearray-like or dtypeThe array or dtype to check.
Returns
booleanWhether or not the array or dtype is of an integer dtype and
not an instance of timedelta64.
Examples
>>> is_integer_dtype(str)
False
>>> is_integer_dtype(int)
True
>>> is_integer_dtype(float)
False
>>> is_integer_dtype(np.uint64)
True
>>> is_integer_dtype('int8')
True
>>> is_integer_dtype('Int8')
True
>>> is_integer_dtype(pd.Int8Dtype)
True
>>> is_integer_dtype(np.datetime64)
False
>>> is_integer_dtype(np.timedelta64)
False
>>> is_integer_dtype(np.array(['a', 'b']))
False
>>> is_integer_dtype(pd.Series([1, 2]))
True
>>> is_integer_dtype(np.array([], dtype=np.timedelta64))
False
>>> is_integer_dtype(pd.Index([1, 2.])) # float
False
|
reference/api/pandas.api.types.is_integer_dtype.html
|
pandas.tseries.offsets.CustomBusinessMonthEnd.n
|
pandas.tseries.offsets.CustomBusinessMonthEnd.n
|
CustomBusinessMonthEnd.n#
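This attribute has no description above; as a sketch, n is the integer multiple passed to the offset constructor:
```python
import pandas as pd

pd.offsets.CustomBusinessMonthEnd().n    # 1 by default
pd.offsets.CustomBusinessMonthEnd(3).n   # 3
```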
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthEnd.n.html
|
Style
|
Style
|
Styler objects are returned by pandas.DataFrame.style.
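A minimal, hedged sketch of how a Styler is typically obtained and chained (the data and styling choices here are purely illustrative):
```python
import pandas as pd

df = pd.DataFrame({"a": [1, -2], "b": [3.0, None]})
styler = df.style.highlight_null(color="red").format(precision=2)
html = styler.to_html()   # render the styled table as an HTML string
```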
Styler constructor#
Styler(data[, precision, table_styles, ...])
Helps style a DataFrame or Series according to the data with HTML and CSS.
Styler.from_custom_template(searchpath[, ...])
Factory function for creating a subclass of Styler.
Styler properties#
Styler.env
Styler.template_html
Styler.template_html_style
Styler.template_html_table
Styler.template_latex
Styler.template_string
Styler.loader
Style application#
Styler.apply(func[, axis, subset])
Apply a CSS-styling function column-wise, row-wise, or table-wise.
Styler.applymap(func[, subset])
Apply a CSS-styling function elementwise.
Styler.apply_index(func[, axis, level])
Apply a CSS-styling function to the index or column headers, level-wise.
Styler.applymap_index(func[, axis, level])
Apply a CSS-styling function to the index or column headers, elementwise.
Styler.format([formatter, subset, na_rep, ...])
Format the text display value of cells.
Styler.format_index([formatter, axis, ...])
Format the text display value of index labels or column headers.
Styler.relabel_index(labels[, axis, level])
Relabel the index, or column header, keys to display a set of specified values.
Styler.hide([subset, axis, level, names])
Hide the entire index / column headers, or specific rows / columns from display.
Styler.concat(other)
Append another Styler to combine the output into a single table.
Styler.set_td_classes(classes)
Set the class attribute of <td> HTML elements.
Styler.set_table_styles([table_styles, ...])
Set the table styles included within the <style> HTML element.
Styler.set_table_attributes(attributes)
Set the table attributes added to the <table> HTML element.
Styler.set_tooltips(ttips[, props, css_class])
Set the DataFrame of strings on Styler generating :hover tooltips.
Styler.set_caption(caption)
Set the text added to a <caption> HTML element.
Styler.set_sticky([axis, pixel_size, levels])
Add CSS to permanently display the index or column headers in a scrolling frame.
Styler.set_properties([subset])
Set defined CSS-properties to each <td> HTML element for the given subset.
Styler.set_uuid(uuid)
Set the uuid applied to id attributes of HTML elements.
Styler.clear()
Reset the Styler, removing any previously applied styles.
Styler.pipe(func, *args, **kwargs)
Apply func(self, *args, **kwargs), and return the result.
Builtin styles#
Styler.highlight_null([color, subset, ...])
Highlight missing values with a style.
Styler.highlight_max([subset, color, axis, ...])
Highlight the maximum with a style.
Styler.highlight_min([subset, color, axis, ...])
Highlight the minimum with a style.
Styler.highlight_between([subset, color, ...])
Highlight a defined range with a style.
Styler.highlight_quantile([subset, color, ...])
Highlight values defined by a quantile with a style.
Styler.background_gradient([cmap, low, ...])
Color the background in a gradient style.
Styler.text_gradient([cmap, low, high, ...])
Color the text in a gradient style.
Styler.bar([subset, axis, color, cmap, ...])
Draw bar chart in the cell backgrounds.
Style export and import#
Styler.to_html([buf, table_uuid, ...])
Write Styler to a file, buffer or string in HTML-CSS format.
Styler.to_latex([buf, column_format, ...])
Write Styler to a file, buffer or string in LaTeX format.
Styler.to_excel(excel_writer[, sheet_name, ...])
Write Styler to an Excel sheet.
Styler.to_string([buf, encoding, ...])
Write Styler to a file, buffer or string in text format.
Styler.export()
Export the styles applied to the current Styler.
Styler.use(styles)
Set the styles on the current Styler.
|
reference/style.html
|
pandas.DataFrame.cov
|
`pandas.DataFrame.cov`
Compute pairwise covariance of columns, excluding NA/null values.
Compute the pairwise covariance among the series of a DataFrame.
The returned data frame is the covariance matrix of the columns
of the DataFrame.
```
>>> df = pd.DataFrame([(1, 2), (0, 3), (2, 0), (1, 1)],
... columns=['dogs', 'cats'])
>>> df.cov()
dogs cats
dogs 0.666667 -1.000000
cats -1.000000 1.666667
```
|
DataFrame.cov(min_periods=None, ddof=1, numeric_only=_NoDefault.no_default)[source]#
Compute pairwise covariance of columns, excluding NA/null values.
Compute the pairwise covariance among the series of a DataFrame.
The returned data frame is the covariance matrix of the columns
of the DataFrame.
Both NA and null values are automatically excluded from the
calculation. (See the note below about bias from missing values.)
A threshold can be set for the minimum number of
observations for each value created. Comparisons with observations
below this threshold will be returned as NaN.
This method is generally used for the analysis of time series data to
understand the relationship between different measures
across time.
Parameters
min_periodsint, optionalMinimum number of observations required per pair of columns
to have a valid result.
ddofint, default 1Delta degrees of freedom. The divisor used in calculations
is N - ddof, where N represents the number of elements.
New in version 1.1.0.
numeric_onlybool, default TrueInclude only float, int or boolean data.
New in version 1.5.0.
Deprecated since version 1.5.0: The default value of numeric_only will be False in a future
version of pandas.
Returns
DataFrameThe covariance matrix of the series of the DataFrame.
See also
Series.covCompute covariance with another Series.
core.window.ewm.ExponentialMovingWindow.covExponential weighted sample covariance.
core.window.expanding.Expanding.covExpanding sample covariance.
core.window.rolling.Rolling.covRolling sample covariance.
Notes
Returns the covariance matrix of the DataFrame’s time series.
The covariance is normalized by N-ddof.
For DataFrames that have Series that are missing data (assuming that
data is missing at random)
the returned covariance matrix will be an unbiased estimate
of the variance and covariance between the member Series.
However, for many applications this estimate may not be acceptable
because the estimated covariance matrix is not guaranteed to be positive
semi-definite. This could lead to estimate correlations having
absolute values which are greater than one, and/or a non-invertible
covariance matrix. See Estimation of covariance matrices for more details.
Examples
>>> df = pd.DataFrame([(1, 2), (0, 3), (2, 0), (1, 1)],
... columns=['dogs', 'cats'])
>>> df.cov()
dogs cats
dogs 0.666667 -1.000000
cats -1.000000 1.666667
>>> np.random.seed(42)
>>> df = pd.DataFrame(np.random.randn(1000, 5),
... columns=['a', 'b', 'c', 'd', 'e'])
>>> df.cov()
a b c d e
a 0.998438 -0.020161 0.059277 -0.008943 0.014144
b -0.020161 1.059352 -0.008543 -0.024738 0.009826
c 0.059277 -0.008543 1.010670 -0.001486 -0.000271
d -0.008943 -0.024738 -0.001486 0.921297 -0.013692
e 0.014144 0.009826 -0.000271 -0.013692 0.977795
Minimum number of periods
This method also supports an optional min_periods keyword
that specifies the required minimum number of non-NA observations for
each column pair in order to have a valid result:
>>> np.random.seed(42)
>>> df = pd.DataFrame(np.random.randn(20, 3),
... columns=['a', 'b', 'c'])
>>> df.loc[df.index[:5], 'a'] = np.nan
>>> df.loc[df.index[5:10], 'b'] = np.nan
>>> df.cov(min_periods=12)
a b c
a 0.316741 NaN -0.150812
b NaN 1.248003 0.191417
c -0.150812 0.191417 0.895202
|
reference/api/pandas.DataFrame.cov.html
|
pandas.Series.dt.weekofyear
|
`pandas.Series.dt.weekofyear`
The week ordinal of the year according to the ISO 8601 standard.
|
Series.dt.weekofyear[source]#
The week ordinal of the year according to the ISO 8601 standard.
Deprecated since version 1.1.0.
Series.dt.weekofyear and Series.dt.week have been deprecated. Please
call Series.dt.isocalendar() and access the week column
instead.
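A minimal sketch of the suggested replacement (added here, not part of the upstream entry); isocalendar() returns a DataFrame whose week column holds the ISO week number:
>>> s = pd.Series(pd.to_datetime(["2020-01-01", "2020-12-31"]))
>>> s.dt.isocalendar().week
0 1
1 53
Name: week, dtype: UInt32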
|
reference/api/pandas.Series.dt.weekofyear.html
|
pandas.DataFrame.drop_duplicates
|
`pandas.DataFrame.drop_duplicates`
Return DataFrame with duplicate rows removed.
```
>>> df = pd.DataFrame({
... 'brand': ['Yum Yum', 'Yum Yum', 'Indomie', 'Indomie', 'Indomie'],
... 'style': ['cup', 'cup', 'cup', 'pack', 'pack'],
... 'rating': [4, 4, 3.5, 15, 5]
... })
>>> df
brand style rating
0 Yum Yum cup 4.0
1 Yum Yum cup 4.0
2 Indomie cup 3.5
3 Indomie pack 15.0
4 Indomie pack 5.0
```
|
DataFrame.drop_duplicates(subset=None, *, keep='first', inplace=False, ignore_index=False)[source]#
Return DataFrame with duplicate rows removed.
Considering certain columns is optional. Indexes, including time indexes
are ignored.
Parameters
subsetcolumn label or sequence of labels, optionalOnly consider certain columns for identifying duplicates, by
default use all of the columns.
keep{‘first’, ‘last’, False}, default ‘first’Determines which duplicates (if any) to keep.
- first : Drop duplicates except for the first occurrence.
- last : Drop duplicates except for the last occurrence.
- False : Drop all duplicates.
inplacebool, default FalseWhether to modify the DataFrame rather than creating a new one.
ignore_indexbool, default FalseIf True, the resulting axis will be labeled 0, 1, …, n - 1.
New in version 1.0.0.
Returns
DataFrame or NoneDataFrame with duplicates removed or None if inplace=True.
See also
DataFrame.value_countsCount unique combinations of columns.
Examples
Consider dataset containing ramen rating.
>>> df = pd.DataFrame({
... 'brand': ['Yum Yum', 'Yum Yum', 'Indomie', 'Indomie', 'Indomie'],
... 'style': ['cup', 'cup', 'cup', 'pack', 'pack'],
... 'rating': [4, 4, 3.5, 15, 5]
... })
>>> df
brand style rating
0 Yum Yum cup 4.0
1 Yum Yum cup 4.0
2 Indomie cup 3.5
3 Indomie pack 15.0
4 Indomie pack 5.0
By default, it removes duplicate rows based on all columns.
>>> df.drop_duplicates()
brand style rating
0 Yum Yum cup 4.0
2 Indomie cup 3.5
3 Indomie pack 15.0
4 Indomie pack 5.0
To remove duplicates on specific column(s), use subset.
>>> df.drop_duplicates(subset=['brand'])
brand style rating
0 Yum Yum cup 4.0
2 Indomie cup 3.5
To remove duplicates and keep last occurrences, use keep.
>>> df.drop_duplicates(subset=['brand', 'style'], keep='last')
brand style rating
1 Yum Yum cup 4.0
2 Indomie cup 3.5
4 Indomie pack 5.0
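As an illustrative sketch (not in the upstream examples), ignore_index=True relabels the remaining rows with a fresh 0, 1, …, n - 1 index:
>>> df.drop_duplicates(ignore_index=True)
brand style rating
0 Yum Yum cup 4.0
1 Indomie cup 3.5
2 Indomie pack 15.0
3 Indomie pack 5.0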
|
reference/api/pandas.DataFrame.drop_duplicates.html
|
pandas.api.types.union_categoricals
|
`pandas.api.types.union_categoricals`
Combine list-like of Categorical-like, unioning categories.
All categories must have the same dtype.
```
>>> a = pd.Categorical(["b", "c"])
>>> b = pd.Categorical(["a", "b"])
>>> pd.api.types.union_categoricals([a, b])
['b', 'c', 'a', 'b']
Categories (3, object): ['b', 'c', 'a']
```
|
pandas.api.types.union_categoricals(to_union, sort_categories=False, ignore_order=False)[source]#
Combine list-like of Categorical-like, unioning categories.
All categories must have the same dtype.
Parameters
to_unionlist-likeCategorical, CategoricalIndex, or Series with dtype=’category’.
sort_categoriesbool, default FalseIf true, resulting categories will be lexsorted, otherwise
they will be ordered as they appear in the data.
ignore_orderbool, default FalseIf true, the ordered attribute of the Categoricals will be ignored.
Results in an unordered categorical.
Returns
Categorical
Raises
TypeError
all inputs do not have the same dtype
all inputs do not have the same ordered property
all inputs are ordered and their categories are not identical
sort_categories=True and Categoricals are ordered
ValueErrorEmpty list of categoricals passed
Notes
To learn more about categories, see the Categorical data section of the user guide.
Examples
If you want to combine categoricals that do not necessarily have
the same categories, union_categoricals will combine a list-like
of categoricals. The new categories will be the union of the
categories being combined.
>>> a = pd.Categorical(["b", "c"])
>>> b = pd.Categorical(["a", "b"])
>>> pd.api.types.union_categoricals([a, b])
['b', 'c', 'a', 'b']
Categories (3, object): ['b', 'c', 'a']
By default, the resulting categories will be ordered as they appear
in the categories of the data. If you want the categories to be
lexsorted, use sort_categories=True argument.
>>> pd.api.types.union_categoricals([a, b], sort_categories=True)
['b', 'c', 'a', 'b']
Categories (3, object): ['a', 'b', 'c']
union_categoricals also works when combining two
categoricals with the same categories and order information (e.g. cases
where you could also use append).
>>> a = pd.Categorical(["a", "b"], ordered=True)
>>> b = pd.Categorical(["a", "b", "a"], ordered=True)
>>> pd.api.types.union_categoricals([a, b])
['a', 'b', 'a', 'b', 'a']
Categories (2, object): ['a' < 'b']
Raises TypeError because the categories are ordered and not identical.
>>> a = pd.Categorical(["a", "b"], ordered=True)
>>> b = pd.Categorical(["a", "b", "c"], ordered=True)
>>> pd.api.types.union_categoricals([a, b])
Traceback (most recent call last):
...
TypeError: to union ordered Categoricals, all categories must be the same
New in version 0.20.0
Ordered categoricals with different categories or orderings can be
combined by using the ignore_order=True argument.
>>> a = pd.Categorical(["a", "b", "c"], ordered=True)
>>> b = pd.Categorical(["c", "b", "a"], ordered=True)
>>> pd.api.types.union_categoricals([a, b], ignore_order=True)
['a', 'b', 'c', 'c', 'b', 'a']
Categories (3, object): ['a', 'b', 'c']
union_categoricals also works with a CategoricalIndex, or Series
containing categorical data, but note that the resulting array will
always be a plain Categorical
>>> a = pd.Series(["b", "c"], dtype='category')
>>> b = pd.Series(["a", "b"], dtype='category')
>>> pd.api.types.union_categoricals([a, b])
['b', 'c', 'a', 'b']
Categories (3, object): ['b', 'c', 'a']
|
reference/api/pandas.api.types.union_categoricals.html
|
pandas.IntervalIndex.is_non_overlapping_monotonic
|
`pandas.IntervalIndex.is_non_overlapping_monotonic`
Return a boolean indicating whether the IntervalArray is non-overlapping and monotonic.
Non-overlapping means no Intervals share points, and monotonic means
either monotonic increasing or monotonic decreasing.
|
IntervalIndex.is_non_overlapping_monotonic[source]#
Return a boolean indicating whether the IntervalArray is non-overlapping and monotonic.
Non-overlapping means no Intervals share points, and monotonic means
either monotonic increasing or monotonic decreasing.
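For illustration (a sketch added here, not from the upstream entry):
>>> pd.IntervalIndex.from_breaks([0, 1, 2, 3]).is_non_overlapping_monotonic
True
>>> pd.IntervalIndex.from_tuples([(0, 2), (1, 3)]).is_non_overlapping_monotonic
False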
|
reference/api/pandas.IntervalIndex.is_non_overlapping_monotonic.html
|
pandas.tseries.offsets.WeekOfMonth.rollback
|
`pandas.tseries.offsets.WeekOfMonth.rollback`
Roll provided date backward to the nearest offset date only if it is not already on an offset date.
Rolled timestamp if not on offset, otherwise unchanged timestamp.
|
WeekOfMonth.rollback()#
Roll provided date backward to the nearest offset date only if it is not already on an offset date.
Returns
TimeStampRolled timestamp if not on offset, otherwise unchanged timestamp.
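For illustration (a sketch added here, not from the upstream entry), using the first-Monday-of-the-month offset: a mid-month Monday is not on the offset, so it rolls back to the first Monday.
>>> wom = pd.offsets.WeekOfMonth(week=0, weekday=0)
>>> wom.rollback(pd.Timestamp('2022-01-10'))
Timestamp('2022-01-03 00:00:00')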
|
reference/api/pandas.tseries.offsets.WeekOfMonth.rollback.html
|
pandas.tseries.offsets.Milli.normalize
|
pandas.tseries.offsets.Milli.normalize
|
Milli.normalize#
|
reference/api/pandas.tseries.offsets.Milli.normalize.html
|
How to handle time series data with ease?
|
How to handle time series data with ease?
For this tutorial, air quality data about \(NO_2\) and Particulate
matter less than 2.5 micrometers is used, made available by
OpenAQ and downloaded using the
py-openaq package.
The air_quality_no2_long.csv" data set provides \(NO_2\) values
for the measurement stations FR04014, BETR801 and London
Westminster in respectively Paris, Antwerp and London.
I want to work with the dates in the column datetime as datetime objects instead of plain text
Initially, the values in datetime are character strings and do not
provide any datetime operations (e.g. extract the year, day of the
week, …). By applying the to_datetime function, pandas interprets the
strings and converts them to datetime (i.e. datetime64[ns, UTC])
objects. In pandas we call these datetime objects, similar to
datetime.datetime from the standard library, pandas.Timestamp.
Note
|
In [1]: import pandas as pd
In [2]: import matplotlib.pyplot as plt
Data used for this tutorial:
Air quality data
For this tutorial, air quality data about \(NO_2\) and Particulate
matter less than 2.5 micrometers is used, made available by
OpenAQ and downloaded using the
py-openaq package.
The air_quality_no2_long.csv" data set provides \(NO_2\) values
for the measurement stations FR04014, BETR801 and London
Westminster in respectively Paris, Antwerp and London.
To raw data
In [3]: air_quality = pd.read_csv("data/air_quality_no2_long.csv")
In [4]: air_quality = air_quality.rename(columns={"date.utc": "datetime"})
In [5]: air_quality.head()
Out[5]:
city country datetime location parameter value unit
0 Paris FR 2019-06-21 00:00:00+00:00 FR04014 no2 20.0 µg/m³
1 Paris FR 2019-06-20 23:00:00+00:00 FR04014 no2 21.8 µg/m³
2 Paris FR 2019-06-20 22:00:00+00:00 FR04014 no2 26.5 µg/m³
3 Paris FR 2019-06-20 21:00:00+00:00 FR04014 no2 24.9 µg/m³
4 Paris FR 2019-06-20 20:00:00+00:00 FR04014 no2 21.4 µg/m³
In [6]: air_quality.city.unique()
Out[6]: array(['Paris', 'Antwerpen', 'London'], dtype=object)
How to handle time series data with ease?#
Using pandas datetime properties#
I want to work with the dates in the column datetime as datetime objects instead of plain text
In [7]: air_quality["datetime"] = pd.to_datetime(air_quality["datetime"])
In [8]: air_quality["datetime"]
Out[8]:
0 2019-06-21 00:00:00+00:00
1 2019-06-20 23:00:00+00:00
2 2019-06-20 22:00:00+00:00
3 2019-06-20 21:00:00+00:00
4 2019-06-20 20:00:00+00:00
...
2063 2019-05-07 06:00:00+00:00
2064 2019-05-07 04:00:00+00:00
2065 2019-05-07 03:00:00+00:00
2066 2019-05-07 02:00:00+00:00
2067 2019-05-07 01:00:00+00:00
Name: datetime, Length: 2068, dtype: datetime64[ns, UTC]
Initially, the values in datetime are character strings and do not
provide any datetime operations (e.g. extract the year, day of the
week, …). By applying the to_datetime function, pandas interprets the
strings and converts them to datetime (i.e. datetime64[ns, UTC])
objects. In pandas we call these datetime objects, similar to
datetime.datetime from the standard library, pandas.Timestamp.
Note
As many data sets do contain datetime information in one of
the columns, pandas input functions like pandas.read_csv() and pandas.read_json()
can do the transformation to dates when reading the data using the
parse_dates parameter with a list of the columns to read as
Timestamp:
pd.read_csv("../data/air_quality_no2_long.csv", parse_dates=["datetime"])
Why are these pandas.Timestamp objects useful? Let’s illustrate the added
value with some example cases.
What is the start and end date of the time series data set we are working
with?
In [9]: air_quality["datetime"].min(), air_quality["datetime"].max()
Out[9]:
(Timestamp('2019-05-07 01:00:00+0000', tz='UTC'),
Timestamp('2019-06-21 00:00:00+0000', tz='UTC'))
Using pandas.Timestamp for datetimes enables us to calculate with date
information and make them comparable. Hence, we can use this to get the
length of our time series:
In [10]: air_quality["datetime"].max() - air_quality["datetime"].min()
Out[10]: Timedelta('44 days 23:00:00')
The result is a pandas.Timedelta object, similar to datetime.timedelta
from the standard Python library and defining a time duration.
To user guideThe various time concepts supported by pandas are explained in the user guide section on time related concepts.
I want to add a new column to the DataFrame containing only the month of the measurement
In [11]: air_quality["month"] = air_quality["datetime"].dt.month
In [12]: air_quality.head()
Out[12]:
city country datetime ... value unit month
0 Paris FR 2019-06-21 00:00:00+00:00 ... 20.0 µg/m³ 6
1 Paris FR 2019-06-20 23:00:00+00:00 ... 21.8 µg/m³ 6
2 Paris FR 2019-06-20 22:00:00+00:00 ... 26.5 µg/m³ 6
3 Paris FR 2019-06-20 21:00:00+00:00 ... 24.9 µg/m³ 6
4 Paris FR 2019-06-20 20:00:00+00:00 ... 21.4 µg/m³ 6
[5 rows x 8 columns]
By using Timestamp objects for dates, a lot of time-related
properties are provided by pandas. For example the month, but also
year, weekofyear, quarter,… All of these properties are
accessible by the dt accessor.
To user guideAn overview of the existing date properties is given in the
time and date components overview table. More details about the dt accessor
to return datetime like properties are explained in a dedicated section on the dt accessor.
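As a short sketch (not part of the original tutorial), a few more of these properties on the same column:
air_quality["datetime"].dt.year
air_quality["datetime"].dt.quarter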
What is the average \(NO_2\) concentration for each day of the week for each of the measurement locations?
In [13]: air_quality.groupby(
....: [air_quality["datetime"].dt.weekday, "location"])["value"].mean()
....:
Out[13]:
datetime location
0 BETR801 27.875000
FR04014 24.856250
London Westminster 23.969697
1 BETR801 22.214286
FR04014 30.999359
...
5 FR04014 25.266154
London Westminster 24.977612
6 BETR801 21.896552
FR04014 23.274306
London Westminster 24.859155
Name: value, Length: 21, dtype: float64
Remember the split-apply-combine pattern provided by groupby from the
tutorial on statistics calculation?
Here, we want to calculate a given statistic (e.g. mean \(NO_2\))
for each weekday and for each measurement location. To group on
weekdays, we use the datetime property weekday (with Monday=0 and
Sunday=6) of pandas Timestamp, which is also accessible by the
dt accessor. The grouping on both locations and weekdays can be done
to split the calculation of the mean on each of these combinations.
Danger
As we are working with a very short time series in these
examples, the analysis does not provide a long-term representative
result!
Plot the typical \(NO_2\) pattern during the day of our time series of all stations together. In other words, what is the average value for each hour of the day?
In [14]: fig, axs = plt.subplots(figsize=(12, 4))
In [15]: air_quality.groupby(air_quality["datetime"].dt.hour)["value"].mean().plot(
....: kind='bar', rot=0, ax=axs
....: )
....:
Out[15]: <AxesSubplot: xlabel='datetime'>
In [16]: plt.xlabel("Hour of the day"); # custom x label using Matplotlib
In [17]: plt.ylabel("$NO_2 (µg/m^3)$");
Similar to the previous case, we want to calculate a given statistic
(e.g. mean \(NO_2\)) for each hour of the day and we can use the
split-apply-combine approach again. For this case, we use the datetime property hour
of pandas Timestamp, which is also accessible by the dt accessor.
Datetime as index#
In the tutorial on reshaping,
pivot() was introduced to reshape the data table with each of the
measurements locations as a separate column:
In [18]: no_2 = air_quality.pivot(index="datetime", columns="location", values="value")
In [19]: no_2.head()
Out[19]:
location BETR801 FR04014 London Westminster
datetime
2019-05-07 01:00:00+00:00 50.5 25.0 23.0
2019-05-07 02:00:00+00:00 45.0 27.7 19.0
2019-05-07 03:00:00+00:00 NaN 50.4 19.0
2019-05-07 04:00:00+00:00 NaN 61.9 16.0
2019-05-07 05:00:00+00:00 NaN 72.4 NaN
Note
By pivoting the data, the datetime information became the
index of the table. In general, setting a column as an index can be
achieved by the set_index function.
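As a short sketch (not part of the original tutorial), the datetime column of the original table could also be made the index explicitly:
air_quality.set_index("datetime")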
Working with a datetime index (i.e. DatetimeIndex) provides powerful
functionalities. For example, we do not need the dt accessor to get
the time series properties, but have these properties available on the
index directly:
In [20]: no_2.index.year, no_2.index.weekday
Out[20]:
(Int64Index([2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019,
...
2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019],
dtype='int64', name='datetime', length=1033),
Int64Index([1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
...
3, 3, 3, 3, 3, 3, 3, 3, 3, 4],
dtype='int64', name='datetime', length=1033))
Some other advantages are the convenient subsetting of time period or
the adapted time scale on plots. Let’s apply this on our data.
Create a plot of the \(NO_2\) values in the different stations from the 20th of May till the end of the 21st of May
In [21]: no_2["2019-05-20":"2019-05-21"].plot();
By providing a string that parses to a datetime, a specific subset of the data can be selected on a DatetimeIndex.
To user guideMore information on the DatetimeIndex and the slicing by using strings is provided in the section on time series indexing.
Resample a time series to another frequency#
Aggregate the current hourly time series values to the monthly maximum value in each of the stations.
In [22]: monthly_max = no_2.resample("M").max()
In [23]: monthly_max
Out[23]:
location BETR801 FR04014 London Westminster
datetime
2019-05-31 00:00:00+00:00 74.5 97.0 97.0
2019-06-30 00:00:00+00:00 52.5 84.7 52.0
A very powerful method on time series data with a datetime index is the
ability to resample() time series to another frequency (e.g.,
converting per-second data into 5-minute data).
The resample() method is similar to a groupby operation:
it provides a time-based grouping, by using a string (e.g. M,
5H,…) that defines the target frequency
it requires an aggregation function such as mean, max,…
To user guideAn overview of the aliases used to define time series frequencies is given in the offset aliases overview table.
When defined, the frequency of the time series is provided by the
freq attribute:
In [24]: monthly_max.index.freq
Out[24]: <MonthEnd>
Make a plot of the daily mean \(NO_2\) value in each of the stations.
In [25]: no_2.resample("D").mean().plot(style="-o", figsize=(10, 5));
To user guideMore details on the power of time series resampling is provided in the user guide section on resampling.
REMEMBER
Valid date strings can be converted to datetime objects using
to_datetime function or as part of read functions.
Datetime objects in pandas support calculations, logical operations
and convenient date-related properties using the dt accessor.
A DatetimeIndex contains these date-related properties and
supports convenient slicing.
Resample is a powerful method to change the frequency of a time
series.
To user guideA full overview on time series is given on the pages on time series and date functionality.
|
getting_started/intro_tutorials/09_timeseries.html
|
pandas.DataFrame.notna
|
`pandas.DataFrame.notna`
Detect existing (non-missing) values.
```
>>> df = pd.DataFrame(dict(age=[5, 6, np.NaN],
... born=[pd.NaT, pd.Timestamp('1939-05-27'),
... pd.Timestamp('1940-04-25')],
... name=['Alfred', 'Batman', ''],
... toy=[None, 'Batmobile', 'Joker']))
>>> df
age born name toy
0 5.0 NaT Alfred None
1 6.0 1939-05-27 Batman Batmobile
2 NaN 1940-04-25 Joker
```
|
DataFrame.notna()[source]#
Detect existing (non-missing) values.
Return a boolean same-sized object indicating if the values are not NA.
Non-missing values get mapped to True. Characters such as empty
strings '' or numpy.inf are not considered NA values
(unless you set pandas.options.mode.use_inf_as_na = True).
NA values, such as None or numpy.NaN, get mapped to False
values.
Returns
DataFrameMask of bool values for each element in DataFrame that
indicates whether an element is not an NA value.
See also
DataFrame.notnullAlias of notna.
DataFrame.isnaBoolean inverse of notna.
DataFrame.dropnaOmit axes labels with missing values.
notnaTop-level notna.
Examples
Show which entries in a DataFrame are not NA.
>>> df = pd.DataFrame(dict(age=[5, 6, np.NaN],
... born=[pd.NaT, pd.Timestamp('1939-05-27'),
... pd.Timestamp('1940-04-25')],
... name=['Alfred', 'Batman', ''],
... toy=[None, 'Batmobile', 'Joker']))
>>> df
age born name toy
0 5.0 NaT Alfred None
1 6.0 1939-05-27 Batman Batmobile
2 NaN 1940-04-25 Joker
>>> df.notna()
age born name toy
0 True False True False
1 True True True True
2 False True True True
Show which entries in a Series are not NA.
>>> ser = pd.Series([5, 6, np.NaN])
>>> ser
0 5.0
1 6.0
2 NaN
dtype: float64
>>> ser.notna()
0 True
1 True
2 False
dtype: bool
|
reference/api/pandas.DataFrame.notna.html
|
pandas.tseries.offsets.Micro.normalize
|
pandas.tseries.offsets.Micro.normalize
|
Micro.normalize#
|
reference/api/pandas.tseries.offsets.Micro.normalize.html
|
pandas.ExcelWriter.cur_sheet
|
`pandas.ExcelWriter.cur_sheet`
Current sheet for writing.
|
property ExcelWriter.cur_sheet[source]#
Current sheet for writing.
Deprecated since version 1.5.0.
|
reference/api/pandas.ExcelWriter.cur_sheet.html
|
pandas.Series.argsort
|
`pandas.Series.argsort`
Return the integer indices that would sort the Series values.
|
Series.argsort(axis=0, kind='quicksort', order=None)[source]#
Return the integer indices that would sort the Series values.
Override ndarray.argsort. Argsorts the value, omitting NA/null values,
and places the result in the same locations as the non-NA values.
Parameters
axis{0 or ‘index’}Unused. Parameter needed for compatibility with DataFrame.
kind{‘mergesort’, ‘quicksort’, ‘heapsort’, ‘stable’}, default ‘quicksort’Choice of sorting algorithm. See numpy.sort() for more
information. ‘mergesort’ and ‘stable’ are the only stable algorithms.
orderNoneHas no effect but is accepted for compatibility with numpy.
Returns
Series[np.intp]Positions of values within the sort order with -1 indicating
nan values.
See also
numpy.ndarray.argsortReturns the indices that would sort this array.
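A minimal sketch (added here; the upstream entry has no examples), on a 64-bit platform the positions print with dtype int64:
>>> s = pd.Series([3, 1, 2])
>>> s.argsort()
0 1
1 2
2 0
dtype: int64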
|
reference/api/pandas.Series.argsort.html
|
pandas.Series.str.casefold
|
`pandas.Series.str.casefold`
Convert strings in the Series/Index to be casefolded.
```
>>> s = pd.Series(['lower', 'CAPITALS', 'this is a sentence', 'SwApCaSe'])
>>> s
0 lower
1 CAPITALS
2 this is a sentence
3 SwApCaSe
dtype: object
```
|
Series.str.casefold()[source]#
Convert strings in the Series/Index to be casefolded.
New in version 0.25.0.
Equivalent to str.casefold().
Returns
Series or Index of object
See also
Series.str.lowerConverts all characters to lowercase.
Series.str.upperConverts all characters to uppercase.
Series.str.titleConverts first character of each word to uppercase and remaining to lowercase.
Series.str.capitalizeConverts first character to uppercase and remaining to lowercase.
Series.str.swapcaseConverts uppercase to lowercase and lowercase to uppercase.
Series.str.casefoldRemoves all case distinctions in the string.
Examples
>>> s = pd.Series(['lower', 'CAPITALS', 'this is a sentence', 'SwApCaSe'])
>>> s
0 lower
1 CAPITALS
2 this is a sentence
3 SwApCaSe
dtype: object
>>> s.str.lower()
0 lower
1 capitals
2 this is a sentence
3 swapcase
dtype: object
>>> s.str.upper()
0 LOWER
1 CAPITALS
2 THIS IS A SENTENCE
3 SWAPCASE
dtype: object
>>> s.str.title()
0 Lower
1 Capitals
2 This Is A Sentence
3 Swapcase
dtype: object
>>> s.str.capitalize()
0 Lower
1 Capitals
2 This is a sentence
3 Swapcase
dtype: object
>>> s.str.swapcase()
0 LOWER
1 capitals
2 THIS IS A SENTENCE
3 sWaPcAsE
dtype: object
|
reference/api/pandas.Series.str.casefold.html
|
pandas.Categorical.from_codes
|
`pandas.Categorical.from_codes`
Make a Categorical type from codes and categories or dtype.
```
>>> dtype = pd.CategoricalDtype(['a', 'b'], ordered=True)
>>> pd.Categorical.from_codes(codes=[0, 1, 0, 1], dtype=dtype)
['a', 'b', 'a', 'b']
Categories (2, object): ['a' < 'b']
```
|
classmethod Categorical.from_codes(codes, categories=None, ordered=None, dtype=None)[source]#
Make a Categorical type from codes and categories or dtype.
This constructor is useful if you already have codes and
categories/dtype and so do not need the (computation intensive)
factorization step, which is usually done on the constructor.
If your data does not follow this convention, please use the normal
constructor.
Parameters
codesarray-like of intAn integer array, where each integer points to a category in
categories or dtype.categories, or else is -1 for NaN.
categoriesindex-like, optionalThe categories for the categorical. Items need to be unique.
If the categories are not given here, then they must be provided
in dtype.
orderedbool, optionalWhether or not this categorical is treated as an ordered
categorical. If not given here or in dtype, the resulting
categorical will be unordered.
dtypeCategoricalDtype or “category”, optionalIf CategoricalDtype, cannot be used together with
categories or ordered.
Returns
Categorical
Examples
>>> dtype = pd.CategoricalDtype(['a', 'b'], ordered=True)
>>> pd.Categorical.from_codes(codes=[0, 1, 0, 1], dtype=dtype)
['a', 'b', 'a', 'b']
Categories (2, object): ['a' < 'b']
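As a further sketch (not in the upstream examples), categories can be given directly instead of a dtype, and a code of -1 marks a missing value:
>>> pd.Categorical.from_codes(codes=[0, 1, -1], categories=['a', 'b'])
['a', 'b', NaN]
Categories (2, object): ['a', 'b']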
|
reference/api/pandas.Categorical.from_codes.html
|
pandas.Series.aggregate
|
`pandas.Series.aggregate`
Aggregate using one or more operations over the specified axis.
Function to use for aggregating the data. If a function, must either
work when passed a Series or when passed to Series.apply.
```
>>> s = pd.Series([1, 2, 3, 4])
>>> s
0 1
1 2
2 3
3 4
dtype: int64
```
|
Series.aggregate(func=None, axis=0, *args, **kwargs)[source]#
Aggregate using one or more operations over the specified axis.
Parameters
funcfunction, str, list or dictFunction to use for aggregating the data. If a function, must either
work when passed a Series or when passed to Series.apply.
Accepted combinations are:
function
string function name
list of functions and/or function names, e.g. [np.sum, 'mean']
dict of axis labels -> functions, function names or list of such.
axis{0 or ‘index’}Unused. Parameter needed for compatibility with DataFrame.
*argsPositional arguments to pass to func.
**kwargsKeyword arguments to pass to func.
Returns
scalar, Series or DataFrameThe return can be:
scalar : when Series.agg is called with a single function
Series : when Series.agg is called with several functions
Return scalar, Series or DataFrame.
See also
Series.applyInvoke function on a Series.
Series.transformTransform function producing a Series with like indexes.
Notes
agg is an alias for aggregate. Use the alias.
Functions that mutate the passed object can produce unexpected
behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods
for more details.
A passed user-defined-function will be passed a Series for evaluation.
Examples
>>> s = pd.Series([1, 2, 3, 4])
>>> s
0 1
1 2
2 3
3 4
dtype: int64
>>> s.agg('min')
1
>>> s.agg(['min', 'max'])
min 1
max 4
dtype: int64
|
reference/api/pandas.Series.aggregate.html
|
pandas.tseries.offsets.CustomBusinessMonthBegin.kwds
|
`pandas.tseries.offsets.CustomBusinessMonthBegin.kwds`
Return a dict of extra parameters for the offset.
Examples
```
>>> pd.DateOffset(5).kwds
{}
```
|
CustomBusinessMonthBegin.kwds#
Return a dict of extra parameters for the offset.
Examples
>>> pd.DateOffset(5).kwds
{}
>>> pd.offsets.FY5253Quarter().kwds
{'weekday': 0,
'startingMonth': 1,
'qtr_with_extra_week': 1,
'variation': 'nearest'}
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthBegin.kwds.html
|
pandas.UInt64Dtype
|
`pandas.UInt64Dtype`
An ExtensionDtype for uint64 integer data.
Changed in version 1.0.0: Now uses pandas.NA as its missing value,
rather than numpy.nan.
|
class pandas.UInt64Dtype[source]#
An ExtensionDtype for uint64 integer data.
Changed in version 1.0.0: Now uses pandas.NA as its missing value,
rather than numpy.nan.
Attributes
None
Methods
None
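For illustration (a sketch added here, not part of the upstream entry), a nullable uint64 array can be constructed with pd.array:
>>> pd.array([1, 2, None], dtype="UInt64")
<IntegerArray>
[1, 2, <NA>]
Length: 3, dtype: UInt64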
|
reference/api/pandas.UInt64Dtype.html
|
pandas.tseries.offsets.Day.is_month_end
|
`pandas.tseries.offsets.Day.is_month_end`
Return boolean whether a timestamp occurs on the month end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
```
|
Day.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
|
reference/api/pandas.tseries.offsets.Day.is_month_end.html
|
pandas.tseries.offsets.YearBegin.is_year_end
|
`pandas.tseries.offsets.YearBegin.is_year_end`
Return boolean whether a timestamp occurs on the year end.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
```
|
YearBegin.is_year_end()#
Return boolean whether a timestamp occurs on the year end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
|
reference/api/pandas.tseries.offsets.YearBegin.is_year_end.html
|
pandas.tseries.offsets.DateOffset.is_quarter_end
|
`pandas.tseries.offsets.DateOffset.is_quarter_end`
Return boolean whether a timestamp occurs on the quarter end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
```
|
DateOffset.is_quarter_end()#
Return boolean whether a timestamp occurs on the quarter end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
|
reference/api/pandas.tseries.offsets.DateOffset.is_quarter_end.html
|
pandas.MultiIndex.from_arrays
|
`pandas.MultiIndex.from_arrays`
Convert arrays to MultiIndex.
```
>>> arrays = [[1, 1, 2, 2], ['red', 'blue', 'red', 'blue']]
>>> pd.MultiIndex.from_arrays(arrays, names=('number', 'color'))
MultiIndex([(1, 'red'),
(1, 'blue'),
(2, 'red'),
(2, 'blue')],
names=['number', 'color'])
```
|
classmethod MultiIndex.from_arrays(arrays, sortorder=None, names=_NoDefault.no_default)[source]#
Convert arrays to MultiIndex.
Parameters
arrayslist / sequence of array-likesEach array-like gives one level’s value for each data point.
len(arrays) is the number of levels.
sortorderint or NoneLevel of sortedness (must be lexicographically sorted by that
level).
nameslist / sequence of str, optionalNames for the levels in the index.
Returns
MultiIndex
See also
MultiIndex.from_tuplesConvert list of tuples to MultiIndex.
MultiIndex.from_productMake a MultiIndex from cartesian product of iterables.
MultiIndex.from_frameMake a MultiIndex from a DataFrame.
Examples
>>> arrays = [[1, 1, 2, 2], ['red', 'blue', 'red', 'blue']]
>>> pd.MultiIndex.from_arrays(arrays, names=('number', 'color'))
MultiIndex([(1, 'red'),
(1, 'blue'),
(2, 'red'),
(2, 'blue')],
names=['number', 'color'])
|
reference/api/pandas.MultiIndex.from_arrays.html
|
pandas.api.types.is_iterator
|
`pandas.api.types.is_iterator`
Check if the object is an iterator.
```
>>> import datetime
>>> is_iterator((x for x in []))
True
>>> is_iterator([1, 2, 3])
False
>>> is_iterator(datetime.datetime(2017, 1, 1))
False
>>> is_iterator("foo")
False
>>> is_iterator(1)
False
```
|
pandas.api.types.is_iterator()#
Check if the object is an iterator.
This is intended for generators, not list-like objects.
Parameters
objThe object to check
Returns
is_iterboolWhether obj is an iterator.
Examples
>>> import datetime
>>> is_iterator((x for x in []))
True
>>> is_iterator([1, 2, 3])
False
>>> is_iterator(datetime.datetime(2017, 1, 1))
False
>>> is_iterator("foo")
False
>>> is_iterator(1)
False
|
reference/api/pandas.api.types.is_iterator.html
|