title | summary | context | path
---|---|---|---|
pandas.tseries.offsets.MonthBegin.__call__ | `pandas.tseries.offsets.MonthBegin.__call__`
Call self as a function. | MonthBegin.__call__(*args, **kwargs)#
Call self as a function.
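Calling the offset is equivalent to adding it to a timestamp. A minimal sketch (later pandas versions deprecate calling offsets in favor of addition):
>>> pd.offsets.MonthBegin()(pd.Timestamp("2022-01-15"))
Timestamp('2022-02-01 00:00:00')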
| reference/api/pandas.tseries.offsets.MonthBegin.__call__.html |
pandas.tseries.offsets.Tick.delta | pandas.tseries.offsets.Tick.delta | Tick.delta#
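Return the offset as a Timedelta. A minimal sketch, assuming a fixed-frequency tick offset such as Hour:
>>> pd.offsets.Hour(5).delta
Timedelta('0 days 05:00:00')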
| reference/api/pandas.tseries.offsets.Tick.delta.html |
Extensions | Extensions | These are primarily intended for library authors looking to extend pandas
objects.
api.extensions.register_extension_dtype(cls)
Register an ExtensionType with pandas as class decorator.
api.extensions.register_dataframe_accessor(name)
Register a custom accessor on DataFrame objects (see the sketch after this list).
api.extensions.register_series_accessor(name)
Register a custom accessor on Series objects.
api.extensions.register_index_accessor(name)
Register a custom accessor on Index objects.
api.extensions.ExtensionDtype()
A custom data type, to be paired with an ExtensionArray.
api.extensions.ExtensionArray()
Abstract base class for custom 1-D array types.
arrays.PandasArray(values[, copy])
A pandas ExtensionArray for NumPy data.
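A minimal sketch of the accessor-registration pattern (the "geo" name and the "lat"/"lon" columns are illustrative):
```
>>> @pd.api.extensions.register_dataframe_accessor("geo")
... class GeoAccessor:
...     def __init__(self, pandas_obj):
...         self._obj = pandas_obj
...     @property
...     def center(self):
...         # midpoint of the assumed 'lat'/'lon' columns
...         return self._obj["lat"].mean(), self._obj["lon"].mean()
>>> df = pd.DataFrame({"lat": [10.0, 20.0], "lon": [30.0, 50.0]})
>>> df.geo.center
(15.0, 40.0)
```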
Additionally, we have some utility methods for ensuring your object
behaves correctly.
api.indexers.check_array_indexer(array, indexer)
Check if indexer is a valid array indexer for array.
The sentinel pandas.api.extensions.no_default is used as the default
value in some methods. Use an is comparison to check if the user
provides a non-default value.
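A minimal sketch of that check (the function and fallback are illustrative):
```
>>> from pandas.api.extensions import no_default
>>> def describe(value=no_default):
...     # no_default is a sentinel; compare with `is`, not `==`
...     if value is no_default:
...         return "caller did not pass a value"
...     return f"caller passed {value!r}"
>>> describe()
'caller did not pass a value'
>>> describe(3)
'caller passed 3'
```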
| reference/extensions.html |
pandas.tseries.offsets.BusinessHour.is_quarter_start | `pandas.tseries.offsets.BusinessHour.is_quarter_start`
Return boolean whether a timestamp occurs on the quarter start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
``` | BusinessHour.is_quarter_start()#
Return boolean whether a timestamp occurs on the quarter start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
| reference/api/pandas.tseries.offsets.BusinessHour.is_quarter_start.html |
pandas.tseries.offsets.BusinessDay.rollback | `pandas.tseries.offsets.BusinessDay.rollback`
Roll provided date backward to next offset only if not on offset.
Rolled timestamp if not on offset, otherwise unchanged timestamp. | BusinessDay.rollback()#
Roll provided date backward to next offset only if not on offset.
Returns
Timestamp
Rolled timestamp if not on offset, otherwise unchanged timestamp.
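A minimal sketch with the business-day offset (2022-01-01 falls on a Saturday, so it rolls back to the preceding Friday):
>>> pd.offsets.BDay().rollback(pd.Timestamp("2022-01-01"))
Timestamp('2021-12-31 00:00:00')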
| reference/api/pandas.tseries.offsets.BusinessDay.rollback.html |
pandas.tseries.offsets.Easter.is_year_start | `pandas.tseries.offsets.Easter.is_year_start`
Return boolean whether a timestamp occurs on the year start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
``` | Easter.is_year_start()#
Return boolean whether a timestamp occurs on the year start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
| reference/api/pandas.tseries.offsets.Easter.is_year_start.html |
pandas.DataFrame.replace | `pandas.DataFrame.replace`
Replace values given in to_replace with value.
Values of the DataFrame are replaced with other values dynamically.
```
>>> s = pd.Series([1, 2, 3, 4, 5])
>>> s.replace(1, 5)
0 5
1 2
2 3
3 4
4 5
dtype: int64
``` | DataFrame.replace(to_replace=None, value=_NoDefault.no_default, *, inplace=False, limit=None, regex=False, method=_NoDefault.no_default)[source]#
Replace values given in to_replace with value.
Values of the DataFrame are replaced with other values dynamically.
This differs from updating with .loc or .iloc, which require
you to specify a location to update with some value.
Parameters
to_replace : str, regex, list, dict, Series, int, float, or None
How to find the values that will be replaced.
numeric, str or regex:
numeric: numeric values equal to to_replace will be
replaced with value
str: string exactly matching to_replace will be replaced
with value
regex: regexs matching to_replace will be replaced with
value
list of str, regex, or numeric:
First, if to_replace and value are both lists, they
must be the same length.
Second, if regex=True then all of the strings in both
lists will be interpreted as regexs otherwise they will match
directly. This doesn’t matter much for value since there
are only a few possible substitution regexes you can use.
str, regex and numeric rules apply as above.
dict:
Dicts can be used to specify different replacement values
for different existing values. For example,
{'a': 'b', 'y': 'z'} replaces the value ‘a’ with ‘b’ and
‘y’ with ‘z’. To use a dict in this way, the optional value
parameter should not be given.
For a DataFrame a dict can specify that different values
should be replaced in different columns. For example,
{'a': 1, 'b': 'z'} looks for the value 1 in column ‘a’
and the value ‘z’ in column ‘b’ and replaces these values
with whatever is specified in value. The value parameter
should not be None in this case. You can treat this as a
special case of passing two lists except that you are
specifying the column to search in.
For a DataFrame nested dictionaries, e.g.,
{'a': {'b': np.nan}}, are read as follows: look in column
‘a’ for the value ‘b’ and replace it with NaN. The optional value
parameter should not be specified to use a nested dict in this
way. You can nest regular expressions as well. Note that
column names (the top-level dictionary keys in a nested
dictionary) cannot be regular expressions.
None:
This means that the regex argument must be a string,
compiled regular expression, or list, dict, ndarray or
Series of such elements. If value is also None then
this must be a nested dictionary or Series.
See the examples section for examples of each of these.
value : scalar, dict, list, str, regex, default None
Value to replace any values matching to_replace with.
For a DataFrame a dict of values can be used to specify which
value to use for each column (columns not in the dict will not be
filled). Regular expressions, strings and lists or dicts of such
objects are also allowed.
inplace : bool, default False
Whether to modify the DataFrame rather than creating a new one.
limit : int, default None
Maximum size gap to forward or backward fill.
regex : bool or same types as to_replace, default False
Whether to interpret to_replace and/or value as regular
expressions. If this is True then to_replace must be a
string. Alternatively, this could be a regular expression or a
list, dict, or array of regular expressions in which case
to_replace must be None.
method : {‘pad’, ‘ffill’, ‘bfill’}
The method to use for replacement, when to_replace is a
scalar, list or tuple and value is None.
Changed in version 0.23.0: Added to DataFrame.
Returns
DataFrame
Object after replacement.
Raises
AssertionError
If regex is not a bool and to_replace is not
None.
TypeError
If to_replace is not a scalar, array-like, dict, or None
If to_replace is a dict and value is not a list,
dict, ndarray, or Series
If to_replace is None and regex is not compilable
into a regular expression or is a list, dict, ndarray, or
Series.
When replacing multiple bool or datetime64 objects and
the arguments to to_replace does not match the type of the
value being replaced
ValueError
If a list or an ndarray is passed to to_replace and
value but they are not the same length.
See also
DataFrame.fillna : Fill NA values.
DataFrame.where : Replace values based on boolean condition.
Series.str.replace : Simple string replacement.
Notes
Regex substitution is performed under the hood with re.sub. The
rules for substitution for re.sub are the same.
Regular expressions will only substitute on strings, meaning you
cannot provide, for example, a regular expression matching floating
point numbers and expect the columns in your frame that have a
numeric dtype to be matched. However, if those floating point
numbers are strings, then you can do this.
This method has a lot of options. You are encouraged to experiment
and play with this method to gain intuition about how it works.
When a dict is used as the to_replace value, the dict’s
key(s) act as the to_replace part and the dict’s
value(s) act as the value parameter.
Examples
Scalar `to_replace` and `value`
>>> s = pd.Series([1, 2, 3, 4, 5])
>>> s.replace(1, 5)
0 5
1 2
2 3
3 4
4 5
dtype: int64
>>> df = pd.DataFrame({'A': [0, 1, 2, 3, 4],
... 'B': [5, 6, 7, 8, 9],
... 'C': ['a', 'b', 'c', 'd', 'e']})
>>> df.replace(0, 5)
A B C
0 5 5 a
1 1 6 b
2 2 7 c
3 3 8 d
4 4 9 e
List-like `to_replace`
>>> df.replace([0, 1, 2, 3], 4)
A B C
0 4 5 a
1 4 6 b
2 4 7 c
3 4 8 d
4 4 9 e
>>> df.replace([0, 1, 2, 3], [4, 3, 2, 1])
A B C
0 4 5 a
1 3 6 b
2 2 7 c
3 1 8 d
4 4 9 e
>>> s.replace([1, 2], method='bfill')
0 3
1 3
2 3
3 4
4 5
dtype: int64
dict-like `to_replace`
>>> df.replace({0: 10, 1: 100})
A B C
0 10 5 a
1 100 6 b
2 2 7 c
3 3 8 d
4 4 9 e
>>> df.replace({'A': 0, 'B': 5}, 100)
A B C
0 100 100 a
1 1 6 b
2 2 7 c
3 3 8 d
4 4 9 e
>>> df.replace({'A': {0: 100, 4: 400}})
A B C
0 100 5 a
1 1 6 b
2 2 7 c
3 3 8 d
4 400 9 e
Regular expression `to_replace`
>>> df = pd.DataFrame({'A': ['bat', 'foo', 'bait'],
... 'B': ['abc', 'bar', 'xyz']})
>>> df.replace(to_replace=r'^ba.$', value='new', regex=True)
A B
0 new abc
1 foo new
2 bait xyz
>>> df.replace({'A': r'^ba.$'}, {'A': 'new'}, regex=True)
A B
0 new abc
1 foo bar
2 bait xyz
>>> df.replace(regex=r'^ba.$', value='new')
A B
0 new abc
1 foo new
2 bait xyz
>>> df.replace(regex={r'^ba.$': 'new', 'foo': 'xyz'})
A B
0 new abc
1 xyz new
2 bait xyz
>>> df.replace(regex=[r'^ba.$', 'foo'], value='new')
A B
0 new abc
1 new new
2 bait xyz
Compare the behavior of s.replace({'a': None}) and
s.replace('a', None) to understand the peculiarities
of the to_replace parameter:
>>> s = pd.Series([10, 'a', 'a', 'b', 'a'])
When one uses a dict as the to_replace value, the
value(s) in the dict act as the value parameter.
s.replace({'a': None}) is equivalent to
s.replace(to_replace={'a': None}, value=None, method=None):
>>> s.replace({'a': None})
0 10
1 None
2 None
3 b
4 None
dtype: object
When value is not explicitly passed and to_replace is a scalar, list
or tuple, replace uses the method parameter (default ‘pad’) to do the
replacement. So this is why the ‘a’ values are being replaced by 10
in rows 1 and 2 and ‘b’ in row 4 in this case.
>>> s.replace('a')
0 10
1 10
2 10
3 b
4 b
dtype: object
On the other hand, if None is explicitly passed for value, it will
be respected:
>>> s.replace('a', None)
0 10
1 None
2 None
3 b
4 None
dtype: object
Changed in version 1.4.0: Previously the explicit None was silently ignored.
| reference/api/pandas.DataFrame.replace.html |
pandas.Categorical.categories | `pandas.Categorical.categories`
The categories of this categorical. | property Categorical.categories[source]#
The categories of this categorical.
Setting assigns new values to each category (effectively a rename of
each individual category).
The assigned value has to be a list-like object. All items must be
unique and the number of items in the new categories must be the same
as the number of items in the old categories.
Assigning to categories is an in-place operation!
Raises
ValueError
If the new categories do not validate as categories or if the
number of new categories does not equal the number of old categories.
See also
rename_categories : Rename categories.
reorder_categories : Reorder categories.
add_categories : Add new categories.
remove_categories : Remove the specified categories.
remove_unused_categories : Remove categories which are not used.
set_categories : Set the categories to the specified ones.
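A minimal sketch of renaming via assignment, valid for the pandas version documented here (later versions remove the setter in favor of rename_categories):
```
>>> c = pd.Categorical(["a", "b", "a"])
>>> c.categories = ["x", "y"]  # in place; new list must be unique and same length
>>> c
['x', 'y', 'x']
Categories (2, object): ['x', 'y']
```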
| reference/api/pandas.Categorical.categories.html |
pandas.errors.DatabaseError | `pandas.errors.DatabaseError`
Error is raised when executing sql with bad syntax or sql that throws an error.
Examples
```
>>> from sqlite3 import connect
>>> conn = connect(':memory:')
>>> pd.read_sql('select * test', conn)
... # DatabaseError: Execution failed on sql 'test': near "test": syntax error
``` | exception pandas.errors.DatabaseError[source]#
Error is raised when executing sql with bad syntax or sql that throws an error.
Examples
>>> from sqlite3 import connect
>>> conn = connect(':memory:')
>>> pd.read_sql('select * test', conn)
... # DatabaseError: Execution failed on sql 'test': near "test": syntax error
| reference/api/pandas.errors.DatabaseError.html |
pandas.tseries.offsets.CustomBusinessDay.is_quarter_end | `pandas.tseries.offsets.CustomBusinessDay.is_quarter_end`
Return boolean whether a timestamp occurs on the quarter end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
``` | CustomBusinessDay.is_quarter_end()#
Return boolean whether a timestamp occurs on the quarter end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
| reference/api/pandas.tseries.offsets.CustomBusinessDay.is_quarter_end.html |
pandas.Series.replace | `pandas.Series.replace`
Replace values given in to_replace with value.
```
>>> s = pd.Series([1, 2, 3, 4, 5])
>>> s.replace(1, 5)
0 5
1 2
2 3
3 4
4 5
dtype: int64
``` | Series.replace(to_replace=None, value=_NoDefault.no_default, *, inplace=False, limit=None, regex=False, method=_NoDefault.no_default)[source]#
Replace values given in to_replace with value.
Values of the Series are replaced with other values dynamically.
This differs from updating with .loc or .iloc, which require
you to specify a location to update with some value.
Parameters
to_replace : str, regex, list, dict, Series, int, float, or None
How to find the values that will be replaced.
numeric, str or regex:
numeric: numeric values equal to to_replace will be
replaced with value
str: string exactly matching to_replace will be replaced
with value
regex: regexs matching to_replace will be replaced with
value
list of str, regex, or numeric:
First, if to_replace and value are both lists, they
must be the same length.
Second, if regex=True then all of the strings in both
lists will be interpreted as regexs otherwise they will match
directly. This doesn’t matter much for value since there
are only a few possible substitution regexes you can use.
str, regex and numeric rules apply as above.
dict:
Dicts can be used to specify different replacement values
for different existing values. For example,
{'a': 'b', 'y': 'z'} replaces the value ‘a’ with ‘b’ and
‘y’ with ‘z’. To use a dict in this way, the optional value
parameter should not be given.
For a DataFrame a dict can specify that different values
should be replaced in different columns. For example,
{'a': 1, 'b': 'z'} looks for the value 1 in column ‘a’
and the value ‘z’ in column ‘b’ and replaces these values
with whatever is specified in value. The value parameter
should not be None in this case. You can treat this as a
special case of passing two lists except that you are
specifying the column to search in.
For a DataFrame nested dictionaries, e.g.,
{'a': {'b': np.nan}}, are read as follows: look in column
‘a’ for the value ‘b’ and replace it with NaN. The optional value
parameter should not be specified to use a nested dict in this
way. You can nest regular expressions as well. Note that
column names (the top-level dictionary keys in a nested
dictionary) cannot be regular expressions.
None:
This means that the regex argument must be a string,
compiled regular expression, or list, dict, ndarray or
Series of such elements. If value is also None then
this must be a nested dictionary or Series.
See the examples section for examples of each of these.
value : scalar, dict, list, str, regex, default None
Value to replace any values matching to_replace with.
For a DataFrame a dict of values can be used to specify which
value to use for each column (columns not in the dict will not be
filled). Regular expressions, strings and lists or dicts of such
objects are also allowed.
inplace : bool, default False
If True, performs the operation in place and returns None.
limit : int, default None
Maximum size gap to forward or backward fill.
regex : bool or same types as to_replace, default False
Whether to interpret to_replace and/or value as regular
expressions. If this is True then to_replace must be a
string. Alternatively, this could be a regular expression or a
list, dict, or array of regular expressions in which case
to_replace must be None.
method : {‘pad’, ‘ffill’, ‘bfill’}
The method to use for replacement, when to_replace is a
scalar, list or tuple and value is None.
Changed in version 0.23.0: Added to DataFrame.
Returns
Series
Object after replacement.
Raises
AssertionError
If regex is not a bool and to_replace is not
None.
TypeError
If to_replace is not a scalar, array-like, dict, or None
If to_replace is a dict and value is not a list,
dict, ndarray, or Series
If to_replace is None and regex is not compilable
into a regular expression or is a list, dict, ndarray, or
Series.
When replacing multiple bool or datetime64 objects and
the arguments to to_replace does not match the type of the
value being replaced
ValueError
If a list or an ndarray is passed to to_replace and
value but they are not the same length.
See also
Series.fillna : Fill NA values.
Series.where : Replace values based on boolean condition.
Series.str.replace : Simple string replacement.
Notes
Regex substitution is performed under the hood with re.sub. The
rules for substitution for re.sub are the same.
Regular expressions will only substitute on strings, meaning you
cannot provide, for example, a regular expression matching floating
point numbers and expect the columns in your frame that have a
numeric dtype to be matched. However, if those floating point
numbers are strings, then you can do this.
This method has a lot of options. You are encouraged to experiment
and play with this method to gain intuition about how it works.
When a dict is used as the to_replace value, the dict’s
key(s) act as the to_replace part and the dict’s
value(s) act as the value parameter.
Examples
Scalar `to_replace` and `value`
>>> s = pd.Series([1, 2, 3, 4, 5])
>>> s.replace(1, 5)
0 5
1 2
2 3
3 4
4 5
dtype: int64
>>> df = pd.DataFrame({'A': [0, 1, 2, 3, 4],
... 'B': [5, 6, 7, 8, 9],
... 'C': ['a', 'b', 'c', 'd', 'e']})
>>> df.replace(0, 5)
A B C
0 5 5 a
1 1 6 b
2 2 7 c
3 3 8 d
4 4 9 e
List-like `to_replace`
>>> df.replace([0, 1, 2, 3], 4)
A B C
0 4 5 a
1 4 6 b
2 4 7 c
3 4 8 d
4 4 9 e
>>> df.replace([0, 1, 2, 3], [4, 3, 2, 1])
A B C
0 4 5 a
1 3 6 b
2 2 7 c
3 1 8 d
4 4 9 e
>>> s.replace([1, 2], method='bfill')
0 3
1 3
2 3
3 4
4 5
dtype: int64
dict-like `to_replace`
>>> df.replace({0: 10, 1: 100})
A B C
0 10 5 a
1 100 6 b
2 2 7 c
3 3 8 d
4 4 9 e
>>> df.replace({'A': 0, 'B': 5}, 100)
A B C
0 100 100 a
1 1 6 b
2 2 7 c
3 3 8 d
4 4 9 e
>>> df.replace({'A': {0: 100, 4: 400}})
A B C
0 100 5 a
1 1 6 b
2 2 7 c
3 3 8 d
4 400 9 e
Regular expression `to_replace`
>>> df = pd.DataFrame({'A': ['bat', 'foo', 'bait'],
... 'B': ['abc', 'bar', 'xyz']})
>>> df.replace(to_replace=r'^ba.$', value='new', regex=True)
A B
0 new abc
1 foo new
2 bait xyz
>>> df.replace({'A': r'^ba.$'}, {'A': 'new'}, regex=True)
A B
0 new abc
1 foo bar
2 bait xyz
>>> df.replace(regex=r'^ba.$', value='new')
A B
0 new abc
1 foo new
2 bait xyz
>>> df.replace(regex={r'^ba.$': 'new', 'foo': 'xyz'})
A B
0 new abc
1 xyz new
2 bait xyz
>>> df.replace(regex=[r'^ba.$', 'foo'], value='new')
A B
0 new abc
1 new new
2 bait xyz
Compare the behavior of s.replace({'a': None}) and
s.replace('a', None) to understand the peculiarities
of the to_replace parameter:
>>> s = pd.Series([10, 'a', 'a', 'b', 'a'])
When one uses a dict as the to_replace value, the
value(s) in the dict act as the value parameter.
s.replace({'a': None}) is equivalent to
s.replace(to_replace={'a': None}, value=None, method=None):
>>> s.replace({'a': None})
0 10
1 None
2 None
3 b
4 None
dtype: object
When value is not explicitly passed and to_replace is a scalar, list
or tuple, replace uses the method parameter (default ‘pad’) to do the
replacement. So this is why the ‘a’ values are being replaced by 10
in rows 1 and 2 and ‘b’ in row 4 in this case.
>>> s.replace('a')
0 10
1 10
2 10
3 b
4 b
dtype: object
On the other hand, if None is explicitly passed for value, it will
be respected:
>>> s.replace('a', None)
0 10
1 None
2 None
3 b
4 None
dtype: object
Changed in version 1.4.0: Previously the explicit None was silently ignored.
| reference/api/pandas.Series.replace.html |
pandas.tseries.offsets.FY5253Quarter.qtr_with_extra_week | pandas.tseries.offsets.FY5253Quarter.qtr_with_extra_week | FY5253Quarter.qtr_with_extra_week#
| reference/api/pandas.tseries.offsets.FY5253Quarter.qtr_with_extra_week.html |
pandas.tseries.offsets.Week.is_quarter_end | `pandas.tseries.offsets.Week.is_quarter_end`
Return boolean whether a timestamp occurs on the quarter end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
``` | Week.is_quarter_end()#
Return boolean whether a timestamp occurs on the quarter end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
| reference/api/pandas.tseries.offsets.Week.is_quarter_end.html |
pandas.tseries.offsets.FY5253.copy | `pandas.tseries.offsets.FY5253.copy`
Return a copy of the frequency.
```
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
``` | FY5253.copy()#
Return a copy of the frequency.
Examples
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
| reference/api/pandas.tseries.offsets.FY5253.copy.html |
pandas.DataFrame.hist | `pandas.DataFrame.hist`
Make a histogram of the DataFrame’s columns.
A histogram is a representation of the distribution of data.
This function calls matplotlib.pyplot.hist(), on each series in
the DataFrame, resulting in one histogram per column.
```
>>> df = pd.DataFrame({
... 'length': [1.5, 0.5, 1.2, 0.9, 3],
... 'width': [0.7, 0.2, 0.15, 0.2, 1.1]
... }, index=['pig', 'rabbit', 'duck', 'chicken', 'horse'])
>>> hist = df.hist(bins=3)
``` | DataFrame.hist(column=None, by=None, grid=True, xlabelsize=None, xrot=None, ylabelsize=None, yrot=None, ax=None, sharex=False, sharey=False, figsize=None, layout=None, bins=10, backend=None, legend=False, **kwargs)[source]#
Make a histogram of the DataFrame’s columns.
A histogram is a representation of the distribution of data.
This function calls matplotlib.pyplot.hist(), on each series in
the DataFrame, resulting in one histogram per column.
Parameters
data : DataFrame
The pandas object holding the data.
column : str or sequence, optional
If passed, will be used to limit data to a subset of columns.
by : object, optional
If passed, then used to form histograms for separate groups.
grid : bool, default True
Whether to show axis grid lines.
xlabelsize : int, default None
If specified changes the x-axis label size.
xrot : float, default None
Rotation of x axis labels. For example, a value of 90 displays the
x labels rotated 90 degrees clockwise.
ylabelsize : int, default None
If specified changes the y-axis label size.
yrot : float, default None
Rotation of y axis labels. For example, a value of 90 displays the
y labels rotated 90 degrees clockwise.
ax : Matplotlib axes object, default None
The axes to plot the histogram on.
sharex : bool, default True if ax is None else False
In case subplots=True, share x axis and set some x axis labels to
invisible; defaults to True if ax is None otherwise False if an ax
is passed in.
Note that passing in both an ax and sharex=True will alter all x axis
labels for all subplots in a figure.
sharey : bool, default False
In case subplots=True, share y axis and set some y axis labels to
invisible.
figsize : tuple, optional
The size in inches of the figure to create. Uses the value in
matplotlib.rcParams by default.
layout : tuple, optional
Tuple of (rows, columns) for the layout of the histograms.
bins : int or sequence, default 10
Number of histogram bins to be used. If an integer is given, bins + 1
bin edges are calculated and returned. If bins is a sequence, gives
bin edges, including left edge of first bin and right edge of last
bin. In this case, bins is returned unmodified.
backend : str, default None
Backend to use instead of the backend specified in the option
plotting.backend. For instance, ‘matplotlib’. Alternatively, to
specify the plotting.backend for the whole session, set
pd.options.plotting.backend.
New in version 1.0.0.
legend : bool, default False
Whether to show the legend.
New in version 1.1.0.
**kwargs
All other plotting keyword arguments to be passed to
matplotlib.pyplot.hist().
Returns
matplotlib.AxesSubplot or numpy.ndarray of them
See also
matplotlib.pyplot.hist : Plot a histogram using matplotlib.
Examples
This example draws a histogram based on the length and width of
some animals, displayed in three bins
>>> df = pd.DataFrame({
... 'length': [1.5, 0.5, 1.2, 0.9, 3],
... 'width': [0.7, 0.2, 0.15, 0.2, 1.1]
... }, index=['pig', 'rabbit', 'duck', 'chicken', 'horse'])
>>> hist = df.hist(bins=3)
| reference/api/pandas.DataFrame.hist.html |
pandas.api.types.is_datetime64_ns_dtype | `pandas.api.types.is_datetime64_ns_dtype`
Check whether the provided array or dtype is of the datetime64[ns] dtype.
```
>>> is_datetime64_ns_dtype(str)
False
>>> is_datetime64_ns_dtype(int)
False
>>> is_datetime64_ns_dtype(np.datetime64) # no unit
False
>>> is_datetime64_ns_dtype(DatetimeTZDtype("ns", "US/Eastern"))
True
>>> is_datetime64_ns_dtype(np.array(['a', 'b']))
False
>>> is_datetime64_ns_dtype(np.array([1, 2]))
False
>>> is_datetime64_ns_dtype(np.array([], dtype="datetime64")) # no unit
False
>>> is_datetime64_ns_dtype(np.array([], dtype="datetime64[ps]")) # wrong unit
False
>>> is_datetime64_ns_dtype(pd.DatetimeIndex([1, 2, 3], dtype="datetime64[ns]"))
True
``` | pandas.api.types.is_datetime64_ns_dtype(arr_or_dtype)[source]#
Check whether the provided array or dtype is of the datetime64[ns] dtype.
Parameters
arr_or_dtype : array-like or dtype
The array or dtype to check.
Returns
bool
Whether or not the array or dtype is of the datetime64[ns] dtype.
Examples
>>> is_datetime64_ns_dtype(str)
False
>>> is_datetime64_ns_dtype(int)
False
>>> is_datetime64_ns_dtype(np.datetime64) # no unit
False
>>> is_datetime64_ns_dtype(DatetimeTZDtype("ns", "US/Eastern"))
True
>>> is_datetime64_ns_dtype(np.array(['a', 'b']))
False
>>> is_datetime64_ns_dtype(np.array([1, 2]))
False
>>> is_datetime64_ns_dtype(np.array([], dtype="datetime64")) # no unit
False
>>> is_datetime64_ns_dtype(np.array([], dtype="datetime64[ps]")) # wrong unit
False
>>> is_datetime64_ns_dtype(pd.DatetimeIndex([1, 2, 3], dtype="datetime64[ns]"))
True
| reference/api/pandas.api.types.is_datetime64_ns_dtype.html |
pandas.tseries.offsets.FY5253Quarter.is_anchored | `pandas.tseries.offsets.FY5253Quarter.is_anchored`
Return boolean whether the frequency is a unit frequency (n=1).
```
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
``` | FY5253Quarter.is_anchored()#
Return boolean whether the frequency is a unit frequency (n=1).
Examples
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
| reference/api/pandas.tseries.offsets.FY5253Quarter.is_anchored.html |
pandas.DataFrame.eval | `pandas.DataFrame.eval`
Evaluate a string describing operations on DataFrame columns.
Operates on columns only, not specific rows or elements. This allows
eval to run arbitrary code, which can make you vulnerable to code
injection if you pass user input to this function.
```
>>> df = pd.DataFrame({'A': range(1, 6), 'B': range(10, 0, -2)})
>>> df
A B
0 1 10
1 2 8
2 3 6
3 4 4
4 5 2
>>> df.eval('A + B')
0 11
1 10
2 9
3 8
4 7
dtype: int64
``` | DataFrame.eval(expr, *, inplace=False, **kwargs)[source]#
Evaluate a string describing operations on DataFrame columns.
Operates on columns only, not specific rows or elements. This allows
eval to run arbitrary code, which can make you vulnerable to code
injection if you pass user input to this function.
Parameters
expr : str
The expression string to evaluate.
inplace : bool, default False
If the expression contains an assignment, whether to perform the
operation inplace and mutate the existing DataFrame. Otherwise,
a new DataFrame is returned.
**kwargs
See the documentation for eval() for complete details
on the keyword arguments accepted by
query().
Returns
ndarray, scalar, pandas object, or None
The result of the evaluation or None if inplace=True.
See also
DataFrame.query : Evaluates a boolean expression to query the columns of a frame.
DataFrame.assign : Can evaluate an expression or function to create new values for a column.
eval : Evaluate a Python expression as a string using various backends.
Notes
For more details see the API documentation for eval().
For detailed examples see enhancing performance with eval.
Examples
>>> df = pd.DataFrame({'A': range(1, 6), 'B': range(10, 0, -2)})
>>> df
A B
0 1 10
1 2 8
2 3 6
3 4 4
4 5 2
>>> df.eval('A + B')
0 11
1 10
2 9
3 8
4 7
dtype: int64
Assignment is allowed though by default the original DataFrame is not
modified.
>>> df.eval('C = A + B')
A B C
0 1 10 11
1 2 8 10
2 3 6 9
3 4 4 8
4 5 2 7
>>> df
A B
0 1 10
1 2 8
2 3 6
3 4 4
4 5 2
Use inplace=True to modify the original DataFrame.
>>> df.eval('C = A + B', inplace=True)
>>> df
A B C
0 1 10 11
1 2 8 10
2 3 6 9
3 4 4 8
4 5 2 7
Multiple columns can be assigned to using multi-line expressions:
>>> df.eval(
... '''
... C = A + B
... D = A - B
... '''
... )
A B C D
0 1 10 11 -9
1 2 8 10 -6
2 3 6 9 -3
3 4 4 8 0
4 5 2 7 3
| reference/api/pandas.DataFrame.eval.html |
pandas.cut | `pandas.cut`
Bin values into discrete intervals.
Use cut when you need to segment and sort data values into bins. This
function is also useful for going from a continuous variable to a
categorical variable. For example, cut could convert ages to groups of
age ranges. Supports binning into an equal number of bins, or a
pre-specified array of bins.
```
>>> pd.cut(np.array([1, 7, 5, 4, 6, 3]), 3)
...
[(0.994, 3.0], (5.0, 7.0], (3.0, 5.0], (3.0, 5.0], (5.0, 7.0], ...
Categories (3, interval[float64, right]): [(0.994, 3.0] < (3.0, 5.0] ...
``` | pandas.cut(x, bins, right=True, labels=None, retbins=False, precision=3, include_lowest=False, duplicates='raise', ordered=True)[source]#
Bin values into discrete intervals.
Use cut when you need to segment and sort data values into bins. This
function is also useful for going from a continuous variable to a
categorical variable. For example, cut could convert ages to groups of
age ranges. Supports binning into an equal number of bins, or a
pre-specified array of bins.
Parameters
x : array-like
The input array to be binned. Must be 1-dimensional.
bins : int, sequence of scalars, or IntervalIndex
The criteria to bin by.
int : Defines the number of equal-width bins in the range of x. The
range of x is extended by .1% on each side to include the minimum
and maximum values of x.
sequence of scalars : Defines the bin edges allowing for non-uniform
width. No extension of the range of x is done.
IntervalIndex : Defines the exact bins to be used. Note that
IntervalIndex for bins must be non-overlapping.
right : bool, default True
Indicates whether bins includes the rightmost edge or not. If
right == True (the default), then the bins [1, 2, 3, 4]
indicate (1,2], (2,3], (3,4]. This argument is ignored when
bins is an IntervalIndex.
labels : array or False, default None
Specifies the labels for the returned bins. Must be the same length as
the resulting bins. If False, returns only integer indicators of the
bins. This affects the type of the output container (see below).
This argument is ignored when bins is an IntervalIndex. If True,
raises an error. When ordered=False, labels must be provided.
retbins : bool, default False
Whether to return the bins or not. Useful when bins is provided
as a scalar.
precision : int, default 3
The precision at which to store and display the bins labels.
include_lowest : bool, default False
Whether the first interval should be left-inclusive or not.
duplicates : {default ‘raise’, ‘drop’}, optional
If bin edges are not unique, raise ValueError or drop non-uniques.
ordered : bool, default True
Whether the labels are ordered or not. Applies to returned types
Categorical and Series (with Categorical dtype). If True,
the resulting categorical will be ordered. If False, the resulting
categorical will be unordered (labels must be provided).
New in version 1.1.0.
Returns
out : Categorical, Series, or ndarray
An array-like object representing the respective bin for each value
of x. The type depends on the value of labels.
None (default) : returns a Series for Series x or a
Categorical for all other inputs. The values stored within
are Interval dtype.
sequence of scalars : returns a Series for Series x or a
Categorical for all other inputs. The values stored within
are whatever the type in the sequence is.
False : returns an ndarray of integers.
bins : numpy.ndarray or IntervalIndex
The computed or specified bins. Only returned when retbins=True.
For scalar or sequence bins, this is an ndarray with the computed
bins. If duplicates='drop' is set, bins will drop non-unique bins. For
an IntervalIndex bins, this is equal to bins.
See also
qcut : Discretize variable into equal-sized buckets based on rank or based on sample quantiles.
Categorical : Array type for storing data that come from a fixed set of values.
Series : One-dimensional array with axis labels (including time series).
IntervalIndex : Immutable Index implementing an ordered, sliceable set.
Notes
Any NA values will be NA in the result. Out of bounds values will be NA in
the resulting Series or Categorical object.
Reference the user guide for more examples.
Examples
Discretize into three equal-sized bins.
>>> pd.cut(np.array([1, 7, 5, 4, 6, 3]), 3)
...
[(0.994, 3.0], (5.0, 7.0], (3.0, 5.0], (3.0, 5.0], (5.0, 7.0], ...
Categories (3, interval[float64, right]): [(0.994, 3.0] < (3.0, 5.0] ...
>>> pd.cut(np.array([1, 7, 5, 4, 6, 3]), 3, retbins=True)
...
([(0.994, 3.0], (5.0, 7.0], (3.0, 5.0], (3.0, 5.0], (5.0, 7.0], ...
Categories (3, interval[float64, right]): [(0.994, 3.0] < (3.0, 5.0] ...
array([0.994, 3. , 5. , 7. ]))
Discovers the same bins, but assigns them specific labels. Notice that
the returned Categorical’s categories are the labels and are ordered.
>>> pd.cut(np.array([1, 7, 5, 4, 6, 3]),
... 3, labels=["bad", "medium", "good"])
['bad', 'good', 'medium', 'medium', 'good', 'bad']
Categories (3, object): ['bad' < 'medium' < 'good']
ordered=False will result in unordered categories when labels are passed.
This parameter can be used to allow non-unique labels:
>>> pd.cut(np.array([1, 7, 5, 4, 6, 3]), 3,
... labels=["B", "A", "B"], ordered=False)
['B', 'B', 'A', 'A', 'B', 'B']
Categories (2, object): ['A', 'B']
labels=False implies you just want the bins back.
>>> pd.cut([0, 1, 1, 2], bins=4, labels=False)
array([0, 1, 1, 3])
Passing a Series as an input returns a Series with categorical dtype:
>>> s = pd.Series(np.array([2, 4, 6, 8, 10]),
... index=['a', 'b', 'c', 'd', 'e'])
>>> pd.cut(s, 3)
...
a (1.992, 4.667]
b (1.992, 4.667]
c (4.667, 7.333]
d (7.333, 10.0]
e (7.333, 10.0]
dtype: category
Categories (3, interval[float64, right]): [(1.992, 4.667] < (4.667, ...
Passing a Series as input with labels=False returns a Series of bin
codes, mapping each value numerically to an interval based on bins.
>>> s = pd.Series(np.array([2, 4, 6, 8, 10]),
... index=['a', 'b', 'c', 'd', 'e'])
>>> pd.cut(s, [0, 2, 4, 6, 8, 10], labels=False, retbins=True, right=False)
...
(a 1.0
b 2.0
c 3.0
d 4.0
e NaN
dtype: float64,
array([ 0, 2, 4, 6, 8, 10]))
Use the duplicates='drop' option when bins are not unique
>>> pd.cut(s, [0, 2, 4, 6, 10, 10], labels=False, retbins=True,
... right=False, duplicates='drop')
...
(a 1.0
b 2.0
c 3.0
d 3.0
e NaN
dtype: float64,
array([ 0, 2, 4, 6, 10]))
Passing an IntervalIndex for bins results in those categories exactly.
Notice that values not covered by the IntervalIndex are set to NaN. 0
is to the left of the first bin (which is closed on the right), and 1.5
falls between two bins.
>>> bins = pd.IntervalIndex.from_tuples([(0, 1), (2, 3), (4, 5)])
>>> pd.cut([0, 0.5, 1.5, 2.5, 4.5], bins)
[NaN, (0.0, 1.0], NaN, (2.0, 3.0], (4.0, 5.0]]
Categories (3, interval[int64, right]): [(0, 1] < (2, 3] < (4, 5]]
| reference/api/pandas.cut.html |
pandas.DataFrame.div | `pandas.DataFrame.div`
Get Floating division of dataframe and other, element-wise (binary operator truediv).
```
>>> df = pd.DataFrame({'angles': [0, 3, 4],
... 'degrees': [360, 180, 360]},
... index=['circle', 'triangle', 'rectangle'])
>>> df
angles degrees
circle 0 360
triangle 3 180
rectangle 4 360
``` | DataFrame.div(other, axis='columns', level=None, fill_value=None)[source]#
Get Floating division of dataframe and other, element-wise (binary operator truediv).
Equivalent to dataframe / other, but with support to substitute a fill_value
for missing data in one of the inputs. With reverse version, rtruediv.
Among flexible wrappers (add, sub, mul, div, mod, pow) to
arithmetic operators: +, -, *, /, //, %, **.
Parameters
other : scalar, sequence, Series, dict or DataFrame
Any single or multiple element data structure, or list-like object.
axis : {0 or ‘index’, 1 or ‘columns’}
Whether to compare by the index (0 or ‘index’) or columns
(1 or ‘columns’). For Series input, axis to match Series index on.
level : int or label
Broadcast across a level, matching Index values on the
passed MultiIndex level.
fill_value : float or None, default None
Fill existing missing (NaN) values, and any new element needed for
successful DataFrame alignment, with this value before computation.
If data in both corresponding DataFrame locations is missing
the result will be missing.
Returns
DataFrame
Result of the arithmetic operation.
See also
DataFrame.add : Add DataFrames.
DataFrame.sub : Subtract DataFrames.
DataFrame.mul : Multiply DataFrames.
DataFrame.div : Divide DataFrames (float division).
DataFrame.truediv : Divide DataFrames (float division).
DataFrame.floordiv : Divide DataFrames (integer division).
DataFrame.mod : Calculate modulo (remainder after division).
DataFrame.pow : Calculate exponential power.
Notes
Mismatched indices will be unioned together.
Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4],
... 'degrees': [360, 180, 360]},
... index=['circle', 'triangle', 'rectangle'])
>>> df
angles degrees
circle 0 360
triangle 3 180
rectangle 4 360
Add a scalar with the operator version, which returns the same
results.
>>> df + 1
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
>>> df.add(1)
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
Divide by a constant with the reverse version.
>>> df.div(10)
angles degrees
circle 0.0 36.0
triangle 0.3 18.0
rectangle 0.4 36.0
>>> df.rdiv(10)
angles degrees
circle inf 0.027778
triangle 3.333333 0.055556
rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2]
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub([1, 2], axis='columns')
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
... axis='index')
angles degrees
circle -1 359
triangle 2 179
rectangle 3 359
Multiply a dictionary by axis.
>>> df.mul({'angles': 0, 'degrees': 2})
angles degrees
circle 0 720
triangle 0 360
rectangle 0 720
>>> df.mul({'circle': 0, 'triangle': 2, 'rectangle': 3}, axis='index')
angles degrees
circle 0 0
triangle 6 360
rectangle 12 1080
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]},
... index=['circle', 'triangle', 'rectangle'])
>>> other
angles
circle 0
triangle 3
rectangle 4
>>> df * other
angles degrees
circle 0 NaN
triangle 9 NaN
rectangle 16 NaN
>>> df.mul(other, fill_value=0)
angles degrees
circle 0 0.0
triangle 9 0.0
rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
... 'degrees': [360, 180, 360, 360, 540, 720]},
... index=[['A', 'A', 'A', 'B', 'B', 'B'],
... ['circle', 'triangle', 'rectangle',
... 'square', 'pentagon', 'hexagon']])
>>> df_multindex
angles degrees
A circle 0 360
triangle 3 180
rectangle 4 360
B square 4 360
pentagon 5 540
hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0)
angles degrees
A circle NaN 1.0
triangle 1.0 1.0
rectangle 1.0 1.0
B square 0.0 0.0
pentagon 0.0 0.0
hexagon 0.0 0.0
| reference/api/pandas.DataFrame.div.html |
pandas.tseries.offsets.Second.name | `pandas.tseries.offsets.Second.name`
Return a string representing the base frequency.
Examples
```
>>> pd.offsets.Hour().name
'H'
``` | Second.name#
Return a string representing the base frequency.
Examples
>>> pd.offsets.Hour().name
'H'
>>> pd.offsets.Hour(5).name
'H'
| reference/api/pandas.tseries.offsets.Second.name.html |
pandas.DataFrame.plot.pie | `pandas.DataFrame.plot.pie`
Generate a pie plot.
A pie plot is a proportional representation of the numerical data in a
column. This function wraps matplotlib.pyplot.pie() for the
specified column. If no column reference is passed and
subplots=True a pie plot is drawn for each numerical column
independently.
```
>>> df = pd.DataFrame({'mass': [0.330, 4.87 , 5.97],
... 'radius': [2439.7, 6051.8, 6378.1]},
... index=['Mercury', 'Venus', 'Earth'])
>>> plot = df.plot.pie(y='mass', figsize=(5, 5))
``` | DataFrame.plot.pie(**kwargs)[source]#
Generate a pie plot.
A pie plot is a proportional representation of the numerical data in a
column. This function wraps matplotlib.pyplot.pie() for the
specified column. If no column reference is passed and
subplots=True a pie plot is drawn for each numerical column
independently.
Parameters
y : int or label, optional
Label or position of the column to plot.
If not provided, subplots=True argument must be passed.
**kwargs
Keyword arguments to pass on to DataFrame.plot().
Returns
matplotlib.axes.Axes or np.ndarray of them
A NumPy array is returned when subplots is True.
See also
Series.plot.pie : Generate a pie plot for a Series.
DataFrame.plot : Make plots of a DataFrame.
Examples
In the example below we have a DataFrame with the information about
planet’s mass and radius. We pass the ‘mass’ column to the
pie function to get a pie plot.
>>> df = pd.DataFrame({'mass': [0.330, 4.87 , 5.97],
... 'radius': [2439.7, 6051.8, 6378.1]},
... index=['Mercury', 'Venus', 'Earth'])
>>> plot = df.plot.pie(y='mass', figsize=(5, 5))
>>> plot = df.plot.pie(subplots=True, figsize=(11, 6))
| reference/api/pandas.DataFrame.plot.pie.html |
pandas.tseries.offsets.Day.is_month_end | `pandas.tseries.offsets.Day.is_month_end`
Return boolean whether a timestamp occurs on the month end.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
``` | Day.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
| reference/api/pandas.tseries.offsets.Day.is_month_end.html |
pandas.tseries.offsets.CustomBusinessMonthEnd.nanos | pandas.tseries.offsets.CustomBusinessMonthEnd.nanos | CustomBusinessMonthEnd.nanos#
| reference/api/pandas.tseries.offsets.CustomBusinessMonthEnd.nanos.html |
pandas.tseries.offsets.BYearEnd.normalize | pandas.tseries.offsets.BYearEnd.normalize | BYearEnd.normalize#
| reference/api/pandas.tseries.offsets.BYearEnd.normalize.html |
pandas.core.groupby.DataFrameGroupBy.corr | `pandas.core.groupby.DataFrameGroupBy.corr`
Compute pairwise correlation of columns, excluding NA/null values.
Method of correlation:
```
>>> def histogram_intersection(a, b):
... v = np.minimum(a, b).sum().round(decimals=1)
... return v
>>> df = pd.DataFrame([(.2, .3), (.0, .6), (.6, .0), (.2, .1)],
... columns=['dogs', 'cats'])
>>> df.corr(method=histogram_intersection)
dogs cats
dogs 1.0 0.3
cats 0.3 1.0
``` | property DataFrameGroupBy.corr[source]#
Compute pairwise correlation of columns, excluding NA/null values.
Parameters
method : {‘pearson’, ‘kendall’, ‘spearman’} or callable
Method of correlation:
pearson : standard correlation coefficient
kendall : Kendall Tau correlation coefficient
spearman : Spearman rank correlation
callable: callable with input two 1d ndarrays and returning a float. Note that the returned matrix from corr
will have 1 along the diagonals and will be symmetric
regardless of the callable’s behavior.
min_periods : int, optional
Minimum number of observations required per pair of columns
to have a valid result. Currently only available for Pearson
and Spearman correlation.
numeric_only : bool, default True
Include only float, int or boolean data.
New in version 1.5.0.
Deprecated since version 1.5.0: The default value of numeric_only will be False in a future
version of pandas.
Returns
DataFrame
Correlation matrix.
See also
DataFrame.corrwith : Compute pairwise correlation with another DataFrame or Series.
Series.corr : Compute the correlation between two Series.
Notes
Pearson, Kendall and Spearman correlation are currently computed using pairwise complete observations.
Pearson correlation coefficient
Kendall rank correlation coefficient
Spearman’s rank correlation coefficient
Examples
>>> def histogram_intersection(a, b):
... v = np.minimum(a, b).sum().round(decimals=1)
... return v
>>> df = pd.DataFrame([(.2, .3), (.0, .6), (.6, .0), (.2, .1)],
... columns=['dogs', 'cats'])
>>> df.corr(method=histogram_intersection)
dogs cats
dogs 1.0 0.3
cats 0.3 1.0
>>> df = pd.DataFrame([(1, 1), (2, np.nan), (np.nan, 3), (4, 4)],
... columns=['dogs', 'cats'])
>>> df.corr(min_periods=3)
dogs cats
dogs 1.0 NaN
cats NaN 1.0
| reference/api/pandas.core.groupby.DataFrameGroupBy.corr.html |
pandas.Series.set_axis | `pandas.Series.set_axis`
Assign desired index to given axis.
```
>>> s = pd.Series([1, 2, 3])
>>> s
0 1
1 2
2 3
dtype: int64
``` | Series.set_axis(labels, *, axis=0, inplace=_NoDefault.no_default, copy=_NoDefault.no_default)[source]#
Assign desired index to given axis.
Indexes for row labels can be changed by assigning
a list-like or Index.
Parameters
labels : list-like, Index
The values for the new index.
axis : {0 or ‘index’}, default 0
The axis to update. The value 0 identifies the rows. For Series
this parameter is unused and defaults to 0.
inplace : bool, default False
Whether to return a new Series instance.
Deprecated since version 1.5.0.
copy : bool, default True
Whether to make a copy of the underlying data.
New in version 1.5.0.
Returns
renamed : Series or None
An object of type Series or None if inplace=True.
See also
Series.rename_axis : Alter the name of the index.
Examples
>>> s = pd.Series([1, 2, 3])
>>> s
0 1
1 2
2 3
dtype: int64
>>> s.set_axis(['a', 'b', 'c'], axis=0)
a 1
b 2
c 3
dtype: int64
| reference/api/pandas.Series.set_axis.html |
pandas.Series.str.isspace | `pandas.Series.str.isspace`
Check whether all characters in each string are whitespace.
```
>>> s1 = pd.Series(['one', 'one1', '1', ''])
``` | Series.str.isspace()[source]#
Check whether all characters in each string are whitespace.
This is equivalent to running the Python string method
str.isspace() for each element of the Series/Index. If a string
has zero characters, False is returned for that check.
Returns
Series or Index of bool
Series or Index of boolean values with the same length as the original
Series/Index.
See also
Series.str.isalpha : Check whether all characters are alphabetic.
Series.str.isnumeric : Check whether all characters are numeric.
Series.str.isalnum : Check whether all characters are alphanumeric.
Series.str.isdigit : Check whether all characters are digits.
Series.str.isdecimal : Check whether all characters are decimal.
Series.str.isspace : Check whether all characters are whitespace.
Series.str.islower : Check whether all characters are lowercase.
Series.str.isupper : Check whether all characters are uppercase.
Series.str.istitle : Check whether all characters are titlecase.
Examples
Checks for Alphabetic and Numeric Characters
>>> s1 = pd.Series(['one', 'one1', '1', ''])
>>> s1.str.isalpha()
0 True
1 False
2 False
3 False
dtype: bool
>>> s1.str.isnumeric()
0 False
1 False
2 True
3 False
dtype: bool
>>> s1.str.isalnum()
0 True
1 True
2 True
3 False
dtype: bool
Note that checks against characters mixed with any additional punctuation
or whitespace will evaluate to false for an alphanumeric check.
>>> s2 = pd.Series(['A B', '1.5', '3,000'])
>>> s2.str.isalnum()
0 False
1 False
2 False
dtype: bool
More Detailed Checks for Numeric Characters
There are several different but overlapping sets of numeric characters that
can be checked for.
>>> s3 = pd.Series(['23', '³', '⅕', ''])
The s3.str.isdecimal method checks for characters used to form numbers
in base 10.
>>> s3.str.isdecimal()
0 True
1 False
2 False
3 False
dtype: bool
The s3.str.isdigit method is the same as s3.str.isdecimal but also
includes special digits, like superscripted and subscripted digits in
unicode.
>>> s3.str.isdigit()
0 True
1 True
2 False
3 False
dtype: bool
The s3.str.isnumeric method is the same as s3.str.isdigit but also
includes other characters that can represent quantities such as unicode
fractions.
>>> s3.str.isnumeric()
0 True
1 True
2 True
3 False
dtype: bool
Checks for Whitespace
>>> s4 = pd.Series([' ', '\t\r\n ', ''])
>>> s4.str.isspace()
0 True
1 True
2 False
dtype: bool
Checks for Character Case
>>> s5 = pd.Series(['leopard', 'Golden Eagle', 'SNAKE', ''])
>>> s5.str.islower()
0 True
1 False
2 False
3 False
dtype: bool
>>> s5.str.isupper()
0 False
1 False
2 True
3 False
dtype: bool
The s5.str.istitle method checks whether all words are in title
case (whether only the first letter of each word is capitalized). Words are
assumed to be any sequence of non-numeric characters separated by
whitespace characters.
>>> s5.str.istitle()
0 False
1 True
2 False
3 False
dtype: bool
| reference/api/pandas.Series.str.isspace.html |
pandas.DatetimeIndex.quarter | `pandas.DatetimeIndex.quarter`
The quarter of the date. | property DatetimeIndex.quarter[source]#
The quarter of the date.
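A minimal sketch (index repr per the pandas version this page documents):
>>> pd.DatetimeIndex(["2022-01-01", "2022-04-01", "2022-09-01"]).quarter
Int64Index([1, 2, 3], dtype='int64')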
| reference/api/pandas.DatetimeIndex.quarter.html |
pandas.Timestamp.freq | pandas.Timestamp.freq | Timestamp.freq#
| reference/api/pandas.Timestamp.freq.html |
pandas.CategoricalIndex | `pandas.CategoricalIndex`
Index based on an underlying Categorical.
```
>>> pd.CategoricalIndex(["a", "b", "c", "a", "b", "c"])
CategoricalIndex(['a', 'b', 'c', 'a', 'b', 'c'],
categories=['a', 'b', 'c'], ordered=False, dtype='category')
``` | class pandas.CategoricalIndex(data=None, categories=None, ordered=None, dtype=None, copy=False, name=None)[source]#
Index based on an underlying Categorical.
CategoricalIndex, like Categorical, can only take on a limited,
and usually fixed, number of possible values (categories). Also,
like Categorical, it might have an order, but numerical operations
(additions, divisions, …) are not possible.
Parameters
data : array-like (1-dimensional)
The values of the categorical. If categories are given, values not in
categories will be replaced with NaN.
categories : index-like, optional
The categories for the categorical. Items need to be unique.
If the categories are not given here (and also not in dtype), they
will be inferred from the data.
ordered : bool, optional
Whether or not this categorical is treated as an ordered
categorical. If not given here or in dtype, the resulting
categorical will be unordered.
dtype : CategoricalDtype or “category”, optional
If CategoricalDtype, cannot be used together with
categories or ordered.
copy : bool, default False
Make a copy of input ndarray.
name : object, optional
Name to be stored in the index.
Raises
ValueError
If the categories do not validate.
TypeError
If an explicit ordered=True is given but no categories and the
values are not sortable.
See also
Index : The base pandas Index type.
Categorical : A categorical array.
CategoricalDtype : Type for categorical data.
Notes
See the user guide
for more.
Examples
>>> pd.CategoricalIndex(["a", "b", "c", "a", "b", "c"])
CategoricalIndex(['a', 'b', 'c', 'a', 'b', 'c'],
categories=['a', 'b', 'c'], ordered=False, dtype='category')
CategoricalIndex can also be instantiated from a Categorical:
>>> c = pd.Categorical(["a", "b", "c", "a", "b", "c"])
>>> pd.CategoricalIndex(c)
CategoricalIndex(['a', 'b', 'c', 'a', 'b', 'c'],
categories=['a', 'b', 'c'], ordered=False, dtype='category')
Ordered CategoricalIndex can have a min and max value.
>>> ci = pd.CategoricalIndex(
... ["a", "b", "c", "a", "b", "c"], ordered=True, categories=["c", "b", "a"]
... )
>>> ci
CategoricalIndex(['a', 'b', 'c', 'a', 'b', 'c'],
categories=['c', 'b', 'a'], ordered=True, dtype='category')
>>> ci.min()
'c'
Attributes
codes
The category codes of this categorical.
categories
The categories of this categorical.
ordered
Whether the categories have an ordered relationship.
Methods
rename_categories(*args, **kwargs)
Rename categories.
reorder_categories(*args, **kwargs)
Reorder categories as specified in new_categories.
add_categories(*args, **kwargs)
Add new categories.
remove_categories(*args, **kwargs)
Remove the specified categories.
remove_unused_categories(*args, **kwargs)
Remove categories which are not used.
set_categories(*args, **kwargs)
Set the categories to the specified new_categories.
as_ordered(*args, **kwargs)
Set the Categorical to be ordered.
as_unordered(*args, **kwargs)
Set the Categorical to be unordered.
map(mapper)
Map values using an input mapping or function.
| reference/api/pandas.CategoricalIndex.html |
pandas.Timestamp.round | `pandas.Timestamp.round`
Round the Timestamp to the specified resolution.
Frequency string indicating the rounding resolution.
```
>>> ts = pd.Timestamp('2020-03-14T15:32:52.192548651')
``` | Timestamp.round(freq, ambiguous='raise', nonexistent='raise')#
Round the Timestamp to the specified resolution.
Parameters
freq : str
Frequency string indicating the rounding resolution.
ambiguous : bool or {‘raise’, ‘NaT’}, default ‘raise’
The behavior is as follows:
bool contains flags to determine if time is dst or not (note
that this flag is only applicable for ambiguous fall dst dates).
‘NaT’ will return NaT for an ambiguous time.
‘raise’ will raise an AmbiguousTimeError for an ambiguous time.
nonexistent : {‘raise’, ‘shift_forward’, ‘shift_backward’, ‘NaT’, timedelta}, default ‘raise’
A nonexistent time does not exist in a particular timezone
where clocks moved forward due to DST.
‘shift_forward’ will shift the nonexistent time forward to the
closest existing time.
‘shift_backward’ will shift the nonexistent time backward to the
closest existing time.
‘NaT’ will return NaT where there are nonexistent times.
timedelta objects will shift nonexistent times by the timedelta.
‘raise’ will raise a NonExistentTimeError if there are
nonexistent times.
Returns
a new Timestamp rounded to the given resolution of freq
Raises
ValueError if the freq cannot be converted
Notes
If the Timestamp has a timezone, rounding will take place relative to the
local (“wall”) time and re-localized to the same timezone. When rounding
near daylight savings time, use nonexistent and ambiguous to
control the re-localization behavior.
Examples
Create a timestamp object:
>>> ts = pd.Timestamp('2020-03-14T15:32:52.192548651')
A timestamp can be rounded using multiple frequency units:
>>> ts.round(freq='H') # hour
Timestamp('2020-03-14 16:00:00')
>>> ts.round(freq='T') # minute
Timestamp('2020-03-14 15:33:00')
>>> ts.round(freq='S') # seconds
Timestamp('2020-03-14 15:32:52')
>>> ts.round(freq='L') # milliseconds
Timestamp('2020-03-14 15:32:52.193000')
freq can also be a multiple of a single unit, like ‘5T’ (i.e. 5 minutes):
>>> ts.round(freq='5T')
Timestamp('2020-03-14 15:35:00')
or a combination of multiple units, like ‘1H30T’ (i.e. 1 hour and 30 minutes):
>>> ts.round(freq='1H30T')
Timestamp('2020-03-14 15:00:00')
Analogous for pd.NaT:
>>> pd.NaT.round()
NaT
When rounding near a daylight savings time transition, use ambiguous or
nonexistent to control how the timestamp should be re-localized.
>>> ts_tz = pd.Timestamp("2021-10-31 01:30:00").tz_localize("Europe/Amsterdam")
>>> ts_tz.round("H", ambiguous=False)
Timestamp('2021-10-31 02:00:00+0100', tz='Europe/Amsterdam')
>>> ts_tz.round("H", ambiguous=True)
Timestamp('2021-10-31 02:00:00+0200', tz='Europe/Amsterdam')
| reference/api/pandas.Timestamp.round.html |
pandas.core.window.expanding.Expanding.min | `pandas.core.window.expanding.Expanding.min`
Calculate the expanding minimum. | Expanding.min(numeric_only=False, *args, engine=None, engine_kwargs=None, **kwargs)[source]#
Calculate the expanding minimum.
Parameters
numeric_only : bool, default False
Include only float, int, boolean columns.
New in version 1.5.0.
*args
For NumPy compatibility and will not have an effect on the result.
Deprecated since version 1.5.0.
engine : str, default None
'cython' : Runs the operation through C-extensions from cython.
'numba' : Runs the operation through JIT compiled code from numba.
None : Defaults to 'cython' or globally setting compute.use_numba
New in version 1.3.0.
engine_kwargsdict, default None
For 'cython' engine, there are no accepted engine_kwargs
For 'numba' engine, the engine can accept nopython, nogil
and parallel dictionary keys. The values must either be True or
False. The default engine_kwargs for the 'numba' engine is
{'nopython': True, 'nogil': False, 'parallel': False}
New in version 1.3.0.
**kwargsFor NumPy compatibility and will not have an effect on the result.
Deprecated since version 1.5.0.
Returns
Series or DataFrameReturn type is the same as the original object with np.float64 dtype.
See also
pandas.Series.expandingCalling expanding with Series data.
pandas.DataFrame.expandingCalling expanding with DataFrames.
pandas.Series.minAggregating min for Series.
pandas.DataFrame.minAggregating min for DataFrame.
Notes
See Numba engine and Numba (JIT compilation) for extended documentation and performance considerations for the Numba engine.
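As a minimal illustration (not part of the original page), the expanding minimum of a short Series:
>>> s = pd.Series([3, 1, 4, 1, 5])
>>> s.expanding().min()
0    3.0
1    1.0
2    1.0
3    1.0
4    1.0
dtype: float64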
| reference/api/pandas.core.window.expanding.Expanding.min.html |
pandas.tseries.offsets.QuarterEnd.apply | pandas.tseries.offsets.QuarterEnd.apply | QuarterEnd.apply()#
| reference/api/pandas.tseries.offsets.QuarterEnd.apply.html |
pandas.tseries.offsets.CustomBusinessHour.__call__ | `pandas.tseries.offsets.CustomBusinessHour.__call__`
Call self as a function. | CustomBusinessHour.__call__(*args, **kwargs)#
Call self as a function.
| reference/api/pandas.tseries.offsets.CustomBusinessHour.__call__.html |
pandas.core.groupby.GroupBy.ngroup | `pandas.core.groupby.GroupBy.ngroup`
Number each group from 0 to the number of groups - 1.
```
>>> df = pd.DataFrame({"A": list("aaabba")})
>>> df
A
0 a
1 a
2 a
3 b
4 b
5 a
>>> df.groupby('A').ngroup()
0 0
1 0
2 0
3 1
4 1
5 0
dtype: int64
>>> df.groupby('A').ngroup(ascending=False)
0 1
1 1
2 1
3 0
4 0
5 1
dtype: int64
>>> df.groupby(["A", [1,1,2,3,2,1]]).ngroup()
0 0
1 0
2 1
3 3
4 2
5 0
dtype: int64
``` | final GroupBy.ngroup(ascending=True)[source]#
Number each group from 0 to the number of groups - 1.
This is the enumerative complement of cumcount. Note that the
numbers given to the groups match the order in which the groups
would be seen when iterating over the groupby object, not the
order they are first observed.
Parameters
ascendingbool, default TrueIf False, number in reverse, from number of groups - 1 to 0.
Returns
SeriesUnique numbers for each group.
See also
cumcountNumber the rows in each group.
Examples
>>> df = pd.DataFrame({"A": list("aaabba")})
>>> df
A
0 a
1 a
2 a
3 b
4 b
5 a
>>> df.groupby('A').ngroup()
0 0
1 0
2 0
3 1
4 1
5 0
dtype: int64
>>> df.groupby('A').ngroup(ascending=False)
0 1
1 1
2 1
3 0
4 0
5 1
dtype: int64
>>> df.groupby(["A", [1,1,2,3,2,1]]).ngroup()
0 0
1 0
2 1
3 3
4 2
5 0
dtype: int64
| reference/api/pandas.core.groupby.GroupBy.ngroup.html |
pandas.read_parquet | `pandas.read_parquet`
Load a parquet object from the file path, returning a DataFrame.
String, path object (implementing os.PathLike[str]), or file-like
object implementing a binary read() function.
The string could be a URL. Valid URL schemes include http, ftp, s3,
gs, and file. For file URLs, a host is expected. A local file could be:
file://localhost/path/to/table.parquet.
A file URL can also be a path to a directory that contains multiple
partitioned parquet files. Both pyarrow and fastparquet support
paths to directories as well as file URLs. A directory path could be:
file://localhost/path/to/tables or s3://bucket/partition_dir. | pandas.read_parquet(path, engine='auto', columns=None, storage_options=None, use_nullable_dtypes=False, **kwargs)[source]#
Load a parquet object from the file path, returning a DataFrame.
Parameters
pathstr, path object or file-like objectString, path object (implementing os.PathLike[str]), or file-like
object implementing a binary read() function.
The string could be a URL. Valid URL schemes include http, ftp, s3,
gs, and file. For file URLs, a host is expected. A local file could be:
file://localhost/path/to/table.parquet.
A file URL can also be a path to a directory that contains multiple
partitioned parquet files. Both pyarrow and fastparquet support
paths to directories as well as file URLs. A directory path could be:
file://localhost/path/to/tables or s3://bucket/partition_dir.
engine{‘auto’, ‘pyarrow’, ‘fastparquet’}, default ‘auto’Parquet library to use. If ‘auto’, then the option
io.parquet.engine is used. The default io.parquet.engine
behavior is to try ‘pyarrow’, falling back to ‘fastparquet’ if
‘pyarrow’ is unavailable.
columnslist, default=NoneIf not None, only these columns will be read from the file.
storage_optionsdict, optionalExtra options that make sense for a particular storage connection, e.g.
host, port, username, password, etc. For HTTP(S) URLs the key-value pairs
are forwarded to urllib.request.Request as header options. For other
URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are
forwarded to fsspec.open. Please see fsspec and urllib for more
details, and for more examples on storage options refer here.
New in version 1.3.0.
use_nullable_dtypesbool, default FalseIf True, use dtypes that use pd.NA as missing value indicator
for the resulting DataFrame. (only applicable for the pyarrow
engine)
As new dtypes are added that support pd.NA in the future, the
output with this option will change to use those dtypes.
Note: this is an experimental option, and behaviour (e.g. additional
support dtypes) may change without notice.
New in version 1.2.0.
**kwargsAny additional kwargs are passed to the engine.
Returns
DataFrame
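For illustration, a minimal round trip (a sketch assuming a writable local path and an available engine such as pyarrow):
>>> df = pd.DataFrame({"A": [1, 2], "B": ["x", "y"]})
>>> df.to_parquet("example.parquet")
>>> pd.read_parquet("example.parquet", columns=["A"])
   A
0  1
1  2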
| reference/api/pandas.read_parquet.html |
Comparison with other tools | Comparison with other tools | Comparison with R / R libraries
Quick reference
Base R
plyr
reshape / reshape2
Comparison with SQL
Copies vs. in place operations
SELECT
WHERE
GROUP BY
JOIN
UNION
LIMIT
pandas equivalents for some SQL analytic and aggregate functions
UPDATE
DELETE
Comparison with spreadsheets
Data structures
Data input / output
Data operations
String processing
Merging
Other considerations
Comparison with SAS
Data structures
Data input / output
Data operations
String processing
Merging
Missing data
GroupBy
Other considerations
Comparison with Stata
Data structures
Data input / output
Data operations
String processing
Merging
Missing data
GroupBy
Other considerations
| getting_started/comparison/index.html |
pandas.DataFrame.sparse.density | `pandas.DataFrame.sparse.density`
Ratio of non-sparse points to total (dense) data points. | DataFrame.sparse.density[source]#
Ratio of non-sparse points to total (dense) data points.
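A small sketch (not from the original page): one non-fill value out of four gives a density of 0.25.
>>> df = pd.DataFrame({"A": pd.arrays.SparseArray([0, 1, 0, 0])})
>>> df.sparse.density
0.25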
| reference/api/pandas.DataFrame.sparse.density.html |
pandas.DataFrame.align | `pandas.DataFrame.align`
Align two objects on their axes with the specified join method.
```
>>> df = pd.DataFrame(
... [[1, 2, 3, 4], [6, 7, 8, 9]], columns=["D", "B", "E", "A"], index=[1, 2]
... )
>>> other = pd.DataFrame(
... [[10, 20, 30, 40], [60, 70, 80, 90], [600, 700, 800, 900]],
... columns=["A", "B", "C", "D"],
... index=[2, 3, 4],
... )
>>> df
D B E A
1 1 2 3 4
2 6 7 8 9
>>> other
A B C D
2 10 20 30 40
3 60 70 80 90
4 600 700 800 900
``` | DataFrame.align(other, join='outer', axis=None, level=None, copy=True, fill_value=None, method=None, limit=None, fill_axis=0, broadcast_axis=None)[source]#
Align two objects on their axes with the specified join method.
Join method is specified for each axis Index.
Parameters
otherDataFrame or Series
join{‘outer’, ‘inner’, ‘left’, ‘right’}, default ‘outer’
axisallowed axis of the other object, default NoneAlign on index (0), columns (1), or both (None).
levelint or level name, default NoneBroadcast across a level, matching Index values on the
passed MultiIndex level.
copybool, default TrueAlways returns new objects. If copy=False and no reindexing is
required then original objects are returned.
fill_valuescalar, default np.NaNValue to use for missing values. Defaults to NaN, but can be any
“compatible” value.
method{‘backfill’, ‘bfill’, ‘pad’, ‘ffill’, None}, default NoneMethod to use for filling holes in reindexed Series:
pad / ffill: propagate last valid observation forward to next valid.
backfill / bfill: use NEXT valid observation to fill gap.
limitint, default NoneIf method is specified, this is the maximum number of consecutive
NaN values to forward/backward fill. In other words, if there is
a gap with more than this number of consecutive NaNs, it will only
be partially filled. If method is not specified, this is the
maximum number of entries along the entire axis where NaNs will be
filled. Must be greater than 0 if not None.
fill_axis{0 or ‘index’, 1 or ‘columns’}, default 0Filling axis, method and limit.
broadcast_axis{0 or ‘index’, 1 or ‘columns’}, default NoneBroadcast values along this axis, if aligning two objects of
different dimensions.
Returns
(left, right)(DataFrame, type of other)Aligned objects.
Examples
>>> df = pd.DataFrame(
... [[1, 2, 3, 4], [6, 7, 8, 9]], columns=["D", "B", "E", "A"], index=[1, 2]
... )
>>> other = pd.DataFrame(
... [[10, 20, 30, 40], [60, 70, 80, 90], [600, 700, 800, 900]],
... columns=["A", "B", "C", "D"],
... index=[2, 3, 4],
... )
>>> df
D B E A
1 1 2 3 4
2 6 7 8 9
>>> other
A B C D
2 10 20 30 40
3 60 70 80 90
4 600 700 800 900
Align on columns:
>>> left, right = df.align(other, join="outer", axis=1)
>>> left
A B C D E
1 4 2 NaN 1 3
2 9 7 NaN 6 8
>>> right
A B C D E
2 10 20 30 40 NaN
3 60 70 80 90 NaN
4 600 700 800 900 NaN
We can also align on the index:
>>> left, right = df.align(other, join="outer", axis=0)
>>> left
D B E A
1 1.0 2.0 3.0 4.0
2 6.0 7.0 8.0 9.0
3 NaN NaN NaN NaN
4 NaN NaN NaN NaN
>>> right
A B C D
1 NaN NaN NaN NaN
2 10.0 20.0 30.0 40.0
3 60.0 70.0 80.0 90.0
4 600.0 700.0 800.0 900.0
Finally, the default axis=None will align on both index and columns:
>>> left, right = df.align(other, join="outer", axis=None)
>>> left
A B C D E
1 4.0 2.0 NaN 1.0 3.0
2 9.0 7.0 NaN 6.0 8.0
3 NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN
>>> right
A B C D E
1 NaN NaN NaN NaN NaN
2 10.0 20.0 30.0 40.0 NaN
3 60.0 70.0 80.0 90.0 NaN
4 600.0 700.0 800.0 900.0 NaN
| reference/api/pandas.DataFrame.align.html |
pandas.tseries.offsets.Easter.isAnchored | pandas.tseries.offsets.Easter.isAnchored | Easter.isAnchored()#
| reference/api/pandas.tseries.offsets.Easter.isAnchored.html |
pandas.errors.CategoricalConversionWarning | `pandas.errors.CategoricalConversionWarning`
Warning is raised when reading a partially labeled Stata file using an iterator.
Examples
```
>>> from pandas.io.stata import StataReader
>>> with StataReader('dta_file', chunksize=2) as reader:
... for i, block in enumerate(reader):
...         print(i, block)
... # CategoricalConversionWarning: One or more series with value labels...
``` | exception pandas.errors.CategoricalConversionWarning[source]#
Warning is raised when reading a partially labeled Stata file using an iterator.
Examples
>>> from pandas.io.stata import StataReader
>>> with StataReader('dta_file', chunksize=2) as reader:
... for i, block in enumerate(reader):
...         print(i, block)
... # CategoricalConversionWarning: One or more series with value labels...
| reference/api/pandas.errors.CategoricalConversionWarning.html |
pandas.Series.dt.components | `pandas.Series.dt.components`
Return a DataFrame of the components of the Timedeltas.
```
>>> s = pd.Series(pd.to_timedelta(np.arange(5), unit='s'))
>>> s
0 0 days 00:00:00
1 0 days 00:00:01
2 0 days 00:00:02
3 0 days 00:00:03
4 0 days 00:00:04
dtype: timedelta64[ns]
>>> s.dt.components
days hours minutes seconds milliseconds microseconds nanoseconds
0 0 0 0 0 0 0 0
1 0 0 0 1 0 0 0
2 0 0 0 2 0 0 0
3 0 0 0 3 0 0 0
4 0 0 0 4 0 0 0
``` | Series.dt.components[source]#
Return a DataFrame of the components of the Timedeltas.
Returns
DataFrame
Examples
>>> s = pd.Series(pd.to_timedelta(np.arange(5), unit='s'))
>>> s
0 0 days 00:00:00
1 0 days 00:00:01
2 0 days 00:00:02
3 0 days 00:00:03
4 0 days 00:00:04
dtype: timedelta64[ns]
>>> s.dt.components
days hours minutes seconds milliseconds microseconds nanoseconds
0 0 0 0 0 0 0 0
1 0 0 0 1 0 0 0
2 0 0 0 2 0 0 0
3 0 0 0 3 0 0 0
4 0 0 0 4 0 0 0
| reference/api/pandas.Series.dt.components.html |
Development | Development | Contributing to pandas
Where to start?
Bug reports and enhancement requests
Working with the code
Contributing your changes to pandas
Tips for a successful pull request
Creating a development environment
Step 1: install a C compiler
Step 2: create an isolated environment
Step 3: build and install pandas
Contributing to the documentation
About the pandas documentation
Updating a pandas docstring
How to build the pandas documentation
Previewing changes
Contributing to the code base
Code standards
Pre-commit
Optional dependencies
Backwards compatibility
Type hints
Testing with continuous integration
Test-driven development
Running the test suite
Running the performance test suite
Documenting your code
pandas maintenance
Roles
Tasks
Issue triage
Closing issues
Reviewing pull requests
Backporting
Cleaning up old issues
Cleaning up old pull requests
Becoming a pandas maintainer
Merging pull requests
Benchmark machine
Release process
Internals
Indexing
Subclassing pandas data structures
Debugging C extensions
Using a debugger
Checking memory leaks with valgrind
Extending pandas
Registering custom accessors
Extension types
Subclassing pandas data structures
Plotting backends
Developer
Storing pandas DataFrame objects in Apache Parquet format
Policies
Version policy
Python support
Roadmap
Extensibility
String data type
Consistent missing value handling
Apache Arrow interoperability
Block manager rewrite
Decoupling of indexing and internals
Numba-accelerated operations
Performance monitoring
Roadmap evolution
Completed items
Contributor community
Community meeting
New contributor meeting
Calendar
GitHub issue tracker
The developer mailing list
Community slack
| development/index.html |
pandas.Series.eq | `pandas.Series.eq`
Return Equal to of series and other, element-wise (binary operator eq).
```
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.eq(b, fill_value=0)
a True
b False
c False
d False
e False
dtype: bool
``` | Series.eq(other, level=None, fill_value=None, axis=0)[source]#
Return Equal to of series and other, element-wise (binary operator eq).
Equivalent to series == other, but with support to substitute a fill_value for
missing data in either one of the inputs.
Parameters
otherSeries or scalar value
levelint or nameBroadcast across a level, matching Index values on the
passed MultiIndex level.
fill_valueNone or float value, default None (NaN)Fill existing missing (NaN) values, and any new element needed for
successful Series alignment, with this value before computation.
If data in both corresponding Series locations is missing
the result of filling (at that location) will be missing.
axis{0 or ‘index’}Unused. Parameter needed for compatibility with DataFrame.
Returns
SeriesThe result of the operation.
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.eq(b, fill_value=0)
a True
b False
c False
d False
e False
dtype: bool
| reference/api/pandas.Series.eq.html |
pandas.DatetimeIndex.is_quarter_start | `pandas.DatetimeIndex.is_quarter_start`
Indicator for whether the date is the first day of a quarter.
```
>>> df = pd.DataFrame({'dates': pd.date_range("2017-03-30",
... periods=4)})
>>> df.assign(quarter=df.dates.dt.quarter,
... is_quarter_start=df.dates.dt.is_quarter_start)
dates quarter is_quarter_start
0 2017-03-30 1 False
1 2017-03-31 1 False
2 2017-04-01 2 True
3 2017-04-02 2 False
``` | property DatetimeIndex.is_quarter_start[source]#
Indicator for whether the date is the first day of a quarter.
Returns
is_quarter_startSeries or DatetimeIndexThe same type as the original data with boolean values. Series will
have the same name and index. DatetimeIndex will have the same
name.
See also
quarterReturn the quarter of the date.
is_quarter_endSimilar property for indicating the quarter end.
Examples
This method is available on Series with datetime values under
the .dt accessor, and directly on DatetimeIndex.
>>> df = pd.DataFrame({'dates': pd.date_range("2017-03-30",
... periods=4)})
>>> df.assign(quarter=df.dates.dt.quarter,
... is_quarter_start=df.dates.dt.is_quarter_start)
dates quarter is_quarter_start
0 2017-03-30 1 False
1 2017-03-31 1 False
2 2017-04-01 2 True
3 2017-04-02 2 False
>>> idx = pd.date_range('2017-03-30', periods=4)
>>> idx
DatetimeIndex(['2017-03-30', '2017-03-31', '2017-04-01', '2017-04-02'],
dtype='datetime64[ns]', freq='D')
>>> idx.is_quarter_start
array([False, False, True, False])
| reference/api/pandas.DatetimeIndex.is_quarter_start.html |
pandas.tseries.offsets.YearEnd.__call__ | `pandas.tseries.offsets.YearEnd.__call__`
Call self as a function. | YearEnd.__call__(*args, **kwargs)#
Call self as a function.
| reference/api/pandas.tseries.offsets.YearEnd.__call__.html |
pandas.tseries.offsets.Micro.kwds | `pandas.tseries.offsets.Micro.kwds`
Return a dict of extra parameters for the offset.
Examples
```
>>> pd.DateOffset(5).kwds
{}
``` | Micro.kwds#
Return a dict of extra parameters for the offset.
Examples
>>> pd.DateOffset(5).kwds
{}
>>> pd.offsets.FY5253Quarter().kwds
{'weekday': 0,
'startingMonth': 1,
'qtr_with_extra_week': 1,
'variation': 'nearest'}
| reference/api/pandas.tseries.offsets.Micro.kwds.html |
pandas.merge | `pandas.merge`
Merge DataFrame or named Series objects with a database-style join.
```
>>> df1 = pd.DataFrame({'lkey': ['foo', 'bar', 'baz', 'foo'],
... 'value': [1, 2, 3, 5]})
>>> df2 = pd.DataFrame({'rkey': ['foo', 'bar', 'baz', 'foo'],
... 'value': [5, 6, 7, 8]})
>>> df1
lkey value
0 foo 1
1 bar 2
2 baz 3
3 foo 5
>>> df2
rkey value
0 foo 5
1 bar 6
2 baz 7
3 foo 8
``` | pandas.merge(left, right, how='inner', on=None, left_on=None, right_on=None, left_index=False, right_index=False, sort=False, suffixes=('_x', '_y'), copy=True, indicator=False, validate=None)[source]#
Merge DataFrame or named Series objects with a database-style join.
A named Series object is treated as a DataFrame with a single named column.
The join is done on columns or indexes. If joining columns on
columns, the DataFrame indexes will be ignored. Otherwise if joining indexes
on indexes or indexes on a column or columns, the index will be passed on.
When performing a cross merge, no column specifications to merge on are
allowed.
Warning
If both key columns contain rows where the key is a null value, those
rows will be matched against each other. This is different from usual SQL
join behaviour and can lead to unexpected results.
Parameters
leftDataFrame
rightDataFrame or named SeriesObject to merge with.
how{‘left’, ‘right’, ‘outer’, ‘inner’, ‘cross’}, default ‘inner’Type of merge to be performed.
left: use only keys from left frame, similar to a SQL left outer join;
preserve key order.
right: use only keys from right frame, similar to a SQL right outer join;
preserve key order.
outer: use union of keys from both frames, similar to a SQL full outer
join; sort keys lexicographically.
inner: use intersection of keys from both frames, similar to a SQL inner
join; preserve the order of the left keys.
cross: creates the cartesian product from both frames, preserves the order
of the left keys.
New in version 1.2.0.
onlabel or listColumn or index level names to join on. These must be found in both
DataFrames. If on is None and not merging on indexes then this defaults
to the intersection of the columns in both DataFrames.
left_onlabel or list, or array-likeColumn or index level names to join on in the left DataFrame. Can also
be an array or list of arrays of the length of the left DataFrame.
These arrays are treated as if they are columns.
right_onlabel or list, or array-likeColumn or index level names to join on in the right DataFrame. Can also
be an array or list of arrays of the length of the right DataFrame.
These arrays are treated as if they are columns.
left_indexbool, default FalseUse the index from the left DataFrame as the join key(s). If it is a
MultiIndex, the number of keys in the other DataFrame (either the index
or a number of columns) must match the number of levels.
right_indexbool, default FalseUse the index from the right DataFrame as the join key. Same caveats as
left_index.
sortbool, default FalseSort the join keys lexicographically in the result DataFrame. If False,
the order of the join keys depends on the join type (how keyword).
suffixeslist-like, default is (“_x”, “_y”)A length-2 sequence where each element is optionally a string
indicating the suffix to add to overlapping column names in
left and right respectively. Pass a value of None instead
of a string to indicate that the column name from left or
right should be left as-is, with no suffix. At least one of the
values must not be None.
copybool, default TrueIf False, avoid copy if possible.
indicatorbool or str, default FalseIf True, adds a column to the output DataFrame called “_merge” with
information on the source of each row. The column can be given a different
name by providing a string argument. The column will have a Categorical
type with the value of “left_only” for observations whose merge key only
appears in the left DataFrame, “right_only” for observations
whose merge key only appears in the right DataFrame, and “both”
if the observation’s merge key is found in both DataFrames.
validatestr, optionalIf specified, checks if merge is of specified type.
“one_to_one” or “1:1”: check if merge keys are unique in both
left and right datasets.
“one_to_many” or “1:m”: check if merge keys are unique in left
dataset.
“many_to_one” or “m:1”: check if merge keys are unique in right
dataset.
“many_to_many” or “m:m”: allowed, but does not result in checks.
Returns
DataFrameA DataFrame of the two merged objects.
See also
merge_orderedMerge with optional filling/interpolation.
merge_asofMerge on nearest keys.
DataFrame.joinSimilar method using indices.
Notes
Support for specifying index levels as the on, left_on, and
right_on parameters was added in version 0.23.0.
Support for merging named Series objects was added in version 0.24.0.
Examples
>>> df1 = pd.DataFrame({'lkey': ['foo', 'bar', 'baz', 'foo'],
... 'value': [1, 2, 3, 5]})
>>> df2 = pd.DataFrame({'rkey': ['foo', 'bar', 'baz', 'foo'],
... 'value': [5, 6, 7, 8]})
>>> df1
lkey value
0 foo 1
1 bar 2
2 baz 3
3 foo 5
>>> df2
rkey value
0 foo 5
1 bar 6
2 baz 7
3 foo 8
Merge df1 and df2 on the lkey and rkey columns. The value columns have
the default suffixes, _x and _y, appended.
>>> df1.merge(df2, left_on='lkey', right_on='rkey')
lkey value_x rkey value_y
0 foo 1 foo 5
1 foo 1 foo 8
2 foo 5 foo 5
3 foo 5 foo 8
4 bar 2 bar 6
5 baz 3 baz 7
Merge DataFrames df1 and df2 with specified left and right suffixes
appended to any overlapping columns.
>>> df1.merge(df2, left_on='lkey', right_on='rkey',
... suffixes=('_left', '_right'))
lkey value_left rkey value_right
0 foo 1 foo 5
1 foo 1 foo 8
2 foo 5 foo 5
3 foo 5 foo 8
4 bar 2 bar 6
5 baz 3 baz 7
Merge DataFrames df1 and df2, but raise an exception if the DataFrames have
any overlapping columns.
>>> df1.merge(df2, left_on='lkey', right_on='rkey', suffixes=(False, False))
Traceback (most recent call last):
...
ValueError: columns overlap but no suffix specified:
Index(['value'], dtype='object')
>>> df1 = pd.DataFrame({'a': ['foo', 'bar'], 'b': [1, 2]})
>>> df2 = pd.DataFrame({'a': ['foo', 'baz'], 'c': [3, 4]})
>>> df1
a b
0 foo 1
1 bar 2
>>> df2
a c
0 foo 3
1 baz 4
>>> df1.merge(df2, how='inner', on='a')
a b c
0 foo 1 3
>>> df1.merge(df2, how='left', on='a')
a b c
0 foo 1 3.0
1 bar 2 NaN
>>> df1 = pd.DataFrame({'left': ['foo', 'bar']})
>>> df2 = pd.DataFrame({'right': [7, 8]})
>>> df1
left
0 foo
1 bar
>>> df2
right
0 7
1 8
>>> df1.merge(df2, how='cross')
left right
0 foo 7
1 foo 8
2 bar 7
3 bar 8
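The indicator argument can be illustrated with two small frames (a sketch, not from the original page):
>>> df1 = pd.DataFrame({'a': ['foo', 'bar'], 'b': [1, 2]})
>>> df2 = pd.DataFrame({'a': ['foo', 'baz'], 'c': [3, 4]})
>>> df1.merge(df2, how='outer', on='a', indicator=True)
     a    b    c      _merge
0  foo  1.0  3.0        both
1  bar  2.0  NaN   left_only
2  baz  NaN  4.0  right_only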
| reference/api/pandas.merge.html |
pandas.tseries.offsets.CustomBusinessMonthBegin.is_quarter_end | `pandas.tseries.offsets.CustomBusinessMonthBegin.is_quarter_end`
Return boolean whether a timestamp occurs on the quarter end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
``` | CustomBusinessMonthBegin.is_quarter_end()#
Return boolean whether a timestamp occurs on the quarter end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
| reference/api/pandas.tseries.offsets.CustomBusinessMonthBegin.is_quarter_end.html |
pandas.Series.dt.seconds | `pandas.Series.dt.seconds`
Number of seconds (>= 0 and less than 1 day) for each element. | Series.dt.seconds[source]#
Number of seconds (>= 0 and less than 1 day) for each element.
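A brief sketch (not from the original page): this is the seconds component, not the total duration.
>>> s = pd.Series(pd.to_timedelta([1, 65, 3600], unit='s'))
>>> s.dt.seconds
0       1
1      65
2    3600
dtype: int64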
| reference/api/pandas.Series.dt.seconds.html |
pandas.core.groupby.DataFrameGroupBy.any | `pandas.core.groupby.DataFrameGroupBy.any`
Return True if any value in the group is truthful, else False. | DataFrameGroupBy.any(skipna=True)[source]#
Return True if any value in the group is truthful, else False.
Parameters
skipnabool, default TrueFlag to ignore nan values during truth testing.
Returns
Series or DataFrameDataFrame or Series of boolean values, where a value is True if any element
is True within its respective group, False otherwise.
See also
Series.groupbyApply a function groupby to a Series.
DataFrame.groupbyApply a function groupby to each row or column of a DataFrame.
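A minimal sketch (not from the original page): group “a” contains a truthy value, group “b” does not.
>>> df = pd.DataFrame({"A": ["a", "a", "b"], "B": [0, 1, 0]})
>>> df.groupby("A").any()
       B
A
a   True
b  False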
| reference/api/pandas.core.groupby.DataFrameGroupBy.any.html |
pandas.arrays.IntervalArray.is_empty | `pandas.arrays.IntervalArray.is_empty`
Indicates if an interval is empty, meaning it contains no points.
```
>>> pd.Interval(0, 1, closed='right').is_empty
False
``` | IntervalArray.is_empty#
Indicates if an interval is empty, meaning it contains no points.
New in version 0.25.0.
Returns
bool or ndarrayA boolean indicating if a scalar Interval is empty, or a
boolean ndarray positionally indicating if an Interval in
an IntervalArray or IntervalIndex is
empty.
Examples
An Interval that contains points is not empty:
>>> pd.Interval(0, 1, closed='right').is_empty
False
An Interval that does not contain any points is empty:
>>> pd.Interval(0, 0, closed='right').is_empty
True
>>> pd.Interval(0, 0, closed='left').is_empty
True
>>> pd.Interval(0, 0, closed='neither').is_empty
True
An Interval that contains a single point is not empty:
>>> pd.Interval(0, 0, closed='both').is_empty
False
An IntervalArray or IntervalIndex returns a
boolean ndarray positionally indicating if an Interval is
empty:
>>> ivs = [pd.Interval(0, 0, closed='neither'),
... pd.Interval(1, 2, closed='neither')]
>>> pd.arrays.IntervalArray(ivs).is_empty
array([ True, False])
Missing values are not considered empty:
>>> ivs = [pd.Interval(0, 0, closed='neither'), np.nan]
>>> pd.IntervalIndex(ivs).is_empty
array([ True, False])
| reference/api/pandas.arrays.IntervalArray.is_empty.html |
pandas.tseries.offsets.CustomBusinessDay.is_quarter_start | `pandas.tseries.offsets.CustomBusinessDay.is_quarter_start`
Return boolean whether a timestamp occurs on the quarter start.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
``` | CustomBusinessDay.is_quarter_start()#
Return boolean whether a timestamp occurs on the quarter start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
| reference/api/pandas.tseries.offsets.CustomBusinessDay.is_quarter_start.html |
pandas.tseries.offsets.BusinessHour.start | pandas.tseries.offsets.BusinessHour.start | BusinessHour.start#
| reference/api/pandas.tseries.offsets.BusinessHour.start.html |
pandas.ExcelWriter.close | `pandas.ExcelWriter.close`
Synonym for save, to make it more file-like. | ExcelWriter.close()[source]#
Synonym for save, to make it more file-like.
| reference/api/pandas.ExcelWriter.close.html |
pandas.tseries.offsets.Nano | `pandas.tseries.offsets.Nano`
Attributes
base | class pandas.tseries.offsets.Nano#
Attributes
base
Returns a copy of the calling offset object with n=1 and all other attributes equal.
freqstr
Return a string representing the frequency.
kwds
Return a dict of extra parameters for the offset.
name
Return a string representing the base frequency.
nanos
Return an integer of the total number of nanoseconds.
delta
n
normalize
rule_code
Methods
__call__(*args, **kwargs)
Call self as a function.
apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
copy
Return a copy of the frequency.
is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
is_month_end
Return boolean whether a timestamp occurs on the month end.
is_month_start
Return boolean whether a timestamp occurs on the month start.
is_on_offset
Return boolean whether a timestamp intersects with this frequency.
is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
is_year_end
Return boolean whether a timestamp occurs on the year end.
is_year_start
Return boolean whether a timestamp occurs on the year start.
rollback
Roll provided date backward to next offset only if not on offset.
rollforward
Roll provided date forward to next offset only if not on offset.
apply
isAnchored
onOffset
| reference/api/pandas.tseries.offsets.Nano.html |
pandas.tseries.offsets.SemiMonthEnd.is_on_offset | `pandas.tseries.offsets.SemiMonthEnd.is_on_offset`
Return boolean whether a timestamp intersects with this frequency.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
``` | SemiMonthEnd.is_on_offset()#
Return boolean whether a timestamp intersects with this frequency.
Parameters
dtdatetime.datetimeTimestamp to check intersections with frequency.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
>>> ts = pd.Timestamp(2022, 8, 6)
>>> ts.day_name()
'Saturday'
>>> freq = pd.offsets.BusinessDay(1)
>>> freq.is_on_offset(ts)
False
| reference/api/pandas.tseries.offsets.SemiMonthEnd.is_on_offset.html |
pandas.Timestamp.timetuple | `pandas.Timestamp.timetuple`
Return time tuple, compatible with time.localtime(). | Timestamp.timetuple()#
Return time tuple, compatible with time.localtime().
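For example (a sketch, not from the original page):
>>> ts = pd.Timestamp('2023-01-01 10:00:00')
>>> ts.timetuple()
time.struct_time(tm_year=2023, tm_mon=1, tm_mday=1, tm_hour=10,
tm_min=0, tm_sec=0, tm_wday=6, tm_yday=1, tm_isdst=-1)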
| reference/api/pandas.Timestamp.timetuple.html |
pandas.Series.str.rstrip | `pandas.Series.str.rstrip`
Remove trailing characters.
```
>>> s = pd.Series(['1. Ant. ', '2. Bee!\n', '3. Cat?\t', np.nan, 10, True])
>>> s
0 1. Ant.
1 2. Bee!\n
2 3. Cat?\t
3 NaN
4 10
5 True
dtype: object
``` | Series.str.rstrip(to_strip=None)[source]#
Remove trailing characters.
Strip whitespaces (including newlines) or a set of specified characters
from the right side of each string in the Series/Index.
Replaces any non-strings in Series with NaNs.
Equivalent to str.rstrip().
Parameters
to_stripstr or None, default NoneSpecifying the set of characters to be removed.
All combinations of this set of characters will be stripped.
If None then whitespaces are removed.
Returns
Series or Index of object
See also
Series.str.stripRemove leading and trailing characters in Series/Index.
Series.str.lstripRemove leading characters in Series/Index.
Series.str.rstripRemove trailing characters in Series/Index.
Examples
>>> s = pd.Series(['1. Ant. ', '2. Bee!\n', '3. Cat?\t', np.nan, 10, True])
>>> s
0 1. Ant.
1 2. Bee!\n
2 3. Cat?\t
3 NaN
4 10
5 True
dtype: object
>>> s.str.strip()
0 1. Ant.
1 2. Bee!
2 3. Cat?
3 NaN
4 NaN
5 NaN
dtype: object
>>> s.str.lstrip('123.')
0 Ant.
1 Bee!\n
2 Cat?\t
3 NaN
4 NaN
5 NaN
dtype: object
>>> s.str.rstrip('.!? \n\t')
0 1. Ant
1 2. Bee
2 3. Cat
3 NaN
4 NaN
5 NaN
dtype: object
>>> s.str.strip('123.!? \n\t')
0 Ant
1 Bee
2 Cat
3 NaN
4 NaN
5 NaN
dtype: object
| reference/api/pandas.Series.str.rstrip.html |
pandas.Series.tshift | `pandas.Series.tshift`
Shift the time index, using the index’s frequency if available. | Series.tshift(periods=1, freq=None, axis=0)[source]#
Shift the time index, using the index’s frequency if available.
Deprecated since version 1.1.0: Use shift instead.
Parameters
periodsintNumber of periods to move, can be positive or negative.
freqDateOffset, timedelta, or str, default NoneIncrement to use from the tseries module
or time rule expressed as a string (e.g. ‘EOM’).
axis{0 or ‘index’, 1 or ‘columns’, None}, default 0Corresponds to the axis that contains the Index.
For Series this parameter is unused and defaults to 0.
Returns
shiftedSeries/DataFrame
Notes
If freq is not specified then tries to use the freq or inferred_freq
attributes of the index. If neither of those attributes exist, a
ValueError is thrown
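Since tshift is deprecated, the equivalent shift call looks like this (a sketch, not from the original page):
>>> idx = pd.date_range('2020-01-01', periods=3, freq='D')
>>> s = pd.Series([1, 2, 3], index=idx)
>>> s.shift(periods=1, freq='D')  # same as the deprecated s.tshift(1)
2020-01-02    1
2020-01-03    2
2020-01-04    3
Freq: D, dtype: int64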
| reference/api/pandas.Series.tshift.html |
pandas.CategoricalIndex.ordered | `pandas.CategoricalIndex.ordered`
Whether the categories have an ordered relationship. | property CategoricalIndex.ordered[source]#
Whether the categories have an ordered relationship.
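A short sketch (not from the original page):
>>> ci = pd.CategoricalIndex(['a', 'b', 'a'], categories=['a', 'b'], ordered=True)
>>> ci.ordered
True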
| reference/api/pandas.CategoricalIndex.ordered.html |
pandas.tseries.offsets.BusinessHour.apply_index | `pandas.tseries.offsets.BusinessHour.apply_index`
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead. | BusinessHour.apply_index()#
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead.
Parameters
indexDatetimeIndex
Returns
DatetimeIndex
Raises
NotImplementedErrorWhen the specific offset subclass does not have a vectorized
implementation.
| reference/api/pandas.tseries.offsets.BusinessHour.apply_index.html |
pandas.Flags | `pandas.Flags`
Flags that apply to pandas objects.
New in version 1.2.0.
```
>>> df = pd.DataFrame()
>>> df.flags
<Flags(allows_duplicate_labels=True)>
>>> df.flags.allows_duplicate_labels = False
>>> df.flags
<Flags(allows_duplicate_labels=False)>
``` | class pandas.Flags(obj, *, allows_duplicate_labels)[source]#
Flags that apply to pandas objects.
New in version 1.2.0.
Parameters
objSeries or DataFrameThe object these flags are associated with.
allows_duplicate_labelsbool, default TrueWhether to allow duplicate labels in this object. By default,
duplicate labels are permitted. Setting this to False will
cause an errors.DuplicateLabelError to be raised when
index (or columns for DataFrame) is not unique, or any
subsequent operation introduces duplicates.
See Disallowing Duplicate Labels for more.
Warning
This is an experimental feature. Currently, many methods fail to
propagate the allows_duplicate_labels value. In future versions
it is expected that every method taking or returning one or more
DataFrame or Series objects will propagate allows_duplicate_labels.
Notes
Attributes can be set in two ways
>>> df = pd.DataFrame()
>>> df.flags
<Flags(allows_duplicate_labels=True)>
>>> df.flags.allows_duplicate_labels = False
>>> df.flags
<Flags(allows_duplicate_labels=False)>
>>> df.flags['allows_duplicate_labels'] = True
>>> df.flags
<Flags(allows_duplicate_labels=True)>
Attributes
allows_duplicate_labels
Whether this object allows duplicate labels.
| reference/api/pandas.Flags.html |
pandas.tseries.offsets.CustomBusinessHour.rollback | `pandas.tseries.offsets.CustomBusinessHour.rollback`
Roll provided date backward to next offset only if not on offset. | CustomBusinessHour.rollback(other)#
Roll provided date backward to next offset only if not on offset.
| reference/api/pandas.tseries.offsets.CustomBusinessHour.rollback.html |
pandas.Series.is_monotonic_increasing | `pandas.Series.is_monotonic_increasing`
Return boolean if values in the object are monotonically increasing. | property Series.is_monotonic_increasing[source]#
Return boolean if values in the object are monotonically increasing.
Returns
bool
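A minimal sketch (not from the original page); note that repeated values still count as increasing.
>>> pd.Series([1, 2, 2, 3]).is_monotonic_increasing
True
>>> pd.Series([3, 2, 1]).is_monotonic_increasing
False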
| reference/api/pandas.Series.is_monotonic_increasing.html |
pandas.core.resample.Resampler.last | `pandas.core.resample.Resampler.last`
Compute the last non-null entry of each column.
```
>>> df = pd.DataFrame(dict(A=[1, 1, 3], B=[5, None, 6], C=[1, 2, 3]))
>>> df.groupby("A").last()
B C
A
1 5.0 2
3 6.0 3
``` | Resampler.last(numeric_only=_NoDefault.no_default, min_count=0, *args, **kwargs)[source]#
Compute the last non-null entry of each column.
Parameters
numeric_onlybool, default FalseInclude only float, int, boolean columns. If None, will attempt to use
everything, then use only numeric data.
min_countint, default -1The required number of valid values to perform the operation. If fewer
than min_count non-NA values are present the result will be NA.
Returns
Series or DataFrameLast non-null of values within each group.
See also
DataFrame.groupbyApply a function groupby to each row or column of a DataFrame.
DataFrame.core.groupby.GroupBy.firstCompute the first non-null entry of each column.
DataFrame.core.groupby.GroupBy.nthTake the nth row from each group.
Examples
>>> df = pd.DataFrame(dict(A=[1, 1, 3], B=[5, None, 6], C=[1, 2, 3]))
>>> df.groupby("A").last()
B C
A
1 5.0 2
3 6.0 3
| reference/api/pandas.core.resample.Resampler.last.html |
pandas.Series.reindex_like | `pandas.Series.reindex_like`
Return an object with matching indices as other object.
```
>>> df1 = pd.DataFrame([[24.3, 75.7, 'high'],
... [31, 87.8, 'high'],
... [22, 71.6, 'medium'],
... [35, 95, 'medium']],
... columns=['temp_celsius', 'temp_fahrenheit',
... 'windspeed'],
... index=pd.date_range(start='2014-02-12',
... end='2014-02-15', freq='D'))
``` | Series.reindex_like(other, method=None, copy=True, limit=None, tolerance=None)[source]#
Return an object with matching indices as other object.
Conform the object to the same index on all axes. Optional
filling logic, placing NaN in locations having no value
in the previous index. A new object is produced unless the
new index is equivalent to the current one and copy=False.
Parameters
otherObject of the same data typeIts row and column indices are used to define the new indices
of this object.
method{None, ‘backfill’/’bfill’, ‘pad’/’ffill’, ‘nearest’}Method to use for filling holes in reindexed DataFrame.
Please note: this is only applicable to DataFrames/Series with a
monotonically increasing/decreasing index.
None (default): don’t fill gaps
pad / ffill: propagate last valid observation forward to next
valid
backfill / bfill: use next valid observation to fill gap
nearest: use nearest valid observations to fill gap.
copybool, default TrueReturn a new object, even if the passed indexes are the same.
limitint, default NoneMaximum number of consecutive labels to fill for inexact matches.
toleranceoptionalMaximum distance between original and new labels for inexact
matches. The values of the index at the matching locations must
satisfy the equation abs(index[indexer] - target) <= tolerance.
Tolerance may be a scalar value, which applies the same tolerance
to all values, or list-like, which applies variable tolerance per
element. List-like includes list, tuple, array, Series, and must be
the same size as the index and its dtype must exactly match the
index’s type.
Returns
Series or DataFrameSame type as caller, but with changed indices on each axis.
See also
DataFrame.set_indexSet row labels.
DataFrame.reset_indexRemove row labels or move them to new columns.
DataFrame.reindexChange to new indices or expand indices.
Notes
Same as calling
.reindex(index=other.index, columns=other.columns,...).
Examples
>>> df1 = pd.DataFrame([[24.3, 75.7, 'high'],
... [31, 87.8, 'high'],
... [22, 71.6, 'medium'],
... [35, 95, 'medium']],
... columns=['temp_celsius', 'temp_fahrenheit',
... 'windspeed'],
... index=pd.date_range(start='2014-02-12',
... end='2014-02-15', freq='D'))
>>> df1
temp_celsius temp_fahrenheit windspeed
2014-02-12 24.3 75.7 high
2014-02-13 31.0 87.8 high
2014-02-14 22.0 71.6 medium
2014-02-15 35.0 95.0 medium
>>> df2 = pd.DataFrame([[28, 'low'],
... [30, 'low'],
... [35.1, 'medium']],
... columns=['temp_celsius', 'windspeed'],
... index=pd.DatetimeIndex(['2014-02-12', '2014-02-13',
... '2014-02-15']))
>>> df2
temp_celsius windspeed
2014-02-12 28.0 low
2014-02-13 30.0 low
2014-02-15 35.1 medium
>>> df2.reindex_like(df1)
temp_celsius temp_fahrenheit windspeed
2014-02-12 28.0 NaN low
2014-02-13 30.0 NaN low
2014-02-14 NaN NaN NaN
2014-02-15 35.1 NaN medium
| reference/api/pandas.Series.reindex_like.html |
pandas.DatetimeIndex.indexer_at_time | `pandas.DatetimeIndex.indexer_at_time`
Return index locations of values at particular time of day. | DatetimeIndex.indexer_at_time(time, asof=False)[source]#
Return index locations of values at particular time of day.
Parameters
timedatetime.time or strTime passed in either as object (datetime.time) or as string in
appropriate format (“%H:%M”, “%H%M”, “%I:%M%p”, “%I%M%p”,
“%H:%M:%S”, “%H%M%S”, “%I:%M:%S%p”, “%I%M%S%p”).
Returns
np.ndarray[np.intp]
See also
indexer_between_timeGet index locations of values between particular times of day.
DataFrame.at_timeSelect values at particular time of day.
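A small sketch (not from the original page):
>>> idx = pd.DatetimeIndex(['2023-01-01 10:00', '2023-01-01 11:00',
...                         '2023-01-02 10:00'])
>>> idx.indexer_at_time('10:00')
array([0, 2])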
| reference/api/pandas.DatetimeIndex.indexer_at_time.html |
pandas.Series.dt.is_year_end | `pandas.Series.dt.is_year_end`
Indicate whether the date is the last day of the year.
```
>>> dates = pd.Series(pd.date_range("2017-12-30", periods=3))
>>> dates
0 2017-12-30
1 2017-12-31
2 2018-01-01
dtype: datetime64[ns]
``` | Series.dt.is_year_end[source]#
Indicate whether the date is the last day of the year.
Returns
Series or DatetimeIndexThe same type as the original data with boolean values. Series will
have the same name and index. DatetimeIndex will have the same
name.
See also
is_year_startSimilar property indicating the start of the year.
Examples
This method is available on Series with datetime values under
the .dt accessor, and directly on DatetimeIndex.
>>> dates = pd.Series(pd.date_range("2017-12-30", periods=3))
>>> dates
0 2017-12-30
1 2017-12-31
2 2018-01-01
dtype: datetime64[ns]
>>> dates.dt.is_year_end
0 False
1 True
2 False
dtype: bool
>>> idx = pd.date_range("2017-12-30", periods=3)
>>> idx
DatetimeIndex(['2017-12-30', '2017-12-31', '2018-01-01'],
dtype='datetime64[ns]', freq='D')
>>> idx.is_year_end
array([False, True, False])
| reference/api/pandas.Series.dt.is_year_end.html |
pandas.tseries.offsets.SemiMonthBegin.apply_index | `pandas.tseries.offsets.SemiMonthBegin.apply_index`
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead. | SemiMonthBegin.apply_index()#
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead.
Parameters
indexDatetimeIndex
Returns
DatetimeIndex
Raises
NotImplementedErrorWhen the specific offset subclass does not have a vectorized
implementation.
| reference/api/pandas.tseries.offsets.SemiMonthBegin.apply_index.html |
pandas.tseries.offsets.Hour.apply_index | `pandas.tseries.offsets.Hour.apply_index`
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead. | Hour.apply_index()#
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead.
Parameters
indexDatetimeIndex
Returns
DatetimeIndex
Raises
NotImplementedErrorWhen the specific offset subclass does not have a vectorized
implementation.
| reference/api/pandas.tseries.offsets.Hour.apply_index.html |
pandas.IntervalIndex.overlaps | `pandas.IntervalIndex.overlaps`
Check elementwise if an Interval overlaps the values in the IntervalArray.
Two intervals overlap if they share a common point, including closed
endpoints. Intervals that only have an open endpoint in common do not
overlap.
```
>>> data = [(0, 1), (1, 3), (2, 4)]
>>> intervals = pd.arrays.IntervalArray.from_tuples(data)
>>> intervals
<IntervalArray>
[(0, 1], (1, 3], (2, 4]]
Length: 3, dtype: interval[int64, right]
``` | IntervalIndex.overlaps(*args, **kwargs)[source]#
Check elementwise if an Interval overlaps the values in the IntervalArray.
Two intervals overlap if they share a common point, including closed
endpoints. Intervals that only have an open endpoint in common do not
overlap.
Parameters
otherIntervalArrayInterval to check against for an overlap.
Returns
ndarrayBoolean array positionally indicating where an overlap occurs.
See also
Interval.overlapsCheck whether two Interval objects overlap.
Examples
>>> data = [(0, 1), (1, 3), (2, 4)]
>>> intervals = pd.arrays.IntervalArray.from_tuples(data)
>>> intervals
<IntervalArray>
[(0, 1], (1, 3], (2, 4]]
Length: 3, dtype: interval[int64, right]
>>> intervals.overlaps(pd.Interval(0.5, 1.5))
array([ True, True, False])
Intervals that share closed endpoints overlap:
>>> intervals.overlaps(pd.Interval(1, 3, closed='left'))
array([ True, True, True])
Intervals that only have an open endpoint in common do not overlap:
>>> intervals.overlaps(pd.Interval(1, 2, closed='right'))
array([False, True, False])
| reference/api/pandas.IntervalIndex.overlaps.html |
pandas.tseries.offsets.CustomBusinessMonthEnd.isAnchored | pandas.tseries.offsets.CustomBusinessMonthEnd.isAnchored | CustomBusinessMonthEnd.isAnchored()#
| reference/api/pandas.tseries.offsets.CustomBusinessMonthEnd.isAnchored.html |
pandas.tseries.offsets.Micro.rollback | `pandas.tseries.offsets.Micro.rollback`
Roll provided date backward to next offset only if not on offset. | Micro.rollback()#
Roll provided date backward to next offset only if not on offset.
Returns
TimeStampRolled timestamp if not on offset, otherwise unchanged timestamp.
| reference/api/pandas.tseries.offsets.Micro.rollback.html |
pandas.tseries.offsets.Easter.freqstr | `pandas.tseries.offsets.Easter.freqstr`
Return a string representing the frequency.
Examples
```
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
``` | Easter.freqstr#
Return a string representing the frequency.
Examples
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
>>> pd.offsets.BusinessHour(2).freqstr
'2BH'
>>> pd.offsets.Nano().freqstr
'N'
>>> pd.offsets.Nano(-3).freqstr
'-3N'
| reference/api/pandas.tseries.offsets.Easter.freqstr.html |
pandas.Index.get_indexer_for | `pandas.Index.get_indexer_for`
Guaranteed return of an indexer even when non-unique.
```
>>> idx = pd.Index([np.nan, 'var1', np.nan])
>>> idx.get_indexer_for([np.nan])
array([0, 2])
``` | final Index.get_indexer_for(target)[source]#
Guaranteed return of an indexer even when non-unique.
This dispatches to get_indexer or get_indexer_non_unique
as appropriate.
Returns
np.ndarray[np.intp]List of indices.
Examples
>>> idx = pd.Index([np.nan, 'var1', np.nan])
>>> idx.get_indexer_for([np.nan])
array([0, 2])
| reference/api/pandas.Index.get_indexer_for.html |
pandas.Timedelta.nanoseconds | `pandas.Timedelta.nanoseconds`
Return the number of nanoseconds (n), where 0 <= n < 1 microsecond.
```
>>> td = pd.Timedelta('1 days 2 min 3 us 42 ns')
``` | Timedelta.nanoseconds#
Return the number of nanoseconds (n), where 0 <= n < 1 microsecond.
Returns
intNumber of nanoseconds.
See also
Timedelta.componentsReturn all attributes with assigned values (i.e. days, hours, minutes, seconds, milliseconds, microseconds, nanoseconds).
Examples
Using string input
>>> td = pd.Timedelta('1 days 2 min 3 us 42 ns')
>>> td.nanoseconds
42
Using integer input
>>> td = pd.Timedelta(42, unit='ns')
>>> td.nanoseconds
42
| reference/api/pandas.Timedelta.nanoseconds.html |
pandas.tseries.offsets.Hour.apply | pandas.tseries.offsets.Hour.apply | Hour.apply()#
| reference/api/pandas.tseries.offsets.Hour.apply.html |
pandas.Series.swaplevel | `pandas.Series.swaplevel`
Swap levels i and j in a MultiIndex.
Default is to swap the two innermost levels of the index.
```
>>> s = pd.Series(
... ["A", "B", "A", "C"],
... index=[
... ["Final exam", "Final exam", "Coursework", "Coursework"],
... ["History", "Geography", "History", "Geography"],
... ["January", "February", "March", "April"],
... ],
... )
>>> s
Final exam History January A
Geography February B
Coursework History March A
Geography April C
dtype: object
Series.swaplevel(i=-2, j=-1, copy=True)[source]#
Swap levels i and j in a MultiIndex.
Default is to swap the two innermost levels of the index.
Parameters
i, jint or strLevels of the indices to be swapped. Can pass level name as string.
copybool, default TrueWhether to copy underlying data.
Returns
SeriesSeries with levels swapped in MultiIndex.
Examples
>>> s = pd.Series(
... ["A", "B", "A", "C"],
... index=[
... ["Final exam", "Final exam", "Coursework", "Coursework"],
... ["History", "Geography", "History", "Geography"],
... ["January", "February", "March", "April"],
... ],
... )
>>> s
Final exam History January A
Geography February B
Coursework History March A
Geography April C
dtype: object
In the following example, we will swap the levels of the indices.
Here, we will swap the levels column-wise, but levels can be swapped row-wise
in a similar manner. Note that column-wise is the default behaviour.
By not supplying any arguments for i and j, we swap the last and second to
last indices.
>>> s.swaplevel()
Final exam January History A
February Geography B
Coursework March History A
April Geography C
dtype: object
By supplying one argument, we can choose which index to swap the last
index with. We can for example swap the first index with the last one as
follows.
>>> s.swaplevel(0)
January History Final exam A
February Geography Final exam B
March History Coursework A
April Geography Coursework C
dtype: object
We can also define explicitly which indices we want to swap by supplying values
for both i and j. Here, we for example swap the first and second indices.
>>> s.swaplevel(0, 1)
History Final exam January A
Geography Final exam February B
History Coursework March A
Geography Coursework April C
dtype: object
| reference/api/pandas.Series.swaplevel.html |
pandas.Series.sort_values | `pandas.Series.sort_values`
Sort by the values.
```
>>> s = pd.Series([np.nan, 1, 3, 10, 5])
>>> s
0 NaN
1 1.0
2 3.0
3 10.0
4 5.0
dtype: float64
``` | Series.sort_values(*, axis=0, ascending=True, inplace=False, kind='quicksort', na_position='last', ignore_index=False, key=None)[source]#
Sort by the values.
Sort a Series in ascending or descending order by some
criterion.
Parameters
axis{0 or ‘index’}Unused. Parameter needed for compatibility with DataFrame.
ascendingbool or list of bools, default TrueIf True, sort values in ascending order, otherwise descending.
inplacebool, default FalseIf True, perform operation in-place.
kind{‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, default ‘quicksort’Choice of sorting algorithm. See also numpy.sort() for more
information. ‘mergesort’ and ‘stable’ are the only stable algorithms.
na_position{‘first’ or ‘last’}, default ‘last’Argument ‘first’ puts NaNs at the beginning, ‘last’ puts NaNs at
the end.
ignore_indexbool, default FalseIf True, the resulting axis will be labeled 0, 1, …, n - 1.
New in version 1.0.0.
keycallable, optionalIf not None, apply the key function to the series values
before sorting. This is similar to the key argument in the
builtin sorted() function, with the notable difference that
this key function should be vectorized. It should expect a
Series and return an array-like.
New in version 1.1.0.
Returns
Series or NoneSeries ordered by values or None if inplace=True.
See also
Series.sort_indexSort by the Series indices.
DataFrame.sort_valuesSort DataFrame by the values along either axis.
DataFrame.sort_indexSort DataFrame by indices.
Examples
>>> s = pd.Series([np.nan, 1, 3, 10, 5])
>>> s
0 NaN
1 1.0
2 3.0
3 10.0
4 5.0
dtype: float64
Sort values ascending order (default behaviour)
>>> s.sort_values(ascending=True)
1 1.0
2 3.0
4 5.0
3 10.0
0 NaN
dtype: float64
Sort values descending order
>>> s.sort_values(ascending=False)
3 10.0
4 5.0
2 3.0
1 1.0
0 NaN
dtype: float64
Sort values inplace
>>> s.sort_values(ascending=False, inplace=True)
>>> s
3 10.0
4 5.0
2 3.0
1 1.0
0 NaN
dtype: float64
Sort values putting NAs first
>>> s.sort_values(na_position='first')
0 NaN
1 1.0
2 3.0
4 5.0
3 10.0
dtype: float64
Sort a series of strings
>>> s = pd.Series(['z', 'b', 'd', 'a', 'c'])
>>> s
0 z
1 b
2 d
3 a
4 c
dtype: object
>>> s.sort_values()
3 a
1 b
4 c
2 d
0 z
dtype: object
Sort using a key function. Your key function will be
given the Series of values and should return an array-like.
>>> s = pd.Series(['a', 'B', 'c', 'D', 'e'])
>>> s.sort_values()
1 B
3 D
0 a
2 c
4 e
dtype: object
>>> s.sort_values(key=lambda x: x.str.lower())
0 a
1 B
2 c
3 D
4 e
dtype: object
NumPy ufuncs work well here. For example, we can
sort by the sin of the value
>>> s = pd.Series([-4, -2, 0, 2, 4])
>>> s.sort_values(key=np.sin)
1 -2
4 4
2 0
0 -4
3 2
dtype: int64
More complicated user-defined functions can be used,
as long as they expect a Series and return an array-like
>>> s.sort_values(key=lambda x: (np.tan(x.cumsum())))
0 -4
3 2
4 4
1 -2
2 0
dtype: int64
| reference/api/pandas.Series.sort_values.html |
pandas.tseries.offsets.BQuarterEnd.is_year_end | `pandas.tseries.offsets.BQuarterEnd.is_year_end`
Return boolean whether a timestamp occurs on the year end.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
``` | BQuarterEnd.is_year_end()#
Return boolean whether a timestamp occurs on the year end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
| reference/api/pandas.tseries.offsets.BQuarterEnd.is_year_end.html |
pandas.core.resample.Resampler.sem | `pandas.core.resample.Resampler.sem`
Compute standard error of the mean of groups, excluding missing values. | Resampler.sem(ddof=1, numeric_only=_NoDefault.no_default, *args, **kwargs)[source]#
Compute standard error of the mean of groups, excluding missing values.
For multiple groupings, the result index will be a MultiIndex.
Parameters
ddofint, default 1Degrees of freedom.
numeric_onlybool, default TrueInclude only float, int or boolean data.
New in version 1.5.0.
Returns
Series or DataFrameStandard error of the mean of values within each group.
See also
Series.groupbyApply a function groupby to a Series.
DataFrame.groupbyApply a function groupby to each row or column of a DataFrame.
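A minimal sketch (not from the original page): each two-day bin holds two consecutive integers, whose standard error is 0.5.
>>> idx = pd.date_range('2023-01-01', periods=4, freq='D')
>>> s = pd.Series([1, 2, 3, 4], index=idx)
>>> s.resample('2D').sem()
2023-01-01    0.5
2023-01-03    0.5
Freq: 2D, dtype: float64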
| reference/api/pandas.core.resample.Resampler.sem.html |
pandas.Series.truncate | `pandas.Series.truncate`
Truncate a Series or DataFrame before and after some index value.
```
>>> df = pd.DataFrame({'A': ['a', 'b', 'c', 'd', 'e'],
... 'B': ['f', 'g', 'h', 'i', 'j'],
... 'C': ['k', 'l', 'm', 'n', 'o']},
... index=[1, 2, 3, 4, 5])
>>> df
A B C
1 a f k
2 b g l
3 c h m
4 d i n
5 e j o
``` | Series.truncate(before=None, after=None, axis=None, copy=True)[source]#
Truncate a Series or DataFrame before and after some index value.
This is a useful shorthand for boolean indexing based on index
values above or below certain thresholds.
Parameters
beforedate, str, intTruncate all rows before this index value.
afterdate, str, intTruncate all rows after this index value.
axis{0 or ‘index’, 1 or ‘columns’}, optionalAxis to truncate. Truncates the index (rows) by default.
For Series this parameter is unused and defaults to 0.
copybool, default is True,Return a copy of the truncated section.
Returns
type of callerThe truncated Series or DataFrame.
See also
DataFrame.locSelect a subset of a DataFrame by label.
DataFrame.ilocSelect a subset of a DataFrame by position.
Notes
If the index being truncated contains only datetime values,
before and after may be specified as strings instead of
Timestamps.
Examples
>>> df = pd.DataFrame({'A': ['a', 'b', 'c', 'd', 'e'],
... 'B': ['f', 'g', 'h', 'i', 'j'],
... 'C': ['k', 'l', 'm', 'n', 'o']},
... index=[1, 2, 3, 4, 5])
>>> df
A B C
1 a f k
2 b g l
3 c h m
4 d i n
5 e j o
>>> df.truncate(before=2, after=4)
A B C
2 b g l
3 c h m
4 d i n
The columns of a DataFrame can be truncated.
>>> df.truncate(before="A", after="B", axis="columns")
A B
1 a f
2 b g
3 c h
4 d i
5 e j
For Series, only rows can be truncated.
>>> df['A'].truncate(before=2, after=4)
2 b
3 c
4 d
Name: A, dtype: object
The index values in truncate can be datetimes or string
dates.
>>> dates = pd.date_range('2016-01-01', '2016-02-01', freq='s')
>>> df = pd.DataFrame(index=dates, data={'A': 1})
>>> df.tail()
A
2016-01-31 23:59:56 1
2016-01-31 23:59:57 1
2016-01-31 23:59:58 1
2016-01-31 23:59:59 1
2016-02-01 00:00:00 1
>>> df.truncate(before=pd.Timestamp('2016-01-05'),
... after=pd.Timestamp('2016-01-10')).tail()
A
2016-01-09 23:59:56 1
2016-01-09 23:59:57 1
2016-01-09 23:59:58 1
2016-01-09 23:59:59 1
2016-01-10 00:00:00 1
Because the index is a DatetimeIndex containing only dates, we can
specify before and after as strings. They will be coerced to
Timestamps before truncation.
>>> df.truncate('2016-01-05', '2016-01-10').tail()
A
2016-01-09 23:59:56 1
2016-01-09 23:59:57 1
2016-01-09 23:59:58 1
2016-01-09 23:59:59 1
2016-01-10 00:00:00 1
Note that truncate assumes a 0 value for any unspecified time
component (midnight). This differs from partial string slicing, which
returns any partially matching dates.
>>> df.loc['2016-01-05':'2016-01-10', :].tail()
A
2016-01-10 23:59:55 1
2016-01-10 23:59:56 1
2016-01-10 23:59:57 1
2016-01-10 23:59:58 1
2016-01-10 23:59:59 1
| reference/api/pandas.Series.truncate.html |
pandas.api.extensions.ExtensionArray.tolist | `pandas.api.extensions.ExtensionArray.tolist`
Return a list of the values. | ExtensionArray.tolist()[source]#
Return a list of the values.
These are each a scalar type, which is a Python scalar
(for str, int, float) or a pandas scalar
(for Timestamp/Timedelta/Interval/Period).
Returns
list
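Examples
A minimal sketch using a nullable integer array (any ExtensionArray behaves the same way):
>>> arr = pd.array([1, 2, pd.NA], dtype="Int64")
>>> arr.tolist()
[1, 2, <NA>]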
| reference/api/pandas.api.extensions.ExtensionArray.tolist.html |
pandas.tseries.offsets.SemiMonthEnd.n | pandas.tseries.offsets.SemiMonthEnd.n | SemiMonthEnd.n#
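Examples
The n attribute holds the integer number of periods the offset represents; a minimal sketch:
>>> offset = pd.offsets.SemiMonthEnd(n=2)
>>> offset.n
2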
| reference/api/pandas.tseries.offsets.SemiMonthEnd.n.html |
pandas.tseries.offsets.CustomBusinessMonthBegin.__call__ | `pandas.tseries.offsets.CustomBusinessMonthBegin.__call__`
Call self as a function. | CustomBusinessMonthBegin.__call__(*args, **kwargs)#
Call self as a function.
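Examples
Calling the offset applies it to a datetime, which is equivalent to adding it; a minimal sketch using the addition form (direct calls emit a deprecation warning in recent pandas):
>>> ts = pd.Timestamp('2022-08-05')
>>> ts + pd.offsets.CustomBusinessMonthBegin()
Timestamp('2022-09-01 00:00:00')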
| reference/api/pandas.tseries.offsets.CustomBusinessMonthBegin.__call__.html |
pandas.io.json.build_table_schema | `pandas.io.json.build_table_schema`
Create a Table schema from data.
```
>>> df = pd.DataFrame(
... {'A': [1, 2, 3],
... 'B': ['a', 'b', 'c'],
... 'C': pd.date_range('2016-01-01', freq='d', periods=3),
... }, index=pd.Index(range(3), name='idx'))
>>> build_table_schema(df)
{'fields': [{'name': 'idx', 'type': 'integer'}, {'name': 'A', 'type': 'integer'}, {'name': 'B', 'type': 'string'}, {'name': 'C', 'type': 'datetime'}], 'primaryKey': ['idx'], 'pandas_version': '1.4.0'}
``` | pandas.io.json.build_table_schema(data, index=True, primary_key=None, version=True)[source]#
Create a Table schema from data.
Parameters
data : Series, DataFrame
index : bool, default True
    Whether to include data.index in the schema.
primary_key : bool or None, default None
    Column names to designate as the primary key.
    The default None will set 'primaryKey' to the index
    level or levels if the index is unique.
version : bool, default True
    Whether to include a field pandas_version with the version
    of pandas that last revised the table schema. This version
    can be different from the installed pandas version.
Returns
schema : dict
Notes
See Table Schema for
conversion types.
Timedeltas are converted to ISO8601 duration format with
9 decimal places after the seconds field for nanosecond precision.
Categoricals are converted to the any dtype, and use the enum field
constraint to list the allowed values. The ordered attribute is included
in an ordered field.
Examples
>>> df = pd.DataFrame(
... {'A': [1, 2, 3],
... 'B': ['a', 'b', 'c'],
... 'C': pd.date_range('2016-01-01', freq='d', periods=3),
... }, index=pd.Index(range(3), name='idx'))
>>> build_table_schema(df)
{'fields': [{'name': 'idx', 'type': 'integer'}, {'name': 'A', 'type': 'integer'}, {'name': 'B', 'type': 'string'}, {'name': 'C', 'type': 'datetime'}], 'primaryKey': ['idx'], 'pandas_version': '1.4.0'}
| reference/api/pandas.io.json.build_table_schema.html |
pandas.core.groupby.GroupBy.pad | `pandas.core.groupby.GroupBy.pad`
Forward fill the values.
Deprecated since version 1.4: Use ffill instead. | GroupBy.pad(limit=None)[source]#
Forward fill the values.
Deprecated since version 1.4: Use ffill instead.
Parameters
limit : int, optional
    Limit of how many values to fill.
Returns
Series or DataFrame
    Object with missing values filled.
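Examples
Since pad is deprecated, a minimal sketch of the equivalent ffill call on hypothetical data:
>>> df = pd.DataFrame({'key': ['a', 'a', 'b', 'b'],
...                    'value': [1.0, None, 3.0, None]})
>>> df.groupby('key').ffill()
   value
0    1.0
1    1.0
2    3.0
3    3.0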
| reference/api/pandas.core.groupby.GroupBy.pad.html |
pandas.option_context.__call__ | `pandas.option_context.__call__`
Call self as a function. | option_context.__call__(func)[source]#
Call self as a function.
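Examples
Supporting __call__ lets an option_context instance be used as a decorator, applying the options for the duration of each decorated call; a minimal sketch with a hypothetical function:
>>> @pd.option_context('display.max_rows', 5)
... def show_frame(df):
...     print(df)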
| reference/api/pandas.option_context.__call__.html |
pandas.DataFrame.shift | `pandas.DataFrame.shift`
Shift index by desired number of periods with an optional time freq.
When freq is not passed, shift the index without realigning the data.
If freq is passed (in this case, the index must be date or datetime,
or it will raise a NotImplementedError), the index will be
increased using the periods and the freq. freq can be inferred
when specified as “infer” as long as either freq or inferred_freq
attribute is set in the index.
```
>>> df = pd.DataFrame({"Col1": [10, 20, 15, 30, 45],
... "Col2": [13, 23, 18, 33, 48],
... "Col3": [17, 27, 22, 37, 52]},
... index=pd.date_range("2020-01-01", "2020-01-05"))
>>> df
Col1 Col2 Col3
2020-01-01 10 13 17
2020-01-02 20 23 27
2020-01-03 15 18 22
2020-01-04 30 33 37
2020-01-05 45 48 52
``` | DataFrame.shift(periods=1, freq=None, axis=0, fill_value=_NoDefault.no_default)[source]#
Shift index by desired number of periods with an optional time freq.
When freq is not passed, shift the index without realigning the data.
If freq is passed (in this case, the index must be date or datetime,
or it will raise a NotImplementedError), the index will be
increased using the periods and the freq. freq can be inferred
when specified as “infer” as long as either freq or inferred_freq
attribute is set in the index.
Parameters
periods : int
    Number of periods to shift. Can be positive or negative.
freq : DateOffset, tseries.offsets, timedelta, or str, optional
    Offset to use from the tseries module or time rule (e.g. 'EOM').
    If freq is specified then the index values are shifted but the
    data is not realigned. That is, use freq if you would like to
    extend the index when shifting and preserve the original data.
    If freq is specified as "infer" then it will be inferred from
    the freq or inferred_freq attributes of the index. If neither of
    those attributes exist, a ValueError is thrown.
axis : {0 or 'index', 1 or 'columns', None}, default None
    Shift direction. For Series this parameter is unused and defaults to 0.
fill_value : object, optional
    The scalar value to use for newly introduced missing values.
    The default depends on the dtype of self.
    For numeric data, np.nan is used.
    For datetime, timedelta, or period data, etc. NaT is used.
    For extension dtypes, self.dtype.na_value is used.
    Changed in version 1.1.0.
Returns
DataFrame
    Copy of input object, shifted.
See also
Index.shift : Shift values of Index.
DatetimeIndex.shift : Shift values of DatetimeIndex.
PeriodIndex.shift : Shift values of PeriodIndex.
tshift : Shift the time index, using the index's frequency if available.
Examples
>>> df = pd.DataFrame({"Col1": [10, 20, 15, 30, 45],
... "Col2": [13, 23, 18, 33, 48],
... "Col3": [17, 27, 22, 37, 52]},
... index=pd.date_range("2020-01-01", "2020-01-05"))
>>> df
Col1 Col2 Col3
2020-01-01 10 13 17
2020-01-02 20 23 27
2020-01-03 15 18 22
2020-01-04 30 33 37
2020-01-05 45 48 52
>>> df.shift(periods=3)
Col1 Col2 Col3
2020-01-01 NaN NaN NaN
2020-01-02 NaN NaN NaN
2020-01-03 NaN NaN NaN
2020-01-04 10.0 13.0 17.0
2020-01-05 20.0 23.0 27.0
>>> df.shift(periods=1, axis="columns")
Col1 Col2 Col3
2020-01-01 NaN 10 13
2020-01-02 NaN 20 23
2020-01-03 NaN 15 18
2020-01-04 NaN 30 33
2020-01-05 NaN 45 48
>>> df.shift(periods=3, fill_value=0)
Col1 Col2 Col3
2020-01-01 0 0 0
2020-01-02 0 0 0
2020-01-03 0 0 0
2020-01-04 10 13 17
2020-01-05 20 23 27
>>> df.shift(periods=3, freq="D")
Col1 Col2 Col3
2020-01-04 10 13 17
2020-01-05 20 23 27
2020-01-06 15 18 22
2020-01-07 30 33 37
2020-01-08 45 48 52
>>> df.shift(periods=3, freq="infer")
Col1 Col2 Col3
2020-01-04 10 13 17
2020-01-05 20 23 27
2020-01-06 15 18 22
2020-01-07 30 33 37
2020-01-08 45 48 52
| reference/api/pandas.DataFrame.shift.html |
pandas.wide_to_long | `pandas.wide_to_long`
Unpivot a DataFrame from wide to long format.
```
>>> np.random.seed(123)
>>> df = pd.DataFrame({"A1970" : {0 : "a", 1 : "b", 2 : "c"},
... "A1980" : {0 : "d", 1 : "e", 2 : "f"},
... "B1970" : {0 : 2.5, 1 : 1.2, 2 : .7},
... "B1980" : {0 : 3.2, 1 : 1.3, 2 : .1},
... "X" : dict(zip(range(3), np.random.randn(3)))
... })
>>> df["id"] = df.index
>>> df
A1970 A1980 B1970 B1980 X id
0 a d 2.5 3.2 -1.085631 0
1 b e 1.2 1.3 0.997345 1
2 c f 0.7 0.1 0.282978 2
>>> pd.wide_to_long(df, ["A", "B"], i="id", j="year")
...
X A B
id year
0 1970 -1.085631 a 2.5
1 1970 0.997345 b 1.2
2 1970 0.282978 c 0.7
0 1980 -1.085631 d 3.2
1 1980 0.997345 e 1.3
2 1980 0.282978 f 0.1
``` | pandas.wide_to_long(df, stubnames, i, j, sep='', suffix='\\d+')[source]#
Unpivot a DataFrame from wide to long format.
Less flexible but more user-friendly than melt.
With stubnames ['A', 'B'], this function expects to find one or more
groups of columns with the format
A-suffix1, A-suffix2, ..., B-suffix1, B-suffix2, ...
You specify what you want to call this suffix in the resulting long format
with j (for example j='year').
Each row of these wide variables is assumed to be uniquely identified by
i (which can be a single column name or a list of column names).
All remaining variables in the data frame are left intact.
Parameters
df : DataFrame
    The wide-format DataFrame.
stubnames : str or list-like
    The stub name(s). The wide format variables are assumed to
    start with the stub names.
i : str or list-like
    Column(s) to use as id variable(s).
j : str
    The name of the sub-observation variable. What you wish to name your
    suffix in the long format.
sep : str, default ""
    A character indicating the separation of the variable names
    in the wide format, to be stripped from the names in the long format.
    For example, if your column names are A-suffix1, A-suffix2, you
    can strip the hyphen by specifying sep='-'.
suffix : str, default '\d+'
    A regular expression capturing the wanted suffixes. '\d+' captures
    numeric suffixes. Suffixes with no numbers could be specified with the
    negated character class '\D+'. You can also further disambiguate
    suffixes, for example, if your wide variables are of the form A-one,
    B-two, ..., and you have an unrelated column A-rating, you can ignore the
    last one by specifying suffix='(one|two)'. When all suffixes are
    numeric, they are cast to int64/float64.
Returns
DataFrame
    A DataFrame that contains each stub name as a variable, with new index
    (i, j).
See also
melt : Unpivot a DataFrame from wide to long format, optionally leaving identifiers set.
pivot : Create a spreadsheet-style pivot table as a DataFrame.
DataFrame.pivot : Pivot without aggregation that can handle non-numeric data.
DataFrame.pivot_table : Generalization of pivot that can handle duplicate values for one index/column pair.
DataFrame.unstack : Pivot based on the index values instead of a column.
Notes
All extra variables are left untouched. This simply uses
pandas.melt under the hood, but is hard-coded to “do the right thing”
in a typical case.
Examples
>>> np.random.seed(123)
>>> df = pd.DataFrame({"A1970" : {0 : "a", 1 : "b", 2 : "c"},
... "A1980" : {0 : "d", 1 : "e", 2 : "f"},
... "B1970" : {0 : 2.5, 1 : 1.2, 2 : .7},
... "B1980" : {0 : 3.2, 1 : 1.3, 2 : .1},
... "X" : dict(zip(range(3), np.random.randn(3)))
... })
>>> df["id"] = df.index
>>> df
A1970 A1980 B1970 B1980 X id
0 a d 2.5 3.2 -1.085631 0
1 b e 1.2 1.3 0.997345 1
2 c f 0.7 0.1 0.282978 2
>>> pd.wide_to_long(df, ["A", "B"], i="id", j="year")
...
X A B
id year
0 1970 -1.085631 a 2.5
1 1970 0.997345 b 1.2
2 1970 0.282978 c 0.7
0 1980 -1.085631 d 3.2
1 1980 0.997345 e 1.3
2 1980 0.282978 f 0.1
With multiple id columns
>>> df = pd.DataFrame({
... 'famid': [1, 1, 1, 2, 2, 2, 3, 3, 3],
... 'birth': [1, 2, 3, 1, 2, 3, 1, 2, 3],
... 'ht1': [2.8, 2.9, 2.2, 2, 1.8, 1.9, 2.2, 2.3, 2.1],
... 'ht2': [3.4, 3.8, 2.9, 3.2, 2.8, 2.4, 3.3, 3.4, 2.9]
... })
>>> df
famid birth ht1 ht2
0 1 1 2.8 3.4
1 1 2 2.9 3.8
2 1 3 2.2 2.9
3 2 1 2.0 3.2
4 2 2 1.8 2.8
5 2 3 1.9 2.4
6 3 1 2.2 3.3
7 3 2 2.3 3.4
8 3 3 2.1 2.9
>>> l = pd.wide_to_long(df, stubnames='ht', i=['famid', 'birth'], j='age')
>>> l
...
ht
famid birth age
1 1 1 2.8
2 3.4
2 1 2.9
2 3.8
3 1 2.2
2 2.9
2 1 1 2.0
2 3.2
2 1 1.8
2 2.8
3 1 1.9
2 2.4
3 1 1 2.2
2 3.3
2 1 2.3
2 3.4
3 1 2.1
2 2.9
Going from long back to wide just takes some creative use of unstack
>>> w = l.unstack()
>>> w.columns = w.columns.map('{0[0]}{0[1]}'.format)
>>> w.reset_index()
famid birth ht1 ht2
0 1 1 2.8 3.4
1 1 2 2.9 3.8
2 1 3 2.2 2.9
3 2 1 2.0 3.2
4 2 2 1.8 2.8
5 2 3 1.9 2.4
6 3 1 2.2 3.3
7 3 2 2.3 3.4
8 3 3 2.1 2.9
Less wieldy column names are also handled
>>> np.random.seed(0)
>>> df = pd.DataFrame({'A(weekly)-2010': np.random.rand(3),
... 'A(weekly)-2011': np.random.rand(3),
... 'B(weekly)-2010': np.random.rand(3),
... 'B(weekly)-2011': np.random.rand(3),
... 'X' : np.random.randint(3, size=3)})
>>> df['id'] = df.index
>>> df
A(weekly)-2010 A(weekly)-2011 B(weekly)-2010 B(weekly)-2011 X id
0 0.548814 0.544883 0.437587 0.383442 0 0
1 0.715189 0.423655 0.891773 0.791725 1 1
2 0.602763 0.645894 0.963663 0.528895 1 2
>>> pd.wide_to_long(df, ['A(weekly)', 'B(weekly)'], i='id',
... j='year', sep='-')
...
X A(weekly) B(weekly)
id year
0 2010 0 0.548814 0.437587
1 2010 1 0.715189 0.891773
2 2010 1 0.602763 0.963663
0 2011 0 0.544883 0.383442
1 2011 1 0.423655 0.791725
2 2011 1 0.645894 0.528895
If we have many columns, we could also use a regex to find our
stubnames and pass that list on to wide_to_long
>>> stubnames = sorted(
... set([match[0] for match in df.columns.str.findall(
... r'[A-B]\(.*\)').values if match != []])
... )
>>> list(stubnames)
['A(weekly)', 'B(weekly)']
All of the above examples have integers as suffixes. It is possible to
have non-integers as suffixes.
>>> df = pd.DataFrame({
... 'famid': [1, 1, 1, 2, 2, 2, 3, 3, 3],
... 'birth': [1, 2, 3, 1, 2, 3, 1, 2, 3],
... 'ht_one': [2.8, 2.9, 2.2, 2, 1.8, 1.9, 2.2, 2.3, 2.1],
... 'ht_two': [3.4, 3.8, 2.9, 3.2, 2.8, 2.4, 3.3, 3.4, 2.9]
... })
>>> df
famid birth ht_one ht_two
0 1 1 2.8 3.4
1 1 2 2.9 3.8
2 1 3 2.2 2.9
3 2 1 2.0 3.2
4 2 2 1.8 2.8
5 2 3 1.9 2.4
6 3 1 2.2 3.3
7 3 2 2.3 3.4
8 3 3 2.1 2.9
>>> l = pd.wide_to_long(df, stubnames='ht', i=['famid', 'birth'], j='age',
... sep='_', suffix=r'\w+')
>>> l
...
ht
famid birth age
1 1 one 2.8
two 3.4
2 one 2.9
two 3.8
3 one 2.2
two 2.9
2 1 one 2.0
two 3.2
2 one 1.8
two 2.8
3 one 1.9
two 2.4
3 1 one 2.2
two 3.3
2 one 2.3
two 3.4
3 one 2.1
two 2.9
| reference/api/pandas.wide_to_long.html |
pandas.arrays.ArrowExtensionArray | `pandas.arrays.ArrowExtensionArray`
Pandas ExtensionArray backed by a PyArrow ChunkedArray.
Warning
```
>>> pd.array([1, 1, None], dtype="int64[pyarrow]")
<ArrowExtensionArray>
[1, 1, <NA>]
Length: 3, dtype: int64[pyarrow]
``` | class pandas.arrays.ArrowExtensionArray(values)[source]#
Pandas ExtensionArray backed by a PyArrow ChunkedArray.
Warning
ArrowExtensionArray is considered experimental. The implementation and
parts of the API may change without warning.
Parameters
values : pyarrow.Array or pyarrow.ChunkedArray
Returns
ArrowExtensionArray
Notes
Most methods are implemented using pyarrow compute functions.
Some methods may either raise an exception or raise a PerformanceWarning if an
associated compute function is not available based on the installed version of PyArrow.
Please install the latest version of PyArrow to enable the best functionality and avoid
potential bugs in prior versions of PyArrow.
Examples
Create an ArrowExtensionArray with pandas.array():
>>> pd.array([1, 1, None], dtype="int64[pyarrow]")
<ArrowExtensionArray>
[1, 1, <NA>]
Length: 3, dtype: int64[pyarrow]
Attributes
None
Methods
None
| reference/api/pandas.arrays.ArrowExtensionArray.html |
pandas.Timestamp.isoweekday | `pandas.Timestamp.isoweekday`
Return the day of the week represented by the date. | Timestamp.isoweekday()#
Return the day of the week represented by the date.
Monday == 1 … Sunday == 7.
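Examples
A minimal sketch (2023-01-01 fell on a Sunday):
>>> ts = pd.Timestamp('2023-01-01')
>>> ts.isoweekday()
7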
| reference/api/pandas.Timestamp.isoweekday.html |
pandas.tseries.offsets.BQuarterBegin.is_on_offset | `pandas.tseries.offsets.BQuarterBegin.is_on_offset`
Return boolean whether a timestamp intersects with this frequency.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
``` | BQuarterBegin.is_on_offset()#
Return boolean whether a timestamp intersects with this frequency.
Parameters
dt : datetime.datetime
    Timestamp to check intersections with frequency.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
>>> ts = pd.Timestamp(2022, 8, 6)
>>> ts.day_name()
'Saturday'
>>> freq = pd.offsets.BusinessDay(1)
>>> freq.is_on_offset(ts)
False
| reference/api/pandas.tseries.offsets.BQuarterBegin.is_on_offset.html |
pandas.io.formats.style.Styler.highlight_null | `pandas.io.formats.style.Styler.highlight_null`
Highlight missing values with a style. | Styler.highlight_null(color=None, subset=None, props=None, null_color=_NoDefault.no_default)[source]#
Highlight missing values with a style.
Parameters
color : str, default 'yellow'
    Background color to use for highlighting.
    New in version 1.5.0.
subset : label, array-like, IndexSlice, optional
    A valid 2d input to DataFrame.loc[<subset>], or, in the case of a 1d input
    or single key, to DataFrame.loc[:, <subset>] where the columns are
    prioritised, to limit data to before applying the function.
    New in version 1.1.0.
props : str, default None
    CSS properties to use for highlighting. If props is given, color
    is not used.
    New in version 1.3.0.
null_color : str, default None
    The background color for highlighting.
    Deprecated since version 1.5.0: Use color instead. If color is given,
    null_color is not used.
Returns
self : Styler
See also
Styler.highlight_max : Highlight the maximum with a style.
Styler.highlight_min : Highlight the minimum with a style.
Styler.highlight_between : Highlight a defined range with a style.
Styler.highlight_quantile : Highlight values defined by a quantile with a style.
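Examples
A minimal sketch with hypothetical data (the styled output renders in a notebook or via to_html):
>>> df = pd.DataFrame({'A': [1, None], 'B': [3, 4]})
>>> df.style.highlight_null(color='red')  # doctest: +SKIP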
| reference/api/pandas.io.formats.style.Styler.highlight_null.html |
pandas.api.extensions.ExtensionArray._reduce | `pandas.api.extensions.ExtensionArray._reduce`
Return a scalar result of performing the reduction operation. | ExtensionArray._reduce(name, *, skipna=True, **kwargs)[source]#
Return a scalar result of performing the reduction operation.
Parameters
name : str
    Name of the function; supported values are:
    {any, all, min, max, sum, mean, median, prod,
    std, var, sem, kurt, skew}.
skipna : bool, default True
    If True, skip NaN values.
**kwargs
    Additional keyword arguments passed to the reduction function.
    Currently, ddof is the only supported kwarg.
Returns
scalar
Raises
TypeError
    Raised if the subclass does not define reductions.
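Examples
This hook is normally invoked through Series reductions rather than called directly; a minimal sketch using a nullable integer array:
>>> arr = pd.array([1, 2, 3, pd.NA], dtype="Int64")
>>> arr._reduce("sum", skipna=True)
6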
| reference/api/pandas.api.extensions.ExtensionArray._reduce.html |
pandas.tseries.offsets.LastWeekOfMonth.is_anchored | `pandas.tseries.offsets.LastWeekOfMonth.is_anchored`
Return boolean whether the frequency is a unit frequency (n=1).
Examples
```
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
``` | LastWeekOfMonth.is_anchored()#
Return boolean whether the frequency is a unit frequency (n=1).
Examples
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
| reference/api/pandas.tseries.offsets.LastWeekOfMonth.is_anchored.html |
pandas.RangeIndex.from_range | `pandas.RangeIndex.from_range`
Create RangeIndex from a range object. | classmethod RangeIndex.from_range(data, name=None, dtype=None)[source]#
Create RangeIndex from a range object.
Returns
RangeIndex
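Examples
A minimal sketch converting a built-in range (start, stop and step are preserved):
>>> pd.RangeIndex.from_range(range(0, 10, 2))
RangeIndex(start=0, stop=10, step=2)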
| reference/api/pandas.RangeIndex.from_range.html |
pandas.Series.to_string | `pandas.Series.to_string`
Render a string representation of the Series.
Buffer to write to. | Series.to_string(buf=None, na_rep='NaN', float_format=None, header=True, index=True, length=False, dtype=False, name=False, max_rows=None, min_rows=None)[source]#
Render a string representation of the Series.
Parameters
buf : StringIO-like, optional
    Buffer to write to.
na_rep : str, optional
    String representation of NaN to use, default 'NaN'.
float_format : one-parameter function, optional
    Formatter function to apply to columns' elements if they are
    floats, default None.
header : bool, default True
    Add the Series header (index name).
index : bool, optional
    Add index (row) labels, default True.
length : bool, default False
    Add the Series length.
dtype : bool, default False
    Add the Series dtype.
name : bool, default False
    Add the Series name if not None.
max_rows : int, optional
    Maximum number of rows to show before truncating. If None, show all.
min_rows : int, optional
    The number of rows to display in a truncated repr (when number
    of rows is above max_rows).
Returns
str or None
    String representation of Series if buf=None, otherwise None.
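Examples
A minimal sketch with hypothetical data, opting in to the name and dtype footer:
>>> ser = pd.Series([1, 2, 3], name='values')
>>> print(ser.to_string(name=True, dtype=True))
0    1
1    2
2    3
Name: values, dtype: int64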
| reference/api/pandas.Series.to_string.html |