title (string, lengths 5–65) | summary (string, lengths 5–98.2k) | context (string, lengths 9–121k) | path (string, lengths 10–84, nullable)
---|---|---|---|
pandas.Series.factorize
|
`pandas.Series.factorize`
Encode the object as an enumerated type or categorical variable.
This method is useful for obtaining a numeric representation of an
array when all that matters is identifying distinct values. factorize
is available as both a top-level function pandas.factorize(),
and as a method Series.factorize() and Index.factorize().
```
>>> codes, uniques = pd.factorize(['b', 'b', 'a', 'c', 'b'])
>>> codes
array([0, 0, 1, 2, 0]...)
>>> uniques
array(['b', 'a', 'c'], dtype=object)
```
|
Series.factorize(sort=False, na_sentinel=_NoDefault.no_default, use_na_sentinel=_NoDefault.no_default)[source]#
Encode the object as an enumerated type or categorical variable.
This method is useful for obtaining a numeric representation of an
array when all that matters is identifying distinct values. factorize
is available as both a top-level function pandas.factorize(),
and as a method Series.factorize() and Index.factorize().
Parameters
sort : bool, default False
    Sort uniques and shuffle codes to maintain the relationship.
na_sentinel : int or None, default -1
    Value to mark “not found”. If None, will not drop the NaN from the uniques of the values.
    Deprecated since version 1.5.0: The na_sentinel argument is deprecated and will be removed in a future version of pandas. Specify use_na_sentinel as either True or False.
    Changed in version 1.1.2.
use_na_sentinel : bool, default True
    If True, the sentinel -1 will be used for NaN values. If False, NaN values will be encoded as non-negative integers and will not drop the NaN from the uniques of the values.
    New in version 1.5.0.
Returns
codes : ndarray
    An integer ndarray that’s an indexer into uniques. uniques.take(codes) will have the same values as values.
uniques : ndarray, Index, or Categorical
    The unique valid values. When values is Categorical, uniques is a Categorical. When values is some other pandas object, an Index is returned. Otherwise, a 1-D ndarray is returned.
Note
Even if there’s a missing value in values, uniques will not contain an entry for it.
See also
cut : Discretize continuous-valued array.
unique : Find the unique values in an array.
Notes
Reference the user guide for more examples.
Examples
These examples all show factorize as a top-level method like
pd.factorize(values). The results are identical for methods like
Series.factorize().
>>> codes, uniques = pd.factorize(['b', 'b', 'a', 'c', 'b'])
>>> codes
array([0, 0, 1, 2, 0]...)
>>> uniques
array(['b', 'a', 'c'], dtype=object)
With sort=True, the uniques will be sorted, and codes will be
shuffled so that the relationship is maintained.
>>> codes, uniques = pd.factorize(['b', 'b', 'a', 'c', 'b'], sort=True)
>>> codes
array([1, 1, 0, 2, 1]...)
>>> uniques
array(['a', 'b', 'c'], dtype=object)
When use_na_sentinel=True (the default), missing values are indicated in
the codes with the sentinel value -1 and missing values are not
included in uniques.
>>> codes, uniques = pd.factorize(['b', None, 'a', 'c', 'b'])
>>> codes
array([ 0, -1, 1, 2, 0]...)
>>> uniques
array(['b', 'a', 'c'], dtype=object)
Thus far, we’ve only factorized lists (which are internally coerced to
NumPy arrays). When factorizing pandas objects, the type of uniques
will differ. For Categoricals, a Categorical is returned.
>>> cat = pd.Categorical(['a', 'a', 'c'], categories=['a', 'b', 'c'])
>>> codes, uniques = pd.factorize(cat)
>>> codes
array([0, 0, 1]...)
>>> uniques
['a', 'c']
Categories (3, object): ['a', 'b', 'c']
Notice that 'b' is in uniques.categories, despite not being
present in cat.values.
For all other pandas objects, an Index of the appropriate type is
returned.
>>> cat = pd.Series(['a', 'a', 'c'])
>>> codes, uniques = pd.factorize(cat)
>>> codes
array([0, 0, 1]...)
>>> uniques
Index(['a', 'c'], dtype='object')
If NaN is in the values, and we want to include NaN in the uniques of the
values, it can be achieved by setting use_na_sentinel=False.
>>> values = np.array([1, 2, 1, np.nan])
>>> codes, uniques = pd.factorize(values) # default: use_na_sentinel=True
>>> codes
array([ 0, 1, 0, -1])
>>> uniques
array([1., 2.])
>>> codes, uniques = pd.factorize(values, use_na_sentinel=False)
>>> codes
array([0, 1, 0, 2])
>>> uniques
array([ 1., 2., nan])
|
reference/api/pandas.Series.factorize.html
|
pandas.pivot
|
`pandas.pivot`
Return reshaped DataFrame organized by given index / column values.
```
>>> df = pd.DataFrame({'foo': ['one', 'one', 'one', 'two', 'two',
... 'two'],
... 'bar': ['A', 'B', 'C', 'A', 'B', 'C'],
... 'baz': [1, 2, 3, 4, 5, 6],
... 'zoo': ['x', 'y', 'z', 'q', 'w', 't']})
>>> df
foo bar baz zoo
0 one A 1 x
1 one B 2 y
2 one C 3 z
3 two A 4 q
4 two B 5 w
5 two C 6 t
```
|
pandas.pivot(data, *, index=None, columns=None, values=None)[source]#
Return reshaped DataFrame organized by given index / column values.
Reshape data (produce a “pivot” table) based on column values. Uses
unique values from specified index / columns to form axes of the
resulting DataFrame. This function does not support data
aggregation, multiple values will result in a MultiIndex in the
columns. See the User Guide for more on reshaping.
Parameters
data : DataFrame
index : str or object or a list of str, optional
    Column to use to make new frame’s index. If None, uses existing index.
    Changed in version 1.1.0: Also accept list of index names.
columns : str or object or a list of str
    Column to use to make new frame’s columns.
    Changed in version 1.1.0: Also accept list of column names.
values : str, object or a list of the previous, optional
    Column(s) to use for populating new frame’s values. If not specified, all remaining columns will be used and the result will have hierarchically indexed columns.
Returns
DataFrame
    Returns reshaped DataFrame.
Raises
ValueError
    When there are any index, columns combinations with multiple values. Use DataFrame.pivot_table when you need to aggregate.
See also
DataFrame.pivot_table : Generalization of pivot that can handle duplicate values for one index/column pair.
DataFrame.unstack : Pivot based on the index values instead of a column.
wide_to_long : Wide panel to long format. Less flexible but more user-friendly than melt.
Notes
For finer-tuned control, see hierarchical indexing documentation along
with the related stack/unstack methods.
Reference the user guide for more examples.
Examples
>>> df = pd.DataFrame({'foo': ['one', 'one', 'one', 'two', 'two',
... 'two'],
... 'bar': ['A', 'B', 'C', 'A', 'B', 'C'],
... 'baz': [1, 2, 3, 4, 5, 6],
... 'zoo': ['x', 'y', 'z', 'q', 'w', 't']})
>>> df
foo bar baz zoo
0 one A 1 x
1 one B 2 y
2 one C 3 z
3 two A 4 q
4 two B 5 w
5 two C 6 t
>>> df.pivot(index='foo', columns='bar', values='baz')
bar A B C
foo
one 1 2 3
two 4 5 6
>>> df.pivot(index='foo', columns='bar')['baz']
bar A B C
foo
one 1 2 3
two 4 5 6
>>> df.pivot(index='foo', columns='bar', values=['baz', 'zoo'])
baz zoo
bar A B C A B C
foo
one 1 2 3 x y z
two 4 5 6 q w t
You could also assign a list of column names or a list of index names.
>>> df = pd.DataFrame({
... "lev1": [1, 1, 1, 2, 2, 2],
... "lev2": [1, 1, 2, 1, 1, 2],
... "lev3": [1, 2, 1, 2, 1, 2],
... "lev4": [1, 2, 3, 4, 5, 6],
... "values": [0, 1, 2, 3, 4, 5]})
>>> df
lev1 lev2 lev3 lev4 values
0 1 1 1 1 0
1 1 1 2 2 1
2 1 2 1 3 2
3 2 1 2 4 3
4 2 1 1 5 4
5 2 2 2 6 5
>>> df.pivot(index="lev1", columns=["lev2", "lev3"],values="values")
lev2 1 2
lev3 1 2 1 2
lev1
1 0.0 1.0 2.0 NaN
2 4.0 3.0 NaN 5.0
>>> df.pivot(index=["lev1", "lev2"], columns=["lev3"],values="values")
lev3 1 2
lev1 lev2
1 1 0.0 1.0
2 2.0 NaN
2 1 4.0 3.0
2 NaN 5.0
A ValueError is raised if there are any duplicates.
>>> df = pd.DataFrame({"foo": ['one', 'one', 'two', 'two'],
... "bar": ['A', 'A', 'B', 'C'],
... "baz": [1, 2, 3, 4]})
>>> df
foo bar baz
0 one A 1
1 one A 2
2 two B 3
3 two C 4
Notice that the first two rows are the same for our index
and columns arguments.
>>> df.pivot(index='foo', columns='bar', values='baz')
Traceback (most recent call last):
...
ValueError: Index contains duplicate entries, cannot reshape
|
reference/api/pandas.pivot.html
|
pandas.tseries.offsets.BYearBegin.name
|
`pandas.tseries.offsets.BYearBegin.name`
Return a string representing the base frequency.
Examples
```
>>> pd.offsets.Hour().name
'H'
```
|
BYearBegin.name#
Return a string representing the base frequency.
Examples
>>> pd.offsets.Hour().name
'H'
>>> pd.offsets.Hour(5).name
'H'
|
reference/api/pandas.tseries.offsets.BYearBegin.name.html
|
pandas.Series.xs
|
`pandas.Series.xs`
Return cross-section from the Series/DataFrame.
This method takes a key argument to select data at a particular
level of a MultiIndex.
```
>>> d = {'num_legs': [4, 4, 2, 2],
... 'num_wings': [0, 0, 2, 2],
... 'class': ['mammal', 'mammal', 'mammal', 'bird'],
... 'animal': ['cat', 'dog', 'bat', 'penguin'],
... 'locomotion': ['walks', 'walks', 'flies', 'walks']}
>>> df = pd.DataFrame(data=d)
>>> df = df.set_index(['class', 'animal', 'locomotion'])
>>> df
num_legs num_wings
class animal locomotion
mammal cat walks 4 0
dog walks 4 0
bat flies 2 2
bird penguin walks 2 2
```
|
Series.xs(key, axis=0, level=None, drop_level=True)[source]#
Return cross-section from the Series/DataFrame.
This method takes a key argument to select data at a particular
level of a MultiIndex.
Parameters
key : label or tuple of label
    Label contained in the index, or partially in a MultiIndex.
axis : {0 or ‘index’, 1 or ‘columns’}, default 0
    Axis to retrieve cross-section on.
level : object, defaults to first n levels (n=1 or len(key))
    In case of a key partially contained in a MultiIndex, indicate which levels are used. Levels can be referred by label or position.
drop_level : bool, default True
    If False, returns object with same levels as self.
Returns
Series or DataFrame
    Cross-section from the original Series or DataFrame corresponding to the selected index levels.
See also
DataFrame.loc : Access a group of rows and columns by label(s) or a boolean array.
DataFrame.iloc : Purely integer-location based indexing for selection by position.
Notes
xs cannot be used to set values.
MultiIndex Slicers is a generic way to get/set values on any level or levels. It is a superset of xs functionality; see MultiIndex Slicers.
Examples
>>> d = {'num_legs': [4, 4, 2, 2],
... 'num_wings': [0, 0, 2, 2],
... 'class': ['mammal', 'mammal', 'mammal', 'bird'],
... 'animal': ['cat', 'dog', 'bat', 'penguin'],
... 'locomotion': ['walks', 'walks', 'flies', 'walks']}
>>> df = pd.DataFrame(data=d)
>>> df = df.set_index(['class', 'animal', 'locomotion'])
>>> df
num_legs num_wings
class animal locomotion
mammal cat walks 4 0
dog walks 4 0
bat flies 2 2
bird penguin walks 2 2
Get values at specified index
>>> df.xs('mammal')
num_legs num_wings
animal locomotion
cat walks 4 0
dog walks 4 0
bat flies 2 2
Get values at several indexes
>>> df.xs(('mammal', 'dog'))
num_legs num_wings
locomotion
walks 4 0
Get values at specified index and level
>>> df.xs('cat', level=1)
num_legs num_wings
class locomotion
mammal walks 4 0
Get values at several indexes and levels
>>> df.xs(('bird', 'walks'),
... level=[0, 'locomotion'])
num_legs num_wings
animal
penguin 2 2
Get values at specified column and axis
>>> df.xs('num_wings', axis=1)
class animal locomotion
mammal cat walks 0
dog walks 0
bat flies 2
bird penguin walks 2
Name: num_wings, dtype: int64
|
reference/api/pandas.Series.xs.html
|
pandas.tseries.offsets.BusinessMonthBegin.kwds
|
`pandas.tseries.offsets.BusinessMonthBegin.kwds`
Return a dict of extra parameters for the offset.
Examples
```
>>> pd.DateOffset(5).kwds
{}
```
|
BusinessMonthBegin.kwds#
Return a dict of extra parameters for the offset.
Examples
>>> pd.DateOffset(5).kwds
{}
>>> pd.offsets.FY5253Quarter().kwds
{'weekday': 0,
'startingMonth': 1,
'qtr_with_extra_week': 1,
'variation': 'nearest'}
|
reference/api/pandas.tseries.offsets.BusinessMonthBegin.kwds.html
|
pandas.tseries.offsets.BQuarterEnd.is_on_offset
|
`pandas.tseries.offsets.BQuarterEnd.is_on_offset`
Return boolean whether a timestamp intersects with this frequency.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
```
|
BQuarterEnd.is_on_offset()#
Return boolean whether a timestamp intersects with this frequency.
Parameters
dt : datetime.datetime
    Timestamp to check intersections with frequency.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
>>> ts = pd.Timestamp(2022, 8, 6)
>>> ts.day_name()
'Saturday'
>>> freq = pd.offsets.BusinessDay(1)
>>> freq.is_on_offset(ts)
False
|
reference/api/pandas.tseries.offsets.BQuarterEnd.is_on_offset.html
|
pandas.tseries.offsets.BQuarterBegin.rule_code
|
pandas.tseries.offsets.BQuarterBegin.rule_code
|
BQuarterBegin.rule_code#
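The entry above gives no description or example. As a hedged sketch: on other offsets, rule_code returns the frequency rule string used in frequency aliases, and BQuarterBegin is expected to behave the same way; the value hinted in the comment is an assumption based on the default anchor month.
>>> import pandas as pd
>>> code = pd.offsets.BQuarterBegin().rule_code  # a 'BQS-<MONTH>' style string, e.g. 'BQS-MAR' (assumed)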
|
reference/api/pandas.tseries.offsets.BQuarterBegin.rule_code.html
|
pandas.tseries.offsets.Easter.is_month_end
|
`pandas.tseries.offsets.Easter.is_month_end`
Return boolean whether a timestamp occurs on the month end.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
```
|
Easter.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
|
reference/api/pandas.tseries.offsets.Easter.is_month_end.html
|
pandas.tseries.offsets.CustomBusinessMonthBegin.is_year_end
|
`pandas.tseries.offsets.CustomBusinessMonthBegin.is_year_end`
Return boolean whether a timestamp occurs on the year end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
```
|
CustomBusinessMonthBegin.is_year_end()#
Return boolean whether a timestamp occurs on the year end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthBegin.is_year_end.html
|
pandas.tseries.offsets.WeekOfMonth.is_quarter_end
|
`pandas.tseries.offsets.WeekOfMonth.is_quarter_end`
Return boolean whether a timestamp occurs on the quarter end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
```
|
WeekOfMonth.is_quarter_end()#
Return boolean whether a timestamp occurs on the quarter end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
|
reference/api/pandas.tseries.offsets.WeekOfMonth.is_quarter_end.html
|
pandas.tseries.offsets.LastWeekOfMonth.apply_index
|
`pandas.tseries.offsets.LastWeekOfMonth.apply_index`
Vectorized apply of DateOffset to DatetimeIndex.
|
LastWeekOfMonth.apply_index()#
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead.
Parameters
index : DatetimeIndex
Returns
DatetimeIndex
Raises
NotImplementedError
    When the specific offset subclass does not have a vectorized implementation.
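Since apply_index is deprecated in favour of offset + dtindex, a minimal sketch of the documented replacement; the dates and weekday below are illustrative only.
>>> import pandas as pd
>>> dtindex = pd.date_range('2022-01-01', periods=3, freq='D')
>>> offset = pd.offsets.LastWeekOfMonth(weekday=0)
>>> shifted = offset + dtindex  # replacement for the deprecated offset.apply_index(dtindex)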
|
reference/api/pandas.tseries.offsets.LastWeekOfMonth.apply_index.html
|
pandas.io.formats.style.Styler.highlight_max
|
`pandas.io.formats.style.Styler.highlight_max`
Highlight the maximum with a style.
|
Styler.highlight_max(subset=None, color='yellow', axis=0, props=None)[source]#
Highlight the maximum with a style.
Parameters
subset : label, array-like, IndexSlice, optional
    A valid 2d input to DataFrame.loc[<subset>], or, in the case of a 1d input or single key, to DataFrame.loc[:, <subset>] where the columns are prioritised, to limit data to before applying the function.
color : str, default ‘yellow’
    Background color to use for highlighting.
axis : {0 or ‘index’, 1 or ‘columns’, None}, default 0
    Apply to each column (axis=0 or 'index'), to each row (axis=1 or 'columns'), or to the entire DataFrame at once with axis=None.
props : str, default None
    CSS properties to use for highlighting. If props is given, color is not used.
    New in version 1.3.0.
Returns
self : Styler
See also
Styler.highlight_null : Highlight missing values with a style.
Styler.highlight_min : Highlight the minimum with a style.
Styler.highlight_between : Highlight a defined range with a style.
Styler.highlight_quantile : Highlight values defined by a quantile with a style.
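The entry above has no Examples section; a minimal sketch of typical usage (the column names are made up for illustration).
>>> import pandas as pd
>>> df = pd.DataFrame({'a': [1, 5, 3], 'b': [9, 2, 4]})
>>> styled = df.style.highlight_max(color='lightgreen')  # highlight each column's max (axis=0)
>>> styled = df.style.highlight_max(axis=None, props='font-weight: bold;')  # whole-frame max via CSS props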
|
reference/api/pandas.io.formats.style.Styler.highlight_max.html
|
pandas.tseries.offsets.Second.name
|
`pandas.tseries.offsets.Second.name`
Return a string representing the base frequency.
```
>>> pd.offsets.Hour().name
'H'
```
|
Second.name#
Return a string representing the base frequency.
Examples
>>> pd.offsets.Hour().name
'H'
>>> pd.offsets.Hour(5).name
'H'
|
reference/api/pandas.tseries.offsets.Second.name.html
|
pandas.Series.pipe
|
`pandas.Series.pipe`
Apply chainable functions that expect Series or DataFrames.
```
>>> func(g(h(df), arg1=a), arg2=b, arg3=c)
```
|
Series.pipe(func, *args, **kwargs)[source]#
Apply chainable functions that expect Series or DataFrames.
Parameters
func : function
    Function to apply to the Series/DataFrame. args and kwargs are passed into func. Alternatively a (callable, data_keyword) tuple where data_keyword is a string indicating the keyword of callable that expects the Series/DataFrame.
args : iterable, optional
    Positional arguments passed into func.
kwargs : mapping, optional
    A dictionary of keyword arguments passed into func.
Returns
object : the return type of func.
See also
DataFrame.apply : Apply a function along input axis of DataFrame.
DataFrame.applymap : Apply a function elementwise on a whole DataFrame.
Series.map : Apply a mapping correspondence on a Series.
Notes
Use .pipe when chaining together functions that expect
Series, DataFrames or GroupBy objects. Instead of writing
>>> func(g(h(df), arg1=a), arg2=b, arg3=c)
You can write
>>> (df.pipe(h)
... .pipe(g, arg1=a)
... .pipe(func, arg2=b, arg3=c)
... )
If you have a function that takes the data as (say) the second
argument, pass a tuple indicating which keyword expects the
data. For example, suppose func takes its data as arg2:
>>> (df.pipe(h)
... .pipe(g, arg1=a)
... .pipe((func, 'arg2'), arg1=a, arg3=c)
... )
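A small runnable sketch of the pattern described above; add and scale are illustrative stand-ins, not pandas functions, and scale receives the data through its second argument.
>>> import pandas as pd
>>> s = pd.Series([1, 2, 3])
>>> def add(ser, n): return ser + n
>>> def scale(factor, ser): return ser * factor
>>> s.pipe(add, n=10).pipe((scale, 'ser'), factor=2)
0    22
1    24
2    26
dtype: int64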
|
reference/api/pandas.Series.pipe.html
|
pandas.api.types.is_categorical
|
`pandas.api.types.is_categorical`
Check whether an array-like is a Categorical instance.
```
>>> is_categorical([1, 2, 3])
False
```
|
pandas.api.types.is_categorical(arr)[source]#
Check whether an array-like is a Categorical instance.
Deprecated since version 1.1.0: Use is_categorical_dtype instead.
Parameters
arr : array-like
    The array-like to check.
Returns
boolean
    Whether or not the array-like is of a Categorical instance.
Examples
>>> is_categorical([1, 2, 3])
False
Categoricals, Series Categoricals, and CategoricalIndex will return True.
>>> cat = pd.Categorical([1, 2, 3])
>>> is_categorical(cat)
True
>>> is_categorical(pd.Series(cat))
True
>>> is_categorical(pd.CategoricalIndex([1, 2, 3]))
True
|
reference/api/pandas.api.types.is_categorical.html
|
pandas.tseries.offsets.WeekOfMonth.base
|
`pandas.tseries.offsets.WeekOfMonth.base`
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
|
WeekOfMonth.base#
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
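No example is shown above; a brief sketch showing that base keeps week and weekday but resets n to 1.
>>> import pandas as pd
>>> offset = pd.offsets.WeekOfMonth(n=3, week=0, weekday=1)
>>> offset.base.n
1
>>> offset.base.weekday
1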
|
reference/api/pandas.tseries.offsets.WeekOfMonth.base.html
|
pandas.tseries.offsets.Milli.is_month_start
|
`pandas.tseries.offsets.Milli.is_month_start`
Return boolean whether a timestamp occurs on the month start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
```
|
Milli.is_month_start()#
Return boolean whether a timestamp occurs on the month start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
|
reference/api/pandas.tseries.offsets.Milli.is_month_start.html
|
pandas.tseries.offsets.QuarterBegin.startingMonth
|
pandas.tseries.offsets.QuarterBegin.startingMonth
|
QuarterBegin.startingMonth#
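The entry above has no description; a hedged sketch, assuming startingMonth is the anchor month of the quarterly offset (the default hinted in the comment is an assumption).
>>> import pandas as pd
>>> pd.offsets.QuarterBegin(startingMonth=2).startingMonth
2
>>> month = pd.offsets.QuarterBegin().startingMonth  # default anchor month (assumed to be 3)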
|
reference/api/pandas.tseries.offsets.QuarterBegin.startingMonth.html
|
pandas.IntervalIndex.length
|
pandas.IntervalIndex.length
|
property IntervalIndex.length[source]#
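The entry above has no description; a minimal sketch, assuming length returns the per-interval lengths (right endpoint minus left endpoint).
>>> import pandas as pd
>>> idx = pd.IntervalIndex.from_breaks([0, 1, 3, 6])
>>> lengths = idx.length  # Index of per-interval lengths: 1, 2 and 3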
|
reference/api/pandas.IntervalIndex.length.html
|
pandas.io.formats.style.Styler.template_html
|
pandas.io.formats.style.Styler.template_html
|
Styler.template_html = <Template 'html.tpl'>#
|
reference/api/pandas.io.formats.style.Styler.template_html.html
|
pandas.Series.argmin
|
`pandas.Series.argmin`
Return int position of the smallest value in the Series.
```
>>> s = pd.Series({'Corn Flakes': 100.0, 'Almond Delight': 110.0,
... 'Cinnamon Toast Crunch': 120.0, 'Cocoa Puff': 110.0})
>>> s
Corn Flakes 100.0
Almond Delight 110.0
Cinnamon Toast Crunch 120.0
Cocoa Puff 110.0
dtype: float64
```
|
Series.argmin(axis=None, skipna=True, *args, **kwargs)[source]#
Return int position of the smallest value in the Series.
If the minimum is achieved in multiple locations,
the first row position is returned.
Parameters
axis : {None}
    Unused. Parameter needed for compatibility with DataFrame.
skipna : bool, default True
    Exclude NA/null values when showing the result.
*args, **kwargs
    Additional arguments and keywords for compatibility with NumPy.
Returns
int
    Row position of the minimum value.
See also
Series.argmin : Return position of the minimum value.
Series.argmax : Return position of the maximum value.
numpy.ndarray.argmin : Equivalent method for numpy arrays.
Series.idxmax : Return index label of the maximum values.
Series.idxmin : Return index label of the minimum values.
Examples
Consider a dataset containing cereal calories.
>>> s = pd.Series({'Corn Flakes': 100.0, 'Almond Delight': 110.0,
... 'Cinnamon Toast Crunch': 120.0, 'Cocoa Puff': 110.0})
>>> s
Corn Flakes 100.0
Almond Delight 110.0
Cinnamon Toast Crunch 120.0
Cocoa Puff 110.0
dtype: float64
>>> s.argmax()
2
>>> s.argmin()
0
The maximum cereal calories is the third element (position 2) and
the minimum cereal calories is the first element (position 0),
since the Series is zero-indexed.
|
reference/api/pandas.Series.argmin.html
|
pandas.tseries.offsets.DateOffset.freqstr
|
`pandas.tseries.offsets.DateOffset.freqstr`
Return a string representing the frequency.
Examples
```
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
```
|
DateOffset.freqstr#
Return a string representing the frequency.
Examples
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
>>> pd.offsets.BusinessHour(2).freqstr
'2BH'
>>> pd.offsets.Nano().freqstr
'N'
>>> pd.offsets.Nano(-3).freqstr
'-3N'
|
reference/api/pandas.tseries.offsets.DateOffset.freqstr.html
|
pandas.tseries.offsets.CustomBusinessDay.kwds
|
`pandas.tseries.offsets.CustomBusinessDay.kwds`
Return a dict of extra parameters for the offset.
```
>>> pd.DateOffset(5).kwds
{}
```
|
CustomBusinessDay.kwds#
Return a dict of extra parameters for the offset.
Examples
>>> pd.DateOffset(5).kwds
{}
>>> pd.offsets.FY5253Quarter().kwds
{'weekday': 0,
'startingMonth': 1,
'qtr_with_extra_week': 1,
'variation': 'nearest'}
|
reference/api/pandas.tseries.offsets.CustomBusinessDay.kwds.html
|
pandas.core.window.expanding.Expanding.cov
|
`pandas.core.window.expanding.Expanding.cov`
Calculate the expanding sample covariance.
|
Expanding.cov(other=None, pairwise=None, ddof=1, numeric_only=False, **kwargs)[source]#
Calculate the expanding sample covariance.
Parameters
other : Series or DataFrame, optional
    If not supplied then will default to self and produce pairwise output.
pairwise : bool, default None
    If False then only matching columns between self and other will be used and the output will be a DataFrame. If True then all pairwise combinations will be calculated and the output will be a MultiIndexed DataFrame in the case of DataFrame inputs. In the case of missing elements, only complete pairwise observations will be used.
ddof : int, default 1
    Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements.
numeric_only : bool, default False
    Include only float, int, boolean columns.
    New in version 1.5.0.
**kwargs
    For NumPy compatibility and will not have an effect on the result.
    Deprecated since version 1.5.0.
Returns
Series or DataFrame
    Return type is the same as the original object with np.float64 dtype.
See also
pandas.Series.expanding : Calling expanding with Series data.
pandas.DataFrame.expanding : Calling expanding with DataFrames.
pandas.Series.cov : Aggregating cov for Series.
pandas.DataFrame.cov : Aggregating cov for DataFrame.
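The entry above has no Examples section; a minimal sketch of typical usage.
>>> import pandas as pd
>>> s1 = pd.Series([1, 2, 3, 4])
>>> s2 = pd.Series([1, 3, 2, 5])
>>> s1.expanding(min_periods=2).cov(s2)
0         NaN
1    1.000000
2    0.500000
3    1.833333
dtype: float64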
|
reference/api/pandas.core.window.expanding.Expanding.cov.html
|
pandas.Period.minute
|
`pandas.Period.minute`
Get minute of the hour component of the Period.
```
>>> p = pd.Period("2018-03-11 13:03:12.050000")
>>> p.minute
3
```
|
Period.minute#
Get minute of the hour component of the Period.
Returns
int
    The minute as an integer, between 0 and 59.
See also
Period.hour : Get the hour component of the Period.
Period.second : Get the second component of the Period.
Examples
>>> p = pd.Period("2018-03-11 13:03:12.050000")
>>> p.minute
3
|
reference/api/pandas.Period.minute.html
|
pandas.tseries.offsets.BMonthBegin
|
`pandas.tseries.offsets.BMonthBegin`
alias of pandas._libs.tslibs.offsets.BusinessMonthBegin
|
pandas.tseries.offsets.BMonthBegin#
alias of pandas._libs.tslibs.offsets.BusinessMonthBegin
|
reference/api/pandas.tseries.offsets.BMonthBegin.html
|
pandas.tseries.offsets.BQuarterBegin.normalize
|
pandas.tseries.offsets.BQuarterBegin.normalize
|
BQuarterBegin.normalize#
|
reference/api/pandas.tseries.offsets.BQuarterBegin.normalize.html
|
pandas.tseries.offsets.BQuarterEnd.normalize
|
pandas.tseries.offsets.BQuarterEnd.normalize
|
BQuarterEnd.normalize#
|
reference/api/pandas.tseries.offsets.BQuarterEnd.normalize.html
|
pandas.tseries.offsets.QuarterBegin.__call__
|
`pandas.tseries.offsets.QuarterBegin.__call__`
Call self as a function.
|
QuarterBegin.__call__(*args, **kwargs)#
Call self as a function.
|
reference/api/pandas.tseries.offsets.QuarterBegin.__call__.html
|
pandas.core.resample.Resampler.ohlc
|
`pandas.core.resample.Resampler.ohlc`
Compute open, high, low and close values of a group, excluding missing values.
For multiple groupings, the result index will be a MultiIndex.
|
Resampler.ohlc(*args, **kwargs)[source]#
Compute open, high, low and close values of a group, excluding missing values.
For multiple groupings, the result index will be a MultiIndex.
Returns
DataFrame
    Open, high, low and close values within each group.
See also
Series.groupby : Apply a function groupby to a Series.
DataFrame.groupby : Apply a function groupby to each row or column of a DataFrame.
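No example is given above; a minimal sketch on a tiny hourly series (values are illustrative).
>>> import pandas as pd
>>> ts = pd.Series([1, 3, 2, 4, 5, 3],
...                index=pd.date_range('2022-01-01', periods=6, freq='H'))
>>> ts.resample('3H').ohlc()
                     open  high  low  close
2022-01-01 00:00:00     1     3    1      2
2022-01-01 03:00:00     4     5    3      3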
|
reference/api/pandas.core.resample.Resampler.ohlc.html
|
pandas.factorize
|
`pandas.factorize`
Encode the object as an enumerated type or categorical variable.
This method is useful for obtaining a numeric representation of an
array when all that matters is identifying distinct values. factorize
is available as both a top-level function pandas.factorize(),
and as a method Series.factorize() and Index.factorize().
```
>>> codes, uniques = pd.factorize(['b', 'b', 'a', 'c', 'b'])
>>> codes
array([0, 0, 1, 2, 0]...)
>>> uniques
array(['b', 'a', 'c'], dtype=object)
```
|
pandas.factorize(values, sort=False, na_sentinel=_NoDefault.no_default, use_na_sentinel=_NoDefault.no_default, size_hint=None)[source]#
Encode the object as an enumerated type or categorical variable.
This method is useful for obtaining a numeric representation of an
array when all that matters is identifying distinct values. factorize
is available as both a top-level function pandas.factorize(),
and as a method Series.factorize() and Index.factorize().
Parameters
values : sequence
    A 1-D sequence. Sequences that aren’t pandas objects are coerced to ndarrays before factorization.
sort : bool, default False
    Sort uniques and shuffle codes to maintain the relationship.
na_sentinel : int or None, default -1
    Value to mark “not found”. If None, will not drop the NaN from the uniques of the values.
    Deprecated since version 1.5.0: The na_sentinel argument is deprecated and will be removed in a future version of pandas. Specify use_na_sentinel as either True or False.
    Changed in version 1.1.2.
use_na_sentinel : bool, default True
    If True, the sentinel -1 will be used for NaN values. If False, NaN values will be encoded as non-negative integers and will not drop the NaN from the uniques of the values.
    New in version 1.5.0.
size_hint : int, optional
    Hint to the hashtable sizer.
Returns
codes : ndarray
    An integer ndarray that’s an indexer into uniques. uniques.take(codes) will have the same values as values.
uniques : ndarray, Index, or Categorical
    The unique valid values. When values is Categorical, uniques is a Categorical. When values is some other pandas object, an Index is returned. Otherwise, a 1-D ndarray is returned.
Note
Even if there’s a missing value in values, uniques will not contain an entry for it.
See also
cut : Discretize continuous-valued array.
unique : Find the unique values in an array.
Notes
Reference the user guide for more examples.
Examples
These examples all show factorize as a top-level method like
pd.factorize(values). The results are identical for methods like
Series.factorize().
>>> codes, uniques = pd.factorize(['b', 'b', 'a', 'c', 'b'])
>>> codes
array([0, 0, 1, 2, 0]...)
>>> uniques
array(['b', 'a', 'c'], dtype=object)
With sort=True, the uniques will be sorted, and codes will be
shuffled so that the relationship is maintained.
>>> codes, uniques = pd.factorize(['b', 'b', 'a', 'c', 'b'], sort=True)
>>> codes
array([1, 1, 0, 2, 1]...)
>>> uniques
array(['a', 'b', 'c'], dtype=object)
When use_na_sentinel=True (the default), missing values are indicated in
the codes with the sentinel value -1 and missing values are not
included in uniques.
>>> codes, uniques = pd.factorize(['b', None, 'a', 'c', 'b'])
>>> codes
array([ 0, -1, 1, 2, 0]...)
>>> uniques
array(['b', 'a', 'c'], dtype=object)
Thus far, we’ve only factorized lists (which are internally coerced to
NumPy arrays). When factorizing pandas objects, the type of uniques
will differ. For Categoricals, a Categorical is returned.
>>> cat = pd.Categorical(['a', 'a', 'c'], categories=['a', 'b', 'c'])
>>> codes, uniques = pd.factorize(cat)
>>> codes
array([0, 0, 1]...)
>>> uniques
['a', 'c']
Categories (3, object): ['a', 'b', 'c']
Notice that 'b' is in uniques.categories, despite not being
present in cat.values.
For all other pandas objects, an Index of the appropriate type is
returned.
>>> cat = pd.Series(['a', 'a', 'c'])
>>> codes, uniques = pd.factorize(cat)
>>> codes
array([0, 0, 1]...)
>>> uniques
Index(['a', 'c'], dtype='object')
If NaN is in the values, and we want to include NaN in the uniques of the
values, it can be achieved by setting use_na_sentinel=False.
>>> values = np.array([1, 2, 1, np.nan])
>>> codes, uniques = pd.factorize(values) # default: use_na_sentinel=True
>>> codes
array([ 0, 1, 0, -1])
>>> uniques
array([1., 2.])
>>> codes, uniques = pd.factorize(values, use_na_sentinel=False)
>>> codes
array([0, 1, 0, 2])
>>> uniques
array([ 1., 2., nan])
|
reference/api/pandas.factorize.html
|
pandas.tseries.offsets.YearBegin.copy
|
`pandas.tseries.offsets.YearBegin.copy`
Return a copy of the frequency.
```
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
```
|
YearBegin.copy()#
Return a copy of the frequency.
Examples
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
|
reference/api/pandas.tseries.offsets.YearBegin.copy.html
|
pandas.tseries.offsets.FY5253.is_quarter_start
|
`pandas.tseries.offsets.FY5253.is_quarter_start`
Return boolean whether a timestamp occurs on the quarter start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
```
|
FY5253.is_quarter_start()#
Return boolean whether a timestamp occurs on the quarter start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
|
reference/api/pandas.tseries.offsets.FY5253.is_quarter_start.html
|
Options and settings
|
Options and settings
|
API for configuring global behavior. See the User Guide for more.
Working with options#
describe_option(pat[, _print_desc])
Prints the description for one or more registered options.
reset_option(pat)
Reset one or more options to their default value.
get_option(pat)
Retrieves the value of the specified option.
set_option(pat, value)
Sets the value of the specified option.
option_context(*args)
Context manager to temporarily set options in the with statement context.
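A brief sketch of how these functions fit together, using the standard display.max_rows option.
>>> import pandas as pd
>>> pd.get_option('display.max_rows')
60
>>> pd.set_option('display.max_rows', 10)
>>> with pd.option_context('display.max_rows', 5):
...     pass  # the option is 5 only inside this block
>>> pd.reset_option('display.max_rows')  # back to the default of 60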
|
reference/options.html
|
pandas.tseries.offsets.Week.apply
|
pandas.tseries.offsets.Week.apply
|
Week.apply()#
|
reference/api/pandas.tseries.offsets.Week.apply.html
|
pandas.tseries.offsets.YearBegin.base
|
`pandas.tseries.offsets.YearBegin.base`
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
|
YearBegin.base#
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
|
reference/api/pandas.tseries.offsets.YearBegin.base.html
|
pandas.tseries.offsets.BusinessMonthEnd.apply_index
|
`pandas.tseries.offsets.BusinessMonthEnd.apply_index`
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead.
|
BusinessMonthEnd.apply_index()#
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead.
Parameters
index : DatetimeIndex
Returns
DatetimeIndex
Raises
NotImplementedError
    When the specific offset subclass does not have a vectorized implementation.
|
reference/api/pandas.tseries.offsets.BusinessMonthEnd.apply_index.html
|
pandas.DataFrame.add_suffix
|
`pandas.DataFrame.add_suffix`
Suffix labels with string suffix.
```
>>> s = pd.Series([1, 2, 3, 4])
>>> s
0 1
1 2
2 3
3 4
dtype: int64
```
|
DataFrame.add_suffix(suffix)[source]#
Suffix labels with string suffix.
For Series, the row labels are suffixed.
For DataFrame, the column labels are suffixed.
Parameters
suffix : str
    The string to add after each label.
Returns
Series or DataFrame
    New Series or DataFrame with updated labels.
See also
Series.add_prefix : Prefix row labels with string prefix.
DataFrame.add_prefix : Prefix column labels with string prefix.
Examples
>>> s = pd.Series([1, 2, 3, 4])
>>> s
0 1
1 2
2 3
3 4
dtype: int64
>>> s.add_suffix('_item')
0_item 1
1_item 2
2_item 3
3_item 4
dtype: int64
>>> df = pd.DataFrame({'A': [1, 2, 3, 4], 'B': [3, 4, 5, 6]})
>>> df
A B
0 1 3
1 2 4
2 3 5
3 4 6
>>> df.add_suffix('_col')
A_col B_col
0 1 3
1 2 4
2 3 5
3 4 6
|
reference/api/pandas.DataFrame.add_suffix.html
|
pandas.Series.notnull
|
`pandas.Series.notnull`
Series.notnull is an alias for Series.notna.
Detect existing (non-missing) values.
```
>>> df = pd.DataFrame(dict(age=[5, 6, np.NaN],
... born=[pd.NaT, pd.Timestamp('1939-05-27'),
... pd.Timestamp('1940-04-25')],
... name=['Alfred', 'Batman', ''],
... toy=[None, 'Batmobile', 'Joker']))
>>> df
age born name toy
0 5.0 NaT Alfred None
1 6.0 1939-05-27 Batman Batmobile
2 NaN 1940-04-25 Joker
```
|
Series.notnull()[source]#
Series.notnull is an alias for Series.notna.
Detect existing (non-missing) values.
Return a boolean same-sized object indicating if the values are not NA.
Non-missing values get mapped to True. Characters such as empty
strings '' or numpy.inf are not considered NA values
(unless you set pandas.options.mode.use_inf_as_na = True).
NA values, such as None or numpy.NaN, get mapped to False
values.
Returns
Series
    Mask of bool values for each element in Series that indicates whether an element is not an NA value.
See also
Series.notnull : Alias of notna.
Series.isna : Boolean inverse of notna.
Series.dropna : Omit axes labels with missing values.
notna : Top-level notna.
Examples
Show which entries in a DataFrame are not NA.
>>> df = pd.DataFrame(dict(age=[5, 6, np.NaN],
... born=[pd.NaT, pd.Timestamp('1939-05-27'),
... pd.Timestamp('1940-04-25')],
... name=['Alfred', 'Batman', ''],
... toy=[None, 'Batmobile', 'Joker']))
>>> df
age born name toy
0 5.0 NaT Alfred None
1 6.0 1939-05-27 Batman Batmobile
2 NaN 1940-04-25 Joker
>>> df.notna()
age born name toy
0 True False True False
1 True True True True
2 False True True True
Show which entries in a Series are not NA.
>>> ser = pd.Series([5, 6, np.NaN])
>>> ser
0 5.0
1 6.0
2 NaN
dtype: float64
>>> ser.notna()
0 True
1 True
2 False
dtype: bool
|
reference/api/pandas.Series.notnull.html
|
pandas.Categorical.from_codes
|
`pandas.Categorical.from_codes`
Make a Categorical type from codes and categories or dtype.
This constructor is useful if you already have codes and
categories/dtype and so do not need the (computation intensive)
factorization step, which is usually done on the constructor.
```
>>> dtype = pd.CategoricalDtype(['a', 'b'], ordered=True)
>>> pd.Categorical.from_codes(codes=[0, 1, 0, 1], dtype=dtype)
['a', 'b', 'a', 'b']
Categories (2, object): ['a' < 'b']
```
|
classmethod Categorical.from_codes(codes, categories=None, ordered=None, dtype=None)[source]#
Make a Categorical type from codes and categories or dtype.
This constructor is useful if you already have codes and
categories/dtype and so do not need the (computation intensive)
factorization step, which is usually done on the constructor.
If your data does not follow this convention, please use the normal
constructor.
Parameters
codes : array-like of int
    An integer array, where each integer points to a category in categories or dtype.categories, or else is -1 for NaN.
categories : index-like, optional
    The categories for the categorical. Items need to be unique. If the categories are not given here, then they must be provided in dtype.
ordered : bool, optional
    Whether or not this categorical is treated as an ordered categorical. If not given here or in dtype, the resulting categorical will be unordered.
dtype : CategoricalDtype or “category”, optional
    If CategoricalDtype, cannot be used together with categories or ordered.
Returns
Categorical
Examples
>>> dtype = pd.CategoricalDtype(['a', 'b'], ordered=True)
>>> pd.Categorical.from_codes(codes=[0, 1, 0, 1], dtype=dtype)
['a', 'b', 'a', 'b']
Categories (2, object): ['a' < 'b']
|
reference/api/pandas.Categorical.from_codes.html
|
pandas.testing.assert_series_equal
|
`pandas.testing.assert_series_equal`
Check that left and right Series are equal.
```
>>> from pandas import testing as tm
>>> a = pd.Series([1, 2, 3, 4])
>>> b = pd.Series([1, 2, 3, 4])
>>> tm.assert_series_equal(a, b)
```
|
pandas.testing.assert_series_equal(left, right, check_dtype=True, check_index_type='equiv', check_series_type=True, check_less_precise=_NoDefault.no_default, check_names=True, check_exact=False, check_datetimelike_compat=False, check_categorical=True, check_category_order=True, check_freq=True, check_flags=True, rtol=1e-05, atol=1e-08, obj='Series', *, check_index=True, check_like=False)[source]#
Check that left and right Series are equal.
Parameters
left : Series
right : Series
check_dtype : bool, default True
    Whether to check the Series dtype is identical.
check_index_type : bool or {‘equiv’}, default ‘equiv’
    Whether to check the Index class, dtype and inferred_type are identical.
check_series_type : bool, default True
    Whether to check the Series class is identical.
check_less_precise : bool or int, default False
    Specify comparison precision. Only used when check_exact is False. 5 digits (False) or 3 digits (True) after decimal points are compared. If int, then specify the digits to compare.
    When comparing two numbers, if the first number has magnitude less than 1e-5, we compare the two numbers directly and check whether they are equivalent within the specified precision. Otherwise, we compare the ratio of the second number to the first number and check whether it is equivalent to 1 within the specified precision.
    Deprecated since version 1.1.0: Use rtol and atol instead to define relative/absolute tolerance, respectively. Similar to math.isclose().
check_names : bool, default True
    Whether to check the Series and Index names attribute.
check_exact : bool, default False
    Whether to compare number exactly.
check_datetimelike_compat : bool, default False
    Compare datetime-like which is comparable ignoring dtype.
check_categorical : bool, default True
    Whether to compare internal Categorical exactly.
check_category_order : bool, default True
    Whether to compare category order of internal Categoricals.
    New in version 1.0.2.
check_freq : bool, default True
    Whether to check the freq attribute on a DatetimeIndex or TimedeltaIndex.
    New in version 1.1.0.
check_flags : bool, default True
    Whether to check the flags attribute.
    New in version 1.2.0.
rtol : float, default 1e-5
    Relative tolerance. Only used when check_exact is False.
    New in version 1.1.0.
atol : float, default 1e-8
    Absolute tolerance. Only used when check_exact is False.
    New in version 1.1.0.
obj : str, default ‘Series’
    Specify object name being compared, internally used to show appropriate assertion message.
check_index : bool, default True
    Whether to check index equivalence. If False, then compare only values.
    New in version 1.3.0.
check_like : bool, default False
    If True, ignore the order of the index. Must be False if check_index is False. Note: same labels must be with the same data.
    New in version 1.5.0.
Examples
>>> from pandas import testing as tm
>>> a = pd.Series([1, 2, 3, 4])
>>> b = pd.Series([1, 2, 3, 4])
>>> tm.assert_series_equal(a, b)
|
reference/api/pandas.testing.assert_series_equal.html
|
pandas.tseries.offsets.YearEnd.is_quarter_end
|
`pandas.tseries.offsets.YearEnd.is_quarter_end`
Return boolean whether a timestamp occurs on the quarter end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
```
|
YearEnd.is_quarter_end()#
Return boolean whether a timestamp occurs on the quarter end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
|
reference/api/pandas.tseries.offsets.YearEnd.is_quarter_end.html
|
pandas.Series.size
|
`pandas.Series.size`
Return the number of elements in the underlying data.
|
property Series.size[source]#
Return the number of elements in the underlying data.
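No example is shown above; a one-line sketch.
>>> import pandas as pd
>>> pd.Series([1, 2, 3]).size
3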
|
reference/api/pandas.Series.size.html
|
pandas.Timedelta
|
`pandas.Timedelta`
Represents a duration, the difference between two dates or times.
```
>>> td = pd.Timedelta(1, "d")
>>> td
Timedelta('1 days 00:00:00')
```
|
class pandas.Timedelta(value=<object object>, unit=None, **kwargs)#
Represents a duration, the difference between two dates or times.
Timedelta is the pandas equivalent of python’s datetime.timedelta
and is interchangeable with it in most cases.
Parameters
value : Timedelta, timedelta, np.timedelta64, str, or int
unit : str, default ‘ns’
    Denote the unit of the input, if input is an integer.
    Possible values:
    ‘W’, ‘D’, ‘T’, ‘S’, ‘L’, ‘U’, or ‘N’
    ‘days’ or ‘day’
    ‘hours’, ‘hour’, ‘hr’, or ‘h’
    ‘minutes’, ‘minute’, ‘min’, or ‘m’
    ‘seconds’, ‘second’, or ‘sec’
    ‘milliseconds’, ‘millisecond’, ‘millis’, or ‘milli’
    ‘microseconds’, ‘microsecond’, ‘micros’, or ‘micro’
    ‘nanoseconds’, ‘nanosecond’, ‘nanos’, ‘nano’, or ‘ns’.
**kwargs
    Available kwargs: {days, seconds, microseconds, milliseconds, minutes, hours, weeks}. Values for construction in compat with datetime.timedelta. Numpy ints and floats will be coerced to python ints and floats.
Notes
The constructor may take in either both value and unit or kwargs as above. Either one of them must be used during initialization.
The .value attribute is always in ns.
If the precision is higher than nanoseconds, the precision of the duration is truncated to nanoseconds.
Examples
Here we initialize Timedelta object with both value and unit
>>> td = pd.Timedelta(1, "d")
>>> td
Timedelta('1 days 00:00:00')
Here we initialize the Timedelta object with kwargs
>>> td2 = pd.Timedelta(days=1)
>>> td2
Timedelta('1 days 00:00:00')
We see that either way we get the same result.
Attributes
asm8
Return a numpy timedelta64 array scalar view.
components
Return a components namedtuple-like.
days
delta
(DEPRECATED) Return the timedelta in nanoseconds (ns), for internal compatibility.
freq
(DEPRECATED) Freq property.
is_populated
(DEPRECATED) Is_populated property.
microseconds
nanoseconds
Return the number of nanoseconds (n), where 0 <= n < 1 microsecond.
resolution_string
Return a string representing the lowest timedelta resolution.
seconds
value
Methods
ceil(freq)
Return a new Timedelta ceiled to this resolution.
floor(freq)
Return a new Timedelta floored to this resolution.
isoformat
Format the Timedelta as ISO 8601 Duration.
round(freq)
Round the Timedelta to the specified resolution.
to_numpy
Convert the Timedelta to a NumPy timedelta64.
to_pytimedelta
Convert a pandas Timedelta object into a python datetime.timedelta object.
to_timedelta64
Return a numpy.timedelta64 object with 'ns' precision.
total_seconds
Total seconds in the duration.
view
Array view compatibility.
|
reference/api/pandas.Timedelta.html
|
pandas.Index.get_indexer_for
|
`pandas.Index.get_indexer_for`
Guaranteed return of an indexer even when non-unique.
```
>>> idx = pd.Index([np.nan, 'var1', np.nan])
>>> idx.get_indexer_for([np.nan])
array([0, 2])
```
|
final Index.get_indexer_for(target)[source]#
Guaranteed return of an indexer even when non-unique.
This dispatches to get_indexer or get_indexer_non_unique
as appropriate.
Returns
np.ndarray[np.intp]
    List of indices.
Examples
>>> idx = pd.Index([np.nan, 'var1', np.nan])
>>> idx.get_indexer_for([np.nan])
array([0, 2])
|
reference/api/pandas.Index.get_indexer_for.html
|
pandas.Series.nbytes
|
`pandas.Series.nbytes`
Return the number of bytes in the underlying data.
|
property Series.nbytes[source]#
Return the number of bytes in the underlying data.
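No example is shown above; a minimal sketch (the byte count depends on the dtype; here three int64 values of 8 bytes each).
>>> import pandas as pd
>>> pd.Series([1, 2, 3]).nbytes
24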
|
reference/api/pandas.Series.nbytes.html
|
pandas.IntervalIndex.get_indexer
|
`pandas.IntervalIndex.get_indexer`
Compute indexer and mask for new index given the current index.
```
>>> index = pd.Index(['c', 'a', 'b'])
>>> index.get_indexer(['a', 'b', 'x'])
array([ 1, 2, -1])
```
|
IntervalIndex.get_indexer(target, method=None, limit=None, tolerance=None)[source]#
Compute indexer and mask for new index given the current index.
The indexer should be then used as an input to ndarray.take to align the
current data to the new index.
Parameters
target : Index
method : {None, ‘pad’/’ffill’, ‘backfill’/’bfill’, ‘nearest’}, optional
    default: exact matches only.
    pad / ffill: find the PREVIOUS index value if no exact match.
    backfill / bfill: use NEXT index value if no exact match.
    nearest: use the NEAREST index value if no exact match. Tied distances are broken by preferring the larger index value.
limit : int, optional
    Maximum number of consecutive labels in target to match for inexact matches.
tolerance : optional
    Maximum distance between original and new labels for inexact matches. The values of the index at the matching locations must satisfy the equation abs(index[indexer] - target) <= tolerance.
    Tolerance may be a scalar value, which applies the same tolerance to all values, or list-like, which applies variable tolerance per element. List-like includes list, tuple, array, Series, and must be the same size as the index and its dtype must exactly match the index’s type.
Returns
indexer : np.ndarray[np.intp]
    Integers from 0 to n - 1 indicating that the index at these positions matches the corresponding target values. Missing values in the target are marked by -1.
Notes
Returns -1 for unmatched values, for further explanation see the
example below.
Examples
>>> index = pd.Index(['c', 'a', 'b'])
>>> index.get_indexer(['a', 'b', 'x'])
array([ 1, 2, -1])
Notice that the return value is an array of locations in index
and x is marked by -1, as it is not in index.
|
reference/api/pandas.IntervalIndex.get_indexer.html
|
pandas.tseries.offsets.SemiMonthBegin.isAnchored
|
pandas.tseries.offsets.SemiMonthBegin.isAnchored
|
SemiMonthBegin.isAnchored()#
|
reference/api/pandas.tseries.offsets.SemiMonthBegin.isAnchored.html
|
pandas.Series.dt.microsecond
|
`pandas.Series.dt.microsecond`
The microseconds of the datetime.
```
>>> datetime_series = pd.Series(
... pd.date_range("2000-01-01", periods=3, freq="us")
... )
>>> datetime_series
0 2000-01-01 00:00:00.000000
1 2000-01-01 00:00:00.000001
2 2000-01-01 00:00:00.000002
dtype: datetime64[ns]
>>> datetime_series.dt.microsecond
0 0
1 1
2 2
dtype: int64
```
|
Series.dt.microsecond[source]#
The microseconds of the datetime.
Examples
>>> datetime_series = pd.Series(
... pd.date_range("2000-01-01", periods=3, freq="us")
... )
>>> datetime_series
0 2000-01-01 00:00:00.000000
1 2000-01-01 00:00:00.000001
2 2000-01-01 00:00:00.000002
dtype: datetime64[ns]
>>> datetime_series.dt.microsecond
0 0
1 1
2 2
dtype: int64
|
reference/api/pandas.Series.dt.microsecond.html
|
pandas.tseries.offsets.WeekOfMonth.is_month_end
|
`pandas.tseries.offsets.WeekOfMonth.is_month_end`
Return boolean whether a timestamp occurs on the month end.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
```
|
WeekOfMonth.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
|
reference/api/pandas.tseries.offsets.WeekOfMonth.is_month_end.html
|
pandas.Index.get_level_values
|
`pandas.Index.get_level_values`
Return an Index of values for requested level.
This is primarily useful to get an individual level of values from a
MultiIndex, but is provided on Index as well for compatibility.
```
>>> idx = pd.Index(list('abc'))
>>> idx
Index(['a', 'b', 'c'], dtype='object')
```
|
Index.get_level_values(level)[source]#
Return an Index of values for requested level.
This is primarily useful to get an individual level of values from a
MultiIndex, but is provided on Index as well for compatibility.
Parameters
level : int or str
    It is either the integer position or the name of the level.
Returns
Index
    Calling object, as there is only one level in the Index.
See also
MultiIndex.get_level_values : Get values for a level of a MultiIndex.
Notes
For Index, level should be 0, since there are no multiple levels.
Examples
>>> idx = pd.Index(list('abc'))
>>> idx
Index(['a', 'b', 'c'], dtype='object')
Get level values by supplying level as integer:
>>> idx.get_level_values(0)
Index(['a', 'b', 'c'], dtype='object')
|
reference/api/pandas.Index.get_level_values.html
|
pandas.tseries.offsets.MonthEnd.is_year_end
|
`pandas.tseries.offsets.MonthEnd.is_year_end`
Return boolean whether a timestamp occurs on the year end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
```
|
MonthEnd.is_year_end()#
Return boolean whether a timestamp occurs on the year end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
|
reference/api/pandas.tseries.offsets.MonthEnd.is_year_end.html
|
pandas.Series.dt.month
|
`pandas.Series.dt.month`
The month as January=1, December=12.
```
>>> datetime_series = pd.Series(
... pd.date_range("2000-01-01", periods=3, freq="M")
... )
>>> datetime_series
0 2000-01-31
1 2000-02-29
2 2000-03-31
dtype: datetime64[ns]
>>> datetime_series.dt.month
0 1
1 2
2 3
dtype: int64
```
|
Series.dt.month[source]#
The month as January=1, December=12.
Examples
>>> datetime_series = pd.Series(
... pd.date_range("2000-01-01", periods=3, freq="M")
... )
>>> datetime_series
0 2000-01-31
1 2000-02-29
2 2000-03-31
dtype: datetime64[ns]
>>> datetime_series.dt.month
0 1
1 2
2 3
dtype: int64
|
reference/api/pandas.Series.dt.month.html
|
pandas.core.window.rolling.Rolling.rank
|
`pandas.core.window.rolling.Rolling.rank`
Calculate the rolling rank.
New in version 1.4.0.
```
>>> s = pd.Series([1, 4, 2, 3, 5, 3])
>>> s.rolling(3).rank()
0 NaN
1 NaN
2 2.0
3 2.0
4 3.0
5 1.5
dtype: float64
```
|
Rolling.rank(method='average', ascending=True, pct=False, numeric_only=False, **kwargs)[source]#
Calculate the rolling rank.
New in version 1.4.0.
Parameters
method : {‘average’, ‘min’, ‘max’}, default ‘average’
    How to rank the group of records that have the same value (i.e. ties):
    average: average rank of the group
    min: lowest rank in the group
    max: highest rank in the group
ascending : bool, default True
    Whether or not the elements should be ranked in ascending order.
pct : bool, default False
    Whether or not to display the returned rankings in percentile form.
numeric_only : bool, default False
    Include only float, int, boolean columns.
    New in version 1.5.0.
**kwargs
    For NumPy compatibility and will not have an effect on the result.
    Deprecated since version 1.5.0.
Returns
Series or DataFrame
    Return type is the same as the original object with np.float64 dtype.
See also
pandas.Series.rolling : Calling rolling with Series data.
pandas.DataFrame.rolling : Calling rolling with DataFrames.
pandas.Series.rank : Aggregating rank for Series.
pandas.DataFrame.rank : Aggregating rank for DataFrame.
Examples
>>> s = pd.Series([1, 4, 2, 3, 5, 3])
>>> s.rolling(3).rank()
0 NaN
1 NaN
2 2.0
3 2.0
4 3.0
5 1.5
dtype: float64
>>> s.rolling(3).rank(method="max")
0 NaN
1 NaN
2 2.0
3 2.0
4 3.0
5 2.0
dtype: float64
>>> s.rolling(3).rank(method="min")
0 NaN
1 NaN
2 2.0
3 2.0
4 3.0
5 1.0
dtype: float64
|
reference/api/pandas.core.window.rolling.Rolling.rank.html
|
pandas.tseries.offsets.CustomBusinessMonthEnd.is_on_offset
|
`pandas.tseries.offsets.CustomBusinessMonthEnd.is_on_offset`
Return boolean whether a timestamp intersects with this frequency.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
```
|
CustomBusinessMonthEnd.is_on_offset()#
Return boolean whether a timestamp intersects with this frequency.
Parameters
dt : datetime.datetime
    Timestamp to check intersections with frequency.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
>>> ts = pd.Timestamp(2022, 8, 6)
>>> ts.day_name()
'Saturday'
>>> freq = pd.offsets.BusinessDay(1)
>>> freq.is_on_offset(ts)
False
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthEnd.is_on_offset.html
|
pandas.tseries.offsets.BusinessMonthBegin
|
`pandas.tseries.offsets.BusinessMonthBegin`
DateOffset of one month at the first business day.
```
>>> from pandas.tseries.offsets import BMonthBegin
>>> ts=pd.Timestamp('2020-05-24 05:01:15')
>>> ts + BMonthBegin()
Timestamp('2020-06-01 05:01:15')
>>> ts + BMonthBegin(2)
Timestamp('2020-07-01 05:01:15')
>>> ts + BMonthBegin(-3)
Timestamp('2020-03-02 05:01:15')
```
|
class pandas.tseries.offsets.BusinessMonthBegin#
DateOffset of one month at the first business day.
Examples
>>> from pandas.tseries.offsets import BMonthBegin
>>> ts=pd.Timestamp('2020-05-24 05:01:15')
>>> ts + BMonthBegin()
Timestamp('2020-06-01 05:01:15')
>>> ts + BMonthBegin(2)
Timestamp('2020-07-01 05:01:15')
>>> ts + BMonthBegin(-3)
Timestamp('2020-03-02 05:01:15')
Attributes
base
Returns a copy of the calling offset object with n=1 and all other attributes equal.
freqstr
Return a string representing the frequency.
kwds
Return a dict of extra parameters for the offset.
name
Return a string representing the base frequency.
n
nanos
normalize
rule_code
Methods
__call__(*args, **kwargs)
Call self as a function.
apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
copy
Return a copy of the frequency.
is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
is_month_end
Return boolean whether a timestamp occurs on the month end.
is_month_start
Return boolean whether a timestamp occurs on the month start.
is_on_offset
Return boolean whether a timestamp intersects with this frequency.
is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
is_year_end
Return boolean whether a timestamp occurs on the year end.
is_year_start
Return boolean whether a timestamp occurs on the year start.
rollback
Roll provided date backward to next offset only if not on offset.
rollforward
Roll provided date forward to next offset only if not on offset.
apply
isAnchored
onOffset
|
reference/api/pandas.tseries.offsets.BusinessMonthBegin.html
|
pandas.Series.plot
|
`pandas.Series.plot`
Make plots of Series or DataFrame.
|
Series.plot(*args, **kwargs)[source]#
Make plots of Series or DataFrame.
Uses the backend specified by the
option plotting.backend. By default, matplotlib is used.
Parameters
dataSeries or DataFrameThe object for which the method is called.
xlabel or position, default NoneOnly used if data is a DataFrame.
ylabel, position or list of label, positions, default NoneAllows plotting of one column versus another. Only used if data is a
DataFrame.
kindstrThe kind of plot to produce:
‘line’ : line plot (default)
‘bar’ : vertical bar plot
‘barh’ : horizontal bar plot
‘hist’ : histogram
‘box’ : boxplot
‘kde’ : Kernel Density Estimation plot
‘density’ : same as ‘kde’
‘area’ : area plot
‘pie’ : pie plot
‘scatter’ : scatter plot (DataFrame only)
‘hexbin’ : hexbin plot (DataFrame only)
axmatplotlib axes object, default NoneAn axes of the current figure.
subplotsbool or sequence of iterables, default FalseWhether to group columns into subplots:
False : No subplots will be used
True : Make separate subplots for each column.
sequence of iterables of column labels: Create a subplot for each
group of columns. For example [(‘a’, ‘c’), (‘b’, ‘d’)] will
create 2 subplots: one with columns ‘a’ and ‘c’, and one
with columns ‘b’ and ‘d’. Remaining columns that aren’t specified
will be plotted in additional subplots (one per column).
New in version 1.5.0.
sharexbool, default True if ax is None else FalseIn case subplots=True, share the x axis and set some x axis labels
to invisible; defaults to True if ax is None, otherwise False if
an ax is passed in. Be aware that passing in both an ax and
sharex=True will alter all x axis labels for all axes in the figure.
shareybool, default FalseIn case subplots=True, share y axis and set some y axis labels to invisible.
layouttuple, optional(rows, columns) for the layout of subplots.
figsizea tuple (width, height) in inchesSize of a figure object.
use_indexbool, default TrueUse index as ticks for x axis.
titlestr or listTitle to use for the plot. If a string is passed, print the string
at the top of the figure. If a list is passed and subplots is
True, print each item in the list above the corresponding subplot.
gridbool, default None (matlab style default)Axis grid lines.
legendbool or {‘reverse’}Place legend on axis subplots.
stylelist or dictThe matplotlib line style per column.
logxbool or ‘sym’, default FalseUse log scaling or symlog scaling on x axis.
Changed in version 0.25.0.
logybool or ‘sym’, default FalseUse log scaling or symlog scaling on y axis.
Changed in version 0.25.0.
loglogbool or ‘sym’, default FalseUse log scaling or symlog scaling on both x and y axes.
Changed in version 0.25.0.
xtickssequenceValues to use for the xticks.
ytickssequenceValues to use for the yticks.
xlim2-tuple/listSet the x limits of the current axes.
ylim2-tuple/listSet the y limits of the current axes.
xlabellabel, optionalName to use for the xlabel on x-axis. Default uses index name as xlabel, or the
x-column name for planar plots.
New in version 1.1.0.
Changed in version 1.2.0: Now applicable to planar plots (scatter, hexbin).
ylabellabel, optionalName to use for the ylabel on y-axis. Default will show no ylabel, or the
y-column name for planar plots.
New in version 1.1.0.
Changed in version 1.2.0: Now applicable to planar plots (scatter, hexbin).
rotint, default NoneRotation for ticks (xticks for vertical, yticks for horizontal
plots).
fontsizeint, default NoneFont size for xticks and yticks.
colormapstr or matplotlib colormap object, default NoneColormap to select colors from. If string, load colormap with that
name from matplotlib.
colorbarbool, optionalIf True, plot colorbar (only relevant for ‘scatter’ and ‘hexbin’
plots).
positionfloatSpecify relative alignments for bar plot layout.
From 0 (left/bottom-end) to 1 (right/top-end). Default is 0.5
(center).
tablebool, Series or DataFrame, default FalseIf True, draw a table using the data in the DataFrame and the data
will be transposed to meet matplotlib’s default layout.
If a Series or DataFrame is passed, use passed data to draw a
table.
yerrDataFrame, Series, array-like, dict and strSee Plotting with Error Bars for
detail.
xerrDataFrame, Series, array-like, dict and strEquivalent to yerr.
stackedbool, default False in line and bar plots, and True in area plotIf True, create stacked plot.
sort_columnsbool, default FalseSort column names to determine plot ordering.
Deprecated since version 1.5.0: The sort_columns argument is deprecated and will be removed in a
future version.
secondary_ybool or sequence, default FalseWhether to plot on the secondary y-axis if a list/tuple, which
columns to plot on secondary y-axis.
mark_rightbool, default TrueWhen using a secondary_y axis, automatically mark the column
labels with “(right)” in the legend.
include_boolbool, default is FalseIf True, boolean values can be plotted.
backendstr, default NoneBackend to use instead of the backend specified in the option
plotting.backend. For instance, ‘matplotlib’. Alternatively, to
specify the plotting.backend for the whole session, set
pd.options.plotting.backend.
New in version 1.0.0.
**kwargsOptions to pass to matplotlib plotting method.
Returns
matplotlib.axes.Axes or numpy.ndarray of themIf the backend is not the default matplotlib one, the return value
will be the object returned by the backend.
Notes
See the matplotlib documentation online for more on this subject.
If kind = ‘bar’ or ‘barh’, you can specify relative alignments
for bar plot layout with the position keyword:
from 0 (left/bottom-end) to 1 (right/top-end). Default is 0.5
(center).
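A minimal sketch of calling Series.plot, assuming matplotlib is installed as the plotting backend (the data here is made up for illustration):
```
import pandas as pd

# Hypothetical data for illustration only
s = pd.Series([1, 3, 2, 4], name="value")

# Line plot (the default kind); returns a matplotlib Axes object
ax = s.plot(title="A small example")

# Vertical bar plot, with bars centered on the ticks via the position keyword
ax = s.plot(kind="bar", position=0.5)
```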
|
reference/api/pandas.Series.plot.html
|
Comparison with spreadsheets
|
Comparison with spreadsheets
Since many potential pandas users have some familiarity with spreadsheet programs like
Excel, this page is meant to provide some examples
of how various spreadsheet operations would be performed using pandas. This page will use
terminology and link to documentation for Excel, but much will be the same/similar in
Google Sheets,
LibreOffice Calc,
Apple Numbers, and other
Excel-compatible spreadsheet software.
If you’re new to pandas, you might want to first read through 10 Minutes to pandas
to familiarize yourself with the library.
As is customary, we import pandas and NumPy as follows:
pandas
Excel
|
Since many potential pandas users have some familiarity with spreadsheet programs like
Excel, this page is meant to provide some examples
of how various spreadsheet operations would be performed using pandas. This page will use
terminology and link to documentation for Excel, but much will be the same/similar in
Google Sheets,
LibreOffice Calc,
Apple Numbers, and other
Excel-compatible spreadsheet software.
If you’re new to pandas, you might want to first read through 10 Minutes to pandas
to familiarize yourself with the library.
As is customary, we import pandas and NumPy as follows:
In [1]: import pandas as pd
In [2]: import numpy as np
Data structures#
General terminology translation#
pandas        Excel
DataFrame     worksheet
Series        column
Index         row headings
row           row
NaN           empty cell
DataFrame#
A DataFrame in pandas is analogous to an Excel worksheet. While an Excel workbook can contain
multiple worksheets, pandas DataFrames exist independently.
Series#
A Series is the data structure that represents one column of a DataFrame. Working with a
Series is analogous to referencing a column of a spreadsheet.
Index#
Every DataFrame and Series has an Index, which are labels on the rows of the data. In
pandas, if no index is specified, a RangeIndex is used by default (first row = 0,
second row = 1, and so on), analogous to row headings/numbers in spreadsheets.
In pandas, indexes can be set to one (or multiple) unique values, which is like having a column that
is used as the row identifier in a worksheet. Unlike most spreadsheets, these Index values can
actually be used to reference the rows. (Note that this can be done in Excel with structured
references.)
For example, in spreadsheets, you would reference the first row as A1:Z1, while in pandas you
could use populations.loc['Chicago'].
Index values are also persistent, so if you re-order the rows in a DataFrame, the label for a
particular row doesn’t change.
See the indexing documentation for much more on how to use an Index
effectively.
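As a minimal sketch of the populations example above (the numbers are hypothetical):
```
import pandas as pd

# Hypothetical data for illustration
populations = pd.DataFrame(
    {"city": ["Chicago", "Boston"], "population": [2_700_000, 650_000]}
).set_index("city")  # use the "city" column as the row labels (the Index)

# Reference a row by its Index label rather than by position
populations.loc["Chicago"]
```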
Copies vs. in place operations#
Most pandas operations return copies of the Series/DataFrame. To make the changes “stick”,
you’ll need to either assign to a new variable:
sorted_df = df.sort_values("col1")
or overwrite the original one:
df = df.sort_values("col1")
Note
You will see an inplace=True keyword argument available for some methods:
df.sort_values("col1", inplace=True)
Its use is discouraged. More information.
Data input / output#
Constructing a DataFrame from values#
In a spreadsheet, values can be typed directly into cells.
A pandas DataFrame can be constructed in many different ways,
but for a small number of values, it is often convenient to specify it as
a Python dictionary, where the keys are the column names
and the values are the data.
In [3]: df = pd.DataFrame({"x": [1, 3, 5], "y": [2, 4, 6]})
In [4]: df
Out[4]:
x y
0 1 2
1 3 4
2 5 6
Reading external data#
Both Excel
and pandas can import data from various sources in various
formats.
CSV#
Let’s load and display the tips
dataset from the pandas tests, which is a CSV file. In Excel, you would download and then
open the CSV.
In pandas, you pass the URL or local path of the CSV file to read_csv():
In [5]: url = (
...: "https://raw.githubusercontent.com/pandas-dev"
...: "/pandas/main/pandas/tests/io/data/csv/tips.csv"
...: )
...:
In [6]: tips = pd.read_csv(url)
In [7]: tips
Out[7]:
total_bill tip sex smoker day time size
0 16.99 1.01 Female No Sun Dinner 2
1 10.34 1.66 Male No Sun Dinner 3
2 21.01 3.50 Male No Sun Dinner 3
3 23.68 3.31 Male No Sun Dinner 2
4 24.59 3.61 Female No Sun Dinner 4
.. ... ... ... ... ... ... ...
239 29.03 5.92 Male No Sat Dinner 3
240 27.18 2.00 Female Yes Sat Dinner 2
241 22.67 2.00 Male Yes Sat Dinner 2
242 17.82 1.75 Male No Sat Dinner 2
243 18.78 3.00 Female No Thur Dinner 2
[244 rows x 7 columns]
Like Excel’s Text Import Wizard,
read_csv can take a number of parameters to specify how the data should be parsed. For
example, if the data was instead tab delimited, and did not have column names, the pandas command
would be:
tips = pd.read_csv("tips.csv", sep="\t", header=None)
# alternatively, read_table is an alias to read_csv with tab delimiter
tips = pd.read_table("tips.csv", header=None)
Excel files#
Excel opens various Excel file formats
by double-clicking them, or using the Open menu.
In pandas, you use special methods for reading and writing from/to Excel files.
Let’s first create a new Excel file based on the tips dataframe in the above example:
tips.to_excel("./tips.xlsx")
Should you wish to subsequently access the data in the tips.xlsx file, you can read it back into a DataFrame using
tips_df = pd.read_excel("./tips.xlsx", index_col=0)
You have just read in an Excel file using pandas!
Limiting output#
Spreadsheet programs will only show one screenful of data at a time and then allow you to scroll, so
there isn’t really a need to limit output. In pandas, you’ll need to put a little more thought into
controlling how your DataFrames are displayed.
By default, pandas will truncate output of large DataFrames to show the first and last rows.
This can be overridden by changing the pandas options, or using
DataFrame.head() or DataFrame.tail().
In [8]: tips.head(5)
Out[8]:
total_bill tip sex smoker day time size
0 16.99 1.01 Female No Sun Dinner 2
1 10.34 1.66 Male No Sun Dinner 3
2 21.01 3.50 Male No Sun Dinner 3
3 23.68 3.31 Male No Sun Dinner 2
4 24.59 3.61 Female No Sun Dinner 4
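If you prefer to change how much is displayed rather than calling head() or tail(), the display options can be adjusted; a small sketch:
```
import pandas as pd

# Show at most 10 rows when printing a DataFrame
pd.set_option("display.max_rows", 10)

# Restore the default setting
pd.reset_option("display.max_rows")
```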
Exporting data#
By default, desktop spreadsheet software will save to its respective file format (.xlsx, .ods, etc). You can, however, save to other file formats.
pandas can create Excel files, CSV, or a number of other formats.
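For example, a brief sketch of writing the tips data out (the output file names are arbitrary, and writing .xlsx assumes an Excel engine such as openpyxl is installed):
```
# Write the tips DataFrame to CSV and to Excel
tips.to_csv("tips_out.csv", index=False)
tips.to_excel("tips_out.xlsx", sheet_name="tips")
```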
Data operations#
Operations on columns#
In spreadsheets, formulas
are often created in individual cells and then dragged
into other cells to compute them for other columns. In pandas, you’re able to do operations on whole
columns directly.
pandas provides vectorized operations by specifying the individual Series in the
DataFrame. New columns can be assigned in the same way. The DataFrame.drop() method drops
a column from the DataFrame.
In [9]: tips["total_bill"] = tips["total_bill"] - 2
In [10]: tips["new_bill"] = tips["total_bill"] / 2
In [11]: tips
Out[11]:
total_bill tip sex smoker day time size new_bill
0 14.99 1.01 Female No Sun Dinner 2 7.495
1 8.34 1.66 Male No Sun Dinner 3 4.170
2 19.01 3.50 Male No Sun Dinner 3 9.505
3 21.68 3.31 Male No Sun Dinner 2 10.840
4 22.59 3.61 Female No Sun Dinner 4 11.295
.. ... ... ... ... ... ... ... ...
239 27.03 5.92 Male No Sat Dinner 3 13.515
240 25.18 2.00 Female Yes Sat Dinner 2 12.590
241 20.67 2.00 Male Yes Sat Dinner 2 10.335
242 15.82 1.75 Male No Sat Dinner 2 7.910
243 16.78 3.00 Female No Thur Dinner 2 8.390
[244 rows x 8 columns]
In [12]: tips = tips.drop("new_bill", axis=1)
Note that we aren’t having to tell it to do that subtraction cell-by-cell — pandas handles that for
us. See how to create new columns derived from existing columns.
Filtering#
In Excel, filtering is done through a graphical menu.
DataFrames can be filtered in multiple ways; the most intuitive of which is using
boolean indexing.
In [13]: tips[tips["total_bill"] > 10]
Out[13]:
total_bill tip sex smoker day time size
0 14.99 1.01 Female No Sun Dinner 2
2 19.01 3.50 Male No Sun Dinner 3
3 21.68 3.31 Male No Sun Dinner 2
4 22.59 3.61 Female No Sun Dinner 4
5 23.29 4.71 Male No Sun Dinner 4
.. ... ... ... ... ... ... ...
239 27.03 5.92 Male No Sat Dinner 3
240 25.18 2.00 Female Yes Sat Dinner 2
241 20.67 2.00 Male Yes Sat Dinner 2
242 15.82 1.75 Male No Sat Dinner 2
243 16.78 3.00 Female No Thur Dinner 2
[204 rows x 7 columns]
The above statement is simply passing a Series of True/False objects to the DataFrame,
returning all rows with True.
In [14]: is_dinner = tips["time"] == "Dinner"
In [15]: is_dinner
Out[15]:
0 True
1 True
2 True
3 True
4 True
...
239 True
240 True
241 True
242 True
243 True
Name: time, Length: 244, dtype: bool
In [16]: is_dinner.value_counts()
Out[16]:
True 176
False 68
Name: time, dtype: int64
In [17]: tips[is_dinner]
Out[17]:
total_bill tip sex smoker day time size
0 14.99 1.01 Female No Sun Dinner 2
1 8.34 1.66 Male No Sun Dinner 3
2 19.01 3.50 Male No Sun Dinner 3
3 21.68 3.31 Male No Sun Dinner 2
4 22.59 3.61 Female No Sun Dinner 4
.. ... ... ... ... ... ... ...
239 27.03 5.92 Male No Sat Dinner 3
240 25.18 2.00 Female Yes Sat Dinner 2
241 20.67 2.00 Male Yes Sat Dinner 2
242 15.82 1.75 Male No Sat Dinner 2
243 16.78 3.00 Female No Thur Dinner 2
[176 rows x 7 columns]
If/then logic#
Let’s say we want to make a bucket column with values of low and high, based on whether
the total_bill is less or more than $10.
In spreadsheets, logical comparison can be done with conditional formulas.
We’d use a formula of =IF(A2 < 10, "low", "high"), dragged to all cells in a new bucket
column.
The same operation in pandas can be accomplished using
the where method from numpy.
In [18]: tips["bucket"] = np.where(tips["total_bill"] < 10, "low", "high")
In [19]: tips
Out[19]:
total_bill tip sex smoker day time size bucket
0 14.99 1.01 Female No Sun Dinner 2 high
1 8.34 1.66 Male No Sun Dinner 3 low
2 19.01 3.50 Male No Sun Dinner 3 high
3 21.68 3.31 Male No Sun Dinner 2 high
4 22.59 3.61 Female No Sun Dinner 4 high
.. ... ... ... ... ... ... ... ...
239 27.03 5.92 Male No Sat Dinner 3 high
240 25.18 2.00 Female Yes Sat Dinner 2 high
241 20.67 2.00 Male Yes Sat Dinner 2 high
242 15.82 1.75 Male No Sat Dinner 2 high
243 16.78 3.00 Female No Thur Dinner 2 high
[244 rows x 8 columns]
Date functionality#
This section will refer to “dates”, but timestamps are handled similarly.
We can think of date functionality in two parts: parsing, and output. In spreadsheets, date values
are generally parsed automatically, though there is a DATEVALUE
function if you need it. In pandas, you need to explicitly convert plain text to datetime objects,
either while reading from a CSV or once in a DataFrame.
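A minimal sketch of both approaches (the file and column names are hypothetical):
```
import pandas as pd

# Parse dates while reading a CSV
df = pd.read_csv("orders.csv", parse_dates=["order_date"])

# Or convert an existing text column afterwards
df["order_date"] = pd.to_datetime(df["order_date"], format="%Y-%m-%d")
```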
Once parsed, spreadsheets display the dates in a default format, though the format can be changed.
In pandas, you’ll generally want to keep dates as datetime objects while you’re doing
calculations with them. Outputting parts of dates (such as the year) is done through date
functions
in spreadsheets, and datetime properties in pandas.
Given date1 and date2 in columns A and B of a spreadsheet, you might have these
formulas:
column            formula
date1_year        =YEAR(A2)
date2_month       =MONTH(B2)
date1_next        =DATE(YEAR(A2),MONTH(A2)+1,1)
months_between    =DATEDIF(A2,B2,"M")
The equivalent pandas operations are shown below.
In [20]: tips["date1"] = pd.Timestamp("2013-01-15")
In [21]: tips["date2"] = pd.Timestamp("2015-02-15")
In [22]: tips["date1_year"] = tips["date1"].dt.year
In [23]: tips["date2_month"] = tips["date2"].dt.month
In [24]: tips["date1_next"] = tips["date1"] + pd.offsets.MonthBegin()
In [25]: tips["months_between"] = tips["date2"].dt.to_period("M") - tips[
....: "date1"
....: ].dt.to_period("M")
....:
In [26]: tips[
....: ["date1", "date2", "date1_year", "date2_month", "date1_next", "months_between"]
....: ]
....:
Out[26]:
date1 date2 date1_year date2_month date1_next months_between
0 2013-01-15 2015-02-15 2013 2 2013-02-01 <25 * MonthEnds>
1 2013-01-15 2015-02-15 2013 2 2013-02-01 <25 * MonthEnds>
2 2013-01-15 2015-02-15 2013 2 2013-02-01 <25 * MonthEnds>
3 2013-01-15 2015-02-15 2013 2 2013-02-01 <25 * MonthEnds>
4 2013-01-15 2015-02-15 2013 2 2013-02-01 <25 * MonthEnds>
.. ... ... ... ... ... ...
239 2013-01-15 2015-02-15 2013 2 2013-02-01 <25 * MonthEnds>
240 2013-01-15 2015-02-15 2013 2 2013-02-01 <25 * MonthEnds>
241 2013-01-15 2015-02-15 2013 2 2013-02-01 <25 * MonthEnds>
242 2013-01-15 2015-02-15 2013 2 2013-02-01 <25 * MonthEnds>
243 2013-01-15 2015-02-15 2013 2 2013-02-01 <25 * MonthEnds>
[244 rows x 6 columns]
See Time series / date functionality for more details.
Selection of columns#
In spreadsheets, you can select columns you want by:
Hiding columns
Deleting columns
Referencing a range from one worksheet into another
Since spreadsheet columns are typically named in a header row,
renaming a column is simply a matter of changing the text in that first cell.
The same operations are expressed in pandas below.
Keep certain columns#
In [27]: tips[["sex", "total_bill", "tip"]]
Out[27]:
sex total_bill tip
0 Female 14.99 1.01
1 Male 8.34 1.66
2 Male 19.01 3.50
3 Male 21.68 3.31
4 Female 22.59 3.61
.. ... ... ...
239 Male 27.03 5.92
240 Female 25.18 2.00
241 Male 20.67 2.00
242 Male 15.82 1.75
243 Female 16.78 3.00
[244 rows x 3 columns]
Drop a column#
In [28]: tips.drop("sex", axis=1)
Out[28]:
total_bill tip smoker day time size
0 14.99 1.01 No Sun Dinner 2
1 8.34 1.66 No Sun Dinner 3
2 19.01 3.50 No Sun Dinner 3
3 21.68 3.31 No Sun Dinner 2
4 22.59 3.61 No Sun Dinner 4
.. ... ... ... ... ... ...
239 27.03 5.92 No Sat Dinner 3
240 25.18 2.00 Yes Sat Dinner 2
241 20.67 2.00 Yes Sat Dinner 2
242 15.82 1.75 No Sat Dinner 2
243 16.78 3.00 No Thur Dinner 2
[244 rows x 6 columns]
Rename a column#
In [29]: tips.rename(columns={"total_bill": "total_bill_2"})
Out[29]:
total_bill_2 tip sex smoker day time size
0 14.99 1.01 Female No Sun Dinner 2
1 8.34 1.66 Male No Sun Dinner 3
2 19.01 3.50 Male No Sun Dinner 3
3 21.68 3.31 Male No Sun Dinner 2
4 22.59 3.61 Female No Sun Dinner 4
.. ... ... ... ... ... ... ...
239 27.03 5.92 Male No Sat Dinner 3
240 25.18 2.00 Female Yes Sat Dinner 2
241 20.67 2.00 Male Yes Sat Dinner 2
242 15.82 1.75 Male No Sat Dinner 2
243 16.78 3.00 Female No Thur Dinner 2
[244 rows x 7 columns]
Sorting by values#
Sorting in spreadsheets is accomplished via the sort dialog.
pandas has a DataFrame.sort_values() method, which takes a list of columns to sort by.
In [30]: tips = tips.sort_values(["sex", "total_bill"])
In [31]: tips
Out[31]:
total_bill tip sex smoker day time size
67 1.07 1.00 Female Yes Sat Dinner 1
92 3.75 1.00 Female Yes Fri Dinner 2
111 5.25 1.00 Female No Sat Dinner 1
145 6.35 1.50 Female No Thur Lunch 2
135 6.51 1.25 Female No Thur Lunch 2
.. ... ... ... ... ... ... ...
182 43.35 3.50 Male Yes Sun Dinner 3
156 46.17 5.00 Male No Sun Dinner 6
59 46.27 6.73 Male No Sat Dinner 4
212 46.33 9.00 Male No Sat Dinner 4
170 48.81 10.00 Male Yes Sat Dinner 3
[244 rows x 7 columns]
String processing#
Finding length of string#
In spreadsheets, the number of characters in text can be found with the LEN
function. This can be used with the TRIM
function to remove extra whitespace.
=LEN(TRIM(A2))
You can find the length of a character string with Series.str.len().
In Python 3, all strings are Unicode strings. len includes trailing blanks.
Use len and rstrip to exclude trailing blanks.
In [32]: tips["time"].str.len()
Out[32]:
67 6
92 6
111 6
145 5
135 5
..
182 6
156 6
59 6
212 6
170 6
Name: time, Length: 244, dtype: int64
In [33]: tips["time"].str.rstrip().str.len()
Out[33]:
67 6
92 6
111 6
145 5
135 5
..
182 6
156 6
59 6
212 6
170 6
Name: time, Length: 244, dtype: int64
Note this will still include multiple spaces within the string, so it isn’t 100% equivalent.
Finding position of substring#
The FIND
spreadsheet function returns the position of a substring, with the first character being 1.
You can find the position of a character in a column of strings with the Series.str.find()
method. find searches for the first position of the substring. If the substring is found, the
method returns its position. If not found, it returns -1. Keep in mind that Python indexes are
zero-based.
In [34]: tips["sex"].str.find("ale")
Out[34]:
67 3
92 3
111 3
145 3
135 3
..
182 1
156 1
59 1
212 1
170 1
Name: sex, Length: 244, dtype: int64
Extracting substring by position#
Spreadsheets have a MID
formula for extracting a substring from a given position. To get the first character:
=MID(A2,1,1)
With pandas you can use [] notation to extract a substring
from a string by position locations. Keep in mind that Python
indexes are zero-based.
In [35]: tips["sex"].str[0:1]
Out[35]:
67 F
92 F
111 F
145 F
135 F
..
182 M
156 M
59 M
212 M
170 M
Name: sex, Length: 244, dtype: object
Extracting nth word#
In Excel, you might use the Text to Columns Wizard
for splitting text and retrieving a specific column. (Note it’s possible to do so through a formula
as well.)
The simplest way to extract words in pandas is to split the strings by spaces, then reference the
word by index. Note there are more powerful approaches should you need them.
In [36]: firstlast = pd.DataFrame({"String": ["John Smith", "Jane Cook"]})
In [37]: firstlast["First_Name"] = firstlast["String"].str.split(" ", expand=True)[0]
In [38]: firstlast["Last_Name"] = firstlast["String"].str.rsplit(" ", expand=True)[1]
In [39]: firstlast
Out[39]:
String First_Name Last_Name
0 John Smith John Smith
1 Jane Cook Jane Cook
Changing case#
Spreadsheets provide UPPER, LOWER, and PROPER functions
for converting text to upper, lower, and title case, respectively.
The equivalent pandas methods are Series.str.upper(), Series.str.lower(), and
Series.str.title().
In [40]: firstlast = pd.DataFrame({"string": ["John Smith", "Jane Cook"]})
In [41]: firstlast["upper"] = firstlast["string"].str.upper()
In [42]: firstlast["lower"] = firstlast["string"].str.lower()
In [43]: firstlast["title"] = firstlast["string"].str.title()
In [44]: firstlast
Out[44]:
string upper lower title
0 John Smith JOHN SMITH john smith John Smith
1 Jane Cook JANE COOK jane cook Jane Cook
Merging#
The following tables will be used in the merge examples:
In [45]: df1 = pd.DataFrame({"key": ["A", "B", "C", "D"], "value": np.random.randn(4)})
In [46]: df1
Out[46]:
key value
0 A 0.469112
1 B -0.282863
2 C -1.509059
3 D -1.135632
In [47]: df2 = pd.DataFrame({"key": ["B", "D", "D", "E"], "value": np.random.randn(4)})
In [48]: df2
Out[48]:
key value
0 B 1.212112
1 D -0.173215
2 D 0.119209
3 E -1.044236
In Excel, merging of tables can be done through a VLOOKUP.
pandas DataFrames have a merge() method, which provides similar functionality. The
data does not have to be sorted ahead of time, and different join types are accomplished via the
how keyword.
In [49]: inner_join = df1.merge(df2, on=["key"], how="inner")
In [50]: inner_join
Out[50]:
key value_x value_y
0 B -0.282863 1.212112
1 D -1.135632 -0.173215
2 D -1.135632 0.119209
In [51]: left_join = df1.merge(df2, on=["key"], how="left")
In [52]: left_join
Out[52]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 NaN
3 D -1.135632 -0.173215
4 D -1.135632 0.119209
In [53]: right_join = df1.merge(df2, on=["key"], how="right")
In [54]: right_join
Out[54]:
key value_x value_y
0 B -0.282863 1.212112
1 D -1.135632 -0.173215
2 D -1.135632 0.119209
3 E NaN -1.044236
In [55]: outer_join = df1.merge(df2, on=["key"], how="outer")
In [56]: outer_join
Out[56]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 NaN
3 D -1.135632 -0.173215
4 D -1.135632 0.119209
5 E NaN -1.044236
merge has a number of advantages over VLOOKUP:
The lookup value doesn’t need to be the first column of the lookup table
If multiple rows are matched, there will be one row for each match, instead of just the first
It will include all columns from the lookup table, instead of just a single specified column
It supports more complex join operations
Other considerations#
Fill Handle#
Create a series of numbers following a set pattern in a certain set of cells. In
a spreadsheet, this would be done by shift+drag after entering the first number or by
entering the first two or three values and then dragging.
This can be achieved by creating a series and assigning it to the desired cells.
In [57]: df = pd.DataFrame({"AAA": [1] * 8, "BBB": list(range(0, 8))})
In [58]: df
Out[58]:
AAA BBB
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
5 1 5
6 1 6
7 1 7
In [59]: series = list(range(1, 5))
In [60]: series
Out[60]: [1, 2, 3, 4]
In [61]: df.loc[2:5, "AAA"] = series
In [62]: df
Out[62]:
AAA BBB
0 1 0
1 1 1
2 1 2
3 2 3
4 3 4
5 4 5
6 1 6
7 1 7
Drop Duplicates#
Excel has built-in functionality for removing duplicate values.
This is supported in pandas via drop_duplicates().
In [63]: df = pd.DataFrame(
....: {
....: "class": ["A", "A", "A", "B", "C", "D"],
....: "student_count": [42, 35, 42, 50, 47, 45],
....: "all_pass": ["Yes", "Yes", "Yes", "No", "No", "Yes"],
....: }
....: )
....:
In [64]: df.drop_duplicates()
Out[64]:
class student_count all_pass
0 A 42 Yes
1 A 35 Yes
3 B 50 No
4 C 47 No
5 D 45 Yes
In [65]: df.drop_duplicates(["class", "student_count"])
Out[65]:
class student_count all_pass
0 A 42 Yes
1 A 35 Yes
3 B 50 No
4 C 47 No
5 D 45 Yes
Pivot Tables#
PivotTables
from spreadsheets can be replicated in pandas through Reshaping and pivot tables. Using the tips dataset again,
let’s find the average gratuity by size of the party and sex of the server.
In Excel, the same result comes from a PivotTable configured with size as rows, sex as columns, and the average of tip as values (the original page shows this configuration as a screenshot).
The equivalent in pandas:
In [66]: pd.pivot_table(
....: tips, values="tip", index=["size"], columns=["sex"], aggfunc=np.average
....: )
....:
Out[66]:
sex Female Male
size
1 1.276667 1.920000
2 2.528448 2.614184
3 3.250000 3.476667
4 4.021111 4.172143
5 5.140000 3.750000
6 4.600000 5.850000
Adding a row#
Assuming we are using a RangeIndex (numbered 0, 1, etc.), we can use concat() to add a row to the bottom of a DataFrame.
In [67]: df
Out[67]:
class student_count all_pass
0 A 42 Yes
1 A 35 Yes
2 A 42 Yes
3 B 50 No
4 C 47 No
5 D 45 Yes
In [68]: new_row = pd.DataFrame([["E", 51, True]],
....: columns=["class", "student_count", "all_pass"])
....:
In [69]: pd.concat([df, new_row])
Out[69]:
class student_count all_pass
0 A 42 Yes
1 A 35 Yes
2 A 42 Yes
3 B 50 No
4 C 47 No
5 D 45 Yes
0 E 51 True
Find and Replace#
Excel’s Find dialog
takes you to cells that match, one by one. In pandas, this operation is generally done for an
entire column or DataFrame at once through conditional expressions.
In [70]: tips
Out[70]:
total_bill tip sex smoker day time size
67 1.07 1.00 Female Yes Sat Dinner 1
92 3.75 1.00 Female Yes Fri Dinner 2
111 5.25 1.00 Female No Sat Dinner 1
145 6.35 1.50 Female No Thur Lunch 2
135 6.51 1.25 Female No Thur Lunch 2
.. ... ... ... ... ... ... ...
182 43.35 3.50 Male Yes Sun Dinner 3
156 46.17 5.00 Male No Sun Dinner 6
59 46.27 6.73 Male No Sat Dinner 4
212 46.33 9.00 Male No Sat Dinner 4
170 48.81 10.00 Male Yes Sat Dinner 3
[244 rows x 7 columns]
In [71]: tips == "Sun"
Out[71]:
total_bill tip sex smoker day time size
67 False False False False False False False
92 False False False False False False False
111 False False False False False False False
145 False False False False False False False
135 False False False False False False False
.. ... ... ... ... ... ... ...
182 False False False False True False False
156 False False False False True False False
59 False False False False False False False
212 False False False False False False False
170 False False False False False False False
[244 rows x 7 columns]
In [72]: tips["day"].str.contains("S")
Out[72]:
67 True
92 False
111 True
145 False
135 False
...
182 True
156 True
59 True
212 True
170 True
Name: day, Length: 244, dtype: bool
pandas’ replace() is comparable to Excel’s Replace All.
In [73]: tips.replace("Thu", "Thursday")
Out[73]:
total_bill tip sex smoker day time size
67 1.07 1.00 Female Yes Sat Dinner 1
92 3.75 1.00 Female Yes Fri Dinner 2
111 5.25 1.00 Female No Sat Dinner 1
145 6.35 1.50 Female No Thur Lunch 2
135 6.51 1.25 Female No Thur Lunch 2
.. ... ... ... ... ... ... ...
182 43.35 3.50 Male Yes Sun Dinner 3
156 46.17 5.00 Male No Sun Dinner 6
59 46.27 6.73 Male No Sat Dinner 4
212 46.33 9.00 Male No Sat Dinner 4
170 48.81 10.00 Male Yes Sat Dinner 3
[244 rows x 7 columns]
|
getting_started/comparison/comparison_with_spreadsheets.html
|
pandas.tseries.offsets.MonthBegin
|
`pandas.tseries.offsets.MonthBegin`
DateOffset of one month at beginning.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> ts + pd.offsets.MonthBegin()
Timestamp('2022-02-01 00:00:00')
```
|
class pandas.tseries.offsets.MonthBegin#
DateOffset of one month at beginning.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> ts + pd.offsets.MonthBegin()
Timestamp('2022-02-01 00:00:00')
Attributes
base
Returns a copy of the calling offset object with n=1 and all other attributes equal.
freqstr
Return a string representing the frequency.
kwds
Return a dict of extra parameters for the offset.
name
Return a string representing the base frequency.
n
nanos
normalize
rule_code
Methods
__call__(*args, **kwargs)
Call self as a function.
apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
copy
Return a copy of the frequency.
is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
is_month_end
Return boolean whether a timestamp occurs on the month end.
is_month_start
Return boolean whether a timestamp occurs on the month start.
is_on_offset
Return boolean whether a timestamp intersects with this frequency.
is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
is_year_end
Return boolean whether a timestamp occurs on the year end.
is_year_start
Return boolean whether a timestamp occurs on the year start.
rollback
Roll provided date backward to next offset only if not on offset.
rollforward
Roll provided date forward to next offset only if not on offset.
apply
isAnchored
onOffset
|
reference/api/pandas.tseries.offsets.MonthBegin.html
|
pandas.Index.nunique
|
`pandas.Index.nunique`
Return number of unique elements in the object.
Excludes NA values by default.
```
>>> s = pd.Series([1, 3, 5, 7, 7])
>>> s
0 1
1 3
2 5
3 7
4 7
dtype: int64
```
|
Index.nunique(dropna=True)[source]#
Return number of unique elements in the object.
Excludes NA values by default.
Parameters
dropnabool, default TrueDon’t include NaN in the count.
Returns
int
See also
DataFrame.nuniqueMethod nunique for DataFrame.
Series.countCount non-NA/null observations in the Series.
Examples
>>> s = pd.Series([1, 3, 5, 7, 7])
>>> s
0 1
1 3
2 5
3 7
4 7
dtype: int64
>>> s.nunique()
4
|
reference/api/pandas.Index.nunique.html
|
pandas.tseries.offsets.Week.name
|
`pandas.tseries.offsets.Week.name`
Return a string representing the base frequency.
```
>>> pd.offsets.Hour().name
'H'
```
|
Week.name#
Return a string representing the base frequency.
Examples
>>> pd.offsets.Hour().name
'H'
>>> pd.offsets.Hour(5).name
'H'
|
reference/api/pandas.tseries.offsets.Week.name.html
|
pandas.core.resample.Resampler.nearest
|
`pandas.core.resample.Resampler.nearest`
Resample by using the nearest value.
When resampling data, missing values may appear (e.g., when the
resampling frequency is higher than the original frequency).
The nearest method will replace NaN values that appeared in
the resampled data with the value from the nearest member of the
sequence, based on the index value.
Missing values that existed in the original data will not be modified.
If limit is given, fill only this many values in each direction for
each of the original values.
```
>>> s = pd.Series([1, 2],
... index=pd.date_range('20180101',
... periods=2,
... freq='1h'))
>>> s
2018-01-01 00:00:00 1
2018-01-01 01:00:00 2
Freq: H, dtype: int64
```
|
Resampler.nearest(limit=None)[source]#
Resample by using the nearest value.
When resampling data, missing values may appear (e.g., when the
resampling frequency is higher than the original frequency).
The nearest method will replace NaN values that appeared in
the resampled data with the value from the nearest member of the
sequence, based on the index value.
Missing values that existed in the original data will not be modified.
If limit is given, fill only this many values in each direction for
each of the original values.
Parameters
limitint, optionalLimit of how many values to fill.
Returns
Series or DataFrameAn upsampled Series or DataFrame with NaN values filled with
their nearest value.
See also
backfillBackward fill the new missing values in the resampled data.
padForward fill NaN values.
Examples
>>> s = pd.Series([1, 2],
... index=pd.date_range('20180101',
... periods=2,
... freq='1h'))
>>> s
2018-01-01 00:00:00 1
2018-01-01 01:00:00 2
Freq: H, dtype: int64
>>> s.resample('15min').nearest()
2018-01-01 00:00:00 1
2018-01-01 00:15:00 1
2018-01-01 00:30:00 2
2018-01-01 00:45:00 2
2018-01-01 01:00:00 2
Freq: 15T, dtype: int64
Limit the number of upsampled values imputed by the nearest:
>>> s.resample('15min').nearest(limit=1)
2018-01-01 00:00:00 1.0
2018-01-01 00:15:00 1.0
2018-01-01 00:30:00 NaN
2018-01-01 00:45:00 2.0
2018-01-01 01:00:00 2.0
Freq: 15T, dtype: float64
|
reference/api/pandas.core.resample.Resampler.nearest.html
|
pandas.tseries.offsets.Day.is_anchored
|
`pandas.tseries.offsets.Day.is_anchored`
Return boolean whether the frequency is a unit frequency (n=1).
Examples
```
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
```
|
Day.is_anchored()#
Return boolean whether the frequency is a unit frequency (n=1).
Examples
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
|
reference/api/pandas.tseries.offsets.Day.is_anchored.html
|
Extensions
|
Extensions
These are primarily intended for library authors looking to extend pandas
objects.
api.extensions.register_extension_dtype(cls)
Register an ExtensionType with pandas as class decorator.
api.extensions.register_dataframe_accessor(name)
Register a custom accessor on DataFrame objects.
|
These are primarily intended for library authors looking to extend pandas
objects.
api.extensions.register_extension_dtype(cls)
Register an ExtensionType with pandas as class decorator.
api.extensions.register_dataframe_accessor(name)
Register a custom accessor on DataFrame objects.
api.extensions.register_series_accessor(name)
Register a custom accessor on Series objects (see the sketch below).
api.extensions.register_index_accessor(name)
Register a custom accessor on Index objects.
api.extensions.ExtensionDtype()
A custom data type, to be paired with an ExtensionArray.
api.extensions.ExtensionArray()
Abstract base class for custom 1-D array types.
arrays.PandasArray(values[, copy])
A pandas ExtensionArray for NumPy data.
Additionally, we have some utility methods for ensuring your object
behaves correctly.
api.indexers.check_array_indexer(array, indexer)
Check if indexer is a valid array indexer for array.
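As a brief sketch of the accessor registration entries above (the accessor name and method are made up for illustration):
```
import pandas as pd

@pd.api.extensions.register_series_accessor("stats")
class StatsAccessor:
    def __init__(self, series):
        self._s = series

    def spread(self):
        # Range of the values in the Series
        return self._s.max() - self._s.min()

pd.Series([1, 4, 9]).stats.spread()  # 8
```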
The sentinel pandas.api.extensions.no_default is used as the default
value in some methods. Use an is comparison to check if the user
provides a non-default value.
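A minimal sketch of that check (the function and its parameter are hypothetical):
```
from pandas.api.extensions import no_default

def my_method(value=no_default):
    # An `is` comparison tells us whether the caller actually supplied `value`
    if value is no_default:
        value = 0  # fall back to an internal default
    return value

my_method()   # returns 0 (caller relied on the default)
my_method(5)  # returns 5 (caller passed a value)
```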
|
reference/extensions.html
|
pandas.tseries.offsets.Week.apply_index
|
`pandas.tseries.offsets.Week.apply_index`
Vectorized apply of DateOffset to DatetimeIndex.
|
Week.apply_index()#
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead.
Parameters
indexDatetimeIndex
Returns
DatetimeIndex
Raises
NotImplementedErrorWhen the specific offset subclass does not have a vectorized
implementation.
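A short sketch of the recommended replacement, adding the offset to a DatetimeIndex directly:
```
import pandas as pd

dtindex = pd.date_range("2022-01-01", periods=3, freq="D")

# Instead of the deprecated Week().apply_index(dtindex), add the offset directly
shifted = pd.offsets.Week() + dtindex  # shifts each date forward by one week
```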
|
reference/api/pandas.tseries.offsets.Week.apply_index.html
|
pandas.DataFrame.ewm
|
`pandas.DataFrame.ewm`
Provide exponentially weighted (EW) calculations.
```
>>> df = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]})
>>> df
B
0 0.0
1 1.0
2 2.0
3 NaN
4 4.0
```
|
DataFrame.ewm(com=None, span=None, halflife=None, alpha=None, min_periods=0, adjust=True, ignore_na=False, axis=0, times=None, method='single')[source]#
Provide exponentially weighted (EW) calculations.
Exactly one of com, span, halflife, or alpha must be
provided if times is not provided. If times is provided,
halflife and one of com, span or alpha may be provided.
Parameters
comfloat, optionalSpecify decay in terms of center of mass
\(\alpha = 1 / (1 + com)\), for \(com \geq 0\).
spanfloat, optionalSpecify decay in terms of span
\(\alpha = 2 / (span + 1)\), for \(span \geq 1\).
halflifefloat, str, timedelta, optionalSpecify decay in terms of half-life
\(\alpha = 1 - \exp\left(-\ln(2) / halflife\right)\), for
\(halflife > 0\).
If times is specified, a timedelta convertible unit over which an
observation decays to half its value. Only applicable to mean(),
and halflife value will not apply to the other functions.
New in version 1.1.0.
alphafloat, optionalSpecify smoothing factor \(\alpha\) directly
\(0 < \alpha \leq 1\).
min_periodsint, default 0Minimum number of observations in window required to have a value;
otherwise, result is np.nan.
adjustbool, default TrueDivide by decaying adjustment factor in beginning periods to account
for imbalance in relative weightings (viewing EWMA as a moving average).
When adjust=True (default), the EW function is calculated using weights
\(w_i = (1 - \alpha)^i\). For example, the EW moving average of the series
[\(x_0, x_1, ..., x_t\)] would be:
\[y_t = \frac{x_t + (1 - \alpha)x_{t-1} + (1 - \alpha)^2 x_{t-2} + ... + (1 -
\alpha)^t x_0}{1 + (1 - \alpha) + (1 - \alpha)^2 + ... + (1 - \alpha)^t}\]
When adjust=False, the exponentially weighted function is calculated
recursively:
\[\begin{split}
y_0 &= x_0\\
y_t &= (1 - \alpha) y_{t-1} + \alpha x_t,
\end{split}\]
ignore_nabool, default FalseIgnore missing values when calculating weights.
When ignore_na=False (default), weights are based on absolute positions.
For example, the weights of \(x_0\) and \(x_2\) used in calculating
the final weighted average of [\(x_0\), None, \(x_2\)] are
\((1-\alpha)^2\) and \(1\) if adjust=True, and
\((1-\alpha)^2\) and \(\alpha\) if adjust=False.
When ignore_na=True, weights are based
on relative positions. For example, the weights of \(x_0\) and \(x_2\)
used in calculating the final weighted average of
[\(x_0\), None, \(x_2\)] are \(1-\alpha\) and \(1\) if
adjust=True, and \(1-\alpha\) and \(\alpha\) if adjust=False.
axis{0, 1}, default 0If 0 or 'index', calculate across the rows.
If 1 or 'columns', calculate across the columns.
For Series this parameter is unused and defaults to 0.
timesstr, np.ndarray, Series, default None
New in version 1.1.0.
Only applicable to mean().
Times corresponding to the observations. Must be monotonically increasing and
datetime64[ns] dtype.
If 1-D array like, a sequence with the same shape as the observations.
Deprecated since version 1.4.0: If str, the name of the column in the DataFrame representing the times.
methodstr {‘single’, ‘table’}, default ‘single’
New in version 1.4.0.
Execute the rolling operation per single column or row ('single')
or over the entire object ('table').
This argument is only implemented when specifying engine='numba'
in the method call.
Only applicable to mean()
Returns
ExponentialMovingWindow subclass
See also
rollingProvides rolling window calculations.
expandingProvides expanding transformations.
Notes
See Windowing Operations
for further usage details and examples.
Examples
>>> df = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]})
>>> df
B
0 0.0
1 1.0
2 2.0
3 NaN
4 4.0
>>> df.ewm(com=0.5).mean()
B
0 0.000000
1 0.750000
2 1.615385
3 1.615385
4 3.670213
>>> df.ewm(alpha=2 / 3).mean()
B
0 0.000000
1 0.750000
2 1.615385
3 1.615385
4 3.670213
adjust
>>> df.ewm(com=0.5, adjust=True).mean()
B
0 0.000000
1 0.750000
2 1.615385
3 1.615385
4 3.670213
>>> df.ewm(com=0.5, adjust=False).mean()
B
0 0.000000
1 0.666667
2 1.555556
3 1.555556
4 3.650794
ignore_na
>>> df.ewm(com=0.5, ignore_na=True).mean()
B
0 0.000000
1 0.750000
2 1.615385
3 1.615385
4 3.225000
>>> df.ewm(com=0.5, ignore_na=False).mean()
B
0 0.000000
1 0.750000
2 1.615385
3 1.615385
4 3.670213
times
Exponentially weighted mean with weights calculated with a timedelta halflife
relative to times.
>>> times = ['2020-01-01', '2020-01-03', '2020-01-10', '2020-01-15', '2020-01-17']
>>> df.ewm(halflife='4 days', times=pd.DatetimeIndex(times)).mean()
B
0 0.000000
1 0.585786
2 1.523889
3 1.523889
4 3.233686
|
reference/api/pandas.DataFrame.ewm.html
|
pandas.Series.dt.month
|
`pandas.Series.dt.month`
The month as January=1, December=12.
```
>>> datetime_series = pd.Series(
... pd.date_range("2000-01-01", periods=3, freq="M")
... )
>>> datetime_series
0 2000-01-31
1 2000-02-29
2 2000-03-31
dtype: datetime64[ns]
>>> datetime_series.dt.month
0 1
1 2
2 3
dtype: int64
```
|
Series.dt.month[source]#
The month as January=1, December=12.
Examples
>>> datetime_series = pd.Series(
... pd.date_range("2000-01-01", periods=3, freq="M")
... )
>>> datetime_series
0 2000-01-31
1 2000-02-29
2 2000-03-31
dtype: datetime64[ns]
>>> datetime_series.dt.month
0 1
1 2
2 3
dtype: int64
|
reference/api/pandas.Series.dt.month.html
|
pandas.tseries.offsets.SemiMonthEnd.freqstr
|
`pandas.tseries.offsets.SemiMonthEnd.freqstr`
Return a string representing the frequency.
```
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
```
|
SemiMonthEnd.freqstr#
Return a string representing the frequency.
Examples
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
>>> pd.offsets.BusinessHour(2).freqstr
'2BH'
>>> pd.offsets.Nano().freqstr
'N'
>>> pd.offsets.Nano(-3).freqstr
'-3N'
|
reference/api/pandas.tseries.offsets.SemiMonthEnd.freqstr.html
|
pandas.tseries.offsets.BYearBegin.is_year_start
|
`pandas.tseries.offsets.BYearBegin.is_year_start`
Return boolean whether a timestamp occurs on the year start.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
```
|
BYearBegin.is_year_start()#
Return boolean whether a timestamp occurs on the year start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
|
reference/api/pandas.tseries.offsets.BYearBegin.is_year_start.html
|
pandas.core.resample.Resampler.size
|
`pandas.core.resample.Resampler.size`
Compute group sizes.
|
Resampler.size()[source]#
Compute group sizes.
Returns
DataFrame or SeriesNumber of rows in each group as a Series if as_index is True
or a DataFrame if as_index is False.
See also
Series.groupbyApply a function groupby to a Series.
DataFrame.groupbyApply a function groupby to each row or column of a DataFrame.
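A minimal sketch of what size() returns when resampling (the data is made up):
```
import pandas as pd

s = pd.Series(range(4), index=pd.date_range("2022-01-01", periods=4, freq="12H"))

# Number of original observations falling into each daily bin
s.resample("D").size()
# 2022-01-01    2
# 2022-01-02    2
# Freq: D, dtype: int64
```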
|
reference/api/pandas.core.resample.Resampler.size.html
|
pandas.tseries.offsets.LastWeekOfMonth.is_quarter_end
|
`pandas.tseries.offsets.LastWeekOfMonth.is_quarter_end`
Return boolean whether a timestamp occurs on the quarter end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
```
|
LastWeekOfMonth.is_quarter_end()#
Return boolean whether a timestamp occurs on the quarter end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
|
reference/api/pandas.tseries.offsets.LastWeekOfMonth.is_quarter_end.html
|
pandas.tseries.offsets.CustomBusinessMonthEnd.nanos
|
pandas.tseries.offsets.CustomBusinessMonthEnd.nanos
|
CustomBusinessMonthEnd.nanos#
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthEnd.nanos.html
|
Index
|
Index
|
_
__array__() (pandas.Categorical method)
(pandas.Series method)
__call__() (pandas.option_context method)
(pandas.tseries.offsets.BQuarterBegin method)
(pandas.tseries.offsets.BQuarterEnd method)
(pandas.tseries.offsets.BusinessDay method)
(pandas.tseries.offsets.BusinessHour method)
(pandas.tseries.offsets.BusinessMonthBegin method)
(pandas.tseries.offsets.BusinessMonthEnd method)
(pandas.tseries.offsets.BYearBegin method)
(pandas.tseries.offsets.BYearEnd method)
(pandas.tseries.offsets.CustomBusinessDay method)
(pandas.tseries.offsets.CustomBusinessHour method)
(pandas.tseries.offsets.CustomBusinessMonthBegin method)
(pandas.tseries.offsets.CustomBusinessMonthEnd method)
(pandas.tseries.offsets.DateOffset method)
(pandas.tseries.offsets.Day method)
(pandas.tseries.offsets.Easter method)
(pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
(pandas.tseries.offsets.Hour method)
(pandas.tseries.offsets.LastWeekOfMonth method)
(pandas.tseries.offsets.Micro method)
(pandas.tseries.offsets.Milli method)
(pandas.tseries.offsets.Minute method)
(pandas.tseries.offsets.MonthBegin method)
(pandas.tseries.offsets.MonthEnd method)
(pandas.tseries.offsets.Nano method)
(pandas.tseries.offsets.QuarterBegin method)
(pandas.tseries.offsets.QuarterEnd method)
(pandas.tseries.offsets.Second method)
(pandas.tseries.offsets.SemiMonthBegin method)
(pandas.tseries.offsets.SemiMonthEnd method)
(pandas.tseries.offsets.Tick method)
(pandas.tseries.offsets.Week method)
(pandas.tseries.offsets.WeekOfMonth method)
(pandas.tseries.offsets.YearBegin method)
(pandas.tseries.offsets.YearEnd method)
__dataframe__() (pandas.DataFrame method)
__iter__() (pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.DataFrame method)
(pandas.Series method)
_concat_same_type() (pandas.api.extensions.ExtensionArray class method)
_formatter() (pandas.api.extensions.ExtensionArray method)
_from_factorized() (pandas.api.extensions.ExtensionArray class method)
_from_sequence() (pandas.api.extensions.ExtensionArray class method)
_from_sequence_of_strings() (pandas.api.extensions.ExtensionArray class method)
_reduce() (pandas.api.extensions.ExtensionArray method)
_values_for_argsort() (pandas.api.extensions.ExtensionArray method)
_values_for_factorize() (pandas.api.extensions.ExtensionArray method)
A
abs() (pandas.DataFrame method)
(pandas.Series method)
AbstractMethodError
AccessorRegistrationWarning
add() (pandas.DataFrame method)
(pandas.Series method)
add_categories() (pandas.CategoricalIndex method)
(pandas.Series.cat method)
add_prefix() (pandas.DataFrame method)
(pandas.Series method)
add_suffix() (pandas.DataFrame method)
(pandas.Series method)
agg() (pandas.core.groupby.GroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
aggregate() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.SeriesGroupBy method)
(pandas.core.resample.Resampler method)
(pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.DataFrame method)
(pandas.Series method)
align() (pandas.DataFrame method)
(pandas.Series method)
all() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.GroupBy method)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
allows_duplicate_labels (pandas.Flags property)
andrews_curves() (in module pandas.plotting)
any() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.GroupBy method)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
append() (pandas.DataFrame method)
(pandas.HDFStore method)
(pandas.Index method)
(pandas.Series method)
apply() (pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.DataFrame method)
(pandas.io.formats.style.Styler method)
(pandas.Series method)
(pandas.tseries.offsets.BQuarterBegin method)
(pandas.tseries.offsets.BQuarterEnd method)
(pandas.tseries.offsets.BusinessDay method)
(pandas.tseries.offsets.BusinessHour method)
(pandas.tseries.offsets.BusinessMonthBegin method)
(pandas.tseries.offsets.BusinessMonthEnd method)
(pandas.tseries.offsets.BYearBegin method)
(pandas.tseries.offsets.BYearEnd method)
(pandas.tseries.offsets.CustomBusinessDay method)
(pandas.tseries.offsets.CustomBusinessHour method)
(pandas.tseries.offsets.CustomBusinessMonthBegin method)
(pandas.tseries.offsets.CustomBusinessMonthEnd method)
(pandas.tseries.offsets.DateOffset method)
(pandas.tseries.offsets.Day method)
(pandas.tseries.offsets.Easter method)
(pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
(pandas.tseries.offsets.Hour method)
(pandas.tseries.offsets.LastWeekOfMonth method)
(pandas.tseries.offsets.Micro method)
(pandas.tseries.offsets.Milli method)
(pandas.tseries.offsets.Minute method)
(pandas.tseries.offsets.MonthBegin method)
(pandas.tseries.offsets.MonthEnd method)
(pandas.tseries.offsets.Nano method)
(pandas.tseries.offsets.QuarterBegin method)
(pandas.tseries.offsets.QuarterEnd method)
(pandas.tseries.offsets.Second method)
(pandas.tseries.offsets.SemiMonthBegin method)
(pandas.tseries.offsets.SemiMonthEnd method)
(pandas.tseries.offsets.Tick method)
(pandas.tseries.offsets.Week method)
(pandas.tseries.offsets.WeekOfMonth method)
(pandas.tseries.offsets.YearBegin method)
(pandas.tseries.offsets.YearEnd method)
apply_index() (pandas.io.formats.style.Styler method)
(pandas.tseries.offsets.BQuarterBegin method)
(pandas.tseries.offsets.BQuarterEnd method)
(pandas.tseries.offsets.BusinessDay method)
(pandas.tseries.offsets.BusinessHour method)
(pandas.tseries.offsets.BusinessMonthBegin method)
(pandas.tseries.offsets.BusinessMonthEnd method)
(pandas.tseries.offsets.BYearBegin method)
(pandas.tseries.offsets.BYearEnd method)
(pandas.tseries.offsets.CustomBusinessDay method)
(pandas.tseries.offsets.CustomBusinessHour method)
(pandas.tseries.offsets.CustomBusinessMonthBegin method)
(pandas.tseries.offsets.CustomBusinessMonthEnd method)
(pandas.tseries.offsets.DateOffset method)
(pandas.tseries.offsets.Day method)
(pandas.tseries.offsets.Easter method)
(pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
(pandas.tseries.offsets.Hour method)
(pandas.tseries.offsets.LastWeekOfMonth method)
(pandas.tseries.offsets.Micro method)
(pandas.tseries.offsets.Milli method)
(pandas.tseries.offsets.Minute method)
(pandas.tseries.offsets.MonthBegin method)
(pandas.tseries.offsets.MonthEnd method)
(pandas.tseries.offsets.Nano method)
(pandas.tseries.offsets.QuarterBegin method)
(pandas.tseries.offsets.QuarterEnd method)
(pandas.tseries.offsets.Second method)
(pandas.tseries.offsets.SemiMonthBegin method)
(pandas.tseries.offsets.SemiMonthEnd method)
(pandas.tseries.offsets.Tick method)
(pandas.tseries.offsets.Week method)
(pandas.tseries.offsets.WeekOfMonth method)
(pandas.tseries.offsets.YearBegin method)
(pandas.tseries.offsets.YearEnd method)
applymap() (pandas.DataFrame method)
(pandas.io.formats.style.Styler method)
applymap_index() (pandas.io.formats.style.Styler method)
area() (pandas.DataFrame.plot method)
(pandas.Series.plot method)
argmax() (pandas.Index method)
(pandas.Series method)
argmin() (pandas.Index method)
(pandas.Series method)
argsort() (pandas.api.extensions.ExtensionArray method)
(pandas.Index method)
(pandas.Series method)
array (pandas.Index attribute)
(pandas.Series property)
array() (in module pandas)
ArrowDtype (class in pandas)
ArrowExtensionArray (class in pandas.arrays)
ArrowStringArray (class in pandas.arrays)
as_ordered() (pandas.CategoricalIndex method)
(pandas.Series.cat method)
as_unordered() (pandas.CategoricalIndex method)
(pandas.Series.cat method)
asfreq() (pandas.core.resample.Resampler method)
(pandas.DataFrame method)
(pandas.Period method)
(pandas.PeriodIndex method)
(pandas.Series method)
asi8 (pandas.Index property)
asm8 (pandas.Timedelta attribute)
(pandas.Timestamp attribute)
asof() (pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
asof_locs() (pandas.Index method)
assert_extension_array_equal() (in module pandas.testing)
assert_frame_equal() (in module pandas.testing)
assert_index_equal() (in module pandas.testing)
assert_series_equal() (in module pandas.testing)
assign() (pandas.DataFrame method)
astimezone() (pandas.Timestamp method)
astype() (pandas.api.extensions.ExtensionArray method)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
at (pandas.DataFrame property)
(pandas.Series property)
at_time() (pandas.DataFrame method)
(pandas.Series method)
AttributeConflictWarning
attrs (pandas.DataFrame property)
(pandas.Series property)
autocorr() (pandas.Series method)
autocorrelation_plot() (in module pandas.plotting)
axes (pandas.DataFrame property)
(pandas.Series property)
B
backfill() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.DataFrame method)
(pandas.Series method)
background_gradient() (pandas.io.formats.style.Styler method)
bar() (pandas.DataFrame.plot method)
(pandas.io.formats.style.Styler method)
(pandas.Series.plot method)
barh() (pandas.DataFrame.plot method)
(pandas.Series.plot method)
base (pandas.tseries.offsets.BQuarterBegin attribute)
(pandas.tseries.offsets.BQuarterEnd attribute)
(pandas.tseries.offsets.BusinessDay attribute)
(pandas.tseries.offsets.BusinessHour attribute)
(pandas.tseries.offsets.BusinessMonthBegin attribute)
(pandas.tseries.offsets.BusinessMonthEnd attribute)
(pandas.tseries.offsets.BYearBegin attribute)
(pandas.tseries.offsets.BYearEnd attribute)
(pandas.tseries.offsets.CustomBusinessDay attribute)
(pandas.tseries.offsets.CustomBusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessMonthBegin attribute)
(pandas.tseries.offsets.CustomBusinessMonthEnd attribute)
(pandas.tseries.offsets.DateOffset attribute)
(pandas.tseries.offsets.Day attribute)
(pandas.tseries.offsets.Easter attribute)
(pandas.tseries.offsets.FY5253 attribute)
(pandas.tseries.offsets.FY5253Quarter attribute)
(pandas.tseries.offsets.Hour attribute)
(pandas.tseries.offsets.LastWeekOfMonth attribute)
(pandas.tseries.offsets.Micro attribute)
(pandas.tseries.offsets.Milli attribute)
(pandas.tseries.offsets.Minute attribute)
(pandas.tseries.offsets.MonthBegin attribute)
(pandas.tseries.offsets.MonthEnd attribute)
(pandas.tseries.offsets.Nano attribute)
(pandas.tseries.offsets.QuarterBegin attribute)
(pandas.tseries.offsets.QuarterEnd attribute)
(pandas.tseries.offsets.Second attribute)
(pandas.tseries.offsets.SemiMonthBegin attribute)
(pandas.tseries.offsets.SemiMonthEnd attribute)
(pandas.tseries.offsets.Tick attribute)
(pandas.tseries.offsets.Week attribute)
(pandas.tseries.offsets.WeekOfMonth attribute)
(pandas.tseries.offsets.YearBegin attribute)
(pandas.tseries.offsets.YearEnd attribute)
BaseIndexer (class in pandas.api.indexers)
bdate_range() (in module pandas)
BDay (in module pandas.tseries.offsets)
between() (pandas.Series method)
between_time() (pandas.DataFrame method)
(pandas.Series method)
bfill() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.DataFrame method)
(pandas.Series method)
BMonthBegin (in module pandas.tseries.offsets)
BMonthEnd (in module pandas.tseries.offsets)
book (pandas.ExcelWriter property)
bool() (pandas.DataFrame method)
(pandas.Series method)
BooleanArray (class in pandas.arrays)
BooleanDtype (class in pandas)
bootstrap_plot() (in module pandas.plotting)
box() (pandas.DataFrame.plot method)
(pandas.Series.plot method)
boxplot() (in module pandas.plotting)
(pandas.core.groupby.DataFrameGroupBy method)
(pandas.DataFrame method)
BQuarterBegin (class in pandas.tseries.offsets)
BQuarterEnd (class in pandas.tseries.offsets)
build_table_schema() (in module pandas.io.json)
BusinessDay (class in pandas.tseries.offsets)
BusinessHour (class in pandas.tseries.offsets)
BusinessMonthBegin (class in pandas.tseries.offsets)
BusinessMonthEnd (class in pandas.tseries.offsets)
BYearBegin (class in pandas.tseries.offsets)
BYearEnd (class in pandas.tseries.offsets)
C
calendar (pandas.tseries.offsets.BusinessDay attribute)
(pandas.tseries.offsets.BusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessDay attribute)
(pandas.tseries.offsets.CustomBusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessMonthBegin attribute)
(pandas.tseries.offsets.CustomBusinessMonthEnd attribute)
capitalize() (pandas.Series.str method)
casefold() (pandas.Series.str method)
cat() (pandas.Series method)
(pandas.Series.str method)
Categorical (class in pandas)
CategoricalConversionWarning
CategoricalDtype (class in pandas)
CategoricalIndex (class in pandas)
categories (pandas.Categorical property)
(pandas.CategoricalDtype property)
(pandas.CategoricalIndex property)
(pandas.Series.cat attribute)
cbday_roll (pandas.tseries.offsets.CustomBusinessMonthBegin attribute)
(pandas.tseries.offsets.CustomBusinessMonthEnd attribute)
CBMonthBegin (in module pandas.tseries.offsets)
CBMonthEnd (in module pandas.tseries.offsets)
CDay (in module pandas.tseries.offsets)
ceil() (pandas.DatetimeIndex method)
(pandas.Series.dt method)
(pandas.Timedelta method)
(pandas.TimedeltaIndex method)
(pandas.Timestamp method)
center() (pandas.Series.str method)
check_array_indexer() (in module pandas.api.indexers)
check_extension() (pandas.ExcelWriter class method)
clear() (pandas.io.formats.style.Styler method)
clip() (pandas.DataFrame method)
(pandas.Series method)
close() (pandas.ExcelWriter method)
closed (pandas.arrays.IntervalArray property)
(pandas.Interval attribute)
(pandas.IntervalIndex attribute)
closed_left (pandas.Interval attribute)
closed_right (pandas.Interval attribute)
ClosedFileError
codes (pandas.Categorical property)
(pandas.CategoricalIndex property)
(pandas.MultiIndex property)
(pandas.Series.cat attribute)
columns (pandas.DataFrame attribute)
combine() (pandas.DataFrame method)
(pandas.Series method)
(pandas.Timestamp class method)
combine_first() (pandas.DataFrame method)
(pandas.Series method)
compare() (pandas.DataFrame method)
(pandas.Series method)
components (pandas.Series.dt attribute)
(pandas.Timedelta attribute)
(pandas.TimedeltaIndex property)
concat() (in module pandas)
(pandas.io.formats.style.Styler method)
construct_array_type() (pandas.api.extensions.ExtensionDtype class method)
construct_from_string() (pandas.api.extensions.ExtensionDtype class method)
contains() (pandas.arrays.IntervalArray method)
(pandas.IntervalIndex method)
(pandas.Series.str method)
convert_dtypes() (pandas.DataFrame method)
(pandas.Series method)
copy() (pandas.api.extensions.ExtensionArray method)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
(pandas.tseries.offsets.BQuarterBegin method)
(pandas.tseries.offsets.BQuarterEnd method)
(pandas.tseries.offsets.BusinessDay method)
(pandas.tseries.offsets.BusinessHour method)
(pandas.tseries.offsets.BusinessMonthBegin method)
(pandas.tseries.offsets.BusinessMonthEnd method)
(pandas.tseries.offsets.BYearBegin method)
(pandas.tseries.offsets.BYearEnd method)
(pandas.tseries.offsets.CustomBusinessDay method)
(pandas.tseries.offsets.CustomBusinessHour method)
(pandas.tseries.offsets.CustomBusinessMonthBegin method)
(pandas.tseries.offsets.CustomBusinessMonthEnd method)
(pandas.tseries.offsets.DateOffset method)
(pandas.tseries.offsets.Day method)
(pandas.tseries.offsets.Easter method)
(pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
(pandas.tseries.offsets.Hour method)
(pandas.tseries.offsets.LastWeekOfMonth method)
(pandas.tseries.offsets.Micro method)
(pandas.tseries.offsets.Milli method)
(pandas.tseries.offsets.Minute method)
(pandas.tseries.offsets.MonthBegin method)
(pandas.tseries.offsets.MonthEnd method)
(pandas.tseries.offsets.Nano method)
(pandas.tseries.offsets.QuarterBegin method)
(pandas.tseries.offsets.QuarterEnd method)
(pandas.tseries.offsets.Second method)
(pandas.tseries.offsets.SemiMonthBegin method)
(pandas.tseries.offsets.SemiMonthEnd method)
(pandas.tseries.offsets.Tick method)
(pandas.tseries.offsets.Week method)
(pandas.tseries.offsets.WeekOfMonth method)
(pandas.tseries.offsets.YearBegin method)
(pandas.tseries.offsets.YearEnd method)
corr (pandas.core.groupby.DataFrameGroupBy property)
corr() (pandas.core.window.ewm.ExponentialMovingWindow method)
(pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.DataFrame method)
(pandas.Series method)
corrwith (pandas.core.groupby.DataFrameGroupBy property)
corrwith() (pandas.DataFrame method)
count() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.DataFrame method)
(pandas.Series method)
(pandas.Series.str method)
cov (pandas.core.groupby.DataFrameGroupBy property)
cov() (pandas.core.window.ewm.ExponentialMovingWindow method)
(pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.DataFrame method)
(pandas.Series method)
crosstab() (in module pandas)
CSSWarning
ctime() (pandas.Timestamp method)
cumcount() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.GroupBy method)
cummax() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.GroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
cummin() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.GroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
cumprod() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.GroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
cumsum() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.GroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
cur_sheet (pandas.ExcelWriter property)
CustomBusinessDay (class in pandas.tseries.offsets)
CustomBusinessHour (class in pandas.tseries.offsets)
CustomBusinessMonthBegin (class in pandas.tseries.offsets)
CustomBusinessMonthEnd (class in pandas.tseries.offsets)
cut() (in module pandas)
D
data_label (pandas.io.stata.StataReader property)
DatabaseError
DataError
DataFrame (class in pandas)
date (pandas.DatetimeIndex property)
(pandas.Series.dt attribute)
date() (pandas.Timestamp method)
date_format (pandas.ExcelWriter property)
date_range() (in module pandas)
DateOffset (class in pandas.tseries.offsets)
datetime_format (pandas.ExcelWriter property)
DatetimeArray (class in pandas.arrays)
DatetimeIndex (class in pandas)
DatetimeTZDtype (class in pandas)
Day (class in pandas.tseries.offsets)
day (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
day_name() (pandas.DatetimeIndex method)
(pandas.Series.dt method)
(pandas.Timestamp method)
day_of_month (pandas.tseries.offsets.SemiMonthBegin attribute)
(pandas.tseries.offsets.SemiMonthEnd attribute)
day_of_week (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
day_of_year (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
dayofweek (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
dayofyear (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
days (pandas.Series.dt attribute)
(pandas.Timedelta attribute)
(pandas.TimedeltaIndex property)
days_in_month (pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
daysinmonth (pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
decode() (pandas.Series.str method)
delete() (pandas.Index method)
delta (pandas.Timedelta attribute)
(pandas.tseries.offsets.Day attribute)
(pandas.tseries.offsets.Hour attribute)
(pandas.tseries.offsets.Micro attribute)
(pandas.tseries.offsets.Milli attribute)
(pandas.tseries.offsets.Minute attribute)
(pandas.tseries.offsets.Nano attribute)
(pandas.tseries.offsets.Second attribute)
(pandas.tseries.offsets.Tick attribute)
density (pandas.DataFrame.sparse attribute)
(pandas.Series.sparse attribute)
density() (pandas.DataFrame.plot method)
(pandas.Series.plot method)
deregister_matplotlib_converters() (in module pandas.plotting)
describe() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
describe_option (in module pandas)
diff() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
difference() (pandas.Index method)
div() (pandas.DataFrame method)
(pandas.Series method)
divide() (pandas.DataFrame method)
(pandas.Series method)
divmod() (pandas.Series method)
dot() (pandas.DataFrame method)
(pandas.Series method)
drop() (pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
drop_duplicates() (pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
droplevel() (pandas.DataFrame method)
(pandas.Index method)
(pandas.MultiIndex method)
(pandas.Series method)
dropna() (pandas.api.extensions.ExtensionArray method)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
dst() (pandas.Timestamp method)
dt() (pandas.Series method)
dtype (pandas.api.extensions.ExtensionArray property)
(pandas.Categorical property)
(pandas.Index attribute)
(pandas.Series property)
dtypes (pandas.DataFrame property)
(pandas.MultiIndex attribute)
(pandas.Series property)
DtypeWarning
duplicated() (pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
DuplicateLabelError
E
Easter (class in pandas.tseries.offsets)
empty (pandas.DataFrame property)
(pandas.Index property)
(pandas.Series property)
empty() (pandas.api.extensions.ExtensionDtype method)
EmptyDataError
encode() (pandas.Series.str method)
end (pandas.tseries.offsets.BusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessHour attribute)
end_time (pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
endswith() (pandas.Series.str method)
engine (pandas.ExcelWriter property)
env (pandas.io.formats.style.Styler attribute)
eq() (pandas.DataFrame method)
(pandas.Series method)
equals() (pandas.api.extensions.ExtensionArray method)
(pandas.CategoricalIndex method)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
eval() (in module pandas)
(pandas.DataFrame method)
ewm() (pandas.DataFrame method)
(pandas.Series method)
ExcelWriter (class in pandas)
expanding() (pandas.DataFrame method)
(pandas.Series method)
explode() (pandas.DataFrame method)
(pandas.Series method)
export() (pandas.io.formats.style.Styler method)
ExtensionArray (class in pandas.api.extensions)
ExtensionDtype (class in pandas.api.extensions)
extract() (pandas.Series.str method)
extractall() (pandas.Series.str method)
F
factorize() (in module pandas)
(pandas.api.extensions.ExtensionArray method)
(pandas.Index method)
(pandas.Series method)
ffill() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.DataFrame method)
(pandas.Series method)
fill_value (pandas.Series.sparse attribute)
fillna (pandas.core.groupby.DataFrameGroupBy property)
fillna() (pandas.api.extensions.ExtensionArray method)
(pandas.core.resample.Resampler method)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
filter() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
find() (pandas.Series.str method)
findall() (pandas.Series.str method)
first() (pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.DataFrame method)
(pandas.Series method)
first_valid_index() (pandas.DataFrame method)
(pandas.Series method)
FixedForwardWindowIndexer (class in pandas.api.indexers)
Flags (class in pandas)
flags (pandas.DataFrame property)
(pandas.Series property)
Float64Index (class in pandas)
floor() (pandas.DatetimeIndex method)
(pandas.Series.dt method)
(pandas.Timedelta method)
(pandas.TimedeltaIndex method)
(pandas.Timestamp method)
floordiv() (pandas.DataFrame method)
(pandas.Series method)
fold (pandas.Timestamp attribute)
format() (pandas.Index method)
(pandas.io.formats.style.Styler method)
format_index() (pandas.io.formats.style.Styler method)
freq (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodDtype property)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timedelta attribute)
(pandas.Timestamp attribute)
freqstr (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Timestamp property)
(pandas.tseries.offsets.BQuarterBegin attribute)
(pandas.tseries.offsets.BQuarterEnd attribute)
(pandas.tseries.offsets.BusinessDay attribute)
(pandas.tseries.offsets.BusinessHour attribute)
(pandas.tseries.offsets.BusinessMonthBegin attribute)
(pandas.tseries.offsets.BusinessMonthEnd attribute)
(pandas.tseries.offsets.BYearBegin attribute)
(pandas.tseries.offsets.BYearEnd attribute)
(pandas.tseries.offsets.CustomBusinessDay attribute)
(pandas.tseries.offsets.CustomBusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessMonthBegin attribute)
(pandas.tseries.offsets.CustomBusinessMonthEnd attribute)
(pandas.tseries.offsets.DateOffset attribute)
(pandas.tseries.offsets.Day attribute)
(pandas.tseries.offsets.Easter attribute)
(pandas.tseries.offsets.FY5253 attribute)
(pandas.tseries.offsets.FY5253Quarter attribute)
(pandas.tseries.offsets.Hour attribute)
(pandas.tseries.offsets.LastWeekOfMonth attribute)
(pandas.tseries.offsets.Micro attribute)
(pandas.tseries.offsets.Milli attribute)
(pandas.tseries.offsets.Minute attribute)
(pandas.tseries.offsets.MonthBegin attribute)
(pandas.tseries.offsets.MonthEnd attribute)
(pandas.tseries.offsets.Nano attribute)
(pandas.tseries.offsets.QuarterBegin attribute)
(pandas.tseries.offsets.QuarterEnd attribute)
(pandas.tseries.offsets.Second attribute)
(pandas.tseries.offsets.SemiMonthBegin attribute)
(pandas.tseries.offsets.SemiMonthEnd attribute)
(pandas.tseries.offsets.Tick attribute)
(pandas.tseries.offsets.Week attribute)
(pandas.tseries.offsets.WeekOfMonth attribute)
(pandas.tseries.offsets.YearBegin attribute)
(pandas.tseries.offsets.YearEnd attribute)
from_arrays() (pandas.arrays.IntervalArray class method)
(pandas.IntervalIndex class method)
(pandas.MultiIndex class method)
from_breaks() (pandas.arrays.IntervalArray class method)
(pandas.IntervalIndex class method)
from_codes() (pandas.Categorical class method)
from_coo() (pandas.Series.sparse class method)
from_custom_template() (pandas.io.formats.style.Styler class method)
from_dataframe() (in module pandas.api.interchange)
from_dict() (pandas.DataFrame class method)
from_dummies() (in module pandas)
from_frame() (pandas.MultiIndex class method)
from_product() (pandas.MultiIndex class method)
from_range() (pandas.RangeIndex class method)
from_records() (pandas.DataFrame class method)
from_spmatrix() (pandas.DataFrame.sparse class method)
from_tuples() (pandas.arrays.IntervalArray class method)
(pandas.IntervalIndex class method)
(pandas.MultiIndex class method)
fromisocalendar() (pandas.Timestamp method)
fromisoformat() (pandas.Timestamp method)
fromordinal() (pandas.Timestamp class method)
fromtimestamp() (pandas.Timestamp class method)
fullmatch() (pandas.Series.str method)
FY5253 (class in pandas.tseries.offsets)
FY5253Quarter (class in pandas.tseries.offsets)
G
ge() (pandas.DataFrame method)
(pandas.Series method)
get() (pandas.DataFrame method)
(pandas.HDFStore method)
(pandas.Series method)
(pandas.Series.str method)
get_dummies() (in module pandas)
(pandas.Series.str method)
get_group() (pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
get_indexer() (pandas.Index method)
(pandas.IntervalIndex method)
(pandas.MultiIndex method)
get_indexer_for() (pandas.Index method)
get_indexer_non_unique() (pandas.Index method)
get_level_values() (pandas.Index method)
(pandas.MultiIndex method)
get_loc() (pandas.Index method)
(pandas.IntervalIndex method)
(pandas.MultiIndex method)
get_loc_level() (pandas.MultiIndex method)
get_locs() (pandas.MultiIndex method)
get_option (in module pandas)
get_rule_code_suffix() (pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
get_slice_bound() (pandas.Index method)
get_value() (pandas.Index method)
get_weeks() (pandas.tseries.offsets.FY5253Quarter method)
get_window_bounds() (pandas.api.indexers.BaseIndexer method)
(pandas.api.indexers.FixedForwardWindowIndexer method)
(pandas.api.indexers.VariableOffsetWindowIndexer method)
get_year_end() (pandas.tseries.offsets.FY5253 method)
groupby() (pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
Grouper (class in pandas)
groups (pandas.core.groupby.GroupBy property)
(pandas.core.resample.Resampler property)
groups() (pandas.HDFStore method)
gt() (pandas.DataFrame method)
(pandas.Series method)
H
handles (pandas.ExcelWriter property)
has_duplicates (pandas.Index property)
hash_array() (in module pandas.util)
hash_pandas_object() (in module pandas.util)
hasnans (pandas.Index attribute)
(pandas.Series property)
head() (pandas.core.groupby.GroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
hexbin() (pandas.DataFrame.plot method)
hide() (pandas.io.formats.style.Styler method)
hide_columns() (pandas.io.formats.style.Styler method)
hide_index() (pandas.io.formats.style.Styler method)
highlight_between() (pandas.io.formats.style.Styler method)
highlight_max() (pandas.io.formats.style.Styler method)
highlight_min() (pandas.io.formats.style.Styler method)
highlight_null() (pandas.io.formats.style.Styler method)
highlight_quantile() (pandas.io.formats.style.Styler method)
hist (pandas.core.groupby.DataFrameGroupBy property)
(pandas.core.groupby.SeriesGroupBy property)
hist() (pandas.DataFrame method)
(pandas.DataFrame.plot method)
(pandas.Series method)
(pandas.Series.plot method)
holds_integer() (pandas.Index method)
holidays (pandas.tseries.offsets.BusinessDay attribute)
(pandas.tseries.offsets.BusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessDay attribute)
(pandas.tseries.offsets.CustomBusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessMonthBegin attribute)
(pandas.tseries.offsets.CustomBusinessMonthEnd attribute)
Hour (class in pandas.tseries.offsets)
hour (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
I
iat (pandas.DataFrame property)
(pandas.Series property)
identical() (pandas.Index method)
idxmax() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
idxmin() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
if_sheet_exists (pandas.ExcelWriter property)
iloc (pandas.DataFrame property)
(pandas.Series property)
IncompatibilityWarning
Index (class in pandas)
index (pandas.DataFrame attribute)
(pandas.Series attribute)
index() (pandas.Series.str method)
indexer_at_time() (pandas.DatetimeIndex method)
indexer_between_time() (pandas.DatetimeIndex method)
IndexingError
IndexSlice (in module pandas)
indices (pandas.core.groupby.GroupBy property)
(pandas.core.resample.Resampler property)
infer_dtype() (in module pandas.api.types)
infer_freq() (in module pandas)
infer_objects() (pandas.DataFrame method)
(pandas.Series method)
inferred_freq (pandas.DatetimeIndex attribute)
(pandas.TimedeltaIndex attribute)
inferred_type (pandas.Index attribute)
info() (pandas.DataFrame method)
(pandas.HDFStore method)
(pandas.Series method)
insert() (pandas.api.extensions.ExtensionArray method)
(pandas.DataFrame method)
(pandas.Index method)
Int16Dtype (class in pandas)
Int32Dtype (class in pandas)
Int64Dtype (class in pandas)
Int64Index (class in pandas)
Int8Dtype (class in pandas)
IntCastingNaNError
IntegerArray (class in pandas.arrays)
interpolate() (pandas.core.resample.Resampler method)
(pandas.DataFrame method)
(pandas.Series method)
intersection() (pandas.Index method)
Interval (class in pandas)
interval_range() (in module pandas)
IntervalArray (class in pandas.arrays)
IntervalDtype (class in pandas)
IntervalIndex (class in pandas)
InvalidColumnName
InvalidIndexError
is_() (pandas.Index method)
is_all_dates (pandas.Index attribute)
is_anchored() (pandas.tseries.offsets.BQuarterBegin method)
(pandas.tseries.offsets.BQuarterEnd method)
(pandas.tseries.offsets.BusinessDay method)
(pandas.tseries.offsets.BusinessHour method)
(pandas.tseries.offsets.BusinessMonthBegin method)
(pandas.tseries.offsets.BusinessMonthEnd method)
(pandas.tseries.offsets.BYearBegin method)
(pandas.tseries.offsets.BYearEnd method)
(pandas.tseries.offsets.CustomBusinessDay method)
(pandas.tseries.offsets.CustomBusinessHour method)
(pandas.tseries.offsets.CustomBusinessMonthBegin method)
(pandas.tseries.offsets.CustomBusinessMonthEnd method)
(pandas.tseries.offsets.DateOffset method)
(pandas.tseries.offsets.Day method)
(pandas.tseries.offsets.Easter method)
(pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
(pandas.tseries.offsets.Hour method)
(pandas.tseries.offsets.LastWeekOfMonth method)
(pandas.tseries.offsets.Micro method)
(pandas.tseries.offsets.Milli method)
(pandas.tseries.offsets.Minute method)
(pandas.tseries.offsets.MonthBegin method)
(pandas.tseries.offsets.MonthEnd method)
(pandas.tseries.offsets.Nano method)
(pandas.tseries.offsets.QuarterBegin method)
(pandas.tseries.offsets.QuarterEnd method)
(pandas.tseries.offsets.Second method)
(pandas.tseries.offsets.SemiMonthBegin method)
(pandas.tseries.offsets.SemiMonthEnd method)
(pandas.tseries.offsets.Tick method)
(pandas.tseries.offsets.Week method)
(pandas.tseries.offsets.WeekOfMonth method)
(pandas.tseries.offsets.YearBegin method)
(pandas.tseries.offsets.YearEnd method)
is_bool() (in module pandas.api.types)
is_bool_dtype() (in module pandas.api.types)
is_boolean() (pandas.Index method)
is_categorical() (in module pandas.api.types)
(pandas.Index method)
is_categorical_dtype() (in module pandas.api.types)
is_complex() (in module pandas.api.types)
is_complex_dtype() (in module pandas.api.types)
is_datetime64_any_dtype() (in module pandas.api.types)
is_datetime64_dtype() (in module pandas.api.types)
is_datetime64_ns_dtype() (in module pandas.api.types)
is_datetime64tz_dtype() (in module pandas.api.types)
is_dict_like() (in module pandas.api.types)
is_dtype() (pandas.api.extensions.ExtensionDtype class method)
is_empty (pandas.arrays.IntervalArray attribute)
(pandas.Interval attribute)
(pandas.IntervalIndex property)
is_extension_array_dtype() (in module pandas.api.types)
is_extension_type() (in module pandas.api.types)
is_file_like() (in module pandas.api.types)
is_float() (in module pandas.api.types)
is_float_dtype() (in module pandas.api.types)
is_floating() (pandas.Index method)
is_hashable() (in module pandas.api.types)
is_int64_dtype() (in module pandas.api.types)
is_integer() (in module pandas.api.types)
(pandas.Index method)
is_integer_dtype() (in module pandas.api.types)
is_interval() (in module pandas.api.types)
(pandas.Index method)
is_interval_dtype() (in module pandas.api.types)
is_iterator() (in module pandas.api.types)
is_leap_year (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
is_list_like() (in module pandas.api.types)
is_mixed() (pandas.Index method)
is_monotonic (pandas.Index property)
(pandas.Series property)
is_monotonic_decreasing (pandas.core.groupby.SeriesGroupBy property)
(pandas.Index property)
(pandas.Series property)
is_monotonic_increasing (pandas.core.groupby.SeriesGroupBy property)
(pandas.Index property)
(pandas.Series property)
is_month_end (pandas.DatetimeIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
is_month_end() (pandas.tseries.offsets.BQuarterBegin method)
(pandas.tseries.offsets.BQuarterEnd method)
(pandas.tseries.offsets.BusinessDay method)
(pandas.tseries.offsets.BusinessHour method)
(pandas.tseries.offsets.BusinessMonthBegin method)
(pandas.tseries.offsets.BusinessMonthEnd method)
(pandas.tseries.offsets.BYearBegin method)
(pandas.tseries.offsets.BYearEnd method)
(pandas.tseries.offsets.CustomBusinessDay method)
(pandas.tseries.offsets.CustomBusinessHour method)
(pandas.tseries.offsets.CustomBusinessMonthBegin method)
(pandas.tseries.offsets.CustomBusinessMonthEnd method)
(pandas.tseries.offsets.DateOffset method)
(pandas.tseries.offsets.Day method)
(pandas.tseries.offsets.Easter method)
(pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
(pandas.tseries.offsets.Hour method)
(pandas.tseries.offsets.LastWeekOfMonth method)
(pandas.tseries.offsets.Micro method)
(pandas.tseries.offsets.Milli method)
(pandas.tseries.offsets.Minute method)
(pandas.tseries.offsets.MonthBegin method)
(pandas.tseries.offsets.MonthEnd method)
(pandas.tseries.offsets.Nano method)
(pandas.tseries.offsets.QuarterBegin method)
(pandas.tseries.offsets.QuarterEnd method)
(pandas.tseries.offsets.Second method)
(pandas.tseries.offsets.SemiMonthBegin method)
(pandas.tseries.offsets.SemiMonthEnd method)
(pandas.tseries.offsets.Tick method)
(pandas.tseries.offsets.Week method)
(pandas.tseries.offsets.WeekOfMonth method)
(pandas.tseries.offsets.YearBegin method)
(pandas.tseries.offsets.YearEnd method)
is_month_start (pandas.DatetimeIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
is_month_start() (pandas.tseries.offsets.BQuarterBegin method)
(pandas.tseries.offsets.BQuarterEnd method)
(pandas.tseries.offsets.BusinessDay method)
(pandas.tseries.offsets.BusinessHour method)
(pandas.tseries.offsets.BusinessMonthBegin method)
(pandas.tseries.offsets.BusinessMonthEnd method)
(pandas.tseries.offsets.BYearBegin method)
(pandas.tseries.offsets.BYearEnd method)
(pandas.tseries.offsets.CustomBusinessDay method)
(pandas.tseries.offsets.CustomBusinessHour method)
(pandas.tseries.offsets.CustomBusinessMonthBegin method)
(pandas.tseries.offsets.CustomBusinessMonthEnd method)
(pandas.tseries.offsets.DateOffset method)
(pandas.tseries.offsets.Day method)
(pandas.tseries.offsets.Easter method)
(pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
(pandas.tseries.offsets.Hour method)
(pandas.tseries.offsets.LastWeekOfMonth method)
(pandas.tseries.offsets.Micro method)
(pandas.tseries.offsets.Milli method)
(pandas.tseries.offsets.Minute method)
(pandas.tseries.offsets.MonthBegin method)
(pandas.tseries.offsets.MonthEnd method)
(pandas.tseries.offsets.Nano method)
(pandas.tseries.offsets.QuarterBegin method)
(pandas.tseries.offsets.QuarterEnd method)
(pandas.tseries.offsets.Second method)
(pandas.tseries.offsets.SemiMonthBegin method)
(pandas.tseries.offsets.SemiMonthEnd method)
(pandas.tseries.offsets.Tick method)
(pandas.tseries.offsets.Week method)
(pandas.tseries.offsets.WeekOfMonth method)
(pandas.tseries.offsets.YearBegin method)
(pandas.tseries.offsets.YearEnd method)
is_named_tuple() (in module pandas.api.types)
is_non_overlapping_monotonic (pandas.arrays.IntervalArray property)
(pandas.IntervalIndex attribute)
is_number() (in module pandas.api.types)
is_numeric() (pandas.Index method)
is_numeric_dtype() (in module pandas.api.types)
is_object() (pandas.Index method)
is_object_dtype() (in module pandas.api.types)
is_on_offset() (pandas.tseries.offsets.BQuarterBegin method)
(pandas.tseries.offsets.BQuarterEnd method)
(pandas.tseries.offsets.BusinessDay method)
(pandas.tseries.offsets.BusinessHour method)
(pandas.tseries.offsets.BusinessMonthBegin method)
(pandas.tseries.offsets.BusinessMonthEnd method)
(pandas.tseries.offsets.BYearBegin method)
(pandas.tseries.offsets.BYearEnd method)
(pandas.tseries.offsets.CustomBusinessDay method)
(pandas.tseries.offsets.CustomBusinessHour method)
(pandas.tseries.offsets.CustomBusinessMonthBegin method)
(pandas.tseries.offsets.CustomBusinessMonthEnd method)
(pandas.tseries.offsets.DateOffset method)
(pandas.tseries.offsets.Day method)
(pandas.tseries.offsets.Easter method)
(pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
(pandas.tseries.offsets.Hour method)
(pandas.tseries.offsets.LastWeekOfMonth method)
(pandas.tseries.offsets.Micro method)
(pandas.tseries.offsets.Milli method)
(pandas.tseries.offsets.Minute method)
(pandas.tseries.offsets.MonthBegin method)
(pandas.tseries.offsets.MonthEnd method)
(pandas.tseries.offsets.Nano method)
(pandas.tseries.offsets.QuarterBegin method)
(pandas.tseries.offsets.QuarterEnd method)
(pandas.tseries.offsets.Second method)
(pandas.tseries.offsets.SemiMonthBegin method)
(pandas.tseries.offsets.SemiMonthEnd method)
(pandas.tseries.offsets.Tick method)
(pandas.tseries.offsets.Week method)
(pandas.tseries.offsets.WeekOfMonth method)
(pandas.tseries.offsets.YearBegin method)
(pandas.tseries.offsets.YearEnd method)
is_overlapping (pandas.IntervalIndex property)
is_period_dtype() (in module pandas.api.types)
is_populated (pandas.Timedelta attribute)
is_quarter_end (pandas.DatetimeIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
is_quarter_end() (pandas.tseries.offsets.BQuarterBegin method)
(pandas.tseries.offsets.BQuarterEnd method)
(pandas.tseries.offsets.BusinessDay method)
(pandas.tseries.offsets.BusinessHour method)
(pandas.tseries.offsets.BusinessMonthBegin method)
(pandas.tseries.offsets.BusinessMonthEnd method)
(pandas.tseries.offsets.BYearBegin method)
(pandas.tseries.offsets.BYearEnd method)
(pandas.tseries.offsets.CustomBusinessDay method)
(pandas.tseries.offsets.CustomBusinessHour method)
(pandas.tseries.offsets.CustomBusinessMonthBegin method)
(pandas.tseries.offsets.CustomBusinessMonthEnd method)
(pandas.tseries.offsets.DateOffset method)
(pandas.tseries.offsets.Day method)
(pandas.tseries.offsets.Easter method)
(pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
(pandas.tseries.offsets.Hour method)
(pandas.tseries.offsets.LastWeekOfMonth method)
(pandas.tseries.offsets.Micro method)
(pandas.tseries.offsets.Milli method)
(pandas.tseries.offsets.Minute method)
(pandas.tseries.offsets.MonthBegin method)
(pandas.tseries.offsets.MonthEnd method)
(pandas.tseries.offsets.Nano method)
(pandas.tseries.offsets.QuarterBegin method)
(pandas.tseries.offsets.QuarterEnd method)
(pandas.tseries.offsets.Second method)
(pandas.tseries.offsets.SemiMonthBegin method)
(pandas.tseries.offsets.SemiMonthEnd method)
(pandas.tseries.offsets.Tick method)
(pandas.tseries.offsets.Week method)
(pandas.tseries.offsets.WeekOfMonth method)
(pandas.tseries.offsets.YearBegin method)
(pandas.tseries.offsets.YearEnd method)
is_quarter_start (pandas.DatetimeIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
is_quarter_start() (pandas.tseries.offsets.BQuarterBegin method)
(pandas.tseries.offsets.BQuarterEnd method)
(pandas.tseries.offsets.BusinessDay method)
(pandas.tseries.offsets.BusinessHour method)
(pandas.tseries.offsets.BusinessMonthBegin method)
(pandas.tseries.offsets.BusinessMonthEnd method)
(pandas.tseries.offsets.BYearBegin method)
(pandas.tseries.offsets.BYearEnd method)
(pandas.tseries.offsets.CustomBusinessDay method)
(pandas.tseries.offsets.CustomBusinessHour method)
(pandas.tseries.offsets.CustomBusinessMonthBegin method)
(pandas.tseries.offsets.CustomBusinessMonthEnd method)
(pandas.tseries.offsets.DateOffset method)
(pandas.tseries.offsets.Day method)
(pandas.tseries.offsets.Easter method)
(pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
(pandas.tseries.offsets.Hour method)
(pandas.tseries.offsets.LastWeekOfMonth method)
(pandas.tseries.offsets.Micro method)
(pandas.tseries.offsets.Milli method)
(pandas.tseries.offsets.Minute method)
(pandas.tseries.offsets.MonthBegin method)
(pandas.tseries.offsets.MonthEnd method)
(pandas.tseries.offsets.Nano method)
(pandas.tseries.offsets.QuarterBegin method)
(pandas.tseries.offsets.QuarterEnd method)
(pandas.tseries.offsets.Second method)
(pandas.tseries.offsets.SemiMonthBegin method)
(pandas.tseries.offsets.SemiMonthEnd method)
(pandas.tseries.offsets.Tick method)
(pandas.tseries.offsets.Week method)
(pandas.tseries.offsets.WeekOfMonth method)
(pandas.tseries.offsets.YearBegin method)
(pandas.tseries.offsets.YearEnd method)
is_re() (in module pandas.api.types)
is_re_compilable() (in module pandas.api.types)
is_scalar() (in module pandas.api.types)
is_signed_integer_dtype() (in module pandas.api.types)
is_sparse() (in module pandas.api.types)
is_string_dtype() (in module pandas.api.types)
is_timedelta64_dtype() (in module pandas.api.types)
is_timedelta64_ns_dtype() (in module pandas.api.types)
is_type_compatible() (pandas.Index method)
is_unique (pandas.Index attribute)
(pandas.Series property)
is_unsigned_integer_dtype() (in module pandas.api.types)
is_year_end (pandas.DatetimeIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
is_year_end() (pandas.tseries.offsets.BQuarterBegin method)
(pandas.tseries.offsets.BQuarterEnd method)
(pandas.tseries.offsets.BusinessDay method)
(pandas.tseries.offsets.BusinessHour method)
(pandas.tseries.offsets.BusinessMonthBegin method)
(pandas.tseries.offsets.BusinessMonthEnd method)
(pandas.tseries.offsets.BYearBegin method)
(pandas.tseries.offsets.BYearEnd method)
(pandas.tseries.offsets.CustomBusinessDay method)
(pandas.tseries.offsets.CustomBusinessHour method)
(pandas.tseries.offsets.CustomBusinessMonthBegin method)
(pandas.tseries.offsets.CustomBusinessMonthEnd method)
(pandas.tseries.offsets.DateOffset method)
(pandas.tseries.offsets.Day method)
(pandas.tseries.offsets.Easter method)
(pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
(pandas.tseries.offsets.Hour method)
(pandas.tseries.offsets.LastWeekOfMonth method)
(pandas.tseries.offsets.Micro method)
(pandas.tseries.offsets.Milli method)
(pandas.tseries.offsets.Minute method)
(pandas.tseries.offsets.MonthBegin method)
(pandas.tseries.offsets.MonthEnd method)
(pandas.tseries.offsets.Nano method)
(pandas.tseries.offsets.QuarterBegin method)
(pandas.tseries.offsets.QuarterEnd method)
(pandas.tseries.offsets.Second method)
(pandas.tseries.offsets.SemiMonthBegin method)
(pandas.tseries.offsets.SemiMonthEnd method)
(pandas.tseries.offsets.Tick method)
(pandas.tseries.offsets.Week method)
(pandas.tseries.offsets.WeekOfMonth method)
(pandas.tseries.offsets.YearBegin method)
(pandas.tseries.offsets.YearEnd method)
is_year_start (pandas.DatetimeIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
is_year_start() (pandas.tseries.offsets.BQuarterBegin method)
(pandas.tseries.offsets.BQuarterEnd method)
(pandas.tseries.offsets.BusinessDay method)
(pandas.tseries.offsets.BusinessHour method)
(pandas.tseries.offsets.BusinessMonthBegin method)
(pandas.tseries.offsets.BusinessMonthEnd method)
(pandas.tseries.offsets.BYearBegin method)
(pandas.tseries.offsets.BYearEnd method)
(pandas.tseries.offsets.CustomBusinessDay method)
(pandas.tseries.offsets.CustomBusinessHour method)
(pandas.tseries.offsets.CustomBusinessMonthBegin method)
(pandas.tseries.offsets.CustomBusinessMonthEnd method)
(pandas.tseries.offsets.DateOffset method)
(pandas.tseries.offsets.Day method)
(pandas.tseries.offsets.Easter method)
(pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
(pandas.tseries.offsets.Hour method)
(pandas.tseries.offsets.LastWeekOfMonth method)
(pandas.tseries.offsets.Micro method)
(pandas.tseries.offsets.Milli method)
(pandas.tseries.offsets.Minute method)
(pandas.tseries.offsets.MonthBegin method)
(pandas.tseries.offsets.MonthEnd method)
(pandas.tseries.offsets.Nano method)
(pandas.tseries.offsets.QuarterBegin method)
(pandas.tseries.offsets.QuarterEnd method)
(pandas.tseries.offsets.Second method)
(pandas.tseries.offsets.SemiMonthBegin method)
(pandas.tseries.offsets.SemiMonthEnd method)
(pandas.tseries.offsets.Tick method)
(pandas.tseries.offsets.Week method)
(pandas.tseries.offsets.WeekOfMonth method)
(pandas.tseries.offsets.YearBegin method)
(pandas.tseries.offsets.YearEnd method)
isalnum() (pandas.Series.str method)
isalpha() (pandas.Series.str method)
isAnchored() (pandas.tseries.offsets.BQuarterBegin method)
(pandas.tseries.offsets.BQuarterEnd method)
(pandas.tseries.offsets.BusinessDay method)
(pandas.tseries.offsets.BusinessHour method)
(pandas.tseries.offsets.BusinessMonthBegin method)
(pandas.tseries.offsets.BusinessMonthEnd method)
(pandas.tseries.offsets.BYearBegin method)
(pandas.tseries.offsets.BYearEnd method)
(pandas.tseries.offsets.CustomBusinessDay method)
(pandas.tseries.offsets.CustomBusinessHour method)
(pandas.tseries.offsets.CustomBusinessMonthBegin method)
(pandas.tseries.offsets.CustomBusinessMonthEnd method)
(pandas.tseries.offsets.DateOffset method)
(pandas.tseries.offsets.Day method)
(pandas.tseries.offsets.Easter method)
(pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
(pandas.tseries.offsets.Hour method)
(pandas.tseries.offsets.LastWeekOfMonth method)
(pandas.tseries.offsets.Micro method)
(pandas.tseries.offsets.Milli method)
(pandas.tseries.offsets.Minute method)
(pandas.tseries.offsets.MonthBegin method)
(pandas.tseries.offsets.MonthEnd method)
(pandas.tseries.offsets.Nano method)
(pandas.tseries.offsets.QuarterBegin method)
(pandas.tseries.offsets.QuarterEnd method)
(pandas.tseries.offsets.Second method)
(pandas.tseries.offsets.SemiMonthBegin method)
(pandas.tseries.offsets.SemiMonthEnd method)
(pandas.tseries.offsets.Tick method)
(pandas.tseries.offsets.Week method)
(pandas.tseries.offsets.WeekOfMonth method)
(pandas.tseries.offsets.YearBegin method)
(pandas.tseries.offsets.YearEnd method)
isdecimal() (pandas.Series.str method)
isdigit() (pandas.Series.str method)
isetitem() (pandas.DataFrame method)
isin() (pandas.api.extensions.ExtensionArray method)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
islower() (pandas.Series.str method)
isna() (in module pandas)
(pandas.api.extensions.ExtensionArray method)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
isnull() (in module pandas)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
isnumeric() (pandas.Series.str method)
isocalendar() (pandas.Series.dt method)
(pandas.Timestamp method)
isoformat() (pandas.Timedelta method)
(pandas.Timestamp method)
isoweekday() (pandas.Timestamp method)
isspace() (pandas.Series.str method)
istitle() (pandas.Series.str method)
isupper() (pandas.Series.str method)
item() (pandas.Index method)
(pandas.Series method)
items() (pandas.DataFrame method)
(pandas.Series method)
iteritems() (pandas.DataFrame method)
(pandas.Series method)
iterrows() (pandas.DataFrame method)
itertuples() (pandas.DataFrame method)
J
join() (pandas.DataFrame method)
(pandas.Index method)
(pandas.Series.str method)
json_normalize() (in module pandas)
K
kde() (pandas.DataFrame.plot method)
(pandas.Series.plot method)
keys() (pandas.DataFrame method)
(pandas.HDFStore method)
(pandas.Series method)
kind (pandas.api.extensions.ExtensionDtype property)
kurt() (pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.DataFrame method)
(pandas.Series method)
kurtosis() (pandas.DataFrame method)
(pandas.Series method)
kwds (pandas.tseries.offsets.BQuarterBegin attribute)
(pandas.tseries.offsets.BQuarterEnd attribute)
(pandas.tseries.offsets.BusinessDay attribute)
(pandas.tseries.offsets.BusinessHour attribute)
(pandas.tseries.offsets.BusinessMonthBegin attribute)
(pandas.tseries.offsets.BusinessMonthEnd attribute)
(pandas.tseries.offsets.BYearBegin attribute)
(pandas.tseries.offsets.BYearEnd attribute)
(pandas.tseries.offsets.CustomBusinessDay attribute)
(pandas.tseries.offsets.CustomBusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessMonthBegin attribute)
(pandas.tseries.offsets.CustomBusinessMonthEnd attribute)
(pandas.tseries.offsets.DateOffset attribute)
(pandas.tseries.offsets.Day attribute)
(pandas.tseries.offsets.Easter attribute)
(pandas.tseries.offsets.FY5253 attribute)
(pandas.tseries.offsets.FY5253Quarter attribute)
(pandas.tseries.offsets.Hour attribute)
(pandas.tseries.offsets.LastWeekOfMonth attribute)
(pandas.tseries.offsets.Micro attribute)
(pandas.tseries.offsets.Milli attribute)
(pandas.tseries.offsets.Minute attribute)
(pandas.tseries.offsets.MonthBegin attribute)
(pandas.tseries.offsets.MonthEnd attribute)
(pandas.tseries.offsets.Nano attribute)
(pandas.tseries.offsets.QuarterBegin attribute)
(pandas.tseries.offsets.QuarterEnd attribute)
(pandas.tseries.offsets.Second attribute)
(pandas.tseries.offsets.SemiMonthBegin attribute)
(pandas.tseries.offsets.SemiMonthEnd attribute)
(pandas.tseries.offsets.Tick attribute)
(pandas.tseries.offsets.Week attribute)
(pandas.tseries.offsets.WeekOfMonth attribute)
(pandas.tseries.offsets.YearBegin attribute)
(pandas.tseries.offsets.YearEnd attribute)
L
lag_plot() (in module pandas.plotting)
last() (pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.DataFrame method)
(pandas.Series method)
last_valid_index() (pandas.DataFrame method)
(pandas.Series method)
LastWeekOfMonth (class in pandas.tseries.offsets)
le() (pandas.DataFrame method)
(pandas.Series method)
left (pandas.arrays.IntervalArray property)
(pandas.Interval attribute)
(pandas.IntervalIndex attribute)
len() (pandas.Series.str method)
length (pandas.arrays.IntervalArray property)
(pandas.Interval attribute)
(pandas.IntervalIndex property)
levels (pandas.MultiIndex attribute)
levshape (pandas.MultiIndex property)
line() (pandas.DataFrame.plot method)
(pandas.Series.plot method)
ljust() (pandas.Series.str method)
loader (pandas.io.formats.style.Styler attribute)
loc (pandas.DataFrame property)
(pandas.Series property)
lookup() (pandas.DataFrame method)
lower() (pandas.Series.str method)
lstrip() (pandas.Series.str method)
lt() (pandas.DataFrame method)
(pandas.Series method)
M
m_offset (pandas.tseries.offsets.CustomBusinessMonthBegin attribute)
(pandas.tseries.offsets.CustomBusinessMonthEnd attribute)
mad (pandas.core.groupby.DataFrameGroupBy property)
mad() (pandas.DataFrame method)
(pandas.Series method)
map() (pandas.CategoricalIndex method)
(pandas.Index method)
(pandas.Series method)
mask() (pandas.DataFrame method)
(pandas.Series method)
match() (pandas.Series.str method)
max (pandas.Timedelta attribute)
(pandas.Timestamp attribute)
max() (pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
mean() (pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.core.window.ewm.ExponentialMovingWindow method)
(pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.core.window.rolling.Window method)
(pandas.DataFrame method)
(pandas.DatetimeIndex method)
(pandas.Series method)
(pandas.TimedeltaIndex method)
median() (pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.DataFrame method)
(pandas.Series method)
melt() (in module pandas)
(pandas.DataFrame method)
memory_usage() (pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
merge() (in module pandas)
(pandas.DataFrame method)
merge_asof() (in module pandas)
merge_ordered() (in module pandas)
MergeError
Micro (class in pandas.tseries.offsets)
microsecond (pandas.DatetimeIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
microseconds (pandas.Series.dt attribute)
(pandas.Timedelta attribute)
(pandas.TimedeltaIndex property)
mid (pandas.arrays.IntervalArray property)
(pandas.Interval attribute)
(pandas.IntervalIndex attribute)
Milli (class in pandas.tseries.offsets)
min (pandas.Timedelta attribute)
(pandas.Timestamp attribute)
min() (pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
Minute (class in pandas.tseries.offsets)
minute (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
mod() (pandas.DataFrame method)
(pandas.Series method)
mode() (pandas.DataFrame method)
(pandas.Series method)
module (pandas)
month (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
(pandas.tseries.offsets.BYearBegin attribute)
(pandas.tseries.offsets.BYearEnd attribute)
(pandas.tseries.offsets.YearBegin attribute)
(pandas.tseries.offsets.YearEnd attribute)
month_name() (pandas.DatetimeIndex method)
(pandas.Series.dt method)
(pandas.Timestamp method)
month_roll (pandas.tseries.offsets.CustomBusinessMonthBegin attribute)
(pandas.tseries.offsets.CustomBusinessMonthEnd attribute)
MonthBegin (class in pandas.tseries.offsets)
MonthEnd (class in pandas.tseries.offsets)
mul() (pandas.DataFrame method)
(pandas.Series method)
MultiIndex (class in pandas)
multiply() (pandas.DataFrame method)
(pandas.Series method)
N
n (pandas.tseries.offsets.BQuarterBegin attribute)
(pandas.tseries.offsets.BQuarterEnd attribute)
(pandas.tseries.offsets.BusinessDay attribute)
(pandas.tseries.offsets.BusinessHour attribute)
(pandas.tseries.offsets.BusinessMonthBegin attribute)
(pandas.tseries.offsets.BusinessMonthEnd attribute)
(pandas.tseries.offsets.BYearBegin attribute)
(pandas.tseries.offsets.BYearEnd attribute)
(pandas.tseries.offsets.CustomBusinessDay attribute)
(pandas.tseries.offsets.CustomBusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessMonthBegin attribute)
(pandas.tseries.offsets.CustomBusinessMonthEnd attribute)
(pandas.tseries.offsets.DateOffset attribute)
(pandas.tseries.offsets.Day attribute)
(pandas.tseries.offsets.Easter attribute)
(pandas.tseries.offsets.FY5253 attribute)
(pandas.tseries.offsets.FY5253Quarter attribute)
(pandas.tseries.offsets.Hour attribute)
(pandas.tseries.offsets.LastWeekOfMonth attribute)
(pandas.tseries.offsets.Micro attribute)
(pandas.tseries.offsets.Milli attribute)
(pandas.tseries.offsets.Minute attribute)
(pandas.tseries.offsets.MonthBegin attribute)
(pandas.tseries.offsets.MonthEnd attribute)
(pandas.tseries.offsets.Nano attribute)
(pandas.tseries.offsets.QuarterBegin attribute)
(pandas.tseries.offsets.QuarterEnd attribute)
(pandas.tseries.offsets.Second attribute)
(pandas.tseries.offsets.SemiMonthBegin attribute)
(pandas.tseries.offsets.SemiMonthEnd attribute)
(pandas.tseries.offsets.Tick attribute)
(pandas.tseries.offsets.Week attribute)
(pandas.tseries.offsets.WeekOfMonth attribute)
(pandas.tseries.offsets.YearBegin attribute)
(pandas.tseries.offsets.YearEnd attribute)
na_value (pandas.api.extensions.ExtensionDtype property)
name (pandas.api.extensions.ExtensionDtype property)
(pandas.Index property)
(pandas.Series property)
(pandas.tseries.offsets.BQuarterBegin attribute)
(pandas.tseries.offsets.BQuarterEnd attribute)
(pandas.tseries.offsets.BusinessDay attribute)
(pandas.tseries.offsets.BusinessHour attribute)
(pandas.tseries.offsets.BusinessMonthBegin attribute)
(pandas.tseries.offsets.BusinessMonthEnd attribute)
(pandas.tseries.offsets.BYearBegin attribute)
(pandas.tseries.offsets.BYearEnd attribute)
(pandas.tseries.offsets.CustomBusinessDay attribute)
(pandas.tseries.offsets.CustomBusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessMonthBegin attribute)
(pandas.tseries.offsets.CustomBusinessMonthEnd attribute)
(pandas.tseries.offsets.DateOffset attribute)
(pandas.tseries.offsets.Day attribute)
(pandas.tseries.offsets.Easter attribute)
(pandas.tseries.offsets.FY5253 attribute)
(pandas.tseries.offsets.FY5253Quarter attribute)
(pandas.tseries.offsets.Hour attribute)
(pandas.tseries.offsets.LastWeekOfMonth attribute)
(pandas.tseries.offsets.Micro attribute)
(pandas.tseries.offsets.Milli attribute)
(pandas.tseries.offsets.Minute attribute)
(pandas.tseries.offsets.MonthBegin attribute)
(pandas.tseries.offsets.MonthEnd attribute)
(pandas.tseries.offsets.Nano attribute)
(pandas.tseries.offsets.QuarterBegin attribute)
(pandas.tseries.offsets.QuarterEnd attribute)
(pandas.tseries.offsets.Second attribute)
(pandas.tseries.offsets.SemiMonthBegin attribute)
(pandas.tseries.offsets.SemiMonthEnd attribute)
(pandas.tseries.offsets.Tick attribute)
(pandas.tseries.offsets.Week attribute)
(pandas.tseries.offsets.WeekOfMonth attribute)
(pandas.tseries.offsets.YearBegin attribute)
(pandas.tseries.offsets.YearEnd attribute)
names (pandas.api.extensions.ExtensionDtype property)
(pandas.Index property)
(pandas.MultiIndex property)
Nano (class in pandas.tseries.offsets)
nanos (pandas.tseries.offsets.BQuarterBegin attribute)
(pandas.tseries.offsets.BQuarterEnd attribute)
(pandas.tseries.offsets.BusinessDay attribute)
(pandas.tseries.offsets.BusinessHour attribute)
(pandas.tseries.offsets.BusinessMonthBegin attribute)
(pandas.tseries.offsets.BusinessMonthEnd attribute)
(pandas.tseries.offsets.BYearBegin attribute)
(pandas.tseries.offsets.BYearEnd attribute)
(pandas.tseries.offsets.CustomBusinessDay attribute)
(pandas.tseries.offsets.CustomBusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessMonthBegin attribute)
(pandas.tseries.offsets.CustomBusinessMonthEnd attribute)
(pandas.tseries.offsets.DateOffset attribute)
(pandas.tseries.offsets.Day attribute)
(pandas.tseries.offsets.Easter attribute)
(pandas.tseries.offsets.FY5253 attribute)
(pandas.tseries.offsets.FY5253Quarter attribute)
(pandas.tseries.offsets.Hour attribute)
(pandas.tseries.offsets.LastWeekOfMonth attribute)
(pandas.tseries.offsets.Micro attribute)
(pandas.tseries.offsets.Milli attribute)
(pandas.tseries.offsets.Minute attribute)
(pandas.tseries.offsets.MonthBegin attribute)
(pandas.tseries.offsets.MonthEnd attribute)
(pandas.tseries.offsets.Nano attribute)
(pandas.tseries.offsets.QuarterBegin attribute)
(pandas.tseries.offsets.QuarterEnd attribute)
(pandas.tseries.offsets.Second attribute)
(pandas.tseries.offsets.SemiMonthBegin attribute)
(pandas.tseries.offsets.SemiMonthEnd attribute)
(pandas.tseries.offsets.Tick attribute)
(pandas.tseries.offsets.Week attribute)
(pandas.tseries.offsets.WeekOfMonth attribute)
(pandas.tseries.offsets.YearBegin attribute)
(pandas.tseries.offsets.YearEnd attribute)
nanosecond (pandas.DatetimeIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
nanoseconds (pandas.Series.dt attribute)
(pandas.Timedelta attribute)
(pandas.TimedeltaIndex property)
nbytes (pandas.api.extensions.ExtensionArray property)
(pandas.Index property)
(pandas.Series property)
ndim (pandas.api.extensions.ExtensionArray property)
(pandas.DataFrame property)
(pandas.Index property)
(pandas.Series property)
ne() (pandas.DataFrame method)
(pandas.Series method)
nearest() (pandas.core.resample.Resampler method)
next_bday (pandas.tseries.offsets.BusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessHour attribute)
ngroup() (pandas.core.groupby.GroupBy method)
nlargest() (pandas.core.groupby.SeriesGroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
nlevels (pandas.Index property)
(pandas.MultiIndex property)
normalize (pandas.tseries.offsets.BQuarterBegin attribute)
(pandas.tseries.offsets.BQuarterEnd attribute)
(pandas.tseries.offsets.BusinessDay attribute)
(pandas.tseries.offsets.BusinessHour attribute)
(pandas.tseries.offsets.BusinessMonthBegin attribute)
(pandas.tseries.offsets.BusinessMonthEnd attribute)
(pandas.tseries.offsets.BYearBegin attribute)
(pandas.tseries.offsets.BYearEnd attribute)
(pandas.tseries.offsets.CustomBusinessDay attribute)
(pandas.tseries.offsets.CustomBusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessMonthBegin attribute)
(pandas.tseries.offsets.CustomBusinessMonthEnd attribute)
(pandas.tseries.offsets.DateOffset attribute)
(pandas.tseries.offsets.Day attribute)
(pandas.tseries.offsets.Easter attribute)
(pandas.tseries.offsets.FY5253 attribute)
(pandas.tseries.offsets.FY5253Quarter attribute)
(pandas.tseries.offsets.Hour attribute)
(pandas.tseries.offsets.LastWeekOfMonth attribute)
(pandas.tseries.offsets.Micro attribute)
(pandas.tseries.offsets.Milli attribute)
(pandas.tseries.offsets.Minute attribute)
(pandas.tseries.offsets.MonthBegin attribute)
(pandas.tseries.offsets.MonthEnd attribute)
(pandas.tseries.offsets.Nano attribute)
(pandas.tseries.offsets.QuarterBegin attribute)
(pandas.tseries.offsets.QuarterEnd attribute)
(pandas.tseries.offsets.Second attribute)
(pandas.tseries.offsets.SemiMonthBegin attribute)
(pandas.tseries.offsets.SemiMonthEnd attribute)
(pandas.tseries.offsets.Tick attribute)
(pandas.tseries.offsets.Week attribute)
(pandas.tseries.offsets.WeekOfMonth attribute)
(pandas.tseries.offsets.YearBegin attribute)
(pandas.tseries.offsets.YearEnd attribute)
normalize() (pandas.DatetimeIndex method)
(pandas.Series.dt method)
(pandas.Series.str method)
(pandas.Timestamp method)
notna() (in module pandas)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
notnull() (in module pandas)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
now() (pandas.Period method)
(pandas.Timestamp class method)
npoints (pandas.Series.sparse attribute)
nsmallest() (pandas.core.groupby.SeriesGroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
nth (pandas.core.groupby.GroupBy property)
NullFrequencyError
NumbaUtilError
NumExprClobberingError
nunique() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.resample.Resampler method)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
O
offset (pandas.tseries.offsets.BusinessDay attribute)
(pandas.tseries.offsets.BusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessDay attribute)
(pandas.tseries.offsets.CustomBusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessMonthBegin attribute)
(pandas.tseries.offsets.CustomBusinessMonthEnd attribute)
ohlc() (pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
onOffset() (pandas.tseries.offsets.BQuarterBegin method)
(pandas.tseries.offsets.BQuarterEnd method)
(pandas.tseries.offsets.BusinessDay method)
(pandas.tseries.offsets.BusinessHour method)
(pandas.tseries.offsets.BusinessMonthBegin method)
(pandas.tseries.offsets.BusinessMonthEnd method)
(pandas.tseries.offsets.BYearBegin method)
(pandas.tseries.offsets.BYearEnd method)
(pandas.tseries.offsets.CustomBusinessDay method)
(pandas.tseries.offsets.CustomBusinessHour method)
(pandas.tseries.offsets.CustomBusinessMonthBegin method)
(pandas.tseries.offsets.CustomBusinessMonthEnd method)
(pandas.tseries.offsets.DateOffset method)
(pandas.tseries.offsets.Day method)
(pandas.tseries.offsets.Easter method)
(pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
(pandas.tseries.offsets.Hour method)
(pandas.tseries.offsets.LastWeekOfMonth method)
(pandas.tseries.offsets.Micro method)
(pandas.tseries.offsets.Milli method)
(pandas.tseries.offsets.Minute method)
(pandas.tseries.offsets.MonthBegin method)
(pandas.tseries.offsets.MonthEnd method)
(pandas.tseries.offsets.Nano method)
(pandas.tseries.offsets.QuarterBegin method)
(pandas.tseries.offsets.QuarterEnd method)
(pandas.tseries.offsets.Second method)
(pandas.tseries.offsets.SemiMonthBegin method)
(pandas.tseries.offsets.SemiMonthEnd method)
(pandas.tseries.offsets.Tick method)
(pandas.tseries.offsets.Week method)
(pandas.tseries.offsets.WeekOfMonth method)
(pandas.tseries.offsets.YearBegin method)
(pandas.tseries.offsets.YearEnd method)
open_left (pandas.Interval attribute)
open_right (pandas.Interval attribute)
option_context (class in pandas)
OptionError
ordered (pandas.Categorical property)
(pandas.CategoricalDtype property)
(pandas.CategoricalIndex property)
(pandas.Series.cat attribute)
ordinal (pandas.Period attribute)
OutOfBoundsDatetime
OutOfBoundsTimedelta
overlaps() (pandas.arrays.IntervalArray method)
(pandas.Interval method)
(pandas.IntervalIndex method)
P
pad() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.DataFrame method)
(pandas.Series method)
(pandas.Series.str method)
pandas (module)
pandas_dtype() (in module pandas.api.types)
PandasArray (class in pandas.arrays)
parallel_coordinates() (in module pandas.plotting)
parse() (pandas.ExcelFile method)
ParserError
ParserWarning
partition() (pandas.Series.str method)
path (pandas.ExcelWriter property)
pct_change() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.GroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
PerformanceWarning
Period (class in pandas)
period_range() (in module pandas)
PeriodArray (class in pandas.arrays)
PeriodDtype (class in pandas)
PeriodIndex (class in pandas)
pie() (pandas.DataFrame.plot method)
(pandas.Series.plot method)
pipe() (pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.DataFrame method)
(pandas.io.formats.style.Styler method)
(pandas.Series method)
pivot() (in module pandas)
(pandas.DataFrame method)
pivot_table() (in module pandas)
(pandas.DataFrame method)
plot (pandas.core.groupby.DataFrameGroupBy property)
plot() (pandas.DataFrame method)
(pandas.Series method)
plot_params (in module pandas.plotting)
pop() (pandas.DataFrame method)
(pandas.Series method)
PossibleDataLossError
PossiblePrecisionLoss
pow() (pandas.DataFrame method)
(pandas.Series method)
prod() (pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.DataFrame method)
(pandas.Series method)
product() (pandas.DataFrame method)
(pandas.Series method)
put() (pandas.HDFStore method)
putmask() (pandas.Index method)
PyperclipException
PyperclipWindowsException
Python Enhancement Proposals
PEP 484
PEP 561
PEP 585
PEP 8#imports
Q
qcut() (in module pandas)
qtr_with_extra_week (pandas.tseries.offsets.FY5253Quarter attribute)
quantile() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.resample.Resampler method)
(pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.DataFrame method)
(pandas.Series method)
quarter (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
QuarterBegin (class in pandas.tseries.offsets)
QuarterEnd (class in pandas.tseries.offsets)
query() (pandas.DataFrame method)
qyear (pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
R
radd() (pandas.DataFrame method)
(pandas.Series method)
radviz() (in module pandas.plotting)
RangeIndex (class in pandas)
rank() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.GroupBy method)
(pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.DataFrame method)
(pandas.Series method)
ravel() (pandas.api.extensions.ExtensionArray method)
(pandas.Index method)
(pandas.Series method)
rdiv() (pandas.DataFrame method)
(pandas.Series method)
rdivmod() (pandas.Series method)
read_clipboard() (in module pandas)
read_csv() (in module pandas)
read_excel() (in module pandas)
read_feather() (in module pandas)
read_fwf() (in module pandas)
read_gbq() (in module pandas)
read_hdf() (in module pandas)
read_html() (in module pandas)
read_json() (in module pandas)
read_orc() (in module pandas)
read_parquet() (in module pandas)
read_pickle() (in module pandas)
read_sas() (in module pandas)
read_spss() (in module pandas)
read_sql() (in module pandas)
read_sql_query() (in module pandas)
read_sql_table() (in module pandas)
read_stata() (in module pandas)
read_table() (in module pandas)
read_xml() (in module pandas)
register_dataframe_accessor() (in module pandas.api.extensions)
register_extension_dtype() (in module pandas.api.extensions)
register_index_accessor() (in module pandas.api.extensions)
register_matplotlib_converters() (in module pandas.plotting)
register_series_accessor() (in module pandas.api.extensions)
reindex() (pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
reindex_like() (pandas.DataFrame method)
(pandas.Series method)
relabel_index() (pandas.io.formats.style.Styler method)
remove_categories() (pandas.CategoricalIndex method)
(pandas.Series.cat method)
remove_unused_categories() (pandas.CategoricalIndex method)
(pandas.Series.cat method)
remove_unused_levels() (pandas.MultiIndex method)
removeprefix() (pandas.Series.str method)
removesuffix() (pandas.Series.str method)
rename() (pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
rename_axis() (pandas.DataFrame method)
(pandas.Series method)
rename_categories() (pandas.CategoricalIndex method)
(pandas.Series.cat method)
render() (pandas.io.formats.style.Styler method)
reorder_categories() (pandas.CategoricalIndex method)
(pandas.Series.cat method)
reorder_levels() (pandas.DataFrame method)
(pandas.MultiIndex method)
(pandas.Series method)
repeat() (pandas.api.extensions.ExtensionArray method)
(pandas.Index method)
(pandas.Series method)
(pandas.Series.str method)
replace() (pandas.DataFrame method)
(pandas.Series method)
(pandas.Series.str method)
(pandas.Timestamp method)
resample() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
reset_index() (pandas.DataFrame method)
(pandas.Series method)
reset_option (in module pandas)
resolution (pandas.Timedelta attribute)
(pandas.Timestamp attribute)
resolution_string (pandas.Timedelta attribute)
rfind() (pandas.Series.str method)
rfloordiv() (pandas.DataFrame method)
(pandas.Series method)
right (pandas.arrays.IntervalArray property)
(pandas.Interval attribute)
(pandas.IntervalIndex attribute)
rindex() (pandas.Series.str method)
rjust() (pandas.Series.str method)
rmod() (pandas.DataFrame method)
(pandas.Series method)
rmul() (pandas.DataFrame method)
(pandas.Series method)
rollback() (pandas.tseries.offsets.BQuarterBegin method)
(pandas.tseries.offsets.BQuarterEnd method)
(pandas.tseries.offsets.BusinessDay method)
(pandas.tseries.offsets.BusinessHour method)
(pandas.tseries.offsets.BusinessMonthBegin method)
(pandas.tseries.offsets.BusinessMonthEnd method)
(pandas.tseries.offsets.BYearBegin method)
(pandas.tseries.offsets.BYearEnd method)
(pandas.tseries.offsets.CustomBusinessDay method)
(pandas.tseries.offsets.CustomBusinessHour method)
(pandas.tseries.offsets.CustomBusinessMonthBegin method)
(pandas.tseries.offsets.CustomBusinessMonthEnd method)
(pandas.tseries.offsets.DateOffset method)
(pandas.tseries.offsets.Day method)
(pandas.tseries.offsets.Easter method)
(pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
(pandas.tseries.offsets.Hour method)
(pandas.tseries.offsets.LastWeekOfMonth method)
(pandas.tseries.offsets.Micro method)
(pandas.tseries.offsets.Milli method)
(pandas.tseries.offsets.Minute method)
(pandas.tseries.offsets.MonthBegin method)
(pandas.tseries.offsets.MonthEnd method)
(pandas.tseries.offsets.Nano method)
(pandas.tseries.offsets.QuarterBegin method)
(pandas.tseries.offsets.QuarterEnd method)
(pandas.tseries.offsets.Second method)
(pandas.tseries.offsets.SemiMonthBegin method)
(pandas.tseries.offsets.SemiMonthEnd method)
(pandas.tseries.offsets.Tick method)
(pandas.tseries.offsets.Week method)
(pandas.tseries.offsets.WeekOfMonth method)
(pandas.tseries.offsets.YearBegin method)
(pandas.tseries.offsets.YearEnd method)
rollforward() (pandas.tseries.offsets.BQuarterBegin method)
(pandas.tseries.offsets.BQuarterEnd method)
(pandas.tseries.offsets.BusinessDay method)
(pandas.tseries.offsets.BusinessHour method)
(pandas.tseries.offsets.BusinessMonthBegin method)
(pandas.tseries.offsets.BusinessMonthEnd method)
(pandas.tseries.offsets.BYearBegin method)
(pandas.tseries.offsets.BYearEnd method)
(pandas.tseries.offsets.CustomBusinessDay method)
(pandas.tseries.offsets.CustomBusinessHour method)
(pandas.tseries.offsets.CustomBusinessMonthBegin method)
(pandas.tseries.offsets.CustomBusinessMonthEnd method)
(pandas.tseries.offsets.DateOffset method)
(pandas.tseries.offsets.Day method)
(pandas.tseries.offsets.Easter method)
(pandas.tseries.offsets.FY5253 method)
(pandas.tseries.offsets.FY5253Quarter method)
(pandas.tseries.offsets.Hour method)
(pandas.tseries.offsets.LastWeekOfMonth method)
(pandas.tseries.offsets.Micro method)
(pandas.tseries.offsets.Milli method)
(pandas.tseries.offsets.Minute method)
(pandas.tseries.offsets.MonthBegin method)
(pandas.tseries.offsets.MonthEnd method)
(pandas.tseries.offsets.Nano method)
(pandas.tseries.offsets.QuarterBegin method)
(pandas.tseries.offsets.QuarterEnd method)
(pandas.tseries.offsets.Second method)
(pandas.tseries.offsets.SemiMonthBegin method)
(pandas.tseries.offsets.SemiMonthEnd method)
(pandas.tseries.offsets.Tick method)
(pandas.tseries.offsets.Week method)
(pandas.tseries.offsets.WeekOfMonth method)
(pandas.tseries.offsets.YearBegin method)
(pandas.tseries.offsets.YearEnd method)
rolling() (pandas.DataFrame method)
(pandas.Series method)
round() (pandas.DataFrame method)
(pandas.DatetimeIndex method)
(pandas.Series method)
(pandas.Series.dt method)
(pandas.Timedelta method)
(pandas.TimedeltaIndex method)
(pandas.Timestamp method)
rpartition() (pandas.Series.str method)
rpow() (pandas.DataFrame method)
(pandas.Series method)
rsplit() (pandas.Series.str method)
rstrip() (pandas.Series.str method)
rsub() (pandas.DataFrame method)
(pandas.Series method)
rtruediv() (pandas.DataFrame method)
(pandas.Series method)
rule_code (pandas.tseries.offsets.BQuarterBegin attribute)
(pandas.tseries.offsets.BQuarterEnd attribute)
(pandas.tseries.offsets.BusinessDay attribute)
(pandas.tseries.offsets.BusinessHour attribute)
(pandas.tseries.offsets.BusinessMonthBegin attribute)
(pandas.tseries.offsets.BusinessMonthEnd attribute)
(pandas.tseries.offsets.BYearBegin attribute)
(pandas.tseries.offsets.BYearEnd attribute)
(pandas.tseries.offsets.CustomBusinessDay attribute)
(pandas.tseries.offsets.CustomBusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessMonthBegin attribute)
(pandas.tseries.offsets.CustomBusinessMonthEnd attribute)
(pandas.tseries.offsets.DateOffset attribute)
(pandas.tseries.offsets.Day attribute)
(pandas.tseries.offsets.Easter attribute)
(pandas.tseries.offsets.FY5253 attribute)
(pandas.tseries.offsets.FY5253Quarter attribute)
(pandas.tseries.offsets.Hour attribute)
(pandas.tseries.offsets.LastWeekOfMonth attribute)
(pandas.tseries.offsets.Micro attribute)
(pandas.tseries.offsets.Milli attribute)
(pandas.tseries.offsets.Minute attribute)
(pandas.tseries.offsets.MonthBegin attribute)
(pandas.tseries.offsets.MonthEnd attribute)
(pandas.tseries.offsets.Nano attribute)
(pandas.tseries.offsets.QuarterBegin attribute)
(pandas.tseries.offsets.QuarterEnd attribute)
(pandas.tseries.offsets.Second attribute)
(pandas.tseries.offsets.SemiMonthBegin attribute)
(pandas.tseries.offsets.SemiMonthEnd attribute)
(pandas.tseries.offsets.Tick attribute)
(pandas.tseries.offsets.Week attribute)
(pandas.tseries.offsets.WeekOfMonth attribute)
(pandas.tseries.offsets.YearBegin attribute)
(pandas.tseries.offsets.YearEnd attribute)
S
sample() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
save() (pandas.ExcelWriter method)
scatter() (pandas.DataFrame.plot method)
scatter_matrix() (in module pandas.plotting)
searchsorted() (pandas.api.extensions.ExtensionArray method)
(pandas.Index method)
(pandas.Series method)
Second (class in pandas.tseries.offsets)
second (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
seconds (pandas.Series.dt attribute)
(pandas.Timedelta attribute)
(pandas.TimedeltaIndex property)
select() (pandas.HDFStore method)
select_dtypes() (pandas.DataFrame method)
sem() (pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.DataFrame method)
(pandas.Series method)
SemiMonthBegin (class in pandas.tseries.offsets)
SemiMonthEnd (class in pandas.tseries.offsets)
Series (class in pandas)
set_axis() (pandas.DataFrame method)
(pandas.Series method)
set_caption() (pandas.io.formats.style.Styler method)
set_categories() (pandas.CategoricalIndex method)
(pandas.Series.cat method)
set_closed() (pandas.arrays.IntervalArray method)
(pandas.IntervalIndex method)
set_codes() (pandas.MultiIndex method)
set_flags() (pandas.DataFrame method)
(pandas.Series method)
set_index() (pandas.DataFrame method)
set_levels() (pandas.MultiIndex method)
set_na_rep() (pandas.io.formats.style.Styler method)
set_names() (pandas.Index method)
set_option (in module pandas)
set_precision() (pandas.io.formats.style.Styler method)
set_properties() (pandas.io.formats.style.Styler method)
set_sticky() (pandas.io.formats.style.Styler method)
set_table_attributes() (pandas.io.formats.style.Styler method)
set_table_styles() (pandas.io.formats.style.Styler method)
set_td_classes() (pandas.io.formats.style.Styler method)
set_tooltips() (pandas.io.formats.style.Styler method)
set_uuid() (pandas.io.formats.style.Styler method)
set_value() (pandas.Index method)
SettingWithCopyError
SettingWithCopyWarning
shape (pandas.api.extensions.ExtensionArray property)
(pandas.DataFrame property)
(pandas.Index property)
(pandas.Series property)
sheets (pandas.ExcelWriter property)
shift() (pandas.api.extensions.ExtensionArray method)
(pandas.core.groupby.DataFrameGroupBy method)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
show_versions() (in module pandas)
size (pandas.DataFrame property)
(pandas.Index property)
(pandas.Series property)
size() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
skew (pandas.core.groupby.DataFrameGroupBy property)
skew() (pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.DataFrame method)
(pandas.Series method)
slice() (pandas.Series.str method)
slice_indexer() (pandas.Index method)
slice_locs() (pandas.Index method)
slice_replace() (pandas.Series.str method)
slice_shift() (pandas.DataFrame method)
(pandas.Series method)
snap() (pandas.DatetimeIndex method)
sort() (pandas.Index method)
sort_index() (pandas.DataFrame method)
(pandas.Series method)
sort_values() (pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
sortlevel() (pandas.Index method)
(pandas.MultiIndex method)
sp_values (pandas.Series.sparse attribute)
sparse() (pandas.DataFrame method)
(pandas.Series method)
SparseArray (class in pandas.arrays)
SparseDtype (class in pandas)
SpecificationError
split() (pandas.Series.str method)
squeeze() (pandas.DataFrame method)
(pandas.Series method)
stack() (pandas.DataFrame method)
start (pandas.RangeIndex property)
(pandas.tseries.offsets.BusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessHour attribute)
start_time (pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
startingMonth (pandas.tseries.offsets.BQuarterBegin attribute)
(pandas.tseries.offsets.BQuarterEnd attribute)
(pandas.tseries.offsets.FY5253 attribute)
(pandas.tseries.offsets.FY5253Quarter attribute)
(pandas.tseries.offsets.QuarterBegin attribute)
(pandas.tseries.offsets.QuarterEnd attribute)
startswith() (pandas.Series.str method)
std() (pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.core.window.ewm.ExponentialMovingWindow method)
(pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.core.window.rolling.Window method)
(pandas.DataFrame method)
(pandas.DatetimeIndex method)
(pandas.Series method)
step (pandas.RangeIndex property)
stop (pandas.RangeIndex property)
str() (pandas.Index method)
(pandas.Series method)
strftime() (pandas.DatetimeIndex method)
(pandas.Period method)
(pandas.PeriodIndex method)
(pandas.Series.dt method)
(pandas.Timestamp method)
StringArray (class in pandas.arrays)
StringDtype (class in pandas)
strip() (pandas.Series.str method)
strptime() (pandas.Timestamp class method)
style (pandas.DataFrame property)
Styler (class in pandas.io.formats.style)
sub() (pandas.DataFrame method)
(pandas.Series method)
subtract() (pandas.DataFrame method)
(pandas.Series method)
subtype (pandas.IntervalDtype property)
sum() (pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.core.window.ewm.ExponentialMovingWindow method)
(pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.core.window.rolling.Window method)
(pandas.DataFrame method)
(pandas.Series method)
supported_extensions (pandas.ExcelWriter property)
swapaxes() (pandas.DataFrame method)
(pandas.Series method)
swapcase() (pandas.Series.str method)
swaplevel() (pandas.DataFrame method)
(pandas.MultiIndex method)
(pandas.Series method)
symmetric_difference() (pandas.Index method)
T
T (pandas.DataFrame property)
(pandas.Index property)
(pandas.Series property)
table() (in module pandas.plotting)
tail() (pandas.core.groupby.GroupBy method)
(pandas.DataFrame method)
(pandas.Series method)
take (pandas.core.groupby.DataFrameGroupBy property)
take() (pandas.api.extensions.ExtensionArray method)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
template_html (pandas.io.formats.style.Styler attribute)
template_html_style (pandas.io.formats.style.Styler attribute)
template_html_table (pandas.io.formats.style.Styler attribute)
template_latex (pandas.io.formats.style.Styler attribute)
template_string (pandas.io.formats.style.Styler attribute)
test() (in module pandas)
text_gradient() (pandas.io.formats.style.Styler method)
Tick (class in pandas.tseries.offsets)
time (pandas.DatetimeIndex property)
(pandas.Series.dt attribute)
time() (pandas.Timestamp method)
Timedelta (class in pandas)
timedelta_range() (in module pandas)
TimedeltaArray (class in pandas.arrays)
TimedeltaIndex (class in pandas)
Timestamp (class in pandas)
timestamp() (pandas.Timestamp method)
timetuple() (pandas.Timestamp method)
timetz (pandas.DatetimeIndex property)
(pandas.Series.dt attribute)
timetz() (pandas.Timestamp method)
title() (pandas.Series.str method)
to_clipboard() (pandas.DataFrame method)
(pandas.Series method)
to_coo() (pandas.DataFrame.sparse method)
(pandas.Series.sparse method)
to_csv() (pandas.DataFrame method)
(pandas.Series method)
to_datetime() (in module pandas)
to_datetime64() (pandas.Timestamp method)
to_dense() (pandas.DataFrame.sparse method)
to_dict() (pandas.DataFrame method)
(pandas.Series method)
to_excel() (pandas.DataFrame method)
(pandas.io.formats.style.Styler method)
(pandas.Series method)
to_feather() (pandas.DataFrame method)
to_flat_index() (pandas.Index method)
(pandas.MultiIndex method)
to_frame() (pandas.DatetimeIndex method)
(pandas.Index method)
(pandas.MultiIndex method)
(pandas.Series method)
(pandas.TimedeltaIndex method)
to_gbq() (pandas.DataFrame method)
to_hdf() (pandas.DataFrame method)
(pandas.Series method)
to_html() (pandas.DataFrame method)
(pandas.io.formats.style.Styler method)
to_json() (pandas.DataFrame method)
(pandas.Series method)
to_julian_date() (pandas.Timestamp method)
to_latex() (pandas.DataFrame method)
(pandas.io.formats.style.Styler method)
(pandas.Series method)
to_list() (pandas.Index method)
(pandas.Series method)
to_markdown() (pandas.DataFrame method)
(pandas.Series method)
to_native_types() (pandas.Index method)
to_numeric() (in module pandas)
to_numpy() (pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
(pandas.Timedelta method)
(pandas.Timestamp method)
to_offset() (in module pandas.tseries.frequencies)
to_orc() (pandas.DataFrame method)
to_parquet() (pandas.DataFrame method)
to_period() (pandas.DataFrame method)
(pandas.DatetimeIndex method)
(pandas.Series method)
(pandas.Series.dt method)
(pandas.Timestamp method)
to_perioddelta() (pandas.DatetimeIndex method)
to_pickle() (pandas.DataFrame method)
(pandas.Series method)
to_pydatetime() (pandas.DatetimeIndex method)
(pandas.Series.dt method)
(pandas.Timestamp method)
to_pytimedelta() (pandas.Series.dt method)
(pandas.Timedelta method)
(pandas.TimedeltaIndex method)
to_records() (pandas.DataFrame method)
to_series() (pandas.DatetimeIndex method)
(pandas.Index method)
(pandas.TimedeltaIndex method)
to_sql() (pandas.DataFrame method)
(pandas.Series method)
to_stata() (pandas.DataFrame method)
to_string() (pandas.DataFrame method)
(pandas.io.formats.style.Styler method)
(pandas.Series method)
to_timedelta() (in module pandas)
to_timedelta64() (pandas.Timedelta method)
to_timestamp() (pandas.DataFrame method)
(pandas.Period method)
(pandas.PeriodIndex method)
(pandas.Series method)
to_tuples() (pandas.arrays.IntervalArray method)
(pandas.IntervalIndex method)
to_xarray() (pandas.DataFrame method)
(pandas.Series method)
to_xml() (pandas.DataFrame method)
today() (pandas.Timestamp class method)
tolist() (pandas.api.extensions.ExtensionArray method)
(pandas.Index method)
(pandas.Series method)
toordinal() (pandas.Timestamp method)
total_seconds() (pandas.Series.dt method)
(pandas.Timedelta method)
transform() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.core.groupby.SeriesGroupBy method)
(pandas.core.resample.Resampler method)
(pandas.DataFrame method)
(pandas.Series method)
translate() (pandas.Series.str method)
transpose() (pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
truediv() (pandas.DataFrame method)
(pandas.Series method)
truncate() (pandas.DataFrame method)
(pandas.Series method)
tshift (pandas.core.groupby.DataFrameGroupBy property)
tshift() (pandas.DataFrame method)
(pandas.Series method)
type (pandas.api.extensions.ExtensionDtype property)
tz (pandas.DatetimeIndex property)
(pandas.DatetimeTZDtype property)
(pandas.Series.dt attribute)
(pandas.Timestamp property)
tz_convert() (pandas.DataFrame method)
(pandas.DatetimeIndex method)
(pandas.Series method)
(pandas.Series.dt method)
(pandas.Timestamp method)
tz_localize() (pandas.DataFrame method)
(pandas.DatetimeIndex method)
(pandas.Series method)
(pandas.Series.dt method)
(pandas.Timestamp method)
tzinfo (pandas.Timestamp attribute)
tzname() (pandas.Timestamp method)
U
UInt16Dtype (class in pandas)
UInt32Dtype (class in pandas)
UInt64Dtype (class in pandas)
UInt64Index (class in pandas)
UInt8Dtype (class in pandas)
UndefinedVariableError
union() (pandas.Index method)
union_categoricals() (in module pandas.api.types)
unique (pandas.core.groupby.SeriesGroupBy property)
unique() (in module pandas)
(pandas.api.extensions.ExtensionArray method)
(pandas.Index method)
(pandas.Series method)
unit (pandas.DatetimeTZDtype property)
UnsortedIndexError
unstack() (pandas.DataFrame method)
(pandas.Series method)
UnsupportedFunctionCall
update() (pandas.DataFrame method)
(pandas.Series method)
upper() (pandas.Series.str method)
use() (pandas.io.formats.style.Styler method)
utcfromtimestamp() (pandas.Timestamp class method)
utcnow() (pandas.Timestamp class method)
utcoffset() (pandas.Timestamp method)
utctimetuple() (pandas.Timestamp method)
V
value (pandas.Timedelta attribute)
(pandas.Timestamp attribute)
value_counts() (pandas.core.groupby.DataFrameGroupBy method)
(pandas.DataFrame method)
(pandas.Index method)
(pandas.Series method)
value_labels() (pandas.io.stata.StataReader method)
ValueLabelTypeMismatch
values (pandas.DataFrame property)
(pandas.Index property)
(pandas.IntervalIndex property)
(pandas.Series property)
var() (pandas.core.groupby.GroupBy method)
(pandas.core.resample.Resampler method)
(pandas.core.window.ewm.ExponentialMovingWindow method)
(pandas.core.window.expanding.Expanding method)
(pandas.core.window.rolling.Rolling method)
(pandas.core.window.rolling.Window method)
(pandas.DataFrame method)
(pandas.Series method)
variable_labels() (pandas.io.stata.StataReader method)
VariableOffsetWindowIndexer (class in pandas.api.indexers)
variation (pandas.tseries.offsets.FY5253 attribute)
(pandas.tseries.offsets.FY5253Quarter attribute)
view() (pandas.api.extensions.ExtensionArray method)
(pandas.Index method)
(pandas.Series method)
(pandas.Timedelta method)
W
walk() (pandas.HDFStore method)
Week (class in pandas.tseries.offsets)
week (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
(pandas.tseries.offsets.LastWeekOfMonth attribute)
(pandas.tseries.offsets.WeekOfMonth attribute)
weekday (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.tseries.offsets.FY5253 attribute)
(pandas.tseries.offsets.FY5253Quarter attribute)
(pandas.tseries.offsets.LastWeekOfMonth attribute)
(pandas.tseries.offsets.Week attribute)
(pandas.tseries.offsets.WeekOfMonth attribute)
weekday() (pandas.Timestamp method)
weekmask (pandas.tseries.offsets.BusinessDay attribute)
(pandas.tseries.offsets.BusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessDay attribute)
(pandas.tseries.offsets.CustomBusinessHour attribute)
(pandas.tseries.offsets.CustomBusinessMonthBegin attribute)
(pandas.tseries.offsets.CustomBusinessMonthEnd attribute)
WeekOfMonth (class in pandas.tseries.offsets)
weekofyear (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
where() (pandas.DataFrame method)
(pandas.Index method)
(pandas.io.formats.style.Styler method)
(pandas.Series method)
wide_to_long() (in module pandas)
wrap() (pandas.Series.str method)
write_cells() (pandas.ExcelWriter method)
write_file() (pandas.io.stata.StataWriter method)
X
xs() (pandas.DataFrame method)
(pandas.Series method)
Y
year (pandas.DatetimeIndex property)
(pandas.Period attribute)
(pandas.PeriodIndex property)
(pandas.Series.dt attribute)
(pandas.Timestamp attribute)
year_has_extra_week() (pandas.tseries.offsets.FY5253Quarter method)
YearBegin (class in pandas.tseries.offsets)
YearEnd (class in pandas.tseries.offsets)
Z
zfill() (pandas.Series.str method)
|
genindex.html
|
pandas.tseries.offsets.BYearBegin.nanos
|
pandas.tseries.offsets.BYearBegin.nanos
|
BYearBegin.nanos#
|
reference/api/pandas.tseries.offsets.BYearBegin.nanos.html
|
pandas.core.window.rolling.Rolling.quantile
|
`pandas.core.window.rolling.Rolling.quantile`
Calculate the rolling quantile.
```
>>> s = pd.Series([1, 2, 3, 4])
>>> s.rolling(2).quantile(.4, interpolation='lower')
0 NaN
1 1.0
2 2.0
3 3.0
dtype: float64
```
|
Rolling.quantile(quantile, interpolation='linear', numeric_only=False, **kwargs)[source]#
Calculate the rolling quantile.
Parameters
quantilefloatQuantile to compute. 0 <= quantile <= 1.
interpolation{‘linear’, ‘lower’, ‘higher’, ‘midpoint’, ‘nearest’}This optional parameter specifies the interpolation method to use,
when the desired quantile lies between two data points i and j:
linear: i + (j - i) * fraction, where fraction is the
fractional part of the index surrounded by i and j.
lower: i.
higher: j.
nearest: i or j whichever is nearest.
midpoint: (i + j) / 2.
numeric_onlybool, default FalseInclude only float, int, boolean columns.
New in version 1.5.0.
**kwargsFor NumPy compatibility and will not have an effect on the result.
Deprecated since version 1.5.0.
Returns
Series or DataFrameReturn type is the same as the original object with np.float64 dtype.
See also
pandas.Series.rollingCalling rolling with Series data.
pandas.DataFrame.rollingCalling rolling with DataFrames.
pandas.Series.quantileAggregating quantile for Series.
pandas.DataFrame.quantileAggregating quantile for DataFrame.
Examples
>>> s = pd.Series([1, 2, 3, 4])
>>> s.rolling(2).quantile(.4, interpolation='lower')
0 NaN
1 1.0
2 2.0
3 3.0
dtype: float64
>>> s.rolling(2).quantile(.4, interpolation='midpoint')
0 NaN
1 1.5
2 2.5
3 3.5
dtype: float64
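Not part of the original docstring: a minimal additional sketch showing the default interpolation='linear' on the same data; the expected values in the comments follow the formula i + (j - i) * fraction described above.
```
import pandas as pd

s = pd.Series([1, 2, 3, 4])

# Default interpolation='linear': each 2-observation window [i, j]
# yields i + (j - i) * 0.4 for the 0.4 quantile.
s.rolling(2).quantile(0.4)
# Expected:
# 0    NaN
# 1    1.4
# 2    2.4
# 3    3.4
# dtype: float64
```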
|
reference/api/pandas.core.window.rolling.Rolling.quantile.html
|
Window
|
Window
|
Rolling objects are returned by .rolling calls: pandas.DataFrame.rolling(), pandas.Series.rolling(), etc.
Expanding objects are returned by .expanding calls: pandas.DataFrame.expanding(), pandas.Series.expanding(), etc.
ExponentialMovingWindow objects are returned by .ewm calls: pandas.DataFrame.ewm(), pandas.Series.ewm(), etc.
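For orientation (an illustrative sketch, not part of the original page), the three window objects above are created from a Series or DataFrame like this:
```
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0])

roll = s.rolling(window=2)   # Rolling object
expand = s.expanding()       # Expanding object
ewm = s.ewm(span=2)          # ExponentialMovingWindow object

roll.mean()    # rolling mean over a 2-observation window
expand.sum()   # cumulative (expanding) sum
ewm.mean()     # exponentially weighted mean
```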
Rolling window functions#
Rolling.count([numeric_only])
Calculate the rolling count of non NaN observations.
Rolling.sum([numeric_only, engine, ...])
Calculate the rolling sum.
Rolling.mean([numeric_only, engine, ...])
Calculate the rolling mean.
Rolling.median([numeric_only, engine, ...])
Calculate the rolling median.
Rolling.var([ddof, numeric_only, engine, ...])
Calculate the rolling variance.
Rolling.std([ddof, numeric_only, engine, ...])
Calculate the rolling standard deviation.
Rolling.min([numeric_only, engine, ...])
Calculate the rolling minimum.
Rolling.max([numeric_only, engine, ...])
Calculate the rolling maximum.
Rolling.corr([other, pairwise, ddof, ...])
Calculate the rolling correlation.
Rolling.cov([other, pairwise, ddof, ...])
Calculate the rolling sample covariance.
Rolling.skew([numeric_only])
Calculate the rolling unbiased skewness.
Rolling.kurt([numeric_only])
Calculate the rolling Fisher's definition of kurtosis without bias.
Rolling.apply(func[, raw, engine, ...])
Calculate the rolling custom aggregation function.
Rolling.aggregate(func, *args, **kwargs)
Aggregate using one or more operations over the specified axis.
Rolling.quantile(quantile[, interpolation, ...])
Calculate the rolling quantile.
Rolling.sem([ddof, numeric_only])
Calculate the rolling standard error of mean.
Rolling.rank([method, ascending, pct, ...])
Calculate the rolling rank.
Weighted window functions#
Window.mean([numeric_only])
Calculate the rolling weighted window mean.
Window.sum([numeric_only])
Calculate the rolling weighted window sum.
Window.var([ddof, numeric_only])
Calculate the rolling weighted window variance.
Window.std([ddof, numeric_only])
Calculate the rolling weighted window standard deviation.
Expanding window functions#
Expanding.count([numeric_only])
Calculate the expanding count of non NaN observations.
Expanding.sum([numeric_only, engine, ...])
Calculate the expanding sum.
Expanding.mean([numeric_only, engine, ...])
Calculate the expanding mean.
Expanding.median([numeric_only, engine, ...])
Calculate the expanding median.
Expanding.var([ddof, numeric_only, engine, ...])
Calculate the expanding variance.
Expanding.std([ddof, numeric_only, engine, ...])
Calculate the expanding standard deviation.
Expanding.min([numeric_only, engine, ...])
Calculate the expanding minimum.
Expanding.max([numeric_only, engine, ...])
Calculate the expanding maximum.
Expanding.corr([other, pairwise, ddof, ...])
Calculate the expanding correlation.
Expanding.cov([other, pairwise, ddof, ...])
Calculate the expanding sample covariance.
Expanding.skew([numeric_only])
Calculate the expanding unbiased skewness.
Expanding.kurt([numeric_only])
Calculate the expanding Fisher's definition of kurtosis without bias.
Expanding.apply(func[, raw, engine, ...])
Calculate the expanding custom aggregation function.
Expanding.aggregate(func, *args, **kwargs)
Aggregate using one or more operations over the specified axis.
Expanding.quantile(quantile[, ...])
Calculate the expanding quantile.
Expanding.sem([ddof, numeric_only])
Calculate the expanding standard error of mean.
Expanding.rank([method, ascending, pct, ...])
Calculate the expanding rank.
Exponentially-weighted window functions#
ExponentialMovingWindow.mean([numeric_only, ...])
Calculate the ewm (exponential weighted moment) mean.
ExponentialMovingWindow.sum([numeric_only, ...])
Calculate the ewm (exponential weighted moment) sum.
ExponentialMovingWindow.std([bias, numeric_only])
Calculate the ewm (exponential weighted moment) standard deviation.
ExponentialMovingWindow.var([bias, numeric_only])
Calculate the ewm (exponential weighted moment) variance.
ExponentialMovingWindow.corr([other, ...])
Calculate the ewm (exponential weighted moment) sample correlation.
ExponentialMovingWindow.cov([other, ...])
Calculate the ewm (exponential weighted moment) sample covariance.
Window indexer#
Base class for defining custom window boundaries.
api.indexers.BaseIndexer([index_array, ...])
Base class for window bounds calculations.
api.indexers.FixedForwardWindowIndexer([...])
Creates window boundaries for fixed-length windows that include the current row.
api.indexers.VariableOffsetWindowIndexer([...])
Calculate window boundaries based on a non-fixed offset such as a BusinessDay.
|
reference/window.html
|
pandas.Series.transpose
|
`pandas.Series.transpose`
Return the transpose, which is by definition self.
|
Series.transpose(*args, **kwargs)[source]#
Return the transpose, which is by definition self.
Returns
Series
|
reference/api/pandas.Series.transpose.html
|
pandas.Series.cat
|
`pandas.Series.cat`
Accessor object for categorical properties of the Series values.
Be aware that assigning to categories is an in-place operation, while all
methods return new categorical data by default (but can be called with
inplace=True).
```
>>> s = pd.Series(list("abbccc")).astype("category")
>>> s
0 a
1 b
2 b
3 c
4 c
5 c
dtype: category
Categories (3, object): ['a', 'b', 'c']
```
|
Series.cat()[source]#
Accessor object for categorical properties of the Series values.
Be aware that assigning to categories is an in-place operation, while all
methods return new categorical data by default (but can be called with
inplace=True).
Parameters
dataSeries or CategoricalIndex
Examples
>>> s = pd.Series(list("abbccc")).astype("category")
>>> s
0 a
1 b
2 b
3 c
4 c
5 c
dtype: category
Categories (3, object): ['a', 'b', 'c']
>>> s.cat.categories
Index(['a', 'b', 'c'], dtype='object')
>>> s.cat.rename_categories(list("cba"))
0 c
1 b
2 b
3 a
4 a
5 a
dtype: category
Categories (3, object): ['c', 'b', 'a']
>>> s.cat.reorder_categories(list("cba"))
0 a
1 b
2 b
3 c
4 c
5 c
dtype: category
Categories (3, object): ['c', 'b', 'a']
>>> s.cat.add_categories(["d", "e"])
0 a
1 b
2 b
3 c
4 c
5 c
dtype: category
Categories (5, object): ['a', 'b', 'c', 'd', 'e']
>>> s.cat.remove_categories(["a", "c"])
0 NaN
1 b
2 b
3 NaN
4 NaN
5 NaN
dtype: category
Categories (1, object): ['b']
>>> s1 = s.cat.add_categories(["d", "e"])
>>> s1.cat.remove_unused_categories()
0 a
1 b
2 b
3 c
4 c
5 c
dtype: category
Categories (3, object): ['a', 'b', 'c']
>>> s.cat.set_categories(list("abcde"))
0 a
1 b
2 b
3 c
4 c
5 c
dtype: category
Categories (5, object): ['a', 'b', 'c', 'd', 'e']
>>> s.cat.as_ordered()
0 a
1 b
2 b
3 c
4 c
5 c
dtype: category
Categories (3, object): ['a' < 'b' < 'c']
>>> s.cat.as_unordered()
0 a
1 b
2 b
3 c
4 c
5 c
dtype: category
Categories (3, object): ['a', 'b', 'c']
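Not part of the original page: a short sketch contrasting the accessor methods (which return new categorical data) with direct assignment to categories (which, per the note above, operates in place in pandas 1.x; later versions deprecate this in favor of rename_categories).
```
import pandas as pd

s = pd.Series(list("abbccc")).astype("category")

# Accessor methods return new categorical data by default; s is unchanged.
renamed = s.cat.rename_categories(list("xyz"))
list(s.cat.categories)        # ['a', 'b', 'c']
list(renamed.cat.categories)  # ['x', 'y', 'z']

# Assigning to categories modifies s in place (pandas 1.x behavior).
s.cat.categories = list("xyz")
list(s.cat.categories)        # ['x', 'y', 'z']
```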
|
reference/api/pandas.Series.cat.html
|
pandas.tseries.offsets.BusinessDay.is_year_start
|
`pandas.tseries.offsets.BusinessDay.is_year_start`
Return boolean whether a timestamp occurs on the year start.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
```
|
BusinessDay.is_year_start()#
Return boolean whether a timestamp occurs on the year start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
|
reference/api/pandas.tseries.offsets.BusinessDay.is_year_start.html
|
pandas.PeriodIndex.strftime
|
`pandas.PeriodIndex.strftime`
Convert to Index using specified date_format.
```
>>> rng = pd.date_range(pd.Timestamp("2018-03-10 09:00"),
... periods=3, freq='s')
>>> rng.strftime('%B %d, %Y, %r')
Index(['March 10, 2018, 09:00:00 AM', 'March 10, 2018, 09:00:01 AM',
'March 10, 2018, 09:00:02 AM'],
dtype='object')
```
|
PeriodIndex.strftime(*args, **kwargs)[source]#
Convert to Index using specified date_format.
Return an Index of formatted strings specified by date_format, which
supports the same string format as the python standard library. Details
of the string format can be found in python string format
doc.
Formats supported by the C strftime API but not by the python string format
doc (such as “%R”, “%r”) are not officially supported and should preferably
be replaced with their supported equivalents (such as “%H:%M”,
“%I:%M:%S %p”).
Note that PeriodIndex supports additional directives, detailed in
Period.strftime.
Parameters
date_formatstrDate format string (e.g. “%Y-%m-%d”).
Returns
ndarray[object]NumPy ndarray of formatted strings.
See also
to_datetimeConvert the given argument to datetime.
DatetimeIndex.normalizeReturn DatetimeIndex with times to midnight.
DatetimeIndex.roundRound the DatetimeIndex to the specified freq.
DatetimeIndex.floorFloor the DatetimeIndex to the specified freq.
Timestamp.strftimeFormat a single Timestamp.
Period.strftimeFormat a single Period.
Examples
>>> rng = pd.date_range(pd.Timestamp("2018-03-10 09:00"),
... periods=3, freq='s')
>>> rng.strftime('%B %d, %Y, %r')
Index(['March 10, 2018, 09:00:00 AM', 'March 10, 2018, 09:00:01 AM',
'March 10, 2018, 09:00:02 AM'],
dtype='object')
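The example above formats a DatetimeIndex; as an additional illustration (not part of the original page), a hedged sketch on an actual PeriodIndex using the supported equivalent of the unsupported “%r” directive:
```
import pandas as pd

pidx = pd.period_range("2018-03-10 09:00", periods=2, freq="min")

# Prefer "%I:%M:%S %p" over the unsupported "%r" directive.
pidx.strftime("%B %d, %Y, %I:%M:%S %p")
# Expected (approximately):
# Index(['March 10, 2018, 09:00:00 AM', 'March 10, 2018, 09:01:00 AM'],
#       dtype='object')
```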
|
reference/api/pandas.PeriodIndex.strftime.html
|
General functions
|
Data manipulations#
melt(frame[, id_vars, value_vars, var_name, ...])
Unpivot a DataFrame from wide to long format, optionally leaving identifiers set.
pivot(data, *[, index, columns, values])
Return reshaped DataFrame organized by given index / column values.
pivot_table(data[, values, index, columns, ...])
Create a spreadsheet-style pivot table as a DataFrame.
crosstab(index, columns[, values, rownames, ...])
Compute a simple cross tabulation of two (or more) factors.
cut(x, bins[, right, labels, retbins, ...])
Bin values into discrete intervals.
qcut(x, q[, labels, retbins, precision, ...])
Quantile-based discretization function.
merge(left, right[, how, on, left_on, ...])
Merge DataFrame or named Series objects with a database-style join.
merge_ordered(left, right[, on, left_on, ...])
Perform a merge for ordered data with optional filling/interpolation.
merge_asof(left, right[, on, left_on, ...])
Perform a merge by key distance.
concat(objs, *[, axis, join, ignore_index, ...])
Concatenate pandas objects along a particular axis.
get_dummies(data[, prefix, prefix_sep, ...])
Convert categorical variable into dummy/indicator variables.
from_dummies(data[, sep, default_category])
Create a categorical DataFrame from a DataFrame of dummy variables.
factorize(values[, sort, na_sentinel, ...])
Encode the object as an enumerated type or categorical variable.
unique(values)
Return unique values based on a hash table.
wide_to_long(df, stubnames, i, j[, sep, suffix])
Unpivot a DataFrame from wide to long format.
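Not part of the original listing: a minimal sketch of one of the reshaping helpers above (melt), with illustrative column names.
```
import pandas as pd

wide = pd.DataFrame({"id": [1, 2], "height": [150, 160], "weight": [50, 60]})

# Unpivot the measurement columns into long format.
long = pd.melt(wide, id_vars="id", value_vars=["height", "weight"],
               var_name="measure", value_name="value")
# Expected columns: id, measure, value (4 rows).
```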
Top-level missing data#
isna(obj)
Detect missing values for an array-like object.
isnull(obj)
Detect missing values for an array-like object.
notna(obj)
Detect non-missing values for an array-like object.
notnull(obj)
Detect non-missing values for an array-like object.
Top-level dealing with numeric data#
to_numeric(arg[, errors, downcast])
Convert argument to a numeric type.
Top-level dealing with datetimelike data#
to_datetime(arg[, errors, dayfirst, ...])
Convert argument to datetime.
to_timedelta(arg[, unit, errors])
Convert argument to timedelta.
date_range([start, end, periods, freq, tz, ...])
Return a fixed frequency DatetimeIndex.
bdate_range([start, end, periods, freq, tz, ...])
Return a fixed frequency DatetimeIndex with business day as the default.
period_range([start, end, periods, freq, name])
Return a fixed frequency PeriodIndex.
timedelta_range([start, end, periods, freq, ...])
Return a fixed frequency TimedeltaIndex with day as the default.
infer_freq(index[, warn])
Infer the most likely frequency given the input index.
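Not part of the original listing: a quick sketch of the datetime-like constructors above (values are illustrative).
```
import pandas as pd

pd.to_datetime(["2022-01-01", "2022-01-02"])        # DatetimeIndex
pd.to_timedelta(["1 days", "2 days"])               # TimedeltaIndex
pd.date_range("2022-01-01", periods=3, freq="D")    # fixed-frequency DatetimeIndex
pd.period_range("2022-01", periods=3, freq="M")     # fixed-frequency PeriodIndex
pd.timedelta_range(start="1 day", periods=3)        # fixed-frequency TimedeltaIndex
```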
Top-level dealing with Interval data#
interval_range([start, end, periods, freq, ...])
Return a fixed frequency IntervalIndex.
Top-level evaluation#
eval(expr[, parser, engine, truediv, ...])
Evaluate a Python expression as a string using various backends.
Hashing#
util.hash_array(vals[, encoding, hash_key, ...])
Given a 1d array, return an array of deterministic integers.
util.hash_pandas_object(obj[, index, ...])
Return a data hash of the Index/Series/DataFrame.
Importing from other DataFrame libraries#
api.interchange.from_dataframe(df[, allow_copy])
Build a pd.DataFrame from any DataFrame supporting the interchange protocol.
|
reference/general_functions.html
|
pandas.Series.align
|
`pandas.Series.align`
Align two objects on their axes with the specified join method.
```
>>> df = pd.DataFrame(
... [[1, 2, 3, 4], [6, 7, 8, 9]], columns=["D", "B", "E", "A"], index=[1, 2]
... )
>>> other = pd.DataFrame(
... [[10, 20, 30, 40], [60, 70, 80, 90], [600, 700, 800, 900]],
... columns=["A", "B", "C", "D"],
... index=[2, 3, 4],
... )
>>> df
D B E A
1 1 2 3 4
2 6 7 8 9
>>> other
A B C D
2 10 20 30 40
3 60 70 80 90
4 600 700 800 900
```
|
Series.align(other, join='outer', axis=None, level=None, copy=True, fill_value=None, method=None, limit=None, fill_axis=0, broadcast_axis=None)[source]#
Align two objects on their axes with the specified join method.
Join method is specified for each axis Index.
Parameters
otherDataFrame or Series
join{‘outer’, ‘inner’, ‘left’, ‘right’}, default ‘outer’
axisallowed axis of the other object, default NoneAlign on index (0), columns (1), or both (None).
levelint or level name, default NoneBroadcast across a level, matching Index values on the
passed MultiIndex level.
copybool, default TrueAlways returns new objects. If copy=False and no reindexing is
required then original objects are returned.
fill_valuescalar, default np.NaNValue to use for missing values. Defaults to NaN, but can be any
“compatible” value.
method{‘backfill’, ‘bfill’, ‘pad’, ‘ffill’, None}, default NoneMethod to use for filling holes in reindexed Series:
pad / ffill: propagate last valid observation forward to next valid.
backfill / bfill: use NEXT valid observation to fill gap.
limitint, default NoneIf method is specified, this is the maximum number of consecutive
NaN values to forward/backward fill. In other words, if there is
a gap with more than this number of consecutive NaNs, it will only
be partially filled. If method is not specified, this is the
maximum number of entries along the entire axis where NaNs will be
filled. Must be greater than 0 if not None.
fill_axis{0 or ‘index’}, default 0Filling axis, method and limit.
broadcast_axis{0 or ‘index’}, default NoneBroadcast values along this axis, if aligning two objects of
different dimensions.
Returns
(left, right)(Series, type of other)Aligned objects.
Examples
>>> df = pd.DataFrame(
... [[1, 2, 3, 4], [6, 7, 8, 9]], columns=["D", "B", "E", "A"], index=[1, 2]
... )
>>> other = pd.DataFrame(
... [[10, 20, 30, 40], [60, 70, 80, 90], [600, 700, 800, 900]],
... columns=["A", "B", "C", "D"],
... index=[2, 3, 4],
... )
>>> df
D B E A
1 1 2 3 4
2 6 7 8 9
>>> other
A B C D
2 10 20 30 40
3 60 70 80 90
4 600 700 800 900
Align on columns:
>>> left, right = df.align(other, join="outer", axis=1)
>>> left
A B C D E
1 4 2 NaN 1 3
2 9 7 NaN 6 8
>>> right
A B C D E
2 10 20 30 40 NaN
3 60 70 80 90 NaN
4 600 700 800 900 NaN
We can also align on the index:
>>> left, right = df.align(other, join="outer", axis=0)
>>> left
D B E A
1 1.0 2.0 3.0 4.0
2 6.0 7.0 8.0 9.0
3 NaN NaN NaN NaN
4 NaN NaN NaN NaN
>>> right
A B C D
1 NaN NaN NaN NaN
2 10.0 20.0 30.0 40.0
3 60.0 70.0 80.0 90.0
4 600.0 700.0 800.0 900.0
Finally, the default axis=None will align on both index and columns:
>>> left, right = df.align(other, join="outer", axis=None)
>>> left
A B C D E
1 4.0 2.0 NaN 1.0 3.0
2 9.0 7.0 NaN 6.0 8.0
3 NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN
>>> right
A B C D E
1 NaN NaN NaN NaN NaN
2 10.0 20.0 30.0 40.0 NaN
3 60.0 70.0 80.0 90.0 NaN
4 600.0 700.0 800.0 900.0 NaN
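The examples above use DataFrames; as an extra illustration (not part of the original docstring), a minimal sketch of the same operation on two Series with illustrative labels and values.
```
import pandas as pd

s1 = pd.Series([1, 2, 3], index=["a", "b", "c"])
s2 = pd.Series([10, 20], index=["b", "d"])

left, right = s1.align(s2, join="outer")
# left is reindexed to the union ['a', 'b', 'c', 'd'] with NaN at 'd';
# right is reindexed to the same labels with NaN at 'a' and 'c'.

inner_left, inner_right = s1.align(s2, join="inner")
# Only the shared label 'b' remains in both results.
```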
|
reference/api/pandas.Series.align.html
|
pandas.tseries.offsets.BYearEnd.is_year_end
|
`pandas.tseries.offsets.BYearEnd.is_year_end`
Return boolean whether a timestamp occurs on the year end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
```
|
BYearEnd.is_year_end()#
Return boolean whether a timestamp occurs on the year end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
|
reference/api/pandas.tseries.offsets.BYearEnd.is_year_end.html
|
pandas.io.formats.style.Styler
|
`pandas.io.formats.style.Styler`
Helps style a DataFrame or Series according to the data with HTML and CSS.
|
class pandas.io.formats.style.Styler(data, precision=None, table_styles=None, uuid=None, caption=None, table_attributes=None, cell_ids=True, na_rep=None, uuid_len=5, decimal=None, thousands=None, escape=None, formatter=None)[source]#
Helps style a DataFrame or Series according to the data with HTML and CSS.
Parameters
dataSeries or DataFrameData to be styled - either a Series or DataFrame.
precisionint, optionalPrecision to round floats to. If not given defaults to
pandas.options.styler.format.precision.
Changed in version 1.4.0.
table_styleslist-like, default NoneList of {selector: (attr, value)} dicts; see Notes.
uuidstr, default NoneA unique identifier to avoid CSS collisions; generated automatically.
captionstr, tuple, default NoneString caption to attach to the table. Tuple only used for LaTeX dual captions.
table_attributesstr, default NoneItems that show up in the opening <table> tag
in addition to automatic (by default) id.
cell_idsbool, default TrueIf True, each cell will have an id attribute in their HTML tag.
The id takes the form T_<uuid>_row<num_row>_col<num_col>
where <uuid> is the unique identifier, <num_row> is the row
number and <num_col> is the column number.
na_repstr, optionalRepresentation for missing values.
If na_rep is None, no special formatting is applied, and falls back to
pandas.options.styler.format.na_rep.
New in version 1.0.0.
uuid_lenint, default 5If uuid is not specified, the length of the uuid to randomly generate
expressed in hex characters, in range [0, 32].
New in version 1.2.0.
decimalstr, optionalCharacter used as decimal separator for floats, complex and integers. If not
given uses pandas.options.styler.format.decimal.
New in version 1.3.0.
thousandsstr, optional, default NoneCharacter used as thousands separator for floats, complex and integers. If not
given uses pandas.options.styler.format.thousands.
New in version 1.3.0.
escapestr, optionalUse ‘html’ to replace the characters &, <, >, ', and "
in cell display string with HTML-safe sequences.
Use ‘latex’ to replace the characters &, %, $, #, _,
{, }, ~, ^, and \ in the cell display string with
LaTeX-safe sequences. If not given uses pandas.options.styler.format.escape.
New in version 1.3.0.
formatterstr, callable, dict, optionalObject to define how values are displayed. See Styler.format. If not given
uses pandas.options.styler.format.formatter.
New in version 1.4.0.
See also
DataFrame.styleReturn a Styler object containing methods for building a styled HTML representation for the DataFrame.
Notes
Most styling will be done by passing style functions into
Styler.apply or Styler.applymap. Style functions should
return strings containing CSS 'attr: value' pairs, which will
be applied to the indicated cells.
When used in a Jupyter notebook, Styler defines a _repr_html_
to render itself automatically. Otherwise, call Styler.to_html to get
the generated HTML.
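Not part of the original page: a minimal sketch of a style function that returns a CSS 'attr: value' string, applied elementwise and rendered to HTML (the helper name and data are illustrative).
```
import pandas as pd

df = pd.DataFrame({"a": [1, -2], "b": [-3, 4]})

def color_negative(v):
    # Style functions return CSS "attr: value" strings ("" means no style).
    return "color: red;" if v < 0 else ""

styler = df.style.applymap(color_negative).set_caption("negative values in red")
html = styler.to_html()  # outside Jupyter, render explicitly
```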
CSS classes are attached to the generated HTML:
* Index and Column names include index_name and level<k>
  where k is its level in a MultiIndex
* Index label cells include
  * row_heading
  * row<n> where n is the numeric position of the row
  * level<k> where k is the level in a MultiIndex
* Column label cells include
  * col_heading
  * col<n> where n is the numeric position of the column
  * level<k> where k is the level in a MultiIndex
* Blank cells include blank
* Data cells include data
* Trimmed cells include col_trim or row_trim
Any, or all, of these classes can be renamed by using the css_class_names
argument in Styler.set_table_styles, giving a value such as
{“row”: “MY_ROW_CLASS”, “col_trim”: “”, “row_trim”: “”}.
Attributes
env
(Jinja2 jinja2.Environment)
template_html
(Jinja2 Template)
template_html_table
(Jinja2 Template)
template_html_style
(Jinja2 Template)
template_latex
(Jinja2 Template)
loader
(Jinja2 Loader)
Methods
apply(func[, axis, subset])
Apply a CSS-styling function column-wise, row-wise, or table-wise.
apply_index(func[, axis, level])
Apply a CSS-styling function to the index or column headers, level-wise.
applymap(func[, subset])
Apply a CSS-styling function elementwise.
applymap_index(func[, axis, level])
Apply a CSS-styling function to the index or column headers, elementwise.
background_gradient([cmap, low, high, axis, ...])
Color the background in a gradient style.
bar([subset, axis, color, cmap, width, ...])
Draw bar chart in the cell backgrounds.
clear()
Reset the Styler, removing any previously applied styles.
concat(other)
Append another Styler to combine the output into a single table.
export()
Export the styles applied to the current Styler.
format([formatter, subset, na_rep, ...])
Format the text display value of cells.
format_index([formatter, axis, level, ...])
Format the text display value of index labels or column headers.
from_custom_template(searchpath[, ...])
Factory function for creating a subclass of Styler.
hide([subset, axis, level, names])
Hide the entire index / column headers, or specific rows / columns from display.
hide_columns([subset, level, names])
Hide the column headers or specific keys in the columns from rendering.
hide_index([subset, level, names])
(DEPRECATED) Hide the entire index, or specific keys in the index from rendering.
highlight_between([subset, color, axis, ...])
Highlight a defined range with a style.
highlight_max([subset, color, axis, props])
Highlight the maximum with a style.
highlight_min([subset, color, axis, props])
Highlight the minimum with a style.
highlight_null([color, subset, props, ...])
Highlight missing values with a style.
highlight_quantile([subset, color, axis, ...])
Highlight values defined by a quantile with a style.
pipe(func, *args, **kwargs)
Apply func(self, *args, **kwargs), and return the result.
relabel_index(labels[, axis, level])
Relabel the index, or column header, keys to display a set of specified values.
render([sparse_index, sparse_columns])
(DEPRECATED) Render the Styler including all applied styles to HTML.
set_caption(caption)
Set the text added to a <caption> HTML element.
set_na_rep(na_rep)
(DEPRECATED) Set the missing data representation on a Styler.
set_precision(precision)
(DEPRECATED) Set the precision used to display values.
set_properties([subset])
Set defined CSS-properties to each <td> HTML element for the given subset.
set_sticky([axis, pixel_size, levels])
Add CSS to permanently display the index or column headers in a scrolling frame.
set_table_attributes(attributes)
Set the table attributes added to the <table> HTML element.
set_table_styles([table_styles, axis, ...])
Set the table styles included within the <style> HTML element.
set_td_classes(classes)
Set the class attribute of <td> HTML elements.
set_tooltips(ttips[, props, css_class])
Set the DataFrame of strings on Styler generating :hover tooltips.
set_uuid(uuid)
Set the uuid applied to id attributes of HTML elements.
text_gradient([cmap, low, high, axis, ...])
Color the text in a gradient style.
to_excel(excel_writer[, sheet_name, na_rep, ...])
Write Styler to an Excel sheet.
to_html([buf, table_uuid, table_attributes, ...])
Write Styler to a file, buffer or string in HTML-CSS format.
to_latex([buf, column_format, position, ...])
Write Styler to a file, buffer or string in LaTeX format.
to_string([buf, encoding, sparse_index, ...])
Write Styler to a file, buffer or string in text format.
use(styles)
Set the styles on the current Styler.
where(cond, value[, other, subset])
(DEPRECATED) Apply CSS-styles based on a conditional function elementwise.
|
reference/api/pandas.io.formats.style.Styler.html
|
Indexing and selecting data
|
Indexing and selecting data
|
The axis labeling information in pandas objects serves many purposes:
Identifies data (i.e. provides metadata) using known indicators,
important for analysis, visualization, and interactive console display.
Enables automatic and explicit data alignment.
Allows intuitive getting and setting of subsets of the data set.
In this section, we will focus on the final point: namely, how to slice, dice,
and generally get and set subsets of pandas objects. The primary focus will be
on Series and DataFrame as they have received more development attention in
this area.
Note
The Python and NumPy indexing operators [] and attribute operator .
provide quick and easy access to pandas data structures across a wide range
of use cases. This makes interactive work intuitive, as there’s little new
to learn if you already know how to deal with Python dictionaries and NumPy
arrays. However, since the type of the data to be accessed isn’t known in
advance, directly using standard operators has some optimization limits. For
production code, we recommend that you take advantage of the optimized
pandas data access methods exposed in this chapter.
Warning
Whether a copy or a reference is returned for a setting operation may
depend on the context. This is sometimes called chained assignment and
should be avoided. See Returning a View versus Copy.
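Not part of the original guide text: a minimal sketch contrasting chained assignment with a single .loc call (frame and values are illustrative).
```
import pandas as pd

df = pd.DataFrame({"x": [1, -1, 2], "y": [0, 0, 0]})

# Chained assignment: the first [] may return a copy, so the write can be lost.
# df[df["x"] > 0]["y"] = 99        # avoid this pattern

# Preferred: a single .loc call sets values on the original object.
df.loc[df["x"] > 0, "y"] = 99
```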
See the MultiIndex / Advanced Indexing for MultiIndex and more advanced indexing documentation.
See the cookbook for some advanced strategies.
Different choices for indexing#
Object selection has had a number of user-requested additions in order to
support more explicit location based indexing. pandas now supports three types
of multi-axis indexing.
.loc is primarily label based, but may also be used with a boolean array. .loc will raise KeyError when the items are not found. Allowed inputs are:
A single label, e.g. 5 or 'a' (Note that 5 is interpreted as a
label of the index. This use is not an integer position along the
index.).
A list or array of labels ['a', 'b', 'c'].
A slice object with labels 'a':'f' (Note that contrary to usual Python
slices, both the start and the stop are included, when present in the
index! See Slicing with labels
and Endpoints are inclusive.)
A boolean array (any NA values will be treated as False).
A callable function with one argument (the calling Series or DataFrame) and
that returns valid output for indexing (one of the above).
See more at Selection by Label.
.iloc is primarily integer position based (from 0 to
length-1 of the axis), but may also be used with a boolean
array. .iloc will raise IndexError if a requested
indexer is out-of-bounds, except slice indexers which allow
out-of-bounds indexing (this conforms with Python/NumPy slice
semantics). Allowed inputs are:
An integer e.g. 5.
A list or array of integers [4, 3, 0].
A slice object with ints 1:7.
A boolean array (any NA values will be treated as False).
A callable function with one argument (the calling Series or DataFrame) and
that returns valid output for indexing (one of the above).
See more at Selection by Position,
Advanced Indexing and Advanced
Hierarchical.
.loc, .iloc, and also [] indexing can accept a callable as indexer. See more at Selection By Callable.
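Not part of the original guide text: a minimal sketch of the indexer styles described above on a small Series (labels and values are illustrative).
```
import pandas as pd

s = pd.Series([10, 20, 30], index=["a", "b", "c"])

s.loc["b"]               # single label -> 20
s.loc[["a", "c"]]        # list of labels
s.loc["a":"b"]           # label slice: both endpoints are included
s.loc[lambda x: x > 15]  # callable returning a boolean mask

s.iloc[0]                # single integer position -> 10
s.iloc[[2, 0]]           # list of positions
s.iloc[1:5]              # out-of-bounds slice end is allowed
```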
Getting values from an object with multi-axes selection uses the following
notation (using .loc as an example, but the following applies to .iloc as
well). Any of the axes accessors may be the null slice :. Axes left out of
the specification are assumed to be :, e.g. p.loc['a'] is equivalent to
p.loc['a', :].
Object Type    Indexers
Series         s.loc[indexer]
DataFrame      df.loc[row_indexer, column_indexer]
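Not part of the original guide text: a small illustration of the null-slice equivalence noted above (frame and labels are illustrative).
```
import pandas as pd

df = pd.DataFrame({"x": [1, 2], "y": [3, 4]}, index=["r1", "r2"])

# An omitted axis defaults to the null slice ":".
row_a = df.loc["r1"]
row_b = df.loc["r1", :]
assert row_a.equals(row_b)
```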
Basics#
As mentioned when introducing the data structures in the last section, the primary function of indexing with [] (a.k.a. __getitem__
for those familiar with implementing class behavior in Python) is selecting out
lower-dimensional slices. The following table shows return type values when
indexing pandas objects with []:
Object Type    Selection        Return Value Type
Series         series[label]    scalar value
DataFrame      frame[colname]   Series corresponding to colname
Here we construct a simple time series data set to use for illustrating the
indexing functionality:
In [1]: dates = pd.date_range('1/1/2000', periods=8)
In [2]: df = pd.DataFrame(np.random.randn(8, 4),
...: index=dates, columns=['A', 'B', 'C', 'D'])
...:
In [3]: df
Out[3]:
A B C D
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632
2000-01-02 1.212112 -0.173215 0.119209 -1.044236
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804
2000-01-04 0.721555 -0.706771 -1.039575 0.271860
2000-01-05 -0.424972 0.567020 0.276232 -1.087401
2000-01-06 -0.673690 0.113648 -1.478427 0.524988
2000-01-07 0.404705 0.577046 -1.715002 -1.039268
2000-01-08 -0.370647 -1.157892 -1.344312 0.844885
Note
None of the indexing functionality is time series specific unless
specifically stated.
Thus, as per above, we have the most basic indexing using []:
In [4]: s = df['A']
In [5]: s[dates[5]]
Out[5]: -0.6736897080883706
You can pass a list of columns to [] to select columns in that order.
If a column is not contained in the DataFrame, an exception will be
raised. Multiple columns can also be set in this manner:
In [6]: df
Out[6]:
A B C D
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632
2000-01-02 1.212112 -0.173215 0.119209 -1.044236
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804
2000-01-04 0.721555 -0.706771 -1.039575 0.271860
2000-01-05 -0.424972 0.567020 0.276232 -1.087401
2000-01-06 -0.673690 0.113648 -1.478427 0.524988
2000-01-07 0.404705 0.577046 -1.715002 -1.039268
2000-01-08 -0.370647 -1.157892 -1.344312 0.844885
In [7]: df[['B', 'A']] = df[['A', 'B']]
In [8]: df
Out[8]:
A B C D
2000-01-01 -0.282863 0.469112 -1.509059 -1.135632
2000-01-02 -0.173215 1.212112 0.119209 -1.044236
2000-01-03 -2.104569 -0.861849 -0.494929 1.071804
2000-01-04 -0.706771 0.721555 -1.039575 0.271860
2000-01-05 0.567020 -0.424972 0.276232 -1.087401
2000-01-06 0.113648 -0.673690 -1.478427 0.524988
2000-01-07 0.577046 0.404705 -1.715002 -1.039268
2000-01-08 -1.157892 -0.370647 -1.344312 0.844885
You may find this useful for applying a transform (in-place) to a subset of the
columns.
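For instance, a minimal sketch of such an in-place transform on a column subset (the frame and the rounding step are purely illustrative):

# overwrite only columns A and B with a transformed version of themselves
import numpy as np
import pandas as pd

tmp = pd.DataFrame(np.random.randn(4, 3), columns=['A', 'B', 'C'])
tmp[['A', 'B']] = tmp[['A', 'B']].round(2)  # column C is left untouched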
Warning
pandas aligns all AXES when setting Series and DataFrame from .loc and .iloc.
The following will not modify df, because the column alignment happens before value assignment.
In [9]: df[['A', 'B']]
Out[9]:
A B
2000-01-01 -0.282863 0.469112
2000-01-02 -0.173215 1.212112
2000-01-03 -2.104569 -0.861849
2000-01-04 -0.706771 0.721555
2000-01-05 0.567020 -0.424972
2000-01-06 0.113648 -0.673690
2000-01-07 0.577046 0.404705
2000-01-08 -1.157892 -0.370647
In [10]: df.loc[:, ['B', 'A']] = df[['A', 'B']]
In [11]: df[['A', 'B']]
Out[11]:
A B
2000-01-01 -0.282863 0.469112
2000-01-02 -0.173215 1.212112
2000-01-03 -2.104569 -0.861849
2000-01-04 -0.706771 0.721555
2000-01-05 0.567020 -0.424972
2000-01-06 0.113648 -0.673690
2000-01-07 0.577046 0.404705
2000-01-08 -1.157892 -0.370647
The correct way to swap column values is by using raw values:
In [12]: df.loc[:, ['B', 'A']] = df[['A', 'B']].to_numpy()
In [13]: df[['A', 'B']]
Out[13]:
A B
2000-01-01 0.469112 -0.282863
2000-01-02 1.212112 -0.173215
2000-01-03 -0.861849 -2.104569
2000-01-04 0.721555 -0.706771
2000-01-05 -0.424972 0.567020
2000-01-06 -0.673690 0.113648
2000-01-07 0.404705 0.577046
2000-01-08 -0.370647 -1.157892
Attribute access#
You may access an index on a Series or column on a DataFrame directly
as an attribute:
In [14]: sa = pd.Series([1, 2, 3], index=list('abc'))
In [15]: dfa = df.copy()
In [16]: sa.b
Out[16]: 2
In [17]: dfa.A
Out[17]:
2000-01-01 0.469112
2000-01-02 1.212112
2000-01-03 -0.861849
2000-01-04 0.721555
2000-01-05 -0.424972
2000-01-06 -0.673690
2000-01-07 0.404705
2000-01-08 -0.370647
Freq: D, Name: A, dtype: float64
In [18]: sa.a = 5
In [19]: sa
Out[19]:
a 5
b 2
c 3
dtype: int64
In [20]: dfa.A = list(range(len(dfa.index))) # ok if A already exists
In [21]: dfa
Out[21]:
A B C D
2000-01-01 0 -0.282863 -1.509059 -1.135632
2000-01-02 1 -0.173215 0.119209 -1.044236
2000-01-03 2 -2.104569 -0.494929 1.071804
2000-01-04 3 -0.706771 -1.039575 0.271860
2000-01-05 4 0.567020 0.276232 -1.087401
2000-01-06 5 0.113648 -1.478427 0.524988
2000-01-07 6 0.577046 -1.715002 -1.039268
2000-01-08 7 -1.157892 -1.344312 0.844885
In [22]: dfa['A'] = list(range(len(dfa.index))) # use this form to create a new column
In [23]: dfa
Out[23]:
A B C D
2000-01-01 0 -0.282863 -1.509059 -1.135632
2000-01-02 1 -0.173215 0.119209 -1.044236
2000-01-03 2 -2.104569 -0.494929 1.071804
2000-01-04 3 -0.706771 -1.039575 0.271860
2000-01-05 4 0.567020 0.276232 -1.087401
2000-01-06 5 0.113648 -1.478427 0.524988
2000-01-07 6 0.577046 -1.715002 -1.039268
2000-01-08 7 -1.157892 -1.344312 0.844885
Warning
You can use this access only if the index element is a valid Python identifier, e.g. s.1 is not allowed.
See here for an explanation of valid identifiers.
The attribute will not be available if it conflicts with an existing method name, e.g. s.min is not allowed, but s['min'] is possible.
Similarly, the attribute will not be available if it conflicts with any of the following list: index,
major_axis, minor_axis, items.
In any of these cases, standard indexing will still work, e.g. s['1'], s['min'], and s['index'] will
access the corresponding element or column.
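A short sketch of these caveats, using a hypothetical Series whose labels collide with existing names:

import pandas as pd

s_conflict = pd.Series([10, 20, 30], index=['min', 'index', 'ok'])
# s_conflict.min is the method and s_conflict.index is the axis, not the elements
print(s_conflict['min'])    # 10 -- bracket indexing always reaches the element
print(s_conflict['index'])  # 20
print(s_conflict.ok)        # 30 -- attribute access works for non-conflicting identifiers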
If you are using the IPython environment, you may also use tab-completion to
see these accessible attributes.
You can also assign a dict to a row of a DataFrame:
In [24]: x = pd.DataFrame({'x': [1, 2, 3], 'y': [3, 4, 5]})
In [25]: x.iloc[1] = {'x': 9, 'y': 99}
In [26]: x
Out[26]:
x y
0 1 3
1 9 99
2 3 5
You can use attribute access to modify an existing element of a Series or column of a DataFrame, but be careful;
if you try to use attribute access to create a new column, it creates a new attribute rather than a
new column. In 0.21.0 and later, this will raise a UserWarning:
In [1]: df = pd.DataFrame({'one': [1., 2., 3.]})
In [2]: df.two = [4, 5, 6]
UserWarning: Pandas doesn't allow Series to be assigned into nonexistent columns - see https://pandas.pydata.org/pandas-docs/stable/indexing.html#attribute_access
In [3]: df
Out[3]:
one
0 1.0
1 2.0
2 3.0
Slicing ranges#
The most robust and consistent way of slicing ranges along arbitrary axes is
described in the Selection by Position section
detailing the .iloc method. For now, we explain the semantics of slicing using the [] operator.
With Series, the syntax works exactly as with an ndarray, returning a slice of
the values and the corresponding labels:
In [27]: s[:5]
Out[27]:
2000-01-01 0.469112
2000-01-02 1.212112
2000-01-03 -0.861849
2000-01-04 0.721555
2000-01-05 -0.424972
Freq: D, Name: A, dtype: float64
In [28]: s[::2]
Out[28]:
2000-01-01 0.469112
2000-01-03 -0.861849
2000-01-05 -0.424972
2000-01-07 0.404705
Freq: 2D, Name: A, dtype: float64
In [29]: s[::-1]
Out[29]:
2000-01-08 -0.370647
2000-01-07 0.404705
2000-01-06 -0.673690
2000-01-05 -0.424972
2000-01-04 0.721555
2000-01-03 -0.861849
2000-01-02 1.212112
2000-01-01 0.469112
Freq: -1D, Name: A, dtype: float64
Note that setting works as well:
In [30]: s2 = s.copy()
In [31]: s2[:5] = 0
In [32]: s2
Out[32]:
2000-01-01 0.000000
2000-01-02 0.000000
2000-01-03 0.000000
2000-01-04 0.000000
2000-01-05 0.000000
2000-01-06 -0.673690
2000-01-07 0.404705
2000-01-08 -0.370647
Freq: D, Name: A, dtype: float64
With DataFrame, slicing inside of [] slices the rows. This is provided
largely as a convenience since it is such a common operation.
In [33]: df[:3]
Out[33]:
A B C D
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632
2000-01-02 1.212112 -0.173215 0.119209 -1.044236
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804
In [34]: df[::-1]
Out[34]:
A B C D
2000-01-08 -0.370647 -1.157892 -1.344312 0.844885
2000-01-07 0.404705 0.577046 -1.715002 -1.039268
2000-01-06 -0.673690 0.113648 -1.478427 0.524988
2000-01-05 -0.424972 0.567020 0.276232 -1.087401
2000-01-04 0.721555 -0.706771 -1.039575 0.271860
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804
2000-01-02 1.212112 -0.173215 0.119209 -1.044236
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632
Selection by label#
Warning
Whether a copy or a reference is returned for a setting operation may depend on the context.
This is sometimes called chained assignment and should be avoided.
See Returning a View versus Copy.
Warning
.loc is strict when you present slicers that are not compatible (or convertible) with the index type. For example
using integers in a DatetimeIndex. These will raise a TypeError.
In [35]: dfl = pd.DataFrame(np.random.randn(5, 4),
....: columns=list('ABCD'),
....: index=pd.date_range('20130101', periods=5))
....:
In [36]: dfl
Out[36]:
A B C D
2013-01-01 1.075770 -0.109050 1.643563 -1.469388
2013-01-02 0.357021 -0.674600 -1.776904 -0.968914
2013-01-03 -1.294524 0.413738 0.276662 -0.472035
2013-01-04 -0.013960 -0.362543 -0.006154 -0.923061
2013-01-05 0.895717 0.805244 -1.206412 2.565646
In [4]: dfl.loc[2:3]
TypeError: cannot do slice indexing on <class 'pandas.tseries.index.DatetimeIndex'> with these indexers [2] of <type 'int'>
String-like values in slicing can be converted to the type of the index, leading to natural slicing.
In [37]: dfl.loc['20130102':'20130104']
Out[37]:
A B C D
2013-01-02 0.357021 -0.674600 -1.776904 -0.968914
2013-01-03 -1.294524 0.413738 0.276662 -0.472035
2013-01-04 -0.013960 -0.362543 -0.006154 -0.923061
Warning
Changed in version 1.0.0.
pandas will raise a KeyError if you index with a list containing missing labels. See
Indexing with list with missing labels is deprecated.
pandas provides a suite of methods in order to have purely label based indexing. This is a strict inclusion based protocol.
Every label asked for must be in the index, or a KeyError will be raised.
When slicing, both the start bound AND the stop bound are included, if present in the index.
Integers are valid labels, but they refer to the label and not the position.
The .loc attribute is the primary access method. The following are valid inputs:
A single label, e.g. 5 or 'a' (Note that 5 is interpreted as a label of the index. This use is not an integer position along the index.).
A list or array of labels ['a', 'b', 'c'].
A slice object with labels 'a':'f' (Note that contrary to usual Python
slices, both the start and the stop are included, when present in the
index! See Slicing with labels.)
A boolean array.
A callable, see Selection By Callable.
In [38]: s1 = pd.Series(np.random.randn(6), index=list('abcdef'))
In [39]: s1
Out[39]:
a 1.431256
b 1.340309
c -1.170299
d -0.226169
e 0.410835
f 0.813850
dtype: float64
In [40]: s1.loc['c':]
Out[40]:
c -1.170299
d -0.226169
e 0.410835
f 0.813850
dtype: float64
In [41]: s1.loc['b']
Out[41]: 1.3403088497993827
Note that setting works as well:
In [42]: s1.loc['c':] = 0
In [43]: s1
Out[43]:
a 1.431256
b 1.340309
c 0.000000
d 0.000000
e 0.000000
f 0.000000
dtype: float64
With a DataFrame:
In [44]: df1 = pd.DataFrame(np.random.randn(6, 4),
....: index=list('abcdef'),
....: columns=list('ABCD'))
....:
In [45]: df1
Out[45]:
A B C D
a 0.132003 -0.827317 -0.076467 -1.187678
b 1.130127 -1.436737 -1.413681 1.607920
c 1.024180 0.569605 0.875906 -2.211372
d 0.974466 -2.006747 -0.410001 -0.078638
e 0.545952 -1.219217 -1.226825 0.769804
f -1.281247 -0.727707 -0.121306 -0.097883
In [46]: df1.loc[['a', 'b', 'd'], :]
Out[46]:
A B C D
a 0.132003 -0.827317 -0.076467 -1.187678
b 1.130127 -1.436737 -1.413681 1.607920
d 0.974466 -2.006747 -0.410001 -0.078638
Accessing via label slices:
In [47]: df1.loc['d':, 'A':'C']
Out[47]:
A B C
d 0.974466 -2.006747 -0.410001
e 0.545952 -1.219217 -1.226825
f -1.281247 -0.727707 -0.121306
For getting a cross section using a label (equivalent to df.xs('a')):
In [48]: df1.loc['a']
Out[48]:
A 0.132003
B -0.827317
C -0.076467
D -1.187678
Name: a, dtype: float64
For getting values with a boolean array:
In [49]: df1.loc['a'] > 0
Out[49]:
A True
B False
C False
D False
Name: a, dtype: bool
In [50]: df1.loc[:, df1.loc['a'] > 0]
Out[50]:
A
a 0.132003
b 1.130127
c 1.024180
d 0.974466
e 0.545952
f -1.281247
NA values in a boolean array propagate as False:
Changed in version 1.0.2.
In [51]: mask = pd.array([True, False, True, False, pd.NA, False], dtype="boolean")
In [52]: mask
Out[52]:
<BooleanArray>
[True, False, True, False, <NA>, False]
Length: 6, dtype: boolean
In [53]: df1[mask]
Out[53]:
A B C D
a 0.132003 -0.827317 -0.076467 -1.187678
c 1.024180 0.569605 0.875906 -2.211372
For getting a value explicitly:
# this is also equivalent to ``df1.at['a','A']``
In [54]: df1.loc['a', 'A']
Out[54]: 0.13200317033032932
Slicing with labels#
When using .loc with slices, if both the start and the stop labels are
present in the index, then elements located between the two (including them)
are returned:
In [55]: s = pd.Series(list('abcde'), index=[0, 3, 2, 5, 4])
In [56]: s.loc[3:5]
Out[56]:
3 b
2 c
5 d
dtype: object
If at least one of the two is absent, but the index is sorted, and can be
compared against start and stop labels, then slicing will still work as
expected, by selecting labels which rank between the two:
In [57]: s.sort_index()
Out[57]:
0 a
2 c
3 b
4 e
5 d
dtype: object
In [58]: s.sort_index().loc[1:6]
Out[58]:
2 c
3 b
4 e
5 d
dtype: object
However, if at least one of the two is absent and the index is not sorted, an
error will be raised (since doing otherwise would be computationally expensive,
as well as potentially ambiguous for mixed type indexes). For instance, in the
above example, s.loc[1:6] would raise KeyError.
For the rationale behind this behavior, see
Endpoints are inclusive.
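A minimal sketch of that error case, re-creating the unsorted series from above:

import pandas as pd

s_unsorted = pd.Series(list('abcde'), index=[0, 3, 2, 5, 4])
try:
    s_unsorted.loc[1:6]   # neither bound is present and the index is not sorted
except KeyError as exc:
    print('KeyError:', exc)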
In [59]: s = pd.Series(list('abcdef'), index=[0, 3, 2, 5, 4, 2])
In [60]: s.loc[3:5]
Out[60]:
3 b
2 c
5 d
dtype: object
Also, if the index has duplicate labels and either the start or the stop label is duplicated,
an error will be raised. For instance, in the above example, s.loc[2:5] would raise a KeyError.
For more information about duplicate labels, see
Duplicate Labels.
Selection by position#
Warning
Whether a copy or a reference is returned for a setting operation may depend on the context.
This is sometimes called chained assignment and should be avoided.
See Returning a View versus Copy.
pandas provides a suite of methods in order to get purely integer based indexing. The semantics closely follow Python and NumPy slicing. These are 0-based indexing. When slicing, the start bound is included, while the upper bound is excluded. Trying to use a non-integer, even a valid label, will raise an IndexError.
The .iloc attribute is the primary access method. The following are valid inputs:
An integer e.g. 5.
A list or array of integers [4, 3, 0].
A slice object with ints 1:7.
A boolean array.
A callable, see Selection By Callable.
In [61]: s1 = pd.Series(np.random.randn(5), index=list(range(0, 10, 2)))
In [62]: s1
Out[62]:
0 0.695775
2 0.341734
4 0.959726
6 -1.110336
8 -0.619976
dtype: float64
In [63]: s1.iloc[:3]
Out[63]:
0 0.695775
2 0.341734
4 0.959726
dtype: float64
In [64]: s1.iloc[3]
Out[64]: -1.110336102891167
Note that setting works as well:
In [65]: s1.iloc[:3] = 0
In [66]: s1
Out[66]:
0 0.000000
2 0.000000
4 0.000000
6 -1.110336
8 -0.619976
dtype: float64
With a DataFrame:
In [67]: df1 = pd.DataFrame(np.random.randn(6, 4),
....: index=list(range(0, 12, 2)),
....: columns=list(range(0, 8, 2)))
....:
In [68]: df1
Out[68]:
0 2 4 6
0 0.149748 -0.732339 0.687738 0.176444
2 0.403310 -0.154951 0.301624 -2.179861
4 -1.369849 -0.954208 1.462696 -1.743161
6 -0.826591 -0.345352 1.314232 0.690579
8 0.995761 2.396780 0.014871 3.357427
10 -0.317441 -1.236269 0.896171 -0.487602
Select via integer slicing:
In [69]: df1.iloc[:3]
Out[69]:
0 2 4 6
0 0.149748 -0.732339 0.687738 0.176444
2 0.403310 -0.154951 0.301624 -2.179861
4 -1.369849 -0.954208 1.462696 -1.743161
In [70]: df1.iloc[1:5, 2:4]
Out[70]:
4 6
2 0.301624 -2.179861
4 1.462696 -1.743161
6 1.314232 0.690579
8 0.014871 3.357427
Select via integer list:
In [71]: df1.iloc[[1, 3, 5], [1, 3]]
Out[71]:
2 6
2 -0.154951 -2.179861
6 -0.345352 0.690579
10 -1.236269 -0.487602
In [72]: df1.iloc[1:3, :]
Out[72]:
0 2 4 6
2 0.403310 -0.154951 0.301624 -2.179861
4 -1.369849 -0.954208 1.462696 -1.743161
In [73]: df1.iloc[:, 1:3]
Out[73]:
2 4
0 -0.732339 0.687738
2 -0.154951 0.301624
4 -0.954208 1.462696
6 -0.345352 1.314232
8 2.396780 0.014871
10 -1.236269 0.896171
# this is also equivalent to ``df1.iat[1,1]``
In [74]: df1.iloc[1, 1]
Out[74]: -0.1549507744249032
For getting a cross section using an integer position (equiv to df.xs(1)):
In [75]: df1.iloc[1]
Out[75]:
0 0.403310
2 -0.154951
4 0.301624
6 -2.179861
Name: 2, dtype: float64
Out of range slice indexes are handled gracefully just as in Python/NumPy.
# these are allowed in Python/NumPy.
In [76]: x = list('abcdef')
In [77]: x
Out[77]: ['a', 'b', 'c', 'd', 'e', 'f']
In [78]: x[4:10]
Out[78]: ['e', 'f']
In [79]: x[8:10]
Out[79]: []
In [80]: s = pd.Series(x)
In [81]: s
Out[81]:
0 a
1 b
2 c
3 d
4 e
5 f
dtype: object
In [82]: s.iloc[4:10]
Out[82]:
4 e
5 f
dtype: object
In [83]: s.iloc[8:10]
Out[83]: Series([], dtype: object)
Note that using slices that go out of bounds can result in
an empty axis (e.g. an empty DataFrame being returned).
In [84]: dfl = pd.DataFrame(np.random.randn(5, 2), columns=list('AB'))
In [85]: dfl
Out[85]:
A B
0 -0.082240 -2.182937
1 0.380396 0.084844
2 0.432390 1.519970
3 -0.493662 0.600178
4 0.274230 0.132885
In [86]: dfl.iloc[:, 2:3]
Out[86]:
Empty DataFrame
Columns: []
Index: [0, 1, 2, 3, 4]
In [87]: dfl.iloc[:, 1:3]
Out[87]:
B
0 -2.182937
1 0.084844
2 1.519970
3 0.600178
4 0.132885
In [88]: dfl.iloc[4:6]
Out[88]:
A B
4 0.27423 0.132885
A single indexer that is out of bounds will raise an IndexError.
A list of indexers where any element is out of bounds will raise an
IndexError.
>>> dfl.iloc[[4, 5, 6]]
IndexError: positional indexers are out-of-bounds
>>> dfl.iloc[:, 4]
IndexError: single positional indexer is out-of-bounds
Selection by callable#
.loc, .iloc, and also [] indexing can accept a callable as indexer.
The callable must be a function with one argument (the calling Series or DataFrame) that returns valid output for indexing.
In [89]: df1 = pd.DataFrame(np.random.randn(6, 4),
....: index=list('abcdef'),
....: columns=list('ABCD'))
....:
In [90]: df1
Out[90]:
A B C D
a -0.023688 2.410179 1.450520 0.206053
b -0.251905 -2.213588 1.063327 1.266143
c 0.299368 -0.863838 0.408204 -1.048089
d -0.025747 -0.988387 0.094055 1.262731
e 1.289997 0.082423 -0.055758 0.536580
f -0.489682 0.369374 -0.034571 -2.484478
In [91]: df1.loc[lambda df: df['A'] > 0, :]
Out[91]:
A B C D
c 0.299368 -0.863838 0.408204 -1.048089
e 1.289997 0.082423 -0.055758 0.536580
In [92]: df1.loc[:, lambda df: ['A', 'B']]
Out[92]:
A B
a -0.023688 2.410179
b -0.251905 -2.213588
c 0.299368 -0.863838
d -0.025747 -0.988387
e 1.289997 0.082423
f -0.489682 0.369374
In [93]: df1.iloc[:, lambda df: [0, 1]]
Out[93]:
A B
a -0.023688 2.410179
b -0.251905 -2.213588
c 0.299368 -0.863838
d -0.025747 -0.988387
e 1.289997 0.082423
f -0.489682 0.369374
In [94]: df1[lambda df: df.columns[0]]
Out[94]:
a -0.023688
b -0.251905
c 0.299368
d -0.025747
e 1.289997
f -0.489682
Name: A, dtype: float64
You can use callable indexing in Series.
In [95]: df1['A'].loc[lambda s: s > 0]
Out[95]:
c 0.299368
e 1.289997
Name: A, dtype: float64
Using these methods / indexers, you can chain data selection operations
without using a temporary variable.
In [96]: bb = pd.read_csv('data/baseball.csv', index_col='id')
In [97]: (bb.groupby(['year', 'team']).sum(numeric_only=True)
....: .loc[lambda df: df['r'] > 100])
....:
Out[97]:
stint g ab r h X2b ... so ibb hbp sh sf gidp
year team ...
2007 CIN 6 379 745 101 203 35 ... 127.0 14.0 1.0 1.0 15.0 18.0
DET 5 301 1062 162 283 54 ... 176.0 3.0 10.0 4.0 8.0 28.0
HOU 4 311 926 109 218 47 ... 212.0 3.0 9.0 16.0 6.0 17.0
LAN 11 413 1021 153 293 61 ... 141.0 8.0 9.0 3.0 8.0 29.0
NYN 13 622 1854 240 509 101 ... 310.0 24.0 23.0 18.0 15.0 48.0
SFN 5 482 1305 198 337 67 ... 188.0 51.0 8.0 16.0 6.0 41.0
TEX 2 198 729 115 200 40 ... 140.0 4.0 5.0 2.0 8.0 16.0
TOR 4 459 1408 187 378 96 ... 265.0 16.0 12.0 4.0 16.0 38.0
[8 rows x 18 columns]
Combining positional and label-based indexing#
If you wish to get the 0th and the 2nd elements from the index in the ‘A’ column, you can do:
In [98]: dfd = pd.DataFrame({'A': [1, 2, 3],
....: 'B': [4, 5, 6]},
....: index=list('abc'))
....:
In [99]: dfd
Out[99]:
A B
a 1 4
b 2 5
c 3 6
In [100]: dfd.loc[dfd.index[[0, 2]], 'A']
Out[100]:
a 1
c 3
Name: A, dtype: int64
This can also be expressed using .iloc, by explicitly getting locations on the indexers, and using
positional indexing to select things.
In [101]: dfd.iloc[[0, 2], dfd.columns.get_loc('A')]
Out[101]:
a 1
c 3
Name: A, dtype: int64
For getting multiple indexers, using .get_indexer:
In [102]: dfd.iloc[[0, 2], dfd.columns.get_indexer(['A', 'B'])]
Out[102]:
A B
a 1 4
c 3 6
Indexing with list with missing labels is deprecated#
Warning
Changed in version 1.0.0.
Using .loc or [] with a list with one or more missing labels will no longer reindex, in favor of .reindex.
In prior versions, using .loc[list-of-labels] would work as long as at least 1 of the keys was found (otherwise it
would raise a KeyError). This behavior was changed and will now raise a KeyError if at least one label is missing.
The recommended alternative is to use .reindex().
For example.
In [103]: s = pd.Series([1, 2, 3])
In [104]: s
Out[104]:
0 1
1 2
2 3
dtype: int64
Selection with all keys found is unchanged.
In [105]: s.loc[[1, 2]]
Out[105]:
1 2
2 3
dtype: int64
Previous behavior
In [4]: s.loc[[1, 2, 3]]
Out[4]:
1 2.0
2 3.0
3 NaN
dtype: float64
Current behavior
In [4]: s.loc[[1, 2, 3]]
Passing list-likes to .loc with any non-matching elements will raise
KeyError in the future, you can use .reindex() as an alternative.
See the documentation here:
https://pandas.pydata.org/pandas-docs/stable/indexing.html#deprecate-loc-reindex-listlike
Out[4]:
1 2.0
2 3.0
3 NaN
dtype: float64
Reindexing#
The idiomatic way to achieve selecting potentially not-found elements is via .reindex(). See also the section on reindexing.
In [106]: s.reindex([1, 2, 3])
Out[106]:
1 2.0
2 3.0
3 NaN
dtype: float64
Alternatively, if you want to select only valid keys, the following is idiomatic and efficient; it is guaranteed to preserve the dtype of the selection.
In [107]: labels = [1, 2, 3]
In [108]: s.loc[s.index.intersection(labels)]
Out[108]:
1 2
2 3
dtype: int64
Having a duplicated index will raise for a .reindex():
In [109]: s = pd.Series(np.arange(4), index=['a', 'a', 'b', 'c'])
In [110]: labels = ['c', 'd']
In [17]: s.reindex(labels)
ValueError: cannot reindex on an axis with duplicate labels
Generally, you can intersect the desired labels with the current
axis, and then reindex.
In [111]: s.loc[s.index.intersection(labels)].reindex(labels)
Out[111]:
c 3.0
d NaN
dtype: float64
However, this would still raise if your resulting index is duplicated.
In [41]: labels = ['a', 'd']
In [42]: s.loc[s.index.intersection(labels)].reindex(labels)
ValueError: cannot reindex on an axis with duplicate labels
Selecting random samples#
A random selection of rows or columns from a Series or DataFrame can be obtained with the sample() method. The method will sample rows by default, and accepts a specific number of rows/columns to return, or a fraction of rows.
In [112]: s = pd.Series([0, 1, 2, 3, 4, 5])
# When no arguments are passed, returns 1 row.
In [113]: s.sample()
Out[113]:
4 4
dtype: int64
# One may specify either a number of rows:
In [114]: s.sample(n=3)
Out[114]:
0 0
4 4
1 1
dtype: int64
# Or a fraction of the rows:
In [115]: s.sample(frac=0.5)
Out[115]:
5 5
3 3
1 1
dtype: int64
By default, sample will return each row at most once, but one can also sample with replacement
using the replace option:
In [116]: s = pd.Series([0, 1, 2, 3, 4, 5])
# Without replacement (default):
In [117]: s.sample(n=6, replace=False)
Out[117]:
0 0
1 1
5 5
3 3
2 2
4 4
dtype: int64
# With replacement:
In [118]: s.sample(n=6, replace=True)
Out[118]:
0 0
4 4
3 3
2 2
4 4
4 4
dtype: int64
By default, each row has an equal probability of being selected, but if you want rows
to have different probabilities, you can pass the sample function sampling weights as
weights. These weights can be a list, a NumPy array, or a Series, but they must be of the same length as the object you are sampling. Missing values will be treated as a weight of zero, and inf values are not allowed. If weights do not sum to 1, they will be re-normalized by dividing all weights by the sum of the weights. For example:
In [119]: s = pd.Series([0, 1, 2, 3, 4, 5])
In [120]: example_weights = [0, 0, 0.2, 0.2, 0.2, 0.4]
In [121]: s.sample(n=3, weights=example_weights)
Out[121]:
5 5
4 4
3 3
dtype: int64
# Weights will be re-normalized automatically
In [122]: example_weights2 = [0.5, 0, 0, 0, 0, 0]
In [123]: s.sample(n=1, weights=example_weights2)
Out[123]:
0 0
dtype: int64
When applied to a DataFrame, you can use a column of the DataFrame as sampling weights
(provided you are sampling rows and not columns) by simply passing the name of the column
as a string.
In [124]: df2 = pd.DataFrame({'col1': [9, 8, 7, 6],
.....: 'weight_column': [0.5, 0.4, 0.1, 0]})
.....:
In [125]: df2.sample(n=3, weights='weight_column')
Out[125]:
col1 weight_column
1 8 0.4
0 9 0.5
2 7 0.1
sample also allows users to sample columns instead of rows using the axis argument.
In [126]: df3 = pd.DataFrame({'col1': [1, 2, 3], 'col2': [2, 3, 4]})
In [127]: df3.sample(n=1, axis=1)
Out[127]:
col1
0 1
1 2
2 3
Finally, one can also set a seed for sample’s random number generator using the random_state argument, which will accept either an integer (as a seed) or a NumPy RandomState object.
In [128]: df4 = pd.DataFrame({'col1': [1, 2, 3], 'col2': [2, 3, 4]})
# With a given seed, the sample will always draw the same rows.
In [129]: df4.sample(n=2, random_state=2)
Out[129]:
col1 col2
2 3 4
1 2 3
In [130]: df4.sample(n=2, random_state=2)
Out[130]:
col1 col2
2 3 4
1 2 3
Setting with enlargement#
The .loc/[] operations can perform enlargement when setting a non-existent key for that axis.
In the Series case this is effectively an appending operation.
In [131]: se = pd.Series([1, 2, 3])
In [132]: se
Out[132]:
0 1
1 2
2 3
dtype: int64
In [133]: se[5] = 5.
In [134]: se
Out[134]:
0 1.0
1 2.0
2 3.0
5 5.0
dtype: float64
A DataFrame can be enlarged on either axis via .loc.
In [135]: dfi = pd.DataFrame(np.arange(6).reshape(3, 2),
.....: columns=['A', 'B'])
.....:
In [136]: dfi
Out[136]:
A B
0 0 1
1 2 3
2 4 5
In [137]: dfi.loc[:, 'C'] = dfi.loc[:, 'A']
In [138]: dfi
Out[138]:
A B C
0 0 1 0
1 2 3 2
2 4 5 4
This is like an append operation on the DataFrame.
In [139]: dfi.loc[3] = 5
In [140]: dfi
Out[140]:
A B C
0 0 1 0
1 2 3 2
2 4 5 4
3 5 5 5
Fast scalar value getting and setting#
Since indexing with [] must handle a lot of cases (single-label access,
slicing, boolean indexing, etc.), it has a bit of overhead in order to figure
out what you’re asking for. If you only want to access a scalar value, the
fastest way is to use the at and iat methods, which are implemented on
all of the data structures.
Similarly to loc, at provides label-based scalar lookups, while iat provides integer-based lookups analogously to iloc.
In [141]: s.iat[5]
Out[141]: 5
In [142]: df.at[dates[5], 'A']
Out[142]: -0.6736897080883706
In [143]: df.iat[3, 0]
Out[143]: 0.7215551622443669
You can also set using these same indexers.
In [144]: df.at[dates[5], 'E'] = 7
In [145]: df.iat[3, 0] = 7
at may enlarge the object in-place as above if the indexer is missing.
In [146]: df.at[dates[-1] + pd.Timedelta('1 day'), 0] = 7
In [147]: df
Out[147]:
A B C D E 0
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632 NaN NaN
2000-01-02 1.212112 -0.173215 0.119209 -1.044236 NaN NaN
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804 NaN NaN
2000-01-04 7.000000 -0.706771 -1.039575 0.271860 NaN NaN
2000-01-05 -0.424972 0.567020 0.276232 -1.087401 NaN NaN
2000-01-06 -0.673690 0.113648 -1.478427 0.524988 7.0 NaN
2000-01-07 0.404705 0.577046 -1.715002 -1.039268 NaN NaN
2000-01-08 -0.370647 -1.157892 -1.344312 0.844885 NaN NaN
2000-01-09 NaN NaN NaN NaN NaN 7.0
Boolean indexing#
Another common operation is the use of boolean vectors to filter the data.
The operators are: | for or, & for and, and ~ for not.
These must be grouped by using parentheses, since by default Python will
evaluate an expression such as df['A'] > 2 & df['B'] < 3 as
df['A'] > (2 & df['B']) < 3, while the desired evaluation order is
(df['A'] > 2) & (df['B'] < 3).
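A small sketch of why the grouping matters (the frame here is hypothetical):

import pandas as pd

df_bool = pd.DataFrame({'A': [1, 3, 5], 'B': [2, 2, 4]})
# df_bool['A'] > 2 & df_bool['B'] < 3   # raises ValueError: & binds tighter than > and <
mask = (df_bool['A'] > 2) & (df_bool['B'] < 3)   # the intended row-wise mask
print(df_bool[mask])   # only the row with A == 3, B == 2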
Using a boolean vector to index a Series works exactly as in a NumPy ndarray:
In [148]: s = pd.Series(range(-3, 4))
In [149]: s
Out[149]:
0 -3
1 -2
2 -1
3 0
4 1
5 2
6 3
dtype: int64
In [150]: s[s > 0]
Out[150]:
4 1
5 2
6 3
dtype: int64
In [151]: s[(s < -1) | (s > 0.5)]
Out[151]:
0 -3
1 -2
4 1
5 2
6 3
dtype: int64
In [152]: s[~(s < 0)]
Out[152]:
3 0
4 1
5 2
6 3
dtype: int64
You may select rows from a DataFrame using a boolean vector the same length as
the DataFrame’s index (for example, something derived from one of the columns
of the DataFrame):
In [153]: df[df['A'] > 0]
Out[153]:
A B C D E 0
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632 NaN NaN
2000-01-02 1.212112 -0.173215 0.119209 -1.044236 NaN NaN
2000-01-04 7.000000 -0.706771 -1.039575 0.271860 NaN NaN
2000-01-07 0.404705 0.577046 -1.715002 -1.039268 NaN NaN
List comprehensions and the map method of Series can also be used to produce
more complex criteria:
In [154]: df2 = pd.DataFrame({'a': ['one', 'one', 'two', 'three', 'two', 'one', 'six'],
.....: 'b': ['x', 'y', 'y', 'x', 'y', 'x', 'x'],
.....: 'c': np.random.randn(7)})
.....:
# only want 'two' or 'three'
In [155]: criterion = df2['a'].map(lambda x: x.startswith('t'))
In [156]: df2[criterion]
Out[156]:
a b c
2 two y 0.041290
3 three x 0.361719
4 two y -0.238075
# equivalent but slower
In [157]: df2[[x.startswith('t') for x in df2['a']]]
Out[157]:
a b c
2 two y 0.041290
3 three x 0.361719
4 two y -0.238075
# Multiple criteria
In [158]: df2[criterion & (df2['b'] == 'x')]
Out[158]:
a b c
3 three x 0.361719
With the choice methods Selection by Label, Selection by Position,
and Advanced Indexing you may select along more than one axis using boolean vectors combined with other indexing expressions.
In [159]: df2.loc[criterion & (df2['b'] == 'x'), 'b':'c']
Out[159]:
b c
3 x 0.361719
Warning
iloc supports two kinds of boolean indexing. If the indexer is a boolean Series,
an error will be raised. For instance, in the following example, df.iloc[s.values, 1] is ok
because the boolean indexer is an array, but df.iloc[s, 1] would raise a ValueError.
In [160]: df = pd.DataFrame([[1, 2], [3, 4], [5, 6]],
.....: index=list('abc'),
.....: columns=['A', 'B'])
.....:
In [161]: s = (df['A'] > 2)
In [162]: s
Out[162]:
a False
b True
c True
Name: A, dtype: bool
In [163]: df.loc[s, 'B']
Out[163]:
b 4
c 6
Name: B, dtype: int64
In [164]: df.iloc[s.values, 1]
Out[164]:
b 4
c 6
Name: B, dtype: int64
Indexing with isin#
Consider the isin() method of Series, which returns a boolean
vector that is true wherever the Series elements exist in the passed list.
This allows you to select rows where one or more columns have values you want:
In [165]: s = pd.Series(np.arange(5), index=np.arange(5)[::-1], dtype='int64')
In [166]: s
Out[166]:
4 0
3 1
2 2
1 3
0 4
dtype: int64
In [167]: s.isin([2, 4, 6])
Out[167]:
4 False
3 False
2 True
1 False
0 True
dtype: bool
In [168]: s[s.isin([2, 4, 6])]
Out[168]:
2 2
0 4
dtype: int64
The same method is available for Index objects and is useful for the cases
when you don’t know which of the sought labels are in fact present:
In [169]: s[s.index.isin([2, 4, 6])]
Out[169]:
4 0
2 2
dtype: int64
# compare it to the following
In [170]: s.reindex([2, 4, 6])
Out[170]:
2 2.0
4 0.0
6 NaN
dtype: float64
In addition to that, MultiIndex allows selecting a separate level to use
in the membership check:
In [171]: s_mi = pd.Series(np.arange(6),
.....: index=pd.MultiIndex.from_product([[0, 1], ['a', 'b', 'c']]))
.....:
In [172]: s_mi
Out[172]:
0 a 0
b 1
c 2
1 a 3
b 4
c 5
dtype: int64
In [173]: s_mi.iloc[s_mi.index.isin([(1, 'a'), (2, 'b'), (0, 'c')])]
Out[173]:
0 c 2
1 a 3
dtype: int64
In [174]: s_mi.iloc[s_mi.index.isin(['a', 'c', 'e'], level=1)]
Out[174]:
0 a 0
c 2
1 a 3
c 5
dtype: int64
DataFrame also has an isin() method. When calling isin, pass a set of
values as either an array or dict. If values is an array, isin returns
a DataFrame of booleans that is the same shape as the original DataFrame, with True
wherever the element is in the sequence of values.
In [175]: df = pd.DataFrame({'vals': [1, 2, 3, 4], 'ids': ['a', 'b', 'f', 'n'],
.....: 'ids2': ['a', 'n', 'c', 'n']})
.....:
In [176]: values = ['a', 'b', 1, 3]
In [177]: df.isin(values)
Out[177]:
vals ids ids2
0 True True True
1 False True False
2 True False False
3 False False False
Oftentimes you’ll want to match certain values with certain columns.
Just make values a dict where the key is the column, and the value is
a list of items you want to check for.
In [178]: values = {'ids': ['a', 'b'], 'vals': [1, 3]}
In [179]: df.isin(values)
Out[179]:
vals ids ids2
0 True True False
1 False True False
2 True False False
3 False False False
To return the DataFrame of booleans where the values are not in the original DataFrame,
use the ~ operator:
In [180]: values = {'ids': ['a', 'b'], 'vals': [1, 3]}
In [181]: ~df.isin(values)
Out[181]:
vals ids ids2
0 False False True
1 True False True
2 False True True
3 True True True
Combine DataFrame’s isin with the any() and all() methods to
quickly select subsets of your data that meet given criteria.
To select a row where each column meets its own criterion:
In [182]: values = {'ids': ['a', 'b'], 'ids2': ['a', 'c'], 'vals': [1, 3]}
In [183]: row_mask = df.isin(values).all(1)
In [184]: df[row_mask]
Out[184]:
vals ids ids2
0 1 a a
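To instead keep rows where at least one column matches its criterion, the same pattern works with any (a small sketch re-creating df and values from above):

import pandas as pd

df_any = pd.DataFrame({'vals': [1, 2, 3, 4], 'ids': ['a', 'b', 'f', 'n'],
                       'ids2': ['a', 'n', 'c', 'n']})
values = {'ids': ['a', 'b'], 'ids2': ['a', 'c'], 'vals': [1, 3]}
row_mask_any = df_any.isin(values).any(axis=1)
print(df_any[row_mask_any])   # rows 0, 1 and 2 match at least one criterion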
The where() Method and Masking#
Selecting values from a Series with a boolean vector generally returns a
subset of the data. To guarantee that selection output has the same shape as
the original data, you can use the where method in Series and DataFrame.
To return only the selected rows:
In [185]: s[s > 0]
Out[185]:
3 1
2 2
1 3
0 4
dtype: int64
To return a Series of the same shape as the original:
In [186]: s.where(s > 0)
Out[186]:
4 NaN
3 1.0
2 2.0
1 3.0
0 4.0
dtype: float64
Selecting values from a DataFrame with a boolean criterion now also preserves
input data shape. where is used under the hood as the implementation.
The code below is equivalent to df.where(df < 0).
In [187]: df[df < 0]
Out[187]:
A B C D
2000-01-01 -2.104139 -1.309525 NaN NaN
2000-01-02 -0.352480 NaN -1.192319 NaN
2000-01-03 -0.864883 NaN -0.227870 NaN
2000-01-04 NaN -1.222082 NaN -1.233203
2000-01-05 NaN -0.605656 -1.169184 NaN
2000-01-06 NaN -0.948458 NaN -0.684718
2000-01-07 -2.670153 -0.114722 NaN -0.048048
2000-01-08 NaN NaN -0.048788 -0.808838
In addition, where takes an optional other argument for replacement of
values where the condition is False, in the returned copy.
In [188]: df.where(df < 0, -df)
Out[188]:
A B C D
2000-01-01 -2.104139 -1.309525 -0.485855 -0.245166
2000-01-02 -0.352480 -0.390389 -1.192319 -1.655824
2000-01-03 -0.864883 -0.299674 -0.227870 -0.281059
2000-01-04 -0.846958 -1.222082 -0.600705 -1.233203
2000-01-05 -0.669692 -0.605656 -1.169184 -0.342416
2000-01-06 -0.868584 -0.948458 -2.297780 -0.684718
2000-01-07 -2.670153 -0.114722 -0.168904 -0.048048
2000-01-08 -0.801196 -1.392071 -0.048788 -0.808838
You may wish to set values based on some boolean criteria.
This can be done intuitively like so:
In [189]: s2 = s.copy()
In [190]: s2[s2 < 0] = 0
In [191]: s2
Out[191]:
4 0
3 1
2 2
1 3
0 4
dtype: int64
In [192]: df2 = df.copy()
In [193]: df2[df2 < 0] = 0
In [194]: df2
Out[194]:
A B C D
2000-01-01 0.000000 0.000000 0.485855 0.245166
2000-01-02 0.000000 0.390389 0.000000 1.655824
2000-01-03 0.000000 0.299674 0.000000 0.281059
2000-01-04 0.846958 0.000000 0.600705 0.000000
2000-01-05 0.669692 0.000000 0.000000 0.342416
2000-01-06 0.868584 0.000000 2.297780 0.000000
2000-01-07 0.000000 0.000000 0.168904 0.000000
2000-01-08 0.801196 1.392071 0.000000 0.000000
By default, where returns a modified copy of the data. There is an
optional parameter inplace so that the original data can be modified
without creating a copy:
In [195]: df_orig = df.copy()
In [196]: df_orig.where(df > 0, -df, inplace=True)
In [197]: df_orig
Out[197]:
A B C D
2000-01-01 2.104139 1.309525 0.485855 0.245166
2000-01-02 0.352480 0.390389 1.192319 1.655824
2000-01-03 0.864883 0.299674 0.227870 0.281059
2000-01-04 0.846958 1.222082 0.600705 1.233203
2000-01-05 0.669692 0.605656 1.169184 0.342416
2000-01-06 0.868584 0.948458 2.297780 0.684718
2000-01-07 2.670153 0.114722 0.168904 0.048048
2000-01-08 0.801196 1.392071 0.048788 0.808838
Note
The signature for DataFrame.where() differs from numpy.where().
Roughly df1.where(m, df2) is equivalent to np.where(m, df1, df2).
In [198]: df.where(df < 0, -df) == np.where(df < 0, df, -df)
Out[198]:
A B C D
2000-01-01 True True True True
2000-01-02 True True True True
2000-01-03 True True True True
2000-01-04 True True True True
2000-01-05 True True True True
2000-01-06 True True True True
2000-01-07 True True True True
2000-01-08 True True True True
Alignment
Furthermore, where aligns the input boolean condition (ndarray or DataFrame),
such that partial selection with setting is possible. This is analogous to
partial setting via .loc (but on the contents rather than the axis labels).
In [199]: df2 = df.copy()
In [200]: df2[df2[1:4] > 0] = 3
In [201]: df2
Out[201]:
A B C D
2000-01-01 -2.104139 -1.309525 0.485855 0.245166
2000-01-02 -0.352480 3.000000 -1.192319 3.000000
2000-01-03 -0.864883 3.000000 -0.227870 3.000000
2000-01-04 3.000000 -1.222082 3.000000 -1.233203
2000-01-05 0.669692 -0.605656 -1.169184 0.342416
2000-01-06 0.868584 -0.948458 2.297780 -0.684718
2000-01-07 -2.670153 -0.114722 0.168904 -0.048048
2000-01-08 0.801196 1.392071 -0.048788 -0.808838
Where can also accept axis and level parameters to align the input when
performing the where.
In [202]: df2 = df.copy()
In [203]: df2.where(df2 > 0, df2['A'], axis='index')
Out[203]:
A B C D
2000-01-01 -2.104139 -2.104139 0.485855 0.245166
2000-01-02 -0.352480 0.390389 -0.352480 1.655824
2000-01-03 -0.864883 0.299674 -0.864883 0.281059
2000-01-04 0.846958 0.846958 0.600705 0.846958
2000-01-05 0.669692 0.669692 0.669692 0.342416
2000-01-06 0.868584 0.868584 2.297780 0.868584
2000-01-07 -2.670153 -2.670153 0.168904 -2.670153
2000-01-08 0.801196 1.392071 0.801196 0.801196
This is equivalent to (but faster than) the following.
In [204]: df2 = df.copy()
In [205]: df.apply(lambda x, y: x.where(x > 0, y), y=df['A'])
Out[205]:
A B C D
2000-01-01 -2.104139 -2.104139 0.485855 0.245166
2000-01-02 -0.352480 0.390389 -0.352480 1.655824
2000-01-03 -0.864883 0.299674 -0.864883 0.281059
2000-01-04 0.846958 0.846958 0.600705 0.846958
2000-01-05 0.669692 0.669692 0.669692 0.342416
2000-01-06 0.868584 0.868584 2.297780 0.868584
2000-01-07 -2.670153 -2.670153 0.168904 -2.670153
2000-01-08 0.801196 1.392071 0.801196 0.801196
where can accept a callable as condition and other arguments. The callable must be
a function with one argument (the calling Series or DataFrame) that returns valid output
for the condition and other arguments.
In [206]: df3 = pd.DataFrame({'A': [1, 2, 3],
.....: 'B': [4, 5, 6],
.....: 'C': [7, 8, 9]})
.....:
In [207]: df3.where(lambda x: x > 4, lambda x: x + 10)
Out[207]:
A B C
0 11 14 7
1 12 5 8
2 13 6 9
Mask#
mask() is the inverse boolean operation of where.
In [208]: s.mask(s >= 0)
Out[208]:
4 NaN
3 NaN
2 NaN
1 NaN
0 NaN
dtype: float64
In [209]: df.mask(df >= 0)
Out[209]:
A B C D
2000-01-01 -2.104139 -1.309525 NaN NaN
2000-01-02 -0.352480 NaN -1.192319 NaN
2000-01-03 -0.864883 NaN -0.227870 NaN
2000-01-04 NaN -1.222082 NaN -1.233203
2000-01-05 NaN -0.605656 -1.169184 NaN
2000-01-06 NaN -0.948458 NaN -0.684718
2000-01-07 -2.670153 -0.114722 NaN -0.048048
2000-01-08 NaN NaN -0.048788 -0.808838
Setting with enlargement conditionally using numpy()#
An alternative to where() is to use numpy.where().
Combined with setting a new column, you can use it to enlarge a DataFrame where the
values are determined conditionally.
Consider that you have two choices to choose from in the following DataFrame, and you want
to set a new column color to ‘green’ when the second column has ‘Z’. You can do the
following:
In [210]: df = pd.DataFrame({'col1': list('ABBC'), 'col2': list('ZZXY')})
In [211]: df['color'] = np.where(df['col2'] == 'Z', 'green', 'red')
In [212]: df
Out[212]:
col1 col2 color
0 A Z green
1 B Z green
2 B X red
3 C Y red
If you have multiple conditions, you can use numpy.select() to achieve that. Say that,
corresponding to three conditions, there are three choices of colors, with a fourth color
as a fallback; you can do the following.
In [213]: conditions = [
.....: (df['col2'] == 'Z') & (df['col1'] == 'A'),
.....: (df['col2'] == 'Z') & (df['col1'] == 'B'),
.....: (df['col1'] == 'B')
.....: ]
.....:
In [214]: choices = ['yellow', 'blue', 'purple']
In [215]: df['color'] = np.select(conditions, choices, default='black')
In [216]: df
Out[216]:
col1 col2 color
0 A Z yellow
1 B Z blue
2 B X purple
3 C Y black
The query() Method#
DataFrame objects have a query()
method that allows selection using an expression.
You can get the value of the frame where column b has values
between the values of columns a and c. For example:
In [217]: n = 10
In [218]: df = pd.DataFrame(np.random.rand(n, 3), columns=list('abc'))
In [219]: df
Out[219]:
a b c
0 0.438921 0.118680 0.863670
1 0.138138 0.577363 0.686602
2 0.595307 0.564592 0.520630
3 0.913052 0.926075 0.616184
4 0.078718 0.854477 0.898725
5 0.076404 0.523211 0.591538
6 0.792342 0.216974 0.564056
7 0.397890 0.454131 0.915716
8 0.074315 0.437913 0.019794
9 0.559209 0.502065 0.026437
# pure python
In [220]: df[(df['a'] < df['b']) & (df['b'] < df['c'])]
Out[220]:
a b c
1 0.138138 0.577363 0.686602
4 0.078718 0.854477 0.898725
5 0.076404 0.523211 0.591538
7 0.397890 0.454131 0.915716
# query
In [221]: df.query('(a < b) & (b < c)')
Out[221]:
a b c
1 0.138138 0.577363 0.686602
4 0.078718 0.854477 0.898725
5 0.076404 0.523211 0.591538
7 0.397890 0.454131 0.915716
Do the same thing but fall back on a named index if there is no column
with the name a.
In [222]: df = pd.DataFrame(np.random.randint(n / 2, size=(n, 2)), columns=list('bc'))
In [223]: df.index.name = 'a'
In [224]: df
Out[224]:
b c
a
0 0 4
1 0 1
2 3 4
3 4 3
4 1 4
5 0 3
6 0 1
7 3 4
8 2 3
9 1 1
In [225]: df.query('a < b and b < c')
Out[225]:
b c
a
2 3 4
If instead you don’t want to or cannot name your index, you can use the name
index in your query expression:
In [226]: df = pd.DataFrame(np.random.randint(n, size=(n, 2)), columns=list('bc'))
In [227]: df
Out[227]:
b c
0 3 1
1 3 0
2 5 6
3 5 2
4 7 4
5 0 1
6 2 5
7 0 1
8 6 0
9 7 9
In [228]: df.query('index < b < c')
Out[228]:
b c
2 5 6
Note
If the name of your index overlaps with a column name, the column name is
given precedence. For example,
In [229]: df = pd.DataFrame({'a': np.random.randint(5, size=5)})
In [230]: df.index.name = 'a'
In [231]: df.query('a > 2') # uses the column 'a', not the index
Out[231]:
a
a
1 3
3 3
You can still use the index in a query expression by using the special
identifier ‘index’:
In [232]: df.query('index > 2')
Out[232]:
a
a
3 3
4 2
If for some reason you have a column named index, then you can refer to
the index as ilevel_0 as well, but at this point you should consider
renaming your columns to something less ambiguous.
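A minimal sketch of that corner case (the frame is hypothetical; note the column literally named index):

import pandas as pd

dfq = pd.DataFrame({'index': [3, 1, 4], 'b': [0, 2, 5]})
# the usual "index" identifier is shadowed by the column name here,
# so refer to the row labels as ilevel_0 instead
print(dfq.query('ilevel_0 >= 1'))   # selects the rows labelled 1 and 2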
MultiIndex query() Syntax#
You can also use the levels of a DataFrame with a
MultiIndex as if they were columns in the frame:
In [233]: n = 10
In [234]: colors = np.random.choice(['red', 'green'], size=n)
In [235]: foods = np.random.choice(['eggs', 'ham'], size=n)
In [236]: colors
Out[236]:
array(['red', 'red', 'red', 'green', 'green', 'green', 'green', 'green',
'green', 'green'], dtype='<U5')
In [237]: foods
Out[237]:
array(['ham', 'ham', 'eggs', 'eggs', 'eggs', 'ham', 'ham', 'eggs', 'eggs',
'eggs'], dtype='<U4')
In [238]: index = pd.MultiIndex.from_arrays([colors, foods], names=['color', 'food'])
In [239]: df = pd.DataFrame(np.random.randn(n, 2), index=index)
In [240]: df
Out[240]:
0 1
color food
red ham 0.194889 -0.381994
ham 0.318587 2.089075
eggs -0.728293 -0.090255
green eggs -0.748199 1.318931
eggs -2.029766 0.792652
ham 0.461007 -0.542749
ham -0.305384 -0.479195
eggs 0.095031 -0.270099
eggs -0.707140 -0.773882
eggs 0.229453 0.304418
In [241]: df.query('color == "red"')
Out[241]:
0 1
color food
red ham 0.194889 -0.381994
ham 0.318587 2.089075
eggs -0.728293 -0.090255
If the levels of the MultiIndex are unnamed, you can refer to them using
special names:
In [242]: df.index.names = [None, None]
In [243]: df
Out[243]:
0 1
red ham 0.194889 -0.381994
ham 0.318587 2.089075
eggs -0.728293 -0.090255
green eggs -0.748199 1.318931
eggs -2.029766 0.792652
ham 0.461007 -0.542749
ham -0.305384 -0.479195
eggs 0.095031 -0.270099
eggs -0.707140 -0.773882
eggs 0.229453 0.304418
In [244]: df.query('ilevel_0 == "red"')
Out[244]:
0 1
red ham 0.194889 -0.381994
ham 0.318587 2.089075
eggs -0.728293 -0.090255
The convention is ilevel_0, which means “index level 0” for the 0th level
of the index.
query() Use Cases#
A use case for query() is when you have a collection of
DataFrame objects that have a subset of column names (or index
levels/names) in common. You can pass the same query to both frames without
having to specify which frame you’re interested in querying.
In [245]: df = pd.DataFrame(np.random.rand(n, 3), columns=list('abc'))
In [246]: df
Out[246]:
a b c
0 0.224283 0.736107 0.139168
1 0.302827 0.657803 0.713897
2 0.611185 0.136624 0.984960
3 0.195246 0.123436 0.627712
4 0.618673 0.371660 0.047902
5 0.480088 0.062993 0.185760
6 0.568018 0.483467 0.445289
7 0.309040 0.274580 0.587101
8 0.258993 0.477769 0.370255
9 0.550459 0.840870 0.304611
In [247]: df2 = pd.DataFrame(np.random.rand(n + 2, 3), columns=df.columns)
In [248]: df2
Out[248]:
a b c
0 0.357579 0.229800 0.596001
1 0.309059 0.957923 0.965663
2 0.123102 0.336914 0.318616
3 0.526506 0.323321 0.860813
4 0.518736 0.486514 0.384724
5 0.190804 0.505723 0.614533
6 0.891939 0.623977 0.676639
7 0.480559 0.378528 0.460858
8 0.420223 0.136404 0.141295
9 0.732206 0.419540 0.604675
10 0.604466 0.848974 0.896165
11 0.589168 0.920046 0.732716
In [249]: expr = '0.0 <= a <= c <= 0.5'
In [250]: map(lambda frame: frame.query(expr), [df, df2])
Out[250]: <map at 0x7f1ea0d8e580>
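Note that map is lazy in Python 3, which is why the output above is just a map object. To actually materialize the per-frame results, wrap it in list() or use a list comprehension (df and df2 are the frames built above):

expr = '0.0 <= a <= c <= 0.5'
results = [frame.query(expr) for frame in (df, df2)]   # a list of two filtered frames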
query() Python versus pandas Syntax Comparison#
Full numpy-like syntax:
In [251]: df = pd.DataFrame(np.random.randint(n, size=(n, 3)), columns=list('abc'))
In [252]: df
Out[252]:
a b c
0 7 8 9
1 1 0 7
2 2 7 2
3 6 2 2
4 2 6 3
5 3 8 2
6 1 7 2
7 5 1 5
8 9 8 0
9 1 5 0
In [253]: df.query('(a < b) & (b < c)')
Out[253]:
a b c
0 7 8 9
In [254]: df[(df['a'] < df['b']) & (df['b'] < df['c'])]
Out[254]:
a b c
0 7 8 9
Slightly nicer by removing the parentheses (comparison operators bind tighter
than & and |):
In [255]: df.query('a < b & b < c')
Out[255]:
a b c
0 7 8 9
Use English instead of symbols:
In [256]: df.query('a < b and b < c')
Out[256]:
a b c
0 7 8 9
Pretty close to how you might write it on paper:
In [257]: df.query('a < b < c')
Out[257]:
a b c
0 7 8 9
The in and not in operators#
query() also supports special use of Python’s in and
not in comparison operators, providing a succinct syntax for calling the
isin method of a Series or DataFrame.
# get all rows where columns "a" and "b" have overlapping values
In [258]: df = pd.DataFrame({'a': list('aabbccddeeff'), 'b': list('aaaabbbbcccc'),
.....: 'c': np.random.randint(5, size=12),
.....: 'd': np.random.randint(9, size=12)})
.....:
In [259]: df
Out[259]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2
In [260]: df.query('a in b')
Out[260]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2
# How you'd do it in pure Python
In [261]: df[df['a'].isin(df['b'])]
Out[261]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2
In [262]: df.query('a not in b')
Out[262]:
a b c d
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2
# pure Python
In [263]: df[~df['a'].isin(df['b'])]
Out[263]:
a b c d
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2
You can combine this with other expressions for very succinct queries:
# rows where cols a and b have overlapping values
# and col c's values are less than col d's
In [264]: df.query('a in b and c < d')
Out[264]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
4 c b 3 6
5 c b 0 2
# pure Python
In [265]: df[df['b'].isin(df['a']) & (df['c'] < df['d'])]
Out[265]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
4 c b 3 6
5 c b 0 2
10 f c 0 6
11 f c 1 2
Note
Note that in and not in are evaluated in Python, since numexpr
has no equivalent of this operation. However, only the in/not in
expression itself is evaluated in vanilla Python. For example, in the
expression
df.query('a in b + c + d')
(b + c + d) is evaluated by numexpr and then the in
operation is evaluated in plain Python. In general, any operations that can
be evaluated using numexpr will be.
Special use of the == operator with list objects#
Comparing a list of values to a column using ==/!= works similarly
to in/not in.
In [266]: df.query('b == ["a", "b", "c"]')
Out[266]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2
# pure Python
In [267]: df[df['b'].isin(["a", "b", "c"])]
Out[267]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2
In [268]: df.query('c == [1, 2]')
Out[268]:
a b c d
0 a a 2 6
2 b a 1 6
3 b a 2 1
7 d b 2 1
9 e c 2 0
11 f c 1 2
In [269]: df.query('c != [1, 2]')
Out[269]:
a b c d
1 a a 4 7
4 c b 3 6
5 c b 0 2
6 d b 3 3
8 e c 4 3
10 f c 0 6
# using in/not in
In [270]: df.query('[1, 2] in c')
Out[270]:
a b c d
0 a a 2 6
2 b a 1 6
3 b a 2 1
7 d b 2 1
9 e c 2 0
11 f c 1 2
In [271]: df.query('[1, 2] not in c')
Out[271]:
a b c d
1 a a 4 7
4 c b 3 6
5 c b 0 2
6 d b 3 3
8 e c 4 3
10 f c 0 6
# pure Python
In [272]: df[df['c'].isin([1, 2])]
Out[272]:
a b c d
0 a a 2 6
2 b a 1 6
3 b a 2 1
7 d b 2 1
9 e c 2 0
11 f c 1 2
Boolean operators#
You can negate boolean expressions with the word not or the ~ operator.
In [273]: df = pd.DataFrame(np.random.rand(n, 3), columns=list('abc'))
In [274]: df['bools'] = np.random.rand(len(df)) > 0.5
In [275]: df.query('~bools')
Out[275]:
a b c bools
2 0.697753 0.212799 0.329209 False
7 0.275396 0.691034 0.826619 False
8 0.190649 0.558748 0.262467 False
In [276]: df.query('not bools')
Out[276]:
a b c bools
2 0.697753 0.212799 0.329209 False
7 0.275396 0.691034 0.826619 False
8 0.190649 0.558748 0.262467 False
In [277]: df.query('not bools') == df[~df['bools']]
Out[277]:
a b c bools
2 True True True True
7 True True True True
8 True True True True
Of course, expressions can be arbitrarily complex too:
# short query syntax
In [278]: shorter = df.query('a < b < c and (not bools) or bools > 2')
# equivalent in pure Python
In [279]: longer = df[(df['a'] < df['b'])
.....: & (df['b'] < df['c'])
.....: & (~df['bools'])
.....: | (df['bools'] > 2)]
.....:
In [280]: shorter
Out[280]:
a b c bools
7 0.275396 0.691034 0.826619 False
In [281]: longer
Out[281]:
a b c bools
7 0.275396 0.691034 0.826619 False
In [282]: shorter == longer
Out[282]:
a b c bools
7 True True True True
Performance of query()#
DataFrame.query() using numexpr is slightly faster than Python for
large frames.
Note
You will only see the performance benefits of using the numexpr engine
with DataFrame.query() if your frame has more than approximately 200,000
rows.
This plot was created using a DataFrame with 3 columns each containing
floating point values generated using numpy.random.randn().
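A rough, machine-dependent sketch for checking this yourself (it assumes numexpr is installed and uses an illustrative 1,000,000-row frame):

import timeit
import numpy as np
import pandas as pd

big = pd.DataFrame(np.random.randn(1_000_000, 3), columns=list('abc'))
mask_time = timeit.timeit(lambda: big[(big['a'] < big['b']) & (big['b'] < big['c'])], number=10)
query_time = timeit.timeit(lambda: big.query('(a < b) & (b < c)'), number=10)
print(f'boolean mask: {mask_time:.2f}s   query/numexpr: {query_time:.2f}s')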
Duplicate data#
If you want to identify and remove duplicate rows in a DataFrame, there are
two methods that will help: duplicated and drop_duplicates. Each
takes as an argument the columns to use to identify duplicated rows.
duplicated returns a boolean vector whose length is the number of rows, and which indicates whether a row is duplicated.
drop_duplicates removes duplicate rows.
By default, the first observed row of a duplicate set is considered unique, but
each method has a keep parameter to specify targets to be kept.
keep='first' (default): mark / drop duplicates except for the first occurrence.
keep='last': mark / drop duplicates except for the last occurrence.
keep=False: mark / drop all duplicates.
In [283]: df2 = pd.DataFrame({'a': ['one', 'one', 'two', 'two', 'two', 'three', 'four'],
.....: 'b': ['x', 'y', 'x', 'y', 'x', 'x', 'x'],
.....: 'c': np.random.randn(7)})
.....:
In [284]: df2
Out[284]:
a b c
0 one x -1.067137
1 one y 0.309500
2 two x -0.211056
3 two y -1.842023
4 two x -0.390820
5 three x -1.964475
6 four x 1.298329
In [285]: df2.duplicated('a')
Out[285]:
0 False
1 True
2 False
3 True
4 True
5 False
6 False
dtype: bool
In [286]: df2.duplicated('a', keep='last')
Out[286]:
0 True
1 False
2 True
3 True
4 False
5 False
6 False
dtype: bool
In [287]: df2.duplicated('a', keep=False)
Out[287]:
0 True
1 True
2 True
3 True
4 True
5 False
6 False
dtype: bool
In [288]: df2.drop_duplicates('a')
Out[288]:
a b c
0 one x -1.067137
2 two x -0.211056
5 three x -1.964475
6 four x 1.298329
In [289]: df2.drop_duplicates('a', keep='last')
Out[289]:
a b c
1 one y 0.309500
4 two x -0.390820
5 three x -1.964475
6 four x 1.298329
In [290]: df2.drop_duplicates('a', keep=False)
Out[290]:
a b c
5 three x -1.964475
6 four x 1.298329
Also, you can pass a list of columns to identify duplications.
In [291]: df2.duplicated(['a', 'b'])
Out[291]:
0 False
1 False
2 False
3 False
4 True
5 False
6 False
dtype: bool
In [292]: df2.drop_duplicates(['a', 'b'])
Out[292]:
a b c
0 one x -1.067137
1 one y 0.309500
2 two x -0.211056
3 two y -1.842023
5 three x -1.964475
6 four x 1.298329
To drop duplicates by index value, use Index.duplicated then perform slicing.
The same set of options are available for the keep parameter.
In [293]: df3 = pd.DataFrame({'a': np.arange(6),
.....: 'b': np.random.randn(6)},
.....: index=['a', 'a', 'b', 'c', 'b', 'a'])
.....:
In [294]: df3
Out[294]:
a b
a 0 1.440455
a 1 2.456086
b 2 1.038402
c 3 -0.894409
b 4 0.683536
a 5 3.082764
In [295]: df3.index.duplicated()
Out[295]: array([False, True, False, False, True, True])
In [296]: df3[~df3.index.duplicated()]
Out[296]:
a b
a 0 1.440455
b 2 1.038402
c 3 -0.894409
In [297]: df3[~df3.index.duplicated(keep='last')]
Out[297]:
a b
c 3 -0.894409
b 4 0.683536
a 5 3.082764
In [298]: df3[~df3.index.duplicated(keep=False)]
Out[298]:
a b
c 3 -0.894409
Dictionary-like get() method#
Each of Series and DataFrame has a get method which can return a
default value.
In [299]: s = pd.Series([1, 2, 3], index=['a', 'b', 'c'])
In [300]: s.get('a') # equivalent to s['a']
Out[300]: 1
In [301]: s.get('x', default=-1)
Out[301]: -1
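DataFrame.get behaves the same way for columns; a short sketch with a hypothetical frame:

import pandas as pd

dfg = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
print(dfg.get('a'))              # same as dfg['a']
print(dfg.get('z', default=-1))  # -1, since there is no column 'z'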
Looking up values by index/column labels#
Sometimes you want to extract a set of values given a sequence of row labels
and column labels; this can be achieved with pandas.factorize and NumPy indexing.
For instance:
In [302]: df = pd.DataFrame({'col': ["A", "A", "B", "B"],
.....: 'A': [80, 23, np.nan, 22],
.....: 'B': [80, 55, 76, 67]})
.....:
In [303]: df
Out[303]:
col A B
0 A 80.0 80
1 A 23.0 55
2 B NaN 76
3 B 22.0 67
In [304]: idx, cols = pd.factorize(df['col'])
In [305]: df.reindex(cols, axis=1).to_numpy()[np.arange(len(df)), idx]
Out[305]: array([80., 23., 76., 67.])
Formerly this could be achieved with the dedicated DataFrame.lookup method
which was deprecated in version 1.2.0.
Index objects#
The pandas Index class and its subclasses can be viewed as
implementing an ordered multiset. Duplicates are allowed. However, if you try
to convert an Index object with duplicate entries into a
set, an exception will be raised.
Index also provides the infrastructure necessary for
lookups, data alignment, and reindexing. The easiest way to create an
Index directly is to pass a list or other sequence to
Index:
In [306]: index = pd.Index(['e', 'd', 'a', 'b'])
In [307]: index
Out[307]: Index(['e', 'd', 'a', 'b'], dtype='object')
In [308]: 'd' in index
Out[308]: True
You can also pass a name to be stored in the index:
In [309]: index = pd.Index(['e', 'd', 'a', 'b'], name='something')
In [310]: index.name
Out[310]: 'something'
The name, if set, will be shown in the console display:
In [311]: index = pd.Index(list(range(5)), name='rows')
In [312]: columns = pd.Index(['A', 'B', 'C'], name='cols')
In [313]: df = pd.DataFrame(np.random.randn(5, 3), index=index, columns=columns)
In [314]: df
Out[314]:
cols A B C
rows
0 1.295989 -1.051694 1.340429
1 -2.366110 0.428241 0.387275
2 0.433306 0.929548 0.278094
3 2.154730 -0.315628 0.264223
4 1.126818 1.132290 -0.353310
In [315]: df['A']
Out[315]:
rows
0 1.295989
1 -2.366110
2 0.433306
3 2.154730
4 1.126818
Name: A, dtype: float64
Setting metadata#
Indexes are “mostly immutable”, but it is possible to set and change their
name attribute. You can use the rename and set_names methods to set these attributes
directly; they default to returning a copy.
See Advanced Indexing for usage of MultiIndexes.
In [316]: ind = pd.Index([1, 2, 3])
In [317]: ind.rename("apple")
Out[317]: Int64Index([1, 2, 3], dtype='int64', name='apple')
In [318]: ind
Out[318]: Int64Index([1, 2, 3], dtype='int64')
In [319]: ind.set_names(["apple"], inplace=True)
In [320]: ind.name = "bob"
In [321]: ind
Out[321]: Int64Index([1, 2, 3], dtype='int64', name='bob')
set_names, set_levels, and set_codes also take an optional
level argument.
In [322]: index = pd.MultiIndex.from_product([range(3), ['one', 'two']], names=['first', 'second'])
In [323]: index
Out[323]:
MultiIndex([(0, 'one'),
(0, 'two'),
(1, 'one'),
(1, 'two'),
(2, 'one'),
(2, 'two')],
names=['first', 'second'])
In [324]: index.levels[1]
Out[324]: Index(['one', 'two'], dtype='object', name='second')
In [325]: index.set_levels(["a", "b"], level=1)
Out[325]:
MultiIndex([(0, 'a'),
(0, 'b'),
(1, 'a'),
(1, 'b'),
(2, 'a'),
(2, 'b')],
names=['first', 'second'])
Set operations on Index objects#
The two main operations are union and intersection.
Difference is provided via the .difference() method.
In [326]: a = pd.Index(['c', 'b', 'a'])
In [327]: b = pd.Index(['c', 'e', 'd'])
In [328]: a.difference(b)
Out[328]: Index(['a', 'b'], dtype='object')
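For completeness, a quick sketch of union and intersection with the same two indexes:

import pandas as pd

a = pd.Index(['c', 'b', 'a'])
b = pd.Index(['c', 'e', 'd'])
print(a.union(b))         # Index(['a', 'b', 'c', 'd', 'e'], dtype='object')
print(a.intersection(b))  # Index(['c'], dtype='object')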
Also available is the symmetric_difference operation, which returns elements
that appear in either idx1 or idx2, but not in both. This is
equivalent to the Index created by idx1.difference(idx2).union(idx2.difference(idx1)),
with duplicates dropped.
In [329]: idx1 = pd.Index([1, 2, 3, 4])
In [330]: idx2 = pd.Index([2, 3, 4, 5])
In [331]: idx1.symmetric_difference(idx2)
Out[331]: Int64Index([1, 5], dtype='int64')
Note
The resulting index from a set operation will be sorted in ascending order.
When performing Index.union() between indexes with different dtypes, the indexes
must be cast to a common dtype. Typically, though not always, this is object dtype. The
exception is when performing a union between integer and float data. In this case, the
integer values are converted to float
In [332]: idx1 = pd.Index([0, 1, 2])
In [333]: idx2 = pd.Index([0.5, 1.5])
In [334]: idx1.union(idx2)
Out[334]: Float64Index([0.0, 0.5, 1.0, 1.5, 2.0], dtype='float64')
Missing values#
Important
Even though Index can hold missing values (NaN), doing so should be avoided
if you do not want any unexpected results. For example, some operations
exclude missing values implicitly.
Index.fillna fills missing values with a specified scalar value.
In [335]: idx1 = pd.Index([1, np.nan, 3, 4])
In [336]: idx1
Out[336]: Float64Index([1.0, nan, 3.0, 4.0], dtype='float64')
In [337]: idx1.fillna(2)
Out[337]: Float64Index([1.0, 2.0, 3.0, 4.0], dtype='float64')
In [338]: idx2 = pd.DatetimeIndex([pd.Timestamp('2011-01-01'),
.....: pd.NaT,
.....: pd.Timestamp('2011-01-03')])
.....:
In [339]: idx2
Out[339]: DatetimeIndex(['2011-01-01', 'NaT', '2011-01-03'], dtype='datetime64[ns]', freq=None)
In [340]: idx2.fillna(pd.Timestamp('2011-01-02'))
Out[340]: DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03'], dtype='datetime64[ns]', freq=None)
Set / reset index#
Occasionally you will load or create a data set into a DataFrame and want to
add an index after you’ve already done so. There are a couple of different
ways.
Set an index#
DataFrame has a set_index() method which takes a column name
(for a regular Index) or a list of column names (for a MultiIndex).
To create a new, re-indexed DataFrame:
In [341]: data
Out[341]:
a b c d
0 bar one z 1.0
1 bar two y 2.0
2 foo one x 3.0
3 foo two w 4.0
In [342]: indexed1 = data.set_index('c')
In [343]: indexed1
Out[343]:
a b d
c
z bar one 1.0
y bar two 2.0
x foo one 3.0
w foo two 4.0
In [344]: indexed2 = data.set_index(['a', 'b'])
In [345]: indexed2
Out[345]:
c d
a b
bar one z 1.0
two y 2.0
foo one x 3.0
two w 4.0
The append keyword option allows you to keep the existing index and append
the given columns to a MultiIndex:
In [346]: frame = data.set_index('c', drop=False)
In [347]: frame = frame.set_index(['a', 'b'], append=True)
In [348]: frame
Out[348]:
c d
c a b
z bar one z 1.0
y bar two y 2.0
x foo one x 3.0
w foo two w 4.0
Other options in set_index allow you to not drop the index columns or to add
the index in place (without creating a new object):
In [349]: data.set_index('c', drop=False)
Out[349]:
a b c d
c
z bar one z 1.0
y bar two y 2.0
x foo one x 3.0
w foo two w 4.0
In [350]: data.set_index(['a', 'b'], inplace=True)
In [351]: data
Out[351]:
c d
a b
bar one z 1.0
two y 2.0
foo one x 3.0
two w 4.0
Reset the index#
As a convenience, DataFrame has a method called
reset_index() which transfers the index values into the
DataFrame’s columns and sets a simple integer index.
This is the inverse operation of set_index().
In [352]: data
Out[352]:
c d
a b
bar one z 1.0
two y 2.0
foo one x 3.0
two w 4.0
In [353]: data.reset_index()
Out[353]:
a b c d
0 bar one z 1.0
1 bar two y 2.0
2 foo one x 3.0
3 foo two w 4.0
The output is more similar to a SQL table or a record array. The names for the
columns derived from the index are the ones stored in the names attribute.
You can use the level keyword to remove only a portion of the index:
In [354]: frame
Out[354]:
c d
c a b
z bar one z 1.0
y bar two y 2.0
x foo one x 3.0
w foo two w 4.0
In [355]: frame.reset_index(level=1)
Out[355]:
a c d
c b
z one bar z 1.0
y two bar y 2.0
x one foo x 3.0
w two foo w 4.0
reset_index takes an optional parameter drop which, if true, simply
discards the index instead of putting the index values in the DataFrame's columns.
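A minimal sketch of the drop keyword on a small throwaway frame (the labels are purely illustrative):
```
import pandas as pd

df = pd.DataFrame({'x': [1, 2, 3]}, index=['a', 'b', 'c'])

# drop=True discards the existing index instead of inserting it as a column
df.reset_index(drop=True)
#    x
# 0  1
# 1  2
# 2  3
```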
Adding an ad hoc index#
If you create an index yourself, you can just assign it to the index field:
data.index = index
Returning a view versus a copy#
When setting values in a pandas object, care must be taken to avoid what is called
chained indexing. Here is an example.
In [356]: dfmi = pd.DataFrame([list('abcd'),
.....: list('efgh'),
.....: list('ijkl'),
.....: list('mnop')],
.....: columns=pd.MultiIndex.from_product([['one', 'two'],
.....: ['first', 'second']]))
.....:
In [357]: dfmi
Out[357]:
one two
first second first second
0 a b c d
1 e f g h
2 i j k l
3 m n o p
Compare these two access methods:
In [358]: dfmi['one']['second']
Out[358]:
0 b
1 f
2 j
3 n
Name: second, dtype: object
In [359]: dfmi.loc[:, ('one', 'second')]
Out[359]:
0 b
1 f
2 j
3 n
Name: (one, second), dtype: object
These both yield the same results, so which should you use? It is instructive to understand the order
of operations on these and why method 2 (.loc) is much preferred over method 1 (chained []).
dfmi['one'] selects the first level of the columns and returns a DataFrame that is singly-indexed.
Then another Python operation dfmi_with_one['second'] selects the series indexed by 'second'.
The intermediate variable dfmi_with_one is shown because pandas sees these operations as separate events,
i.e. separate calls to __getitem__, so it has to treat them as linear operations that happen one after another.
Contrast this to dfmi.loc[:, ('one', 'second')], which passes a nested tuple of (slice(None), ('one', 'second')) to a single call to
__getitem__. This allows pandas to deal with this as a single entity. Furthermore this order of operations can be significantly
faster, and allows one to index both axes if so desired.
Why does assignment fail when using chained indexing?#
The problem in the previous section is just a performance issue. What’s up with
the SettingWithCopy warning? We don’t usually throw warnings around when
you do something that might cost a few extra milliseconds!
But it turns out that assigning to the product of chained indexing has
inherently unpredictable results. To see this, think about how the Python
interpreter executes this code:
dfmi.loc[:, ('one', 'second')] = value
# becomes
dfmi.loc.__setitem__((slice(None), ('one', 'second')), value)
But this code is handled differently:
dfmi['one']['second'] = value
# becomes
dfmi.__getitem__('one').__setitem__('second', value)
See that __getitem__ in there? Outside of simple cases, it’s very hard to
predict whether it will return a view or a copy (it depends on the memory layout
of the array, about which pandas makes no guarantees), and therefore whether
the __setitem__ will modify dfmi or a temporary object that gets thrown
out immediately afterward. That’s what SettingWithCopy is warning you
about!
Note
You may be wondering whether we should be concerned about the loc
property in the first example. But dfmi.loc is guaranteed to be dfmi
itself with modified indexing behavior, so dfmi.loc.__getitem__ /
dfmi.loc.__setitem__ operate on dfmi directly. Of course,
dfmi.loc.__getitem__(idx) may be a view or a copy of dfmi.
Sometimes a SettingWithCopy warning will arise when there's no
obvious chained indexing going on. These are the bugs that
SettingWithCopy is designed to catch! pandas is probably trying to warn you
that you’ve done this:
def do_something(df):
foo = df[['bar', 'baz']] # Is foo a view? A copy? Nobody knows!
# ... many lines here ...
# We don't know whether this will modify df or not!
foo['quux'] = value
return foo
Yikes!
Evaluation order matters#
When you use chained indexing, the order and type of the indexing operation
partially determine whether the result is a slice into the original object, or
a copy of the slice.
pandas has the SettingWithCopyWarning because assigning to a copy of a
slice is frequently not intentional, but a mistake caused by chained indexing
returning a copy where a slice was expected.
If you would like pandas to be more or less trusting about assignment to a
chained indexing expression, you can set the option
mode.chained_assignment to one of these values:
'warn', the default, means a SettingWithCopyWarning is printed.
'raise' means pandas will raise a SettingWithCopyError
you have to deal with.
None will suppress the warnings entirely.
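These values can also be applied temporarily with pd.option_context, which restores the previous setting on exit; a minimal sketch on an illustrative frame:
```
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': ['one', 'two', 'one'], 'c': np.arange(3)})

# Escalate chained-assignment problems to errors only inside this block
with pd.option_context('mode.chained_assignment', 'raise'):
    df.loc[df['a'] == 'one', 'c'] = 99   # a single .loc call, so no warning or error
```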
In [360]: dfb = pd.DataFrame({'a': ['one', 'one', 'two',
.....: 'three', 'two', 'one', 'six'],
.....: 'c': np.arange(7)})
.....:
# This will show the SettingWithCopyWarning
# but the frame values will be set
In [361]: dfb['c'][dfb['a'].str.startswith('o')] = 42
This however is operating on a copy and will not work.
>>> pd.set_option('mode.chained_assignment','warn')
>>> dfb[dfb['a'].str.startswith('o')]['c'] = 42
Traceback (most recent call last)
...
SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_index,col_indexer] = value instead
A chained assignment can also crop up in setting in a mixed dtype frame.
Note
These setting rules apply to all of .loc/.iloc.
The following is the recommended access method using .loc for setting multiple items (using a mask) and a single item (using a fixed index):
In [362]: dfc = pd.DataFrame({'a': ['one', 'one', 'two',
.....: 'three', 'two', 'one', 'six'],
.....: 'c': np.arange(7)})
.....:
In [363]: dfd = dfc.copy()
# Setting multiple items using a mask
In [364]: mask = dfd['a'].str.startswith('o')
In [365]: dfd.loc[mask, 'c'] = 42
In [366]: dfd
Out[366]:
a c
0 one 42
1 one 42
2 two 2
3 three 3
4 two 4
5 one 42
6 six 6
# Setting a single item
In [367]: dfd = dfc.copy()
In [368]: dfd.loc[2, 'a'] = 11
In [369]: dfd
Out[369]:
a c
0 one 0
1 one 1
2 11 2
3 three 3
4 two 4
5 one 5
6 six 6
The following can work at times, but it is not guaranteed to, and therefore should be avoided:
In [370]: dfd = dfc.copy()
In [371]: dfd['a'][2] = 111
In [372]: dfd
Out[372]:
a c
0 one 0
1 one 1
2 111 2
3 three 3
4 two 4
5 one 5
6 six 6
Last, the subsequent example will not work at all, and so should be avoided:
>>> pd.set_option('mode.chained_assignment','raise')
>>> dfd.loc[0]['a'] = 1111
Traceback (most recent call last)
...
SettingWithCopyError:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_index,col_indexer] = value instead
Warning
The chained assignment warnings / exceptions are aiming to inform the user of a possibly invalid
assignment. There may be false positives; situations where a chained assignment is inadvertently
reported.
|
user_guide/indexing.html
|
pandas.DataFrame.ewm
|
`pandas.DataFrame.ewm`
Provide exponentially weighted (EW) calculations.
Exactly one of com, span, halflife, or alpha must be
provided if times is not provided. If times is provided,
halflife and one of com, span or alpha may be provided.
```
>>> df = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]})
>>> df
B
0 0.0
1 1.0
2 2.0
3 NaN
4 4.0
```
|
DataFrame.ewm(com=None, span=None, halflife=None, alpha=None, min_periods=0, adjust=True, ignore_na=False, axis=0, times=None, method='single')[source]#
Provide exponentially weighted (EW) calculations.
Exactly one of com, span, halflife, or alpha must be
provided if times is not provided. If times is provided,
halflife and one of com, span or alpha may be provided.
Parameters
comfloat, optionalSpecify decay in terms of center of mass
\(\alpha = 1 / (1 + com)\), for \(com \geq 0\).
spanfloat, optionalSpecify decay in terms of span
\(\alpha = 2 / (span + 1)\), for \(span \geq 1\).
halflifefloat, str, timedelta, optionalSpecify decay in terms of half-life
\(\alpha = 1 - \exp\left(-\ln(2) / halflife\right)\), for
\(halflife > 0\).
If times is specified, a timedelta convertible unit over which an
observation decays to half its value. Only applicable to mean(),
and halflife value will not apply to the other functions.
New in version 1.1.0.
alphafloat, optionalSpecify smoothing factor \(\alpha\) directly
\(0 < \alpha \leq 1\).
min_periodsint, default 0Minimum number of observations in window required to have a value;
otherwise, result is np.nan.
adjustbool, default TrueDivide by decaying adjustment factor in beginning periods to account
for imbalance in relative weightings (viewing EWMA as a moving average).
When adjust=True (default), the EW function is calculated using weights
\(w_i = (1 - \alpha)^i\). For example, the EW moving average of the series
[\(x_0, x_1, ..., x_t\)] would be:
\[y_t = \frac{x_t + (1 - \alpha)x_{t-1} + (1 - \alpha)^2 x_{t-2} + ... + (1 -
\alpha)^t x_0}{1 + (1 - \alpha) + (1 - \alpha)^2 + ... + (1 - \alpha)^t}\]
When adjust=False, the exponentially weighted function is calculated
recursively:
\[\begin{split}
y_0 &= x_0\\
y_t &= (1 - \alpha) y_{t-1} + \alpha x_t,
\end{split}\]
ignore_nabool, default FalseIgnore missing values when calculating weights.
When ignore_na=False (default), weights are based on absolute positions.
For example, the weights of \(x_0\) and \(x_2\) used in calculating
the final weighted average of [\(x_0\), None, \(x_2\)] are
\((1-\alpha)^2\) and \(1\) if adjust=True, and
\((1-\alpha)^2\) and \(\alpha\) if adjust=False.
When ignore_na=True, weights are based
on relative positions. For example, the weights of \(x_0\) and \(x_2\)
used in calculating the final weighted average of
[\(x_0\), None, \(x_2\)] are \(1-\alpha\) and \(1\) if
adjust=True, and \(1-\alpha\) and \(\alpha\) if adjust=False.
axis{0, 1}, default 0If 0 or 'index', calculate across the rows.
If 1 or 'columns', calculate across the columns.
For Series this parameter is unused and defaults to 0.
timesstr, np.ndarray, Series, default None
New in version 1.1.0.
Only applicable to mean().
Times corresponding to the observations. Must be monotonically increasing and
datetime64[ns] dtype.
If 1-D array like, a sequence with the same shape as the observations.
Deprecated since version 1.4.0: If str, the name of the column in the DataFrame representing the times.
methodstr {‘single’, ‘table’}, default ‘single’
New in version 1.4.0.
Execute the rolling operation per single column or row ('single')
or over the entire object ('table').
This argument is only implemented when specifying engine='numba'
in the method call.
Only applicable to mean()
Returns
ExponentialMovingWindow subclass
See also
rollingProvides rolling window calculations.
expandingProvides expanding transformations.
Notes
See Windowing Operations
for further usage details and examples.
Examples
>>> df = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]})
>>> df
B
0 0.0
1 1.0
2 2.0
3 NaN
4 4.0
>>> df.ewm(com=0.5).mean()
B
0 0.000000
1 0.750000
2 1.615385
3 1.615385
4 3.670213
>>> df.ewm(alpha=2 / 3).mean()
B
0 0.000000
1 0.750000
2 1.615385
3 1.615385
4 3.670213
adjust
>>> df.ewm(com=0.5, adjust=True).mean()
B
0 0.000000
1 0.750000
2 1.615385
3 1.615385
4 3.670213
>>> df.ewm(com=0.5, adjust=False).mean()
B
0 0.000000
1 0.666667
2 1.555556
3 1.555556
4 3.650794
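As a rough cross-check of the adjust formulas described above, both variants can be reproduced by hand on a NaN-free series. This is only a sketch, assuming com=0.5 so that alpha = 2/3; the series values are arbitrary:
```
import numpy as np
import pandas as pd

s = pd.Series([0.0, 1.0, 2.0, 4.0])
alpha = 2 / 3  # com=0.5  ->  alpha = 1 / (1 + com)

# adjust=False: the recursive form  y_t = (1 - alpha) * y_{t-1} + alpha * x_t
y = [s.iloc[0]]
for x in s.iloc[1:]:
    y.append((1 - alpha) * y[-1] + alpha * x)
assert np.allclose(y, s.ewm(com=0.5, adjust=False).mean())

# adjust=True: a weighted average with weights (1 - alpha)**i, i=0 for the newest point
adj = [
    np.average(s.iloc[:t + 1][::-1], weights=(1 - alpha) ** np.arange(t + 1))
    for t in range(len(s))
]
assert np.allclose(adj, s.ewm(com=0.5, adjust=True).mean())
```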
ignore_na
>>> df.ewm(com=0.5, ignore_na=True).mean()
B
0 0.000000
1 0.750000
2 1.615385
3 1.615385
4 3.225000
>>> df.ewm(com=0.5, ignore_na=False).mean()
B
0 0.000000
1 0.750000
2 1.615385
3 1.615385
4 3.670213
times
Exponentially weighted mean with weights calculated with a timedelta halflife
relative to times.
>>> times = ['2020-01-01', '2020-01-03', '2020-01-10', '2020-01-15', '2020-01-17']
>>> df.ewm(halflife='4 days', times=pd.DatetimeIndex(times)).mean()
B
0 0.000000
1 0.585786
2 1.523889
3 1.523889
4 3.233686
|
reference/api/pandas.DataFrame.ewm.html
|
pandas.IntervalIndex.to_tuples
|
`pandas.IntervalIndex.to_tuples`
Return an ndarray of tuples of the form (left, right).
|
IntervalIndex.to_tuples(*args, **kwargs)[source]#
Return an ndarray of tuples of the form (left, right).
Parameters
na_tuplebool, default TrueReturns NA as a tuple if True, (nan, nan), or just as the NA
value itself if False, nan.
Returns
tuples: ndarray
|
reference/api/pandas.IntervalIndex.to_tuples.html
|
pandas.DataFrame.reorder_levels
|
`pandas.DataFrame.reorder_levels`
Rearrange index levels using input order. May not drop or duplicate levels.
```
>>> data = {
... "class": ["Mammals", "Mammals", "Reptiles"],
... "diet": ["Omnivore", "Carnivore", "Carnivore"],
... "species": ["Humans", "Dogs", "Snakes"],
... }
>>> df = pd.DataFrame(data, columns=["class", "diet", "species"])
>>> df = df.set_index(["class", "diet"])
>>> df
species
class diet
Mammals Omnivore Humans
Carnivore Dogs
Reptiles Carnivore Snakes
```
|
DataFrame.reorder_levels(order, axis=0)[source]#
Rearrange index levels using input order. May not drop or duplicate levels.
Parameters
orderlist of int or list of strList representing new level order. Reference level by number
(position) or by key (label).
axis{0 or ‘index’, 1 or ‘columns’}, default 0Where to reorder levels.
Returns
DataFrame
Examples
>>> data = {
... "class": ["Mammals", "Mammals", "Reptiles"],
... "diet": ["Omnivore", "Carnivore", "Carnivore"],
... "species": ["Humans", "Dogs", "Snakes"],
... }
>>> df = pd.DataFrame(data, columns=["class", "diet", "species"])
>>> df = df.set_index(["class", "diet"])
>>> df
species
class diet
Mammals Omnivore Humans
Carnivore Dogs
Reptiles Carnivore Snakes
Let’s reorder the levels of the index:
>>> df.reorder_levels(["diet", "class"])
species
diet class
Omnivore Mammals Humans
Carnivore Mammals Dogs
Reptiles Snakes
|
reference/api/pandas.DataFrame.reorder_levels.html
|
pandas.tseries.offsets.SemiMonthEnd
|
`pandas.tseries.offsets.SemiMonthEnd`
Two DateOffset’s per month repeating on the last day of the month & day_of_month.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> ts + pd.offsets.SemiMonthEnd()
Timestamp('2022-01-15 00:00:00')
```
|
class pandas.tseries.offsets.SemiMonthEnd#
Two DateOffset’s per month repeating on the last day of the month & day_of_month.
Parameters
nint
normalizebool, default False
day_of_monthint, {1, 3,…,27}, default 15
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> ts + pd.offsets.SemiMonthEnd()
Timestamp('2022-01-15 00:00:00')
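A hedged sketch of the day_of_month parameter, extrapolating from the example above; the expected timestamps are shown as comments under that reading:
```
import pandas as pd

ts = pd.Timestamp(2022, 1, 1)

# With day_of_month=10 the two monthly anchors are the 10th and the month end
ts + pd.offsets.SemiMonthEnd(day_of_month=10)      # Timestamp('2022-01-10 00:00:00')
ts + 2 * pd.offsets.SemiMonthEnd(day_of_month=10)  # Timestamp('2022-01-31 00:00:00')
```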
Attributes
base
Returns a copy of the calling offset object with n=1 and all other attributes equal.
freqstr
Return a string representing the frequency.
kwds
Return a dict of extra parameters for the offset.
name
Return a string representing the base frequency.
day_of_month
n
nanos
normalize
rule_code
Methods
__call__(*args, **kwargs)
Call self as a function.
apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
copy
Return a copy of the frequency.
is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
is_month_end
Return boolean whether a timestamp occurs on the month end.
is_month_start
Return boolean whether a timestamp occurs on the month start.
is_on_offset
Return boolean whether a timestamp intersects with this frequency.
is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
is_year_end
Return boolean whether a timestamp occurs on the year end.
is_year_start
Return boolean whether a timestamp occurs on the year start.
rollback
Roll provided date backward to next offset only if not on offset.
rollforward
Roll provided date forward to next offset only if not on offset.
apply
isAnchored
onOffset
|
reference/api/pandas.tseries.offsets.SemiMonthEnd.html
|
pandas.DataFrame.empty
|
`pandas.DataFrame.empty`
Indicator whether Series/DataFrame is empty.
```
>>> df_empty = pd.DataFrame({'A' : []})
>>> df_empty
Empty DataFrame
Columns: [A]
Index: []
>>> df_empty.empty
True
```
|
property DataFrame.empty[source]#
Indicator whether Series/DataFrame is empty.
True if Series/DataFrame is entirely empty (no items), meaning any of the
axes are of length 0.
Returns
boolIf Series/DataFrame is empty, return True, if not return False.
See also
Series.dropnaReturn series without null values.
DataFrame.dropnaReturn DataFrame with labels on given axis omitted where (all or any) data are missing.
Notes
If Series/DataFrame contains only NaNs, it is still not considered empty. See
the example below.
Examples
An example of an actual empty DataFrame. Notice the index is empty:
>>> df_empty = pd.DataFrame({'A' : []})
>>> df_empty
Empty DataFrame
Columns: [A]
Index: []
>>> df_empty.empty
True
If we only have NaNs in our DataFrame, it is not considered empty! We
will need to drop the NaNs to make the DataFrame empty:
>>> df = pd.DataFrame({'A' : [np.nan]})
>>> df
A
0 NaN
>>> df.empty
False
>>> df.dropna().empty
True
>>> ser_empty = pd.Series({'A' : []})
>>> ser_empty
A []
dtype: object
>>> ser_empty.empty
False
>>> ser_empty = pd.Series()
>>> ser_empty.empty
True
|
reference/api/pandas.DataFrame.empty.html
|
pandas.DataFrame.mode
|
`pandas.DataFrame.mode`
Get the mode(s) of each element along the selected axis.
```
>>> df = pd.DataFrame([('bird', 2, 2),
... ('mammal', 4, np.nan),
... ('arthropod', 8, 0),
... ('bird', 2, np.nan)],
... index=('falcon', 'horse', 'spider', 'ostrich'),
... columns=('species', 'legs', 'wings'))
>>> df
species legs wings
falcon bird 2 2.0
horse mammal 4 NaN
spider arthropod 8 0.0
ostrich bird 2 NaN
```
|
DataFrame.mode(axis=0, numeric_only=False, dropna=True)[source]#
Get the mode(s) of each element along the selected axis.
The mode of a set of values is the value that appears most often.
It can be multiple values.
Parameters
axis{0 or ‘index’, 1 or ‘columns’}, default 0The axis to iterate over while searching for the mode:
0 or ‘index’ : get mode of each column
1 or ‘columns’ : get mode of each row.
numeric_onlybool, default FalseIf True, only apply to numeric columns.
dropnabool, default TrueDon’t consider counts of NaN/NaT.
Returns
DataFrameThe modes of each column or row.
See also
Series.modeReturn the highest frequency value in a Series.
Series.value_countsReturn the counts of values in a Series.
Examples
>>> df = pd.DataFrame([('bird', 2, 2),
... ('mammal', 4, np.nan),
... ('arthropod', 8, 0),
... ('bird', 2, np.nan)],
... index=('falcon', 'horse', 'spider', 'ostrich'),
... columns=('species', 'legs', 'wings'))
>>> df
species legs wings
falcon bird 2 2.0
horse mammal 4 NaN
spider arthropod 8 0.0
ostrich bird 2 NaN
By default, missing values are not considered, and the modes of wings
are both 0.0 and 2.0. Because the resulting DataFrame has two rows,
the second row of species and legs contains NaN.
>>> df.mode()
species legs wings
0 bird 2.0 0.0
1 NaN NaN 2.0
Setting dropna=False, NaN values are considered and they can be
the mode (like for wings).
>>> df.mode(dropna=False)
species legs wings
0 bird 2 NaN
Setting numeric_only=True, only the mode of numeric columns is
computed, and columns of other types are ignored.
>>> df.mode(numeric_only=True)
legs wings
0 2.0 0.0
1 NaN 2.0
To compute the mode over columns and not rows, use the axis parameter:
>>> df.mode(axis='columns', numeric_only=True)
0 1
falcon 2.0 NaN
horse 4.0 NaN
spider 0.0 8.0
ostrich 2.0 NaN
|
reference/api/pandas.DataFrame.mode.html
|
pandas.tseries.offsets.Nano.is_year_start
|
`pandas.tseries.offsets.Nano.is_year_start`
Return boolean whether a timestamp occurs on the year start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
```
|
Nano.is_year_start()#
Return boolean whether a timestamp occurs on the year start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
|
reference/api/pandas.tseries.offsets.Nano.is_year_start.html
|
pandas.DataFrame.memory_usage
|
`pandas.DataFrame.memory_usage`
Return the memory usage of each column in bytes.
```
>>> dtypes = ['int64', 'float64', 'complex128', 'object', 'bool']
>>> data = dict([(t, np.ones(shape=5000, dtype=int).astype(t))
... for t in dtypes])
>>> df = pd.DataFrame(data)
>>> df.head()
int64 float64 complex128 object bool
0 1 1.0 1.0+0.0j 1 True
1 1 1.0 1.0+0.0j 1 True
2 1 1.0 1.0+0.0j 1 True
3 1 1.0 1.0+0.0j 1 True
4 1 1.0 1.0+0.0j 1 True
```
|
DataFrame.memory_usage(index=True, deep=False)[source]#
Return the memory usage of each column in bytes.
The memory usage can optionally include the contribution of
the index and elements of object dtype.
This value is displayed in DataFrame.info by default. This can be
suppressed by setting pandas.options.display.memory_usage to False.
Parameters
indexbool, default TrueSpecifies whether to include the memory usage of the DataFrame’s
index in returned Series. If index=True, the memory usage of
the index is the first item in the output.
deepbool, default FalseIf True, introspect the data deeply by interrogating
object dtypes for system-level memory consumption, and include
it in the returned values.
Returns
SeriesA Series whose index is the original column names and whose values
is the memory usage of each column in bytes.
See also
numpy.ndarray.nbytesTotal bytes consumed by the elements of an ndarray.
Series.memory_usageBytes consumed by a Series.
CategoricalMemory-efficient array for string values with many repeated values.
DataFrame.infoConcise summary of a DataFrame.
Notes
See the Frequently Asked Questions for more
details.
Examples
>>> dtypes = ['int64', 'float64', 'complex128', 'object', 'bool']
>>> data = dict([(t, np.ones(shape=5000, dtype=int).astype(t))
... for t in dtypes])
>>> df = pd.DataFrame(data)
>>> df.head()
int64 float64 complex128 object bool
0 1 1.0 1.0+0.0j 1 True
1 1 1.0 1.0+0.0j 1 True
2 1 1.0 1.0+0.0j 1 True
3 1 1.0 1.0+0.0j 1 True
4 1 1.0 1.0+0.0j 1 True
>>> df.memory_usage()
Index 128
int64 40000
float64 40000
complex128 80000
object 40000
bool 5000
dtype: int64
>>> df.memory_usage(index=False)
int64 40000
float64 40000
complex128 80000
object 40000
bool 5000
dtype: int64
The memory footprint of object dtype columns is ignored by default:
>>> df.memory_usage(deep=True)
Index 128
int64 40000
float64 40000
complex128 80000
object 180000
bool 5000
dtype: int64
Use a Categorical for efficient storage of an object-dtype column with
many repeated values.
>>> df['object'].astype('category').memory_usage(deep=True)
5244
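A small sketch of a common follow-up: summing the returned Series to get a single total footprint in bytes (the frame below is illustrative, not the one from the examples above):
```
import numpy as np
import pandas as pd

df = pd.DataFrame({'ints': np.arange(5000), 'strs': ['x'] * 5000})

per_column = df.memory_usage(deep=True)   # index + each column, object sizes included
total_bytes = per_column.sum()            # one number, handy for logging or reporting
print(per_column)
print(total_bytes)
```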
|
reference/api/pandas.DataFrame.memory_usage.html
|
pandas.tseries.offsets.Second.is_year_start
|
`pandas.tseries.offsets.Second.is_year_start`
Return boolean whether a timestamp occurs on the year start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
```
|
Second.is_year_start()#
Return boolean whether a timestamp occurs on the year start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
|
reference/api/pandas.tseries.offsets.Second.is_year_start.html
|
pandas.Series.aggregate
|
`pandas.Series.aggregate`
Aggregate using one or more operations over the specified axis.
```
>>> s = pd.Series([1, 2, 3, 4])
>>> s
0 1
1 2
2 3
3 4
dtype: int64
```
|
Series.aggregate(func=None, axis=0, *args, **kwargs)[source]#
Aggregate using one or more operations over the specified axis.
Parameters
funcfunction, str, list or dictFunction to use for aggregating the data. If a function, must either
work when passed a Series or when passed to Series.apply.
Accepted combinations are:
function
string function name
list of functions and/or function names, e.g. [np.sum, 'mean']
dict of axis labels -> functions, function names or list of such.
axis{0 or ‘index’}Unused. Parameter needed for compatibility with DataFrame.
*argsPositional arguments to pass to func.
**kwargsKeyword arguments to pass to func.
Returns
scalar, Series or DataFrameThe return can be:
scalar : when Series.agg is called with single function
Series : when DataFrame.agg is called with a single function
DataFrame : when DataFrame.agg is called with several functions
Return scalar, Series or DataFrame.
See also
Series.applyInvoke function on a Series.
Series.transformTransform function producing a Series with like indexes.
Notes
agg is an alias for aggregate. Use the alias.
Functions that mutate the passed object can produce unexpected
behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods
for more details.
A passed user-defined-function will be passed a Series for evaluation.
Examples
>>> s = pd.Series([1, 2, 3, 4])
>>> s
0 1
1 2
2 3
3 4
dtype: int64
>>> s.agg('min')
1
>>> s.agg(['min', 'max'])
min 1
max 4
dtype: int64
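A short sketch of the dict form listed under func, where the dict keys become the labels of the output (expected result shown as comments, assuming the documented behavior):
```
import pandas as pd

s = pd.Series([1, 2, 3, 4])

s.agg({'minimum': 'min', 'maximum': 'max'})
# minimum    1
# maximum    4
# dtype: int64
```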
|
reference/api/pandas.Series.aggregate.html
|
Chart visualization
|
Chart visualization
Note
The examples below assume that you’re using Jupyter.
This section demonstrates visualization through charting. For information on
visualization of tabular data please see the section on Table Visualization.
We use the standard convention for referencing the matplotlib API:
We provide the basics in pandas to easily create decent looking plots.
See the ecosystem section for visualization
libraries that go beyond the basics documented here.
|
Note
The examples below assume that you’re using Jupyter.
This section demonstrates visualization through charting. For information on
visualization of tabular data please see the section on Table Visualization.
We use the standard convention for referencing the matplotlib API:
In [1]: import matplotlib.pyplot as plt
In [2]: plt.close("all")
We provide the basics in pandas to easily create decent looking plots.
See the ecosystem section for visualization
libraries that go beyond the basics documented here.
Note
All calls to np.random are seeded with 123456.
Basic plotting: plot#
We will demonstrate the basics, see the cookbook for
some advanced strategies.
The plot method on Series and DataFrame is just a simple wrapper around
plt.plot():
In [3]: ts = pd.Series(np.random.randn(1000), index=pd.date_range("1/1/2000", periods=1000))
In [4]: ts = ts.cumsum()
In [5]: ts.plot();
If the index consists of dates, it calls gcf().autofmt_xdate()
to try to format the x-axis nicely as per above.
On DataFrame, plot() is a convenience to plot all of the columns with labels:
In [6]: df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index, columns=list("ABCD"))
In [7]: df = df.cumsum()
In [8]: plt.figure();
In [9]: df.plot();
You can plot one column versus another using the x and y keywords in
plot():
In [10]: df3 = pd.DataFrame(np.random.randn(1000, 2), columns=["B", "C"]).cumsum()
In [11]: df3["A"] = pd.Series(list(range(len(df))))
In [12]: df3.plot(x="A", y="B");
Note
For more formatting and styling options, see
formatting below.
Other plots#
Plotting methods allow for a handful of plot styles other than the
default line plot. These methods can be provided as the kind
keyword argument to plot(), and include:
‘bar’ or ‘barh’ for bar plots
‘hist’ for histogram
‘box’ for boxplot
‘kde’ or ‘density’ for density plots
‘area’ for area plots
‘scatter’ for scatter plots
‘hexbin’ for hexagonal bin plots
‘pie’ for pie plots
For example, a bar plot can be created the following way:
In [13]: plt.figure();
In [14]: df.iloc[5].plot(kind="bar");
You can also create these other plots using the methods DataFrame.plot.<kind> instead of providing the kind keyword argument. This makes it easier to discover plot methods and the specific arguments they use:
In [15]: df = pd.DataFrame()
In [16]: df.plot.<TAB> # noqa: E225, E999
df.plot.area df.plot.barh df.plot.density df.plot.hist df.plot.line df.plot.scatter
df.plot.bar df.plot.box df.plot.hexbin df.plot.kde df.plot.pie
In addition to these kinds, there are the DataFrame.hist(),
and DataFrame.boxplot() methods, which use a separate interface.
Finally, there are several plotting functions in pandas.plotting
that take a Series or DataFrame as an argument. These
include:
Scatter Matrix
Andrews Curves
Parallel Coordinates
Lag Plot
Autocorrelation Plot
Bootstrap Plot
RadViz
Plots may also be adorned with errorbars
or tables.
Bar plots#
For labeled, non-time series data, you may wish to produce a bar plot:
In [17]: plt.figure();
In [18]: df.iloc[5].plot.bar();
In [19]: plt.axhline(0, color="k");
Calling a DataFrame’s plot.bar() method produces a multiple
bar plot:
In [20]: df2 = pd.DataFrame(np.random.rand(10, 4), columns=["a", "b", "c", "d"])
In [21]: df2.plot.bar();
To produce a stacked bar plot, pass stacked=True:
In [22]: df2.plot.bar(stacked=True);
To get horizontal bar plots, use the barh method:
In [23]: df2.plot.barh(stacked=True);
Histograms#
Histograms can be drawn by using the DataFrame.plot.hist() and Series.plot.hist() methods.
In [24]: df4 = pd.DataFrame(
....: {
....: "a": np.random.randn(1000) + 1,
....: "b": np.random.randn(1000),
....: "c": np.random.randn(1000) - 1,
....: },
....: columns=["a", "b", "c"],
....: )
....:
In [25]: plt.figure();
In [26]: df4.plot.hist(alpha=0.5);
A histogram can be stacked using stacked=True. Bin size can be changed
using the bins keyword.
In [27]: plt.figure();
In [28]: df4.plot.hist(stacked=True, bins=20);
You can pass other keywords supported by matplotlib hist. For example,
horizontal and cumulative histograms can be drawn by
orientation='horizontal' and cumulative=True.
In [29]: plt.figure();
In [30]: df4["a"].plot.hist(orientation="horizontal", cumulative=True);
See the hist method and the
matplotlib hist documentation for more.
The existing DataFrame.hist interface can still be used to plot histograms.
In [31]: plt.figure();
In [32]: df["A"].diff().hist();
DataFrame.hist() plots the histograms of the columns on multiple
subplots:
In [33]: plt.figure();
In [34]: df.diff().hist(color="k", alpha=0.5, bins=50);
The by keyword can be specified to plot grouped histograms:
In [35]: data = pd.Series(np.random.randn(1000))
In [36]: data.hist(by=np.random.randint(0, 4, 1000), figsize=(6, 4));
In addition, the by keyword can also be specified in DataFrame.plot.hist().
Changed in version 1.4.0.
In [37]: data = pd.DataFrame(
....: {
....: "a": np.random.choice(["x", "y", "z"], 1000),
....: "b": np.random.choice(["e", "f", "g"], 1000),
....: "c": np.random.randn(1000),
....: "d": np.random.randn(1000) - 1,
....: },
....: )
....:
In [38]: data.plot.hist(by=["a", "b"], figsize=(10, 5));
Box plots#
Boxplot can be drawn calling Series.plot.box() and DataFrame.plot.box(),
or DataFrame.boxplot() to visualize the distribution of values within each column.
For instance, here is a boxplot representing five trials of 10 observations of
a uniform random variable on [0,1).
In [39]: df = pd.DataFrame(np.random.rand(10, 5), columns=["A", "B", "C", "D", "E"])
In [40]: df.plot.box();
Boxplots can be colorized by passing the color keyword. You can pass a dict
whose keys are boxes, whiskers, medians and caps.
If some keys are missing in the dict, default colors are used
for the corresponding artists. Also, boxplot has a sym keyword to specify the fliers' style.
When you pass another type of argument via the color keyword, it will be passed
directly to matplotlib for the colorization of all the boxes, whiskers, medians and caps.
The colors are applied to every box to be drawn. If you want
more complicated colorization, you can get each drawn artist by passing
return_type.
In [41]: color = {
....: "boxes": "DarkGreen",
....: "whiskers": "DarkOrange",
....: "medians": "DarkBlue",
....: "caps": "Gray",
....: }
....:
In [42]: df.plot.box(color=color, sym="r+");
Also, you can pass other keywords supported by matplotlib boxplot.
For example, horizontal and custom-positioned boxplots can be drawn with the
vert=False and positions keywords.
In [43]: df.plot.box(vert=False, positions=[1, 4, 5, 6, 8]);
See the boxplot method and the
matplotlib boxplot documentation for more.
The existing DataFrame.boxplot interface can still be used to plot boxplots.
In [44]: df = pd.DataFrame(np.random.rand(10, 5))
In [45]: plt.figure();
In [46]: bp = df.boxplot()
You can create a stratified boxplot using the by keyword argument to create
groupings. For instance,
In [47]: df = pd.DataFrame(np.random.rand(10, 2), columns=["Col1", "Col2"])
In [48]: df["X"] = pd.Series(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
In [49]: plt.figure();
In [50]: bp = df.boxplot(by="X")
You can also pass a subset of columns to plot, as well as group by multiple
columns:
In [51]: df = pd.DataFrame(np.random.rand(10, 3), columns=["Col1", "Col2", "Col3"])
In [52]: df["X"] = pd.Series(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
In [53]: df["Y"] = pd.Series(["A", "B", "A", "B", "A", "B", "A", "B", "A", "B"])
In [54]: plt.figure();
In [55]: bp = df.boxplot(column=["Col1", "Col2"], by=["X", "Y"])
You could also create groupings with DataFrame.plot.box(), for instance:
Changed in version 1.4.0.
In [56]: df = pd.DataFrame(np.random.rand(10, 3), columns=["Col1", "Col2", "Col3"])
In [57]: df["X"] = pd.Series(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
In [58]: plt.figure();
In [59]: bp = df.plot.box(column=["Col1", "Col2"], by="X")
In boxplot, the return type can be controlled by the return_type keyword. The valid choices are {"axes", "dict", "both", None}.
Faceting, created by DataFrame.boxplot with the by
keyword, will affect the output type as well:
return_type    Faceted    Output type
None           No         axes
None           Yes        2-D ndarray of axes
'axes'         No         axes
'axes'         Yes        Series of axes
'dict'         No         dict of artists
'dict'         Yes        Series of dicts of artists
'both'         No         namedtuple
'both'         Yes        Series of namedtuples
Groupby.boxplot always returns a Series of return_type.
In [60]: np.random.seed(1234)
In [61]: df_box = pd.DataFrame(np.random.randn(50, 2))
In [62]: df_box["g"] = np.random.choice(["A", "B"], size=50)
In [63]: df_box.loc[df_box["g"] == "B", 1] += 3
In [64]: bp = df_box.boxplot(by="g")
The subplots above are split by the numeric columns first, then the value of
the g column. Below the subplots are first split by the value of g,
then by the numeric columns.
In [65]: bp = df_box.groupby("g").boxplot()
Area plot#
You can create area plots with Series.plot.area() and DataFrame.plot.area().
Area plots are stacked by default. To produce a stacked area plot, each column must contain either all positive or all negative values.
When the input data contains NaN, it will be automatically filled with 0. If you want to drop or fill with different values, use dataframe.dropna() or dataframe.fillna() before calling plot.
In [66]: df = pd.DataFrame(np.random.rand(10, 4), columns=["a", "b", "c", "d"])
In [67]: df.plot.area();
To produce an unstacked plot, pass stacked=False. Alpha value is set to 0.5 unless otherwise specified:
In [68]: df.plot.area(stacked=False);
Scatter plot#
Scatter plot can be drawn by using the DataFrame.plot.scatter() method.
Scatter plot requires numeric columns for the x and y axes.
These can be specified by the x and y keywords.
In [69]: df = pd.DataFrame(np.random.rand(50, 4), columns=["a", "b", "c", "d"])
In [70]: df["species"] = pd.Categorical(
....: ["setosa"] * 20 + ["versicolor"] * 20 + ["virginica"] * 10
....: )
....:
In [71]: df.plot.scatter(x="a", y="b");
To plot multiple column groups on a single axes, repeat the plot method, specifying the target ax.
It is recommended to specify the color and label keywords to distinguish each group.
In [72]: ax = df.plot.scatter(x="a", y="b", color="DarkBlue", label="Group 1")
In [73]: df.plot.scatter(x="c", y="d", color="DarkGreen", label="Group 2", ax=ax);
The keyword c may be given as the name of a column to provide colors for
each point:
In [74]: df.plot.scatter(x="a", y="b", c="c", s=50);
If a categorical column is passed to c, then a discrete colorbar will be produced:
New in version 1.3.0.
In [75]: df.plot.scatter(x="a", y="b", c="species", cmap="viridis", s=50);
You can pass other keywords supported by matplotlib
scatter. The example below shows a
bubble chart using a column of the DataFrame as the bubble size.
In [76]: df.plot.scatter(x="a", y="b", s=df["c"] * 200);
See the scatter method and the
matplotlib scatter documentation for more.
Hexagonal bin plot#
You can create hexagonal bin plots with DataFrame.plot.hexbin().
Hexbin plots can be a useful alternative to scatter plots if your data are
too dense to plot each point individually.
In [77]: df = pd.DataFrame(np.random.randn(1000, 2), columns=["a", "b"])
In [78]: df["b"] = df["b"] + np.arange(1000)
In [79]: df.plot.hexbin(x="a", y="b", gridsize=25);
A useful keyword argument is gridsize; it controls the number of hexagons
in the x-direction, and defaults to 100. A larger gridsize means more, smaller
bins.
By default, a histogram of the counts around each (x, y) point is computed.
You can specify alternative aggregations by passing values to the C and
reduce_C_function arguments. C specifies the value at each (x, y) point
and reduce_C_function is a function of one argument that reduces all the
values in a bin to a single number (e.g. mean, max, sum, std). In this
example the positions are given by columns a and b, while the value is
given by column z. The bins are aggregated with NumPy’s max function.
In [80]: df = pd.DataFrame(np.random.randn(1000, 2), columns=["a", "b"])
In [81]: df["b"] = df["b"] + np.arange(1000)
In [82]: df["z"] = np.random.uniform(0, 3, 1000)
In [83]: df.plot.hexbin(x="a", y="b", C="z", reduce_C_function=np.max, gridsize=25);
See the hexbin method and the
matplotlib hexbin documentation for more.
Pie plot#
You can create a pie plot with DataFrame.plot.pie() or Series.plot.pie().
If your data includes any NaN, they will be automatically filled with 0.
A ValueError will be raised if there are any negative values in your data.
In [84]: series = pd.Series(3 * np.random.rand(4), index=["a", "b", "c", "d"], name="series")
In [85]: series.plot.pie(figsize=(6, 6));
For pie plots it’s best to use square figures, i.e. a figure aspect ratio 1.
You can create the figure with equal width and height, or force the aspect ratio
to be equal after plotting by calling ax.set_aspect('equal') on the returned
axes object.
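For example, a minimal sketch of forcing the aspect ratio after plotting; the series here is only a stand-in for your own data:
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

s = pd.Series(3 * np.random.rand(4), index=['a', 'b', 'c', 'd'], name='share')

ax = s.plot.pie(figsize=(8, 4))  # deliberately non-square figure
ax.set_aspect('equal')           # keep the pie circular anyway
plt.show()
```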
Note that pie plot with a DataFrame requires that you either specify a
target column via the y argument or pass subplots=True. When y is
specified, a pie plot of the selected column will be drawn. If subplots=True is
specified, pie plots for each column are drawn as subplots. A legend will be
drawn in each pie plot by default; specify legend=False to hide it.
In [86]: df = pd.DataFrame(
....: 3 * np.random.rand(4, 2), index=["a", "b", "c", "d"], columns=["x", "y"]
....: )
....:
In [87]: df.plot.pie(subplots=True, figsize=(8, 4));
You can use the labels and colors keywords to specify the labels and colors of each wedge.
Warning
Most pandas plots use the label and color arguments (note the lack of “s” on those).
To be consistent with matplotlib.pyplot.pie() you must use labels and colors.
If you want to hide wedge labels, specify labels=None.
If fontsize is specified, the value will be applied to wedge labels.
Also, other keywords supported by matplotlib.pyplot.pie() can be used.
In [88]: series.plot.pie(
....: labels=["AA", "BB", "CC", "DD"],
....: colors=["r", "g", "b", "c"],
....: autopct="%.2f",
....: fontsize=20,
....: figsize=(6, 6),
....: );
....:
If you pass values whose sum total is less than 1.0 they will be rescaled so that they sum to 1.
In [89]: series = pd.Series([0.1] * 4, index=["a", "b", "c", "d"], name="series2")
In [90]: series.plot.pie(figsize=(6, 6));
See the matplotlib pie documentation for more.
Plotting with missing data#
pandas tries to be pragmatic about plotting DataFrames or Series
that contain missing data. Missing values are dropped, left out, or filled
depending on the plot type.
Plot Type         NaN Handling
Line              Leave gaps at NaNs
Line (stacked)    Fill 0's
Bar               Fill 0's
Scatter           Drop NaNs
Histogram         Drop NaNs (column-wise)
Box               Drop NaNs (column-wise)
Area              Fill 0's
KDE               Drop NaNs (column-wise)
Hexbin            Drop NaNs
Pie               Fill 0's
If any of these defaults are not what you want, or if you want to be
explicit about how missing values are handled, consider using
fillna() or dropna()
before plotting.
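A minimal sketch of being explicit about missing values before plotting, using an arbitrary frame with one NaN:
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({'A': [1.0, np.nan, 3.0], 'B': [4.0, 5.0, 6.0]})

df.fillna(0).plot()    # treat the missing value as 0
df.dropna().plot()     # or drop the incomplete row entirely
plt.show()
```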
Plotting tools#
These functions can be imported from pandas.plotting
and take a Series or DataFrame as an argument.
Scatter matrix plot#
You can create a scatter plot matrix using the
scatter_matrix method in pandas.plotting:
In [91]: from pandas.plotting import scatter_matrix
In [92]: df = pd.DataFrame(np.random.randn(1000, 4), columns=["a", "b", "c", "d"])
In [93]: scatter_matrix(df, alpha=0.2, figsize=(6, 6), diagonal="kde");
Density plot#
You can create density plots using the Series.plot.kde() and DataFrame.plot.kde() methods.
In [94]: ser = pd.Series(np.random.randn(1000))
In [95]: ser.plot.kde();
Andrews curves#
Andrews curves allow one to plot multivariate data as a large number
of curves that are created using the attributes of samples as coefficients
for Fourier series, see the Wikipedia entry
for more information. By coloring these curves differently for each class
it is possible to visualize data clustering. Curves belonging to samples
of the same class will usually be closer together and form larger structures.
Note: The “Iris” dataset is available here.
In [96]: from pandas.plotting import andrews_curves
In [97]: data = pd.read_csv("data/iris.data")
In [98]: plt.figure();
In [99]: andrews_curves(data, "Name");
Parallel coordinates#
Parallel coordinates is a plotting technique for plotting multivariate data,
see the Wikipedia entry
for an introduction.
Parallel coordinates allows one to see clusters in data and to estimate other statistics visually.
Using parallel coordinates, points are represented as connected line segments.
Each vertical line represents one attribute. One set of connected line segments
represents one data point. Points that tend to cluster will appear closer together.
In [100]: from pandas.plotting import parallel_coordinates
In [101]: data = pd.read_csv("data/iris.data")
In [102]: plt.figure();
In [103]: parallel_coordinates(data, "Name");
Lag plot#
Lag plots are used to check if a data set or time series is random. Random
data should not exhibit any structure in the lag plot. Non-random structure
implies that the underlying data are not random. The lag argument may
be passed, and when lag=1 the plot is essentially data[:-1] vs.
data[1:].
In [104]: from pandas.plotting import lag_plot
In [105]: plt.figure();
In [106]: spacing = np.linspace(-99 * np.pi, 99 * np.pi, num=1000)
In [107]: data = pd.Series(0.1 * np.random.rand(1000) + 0.9 * np.sin(spacing))
In [108]: lag_plot(data);
Autocorrelation plot#
Autocorrelation plots are often used for checking randomness in time series.
This is done by computing autocorrelations for data values at varying time lags.
If time series is random, such autocorrelations should be near zero for any and
all time-lag separations. If time series is non-random then one or more of the
autocorrelations will be significantly non-zero. The horizontal lines displayed
in the plot correspond to the 95% and 99% confidence bands. The dashed line is the 99%
confidence band. See the
Wikipedia entry for more about
autocorrelation plots.
In [109]: from pandas.plotting import autocorrelation_plot
In [110]: plt.figure();
In [111]: spacing = np.linspace(-9 * np.pi, 9 * np.pi, num=1000)
In [112]: data = pd.Series(0.7 * np.random.rand(1000) + 0.3 * np.sin(spacing))
In [113]: autocorrelation_plot(data);
Bootstrap plot#
Bootstrap plots are used to visually assess the uncertainty of a statistic, such
as mean, median, midrange, etc. A random subset of a specified size is selected
from a data set, the statistic in question is computed for this subset and the
process is repeated a specified number of times. Resulting plots and histograms
are what constitutes the bootstrap plot.
In [114]: from pandas.plotting import bootstrap_plot
In [115]: data = pd.Series(np.random.rand(1000))
In [116]: bootstrap_plot(data, size=50, samples=500, color="grey");
RadViz#
RadViz is a way of visualizing multi-variate data. It is based on a simple
spring tension minimization algorithm. Basically you set up a bunch of points in
a plane. In our case they are equally spaced on a unit circle. Each point
represents a single attribute. You then pretend that each sample in the data set
is attached to each of these points by a spring, the stiffness of which is
proportional to the numerical value of that attribute (they are normalized to
unit interval). The point in the plane, where our sample settles to (where the
forces acting on our sample are at an equilibrium) is where a dot representing
our sample will be drawn. Depending on which class that sample belongs to, it will
be colored differently.
See the R package Radviz
for more information.
Note: The “Iris” dataset is available here.
In [117]: from pandas.plotting import radviz
In [118]: data = pd.read_csv("data/iris.data")
In [119]: plt.figure();
In [120]: radviz(data, "Name");
Plot formatting#
Setting the plot style#
From version 1.5 and up, matplotlib offers a range of pre-configured plotting styles. Setting the
style can be used to easily give plots the general look that you want.
Setting the style is as easy as calling matplotlib.style.use(my_plot_style) before
creating your plot. For example you could write matplotlib.style.use('ggplot') for ggplot-style
plots.
You can see the various available style names at matplotlib.style.available and it’s very
easy to try them out.
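A minimal sketch; any name from the available style list can be used in place of 'ggplot':
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

print(plt.style.available[:5])  # a few of the registered style names
plt.style.use('ggplot')         # equivalent to matplotlib.style.use('ggplot')

pd.Series(np.random.randn(100)).cumsum().plot()
plt.show()
```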
General plot style arguments#
Most plotting methods have a set of keyword arguments that control the
layout and formatting of the returned plot:
In [121]: plt.figure();
In [122]: ts.plot(style="k--", label="Series");
For each kind of plot (e.g. line, bar, scatter), any additional keyword arguments
are passed along to the corresponding matplotlib function
(ax.plot(),
ax.bar(),
ax.scatter()). These can be used
to control additional styling, beyond what pandas provides.
Controlling the legend#
You may set the legend argument to False to hide the legend, which is
shown by default.
In [123]: df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index, columns=list("ABCD"))
In [124]: df = df.cumsum()
In [125]: df.plot(legend=False);
Controlling the labels#
New in version 1.1.0.
You may set the xlabel and ylabel arguments to give the plot custom labels
for the x and y axes. By default, pandas will use the index name as the xlabel,
while leaving the ylabel empty.
In [126]: df.plot();
In [127]: df.plot(xlabel="new x", ylabel="new y");
Scales#
You may pass logy to get a log-scale Y axis.
In [128]: ts = pd.Series(np.random.randn(1000), index=pd.date_range("1/1/2000", periods=1000))
In [129]: ts = np.exp(ts.cumsum())
In [130]: ts.plot(logy=True);
See also the logx and loglog keyword arguments.
Plotting on a secondary y-axis#
To plot data on a secondary y-axis, use the secondary_y keyword:
In [131]: df["A"].plot();
In [132]: df["B"].plot(secondary_y=True, style="g");
To plot some columns in a DataFrame, give the column names to the secondary_y
keyword:
In [133]: plt.figure();
In [134]: ax = df.plot(secondary_y=["A", "B"])
In [135]: ax.set_ylabel("CD scale");
In [136]: ax.right_ax.set_ylabel("AB scale");
Note that the columns plotted on the secondary y-axis are automatically marked
with “(right)” in the legend. To turn off the automatic marking, use the
mark_right=False keyword:
In [137]: plt.figure();
In [138]: df.plot(secondary_y=["A", "B"], mark_right=False);
Custom formatters for timeseries plots#
Changed in version 1.0.0.
pandas provides custom formatters for timeseries plots. These change the
formatting of the axis labels for dates and times. By default,
the custom formatters are applied only to plots created by pandas with
DataFrame.plot() or Series.plot(). To have them apply to all
plots, including those made by matplotlib, set the option
pd.options.plotting.matplotlib.register_converters = True or use
pandas.plotting.register_matplotlib_converters().
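A minimal sketch of opting in globally, so that plots made directly with matplotlib pick up the same date/time formatting; either line on its own is sufficient:
```
import pandas as pd
from pandas.plotting import register_matplotlib_converters

# Make the pandas date/time converters available to raw matplotlib plots
register_matplotlib_converters()
pd.options.plotting.matplotlib.register_converters = True
```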
Suppressing tick resolution adjustment#
pandas includes automatic tick resolution adjustment for regular frequency
time-series data. For limited cases where pandas cannot infer the frequency
information (e.g., in an externally created twinx), you can choose to
suppress this behavior for alignment purposes.
Here is the default behavior, notice how the x-axis tick labeling is performed:
In [139]: plt.figure();
In [140]: df["A"].plot();
Using the x_compat parameter, you can suppress this behavior:
In [141]: plt.figure();
In [142]: df["A"].plot(x_compat=True);
If you have more than one plot that needs to be suppressed, the use method
in pandas.plotting.plot_params can be used in a with statement:
In [143]: plt.figure();
In [144]: with pd.plotting.plot_params.use("x_compat", True):
.....: df["A"].plot(color="r")
.....: df["B"].plot(color="g")
.....: df["C"].plot(color="b")
.....:
Automatic date tick adjustment#
TimedeltaIndex now uses the native matplotlib
tick locator methods; it is useful to call the automatic
date tick adjustment from matplotlib for figures whose ticklabels overlap.
See the autofmt_xdate method and the
matplotlib documentation for more.
Subplots#
Each Series in a DataFrame can be plotted on a different axis
with the subplots keyword:
In [145]: df.plot(subplots=True, figsize=(6, 6));
Using layout and targeting multiple axes#
The layout of subplots can be specified by the layout keyword. It can accept
(rows, columns). The layout keyword can be used in
hist and boxplot also. If the input is invalid, a ValueError will be raised.
The number of axes which can be contained by the rows x columns specified by layout must be
at least the number of required subplots. If layout can contain more axes than required,
blank axes are not drawn. Similar to a NumPy array’s reshape method, you
can use -1 for one dimension to automatically calculate the number of rows
or columns needed, given the other.
In [146]: df.plot(subplots=True, layout=(2, 3), figsize=(6, 6), sharex=False);
The above example is identical to using:
In [147]: df.plot(subplots=True, layout=(2, -1), figsize=(6, 6), sharex=False);
The required number of columns (3) is inferred from the number of series to plot
and the given number of rows (2).
You can pass multiple axes created beforehand as a list-like via the ax keyword.
This allows more complicated layouts.
The number of axes passed must match the number of subplots being drawn.
When multiple axes are passed via the ax keyword, the layout, sharex and sharey keywords
don't affect the output. You should explicitly pass sharex=False and sharey=False,
otherwise you will see a warning.
In [148]: fig, axes = plt.subplots(4, 4, figsize=(9, 9))
In [149]: plt.subplots_adjust(wspace=0.5, hspace=0.5)
In [150]: target1 = [axes[0][0], axes[1][1], axes[2][2], axes[3][3]]
In [151]: target2 = [axes[3][0], axes[2][1], axes[1][2], axes[0][3]]
In [152]: df.plot(subplots=True, ax=target1, legend=False, sharex=False, sharey=False);
In [153]: (-df).plot(subplots=True, ax=target2, legend=False, sharex=False, sharey=False);
Another option is passing an ax argument to Series.plot() to plot on a particular axis:
In [154]: fig, axes = plt.subplots(nrows=2, ncols=2)
In [155]: plt.subplots_adjust(wspace=0.2, hspace=0.5)
In [156]: df["A"].plot(ax=axes[0, 0]);
In [157]: axes[0, 0].set_title("A");
In [158]: df["B"].plot(ax=axes[0, 1]);
In [159]: axes[0, 1].set_title("B");
In [160]: df["C"].plot(ax=axes[1, 0]);
In [161]: axes[1, 0].set_title("C");
In [162]: df["D"].plot(ax=axes[1, 1]);
In [163]: axes[1, 1].set_title("D");
Plotting with error bars#
Plotting with error bars is supported in DataFrame.plot() and Series.plot().
Horizontal and vertical error bars can be supplied to the xerr and yerr keyword arguments to plot(). The error values can be specified using a variety of formats:
As a DataFrame or dict of errors with column names matching the columns attribute of the plotting DataFrame or matching the name attribute of the Series.
As a str indicating which of the columns of plotting DataFrame contain the error values.
As raw values (list, tuple, or np.ndarray). Must be the same length as the plotting DataFrame/Series.
Here is an example of one way to easily plot group means with standard deviations from the raw data.
# Generate the data
In [164]: ix3 = pd.MultiIndex.from_arrays(
.....: [
.....: ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"],
.....: ["foo", "foo", "foo", "bar", "bar", "foo", "foo", "bar", "bar", "bar"],
.....: ],
.....: names=["letter", "word"],
.....: )
.....:
In [165]: df3 = pd.DataFrame(
.....: {
.....: "data1": [9, 3, 2, 4, 3, 2, 4, 6, 3, 2],
.....: "data2": [9, 6, 5, 7, 5, 4, 5, 6, 5, 1],
.....: },
.....: index=ix3,
.....: )
.....:
# Group by index labels and take the means and standard deviations
# for each group
In [166]: gp3 = df3.groupby(level=("letter", "word"))
In [167]: means = gp3.mean()
In [168]: errors = gp3.std()
In [169]: means
Out[169]:
data1 data2
letter word
a bar 3.500000 6.000000
foo 4.666667 6.666667
b bar 3.666667 4.000000
foo 3.000000 4.500000
In [170]: errors
Out[170]:
data1 data2
letter word
a bar 0.707107 1.414214
foo 3.785939 2.081666
b bar 2.081666 2.645751
foo 1.414214 0.707107
# Plot
In [171]: fig, ax = plt.subplots()
In [172]: means.plot.bar(yerr=errors, ax=ax, capsize=4, rot=0);
Asymmetrical error bars are also supported; however, raw error values must be provided in this case. For an N-length Series, a 2xN array should be provided indicating lower and upper (or left and right) errors. For an MxN DataFrame, asymmetrical errors should be in an Mx2xN array.
Here is an example of one way to plot the min/max range using asymmetrical error bars.
In [173]: mins = gp3.min()
In [174]: maxs = gp3.max()
# errors should be positive, and defined in the order of lower, upper
In [175]: errors = [[means[c] - mins[c], maxs[c] - means[c]] for c in df3.columns]
# Plot
In [176]: fig, ax = plt.subplots()
In [177]: means.plot.bar(yerr=errors, ax=ax, capsize=4, rot=0);
Plotting tables#
Plotting with matplotlib table is now supported in DataFrame.plot() and Series.plot() with a table keyword. The table keyword can accept bool, DataFrame or Series. The simplest way to draw a table is to specify table=True. Data will be transposed to meet matplotlib's default layout.
In [178]: fig, ax = plt.subplots(1, 1, figsize=(7, 6.5))
In [179]: df = pd.DataFrame(np.random.rand(5, 3), columns=["a", "b", "c"])
In [180]: ax.xaxis.tick_top() # Display x-axis ticks on top.
In [181]: df.plot(table=True, ax=ax);
Also, you can pass a different DataFrame or Series to the
table keyword. The data will be drawn as displayed by the print method
(it is not transposed automatically). If required, it should be transposed manually,
as shown in the example below.
In [182]: fig, ax = plt.subplots(1, 1, figsize=(7, 6.75))
In [183]: ax.xaxis.tick_top() # Display x-axis ticks on top.
In [184]: df.plot(table=np.round(df.T, 2), ax=ax);
There also exists a helper function pandas.plotting.table, which creates a
table from a DataFrame or Series, and adds it to a
matplotlib.Axes instance. This function accepts the keywords that the
matplotlib table supports.
In [185]: from pandas.plotting import table
In [186]: fig, ax = plt.subplots(1, 1)
In [187]: table(ax, np.round(df.describe(), 2), loc="upper right", colWidths=[0.2, 0.2, 0.2]);
In [188]: df.plot(ax=ax, ylim=(0, 2), legend=None);
Note: You can get the table instances on the axes using the axes.tables property for further decoration. See the matplotlib table documentation for more.
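For example, a minimal sketch (assuming ax already carries a table, e.g. drawn by df.plot(table=True, ax=ax)) that tweaks the table font:
```
tab = ax.tables[0]             # matplotlib.table.Table instance
tab.auto_set_font_size(False)  # stop matplotlib from rescaling the text
tab.set_fontsize(8)
```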
Colormaps#
A potential issue when plotting a large number of columns is that it can be
difficult to distinguish some series due to repetition in the default colors. To
remedy this, DataFrame plotting supports the use of the colormap argument,
which accepts either a Matplotlib colormap
or a string that is a name of a colormap registered with Matplotlib. A
visualization of the default matplotlib colormaps is available in the matplotlib documentation.
As matplotlib does not directly support colormaps for line-based plots, the
colors are selected based on an even spacing determined by the number of columns
in the DataFrame. There is no consideration made for background color, so some
colormaps will produce lines that are not easily visible.
To use the cubehelix colormap, we can pass colormap='cubehelix'.
In [189]: df = pd.DataFrame(np.random.randn(1000, 10), index=ts.index)
In [190]: df = df.cumsum()
In [191]: plt.figure();
In [192]: df.plot(colormap="cubehelix");
Alternatively, we can pass the colormap itself:
In [193]: from matplotlib import cm
In [194]: plt.figure();
In [195]: df.plot(colormap=cm.cubehelix);
Colormaps can also be used in other plot types, like bar charts:
In [196]: dd = pd.DataFrame(np.random.randn(10, 10)).applymap(abs)
In [197]: dd = dd.cumsum()
In [198]: plt.figure();
In [199]: dd.plot.bar(colormap="Greens");
Parallel coordinates charts:
In [200]: plt.figure();
In [201]: parallel_coordinates(data, "Name", colormap="gist_rainbow");
Andrews curves charts:
In [202]: plt.figure();
In [203]: andrews_curves(data, "Name", colormap="winter");
Plotting directly with Matplotlib#
In some situations it may still be preferable or necessary to prepare plots
directly with matplotlib, for instance when a certain type of plot or
customization is not (yet) supported by pandas. Series and DataFrame
objects behave like arrays and can therefore be passed directly to
matplotlib functions without explicit casts.
pandas also automatically registers formatters and locators that recognize date
indices, thereby extending date and time support to practically all plot types
available in matplotlib. Although this formatting does not provide the same
level of refinement you would get when plotting via pandas, it can be faster
when plotting a large number of points.
In [204]: price = pd.Series(
.....: np.random.randn(150).cumsum(),
.....: index=pd.date_range("2000-1-1", periods=150, freq="B"),
.....: )
.....:
In [205]: ma = price.rolling(20).mean()
In [206]: mstd = price.rolling(20).std()
In [207]: plt.figure();
In [208]: plt.plot(price.index, price, "k");
In [209]: plt.plot(ma.index, ma, "b");
In [210]: plt.fill_between(mstd.index, ma - 2 * mstd, ma + 2 * mstd, color="b", alpha=0.2);
Plotting backends#
Starting in version 0.25, pandas can be extended with third-party plotting backends. The
main idea is to let users select a plotting backend other than the default
one, which is based on Matplotlib.
This can be done by passing 'backend.module' as the backend argument to the plot
function. For example:
>>> pd.Series([1, 2, 3]).plot(backend="backend.module")
Alternatively, you can set this option globally, so you don't need to specify
the keyword in each plot call. For example:
>>> pd.set_option("plotting.backend", "backend.module")
>>> pd.Series([1, 2, 3]).plot()
Or:
>>> pd.options.plotting.backend = "backend.module"
>>> pd.Series([1, 2, 3]).plot()
This would be more or less equivalent to:
>>> import backend.module
>>> backend.module.plot(pd.Series([1, 2, 3]))
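If you only need a non-default backend for a handful of calls, the option can also be scoped to a with-block. A minimal sketch (backend.module is the same placeholder as above and must be importable for the option to validate):
```
import pandas as pd

with pd.option_context("plotting.backend", "backend.module"):
    pd.Series([1, 2, 3]).plot()
```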
The backend module can then use other visualization tools (Bokeh, Altair, hvplot,…)
to generate the plots. Some libraries implementing a backend for pandas are listed
on the ecosystem Visualization page.
The developer guide can be found at
https://pandas.pydata.org/docs/dev/development/extending.html#plotting-backends
|
user_guide/visualization.html
|
pandas.IntervalIndex.values
|
`pandas.IntervalIndex.values`
Return an array representing the data in the Index.
|
property IntervalIndex.values[source]#
Return an array representing the data in the Index.
Warning
We recommend using Index.array or
Index.to_numpy(), depending on whether you need
a reference to the underlying data or a NumPy array.
Returns
array: numpy.ndarray or ExtensionArray
See also
Index.arrayReference to the underlying data.
Index.to_numpyA NumPy array representing the underlying data.
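A minimal sketch contrasting the recommended accessors:
```
>>> idx = pd.interval_range(start=0, periods=3)
>>> idx.array       # IntervalArray, a reference to the underlying data
>>> idx.to_numpy()  # object-dtype NumPy array of Interval objects
```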
|
reference/api/pandas.IntervalIndex.values.html
|
pandas.errors.IntCastingNaNError
|
`pandas.errors.IntCastingNaNError`
Exception raised when converting (astype) an array with NaN to an integer type.
|
exception pandas.errors.IntCastingNaNError[source]#
Exception raised when converting (astype) an array with NaN to an integer type.
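A minimal sketch of when this exception is raised (casting a float Series containing NaN to an integer dtype; the exact error message may vary by version):
```
>>> import numpy as np
>>> pd.Series([1.0, np.nan]).astype("int64")
Traceback (most recent call last):
...
pandas.errors.IntCastingNaNError: Cannot convert non-finite values (NA or inf) to integer
```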
|
reference/api/pandas.errors.IntCastingNaNError.html
|
pandas.Series.pad
|
`pandas.Series.pad`
Synonym for DataFrame.fillna() with method='ffill'.
|
Series.pad(*, axis=None, inplace=False, limit=None, downcast=None)[source]#
Synonym for DataFrame.fillna() with method='ffill'.
Returns
Series/DataFrame or NoneObject with missing values filled or None if inplace=True.
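A minimal sketch: pad() forward-fills missing values, equivalent to fillna(method='ffill'):
```
>>> s = pd.Series([1.0, None, 3.0, None])
>>> s.pad()
0    1.0
1    1.0
2    3.0
3    3.0
dtype: float64
```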
|
reference/api/pandas.Series.pad.html
|
pandas.DataFrame.join
|
`pandas.DataFrame.join`
Join columns of another DataFrame.
```
>>> df = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3', 'K4', 'K5'],
... 'A': ['A0', 'A1', 'A2', 'A3', 'A4', 'A5']})
```
|
DataFrame.join(other, on=None, how='left', lsuffix='', rsuffix='', sort=False, validate=None)[source]#
Join columns of another DataFrame.
Join columns with other DataFrame either on index or on a key
column. Efficiently join multiple DataFrame objects by index at once by
passing a list.
Parameters
otherDataFrame, Series, or a list containing any combination of themIndex should be similar to one of the columns in this one. If a
Series is passed, its name attribute must be set, and that will be
used as the column name in the resulting joined DataFrame.
onstr, list of str, or array-like, optionalColumn or index level name(s) in the caller to join on the index
in other, otherwise joins index-on-index. If multiple
values given, the other DataFrame must have a MultiIndex. Can
pass an array as the join key if it is not already contained in
the calling DataFrame. Like an Excel VLOOKUP operation.
how{‘left’, ‘right’, ‘outer’, ‘inner’}, default ‘left’How to handle the operation of the two objects.
left: use calling frame’s index (or column if on is specified)
right: use other’s index.
outer: form union of calling frame's index (or column if on is
specified) with other's index, and sort it lexicographically.
inner: form intersection of calling frame’s index (or column if
on is specified) with other’s index, preserving the order
of the calling’s one.
cross: creates the cartesian product from both frames, preserves the order
of the left keys.
New in version 1.2.0.
lsuffixstr, default ‘’Suffix to use from left frame’s overlapping columns.
rsuffixstr, default ‘’Suffix to use from right frame’s overlapping columns.
sortbool, default FalseOrder result DataFrame lexicographically by the join key. If False,
the order of the join key depends on the join type (how keyword).
validatestr, optionalIf specified, checks if join is of specified type.
* “one_to_one” or “1:1”: check if join keys are unique in both left
and right datasets.
* “one_to_many” or “1:m”: check if join keys are unique in left dataset.
* “many_to_one” or “m:1”: check if join keys are unique in right dataset.
* “many_to_many” or “m:m”: allowed, but does not result in checks.
New in version 1.5.0.
Returns
DataFrameA dataframe containing columns from both the caller and other.
See also
DataFrame.mergeFor column(s)-on-column(s) operations.
Notes
Parameters on, lsuffix, and rsuffix are not supported when
passing a list of DataFrame objects.
Support for specifying index levels as the on parameter was added
in version 0.23.0.
Examples
>>> df = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3', 'K4', 'K5'],
... 'A': ['A0', 'A1', 'A2', 'A3', 'A4', 'A5']})
>>> df
key A
0 K0 A0
1 K1 A1
2 K2 A2
3 K3 A3
4 K4 A4
5 K5 A5
>>> other = pd.DataFrame({'key': ['K0', 'K1', 'K2'],
... 'B': ['B0', 'B1', 'B2']})
>>> other
key B
0 K0 B0
1 K1 B1
2 K2 B2
Join DataFrames using their indexes.
>>> df.join(other, lsuffix='_caller', rsuffix='_other')
key_caller A key_other B
0 K0 A0 K0 B0
1 K1 A1 K1 B1
2 K2 A2 K2 B2
3 K3 A3 NaN NaN
4 K4 A4 NaN NaN
5 K5 A5 NaN NaN
If we want to join using the key columns, we need to set key to be
the index in both df and other. The joined DataFrame will have
key as its index.
>>> df.set_index('key').join(other.set_index('key'))
A B
key
K0 A0 B0
K1 A1 B1
K2 A2 B2
K3 A3 NaN
K4 A4 NaN
K5 A5 NaN
Another option to join using the key columns is to use the on
parameter. DataFrame.join always uses other’s index but we can use
any column in df. This method preserves the original DataFrame’s
index in the result.
>>> df.join(other.set_index('key'), on='key')
key A B
0 K0 A0 B0
1 K1 A1 B1
2 K2 A2 B2
3 K3 A3 NaN
4 K4 A4 NaN
5 K5 A5 NaN
Using non-unique key values shows how they are matched.
>>> df = pd.DataFrame({'key': ['K0', 'K1', 'K1', 'K3', 'K0', 'K1'],
... 'A': ['A0', 'A1', 'A2', 'A3', 'A4', 'A5']})
>>> df
key A
0 K0 A0
1 K1 A1
2 K1 A2
3 K3 A3
4 K0 A4
5 K1 A5
>>> df.join(other.set_index('key'), on='key', validate='m:1')
key A B
0 K0 A0 B0
1 K1 A1 B1
2 K1 A2 B1
3 K3 A3 NaN
4 K0 A4 B0
5 K1 A5 B1
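As noted in the description above, a list of DataFrames can also be joined by index in one call (on, lsuffix and rsuffix are not supported in that form). A minimal sketch with made-up frames:
```
>>> left = pd.DataFrame({'A': [1, 2]}, index=['x', 'y'])
>>> right1 = pd.DataFrame({'B': [3, 4]}, index=['x', 'y'])
>>> right2 = pd.DataFrame({'C': [5, 6]}, index=['x', 'y'])
>>> left.join([right1, right2])
   A  B  C
x  1  3  5
y  2  4  6
```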
|
reference/api/pandas.DataFrame.join.html
|