title | summary | context | path
---|---|---|---|
pandas.api.types.is_numeric_dtype
|
`pandas.api.types.is_numeric_dtype`
Check whether the provided array or dtype is of a numeric dtype.
```
>>> is_numeric_dtype(str)
False
>>> is_numeric_dtype(int)
True
>>> is_numeric_dtype(float)
True
>>> is_numeric_dtype(np.uint64)
True
>>> is_numeric_dtype(np.datetime64)
False
>>> is_numeric_dtype(np.timedelta64)
False
>>> is_numeric_dtype(np.array(['a', 'b']))
False
>>> is_numeric_dtype(pd.Series([1, 2]))
True
>>> is_numeric_dtype(pd.Index([1, 2.]))
True
>>> is_numeric_dtype(np.array([], dtype=np.timedelta64))
False
```
|
pandas.api.types.is_numeric_dtype(arr_or_dtype)[source]#
Check whether the provided array or dtype is of a numeric dtype.
Parameters
arr_or_dtype : array-like or dtype
The array or dtype to check.
Returns
boolean
Whether or not the array or dtype is of a numeric dtype.
Examples
>>> is_numeric_dtype(str)
False
>>> is_numeric_dtype(int)
True
>>> is_numeric_dtype(float)
True
>>> is_numeric_dtype(np.uint64)
True
>>> is_numeric_dtype(np.datetime64)
False
>>> is_numeric_dtype(np.timedelta64)
False
>>> is_numeric_dtype(np.array(['a', 'b']))
False
>>> is_numeric_dtype(pd.Series([1, 2]))
True
>>> is_numeric_dtype(pd.Index([1, 2.]))
True
>>> is_numeric_dtype(np.array([], dtype=np.timedelta64))
False
|
reference/api/pandas.api.types.is_numeric_dtype.html
|
pandas.Index.to_list
|
`pandas.Index.to_list`
Return a list of the values.
|
Index.to_list()[source]#
Return a list of the values.
These are each a scalar type, which is a Python scalar
(for str, int, float) or a pandas scalar
(for Timestamp/Timedelta/Interval/Period).
Returns
list
See also
numpy.ndarray.tolist : Return the array as an a.ndim-levels deep nested list of Python scalars.
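The docstring ships no example; a minimal illustrative sketch (not part of the original page, assumes an interactive session with pandas installed):
```
>>> import pandas as pd
>>> idx = pd.Index([1, 2, 3])
>>> idx.to_list()            # plain Python ints, not numpy scalars
[1, 2, 3]
>>> pd.Index(pd.to_datetime(["2022-01-01"])).to_list()   # pandas scalars for datetimes
[Timestamp('2022-01-01 00:00:00')]
```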
|
reference/api/pandas.Index.to_list.html
|
pandas.tseries.offsets.BusinessMonthEnd.__call__
|
`pandas.tseries.offsets.BusinessMonthEnd.__call__`
Call self as a function.
|
BusinessMonthEnd.__call__(*args, **kwargs)#
Call self as a function.
|
reference/api/pandas.tseries.offsets.BusinessMonthEnd.__call__.html
|
pandas.CategoricalIndex
|
`pandas.CategoricalIndex`
Index based on an underlying Categorical.
CategoricalIndex, like Categorical, can only take on a limited,
and usually fixed, number of possible values (categories). Also,
like Categorical, it might have an order, but numerical operations
(additions, divisions, …) are not possible.
```
>>> pd.CategoricalIndex(["a", "b", "c", "a", "b", "c"])
CategoricalIndex(['a', 'b', 'c', 'a', 'b', 'c'],
categories=['a', 'b', 'c'], ordered=False, dtype='category')
```
|
class pandas.CategoricalIndex(data=None, categories=None, ordered=None, dtype=None, copy=False, name=None)[source]#
Index based on an underlying Categorical.
CategoricalIndex, like Categorical, can only take on a limited,
and usually fixed, number of possible values (categories). Also,
like Categorical, it might have an order, but numerical operations
(additions, divisions, …) are not possible.
Parameters
data : array-like (1-dimensional)
The values of the categorical. If categories are given, values not in
categories will be replaced with NaN.
categories : index-like, optional
The categories for the categorical. Items need to be unique.
If the categories are not given here (and also not in dtype), they
will be inferred from the data.
ordered : bool, optional
Whether or not this categorical is treated as an ordered
categorical. If not given here or in dtype, the resulting
categorical will be unordered.
dtype : CategoricalDtype or “category”, optional
If CategoricalDtype, cannot be used together with
categories or ordered.
copy : bool, default False
Make a copy of input ndarray.
name : object, optional
Name to be stored in the index.
Raises
ValueError
If the categories do not validate.
TypeError
If an explicit ordered=True is given but no categories and the
values are not sortable.
See also
Index : The base pandas Index type.
Categorical : A categorical array.
CategoricalDtype : Type for categorical data.
Notes
See the user guide
for more.
Examples
>>> pd.CategoricalIndex(["a", "b", "c", "a", "b", "c"])
CategoricalIndex(['a', 'b', 'c', 'a', 'b', 'c'],
categories=['a', 'b', 'c'], ordered=False, dtype='category')
CategoricalIndex can also be instantiated from a Categorical:
>>> c = pd.Categorical(["a", "b", "c", "a", "b", "c"])
>>> pd.CategoricalIndex(c)
CategoricalIndex(['a', 'b', 'c', 'a', 'b', 'c'],
categories=['a', 'b', 'c'], ordered=False, dtype='category')
Ordered CategoricalIndex can have a min and max value.
>>> ci = pd.CategoricalIndex(
... ["a", "b", "c", "a", "b", "c"], ordered=True, categories=["c", "b", "a"]
... )
>>> ci
CategoricalIndex(['a', 'b', 'c', 'a', 'b', 'c'],
categories=['c', 'b', 'a'], ordered=True, dtype='category')
>>> ci.min()
'c'
Attributes
codes
The category codes of this categorical.
categories
The categories of this categorical.
ordered
Whether the categories have an ordered relationship.
Methods
rename_categories(*args, **kwargs)
Rename categories.
reorder_categories(*args, **kwargs)
Reorder categories as specified in new_categories.
add_categories(*args, **kwargs)
Add new categories.
remove_categories(*args, **kwargs)
Remove the specified categories.
remove_unused_categories(*args, **kwargs)
Remove categories which are not used.
set_categories(*args, **kwargs)
Set the categories to the specified new_categories.
as_ordered(*args, **kwargs)
Set the Categorical to be ordered.
as_unordered(*args, **kwargs)
Set the Categorical to be unordered.
map(mapper)
Map values using an input mapping or function.
|
reference/api/pandas.CategoricalIndex.html
|
pandas.CategoricalIndex.add_categories
|
`pandas.CategoricalIndex.add_categories`
Add new categories.
```
>>> c = pd.Categorical(['c', 'b', 'c'])
>>> c
['c', 'b', 'c']
Categories (2, object): ['b', 'c']
```
|
CategoricalIndex.add_categories(*args, **kwargs)[source]#
Add new categories.
new_categories will be included at the last/highest place in the
categories and will be unused directly after this call.
Parameters
new_categories : category or list-like of category
The new categories to be included.
inplace : bool, default False
Whether or not to add the categories inplace or return a copy of
this categorical with added categories.
Deprecated since version 1.3.0.
Returns
cat : Categorical or None
Categorical with new categories added or None if inplace=True.
Raises
ValueError
If the new categories include old categories or do not validate as
categories.
See also
rename_categories : Rename categories.
reorder_categories : Reorder categories.
remove_categories : Remove the specified categories.
remove_unused_categories : Remove categories which are not used.
set_categories : Set the categories to the specified ones.
Examples
>>> c = pd.Categorical(['c', 'b', 'c'])
>>> c
['c', 'b', 'c']
Categories (2, object): ['b', 'c']
>>> c.add_categories(['d', 'a'])
['c', 'b', 'c']
Categories (4, object): ['b', 'c', 'd', 'a']
|
reference/api/pandas.CategoricalIndex.add_categories.html
|
pandas.tseries.offsets.YearBegin.rule_code
|
pandas.tseries.offsets.YearBegin.rule_code
|
YearBegin.rule_code#
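The page carries no description; as an illustrative sketch (not from the original page), rule_code is the frequency alias embedded in freqstr. The exact alias depends on the pandas version, since the annual 'AS' prefix was later renamed:
```
>>> import pandas as pd
>>> pd.offsets.YearBegin().rule_code      # prefix 'AS' anchored to the default month
'AS-JAN'
>>> pd.offsets.YearBegin(month=7).rule_code
'AS-JUL'
```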
|
reference/api/pandas.tseries.offsets.YearBegin.rule_code.html
|
pandas.api.types.is_categorical
|
`pandas.api.types.is_categorical`
Check whether an array-like is a Categorical instance.
Deprecated since version 1.1.0: Use is_categorical_dtype instead.
```
>>> is_categorical([1, 2, 3])
False
```
|
pandas.api.types.is_categorical(arr)[source]#
Check whether an array-like is a Categorical instance.
Deprecated since version 1.1.0: Use is_categorical_dtype instead.
Parameters
arr : array-like
The array-like to check.
Returns
boolean
Whether or not the array-like is of a Categorical instance.
Examples
>>> is_categorical([1, 2, 3])
False
Categoricals, Series Categoricals, and CategoricalIndex will return True.
>>> cat = pd.Categorical([1, 2, 3])
>>> is_categorical(cat)
True
>>> is_categorical(pd.Series(cat))
True
>>> is_categorical(pd.CategoricalIndex([1, 2, 3]))
True
|
reference/api/pandas.api.types.is_categorical.html
|
pandas.tseries.offsets.Week.freqstr
|
`pandas.tseries.offsets.Week.freqstr`
Return a string representing the frequency.
Examples
```
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
```
|
Week.freqstr#
Return a string representing the frequency.
Examples
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
>>> pd.offsets.BusinessHour(2).freqstr
'2BH'
>>> pd.offsets.Nano().freqstr
'N'
>>> pd.offsets.Nano(-3).freqstr
'-3N'
|
reference/api/pandas.tseries.offsets.Week.freqstr.html
|
Plotting
|
Plotting
|
The following functions are contained in the pandas.plotting module.
andrews_curves(frame, class_column[, ax, ...])
Generate a matplotlib plot for visualising clusters of multivariate data.
autocorrelation_plot(series[, ax])
Autocorrelation plot for time series.
bootstrap_plot(series[, fig, size, samples])
Bootstrap plot on mean, median and mid-range statistics.
boxplot(data[, column, by, ax, fontsize, ...])
Make a box plot from DataFrame columns.
deregister_matplotlib_converters()
Remove pandas formatters and converters.
lag_plot(series[, lag, ax])
Lag plot for time series.
parallel_coordinates(frame, class_column[, ...])
Parallel coordinates plotting.
plot_params
Stores pandas plotting options.
radviz(frame, class_column[, ax, color, ...])
Plot a multidimensional dataset in 2D.
register_matplotlib_converters()
Register pandas formatters and converters with matplotlib.
scatter_matrix(frame[, alpha, figsize, ax, ...])
Draw a matrix of scatter plots.
table(ax, data[, rowLabels, colLabels])
Helper function to convert DataFrame and Series to matplotlib.table.
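For orientation, a small sketch of how a couple of these helpers are typically called (illustrative only; assumes matplotlib is installed and the column names are made up):
```
>>> import numpy as np
>>> import pandas as pd
>>> from pandas.plotting import scatter_matrix, lag_plot
>>> df = pd.DataFrame(np.random.randn(100, 3), columns=["a", "b", "c"])
>>> axes = scatter_matrix(df, alpha=0.5, figsize=(6, 6))   # pairwise scatter plots
>>> ax = lag_plot(df["a"], lag=1)                          # value at t vs. value at t+1
```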
|
reference/plotting.html
|
pandas.tseries.offsets.BQuarterEnd.onOffset
|
pandas.tseries.offsets.BQuarterEnd.onOffset
|
BQuarterEnd.onOffset()#
|
reference/api/pandas.tseries.offsets.BQuarterEnd.onOffset.html
|
pandas.api.extensions.ExtensionDtype.name
|
`pandas.api.extensions.ExtensionDtype.name`
A string identifying the data type.
|
property ExtensionDtype.name[source]#
A string identifying the data type.
Will be used for display in, e.g., Series.dtype.
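As an illustrative sketch (not from the original page), a toy subclass showing where the name string comes from; MyDtype is hypothetical and not a complete extension type:
```
>>> from pandas.api.extensions import ExtensionDtype
>>> class MyDtype(ExtensionDtype):      # hypothetical, minimal subclass
...     type = object                   # scalar type of the dtype's elements
...     name = "my_dtype"               # the string shown e.g. as Series.dtype
...
>>> MyDtype.name
'my_dtype'
```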
|
reference/api/pandas.api.extensions.ExtensionDtype.name.html
|
pandas.Series.str.rpartition
|
`pandas.Series.str.rpartition`
Split the string at the last occurrence of sep.
This method splits the string at the last occurrence of sep,
and returns 3 elements containing the part before the separator,
the separator itself, and the part after the separator.
If the separator is not found, return 3 elements containing two empty strings, followed by the string itself.
```
>>> s = pd.Series(['Linda van der Berg', 'George Pitt-Rivers'])
>>> s
0 Linda van der Berg
1 George Pitt-Rivers
dtype: object
```
|
Series.str.rpartition(sep=' ', expand=True)[source]#
Split the string at the last occurrence of sep.
This method splits the string at the last occurrence of sep,
and returns 3 elements containing the part before the separator,
the separator itself, and the part after the separator.
If the separator is not found, return 3 elements containing two empty strings, followed by the string itself.
Parameters
sep : str, default whitespace
String to split on.
expand : bool, default True
If True, return DataFrame/MultiIndex expanding dimensionality.
If False, return Series/Index.
Returns
DataFrame/MultiIndex or Series/Index of objects
See also
partitionSplit the string at the first occurrence of sep.
Series.str.splitSplit strings around given separators.
str.partitionStandard library version.
Examples
>>> s = pd.Series(['Linda van der Berg', 'George Pitt-Rivers'])
>>> s
0 Linda van der Berg
1 George Pitt-Rivers
dtype: object
>>> s.str.partition()
        0  1             2
0   Linda     van der Berg
1  George      Pitt-Rivers
To partition by the last space instead of the first one:
>>> s.str.rpartition()
               0  1            2
0  Linda van der        Berg
1         George  Pitt-Rivers
To partition by something different than a space:
>>> s.str.partition('-')
                    0  1       2
0  Linda van der Berg
1         George Pitt  -  Rivers
To return a Series containing tuples instead of a DataFrame:
>>> s.str.partition('-', expand=False)
0 (Linda van der Berg, , )
1 (George Pitt, -, Rivers)
dtype: object
Also available on indices:
>>> idx = pd.Index(['X 123', 'Y 999'])
>>> idx
Index(['X 123', 'Y 999'], dtype='object')
Which will create a MultiIndex:
>>> idx.str.partition()
MultiIndex([('X', ' ', '123'),
('Y', ' ', '999')],
)
Or an index with tuples with expand=False:
>>> idx.str.partition(expand=False)
Index([('X', ' ', '123'), ('Y', ' ', '999')], dtype='object')
|
reference/api/pandas.Series.str.rpartition.html
|
pandas.core.groupby.GroupBy.median
|
`pandas.core.groupby.GroupBy.median`
Compute median of groups, excluding missing values.
For multiple groupings, the result index will be a MultiIndex.
|
final GroupBy.median(numeric_only=_NoDefault.no_default)[source]#
Compute median of groups, excluding missing values.
For multiple groupings, the result index will be a MultiIndex.
Parameters
numeric_only : bool, default True
Include only float, int, boolean columns. If None, will attempt to use
everything, then use only numeric data.
Returns
Series or DataFrame
Median of values within each group.
See also
Series.groupby : Apply a function groupby to a Series.
DataFrame.groupby : Apply a function groupby to each row or column of a DataFrame.
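The docstring has no example; a minimal illustrative sketch (not from the original page):
```
>>> import pandas as pd
>>> df = pd.DataFrame({"key": ["a", "a", "b", "b"], "val": [1, 3, 2, 10]})
>>> df.groupby("key")["val"].median()   # median computed per group, missing values excluded
key
a    2.0
b    6.0
Name: val, dtype: float64
```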
|
reference/api/pandas.core.groupby.GroupBy.median.html
|
pandas.tseries.offsets.SemiMonthBegin.is_year_end
|
`pandas.tseries.offsets.SemiMonthBegin.is_year_end`
Return boolean whether a timestamp occurs on the year end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
```
|
SemiMonthBegin.is_year_end()#
Return boolean whether a timestamp occurs on the year end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
|
reference/api/pandas.tseries.offsets.SemiMonthBegin.is_year_end.html
|
pandas.Index.item
|
`pandas.Index.item`
Return the first element of the underlying data as a Python scalar.
|
Index.item()[source]#
Return the first element of the underlying data as a Python scalar.
Returns
scalar
The first element of the Index.
Raises
ValueError
If the data is not length-1.
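A minimal illustrative sketch (not part of the original page):
```
>>> import pandas as pd
>>> pd.Index([5]).item()     # length-1 Index -> plain Python scalar
5
>>> # pd.Index([1, 2]).item() would raise ValueError because the Index is not length-1
```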
|
reference/api/pandas.Index.item.html
|
pandas.plotting.register_matplotlib_converters
|
`pandas.plotting.register_matplotlib_converters`
Register pandas formatters and converters with matplotlib.
This function modifies the global matplotlib.units.registry
dictionary. pandas adds custom converters for
|
pandas.plotting.register_matplotlib_converters()[source]#
Register pandas formatters and converters with matplotlib.
This function modifies the global matplotlib.units.registry
dictionary. pandas adds custom converters for
pd.Timestamp
pd.Period
np.datetime64
datetime.datetime
datetime.date
datetime.time
See also
deregister_matplotlib_converters : Remove pandas formatters and converters.
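An illustrative usage sketch (not from the original page; requires matplotlib):
```
>>> import matplotlib.pyplot as plt
>>> import pandas as pd
>>> from pandas.plotting import register_matplotlib_converters
>>> register_matplotlib_converters()
>>> ts = pd.Series(range(3), index=pd.date_range("2022-01-01", periods=3))
>>> fig, ax = plt.subplots()
>>> lines = ax.plot(ts.index, ts.values)   # Timestamp x-values now have a registered converter
```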
|
reference/api/pandas.plotting.register_matplotlib_converters.html
|
pandas.Series.cat.remove_categories
|
`pandas.Series.cat.remove_categories`
Remove the specified categories.
```
>>> c = pd.Categorical(['a', 'c', 'b', 'c', 'd'])
>>> c
['a', 'c', 'b', 'c', 'd']
Categories (4, object): ['a', 'b', 'c', 'd']
```
|
Series.cat.remove_categories(*args, **kwargs)[source]#
Remove the specified categories.
removals must be included in the old categories. Values which were in
the removed categories will be set to NaN.
Parameters
removals : category or list of categories
The categories which should be removed.
inplace : bool, default False
Whether or not to remove the categories inplace or return a copy of
this categorical with removed categories.
Deprecated since version 1.3.0.
Returns
cat : Categorical or None
Categorical with removed categories or None if inplace=True.
Raises
ValueError
If the removals are not contained in the categories.
See also
rename_categories : Rename categories.
reorder_categories : Reorder categories.
add_categories : Add new categories.
remove_unused_categories : Remove categories which are not used.
set_categories : Set the categories to the specified ones.
Examples
>>> c = pd.Categorical(['a', 'c', 'b', 'c', 'd'])
>>> c
['a', 'c', 'b', 'c', 'd']
Categories (4, object): ['a', 'b', 'c', 'd']
>>> c.remove_categories(['d', 'a'])
[NaN, 'c', 'b', 'c', NaN]
Categories (2, object): ['b', 'c']
|
reference/api/pandas.Series.cat.remove_categories.html
|
pandas.Series.plot.bar
|
`pandas.Series.plot.bar`
Vertical bar plot.
A bar plot is a plot that presents categorical data with
rectangular bars with lengths proportional to the values that they
represent. A bar plot shows comparisons among discrete categories. One
axis of the plot shows the specific categories being compared, and the
other axis represents a measured value.
```
>>> df = pd.DataFrame({'lab':['A', 'B', 'C'], 'val':[10, 30, 20]})
>>> ax = df.plot.bar(x='lab', y='val', rot=0)
```
|
Series.plot.bar(x=None, y=None, **kwargs)[source]#
Vertical bar plot.
A bar plot is a plot that presents categorical data with
rectangular bars with lengths proportional to the values that they
represent. A bar plot shows comparisons among discrete categories. One
axis of the plot shows the specific categories being compared, and the
other axis represents a measured value.
Parameters
x : label or position, optional
Allows plotting of one column versus another. If not specified,
the index of the DataFrame is used.
y : label or position, optional
Allows plotting of one column versus another. If not specified,
all numerical columns are used.
color : str, array-like, or dict, optional
The color for each of the DataFrame’s columns. Possible values are:
A single color string referred to by name, RGB or RGBA code, for
instance ‘red’ or ‘#a98d19’.
A sequence of color strings referred to by name, RGB or RGBA code,
which will be used for each column recursively. For instance, with
[‘green’, ‘yellow’] each column’s bars will be filled in green or
yellow, alternately. If there is only a single column to be
plotted, then only the first color from the color list will be used.
A dict of the form {column name : color}, so that each column will be
colored accordingly. For example, if your columns are called a and
b, then passing {‘a’: ‘green’, ‘b’: ‘red’} will color bars for
column a in green and bars for column b in red.
New in version 1.1.0.
**kwargs
Additional keyword arguments are documented in
DataFrame.plot().
Returns
matplotlib.axes.Axes or np.ndarray of them
An ndarray is returned with one matplotlib.axes.Axes
per column when subplots=True.
See also
DataFrame.plot.barhHorizontal bar plot.
DataFrame.plotMake plots of a DataFrame.
matplotlib.pyplot.barMake a bar plot with matplotlib.
Examples
Basic plot.
>>> df = pd.DataFrame({'lab':['A', 'B', 'C'], 'val':[10, 30, 20]})
>>> ax = df.plot.bar(x='lab', y='val', rot=0)
Plot a whole dataframe to a bar plot. Each column is assigned a
distinct color, and each row is nested in a group along the
horizontal axis.
>>> speed = [0.1, 17.5, 40, 48, 52, 69, 88]
>>> lifespan = [2, 8, 70, 1.5, 25, 12, 28]
>>> index = ['snail', 'pig', 'elephant',
... 'rabbit', 'giraffe', 'coyote', 'horse']
>>> df = pd.DataFrame({'speed': speed,
... 'lifespan': lifespan}, index=index)
>>> ax = df.plot.bar(rot=0)
Plot stacked bar charts for the DataFrame
>>> ax = df.plot.bar(stacked=True)
Instead of nesting, the figure can be split by column with
subplots=True. In this case, a numpy.ndarray of
matplotlib.axes.Axes are returned.
>>> axes = df.plot.bar(rot=0, subplots=True)
>>> axes[1].legend(loc=2)
If you don’t like the default colours, you can specify how you’d
like each column to be colored.
>>> axes = df.plot.bar(
... rot=0, subplots=True, color={"speed": "red", "lifespan": "green"}
... )
>>> axes[1].legend(loc=2)
Plot a single column.
>>> ax = df.plot.bar(y='speed', rot=0)
Plot only selected categories for the DataFrame.
>>> ax = df.plot.bar(x='lifespan', rot=0)
|
reference/api/pandas.Series.plot.bar.html
|
pandas.tseries.offsets.BQuarterEnd.is_year_end
|
`pandas.tseries.offsets.BQuarterEnd.is_year_end`
Return boolean whether a timestamp occurs on the year end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
```
|
BQuarterEnd.is_year_end()#
Return boolean whether a timestamp occurs on the year end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
|
reference/api/pandas.tseries.offsets.BQuarterEnd.is_year_end.html
|
pandas.PeriodIndex.end_time
|
`pandas.PeriodIndex.end_time`
Get the Timestamp for the end of the period.
|
property PeriodIndex.end_time[source]#
Get the Timestamp for the end of the period.
Returns
Timestamp
See also
Period.start_time : Return the start Timestamp.
Period.dayofyear : Return the day of year.
Period.daysinmonth : Return the days in that month.
Period.dayofweek : Return the day of the week.
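An illustrative sketch (not from the original page; the exact repr wrapping may differ):
```
>>> import pandas as pd
>>> pidx = pd.period_range("2023-01", periods=2, freq="M")
>>> pidx.end_time     # last nanosecond of each monthly period
DatetimeIndex(['2023-01-31 23:59:59.999999999',
               '2023-02-28 23:59:59.999999999'],
              dtype='datetime64[ns]', freq=None)
```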
|
reference/api/pandas.PeriodIndex.end_time.html
|
pandas.core.window.expanding.Expanding.aggregate
|
`pandas.core.window.expanding.Expanding.aggregate`
Aggregate using one or more operations over the specified axis.
```
>>> df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6], "C": [7, 8, 9]})
>>> df
A B C
0 1 4 7
1 2 5 8
2 3 6 9
```
|
Expanding.aggregate(func, *args, **kwargs)[source]#
Aggregate using one or more operations over the specified axis.
Parameters
func : function, str, list or dict
Function to use for aggregating the data. If a function, must either
work when passed a Series/Dataframe or when passed to Series/Dataframe.apply.
Accepted combinations are:
function
string function name
list of functions and/or function names, e.g. [np.sum, 'mean']
dict of axis labels -> functions, function names or list of such.
*args
Positional arguments to pass to func.
**kwargs
Keyword arguments to pass to func.
Returns
scalar, Series or DataFrame
The return can be:
scalar : when Series.agg is called with single function
Series : when DataFrame.agg is called with a single function
DataFrame : when DataFrame.agg is called with several functions
Return scalar, Series or DataFrame.
See also
pandas.DataFrame.aggregate : Similar DataFrame method.
pandas.Series.aggregate : Similar Series method.
Notes
agg is an alias for aggregate. Use the alias.
Functions that mutate the passed object can produce unexpected
behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods
for more details.
A passed user-defined-function will be passed a Series for evaluation.
Examples
>>> df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6], "C": [7, 8, 9]})
>>> df
A B C
0 1 4 7
1 2 5 8
2 3 6 9
>>> df.ewm(alpha=0.5).mean()
A B C
0 1.000000 4.000000 7.000000
1 1.666667 4.666667 7.666667
2 2.428571 5.428571 8.428571
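Using the same df, a minimal expanding-window sketch (illustrative, not from the original page; the printed column alignment may differ slightly):
```
>>> df.expanding().aggregate(["sum", "mean"])
     A          B          C
   sum mean   sum mean   sum mean
0  1.0  1.0   4.0  4.0   7.0  7.0
1  3.0  1.5   9.0  4.5  15.0  7.5
2  6.0  2.0  15.0  5.0  24.0  8.0
```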
|
reference/api/pandas.core.window.expanding.Expanding.aggregate.html
|
Creating a development environment
|
Creating a development environment
|
To test out code changes, you’ll need to build pandas from source, which
requires a C/C++ compiler and Python environment. If you’re making documentation
changes, you can skip to contributing to the documentation but if you skip
creating the development environment you won’t be able to build the documentation
locally before pushing your changes. It’s recommended to also install the pre-commit hooks.
Table of contents:
Step 1: install a C compiler
Step 2: create an isolated environment
Option 1: using mamba (recommended)
Option 2: using pip
Option 3: using Docker
Step 3: build and install pandas
Step 1: install a C compiler#
How to do this will depend on your platform. If you choose to use Docker
in the next step, then you can skip this step.
Windows
You will need Build Tools for Visual Studio 2022.
Note
You DO NOT need to install Visual Studio 2022.
You only need “Build Tools for Visual Studio 2022” found by
scrolling down to “All downloads” -> “Tools for Visual Studio”.
In the installer, select the “Desktop development with C++” Workloads.
Alternatively, you can install the necessary components on the commandline using
vs_BuildTools.exe
Alternatively, you could use the WSL
and consult the Linux instructions below.
macOS
To use the mamba-based compilers, you will need to install the
Developer Tools using xcode-select --install. Otherwise
information about compiler installation can be found here:
https://devguide.python.org/setup/#macos
Linux
For Linux-based mamba installations, you won’t have to install any
additional components outside of the mamba environment. The instructions
below are only needed if your setup isn’t based on mamba environments.
Some Linux distributions will come with a pre-installed C compiler. To find out
which compilers (and versions) are installed on your system:
# for Debian/Ubuntu:
dpkg --list | grep compiler
# for Red Hat/RHEL/CentOS/Fedora:
yum list installed | grep -i --color compiler
GCC (GNU Compiler Collection), is a widely used
compiler, which supports C and a number of other languages. If GCC is listed
as an installed compiler nothing more is required.
If no C compiler is installed, or you wish to upgrade, or you’re using a different
Linux distribution, consult your favorite search engine for compiler installation/update
instructions.
Let us know if you have any difficulties by opening an issue or reaching out on our contributor
community Slack.
Step 2: create an isolated environment#
Before we begin, please:
Make sure that you have cloned the repository
cd to the pandas source directory
Option 1: using mamba (recommended)#
Install mamba
Make sure your mamba is up to date (mamba update mamba)
# Create and activate the build environment
mamba env create --file environment.yml
mamba activate pandas-dev
Option 2: using pip#
You’ll need to have at least the minimum Python version that pandas supports.
You also need to have setuptools 51.0.0 or later to build pandas.
Unix/macOS with virtualenv
# Create a virtual environment
# Use an ENV_DIR of your choice. We'll use ~/virtualenvs/pandas-dev
# Any parent directories should already exist
python3 -m venv ~/virtualenvs/pandas-dev
# Activate the virtualenv
. ~/virtualenvs/pandas-dev/bin/activate
# Install the build dependencies
python -m pip install -r requirements-dev.txt
Unix/macOS with pyenv
Consult the docs for setting up pyenv here.
# Create a virtual environment
# Use an ENV_DIR of your choice. We'll use ~/.pyenv/versions/pandas-dev
pyenv virtualenv <version> <name-to-give-it>
# For instance:
pyenv virtualenv 3.9.10 pandas-dev
# Activate the virtualenv
pyenv activate pandas-dev
# Now install the build dependencies in the cloned pandas repo
python -m pip install -r requirements-dev.txt
Windows
Below is a brief overview on how to set up a virtual environment with Powershell
under Windows. For details please refer to the
official virtualenv user guide.
Use an ENV_DIR of your choice. We’ll use ~\\virtualenvs\\pandas-dev where
~ is the folder pointed to by either $env:USERPROFILE (Powershell) or
%USERPROFILE% (cmd.exe) environment variable. Any parent directories
should already exist.
# Create a virtual environment
python -m venv $env:USERPROFILE\virtualenvs\pandas-dev
# Activate the virtualenv. Use activate.bat for cmd.exe
~\virtualenvs\pandas-dev\Scripts\Activate.ps1
# Install the build dependencies
python -m pip install -r requirements-dev.txt
Option 3: using Docker#
pandas provides a DockerFile in the root directory to build a Docker image
with a full pandas development environment.
Docker Commands
Build the Docker image:
# Build the image
docker build -t pandas-dev .
Run Container:
# Run a container and bind your local repo to the container
# This command assumes you are running from your local repo
# but if not alter ${PWD} to match your local repo path
docker run -it --rm -v ${PWD}:/home/pandas pandas-dev
Even easier, you can integrate Docker with the following IDEs:
Visual Studio Code
You can use the DockerFile to launch a remote session with Visual Studio Code,
a popular free IDE, using the .devcontainer.json file.
See https://code.visualstudio.com/docs/remote/containers for details.
PyCharm (Professional)
Enable Docker support and use the Services tool window to build and manage images as well as
run and interact with containers.
See https://www.jetbrains.com/help/pycharm/docker.html for details.
Step 3: build and install pandas#
You can now run:
# Build and install pandas
python setup.py build_ext -j 4
python -m pip install -e . --no-build-isolation --no-use-pep517
At this point you should be able to import pandas from your locally built version:
$ python
>>> import pandas
>>> print(pandas.__version__) # note: the exact output may differ
2.0.0.dev0+880.g2b9e661fbb.dirty
This will create the new environment, and not touch any of your existing environments,
nor any existing Python installation.
Note
You will need to repeat this step each time the C extensions change, for example
if you modified any file in pandas/_libs or if you did a fetch and merge from upstream/main.
|
development/contributing_environment.html
|
pandas.tseries.offsets.YearEnd.rollback
|
`pandas.tseries.offsets.YearEnd.rollback`
Roll provided date backward to next offset only if not on offset.
Rolled timestamp if not on offset, otherwise unchanged timestamp.
|
YearEnd.rollback()#
Roll provided date backward to next offset only if not on offset.
Returns
Timestamp
Rolled timestamp if not on offset, otherwise unchanged timestamp.
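No example ships with this page; an illustrative sketch (not from the original page):
```
>>> import pandas as pd
>>> ye = pd.offsets.YearEnd()
>>> ye.rollback(pd.Timestamp("2022-06-15"))   # not on offset: rolled back to the previous year end
Timestamp('2021-12-31 00:00:00')
>>> ye.rollback(pd.Timestamp("2022-12-31"))   # already on offset: unchanged
Timestamp('2022-12-31 00:00:00')
```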
|
reference/api/pandas.tseries.offsets.YearEnd.rollback.html
|
pandas.HDFStore.keys
|
`pandas.HDFStore.keys`
Return a list of keys corresponding to objects stored in HDFStore.
|
HDFStore.keys(include='pandas')[source]#
Return a list of keys corresponding to objects stored in HDFStore.
Parameters
includestr, default ‘pandas’When kind equals ‘pandas’ return pandas objects.
When kind equals ‘native’ return native HDF5 Table objects.
New in version 1.1.0.
Returns
listList of ABSOLUTE path-names (e.g. have the leading ‘/’).
Raises
raises ValueError if kind has an illegal value
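An illustrative sketch (not from the original page; needs the optional PyTables dependency, and the file name and key are made up):
```
>>> import pandas as pd
>>> df = pd.DataFrame({"a": [1, 2]})
>>> store = pd.HDFStore("store.h5")        # hypothetical file name
>>> store.put("data/df", df)
>>> store.keys()                           # absolute path-names with a leading '/'
['/data/df']
>>> store.close()
```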
|
reference/api/pandas.HDFStore.keys.html
|
pandas.IntervalIndex.is_empty
|
`pandas.IntervalIndex.is_empty`
Indicates if an interval is empty, meaning it contains no points.
New in version 0.25.0.
```
>>> pd.Interval(0, 1, closed='right').is_empty
False
```
|
property IntervalIndex.is_empty[source]#
Indicates if an interval is empty, meaning it contains no points.
New in version 0.25.0.
Returns
bool or ndarray
A boolean indicating if a scalar Interval is empty, or a
boolean ndarray positionally indicating if an Interval in
an IntervalArray or IntervalIndex is empty.
Examples
An Interval that contains points is not empty:
>>> pd.Interval(0, 1, closed='right').is_empty
False
An Interval that does not contain any points is empty:
>>> pd.Interval(0, 0, closed='right').is_empty
True
>>> pd.Interval(0, 0, closed='left').is_empty
True
>>> pd.Interval(0, 0, closed='neither').is_empty
True
An Interval that contains a single point is not empty:
>>> pd.Interval(0, 0, closed='both').is_empty
False
An IntervalArray or IntervalIndex returns a
boolean ndarray positionally indicating if an Interval is
empty:
>>> ivs = [pd.Interval(0, 0, closed='neither'),
... pd.Interval(1, 2, closed='neither')]
>>> pd.arrays.IntervalArray(ivs).is_empty
array([ True, False])
Missing values are not considered empty:
>>> ivs = [pd.Interval(0, 0, closed='neither'), np.nan]
>>> pd.IntervalIndex(ivs).is_empty
array([ True, False])
|
reference/api/pandas.IntervalIndex.is_empty.html
|
pandas.Series.sub
|
`pandas.Series.sub`
Return Subtraction of series and other, element-wise (binary operator sub).
```
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.subtract(b, fill_value=0)
a 0.0
b 1.0
c 1.0
d -1.0
e NaN
dtype: float64
```
|
Series.sub(other, level=None, fill_value=None, axis=0)[source]#
Return Subtraction of series and other, element-wise (binary operator sub).
Equivalent to series - other, but with support to substitute a fill_value for
missing data in either one of the inputs.
Parameters
other : Series or scalar value
level : int or name
Broadcast across a level, matching Index values on the
passed MultiIndex level.
fill_value : None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for
successful Series alignment, with this value before computation.
If data in both corresponding Series locations is missing
the result of filling (at that location) will be missing.
axis : {0 or ‘index’}
Unused. Parameter needed for compatibility with DataFrame.
Returns
Series
The result of the operation.
See also
Series.rsub : Reverse of the Subtraction operator, see Python documentation for more details.
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.subtract(b, fill_value=0)
a 0.0
b 1.0
c 1.0
d -1.0
e NaN
dtype: float64
|
reference/api/pandas.Series.sub.html
|
pandas.io.formats.style.Styler.highlight_quantile
|
`pandas.io.formats.style.Styler.highlight_quantile`
Highlight values defined by a quantile with a style.
New in version 1.3.0.
```
>>> df = pd.DataFrame(np.arange(10).reshape(2,5) + 1)
>>> df.style.highlight_quantile(axis=None, q_left=0.8, color="#fffd75")
...
```
|
Styler.highlight_quantile(subset=None, color='yellow', axis=0, q_left=0.0, q_right=1.0, interpolation='linear', inclusive='both', props=None)[source]#
Highlight values defined by a quantile with a style.
New in version 1.3.0.
Parameters
subset : label, array-like, IndexSlice, optional
A valid 2d input to DataFrame.loc[<subset>], or, in the case of a 1d input
or single key, to DataFrame.loc[:, <subset>] where the columns are
prioritised, to limit data to before applying the function.
color : str, default ‘yellow’
Background color to use for highlighting.
axis : {0 or ‘index’, 1 or ‘columns’, None}, default 0
Axis along which to determine and highlight quantiles. If None, quantiles
are measured over the entire DataFrame. See examples.
q_left : float, default 0
Left bound, in [0, q_right), for the target quantile range.
q_right : float, default 1
Right bound, in (q_left, 1], for the target quantile range.
interpolation : {‘linear’, ‘lower’, ‘higher’, ‘midpoint’, ‘nearest’}
Argument passed to Series.quantile or DataFrame.quantile for
quantile estimation.
inclusive : {‘both’, ‘neither’, ‘left’, ‘right’}
Identify whether quantile bounds are closed or open.
props : str, default None
CSS properties to use for highlighting. If props is given, color
is not used.
Returns
self : Styler
See also
Styler.highlight_null : Highlight missing values with a style.
Styler.highlight_max : Highlight the maximum with a style.
Styler.highlight_min : Highlight the minimum with a style.
Styler.highlight_between : Highlight a defined range with a style.
Notes
This function does not work with str dtypes.
Examples
Using axis=None and apply a quantile to all collective data
>>> df = pd.DataFrame(np.arange(10).reshape(2,5) + 1)
>>> df.style.highlight_quantile(axis=None, q_left=0.8, color="#fffd75")
...
Or highlight quantiles row-wise or column-wise, in this case by row-wise
>>> df.style.highlight_quantile(axis=1, q_left=0.8, color="#fffd75")
...
Use props instead of default background coloring
>>> df.style.highlight_quantile(axis=None, q_left=0.2, q_right=0.8,
... props='font-weight:bold;color:#e83e8c')
|
reference/api/pandas.io.formats.style.Styler.highlight_quantile.html
|
pandas.Series.pad
|
`pandas.Series.pad`
Synonym for DataFrame.fillna() with method='ffill'.
|
Series.pad(*, axis=None, inplace=False, limit=None, downcast=None)[source]#
Synonym for DataFrame.fillna() with method='ffill'.
Returns
Series/DataFrame or None
Object with missing values filled or None if inplace=True.
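A minimal illustrative sketch (not from the original page):
```
>>> import numpy as np
>>> import pandas as pd
>>> s = pd.Series([1.0, np.nan, 3.0, np.nan])
>>> s.pad()            # forward-fill: each NaN takes the previous valid value
0    1.0
1    1.0
2    3.0
3    3.0
dtype: float64
```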
|
reference/api/pandas.Series.pad.html
|
pandas.DataFrame.to_parquet
|
`pandas.DataFrame.to_parquet`
Write a DataFrame to the binary parquet format.
```
>>> df = pd.DataFrame(data={'col1': [1, 2], 'col2': [3, 4]})
>>> df.to_parquet('df.parquet.gzip',
... compression='gzip')
>>> pd.read_parquet('df.parquet.gzip')
col1 col2
0 1 3
1 2 4
```
|
DataFrame.to_parquet(path=None, engine='auto', compression='snappy', index=None, partition_cols=None, storage_options=None, **kwargs)[source]#
Write a DataFrame to the binary parquet format.
This function writes the dataframe as a parquet file. You can choose different parquet
backends, and have the option of compression. See
the user guide for more details.
Parameters
path : str, path object, file-like object, or None, default None
String, path object (implementing os.PathLike[str]), or file-like
object implementing a binary write() function. If None, the result is
returned as bytes. If a string or path, it will be used as Root Directory
path when writing a partitioned dataset.
Changed in version 1.2.0: Previously this was “fname”.
engine : {‘auto’, ‘pyarrow’, ‘fastparquet’}, default ‘auto’
Parquet library to use. If ‘auto’, then the option
io.parquet.engine is used. The default io.parquet.engine
behavior is to try ‘pyarrow’, falling back to ‘fastparquet’ if
‘pyarrow’ is unavailable.
compression : {‘snappy’, ‘gzip’, ‘brotli’, None}, default ‘snappy’
Name of the compression to use. Use None for no compression.
index : bool, default None
If True, include the dataframe’s index(es) in the file output.
If False, they will not be written to the file.
If None, similar to True the dataframe’s index(es)
will be saved. However, instead of being saved as values,
the RangeIndex will be stored as a range in the metadata so it
doesn’t require much space and is faster. Other indexes will
be included as columns in the file output.
partition_cols : list, optional, default None
Column names by which to partition the dataset.
Columns are partitioned in the order they are given.
Must be None if path is not a string.
storage_options : dict, optional
Extra options that make sense for a particular storage connection, e.g.
host, port, username, password, etc. For HTTP(S) URLs the key-value pairs
are forwarded to urllib.request.Request as header options. For other
URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are
forwarded to fsspec.open. Please see fsspec and urllib for more
details, and for more examples on storage options refer here.
New in version 1.2.0.
**kwargs
Additional arguments passed to the parquet library. See
pandas io for more details.
Returns
bytes if no path argument is provided else None
See also
read_parquet : Read a parquet file.
DataFrame.to_orc : Write an orc file.
DataFrame.to_csv : Write a csv file.
DataFrame.to_sql : Write to a sql table.
DataFrame.to_hdf : Write to hdf.
Notes
This function requires either the fastparquet or pyarrow library.
Examples
>>> df = pd.DataFrame(data={'col1': [1, 2], 'col2': [3, 4]})
>>> df.to_parquet('df.parquet.gzip',
... compression='gzip')
>>> pd.read_parquet('df.parquet.gzip')
col1 col2
0 1 3
1 2 4
If you want to get a buffer to the parquet content you can use a io.BytesIO
object, as long as you don’t use partition_cols, which creates multiple files.
>>> import io
>>> f = io.BytesIO()
>>> df.to_parquet(f)
>>> f.seek(0)
0
>>> content = f.read()
|
reference/api/pandas.DataFrame.to_parquet.html
|
pandas.tseries.offsets.Tick.delta
|
pandas.tseries.offsets.Tick.delta
|
Tick.delta#
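The page carries no description; as an illustrative sketch (not from the original page), delta exposes a fixed-frequency tick offset as a Timedelta:
```
>>> import pandas as pd
>>> pd.offsets.Hour(3).delta
Timedelta('0 days 03:00:00')
>>> pd.offsets.Nano(10).delta
Timedelta('0 days 00:00:00.000000010')
```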
|
reference/api/pandas.tseries.offsets.Tick.delta.html
|
pandas.api.extensions.ExtensionArray.repeat
|
`pandas.api.extensions.ExtensionArray.repeat`
Repeat elements of an ExtensionArray.
```
>>> cat = pd.Categorical(['a', 'b', 'c'])
>>> cat
['a', 'b', 'c']
Categories (3, object): ['a', 'b', 'c']
>>> cat.repeat(2)
['a', 'a', 'b', 'b', 'c', 'c']
Categories (3, object): ['a', 'b', 'c']
>>> cat.repeat([1, 2, 3])
['a', 'b', 'b', 'c', 'c', 'c']
Categories (3, object): ['a', 'b', 'c']
```
|
ExtensionArray.repeat(repeats, axis=None)[source]#
Repeat elements of an ExtensionArray.
Returns a new ExtensionArray where each element of the current ExtensionArray
is repeated consecutively a given number of times.
Parameters
repeats : int or array of ints
The number of repetitions for each element. This should be a
non-negative integer. Repeating 0 times will return an empty
ExtensionArray.
axis : None
Must be None. Has no effect but is accepted for compatibility
with numpy.
Returns
repeated_array : ExtensionArray
Newly created ExtensionArray with repeated elements.
See also
Series.repeat : Equivalent function for Series.
Index.repeat : Equivalent function for Index.
numpy.repeat : Similar method for numpy.ndarray.
ExtensionArray.take : Take arbitrary positions.
Examples
>>> cat = pd.Categorical(['a', 'b', 'c'])
>>> cat
['a', 'b', 'c']
Categories (3, object): ['a', 'b', 'c']
>>> cat.repeat(2)
['a', 'a', 'b', 'b', 'c', 'c']
Categories (3, object): ['a', 'b', 'c']
>>> cat.repeat([1, 2, 3])
['a', 'b', 'b', 'c', 'c', 'c']
Categories (3, object): ['a', 'b', 'c']
|
reference/api/pandas.api.extensions.ExtensionArray.repeat.html
|
pandas.tseries.offsets.Easter.is_quarter_start
|
`pandas.tseries.offsets.Easter.is_quarter_start`
Return boolean whether a timestamp occurs on the quarter start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
```
|
Easter.is_quarter_start()#
Return boolean whether a timestamp occurs on the quarter start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
|
reference/api/pandas.tseries.offsets.Easter.is_quarter_start.html
|
pandas.core.groupby.DataFrameGroupBy.boxplot
|
`pandas.core.groupby.DataFrameGroupBy.boxplot`
Make box plots from DataFrameGroupBy data.
```
>>> import itertools
>>> tuples = [t for t in itertools.product(range(1000), range(4))]
>>> index = pd.MultiIndex.from_tuples(tuples, names=['lvl0', 'lvl1'])
>>> data = np.random.randn(len(index),4)
>>> df = pd.DataFrame(data, columns=list('ABCD'), index=index)
>>> grouped = df.groupby(level='lvl1')
>>> grouped.boxplot(rot=45, fontsize=12, figsize=(8,10))
```
|
DataFrameGroupBy.boxplot(subplots=True, column=None, fontsize=None, rot=0, grid=True, ax=None, figsize=None, layout=None, sharex=False, sharey=True, backend=None, **kwargs)[source]#
Make box plots from DataFrameGroupBy data.
Parameters
grouped : Grouped DataFrame
subplots : bool
False - no subplots will be used
True - create a subplot for each group.
column : column name or list of names, or vector
Can be any valid input to groupby.
fontsize : int or str
rot : label rotation angle
grid : Setting this to True will show the grid
ax : Matplotlib axis object, default None
figsize : A tuple (width, height) in inches
layout : tuple (optional)
The layout of the plot: (rows, columns).
sharex : bool, default False
Whether x-axes will be shared among subplots.
sharey : bool, default True
Whether y-axes will be shared among subplots.
backend : str, default None
Backend to use instead of the backend specified in the option
plotting.backend. For instance, ‘matplotlib’. Alternatively, to
specify the plotting.backend for the whole session, set
pd.options.plotting.backend.
New in version 1.0.0.
**kwargs
All other plotting keyword arguments to be passed to
matplotlib’s boxplot function.
Returns
dict of key/value = group key/DataFrame.boxplot return value
or DataFrame.boxplot return value in case subplots=figures=False
Examples
You can create boxplots for grouped data and show them as separate subplots:
>>> import itertools
>>> tuples = [t for t in itertools.product(range(1000), range(4))]
>>> index = pd.MultiIndex.from_tuples(tuples, names=['lvl0', 'lvl1'])
>>> data = np.random.randn(len(index),4)
>>> df = pd.DataFrame(data, columns=list('ABCD'), index=index)
>>> grouped = df.groupby(level='lvl1')
>>> grouped.boxplot(rot=45, fontsize=12, figsize=(8,10))
The subplots=False option shows the boxplots in a single figure.
>>> grouped.boxplot(subplots=False, rot=45, fontsize=12)
|
reference/api/pandas.core.groupby.DataFrameGroupBy.boxplot.html
|
pandas.IntervalIndex.set_closed
|
`pandas.IntervalIndex.set_closed`
Return an identical IntervalArray closed on the specified side.
```
>>> index = pd.arrays.IntervalArray.from_breaks(range(4))
>>> index
<IntervalArray>
[(0, 1], (1, 2], (2, 3]]
Length: 3, dtype: interval[int64, right]
>>> index.set_closed('both')
<IntervalArray>
[[0, 1], [1, 2], [2, 3]]
Length: 3, dtype: interval[int64, both]
```
|
IntervalIndex.set_closed(*args, **kwargs)[source]#
Return an identical IntervalArray closed on the specified side.
Parameters
closed : {‘left’, ‘right’, ‘both’, ‘neither’}
Whether the intervals are closed on the left-side, right-side, both
or neither.
Returns
new_index : IntervalArray
Examples
>>> index = pd.arrays.IntervalArray.from_breaks(range(4))
>>> index
<IntervalArray>
[(0, 1], (1, 2], (2, 3]]
Length: 3, dtype: interval[int64, right]
>>> index.set_closed('both')
<IntervalArray>
[[0, 1], [1, 2], [2, 3]]
Length: 3, dtype: interval[int64, both]
|
reference/api/pandas.IntervalIndex.set_closed.html
|
Input/output
|
Input/output
|
Pickling#
read_pickle(filepath_or_buffer[, ...])
Load pickled pandas object (or any object) from file.
DataFrame.to_pickle(path[, compression, ...])
Pickle (serialize) object to file.
Flat file#
read_table(filepath_or_buffer, *[, sep, ...])
Read general delimited file into DataFrame.
read_csv(filepath_or_buffer, *[, sep, ...])
Read a comma-separated values (csv) file into DataFrame.
DataFrame.to_csv([path_or_buf, sep, na_rep, ...])
Write object to a comma-separated values (csv) file.
read_fwf(filepath_or_buffer, *[, colspecs, ...])
Read a table of fixed-width formatted lines into DataFrame.
Clipboard#
read_clipboard([sep])
Read text from clipboard and pass to read_csv.
DataFrame.to_clipboard([excel, sep])
Copy object to the system clipboard.
Excel#
read_excel(io[, sheet_name, header, names, ...])
Read an Excel file into a pandas DataFrame.
DataFrame.to_excel(excel_writer[, ...])
Write object to an Excel sheet.
ExcelFile.parse([sheet_name, header, names, ...])
Parse specified sheet(s) into a DataFrame.
Styler.to_excel(excel_writer[, sheet_name, ...])
Write Styler to an Excel sheet.
ExcelWriter(path[, engine, date_format, ...])
Class for writing DataFrame objects into excel sheets.
JSON#
read_json(path_or_buf, *[, orient, typ, ...])
Convert a JSON string to pandas object.
json_normalize(data[, record_path, meta, ...])
Normalize semi-structured JSON data into a flat table.
DataFrame.to_json([path_or_buf, orient, ...])
Convert the object to a JSON string.
build_table_schema(data[, index, ...])
Create a Table schema from data.
HTML#
read_html(io, *[, match, flavor, header, ...])
Read HTML tables into a list of DataFrame objects.
DataFrame.to_html([buf, columns, col_space, ...])
Render a DataFrame as an HTML table.
Styler.to_html([buf, table_uuid, ...])
Write Styler to a file, buffer or string in HTML-CSS format.
XML#
read_xml(path_or_buffer, *[, xpath, ...])
Read XML document into a DataFrame object.
DataFrame.to_xml([path_or_buffer, index, ...])
Render a DataFrame to an XML document.
Latex#
DataFrame.to_latex([buf, columns, ...])
Render object to a LaTeX tabular, longtable, or nested table.
Styler.to_latex([buf, column_format, ...])
Write Styler to a file, buffer or string in LaTeX format.
HDFStore: PyTables (HDF5)#
read_hdf(path_or_buf[, key, mode, errors, ...])
Read from the store, close it if we opened it.
HDFStore.put(key, value[, format, index, ...])
Store object in HDFStore.
HDFStore.append(key, value[, format, axes, ...])
Append to Table in file.
HDFStore.get(key)
Retrieve pandas object stored in file.
HDFStore.select(key[, where, start, stop, ...])
Retrieve pandas object stored in file, optionally based on where criteria.
HDFStore.info()
Print detailed information on the store.
HDFStore.keys([include])
Return a list of keys corresponding to objects stored in HDFStore.
HDFStore.groups()
Return a list of all the top-level nodes.
HDFStore.walk([where])
Walk the pytables group hierarchy for pandas objects.
Warning
One can store a subclass of DataFrame or Series to HDF5,
but the type of the subclass is lost upon storing.
Feather#
read_feather(path[, columns, use_threads, ...])
Load a feather-format object from the file path.
DataFrame.to_feather(path, **kwargs)
Write a DataFrame to the binary Feather format.
Parquet#
read_parquet(path[, engine, columns, ...])
Load a parquet object from the file path, returning a DataFrame.
DataFrame.to_parquet([path, engine, ...])
Write a DataFrame to the binary parquet format.
ORC#
read_orc(path[, columns])
Load an ORC object from the file path, returning a DataFrame.
DataFrame.to_orc([path, engine, index, ...])
Write a DataFrame to the ORC format.
SAS#
read_sas(filepath_or_buffer, *[, format, ...])
Read SAS files stored as either XPORT or SAS7BDAT format files.
SPSS#
read_spss(path[, usecols, convert_categoricals])
Load an SPSS file from the file path, returning a DataFrame.
SQL#
read_sql_table(table_name, con[, schema, ...])
Read SQL database table into a DataFrame.
read_sql_query(sql, con[, index_col, ...])
Read SQL query into a DataFrame.
read_sql(sql, con[, index_col, ...])
Read SQL query or database table into a DataFrame.
DataFrame.to_sql(name, con[, schema, ...])
Write records stored in a DataFrame to a SQL database.
Google BigQuery#
read_gbq(query[, project_id, index_col, ...])
Load data from Google BigQuery.
STATA#
read_stata(filepath_or_buffer, *[, ...])
Read Stata file into DataFrame.
DataFrame.to_stata(path, *[, convert_dates, ...])
Export DataFrame object to Stata dta format.
StataReader.data_label
Return data label of Stata file.
StataReader.value_labels()
Return a nested dict associating each variable name to its value and label.
StataReader.variable_labels()
Return a dict associating each variable name with corresponding label.
StataWriter.write_file()
Export DataFrame object to Stata dta format.
|
reference/io.html
|
Series
|
Series
|
Constructor#
Series([data, index, dtype, name, copy, ...])
One-dimensional ndarray with axis labels (including time series).
Attributes#
Axes
Series.index
The index (axis labels) of the Series.
Series.array
The ExtensionArray of the data backing this Series or Index.
Series.values
Return Series as ndarray or ndarray-like depending on the dtype.
Series.dtype
Return the dtype object of the underlying data.
Series.shape
Return a tuple of the shape of the underlying data.
Series.nbytes
Return the number of bytes in the underlying data.
Series.ndim
Number of dimensions of the underlying data, by definition 1.
Series.size
Return the number of elements in the underlying data.
Series.T
Return the transpose, which is by definition self.
Series.memory_usage([index, deep])
Return the memory usage of the Series.
Series.hasnans
Return True if there are any NaNs.
Series.empty
Indicator whether Series/DataFrame is empty.
Series.dtypes
Return the dtype object of the underlying data.
Series.name
Return the name of the Series.
Series.flags
Get the properties associated with this pandas object.
Series.set_flags(*[, copy, ...])
Return a new object with updated flags.
Conversion#
Series.astype(dtype[, copy, errors])
Cast a pandas object to a specified dtype dtype.
Series.convert_dtypes([infer_objects, ...])
Convert columns to best possible dtypes using dtypes supporting pd.NA.
Series.infer_objects()
Attempt to infer better dtypes for object columns.
Series.copy([deep])
Make a copy of this object's indices and data.
Series.bool()
Return the bool of a single element Series or DataFrame.
Series.to_numpy([dtype, copy, na_value])
A NumPy ndarray representing the values in this Series or Index.
Series.to_period([freq, copy])
Convert Series from DatetimeIndex to PeriodIndex.
Series.to_timestamp([freq, how, copy])
Cast to DatetimeIndex of Timestamps, at beginning of period.
Series.to_list()
Return a list of the values.
Series.__array__([dtype])
Return the values as a NumPy array.
Indexing, iteration#
Series.get(key[, default])
Get item from object for given key (ex: DataFrame column).
Series.at
Access a single value for a row/column label pair.
Series.iat
Access a single value for a row/column pair by integer position.
Series.loc
Access a group of rows and columns by label(s) or a boolean array.
Series.iloc
Purely integer-location based indexing for selection by position.
Series.__iter__()
Return an iterator of the values.
Series.items()
Lazily iterate over (index, value) tuples.
Series.iteritems()
(DEPRECATED) Lazily iterate over (index, value) tuples.
Series.keys()
Return alias for index.
Series.pop(item)
Return item and drops from series.
Series.item()
Return the first element of the underlying data as a Python scalar.
Series.xs(key[, axis, level, drop_level])
Return cross-section from the Series/DataFrame.
For more information on .at, .iat, .loc, and
.iloc, see the indexing documentation.
Binary operator functions#
Series.add(other[, level, fill_value, axis])
Return Addition of series and other, element-wise (binary operator add).
Series.sub(other[, level, fill_value, axis])
Return Subtraction of series and other, element-wise (binary operator sub).
Series.mul(other[, level, fill_value, axis])
Return Multiplication of series and other, element-wise (binary operator mul).
Series.div(other[, level, fill_value, axis])
Return Floating division of series and other, element-wise (binary operator truediv).
Series.truediv(other[, level, fill_value, axis])
Return Floating division of series and other, element-wise (binary operator truediv).
Series.floordiv(other[, level, fill_value, axis])
Return Integer division of series and other, element-wise (binary operator floordiv).
Series.mod(other[, level, fill_value, axis])
Return Modulo of series and other, element-wise (binary operator mod).
Series.pow(other[, level, fill_value, axis])
Return Exponential power of series and other, element-wise (binary operator pow).
Series.radd(other[, level, fill_value, axis])
Return Addition of series and other, element-wise (binary operator radd).
Series.rsub(other[, level, fill_value, axis])
Return Subtraction of series and other, element-wise (binary operator rsub).
Series.rmul(other[, level, fill_value, axis])
Return Multiplication of series and other, element-wise (binary operator rmul).
Series.rdiv(other[, level, fill_value, axis])
Return Floating division of series and other, element-wise (binary operator rtruediv).
Series.rtruediv(other[, level, fill_value, axis])
Return Floating division of series and other, element-wise (binary operator rtruediv).
Series.rfloordiv(other[, level, fill_value, ...])
Return Integer division of series and other, element-wise (binary operator rfloordiv).
Series.rmod(other[, level, fill_value, axis])
Return Modulo of series and other, element-wise (binary operator rmod).
Series.rpow(other[, level, fill_value, axis])
Return Exponential power of series and other, element-wise (binary operator rpow).
Series.combine(other, func[, fill_value])
Combine the Series with a Series or scalar according to func.
Series.combine_first(other)
Update null elements with value in the same location in 'other'.
Series.round([decimals])
Round each value in a Series to the given number of decimals.
Series.lt(other[, level, fill_value, axis])
Return Less than of series and other, element-wise (binary operator lt).
Series.gt(other[, level, fill_value, axis])
Return Greater than of series and other, element-wise (binary operator gt).
Series.le(other[, level, fill_value, axis])
Return Less than or equal to of series and other, element-wise (binary operator le).
Series.ge(other[, level, fill_value, axis])
Return Greater than or equal to of series and other, element-wise (binary operator ge).
Series.ne(other[, level, fill_value, axis])
Return Not equal to of series and other, element-wise (binary operator ne).
Series.eq(other[, level, fill_value, axis])
Return Equal to of series and other, element-wise (binary operator eq).
Series.product([axis, skipna, level, ...])
Return the product of the values over the requested axis.
Series.dot(other)
Compute the dot product between the Series and the columns of other.
Function application, GroupBy & window#
Series.apply(func[, convert_dtype, args])
Invoke function on values of Series.
Series.agg([func, axis])
Aggregate using one or more operations over the specified axis.
Series.aggregate([func, axis])
Aggregate using one or more operations over the specified axis.
Series.transform(func[, axis])
Call func on self producing a Series with the same axis shape as self.
Series.map(arg[, na_action])
Map values of Series according to an input mapping or function.
Series.groupby([by, axis, level, as_index, ...])
Group Series using a mapper or by a Series of columns.
Series.rolling(window[, min_periods, ...])
Provide rolling window calculations.
Series.expanding([min_periods, center, ...])
Provide expanding window calculations.
Series.ewm([com, span, halflife, alpha, ...])
Provide exponentially weighted (EW) calculations.
Series.pipe(func, *args, **kwargs)
Apply chainable functions that expect Series or DataFrames.
Computations / descriptive stats#
Series.abs()
Return a Series/DataFrame with absolute numeric value of each element.
Series.all([axis, bool_only, skipna, level])
Return whether all elements are True, potentially over an axis.
Series.any(*[, axis, bool_only, skipna, level])
Return whether any element is True, potentially over an axis.
Series.autocorr([lag])
Compute the lag-N autocorrelation.
Series.between(left, right[, inclusive])
Return boolean Series equivalent to left <= series <= right.
Series.clip([lower, upper, axis, inplace])
Trim values at input threshold(s).
Series.corr(other[, method, min_periods])
Compute correlation with other Series, excluding missing values.
Series.count([level])
Return number of non-NA/null observations in the Series.
Series.cov(other[, min_periods, ddof])
Compute covariance with Series, excluding missing values.
Series.cummax([axis, skipna])
Return cumulative maximum over a DataFrame or Series axis.
Series.cummin([axis, skipna])
Return cumulative minimum over a DataFrame or Series axis.
Series.cumprod([axis, skipna])
Return cumulative product over a DataFrame or Series axis.
Series.cumsum([axis, skipna])
Return cumulative sum over a DataFrame or Series axis.
Series.describe([percentiles, include, ...])
Generate descriptive statistics.
Series.diff([periods])
First discrete difference of element.
Series.factorize([sort, na_sentinel, ...])
Encode the object as an enumerated type or categorical variable.
Series.kurt([axis, skipna, level, numeric_only])
Return unbiased kurtosis over requested axis.
Series.mad([axis, skipna, level])
(DEPRECATED) Return the mean absolute deviation of the values over the requested axis.
Series.max([axis, skipna, level, numeric_only])
Return the maximum of the values over the requested axis.
Series.mean([axis, skipna, level, numeric_only])
Return the mean of the values over the requested axis.
Series.median([axis, skipna, level, ...])
Return the median of the values over the requested axis.
Series.min([axis, skipna, level, numeric_only])
Return the minimum of the values over the requested axis.
Series.mode([dropna])
Return the mode(s) of the Series.
Series.nlargest([n, keep])
Return the largest n elements.
Series.nsmallest([n, keep])
Return the smallest n elements.
Series.pct_change([periods, fill_method, ...])
Percentage change between the current and a prior element.
Series.prod([axis, skipna, level, ...])
Return the product of the values over the requested axis.
Series.quantile([q, interpolation])
Return value at the given quantile.
Series.rank([axis, method, numeric_only, ...])
Compute numerical data ranks (1 through n) along axis.
Series.sem([axis, skipna, level, ddof, ...])
Return unbiased standard error of the mean over requested axis.
Series.skew([axis, skipna, level, numeric_only])
Return unbiased skew over requested axis.
Series.std([axis, skipna, level, ddof, ...])
Return sample standard deviation over requested axis.
Series.sum([axis, skipna, level, ...])
Return the sum of the values over the requested axis.
Series.var([axis, skipna, level, ddof, ...])
Return unbiased variance over requested axis.
Series.kurtosis([axis, skipna, level, ...])
Return unbiased kurtosis over requested axis.
Series.unique()
Return unique values of Series object.
Series.nunique([dropna])
Return number of unique elements in the object.
Series.is_unique
Return boolean if values in the object are unique.
Series.is_monotonic
(DEPRECATED) Return boolean if values in the object are monotonically increasing.
Series.is_monotonic_increasing
Return boolean if values in the object are monotonically increasing.
Series.is_monotonic_decreasing
Return boolean if values in the object are monotonically decreasing.
Series.value_counts([normalize, sort, ...])
Return a Series containing counts of unique values.
Reindexing / selection / label manipulation#
Series.align(other[, join, axis, level, ...])
Align two objects on their axes with the specified join method.
Series.drop([labels, axis, index, columns, ...])
Return Series with specified index labels removed.
Series.droplevel(level[, axis])
Return Series/DataFrame with requested index / column level(s) removed.
Series.drop_duplicates(*[, keep, inplace])
Return Series with duplicate values removed.
Series.duplicated([keep])
Indicate duplicate Series values.
Series.equals(other)
Test whether two objects contain the same elements.
Series.first(offset)
Select initial periods of time series data based on a date offset.
Series.head([n])
Return the first n rows.
Series.idxmax([axis, skipna])
Return the row label of the maximum value.
Series.idxmin([axis, skipna])
Return the row label of the minimum value.
Series.isin(values)
Whether elements in Series are contained in values.
Series.last(offset)
Select final periods of time series data based on a date offset.
Series.reindex(*args, **kwargs)
Conform Series to new index with optional filling logic.
Series.reindex_like(other[, method, copy, ...])
Return an object with matching indices as other object.
Series.rename([index, axis, copy, inplace, ...])
Alter Series index labels or name.
Series.rename_axis([mapper, inplace])
Set the name of the axis for the index or columns.
Series.reset_index([level, drop, name, ...])
Generate a new DataFrame or Series with the index reset.
Series.sample([n, frac, replace, weights, ...])
Return a random sample of items from an axis of object.
Series.set_axis(labels, *[, axis, inplace, copy])
Assign desired index to given axis.
Series.take(indices[, axis, is_copy])
Return the elements in the given positional indices along an axis.
Series.tail([n])
Return the last n rows.
Series.truncate([before, after, axis, copy])
Truncate a Series or DataFrame before and after some index value.
Series.where(cond[, other, inplace, axis, ...])
Replace values where the condition is False.
Series.mask(cond[, other, inplace, axis, ...])
Replace values where the condition is True.
Series.add_prefix(prefix)
Prefix labels with string prefix.
Series.add_suffix(suffix)
Suffix labels with string suffix.
Series.filter([items, like, regex, axis])
Subset the dataframe rows or columns according to the specified index labels.
Missing data handling#
Series.backfill(*[, axis, inplace, limit, ...])
Synonym for DataFrame.fillna() with method='bfill'.
Series.bfill(*[, axis, inplace, limit, downcast])
Synonym for DataFrame.fillna() with method='bfill'.
Series.dropna(*[, axis, inplace, how])
Return a new Series with missing values removed.
Series.ffill(*[, axis, inplace, limit, downcast])
Synonym for DataFrame.fillna() with method='ffill'.
Series.fillna([value, method, axis, ...])
Fill NA/NaN values using the specified method.
Series.interpolate([method, axis, limit, ...])
Fill NaN values using an interpolation method.
Series.isna()
Detect missing values.
Series.isnull()
Series.isnull is an alias for Series.isna.
Series.notna()
Detect existing (non-missing) values.
Series.notnull()
Series.notnull is an alias for Series.notna.
Series.pad(*[, axis, inplace, limit, downcast])
Synonym for DataFrame.fillna() with method='ffill'.
Series.replace([to_replace, value, inplace, ...])
Replace values given in to_replace with value.
Reshaping, sorting#
Series.argsort([axis, kind, order])
Return the integer indices that would sort the Series values.
Series.argmin([axis, skipna])
Return int position of the smallest value in the Series.
Series.argmax([axis, skipna])
Return int position of the largest value in the Series.
Series.reorder_levels(order)
Rearrange index levels using input order.
Series.sort_values(*[, axis, ascending, ...])
Sort by the values.
Series.sort_index(*[, axis, level, ...])
Sort Series by index labels.
Series.swaplevel([i, j, copy])
Swap levels i and j in a MultiIndex.
Series.unstack([level, fill_value])
Unstack, also known as pivot, Series with MultiIndex to produce DataFrame.
Series.explode([ignore_index])
Transform each element of a list-like to a row.
Series.searchsorted(value[, side, sorter])
Find indices where elements should be inserted to maintain order.
Series.ravel([order])
Return the flattened underlying data as an ndarray.
Series.repeat(repeats[, axis])
Repeat elements of a Series.
Series.squeeze([axis])
Squeeze 1 dimensional axis objects into scalars.
Series.view([dtype])
Create a new view of the Series.
Combining / comparing / joining / merging#
Series.append(to_append[, ignore_index, ...])
(DEPRECATED) Concatenate two or more Series.
Series.compare(other[, align_axis, ...])
Compare to another Series and show the differences.
Series.update(other)
Modify Series in place using values from passed Series.
Time Series-related#
Series.asfreq(freq[, method, how, ...])
Convert time series to specified frequency.
Series.asof(where[, subset])
Return the last row(s) without any NaNs before where.
Series.shift([periods, freq, axis, fill_value])
Shift index by desired number of periods with an optional time freq.
Series.first_valid_index()
Return index for first non-NA value or None, if no non-NA value is found.
Series.last_valid_index()
Return index for last non-NA value or None, if no non-NA value is found.
Series.resample(rule[, axis, closed, label, ...])
Resample time-series data.
Series.tz_convert(tz[, axis, level, copy])
Convert tz-aware axis to target time zone.
Series.tz_localize(tz[, axis, level, copy, ...])
Localize tz-naive index of a Series or DataFrame to target time zone.
Series.at_time(time[, asof, axis])
Select values at particular time of day (e.g., 9:30AM).
Series.between_time(start_time, end_time[, ...])
Select values between particular times of the day (e.g., 9:00-9:30 AM).
Series.tshift([periods, freq, axis])
(DEPRECATED) Shift the time index, using the index's frequency if available.
Series.slice_shift([periods, axis])
(DEPRECATED) Equivalent to shift without copying data.
Accessors#
pandas provides dtype-specific methods under various accessors.
These are separate namespaces within Series that only apply
to specific data types.
Data Type: Accessor
Datetime, Timedelta, Period: dt
String: str
Categorical: cat
Sparse: sparse
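For example, the matching namespace becomes available once the Series holds that dtype (a minimal sketch using only standard constructors):
```
>>> s = pd.Series(pd.to_datetime(["2022-01-01", "2022-06-15"]))
>>> s.dt.month.tolist()
[1, 6]
>>> pd.Series(["a", "b"]).str.upper().tolist()
['A', 'B']
```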
Datetimelike properties#
Series.dt can be used to access the values of the series as
datetimelike and return several properties.
These can be accessed like Series.dt.<property>.
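For example, with a small datetime64 Series (a minimal sketch):
```
>>> s = pd.Series(pd.date_range("2022-01-01", periods=3, freq="D"))
>>> s.dt.year.tolist()
[2022, 2022, 2022]
>>> s.dt.day_name().tolist()
['Saturday', 'Sunday', 'Monday']
```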
Datetime properties#
Series.dt.date
Returns numpy array of python datetime.date objects.
Series.dt.time
Returns numpy array of datetime.time objects.
Series.dt.timetz
Returns numpy array of datetime.time objects with timezones.
Series.dt.year
The year of the datetime.
Series.dt.month
The month as January=1, December=12.
Series.dt.day
The day of the datetime.
Series.dt.hour
The hours of the datetime.
Series.dt.minute
The minutes of the datetime.
Series.dt.second
The seconds of the datetime.
Series.dt.microsecond
The microseconds of the datetime.
Series.dt.nanosecond
The nanoseconds of the datetime.
Series.dt.week
(DEPRECATED) The week ordinal of the year according to the ISO 8601 standard.
Series.dt.weekofyear
(DEPRECATED) The week ordinal of the year according to the ISO 8601 standard.
Series.dt.dayofweek
The day of the week with Monday=0, Sunday=6.
Series.dt.day_of_week
The day of the week with Monday=0, Sunday=6.
Series.dt.weekday
The day of the week with Monday=0, Sunday=6.
Series.dt.dayofyear
The ordinal day of the year.
Series.dt.day_of_year
The ordinal day of the year.
Series.dt.quarter
The quarter of the date.
Series.dt.is_month_start
Indicates whether the date is the first day of the month.
Series.dt.is_month_end
Indicates whether the date is the last day of the month.
Series.dt.is_quarter_start
Indicator for whether the date is the first day of a quarter.
Series.dt.is_quarter_end
Indicator for whether the date is the last day of a quarter.
Series.dt.is_year_start
Indicate whether the date is the first day of a year.
Series.dt.is_year_end
Indicate whether the date is the last day of the year.
Series.dt.is_leap_year
Boolean indicator if the date belongs to a leap year.
Series.dt.daysinmonth
The number of days in the month.
Series.dt.days_in_month
The number of days in the month.
Series.dt.tz
Return the timezone.
Series.dt.freq
Return the frequency object for this PeriodArray.
Datetime methods#
Series.dt.isocalendar()
Calculate year, week, and day according to the ISO 8601 standard.
Series.dt.to_period(*args, **kwargs)
Cast to PeriodArray/Index at a particular frequency.
Series.dt.to_pydatetime()
Return the data as an array of datetime.datetime objects.
Series.dt.tz_localize(*args, **kwargs)
Localize tz-naive Datetime Array/Index to tz-aware Datetime Array/Index.
Series.dt.tz_convert(*args, **kwargs)
Convert tz-aware Datetime Array/Index from one time zone to another.
Series.dt.normalize(*args, **kwargs)
Convert times to midnight.
Series.dt.strftime(*args, **kwargs)
Convert to Index using specified date_format.
Series.dt.round(*args, **kwargs)
Perform round operation on the data to the specified freq.
Series.dt.floor(*args, **kwargs)
Perform floor operation on the data to the specified freq.
Series.dt.ceil(*args, **kwargs)
Perform ceil operation on the data to the specified freq.
Series.dt.month_name(*args, **kwargs)
Return the month names with specified locale.
Series.dt.day_name(*args, **kwargs)
Return the day names with specified locale.
Period properties#
Series.dt.qyear
Series.dt.start_time
Get the Timestamp for the start of the period.
Series.dt.end_time
Get the Timestamp for the end of the period.
Timedelta properties#
Series.dt.days
Number of days for each element.
Series.dt.seconds
Number of seconds (>= 0 and less than 1 day) for each element.
Series.dt.microseconds
Number of microseconds (>= 0 and less than 1 second) for each element.
Series.dt.nanoseconds
Number of nanoseconds (>= 0 and less than 1 microsecond) for each element.
Series.dt.components
Return a Dataframe of the components of the Timedeltas.
Timedelta methods#
Series.dt.to_pytimedelta()
Return an array of native datetime.timedelta objects.
Series.dt.total_seconds(*args, **kwargs)
Return total duration of each element expressed in seconds.
String handling#
Series.str can be used to access the values of the series as
strings and apply several methods to it. These can be accessed like
Series.str.<function/property>.
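For example (a minimal sketch):
```
>>> s = pd.Series(["alpha", "Beta"])
>>> s.str.upper().tolist()
['ALPHA', 'BETA']
>>> s.str.len().tolist()
[5, 4]
```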
Series.str.capitalize()
Convert strings in the Series/Index to be capitalized.
Series.str.casefold()
Convert strings in the Series/Index to be casefolded.
Series.str.cat([others, sep, na_rep, join])
Concatenate strings in the Series/Index with given separator.
Series.str.center(width[, fillchar])
Pad left and right side of strings in the Series/Index.
Series.str.contains(pat[, case, flags, na, ...])
Test if pattern or regex is contained within a string of a Series or Index.
Series.str.count(pat[, flags])
Count occurrences of pattern in each string of the Series/Index.
Series.str.decode(encoding[, errors])
Decode character string in the Series/Index using indicated encoding.
Series.str.encode(encoding[, errors])
Encode character string in the Series/Index using indicated encoding.
Series.str.endswith(pat[, na])
Test if the end of each string element matches a pattern.
Series.str.extract(pat[, flags, expand])
Extract capture groups in the regex pat as columns in a DataFrame.
Series.str.extractall(pat[, flags])
Extract capture groups in the regex pat as columns in DataFrame.
Series.str.find(sub[, start, end])
Return lowest indexes in each strings in the Series/Index.
Series.str.findall(pat[, flags])
Find all occurrences of pattern or regular expression in the Series/Index.
Series.str.fullmatch(pat[, case, flags, na])
Determine if each string entirely matches a regular expression.
Series.str.get(i)
Extract element from each component at specified position or with specified key.
Series.str.index(sub[, start, end])
Return lowest indexes in each string in Series/Index.
Series.str.join(sep)
Join lists contained as elements in the Series/Index with passed delimiter.
Series.str.len()
Compute the length of each element in the Series/Index.
Series.str.ljust(width[, fillchar])
Pad right side of strings in the Series/Index.
Series.str.lower()
Convert strings in the Series/Index to lowercase.
Series.str.lstrip([to_strip])
Remove leading characters.
Series.str.match(pat[, case, flags, na])
Determine if each string starts with a match of a regular expression.
Series.str.normalize(form)
Return the Unicode normal form for the strings in the Series/Index.
Series.str.pad(width[, side, fillchar])
Pad strings in the Series/Index up to width.
Series.str.partition([sep, expand])
Split the string at the first occurrence of sep.
Series.str.removeprefix(prefix)
Remove a prefix from an object series.
Series.str.removesuffix(suffix)
Remove a suffix from an object series.
Series.str.repeat(repeats)
Duplicate each string in the Series or Index.
Series.str.replace(pat, repl[, n, case, ...])
Replace each occurrence of pattern/regex in the Series/Index.
Series.str.rfind(sub[, start, end])
Return highest indexes in each strings in the Series/Index.
Series.str.rindex(sub[, start, end])
Return highest indexes in each string in Series/Index.
Series.str.rjust(width[, fillchar])
Pad left side of strings in the Series/Index.
Series.str.rpartition([sep, expand])
Split the string at the last occurrence of sep.
Series.str.rstrip([to_strip])
Remove trailing characters.
Series.str.slice([start, stop, step])
Slice substrings from each element in the Series or Index.
Series.str.slice_replace([start, stop, repl])
Replace a positional slice of a string with another value.
Series.str.split([pat, n, expand, regex])
Split strings around given separator/delimiter.
Series.str.rsplit([pat, n, expand])
Split strings around given separator/delimiter.
Series.str.startswith(pat[, na])
Test if the start of each string element matches a pattern.
Series.str.strip([to_strip])
Remove leading and trailing characters.
Series.str.swapcase()
Convert strings in the Series/Index to be swapcased.
Series.str.title()
Convert strings in the Series/Index to titlecase.
Series.str.translate(table)
Map all characters in the string through the given mapping table.
Series.str.upper()
Convert strings in the Series/Index to uppercase.
Series.str.wrap(width, **kwargs)
Wrap strings in Series/Index at specified line width.
Series.str.zfill(width)
Pad strings in the Series/Index by prepending '0' characters.
Series.str.isalnum()
Check whether all characters in each string are alphanumeric.
Series.str.isalpha()
Check whether all characters in each string are alphabetic.
Series.str.isdigit()
Check whether all characters in each string are digits.
Series.str.isspace()
Check whether all characters in each string are whitespace.
Series.str.islower()
Check whether all characters in each string are lowercase.
Series.str.isupper()
Check whether all characters in each string are uppercase.
Series.str.istitle()
Check whether all characters in each string are titlecase.
Series.str.isnumeric()
Check whether all characters in each string are numeric.
Series.str.isdecimal()
Check whether all characters in each string are decimal.
Series.str.get_dummies([sep])
Return DataFrame of dummy/indicator variables for Series.
Categorical accessor#
Categorical-dtype specific methods and attributes are available under
the Series.cat accessor.
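For example (a minimal sketch):
```
>>> s = pd.Series(["a", "b", "a"], dtype="category")
>>> list(s.cat.categories)
['a', 'b']
>>> s.cat.codes.tolist()
[0, 1, 0]
```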
Series.cat.categories
The categories of this categorical.
Series.cat.ordered
Whether the categories have an ordered relationship.
Series.cat.codes
Return Series of codes as well as the index.
Series.cat.rename_categories(*args, **kwargs)
Rename categories.
Series.cat.reorder_categories(*args, **kwargs)
Reorder categories as specified in new_categories.
Series.cat.add_categories(*args, **kwargs)
Add new categories.
Series.cat.remove_categories(*args, **kwargs)
Remove the specified categories.
Series.cat.remove_unused_categories(*args, ...)
Remove categories which are not used.
Series.cat.set_categories(*args, **kwargs)
Set the categories to the specified new_categories.
Series.cat.as_ordered(*args, **kwargs)
Set the Categorical to be ordered.
Series.cat.as_unordered(*args, **kwargs)
Set the Categorical to be unordered.
Sparse accessor#
Sparse-dtype specific methods and attributes are provided under the
Series.sparse accessor.
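For example (a minimal sketch; the integer sparse dtype uses 0 as its default fill value):
```
>>> s = pd.Series([0, 0, 1, 2], dtype="Sparse[int]")
>>> s.sparse.density
0.5
>>> s.sparse.fill_value
0
```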
Series.sparse.npoints
The number of non-fill_value points.
Series.sparse.density
The percent of non-fill_value points, as decimal.
Series.sparse.fill_value
Elements in data that are fill_value are not stored.
Series.sparse.sp_values
An ndarray containing the non-fill_value values.
Series.sparse.from_coo(A[, dense_index])
Create a Series with sparse values from a scipy.sparse.coo_matrix.
Series.sparse.to_coo([row_levels, ...])
Create a scipy.sparse.coo_matrix from a Series with MultiIndex.
Flags#
Flags refer to attributes of the pandas object. Properties of the dataset (like
the date it was recorded, the URL it was accessed from, etc.) should be stored
in Series.attrs.
Flags(obj, *, allows_duplicate_labels)
Flags that apply to pandas objects.
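For example, duplicate labels can be disallowed per object (a minimal sketch):
```
>>> s = pd.Series([1, 2, 3])
>>> s.flags.allows_duplicate_labels
True
>>> s.set_flags(allows_duplicate_labels=False).flags.allows_duplicate_labels
False
```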
Metadata#
Series.attrs is a dictionary for storing global metadata for this Series.
Warning
Series.attrs is considered experimental and may change without warning.
Series.attrs
Dictionary of global attributes of this dataset.
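For example, attrs behaves like a plain dictionary attached to the object (a minimal sketch; the "source" key is purely illustrative):
```
>>> s = pd.Series([1, 2, 3])
>>> s.attrs["source"] = "sensor-42"
>>> s.attrs
{'source': 'sensor-42'}
```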
Plotting#
Series.plot is both a callable method and a namespace attribute for
specific plotting methods of the form Series.plot.<kind>.
Series.plot([kind, ax, figsize, ....])
Series plotting accessor and method
Series.plot.area([x, y])
Draw a stacked area plot.
Series.plot.bar([x, y])
Vertical bar plot.
Series.plot.barh([x, y])
Make a horizontal bar plot.
Series.plot.box([by])
Make a box plot of the DataFrame columns.
Series.plot.density([bw_method, ind])
Generate Kernel Density Estimate plot using Gaussian kernels.
Series.plot.hist([by, bins])
Draw one histogram of the DataFrame's columns.
Series.plot.kde([bw_method, ind])
Generate Kernel Density Estimate plot using Gaussian kernels.
Series.plot.line([x, y])
Plot Series or DataFrame as lines.
Series.plot.pie(**kwargs)
Generate a pie plot.
Series.hist([by, ax, grid, xlabelsize, ...])
Draw histogram of the input series using matplotlib.
Serialization / IO / conversion#
Series.to_pickle(path[, compression, ...])
Pickle (serialize) object to file.
Series.to_csv([path_or_buf, sep, na_rep, ...])
Write object to a comma-separated values (csv) file.
Series.to_dict([into])
Convert Series to {label -> value} dict or dict-like object.
Series.to_excel(excel_writer[, sheet_name, ...])
Write object to an Excel sheet.
Series.to_frame([name])
Convert Series to DataFrame.
Series.to_xarray()
Return an xarray object from the pandas object.
Series.to_hdf(path_or_buf, key[, mode, ...])
Write the contained data to an HDF5 file using HDFStore.
Series.to_sql(name, con[, schema, ...])
Write records stored in a DataFrame to a SQL database.
Series.to_json([path_or_buf, orient, ...])
Convert the object to a JSON string.
Series.to_string([buf, na_rep, ...])
Render a string representation of the Series.
Series.to_clipboard([excel, sep])
Copy object to the system clipboard.
Series.to_latex([buf, columns, col_space, ...])
Render object to a LaTeX tabular, longtable, or nested table.
Series.to_markdown([buf, mode, index, ...])
Print Series in Markdown-friendly format.
|
reference/series.html
|
pandas.tseries.offsets.DateOffset.is_month_start
|
`pandas.tseries.offsets.DateOffset.is_month_start`
Return boolean whether a timestamp occurs on the month start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
```
|
DateOffset.is_month_start()#
Return boolean whether a timestamp occurs on the month start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
|
reference/api/pandas.tseries.offsets.DateOffset.is_month_start.html
|
pandas.Series.notnull
|
`pandas.Series.notnull`
Series.notnull is an alias for Series.notna.
```
>>> df = pd.DataFrame(dict(age=[5, 6, np.NaN],
... born=[pd.NaT, pd.Timestamp('1939-05-27'),
... pd.Timestamp('1940-04-25')],
... name=['Alfred', 'Batman', ''],
... toy=[None, 'Batmobile', 'Joker']))
>>> df
age born name toy
0 5.0 NaT Alfred None
1 6.0 1939-05-27 Batman Batmobile
2 NaN 1940-04-25 Joker
```
|
Series.notnull()[source]#
Series.notnull is an alias for Series.notna.
Detect existing (non-missing) values.
Return a boolean same-sized object indicating if the values are not NA.
Non-missing values get mapped to True. Characters such as empty
strings '' or numpy.inf are not considered NA values
(unless you set pandas.options.mode.use_inf_as_na = True).
NA values, such as None or numpy.NaN, get mapped to False
values.
Returns
SeriesMask of bool values for each element in Series that
indicates whether an element is not an NA value.
See also
Series.notnullAlias of notna.
Series.isnaBoolean inverse of notna.
Series.dropnaOmit axes labels with missing values.
notnaTop-level notna.
Examples
Show which entries in a DataFrame are not NA.
>>> df = pd.DataFrame(dict(age=[5, 6, np.NaN],
... born=[pd.NaT, pd.Timestamp('1939-05-27'),
... pd.Timestamp('1940-04-25')],
... name=['Alfred', 'Batman', ''],
... toy=[None, 'Batmobile', 'Joker']))
>>> df
age born name toy
0 5.0 NaT Alfred None
1 6.0 1939-05-27 Batman Batmobile
2 NaN 1940-04-25 Joker
>>> df.notna()
age born name toy
0 True False True False
1 True True True True
2 False True True True
Show which entries in a Series are not NA.
>>> ser = pd.Series([5, 6, np.NaN])
>>> ser
0 5.0
1 6.0
2 NaN
dtype: float64
>>> ser.notna()
0 True
1 True
2 False
dtype: bool
|
reference/api/pandas.Series.notnull.html
|
pandas.errors.UnsortedIndexError
|
`pandas.errors.UnsortedIndexError`
Error raised when slicing a MultiIndex which has not been lexsorted.
Subclass of KeyError.
|
exception pandas.errors.UnsortedIndexError[source]#
Error raised when slicing a MultiIndex which has not been lexsorted.
Subclass of KeyError.
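A minimal sketch of how this can arise (exact behaviour depends on the pandas version; sorting the index first avoids the error):
>>> mi = pd.MultiIndex.from_tuples([("b", 1), ("a", 2)])
>>> s = pd.Series([1, 2], index=mi)
>>> s.loc[("a", 1):("b", 2)]  # may raise UnsortedIndexError; call s.sort_index() first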
|
reference/api/pandas.errors.UnsortedIndexError.html
|
pandas.tseries.offsets.Easter.name
|
`pandas.tseries.offsets.Easter.name`
Return a string representing the base frequency.
```
>>> pd.offsets.Hour().name
'H'
```
|
Easter.name#
Return a string representing the base frequency.
Examples
>>> pd.offsets.Hour().name
'H'
>>> pd.offsets.Hour(5).name
'H'
|
reference/api/pandas.tseries.offsets.Easter.name.html
|
pandas.tseries.offsets.CustomBusinessDay.isAnchored
|
pandas.tseries.offsets.CustomBusinessDay.isAnchored
|
CustomBusinessDay.isAnchored()#
|
reference/api/pandas.tseries.offsets.CustomBusinessDay.isAnchored.html
|
pandas.tseries.offsets.Week.normalize
|
pandas.tseries.offsets.Week.normalize
|
Week.normalize#
|
reference/api/pandas.tseries.offsets.Week.normalize.html
|
pandas.tseries.offsets.CustomBusinessMonthBegin.rollback
|
`pandas.tseries.offsets.CustomBusinessMonthBegin.rollback`
Roll provided date backward to next offset only if not on offset.
|
CustomBusinessMonthBegin.rollback()#
Roll provided date backward to next offset only if not on offset.
Returns
TimeStampRolled timestamp if not on offset, otherwise unchanged timestamp.
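A minimal sketch, assuming the default calendar with no custom holidays or weekmask:
>>> offset = pd.offsets.CustomBusinessMonthBegin()
>>> offset.rollback(pd.Timestamp("2022-01-15"))
Timestamp('2022-01-03 00:00:00')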
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthBegin.rollback.html
|
pandas.Period.second
|
`pandas.Period.second`
Get the second component of the Period.
The second of the Period (ranges from 0 to 59).
```
>>> p = pd.Period("2018-03-11 13:03:12.050000")
>>> p.second
12
```
|
Period.second#
Get the second component of the Period.
Returns
intThe second of the Period (ranges from 0 to 59).
See also
Period.hourGet the hour component of the Period.
Period.minuteGet the minute component of the Period.
Examples
>>> p = pd.Period("2018-03-11 13:03:12.050000")
>>> p.second
12
|
reference/api/pandas.Period.second.html
|
pandas.tseries.offsets.LastWeekOfMonth.is_month_start
|
`pandas.tseries.offsets.LastWeekOfMonth.is_month_start`
Return boolean whether a timestamp occurs on the month start.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
```
|
LastWeekOfMonth.is_month_start()#
Return boolean whether a timestamp occurs on the month start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
|
reference/api/pandas.tseries.offsets.LastWeekOfMonth.is_month_start.html
|
pandas.tseries.offsets.Tick.is_quarter_start
|
`pandas.tseries.offsets.Tick.is_quarter_start`
Return boolean whether a timestamp occurs on the quarter start.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
```
|
Tick.is_quarter_start()#
Return boolean whether a timestamp occurs on the quarter start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
|
reference/api/pandas.tseries.offsets.Tick.is_quarter_start.html
|
pandas.DatetimeIndex.dayofyear
|
`pandas.DatetimeIndex.dayofyear`
The ordinal day of the year.
|
property DatetimeIndex.dayofyear[source]#
The ordinal day of the year.
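For example:
>>> idx = pd.DatetimeIndex(["2022-01-01", "2022-02-01"])
>>> idx.dayofyear.tolist()
[1, 32]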
|
reference/api/pandas.DatetimeIndex.dayofyear.html
|
pandas.errors.DataError
|
`pandas.errors.DataError`
Exception raised when performing an operation on non-numerical data.
|
exception pandas.errors.DataError[source]#
Exception raised when performing an operation on non-numerical data.
For example, calling ohlc on a non-numerical column or a function
on a rolling window.
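A minimal sketch of one way this can surface (the exact message and the set of operations that raise it vary by version):
>>> s = pd.Series(["x", "y", "z"])
>>> s.rolling(2).sum()  # may raise DataError: no numeric types to aggregate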
|
reference/api/pandas.errors.DataError.html
|
pandas.IntervalIndex.closed
|
`pandas.IntervalIndex.closed`
String describing the inclusive side the intervals.
|
IntervalIndex.closed[source]#
String describing the inclusive side the intervals.
Either left, right, both or neither.
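For example:
>>> idx = pd.interval_range(start=0, end=3)
>>> idx.closed
'right'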
|
reference/api/pandas.IntervalIndex.closed.html
|
pandas.Index.hasnans
|
`pandas.Index.hasnans`
Return True if there are any NaNs.
Enables various performance speedups.
|
Index.hasnans[source]#
Return True if there are any NaNs.
Enables various performance speedups.
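For example:
>>> pd.Index([1.0, 2.0, np.nan]).hasnans
True
>>> pd.Index([1, 2, 3]).hasnans
False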
|
reference/api/pandas.Index.hasnans.html
|
pandas.Series.argsort
|
`pandas.Series.argsort`
Return the integer indices that would sort the Series values.
Override ndarray.argsort. Argsorts the value, omitting NA/null values,
and places the result in the same locations as the non-NA values.
|
Series.argsort(axis=0, kind='quicksort', order=None)[source]#
Return the integer indices that would sort the Series values.
Override ndarray.argsort. Argsorts the value, omitting NA/null values,
and places the result in the same locations as the non-NA values.
Parameters
axis{0 or ‘index’}Unused. Parameter needed for compatibility with DataFrame.
kind{‘mergesort’, ‘quicksort’, ‘heapsort’, ‘stable’}, default ‘quicksort’Choice of sorting algorithm. See numpy.sort() for more
information. ‘mergesort’ and ‘stable’ are the only stable algorithms.
orderNoneHas no effect but is accepted for compatibility with numpy.
Returns
Series[np.intp]Positions of values within the sort order with -1 indicating
nan values.
See also
numpy.ndarray.argsortReturns the indices that would sort this array.
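For example:
>>> s = pd.Series([3, 1, 2])
>>> s.argsort().tolist()  # position 1 holds the smallest value, position 0 the largest
[1, 2, 0]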
|
reference/api/pandas.Series.argsort.html
|
pandas.tseries.offsets.YearBegin.is_month_end
|
`pandas.tseries.offsets.YearBegin.is_month_end`
Return boolean whether a timestamp occurs on the month end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
```
|
YearBegin.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
|
reference/api/pandas.tseries.offsets.YearBegin.is_month_end.html
|
pandas.DatetimeIndex
|
`pandas.DatetimeIndex`
Immutable ndarray-like of datetime64 data.
Represented internally as int64, and which can be boxed to Timestamp objects
that are subclasses of datetime and carry metadata.
|
class pandas.DatetimeIndex(data=None, freq=_NoDefault.no_default, tz=None, normalize=False, closed=None, ambiguous='raise', dayfirst=False, yearfirst=False, dtype=None, copy=False, name=None)[source]#
Immutable ndarray-like of datetime64 data.
Represented internally as int64, and which can be boxed to Timestamp objects
that are subclasses of datetime and carry metadata.
Parameters
dataarray-like (1-dimensional)Datetime-like data to construct index with.
freqstr or pandas offset object, optionalOne of pandas date offset strings or corresponding objects. The string
‘infer’ can be passed in order to set the frequency of the index as the
inferred frequency upon creation.
tzpytz.timezone or dateutil.tz.tzfile or datetime.tzinfo or strSet the Timezone of the data.
normalizebool, default FalseNormalize start/end dates to midnight before generating date range.
closed{‘left’, ‘right’}, optionalSet whether to include start and end that are on the
boundary. The default includes boundary points on either end.
ambiguous‘infer’, bool-ndarray, ‘NaT’, default ‘raise’When clocks moved backward due to DST, ambiguous times may arise.
For example in Central European Time (UTC+01), when going from 03:00
DST to 02:00 non-DST, 02:30:00 local time occurs both at 00:30:00 UTC
and at 01:30:00 UTC. In such a situation, the ambiguous parameter
dictates how ambiguous times should be handled.
‘infer’ will attempt to infer fall dst-transition hours based on
order
bool-ndarray where True signifies a DST time, False signifies a
non-DST time (note that this flag is only applicable for ambiguous
times)
‘NaT’ will return NaT where there are ambiguous times
‘raise’ will raise an AmbiguousTimeError if there are ambiguous times.
dayfirstbool, default FalseIf True, parse dates in data with the day first order.
yearfirstbool, default FalseIf True parse dates in data with the year first order.
dtypenumpy.dtype or DatetimeTZDtype or str, default NoneNote that the only NumPy dtype allowed is ‘datetime64[ns]’.
copybool, default FalseMake a copy of input ndarray.
namelabel, default NoneName to be stored in the index.
See also
IndexThe base pandas Index type.
TimedeltaIndexIndex of timedelta64 data.
PeriodIndexIndex of Period data.
to_datetimeConvert argument to datetime.
date_rangeCreate a fixed-frequency DatetimeIndex.
Notes
To learn more about the frequency strings, please see this link.
Attributes
year
The year of the datetime.
month
The month as January=1, December=12.
day
The day of the datetime.
hour
The hours of the datetime.
minute
The minutes of the datetime.
second
The seconds of the datetime.
microsecond
The microseconds of the datetime.
nanosecond
The nanoseconds of the datetime.
date
Returns numpy array of python datetime.date objects.
time
Returns numpy array of datetime.time objects.
timetz
Returns numpy array of datetime.time objects with timezones.
dayofyear
The ordinal day of the year.
day_of_year
The ordinal day of the year.
weekofyear
(DEPRECATED) The week ordinal of the year.
week
(DEPRECATED) The week ordinal of the year.
dayofweek
The day of the week with Monday=0, Sunday=6.
day_of_week
The day of the week with Monday=0, Sunday=6.
weekday
The day of the week with Monday=0, Sunday=6.
quarter
The quarter of the date.
tz
Return the timezone.
freq
Return the frequency object if it is set, otherwise None.
freqstr
Return the frequency object as a string if its set, otherwise None.
is_month_start
Indicates whether the date is the first day of the month.
is_month_end
Indicates whether the date is the last day of the month.
is_quarter_start
Indicator for whether the date is the first day of a quarter.
is_quarter_end
Indicator for whether the date is the last day of a quarter.
is_year_start
Indicate whether the date is the first day of a year.
is_year_end
Indicate whether the date is the last day of the year.
is_leap_year
Boolean indicator if the date belongs to a leap year.
inferred_freq
Tries to return a string representing a frequency generated by infer_freq.
Methods
normalize(*args, **kwargs)
Convert times to midnight.
strftime(date_format)
Convert to Index using specified date_format.
snap([freq])
Snap time stamps to nearest occurring frequency.
tz_convert(tz)
Convert tz-aware Datetime Array/Index from one time zone to another.
tz_localize(tz[, ambiguous, nonexistent])
Localize tz-naive Datetime Array/Index to tz-aware Datetime Array/Index.
round(*args, **kwargs)
Perform round operation on the data to the specified freq.
floor(*args, **kwargs)
Perform floor operation on the data to the specified freq.
ceil(*args, **kwargs)
Perform ceil operation on the data to the specified freq.
to_period(*args, **kwargs)
Cast to PeriodArray/Index at a particular frequency.
to_perioddelta(freq)
Calculate deltas between self values and self converted to Periods at a freq.
to_pydatetime(*args, **kwargs)
Return an ndarray of datetime.datetime objects.
to_series([keep_tz, index, name])
Create a Series with both index and values equal to the index keys.
to_frame([index, name])
Create a DataFrame with a column containing the Index.
month_name(*args, **kwargs)
Return the month names with specified locale.
day_name(*args, **kwargs)
Return the day names with specified locale.
mean(*args, **kwargs)
Return the mean value of the Array.
std(*args, **kwargs)
Return sample standard deviation over requested axis.
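A minimal usage sketch:
>>> idx = pd.DatetimeIndex(["2022-01-01", "2022-01-02"], tz="UTC")
>>> idx.day_name().tolist()
['Saturday', 'Sunday']
>>> idx.tz_convert("US/Eastern").hour.tolist()
[19, 19]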
|
reference/api/pandas.DatetimeIndex.html
|
pandas.tseries.offsets.Minute.is_year_start
|
`pandas.tseries.offsets.Minute.is_year_start`
Return boolean whether a timestamp occurs on the year start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
```
|
Minute.is_year_start()#
Return boolean whether a timestamp occurs on the year start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
|
reference/api/pandas.tseries.offsets.Minute.is_year_start.html
|
pandas.tseries.offsets.LastWeekOfMonth.isAnchored
|
pandas.tseries.offsets.LastWeekOfMonth.isAnchored
|
LastWeekOfMonth.isAnchored()#
|
reference/api/pandas.tseries.offsets.LastWeekOfMonth.isAnchored.html
|
Enhancing performance
|
Enhancing performance
In this part of the tutorial, we will investigate how to speed up certain
functions operating on pandas DataFrame using three different techniques:
Cython, Numba and pandas.eval(). We will see a speed improvement of ~200x
when we use Cython and Numba on a test function operating row-wise on the
DataFrame. Using pandas.eval() we will speed up a sum by a factor of ~2.
Note
In addition to following the steps in this tutorial, users interested in enhancing
performance are highly encouraged to install the
recommended dependencies for pandas.
These dependencies are often not installed by default, but will offer speed
improvements if present.
For many use cases writing pandas in pure Python and NumPy is sufficient. In some
computationally heavy applications however, it can be possible to achieve sizable
speed-ups by offloading work to cython.
This tutorial assumes you have refactored as much as possible in Python, for example
by trying to remove for-loops and making use of NumPy vectorization. It’s always worth
optimising in Python first.
|
In this part of the tutorial, we will investigate how to speed up certain
functions operating on pandas DataFrame using three different techniques:
Cython, Numba and pandas.eval(). We will see a speed improvement of ~200x
when we use Cython and Numba on a test function operating row-wise on the
DataFrame. Using pandas.eval() we will speed up a sum by a factor of ~2.
Note
In addition to following the steps in this tutorial, users interested in enhancing
performance are highly encouraged to install the
recommended dependencies for pandas.
These dependencies are often not installed by default, but will offer speed
improvements if present.
Cython (writing C extensions for pandas)#
For many use cases writing pandas in pure Python and NumPy is sufficient. In some
computationally heavy applications however, it can be possible to achieve sizable
speed-ups by offloading work to cython.
This tutorial assumes you have refactored as much as possible in Python, for example
by trying to remove for-loops and making use of NumPy vectorization. It’s always worth
optimising in Python first.
This tutorial walks through a “typical” process of cythonizing a slow computation.
We use an example from the Cython documentation
but in the context of pandas. Our final cythonized solution is around 100 times
faster than the pure Python solution.
Pure Python#
We have a DataFrame to which we want to apply a function row-wise.
In [1]: df = pd.DataFrame(
...: {
...: "a": np.random.randn(1000),
...: "b": np.random.randn(1000),
...: "N": np.random.randint(100, 1000, (1000)),
...: "x": "x",
...: }
...: )
...:
In [2]: df
Out[2]:
a b N x
0 0.469112 -0.218470 585 x
1 -0.282863 -0.061645 841 x
2 -1.509059 -0.723780 251 x
3 -1.135632 0.551225 972 x
4 1.212112 -0.497767 181 x
.. ... ... ... ..
995 -1.512743 0.874737 374 x
996 0.933753 1.120790 246 x
997 -0.308013 0.198768 157 x
998 -0.079915 1.757555 977 x
999 -1.010589 -1.115680 770 x
[1000 rows x 4 columns]
Here’s the function in pure Python:
In [3]: def f(x):
...: return x * (x - 1)
...:
In [4]: def integrate_f(a, b, N):
...: s = 0
...: dx = (b - a) / N
...: for i in range(N):
...: s += f(a + i * dx)
...: return s * dx
...:
We achieve our result by using DataFrame.apply() (row-wise):
In [5]: %timeit df.apply(lambda x: integrate_f(x["a"], x["b"], x["N"]), axis=1)
86 ms +- 1.44 ms per loop (mean +- std. dev. of 7 runs, 10 loops each)
But clearly this isn’t fast enough for us. Let’s take a look and see where the
time is spent during this operation (limited to the most time consuming
four calls) using the prun ipython magic function:
In [6]: %prun -l 4 df.apply(lambda x: integrate_f(x["a"], x["b"], x["N"]), axis=1) # noqa E999
621327 function calls (621307 primitive calls) in 0.168 seconds
Ordered by: internal time
List reduced from 225 to 4 due to restriction <4>
ncalls tottime percall cumtime percall filename:lineno(function)
1000 0.093 0.000 0.143 0.000 <ipython-input-4-c2a74e076cf0>:1(integrate_f)
552423 0.050 0.000 0.050 0.000 <ipython-input-3-c138bdd570e3>:1(f)
3000 0.004 0.000 0.018 0.000 series.py:966(__getitem__)
3000 0.002 0.000 0.009 0.000 series.py:1072(_get_value)
By far the majority of time is spent inside either integrate_f or f,
hence we’ll concentrate our efforts on cythonizing these two functions.
Plain Cython#
First we’re going to need to import the Cython magic function to IPython:
In [7]: %load_ext Cython
Now, let’s simply copy our functions over to Cython as is (the _plain suffix
is here to distinguish between function versions):
In [8]: %%cython
...: def f_plain(x):
...: return x * (x - 1)
...: def integrate_f_plain(a, b, N):
...: s = 0
...: dx = (b - a) / N
...: for i in range(N):
...: s += f_plain(a + i * dx)
...: return s * dx
...:
Note
If you’re having trouble pasting the above into your ipython, you may need
to be using bleeding edge IPython for paste to play well with cell magics.
In [9]: %timeit df.apply(lambda x: integrate_f_plain(x["a"], x["b"], x["N"]), axis=1)
50.9 ms +- 160 us per loop (mean +- std. dev. of 7 runs, 10 loops each)
Already this has shaved a third off, not too bad for a simple copy and paste.
Adding type#
We get another huge improvement simply by providing type information:
In [10]: %%cython
....: cdef double f_typed(double x) except? -2:
....: return x * (x - 1)
....: cpdef double integrate_f_typed(double a, double b, int N):
....: cdef int i
....: cdef double s, dx
....: s = 0
....: dx = (b - a) / N
....: for i in range(N):
....: s += f_typed(a + i * dx)
....: return s * dx
....:
In [11]: %timeit df.apply(lambda x: integrate_f_typed(x["a"], x["b"], x["N"]), axis=1)
9.47 ms +- 279 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
Now, we’re talking! It’s now over ten times faster than the original Python
implementation, and we haven’t really modified the code. Let’s have another
look at what’s eating up time:
In [12]: %prun -l 4 df.apply(lambda x: integrate_f_typed(x["a"], x["b"], x["N"]), axis=1)
68904 function calls (68884 primitive calls) in 0.026 seconds
Ordered by: internal time
List reduced from 224 to 4 due to restriction <4>
ncalls tottime percall cumtime percall filename:lineno(function)
3000 0.004 0.000 0.018 0.000 series.py:966(__getitem__)
3000 0.002 0.000 0.009 0.000 series.py:1072(_get_value)
16174 0.002 0.000 0.003 0.000 {built-in method builtins.isinstance}
3000 0.002 0.000 0.003 0.000 base.py:3754(get_loc)
Using ndarray#
It’s calling series a lot! It’s creating a Series from each row, and calling get from both
the index and the series (three times for each row). Function calls are expensive
in Python, so maybe we could minimize these by cythonizing the apply part.
Note
We are now passing ndarrays into the Cython function; fortunately Cython plays
very nicely with NumPy.
In [13]: %%cython
....: cimport numpy as np
....: import numpy as np
....: cdef double f_typed(double x) except? -2:
....: return x * (x - 1)
....: cpdef double integrate_f_typed(double a, double b, int N):
....: cdef int i
....: cdef double s, dx
....: s = 0
....: dx = (b - a) / N
....: for i in range(N):
....: s += f_typed(a + i * dx)
....: return s * dx
....: cpdef np.ndarray[double] apply_integrate_f(np.ndarray col_a, np.ndarray col_b,
....: np.ndarray col_N):
....: assert (col_a.dtype == np.float_
....: and col_b.dtype == np.float_ and col_N.dtype == np.int_)
....: cdef Py_ssize_t i, n = len(col_N)
....: assert (len(col_a) == len(col_b) == n)
....: cdef np.ndarray[double] res = np.empty(n)
....: for i in range(len(col_a)):
....: res[i] = integrate_f_typed(col_a[i], col_b[i], col_N[i])
....: return res
....:
The implementation is simple: it creates an empty result array and loops over
the rows, applying our integrate_f_typed and storing each result in that array.
Warning
You can not pass a Series directly as a ndarray typed parameter
to a Cython function. Instead pass the actual ndarray using the
Series.to_numpy(). The reason is that the Cython
definition is specific to an ndarray and not the passed Series.
So, do not do this:
apply_integrate_f(df["a"], df["b"], df["N"])
But rather, use Series.to_numpy() to get the underlying ndarray:
apply_integrate_f(df["a"].to_numpy(), df["b"].to_numpy(), df["N"].to_numpy())
Note
Loops like this would be extremely slow in Python, but in Cython looping
over NumPy arrays is fast.
In [14]: %timeit apply_integrate_f(df["a"].to_numpy(), df["b"].to_numpy(), df["N"].to_numpy())
854 us +- 2.62 us per loop (mean +- std. dev. of 7 runs, 1,000 loops each)
We’ve gotten another big improvement. Let’s check again where the time is spent:
In [15]: %prun -l 4 apply_integrate_f(df["a"].to_numpy(), df["b"].to_numpy(), df["N"].to_numpy())
85 function calls in 0.001 seconds
Ordered by: internal time
List reduced from 24 to 4 due to restriction <4>
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.001 0.001 0.001 0.001 {built-in method _cython_magic_6991e1e67eedbb03acaf53f278f60013.apply_integrate_f}
1 0.000 0.000 0.001 0.001 {built-in method builtins.exec}
3 0.000 0.000 0.000 0.000 frame.py:3758(__getitem__)
3 0.000 0.000 0.000 0.000 base.py:5254(__contains__)
As one might expect, the majority of the time is now spent in apply_integrate_f,
so if we wanted any further gains we would have to keep concentrating our
efforts here.
More advanced techniques#
There is still hope for improvement. Here’s an example of using some more
advanced Cython techniques:
In [16]: %%cython
....: cimport cython
....: cimport numpy as np
....: import numpy as np
....: cdef np.float64_t f_typed(np.float64_t x) except? -2:
....: return x * (x - 1)
....: cpdef np.float64_t integrate_f_typed(np.float64_t a, np.float64_t b, np.int64_t N):
....: cdef np.int64_t i
....: cdef np.float64_t s = 0.0, dx
....: dx = (b - a) / N
....: for i in range(N):
....: s += f_typed(a + i * dx)
....: return s * dx
....: @cython.boundscheck(False)
....: @cython.wraparound(False)
....: cpdef np.ndarray[np.float64_t] apply_integrate_f_wrap(
....: np.ndarray[np.float64_t] col_a,
....: np.ndarray[np.float64_t] col_b,
....: np.ndarray[np.int64_t] col_N
....: ):
....: cdef np.int64_t i, n = len(col_N)
....: assert len(col_a) == len(col_b) == n
....: cdef np.ndarray[np.float64_t] res = np.empty(n, dtype=np.float64)
....: for i in range(n):
....: res[i] = integrate_f_typed(col_a[i], col_b[i], col_N[i])
....: return res
....:
In [17]: %timeit apply_integrate_f_wrap(df["a"].to_numpy(), df["b"].to_numpy(), df["N"].to_numpy())
723 us +- 2.91 us per loop (mean +- std. dev. of 7 runs, 1,000 loops each)
Even faster, with the caveat that a bug in our Cython code (an off-by-one error,
for example) might cause a segfault because memory access isn’t checked.
For more about boundscheck and wraparound, see the Cython docs on
compiler directives.
Numba (JIT compilation)#
An alternative to statically compiling Cython code is to use a dynamic just-in-time (JIT) compiler with Numba.
Numba allows you to write a pure Python function which can be JIT compiled to native machine instructions, similar in performance to C, C++ and Fortran,
by decorating your function with @jit.
Numba works by generating optimized machine code using the LLVM compiler infrastructure at import time, runtime, or statically (using the included pycc tool).
Numba supports compilation of Python to run on either CPU or GPU hardware and is designed to integrate with the Python scientific software stack.
Note
The @jit compilation will add overhead to the runtime of the function, so performance benefits may not be realized especially when using small data sets.
Consider caching your function to avoid compilation overhead each time your function is run.
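One way to do this is Numba's own on-disk cache (a sketch; cache=True asks Numba to persist the compiled machine code between sessions so the function is not recompiled on every run):
import numba

@numba.jit(nopython=True, cache=True)  # compiled result is written to disk and reused
def summed(x):
    total = 0.0
    for value in x:  # typed loop over a NumPy array, compiled to machine code
        total += value
    return total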
Numba can be used in 2 ways with pandas:
Specify the engine="numba" keyword in select pandas methods
Define your own Python function decorated with @jit and pass the underlying NumPy array of Series or DataFrame (using to_numpy()) into the function
pandas Numba Engine#
If Numba is installed, one can specify engine="numba" in select pandas methods to execute the method using Numba.
Methods that support engine="numba" will also have an engine_kwargs keyword that accepts a dictionary that allows one to specify
"nogil", "nopython" and "parallel" keys with boolean values to pass into the @jit decorator.
If engine_kwargs is not specified, it defaults to {"nogil": False, "nopython": True, "parallel": False} unless otherwise specified.
In terms of performance, the first time a function is run using the Numba engine will be slow
as Numba will have some function compilation overhead. However, the JIT compiled functions are cached,
and subsequent calls will be fast. In general, the Numba engine is performant with
a larger amount of data points (e.g. 1+ million).
In [1]: data = pd.Series(range(1_000_000)) # noqa: E225
In [2]: roll = data.rolling(10)
In [3]: def f(x):
...: return np.sum(x) + 5
# Run the first time, compilation time will affect performance
In [4]: %timeit -r 1 -n 1 roll.apply(f, engine='numba', raw=True)
1.23 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
# Function is cached and performance will improve
In [5]: %timeit roll.apply(f, engine='numba', raw=True)
188 ms ± 1.93 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [6]: %timeit roll.apply(f, engine='cython', raw=True)
3.92 s ± 59 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
If your compute hardware contains multiple CPUs, the largest performance gain can be realized by setting parallel to True
to leverage more than 1 CPU. Internally, pandas leverages numba to parallelize computations over the columns of a DataFrame;
therefore, this is only beneficial for a DataFrame with a large number of columns.
In [1]: import numba
In [2]: numba.set_num_threads(1)
In [3]: df = pd.DataFrame(np.random.randn(10_000, 100))
In [4]: roll = df.rolling(100)
In [5]: %timeit roll.mean(engine="numba", engine_kwargs={"parallel": True})
347 ms ± 26 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [6]: numba.set_num_threads(2)
In [7]: %timeit roll.mean(engine="numba", engine_kwargs={"parallel": True})
201 ms ± 2.97 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Custom Function Examples#
A custom Python function decorated with @jit can be used with pandas objects by passing their NumPy array
representations with to_numpy().
import numba
@numba.jit
def f_plain(x):
return x * (x - 1)
@numba.jit
def integrate_f_numba(a, b, N):
s = 0
dx = (b - a) / N
for i in range(N):
s += f_plain(a + i * dx)
return s * dx
@numba.jit
def apply_integrate_f_numba(col_a, col_b, col_N):
n = len(col_N)
result = np.empty(n, dtype="float64")
assert len(col_a) == len(col_b) == n
for i in range(n):
result[i] = integrate_f_numba(col_a[i], col_b[i], col_N[i])
return result
def compute_numba(df):
result = apply_integrate_f_numba(
df["a"].to_numpy(), df["b"].to_numpy(), df["N"].to_numpy()
)
return pd.Series(result, index=df.index, name="result")
In [4]: %timeit compute_numba(df)
1000 loops, best of 3: 798 us per loop
In this example, using Numba was faster than Cython.
Numba can also be used to write vectorized functions that do not require the user to explicitly
loop over the observations of a vector; a vectorized function will be applied to each row automatically.
Consider the following example of doubling each observation:
import numba
def double_every_value_nonumba(x):
return x * 2
@numba.vectorize
def double_every_value_withnumba(x): # noqa E501
return x * 2
# Custom function without numba
In [5]: %timeit df["col1_doubled"] = df["a"].apply(double_every_value_nonumba) # noqa E501
1000 loops, best of 3: 797 us per loop
# Standard implementation (faster than a custom function)
In [6]: %timeit df["col1_doubled"] = df["a"] * 2
1000 loops, best of 3: 233 us per loop
# Custom function with numba
In [7]: %timeit df["col1_doubled"] = double_every_value_withnumba(df["a"].to_numpy())
1000 loops, best of 3: 145 us per loop
Caveats#
Numba is best at accelerating functions that apply numerical functions to NumPy
arrays. If you try to @jit a function that contains unsupported Python
or NumPy
code, compilation will revert to object mode, which
will most likely not speed up your function. If you would
prefer that Numba throw an error if it cannot compile a function in a way that
speeds up your code, pass Numba the argument
nopython=True (e.g. @jit(nopython=True)). For more on
troubleshooting Numba modes, see the Numba troubleshooting page.
Using parallel=True (e.g. @jit(parallel=True)) may result in a SIGABRT if the threading layer leads to unsafe
behavior. You can first specify a safe threading layer
before running a JIT function with parallel=True.
Generally, if you encounter a segfault (SIGSEGV) while using Numba, please report the issue
to the Numba issue tracker.
Expression evaluation via eval()#
The top-level function pandas.eval() implements expression evaluation of
Series and DataFrame objects.
Note
To benefit from using eval() you need to
install numexpr. See the recommended dependencies section for more details.
The point of using eval() for expression evaluation rather than
plain Python is two-fold: 1) large DataFrame objects are
evaluated more efficiently and 2) large arithmetic and boolean expressions are
evaluated all at once by the underlying engine (by default numexpr is used
for evaluation).
Note
You should not use eval() for simple
expressions or for expressions involving small DataFrames. In fact,
eval() is many orders of magnitude slower for
smaller expressions/objects than plain ol’ Python. A good rule of thumb is
to only use eval() when you have a
DataFrame with more than 10,000 rows.
eval() supports all arithmetic expressions supported by the
engine in addition to some extensions available only in pandas.
Note
The larger the frame and the larger the expression the more speedup you will
see from using eval().
Supported syntax#
These operations are supported by pandas.eval():
Arithmetic operations except for the left shift (<<) and right shift
(>>) operators, e.g., df + 2 * pi / s ** 4 % 42 - the_golden_ratio
Comparison operations, including chained comparisons, e.g., 2 < df < df2
Boolean operations, e.g., df < df2 and df3 < df4 or not df_bool
list and tuple literals, e.g., [1, 2] or (1, 2)
Attribute access, e.g., df.a
Subscript expressions, e.g., df[0]
Simple variable evaluation, e.g., pd.eval("df") (this is not very useful)
Math functions: sin, cos, exp, log, expm1, log1p,
sqrt, sinh, cosh, tanh, arcsin, arccos, arctan, arccosh,
arcsinh, arctanh, abs, arctan2 and log10.
This Python syntax is not allowed:
Expressions
Function calls other than math functions.
is/is not operations
if expressions
lambda expressions
list/set/dict comprehensions
Literal dict and set expressions
yield expressions
Generator expressions
Boolean expressions consisting of only scalar values
Statements
Neither simple
nor compound
statements are allowed. This includes things like for, while, and
if.
eval() examples#
pandas.eval() works well with expressions containing large arrays.
First let’s create a few decent-sized arrays to play with:
In [18]: nrows, ncols = 20000, 100
In [19]: df1, df2, df3, df4 = [pd.DataFrame(np.random.randn(nrows, ncols)) for _ in range(4)]
Now let’s compare adding them together using plain ol’ Python versus
eval():
In [20]: %timeit df1 + df2 + df3 + df4
18.3 ms +- 251 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
In [21]: %timeit pd.eval("df1 + df2 + df3 + df4")
9.56 ms +- 588 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
Now let’s do the same thing but with comparisons:
In [22]: %timeit (df1 > 0) & (df2 > 0) & (df3 > 0) & (df4 > 0)
15.9 ms +- 225 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
In [23]: %timeit pd.eval("(df1 > 0) & (df2 > 0) & (df3 > 0) & (df4 > 0)")
27.9 ms +- 2.34 ms per loop (mean +- std. dev. of 7 runs, 10 loops each)
eval() also works with unaligned pandas objects:
In [24]: s = pd.Series(np.random.randn(50))
In [25]: %timeit df1 + df2 + df3 + df4 + s
30.1 ms +- 949 us per loop (mean +- std. dev. of 7 runs, 10 loops each)
In [26]: %timeit pd.eval("df1 + df2 + df3 + df4 + s")
12.4 ms +- 270 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
Note
Operations such as
1 and 2 # would parse to 1 & 2, but should evaluate to 2
3 or 4 # would parse to 3 | 4, but should evaluate to 3
~1 # this is okay, but slower when using eval
should be performed in Python. An exception will be raised if you try to
perform any boolean/bitwise operations with scalar operands that are not
of type bool or np.bool_. Again, you should perform these kinds of
operations in plain Python.
The DataFrame.eval() method#
In addition to the top level pandas.eval() function you can also
evaluate an expression in the “context” of a DataFrame.
In [27]: df = pd.DataFrame(np.random.randn(5, 2), columns=["a", "b"])
In [28]: df.eval("a + b")
Out[28]:
0 -0.246747
1 0.867786
2 -1.626063
3 -1.134978
4 -1.027798
dtype: float64
Any expression that is a valid pandas.eval() expression is also a valid
DataFrame.eval() expression, with the added benefit that you don’t have to
prefix the name of the DataFrame to the column(s) you’re
interested in evaluating.
In addition, you can perform assignment of columns within an expression.
This allows for formulaic evaluation. The assignment target can be a
new column name or an existing column name, and it must be a valid Python
identifier.
The inplace keyword determines whether this assignment will be performed
on the original DataFrame or return a copy with the new column.
In [29]: df = pd.DataFrame(dict(a=range(5), b=range(5, 10)))
In [30]: df.eval("c = a + b", inplace=True)
In [31]: df.eval("d = a + b + c", inplace=True)
In [32]: df.eval("a = 1", inplace=True)
In [33]: df
Out[33]:
a b c d
0 1 5 5 10
1 1 6 7 14
2 1 7 9 18
3 1 8 11 22
4 1 9 13 26
When inplace is set to False, the default, a copy of the DataFrame with the
new or modified columns is returned and the original frame is unchanged.
In [34]: df
Out[34]:
a b c d
0 1 5 5 10
1 1 6 7 14
2 1 7 9 18
3 1 8 11 22
4 1 9 13 26
In [35]: df.eval("e = a - c", inplace=False)
Out[35]:
a b c d e
0 1 5 5 10 -4
1 1 6 7 14 -6
2 1 7 9 18 -8
3 1 8 11 22 -10
4 1 9 13 26 -12
In [36]: df
Out[36]:
a b c d
0 1 5 5 10
1 1 6 7 14
2 1 7 9 18
3 1 8 11 22
4 1 9 13 26
As a convenience, multiple assignments can be performed by using a
multi-line string.
In [37]: df.eval(
....: """
....: c = a + b
....: d = a + b + c
....: a = 1""",
....: inplace=False,
....: )
....:
Out[37]:
a b c d
0 1 5 6 12
1 1 6 7 14
2 1 7 8 16
3 1 8 9 18
4 1 9 10 20
The equivalent in standard Python would be
In [38]: df = pd.DataFrame(dict(a=range(5), b=range(5, 10)))
In [39]: df["c"] = df["a"] + df["b"]
In [40]: df["d"] = df["a"] + df["b"] + df["c"]
In [41]: df["a"] = 1
In [42]: df
Out[42]:
a b c d
0 1 5 5 10
1 1 6 7 14
2 1 7 9 18
3 1 8 11 22
4 1 9 13 26
The DataFrame.query method has an inplace keyword which determines
whether the query modifies the original frame.
In [43]: df = pd.DataFrame(dict(a=range(5), b=range(5, 10)))
In [44]: df.query("a > 2")
Out[44]:
a b
3 3 8
4 4 9
In [45]: df.query("a > 2", inplace=True)
In [46]: df
Out[46]:
a b
3 3 8
4 4 9
Local variables#
You must explicitly reference any local variable that you want to use in an
expression by placing the @ character in front of the name. For example,
In [47]: df = pd.DataFrame(np.random.randn(5, 2), columns=list("ab"))
In [48]: newcol = np.random.randn(len(df))
In [49]: df.eval("b + @newcol")
Out[49]:
0 -0.173926
1 2.493083
2 -0.881831
3 -0.691045
4 1.334703
dtype: float64
In [50]: df.query("b < @newcol")
Out[50]:
a b
0 0.863987 -0.115998
2 -2.621419 -1.297879
If you don’t prefix the local variable with @, pandas will raise an
exception telling you the variable is undefined.
When using DataFrame.eval() and DataFrame.query(), this allows you
to have a local variable and a DataFrame column with the same
name in an expression.
In [51]: a = np.random.randn()
In [52]: df.query("@a < a")
Out[52]:
a b
0 0.863987 -0.115998
In [53]: df.loc[a < df["a"]] # same as the previous expression
Out[53]:
a b
0 0.863987 -0.115998
With pandas.eval() you cannot use the @ prefix at all, because it
isn’t defined in that context. pandas will let you know this if you try to
use @ in a top-level call to pandas.eval(). For example,
In [54]: a, b = 1, 2
In [55]: pd.eval("@a + b")
Traceback (most recent call last):
File ~/micromamba/envs/test/lib/python3.8/site-packages/IPython/core/interactiveshell.py:3442 in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
Cell In[55], line 1
pd.eval("@a + b")
File ~/work/pandas/pandas/pandas/core/computation/eval.py:342 in eval
_check_for_locals(expr, level, parser)
File ~/work/pandas/pandas/pandas/core/computation/eval.py:167 in _check_for_locals
raise SyntaxError(msg)
File <string>
SyntaxError: The '@' prefix is not allowed in top-level eval calls.
please refer to your variables by name without the '@' prefix.
In this case, you should simply refer to the variables like you would in
standard Python.
In [56]: pd.eval("a + b")
Out[56]: 3
pandas.eval() parsers#
There are two different parsers and two different engines you can use as
the backend.
The default 'pandas' parser allows a more intuitive syntax for expressing
query-like operations (comparisons, conjunctions and disjunctions). In
particular, the precedence of the & and | operators is made equal to
the precedence of the corresponding boolean operations and and or.
For example, the above conjunction can be written without parentheses.
Alternatively, you can use the 'python' parser to enforce strict Python
semantics.
In [57]: expr = "(df1 > 0) & (df2 > 0) & (df3 > 0) & (df4 > 0)"
In [58]: x = pd.eval(expr, parser="python")
In [59]: expr_no_parens = "df1 > 0 & df2 > 0 & df3 > 0 & df4 > 0"
In [60]: y = pd.eval(expr_no_parens, parser="pandas")
In [61]: np.all(x == y)
Out[61]: True
The same expression can be “anded” together with the word and as
well:
In [62]: expr = "(df1 > 0) & (df2 > 0) & (df3 > 0) & (df4 > 0)"
In [63]: x = pd.eval(expr, parser="python")
In [64]: expr_with_ands = "df1 > 0 and df2 > 0 and df3 > 0 and df4 > 0"
In [65]: y = pd.eval(expr_with_ands, parser="pandas")
In [66]: np.all(x == y)
Out[66]: True
The and and or operators here have the same precedence that they would
in vanilla Python.
pandas.eval() backends#
There’s also the option to make eval() operate identically to plain
ol’ Python.
Note
Using the 'python' engine is generally not useful, except for testing
other evaluation engines against it. You will achieve no performance
benefits using eval() with engine='python' and in fact may
incur a performance hit.
You can see this by using pandas.eval() with the 'python' engine. It
is a bit slower (not by much) than evaluating the same expression in Python
In [67]: %timeit df1 + df2 + df3 + df4
17.9 ms +- 228 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
In [68]: %timeit pd.eval("df1 + df2 + df3 + df4", engine="python")
19 ms +- 375 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
pandas.eval() performance#
eval() is intended to speed up certain kinds of operations. In
particular, those operations involving complex expressions with large
DataFrame/Series objects should see a
significant performance benefit. Here is a plot showing the running time of
pandas.eval() as a function of the size of the frame involved in the
computation. The two lines are two different engines.
Note
Operations with smallish objects (around 15k-20k rows) are faster using
plain Python:
This plot was created using a DataFrame with 3 columns each containing
floating point values generated using numpy.random.randn().
Technical minutiae regarding expression evaluation#
Expressions that would result in an object dtype or involve datetime operations
(because of NaT) must be evaluated in Python space. The main reason for
this behavior is to maintain backwards compatibility with versions of NumPy <
1.7. In those versions of NumPy a call to ndarray.astype(str) will
truncate any strings that are more than 60 characters in length. Second, we
can’t pass object arrays to numexpr, so string comparisons must be
evaluated in Python space.
The upshot is that this only applies to object-dtype expressions. So, if
you have an expression–for example
In [69]: df = pd.DataFrame(
....: {"strings": np.repeat(list("cba"), 3), "nums": np.repeat(range(3), 3)}
....: )
....:
In [70]: df
Out[70]:
strings nums
0 c 0
1 c 0
2 c 0
3 b 1
4 b 1
5 b 1
6 a 2
7 a 2
8 a 2
In [71]: df.query("strings == 'a' and nums == 1")
Out[71]:
Empty DataFrame
Columns: [strings, nums]
Index: []
the numeric part of the comparison (nums == 1) will be evaluated by
numexpr.
In general, DataFrame.query()/pandas.eval() will
evaluate the subexpressions that can be evaluated by numexpr and those
that must be evaluated in Python space transparently to the user. This is done
by inferring the result type of an expression from its arguments and operators.
|
user_guide/enhancingperf.html
|
pandas.tseries.offsets.FY5253.base
|
`pandas.tseries.offsets.FY5253.base`
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
|
FY5253.base#
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
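A small illustrative sketch (not from the original page): whatever n the offset was built with, the base copy always has n=1.

```
>>> offset = pd.offsets.FY5253(n=3, startingMonth=2, weekday=4)
>>> offset.n, offset.base.n
(3, 1)
```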
|
reference/api/pandas.tseries.offsets.FY5253.base.html
|
pandas.TimedeltaIndex.to_frame
|
`pandas.TimedeltaIndex.to_frame`
Create a DataFrame with a column containing the Index.
```
>>> idx = pd.Index(['Ant', 'Bear', 'Cow'], name='animal')
>>> idx.to_frame()
animal
animal
Ant Ant
Bear Bear
Cow Cow
```
|
TimedeltaIndex.to_frame(index=True, name=_NoDefault.no_default)[source]#
Create a DataFrame with a column containing the Index.
Parameters
indexbool, default TrueSet the index of the returned DataFrame as the original Index.
nameobject, default NoneThe passed name should substitute for the index name (if it has
one).
Returns
DataFrameDataFrame containing the original Index data.
See also
Index.to_seriesConvert an Index to a Series.
Series.to_frameConvert Series to DataFrame.
Examples
>>> idx = pd.Index(['Ant', 'Bear', 'Cow'], name='animal')
>>> idx.to_frame()
animal
animal
Ant Ant
Bear Bear
Cow Cow
By default, the original Index is reused. To enforce a new Index:
>>> idx.to_frame(index=False)
animal
0 Ant
1 Bear
2 Cow
To override the name of the resulting column, specify name:
>>> idx.to_frame(index=False, name='zoo')
zoo
0 Ant
1 Bear
2 Cow
|
reference/api/pandas.TimedeltaIndex.to_frame.html
|
pandas.core.window.rolling.Rolling.sem
|
`pandas.core.window.rolling.Rolling.sem`
Calculate the rolling standard error of mean.
```
>>> s = pd.Series([0, 1, 2, 3])
>>> s.rolling(2, min_periods=1).sem()
0 NaN
1 0.707107
2 0.707107
3 0.707107
dtype: float64
```
|
Rolling.sem(ddof=1, numeric_only=False, *args, **kwargs)[source]#
Calculate the rolling standard error of mean.
Parameters
ddofint, default 1Delta Degrees of Freedom. The divisor used in calculations
is N - ddof, where N represents the number of elements.
numeric_onlybool, default FalseInclude only float, int, boolean columns.
New in version 1.5.0.
*argsFor NumPy compatibility and will not have an effect on the result.
Deprecated since version 1.5.0.
**kwargsFor NumPy compatibility and will not have an effect on the result.
Deprecated since version 1.5.0.
Returns
Series or DataFrameReturn type is the same as the original object with np.float64 dtype.
See also
pandas.Series.rollingCalling rolling with Series data.
pandas.DataFrame.rollingCalling rolling with DataFrames.
pandas.Series.semAggregating sem for Series.
pandas.DataFrame.semAggregating sem for DataFrame.
Notes
A minimum of one period is required for the calculation.
Examples
>>> s = pd.Series([0, 1, 2, 3])
>>> s.rolling(2, min_periods=1).sem()
0 NaN
1 0.707107
2 0.707107
3 0.707107
dtype: float64
|
reference/api/pandas.core.window.rolling.Rolling.sem.html
|
pandas.DatetimeTZDtype
|
`pandas.DatetimeTZDtype`
An ExtensionDtype for timezone-aware datetime data.
```
>>> pd.DatetimeTZDtype(tz='UTC')
datetime64[ns, UTC]
```
|
class pandas.DatetimeTZDtype(unit='ns', tz=None)[source]#
An ExtensionDtype for timezone-aware datetime data.
This is not an actual numpy dtype, but a duck type.
Parameters
unitstr, default “ns”The precision of the datetime data. Currently limited
to "ns".
tzstr, int, or datetime.tzinfoThe timezone.
Raises
pytz.UnknownTimeZoneErrorWhen the requested timezone cannot be found.
Examples
>>> pd.DatetimeTZDtype(tz='UTC')
datetime64[ns, UTC]
>>> pd.DatetimeTZDtype(tz='dateutil/US/Central')
datetime64[ns, tzfile('/usr/share/zoneinfo/US/Central')]
Attributes
unit
The precision of the datetime data.
tz
The timezone.
Methods
None
|
reference/api/pandas.DatetimeTZDtype.html
|
pandas.tseries.offsets.CustomBusinessMonthBegin.is_anchored
|
`pandas.tseries.offsets.CustomBusinessMonthBegin.is_anchored`
Return boolean whether the frequency is a unit frequency (n=1).
```
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
```
|
CustomBusinessMonthBegin.is_anchored()#
Return boolean whether the frequency is a unit frequency (n=1).
Examples
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthBegin.is_anchored.html
|
pandas.Index.get_loc
|
`pandas.Index.get_loc`
Get integer location, slice or boolean mask for requested label.
```
>>> unique_index = pd.Index(list('abc'))
>>> unique_index.get_loc('b')
1
```
|
Index.get_loc(key, method=None, tolerance=None)[source]#
Get integer location, slice or boolean mask for requested label.
Parameters
keylabel
method{None, ‘pad’/’ffill’, ‘backfill’/’bfill’, ‘nearest’}, optional
default: exact matches only.
pad / ffill: find the PREVIOUS index value if no exact match.
backfill / bfill: use NEXT index value if no exact match
nearest: use the NEAREST index value if no exact match. Tied
distances are broken by preferring the larger index value.
Deprecated since version 1.4: Use index.get_indexer([item], method=…) instead.
toleranceint or float, optionalMaximum distance from index value for inexact matches. The value of
the index at the matching location must satisfy the equation
abs(index[loc] - key) <= tolerance.
Returns
locint if unique index, slice if monotonic index, else mask
Examples
>>> unique_index = pd.Index(list('abc'))
>>> unique_index.get_loc('b')
1
>>> monotonic_index = pd.Index(list('abbc'))
>>> monotonic_index.get_loc('b')
slice(1, 3, None)
>>> non_monotonic_index = pd.Index(list('abcb'))
>>> non_monotonic_index.get_loc('b')
array([False, True, False, True])
|
reference/api/pandas.Index.get_loc.html
|
pandas.tseries.offsets.BQuarterEnd.apply
|
pandas.tseries.offsets.BQuarterEnd.apply
|
BQuarterEnd.apply()#
|
reference/api/pandas.tseries.offsets.BQuarterEnd.apply.html
|
pandas.DataFrame.abs
|
`pandas.DataFrame.abs`
Return a Series/DataFrame with absolute numeric value of each element.
This function only applies to elements that are all numeric.
```
>>> s = pd.Series([-1.10, 2, -3.33, 4])
>>> s.abs()
0 1.10
1 2.00
2 3.33
3 4.00
dtype: float64
```
|
DataFrame.abs()[source]#
Return a Series/DataFrame with absolute numeric value of each element.
This function only applies to elements that are all numeric.
Returns
absSeries/DataFrame containing the absolute value of each element.
See also
numpy.absoluteCalculate the absolute value element-wise.
Notes
For complex inputs, 1.2 + 1j, the absolute value is
\(\sqrt{ a^2 + b^2 }\).
Examples
Absolute numeric values in a Series.
>>> s = pd.Series([-1.10, 2, -3.33, 4])
>>> s.abs()
0 1.10
1 2.00
2 3.33
3 4.00
dtype: float64
Absolute numeric values in a Series with complex numbers.
>>> s = pd.Series([1.2 + 1j])
>>> s.abs()
0 1.56205
dtype: float64
Absolute numeric values in a Series with a Timedelta element.
>>> s = pd.Series([pd.Timedelta('1 days')])
>>> s.abs()
0 1 days
dtype: timedelta64[ns]
Select rows with data closest to certain value using argsort (from
StackOverflow).
>>> df = pd.DataFrame({
... 'a': [4, 5, 6, 7],
... 'b': [10, 20, 30, 40],
... 'c': [100, 50, -30, -50]
... })
>>> df
a b c
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
>>> df.loc[(df.c - 43).abs().argsort()]
a b c
1 5 20 50
0 4 10 100
2 6 30 -30
3 7 40 -50
|
reference/api/pandas.DataFrame.abs.html
|
pandas.Series.str.wrap
|
`pandas.Series.str.wrap`
Wrap strings in Series/Index at specified line width.
This method has the same keyword parameters and defaults as
textwrap.TextWrapper.
```
>>> s = pd.Series(['line to be wrapped', 'another line to be wrapped'])
>>> s.str.wrap(12)
0 line to be\nwrapped
1 another line\nto be\nwrapped
dtype: object
```
|
Series.str.wrap(width, **kwargs)[source]#
Wrap strings in Series/Index at specified line width.
This method has the same keyword parameters and defaults as
textwrap.TextWrapper.
Parameters
widthintMaximum line width.
expand_tabsbool, optionalIf True, tab characters will be expanded to spaces (default: True).
replace_whitespacebool, optionalIf True, each whitespace character (as defined by string.whitespace)
remaining after tab expansion will be replaced by a single space
(default: True).
drop_whitespacebool, optionalIf True, whitespace that, after wrapping, happens to end up at the
beginning or end of a line is dropped (default: True).
break_long_wordsbool, optionalIf True, then words longer than width will be broken in order to ensure
that no lines are longer than width. If it is false, long words will
not be broken, and some lines may be longer than width (default: True).
break_on_hyphensbool, optionalIf True, wrapping will occur preferably on whitespace and right after
hyphens in compound words, as it is customary in English. If false,
only whitespaces will be considered as potentially good places for line
breaks, but you need to set break_long_words to false if you want truly
insecable words (default: True).
Returns
Series or Index
Notes
Internally, this method uses a textwrap.TextWrapper instance with
default settings. To achieve behavior matching R’s stringr library str_wrap
function, use the arguments:
expand_tabs = False
replace_whitespace = True
drop_whitespace = True
break_long_words = False
break_on_hyphens = False
Examples
>>> s = pd.Series(['line to be wrapped', 'another line to be wrapped'])
>>> s.str.wrap(12)
0 line to be\nwrapped
1 another line\nto be\nwrapped
dtype: object
|
reference/api/pandas.Series.str.wrap.html
|
pandas.tseries.offsets.Micro.normalize
|
pandas.tseries.offsets.Micro.normalize
|
Micro.normalize#
|
reference/api/pandas.tseries.offsets.Micro.normalize.html
|
pandas.Timedelta
|
`pandas.Timedelta`
Represents a duration, the difference between two dates or times.
```
>>> td = pd.Timedelta(1, "d")
>>> td
Timedelta('1 days 00:00:00')
```
|
class pandas.Timedelta(value=<object object>, unit=None, **kwargs)#
Represents a duration, the difference between two dates or times.
Timedelta is the pandas equivalent of python’s datetime.timedelta
and is interchangeable with it in most cases.
Parameters
valueTimedelta, timedelta, np.timedelta64, str, or int
unitstr, default ‘ns’Denote the unit of the input, if input is an integer.
Possible values:
‘W’, ‘D’, ‘T’, ‘S’, ‘L’, ‘U’, or ‘N’
‘days’ or ‘day’
‘hours’, ‘hour’, ‘hr’, or ‘h’
‘minutes’, ‘minute’, ‘min’, or ‘m’
‘seconds’, ‘second’, or ‘sec’
‘milliseconds’, ‘millisecond’, ‘millis’, or ‘milli’
‘microseconds’, ‘microsecond’, ‘micros’, or ‘micro’
‘nanoseconds’, ‘nanosecond’, ‘nanos’, ‘nano’, or ‘ns’.
**kwargsAvailable kwargs: {days, seconds, microseconds,
milliseconds, minutes, hours, weeks}.
Values for construction in compat with datetime.timedelta.
Numpy ints and floats will be coerced to python ints and floats.
Notes
The constructor may take either a value together with a unit, or
kwargs as above. Either one of them must be used during initialization.
The .value attribute is always in ns.
If the precision is higher than nanoseconds, the precision of the duration is
truncated to nanoseconds.
Examples
Here we initialize Timedelta object with both value and unit
>>> td = pd.Timedelta(1, "d")
>>> td
Timedelta('1 days 00:00:00')
Here we initialize the Timedelta object with kwargs
>>> td2 = pd.Timedelta(days=1)
>>> td2
Timedelta('1 days 00:00:00')
We see that either way we get the same result
Attributes
asm8
Return a numpy timedelta64 array scalar view.
components
Return a components namedtuple-like.
days
delta
(DEPRECATED) Return the timedelta in nanoseconds (ns), for internal compatibility.
freq
(DEPRECATED) Freq property.
is_populated
(DEPRECATED) Is_populated property.
microseconds
nanoseconds
Return the number of nanoseconds (n), where 0 <= n < 1 microsecond.
resolution_string
Return a string representing the lowest timedelta resolution.
seconds
value
Methods
ceil(freq)
Return a new Timedelta ceiled to this resolution.
floor(freq)
Return a new Timedelta floored to this resolution.
isoformat
Format the Timedelta as ISO 8601 Duration.
round(freq)
Round the Timedelta to the specified resolution.
to_numpy
Convert the Timedelta to a NumPy timedelta64.
to_pytimedelta
Convert a pandas Timedelta object into a python datetime.timedelta object.
to_timedelta64
Return a numpy.timedelta64 object with 'ns' precision.
total_seconds
Total seconds in the duration.
view
Array view compatibility.
|
reference/api/pandas.Timedelta.html
|
pandas.Series.dt.to_pydatetime
|
`pandas.Series.dt.to_pydatetime`
Return the data as an array of datetime.datetime objects.
```
>>> s = pd.Series(pd.date_range('20180310', periods=2))
>>> s
0 2018-03-10
1 2018-03-11
dtype: datetime64[ns]
```
|
Series.dt.to_pydatetime()[source]#
Return the data as an array of datetime.datetime objects.
Timezone information is retained if present.
Warning
Python’s datetime uses microsecond resolution, which is lower than
pandas (nanosecond). The values are truncated.
Returns
numpy.ndarrayObject dtype array containing native Python datetime objects.
See also
datetime.datetimeStandard library value for a datetime.
Examples
>>> s = pd.Series(pd.date_range('20180310', periods=2))
>>> s
0 2018-03-10
1 2018-03-11
dtype: datetime64[ns]
>>> s.dt.to_pydatetime()
array([datetime.datetime(2018, 3, 10, 0, 0),
datetime.datetime(2018, 3, 11, 0, 0)], dtype=object)
pandas’ nanosecond precision is truncated to microseconds.
>>> s = pd.Series(pd.date_range('20180310', periods=2, freq='ns'))
>>> s
0 2018-03-10 00:00:00.000000000
1 2018-03-10 00:00:00.000000001
dtype: datetime64[ns]
>>> s.dt.to_pydatetime()
array([datetime.datetime(2018, 3, 10, 0, 0),
datetime.datetime(2018, 3, 10, 0, 0)], dtype=object)
|
reference/api/pandas.Series.dt.to_pydatetime.html
|
pandas.Series.droplevel
|
`pandas.Series.droplevel`
Return Series/DataFrame with requested index / column level(s) removed.
If a string is given, must be the name of a level
If list-like, elements must be names or positional indexes
of levels.
```
>>> df = pd.DataFrame([
... [1, 2, 3, 4],
... [5, 6, 7, 8],
... [9, 10, 11, 12]
... ]).set_index([0, 1]).rename_axis(['a', 'b'])
```
|
Series.droplevel(level, axis=0)[source]#
Return Series/DataFrame with requested index / column level(s) removed.
Parameters
levelint, str, or list-likeIf a string is given, must be the name of a level
If list-like, elements must be names or positional indexes
of levels.
axis{0 or ‘index’, 1 or ‘columns’}, default 0Axis along which the level(s) is removed:
0 or ‘index’: remove level(s) in column.
1 or ‘columns’: remove level(s) in row.
For Series this parameter is unused and defaults to 0.
Returns
Series/DataFrameSeries/DataFrame with requested index / column level(s) removed.
Examples
>>> df = pd.DataFrame([
... [1, 2, 3, 4],
... [5, 6, 7, 8],
... [9, 10, 11, 12]
... ]).set_index([0, 1]).rename_axis(['a', 'b'])
>>> df.columns = pd.MultiIndex.from_tuples([
... ('c', 'e'), ('d', 'f')
... ], names=['level_1', 'level_2'])
>>> df
level_1 c d
level_2 e f
a b
1 2 3 4
5 6 7 8
9 10 11 12
>>> df.droplevel('a')
level_1 c d
level_2 e f
b
2 3 4
6 7 8
10 11 12
>>> df.droplevel('level_2', axis=1)
level_1 c d
a b
1 2 3 4
5 6 7 8
9 10 11 12
|
reference/api/pandas.Series.droplevel.html
|
pandas.tseries.offsets.BYearBegin.is_anchored
|
`pandas.tseries.offsets.BYearBegin.is_anchored`
Return boolean whether the frequency is a unit frequency (n=1).
```
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
```
|
BYearBegin.is_anchored()#
Return boolean whether the frequency is a unit frequency (n=1).
Examples
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
|
reference/api/pandas.tseries.offsets.BYearBegin.is_anchored.html
|
pandas.core.resample.Resampler.min
|
`pandas.core.resample.Resampler.min`
Compute min of group values.
|
Resampler.min(numeric_only=_NoDefault.no_default, min_count=0, *args, **kwargs)[source]#
Compute min of group values.
Parameters
numeric_onlybool, default FalseInclude only float, int, boolean columns. If None, will attempt to use
everything, then use only numeric data.
min_countint, default 0The required number of valid values to perform the operation. If fewer
than min_count non-NA values are present the result will be NA.
Returns
Series or DataFrameComputed min of values within each group.
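A minimal usage sketch (not part of the original reference entry), taking the per-bin minimum when downsampling a daily series to two-day bins:

```
>>> s = pd.Series([3, 1, 4, 2],
... index=pd.date_range("2023-01-01", periods=4, freq="D"))
>>> s.resample("2D").min()
2023-01-01 1
2023-01-03 2
Freq: 2D, dtype: int64
```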
|
reference/api/pandas.core.resample.Resampler.min.html
|
pandas.tseries.offsets.Micro.kwds
|
`pandas.tseries.offsets.Micro.kwds`
Return a dict of extra parameters for the offset.
```
>>> pd.DateOffset(5).kwds
{}
```
|
Micro.kwds#
Return a dict of extra parameters for the offset.
Examples
>>> pd.DateOffset(5).kwds
{}
>>> pd.offsets.FY5253Quarter().kwds
{'weekday': 0,
'startingMonth': 1,
'qtr_with_extra_week': 1,
'variation': 'nearest'}
|
reference/api/pandas.tseries.offsets.Micro.kwds.html
|
pandas.tseries.offsets.DateOffset.apply
|
pandas.tseries.offsets.DateOffset.apply
|
DateOffset.apply()#
|
reference/api/pandas.tseries.offsets.DateOffset.apply.html
|
pandas.tseries.offsets.Milli.isAnchored
|
pandas.tseries.offsets.Milli.isAnchored
|
Milli.isAnchored()#
|
reference/api/pandas.tseries.offsets.Milli.isAnchored.html
|
pandas.Series.dt.weekofyear
|
`pandas.Series.dt.weekofyear`
The week ordinal of the year according to the ISO 8601 standard.
Deprecated since version 1.1.0.
|
Series.dt.weekofyear[source]#
The week ordinal of the year according to the ISO 8601 standard.
Deprecated since version 1.1.0.
Series.dt.weekofyear and Series.dt.week have been deprecated. Please
call Series.dt.isocalendar() and access the week column
instead.
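A hedged sketch of the recommended replacement (dates chosen only for illustration):

```
>>> s = pd.Series(pd.date_range("2020-01-01", periods=3, freq="W"))
>>> s.dt.isocalendar().week
0 1
1 2
2 3
Name: week, dtype: UInt32
```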
|
reference/api/pandas.Series.dt.weekofyear.html
|
pandas.tseries.offsets.BusinessMonthBegin.name
|
`pandas.tseries.offsets.BusinessMonthBegin.name`
Return a string representing the base frequency.
```
>>> pd.offsets.Hour().name
'H'
```
|
BusinessMonthBegin.name#
Return a string representing the base frequency.
Examples
>>> pd.offsets.Hour().name
'H'
>>> pd.offsets.Hour(5).name
'H'
|
reference/api/pandas.tseries.offsets.BusinessMonthBegin.name.html
|
pandas.Series.ge
|
`pandas.Series.ge`
Return Greater than or equal to of series and other, element-wise (binary operator ge).
```
>>> a = pd.Series([1, 1, 1, np.nan, 1], index=['a', 'b', 'c', 'd', 'e'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
e 1.0
dtype: float64
>>> b = pd.Series([0, 1, 2, np.nan, 1], index=['a', 'b', 'c', 'd', 'f'])
>>> b
a 0.0
b 1.0
c 2.0
d NaN
f 1.0
dtype: float64
>>> a.ge(b, fill_value=0)
a True
b True
c False
d False
e True
f False
dtype: bool
```
|
Series.ge(other, level=None, fill_value=None, axis=0)[source]#
Return Greater than or equal to of series and other, element-wise (binary operator ge).
Equivalent to series >= other, but with support to substitute a fill_value for
missing data in either one of the inputs.
Parameters
otherSeries or scalar value
levelint or nameBroadcast across a level, matching Index values on the
passed MultiIndex level.
fill_valueNone or float value, default None (NaN)Fill existing missing (NaN) values, and any new element needed for
successful Series alignment, with this value before computation.
If data in both corresponding Series locations is missing
the result of filling (at that location) will be missing.
axis{0 or ‘index’}Unused. Parameter needed for compatibility with DataFrame.
Returns
SeriesThe result of the operation.
Examples
>>> a = pd.Series([1, 1, 1, np.nan, 1], index=['a', 'b', 'c', 'd', 'e'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
e 1.0
dtype: float64
>>> b = pd.Series([0, 1, 2, np.nan, 1], index=['a', 'b', 'c', 'd', 'f'])
>>> b
a 0.0
b 1.0
c 2.0
d NaN
f 1.0
dtype: float64
>>> a.ge(b, fill_value=0)
a True
b True
c False
d False
e True
f False
dtype: bool
|
reference/api/pandas.Series.ge.html
|
pandas.tseries.offsets.CustomBusinessDay.n
|
pandas.tseries.offsets.CustomBusinessDay.n
|
CustomBusinessDay.n#
|
reference/api/pandas.tseries.offsets.CustomBusinessDay.n.html
|
pandas.tseries.offsets.WeekOfMonth.week
|
pandas.tseries.offsets.WeekOfMonth.week
|
WeekOfMonth.week#
|
reference/api/pandas.tseries.offsets.WeekOfMonth.week.html
|
pandas.tseries.offsets.YearBegin.is_anchored
|
`pandas.tseries.offsets.YearBegin.is_anchored`
Return boolean whether the frequency is a unit frequency (n=1).
```
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
```
|
YearBegin.is_anchored()#
Return boolean whether the frequency is a unit frequency (n=1).
Examples
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
|
reference/api/pandas.tseries.offsets.YearBegin.is_anchored.html
|
pandas.DataFrame.all
|
`pandas.DataFrame.all`
Return whether all elements are True, potentially over an axis.
Returns True unless there is at least one element within a series or
along a Dataframe axis that is False or equivalent (e.g. zero or
empty).
```
>>> pd.Series([True, True]).all()
True
>>> pd.Series([True, False]).all()
False
>>> pd.Series([], dtype="float64").all()
True
>>> pd.Series([np.nan]).all()
True
>>> pd.Series([np.nan]).all(skipna=False)
True
```
|
DataFrame.all(axis=0, bool_only=None, skipna=True, level=None, **kwargs)[source]#
Return whether all elements are True, potentially over an axis.
Returns True unless there is at least one element within a series or
along a Dataframe axis that is False or equivalent (e.g. zero or
empty).
Parameters
axis{0 or ‘index’, 1 or ‘columns’, None}, default 0Indicate which axis or axes should be reduced. For Series this parameter
is unused and defaults to 0.
0 / ‘index’ : reduce the index, return a Series whose index is the
original column labels.
1 / ‘columns’ : reduce the columns, return a Series whose index is the
original index.
None : reduce all axes, return a scalar.
bool_onlybool, default NoneInclude only boolean columns. If None, will attempt to use everything,
then use only boolean data. Not implemented for Series.
skipnabool, default TrueExclude NA/null values. If the entire row/column is NA and skipna is
True, then the result will be True, as for an empty row/column.
If skipna is False, then NA are treated as True, because these are not
equal to zero.
levelint or level name, default NoneIf the axis is a MultiIndex (hierarchical), count along a
particular level, collapsing into a Series.
Deprecated since version 1.3.0: The level keyword is deprecated. Use groupby instead.
**kwargsany, default NoneAdditional keywords have no effect but might be accepted for
compatibility with NumPy.
Returns
Series or DataFrameIf level is specified, then, DataFrame is returned; otherwise, Series
is returned.
See also
Series.allReturn True if all elements are True.
DataFrame.anyReturn True if one (or more) elements are True.
Examples
Series
>>> pd.Series([True, True]).all()
True
>>> pd.Series([True, False]).all()
False
>>> pd.Series([], dtype="float64").all()
True
>>> pd.Series([np.nan]).all()
True
>>> pd.Series([np.nan]).all(skipna=False)
True
DataFrames
Create a dataframe from a dictionary.
>>> df = pd.DataFrame({'col1': [True, True], 'col2': [True, False]})
>>> df
col1 col2
0 True True
1 True False
Default behaviour checks if values in each column all return True.
>>> df.all()
col1 True
col2 False
dtype: bool
Specify axis='columns' to check if values in each row all return True.
>>> df.all(axis='columns')
0 True
1 False
dtype: bool
Or axis=None for whether every value is True.
>>> df.all(axis=None)
False
|
reference/api/pandas.DataFrame.all.html
|
pandas.DataFrame.eq
|
`pandas.DataFrame.eq`
Get Equal to of dataframe and other, element-wise (binary operator eq).
```
>>> df = pd.DataFrame({'cost': [250, 150, 100],
... 'revenue': [100, 250, 300]},
... index=['A', 'B', 'C'])
>>> df
cost revenue
A 250 100
B 150 250
C 100 300
```
|
DataFrame.eq(other, axis='columns', level=None)[source]#
Get Equal to of dataframe and other, element-wise (binary operator eq).
Among flexible wrappers (eq, ne, le, lt, ge, gt) to comparison
operators.
Equivalent to ==, !=, <=, <, >=, > with support to choose axis
(rows or columns) and level for comparison.
Parameters
otherscalar, sequence, Series, or DataFrameAny single or multiple element data structure, or list-like object.
axis{0 or ‘index’, 1 or ‘columns’}, default ‘columns’Whether to compare by the index (0 or ‘index’) or columns
(1 or ‘columns’).
levelint or labelBroadcast across a level, matching Index values on the passed
MultiIndex level.
Returns
DataFrame of boolResult of the comparison.
See also
DataFrame.eqCompare DataFrames for equality elementwise.
DataFrame.neCompare DataFrames for inequality elementwise.
DataFrame.leCompare DataFrames for less than inequality or equality elementwise.
DataFrame.ltCompare DataFrames for strictly less than inequality elementwise.
DataFrame.geCompare DataFrames for greater than inequality or equality elementwise.
DataFrame.gtCompare DataFrames for strictly greater than inequality elementwise.
Notes
Mismatched indices will be unioned together.
NaN values are considered different (i.e. NaN != NaN).
Examples
>>> df = pd.DataFrame({'cost': [250, 150, 100],
... 'revenue': [100, 250, 300]},
... index=['A', 'B', 'C'])
>>> df
cost revenue
A 250 100
B 150 250
C 100 300
Comparison with a scalar, using either the operator or method:
>>> df == 100
cost revenue
A False True
B False False
C True False
>>> df.eq(100)
cost revenue
A False True
B False False
C True False
When other is a Series, the columns of a DataFrame are aligned
with the index of other and broadcast:
>>> df != pd.Series([100, 250], index=["cost", "revenue"])
cost revenue
A True True
B True False
C False True
Use the method to control the broadcast axis:
>>> df.ne(pd.Series([100, 300], index=["A", "D"]), axis='index')
cost revenue
A True False
B True True
C True True
D True True
When comparing to an arbitrary sequence, the number of columns must
match the number of elements in other:
>>> df == [250, 100]
cost revenue
A True True
B False False
C False False
Use the method to control the axis:
>>> df.eq([250, 250, 100], axis='index')
cost revenue
A True False
B False True
C True False
Compare to a DataFrame of different shape.
>>> other = pd.DataFrame({'revenue': [300, 250, 100, 150]},
... index=['A', 'B', 'C', 'D'])
>>> other
revenue
A 300
B 250
C 100
D 150
>>> df.gt(other)
cost revenue
A False False
B False False
C False True
D False False
Compare to a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'cost': [250, 150, 100, 150, 300, 220],
... 'revenue': [100, 250, 300, 200, 175, 225]},
... index=[['Q1', 'Q1', 'Q1', 'Q2', 'Q2', 'Q2'],
... ['A', 'B', 'C', 'A', 'B', 'C']])
>>> df_multindex
cost revenue
Q1 A 250 100
B 150 250
C 100 300
Q2 A 150 200
B 300 175
C 220 225
>>> df.le(df_multindex, level=1)
cost revenue
Q1 A True True
B True True
C True True
Q2 A False True
B True False
C True False
|
reference/api/pandas.DataFrame.eq.html
|
pandas.Series.nlargest
|
`pandas.Series.nlargest`
Return the largest n elements.
```
>>> countries_population = {"Italy": 59000000, "France": 65000000,
... "Malta": 434000, "Maldives": 434000,
... "Brunei": 434000, "Iceland": 337000,
... "Nauru": 11300, "Tuvalu": 11300,
... "Anguilla": 11300, "Montserrat": 5200}
>>> s = pd.Series(countries_population)
>>> s
Italy 59000000
France 65000000
Malta 434000
Maldives 434000
Brunei 434000
Iceland 337000
Nauru 11300
Tuvalu 11300
Anguilla 11300
Montserrat 5200
dtype: int64
```
|
Series.nlargest(n=5, keep='first')[source]#
Return the largest n elements.
Parameters
nint, default 5Return this many descending sorted values.
keep{‘first’, ‘last’, ‘all’}, default ‘first’When there are duplicate values that cannot all fit in a
Series of n elements:
first : return the first n occurrences in order
of appearance.
last : return the last n occurrences in reverse
order of appearance.
all : keep all occurrences. This can result in a Series of
size larger than n.
Returns
SeriesThe n largest values in the Series, sorted in decreasing order.
See also
Series.nsmallestGet the n smallest elements.
Series.sort_valuesSort Series by values.
Series.headReturn the first n rows.
Notes
Faster than .sort_values(ascending=False).head(n) for small n
relative to the size of the Series object.
Examples
>>> countries_population = {"Italy": 59000000, "France": 65000000,
... "Malta": 434000, "Maldives": 434000,
... "Brunei": 434000, "Iceland": 337000,
... "Nauru": 11300, "Tuvalu": 11300,
... "Anguilla": 11300, "Montserrat": 5200}
>>> s = pd.Series(countries_population)
>>> s
Italy 59000000
France 65000000
Malta 434000
Maldives 434000
Brunei 434000
Iceland 337000
Nauru 11300
Tuvalu 11300
Anguilla 11300
Montserrat 5200
dtype: int64
The n largest elements where n=5 by default.
>>> s.nlargest()
France 65000000
Italy 59000000
Malta 434000
Maldives 434000
Brunei 434000
dtype: int64
The n largest elements where n=3. Default keep value is ‘first’
so Malta will be kept.
>>> s.nlargest(3)
France 65000000
Italy 59000000
Malta 434000
dtype: int64
The n largest elements where n=3 and keeping the last duplicates.
Brunei will be kept since it is the last with value 434000 based on
the index order.
>>> s.nlargest(3, keep='last')
France 65000000
Italy 59000000
Brunei 434000
dtype: int64
The n largest elements where n=3 with all duplicates kept. Note
that the returned Series has five elements due to the three duplicates.
>>> s.nlargest(3, keep='all')
France 65000000
Italy 59000000
Malta 434000
Maldives 434000
Brunei 434000
dtype: int64
|
reference/api/pandas.Series.nlargest.html
|
pandas.Index.inferred_type
|
`pandas.Index.inferred_type`
Return a string of the type inferred from the values.
|
Index.inferred_type[source]#
Return a string of the type inferred from the values.
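A couple of illustrative calls (a sketch, not from the original page):

```
>>> pd.Index([1, 2, 3]).inferred_type
'integer'
>>> pd.Index(["a", "b"]).inferred_type
'string'
```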
|
reference/api/pandas.Index.inferred_type.html
|
pandas.DataFrame.plot.line
|
`pandas.DataFrame.plot.line`
Plot Series or DataFrame as lines.
This function is useful to plot lines using DataFrame’s values
as coordinates.
```
>>> s = pd.Series([1, 3, 2])
>>> s.plot.line()
<AxesSubplot: ylabel='Density'>
```
|
DataFrame.plot.line(x=None, y=None, **kwargs)[source]#
Plot Series or DataFrame as lines.
This function is useful to plot lines using DataFrame’s values
as coordinates.
Parameters
xlabel or position, optionalAllows plotting of one column versus another. If not specified,
the index of the DataFrame is used.
ylabel or position, optionalAllows plotting of one column versus another. If not specified,
all numerical columns are used.
colorstr, array-like, or dict, optionalThe color for each of the DataFrame’s columns. Possible values are:
A single color string referred to by name, RGB or RGBA code, for instance ‘red’ or ‘#a98d19’.
A sequence of color strings referred to by name, RGB or RGBA code, which will be used for each
column recursively. For instance [‘green’,’yellow’], each column’s line will be filled in
green or yellow, alternately. If there is only a single column to
be plotted, then only the first color from the color list will be
used.
A dict of the form {column name: color}, so that each column will be colored accordingly. For example, if your columns are called a and
b, then passing {‘a’: ‘green’, ‘b’: ‘red’} will color lines for
column a in green and lines for column b in red.
New in version 1.1.0.
**kwargsAdditional keyword arguments are documented in
DataFrame.plot().
Returns
matplotlib.axes.Axes or np.ndarray of themAn ndarray is returned with one matplotlib.axes.Axes
per column when subplots=True.
See also
matplotlib.pyplot.plotPlot y versus x as lines and/or markers.
Examples
>>> s = pd.Series([1, 3, 2])
>>> s.plot.line()
<AxesSubplot: ylabel='Density'>
The following example shows the populations for some animals
over the years.
>>> df = pd.DataFrame({
... 'pig': [20, 18, 489, 675, 1776],
... 'horse': [4, 25, 281, 600, 1900]
... }, index=[1990, 1997, 2003, 2009, 2014])
>>> lines = df.plot.line()
An example with subplots, so an array of axes is returned.
>>> axes = df.plot.line(subplots=True)
>>> type(axes)
<class 'numpy.ndarray'>
Let’s repeat the same example, but specifying colors for
each column (in this case, for each animal).
>>> axes = df.plot.line(
... subplots=True, color={"pig": "pink", "horse": "#742802"}
... )
The following example shows the relationship between both
populations.
>>> lines = df.plot.line(x='pig', y='horse')
|
reference/api/pandas.DataFrame.plot.line.html
|
pandas.tseries.offsets.Milli.base
|
`pandas.tseries.offsets.Milli.base`
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
|
Milli.base#
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
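A brief illustrative sketch (not part of the original entry): the base of a 10-millisecond offset is the unit 1-millisecond offset.

```
>>> pd.offsets.Milli(10).base
<Milli>
```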
|
reference/api/pandas.tseries.offsets.Milli.base.html
|
pandas.core.window.rolling.Rolling.cov
|
`pandas.core.window.rolling.Rolling.cov`
Calculate the rolling sample covariance.
If not supplied then will default to self and produce pairwise
output.
|
Rolling.cov(other=None, pairwise=None, ddof=1, numeric_only=False, **kwargs)[source]#
Calculate the rolling sample covariance.
Parameters
otherSeries or DataFrame, optionalIf not supplied then will default to self and produce pairwise
output.
pairwisebool, default NoneIf False then only matching columns between self and other will be
used and the output will be a DataFrame.
If True then all pairwise combinations will be calculated and the
output will be a MultiIndexed DataFrame in the case of DataFrame
inputs. In the case of missing elements, only complete pairwise
observations will be used.
ddofint, default 1Delta Degrees of Freedom. The divisor used in calculations
is N - ddof, where N represents the number of elements.
numeric_onlybool, default FalseInclude only float, int, boolean columns.
New in version 1.5.0.
**kwargsFor NumPy compatibility and will not have an effect on the result.
Deprecated since version 1.5.0.
Returns
Series or DataFrameReturn type is the same as the original object with np.float64 dtype.
See also
pandas.Series.rollingCalling rolling with Series data.
pandas.DataFrame.rollingCalling rolling with DataFrames.
pandas.Series.covAggregating cov for Series.
pandas.DataFrame.covAggregating cov for DataFrame.
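A short illustrative sketch (values chosen for easy hand-checking) of the rolling covariance between two aligned Series:

```
>>> ser1 = pd.Series([1, 2, 3, 4])
>>> ser2 = pd.Series([1, 4, 5, 8])
>>> ser1.rolling(2).cov(ser2)
0 NaN
1 1.5
2 0.5
3 1.5
dtype: float64
```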
|
reference/api/pandas.core.window.rolling.Rolling.cov.html
|
pandas.tseries.offsets.BusinessHour.is_month_start
|
`pandas.tseries.offsets.BusinessHour.is_month_start`
Return boolean whether a timestamp occurs on the month start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
```
|
BusinessHour.is_month_start()#
Return boolean whether a timestamp occurs on the month start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
|
reference/api/pandas.tseries.offsets.BusinessHour.is_month_start.html
|
pandas.tseries.offsets.Milli.rollforward
|
`pandas.tseries.offsets.Milli.rollforward`
Roll provided date forward to next offset only if not on offset.
Rolled timestamp if not on offset, otherwise unchanged timestamp.
|
Milli.rollforward()#
Roll provided date forward to next offset only if not on offset.
Returns
TimeStampRolled timestamp if not on offset, otherwise unchanged timestamp.
|
reference/api/pandas.tseries.offsets.Milli.rollforward.html
|
pandas.tseries.offsets.CustomBusinessMonthBegin.onOffset
|
pandas.tseries.offsets.CustomBusinessMonthBegin.onOffset
|
CustomBusinessMonthBegin.onOffset()#
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthBegin.onOffset.html
|
pandas.api.extensions.ExtensionArray._values_for_factorize
|
`pandas.api.extensions.ExtensionArray._values_for_factorize`
Return an array and missing value suitable for factorization.
An array suitable for factorization. This should maintain order
and be a supported dtype (Float64, Int64, UInt64, String, Object).
By default, the extension array is cast to object dtype.
|
ExtensionArray._values_for_factorize()[source]#
Return an array and missing value suitable for factorization.
Returns
valuesndarrayAn array suitable for factorization. This should maintain order
and be a supported dtype (Float64, Int64, UInt64, String, Object).
By default, the extension array is cast to object dtype.
na_valueobjectThe value in values to consider missing. This will be treated
as NA in the factorization routines, so it will be coded as
na_sentinel and not included in uniques. By default,
np.nan is used.
Notes
The values returned by this method are also used in
pandas.util.hash_pandas_object().
|
reference/api/pandas.api.extensions.ExtensionArray._values_for_factorize.html
|
pandas.tseries.offsets.MonthBegin.apply_index
|
`pandas.tseries.offsets.MonthBegin.apply_index`
Vectorized apply of DateOffset to DatetimeIndex.
|
MonthBegin.apply_index()#
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead.
Parameters
indexDatetimeIndex
Returns
DatetimeIndex
Raises
NotImplementedErrorWhen the specific offset subclass does not have a vectorized
implementation.
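A sketch of the non-deprecated equivalent (dates are illustrative only): add the offset to a DatetimeIndex directly.

```
>>> dtindex = pd.DatetimeIndex(["2022-01-15", "2022-02-15"])
>>> pd.offsets.MonthBegin() + dtindex
DatetimeIndex(['2022-02-01', '2022-03-01'], dtype='datetime64[ns]', freq=None)
```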
|
reference/api/pandas.tseries.offsets.MonthBegin.apply_index.html
|
pandas.tseries.offsets.YearEnd.n
|
pandas.tseries.offsets.YearEnd.n
|
YearEnd.n#
|
reference/api/pandas.tseries.offsets.YearEnd.n.html
|
Window
|
Window
Rolling objects are returned by .rolling calls: pandas.DataFrame.rolling(), pandas.Series.rolling(), etc.
Expanding objects are returned by .expanding calls: pandas.DataFrame.expanding(), pandas.Series.expanding(), etc.
ExponentialMovingWindow objects are returned by .ewm calls: pandas.DataFrame.ewm(), pandas.Series.ewm(), etc.
Rolling.count([numeric_only])
Calculate the rolling count of non NaN observations.
Rolling.sum([numeric_only, engine, ...])
Calculate the rolling sum.
|
Rolling objects are returned by .rolling calls: pandas.DataFrame.rolling(), pandas.Series.rolling(), etc.
Expanding objects are returned by .expanding calls: pandas.DataFrame.expanding(), pandas.Series.expanding(), etc.
ExponentialMovingWindow objects are returned by .ewm calls: pandas.DataFrame.ewm(), pandas.Series.ewm(), etc.
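A minimal sketch (data invented for illustration) of obtaining a Rolling object and applying one of the window functions listed below:

```
>>> df = pd.DataFrame({"x": [1, 2, 3, 4]})
>>> df.rolling(window=2).sum()
     x
0  NaN
1  3.0
2  5.0
3  7.0
```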
Rolling window functions#
Rolling.count([numeric_only])
Calculate the rolling count of non NaN observations.
Rolling.sum([numeric_only, engine, ...])
Calculate the rolling sum.
Rolling.mean([numeric_only, engine, ...])
Calculate the rolling mean.
Rolling.median([numeric_only, engine, ...])
Calculate the rolling median.
Rolling.var([ddof, numeric_only, engine, ...])
Calculate the rolling variance.
Rolling.std([ddof, numeric_only, engine, ...])
Calculate the rolling standard deviation.
Rolling.min([numeric_only, engine, ...])
Calculate the rolling minimum.
Rolling.max([numeric_only, engine, ...])
Calculate the rolling maximum.
Rolling.corr([other, pairwise, ddof, ...])
Calculate the rolling correlation.
Rolling.cov([other, pairwise, ddof, ...])
Calculate the rolling sample covariance.
Rolling.skew([numeric_only])
Calculate the rolling unbiased skewness.
Rolling.kurt([numeric_only])
Calculate the rolling Fisher's definition of kurtosis without bias.
Rolling.apply(func[, raw, engine, ...])
Calculate the rolling custom aggregation function.
Rolling.aggregate(func, *args, **kwargs)
Aggregate using one or more operations over the specified axis.
Rolling.quantile(quantile[, interpolation, ...])
Calculate the rolling quantile.
Rolling.sem([ddof, numeric_only])
Calculate the rolling standard error of mean.
Rolling.rank([method, ascending, pct, ...])
Calculate the rolling rank.
Weighted window functions#
Window.mean([numeric_only])
Calculate the rolling weighted window mean.
Window.sum([numeric_only])
Calculate the rolling weighted window sum.
Window.var([ddof, numeric_only])
Calculate the rolling weighted window variance.
Window.std([ddof, numeric_only])
Calculate the rolling weighted window standard deviation.
Expanding window functions#
Expanding.count([numeric_only])
Calculate the expanding count of non NaN observations.
Expanding.sum([numeric_only, engine, ...])
Calculate the expanding sum.
Expanding.mean([numeric_only, engine, ...])
Calculate the expanding mean.
Expanding.median([numeric_only, engine, ...])
Calculate the expanding median.
Expanding.var([ddof, numeric_only, engine, ...])
Calculate the expanding variance.
Expanding.std([ddof, numeric_only, engine, ...])
Calculate the expanding standard deviation.
Expanding.min([numeric_only, engine, ...])
Calculate the expanding minimum.
Expanding.max([numeric_only, engine, ...])
Calculate the expanding maximum.
Expanding.corr([other, pairwise, ddof, ...])
Calculate the expanding correlation.
Expanding.cov([other, pairwise, ddof, ...])
Calculate the expanding sample covariance.
Expanding.skew([numeric_only])
Calculate the expanding unbiased skewness.
Expanding.kurt([numeric_only])
Calculate the expanding Fisher's definition of kurtosis without bias.
Expanding.apply(func[, raw, engine, ...])
Calculate the expanding custom aggregation function.
Expanding.aggregate(func, *args, **kwargs)
Aggregate using one or more operations over the specified axis.
Expanding.quantile(quantile[, ...])
Calculate the expanding quantile.
Expanding.sem([ddof, numeric_only])
Calculate the expanding standard error of mean.
Expanding.rank([method, ascending, pct, ...])
Calculate the expanding rank.
Exponentially-weighted window functions#
ExponentialMovingWindow.mean([numeric_only, ...])
Calculate the ewm (exponential weighted moment) mean.
ExponentialMovingWindow.sum([numeric_only, ...])
Calculate the ewm (exponential weighted moment) sum.
ExponentialMovingWindow.std([bias, numeric_only])
Calculate the ewm (exponential weighted moment) standard deviation.
ExponentialMovingWindow.var([bias, numeric_only])
Calculate the ewm (exponential weighted moment) variance.
ExponentialMovingWindow.corr([other, ...])
Calculate the ewm (exponential weighted moment) sample correlation.
ExponentialMovingWindow.cov([other, ...])
Calculate the ewm (exponential weighted moment) sample covariance.
Window indexer#
Base class for defining custom window boundaries.
api.indexers.BaseIndexer([index_array, ...])
Base class for window bounds calculations.
api.indexers.FixedForwardWindowIndexer([...])
Creates window boundaries for fixed-length windows that include the current row.
api.indexers.VariableOffsetWindowIndexer([...])
Calculate window boundaries based on a non-fixed offset such as a BusinessDay.
|
reference/window.html
|
pandas.tseries.offsets.Nano.name
|
`pandas.tseries.offsets.Nano.name`
Return a string representing the base frequency.
Examples
```
>>> pd.offsets.Hour().name
'H'
```
|
Nano.name#
Return a string representing the base frequency.
Examples
>>> pd.offsets.Hour().name
'H'
>>> pd.offsets.Hour(5).name
'H'
|
reference/api/pandas.tseries.offsets.Nano.name.html
|
pandas.Series.where
|
`pandas.Series.where`
Replace values where the condition is False.
```
>>> s = pd.Series(range(5))
>>> s.where(s > 0)
0 NaN
1 1.0
2 2.0
3 3.0
4 4.0
dtype: float64
>>> s.mask(s > 0)
0 0.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
```
|
Series.where(cond, other=_NoDefault.no_default, *, inplace=False, axis=None, level=None, errors=_NoDefault.no_default, try_cast=_NoDefault.no_default)[source]#
Replace values where the condition is False.
Parameters
condbool Series/DataFrame, array-like, or callableWhere cond is True, keep the original value. Where
False, replace with corresponding value from other.
If cond is callable, it is computed on the Series/DataFrame and
should return boolean Series/DataFrame or array. The callable must
not change input Series/DataFrame (though pandas doesn’t check it).
otherscalar, Series/DataFrame, or callableEntries where cond is False are replaced with
corresponding value from other.
If other is callable, it is computed on the Series/DataFrame and
should return scalar or Series/DataFrame. The callable must not
change input Series/DataFrame (though pandas doesn’t check it).
inplacebool, default FalseWhether to perform the operation in place on the data.
axisint, default NoneAlignment axis if needed. For Series this parameter is
unused and defaults to 0.
levelint, default NoneAlignment level if needed.
errorsstr, {‘raise’, ‘ignore’}, default ‘raise’Note that currently this parameter won’t affect
the results and will always coerce to a suitable dtype.
‘raise’ : allow exceptions to be raised.
‘ignore’ : suppress exceptions. On error return original object.
Deprecated since version 1.5.0: This argument had no effect.
try_castbool, default NoneTry to cast the result back to the input type (if possible).
Deprecated since version 1.3.0: Manually cast back if necessary.
Returns
Same type as caller or None if inplace=True.
See also
DataFrame.mask()Return an object of same shape as self.
Notes
The where method is an application of the if-then idiom. For each
element in the calling DataFrame, if cond is True the
element is used; otherwise the corresponding element from the DataFrame
other is used. If the axis of other does not align with axis of
cond Series/DataFrame, the misaligned index positions will be filled with
False.
The signature for DataFrame.where() differs from
numpy.where(). Roughly df1.where(m, df2) is equivalent to
np.where(m, df1, df2).
For further details and examples see the where documentation in
indexing.
The dtype of the object takes precedence. The fill value is cast to
the object’s dtype, if this can be done losslessly.
Examples
>>> s = pd.Series(range(5))
>>> s.where(s > 0)
0 NaN
1 1.0
2 2.0
3 3.0
4 4.0
dtype: float64
>>> s.mask(s > 0)
0 0.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
>>> s = pd.Series(range(5))
>>> t = pd.Series([True, False])
>>> s.where(t, 99)
0 0
1 99
2 99
3 99
4 99
dtype: int64
>>> s.mask(t, 99)
0 99
1 1
2 99
3 99
4 99
dtype: int64
>>> s.where(s > 1, 10)
0 10
1 10
2 2
3 3
4 4
dtype: int64
>>> s.mask(s > 1, 10)
0 0
1 1
2 10
3 10
4 10
dtype: int64
>>> df = pd.DataFrame(np.arange(10).reshape(-1, 2), columns=['A', 'B'])
>>> df
A B
0 0 1
1 2 3
2 4 5
3 6 7
4 8 9
>>> m = df % 3 == 0
>>> df.where(m, -df)
A B
0 0 -1
1 -2 3
2 -4 -5
3 6 -7
4 -8 9
>>> df.where(m, -df) == np.where(m, df, -df)
A B
0 True True
1 True True
2 True True
3 True True
4 True True
>>> df.where(m, -df) == df.mask(~m, -df)
A B
0 True True
1 True True
2 True True
3 True True
4 True True
|
reference/api/pandas.Series.where.html
|
pandas.core.groupby.DataFrameGroupBy.cumcount
|
`pandas.core.groupby.DataFrameGroupBy.cumcount`
Number each item in each group from 0 to the length of that group - 1.
```
>>> df = pd.DataFrame([['a'], ['a'], ['a'], ['b'], ['b'], ['a']],
... columns=['A'])
>>> df
A
0 a
1 a
2 a
3 b
4 b
5 a
>>> df.groupby('A').cumcount()
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
>>> df.groupby('A').cumcount(ascending=False)
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
```
|
DataFrameGroupBy.cumcount(ascending=True)[source]#
Number each item in each group from 0 to the length of that group - 1.
Essentially this is equivalent to
self.apply(lambda x: pd.Series(np.arange(len(x)), x.index))
Parameters
ascendingbool, default TrueIf False, number in reverse, from length of group - 1 to 0.
Returns
SeriesSequence number of each element within each group.
See also
ngroupNumber the groups themselves.
Examples
>>> df = pd.DataFrame([['a'], ['a'], ['a'], ['b'], ['b'], ['a']],
... columns=['A'])
>>> df
A
0 a
1 a
2 a
3 b
4 b
5 a
>>> df.groupby('A').cumcount()
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
>>> df.groupby('A').cumcount(ascending=False)
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
|
reference/api/pandas.core.groupby.DataFrameGroupBy.cumcount.html
|
pandas.Series.dtype
|
`pandas.Series.dtype`
Return the dtype object of the underlying data.
|
property Series.dtype[source]#
Return the dtype object of the underlying data.
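A one-line illustrative sketch (not from the original entry):

```
>>> pd.Series([1, 2, 3]).dtype
dtype('int64')
```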
|
reference/api/pandas.Series.dtype.html
|
pandas.tseries.offsets.BusinessHour.isAnchored
|
pandas.tseries.offsets.BusinessHour.isAnchored
|
BusinessHour.isAnchored()#
|
reference/api/pandas.tseries.offsets.BusinessHour.isAnchored.html
|