title | summary | context | path
---|---|---|---|
pandas.Index
|
`pandas.Index`
Immutable sequence used for indexing and alignment.
The basic object storing axis labels for all pandas objects.
```
>>> pd.Index([1, 2, 3])
Int64Index([1, 2, 3], dtype='int64')
```
|
class pandas.Index(data=None, dtype=None, copy=False, name=None, tupleize_cols=True, **kwargs)[source]#
Immutable sequence used for indexing and alignment.
The basic object storing axis labels for all pandas objects.
Parameters
data : array-like (1-dimensional)
dtype : NumPy dtype, default object
    If dtype is None, we find the dtype that best fits the data.
    If an actual dtype is provided, we coerce to that dtype if it's safe.
    Otherwise, an error will be raised.
copy : bool
    Make a copy of input ndarray.
name : object
    Name to be stored in the index.
tupleize_cols : bool, default True
    When True, attempt to create a MultiIndex if possible.
See also
RangeIndex
    Index implementing a monotonic integer range.
CategoricalIndex
    Index of Categorical values.
MultiIndex
    A multi-level, or hierarchical Index.
IntervalIndex
    An Index of Interval values.
DatetimeIndex
    Index of datetime64 data.
TimedeltaIndex
    Index of timedelta64 data.
PeriodIndex
    Index of Period data.
NumericIndex
    Index of numpy int/uint/float data.
Int64Index
    Index of purely int64 labels (deprecated).
UInt64Index
    Index of purely uint64 labels (deprecated).
Float64Index
    Index of purely float64 labels (deprecated).
Notes
An Index instance can only contain hashable objects.
Examples
>>> pd.Index([1, 2, 3])
Int64Index([1, 2, 3], dtype='int64')
>>> pd.Index(list('abc'))
Index(['a', 'b', 'c'], dtype='object')
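As a further illustration (not part of the original docstring), a minimal sketch of passing an explicit dtype and a name; the repr shown assumes pandas 1.x:
```
import pandas as pd

# Coerce the data to float64 and attach a name (pandas 1.x repr shown).
idx = pd.Index([1, 2, 3], dtype="float64", name="my_index")
# Float64Index([1.0, 2.0, 3.0], dtype='float64', name='my_index')
```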
Attributes
T
Return the transpose, which is by definition self.
array
The ExtensionArray of the data backing this Series or Index.
asi8
Integer representation of the values.
dtype
Return the dtype object of the underlying data.
has_duplicates
Check if the Index has duplicate values.
hasnans
Return True if there are any NaNs.
inferred_type
Return a string of the type inferred from the values.
is_all_dates
Whether or not the index values only consist of dates.
is_monotonic
(DEPRECATED) Alias for is_monotonic_increasing.
is_monotonic_decreasing
Return a boolean if the values are equal or decreasing.
is_monotonic_increasing
Return a boolean if the values are equal or increasing.
is_unique
Return if the index has unique values.
name
Return Index or MultiIndex name.
nbytes
Return the number of bytes in the underlying data.
ndim
Number of dimensions of the underlying data, by definition 1.
nlevels
Number of levels.
shape
Return a tuple of the shape of the underlying data.
size
Return the number of elements in the underlying data.
values
Return an array representing the data in the Index.
empty
names
Methods
all(*args, **kwargs)
Return whether all elements are truthy.
any(*args, **kwargs)
Return whether any element is truthy.
append(other)
Append a collection of Index objects together.
argmax([axis, skipna])
Return int position of the largest value in the Series.
argmin([axis, skipna])
Return int position of the smallest value in the Series.
argsort(*args, **kwargs)
Return the integer indices that would sort the index.
asof(label)
Return the label from the index, or, if not present, the previous one.
asof_locs(where, mask)
Return the locations (indices) of labels in the index.
astype(dtype[, copy])
Create an Index with values cast to dtypes.
copy([name, deep, dtype, names])
Make a copy of this object.
delete(loc)
Make new Index with passed location(-s) deleted.
difference(other[, sort])
Return a new Index with elements of index not in other.
drop(labels[, errors])
Make new Index with passed list of labels deleted.
drop_duplicates(*[, keep])
Return Index with duplicate values removed.
droplevel([level])
Return index with requested level(s) removed.
dropna([how])
Return Index without NA/NaN values.
duplicated([keep])
Indicate duplicate index values.
equals(other)
Determine if two Index objects are equal.
factorize([sort, na_sentinel, use_na_sentinel])
Encode the object as an enumerated type or categorical variable.
fillna([value, downcast])
Fill NA/NaN values with the specified value.
format([name, formatter, na_rep])
Render a string representation of the Index.
get_indexer(target[, method, limit, tolerance])
Compute indexer and mask for new index given the current index.
get_indexer_for(target)
Guaranteed return of an indexer even when non-unique.
get_indexer_non_unique(target)
Compute indexer and mask for new index given the current index.
get_level_values(level)
Return an Index of values for requested level.
get_loc(key[, method, tolerance])
Get integer location, slice or boolean mask for requested label.
get_slice_bound(label, side[, kind])
Calculate slice bound that corresponds to given label.
get_value(series, key)
Fast lookup of value from 1-dimensional ndarray.
groupby(values)
Group the index labels by a given array of values.
holds_integer()
Whether the type is an integer type.
identical(other)
Similar to equals, but checks that object attributes and types are also equal.
insert(loc, item)
Make new Index inserting new item at location.
intersection(other[, sort])
Form the intersection of two Index objects.
is_(other)
More flexible, faster check like is but that works through views.
is_boolean()
Check if the Index only consists of booleans.
is_categorical()
Check if the Index holds categorical data.
is_floating()
Check if the Index is a floating type.
is_integer()
Check if the Index only consists of integers.
is_interval()
Check if the Index holds Interval objects.
is_mixed()
Check if the Index holds data with mixed data types.
is_numeric()
Check if the Index only consists of numeric data.
is_object()
Check if the Index is of the object dtype.
is_type_compatible(kind)
Whether the index type is compatible with the provided type.
isin(values[, level])
Return a boolean array where the index values are in values.
isna()
Detect missing values.
isnull()
Detect missing values.
item()
Return the first element of the underlying data as a Python scalar.
join(other, *[, how, level, ...])
Compute join_index and indexers to conform data structures to the new index.
map(mapper[, na_action])
Map values using an input mapping or function.
max([axis, skipna])
Return the maximum value of the Index.
memory_usage([deep])
Memory usage of the values.
min([axis, skipna])
Return the minimum value of the Index.
notna()
Detect existing (non-missing) values.
notnull()
Detect existing (non-missing) values.
nunique([dropna])
Return number of unique elements in the object.
putmask(mask, value)
Return a new Index of the values set with the mask.
ravel([order])
Return an ndarray of the flattened values of the underlying data.
reindex(target[, method, level, limit, ...])
Create index with target's values.
rename(name[, inplace])
Alter Index or MultiIndex name.
repeat(repeats[, axis])
Repeat elements of a Index.
searchsorted(value[, side, sorter])
Find indices where elements should be inserted to maintain order.
set_names(names, *[, level, inplace])
Set Index or MultiIndex name.
set_value(arr, key, value)
(DEPRECATED) Fast lookup of value from 1-dimensional ndarray.
shift([periods, freq])
Shift index by desired number of time frequency increments.
slice_indexer([start, end, step, kind])
Compute the slice indexer for input labels and step.
slice_locs([start, end, step, kind])
Compute slice locations for input labels.
sort(*args, **kwargs)
Use sort_values instead.
sort_values([return_indexer, ascending, ...])
Return a sorted copy of the index.
sortlevel([level, ascending, sort_remaining])
For internal compatibility with the Index API.
str
alias of pandas.core.strings.accessor.StringMethods
symmetric_difference(other[, result_name, sort])
Compute the symmetric difference of two Index objects.
take(indices[, axis, allow_fill, fill_value])
Return a new Index of the values selected by the indices.
to_flat_index()
Identity method.
to_frame([index, name])
Create a DataFrame with a column containing the Index.
to_list()
Return a list of the values.
to_native_types([slicer])
(DEPRECATED) Format specified values of self and return them.
to_numpy([dtype, copy, na_value])
A NumPy ndarray representing the values in this Series or Index.
to_series([index, name])
Create a Series with both index and values equal to the index keys.
tolist()
Return a list of the values.
transpose(*args, **kwargs)
Return the transpose, which is by definition self.
union(other[, sort])
Form the union of two Index objects.
unique([level])
Return unique values in the index.
value_counts([normalize, sort, ascending, ...])
Return a Series containing counts of unique values.
where(cond[, other])
Replace values where the condition is False.
view
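A brief hedged sketch (not from the original page) combining a few of the set-operation methods listed above; the reprs assume pandas 1.x:
```
import pandas as pd

a = pd.Index([1, 2, 3, 4])
b = pd.Index([3, 4, 5, 6])

a.union(b)         # Int64Index([1, 2, 3, 4, 5, 6], dtype='int64')
a.intersection(b)  # Int64Index([3, 4], dtype='int64')
a.difference(b)    # Int64Index([1, 2], dtype='int64')
```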
|
reference/api/pandas.Index.html
|
GroupBy
|
GroupBy objects are returned by groupby calls: pandas.DataFrame.groupby(), pandas.Series.groupby(), etc.
Indexing, iteration#
GroupBy.__iter__()
Groupby iterator.
GroupBy.groups
Dict {group name -> group labels}.
GroupBy.indices
Dict {group name -> group indices}.
GroupBy.get_group(name[, obj])
Construct DataFrame from group with provided name.
Grouper(*args, **kwargs)
A Grouper allows the user to specify a groupby instruction for an object.
Function application#
GroupBy.apply(func, *args, **kwargs)
Apply function func group-wise and combine the results together.
GroupBy.agg(func, *args, **kwargs)
SeriesGroupBy.aggregate([func, engine, ...])
Aggregate using one or more operations over the specified axis.
DataFrameGroupBy.aggregate([func, engine, ...])
Aggregate using one or more operations over the specified axis.
SeriesGroupBy.transform(func, *args[, ...])
Call function producing a same-indexed Series on each group.
DataFrameGroupBy.transform(func, *args[, ...])
Call function producing a same-indexed DataFrame on each group.
GroupBy.pipe(func, *args, **kwargs)
Apply a func with arguments to this GroupBy object and return its result.
Computations / descriptive stats#
GroupBy.all([skipna])
Return True if all values in the group are truthy, else False.
GroupBy.any([skipna])
Return True if any value in the group is truthy, else False.
GroupBy.bfill([limit])
Backward fill the values.
GroupBy.backfill([limit])
(DEPRECATED) Backward fill the values.
GroupBy.count()
Compute count of group, excluding missing values.
GroupBy.cumcount([ascending])
Number each item in each group from 0 to the length of that group - 1.
GroupBy.cummax([axis, numeric_only])
Cumulative max for each group.
GroupBy.cummin([axis, numeric_only])
Cumulative min for each group.
GroupBy.cumprod([axis])
Cumulative product for each group.
GroupBy.cumsum([axis])
Cumulative sum for each group.
GroupBy.ffill([limit])
Forward fill the values.
GroupBy.first([numeric_only, min_count])
Compute the first non-null entry of each column.
GroupBy.head([n])
Return first n rows of each group.
GroupBy.last([numeric_only, min_count])
Compute the last non-null entry of each column.
GroupBy.max([numeric_only, min_count, ...])
Compute max of group values.
GroupBy.mean([numeric_only, engine, ...])
Compute mean of groups, excluding missing values.
GroupBy.median([numeric_only])
Compute median of groups, excluding missing values.
GroupBy.min([numeric_only, min_count, ...])
Compute min of group values.
GroupBy.ngroup([ascending])
Number each group from 0 to the number of groups - 1.
GroupBy.nth
Take the nth row from each group if n is an int, otherwise a subset of rows.
GroupBy.ohlc()
Compute open, high, low and close values of a group, excluding missing values.
GroupBy.pad([limit])
(DEPRECATED) Forward fill the values.
GroupBy.prod([numeric_only, min_count])
Compute prod of group values.
GroupBy.rank([method, ascending, na_option, ...])
Provide the rank of values within each group.
GroupBy.pct_change([periods, fill_method, ...])
Calculate pct_change of each value to previous entry in group.
GroupBy.size()
Compute group sizes.
GroupBy.sem([ddof, numeric_only])
Compute standard error of the mean of groups, excluding missing values.
GroupBy.std([ddof, engine, engine_kwargs, ...])
Compute standard deviation of groups, excluding missing values.
GroupBy.sum([numeric_only, min_count, ...])
Compute sum of group values.
GroupBy.var([ddof, engine, engine_kwargs, ...])
Compute variance of groups, excluding missing values.
GroupBy.tail([n])
Return last n rows of each group.
The following methods are available in both SeriesGroupBy and
DataFrameGroupBy objects, but may differ slightly, usually in that
the DataFrameGroupBy version permits the specification of an
axis argument, and often an argument indicating whether to restrict
application to columns of a specific data type.
DataFrameGroupBy.all([skipna])
Return True if all values in the group are truthy, else False.
DataFrameGroupBy.any([skipna])
Return True if any value in the group is truthy, else False.
DataFrameGroupBy.backfill([limit])
(DEPRECATED) Backward fill the values.
DataFrameGroupBy.bfill([limit])
Backward fill the values.
DataFrameGroupBy.corr
Compute pairwise correlation of columns, excluding NA/null values.
DataFrameGroupBy.count()
Compute count of group, excluding missing values.
DataFrameGroupBy.cov
Compute pairwise covariance of columns, excluding NA/null values.
DataFrameGroupBy.cumcount([ascending])
Number each item in each group from 0 to the length of that group - 1.
DataFrameGroupBy.cummax([axis, numeric_only])
Cumulative max for each group.
DataFrameGroupBy.cummin([axis, numeric_only])
Cumulative min for each group.
DataFrameGroupBy.cumprod([axis])
Cumulative product for each group.
DataFrameGroupBy.cumsum([axis])
Cumulative sum for each group.
DataFrameGroupBy.describe(**kwargs)
Generate descriptive statistics.
DataFrameGroupBy.diff([periods, axis])
First discrete difference of element.
DataFrameGroupBy.ffill([limit])
Forward fill the values.
DataFrameGroupBy.fillna
Fill NA/NaN values using the specified method.
DataFrameGroupBy.filter(func[, dropna])
Return a copy of a DataFrame excluding filtered elements.
DataFrameGroupBy.hist
Make a histogram of the DataFrame's columns.
DataFrameGroupBy.idxmax([axis, skipna, ...])
Return index of first occurrence of maximum over requested axis.
DataFrameGroupBy.idxmin([axis, skipna, ...])
Return index of first occurrence of minimum over requested axis.
DataFrameGroupBy.mad
(DEPRECATED) Return the mean absolute deviation of the values over the requested axis.
DataFrameGroupBy.nunique([dropna])
Return DataFrame with counts of unique elements in each position.
DataFrameGroupBy.pad([limit])
(DEPRECATED) Forward fill the values.
DataFrameGroupBy.pct_change([periods, ...])
Calculate pct_change of each value to previous entry in group.
DataFrameGroupBy.plot
Class implementing the .plot attribute for groupby objects.
DataFrameGroupBy.quantile([q, ...])
Return group values at the given quantile, a la numpy.percentile.
DataFrameGroupBy.rank([method, ascending, ...])
Provide the rank of values within each group.
DataFrameGroupBy.resample(rule, *args, **kwargs)
Provide resampling when using a TimeGrouper.
DataFrameGroupBy.sample([n, frac, replace, ...])
Return a random sample of items from each group.
DataFrameGroupBy.shift([periods, freq, ...])
Shift each group by periods observations.
DataFrameGroupBy.size()
Compute group sizes.
DataFrameGroupBy.skew
Return unbiased skew over requested axis.
DataFrameGroupBy.take
Return the elements in the given positional indices along an axis.
DataFrameGroupBy.tshift
(DEPRECATED) Shift the time index, using the index's frequency if available.
DataFrameGroupBy.value_counts([subset, ...])
Return a Series or DataFrame containing counts of unique rows.
The following methods are available only for SeriesGroupBy objects.
SeriesGroupBy.hist
Draw histogram of the input series using matplotlib.
SeriesGroupBy.nlargest([n, keep])
Return the largest n elements.
SeriesGroupBy.nsmallest([n, keep])
Return the smallest n elements.
SeriesGroupBy.unique
Return unique values of Series object.
SeriesGroupBy.is_monotonic_increasing
Return boolean if values in the object are monotonically increasing.
SeriesGroupBy.is_monotonic_decreasing
Return boolean if values in the object are monotonically decreasing.
The following methods are available only for DataFrameGroupBy objects.
DataFrameGroupBy.corrwith
Compute pairwise correlation.
DataFrameGroupBy.boxplot([subplots, column, ...])
Make box plots from DataFrameGroupBy data.
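To make the listing above concrete, a minimal usage sketch with assumed data (not from the original page):
```
import pandas as pd

df = pd.DataFrame({"team": ["a", "a", "b"], "points": [1, 2, 3]})
g = df.groupby("team")

g.size()            # rows per group: a -> 2, b -> 1
g["points"].sum()   # per-group sums: a -> 3, b -> 3
g.ngroup()          # group number for each row: 0, 0, 1
```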
|
reference/groupby.html
| null |
pandas.tseries.offsets.BYearBegin.__call__
|
`pandas.tseries.offsets.BYearBegin.__call__`
Call self as a function.
|
BYearBegin.__call__(*args, **kwargs)#
Call self as a function.
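For illustration, a hedged sketch assuming pandas 1.x, where calling an offset applies it to a timestamp just like adding it (this calling form may emit a deprecation warning in newer pandas):
```
import pandas as pd

bday = pd.offsets.BYearBegin()
# Calling the offset applies it, equivalent to addition:
bday(pd.Timestamp("2022-06-15"))   # Timestamp('2023-01-02 00:00:00')
pd.Timestamp("2022-06-15") + bday  # same result
```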
|
reference/api/pandas.tseries.offsets.BYearBegin.__call__.html
|
Plotting
|
Plotting
|
The following functions are contained in the pandas.plotting module.
andrews_curves(frame, class_column[, ax, ...])
Generate a matplotlib plot for visualising clusters of multivariate data.
autocorrelation_plot(series[, ax])
Autocorrelation plot for time series.
bootstrap_plot(series[, fig, size, samples])
Bootstrap plot on mean, median and mid-range statistics.
boxplot(data[, column, by, ax, fontsize, ...])
Make a box plot from DataFrame columns.
deregister_matplotlib_converters()
Remove pandas formatters and converters.
lag_plot(series[, lag, ax])
Lag plot for time series.
parallel_coordinates(frame, class_column[, ...])
Parallel coordinates plotting.
plot_params
Stores pandas plotting options.
radviz(frame, class_column[, ax, color, ...])
Plot a multidimensional dataset in 2D.
register_matplotlib_converters()
Register pandas formatters and converters with matplotlib.
scatter_matrix(frame[, alpha, figsize, ax, ...])
Draw a matrix of scatter plots.
table(ax, data[, rowLabels, colLabels])
Helper function to convert DataFrame and Series to matplotlib.table.
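A hedged usage sketch for one of the functions above (assumed random data; matplotlib is required):
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Draw a matrix of scatter plots from assumed example data.
df = pd.DataFrame(np.random.randn(100, 3), columns=["a", "b", "c"])
pd.plotting.scatter_matrix(df, alpha=0.5)
plt.show()
```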
|
reference/plotting.html
|
pandas.tseries.offsets.Second.nanos
|
`pandas.tseries.offsets.Second.nanos`
Return an integer of the total number of nanoseconds.
```
>>> pd.offsets.Hour(5).nanos
18000000000000
```
|
Second.nanos#
Return an integer of the total number of nanoseconds.
Raises
ValueError
    If the frequency is non-fixed.
Examples
>>> pd.offsets.Hour(5).nanos
18000000000000
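Conversely, a sketch (not from the original page) of the ValueError raised for a non-fixed frequency, using pandas' MonthEnd offset, which has no fixed nanosecond duration:
```
import pandas as pd

pd.offsets.Second(30).nanos  # 30000000000 (fixed frequency)
try:
    pd.offsets.MonthEnd().nanos
except ValueError as err:
    print(err)  # e.g. "<MonthEnd> is a non-fixed frequency" (message may vary)
```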
|
reference/api/pandas.tseries.offsets.Second.nanos.html
|
Policies
|
Policies
|
Version policy#
Changed in version 1.0.0.
pandas uses a loose variant of semantic versioning (SemVer) to govern
deprecations, API compatibility, and version numbering.
A pandas release number is made up of MAJOR.MINOR.PATCH.
API breaking changes should only occur in major releases. These changes
will be documented, with clear guidance on what is changing, why it’s changing,
and how to migrate existing code to the new behavior.
Whenever possible, a deprecation path will be provided rather than an outright
breaking change.
pandas will introduce deprecations in minor releases. These deprecations
will preserve the existing behavior while emitting a warning that provides
guidance on:
How to achieve similar behavior if an alternative is available
The pandas version in which the deprecation will be enforced.
We will not introduce new deprecations in patch releases.
Deprecations will only be enforced in major releases. For example, if a
behavior is deprecated in pandas 1.2.0, it will continue to work, with a
warning, for all releases in the 1.x series. The behavior will change and the
deprecation removed in the next major release (2.0.0).
Note
pandas will sometimes make behavior changing bug fixes, as part of
minor or patch releases. Whether or not a change is a bug fix or an
API-breaking change is a judgement call. We’ll do our best, and we
invite you to participate in development discussion on the issue
tracker or mailing list.
These policies do not apply to features marked as experimental in the documentation.
pandas may change the behavior of experimental features at any time.
Python support#
pandas mirrors the NumPy guidelines for Python support.
|
development/policies.html
|
pandas.tseries.offsets.SemiMonthBegin.n
|
pandas.tseries.offsets.SemiMonthBegin.n
|
SemiMonthBegin.n#
|
reference/api/pandas.tseries.offsets.SemiMonthBegin.n.html
|
Options and settings
|
API for configuring global behavior. See the User Guide for more.
Working with options#
describe_option(pat[, _print_desc])
Prints the description for one or more registered options.
reset_option(pat)
Reset one or more options to their default value.
get_option(pat)
Retrieves the value of the specified option.
set_option(pat, value)
Sets the value of the specified option.
option_context(*args)
Context manager to temporarily set options in the with statement context.
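A minimal sketch of these functions working together (display.max_rows is a real pandas display option):
```
import pandas as pd

pd.set_option("display.max_rows", 10)
print(pd.get_option("display.max_rows"))      # 10

# Temporarily override inside a with block; the old value is restored on exit.
with pd.option_context("display.max_rows", 5):
    print(pd.get_option("display.max_rows"))  # 5

pd.reset_option("display.max_rows")
```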
|
reference/options.html
| null |
pandas.Series.__array__
|
`pandas.Series.__array__`
Return the values as a NumPy array.
```
>>> ser = pd.Series([1, 2, 3])
>>> np.asarray(ser)
array([1, 2, 3])
```
|
Series.__array__(dtype=None)[source]#
Return the values as a NumPy array.
Users should not call this directly. Rather, it is invoked by
numpy.array() and numpy.asarray().
Parameters
dtype : str or numpy.dtype, optional
    The dtype to use for the resulting NumPy array. By default,
    the dtype is inferred from the data.
Returns
numpy.ndarray
    The values in the series converted to a numpy.ndarray
    with the specified dtype.
See also
array
    Create a new array from data.
Series.array
    Zero-copy view to the array backing the Series.
Series.to_numpy
    Series method for similar behavior.
Examples
>>> ser = pd.Series([1, 2, 3])
>>> np.asarray(ser)
array([1, 2, 3])
For timezone-aware data, the timezones may be retained with
dtype='object'
>>> tzser = pd.Series(pd.date_range('2000', periods=2, tz="CET"))
>>> np.asarray(tzser, dtype="object")
array([Timestamp('2000-01-01 00:00:00+0100', tz='CET'),
Timestamp('2000-01-02 00:00:00+0100', tz='CET')],
dtype=object)
Or the values may be localized to UTC and the tzinfo discarded with
dtype='datetime64[ns]'
>>> np.asarray(tzser, dtype="datetime64[ns]")
array(['1999-12-31T23:00:00.000000000', ...],
dtype='datetime64[ns]')
|
reference/api/pandas.Series.__array__.html
|
pandas.tseries.offsets.FY5253.is_month_start
|
`pandas.tseries.offsets.FY5253.is_month_start`
Return boolean whether a timestamp occurs on the month start.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
```
|
FY5253.is_month_start()#
Return boolean whether a timestamp occurs on the month start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
|
reference/api/pandas.tseries.offsets.FY5253.is_month_start.html
|
pandas.tseries.offsets.MonthBegin.rollforward
|
`pandas.tseries.offsets.MonthBegin.rollforward`
Roll provided date forward to next offset only if not on offset.
Rolled timestamp if not on offset, otherwise unchanged timestamp.
|
MonthBegin.rollforward()#
Roll provided date forward to next offset only if not on offset.
Returns
Timestamp
    Rolled timestamp if not on offset, otherwise unchanged timestamp.
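An illustrative sketch (not from the original page) contrasting a date off the offset with one already on it:
```
import pandas as pd

offset = pd.offsets.MonthBegin()
offset.rollforward(pd.Timestamp("2022-01-15"))  # Timestamp('2022-02-01 00:00:00')
offset.rollforward(pd.Timestamp("2022-01-01"))  # unchanged: already on offset
```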
|
reference/api/pandas.tseries.offsets.MonthBegin.rollforward.html
|
pandas.Series.axes
|
`pandas.Series.axes`
Return a list of the row axis labels.
|
property Series.axes[source]#
Return a list of the row axis labels.
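For example (a hedged sketch, not from the original page):
```
import pandas as pd

ser = pd.Series([1, 2, 3])
ser.axes  # [RangeIndex(start=0, stop=3, step=1)]
```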
|
reference/api/pandas.Series.axes.html
|
pandas.tseries.offsets.BusinessMonthBegin.is_year_start
|
`pandas.tseries.offsets.BusinessMonthBegin.is_year_start`
Return boolean whether a timestamp occurs on the year start.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
```
|
BusinessMonthBegin.is_year_start()#
Return boolean whether a timestamp occurs on the year start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
|
reference/api/pandas.tseries.offsets.BusinessMonthBegin.is_year_start.html
|
pandas.tseries.offsets.WeekOfMonth.base
|
`pandas.tseries.offsets.WeekOfMonth.base`
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
|
WeekOfMonth.base#
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
|
reference/api/pandas.tseries.offsets.WeekOfMonth.base.html
|
pandas.DatetimeIndex.to_period
|
`pandas.DatetimeIndex.to_period`
Cast to PeriodArray/Index at a particular frequency.
```
>>> df = pd.DataFrame({"y": [1, 2, 3]},
... index=pd.to_datetime(["2000-03-31 00:00:00",
... "2000-05-31 00:00:00",
... "2000-08-31 00:00:00"]))
>>> df.index.to_period("M")
PeriodIndex(['2000-03', '2000-05', '2000-08'],
dtype='period[M]')
```
|
DatetimeIndex.to_period(*args, **kwargs)[source]#
Cast to PeriodArray/Index at a particular frequency.
Converts DatetimeArray/Index to PeriodArray/Index.
Parameters
freq : str or Offset, optional
    One of pandas' offset strings or an Offset object.
    Will be inferred by default.
Returns
PeriodArray/Index
Raises
ValueError
    When converting a DatetimeArray/Index with non-regular values,
    so that a frequency cannot be inferred.
See also
PeriodIndex
    Immutable ndarray holding ordinal values.
DatetimeIndex.to_pydatetime
    Return DatetimeIndex as object.
Examples
>>> df = pd.DataFrame({"y": [1, 2, 3]},
... index=pd.to_datetime(["2000-03-31 00:00:00",
... "2000-05-31 00:00:00",
... "2000-08-31 00:00:00"]))
>>> df.index.to_period("M")
PeriodIndex(['2000-03', '2000-05', '2000-08'],
dtype='period[M]')
Infer the daily frequency
>>> idx = pd.date_range("2017-01-01", periods=2)
>>> idx.to_period()
PeriodIndex(['2017-01-01', '2017-01-02'],
dtype='period[D]')
|
reference/api/pandas.DatetimeIndex.to_period.html
|
pandas.core.groupby.GroupBy.ffill
|
`pandas.core.groupby.GroupBy.ffill`
Forward fill the values.
|
final GroupBy.ffill(limit=None)[source]#
Forward fill the values.
Parameters
limit : int, optional
    Limit of how many values to fill.
Returns
Series or DataFrame
    Object with missing values filled.
See also
Series.ffill
    Forward fill the values of a Series.
DataFrame.ffill
    Object with missing values filled or None if inplace=True.
Series.fillna
    Fill NaN values of a Series.
DataFrame.fillna
    Fill NaN values of a DataFrame.
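A small worked sketch with assumed data (not from the docstring); note the fill never crosses group boundaries:
```
import numpy as np
import pandas as pd

df = pd.DataFrame({"key": ["a", "a", "b", "b"],
                   "val": [1.0, np.nan, 2.0, np.nan]})
# Each NaN is filled from the previous row within its own group only.
df.groupby("key").ffill()
#    val
# 0  1.0
# 1  1.0
# 2  2.0
# 3  2.0
```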
|
reference/api/pandas.core.groupby.GroupBy.ffill.html
|
pandas.core.resample.Resampler.__iter__
|
`pandas.core.resample.Resampler.__iter__`
Groupby iterator.
|
Resampler.__iter__()[source]#
Groupby iterator.
Returns
Generator yielding sequence of (name, subsetted object)
for each group
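A brief sketch of iterating over a resampler (assumed data, not from the original page):
```
import pandas as pd

ser = pd.Series(range(4),
                index=pd.date_range("2000-01-01", periods=4, freq="D"))
# Each iteration yields (bin label, subsetted object) for one resample bin.
for name, group in ser.resample("2D"):
    print(name, list(group))
# 2000-01-01 00:00:00 [0, 1]
# 2000-01-03 00:00:00 [2, 3]
```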
|
reference/api/pandas.core.resample.Resampler.__iter__.html
|
pandas.tseries.offsets.BDay
|
`pandas.tseries.offsets.BDay`
alias of pandas._libs.tslibs.offsets.BusinessDay
|
pandas.tseries.offsets.BDay#
alias of pandas._libs.tslibs.offsets.BusinessDay
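As a hedged illustration, BDay behaves exactly like BusinessDay:
```
import pandas as pd
from pandas.tseries.offsets import BDay

# 2022-01-01 is a Saturday; one business day later is Monday.
pd.Timestamp("2022-01-01") + BDay(1)  # Timestamp('2022-01-03 00:00:00')
```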
|
reference/api/pandas.tseries.offsets.BDay.html
|
pandas.DataFrame.swaplevel
|
`pandas.DataFrame.swaplevel`
Swap levels i and j in a MultiIndex.
```
>>> df = pd.DataFrame(
... {"Grade": ["A", "B", "A", "C"]},
... index=[
... ["Final exam", "Final exam", "Coursework", "Coursework"],
... ["History", "Geography", "History", "Geography"],
... ["January", "February", "March", "April"],
... ],
... )
>>> df
Grade
Final exam History January A
Geography February B
Coursework History March A
Geography April C
```
|
DataFrame.swaplevel(i=-2, j=-1, axis=0)[source]#
Swap levels i and j in a MultiIndex.
Default is to swap the two innermost levels of the index.
Parameters
i, j : int or str
    Levels of the indices to be swapped. Can pass level name as string.
axis : {0 or 'index', 1 or 'columns'}, default 0
    The axis to swap levels on. 0 or 'index' for row-wise, 1 or
    'columns' for column-wise.
Returns
DataFrame
    DataFrame with levels swapped in MultiIndex.
Examples
>>> df = pd.DataFrame(
... {"Grade": ["A", "B", "A", "C"]},
... index=[
... ["Final exam", "Final exam", "Coursework", "Coursework"],
... ["History", "Geography", "History", "Geography"],
... ["January", "February", "March", "April"],
... ],
... )
>>> df
Grade
Final exam History January A
Geography February B
Coursework History March A
Geography April C
In the following example, we will swap the levels of the indices.
Here, we swap the levels row-wise, but levels can be swapped column-wise
in a similar manner. Note that row-wise is the default behaviour.
By not supplying any arguments for i and j, we swap the last and
second-to-last indices.
>>> df.swaplevel()
Grade
Final exam January History A
February Geography B
Coursework March History A
April Geography C
By supplying one argument, we can choose which index to swap the last
index with. We can for example swap the first index with the last one as
follows.
>>> df.swaplevel(0)
Grade
January History Final exam A
February Geography Final exam B
March History Coursework A
April Geography Coursework C
We can also define explicitly which indices we want to swap by supplying values
for both i and j. Here, we for example swap the first and second indices.
>>> df.swaplevel(0, 1)
Grade
History Final exam January A
Geography Final exam February B
History Coursework March A
Geography Coursework April C
|
reference/api/pandas.DataFrame.swaplevel.html
|
pandas.tseries.offsets.BusinessMonthBegin.n
|
pandas.tseries.offsets.BusinessMonthBegin.n
|
BusinessMonthBegin.n#
|
reference/api/pandas.tseries.offsets.BusinessMonthBegin.n.html
|
pandas.Timestamp.days_in_month
|
`pandas.Timestamp.days_in_month`
Return the number of days in the month.
```
>>> ts = pd.Timestamp(2020, 3, 14)
>>> ts.days_in_month
31
```
|
Timestamp.days_in_month#
Return the number of days in the month.
Examples
>>> ts = pd.Timestamp(2020, 3, 14)
>>> ts.days_in_month
31
|
reference/api/pandas.Timestamp.days_in_month.html
|
pandas.Series.dt.days_in_month
|
`pandas.Series.dt.days_in_month`
The number of days in the month.
|
Series.dt.days_in_month[source]#
The number of days in the month.
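For example (a hedged sketch, not from the original page; 2020 is a leap year):
```
import pandas as pd

ser = pd.Series(pd.to_datetime(["2020-02-15", "2020-03-15"]))
ser.dt.days_in_month
# 0    29
# 1    31
# dtype: int64
```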
|
reference/api/pandas.Series.dt.days_in_month.html
|
pandas.Index.is_categorical
|
`pandas.Index.is_categorical`
Check if the Index holds categorical data.
```
>>> idx = pd.Index(["Watermelon", "Orange", "Apple",
... "Watermelon"]).astype("category")
>>> idx.is_categorical()
True
```
|
final Index.is_categorical()[source]#
Check if the Index holds categorical data.
Returns
bool
    True if the Index is categorical.
See also
CategoricalIndex
    Index for categorical data.
is_boolean
    Check if the Index only consists of booleans.
is_integer
    Check if the Index only consists of integers.
is_floating
    Check if the Index is a floating type.
is_numeric
    Check if the Index only consists of numeric data.
is_object
    Check if the Index is of the object dtype.
is_interval
    Check if the Index holds Interval objects.
is_mixed
    Check if the Index holds data with mixed data types.
Examples
>>> idx = pd.Index(["Watermelon", "Orange", "Apple",
... "Watermelon"]).astype("category")
>>> idx.is_categorical()
True
>>> idx = pd.Index([1, 3, 5, 7])
>>> idx.is_categorical()
False
>>> s = pd.Series(["Peter", "Victor", "Elisabeth", "Mar"])
>>> s
0 Peter
1 Victor
2 Elisabeth
3 Mar
dtype: object
>>> s.index.is_categorical()
False
|
reference/api/pandas.Index.is_categorical.html
|
pandas.Series.asof
|
`pandas.Series.asof`
Return the last row(s) without any NaNs before where.
```
>>> s = pd.Series([1, 2, np.nan, 4], index=[10, 20, 30, 40])
>>> s
10 1.0
20 2.0
30 NaN
40 4.0
dtype: float64
```
|
Series.asof(where, subset=None)[source]#
Return the last row(s) without any NaNs before where.
The last row (for each element in where, if list) without any
NaN is taken. In case of a DataFrame, the last row without NaN
is taken, considering only the subset of columns (if not None).
If there is no good value, NaN is returned for a Series, or
a Series of NaN values for a DataFrame.
Parameters
where : date or array-like of dates
    Date(s) before which the last row(s) are returned.
subset : str or array-like of str, default None
    For DataFrame, if not None, only use these columns to
    check for NaNs.
Returns
scalar, Series, or DataFrame
    The return can be:
    scalar : when self is a Series and where is a scalar
    Series : when self is a Series and where is an array-like,
        or when self is a DataFrame and where is a scalar
    DataFrame : when self is a DataFrame and where is an array-like
See also
merge_asofPerform an asof merge. Similar to left join.
Notes
Dates are assumed to be sorted. Raises if this is not the case.
Examples
A Series and a scalar where.
>>> s = pd.Series([1, 2, np.nan, 4], index=[10, 20, 30, 40])
>>> s
10 1.0
20 2.0
30 NaN
40 4.0
dtype: float64
>>> s.asof(20)
2.0
For a sequence where, a Series is returned. The first value is
NaN, because the first element of where is before the first
index value.
>>> s.asof([5, 20])
5 NaN
20 2.0
dtype: float64
Missing values are not considered. The following is 2.0, not
NaN, even though NaN is at the index location for 30.
>>> s.asof(30)
2.0
Take all columns into consideration
>>> df = pd.DataFrame({'a': [10, 20, 30, 40, 50],
... 'b': [None, None, None, None, 500]},
... index=pd.DatetimeIndex(['2018-02-27 09:01:00',
... '2018-02-27 09:02:00',
... '2018-02-27 09:03:00',
... '2018-02-27 09:04:00',
... '2018-02-27 09:05:00']))
>>> df.asof(pd.DatetimeIndex(['2018-02-27 09:03:30',
... '2018-02-27 09:04:30']))
a b
2018-02-27 09:03:30 NaN NaN
2018-02-27 09:04:30 NaN NaN
Take a single column into consideration
>>> df.asof(pd.DatetimeIndex(['2018-02-27 09:03:30',
... '2018-02-27 09:04:30']),
... subset=['a'])
a b
2018-02-27 09:03:30 30 NaN
2018-02-27 09:04:30 40 NaN
|
reference/api/pandas.Series.asof.html
|
pandas.core.groupby.GroupBy.apply
|
`pandas.core.groupby.GroupBy.apply`
Apply function func group-wise and combine the results together.
```
>>> df = pd.DataFrame({'A': 'a a b'.split(),
... 'B': [1,2,3],
... 'C': [4,6,5]})
>>> g1 = df.groupby('A', group_keys=False)
>>> g2 = df.groupby('A', group_keys=True)
```
|
GroupBy.apply(func, *args, **kwargs)[source]#
Apply function func group-wise and combine the results together.
The function passed to apply must take a dataframe as its first
argument and return a DataFrame, Series or scalar. apply will
then take care of combining the results back together into a single
dataframe or series. apply is therefore a highly flexible
grouping method.
While apply is a very flexible method, its downside is that
using it can be quite a bit slower than using more specific methods
like agg or transform. Pandas offers a wide range of methods that will
be much faster than using apply for their specific purposes, so try to
use them before reaching for apply.
Parameters
func : callable
    A callable that takes a dataframe as its first argument, and
    returns a dataframe, a series or a scalar. In addition the
    callable may take positional and keyword arguments.
args, kwargs : tuple and dict
    Optional positional and keyword arguments to pass to func.
Returns
applied : Series or DataFrame
See also
pipe
    Apply function to the full GroupBy object instead of to each group.
aggregate
    Apply aggregate function to the GroupBy object.
transform
    Apply function column-by-column to the GroupBy object.
Series.apply
    Apply a function to a Series.
DataFrame.apply
    Apply a function to each row or column of a DataFrame.
Notes
Changed in version 1.3.0: The resulting dtype will reflect the return value of the passed func,
see the examples below.
Functions that mutate the passed object can produce unexpected
behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods
for more details.
Examples
>>> df = pd.DataFrame({'A': 'a a b'.split(),
... 'B': [1,2,3],
... 'C': [4,6,5]})
>>> g1 = df.groupby('A', group_keys=False)
>>> g2 = df.groupby('A', group_keys=True)
Notice that g1 and g2 have two groups, a and b, and only
differ in their group_keys argument. Calling apply in various ways,
we can get different grouping results:
Example 1: below the function passed to apply takes a DataFrame as
its argument and returns a DataFrame. apply combines the result for
each group together into a new DataFrame:
>>> g1[['B', 'C']].apply(lambda x: x / x.sum())
B C
0 0.333333 0.4
1 0.666667 0.6
2 1.000000 1.0
In the above, the groups are not part of the index. We can have them included
by using g2 where group_keys=True:
>>> g2[['B', 'C']].apply(lambda x: x / x.sum())
B C
A
a 0 0.333333 0.4
1 0.666667 0.6
b 2 1.000000 1.0
Example 2: The function passed to apply takes a DataFrame as
its argument and returns a Series. apply combines the result for
each group together into a new DataFrame.
Changed in version 1.3.0: The resulting dtype will reflect the return value of the passed func.
>>> g1[['B', 'C']].apply(lambda x: x.astype(float).max() - x.min())
B C
A
a 1.0 2.0
b 0.0 0.0
>>> g2[['B', 'C']].apply(lambda x: x.astype(float).max() - x.min())
B C
A
a 1.0 2.0
b 0.0 0.0
The group_keys argument has no effect here because the result is not
like-indexed (i.e. a transform) when compared
to the input.
Example 3: The function passed to apply takes a DataFrame as
its argument and returns a scalar. apply combines the result for
each group together into a Series, including setting the index as
appropriate:
>>> g1.apply(lambda x: x.C.max() - x.B.min())
A
a 5
b 2
dtype: int64
|
reference/api/pandas.core.groupby.GroupBy.apply.html
|
pandas.tseries.offsets.Second.is_month_end
|
`pandas.tseries.offsets.Second.is_month_end`
Return boolean whether a timestamp occurs on the month end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
```
|
Second.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
|
reference/api/pandas.tseries.offsets.Second.is_month_end.html
|
pandas.tseries.offsets.BusinessHour.weekmask
|
pandas.tseries.offsets.BusinessHour.weekmask
|
BusinessHour.weekmask#
|
reference/api/pandas.tseries.offsets.BusinessHour.weekmask.html
|
pandas.Series.cat.add_categories
|
`pandas.Series.cat.add_categories`
Add new categories.
```
>>> c = pd.Categorical(['c', 'b', 'c'])
>>> c
['c', 'b', 'c']
Categories (2, object): ['b', 'c']
```
|
Series.cat.add_categories(*args, **kwargs)[source]#
Add new categories.
new_categories will be included at the last/highest place in the
categories and will be unused directly after this call.
Parameters
new_categories : category or list-like of category
    The new categories to be included.
inplace : bool, default False
    Whether or not to add the categories inplace or return a copy of
    this categorical with added categories.
    Deprecated since version 1.3.0.
Returns
cat : Categorical or None
    Categorical with new categories added or None if inplace=True.
Raises
ValueError
    If the new categories include old categories or do not validate as
    categories.
See also
rename_categories
    Rename categories.
reorder_categories
    Reorder categories.
remove_categories
    Remove the specified categories.
remove_unused_categories
    Remove categories which are not used.
set_categories
    Set the categories to the specified ones.
Examples
>>> c = pd.Categorical(['c', 'b', 'c'])
>>> c
['c', 'b', 'c']
Categories (2, object): ['b', 'c']
>>> c.add_categories(['d', 'a'])
['c', 'b', 'c']
Categories (4, object): ['b', 'c', 'd', 'a']
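Since this page documents the Series accessor, a hedged sketch of the same call through Series.cat (assumed data):
```
import pandas as pd

s = pd.Series(["c", "b", "c"], dtype="category")
# New categories are appended after the existing ones.
s.cat.add_categories(["d", "a"]).cat.categories
# Index(['b', 'c', 'd', 'a'], dtype='object')
```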
|
reference/api/pandas.Series.cat.add_categories.html
|
pandas.Series.nbytes
|
`pandas.Series.nbytes`
Return the number of bytes in the underlying data.
|
property Series.nbytes[source]#
Return the number of bytes in the underlying data.
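For example (a hedged sketch; int64 values take 8 bytes each):
```
import pandas as pd

pd.Series([1, 2, 3]).nbytes  # 24  (3 values x 8 bytes for int64)
```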
|
reference/api/pandas.Series.nbytes.html
|
pandas.DataFrame.interpolate
|
`pandas.DataFrame.interpolate`
Fill NaN values using an interpolation method.
```
>>> s = pd.Series([0, 1, np.nan, 3])
>>> s
0 0.0
1 1.0
2 NaN
3 3.0
dtype: float64
>>> s.interpolate()
0 0.0
1 1.0
2 2.0
3 3.0
dtype: float64
```
|
DataFrame.interpolate(method='linear', *, axis=0, limit=None, inplace=False, limit_direction=None, limit_area=None, downcast=None, **kwargs)[source]#
Fill NaN values using an interpolation method.
Please note that only method='linear' is supported for
DataFrame/Series with a MultiIndex.
Parameters
methodstr, default ‘linear’Interpolation technique to use. One of:
‘linear’: Ignore the index and treat the values as equally
spaced. This is the only method supported on MultiIndexes.
‘time’: Works on daily and higher resolution data to interpolate
given length of interval.
‘index’, ‘values’: use the actual numerical values of the index.
‘pad’: Fill in NaNs using existing values.
‘nearest’, ‘zero’, ‘slinear’, ‘quadratic’, ‘cubic’, ‘spline’,
‘barycentric’, ‘polynomial’: Passed to
scipy.interpolate.interp1d. These methods use the numerical
values of the index. Both ‘polynomial’ and ‘spline’ require that
you also specify an order (int), e.g.
df.interpolate(method='polynomial', order=5).
‘krogh’, ‘piecewise_polynomial’, ‘spline’, ‘pchip’, ‘akima’,
‘cubicspline’: Wrappers around the SciPy interpolation methods of
similar names. See Notes.
‘from_derivatives’: Refers to
scipy.interpolate.BPoly.from_derivatives which
replaces ‘piecewise_polynomial’ interpolation method in
scipy 0.18.
axis{{0 or ‘index’, 1 or ‘columns’, None}}, default NoneAxis to interpolate along. For Series this parameter is unused
and defaults to 0.
limitint, optionalMaximum number of consecutive NaNs to fill. Must be greater than
0.
inplacebool, default FalseUpdate the data in place if possible.
limit_direction{{‘forward’, ‘backward’, ‘both’}}, OptionalConsecutive NaNs will be filled in this direction.
If limit is specified:
If ‘method’ is ‘pad’ or ‘ffill’, ‘limit_direction’ must be ‘forward’.
If ‘method’ is ‘backfill’ or ‘bfill’, ‘limit_direction’ must be
‘backward’.
If ‘limit’ is not specified:
If ‘method’ is ‘backfill’ or ‘bfill’, the default is ‘backward’
else the default is ‘forward’
Changed in version 1.1.0: raises ValueError if limit_direction is ‘forward’ or ‘both’ and
method is ‘backfill’ or ‘bfill’.
raises ValueError if limit_direction is ‘backward’ or ‘both’ and
method is ‘pad’ or ‘ffill’.
limit_area{None, ‘inside’, ‘outside’}, default NoneIf limit is specified, consecutive NaNs will be filled with this
restriction.
None: No fill restriction.
‘inside’: Only fill NaNs surrounded by valid values
(interpolate).
‘outside’: Only fill NaNs outside valid values (extrapolate).
downcastoptional, ‘infer’ or None, defaults to NoneDowncast dtypes if possible.
**kwargsoptionalKeyword arguments to pass on to the interpolating function.
Returns
Series or DataFrame or NoneReturns the same object type as the caller, interpolated at
some or all NaN values or None if inplace=True.
See also
fillnaFill missing values using different methods.
scipy.interpolate.Akima1DInterpolatorPiecewise cubic polynomials (Akima interpolator).
scipy.interpolate.BPoly.from_derivativesPiecewise polynomial in the Bernstein basis.
scipy.interpolate.interp1dInterpolate a 1-D function.
scipy.interpolate.KroghInterpolatorInterpolate polynomial (Krogh interpolator).
scipy.interpolate.PchipInterpolatorPCHIP 1-d monotonic cubic interpolation.
scipy.interpolate.CubicSplineCubic spline data interpolator.
Notes
The ‘krogh’, ‘piecewise_polynomial’, ‘spline’, ‘pchip’ and ‘akima’
methods are wrappers around the respective SciPy implementations of
similar names. These use the actual numerical values of the index.
For more information on their behavior, see the
SciPy documentation.
Examples
Filling in NaN in a Series via linear
interpolation.
>>> s = pd.Series([0, 1, np.nan, 3])
>>> s
0 0.0
1 1.0
2 NaN
3 3.0
dtype: float64
>>> s.interpolate()
0 0.0
1 1.0
2 2.0
3 3.0
dtype: float64
Filling in NaN in a Series by padding, but filling at most two
consecutive NaN at a time.
>>> s = pd.Series([np.nan, "single_one", np.nan,
... "fill_two_more", np.nan, np.nan, np.nan,
... 4.71, np.nan])
>>> s
0 NaN
1 single_one
2 NaN
3 fill_two_more
4 NaN
5 NaN
6 NaN
7 4.71
8 NaN
dtype: object
>>> s.interpolate(method='pad', limit=2)
0 NaN
1 single_one
2 single_one
3 fill_two_more
4 fill_two_more
5 fill_two_more
6 NaN
7 4.71
8 4.71
dtype: object
Filling in NaN in a Series via polynomial interpolation or splines:
Both ‘polynomial’ and ‘spline’ methods require that you also specify
an order (int).
>>> s = pd.Series([0, 2, np.nan, 8])
>>> s.interpolate(method='polynomial', order=2)
0 0.000000
1 2.000000
2 4.666667
3 8.000000
dtype: float64
Fill the DataFrame forward (that is, going down) along each column
using linear interpolation.
Note how the last entry in column ‘a’ is interpolated differently,
because there is no entry after it to use for interpolation.
Note how the first entry in column ‘b’ remains NaN, because there
is no entry before it to use for interpolation.
>>> df = pd.DataFrame([(0.0, np.nan, -1.0, 1.0),
... (np.nan, 2.0, np.nan, np.nan),
... (2.0, 3.0, np.nan, 9.0),
... (np.nan, 4.0, -4.0, 16.0)],
... columns=list('abcd'))
>>> df
a b c d
0 0.0 NaN -1.0 1.0
1 NaN 2.0 NaN NaN
2 2.0 3.0 NaN 9.0
3 NaN 4.0 -4.0 16.0
>>> df.interpolate(method='linear', limit_direction='forward', axis=0)
a b c d
0 0.0 NaN -1.0 1.0
1 1.0 2.0 -2.0 5.0
2 2.0 3.0 -3.0 9.0
3 2.0 4.0 -4.0 16.0
Using polynomial interpolation.
>>> df['d'].interpolate(method='polynomial', order=2)
0 1.0
1 4.0
2 9.0
3 16.0
Name: d, dtype: float64
|
reference/api/pandas.DataFrame.interpolate.html
|
pandas.core.groupby.GroupBy.all
|
`pandas.core.groupby.GroupBy.all`
Return True if all values in the group are truthy, else False.
Flag to ignore nan values during truth testing.
|
final GroupBy.all(skipna=True)[source]#
Return True if all values in the group are truthy, else False.
Parameters
skipnabool, default TrueFlag to ignore nan values during truth testing.
Returns
Series or DataFrameDataFrame or Series of boolean values, where a value is True if all elements
are True within its respective group, False otherwise.
See also
Series.groupbyApply a function groupby to a Series.
DataFrame.groupbyApply a function groupby to each row or column of a DataFrame.
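Examples
A hedged sketch of typical usage; the column and key names here are illustrative.
>>> df = pd.DataFrame({'key': ['a', 'a', 'b'], 'val': [1, 2, 0]})
>>> df.groupby('key').all()
       val
key
a     True
b    False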
|
reference/api/pandas.core.groupby.GroupBy.all.html
|
pandas.tseries.offsets.CustomBusinessMonthEnd.normalize
|
pandas.tseries.offsets.CustomBusinessMonthEnd.normalize
|
CustomBusinessMonthEnd.normalize#
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthEnd.normalize.html
|
pandas.tseries.offsets.Tick.freqstr
|
`pandas.tseries.offsets.Tick.freqstr`
Return a string representing the frequency.
```
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
```
|
Tick.freqstr#
Return a string representing the frequency.
Examples
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
>>> pd.offsets.BusinessHour(2).freqstr
'2BH'
>>> pd.offsets.Nano().freqstr
'N'
>>> pd.offsets.Nano(-3).freqstr
'-3N'
|
reference/api/pandas.tseries.offsets.Tick.freqstr.html
|
pandas.core.resample.Resampler.std
|
`pandas.core.resample.Resampler.std`
Compute standard deviation of groups, excluding missing values.
|
Resampler.std(ddof=1, numeric_only=_NoDefault.no_default, *args, **kwargs)[source]#
Compute standard deviation of groups, excluding missing values.
Parameters
ddofint, default 1Degrees of freedom.
numeric_onlybool, default FalseInclude only float, int or boolean data.
New in version 1.5.0.
Returns
DataFrame or SeriesStandard deviation of values within each group.
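Examples
A minimal sketch; with the default ddof=1, the sample standard deviation of each two-element bin of consecutive integers is sqrt(0.5).
>>> s = pd.Series([1, 2, 3, 4],
...               index=pd.date_range('20180101', periods=4, freq='h'))
>>> s.resample('2h').std()
2018-01-01 00:00:00    0.707107
2018-01-01 02:00:00    0.707107
Freq: 2H, dtype: float64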
|
reference/api/pandas.core.resample.Resampler.std.html
|
pandas.tseries.offsets.Week.base
|
`pandas.tseries.offsets.Week.base`
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
|
Week.base#
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
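Examples
A short sketch: base strips the multiple n while keeping the anchor weekday.
>>> pd.offsets.Week(n=3, weekday=0).base
<Week: weekday=0>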
|
reference/api/pandas.tseries.offsets.Week.base.html
|
pandas.tseries.offsets.FY5253.kwds
|
`pandas.tseries.offsets.FY5253.kwds`
Return a dict of extra parameters for the offset.
Examples
```
>>> pd.DateOffset(5).kwds
{}
```
|
FY5253.kwds#
Return a dict of extra parameters for the offset.
Examples
>>> pd.DateOffset(5).kwds
{}
>>> pd.offsets.FY5253Quarter().kwds
{'weekday': 0,
'startingMonth': 1,
'qtr_with_extra_week': 1,
'variation': 'nearest'}
|
reference/api/pandas.tseries.offsets.FY5253.kwds.html
|
pandas.tseries.offsets.BQuarterBegin.is_anchored
|
`pandas.tseries.offsets.BQuarterBegin.is_anchored`
Return boolean whether the frequency is a unit frequency (n=1).
Examples
```
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
```
|
BQuarterBegin.is_anchored()#
Return boolean whether the frequency is a unit frequency (n=1).
Examples
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
|
reference/api/pandas.tseries.offsets.BQuarterBegin.is_anchored.html
|
pandas.Series.divide
|
`pandas.Series.divide`
Return floating division of series and other, element-wise (binary operator truediv).
Equivalent to series / other, but with support to substitute a fill_value for
missing data in either one of the inputs.
```
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.divide(b, fill_value=0)
a 1.0
b inf
c inf
d 0.0
e NaN
dtype: float64
```
|
Series.divide(other, level=None, fill_value=None, axis=0)[source]#
Return floating division of series and other, element-wise (binary operator truediv).
Equivalent to series / other, but with support to substitute a fill_value for
missing data in either one of the inputs.
Parameters
otherSeries or scalar value
levelint or nameBroadcast across a level, matching Index values on the
passed MultiIndex level.
fill_valueNone or float value, default None (NaN)Fill existing missing (NaN) values, and any new element needed for
successful Series alignment, with this value before computation.
If data in both corresponding Series locations is missing
the result of filling (at that location) will be missing.
axis{0 or ‘index’}Unused. Parameter needed for compatibility with DataFrame.
Returns
SeriesThe result of the operation.
See also
Series.rtruedivReverse of the floating division operator, see Python documentation for more details.
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.divide(b, fill_value=0)
a 1.0
b inf
c inf
d 0.0
e NaN
dtype: float64
|
reference/api/pandas.Series.divide.html
|
pandas.tseries.offsets.BusinessDay.base
|
`pandas.tseries.offsets.BusinessDay.base`
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
|
BusinessDay.base#
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
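Examples
A short sketch: base drops the multiple n=5 and keeps the remaining attributes.
>>> pd.offsets.BusinessDay(5).base
<BusinessDay>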
|
reference/api/pandas.tseries.offsets.BusinessDay.base.html
|
pandas.Series.to_csv
|
`pandas.Series.to_csv`
Write object to a comma-separated values (csv) file.
```
>>> df = pd.DataFrame({'name': ['Raphael', 'Donatello'],
... 'mask': ['red', 'purple'],
... 'weapon': ['sai', 'bo staff']})
>>> df.to_csv(index=False)
'name,mask,weapon\nRaphael,red,sai\nDonatello,purple,bo staff\n'
```
|
Series.to_csv(path_or_buf=None, sep=',', na_rep='', float_format=None, columns=None, header=True, index=True, index_label=None, mode='w', encoding=None, compression='infer', quoting=None, quotechar='"', lineterminator=None, chunksize=None, date_format=None, doublequote=True, escapechar=None, decimal='.', errors='strict', storage_options=None)[source]#
Write object to a comma-separated values (csv) file.
Parameters
path_or_bufstr, path object, file-like object, or None, default NoneString, path object (implementing os.PathLike[str]), or file-like
object implementing a write() function. If None, the result is
returned as a string. If a non-binary file object is passed, it should
be opened with newline=’’, disabling universal newlines. If a binary
file object is passed, mode might need to contain a ‘b’.
Changed in version 1.2.0: Support for binary file objects was introduced.
sepstr, default ‘,’String of length 1. Field delimiter for the output file.
na_repstr, default ‘’Missing data representation.
float_formatstr, Callable, default NoneFormat string for floating point numbers. If a Callable is given, it takes
precedence over other numeric formatting parameters, like decimal.
columnssequence, optionalColumns to write.
headerbool or list of str, default TrueWrite out the column names. If a list of strings is given it is
assumed to be aliases for the column names.
indexbool, default TrueWrite row names (index).
index_labelstr or sequence, or False, default NoneColumn label for index column(s) if desired. If None is given, and
header and index are True, then the index names are used. A
sequence should be given if the object uses MultiIndex. If
False do not print fields for index names. Use index_label=False
for easier importing in R.
modestr, default ‘w’Python write mode. The available write modes are the same as
open().
encodingstr, optionalA string representing the encoding to use in the output file,
defaults to ‘utf-8’. encoding is not supported if path_or_buf
is a non-binary file object.
compressionstr or dict, default ‘infer’For on-the-fly compression of the output data. If ‘infer’ and ‘path_or_buf’ is
path-like, then detect compression from the following extensions: ‘.gz’,
‘.bz2’, ‘.zip’, ‘.xz’, ‘.zst’, ‘.tar’, ‘.tar.gz’, ‘.tar.xz’ or ‘.tar.bz2’
(otherwise no compression).
Set to None for no compression.
Can also be a dict with key 'method' set
to one of {'zip', 'gzip', 'bz2', 'zstd', 'tar'} and other
key-value pairs are forwarded to
zipfile.ZipFile, gzip.GzipFile,
bz2.BZ2File, zstandard.ZstdCompressor or
tarfile.TarFile, respectively.
As an example, the following could be passed for faster compression and to create
a reproducible gzip archive:
compression={'method': 'gzip', 'compresslevel': 1, 'mtime': 1}.
New in version 1.5.0: Added support for .tar files.
Changed in version 1.0.0: May now be a dict with key ‘method’ as compression mode
and other entries as additional compression options if
compression mode is ‘zip’.
Changed in version 1.1.0: Passing compression options as keys in dict is
supported for compression modes ‘gzip’, ‘bz2’, ‘zstd’, and ‘zip’.
Changed in version 1.2.0: Compression is supported for binary file objects.
Changed in version 1.2.0: Previous versions forwarded dict entries for ‘gzip’ to
gzip.open instead of gzip.GzipFile which prevented
setting mtime.
quotingoptional constant from csv moduleDefaults to csv.QUOTE_MINIMAL. If you have set a float_format
then floats are converted to strings and thus csv.QUOTE_NONNUMERIC
will treat them as non-numeric.
quotecharstr, default ‘"’String of length 1. Character used to quote fields.
lineterminatorstr, optionalThe newline character or character sequence to use in the output
file. Defaults to os.linesep, which depends on the OS in which
this method is called (e.g. ‘\n’ for Linux, ‘\r\n’ for Windows).
Changed in version 1.5.0: Previously was line_terminator, changed for consistency with
read_csv and the standard library ‘csv’ module.
chunksizeint or NoneRows to write at a time.
date_formatstr, default NoneFormat string for datetime objects.
doublequotebool, default TrueControl quoting of quotechar inside a field.
escapecharstr, default NoneString of length 1. Character used to escape sep and quotechar
when appropriate.
decimalstr, default ‘.’Character recognized as decimal separator. E.g. use ‘,’ for
European data.
errorsstr, default ‘strict’Specifies how encoding and decoding errors are to be handled.
See the errors argument for open() for a full list
of options.
New in version 1.1.0.
storage_optionsdict, optionalExtra options that make sense for a particular storage connection, e.g.
host, port, username, password, etc. For HTTP(S) URLs the key-value pairs
are forwarded to urllib.request.Request as header options. For other
URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are
forwarded to fsspec.open. Please see fsspec and urllib for more
details, and for more examples on storage options refer here.
New in version 1.2.0.
Returns
None or strIf path_or_buf is None, returns the resulting csv format as a
string. Otherwise returns None.
See also
read_csvLoad a CSV file into a DataFrame.
to_excelWrite DataFrame to an Excel file.
Examples
>>> df = pd.DataFrame({'name': ['Raphael', 'Donatello'],
... 'mask': ['red', 'purple'],
... 'weapon': ['sai', 'bo staff']})
>>> df.to_csv(index=False)
'name,mask,weapon\nRaphael,red,sai\nDonatello,purple,bo staff\n'
Create ‘out.zip’ containing ‘out.csv’
>>> compression_opts = dict(method='zip',
... archive_name='out.csv')
>>> df.to_csv('out.zip', index=False,
... compression=compression_opts)
To write a csv file to a new folder or nested folder you will first
need to create it using either Pathlib or os:
>>> from pathlib import Path
>>> filepath = Path('folder/subfolder/out.csv')
>>> filepath.parent.mkdir(parents=True, exist_ok=True)
>>> df.to_csv(filepath)
>>> import os
>>> os.makedirs('folder/subfolder', exist_ok=True)
>>> df.to_csv('folder/subfolder/out.csv')
|
reference/api/pandas.Series.to_csv.html
|
pandas.tseries.offsets.BYearBegin.apply
|
pandas.tseries.offsets.BYearBegin.apply
|
BYearBegin.apply()#
|
reference/api/pandas.tseries.offsets.BYearBegin.apply.html
|
pandas.DatetimeIndex.year
|
`pandas.DatetimeIndex.year`
The year of the datetime.
Examples
```
>>> datetime_series = pd.Series(
... pd.date_range("2000-01-01", periods=3, freq="Y")
... )
>>> datetime_series
0 2000-12-31
1 2001-12-31
2 2002-12-31
dtype: datetime64[ns]
>>> datetime_series.dt.year
0 2000
1 2001
2 2002
dtype: int64
```
|
property DatetimeIndex.year[source]#
The year of the datetime.
Examples
>>> datetime_series = pd.Series(
... pd.date_range("2000-01-01", periods=3, freq="Y")
... )
>>> datetime_series
0 2000-12-31
1 2001-12-31
2 2002-12-31
dtype: datetime64[ns]
>>> datetime_series.dt.year
0 2000
1 2001
2 2002
dtype: int64
|
reference/api/pandas.DatetimeIndex.year.html
|
pandas.tseries.offsets.CustomBusinessMonthEnd.is_on_offset
|
`pandas.tseries.offsets.CustomBusinessMonthEnd.is_on_offset`
Return boolean whether a timestamp intersects with this frequency.
Timestamp to check intersections with frequency.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
```
|
CustomBusinessMonthEnd.is_on_offset()#
Return boolean whether a timestamp intersects with this frequency.
Parameters
dtdatetime.datetimeTimestamp to check intersections with frequency.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
>>> ts = pd.Timestamp(2022, 8, 6)
>>> ts.day_name()
'Saturday'
>>> freq = pd.offsets.BusinessDay(1)
>>> freq.is_on_offset(ts)
False
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthEnd.is_on_offset.html
|
pandas.Series.str.ljust
|
`pandas.Series.str.ljust`
Pad right side of strings in the Series/Index.
|
Series.str.ljust(width, fillchar=' ')[source]#
Pad right side of strings in the Series/Index.
Equivalent to str.ljust().
Parameters
widthintMinimum width of resulting string; additional characters will be filled
with fillchar.
fillcharstrAdditional character for filling, default is whitespace.
Returns
filledSeries/Index of objects.
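Examples
A minimal sketch of padding to a fixed width; the fill character is chosen only for illustration.
>>> s = pd.Series(['dog', 'bird'])
>>> s.str.ljust(6, fillchar='-')
0    dog---
1    bird--
dtype: object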
|
reference/api/pandas.Series.str.ljust.html
|
pandas.DatetimeIndex.day
|
`pandas.DatetimeIndex.day`
The day of the datetime.
```
>>> datetime_series = pd.Series(
... pd.date_range("2000-01-01", periods=3, freq="D")
... )
>>> datetime_series
0 2000-01-01
1 2000-01-02
2 2000-01-03
dtype: datetime64[ns]
>>> datetime_series.dt.day
0 1
1 2
2 3
dtype: int64
```
|
property DatetimeIndex.day[source]#
The day of the datetime.
Examples
>>> datetime_series = pd.Series(
... pd.date_range("2000-01-01", periods=3, freq="D")
... )
>>> datetime_series
0 2000-01-01
1 2000-01-02
2 2000-01-03
dtype: datetime64[ns]
>>> datetime_series.dt.day
0 1
1 2
2 3
dtype: int64
|
reference/api/pandas.DatetimeIndex.day.html
|
pandas.tseries.offsets.BQuarterEnd.onOffset
|
pandas.tseries.offsets.BQuarterEnd.onOffset
|
BQuarterEnd.onOffset()#
|
reference/api/pandas.tseries.offsets.BQuarterEnd.onOffset.html
|
pandas.tseries.offsets.MonthBegin.onOffset
|
pandas.tseries.offsets.MonthBegin.onOffset
|
MonthBegin.onOffset()#
|
reference/api/pandas.tseries.offsets.MonthBegin.onOffset.html
|
pandas.Series.cummin
|
`pandas.Series.cummin`
Return cumulative minimum over a DataFrame or Series axis.
Returns a DataFrame or Series of the same size containing the cumulative
minimum.
```
>>> s = pd.Series([2, np.nan, 5, -1, 0])
>>> s
0 2.0
1 NaN
2 5.0
3 -1.0
4 0.0
dtype: float64
```
|
Series.cummin(axis=None, skipna=True, *args, **kwargs)[source]#
Return cumulative minimum over a DataFrame or Series axis.
Returns a DataFrame or Series of the same size containing the cumulative
minimum.
Parameters
axis{0 or ‘index’, 1 or ‘columns’}, default 0The index or the name of the axis. 0 is equivalent to None or ‘index’.
For Series this parameter is unused and defaults to 0.
skipnabool, default TrueExclude NA/null values. If an entire row/column is NA, the result
will be NA.
*args, **kwargsAdditional keywords have no effect but might be accepted for
compatibility with NumPy.
Returns
scalar or SeriesReturn cumulative minimum of scalar or Series.
See also
core.window.expanding.Expanding.minSimilar functionality but ignores NaN values.
Series.minReturn the minimum over Series axis.
Series.cummaxReturn cumulative maximum over Series axis.
Series.cumminReturn cumulative minimum over Series axis.
Series.cumsumReturn cumulative sum over Series axis.
Series.cumprodReturn cumulative product over Series axis.
Examples
Series
>>> s = pd.Series([2, np.nan, 5, -1, 0])
>>> s
0 2.0
1 NaN
2 5.0
3 -1.0
4 0.0
dtype: float64
By default, NA values are ignored.
>>> s.cummin()
0 2.0
1 NaN
2 2.0
3 -1.0
4 -1.0
dtype: float64
To include NA values in the operation, use skipna=False
>>> s.cummin(skipna=False)
0 2.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
DataFrame
>>> df = pd.DataFrame([[2.0, 1.0],
... [3.0, np.nan],
... [1.0, 0.0]],
... columns=list('AB'))
>>> df
A B
0 2.0 1.0
1 3.0 NaN
2 1.0 0.0
By default, iterates over rows and finds the minimum
in each column. This is equivalent to axis=None or axis='index'.
>>> df.cummin()
A B
0 2.0 1.0
1 2.0 NaN
2 1.0 0.0
To iterate over columns and find the minimum in each row,
use axis=1
>>> df.cummin(axis=1)
A B
0 2.0 1.0
1 3.0 NaN
2 1.0 0.0
|
reference/api/pandas.Series.cummin.html
|
pandas.core.groupby.GroupBy.cumcount
|
`pandas.core.groupby.GroupBy.cumcount`
Number each item in each group from 0 to the length of that group - 1.
```
>>> df = pd.DataFrame([['a'], ['a'], ['a'], ['b'], ['b'], ['a']],
... columns=['A'])
>>> df
A
0 a
1 a
2 a
3 b
4 b
5 a
>>> df.groupby('A').cumcount()
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
>>> df.groupby('A').cumcount(ascending=False)
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
```
|
final GroupBy.cumcount(ascending=True)[source]#
Number each item in each group from 0 to the length of that group - 1.
Essentially this is equivalent to
self.apply(lambda x: pd.Series(np.arange(len(x)), x.index))
Parameters
ascendingbool, default TrueIf False, number in reverse, from length of group - 1 to 0.
Returns
SeriesSequence number of each element within each group.
See also
ngroupNumber the groups themselves.
Examples
>>> df = pd.DataFrame([['a'], ['a'], ['a'], ['b'], ['b'], ['a']],
... columns=['A'])
>>> df
A
0 a
1 a
2 a
3 b
4 b
5 a
>>> df.groupby('A').cumcount()
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
>>> df.groupby('A').cumcount(ascending=False)
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
|
reference/api/pandas.core.groupby.GroupBy.cumcount.html
|
pandas.DatetimeIndex.month_name
|
`pandas.DatetimeIndex.month_name`
Return the month names with specified locale.
```
>>> s = pd.Series(pd.date_range(start='2018-01', freq='M', periods=3))
>>> s
0 2018-01-31
1 2018-02-28
2 2018-03-31
dtype: datetime64[ns]
>>> s.dt.month_name()
0 January
1 February
2 March
dtype: object
```
|
DatetimeIndex.month_name(*args, **kwargs)[source]#
Return the month names with specified locale.
Parameters
localestr, optionalLocale determining the language in which to return the month name.
Default is English locale.
Returns
Series or IndexSeries or Index of month names.
Examples
>>> s = pd.Series(pd.date_range(start='2018-01', freq='M', periods=3))
>>> s
0 2018-01-31
1 2018-02-28
2 2018-03-31
dtype: datetime64[ns]
>>> s.dt.month_name()
0 January
1 February
2 March
dtype: object
>>> idx = pd.date_range(start='2018-01', freq='M', periods=3)
>>> idx
DatetimeIndex(['2018-01-31', '2018-02-28', '2018-03-31'],
dtype='datetime64[ns]', freq='M')
>>> idx.month_name()
Index(['January', 'February', 'March'], dtype='object')
|
reference/api/pandas.DatetimeIndex.month_name.html
|
pandas.DataFrame.set_index
|
`pandas.DataFrame.set_index`
Set the DataFrame index using existing columns.
```
>>> df = pd.DataFrame({'month': [1, 4, 7, 10],
... 'year': [2012, 2014, 2013, 2014],
... 'sale': [55, 40, 84, 31]})
>>> df
month year sale
0 1 2012 55
1 4 2014 40
2 7 2013 84
3 10 2014 31
```
|
DataFrame.set_index(keys, *, drop=True, append=False, inplace=False, verify_integrity=False)[source]#
Set the DataFrame index using existing columns.
Set the DataFrame index (row labels) using one or more existing
columns or arrays (of the correct length). The index can replace the
existing index or expand on it.
Parameters
keyslabel or array-like or list of labels/arraysThis parameter can be either a single column key, a single array of
the same length as the calling DataFrame, or a list containing an
arbitrary combination of column keys and arrays. Here, “array”
encompasses Series, Index, np.ndarray, and
instances of Iterator.
dropbool, default TrueDelete columns to be used as the new index.
appendbool, default FalseWhether to append columns to existing index.
inplacebool, default FalseWhether to modify the DataFrame rather than creating a new one.
verify_integritybool, default FalseCheck the new index for duplicates. Otherwise defer the check until
necessary. Setting to False will improve the performance of this
method.
Returns
DataFrame or NoneChanged row labels or None if inplace=True.
See also
DataFrame.reset_indexOpposite of set_index.
DataFrame.reindexChange to new indices or expand indices.
DataFrame.reindex_likeChange to same indices as other DataFrame.
Examples
>>> df = pd.DataFrame({'month': [1, 4, 7, 10],
... 'year': [2012, 2014, 2013, 2014],
... 'sale': [55, 40, 84, 31]})
>>> df
month year sale
0 1 2012 55
1 4 2014 40
2 7 2013 84
3 10 2014 31
Set the index to become the ‘month’ column:
>>> df.set_index('month')
year sale
month
1 2012 55
4 2014 40
7 2013 84
10 2014 31
Create a MultiIndex using columns ‘year’ and ‘month’:
>>> df.set_index(['year', 'month'])
sale
year month
2012 1 55
2014 4 40
2013 7 84
2014 10 31
Create a MultiIndex using an Index and a column:
>>> df.set_index([pd.Index([1, 2, 3, 4]), 'year'])
month sale
year
1 2012 1 55
2 2014 4 40
3 2013 7 84
4 2014 10 31
Create a MultiIndex using two Series:
>>> s = pd.Series([1, 2, 3, 4])
>>> df.set_index([s, s**2])
month year sale
1 1 1 2012 55
2 4 4 2014 40
3 9 7 2013 84
4 16 10 2014 31
|
reference/api/pandas.DataFrame.set_index.html
|
pandas.CategoricalIndex.ordered
|
`pandas.CategoricalIndex.ordered`
Whether the categories have an ordered relationship.
|
property CategoricalIndex.ordered[source]#
Whether the categories have an ordered relationship.
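Examples
A minimal sketch contrasting ordered and unordered categoricals.
>>> pd.CategoricalIndex(['a', 'b'], ordered=True).ordered
True
>>> pd.CategoricalIndex(['a', 'b']).ordered
False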
|
reference/api/pandas.CategoricalIndex.ordered.html
|
pandas.tseries.offsets.BusinessDay.apply_index
|
`pandas.tseries.offsets.BusinessDay.apply_index`
Vectorized apply of DateOffset to DatetimeIndex.
|
BusinessDay.apply_index()#
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead.
Parameters
indexDatetimeIndex
Returns
DatetimeIndex
Raises
NotImplementedErrorWhen the specific offset subclass does not have a vectorized
implementation.
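Examples
A sketch of the recommended replacement, offset + dtindex; the dates are illustrative (both fall on business days) and the output shown is indicative.
>>> dtindex = pd.date_range('2022-08-01', periods=2, freq='D')
>>> pd.offsets.BusinessDay(1) + dtindex
DatetimeIndex(['2022-08-02', '2022-08-03'], dtype='datetime64[ns]', freq=None)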
|
reference/api/pandas.tseries.offsets.BusinessDay.apply_index.html
|
pandas.tseries.offsets.MonthBegin.is_year_end
|
`pandas.tseries.offsets.MonthBegin.is_year_end`
Return boolean whether a timestamp occurs on the year end.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
```
|
MonthBegin.is_year_end()#
Return boolean whether a timestamp occurs on the year end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
|
reference/api/pandas.tseries.offsets.MonthBegin.is_year_end.html
|
pandas.tseries.offsets.Nano.isAnchored
|
pandas.tseries.offsets.Nano.isAnchored
|
Nano.isAnchored()#
|
reference/api/pandas.tseries.offsets.Nano.isAnchored.html
|
pandas.Series.cat.remove_unused_categories
|
`pandas.Series.cat.remove_unused_categories`
Remove categories which are not used.
```
>>> c = pd.Categorical(['a', 'c', 'b', 'c', 'd'])
>>> c
['a', 'c', 'b', 'c', 'd']
Categories (4, object): ['a', 'b', 'c', 'd']
```
|
Series.cat.remove_unused_categories(*args, **kwargs)[source]#
Remove categories which are not used.
Parameters
inplacebool, default FalseWhether or not to drop unused categories inplace or return a copy of
this categorical with unused categories dropped.
Deprecated since version 1.2.0.
Returns
catCategorical or NoneCategorical with unused categories dropped or None if inplace=True.
See also
rename_categoriesRename categories.
reorder_categoriesReorder categories.
add_categoriesAdd new categories.
remove_categoriesRemove the specified categories.
set_categoriesSet the categories to the specified ones.
Examples
>>> c = pd.Categorical(['a', 'c', 'b', 'c', 'd'])
>>> c
['a', 'c', 'b', 'c', 'd']
Categories (4, object): ['a', 'b', 'c', 'd']
>>> c[2] = 'a'
>>> c[4] = 'c'
>>> c
['a', 'c', 'a', 'c', 'c']
Categories (4, object): ['a', 'b', 'c', 'd']
>>> c.remove_unused_categories()
['a', 'c', 'a', 'c', 'c']
Categories (2, object): ['a', 'c']
|
reference/api/pandas.Series.cat.remove_unused_categories.html
|
pandas.PeriodIndex.hour
|
`pandas.PeriodIndex.hour`
The hour of the period.
|
property PeriodIndex.hour[source]#
The hour of the period.
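Examples
A minimal sketch with an hourly PeriodIndex.
>>> idx = pd.period_range('2023-01-01 10:00', periods=3, freq='H')
>>> idx.hour
Int64Index([10, 11, 12], dtype='int64')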
|
reference/api/pandas.PeriodIndex.hour.html
|
pandas.tseries.offsets.Day.n
|
pandas.tseries.offsets.Day.n
|
Day.n#
|
reference/api/pandas.tseries.offsets.Day.n.html
|
pandas.DataFrame.sort_values
|
`pandas.DataFrame.sort_values`
Sort by the values along either axis.
```
>>> df = pd.DataFrame({
... 'col1': ['A', 'A', 'B', np.nan, 'D', 'C'],
... 'col2': [2, 1, 9, 8, 7, 4],
... 'col3': [0, 1, 9, 4, 2, 3],
... 'col4': ['a', 'B', 'c', 'D', 'e', 'F']
... })
>>> df
col1 col2 col3 col4
0 A 2 0 a
1 A 1 1 B
2 B 9 9 c
3 NaN 8 4 D
4 D 7 2 e
5 C 4 3 F
```
|
DataFrame.sort_values(by, *, axis=0, ascending=True, inplace=False, kind='quicksort', na_position='last', ignore_index=False, key=None)[source]#
Sort by the values along either axis.
Parameters
bystr or list of strName or list of names to sort by.
if axis is 0 or ‘index’ then by may contain index
levels and/or column labels.
if axis is 1 or ‘columns’ then by may contain column
levels and/or index labels.
axis{0 or ‘index’, 1 or ‘columns’}, default 0Axis to be sorted.
ascendingbool or list of bool, default TrueSort ascending vs. descending. Specify list for multiple sort
orders. If this is a list of bools, must match the length of
the by.
inplacebool, default FalseIf True, perform operation in-place.
kind{‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, default ‘quicksort’Choice of sorting algorithm. See also numpy.sort() for more
information. mergesort and stable are the only stable algorithms. For
DataFrames, this option is only applied when sorting on a single
column or label.
na_position{‘first’, ‘last’}, default ‘last’Puts NaNs at the beginning if first; last puts NaNs at the
end.
ignore_indexbool, default FalseIf True, the resulting axis will be labeled 0, 1, …, n - 1.
New in version 1.0.0.
keycallable, optionalApply the key function to the values
before sorting. This is similar to the key argument in the
builtin sorted() function, with the notable difference that
this key function should be vectorized. It should expect a
Series and return a Series with the same shape as the input.
It will be applied to each column in by independently.
New in version 1.1.0.
Returns
DataFrame or NoneDataFrame with sorted values or None if inplace=True.
See also
DataFrame.sort_indexSort a DataFrame by the index.
Series.sort_valuesSimilar method for a Series.
Examples
>>> df = pd.DataFrame({
... 'col1': ['A', 'A', 'B', np.nan, 'D', 'C'],
... 'col2': [2, 1, 9, 8, 7, 4],
... 'col3': [0, 1, 9, 4, 2, 3],
... 'col4': ['a', 'B', 'c', 'D', 'e', 'F']
... })
>>> df
col1 col2 col3 col4
0 A 2 0 a
1 A 1 1 B
2 B 9 9 c
3 NaN 8 4 D
4 D 7 2 e
5 C 4 3 F
Sort by col1
>>> df.sort_values(by=['col1'])
col1 col2 col3 col4
0 A 2 0 a
1 A 1 1 B
2 B 9 9 c
5 C 4 3 F
4 D 7 2 e
3 NaN 8 4 D
Sort by multiple columns
>>> df.sort_values(by=['col1', 'col2'])
col1 col2 col3 col4
1 A 1 1 B
0 A 2 0 a
2 B 9 9 c
5 C 4 3 F
4 D 7 2 e
3 NaN 8 4 D
Sort Descending
>>> df.sort_values(by='col1', ascending=False)
col1 col2 col3 col4
4 D 7 2 e
5 C 4 3 F
2 B 9 9 c
0 A 2 0 a
1 A 1 1 B
3 NaN 8 4 D
Putting NAs first
>>> df.sort_values(by='col1', ascending=False, na_position='first')
col1 col2 col3 col4
3 NaN 8 4 D
4 D 7 2 e
5 C 4 3 F
2 B 9 9 c
0 A 2 0 a
1 A 1 1 B
Sorting with a key function
>>> df.sort_values(by='col4', key=lambda col: col.str.lower())
col1 col2 col3 col4
0 A 2 0 a
1 A 1 1 B
2 B 9 9 c
3 NaN 8 4 D
4 D 7 2 e
5 C 4 3 F
Natural sort with the key argument,
using the natsort package (https://github.com/SethMMorton/natsort).
>>> df = pd.DataFrame({
... "time": ['0hr', '128hr', '72hr', '48hr', '96hr'],
... "value": [10, 20, 30, 40, 50]
... })
>>> df
time value
0 0hr 10
1 128hr 20
2 72hr 30
3 48hr 40
4 96hr 50
>>> from natsort import index_natsorted
>>> df.sort_values(
... by="time",
... key=lambda x: np.argsort(index_natsorted(df["time"]))
... )
time value
0 0hr 10
3 48hr 40
2 72hr 30
4 96hr 50
1 128hr 20
|
reference/api/pandas.DataFrame.sort_values.html
|
pandas.core.window.rolling.Rolling.rank
|
`pandas.core.window.rolling.Rolling.rank`
Calculate the rolling rank.
```
>>> s = pd.Series([1, 4, 2, 3, 5, 3])
>>> s.rolling(3).rank()
0 NaN
1 NaN
2 2.0
3 2.0
4 3.0
5 1.5
dtype: float64
```
|
Rolling.rank(method='average', ascending=True, pct=False, numeric_only=False, **kwargs)[source]#
Calculate the rolling rank.
New in version 1.4.0.
Parameters
method{‘average’, ‘min’, ‘max’}, default ‘average’How to rank the group of records that have the same value (i.e. ties):
average: average rank of the group
min: lowest rank in the group
max: highest rank in the group
ascendingbool, default TrueWhether or not the elements should be ranked in ascending order.
pctbool, default FalseWhether or not to display the returned rankings in percentile
form.
numeric_onlybool, default FalseInclude only float, int, boolean columns.
New in version 1.5.0.
**kwargsFor NumPy compatibility and will not have an effect on the result.
Deprecated since version 1.5.0.
Returns
Series or DataFrameReturn type is the same as the original object with np.float64 dtype.
See also
pandas.Series.rollingCalling rolling with Series data.
pandas.DataFrame.rollingCalling rolling with DataFrames.
pandas.Series.rankAggregating rank for Series.
pandas.DataFrame.rankAggregating rank for DataFrame.
Examples
>>> s = pd.Series([1, 4, 2, 3, 5, 3])
>>> s.rolling(3).rank()
0 NaN
1 NaN
2 2.0
3 2.0
4 3.0
5 1.5
dtype: float64
>>> s.rolling(3).rank(method="max")
0 NaN
1 NaN
2 2.0
3 2.0
4 3.0
5 2.0
dtype: float64
>>> s.rolling(3).rank(method="min")
0 NaN
1 NaN
2 2.0
3 2.0
4 3.0
5 1.0
dtype: float64
|
reference/api/pandas.core.window.rolling.Rolling.rank.html
|
pandas.tseries.offsets.FY5253.variation
|
pandas.tseries.offsets.FY5253.variation
|
FY5253.variation#
|
reference/api/pandas.tseries.offsets.FY5253.variation.html
|
pandas.tseries.offsets.SemiMonthBegin.isAnchored
|
pandas.tseries.offsets.SemiMonthBegin.isAnchored
|
SemiMonthBegin.isAnchored()#
|
reference/api/pandas.tseries.offsets.SemiMonthBegin.isAnchored.html
|
pandas.tseries.offsets.FY5253Quarter.is_on_offset
|
`pandas.tseries.offsets.FY5253Quarter.is_on_offset`
Return boolean whether a timestamp intersects with this frequency.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
```
|
FY5253Quarter.is_on_offset()#
Return boolean whether a timestamp intersects with this frequency.
Parameters
dtdatetime.datetimeTimestamp to check intersections with frequency.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
>>> ts = pd.Timestamp(2022, 8, 6)
>>> ts.day_name()
'Saturday'
>>> freq = pd.offsets.BusinessDay(1)
>>> freq.is_on_offset(ts)
False
|
reference/api/pandas.tseries.offsets.FY5253Quarter.is_on_offset.html
|
pandas.UInt64Index
|
`pandas.UInt64Index`
Immutable sequence used for indexing and alignment.
|
class pandas.UInt64Index(data=None, dtype=None, copy=False, name=None)[source]#
Immutable sequence used for indexing and alignment.
Deprecated since version 1.4.0: In pandas v2.0 UInt64Index will be removed and NumericIndex used instead.
UInt64Index will remain fully functional for the duration of pandas 1.x.
The basic object storing axis labels for all pandas objects.
UInt64Index is a special case of Index with purely unsigned integer labels.
Parameters
dataarray-like (1-dimensional)
dtypeNumPy dtype (default: uint64)
copyboolMake a copy of input ndarray.
nameobjectName to be stored in the index.
See also
IndexThe base pandas Index type.
NumericIndexIndex of numpy int/uint/float data.
Notes
An Index instance can only contain hashable objects.
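Examples
A minimal sketch; on pandas 1.4 and later, constructing UInt64Index directly may emit a deprecation warning.
>>> pd.UInt64Index([1, 2, 3])
UInt64Index([1, 2, 3], dtype='uint64')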
Attributes
None
Methods
None
|
reference/api/pandas.UInt64Index.html
|
pandas.tseries.offsets.CustomBusinessHour.isAnchored
|
pandas.tseries.offsets.CustomBusinessHour.isAnchored
|
CustomBusinessHour.isAnchored()#
|
reference/api/pandas.tseries.offsets.CustomBusinessHour.isAnchored.html
|
pandas.IntervalIndex.mid
|
pandas.IntervalIndex.mid
|
IntervalIndex.mid[source]#
|
reference/api/pandas.IntervalIndex.mid.html
|
pandas.tseries.offsets.QuarterBegin.is_month_start
|
`pandas.tseries.offsets.QuarterBegin.is_month_start`
Return boolean whether a timestamp occurs on the month start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
```
|
QuarterBegin.is_month_start()#
Return boolean whether a timestamp occurs on the month start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
|
reference/api/pandas.tseries.offsets.QuarterBegin.is_month_start.html
|
pandas.Period.weekday
|
`pandas.Period.weekday`
Day of the week the period lies in, with Monday=0 and Sunday=6.
```
>>> per = pd.Period('2017-12-31 22:00', 'H')
>>> per.dayofweek
6
```
|
Period.weekday#
Day of the week the period lies in, with Monday=0 and Sunday=6.
If the period frequency is lower than daily (e.g. hourly), and the
period spans over multiple days, the day at the start of the period is
used.
If the frequency is higher than daily (e.g. monthly), the last day
of the period is used.
Returns
intDay of the week.
See also
Period.dayofweekDay of the week the period lies in.
Period.weekdayAlias of Period.dayofweek.
Period.dayDay of the month.
Period.dayofyearDay of the year.
Examples
>>> per = pd.Period('2017-12-31 22:00', 'H')
>>> per.dayofweek
6
For periods that span over multiple days, the day at the beginning of
the period is returned.
>>> per = pd.Period('2017-12-31 22:00', '4H')
>>> per.dayofweek
6
>>> per.start_time.dayofweek
6
For periods with a frequency higher than days, the last day of the
period is returned.
>>> per = pd.Period('2018-01', 'M')
>>> per.dayofweek
2
>>> per.end_time.dayofweek
2
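Since weekday is an alias of dayofweek, calling it directly returns the same value.
>>> per = pd.Period('2017-12-31 22:00', 'H')
>>> per.weekday
6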
|
reference/api/pandas.Period.weekday.html
|
pandas.core.groupby.GroupBy.pct_change
|
`pandas.core.groupby.GroupBy.pct_change`
Calculate pct_change of each value to previous entry in group.
|
final GroupBy.pct_change(periods=1, fill_method='ffill', limit=None, freq=None, axis=0)[source]#
Calculate pct_change of each value to previous entry in group.
Returns
Series or DataFramePercentage changes within each group.
See also
Series.groupbyApply a function groupby to a Series.
DataFrame.groupbyApply a function groupby to each row or column of a DataFrame.
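Examples
A hedged sketch; within each group the first entry has no previous value, so it is NaN.
>>> df = pd.DataFrame({'key': ['a', 'a', 'b', 'b'], 'val': [1.0, 2.0, 3.0, 6.0]})
>>> df.groupby('key')['val'].pct_change()
0    NaN
1    1.0
2    NaN
3    1.0
Name: val, dtype: float64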
|
reference/api/pandas.core.groupby.GroupBy.pct_change.html
|
Testing
|
Testing
|
Assertion functions#
testing.assert_frame_equal(left, right[, ...])
Check that left and right DataFrame are equal.
testing.assert_series_equal(left, right[, ...])
Check that left and right Series are equal.
testing.assert_index_equal(left, right[, ...])
Check that left and right Index are equal.
testing.assert_extension_array_equal(left, right)
Check that left and right ExtensionArrays are equal.
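A minimal usage sketch of the assertion functions; check_dtype=False is one of the keyword options and relaxes the dtype comparison, so equal values stored as int64 and float64 compare equal.
>>> from pandas.testing import assert_frame_equal
>>> left = pd.DataFrame({'a': [1, 2]})
>>> right = pd.DataFrame({'a': [1.0, 2.0]})
>>> assert_frame_equal(left, right, check_dtype=False)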
Exceptions and warnings#
errors.AbstractMethodError(class_instance[, ...])
Raise this error instead of NotImplementedError for abstract methods.
errors.AccessorRegistrationWarning
Warning for attribute conflicts in accessor registration.
errors.AttributeConflictWarning
Warning raised when index attributes conflict when using HDFStore.
errors.CategoricalConversionWarning
Warning raised when reading a partially labeled Stata file using an iterator.
errors.ClosedFileError
Exception is raised when trying to perform an operation on a closed HDFStore file.
errors.CSSWarning
Warning raised when converting CSS styling fails.
errors.DatabaseError
Error raised when executing SQL with bad syntax or SQL that throws an error.
errors.DataError
Exception raised when performing an operation on non-numerical data.
errors.DtypeWarning
Warning raised when reading different dtypes in a column from a file.
errors.DuplicateLabelError
Error raised when an operation would introduce duplicate labels.
errors.EmptyDataError
Exception raised in pd.read_csv when empty data or header is encountered.
errors.IncompatibilityWarning
Warning raised when trying to use where criteria on an incompatible HDF5 file.
errors.IndexingError
Exception is raised when trying to index and there is a mismatch in dimensions.
errors.InvalidColumnName
Warning raised by to_stata when a column contains a non-valid Stata name.
errors.InvalidIndexError
Exception raised when attempting to use an invalid index key.
errors.IntCastingNaNError
Exception raised when converting (astype) an array with NaN to an integer type.
errors.MergeError
Exception raised when merging data.
errors.NullFrequencyError
Exception raised when a freq cannot be null.
errors.NumbaUtilError
Error raised for unsupported Numba engine routines.
errors.NumExprClobberingError
Exception raised when trying to use a built-in numexpr name as a variable name.
errors.OptionError
Exception raised for pandas.options.
errors.OutOfBoundsDatetime
Raised when the datetime is outside the range that can be represented.
errors.OutOfBoundsTimedelta
Raised when encountering a timedelta value that cannot be represented.
errors.ParserError
Exception that is raised by an error encountered in parsing file contents.
errors.ParserWarning
Warning raised when reading a file that doesn't use the default 'c' parser.
errors.PerformanceWarning
Warning raised when there is a possible performance impact.
errors.PossibleDataLossError
Exception raised when trying to open a HDFStore file when already opened.
errors.PossiblePrecisionLoss
Warning raised by to_stata on a column with a value outside or equal to int64.
errors.PyperclipException
Exception raised when clipboard functionality is unsupported.
errors.PyperclipWindowsException(message)
Exception raised when clipboard functionality is unsupported by Windows.
errors.SettingWithCopyError
Exception raised when trying to set on a copied slice from a DataFrame.
errors.SettingWithCopyWarning
Warning raised when trying to set on a copied slice from a DataFrame.
errors.SpecificationError
Exception raised by agg when the functions are ill-specified.
errors.UndefinedVariableError(name[, is_local])
Exception raised by query or eval when using an undefined variable name.
errors.UnsortedIndexError
Error raised when slicing a MultiIndex which has not been lexsorted.
errors.UnsupportedFunctionCall
Exception raised when attempting to call an unsupported numpy function.
errors.ValueLabelTypeMismatch
Warning raised by to_stata on a category column that contains non-string values.
Bug report function#
show_versions([as_json])
Provide useful information, important for bug reports.
Test suite runner#
test([extra_args])
Run the pandas test suite using pytest.
|
reference/testing.html
|
pandas.tseries.offsets.BQuarterEnd.is_month_start
|
`pandas.tseries.offsets.BQuarterEnd.is_month_start`
Return boolean whether a timestamp occurs on the month start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
```
|
BQuarterEnd.is_month_start()#
Return boolean whether a timestamp occurs on the month start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
|
reference/api/pandas.tseries.offsets.BQuarterEnd.is_month_start.html
|
pandas.tseries.offsets.YearBegin.n
|
pandas.tseries.offsets.YearBegin.n
|
YearBegin.n#
|
reference/api/pandas.tseries.offsets.YearBegin.n.html
|
pandas.tseries.offsets.Week.kwds
|
`pandas.tseries.offsets.Week.kwds`
Return a dict of extra parameters for the offset.
```
>>> pd.DateOffset(5).kwds
{}
```
|
Week.kwds#
Return a dict of extra parameters for the offset.
Examples
>>> pd.DateOffset(5).kwds
{}
>>> pd.offsets.FY5253Quarter().kwds
{'weekday': 0,
'startingMonth': 1,
'qtr_with_extra_week': 1,
'variation': 'nearest'}
|
reference/api/pandas.tseries.offsets.Week.kwds.html
|
pandas.core.resample.Resampler.fillna
|
`pandas.core.resample.Resampler.fillna`
Fill missing values introduced by upsampling.
```
>>> s = pd.Series([1, 2, 3],
... index=pd.date_range('20180101', periods=3, freq='h'))
>>> s
2018-01-01 00:00:00 1
2018-01-01 01:00:00 2
2018-01-01 02:00:00 3
Freq: H, dtype: int64
```
|
Resampler.fillna(method, limit=None)[source]#
Fill missing values introduced by upsampling.
In statistics, imputation is the process of replacing missing data with
substituted values [1]. When resampling data, missing values may
appear (e.g., when the resampling frequency is higher than the original
frequency).
Missing values that existed in the original data will
not be modified.
Parameters
method{‘pad’, ‘backfill’, ‘ffill’, ‘bfill’, ‘nearest’}Method to use for filling holes in resampled data
‘pad’ or ‘ffill’: use previous valid observation to fill gap
(forward fill).
‘backfill’ or ‘bfill’: use next valid observation to fill gap.
‘nearest’: use nearest valid observation to fill gap.
limitint, optionalLimit of how many consecutive missing values to fill.
Returns
Series or DataFrameAn upsampled Series or DataFrame with missing values filled.
See also
bfillBackward fill NaN values in the resampled data.
ffillForward fill NaN values in the resampled data.
nearestFill NaN values in the resampled data with nearest neighbor starting from center.
interpolateFill NaN values using interpolation.
Series.fillnaFill NaN values in the Series using the specified method, which can be ‘bfill’ and ‘ffill’.
DataFrame.fillnaFill NaN values in the DataFrame using the specified method, which can be ‘bfill’ and ‘ffill’.
References
1
https://en.wikipedia.org/wiki/Imputation_(statistics)
Examples
Resampling a Series:
>>> s = pd.Series([1, 2, 3],
... index=pd.date_range('20180101', periods=3, freq='h'))
>>> s
2018-01-01 00:00:00 1
2018-01-01 01:00:00 2
2018-01-01 02:00:00 3
Freq: H, dtype: int64
Without filling the missing values you get:
>>> s.resample("30min").asfreq()
2018-01-01 00:00:00 1.0
2018-01-01 00:30:00 NaN
2018-01-01 01:00:00 2.0
2018-01-01 01:30:00 NaN
2018-01-01 02:00:00 3.0
Freq: 30T, dtype: float64
>>> s.resample('30min').fillna("backfill")
2018-01-01 00:00:00 1
2018-01-01 00:30:00 2
2018-01-01 01:00:00 2
2018-01-01 01:30:00 3
2018-01-01 02:00:00 3
Freq: 30T, dtype: int64
>>> s.resample('15min').fillna("backfill", limit=2)
2018-01-01 00:00:00 1.0
2018-01-01 00:15:00 NaN
2018-01-01 00:30:00 2.0
2018-01-01 00:45:00 2.0
2018-01-01 01:00:00 2.0
2018-01-01 01:15:00 NaN
2018-01-01 01:30:00 3.0
2018-01-01 01:45:00 3.0
2018-01-01 02:00:00 3.0
Freq: 15T, dtype: float64
>>> s.resample('30min').fillna("pad")
2018-01-01 00:00:00 1
2018-01-01 00:30:00 1
2018-01-01 01:00:00 2
2018-01-01 01:30:00 2
2018-01-01 02:00:00 3
Freq: 30T, dtype: int64
>>> s.resample('30min').fillna("nearest")
2018-01-01 00:00:00 1
2018-01-01 00:30:00 2
2018-01-01 01:00:00 2
2018-01-01 01:30:00 3
2018-01-01 02:00:00 3
Freq: 30T, dtype: int64
Missing values present before the upsampling are not affected.
>>> sm = pd.Series([1, None, 3],
... index=pd.date_range('20180101', periods=3, freq='h'))
>>> sm
2018-01-01 00:00:00 1.0
2018-01-01 01:00:00 NaN
2018-01-01 02:00:00 3.0
Freq: H, dtype: float64
>>> sm.resample('30min').fillna('backfill')
2018-01-01 00:00:00 1.0
2018-01-01 00:30:00 NaN
2018-01-01 01:00:00 NaN
2018-01-01 01:30:00 3.0
2018-01-01 02:00:00 3.0
Freq: 30T, dtype: float64
>>> sm.resample('30min').fillna('pad')
2018-01-01 00:00:00 1.0
2018-01-01 00:30:00 1.0
2018-01-01 01:00:00 NaN
2018-01-01 01:30:00 NaN
2018-01-01 02:00:00 3.0
Freq: 30T, dtype: float64
>>> sm.resample('30min').fillna('nearest')
2018-01-01 00:00:00 1.0
2018-01-01 00:30:00 NaN
2018-01-01 01:00:00 NaN
2018-01-01 01:30:00 3.0
2018-01-01 02:00:00 3.0
Freq: 30T, dtype: float64
DataFrame resampling is done column-wise. All the same options are
available.
>>> df = pd.DataFrame({'a': [2, np.nan, 6], 'b': [1, 3, 5]},
... index=pd.date_range('20180101', periods=3,
... freq='h'))
>>> df
a b
2018-01-01 00:00:00 2.0 1
2018-01-01 01:00:00 NaN 3
2018-01-01 02:00:00 6.0 5
>>> df.resample('30min').fillna("bfill")
a b
2018-01-01 00:00:00 2.0 1
2018-01-01 00:30:00 NaN 3
2018-01-01 01:00:00 NaN 3
2018-01-01 01:30:00 6.0 5
2018-01-01 02:00:00 6.0 5
|
reference/api/pandas.core.resample.Resampler.fillna.html
|
pandas.tseries.offsets.YearBegin.kwds
|
`pandas.tseries.offsets.YearBegin.kwds`
Return a dict of extra parameters for the offset.
Examples
```
>>> pd.DateOffset(5).kwds
{}
```
|
YearBegin.kwds#
Return a dict of extra parameters for the offset.
Examples
>>> pd.DateOffset(5).kwds
{}
>>> pd.offsets.FY5253Quarter().kwds
{'weekday': 0,
'startingMonth': 1,
'qtr_with_extra_week': 1,
'variation': 'nearest'}
|
reference/api/pandas.tseries.offsets.YearBegin.kwds.html
|
pandas.tseries.offsets.SemiMonthBegin.kwds
|
`pandas.tseries.offsets.SemiMonthBegin.kwds`
Return a dict of extra parameters for the offset.
```
>>> pd.DateOffset(5).kwds
{}
```
|
SemiMonthBegin.kwds#
Return a dict of extra parameters for the offset.
Examples
>>> pd.DateOffset(5).kwds
{}
>>> pd.offsets.FY5253Quarter().kwds
{'weekday': 0,
'startingMonth': 1,
'qtr_with_extra_week': 1,
'variation': 'nearest'}
|
reference/api/pandas.tseries.offsets.SemiMonthBegin.kwds.html
|
pandas.tseries.offsets.CustomBusinessMonthBegin.n
|
pandas.tseries.offsets.CustomBusinessMonthBegin.n
|
CustomBusinessMonthBegin.n#
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthBegin.n.html
|
pandas.tseries.offsets.Micro.is_quarter_end
|
`pandas.tseries.offsets.Micro.is_quarter_end`
Return boolean whether a timestamp occurs on the quarter end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
```
|
Micro.is_quarter_end()#
Return boolean whether a timestamp occurs on the quarter end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
|
reference/api/pandas.tseries.offsets.Micro.is_quarter_end.html
|
pandas.DatetimeIndex.is_year_end
|
`pandas.DatetimeIndex.is_year_end`
Indicate whether the date is the last day of the year.
The same type as the original data with boolean values. Series will
have the same name and index. DatetimeIndex will have the same
name.
```
>>> dates = pd.Series(pd.date_range("2017-12-30", periods=3))
>>> dates
0 2017-12-30
1 2017-12-31
2 2018-01-01
dtype: datetime64[ns]
```
|
property DatetimeIndex.is_year_end[source]#
Indicate whether the date is the last day of the year.
Returns
Series or DatetimeIndexThe same type as the original data with boolean values. Series will
have the same name and index. DatetimeIndex will have the same
name.
See also
is_year_startSimilar property indicating the start of the year.
Examples
This method is available on Series with datetime values under
the .dt accessor, and directly on DatetimeIndex.
>>> dates = pd.Series(pd.date_range("2017-12-30", periods=3))
>>> dates
0 2017-12-30
1 2017-12-31
2 2018-01-01
dtype: datetime64[ns]
>>> dates.dt.is_year_end
0 False
1 True
2 False
dtype: bool
>>> idx = pd.date_range("2017-12-30", periods=3)
>>> idx
DatetimeIndex(['2017-12-30', '2017-12-31', '2018-01-01'],
dtype='datetime64[ns]', freq='D')
>>> idx.is_year_end
array([False, True, False])
|
reference/api/pandas.DatetimeIndex.is_year_end.html
|
pandas.Index.symmetric_difference
|
`pandas.Index.symmetric_difference`
Compute the symmetric difference of two Index objects.
```
>>> idx1 = pd.Index([1, 2, 3, 4])
>>> idx2 = pd.Index([2, 3, 4, 5])
>>> idx1.symmetric_difference(idx2)
Int64Index([1, 5], dtype='int64')
```
|
Index.symmetric_difference(other, result_name=None, sort=None)[source]#
Compute the symmetric difference of two Index objects.
Parameters
otherIndex or array-like
result_namestr
sortFalse or None, default NoneWhether to sort the resulting index. By default, the
values are attempted to be sorted, but any TypeError from
incomparable elements is caught by pandas.
None : Attempt to sort the result, but catch any TypeErrors
from comparing incomparable elements.
False : Do not sort the result.
Returns
symmetric_differenceIndex
Notes
symmetric_difference contains elements that appear in either
idx1 or idx2 but not both. Equivalent to the Index created by
idx1.difference(idx2) | idx2.difference(idx1) with duplicates
dropped.
Examples
>>> idx1 = pd.Index([1, 2, 3, 4])
>>> idx2 = pd.Index([2, 3, 4, 5])
>>> idx1.symmetric_difference(idx2)
Int64Index([1, 5], dtype='int64')
|
reference/api/pandas.Index.symmetric_difference.html
|
pandas.api.types.infer_dtype
|
`pandas.api.types.infer_dtype`
Return a string label of the type of a scalar or list-like of values.
Ignore NaN values when inferring the type.
```
>>> import datetime
>>> infer_dtype(['foo', 'bar'])
'string'
```
|
pandas.api.types.infer_dtype()#
Return a string label of the type of a scalar or list-like of values.
Parameters
valuescalar, list, ndarray, or pandas type
skipnabool, default TrueIgnore NaN values when inferring the type.
Returns
strDescribing the common type of the input data.
Results can include:
string
bytes
floating
integer
mixed-integer
mixed-integer-float
decimal
complex
categorical
boolean
datetime64
datetime
date
timedelta64
timedelta
time
period
mixed
unknown-array
Raises
TypeErrorIf ndarray-like but cannot infer the dtype
Notes
‘mixed’ is the catchall for anything that is not otherwise
specialized
‘mixed-integer-float’ are floats and integers
‘mixed-integer’ are integers mixed with non-integers
‘unknown-array’ is the catchall for something that is an array (has
a dtype attribute), but has a dtype unknown to pandas (e.g. external
extension array)
Examples
>>> import datetime
>>> infer_dtype(['foo', 'bar'])
'string'
>>> infer_dtype(['a', np.nan, 'b'], skipna=True)
'string'
>>> infer_dtype(['a', np.nan, 'b'], skipna=False)
'mixed'
>>> infer_dtype([b'foo', b'bar'])
'bytes'
>>> infer_dtype([1, 2, 3])
'integer'
>>> infer_dtype([1, 2, 3.5])
'mixed-integer-float'
>>> infer_dtype([1.0, 2.0, 3.5])
'floating'
>>> infer_dtype(['a', 1])
'mixed-integer'
>>> infer_dtype([Decimal(1), Decimal(2.0)])
'decimal'
>>> infer_dtype([True, False])
'boolean'
>>> infer_dtype([True, False, np.nan])
'boolean'
>>> infer_dtype([pd.Timestamp('20130101')])
'datetime'
>>> infer_dtype([datetime.date(2013, 1, 1)])
'date'
>>> infer_dtype([np.datetime64('2013-01-01')])
'datetime64'
>>> infer_dtype([datetime.timedelta(0, 1, 1)])
'timedelta'
>>> infer_dtype(pd.Series(list('aabc')).astype('category'))
'categorical'
|
reference/api/pandas.api.types.infer_dtype.html
|
pandas.Series.sparse
|
`pandas.Series.sparse`
Accessor for sparse-dtype data, exposing sparse-specific attributes and methods.
|
Series.sparse()[source]#
Accessor for sparse-dtype data, exposing sparse-specific attributes and methods.
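Examples
A minimal sketch using the sparse accessor; density is the fraction of points that are not fill_value.
>>> s = pd.Series([0, 0, 1, 2], dtype="Sparse[int]")
>>> s.sparse.density
0.5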
|
reference/api/pandas.Series.sparse.html
|
pandas.DataFrame.cummax
|
`pandas.DataFrame.cummax`
Return cumulative maximum over a DataFrame or Series axis.
Returns a DataFrame or Series of the same size containing the cumulative
maximum.
```
>>> s = pd.Series([2, np.nan, 5, -1, 0])
>>> s
0 2.0
1 NaN
2 5.0
3 -1.0
4 0.0
dtype: float64
```
|
DataFrame.cummax(axis=None, skipna=True, *args, **kwargs)[source]#
Return cumulative maximum over a DataFrame or Series axis.
Returns a DataFrame or Series of the same size containing the cumulative
maximum.
Parameters
axis{0 or ‘index’, 1 or ‘columns’}, default 0The index or the name of the axis. 0 is equivalent to None or ‘index’.
For Series this parameter is unused and defaults to 0.
skipnabool, default TrueExclude NA/null values. If an entire row/column is NA, the result
will be NA.
*args, **kwargsAdditional keywords have no effect but might be accepted for
compatibility with NumPy.
Returns
Series or DataFrameReturn cumulative maximum of Series or DataFrame.
See also
core.window.expanding.Expanding.maxSimilar functionality but ignores NaN values.
DataFrame.maxReturn the maximum over DataFrame axis.
DataFrame.cummaxReturn cumulative maximum over DataFrame axis.
DataFrame.cumminReturn cumulative minimum over DataFrame axis.
DataFrame.cumsumReturn cumulative sum over DataFrame axis.
DataFrame.cumprodReturn cumulative product over DataFrame axis.
Examples
Series
>>> s = pd.Series([2, np.nan, 5, -1, 0])
>>> s
0 2.0
1 NaN
2 5.0
3 -1.0
4 0.0
dtype: float64
By default, NA values are ignored.
>>> s.cummax()
0 2.0
1 NaN
2 5.0
3 5.0
4 5.0
dtype: float64
To include NA values in the operation, use skipna=False
>>> s.cummax(skipna=False)
0 2.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
DataFrame
>>> df = pd.DataFrame([[2.0, 1.0],
... [3.0, np.nan],
... [1.0, 0.0]],
... columns=list('AB'))
>>> df
A B
0 2.0 1.0
1 3.0 NaN
2 1.0 0.0
By default, iterates over rows and finds the maximum
in each column. This is equivalent to axis=None or axis='index'.
>>> df.cummax()
A B
0 2.0 1.0
1 3.0 NaN
2 3.0 1.0
To iterate over columns and find the maximum in each row,
use axis=1
>>> df.cummax(axis=1)
A B
0 2.0 2.0
1 3.0 NaN
2 1.0 1.0
|
reference/api/pandas.DataFrame.cummax.html
|
pandas.PeriodIndex.day
|
`pandas.PeriodIndex.day`
The days of the period.
|
property PeriodIndex.day[source]#
The days of the period.
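Examples
A minimal sketch; for daily periods the attribute returns the day of the month for each period (the exact repr of the returned index may vary across pandas versions):
>>> idx = pd.period_range('2023-01-10', periods=3, freq='D')
>>> idx.day
Int64Index([10, 11, 12], dtype='int64')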
|
reference/api/pandas.PeriodIndex.day.html
|
pandas.tseries.offsets.Tick.kwds
|
`pandas.tseries.offsets.Tick.kwds`
Return a dict of extra parameters for the offset.
Examples
```
>>> pd.DateOffset(5).kwds
{}
```
|
Tick.kwds#
Return a dict of extra parameters for the offset.
Examples
>>> pd.DateOffset(5).kwds
{}
>>> pd.offsets.FY5253Quarter().kwds
{'weekday': 0,
'startingMonth': 1,
'qtr_with_extra_week': 1,
'variation': 'nearest'}
|
reference/api/pandas.tseries.offsets.Tick.kwds.html
|
pandas.tseries.offsets.Easter.apply
|
pandas.tseries.offsets.Easter.apply
|
Easter.apply()#
|
reference/api/pandas.tseries.offsets.Easter.apply.html
|
pandas.Series.reindex
|
`pandas.Series.reindex`
Conform Series to new index with optional filling logic.
```
>>> index = ['Firefox', 'Chrome', 'Safari', 'IE10', 'Konqueror']
>>> df = pd.DataFrame({'http_status': [200, 200, 404, 404, 301],
... 'response_time': [0.04, 0.02, 0.07, 0.08, 1.0]},
... index=index)
>>> df
http_status response_time
Firefox 200 0.04
Chrome 200 0.02
Safari 404 0.07
IE10 404 0.08
Konqueror 301 1.00
```
|
Series.reindex(*args, **kwargs)[source]#
Conform Series to new index with optional filling logic.
Places NA/NaN in locations having no value in the previous index. A new object
is produced unless the new index is equivalent to the current one and
copy=False.
Parameters
index : array-like, optional
    New labels / index to conform to, should be specified using
    keywords. Preferably an Index object to avoid duplicating data.
method : {None, ‘backfill’/‘bfill’, ‘pad’/‘ffill’, ‘nearest’}
    Method to use for filling holes in reindexed DataFrame.
    Please note: this is only applicable to DataFrames/Series with a
    monotonically increasing/decreasing index.
    None (default): don’t fill gaps
    pad / ffill: Propagate last valid observation forward to next valid.
    backfill / bfill: Use next valid observation to fill gap.
    nearest: Use nearest valid observations to fill gap.
copy : bool, default True
    Return a new object, even if the passed indexes are the same.
level : int or name
    Broadcast across a level, matching Index values on the
    passed MultiIndex level.
fill_value : scalar, default np.NaN
    Value to use for missing values. Defaults to NaN, but can be any
    “compatible” value.
limit : int, default None
    Maximum number of consecutive elements to forward or backward fill.
tolerance : optional
    Maximum distance between original and new labels for inexact
    matches. The values of the index at the matching locations must
    satisfy the equation abs(index[indexer] - target) <= tolerance.
    Tolerance may be a scalar value, which applies the same tolerance
    to all values, or list-like, which applies variable tolerance per
    element. List-like includes list, tuple, array, Series, and must be
    the same size as the index and its dtype must exactly match the
    index’s type.
Returns
Series with changed index.
See also
DataFrame.set_index : Set row labels.
DataFrame.reset_index : Remove row labels or move them to new columns.
DataFrame.reindex_like : Change to same indices as other DataFrame.
Examples
DataFrame.reindex supports two calling conventions
(index=index_labels, columns=column_labels, ...)
(labels, axis={'index', 'columns'}, ...)
We highly recommend using keyword arguments to clarify your
intent.
Create a dataframe with some fictional data.
>>> index = ['Firefox', 'Chrome', 'Safari', 'IE10', 'Konqueror']
>>> df = pd.DataFrame({'http_status': [200, 200, 404, 404, 301],
... 'response_time': [0.04, 0.02, 0.07, 0.08, 1.0]},
... index=index)
>>> df
http_status response_time
Firefox 200 0.04
Chrome 200 0.02
Safari 404 0.07
IE10 404 0.08
Konqueror 301 1.00
Create a new index and reindex the dataframe. By default
values in the new index that do not have corresponding
records in the dataframe are assigned NaN.
>>> new_index = ['Safari', 'Iceweasel', 'Comodo Dragon', 'IE10',
... 'Chrome']
>>> df.reindex(new_index)
http_status response_time
Safari 404.0 0.07
Iceweasel NaN NaN
Comodo Dragon NaN NaN
IE10 404.0 0.08
Chrome 200.0 0.02
We can fill in the missing values by passing a value to
the keyword fill_value. Because the index is not monotonically
increasing or decreasing, we cannot use arguments to the keyword
method to fill the NaN values.
>>> df.reindex(new_index, fill_value=0)
http_status response_time
Safari 404 0.07
Iceweasel 0 0.00
Comodo Dragon 0 0.00
IE10 404 0.08
Chrome 200 0.02
>>> df.reindex(new_index, fill_value='missing')
http_status response_time
Safari 404 0.07
Iceweasel missing missing
Comodo Dragon missing missing
IE10 404 0.08
Chrome 200 0.02
We can also reindex the columns.
>>> df.reindex(columns=['http_status', 'user_agent'])
http_status user_agent
Firefox 200 NaN
Chrome 200 NaN
Safari 404 NaN
IE10 404 NaN
Konqueror 301 NaN
Or we can use “axis-style” keyword arguments
>>> df.reindex(['http_status', 'user_agent'], axis="columns")
http_status user_agent
Firefox 200 NaN
Chrome 200 NaN
Safari 404 NaN
IE10 404 NaN
Konqueror 301 NaN
To further illustrate the filling functionality in
reindex, we will create a dataframe with a
monotonically increasing index (for example, a sequence
of dates).
>>> date_index = pd.date_range('1/1/2010', periods=6, freq='D')
>>> df2 = pd.DataFrame({"prices": [100, 101, np.nan, 100, 89, 88]},
... index=date_index)
>>> df2
prices
2010-01-01 100.0
2010-01-02 101.0
2010-01-03 NaN
2010-01-04 100.0
2010-01-05 89.0
2010-01-06 88.0
Suppose we decide to expand the dataframe to cover a wider
date range.
>>> date_index2 = pd.date_range('12/29/2009', periods=10, freq='D')
>>> df2.reindex(date_index2)
prices
2009-12-29 NaN
2009-12-30 NaN
2009-12-31 NaN
2010-01-01 100.0
2010-01-02 101.0
2010-01-03 NaN
2010-01-04 100.0
2010-01-05 89.0
2010-01-06 88.0
2010-01-07 NaN
The index entries that did not have a value in the original data frame
(for example, ‘2009-12-29’) are by default filled with NaN.
If desired, we can fill in the missing values using one of several
options.
For example, to back-propagate the last valid value to fill the NaN
values, pass bfill as an argument to the method keyword.
>>> df2.reindex(date_index2, method='bfill')
prices
2009-12-29 100.0
2009-12-30 100.0
2009-12-31 100.0
2010-01-01 100.0
2010-01-02 101.0
2010-01-03 NaN
2010-01-04 100.0
2010-01-05 89.0
2010-01-06 88.0
2010-01-07 NaN
Please note that the NaN value present in the original dataframe
(at index value 2010-01-03) will not be filled by any of the
value propagation schemes. This is because filling while reindexing
does not look at dataframe values, but only compares the original and
desired indexes. If you do want to fill in the NaN values present
in the original dataframe, use the fillna() method.
See the user guide for more.
|
reference/api/pandas.Series.reindex.html
|
pandas.tseries.offsets.WeekOfMonth.freqstr
|
`pandas.tseries.offsets.WeekOfMonth.freqstr`
Return a string representing the frequency.
Examples
```
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
```
|
WeekOfMonth.freqstr#
Return a string representing the frequency.
Examples
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
>>> pd.offsets.BusinessHour(2).freqstr
'2BH'
>>> pd.offsets.Nano().freqstr
'N'
>>> pd.offsets.Nano(-3).freqstr
'-3N'
|
reference/api/pandas.tseries.offsets.WeekOfMonth.freqstr.html
|
pandas.tseries.offsets.Milli.rollforward
|
`pandas.tseries.offsets.Milli.rollforward`
Roll provided date forward to next offset only if not on offset.
|
Milli.rollforward()#
Roll provided date forward to next offset only if not on offset.
Returns
Timestamp
    Rolled timestamp if not on offset, otherwise unchanged timestamp.
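Examples
A minimal sketch of the rolling semantics, using the calendar-based MonthEnd offset as an illustrative stand-in (the method is shared by all offsets, including Milli); a timestamp already on offset is returned unchanged:
>>> pd.offsets.MonthEnd().rollforward(pd.Timestamp('2023-01-15'))
Timestamp('2023-01-31 00:00:00')
>>> pd.offsets.MonthEnd().rollforward(pd.Timestamp('2023-01-31'))
Timestamp('2023-01-31 00:00:00')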
|
reference/api/pandas.tseries.offsets.Milli.rollforward.html
|
API reference
|
This page gives an overview of all public pandas objects, functions and
methods. All classes and functions exposed in the pandas.* namespace are public.
Some subpackages are public, including pandas.errors,
pandas.plotting, and pandas.testing. Public functions in the
pandas.io and pandas.tseries submodules are mentioned in
the documentation. The pandas.api.types subpackage holds some
public functions related to data types in pandas.
Warning
The pandas.core, pandas.compat, and pandas.util top-level modules are PRIVATE. Stable functionality in such modules is not guaranteed.
Input/output
Pickling
Flat file
Clipboard
Excel
JSON
HTML
XML
Latex
HDFStore: PyTables (HDF5)
Feather
Parquet
ORC
SAS
SPSS
SQL
Google BigQuery
STATA
General functions
Data manipulations
Top-level missing data
Top-level dealing with numeric data
Top-level dealing with datetimelike data
Top-level dealing with Interval data
Top-level evaluation
Hashing
Importing from other DataFrame libraries
Series
Constructor
Attributes
Conversion
Indexing, iteration
Binary operator functions
Function application, GroupBy & window
Computations / descriptive stats
Reindexing / selection / label manipulation
Missing data handling
Reshaping, sorting
Combining / comparing / joining / merging
Time Series-related
Accessors
Plotting
Serialization / IO / conversion
DataFrame
Constructor
Attributes and underlying data
Conversion
Indexing, iteration
Binary operator functions
Function application, GroupBy & window
Computations / descriptive stats
Reindexing / selection / label manipulation
Missing data handling
Reshaping, sorting, transposing
Combining / comparing / joining / merging
Time Series-related
Flags
Metadata
Plotting
Sparse accessor
Serialization / IO / conversion
pandas arrays, scalars, and data types
Objects
Utilities
Index objects
Index
Numeric Index
CategoricalIndex
IntervalIndex
MultiIndex
DatetimeIndex
TimedeltaIndex
PeriodIndex
Date offsets
DateOffset
BusinessDay
BusinessHour
CustomBusinessDay
CustomBusinessHour
MonthEnd
MonthBegin
BusinessMonthEnd
BusinessMonthBegin
CustomBusinessMonthEnd
CustomBusinessMonthBegin
SemiMonthEnd
SemiMonthBegin
Week
WeekOfMonth
LastWeekOfMonth
BQuarterEnd
BQuarterBegin
QuarterEnd
QuarterBegin
BYearEnd
BYearBegin
YearEnd
YearBegin
FY5253
FY5253Quarter
Easter
Tick
Day
Hour
Minute
Second
Milli
Micro
Nano
Frequencies
pandas.tseries.frequencies.to_offset
Window
Rolling window functions
Weighted window functions
Expanding window functions
Exponentially-weighted window functions
Window indexer
GroupBy
Indexing, iteration
Function application
Computations / descriptive stats
Resampling
Indexing, iteration
Function application
Upsampling
Computations / descriptive stats
Style
Styler constructor
Styler properties
Style application
Builtin styles
Style export and import
Plotting
pandas.plotting.andrews_curves
pandas.plotting.autocorrelation_plot
pandas.plotting.bootstrap_plot
pandas.plotting.boxplot
pandas.plotting.deregister_matplotlib_converters
pandas.plotting.lag_plot
pandas.plotting.parallel_coordinates
pandas.plotting.plot_params
pandas.plotting.radviz
pandas.plotting.register_matplotlib_converters
pandas.plotting.scatter_matrix
pandas.plotting.table
Options and settings
Working with options
Extensions
pandas.api.extensions.register_extension_dtype
pandas.api.extensions.register_dataframe_accessor
pandas.api.extensions.register_series_accessor
pandas.api.extensions.register_index_accessor
pandas.api.extensions.ExtensionDtype
pandas.api.extensions.ExtensionArray
pandas.arrays.PandasArray
pandas.api.indexers.check_array_indexer
Testing
Assertion functions
Exceptions and warnings
Bug report function
Test suite runner
|
reference/index.html
|
pandas.Timedelta.isoformat
|
`pandas.Timedelta.isoformat`
Format the Timedelta as ISO 8601 Duration.
```
>>> td = pd.Timedelta(days=6, minutes=50, seconds=3,
... milliseconds=10, microseconds=10, nanoseconds=12)
```
|
Timedelta.isoformat()#
Format the Timedelta as ISO 8601 Duration.
P[n]Y[n]M[n]DT[n]H[n]M[n]S, where the [n]s are replaced by the
values. See https://en.wikipedia.org/wiki/ISO_8601#Durations.
Returns
str
See also
Timestamp.isoformat : Convert the given Timestamp object to ISO 8601 format.
Notes
The longest component is days, whose value may be larger than
365.
Every component is always included, even if its value is 0.
Pandas uses nanosecond precision, so up to 9 decimal places may
be included in the seconds component.
Trailing 0s are removed from the seconds component after the decimal.
We do not zero-pad components, so it’s …T5H…, not …T05H…
Examples
>>> td = pd.Timedelta(days=6, minutes=50, seconds=3,
... milliseconds=10, microseconds=10, nanoseconds=12)
>>> td.isoformat()
'P6DT0H50M3.010010012S'
>>> pd.Timedelta(hours=1, seconds=10).isoformat()
'P0DT1H0M10S'
>>> pd.Timedelta(days=500.5).isoformat()
'P500DT12H0M0S'
|
reference/api/pandas.Timedelta.isoformat.html
|
pandas.tseries.offsets.Tick.is_quarter_start
|
`pandas.tseries.offsets.Tick.is_quarter_start`
Return boolean whether a timestamp occurs on the quarter start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
```
|
Tick.is_quarter_start()#
Return boolean whether a timestamp occurs on the quarter start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
|
reference/api/pandas.tseries.offsets.Tick.is_quarter_start.html
|
Resampling
|
Resampling
|
Resampler objects are returned by resample calls: pandas.DataFrame.resample(), pandas.Series.resample().
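A minimal sketch of obtaining a Resampler and aggregating with it:
>>> s = pd.Series(range(6), index=pd.date_range('2023-01-01', periods=6, freq='T'))
>>> s.resample('2T').sum()
2023-01-01 00:00:00 1
2023-01-01 00:02:00 5
2023-01-01 00:04:00 9
Freq: 2T, dtype: int64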
Indexing, iteration#
Resampler.__iter__()
Groupby iterator.
Resampler.groups
Dict {group name -> group labels}.
Resampler.indices
Dict {group name -> group indices}.
Resampler.get_group(name[, obj])
Construct DataFrame from group with provided name.
Function application#
Resampler.apply([func])
Aggregate using one or more operations over the specified axis.
Resampler.aggregate([func])
Aggregate using one or more operations over the specified axis.
Resampler.transform(arg, *args, **kwargs)
Call function producing a like-indexed Series on each group.
Resampler.pipe(func, *args, **kwargs)
Apply a func with arguments to this Resampler object and return its result.
Upsampling#
Resampler.ffill([limit])
Forward fill the values.
Resampler.backfill([limit])
(DEPRECATED) Backward fill the values.
Resampler.bfill([limit])
Backward fill the new missing values in the resampled data.
Resampler.pad([limit])
(DEPRECATED) Forward fill the values.
Resampler.nearest([limit])
Resample by using the nearest value.
Resampler.fillna(method[, limit])
Fill missing values introduced by upsampling.
Resampler.asfreq([fill_value])
Return the values at the new freq, essentially a reindex.
Resampler.interpolate([method, axis, limit, ...])
Interpolate values according to different methods.
Computations / descriptive stats#
Resampler.count()
Compute count of group, excluding missing values.
Resampler.nunique(*args, **kwargs)
Return number of unique elements in the group.
Resampler.first([numeric_only, min_count])
Compute the first non-null entry of each column.
Resampler.last([numeric_only, min_count])
Compute the last non-null entry of each column.
Resampler.max([numeric_only, min_count])
Compute max of group values.
Resampler.mean([numeric_only])
Compute mean of groups, excluding missing values.
Resampler.median([numeric_only])
Compute median of groups, excluding missing values.
Resampler.min([numeric_only, min_count])
Compute min of group values.
Resampler.ohlc(*args, **kwargs)
Compute open, high, low and close values of a group, excluding missing values.
Resampler.prod([numeric_only, min_count])
Compute prod of group values.
Resampler.size()
Compute group sizes.
Resampler.sem([ddof, numeric_only])
Compute standard error of the mean of groups, excluding missing values.
Resampler.std([ddof, numeric_only])
Compute standard deviation of groups, excluding missing values.
Resampler.sum([numeric_only, min_count])
Compute sum of group values.
Resampler.var([ddof, numeric_only])
Compute variance of groups, excluding missing values.
Resampler.quantile([q])
Return value at the given quantile.
|
reference/resampling.html
|
pandas.tseries.offsets.FY5253Quarter.rollforward
|
`pandas.tseries.offsets.FY5253Quarter.rollforward`
Roll provided date forward to next offset only if not on offset.
Rolled timestamp if not on offset, otherwise unchanged timestamp.
|
FY5253Quarter.rollforward()#
Roll provided date forward to next offset only if not on offset.
Returns
Timestamp
    Rolled timestamp if not on offset, otherwise unchanged timestamp.
|
reference/api/pandas.tseries.offsets.FY5253Quarter.rollforward.html
|
pandas.DataFrame.mod
|
`pandas.DataFrame.mod`
Get Modulo of dataframe and other, element-wise (binary operator mod).
Equivalent to dataframe % other, but with support to substitute a fill_value
for missing data in one of the inputs. With reverse version, rmod.
```
>>> df = pd.DataFrame({'angles': [0, 3, 4],
... 'degrees': [360, 180, 360]},
... index=['circle', 'triangle', 'rectangle'])
>>> df
angles degrees
circle 0 360
triangle 3 180
rectangle 4 360
```
|
DataFrame.mod(other, axis='columns', level=None, fill_value=None)[source]#
Get Modulo of dataframe and other, element-wise (binary operator mod).
Equivalent to dataframe % other, but with support to substitute a fill_value
for missing data in one of the inputs. With reverse version, rmod.
Among flexible wrappers (add, sub, mul, div, mod, pow) to
arithmetic operators: +, -, *, /, //, %, **.
Parameters
other : scalar, sequence, Series, dict or DataFrame
    Any single or multiple element data structure, or list-like object.
axis : {0 or ‘index’, 1 or ‘columns’}
    Whether to compare by the index (0 or ‘index’) or columns
    (1 or ‘columns’). For Series input, axis to match Series index on.
level : int or label
    Broadcast across a level, matching Index values on the
    passed MultiIndex level.
fill_value : float or None, default None
    Fill existing missing (NaN) values, and any new element needed for
    successful DataFrame alignment, with this value before computation.
    If data in both corresponding DataFrame locations is missing
    the result will be missing.
Returns
DataFrame
    Result of the arithmetic operation.
See also
DataFrame.add : Add DataFrames.
DataFrame.sub : Subtract DataFrames.
DataFrame.mul : Multiply DataFrames.
DataFrame.div : Divide DataFrames (float division).
DataFrame.truediv : Divide DataFrames (float division).
DataFrame.floordiv : Divide DataFrames (integer division).
DataFrame.mod : Calculate modulo (remainder after division).
DataFrame.pow : Calculate exponential power.
Notes
Mismatched indices will be unioned together.
Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4],
... 'degrees': [360, 180, 360]},
... index=['circle', 'triangle', 'rectangle'])
>>> df
angles degrees
circle 0 360
triangle 3 180
rectangle 4 360
Add a scalar with the operator version, which returns the same results.
>>> df + 1
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
>>> df.add(1)
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
Divide by constant with reverse version.
>>> df.div(10)
angles degrees
circle 0.0 36.0
triangle 0.3 18.0
rectangle 0.4 36.0
>>> df.rdiv(10)
angles degrees
circle inf 0.027778
triangle 3.333333 0.055556
rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2]
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub([1, 2], axis='columns')
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
... axis='index')
angles degrees
circle -1 359
triangle 2 179
rectangle 3 359
Multiply a dictionary by axis.
>>> df.mul({'angles': 0, 'degrees': 2})
angles degrees
circle 0 720
triangle 0 360
rectangle 0 720
>>> df.mul({'circle': 0, 'triangle': 2, 'rectangle': 3}, axis='index')
angles degrees
circle 0 0
triangle 6 360
rectangle 12 1080
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]},
... index=['circle', 'triangle', 'rectangle'])
>>> other
angles
circle 0
triangle 3
rectangle 4
>>> df * other
angles degrees
circle 0 NaN
triangle 9 NaN
rectangle 16 NaN
>>> df.mul(other, fill_value=0)
angles degrees
circle 0 0.0
triangle 9 0.0
rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
... 'degrees': [360, 180, 360, 360, 540, 720]},
... index=[['A', 'A', 'A', 'B', 'B', 'B'],
... ['circle', 'triangle', 'rectangle',
... 'square', 'pentagon', 'hexagon']])
>>> df_multindex
angles degrees
A circle 0 360
triangle 3 180
rectangle 4 360
B square 4 360
pentagon 5 540
hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0)
angles degrees
A circle NaN 1.0
triangle 1.0 1.0
rectangle 1.0 1.0
B square 0.0 0.0
pentagon 0.0 0.0
hexagon 0.0 0.0
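The shared examples above exercise the sibling wrappers; a minimal mod-specific sketch with the same df:
>>> df.mod(3)
angles degrees
circle 0 0
triangle 0 0
rectangle 1 0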
|
reference/api/pandas.DataFrame.mod.html
|
pandas.tseries.offsets.Day.delta
|
pandas.tseries.offsets.Day.delta
|
Day.delta#
|
reference/api/pandas.tseries.offsets.Day.delta.html
|
pandas.UInt32Dtype
|
`pandas.UInt32Dtype`
An ExtensionDtype for uint32 integer data.
Changed in version 1.0.0: Now uses pandas.NA as its missing value,
rather than numpy.nan.
|
class pandas.UInt32Dtype[source]#
An ExtensionDtype for uint32 integer data.
Changed in version 1.0.0: Now uses pandas.NA as its missing value,
rather than numpy.nan.
Attributes
None
Methods
None
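Examples
A minimal sketch constructing a nullable uint32 array via the 'UInt32' string alias:
>>> pd.array([1, 2, None], dtype='UInt32')
<IntegerArray>
[1, 2, <NA>]
Length: 3, dtype: UInt32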
|
reference/api/pandas.UInt32Dtype.html
|
pandas.HDFStore.get
|
`pandas.HDFStore.get`
Retrieve pandas object stored in file.
Same type as object stored in file.
|
HDFStore.get(key)[source]#
Retrieve pandas object stored in file.
Parameters
key : str
Returns
object
    Same type as object stored in file.
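Examples
A minimal round-trip sketch; requires the optional PyTables dependency, and 'store.h5' is a hypothetical path:
>>> store = pd.HDFStore('store.h5')
>>> store.put('data', pd.DataFrame({'a': [1, 2]}))
>>> store.get('data')
a
0 1
1 2
>>> store.close()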
|
reference/api/pandas.HDFStore.get.html
|