title | summary | context | path
---|---|---|---|
pandas.Series.rank
|
`pandas.Series.rank`
Compute numerical data ranks (1 through n) along axis.
By default, equal values are assigned a rank that is the average of the
ranks of those values.
```
>>> df = pd.DataFrame(data={'Animal': ['cat', 'penguin', 'dog',
... 'spider', 'snake'],
... 'Number_legs': [4, 2, 4, 8, np.nan]})
>>> df
Animal Number_legs
0 cat 4.0
1 penguin 2.0
2 dog 4.0
3 spider 8.0
4 snake NaN
```
|
Series.rank(axis=0, method='average', numeric_only=_NoDefault.no_default, na_option='keep', ascending=True, pct=False)[source]#
Compute numerical data ranks (1 through n) along axis.
By default, equal values are assigned a rank that is the average of the
ranks of those values.
Parameters
axis : {0 or 'index', 1 or 'columns'}, default 0
    Index to direct ranking. For Series this parameter is unused and defaults to 0.
method : {'average', 'min', 'max', 'first', 'dense'}, default 'average'
    How to rank the group of records that have the same value (i.e. ties):
    average: average rank of the group
    min: lowest rank in the group
    max: highest rank in the group
    first: ranks assigned in order they appear in the array
    dense: like 'min', but rank always increases by 1 between groups.
numeric_only : bool, optional
    For DataFrame objects, rank only numeric columns if set to True.
na_option : {'keep', 'top', 'bottom'}, default 'keep'
    How to rank NaN values:
    keep: assign NaN rank to NaN values
    top: assign lowest rank to NaN values
    bottom: assign highest rank to NaN values
ascending : bool, default True
    Whether or not the elements should be ranked in ascending order.
pct : bool, default False
    Whether or not to display the returned rankings in percentile form.
Returns
same type as caller
    Return a Series or DataFrame with data ranks as values.
See also
core.groupby.GroupBy.rank
    Rank of values within each group.
Examples
>>> df = pd.DataFrame(data={'Animal': ['cat', 'penguin', 'dog',
... 'spider', 'snake'],
... 'Number_legs': [4, 2, 4, 8, np.nan]})
>>> df
Animal Number_legs
0 cat 4.0
1 penguin 2.0
2 dog 4.0
3 spider 8.0
4 snake NaN
Ties are assigned the mean of the ranks (by default) for the group.
>>> s = pd.Series(range(5), index=list("abcde"))
>>> s["d"] = s["b"]
>>> s.rank()
a 1.0
b 2.5
c 4.0
d 2.5
e 5.0
dtype: float64
The following example shows how the method behaves with the above
parameters:
- default_rank: this is the default behaviour obtained without using any parameter.
- max_rank: setting method = 'max', records that have the same values are ranked using the highest rank (e.g. since 'cat' and 'dog' are both in the 2nd and 3rd position, rank 3 is assigned).
- NA_bottom: choosing na_option = 'bottom', records with NaN values are placed at the bottom of the ranking.
- pct_rank: setting pct = True expresses the ranking as a percentile rank.
>>> df['default_rank'] = df['Number_legs'].rank()
>>> df['max_rank'] = df['Number_legs'].rank(method='max')
>>> df['NA_bottom'] = df['Number_legs'].rank(na_option='bottom')
>>> df['pct_rank'] = df['Number_legs'].rank(pct=True)
>>> df
Animal Number_legs default_rank max_rank NA_bottom pct_rank
0 cat 4.0 2.5 3.0 2.5 0.625
1 penguin 2.0 1.0 1.0 1.0 0.250
2 dog 4.0 2.5 3.0 2.5 0.625
3 spider 8.0 4.0 4.0 4.0 1.000
4 snake NaN NaN NaN 5.0 NaN
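For completeness, a minimal sketch (not part of the original page) of the remaining tie-breaking methods on the same column; output reprs may vary slightly across pandas versions:
```
>>> df['Number_legs'].rank(method='dense')
0    2.0
1    1.0
2    2.0
3    3.0
4    NaN
Name: Number_legs, dtype: float64
>>> df['Number_legs'].rank(method='first')
0    2.0
1    1.0
2    3.0
3    4.0
4    NaN
Name: Number_legs, dtype: float64
```
With 'dense' the two 4.0 values share rank 2 and spider follows at 3; with 'first' they receive ranks 2 and 3 in order of appearance.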
|
reference/api/pandas.Series.rank.html
|
pandas.tseries.offsets.SemiMonthBegin.is_anchored
|
`pandas.tseries.offsets.SemiMonthBegin.is_anchored`
Return boolean whether the frequency is a unit frequency (n=1).
Examples
```
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
```
|
SemiMonthBegin.is_anchored()#
Return boolean whether the frequency is a unit frequency (n=1).
Examples
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
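The docstring example uses pd.DateOffset; a minimal sketch (an assumption, not from the original page) applying the same check to SemiMonthBegin itself:
```
>>> pd.offsets.SemiMonthBegin().is_anchored()
True
>>> pd.offsets.SemiMonthBegin(n=2).is_anchored()
False
```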
|
reference/api/pandas.tseries.offsets.SemiMonthBegin.is_anchored.html
|
pandas.arrays.PeriodArray
|
`pandas.arrays.PeriodArray`
Pandas ExtensionArray for storing Period data.
Users should use period_array() to create new instances.
Alternatively, array() can be used to create new instances
from a sequence of Period scalars.
|
class pandas.arrays.PeriodArray(values, dtype=None, freq=None, copy=False)[source]#
Pandas ExtensionArray for storing Period data.
Users should use period_array() to create new instances.
Alternatively, array() can be used to create new instances
from a sequence of Period scalars.
Parameters
values : Union[PeriodArray, Series[period], ndarray[int], PeriodIndex]
    The data to store. These should be arrays that can be directly converted to ordinals without inference or copy (PeriodArray, ndarray[int64]), or a box around such an array (Series[period], PeriodIndex).
dtype : PeriodDtype, optional
    A PeriodDtype instance from which to extract a freq. If both freq and dtype are specified, then the frequencies must match.
freq : str or DateOffset
    The freq to use for the array. Mostly applicable when values is an ndarray of integers, when freq is required. When values is a PeriodArray (or box around), it's checked that values.freq matches freq.
copy : bool, default False
    Whether to copy the ordinals before storing.
See also
Period
    Represents a period of time.
PeriodIndex
    Immutable Index for period data.
period_range
    Create a fixed-frequency PeriodArray.
array
    Construct a pandas array.
Notes
There are two components to a PeriodArray:
- ordinals : integer ndarray
- freq : pd.tseries.offsets.Offset
The values are physically stored as a 1-D ndarray of integers. These are
called “ordinals” and represent some kind of offset from a base.
The freq indicates the span covered by each element of the array.
All elements in the PeriodArray have the same freq.
Attributes
None
Methods
None
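A minimal construction sketch (assumed, not from the original page), using period_array() as recommended; the asi8 attribute exposes the underlying integer ordinals:
```
>>> import pandas as pd
>>> arr = pd.arrays.period_array(['2023-01', '2023-02', '2023-03'], freq='M')
>>> arr
<PeriodArray>
['2023-01', '2023-02', '2023-03']
Length: 3, dtype: period[M]
>>> arr.asi8  # ordinals: months elapsed since 1970-01
array([636, 637, 638])
```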
|
reference/api/pandas.arrays.PeriodArray.html
|
pandas.read_sql_query
|
`pandas.read_sql_query`
Read SQL query into a DataFrame.
Returns a DataFrame corresponding to the result set of the query
string. Optionally provide an index_col parameter to use one of the
columns as the index, otherwise default integer index will be used.
|
pandas.read_sql_query(sql, con, index_col=None, coerce_float=True, params=None, parse_dates=None, chunksize=None, dtype=None)[source]#
Read SQL query into a DataFrame.
Returns a DataFrame corresponding to the result set of the query
string. Optionally provide an index_col parameter to use one of the
columns as the index, otherwise default integer index will be used.
Parameters
sql : str SQL query or SQLAlchemy Selectable (select or text object)
    SQL query to be executed.
con : SQLAlchemy connectable, str, or sqlite3 connection
    Using SQLAlchemy makes it possible to use any DB supported by that library. If a DBAPI2 object, only sqlite3 is supported.
index_col : str or list of str, optional, default: None
    Column(s) to set as index(MultiIndex).
coerce_float : bool, default True
    Attempts to convert values of non-string, non-numeric objects (like decimal.Decimal) to floating point. Useful for SQL result sets.
params : list, tuple or dict, optional, default: None
    List of parameters to pass to execute method. The syntax used to pass parameters is database driver dependent. Check your database driver documentation for which of the five syntax styles, described in PEP 249's paramstyle, is supported. E.g. psycopg2 uses %(name)s, so use params={'name': 'value'}.
parse_dates : list or dict, default: None
    - List of column names to parse as dates.
    - Dict of {column_name: format string} where format string is strftime compatible in case of parsing string times, or is one of (D, s, ns, ms, us) in case of parsing integer timestamps.
    - Dict of {column_name: arg dict}, where the arg dict corresponds to the keyword arguments of pandas.to_datetime(). Especially useful with databases without native Datetime support, such as SQLite.
chunksize : int, default None
    If specified, return an iterator where chunksize is the number of rows to include in each chunk.
dtype : Type name or dict of columns
    Data type for data or columns. E.g. np.float64 or {'a': np.float64, 'b': np.int32, 'c': 'Int64'}.
    New in version 1.3.0.
Returns
DataFrame or Iterator[DataFrame]
See also
read_sql_table
    Read SQL database table into a DataFrame.
read_sql
    Read SQL query or database table into a DataFrame.
Notes
Any datetime values with time zone information parsed via the parse_dates
parameter will be converted to UTC.
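A minimal self-contained sketch (assumed, not from the original page) using the built-in sqlite3 driver, whose paramstyle is qmark:
```
import sqlite3
import pandas as pd

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE t (id INTEGER, ts TEXT)')
con.executemany('INSERT INTO t VALUES (?, ?)',
                [(1, '2023-01-01'), (2, '2023-01-02')])

# index_col turns "id" into the index; parse_dates converts "ts" to datetime64
df = pd.read_sql_query('SELECT * FROM t WHERE id >= ?', con,
                       params=(1,), index_col='id', parse_dates=['ts'])
```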
|
reference/api/pandas.read_sql_query.html
|
pandas.Timedelta.round
|
`pandas.Timedelta.round`
Round the Timedelta to the specified resolution.
Frequency string indicating the rounding resolution.
|
Timedelta.round(freq)#
Round the Timedelta to the specified resolution.
Parameters
freq : str
    Frequency string indicating the rounding resolution.
Returns
a new Timedelta rounded to the given resolution of freq
Raises
ValueError if the freq cannot be converted
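A minimal usage sketch (assumed, not from the original page):
```
>>> td = pd.Timedelta('1 days 02:34:56.789')
>>> td.round('min')
Timedelta('1 days 02:35:00')
>>> td.round('H')
Timedelta('1 days 03:00:00')
```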
|
reference/api/pandas.Timedelta.round.html
|
pandas.PeriodIndex
|
`pandas.PeriodIndex`
Immutable ndarray holding ordinal values indicating regular periods in time.
```
>>> idx = pd.PeriodIndex(year=[2000, 2002], quarter=[1, 3])
>>> idx
PeriodIndex(['2000Q1', '2002Q3'], dtype='period[Q-DEC]')
```
|
class pandas.PeriodIndex(data=None, ordinal=None, freq=None, dtype=None, copy=False, name=None, **fields)[source]#
Immutable ndarray holding ordinal values indicating regular periods in time.
Index keys are boxed to Period objects which carries the metadata (eg,
frequency information).
Parameters
data : array-like (1d int np.ndarray or PeriodArray), optional
    Optional period-like data to construct index with.
copy : bool
    Make a copy of input ndarray.
freq : str or period object, optional
    One of pandas period strings or corresponding objects.
year : int, array, or Series, default None
month : int, array, or Series, default None
quarter : int, array, or Series, default None
day : int, array, or Series, default None
hour : int, array, or Series, default None
minute : int, array, or Series, default None
second : int, array, or Series, default None
dtype : str or PeriodDtype, default None
See also
Index
    The base pandas Index type.
Period
    Represents a period of time.
DatetimeIndex
    Index with datetime64 data.
TimedeltaIndex
    Index of timedelta64 data.
period_range
    Create a fixed-frequency PeriodIndex.
Examples
>>> idx = pd.PeriodIndex(year=[2000, 2002], quarter=[1, 3])
>>> idx
PeriodIndex(['2000Q1', '2002Q3'], dtype='period[Q-DEC]')
Attributes
day
The days of the period.
dayofweek
The day of the week with Monday=0, Sunday=6.
day_of_week
The day of the week with Monday=0, Sunday=6.
dayofyear
The ordinal day of the year.
day_of_year
The ordinal day of the year.
days_in_month
The number of days in the month.
daysinmonth
The number of days in the month.
end_time
Get the Timestamp for the end of the period.
freq
Return the frequency object if it is set, otherwise None.
freqstr
Return the frequency object as a string if its set, otherwise None.
hour
The hour of the period.
is_leap_year
Logical indicating if the date belongs to a leap year.
minute
The minute of the period.
month
The month as January=1, December=12.
quarter
The quarter of the date.
second
The second of the period.
start_time
Get the Timestamp for the start of the period.
week
The week ordinal of the year.
weekday
The day of the week with Monday=0, Sunday=6.
weekofyear
The week ordinal of the year.
year
The year of the period.
qyear
Methods
asfreq([freq, how])
Convert the PeriodArray to the specified frequency freq.
strftime(*args, **kwargs)
Convert to Index using specified date_format.
to_timestamp([freq, how])
Cast to DatetimeArray/Index.
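A minimal sketch (assumed, not from the original page) of the conversion methods listed above; reprs may vary by pandas version:
```
>>> idx = pd.period_range('2023-01', periods=3, freq='M')
>>> idx.to_timestamp()
DatetimeIndex(['2023-01-01', '2023-02-01', '2023-03-01'], dtype='datetime64[ns]', freq='MS')
>>> idx.asfreq('D', how='end')
PeriodIndex(['2023-01-31', '2023-02-28', '2023-03-31'], dtype='period[D]')
```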
|
reference/api/pandas.PeriodIndex.html
|
Date offsets
|
DateOffset#
DateOffset
Standard kind of date increment used for a date range.
Properties#
DateOffset.freqstr
Return a string representing the frequency.
DateOffset.kwds
Return a dict of extra parameters for the offset.
DateOffset.name
Return a string representing the base frequency.
DateOffset.nanos
DateOffset.normalize
DateOffset.rule_code
DateOffset.n
DateOffset.is_month_start
Return boolean whether a timestamp occurs on the month start.
DateOffset.is_month_end
Return boolean whether a timestamp occurs on the month end.
Methods#
DateOffset.apply
DateOffset.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
DateOffset.copy
Return a copy of the frequency.
DateOffset.isAnchored
DateOffset.onOffset
DateOffset.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
DateOffset.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
DateOffset.__call__(*args, **kwargs)
Call self as a function.
DateOffset.is_month_start
Return boolean whether a timestamp occurs on the month start.
DateOffset.is_month_end
Return boolean whether a timestamp occurs on the month end.
DateOffset.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
DateOffset.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
DateOffset.is_year_start
Return boolean whether a timestamp occurs on the year start.
DateOffset.is_year_end
Return boolean whether a timestamp occurs on the year end.
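A minimal usage sketch (assumed, not from the original page); calendar-aware arithmetic clips to valid dates:
```
>>> ts = pd.Timestamp('2023-01-31')
>>> ts + pd.DateOffset(months=1)          # clipped to the end of February
Timestamp('2023-02-28 00:00:00')
>>> ts + pd.DateOffset(months=1, days=2)  # months applied first, then days
Timestamp('2023-03-02 00:00:00')
```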
BusinessDay#
BusinessDay
DateOffset subclass representing possibly n business days.
Alias:
BDay
alias of pandas._libs.tslibs.offsets.BusinessDay
Properties#
BusinessDay.freqstr
Return a string representing the frequency.
BusinessDay.kwds
Return a dict of extra parameters for the offset.
BusinessDay.name
Return a string representing the base frequency.
BusinessDay.nanos
BusinessDay.normalize
BusinessDay.rule_code
BusinessDay.n
BusinessDay.weekmask
BusinessDay.holidays
BusinessDay.calendar
Methods#
BusinessDay.apply
BusinessDay.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
BusinessDay.copy
Return a copy of the frequency.
BusinessDay.isAnchored
BusinessDay.onOffset
BusinessDay.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
BusinessDay.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
BusinessDay.__call__(*args, **kwargs)
Call self as a function.
BusinessDay.is_month_start
Return boolean whether a timestamp occurs on the month start.
BusinessDay.is_month_end
Return boolean whether a timestamp occurs on the month end.
BusinessDay.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
BusinessDay.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
BusinessDay.is_year_start
Return boolean whether a timestamp occurs on the year start.
BusinessDay.is_year_end
Return boolean whether a timestamp occurs on the year end.
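A minimal sketch (assumed, not from the original page); weekends are skipped:
```
>>> ts = pd.Timestamp('2023-01-06')  # a Friday
>>> ts + pd.offsets.BDay(1)
Timestamp('2023-01-09 00:00:00')
>>> ts + pd.offsets.BDay(3)
Timestamp('2023-01-11 00:00:00')
```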
BusinessHour#
BusinessHour
DateOffset subclass representing possibly n business hours.
Properties#
BusinessHour.freqstr
Return a string representing the frequency.
BusinessHour.kwds
Return a dict of extra parameters for the offset.
BusinessHour.name
Return a string representing the base frequency.
BusinessHour.nanos
BusinessHour.normalize
BusinessHour.rule_code
BusinessHour.n
BusinessHour.start
BusinessHour.end
BusinessHour.weekmask
BusinessHour.holidays
BusinessHour.calendar
Methods#
BusinessHour.apply
BusinessHour.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
BusinessHour.copy
Return a copy of the frequency.
BusinessHour.isAnchored
BusinessHour.onOffset
BusinessHour.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
BusinessHour.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
BusinessHour.__call__(*args, **kwargs)
Call self as a function.
BusinessHour.is_month_start
Return boolean whether a timestamp occurs on the month start.
BusinessHour.is_month_end
Return boolean whether a timestamp occurs on the month end.
BusinessHour.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
BusinessHour.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
BusinessHour.is_year_start
Return boolean whether a timestamp occurs on the year start.
BusinessHour.is_year_end
Return boolean whether a timestamp occurs on the year end.
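A minimal sketch (assumed, not from the original page); the default business window is 09:00-17:00, so an addition that overflows one day rolls into the next:
```
>>> ts = pd.Timestamp('2023-01-02 16:30')  # Monday afternoon
>>> ts + pd.offsets.BusinessHour()
Timestamp('2023-01-03 09:30:00')
```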
CustomBusinessDay#
CustomBusinessDay
DateOffset subclass representing custom business days excluding holidays.
Alias:
CDay
alias of pandas._libs.tslibs.offsets.CustomBusinessDay
Properties#
CustomBusinessDay.freqstr
Return a string representing the frequency.
CustomBusinessDay.kwds
Return a dict of extra parameters for the offset.
CustomBusinessDay.name
Return a string representing the base frequency.
CustomBusinessDay.nanos
CustomBusinessDay.normalize
CustomBusinessDay.rule_code
CustomBusinessDay.n
CustomBusinessDay.weekmask
CustomBusinessDay.calendar
CustomBusinessDay.holidays
Methods#
CustomBusinessDay.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
CustomBusinessDay.apply
CustomBusinessDay.copy
Return a copy of the frequency.
CustomBusinessDay.isAnchored
CustomBusinessDay.onOffset
CustomBusinessDay.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
CustomBusinessDay.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
CustomBusinessDay.__call__(*args, **kwargs)
Call self as a function.
CustomBusinessDay.is_month_start
Return boolean whether a timestamp occurs on the month start.
CustomBusinessDay.is_month_end
Return boolean whether a timestamp occurs on the month end.
CustomBusinessDay.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
CustomBusinessDay.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
CustomBusinessDay.is_year_start
Return boolean whether a timestamp occurs on the year start.
CustomBusinessDay.is_year_end
Return boolean whether a timestamp occurs on the year end.
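A minimal sketch (assumed, not from the original page) with a Friday/Saturday weekend and one custom holiday:
```
>>> offset = pd.offsets.CustomBusinessDay(weekmask='Sun Mon Tue Wed Thu',
...                                       holidays=['2023-01-02'])
>>> pd.Timestamp('2023-01-01') + offset   # Monday is a holiday, so Tuesday
Timestamp('2023-01-03 00:00:00')
```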
CustomBusinessHour#
CustomBusinessHour
DateOffset subclass representing possibly n custom business days.
Properties#
CustomBusinessHour.freqstr
Return a string representing the frequency.
CustomBusinessHour.kwds
Return a dict of extra parameters for the offset.
CustomBusinessHour.name
Return a string representing the base frequency.
CustomBusinessHour.nanos
CustomBusinessHour.normalize
CustomBusinessHour.rule_code
CustomBusinessHour.n
CustomBusinessHour.weekmask
CustomBusinessHour.calendar
CustomBusinessHour.holidays
CustomBusinessHour.start
CustomBusinessHour.end
Methods#
CustomBusinessHour.apply
CustomBusinessHour.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
CustomBusinessHour.copy
Return a copy of the frequency.
CustomBusinessHour.isAnchored
CustomBusinessHour.onOffset
CustomBusinessHour.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
CustomBusinessHour.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
CustomBusinessHour.__call__(*args, **kwargs)
Call self as a function.
CustomBusinessHour.is_month_start
Return boolean whether a timestamp occurs on the month start.
CustomBusinessHour.is_month_end
Return boolean whether a timestamp occurs on the month end.
CustomBusinessHour.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
CustomBusinessHour.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
CustomBusinessHour.is_year_start
Return boolean whether a timestamp occurs on the year start.
CustomBusinessHour.is_year_end
Return boolean whether a timestamp occurs on the year end.
MonthEnd#
MonthEnd
DateOffset of one month end.
Properties#
MonthEnd.freqstr
Return a string representing the frequency.
MonthEnd.kwds
Return a dict of extra parameters for the offset.
MonthEnd.name
Return a string representing the base frequency.
MonthEnd.nanos
MonthEnd.normalize
MonthEnd.rule_code
MonthEnd.n
Methods#
MonthEnd.apply
MonthEnd.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
MonthEnd.copy
Return a copy of the frequency.
MonthEnd.isAnchored
MonthEnd.onOffset
MonthEnd.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
MonthEnd.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
MonthEnd.__call__(*args, **kwargs)
Call self as a function.
MonthEnd.is_month_start
Return boolean whether a timestamp occurs on the month start.
MonthEnd.is_month_end
Return boolean whether a timestamp occurs on the month end.
MonthEnd.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
MonthEnd.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
MonthEnd.is_year_start
Return boolean whether a timestamp occurs on the year start.
MonthEnd.is_year_end
Return boolean whether a timestamp occurs on the year end.
MonthBegin#
MonthBegin
DateOffset of one month at beginning.
Properties#
MonthBegin.freqstr
Return a string representing the frequency.
MonthBegin.kwds
Return a dict of extra parameters for the offset.
MonthBegin.name
Return a string representing the base frequency.
MonthBegin.nanos
MonthBegin.normalize
MonthBegin.rule_code
MonthBegin.n
Methods#
MonthBegin.apply
MonthBegin.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
MonthBegin.copy
Return a copy of the frequency.
MonthBegin.isAnchored
MonthBegin.onOffset
MonthBegin.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
MonthBegin.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
MonthBegin.__call__(*args, **kwargs)
Call self as a function.
MonthBegin.is_month_start
Return boolean whether a timestamp occurs on the month start.
MonthBegin.is_month_end
Return boolean whether a timestamp occurs on the month end.
MonthBegin.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
MonthBegin.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
MonthBegin.is_year_start
Return boolean whether a timestamp occurs on the year start.
MonthBegin.is_year_end
Return boolean whether a timestamp occurs on the year end.
BusinessMonthEnd#
BusinessMonthEnd
DateOffset increments between the last business day of the month.
Alias:
BMonthEnd
alias of pandas._libs.tslibs.offsets.BusinessMonthEnd
Properties#
BusinessMonthEnd.freqstr
Return a string representing the frequency.
BusinessMonthEnd.kwds
Return a dict of extra parameters for the offset.
BusinessMonthEnd.name
Return a string representing the base frequency.
BusinessMonthEnd.nanos
BusinessMonthEnd.normalize
BusinessMonthEnd.rule_code
BusinessMonthEnd.n
Methods#
BusinessMonthEnd.apply
BusinessMonthEnd.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
BusinessMonthEnd.copy
Return a copy of the frequency.
BusinessMonthEnd.isAnchored
BusinessMonthEnd.onOffset
BusinessMonthEnd.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
BusinessMonthEnd.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
BusinessMonthEnd.__call__(*args, **kwargs)
Call self as a function.
BusinessMonthEnd.is_month_start
Return boolean whether a timestamp occurs on the month start.
BusinessMonthEnd.is_month_end
Return boolean whether a timestamp occurs on the month end.
BusinessMonthEnd.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
BusinessMonthEnd.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
BusinessMonthEnd.is_year_start
Return boolean whether a timestamp occurs on the year start.
BusinessMonthEnd.is_year_end
Return boolean whether a timestamp occurs on the year end.
BusinessMonthBegin#
BusinessMonthBegin
DateOffset of one month at the first business day.
Alias:
BMonthBegin
alias of pandas._libs.tslibs.offsets.BusinessMonthBegin
Properties#
BusinessMonthBegin.freqstr
Return a string representing the frequency.
BusinessMonthBegin.kwds
Return a dict of extra parameters for the offset.
BusinessMonthBegin.name
Return a string representing the base frequency.
BusinessMonthBegin.nanos
BusinessMonthBegin.normalize
BusinessMonthBegin.rule_code
BusinessMonthBegin.n
Methods#
BusinessMonthBegin.apply
BusinessMonthBegin.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
BusinessMonthBegin.copy
Return a copy of the frequency.
BusinessMonthBegin.isAnchored
BusinessMonthBegin.onOffset
BusinessMonthBegin.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
BusinessMonthBegin.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
BusinessMonthBegin.__call__(*args, **kwargs)
Call self as a function.
BusinessMonthBegin.is_month_start
Return boolean whether a timestamp occurs on the month start.
BusinessMonthBegin.is_month_end
Return boolean whether a timestamp occurs on the month end.
BusinessMonthBegin.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
BusinessMonthBegin.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
BusinessMonthBegin.is_year_start
Return boolean whether a timestamp occurs on the year start.
BusinessMonthBegin.is_year_end
Return boolean whether a timestamp occurs on the year end.
CustomBusinessMonthEnd#
CustomBusinessMonthEnd
Attributes
Alias:
CBMonthEnd
alias of pandas._libs.tslibs.offsets.CustomBusinessMonthEnd
Properties#
CustomBusinessMonthEnd.freqstr
Return a string representing the frequency.
CustomBusinessMonthEnd.kwds
Return a dict of extra parameters for the offset.
CustomBusinessMonthEnd.m_offset
CustomBusinessMonthEnd.name
Return a string representing the base frequency.
CustomBusinessMonthEnd.nanos
CustomBusinessMonthEnd.normalize
CustomBusinessMonthEnd.rule_code
CustomBusinessMonthEnd.n
CustomBusinessMonthEnd.weekmask
CustomBusinessMonthEnd.calendar
CustomBusinessMonthEnd.holidays
Methods#
CustomBusinessMonthEnd.apply
CustomBusinessMonthEnd.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
CustomBusinessMonthEnd.copy
Return a copy of the frequency.
CustomBusinessMonthEnd.isAnchored
CustomBusinessMonthEnd.onOffset
CustomBusinessMonthEnd.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
CustomBusinessMonthEnd.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
CustomBusinessMonthEnd.__call__(*args, **kwargs)
Call self as a function.
CustomBusinessMonthEnd.is_month_start
Return boolean whether a timestamp occurs on the month start.
CustomBusinessMonthEnd.is_month_end
Return boolean whether a timestamp occurs on the month end.
CustomBusinessMonthEnd.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
CustomBusinessMonthEnd.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
CustomBusinessMonthEnd.is_year_start
Return boolean whether a timestamp occurs on the year start.
CustomBusinessMonthEnd.is_year_end
Return boolean whether a timestamp occurs on the year end.
CustomBusinessMonthBegin#
CustomBusinessMonthBegin
Attributes
Alias:
CBMonthBegin
alias of pandas._libs.tslibs.offsets.CustomBusinessMonthBegin
Properties#
CustomBusinessMonthBegin.freqstr
Return a string representing the frequency.
CustomBusinessMonthBegin.kwds
Return a dict of extra parameters for the offset.
CustomBusinessMonthBegin.m_offset
CustomBusinessMonthBegin.name
Return a string representing the base frequency.
CustomBusinessMonthBegin.nanos
CustomBusinessMonthBegin.normalize
CustomBusinessMonthBegin.rule_code
CustomBusinessMonthBegin.n
CustomBusinessMonthBegin.weekmask
CustomBusinessMonthBegin.calendar
CustomBusinessMonthBegin.holidays
Methods#
CustomBusinessMonthBegin.apply
CustomBusinessMonthBegin.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
CustomBusinessMonthBegin.copy
Return a copy of the frequency.
CustomBusinessMonthBegin.isAnchored
CustomBusinessMonthBegin.onOffset
CustomBusinessMonthBegin.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
CustomBusinessMonthBegin.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
CustomBusinessMonthBegin.__call__(*args, ...)
Call self as a function.
CustomBusinessMonthBegin.is_month_start
Return boolean whether a timestamp occurs on the month start.
CustomBusinessMonthBegin.is_month_end
Return boolean whether a timestamp occurs on the month end.
CustomBusinessMonthBegin.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
CustomBusinessMonthBegin.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
CustomBusinessMonthBegin.is_year_start
Return boolean whether a timestamp occurs on the year start.
CustomBusinessMonthBegin.is_year_end
Return boolean whether a timestamp occurs on the year end.
SemiMonthEnd#
SemiMonthEnd
Two DateOffsets per month, repeating on the last day of the month and on day_of_month.
Properties#
SemiMonthEnd.freqstr
Return a string representing the frequency.
SemiMonthEnd.kwds
Return a dict of extra parameters for the offset.
SemiMonthEnd.name
Return a string representing the base frequency.
SemiMonthEnd.nanos
SemiMonthEnd.normalize
SemiMonthEnd.rule_code
SemiMonthEnd.n
SemiMonthEnd.day_of_month
Methods#
SemiMonthEnd.apply
SemiMonthEnd.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
SemiMonthEnd.copy
Return a copy of the frequency.
SemiMonthEnd.isAnchored
SemiMonthEnd.onOffset
SemiMonthEnd.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
SemiMonthEnd.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
SemiMonthEnd.__call__(*args, **kwargs)
Call self as a function.
SemiMonthEnd.is_month_start
Return boolean whether a timestamp occurs on the month start.
SemiMonthEnd.is_month_end
Return boolean whether a timestamp occurs on the month end.
SemiMonthEnd.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
SemiMonthEnd.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
SemiMonthEnd.is_year_start
Return boolean whether a timestamp occurs on the year start.
SemiMonthEnd.is_year_end
Return boolean whether a timestamp occurs on the year end.
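A minimal sketch (assumed, not from the original page); with the default day_of_month=15 the anchor dates are the 15th and the last day of each month:
```
>>> ts = pd.Timestamp('2023-01-01')
>>> ts + pd.offsets.SemiMonthEnd()
Timestamp('2023-01-15 00:00:00')
>>> ts + pd.offsets.SemiMonthEnd(2)
Timestamp('2023-01-31 00:00:00')
```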
SemiMonthBegin#
SemiMonthBegin
Two DateOffsets per month, repeating on the first day of the month and on day_of_month.
Properties#
SemiMonthBegin.freqstr
Return a string representing the frequency.
SemiMonthBegin.kwds
Return a dict of extra parameters for the offset.
SemiMonthBegin.name
Return a string representing the base frequency.
SemiMonthBegin.nanos
SemiMonthBegin.normalize
SemiMonthBegin.rule_code
SemiMonthBegin.n
SemiMonthBegin.day_of_month
Methods#
SemiMonthBegin.apply
SemiMonthBegin.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
SemiMonthBegin.copy
Return a copy of the frequency.
SemiMonthBegin.isAnchored
SemiMonthBegin.onOffset
SemiMonthBegin.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
SemiMonthBegin.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
SemiMonthBegin.__call__(*args, **kwargs)
Call self as a function.
SemiMonthBegin.is_month_start
Return boolean whether a timestamp occurs on the month start.
SemiMonthBegin.is_month_end
Return boolean whether a timestamp occurs on the month end.
SemiMonthBegin.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
SemiMonthBegin.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
SemiMonthBegin.is_year_start
Return boolean whether a timestamp occurs on the year start.
SemiMonthBegin.is_year_end
Return boolean whether a timestamp occurs on the year end.
Week#
Week
Weekly offset.
Properties#
Week.freqstr
Return a string representing the frequency.
Week.kwds
Return a dict of extra parameters for the offset.
Week.name
Return a string representing the base frequency.
Week.nanos
Week.normalize
Week.rule_code
Week.n
Week.weekday
Methods#
Week.apply
Week.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Week.copy
Return a copy of the frequency.
Week.isAnchored
Week.onOffset
Week.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Week.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Week.__call__(*args, **kwargs)
Call self as a function.
Week.is_month_start
Return boolean whether a timestamp occurs on the month start.
Week.is_month_end
Return boolean whether a timestamp occurs on the month end.
Week.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Week.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Week.is_year_start
Return boolean whether a timestamp occurs on the year start.
Week.is_year_end
Return boolean whether a timestamp occurs on the year end.
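A minimal sketch (assumed, not from the original page); passing weekday anchors the offset to a specific day:
```
>>> ts = pd.Timestamp('2023-01-02')   # a Monday
>>> ts + pd.offsets.Week(weekday=4)   # next Friday
Timestamp('2023-01-06 00:00:00')
>>> ts + pd.offsets.Week(2)           # two unanchored weeks
Timestamp('2023-01-16 00:00:00')
```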
WeekOfMonth#
WeekOfMonth
Describes monthly dates like "the Tuesday of the 2nd week of each month".
Properties#
WeekOfMonth.freqstr
Return a string representing the frequency.
WeekOfMonth.kwds
Return a dict of extra parameters for the offset.
WeekOfMonth.name
Return a string representing the base frequency.
WeekOfMonth.nanos
WeekOfMonth.normalize
WeekOfMonth.rule_code
WeekOfMonth.n
WeekOfMonth.week
Methods#
WeekOfMonth.apply
WeekOfMonth.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
WeekOfMonth.copy
Return a copy of the frequency.
WeekOfMonth.isAnchored
WeekOfMonth.onOffset
WeekOfMonth.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
WeekOfMonth.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
WeekOfMonth.__call__(*args, **kwargs)
Call self as a function.
WeekOfMonth.weekday
WeekOfMonth.is_month_start
Return boolean whether a timestamp occurs on the month start.
WeekOfMonth.is_month_end
Return boolean whether a timestamp occurs on the month end.
WeekOfMonth.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
WeekOfMonth.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
WeekOfMonth.is_year_start
Return boolean whether a timestamp occurs on the year start.
WeekOfMonth.is_year_end
Return boolean whether a timestamp occurs on the year end.
LastWeekOfMonth#
LastWeekOfMonth
Describes monthly dates in the last week of the month.
Properties#
LastWeekOfMonth.freqstr
Return a string representing the frequency.
LastWeekOfMonth.kwds
Return a dict of extra parameters for the offset.
LastWeekOfMonth.name
Return a string representing the base frequency.
LastWeekOfMonth.nanos
LastWeekOfMonth.normalize
LastWeekOfMonth.rule_code
LastWeekOfMonth.n
LastWeekOfMonth.weekday
LastWeekOfMonth.week
Methods#
LastWeekOfMonth.apply
LastWeekOfMonth.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
LastWeekOfMonth.copy
Return a copy of the frequency.
LastWeekOfMonth.isAnchored
LastWeekOfMonth.onOffset
LastWeekOfMonth.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
LastWeekOfMonth.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
LastWeekOfMonth.__call__(*args, **kwargs)
Call self as a function.
LastWeekOfMonth.is_month_start
Return boolean whether a timestamp occurs on the month start.
LastWeekOfMonth.is_month_end
Return boolean whether a timestamp occurs on the month end.
LastWeekOfMonth.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
LastWeekOfMonth.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
LastWeekOfMonth.is_year_start
Return boolean whether a timestamp occurs on the year start.
LastWeekOfMonth.is_year_end
Return boolean whether a timestamp occurs on the year end.
BQuarterEnd#
BQuarterEnd
DateOffset increments between the last business day of each Quarter.
Properties#
BQuarterEnd.freqstr
Return a string representing the frequency.
BQuarterEnd.kwds
Return a dict of extra parameters for the offset.
BQuarterEnd.name
Return a string representing the base frequency.
BQuarterEnd.nanos
BQuarterEnd.normalize
BQuarterEnd.rule_code
BQuarterEnd.n
BQuarterEnd.startingMonth
Methods#
BQuarterEnd.apply
BQuarterEnd.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
BQuarterEnd.copy
Return a copy of the frequency.
BQuarterEnd.isAnchored
BQuarterEnd.onOffset
BQuarterEnd.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
BQuarterEnd.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
BQuarterEnd.__call__(*args, **kwargs)
Call self as a function.
BQuarterEnd.is_month_start
Return boolean whether a timestamp occurs on the month start.
BQuarterEnd.is_month_end
Return boolean whether a timestamp occurs on the month end.
BQuarterEnd.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
BQuarterEnd.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
BQuarterEnd.is_year_start
Return boolean whether a timestamp occurs on the year start.
BQuarterEnd.is_year_end
Return boolean whether a timestamp occurs on the year end.
BQuarterBegin#
BQuarterBegin
DateOffset increments between the first business day of each Quarter.
Properties#
BQuarterBegin.freqstr
Return a string representing the frequency.
BQuarterBegin.kwds
Return a dict of extra parameters for the offset.
BQuarterBegin.name
Return a string representing the base frequency.
BQuarterBegin.nanos
BQuarterBegin.normalize
BQuarterBegin.rule_code
BQuarterBegin.n
BQuarterBegin.startingMonth
Methods#
BQuarterBegin.apply
BQuarterBegin.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
BQuarterBegin.copy
Return a copy of the frequency.
BQuarterBegin.isAnchored
BQuarterBegin.onOffset
BQuarterBegin.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
BQuarterBegin.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
BQuarterBegin.__call__(*args, **kwargs)
Call self as a function.
BQuarterBegin.is_month_start
Return boolean whether a timestamp occurs on the month start.
BQuarterBegin.is_month_end
Return boolean whether a timestamp occurs on the month end.
BQuarterBegin.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
BQuarterBegin.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
BQuarterBegin.is_year_start
Return boolean whether a timestamp occurs on the year start.
BQuarterBegin.is_year_end
Return boolean whether a timestamp occurs on the year end.
QuarterEnd#
QuarterEnd
DateOffset increments between Quarter end dates.
Properties#
QuarterEnd.freqstr
Return a string representing the frequency.
QuarterEnd.kwds
Return a dict of extra parameters for the offset.
QuarterEnd.name
Return a string representing the base frequency.
QuarterEnd.nanos
QuarterEnd.normalize
QuarterEnd.rule_code
QuarterEnd.n
QuarterEnd.startingMonth
Methods#
QuarterEnd.apply
QuarterEnd.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
QuarterEnd.copy
Return a copy of the frequency.
QuarterEnd.isAnchored
QuarterEnd.onOffset
QuarterEnd.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
QuarterEnd.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
QuarterEnd.__call__(*args, **kwargs)
Call self as a function.
QuarterEnd.is_month_start
Return boolean whether a timestamp occurs on the month start.
QuarterEnd.is_month_end
Return boolean whether a timestamp occurs on the month end.
QuarterEnd.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
QuarterEnd.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
QuarterEnd.is_year_start
Return boolean whether a timestamp occurs on the year start.
QuarterEnd.is_year_end
Return boolean whether a timestamp occurs on the year end.
QuarterBegin#
QuarterBegin
DateOffset increments between Quarter start dates.
Properties#
QuarterBegin.freqstr
Return a string representing the frequency.
QuarterBegin.kwds
Return a dict of extra parameters for the offset.
QuarterBegin.name
Return a string representing the base frequency.
QuarterBegin.nanos
QuarterBegin.normalize
QuarterBegin.rule_code
QuarterBegin.n
QuarterBegin.startingMonth
Methods#
QuarterBegin.apply
QuarterBegin.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
QuarterBegin.copy
Return a copy of the frequency.
QuarterBegin.isAnchored
QuarterBegin.onOffset
QuarterBegin.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
QuarterBegin.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
QuarterBegin.__call__(*args, **kwargs)
Call self as a function.
QuarterBegin.is_month_start
Return boolean whether a timestamp occurs on the month start.
QuarterBegin.is_month_end
Return boolean whether a timestamp occurs on the month end.
QuarterBegin.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
QuarterBegin.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
QuarterBegin.is_year_start
Return boolean whether a timestamp occurs on the year start.
QuarterBegin.is_year_end
Return boolean whether a timestamp occurs on the year end.
BYearEnd#
BYearEnd
DateOffset increments between the last business day of the year.
Properties#
BYearEnd.freqstr
Return a string representing the frequency.
BYearEnd.kwds
Return a dict of extra parameters for the offset.
BYearEnd.name
Return a string representing the base frequency.
BYearEnd.nanos
BYearEnd.normalize
BYearEnd.rule_code
BYearEnd.n
BYearEnd.month
Methods#
BYearEnd.apply
BYearEnd.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
BYearEnd.copy
Return a copy of the frequency.
BYearEnd.isAnchored
BYearEnd.onOffset
BYearEnd.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
BYearEnd.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
BYearEnd.__call__(*args, **kwargs)
Call self as a function.
BYearEnd.is_month_start
Return boolean whether a timestamp occurs on the month start.
BYearEnd.is_month_end
Return boolean whether a timestamp occurs on the month end.
BYearEnd.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
BYearEnd.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
BYearEnd.is_year_start
Return boolean whether a timestamp occurs on the year start.
BYearEnd.is_year_end
Return boolean whether a timestamp occurs on the year end.
BYearBegin#
BYearBegin
DateOffset increments between the first business day of the year.
Properties#
BYearBegin.freqstr
Return a string representing the frequency.
BYearBegin.kwds
Return a dict of extra parameters for the offset.
BYearBegin.name
Return a string representing the base frequency.
BYearBegin.nanos
BYearBegin.normalize
BYearBegin.rule_code
BYearBegin.n
BYearBegin.month
Methods#
BYearBegin.apply
BYearBegin.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
BYearBegin.copy
Return a copy of the frequency.
BYearBegin.isAnchored
BYearBegin.onOffset
BYearBegin.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
BYearBegin.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
BYearBegin.__call__(*args, **kwargs)
Call self as a function.
BYearBegin.is_month_start
Return boolean whether a timestamp occurs on the month start.
BYearBegin.is_month_end
Return boolean whether a timestamp occurs on the month end.
BYearBegin.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
BYearBegin.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
BYearBegin.is_year_start
Return boolean whether a timestamp occurs on the year start.
BYearBegin.is_year_end
Return boolean whether a timestamp occurs on the year end.
YearEnd#
YearEnd
DateOffset increments between calendar year ends.
Properties#
YearEnd.freqstr
Return a string representing the frequency.
YearEnd.kwds
Return a dict of extra parameters for the offset.
YearEnd.name
Return a string representing the base frequency.
YearEnd.nanos
YearEnd.normalize
YearEnd.rule_code
YearEnd.n
YearEnd.month
Methods#
YearEnd.apply
YearEnd.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
YearEnd.copy
Return a copy of the frequency.
YearEnd.isAnchored
YearEnd.onOffset
YearEnd.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
YearEnd.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
YearEnd.__call__(*args, **kwargs)
Call self as a function.
YearEnd.is_month_start
Return boolean whether a timestamp occurs on the month start.
YearEnd.is_month_end
Return boolean whether a timestamp occurs on the month end.
YearEnd.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
YearEnd.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
YearEnd.is_year_start
Return boolean whether a timestamp occurs on the year start.
YearEnd.is_year_end
Return boolean whether a timestamp occurs on the year end.
YearBegin#
YearBegin
DateOffset increments between calendar year begin dates.
Properties#
YearBegin.freqstr
Return a string representing the frequency.
YearBegin.kwds
Return a dict of extra parameters for the offset.
YearBegin.name
Return a string representing the base frequency.
YearBegin.nanos
YearBegin.normalize
YearBegin.rule_code
YearBegin.n
YearBegin.month
Methods#
YearBegin.apply
YearBegin.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
YearBegin.copy
Return a copy of the frequency.
YearBegin.isAnchored
YearBegin.onOffset
YearBegin.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
YearBegin.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
YearBegin.__call__(*args, **kwargs)
Call self as a function.
YearBegin.is_month_start
Return boolean whether a timestamp occurs on the month start.
YearBegin.is_month_end
Return boolean whether a timestamp occurs on the month end.
YearBegin.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
YearBegin.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
YearBegin.is_year_start
Return boolean whether a timestamp occurs on the year start.
YearBegin.is_year_end
Return boolean whether a timestamp occurs on the year end.
FY5253#
FY5253
Describes 52-53 week fiscal year.
Properties#
FY5253.freqstr
Return a string representing the frequency.
FY5253.kwds
Return a dict of extra parameters for the offset.
FY5253.name
Return a string representing the base frequency.
FY5253.nanos
FY5253.normalize
FY5253.rule_code
FY5253.n
FY5253.startingMonth
FY5253.variation
FY5253.weekday
Methods#
FY5253.apply
FY5253.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
FY5253.copy
Return a copy of the frequency.
FY5253.get_rule_code_suffix
FY5253.get_year_end
FY5253.isAnchored
FY5253.onOffset
FY5253.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
FY5253.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
FY5253.__call__(*args, **kwargs)
Call self as a function.
FY5253.is_month_start
Return boolean whether a timestamp occurs on the month start.
FY5253.is_month_end
Return boolean whether a timestamp occurs on the month end.
FY5253.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
FY5253.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
FY5253.is_year_start
Return boolean whether a timestamp occurs on the year start.
FY5253.is_year_end
Return boolean whether a timestamp occurs on the year end.
FY5253Quarter#
FY5253Quarter
DateOffset increments between business quarter dates for 52-53 week fiscal year.
Properties#
FY5253Quarter.freqstr
Return a string representing the frequency.
FY5253Quarter.kwds
Return a dict of extra parameters for the offset.
FY5253Quarter.name
Return a string representing the base frequency.
FY5253Quarter.nanos
FY5253Quarter.normalize
FY5253Quarter.rule_code
FY5253Quarter.n
FY5253Quarter.qtr_with_extra_week
FY5253Quarter.startingMonth
FY5253Quarter.variation
FY5253Quarter.weekday
Methods#
FY5253Quarter.apply
FY5253Quarter.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
FY5253Quarter.copy
Return a copy of the frequency.
FY5253Quarter.get_rule_code_suffix
FY5253Quarter.get_weeks
FY5253Quarter.isAnchored
FY5253Quarter.onOffset
FY5253Quarter.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
FY5253Quarter.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
FY5253Quarter.year_has_extra_week
FY5253Quarter.__call__(*args, **kwargs)
Call self as a function.
FY5253Quarter.is_month_start
Return boolean whether a timestamp occurs on the month start.
FY5253Quarter.is_month_end
Return boolean whether a timestamp occurs on the month end.
FY5253Quarter.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
FY5253Quarter.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
FY5253Quarter.is_year_start
Return boolean whether a timestamp occurs on the year start.
FY5253Quarter.is_year_end
Return boolean whether a timestamp occurs on the year end.
Easter#
Easter
DateOffset for the Easter holiday using logic defined in dateutil.
Properties#
Easter.freqstr
Return a string representing the frequency.
Easter.kwds
Return a dict of extra parameters for the offset.
Easter.name
Return a string representing the base frequency.
Easter.nanos
Easter.normalize
Easter.rule_code
Easter.n
Methods#
Easter.apply
Easter.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Easter.copy
Return a copy of the frequency.
Easter.isAnchored
Easter.onOffset
Easter.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Easter.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Easter.__call__(*args, **kwargs)
Call self as a function.
Easter.is_month_start
Return boolean whether a timestamp occurs on the month start.
Easter.is_month_end
Return boolean whether a timestamp occurs on the month end.
Easter.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Easter.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Easter.is_year_start
Return boolean whether a timestamp occurs on the year start.
Easter.is_year_end
Return boolean whether a timestamp occurs on the year end.
Tick#
Tick
Attributes
Properties#
Tick.delta
Tick.freqstr
Return a string representing the frequency.
Tick.kwds
Return a dict of extra parameters for the offset.
Tick.name
Return a string representing the base frequency.
Tick.nanos
Return an integer of the total number of nanoseconds.
Tick.normalize
Tick.rule_code
Tick.n
Methods#
Tick.copy
Return a copy of the frequency.
Tick.isAnchored
Tick.onOffset
Tick.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Tick.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Tick.__call__(*args, **kwargs)
Call self as a function.
Tick.apply
Tick.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Tick.is_month_start
Return boolean whether a timestamp occurs on the month start.
Tick.is_month_end
Return boolean whether a timestamp occurs on the month end.
Tick.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Tick.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Tick.is_year_start
Return boolean whether a timestamp occurs on the year start.
Tick.is_year_end
Return boolean whether a timestamp occurs on the year end.
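Tick subclasses (Day through Nano, below) are fixed-duration offsets; a minimal sketch (assumed, not from the original page) of exact nanosecond conversion and arithmetic:
```
>>> pd.offsets.Hour(2).nanos
7200000000000
>>> pd.Timestamp('2023-01-01 23:30') + pd.offsets.Hour(1)
Timestamp('2023-01-02 00:30:00')
```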
Day#
Day
Attributes
Properties#
Day.delta
Day.freqstr
Return a string representing the frequency.
Day.kwds
Return a dict of extra parameters for the offset.
Day.name
Return a string representing the base frequency.
Day.nanos
Return an integer of the total number of nanoseconds.
Day.normalize
Day.rule_code
Day.n
Methods#
Day.copy
Return a copy of the frequency.
Day.isAnchored
Day.onOffset
Day.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Day.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Day.__call__(*args, **kwargs)
Call self as a function.
Day.apply
Day.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Day.is_month_start
Return boolean whether a timestamp occurs on the month start.
Day.is_month_end
Return boolean whether a timestamp occurs on the month end.
Day.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Day.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Day.is_year_start
Return boolean whether a timestamp occurs on the year start.
Day.is_year_end
Return boolean whether a timestamp occurs on the year end.
Hour#
Hour
Attributes
Properties#
Hour.delta
Hour.freqstr
Return a string representing the frequency.
Hour.kwds
Return a dict of extra parameters for the offset.
Hour.name
Return a string representing the base frequency.
Hour.nanos
Return an integer of the total number of nanoseconds.
Hour.normalize
Hour.rule_code
Hour.n
Methods#
Hour.copy
Return a copy of the frequency.
Hour.isAnchored
Hour.onOffset
Hour.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Hour.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Hour.__call__(*args, **kwargs)
Call self as a function.
Hour.apply
Hour.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Hour.is_month_start
Return boolean whether a timestamp occurs on the month start.
Hour.is_month_end
Return boolean whether a timestamp occurs on the month end.
Hour.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Hour.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Hour.is_year_start
Return boolean whether a timestamp occurs on the year start.
Hour.is_year_end
Return boolean whether a timestamp occurs on the year end.
Minute#
Minute
Attributes
Properties#
Minute.delta
Minute.freqstr
Return a string representing the frequency.
Minute.kwds
Return a dict of extra parameters for the offset.
Minute.name
Return a string representing the base frequency.
Minute.nanos
Return an integer of the total number of nanoseconds.
Minute.normalize
Minute.rule_code
Minute.n
Methods#
Minute.copy
Return a copy of the frequency.
Minute.isAnchored
Minute.onOffset
Minute.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Minute.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Minute.__call__(*args, **kwargs)
Call self as a function.
Minute.apply
Minute.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Minute.is_month_start
Return boolean whether a timestamp occurs on the month start.
Minute.is_month_end
Return boolean whether a timestamp occurs on the month end.
Minute.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Minute.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Minute.is_year_start
Return boolean whether a timestamp occurs on the year start.
Minute.is_year_end
Return boolean whether a timestamp occurs on the year end.
Second#
Second
Attributes
Properties#
Second.delta
Second.freqstr
Return a string representing the frequency.
Second.kwds
Return a dict of extra parameters for the offset.
Second.name
Return a string representing the base frequency.
Second.nanos
Return an integer of the total number of nanoseconds.
Second.normalize
Second.rule_code
Second.n
Methods#
Second.copy
Return a copy of the frequency.
Second.isAnchored
Second.onOffset
Second.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Second.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Second.__call__(*args, **kwargs)
Call self as a function.
Second.apply
Second.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Second.is_month_start
Return boolean whether a timestamp occurs on the month start.
Second.is_month_end
Return boolean whether a timestamp occurs on the month end.
Second.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Second.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Second.is_year_start
Return boolean whether a timestamp occurs on the year start.
Second.is_year_end
Return boolean whether a timestamp occurs on the year end.
Milli#
Milli
Attributes
Properties#
Milli.delta
Milli.freqstr
Return a string representing the frequency.
Milli.kwds
Return a dict of extra parameters for the offset.
Milli.name
Return a string representing the base frequency.
Milli.nanos
Return an integer of the total number of nanoseconds.
Milli.normalize
Milli.rule_code
Milli.n
Methods#
Milli.copy
Return a copy of the frequency.
Milli.isAnchored
Milli.onOffset
Milli.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Milli.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Milli.__call__(*args, **kwargs)
Call self as a function.
Milli.apply
Milli.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Milli.is_month_start
Return boolean whether a timestamp occurs on the month start.
Milli.is_month_end
Return boolean whether a timestamp occurs on the month end.
Milli.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Milli.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Milli.is_year_start
Return boolean whether a timestamp occurs on the year start.
Milli.is_year_end
Return boolean whether a timestamp occurs on the year end.
Micro#
Micro
Attributes
Properties#
Micro.delta
Micro.freqstr
Return a string representing the frequency.
Micro.kwds
Return a dict of extra parameters for the offset.
Micro.name
Return a string representing the base frequency.
Micro.nanos
Return an integer of the total number of nanoseconds.
Micro.normalize
Micro.rule_code
Micro.n
Methods#
Micro.copy
Return a copy of the frequency.
Micro.isAnchored
Micro.onOffset
Micro.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Micro.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Micro.__call__(*args, **kwargs)
Call self as a function.
Micro.apply
Micro.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Micro.is_month_start
Return boolean whether a timestamp occurs on the month start.
Micro.is_month_end
Return boolean whether a timestamp occurs on the month end.
Micro.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Micro.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Micro.is_year_start
Return boolean whether a timestamp occurs on the year start.
Micro.is_year_end
Return boolean whether a timestamp occurs on the year end.
Nano#
Nano
Attributes
Properties#
Nano.delta
Nano.freqstr
Return a string representing the frequency.
Nano.kwds
Return a dict of extra parameters for the offset.
Nano.name
Return a string representing the base frequency.
Nano.nanos
Return an integer of the total number of nanoseconds.
Nano.normalize
Nano.rule_code
Nano.n
Methods#
Nano.copy
Return a copy of the frequency.
Nano.isAnchored
Nano.onOffset
Nano.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Nano.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Nano.__call__(*args, **kwargs)
Call self as a function.
Nano.apply
Nano.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Nano.is_month_start
Return boolean whether a timestamp occurs on the month start.
Nano.is_month_end
Return boolean whether a timestamp occurs on the month end.
Nano.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Nano.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Nano.is_year_start
Return boolean whether a timestamp occurs on the year start.
Nano.is_year_end
Return boolean whether a timestamp occurs on the year end.
Frequencies#
to_offset
Return DateOffset object from string or datetime.timedelta object.
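For illustration, a minimal doctest-style sketch of to_offset (importable from pandas.tseries.frequencies; offset reprs may vary slightly by version):
```
>>> from pandas.tseries.frequencies import to_offset
>>> to_offset("5min")
<5 * Minutes>
>>> import datetime
>>> to_offset(datetime.timedelta(hours=2))
<2 * Hours>
```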
|
reference/offset_frequency.html
|
pandas.Index.sort
|
`pandas.Index.sort`
Use sort_values instead.
|
final Index.sort(*args, **kwargs)[source]#
Use sort_values instead.
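Since sort itself is unusable, a minimal sketch of the suggested sort_values replacement (repr shown as of pandas 1.x):
```
>>> idx = pd.Index([10, 100, 1, 1000])
>>> idx.sort_values()
Int64Index([1, 10, 100, 1000], dtype='int64')
```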
|
reference/api/pandas.Index.sort.html
|
pandas.describe_option
|
`pandas.describe_option`
Prints the description for one or more registered options.
|
pandas.describe_option(pat, _print_desc=False) = <pandas._config.config.CallableDynamicDoc object>#
Prints the description for one or more registered options.
Call with no arguments to get a listing for all registered options.
Available options:
compute.[use_bottleneck, use_numba, use_numexpr]
display.[chop_threshold, colheader_justify, column_space, date_dayfirst,
date_yearfirst, encoding, expand_frame_repr, float_format]
display.html.[border, table_schema, use_mathjax]
display.[large_repr]
display.latex.[escape, longtable, multicolumn, multicolumn_format, multirow,
repr]
display.[max_categories, max_columns, max_colwidth, max_dir_items,
max_info_columns, max_info_rows, max_rows, max_seq_items, memory_usage,
min_rows, multi_sparse, notebook_repr_html, pprint_nest_depth, precision,
show_dimensions]
display.unicode.[ambiguous_as_wide, east_asian_width]
display.[width]
io.excel.ods.[reader, writer]
io.excel.xls.[reader, writer]
io.excel.xlsb.[reader]
io.excel.xlsm.[reader, writer]
io.excel.xlsx.[reader, writer]
io.hdf.[default_format, dropna_table]
io.parquet.[engine]
io.sql.[engine]
mode.[chained_assignment, copy_on_write, data_manager, sim_interactive,
string_storage, use_inf_as_na, use_inf_as_null]
plotting.[backend]
plotting.matplotlib.[register_converters]
styler.format.[decimal, escape, formatter, na_rep, precision, thousands]
styler.html.[mathjax]
styler.latex.[environment, hrules, multicol_align, multirow_align]
styler.render.[encoding, max_columns, max_elements, max_rows, repr]
styler.sparse.[columns, index]
Parameters
patstrRegexp pattern. All matching keys will have their description displayed.
_print_descbool, default TrueIf True (default) the description(s) will be printed to stdout.
Otherwise, the description(s) will be returned as a unicode string
(for testing).
Returns
None by default, the description(s) as a unicode string if _print_desc
is False
Notes
Please reference the User Guide for more information.
The available options with their descriptions:
compute.use_bottleneckboolUse the bottleneck library to accelerate if it is installed,
the default is True
Valid values: False,True
[default: True] [currently: True]
compute.use_numbaboolUse the numba engine option for select operations if it is installed,
the default is False
Valid values: False,True
[default: False] [currently: False]
compute.use_numexprboolUse the numexpr library to accelerate computation if it is installed,
the default is True
Valid values: False,True
[default: True] [currently: True]
display.chop_thresholdfloat or Noneif set to a float value, all float values smaller than the given threshold
will be displayed as exactly 0 by repr and friends.
[default: None] [currently: None]
display.colheader_justify‘left’/’right’Controls the justification of column headers. used by DataFrameFormatter.
[default: right] [currently: right]
display.column_space No description available.[default: 12] [currently: 12]
display.date_dayfirstbooleanWhen True, prints and parses dates with the day first, eg 20/01/2005
[default: False] [currently: False]
display.date_yearfirstbooleanWhen True, prints and parses dates with the year first, eg 2005/01/20
[default: False] [currently: False]
display.encodingstr/unicodeDefaults to the detected encoding of the console.
Specifies the encoding to be used for strings returned by to_string,
these are generally strings meant to be displayed on the console.
[default: utf-8] [currently: utf-8]
display.expand_frame_reprbooleanWhether to print out the full DataFrame repr for wide DataFrames across
multiple lines, max_columns is still respected, but the output will
wrap around across multiple “pages” if its width exceeds display.width.
[default: True] [currently: True]
display.float_formatcallableThe callable should accept a floating point number and return
a string with the desired format of the number. This is used
in some places like SeriesFormatter.
See formats.format.EngFormatter for an example.
[default: None] [currently: None]
display.html.borderintA border=value attribute is inserted in the <table> tag
for the DataFrame HTML repr.
[default: 1] [currently: 1]
display.html.table_schemabooleanWhether to publish a Table Schema representation for frontends
that support it.
(default: False)
[default: False] [currently: False]
display.html.use_mathjaxbooleanWhen True, Jupyter notebook will process table contents using MathJax,
rendering mathematical expressions enclosed by the dollar symbol.
(default: True)
[default: True] [currently: True]
display.large_repr‘truncate’/’info’For DataFrames exceeding max_rows/max_cols, the repr (and HTML repr) can
show a truncated table (the default from 0.13), or switch to the view from
df.info() (the behaviour in earlier versions of pandas).
[default: truncate] [currently: truncate]
display.latex.escapeboolThis specifies if the to_latex method of a DataFrame escapes special
characters.
Valid values: False,True
[default: True] [currently: True]
display.latex.longtable :boolThis specifies if the to_latex method of a Dataframe uses the longtable
format.
Valid values: False,True
[default: False] [currently: False]
display.latex.multicolumnboolThis specifies if the to_latex method of a Dataframe uses multicolumns
to pretty-print MultiIndex columns.
Valid values: False,True
[default: True] [currently: True]
display.latex.multicolumn_formatstrThe alignment specifier used for multicolumns when the to_latex method of a
DataFrame pretty-prints MultiIndex columns.
Valid values: ‘l’, ‘c’, ‘r’
[default: l] [currently: l]
display.latex.multirowboolThis specifies if the to_latex method of a Dataframe uses multirows
to pretty-print MultiIndex rows.
Valid values: False,True
[default: False] [currently: False]
display.latex.reprbooleanWhether to produce a latex DataFrame representation for jupyter
environments that support it.
(default: False)
[default: False] [currently: False]
display.max_categoriesintThis sets the maximum number of categories pandas should output when
printing out a Categorical or a Series of dtype “category”.
[default: 8] [currently: 8]
display.max_columnsintIf max_cols is exceeded, switch to truncate view. Depending on
large_repr, objects are either centrally truncated or printed as
a summary view. ‘None’ value means unlimited.
In case python/IPython is running in a terminal and large_repr
equals ‘truncate’ this can be set to 0 and pandas will auto-detect
the width of the terminal and print a truncated object which fits
the screen width. The IPython notebook, IPython qtconsole, or IDLE
do not run in a terminal and hence it is not possible to do
correct auto-detection.
[default: 0] [currently: 0]
display.max_colwidthint or NoneThe maximum width in characters of a column in the repr of
a pandas data structure. When the column overflows, a “…”
placeholder is embedded in the output. A ‘None’ value means unlimited.
[default: 50] [currently: 50]
display.max_dir_itemsintThe number of items that will be added to dir(…). ‘None’ value means
unlimited. Because dir is cached, changing this option will not immediately
affect already existing dataframes until a column is deleted or added.
This is for instance used to suggest columns from a dataframe to tab
completion.
[default: 100] [currently: 100]
display.max_info_columnsintmax_info_columns is used in DataFrame.info method to decide if
per column information will be printed.
[default: 100] [currently: 100]
display.max_info_rowsint or Nonedf.info() will usually show null-counts for each column.
For large frames this can be quite slow. max_info_rows and max_info_cols
limit this null check only to frames with smaller dimensions than
specified.
[default: 1690785] [currently: 1690785]
display.max_rowsintIf max_rows is exceeded, switch to truncate view. Depending on
large_repr, objects are either centrally truncated or printed as
a summary view. ‘None’ value means unlimited.
In case python/IPython is running in a terminal and large_repr
equals ‘truncate’ this can be set to 0 and pandas will auto-detect
the height of the terminal and print a truncated object which fits
the screen height. The IPython notebook, IPython qtconsole, or
IDLE do not run in a terminal and hence it is not possible to do
correct auto-detection.
[default: 60] [currently: 60]
display.max_seq_itemsint or NoneWhen pretty-printing a long sequence, no more than max_seq_items
will be printed. If items are omitted, they will be denoted by the
addition of “…” to the resulting string.
If set to None, the number of items to be printed is unlimited.
[default: 100] [currently: 100]
display.memory_usagebool, string or NoneThis specifies if the memory usage of a DataFrame should be displayed when
df.info() is called. Valid values True,False,’deep’
[default: True] [currently: True]
display.min_rowsintThe numbers of rows to show in a truncated view (when max_rows is
exceeded). Ignored when max_rows is set to None or 0. When set to
None, follows the value of max_rows.
[default: 10] [currently: 10]
display.multi_sparseboolean“sparsify” MultiIndex display (don’t display repeated
elements in outer levels within groups)
[default: True] [currently: True]
display.notebook_repr_htmlbooleanWhen True, IPython notebook will use html representation for
pandas objects (if it is available).
[default: True] [currently: True]
display.pprint_nest_depthintControls the number of nested levels to process when pretty-printing
[default: 3] [currently: 3]
display.precisionintFloating point output precision in terms of number of places after the
decimal, for regular formatting as well as scientific notation. Similar
to precision in numpy.set_printoptions().
[default: 6] [currently: 6]
display.show_dimensionsboolean or ‘truncate’Whether to print out dimensions at the end of DataFrame repr.
If ‘truncate’ is specified, only print out the dimensions if the
frame is truncated (e.g. not display all rows and/or columns)
[default: truncate] [currently: truncate]
display.unicode.ambiguous_as_widebooleanWhether to use the Unicode East Asian Width to calculate the display text
width.
Enabling this may affect the performance (default: False)
[default: False] [currently: False]
display.unicode.east_asian_widthbooleanWhether to use the Unicode East Asian Width to calculate the display text
width.
Enabling this may affect the performance (default: False)
[default: False] [currently: False]
display.widthintWidth of the display in characters. In case python/IPython is running in
a terminal this can be set to None and pandas will correctly auto-detect
the width.
Note that the IPython notebook, IPython qtconsole, or IDLE do not run in a
terminal and hence it is not possible to correctly detect the width.
[default: 80] [currently: 80]
io.excel.ods.readerstringThe default Excel reader engine for ‘ods’ files. Available options:
auto, odf.
[default: auto] [currently: auto]
io.excel.ods.writerstringThe default Excel writer engine for ‘ods’ files. Available options:
auto, odf.
[default: auto] [currently: auto]
io.excel.xls.readerstringThe default Excel reader engine for ‘xls’ files. Available options:
auto, xlrd.
[default: auto] [currently: auto]
io.excel.xls.writerstringThe default Excel writer engine for ‘xls’ files. Available options:
auto, xlwt.
[default: auto] [currently: auto]
(Deprecated, use `` instead.)
io.excel.xlsb.readerstringThe default Excel reader engine for ‘xlsb’ files. Available options:
auto, pyxlsb.
[default: auto] [currently: auto]
io.excel.xlsm.readerstringThe default Excel reader engine for ‘xlsm’ files. Available options:
auto, xlrd, openpyxl.
[default: auto] [currently: auto]
io.excel.xlsm.writerstringThe default Excel writer engine for ‘xlsm’ files. Available options:
auto, openpyxl.
[default: auto] [currently: auto]
io.excel.xlsx.readerstringThe default Excel reader engine for ‘xlsx’ files. Available options:
auto, xlrd, openpyxl.
[default: auto] [currently: auto]
io.excel.xlsx.writerstringThe default Excel writer engine for ‘xlsx’ files. Available options:
auto, openpyxl, xlsxwriter.
[default: auto] [currently: auto]
io.hdf.default_formatformatDefault writing format; if None, then
put will default to ‘fixed’ and append will default to ‘table’
[default: None] [currently: None]
io.hdf.dropna_tablebooleandrop ALL nan rows when appending to a table
[default: False] [currently: False]
io.parquet.enginestringThe default parquet reader/writer engine. Available options:
‘auto’, ‘pyarrow’, ‘fastparquet’, the default is ‘auto’
[default: auto] [currently: auto]
io.sql.enginestringThe default sql reader/writer engine. Available options:
‘auto’, ‘sqlalchemy’, the default is ‘auto’
[default: auto] [currently: auto]
mode.chained_assignmentstringRaise an exception, warn, or no action if trying to use chained assignment,
The default is warn
[default: warn] [currently: warn]
mode.copy_on_writeboolUse new copy-view behaviour using Copy-on-Write. Defaults to False,
unless overridden by the ‘PANDAS_COPY_ON_WRITE’ environment variable
(if set to “1” for True, needs to be set before pandas is imported).
[default: False] [currently: False]
mode.data_managerstringInternal data manager type; can be “block” or “array”. Defaults to “block”,
unless overridden by the ‘PANDAS_DATA_MANAGER’ environment variable (needs
to be set before pandas is imported).
[default: block] [currently: block]
mode.sim_interactivebooleanWhether to simulate interactive mode for purposes of testing
[default: False] [currently: False]
mode.string_storagestringThe default storage for StringDtype.
[default: python] [currently: python]
mode.use_inf_as_nabooleanTrue means treat None, NaN, INF, -INF as NA (old way),
False means None and NaN are null, but INF, -INF are not NA
(new way).
[default: False] [currently: False]
mode.use_inf_as_nullbooleanuse_inf_as_null has been deprecated and will be removed in a future
version. Use use_inf_as_na instead.
[default: False] [currently: False]
(Deprecated, use mode.use_inf_as_na instead.)
plotting.backendstrThe plotting backend to use. The default value is “matplotlib”, the
backend provided with pandas. Other backends can be specified by
providing the name of the module that implements the backend.
[default: matplotlib] [currently: matplotlib]
plotting.matplotlib.register_convertersbool or ‘auto’.Whether to register converters with matplotlib’s units registry for
dates, times, datetimes, and Periods. Toggling to False will remove
the converters, restoring any converters that pandas overwrote.
[default: auto] [currently: auto]
styler.format.decimalstrThe character representation for the decimal separator for floats and complex.
[default: .] [currently: .]
styler.format.escapestr, optionalWhether to escape certain characters according to the given context; html or latex.
[default: None] [currently: None]
styler.format.formatterstr, callable, dict, optionalA formatter object to be used as default within Styler.format.
[default: None] [currently: None]
styler.format.na_repstr, optionalThe string representation for values identified as missing.
[default: None] [currently: None]
styler.format.precisionintThe precision for floats and complex numbers.
[default: 6] [currently: 6]
styler.format.thousandsstr, optionalThe character representation for thousands separator for floats, int and complex.
[default: None] [currently: None]
styler.html.mathjaxboolIf False will render special CSS classes to table attributes that indicate Mathjax
will not be used in Jupyter Notebook.
[default: True] [currently: True]
styler.latex.environmentstrThe environment to replace \begin{table}. If “longtable” is used results
in a specific longtable environment format.
[default: None] [currently: None]
styler.latex.hrulesboolWhether to add horizontal rules on top and bottom and below the headers.
[default: False] [currently: False]
styler.latex.multicol_align{“r”, “c”, “l”, “naive-l”, “naive-r”}The specifier for horizontal alignment of sparsified LaTeX multicolumns. Pipe
decorators can also be added to non-naive values to draw vertical
rules, e.g. “|r” will draw a rule on the left side of right aligned merged cells.
[default: r] [currently: r]
styler.latex.multirow_align{“c”, “t”, “b”}The specifier for vertical alignment of sparsified LaTeX multirows.
[default: c] [currently: c]
styler.render.encodingstrThe encoding used for output HTML and LaTeX files.
[default: utf-8] [currently: utf-8]
styler.render.max_columnsint, optionalThe maximum number of columns that will be rendered. May still be reduced to
satisfy max_elements, which takes precedence.
[default: None] [currently: None]
styler.render.max_elementsintThe maximum number of data-cell (<td>) elements that will be rendered before
trimming will occur over columns, rows or both if needed.
[default: 262144] [currently: 262144]
styler.render.max_rowsint, optionalThe maximum number of rows that will be rendered. May still be reduced to
satisfy max_elements, which takes precedence.
[default: None] [currently: None]
styler.render.reprstrDetermine which output to use in Jupyter Notebook in {“html”, “latex”}.
[default: html] [currently: html]
styler.sparse.columnsboolWhether to sparsify the display of hierarchical columns. Setting to False will
display each explicit level element in a hierarchical key for each column.
[default: True] [currently: True]
styler.sparse.indexboolWhether to sparsify the display of a hierarchical index. Setting to False will
display each explicit level element in a hierarchical key for each row.
[default: True] [currently: True]
|
reference/api/pandas.describe_option.html
|
pandas.Index.get_slice_bound
|
`pandas.Index.get_slice_bound`
Calculate slice bound that corresponds to given label.
|
Index.get_slice_bound(label, side, kind=_NoDefault.no_default)[source]#
Calculate slice bound that corresponds to given label.
Returns leftmost (one-past-the-rightmost if side=='right') position
of given label.
Parameters
labelobject
side{‘left’, ‘right’}
kind{‘loc’, ‘getitem’} or None
Deprecated since version 1.4.0.
Returns
intIndex of label.
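A short doctest-style sketch on a monotonic Index:
```
>>> idx = pd.Index(list("abbc"))
>>> idx.get_slice_bound("b", side="left")
1
>>> idx.get_slice_bound("b", side="right")
3
```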
|
reference/api/pandas.Index.get_slice_bound.html
|
pandas.tseries.offsets.Nano.copy
|
`pandas.tseries.offsets.Nano.copy`
Return a copy of the frequency.
```
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
```
|
Nano.copy()#
Return a copy of the frequency.
Examples
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
|
reference/api/pandas.tseries.offsets.Nano.copy.html
|
pandas.Series.to_list
|
`pandas.Series.to_list`
Return a list of the values.
|
Series.to_list()[source]#
Return a list of the values.
These are each a scalar type, which is a Python scalar
(for str, int, float) or a pandas scalar
(for Timestamp/Timedelta/Interval/Period).
Returns
list
See also
numpy.ndarray.tolistReturn the array as an a.ndim-levels deep nested list of Python scalars.
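A minimal sketch:
```
>>> s = pd.Series([1, 2, 3])
>>> s.to_list()
[1, 2, 3]
```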
|
reference/api/pandas.Series.to_list.html
|
pandas.DataFrame.groupby
|
`pandas.DataFrame.groupby`
Group DataFrame using a mapper or by a Series of columns.
```
>>> df = pd.DataFrame({'Animal': ['Falcon', 'Falcon',
... 'Parrot', 'Parrot'],
... 'Max Speed': [380., 370., 24., 26.]})
>>> df
Animal Max Speed
0 Falcon 380.0
1 Falcon 370.0
2 Parrot 24.0
3 Parrot 26.0
>>> df.groupby(['Animal']).mean()
Max Speed
Animal
Falcon 375.0
Parrot 25.0
```
|
DataFrame.groupby(by=None, axis=0, level=None, as_index=True, sort=True, group_keys=_NoDefault.no_default, squeeze=_NoDefault.no_default, observed=False, dropna=True)[source]#
Group DataFrame using a mapper or by a Series of columns.
A groupby operation involves some combination of splitting the
object, applying a function, and combining the results. This can be
used to group large amounts of data and compute operations on these
groups.
Parameters
bymapping, function, label, or list of labelsUsed to determine the groups for the groupby.
If by is a function, it’s called on each value of the object’s
index. If a dict or Series is passed, the Series or dict VALUES
will be used to determine the groups (the Series’ values are first
aligned; see .align() method). If a list or ndarray of length
equal to the selected axis is passed (see the groupby user guide),
the values are used as-is to determine the groups. A label or list
of labels may be passed to group by the columns in self.
Notice that a tuple is interpreted as a (single) key.
axis{0 or ‘index’, 1 or ‘columns’}, default 0Split along rows (0) or columns (1). For Series this parameter
is unused and defaults to 0.
levelint, level name, or sequence of such, default NoneIf the axis is a MultiIndex (hierarchical), group by a particular
level or levels. Do not specify both by and level.
as_indexbool, default TrueFor aggregated output, return object with group labels as the
index. Only relevant for DataFrame input. as_index=False is
effectively “SQL-style” grouped output.
sortbool, default TrueSort group keys. Get better performance by turning this off.
Note this does not influence the order of observations within each
group. Groupby preserves the order of rows within each group.
group_keysbool, optionalWhen calling apply and the by argument produces a like-indexed
(i.e. a transform) result, add group keys to
index to identify pieces. By default group keys are not included
when the result’s index (and column) labels match the inputs, and
are included otherwise. This argument has no effect if the result produced
is not like-indexed with respect to the input.
Changed in version 1.5.0: Warns that group_keys will no longer be ignored when the
result from apply is a like-indexed Series or DataFrame.
Specify group_keys explicitly to include the group keys or
not.
squeezebool, default FalseReduce the dimensionality of the return type if possible,
otherwise return a consistent type.
Deprecated since version 1.1.0.
observedbool, default FalseThis only applies if any of the groupers are Categoricals.
If True: only show observed values for categorical groupers.
If False: show all values for categorical groupers.
dropnabool, default TrueIf True, and if group keys contain NA values, NA values together
with row/column will be dropped.
If False, NA values will also be treated as the key in groups.
New in version 1.1.0.
Returns
DataFrameGroupByReturns a groupby object that contains information about the groups.
See also
resampleConvenience method for frequency conversion and resampling of time series.
Notes
See the user guide for more
detailed usage and examples, including splitting an object into groups,
iterating through groups, selecting a group, aggregation, and more.
Examples
>>> df = pd.DataFrame({'Animal': ['Falcon', 'Falcon',
... 'Parrot', 'Parrot'],
... 'Max Speed': [380., 370., 24., 26.]})
>>> df
Animal Max Speed
0 Falcon 380.0
1 Falcon 370.0
2 Parrot 24.0
3 Parrot 26.0
>>> df.groupby(['Animal']).mean()
Max Speed
Animal
Falcon 375.0
Parrot 25.0
Hierarchical Indexes
We can groupby different levels of a hierarchical index
using the level parameter:
>>> arrays = [['Falcon', 'Falcon', 'Parrot', 'Parrot'],
... ['Captive', 'Wild', 'Captive', 'Wild']]
>>> index = pd.MultiIndex.from_arrays(arrays, names=('Animal', 'Type'))
>>> df = pd.DataFrame({'Max Speed': [390., 350., 30., 20.]},
... index=index)
>>> df
Max Speed
Animal Type
Falcon Captive 390.0
Wild 350.0
Parrot Captive 30.0
Wild 20.0
>>> df.groupby(level=0).mean()
Max Speed
Animal
Falcon 370.0
Parrot 25.0
>>> df.groupby(level="Type").mean()
Max Speed
Type
Captive 210.0
Wild 185.0
We can also choose to include NA in group keys or not by setting
dropna parameter, the default setting is True.
>>> l = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]]
>>> df = pd.DataFrame(l, columns=["a", "b", "c"])
>>> df.groupby(by=["b"]).sum()
a c
b
1.0 2 3
2.0 2 5
>>> df.groupby(by=["b"], dropna=False).sum()
a c
b
1.0 2 3
2.0 2 5
NaN 1 4
>>> l = [["a", 12, 12], [None, 12.3, 33.], ["b", 12.3, 123], ["a", 1, 1]]
>>> df = pd.DataFrame(l, columns=["a", "b", "c"])
>>> df.groupby(by="a").sum()
b c
a
a 13.0 13.0
b 12.3 123.0
>>> df.groupby(by="a", dropna=False).sum()
b c
a
a 13.0 13.0
b 12.3 123.0
NaN 12.3 33.0
When using .apply(), use group_keys to include or exclude the group keys.
The group_keys argument defaults to True (include).
>>> df = pd.DataFrame({'Animal': ['Falcon', 'Falcon',
... 'Parrot', 'Parrot'],
... 'Max Speed': [380., 370., 24., 26.]})
>>> df.groupby("Animal", group_keys=True).apply(lambda x: x)
Animal Max Speed
Animal
Falcon 0 Falcon 380.0
1 Falcon 370.0
Parrot 2 Parrot 24.0
3 Parrot 26.0
>>> df.groupby("Animal", group_keys=False).apply(lambda x: x)
Animal Max Speed
0 Falcon 380.0
1 Falcon 370.0
2 Parrot 24.0
3 Parrot 26.0
|
reference/api/pandas.DataFrame.groupby.html
|
pandas.tseries.offsets.Second.copy
|
`pandas.tseries.offsets.Second.copy`
Return a copy of the frequency.
Examples
```
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
```
|
Second.copy()#
Return a copy of the frequency.
Examples
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
|
reference/api/pandas.tseries.offsets.Second.copy.html
|
pandas.Index.is_floating
|
`pandas.Index.is_floating`
Check if the Index is a floating type.
The Index may consist of only floats, NaNs, or a mix of floats,
integers, or NaNs.
```
>>> idx = pd.Index([1.0, 2.0, 3.0, 4.0])
>>> idx.is_floating()
True
```
|
final Index.is_floating()[source]#
Check if the Index is a floating type.
The Index may consist of only floats, NaNs, or a mix of floats,
integers, or NaNs.
Returns
boolWhether or not the Index consists of only floats, NaNs, or
a mix of floats, integers, or NaNs.
See also
is_booleanCheck if the Index only consists of booleans.
is_integerCheck if the Index only consists of integers.
is_numericCheck if the Index only consists of numeric data.
is_objectCheck if the Index is of the object dtype.
is_categoricalCheck if the Index holds categorical data.
is_intervalCheck if the Index holds Interval objects.
is_mixedCheck if the Index holds data with mixed data types.
Examples
>>> idx = pd.Index([1.0, 2.0, 3.0, 4.0])
>>> idx.is_floating()
True
>>> idx = pd.Index([1.0, 2.0, np.nan, 4.0])
>>> idx.is_floating()
True
>>> idx = pd.Index([1, 2, 3, 4, np.nan])
>>> idx.is_floating()
True
>>> idx = pd.Index([1, 2, 3, 4])
>>> idx.is_floating()
False
|
reference/api/pandas.Index.is_floating.html
|
pandas.tseries.offsets.Tick.normalize
|
pandas.tseries.offsets.Tick.normalize
|
Tick.normalize#
|
reference/api/pandas.tseries.offsets.Tick.normalize.html
|
pandas.tseries.offsets.BusinessMonthEnd.is_month_end
|
`pandas.tseries.offsets.BusinessMonthEnd.is_month_end`
Return boolean whether a timestamp occurs on the month end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
```
|
BusinessMonthEnd.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
|
reference/api/pandas.tseries.offsets.BusinessMonthEnd.is_month_end.html
|
pandas.tseries.offsets.Milli.is_quarter_start
|
`pandas.tseries.offsets.Milli.is_quarter_start`
Return boolean whether a timestamp occurs on the quarter start.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
```
|
Milli.is_quarter_start()#
Return boolean whether a timestamp occurs on the quarter start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
|
reference/api/pandas.tseries.offsets.Milli.is_quarter_start.html
|
pandas.DataFrame.ne
|
`pandas.DataFrame.ne`
Get Not equal to of dataframe and other, element-wise (binary operator ne).
Among flexible wrappers (eq, ne, le, lt, ge, gt) to comparison
operators.
```
>>> df = pd.DataFrame({'cost': [250, 150, 100],
... 'revenue': [100, 250, 300]},
... index=['A', 'B', 'C'])
>>> df
cost revenue
A 250 100
B 150 250
C 100 300
```
|
DataFrame.ne(other, axis='columns', level=None)[source]#
Get Not equal to of dataframe and other, element-wise (binary operator ne).
Among flexible wrappers (eq, ne, le, lt, ge, gt) to comparison
operators.
Equivalent to ==, !=, <=, <, >=, > with support to choose axis
(rows or columns) and level for comparison.
Parameters
otherscalar, sequence, Series, or DataFrameAny single or multiple element data structure, or list-like object.
axis{0 or ‘index’, 1 or ‘columns’}, default ‘columns’Whether to compare by the index (0 or ‘index’) or columns
(1 or ‘columns’).
levelint or labelBroadcast across a level, matching Index values on the passed
MultiIndex level.
Returns
DataFrame of boolResult of the comparison.
See also
DataFrame.eqCompare DataFrames for equality elementwise.
DataFrame.neCompare DataFrames for inequality elementwise.
DataFrame.leCompare DataFrames for less than inequality or equality elementwise.
DataFrame.ltCompare DataFrames for strictly less than inequality elementwise.
DataFrame.geCompare DataFrames for greater than inequality or equality elementwise.
DataFrame.gtCompare DataFrames for strictly greater than inequality elementwise.
Notes
Mismatched indices will be unioned together.
NaN values are considered different (i.e. NaN != NaN).
Examples
>>> df = pd.DataFrame({'cost': [250, 150, 100],
... 'revenue': [100, 250, 300]},
... index=['A', 'B', 'C'])
>>> df
cost revenue
A 250 100
B 150 250
C 100 300
Comparison with a scalar, using either the operator or method:
>>> df == 100
cost revenue
A False True
B False False
C True False
>>> df.eq(100)
cost revenue
A False True
B False False
C True False
When other is a Series, the columns of a DataFrame are aligned
with the index of other and broadcast:
>>> df != pd.Series([100, 250], index=["cost", "revenue"])
cost revenue
A True True
B True False
C False True
Use the method to control the broadcast axis:
>>> df.ne(pd.Series([100, 300], index=["A", "D"]), axis='index')
cost revenue
A True False
B True True
C True True
D True True
When comparing to an arbitrary sequence, the number of columns must
match the number of elements in other:
>>> df == [250, 100]
cost revenue
A True True
B False False
C False False
Use the method to control the axis:
>>> df.eq([250, 250, 100], axis='index')
cost revenue
A True False
B False True
C True False
Compare to a DataFrame of different shape.
>>> other = pd.DataFrame({'revenue': [300, 250, 100, 150]},
... index=['A', 'B', 'C', 'D'])
>>> other
revenue
A 300
B 250
C 100
D 150
>>> df.gt(other)
cost revenue
A False False
B False False
C False True
D False False
Compare to a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'cost': [250, 150, 100, 150, 300, 220],
... 'revenue': [100, 250, 300, 200, 175, 225]},
... index=[['Q1', 'Q1', 'Q1', 'Q2', 'Q2', 'Q2'],
... ['A', 'B', 'C', 'A', 'B', 'C']])
>>> df_multindex
cost revenue
Q1 A 250 100
B 150 250
C 100 300
Q2 A 150 200
B 300 175
C 220 225
>>> df.le(df_multindex, level=1)
cost revenue
Q1 A True True
B True True
C True True
Q2 A False True
B True False
C True False
|
reference/api/pandas.DataFrame.ne.html
|
pandas arrays, scalars, and data types
|
pandas arrays, scalars, and data types
|
Objects#
For most data types, pandas uses NumPy arrays as the concrete
objects contained within an Index, Series, or
DataFrame.
For some data types, pandas extends NumPy’s type system. String aliases for these types
can be found at dtypes.
| Kind of Data | pandas Data Type | Scalar | Array |
| --- | --- | --- | --- |
| TZ-aware datetime | DatetimeTZDtype | Timestamp | Datetimes |
| Timedeltas | (none) | Timedelta | Timedeltas |
| Period (time spans) | PeriodDtype | Period | Periods |
| Intervals | IntervalDtype | Interval | Intervals |
| Nullable Integer | Int64Dtype, … | (none) | Nullable integer |
| Categorical | CategoricalDtype | (none) | Categoricals |
| Sparse | SparseDtype | (none) | Sparse |
| Strings | StringDtype | str | Strings |
| Boolean (with NA) | BooleanDtype | bool | Nullable Boolean |
| PyArrow | ArrowDtype | Python Scalars or NA | PyArrow |
pandas and third-party libraries can extend NumPy’s type system (see Extension types).
The top-level array() method can be used to create a new array, which may be
stored in a Series, Index, or as a column in a DataFrame.
array(data[, dtype, copy])
Create an array.
PyArrow#
Warning
This feature is experimental, and the API can change in a future release without warning.
The arrays.ArrowExtensionArray is backed by a pyarrow.ChunkedArray with a
pyarrow.DataType instead of a NumPy array and data type. The .dtype of a arrays.ArrowExtensionArray
is an ArrowDtype.
PyArrow provides array and data type support similar to NumPy, including
first-class nullability support for all data types, immutability, and more.
Note
For string types (pyarrow.string(), string[pyarrow]), PyArrow support is still facilitated
by arrays.ArrowStringArray and StringDtype("pyarrow"). See the string section
below.
While individual values in an arrays.ArrowExtensionArray are stored as PyArrow objects, scalars are returned
as Python scalars corresponding to the data type, e.g. a PyArrow int64 will be returned as Python int, or NA for missing
values.
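For illustration, a minimal sketch of constructing a PyArrow-backed array (assumes pyarrow is installed; exact reprs may change while the API is experimental):
```
>>> import pyarrow as pa
>>> arr = pd.array([1, 2, None], dtype=pd.ArrowDtype(pa.int64()))
>>> arr.dtype
int64[pyarrow]
>>> arr[2]  # missing values come back as pd.NA
<NA>
```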
arrays.ArrowExtensionArray(values)
Pandas ExtensionArray backed by a PyArrow ChunkedArray.
ArrowDtype(pyarrow_dtype)
An ExtensionDtype for PyArrow data types.
Datetimes#
NumPy cannot natively represent timezone-aware datetimes. pandas supports this
with the arrays.DatetimeArray extension array, which can hold timezone-naive
or timezone-aware values.
Timestamp, a subclass of datetime.datetime, is pandas’
scalar type for timezone-naive or timezone-aware datetime data.
Timestamp([ts_input, freq, tz, unit, year, ...])
Pandas replacement for python datetime.datetime object.
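A small sketch of the scalar type:
```
>>> pd.Timestamp("2022-01-01 12:00", tz="UTC")
Timestamp('2022-01-01 12:00:00+0000', tz='UTC')
>>> pd.Timestamp("2022-01-01").is_month_start
True
```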
Properties#
Timestamp.asm8
Return numpy datetime64 format in nanoseconds.
Timestamp.day
Timestamp.dayofweek
Return day of the week.
Timestamp.day_of_week
Return day of the week.
Timestamp.dayofyear
Return the day of the year.
Timestamp.day_of_year
Return the day of the year.
Timestamp.days_in_month
Return the number of days in the month.
Timestamp.daysinmonth
Return the number of days in the month.
Timestamp.fold
Timestamp.hour
Timestamp.is_leap_year
Return True if year is a leap year.
Timestamp.is_month_end
Return True if date is last day of month.
Timestamp.is_month_start
Return True if date is first day of month.
Timestamp.is_quarter_end
Return True if date is last day of the quarter.
Timestamp.is_quarter_start
Return True if date is first day of the quarter.
Timestamp.is_year_end
Return True if date is last day of the year.
Timestamp.is_year_start
Return True if date is first day of the year.
Timestamp.max
Timestamp.microsecond
Timestamp.min
Timestamp.minute
Timestamp.month
Timestamp.nanosecond
Timestamp.quarter
Return the quarter of the year.
Timestamp.resolution
Timestamp.second
Timestamp.tz
Alias for tzinfo.
Timestamp.tzinfo
Timestamp.value
Timestamp.week
Return the week number of the year.
Timestamp.weekofyear
Return the week number of the year.
Timestamp.year
Methods#
Timestamp.astimezone(tz)
Convert timezone-aware Timestamp to another time zone.
Timestamp.ceil(freq[, ambiguous, nonexistent])
Return a new Timestamp ceiled to this resolution.
Timestamp.combine(date, time)
Combine date, time into datetime with same date and time fields.
Timestamp.ctime
Return ctime() style string.
Timestamp.date
Return date object with same year, month and day.
Timestamp.day_name
Return the day name of the Timestamp with specified locale.
Timestamp.dst
Return self.tzinfo.dst(self).
Timestamp.floor(freq[, ambiguous, nonexistent])
Return a new Timestamp floored to this resolution.
Timestamp.freq
Timestamp.freqstr
Return a string representing the frequency of the Timestamp.
Timestamp.fromordinal(ordinal[, freq, tz])
Construct a timestamp from a proleptic Gregorian ordinal.
Timestamp.fromtimestamp(ts)
Transform timestamp[, tz] to tz's local time from POSIX timestamp.
Timestamp.isocalendar
Return a 3-tuple containing ISO year, week number, and weekday.
Timestamp.isoformat
Return the time formatted according to ISO 8601.
Timestamp.isoweekday()
Return the day of the week represented by the date.
Timestamp.month_name
Return the month name of the Timestamp with specified locale.
Timestamp.normalize
Normalize Timestamp to midnight, preserving tz information.
Timestamp.now([tz])
Return new Timestamp object representing current time local to tz.
Timestamp.replace([year, month, day, hour, ...])
Implements datetime.replace, handles nanoseconds.
Timestamp.round(freq[, ambiguous, nonexistent])
Round the Timestamp to the specified resolution.
Timestamp.strftime(format)
Return a formatted string of the Timestamp.
Timestamp.strptime(string, format)
Function is not implemented.
Timestamp.time
Return time object with same time but with tzinfo=None.
Timestamp.timestamp
Return POSIX timestamp as float.
Timestamp.timetuple
Return time tuple, compatible with time.localtime().
Timestamp.timetz
Return time object with same time and tzinfo.
Timestamp.to_datetime64
Return a numpy.datetime64 object with 'ns' precision.
Timestamp.to_numpy
Convert the Timestamp to a NumPy datetime64.
Timestamp.to_julian_date()
Convert TimeStamp to a Julian Date.
Timestamp.to_period
Return a period of which this timestamp is an observation.
Timestamp.to_pydatetime
Convert a Timestamp object to a native Python datetime object.
Timestamp.today([tz])
Return the current time in the local timezone.
Timestamp.toordinal
Return proleptic Gregorian ordinal.
Timestamp.tz_convert(tz)
Convert timezone-aware Timestamp to another time zone.
Timestamp.tz_localize(tz[, ambiguous, ...])
Localize the Timestamp to a timezone.
Timestamp.tzname
Return self.tzinfo.tzname(self).
Timestamp.utcfromtimestamp(ts)
Construct a naive UTC datetime from a POSIX timestamp.
Timestamp.utcnow()
Return a new Timestamp representing UTC day and time.
Timestamp.utcoffset
Return self.tzinfo.utcoffset(self).
Timestamp.utctimetuple
Return UTC time tuple, compatible with time.localtime().
Timestamp.weekday()
Return the day of the week represented by the date.
A collection of timestamps may be stored in a arrays.DatetimeArray.
For timezone-aware data, the .dtype of a arrays.DatetimeArray is a
DatetimeTZDtype. For timezone-naive data, np.dtype("datetime64[ns]")
is used.
If the data are timezone-aware, then every value in the array must have the same timezone.
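A short sketch of the dtype split between timezone-naive and timezone-aware data:
```
>>> pd.Series(pd.date_range("2022-01-01", periods=2)).dtype
dtype('<M8[ns]')
>>> pd.Series(pd.date_range("2022-01-01", periods=2, tz="UTC")).dtype
datetime64[ns, UTC]
```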
arrays.DatetimeArray(values[, dtype, freq, copy])
Pandas ExtensionArray for tz-naive or tz-aware datetime data.
DatetimeTZDtype([unit, tz])
An ExtensionDtype for timezone-aware datetime data.
Timedeltas#
NumPy can natively represent timedeltas. pandas provides Timedelta
for symmetry with Timestamp.
Timedelta([value, unit])
Represents a duration, the difference between two dates or times.
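A minimal sketch of the scalar:
```
>>> td = pd.Timedelta("1 days 2 hours")
>>> td
Timedelta('1 days 02:00:00')
>>> td.total_seconds()
93600.0
```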
Properties#
Timedelta.asm8
Return a numpy timedelta64 array scalar view.
Timedelta.components
Return a components namedtuple-like.
Timedelta.days
Timedelta.delta
(DEPRECATED) Return the timedelta in nanoseconds (ns), for internal compatibility.
Timedelta.freq
(DEPRECATED) Freq property.
Timedelta.is_populated
(DEPRECATED) Is_populated property.
Timedelta.max
Timedelta.microseconds
Timedelta.min
Timedelta.nanoseconds
Return the number of nanoseconds (n), where 0 <= n < 1 microsecond.
Timedelta.resolution
Timedelta.seconds
Timedelta.value
Timedelta.view
Array view compatibility.
Methods#
Timedelta.ceil(freq)
Return a new Timedelta ceiled to this resolution.
Timedelta.floor(freq)
Return a new Timedelta floored to this resolution.
Timedelta.isoformat
Format the Timedelta as ISO 8601 Duration.
Timedelta.round(freq)
Round the Timedelta to the specified resolution.
Timedelta.to_pytimedelta
Convert a pandas Timedelta object into a python datetime.timedelta object.
Timedelta.to_timedelta64
Return a numpy.timedelta64 object with 'ns' precision.
Timedelta.to_numpy
Convert the Timedelta to a NumPy timedelta64.
Timedelta.total_seconds
Total seconds in the duration.
A collection of Timedelta may be stored in a TimedeltaArray.
arrays.TimedeltaArray(values[, dtype, freq, ...])
Pandas ExtensionArray for timedelta data.
Periods#
pandas represents spans of times as Period objects.
Period#
Period([value, freq, ordinal, year, month, ...])
Represents a period of time.
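A minimal sketch of a monthly span:
```
>>> p = pd.Period("2022-03", freq="M")
>>> p.start_time
Timestamp('2022-03-01 00:00:00')
>>> p.end_time
Timestamp('2022-03-31 23:59:59.999999999')
```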
Properties#
Period.day
Get day of the month that a Period falls on.
Period.dayofweek
Day of the week the period lies in, with Monday=0 and Sunday=6.
Period.day_of_week
Day of the week the period lies in, with Monday=0 and Sunday=6.
Period.dayofyear
Return the day of the year.
Period.day_of_year
Return the day of the year.
Period.days_in_month
Get the total number of days in the month that this period falls on.
Period.daysinmonth
Get the total number of days of the month that this period falls on.
Period.end_time
Get the Timestamp for the end of the period.
Period.freq
Period.freqstr
Return a string representation of the frequency.
Period.hour
Get the hour of the day component of the Period.
Period.is_leap_year
Return True if the period's year is in a leap year.
Period.minute
Get minute of the hour component of the Period.
Period.month
Return the month this Period falls on.
Period.ordinal
Period.quarter
Return the quarter this Period falls on.
Period.qyear
Fiscal year the Period lies in according to its starting-quarter.
Period.second
Get the second component of the Period.
Period.start_time
Get the Timestamp for the start of the period.
Period.week
Get the week of the year on the given Period.
Period.weekday
Day of the week the period lies in, with Monday=0 and Sunday=6.
Period.weekofyear
Get the week of the year on the given Period.
Period.year
Return the year this Period falls on.
Methods#
Period.asfreq
Convert Period to desired frequency, at the start or end of the interval.
Period.now
Return the period of now's date.
Period.strftime
Returns a formatted string representation of the Period.
Period.to_timestamp
Return the Timestamp representation of the Period.
A collection of Period may be stored in a arrays.PeriodArray.
Every period in a arrays.PeriodArray must have the same freq.
arrays.PeriodArray(values[, dtype, freq, copy])
Pandas ExtensionArray for storing Period data.
PeriodDtype([freq])
An ExtensionDtype for Period data.
Intervals#
Arbitrary intervals can be represented as Interval objects.
Interval
Immutable object implementing an Interval, a bounded slice-like interval.
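A minimal sketch; membership honors the closed side:
```
>>> iv = pd.Interval(0, 5, closed="right")
>>> 2.5 in iv
True
>>> 0 in iv  # the left endpoint is open
False
```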
Properties#
Interval.closed
String describing the inclusive side the intervals.
Interval.closed_left
Check if the interval is closed on the left side.
Interval.closed_right
Check if the interval is closed on the right side.
Interval.is_empty
Indicates if an interval is empty, meaning it contains no points.
Interval.left
Left bound for the interval.
Interval.length
Return the length of the Interval.
Interval.mid
Return the midpoint of the Interval.
Interval.open_left
Check if the interval is open on the left side.
Interval.open_right
Check if the interval is open on the right side.
Interval.overlaps
Check whether two Interval objects overlap.
Interval.right
Right bound for the interval.
A collection of intervals may be stored in an arrays.IntervalArray.
arrays.IntervalArray(data[, closed, dtype, ...])
Pandas array for interval data that are closed on the same side.
IntervalDtype([subtype, closed])
An ExtensionDtype for Interval data.
Nullable integer#
numpy.ndarray cannot natively represent integer data with missing values.
pandas provides this through arrays.IntegerArray.
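A minimal sketch using the "Int64" string alias:
```
>>> pd.array([1, 2, None], dtype="Int64")
<IntegerArray>
[1, 2, <NA>]
Length: 3, dtype: Int64
```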
arrays.IntegerArray(values, mask[, copy])
Array of integer (optional missing) values.
Int8Dtype()
An ExtensionDtype for int8 integer data.
Int16Dtype()
An ExtensionDtype for int16 integer data.
Int32Dtype()
An ExtensionDtype for int32 integer data.
Int64Dtype()
An ExtensionDtype for int64 integer data.
UInt8Dtype()
An ExtensionDtype for uint8 integer data.
UInt16Dtype()
An ExtensionDtype for uint16 integer data.
UInt32Dtype()
An ExtensionDtype for uint32 integer data.
UInt64Dtype()
An ExtensionDtype for uint64 integer data.
Categoricals#
pandas defines a custom data type for representing data that can take only a
limited, fixed set of values. The dtype of a Categorical can be described by
a CategoricalDtype.
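A minimal sketch; the codes are integer positions into the categories:
```
>>> cat = pd.Categorical(["a", "b", "a"], categories=["a", "b", "c"])
>>> cat.codes
array([0, 1, 0], dtype=int8)
>>> cat.dtype
CategoricalDtype(categories=['a', 'b', 'c'], ordered=False)
```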
CategoricalDtype([categories, ordered])
Type for categorical data with the categories and orderedness.
CategoricalDtype.categories
An Index containing the unique categories allowed.
CategoricalDtype.ordered
Whether the categories have an ordered relationship.
Categorical data can be stored in a pandas.Categorical
Categorical(values[, categories, ordered, ...])
Represent a categorical variable in classic R / S-plus fashion.
The alternative Categorical.from_codes() constructor can be used when you
have the categories and integer codes already:
Categorical.from_codes(codes[, categories, ...])
Make a Categorical type from codes and categories or dtype.
The dtype information is available on the Categorical
Categorical.dtype
The CategoricalDtype for this instance.
Categorical.categories
The categories of this categorical.
Categorical.ordered
Whether the categories have an ordered relationship.
Categorical.codes
The category codes of this categorical.
np.asarray(categorical) works by implementing the array interface. Be aware that this converts
the Categorical back to a NumPy array, so the categories and order information are not preserved!
Categorical.__array__([dtype])
The numpy array interface.
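A short sketch of the lossy conversion:
```
>>> import numpy as np
>>> np.asarray(pd.Categorical(["a", "b", "a"]))
array(['a', 'b', 'a'], dtype=object)
```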
A Categorical can be stored in a Series or DataFrame.
To create a Series of dtype category, use cat = s.astype(dtype) or
Series(..., dtype=dtype), where dtype is either the string 'category' or
an instance of CategoricalDtype.
If the Series is of dtype CategoricalDtype, Series.cat can be used to change the categorical
data. See Categorical accessor for more.
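A minimal sketch:
```
>>> s = pd.Series(["a", "b", "a"], dtype="category")
>>> s.cat.categories
Index(['a', 'b'], dtype='object')
```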
Sparse#
Data where a single value is repeated many times (e.g. 0 or NaN) may
be stored efficiently as an arrays.SparseArray.
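A minimal sketch; only the non-fill values are stored:
```
>>> arr = pd.arrays.SparseArray([0, 0, 1, 0])
>>> arr.dtype
Sparse[int64, 0]
>>> arr.sp_values
array([1])
```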
arrays.SparseArray(data[, sparse_index, ...])
An ExtensionArray for storing sparse data.
SparseDtype([dtype, fill_value])
Dtype for data stored in SparseArray.
The Series.sparse accessor may be used to access sparse-specific attributes
and methods if the Series contains sparse values. See
Sparse accessor and the user guide for more.
Strings#
When working with text data, where each valid element is a string or missing,
we recommend using StringDtype (with the alias "string").
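A minimal sketch using the "string" alias:
```
>>> pd.array(["a", None, "c"], dtype="string")
<StringArray>
['a', <NA>, 'c']
Length: 3, dtype: string
```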
arrays.StringArray(values[, copy])
Extension array for string data.
arrays.ArrowStringArray(values)
Extension array for string data in a pyarrow.ChunkedArray.
StringDtype([storage])
Extension dtype for string data.
The Series.str accessor is available for Series backed by a arrays.StringArray.
See String handling for more.
Nullable Boolean#
The boolean dtype (with the alias "boolean") provides support for storing
boolean data (True, False) with missing values, which is not possible
with a bool numpy.ndarray.
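A minimal sketch using the "boolean" alias:
```
>>> pd.array([True, False, None], dtype="boolean")
<BooleanArray>
[True, False, <NA>]
Length: 3, dtype: boolean
```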
arrays.BooleanArray(values, mask[, copy])
Array of boolean (True/False) data with missing values.
BooleanDtype()
Extension dtype for boolean data.
Utilities#
Constructors#
api.types.union_categoricals(to_union[, ...])
Combine list-like of Categorical-like, unioning categories.
api.types.infer_dtype
Return a string label of the type of a scalar or list-like of values.
api.types.pandas_dtype(dtype)
Convert input into a pandas only dtype object or a numpy dtype object.
Data type introspection#
api.types.is_bool_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a boolean dtype.
api.types.is_categorical_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the Categorical dtype.
api.types.is_complex_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a complex dtype.
api.types.is_datetime64_any_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the datetime64 dtype.
api.types.is_datetime64_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the datetime64 dtype.
api.types.is_datetime64_ns_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the datetime64[ns] dtype.
api.types.is_datetime64tz_dtype(arr_or_dtype)
Check whether an array-like or dtype is of a DatetimeTZDtype dtype.
api.types.is_extension_type(arr)
(DEPRECATED) Check whether an array-like is of a pandas extension class instance.
api.types.is_extension_array_dtype(arr_or_dtype)
Check if an object is a pandas extension array type.
api.types.is_float_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a float dtype.
api.types.is_int64_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the int64 dtype.
api.types.is_integer_dtype(arr_or_dtype)
Check whether the provided array or dtype is of an integer dtype.
api.types.is_interval_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the Interval dtype.
api.types.is_numeric_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a numeric dtype.
api.types.is_object_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the object dtype.
api.types.is_period_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the Period dtype.
api.types.is_signed_integer_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a signed integer dtype.
api.types.is_string_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the string dtype.
api.types.is_timedelta64_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the timedelta64 dtype.
api.types.is_timedelta64_ns_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the timedelta64[ns] dtype.
api.types.is_unsigned_integer_dtype(arr_or_dtype)
Check whether the provided array or dtype is of an unsigned integer dtype.
api.types.is_sparse(arr)
Check whether an array-like is a 1-D pandas sparse array.
Iterable introspection#
api.types.is_dict_like(obj)
Check if the object is dict-like.
api.types.is_file_like(obj)
Check if the object is a file-like object.
api.types.is_list_like
Check if the object is list-like.
api.types.is_named_tuple(obj)
Check if the object is a named tuple.
api.types.is_iterator
Check if the object is an iterator.
Scalar introspection#
api.types.is_bool
Return True if given object is boolean.
api.types.is_categorical(arr)
(DEPRECATED) Check whether an array-like is a Categorical instance.
api.types.is_complex
Return True if given object is complex.
api.types.is_float
Return True if given object is float.
api.types.is_hashable(obj)
Return True if hash(obj) will succeed, False otherwise.
api.types.is_integer
Return True if given object is integer.
api.types.is_interval
api.types.is_number(obj)
Check if the object is a number.
api.types.is_re(obj)
Check if the object is a regex pattern instance.
api.types.is_re_compilable(obj)
Check if the object can be compiled into a regex pattern instance.
api.types.is_scalar
Return True if given object is scalar.
|
reference/arrays.html
|
pandas.tseries.offsets.Nano.is_year_start
|
`pandas.tseries.offsets.Nano.is_year_start`
Return boolean whether a timestamp occurs on the year start.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
```
|
Nano.is_year_start()#
Return boolean whether a timestamp occurs on the year start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
|
reference/api/pandas.tseries.offsets.Nano.is_year_start.html
|
pandas.Timedelta.days
|
pandas.Timedelta.days
|
Timedelta.days#
|
reference/api/pandas.Timedelta.days.html
|
pandas.tseries.offsets.Hour.delta
|
pandas.tseries.offsets.Hour.delta
|
Hour.delta#
|
reference/api/pandas.tseries.offsets.Hour.delta.html
|
pandas.tseries.offsets.CustomBusinessMonthBegin.m_offset
|
pandas.tseries.offsets.CustomBusinessMonthBegin.m_offset
|
CustomBusinessMonthBegin.m_offset#
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthBegin.m_offset.html
|
pandas.tseries.offsets.SemiMonthEnd.rollforward
|
`pandas.tseries.offsets.SemiMonthEnd.rollforward`
Roll provided date forward to next offset only if not on offset.
|
SemiMonthEnd.rollforward()#
Roll provided date forward to next offset only if not on offset.
Returns
TimeStampRolled timestamp if not on offset, otherwise unchanged timestamp.
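For illustration, a sketch with the default day_of_month=15, so the offset dates are the 15th and the end of each month:
```
>>> pd.offsets.SemiMonthEnd().rollforward(pd.Timestamp("2022-01-10"))
Timestamp('2022-01-15 00:00:00')
>>> pd.offsets.SemiMonthEnd().rollforward(pd.Timestamp("2022-01-15"))
Timestamp('2022-01-15 00:00:00')
```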
|
reference/api/pandas.tseries.offsets.SemiMonthEnd.rollforward.html
|
pandas.Series.floordiv
|
`pandas.Series.floordiv`
Return Integer division of series and other, element-wise (binary operator floordiv).
```
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.floordiv(b, fill_value=0)
a 1.0
b NaN
c NaN
d 0.0
e NaN
dtype: float64
```
|
Series.floordiv(other, level=None, fill_value=None, axis=0)[source]#
Return Integer division of series and other, element-wise (binary operator floordiv).
Equivalent to series // other, but with support to substitute a fill_value for
missing data in either one of the inputs.
Parameters
otherSeries or scalar value
levelint or nameBroadcast across a level, matching Index values on the
passed MultiIndex level.
fill_valueNone or float value, default None (NaN)Fill existing missing (NaN) values, and any new element needed for
successful Series alignment, with this value before computation.
If data in both corresponding Series locations is missing
the result of filling (at that location) will be missing.
axis{0 or ‘index’}Unused. Parameter needed for compatibility with DataFrame.
Returns
SeriesThe result of the operation.
See also
Series.rfloordivReverse of the Integer division operator, see Python documentation for more details.
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.floordiv(b, fill_value=0)
a 1.0
b NaN
c NaN
d 0.0
e NaN
dtype: float64
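For contrast, without fill_value any label missing from either operand stays missing after alignment; a sketch reusing the same a and b:
```
>>> a // b
a    1.0
b    NaN
c    NaN
d    NaN
e    NaN
dtype: float64
```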
|
reference/api/pandas.Series.floordiv.html
|
pandas.api.types.infer_dtype
|
`pandas.api.types.infer_dtype`
Return a string label of the type of a scalar or list-like of values.
```
>>> import datetime
>>> infer_dtype(['foo', 'bar'])
'string'
```
|
pandas.api.types.infer_dtype(value, skipna=True)#
Return a string label of the type of a scalar or list-like of values.
Parameters
value : scalar, list, ndarray, or pandas type
skipna : bool, default True
    Ignore NaN values when inferring the type.
Returns
str
    Describing the common type of the input data.
Results can include:
string
bytes
floating
integer
mixed-integer
mixed-integer-float
decimal
complex
categorical
boolean
datetime64
datetime
date
timedelta64
timedelta
time
period
mixed
unknown-array
Raises
TypeError
    If ndarray-like but cannot infer the dtype
Notes
‘mixed’ is the catchall for anything that is not otherwise
specialized
‘mixed-integer-float’ are floats and integers
‘mixed-integer’ are integers mixed with non-integers
‘unknown-array’ is the catchall for something that is an array (has
a dtype attribute), but has a dtype unknown to pandas (e.g. external
extension array)
Examples
>>> import datetime
>>> infer_dtype(['foo', 'bar'])
'string'
>>> infer_dtype(['a', np.nan, 'b'], skipna=True)
'string'
>>> infer_dtype(['a', np.nan, 'b'], skipna=False)
'mixed'
>>> infer_dtype([b'foo', b'bar'])
'bytes'
>>> infer_dtype([1, 2, 3])
'integer'
>>> infer_dtype([1, 2, 3.5])
'mixed-integer-float'
>>> infer_dtype([1.0, 2.0, 3.5])
'floating'
>>> infer_dtype(['a', 1])
'mixed-integer'
>>> infer_dtype([Decimal(1), Decimal(2.0)])
'decimal'
>>> infer_dtype([True, False])
'boolean'
>>> infer_dtype([True, False, np.nan])
'boolean'
>>> infer_dtype([pd.Timestamp('20130101')])
'datetime'
>>> infer_dtype([datetime.date(2013, 1, 1)])
'date'
>>> infer_dtype([np.datetime64('2013-01-01')])
'datetime64'
>>> infer_dtype([datetime.timedelta(0, 1, 1)])
'timedelta'
>>> infer_dtype(pd.Series(list('aabc')).astype('category'))
'categorical'
|
reference/api/pandas.api.types.infer_dtype.html
|
pandas.DataFrame.size
|
`pandas.DataFrame.size`
Return an int representing the number of elements in this object.
```
>>> s = pd.Series({'a': 1, 'b': 2, 'c': 3})
>>> s.size
3
```
|
property DataFrame.size[source]#
Return an int representing the number of elements in this object.
Return the number of rows if Series. Otherwise return the number of
rows times number of columns if DataFrame.
See also
ndarray.size : Number of elements in the array.
Examples
>>> s = pd.Series({'a': 1, 'b': 2, 'c': 3})
>>> s.size
3
>>> df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
>>> df.size
4
|
reference/api/pandas.DataFrame.size.html
|
Style
|
Styler objects are returned by pandas.DataFrame.style.
Styler constructor#
Styler(data[, precision, table_styles, ...])
Helps style a DataFrame or Series according to the data with HTML and CSS.
Styler.from_custom_template(searchpath[, ...])
Factory function for creating a subclass of Styler.
Styler properties#
Styler.env
Styler.template_html
Styler.template_html_style
Styler.template_html_table
Styler.template_latex
Styler.template_string
Styler.loader
Style application#
Styler.apply(func[, axis, subset])
Apply a CSS-styling function column-wise, row-wise, or table-wise.
Styler.applymap(func[, subset])
Apply a CSS-styling function elementwise.
Styler.apply_index(func[, axis, level])
Apply a CSS-styling function to the index or column headers, level-wise.
Styler.applymap_index(func[, axis, level])
Apply a CSS-styling function to the index or column headers, elementwise.
Styler.format([formatter, subset, na_rep, ...])
Format the text display value of cells.
Styler.format_index([formatter, axis, ...])
Format the text display value of index labels or column headers.
Styler.relabel_index(labels[, axis, level])
Relabel the index, or column header, keys to display a set of specified values.
Styler.hide([subset, axis, level, names])
Hide the entire index / column headers, or specific rows / columns from display.
Styler.concat(other)
Append another Styler to combine the output into a single table.
Styler.set_td_classes(classes)
Set the class attribute of <td> HTML elements.
Styler.set_table_styles([table_styles, ...])
Set the table styles included within the <style> HTML element.
Styler.set_table_attributes(attributes)
Set the table attributes added to the <table> HTML element.
Styler.set_tooltips(ttips[, props, css_class])
Set the DataFrame of strings on Styler generating :hover tooltips.
Styler.set_caption(caption)
Set the text added to a <caption> HTML element.
Styler.set_sticky([axis, pixel_size, levels])
Add CSS to permanently display the index or column headers in a scrolling frame.
Styler.set_properties([subset])
Set defined CSS-properties to each <td> HTML element for the given subset.
Styler.set_uuid(uuid)
Set the uuid applied to id attributes of HTML elements.
Styler.clear()
Reset the Styler, removing any previously applied styles.
Styler.pipe(func, *args, **kwargs)
Apply func(self, *args, **kwargs), and return the result.
Builtin styles#
Styler.highlight_null([color, subset, ...])
Highlight missing values with a style.
Styler.highlight_max([subset, color, axis, ...])
Highlight the maximum with a style.
Styler.highlight_min([subset, color, axis, ...])
Highlight the minimum with a style.
Styler.highlight_between([subset, color, ...])
Highlight a defined range with a style.
Styler.highlight_quantile([subset, color, ...])
Highlight values defined by a quantile with a style.
Styler.background_gradient([cmap, low, ...])
Color the background in a gradient style.
Styler.text_gradient([cmap, low, high, ...])
Color the text in a gradient style.
Styler.bar([subset, axis, color, cmap, ...])
Draw bar chart in the cell backgrounds.
Style export and import#
Styler.to_html([buf, table_uuid, ...])
Write Styler to a file, buffer or string in HTML-CSS format.
Styler.to_latex([buf, column_format, ...])
Write Styler to a file, buffer or string in LaTeX format.
Styler.to_excel(excel_writer[, sheet_name, ...])
Write Styler to an Excel sheet.
Styler.to_string([buf, encoding, ...])
Write Styler to a file, buffer or string in text format.
Styler.export()
Export the styles applied to the current Styler.
Styler.use(styles)
Set the styles on the current Styler.
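A minimal sketch chaining a few of the methods above (the DataFrame and CSS values are illustrative, not from the original page):
```
>>> df = pd.DataFrame({"A": [1.0, 2.0], "B": [3.0, 4.0]})
>>> styler = (
...     df.style.format("{:.2f}")
...     .highlight_max(color="yellow")
...     .set_caption("A small table")
... )
>>> html = styler.to_html()  # render the styled table to an HTML string
```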
|
reference/style.html
|
pandas.Series.to_markdown
|
`pandas.Series.to_markdown`
Print Series in Markdown-friendly format.
New in version 1.0.0.
```
>>> s = pd.Series(["elk", "pig", "dog", "quetzal"], name="animal")
>>> print(s.to_markdown())
| | animal |
|---:|:---------|
| 0 | elk |
| 1 | pig |
| 2 | dog |
| 3 | quetzal |
```
|
Series.to_markdown(buf=None, mode='wt', index=True, storage_options=None, **kwargs)[source]#
Print Series in Markdown-friendly format.
New in version 1.0.0.
Parameters
buf : str, Path or StringIO-like, optional, default None
    Buffer to write to. If None, the output is returned as a string.
mode : str, optional
    Mode in which file is opened, “wt” by default.
index : bool, optional, default True
    Add index (row) labels.
    New in version 1.1.0.
storage_options : dict, optional
    Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc. For HTTP(S) URLs the key-value pairs are forwarded to urllib.request.Request as header options. For other URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are forwarded to fsspec.open. Please see fsspec and urllib for more details, and for more examples on storage options refer here.
    New in version 1.2.0.
**kwargs
    These parameters will be passed to tabulate.
Returns
str
    Series in Markdown-friendly format.
Notes
Requires the tabulate package.
Examples
>>> s = pd.Series(["elk", "pig", "dog", "quetzal"], name="animal")
>>> print(s.to_markdown())
| | animal |
|---:|:---------|
| 0 | elk |
| 1 | pig |
| 2 | dog |
| 3 | quetzal |
Output markdown with a tabulate option.
>>> print(s.to_markdown(tablefmt="grid"))
+----+----------+
| | animal |
+====+==========+
| 0 | elk |
+----+----------+
| 1 | pig |
+----+----------+
| 2 | dog |
+----+----------+
| 3 | quetzal |
+----+----------+
|
reference/api/pandas.Series.to_markdown.html
|
pandas.MultiIndex.to_flat_index
|
`pandas.MultiIndex.to_flat_index`
Convert a MultiIndex to an Index of Tuples containing the level values.
```
>>> index = pd.MultiIndex.from_product(
... [['foo', 'bar'], ['baz', 'qux']],
... names=['a', 'b'])
>>> index.to_flat_index()
Index([('foo', 'baz'), ('foo', 'qux'),
('bar', 'baz'), ('bar', 'qux')],
dtype='object')
```
|
MultiIndex.to_flat_index()[source]#
Convert a MultiIndex to an Index of Tuples containing the level values.
Returns
pd.Index
    Index with the MultiIndex data represented in Tuples.
See also
MultiIndex.from_tuples : Convert flat index back to MultiIndex.
Notes
This method will simply return the caller if called by anything other
than a MultiIndex.
Examples
>>> index = pd.MultiIndex.from_product(
... [['foo', 'bar'], ['baz', 'qux']],
... names=['a', 'b'])
>>> index.to_flat_index()
Index([('foo', 'baz'), ('foo', 'qux'),
('bar', 'baz'), ('bar', 'qux')],
dtype='object')
|
reference/api/pandas.MultiIndex.to_flat_index.html
|
pandas.Series.str.split
|
`pandas.Series.str.split`
Split strings around given separator/delimiter.
```
>>> s = pd.Series(
... [
... "this is a regular sentence",
... "https://docs.python.org/3/tutorial/index.html",
... np.nan
... ]
... )
>>> s
0 this is a regular sentence
1 https://docs.python.org/3/tutorial/index.html
2 NaN
dtype: object
```
|
Series.str.split(pat=None, *, n=-1, expand=False, regex=None)[source]#
Split strings around given separator/delimiter.
Splits the string in the Series/Index from the beginning,
at the specified delimiter string.
Parameters
pat : str or compiled regex, optional
    String or regular expression to split on. If not specified, split on whitespace.
n : int, default -1 (all)
    Limit number of splits in output. None, 0 and -1 will be interpreted as return all splits.
expand : bool, default False
    Expand the split strings into separate columns.
    If True, return DataFrame/MultiIndex expanding dimensionality.
    If False, return Series/Index, containing lists of strings.
regex : bool, default None
    Determines if the passed-in pattern is a regular expression:
    If True, assumes the passed-in pattern is a regular expression.
    If False, treats the pattern as a literal string.
    If None and pat length is 1, treats pat as a literal string.
    If None and pat length is not 1, treats pat as a regular expression.
    Cannot be set to False if pat is a compiled regex.
    New in version 1.4.0.
Returns
Series, Index, DataFrame or MultiIndex
    Type matches caller unless expand=True (see Notes).
Raises
ValueError
    if regex is False and pat is a compiled regex
See also
Series.str.split : Split strings around given separator/delimiter.
Series.str.rsplit : Splits string around given separator/delimiter, starting from the right.
Series.str.join : Join lists contained as elements in the Series/Index with passed delimiter.
str.split : Standard library version for split.
str.rsplit : Standard library version for rsplit.
Notes
The handling of the n keyword depends on the number of found splits:
If found splits > n, make first n splits only
If found splits <= n, make all splits
If for a certain row the number of found splits < n,
append None for padding up to n if expand=True
If using expand=True, Series and Index callers return DataFrame and
MultiIndex objects, respectively.
Use of regex=False with a pat as a compiled regex will raise an error.
Examples
>>> s = pd.Series(
... [
... "this is a regular sentence",
... "https://docs.python.org/3/tutorial/index.html",
... np.nan
... ]
... )
>>> s
0 this is a regular sentence
1 https://docs.python.org/3/tutorial/index.html
2 NaN
dtype: object
In the default setting, the string is split by whitespace.
>>> s.str.split()
0 [this, is, a, regular, sentence]
1 [https://docs.python.org/3/tutorial/index.html]
2 NaN
dtype: object
Without the n parameter, the outputs of rsplit and split
are identical.
>>> s.str.rsplit()
0 [this, is, a, regular, sentence]
1 [https://docs.python.org/3/tutorial/index.html]
2 NaN
dtype: object
The n parameter can be used to limit the number of splits on the
delimiter. The outputs of split and rsplit are different.
>>> s.str.split(n=2)
0 [this, is, a regular sentence]
1 [https://docs.python.org/3/tutorial/index.html]
2 NaN
dtype: object
>>> s.str.rsplit(n=2)
0 [this is a, regular, sentence]
1 [https://docs.python.org/3/tutorial/index.html]
2 NaN
dtype: object
The pat parameter can be used to split by other characters.
>>> s.str.split(pat="/")
0 [this is a regular sentence]
1 [https:, , docs.python.org, 3, tutorial, index...
2 NaN
dtype: object
When using expand=True, the split elements will expand out into
separate columns. If NaN is present, it is propagated throughout
the columns during the split.
>>> s.str.split(expand=True)
0 1 2 3 4
0 this is a regular sentence
1 https://docs.python.org/3/tutorial/index.html None None None None
2 NaN NaN NaN NaN NaN
For slightly more complex use cases like splitting the html document name
from a url, a combination of parameter settings can be used.
>>> s.str.rsplit("/", n=1, expand=True)
0 1
0 this is a regular sentence None
1 https://docs.python.org/3/tutorial index.html
2 NaN NaN
Remember to escape special characters when explicitly using regular expressions.
>>> s = pd.Series(["foo and bar plus baz"])
>>> s.str.split(r"and|plus", expand=True)
0 1 2
0 foo bar baz
Regular expressions can be used to handle urls or file names.
When pat is a string and regex=None (the default), the given pat is compiled
as a regex only if len(pat) != 1.
>>> s = pd.Series(['foojpgbar.jpg'])
>>> s.str.split(r".", expand=True)
0 1
0 foojpgbar jpg
>>> s.str.split(r"\.jpg", expand=True)
0 1
0 foojpgbar
When regex=True, pat is interpreted as a regex
>>> s.str.split(r"\.jpg", regex=True, expand=True)
0 1
0 foojpgbar
A compiled regex can be passed as pat
>>> import re
>>> s.str.split(re.compile(r"\.jpg"), expand=True)
0 1
0 foojpgbar
When regex=False, pat is interpreted as the string itself
>>> s.str.split(r"\.jpg", regex=False, expand=True)
0
0 foojpgbar.jpg
|
reference/api/pandas.Series.str.split.html
|
pandas.Series.str.swapcase
|
`pandas.Series.str.swapcase`
Convert strings in the Series/Index to be swapcased.
```
>>> s = pd.Series(['lower', 'CAPITALS', 'this is a sentence', 'SwApCaSe'])
>>> s
0 lower
1 CAPITALS
2 this is a sentence
3 SwApCaSe
dtype: object
```
|
Series.str.swapcase()[source]#
Convert strings in the Series/Index to be swapcased.
Equivalent to str.swapcase().
Returns
Series or Index of object
See also
Series.str.lower : Converts all characters to lowercase.
Series.str.upper : Converts all characters to uppercase.
Series.str.title : Converts first character of each word to uppercase and remaining to lowercase.
Series.str.capitalize : Converts first character to uppercase and remaining to lowercase.
Series.str.swapcase : Converts uppercase to lowercase and lowercase to uppercase.
Series.str.casefold : Removes all case distinctions in the string.
Examples
>>> s = pd.Series(['lower', 'CAPITALS', 'this is a sentence', 'SwApCaSe'])
>>> s
0 lower
1 CAPITALS
2 this is a sentence
3 SwApCaSe
dtype: object
>>> s.str.lower()
0 lower
1 capitals
2 this is a sentence
3 swapcase
dtype: object
>>> s.str.upper()
0 LOWER
1 CAPITALS
2 THIS IS A SENTENCE
3 SWAPCASE
dtype: object
>>> s.str.title()
0 Lower
1 Capitals
2 This Is A Sentence
3 Swapcase
dtype: object
>>> s.str.capitalize()
0 Lower
1 Capitals
2 This is a sentence
3 Swapcase
dtype: object
>>> s.str.swapcase()
0 LOWER
1 capitals
2 THIS IS A SENTENCE
3 sWaPcAsE
dtype: object
|
reference/api/pandas.Series.str.swapcase.html
|
pandas.Series.dot
|
`pandas.Series.dot`
Compute the dot product between the Series and the columns of other.
This method computes the dot product between the Series and another
one, or the Series and each column of a DataFrame, or the Series and
each column of an array.
```
>>> s = pd.Series([0, 1, 2, 3])
>>> other = pd.Series([-1, 2, -3, 4])
>>> s.dot(other)
8
>>> s @ other
8
>>> df = pd.DataFrame([[0, 1], [-2, 3], [4, -5], [6, 7]])
>>> s.dot(df)
0 24
1 14
dtype: int64
>>> arr = np.array([[0, 1], [-2, 3], [4, -5], [6, 7]])
>>> s.dot(arr)
array([24, 14])
```
|
Series.dot(other)[source]#
Compute the dot product between the Series and the columns of other.
This method computes the dot product between the Series and another
one, or the Series and each column of a DataFrame, or the Series and
each column of an array.
It can also be called using self @ other in Python >= 3.5.
Parameters
other : Series, DataFrame or array-like
    The other object to compute the dot product with its columns.
Returns
scalar, Series or numpy.ndarray
    The dot product of the Series and other if other is a Series; a Series of the dot products between the Series and each column of other if other is a DataFrame; or a numpy.ndarray of the dot products between the Series and each column of the array.
See also
DataFrame.dot : Compute the matrix product with the DataFrame.
Series.mul : Multiplication of series and other, element-wise.
Notes
The Series and other have to share the same index if other is a Series
or a DataFrame.
Examples
>>> s = pd.Series([0, 1, 2, 3])
>>> other = pd.Series([-1, 2, -3, 4])
>>> s.dot(other)
8
>>> s @ other
8
>>> df = pd.DataFrame([[0, 1], [-2, 3], [4, -5], [6, 7]])
>>> s.dot(df)
0 24
1 14
dtype: int64
>>> arr = np.array([[0, 1], [-2, 3], [4, -5], [6, 7]])
>>> s.dot(arr)
array([24, 14])
|
reference/api/pandas.Series.dot.html
|
Index objects
|
Index objects
|
Index#
Many of these methods or variants thereof are available on the objects
that contain an index (Series/DataFrame) and those should most likely be
used before calling these methods directly.
Index([data, dtype, copy, name, tupleize_cols])
Immutable sequence used for indexing and alignment.
Properties#
Index.values
Return an array representing the data in the Index.
Index.is_monotonic
(DEPRECATED) Alias for is_monotonic_increasing.
Index.is_monotonic_increasing
Return a boolean if the values are equal or increasing.
Index.is_monotonic_decreasing
Return a boolean if the values are equal or decreasing.
Index.is_unique
Return if the index has unique values.
Index.has_duplicates
Check if the Index has duplicate values.
Index.hasnans
Return True if there are any NaNs.
Index.dtype
Return the dtype object of the underlying data.
Index.inferred_type
Return a string of the type inferred from the values.
Index.is_all_dates
Whether or not the index values only consist of dates.
Index.shape
Return a tuple of the shape of the underlying data.
Index.name
Return Index or MultiIndex name.
Index.names
Index.nbytes
Return the number of bytes in the underlying data.
Index.ndim
Number of dimensions of the underlying data, by definition 1.
Index.size
Return the number of elements in the underlying data.
Index.empty
Index.T
Return the transpose, which is by definition self.
Index.memory_usage([deep])
Memory usage of the values.
Modifying and computations#
Index.all(*args, **kwargs)
Return whether all elements are Truthy.
Index.any(*args, **kwargs)
Return whether any element is Truthy.
Index.argmin([axis, skipna])
Return int position of the smallest value in the Series.
Index.argmax([axis, skipna])
Return int position of the largest value in the Series.
Index.copy([name, deep, dtype, names])
Make a copy of this object.
Index.delete(loc)
Make new Index with passed location(-s) deleted.
Index.drop(labels[, errors])
Make new Index with passed list of labels deleted.
Index.drop_duplicates(*[, keep])
Return Index with duplicate values removed.
Index.duplicated([keep])
Indicate duplicate index values.
Index.equals(other)
Determine if two Index object are equal.
Index.factorize([sort, na_sentinel, ...])
Encode the object as an enumerated type or categorical variable.
Index.identical(other)
Similar to equals, but checks that object attributes and types are also equal.
Index.insert(loc, item)
Make new Index inserting new item at location.
Index.is_(other)
More flexible, faster check like is but that works through views.
Index.is_boolean()
Check if the Index only consists of booleans.
Index.is_categorical()
Check if the Index holds categorical data.
Index.is_floating()
Check if the Index is a floating type.
Index.is_integer()
Check if the Index only consists of integers.
Index.is_interval()
Check if the Index holds Interval objects.
Index.is_mixed()
Check if the Index holds data with mixed data types.
Index.is_numeric()
Check if the Index only consists of numeric data.
Index.is_object()
Check if the Index is of the object dtype.
Index.min([axis, skipna])
Return the minimum value of the Index.
Index.max([axis, skipna])
Return the maximum value of the Index.
Index.reindex(target[, method, level, ...])
Create index with target's values.
Index.rename(name[, inplace])
Alter Index or MultiIndex name.
Index.repeat(repeats[, axis])
Repeat elements of a Index.
Index.where(cond[, other])
Replace values where the condition is False.
Index.take(indices[, axis, allow_fill, ...])
Return a new Index of the values selected by the indices.
Index.putmask(mask, value)
Return a new Index of the values set with the mask.
Index.unique([level])
Return unique values in the index.
Index.nunique([dropna])
Return number of unique elements in the object.
Index.value_counts([normalize, sort, ...])
Return a Series containing counts of unique values.
Compatibility with MultiIndex#
Index.set_names(names, *[, level, inplace])
Set Index or MultiIndex name.
Index.droplevel([level])
Return index with requested level(s) removed.
Missing values#
Index.fillna([value, downcast])
Fill NA/NaN values with the specified value.
Index.dropna([how])
Return Index without NA/NaN values.
Index.isna()
Detect missing values.
Index.notna()
Detect existing (non-missing) values.
Conversion#
Index.astype(dtype[, copy])
Create an Index with values cast to dtypes.
Index.item()
Return the first element of the underlying data as a Python scalar.
Index.map(mapper[, na_action])
Map values using an input mapping or function.
Index.ravel([order])
Return an ndarray of the flattened values of the underlying data.
Index.to_list()
Return a list of the values.
Index.to_native_types([slicer])
(DEPRECATED) Format specified values of self and return them.
Index.to_series([index, name])
Create a Series with both index and values equal to the index keys.
Index.to_frame([index, name])
Create a DataFrame with a column containing the Index.
Index.view([cls])
Sorting#
Index.argsort(*args, **kwargs)
Return the integer indices that would sort the index.
Index.searchsorted(value[, side, sorter])
Find indices where elements should be inserted to maintain order.
Index.sort_values([return_indexer, ...])
Return a sorted copy of the index.
Time-specific operations#
Index.shift([periods, freq])
Shift index by desired number of time frequency increments.
Combining / joining / set operations#
Index.append(other)
Append a collection of Index options together.
Index.join(other, *[, how, level, ...])
Compute join_index and indexers to conform data structures to the new index.
Index.intersection(other[, sort])
Form the intersection of two Index objects.
Index.union(other[, sort])
Form the union of two Index objects.
Index.difference(other[, sort])
Return a new Index with elements of index not in other.
Index.symmetric_difference(other[, ...])
Compute the symmetric difference of two Index objects.
Selecting#
Index.asof(label)
Return the label from the index, or, if not present, the previous one.
Index.asof_locs(where, mask)
Return the locations (indices) of labels in the index.
Index.get_indexer(target[, method, limit, ...])
Compute indexer and mask for new index given the current index.
Index.get_indexer_for(target)
Guaranteed return of an indexer even when non-unique.
Index.get_indexer_non_unique(target)
Compute indexer and mask for new index given the current index.
Index.get_level_values(level)
Return an Index of values for requested level.
Index.get_loc(key[, method, tolerance])
Get integer location, slice or boolean mask for requested label.
Index.get_slice_bound(label, side[, kind])
Calculate slice bound that corresponds to given label.
Index.get_value(series, key)
Fast lookup of value from 1-dimensional ndarray.
Index.isin(values[, level])
Return a boolean array where the index values are in values.
Index.slice_indexer([start, end, step, kind])
Compute the slice indexer for input labels and step.
Index.slice_locs([start, end, step, kind])
Compute slice locations for input labels.
Numeric Index#
RangeIndex([start, stop, step, dtype, copy, ...])
Immutable Index implementing a monotonic integer range.
Int64Index([data, dtype, copy, name])
(DEPRECATED) Immutable sequence used for indexing and alignment.
UInt64Index([data, dtype, copy, name])
(DEPRECATED) Immutable sequence used for indexing and alignment.
Float64Index([data, dtype, copy, name])
(DEPRECATED) Immutable sequence used for indexing and alignment.
RangeIndex.start
The value of the start parameter (0 if this was not supplied).
RangeIndex.stop
The value of the stop parameter.
RangeIndex.step
The value of the step parameter (1 if this was not supplied).
RangeIndex.from_range(data[, name, dtype])
Create RangeIndex from a range object.
CategoricalIndex#
CategoricalIndex([data, categories, ...])
Index based on an underlying Categorical.
Categorical components#
CategoricalIndex.codes
The category codes of this categorical.
CategoricalIndex.categories
The categories of this categorical.
CategoricalIndex.ordered
Whether the categories have an ordered relationship.
CategoricalIndex.rename_categories(*args, ...)
Rename categories.
CategoricalIndex.reorder_categories(*args, ...)
Reorder categories as specified in new_categories.
CategoricalIndex.add_categories(*args, **kwargs)
Add new categories.
CategoricalIndex.remove_categories(*args, ...)
Remove the specified categories.
CategoricalIndex.remove_unused_categories(...)
Remove categories which are not used.
CategoricalIndex.set_categories(*args, **kwargs)
Set the categories to the specified new_categories.
CategoricalIndex.as_ordered(*args, **kwargs)
Set the Categorical to be ordered.
CategoricalIndex.as_unordered(*args, **kwargs)
Set the Categorical to be unordered.
Modifying and computations#
CategoricalIndex.map(mapper)
Map values using an input mapping or function.
CategoricalIndex.equals(other)
Determine if two CategoricalIndex objects contain the same elements.
IntervalIndex#
IntervalIndex(data[, closed, dtype, copy, ...])
Immutable index of intervals that are closed on the same side.
IntervalIndex components#
IntervalIndex.from_arrays(left, right[, ...])
Construct from two arrays defining the left and right bounds.
IntervalIndex.from_tuples(data[, closed, ...])
Construct an IntervalIndex from an array-like of tuples.
IntervalIndex.from_breaks(breaks[, closed, ...])
Construct an IntervalIndex from an array of splits.
IntervalIndex.left
IntervalIndex.right
IntervalIndex.mid
IntervalIndex.closed
String describing the inclusive side of the intervals.
IntervalIndex.length
IntervalIndex.values
Return an array representing the data in the Index.
IntervalIndex.is_empty
Indicates if an interval is empty, meaning it contains no points.
IntervalIndex.is_non_overlapping_monotonic
Return a boolean whether the IntervalArray is non-overlapping and monotonic.
IntervalIndex.is_overlapping
Return True if the IntervalIndex has overlapping intervals, else False.
IntervalIndex.get_loc(key[, method, tolerance])
Get integer location, slice or boolean mask for requested label.
IntervalIndex.get_indexer(target[, method, ...])
Compute indexer and mask for new index given the current index.
IntervalIndex.set_closed(*args, **kwargs)
Return an identical IntervalArray closed on the specified side.
IntervalIndex.contains(*args, **kwargs)
Check elementwise if the Intervals contain the value.
IntervalIndex.overlaps(*args, **kwargs)
Check elementwise if an Interval overlaps the values in the IntervalArray.
IntervalIndex.to_tuples(*args, **kwargs)
Return an ndarray of tuples of the form (left, right).
MultiIndex#
MultiIndex([levels, codes, sortorder, ...])
A multi-level, or hierarchical, index object for pandas objects.
IndexSlice
Create an object to more easily perform multi-index slicing.
MultiIndex constructors#
MultiIndex.from_arrays(arrays[, sortorder, ...])
Convert arrays to MultiIndex.
MultiIndex.from_tuples(tuples[, sortorder, ...])
Convert list of tuples to MultiIndex.
MultiIndex.from_product(iterables[, ...])
Make a MultiIndex from the cartesian product of multiple iterables.
MultiIndex.from_frame(df[, sortorder, names])
Make a MultiIndex from a DataFrame.
MultiIndex properties#
MultiIndex.names
Names of levels in MultiIndex.
MultiIndex.levels
MultiIndex.codes
MultiIndex.nlevels
Integer number of levels in this MultiIndex.
MultiIndex.levshape
A tuple with the length of each level.
MultiIndex.dtypes
Return the dtypes as a Series for the underlying MultiIndex.
MultiIndex components#
MultiIndex.set_levels(levels, *[, level, ...])
Set new levels on MultiIndex.
MultiIndex.set_codes(codes, *[, level, ...])
Set new codes on MultiIndex.
MultiIndex.to_flat_index()
Convert a MultiIndex to an Index of Tuples containing the level values.
MultiIndex.to_frame([index, name, ...])
Create a DataFrame with the levels of the MultiIndex as columns.
MultiIndex.sortlevel([level, ascending, ...])
Sort MultiIndex at the requested level.
MultiIndex.droplevel([level])
Return index with requested level(s) removed.
MultiIndex.swaplevel([i, j])
Swap level i with level j.
MultiIndex.reorder_levels(order)
Rearrange levels using input order.
MultiIndex.remove_unused_levels()
Create new MultiIndex from current that removes unused levels.
MultiIndex selecting#
MultiIndex.get_loc(key[, method])
Get location for a label or a tuple of labels.
MultiIndex.get_locs(seq)
Get location for a sequence of labels.
MultiIndex.get_loc_level(key[, level, ...])
Get location and sliced index for requested label(s)/level(s).
MultiIndex.get_indexer(target[, method, ...])
Compute indexer and mask for new index given the current index.
MultiIndex.get_level_values(level)
Return vector of label values for requested level.
DatetimeIndex#
DatetimeIndex([data, freq, tz, normalize, ...])
Immutable ndarray-like of datetime64 data.
Time/date components#
DatetimeIndex.year
The year of the datetime.
DatetimeIndex.month
The month as January=1, December=12.
DatetimeIndex.day
The day of the datetime.
DatetimeIndex.hour
The hours of the datetime.
DatetimeIndex.minute
The minutes of the datetime.
DatetimeIndex.second
The seconds of the datetime.
DatetimeIndex.microsecond
The microseconds of the datetime.
DatetimeIndex.nanosecond
The nanoseconds of the datetime.
DatetimeIndex.date
Returns numpy array of python datetime.date objects.
DatetimeIndex.time
Returns numpy array of datetime.time objects.
DatetimeIndex.timetz
Returns numpy array of datetime.time objects with timezones.
DatetimeIndex.dayofyear
The ordinal day of the year.
DatetimeIndex.day_of_year
The ordinal day of the year.
DatetimeIndex.weekofyear
(DEPRECATED) The week ordinal of the year.
DatetimeIndex.week
(DEPRECATED) The week ordinal of the year.
DatetimeIndex.dayofweek
The day of the week with Monday=0, Sunday=6.
DatetimeIndex.day_of_week
The day of the week with Monday=0, Sunday=6.
DatetimeIndex.weekday
The day of the week with Monday=0, Sunday=6.
DatetimeIndex.quarter
The quarter of the date.
DatetimeIndex.tz
Return the timezone.
DatetimeIndex.freq
Return the frequency object if it is set, otherwise None.
DatetimeIndex.freqstr
Return the frequency object as a string if it is set, otherwise None.
DatetimeIndex.is_month_start
Indicates whether the date is the first day of the month.
DatetimeIndex.is_month_end
Indicates whether the date is the last day of the month.
DatetimeIndex.is_quarter_start
Indicator for whether the date is the first day of a quarter.
DatetimeIndex.is_quarter_end
Indicator for whether the date is the last day of a quarter.
DatetimeIndex.is_year_start
Indicate whether the date is the first day of a year.
DatetimeIndex.is_year_end
Indicate whether the date is the last day of the year.
DatetimeIndex.is_leap_year
Boolean indicator if the date belongs to a leap year.
DatetimeIndex.inferred_freq
Tries to return a string representing a frequency generated by infer_freq.
Selecting#
DatetimeIndex.indexer_at_time(time[, asof])
Return index locations of values at particular time of day.
DatetimeIndex.indexer_between_time(...[, ...])
Return index locations of values between particular times of day.
Time-specific operations#
DatetimeIndex.normalize(*args, **kwargs)
Convert times to midnight.
DatetimeIndex.strftime(date_format)
Convert to Index using specified date_format.
DatetimeIndex.snap([freq])
Snap time stamps to nearest occurring frequency.
DatetimeIndex.tz_convert(tz)
Convert tz-aware Datetime Array/Index from one time zone to another.
DatetimeIndex.tz_localize(tz[, ambiguous, ...])
Localize tz-naive Datetime Array/Index to tz-aware Datetime Array/Index.
DatetimeIndex.round(*args, **kwargs)
Perform round operation on the data to the specified freq.
DatetimeIndex.floor(*args, **kwargs)
Perform floor operation on the data to the specified freq.
DatetimeIndex.ceil(*args, **kwargs)
Perform ceil operation on the data to the specified freq.
DatetimeIndex.month_name(*args, **kwargs)
Return the month names with specified locale.
DatetimeIndex.day_name(*args, **kwargs)
Return the day names with specified locale.
Conversion#
DatetimeIndex.to_period(*args, **kwargs)
Cast to PeriodArray/Index at a particular frequency.
DatetimeIndex.to_perioddelta(freq)
Calculate deltas between self values and self converted to Periods at a freq.
DatetimeIndex.to_pydatetime(*args, **kwargs)
Return an ndarray of datetime.datetime objects.
DatetimeIndex.to_series([keep_tz, index, name])
Create a Series with both index and values equal to the index keys.
DatetimeIndex.to_frame([index, name])
Create a DataFrame with a column containing the Index.
Methods#
DatetimeIndex.mean(*args, **kwargs)
Return the mean value of the Array.
DatetimeIndex.std(*args, **kwargs)
Return sample standard deviation over requested axis.
TimedeltaIndex#
TimedeltaIndex([data, unit, freq, closed, ...])
Immutable Index of timedelta64 data.
Components#
TimedeltaIndex.days
Number of days for each element.
TimedeltaIndex.seconds
Number of seconds (>= 0 and less than 1 day) for each element.
TimedeltaIndex.microseconds
Number of microseconds (>= 0 and less than 1 second) for each element.
TimedeltaIndex.nanoseconds
Number of nanoseconds (>= 0 and less than 1 microsecond) for each element.
TimedeltaIndex.components
Return a DataFrame of the individual resolution components of the Timedeltas.
TimedeltaIndex.inferred_freq
Tries to return a string representing a frequency generated by infer_freq.
Conversion#
TimedeltaIndex.to_pytimedelta(*args, **kwargs)
Return an ndarray of datetime.timedelta objects.
TimedeltaIndex.to_series([index, name])
Create a Series with both index and values equal to the index keys.
TimedeltaIndex.round(*args, **kwargs)
Perform round operation on the data to the specified freq.
TimedeltaIndex.floor(*args, **kwargs)
Perform floor operation on the data to the specified freq.
TimedeltaIndex.ceil(*args, **kwargs)
Perform ceil operation on the data to the specified freq.
TimedeltaIndex.to_frame([index, name])
Create a DataFrame with a column containing the Index.
Methods#
TimedeltaIndex.mean(*args, **kwargs)
Return the mean value of the Array.
PeriodIndex#
PeriodIndex([data, ordinal, freq, dtype, ...])
Immutable ndarray holding ordinal values indicating regular periods in time.
Properties#
PeriodIndex.day
The days of the period.
PeriodIndex.dayofweek
The day of the week with Monday=0, Sunday=6.
PeriodIndex.day_of_week
The day of the week with Monday=0, Sunday=6.
PeriodIndex.dayofyear
The ordinal day of the year.
PeriodIndex.day_of_year
The ordinal day of the year.
PeriodIndex.days_in_month
The number of days in the month.
PeriodIndex.daysinmonth
The number of days in the month.
PeriodIndex.end_time
Get the Timestamp for the end of the period.
PeriodIndex.freq
Return the frequency object if it is set, otherwise None.
PeriodIndex.freqstr
Return the frequency object as a string if it is set, otherwise None.
PeriodIndex.hour
The hour of the period.
PeriodIndex.is_leap_year
Logical indicating if the date belongs to a leap year.
PeriodIndex.minute
The minute of the period.
PeriodIndex.month
The month as January=1, December=12.
PeriodIndex.quarter
The quarter of the date.
PeriodIndex.qyear
PeriodIndex.second
The second of the period.
PeriodIndex.start_time
Get the Timestamp for the start of the period.
PeriodIndex.week
The week ordinal of the year.
PeriodIndex.weekday
The day of the week with Monday=0, Sunday=6.
PeriodIndex.weekofyear
The week ordinal of the year.
PeriodIndex.year
The year of the period.
Methods#
PeriodIndex.asfreq([freq, how])
Convert the PeriodArray to the specified frequency freq.
PeriodIndex.strftime(*args, **kwargs)
Convert to Index using specified date_format.
PeriodIndex.to_timestamp([freq, how])
Cast to DatetimeArray/Index.
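A quick sketch exercising a few of the Index methods listed above (reprs as in pandas 1.x, where integer indexes display as Int64Index):
```
>>> left = pd.Index([1, 2, 3, 4])
>>> right = pd.Index([3, 4, 5, 6])
>>> left.union(right)
Int64Index([1, 2, 3, 4, 5, 6], dtype='int64')
>>> left.intersection(right)
Int64Index([3, 4], dtype='int64')
>>> left.get_loc(3)  # integer position of the label 3
2
```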
|
reference/indexing.html
|
pandas.errors.ParserWarning
|
`pandas.errors.ParserWarning`
Warning raised when reading a file that doesn’t use the default ‘c’ parser.
Raised by pd.read_csv and pd.read_table when it is necessary to change
parsers, generally from the default ‘c’ parser to ‘python’.
```
>>> import io
>>> csv = '''a;b;c
... 1;1,8
... 1;2,1'''
>>> df = pd.read_csv(io.StringIO(csv), sep='[;,]')
... # ParserWarning: Falling back to the 'python' engine...
```
|
exception pandas.errors.ParserWarning[source]#
Warning raised when reading a file that doesn’t use the default ‘c’ parser.
Raised by pd.read_csv and pd.read_table when it is necessary to change
parsers, generally from the default ‘c’ parser to ‘python’.
It happens due to a lack of support or functionality for parsing a
particular attribute of a CSV file with the requested engine.
Currently, ‘c’ unsupported options include the following parameters:
sep other than a single character (e.g. regex separators)
skipfooter higher than 0
sep=None with delim_whitespace=False
The warning can be avoided by adding engine=’python’ as a parameter in
pd.read_csv and pd.read_table methods.
See also
pd.read_csv : Read CSV (comma-separated) file into DataFrame.
pd.read_table : Read general delimited file into DataFrame.
Examples
Using a sep in pd.read_csv other than a single character:
>>> import io
>>> csv = '''a;b;c
... 1;1,8
... 1;2,1'''
>>> df = pd.read_csv(io.StringIO(csv), sep='[;,]')
... # ParserWarning: Falling back to the 'python' engine...
Adding engine=’python’ to pd.read_csv removes the Warning:
>>> df = pd.read_csv(io.StringIO(csv), sep='[;,]', engine='python')
|
reference/api/pandas.errors.ParserWarning.html
|
pandas.DatetimeIndex.is_month_start
|
`pandas.DatetimeIndex.is_month_start`
Indicates whether the date is the first day of the month.
```
>>> s = pd.Series(pd.date_range("2018-02-27", periods=3))
>>> s
0 2018-02-27
1 2018-02-28
2 2018-03-01
dtype: datetime64[ns]
>>> s.dt.is_month_start
0 False
1 False
2 True
dtype: bool
>>> s.dt.is_month_end
0 False
1 True
2 False
dtype: bool
```
|
property DatetimeIndex.is_month_start[source]#
Indicates whether the date is the first day of the month.
Returns
Series or array
    For Series, returns a Series with boolean values. For DatetimeIndex, returns a boolean array.
See also
is_month_start : Return a boolean indicating whether the date is the first day of the month.
is_month_end : Return a boolean indicating whether the date is the last day of the month.
Examples
This method is available on Series with datetime values under
the .dt accessor, and directly on DatetimeIndex.
>>> s = pd.Series(pd.date_range("2018-02-27", periods=3))
>>> s
0 2018-02-27
1 2018-02-28
2 2018-03-01
dtype: datetime64[ns]
>>> s.dt.is_month_start
0 False
1 False
2 True
dtype: bool
>>> s.dt.is_month_end
0 False
1 True
2 False
dtype: bool
>>> idx = pd.date_range("2018-02-27", periods=3)
>>> idx.is_month_start
array([False, False, True])
>>> idx.is_month_end
array([False, True, False])
|
reference/api/pandas.DatetimeIndex.is_month_start.html
|
pandas.tseries.offsets.CustomBusinessMonthEnd.weekmask
|
pandas.tseries.offsets.CustomBusinessMonthEnd.weekmask
|
CustomBusinessMonthEnd.weekmask#
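No description ships with this attribute; as a sketch, weekmask echoes the valid-business-day mask the offset was constructed with:
```
>>> offset = pd.offsets.CustomBusinessMonthEnd(weekmask="Mon Wed Fri")
>>> offset.weekmask
'Mon Wed Fri'
```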
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthEnd.weekmask.html
|
pandas.api.types.is_datetime64_ns_dtype
|
`pandas.api.types.is_datetime64_ns_dtype`
Check whether the provided array or dtype is of the datetime64[ns] dtype.
```
>>> is_datetime64_ns_dtype(str)
False
>>> is_datetime64_ns_dtype(int)
False
>>> is_datetime64_ns_dtype(np.datetime64) # no unit
False
>>> is_datetime64_ns_dtype(DatetimeTZDtype("ns", "US/Eastern"))
True
>>> is_datetime64_ns_dtype(np.array(['a', 'b']))
False
>>> is_datetime64_ns_dtype(np.array([1, 2]))
False
>>> is_datetime64_ns_dtype(np.array([], dtype="datetime64")) # no unit
False
>>> is_datetime64_ns_dtype(np.array([], dtype="datetime64[ps]")) # wrong unit
False
>>> is_datetime64_ns_dtype(pd.DatetimeIndex([1, 2, 3], dtype="datetime64[ns]"))
True
```
|
pandas.api.types.is_datetime64_ns_dtype(arr_or_dtype)[source]#
Check whether the provided array or dtype is of the datetime64[ns] dtype.
Parameters
arr_or_dtype : array-like or dtype
    The array or dtype to check.
Returns
bool
    Whether or not the array or dtype is of the datetime64[ns] dtype.
Examples
>>> is_datetime64_ns_dtype(str)
False
>>> is_datetime64_ns_dtype(int)
False
>>> is_datetime64_ns_dtype(np.datetime64) # no unit
False
>>> is_datetime64_ns_dtype(DatetimeTZDtype("ns", "US/Eastern"))
True
>>> is_datetime64_ns_dtype(np.array(['a', 'b']))
False
>>> is_datetime64_ns_dtype(np.array([1, 2]))
False
>>> is_datetime64_ns_dtype(np.array([], dtype="datetime64")) # no unit
False
>>> is_datetime64_ns_dtype(np.array([], dtype="datetime64[ps]")) # wrong unit
False
>>> is_datetime64_ns_dtype(pd.DatetimeIndex([1, 2, 3], dtype="datetime64[ns]"))
True
|
reference/api/pandas.api.types.is_datetime64_ns_dtype.html
|
pandas.tseries.offsets.Second.is_quarter_start
|
`pandas.tseries.offsets.Second.is_quarter_start`
Return boolean whether a timestamp occurs on the quarter start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
```
|
Second.is_quarter_start()#
Return boolean whether a timestamp occurs on the quarter start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
|
reference/api/pandas.tseries.offsets.Second.is_quarter_start.html
|
pandas.DataFrame.pct_change
|
`pandas.DataFrame.pct_change`
Percentage change between the current and a prior element.
```
>>> s = pd.Series([90, 91, 85])
>>> s
0 90
1 91
2 85
dtype: int64
```
|
DataFrame.pct_change(periods=1, fill_method='pad', limit=None, freq=None, **kwargs)[source]#
Percentage change between the current and a prior element.
Computes the percentage change from the immediately previous row by
default. This is useful in comparing the percentage of change in a time
series of elements.
Parameters
periods : int, default 1
    Periods to shift for forming percent change.
fill_method : str, default ‘pad’
    How to handle NAs before computing percent changes.
limit : int, default None
    The number of consecutive NAs to fill before stopping.
freq : DateOffset, timedelta, or str, optional
    Increment to use from time series API (e.g. ‘M’ or BDay()).
**kwargs
    Additional keyword arguments are passed into DataFrame.shift or Series.shift.
Returns
chg : Series or DataFrame
    The same type as the calling object.
See also
Series.diff : Compute the difference of two elements in a Series.
DataFrame.diff : Compute the difference of two elements in a DataFrame.
Series.shift : Shift the index by some number of periods.
DataFrame.shift : Shift the index by some number of periods.
Examples
Series
>>> s = pd.Series([90, 91, 85])
>>> s
0 90
1 91
2 85
dtype: int64
>>> s.pct_change()
0 NaN
1 0.011111
2 -0.065934
dtype: float64
>>> s.pct_change(periods=2)
0 NaN
1 NaN
2 -0.055556
dtype: float64
See the percentage change in a Series where filling NAs with last
valid observation forward to next valid.
>>> s = pd.Series([90, 91, None, 85])
>>> s
0 90.0
1 91.0
2 NaN
3 85.0
dtype: float64
>>> s.pct_change(fill_method='ffill')
0 NaN
1 0.011111
2 0.000000
3 -0.065934
dtype: float64
DataFrame
Percentage change in French franc, Deutsche Mark, and Italian lira from
1980-01-01 to 1980-03-01.
>>> df = pd.DataFrame({
... 'FR': [4.0405, 4.0963, 4.3149],
... 'GR': [1.7246, 1.7482, 1.8519],
... 'IT': [804.74, 810.01, 860.13]},
... index=['1980-01-01', '1980-02-01', '1980-03-01'])
>>> df
FR GR IT
1980-01-01 4.0405 1.7246 804.74
1980-02-01 4.0963 1.7482 810.01
1980-03-01 4.3149 1.8519 860.13
>>> df.pct_change()
FR GR IT
1980-01-01 NaN NaN NaN
1980-02-01 0.013810 0.013684 0.006549
1980-03-01 0.053365 0.059318 0.061876
Percentage of change in GOOG and APPL stock volume. Shows computing
the percentage change between columns.
>>> df = pd.DataFrame({
... '2016': [1769950, 30586265],
... '2015': [1500923, 40912316],
... '2014': [1371819, 41403351]},
... index=['GOOG', 'APPL'])
>>> df
2016 2015 2014
GOOG 1769950 1500923 1371819
APPL 30586265 40912316 41403351
>>> df.pct_change(axis='columns', periods=-1)
2016 2015 2014
GOOG 0.179241 0.094112 NaN
APPL -0.252395 -0.011860 NaN
|
reference/api/pandas.DataFrame.pct_change.html
|
pandas.Timestamp.tzinfo
|
pandas.Timestamp.tzinfo
|
Timestamp.tzinfo#
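No description ships with this attribute; as a sketch, tzinfo mirrors datetime.tzinfo: None for tz-naive timestamps, the timezone object otherwise (repr shown for a pytz-backed build):
```
>>> pd.Timestamp("2023-01-01").tzinfo is None
True
>>> pd.Timestamp("2023-01-01", tz="UTC").tzinfo
<UTC>
```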
|
reference/api/pandas.Timestamp.tzinfo.html
|
pandas.tseries.offsets.MonthEnd.rollforward
|
`pandas.tseries.offsets.MonthEnd.rollforward`
Roll provided date forward to next offset only if not on offset.
|
MonthEnd.rollforward()#
Roll provided date forward to next offset only if not on offset.
Returns
Timestamp
    Rolled timestamp if not on offset, otherwise unchanged timestamp.
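A short sketch (the anchor dates for MonthEnd are the last day of each month):
```
>>> pd.offsets.MonthEnd().rollforward(pd.Timestamp(2022, 1, 10))
Timestamp('2022-01-31 00:00:00')
>>> pd.offsets.MonthEnd().rollforward(pd.Timestamp(2022, 1, 31))
Timestamp('2022-01-31 00:00:00')
```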
|
reference/api/pandas.tseries.offsets.MonthEnd.rollforward.html
|
Time series / date functionality
|
Time series / date functionality
|
pandas contains extensive capabilities and features for working with time series data for all domains.
Using the NumPy datetime64 and timedelta64 dtypes, pandas has consolidated a large number of
features from other Python libraries like scikits.timeseries as well as created
a tremendous amount of new functionality for manipulating time series data.
For example, pandas supports:
Parsing time series information from various sources and formats
In [1]: import datetime
In [2]: dti = pd.to_datetime(
...: ["1/1/2018", np.datetime64("2018-01-01"), datetime.datetime(2018, 1, 1)]
...: )
...:
In [3]: dti
Out[3]: DatetimeIndex(['2018-01-01', '2018-01-01', '2018-01-01'], dtype='datetime64[ns]', freq=None)
Generate sequences of fixed-frequency dates and time spans
In [4]: dti = pd.date_range("2018-01-01", periods=3, freq="H")
In [5]: dti
Out[5]:
DatetimeIndex(['2018-01-01 00:00:00', '2018-01-01 01:00:00',
'2018-01-01 02:00:00'],
dtype='datetime64[ns]', freq='H')
Manipulating and converting date times with timezone information
In [6]: dti = dti.tz_localize("UTC")
In [7]: dti
Out[7]:
DatetimeIndex(['2018-01-01 00:00:00+00:00', '2018-01-01 01:00:00+00:00',
'2018-01-01 02:00:00+00:00'],
dtype='datetime64[ns, UTC]', freq='H')
In [8]: dti.tz_convert("US/Pacific")
Out[8]:
DatetimeIndex(['2017-12-31 16:00:00-08:00', '2017-12-31 17:00:00-08:00',
'2017-12-31 18:00:00-08:00'],
dtype='datetime64[ns, US/Pacific]', freq='H')
Resampling or converting a time series to a particular frequency
In [9]: idx = pd.date_range("2018-01-01", periods=5, freq="H")
In [10]: ts = pd.Series(range(len(idx)), index=idx)
In [11]: ts
Out[11]:
2018-01-01 00:00:00 0
2018-01-01 01:00:00 1
2018-01-01 02:00:00 2
2018-01-01 03:00:00 3
2018-01-01 04:00:00 4
Freq: H, dtype: int64
In [12]: ts.resample("2H").mean()
Out[12]:
2018-01-01 00:00:00 0.5
2018-01-01 02:00:00 2.5
2018-01-01 04:00:00 4.0
Freq: 2H, dtype: float64
Performing date and time arithmetic with absolute or relative time increments
In [13]: friday = pd.Timestamp("2018-01-05")
In [14]: friday.day_name()
Out[14]: 'Friday'
# Add 1 day
In [15]: saturday = friday + pd.Timedelta("1 day")
In [16]: saturday.day_name()
Out[16]: 'Saturday'
# Add 1 business day (Friday --> Monday)
In [17]: monday = friday + pd.offsets.BDay()
In [18]: monday.day_name()
Out[18]: 'Monday'
pandas provides a relatively compact and self-contained set of tools for
performing the above tasks and more.
Overview#
pandas captures 4 general time related concepts:
Date times: A specific date and time with timezone support. Similar to datetime.datetime from the standard library.
Time deltas: An absolute time duration. Similar to datetime.timedelta from the standard library.
Time spans: A span of time defined by a point in time and its associated frequency.
Date offsets: A relative time duration that respects calendar arithmetic. Similar to dateutil.relativedelta.relativedelta from the dateutil package.
| Concept | Scalar Class | Array Class | pandas Data Type | Primary Creation Method |
|---|---|---|---|---|
| Date times | Timestamp | DatetimeIndex | datetime64[ns] or datetime64[ns, tz] | to_datetime or date_range |
| Time deltas | Timedelta | TimedeltaIndex | timedelta64[ns] | to_timedelta or timedelta_range |
| Time spans | Period | PeriodIndex | period[freq] | Period or period_range |
| Date offsets | DateOffset | None | None | DateOffset |
For time series data, it’s conventional to represent the time component in the index of a Series or DataFrame
so manipulations can be performed with respect to the time element.
In [19]: pd.Series(range(3), index=pd.date_range("2000", freq="D", periods=3))
Out[19]:
2000-01-01 0
2000-01-02 1
2000-01-03 2
Freq: D, dtype: int64
However, Series and DataFrame can also directly support the time component as data itself.
In [20]: pd.Series(pd.date_range("2000", freq="D", periods=3))
Out[20]:
0 2000-01-01
1 2000-01-02
2 2000-01-03
dtype: datetime64[ns]
Series and DataFrame have extended data type support and functionality for datetime, timedelta
and Period data when passed into those constructors. DateOffset
data, however, will be stored as object data.
In [21]: pd.Series(pd.period_range("1/1/2011", freq="M", periods=3))
Out[21]:
0 2011-01
1 2011-02
2 2011-03
dtype: period[M]
In [22]: pd.Series([pd.DateOffset(1), pd.DateOffset(2)])
Out[22]:
0 <DateOffset>
1 <2 * DateOffsets>
dtype: object
In [23]: pd.Series(pd.date_range("1/1/2011", freq="M", periods=3))
Out[23]:
0 2011-01-31
1 2011-02-28
2 2011-03-31
dtype: datetime64[ns]
Lastly, pandas represents null date times, time deltas, and time spans as NaT, which
is useful for representing missing or null date-like values and behaves similarly
to np.nan for float data.
In [24]: pd.Timestamp(pd.NaT)
Out[24]: NaT
In [25]: pd.Timedelta(pd.NaT)
Out[25]: NaT
In [26]: pd.Period(pd.NaT)
Out[26]: NaT
# Equality acts as np.nan would
In [27]: pd.NaT == pd.NaT
Out[27]: False
Timestamps vs. time spans#
Timestamped data is the most basic type of time series data that associates
values with points in time. For pandas objects it means using the points in
time.
In [28]: pd.Timestamp(datetime.datetime(2012, 5, 1))
Out[28]: Timestamp('2012-05-01 00:00:00')
In [29]: pd.Timestamp("2012-05-01")
Out[29]: Timestamp('2012-05-01 00:00:00')
In [30]: pd.Timestamp(2012, 5, 1)
Out[30]: Timestamp('2012-05-01 00:00:00')
However, in many cases it is more natural to associate things like change
variables with a time span instead. The span represented by Period can be
specified explicitly, or inferred from datetime string format.
For example:
In [31]: pd.Period("2011-01")
Out[31]: Period('2011-01', 'M')
In [32]: pd.Period("2012-05", freq="D")
Out[32]: Period('2012-05-01', 'D')
Timestamp and Period can serve as an index. Lists of
Timestamp and Period are automatically coerced to DatetimeIndex
and PeriodIndex respectively.
In [33]: dates = [
....: pd.Timestamp("2012-05-01"),
....: pd.Timestamp("2012-05-02"),
....: pd.Timestamp("2012-05-03"),
....: ]
....:
In [34]: ts = pd.Series(np.random.randn(3), dates)
In [35]: type(ts.index)
Out[35]: pandas.core.indexes.datetimes.DatetimeIndex
In [36]: ts.index
Out[36]: DatetimeIndex(['2012-05-01', '2012-05-02', '2012-05-03'], dtype='datetime64[ns]', freq=None)
In [37]: ts
Out[37]:
2012-05-01 0.469112
2012-05-02 -0.282863
2012-05-03 -1.509059
dtype: float64
In [38]: periods = [pd.Period("2012-01"), pd.Period("2012-02"), pd.Period("2012-03")]
In [39]: ts = pd.Series(np.random.randn(3), periods)
In [40]: type(ts.index)
Out[40]: pandas.core.indexes.period.PeriodIndex
In [41]: ts.index
Out[41]: PeriodIndex(['2012-01', '2012-02', '2012-03'], dtype='period[M]')
In [42]: ts
Out[42]:
2012-01 -1.135632
2012-02 1.212112
2012-03 -0.173215
Freq: M, dtype: float64
pandas allows you to capture both representations and
convert between them. Under the hood, pandas represents timestamps using
instances of Timestamp and sequences of timestamps using instances of
DatetimeIndex. For regular time spans, pandas uses Period objects for
scalar values and PeriodIndex for sequences of spans. Better support for
irregular intervals with arbitrary start and end points is forthcoming in
future releases.
Converting to timestamps#
To convert a Series or list-like object of date-like objects e.g. strings,
epochs, or a mixture, you can use the to_datetime function. When passed
a Series, this returns a Series (with the same index), while a list-like
is converted to a DatetimeIndex:
In [43]: pd.to_datetime(pd.Series(["Jul 31, 2009", "2010-01-10", None]))
Out[43]:
0 2009-07-31
1 2010-01-10
2 NaT
dtype: datetime64[ns]
In [44]: pd.to_datetime(["2005/11/23", "2010.12.31"])
Out[44]: DatetimeIndex(['2005-11-23', '2010-12-31'], dtype='datetime64[ns]', freq=None)
If you use dates which start with the day first (i.e. European style),
you can pass the dayfirst flag:
In [45]: pd.to_datetime(["04-01-2012 10:00"], dayfirst=True)
Out[45]: DatetimeIndex(['2012-01-04 10:00:00'], dtype='datetime64[ns]', freq=None)
In [46]: pd.to_datetime(["14-01-2012", "01-14-2012"], dayfirst=True)
Out[46]: DatetimeIndex(['2012-01-14', '2012-01-14'], dtype='datetime64[ns]', freq=None)
Warning
You see in the above example that dayfirst isn’t strict. If a date
can’t be parsed with the day being first it will be parsed as if
dayfirst were False, and in the case of parsing delimited date strings
(e.g. 31-12-2012) then a warning will also be raised.
If you pass a single string to to_datetime, it returns a single Timestamp.
Timestamp can also accept string input, but it doesn’t accept string parsing
options like dayfirst or format, so use to_datetime if these are required.
In [47]: pd.to_datetime("2010/11/12")
Out[47]: Timestamp('2010-11-12 00:00:00')
In [48]: pd.Timestamp("2010/11/12")
Out[48]: Timestamp('2010-11-12 00:00:00')
You can also use the DatetimeIndex constructor directly:
In [49]: pd.DatetimeIndex(["2018-01-01", "2018-01-03", "2018-01-05"])
Out[49]: DatetimeIndex(['2018-01-01', '2018-01-03', '2018-01-05'], dtype='datetime64[ns]', freq=None)
The string ‘infer’ can be passed in order to set the frequency of the index as the
inferred frequency upon creation:
In [50]: pd.DatetimeIndex(["2018-01-01", "2018-01-03", "2018-01-05"], freq="infer")
Out[50]: DatetimeIndex(['2018-01-01', '2018-01-03', '2018-01-05'], dtype='datetime64[ns]', freq='2D')
Providing a format argument#
In addition to the required datetime string, a format argument can be passed to ensure specific parsing.
This could also potentially speed up the conversion considerably.
In [51]: pd.to_datetime("2010/11/12", format="%Y/%m/%d")
Out[51]: Timestamp('2010-11-12 00:00:00')
In [52]: pd.to_datetime("12-11-2010 00:00", format="%d-%m-%Y %H:%M")
Out[52]: Timestamp('2010-11-12 00:00:00')
For more information on the choices available when specifying the format
option, see the Python datetime documentation.
Assembling datetime from multiple DataFrame columns#
You can also pass a DataFrame of integer or string columns to assemble into a Series of Timestamps.
In [53]: df = pd.DataFrame(
....: {"year": [2015, 2016], "month": [2, 3], "day": [4, 5], "hour": [2, 3]}
....: )
....:
In [54]: pd.to_datetime(df)
Out[54]:
0 2015-02-04 02:00:00
1 2016-03-05 03:00:00
dtype: datetime64[ns]
You can pass only the columns that you need to assemble.
In [55]: pd.to_datetime(df[["year", "month", "day"]])
Out[55]:
0 2015-02-04
1 2016-03-05
dtype: datetime64[ns]
pd.to_datetime looks for standard designations of the datetime component in the column names, including:
required: year, month, day
optional: hour, minute, second, millisecond, microsecond, nanosecond
Invalid data#
The default behavior, errors='raise', is to raise when unparsable:
In [2]: pd.to_datetime(['2009/07/31', 'asd'], errors='raise')
ValueError: Unknown string format
Pass errors='ignore' to return the original input when unparsable:
In [56]: pd.to_datetime(["2009/07/31", "asd"], errors="ignore")
Out[56]: Index(['2009/07/31', 'asd'], dtype='object')
Pass errors='coerce' to convert unparsable data to NaT (not a time):
In [57]: pd.to_datetime(["2009/07/31", "asd"], errors="coerce")
Out[57]: DatetimeIndex(['2009-07-31', 'NaT'], dtype='datetime64[ns]', freq=None)
Epoch timestamps#
pandas supports converting integer or float epoch times to Timestamp and
DatetimeIndex. The default unit is nanoseconds, since that is how Timestamp
objects are stored internally. However, epochs are often stored in another unit
which can be specified. These are computed from the starting point specified by the
origin parameter.
In [58]: pd.to_datetime(
....: [1349720105, 1349806505, 1349892905, 1349979305, 1350065705], unit="s"
....: )
....:
Out[58]:
DatetimeIndex(['2012-10-08 18:15:05', '2012-10-09 18:15:05',
'2012-10-10 18:15:05', '2012-10-11 18:15:05',
'2012-10-12 18:15:05'],
dtype='datetime64[ns]', freq=None)
In [59]: pd.to_datetime(
....: [1349720105100, 1349720105200, 1349720105300, 1349720105400, 1349720105500],
....: unit="ms",
....: )
....:
Out[59]:
DatetimeIndex(['2012-10-08 18:15:05.100000', '2012-10-08 18:15:05.200000',
'2012-10-08 18:15:05.300000', '2012-10-08 18:15:05.400000',
'2012-10-08 18:15:05.500000'],
dtype='datetime64[ns]', freq=None)
Note
The unit parameter does not use the same strings as the format parameter
that was discussed above. The available units are listed in the
documentation for pandas.to_datetime().
Changed in version 1.0.0.
Constructing a Timestamp or DatetimeIndex with an epoch timestamp
with the tz argument specified will raise a ValueError. If you have
epochs in wall time in another timezone, you can read the epochs
as timezone-naive timestamps and then localize to the appropriate timezone:
In [60]: pd.Timestamp(1262347200000000000).tz_localize("US/Pacific")
Out[60]: Timestamp('2010-01-01 12:00:00-0800', tz='US/Pacific')
In [61]: pd.DatetimeIndex([1262347200000000000]).tz_localize("US/Pacific")
Out[61]: DatetimeIndex(['2010-01-01 12:00:00-08:00'], dtype='datetime64[ns, US/Pacific]', freq=None)
Note
Epoch times will be rounded to the nearest nanosecond.
Warning
Conversion of float epoch times can lead to inaccurate and unexpected results.
Python floats have about 15 digits precision in
decimal. Rounding during conversion from float to high precision Timestamp is
unavoidable. The only way to achieve exact precision is to use fixed-width
types (e.g. an int64).
In [62]: pd.to_datetime([1490195805.433, 1490195805.433502912], unit="s")
Out[62]: DatetimeIndex(['2017-03-22 15:16:45.433000088', '2017-03-22 15:16:45.433502913'], dtype='datetime64[ns]', freq=None)
In [63]: pd.to_datetime(1490195805433502912, unit="ns")
Out[63]: Timestamp('2017-03-22 15:16:45.433502912')
See also
Using the origin parameter
From timestamps to epoch#
To invert the operation from above, namely, to convert from a Timestamp to a ‘unix’ epoch:
In [64]: stamps = pd.date_range("2012-10-08 18:15:05", periods=4, freq="D")
In [65]: stamps
Out[65]:
DatetimeIndex(['2012-10-08 18:15:05', '2012-10-09 18:15:05',
'2012-10-10 18:15:05', '2012-10-11 18:15:05'],
dtype='datetime64[ns]', freq='D')
We subtract the epoch (midnight at January 1, 1970 UTC) and then floor divide by the
“unit” (1 second).
In [66]: (stamps - pd.Timestamp("1970-01-01")) // pd.Timedelta("1s")
Out[66]: Int64Index([1349720105, 1349806505, 1349892905, 1349979305], dtype='int64')
Using the origin parameter#
Using the origin parameter, one can specify an alternative starting point for creation
of a DatetimeIndex. For example, to use 1960-01-01 as the starting date:
In [67]: pd.to_datetime([1, 2, 3], unit="D", origin=pd.Timestamp("1960-01-01"))
Out[67]: DatetimeIndex(['1960-01-02', '1960-01-03', '1960-01-04'], dtype='datetime64[ns]', freq=None)
The default is set at origin='unix', which defaults to 1970-01-01 00:00:00.
Commonly called ‘unix epoch’ or POSIX time.
In [68]: pd.to_datetime([1, 2, 3], unit="D")
Out[68]: DatetimeIndex(['1970-01-02', '1970-01-03', '1970-01-04'], dtype='datetime64[ns]', freq=None)
Generating ranges of timestamps#
To generate an index with timestamps, you can use either the DatetimeIndex or
Index constructor and pass in a list of datetime objects:
In [69]: dates = [
....: datetime.datetime(2012, 5, 1),
....: datetime.datetime(2012, 5, 2),
....: datetime.datetime(2012, 5, 3),
....: ]
....:
# Note the frequency information
In [70]: index = pd.DatetimeIndex(dates)
In [71]: index
Out[71]: DatetimeIndex(['2012-05-01', '2012-05-02', '2012-05-03'], dtype='datetime64[ns]', freq=None)
# Automatically converted to DatetimeIndex
In [72]: index = pd.Index(dates)
In [73]: index
Out[73]: DatetimeIndex(['2012-05-01', '2012-05-02', '2012-05-03'], dtype='datetime64[ns]', freq=None)
In practice this becomes very cumbersome because we often need a very long
index with a large number of timestamps. If we need timestamps on a regular
frequency, we can use the date_range() and bdate_range() functions
to create a DatetimeIndex. The default frequency for date_range is a
calendar day while the default for bdate_range is a business day:
In [74]: start = datetime.datetime(2011, 1, 1)
In [75]: end = datetime.datetime(2012, 1, 1)
In [76]: index = pd.date_range(start, end)
In [77]: index
Out[77]:
DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03', '2011-01-04',
'2011-01-05', '2011-01-06', '2011-01-07', '2011-01-08',
'2011-01-09', '2011-01-10',
...
'2011-12-23', '2011-12-24', '2011-12-25', '2011-12-26',
'2011-12-27', '2011-12-28', '2011-12-29', '2011-12-30',
'2011-12-31', '2012-01-01'],
dtype='datetime64[ns]', length=366, freq='D')
In [78]: index = pd.bdate_range(start, end)
In [79]: index
Out[79]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07', '2011-01-10', '2011-01-11', '2011-01-12',
'2011-01-13', '2011-01-14',
...
'2011-12-19', '2011-12-20', '2011-12-21', '2011-12-22',
'2011-12-23', '2011-12-26', '2011-12-27', '2011-12-28',
'2011-12-29', '2011-12-30'],
dtype='datetime64[ns]', length=260, freq='B')
Convenience functions like date_range and bdate_range can utilize a
variety of frequency aliases:
In [80]: pd.date_range(start, periods=1000, freq="M")
Out[80]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-30',
'2011-05-31', '2011-06-30', '2011-07-31', '2011-08-31',
'2011-09-30', '2011-10-31',
...
'2093-07-31', '2093-08-31', '2093-09-30', '2093-10-31',
'2093-11-30', '2093-12-31', '2094-01-31', '2094-02-28',
'2094-03-31', '2094-04-30'],
dtype='datetime64[ns]', length=1000, freq='M')
In [81]: pd.bdate_range(start, periods=250, freq="BQS")
Out[81]:
DatetimeIndex(['2011-01-03', '2011-04-01', '2011-07-01', '2011-10-03',
'2012-01-02', '2012-04-02', '2012-07-02', '2012-10-01',
'2013-01-01', '2013-04-01',
...
'2071-01-01', '2071-04-01', '2071-07-01', '2071-10-01',
'2072-01-01', '2072-04-01', '2072-07-01', '2072-10-03',
'2073-01-02', '2073-04-03'],
dtype='datetime64[ns]', length=250, freq='BQS-JAN')
date_range and bdate_range make it easy to generate a range of dates
using various combinations of parameters like start, end, periods,
and freq. The start and end dates are strictly inclusive, so dates outside
of those specified will not be generated:
In [82]: pd.date_range(start, end, freq="BM")
Out[82]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-29',
'2011-05-31', '2011-06-30', '2011-07-29', '2011-08-31',
'2011-09-30', '2011-10-31', '2011-11-30', '2011-12-30'],
dtype='datetime64[ns]', freq='BM')
In [83]: pd.date_range(start, end, freq="W")
Out[83]:
DatetimeIndex(['2011-01-02', '2011-01-09', '2011-01-16', '2011-01-23',
'2011-01-30', '2011-02-06', '2011-02-13', '2011-02-20',
'2011-02-27', '2011-03-06', '2011-03-13', '2011-03-20',
'2011-03-27', '2011-04-03', '2011-04-10', '2011-04-17',
'2011-04-24', '2011-05-01', '2011-05-08', '2011-05-15',
'2011-05-22', '2011-05-29', '2011-06-05', '2011-06-12',
'2011-06-19', '2011-06-26', '2011-07-03', '2011-07-10',
'2011-07-17', '2011-07-24', '2011-07-31', '2011-08-07',
'2011-08-14', '2011-08-21', '2011-08-28', '2011-09-04',
'2011-09-11', '2011-09-18', '2011-09-25', '2011-10-02',
'2011-10-09', '2011-10-16', '2011-10-23', '2011-10-30',
'2011-11-06', '2011-11-13', '2011-11-20', '2011-11-27',
'2011-12-04', '2011-12-11', '2011-12-18', '2011-12-25',
'2012-01-01'],
dtype='datetime64[ns]', freq='W-SUN')
In [84]: pd.bdate_range(end=end, periods=20)
Out[84]:
DatetimeIndex(['2011-12-05', '2011-12-06', '2011-12-07', '2011-12-08',
'2011-12-09', '2011-12-12', '2011-12-13', '2011-12-14',
'2011-12-15', '2011-12-16', '2011-12-19', '2011-12-20',
'2011-12-21', '2011-12-22', '2011-12-23', '2011-12-26',
'2011-12-27', '2011-12-28', '2011-12-29', '2011-12-30'],
dtype='datetime64[ns]', freq='B')
In [85]: pd.bdate_range(start=start, periods=20)
Out[85]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07', '2011-01-10', '2011-01-11', '2011-01-12',
'2011-01-13', '2011-01-14', '2011-01-17', '2011-01-18',
'2011-01-19', '2011-01-20', '2011-01-21', '2011-01-24',
'2011-01-25', '2011-01-26', '2011-01-27', '2011-01-28'],
dtype='datetime64[ns]', freq='B')
Specifying start, end, and periods will generate a range of evenly spaced
dates from start to end inclusively, with periods number of elements in the
resulting DatetimeIndex:
In [86]: pd.date_range("2018-01-01", "2018-01-05", periods=5)
Out[86]:
DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04',
'2018-01-05'],
dtype='datetime64[ns]', freq=None)
In [87]: pd.date_range("2018-01-01", "2018-01-05", periods=10)
Out[87]:
DatetimeIndex(['2018-01-01 00:00:00', '2018-01-01 10:40:00',
'2018-01-01 21:20:00', '2018-01-02 08:00:00',
'2018-01-02 18:40:00', '2018-01-03 05:20:00',
'2018-01-03 16:00:00', '2018-01-04 02:40:00',
'2018-01-04 13:20:00', '2018-01-05 00:00:00'],
dtype='datetime64[ns]', freq=None)
Custom frequency ranges#
bdate_range can also generate a range of custom frequency dates by using
the weekmask and holidays parameters. These parameters will only be
used if a custom frequency string is passed.
In [88]: weekmask = "Mon Wed Fri"
In [89]: holidays = [datetime.datetime(2011, 1, 5), datetime.datetime(2011, 3, 14)]
In [90]: pd.bdate_range(start, end, freq="C", weekmask=weekmask, holidays=holidays)
Out[90]:
DatetimeIndex(['2011-01-03', '2011-01-07', '2011-01-10', '2011-01-12',
'2011-01-14', '2011-01-17', '2011-01-19', '2011-01-21',
'2011-01-24', '2011-01-26',
...
'2011-12-09', '2011-12-12', '2011-12-14', '2011-12-16',
'2011-12-19', '2011-12-21', '2011-12-23', '2011-12-26',
'2011-12-28', '2011-12-30'],
dtype='datetime64[ns]', length=154, freq='C')
In [91]: pd.bdate_range(start, end, freq="CBMS", weekmask=weekmask)
Out[91]:
DatetimeIndex(['2011-01-03', '2011-02-02', '2011-03-02', '2011-04-01',
'2011-05-02', '2011-06-01', '2011-07-01', '2011-08-01',
'2011-09-02', '2011-10-03', '2011-11-02', '2011-12-02'],
dtype='datetime64[ns]', freq='CBMS')
See also
Custom business days
Timestamp limitations#
Since pandas represents timestamps in nanosecond resolution, the time span that
can be represented using a 64-bit integer is limited to approximately 584 years:
In [92]: pd.Timestamp.min
Out[92]: Timestamp('1677-09-21 00:12:43.145224193')
In [93]: pd.Timestamp.max
Out[93]: Timestamp('2262-04-11 23:47:16.854775807')
See also
Representing out-of-bounds spans
Indexing#
One of the main uses for DatetimeIndex is as an index for pandas objects.
The DatetimeIndex class contains many time series related optimizations:
A large range of dates for various offsets are pre-computed and cached
under the hood in order to make generating subsequent date ranges very fast
(just have to grab a slice).
Fast shifting using the shift method on pandas objects.
Unioning of overlapping DatetimeIndex objects with the same frequency is
very fast (important for fast data alignment).
Quick access to date fields via properties such as year, month, etc.
Regularization functions like snap and very fast asof logic (a snap sketch follows below).
DatetimeIndex objects have all the basic functionality of regular Index
objects, and a smorgasbord of advanced time series specific methods for easy
frequency processing.
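For instance, the snap() regularization mentioned above rounds slightly irregular timestamps to a regular frequency. A minimal sketch, with hypothetical timestamps:

# Slightly irregular timestamps (made up for illustration)
irregular = pd.DatetimeIndex(["2022-01-01 00:00:59", "2022-01-01 00:02:01"])

# Snap each timestamp to the nearest occurring minute
irregular.snap(freq="T")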
See also
Reindexing methods
Note
While pandas does not force you to have a sorted date index, some of these
methods may have unexpected or incorrect behavior if the dates are unsorted.
DatetimeIndex can be used like a regular index and offers all of its
intelligent functionality like selection, slicing, etc.
In [94]: rng = pd.date_range(start, end, freq="BM")
In [95]: ts = pd.Series(np.random.randn(len(rng)), index=rng)
In [96]: ts.index
Out[96]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-29',
'2011-05-31', '2011-06-30', '2011-07-29', '2011-08-31',
'2011-09-30', '2011-10-31', '2011-11-30', '2011-12-30'],
dtype='datetime64[ns]', freq='BM')
In [97]: ts[:5].index
Out[97]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-29',
'2011-05-31'],
dtype='datetime64[ns]', freq='BM')
In [98]: ts[::2].index
Out[98]:
DatetimeIndex(['2011-01-31', '2011-03-31', '2011-05-31', '2011-07-29',
'2011-09-30', '2011-11-30'],
dtype='datetime64[ns]', freq='2BM')
Partial string indexing#
Dates and strings that parse to timestamps can be passed as indexing parameters:
In [99]: ts["1/31/2011"]
Out[99]: 0.11920871129693428
In [100]: ts[datetime.datetime(2011, 12, 25):]
Out[100]:
2011-12-30 0.56702
Freq: BM, dtype: float64
In [101]: ts["10/31/2011":"12/31/2011"]
Out[101]:
2011-10-31 0.271860
2011-11-30 -0.424972
2011-12-30 0.567020
Freq: BM, dtype: float64
To provide convenience for accessing longer time series, you can also pass in
the year or year and month as strings:
In [102]: ts["2011"]
Out[102]:
2011-01-31 0.119209
2011-02-28 -1.044236
2011-03-31 -0.861849
2011-04-29 -2.104569
2011-05-31 -0.494929
2011-06-30 1.071804
2011-07-29 0.721555
2011-08-31 -0.706771
2011-09-30 -1.039575
2011-10-31 0.271860
2011-11-30 -0.424972
2011-12-30 0.567020
Freq: BM, dtype: float64
In [103]: ts["2011-6"]
Out[103]:
2011-06-30 1.071804
Freq: BM, dtype: float64
This type of slicing will work on a DataFrame with a DatetimeIndex as well. Since the
partial string selection is a form of label slicing, the endpoints will be included. This
would include matching times on an included date:
Warning
Indexing DataFrame rows with a single string with getitem (e.g. frame[dtstring])
is deprecated starting with pandas 1.2.0 (given the ambiguity of whether it is indexing
the rows or selecting a column) and will be removed in a future version. The equivalent
with .loc (e.g. frame.loc[dtstring]) is still supported.
In [104]: dft = pd.DataFrame(
.....: np.random.randn(100000, 1),
.....: columns=["A"],
.....: index=pd.date_range("20130101", periods=100000, freq="T"),
.....: )
.....:
In [105]: dft
Out[105]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-03-11 10:35:00 -0.747967
2013-03-11 10:36:00 -0.034523
2013-03-11 10:37:00 -0.201754
2013-03-11 10:38:00 -1.509067
2013-03-11 10:39:00 -1.693043
[100000 rows x 1 columns]
In [106]: dft.loc["2013"]
Out[106]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-03-11 10:35:00 -0.747967
2013-03-11 10:36:00 -0.034523
2013-03-11 10:37:00 -0.201754
2013-03-11 10:38:00 -1.509067
2013-03-11 10:39:00 -1.693043
[100000 rows x 1 columns]
This starts on the very first time in the month, and includes the last date and
time for the month:
In [107]: dft["2013-1":"2013-2"]
Out[107]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-02-28 23:55:00 0.850929
2013-02-28 23:56:00 0.976712
2013-02-28 23:57:00 -2.693884
2013-02-28 23:58:00 -1.575535
2013-02-28 23:59:00 -1.573517
[84960 rows x 1 columns]
This specifies a stop time that includes all of the times on the last day:
In [108]: dft["2013-1":"2013-2-28"]
Out[108]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-02-28 23:55:00 0.850929
2013-02-28 23:56:00 0.976712
2013-02-28 23:57:00 -2.693884
2013-02-28 23:58:00 -1.575535
2013-02-28 23:59:00 -1.573517
[84960 rows x 1 columns]
This specifies an exact stop time (and is not the same as the above):
In [109]: dft["2013-1":"2013-2-28 00:00:00"]
Out[109]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-02-27 23:56:00 1.197749
2013-02-27 23:57:00 0.720521
2013-02-27 23:58:00 -0.072718
2013-02-27 23:59:00 -0.681192
2013-02-28 00:00:00 -0.557501
[83521 rows x 1 columns]
We are stopping on the included end-point as it is part of the index:
In [110]: dft["2013-1-15":"2013-1-15 12:30:00"]
Out[110]:
A
2013-01-15 00:00:00 -0.984810
2013-01-15 00:01:00 0.941451
2013-01-15 00:02:00 1.559365
2013-01-15 00:03:00 1.034374
2013-01-15 00:04:00 -1.480656
... ...
2013-01-15 12:26:00 0.371454
2013-01-15 12:27:00 -0.930806
2013-01-15 12:28:00 -0.069177
2013-01-15 12:29:00 0.066510
2013-01-15 12:30:00 -0.003945
[751 rows x 1 columns]
DatetimeIndex partial string indexing also works on a DataFrame with a MultiIndex:
In [111]: dft2 = pd.DataFrame(
.....: np.random.randn(20, 1),
.....: columns=["A"],
.....: index=pd.MultiIndex.from_product(
.....: [pd.date_range("20130101", periods=10, freq="12H"), ["a", "b"]]
.....: ),
.....: )
.....:
In [112]: dft2
Out[112]:
A
2013-01-01 00:00:00 a -0.298694
b 0.823553
2013-01-01 12:00:00 a 0.943285
b -1.479399
2013-01-02 00:00:00 a -1.643342
... ...
2013-01-04 12:00:00 b 0.069036
2013-01-05 00:00:00 a 0.122297
b 1.422060
2013-01-05 12:00:00 a 0.370079
b 1.016331
[20 rows x 1 columns]
In [113]: dft2.loc["2013-01-05"]
Out[113]:
A
2013-01-05 00:00:00 a 0.122297
b 1.422060
2013-01-05 12:00:00 a 0.370079
b 1.016331
In [114]: idx = pd.IndexSlice
In [115]: dft2 = dft2.swaplevel(0, 1).sort_index()
In [116]: dft2.loc[idx[:, "2013-01-05"], :]
Out[116]:
A
a 2013-01-05 00:00:00 0.122297
2013-01-05 12:00:00 0.370079
b 2013-01-05 00:00:00 1.422060
2013-01-05 12:00:00 1.016331
New in version 0.25.0.
Slicing with string indexing also honors UTC offset.
In [117]: df = pd.DataFrame([0], index=pd.DatetimeIndex(["2019-01-01"], tz="US/Pacific"))
In [118]: df
Out[118]:
0
2019-01-01 00:00:00-08:00 0
In [119]: df["2019-01-01 12:00:00+04:00":"2019-01-01 13:00:00+04:00"]
Out[119]:
0
2019-01-01 00:00:00-08:00 0
Slice vs. exact match#
The same string used as an indexing parameter can be treated either as a slice or as an exact match depending on the resolution of the index. If the string is less accurate than the index, it will be treated as a slice, otherwise as an exact match.
Consider a Series object with a minute resolution index:
In [120]: series_minute = pd.Series(
.....: [1, 2, 3],
.....: pd.DatetimeIndex(
.....: ["2011-12-31 23:59:00", "2012-01-01 00:00:00", "2012-01-01 00:02:00"]
.....: ),
.....: )
.....:
In [121]: series_minute.index.resolution
Out[121]: 'minute'
A timestamp string less accurate than a minute gives a Series object.
In [122]: series_minute["2011-12-31 23"]
Out[122]:
2011-12-31 23:59:00 1
dtype: int64
A timestamp string with minute resolution (or more accurate) gives a scalar instead, i.e. it is not cast to a slice.
In [123]: series_minute["2011-12-31 23:59"]
Out[123]: 1
In [124]: series_minute["2011-12-31 23:59:00"]
Out[124]: 1
If the index resolution is second, then a minute-accurate timestamp gives a
Series.
In [125]: series_second = pd.Series(
.....: [1, 2, 3],
.....: pd.DatetimeIndex(
.....: ["2011-12-31 23:59:59", "2012-01-01 00:00:00", "2012-01-01 00:00:01"]
.....: ),
.....: )
.....:
In [126]: series_second.index.resolution
Out[126]: 'second'
In [127]: series_second["2011-12-31 23:59"]
Out[127]:
2011-12-31 23:59:59 1
dtype: int64
If the timestamp string is treated as a slice, it can be used to index DataFrame with .loc[] as well.
In [128]: dft_minute = pd.DataFrame(
.....: {"a": [1, 2, 3], "b": [4, 5, 6]}, index=series_minute.index
.....: )
.....:
In [129]: dft_minute.loc["2011-12-31 23"]
Out[129]:
a b
2011-12-31 23:59:00 1 4
Warning
However, if the string is treated as an exact match, the selection in DataFrame’s [] will be column-wise and not row-wise; see Indexing Basics. For example, dft_minute['2011-12-31 23:59'] will raise KeyError as '2011-12-31 23:59' has the same resolution as the index and there is no column with such a name:
To always have unambiguous selection, whether the row is treated as a slice or a single selection, use .loc.
In [130]: dft_minute.loc["2011-12-31 23:59"]
Out[130]:
a 1
b 4
Name: 2011-12-31 23:59:00, dtype: int64
Note also that DatetimeIndex resolution cannot be less precise than day.
In [131]: series_monthly = pd.Series(
.....: [1, 2, 3], pd.DatetimeIndex(["2011-12", "2012-01", "2012-02"])
.....: )
.....:
In [132]: series_monthly.index.resolution
Out[132]: 'day'
In [133]: series_monthly["2011-12"] # returns Series
Out[133]:
2011-12-01 1
dtype: int64
Exact indexing#
As discussed in the previous section, indexing a DatetimeIndex with a partial string depends on the “accuracy” of the period, in other words how specific the interval is in relation to the resolution of the index. In contrast, indexing with Timestamp or datetime objects is exact, because the objects have exact meaning. These also follow the semantics of including both endpoints.
These Timestamp and datetime objects have exact hours, minutes, and seconds, even though they were not explicitly specified (they are 0).
In [134]: dft[datetime.datetime(2013, 1, 1): datetime.datetime(2013, 2, 28)]
Out[134]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-02-27 23:56:00 1.197749
2013-02-27 23:57:00 0.720521
2013-02-27 23:58:00 -0.072718
2013-02-27 23:59:00 -0.681192
2013-02-28 00:00:00 -0.557501
[83521 rows x 1 columns]
With no defaults.
In [135]: dft[
.....: datetime.datetime(2013, 1, 1, 10, 12, 0): datetime.datetime(
.....: 2013, 2, 28, 10, 12, 0
.....: )
.....: ]
.....:
Out[135]:
A
2013-01-01 10:12:00 0.565375
2013-01-01 10:13:00 0.068184
2013-01-01 10:14:00 0.788871
2013-01-01 10:15:00 -0.280343
2013-01-01 10:16:00 0.931536
... ...
2013-02-28 10:08:00 0.148098
2013-02-28 10:09:00 -0.388138
2013-02-28 10:10:00 0.139348
2013-02-28 10:11:00 0.085288
2013-02-28 10:12:00 0.950146
[83521 rows x 1 columns]
Truncating & fancy indexing#
A truncate() convenience function is provided that is similar
to slicing. Note that truncate assumes a 0 value for any unspecified date
component in a DatetimeIndex in contrast to slicing which returns any
partially matching dates:
In [136]: rng2 = pd.date_range("2011-01-01", "2012-01-01", freq="W")
In [137]: ts2 = pd.Series(np.random.randn(len(rng2)), index=rng2)
In [138]: ts2.truncate(before="2011-11", after="2011-12")
Out[138]:
2011-11-06 0.437823
2011-11-13 -0.293083
2011-11-20 -0.059881
2011-11-27 1.252450
Freq: W-SUN, dtype: float64
In [139]: ts2["2011-11":"2011-12"]
Out[139]:
2011-11-06 0.437823
2011-11-13 -0.293083
2011-11-20 -0.059881
2011-11-27 1.252450
2011-12-04 0.046611
2011-12-11 0.059478
2011-12-18 -0.286539
2011-12-25 0.841669
Freq: W-SUN, dtype: float64
Even complicated fancy indexing that breaks the DatetimeIndex frequency
regularity will result in a DatetimeIndex, although frequency is lost:
In [140]: ts2[[0, 2, 6]].index
Out[140]: DatetimeIndex(['2011-01-02', '2011-01-16', '2011-02-13'], dtype='datetime64[ns]', freq=None)
Time/date components#
There are several time/date properties that one can access from Timestamp or a collection of timestamps like a DatetimeIndex.
Property          Description
year              The year of the datetime
month             The month of the datetime
day               The days of the datetime
hour              The hour of the datetime
minute            The minutes of the datetime
second            The seconds of the datetime
microsecond       The microseconds of the datetime
nanosecond        The nanoseconds of the datetime
date              Returns datetime.date (does not contain timezone information)
time              Returns datetime.time (does not contain timezone information)
timetz            Returns datetime.time as local time with timezone information
dayofyear         The ordinal day of year
day_of_year       The ordinal day of year
weekofyear        The week ordinal of the year
week              The week ordinal of the year
dayofweek         The number of the day of the week with Monday=0, Sunday=6
day_of_week       The number of the day of the week with Monday=0, Sunday=6
weekday           The number of the day of the week with Monday=0, Sunday=6
quarter           Quarter of the date: Jan-Mar = 1, Apr-Jun = 2, etc.
days_in_month     The number of days in the month of the datetime
is_month_start    Logical indicating if first day of month (defined by frequency)
is_month_end      Logical indicating if last day of month (defined by frequency)
is_quarter_start  Logical indicating if first day of quarter (defined by frequency)
is_quarter_end    Logical indicating if last day of quarter (defined by frequency)
is_year_start     Logical indicating if first day of year (defined by frequency)
is_year_end       Logical indicating if last day of year (defined by frequency)
is_leap_year      Logical indicating if the date belongs to a leap year
Furthermore, if you have a Series with datetimelike values, then you can
access these properties via the .dt accessor, as detailed in the section
on .dt accessors.
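As a minimal sketch (the Series here is made up for illustration):

s = pd.Series(pd.date_range("2021-03-01", periods=3, freq="D"))

s.dt.year        # the year component of each element
s.dt.day_name()  # the weekday name of each element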
New in version 1.1.0.
You may obtain the year, week and day components of the ISO year from the ISO 8601 standard:
In [141]: idx = pd.date_range(start="2019-12-29", freq="D", periods=4)
In [142]: idx.isocalendar()
Out[142]:
year week day
2019-12-29 2019 52 7
2019-12-30 2020 1 1
2019-12-31 2020 1 2
2020-01-01 2020 1 3
In [143]: idx.to_series().dt.isocalendar()
Out[143]:
year week day
2019-12-29 2019 52 7
2019-12-30 2020 1 1
2019-12-31 2020 1 2
2020-01-01 2020 1 3
DateOffset objects#
In the preceding examples, frequency strings (e.g. 'D') were used to specify
a frequency that defined:
how the date times in DatetimeIndex were spaced when using date_range()
the frequency of a Period or PeriodIndex
These frequency strings map to a DateOffset object and its subclasses. A DateOffset
is similar to a Timedelta that represents a duration of time but follows specific calendar duration rules.
For example, a Timedelta day will always increment datetimes by 24 hours, while a DateOffset day
will increment datetimes to the same time the next day, whether a day represents 23, 24 or 25 hours due to daylight
saving time. However, all DateOffset subclasses that are an hour or smaller
(Hour, Minute, Second, Milli, Micro, Nano) behave like
Timedelta and respect absolute time.
The basic DateOffset acts similarly to dateutil.relativedelta (relativedelta documentation)
in that it shifts a datetime by the specified calendar duration. The
arithmetic operator (+) can be used to perform the shift.
# This particular day contains a daylight saving time transition
In [144]: ts = pd.Timestamp("2016-10-30 00:00:00", tz="Europe/Helsinki")
# Respects absolute time
In [145]: ts + pd.Timedelta(days=1)
Out[145]: Timestamp('2016-10-30 23:00:00+0200', tz='Europe/Helsinki')
# Respects calendar time
In [146]: ts + pd.DateOffset(days=1)
Out[146]: Timestamp('2016-10-31 00:00:00+0200', tz='Europe/Helsinki')
In [147]: friday = pd.Timestamp("2018-01-05")
In [148]: friday.day_name()
Out[148]: 'Friday'
# Add 2 business days (Friday --> Tuesday)
In [149]: two_business_days = 2 * pd.offsets.BDay()
In [150]: friday + two_business_days
Out[150]: Timestamp('2018-01-09 00:00:00')
In [151]: (friday + two_business_days).day_name()
Out[151]: 'Tuesday'
Most DateOffsets have associated frequency strings, or offset aliases, that can be passed
into the freq keyword argument. The available date offsets and associated frequency strings can be found below:
Date Offset                                Frequency String  Description
DateOffset                                 None              Generic offset class, defaults to absolute 24 hours
BDay or BusinessDay                        'B'               business day (weekday)
CDay or CustomBusinessDay                  'C'               custom business day
Week                                       'W'               one week, optionally anchored on a day of the week
WeekOfMonth                                'WOM'             the x-th day of the y-th week of each month
LastWeekOfMonth                            'LWOM'            the x-th day of the last week of each month
MonthEnd                                   'M'               calendar month end
MonthBegin                                 'MS'              calendar month begin
BMonthEnd or BusinessMonthEnd              'BM'              business month end
BMonthBegin or BusinessMonthBegin          'BMS'             business month begin
CBMonthEnd or CustomBusinessMonthEnd       'CBM'             custom business month end
CBMonthBegin or CustomBusinessMonthBegin   'CBMS'            custom business month begin
SemiMonthEnd                               'SM'              15th (or other day_of_month) and calendar month end
SemiMonthBegin                             'SMS'             15th (or other day_of_month) and calendar month begin
QuarterEnd                                 'Q'               calendar quarter end
QuarterBegin                               'QS'              calendar quarter begin
BQuarterEnd                                'BQ'              business quarter end
BQuarterBegin                              'BQS'             business quarter begin
FY5253Quarter                              'REQ'             retail (aka 52-53 week) quarter
YearEnd                                    'A'               calendar year end
YearBegin                                  'AS' or 'YS'      calendar year begin
BYearEnd                                   'BA'              business year end
BYearBegin                                 'BAS'             business year begin
FY5253                                     'RE'              retail (aka 52-53 week) year
Easter                                     None              Easter holiday
BusinessHour                               'BH'              business hour
CustomBusinessHour                         'CBH'             custom business hour
Day                                        'D'               one absolute day
Hour                                       'H'               one hour
Minute                                     'T' or 'min'      one minute
Second                                     'S'               one second
Milli                                      'L' or 'ms'       one millisecond
Micro                                      'U' or 'us'       one microsecond
Nano                                       'N'               one nanosecond
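If you have an alias and want the corresponding offset object, to_offset() performs the mapping. A minimal sketch:

from pandas.tseries.frequencies import to_offset

to_offset("M")        # <MonthEnd>
to_offset("2h20min")  # a combined offset, <140 * Minutes>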
DateOffsets additionally have rollforward() and rollback()
methods for moving a date forward or backward respectively to a valid offset
date relative to the offset. For example, business offsets will roll dates
that land on the weekends (Saturday and Sunday) forward to Monday since
business offsets operate on the weekdays.
In [152]: ts = pd.Timestamp("2018-01-06 00:00:00")
In [153]: ts.day_name()
Out[153]: 'Saturday'
# BusinessHour's valid offset dates are Monday through Friday
In [154]: offset = pd.offsets.BusinessHour(start="09:00")
# Bring the date to the closest offset date (Monday)
In [155]: offset.rollforward(ts)
Out[155]: Timestamp('2018-01-08 09:00:00')
# Date is brought to the closest offset date first and then the hour is added
In [156]: ts + offset
Out[156]: Timestamp('2018-01-08 10:00:00')
These operations preserve time (hour, minute, etc) information by default.
To reset time to midnight, use normalize() before or after applying
the operation (depending on whether you want the time information included
in the operation).
In [157]: ts = pd.Timestamp("2014-01-01 09:00")
In [158]: day = pd.offsets.Day()
In [159]: day + ts
Out[159]: Timestamp('2014-01-02 09:00:00')
In [160]: (day + ts).normalize()
Out[160]: Timestamp('2014-01-02 00:00:00')
In [161]: ts = pd.Timestamp("2014-01-01 22:00")
In [162]: hour = pd.offsets.Hour()
In [163]: hour + ts
Out[163]: Timestamp('2014-01-01 23:00:00')
In [164]: (hour + ts).normalize()
Out[164]: Timestamp('2014-01-01 00:00:00')
In [165]: (hour + pd.Timestamp("2014-01-01 23:30")).normalize()
Out[165]: Timestamp('2014-01-02 00:00:00')
Parametric offsets#
Some of the offsets can be “parameterized” when created to result in different
behaviors. For example, the Week offset for generating weekly data accepts a
weekday parameter which results in the generated dates always lying on a
particular day of the week:
In [166]: d = datetime.datetime(2008, 8, 18, 9, 0)
In [167]: d
Out[167]: datetime.datetime(2008, 8, 18, 9, 0)
In [168]: d + pd.offsets.Week()
Out[168]: Timestamp('2008-08-25 09:00:00')
In [169]: d + pd.offsets.Week(weekday=4)
Out[169]: Timestamp('2008-08-22 09:00:00')
In [170]: (d + pd.offsets.Week(weekday=4)).weekday()
Out[170]: 4
In [171]: d - pd.offsets.Week()
Out[171]: Timestamp('2008-08-11 09:00:00')
The normalize option will be effective for addition and subtraction.
In [172]: d + pd.offsets.Week(normalize=True)
Out[172]: Timestamp('2008-08-25 00:00:00')
In [173]: d - pd.offsets.Week(normalize=True)
Out[173]: Timestamp('2008-08-11 00:00:00')
Another example is parameterizing YearEnd with the specific ending month:
In [174]: d + pd.offsets.YearEnd()
Out[174]: Timestamp('2008-12-31 09:00:00')
In [175]: d + pd.offsets.YearEnd(month=6)
Out[175]: Timestamp('2009-06-30 09:00:00')
Using offsets with Series / DatetimeIndex#
Offsets can be used with either a Series or DatetimeIndex to
apply the offset to each element.
In [176]: rng = pd.date_range("2012-01-01", "2012-01-03")
In [177]: s = pd.Series(rng)
In [178]: rng
Out[178]: DatetimeIndex(['2012-01-01', '2012-01-02', '2012-01-03'], dtype='datetime64[ns]', freq='D')
In [179]: rng + pd.DateOffset(months=2)
Out[179]: DatetimeIndex(['2012-03-01', '2012-03-02', '2012-03-03'], dtype='datetime64[ns]', freq=None)
In [180]: s + pd.DateOffset(months=2)
Out[180]:
0 2012-03-01
1 2012-03-02
2 2012-03-03
dtype: datetime64[ns]
In [181]: s - pd.DateOffset(months=2)
Out[181]:
0 2011-11-01
1 2011-11-02
2 2011-11-03
dtype: datetime64[ns]
If the offset class maps directly to a Timedelta (Day, Hour,
Minute, Second, Micro, Milli, Nano) it can be
used exactly like a Timedelta - see the
Timedelta section for more examples.
In [182]: s - pd.offsets.Day(2)
Out[182]:
0 2011-12-30
1 2011-12-31
2 2012-01-01
dtype: datetime64[ns]
In [183]: td = s - pd.Series(pd.date_range("2011-12-29", "2011-12-31"))
In [184]: td
Out[184]:
0 3 days
1 3 days
2 3 days
dtype: timedelta64[ns]
In [185]: td + pd.offsets.Minute(15)
Out[185]:
0 3 days 00:15:00
1 3 days 00:15:00
2 3 days 00:15:00
dtype: timedelta64[ns]
Note that some offsets (such as BQuarterEnd) do not have a
vectorized implementation. They can still be used but may be
significantly slower and will show a PerformanceWarning:
In [186]: rng + pd.offsets.BQuarterEnd()
Out[186]: DatetimeIndex(['2012-03-30', '2012-03-30', '2012-03-30'], dtype='datetime64[ns]', freq=None)
Custom business days#
The CDay or CustomBusinessDay class provides a parametric
BusinessDay class which can be used to create customized business day
calendars that account for local holidays and local weekend conventions.
As an interesting example, let’s look at Egypt where a Friday-Saturday weekend is observed.
In [187]: weekmask_egypt = "Sun Mon Tue Wed Thu"
# They also observe International Workers' Day so let's
# add that for a couple of years
In [188]: holidays = [
.....: "2012-05-01",
.....: datetime.datetime(2013, 5, 1),
.....: np.datetime64("2014-05-01"),
.....: ]
.....:
In [189]: bday_egypt = pd.offsets.CustomBusinessDay(
.....: holidays=holidays,
.....: weekmask=weekmask_egypt,
.....: )
.....:
In [190]: dt = datetime.datetime(2013, 4, 30)
In [191]: dt + 2 * bday_egypt
Out[191]: Timestamp('2013-05-05 00:00:00')
Let’s map to the weekday names:
In [192]: dts = pd.date_range(dt, periods=5, freq=bday_egypt)
In [193]: pd.Series(dts.weekday, dts).map(pd.Series("Mon Tue Wed Thu Fri Sat Sun".split()))
Out[193]:
2013-04-30 Tue
2013-05-02 Thu
2013-05-05 Sun
2013-05-06 Mon
2013-05-07 Tue
Freq: C, dtype: object
Holiday calendars can be used to provide the list of holidays. See the
holiday calendar section for more information.
In [194]: from pandas.tseries.holiday import USFederalHolidayCalendar
In [195]: bday_us = pd.offsets.CustomBusinessDay(calendar=USFederalHolidayCalendar())
# Friday before MLK Day
In [196]: dt = datetime.datetime(2014, 1, 17)
# Tuesday after MLK Day (Monday is skipped because it's a holiday)
In [197]: dt + bday_us
Out[197]: Timestamp('2014-01-21 00:00:00')
Monthly offsets that respect a certain holiday calendar can be defined
in the usual way.
In [198]: bmth_us = pd.offsets.CustomBusinessMonthBegin(calendar=USFederalHolidayCalendar())
# Skip new years
In [199]: dt = datetime.datetime(2013, 12, 17)
In [200]: dt + bmth_us
Out[200]: Timestamp('2014-01-02 00:00:00')
# Define date index with custom offset
In [201]: pd.date_range(start="20100101", end="20120101", freq=bmth_us)
Out[201]:
DatetimeIndex(['2010-01-04', '2010-02-01', '2010-03-01', '2010-04-01',
'2010-05-03', '2010-06-01', '2010-07-01', '2010-08-02',
'2010-09-01', '2010-10-01', '2010-11-01', '2010-12-01',
'2011-01-03', '2011-02-01', '2011-03-01', '2011-04-01',
'2011-05-02', '2011-06-01', '2011-07-01', '2011-08-01',
'2011-09-01', '2011-10-03', '2011-11-01', '2011-12-01'],
dtype='datetime64[ns]', freq='CBMS')
Note
The frequency string ‘C’ is used to indicate that a CustomBusinessDay
DateOffset is used. It is important to note that since CustomBusinessDay is
a parameterised type, instances of CustomBusinessDay may differ and this is
not detectable from the ‘C’ frequency string. The user therefore needs to
ensure that the ‘C’ frequency string is used consistently within the user’s
application.
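For example (a minimal sketch with made-up weekmasks), two differently parameterised CustomBusinessDay offsets report the same frequency string:

bday_a = pd.offsets.CustomBusinessDay(weekmask="Mon Tue Wed")
bday_b = pd.offsets.CustomBusinessDay(weekmask="Sun Mon Tue Wed Thu")

bday_a.freqstr  # 'C'
bday_b.freqstr  # also 'C', although the two offsets differ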
Business hour#
The BusinessHour class provides a business hour representation on BusinessDay,
allowing you to use specific start and end times.
By default, BusinessHour uses 9:00 - 17:00 as business hours.
Adding BusinessHour will increment a Timestamp by hourly frequency.
If the target Timestamp is outside business hours, it is first moved to the next
business hour and then incremented. If the result exceeds the business hours end,
the remaining hours are added to the next business day.
In [202]: bh = pd.offsets.BusinessHour()
In [203]: bh
Out[203]: <BusinessHour: BH=09:00-17:00>
# 2014-08-01 is Friday
In [204]: pd.Timestamp("2014-08-01 10:00").weekday()
Out[204]: 4
In [205]: pd.Timestamp("2014-08-01 10:00") + bh
Out[205]: Timestamp('2014-08-01 11:00:00')
# Below example is the same as: pd.Timestamp('2014-08-01 09:00') + bh
In [206]: pd.Timestamp("2014-08-01 08:00") + bh
Out[206]: Timestamp('2014-08-01 10:00:00')
# If the result is on the end time, move to the next business day
In [207]: pd.Timestamp("2014-08-01 16:00") + bh
Out[207]: Timestamp('2014-08-04 09:00:00')
# The remainder is added to the next day
In [208]: pd.Timestamp("2014-08-01 16:30") + bh
Out[208]: Timestamp('2014-08-04 09:30:00')
# Adding 2 business hours
In [209]: pd.Timestamp("2014-08-01 10:00") + pd.offsets.BusinessHour(2)
Out[209]: Timestamp('2014-08-01 12:00:00')
# Subtracting 3 business hours
In [210]: pd.Timestamp("2014-08-01 10:00") + pd.offsets.BusinessHour(-3)
Out[210]: Timestamp('2014-07-31 15:00:00')
You can also specify start and end times by keywords. The argument must
be a str with an hour:minute representation or a datetime.time
instance. Specifying seconds, microseconds or nanoseconds as part of the
business hour results in a ValueError.
In [211]: bh = pd.offsets.BusinessHour(start="11:00", end=datetime.time(20, 0))
In [212]: bh
Out[212]: <BusinessHour: BH=11:00-20:00>
In [213]: pd.Timestamp("2014-08-01 13:00") + bh
Out[213]: Timestamp('2014-08-01 14:00:00')
In [214]: pd.Timestamp("2014-08-01 09:00") + bh
Out[214]: Timestamp('2014-08-01 12:00:00')
In [215]: pd.Timestamp("2014-08-01 18:00") + bh
Out[215]: Timestamp('2014-08-01 19:00:00')
Passing a start time later than end represents a midnight business hour.
In this case, business hours exceed midnight and overlap to the next day.
Valid business hours are distinguished by whether they start from a valid BusinessDay.
In [216]: bh = pd.offsets.BusinessHour(start="17:00", end="09:00")
In [217]: bh
Out[217]: <BusinessHour: BH=17:00-09:00>
In [218]: pd.Timestamp("2014-08-01 17:00") + bh
Out[218]: Timestamp('2014-08-01 18:00:00')
In [219]: pd.Timestamp("2014-08-01 23:00") + bh
Out[219]: Timestamp('2014-08-02 00:00:00')
# Although 2014-08-02 is Saturday,
# it is valid because it starts from 08-01 (Friday).
In [220]: pd.Timestamp("2014-08-02 04:00") + bh
Out[220]: Timestamp('2014-08-02 05:00:00')
# Although 2014-08-04 is Monday,
# it is out of business hours because it starts from 08-03 (Sunday).
In [221]: pd.Timestamp("2014-08-04 04:00") + bh
Out[221]: Timestamp('2014-08-04 18:00:00')
Applying BusinessHour.rollforward and rollback to a timestamp outside business hours
results in the next business hour start or the previous day’s end. Different from other
offsets, BusinessHour.rollforward may, by definition, output a result different from apply.
This is because one day’s business hour end is equal to the next day’s business hour start. For example,
under the default business hours (9:00 - 17:00), there is no gap (0 minutes) between 2014-08-01 17:00 and
2014-08-04 09:00.
# This adjusts a Timestamp to business hour edge
In [222]: pd.offsets.BusinessHour().rollback(pd.Timestamp("2014-08-02 15:00"))
Out[222]: Timestamp('2014-08-01 17:00:00')
In [223]: pd.offsets.BusinessHour().rollforward(pd.Timestamp("2014-08-02 15:00"))
Out[223]: Timestamp('2014-08-04 09:00:00')
# It is the same as BusinessHour() + pd.Timestamp('2014-08-01 17:00').
# And it is the same as BusinessHour() + pd.Timestamp('2014-08-04 09:00')
In [224]: pd.offsets.BusinessHour() + pd.Timestamp("2014-08-02 15:00")
Out[224]: Timestamp('2014-08-04 10:00:00')
# BusinessDay results (for reference)
In [225]: pd.offsets.BusinessHour().rollforward(pd.Timestamp("2014-08-02"))
Out[225]: Timestamp('2014-08-04 09:00:00')
# It is the same as BusinessDay() + pd.Timestamp('2014-08-01')
# The result is the same as rollforward because BusinessDay never overlaps.
In [226]: pd.offsets.BusinessHour() + pd.Timestamp("2014-08-02")
Out[226]: Timestamp('2014-08-04 10:00:00')
BusinessHour regards Saturday and Sunday as holidays. To use arbitrary
holidays, you can use CustomBusinessHour offset, as explained in the
following subsection.
Custom business hour#
The CustomBusinessHour is a mixture of BusinessHour and CustomBusinessDay which
allows you to specify arbitrary holidays. CustomBusinessHour works the same
as BusinessHour except that it skips specified custom holidays.
In [227]: from pandas.tseries.holiday import USFederalHolidayCalendar
In [228]: bhour_us = pd.offsets.CustomBusinessHour(calendar=USFederalHolidayCalendar())
# Friday before MLK Day
In [229]: dt = datetime.datetime(2014, 1, 17, 15)
In [230]: dt + bhour_us
Out[230]: Timestamp('2014-01-17 16:00:00')
# Tuesday after MLK Day (Monday is skipped because it's a holiday)
In [231]: dt + bhour_us * 2
Out[231]: Timestamp('2014-01-21 09:00:00')
You can use keyword arguments supported by both BusinessHour and CustomBusinessDay.
In [232]: bhour_mon = pd.offsets.CustomBusinessHour(start="10:00", weekmask="Tue Wed Thu Fri")
# Monday is skipped because it's a holiday, business hour starts from 10:00
In [233]: dt + bhour_mon * 2
Out[233]: Timestamp('2014-01-21 10:00:00')
Offset aliases#
A number of string aliases are given to useful common time series
frequencies. We will refer to these aliases as offset aliases.
Alias     Description
B         business day frequency
C         custom business day frequency
D         calendar day frequency
W         weekly frequency
M         month end frequency
SM        semi-month end frequency (15th and end of month)
BM        business month end frequency
CBM       custom business month end frequency
MS        month start frequency
SMS       semi-month start frequency (1st and 15th)
BMS       business month start frequency
CBMS      custom business month start frequency
Q         quarter end frequency
BQ        business quarter end frequency
QS        quarter start frequency
BQS       business quarter start frequency
A, Y      year end frequency
BA, BY    business year end frequency
AS, YS    year start frequency
BAS, BYS  business year start frequency
BH        business hour frequency
H         hourly frequency
T, min    minutely frequency
S         secondly frequency
L, ms     milliseconds
U, us     microseconds
N         nanoseconds
Note
When using the offset aliases above, it should be noted that functions
such as date_range() and bdate_range() will only return
timestamps that are in the interval defined by start_date and
end_date. If the start_date does not correspond to the frequency,
the returned timestamps will start at the next valid timestamp; likewise
for end_date, the returned timestamps will stop at the previous valid
timestamp.
For example, for the offset MS, if the start_date is not the first
of the month, the returned timestamps will start with the first day of the
next month. If end_date is not the first day of a month, the last
returned timestamp will be the first day of the month containing end_date.
In [234]: dates_lst_1 = pd.date_range("2020-01-06", "2020-04-03", freq="MS")
In [235]: dates_lst_1
Out[235]: DatetimeIndex(['2020-02-01', '2020-03-01', '2020-04-01'], dtype='datetime64[ns]', freq='MS')
In [236]: dates_lst_2 = pd.date_range("2020-01-01", "2020-04-01", freq="MS")
In [237]: dates_lst_2
Out[237]: DatetimeIndex(['2020-01-01', '2020-02-01', '2020-03-01', '2020-04-01'], dtype='datetime64[ns]', freq='MS')
We can see in the above example that date_range() and
bdate_range() will only return the valid timestamps between the
start_date and end_date. If these are not valid timestamps for the
given frequency, the range will roll to the next valid timestamp for
start_date (and, respectively, the previous one for end_date).
Combining aliases#
As we have seen previously, the alias and the offset instance are fungible in
most functions:
In [238]: pd.date_range(start, periods=5, freq="B")
Out[238]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07'],
dtype='datetime64[ns]', freq='B')
In [239]: pd.date_range(start, periods=5, freq=pd.offsets.BDay())
Out[239]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07'],
dtype='datetime64[ns]', freq='B')
You can combine together day and intraday offsets:
In [240]: pd.date_range(start, periods=10, freq="2h20min")
Out[240]:
DatetimeIndex(['2011-01-01 00:00:00', '2011-01-01 02:20:00',
'2011-01-01 04:40:00', '2011-01-01 07:00:00',
'2011-01-01 09:20:00', '2011-01-01 11:40:00',
'2011-01-01 14:00:00', '2011-01-01 16:20:00',
'2011-01-01 18:40:00', '2011-01-01 21:00:00'],
dtype='datetime64[ns]', freq='140T')
In [241]: pd.date_range(start, periods=10, freq="1D10U")
Out[241]:
DatetimeIndex([ '2011-01-01 00:00:00', '2011-01-02 00:00:00.000010',
'2011-01-03 00:00:00.000020', '2011-01-04 00:00:00.000030',
'2011-01-05 00:00:00.000040', '2011-01-06 00:00:00.000050',
'2011-01-07 00:00:00.000060', '2011-01-08 00:00:00.000070',
'2011-01-09 00:00:00.000080', '2011-01-10 00:00:00.000090'],
dtype='datetime64[ns]', freq='86400000010U')
Anchored offsets#
For some frequencies you can specify an anchoring suffix:
Alias        Description
W-SUN        weekly frequency (Sundays). Same as ‘W’
W-MON        weekly frequency (Mondays)
W-TUE        weekly frequency (Tuesdays)
W-WED        weekly frequency (Wednesdays)
W-THU        weekly frequency (Thursdays)
W-FRI        weekly frequency (Fridays)
W-SAT        weekly frequency (Saturdays)
(B)Q(S)-DEC  quarterly frequency, year ends in December. Same as ‘Q’
(B)Q(S)-JAN  quarterly frequency, year ends in January
(B)Q(S)-FEB  quarterly frequency, year ends in February
(B)Q(S)-MAR  quarterly frequency, year ends in March
(B)Q(S)-APR  quarterly frequency, year ends in April
(B)Q(S)-MAY  quarterly frequency, year ends in May
(B)Q(S)-JUN  quarterly frequency, year ends in June
(B)Q(S)-JUL  quarterly frequency, year ends in July
(B)Q(S)-AUG  quarterly frequency, year ends in August
(B)Q(S)-SEP  quarterly frequency, year ends in September
(B)Q(S)-OCT  quarterly frequency, year ends in October
(B)Q(S)-NOV  quarterly frequency, year ends in November
(B)A(S)-DEC  annual frequency, anchored end of December. Same as ‘A’
(B)A(S)-JAN  annual frequency, anchored end of January
(B)A(S)-FEB  annual frequency, anchored end of February
(B)A(S)-MAR  annual frequency, anchored end of March
(B)A(S)-APR  annual frequency, anchored end of April
(B)A(S)-MAY  annual frequency, anchored end of May
(B)A(S)-JUN  annual frequency, anchored end of June
(B)A(S)-JUL  annual frequency, anchored end of July
(B)A(S)-AUG  annual frequency, anchored end of August
(B)A(S)-SEP  annual frequency, anchored end of September
(B)A(S)-OCT  annual frequency, anchored end of October
(B)A(S)-NOV  annual frequency, anchored end of November
These can be used as arguments to date_range, bdate_range, constructors
for DatetimeIndex, as well as various other timeseries-related functions
in pandas.
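For instance, a minimal sketch using two of the anchored aliases above:

# Weekly dates anchored on Wednesdays
pd.date_range("2011-01-01", periods=3, freq="W-WED")

# Quarter ends for a year ending in March
pd.date_range("2011-01-01", periods=3, freq="Q-MAR")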
Anchored offset semantics#
For those offsets that are anchored to the start or end of a specific
frequency (MonthEnd, MonthBegin, Week, etc.), the following
rules apply to rolling forward and backwards.
When n is not 0, if the given date is not on an anchor point, it is snapped to the next (previous)
anchor point, and moved |n| - 1 additional steps forward or backward.
In [242]: pd.Timestamp("2014-01-02") + pd.offsets.MonthBegin(n=1)
Out[242]: Timestamp('2014-02-01 00:00:00')
In [243]: pd.Timestamp("2014-01-02") + pd.offsets.MonthEnd(n=1)
Out[243]: Timestamp('2014-01-31 00:00:00')
In [244]: pd.Timestamp("2014-01-02") - pd.offsets.MonthBegin(n=1)
Out[244]: Timestamp('2014-01-01 00:00:00')
In [245]: pd.Timestamp("2014-01-02") - pd.offsets.MonthEnd(n=1)
Out[245]: Timestamp('2013-12-31 00:00:00')
In [246]: pd.Timestamp("2014-01-02") + pd.offsets.MonthBegin(n=4)
Out[246]: Timestamp('2014-05-01 00:00:00')
In [247]: pd.Timestamp("2014-01-02") - pd.offsets.MonthBegin(n=4)
Out[247]: Timestamp('2013-10-01 00:00:00')
If the given date is on an anchor point, it is moved |n| points forwards
or backwards.
In [248]: pd.Timestamp("2014-01-01") + pd.offsets.MonthBegin(n=1)
Out[248]: Timestamp('2014-02-01 00:00:00')
In [249]: pd.Timestamp("2014-01-31") + pd.offsets.MonthEnd(n=1)
Out[249]: Timestamp('2014-02-28 00:00:00')
In [250]: pd.Timestamp("2014-01-01") - pd.offsets.MonthBegin(n=1)
Out[250]: Timestamp('2013-12-01 00:00:00')
In [251]: pd.Timestamp("2014-01-31") - pd.offsets.MonthEnd(n=1)
Out[251]: Timestamp('2013-12-31 00:00:00')
In [252]: pd.Timestamp("2014-01-01") + pd.offsets.MonthBegin(n=4)
Out[252]: Timestamp('2014-05-01 00:00:00')
In [253]: pd.Timestamp("2014-01-31") - pd.offsets.MonthBegin(n=4)
Out[253]: Timestamp('2013-10-01 00:00:00')
For the case when n=0, the date is not moved if on an anchor point, otherwise
it is rolled forward to the next anchor point.
In [254]: pd.Timestamp("2014-01-02") + pd.offsets.MonthBegin(n=0)
Out[254]: Timestamp('2014-02-01 00:00:00')
In [255]: pd.Timestamp("2014-01-02") + pd.offsets.MonthEnd(n=0)
Out[255]: Timestamp('2014-01-31 00:00:00')
In [256]: pd.Timestamp("2014-01-01") + pd.offsets.MonthBegin(n=0)
Out[256]: Timestamp('2014-01-01 00:00:00')
In [257]: pd.Timestamp("2014-01-31") + pd.offsets.MonthEnd(n=0)
Out[257]: Timestamp('2014-01-31 00:00:00')
Holidays / holiday calendars#
Holidays and calendars provide a simple way to define holiday rules to be used
with CustomBusinessDay or in other analysis that requires a predefined
set of holidays. The AbstractHolidayCalendar class provides all the necessary
methods to return a list of holidays; only rules need to be defined
in a specific holiday calendar class. Furthermore, the start_date and end_date
class attributes determine over what date range holidays are generated. These
should be overwritten on the AbstractHolidayCalendar class to have the range
apply to all calendar subclasses. USFederalHolidayCalendar is the
only calendar that exists and primarily serves as an example for developing
other calendars.
For holidays that occur on fixed dates (e.g., US Memorial Day or July 4th) an
observance rule determines when that holiday is observed if it falls on a weekend
or some other non-observed day. Defined observance rules are:
Rule                    Description
nearest_workday         move Saturday to Friday and Sunday to Monday
sunday_to_monday        move Sunday to following Monday
next_monday_or_tuesday  move Saturday to Monday and Sunday/Monday to Tuesday
previous_friday         move Saturday and Sunday to previous Friday
next_monday             move Saturday and Sunday to following Monday
An example of how holidays and holiday calendars are defined:
In [258]: from pandas.tseries.holiday import (
.....: Holiday,
.....: USMemorialDay,
.....: AbstractHolidayCalendar,
.....: nearest_workday,
.....: MO,
.....: )
.....:
In [259]: class ExampleCalendar(AbstractHolidayCalendar):
.....: rules = [
.....: USMemorialDay,
.....: Holiday("July 4th", month=7, day=4, observance=nearest_workday),
.....: Holiday(
.....: "Columbus Day",
.....: month=10,
.....: day=1,
.....: offset=pd.DateOffset(weekday=MO(2)),
.....: ),
.....: ]
.....:
In [260]: cal = ExampleCalendar()
In [261]: cal.holidays(datetime.datetime(2012, 1, 1), datetime.datetime(2012, 12, 31))
Out[261]: DatetimeIndex(['2012-05-28', '2012-07-04', '2012-10-08'], dtype='datetime64[ns]', freq=None)
Hint
weekday=MO(2) is the same as 2 * Week(weekday=2)
Using this calendar, creating an index or doing offset arithmetic skips weekends
and holidays (i.e., Memorial Day/July 4th). For example, the below defines
a custom business day offset using the ExampleCalendar. Like any other offset,
it can be used to create a DatetimeIndex or added to datetime
or Timestamp objects.
In [262]: pd.date_range(
.....: start="7/1/2012", end="7/10/2012", freq=pd.offsets.CDay(calendar=cal)
.....: ).to_pydatetime()
.....:
Out[262]:
array([datetime.datetime(2012, 7, 2, 0, 0),
datetime.datetime(2012, 7, 3, 0, 0),
datetime.datetime(2012, 7, 5, 0, 0),
datetime.datetime(2012, 7, 6, 0, 0),
datetime.datetime(2012, 7, 9, 0, 0),
datetime.datetime(2012, 7, 10, 0, 0)], dtype=object)
In [263]: offset = pd.offsets.CustomBusinessDay(calendar=cal)
In [264]: datetime.datetime(2012, 5, 25) + offset
Out[264]: Timestamp('2012-05-29 00:00:00')
In [265]: datetime.datetime(2012, 7, 3) + offset
Out[265]: Timestamp('2012-07-05 00:00:00')
In [266]: datetime.datetime(2012, 7, 3) + 2 * offset
Out[266]: Timestamp('2012-07-06 00:00:00')
In [267]: datetime.datetime(2012, 7, 6) + offset
Out[267]: Timestamp('2012-07-09 00:00:00')
Ranges are defined by the start_date and end_date class attributes
of AbstractHolidayCalendar. The defaults are shown below.
In [268]: AbstractHolidayCalendar.start_date
Out[268]: Timestamp('1970-01-01 00:00:00')
In [269]: AbstractHolidayCalendar.end_date
Out[269]: Timestamp('2200-12-31 00:00:00')
These dates can be overwritten by setting the attributes as
datetime/Timestamp/string.
In [270]: AbstractHolidayCalendar.start_date = datetime.datetime(2012, 1, 1)
In [271]: AbstractHolidayCalendar.end_date = datetime.datetime(2012, 12, 31)
In [272]: cal.holidays()
Out[272]: DatetimeIndex(['2012-05-28', '2012-07-04', '2012-10-08'], dtype='datetime64[ns]', freq=None)
Every calendar class is accessible by name using the get_calendar function
which returns a holiday class instance. Any imported calendar class will
automatically be available by this function. Also, HolidayCalendarFactory
provides an easy interface to create calendars that are combinations of calendars
or calendars with additional rules.
In [273]: from pandas.tseries.holiday import get_calendar, HolidayCalendarFactory, USLaborDay
In [274]: cal = get_calendar("ExampleCalendar")
In [275]: cal.rules
Out[275]:
[Holiday: Memorial Day (month=5, day=31, offset=<DateOffset: weekday=MO(-1)>),
Holiday: July 4th (month=7, day=4, observance=<function nearest_workday at 0x7f1e67138ee0>),
Holiday: Columbus Day (month=10, day=1, offset=<DateOffset: weekday=MO(+2)>)]
In [276]: new_cal = HolidayCalendarFactory("NewExampleCalendar", cal, USLaborDay)
In [277]: new_cal.rules
Out[277]:
[Holiday: Labor Day (month=9, day=1, offset=<DateOffset: weekday=MO(+1)>),
Holiday: Memorial Day (month=5, day=31, offset=<DateOffset: weekday=MO(-1)>),
Holiday: July 4th (month=7, day=4, observance=<function nearest_workday at 0x7f1e67138ee0>),
Holiday: Columbus Day (month=10, day=1, offset=<DateOffset: weekday=MO(+2)>)]
Time Series-related instance methods#
Shifting / lagging#
One may want to shift or lag the values in a time series back and forward in
time. The method for this is shift(), which is available on all of
the pandas objects.
In [278]: ts = pd.Series(range(len(rng)), index=rng)
In [279]: ts = ts[:5]
In [280]: ts.shift(1)
Out[280]:
2012-01-01 NaN
2012-01-02 0.0
2012-01-03 1.0
Freq: D, dtype: float64
The shift method accepts a freq argument which can be a
DateOffset class or other timedelta-like object or an
offset alias.
When freq is specified, shift method changes all the dates in the index
rather than changing the alignment of the data and the index:
In [281]: ts.shift(5, freq="D")
Out[281]:
2012-01-06 0
2012-01-07 1
2012-01-08 2
Freq: D, dtype: int64
In [282]: ts.shift(5, freq=pd.offsets.BDay())
Out[282]:
2012-01-06 0
2012-01-09 1
2012-01-10 2
dtype: int64
In [283]: ts.shift(5, freq="BM")
Out[283]:
2012-05-31 0
2012-05-31 1
2012-05-31 2
dtype: int64
Note that when freq is specified, the leading entry is no longer NaN
because the data is not being realigned.
Frequency conversion#
The primary function for changing frequencies is the asfreq()
method. For a DatetimeIndex, this is basically just a thin, but convenient
wrapper around reindex() which generates a date_range and
calls reindex.
In [284]: dr = pd.date_range("1/1/2010", periods=3, freq=3 * pd.offsets.BDay())
In [285]: ts = pd.Series(np.random.randn(3), index=dr)
In [286]: ts
Out[286]:
2010-01-01 1.494522
2010-01-06 -0.778425
2010-01-11 -0.253355
Freq: 3B, dtype: float64
In [287]: ts.asfreq(pd.offsets.BDay())
Out[287]:
2010-01-01 1.494522
2010-01-04 NaN
2010-01-05 NaN
2010-01-06 -0.778425
2010-01-07 NaN
2010-01-08 NaN
2010-01-11 -0.253355
Freq: B, dtype: float64
asfreq provides a further convenience so you can specify an interpolation
method for any gaps that may appear after the frequency conversion.
In [288]: ts.asfreq(pd.offsets.BDay(), method="pad")
Out[288]:
2010-01-01 1.494522
2010-01-04 1.494522
2010-01-05 1.494522
2010-01-06 -0.778425
2010-01-07 -0.778425
2010-01-08 -0.778425
2010-01-11 -0.253355
Freq: B, dtype: float64
Filling forward / backward#
Related to asfreq and reindex is fillna(), which is
documented in the missing data section.
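For instance, a rough equivalent of the asfreq(..., method="pad") example above is to convert the frequency first and then forward fill the gaps explicitly (a sketch):
# asfreq followed by an explicit forward fill
ts.asfreq(pd.offsets.BDay()).fillna(method="ffill")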
Converting to Python datetimes#
DatetimeIndex can be converted to an array of Python native
datetime.datetime objects using the to_pydatetime method.
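For example (a minimal sketch):
# convert a DatetimeIndex to an object array of native datetime.datetime
pd.date_range("2012-01-01", periods=2, freq="D").to_pydatetime()
# array([datetime.datetime(2012, 1, 1, 0, 0), datetime.datetime(2012, 1, 2, 0, 0)], dtype=object)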
Resampling#
pandas has a simple, powerful, and efficient functionality for performing
resampling operations during frequency conversion (e.g., converting secondly
data into 5-minutely data). This is extremely common in, but not limited to,
financial applications.
resample() is a time-based groupby, followed by a reduction method
on each of its groups. See some cookbook examples for
some advanced strategies.
The resample() method can be used directly from DataFrameGroupBy objects,
see the groupby docs.
Basics#
In [289]: rng = pd.date_range("1/1/2012", periods=100, freq="S")
In [290]: ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)
In [291]: ts.resample("5Min").sum()
Out[291]:
2012-01-01 25103
Freq: 5T, dtype: int64
The resample function is very flexible and allows you to specify many
different parameters to control the frequency conversion and resampling
operation.
Any function available via dispatching is available as
a method of the returned object, including sum, mean, std, sem,
max, min, median, first, last, ohlc:
In [292]: ts.resample("5Min").mean()
Out[292]:
2012-01-01 251.03
Freq: 5T, dtype: float64
In [293]: ts.resample("5Min").ohlc()
Out[293]:
open high low close
2012-01-01 308 460 9 205
In [294]: ts.resample("5Min").max()
Out[294]:
2012-01-01 460
Freq: 5T, dtype: int64
For downsampling, closed can be set to ‘left’ or ‘right’ to specify which
end of the interval is closed:
In [295]: ts.resample("5Min", closed="right").mean()
Out[295]:
2011-12-31 23:55:00 308.000000
2012-01-01 00:00:00 250.454545
Freq: 5T, dtype: float64
In [296]: ts.resample("5Min", closed="left").mean()
Out[296]:
2012-01-01 251.03
Freq: 5T, dtype: float64
Parameters like label are used to manipulate the resulting labels.
label specifies whether the result is labeled with the beginning or
the end of the interval.
In [297]: ts.resample("5Min").mean() # by default label='left'
Out[297]:
2012-01-01 251.03
Freq: 5T, dtype: float64
In [298]: ts.resample("5Min", label="left").mean()
Out[298]:
2012-01-01 251.03
Freq: 5T, dtype: float64
Warning
The default values for label and closed are ‘left’ for all
frequency offsets except for ‘M’, ‘A’, ‘Q’, ‘BM’, ‘BA’, ‘BQ’, and ‘W’,
which all have a default of ‘right’.
This might unintentionally lead to looking ahead, where the value for a later
time is pulled back to a previous time, as in the following example with
the BusinessDay frequency:
In [299]: s = pd.date_range("2000-01-01", "2000-01-05").to_series()
In [300]: s.iloc[2] = pd.NaT
In [301]: s.dt.day_name()
Out[301]:
2000-01-01 Saturday
2000-01-02 Sunday
2000-01-03 NaN
2000-01-04 Tuesday
2000-01-05 Wednesday
Freq: D, dtype: object
# default: label='left', closed='left'
In [302]: s.resample("B").last().dt.day_name()
Out[302]:
1999-12-31 Sunday
2000-01-03 NaN
2000-01-04 Tuesday
2000-01-05 Wednesday
Freq: B, dtype: object
Notice how the value for Sunday got pulled back to the previous Friday.
To get the behavior where the value for Sunday is pushed to Monday, use
instead
In [303]: s.resample("B", label="right", closed="right").last().dt.day_name()
Out[303]:
2000-01-03 Sunday
2000-01-04 Tuesday
2000-01-05 Wednesday
Freq: B, dtype: object
The axis parameter can be set to 0 or 1 and allows you to resample the
specified axis for a DataFrame.
kind can be set to ‘timestamp’ or ‘period’ to convert the resulting index
to/from timestamp and time span representations. By default resample
retains the input representation.
convention can be set to ‘start’ or ‘end’ when resampling period data
(detail below). It specifies how low frequency periods are converted to higher
frequency periods.
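As a quick sketch of kind, using a small throwaway series (names assumed for illustration):
# resample daily data to monthly, returning a PeriodIndex instead of a DatetimeIndex
ts2 = pd.Series(range(4), index=pd.date_range("2012-01-01", periods=4, freq="D"))
ts2.resample("M", kind="period").sum()
# 2012-01    6
# Freq: M, dtype: int64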
Upsampling#
For upsampling, you can specify a way to upsample and the limit parameter to interpolate over the gaps that are created:
# from secondly to every 250 milliseconds
In [304]: ts[:2].resample("250L").asfreq()
Out[304]:
2012-01-01 00:00:00.000 308.0
2012-01-01 00:00:00.250 NaN
2012-01-01 00:00:00.500 NaN
2012-01-01 00:00:00.750 NaN
2012-01-01 00:00:01.000 204.0
Freq: 250L, dtype: float64
In [305]: ts[:2].resample("250L").ffill()
Out[305]:
2012-01-01 00:00:00.000 308
2012-01-01 00:00:00.250 308
2012-01-01 00:00:00.500 308
2012-01-01 00:00:00.750 308
2012-01-01 00:00:01.000 204
Freq: 250L, dtype: int64
In [306]: ts[:2].resample("250L").ffill(limit=2)
Out[306]:
2012-01-01 00:00:00.000 308.0
2012-01-01 00:00:00.250 308.0
2012-01-01 00:00:00.500 308.0
2012-01-01 00:00:00.750 NaN
2012-01-01 00:00:01.000 204.0
Freq: 250L, dtype: float64
Sparse resampling#
Sparse time series are those with far fewer points relative
to the span of time you are resampling over. Naively upsampling a sparse
series can potentially generate lots of intermediate values. When you don’t want
to use a method to fill these values, e.g. fill_method is None, then
intermediate values will be filled with NaN.
Since resample is a time-based groupby, the following is a method to efficiently
resample only the groups that are not all NaN.
In [307]: rng = pd.date_range("2014-1-1", periods=100, freq="D") + pd.Timedelta("1s")
In [308]: ts = pd.Series(range(100), index=rng)
If we want to resample to the full range of the series:
In [309]: ts.resample("3T").sum()
Out[309]:
2014-01-01 00:00:00 0
2014-01-01 00:03:00 0
2014-01-01 00:06:00 0
2014-01-01 00:09:00 0
2014-01-01 00:12:00 0
..
2014-04-09 23:48:00 0
2014-04-09 23:51:00 0
2014-04-09 23:54:00 0
2014-04-09 23:57:00 0
2014-04-10 00:00:00 99
Freq: 3T, Length: 47521, dtype: int64
We can instead only resample those groups where we have points as follows:
In [310]: from functools import partial
In [311]: from pandas.tseries.frequencies import to_offset
In [312]: def round(t, freq):
.....: freq = to_offset(freq)
.....: return pd.Timestamp((t.value // freq.delta.value) * freq.delta.value)
.....:
In [313]: ts.groupby(partial(round, freq="3T")).sum()
Out[313]:
2014-01-01 0
2014-01-02 1
2014-01-03 2
2014-01-04 3
2014-01-05 4
..
2014-04-06 95
2014-04-07 96
2014-04-08 97
2014-04-09 98
2014-04-10 99
Length: 100, dtype: int64
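On recent pandas versions the same grouping key can be produced more idiomatically with DatetimeIndex.floor, which should match the custom round helper above for a fixed frequency like "3T" (a sketch):
# floor each timestamp to the enclosing 3-minute mark and group on that
ts.groupby(ts.index.floor("3T")).sum()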
Aggregation#
Similar to the aggregating API, groupby API, and the window API,
a Resampler can be selectively resampled.
Resampling a DataFrame, the default will be to act on all columns with the same function.
In [314]: df = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: index=pd.date_range("1/1/2012", freq="S", periods=1000),
.....: columns=["A", "B", "C"],
.....: )
.....:
In [315]: r = df.resample("3T")
In [316]: r.mean()
Out[316]:
A B C
2012-01-01 00:00:00 -0.033823 -0.121514 -0.081447
2012-01-01 00:03:00 0.056909 0.146731 -0.024320
2012-01-01 00:06:00 -0.058837 0.047046 -0.052021
2012-01-01 00:09:00 0.063123 -0.026158 -0.066533
2012-01-01 00:12:00 0.186340 -0.003144 0.074752
2012-01-01 00:15:00 -0.085954 -0.016287 -0.050046
We can select a specific column or columns using standard getitem.
In [317]: r["A"].mean()
Out[317]:
2012-01-01 00:00:00 -0.033823
2012-01-01 00:03:00 0.056909
2012-01-01 00:06:00 -0.058837
2012-01-01 00:09:00 0.063123
2012-01-01 00:12:00 0.186340
2012-01-01 00:15:00 -0.085954
Freq: 3T, Name: A, dtype: float64
In [318]: r[["A", "B"]].mean()
Out[318]:
A B
2012-01-01 00:00:00 -0.033823 -0.121514
2012-01-01 00:03:00 0.056909 0.146731
2012-01-01 00:06:00 -0.058837 0.047046
2012-01-01 00:09:00 0.063123 -0.026158
2012-01-01 00:12:00 0.186340 -0.003144
2012-01-01 00:15:00 -0.085954 -0.016287
You can pass a list or dict of functions to do aggregation with, outputting a DataFrame:
In [319]: r["A"].agg([np.sum, np.mean, np.std])
Out[319]:
sum mean std
2012-01-01 00:00:00 -6.088060 -0.033823 1.043263
2012-01-01 00:03:00 10.243678 0.056909 1.058534
2012-01-01 00:06:00 -10.590584 -0.058837 0.949264
2012-01-01 00:09:00 11.362228 0.063123 1.028096
2012-01-01 00:12:00 33.541257 0.186340 0.884586
2012-01-01 00:15:00 -8.595393 -0.085954 1.035476
On a resampled DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:
In [320]: r.agg([np.sum, np.mean])
Out[320]:
A ... C
sum mean ... sum mean
2012-01-01 00:00:00 -6.088060 -0.033823 ... -14.660515 -0.081447
2012-01-01 00:03:00 10.243678 0.056909 ... -4.377642 -0.024320
2012-01-01 00:06:00 -10.590584 -0.058837 ... -9.363825 -0.052021
2012-01-01 00:09:00 11.362228 0.063123 ... -11.975895 -0.066533
2012-01-01 00:12:00 33.541257 0.186340 ... 13.455299 0.074752
2012-01-01 00:15:00 -8.595393 -0.085954 ... -5.004580 -0.050046
[6 rows x 6 columns]
By passing a dict to aggregate you can apply a different aggregation to the
columns of a DataFrame:
In [321]: r.agg({"A": np.sum, "B": lambda x: np.std(x, ddof=1)})
Out[321]:
A B
2012-01-01 00:00:00 -6.088060 1.001294
2012-01-01 00:03:00 10.243678 1.074597
2012-01-01 00:06:00 -10.590584 0.987309
2012-01-01 00:09:00 11.362228 0.944953
2012-01-01 00:12:00 33.541257 1.095025
2012-01-01 00:15:00 -8.595393 1.035312
The function names can also be strings. In order for a string to be valid it
must be implemented on the resampled object:
In [322]: r.agg({"A": "sum", "B": "std"})
Out[322]:
A B
2012-01-01 00:00:00 -6.088060 1.001294
2012-01-01 00:03:00 10.243678 1.074597
2012-01-01 00:06:00 -10.590584 0.987309
2012-01-01 00:09:00 11.362228 0.944953
2012-01-01 00:12:00 33.541257 1.095025
2012-01-01 00:15:00 -8.595393 1.035312
Furthermore, you can also specify multiple aggregation functions for each column separately.
In [323]: r.agg({"A": ["sum", "std"], "B": ["mean", "std"]})
Out[323]:
A B
sum std mean std
2012-01-01 00:00:00 -6.088060 1.043263 -0.121514 1.001294
2012-01-01 00:03:00 10.243678 1.058534 0.146731 1.074597
2012-01-01 00:06:00 -10.590584 0.949264 0.047046 0.987309
2012-01-01 00:09:00 11.362228 1.028096 -0.026158 0.944953
2012-01-01 00:12:00 33.541257 0.884586 -0.003144 1.095025
2012-01-01 00:15:00 -8.595393 1.035476 -0.016287 1.035312
If a DataFrame does not have a datetimelike index, but instead you want
to resample based on a datetimelike column in the frame, it can be passed to the
on keyword.
In [324]: df = pd.DataFrame(
.....: {"date": pd.date_range("2015-01-01", freq="W", periods=5), "a": np.arange(5)},
.....: index=pd.MultiIndex.from_arrays(
.....: [[1, 2, 3, 4, 5], pd.date_range("2015-01-01", freq="W", periods=5)],
.....: names=["v", "d"],
.....: ),
.....: )
.....:
In [325]: df
Out[325]:
date a
v d
1 2015-01-04 2015-01-04 0
2 2015-01-11 2015-01-11 1
3 2015-01-18 2015-01-18 2
4 2015-01-25 2015-01-25 3
5 2015-02-01 2015-02-01 4
In [326]: df.resample("M", on="date")[["a"]].sum()
Out[326]:
a
date
2015-01-31 6
2015-02-28 4
Similarly, if you instead want to resample by a datetimelike
level of MultiIndex, its name or location can be passed to the
level keyword.
In [327]: df.resample("M", level="d")[["a"]].sum()
Out[327]:
a
d
2015-01-31 6
2015-02-28 4
Iterating through groups#
With the Resampler object in hand, iterating through the grouped data is very
natural and functions similarly to itertools.groupby():
In [328]: small = pd.Series(
.....: range(6),
.....: index=pd.to_datetime(
.....: [
.....: "2017-01-01T00:00:00",
.....: "2017-01-01T00:30:00",
.....: "2017-01-01T00:31:00",
.....: "2017-01-01T01:00:00",
.....: "2017-01-01T03:00:00",
.....: "2017-01-01T03:05:00",
.....: ]
.....: ),
.....: )
.....:
In [329]: resampled = small.resample("H")
In [330]: for name, group in resampled:
.....: print("Group: ", name)
.....: print("-" * 27)
.....: print(group, end="\n\n")
.....:
Group: 2017-01-01 00:00:00
---------------------------
2017-01-01 00:00:00 0
2017-01-01 00:30:00 1
2017-01-01 00:31:00 2
dtype: int64
Group: 2017-01-01 01:00:00
---------------------------
2017-01-01 01:00:00 3
dtype: int64
Group: 2017-01-01 02:00:00
---------------------------
Series([], dtype: int64)
Group: 2017-01-01 03:00:00
---------------------------
2017-01-01 03:00:00 4
2017-01-01 03:05:00 5
dtype: int64
See Iterating through groups or Resampler.__iter__ for more.
Use origin or offset to adjust the start of the bins#
New in version 1.1.0.
The bins of the grouping are adjusted based on the beginning of the day of the time series starting point. This works well with frequencies that are multiples of a day (like 30D) or that divide a day evenly (like 90s or 1min). This can create inconsistencies with some frequencies that do not meet this criterion. To change this behavior you can specify a fixed Timestamp with the argument origin.
For example:
In [331]: start, end = "2000-10-01 23:30:00", "2000-10-02 00:30:00"
In [332]: middle = "2000-10-02 00:00:00"
In [333]: rng = pd.date_range(start, end, freq="7min")
In [334]: ts = pd.Series(np.arange(len(rng)) * 3, index=rng)
In [335]: ts
Out[335]:
2000-10-01 23:30:00 0
2000-10-01 23:37:00 3
2000-10-01 23:44:00 6
2000-10-01 23:51:00 9
2000-10-01 23:58:00 12
2000-10-02 00:05:00 15
2000-10-02 00:12:00 18
2000-10-02 00:19:00 21
2000-10-02 00:26:00 24
Freq: 7T, dtype: int64
Here we can see that, when using origin with its default value ('start_day'), the results after '2000-10-02 00:00:00' differ depending on the start of the time series:
In [336]: ts.resample("17min", origin="start_day").sum()
Out[336]:
2000-10-01 23:14:00 0
2000-10-01 23:31:00 9
2000-10-01 23:48:00 21
2000-10-02 00:05:00 54
2000-10-02 00:22:00 24
Freq: 17T, dtype: int64
In [337]: ts[middle:end].resample("17min", origin="start_day").sum()
Out[337]:
2000-10-02 00:00:00 33
2000-10-02 00:17:00 45
Freq: 17T, dtype: int64
Here we can see that, when setting origin to 'epoch', the results after '2000-10-02 00:00:00' are identical regardless of the start of the time series:
In [338]: ts.resample("17min", origin="epoch").sum()
Out[338]:
2000-10-01 23:18:00 0
2000-10-01 23:35:00 18
2000-10-01 23:52:00 27
2000-10-02 00:09:00 39
2000-10-02 00:26:00 24
Freq: 17T, dtype: int64
In [339]: ts[middle:end].resample("17min", origin="epoch").sum()
Out[339]:
2000-10-01 23:52:00 15
2000-10-02 00:09:00 39
2000-10-02 00:26:00 24
Freq: 17T, dtype: int64
If needed you can use a custom timestamp for origin:
In [340]: ts.resample("17min", origin="2001-01-01").sum()
Out[340]:
2000-10-01 23:30:00 9
2000-10-01 23:47:00 21
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
Freq: 17T, dtype: int64
In [341]: ts[middle:end].resample("17min", origin=pd.Timestamp("2001-01-01")).sum()
Out[341]:
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
Freq: 17T, dtype: int64
If needed you can just adjust the bins with an offset Timedelta that would be added to the default origin.
Those two examples are equivalent for this time series:
In [342]: ts.resample("17min", origin="start").sum()
Out[342]:
2000-10-01 23:30:00 9
2000-10-01 23:47:00 21
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
Freq: 17T, dtype: int64
In [343]: ts.resample("17min", offset="23h30min").sum()
Out[343]:
2000-10-01 23:30:00 9
2000-10-01 23:47:00 21
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
Freq: 17T, dtype: int64
Note the use of 'start' for origin in the last example. In that case, origin will be set to the first value of the time series.
Backward resample#
New in version 1.3.0.
Instead of adjusting the beginning of bins, sometimes we need to fix the end of the bins to make a backward resample with a given freq. The backward resample sets closed to 'right' by default since the last value should be considered as the edge point for the last bin.
We can set origin to 'end'. The value for a specific Timestamp index stands for the resample result from the current Timestamp minus freq to the current Timestamp with a right close.
In [344]: ts.resample('17min', origin='end').sum()
Out[344]:
2000-10-01 23:35:00 0
2000-10-01 23:52:00 18
2000-10-02 00:09:00 27
2000-10-02 00:26:00 63
Freq: 17T, dtype: int64
In contrast with the 'start_day' option, an 'end_day' option is also supported. This sets the origin to the ceiling midnight of the largest Timestamp.
In [345]: ts.resample('17min', origin='end_day').sum()
Out[345]:
2000-10-01 23:38:00 3
2000-10-01 23:55:00 15
2000-10-02 00:12:00 45
2000-10-02 00:29:00 45
Freq: 17T, dtype: int64
The above result uses 2000-10-02 00:29:00 as the last bin’s right edge, as shown by the following computation.
In [346]: ceil_mid = rng.max().ceil('D')
In [347]: freq = pd.offsets.Minute(17)
In [348]: bin_res = ceil_mid - freq * ((ceil_mid - rng.max()) // freq)
In [349]: bin_res
Out[349]: Timestamp('2000-10-02 00:29:00')
Time span representation#
Regular intervals of time are represented by Period objects in pandas while
sequences of Period objects are collected in a PeriodIndex, which can
be created with the convenience function period_range.
Period#
A Period represents a span of time (e.g., a day, a month, a quarter, etc.).
You can specify the span via the freq keyword using a frequency alias, as below.
Because freq represents a span of the Period, it cannot be negative (e.g. “-3D”).
In [350]: pd.Period("2012", freq="A-DEC")
Out[350]: Period('2012', 'A-DEC')
In [351]: pd.Period("2012-1-1", freq="D")
Out[351]: Period('2012-01-01', 'D')
In [352]: pd.Period("2012-1-1 19:00", freq="H")
Out[352]: Period('2012-01-01 19:00', 'H')
In [353]: pd.Period("2012-1-1 19:00", freq="5H")
Out[353]: Period('2012-01-01 19:00', '5H')
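Passing a negative span raises, roughly as follows (a sketch of the error transcript):
In [1]: pd.Period("2012", freq="-3D")
Traceback
...
ValueError: Frequency must be positive, because it represents span: -3D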
Adding and subtracting integers shifts a period by its own
frequency. Arithmetic is not allowed between Periods with different freq (span).
In [354]: p = pd.Period("2012", freq="A-DEC")
In [355]: p + 1
Out[355]: Period('2013', 'A-DEC')
In [356]: p - 3
Out[356]: Period('2009', 'A-DEC')
In [357]: p = pd.Period("2012-01", freq="2M")
In [358]: p + 2
Out[358]: Period('2012-05', '2M')
In [359]: p - 1
Out[359]: Period('2011-11', '2M')
In [360]: p == pd.Period("2012-01", freq="3M")
Out[360]: False
If the Period freq is daily or higher (D, H, T, S, L, U, N), offsets and timedelta-like values can be added if the result can have the same freq. Otherwise, ValueError will be raised.
In [361]: p = pd.Period("2014-07-01 09:00", freq="H")
In [362]: p + pd.offsets.Hour(2)
Out[362]: Period('2014-07-01 11:00', 'H')
In [363]: p + datetime.timedelta(minutes=120)
Out[363]: Period('2014-07-01 11:00', 'H')
In [364]: p + np.timedelta64(7200, "s")
Out[364]: Period('2014-07-01 11:00', 'H')
In [1]: p + pd.offsets.Minute(5)
Traceback
...
ValueError: Input has different freq from Period(freq=H)
If Period has other frequencies, only the same offsets can be added. Otherwise, ValueError will be raised.
In [365]: p = pd.Period("2014-07", freq="M")
In [366]: p + pd.offsets.MonthEnd(3)
Out[366]: Period('2014-10', 'M')
In [1]: p + pd.offsets.MonthBegin(3)
Traceback
...
ValueError: Input has different freq from Period(freq=M)
Taking the difference of Period instances with the same frequency will
return the number of frequency units between them:
In [367]: pd.Period("2012", freq="A-DEC") - pd.Period("2002", freq="A-DEC")
Out[367]: <10 * YearEnds: month=12>
PeriodIndex and period_range#
Regular sequences of Period objects can be collected in a PeriodIndex,
which can be constructed using the period_range convenience function:
In [368]: prng = pd.period_range("1/1/2011", "1/1/2012", freq="M")
In [369]: prng
Out[369]:
PeriodIndex(['2011-01', '2011-02', '2011-03', '2011-04', '2011-05', '2011-06',
'2011-07', '2011-08', '2011-09', '2011-10', '2011-11', '2011-12',
'2012-01'],
dtype='period[M]')
The PeriodIndex constructor can also be used directly:
In [370]: pd.PeriodIndex(["2011-1", "2011-2", "2011-3"], freq="M")
Out[370]: PeriodIndex(['2011-01', '2011-02', '2011-03'], dtype='period[M]')
Passing a multiplied frequency outputs a sequence of Period objects
with a multiplied span.
In [371]: pd.period_range(start="2014-01", freq="3M", periods=4)
Out[371]: PeriodIndex(['2014-01', '2014-04', '2014-07', '2014-10'], dtype='period[3M]')
If start or end are Period objects, they will be used as anchor
endpoints for a PeriodIndex with frequency matching that of the
PeriodIndex constructor.
In [372]: pd.period_range(
.....: start=pd.Period("2017Q1", freq="Q"), end=pd.Period("2017Q2", freq="Q"), freq="M"
.....: )
.....:
Out[372]: PeriodIndex(['2017-03', '2017-04', '2017-05', '2017-06'], dtype='period[M]')
Just like DatetimeIndex, a PeriodIndex can also be used to index pandas
objects:
In [373]: ps = pd.Series(np.random.randn(len(prng)), prng)
In [374]: ps
Out[374]:
2011-01 -2.916901
2011-02 0.514474
2011-03 1.346470
2011-04 0.816397
2011-05 2.258648
2011-06 0.494789
2011-07 0.301239
2011-08 0.464776
2011-09 -1.393581
2011-10 0.056780
2011-11 0.197035
2011-12 2.261385
2012-01 -0.329583
Freq: M, dtype: float64
PeriodIndex supports addition and subtraction with the same rule as Period.
In [375]: idx = pd.period_range("2014-07-01 09:00", periods=5, freq="H")
In [376]: idx
Out[376]:
PeriodIndex(['2014-07-01 09:00', '2014-07-01 10:00', '2014-07-01 11:00',
'2014-07-01 12:00', '2014-07-01 13:00'],
dtype='period[H]')
In [377]: idx + pd.offsets.Hour(2)
Out[377]:
PeriodIndex(['2014-07-01 11:00', '2014-07-01 12:00', '2014-07-01 13:00',
'2014-07-01 14:00', '2014-07-01 15:00'],
dtype='period[H]')
In [378]: idx = pd.period_range("2014-07", periods=5, freq="M")
In [379]: idx
Out[379]: PeriodIndex(['2014-07', '2014-08', '2014-09', '2014-10', '2014-11'], dtype='period[M]')
In [380]: idx + pd.offsets.MonthEnd(3)
Out[380]: PeriodIndex(['2014-10', '2014-11', '2014-12', '2015-01', '2015-02'], dtype='period[M]')
PeriodIndex has its own dtype named period, refer to Period Dtypes.
Period dtypes#
PeriodIndex has a custom period dtype. This is a pandas extension
dtype similar to the timezone aware dtype (datetime64[ns, tz]).
The period dtype holds the freq attribute and is represented with
period[freq] like period[D] or period[M], using frequency strings.
In [381]: pi = pd.period_range("2016-01-01", periods=3, freq="M")
In [382]: pi
Out[382]: PeriodIndex(['2016-01', '2016-02', '2016-03'], dtype='period[M]')
In [383]: pi.dtype
Out[383]: period[M]
The period dtype can be used in .astype(...). It allows one to change the
freq of a PeriodIndex like .asfreq() and convert a
DatetimeIndex to PeriodIndex like to_period():
# change monthly freq to daily freq
In [384]: pi.astype("period[D]")
Out[384]: PeriodIndex(['2016-01-31', '2016-02-29', '2016-03-31'], dtype='period[D]')
# convert to DatetimeIndex
In [385]: pi.astype("datetime64[ns]")
Out[385]: DatetimeIndex(['2016-01-01', '2016-02-01', '2016-03-01'], dtype='datetime64[ns]', freq='MS')
# convert to PeriodIndex
In [386]: dti = pd.date_range("2011-01-01", freq="M", periods=3)
In [387]: dti
Out[387]: DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31'], dtype='datetime64[ns]', freq='M')
In [388]: dti.astype("period[M]")
Out[388]: PeriodIndex(['2011-01', '2011-02', '2011-03'], dtype='period[M]')
PeriodIndex partial string indexing#
PeriodIndex now supports partial string slicing with non-monotonic indexes.
New in version 1.1.0.
You can pass in dates and strings to Series and DataFrame with PeriodIndex, in the same manner as DatetimeIndex. For details, refer to DatetimeIndex Partial String Indexing.
In [389]: ps["2011-01"]
Out[389]: -2.9169013294054507
In [390]: ps[datetime.datetime(2011, 12, 25):]
Out[390]:
2011-12 2.261385
2012-01 -0.329583
Freq: M, dtype: float64
In [391]: ps["10/31/2011":"12/31/2011"]
Out[391]:
2011-10 0.056780
2011-11 0.197035
2011-12 2.261385
Freq: M, dtype: float64
Passing a string representing a lower frequency than PeriodIndex returns partial sliced data.
In [392]: ps["2011"]
Out[392]:
2011-01 -2.916901
2011-02 0.514474
2011-03 1.346470
2011-04 0.816397
2011-05 2.258648
2011-06 0.494789
2011-07 0.301239
2011-08 0.464776
2011-09 -1.393581
2011-10 0.056780
2011-11 0.197035
2011-12 2.261385
Freq: M, dtype: float64
In [393]: dfp = pd.DataFrame(
.....: np.random.randn(600, 1),
.....: columns=["A"],
.....: index=pd.period_range("2013-01-01 9:00", periods=600, freq="T"),
.....: )
.....:
In [394]: dfp
Out[394]:
A
2013-01-01 09:00 -0.538468
2013-01-01 09:01 -1.365819
2013-01-01 09:02 -0.969051
2013-01-01 09:03 -0.331152
2013-01-01 09:04 -0.245334
... ...
2013-01-01 18:55 0.522460
2013-01-01 18:56 0.118710
2013-01-01 18:57 0.167517
2013-01-01 18:58 0.922883
2013-01-01 18:59 1.721104
[600 rows x 1 columns]
In [395]: dfp.loc["2013-01-01 10H"]
Out[395]:
A
2013-01-01 10:00 -0.308975
2013-01-01 10:01 0.542520
2013-01-01 10:02 1.061068
2013-01-01 10:03 0.754005
2013-01-01 10:04 0.352933
... ...
2013-01-01 10:55 -0.865621
2013-01-01 10:56 -1.167818
2013-01-01 10:57 -2.081748
2013-01-01 10:58 -0.527146
2013-01-01 10:59 0.802298
[60 rows x 1 columns]
As with DatetimeIndex, the endpoints will be included in the result. The example below slices data starting from 10:00 to 11:59.
In [396]: dfp["2013-01-01 10H":"2013-01-01 11H"]
Out[396]:
A
2013-01-01 10:00 -0.308975
2013-01-01 10:01 0.542520
2013-01-01 10:02 1.061068
2013-01-01 10:03 0.754005
2013-01-01 10:04 0.352933
... ...
2013-01-01 11:55 -0.590204
2013-01-01 11:56 1.539990
2013-01-01 11:57 -1.224826
2013-01-01 11:58 0.578798
2013-01-01 11:59 -0.685496
[120 rows x 1 columns]
Frequency conversion and resampling with PeriodIndex#
The frequency of Period and PeriodIndex can be converted via the asfreq
method. Let’s start with the fiscal year 2011, ending in December:
In [397]: p = pd.Period("2011", freq="A-DEC")
In [398]: p
Out[398]: Period('2011', 'A-DEC')
We can convert it to a monthly frequency. Using the how parameter, we can
specify whether to return the starting or ending month:
In [399]: p.asfreq("M", how="start")
Out[399]: Period('2011-01', 'M')
In [400]: p.asfreq("M", how="end")
Out[400]: Period('2011-12', 'M')
The shorthands ‘s’ and ‘e’ are provided for convenience:
In [401]: p.asfreq("M", "s")
Out[401]: Period('2011-01', 'M')
In [402]: p.asfreq("M", "e")
Out[402]: Period('2011-12', 'M')
Converting to a “super-period” (e.g., annual frequency is a super-period of
quarterly frequency) automatically returns the super-period that includes the
input period:
In [403]: p = pd.Period("2011-12", freq="M")
In [404]: p.asfreq("A-NOV")
Out[404]: Period('2012', 'A-NOV')
Note that since we converted to an annual frequency that ends the year in
November, the monthly period of December 2011 is actually in the 2012 A-NOV
period.
Period conversions with anchored frequencies are particularly useful for
working with various quarterly data common to economics, business, and other
fields. Many organizations define quarters relative to the month in which their
fiscal year starts and ends. Thus, the first quarter of 2011 could start in 2010 or
a few months into 2011. Via anchored frequencies, pandas works for all quarterly
frequencies Q-JAN through Q-DEC.
Q-DEC defines regular calendar quarters:
In [405]: p = pd.Period("2012Q1", freq="Q-DEC")
In [406]: p.asfreq("D", "s")
Out[406]: Period('2012-01-01', 'D')
In [407]: p.asfreq("D", "e")
Out[407]: Period('2012-03-31', 'D')
Q-MAR defines fiscal year end in March:
In [408]: p = pd.Period("2011Q4", freq="Q-MAR")
In [409]: p.asfreq("D", "s")
Out[409]: Period('2011-01-01', 'D')
In [410]: p.asfreq("D", "e")
Out[410]: Period('2011-03-31', 'D')
Converting between representations#
Timestamped data can be converted to PeriodIndex-ed data using to_period
and vice-versa using to_timestamp:
In [411]: rng = pd.date_range("1/1/2012", periods=5, freq="M")
In [412]: ts = pd.Series(np.random.randn(len(rng)), index=rng)
In [413]: ts
Out[413]:
2012-01-31 1.931253
2012-02-29 -0.184594
2012-03-31 0.249656
2012-04-30 -0.978151
2012-05-31 -0.873389
Freq: M, dtype: float64
In [414]: ps = ts.to_period()
In [415]: ps
Out[415]:
2012-01 1.931253
2012-02 -0.184594
2012-03 0.249656
2012-04 -0.978151
2012-05 -0.873389
Freq: M, dtype: float64
In [416]: ps.to_timestamp()
Out[416]:
2012-01-01 1.931253
2012-02-01 -0.184594
2012-03-01 0.249656
2012-04-01 -0.978151
2012-05-01 -0.873389
Freq: MS, dtype: float64
Remember that ‘s’ and ‘e’ can be used to return the timestamps at the start or
end of the period:
In [417]: ps.to_timestamp("D", how="s")
Out[417]:
2012-01-01 1.931253
2012-02-01 -0.184594
2012-03-01 0.249656
2012-04-01 -0.978151
2012-05-01 -0.873389
Freq: MS, dtype: float64
Converting between period and timestamp enables some convenient arithmetic
functions to be used. In the following example, we convert a quarterly
frequency with year ending in November to 9am of the end of the month following
the quarter end:
In [418]: prng = pd.period_range("1990Q1", "2000Q4", freq="Q-NOV")
In [419]: ts = pd.Series(np.random.randn(len(prng)), prng)
In [420]: ts.index = (prng.asfreq("M", "e") + 1).asfreq("H", "s") + 9
In [421]: ts.head()
Out[421]:
1990-03-01 09:00 -0.109291
1990-06-01 09:00 -0.637235
1990-09-01 09:00 -1.735925
1990-12-01 09:00 2.096946
1991-03-01 09:00 -1.039926
Freq: H, dtype: float64
Representing out-of-bounds spans#
If you have data that is outside of the Timestamp bounds, see Timestamp limitations,
then you can use a PeriodIndex and/or Series of Periods to do computations.
In [422]: span = pd.period_range("1215-01-01", "1381-01-01", freq="D")
In [423]: span
Out[423]:
PeriodIndex(['1215-01-01', '1215-01-02', '1215-01-03', '1215-01-04',
'1215-01-05', '1215-01-06', '1215-01-07', '1215-01-08',
'1215-01-09', '1215-01-10',
...
'1380-12-23', '1380-12-24', '1380-12-25', '1380-12-26',
'1380-12-27', '1380-12-28', '1380-12-29', '1380-12-30',
'1380-12-31', '1381-01-01'],
dtype='period[D]', length=60632)
To convert from an int64-based YYYYMMDD representation:
In [424]: s = pd.Series([20121231, 20141130, 99991231])
In [425]: s
Out[425]:
0 20121231
1 20141130
2 99991231
dtype: int64
In [426]: def conv(x):
.....: return pd.Period(year=x // 10000, month=x // 100 % 100, day=x % 100, freq="D")
.....:
In [427]: s.apply(conv)
Out[427]:
0 2012-12-31
1 2014-11-30
2 9999-12-31
dtype: period[D]
In [428]: s.apply(conv)[2]
Out[428]: Period('9999-12-31', 'D')
These can easily be converted to a PeriodIndex:
In [429]: span = pd.PeriodIndex(s.apply(conv))
In [430]: span
Out[430]: PeriodIndex(['2012-12-31', '2014-11-30', '9999-12-31'], dtype='period[D]')
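A vectorized alternative avoids the per-element apply; this sketch assumes the component-keyword form of the PeriodIndex constructor:
# build the PeriodIndex directly from year/month/day integer components
pd.PeriodIndex(year=s // 10000, month=s // 100 % 100, day=s % 100, freq="D")
# PeriodIndex(['2012-12-31', '2014-11-30', '9999-12-31'], dtype='period[D]')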
Time zone handling#
pandas provides rich support for working with timestamps in different time
zones using the pytz and dateutil libraries or datetime.timezone
objects from the standard library.
Working with time zones#
By default, pandas objects are time zone unaware:
In [431]: rng = pd.date_range("3/6/2012 00:00", periods=15, freq="D")
In [432]: rng.tz is None
Out[432]: True
To localize these dates to a time zone (assign a particular time zone to a naive date),
you can use the tz_localize method or the tz keyword argument in
date_range(), Timestamp, or DatetimeIndex.
You can either pass pytz or dateutil time zone objects or Olson time zone database strings.
Olson time zone strings will return pytz time zone objects by default.
To return dateutil time zone objects, append dateutil/ before the string.
In pytz you can find a list of common (and less common) time zones using
from pytz import common_timezones, all_timezones.
dateutil uses the OS time zones so there isn’t a fixed list available. For
common zones, the names are the same as pytz.
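For example, a quick membership check against the pytz list (a minimal sketch):
# pytz's curated list of common zone names
from pytz import common_timezones
"US/Eastern" in common_timezones  # True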
In [433]: import dateutil
# pytz
In [434]: rng_pytz = pd.date_range("3/6/2012 00:00", periods=3, freq="D", tz="Europe/London")
In [435]: rng_pytz.tz
Out[435]: <DstTzInfo 'Europe/London' LMT-1 day, 23:59:00 STD>
# dateutil
In [436]: rng_dateutil = pd.date_range("3/6/2012 00:00", periods=3, freq="D")
In [437]: rng_dateutil = rng_dateutil.tz_localize("dateutil/Europe/London")
In [438]: rng_dateutil.tz
Out[438]: tzfile('/usr/share/zoneinfo/Europe/London')
# dateutil - utc special case
In [439]: rng_utc = pd.date_range(
.....: "3/6/2012 00:00",
.....: periods=3,
.....: freq="D",
.....: tz=dateutil.tz.tzutc(),
.....: )
.....:
In [440]: rng_utc.tz
Out[440]: tzutc()
New in version 0.25.0.
# datetime.timezone
In [441]: rng_utc = pd.date_range(
.....: "3/6/2012 00:00",
.....: periods=3,
.....: freq="D",
.....: tz=datetime.timezone.utc,
.....: )
.....:
In [442]: rng_utc.tz
Out[442]: datetime.timezone.utc
Note that the UTC time zone is a special case in dateutil and should be constructed explicitly
as an instance of dateutil.tz.tzutc. You can also construct other time
zone objects explicitly first.
In [443]: import pytz
# pytz
In [444]: tz_pytz = pytz.timezone("Europe/London")
In [445]: rng_pytz = pd.date_range("3/6/2012 00:00", periods=3, freq="D")
In [446]: rng_pytz = rng_pytz.tz_localize(tz_pytz)
In [447]: rng_pytz.tz == tz_pytz
Out[447]: True
# dateutil
In [448]: tz_dateutil = dateutil.tz.gettz("Europe/London")
In [449]: rng_dateutil = pd.date_range("3/6/2012 00:00", periods=3, freq="D", tz=tz_dateutil)
In [450]: rng_dateutil.tz == tz_dateutil
Out[450]: True
To convert a time zone aware pandas object from one time zone to another,
you can use the tz_convert method.
In [451]: rng_pytz.tz_convert("US/Eastern")
Out[451]:
DatetimeIndex(['2012-03-05 19:00:00-05:00', '2012-03-06 19:00:00-05:00',
'2012-03-07 19:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)
Note
When using pytz time zones, DatetimeIndex will construct a different
time zone object than a Timestamp for the same time zone input. A DatetimeIndex
can hold a collection of Timestamp objects that may have different UTC offsets and cannot be
succinctly represented by one pytz time zone instance while one Timestamp
represents one point in time with a specific UTC offset.
In [452]: dti = pd.date_range("2019-01-01", periods=3, freq="D", tz="US/Pacific")
In [453]: dti.tz
Out[453]: <DstTzInfo 'US/Pacific' LMT-1 day, 16:07:00 STD>
In [454]: ts = pd.Timestamp("2019-01-01", tz="US/Pacific")
In [455]: ts.tz
Out[455]: <DstTzInfo 'US/Pacific' PST-1 day, 16:00:00 STD>
Warning
Be wary of conversions between libraries. For some time zones, pytz and dateutil have different
definitions of the zone. This is more of a problem for unusual time zones than for
‘standard’ zones like US/Eastern.
Warning
Be aware that a time zone definition across versions of time zone libraries may not
be considered equal. This may cause problems when working with stored data that
is localized using one version and operated on with a different version.
See here for how to handle such a situation.
Warning
For pytz time zones, it is incorrect to pass a time zone object directly into
the datetime.datetime constructor
(e.g., datetime.datetime(2011, 1, 1, tzinfo=pytz.timezone('US/Eastern')).
Instead, the datetime needs to be localized using the localize method
on the pytz time zone object.
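A correct pattern looks like this minimal sketch:
# localize a naive datetime with pytz's localize method
import pytz
pytz.timezone("US/Eastern").localize(datetime.datetime(2011, 1, 1))
# datetime.datetime(2011, 1, 1, 0, 0, tzinfo=<DstTzInfo 'US/Eastern' EST-1 day, 19:00:00 STD>)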
Warning
Be aware that for times in the future, correct conversion between time zones
(and UTC) cannot be guaranteed by any time zone library because a timezone’s
offset from UTC may be changed by the respective government.
Warning
If you are using dates beyond 2038-01-18, due to current deficiencies
in the underlying libraries caused by the year 2038 problem, daylight saving time (DST) adjustments
to timezone aware dates will not be applied. If and when the underlying libraries are fixed,
the DST transitions will be applied.
For example, for two dates that are in British Summer Time (and so would normally be GMT+1), both the following asserts evaluate as true:
In [456]: d_2037 = "2037-03-31T010101"
In [457]: d_2038 = "2038-03-31T010101"
In [458]: DST = "Europe/London"
In [459]: assert pd.Timestamp(d_2037, tz=DST) != pd.Timestamp(d_2037, tz="GMT")
In [460]: assert pd.Timestamp(d_2038, tz=DST) == pd.Timestamp(d_2038, tz="GMT")
Under the hood, all timestamps are stored in UTC. Values from a time zone aware
DatetimeIndex or Timestamp will have their fields (day, hour, minute, etc.)
localized to the time zone. However, timestamps with the same UTC value are
still considered to be equal even if they are in different time zones:
In [461]: rng_eastern = rng_utc.tz_convert("US/Eastern")
In [462]: rng_berlin = rng_utc.tz_convert("Europe/Berlin")
In [463]: rng_eastern[2]
Out[463]: Timestamp('2012-03-07 19:00:00-0500', tz='US/Eastern', freq='D')
In [464]: rng_berlin[2]
Out[464]: Timestamp('2012-03-08 01:00:00+0100', tz='Europe/Berlin', freq='D')
In [465]: rng_eastern[2] == rng_berlin[2]
Out[465]: True
Operations between Series in different time zones will yield UTC
Series, aligning the data on the UTC timestamps:
In [466]: ts_utc = pd.Series(range(3), pd.date_range("20130101", periods=3, tz="UTC"))
In [467]: eastern = ts_utc.tz_convert("US/Eastern")
In [468]: berlin = ts_utc.tz_convert("Europe/Berlin")
In [469]: result = eastern + berlin
In [470]: result
Out[470]:
2013-01-01 00:00:00+00:00 0
2013-01-02 00:00:00+00:00 2
2013-01-03 00:00:00+00:00 4
Freq: D, dtype: int64
In [471]: result.index
Out[471]:
DatetimeIndex(['2013-01-01 00:00:00+00:00', '2013-01-02 00:00:00+00:00',
'2013-01-03 00:00:00+00:00'],
dtype='datetime64[ns, UTC]', freq='D')
To remove time zone information, use tz_localize(None) or tz_convert(None).
tz_localize(None) will remove the time zone yielding the local time representation.
tz_convert(None) will remove the time zone after converting to UTC time.
In [472]: didx = pd.date_range(start="2014-08-01 09:00", freq="H", periods=3, tz="US/Eastern")
In [473]: didx
Out[473]:
DatetimeIndex(['2014-08-01 09:00:00-04:00', '2014-08-01 10:00:00-04:00',
'2014-08-01 11:00:00-04:00'],
dtype='datetime64[ns, US/Eastern]', freq='H')
In [474]: didx.tz_localize(None)
Out[474]:
DatetimeIndex(['2014-08-01 09:00:00', '2014-08-01 10:00:00',
'2014-08-01 11:00:00'],
dtype='datetime64[ns]', freq=None)
In [475]: didx.tz_convert(None)
Out[475]:
DatetimeIndex(['2014-08-01 13:00:00', '2014-08-01 14:00:00',
'2014-08-01 15:00:00'],
dtype='datetime64[ns]', freq='H')
# tz_convert(None) is identical to tz_convert('UTC').tz_localize(None)
In [476]: didx.tz_convert("UTC").tz_localize(None)
Out[476]:
DatetimeIndex(['2014-08-01 13:00:00', '2014-08-01 14:00:00',
'2014-08-01 15:00:00'],
dtype='datetime64[ns]', freq=None)
Fold#
New in version 1.1.0.
For ambiguous times, pandas supports explicitly specifying the keyword-only fold argument.
Due to daylight saving time, one wall clock time can occur twice when shifting
from summer to winter time; fold describes whether the datetime-like corresponds
to the first (0) or the second time (1) the wall clock hits the ambiguous time.
Fold is supported only for constructing from naive datetime.datetime
(see datetime documentation for details) or from Timestamp
or for constructing from components (see below). Only dateutil timezones are supported
(see dateutil documentation
for dateutil methods that deal with ambiguous datetimes) as pytz
timezones do not support fold (see pytz documentation
for details on how pytz deals with ambiguous datetimes). To localize an ambiguous datetime
with pytz, please use Timestamp.tz_localize(). In general, we recommend relying
on Timestamp.tz_localize() when localizing ambiguous datetimes if you need direct
control over how they are handled.
In [477]: pd.Timestamp(
.....: datetime.datetime(2019, 10, 27, 1, 30, 0, 0),
.....: tz="dateutil/Europe/London",
.....: fold=0,
.....: )
.....:
Out[477]: Timestamp('2019-10-27 01:30:00+0100', tz='dateutil//usr/share/zoneinfo/Europe/London')
In [478]: pd.Timestamp(
.....: year=2019,
.....: month=10,
.....: day=27,
.....: hour=1,
.....: minute=30,
.....: tz="dateutil/Europe/London",
.....: fold=1,
.....: )
.....:
Out[478]: Timestamp('2019-10-27 01:30:00+0000', tz='dateutil//usr/share/zoneinfo/Europe/London')
Ambiguous times when localizing#
tz_localize may not be able to determine the UTC offset of a timestamp
because daylight saving time (DST) in a local time zone causes some times to occur
twice within one day (“clocks fall back”). The following options are available:
'raise': Raises a pytz.AmbiguousTimeError (the default behavior)
'infer': Attempt to determine the correct offset based on the monotonicity of the timestamps
'NaT': Replaces ambiguous times with NaT
bool: True represents a DST time, False represents non-DST time. An array-like of bool values is supported for a sequence of times.
In [479]: rng_hourly = pd.DatetimeIndex(
.....: ["11/06/2011 00:00", "11/06/2011 01:00", "11/06/2011 01:00", "11/06/2011 02:00"]
.....: )
.....:
This will fail as there are ambiguous times ('11/06/2011 01:00'):
In [2]: rng_hourly.tz_localize('US/Eastern')
AmbiguousTimeError: Cannot infer dst time from Timestamp('2011-11-06 01:00:00'), try using the 'ambiguous' argument
Handle these ambiguous times by specifying the following.
In [480]: rng_hourly.tz_localize("US/Eastern", ambiguous="infer")
Out[480]:
DatetimeIndex(['2011-11-06 00:00:00-04:00', '2011-11-06 01:00:00-04:00',
'2011-11-06 01:00:00-05:00', '2011-11-06 02:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)
In [481]: rng_hourly.tz_localize("US/Eastern", ambiguous="NaT")
Out[481]:
DatetimeIndex(['2011-11-06 00:00:00-04:00', 'NaT', 'NaT',
'2011-11-06 02:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)
In [482]: rng_hourly.tz_localize("US/Eastern", ambiguous=[True, True, False, False])
Out[482]:
DatetimeIndex(['2011-11-06 00:00:00-04:00', '2011-11-06 01:00:00-04:00',
'2011-11-06 01:00:00-05:00', '2011-11-06 02:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)
Nonexistent times when localizing#
A DST transition may also shift the local time ahead by 1 hour creating nonexistent
local times (“clocks spring forward”). The behavior of localizing a timeseries with nonexistent times
can be controlled by the nonexistent argument. The following options are available:
'raise': Raises a pytz.NonExistentTimeError (the default behavior)
'NaT': Replaces nonexistent times with NaT
'shift_forward': Shifts nonexistent times forward to the closest real time
'shift_backward': Shifts nonexistent times backward to the closest real time
timedelta object: Shifts nonexistent times by the timedelta duration
In [483]: dti = pd.date_range(start="2015-03-29 02:30:00", periods=3, freq="H")
# 2:30 is a nonexistent time
Localization of nonexistent times will raise an error by default.
In [2]: dti.tz_localize('Europe/Warsaw')
NonExistentTimeError: 2015-03-29 02:30:00
Transform nonexistent times to NaT or shift the times.
In [484]: dti
Out[484]:
DatetimeIndex(['2015-03-29 02:30:00', '2015-03-29 03:30:00',
'2015-03-29 04:30:00'],
dtype='datetime64[ns]', freq='H')
In [485]: dti.tz_localize("Europe/Warsaw", nonexistent="shift_forward")
Out[485]:
DatetimeIndex(['2015-03-29 03:00:00+02:00', '2015-03-29 03:30:00+02:00',
'2015-03-29 04:30:00+02:00'],
dtype='datetime64[ns, Europe/Warsaw]', freq=None)
In [486]: dti.tz_localize("Europe/Warsaw", nonexistent="shift_backward")
Out[486]:
DatetimeIndex(['2015-03-29 01:59:59.999999999+01:00',
'2015-03-29 03:30:00+02:00',
'2015-03-29 04:30:00+02:00'],
dtype='datetime64[ns, Europe/Warsaw]', freq=None)
In [487]: dti.tz_localize("Europe/Warsaw", nonexistent=pd.Timedelta(1, unit="H"))
Out[487]:
DatetimeIndex(['2015-03-29 03:30:00+02:00', '2015-03-29 03:30:00+02:00',
'2015-03-29 04:30:00+02:00'],
dtype='datetime64[ns, Europe/Warsaw]', freq=None)
In [488]: dti.tz_localize("Europe/Warsaw", nonexistent="NaT")
Out[488]:
DatetimeIndex(['NaT', '2015-03-29 03:30:00+02:00',
'2015-03-29 04:30:00+02:00'],
dtype='datetime64[ns, Europe/Warsaw]', freq=None)
Time zone Series operations#
A Series with time zone naive values is
represented with a dtype of datetime64[ns].
In [489]: s_naive = pd.Series(pd.date_range("20130101", periods=3))
In [490]: s_naive
Out[490]:
0 2013-01-01
1 2013-01-02
2 2013-01-03
dtype: datetime64[ns]
A Series with time zone aware values is
represented with a dtype of datetime64[ns, tz], where tz is the time zone:
In [491]: s_aware = pd.Series(pd.date_range("20130101", periods=3, tz="US/Eastern"))
In [492]: s_aware
Out[492]:
0 2013-01-01 00:00:00-05:00
1 2013-01-02 00:00:00-05:00
2 2013-01-03 00:00:00-05:00
dtype: datetime64[ns, US/Eastern]
The time zone information of both of these Series
can be manipulated via the .dt accessor; see the dt accessor section.
For example, to localize a naive stamp and convert it to a time zone aware one:
In [493]: s_naive.dt.tz_localize("UTC").dt.tz_convert("US/Eastern")
Out[493]:
0 2012-12-31 19:00:00-05:00
1 2013-01-01 19:00:00-05:00
2 2013-01-02 19:00:00-05:00
dtype: datetime64[ns, US/Eastern]
Time zone information can also be manipulated using the astype method.
This method can convert between different timezone-aware dtypes.
# convert to a new time zone
In [494]: s_aware.astype("datetime64[ns, CET]")
Out[494]:
0 2013-01-01 06:00:00+01:00
1 2013-01-02 06:00:00+01:00
2 2013-01-03 06:00:00+01:00
dtype: datetime64[ns, CET]
Note
Using Series.to_numpy() on a Series returns a NumPy array of the data.
NumPy does not currently support time zones (even though it is printing in the local time zone!),
therefore an object array of Timestamps is returned for time zone aware data:
In [495]: s_naive.to_numpy()
Out[495]:
array(['2013-01-01T00:00:00.000000000', '2013-01-02T00:00:00.000000000',
'2013-01-03T00:00:00.000000000'], dtype='datetime64[ns]')
In [496]: s_aware.to_numpy()
Out[496]:
array([Timestamp('2013-01-01 00:00:00-0500', tz='US/Eastern'),
Timestamp('2013-01-02 00:00:00-0500', tz='US/Eastern'),
Timestamp('2013-01-03 00:00:00-0500', tz='US/Eastern')],
dtype=object)
Converting to an object array of Timestamps preserves the time zone
information. For example, when converting back to a Series:
In [497]: pd.Series(s_aware.to_numpy())
Out[497]:
0 2013-01-01 00:00:00-05:00
1 2013-01-02 00:00:00-05:00
2 2013-01-03 00:00:00-05:00
dtype: datetime64[ns, US/Eastern]
However, if you want an actual NumPy datetime64[ns] array (with the values
converted to UTC) instead of an array of objects, you can specify the
dtype argument:
In [498]: s_aware.to_numpy(dtype="datetime64[ns]")
Out[498]:
array(['2013-01-01T05:00:00.000000000', '2013-01-02T05:00:00.000000000',
'2013-01-03T05:00:00.000000000'], dtype='datetime64[ns]')
|
user_guide/timeseries.html
|
pandas.tseries.offsets.BYearBegin.is_quarter_end
|
`pandas.tseries.offsets.BYearBegin.is_quarter_end`
Return boolean whether a timestamp occurs on the quarter end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
```
|
BYearBegin.is_quarter_end()#
Return boolean whether a timestamp occurs on the quarter end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
|
reference/api/pandas.tseries.offsets.BYearBegin.is_quarter_end.html
|
pandas.tseries.offsets.YearEnd.freqstr
|
`pandas.tseries.offsets.YearEnd.freqstr`
Return a string representing the frequency.
```
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
```
|
YearEnd.freqstr#
Return a string representing the frequency.
Examples
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
>>> pd.offsets.BusinessHour(2).freqstr
'2BH'
>>> pd.offsets.Nano().freqstr
'N'
>>> pd.offsets.Nano(-3).freqstr
'-3N'
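For YearEnd itself the anchor month is part of the string; a minimal sketch (December is the default anchor):
>>> pd.offsets.YearEnd().freqstr
'A-DEC'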
|
reference/api/pandas.tseries.offsets.YearEnd.freqstr.html
|
pandas.DataFrame.sub
|
`pandas.DataFrame.sub`
Get Subtraction of dataframe and other, element-wise (binary operator sub).
```
>>> df = pd.DataFrame({'angles': [0, 3, 4],
... 'degrees': [360, 180, 360]},
... index=['circle', 'triangle', 'rectangle'])
>>> df
angles degrees
circle 0 360
triangle 3 180
rectangle 4 360
```
|
DataFrame.sub(other, axis='columns', level=None, fill_value=None)[source]#
Get Subtraction of dataframe and other, element-wise (binary operator sub).
Equivalent to dataframe - other, but with support to substitute a fill_value
for missing data in one of the inputs. With reverse version, rsub.
Among flexible wrappers (add, sub, mul, div, mod, pow) to
arithmetic operators: +, -, *, /, //, %, **.
Parameters
otherscalar, sequence, Series, dict or DataFrameAny single or multiple element data structure, or list-like object.
axis{0 or ‘index’, 1 or ‘columns’}Whether to compare by the index (0 or ‘index’) or columns.
(1 or ‘columns’). For Series input, axis to match Series index on.
levelint or labelBroadcast across a level, matching Index values on the
passed MultiIndex level.
fill_valuefloat or None, default NoneFill existing missing (NaN) values, and any new element needed for
successful DataFrame alignment, with this value before computation.
If data in both corresponding DataFrame locations is missing
the result will be missing.
Returns
DataFrameResult of the arithmetic operation.
See also
DataFrame.addAdd DataFrames.
DataFrame.subSubtract DataFrames.
DataFrame.mulMultiply DataFrames.
DataFrame.divDivide DataFrames (float division).
DataFrame.truedivDivide DataFrames (float division).
DataFrame.floordivDivide DataFrames (integer division).
DataFrame.modCalculate modulo (remainder after division).
DataFrame.powCalculate exponential power.
Notes
Mismatched indices will be unioned together.
Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4],
... 'degrees': [360, 180, 360]},
... index=['circle', 'triangle', 'rectangle'])
>>> df
angles degrees
circle 0 360
triangle 3 180
rectangle 4 360
Add a scalar with the operator version, which returns the same
results.
>>> df + 1
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
>>> df.add(1)
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
Divide by constant with reverse version.
>>> df.div(10)
angles degrees
circle 0.0 36.0
triangle 0.3 18.0
rectangle 0.4 36.0
>>> df.rdiv(10)
angles degrees
circle inf 0.027778
triangle 3.333333 0.055556
rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2]
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub([1, 2], axis='columns')
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
... axis='index')
angles degrees
circle -1 359
triangle 2 179
rectangle 3 359
Multiply a dictionary by axis.
>>> df.mul({'angles': 0, 'degrees': 2})
angles degrees
circle 0 720
triangle 0 360
rectangle 0 720
>>> df.mul({'circle': 0, 'triangle': 2, 'rectangle': 3}, axis='index')
angles degrees
circle 0 0
triangle 6 360
rectangle 12 1080
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]},
... index=['circle', 'triangle', 'rectangle'])
>>> other
angles
circle 0
triangle 3
rectangle 4
>>> df * other
angles degrees
circle 0 NaN
triangle 9 NaN
rectangle 16 NaN
>>> df.mul(other, fill_value=0)
angles degrees
circle 0 0.0
triangle 9 0.0
rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
... 'degrees': [360, 180, 360, 360, 540, 720]},
... index=[['A', 'A', 'A', 'B', 'B', 'B'],
... ['circle', 'triangle', 'rectangle',
... 'square', 'pentagon', 'hexagon']])
>>> df_multindex
angles degrees
A circle 0 360
triangle 3 180
rectangle 4 360
B square 4 360
pentagon 5 540
hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0)
angles degrees
A circle NaN 1.0
triangle 1.0 1.0
rectangle 1.0 1.0
B square 0.0 0.0
pentagon 0.0 0.0
hexagon 0.0 0.0
|
reference/api/pandas.DataFrame.sub.html
|
pandas.io.formats.style.Styler.applymap_index
|
`pandas.io.formats.style.Styler.applymap_index`
Apply a CSS-styling function to the index or column headers, elementwise.
```
>>> df = pd.DataFrame([[1,2], [3,4]], index=["A", "B"])
>>> def color_b(v):
... return "background-color: yellow;" if v == "B" else None
>>> df.style.applymap_index(color_b)
```
|
Styler.applymap_index(func, axis=0, level=None, **kwargs)[source]#
Apply a CSS-styling function to the index or column headers, elementwise.
Updates the HTML representation with the result.
New in version 1.4.0.
Parameters
funcfunctionfunc should take a scalar and return a string.
axis{0, 1, “index”, “columns”}The headers over which to apply the function.
levelint, str, list, optionalIf index is MultiIndex the level(s) over which to apply the function.
**kwargsdictPass along to func.
Returns
selfStyler
See also
Styler.apply_indexApply a CSS-styling function to headers level-wise.
Styler.applyApply a CSS-styling function column-wise, row-wise, or table-wise.
Styler.applymapApply a CSS-styling function elementwise.
Notes
Each input to func will be an index value, if an Index, or a level value of a MultiIndex. The output of func should be
CSS styles as a string, in the format ‘attribute: value; attribute2: value2; …’
or, if nothing is to be applied to that element, an empty string or None.
Examples
Basic usage to conditionally highlight values in the index.
>>> df = pd.DataFrame([[1,2], [3,4]], index=["A", "B"])
>>> def color_b(v):
... return "background-color: yellow;" if v == "B" else None
>>> df.style.applymap_index(color_b)
Selectively applying to specific levels of MultiIndex columns.
>>> midx = pd.MultiIndex.from_product([['ix', 'jy'], [0, 1], ['x3', 'z4']])
>>> df = pd.DataFrame([np.arange(8)], columns=midx)
>>> def highlight_x(v):
... return "background-color: yellow;" if "x" in v else None
>>> df.style.applymap_index(highlight_x, axis="columns", level=[0, 2])
...
|
reference/api/pandas.io.formats.style.Styler.applymap_index.html
|
pandas.MultiIndex.sortlevel
|
`pandas.MultiIndex.sortlevel`
Sort MultiIndex at the requested level.
The result will respect the original ordering of the associated
factor at that level.
```
>>> mi = pd.MultiIndex.from_arrays([[0, 0], [2, 1]])
>>> mi
MultiIndex([(0, 2),
(0, 1)],
)
```
|
MultiIndex.sortlevel(level=0, ascending=True, sort_remaining=True)[source]#
Sort MultiIndex at the requested level.
The result will respect the original ordering of the associated
factor at that level.
Parameters
levellist-like, int or str, default 0If a string is given, must be a name of the level.
If list-like must be names or ints of levels.
ascendingbool, default TrueFalse to sort in descending order.
Can also be a list to specify a directed ordering.
sort_remainingsort by the remaining levels after level
Returns
sorted_indexpd.MultiIndexResulting index.
indexernp.ndarray[np.intp]Indices of output values in original index.
Examples
>>> mi = pd.MultiIndex.from_arrays([[0, 0], [2, 1]])
>>> mi
MultiIndex([(0, 2),
(0, 1)],
)
>>> mi.sortlevel()
(MultiIndex([(0, 1),
(0, 2)],
), array([1, 0]))
>>> mi.sortlevel(sort_remaining=False)
(MultiIndex([(0, 2),
(0, 1)],
), array([0, 1]))
>>> mi.sortlevel(1)
(MultiIndex([(0, 1),
(0, 2)],
), array([1, 0]))
>>> mi.sortlevel(1, ascending=False)
(MultiIndex([(0, 2),
(0, 1)],
), array([0, 1]))
|
reference/api/pandas.MultiIndex.sortlevel.html
|
pandas.DataFrame.product
|
`pandas.DataFrame.product`
Return the product of the values over the requested axis.
Axis for the function to be applied on.
For Series this parameter is unused and defaults to 0.
```
>>> pd.Series([], dtype="float64").prod()
1.0
```
|
DataFrame.product(axis=None, skipna=True, level=None, numeric_only=None, min_count=0, **kwargs)[source]#
Return the product of the values over the requested axis.
Parameters
axis{index (0), columns (1)}Axis for the function to be applied on.
For Series this parameter is unused and defaults to 0.
skipnabool, default TrueExclude NA/null values when computing the result.
levelint or level name, default NoneIf the axis is a MultiIndex (hierarchical), count along a
particular level, collapsing into a Series.
Deprecated since version 1.3.0: The level keyword is deprecated. Use groupby instead.
numeric_onlybool, default NoneInclude only float, int, boolean columns. If None, will attempt to use
everything, then use only numeric data. Not implemented for Series.
Deprecated since version 1.5.0: Specifying numeric_only=None is deprecated. The default value will be
False in a future version of pandas.
min_countint, default 0The required number of valid values to perform the operation. If fewer than
min_count non-NA values are present the result will be NA.
**kwargsAdditional keyword arguments to be passed to the function.
Returns
Series or DataFrame (if level specified)
See also
Series.sumReturn the sum.
Series.minReturn the minimum.
Series.maxReturn the maximum.
Series.idxminReturn the index of the minimum.
Series.idxmaxReturn the index of the maximum.
DataFrame.sumReturn the sum over the requested axis.
DataFrame.minReturn the minimum over the requested axis.
DataFrame.maxReturn the maximum over the requested axis.
DataFrame.idxminReturn the index of the minimum over the requested axis.
DataFrame.idxmaxReturn the index of the maximum over the requested axis.
Examples
By default, the product of an empty or all-NA Series is 1
>>> pd.Series([], dtype="float64").prod()
1.0
This can be controlled with the min_count parameter
>>> pd.Series([], dtype="float64").prod(min_count=1)
nan
Thanks to the skipna parameter, min_count handles all-NA and
empty series identically.
>>> pd.Series([np.nan]).prod()
1.0
>>> pd.Series([np.nan]).prod(min_count=1)
nan
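A brief sketch of product on a small DataFrame, combining skipna (on by default) with min_count; the frame here is illustrative:
```
>>> df = pd.DataFrame({'a': [1, 2, 3], 'b': [4.0, 5.0, np.nan]})
>>> df.prod()
a     6.0
b    20.0
dtype: float64
>>> df.prod(min_count=3)
a    6.0
b    NaN
dtype: float64
```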
|
reference/api/pandas.DataFrame.product.html
|
pandas.tseries.offsets.BusinessDay.base
|
`pandas.tseries.offsets.BusinessDay.base`
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
|
BusinessDay.base#
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
|
reference/api/pandas.tseries.offsets.BusinessDay.base.html
|
pandas.Index.isnull
|
`pandas.Index.isnull`
Detect missing values.
Return a boolean same-sized object indicating if the values are NA.
NA values, such as None, numpy.NaN or pd.NaT, get
mapped to True values.
Everything else gets mapped to False values. Characters such as
empty strings ‘’ or numpy.inf are not considered NA values
(unless you set pandas.options.mode.use_inf_as_na = True).
```
>>> idx = pd.Index([5.2, 6.0, np.NaN])
>>> idx
Float64Index([5.2, 6.0, nan], dtype='float64')
>>> idx.isna()
array([False, False, True])
```
|
Index.isnull()[source]#
Detect missing values.
Return a boolean same-sized object indicating if the values are NA.
NA values, such as None, numpy.NaN or pd.NaT, get
mapped to True values.
Everything else gets mapped to False values. Characters such as
empty strings ‘’ or numpy.inf are not considered NA values
(unless you set pandas.options.mode.use_inf_as_na = True).
Returns
numpy.ndarray[bool]A boolean array of whether my values are NA.
See also
Index.notnaBoolean inverse of isna.
Index.dropnaOmit entries with missing values.
isnaTop-level isna.
Series.isnaDetect missing values in Series object.
Examples
Show which entries in a pandas.Index are NA. The result is an
array.
>>> idx = pd.Index([5.2, 6.0, np.NaN])
>>> idx
Float64Index([5.2, 6.0, nan], dtype='float64')
>>> idx.isna()
array([False, False, True])
Empty strings are not considered NA values. None is considered an NA
value.
>>> idx = pd.Index(['black', '', 'red', None])
>>> idx
Index(['black', '', 'red', None], dtype='object')
>>> idx.isna()
array([False, False, False, True])
For datetimes, NaT (Not a Time) is considered as an NA value.
>>> idx = pd.DatetimeIndex([pd.Timestamp('1940-04-25'),
... pd.Timestamp(''), None, pd.NaT])
>>> idx
DatetimeIndex(['1940-04-25', 'NaT', 'NaT', 'NaT'],
dtype='datetime64[ns]', freq=None)
>>> idx.isna()
array([False, True, True, True])
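As a sketch, the boolean array can be used directly as a mask to drop the NA entries (equivalent to Index.dropna):
```
>>> idx = pd.Index([5.2, 6.0, np.NaN])
>>> idx[~idx.isna()]
Float64Index([5.2, 6.0], dtype='float64')
```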
|
reference/api/pandas.Index.isnull.html
|
pandas.Series.to_json
|
`pandas.Series.to_json`
Convert the object to a JSON string.
```
>>> import json
>>> df = pd.DataFrame(
... [["a", "b"], ["c", "d"]],
... index=["row 1", "row 2"],
... columns=["col 1", "col 2"],
... )
```
|
Series.to_json(path_or_buf=None, orient=None, date_format=None, double_precision=10, force_ascii=True, date_unit='ms', default_handler=None, lines=False, compression='infer', index=True, indent=None, storage_options=None)[source]#
Convert the object to a JSON string.
Note NaN’s and None will be converted to null and datetime objects
will be converted to UNIX timestamps.
Parameters
path_or_bufstr, path object, file-like object, or None, default NoneString, path object (implementing os.PathLike[str]), or file-like
object implementing a write() function. If None, the result is
returned as a string.
orientstrIndication of expected JSON string format.
Series:
default is ‘index’
allowed values are: {‘split’, ‘records’, ‘index’, ‘table’}.
DataFrame:
default is ‘columns’
allowed values are: {‘split’, ‘records’, ‘index’, ‘columns’,
‘values’, ‘table’}.
The format of the JSON string:
‘split’ : dict like {‘index’ -> [index], ‘columns’ -> [columns],
‘data’ -> [values]}
‘records’ : list like [{column -> value}, … , {column -> value}]
‘index’ : dict like {index -> {column -> value}}
‘columns’ : dict like {column -> {index -> value}}
‘values’ : just the values array
‘table’ : dict like {‘schema’: {schema}, ‘data’: {data}}
Describing the data, where data component is like orient='records'.
date_format{None, ‘epoch’, ‘iso’}Type of date conversion. ‘epoch’ = epoch milliseconds,
‘iso’ = ISO8601. The default depends on the orient. For
orient='table', the default is ‘iso’. For all other orients,
the default is ‘epoch’.
double_precisionint, default 10The number of decimal places to use when encoding
floating point values.
force_asciibool, default TrueForce encoded string to be ASCII.
date_unitstr, default ‘ms’ (milliseconds)The time unit to encode to, governs timestamp and ISO8601
precision. One of ‘s’, ‘ms’, ‘us’, ‘ns’ for second, millisecond,
microsecond, and nanosecond respectively.
default_handlercallable, default NoneHandler to call if object cannot otherwise be converted to a
suitable format for JSON. Should receive a single argument which is
the object to convert and return a serialisable object.
linesbool, default FalseIf ‘orient’ is ‘records’ write out line-delimited json format. Will
throw ValueError if incorrect ‘orient’ since others are not
list-like.
compressionstr or dict, default ‘infer’For on-the-fly compression of the output data. If ‘infer’ and ‘path_or_buf’ is
path-like, then detect compression from the following extensions: ‘.gz’,
‘.bz2’, ‘.zip’, ‘.xz’, ‘.zst’, ‘.tar’, ‘.tar.gz’, ‘.tar.xz’ or ‘.tar.bz2’
(otherwise no compression).
Set to None for no compression.
Can also be a dict with key 'method' set
to one of {'zip', 'gzip', 'bz2', 'zstd', 'tar'} and other
key-value pairs are forwarded to
zipfile.ZipFile, gzip.GzipFile,
bz2.BZ2File, zstandard.ZstdCompressor or
tarfile.TarFile, respectively.
As an example, the following could be passed for faster compression and to create
a reproducible gzip archive:
compression={'method': 'gzip', 'compresslevel': 1, 'mtime': 1}.
New in version 1.5.0: Added support for .tar files.
Changed in version 1.4.0: Zstandard support.
indexbool, default TrueWhether to include the index values in the JSON string. Not
including the index (index=False) is only supported when
orient is ‘split’ or ‘table’.
indentint, optionalLength of whitespace used to indent each record.
New in version 1.0.0.
storage_optionsdict, optionalExtra options that make sense for a particular storage connection, e.g.
host, port, username, password, etc. For HTTP(S) URLs the key-value pairs
are forwarded to urllib.request.Request as header options. For other
URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are
forwarded to fsspec.open. Please see fsspec and urllib for more
details, and for more examples on storage options refer here.
New in version 1.2.0.
Returns
None or strIf path_or_buf is None, returns the resulting json format as a
string. Otherwise returns None.
See also
read_jsonConvert a JSON string to pandas object.
Notes
The behavior of indent=0 varies from the stdlib, which does not
indent the output but does insert newlines. Currently, indent=0
and the default indent=None are equivalent in pandas, though this
may change in a future release.
orient='table' contains a ‘pandas_version’ field under ‘schema’.
This stores the version of pandas used in the latest revision of the
schema.
Examples
>>> import json
>>> df = pd.DataFrame(
... [["a", "b"], ["c", "d"]],
... index=["row 1", "row 2"],
... columns=["col 1", "col 2"],
... )
>>> result = df.to_json(orient="split")
>>> parsed = json.loads(result)
>>> json.dumps(parsed, indent=4)
{
"columns": [
"col 1",
"col 2"
],
"index": [
"row 1",
"row 2"
],
"data": [
[
"a",
"b"
],
[
"c",
"d"
]
]
}
Encoding/decoding a Dataframe using 'records' formatted JSON.
Note that index labels are not preserved with this encoding.
>>> result = df.to_json(orient="records")
>>> parsed = json.loads(result)
>>> json.dumps(parsed, indent=4)
[
{
"col 1": "a",
"col 2": "b"
},
{
"col 1": "c",
"col 2": "d"
}
]
Encoding/decoding a Dataframe using 'index' formatted JSON:
>>> result = df.to_json(orient="index")
>>> parsed = json.loads(result)
>>> json.dumps(parsed, indent=4)
{
"row 1": {
"col 1": "a",
"col 2": "b"
},
"row 2": {
"col 1": "c",
"col 2": "d"
}
}
Encoding/decoding a Dataframe using 'columns' formatted JSON:
>>> result = df.to_json(orient="columns")
>>> parsed = json.loads(result)
>>> json.dumps(parsed, indent=4)
{
"col 1": {
"row 1": "a",
"row 2": "c"
},
"col 2": {
"row 1": "b",
"row 2": "d"
}
}
Encoding/decoding a Dataframe using 'values' formatted JSON:
>>> result = df.to_json(orient="values")
>>> parsed = json.loads(result)
>>> json.dumps(parsed, indent=4)
[
[
"a",
"b"
],
[
"c",
"d"
]
]
Encoding with Table Schema:
>>> result = df.to_json(orient="table")
>>> parsed = json.loads(result)
>>> json.dumps(parsed, indent=4)
{
"schema": {
"fields": [
{
"name": "index",
"type": "string"
},
{
"name": "col 1",
"type": "string"
},
{
"name": "col 2",
"type": "string"
}
],
"primaryKey": [
"index"
],
"pandas_version": "1.4.0"
},
"data": [
{
"index": "row 1",
"col 1": "a",
"col 2": "b"
},
{
"index": "row 2",
"col 1": "c",
"col 2": "d"
}
]
}
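A minimal round-trip sketch using line-delimited records with on-the-fly gzip compression; the file name is illustrative:
```
>>> df.to_json("data.json.gz", orient="records", lines=True,
...            compression="gzip")
>>> pd.read_json("data.json.gz", orient="records", lines=True,
...              compression="gzip")
  col 1 col 2
0     a     b
1     c     d
```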
|
reference/api/pandas.Series.to_json.html
|
pandas.tseries.offsets.CustomBusinessMonthEnd.holidays
|
pandas.tseries.offsets.CustomBusinessMonthEnd.holidays
|
CustomBusinessMonthEnd.holidays#
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthEnd.holidays.html
|
pandas.Series.var
|
`pandas.Series.var`
Return unbiased variance over requested axis.
```
>>> df = pd.DataFrame({'person_id': [0, 1, 2, 3],
... 'age': [21, 25, 62, 43],
... 'height': [1.61, 1.87, 1.49, 2.01]}
... ).set_index('person_id')
>>> df
age height
person_id
0 21 1.61
1 25 1.87
2 62 1.49
3 43 2.01
```
|
Series.var(axis=None, skipna=True, level=None, ddof=1, numeric_only=None, **kwargs)[source]#
Return unbiased variance over requested axis.
Normalized by N-1 by default. This can be changed using the ddof argument.
Parameters
axis{index (0)}For Series this parameter is unused and defaults to 0.
skipnabool, default TrueExclude NA/null values. If an entire row/column is NA, the result
will be NA.
levelint or level name, default NoneIf the axis is a MultiIndex (hierarchical), count along a
particular level, collapsing into a scalar.
Deprecated since version 1.3.0: The level keyword is deprecated. Use groupby instead.
ddofint, default 1Delta Degrees of Freedom. The divisor used in calculations is N - ddof,
where N represents the number of elements.
numeric_onlybool, default NoneInclude only float, int, boolean columns. If None, will attempt to use
everything, then use only numeric data. Not implemented for Series.
Deprecated since version 1.5.0: Specifying numeric_only=None is deprecated. The default value will be
False in a future version of pandas.
Returns
scalar or Series (if level specified)
Examples
>>> df = pd.DataFrame({'person_id': [0, 1, 2, 3],
... 'age': [21, 25, 62, 43],
... 'height': [1.61, 1.87, 1.49, 2.01]}
... ).set_index('person_id')
>>> df
age height
person_id
0 21 1.61
1 25 1.87
2 62 1.49
3 43 2.01
>>> df.var()
age 352.916667
height 0.056367
dtype: float64
Alternatively, ddof=0 can be set to normalize by N instead of N-1:
>>> df.var(ddof=0)
age 264.687500
height 0.042275
dtype: float64
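For completeness, a small sketch on a Series itself:
```
>>> s = pd.Series([1, 2, 3, 4])
>>> s.var()
1.6666666666666667
>>> s.var(ddof=0)
1.25
```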
|
reference/api/pandas.Series.var.html
|
pandas.api.extensions.ExtensionDtype.na_value
|
`pandas.api.extensions.ExtensionDtype.na_value`
Default NA value to use for this type.
|
property ExtensionDtype.na_value[source]#
Default NA value to use for this type.
This is used in e.g. ExtensionArray.take. This should be the
user-facing “boxed” version of the NA value, not the physical NA value
for storage. e.g. for JSONArray, this is an empty dictionary.
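A brief sketch with two built-in extension dtypes (StringDtype boxes NA as pd.NA; dtypes that do not override the property fall back to the np.nan default):
```
>>> pd.StringDtype().na_value
<NA>
>>> pd.CategoricalDtype().na_value
nan
```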
|
reference/api/pandas.api.extensions.ExtensionDtype.na_value.html
|
pandas.Series.floordiv
|
`pandas.Series.floordiv`
Return Integer division of series and other, element-wise (binary operator floordiv).
Equivalent to series // other, but with support to substitute a fill_value for
missing data in either one of the inputs.
```
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.floordiv(b, fill_value=0)
a 1.0
b NaN
c NaN
d 0.0
e NaN
dtype: float64
```
|
Series.floordiv(other, level=None, fill_value=None, axis=0)[source]#
Return Integer division of series and other, element-wise (binary operator floordiv).
Equivalent to series // other, but with support to substitute a fill_value for
missing data in either one of the inputs.
Parameters
otherSeries or scalar value
levelint or nameBroadcast across a level, matching Index values on the
passed MultiIndex level.
fill_valueNone or float value, default None (NaN)Fill existing missing (NaN) values, and any new element needed for
successful Series alignment, with this value before computation.
If data in both corresponding Series locations is missing
the result of filling (at that location) will be missing.
axis{0 or ‘index’}Unused. Parameter needed for compatibility with DataFrame.
Returns
SeriesThe result of the operation.
See also
Series.rfloordivReverse of the Integer division operator, see Python documentation for more details.
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.floordiv(b, fill_value=0)
a 1.0
b NaN
c NaN
d 0.0
e NaN
dtype: float64
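Without a fill_value, the method matches the // operator, which propagates missing values after alignment:
```
>>> a // b
a    1.0
b    NaN
c    NaN
d    NaN
e    NaN
dtype: float64
```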
|
reference/api/pandas.Series.floordiv.html
|
pandas.tseries.offsets.BQuarterEnd.freqstr
|
`pandas.tseries.offsets.BQuarterEnd.freqstr`
Return a string representing the frequency.
```
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
```
|
BQuarterEnd.freqstr#
Return a string representing the frequency.
Examples
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
>>> pd.offsets.BusinessHour(2).freqstr
'2BH'
>>> pd.offsets.Nano().freqstr
'N'
>>> pd.offsets.Nano(-3).freqstr
'-3N'
|
reference/api/pandas.tseries.offsets.BQuarterEnd.freqstr.html
|
pandas.DataFrame.to_html
|
`pandas.DataFrame.to_html`
Render a DataFrame as an HTML table.
Buffer to write to. If None, the output is returned as a string.
|
DataFrame.to_html(buf=None, columns=None, col_space=None, header=True, index=True, na_rep='NaN', formatters=None, float_format=None, sparsify=None, index_names=True, justify=None, max_rows=None, max_cols=None, show_dimensions=False, decimal='.', bold_rows=True, classes=None, escape=True, notebook=False, border=None, table_id=None, render_links=False, encoding=None)[source]#
Render a DataFrame as an HTML table.
Parameters
bufstr, Path or StringIO-like, optional, default NoneBuffer to write to. If None, the output is returned as a string.
columnssequence, optional, default NoneThe subset of columns to write. Writes all columns by default.
col_spacestr or int, list or dict of int or str, optionalThe minimum width of each column in CSS length units. An int is assumed to be px units.
New in version 0.25.0: Ability to use str.
headerbool, optionalWhether to print column labels, default True.
indexbool, optional, default TrueWhether to print index (row) labels.
na_repstr, optional, default ‘NaN’String representation of NaN to use.
formatterslist, tuple or dict of one-param. functions, optionalFormatter functions to apply to columns’ elements by position or
name.
The result of each function must be a unicode string.
List/tuple must be of length equal to the number of columns.
float_formatone-parameter function, optional, default NoneFormatter function to apply to columns’ elements if they are
floats. This function must return a unicode string and will be
applied only to the non-NaN elements, with NaN being
handled by na_rep.
Changed in version 1.2.0.
sparsifybool, optional, default TrueSet to False for a DataFrame with a hierarchical index to print
every multiindex key at each row.
index_namesbool, optional, default TruePrints the names of the indexes.
justifystr, default NoneHow to justify the column labels. If None uses the option from
the print configuration (controlled by set_option), ‘right’ out
of the box. Valid values are
left
right
center
justify
justify-all
start
end
inherit
match-parent
initial
unset.
max_rowsint, optionalMaximum number of rows to display in the console.
max_colsint, optionalMaximum number of columns to display in the console.
show_dimensionsbool, default FalseDisplay DataFrame dimensions (number of rows by number of columns).
decimalstr, default ‘.’Character recognized as decimal separator, e.g. ‘,’ in Europe.
bold_rowsbool, default TrueMake the row labels bold in the output.
classesstr or list or tuple, default NoneCSS class(es) to apply to the resulting html table.
escapebool, default TrueConvert the characters <, >, and & to HTML-safe sequences.
notebook{True, False}, default FalseWhether the generated HTML is for IPython Notebook.
borderintA border=border attribute is included in the opening
<table> tag. Default pd.options.display.html.border.
table_idstr, optionalA css id is included in the opening <table> tag if specified.
render_linksbool, default FalseConvert URLs to HTML links.
encodingstr, default “utf-8”Set character encoding.
New in version 1.0.
Returns
str or NoneIf buf is None, returns the result as a string. Otherwise returns
None.
See also
to_stringConvert DataFrame to a string.
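A minimal sketch (the frame and CSS class name are illustrative):
```
>>> df = pd.DataFrame({"name": ["pandas"], "site": ["https://pandas.pydata.org"]})
>>> html = df.to_html(index=False, render_links=True, classes="my-table")
>>> "<a href=" in html  # render_links=True converts the URL to an anchor tag
True
```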
|
reference/api/pandas.DataFrame.to_html.html
|
pandas.core.window.rolling.Rolling.sum
|
`pandas.core.window.rolling.Rolling.sum`
Calculate the rolling sum.
```
>>> s = pd.Series([1, 2, 3, 4, 5])
>>> s
0 1
1 2
2 3
3 4
4 5
dtype: int64
```
|
Rolling.sum(numeric_only=False, *args, engine=None, engine_kwargs=None, **kwargs)[source]#
Calculate the rolling sum.
Parameters
numeric_onlybool, default FalseInclude only float, int, boolean columns.
New in version 1.5.0.
*argsFor NumPy compatibility and will not have an effect on the result.
Deprecated since version 1.5.0.
enginestr, default None
'cython' : Runs the operation through C-extensions from cython.
'numba' : Runs the operation through JIT compiled code from numba.
None : Defaults to 'cython' or globally setting compute.use_numba
New in version 1.3.0.
engine_kwargsdict, default None
For 'cython' engine, there are no accepted engine_kwargs
For 'numba' engine, the engine can accept nopython, nogil
and parallel dictionary keys. The values must either be True or
False. The default engine_kwargs for the 'numba' engine is
{'nopython': True, 'nogil': False, 'parallel': False}
New in version 1.3.0.
**kwargsFor NumPy compatibility and will not have an effect on the result.
Deprecated since version 1.5.0.
Returns
Series or DataFrameReturn type is the same as the original object with np.float64 dtype.
See also
pandas.Series.rollingCalling rolling with Series data.
pandas.DataFrame.rollingCalling rolling with DataFrames.
pandas.Series.sumAggregating sum for Series.
pandas.DataFrame.sumAggregating sum for DataFrame.
Notes
See Numba engine and Numba (JIT compilation) for extended documentation and performance considerations for the Numba engine.
Examples
>>> s = pd.Series([1, 2, 3, 4, 5])
>>> s
0 1
1 2
2 3
3 4
4 5
dtype: int64
>>> s.rolling(3).sum()
0 NaN
1 NaN
2 6.0
3 9.0
4 12.0
dtype: float64
>>> s.rolling(3, center=True).sum()
0 NaN
1 6.0
2 9.0
3 12.0
4 NaN
dtype: float64
For DataFrame, each rolling sum is computed column-wise.
>>> df = pd.DataFrame({"A": s, "B": s ** 2})
>>> df
A B
0 1 1
1 2 4
2 3 9
3 4 16
4 5 25
>>> df.rolling(3).sum()
A B
0 NaN NaN
1 NaN NaN
2 6.0 14.0
3 9.0 29.0
4 12.0 50.0
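min_periods (an argument to rolling(), not sum()) relaxes the requirement that a full window be present; a short sketch:
```
>>> s.rolling(3, min_periods=1).sum()
0     1.0
1     3.0
2     6.0
3     9.0
4    12.0
dtype: float64
```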
|
reference/api/pandas.core.window.rolling.Rolling.sum.html
|
pandas.DataFrame.groupby
|
`pandas.DataFrame.groupby`
Group DataFrame using a mapper or by a Series of columns.
A groupby operation involves some combination of splitting the
object, applying a function, and combining the results. This can be
used to group large amounts of data and compute operations on these
groups.
```
>>> df = pd.DataFrame({'Animal': ['Falcon', 'Falcon',
... 'Parrot', 'Parrot'],
... 'Max Speed': [380., 370., 24., 26.]})
>>> df
Animal Max Speed
0 Falcon 380.0
1 Falcon 370.0
2 Parrot 24.0
3 Parrot 26.0
>>> df.groupby(['Animal']).mean()
Max Speed
Animal
Falcon 375.0
Parrot 25.0
```
|
DataFrame.groupby(by=None, axis=0, level=None, as_index=True, sort=True, group_keys=_NoDefault.no_default, squeeze=_NoDefault.no_default, observed=False, dropna=True)[source]#
Group DataFrame using a mapper or by a Series of columns.
A groupby operation involves some combination of splitting the
object, applying a function, and combining the results. This can be
used to group large amounts of data and compute operations on these
groups.
Parameters
bymapping, function, label, or list of labelsUsed to determine the groups for the groupby.
If by is a function, it’s called on each value of the object’s
index. If a dict or Series is passed, the Series or dict VALUES
will be used to determine the groups (the Series’ values are first
aligned; see .align() method). If a list or ndarray of length
equal to the selected axis is passed (see the groupby user guide),
the values are used as-is to determine the groups. A label or list
of labels may be passed to group by the columns in self.
Notice that a tuple is interpreted as a (single) key.
axis{0 or ‘index’, 1 or ‘columns’}, default 0Split along rows (0) or columns (1). For Series this parameter
is unused and defaults to 0.
levelint, level name, or sequence of such, default NoneIf the axis is a MultiIndex (hierarchical), group by a particular
level or levels. Do not specify both by and level.
as_indexbool, default TrueFor aggregated output, return object with group labels as the
index. Only relevant for DataFrame input. as_index=False is
effectively “SQL-style” grouped output.
sortbool, default TrueSort group keys. Get better performance by turning this off.
Note this does not influence the order of observations within each
group. Groupby preserves the order of rows within each group.
group_keysbool, optionalWhen calling apply and the by argument produces a like-indexed
(i.e. a transform) result, add group keys to
index to identify pieces. By default group keys are not included
when the result’s index (and column) labels match the inputs, and
are included otherwise. This argument has no effect if the result produced
is not like-indexed with respect to the input.
Changed in version 1.5.0: Warns that group_keys will no longer be ignored when the
result from apply is a like-indexed Series or DataFrame.
Specify group_keys explicitly to include the group keys or
not.
squeezebool, default FalseReduce the dimensionality of the return type if possible,
otherwise return a consistent type.
Deprecated since version 1.1.0.
observedbool, default FalseThis only applies if any of the groupers are Categoricals.
If True: only show observed values for categorical groupers.
If False: show all values for categorical groupers.
dropnabool, default TrueIf True, and if group keys contain NA values, NA values together
with row/column will be dropped.
If False, NA values will also be treated as the key in groups.
New in version 1.1.0.
Returns
DataFrameGroupByReturns a groupby object that contains information about the groups.
See also
resampleConvenience method for frequency conversion and resampling of time series.
Notes
See the user guide for more
detailed usage and examples, including splitting an object into groups,
iterating through groups, selecting a group, aggregation, and more.
Examples
>>> df = pd.DataFrame({'Animal': ['Falcon', 'Falcon',
... 'Parrot', 'Parrot'],
... 'Max Speed': [380., 370., 24., 26.]})
>>> df
Animal Max Speed
0 Falcon 380.0
1 Falcon 370.0
2 Parrot 24.0
3 Parrot 26.0
>>> df.groupby(['Animal']).mean()
Max Speed
Animal
Falcon 375.0
Parrot 25.0
Hierarchical Indexes
We can groupby different levels of a hierarchical index
using the level parameter:
>>> arrays = [['Falcon', 'Falcon', 'Parrot', 'Parrot'],
... ['Captive', 'Wild', 'Captive', 'Wild']]
>>> index = pd.MultiIndex.from_arrays(arrays, names=('Animal', 'Type'))
>>> df = pd.DataFrame({'Max Speed': [390., 350., 30., 20.]},
... index=index)
>>> df
Max Speed
Animal Type
Falcon Captive 390.0
Wild 350.0
Parrot Captive 30.0
Wild 20.0
>>> df.groupby(level=0).mean()
Max Speed
Animal
Falcon 370.0
Parrot 25.0
>>> df.groupby(level="Type").mean()
Max Speed
Type
Captive 210.0
Wild 185.0
We can also choose to include NA in group keys or not by setting
dropna parameter, the default setting is True.
>>> l = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]]
>>> df = pd.DataFrame(l, columns=["a", "b", "c"])
>>> df.groupby(by=["b"]).sum()
a c
b
1.0 2 3
2.0 2 5
>>> df.groupby(by=["b"], dropna=False).sum()
a c
b
1.0 2 3
2.0 2 5
NaN 1 4
>>> l = [["a", 12, 12], [None, 12.3, 33.], ["b", 12.3, 123], ["a", 1, 1]]
>>> df = pd.DataFrame(l, columns=["a", "b", "c"])
>>> df.groupby(by="a").sum()
b c
a
a 13.0 13.0
b 12.3 123.0
>>> df.groupby(by="a", dropna=False).sum()
b c
a
a 13.0 13.0
b 12.3 123.0
NaN 12.3 33.0
When using .apply(), use group_keys to include or exclude the group keys.
The group_keys argument defaults to True (include).
>>> df = pd.DataFrame({'Animal': ['Falcon', 'Falcon',
... 'Parrot', 'Parrot'],
... 'Max Speed': [380., 370., 24., 26.]})
>>> df.groupby("Animal", group_keys=True).apply(lambda x: x)
Animal Max Speed
Animal
Falcon 0 Falcon 380.0
1 Falcon 370.0
Parrot 2 Parrot 24.0
3 Parrot 26.0
>>> df.groupby("Animal", group_keys=False).apply(lambda x: x)
Animal Max Speed
0 Falcon 380.0
1 Falcon 370.0
2 Parrot 24.0
3 Parrot 26.0
|
reference/api/pandas.DataFrame.groupby.html
|
pandas.Series.str.contains
|
`pandas.Series.str.contains`
Test if pattern or regex is contained within a string of a Series or Index.
```
>>> s1 = pd.Series(['Mouse', 'dog', 'house and parrot', '23', np.NaN])
>>> s1.str.contains('og', regex=False)
0 False
1 True
2 False
3 False
4 NaN
dtype: object
```
|
Series.str.contains(pat, case=True, flags=0, na=None, regex=True)[source]#
Test if pattern or regex is contained within a string of a Series or Index.
Return boolean Series or Index based on whether a given pattern or regex is
contained within a string of a Series or Index.
Parameters
patstrCharacter sequence or regular expression.
casebool, default TrueIf True, case sensitive.
flagsint, default 0 (no flags)Flags to pass through to the re module, e.g. re.IGNORECASE.
nascalar, optionalFill value for missing values. The default depends on dtype of the
array. For object-dtype, numpy.nan is used. For StringDtype,
pandas.NA is used.
regexbool, default TrueIf True, assumes the pat is a regular expression.
If False, treats the pat as a literal string.
Returns
Series or Index of boolean valuesA Series or Index of boolean values indicating whether the
given pattern is contained within the string of each element
of the Series or Index.
See also
matchAnalogous, but stricter, relying on re.match instead of re.search.
Series.str.startswithTest if the start of each string element matches a pattern.
Series.str.endswithSame as startswith, but tests the end of string.
Examples
Returning a Series of booleans using only a literal pattern.
>>> s1 = pd.Series(['Mouse', 'dog', 'house and parrot', '23', np.NaN])
>>> s1.str.contains('og', regex=False)
0 False
1 True
2 False
3 False
4 NaN
dtype: object
Returning an Index of booleans using only a literal pattern.
>>> ind = pd.Index(['Mouse', 'dog', 'house and parrot', '23.0', np.NaN])
>>> ind.str.contains('23', regex=False)
Index([False, False, False, True, nan], dtype='object')
Specifying case sensitivity using case.
>>> s1.str.contains('oG', case=True, regex=True)
0 False
1 False
2 False
3 False
4 NaN
dtype: object
Specifying na to be False instead of NaN replaces NaN values
with False. If Series or Index does not contain NaN values
the resultant dtype will be bool, otherwise, an object dtype.
>>> s1.str.contains('og', na=False, regex=True)
0 False
1 True
2 False
3 False
4 False
dtype: bool
Returning ‘house’ or ‘dog’ when either expression occurs in a string.
>>> s1.str.contains('house|dog', regex=True)
0 False
1 True
2 True
3 False
4 NaN
dtype: object
Ignoring case sensitivity using flags with regex.
>>> import re
>>> s1.str.contains('PARROT', flags=re.IGNORECASE, regex=True)
0 False
1 False
2 True
3 False
4 NaN
dtype: object
Returning any digit using regular expression.
>>> s1.str.contains('\\d', regex=True)
0 False
1 False
2 False
3 True
4 NaN
dtype: object
Ensure pat is not a literal pattern when regex is set to True.
Note in the following example one might expect only s2[1] and s2[3] to
return True. However, ‘.0’ as a regex matches any character
followed by a 0.
>>> s2 = pd.Series(['40', '40.0', '41', '41.0', '35'])
>>> s2.str.contains('.0', regex=True)
0 True
1 True
2 False
3 True
4 False
dtype: bool
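Escaping the dot restores the literal interpretation while keeping regex=True:
```
>>> s2.str.contains('\\.0', regex=True)
0    False
1     True
2    False
3     True
4    False
dtype: bool
```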
|
reference/api/pandas.Series.str.contains.html
|
pandas.tseries.offsets.DateOffset.is_year_end
|
`pandas.tseries.offsets.DateOffset.is_year_end`
Return boolean whether a timestamp occurs on the year end.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
```
|
DateOffset.is_year_end()#
Return boolean whether a timestamp occurs on the year end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
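And a timestamp that does fall on a year end (Dec 31) returns True:
```
>>> ts = pd.Timestamp(2022, 12, 31)
>>> freq.is_year_end(ts)
True
```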
|
reference/api/pandas.tseries.offsets.DateOffset.is_year_end.html
|
pandas.Flags.allows_duplicate_labels
|
`pandas.Flags.allows_duplicate_labels`
Whether this object allows duplicate labels.
Setting allows_duplicate_labels=False ensures that the
index (and columns of a DataFrame) are unique. Most methods
that accept and return a Series or DataFrame will propagate
the value of allows_duplicate_labels.
```
>>> df = pd.DataFrame({"A": [1, 2]}, index=['a', 'a'])
>>> df.flags.allows_duplicate_labels
True
>>> df.flags.allows_duplicate_labels = False
Traceback (most recent call last):
...
pandas.errors.DuplicateLabelError: Index has duplicates.
positions
label
a [0, 1]
```
|
property Flags.allows_duplicate_labels[source]#
Whether this object allows duplicate labels.
Setting allows_duplicate_labels=False ensures that the
index (and columns of a DataFrame) are unique. Most methods
that accept and return a Series or DataFrame will propagate
the value of allows_duplicate_labels.
See Duplicate Labels for more.
See also
DataFrame.attrsSet global metadata on this object.
DataFrame.set_flagsSet global flags on this object.
Examples
>>> df = pd.DataFrame({"A": [1, 2]}, index=['a', 'a'])
>>> df.flags.allows_duplicate_labels
True
>>> df.flags.allows_duplicate_labels = False
Traceback (most recent call last):
...
pandas.errors.DuplicateLabelError: Index has duplicates.
positions
label
a [0, 1]
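DataFrame.set_flags offers a chainable way to opt in at construction time; a brief sketch:
```
>>> df2 = pd.DataFrame({"A": [1, 2]}, index=['a', 'b'])
>>> df2 = df2.set_flags(allows_duplicate_labels=False)
>>> df2.flags.allows_duplicate_labels
False
```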
|
reference/api/pandas.Flags.allows_duplicate_labels.html
|
Window
|
Rolling objects are returned by .rolling calls: pandas.DataFrame.rolling(), pandas.Series.rolling(), etc.
Expanding objects are returned by .expanding calls: pandas.DataFrame.expanding(), pandas.Series.expanding(), etc.
ExponentialMovingWindow objects are returned by .ewm calls: pandas.DataFrame.ewm(), pandas.Series.ewm(), etc.
Rolling window functions#
Rolling.count([numeric_only])
Calculate the rolling count of non NaN observations.
Rolling.sum([numeric_only, engine, ...])
Calculate the rolling sum.
Rolling.mean([numeric_only, engine, ...])
Calculate the rolling mean.
Rolling.median([numeric_only, engine, ...])
Calculate the rolling median.
Rolling.var([ddof, numeric_only, engine, ...])
Calculate the rolling variance.
Rolling.std([ddof, numeric_only, engine, ...])
Calculate the rolling standard deviation.
Rolling.min([numeric_only, engine, ...])
Calculate the rolling minimum.
Rolling.max([numeric_only, engine, ...])
Calculate the rolling maximum.
Rolling.corr([other, pairwise, ddof, ...])
Calculate the rolling correlation.
Rolling.cov([other, pairwise, ddof, ...])
Calculate the rolling sample covariance.
Rolling.skew([numeric_only])
Calculate the rolling unbiased skewness.
Rolling.kurt([numeric_only])
Calculate the rolling Fisher's definition of kurtosis without bias.
Rolling.apply(func[, raw, engine, ...])
Calculate the rolling custom aggregation function.
Rolling.aggregate(func, *args, **kwargs)
Aggregate using one or more operations over the specified axis.
Rolling.quantile(quantile[, interpolation, ...])
Calculate the rolling quantile.
Rolling.sem([ddof, numeric_only])
Calculate the rolling standard error of mean.
Rolling.rank([method, ascending, pct, ...])
Calculate the rolling rank.
Weighted window functions#
Window.mean([numeric_only])
Calculate the rolling weighted window mean.
Window.sum([numeric_only])
Calculate the rolling weighted window sum.
Window.var([ddof, numeric_only])
Calculate the rolling weighted window variance.
Window.std([ddof, numeric_only])
Calculate the rolling weighted window standard deviation.
Expanding window functions#
Expanding.count([numeric_only])
Calculate the expanding count of non NaN observations.
Expanding.sum([numeric_only, engine, ...])
Calculate the expanding sum.
Expanding.mean([numeric_only, engine, ...])
Calculate the expanding mean.
Expanding.median([numeric_only, engine, ...])
Calculate the expanding median.
Expanding.var([ddof, numeric_only, engine, ...])
Calculate the expanding variance.
Expanding.std([ddof, numeric_only, engine, ...])
Calculate the expanding standard deviation.
Expanding.min([numeric_only, engine, ...])
Calculate the expanding minimum.
Expanding.max([numeric_only, engine, ...])
Calculate the expanding maximum.
Expanding.corr([other, pairwise, ddof, ...])
Calculate the expanding correlation.
Expanding.cov([other, pairwise, ddof, ...])
Calculate the expanding sample covariance.
Expanding.skew([numeric_only])
Calculate the expanding unbiased skewness.
Expanding.kurt([numeric_only])
Calculate the expanding Fisher's definition of kurtosis without bias.
Expanding.apply(func[, raw, engine, ...])
Calculate the expanding custom aggregation function.
Expanding.aggregate(func, *args, **kwargs)
Aggregate using one or more operations over the specified axis.
Expanding.quantile(quantile[, ...])
Calculate the expanding quantile.
Expanding.sem([ddof, numeric_only])
Calculate the expanding standard error of mean.
Expanding.rank([method, ascending, pct, ...])
Calculate the expanding rank.
Exponentially-weighted window functions#
ExponentialMovingWindow.mean([numeric_only, ...])
Calculate the ewm (exponential weighted moment) mean.
ExponentialMovingWindow.sum([numeric_only, ...])
Calculate the ewm (exponential weighted moment) sum.
ExponentialMovingWindow.std([bias, numeric_only])
Calculate the ewm (exponential weighted moment) standard deviation.
ExponentialMovingWindow.var([bias, numeric_only])
Calculate the ewm (exponential weighted moment) variance.
ExponentialMovingWindow.corr([other, ...])
Calculate the ewm (exponential weighted moment) sample correlation.
ExponentialMovingWindow.cov([other, ...])
Calculate the ewm (exponential weighted moment) sample covariance.
Window indexer#
Base class for defining custom window boundaries.
api.indexers.BaseIndexer([index_array, ...])
Base class for window bounds calculations.
api.indexers.FixedForwardWindowIndexer([...])
Creates window boundaries for fixed-length windows that include the current row.
api.indexers.VariableOffsetWindowIndexer([...])
Calculate window boundaries based on a non-fixed offset such as a BusinessDay.
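A minimal sketch of driving rolling() with a custom indexer (the window size and data are illustrative):
```
>>> from pandas.api.indexers import FixedForwardWindowIndexer
>>> s = pd.Series([1, 2, 3, 4, 5])
>>> indexer = FixedForwardWindowIndexer(window_size=2)  # forward-looking windows
>>> s.rolling(indexer, min_periods=1).sum()
0    3.0
1    5.0
2    7.0
3    9.0
4    5.0
dtype: float64
```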
|
reference/window.html
|
pandas.tseries.offsets.YearBegin.month
|
pandas.tseries.offsets.YearBegin.month
|
YearBegin.month#
|
reference/api/pandas.tseries.offsets.YearBegin.month.html
|
pandas.DataFrame.compare
|
`pandas.DataFrame.compare`
Compare to another DataFrame and show the differences.
```
>>> df = pd.DataFrame(
... {
... "col1": ["a", "a", "b", "b", "a"],
... "col2": [1.0, 2.0, 3.0, np.nan, 5.0],
... "col3": [1.0, 2.0, 3.0, 4.0, 5.0]
... },
... columns=["col1", "col2", "col3"],
... )
>>> df
col1 col2 col3
0 a 1.0 1.0
1 a 2.0 2.0
2 b 3.0 3.0
3 b NaN 4.0
4 a 5.0 5.0
```
|
DataFrame.compare(other, align_axis=1, keep_shape=False, keep_equal=False, result_names=('self', 'other'))[source]#
Compare to another DataFrame and show the differences.
New in version 1.1.0.
Parameters
otherDataFrameObject to compare with.
align_axis{0 or ‘index’, 1 or ‘columns’}, default 1Determine which axis to align the comparison on.
0, or ‘index’Resulting differences are stacked vertically with rows drawn alternately from self and other.
1, or ‘columns’Resulting differences are aligned horizontally with columns drawn alternately from self and other.
keep_shapebool, default FalseIf true, all rows and columns are kept.
Otherwise, only the ones with different values are kept.
keep_equalbool, default FalseIf true, the result keeps values that are equal.
Otherwise, equal values are shown as NaNs.
result_namestuple, default (‘self’, ‘other’)Set the dataframes names in the comparison.
New in version 1.5.0.
Returns
DataFrameDataFrame that shows the differences stacked side by side.
The resulting index will be a MultiIndex with ‘self’ and ‘other’
stacked alternately at the inner level.
Raises
ValueErrorWhen the two DataFrames don’t have identical labels or shape.
See also
Series.compareCompare with another Series and show differences.
DataFrame.equalsTest whether two objects contain the same elements.
Notes
Matching NaNs will not appear as a difference.
Can only compare identically-labeled
(i.e. same shape, identical row and column labels) DataFrames
Examples
>>> df = pd.DataFrame(
... {
... "col1": ["a", "a", "b", "b", "a"],
... "col2": [1.0, 2.0, 3.0, np.nan, 5.0],
... "col3": [1.0, 2.0, 3.0, 4.0, 5.0]
... },
... columns=["col1", "col2", "col3"],
... )
>>> df
col1 col2 col3
0 a 1.0 1.0
1 a 2.0 2.0
2 b 3.0 3.0
3 b NaN 4.0
4 a 5.0 5.0
>>> df2 = df.copy()
>>> df2.loc[0, 'col1'] = 'c'
>>> df2.loc[2, 'col3'] = 4.0
>>> df2
col1 col2 col3
0 c 1.0 1.0
1 a 2.0 2.0
2 b 3.0 4.0
3 b NaN 4.0
4 a 5.0 5.0
Align the differences on columns
>>> df.compare(df2)
col1 col3
self other self other
0 a c NaN NaN
2 NaN NaN 3.0 4.0
Assign result_names
>>> df.compare(df2, result_names=("left", "right"))
col1 col3
left right left right
0 a c NaN NaN
2 NaN NaN 3.0 4.0
Stack the differences on rows
>>> df.compare(df2, align_axis=0)
col1 col3
0 self a NaN
other c NaN
2 self NaN 3.0
other NaN 4.0
Keep the equal values
>>> df.compare(df2, keep_equal=True)
col1 col3
self other self other
0 a c 1.0 1.0
2 b b 3.0 4.0
Keep all original rows and columns
>>> df.compare(df2, keep_shape=True)
col1 col2 col3
self other self other self other
0 a c NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN 3.0 4.0
3 NaN NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN NaN
Keep all original rows and columns and also all original values
>>> df.compare(df2, keep_shape=True, keep_equal=True)
col1 col2 col3
self other self other self other
0 a c 1.0 1.0 1.0 1.0
1 a a 2.0 2.0 2.0 2.0
2 b b 3.0 3.0 3.0 4.0
3 b b NaN NaN 4.0 4.0
4 a a 5.0 5.0 5.0 5.0
|
reference/api/pandas.DataFrame.compare.html
|
pandas.tseries.offsets.Minute.name
|
`pandas.tseries.offsets.Minute.name`
Return a string representing the base frequency.
```
>>> pd.offsets.Hour().name
'H'
```
|
Minute.name#
Return a string representing the base frequency.
Examples
>>> pd.offsets.Hour().name
'H'
>>> pd.offsets.Hour(5).name
'H'
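The entry's own offset follows the same rule (the base frequency code for Minute is 'T'):
```
>>> pd.offsets.Minute().name
'T'
>>> pd.offsets.Minute(5).name
'T'
```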
|
reference/api/pandas.tseries.offsets.Minute.name.html
|
pandas.tseries.offsets.BusinessHour.onOffset
|
pandas.tseries.offsets.BusinessHour.onOffset
|
BusinessHour.onOffset()#
|
reference/api/pandas.tseries.offsets.BusinessHour.onOffset.html
|
pandas.DatetimeIndex.second
|
`pandas.DatetimeIndex.second`
The seconds of the datetime.
```
>>> datetime_series = pd.Series(
... pd.date_range("2000-01-01", periods=3, freq="s")
... )
>>> datetime_series
0 2000-01-01 00:00:00
1 2000-01-01 00:00:01
2 2000-01-01 00:00:02
dtype: datetime64[ns]
>>> datetime_series.dt.second
0 0
1 1
2 2
dtype: int64
```
|
property DatetimeIndex.second[source]#
The seconds of the datetime.
Examples
>>> datetime_series = pd.Series(
... pd.date_range("2000-01-01", periods=3, freq="s")
... )
>>> datetime_series
0 2000-01-01 00:00:00
1 2000-01-01 00:00:01
2 2000-01-01 00:00:02
dtype: datetime64[ns]
>>> datetime_series.dt.second
0 0
1 1
2 2
dtype: int64
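The property is also available on the index itself; a brief sketch:
```
>>> idx = pd.date_range("2000-01-01", periods=3, freq="s")
>>> idx.second
Int64Index([0, 1, 2], dtype='int64')
```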
|
reference/api/pandas.DatetimeIndex.second.html
|
pandas.tseries.offsets.SemiMonthBegin.freqstr
|
`pandas.tseries.offsets.SemiMonthBegin.freqstr`
Return a string representing the frequency.
```
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
```
|
SemiMonthBegin.freqstr#
Return a string representing the frequency.
Examples
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
>>> pd.offsets.BusinessHour(2).freqstr
'2BH'
>>> pd.offsets.Nano().freqstr
'N'
>>> pd.offsets.Nano(-3).freqstr
'-3N'
|
reference/api/pandas.tseries.offsets.SemiMonthBegin.freqstr.html
|
pandas.Series.str.partition
|
`pandas.Series.str.partition`
Split the string at the first occurrence of sep.
This method splits the string at the first occurrence of sep,
and returns 3 elements containing the part before the separator,
the separator itself, and the part after the separator.
If the separator is not found, return 3 elements containing the string itself, followed by two empty strings.
```
>>> s = pd.Series(['Linda van der Berg', 'George Pitt-Rivers'])
>>> s
0 Linda van der Berg
1 George Pitt-Rivers
dtype: object
```
|
Series.str.partition(sep=' ', expand=True)[source]#
Split the string at the first occurrence of sep.
This method splits the string at the first occurrence of sep,
and returns 3 elements containing the part before the separator,
the separator itself, and the part after the separator.
If the separator is not found, return 3 elements containing the string itself, followed by two empty strings.
Parameters
sepstr, default whitespaceString to split on.
expandbool, default TrueIf True, return DataFrame/MultiIndex expanding dimensionality.
If False, return Series/Index.
Returns
DataFrame/MultiIndex or Series/Index of objects
See also
rpartitionSplit the string at the last occurrence of sep.
Series.str.splitSplit strings around given separators.
str.partitionStandard library version.
Examples
>>> s = pd.Series(['Linda van der Berg', 'George Pitt-Rivers'])
>>> s
0 Linda van der Berg
1 George Pitt-Rivers
dtype: object
>>> s.str.partition()
0 1 2
0 Linda van der Berg
1 George Pitt-Rivers
To partition by the last space instead of the first one:
>>> s.str.rpartition()
0 1 2
0 Linda van der Berg
1 George Pitt-Rivers
To partition by something different than a space:
>>> s.str.partition('-')
0 1 2
0 Linda van der Berg
1 George Pitt - Rivers
To return a Series containing tuples instead of a DataFrame:
>>> s.str.partition('-', expand=False)
0 (Linda van der Berg, , )
1 (George Pitt, -, Rivers)
dtype: object
Also available on indices:
>>> idx = pd.Index(['X 123', 'Y 999'])
>>> idx
Index(['X 123', 'Y 999'], dtype='object')
Which will create a MultiIndex:
>>> idx.str.partition()
MultiIndex([('X', ' ', '123'),
('Y', ' ', '999')],
)
Or an index with tuples with expand=False:
>>> idx.str.partition(expand=False)
Index([('X', ' ', '123'), ('Y', ' ', '999')], dtype='object')
|
reference/api/pandas.Series.str.partition.html
|
pandas.tseries.offsets.CustomBusinessMonthBegin.is_anchored
|
`pandas.tseries.offsets.CustomBusinessMonthBegin.is_anchored`
Return boolean whether the frequency is a unit frequency (n=1).
Examples
```
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
```
|
CustomBusinessMonthBegin.is_anchored()#
Return boolean whether the frequency is a unit frequency (n=1).
Examples
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthBegin.is_anchored.html
|
pandas.DataFrame.truncate
|
`pandas.DataFrame.truncate`
Truncate a Series or DataFrame before and after some index value.
```
>>> df = pd.DataFrame({'A': ['a', 'b', 'c', 'd', 'e'],
... 'B': ['f', 'g', 'h', 'i', 'j'],
... 'C': ['k', 'l', 'm', 'n', 'o']},
... index=[1, 2, 3, 4, 5])
>>> df
A B C
1 a f k
2 b g l
3 c h m
4 d i n
5 e j o
```
|
DataFrame.truncate(before=None, after=None, axis=None, copy=True)[source]#
Truncate a Series or DataFrame before and after some index value.
This is a useful shorthand for boolean indexing based on index
values above or below certain thresholds.
Parameters
beforedate, str, intTruncate all rows before this index value.
afterdate, str, intTruncate all rows after this index value.
axis{0 or ‘index’, 1 or ‘columns’}, optionalAxis to truncate. Truncates the index (rows) by default.
For Series this parameter is unused and defaults to 0.
copybool, default is True,Return a copy of the truncated section.
Returns
type of callerThe truncated Series or DataFrame.
See also
DataFrame.locSelect a subset of a DataFrame by label.
DataFrame.ilocSelect a subset of a DataFrame by position.
Notes
If the index being truncated contains only datetime values,
before and after may be specified as strings instead of
Timestamps.
Examples
>>> df = pd.DataFrame({'A': ['a', 'b', 'c', 'd', 'e'],
... 'B': ['f', 'g', 'h', 'i', 'j'],
... 'C': ['k', 'l', 'm', 'n', 'o']},
... index=[1, 2, 3, 4, 5])
>>> df
A B C
1 a f k
2 b g l
3 c h m
4 d i n
5 e j o
>>> df.truncate(before=2, after=4)
A B C
2 b g l
3 c h m
4 d i n
The columns of a DataFrame can be truncated.
>>> df.truncate(before="A", after="B", axis="columns")
A B
1 a f
2 b g
3 c h
4 d i
5 e j
For Series, only rows can be truncated.
>>> df['A'].truncate(before=2, after=4)
2 b
3 c
4 d
Name: A, dtype: object
The index values in truncate can be datetimes or string
dates.
>>> dates = pd.date_range('2016-01-01', '2016-02-01', freq='s')
>>> df = pd.DataFrame(index=dates, data={'A': 1})
>>> df.tail()
A
2016-01-31 23:59:56 1
2016-01-31 23:59:57 1
2016-01-31 23:59:58 1
2016-01-31 23:59:59 1
2016-02-01 00:00:00 1
>>> df.truncate(before=pd.Timestamp('2016-01-05'),
... after=pd.Timestamp('2016-01-10')).tail()
A
2016-01-09 23:59:56 1
2016-01-09 23:59:57 1
2016-01-09 23:59:58 1
2016-01-09 23:59:59 1
2016-01-10 00:00:00 1
Because the index is a DatetimeIndex containing only dates, we can
specify before and after as strings. They will be coerced to
Timestamps before truncation.
>>> df.truncate('2016-01-05', '2016-01-10').tail()
A
2016-01-09 23:59:56 1
2016-01-09 23:59:57 1
2016-01-09 23:59:58 1
2016-01-09 23:59:59 1
2016-01-10 00:00:00 1
Note that truncate assumes a 0 value for any unspecified time
component (midnight). This differs from partial string slicing, which
returns any partially matching dates.
>>> df.loc['2016-01-05':'2016-01-10', :].tail()
A
2016-01-10 23:59:55 1
2016-01-10 23:59:56 1
2016-01-10 23:59:57 1
2016-01-10 23:59:58 1
2016-01-10 23:59:59 1
|
reference/api/pandas.DataFrame.truncate.html
|
pandas.Series.sparse.fill_value
|
`pandas.Series.sparse.fill_value`
Elements in data that are fill_value are not stored.
For memory savings, this should be the most common value in the array.
|
Series.sparse.fill_value[source]#
Elements in data that are fill_value are not stored.
For memory savings, this should be the most common value in the array.
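A minimal sketch with a sparse Series (fill_value 0 means the zeros are not physically stored):
```
>>> from pandas.arrays import SparseArray
>>> s = pd.Series(SparseArray([0, 0, 1, 2], fill_value=0))
>>> s.sparse.fill_value
0
```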
|
reference/api/pandas.Series.sparse.fill_value.html
|
pandas.Series.str.islower
|
`pandas.Series.str.islower`
Check whether all characters in each string are lowercase.
```
>>> s1 = pd.Series(['one', 'one1', '1', ''])
```
|
Series.str.islower()[source]#
Check whether all characters in each string are lowercase.
This is equivalent to running the Python string method
str.islower() for each element of the Series/Index. If a string
has zero characters, False is returned for that check.
Returns
Series or Index of boolSeries or Index of boolean values with the same length as the original
Series/Index.
See also
Series.str.isalphaCheck whether all characters are alphabetic.
Series.str.isnumericCheck whether all characters are numeric.
Series.str.isalnumCheck whether all characters are alphanumeric.
Series.str.isdigitCheck whether all characters are digits.
Series.str.isdecimalCheck whether all characters are decimal.
Series.str.isspaceCheck whether all characters are whitespace.
Series.str.islowerCheck whether all characters are lowercase.
Series.str.isupperCheck whether all characters are uppercase.
Series.str.istitleCheck whether all characters are titlecase.
Examples
Checks for Alphabetic and Numeric Characters
>>> s1 = pd.Series(['one', 'one1', '1', ''])
>>> s1.str.isalpha()
0 True
1 False
2 False
3 False
dtype: bool
>>> s1.str.isnumeric()
0 False
1 False
2 True
3 False
dtype: bool
>>> s1.str.isalnum()
0 True
1 True
2 True
3 False
dtype: bool
Note that checks against characters mixed with any additional punctuation
or whitespace will evaluate to false for an alphanumeric check.
>>> s2 = pd.Series(['A B', '1.5', '3,000'])
>>> s2.str.isalnum()
0 False
1 False
2 False
dtype: bool
More Detailed Checks for Numeric Characters
There are several different but overlapping sets of numeric characters that
can be checked for.
>>> s3 = pd.Series(['23', '³', '⅕', ''])
The s3.str.isdecimal method checks for characters used to form numbers
in base 10.
>>> s3.str.isdecimal()
0 True
1 False
2 False
3 False
dtype: bool
The s.str.isdigit method is the same as s3.str.isdecimal but also
includes special digits, like superscripted and subscripted digits in
unicode.
>>> s3.str.isdigit()
0 True
1 True
2 False
3 False
dtype: bool
The s.str.isnumeric method is the same as s3.str.isdigit but also
includes other characters that can represent quantities such as unicode
fractions.
>>> s3.str.isnumeric()
0 True
1 True
2 True
3 False
dtype: bool
Checks for Whitespace
>>> s4 = pd.Series([' ', '\t\r\n ', ''])
>>> s4.str.isspace()
0 True
1 True
2 False
dtype: bool
Checks for Character Case
>>> s5 = pd.Series(['leopard', 'Golden Eagle', 'SNAKE', ''])
>>> s5.str.islower()
0 True
1 False
2 False
3 False
dtype: bool
>>> s5.str.isupper()
0 False
1 False
2 True
3 False
dtype: bool
The s5.str.istitle method checks for whether all words are in title
case (whether only the first letter of each word is capitalized). Words are
assumed to be any sequence of non-numeric characters separated by
whitespace characters.
>>> s5.str.istitle()
0 False
1 True
2 False
3 False
dtype: bool
|
reference/api/pandas.Series.str.islower.html
|
pandas.get_option
|
`pandas.get_option`
Retrieves the value of the specified option.
|
pandas.get_option(pat) = <pandas._config.config.CallableDynamicDoc object>#
Retrieves the value of the specified option.
Available options:
compute.[use_bottleneck, use_numba, use_numexpr]
display.[chop_threshold, colheader_justify, column_space, date_dayfirst,
date_yearfirst, encoding, expand_frame_repr, float_format]
display.html.[border, table_schema, use_mathjax]
display.[large_repr]
display.latex.[escape, longtable, multicolumn, multicolumn_format, multirow,
repr]
display.[max_categories, max_columns, max_colwidth, max_dir_items,
max_info_columns, max_info_rows, max_rows, max_seq_items, memory_usage,
min_rows, multi_sparse, notebook_repr_html, pprint_nest_depth, precision,
show_dimensions]
display.unicode.[ambiguous_as_wide, east_asian_width]
display.[width]
io.excel.ods.[reader, writer]
io.excel.xls.[reader, writer]
io.excel.xlsb.[reader]
io.excel.xlsm.[reader, writer]
io.excel.xlsx.[reader, writer]
io.hdf.[default_format, dropna_table]
io.parquet.[engine]
io.sql.[engine]
mode.[chained_assignment, copy_on_write, data_manager, sim_interactive,
string_storage, use_inf_as_na, use_inf_as_null]
plotting.[backend]
plotting.matplotlib.[register_converters]
styler.format.[decimal, escape, formatter, na_rep, precision, thousands]
styler.html.[mathjax]
styler.latex.[environment, hrules, multicol_align, multirow_align]
styler.render.[encoding, max_columns, max_elements, max_rows, repr]
styler.sparse.[columns, index]
Parameters
patstrRegexp which should match a single option.
Note: partial matches are supported for convenience, but unless you use the
full option name (e.g. x.y.z.option_name), your code may break in future
versions if new options with similar names are introduced.
Returns
resultthe value of the option
Raises
OptionErrorif no such option exists
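A minimal usage sketch (the value shown assumes the default configuration):
```
>>> pd.get_option("display.max_rows")
60
```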
Notes
Please reference the User Guide for more information.
The available options with its descriptions:
compute.use_bottleneckboolUse the bottleneck library to accelerate if it is installed,
the default is True
Valid values: False,True
[default: True] [currently: True]
compute.use_numbaboolUse the numba engine option for select operations if it is installed,
the default is False
Valid values: False,True
[default: False] [currently: False]
compute.use_numexprboolUse the numexpr library to accelerate computation if it is installed,
the default is True
Valid values: False,True
[default: True] [currently: True]
display.chop_thresholdfloat or Noneif set to a float value, all float values smaller than the given threshold
will be displayed as exactly 0 by repr and friends.
[default: None] [currently: None]
display.colheader_justify‘left’/’right’Controls the justification of column headers. used by DataFrameFormatter.
[default: right] [currently: right]
display.column_space No description available.[default: 12] [currently: 12]
display.date_dayfirstbooleanWhen True, prints and parses dates with the day first, eg 20/01/2005
[default: False] [currently: False]
display.date_yearfirstbooleanWhen True, prints and parses dates with the year first, eg 2005/01/20
[default: False] [currently: False]
display.encodingstr/unicodeDefaults to the detected encoding of the console.
Specifies the encoding to be used for strings returned by to_string,
these are generally strings meant to be displayed on the console.
[default: utf-8] [currently: utf-8]
display.expand_frame_reprbooleanWhether to print out the full DataFrame repr for wide DataFrames across
multiple lines, max_columns is still respected, but the output will
wrap-around across multiple “pages” if its width exceeds display.width.
[default: True] [currently: True]
display.float_formatcallableThe callable should accept a floating point number and return
a string with the desired format of the number. This is used
in some places like SeriesFormatter.
See formats.format.EngFormatter for an example.
[default: None] [currently: None]
display.html.borderintA border=value attribute is inserted in the <table> tag
for the DataFrame HTML repr.
[default: 1] [currently: 1]
display.html.table_schemabooleanWhether to publish a Table Schema representation for frontends
that support it.
(default: False)
[default: False] [currently: False]
display.html.use_mathjaxbooleanWhen True, Jupyter notebook will process table contents using MathJax,
rendering mathematical expressions enclosed by the dollar symbol.
(default: True)
[default: True] [currently: True]
display.large_repr‘truncate’/’info’For DataFrames exceeding max_rows/max_cols, the repr (and HTML repr) can
show a truncated table (the default from 0.13), or switch to the view from
df.info() (the behaviour in earlier versions of pandas).
[default: truncate] [currently: truncate]
display.latex.escapeboolThis specifies if the to_latex method of a Dataframe escapes special
characters.
Valid values: False,True
[default: True] [currently: True]
display.latex.longtableboolThis specifies if the to_latex method of a Dataframe uses the longtable
format.
Valid values: False,True
[default: False] [currently: False]
display.latex.multicolumnboolThis specifies if the to_latex method of a Dataframe uses multicolumns
to pretty-print MultiIndex columns.
Valid values: False,True
[default: True] [currently: True]
display.latex.multicolumn_formatstrThe column alignment used by the to_latex method when pretty-printing
MultiIndex columns as multicolumns (e.g. ‘l’, ‘c’, ‘r’).
[default: l] [currently: l]
display.latex.multirowboolThis specifies if the to_latex method of a Dataframe uses multirows
to pretty-print MultiIndex rows.
Valid values: False,True
[default: False] [currently: False]
display.latex.reprbooleanWhether to produce a latex DataFrame representation for jupyter
environments that support it.
(default: False)
[default: False] [currently: False]
display.max_categoriesintThis sets the maximum number of categories pandas should output when
printing out a Categorical or a Series of dtype “category”.
[default: 8] [currently: 8]
display.max_columnsintIf max_cols is exceeded, switch to truncate view. Depending on
large_repr, objects are either centrally truncated or printed as
a summary view. ‘None’ value means unlimited.
In case python/IPython is running in a terminal and large_repr
equals ‘truncate’ this can be set to 0 and pandas will auto-detect
the width of the terminal and print a truncated object which fits
the screen width. The IPython notebook, IPython qtconsole, or IDLE
do not run in a terminal and hence it is not possible to do
correct auto-detection.
[default: 0] [currently: 0]
display.max_colwidthint or NoneThe maximum width in characters of a column in the repr of
a pandas data structure. When the column overflows, a “…”
placeholder is embedded in the output. A ‘None’ value means unlimited.
[default: 50] [currently: 50]
display.max_dir_itemsintThe number of items that will be added to dir(…). ‘None’ value means
unlimited. Because dir is cached, changing this option will not immediately
affect already existing dataframes until a column is deleted or added.
This is for instance used to suggest columns from a dataframe to tab
completion.
[default: 100] [currently: 100]
display.max_info_columnsintmax_info_columns is used in DataFrame.info method to decide if
per column information will be printed.
[default: 100] [currently: 100]
display.max_info_rowsint or Nonedf.info() will usually show null-counts for each column.
For large frames this can be quite slow. max_info_rows and max_info_cols
limit this null check only to frames with smaller dimensions than
specified.
[default: 1690785] [currently: 1690785]
display.max_rowsintIf max_rows is exceeded, switch to truncate view. Depending on
large_repr, objects are either centrally truncated or printed as
a summary view. ‘None’ value means unlimited.
In case python/IPython is running in a terminal and large_repr
equals ‘truncate’ this can be set to 0 and pandas will auto-detect
the height of the terminal and print a truncated object which fits
the screen height. The IPython notebook, IPython qtconsole, or
IDLE do not run in a terminal and hence it is not possible to do
correct auto-detection.
[default: 60] [currently: 60]
display.max_seq_itemsint or NoneWhen pretty-printing a long sequence, no more than max_seq_items
will be printed. If items are omitted, they will be denoted by the
addition of “…” to the resulting string.
If set to None, the number of items to be printed is unlimited.
[default: 100] [currently: 100]
display.memory_usagebool, string or NoneThis specifies if the memory usage of a DataFrame should be displayed when
df.info() is called. Valid values True,False,’deep’
[default: True] [currently: True]
display.min_rowsintThe numbers of rows to show in a truncated view (when max_rows is
exceeded). Ignored when max_rows is set to None or 0. When set to
None, follows the value of max_rows.
[default: 10] [currently: 10]
display.multi_sparseboolean“sparsify” MultiIndex display (don’t display repeated
elements in outer levels within groups)
[default: True] [currently: True]
display.notebook_repr_htmlbooleanWhen True, IPython notebook will use html representation for
pandas objects (if it is available).
[default: True] [currently: True]
display.pprint_nest_depthintControls the number of nested levels to process when pretty-printing
[default: 3] [currently: 3]
display.precisionintFloating point output precision in terms of number of places after the
decimal, for regular formatting as well as scientific notation. Similar
to precision in numpy.set_printoptions().
[default: 6] [currently: 6]
display.show_dimensionsboolean or ‘truncate’Whether to print out dimensions at the end of DataFrame repr.
If ‘truncate’ is specified, only print out the dimensions if the
frame is truncated (e.g. not display all rows and/or columns)
[default: truncate] [currently: truncate]
display.unicode.ambiguous_as_widebooleanWhether to use the Unicode East Asian Width to calculate the display text
width.
Enabling this may affect performance. (default: False)
[default: False] [currently: False]
display.unicode.east_asian_widthbooleanWhether to use the Unicode East Asian Width to calculate the display text
width.
Enabling this may affect performance. (default: False)
[default: False] [currently: False]
display.widthintWidth of the display in characters. In case python/IPython is running in
a terminal this can be set to None and pandas will correctly auto-detect
the width.
Note that the IPython notebook, IPython qtconsole, or IDLE do not run in a
terminal and hence it is not possible to correctly detect the width.
[default: 80] [currently: 80]
io.excel.ods.readerstringThe default Excel reader engine for ‘ods’ files. Available options:
auto, odf.
[default: auto] [currently: auto]
io.excel.ods.writerstringThe default Excel writer engine for ‘ods’ files. Available options:
auto, odf.
[default: auto] [currently: auto]
io.excel.xls.readerstringThe default Excel reader engine for ‘xls’ files. Available options:
auto, xlrd.
[default: auto] [currently: auto]
io.excel.xls.writerstringThe default Excel writer engine for ‘xls’ files. Available options:
auto, xlwt.
[default: auto] [currently: auto]
(Deprecated, use `` instead.)
io.excel.xlsb.readerstringThe default Excel reader engine for ‘xlsb’ files. Available options:
auto, pyxlsb.
[default: auto] [currently: auto]
io.excel.xlsm.readerstringThe default Excel reader engine for ‘xlsm’ files. Available options:
auto, xlrd, openpyxl.
[default: auto] [currently: auto]
io.excel.xlsm.writerstringThe default Excel writer engine for ‘xlsm’ files. Available options:
auto, openpyxl.
[default: auto] [currently: auto]
io.excel.xlsx.readerstringThe default Excel reader engine for ‘xlsx’ files. Available options:
auto, xlrd, openpyxl.
[default: auto] [currently: auto]
io.excel.xlsx.writerstringThe default Excel writer engine for ‘xlsx’ files. Available options:
auto, openpyxl, xlsxwriter.
[default: auto] [currently: auto]
io.hdf.default_formatformatDefault writing format; if None, then
put will default to ‘fixed’ and append will default to ‘table’
[default: None] [currently: None]
io.hdf.dropna_tablebooleandrop ALL nan rows when appending to a table
[default: False] [currently: False]
io.parquet.enginestringThe default parquet reader/writer engine. Available options:
‘auto’, ‘pyarrow’, ‘fastparquet’, the default is ‘auto’
[default: auto] [currently: auto]
io.sql.enginestringThe default sql reader/writer engine. Available options:
‘auto’, ‘sqlalchemy’, the default is ‘auto’
[default: auto] [currently: auto]
mode.chained_assignmentstringRaise an exception, warn, or take no action if trying to use chained assignment.
The default is warn.
[default: warn] [currently: warn]
mode.copy_on_writeboolUse new copy-view behaviour using Copy-on-Write. Defaults to False,
unless overridden by the ‘PANDAS_COPY_ON_WRITE’ environment variable
(if set to “1” for True, needs to be set before pandas is imported).
[default: False] [currently: False]
mode.data_managerstringInternal data manager type; can be “block” or “array”. Defaults to “block”,
unless overridden by the ‘PANDAS_DATA_MANAGER’ environment variable (needs
to be set before pandas is imported).
[default: block] [currently: block]
mode.sim_interactivebooleanWhether to simulate interactive mode for purposes of testing
[default: False] [currently: False]
mode.string_storagestringThe default storage for StringDtype.
[default: python] [currently: python]
mode.use_inf_as_nabooleanTrue means treat None, NaN, INF, -INF as NA (old way),
False means None and NaN are null, but INF, -INF are not NA
(new way).
[default: False] [currently: False]
mode.use_inf_as_nullbooleanuse_inf_as_null has been deprecated and will be removed in a future
version. Use use_inf_as_na instead.
[default: False] [currently: False]
(Deprecated, use mode.use_inf_as_na instead.)
plotting.backendstrThe plotting backend to use. The default value is “matplotlib”, the
backend provided with pandas. Other backends can be specified by
providing the name of the module that implements the backend.
[default: matplotlib] [currently: matplotlib]
plotting.matplotlib.register_convertersbool or ‘auto’.Whether to register converters with matplotlib’s units registry for
dates, times, datetimes, and Periods. Toggling to False will remove
the converters, restoring any converters that pandas overwrote.
[default: auto] [currently: auto]
styler.format.decimalstrThe character representation for the decimal separator for floats and complex.
[default: .] [currently: .]
styler.format.escapestr, optionalWhether to escape certain characters according to the given context; html or latex.
[default: None] [currently: None]
styler.format.formatterstr, callable, dict, optionalA formatter object to be used as default within Styler.format.
[default: None] [currently: None]
styler.format.na_repstr, optionalThe string representation for values identified as missing.
[default: None] [currently: None]
styler.format.precisionintThe precision for floats and complex numbers.
[default: 6] [currently: 6]
styler.format.thousandsstr, optionalThe character representation for thousands separator for floats, int and complex.
[default: None] [currently: None]
styler.html.mathjaxboolIf False will render special CSS classes to table attributes that indicate Mathjax
will not be used in Jupyter Notebook.
[default: True] [currently: True]
styler.latex.environmentstrThe environment to replace \begin{table}. If “longtable” is used results
in a specific longtable environment format.
[default: None] [currently: None]
styler.latex.hrulesboolWhether to add horizontal rules on top and bottom and below the headers.
[default: False] [currently: False]
styler.latex.multicol_align{“r”, “c”, “l”, “naive-l”, “naive-r”}The specifier for horizontal alignment of sparsified LaTeX multicolumns. Pipe
decorators can also be added to non-naive values to draw vertical
rules, e.g. “|r” will draw a rule on the left side of right aligned merged cells.
[default: r] [currently: r]
styler.latex.multirow_align{“c”, “t”, “b”}The specifier for vertical alignment of sparsified LaTeX multirows.
[default: c] [currently: c]
styler.render.encodingstrThe encoding used for output HTML and LaTeX files.
[default: utf-8] [currently: utf-8]
styler.render.max_columnsint, optionalThe maximum number of columns that will be rendered. May still be reduced to
satisfy max_elements, which takes precedence.
[default: None] [currently: None]
styler.render.max_elementsintThe maximum number of data-cell (<td>) elements that will be rendered before
trimming will occur over columns, rows or both if needed.
[default: 262144] [currently: 262144]
styler.render.max_rowsint, optionalThe maximum number of rows that will be rendered. May still be reduced to
satisfy max_elements, which takes precedence.
[default: None] [currently: None]
styler.render.reprstrDetermine which output to use in Jupyter Notebook in {“html”, “latex”}.
[default: html] [currently: html]
styler.sparse.columnsboolWhether to sparsify the display of hierarchical columns. Setting to False will
display each explicit level element in a hierarchical key for each column.
[default: True] [currently: True]
styler.sparse.indexboolWhether to sparsify the display of a hierarchical index. Setting to False will
display each explicit level element in a hierarchical key for each row.
[default: True] [currently: True]
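These options are read and modified through pandas’ option API; a minimal sketch (option names come from the table above, and the return values assume the listed defaults):
>>> import pandas as pd
>>> pd.get_option("display.max_rows")
60
>>> pd.set_option("display.max_rows", 100)
>>> pd.get_option("display.max_rows")
100
>>> pd.reset_option("display.max_rows")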
|
reference/api/pandas.get_option.html
|
pandas.Series.ravel
|
`pandas.Series.ravel`
Return the flattened underlying data as an ndarray.
|
Series.ravel(order='C')[source]#
Return the flattened underlying data as an ndarray.
Returns
numpy.ndarray or ndarray-likeFlattened data of the Series.
See also
numpy.ndarray.ravelReturn a flattened array.
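The entry above gives no example; a minimal sketch with an assumed Series:
>>> s = pd.Series([1, 2, 3])
>>> s.ravel()
array([1, 2, 3])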
|
reference/api/pandas.Series.ravel.html
|
pandas.core.window.expanding.Expanding.quantile
|
`pandas.core.window.expanding.Expanding.quantile`
Calculate the expanding quantile.
|
Expanding.quantile(quantile, interpolation='linear', numeric_only=False, **kwargs)[source]#
Calculate the expanding quantile.
Parameters
quantilefloatQuantile to compute. 0 <= quantile <= 1.
interpolation{‘linear’, ‘lower’, ‘higher’, ‘midpoint’, ‘nearest’}This optional parameter specifies the interpolation method to use,
when the desired quantile lies between two data points i and j:
linear: i + (j - i) * fraction, where fraction is the
fractional part of the index surrounded by i and j.
lower: i.
higher: j.
nearest: i or j whichever is nearest.
midpoint: (i + j) / 2.
numeric_onlybool, default FalseInclude only float, int, boolean columns.
New in version 1.5.0.
**kwargsFor NumPy compatibility and will not have an effect on the result.
Deprecated since version 1.5.0.
Returns
Series or DataFrameReturn type is the same as the original object with np.float64 dtype.
See also
pandas.Series.expandingCalling expanding with Series data.
pandas.DataFrame.expandingCalling expanding with DataFrames.
pandas.Series.quantileAggregating quantile for Series.
pandas.DataFrame.quantileAggregating quantile for DataFrame.
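A minimal usage sketch with assumed values, computing the expanding median (quantile=0.5):
>>> s = pd.Series([1, 2, 3, 4])
>>> s.expanding(min_periods=1).quantile(0.5)
0    1.0
1    1.5
2    2.0
3    2.5
dtype: float64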
|
reference/api/pandas.core.window.expanding.Expanding.quantile.html
|
pandas.TimedeltaIndex.ceil
|
`pandas.TimedeltaIndex.ceil`
Perform ceil operation on the data to the specified freq.
```
>>> rng = pd.date_range('1/1/2018 11:59:00', periods=3, freq='min')
>>> rng
DatetimeIndex(['2018-01-01 11:59:00', '2018-01-01 12:00:00',
'2018-01-01 12:01:00'],
dtype='datetime64[ns]', freq='T')
>>> rng.ceil('H')
DatetimeIndex(['2018-01-01 12:00:00', '2018-01-01 12:00:00',
'2018-01-01 13:00:00'],
dtype='datetime64[ns]', freq=None)
```
|
TimedeltaIndex.ceil(*args, **kwargs)[source]#
Perform ceil operation on the data to the specified freq.
Parameters
freqstr or OffsetThe frequency level to ceil the index to. Must be a fixed
frequency like ‘S’ (second) not ‘ME’ (month end). See
frequency aliases for
a list of possible freq values.
ambiguous‘infer’, bool-ndarray, ‘NaT’, default ‘raise’Only relevant for DatetimeIndex:
‘infer’ will attempt to infer fall dst-transition hours based on
order
bool-ndarray where True signifies a DST time, False designates
a non-DST time (note that this flag is only applicable for
ambiguous times)
‘NaT’ will return NaT where there are ambiguous times
‘raise’ will raise an AmbiguousTimeError if there are ambiguous
times.
nonexistent‘shift_forward’, ‘shift_backward’, ‘NaT’, timedelta, default ‘raise’A nonexistent time does not exist in a particular timezone
where clocks moved forward due to DST.
‘shift_forward’ will shift the nonexistent time forward to the
closest existing time
‘shift_backward’ will shift the nonexistent time backward to the
closest existing time
‘NaT’ will return NaT where there are nonexistent times
timedelta objects will shift nonexistent times by the timedelta
‘raise’ will raise an NonExistentTimeError if there are
nonexistent times.
Returns
DatetimeIndex, TimedeltaIndex, or SeriesIndex of the same type for a DatetimeIndex or TimedeltaIndex,
or a Series with the same index for a Series.
Raises
ValueError if the freq cannot be converted.
Notes
If the timestamps have a timezone, ceiling will take place relative to the
local (“wall”) time and re-localized to the same timezone. When ceiling
near daylight savings time, use nonexistent and ambiguous to
control the re-localization behavior.
Examples
DatetimeIndex
>>> rng = pd.date_range('1/1/2018 11:59:00', periods=3, freq='min')
>>> rng
DatetimeIndex(['2018-01-01 11:59:00', '2018-01-01 12:00:00',
'2018-01-01 12:01:00'],
dtype='datetime64[ns]', freq='T')
>>> rng.ceil('H')
DatetimeIndex(['2018-01-01 12:00:00', '2018-01-01 12:00:00',
'2018-01-01 13:00:00'],
dtype='datetime64[ns]', freq=None)
Series
>>> pd.Series(rng).dt.ceil("H")
0 2018-01-01 12:00:00
1 2018-01-01 12:00:00
2 2018-01-01 13:00:00
dtype: datetime64[ns]
When rounding near a daylight savings time transition, use ambiguous or
nonexistent to control how the timestamp should be re-localized.
>>> rng_tz = pd.DatetimeIndex(["2021-10-31 01:30:00"], tz="Europe/Amsterdam")
>>> rng_tz.ceil("H", ambiguous=False)
DatetimeIndex(['2021-10-31 02:00:00+01:00'],
dtype='datetime64[ns, Europe/Amsterdam]', freq=None)
>>> rng_tz.ceil("H", ambiguous=True)
DatetimeIndex(['2021-10-31 02:00:00+02:00'],
dtype='datetime64[ns, Europe/Amsterdam]', freq=None)
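The examples above operate on a DatetimeIndex; a sketch of the same ceil operation on a TimedeltaIndex, with assumed values:
>>> tdi = pd.to_timedelta(['1 days 01:30:00', '1 days 02:00:00'])
>>> tdi.ceil('H')
TimedeltaIndex(['1 days 02:00:00', '1 days 02:00:00'], dtype='timedelta64[ns]', freq=None)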
|
reference/api/pandas.TimedeltaIndex.ceil.html
|
pandas.DataFrame.rpow
|
`pandas.DataFrame.rpow`
Get Exponential power of dataframe and other, element-wise (binary operator rpow).
Equivalent to other ** dataframe, but with support to substitute a fill_value
for missing data in one of the inputs. With reverse version, pow.
```
>>> df = pd.DataFrame({'angles': [0, 3, 4],
... 'degrees': [360, 180, 360]},
... index=['circle', 'triangle', 'rectangle'])
>>> df
angles degrees
circle 0 360
triangle 3 180
rectangle 4 360
```
|
DataFrame.rpow(other, axis='columns', level=None, fill_value=None)[source]#
Get Exponential power of dataframe and other, element-wise (binary operator rpow).
Equivalent to other ** dataframe, but with support to substitute a fill_value
for missing data in one of the inputs. With reverse version, pow.
Among flexible wrappers (add, sub, mul, div, mod, pow) to
arithmetic operators: +, -, *, /, //, %, **.
Parameters
otherscalar, sequence, Series, dict or DataFrameAny single or multiple element data structure, or list-like object.
axis{0 or ‘index’, 1 or ‘columns’}Whether to compare by the index (0 or ‘index’) or columns.
(1 or ‘columns’). For Series input, axis to match Series index on.
levelint or labelBroadcast across a level, matching Index values on the
passed MultiIndex level.
fill_valuefloat or None, default NoneFill existing missing (NaN) values, and any new element needed for
successful DataFrame alignment, with this value before computation.
If data in both corresponding DataFrame locations is missing
the result will be missing.
Returns
DataFrameResult of the arithmetic operation.
See also
DataFrame.addAdd DataFrames.
DataFrame.subSubtract DataFrames.
DataFrame.mulMultiply DataFrames.
DataFrame.divDivide DataFrames (float division).
DataFrame.truedivDivide DataFrames (float division).
DataFrame.floordivDivide DataFrames (integer division).
DataFrame.modCalculate modulo (remainder after division).
DataFrame.powCalculate exponential power.
Notes
Mismatched indices will be unioned together.
Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4],
... 'degrees': [360, 180, 360]},
... index=['circle', 'triangle', 'rectangle'])
>>> df
angles degrees
circle 0 360
triangle 3 180
rectangle 4 360
Add a scalar with the operator version, which returns the same
results.
>>> df + 1
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
>>> df.add(1)
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
Divide by constant with reverse version.
>>> df.div(10)
angles degrees
circle 0.0 36.0
triangle 0.3 18.0
rectangle 0.4 36.0
>>> df.rdiv(10)
angles degrees
circle inf 0.027778
triangle 3.333333 0.055556
rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2]
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub([1, 2], axis='columns')
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
... axis='index')
angles degrees
circle -1 359
triangle 2 179
rectangle 3 359
Multiply a dictionary by axis.
>>> df.mul({'angles': 0, 'degrees': 2})
angles degrees
circle 0 720
triangle 0 360
rectangle 0 720
>>> df.mul({'circle': 0, 'triangle': 2, 'rectangle': 3}, axis='index')
angles degrees
circle 0 0
triangle 6 360
rectangle 12 1080
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]},
... index=['circle', 'triangle', 'rectangle'])
>>> other
angles
circle 0
triangle 3
rectangle 4
>>> df * other
angles degrees
circle 0 NaN
triangle 9 NaN
rectangle 16 NaN
>>> df.mul(other, fill_value=0)
angles degrees
circle 0 0.0
triangle 9 0.0
rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
... 'degrees': [360, 180, 360, 360, 540, 720]},
... index=[['A', 'A', 'A', 'B', 'B', 'B'],
... ['circle', 'triangle', 'rectangle',
... 'square', 'pentagon', 'hexagon']])
>>> df_multindex
angles degrees
A circle 0 360
triangle 3 180
rectangle 4 360
B square 4 360
pentagon 5 540
hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0)
angles degrees
A circle NaN 1.0
triangle 1.0 1.0
rectangle 1.0 1.0
B square 0.0 0.0
pentagon 0.0 0.0
hexagon 0.0 0.0
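None of the examples above exercises rpow itself; a minimal sketch on one column of the df above, computing 2 ** df['angles']:
>>> df['angles'].rpow(2)
circle        1
triangle      8
rectangle    16
Name: angles, dtype: int64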
|
reference/api/pandas.DataFrame.rpow.html
|
pandas.Series.sort_index
|
`pandas.Series.sort_index`
Sort Series by index labels.
```
>>> s = pd.Series(['a', 'b', 'c', 'd'], index=[3, 2, 1, 4])
>>> s.sort_index()
1 c
2 b
3 a
4 d
dtype: object
```
|
Series.sort_index(*, axis=0, level=None, ascending=True, inplace=False, kind='quicksort', na_position='last', sort_remaining=True, ignore_index=False, key=None)[source]#
Sort Series by index labels.
Returns a new Series sorted by label if inplace argument is
False, otherwise updates the original series and returns None.
Parameters
axis{0 or ‘index’}Unused. Parameter needed for compatibility with DataFrame.
levelint, optionalIf not None, sort on values in specified index level(s).
ascendingbool or list-like of bools, default TrueSort ascending vs. descending. When the index is a MultiIndex the
sort direction can be controlled for each level individually.
inplacebool, default FalseIf True, perform operation in-place.
kind{‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, default ‘quicksort’Choice of sorting algorithm. See also numpy.sort() for more
information. ‘mergesort’ and ‘stable’ are the only stable algorithms. For
DataFrames, this option is only applied when sorting on a single
column or label.
na_position{‘first’, ‘last’}, default ‘last’If ‘first’ puts NaNs at the beginning, ‘last’ puts NaNs at the end.
Not implemented for MultiIndex.
sort_remainingbool, default TrueIf True and sorting by level and index is multilevel, sort by other
levels too (in order) after sorting by specified level.
ignore_indexbool, default FalseIf True, the resulting axis will be labeled 0, 1, …, n - 1.
New in version 1.0.0.
keycallable, optionalIf not None, apply the key function to the index values
before sorting. This is similar to the key argument in the
builtin sorted() function, with the notable difference that
this key function should be vectorized. It should expect an
Index and return an Index of the same shape.
New in version 1.1.0.
Returns
Series or NoneThe original Series sorted by the labels or None if inplace=True.
See also
DataFrame.sort_indexSort DataFrame by the index.
DataFrame.sort_valuesSort DataFrame by the value.
Series.sort_valuesSort Series by the value.
Examples
>>> s = pd.Series(['a', 'b', 'c', 'd'], index=[3, 2, 1, 4])
>>> s.sort_index()
1 c
2 b
3 a
4 d
dtype: object
Sort Descending
>>> s.sort_index(ascending=False)
4 d
3 a
2 b
1 c
dtype: object
Sort Inplace
>>> s.sort_index(inplace=True)
>>> s
1 c
2 b
3 a
4 d
dtype: object
By default NaNs are put at the end, but use na_position to place
them at the beginning
>>> s = pd.Series(['a', 'b', 'c', 'd'], index=[3, 2, 1, np.nan])
>>> s.sort_index(na_position='first')
NaN d
1.0 c
2.0 b
3.0 a
dtype: object
Specify index level to sort
>>> arrays = [np.array(['qux', 'qux', 'foo', 'foo',
... 'baz', 'baz', 'bar', 'bar']),
... np.array(['two', 'one', 'two', 'one',
... 'two', 'one', 'two', 'one'])]
>>> s = pd.Series([1, 2, 3, 4, 5, 6, 7, 8], index=arrays)
>>> s.sort_index(level=1)
bar one 8
baz one 6
foo one 4
qux one 2
bar two 7
baz two 5
foo two 3
qux two 1
dtype: int64
Does not sort by remaining levels when sorting by levels
>>> s.sort_index(level=1, sort_remaining=False)
qux one 2
foo one 4
baz one 6
bar one 8
qux two 1
foo two 3
baz two 5
bar two 7
dtype: int64
Apply a key function before sorting
>>> s = pd.Series([1, 2, 3, 4], index=['A', 'b', 'C', 'd'])
>>> s.sort_index(key=lambda x : x.str.lower())
A 1
b 2
C 3
d 4
dtype: int64
|
reference/api/pandas.Series.sort_index.html
|
pandas.Index.get_indexer
|
`pandas.Index.get_indexer`
Compute indexer and mask for new index given the current index.
```
>>> index = pd.Index(['c', 'a', 'b'])
>>> index.get_indexer(['a', 'b', 'x'])
array([ 1, 2, -1])
```
|
final Index.get_indexer(target, method=None, limit=None, tolerance=None)[source]#
Compute indexer and mask for new index given the current index.
The indexer should be then used as an input to ndarray.take to align the
current data to the new index.
Parameters
targetIndex
method{None, ‘pad’/’ffill’, ‘backfill’/’bfill’, ‘nearest’}, optional
default: exact matches only.
pad / ffill: find the PREVIOUS index value if no exact match.
backfill / bfill: use NEXT index value if no exact match
nearest: use the NEAREST index value if no exact match. Tied
distances are broken by preferring the larger index value.
limitint, optionalMaximum number of consecutive labels in target to match for
inexact matches.
toleranceoptionalMaximum distance between original and new labels for inexact
matches. The values of the index at the matching locations must
satisfy the equation abs(index[indexer] - target) <= tolerance.
Tolerance may be a scalar value, which applies the same tolerance
to all values, or list-like, which applies variable tolerance per
element. List-like includes list, tuple, array, Series, and must be
the same size as the index and its dtype must exactly match the
index’s type.
Returns
indexernp.ndarray[np.intp]Integers from 0 to n - 1 indicating that the index at these
positions matches the corresponding target values. Missing values
in the target are marked by -1.
Notes
Returns -1 for unmatched values, for further explanation see the
example below.
Examples
>>> index = pd.Index(['c', 'a', 'b'])
>>> index.get_indexer(['a', 'b', 'x'])
array([ 1, 2, -1])
Notice that the return value is an array of locations in index
and x is marked by -1, as it is not in index.
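A sketch of the inexact matching modes with an assumed numeric index; ‘pad’ takes the previous position and ‘backfill’ the next:
>>> idx = pd.Index([10, 20, 30])
>>> idx.get_indexer([25], method='pad')
array([1])
>>> idx.get_indexer([25], method='backfill')
array([2])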
|
reference/api/pandas.Index.get_indexer.html
|
pandas.tseries.offsets.Micro.base
|
`pandas.tseries.offsets.Micro.base`
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
|
Micro.base#
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
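No example is given; a minimal sketch, assuming the standard offset repr:
>>> pd.offsets.Micro(5).base
<Micro>
>>> pd.offsets.Micro(5).base.n
1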
|
reference/api/pandas.tseries.offsets.Micro.base.html
|
pandas.DataFrame.agg
|
`pandas.DataFrame.agg`
Aggregate using one or more operations over the specified axis.
Function to use for aggregating the data. If a function, must either
work when passed a DataFrame or when passed to DataFrame.apply.
```
>>> df = pd.DataFrame([[1, 2, 3],
... [4, 5, 6],
... [7, 8, 9],
... [np.nan, np.nan, np.nan]],
... columns=['A', 'B', 'C'])
```
|
DataFrame.agg(func=None, axis=0, *args, **kwargs)[source]#
Aggregate using one or more operations over the specified axis.
Parameters
funcfunction, str, list or dictFunction to use for aggregating the data. If a function, must either
work when passed a DataFrame or when passed to DataFrame.apply.
Accepted combinations are:
function
string function name
list of functions and/or function names, e.g. [np.sum, 'mean']
dict of axis labels -> functions, function names or list of such.
axis{0 or ‘index’, 1 or ‘columns’}, default 0If 0 or ‘index’: apply function to each column.
If 1 or ‘columns’: apply function to each row.
*argsPositional arguments to pass to func.
**kwargsKeyword arguments to pass to func.
Returns
scalar, Series or DataFrameThe return can be:
scalar : when Series.agg is called with single function
Series : when DataFrame.agg is called with a single function
DataFrame : when DataFrame.agg is called with several functions
Return scalar, Series or DataFrame.
The aggregation operations are always performed over an axis, either the
index (default) or the column axis. This behavior is different from
numpy aggregation functions (mean, median, prod, sum, std,
var), where the default is to compute the aggregation of the flattened
array, e.g., numpy.mean(arr_2d) as opposed to
numpy.mean(arr_2d, axis=0).
agg is an alias for aggregate. Use the alias.
See also
DataFrame.applyPerform any type of operations.
DataFrame.transformPerform transformation type operations.
core.groupby.GroupByPerform operations over groups.
core.resample.ResamplerPerform operations over resampled bins.
core.window.RollingPerform operations over rolling window.
core.window.ExpandingPerform operations over expanding window.
core.window.ExponentialMovingWindowPerform operation over exponential weighted window.
Notes
agg is an alias for aggregate. Use the alias.
Functions that mutate the passed object can produce unexpected
behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods
for more details.
A passed user-defined-function will be passed a Series for evaluation.
Examples
>>> df = pd.DataFrame([[1, 2, 3],
... [4, 5, 6],
... [7, 8, 9],
... [np.nan, np.nan, np.nan]],
... columns=['A', 'B', 'C'])
Aggregate these functions over the rows.
>>> df.agg(['sum', 'min'])
A B C
sum 12.0 15.0 18.0
min 1.0 2.0 3.0
Different aggregations per column.
>>> df.agg({'A' : ['sum', 'min'], 'B' : ['min', 'max']})
A B
sum 12.0 NaN
min 1.0 2.0
max NaN 8.0
Aggregate different functions over the columns and rename the index of the resulting
DataFrame.
>>> df.agg(x=('A', max), y=('B', 'min'), z=('C', np.mean))
A B C
x 7.0 NaN NaN
y NaN 2.0 NaN
z NaN NaN 6.0
Aggregate over the columns.
>>> df.agg("mean", axis="columns")
0 2.0
1 5.0
2 8.0
3 NaN
dtype: float64
|
reference/api/pandas.DataFrame.agg.html
|
pandas.tseries.offsets.BYearBegin
|
`pandas.tseries.offsets.BYearBegin`
DateOffset increments between the first business day of the year.
Examples
```
>>> from pandas.tseries.offsets import BYearBegin
>>> ts = pd.Timestamp('2020-05-24 05:01:15')
>>> ts + BYearBegin()
Timestamp('2021-01-01 05:01:15')
>>> ts - BYearBegin()
Timestamp('2020-01-01 05:01:15')
>>> ts + BYearBegin(-1)
Timestamp('2020-01-01 05:01:15')
>>> ts + BYearBegin(2)
Timestamp('2022-01-03 05:01:15')
```
|
class pandas.tseries.offsets.BYearBegin#
DateOffset increments between the first business day of the year.
Examples
>>> from pandas.tseries.offsets import BYearBegin
>>> ts = pd.Timestamp('2020-05-24 05:01:15')
>>> ts + BYearBegin()
Timestamp('2021-01-01 05:01:15')
>>> ts - BYearBegin()
Timestamp('2020-01-01 05:01:15')
>>> ts + BYearBegin(-1)
Timestamp('2020-01-01 05:01:15')
>>> ts + BYearBegin(2)
Timestamp('2022-01-03 05:01:15')
Attributes
base
Returns a copy of the calling offset object with n=1 and all other attributes equal.
freqstr
Return a string representing the frequency.
kwds
Return a dict of extra parameters for the offset.
name
Return a string representing the base frequency.
month
n
nanos
normalize
rule_code
Methods
__call__(*args, **kwargs)
Call self as a function.
apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
copy
Return a copy of the frequency.
is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
is_month_end
Return boolean whether a timestamp occurs on the month end.
is_month_start
Return boolean whether a timestamp occurs on the month start.
is_on_offset
Return boolean whether a timestamp intersects with this frequency.
is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
is_year_end
Return boolean whether a timestamp occurs on the year end.
is_year_start
Return boolean whether a timestamp occurs on the year start.
rollback
Roll provided date backward to next offset only if not on offset.
rollforward
Roll provided date forward to next offset only if not on offset.
apply
isAnchored
onOffset
|
reference/api/pandas.tseries.offsets.BYearBegin.html
|
Window
|
Window
|
Rolling objects are returned by .rolling calls: pandas.DataFrame.rolling(), pandas.Series.rolling(), etc.
Expanding objects are returned by .expanding calls: pandas.DataFrame.expanding(), pandas.Series.expanding(), etc.
ExponentialMovingWindow objects are returned by .ewm calls: pandas.DataFrame.ewm(), pandas.Series.ewm(), etc.
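A minimal sketch of the three entry points on an assumed Series:
>>> s = pd.Series([1.0, 2.0, 3.0])
>>> s.rolling(2).sum()
0    NaN
1    3.0
2    5.0
dtype: float64
>>> s.expanding().sum()
0    1.0
1    3.0
2    6.0
dtype: float64
>>> s.ewm(alpha=0.5).mean()
0    1.000000
1    1.666667
2    2.428571
dtype: float64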
Rolling window functions#
Rolling.count([numeric_only])
Calculate the rolling count of non NaN observations.
Rolling.sum([numeric_only, engine, ...])
Calculate the rolling sum.
Rolling.mean([numeric_only, engine, ...])
Calculate the rolling mean.
Rolling.median([numeric_only, engine, ...])
Calculate the rolling median.
Rolling.var([ddof, numeric_only, engine, ...])
Calculate the rolling variance.
Rolling.std([ddof, numeric_only, engine, ...])
Calculate the rolling standard deviation.
Rolling.min([numeric_only, engine, ...])
Calculate the rolling minimum.
Rolling.max([numeric_only, engine, ...])
Calculate the rolling maximum.
Rolling.corr([other, pairwise, ddof, ...])
Calculate the rolling correlation.
Rolling.cov([other, pairwise, ddof, ...])
Calculate the rolling sample covariance.
Rolling.skew([numeric_only])
Calculate the rolling unbiased skewness.
Rolling.kurt([numeric_only])
Calculate the rolling Fisher's definition of kurtosis without bias.
Rolling.apply(func[, raw, engine, ...])
Calculate the rolling custom aggregation function.
Rolling.aggregate(func, *args, **kwargs)
Aggregate using one or more operations over the specified axis.
Rolling.quantile(quantile[, interpolation, ...])
Calculate the rolling quantile.
Rolling.sem([ddof, numeric_only])
Calculate the rolling standard error of mean.
Rolling.rank([method, ascending, pct, ...])
Calculate the rolling rank.
Weighted window functions#
Window.mean([numeric_only])
Calculate the rolling weighted window mean.
Window.sum([numeric_only])
Calculate the rolling weighted window sum.
Window.var([ddof, numeric_only])
Calculate the rolling weighted window variance.
Window.std([ddof, numeric_only])
Calculate the rolling weighted window standard deviation.
Expanding window functions#
Expanding.count([numeric_only])
Calculate the expanding count of non NaN observations.
Expanding.sum([numeric_only, engine, ...])
Calculate the expanding sum.
Expanding.mean([numeric_only, engine, ...])
Calculate the expanding mean.
Expanding.median([numeric_only, engine, ...])
Calculate the expanding median.
Expanding.var([ddof, numeric_only, engine, ...])
Calculate the expanding variance.
Expanding.std([ddof, numeric_only, engine, ...])
Calculate the expanding standard deviation.
Expanding.min([numeric_only, engine, ...])
Calculate the expanding minimum.
Expanding.max([numeric_only, engine, ...])
Calculate the expanding maximum.
Expanding.corr([other, pairwise, ddof, ...])
Calculate the expanding correlation.
Expanding.cov([other, pairwise, ddof, ...])
Calculate the expanding sample covariance.
Expanding.skew([numeric_only])
Calculate the expanding unbiased skewness.
Expanding.kurt([numeric_only])
Calculate the expanding Fisher's definition of kurtosis without bias.
Expanding.apply(func[, raw, engine, ...])
Calculate the expanding custom aggregation function.
Expanding.aggregate(func, *args, **kwargs)
Aggregate using one or more operations over the specified axis.
Expanding.quantile(quantile[, ...])
Calculate the expanding quantile.
Expanding.sem([ddof, numeric_only])
Calculate the expanding standard error of mean.
Expanding.rank([method, ascending, pct, ...])
Calculate the expanding rank.
Exponentially-weighted window functions#
ExponentialMovingWindow.mean([numeric_only, ...])
Calculate the ewm (exponential weighted moment) mean.
ExponentialMovingWindow.sum([numeric_only, ...])
Calculate the ewm (exponential weighted moment) sum.
ExponentialMovingWindow.std([bias, numeric_only])
Calculate the ewm (exponential weighted moment) standard deviation.
ExponentialMovingWindow.var([bias, numeric_only])
Calculate the ewm (exponential weighted moment) variance.
ExponentialMovingWindow.corr([other, ...])
Calculate the ewm (exponential weighted moment) sample correlation.
ExponentialMovingWindow.cov([other, ...])
Calculate the ewm (exponential weighted moment) sample covariance.
Window indexer#
Base class for defining custom window boundaries.
api.indexers.BaseIndexer([index_array, ...])
Base class for window bounds calculations.
api.indexers.FixedForwardWindowIndexer([...])
Creates window boundaries for fixed-length windows that include the current row.
api.indexers.VariableOffsetWindowIndexer([...])
Calculate window boundaries based on a non-fixed offset such as a BusinessDay.
|
reference/window.html
|
pandas.core.groupby.GroupBy.cummin
|
`pandas.core.groupby.GroupBy.cummin`
Cumulative min for each group.
|
final GroupBy.cummin(axis=0, numeric_only=False, **kwargs)[source]#
Cumulative min for each group.
Returns
Series or DataFrame
See also
Series.groupbyApply a function groupby to a Series.
DataFrame.groupbyApply a function groupby to each row or column of a DataFrame.
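No example is given; a minimal sketch with assumed values, where the cumulative minimum restarts within each group:
>>> ser = pd.Series([3, 1, 2, 0], index=['a', 'a', 'b', 'b'])
>>> ser.groupby(level=0).cummin()
a    3
a    1
b    2
b    0
dtype: int64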
|
reference/api/pandas.core.groupby.GroupBy.cummin.html
|
pandas.Series.dt.date
|
`pandas.Series.dt.date`
Returns numpy array of python datetime.date objects.
|
Series.dt.date[source]#
Returns numpy array of python datetime.date objects.
Namely, the date part of Timestamps without time and
timezone information.
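A minimal sketch with assumed values; note the resulting object dtype:
>>> s = pd.Series(pd.to_datetime(['2020-01-01 10:00', '2020-02-29 23:30']))
>>> s.dt.date
0    2020-01-01
1    2020-02-29
dtype: object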
|
reference/api/pandas.Series.dt.date.html
|
pandas.Series.str.startswith
|
`pandas.Series.str.startswith`
Test if the start of each string element matches a pattern.
```
>>> s = pd.Series(['bat', 'Bear', 'cat', np.nan])
>>> s
0 bat
1 Bear
2 cat
3 NaN
dtype: object
```
|
Series.str.startswith(pat, na=None)[source]#
Test if the start of each string element matches a pattern.
Equivalent to str.startswith().
Parameters
patstr or tuple[str, …]Character sequence or tuple of strings. Regular expressions are not
accepted.
naobject, default NaNObject shown if element tested is not a string. The default depends
on dtype of the array. For object-dtype, numpy.nan is used.
For StringDtype, pandas.NA is used.
Returns
Series or Index of boolA Series of booleans indicating whether the given pattern matches
the start of each string element.
See also
str.startswithPython standard library string method.
Series.str.endswithSame as startswith, but tests the end of string.
Series.str.containsTests if string element contains a pattern.
Examples
>>> s = pd.Series(['bat', 'Bear', 'cat', np.nan])
>>> s
0 bat
1 Bear
2 cat
3 NaN
dtype: object
>>> s.str.startswith('b')
0 True
1 False
2 False
3 NaN
dtype: object
>>> s.str.startswith(('b', 'B'))
0 True
1 True
2 False
3 NaN
dtype: object
Specifying na to be False instead of NaN.
>>> s.str.startswith('b', na=False)
0 True
1 False
2 False
3 False
dtype: bool
|
reference/api/pandas.Series.str.startswith.html
|
pandas.tseries.offsets.Milli.apply_index
|
Milli.apply_index()#
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead.
Parameters
indexDatetimeIndex
Returns
DatetimeIndex
Raises
NotImplementedErrorWhen the specific offset subclass does not have a vectorized
implementation.
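A sketch of the recommended replacement, offset + dtindex, with an assumed index:
>>> dtindex = pd.DatetimeIndex(['2020-01-01'])
>>> pd.offsets.Milli(5) + dtindex
DatetimeIndex(['2020-01-01 00:00:00.005'], dtype='datetime64[ns]', freq=None)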
|
reference/api/pandas.tseries.offsets.Milli.apply_index.html
|
pandas.Series.pop
|
`pandas.Series.pop`
Return item and drop it from series. Raise KeyError if not found.
```
>>> ser = pd.Series([1,2,3])
```
|
Series.pop(item)[source]#
Return item and drop it from series. Raise KeyError if not found.
Parameters
itemlabelIndex of the element that needs to be removed.
Returns
Value that is popped from series.
Examples
>>> ser = pd.Series([1,2,3])
>>> ser.pop(0)
1
>>> ser
1 2
2 3
dtype: int64
|
reference/api/pandas.Series.pop.html
|
pandas.Timedelta.value
|
pandas.Timedelta.value
|
Timedelta.value#
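The docstring above is empty; a sketch assuming value holds the duration as an integer count of nanoseconds:
>>> pd.Timedelta(microseconds=1).value
1000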
|
reference/api/pandas.Timedelta.value.html
|
pandas.core.window.rolling.Rolling.corr
|
`pandas.core.window.rolling.Rolling.corr`
Calculate the rolling correlation.
```
>>> v1 = [3, 3, 3, 5, 8]
>>> v2 = [3, 4, 4, 4, 8]
>>> # numpy returns a 2X2 array, the correlation coefficient
>>> # is the number at entry [0][1]
>>> print(f"{np.corrcoef(v1[:-1], v2[:-1])[0][1]:.6f}")
0.333333
>>> print(f"{np.corrcoef(v1[1:], v2[1:])[0][1]:.6f}")
0.916949
>>> s1 = pd.Series(v1)
>>> s2 = pd.Series(v2)
>>> s1.rolling(4).corr(s2)
0 NaN
1 NaN
2 NaN
3 0.333333
4 0.916949
dtype: float64
```
|
Rolling.corr(other=None, pairwise=None, ddof=1, numeric_only=False, **kwargs)[source]#
Calculate the rolling correlation.
Parameters
otherSeries or DataFrame, optionalIf not supplied then will default to self and produce pairwise
output.
pairwisebool, default NoneIf False then only matching columns between self and other will be
used and the output will be a DataFrame.
If True then all pairwise combinations will be calculated and the
output will be a MultiIndexed DataFrame in the case of DataFrame
inputs. In the case of missing elements, only complete pairwise
observations will be used.
ddofint, default 1Delta Degrees of Freedom. The divisor used in calculations
is N - ddof, where N represents the number of elements.
numeric_onlybool, default FalseInclude only float, int, boolean columns.
New in version 1.5.0.
**kwargsFor NumPy compatibility and will not have an effect on the result.
Deprecated since version 1.5.0.
Returns
Series or DataFrameReturn type is the same as the original object with np.float64 dtype.
See also
covSimilar method to calculate covariance.
numpy.corrcoefNumPy Pearson’s correlation calculation.
pandas.Series.rollingCalling rolling with Series data.
pandas.DataFrame.rollingCalling rolling with DataFrames.
pandas.Series.corrAggregating corr for Series.
pandas.DataFrame.corrAggregating corr for DataFrame.
Notes
This function uses Pearson’s definition of correlation
(https://en.wikipedia.org/wiki/Pearson_correlation_coefficient).
When other is not specified, the output will be self correlation (e.g.
all 1’s), except for DataFrame inputs with pairwise
set to True.
Function will return NaN for correlations of equal valued sequences;
this is the result of a 0/0 division error.
When pairwise is set to False, only matching columns between self and
other will be used.
When pairwise is set to True, the output will be a MultiIndex DataFrame
with the original index on the first level, and the other DataFrame
columns on the second level.
In the case of missing elements, only complete pairwise observations
will be used.
Examples
The below example shows a rolling calculation with a window size of
four matching the equivalent function call using numpy.corrcoef().
>>> v1 = [3, 3, 3, 5, 8]
>>> v2 = [3, 4, 4, 4, 8]
>>> # numpy returns a 2X2 array, the correlation coefficient
>>> # is the number at entry [0][1]
>>> print(f"{np.corrcoef(v1[:-1], v2[:-1])[0][1]:.6f}")
0.333333
>>> print(f"{np.corrcoef(v1[1:], v2[1:])[0][1]:.6f}")
0.916949
>>> s1 = pd.Series(v1)
>>> s2 = pd.Series(v2)
>>> s1.rolling(4).corr(s2)
0 NaN
1 NaN
2 NaN
3 0.333333
4 0.916949
dtype: float64
The below example shows a similar rolling calculation on a
DataFrame using the pairwise option.
>>> matrix = np.array([[51., 35.], [49., 30.], [47., 32.], [46., 31.], [50., 36.]])
>>> print(np.corrcoef(matrix[:-1,0], matrix[:-1,1]).round(7))
[[1. 0.6263001]
[0.6263001 1. ]]
>>> print(np.corrcoef(matrix[1:,0], matrix[1:,1]).round(7))
[[1. 0.5553681]
[0.5553681 1. ]]
>>> df = pd.DataFrame(matrix, columns=['X','Y'])
>>> df
X Y
0 51.0 35.0
1 49.0 30.0
2 47.0 32.0
3 46.0 31.0
4 50.0 36.0
>>> df.rolling(4).corr(pairwise=True)
X Y
0 X NaN NaN
Y NaN NaN
1 X NaN NaN
Y NaN NaN
2 X NaN NaN
Y NaN NaN
3 X 1.000000 0.626300
Y 0.626300 1.000000
4 X 1.000000 0.555368
Y 0.555368 1.000000
|
reference/api/pandas.core.window.rolling.Rolling.corr.html
|
pandas.tseries.offsets.CustomBusinessMonthBegin.is_year_start
|
`pandas.tseries.offsets.CustomBusinessMonthBegin.is_year_start`
Return boolean whether a timestamp occurs on the year start.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
```
|
CustomBusinessMonthBegin.is_year_start()#
Return boolean whether a timestamp occurs on the year start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthBegin.is_year_start.html
|
pandas.tseries.offsets.Minute.is_year_end
|
`pandas.tseries.offsets.Minute.is_year_end`
Return boolean whether a timestamp occurs on the year end.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
```
|
Minute.is_year_end()#
Return boolean whether a timestamp occurs on the year end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
|
reference/api/pandas.tseries.offsets.Minute.is_year_end.html
|
pandas.Series.groupby
|
`pandas.Series.groupby`
Group Series using a mapper or by a Series of columns.
```
>>> ser = pd.Series([390., 350., 30., 20.],
... index=['Falcon', 'Falcon', 'Parrot', 'Parrot'], name="Max Speed")
>>> ser
Falcon 390.0
Falcon 350.0
Parrot 30.0
Parrot 20.0
Name: Max Speed, dtype: float64
>>> ser.groupby(["a", "b", "a", "b"]).mean()
a 210.0
b 185.0
Name: Max Speed, dtype: float64
>>> ser.groupby(level=0).mean()
Falcon 370.0
Parrot 25.0
Name: Max Speed, dtype: float64
>>> ser.groupby(ser > 100).mean()
Max Speed
False 25.0
True 370.0
Name: Max Speed, dtype: float64
```
|
Series.groupby(by=None, axis=0, level=None, as_index=True, sort=True, group_keys=_NoDefault.no_default, squeeze=_NoDefault.no_default, observed=False, dropna=True)[source]#
Group Series using a mapper or by a Series of columns.
A groupby operation involves some combination of splitting the
object, applying a function, and combining the results. This can be
used to group large amounts of data and compute operations on these
groups.
Parameters
bymapping, function, label, or list of labelsUsed to determine the groups for the groupby.
If by is a function, it’s called on each value of the object’s
index. If a dict or Series is passed, the Series or dict VALUES
will be used to determine the groups (the Series’ values are first
aligned; see .align() method). If a list or ndarray of length
equal to the selected axis is passed (see the groupby user guide),
the values are used as-is to determine the groups. A label or list
of labels may be passed to group by the columns in self.
Notice that a tuple is interpreted as a (single) key.
axis{0 or ‘index’, 1 or ‘columns’}, default 0Split along rows (0) or columns (1). For Series this parameter
is unused and defaults to 0.
levelint, level name, or sequence of such, default NoneIf the axis is a MultiIndex (hierarchical), group by a particular
level or levels. Do not specify both by and level.
as_indexbool, default TrueFor aggregated output, return object with group labels as the
index. Only relevant for DataFrame input. as_index=False is
effectively “SQL-style” grouped output.
sortbool, default TrueSort group keys. Get better performance by turning this off.
Note this does not influence the order of observations within each
group. Groupby preserves the order of rows within each group.
group_keysbool, optionalWhen calling apply and the by argument produces a like-indexed
(i.e. a transform) result, add group keys to
index to identify pieces. By default group keys are not included
when the result’s index (and column) labels match the inputs, and
are included otherwise. This argument has no effect if the result produced
is not like-indexed with respect to the input.
Changed in version 1.5.0: Warns that group_keys will no longer be ignored when the
result from apply is a like-indexed Series or DataFrame.
Specify group_keys explicitly to include the group keys or
not.
squeezebool, default FalseReduce the dimensionality of the return type if possible,
otherwise return a consistent type.
Deprecated since version 1.1.0.
observedbool, default FalseThis only applies if any of the groupers are Categoricals.
If True: only show observed values for categorical groupers.
If False: show all values for categorical groupers.
dropnabool, default TrueIf True, and if group keys contain NA values, NA values together
with row/column will be dropped.
If False, NA values will also be treated as the key in groups.
New in version 1.1.0.
Returns
SeriesGroupByReturns a groupby object that contains information about the groups.
See also
resampleConvenience method for frequency conversion and resampling of time series.
Notes
See the user guide for more
detailed usage and examples, including splitting an object into groups,
iterating through groups, selecting a group, aggregation, and more.
Examples
>>> ser = pd.Series([390., 350., 30., 20.],
... index=['Falcon', 'Falcon', 'Parrot', 'Parrot'], name="Max Speed")
>>> ser
Falcon 390.0
Falcon 350.0
Parrot 30.0
Parrot 20.0
Name: Max Speed, dtype: float64
>>> ser.groupby(["a", "b", "a", "b"]).mean()
a 210.0
b 185.0
Name: Max Speed, dtype: float64
>>> ser.groupby(level=0).mean()
Falcon 370.0
Parrot 25.0
Name: Max Speed, dtype: float64
>>> ser.groupby(ser > 100).mean()
Max Speed
False 25.0
True 370.0
Name: Max Speed, dtype: float64
Grouping by Indexes
We can groupby different levels of a hierarchical index
using the level parameter:
>>> arrays = [['Falcon', 'Falcon', 'Parrot', 'Parrot'],
... ['Captive', 'Wild', 'Captive', 'Wild']]
>>> index = pd.MultiIndex.from_arrays(arrays, names=('Animal', 'Type'))
>>> ser = pd.Series([390., 350., 30., 20.], index=index, name="Max Speed")
>>> ser
Animal Type
Falcon Captive 390.0
Wild 350.0
Parrot Captive 30.0
Wild 20.0
Name: Max Speed, dtype: float64
>>> ser.groupby(level=0).mean()
Animal
Falcon 370.0
Parrot 25.0
Name: Max Speed, dtype: float64
>>> ser.groupby(level="Type").mean()
Type
Captive 210.0
Wild 185.0
Name: Max Speed, dtype: float64
We can also choose to include NA in group keys or not by defining
dropna parameter, the default setting is True.
>>> ser = pd.Series([1, 2, 3, 3], index=["a", 'a', 'b', np.nan])
>>> ser.groupby(level=0).sum()
a 3
b 3
dtype: int64
>>> ser.groupby(level=0, dropna=False).sum()
a 3
b 3
NaN 3
dtype: int64
>>> arrays = ['Falcon', 'Falcon', 'Parrot', 'Parrot']
>>> ser = pd.Series([390., 350., 30., 20.], index=arrays, name="Max Speed")
>>> ser.groupby(["a", "b", "a", np.nan]).mean()
a 210.0
b 350.0
Name: Max Speed, dtype: float64
>>> ser.groupby(["a", "b", "a", np.nan], dropna=False).mean()
a 210.0
b 350.0
NaN 20.0
Name: Max Speed, dtype: float64
|
reference/api/pandas.Series.groupby.html
|
pandas.tseries.offsets.CustomBusinessHour.is_year_end
|
`pandas.tseries.offsets.CustomBusinessHour.is_year_end`
Return boolean whether a timestamp occurs on the year end.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
```
|
CustomBusinessHour.is_year_end()#
Return boolean whether a timestamp occurs on the year end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
|
reference/api/pandas.tseries.offsets.CustomBusinessHour.is_year_end.html
|
pandas.tseries.offsets.BYearEnd.kwds
|
`pandas.tseries.offsets.BYearEnd.kwds`
Return a dict of extra parameters for the offset.
Examples
```
>>> pd.DateOffset(5).kwds
{}
```
|
BYearEnd.kwds#
Return a dict of extra parameters for the offset.
Examples
>>> pd.DateOffset(5).kwds
{}
>>> pd.offsets.FY5253Quarter().kwds
{'weekday': 0,
'startingMonth': 1,
'qtr_with_extra_week': 1,
'variation': 'nearest'}
|
reference/api/pandas.tseries.offsets.BYearEnd.kwds.html
|
pandas.Series.reorder_levels
|
`pandas.Series.reorder_levels`
Rearrange index levels using input order.
|
Series.reorder_levels(order)[source]#
Rearrange index levels using input order.
May not drop or duplicate levels.
Parameters
orderlist of int representing new level orderReference level by number or key.
Returns
type of caller (new object)
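No example is given; a minimal sketch on an assumed two-level MultiIndex, referencing levels by key:
>>> index = pd.MultiIndex.from_arrays([['a', 'a'], ['x', 'y']], names=['outer', 'inner'])
>>> s = pd.Series([1, 2], index=index)
>>> s.reorder_levels(['inner', 'outer'])
inner  outer
x      a        1
y      a        2
dtype: int64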
|
reference/api/pandas.Series.reorder_levels.html
|