title | summary | context | path
---|---|---|---
pandas.DatetimeIndex.tz_localize
|
`pandas.DatetimeIndex.tz_localize`
Localize tz-naive Datetime Array/Index to tz-aware Datetime Array/Index.
```
>>> tz_naive = pd.date_range('2018-03-01 09:00', periods=3)
>>> tz_naive
DatetimeIndex(['2018-03-01 09:00:00', '2018-03-02 09:00:00',
'2018-03-03 09:00:00'],
dtype='datetime64[ns]', freq='D')
```
|
DatetimeIndex.tz_localize(tz, ambiguous='raise', nonexistent='raise')[source]#
Localize tz-naive Datetime Array/Index to tz-aware Datetime Array/Index.
This method takes a time zone (tz) naive Datetime Array/Index object
and makes this time zone aware. It does not move the time to another
time zone.
This method can also be used to do the inverse – to create a time
zone unaware object from an aware object. To that end, pass tz=None.
Parameters
tz : str, pytz.timezone, dateutil.tz.tzfile or None
    Time zone to convert timestamps to. Passing None will
    remove the time zone information preserving local time.
ambiguous : 'infer', 'NaT', bool array, default 'raise'
    When clocks moved backward due to DST, ambiguous times may arise.
    For example in Central European Time (UTC+01), when going from
    03:00 DST to 02:00 non-DST, 02:30:00 local time occurs both at
    00:30:00 UTC and at 01:30:00 UTC. In such a situation, the
    ambiguous parameter dictates how ambiguous times should be
    handled.
    - 'infer' will attempt to infer fall dst-transition hours based on
      order
    - bool-ndarray where True signifies a DST time, False signifies a
      non-DST time (note that this flag is only applicable for
      ambiguous times)
    - 'NaT' will return NaT where there are ambiguous times
    - 'raise' will raise an AmbiguousTimeError if there are ambiguous
      times.
nonexistent : 'shift_forward', 'shift_backward', 'NaT', timedelta, default 'raise'
    A nonexistent time does not exist in a particular timezone
    where clocks moved forward due to DST.
    - 'shift_forward' will shift the nonexistent time forward to the
      closest existing time
    - 'shift_backward' will shift the nonexistent time backward to the
      closest existing time
    - 'NaT' will return NaT where there are nonexistent times
    - timedelta objects will shift nonexistent times by the timedelta
    - 'raise' will raise a NonExistentTimeError if there are
      nonexistent times.
Returns
Same type as self
    Array/Index converted to the specified time zone.
Raises
TypeError
    If the Datetime Array/Index is tz-aware and tz is not None.
See also
DatetimeIndex.tz_convert : Convert tz-aware DatetimeIndex from one time zone to another.
Examples
>>> tz_naive = pd.date_range('2018-03-01 09:00', periods=3)
>>> tz_naive
DatetimeIndex(['2018-03-01 09:00:00', '2018-03-02 09:00:00',
'2018-03-03 09:00:00'],
dtype='datetime64[ns]', freq='D')
Localize DatetimeIndex in US/Eastern time zone:
>>> tz_aware = tz_naive.tz_localize(tz='US/Eastern')
>>> tz_aware
DatetimeIndex(['2018-03-01 09:00:00-05:00',
'2018-03-02 09:00:00-05:00',
'2018-03-03 09:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)
With tz=None, we can remove the time zone information
while keeping the local time (not converted to UTC):
>>> tz_aware.tz_localize(None)
DatetimeIndex(['2018-03-01 09:00:00', '2018-03-02 09:00:00',
'2018-03-03 09:00:00'],
dtype='datetime64[ns]', freq=None)
Be careful with DST changes. When there is sequential data, pandas can
infer the DST time:
>>> s = pd.to_datetime(pd.Series(['2018-10-28 01:30:00',
... '2018-10-28 02:00:00',
... '2018-10-28 02:30:00',
... '2018-10-28 02:00:00',
... '2018-10-28 02:30:00',
... '2018-10-28 03:00:00',
... '2018-10-28 03:30:00']))
>>> s.dt.tz_localize('CET', ambiguous='infer')
0 2018-10-28 01:30:00+02:00
1 2018-10-28 02:00:00+02:00
2 2018-10-28 02:30:00+02:00
3 2018-10-28 02:00:00+01:00
4 2018-10-28 02:30:00+01:00
5 2018-10-28 03:00:00+01:00
6 2018-10-28 03:30:00+01:00
dtype: datetime64[ns, CET]
In some cases, inferring the DST is impossible. In such cases, you can
pass an ndarray to the ambiguous parameter to set the DST explicitly.
>>> s = pd.to_datetime(pd.Series(['2018-10-28 01:20:00',
... '2018-10-28 02:36:00',
... '2018-10-28 03:46:00']))
>>> s.dt.tz_localize('CET', ambiguous=np.array([True, True, False]))
0 2018-10-28 01:20:00+02:00
1 2018-10-28 02:36:00+02:00
2 2018-10-28 03:46:00+01:00
dtype: datetime64[ns, CET]
If the DST transition causes nonexistent times, you can shift these
dates forward or backward with a timedelta object or 'shift_forward'
or 'shift_backward'.
>>> s = pd.to_datetime(pd.Series(['2015-03-29 02:30:00',
... '2015-03-29 03:30:00']))
>>> s.dt.tz_localize('Europe/Warsaw', nonexistent='shift_forward')
0 2015-03-29 03:00:00+02:00
1 2015-03-29 03:30:00+02:00
dtype: datetime64[ns, Europe/Warsaw]
>>> s.dt.tz_localize('Europe/Warsaw', nonexistent='shift_backward')
0 2015-03-29 01:59:59.999999999+01:00
1 2015-03-29 03:30:00+02:00
dtype: datetime64[ns, Europe/Warsaw]
>>> s.dt.tz_localize('Europe/Warsaw', nonexistent=pd.Timedelta('1H'))
0 2015-03-29 03:30:00+02:00
1 2015-03-29 03:30:00+02:00
dtype: datetime64[ns, Europe/Warsaw]
|
reference/api/pandas.DatetimeIndex.tz_localize.html
|
pandas.tseries.offsets.YearBegin.is_on_offset
|
`pandas.tseries.offsets.YearBegin.is_on_offset`
Return boolean whether a timestamp intersects with this frequency.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
```
|
YearBegin.is_on_offset()#
Return boolean whether a timestamp intersects with this frequency.
Parameters
dt : datetime.datetime
    Timestamp to check intersections with frequency.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
>>> ts = pd.Timestamp(2022, 8, 6)
>>> ts.day_name()
'Saturday'
>>> freq = pd.offsets.BusinessDay(1)
>>> freq.is_on_offset(ts)
False
|
reference/api/pandas.tseries.offsets.YearBegin.is_on_offset.html
|
pandas.tseries.offsets.FY5253.isAnchored
|
pandas.tseries.offsets.FY5253.isAnchored
|
FY5253.isAnchored()#
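This entry ships without a description. A hedged note: isAnchored is the camelCase spelling of is_anchored, which the QuarterEnd method table elsewhere in this document describes as "Return boolean whether the frequency is a unit frequency (n=1)". A minimal sketch using the preferred snake_case spelling:
>>> offset = pd.offsets.FY5253(weekday=4, startingMonth=1, variation='nearest')
>>> offset.is_anchored()  # unit frequency (n=1), so anchored
True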
|
reference/api/pandas.tseries.offsets.FY5253.isAnchored.html
|
pandas.tseries.offsets.QuarterEnd
|
`pandas.tseries.offsets.QuarterEnd`
DateOffset increments between Quarter end dates.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> ts + pd.offsets.QuarterEnd()
Timestamp('2022-03-31 00:00:00')
```
|
class pandas.tseries.offsets.QuarterEnd#
DateOffset increments between Quarter end dates.
startingMonth = 1 corresponds to dates like 1/31/2007, 4/30/2007, …
startingMonth = 2 corresponds to dates like 2/28/2007, 5/31/2007, …
startingMonth = 3 corresponds to dates like 3/31/2007, 6/30/2007, …
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> ts + pd.offsets.QuarterEnd()
Timestamp('2022-03-31 00:00:00')
Attributes
base
Returns a copy of the calling offset object with n=1 and all other attributes equal.
freqstr
Return a string representing the frequency.
kwds
Return a dict of extra parameters for the offset.
name
Return a string representing the base frequency.
n
nanos
normalize
rule_code
startingMonth
Methods
__call__(*args, **kwargs)
Call self as a function.
apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
copy
Return a copy of the frequency.
is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
is_month_end
Return boolean whether a timestamp occurs on the month end.
is_month_start
Return boolean whether a timestamp occurs on the month start.
is_on_offset
Return boolean whether a timestamp intersects with this frequency.
is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
is_year_end
Return boolean whether a timestamp occurs on the year end.
is_year_start
Return boolean whether a timestamp occurs on the year start.
rollback
Roll provided date backward to next offset only if not on offset.
rollforward
Roll provided date forward to next offset only if not on offset.
apply
isAnchored
onOffset
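The rollback and rollforward entries above are easiest to see in action; a minimal sketch (dates follow the default startingMonth=3 calendar quarters, so quarters end in March, June, September and December):
>>> ts = pd.Timestamp(2022, 1, 15)
>>> pd.offsets.QuarterEnd().rollforward(ts)
Timestamp('2022-03-31 00:00:00')
>>> pd.offsets.QuarterEnd().rollback(ts)
Timestamp('2021-12-31 00:00:00')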
|
reference/api/pandas.tseries.offsets.QuarterEnd.html
|
pandas.Index.union
|
`pandas.Index.union`
Form the union of two Index objects.
```
>>> idx1 = pd.Index([1, 2, 3, 4])
>>> idx2 = pd.Index([3, 4, 5, 6])
>>> idx1.union(idx2)
Int64Index([1, 2, 3, 4, 5, 6], dtype='int64')
```
|
final Index.union(other, sort=None)[source]#
Form the union of two Index objects.
If the Index objects are incompatible, both Index objects will be
cast to dtype(‘object’) first.
Changed in version 0.25.0.
Parameters
other : Index or array-like
sort : bool or None, default None
    Whether to sort the resulting Index.
    - None : Sort the result, except when
      1. self and other are equal.
      2. self or other has length 0.
      3. Some values in self or other cannot be compared.
         A RuntimeWarning is issued in this case.
    - False : do not sort the result.
Returns
union : Index
Examples
Union matching dtypes
>>> idx1 = pd.Index([1, 2, 3, 4])
>>> idx2 = pd.Index([3, 4, 5, 6])
>>> idx1.union(idx2)
Int64Index([1, 2, 3, 4, 5, 6], dtype='int64')
Union mismatched dtypes
>>> idx1 = pd.Index(['a', 'b', 'c', 'd'])
>>> idx2 = pd.Index([1, 2, 3, 4])
>>> idx1.union(idx2)
Index(['a', 'b', 'c', 'd', 1, 2, 3, 4], dtype='object')
MultiIndex case
>>> idx1 = pd.MultiIndex.from_arrays(
... [[1, 1, 2, 2], ["Red", "Blue", "Red", "Blue"]]
... )
>>> idx1
MultiIndex([(1, 'Red'),
(1, 'Blue'),
(2, 'Red'),
(2, 'Blue')],
)
>>> idx2 = pd.MultiIndex.from_arrays(
... [[3, 3, 2, 2], ["Red", "Green", "Red", "Green"]]
... )
>>> idx2
MultiIndex([(3, 'Red'),
(3, 'Green'),
(2, 'Red'),
(2, 'Green')],
)
>>> idx1.union(idx2)
MultiIndex([(1, 'Blue'),
(1, 'Red'),
(2, 'Blue'),
(2, 'Green'),
(2, 'Red'),
(3, 'Green'),
(3, 'Red')],
)
>>> idx1.union(idx2, sort=False)
MultiIndex([(1, 'Red'),
(1, 'Blue'),
(2, 'Red'),
(2, 'Blue'),
(3, 'Red'),
(3, 'Green'),
(2, 'Green')],
)
|
reference/api/pandas.Index.union.html
|
pandas.core.window.rolling.Rolling.kurt
|
`pandas.core.window.rolling.Rolling.kurt`
Calculate the rolling Fisher’s definition of kurtosis without bias.
```
>>> arr = [1, 2, 3, 4, 999]
>>> import scipy.stats
>>> print(f"{scipy.stats.kurtosis(arr[:-1], bias=False):.6f}")
-1.200000
>>> print(f"{scipy.stats.kurtosis(arr[1:], bias=False):.6f}")
3.999946
>>> s = pd.Series(arr)
>>> s.rolling(4).kurt()
0 NaN
1 NaN
2 NaN
3 -1.200000
4 3.999946
dtype: float64
```
|
Rolling.kurt(numeric_only=False, **kwargs)[source]#
Calculate the rolling Fisher’s definition of kurtosis without bias.
Parameters
numeric_only : bool, default False
    Include only float, int, boolean columns.
    New in version 1.5.0.
**kwargs
    For NumPy compatibility and will not have an effect on the result.
    Deprecated since version 1.5.0.
Returns
Series or DataFrame
    Return type is the same as the original object with np.float64 dtype.
See also
scipy.stats.kurtosis : Reference SciPy method.
pandas.Series.rolling : Calling rolling with Series data.
pandas.DataFrame.rolling : Calling rolling with DataFrames.
pandas.Series.kurt : Aggregating kurt for Series.
pandas.DataFrame.kurt : Aggregating kurt for DataFrame.
Notes
A minimum of four periods is required for the calculation.
Examples
The example below will show a rolling calculation with a window size of
four matching the equivalent function call using scipy.stats.
>>> arr = [1, 2, 3, 4, 999]
>>> import scipy.stats
>>> print(f"{scipy.stats.kurtosis(arr[:-1], bias=False):.6f}")
-1.200000
>>> print(f"{scipy.stats.kurtosis(arr[1:], bias=False):.6f}")
3.999946
>>> s = pd.Series(arr)
>>> s.rolling(4).kurt()
0 NaN
1 NaN
2 NaN
3 -1.200000
4 3.999946
dtype: float64
|
reference/api/pandas.core.window.rolling.Rolling.kurt.html
|
pandas.Series.drop_duplicates
|
`pandas.Series.drop_duplicates`
Return Series with duplicate values removed.
```
>>> s = pd.Series(['lama', 'cow', 'lama', 'beetle', 'lama', 'hippo'],
... name='animal')
>>> s
0 lama
1 cow
2 lama
3 beetle
4 lama
5 hippo
Name: animal, dtype: object
```
|
Series.drop_duplicates(*, keep='first', inplace=False)[source]#
Return Series with duplicate values removed.
Parameters
keep : {'first', 'last', False}, default 'first'
    Method to handle dropping duplicates:
    - 'first' : Drop duplicates except for the first occurrence.
    - 'last' : Drop duplicates except for the last occurrence.
    - False : Drop all duplicates.
inplace : bool, default False
    If True, performs operation inplace and returns None.
Returns
Series or None
    Series with duplicates dropped or None if inplace=True.
See also
Index.drop_duplicates : Equivalent method on Index.
DataFrame.drop_duplicates : Equivalent method on DataFrame.
Series.duplicated : Related method on Series, indicating duplicate Series values.
Series.unique : Return unique values as an array.
Examples
Generate a Series with duplicated entries.
>>> s = pd.Series(['lama', 'cow', 'lama', 'beetle', 'lama', 'hippo'],
... name='animal')
>>> s
0 lama
1 cow
2 lama
3 beetle
4 lama
5 hippo
Name: animal, dtype: object
With the ‘keep’ parameter, the selection behaviour of duplicated values
can be changed. The value ‘first’ keeps the first occurrence for each
set of duplicated entries. The default value of keep is ‘first’.
>>> s.drop_duplicates()
0 lama
1 cow
3 beetle
5 hippo
Name: animal, dtype: object
The value ‘last’ for parameter ‘keep’ keeps the last occurrence for
each set of duplicated entries.
>>> s.drop_duplicates(keep='last')
1 cow
3 beetle
4 lama
5 hippo
Name: animal, dtype: object
The value False for parameter ‘keep’ discards all sets of
duplicated entries. Setting the value of ‘inplace’ to True performs
the operation inplace and returns None.
>>> s.drop_duplicates(keep=False, inplace=True)
>>> s
1 cow
3 beetle
5 hippo
Name: animal, dtype: object
|
reference/api/pandas.Series.drop_duplicates.html
|
pandas.DataFrame.rfloordiv
|
`pandas.DataFrame.rfloordiv`
Get Integer division of dataframe and other, element-wise (binary operator rfloordiv).
```
>>> df = pd.DataFrame({'angles': [0, 3, 4],
... 'degrees': [360, 180, 360]},
... index=['circle', 'triangle', 'rectangle'])
>>> df
angles degrees
circle 0 360
triangle 3 180
rectangle 4 360
```
|
DataFrame.rfloordiv(other, axis='columns', level=None, fill_value=None)[source]#
Get Integer division of dataframe and other, element-wise (binary operator rfloordiv).
Equivalent to other // dataframe, but with support to substitute a fill_value
for missing data in one of the inputs. With reverse version, floordiv.
Among flexible wrappers (add, sub, mul, div, mod, pow) to
arithmetic operators: +, -, *, /, //, %, **.
Parameters
other : scalar, sequence, Series, dict or DataFrame
    Any single or multiple element data structure, or list-like object.
axis : {0 or 'index', 1 or 'columns'}
    Whether to compare by the index (0 or 'index') or columns
    (1 or 'columns'). For Series input, axis to match Series index on.
level : int or label
    Broadcast across a level, matching Index values on the
    passed MultiIndex level.
fill_value : float or None, default None
    Fill existing missing (NaN) values, and any new element needed for
    successful DataFrame alignment, with this value before computation.
    If data in both corresponding DataFrame locations is missing
    the result will be missing.
Returns
DataFrame
    Result of the arithmetic operation.
See also
DataFrame.add : Add DataFrames.
DataFrame.sub : Subtract DataFrames.
DataFrame.mul : Multiply DataFrames.
DataFrame.div : Divide DataFrames (float division).
DataFrame.truediv : Divide DataFrames (float division).
DataFrame.floordiv : Divide DataFrames (integer division).
DataFrame.mod : Calculate modulo (remainder after division).
DataFrame.pow : Calculate exponential power.
Notes
Mismatched indices will be unioned together.
Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4],
... 'degrees': [360, 180, 360]},
... index=['circle', 'triangle', 'rectangle'])
>>> df
angles degrees
circle 0 360
triangle 3 180
rectangle 4 360
Add a scalar with the operator version, which returns the same
results.
>>> df + 1
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
>>> df.add(1)
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
Divide by constant with reverse version.
>>> df.div(10)
angles degrees
circle 0.0 36.0
triangle 0.3 18.0
rectangle 0.4 36.0
>>> df.rdiv(10)
angles degrees
circle inf 0.027778
triangle 3.333333 0.055556
rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2]
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub([1, 2], axis='columns')
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
... axis='index')
angles degrees
circle -1 359
triangle 2 179
rectangle 3 359
Multiply a dictionary by axis.
>>> df.mul({'angles': 0, 'degrees': 2})
angles degrees
circle 0 720
triangle 0 360
rectangle 0 720
>>> df.mul({'circle': 0, 'triangle': 2, 'rectangle': 3}, axis='index')
angles degrees
circle 0 0
triangle 6 360
rectangle 12 1080
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]},
... index=['circle', 'triangle', 'rectangle'])
>>> other
angles
circle 0
triangle 3
rectangle 4
>>> df * other
angles degrees
circle 0 NaN
triangle 9 NaN
rectangle 16 NaN
>>> df.mul(other, fill_value=0)
angles degrees
circle 0 0.0
triangle 9 0.0
rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
... 'degrees': [360, 180, 360, 360, 540, 720]},
... index=[['A', 'A', 'A', 'B', 'B', 'B'],
... ['circle', 'triangle', 'rectangle',
... 'square', 'pentagon', 'hexagon']])
>>> df_multindex
angles degrees
A circle 0 360
triangle 3 180
rectangle 4 360
B square 4 360
pentagon 5 540
hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0)
angles degrees
A circle NaN 1.0
triangle 1.0 1.0
rectangle 1.0 1.0
B square 0.0 0.0
pentagon 0.0 0.0
hexagon 0.0 0.0
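The shared examples above never call rfloordiv itself; a minimal sketch on the degrees column (chosen to avoid floor division by the zero in angles), computing 720 // df['degrees'] element-wise:
>>> df['degrees'].rfloordiv(720)
circle       2
triangle     4
rectangle    2
Name: degrees, dtype: int64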
|
reference/api/pandas.DataFrame.rfloordiv.html
|
pandas.Timestamp.weekofyear
|
`pandas.Timestamp.weekofyear`
Return the week number of the year.
```
>>> ts = pd.Timestamp(2020, 3, 14)
>>> ts.week
11
```
|
Timestamp.weekofyear#
Return the week number of the year.
Examples
>>> ts = pd.Timestamp(2020, 3, 14)
>>> ts.week
11
|
reference/api/pandas.Timestamp.weekofyear.html
|
pandas.tseries.offsets.BusinessHour.onOffset
|
pandas.tseries.offsets.BusinessHour.onOffset
|
BusinessHour.onOffset()#
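This entry ships without a description. A hedged note: onOffset is the camelCase spelling of is_on_offset, also listed without description in the QuarterEnd method table above; a minimal sketch using the preferred spelling (2022-08-05 is a Friday, and 10:00 falls inside the default 09:00-17:00 business hours):
>>> ts = pd.Timestamp(2022, 8, 5, 10)
>>> pd.offsets.BusinessHour().is_on_offset(ts)
True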
|
reference/api/pandas.tseries.offsets.BusinessHour.onOffset.html
|
pandas.tseries.offsets.CustomBusinessMonthEnd.normalize
|
pandas.tseries.offsets.CustomBusinessMonthEnd.normalize
|
CustomBusinessMonthEnd.normalize#
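This entry ships without a description. A hedged sketch of the normalize attribute (assumed, as on other offsets, to report whether the offset snaps results to midnight):
>>> pd.offsets.CustomBusinessMonthEnd().normalize
False
>>> ts = pd.Timestamp(2022, 8, 5, 16)
>>> ts + pd.offsets.CustomBusinessMonthEnd(normalize=True)
Timestamp('2022-08-31 00:00:00')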
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthEnd.normalize.html
|
pandas.DataFrame.kurtosis
|
`pandas.DataFrame.kurtosis`
Return unbiased kurtosis over requested axis.
Kurtosis obtained using Fisher’s definition of
kurtosis (kurtosis of normal == 0.0). Normalized by N-1.
|
DataFrame.kurtosis(axis=_NoDefault.no_default, skipna=True, level=None, numeric_only=None, **kwargs)[source]#
Return unbiased kurtosis over requested axis.
Kurtosis obtained using Fisher’s definition of
kurtosis (kurtosis of normal == 0.0). Normalized by N-1.
Parameters
axis : {index (0), columns (1)}
    Axis for the function to be applied on.
    For Series this parameter is unused and defaults to 0.
skipna : bool, default True
    Exclude NA/null values when computing the result.
level : int or level name, default None
    If the axis is a MultiIndex (hierarchical), count along a
    particular level, collapsing into a Series.
    Deprecated since version 1.3.0: The level keyword is deprecated. Use groupby instead.
numeric_only : bool, default None
    Include only float, int, boolean columns. If None, will attempt to use
    everything, then use only numeric data. Not implemented for Series.
    Deprecated since version 1.5.0: Specifying numeric_only=None is deprecated. The default value
    will be False in a future version of pandas.
**kwargs
    Additional keyword arguments to be passed to the function.
Returns
Series or DataFrame (if level specified)
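This entry ships without an example; a minimal sketch (values chosen so the unbiased Fisher kurtosis comes out to round numbers):
>>> df = pd.DataFrame({'a': [1, 2, 2, 3], 'b': [3, 4, 4, 4]})
>>> df.kurt()
a    1.5
b    4.0
dtype: float64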
|
reference/api/pandas.DataFrame.kurtosis.html
|
pandas.core.groupby.GroupBy.head
|
`pandas.core.groupby.GroupBy.head`
Return first n rows of each group.
```
>>> df = pd.DataFrame([[1, 2], [1, 4], [5, 6]],
... columns=['A', 'B'])
>>> df.groupby('A').head(1)
A B
0 1 2
2 5 6
>>> df.groupby('A').head(-1)
A B
0 1 2
```
|
final GroupBy.head(n=5)[source]#
Return first n rows of each group.
Similar to .apply(lambda x: x.head(n)), but it returns a subset of rows
from the original DataFrame with original index and order preserved
(as_index flag is ignored).
Parameters
n : int
    If positive: number of entries to include from start of each group.
    If negative: number of entries to exclude from end of each group.
Returns
Series or DataFrame
    Subset of original Series or DataFrame as determined by n.
See also
Series.groupby : Apply a function groupby to a Series.
DataFrame.groupby : Apply a function groupby to each row or column of a DataFrame.
Examples
>>> df = pd.DataFrame([[1, 2], [1, 4], [5, 6]],
... columns=['A', 'B'])
>>> df.groupby('A').head(1)
A B
0 1 2
2 5 6
>>> df.groupby('A').head(-1)
A B
0 1 2
|
reference/api/pandas.core.groupby.GroupBy.head.html
|
pandas.DataFrame.items
|
`pandas.DataFrame.items`
Iterate over (column name, Series) pairs.
Iterates over the DataFrame columns, returning a tuple with
the column name and the content as a Series.
```
>>> df = pd.DataFrame({'species': ['bear', 'bear', 'marsupial'],
... 'population': [1864, 22000, 80000]},
... index=['panda', 'polar', 'koala'])
>>> df
species population
panda bear 1864
polar bear 22000
koala marsupial 80000
>>> for label, content in df.items():
... print(f'label: {label}')
... print(f'content: {content}', sep='\n')
...
label: species
content:
panda bear
polar bear
koala marsupial
Name: species, dtype: object
label: population
content:
panda 1864
polar 22000
koala 80000
Name: population, dtype: int64
```
|
DataFrame.items()[source]#
Iterate over (column name, Series) pairs.
Iterates over the DataFrame columns, returning a tuple with
the column name and the content as a Series.
Yields
label : object
    The column names for the DataFrame being iterated over.
content : Series
    The column entries belonging to each label, as a Series.
See also
DataFrame.iterrows : Iterate over DataFrame rows as (index, Series) pairs.
DataFrame.itertuples : Iterate over DataFrame rows as namedtuples of the values.
Examples
>>> df = pd.DataFrame({'species': ['bear', 'bear', 'marsupial'],
... 'population': [1864, 22000, 80000]},
... index=['panda', 'polar', 'koala'])
>>> df
species population
panda bear 1864
polar bear 22000
koala marsupial 80000
>>> for label, content in df.items():
... print(f'label: {label}')
... print(f'content: {content}', sep='\n')
...
label: species
content:
panda bear
polar bear
koala marsupial
Name: species, dtype: object
label: population
content:
panda 1864
polar 22000
koala 80000
Name: population, dtype: int64
|
reference/api/pandas.DataFrame.items.html
|
pandas.Timestamp.asm8
|
`pandas.Timestamp.asm8`
Return numpy datetime64 format in nanoseconds.
Examples
```
>>> ts = pd.Timestamp(2020, 3, 14, 15)
>>> ts.asm8
numpy.datetime64('2020-03-14T15:00:00.000000000')
```
|
Timestamp.asm8#
Return numpy datetime64 format in nanoseconds.
Examples
>>> ts = pd.Timestamp(2020, 3, 14, 15)
>>> ts.asm8
numpy.datetime64('2020-03-14T15:00:00.000000000')
|
reference/api/pandas.Timestamp.asm8.html
|
pandas.tseries.offsets.QuarterEnd.is_year_start
|
`pandas.tseries.offsets.QuarterEnd.is_year_start`
Return boolean whether a timestamp occurs on the year start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
```
|
QuarterEnd.is_year_start()#
Return boolean whether a timestamp occurs on the year start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
|
reference/api/pandas.tseries.offsets.QuarterEnd.is_year_start.html
|
pandas.tseries.offsets.BusinessMonthEnd.freqstr
|
`pandas.tseries.offsets.BusinessMonthEnd.freqstr`
Return a string representing the frequency.
Examples
```
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
```
|
BusinessMonthEnd.freqstr#
Return a string representing the frequency.
Examples
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
>>> pd.offsets.BusinessHour(2).freqstr
'2BH'
>>> pd.offsets.Nano().freqstr
'N'
>>> pd.offsets.Nano(-3).freqstr
'-3N'
|
reference/api/pandas.tseries.offsets.BusinessMonthEnd.freqstr.html
|
pandas.tseries.offsets.QuarterBegin.apply_index
|
`pandas.tseries.offsets.QuarterBegin.apply_index`
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead.
|
QuarterBegin.apply_index()#
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead.
Parameters
index : DatetimeIndex
Returns
DatetimeIndex
Raises
NotImplementedError
    When the specific offset subclass does not have a vectorized
    implementation.
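Since apply_index is deprecated, a minimal sketch of the suggested offset + dtindex replacement (startingMonth=1 chosen explicitly so quarters begin in January, April, July and October):
>>> dtindex = pd.date_range('2022-01-15', periods=2, freq='D')
>>> dtindex + pd.offsets.QuarterBegin(startingMonth=1)
DatetimeIndex(['2022-04-01', '2022-04-01'], dtype='datetime64[ns]', freq=None)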
|
reference/api/pandas.tseries.offsets.QuarterBegin.apply_index.html
|
pandas.DataFrame.plot.line
|
`pandas.DataFrame.plot.line`
Plot Series or DataFrame as lines.
```
>>> s = pd.Series([1, 3, 2])
>>> s.plot.line()
<AxesSubplot: >
```
|
DataFrame.plot.line(x=None, y=None, **kwargs)[source]#
Plot Series or DataFrame as lines.
This function is useful to plot lines using DataFrame’s values
as coordinates.
Parameters
x : label or position, optional
    Allows plotting of one column versus another. If not specified,
    the index of the DataFrame is used.
y : label or position, optional
    Allows plotting of one column versus another. If not specified,
    all numerical columns are used.
color : str, array-like, or dict, optional
    The color for each of the DataFrame's columns. Possible values are:
    - A single color string referred to by name, RGB or RGBA code, for
      instance 'red' or '#a98d19'.
    - A sequence of color strings referred to by name, RGB or RGBA code,
      which will be used for each column recursively. For instance
      ['green', 'yellow'], each column's line will be filled in green or
      yellow, alternatively. If there is only a single column to be
      plotted, then only the first color from the color list will be
      used.
    - A dict of the form {column name : color}, so that each column will
      be colored accordingly. For example, if your columns are called a
      and b, then passing {'a': 'green', 'b': 'red'} will color lines for
      column a in green and lines for column b in red.
    New in version 1.1.0.
**kwargs
    Additional keyword arguments are documented in DataFrame.plot().
Returns
matplotlib.axes.Axes or np.ndarray of them
    An ndarray is returned with one matplotlib.axes.Axes
    per column when subplots=True.
See also
matplotlib.pyplot.plot : Plot y versus x as lines and/or markers.
Examples
>>> s = pd.Series([1, 3, 2])
>>> s.plot.line()
<AxesSubplot: >
The following example shows the populations for some animals
over the years.
>>> df = pd.DataFrame({
... 'pig': [20, 18, 489, 675, 1776],
... 'horse': [4, 25, 281, 600, 1900]
... }, index=[1990, 1997, 2003, 2009, 2014])
>>> lines = df.plot.line()
An example with subplots, so an array of axes is returned.
>>> axes = df.plot.line(subplots=True)
>>> type(axes)
<class 'numpy.ndarray'>
Let’s repeat the same example, but specifying colors for
each column (in this case, for each animal).
>>> axes = df.plot.line(
... subplots=True, color={"pig": "pink", "horse": "#742802"}
... )
The following example shows the relationship between both
populations.
>>> lines = df.plot.line(x='pig', y='horse')
|
reference/api/pandas.DataFrame.plot.line.html
|
pandas.DataFrame.from_dict
|
`pandas.DataFrame.from_dict`
Construct DataFrame from dict of array-like or dicts.
```
>>> data = {'col_1': [3, 2, 1, 0], 'col_2': ['a', 'b', 'c', 'd']}
>>> pd.DataFrame.from_dict(data)
col_1 col_2
0 3 a
1 2 b
2 1 c
3 0 d
```
|
classmethod DataFrame.from_dict(data, orient='columns', dtype=None, columns=None)[source]#
Construct DataFrame from dict of array-like or dicts.
Creates DataFrame object from dictionary by columns or by index
allowing dtype specification.
Parameters
data : dict
    Of the form {field : array-like} or {field : dict}.
orient : {'columns', 'index', 'tight'}, default 'columns'
    The "orientation" of the data. If the keys of the passed dict
    should be the columns of the resulting DataFrame, pass 'columns'
    (default). Otherwise if the keys should be rows, pass 'index'.
    If 'tight', assume a dict with keys ['index', 'columns', 'data',
    'index_names', 'column_names'].
    New in version 1.4.0: 'tight' as an allowed value for the orient argument.
dtype : dtype, default None
    Data type to force, otherwise infer.
columns : list, default None
    Column labels to use when orient='index'. Raises a ValueError
    if used with orient='columns' or orient='tight'.
Returns
DataFrame
See also
DataFrame.from_records : DataFrame from structured ndarray, sequence of tuples or dicts, or DataFrame.
DataFrame : DataFrame object creation using constructor.
DataFrame.to_dict : Convert the DataFrame to a dictionary.
Examples
By default the keys of the dict become the DataFrame columns:
>>> data = {'col_1': [3, 2, 1, 0], 'col_2': ['a', 'b', 'c', 'd']}
>>> pd.DataFrame.from_dict(data)
col_1 col_2
0 3 a
1 2 b
2 1 c
3 0 d
Specify orient='index' to create the DataFrame using dictionary
keys as rows:
>>> data = {'row_1': [3, 2, 1, 0], 'row_2': ['a', 'b', 'c', 'd']}
>>> pd.DataFrame.from_dict(data, orient='index')
0 1 2 3
row_1 3 2 1 0
row_2 a b c d
When using the ‘index’ orientation, the column names can be
specified manually:
>>> pd.DataFrame.from_dict(data, orient='index',
... columns=['A', 'B', 'C', 'D'])
A B C D
row_1 3 2 1 0
row_2 a b c d
Specify orient='tight' to create the DataFrame using a ‘tight’
format:
>>> data = {'index': [('a', 'b'), ('a', 'c')],
... 'columns': [('x', 1), ('y', 2)],
... 'data': [[1, 3], [2, 4]],
... 'index_names': ['n1', 'n2'],
... 'column_names': ['z1', 'z2']}
>>> pd.DataFrame.from_dict(data, orient='tight')
z1 x y
z2 1 2
n1 n2
a b 1 3
c 2 4
|
reference/api/pandas.DataFrame.from_dict.html
|
pandas.DatetimeIndex.mean
|
`pandas.DatetimeIndex.mean`
Return the mean value of the Array.
|
DatetimeIndex.mean(*args, **kwargs)[source]#
Return the mean value of the Array.
New in version 0.25.0.
Parameters
skipna : bool, default True
    Whether to ignore any NaT elements.
axis : int, optional, default 0
Returns
scalar
    Timestamp or Timedelta.
See also
numpy.ndarray.mean : Returns the average of array elements along a given axis.
Series.mean : Return the mean value in a Series.
Notes
mean is only defined for Datetime and Timedelta dtypes, not for Period.
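This entry ships without an example; a minimal sketch:
>>> idx = pd.date_range('2001-01-01 00:00', periods=3)
>>> idx.mean()
Timestamp('2001-01-02 00:00:00')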
|
reference/api/pandas.DatetimeIndex.mean.html
|
pandas.tseries.offsets.Easter.isAnchored
|
pandas.tseries.offsets.Easter.isAnchored
|
Easter.isAnchored()#
|
reference/api/pandas.tseries.offsets.Easter.isAnchored.html
|
Options and settings
|
Options and settings
pandas has an options API to configure and customize global behavior related to
DataFrame display, data behavior and more.
Options have a full “dotted-style”, case-insensitive name (e.g. display.max_rows).
You can get/set options directly as attributes of the top-level options attribute:
The API is composed of 5 relevant functions, available directly from the pandas
namespace:
get_option() / set_option() - get/set the value of a single option.
reset_option() - reset one or more options to their default value.
|
Overview#
pandas has an options API to configure and customize global behavior related to
DataFrame display, data behavior and more.
Options have a full “dotted-style”, case-insensitive name (e.g. display.max_rows).
You can get/set options directly as attributes of the top-level options attribute:
In [1]: import pandas as pd
In [2]: pd.options.display.max_rows
Out[2]: 15
In [3]: pd.options.display.max_rows = 999
In [4]: pd.options.display.max_rows
Out[4]: 999
The API is composed of 5 relevant functions, available directly from the pandas
namespace:
get_option() / set_option() - get/set the value of a single option.
reset_option() - reset one or more options to their default value.
describe_option() - print the descriptions of one or more options.
option_context() - execute a codeblock with a set of options
that revert to prior settings after execution.
Note
Developers can check out pandas/core/config_init.py for more information.
All of the functions above accept a regexp pattern (re.search style) as an argument,
to match an unambiguous substring:
In [5]: pd.get_option("display.chop_threshold")
In [6]: pd.set_option("display.chop_threshold", 2)
In [7]: pd.get_option("display.chop_threshold")
Out[7]: 2
In [8]: pd.set_option("chop", 4)
In [9]: pd.get_option("display.chop_threshold")
Out[9]: 4
The following will not work because it matches multiple option names, e.g.
display.max_colwidth, display.max_rows, display.max_columns:
In [10]: pd.get_option("max")
---------------------------------------------------------------------------
OptionError Traceback (most recent call last)
Cell In[10], line 1
----> 1 pd.get_option("max")
File ~/work/pandas/pandas/pandas/_config/config.py:263, in CallableDynamicDoc.__call__(self, *args, **kwds)
262 def __call__(self, *args, **kwds) -> T:
--> 263 return self.__func__(*args, **kwds)
File ~/work/pandas/pandas/pandas/_config/config.py:135, in _get_option(pat, silent)
134 def _get_option(pat: str, silent: bool = False) -> Any:
--> 135 key = _get_single_key(pat, silent)
137 # walk the nested dict
138 root, k = _get_root(key)
File ~/work/pandas/pandas/pandas/_config/config.py:123, in _get_single_key(pat, silent)
121 raise OptionError(f"No such keys(s): {repr(pat)}")
122 if len(keys) > 1:
--> 123 raise OptionError("Pattern matched multiple keys")
124 key = keys[0]
126 if not silent:
OptionError: 'Pattern matched multiple keys'
Warning
Using this form of shorthand may cause your code to break if new options with similar names are added in future versions.
Available options#
You can get a list of available options and their descriptions with describe_option(). When called
with no argument describe_option() will print out the descriptions for all available options.
In [11]: pd.describe_option()
compute.use_bottleneck : bool
Use the bottleneck library to accelerate if it is installed,
the default is True
Valid values: False,True
[default: True] [currently: True]
compute.use_numba : bool
Use the numba engine option for select operations if it is installed,
the default is False
Valid values: False,True
[default: False] [currently: False]
compute.use_numexpr : bool
Use the numexpr library to accelerate computation if it is installed,
the default is True
Valid values: False,True
[default: True] [currently: True]
display.chop_threshold : float or None
if set to a float value, all float values smaller than the given threshold
will be displayed as exactly 0 by repr and friends.
[default: None] [currently: None]
display.colheader_justify : 'left'/'right'
Controls the justification of column headers. used by DataFrameFormatter.
[default: right] [currently: right]
display.column_space No description available.
[default: 12] [currently: 12]
display.date_dayfirst : boolean
When True, prints and parses dates with the day first, eg 20/01/2005
[default: False] [currently: False]
display.date_yearfirst : boolean
When True, prints and parses dates with the year first, eg 2005/01/20
[default: False] [currently: False]
display.encoding : str/unicode
Defaults to the detected encoding of the console.
Specifies the encoding to be used for strings returned by to_string,
these are generally strings meant to be displayed on the console.
[default: utf-8] [currently: utf8]
display.expand_frame_repr : boolean
Whether to print out the full DataFrame repr for wide DataFrames across
multiple lines, `max_columns` is still respected, but the output will
wrap-around across multiple "pages" if its width exceeds `display.width`.
[default: True] [currently: True]
display.float_format : callable
The callable should accept a floating point number and return
a string with the desired format of the number. This is used
in some places like SeriesFormatter.
See formats.format.EngFormatter for an example.
[default: None] [currently: None]
display.html.border : int
A ``border=value`` attribute is inserted in the ``<table>`` tag
for the DataFrame HTML repr.
[default: 1] [currently: 1]
display.html.table_schema : boolean
Whether to publish a Table Schema representation for frontends
that support it.
(default: False)
[default: False] [currently: False]
display.html.use_mathjax : boolean
When True, Jupyter notebook will process table contents using MathJax,
rendering mathematical expressions enclosed by the dollar symbol.
(default: True)
[default: True] [currently: True]
display.large_repr : 'truncate'/'info'
For DataFrames exceeding max_rows/max_cols, the repr (and HTML repr) can
show a truncated table (the default from 0.13), or switch to the view from
df.info() (the behaviour in earlier versions of pandas).
[default: truncate] [currently: truncate]
display.latex.escape : bool
This specifies if the to_latex method of a Dataframe escapes special
characters.
Valid values: False,True
[default: True] [currently: True]
display.latex.longtable : bool
This specifies if the to_latex method of a Dataframe uses the longtable
format.
Valid values: False,True
[default: False] [currently: False]
display.latex.multicolumn : bool
This specifies if the to_latex method of a Dataframe uses multicolumns
to pretty-print MultiIndex columns.
Valid values: False,True
[default: True] [currently: True]
display.latex.multicolumn_format : str
This specifies the alignment the to_latex method of a Dataframe uses
for multicolumn headers.
[default: l] [currently: l]
display.latex.multirow : bool
This specifies if the to_latex method of a Dataframe uses multirows
to pretty-print MultiIndex rows.
Valid values: False,True
[default: False] [currently: False]
display.latex.repr : boolean
Whether to produce a latex DataFrame representation for jupyter
environments that support it.
(default: False)
[default: False] [currently: False]
display.max_categories : int
This sets the maximum number of categories pandas should output when
printing out a `Categorical` or a Series of dtype "category".
[default: 8] [currently: 8]
display.max_columns : int
If max_cols is exceeded, switch to truncate view. Depending on
`large_repr`, objects are either centrally truncated or printed as
a summary view. 'None' value means unlimited.
In case python/IPython is running in a terminal and `large_repr`
equals 'truncate' this can be set to 0 and pandas will auto-detect
the width of the terminal and print a truncated object which fits
the screen width. The IPython notebook, IPython qtconsole, or IDLE
do not run in a terminal and hence it is not possible to do
correct auto-detection.
[default: 0] [currently: 0]
display.max_colwidth : int or None
The maximum width in characters of a column in the repr of
a pandas data structure. When the column overflows, a "..."
placeholder is embedded in the output. A 'None' value means unlimited.
[default: 50] [currently: 50]
display.max_dir_items : int
The number of items that will be added to `dir(...)`. 'None' value means
unlimited. Because dir is cached, changing this option will not immediately
affect already existing dataframes until a column is deleted or added.
This is for instance used to suggest columns from a dataframe to tab
completion.
[default: 100] [currently: 100]
display.max_info_columns : int
max_info_columns is used in DataFrame.info method to decide if
per column information will be printed.
[default: 100] [currently: 100]
display.max_info_rows : int or None
df.info() will usually show null-counts for each column.
For large frames this can be quite slow. max_info_rows and max_info_cols
limit this null check only to frames with smaller dimensions than
specified.
[default: 1690785] [currently: 1690785]
display.max_rows : int
If max_rows is exceeded, switch to truncate view. Depending on
`large_repr`, objects are either centrally truncated or printed as
a summary view. 'None' value means unlimited.
In case python/IPython is running in a terminal and `large_repr`
equals 'truncate' this can be set to 0 and pandas will auto-detect
the height of the terminal and print a truncated object which fits
the screen height. The IPython notebook, IPython qtconsole, or
IDLE do not run in a terminal and hence it is not possible to do
correct auto-detection.
[default: 60] [currently: 60]
display.max_seq_items : int or None
When pretty-printing a long sequence, no more than `max_seq_items`
will be printed. If items are omitted, they will be denoted by the
addition of "..." to the resulting string.
If set to None, the number of items to be printed is unlimited.
[default: 100] [currently: 100]
display.memory_usage : bool, string or None
This specifies if the memory usage of a DataFrame should be displayed when
df.info() is called. Valid values True,False,'deep'
[default: True] [currently: True]
display.min_rows : int
The numbers of rows to show in a truncated view (when `max_rows` is
exceeded). Ignored when `max_rows` is set to None or 0. When set to
None, follows the value of `max_rows`.
[default: 10] [currently: 10]
display.multi_sparse : boolean
"sparsify" MultiIndex display (don't display repeated
elements in outer levels within groups)
[default: True] [currently: True]
display.notebook_repr_html : boolean
When True, IPython notebook will use html representation for
pandas objects (if it is available).
[default: True] [currently: True]
display.pprint_nest_depth : int
Controls the number of nested levels to process when pretty-printing
[default: 3] [currently: 3]
display.precision : int
Floating point output precision in terms of number of places after the
decimal, for regular formatting as well as scientific notation. Similar
to ``precision`` in :meth:`numpy.set_printoptions`.
[default: 6] [currently: 6]
display.show_dimensions : boolean or 'truncate'
Whether to print out dimensions at the end of DataFrame repr.
If 'truncate' is specified, only print out the dimensions if the
frame is truncated (e.g. not display all rows and/or columns)
[default: truncate] [currently: truncate]
display.unicode.ambiguous_as_wide : boolean
Whether to use the Unicode East Asian Width to calculate the display text
width.
Enabling this may affect to the performance (default: False)
[default: False] [currently: False]
display.unicode.east_asian_width : boolean
Whether to use the Unicode East Asian Width to calculate the display text
width.
Enabling this may affect to the performance (default: False)
[default: False] [currently: False]
display.width : int
Width of the display in characters. In case python/IPython is running in
a terminal this can be set to None and pandas will correctly auto-detect
the width.
Note that the IPython notebook, IPython qtconsole, or IDLE do not run in a
terminal and hence it is not possible to correctly detect the width.
[default: 80] [currently: 80]
io.excel.ods.reader : string
The default Excel reader engine for 'ods' files. Available options:
auto, odf.
[default: auto] [currently: auto]
io.excel.ods.writer : string
The default Excel writer engine for 'ods' files. Available options:
auto, odf.
[default: auto] [currently: auto]
io.excel.xls.reader : string
The default Excel reader engine for 'xls' files. Available options:
auto, xlrd.
[default: auto] [currently: auto]
io.excel.xls.writer : string
The default Excel writer engine for 'xls' files. Available options:
auto, xlwt.
[default: auto] [currently: auto]
(Deprecated, use `` instead.)
io.excel.xlsb.reader : string
The default Excel reader engine for 'xlsb' files. Available options:
auto, pyxlsb.
[default: auto] [currently: auto]
io.excel.xlsm.reader : string
The default Excel reader engine for 'xlsm' files. Available options:
auto, xlrd, openpyxl.
[default: auto] [currently: auto]
io.excel.xlsm.writer : string
The default Excel writer engine for 'xlsm' files. Available options:
auto, openpyxl.
[default: auto] [currently: auto]
io.excel.xlsx.reader : string
The default Excel reader engine for 'xlsx' files. Available options:
auto, xlrd, openpyxl.
[default: auto] [currently: auto]
io.excel.xlsx.writer : string
The default Excel writer engine for 'xlsx' files. Available options:
auto, openpyxl, xlsxwriter.
[default: auto] [currently: auto]
io.hdf.default_format : format
default format writing format, if None, then
put will default to 'fixed' and append will default to 'table'
[default: None] [currently: None]
io.hdf.dropna_table : boolean
drop ALL nan rows when appending to a table
[default: False] [currently: False]
io.parquet.engine : string
The default parquet reader/writer engine. Available options:
'auto', 'pyarrow', 'fastparquet', the default is 'auto'
[default: auto] [currently: auto]
io.sql.engine : string
The default sql reader/writer engine. Available options:
'auto', 'sqlalchemy', the default is 'auto'
[default: auto] [currently: auto]
mode.chained_assignment : string
Raise an exception, warn, or no action if trying to use chained assignment,
The default is warn
[default: warn] [currently: warn]
mode.copy_on_write : bool
Use new copy-view behaviour using Copy-on-Write. Defaults to False,
unless overridden by the 'PANDAS_COPY_ON_WRITE' environment variable
(if set to "1" for True, needs to be set before pandas is imported).
[default: False] [currently: False]
mode.data_manager : string
Internal data manager type; can be "block" or "array". Defaults to "block",
unless overridden by the 'PANDAS_DATA_MANAGER' environment variable (needs
to be set before pandas is imported).
[default: block] [currently: block]
mode.sim_interactive : boolean
Whether to simulate interactive mode for purposes of testing
[default: False] [currently: False]
mode.string_storage : string
The default storage for StringDtype.
[default: python] [currently: python]
mode.use_inf_as_na : boolean
True means treat None, NaN, INF, -INF as NA (old way),
False means None and NaN are null, but INF, -INF are not NA
(new way).
[default: False] [currently: False]
mode.use_inf_as_null : boolean
use_inf_as_null had been deprecated and will be removed in a future
version. Use `use_inf_as_na` instead.
[default: False] [currently: False]
(Deprecated, use `mode.use_inf_as_na` instead.)
plotting.backend : str
The plotting backend to use. The default value is "matplotlib", the
backend provided with pandas. Other backends can be specified by
providing the name of the module that implements the backend.
[default: matplotlib] [currently: matplotlib]
plotting.matplotlib.register_converters : bool or 'auto'.
Whether to register converters with matplotlib's units registry for
dates, times, datetimes, and Periods. Toggling to False will remove
the converters, restoring any converters that pandas overwrote.
[default: auto] [currently: auto]
styler.format.decimal : str
The character representation for the decimal separator for floats and complex.
[default: .] [currently: .]
styler.format.escape : str, optional
Whether to escape certain characters according to the given context; html or latex.
[default: None] [currently: None]
styler.format.formatter : str, callable, dict, optional
A formatter object to be used as default within ``Styler.format``.
[default: None] [currently: None]
styler.format.na_rep : str, optional
The string representation for values identified as missing.
[default: None] [currently: None]
styler.format.precision : int
The precision for floats and complex numbers.
[default: 6] [currently: 6]
styler.format.thousands : str, optional
The character representation for thousands separator for floats, int and complex.
[default: None] [currently: None]
styler.html.mathjax : bool
If False will render special CSS classes to table attributes that indicate Mathjax
will not be used in Jupyter Notebook.
[default: True] [currently: True]
styler.latex.environment : str
The environment to replace ``\begin{table}``. If "longtable" is used results
in a specific longtable environment format.
[default: None] [currently: None]
styler.latex.hrules : bool
Whether to add horizontal rules on top and bottom and below the headers.
[default: False] [currently: False]
styler.latex.multicol_align : {"r", "c", "l", "naive-l", "naive-r"}
The specifier for horizontal alignment of sparsified LaTeX multicolumns. Pipe
decorators can also be added to non-naive values to draw vertical
rules, e.g. "\|r" will draw a rule on the left side of right aligned merged cells.
[default: r] [currently: r]
styler.latex.multirow_align : {"c", "t", "b"}
The specifier for vertical alignment of sparsified LaTeX multirows.
[default: c] [currently: c]
styler.render.encoding : str
The encoding used for output HTML and LaTeX files.
[default: utf-8] [currently: utf-8]
styler.render.max_columns : int, optional
The maximum number of columns that will be rendered. May still be reduced to
satisfy ``max_elements``, which takes precedence.
[default: None] [currently: None]
styler.render.max_elements : int
The maximum number of data-cell (<td>) elements that will be rendered before
trimming will occur over columns, rows or both if needed.
[default: 262144] [currently: 262144]
styler.render.max_rows : int, optional
The maximum number of rows that will be rendered. May still be reduced to
satisfy ``max_elements``, which takes precedence.
[default: None] [currently: None]
styler.render.repr : str
Determine which output to use in Jupyter Notebook in {"html", "latex"}.
[default: html] [currently: html]
styler.sparse.columns : bool
Whether to sparsify the display of hierarchical columns. Setting to False will
display each explicit level element in a hierarchical key for each column.
[default: True] [currently: True]
styler.sparse.index : bool
Whether to sparsify the display of a hierarchical index. Setting to False will
display each explicit level element in a hierarchical key for each row.
[default: True] [currently: True]
Getting and setting options#
As described above, get_option() and set_option()
are available from the pandas namespace. To change an option, call
set_option('option regex', new_value).
In [12]: pd.get_option("mode.sim_interactive")
Out[12]: False
In [13]: pd.set_option("mode.sim_interactive", True)
In [14]: pd.get_option("mode.sim_interactive")
Out[14]: True
Note
The option 'mode.sim_interactive' is mostly used for debugging purposes.
You can use reset_option() to revert to a setting’s default value
In [15]: pd.get_option("display.max_rows")
Out[15]: 60
In [16]: pd.set_option("display.max_rows", 999)
In [17]: pd.get_option("display.max_rows")
Out[17]: 999
In [18]: pd.reset_option("display.max_rows")
In [19]: pd.get_option("display.max_rows")
Out[19]: 60
It’s also possible to reset multiple options at once (using a regex):
In [20]: pd.reset_option("^display")
option_context() context manager has been exposed through
the top-level API, allowing you to execute code with given option values. Option values
are restored automatically when you exit the with block:
In [21]: with pd.option_context("display.max_rows", 10, "display.max_columns", 5):
....: print(pd.get_option("display.max_rows"))
....: print(pd.get_option("display.max_columns"))
....:
10
5
In [22]: print(pd.get_option("display.max_rows"))
60
In [23]: print(pd.get_option("display.max_columns"))
0
Setting startup options in Python/IPython environment#
Using startup scripts for the Python/IPython environment to import pandas and set options makes working with pandas more efficient.
To do this, create a .py or .ipy script in the startup directory of the desired profile.
An example where the startup folder is in a default IPython profile can be found at:
$IPYTHONDIR/profile_default/startup
More information can be found in the IPython documentation. An example startup script for pandas is displayed below:
import pandas as pd
pd.set_option("display.max_rows", 999)
pd.set_option("display.precision", 5)
Frequently used options#
The following demonstrates some of the more frequently used display options.
display.max_rows and display.max_columns set the maximum number
of rows and columns displayed when a frame is pretty-printed. Truncated
lines are replaced by an ellipsis.
In [24]: df = pd.DataFrame(np.random.randn(7, 2))
In [25]: pd.set_option("display.max_rows", 7)
In [26]: df
Out[26]:
0 1
0 0.469112 -0.282863
1 -1.509059 -1.135632
2 1.212112 -0.173215
3 0.119209 -1.044236
4 -0.861849 -2.104569
5 -0.494929 1.071804
6 0.721555 -0.706771
In [27]: pd.set_option("display.max_rows", 5)
In [28]: df
Out[28]:
0 1
0 0.469112 -0.282863
1 -1.509059 -1.135632
.. ... ...
5 -0.494929 1.071804
6 0.721555 -0.706771
[7 rows x 2 columns]
In [29]: pd.reset_option("display.max_rows")
Once display.max_rows is exceeded, the display.min_rows option
determines how many rows are shown in the truncated repr.
In [30]: pd.set_option("display.max_rows", 8)
In [31]: pd.set_option("display.min_rows", 4)
# below max_rows -> all rows shown
In [32]: df = pd.DataFrame(np.random.randn(7, 2))
In [33]: df
Out[33]:
0 1
0 -1.039575 0.271860
1 -0.424972 0.567020
2 0.276232 -1.087401
3 -0.673690 0.113648
4 -1.478427 0.524988
5 0.404705 0.577046
6 -1.715002 -1.039268
# above max_rows -> only min_rows (4) rows shown
In [34]: df = pd.DataFrame(np.random.randn(9, 2))
In [35]: df
Out[35]:
0 1
0 -0.370647 -1.157892
1 -1.344312 0.844885
.. ... ...
7 0.276662 -0.472035
8 -0.013960 -0.362543
[9 rows x 2 columns]
In [36]: pd.reset_option("display.max_rows")
In [37]: pd.reset_option("display.min_rows")
display.expand_frame_repr allows for the representation of a
DataFrame to stretch across pages, wrapped over all the columns.
In [38]: df = pd.DataFrame(np.random.randn(5, 10))
In [39]: pd.set_option("expand_frame_repr", True)
In [40]: df
Out[40]:
0 1 2 ... 7 8 9
0 -0.006154 -0.923061 0.895717 ... 1.340309 -1.170299 -0.226169
1 0.410835 0.813850 0.132003 ... -1.436737 -1.413681 1.607920
2 1.024180 0.569605 0.875906 ... -0.078638 0.545952 -1.219217
3 -1.226825 0.769804 -1.281247 ... 0.341734 0.959726 -1.110336
4 -0.619976 0.149748 -0.732339 ... 0.301624 -2.179861 -1.369849
[5 rows x 10 columns]
In [41]: pd.set_option("expand_frame_repr", False)
In [42]: df
Out[42]:
0 1 2 3 4 5 6 7 8 9
0 -0.006154 -0.923061 0.895717 0.805244 -1.206412 2.565646 1.431256 1.340309 -1.170299 -0.226169
1 0.410835 0.813850 0.132003 -0.827317 -0.076467 -1.187678 1.130127 -1.436737 -1.413681 1.607920
2 1.024180 0.569605 0.875906 -2.211372 0.974466 -2.006747 -0.410001 -0.078638 0.545952 -1.219217
3 -1.226825 0.769804 -1.281247 -0.727707 -0.121306 -0.097883 0.695775 0.341734 0.959726 -1.110336
4 -0.619976 0.149748 -0.732339 0.687738 0.176444 0.403310 -0.154951 0.301624 -2.179861 -1.369849
In [43]: pd.reset_option("expand_frame_repr")
display.large_repr displays a DataFrame that exceeds
max_columns or max_rows as a truncated frame or summary.
In [44]: df = pd.DataFrame(np.random.randn(10, 10))
In [45]: pd.set_option("display.max_rows", 5)
In [46]: pd.set_option("large_repr", "truncate")
In [47]: df
Out[47]:
0 1 2 ... 7 8 9
0 -0.954208 1.462696 -1.743161 ... 0.995761 2.396780 0.014871
1 3.357427 -0.317441 -1.236269 ... 0.380396 0.084844 0.432390
.. ... ... ... ... ... ... ...
8 -0.303421 -0.858447 0.306996 ... 0.476720 0.473424 -0.242861
9 -0.014805 -0.284319 0.650776 ... 1.613616 0.464000 0.227371
[10 rows x 10 columns]
In [48]: pd.set_option("large_repr", "info")
In [49]: df
Out[49]:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 0 10 non-null float64
1 1 10 non-null float64
2 2 10 non-null float64
3 3 10 non-null float64
4 4 10 non-null float64
5 5 10 non-null float64
6 6 10 non-null float64
7 7 10 non-null float64
8 8 10 non-null float64
9 9 10 non-null float64
dtypes: float64(10)
memory usage: 928.0 bytes
In [50]: pd.reset_option("large_repr")
In [51]: pd.reset_option("display.max_rows")
display.max_colwidth sets the maximum width of columns. Cells
of this length or longer will be truncated with an ellipsis.
In [52]: df = pd.DataFrame(
....: np.array(
....: [
....: ["foo", "bar", "bim", "uncomfortably long string"],
....: ["horse", "cow", "banana", "apple"],
....: ]
....: )
....: )
....:
In [53]: pd.set_option("max_colwidth", 40)
In [54]: df
Out[54]:
0 1 2 3
0 foo bar bim uncomfortably long string
1 horse cow banana apple
In [55]: pd.set_option("max_colwidth", 6)
In [56]: df
Out[56]:
0 1 2 3
0 foo bar bim un...
1 horse cow ba... apple
In [57]: pd.reset_option("max_colwidth")
display.max_info_columns sets a threshold for the number of columns
displayed when calling info().
In [58]: df = pd.DataFrame(np.random.randn(10, 10))
In [59]: pd.set_option("max_info_columns", 11)
In [60]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 0 10 non-null float64
1 1 10 non-null float64
2 2 10 non-null float64
3 3 10 non-null float64
4 4 10 non-null float64
5 5 10 non-null float64
6 6 10 non-null float64
7 7 10 non-null float64
8 8 10 non-null float64
9 9 10 non-null float64
dtypes: float64(10)
memory usage: 928.0 bytes
In [61]: pd.set_option("max_info_columns", 5)
In [62]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Columns: 10 entries, 0 to 9
dtypes: float64(10)
memory usage: 928.0 bytes
In [63]: pd.reset_option("max_info_columns")
display.max_info_rows: info() will usually show null counts for each column.
For a large DataFrame, this can be quite slow. The max_info_rows and
max_info_columns options limit this null check to frames with smaller
dimensions than specified. The info() keyword argument show_counts=True
(null_counts in older pandas versions) will override this.
In [64]: df = pd.DataFrame(np.random.choice([0, 1, np.nan], size=(10, 10)))
In [65]: df
Out[65]:
0 1 2 3 4 5 6 7 8 9
0 0.0 NaN 1.0 NaN NaN 0.0 NaN 0.0 NaN 1.0
1 1.0 NaN 1.0 1.0 1.0 1.0 NaN 0.0 0.0 NaN
2 0.0 NaN 1.0 0.0 0.0 NaN NaN NaN NaN 0.0
3 NaN NaN NaN 0.0 1.0 1.0 NaN 1.0 NaN 1.0
4 0.0 NaN NaN NaN 0.0 NaN NaN NaN 1.0 0.0
5 0.0 1.0 1.0 1.0 1.0 0.0 NaN NaN 1.0 0.0
6 1.0 1.0 1.0 NaN 1.0 NaN 1.0 0.0 NaN NaN
7 0.0 0.0 1.0 0.0 1.0 0.0 1.0 1.0 0.0 NaN
8 NaN NaN NaN 0.0 NaN NaN NaN NaN 1.0 NaN
9 0.0 NaN 0.0 NaN NaN 0.0 NaN 1.0 1.0 0.0
In [66]: pd.set_option("max_info_rows", 11)
In [67]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 0 8 non-null float64
1 1 3 non-null float64
2 2 7 non-null float64
3 3 6 non-null float64
4 4 7 non-null float64
5 5 6 non-null float64
6 6 2 non-null float64
7 7 6 non-null float64
8 8 6 non-null float64
9 9 6 non-null float64
dtypes: float64(10)
memory usage: 928.0 bytes
In [68]: pd.set_option("max_info_rows", 5)
In [69]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 10 columns):
# Column Dtype
--- ------ -----
0 0 float64
1 1 float64
2 2 float64
3 3 float64
4 4 float64
5 5 float64
6 6 float64
7 7 float64
8 8 float64
9 9 float64
dtypes: float64(10)
memory usage: 928.0 bytes
In [70]: pd.reset_option("max_info_rows")
display.precision sets the output display precision in terms of decimal places.
In [71]: df = pd.DataFrame(np.random.randn(5, 5))
In [72]: pd.set_option("display.precision", 7)
In [73]: df
Out[73]:
0 1 2 3 4
0 -1.1506406 -0.7983341 -0.5576966 0.3813531 1.3371217
1 -1.5310949 1.3314582 -0.5713290 -0.0266708 -1.0856630
2 -1.1147378 -0.0582158 -0.4867681 1.6851483 0.1125723
3 -1.4953086 0.8984347 -0.1482168 -1.5960698 0.1596530
4 0.2621358 0.0362196 0.1847350 -0.2550694 -0.2710197
In [74]: pd.set_option("display.precision", 4)
In [75]: df
Out[75]:
0 1 2 3 4
0 -1.1506 -0.7983 -0.5577 0.3814 1.3371
1 -1.5311 1.3315 -0.5713 -0.0267 -1.0857
2 -1.1147 -0.0582 -0.4868 1.6851 0.1126
3 -1.4953 0.8984 -0.1482 -1.5961 0.1597
4 0.2621 0.0362 0.1847 -0.2551 -0.2710
display.chop_threshold sets the threshold below which values are displayed
as zero when showing a Series or DataFrame. This setting does not change the
precision at which the numbers are stored.
In [76]: df = pd.DataFrame(np.random.randn(6, 6))
In [77]: pd.set_option("chop_threshold", 0)
In [78]: df
Out[78]:
0 1 2 3 4 5
0 1.2884 0.2946 -1.1658 0.8470 -0.6856 0.6091
1 -0.3040 0.6256 -0.0593 0.2497 1.1039 -1.0875
2 1.9980 -0.2445 0.1362 0.8863 -1.3507 -0.8863
3 -1.0133 1.9209 -0.3882 -2.3144 0.6655 0.4026
4 0.3996 -1.7660 0.8504 0.3881 0.9923 0.7441
5 -0.7398 -1.0549 -0.1796 0.6396 1.5850 1.9067
In [79]: pd.set_option("chop_threshold", 0.5)
In [80]: df
Out[80]:
0 1 2 3 4 5
0 1.2884 0.0000 -1.1658 0.8470 -0.6856 0.6091
1 0.0000 0.6256 0.0000 0.0000 1.1039 -1.0875
2 1.9980 0.0000 0.0000 0.8863 -1.3507 -0.8863
3 -1.0133 1.9209 0.0000 -2.3144 0.6655 0.0000
4 0.0000 -1.7660 0.8504 0.0000 0.9923 0.7441
5 -0.7398 -1.0549 0.0000 0.6396 1.5850 1.9067
In [81]: pd.reset_option("chop_threshold")
display.colheader_justify controls the justification of the column headers.
The options are 'right' and 'left'.
In [82]: df = pd.DataFrame(
....: np.array([np.random.randn(6), np.random.randint(1, 9, 6) * 0.1, np.zeros(6)]).T,
....: columns=["A", "B", "C"],
....: dtype="float",
....: )
....:
In [83]: pd.set_option("colheader_justify", "right")
In [84]: df
Out[84]:
A B C
0 0.1040 0.1 0.0
1 0.1741 0.5 0.0
2 -0.4395 0.4 0.0
3 -0.7413 0.8 0.0
4 -0.0797 0.4 0.0
5 -0.9229 0.3 0.0
In [85]: pd.set_option("colheader_justify", "left")
In [86]: df
Out[86]:
A B C
0 0.1040 0.1 0.0
1 0.1741 0.5 0.0
2 -0.4395 0.4 0.0
3 -0.7413 0.8 0.0
4 -0.0797 0.4 0.0
5 -0.9229 0.3 0.0
In [87]: pd.reset_option("colheader_justify")
Number formatting#
pandas also allows you to set how numbers are displayed in the console.
This option is not set through the set_option API.
Use the set_eng_float_format function
to alter the floating-point formatting of pandas objects to produce a particular
format.
In [88]: import numpy as np
In [89]: pd.set_eng_float_format(accuracy=3, use_eng_prefix=True)
In [90]: s = pd.Series(np.random.randn(5), index=["a", "b", "c", "d", "e"])
In [91]: s / 1.0e3
Out[91]:
a 303.638u
b -721.084u
c -622.696u
d 648.250u
e -1.945m
dtype: float64
In [92]: s / 1.0e6
Out[92]:
a 303.638n
b -721.084n
c -622.696n
d 648.250n
e -1.945u
dtype: float64
Use round() to specifically control the rounding of an individual DataFrame.
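For instance, a minimal sketch (the frame and values here are illustrative, and the output assumes default float formatting rather than the engineering format enabled above):
>>> df = pd.DataFrame({"x": [1.23456, 2.34567]})
>>> df.round(2)
      x
0  1.23
1  2.35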
Unicode formatting#
Warning
Enabling this option will affect the performance of printing DataFrame and Series (about twice as slow).
Use only when it is actually required.
Some East Asian countries use Unicode characters whose width corresponds to two Latin characters.
If a DataFrame or Series contains these characters, the default output mode may not align them properly.
In [93]: df = pd.DataFrame({"国籍": ["UK", "日本"], "名前": ["Alice", "しのぶ"]})
In [94]: df
Out[94]:
国籍 名前
0 UK Alice
1 日本 しのぶ
Enabling display.unicode.east_asian_width allows pandas to check each character’s “East Asian Width” property.
These characters can be aligned properly by setting this option to True. However, this will result in longer render
times than the standard len function.
In [95]: pd.set_option("display.unicode.east_asian_width", True)
In [96]: df
Out[96]:
国籍 名前
0 UK Alice
1 日本 しのぶ
In addition, Unicode characters whose width is “ambiguous” can either be 1 or 2 characters wide depending on the
terminal setting or encoding. The option display.unicode.ambiguous_as_wide can be used to handle the ambiguity.
By default, an “ambiguous” character’s width, such as “¡” (inverted exclamation) in the example below, is taken to be 1.
In [97]: df = pd.DataFrame({"a": ["xxx", "¡¡"], "b": ["yyy", "¡¡"]})
In [98]: df
Out[98]:
a b
0 xxx yyy
1 ¡¡ ¡¡
Enabling display.unicode.ambiguous_as_wide makes pandas interpret these characters’ widths to be 2.
(Note that this option will only be effective when display.unicode.east_asian_width is enabled.)
However, setting this option incorrectly for your terminal will cause these characters to be aligned incorrectly:
In [99]: pd.set_option("display.unicode.ambiguous_as_wide", True)
In [100]: df
Out[100]:
a b
0 xxx yyy
1 ¡¡ ¡¡
Table schema display#
DataFrame and Series will publish a Table Schema representation
by default. This can be enabled globally with the
display.html.table_schema option:
In [101]: pd.set_option("display.html.table_schema", True)
Only 'display.max_rows' is serialized and published.
|
user_guide/options.html
|
pandas.tseries.offsets.LastWeekOfMonth.is_quarter_start
|
`pandas.tseries.offsets.LastWeekOfMonth.is_quarter_start`
Return boolean whether a timestamp occurs on the quarter start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
```
|
LastWeekOfMonth.is_quarter_start()#
Return boolean whether a timestamp occurs on the quarter start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
|
reference/api/pandas.tseries.offsets.LastWeekOfMonth.is_quarter_start.html
|
pandas.core.groupby.GroupBy.__iter__
|
`pandas.core.groupby.GroupBy.__iter__`
Groupby iterator.
|
GroupBy.__iter__()[source]#
Groupby iterator.
Returns
Generator yielding a sequence of (name, subsetted object) for each group.
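A short illustrative sketch (the frame here is invented, since this entry ships no example):
>>> df = pd.DataFrame({"key": ["a", "b", "a"], "val": [1, 2, 3]})
>>> for name, group in df.groupby("key"):
...     print(name, len(group))
a 2
b 1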
|
reference/api/pandas.core.groupby.GroupBy.__iter__.html
|
pandas.tseries.offsets.Hour.apply
|
pandas.tseries.offsets.Hour.apply
|
Hour.apply()#
|
reference/api/pandas.tseries.offsets.Hour.apply.html
|
pandas.tseries.offsets.BusinessDay.apply
|
pandas.tseries.offsets.BusinessDay.apply
|
BusinessDay.apply()#
|
reference/api/pandas.tseries.offsets.BusinessDay.apply.html
|
pandas.core.groupby.GroupBy.max
|
`pandas.core.groupby.GroupBy.max`
Compute max of group values.
|
final GroupBy.max(numeric_only=False, min_count=-1, engine=None, engine_kwargs=None)[source]#
Compute max of group values.
Parameters
numeric_only : bool, default False
    Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data.
min_count : int, default -1
    The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA.
Returns
Series or DataFrame
    Computed max of values within each group.
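A short illustrative example (the data is invented, since this entry ships no Examples section):
>>> df = pd.DataFrame({"key": ["a", "a", "b"], "val": [1, 3, 2]})
>>> df.groupby("key").max()
     val
key
a      3
b      2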
|
reference/api/pandas.core.groupby.GroupBy.max.html
|
pandas.Series.plot.box
|
`pandas.Series.plot.box`
Make a box plot of the DataFrame columns.
```
>>> data = np.random.randn(25, 4)
>>> df = pd.DataFrame(data, columns=list('ABCD'))
>>> ax = df.plot.box()
```
|
Series.plot.box(by=None, **kwargs)[source]#
Make a box plot of the DataFrame columns.
A box plot is a method for graphically depicting groups of numerical
data through their quartiles.
The box extends from the Q1 to Q3 quartile values of the data,
with a line at the median (Q2). The whiskers extend from the edges
of the box to show the range of the data. The position of the whiskers
is set by default to 1.5*IQR (IQR = Q3 - Q1) from the edges of the
box. Outlier points are those past the end of the whiskers.
For further details see Wikipedia’s
entry for boxplot.
A consideration when using this chart is that the box and the whiskers
can overlap, which is very common when plotting small sets of data.
Parameters
by : str or sequence
    Column in the DataFrame to group by.
    Changed in version 1.4.0: Previously, by was silently ignored and made no groupings.
**kwargs
    Additional keywords are documented in DataFrame.plot().
Returns
matplotlib.axes.Axes or numpy.ndarray of them
See also
DataFrame.boxplot
    Another method to draw a box plot.
Series.plot.box
    Draw a box plot from a Series object.
matplotlib.pyplot.boxplot
    Draw a box plot in matplotlib.
Examples
Draw a box plot from a DataFrame with four columns of randomly
generated data.
>>> data = np.random.randn(25, 4)
>>> df = pd.DataFrame(data, columns=list('ABCD'))
>>> ax = df.plot.box()
You can also generate groupings if you specify the by parameter (which
can take a column name, or a list or tuple of column names):
Changed in version 1.4.0.
>>> age_list = [8, 10, 12, 14, 72, 74, 76, 78, 20, 25, 30, 35, 60, 85]
>>> df = pd.DataFrame({"gender": list("MMMMMMMMFFFFFF"), "age": age_list})
>>> ax = df.plot.box(column="age", by="gender", figsize=(10, 8))
|
reference/api/pandas.Series.plot.box.html
|
How to create new columns derived from existing columns?
|
How to create new columns derived from existing columns?
|
Air quality data
For this tutorial, air quality data about NO2 is used, made
available by OpenAQ and using the
py-openaq package.
The air_quality_no2.csv data set provides NO2 values for
the measurement stations FR04014, BETR801 and London Westminster
in Paris, Antwerp and London respectively.
In [2]: air_quality = pd.read_csv("data/air_quality_no2.csv", index_col=0, parse_dates=True)
In [3]: air_quality.head()
Out[3]:
station_antwerp station_paris station_london
datetime
2019-05-07 02:00:00 NaN NaN 23.0
2019-05-07 03:00:00 50.5 25.0 19.0
2019-05-07 04:00:00 45.0 27.7 19.0
2019-05-07 05:00:00 NaN 50.4 16.0
2019-05-07 06:00:00 NaN 61.9 NaN
How to create new columns derived from existing columns?#
I want to express the NO2 concentration of the station in London in mg/m³.
(If we assume a temperature of 25 degrees Celsius and a pressure of 1013
hPa, the conversion factor is 1.882.)
In [4]: air_quality["london_mg_per_cubic"] = air_quality["station_london"] * 1.882
In [5]: air_quality.head()
Out[5]:
station_antwerp ... london_mg_per_cubic
datetime ...
2019-05-07 02:00:00 NaN ... 43.286
2019-05-07 03:00:00 50.5 ... 35.758
2019-05-07 04:00:00 45.0 ... 35.758
2019-05-07 05:00:00 NaN ... 30.112
2019-05-07 06:00:00 NaN ... NaN
[5 rows x 4 columns]
To create a new column, use the [] brackets with the new column name
at the left side of the assignment.
Note
The calculation of the values is done element-wise. This
means all values in the given column are multiplied by the value 1.882
at once. You do not need to use a loop to iterate over each of the rows!
I want to check the ratio of the values in Paris versus Antwerp and save the result in a new column.
In [6]: air_quality["ratio_paris_antwerp"] = (
...: air_quality["station_paris"] / air_quality["station_antwerp"]
...: )
...:
In [7]: air_quality.head()
Out[7]:
station_antwerp ... ratio_paris_antwerp
datetime ...
2019-05-07 02:00:00 NaN ... NaN
2019-05-07 03:00:00 50.5 ... 0.495050
2019-05-07 04:00:00 45.0 ... 0.615556
2019-05-07 05:00:00 NaN ... NaN
2019-05-07 06:00:00 NaN ... NaN
[5 rows x 5 columns]
The calculation is again element-wise, so the / is applied for the
values in each row.
Also other mathematical operators (+, -, *, /,…) or
logical operators (<, >, ==,…) work element-wise. The latter was already
used in the subset data tutorial to filter
rows of a table using a conditional expression.
If you need more advanced logic, you can use arbitrary Python code via apply().
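For example, a minimal sketch using apply() with a custom function (the threshold and the new column name below are invented for illustration; note that NaN compares as False here, so missing values end up labeled "low"):
>>> air_quality["london_category"] = air_quality["station_london"].apply(
...     lambda x: "high" if x > 20 else "low"
... )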
I want to rename the data columns to the corresponding station identifiers used by OpenAQ.
In [8]: air_quality_renamed = air_quality.rename(
...: columns={
...: "station_antwerp": "BETR801",
...: "station_paris": "FR04014",
...: "station_london": "London Westminster",
...: }
...: )
...:
In [9]: air_quality_renamed.head()
Out[9]:
BETR801 FR04014 ... london_mg_per_cubic ratio_paris_antwerp
datetime ...
2019-05-07 02:00:00 NaN NaN ... 43.286 NaN
2019-05-07 03:00:00 50.5 25.0 ... 35.758 0.495050
2019-05-07 04:00:00 45.0 27.7 ... 35.758 0.615556
2019-05-07 05:00:00 NaN 50.4 ... 30.112 NaN
2019-05-07 06:00:00 NaN 61.9 ... NaN NaN
[5 rows x 5 columns]
The rename() function can be used for both row labels and column
labels. Provide a dictionary whose keys are the current names and whose
values are the new names to update the corresponding names.
The mapping need not be restricted to fixed names only: it can also be a
mapping function. For example, converting the column names to
lowercase letters can be done using a function as well:
In [10]: air_quality_renamed = air_quality_renamed.rename(columns=str.lower)
In [11]: air_quality_renamed.head()
Out[11]:
betr801 fr04014 ... london_mg_per_cubic ratio_paris_antwerp
datetime ...
2019-05-07 02:00:00 NaN NaN ... 43.286 NaN
2019-05-07 03:00:00 50.5 25.0 ... 35.758 0.495050
2019-05-07 04:00:00 45.0 27.7 ... 35.758 0.615556
2019-05-07 05:00:00 NaN 50.4 ... 30.112 NaN
2019-05-07 06:00:00 NaN 61.9 ... NaN NaN
[5 rows x 5 columns]
To user guide: Details about column or row label renaming are provided in the user guide section on renaming labels.
REMEMBER
Create a new column by assigning the output to the DataFrame with a
new column name in between the [].
Operations are element-wise, no need to loop over rows.
Use rename with a dictionary or function to rename row labels or
column names.
To user guide: The user guide contains a separate section on column addition and deletion.
|
getting_started/intro_tutorials/05_add_columns.html
|
pandas.core.window.expanding.Expanding.sem
|
`pandas.core.window.expanding.Expanding.sem`
Calculate the expanding standard error of mean.
Delta Degrees of Freedom. The divisor used in calculations
is N - ddof, where N represents the number of elements.
```
>>> s = pd.Series([0, 1, 2, 3])
```
|
Expanding.sem(ddof=1, numeric_only=False, *args, **kwargs)[source]#
Calculate the expanding standard error of mean.
Parameters
ddof : int, default 1
    Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements.
numeric_only : bool, default False
    Include only float, int, boolean columns.
    New in version 1.5.0.
*args
    For NumPy compatibility and will not have an effect on the result.
    Deprecated since version 1.5.0.
**kwargs
    For NumPy compatibility and will not have an effect on the result.
    Deprecated since version 1.5.0.
Returns
Series or DataFrame
    Return type is the same as the original object with np.float64 dtype.
See also
pandas.Series.expanding
    Calling expanding with Series data.
pandas.DataFrame.expanding
    Calling expanding with DataFrames.
pandas.Series.sem
    Aggregating sem for Series.
pandas.DataFrame.sem
    Aggregating sem for DataFrame.
Notes
A minimum of one period is required for the calculation.
Examples
>>> s = pd.Series([0, 1, 2, 3])
>>> s.expanding().sem()
0 NaN
1 0.707107
2 0.707107
3 0.745356
dtype: float64
|
reference/api/pandas.core.window.expanding.Expanding.sem.html
|
pandas.core.window.rolling.Rolling.kurt
|
`pandas.core.window.rolling.Rolling.kurt`
Calculate the rolling Fisher’s definition of kurtosis without bias.
```
>>> arr = [1, 2, 3, 4, 999]
>>> import scipy.stats
>>> print(f"{scipy.stats.kurtosis(arr[:-1], bias=False):.6f}")
-1.200000
>>> print(f"{scipy.stats.kurtosis(arr[1:], bias=False):.6f}")
3.999946
>>> s = pd.Series(arr)
>>> s.rolling(4).kurt()
0 NaN
1 NaN
2 NaN
3 -1.200000
4 3.999946
dtype: float64
```
|
Rolling.kurt(numeric_only=False, **kwargs)[source]#
Calculate the rolling Fisher’s definition of kurtosis without bias.
Parameters
numeric_only : bool, default False
    Include only float, int, boolean columns.
    New in version 1.5.0.
**kwargs
    For NumPy compatibility and will not have an effect on the result.
    Deprecated since version 1.5.0.
Returns
Series or DataFrame
    Return type is the same as the original object with np.float64 dtype.
See also
scipy.stats.kurtosis
    Reference SciPy method.
pandas.Series.rolling
    Calling rolling with Series data.
pandas.DataFrame.rolling
    Calling rolling with DataFrames.
pandas.Series.kurt
    Aggregating kurt for Series.
pandas.DataFrame.kurt
    Aggregating kurt for DataFrame.
Notes
A minimum of four periods is required for the calculation.
Examples
The example below will show a rolling calculation with a window size of
four matching the equivalent function call using scipy.stats.
>>> arr = [1, 2, 3, 4, 999]
>>> import scipy.stats
>>> print(f"{scipy.stats.kurtosis(arr[:-1], bias=False):.6f}")
-1.200000
>>> print(f"{scipy.stats.kurtosis(arr[1:], bias=False):.6f}")
3.999946
>>> s = pd.Series(arr)
>>> s.rolling(4).kurt()
0 NaN
1 NaN
2 NaN
3 -1.200000
4 3.999946
dtype: float64
|
reference/api/pandas.core.window.rolling.Rolling.kurt.html
|
pandas.tseries.offsets.QuarterBegin.kwds
|
`pandas.tseries.offsets.QuarterBegin.kwds`
Return a dict of extra parameters for the offset.
Examples
```
>>> pd.DateOffset(5).kwds
{}
```
|
QuarterBegin.kwds#
Return a dict of extra parameters for the offset.
Examples
>>> pd.DateOffset(5).kwds
{}
>>> pd.offsets.FY5253Quarter().kwds
{'weekday': 0,
'startingMonth': 1,
'qtr_with_extra_week': 1,
'variation': 'nearest'}
|
reference/api/pandas.tseries.offsets.QuarterBegin.kwds.html
|
pandas.tseries.offsets.BQuarterBegin.is_month_end
|
`pandas.tseries.offsets.BQuarterBegin.is_month_end`
Return boolean whether a timestamp occurs on the month end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
```
|
BQuarterBegin.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
|
reference/api/pandas.tseries.offsets.BQuarterBegin.is_month_end.html
|
pandas.tseries.offsets.BusinessMonthBegin.is_year_start
|
`pandas.tseries.offsets.BusinessMonthBegin.is_year_start`
Return boolean whether a timestamp occurs on the year start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
```
|
BusinessMonthBegin.is_year_start()#
Return boolean whether a timestamp occurs on the year start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
|
reference/api/pandas.tseries.offsets.BusinessMonthBegin.is_year_start.html
|
pandas.api.types.is_interval
|
pandas.api.types.is_interval
|
pandas.api.types.is_interval()#
|
reference/api/pandas.api.types.is_interval.html
|
pandas.Timedelta.min
|
pandas.Timedelta.min
|
Timedelta.min = Timedelta('-106752 days +00:12:43.145224193')#
|
reference/api/pandas.Timedelta.min.html
|
pandas.tseries.offsets.QuarterEnd.is_month_end
|
`pandas.tseries.offsets.QuarterEnd.is_month_end`
Return boolean whether a timestamp occurs on the month end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
```
|
QuarterEnd.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
|
reference/api/pandas.tseries.offsets.QuarterEnd.is_month_end.html
|
pandas.tseries.offsets.Day.is_year_start
|
`pandas.tseries.offsets.Day.is_year_start`
Return boolean whether a timestamp occurs on the year start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
```
|
Day.is_year_start()#
Return boolean whether a timestamp occurs on the year start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
|
reference/api/pandas.tseries.offsets.Day.is_year_start.html
|
pandas.tseries.offsets.Milli.is_anchored
|
`pandas.tseries.offsets.Milli.is_anchored`
Return boolean whether the frequency is a unit frequency (n=1).
```
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
```
|
Milli.is_anchored()#
Return boolean whether the frequency is a unit frequency (n=1).
Examples
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
|
reference/api/pandas.tseries.offsets.Milli.is_anchored.html
|
pandas.DataFrame.asof
|
`pandas.DataFrame.asof`
Return the last row(s) without any NaNs before where.
```
>>> s = pd.Series([1, 2, np.nan, 4], index=[10, 20, 30, 40])
>>> s
10 1.0
20 2.0
30 NaN
40 4.0
dtype: float64
```
|
DataFrame.asof(where, subset=None)[source]#
Return the last row(s) without any NaNs before where.
The last row (for each element in where, if list) without any
NaN is taken.
In the case of a DataFrame, the last row without NaN is taken,
considering only the subset of columns (if not None).
If there is no good value, NaN is returned for a Series, or
a Series of NaN values for a DataFrame.
Parameters
where : date or array-like of dates
    Date(s) before which the last row(s) are returned.
subset : str or array-like of str, default None
    For DataFrame, if not None, only use these columns to check for NaNs.
Returns
scalar, Series, or DataFrame
    The return can be:
    scalar : when self is a Series and where is a scalar
    Series : when self is a Series and where is an array-like, or when self is a DataFrame and where is a scalar
    DataFrame : when self is a DataFrame and where is an array-like
See also
merge_asof
    Perform an asof merge. Similar to a left join.
Notes
Dates are assumed to be sorted. Raises if this is not the case.
Examples
A Series and a scalar where.
>>> s = pd.Series([1, 2, np.nan, 4], index=[10, 20, 30, 40])
>>> s
10 1.0
20 2.0
30 NaN
40 4.0
dtype: float64
>>> s.asof(20)
2.0
For a sequence where, a Series is returned. The first value is
NaN, because the first element of where is before the first
index value.
>>> s.asof([5, 20])
5 NaN
20 2.0
dtype: float64
Missing values are not considered. The following is 2.0, not
NaN, even though NaN is at the index location for 30.
>>> s.asof(30)
2.0
Take all columns into consideration
>>> df = pd.DataFrame({'a': [10, 20, 30, 40, 50],
... 'b': [None, None, None, None, 500]},
... index=pd.DatetimeIndex(['2018-02-27 09:01:00',
... '2018-02-27 09:02:00',
... '2018-02-27 09:03:00',
... '2018-02-27 09:04:00',
... '2018-02-27 09:05:00']))
>>> df.asof(pd.DatetimeIndex(['2018-02-27 09:03:30',
... '2018-02-27 09:04:30']))
a b
2018-02-27 09:03:30 NaN NaN
2018-02-27 09:04:30 NaN NaN
Take a single column into consideration
>>> df.asof(pd.DatetimeIndex(['2018-02-27 09:03:30',
... '2018-02-27 09:04:30']),
... subset=['a'])
a b
2018-02-27 09:03:30 30 NaN
2018-02-27 09:04:30 40 NaN
|
reference/api/pandas.DataFrame.asof.html
|
Series
|
Series
Series([data, index, dtype, name, copy, ...])
One-dimensional ndarray with axis labels (including time series).
Axes
Series.index
The index (axis labels) of the Series.
|
Constructor#
Series([data, index, dtype, name, copy, ...])
One-dimensional ndarray with axis labels (including time series).
Attributes#
Axes
Series.index
The index (axis labels) of the Series.
Series.array
The ExtensionArray of the data backing this Series or Index.
Series.values
Return Series as ndarray or ndarray-like depending on the dtype.
Series.dtype
Return the dtype object of the underlying data.
Series.shape
Return a tuple of the shape of the underlying data.
Series.nbytes
Return the number of bytes in the underlying data.
Series.ndim
Number of dimensions of the underlying data, by definition 1.
Series.size
Return the number of elements in the underlying data.
Series.T
Return the transpose, which is by definition self.
Series.memory_usage([index, deep])
Return the memory usage of the Series.
Series.hasnans
Return True if there are any NaNs.
Series.empty
Indicator whether Series/DataFrame is empty.
Series.dtypes
Return the dtype object of the underlying data.
Series.name
Return the name of the Series.
Series.flags
Get the properties associated with this pandas object.
Series.set_flags(*[, copy, ...])
Return a new object with updated flags.
Conversion#
Series.astype(dtype[, copy, errors])
Cast a pandas object to a specified dtype dtype.
Series.convert_dtypes([infer_objects, ...])
Convert columns to best possible dtypes using dtypes supporting pd.NA.
Series.infer_objects()
Attempt to infer better dtypes for object columns.
Series.copy([deep])
Make a copy of this object's indices and data.
Series.bool()
Return the bool of a single element Series or DataFrame.
Series.to_numpy([dtype, copy, na_value])
A NumPy ndarray representing the values in this Series or Index.
Series.to_period([freq, copy])
Convert Series from DatetimeIndex to PeriodIndex.
Series.to_timestamp([freq, how, copy])
Cast to DatetimeIndex of Timestamps, at beginning of period.
Series.to_list()
Return a list of the values.
Series.__array__([dtype])
Return the values as a NumPy array.
Indexing, iteration#
Series.get(key[, default])
Get item from object for given key (ex: DataFrame column).
Series.at
Access a single value for a row/column label pair.
Series.iat
Access a single value for a row/column pair by integer position.
Series.loc
Access a group of rows and columns by label(s) or a boolean array.
Series.iloc
Purely integer-location based indexing for selection by position.
Series.__iter__()
Return an iterator of the values.
Series.items()
Lazily iterate over (index, value) tuples.
Series.iteritems()
(DEPRECATED) Lazily iterate over (index, value) tuples.
Series.keys()
Return alias for index.
Series.pop(item)
Return item and drops from series.
Series.item()
Return the first element of the underlying data as a Python scalar.
Series.xs(key[, axis, level, drop_level])
Return cross-section from the Series/DataFrame.
For more information on .at, .iat, .loc, and
.iloc, see the indexing documentation.
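A minimal sketch contrasting label-based and positional access (the Series here is illustrative):
>>> s = pd.Series([10, 20, 30], index=["a", "b", "c"])
>>> s.loc["b"]   # label-based
20
>>> s.iloc[0]    # position-based
10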
Binary operator functions#
Series.add(other[, level, fill_value, axis])
Return Addition of series and other, element-wise (binary operator add).
Series.sub(other[, level, fill_value, axis])
Return Subtraction of series and other, element-wise (binary operator sub).
Series.mul(other[, level, fill_value, axis])
Return Multiplication of series and other, element-wise (binary operator mul).
Series.div(other[, level, fill_value, axis])
Return Floating division of series and other, element-wise (binary operator truediv).
Series.truediv(other[, level, fill_value, axis])
Return Floating division of series and other, element-wise (binary operator truediv).
Series.floordiv(other[, level, fill_value, axis])
Return Integer division of series and other, element-wise (binary operator floordiv).
Series.mod(other[, level, fill_value, axis])
Return Modulo of series and other, element-wise (binary operator mod).
Series.pow(other[, level, fill_value, axis])
Return Exponential power of series and other, element-wise (binary operator pow).
Series.radd(other[, level, fill_value, axis])
Return Addition of series and other, element-wise (binary operator radd).
Series.rsub(other[, level, fill_value, axis])
Return Subtraction of series and other, element-wise (binary operator rsub).
Series.rmul(other[, level, fill_value, axis])
Return Multiplication of series and other, element-wise (binary operator rmul).
Series.rdiv(other[, level, fill_value, axis])
Return Floating division of series and other, element-wise (binary operator rtruediv).
Series.rtruediv(other[, level, fill_value, axis])
Return Floating division of series and other, element-wise (binary operator rtruediv).
Series.rfloordiv(other[, level, fill_value, ...])
Return Integer division of series and other, element-wise (binary operator rfloordiv).
Series.rmod(other[, level, fill_value, axis])
Return Modulo of series and other, element-wise (binary operator rmod).
Series.rpow(other[, level, fill_value, axis])
Return Exponential power of series and other, element-wise (binary operator rpow).
Series.combine(other, func[, fill_value])
Combine the Series with a Series or scalar according to func.
Series.combine_first(other)
Update null elements with value in the same location in 'other'.
Series.round([decimals])
Round each value in a Series to the given number of decimals.
Series.lt(other[, level, fill_value, axis])
Return Less than of series and other, element-wise (binary operator lt).
Series.gt(other[, level, fill_value, axis])
Return Greater than of series and other, element-wise (binary operator gt).
Series.le(other[, level, fill_value, axis])
Return Less than or equal to of series and other, element-wise (binary operator le).
Series.ge(other[, level, fill_value, axis])
Return Greater than or equal to of series and other, element-wise (binary operator ge).
Series.ne(other[, level, fill_value, axis])
Return Not equal to of series and other, element-wise (binary operator ne).
Series.eq(other[, level, fill_value, axis])
Return Equal to of series and other, element-wise (binary operator eq).
Series.product([axis, skipna, level, ...])
Return the product of the values over the requested axis.
Series.dot(other)
Compute the dot product between the Series and the columns of other.
Function application, GroupBy & window#
Series.apply(func[, convert_dtype, args])
Invoke function on values of Series.
Series.agg([func, axis])
Aggregate using one or more operations over the specified axis.
Series.aggregate([func, axis])
Aggregate using one or more operations over the specified axis.
Series.transform(func[, axis])
Call func on self producing a Series with the same axis shape as self.
Series.map(arg[, na_action])
Map values of Series according to an input mapping or function.
Series.groupby([by, axis, level, as_index, ...])
Group Series using a mapper or by a Series of columns.
Series.rolling(window[, min_periods, ...])
Provide rolling window calculations.
Series.expanding([min_periods, center, ...])
Provide expanding window calculations.
Series.ewm([com, span, halflife, alpha, ...])
Provide exponentially weighted (EW) calculations.
Series.pipe(func, *args, **kwargs)
Apply chainable functions that expect Series or DataFrames.
Computations / descriptive stats#
Series.abs()
Return a Series/DataFrame with absolute numeric value of each element.
Series.all([axis, bool_only, skipna, level])
Return whether all elements are True, potentially over an axis.
Series.any(*[, axis, bool_only, skipna, level])
Return whether any element is True, potentially over an axis.
Series.autocorr([lag])
Compute the lag-N autocorrelation.
Series.between(left, right[, inclusive])
Return boolean Series equivalent to left <= series <= right.
Series.clip([lower, upper, axis, inplace])
Trim values at input threshold(s).
Series.corr(other[, method, min_periods])
Compute correlation with other Series, excluding missing values.
Series.count([level])
Return number of non-NA/null observations in the Series.
Series.cov(other[, min_periods, ddof])
Compute covariance with Series, excluding missing values.
Series.cummax([axis, skipna])
Return cumulative maximum over a DataFrame or Series axis.
Series.cummin([axis, skipna])
Return cumulative minimum over a DataFrame or Series axis.
Series.cumprod([axis, skipna])
Return cumulative product over a DataFrame or Series axis.
Series.cumsum([axis, skipna])
Return cumulative sum over a DataFrame or Series axis.
Series.describe([percentiles, include, ...])
Generate descriptive statistics.
Series.diff([periods])
First discrete difference of element.
Series.factorize([sort, na_sentinel, ...])
Encode the object as an enumerated type or categorical variable.
Series.kurt([axis, skipna, level, numeric_only])
Return unbiased kurtosis over requested axis.
Series.mad([axis, skipna, level])
(DEPRECATED) Return the mean absolute deviation of the values over the requested axis.
Series.max([axis, skipna, level, numeric_only])
Return the maximum of the values over the requested axis.
Series.mean([axis, skipna, level, numeric_only])
Return the mean of the values over the requested axis.
Series.median([axis, skipna, level, ...])
Return the median of the values over the requested axis.
Series.min([axis, skipna, level, numeric_only])
Return the minimum of the values over the requested axis.
Series.mode([dropna])
Return the mode(s) of the Series.
Series.nlargest([n, keep])
Return the largest n elements.
Series.nsmallest([n, keep])
Return the smallest n elements.
Series.pct_change([periods, fill_method, ...])
Percentage change between the current and a prior element.
Series.prod([axis, skipna, level, ...])
Return the product of the values over the requested axis.
Series.quantile([q, interpolation])
Return value at the given quantile.
Series.rank([axis, method, numeric_only, ...])
Compute numerical data ranks (1 through n) along axis.
Series.sem([axis, skipna, level, ddof, ...])
Return unbiased standard error of the mean over requested axis.
Series.skew([axis, skipna, level, numeric_only])
Return unbiased skew over requested axis.
Series.std([axis, skipna, level, ddof, ...])
Return sample standard deviation over requested axis.
Series.sum([axis, skipna, level, ...])
Return the sum of the values over the requested axis.
Series.var([axis, skipna, level, ddof, ...])
Return unbiased variance over requested axis.
Series.kurtosis([axis, skipna, level, ...])
Return unbiased kurtosis over requested axis.
Series.unique()
Return unique values of Series object.
Series.nunique([dropna])
Return number of unique elements in the object.
Series.is_unique
Return boolean if values in the object are unique.
Series.is_monotonic
(DEPRECATED) Return boolean if values in the object are monotonically increasing.
Series.is_monotonic_increasing
Return boolean if values in the object are monotonically increasing.
Series.is_monotonic_decreasing
Return boolean if values in the object are monotonically decreasing.
Series.value_counts([normalize, sort, ...])
Return a Series containing counts of unique values.
Reindexing / selection / label manipulation#
Series.align(other[, join, axis, level, ...])
Align two objects on their axes with the specified join method.
Series.drop([labels, axis, index, columns, ...])
Return Series with specified index labels removed.
Series.droplevel(level[, axis])
Return Series/DataFrame with requested index / column level(s) removed.
Series.drop_duplicates(*[, keep, inplace])
Return Series with duplicate values removed.
Series.duplicated([keep])
Indicate duplicate Series values.
Series.equals(other)
Test whether two objects contain the same elements.
Series.first(offset)
Select initial periods of time series data based on a date offset.
Series.head([n])
Return the first n rows.
Series.idxmax([axis, skipna])
Return the row label of the maximum value.
Series.idxmin([axis, skipna])
Return the row label of the minimum value.
Series.isin(values)
Whether elements in Series are contained in values.
Series.last(offset)
Select final periods of time series data based on a date offset.
Series.reindex(*args, **kwargs)
Conform Series to new index with optional filling logic.
Series.reindex_like(other[, method, copy, ...])
Return an object with matching indices as other object.
Series.rename([index, axis, copy, inplace, ...])
Alter Series index labels or name.
Series.rename_axis([mapper, inplace])
Set the name of the axis for the index or columns.
Series.reset_index([level, drop, name, ...])
Generate a new DataFrame or Series with the index reset.
Series.sample([n, frac, replace, weights, ...])
Return a random sample of items from an axis of object.
Series.set_axis(labels, *[, axis, inplace, copy])
Assign desired index to given axis.
Series.take(indices[, axis, is_copy])
Return the elements in the given positional indices along an axis.
Series.tail([n])
Return the last n rows.
Series.truncate([before, after, axis, copy])
Truncate a Series or DataFrame before and after some index value.
Series.where(cond[, other, inplace, axis, ...])
Replace values where the condition is False.
Series.mask(cond[, other, inplace, axis, ...])
Replace values where the condition is True.
Series.add_prefix(prefix)
Prefix labels with string prefix.
Series.add_suffix(suffix)
Suffix labels with string suffix.
Series.filter([items, like, regex, axis])
Subset the dataframe rows or columns according to the specified index labels.
Missing data handling#
Series.backfill(*[, axis, inplace, limit, ...])
Synonym for DataFrame.fillna() with method='bfill'.
Series.bfill(*[, axis, inplace, limit, downcast])
Synonym for DataFrame.fillna() with method='bfill'.
Series.dropna(*[, axis, inplace, how])
Return a new Series with missing values removed.
Series.ffill(*[, axis, inplace, limit, downcast])
Synonym for DataFrame.fillna() with method='ffill'.
Series.fillna([value, method, axis, ...])
Fill NA/NaN values using the specified method.
Series.interpolate([method, axis, limit, ...])
Fill NaN values using an interpolation method.
Series.isna()
Detect missing values.
Series.isnull()
Series.isnull is an alias for Series.isna.
Series.notna()
Detect existing (non-missing) values.
Series.notnull()
Series.notnull is an alias for Series.notna.
Series.pad(*[, axis, inplace, limit, downcast])
Synonym for DataFrame.fillna() with method='ffill'.
Series.replace([to_replace, value, inplace, ...])
Replace values given in to_replace with value.
Reshaping, sorting#
Series.argsort([axis, kind, order])
Return the integer indices that would sort the Series values.
Series.argmin([axis, skipna])
Return int position of the smallest value in the Series.
Series.argmax([axis, skipna])
Return int position of the largest value in the Series.
Series.reorder_levels(order)
Rearrange index levels using input order.
Series.sort_values(*[, axis, ascending, ...])
Sort by the values.
Series.sort_index(*[, axis, level, ...])
Sort Series by index labels.
Series.swaplevel([i, j, copy])
Swap levels i and j in a MultiIndex.
Series.unstack([level, fill_value])
Unstack, also known as pivot, Series with MultiIndex to produce DataFrame.
Series.explode([ignore_index])
Transform each element of a list-like to a row.
Series.searchsorted(value[, side, sorter])
Find indices where elements should be inserted to maintain order.
Series.ravel([order])
Return the flattened underlying data as an ndarray.
Series.repeat(repeats[, axis])
Repeat elements of a Series.
Series.squeeze([axis])
Squeeze 1 dimensional axis objects into scalars.
Series.view([dtype])
Create a new view of the Series.
Combining / comparing / joining / merging#
Series.append(to_append[, ignore_index, ...])
(DEPRECATED) Concatenate two or more Series.
Series.compare(other[, align_axis, ...])
Compare to another Series and show the differences.
Series.update(other)
Modify Series in place using values from passed Series.
Time Series-related#
Series.asfreq(freq[, method, how, ...])
Convert time series to specified frequency.
Series.asof(where[, subset])
Return the last row(s) without any NaNs before where.
Series.shift([periods, freq, axis, fill_value])
Shift index by desired number of periods with an optional time freq.
Series.first_valid_index()
Return index for first non-NA value or None, if no non-NA value is found.
Series.last_valid_index()
Return index for last non-NA value or None, if no non-NA value is found.
Series.resample(rule[, axis, closed, label, ...])
Resample time-series data.
Series.tz_convert(tz[, axis, level, copy])
Convert tz-aware axis to target time zone.
Series.tz_localize(tz[, axis, level, copy, ...])
Localize tz-naive index of a Series or DataFrame to target time zone.
Series.at_time(time[, asof, axis])
Select values at particular time of day (e.g., 9:30AM).
Series.between_time(start_time, end_time[, ...])
Select values between particular times of the day (e.g., 9:00-9:30 AM).
Series.tshift([periods, freq, axis])
(DEPRECATED) Shift the time index, using the index's frequency if available.
Series.slice_shift([periods, axis])
(DEPRECATED) Equivalent to shift without copying data.
Accessors#
pandas provides dtype-specific methods under various accessors.
These are separate namespaces within Series that only apply
to specific data types.
Data Type
Accessor
Datetime, Timedelta, Period
dt
String
str
Categorical
cat
Sparse
sparse
Datetimelike properties#
Series.dt can be used to access the values of the series as
datetimelike and return several properties.
These can be accessed like Series.dt.<property>.
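A short illustrative sketch of the dt accessor (the dates are invented):
>>> s = pd.Series(pd.date_range("2022-01-01", periods=3))
>>> s.dt.day
0    1
1    2
2    3
dtype: int64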
Datetime properties#
Series.dt.date
Returns numpy array of python datetime.date objects.
Series.dt.time
Returns numpy array of datetime.time objects.
Series.dt.timetz
Returns numpy array of datetime.time objects with timezones.
Series.dt.year
The year of the datetime.
Series.dt.month
The month as January=1, December=12.
Series.dt.day
The day of the datetime.
Series.dt.hour
The hours of the datetime.
Series.dt.minute
The minutes of the datetime.
Series.dt.second
The seconds of the datetime.
Series.dt.microsecond
The microseconds of the datetime.
Series.dt.nanosecond
The nanoseconds of the datetime.
Series.dt.week
(DEPRECATED) The week ordinal of the year according to the ISO 8601 standard.
Series.dt.weekofyear
(DEPRECATED) The week ordinal of the year according to the ISO 8601 standard.
Series.dt.dayofweek
The day of the week with Monday=0, Sunday=6.
Series.dt.day_of_week
The day of the week with Monday=0, Sunday=6.
Series.dt.weekday
The day of the week with Monday=0, Sunday=6.
Series.dt.dayofyear
The ordinal day of the year.
Series.dt.day_of_year
The ordinal day of the year.
Series.dt.quarter
The quarter of the date.
Series.dt.is_month_start
Indicates whether the date is the first day of the month.
Series.dt.is_month_end
Indicates whether the date is the last day of the month.
Series.dt.is_quarter_start
Indicator for whether the date is the first day of a quarter.
Series.dt.is_quarter_end
Indicator for whether the date is the last day of a quarter.
Series.dt.is_year_start
Indicate whether the date is the first day of a year.
Series.dt.is_year_end
Indicate whether the date is the last day of the year.
Series.dt.is_leap_year
Boolean indicator if the date belongs to a leap year.
Series.dt.daysinmonth
The number of days in the month.
Series.dt.days_in_month
The number of days in the month.
Series.dt.tz
Return the timezone.
Series.dt.freq
Return the frequency object for this PeriodArray.
Datetime methods#
Series.dt.isocalendar()
Calculate year, week, and day according to the ISO 8601 standard.
Series.dt.to_period(*args, **kwargs)
Cast to PeriodArray/Index at a particular frequency.
Series.dt.to_pydatetime()
Return the data as an array of datetime.datetime objects.
Series.dt.tz_localize(*args, **kwargs)
Localize tz-naive Datetime Array/Index to tz-aware Datetime Array/Index.
Series.dt.tz_convert(*args, **kwargs)
Convert tz-aware Datetime Array/Index from one time zone to another.
Series.dt.normalize(*args, **kwargs)
Convert times to midnight.
Series.dt.strftime(*args, **kwargs)
Convert to Index using specified date_format.
Series.dt.round(*args, **kwargs)
Perform round operation on the data to the specified freq.
Series.dt.floor(*args, **kwargs)
Perform floor operation on the data to the specified freq.
Series.dt.ceil(*args, **kwargs)
Perform ceil operation on the data to the specified freq.
Series.dt.month_name(*args, **kwargs)
Return the month names with specified locale.
Series.dt.day_name(*args, **kwargs)
Return the day names with specified locale.
Period properties#
Series.dt.qyear
Series.dt.start_time
Get the Timestamp for the start of the period.
Series.dt.end_time
Get the Timestamp for the end of the period.
Timedelta properties#
Series.dt.days
Number of days for each element.
Series.dt.seconds
Number of seconds (>= 0 and less than 1 day) for each element.
Series.dt.microseconds
Number of microseconds (>= 0 and less than 1 second) for each element.
Series.dt.nanoseconds
Number of nanoseconds (>= 0 and less than 1 microsecond) for each element.
Series.dt.components
Return a Dataframe of the components of the Timedeltas.
Timedelta methods#
Series.dt.to_pytimedelta()
Return an array of native datetime.timedelta objects.
Series.dt.total_seconds(*args, **kwargs)
Return total duration of each element expressed in seconds.
String handling#
Series.str can be used to access the values of the series as
strings and apply several methods to it. These can be accessed like
Series.str.<function/property>.
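A short illustrative sketch of the str accessor:
>>> s = pd.Series(["pandas", "python"])
>>> s.str.upper()
0    PANDAS
1    PYTHON
dtype: object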
Series.str.capitalize()
Convert strings in the Series/Index to be capitalized.
Series.str.casefold()
Convert strings in the Series/Index to be casefolded.
Series.str.cat([others, sep, na_rep, join])
Concatenate strings in the Series/Index with given separator.
Series.str.center(width[, fillchar])
Pad left and right side of strings in the Series/Index.
Series.str.contains(pat[, case, flags, na, ...])
Test if pattern or regex is contained within a string of a Series or Index.
Series.str.count(pat[, flags])
Count occurrences of pattern in each string of the Series/Index.
Series.str.decode(encoding[, errors])
Decode character string in the Series/Index using indicated encoding.
Series.str.encode(encoding[, errors])
Encode character string in the Series/Index using indicated encoding.
Series.str.endswith(pat[, na])
Test if the end of each string element matches a pattern.
Series.str.extract(pat[, flags, expand])
Extract capture groups in the regex pat as columns in a DataFrame.
Series.str.extractall(pat[, flags])
Extract capture groups in the regex pat as columns in DataFrame.
Series.str.find(sub[, start, end])
Return lowest indexes in each strings in the Series/Index.
Series.str.findall(pat[, flags])
Find all occurrences of pattern or regular expression in the Series/Index.
Series.str.fullmatch(pat[, case, flags, na])
Determine if each string entirely matches a regular expression.
Series.str.get(i)
Extract element from each component at specified position or with specified key.
Series.str.index(sub[, start, end])
Return lowest indexes in each string in Series/Index.
Series.str.join(sep)
Join lists contained as elements in the Series/Index with passed delimiter.
Series.str.len()
Compute the length of each element in the Series/Index.
Series.str.ljust(width[, fillchar])
Pad right side of strings in the Series/Index.
Series.str.lower()
Convert strings in the Series/Index to lowercase.
Series.str.lstrip([to_strip])
Remove leading characters.
Series.str.match(pat[, case, flags, na])
Determine if each string starts with a match of a regular expression.
Series.str.normalize(form)
Return the Unicode normal form for the strings in the Series/Index.
Series.str.pad(width[, side, fillchar])
Pad strings in the Series/Index up to width.
Series.str.partition([sep, expand])
Split the string at the first occurrence of sep.
Series.str.removeprefix(prefix)
Remove a prefix from an object series.
Series.str.removesuffix(suffix)
Remove a suffix from an object series.
Series.str.repeat(repeats)
Duplicate each string in the Series or Index.
Series.str.replace(pat, repl[, n, case, ...])
Replace each occurrence of pattern/regex in the Series/Index.
Series.str.rfind(sub[, start, end])
Return highest indexes in each strings in the Series/Index.
Series.str.rindex(sub[, start, end])
Return highest indexes in each string in Series/Index.
Series.str.rjust(width[, fillchar])
Pad left side of strings in the Series/Index.
Series.str.rpartition([sep, expand])
Split the string at the last occurrence of sep.
Series.str.rstrip([to_strip])
Remove trailing characters.
Series.str.slice([start, stop, step])
Slice substrings from each element in the Series or Index.
Series.str.slice_replace([start, stop, repl])
Replace a positional slice of a string with another value.
Series.str.split([pat, n, expand, regex])
Split strings around given separator/delimiter.
Series.str.rsplit([pat, n, expand])
Split strings around given separator/delimiter.
Series.str.startswith(pat[, na])
Test if the start of each string element matches a pattern.
Series.str.strip([to_strip])
Remove leading and trailing characters.
Series.str.swapcase()
Convert strings in the Series/Index to be swapcased.
Series.str.title()
Convert strings in the Series/Index to titlecase.
Series.str.translate(table)
Map all characters in the string through the given mapping table.
Series.str.upper()
Convert strings in the Series/Index to uppercase.
Series.str.wrap(width, **kwargs)
Wrap strings in Series/Index at specified line width.
Series.str.zfill(width)
Pad strings in the Series/Index by prepending '0' characters.
Series.str.isalnum()
Check whether all characters in each string are alphanumeric.
Series.str.isalpha()
Check whether all characters in each string are alphabetic.
Series.str.isdigit()
Check whether all characters in each string are digits.
Series.str.isspace()
Check whether all characters in each string are whitespace.
Series.str.islower()
Check whether all characters in each string are lowercase.
Series.str.isupper()
Check whether all characters in each string are uppercase.
Series.str.istitle()
Check whether all characters in each string are titlecase.
Series.str.isnumeric()
Check whether all characters in each string are numeric.
Series.str.isdecimal()
Check whether all characters in each string are decimal.
Series.str.get_dummies([sep])
Return DataFrame of dummy/indicator variables for Series.
Categorical accessor#
Categorical-dtype specific methods and attributes are available under
the Series.cat accessor.
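A short illustrative sketch of the cat accessor:
>>> s = pd.Series(["a", "b", "a"], dtype="category")
>>> s.cat.categories
Index(['a', 'b'], dtype='object')
>>> s.cat.codes
0    0
1    1
2    0
dtype: int8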
Series.cat.categories
The categories of this categorical.
Series.cat.ordered
Whether the categories have an ordered relationship.
Series.cat.codes
Return Series of codes as well as the index.
Series.cat.rename_categories(*args, **kwargs)
Rename categories.
Series.cat.reorder_categories(*args, **kwargs)
Reorder categories as specified in new_categories.
Series.cat.add_categories(*args, **kwargs)
Add new categories.
Series.cat.remove_categories(*args, **kwargs)
Remove the specified categories.
Series.cat.remove_unused_categories(*args, ...)
Remove categories which are not used.
Series.cat.set_categories(*args, **kwargs)
Set the categories to the specified new_categories.
Series.cat.as_ordered(*args, **kwargs)
Set the Categorical to be ordered.
Series.cat.as_unordered(*args, **kwargs)
Set the Categorical to be unordered.
Sparse accessor#
Sparse-dtype specific methods and attributes are provided under the
Series.sparse accessor.
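A short illustrative sketch of the sparse accessor (the fill value defaults to 0 for this dtype):
>>> s = pd.Series([0, 0, 1, 0], dtype="Sparse[int]")
>>> s.sparse.npoints
1
>>> s.sparse.density
0.25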
Series.sparse.npoints
The number of non- fill_value points.
Series.sparse.density
The percent of non- fill_value points, as decimal.
Series.sparse.fill_value
Elements in data that are fill_value are not stored.
Series.sparse.sp_values
An ndarray containing the non- fill_value values.
Series.sparse.from_coo(A[, dense_index])
Create a Series with sparse values from a scipy.sparse.coo_matrix.
Series.sparse.to_coo([row_levels, ...])
Create a scipy.sparse.coo_matrix from a Series with MultiIndex.
Flags#
Flags refer to attributes of the pandas object. Properties of the dataset (like
the date it was recorded, the URL it was accessed from, etc.) should be stored
in Series.attrs.
Flags(obj, *, allows_duplicate_labels)
Flags that apply to pandas objects.
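A short illustrative sketch of reading and updating flags:
>>> s = pd.Series([1, 2])
>>> s.flags.allows_duplicate_labels
True
>>> s = s.set_flags(allows_duplicate_labels=False)
>>> s.flags.allows_duplicate_labels
False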
Metadata#
Series.attrs is a dictionary for storing global metadata for this Series.
Warning
Series.attrs is considered experimental and may change without warning.
Series.attrs
Dictionary of global attributes of this dataset.
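A short illustrative sketch (the metadata key below is invented):
>>> s = pd.Series([1, 2, 3])
>>> s.attrs["source"] = "sensor_42"  # hypothetical provenance tag
>>> s.attrs
{'source': 'sensor_42'}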
Plotting#
Series.plot is both a callable method and a namespace attribute for
specific plotting methods of the form Series.plot.<kind>.
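A minimal sketch of both calling conventions (requires matplotlib; the data is illustrative):
>>> s = pd.Series([1, 3, 2])
>>> ax = s.plot(kind="line")   # callable method
>>> ax = s.plot.line()         # namespace attribute, equivalent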
Series.plot([kind, ax, figsize, ....])
Series plotting accessor and method
Series.plot.area([x, y])
Draw a stacked area plot.
Series.plot.bar([x, y])
Vertical bar plot.
Series.plot.barh([x, y])
Make a horizontal bar plot.
Series.plot.box([by])
Make a box plot of the DataFrame columns.
Series.plot.density([bw_method, ind])
Generate Kernel Density Estimate plot using Gaussian kernels.
Series.plot.hist([by, bins])
Draw one histogram of the DataFrame's columns.
Series.plot.kde([bw_method, ind])
Generate Kernel Density Estimate plot using Gaussian kernels.
Series.plot.line([x, y])
Plot Series or DataFrame as lines.
Series.plot.pie(**kwargs)
Generate a pie plot.
Series.hist([by, ax, grid, xlabelsize, ...])
Draw histogram of the input series using matplotlib.
Serialization / IO / conversion#
Series.to_pickle(path[, compression, ...])
Pickle (serialize) object to file.
Series.to_csv([path_or_buf, sep, na_rep, ...])
Write object to a comma-separated values (csv) file.
Series.to_dict([into])
Convert Series to {label -> value} dict or dict-like object.
Series.to_excel(excel_writer[, sheet_name, ...])
Write object to an Excel sheet.
Series.to_frame([name])
Convert Series to DataFrame.
Series.to_xarray()
Return an xarray object from the pandas object.
Series.to_hdf(path_or_buf, key[, mode, ...])
Write the contained data to an HDF5 file using HDFStore.
Series.to_sql(name, con[, schema, ...])
Write records stored in a DataFrame to a SQL database.
Series.to_json([path_or_buf, orient, ...])
Convert the object to a JSON string.
Series.to_string([buf, na_rep, ...])
Render a string representation of the Series.
Series.to_clipboard([excel, sep])
Copy object to the system clipboard.
Series.to_latex([buf, columns, col_space, ...])
Render object to a LaTeX tabular, longtable, or nested table.
Series.to_markdown([buf, mode, index, ...])
Print Series in Markdown-friendly format.
|
reference/series.html
|
pandas.tseries.offsets.BYearBegin.normalize
|
pandas.tseries.offsets.BYearBegin.normalize
|
BYearBegin.normalize#
|
reference/api/pandas.tseries.offsets.BYearBegin.normalize.html
|
pandas.tseries.offsets.DateOffset.kwds
|
`pandas.tseries.offsets.DateOffset.kwds`
Return a dict of extra parameters for the offset.
```
>>> pd.DateOffset(5).kwds
{}
```
|
DateOffset.kwds#
Return a dict of extra parameters for the offset.
Examples
>>> pd.DateOffset(5).kwds
{}
>>> pd.offsets.FY5253Quarter().kwds
{'weekday': 0,
'startingMonth': 1,
'qtr_with_extra_week': 1,
'variation': 'nearest'}
|
reference/api/pandas.tseries.offsets.DateOffset.kwds.html
|
pandas.tseries.offsets.YearEnd.month
|
pandas.tseries.offsets.YearEnd.month
|
YearEnd.month#
|
reference/api/pandas.tseries.offsets.YearEnd.month.html
|
pandas.Series.filter
|
`pandas.Series.filter`
Subset the dataframe rows or columns according to the specified index labels.
```
>>> df = pd.DataFrame(np.array(([1, 2, 3], [4, 5, 6])),
... index=['mouse', 'rabbit'],
... columns=['one', 'two', 'three'])
>>> df
one two three
mouse 1 2 3
rabbit 4 5 6
```
|
Series.filter(items=None, like=None, regex=None, axis=None)[source]#
Subset the dataframe rows or columns according to the specified index labels.
Note that this routine does not filter a dataframe on its
contents. The filter is applied to the labels of the index.
Parameters
items : list-like
    Keep labels from axis which are in items.
like : str
    Keep labels from axis for which "like in label == True".
regex : str (regular expression)
    Keep labels from axis for which re.search(regex, label) == True.
axis : {0 or 'index', 1 or 'columns', None}, default None
    The axis to filter on, expressed either as an index (int) or axis name (str). By default this is the info axis, 'columns' for DataFrame. For Series this parameter is unused and defaults to None.
Returns
same type as input object
See also
DataFrame.loc
    Access a group of rows and columns by label(s) or a boolean array.
Notes
The items, like, and regex parameters are
enforced to be mutually exclusive.
axis defaults to the info axis that is used when indexing
with [].
Examples
>>> df = pd.DataFrame(np.array(([1, 2, 3], [4, 5, 6])),
... index=['mouse', 'rabbit'],
... columns=['one', 'two', 'three'])
>>> df
one two three
mouse 1 2 3
rabbit 4 5 6
>>> # select columns by name
>>> df.filter(items=['one', 'three'])
one three
mouse 1 3
rabbit 4 6
>>> # select columns by regular expression
>>> df.filter(regex='e$', axis=1)
one three
mouse 1 3
rabbit 4 6
>>> # select rows containing 'bbi'
>>> df.filter(like='bbi', axis=0)
one two three
rabbit 4 5 6
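The same selectors apply to a Series, where filtering acts on the index labels; a minimal sketch (the values are illustrative):
>>> s = pd.Series([1, 2, 3], index=['one', 'two', 'three'])
>>> s.filter(items=['one', 'three'])
one      1
three    3
dtype: int64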
|
reference/api/pandas.Series.filter.html
|
pandas.DataFrame.to_xarray
|
`pandas.DataFrame.to_xarray`
Return an xarray object from the pandas object.
Data in the pandas structure converted to Dataset if the object is
a DataFrame, or a DataArray if the object is a Series.
```
>>> df = pd.DataFrame([('falcon', 'bird', 389.0, 2),
... ('parrot', 'bird', 24.0, 2),
... ('lion', 'mammal', 80.5, 4),
... ('monkey', 'mammal', np.nan, 4)],
... columns=['name', 'class', 'max_speed',
... 'num_legs'])
>>> df
name class max_speed num_legs
0 falcon bird 389.0 2
1 parrot bird 24.0 2
2 lion mammal 80.5 4
3 monkey mammal NaN 4
```
|
DataFrame.to_xarray()[source]#
Return an xarray object from the pandas object.
Returns
xarray.DataArray or xarray.DatasetData in the pandas structure converted to Dataset if the object is
a DataFrame, or a DataArray if the object is a Series.
See also
DataFrame.to_hdfWrite DataFrame to an HDF5 file.
DataFrame.to_parquetWrite a DataFrame to the binary parquet format.
Notes
See the xarray docs
Examples
>>> df = pd.DataFrame([('falcon', 'bird', 389.0, 2),
... ('parrot', 'bird', 24.0, 2),
... ('lion', 'mammal', 80.5, 4),
... ('monkey', 'mammal', np.nan, 4)],
... columns=['name', 'class', 'max_speed',
... 'num_legs'])
>>> df
name class max_speed num_legs
0 falcon bird 389.0 2
1 parrot bird 24.0 2
2 lion mammal 80.5 4
3 monkey mammal NaN 4
>>> df.to_xarray()
<xarray.Dataset>
Dimensions: (index: 4)
Coordinates:
* index (index) int64 0 1 2 3
Data variables:
name (index) object 'falcon' 'parrot' 'lion' 'monkey'
class (index) object 'bird' 'bird' 'mammal' 'mammal'
max_speed (index) float64 389.0 24.0 80.5 nan
num_legs (index) int64 2 2 4 4
>>> df['max_speed'].to_xarray()
<xarray.DataArray 'max_speed' (index: 4)>
array([389. , 24. , 80.5, nan])
Coordinates:
* index (index) int64 0 1 2 3
>>> dates = pd.to_datetime(['2018-01-01', '2018-01-01',
... '2018-01-02', '2018-01-02'])
>>> df_multiindex = pd.DataFrame({'date': dates,
... 'animal': ['falcon', 'parrot',
... 'falcon', 'parrot'],
... 'speed': [350, 18, 361, 15]})
>>> df_multiindex = df_multiindex.set_index(['date', 'animal'])
>>> df_multiindex
speed
date animal
2018-01-01 falcon 350
parrot 18
2018-01-02 falcon 361
parrot 15
>>> df_multiindex.to_xarray()
<xarray.Dataset>
Dimensions: (date: 2, animal: 2)
Coordinates:
* date (date) datetime64[ns] 2018-01-01 2018-01-02
* animal (animal) object 'falcon' 'parrot'
Data variables:
speed (date, animal) int64 350 18 361 15
|
reference/api/pandas.DataFrame.to_xarray.html
|
DataFrame
|
DataFrame
|
Constructor#
DataFrame([data, index, columns, dtype, copy])
Two-dimensional, size-mutable, potentially heterogeneous tabular data.
Attributes and underlying data#
Axes
DataFrame.index
The index (row labels) of the DataFrame.
DataFrame.columns
The column labels of the DataFrame.
DataFrame.dtypes
Return the dtypes in the DataFrame.
DataFrame.info([verbose, buf, max_cols, ...])
Print a concise summary of a DataFrame.
DataFrame.select_dtypes([include, exclude])
Return a subset of the DataFrame's columns based on the column dtypes.
DataFrame.values
Return a Numpy representation of the DataFrame.
DataFrame.axes
Return a list representing the axes of the DataFrame.
DataFrame.ndim
Return an int representing the number of axes / array dimensions.
DataFrame.size
Return an int representing the number of elements in this object.
DataFrame.shape
Return a tuple representing the dimensionality of the DataFrame.
DataFrame.memory_usage([index, deep])
Return the memory usage of each column in bytes.
DataFrame.empty
Indicator whether Series/DataFrame is empty.
DataFrame.set_flags(*[, copy, ...])
Return a new object with updated flags.
Conversion#
DataFrame.astype(dtype[, copy, errors])
Cast a pandas object to a specified dtype dtype.
DataFrame.convert_dtypes([infer_objects, ...])
Convert columns to best possible dtypes using dtypes supporting pd.NA.
DataFrame.infer_objects()
Attempt to infer better dtypes for object columns.
DataFrame.copy([deep])
Make a copy of this object's indices and data.
DataFrame.bool()
Return the bool of a single element Series or DataFrame.
Indexing, iteration#
DataFrame.head([n])
Return the first n rows.
DataFrame.at
Access a single value for a row/column label pair.
DataFrame.iat
Access a single value for a row/column pair by integer position.
DataFrame.loc
Access a group of rows and columns by label(s) or a boolean array.
DataFrame.iloc
Purely integer-location based indexing for selection by position.
DataFrame.insert(loc, column, value[, ...])
Insert column into DataFrame at specified location.
DataFrame.__iter__()
Iterate over info axis.
DataFrame.items()
Iterate over (column name, Series) pairs.
DataFrame.iteritems()
(DEPRECATED) Iterate over (column name, Series) pairs.
DataFrame.keys()
Get the 'info axis' (see Indexing for more).
DataFrame.iterrows()
Iterate over DataFrame rows as (index, Series) pairs.
DataFrame.itertuples([index, name])
Iterate over DataFrame rows as namedtuples.
DataFrame.lookup(row_labels, col_labels)
(DEPRECATED) Label-based "fancy indexing" function for DataFrame.
DataFrame.pop(item)
Return item and drop from frame.
DataFrame.tail([n])
Return the last n rows.
DataFrame.xs(key[, axis, level, drop_level])
Return cross-section from the Series/DataFrame.
DataFrame.get(key[, default])
Get item from object for given key (ex: DataFrame column).
DataFrame.isin(values)
Whether each element in the DataFrame is contained in values.
DataFrame.where(cond[, other, inplace, ...])
Replace values where the condition is False.
DataFrame.mask(cond[, other, inplace, axis, ...])
Replace values where the condition is True.
DataFrame.query(expr, *[, inplace])
Query the columns of a DataFrame with a boolean expression.
For more information on .at, .iat, .loc, and
.iloc, see the indexing documentation.
Binary operator functions#
DataFrame.add(other[, axis, level, fill_value])
Get Addition of dataframe and other, element-wise (binary operator add).
DataFrame.sub(other[, axis, level, fill_value])
Get Subtraction of dataframe and other, element-wise (binary operator sub).
DataFrame.mul(other[, axis, level, fill_value])
Get Multiplication of dataframe and other, element-wise (binary operator mul).
DataFrame.div(other[, axis, level, fill_value])
Get Floating division of dataframe and other, element-wise (binary operator truediv).
DataFrame.truediv(other[, axis, level, ...])
Get Floating division of dataframe and other, element-wise (binary operator truediv).
DataFrame.floordiv(other[, axis, level, ...])
Get Integer division of dataframe and other, element-wise (binary operator floordiv).
DataFrame.mod(other[, axis, level, fill_value])
Get Modulo of dataframe and other, element-wise (binary operator mod).
DataFrame.pow(other[, axis, level, fill_value])
Get Exponential power of dataframe and other, element-wise (binary operator pow).
DataFrame.dot(other)
Compute the matrix multiplication between the DataFrame and other.
DataFrame.radd(other[, axis, level, fill_value])
Get Addition of dataframe and other, element-wise (binary operator radd).
DataFrame.rsub(other[, axis, level, fill_value])
Get Subtraction of dataframe and other, element-wise (binary operator rsub).
DataFrame.rmul(other[, axis, level, fill_value])
Get Multiplication of dataframe and other, element-wise (binary operator rmul).
DataFrame.rdiv(other[, axis, level, fill_value])
Get Floating division of dataframe and other, element-wise (binary operator rtruediv).
DataFrame.rtruediv(other[, axis, level, ...])
Get Floating division of dataframe and other, element-wise (binary operator rtruediv).
DataFrame.rfloordiv(other[, axis, level, ...])
Get Integer division of dataframe and other, element-wise (binary operator rfloordiv).
DataFrame.rmod(other[, axis, level, fill_value])
Get Modulo of dataframe and other, element-wise (binary operator rmod).
DataFrame.rpow(other[, axis, level, fill_value])
Get Exponential power of dataframe and other, element-wise (binary operator rpow).
DataFrame.lt(other[, axis, level])
Get Less than of dataframe and other, element-wise (binary operator lt).
DataFrame.gt(other[, axis, level])
Get Greater than of dataframe and other, element-wise (binary operator gt).
DataFrame.le(other[, axis, level])
Get Less than or equal to of dataframe and other, element-wise (binary operator le).
DataFrame.ge(other[, axis, level])
Get Greater than or equal to of dataframe and other, element-wise (binary operator ge).
DataFrame.ne(other[, axis, level])
Get Not equal to of dataframe and other, element-wise (binary operator ne).
DataFrame.eq(other[, axis, level])
Get Equal to of dataframe and other, element-wise (binary operator eq).
DataFrame.combine(other, func[, fill_value, ...])
Perform column-wise combine with another DataFrame.
DataFrame.combine_first(other)
Update null elements with value in the same location in other.
Function application, GroupBy & window#
DataFrame.apply(func[, axis, raw, ...])
Apply a function along an axis of the DataFrame.
DataFrame.applymap(func[, na_action])
Apply a function to a Dataframe elementwise.
DataFrame.pipe(func, *args, **kwargs)
Apply chainable functions that expect Series or DataFrames.
DataFrame.agg([func, axis])
Aggregate using one or more operations over the specified axis.
DataFrame.aggregate([func, axis])
Aggregate using one or more operations over the specified axis.
DataFrame.transform(func[, axis])
Call func on self producing a DataFrame with the same axis shape as self.
DataFrame.groupby([by, axis, level, ...])
Group DataFrame using a mapper or by a Series of columns.
DataFrame.rolling(window[, min_periods, ...])
Provide rolling window calculations.
DataFrame.expanding([min_periods, center, ...])
Provide expanding window calculations.
DataFrame.ewm([com, span, halflife, alpha, ...])
Provide exponentially weighted (EW) calculations.
Computations / descriptive stats#
DataFrame.abs()
Return a Series/DataFrame with absolute numeric value of each element.
DataFrame.all([axis, bool_only, skipna, level])
Return whether all elements are True, potentially over an axis.
DataFrame.any(*[, axis, bool_only, skipna, ...])
Return whether any element is True, potentially over an axis.
DataFrame.clip([lower, upper, axis, inplace])
Trim values at input threshold(s).
DataFrame.corr([method, min_periods, ...])
Compute pairwise correlation of columns, excluding NA/null values.
DataFrame.corrwith(other[, axis, drop, ...])
Compute pairwise correlation.
DataFrame.count([axis, level, numeric_only])
Count non-NA cells for each column or row.
DataFrame.cov([min_periods, ddof, numeric_only])
Compute pairwise covariance of columns, excluding NA/null values.
DataFrame.cummax([axis, skipna])
Return cumulative maximum over a DataFrame or Series axis.
DataFrame.cummin([axis, skipna])
Return cumulative minimum over a DataFrame or Series axis.
DataFrame.cumprod([axis, skipna])
Return cumulative product over a DataFrame or Series axis.
DataFrame.cumsum([axis, skipna])
Return cumulative sum over a DataFrame or Series axis.
DataFrame.describe([percentiles, include, ...])
Generate descriptive statistics.
DataFrame.diff([periods, axis])
First discrete difference of element.
DataFrame.eval(expr, *[, inplace])
Evaluate a string describing operations on DataFrame columns.
DataFrame.kurt([axis, skipna, level, ...])
Return unbiased kurtosis over requested axis.
DataFrame.kurtosis([axis, skipna, level, ...])
Return unbiased kurtosis over requested axis.
DataFrame.mad([axis, skipna, level])
(DEPRECATED) Return the mean absolute deviation of the values over the requested axis.
DataFrame.max([axis, skipna, level, ...])
Return the maximum of the values over the requested axis.
DataFrame.mean([axis, skipna, level, ...])
Return the mean of the values over the requested axis.
DataFrame.median([axis, skipna, level, ...])
Return the median of the values over the requested axis.
DataFrame.min([axis, skipna, level, ...])
Return the minimum of the values over the requested axis.
DataFrame.mode([axis, numeric_only, dropna])
Get the mode(s) of each element along the selected axis.
DataFrame.pct_change([periods, fill_method, ...])
Percentage change between the current and a prior element.
DataFrame.prod([axis, skipna, level, ...])
Return the product of the values over the requested axis.
DataFrame.product([axis, skipna, level, ...])
Return the product of the values over the requested axis.
DataFrame.quantile([q, axis, numeric_only, ...])
Return values at the given quantile over requested axis.
DataFrame.rank([axis, method, numeric_only, ...])
Compute numerical data ranks (1 through n) along axis.
DataFrame.round([decimals])
Round a DataFrame to a variable number of decimal places.
DataFrame.sem([axis, skipna, level, ddof, ...])
Return unbiased standard error of the mean over requested axis.
DataFrame.skew([axis, skipna, level, ...])
Return unbiased skew over requested axis.
DataFrame.sum([axis, skipna, level, ...])
Return the sum of the values over the requested axis.
DataFrame.std([axis, skipna, level, ddof, ...])
Return sample standard deviation over requested axis.
DataFrame.var([axis, skipna, level, ddof, ...])
Return unbiased variance over requested axis.
DataFrame.nunique([axis, dropna])
Count number of distinct elements in specified axis.
DataFrame.value_counts([subset, normalize, ...])
Return a Series containing counts of unique rows in the DataFrame.
Reindexing / selection / label manipulation#
DataFrame.add_prefix(prefix)
Prefix labels with string prefix.
DataFrame.add_suffix(suffix)
Suffix labels with string suffix.
DataFrame.align(other[, join, axis, level, ...])
Align two objects on their axes with the specified join method.
DataFrame.at_time(time[, asof, axis])
Select values at particular time of day (e.g., 9:30AM).
DataFrame.between_time(start_time, end_time)
Select values between particular times of the day (e.g., 9:00-9:30 AM).
DataFrame.drop([labels, axis, index, ...])
Drop specified labels from rows or columns.
DataFrame.drop_duplicates([subset, keep, ...])
Return DataFrame with duplicate rows removed.
DataFrame.duplicated([subset, keep])
Return boolean Series denoting duplicate rows.
DataFrame.equals(other)
Test whether two objects contain the same elements.
DataFrame.filter([items, like, regex, axis])
Subset the dataframe rows or columns according to the specified index labels.
DataFrame.first(offset)
Select initial periods of time series data based on a date offset.
DataFrame.head([n])
Return the first n rows.
DataFrame.idxmax([axis, skipna, numeric_only])
Return index of first occurrence of maximum over requested axis.
DataFrame.idxmin([axis, skipna, numeric_only])
Return index of first occurrence of minimum over requested axis.
DataFrame.last(offset)
Select final periods of time series data based on a date offset.
DataFrame.reindex([labels, index, columns, ...])
Conform Series/DataFrame to new index with optional filling logic.
DataFrame.reindex_like(other[, method, ...])
Return an object with matching indices as other object.
DataFrame.rename([mapper, index, columns, ...])
Alter axes labels.
DataFrame.rename_axis([mapper, inplace])
Set the name of the axis for the index or columns.
DataFrame.reset_index([level, drop, ...])
Reset the index, or a level of it.
DataFrame.sample([n, frac, replace, ...])
Return a random sample of items from an axis of object.
DataFrame.set_axis(labels, *[, axis, ...])
Assign desired index to given axis.
DataFrame.set_index(keys, *[, drop, append, ...])
Set the DataFrame index using existing columns.
DataFrame.tail([n])
Return the last n rows.
DataFrame.take(indices[, axis, is_copy])
Return the elements in the given positional indices along an axis.
DataFrame.truncate([before, after, axis, copy])
Truncate a Series or DataFrame before and after some index value.
Missing data handling#
DataFrame.backfill(*[, axis, inplace, ...])
Synonym for DataFrame.fillna() with method='bfill'.
DataFrame.bfill(*[, axis, inplace, limit, ...])
Synonym for DataFrame.fillna() with method='bfill'.
DataFrame.dropna(*[, axis, how, thresh, ...])
Remove missing values.
DataFrame.ffill(*[, axis, inplace, limit, ...])
Synonym for DataFrame.fillna() with method='ffill'.
DataFrame.fillna([value, method, axis, ...])
Fill NA/NaN values using the specified method.
DataFrame.interpolate([method, axis, limit, ...])
Fill NaN values using an interpolation method.
DataFrame.isna()
Detect missing values.
DataFrame.isnull()
DataFrame.isnull is an alias for DataFrame.isna.
DataFrame.notna()
Detect existing (non-missing) values.
DataFrame.notnull()
DataFrame.notnull is an alias for DataFrame.notna.
DataFrame.pad(*[, axis, inplace, limit, ...])
Synonym for DataFrame.fillna() with method='ffill'.
DataFrame.replace([to_replace, value, ...])
Replace values given in to_replace with value.
Reshaping, sorting, transposing#
DataFrame.droplevel(level[, axis])
Return Series/DataFrame with requested index / column level(s) removed.
DataFrame.pivot(*[, index, columns, values])
Return reshaped DataFrame organized by given index / column values.
DataFrame.pivot_table([values, index, ...])
Create a spreadsheet-style pivot table as a DataFrame.
DataFrame.reorder_levels(order[, axis])
Rearrange index levels using input order.
DataFrame.sort_values(by, *[, axis, ...])
Sort by the values along either axis.
DataFrame.sort_index(*[, axis, level, ...])
Sort object by labels (along an axis).
DataFrame.nlargest(n, columns[, keep])
Return the first n rows ordered by columns in descending order.
DataFrame.nsmallest(n, columns[, keep])
Return the first n rows ordered by columns in ascending order.
DataFrame.swaplevel([i, j, axis])
Swap levels i and j in a MultiIndex.
DataFrame.stack([level, dropna])
Stack the prescribed level(s) from columns to index.
DataFrame.unstack([level, fill_value])
Pivot a level of the (necessarily hierarchical) index labels.
DataFrame.swapaxes(axis1, axis2[, copy])
Interchange axes and swap values axes appropriately.
DataFrame.melt([id_vars, value_vars, ...])
Unpivot a DataFrame from wide to long format, optionally leaving identifiers set.
DataFrame.explode(column[, ignore_index])
Transform each element of a list-like to a row, replicating index values.
DataFrame.squeeze([axis])
Squeeze 1 dimensional axis objects into scalars.
DataFrame.to_xarray()
Return an xarray object from the pandas object.
DataFrame.T
DataFrame.transpose(*args[, copy])
Transpose index and columns.
Combining / comparing / joining / merging#
DataFrame.append(other[, ignore_index, ...])
(DEPRECATED) Append rows of other to the end of caller, returning a new object.
DataFrame.assign(**kwargs)
Assign new columns to a DataFrame.
DataFrame.compare(other[, align_axis, ...])
Compare to another DataFrame and show the differences.
DataFrame.join(other[, on, how, lsuffix, ...])
Join columns of another DataFrame.
DataFrame.merge(right[, how, on, left_on, ...])
Merge DataFrame or named Series objects with a database-style join.
DataFrame.update(other[, join, overwrite, ...])
Modify in place using non-NA values from another DataFrame.
Time Series-related#
DataFrame.asfreq(freq[, method, how, ...])
Convert time series to specified frequency.
DataFrame.asof(where[, subset])
Return the last row(s) without any NaNs before where.
DataFrame.shift([periods, freq, axis, ...])
Shift index by desired number of periods with an optional time freq.
DataFrame.slice_shift([periods, axis])
(DEPRECATED) Equivalent to shift without copying data.
DataFrame.tshift([periods, freq, axis])
(DEPRECATED) Shift the time index, using the index's frequency if available.
DataFrame.first_valid_index()
Return index for first non-NA value or None, if no non-NA value is found.
DataFrame.last_valid_index()
Return index for last non-NA value or None, if no non-NA value is found.
DataFrame.resample(rule[, axis, closed, ...])
Resample time-series data.
DataFrame.to_period([freq, axis, copy])
Convert DataFrame from DatetimeIndex to PeriodIndex.
DataFrame.to_timestamp([freq, how, axis, copy])
Cast to DatetimeIndex of timestamps, at beginning of period.
DataFrame.tz_convert(tz[, axis, level, copy])
Convert tz-aware axis to target time zone.
DataFrame.tz_localize(tz[, axis, level, ...])
Localize tz-naive index of a Series or DataFrame to target time zone.
Flags#
Flags refer to attributes of the pandas object. Properties of the dataset (like
the date it was recorded, the URL it was accessed from, etc.) should be stored
in DataFrame.attrs.
Flags(obj, *, allows_duplicate_labels)
Flags that apply to pandas objects.
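A minimal sketch of reading and setting a flag (allows_duplicate_labels is the only flag currently defined):
>>> df = pd.DataFrame({'A': [1, 2]})
>>> df.flags.allows_duplicate_labels
True
>>> df.set_flags(allows_duplicate_labels=False).flags.allows_duplicate_labels
False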
Metadata#
DataFrame.attrs is a dictionary for storing global metadata for this DataFrame.
Warning
DataFrame.attrs is considered experimental and may change without warning.
DataFrame.attrs
Dictionary of global attributes of this dataset.
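A minimal sketch of storing metadata in attrs (the key and value are illustrative):
>>> df = pd.DataFrame({'A': [1, 2]})
>>> df.attrs['source'] = 'survey-2021'
>>> df.attrs
{'source': 'survey-2021'}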
Plotting#
DataFrame.plot is both a callable method and a namespace attribute for
specific plotting methods of the form DataFrame.plot.<kind>.
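For example, df.plot(kind='line') and df.plot.line() draw the same figure; a minimal sketch (the data is illustrative):
>>> df = pd.DataFrame({'A': [1, 2, 3]})
>>> ax = df.plot(kind='line')  # callable form
>>> ax = df.plot.line()        # equivalent accessor form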
DataFrame.plot([x, y, kind, ax, ...])
DataFrame plotting accessor and method
DataFrame.plot.area([x, y])
Draw a stacked area plot.
DataFrame.plot.bar([x, y])
Vertical bar plot.
DataFrame.plot.barh([x, y])
Make a horizontal bar plot.
DataFrame.plot.box([by])
Make a box plot of the DataFrame columns.
DataFrame.plot.density([bw_method, ind])
Generate Kernel Density Estimate plot using Gaussian kernels.
DataFrame.plot.hexbin(x, y[, C, ...])
Generate a hexagonal binning plot.
DataFrame.plot.hist([by, bins])
Draw one histogram of the DataFrame's columns.
DataFrame.plot.kde([bw_method, ind])
Generate Kernel Density Estimate plot using Gaussian kernels.
DataFrame.plot.line([x, y])
Plot Series or DataFrame as lines.
DataFrame.plot.pie(**kwargs)
Generate a pie plot.
DataFrame.plot.scatter(x, y[, s, c])
Create a scatter plot with varying marker point size and color.
DataFrame.boxplot([column, by, ax, ...])
Make a box plot from DataFrame columns.
DataFrame.hist([column, by, grid, ...])
Make a histogram of the DataFrame's columns.
Sparse accessor#
Sparse-dtype specific methods and attributes are provided under the
DataFrame.sparse accessor.
DataFrame.sparse.density
Ratio of non-sparse points to total (dense) data points.
DataFrame.sparse.from_spmatrix(data[, ...])
Create a new DataFrame from a scipy sparse matrix.
DataFrame.sparse.to_coo()
Return the contents of the frame as a sparse SciPy COO matrix.
DataFrame.sparse.to_dense()
Convert a DataFrame with sparse values to dense.
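A minimal sketch of a sparse round-trip (assumes the optional scipy dependency is installed):
>>> from scipy import sparse
>>> df = pd.DataFrame.sparse.from_spmatrix(sparse.eye(3))
>>> df.sparse.density
0.3333333333333333
>>> df.sparse.to_dense().shape
(3, 3)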
Serialization / IO / conversion#
DataFrame.from_dict(data[, orient, dtype, ...])
Construct DataFrame from dict of array-like or dicts.
DataFrame.from_records(data[, index, ...])
Convert structured or record ndarray to DataFrame.
DataFrame.to_orc([path, engine, index, ...])
Write a DataFrame to the ORC format.
DataFrame.to_parquet([path, engine, ...])
Write a DataFrame to the binary parquet format.
DataFrame.to_pickle(path[, compression, ...])
Pickle (serialize) object to file.
DataFrame.to_csv([path_or_buf, sep, na_rep, ...])
Write object to a comma-separated values (csv) file.
DataFrame.to_hdf(path_or_buf, key[, mode, ...])
Write the contained data to an HDF5 file using HDFStore.
DataFrame.to_sql(name, con[, schema, ...])
Write records stored in a DataFrame to a SQL database.
DataFrame.to_dict([orient, into])
Convert the DataFrame to a dictionary.
DataFrame.to_excel(excel_writer[, ...])
Write object to an Excel sheet.
DataFrame.to_json([path_or_buf, orient, ...])
Convert the object to a JSON string.
DataFrame.to_html([buf, columns, col_space, ...])
Render a DataFrame as an HTML table.
DataFrame.to_feather(path, **kwargs)
Write a DataFrame to the binary Feather format.
DataFrame.to_latex([buf, columns, ...])
Render object to a LaTeX tabular, longtable, or nested table.
DataFrame.to_stata(path, *[, convert_dates, ...])
Export DataFrame object to Stata dta format.
DataFrame.to_gbq(destination_table[, ...])
Write a DataFrame to a Google BigQuery table.
DataFrame.to_records([index, column_dtypes, ...])
Convert DataFrame to a NumPy record array.
DataFrame.to_string([buf, columns, ...])
Render a DataFrame to a console-friendly tabular output.
DataFrame.to_clipboard([excel, sep])
Copy object to the system clipboard.
DataFrame.to_markdown([buf, mode, index, ...])
Print DataFrame in Markdown-friendly format.
DataFrame.style
Returns a Styler object.
DataFrame.__dataframe__([nan_as_null, ...])
Return the dataframe interchange object implementing the interchange protocol.
|
reference/frame.html
|
pandas.DataFrame.replace
|
`pandas.DataFrame.replace`
Replace values given in to_replace with value.
```
>>> s = pd.Series([1, 2, 3, 4, 5])
>>> s.replace(1, 5)
0 5
1 2
2 3
3 4
4 5
dtype: int64
```
|
DataFrame.replace(to_replace=None, value=_NoDefault.no_default, *, inplace=False, limit=None, regex=False, method=_NoDefault.no_default)[source]#
Replace values given in to_replace with value.
Values of the DataFrame are replaced with other values dynamically.
This differs from updating with .loc or .iloc, which require
you to specify a location to update with some value.
Parameters
to_replacestr, regex, list, dict, Series, int, float, or NoneHow to find the values that will be replaced.
numeric, str or regex:
numeric: numeric values equal to to_replace will be
replaced with value
str: string exactly matching to_replace will be replaced
with value
regex: regexs matching to_replace will be replaced with
value
list of str, regex, or numeric:
First, if to_replace and value are both lists, they
must be the same length.
Second, if regex=True then all of the strings in both
lists will be interpreted as regexs otherwise they will match
directly. This doesn’t matter much for value since there
are only a few possible substitution regexes you can use.
str, regex and numeric rules apply as above.
dict:
Dicts can be used to specify different replacement values
for different existing values. For example,
{'a': 'b', 'y': 'z'} replaces the value ‘a’ with ‘b’ and
‘y’ with ‘z’. To use a dict in this way, the optional value
parameter should not be given.
For a DataFrame a dict can specify that different values
should be replaced in different columns. For example,
{'a': 1, 'b': 'z'} looks for the value 1 in column ‘a’
and the value ‘z’ in column ‘b’ and replaces these values
with whatever is specified in value. The value parameter
should not be None in this case. You can treat this as a
special case of passing two lists except that you are
specifying the column to search in.
For a DataFrame nested dictionaries, e.g.,
{'a': {'b': np.nan}}, are read as follows: look in column
‘a’ for the value ‘b’ and replace it with NaN. The optional value
parameter should not be specified to use a nested dict in this
way. You can nest regular expressions as well. Note that
column names (the top-level dictionary keys in a nested
dictionary) cannot be regular expressions.
None:
This means that the regex argument must be a string,
compiled regular expression, or list, dict, ndarray or
Series of such elements. If value is also None then
this must be a nested dictionary or Series.
See the examples section for examples of each of these.
valuescalar, dict, list, str, regex, default NoneValue to replace any values matching to_replace with.
For a DataFrame a dict of values can be used to specify which
value to use for each column (columns not in the dict will not be
filled). Regular expressions, strings and lists or dicts of such
objects are also allowed.
inplacebool, default FalseWhether to modify the DataFrame rather than creating a new one.
limitint, default NoneMaximum size gap to forward or backward fill.
regexbool or same types as to_replace, default FalseWhether to interpret to_replace and/or value as regular
expressions. If this is True then to_replace must be a
string. Alternatively, this could be a regular expression or a
list, dict, or array of regular expressions in which case
to_replace must be None.
method{‘pad’, ‘ffill’, ‘bfill’}The method to use for replacement, when to_replace is a
scalar, list or tuple and value is None.
Changed in version 0.23.0: Added to DataFrame.
Returns
DataFrameObject after replacement.
Raises
AssertionError
If regex is not a bool and to_replace is not
None.
TypeError
If to_replace is not a scalar, array-like, dict, or None
If to_replace is a dict and value is not a list,
dict, ndarray, or Series
If to_replace is None and regex is not compilable
into a regular expression or is a list, dict, ndarray, or
Series.
When replacing multiple bool or datetime64 objects and
the arguments to to_replace does not match the type of the
value being replaced
ValueError
If a list or an ndarray is passed to to_replace and
value but they are not the same length.
See also
DataFrame.fillnaFill NA values.
DataFrame.whereReplace values based on boolean condition.
Series.str.replaceSimple string replacement.
Notes
Regex substitution is performed under the hood with re.sub. The
rules for substitution for re.sub are the same.
Regular expressions will only substitute on strings, meaning you
cannot provide, for example, a regular expression matching floating
point numbers and expect the columns in your frame that have a
numeric dtype to be matched. However, if those floating point
numbers are strings, then you can do this.
This method has a lot of options. You are encouraged to experiment
and play with this method to gain intuition about how it works.
When dict is used as the to_replace value, it is like
key(s) in the dict are the to_replace part and
value(s) in the dict are the value parameter.
Examples
Scalar `to_replace` and `value`
>>> s = pd.Series([1, 2, 3, 4, 5])
>>> s.replace(1, 5)
0 5
1 2
2 3
3 4
4 5
dtype: int64
>>> df = pd.DataFrame({'A': [0, 1, 2, 3, 4],
... 'B': [5, 6, 7, 8, 9],
... 'C': ['a', 'b', 'c', 'd', 'e']})
>>> df.replace(0, 5)
A B C
0 5 5 a
1 1 6 b
2 2 7 c
3 3 8 d
4 4 9 e
List-like `to_replace`
>>> df.replace([0, 1, 2, 3], 4)
A B C
0 4 5 a
1 4 6 b
2 4 7 c
3 4 8 d
4 4 9 e
>>> df.replace([0, 1, 2, 3], [4, 3, 2, 1])
A B C
0 4 5 a
1 3 6 b
2 2 7 c
3 1 8 d
4 4 9 e
>>> s.replace([1, 2], method='bfill')
0 3
1 3
2 3
3 4
4 5
dtype: int64
dict-like `to_replace`
>>> df.replace({0: 10, 1: 100})
A B C
0 10 5 a
1 100 6 b
2 2 7 c
3 3 8 d
4 4 9 e
>>> df.replace({'A': 0, 'B': 5}, 100)
A B C
0 100 100 a
1 1 6 b
2 2 7 c
3 3 8 d
4 4 9 e
>>> df.replace({'A': {0: 100, 4: 400}})
A B C
0 100 5 a
1 1 6 b
2 2 7 c
3 3 8 d
4 400 9 e
Regular expression `to_replace`
>>> df = pd.DataFrame({'A': ['bat', 'foo', 'bait'],
... 'B': ['abc', 'bar', 'xyz']})
>>> df.replace(to_replace=r'^ba.$', value='new', regex=True)
A B
0 new abc
1 foo new
2 bait xyz
>>> df.replace({'A': r'^ba.$'}, {'A': 'new'}, regex=True)
A B
0 new abc
1 foo bar
2 bait xyz
>>> df.replace(regex=r'^ba.$', value='new')
A B
0 new abc
1 foo new
2 bait xyz
>>> df.replace(regex={r'^ba.$': 'new', 'foo': 'xyz'})
A B
0 new abc
1 xyz new
2 bait xyz
>>> df.replace(regex=[r'^ba.$', 'foo'], value='new')
A B
0 new abc
1 new new
2 bait xyz
Compare the behavior of s.replace({'a': None}) and
s.replace('a', None) to understand the peculiarities
of the to_replace parameter:
>>> s = pd.Series([10, 'a', 'a', 'b', 'a'])
When one uses a dict as the to_replace value, it is like the
value(s) in the dict are equal to the value parameter.
s.replace({'a': None}) is equivalent to
s.replace(to_replace={'a': None}, value=None, method=None):
>>> s.replace({'a': None})
0 10
1 None
2 None
3 b
4 None
dtype: object
When value is not explicitly passed and to_replace is a scalar, list
or tuple, replace uses the method parameter (default ‘pad’) to do the
replacement. That is why the ‘a’ values are replaced by 10
in rows 1 and 2, and by ‘b’ in row 4, in this case.
>>> s.replace('a')
0 10
1 10
2 10
3 b
4 b
dtype: object
On the other hand, if None is explicitly passed for value, it will
be respected:
>>> s.replace('a', None)
0 10
1 None
2 None
3 b
4 None
dtype: object
Changed in version 1.4.0: Previously the explicit None was silently ignored.
|
reference/api/pandas.DataFrame.replace.html
|
pandas.HDFStore.append
|
`pandas.HDFStore.append`
Append to Table in file.
|
HDFStore.append(key, value, format=None, axes=None, index=True, append=True, complib=None, complevel=None, columns=None, min_itemsize=None, nan_rep=None, chunksize=None, expectedrows=None, dropna=None, data_columns=None, encoding=None, errors='strict')[source]#
Append to Table in file.
Node must already exist and be Table format.
Parameters
keystr
value{Series, DataFrame}
format‘table’ is the defaultFormat to use when storing object in HDFStore. Value can be one of:
'table'Table format. Write as a PyTables Table structure which may perform
worse but allow more flexible operations like searching / selecting
subsets of the data.
indexbool, default TrueWrite DataFrame index as a column.
appendbool, default TrueAppend the input data to the existing.
data_columnslist of columns, or True, default NoneList of columns to create as indexed data columns for on-disk
queries, or True to use all columns. By default only the axes
of the object are indexed. See here.
min_itemsizedict of columns that specify minimum str sizes
nan_repstr to use as str nan representation
chunksizesize to chunk the writing
expectedrowsexpected TOTAL row size of this table
encodingdefault None, provide an encoding for str
dropnabool, default False, optionalDo not write an all-NaN row to the store. Settable
by the option ‘io.hdf.dropna_table’.
Notes
Does not check if data being appended overlaps with existing
data in the table, so be careful.
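A minimal sketch of appending rows to a table-format node (assumes the optional PyTables dependency; the file name and data are illustrative):
>>> df1 = pd.DataFrame({'A': [1, 2]})
>>> df2 = pd.DataFrame({'A': [3, 4]})
>>> with pd.HDFStore('store.h5') as store:
...     store.put('data', df1, format='table')  # node must be Table format
...     store.append('data', df2)  # rows of df2 are appended
...     print(len(store.select('data')))
4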
|
reference/api/pandas.HDFStore.append.html
|
pandas.tseries.offsets.Micro.delta
|
pandas.tseries.offsets.Micro.delta
|
Micro.delta#
|
reference/api/pandas.tseries.offsets.Micro.delta.html
|
pandas.Series.cummin
|
`pandas.Series.cummin`
Return cumulative minimum over a DataFrame or Series axis.
```
>>> s = pd.Series([2, np.nan, 5, -1, 0])
>>> s
0 2.0
1 NaN
2 5.0
3 -1.0
4 0.0
dtype: float64
```
|
Series.cummin(axis=None, skipna=True, *args, **kwargs)[source]#
Return cumulative minimum over a DataFrame or Series axis.
Returns a DataFrame or Series of the same size containing the cumulative
minimum.
Parameters
axis{0 or ‘index’, 1 or ‘columns’}, default 0The index or the name of the axis. 0 is equivalent to None or ‘index’.
For Series this parameter is unused and defaults to 0.
skipnabool, default TrueExclude NA/null values. If an entire row/column is NA, the result
will be NA.
*args, **kwargsAdditional keywords have no effect but might be accepted for
compatibility with NumPy.
Returns
scalar or SeriesReturn cumulative minimum of scalar or Series.
See also
core.window.expanding.Expanding.minSimilar functionality but ignores NaN values.
Series.minReturn the minimum over Series axis.
Series.cummaxReturn cumulative maximum over Series axis.
Series.cumminReturn cumulative minimum over Series axis.
Series.cumsumReturn cumulative sum over Series axis.
Series.cumprodReturn cumulative product over Series axis.
Examples
Series
>>> s = pd.Series([2, np.nan, 5, -1, 0])
>>> s
0 2.0
1 NaN
2 5.0
3 -1.0
4 0.0
dtype: float64
By default, NA values are ignored.
>>> s.cummin()
0 2.0
1 NaN
2 2.0
3 -1.0
4 -1.0
dtype: float64
To include NA values in the operation, use skipna=False
>>> s.cummin(skipna=False)
0 2.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
DataFrame
>>> df = pd.DataFrame([[2.0, 1.0],
... [3.0, np.nan],
... [1.0, 0.0]],
... columns=list('AB'))
>>> df
A B
0 2.0 1.0
1 3.0 NaN
2 1.0 0.0
By default, iterates over rows and finds the minimum
in each column. This is equivalent to axis=None or axis='index'.
>>> df.cummin()
A B
0 2.0 1.0
1 2.0 NaN
2 1.0 0.0
To iterate over columns and find the minimum in each row,
use axis=1
>>> df.cummin(axis=1)
A B
0 2.0 1.0
1 3.0 NaN
2 1.0 0.0
|
reference/api/pandas.Series.cummin.html
|
pandas.tseries.offsets.Hour.is_quarter_end
|
`pandas.tseries.offsets.Hour.is_quarter_end`
Return boolean whether a timestamp occurs on the quarter end.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
```
|
Hour.is_quarter_end()#
Return boolean whether a timestamp occurs on the quarter end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
|
reference/api/pandas.tseries.offsets.Hour.is_quarter_end.html
|
pandas.tseries.offsets.MonthEnd.is_anchored
|
`pandas.tseries.offsets.MonthEnd.is_anchored`
Return boolean whether the frequency is a unit frequency (n=1).
Examples
```
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
```
|
MonthEnd.is_anchored()#
Return boolean whether the frequency is a unit frequency (n=1).
Examples
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
|
reference/api/pandas.tseries.offsets.MonthEnd.is_anchored.html
|
pandas.tseries.offsets.Nano.is_quarter_start
|
`pandas.tseries.offsets.Nano.is_quarter_start`
Return boolean whether a timestamp occurs on the quarter start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
```
|
Nano.is_quarter_start()#
Return boolean whether a timestamp occurs on the quarter start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
|
reference/api/pandas.tseries.offsets.Nano.is_quarter_start.html
|
pandas.tseries.offsets.BQuarterBegin.isAnchored
|
pandas.tseries.offsets.BQuarterBegin.isAnchored
|
BQuarterBegin.isAnchored()#
|
reference/api/pandas.tseries.offsets.BQuarterBegin.isAnchored.html
|
pandas.tseries.offsets.YearEnd.nanos
|
pandas.tseries.offsets.YearEnd.nanos
|
YearEnd.nanos#
|
reference/api/pandas.tseries.offsets.YearEnd.nanos.html
|
pandas.tseries.offsets.CustomBusinessHour.apply
|
pandas.tseries.offsets.CustomBusinessHour.apply
|
CustomBusinessHour.apply()#
|
reference/api/pandas.tseries.offsets.CustomBusinessHour.apply.html
|
pandas.BooleanDtype
|
`pandas.BooleanDtype`
Extension dtype for boolean data.
```
>>> pd.BooleanDtype()
BooleanDtype
```
|
class pandas.BooleanDtype[source]#
Extension dtype for boolean data.
New in version 1.0.0.
Warning
BooleanDtype is considered experimental. The implementation and
parts of the API may change without warning.
Examples
>>> pd.BooleanDtype()
BooleanDtype
Attributes
None
Methods
None
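A minimal sketch of creating nullable boolean data with this dtype (the values are illustrative):
>>> pd.Series([True, False, None], dtype='boolean')
0     True
1    False
2     <NA>
dtype: boolean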
|
reference/api/pandas.BooleanDtype.html
|
pandas.Series.str.isnumeric
|
`pandas.Series.str.isnumeric`
Check whether all characters in each string are numeric.
This is equivalent to running the Python string method
str.isnumeric() for each element of the Series/Index. If a string
has zero characters, False is returned for that check.
```
>>> s1 = pd.Series(['one', 'one1', '1', ''])
```
|
Series.str.isnumeric()[source]#
Check whether all characters in each string are numeric.
This is equivalent to running the Python string method
str.isnumeric() for each element of the Series/Index. If a string
has zero characters, False is returned for that check.
Returns
Series or Index of boolSeries or Index of boolean values with the same length as the original
Series/Index.
See also
Series.str.isalphaCheck whether all characters are alphabetic.
Series.str.isnumericCheck whether all characters are numeric.
Series.str.isalnumCheck whether all characters are alphanumeric.
Series.str.isdigitCheck whether all characters are digits.
Series.str.isdecimalCheck whether all characters are decimal.
Series.str.isspaceCheck whether all characters are whitespace.
Series.str.islowerCheck whether all characters are lowercase.
Series.str.isupperCheck whether all characters are uppercase.
Series.str.istitleCheck whether all characters are titlecase.
Examples
Checks for Alphabetic and Numeric Characters
>>> s1 = pd.Series(['one', 'one1', '1', ''])
>>> s1.str.isalpha()
0 True
1 False
2 False
3 False
dtype: bool
>>> s1.str.isnumeric()
0 False
1 False
2 True
3 False
dtype: bool
>>> s1.str.isalnum()
0 True
1 True
2 True
3 False
dtype: bool
Note that checks against characters mixed with any additional punctuation
or whitespace will evaluate to false for an alphanumeric check.
>>> s2 = pd.Series(['A B', '1.5', '3,000'])
>>> s2.str.isalnum()
0 False
1 False
2 False
dtype: bool
More Detailed Checks for Numeric Characters
There are several different but overlapping sets of numeric characters that
can be checked for.
>>> s3 = pd.Series(['23', '³', '⅕', ''])
The s3.str.isdecimal method checks for characters used to form numbers
in base 10.
>>> s3.str.isdecimal()
0 True
1 False
2 False
3 False
dtype: bool
The s3.str.isdigit method is the same as s3.str.isdecimal but also
includes special digits, like superscripted and subscripted digits in
unicode.
>>> s3.str.isdigit()
0 True
1 True
2 False
3 False
dtype: bool
The s3.str.isnumeric method is the same as s3.str.isdigit but also
includes other characters that can represent quantities such as unicode
fractions.
>>> s3.str.isnumeric()
0 True
1 True
2 True
3 False
dtype: bool
Checks for Whitespace
>>> s4 = pd.Series([' ', '\t\r\n ', ''])
>>> s4.str.isspace()
0 True
1 True
2 False
dtype: bool
Checks for Character Case
>>> s5 = pd.Series(['leopard', 'Golden Eagle', 'SNAKE', ''])
>>> s5.str.islower()
0 True
1 False
2 False
3 False
dtype: bool
>>> s5.str.isupper()
0 False
1 False
2 True
3 False
dtype: bool
The s5.str.istitle method checks for whether all words are in title
case (whether only the first letter of each word is capitalized). Words are
assumed to be any sequence of non-numeric characters separated by
whitespace characters.
>>> s5.str.istitle()
0 False
1 True
2 False
3 False
dtype: bool
|
reference/api/pandas.Series.str.isnumeric.html
|
pandas.tseries.offsets.CustomBusinessMonthEnd.rollback
|
`pandas.tseries.offsets.CustomBusinessMonthEnd.rollback`
Roll provided date backward to next offset only if not on offset.
|
CustomBusinessMonthEnd.rollback()#
Roll provided date backward to next offset only if not on offset.
Returns
TimeStampRolled timestamp if not on offset, otherwise unchanged timestamp.
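A minimal sketch (the date is illustrative; 2022-07-29 is the last business day of July 2022):
>>> ts = pd.Timestamp('2022-08-05')
>>> pd.offsets.CustomBusinessMonthEnd().rollback(ts)
Timestamp('2022-07-29 00:00:00')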
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthEnd.rollback.html
|
pandas.tseries.offsets.YearBegin.rule_code
|
pandas.tseries.offsets.YearBegin.rule_code
|
YearBegin.rule_code#
|
reference/api/pandas.tseries.offsets.YearBegin.rule_code.html
|
pandas.DataFrame.idxmin
|
`pandas.DataFrame.idxmin`
Return index of first occurrence of minimum over requested axis.
NA/null values are excluded.
```
>>> df = pd.DataFrame({'consumption': [10.51, 103.11, 55.48],
... 'co2_emissions': [37.2, 19.66, 1712]},
... index=['Pork', 'Wheat Products', 'Beef'])
```
|
DataFrame.idxmin(axis=0, skipna=True, numeric_only=False)[source]#
Return index of first occurrence of minimum over requested axis.
NA/null values are excluded.
Parameters
axis{0 or ‘index’, 1 or ‘columns’}, default 0The axis to use. 0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise.
skipnabool, default TrueExclude NA/null values. If an entire row/column is NA, the result
will be NA.
numeric_onlybool, default FalseInclude only float, int or boolean data.
New in version 1.5.0.
Returns
SeriesIndexes of minima along the specified axis.
Raises
ValueError
If the row/column is empty
See also
Series.idxminReturn index of the minimum element.
Notes
This method is the DataFrame version of ndarray.argmin.
Examples
Consider a dataset containing food consumption in Argentina.
>>> df = pd.DataFrame({'consumption': [10.51, 103.11, 55.48],
... 'co2_emissions': [37.2, 19.66, 1712]},
... index=['Pork', 'Wheat Products', 'Beef'])
>>> df
consumption co2_emissions
Pork 10.51 37.20
Wheat Products 103.11 19.66
Beef 55.48 1712.00
By default, it returns the index for the minimum value in each column.
>>> df.idxmin()
consumption Pork
co2_emissions Wheat Products
dtype: object
To return the index for the minimum value in each row, use axis="columns".
>>> df.idxmin(axis="columns")
Pork consumption
Wheat Products co2_emissions
Beef consumption
dtype: object
|
reference/api/pandas.DataFrame.idxmin.html
|
pandas.core.groupby.GroupBy.pad
|
`pandas.core.groupby.GroupBy.pad`
Forward fill the values.
|
GroupBy.pad(limit=None)[source]#
Forward fill the values.
Deprecated since version 1.4: Use ffill instead.
Parameters
limitint, optionalLimit of how many values to fill.
Returns
Series or DataFrameObject with missing values filled.
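A minimal sketch of the recommended ffill replacement (the frame is illustrative):
>>> df = pd.DataFrame({'key': ['a', 'a', 'b'], 'val': [1.0, None, None]})
>>> df.groupby('key').ffill()
   val
0  1.0
1  1.0
2  NaN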
|
reference/api/pandas.core.groupby.GroupBy.pad.html
|
pandas.tseries.offsets.Second.isAnchored
|
pandas.tseries.offsets.Second.isAnchored
|
Second.isAnchored()#
|
reference/api/pandas.tseries.offsets.Second.isAnchored.html
|
pandas.date_range
|
`pandas.date_range`
Return a fixed frequency DatetimeIndex.
Returns the range of equally spaced time points (where the difference between any
two adjacent points is specified by the given frequency) such that they all
satisfy start <[=] x <[=] end, where the first one and the last one are, resp.,
the first and last time points in that range that fall on the boundary of freq
(if given as a frequency string) or that are valid for freq (if given as a
pandas.tseries.offsets.DateOffset). (If exactly one of start,
end, or freq is not specified, this missing parameter can be computed
given periods, the number of timesteps in the range. See the note below.)
```
>>> pd.date_range(start='1/1/2018', end='1/08/2018')
DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04',
'2018-01-05', '2018-01-06', '2018-01-07', '2018-01-08'],
dtype='datetime64[ns]', freq='D')
```
|
pandas.date_range(start=None, end=None, periods=None, freq=None, tz=None, normalize=False, name=None, closed=_NoDefault.no_default, inclusive=None, **kwargs)[source]#
Return a fixed frequency DatetimeIndex.
Returns the range of equally spaced time points (where the difference between any
two adjacent points is specified by the given frequency) such that they all
satisfy start <[=] x <[=] end, where the first one and the last one are, resp.,
the first and last time points in that range that fall on the boundary of freq
(if given as a frequency string) or that are valid for freq (if given as a
pandas.tseries.offsets.DateOffset). (If exactly one of start,
end, or freq is not specified, this missing parameter can be computed
given periods, the number of timesteps in the range. See the note below.)
Parameters
startstr or datetime-like, optionalLeft bound for generating dates.
endstr or datetime-like, optionalRight bound for generating dates.
periodsint, optionalNumber of periods to generate.
freqstr or DateOffset, default ‘D’Frequency strings can have multiples, e.g. ‘5H’. See
here for a list of
frequency aliases.
tzstr or tzinfo, optionalTime zone name for returning localized DatetimeIndex, for example
‘Asia/Hong_Kong’. By default, the resulting DatetimeIndex is
timezone-naive.
normalizebool, default FalseNormalize start/end dates to midnight before generating date range.
namestr, default NoneName of the resulting DatetimeIndex.
closed{None, ‘left’, ‘right’}, optionalMake the interval closed with respect to the given frequency to
the ‘left’, ‘right’, or both sides (None, the default).
Deprecated since version 1.4.0: Argument closed has been deprecated to standardize boundary inputs.
Use inclusive instead, to set each bound as closed or open.
inclusive{“both”, “neither”, “left”, “right”}, default “both”Include boundaries; Whether to set each bound as closed or open.
New in version 1.4.0.
**kwargsFor compatibility. Has no effect on the result.
Returns
rngDatetimeIndex
See also
DatetimeIndexAn immutable container for datetimes.
timedelta_rangeReturn a fixed frequency TimedeltaIndex.
period_rangeReturn a fixed frequency PeriodIndex.
interval_rangeReturn a fixed frequency IntervalIndex.
Notes
Of the four parameters start, end, periods, and freq,
exactly three must be specified. If freq is omitted, the resulting
DatetimeIndex will have periods linearly spaced elements between
start and end (closed on both sides).
To learn more about the frequency strings, please see this link.
Examples
Specifying the values
The next four examples generate the same DatetimeIndex, but vary
the combination of start, end and periods.
Specify start and end, with the default daily frequency.
>>> pd.date_range(start='1/1/2018', end='1/08/2018')
DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04',
'2018-01-05', '2018-01-06', '2018-01-07', '2018-01-08'],
dtype='datetime64[ns]', freq='D')
Specify start and periods, the number of periods (days).
>>> pd.date_range(start='1/1/2018', periods=8)
DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04',
'2018-01-05', '2018-01-06', '2018-01-07', '2018-01-08'],
dtype='datetime64[ns]', freq='D')
Specify end and periods, the number of periods (days).
>>> pd.date_range(end='1/1/2018', periods=8)
DatetimeIndex(['2017-12-25', '2017-12-26', '2017-12-27', '2017-12-28',
'2017-12-29', '2017-12-30', '2017-12-31', '2018-01-01'],
dtype='datetime64[ns]', freq='D')
Specify start, end, and periods; the frequency is generated
automatically (linearly spaced).
>>> pd.date_range(start='2018-04-24', end='2018-04-27', periods=3)
DatetimeIndex(['2018-04-24 00:00:00', '2018-04-25 12:00:00',
'2018-04-27 00:00:00'],
dtype='datetime64[ns]', freq=None)
Other Parameters
Changed the freq (frequency) to 'M' (month end frequency).
>>> pd.date_range(start='1/1/2018', periods=5, freq='M')
DatetimeIndex(['2018-01-31', '2018-02-28', '2018-03-31', '2018-04-30',
'2018-05-31'],
dtype='datetime64[ns]', freq='M')
Multiples are allowed
>>> pd.date_range(start='1/1/2018', periods=5, freq='3M')
DatetimeIndex(['2018-01-31', '2018-04-30', '2018-07-31', '2018-10-31',
'2019-01-31'],
dtype='datetime64[ns]', freq='3M')
freq can also be specified as an Offset object.
>>> pd.date_range(start='1/1/2018', periods=5, freq=pd.offsets.MonthEnd(3))
DatetimeIndex(['2018-01-31', '2018-04-30', '2018-07-31', '2018-10-31',
'2019-01-31'],
dtype='datetime64[ns]', freq='3M')
Specify tz to set the timezone.
>>> pd.date_range(start='1/1/2018', periods=5, tz='Asia/Tokyo')
DatetimeIndex(['2018-01-01 00:00:00+09:00', '2018-01-02 00:00:00+09:00',
'2018-01-03 00:00:00+09:00', '2018-01-04 00:00:00+09:00',
'2018-01-05 00:00:00+09:00'],
dtype='datetime64[ns, Asia/Tokyo]', freq='D')
inclusive controls whether to include start and end that are on the
boundary. The default, “both”, includes boundary points on either end.
>>> pd.date_range(start='2017-01-01', end='2017-01-04', inclusive="both")
DatetimeIndex(['2017-01-01', '2017-01-02', '2017-01-03', '2017-01-04'],
dtype='datetime64[ns]', freq='D')
Use inclusive='left' to exclude end if it falls on the boundary.
>>> pd.date_range(start='2017-01-01', end='2017-01-04', inclusive='left')
DatetimeIndex(['2017-01-01', '2017-01-02', '2017-01-03'],
dtype='datetime64[ns]', freq='D')
Use inclusive='right' to exclude start if it falls on the boundary, and
similarly inclusive='neither' will exclude both start and end.
>>> pd.date_range(start='2017-01-01', end='2017-01-04', inclusive='right')
DatetimeIndex(['2017-01-02', '2017-01-03', '2017-01-04'],
dtype='datetime64[ns]', freq='D')
|
reference/api/pandas.date_range.html
|
pandas.DatetimeIndex.indexer_at_time
|
`pandas.DatetimeIndex.indexer_at_time`
Return index locations of values at particular time of day.
|
DatetimeIndex.indexer_at_time(time, asof=False)[source]#
Return index locations of values at particular time of day.
Parameters
timedatetime.time or strTime passed in either as object (datetime.time) or as string in
appropriate format (“%H:%M”, “%H%M”, “%I:%M%p”, “%I%M%p”,
“%H:%M:%S”, “%H%M%S”, “%I:%M:%S%p”, “%I%M%S%p”).
Returns
np.ndarray[np.intp]
See also
indexer_between_timeGet index locations of values between particular times of day.
DataFrame.at_timeSelect values at particular time of day.
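A minimal sketch (the timestamps are illustrative):
>>> idx = pd.DatetimeIndex(['2023-01-01 10:00', '2023-01-01 11:00',
...                         '2023-01-02 10:00'])
>>> idx.indexer_at_time('10:00')
array([0, 2])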
|
reference/api/pandas.DatetimeIndex.indexer_at_time.html
|
pandas.tseries.offsets.BQuarterBegin.kwds
|
`pandas.tseries.offsets.BQuarterBegin.kwds`
Return a dict of extra parameters for the offset.
```
>>> pd.DateOffset(5).kwds
{}
```
|
BQuarterBegin.kwds#
Return a dict of extra parameters for the offset.
Examples
>>> pd.DateOffset(5).kwds
{}
>>> pd.offsets.FY5253Quarter().kwds
{'weekday': 0,
'startingMonth': 1,
'qtr_with_extra_week': 1,
'variation': 'nearest'}
|
reference/api/pandas.tseries.offsets.BQuarterBegin.kwds.html
|
pandas.Index.ravel
|
`pandas.Index.ravel`
Return an ndarray of the flattened values of the underlying data.
|
final Index.ravel(order='C')[source]#
Return an ndarray of the flattened values of the underlying data.
Returns
numpy.ndarrayFlattened array.
See also
numpy.ndarray.ravelReturn a flattened array.
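A minimal sketch (the index values are illustrative):
>>> idx = pd.Index([1, 2, 3])
>>> idx.ravel()
array([1, 2, 3])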
|
reference/api/pandas.Index.ravel.html
|
pandas.DataFrame.plot.density
|
`pandas.DataFrame.plot.density`
Generate Kernel Density Estimate plot using Gaussian kernels.
```
>>> s = pd.Series([1, 2, 2.5, 3, 3.5, 4, 5])
>>> ax = s.plot.kde()
```
|
DataFrame.plot.density(bw_method=None, ind=None, **kwargs)[source]#
Generate Kernel Density Estimate plot using Gaussian kernels.
In statistics, kernel density estimation (KDE) is a non-parametric
way to estimate the probability density function (PDF) of a random
variable. This function uses Gaussian kernels and includes automatic
bandwidth determination.
Parameters
bw_methodstr, scalar or callable, optionalThe method used to calculate the estimator bandwidth. This can be
‘scott’, ‘silverman’, a scalar constant or a callable.
If None (default), ‘scott’ is used.
See scipy.stats.gaussian_kde for more information.
indNumPy array or int, optionalEvaluation points for the estimated PDF. If None (default),
1000 equally spaced points are used. If ind is a NumPy array, the
KDE is evaluated at the points passed. If ind is an integer,
ind number of equally spaced points are used.
**kwargsAdditional keyword arguments are documented in
DataFrame.plot().
Returns
matplotlib.axes.Axes or numpy.ndarray of them
See also
scipy.stats.gaussian_kdeRepresentation of a kernel-density estimate using Gaussian kernels. This is the function used internally to estimate the PDF.
Examples
Given a Series of points randomly sampled from an unknown
distribution, estimate its PDF using KDE with automatic
bandwidth determination and plot the results, evaluating them at
1000 equally spaced points (default):
>>> s = pd.Series([1, 2, 2.5, 3, 3.5, 4, 5])
>>> ax = s.plot.kde()
A scalar bandwidth can be specified. Using a small bandwidth value can
lead to over-fitting, while using a large bandwidth value may result
in under-fitting:
>>> ax = s.plot.kde(bw_method=0.3)
>>> ax = s.plot.kde(bw_method=3)
Finally, the ind parameter determines the evaluation points for the
plot of the estimated PDF:
>>> ax = s.plot.kde(ind=[1, 2, 3, 4, 5])
For DataFrame, it works in the same way:
>>> df = pd.DataFrame({
... 'x': [1, 2, 2.5, 3, 3.5, 4, 5],
... 'y': [4, 4, 4.5, 5, 5.5, 6, 6],
... })
>>> ax = df.plot.kde()
A scalar bandwidth can be specified. Using a small bandwidth value can
lead to over-fitting, while using a large bandwidth value may result
in under-fitting:
>>> ax = df.plot.kde(bw_method=0.3)
>>> ax = df.plot.kde(bw_method=3)
Finally, the ind parameter determines the evaluation points for the
plot of the estimated PDF:
>>> ax = df.plot.kde(ind=[1, 2, 3, 4, 5, 6])
|
reference/api/pandas.DataFrame.plot.density.html
|
pandas.reset_option
|
`pandas.reset_option`
Reset one or more options to their default value.
|
pandas.reset_option(pat) = <pandas._config.config.CallableDynamicDoc object>#
Reset one or more options to their default value.
Pass “all” as argument to reset all options.
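For instance, a minimal sketch using one registered display option:
>>> pd.set_option('display.max_rows', 10)
>>> pd.get_option('display.max_rows')
10
>>> pd.reset_option('display.max_rows')
>>> pd.get_option('display.max_rows')
60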
Available options:
compute.[use_bottleneck, use_numba, use_numexpr]
display.[chop_threshold, colheader_justify, column_space, date_dayfirst,
date_yearfirst, encoding, expand_frame_repr, float_format]
display.html.[border, table_schema, use_mathjax]
display.[large_repr]
display.latex.[escape, longtable, multicolumn, multicolumn_format, multirow,
repr]
display.[max_categories, max_columns, max_colwidth, max_dir_items,
max_info_columns, max_info_rows, max_rows, max_seq_items, memory_usage,
min_rows, multi_sparse, notebook_repr_html, pprint_nest_depth, precision,
show_dimensions]
display.unicode.[ambiguous_as_wide, east_asian_width]
display.[width]
io.excel.ods.[reader, writer]
io.excel.xls.[reader, writer]
io.excel.xlsb.[reader]
io.excel.xlsm.[reader, writer]
io.excel.xlsx.[reader, writer]
io.hdf.[default_format, dropna_table]
io.parquet.[engine]
io.sql.[engine]
mode.[chained_assignment, copy_on_write, data_manager, sim_interactive,
string_storage, use_inf_as_na, use_inf_as_null]
plotting.[backend]
plotting.matplotlib.[register_converters]
styler.format.[decimal, escape, formatter, na_rep, precision, thousands]
styler.html.[mathjax]
styler.latex.[environment, hrules, multicol_align, multirow_align]
styler.render.[encoding, max_columns, max_elements, max_rows, repr]
styler.sparse.[columns, index]
Parameters
patstr/regexIf specified, only options matching prefix* will be reset.
Note: partial matches are supported for convenience, but unless you
use the full option name (e.g. x.y.z.option_name), your code may break
in future versions if new options with similar names are introduced.
Returns
None
Notes
Please reference the User Guide for more information.
The available options with its descriptions:
compute.use_bottleneckboolUse the bottleneck library to accelerate if it is installed,
the default is True
Valid values: False,True
[default: True] [currently: True]
compute.use_numbaboolUse the numba engine option for select operations if it is installed,
the default is False
Valid values: False,True
[default: False] [currently: False]
compute.use_numexprboolUse the numexpr library to accelerate computation if it is installed,
the default is True
Valid values: False,True
[default: True] [currently: True]
display.chop_thresholdfloat or Noneif set to a float value, all float values smaller than the given threshold
will be displayed as exactly 0 by repr and friends.
[default: None] [currently: None]
display.colheader_justify‘left’/’right’Controls the justification of column headers. used by DataFrameFormatter.
[default: right] [currently: right]
display.column_space No description available.[default: 12] [currently: 12]
display.date_dayfirstbooleanWhen True, prints and parses dates with the day first, eg 20/01/2005
[default: False] [currently: False]
display.date_yearfirstbooleanWhen True, prints and parses dates with the year first, eg 2005/01/20
[default: False] [currently: False]
display.encodingstr/unicodeDefaults to the detected encoding of the console.
Specifies the encoding to be used for strings returned by to_string,
these are generally strings meant to be displayed on the console.
[default: utf-8] [currently: utf-8]
display.expand_frame_reprbooleanWhether to print out the full DataFrame repr for wide DataFrames across
multiple lines, max_columns is still respected, but the output will
wrap-around across multiple “pages” if its width exceeds display.width.
[default: True] [currently: True]
display.float_formatcallableThe callable should accept a floating point number and return
a string with the desired format of the number. This is used
in some places like SeriesFormatter.
See formats.format.EngFormatter for an example.
[default: None] [currently: None]
display.html.borderintA border=value attribute is inserted in the <table> tag
for the DataFrame HTML repr.
[default: 1] [currently: 1]
display.html.table_schemabooleanWhether to publish a Table Schema representation for frontends
that support it.
(default: False)
[default: False] [currently: False]
display.html.use_mathjaxbooleanWhen True, Jupyter notebook will process table contents using MathJax,
rendering mathematical expressions enclosed by the dollar symbol.
(default: True)
[default: True] [currently: True]
display.large_repr‘truncate’/’info’For DataFrames exceeding max_rows/max_cols, the repr (and HTML repr) can
show a truncated table (the default from 0.13), or switch to the view from
df.info() (the behaviour in earlier versions of pandas).
[default: truncate] [currently: truncate]
display.latex.escapeboolThis specifies if the to_latex method of a Dataframe escapes special
characters.
Valid values: False,True
[default: True] [currently: True]
display.latex.longtable :boolThis specifies if the to_latex method of a Dataframe uses the longtable
format.
Valid values: False,True
[default: False] [currently: False]
display.latex.multicolumnboolThis specifies if the to_latex method of a Dataframe uses multicolumns
to pretty-print MultiIndex columns.
Valid values: False,True
[default: True] [currently: True]
display.latex.multicolumn_formatstrThis specifies the format used for multicolumn headers when the
to_latex method of a Dataframe uses multicolumns, e.g. ‘l’, ‘c’ or ‘r’.
[default: l] [currently: l]
display.latex.multirowboolThis specifies if the to_latex method of a Dataframe uses multirows
to pretty-print MultiIndex rows.
Valid values: False,True
[default: False] [currently: False]
display.latex.reprbooleanWhether to produce a latex DataFrame representation for jupyter
environments that support it.
(default: False)
[default: False] [currently: False]
display.max_categoriesintThis sets the maximum number of categories pandas should output when
printing out a Categorical or a Series of dtype “category”.
[default: 8] [currently: 8]
display.max_columnsintIf max_cols is exceeded, switch to truncate view. Depending on
large_repr, objects are either centrally truncated or printed as
a summary view. ‘None’ value means unlimited.
In case python/IPython is running in a terminal and large_repr
equals ‘truncate’ this can be set to 0 and pandas will auto-detect
the width of the terminal and print a truncated object which fits
the screen width. The IPython notebook, IPython qtconsole, or IDLE
do not run in a terminal and hence it is not possible to do
correct auto-detection.
[default: 0] [currently: 0]
display.max_colwidthint or NoneThe maximum width in characters of a column in the repr of
a pandas data structure. When the column overflows, a “…”
placeholder is embedded in the output. A ‘None’ value means unlimited.
[default: 50] [currently: 50]
display.max_dir_itemsintThe number of items that will be added to dir(…). ‘None’ value means
unlimited. Because dir is cached, changing this option will not immediately
affect already existing dataframes until a column is deleted or added.
This is for instance used to suggest columns from a dataframe to tab
completion.
[default: 100] [currently: 100]
display.max_info_columnsintmax_info_columns is used in DataFrame.info method to decide if
per column information will be printed.
[default: 100] [currently: 100]
display.max_info_rowsint or Nonedf.info() will usually show null-counts for each column.
For large frames this can be quite slow. max_info_rows and max_info_cols
limit this null check only to frames with smaller dimensions than
specified.
[default: 1690785] [currently: 1690785]
display.max_rowsintIf max_rows is exceeded, switch to truncate view. Depending on
large_repr, objects are either centrally truncated or printed as
a summary view. ‘None’ value means unlimited.
In case python/IPython is running in a terminal and large_repr
equals ‘truncate’ this can be set to 0 and pandas will auto-detect
the height of the terminal and print a truncated object which fits
the screen height. The IPython notebook, IPython qtconsole, or
IDLE do not run in a terminal and hence it is not possible to do
correct auto-detection.
[default: 60] [currently: 60]
display.max_seq_itemsint or NoneWhen pretty-printing a long sequence, no more than max_seq_items
will be printed. If items are omitted, they will be denoted by the
addition of “…” to the resulting string.
If set to None, the number of items to be printed is unlimited.
[default: 100] [currently: 100]
display.memory_usagebool, string or NoneThis specifies if the memory usage of a DataFrame should be displayed when
df.info() is called. Valid values True,False,’deep’
[default: True] [currently: True]
display.min_rowsintThe numbers of rows to show in a truncated view (when max_rows is
exceeded). Ignored when max_rows is set to None or 0. When set to
None, follows the value of max_rows.
[default: 10] [currently: 10]
display.multi_sparseboolean“sparsify” MultiIndex display (don’t display repeated
elements in outer levels within groups)
[default: True] [currently: True]
display.notebook_repr_htmlbooleanWhen True, IPython notebook will use html representation for
pandas objects (if it is available).
[default: True] [currently: True]
display.pprint_nest_depthintControls the number of nested levels to process when pretty-printing
[default: 3] [currently: 3]
display.precisionintFloating point output precision in terms of number of places after the
decimal, for regular formatting as well as scientific notation. Similar
to precision in numpy.set_printoptions().
[default: 6] [currently: 6]
display.show_dimensionsboolean or ‘truncate’Whether to print out dimensions at the end of DataFrame repr.
If ‘truncate’ is specified, only print out the dimensions if the
frame is truncated (e.g. not display all rows and/or columns)
[default: truncate] [currently: truncate]
display.unicode.ambiguous_as_widebooleanWhether to use the Unicode East Asian Width to calculate the display text
width.
Enabling this may affect performance (default: False)
[default: False] [currently: False]
display.unicode.east_asian_widthbooleanWhether to use the Unicode East Asian Width to calculate the display text
width.
Enabling this may affect performance (default: False)
[default: False] [currently: False]
display.widthintWidth of the display in characters. In case python/IPython is running in
a terminal this can be set to None and pandas will correctly auto-detect
the width.
Note that the IPython notebook, IPython qtconsole, or IDLE do not run in a
terminal and hence it is not possible to correctly detect the width.
[default: 80] [currently: 80]
io.excel.ods.readerstringThe default Excel reader engine for ‘ods’ files. Available options:
auto, odf.
[default: auto] [currently: auto]
io.excel.ods.writerstringThe default Excel writer engine for ‘ods’ files. Available options:
auto, odf.
[default: auto] [currently: auto]
io.excel.xls.readerstringThe default Excel reader engine for ‘xls’ files. Available options:
auto, xlrd.
[default: auto] [currently: auto]
io.excel.xls.writerstringThe default Excel writer engine for ‘xls’ files. Available options:
auto, xlwt.
[default: auto] [currently: auto]
(Deprecated, use `` instead.)
io.excel.xlsb.readerstringThe default Excel reader engine for ‘xlsb’ files. Available options:
auto, pyxlsb.
[default: auto] [currently: auto]
io.excel.xlsm.readerstringThe default Excel reader engine for ‘xlsm’ files. Available options:
auto, xlrd, openpyxl.
[default: auto] [currently: auto]
io.excel.xlsm.writerstringThe default Excel writer engine for ‘xlsm’ files. Available options:
auto, openpyxl.
[default: auto] [currently: auto]
io.excel.xlsx.readerstringThe default Excel reader engine for ‘xlsx’ files. Available options:
auto, xlrd, openpyxl.
[default: auto] [currently: auto]
io.excel.xlsx.writerstringThe default Excel writer engine for ‘xlsx’ files. Available options:
auto, openpyxl, xlsxwriter.
[default: auto] [currently: auto]
io.hdf.default_formatformatThe default format for writing; if None, then
put will default to ‘fixed’ and append will default to ‘table’
[default: None] [currently: None]
io.hdf.dropna_tablebooleandrop ALL nan rows when appending to a table
[default: False] [currently: False]
io.parquet.enginestringThe default parquet reader/writer engine. Available options:
‘auto’, ‘pyarrow’, ‘fastparquet’, the default is ‘auto’
[default: auto] [currently: auto]
io.sql.enginestringThe default sql reader/writer engine. Available options:
‘auto’, ‘sqlalchemy’, the default is ‘auto’
[default: auto] [currently: auto]
mode.chained_assignmentstringRaise an exception, warn, or no action if trying to use chained assignment,
The default is warn
[default: warn] [currently: warn]
mode.copy_on_writeboolUse new copy-view behaviour using Copy-on-Write. Defaults to False,
unless overridden by the ‘PANDAS_COPY_ON_WRITE’ environment variable
(if set to “1” for True, needs to be set before pandas is imported).
[default: False] [currently: False]
mode.data_managerstringInternal data manager type; can be “block” or “array”. Defaults to “block”,
unless overridden by the ‘PANDAS_DATA_MANAGER’ environment variable (needs
to be set before pandas is imported).
[default: block] [currently: block]
mode.sim_interactivebooleanWhether to simulate interactive mode for purposes of testing
[default: False] [currently: False]
mode.string_storagestringThe default storage for StringDtype.
[default: python] [currently: python]
mode.use_inf_as_nabooleanTrue means treat None, NaN, INF, -INF as NA (old way),
False means None and NaN are null, but INF, -INF are not NA
(new way).
[default: False] [currently: False]
mode.use_inf_as_nullbooleanuse_inf_as_null has been deprecated and will be removed in a future
version. Use use_inf_as_na instead.
[default: False] [currently: False]
(Deprecated, use mode.use_inf_as_na instead.)
plotting.backendstrThe plotting backend to use. The default value is “matplotlib”, the
backend provided with pandas. Other backends can be specified by
providing the name of the module that implements the backend.
[default: matplotlib] [currently: matplotlib]
plotting.matplotlib.register_convertersbool or ‘auto’.Whether to register converters with matplotlib’s units registry for
dates, times, datetimes, and Periods. Toggling to False will remove
the converters, restoring any converters that pandas overwrote.
[default: auto] [currently: auto]
styler.format.decimalstrThe character representation for the decimal separator for floats and complex.
[default: .] [currently: .]
styler.format.escapestr, optionalWhether to escape certain characters according to the given context; html or latex.
[default: None] [currently: None]
styler.format.formatterstr, callable, dict, optionalA formatter object to be used as default within Styler.format.
[default: None] [currently: None]
styler.format.na_repstr, optionalThe string representation for values identified as missing.
[default: None] [currently: None]
styler.format.precisionintThe precision for floats and complex numbers.
[default: 6] [currently: 6]
styler.format.thousandsstr, optionalThe character representation for thousands separator for floats, int and complex.
[default: None] [currently: None]
styler.html.mathjaxboolIf False will render special CSS classes to table attributes that indicate Mathjax
will not be used in Jupyter Notebook.
[default: True] [currently: True]
styler.latex.environmentstrThe environment to replace \begin{table}. If “longtable” is used results
in a specific longtable environment format.
[default: None] [currently: None]
styler.latex.hrulesboolWhether to add horizontal rules on top and bottom and below the headers.
[default: False] [currently: False]
styler.latex.multicol_align{“r”, “c”, “l”, “naive-l”, “naive-r”}The specifier for horizontal alignment of sparsified LaTeX multicolumns. Pipe
decorators can also be added to non-naive values to draw vertical
rules, e.g. “|r” will draw a rule on the left side of right aligned merged cells.
[default: r] [currently: r]
styler.latex.multirow_align{“c”, “t”, “b”}The specifier for vertical alignment of sparsified LaTeX multirows.
[default: c] [currently: c]
styler.render.encodingstrThe encoding used for output HTML and LaTeX files.
[default: utf-8] [currently: utf-8]
styler.render.max_columnsint, optionalThe maximum number of columns that will be rendered. May still be reduced to
satisfy max_elements, which takes precedence.
[default: None] [currently: None]
styler.render.max_elementsintThe maximum number of data-cell (<td>) elements that will be rendered before
trimming will occur over columns, rows or both if needed.
[default: 262144] [currently: 262144]
styler.render.max_rowsint, optionalThe maximum number of rows that will be rendered. May still be reduced to
satisfy max_elements, which takes precedence.
[default: None] [currently: None]
styler.render.reprstrDetermine which output to use in Jupyter Notebook in {“html”, “latex”}.
[default: html] [currently: html]
styler.sparse.columnsboolWhether to sparsify the display of hierarchical columns. Setting to False will
display each explicit level element in a hierarchical key for each column.
[default: True] [currently: True]
styler.sparse.indexboolWhether to sparsify the display of a hierarchical index. Setting to False will
display each explicit level element in a hierarchical key for each row.
[default: True] [currently: True]
|
reference/api/pandas.reset_option.html
|
pandas.tseries.offsets.FY5253.is_quarter_start
|
`pandas.tseries.offsets.FY5253.is_quarter_start`
Return boolean whether a timestamp occurs on the quarter start.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
```
|
FY5253.is_quarter_start()#
Return boolean whether a timestamp occurs on the quarter start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
|
reference/api/pandas.tseries.offsets.FY5253.is_quarter_start.html
|
pandas.unique
|
`pandas.unique`
Return unique values based on a hash table.
```
>>> pd.unique(pd.Series([2, 1, 3, 3]))
array([2, 1, 3])
```
|
pandas.unique(values)[source]#
Return unique values based on a hash table.
Uniques are returned in order of appearance. This does NOT sort.
Significantly faster than numpy.unique for long enough sequences.
Includes NA values.
Parameters
values1d array-like
Returns
numpy.ndarray or ExtensionArrayThe return can be:
Index : when the input is an Index
Categorical : when the input is a Categorical dtype
ndarray : when the input is a Series/ndarray
See also
Index.uniqueReturn unique values from an Index.
Series.uniqueReturn unique values of Series object.
Examples
>>> pd.unique(pd.Series([2, 1, 3, 3]))
array([2, 1, 3])
>>> pd.unique(pd.Series([2] + [1] * 5))
array([2, 1])
>>> pd.unique(pd.Series([pd.Timestamp("20160101"), pd.Timestamp("20160101")]))
array(['2016-01-01T00:00:00.000000000'], dtype='datetime64[ns]')
>>> pd.unique(
... pd.Series(
... [
... pd.Timestamp("20160101", tz="US/Eastern"),
... pd.Timestamp("20160101", tz="US/Eastern"),
... ]
... )
... )
<DatetimeArray>
['2016-01-01 00:00:00-05:00']
Length: 1, dtype: datetime64[ns, US/Eastern]
>>> pd.unique(
... pd.Index(
... [
... pd.Timestamp("20160101", tz="US/Eastern"),
... pd.Timestamp("20160101", tz="US/Eastern"),
... ]
... )
... )
DatetimeIndex(['2016-01-01 00:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]',
freq=None)
>>> pd.unique(list("baabc"))
array(['b', 'a', 'c'], dtype=object)
An unordered Categorical will return categories in the
order of appearance.
>>> pd.unique(pd.Series(pd.Categorical(list("baabc"))))
['b', 'a', 'c']
Categories (3, object): ['a', 'b', 'c']
>>> pd.unique(pd.Series(pd.Categorical(list("baabc"), categories=list("abc"))))
['b', 'a', 'c']
Categories (3, object): ['a', 'b', 'c']
An ordered Categorical preserves the category ordering.
>>> pd.unique(
... pd.Series(
... pd.Categorical(list("baabc"), categories=list("abc"), ordered=True)
... )
... )
['b', 'a', 'c']
Categories (3, object): ['a' < 'b' < 'c']
An array of tuples
>>> pd.unique([("a", "b"), ("b", "a"), ("a", "c"), ("b", "a")])
array([('a', 'b'), ('b', 'a'), ('a', 'c')], dtype=object)
|
reference/api/pandas.unique.html
|
pandas.tseries.offsets.BusinessMonthBegin.base
|
`pandas.tseries.offsets.BusinessMonthBegin.base`
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
|
BusinessMonthBegin.base#
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
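A minimal sketch, inspecting the n attribute before and after taking base:
>>> offset = pd.offsets.BusinessMonthBegin(3)
>>> offset.n
3
>>> offset.base.n
1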
|
reference/api/pandas.tseries.offsets.BusinessMonthBegin.base.html
|
pandas.arrays.IntervalArray.from_breaks
|
`pandas.arrays.IntervalArray.from_breaks`
Construct an IntervalArray from an array of splits.
```
>>> pd.arrays.IntervalArray.from_breaks([0, 1, 2, 3])
<IntervalArray>
[(0, 1], (1, 2], (2, 3]]
Length: 3, dtype: interval[int64, right]
```
|
classmethod IntervalArray.from_breaks(breaks, closed='right', copy=False, dtype=None)[source]#
Construct an IntervalArray from an array of splits.
Parameters
breaksarray-like (1-dimensional)Left and right bounds for each interval.
closed{‘left’, ‘right’, ‘both’, ‘neither’}, default ‘right’Whether the intervals are closed on the left-side, right-side, both
or neither.
copybool, default FalseCopy the data.
dtypedtype or None, default NoneIf None, dtype will be inferred.
Returns
IntervalArray
See also
interval_rangeFunction to create a fixed frequency IntervalIndex.
IntervalArray.from_arraysConstruct from a left and right array.
IntervalArray.from_tuplesConstruct from a sequence of tuples.
Examples
>>> pd.arrays.IntervalArray.from_breaks([0, 1, 2, 3])
<IntervalArray>
[(0, 1], (1, 2], (2, 3]]
Length: 3, dtype: interval[int64, right]
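The closed parameter controls which side of each interval is closed; a minimal sketch:
>>> pd.arrays.IntervalArray.from_breaks([0, 1, 2, 3], closed="left")
<IntervalArray>
[[0, 1), [1, 2), [2, 3)]
Length: 3, dtype: interval[int64, left]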
|
reference/api/pandas.arrays.IntervalArray.from_breaks.html
|
pandas.core.resample.Resampler.mean
|
`pandas.core.resample.Resampler.mean`
Compute mean of groups, excluding missing values.
```
>>> df = pd.DataFrame({'A': [1, 1, 2, 1, 2],
... 'B': [np.nan, 2, 3, 4, 5],
... 'C': [1, 2, 1, 1, 2]}, columns=['A', 'B', 'C'])
```
|
Resampler.mean(numeric_only=_NoDefault.no_default, *args, **kwargs)[source]#
Compute mean of groups, excluding missing values.
Parameters
numeric_onlybool, default TrueInclude only float, int, boolean columns. If None, will attempt to use
everything, then use only numeric data.
enginestr, default None
'cython' : Runs the operation through C-extensions from cython.
'numba' : Runs the operation through JIT compiled code from numba.
None : Defaults to 'cython' or globally setting
compute.use_numba
New in version 1.4.0.
engine_kwargsdict, default None
For 'cython' engine, there are no accepted engine_kwargs
For 'numba' engine, the engine can accept nopython, nogil
and parallel dictionary keys. The values must either be True or
False. The default engine_kwargs for the 'numba' engine is
{'nopython': True, 'nogil': False, 'parallel': False}
New in version 1.4.0.
Returns
pandas.Series or pandas.DataFrame
See also
Series.groupbyApply a function groupby to a Series.
DataFrame.groupbyApply a function groupby to each row or column of a DataFrame.
Examples
>>> df = pd.DataFrame({'A': [1, 1, 2, 1, 2],
... 'B': [np.nan, 2, 3, 4, 5],
... 'C': [1, 2, 1, 1, 2]}, columns=['A', 'B', 'C'])
Groupby one column and return the mean of the remaining columns in
each group.
>>> df.groupby('A').mean()
B C
A
1 3.0 1.333333
2 4.0 1.500000
Groupby two columns and return the mean of the remaining column.
>>> df.groupby(['A', 'B']).mean()
C
A B
1 2.0 2.0
4.0 1.0
2 3.0 1.0
5.0 2.0
Groupby one column and return the mean of only particular column in
the group.
>>> df.groupby('A')['B'].mean()
A
1 3.0
2 4.0
Name: B, dtype: float64
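Since this method lives on Resampler, a minimal resampling sketch (dates illustrative) may be useful alongside the groupby examples above:
>>> idx = pd.date_range('2023-01-01', periods=4, freq='D')
>>> ser = pd.Series([1, 2, 3, 4], index=idx)
>>> ser.resample('2D').mean()
2023-01-01    1.5
2023-01-03    3.5
Freq: 2D, dtype: float64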
|
reference/api/pandas.core.resample.Resampler.mean.html
|
pandas.get_dummies
|
`pandas.get_dummies`
Convert categorical variable into dummy/indicator variables.
Data of which to get dummy indicators.
```
>>> s = pd.Series(list('abca'))
```
|
pandas.get_dummies(data, prefix=None, prefix_sep='_', dummy_na=False, columns=None, sparse=False, drop_first=False, dtype=None)[source]#
Convert categorical variable into dummy/indicator variables.
Parameters
dataarray-like, Series, or DataFrameData of which to get dummy indicators.
prefixstr, list of str, or dict of str, default NoneString to append DataFrame column names.
Pass a list with length equal to the number of columns
when calling get_dummies on a DataFrame. Alternatively, prefix
can be a dictionary mapping column names to prefixes.
prefix_sepstr, default ‘_’If appending prefix, separator/delimiter to use. Or pass a
list or dictionary as with prefix.
dummy_nabool, default FalseAdd a column to indicate NaNs, if False NaNs are ignored.
columnslist-like, default NoneColumn names in the DataFrame to be encoded.
If columns is None then all the columns with
object, string, or category dtype will be converted.
sparsebool, default FalseWhether the dummy-encoded columns should be backed by
a SparseArray (True) or a regular NumPy array (False).
drop_firstbool, default FalseWhether to get k-1 dummies out of k categorical levels by removing the
first level.
dtypedtype, default np.uint8Data type for new columns. Only a single dtype is allowed.
Returns
DataFrameDummy-coded data.
See also
Series.str.get_dummiesConvert Series to dummy codes.
from_dummies()Convert dummy codes to categorical DataFrame.
Notes
Reference the user guide for more examples.
Examples
>>> s = pd.Series(list('abca'))
>>> pd.get_dummies(s)
a b c
0 1 0 0
1 0 1 0
2 0 0 1
3 1 0 0
>>> s1 = ['a', 'b', np.nan]
>>> pd.get_dummies(s1)
a b
0 1 0
1 0 1
2 0 0
>>> pd.get_dummies(s1, dummy_na=True)
a b NaN
0 1 0 0
1 0 1 0
2 0 0 1
>>> df = pd.DataFrame({'A': ['a', 'b', 'a'], 'B': ['b', 'a', 'c'],
... 'C': [1, 2, 3]})
>>> pd.get_dummies(df, prefix=['col1', 'col2'])
C col1_a col1_b col2_a col2_b col2_c
0 1 1 0 0 1 0
1 2 0 1 1 0 0
2 3 1 0 0 0 1
>>> pd.get_dummies(pd.Series(list('abcaa')))
a b c
0 1 0 0
1 0 1 0
2 0 0 1
3 1 0 0
4 1 0 0
>>> pd.get_dummies(pd.Series(list('abcaa')), drop_first=True)
b c
0 0 0
1 1 0
2 0 1
3 0 0
4 0 0
>>> pd.get_dummies(pd.Series(list('abc')), dtype=float)
a b c
0 1.0 0.0 0.0
1 0.0 1.0 0.0
2 0.0 0.0 1.0
|
reference/api/pandas.get_dummies.html
|
pandas.tseries.offsets.DateOffset.kwds
|
`pandas.tseries.offsets.DateOffset.kwds`
Return a dict of extra parameters for the offset.
Examples
```
>>> pd.DateOffset(5).kwds
{}
```
|
DateOffset.kwds#
Return a dict of extra parameters for the offset.
Examples
>>> pd.DateOffset(5).kwds
{}
>>> pd.offsets.FY5253Quarter().kwds
{'weekday': 0,
'startingMonth': 1,
'qtr_with_extra_week': 1,
'variation': 'nearest'}
|
reference/api/pandas.tseries.offsets.DateOffset.kwds.html
|
pandas.DataFrame.combine_first
|
`pandas.DataFrame.combine_first`
Update null elements with value in the same location in other.
Combine two DataFrame objects by filling null values in one DataFrame
with non-null values from the other DataFrame. The row and column indexes
of the resulting DataFrame will be the union of the two. Upon calling
first.combine_first(second), the result keeps the values from first and
falls back to second only where first.loc[index, col] is missing; where
both first.loc[index, col] and second.loc[index, col] are not missing,
the value from first takes precedence.
```
>>> df1 = pd.DataFrame({'A': [None, 0], 'B': [None, 4]})
>>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]})
>>> df1.combine_first(df2)
A B
0 1.0 3.0
1 0.0 4.0
```
|
DataFrame.combine_first(other)[source]#
Update null elements with value in the same location in other.
Combine two DataFrame objects by filling null values in one DataFrame
with non-null values from the other DataFrame. The row and column indexes
of the resulting DataFrame will be the union of the two. Upon calling
first.combine_first(second), the result keeps the values from first and
falls back to second only where first.loc[index, col] is missing; where
both first.loc[index, col] and second.loc[index, col] are not missing,
the value from first takes precedence.
Parameters
otherDataFrameProvided DataFrame to use to fill null values.
Returns
DataFrameThe result of combining the provided DataFrame with the other object.
See also
DataFrame.combinePerform series-wise operation on two DataFrames using a given function.
Examples
>>> df1 = pd.DataFrame({'A': [None, 0], 'B': [None, 4]})
>>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]})
>>> df1.combine_first(df2)
A B
0 1.0 3.0
1 0.0 4.0
Null values still persist if the location of that null value
does not exist in other
>>> df1 = pd.DataFrame({'A': [None, 0], 'B': [4, None]})
>>> df2 = pd.DataFrame({'B': [3, 3], 'C': [1, 1]}, index=[1, 2])
>>> df1.combine_first(df2)
A B C
0 NaN 4.0 NaN
1 0.0 3.0 1.0
2 NaN 3.0 1.0
|
reference/api/pandas.DataFrame.combine_first.html
|
pandas.Period.ordinal
|
pandas.Period.ordinal
|
Period.ordinal#
|
reference/api/pandas.Period.ordinal.html
|
pandas.tseries.offsets.CustomBusinessDay.weekmask
|
pandas.tseries.offsets.CustomBusinessDay.weekmask
|
CustomBusinessDay.weekmask#
|
reference/api/pandas.tseries.offsets.CustomBusinessDay.weekmask.html
|
Resampling
|
Resampler objects are returned by resample calls: pandas.DataFrame.resample(), pandas.Series.resample().
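For orientation, a minimal sketch (names illustrative) of obtaining a Resampler and calling one of the methods catalogued below:
>>> ser = pd.Series(range(4), index=pd.date_range('2000-01-01', periods=4, freq='D'))
>>> r = ser.resample('2D')  # returns a Resampler object
>>> r.sum()
2000-01-01    1
2000-01-03    5
Freq: 2D, dtype: int64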
Indexing, iteration#
Resampler.__iter__()
Groupby iterator.
Resampler.groups
Dict {group name -> group labels}.
Resampler.indices
Dict {group name -> group indices}.
Resampler.get_group(name[, obj])
Construct DataFrame from group with provided name.
Function application#
Resampler.apply([func])
Aggregate using one or more operations over the specified axis.
Resampler.aggregate([func])
Aggregate using one or more operations over the specified axis.
Resampler.transform(arg, *args, **kwargs)
Call function producing a like-indexed Series on each group.
Resampler.pipe(func, *args, **kwargs)
Apply a func with arguments to this Resampler object and return its result.
Upsampling#
Resampler.ffill([limit])
Forward fill the values.
Resampler.backfill([limit])
(DEPRECATED) Backward fill the values.
Resampler.bfill([limit])
Backward fill the new missing values in the resampled data.
Resampler.pad([limit])
(DEPRECATED) Forward fill the values.
Resampler.nearest([limit])
Resample by using the nearest value.
Resampler.fillna(method[, limit])
Fill missing values introduced by upsampling.
Resampler.asfreq([fill_value])
Return the values at the new freq, essentially a reindex.
Resampler.interpolate([method, axis, limit, ...])
Interpolate values according to different methods.
Computations / descriptive stats#
Resampler.count()
Compute count of group, excluding missing values.
Resampler.nunique(*args, **kwargs)
Return number of unique elements in the group.
Resampler.first([numeric_only, min_count])
Compute the first non-null entry of each column.
Resampler.last([numeric_only, min_count])
Compute the last non-null entry of each column.
Resampler.max([numeric_only, min_count])
Compute max of group values.
Resampler.mean([numeric_only])
Compute mean of groups, excluding missing values.
Resampler.median([numeric_only])
Compute median of groups, excluding missing values.
Resampler.min([numeric_only, min_count])
Compute min of group values.
Resampler.ohlc(*args, **kwargs)
Compute open, high, low and close values of a group, excluding missing values.
Resampler.prod([numeric_only, min_count])
Compute prod of group values.
Resampler.size()
Compute group sizes.
Resampler.sem([ddof, numeric_only])
Compute standard error of the mean of groups, excluding missing values.
Resampler.std([ddof, numeric_only])
Compute standard deviation of groups, excluding missing values.
Resampler.sum([numeric_only, min_count])
Compute sum of group values.
Resampler.var([ddof, numeric_only])
Compute variance of groups, excluding missing values.
Resampler.quantile([q])
Return value at the given quantile.
|
reference/resampling.html
| null |
pandas.io.formats.style.Styler.applymap
|
`pandas.io.formats.style.Styler.applymap`
Apply a CSS-styling function elementwise.
```
>>> def color_negative(v, color):
... return f"color: {color};" if v < 0 else None
>>> df = pd.DataFrame(np.random.randn(5, 2), columns=["A", "B"])
>>> df.style.applymap(color_negative, color='red')
```
|
Styler.applymap(func, subset=None, **kwargs)[source]#
Apply a CSS-styling function elementwise.
Updates the HTML representation with the result.
Parameters
funcfunctionfunc should take a scalar and return a string.
subsetlabel, array-like, IndexSlice, optionalA valid 2d input to DataFrame.loc[<subset>], or, in the case of a 1d input
or single key, to DataFrame.loc[:, <subset>] where the columns are
prioritised, to limit the data to which the function is applied.
**kwargsdictPass along to func.
Returns
selfStyler
See also
Styler.applymap_indexApply a CSS-styling function to headers elementwise.
Styler.apply_indexApply a CSS-styling function to headers level-wise.
Styler.applyApply a CSS-styling function column-wise, row-wise, or table-wise.
Notes
The elements of the output of func should be CSS styles as strings, in the
format ‘attribute: value; attribute2: value2; …’ or,
if nothing is to be applied to that element, an empty string or None.
Examples
>>> def color_negative(v, color):
... return f"color: {color};" if v < 0 else None
>>> df = pd.DataFrame(np.random.randn(5, 2), columns=["A", "B"])
>>> df.style.applymap(color_negative, color='red')
Using subset to restrict application to a single column or multiple columns
>>> df.style.applymap(color_negative, color='red', subset="A")
...
>>> df.style.applymap(color_negative, color='red', subset=["A", "B"])
...
Using a 2d input to subset to select rows in addition to columns
>>> df.style.applymap(color_negative, color='red',
... subset=([0,1,2], slice(None)))
>>> df.style.applymap(color_negative, color='red', subset=(slice(0,5,2), "A"))
...
See Table Visualization user guide for
more details.
|
reference/api/pandas.io.formats.style.Styler.applymap.html
|
pandas.tseries.offsets.SemiMonthEnd.rule_code
|
pandas.tseries.offsets.SemiMonthEnd.rule_code
|
SemiMonthEnd.rule_code#
|
reference/api/pandas.tseries.offsets.SemiMonthEnd.rule_code.html
|
Input/output
|
Input/output
|
Pickling#
read_pickle(filepath_or_buffer[, ...])
Load pickled pandas object (or any object) from file.
DataFrame.to_pickle(path[, compression, ...])
Pickle (serialize) object to file.
Flat file#
read_table(filepath_or_buffer, *[, sep, ...])
Read general delimited file into DataFrame.
read_csv(filepath_or_buffer, *[, sep, ...])
Read a comma-separated values (csv) file into DataFrame.
DataFrame.to_csv([path_or_buf, sep, na_rep, ...])
Write object to a comma-separated values (csv) file.
read_fwf(filepath_or_buffer, *[, colspecs, ...])
Read a table of fixed-width formatted lines into DataFrame.
Clipboard#
read_clipboard([sep])
Read text from clipboard and pass to read_csv.
DataFrame.to_clipboard([excel, sep])
Copy object to the system clipboard.
Excel#
read_excel(io[, sheet_name, header, names, ...])
Read an Excel file into a pandas DataFrame.
DataFrame.to_excel(excel_writer[, ...])
Write object to an Excel sheet.
ExcelFile.parse([sheet_name, header, names, ...])
Parse specified sheet(s) into a DataFrame.
Styler.to_excel(excel_writer[, sheet_name, ...])
Write Styler to an Excel sheet.
ExcelWriter(path[, engine, date_format, ...])
Class for writing DataFrame objects into excel sheets.
JSON#
read_json(path_or_buf, *[, orient, typ, ...])
Convert a JSON string to pandas object.
json_normalize(data[, record_path, meta, ...])
Normalize semi-structured JSON data into a flat table.
DataFrame.to_json([path_or_buf, orient, ...])
Convert the object to a JSON string.
build_table_schema(data[, index, ...])
Create a Table schema from data.
HTML#
read_html(io, *[, match, flavor, header, ...])
Read HTML tables into a list of DataFrame objects.
DataFrame.to_html([buf, columns, col_space, ...])
Render a DataFrame as an HTML table.
Styler.to_html([buf, table_uuid, ...])
Write Styler to a file, buffer or string in HTML-CSS format.
XML#
read_xml(path_or_buffer, *[, xpath, ...])
Read XML document into a DataFrame object.
DataFrame.to_xml([path_or_buffer, index, ...])
Render a DataFrame to an XML document.
Latex#
DataFrame.to_latex([buf, columns, ...])
Render object to a LaTeX tabular, longtable, or nested table.
Styler.to_latex([buf, column_format, ...])
Write Styler to a file, buffer or string in LaTeX format.
HDFStore: PyTables (HDF5)#
read_hdf(path_or_buf[, key, mode, errors, ...])
Read from the store, close it if we opened it.
HDFStore.put(key, value[, format, index, ...])
Store object in HDFStore.
HDFStore.append(key, value[, format, axes, ...])
Append to Table in file.
HDFStore.get(key)
Retrieve pandas object stored in file.
HDFStore.select(key[, where, start, stop, ...])
Retrieve pandas object stored in file, optionally based on where criteria.
HDFStore.info()
Print detailed information on the store.
HDFStore.keys([include])
Return a list of keys corresponding to objects stored in HDFStore.
HDFStore.groups()
Return a list of all the top-level nodes.
HDFStore.walk([where])
Walk the pytables group hierarchy for pandas objects.
Warning
One can store a subclass of DataFrame or Series to HDF5,
but the type of the subclass is lost upon storing.
Feather#
read_feather(path[, columns, use_threads, ...])
Load a feather-format object from the file path.
DataFrame.to_feather(path, **kwargs)
Write a DataFrame to the binary Feather format.
Parquet#
read_parquet(path[, engine, columns, ...])
Load a parquet object from the file path, returning a DataFrame.
DataFrame.to_parquet([path, engine, ...])
Write a DataFrame to the binary parquet format.
ORC#
read_orc(path[, columns])
Load an ORC object from the file path, returning a DataFrame.
DataFrame.to_orc([path, engine, index, ...])
Write a DataFrame to the ORC format.
SAS#
read_sas(filepath_or_buffer, *[, format, ...])
Read SAS files stored as either XPORT or SAS7BDAT format files.
SPSS#
read_spss(path[, usecols, convert_categoricals])
Load an SPSS file from the file path, returning a DataFrame.
SQL#
read_sql_table(table_name, con[, schema, ...])
Read SQL database table into a DataFrame.
read_sql_query(sql, con[, index_col, ...])
Read SQL query into a DataFrame.
read_sql(sql, con[, index_col, ...])
Read SQL query or database table into a DataFrame.
DataFrame.to_sql(name, con[, schema, ...])
Write records stored in a DataFrame to a SQL database.
Google BigQuery#
read_gbq(query[, project_id, index_col, ...])
Load data from Google BigQuery.
STATA#
read_stata(filepath_or_buffer, *[, ...])
Read Stata file into DataFrame.
DataFrame.to_stata(path, *[, convert_dates, ...])
Export DataFrame object to Stata dta format.
StataReader.data_label
Return data label of Stata file.
StataReader.value_labels()
Return a nested dict associating each variable name to its value and label.
StataReader.variable_labels()
Return a dict associating each variable name with corresponding label.
StataWriter.write_file()
Export DataFrame object to Stata dta format.
|
reference/io.html
|
pandas.io.formats.style.Styler.set_table_attributes
|
`pandas.io.formats.style.Styler.set_table_attributes`
Set the table attributes added to the <table> HTML element.
```
>>> df = pd.DataFrame(np.random.randn(10, 4))
>>> df.style.set_table_attributes('class="pure-table"')
# ... <table class="pure-table"> ...
```
|
Styler.set_table_attributes(attributes)[source]#
Set the table attributes added to the <table> HTML element.
These are items in addition to automatic (by default) id attribute.
Parameters
attributesstr
Returns
selfStyler
See also
Styler.set_table_stylesSet the table styles included within the <style> HTML element.
Styler.set_td_classesSet the DataFrame of strings added to the class attribute of <td> HTML elements.
Examples
>>> df = pd.DataFrame(np.random.randn(10, 4))
>>> df.style.set_table_attributes('class="pure-table"')
# ... <table class="pure-table"> ...
|
reference/api/pandas.io.formats.style.Styler.set_table_attributes.html
|
pandas.core.resample.Resampler.get_group
|
`pandas.core.resample.Resampler.get_group`
Construct DataFrame from group with provided name.
|
Resampler.get_group(name, obj=None)[source]#
Construct DataFrame from group with provided name.
Parameters
nameobjectThe name of the group to get as a DataFrame.
objDataFrame, default NoneThe DataFrame to take the group from. If
it is None, the object groupby was called on will
be used.
Returns
groupsame type as obj
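A minimal sketch, assuming a daily series resampled into two-day bins, where the group name is the bin's left-edge Timestamp (the exact repr may vary):
>>> idx = pd.date_range('2023-01-01', periods=4, freq='D')
>>> ser = pd.Series([1, 2, 3, 4], index=idx)
>>> ser.resample('2D').get_group(pd.Timestamp('2023-01-01'))
2023-01-01    1
2023-01-02    2
dtype: int64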
|
reference/api/pandas.core.resample.Resampler.get_group.html
|
pandas.errors.DuplicateLabelError
|
`pandas.errors.DuplicateLabelError`
Error raised when an operation would introduce duplicate labels.
```
>>> s = pd.Series([0, 1, 2], index=['a', 'b', 'c']).set_flags(
... allows_duplicate_labels=False
... )
>>> s.reindex(['a', 'a', 'b'])
Traceback (most recent call last):
...
DuplicateLabelError: Index has duplicates.
positions
label
a [0, 1]
```
|
exception pandas.errors.DuplicateLabelError[source]#
Error raised when an operation would introduce duplicate labels.
New in version 1.2.0.
Examples
>>> s = pd.Series([0, 1, 2], index=['a', 'b', 'c']).set_flags(
... allows_duplicate_labels=False
... )
>>> s.reindex(['a', 'a', 'b'])
Traceback (most recent call last):
...
DuplicateLabelError: Index has duplicates.
positions
label
a [0, 1]
|
reference/api/pandas.errors.DuplicateLabelError.html
|
pandas.core.groupby.DataFrameGroupBy.tshift
|
`pandas.core.groupby.DataFrameGroupBy.tshift`
Shift the time index, using the index’s frequency if available.
Deprecated since version 1.1.0: Use shift instead.
|
property DataFrameGroupBy.tshift[source]#
Shift the time index, using the index’s frequency if available.
Deprecated since version 1.1.0: Use shift instead.
Parameters
periodsintNumber of periods to move, can be positive or negative.
freqDateOffset, timedelta, or str, default NoneIncrement to use from the tseries module
or time rule expressed as a string (e.g. ‘EOM’).
axis{0 or ‘index’, 1 or ‘columns’, None}, default 0Corresponds to the axis that contains the Index.
For Series this parameter is unused and defaults to 0.
Returns
shiftedSeries/DataFrame
Notes
If freq is not specified then tries to use the freq or inferred_freq
attributes of the index. If neither of those attributes exist, a
ValueError is thrown
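Since tshift is deprecated, a sketch of the suggested replacement using shift with an explicit freq (names and dates illustrative):
>>> idx = pd.date_range('2023-01-01', periods=3, freq='D')
>>> df = pd.DataFrame({'g': ['x', 'x', 'y'], 'v': [1, 2, 3]}, index=idx)
>>> df.groupby('g').shift(periods=1, freq='D')  # shifts the index, keeping values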
|
reference/api/pandas.core.groupby.DataFrameGroupBy.tshift.html
|
pandas.Series.resample
|
`pandas.Series.resample`
Resample time-series data.
Convenience method for frequency conversion and resampling of time series.
The object must have a datetime-like index (DatetimeIndex, PeriodIndex,
or TimedeltaIndex), or the caller must pass the label of a datetime-like
series/index to the on/level keyword parameter.
```
>>> index = pd.date_range('1/1/2000', periods=9, freq='T')
>>> series = pd.Series(range(9), index=index)
>>> series
2000-01-01 00:00:00 0
2000-01-01 00:01:00 1
2000-01-01 00:02:00 2
2000-01-01 00:03:00 3
2000-01-01 00:04:00 4
2000-01-01 00:05:00 5
2000-01-01 00:06:00 6
2000-01-01 00:07:00 7
2000-01-01 00:08:00 8
Freq: T, dtype: int64
```
|
Series.resample(rule, axis=0, closed=None, label=None, convention='start', kind=None, loffset=None, base=None, on=None, level=None, origin='start_day', offset=None, group_keys=_NoDefault.no_default)[source]#
Resample time-series data.
Convenience method for frequency conversion and resampling of time series.
The object must have a datetime-like index (DatetimeIndex, PeriodIndex,
or TimedeltaIndex), or the caller must pass the label of a datetime-like
series/index to the on/level keyword parameter.
Parameters
ruleDateOffset, Timedelta or strThe offset string or object representing target conversion.
axis{0 or ‘index’, 1 or ‘columns’}, default 0Which axis to use for up- or down-sampling. For Series this parameter
is unused and defaults to 0. Must be
DatetimeIndex, TimedeltaIndex or PeriodIndex.
closed{‘right’, ‘left’}, default NoneWhich side of bin interval is closed. The default is ‘left’
for all frequency offsets except for ‘M’, ‘A’, ‘Q’, ‘BM’,
‘BA’, ‘BQ’, and ‘W’ which all have a default of ‘right’.
label{‘right’, ‘left’}, default NoneWhich bin edge label to label bucket with. The default is ‘left’
for all frequency offsets except for ‘M’, ‘A’, ‘Q’, ‘BM’,
‘BA’, ‘BQ’, and ‘W’ which all have a default of ‘right’.
convention{‘start’, ‘end’, ‘s’, ‘e’}, default ‘start’For PeriodIndex only, controls whether to use the start or
end of rule.
kind{‘timestamp’, ‘period’}, optional, default NonePass ‘timestamp’ to convert the resulting index to a
DateTimeIndex or ‘period’ to convert it to a PeriodIndex.
By default the input representation is retained.
loffsettimedelta, default NoneAdjust the resampled time labels.
Deprecated since version 1.1.0: You should add the loffset to the df.index after the resample.
See below.
baseint, default 0For frequencies that evenly subdivide 1 day, the “origin” of the
aggregated intervals. For example, for ‘5min’ frequency, base could
range from 0 through 4. Defaults to 0.
Deprecated since version 1.1.0: The new arguments that you should use are ‘offset’ or ‘origin’.
onstr, optionalFor a DataFrame, column to use instead of index for resampling.
Column must be datetime-like.
levelstr or int, optionalFor a MultiIndex, level (name or number) to use for
resampling. level must be datetime-like.
originTimestamp or str, default ‘start_day’The timestamp on which to adjust the grouping. The timezone of origin
must match the timezone of the index.
If string, must be one of the following:
‘epoch’: origin is 1970-01-01
‘start’: origin is the first value of the timeseries
‘start_day’: origin is the first day at midnight of the timeseries
New in version 1.1.0.
‘end’: origin is the last value of the timeseries
‘end_day’: origin is the ceiling midnight of the last day
New in version 1.3.0.
offsetTimedelta or str, default is NoneAn offset timedelta added to the origin.
New in version 1.1.0.
group_keysbool, optionalWhether to include the group keys in the result index when using
.apply() on the resampled object. Not specifying group_keys
will retain values-dependent behavior from pandas 1.4
and earlier (see pandas 1.5.0 Release notes
for examples). In a future version of pandas, the behavior will
default to the same as specifying group_keys=False.
New in version 1.5.0.
Returns
pandas.core.ResamplerResampler object.
See also
Series.resampleResample a Series.
DataFrame.resampleResample a DataFrame.
groupbyGroup Series by mapping, function, label, or list of labels.
asfreqReindex a Series with the given frequency without grouping.
Notes
See the user guide
for more.
To learn more about the offset strings, please see this link.
Examples
Start by creating a series with 9 one minute timestamps.
>>> index = pd.date_range('1/1/2000', periods=9, freq='T')
>>> series = pd.Series(range(9), index=index)
>>> series
2000-01-01 00:00:00 0
2000-01-01 00:01:00 1
2000-01-01 00:02:00 2
2000-01-01 00:03:00 3
2000-01-01 00:04:00 4
2000-01-01 00:05:00 5
2000-01-01 00:06:00 6
2000-01-01 00:07:00 7
2000-01-01 00:08:00 8
Freq: T, dtype: int64
Downsample the series into 3 minute bins and sum the values
of the timestamps falling into a bin.
>>> series.resample('3T').sum()
2000-01-01 00:00:00 3
2000-01-01 00:03:00 12
2000-01-01 00:06:00 21
Freq: 3T, dtype: int64
Downsample the series into 3 minute bins as above, but label each
bin using the right edge instead of the left. Please note that the
value in the bucket used as the label is not included in the bucket,
which it labels. For example, in the original series the
bucket 2000-01-01 00:03:00 contains the value 3, but the summed
value in the resampled bucket with the label 2000-01-01 00:03:00
does not include 3 (if it did, the summed value would be 6, not 3).
To include this value close the right side of the bin interval as
illustrated in the example below this one.
>>> series.resample('3T', label='right').sum()
2000-01-01 00:03:00 3
2000-01-01 00:06:00 12
2000-01-01 00:09:00 21
Freq: 3T, dtype: int64
Downsample the series into 3 minute bins as above, but close the right
side of the bin interval.
>>> series.resample('3T', label='right', closed='right').sum()
2000-01-01 00:00:00 0
2000-01-01 00:03:00 6
2000-01-01 00:06:00 15
2000-01-01 00:09:00 15
Freq: 3T, dtype: int64
Upsample the series into 30 second bins.
>>> series.resample('30S').asfreq()[0:5] # Select first 5 rows
2000-01-01 00:00:00 0.0
2000-01-01 00:00:30 NaN
2000-01-01 00:01:00 1.0
2000-01-01 00:01:30 NaN
2000-01-01 00:02:00 2.0
Freq: 30S, dtype: float64
Upsample the series into 30 second bins and fill the NaN
values using the ffill method.
>>> series.resample('30S').ffill()[0:5]
2000-01-01 00:00:00 0
2000-01-01 00:00:30 0
2000-01-01 00:01:00 1
2000-01-01 00:01:30 1
2000-01-01 00:02:00 2
Freq: 30S, dtype: int64
Upsample the series into 30 second bins and fill the
NaN values using the bfill method.
>>> series.resample('30S').bfill()[0:5]
2000-01-01 00:00:00 0
2000-01-01 00:00:30 1
2000-01-01 00:01:00 1
2000-01-01 00:01:30 2
2000-01-01 00:02:00 2
Freq: 30S, dtype: int64
Pass a custom function via apply
>>> def custom_resampler(arraylike):
... return np.sum(arraylike) + 5
...
>>> series.resample('3T').apply(custom_resampler)
2000-01-01 00:00:00 8
2000-01-01 00:03:00 17
2000-01-01 00:06:00 26
Freq: 3T, dtype: int64
For a Series with a PeriodIndex, the keyword convention can be
used to control whether to use the start or end of rule.
Resample a year by quarter using ‘start’ convention. Values are
assigned to the first quarter of the period.
>>> s = pd.Series([1, 2], index=pd.period_range('2012-01-01',
... freq='A',
... periods=2))
>>> s
2012 1
2013 2
Freq: A-DEC, dtype: int64
>>> s.resample('Q', convention='start').asfreq()
2012Q1 1.0
2012Q2 NaN
2012Q3 NaN
2012Q4 NaN
2013Q1 2.0
2013Q2 NaN
2013Q3 NaN
2013Q4 NaN
Freq: Q-DEC, dtype: float64
Resample quarters by month using ‘end’ convention. Values are
assigned to the last month of the period.
>>> q = pd.Series([1, 2, 3, 4], index=pd.period_range('2018-01-01',
... freq='Q',
... periods=4))
>>> q
2018Q1 1
2018Q2 2
2018Q3 3
2018Q4 4
Freq: Q-DEC, dtype: int64
>>> q.resample('M', convention='end').asfreq()
2018-03 1.0
2018-04 NaN
2018-05 NaN
2018-06 2.0
2018-07 NaN
2018-08 NaN
2018-09 3.0
2018-10 NaN
2018-11 NaN
2018-12 4.0
Freq: M, dtype: float64
For DataFrame objects, the keyword on can be used to specify the
column instead of the index for resampling.
>>> d = {'price': [10, 11, 9, 13, 14, 18, 17, 19],
... 'volume': [50, 60, 40, 100, 50, 100, 40, 50]}
>>> df = pd.DataFrame(d)
>>> df['week_starting'] = pd.date_range('01/01/2018',
... periods=8,
... freq='W')
>>> df
price volume week_starting
0 10 50 2018-01-07
1 11 60 2018-01-14
2 9 40 2018-01-21
3 13 100 2018-01-28
4 14 50 2018-02-04
5 18 100 2018-02-11
6 17 40 2018-02-18
7 19 50 2018-02-25
>>> df.resample('M', on='week_starting').mean()
price volume
week_starting
2018-01-31 10.75 62.5
2018-02-28 17.00 60.0
For a DataFrame with MultiIndex, the keyword level can be used to
specify on which level the resampling needs to take place.
>>> days = pd.date_range('1/1/2000', periods=4, freq='D')
>>> d2 = {'price': [10, 11, 9, 13, 14, 18, 17, 19],
... 'volume': [50, 60, 40, 100, 50, 100, 40, 50]}
>>> df2 = pd.DataFrame(
... d2,
... index=pd.MultiIndex.from_product(
... [days, ['morning', 'afternoon']]
... )
... )
>>> df2
price volume
2000-01-01 morning 10 50
afternoon 11 60
2000-01-02 morning 9 40
afternoon 13 100
2000-01-03 morning 14 50
afternoon 18 100
2000-01-04 morning 17 40
afternoon 19 50
>>> df2.resample('D', level=0).sum()
price volume
2000-01-01 21 110
2000-01-02 22 140
2000-01-03 32 150
2000-01-04 36 90
If you want to adjust the start of the bins based on a fixed timestamp:
>>> start, end = '2000-10-01 23:30:00', '2000-10-02 00:30:00'
>>> rng = pd.date_range(start, end, freq='7min')
>>> ts = pd.Series(np.arange(len(rng)) * 3, index=rng)
>>> ts
2000-10-01 23:30:00 0
2000-10-01 23:37:00 3
2000-10-01 23:44:00 6
2000-10-01 23:51:00 9
2000-10-01 23:58:00 12
2000-10-02 00:05:00 15
2000-10-02 00:12:00 18
2000-10-02 00:19:00 21
2000-10-02 00:26:00 24
Freq: 7T, dtype: int64
>>> ts.resample('17min').sum()
2000-10-01 23:14:00 0
2000-10-01 23:31:00 9
2000-10-01 23:48:00 21
2000-10-02 00:05:00 54
2000-10-02 00:22:00 24
Freq: 17T, dtype: int64
>>> ts.resample('17min', origin='epoch').sum()
2000-10-01 23:18:00 0
2000-10-01 23:35:00 18
2000-10-01 23:52:00 27
2000-10-02 00:09:00 39
2000-10-02 00:26:00 24
Freq: 17T, dtype: int64
>>> ts.resample('17min', origin='2000-01-01').sum()
2000-10-01 23:24:00 3
2000-10-01 23:41:00 15
2000-10-01 23:58:00 45
2000-10-02 00:15:00 45
Freq: 17T, dtype: int64
If you want to adjust the start of the bins with an offset Timedelta, the two
following lines are equivalent:
>>> ts.resample('17min', origin='start').sum()
2000-10-01 23:30:00 9
2000-10-01 23:47:00 21
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
Freq: 17T, dtype: int64
>>> ts.resample('17min', offset='23h30min').sum()
2000-10-01 23:30:00 9
2000-10-01 23:47:00 21
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
Freq: 17T, dtype: int64
If you want to take the largest Timestamp as the end of the bins:
>>> ts.resample('17min', origin='end').sum()
2000-10-01 23:35:00 0
2000-10-01 23:52:00 18
2000-10-02 00:09:00 27
2000-10-02 00:26:00 63
Freq: 17T, dtype: int64
In contrast with the start_day, you can use end_day to take the ceiling
midnight of the largest Timestamp as the end of the bins and drop the bins
not containing data:
>>> ts.resample('17min', origin='end_day').sum()
2000-10-01 23:38:00 3
2000-10-01 23:55:00 15
2000-10-02 00:12:00 45
2000-10-02 00:29:00 45
Freq: 17T, dtype: int64
To replace the use of the deprecated base argument, you can now use offset,
in this example it is equivalent to have base=2:
>>> ts.resample('17min', offset='2min').sum()
2000-10-01 23:16:00 0
2000-10-01 23:33:00 9
2000-10-01 23:50:00 36
2000-10-02 00:07:00 39
2000-10-02 00:24:00 24
Freq: 17T, dtype: int64
To replace the use of the deprecated loffset argument:
>>> from pandas.tseries.frequencies import to_offset
>>> loffset = '19min'
>>> ts_out = ts.resample('17min').sum()
>>> ts_out.index = ts_out.index + to_offset(loffset)
>>> ts_out
2000-10-01 23:33:00 0
2000-10-01 23:50:00 9
2000-10-02 00:07:00 21
2000-10-02 00:24:00 54
2000-10-02 00:41:00 24
Freq: 17T, dtype: int64
|
reference/api/pandas.Series.resample.html
|
pandas.DataFrame.divide
|
`pandas.DataFrame.divide`
Get Floating division of dataframe and other, element-wise (binary operator truediv).
```
>>> df = pd.DataFrame({'angles': [0, 3, 4],
... 'degrees': [360, 180, 360]},
... index=['circle', 'triangle', 'rectangle'])
>>> df
angles degrees
circle 0 360
triangle 3 180
rectangle 4 360
```
|
DataFrame.divide(other, axis='columns', level=None, fill_value=None)[source]#
Get Floating division of dataframe and other, element-wise (binary operator truediv).
Equivalent to dataframe / other, but with support to substitute a fill_value
for missing data in one of the inputs. With reverse version, rtruediv.
Among flexible wrappers (add, sub, mul, div, mod, pow) to
arithmetic operators: +, -, *, /, //, %, **.
Parameters
otherscalar, sequence, Series, dict or DataFrameAny single or multiple element data structure, or list-like object.
axis{0 or ‘index’, 1 or ‘columns’}Whether to compare by the index (0 or ‘index’) or columns.
(1 or ‘columns’). For Series input, axis to match Series index on.
levelint or labelBroadcast across a level, matching Index values on the
passed MultiIndex level.
fill_valuefloat or None, default NoneFill existing missing (NaN) values, and any new element needed for
successful DataFrame alignment, with this value before computation.
If data in both corresponding DataFrame locations is missing
the result will be missing.
Returns
DataFrameResult of the arithmetic operation.
See also
DataFrame.addAdd DataFrames.
DataFrame.subSubtract DataFrames.
DataFrame.mulMultiply DataFrames.
DataFrame.divDivide DataFrames (float division).
DataFrame.truedivDivide DataFrames (float division).
DataFrame.floordivDivide DataFrames (integer division).
DataFrame.modCalculate modulo (remainder after division).
DataFrame.powCalculate exponential power.
Notes
Mismatched indices will be unioned together.
Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4],
... 'degrees': [360, 180, 360]},
... index=['circle', 'triangle', 'rectangle'])
>>> df
angles degrees
circle 0 360
triangle 3 180
rectangle 4 360
Add a scalar with the operator version, which returns the same
results.
>>> df + 1
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
>>> df.add(1)
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
Divide by constant with reverse version.
>>> df.div(10)
angles degrees
circle 0.0 36.0
triangle 0.3 18.0
rectangle 0.4 36.0
>>> df.rdiv(10)
angles degrees
circle inf 0.027778
triangle 3.333333 0.055556
rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2]
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub([1, 2], axis='columns')
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
... axis='index')
angles degrees
circle -1 359
triangle 2 179
rectangle 3 359
Multiply a dictionary by axis.
>>> df.mul({'angles': 0, 'degrees': 2})
angles degrees
circle 0 720
triangle 0 360
rectangle 0 720
>>> df.mul({'circle': 0, 'triangle': 2, 'rectangle': 3}, axis='index')
angles degrees
circle 0 0
triangle 6 360
rectangle 12 1080
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]},
... index=['circle', 'triangle', 'rectangle'])
>>> other
angles
circle 0
triangle 3
rectangle 4
>>> df * other
angles degrees
circle 0 NaN
triangle 9 NaN
rectangle 16 NaN
>>> df.mul(other, fill_value=0)
angles degrees
circle 0 0.0
triangle 9 0.0
rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
... 'degrees': [360, 180, 360, 360, 540, 720]},
... index=[['A', 'A', 'A', 'B', 'B', 'B'],
... ['circle', 'triangle', 'rectangle',
... 'square', 'pentagon', 'hexagon']])
>>> df_multindex
angles degrees
A circle 0 360
triangle 3 180
rectangle 4 360
B square 4 360
pentagon 5 540
hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0)
angles degrees
A circle NaN 1.0
triangle 1.0 1.0
rectangle 1.0 1.0
B square 0.0 0.0
pentagon 0.0 0.0
hexagon 0.0 0.0
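Since divide is an alias for div (and truediv), the two calls are interchangeable:
>>> df.divide(10).equals(df.div(10))
True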
|
reference/api/pandas.DataFrame.divide.html
|
pandas.Period.second
|
`pandas.Period.second`
Get the second component of the Period.
```
>>> p = pd.Period("2018-03-11 13:03:12.050000")
>>> p.second
12
```
|
Period.second#
Get the second component of the Period.
Returns
intThe second of the Period (ranges from 0 to 59).
See also
Period.hourGet the hour component of the Period.
Period.minuteGet the minute component of the Period.
Examples
>>> p = pd.Period("2018-03-11 13:03:12.050000")
>>> p.second
12
|
reference/api/pandas.Period.second.html
|
pandas.Series.mean
|
`pandas.Series.mean`
Return the mean of the values over the requested axis.
|
Series.mean(axis=_NoDefault.no_default, skipna=True, level=None, numeric_only=None, **kwargs)[source]#
Return the mean of the values over the requested axis.
Parameters
axis{index (0)}Axis for the function to be applied on.
For Series this parameter is unused and defaults to 0.
skipnabool, default TrueExclude NA/null values when computing the result.
levelint or level name, default NoneIf the axis is a MultiIndex (hierarchical), count along a
particular level, collapsing into a scalar.
Deprecated since version 1.3.0: The level keyword is deprecated. Use groupby instead.
numeric_onlybool, default NoneInclude only float, int, boolean columns. If None, will attempt to use
everything, then use only numeric data. Not implemented for Series.
Deprecated since version 1.5.0: Specifying numeric_only=None is deprecated. The default value will be
False in a future version of pandas.
**kwargsAdditional keyword arguments to be passed to the function.
Returns
scalar or Series (if level specified)
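For example, with the default skipna=True, missing values are ignored:
>>> pd.Series([1, 2, 3]).mean()
2.0
>>> pd.Series([1, 2, np.nan]).mean()
1.5
>>> pd.Series([1, 2, np.nan]).mean(skipna=False)
nan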
|
reference/api/pandas.Series.mean.html
|
Extending pandas
|
Extending pandas
|
While pandas provides a rich set of methods, containers, and data types, your
needs may not be fully satisfied. pandas offers a few options for extending
pandas.
Registering custom accessors#
Libraries can use the decorators
pandas.api.extensions.register_dataframe_accessor(),
pandas.api.extensions.register_series_accessor(), and
pandas.api.extensions.register_index_accessor(), to add additional
“namespaces” to pandas objects. All of these follow a similar convention: you
decorate a class, providing the name of the attribute to add. The class’s
__init__ method receives the pandas object the accessor is called on. For example:
@pd.api.extensions.register_dataframe_accessor("geo")
class GeoAccessor:
    def __init__(self, pandas_obj):
        self._validate(pandas_obj)
        self._obj = pandas_obj

    @staticmethod
    def _validate(obj):
        # verify there is a column latitude and a column longitude
        if "latitude" not in obj.columns or "longitude" not in obj.columns:
            raise AttributeError("Must have 'latitude' and 'longitude'.")

    @property
    def center(self):
        # return the geographic center point of this DataFrame
        lat = self._obj.latitude
        lon = self._obj.longitude
        return (float(lon.mean()), float(lat.mean()))

    def plot(self):
        # plot this array's data on a map, e.g., using Cartopy
        pass
Now users can access your methods using the geo namespace:
>>> ds = pd.DataFrame(
... {"longitude": np.linspace(0, 10), "latitude": np.linspace(0, 20)}
... )
>>> ds.geo.center
(5.0, 10.0)
>>> ds.geo.plot()
# plots data on a map
This can be a convenient way to extend pandas objects without subclassing them.
If you write a custom accessor, make a pull request adding it to our
pandas ecosystem page.
We highly recommend validating the data in your accessor’s __init__.
In our GeoAccessor, we validate that the data contains the expected columns,
raising an AttributeError when the validation fails.
For a Series accessor, you should validate the dtype if the accessor
applies only to certain dtypes.
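For instance, a minimal sketch of such a validation (the accessor name my_dt and the class are made up for illustration):
import pandas as pd

@pd.api.extensions.register_series_accessor("my_dt")
class MyDatetimeAccessor:
    def __init__(self, pandas_obj):
        # reject Series whose dtype this accessor cannot handle
        if not pd.api.types.is_datetime64_any_dtype(pandas_obj):
            raise AttributeError("Can only use .my_dt accessor with datetime values.")
        self._obj = pandas_obj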
Extension types#
Note
The pandas.api.extensions.ExtensionDtype and pandas.api.extensions.ExtensionArray APIs were
experimental prior to pandas 1.5. Starting with version 1.5, future changes will follow
the pandas deprecation policy.
pandas defines an interface for implementing data types and arrays that extend
NumPy’s type system. pandas itself uses the extension system for some types
that aren’t built into NumPy (categorical, period, interval, datetime with
timezone).
Libraries can define a custom array and data type. When pandas encounters these
objects, they will be handled properly (i.e. not converted to an ndarray of
objects). Many methods like pandas.isna() will dispatch to the extension
type’s implementation.
If you’re building a library that implements the interface, please publicize it
on Extension data types.
The interface consists of two classes.
ExtensionDtype#
A pandas.api.extensions.ExtensionDtype is similar to a numpy.dtype object. It describes the
data type. Implementors are responsible for a few unique items like the name.
One particularly important item is the type property. This should be the
class that is the scalar type for your data. For example, if you were writing an
extension array for IP Address data, this might be ipaddress.IPv4Address.
See the extension dtype source for the interface definition.
pandas.api.extensions.ExtensionDtype can be registered to pandas to allow creation via a string dtype name.
This allows one to instantiate Series and .astype() with a registered string name, for
example 'category' is the registered string name for CategoricalDtype.
See the extension dtype documentation for more on how to register dtypes.
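As a sketch only, a minimal dtype for the IP-address example above might look like the following (IPv4Dtype and IPv4Array are hypothetical names, not pandas API):
import ipaddress
from pandas.api.extensions import ExtensionDtype, register_extension_dtype

@register_extension_dtype  # makes dtype="ipv4" usable as a string name
class IPv4Dtype(ExtensionDtype):
    name = "ipv4"                    # the registered string name
    type = ipaddress.IPv4Address     # scalar type of the data

    @classmethod
    def construct_array_type(cls):
        # the ExtensionArray subclass paired with this dtype
        return IPv4Array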
ExtensionArray#
This class provides all the array-like functionality. ExtensionArrays are
limited to 1 dimension. An ExtensionArray is linked to an ExtensionDtype via the
dtype attribute.
pandas makes no restrictions on how an extension array is created via its
__new__ or __init__, and puts no restrictions on how you store your
data. We do require that your array be convertible to a NumPy array, even if
this is relatively expensive (as it is for Categorical).
They may be backed by none, one, or many NumPy arrays. For example,
pandas.Categorical is an extension array backed by two arrays,
one for codes and one for categories. An array of IPv6 addresses may
be backed by a NumPy structured array with two fields, one for the
lower 64 bits and one for the upper 64 bits. Or they may be backed
by some other storage type, like Python lists.
See the extension array source for the interface definition. The docstrings
and comments contain guidance for properly implementing the interface.
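As a rough orientation, a skeleton of the interface might look like the following; treat the source as authoritative for the exact signatures (IPv4Array continues the hypothetical example above):
from pandas.api.extensions import ExtensionArray

class IPv4Array(ExtensionArray):
    # the abstract methods and properties the interface requires
    @classmethod
    def _from_sequence(cls, scalars, *, dtype=None, copy=False): ...
    @classmethod
    def _from_factorized(cls, values, original): ...
    def __getitem__(self, item): ...
    def __len__(self): ...
    @property
    def dtype(self): ...
    @property
    def nbytes(self): ...
    def isna(self): ...
    def take(self, indices, *, allow_fill=False, fill_value=None): ...
    def copy(self): ...
    @classmethod
    def _concat_same_type(cls, to_concat): ...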
ExtensionArray operator support#
By default, there are no operators defined for the class ExtensionArray.
There are two approaches for providing operator support for your ExtensionArray:
Define each of the operators on your ExtensionArray subclass.
Use an operator implementation from pandas that depends on operators that are already defined
on the underlying elements (scalars) of the ExtensionArray.
Note
Regardless of the approach, you may want to set __array_priority__
if you want your implementation to be called when involved in binary operations
with NumPy arrays.
For the first approach, you define selected operators, e.g., __add__, __le__, etc. that
you want your ExtensionArray subclass to support.
The second approach assumes that the underlying elements (i.e., scalar type) of the ExtensionArray
have the individual operators already defined. In other words, if your ExtensionArray
named MyExtensionArray is implemented so that each element is an instance
of the class MyExtensionElement, then if the operators are defined
for MyExtensionElement, the second approach will automatically
define the operators for MyExtensionArray.
A mixin class, ExtensionScalarOpsMixin, supports this second
approach. If you are developing an ExtensionArray subclass, for example MyExtensionArray,
you can simply include ExtensionScalarOpsMixin as a parent class of MyExtensionArray,
and then call the methods _add_arithmetic_ops() and/or
_add_comparison_ops() to hook the operators into
your MyExtensionArray class, as follows:
from pandas.api.extensions import ExtensionArray, ExtensionScalarOpsMixin
class MyExtensionArray(ExtensionArray, ExtensionScalarOpsMixin):
    pass
MyExtensionArray._add_arithmetic_ops()
MyExtensionArray._add_comparison_ops()
Note
Since pandas automatically calls the underlying operator on each
element one-by-one, this might not be as performant as implementing your own
version of the associated operators directly on the ExtensionArray.
For arithmetic operations, this implementation will try to reconstruct a new
ExtensionArray with the result of the element-wise operation. Whether
or not that succeeds depends on whether the operation returns a result
that’s valid for the ExtensionArray. If an ExtensionArray cannot
be reconstructed, an ndarray containing the scalars will be returned instead.
For ease of implementation and consistency with operations between pandas
and NumPy ndarrays, we recommend not handling Series and Indexes in your binary ops.
Instead, you should detect these cases and return NotImplemented.
When pandas encounters an operation like op(Series, ExtensionArray), pandas
will:
1. unbox the array from the Series (Series.array)
2. call result = op(values, ExtensionArray)
3. re-box the result in a Series
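A sketch of that detection inside a hand-written operator (class name hypothetical):
import pandas as pd
from pandas.api.extensions import ExtensionArray

class MyExtensionArray(ExtensionArray):
    def __add__(self, other):
        # defer to pandas for its containers; pandas unboxes and re-calls
        if isinstance(other, (pd.Series, pd.DataFrame, pd.Index)):
            return NotImplemented
        ...  # element-wise addition against scalars/arrays goes here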
NumPy universal functions#
Series implements __array_ufunc__. As part of the implementation,
pandas unboxes the ExtensionArray from the Series, applies the ufunc,
and re-boxes it if necessary.
If applicable, we highly recommend that you implement __array_ufunc__ in your
extension array to avoid coercion to an ndarray. See
the NumPy documentation
for an example.
As part of your implementation, we require that you defer to pandas when a pandas
container (Series, DataFrame, Index) is detected in inputs.
If any of those is present, you should return NotImplemented. pandas will take care of
unboxing the array from the container and re-calling the ufunc with the unwrapped input.
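A minimal sketch of that deferral inside __array_ufunc__ (again with a hypothetical class):
import pandas as pd
from pandas.api.extensions import ExtensionArray

class MyExtensionArray(ExtensionArray):
    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        # defer to pandas when a pandas container is among the inputs
        if any(isinstance(x, (pd.Series, pd.DataFrame, pd.Index)) for x in inputs):
            return NotImplemented
        ...  # apply the ufunc to the underlying data and re-wrap the result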
Testing extension arrays#
We provide a test suite for ensuring that your extension arrays satisfy the expected
behavior. To use the test suite, you must provide several pytest fixtures and inherit
from the base test class. The required fixtures are found in
https://github.com/pandas-dev/pandas/blob/main/pandas/tests/extension/conftest.py.
To use a test, subclass it:
from pandas.tests.extension import base

class TestConstructors(base.BaseConstructorsTests):
    pass
See https://github.com/pandas-dev/pandas/blob/main/pandas/tests/extension/base/__init__.py
for a list of all the tests available.
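For example, two of the required fixtures could be provided as below (a sketch; see the linked conftest for the full list and the exact expectations on each fixture):
# conftest.py
import pytest

@pytest.fixture
def dtype():
    return IPv4Dtype()

@pytest.fixture
def data(dtype):
    # a length-100 array where data[0] and data[1] are distinct and non-missing
    return IPv4Array._from_sequence([...], dtype=dtype)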
Compatibility with Apache Arrow#
An ExtensionArray can support conversion to / from pyarrow arrays
(and thus support for example serialization to the Parquet file format)
by implementing two methods: ExtensionArray.__arrow_array__ and
ExtensionDtype.__from_arrow__.
The ExtensionArray.__arrow_array__ ensures that pyarrow knows how
to convert the specific extension array into a pyarrow.Array (also when
included as a column in a pandas DataFrame):
class MyExtensionArray(ExtensionArray):
    ...

    def __arrow_array__(self, type=None):
        # convert the underlying array values to a pyarrow Array
        import pyarrow
        return pyarrow.array(..., type=type)
The ExtensionDtype.__from_arrow__ method then controls the conversion
back from pyarrow to a pandas ExtensionArray. This method receives a pyarrow
Array or ChunkedArray as only argument and is expected to return the
appropriate pandas ExtensionArray for this dtype and the passed values:
class ExtensionDtype:
    ...

    def __from_arrow__(self, array) -> ExtensionArray:
        # array: a pyarrow.Array or pyarrow.ChunkedArray
        ...
See more in the Arrow documentation.
Those methods have been implemented for the nullable integer and string extension
dtypes included in pandas, and ensure roundtrip to pyarrow and the Parquet file format.
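With both methods in place, a roundtrip through pyarrow can be sketched as follows (df holding a column of your extension type):
>>> import pyarrow as pa
>>> table = pa.Table.from_pandas(df)   # pyarrow calls __arrow_array__
>>> table.to_pandas()                  # pandas calls __from_arrow__ per column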
Subclassing pandas data structures#
Warning
There are some easier alternatives before considering subclassing pandas data structures.
Extensible method chains with pipe
Use composition. See here.
Extending by registering an accessor
Extending by extension type
This section describes how to subclass pandas data structures to meet more specific needs. There are two points that need attention:
Override constructor properties.
Define original properties
Note
You can find a nice example in geopandas project.
Override constructor properties#
Each data structure has several constructor properties for returning a new
data structure as the result of an operation. By overriding these properties,
you can retain subclasses through pandas data manipulations.
There are 3 possible constructor properties to be defined on a subclass:
DataFrame/Series._constructor: Used when a manipulation result has the same dimension as the original.
DataFrame._constructor_sliced: Used when a DataFrame (sub-)class manipulation result should be a Series (sub-)class.
Series._constructor_expanddim: Used when a Series (sub-)class manipulation result should be a DataFrame (sub-)class, e.g. Series.to_frame().
The example below shows how to define SubclassedSeries and SubclassedDataFrame, overriding the constructor properties.
class SubclassedSeries(pd.Series):
    @property
    def _constructor(self):
        return SubclassedSeries

    @property
    def _constructor_expanddim(self):
        return SubclassedDataFrame

class SubclassedDataFrame(pd.DataFrame):
    @property
    def _constructor(self):
        return SubclassedDataFrame

    @property
    def _constructor_sliced(self):
        return SubclassedSeries
>>> s = SubclassedSeries([1, 2, 3])
>>> type(s)
<class '__main__.SubclassedSeries'>
>>> to_framed = s.to_frame()
>>> type(to_framed)
<class '__main__.SubclassedDataFrame'>
>>> df = SubclassedDataFrame({"A": [1, 2, 3], "B": [4, 5, 6], "C": [7, 8, 9]})
>>> df
A B C
0 1 4 7
1 2 5 8
2 3 6 9
>>> type(df)
<class '__main__.SubclassedDataFrame'>
>>> sliced1 = df[["A", "B"]]
>>> sliced1
A B
0 1 4
1 2 5
2 3 6
>>> type(sliced1)
<class '__main__.SubclassedDataFrame'>
>>> sliced2 = df["A"]
>>> sliced2
0 1
1 2
2 3
Name: A, dtype: int64
>>> type(sliced2)
<class '__main__.SubclassedSeries'>
Define original properties#
To let original data structures have additional properties, you should let pandas know what properties are added. pandas maps unknown properties to data names by overriding __getattribute__. Defining original properties can be done in one of two ways:
Define _internal_names and _internal_names_set for temporary properties which WILL NOT be passed to manipulation results.
Define _metadata for normal properties which will be passed to manipulation results.
Below is an example defining two original properties, “internal_cache” as a temporary property and “added_property” as a normal property:
class SubclassedDataFrame2(pd.DataFrame):
    # temporary properties
    _internal_names = pd.DataFrame._internal_names + ["internal_cache"]
    _internal_names_set = set(_internal_names)

    # normal properties
    _metadata = ["added_property"]

    @property
    def _constructor(self):
        return SubclassedDataFrame2
>>> df = SubclassedDataFrame2({"A": [1, 2, 3], "B": [4, 5, 6], "C": [7, 8, 9]})
>>> df
A B C
0 1 4 7
1 2 5 8
2 3 6 9
>>> df.internal_cache = "cached"
>>> df.added_property = "property"
>>> df.internal_cache
cached
>>> df.added_property
property
# properties defined in _internal_names are reset after manipulation
>>> df[["A", "B"]].internal_cache
AttributeError: 'SubclassedDataFrame2' object has no attribute 'internal_cache'
# properties defined in _metadata are retained
>>> df[["A", "B"]].added_property
property
Plotting backends#
Starting in pandas 0.25, pandas can be extended with third-party plotting backends. The
main idea is to let users select a plotting backend other than the one
provided by default, which is based on Matplotlib. For example:
>>> pd.set_option("plotting.backend", "backend.module")
>>> pd.Series([1, 2, 3]).plot()
This would be more or less equivalent to:
>>> import backend.module
>>> backend.module.plot(pd.Series([1, 2, 3]))
The backend module can then use other visualization tools (Bokeh, Altair,…)
to generate the plots.
Libraries implementing the plotting backend should use entry points
to make their backend discoverable to pandas. The key is "pandas_plotting_backends". For example, pandas
registers the default “matplotlib” backend as follows.
# in setup.py
setup(  # noqa: F821
    ...,
    entry_points={
        "pandas_plotting_backends": [
            "matplotlib = pandas:plotting._matplotlib",
        ],
    },
)
More information on how to implement a third-party plotting backend can be found at
https://github.com/pandas-dev/pandas/blob/main/pandas/plotting/__init__.py#L1.
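The backend module referenced by the entry point is expected to expose a top-level plot function that pandas dispatches to; a minimal sketch (module layout hypothetical):
# backend/module.py
def plot(data, kind=None, **kwargs):
    # data is the Series or DataFrame being plotted; kind names the plot
    # type ("line", "bar", ...). Return your library's figure object.
    raise NotImplementedError(f"kind={kind!r} is not supported by this sketch")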
|
development/extending.html
|
pandas.tseries.offsets.Minute.kwds
|
`pandas.tseries.offsets.Minute.kwds`
Return a dict of extra parameters for the offset.
```
>>> pd.DateOffset(5).kwds
{}
```
|
Minute.kwds#
Return a dict of extra parameters for the offset.
Examples
>>> pd.DateOffset(5).kwds
{}
>>> pd.offsets.FY5253Quarter().kwds
{'weekday': 0,
'startingMonth': 1,
'qtr_with_extra_week': 1,
'variation': 'nearest'}
|
reference/api/pandas.tseries.offsets.Minute.kwds.html
|
pandas.DataFrame.empty
|
`pandas.DataFrame.empty`
Indicator whether Series/DataFrame is empty.
True if Series/DataFrame is entirely empty (no items), meaning any of the
axes are of length 0.
```
>>> df_empty = pd.DataFrame({'A' : []})
>>> df_empty
Empty DataFrame
Columns: [A]
Index: []
>>> df_empty.empty
True
```
|
property DataFrame.empty[source]#
Indicator whether Series/DataFrame is empty.
True if Series/DataFrame is entirely empty (no items), meaning any of the
axes are of length 0.
Returns
boolIf Series/DataFrame is empty, return True, if not return False.
See also
Series.dropnaReturn series without null values.
DataFrame.dropnaReturn DataFrame with labels on given axis omitted where (all or any) data are missing.
Notes
If Series/DataFrame contains only NaNs, it is still not considered empty. See
the example below.
Examples
An example of an actual empty DataFrame. Notice the index is empty:
>>> df_empty = pd.DataFrame({'A' : []})
>>> df_empty
Empty DataFrame
Columns: [A]
Index: []
>>> df_empty.empty
True
If we only have NaNs in our DataFrame, it is not considered empty! We
will need to drop the NaNs to make the DataFrame empty:
>>> df = pd.DataFrame({'A' : [np.nan]})
>>> df
A
0 NaN
>>> df.empty
False
>>> df.dropna().empty
True
>>> ser_empty = pd.Series({'A' : []})
>>> ser_empty
A []
dtype: object
>>> ser_empty.empty
False
>>> ser_empty = pd.Series()
>>> ser_empty.empty
True
|
reference/api/pandas.DataFrame.empty.html
|
pandas.DataFrame.to_json
|
`pandas.DataFrame.to_json`
Convert the object to a JSON string.
```
>>> import json
>>> df = pd.DataFrame(
... [["a", "b"], ["c", "d"]],
... index=["row 1", "row 2"],
... columns=["col 1", "col 2"],
... )
```
|
DataFrame.to_json(path_or_buf=None, orient=None, date_format=None, double_precision=10, force_ascii=True, date_unit='ms', default_handler=None, lines=False, compression='infer', index=True, indent=None, storage_options=None)[source]#
Convert the object to a JSON string.
Note that NaN and None will be converted to null, and datetime objects
will be converted to UNIX timestamps.
Parameters
path_or_bufstr, path object, file-like object, or None, default NoneString, path object (implementing os.PathLike[str]), or file-like
object implementing a write() function. If None, the result is
returned as a string.
orientstrIndication of expected JSON string format.
Series:
default is ‘index’
allowed values are: {‘split’, ‘records’, ‘index’, ‘table’}.
DataFrame:
default is ‘columns’
allowed values are: {‘split’, ‘records’, ‘index’, ‘columns’,
‘values’, ‘table’}.
The format of the JSON string:
‘split’ : dict like {‘index’ -> [index], ‘columns’ -> [columns],
‘data’ -> [values]}
‘records’ : list like [{column -> value}, … , {column -> value}]
‘index’ : dict like {index -> {column -> value}}
‘columns’ : dict like {column -> {index -> value}}
‘values’ : just the values array
‘table’ : dict like {‘schema’: {schema}, ‘data’: {data}}
Describing the data, where data component is like orient='records'.
date_format{None, ‘epoch’, ‘iso’}Type of date conversion. ‘epoch’ = epoch milliseconds,
‘iso’ = ISO8601. The default depends on the orient. For
orient='table', the default is ‘iso’. For all other orients,
the default is ‘epoch’.
double_precisionint, default 10The number of decimal places to use when encoding
floating point values.
force_asciibool, default TrueForce encoded string to be ASCII.
date_unitstr, default ‘ms’ (milliseconds)The time unit to encode to, governs timestamp and ISO8601
precision. One of ‘s’, ‘ms’, ‘us’, ‘ns’ for second, millisecond,
microsecond, and nanosecond respectively.
default_handlercallable, default NoneHandler to call if object cannot otherwise be converted to a
suitable format for JSON. Should receive a single argument which is
the object to convert and return a serialisable object.
linesbool, default FalseIf ‘orient’ is ‘records’, write out line-delimited JSON. Raises a
ValueError for any other ‘orient’, since other formats are not
list-like.
compressionstr or dict, default ‘infer’For on-the-fly compression of the output data. If ‘infer’ and ‘path_or_buf’ is
path-like, then detect compression from the following extensions: ‘.gz’,
‘.bz2’, ‘.zip’, ‘.xz’, ‘.zst’, ‘.tar’, ‘.tar.gz’, ‘.tar.xz’ or ‘.tar.bz2’
(otherwise no compression).
Set to None for no compression.
Can also be a dict with key 'method' set
to one of {'zip', 'gzip', 'bz2', 'zstd', 'tar'} and other
key-value pairs are forwarded to
zipfile.ZipFile, gzip.GzipFile,
bz2.BZ2File, zstandard.ZstdCompressor or
tarfile.TarFile, respectively.
As an example, the following could be passed for faster compression and to create
a reproducible gzip archive:
compression={'method': 'gzip', 'compresslevel': 1, 'mtime': 1}.
New in version 1.5.0: Added support for .tar files.
Changed in version 1.4.0: Zstandard support.
indexbool, default TrueWhether to include the index values in the JSON string. Not
including the index (index=False) is only supported when
orient is ‘split’ or ‘table’.
indentint, optionalLength of whitespace used to indent each record.
New in version 1.0.0.
storage_optionsdict, optionalExtra options that make sense for a particular storage connection, e.g.
host, port, username, password, etc. For HTTP(S) URLs the key-value pairs
are forwarded to urllib.request.Request as header options. For other
URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are
forwarded to fsspec.open. Please see fsspec and urllib for more
details, and for more examples on storage options refer here.
New in version 1.2.0.
Returns
None or strIf path_or_buf is None, returns the resulting json format as a
string. Otherwise returns None.
See also
read_jsonConvert a JSON string to pandas object.
Notes
The behavior of indent=0 varies from the stdlib, which does not
indent the output but does insert newlines. Currently, indent=0
and the default indent=None are equivalent in pandas, though this
may change in a future release.
orient='table' contains a ‘pandas_version’ field under ‘schema’.
This stores the version of pandas used in the latest revision of the
schema.
Examples
>>> import json
>>> df = pd.DataFrame(
... [["a", "b"], ["c", "d"]],
... index=["row 1", "row 2"],
... columns=["col 1", "col 2"],
... )
>>> result = df.to_json(orient="split")
>>> parsed = json.loads(result)
>>> json.dumps(parsed, indent=4)
{
"columns": [
"col 1",
"col 2"
],
"index": [
"row 1",
"row 2"
],
"data": [
[
"a",
"b"
],
[
"c",
"d"
]
]
}
Encoding/decoding a Dataframe using 'records' formatted JSON.
Note that index labels are not preserved with this encoding.
>>> result = df.to_json(orient="records")
>>> parsed = json.loads(result)
>>> json.dumps(parsed, indent=4)
[
{
"col 1": "a",
"col 2": "b"
},
{
"col 1": "c",
"col 2": "d"
}
]
Encoding/decoding a Dataframe using 'index' formatted JSON:
>>> result = df.to_json(orient="index")
>>> parsed = json.loads(result)
>>> json.dumps(parsed, indent=4)
{
"row 1": {
"col 1": "a",
"col 2": "b"
},
"row 2": {
"col 1": "c",
"col 2": "d"
}
}
Encoding/decoding a Dataframe using 'columns' formatted JSON:
>>> result = df.to_json(orient="columns")
>>> parsed = json.loads(result)
>>> json.dumps(parsed, indent=4)
{
"col 1": {
"row 1": "a",
"row 2": "c"
},
"col 2": {
"row 1": "b",
"row 2": "d"
}
}
Encoding/decoding a Dataframe using 'values' formatted JSON:
>>> result = df.to_json(orient="values")
>>> parsed = json.loads(result)
>>> json.dumps(parsed, indent=4)
[
[
"a",
"b"
],
[
"c",
"d"
]
]
Encoding with Table Schema:
>>> result = df.to_json(orient="table")
>>> parsed = json.loads(result)
>>> json.dumps(parsed, indent=4)
{
"schema": {
"fields": [
{
"name": "index",
"type": "string"
},
{
"name": "col 1",
"type": "string"
},
{
"name": "col 2",
"type": "string"
}
],
"primaryKey": [
"index"
],
"pandas_version": "1.4.0"
},
"data": [
{
"index": "row 1",
"col 1": "a",
"col 2": "b"
},
{
"index": "row 2",
"col 1": "c",
"col 2": "d"
}
]
}
|
reference/api/pandas.DataFrame.to_json.html
|
pandas.DataFrame.add_suffix
|
`pandas.DataFrame.add_suffix`
Suffix labels with string suffix.
```
>>> s = pd.Series([1, 2, 3, 4])
>>> s
0 1
1 2
2 3
3 4
dtype: int64
```
|
DataFrame.add_suffix(suffix)[source]#
Suffix labels with string suffix.
For Series, the row labels are suffixed.
For DataFrame, the column labels are suffixed.
Parameters
suffixstrThe string to add after each label.
Returns
Series or DataFrameNew Series or DataFrame with updated labels.
See also
Series.add_prefixPrefix row labels with string prefix.
DataFrame.add_prefixPrefix column labels with string prefix.
Examples
>>> s = pd.Series([1, 2, 3, 4])
>>> s
0 1
1 2
2 3
3 4
dtype: int64
>>> s.add_suffix('_item')
0_item 1
1_item 2
2_item 3
3_item 4
dtype: int64
>>> df = pd.DataFrame({'A': [1, 2, 3, 4], 'B': [3, 4, 5, 6]})
>>> df
A B
0 1 3
1 2 4
2 3 5
3 4 6
>>> df.add_suffix('_col')
A_col B_col
0 1 3
1 2 4
2 3 5
3 4 6
|
reference/api/pandas.DataFrame.add_suffix.html
|
Comparison with other tools
|
Comparison with other tools
|
Comparison with R / R libraries
Quick reference
Base R
plyr
reshape / reshape2
Comparison with SQL
Copies vs. in place operations
SELECT
WHERE
GROUP BY
JOIN
UNION
LIMIT
pandas equivalents for some SQL analytic and aggregate functions
UPDATE
DELETE
Comparison with spreadsheets
Data structures
Data input / output
Data operations
String processing
Merging
Other considerations
Comparison with SAS
Data structures
Data input / output
Data operations
String processing
Merging
Missing data
GroupBy
Other considerations
Comparison with Stata
Data structures
Data input / output
Data operations
String processing
Merging
Missing data
GroupBy
Other considerations
|
getting_started/comparison/index.html
|
pandas.tseries.offsets.Hour.onOffset
|
pandas.tseries.offsets.Hour.onOffset
|
Hour.onOffset()#
|
reference/api/pandas.tseries.offsets.Hour.onOffset.html
|