title | summary | context | path
---|---|---|---|
pandas.core.groupby.DataFrameGroupBy.hist
|
`pandas.core.groupby.DataFrameGroupBy.hist`
Make a histogram of the DataFrame’s columns.
A histogram is a representation of the distribution of data.
This function calls matplotlib.pyplot.hist(), on each series in
the DataFrame, resulting in one histogram per column.
```
>>> df = pd.DataFrame({
... 'length': [1.5, 0.5, 1.2, 0.9, 3],
... 'width': [0.7, 0.2, 0.15, 0.2, 1.1]
... }, index=['pig', 'rabbit', 'duck', 'chicken', 'horse'])
>>> hist = df.hist(bins=3)
```
|
property DataFrameGroupBy.hist
Make a histogram of the DataFrame’s columns.
A histogram is a representation of the distribution of data.
This function calls matplotlib.pyplot.hist(), on each series in
the DataFrame, resulting in one histogram per column.
Parameters
data : DataFrame
    The pandas object holding the data.
column : str or sequence, optional
    If passed, will be used to limit data to a subset of columns.
by : object, optional
    If passed, then used to form histograms for separate groups.
grid : bool, default True
    Whether to show axis grid lines.
xlabelsize : int, default None
    If specified changes the x-axis label size.
xrot : float, default None
    Rotation of x axis labels. For example, a value of 90 displays the
    x labels rotated 90 degrees clockwise.
ylabelsize : int, default None
    If specified changes the y-axis label size.
yrot : float, default None
    Rotation of y axis labels. For example, a value of 90 displays the
    y labels rotated 90 degrees clockwise.
ax : Matplotlib axes object, default None
    The axes to plot the histogram on.
sharex : bool, default True if ax is None else False
    In case subplots=True, share x axis and set some x axis labels to
    invisible; defaults to True if ax is None, otherwise False if an ax
    is passed in. Note that passing in both an ax and sharex=True will
    alter all x axis labels for all subplots in a figure.
sharey : bool, default False
    In case subplots=True, share y axis and set some y axis labels to
    invisible.
figsize : tuple, optional
    The size in inches of the figure to create. Uses the value in
    matplotlib.rcParams by default.
layout : tuple, optional
    Tuple of (rows, columns) for the layout of the histograms.
bins : int or sequence, default 10
    Number of histogram bins to be used. If an integer is given, bins + 1
    bin edges are calculated and returned. If bins is a sequence, gives
    bin edges, including left edge of first bin and right edge of last
    bin. In this case, bins is returned unmodified.
backend : str, default None
    Backend to use instead of the backend specified in the option
    plotting.backend. For instance, 'matplotlib'. Alternatively, to
    specify the plotting.backend for the whole session, set
    pd.options.plotting.backend.
    New in version 1.0.0.
legend : bool, default False
    Whether to show the legend.
    New in version 1.1.0.
**kwargs
    All other plotting keyword arguments to be passed to
    matplotlib.pyplot.hist().
Returns
matplotlib.AxesSubplot or numpy.ndarray of them
See also
matplotlib.pyplot.hist : Plot a histogram using matplotlib.
Examples
This example draws a histogram based on the length and width of
some animals, displayed in three bins.
>>> df = pd.DataFrame({
... 'length': [1.5, 0.5, 1.2, 0.9, 3],
... 'width': [0.7, 0.2, 0.15, 0.2, 1.1]
... }, index=['pig', 'rabbit', 'duck', 'chicken', 'horse'])
>>> hist = df.hist(bins=3)
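Since this is the groupby variant, histograms can also be drawn per group. A minimal sketch (the "group" column and values below are illustrative, not from the pandas docs):
>>> df = pd.DataFrame({"group": ["a", "a", "b", "b"],
...                    "value": [1.0, 2.0, 3.0, 4.0]})
>>> axes = df.groupby("group").hist(bins=2)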
|
reference/api/pandas.core.groupby.DataFrameGroupBy.hist.html
|
pandas.tseries.offsets.QuarterEnd.is_quarter_start
|
`pandas.tseries.offsets.QuarterEnd.is_quarter_start`
Return boolean whether a timestamp occurs on the quarter start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
```
|
QuarterEnd.is_quarter_start()
Return boolean whether a timestamp occurs on the quarter start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
|
reference/api/pandas.tseries.offsets.QuarterEnd.is_quarter_start.html
|
pandas.tseries.offsets.BusinessHour.copy
|
`pandas.tseries.offsets.BusinessHour.copy`
Return a copy of the frequency.
Examples
```
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
```
|
BusinessHour.copy()
Return a copy of the frequency.
Examples
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
|
reference/api/pandas.tseries.offsets.BusinessHour.copy.html
|
pandas.tseries.offsets.YearEnd.is_on_offset
|
`pandas.tseries.offsets.YearEnd.is_on_offset`
Return boolean whether a timestamp intersects with this frequency.
Timestamp to check intersections with frequency.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
```
|
YearEnd.is_on_offset()
Return boolean whether a timestamp intersects with this frequency.
Parameters
dt : datetime.datetime
    Timestamp to check intersections with frequency.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
>>> ts = pd.Timestamp(2022, 8, 6)
>>> ts.day_name()
'Saturday'
>>> freq = pd.offsets.BusinessDay(1)
>>> freq.is_on_offset(ts)
False
|
reference/api/pandas.tseries.offsets.YearEnd.is_on_offset.html
|
pandas.DataFrame.value_counts
|
`pandas.DataFrame.value_counts`
Return a Series containing counts of unique rows in the DataFrame.
```
>>> df = pd.DataFrame({'num_legs': [2, 4, 4, 6],
... 'num_wings': [2, 0, 0, 0]},
... index=['falcon', 'dog', 'cat', 'ant'])
>>> df
num_legs num_wings
falcon 2 2
dog 4 0
cat 4 0
ant 6 0
```
|
DataFrame.value_counts(subset=None, normalize=False, sort=True, ascending=False, dropna=True)
Return a Series containing counts of unique rows in the DataFrame.
New in version 1.1.0.
Parameters
subset : list-like, optional
    Columns to use when counting unique combinations.
normalize : bool, default False
    Return proportions rather than frequencies.
sort : bool, default True
    Sort by frequencies.
ascending : bool, default False
    Sort in ascending order.
dropna : bool, default True
    Don't include counts of rows that contain NA values.
    New in version 1.3.0.
Returns
Series
See also
Series.value_counts : Equivalent method on Series.
Notes
The returned Series will have a MultiIndex with one level per input
column. By default, rows that contain any NA values are omitted from
the result. By default, the resulting Series will be in descending
order so that the first element is the most frequently-occurring row.
Examples
>>> df = pd.DataFrame({'num_legs': [2, 4, 4, 6],
... 'num_wings': [2, 0, 0, 0]},
... index=['falcon', 'dog', 'cat', 'ant'])
>>> df
num_legs num_wings
falcon 2 2
dog 4 0
cat 4 0
ant 6 0
>>> df.value_counts()
num_legs num_wings
4 0 2
2 2 1
6 0 1
dtype: int64
>>> df.value_counts(sort=False)
num_legs num_wings
2 2 1
4 0 2
6 0 1
dtype: int64
>>> df.value_counts(ascending=True)
num_legs num_wings
2 2 1
6 0 1
4 0 2
dtype: int64
>>> df.value_counts(normalize=True)
num_legs num_wings
4 0 0.50
2 2 0.25
6 0 0.25
dtype: float64
With dropna set to False we can also count rows with NA values.
>>> df = pd.DataFrame({'first_name': ['John', 'Anne', 'John', 'Beth'],
... 'middle_name': ['Smith', pd.NA, pd.NA, 'Louise']})
>>> df
first_name middle_name
0 John Smith
1 Anne <NA>
2 John <NA>
3 Beth Louise
>>> df.value_counts()
first_name middle_name
Beth Louise 1
John Smith 1
dtype: int64
>>> df.value_counts(dropna=False)
first_name middle_name
Anne NaN 1
Beth Louise 1
John Smith 1
NaN 1
dtype: int64
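The subset parameter restricts counting to particular columns. A minimal sketch continuing the frame above (output omitted, since the index type of single-column results has varied across pandas versions):
>>> df.value_counts(subset=["first_name"])  # counts John twice, Anne and Beth once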
|
reference/api/pandas.DataFrame.value_counts.html
|
pandas.tseries.offsets.SemiMonthEnd.rollforward
|
`pandas.tseries.offsets.SemiMonthEnd.rollforward`
Roll provided date forward to next offset only if not on offset.
|
SemiMonthEnd.rollforward()
Roll provided date forward to next offset only if not on offset.
Returns
Timestamp
    Rolled timestamp if not on offset, otherwise unchanged timestamp.
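A minimal sketch (with SemiMonthEnd's default day_of_month=15, a date not on offset rolls forward to the 15th):
>>> ts = pd.Timestamp(2022, 1, 1)
>>> pd.offsets.SemiMonthEnd().rollforward(ts)
Timestamp('2022-01-15 00:00:00')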
|
reference/api/pandas.tseries.offsets.SemiMonthEnd.rollforward.html
|
pandas.TimedeltaIndex.components
|
`pandas.TimedeltaIndex.components`
Return a DataFrame of the individual resolution components of the Timedeltas.
The components (days, hours, minutes, seconds, milliseconds, microseconds,
nanoseconds) are returned as columns in a DataFrame.
|
property TimedeltaIndex.components
Return a DataFrame of the individual resolution components of the Timedeltas.
The components (days, hours, minutes, seconds, milliseconds, microseconds,
nanoseconds) are returned as columns in a DataFrame.
Returns
DataFrame
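A minimal sketch (illustrative timedelta):
>>> tdi = pd.TimedeltaIndex(['1 days 02:30:45'])
>>> tdi.components
   days  hours  minutes  seconds  milliseconds  microseconds  nanoseconds
0     1      2       30       45             0             0            0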
|
reference/api/pandas.TimedeltaIndex.components.html
|
pandas.core.window.rolling.Rolling.aggregate
|
`pandas.core.window.rolling.Rolling.aggregate`
Aggregate using one or more operations over the specified axis.
```
>>> df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6], "C": [7, 8, 9]})
>>> df
A B C
0 1 4 7
1 2 5 8
2 3 6 9
```
|
Rolling.aggregate(func, *args, **kwargs)
Aggregate using one or more operations over the specified axis.
Parameters
func : function, str, list or dict
    Function to use for aggregating the data. If a function, must either
    work when passed a Series/Dataframe or when passed to Series/Dataframe.apply.
    Accepted combinations are:
    function
    string function name
    list of functions and/or function names, e.g. [np.sum, 'mean']
    dict of axis labels -> functions, function names or list of such.
*args
    Positional arguments to pass to func.
**kwargs
    Keyword arguments to pass to func.
Returns
scalar, Series or DataFrame
    The return can be:
    scalar : when Series.agg is called with single function
    Series : when DataFrame.agg is called with a single function
    DataFrame : when DataFrame.agg is called with several functions
See also
pandas.Series.rolling : Calling object with Series data.
pandas.DataFrame.rolling : Calling object with DataFrame data.
Notes
agg is an alias for aggregate. Use the alias.
Functions that mutate the passed object can produce unexpected
behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods
for more details.
A passed user-defined-function will be passed a Series for evaluation.
Examples
>>> df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6], "C": [7, 8, 9]})
>>> df
A B C
0 1 4 7
1 2 5 8
2 3 6 9
>>> df.rolling(2).sum()
A B C
0 NaN NaN NaN
1 3.0 9.0 15.0
2 5.0 11.0 17.0
>>> df.rolling(2).agg({"A": "sum", "B": "min"})
A B
0 NaN NaN
1 3.0 4.0
2 5.0 5.0
|
reference/api/pandas.core.window.rolling.Rolling.aggregate.html
|
pandas.Timestamp.fold
|
pandas.Timestamp.fold
|
Timestamp.fold
|
reference/api/pandas.Timestamp.fold.html
|
pandas.tseries.offsets.Day.apply
|
pandas.tseries.offsets.Day.apply
|
Day.apply()
|
reference/api/pandas.tseries.offsets.Day.apply.html
|
pandas.tseries.offsets.BYearBegin.kwds
|
`pandas.tseries.offsets.BYearBegin.kwds`
Return a dict of extra parameters for the offset.
Examples
```
>>> pd.DateOffset(5).kwds
{}
```
|
BYearBegin.kwds
Return a dict of extra parameters for the offset.
Examples
>>> pd.DateOffset(5).kwds
{}
>>> pd.offsets.FY5253Quarter().kwds
{'weekday': 0,
'startingMonth': 1,
'qtr_with_extra_week': 1,
'variation': 'nearest'}
|
reference/api/pandas.tseries.offsets.BYearBegin.kwds.html
|
pandas.tseries.offsets.BYearEnd.rule_code
|
pandas.tseries.offsets.BYearEnd.rule_code
|
BYearEnd.rule_code
|
reference/api/pandas.tseries.offsets.BYearEnd.rule_code.html
|
pandas.tseries.offsets.Milli.nanos
|
`pandas.tseries.offsets.Milli.nanos`
Return an integer of the total number of nanoseconds.
```
>>> pd.offsets.Hour(5).nanos
18000000000000
```
|
Milli.nanos
Return an integer of the total number of nanoseconds.
Raises
ValueError
    If the frequency is non-fixed.
Examples
>>> pd.offsets.Hour(5).nanos
18000000000000
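Using the Milli offset itself (a minimal sketch; 5 milliseconds is 5,000,000 nanoseconds):
>>> pd.offsets.Milli(5).nanos
5000000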
|
reference/api/pandas.tseries.offsets.Milli.nanos.html
|
pandas.Series.dt.is_year_start
|
`pandas.Series.dt.is_year_start`
Indicate whether the date is the first day of a year.
```
>>> dates = pd.Series(pd.date_range("2017-12-30", periods=3))
>>> dates
0 2017-12-30
1 2017-12-31
2 2018-01-01
dtype: datetime64[ns]
```
|
Series.dt.is_year_start
Indicate whether the date is the first day of a year.
Returns
Series or DatetimeIndex
    The same type as the original data with boolean values. Series will
    have the same name and index. DatetimeIndex will have the same
    name.
See also
is_year_end : Similar property indicating the last day of the year.
Examples
This method is available on Series with datetime values under
the .dt accessor, and directly on DatetimeIndex.
>>> dates = pd.Series(pd.date_range("2017-12-30", periods=3))
>>> dates
0 2017-12-30
1 2017-12-31
2 2018-01-01
dtype: datetime64[ns]
>>> dates.dt.is_year_start
0 False
1 False
2 True
dtype: bool
>>> idx = pd.date_range("2017-12-30", periods=3)
>>> idx
DatetimeIndex(['2017-12-30', '2017-12-31', '2018-01-01'],
dtype='datetime64[ns]', freq='D')
>>> idx.is_year_start
array([False, False, True])
|
reference/api/pandas.Series.dt.is_year_start.html
|
pandas.Series.cat.set_categories
|
`pandas.Series.cat.set_categories`
Set the categories to the specified new_categories.
new_categories can include new categories (which will result in
unused categories) or remove old categories (which results in values
set to NaN). If rename==True, the categories will simply be renamed
(fewer or more items than in the old categories will result in values set to
NaN or in unused categories respectively).
|
Series.cat.set_categories(*args, **kwargs)
Set the categories to the specified new_categories.
new_categories can include new categories (which will result in
unused categories) or remove old categories (which results in values
set to NaN). If rename==True, the categories will simply be renamed
(fewer or more items than in the old categories will result in values set to
NaN or in unused categories respectively).
This method can be used to perform more than one action of adding,
removing, and reordering simultaneously and is therefore faster than
performing the individual steps via the more specialised methods.
On the other hand, this method does not do checks (e.g., whether the
old categories are included in the new categories on a reorder), which
can result in surprising changes, for example when using special string
dtypes, which do not consider an S1 string equal to a single-char
Python string.
Parameters
new_categories : Index-like
    The categories in new order.
ordered : bool, default False
    Whether or not the categorical is treated as an ordered categorical.
    If not given, do not change the ordered information.
rename : bool, default False
    Whether or not the new_categories should be considered as a rename
    of the old categories or as reordered categories.
inplace : bool, default False
    Whether or not to reorder the categories in-place or return a copy
    of this categorical with reordered categories.
    Deprecated since version 1.3.0.
Returns
Categorical with reordered categories or None if inplace.
Raises
ValueError
    If new_categories does not validate as categories.
See also
rename_categories : Rename categories.
reorder_categories : Reorder categories.
add_categories : Add new categories.
remove_categories : Remove the specified categories.
remove_unused_categories : Remove categories which are not used.
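A minimal usage sketch (illustrative values; the result keeps the data but reorders and extends the categories):
>>> s = pd.Series(["a", "b", "a"], dtype="category")
>>> s.cat.set_categories(["b", "a", "c"])
0    a
1    b
2    a
dtype: category
Categories (3, object): ['b', 'a', 'c']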
|
reference/api/pandas.Series.cat.set_categories.html
|
pandas.io.formats.style.Styler.relabel_index
|
`pandas.io.formats.style.Styler.relabel_index`
Relabel the index, or column header, keys to display a set of specified values.
```
>>> df = pd.DataFrame({"col": ["a", "b", "c"]})
>>> df.style.relabel_index(["A", "B", "C"])
col
A a
B b
C c
```
|
Styler.relabel_index(labels, axis=0, level=None)
Relabel the index, or column header, keys to display a set of specified values.
New in version 1.5.0.
Parameters
labels : list-like or Index
    New labels to display. Must have same length as the underlying values not
    hidden.
axis : {"index", 0, "columns", 1}
    Apply to the index or columns.
level : int, str, list, optional
    The level(s) over which to apply the new labels. If None will apply
    to all levels of an Index or MultiIndex which are not hidden.
Returns
self : Styler
See also
Styler.format_index : Format the text display value of index or column headers.
Styler.hide : Hide the index, column headers, or specified data from display.
Notes
As part of Styler, this method allows the display of an index to be
completely user-specified without affecting the underlying DataFrame data,
index, or column headers. This means that the flexibility of indexing is
maintained whilst the final display is customisable.
Since Styler is designed to be progressively constructed with method chaining,
this method is adapted to react to the currently specified hidden elements.
This is useful because it means one does not have to specify all the new
labels if the majority of an index, or column headers, have already been hidden.
The following produce equivalent display (note the length of labels in
each case).
# relabel first, then hide
df = pd.DataFrame({"col": ["a", "b", "c"]})
df.style.relabel_index(["A", "B", "C"]).hide([0,1])
# hide first, then relabel
df = pd.DataFrame({"col": ["a", "b", "c"]})
df.style.hide([0,1]).relabel_index(["C"])
This method should be used, rather than Styler.format_index(), in one of
the following cases (see examples):
A specified set of labels are required which are not a function of the
underlying index keys.
The function of the underlying index keys requires a counter variable,
such as those available upon enumeration.
Examples
Basic use
>>> df = pd.DataFrame({"col": ["a", "b", "c"]})
>>> df.style.relabel_index(["A", "B", "C"])
col
A a
B b
C c
Chaining with pre-hidden elements
>>> df.style.hide([0,1]).relabel_index(["C"])
col
C c
Using a MultiIndex
>>> midx = pd.MultiIndex.from_product([[0, 1], [0, 1], [0, 1]])
>>> df = pd.DataFrame({"col": list(range(8))}, index=midx)
>>> styler = df.style
col
0 0 0 0
1 1
1 0 2
1 3
1 0 0 4
1 5
1 0 6
1 7
>>> styler.hide((midx.get_level_values(0)==0)|(midx.get_level_values(1)==0))
...
>>> styler.hide(level=[0,1])
>>> styler.relabel_index(["binary6", "binary7"])
col
binary6 6
binary7 7
We can also achieve the above by indexing first and then re-labeling
>>> styler = df.loc[[(1,1,0), (1,1,1)]].style
>>> styler.hide(level=[0,1]).relabel_index(["binary6", "binary7"])
...
col
binary6 6
binary7 7
Defining a formatting function which uses an enumeration counter. Also note
that the value of the index key is passed in the case of string labels so it
can also be inserted into the label, using curly brackets (or double curly
brackets if the string is pre-formatted),
>>> df = pd.DataFrame({"samples": np.random.rand(10)})
>>> styler = df.loc[np.random.randint(0,10,3)].style
>>> styler.relabel_index([f"sample{i+1} ({{}})" for i in range(3)])
...
samples
sample1 (5) 0.315811
sample2 (0) 0.495941
sample3 (2) 0.067946
|
reference/api/pandas.io.formats.style.Styler.relabel_index.html
|
pandas.tseries.offsets.BYearBegin.is_year_start
|
`pandas.tseries.offsets.BYearBegin.is_year_start`
Return boolean whether a timestamp occurs on the year start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
```
|
BYearBegin.is_year_start()
Return boolean whether a timestamp occurs on the year start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
|
reference/api/pandas.tseries.offsets.BYearBegin.is_year_start.html
|
pandas.Series.dt.to_period
|
`pandas.Series.dt.to_period`
Cast to PeriodArray/Index at a particular frequency.
```
>>> df = pd.DataFrame({"y": [1, 2, 3]},
... index=pd.to_datetime(["2000-03-31 00:00:00",
... "2000-05-31 00:00:00",
... "2000-08-31 00:00:00"]))
>>> df.index.to_period("M")
PeriodIndex(['2000-03', '2000-05', '2000-08'],
dtype='period[M]')
```
|
Series.dt.to_period(*args, **kwargs)
Cast to PeriodArray/Index at a particular frequency.
Converts DatetimeArray/Index to PeriodArray/Index.
Parameters
freq : str or Offset, optional
    One of pandas' offset strings or an Offset object. Will be inferred by default.
Returns
PeriodArray/Index
Raises
ValueError
    When converting a DatetimeArray/Index with non-regular values,
    so that a frequency cannot be inferred.
See also
PeriodIndex : Immutable ndarray holding ordinal values.
DatetimeIndex.to_pydatetime : Return DatetimeIndex as object.
Examples
>>> df = pd.DataFrame({"y": [1, 2, 3]},
... index=pd.to_datetime(["2000-03-31 00:00:00",
... "2000-05-31 00:00:00",
... "2000-08-31 00:00:00"]))
>>> df.index.to_period("M")
PeriodIndex(['2000-03', '2000-05', '2000-08'],
dtype='period[M]')
Infer the daily frequency
>>> idx = pd.date_range("2017-01-01", periods=2)
>>> idx.to_period()
PeriodIndex(['2017-01-01', '2017-01-02'],
dtype='period[D]')
|
reference/api/pandas.Series.dt.to_period.html
|
pandas.DataFrame.sub
|
`pandas.DataFrame.sub`
Get Subtraction of dataframe and other, element-wise (binary operator sub).
Equivalent to dataframe - other, but with support to substitute a fill_value
for missing data in one of the inputs. With reverse version, rsub.
```
>>> df = pd.DataFrame({'angles': [0, 3, 4],
... 'degrees': [360, 180, 360]},
... index=['circle', 'triangle', 'rectangle'])
>>> df
angles degrees
circle 0 360
triangle 3 180
rectangle 4 360
```
|
DataFrame.sub(other, axis='columns', level=None, fill_value=None)
Get Subtraction of dataframe and other, element-wise (binary operator sub).
Equivalent to dataframe - other, but with support to substitute a fill_value
for missing data in one of the inputs. With reverse version, rsub.
Among flexible wrappers (add, sub, mul, div, mod, pow) to
arithmetic operators: +, -, *, /, //, %, **.
Parameters
other : scalar, sequence, Series, dict or DataFrame
    Any single or multiple element data structure, or list-like object.
axis : {0 or 'index', 1 or 'columns'}
    Whether to compare by the index (0 or 'index') or columns
    (1 or 'columns'). For Series input, axis to match Series index on.
level : int or label
    Broadcast across a level, matching Index values on the
    passed MultiIndex level.
fill_value : float or None, default None
    Fill existing missing (NaN) values, and any new element needed for
    successful DataFrame alignment, with this value before computation.
    If data in both corresponding DataFrame locations is missing
    the result will be missing.
Returns
DataFrame
    Result of the arithmetic operation.
See also
DataFrame.add : Add DataFrames.
DataFrame.sub : Subtract DataFrames.
DataFrame.mul : Multiply DataFrames.
DataFrame.div : Divide DataFrames (float division).
DataFrame.truediv : Divide DataFrames (float division).
DataFrame.floordiv : Divide DataFrames (integer division).
DataFrame.mod : Calculate modulo (remainder after division).
DataFrame.pow : Calculate exponential power.
Notes
Mismatched indices will be unioned together.
Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4],
... 'degrees': [360, 180, 360]},
... index=['circle', 'triangle', 'rectangle'])
>>> df
angles degrees
circle 0 360
triangle 3 180
rectangle 4 360
Add a scalar with the operator version, which returns the same
results.
>>> df + 1
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
>>> df.add(1)
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
Divide by constant with reverse version.
>>> df.div(10)
angles degrees
circle 0.0 36.0
triangle 0.3 18.0
rectangle 0.4 36.0
>>> df.rdiv(10)
angles degrees
circle inf 0.027778
triangle 3.333333 0.055556
rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2]
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub([1, 2], axis='columns')
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
... axis='index')
angles degrees
circle -1 359
triangle 2 179
rectangle 3 359
Multiply a dictionary by axis.
>>> df.mul({'angles': 0, 'degrees': 2})
angles degrees
circle 0 720
triangle 0 360
rectangle 0 720
>>> df.mul({'circle': 0, 'triangle': 2, 'rectangle': 3}, axis='index')
angles degrees
circle 0 0
triangle 6 360
rectangle 12 1080
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]},
... index=['circle', 'triangle', 'rectangle'])
>>> other
angles
circle 0
triangle 3
rectangle 4
>>> df * other
angles degrees
circle 0 NaN
triangle 9 NaN
rectangle 16 NaN
>>> df.mul(other, fill_value=0)
angles degrees
circle 0 0.0
triangle 9 0.0
rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
... 'degrees': [360, 180, 360, 360, 540, 720]},
... index=[['A', 'A', 'A', 'B', 'B', 'B'],
... ['circle', 'triangle', 'rectangle',
... 'square', 'pentagon', 'hexagon']])
>>> df_multindex
angles degrees
A circle 0 360
triangle 3 180
rectangle 4 360
B square 4 360
pentagon 5 540
hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0)
angles degrees
A circle NaN 1.0
triangle 1.0 1.0
rectangle 1.0 1.0
B square 0.0 0.0
pentagon 0.0 0.0
hexagon 0.0 0.0
|
reference/api/pandas.DataFrame.sub.html
|
pandas ecosystem
|
Increasingly, packages are being built on top of pandas to address specific needs
in data preparation, analysis and visualization.
This is encouraging because it means pandas is not only helping users to handle
their data tasks but also that it provides a better starting point for developers to
build powerful and more focused data tools.
The creation of libraries that complement pandas’ functionality also allows pandas
development to remain focused around its original requirements.
This is an inexhaustive list of projects that build on pandas in order to provide
tools in the PyData space. For a list of projects that depend on pandas,
see the
Github network dependents for pandas
or search pypi for pandas.
We’d like to make it easier for users to find these projects. If you know of other
substantial projects that you feel should be on this list, please let us know.
Data cleaning and validation#
Pyjanitor#
Pyjanitor provides a clean API for cleaning data, using method chaining.
Pandera#
Pandera provides a flexible and expressive API for performing data validation on dataframes
to make data processing pipelines more readable and robust.
Dataframes contain information that pandera explicitly validates at runtime. This is useful in
production-critical data pipelines or reproducible research settings.
pandas-path#
Since Python 3.4, pathlib has been
included in the Python standard library. Path objects provide a simple
and delightful way to interact with the file system. The pandas-path package enables the
Path API for pandas through a custom accessor .path. Getting just the filenames from
a series of full file paths is as simple as my_files.path.name. Other convenient operations like
joining paths, replacing file extensions, and checking if files exist are also available.
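A minimal sketch (it is assumed here, per the package docs, that importing pandas_path registers the accessor):
import pandas as pd
import pandas_path  # noqa: F401 -- importing is assumed to register the .path accessor

files = pd.Series(["/data/a.csv", "/data/b.csv"])
print(files.path.name)  # 0: a.csv, 1: b.csv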
Statistics and machine learning#
pandas-tfrecords#
Easily save pandas DataFrames to the TensorFlow TFRecords format, and read TFRecords back into pandas.
Statsmodels#
Statsmodels is the prominent Python “statistics and econometrics library” and it has
a long-standing special relationship with pandas. Statsmodels provides powerful statistics,
econometrics, analysis and modeling functionality that is out of pandas’ scope.
Statsmodels leverages pandas objects as the underlying data container for computation.
sklearn-pandas#
Use pandas DataFrames in your scikit-learn
ML pipeline.
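A minimal sketch of the package's DataFrameMapper (the column name is illustrative):
from sklearn.preprocessing import StandardScaler
from sklearn_pandas import DataFrameMapper

# Map DataFrame columns to sklearn transformers while keeping column semantics
mapper = DataFrameMapper([(["age"], StandardScaler())])
X = mapper.fit_transform(df)  # df is an existing pandas DataFrame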
Featuretools#
Featuretools is a Python library for automated feature engineering built on top of pandas. It excels at transforming temporal and relational datasets into feature matrices for machine learning using reusable feature engineering “primitives”. Users can contribute their own primitives in Python and share them with the rest of the community.
Compose#
Compose is a machine learning tool for labeling data and prediction engineering. It allows you to structure the labeling process by parameterizing prediction problems and transforming time-driven relational data into target values with cutoff times that can be used for supervised learning.
STUMPY#
STUMPY is a powerful and scalable Python library for modern time series analysis.
At its core, STUMPY efficiently computes something called a
matrix profile,
which can be used for a wide variety of time series data mining tasks.
Visualization#
Pandas has its own Styler class for table visualization, and while
pandas also has built-in support for data visualization through charts with matplotlib,
there are a number of other pandas-compatible libraries.
Altair#
Altair is a declarative statistical visualization library for Python.
With Altair, you can spend more time understanding your data and its
meaning. Altair’s API is simple, friendly and consistent and built on
top of the powerful Vega-Lite JSON specification. This elegant
simplicity produces beautiful and effective visualizations with a
minimal amount of code. Altair works with pandas DataFrames.
Bokeh#
Bokeh is a Python interactive visualization library for large datasets that natively uses
the latest web technologies. Its goal is to provide elegant, concise construction of novel
graphics in the style of Protovis/D3, while delivering high-performance interactivity over
large data to thin clients.
Pandas-Bokeh provides a high level API
for Bokeh that can be loaded as a native pandas plotting backend via
pd.set_option("plotting.backend", "pandas_bokeh")
It is very similar to the matplotlib plotting backend, but provides interactive
web-based charts and maps.
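For example, after switching the backend the familiar plot call yields an interactive chart (illustrative data):
import pandas as pd

pd.set_option("plotting.backend", "pandas_bokeh")
df = pd.DataFrame({"x": [1, 2, 3], "y": [3, 1, 2]})
df.plot()  # rendered by Bokeh instead of matplotlib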
Seaborn#
Seaborn is a Python visualization library based on
matplotlib. It provides a high-level, dataset-oriented
interface for creating attractive statistical graphics. The plotting functions
in seaborn understand pandas objects and leverage pandas grouping operations
internally to support concise specification of complex visualizations. Seaborn
also goes beyond matplotlib and pandas with the option to perform statistical
estimation while plotting, aggregating across observations and visualizing the
fit of statistical models to emphasize patterns in a dataset.
plotnine#
Hadley Wickham’s ggplot2 is a foundational exploratory visualization package for the R language.
Based on “The Grammar of Graphics” it
provides a powerful, declarative and extremely general way to generate bespoke plots of any kind of data.
Various implementations in other languages are available.
A good implementation for Python users is has2k1/plotnine.
IPython vega#
IPython Vega leverages Vega to create plots within Jupyter Notebook.
Plotly#
Plotly’s Python API enables interactive figures and web shareability. Maps, 2D, 3D, and live-streaming graphs are rendered with WebGL and D3.js. The library supports plotting directly from a pandas DataFrame and cloud-based collaboration. Users of matplotlib, ggplot for Python, and Seaborn can convert figures into interactive web-based plots. Plots can be drawn in IPython Notebooks, edited with R or MATLAB, modified in a GUI, or embedded in apps and dashboards. Plotly is free for unlimited sharing, and offers offline or on-premise accounts for private use.
Lux#
Lux is a Python library that facilitates fast and easy experimentation with data by automating the visual data exploration process. To use Lux, simply add an extra import alongside pandas:
import lux
import pandas as pd
df = pd.read_csv("data.csv")
df # discover interesting insights!
By printing out a dataframe, Lux automatically recommends a set of visualizations that highlights interesting trends and patterns in the dataframe. Users can leverage any existing pandas commands without modifying their code, while being able to visualize their pandas data structures (e.g., DataFrame, Series, Index) at the same time. Lux also offers a powerful, intuitive language that allows users to create Altair, matplotlib, or Vega-Lite visualizations without having to think at the level of code.
Qtpandas#
Spun off from the main pandas library, the qtpandas
library enables DataFrame visualization and manipulation in PyQt4 and PySide applications.
D-Tale#
D-Tale is a lightweight web client for visualizing pandas data structures. It
provides a rich spreadsheet-style grid which acts as a wrapper for a lot of
pandas functionality (query, sort, describe, corr…) so users can quickly
manipulate their data. There is also an interactive chart-builder using Plotly
Dash allowing users to build nice portable visualizations. D-Tale can be
invoked with the following command
import dtale
dtale.show(df)
D-Tale integrates seamlessly with Jupyter notebooks, Python terminals, Kaggle
& Google Colab. Here are some demos of the grid.
hvplot#
hvPlot is a high-level plotting API for the PyData ecosystem built on HoloViews.
It can be loaded as a native pandas plotting backend via
pd.set_option("plotting.backend", "hvplot")
IDE#
IPython#
IPython is an interactive command shell and distributed computing
environment. IPython tab completion works with pandas methods and also
attributes like DataFrame columns.
Jupyter Notebook / Jupyter Lab#
Jupyter Notebook is a web application for creating Jupyter notebooks.
A Jupyter notebook is a JSON document containing an ordered list
of input/output cells which can contain code, text, mathematics, plots
and rich media.
Jupyter notebooks can be converted to a number of open standard output formats
(HTML, HTML presentation slides, LaTeX, PDF, ReStructuredText, Markdown,
Python) through ‘Download As’ in the web interface and jupyter convert
in a shell.
pandas DataFrames implement _repr_html_ and _repr_latex_ methods
which are utilized by Jupyter Notebook for displaying
(abbreviated) HTML or LaTeX tables. LaTeX output is properly escaped.
(Note: HTML tables may or may not be
compatible with non-HTML Jupyter output formats.)
See Options and Settings and
Available Options
for pandas display settings.
Quantopian/qgrid#
qgrid is “an interactive grid for sorting and filtering
DataFrames in IPython Notebook” built with SlickGrid.
Spyder#
Spyder is a cross-platform PyQt-based IDE combining the editing, analysis,
debugging and profiling functionality of a software development tool with the
data exploration, interactive execution, deep inspection and rich visualization
capabilities of a scientific environment like MATLAB or Rstudio.
Its Variable Explorer
allows users to view, manipulate and edit pandas Index, Series,
and DataFrame objects like a “spreadsheet”, including copying and modifying
values, sorting, displaying a “heatmap”, converting data types and more.
pandas objects can also be renamed, duplicated, new columns added,
copied/pasted to/from the clipboard (as TSV), and saved/loaded to/from a file.
Spyder can also import data from a variety of plain text and binary files
or the clipboard into a new pandas DataFrame via a sophisticated import wizard.
Most pandas classes, methods and data attributes can be autocompleted in
Spyder’s Editor and
IPython Console,
and Spyder’s Help pane can retrieve
and render Numpydoc documentation on pandas objects in rich text with Sphinx
both automatically and on-demand.
API#
pandas-datareader#
pandas-datareader is a remote data access library for pandas (PyPI: pandas-datareader).
It is based on functionality that was located in pandas.io.data and pandas.io.wb but was
split off in v0.19.
See more in the pandas-datareader docs:
The following data feeds are available:
Google Finance
Tiingo
Morningstar
IEX
Robinhood
Enigma
Quandl
FRED
Fama/French
World Bank
OECD
Eurostat
TSP Fund Data
Nasdaq Trader Symbol Definitions
Stooq Index Data
MOEX Data
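A minimal usage sketch (the FRED series code and dates below are illustrative):
import pandas_datareader.data as web

# Pull the 10-year Treasury constant maturity rate from FRED as a DataFrame
df = web.DataReader("GS10", "fred", start="2020-01-01", end="2020-12-31")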
Quandl/Python#
Quandl API for Python wraps the Quandl REST API to return
pandas DataFrames with timeseries indexes.
Pydatastream#
PyDatastream is a Python interface to the
Refinitiv Datastream (DWS)
REST API to return indexed pandas DataFrames with financial data.
This package requires valid credentials for this API (non free).
pandaSDMX#
pandaSDMX is a library to retrieve and acquire statistical data
and metadata disseminated in
SDMX 2.1, an ISO-standard
widely used by institutions such as statistics offices, central banks,
and international organisations. pandaSDMX can expose datasets and related
structural metadata including data flows, code-lists,
and data structure definitions as pandas Series
or MultiIndexed DataFrames.
fredapi#
fredapi is a Python interface to the Federal Reserve Economic Data (FRED)
provided by the Federal Reserve Bank of St. Louis. It works with both the FRED database and ALFRED database that
contains point-in-time data (i.e. historic data revisions). fredapi provides a wrapper in Python to the FRED
HTTP API, and also provides several convenient methods for parsing and analyzing point-in-time data from ALFRED.
fredapi makes use of pandas and returns data in a Series or DataFrame. This module requires a FRED API key that
you can obtain for free on the FRED website.
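A minimal sketch (the API key and series name are placeholders):
from fredapi import Fred

fred = Fred(api_key="your-api-key")
sp500 = fred.get_series("SP500")  # returns a pandas Series indexed by date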
dataframe_sql#
dataframe_sql is a Python package that translates SQL syntax directly into
operations on pandas DataFrames. This is useful when migrating from a database to
using pandas or for users more comfortable with SQL looking for a way to interface
with pandas.
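A minimal sketch, assuming the package's register_temp_table/query helpers (the exact API is an assumption; check the project docs):
import pandas as pd
from dataframe_sql import register_temp_table, query

df = pd.DataFrame({"a": [1, 2, 3]})
register_temp_table(df, "my_table")                   # expose the DataFrame to SQL
result = query("SELECT a FROM my_table WHERE a > 1")  # returns a DataFrame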
Domain specific#
Geopandas#
Geopandas extends pandas data objects to include geographic information which support
geometric operations. If your work entails maps and geographical coordinates, and
you love pandas, you should take a close look at Geopandas.
staircase#
staircase is a data analysis package, built upon pandas and numpy, for modelling and
manipulation of mathematical step functions. It provides a rich variety of arithmetic
operations, relational operations, logical operations, statistical operations and
aggregations for step functions defined over real numbers, datetime and timedelta domains.
xarray#
xarray brings the labeled data power of pandas to the physical sciences by
providing N-dimensional variants of the core pandas data structures. It aims to
provide a pandas-like and pandas-compatible toolkit for analytics on multi-
dimensional arrays, rather than the tabular data for which pandas excels.
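Conversion in that direction uses pandas' own to_xarray method; a minimal sketch (illustrative data):
import pandas as pd

df = pd.DataFrame({"temp": [11.2, 13.5]},
                  index=pd.Index(["site_a", "site_b"], name="site"))
ds = df.to_xarray()  # an xarray.Dataset with "site" as a named dimension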
IO#
BCPandas#
BCPandas provides high performance writes from pandas to Microsoft SQL Server,
far exceeding the performance of the native df.to_sql method. Internally, it uses
Microsoft’s BCP utility, but the complexity is fully abstracted away from the end user.
Rigorously tested, it is a complete replacement for df.to_sql.
Deltalake#
Deltalake python package lets you access tables stored in
Delta Lake natively in Python without the need to use Spark or
JVM. It provides the delta_table.to_pyarrow_table().to_pandas() method to convert
any Delta table into a pandas DataFrame.
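A minimal sketch using the method named above (the table path is illustrative):
from deltalake import DeltaTable

dt = DeltaTable("/path/to/delta-table")
df = dt.to_pyarrow_table().to_pandas()  # no Spark or JVM required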
Out-of-core#
Blaze#
Blaze provides a standard API for doing computations with various
in-memory and on-disk backends: NumPy, pandas, SQLAlchemy, MongoDB, PyTables,
PySpark.
Cylon#
Cylon is a fast, scalable, distributed-memory parallel runtime with a pandas-like
Python DataFrame API. "Core Cylon" is implemented in C++ using the Apache
Arrow format to represent the data in-memory. Cylon DataFrame API implements
most of the core operators of pandas such as merge, filter, join, concat,
group-by, drop_duplicates, etc. These operators are designed to work across
thousands of cores to scale applications. It can interoperate with pandas
DataFrame by reading data from pandas or converting data to pandas so users
can selectively scale parts of their pandas DataFrame applications.
from pycylon import read_csv, DataFrame, CylonEnv
from pycylon.net import MPIConfig
# Initialize Cylon distributed environment
config: MPIConfig = MPIConfig()
env: CylonEnv = CylonEnv(config=config, distributed=True)
df1: DataFrame = read_csv('/tmp/csv1.csv')
df2: DataFrame = read_csv('/tmp/csv2.csv')
# Using 1000s of cores across the cluster to compute the join
df3: DataFrame = df1.join(other=df2, on=[0], algorithm="hash", env=env)
print(df3)
Dask#
Dask is a flexible parallel computing library for analytics. Dask
provides a familiar DataFrame interface for out-of-core, parallel and distributed computing.
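A minimal sketch of the familiar interface (file pattern and column names are illustrative):
import dask.dataframe as dd

ddf = dd.read_csv("data-*.csv")                         # lazy, partitioned DataFrame
result = ddf.groupby("key")["value"].mean().compute()   # executes in parallel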
Dask-ML#
Dask-ML enables parallel and distributed machine learning using Dask alongside existing machine learning libraries like Scikit-Learn, XGBoost, and TensorFlow.
Ibis#
Ibis offers a standard way to write analytics code that can be run in multiple engines. It helps in bridging the gap between local Python environments (like pandas) and remote storage and execution systems like Hadoop components (HDFS, Impala, Hive, Spark) and SQL databases (Postgres, etc.).
Koalas#
Koalas provides a familiar pandas DataFrame interface on top of Apache Spark. It enables users to leverage multi-cores on one machine or a cluster of machines to speed up or scale their DataFrame code.
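A minimal sketch (assuming an existing pandas DataFrame pdf and a running Spark session):
import databricks.koalas as ks

kdf = ks.from_pandas(pdf)        # distribute an existing pandas DataFrame
kdf2 = ks.read_csv("data.csv")   # or read data directly through Spark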
Modin#
The modin.pandas DataFrame is a parallel and distributed drop-in replacement
for pandas. This means that you can use Modin with existing pandas code or write
new code with the existing pandas API. Modin can leverage your entire machine or
cluster to speed up and scale your pandas workloads, including traditionally
time-consuming tasks like ingesting data (read_csv, read_excel,
read_parquet, etc.).
# import pandas as pd
import modin.pandas as pd
df = pd.read_csv("big.csv") # use all your cores!
Odo#
Odo provides a uniform API for moving data between different formats. It uses
pandas' own read_csv for CSV IO and leverages many existing packages such as
PyTables, h5py, and pymongo to move data between non pandas formats. Its graph
based approach is also extensible by end users for custom formats that may be
too specific for the core of odo.
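A minimal sketch of the uniform odo(source, target) call (the file name is illustrative):
import pandas as pd
from odo import odo

df = odo("accounts.csv", pd.DataFrame)  # CSV -> DataFrame, via pandas' read_csv underneath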
Pandarallel#
Pandarallel provides a simple way to parallelize your pandas operations on all your CPUs by changing only one line of code.
It also displays progress bars.
from pandarallel import pandarallel
pandarallel.initialize(progress_bar=True)
# df.apply(func)
df.parallel_apply(func)
Vaex#
Vaex is a Python library for out-of-core DataFrames (similar to pandas), to visualize and explore big tabular datasets. It can calculate statistics such as mean, sum, count, standard deviation etc. on an N-dimensional grid at up to a billion (10^9) objects/rows per second. Visualization is done using histograms, density plots and 3d volume rendering, allowing interactive exploration of big data. Vaex uses memory mapping, a zero-memory-copy policy and lazy computations for best performance (no memory wasted). Conversion helpers include vaex.from_pandas and vaex.to_pandas_df.
Extension data types#
pandas provides an interface for defining
extension types to extend NumPy’s type
system. The following libraries implement that interface to provide types not
found in NumPy or pandas, which work well with pandas’ data containers.
Cyberpandas#
Cyberpandas provides an extension type for storing arrays of IP Addresses. These
arrays can be stored inside pandas’ Series and DataFrame.
Pandas-Genomics#
Pandas-Genomics provides extension types, extension arrays, and extension accessors for working with genomics data
Pint-Pandas#
Pint-Pandas provides an extension type for
storing numeric arrays with units. These arrays can be stored inside pandas’
Series and DataFrame. Operations between Series and DataFrame columns which
use pint’s extension array are then units aware.
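A minimal sketch (assuming the package registers its "pint[...]" dtype strings on import):
import pandas as pd
import pint_pandas  # noqa: F401 -- registers the pint extension dtype

distances = pd.Series([1.0, 2.0], dtype="pint[m]")  # a units-aware Series in metres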
Text Extensions for Pandas#
Text Extensions for Pandas
provides extension types to cover common data structures for representing natural language
data, plus library integrations that convert the outputs of popular natural language
processing libraries into Pandas DataFrames.
Accessors#
A directory of projects providing
extension accessors. This is for users to
discover new accessors and for library authors to coordinate on the namespace.
Library | Accessor | Classes | Description
---|---|---|---
cyberpandas | ip | Series | Provides common operations for working with IP addresses.
pdvega | vgplot | Series, DataFrame | Provides plotting functions from the Altair library.
pandas-genomics | genomics | Series, DataFrame | Provides common operations for quality control and analysis of genomics data.
pandas_path | path | Index, Series | Provides pathlib.Path functions for Series.
pint-pandas | pint | Series, DataFrame | Provides units support for numeric Series and DataFrames.
composeml | slice | DataFrame | Provides a generator for enhanced data slicing.
datatest | validate | Series, DataFrame, Index | Provides validation, differences, and acceptance managers.
woodwork | ww | Series, DataFrame | Provides physical, logical, and semantic data typing information for Series and DataFrames.
staircase | sc | Series | Provides methods for querying, aggregating and plotting step functions.
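For library authors, registering such an accessor goes through pandas' public extension hooks; a minimal sketch (the "greet" namespace and behaviour are illustrative):
import pandas as pd

@pd.api.extensions.register_series_accessor("greet")
class GreetAccessor:
    # A toy accessor: series.greet.hello() prefixes each value.
    def __init__(self, series):
        self._series = series

    def hello(self):
        return "hello " + self._series.astype(str)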
Development tools#
pandas-stubs#
While the pandas repository is partially typed, the package itself doesn’t expose this information for external use.
Install pandas-stubs to enable basic type coverage of the pandas API.
Learn more by reading through GH14468, GH26766, GH28142.
See installation and usage instructions on the GitHub page.
|
ecosystem.html
| null |
pandas.Series.min
|
`pandas.Series.min`
Return the minimum of the values over the requested axis.
If you want the index of the minimum, use idxmin. This is the equivalent of the numpy.ndarray method argmin.
```
>>> idx = pd.MultiIndex.from_arrays([
... ['warm', 'warm', 'cold', 'cold'],
... ['dog', 'falcon', 'fish', 'spider']],
... names=['blooded', 'animal'])
>>> s = pd.Series([4, 2, 0, 8], name='legs', index=idx)
>>> s
blooded animal
warm dog 4
falcon 2
cold fish 0
spider 8
Name: legs, dtype: int64
```
|
Series.min(axis=_NoDefault.no_default, skipna=True, level=None, numeric_only=None, **kwargs)
Return the minimum of the values over the requested axis.
If you want the index of the minimum, use idxmin. This is the equivalent of the numpy.ndarray method argmin.
Parameters
axis : {index (0)}
    Axis for the function to be applied on.
    For Series this parameter is unused and defaults to 0.
skipna : bool, default True
    Exclude NA/null values when computing the result.
level : int or level name, default None
    If the axis is a MultiIndex (hierarchical), count along a
    particular level, collapsing into a scalar.
    Deprecated since version 1.3.0: The level keyword is deprecated. Use groupby instead.
numeric_only : bool, default None
    Include only float, int, boolean columns. If None, will attempt to use
    everything, then use only numeric data. Not implemented for Series.
    Deprecated since version 1.5.0: Specifying numeric_only=None is deprecated. The default value will be
    False in a future version of pandas.
**kwargs
    Additional keyword arguments to be passed to the function.
Returns
scalar or Series (if level specified)
See also
Series.sum : Return the sum.
Series.min : Return the minimum.
Series.max : Return the maximum.
Series.idxmin : Return the index of the minimum.
Series.idxmax : Return the index of the maximum.
DataFrame.sum : Return the sum over the requested axis.
DataFrame.min : Return the minimum over the requested axis.
DataFrame.max : Return the maximum over the requested axis.
DataFrame.idxmin : Return the index of the minimum over the requested axis.
DataFrame.idxmax : Return the index of the maximum over the requested axis.
Examples
>>> idx = pd.MultiIndex.from_arrays([
... ['warm', 'warm', 'cold', 'cold'],
... ['dog', 'falcon', 'fish', 'spider']],
... names=['blooded', 'animal'])
>>> s = pd.Series([4, 2, 0, 8], name='legs', index=idx)
>>> s
blooded animal
warm dog 4
falcon 2
cold fish 0
spider 8
Name: legs, dtype: int64
>>> s.min()
0
|
reference/api/pandas.Series.min.html
|
pandas.tseries.offsets.LastWeekOfMonth.weekday
|
pandas.tseries.offsets.LastWeekOfMonth.weekday
|
LastWeekOfMonth.weekday
|
reference/api/pandas.tseries.offsets.LastWeekOfMonth.weekday.html
|
pandas.tseries.offsets.BusinessHour
|
`pandas.tseries.offsets.BusinessHour`
DateOffset subclass representing possibly n business hours.
```
>>> ts = pd.Timestamp(2022, 8, 5, 16)
>>> ts + pd.offsets.BusinessHour()
Timestamp('2022-08-08 09:00:00')
```
|
class pandas.tseries.offsets.BusinessHour
DateOffset subclass representing possibly n business hours.
Parameters
n : int, default 1
    The number of business hours represented.
normalize : bool, default False
    Normalize start/end dates to midnight before generating date range.
weekmask : str, default 'Mon Tue Wed Thu Fri'
    Weekmask of valid business days, passed to numpy.busdaycalendar.
start : str, default "09:00"
    Start time of your custom business hour in 24h format.
end : str, default "17:00"
    End time of your custom business hour in 24h format.
Examples
>>> ts = pd.Timestamp(2022, 8, 5, 16)
>>> ts + pd.offsets.BusinessHour()
Timestamp('2022-08-08 09:00:00')
Attributes
base
Returns a copy of the calling offset object with n=1 and all other attributes equal.
freqstr
Return a string representing the frequency.
kwds
Return a dict of extra parameters for the offset.
name
Return a string representing the base frequency.
next_bday
Used for moving to next business day.
offset
Alias for self._offset.
calendar
end
holidays
n
nanos
normalize
rule_code
start
weekmask
Methods
__call__(*args, **kwargs)
Call self as a function.
apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
copy
Return a copy of the frequency.
is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
is_month_end
Return boolean whether a timestamp occurs on the month end.
is_month_start
Return boolean whether a timestamp occurs on the month start.
is_on_offset
Return boolean whether a timestamp intersects with this frequency.
is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
is_year_end
Return boolean whether a timestamp occurs on the year end.
is_year_start
Return boolean whether a timestamp occurs on the year start.
rollback(other)
Roll provided date backward to next offset only if not on offset.
rollforward(other)
Roll provided date forward to next offset only if not on offset.
apply
isAnchored
onOffset
|
reference/api/pandas.tseries.offsets.BusinessHour.html
|
pandas.tseries.offsets.Day.rollback
|
`pandas.tseries.offsets.Day.rollback`
Roll provided date backward to next offset only if not on offset.
|
Day.rollback()
Roll provided date backward to next offset only if not on offset.
Returns
Timestamp
    Rolled timestamp if not on offset, otherwise unchanged timestamp.
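Day treats every calendar day as on-offset, so rollback returns the input unchanged; an offset such as BusinessDay makes the rolling visible (a minimal sketch):
>>> ts = pd.Timestamp(2022, 8, 6)  # a Saturday
>>> pd.offsets.Day().rollback(ts)
Timestamp('2022-08-06 00:00:00')
>>> pd.offsets.BusinessDay().rollback(ts)
Timestamp('2022-08-05 00:00:00')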
|
reference/api/pandas.tseries.offsets.Day.rollback.html
|
Input/output
|
Input/output
read_pickle(filepath_or_buffer[, ...])
Load pickled pandas object (or any object) from file.
DataFrame.to_pickle(path[, compression, ...])
Pickle (serialize) object to file.
read_table(filepath_or_buffer, *[, sep, ...])
|
Pickling#
read_pickle(filepath_or_buffer[, ...])
Load pickled pandas object (or any object) from file.
DataFrame.to_pickle(path[, compression, ...])
Pickle (serialize) object to file.
Flat file#
read_table(filepath_or_buffer, *[, sep, ...])
Read general delimited file into DataFrame.
read_csv(filepath_or_buffer, *[, sep, ...])
Read a comma-separated values (csv) file into DataFrame.
DataFrame.to_csv([path_or_buf, sep, na_rep, ...])
Write object to a comma-separated values (csv) file.
read_fwf(filepath_or_buffer, *[, colspecs, ...])
Read a table of fixed-width formatted lines into DataFrame.
Clipboard#
read_clipboard([sep])
Read text from clipboard and pass to read_csv.
DataFrame.to_clipboard([excel, sep])
Copy object to the system clipboard.
Excel#
read_excel(io[, sheet_name, header, names, ...])
Read an Excel file into a pandas DataFrame.
DataFrame.to_excel(excel_writer[, ...])
Write object to an Excel sheet.
ExcelFile.parse([sheet_name, header, names, ...])
Parse specified sheet(s) into a DataFrame.
Styler.to_excel(excel_writer[, sheet_name, ...])
Write Styler to an Excel sheet.
ExcelWriter(path[, engine, date_format, ...])
Class for writing DataFrame objects into excel sheets.
JSON#
read_json(path_or_buf, *[, orient, typ, ...])
Convert a JSON string to pandas object.
json_normalize(data[, record_path, meta, ...])
Normalize semi-structured JSON data into a flat table.
DataFrame.to_json([path_or_buf, orient, ...])
Convert the object to a JSON string.
build_table_schema(data[, index, ...])
Create a Table schema from data.
HTML#
read_html(io, *[, match, flavor, header, ...])
Read HTML tables into a list of DataFrame objects.
DataFrame.to_html([buf, columns, col_space, ...])
Render a DataFrame as an HTML table.
Styler.to_html([buf, table_uuid, ...])
Write Styler to a file, buffer or string in HTML-CSS format.
XML#
read_xml(path_or_buffer, *[, xpath, ...])
Read XML document into a DataFrame object.
DataFrame.to_xml([path_or_buffer, index, ...])
Render a DataFrame to an XML document.
Latex#
DataFrame.to_latex([buf, columns, ...])
Render object to a LaTeX tabular, longtable, or nested table.
Styler.to_latex([buf, column_format, ...])
Write Styler to a file, buffer or string in LaTeX format.
HDFStore: PyTables (HDF5)#
read_hdf(path_or_buf[, key, mode, errors, ...])
Read from the store, close it if we opened it.
HDFStore.put(key, value[, format, index, ...])
Store object in HDFStore.
HDFStore.append(key, value[, format, axes, ...])
Append to Table in file.
HDFStore.get(key)
Retrieve pandas object stored in file.
HDFStore.select(key[, where, start, stop, ...])
Retrieve pandas object stored in file, optionally based on where criteria.
HDFStore.info()
Print detailed information on the store.
HDFStore.keys([include])
Return a list of keys corresponding to objects stored in HDFStore.
HDFStore.groups()
Return a list of all the top-level nodes.
HDFStore.walk([where])
Walk the pytables group hierarchy for pandas objects.
Warning
One can store a subclass of DataFrame or Series to HDF5,
but the type of the subclass is lost upon storing.
Feather#
read_feather(path[, columns, use_threads, ...])
Load a feather-format object from the file path.
DataFrame.to_feather(path, **kwargs)
Write a DataFrame to the binary Feather format.
Parquet#
read_parquet(path[, engine, columns, ...])
Load a parquet object from the file path, returning a DataFrame.
DataFrame.to_parquet([path, engine, ...])
Write a DataFrame to the binary parquet format.
ORC#
read_orc(path[, columns])
Load an ORC object from the file path, returning a DataFrame.
DataFrame.to_orc([path, engine, index, ...])
Write a DataFrame to the ORC format.
SAS#
read_sas(filepath_or_buffer, *[, format, ...])
Read SAS files stored as either XPORT or SAS7BDAT format files.
SPSS#
read_spss(path[, usecols, convert_categoricals])
Load an SPSS file from the file path, returning a DataFrame.
SQL#
read_sql_table(table_name, con[, schema, ...])
Read SQL database table into a DataFrame.
read_sql_query(sql, con[, index_col, ...])
Read SQL query into a DataFrame.
read_sql(sql, con[, index_col, ...])
Read SQL query or database table into a DataFrame.
DataFrame.to_sql(name, con[, schema, ...])
Write records stored in a DataFrame to a SQL database.
Google BigQuery#
read_gbq(query[, project_id, index_col, ...])
Load data from Google BigQuery.
STATA#
read_stata(filepath_or_buffer, *[, ...])
Read Stata file into DataFrame.
DataFrame.to_stata(path, *[, convert_dates, ...])
Export DataFrame object to Stata dta format.
StataReader.data_label
Return data label of Stata file.
StataReader.value_labels()
Return a nested dict associating each variable name to its value and label.
StataReader.variable_labels()
Return a dict associating each variable name with corresponding label.
StataWriter.write_file()
Export DataFrame object to Stata dta format.
|
reference/io.html
|
pandas.MultiIndex.dtypes
|
`pandas.MultiIndex.dtypes`
Return the dtypes as a Series for the underlying MultiIndex.
|
MultiIndex.dtypes
Return the dtypes as a Series for the underlying MultiIndex.
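A minimal sketch (illustrative levels):
>>> idx = pd.MultiIndex.from_product([[1, 2], ["a", "b"]], names=["number", "letter"])
>>> idx.dtypes
number     int64
letter    object
dtype: object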
|
reference/api/pandas.MultiIndex.dtypes.html
|
pandas.core.groupby.GroupBy.var
|
`pandas.core.groupby.GroupBy.var`
Compute variance of groups, excluding missing values.
For multiple groupings, the result index will be a MultiIndex.
|
final GroupBy.var(ddof=1, engine=None, engine_kwargs=None, numeric_only=_NoDefault.no_default)
Compute variance of groups, excluding missing values.
For multiple groupings, the result index will be a MultiIndex.
Parameters
ddof : int, default 1
    Degrees of freedom.
engine : str, default None
    'cython' : Runs the operation through C-extensions from cython.
    'numba' : Runs the operation through JIT compiled code from numba.
    None : Defaults to 'cython' or the global setting compute.use_numba.
    New in version 1.4.0.
engine_kwargs : dict, default None
    For the 'cython' engine, there are no accepted engine_kwargs.
    For the 'numba' engine, the engine can accept nopython, nogil
    and parallel dictionary keys. The values must either be True or
    False. The default engine_kwargs for the 'numba' engine is
    {'nopython': True, 'nogil': False, 'parallel': False}.
    New in version 1.4.0.
numeric_only : bool, default True
    Include only float, int or boolean data.
    New in version 1.5.0.
Returns
Series or DataFrame
    Variance of values within each group.
See also
Series.groupby : Apply a function groupby to a Series.
DataFrame.groupby : Apply a function groupby to each row or column of a DataFrame.
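A minimal usage sketch (illustrative data; each group's sample variance uses ddof=1):
>>> df = pd.DataFrame({"a": [1, 1, 2, 2], "b": [1.0, 2.0, 4.0, 8.0]})
>>> df.groupby("a")["b"].var()
a
1    0.5
2    8.0
Name: b, dtype: float64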
|
reference/api/pandas.core.groupby.GroupBy.var.html
|
pandas.tseries.offsets.MonthEnd.apply_index
|
`pandas.tseries.offsets.MonthEnd.apply_index`
Vectorized apply of DateOffset to DatetimeIndex.
|
MonthEnd.apply_index()#
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead.
Parameters
indexDatetimeIndex
Returns
DatetimeIndex
Raises
NotImplementedErrorWhen the specific offset subclass does not have a vectorized
implementation.
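A sketch of the recommended replacement, offset + dtindex (illustrative, assuming pandas is imported as pd):
>>> dtindex = pd.date_range('2022-01-15', periods=3, freq='D')
>>> pd.offsets.MonthEnd() + dtindex
DatetimeIndex(['2022-01-31', '2022-01-31', '2022-01-31'], dtype='datetime64[ns]', freq=None)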
|
reference/api/pandas.tseries.offsets.MonthEnd.apply_index.html
|
pandas.tseries.offsets.WeekOfMonth.__call__
|
`pandas.tseries.offsets.WeekOfMonth.__call__`
Call self as a function.
|
WeekOfMonth.__call__(*args, **kwargs)#
Call self as a function.
|
reference/api/pandas.tseries.offsets.WeekOfMonth.__call__.html
|
pandas.plotting.deregister_matplotlib_converters
|
`pandas.plotting.deregister_matplotlib_converters`
Remove pandas formatters and converters.
Removes the custom converters added by register(). This
attempts to set the state of the registry back to the state before
pandas registered its own units. Converters for pandas’ own types like
Timestamp and Period are removed completely. Converters for types
pandas overwrites, like datetime.datetime, are restored to their
original value.
|
pandas.plotting.deregister_matplotlib_converters()[source]#
Remove pandas formatters and converters.
Removes the custom converters added by register(). This
attempts to set the state of the registry back to the state before
pandas registered its own units. Converters for pandas’ own types like
Timestamp and Period are removed completely. Converters for types
pandas overwrites, like datetime.datetime, are restored to their
original value.
See also
register_matplotlib_convertersRegister pandas formatters and converters with matplotlib.
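A minimal usage sketch (illustrative):
>>> from pandas.plotting import (register_matplotlib_converters,
...     deregister_matplotlib_converters)
>>> register_matplotlib_converters()    # add pandas' converters to matplotlib
>>> deregister_matplotlib_converters()  # restore matplotlib's prior state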
|
reference/api/pandas.plotting.deregister_matplotlib_converters.html
|
pandas.tseries.offsets.Week.is_year_end
|
`pandas.tseries.offsets.Week.is_year_end`
Return boolean whether a timestamp occurs on the year end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
```
|
Week.is_year_end()#
Return boolean whether a timestamp occurs on the year end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
|
reference/api/pandas.tseries.offsets.Week.is_year_end.html
|
pandas.Series.get
|
`pandas.Series.get`
Get item from object for given key (ex: DataFrame column).
Returns default value if not found.
```
>>> df = pd.DataFrame(
... [
... [24.3, 75.7, "high"],
... [31, 87.8, "high"],
... [22, 71.6, "medium"],
... [35, 95, "medium"],
... ],
... columns=["temp_celsius", "temp_fahrenheit", "windspeed"],
... index=pd.date_range(start="2014-02-12", end="2014-02-15", freq="D"),
... )
```
|
Series.get(key, default=None)[source]#
Get item from object for given key (ex: DataFrame column).
Returns default value if not found.
Parameters
keyobject
Returns
valuesame type as items contained in object
Examples
>>> df = pd.DataFrame(
... [
... [24.3, 75.7, "high"],
... [31, 87.8, "high"],
... [22, 71.6, "medium"],
... [35, 95, "medium"],
... ],
... columns=["temp_celsius", "temp_fahrenheit", "windspeed"],
... index=pd.date_range(start="2014-02-12", end="2014-02-15", freq="D"),
... )
>>> df
temp_celsius temp_fahrenheit windspeed
2014-02-12 24.3 75.7 high
2014-02-13 31.0 87.8 high
2014-02-14 22.0 71.6 medium
2014-02-15 35.0 95.0 medium
>>> df.get(["temp_celsius", "windspeed"])
temp_celsius windspeed
2014-02-12 24.3 high
2014-02-13 31.0 high
2014-02-14 22.0 medium
2014-02-15 35.0 medium
If the key isn’t found, the default value will be used.
>>> df.get(["temp_celsius", "temp_kelvin"], default="default_value")
'default_value'
|
reference/api/pandas.Series.get.html
|
pandas.api.extensions.ExtensionArray.searchsorted
|
`pandas.api.extensions.ExtensionArray.searchsorted`
Find indices where elements should be inserted to maintain order.
|
ExtensionArray.searchsorted(value, side='left', sorter=None)[source]#
Find indices where elements should be inserted to maintain order.
Find the indices into a sorted array self (a) such that, if the
corresponding elements in value were inserted before the indices,
the order of self would be preserved.
Assuming that self is sorted:
side     returned index i satisfies
left     self[i-1] < value <= self[i]
right    self[i-1] <= value < self[i]
Parameters
valuearray-like, list or scalarValue(s) to insert into self.
side{‘left’, ‘right’}, optionalIf ‘left’, the index of the first suitable location found is given.
If ‘right’, return the last such index. If there is no suitable
index, return either 0 or N (where N is the length of self).
sorter1-D array-like, optionalOptional array of integer indices that sort array a into ascending
order. They are typically the result of argsort.
Returns
array of ints or intIf value is array-like, array of insertion points.
If value is scalar, a single integer.
See also
numpy.searchsortedSimilar method from NumPy.
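A minimal sketch (illustrative, assuming pandas is imported as pd):
>>> arr = pd.array([1, 2, 3, 5])
>>> arr.searchsorted(4)
3
>>> arr.searchsorted([0, 3], side='right')
array([0, 3])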
|
reference/api/pandas.api.extensions.ExtensionArray.searchsorted.html
|
pandas.tseries.offsets.BQuarterEnd.is_anchored
|
`pandas.tseries.offsets.BQuarterEnd.is_anchored`
Return boolean whether the frequency is a unit frequency (n=1).
Examples
```
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
```
|
BQuarterEnd.is_anchored()#
Return boolean whether the frequency is a unit frequency (n=1).
Examples
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
|
reference/api/pandas.tseries.offsets.BQuarterEnd.is_anchored.html
|
pandas.Timedelta.resolution
|
pandas.Timedelta.resolution
|
Timedelta.resolution = Timedelta('0 days 00:00:00.000000001')#
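No description is given on this page; as an illustration, resolution is the smallest representable increment (one nanosecond), analogous to datetime.timedelta.resolution:
>>> pd.Timedelta.resolution
Timedelta('0 days 00:00:00.000000001')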
|
reference/api/pandas.Timedelta.resolution.html
|
pandas.tseries.offsets.Minute.nanos
|
`pandas.tseries.offsets.Minute.nanos`
Return an integer of the total number of nanoseconds.
Raises ValueError if the frequency is non-fixed.
```
>>> pd.offsets.Hour(5).nanos
18000000000000
```
|
Minute.nanos#
Return an integer of the total number of nanoseconds.
Raises
ValueErrorIf the frequency is non-fixed.
Examples
>>> pd.offsets.Hour(5).nanos
18000000000000
|
reference/api/pandas.tseries.offsets.Minute.nanos.html
|
pandas.read_gbq
|
`pandas.read_gbq`
Load data from Google BigQuery.
This function requires the pandas-gbq package.
|
pandas.read_gbq(query, project_id=None, index_col=None, col_order=None, reauth=False, auth_local_webserver=True, dialect=None, location=None, configuration=None, credentials=None, use_bqstorage_api=None, max_results=None, progress_bar_type=None)[source]#
Load data from Google BigQuery.
This function requires the pandas-gbq package.
See the How to authenticate with Google BigQuery
guide for authentication instructions.
Parameters
querystrSQL-Like Query to return data values.
project_idstr, optionalGoogle BigQuery Account project ID. Optional when available from
the environment.
index_colstr, optionalName of result column to use for index in results DataFrame.
col_orderlist(str), optionalList of BigQuery column names in the desired order for results
DataFrame.
reauthbool, default FalseForce Google BigQuery to re-authenticate the user. This is useful
if multiple accounts are used.
auth_local_webserverbool, default TrueUse the local webserver flow instead of the console flow
when getting user credentials.
New in version 0.2.0 of pandas-gbq.
Changed in version 1.5.0: Default value is changed to True. Google has deprecated the
auth_local_webserver = False “out of band” (copy-paste)
flow.
dialectstr, default ‘legacy’Note: The default value is changing to ‘standard’ in a future version.
SQL syntax dialect to use. Value can be one of:
'legacy'Use BigQuery’s legacy SQL dialect. For more information see
BigQuery Legacy SQL Reference.
'standard'Use BigQuery’s standard SQL, which is
compliant with the SQL 2011 standard. For more information
see BigQuery Standard SQL Reference.
locationstr, optionalLocation where the query job should run. See the BigQuery locations
documentation for a
list of available locations. The location must match that of any
datasets used in the query.
New in version 0.5.0 of pandas-gbq.
configurationdict, optionalQuery config parameters for job processing.
For example:
configuration = {'query': {'useQueryCache': False}}
For more information see BigQuery REST API Reference.
credentialsgoogle.auth.credentials.Credentials, optionalCredentials for accessing Google APIs. Use this parameter to override
default credentials, such as to use Compute Engine
google.auth.compute_engine.Credentials or Service Account
google.oauth2.service_account.Credentials directly.
New in version 0.8.0 of pandas-gbq.
use_bqstorage_apibool, default FalseUse the BigQuery Storage API to
download query results quickly, but at an increased cost. To use this
API, first enable it in the Cloud Console.
You must also have the bigquery.readsessions.create
permission on the project you are billing queries to.
This feature requires version 0.10.0 or later of the pandas-gbq
package. It also requires the google-cloud-bigquery-storage and
fastavro packages.
New in version 0.25.0.
max_resultsint, optionalIf set, limit the maximum number of rows to fetch from the query
results.
New in version 0.12.0 of pandas-gbq.
New in version 1.1.0.
progress_bar_typeOptional, strIf set, use the tqdm library to
display a progress bar while the data downloads. Install the
tqdm package to use this feature.
Possible values of progress_bar_type include:
NoneNo progress bar.
'tqdm'Use the tqdm.tqdm() function to print a progress bar
to sys.stderr.
'tqdm_notebook'Use the tqdm.tqdm_notebook() function to display a
progress bar as a Jupyter notebook widget.
'tqdm_gui'Use the tqdm.tqdm_gui() function to display a
progress bar as a graphical dialog box.
Note that this feature requires version 0.12.0 or later of the
pandas-gbq package. And it requires the tqdm package. Slightly
different than pandas-gbq, here the default is None.
New in version 1.0.0.
Returns
df: DataFrameDataFrame representing results of query.
See also
pandas_gbq.read_gbqThis function in the pandas-gbq library.
DataFrame.to_gbqWrite a DataFrame to Google BigQuery.
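A hedged sketch (requires the pandas-gbq package and valid Google credentials; the project ID below is a placeholder):
>>> df = pd.read_gbq("SELECT 1 AS x", project_id="my-project", dialect="standard")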
|
reference/api/pandas.read_gbq.html
|
pandas.Series.to_period
|
`pandas.Series.to_period`
Convert Series from DatetimeIndex to PeriodIndex.
The freq parameter sets the frequency associated with the PeriodIndex.
|
Series.to_period(freq=None, copy=True)[source]#
Convert Series from DatetimeIndex to PeriodIndex.
Parameters
freqstr, default NoneFrequency associated with the PeriodIndex.
copybool, default TrueWhether or not to return a copy.
Returns
SeriesSeries with index converted to PeriodIndex.
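A minimal sketch (illustrative, assuming pandas is imported as pd):
>>> idx = pd.date_range('2023-01-31', periods=3, freq='M')
>>> s = pd.Series([1, 2, 3], index=idx)
>>> s.to_period()
2023-01    1
2023-02    2
2023-03    3
Freq: M, dtype: int64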
|
reference/api/pandas.Series.to_period.html
|
pandas.Series.searchsorted
|
`pandas.Series.searchsorted`
Find indices where elements should be inserted to maintain order.
```
>>> ser = pd.Series([1, 2, 3])
>>> ser
0 1
1 2
2 3
dtype: int64
```
|
Series.searchsorted(value, side='left', sorter=None)[source]#
Find indices where elements should be inserted to maintain order.
Find the indices into a sorted Series self such that, if the
corresponding elements in value were inserted before the indices,
the order of self would be preserved.
Note
The Series must be monotonically sorted, otherwise
wrong locations will likely be returned. Pandas does not
check this for you.
Parameters
valuearray-like or scalarValues to insert into self.
side{‘left’, ‘right’}, optionalIf ‘left’, the index of the first suitable location found is given.
If ‘right’, return the last such index. If there is no suitable
index, return either 0 or N (where N is the length of self).
sorter1-D array-like, optionalOptional array of integer indices that sort self into ascending
order. They are typically the result of np.argsort.
Returns
int or array of intA scalar or array of insertion points with the
same shape as value.
See also
sort_valuesSort by the values along either axis.
numpy.searchsortedSimilar method from NumPy.
Notes
Binary search is used to find the required insertion points.
Examples
>>> ser = pd.Series([1, 2, 3])
>>> ser
0 1
1 2
2 3
dtype: int64
>>> ser.searchsorted(4)
3
>>> ser.searchsorted([0, 4])
array([0, 3])
>>> ser.searchsorted([1, 3], side='left')
array([0, 2])
>>> ser.searchsorted([1, 3], side='right')
array([1, 3])
>>> ser = pd.Series(pd.to_datetime(['3/11/2000', '3/12/2000', '3/13/2000']))
>>> ser
0 2000-03-11
1 2000-03-12
2 2000-03-13
dtype: datetime64[ns]
>>> ser.searchsorted('3/14/2000')
3
>>> ser = pd.Categorical(
... ['apple', 'bread', 'bread', 'cheese', 'milk'], ordered=True
... )
>>> ser
['apple', 'bread', 'bread', 'cheese', 'milk']
Categories (4, object): ['apple' < 'bread' < 'cheese' < 'milk']
>>> ser.searchsorted('bread')
1
>>> ser.searchsorted(['bread'], side='right')
array([3])
If the values are not monotonically sorted, wrong locations
may be returned:
>>> ser = pd.Series([2, 1, 3])
>>> ser
0 2
1 1
2 3
dtype: int64
>>> ser.searchsorted(1)
0 # wrong result, correct would be 1
|
reference/api/pandas.Series.searchsorted.html
|
pandas.Period.daysinmonth
|
`pandas.Period.daysinmonth`
Get the total number of days of the month that this period falls on.
```
>>> p = pd.Period("2018-03-11", freq='H')
>>> p.daysinmonth
31
```
|
Period.daysinmonth#
Get the total number of days of the month that this period falls on.
Returns
int
See also
Period.days_in_monthReturn the days of the month.
Period.dayofyearReturn the day of the year.
Examples
>>> p = pd.Period("2018-03-11", freq='H')
>>> p.daysinmonth
31
|
reference/api/pandas.Period.daysinmonth.html
|
pandas.core.groupby.DataFrameGroupBy.nunique
|
`pandas.core.groupby.DataFrameGroupBy.nunique`
Return DataFrame with counts of unique elements in each position.
```
>>> df = pd.DataFrame({'id': ['spam', 'egg', 'egg', 'spam',
... 'ham', 'ham'],
... 'value1': [1, 5, 5, 2, 5, 5],
... 'value2': list('abbaxy')})
>>> df
id value1 value2
0 spam 1 a
1 egg 5 b
2 egg 5 b
3 spam 2 a
4 ham 5 x
5 ham 5 y
```
|
DataFrameGroupBy.nunique(dropna=True)[source]#
Return DataFrame with counts of unique elements in each position.
Parameters
dropnabool, default TrueDon’t include NaN in the counts.
Returns
nunique: DataFrame
Examples
>>> df = pd.DataFrame({'id': ['spam', 'egg', 'egg', 'spam',
... 'ham', 'ham'],
... 'value1': [1, 5, 5, 2, 5, 5],
... 'value2': list('abbaxy')})
>>> df
id value1 value2
0 spam 1 a
1 egg 5 b
2 egg 5 b
3 spam 2 a
4 ham 5 x
5 ham 5 y
>>> df.groupby('id').nunique()
value1 value2
id
egg 1 1
ham 1 2
spam 2 1
Check for rows with the same id but conflicting values:
>>> df.groupby('id').filter(lambda g: (g.nunique() > 1).any())
id value1 value2
0 spam 1 a
3 spam 2 a
4 ham 5 x
5 ham 5 y
|
reference/api/pandas.core.groupby.DataFrameGroupBy.nunique.html
|
pandas.tseries.offsets.CustomBusinessHour.apply_index
|
`pandas.tseries.offsets.CustomBusinessHour.apply_index`
Vectorized apply of DateOffset to DatetimeIndex.
|
CustomBusinessHour.apply_index()#
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead.
Parameters
indexDatetimeIndex
Returns
DatetimeIndex
Raises
NotImplementedErrorWhen the specific offset subclass does not have a vectorized
implementation.
|
reference/api/pandas.tseries.offsets.CustomBusinessHour.apply_index.html
|
pandas.DataFrame.at_time
|
`pandas.DataFrame.at_time`
Select values at particular time of day (e.g., 9:30AM).
```
>>> i = pd.date_range('2018-04-09', periods=4, freq='12H')
>>> ts = pd.DataFrame({'A': [1, 2, 3, 4]}, index=i)
>>> ts
A
2018-04-09 00:00:00 1
2018-04-09 12:00:00 2
2018-04-10 00:00:00 3
2018-04-10 12:00:00 4
```
|
DataFrame.at_time(time, asof=False, axis=None)[source]#
Select values at particular time of day (e.g., 9:30AM).
Parameters
timedatetime.time or str
axis{0 or ‘index’, 1 or ‘columns’}, default 0For Series this parameter is unused and defaults to 0.
Returns
Series or DataFrame
Raises
TypeErrorIf the index is not a DatetimeIndex
See also
between_timeSelect values between particular times of the day.
firstSelect initial periods of time series based on a date offset.
lastSelect final periods of time series based on a date offset.
DatetimeIndex.indexer_at_timeGet just the index locations for values at particular time of the day.
Examples
>>> i = pd.date_range('2018-04-09', periods=4, freq='12H')
>>> ts = pd.DataFrame({'A': [1, 2, 3, 4]}, index=i)
>>> ts
A
2018-04-09 00:00:00 1
2018-04-09 12:00:00 2
2018-04-10 00:00:00 3
2018-04-10 12:00:00 4
>>> ts.at_time('12:00')
A
2018-04-09 12:00:00 2
2018-04-10 12:00:00 4
|
reference/api/pandas.DataFrame.at_time.html
|
pandas.tseries.offsets.MonthEnd.is_month_end
|
`pandas.tseries.offsets.MonthEnd.is_month_end`
Return boolean whether a timestamp occurs on the month end.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
```
|
MonthEnd.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
|
reference/api/pandas.tseries.offsets.MonthEnd.is_month_end.html
|
pandas.DataFrame.__dataframe__
|
`pandas.DataFrame.__dataframe__`
Return the dataframe interchange object implementing the interchange protocol.
The nan_as_null parameter controls whether null values in the data are
overwritten with NaN (or NaT).
|
DataFrame.__dataframe__(nan_as_null=False, allow_copy=True)[source]#
Return the dataframe interchange object implementing the interchange protocol.
Parameters
nan_as_nullbool, default FalseWhether to tell the DataFrame to overwrite null values in the data
with NaN (or NaT).
allow_copybool, default TrueWhether to allow memory copying when exporting. If set to False
it would cause non-zero-copy exports to fail.
Returns
DataFrame interchange objectThe object which consuming library can use to ingress the dataframe.
Notes
Details on the interchange protocol:
https://data-apis.org/dataframe-protocol/latest/index.html
nan_as_null currently has no effect; once support for nullable extension
dtypes is added, this value should be propagated to columns.
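A minimal sketch of producing and inspecting the interchange object (illustrative):
>>> df = pd.DataFrame({'a': [1, 2], 'b': [3.0, 4.0]})
>>> interchange = df.__dataframe__()
>>> interchange.num_columns(), interchange.num_rows()
(2, 2)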
|
reference/api/pandas.DataFrame.__dataframe__.html
|
pandas.tseries.offsets.BQuarterBegin.isAnchored
|
pandas.tseries.offsets.BQuarterBegin.isAnchored
|
BQuarterBegin.isAnchored()#
|
reference/api/pandas.tseries.offsets.BQuarterBegin.isAnchored.html
|
pandas.tseries.offsets.FY5253Quarter.__call__
|
`pandas.tseries.offsets.FY5253Quarter.__call__`
Call self as a function.
|
FY5253Quarter.__call__(*args, **kwargs)#
Call self as a function.
|
reference/api/pandas.tseries.offsets.FY5253Quarter.__call__.html
|
pandas.errors.PerformanceWarning
|
`pandas.errors.PerformanceWarning`
Warning raised when there is a possible performance impact.
|
exception pandas.errors.PerformanceWarning[source]#
Warning raised when there is a possible performance impact.
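A usage sketch for suppressing the warning (illustrative, assuming pandas is imported as pd):
>>> import warnings
>>> with warnings.catch_warnings():
...     warnings.simplefilter('ignore', pd.errors.PerformanceWarning)
...     pass  # code that may emit a PerformanceWarning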
|
reference/api/pandas.errors.PerformanceWarning.html
|
pandas.tseries.offsets.SemiMonthEnd.copy
|
`pandas.tseries.offsets.SemiMonthEnd.copy`
Return a copy of the frequency.
```
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
```
|
SemiMonthEnd.copy()#
Return a copy of the frequency.
Examples
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
|
reference/api/pandas.tseries.offsets.SemiMonthEnd.copy.html
|
pandas.tseries.offsets.YearEnd.kwds
|
`pandas.tseries.offsets.YearEnd.kwds`
Return a dict of extra parameters for the offset.
Examples
```
>>> pd.DateOffset(5).kwds
{}
```
|
YearEnd.kwds#
Return a dict of extra parameters for the offset.
Examples
>>> pd.DateOffset(5).kwds
{}
>>> pd.offsets.FY5253Quarter().kwds
{'weekday': 0,
'startingMonth': 1,
'qtr_with_extra_week': 1,
'variation': 'nearest'}
|
reference/api/pandas.tseries.offsets.YearEnd.kwds.html
|
pandas.api.extensions.ExtensionArray.factorize
|
`pandas.api.extensions.ExtensionArray.factorize`
Encode the extension array as an enumerated type.
The na_sentinel parameter sets the value used in the codes array to indicate missing values.
|
ExtensionArray.factorize(na_sentinel=_NoDefault.no_default, use_na_sentinel=_NoDefault.no_default)[source]#
Encode the extension array as an enumerated type.
Parameters
na_sentinelint, default -1Value to use in the codes array to indicate missing values.
Deprecated since version 1.5.0: The na_sentinel argument is deprecated and
will be removed in a future version of pandas. Specify use_na_sentinel
as either True or False.
use_na_sentinelbool, default TrueIf True, the sentinel -1 will be used for NaN values. If False,
NaN values will be encoded as non-negative integers and will not drop the
NaN from the uniques of the values.
New in version 1.5.0.
Returns
codesndarrayAn integer NumPy array that’s an indexer into the original
ExtensionArray.
uniquesExtensionArrayAn ExtensionArray containing the unique values of self.
Note
uniques will not contain an entry for the NA value of
the ExtensionArray if there are any missing values present
in self.
See also
factorizeTop-level factorize method that dispatches here.
Notes
pandas.factorize() offers a sort keyword as well.
|
reference/api/pandas.api.extensions.ExtensionArray.factorize.html
|
pandas.tseries.offsets.Minute.name
|
`pandas.tseries.offsets.Minute.name`
Return a string representing the base frequency.
Examples
```
>>> pd.offsets.Hour().name
'H'
```
|
Minute.name#
Return a string representing the base frequency.
Examples
>>> pd.offsets.Hour().name
'H'
>>> pd.offsets.Hour(5).name
'H'
|
reference/api/pandas.tseries.offsets.Minute.name.html
|
pandas.Series.to_list
|
`pandas.Series.to_list`
Return a list of the values.
|
Series.to_list()[source]#
Return a list of the values.
These are each a scalar type, which is a Python scalar
(for str, int, float) or a pandas scalar
(for Timestamp/Timedelta/Interval/Period).
Returns
list
See also
numpy.ndarray.tolistReturn the array as an a.ndim-levels deep nested list of Python scalars.
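A minimal sketch (illustrative, assuming pandas is imported as pd):
>>> s = pd.Series([1, 2, 3])
>>> s.to_list()
[1, 2, 3]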
|
reference/api/pandas.Series.to_list.html
|
pandas.Series.str.startswith
|
`pandas.Series.str.startswith`
Test if the start of each string element matches a pattern.
Equivalent to str.startswith().
```
>>> s = pd.Series(['bat', 'Bear', 'cat', np.nan])
>>> s
0 bat
1 Bear
2 cat
3 NaN
dtype: object
```
|
Series.str.startswith(pat, na=None)[source]#
Test if the start of each string element matches a pattern.
Equivalent to str.startswith().
Parameters
patstr or tuple[str, …]Character sequence or tuple of strings. Regular expressions are not
accepted.
naobject, default NaNObject shown if element tested is not a string. The default depends
on dtype of the array. For object-dtype, numpy.nan is used.
For StringDtype, pandas.NA is used.
Returns
Series or Index of boolA Series of booleans indicating whether the given pattern matches
the start of each string element.
See also
str.startswithPython standard library string method.
Series.str.endswithSame as startswith, but tests the end of string.
Series.str.containsTests if string element contains a pattern.
Examples
>>> s = pd.Series(['bat', 'Bear', 'cat', np.nan])
>>> s
0 bat
1 Bear
2 cat
3 NaN
dtype: object
>>> s.str.startswith('b')
0 True
1 False
2 False
3 NaN
dtype: object
>>> s.str.startswith(('b', 'B'))
0 True
1 True
2 False
3 NaN
dtype: object
Specifying na to be False instead of NaN.
>>> s.str.startswith('b', na=False)
0 True
1 False
2 False
3 False
dtype: bool
|
reference/api/pandas.Series.str.startswith.html
|
pandas.DataFrame.swapaxes
|
`pandas.DataFrame.swapaxes`
Interchange axes and swap values axes appropriately.
|
DataFrame.swapaxes(axis1, axis2, copy=True)[source]#
Interchange axes and swap values axes appropriately.
Returns
ysame as input
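A minimal sketch (illustrative; for a 2-D DataFrame this is equivalent to a transpose):
>>> df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
>>> df.swapaxes(0, 1)
   0  1
a  1  2
b  3  4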
|
reference/api/pandas.DataFrame.swapaxes.html
|
pandas.tseries.offsets.Milli.name
|
`pandas.tseries.offsets.Milli.name`
Return a string representing the base frequency.
```
>>> pd.offsets.Hour().name
'H'
```
|
Milli.name#
Return a string representing the base frequency.
Examples
>>> pd.offsets.Hour().name
'H'
>>> pd.offsets.Hour(5).name
'H'
|
reference/api/pandas.tseries.offsets.Milli.name.html
|
pandas.tseries.offsets.WeekOfMonth.is_month_end
|
`pandas.tseries.offsets.WeekOfMonth.is_month_end`
Return boolean whether a timestamp occurs on the month end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
```
|
WeekOfMonth.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
|
reference/api/pandas.tseries.offsets.WeekOfMonth.is_month_end.html
|
pandas.CategoricalDtype.ordered
|
`pandas.CategoricalDtype.ordered`
Whether the categories have an ordered relationship.
|
property CategoricalDtype.ordered[source]#
Whether the categories have an ordered relationship.
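A minimal sketch (illustrative, assuming pandas is imported as pd):
>>> dtype = pd.CategoricalDtype(['a', 'b', 'c'], ordered=True)
>>> dtype.ordered
True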
|
reference/api/pandas.CategoricalDtype.ordered.html
|
Options and settings
|
Options and settings
|
Overview#
pandas has an options API to configure and customize global behavior related to
DataFrame display, data behavior and more.
Options have a full “dotted-style”, case-insensitive name (e.g. display.max_rows).
You can get/set options directly as attributes of the top-level options attribute:
In [1]: import pandas as pd
In [2]: pd.options.display.max_rows
Out[2]: 15
In [3]: pd.options.display.max_rows = 999
In [4]: pd.options.display.max_rows
Out[4]: 999
The API is composed of 5 relevant functions, available directly from the pandas
namespace:
get_option() / set_option() - get/set the value of a single option.
reset_option() - reset one or more options to their default value.
describe_option() - print the descriptions of one or more options.
option_context() - execute a codeblock with a set of options
that revert to prior settings after execution.
Note
Developers can check out pandas/core/config_init.py for more information.
All of the functions above accept a regexp pattern (re.search style) as an argument,
to match an unambiguous substring:
In [5]: pd.get_option("display.chop_threshold")
In [6]: pd.set_option("display.chop_threshold", 2)
In [7]: pd.get_option("display.chop_threshold")
Out[7]: 2
In [8]: pd.set_option("chop", 4)
In [9]: pd.get_option("display.chop_threshold")
Out[9]: 4
The following will not work because it matches multiple option names, e.g.
display.max_colwidth, display.max_rows, display.max_columns:
In [10]: pd.get_option("max")
---------------------------------------------------------------------------
OptionError Traceback (most recent call last)
Cell In[10], line 1
----> 1 pd.get_option("max")
File ~/work/pandas/pandas/pandas/_config/config.py:263, in CallableDynamicDoc.__call__(self, *args, **kwds)
262 def __call__(self, *args, **kwds) -> T:
--> 263 return self.__func__(*args, **kwds)
File ~/work/pandas/pandas/pandas/_config/config.py:135, in _get_option(pat, silent)
134 def _get_option(pat: str, silent: bool = False) -> Any:
--> 135 key = _get_single_key(pat, silent)
137 # walk the nested dict
138 root, k = _get_root(key)
File ~/work/pandas/pandas/pandas/_config/config.py:123, in _get_single_key(pat, silent)
121 raise OptionError(f"No such keys(s): {repr(pat)}")
122 if len(keys) > 1:
--> 123 raise OptionError("Pattern matched multiple keys")
124 key = keys[0]
126 if not silent:
OptionError: 'Pattern matched multiple keys'
Warning
Using this form of shorthand may cause your code to break if new options with similar names are added in future versions.
Available options#
You can get a list of available options and their descriptions with describe_option(). When called
with no argument describe_option() will print out the descriptions for all available options.
In [11]: pd.describe_option()
compute.use_bottleneck : bool
Use the bottleneck library to accelerate if it is installed,
the default is True
Valid values: False,True
[default: True] [currently: True]
compute.use_numba : bool
Use the numba engine option for select operations if it is installed,
the default is False
Valid values: False,True
[default: False] [currently: False]
compute.use_numexpr : bool
Use the numexpr library to accelerate computation if it is installed,
the default is True
Valid values: False,True
[default: True] [currently: True]
display.chop_threshold : float or None
if set to a float value, all float values smaller than the given threshold
will be displayed as exactly 0 by repr and friends.
[default: None] [currently: None]
display.colheader_justify : 'left'/'right'
Controls the justification of column headers. used by DataFrameFormatter.
[default: right] [currently: right]
display.column_space No description available.
[default: 12] [currently: 12]
display.date_dayfirst : boolean
When True, prints and parses dates with the day first, eg 20/01/2005
[default: False] [currently: False]
display.date_yearfirst : boolean
When True, prints and parses dates with the year first, eg 2005/01/20
[default: False] [currently: False]
display.encoding : str/unicode
Defaults to the detected encoding of the console.
Specifies the encoding to be used for strings returned by to_string,
these are generally strings meant to be displayed on the console.
[default: utf-8] [currently: utf8]
display.expand_frame_repr : boolean
Whether to print out the full DataFrame repr for wide DataFrames across
multiple lines, `max_columns` is still respected, but the output will
wrap-around across multiple "pages" if its width exceeds `display.width`.
[default: True] [currently: True]
display.float_format : callable
The callable should accept a floating point number and return
a string with the desired format of the number. This is used
in some places like SeriesFormatter.
See formats.format.EngFormatter for an example.
[default: None] [currently: None]
display.html.border : int
A ``border=value`` attribute is inserted in the ``<table>`` tag
for the DataFrame HTML repr.
[default: 1] [currently: 1]
display.html.table_schema : boolean
Whether to publish a Table Schema representation for frontends
that support it.
(default: False)
[default: False] [currently: False]
display.html.use_mathjax : boolean
When True, Jupyter notebook will process table contents using MathJax,
rendering mathematical expressions enclosed by the dollar symbol.
(default: True)
[default: True] [currently: True]
display.large_repr : 'truncate'/'info'
For DataFrames exceeding max_rows/max_cols, the repr (and HTML repr) can
show a truncated table (the default from 0.13), or switch to the view from
df.info() (the behaviour in earlier versions of pandas).
[default: truncate] [currently: truncate]
display.latex.escape : bool
This specifies if the to_latex method of a Dataframe escapes special
characters.
Valid values: False,True
[default: True] [currently: True]
display.latex.longtable : bool
This specifies if the to_latex method of a Dataframe uses the longtable
format.
Valid values: False,True
[default: False] [currently: False]
display.latex.multicolumn : bool
This specifies if the to_latex method of a Dataframe uses multicolumns
to pretty-print MultiIndex columns.
Valid values: False,True
[default: True] [currently: True]
display.latex.multicolumn_format : str
This specifies the alignment (e.g. 'l', 'c', 'r') used by the to_latex
method of a Dataframe for multicolumn headers when pretty-printing
MultiIndex columns.
[default: l] [currently: l]
display.latex.multirow : bool
This specifies if the to_latex method of a Dataframe uses multirows
to pretty-print MultiIndex rows.
Valid values: False,True
[default: False] [currently: False]
display.latex.repr : boolean
Whether to produce a latex DataFrame representation for jupyter
environments that support it.
(default: False)
[default: False] [currently: False]
display.max_categories : int
This sets the maximum number of categories pandas should output when
printing out a `Categorical` or a Series of dtype "category".
[default: 8] [currently: 8]
display.max_columns : int
If max_cols is exceeded, switch to truncate view. Depending on
`large_repr`, objects are either centrally truncated or printed as
a summary view. 'None' value means unlimited.
In case python/IPython is running in a terminal and `large_repr`
equals 'truncate' this can be set to 0 and pandas will auto-detect
the width of the terminal and print a truncated object which fits
the screen width. The IPython notebook, IPython qtconsole, or IDLE
do not run in a terminal and hence it is not possible to do
correct auto-detection.
[default: 0] [currently: 0]
display.max_colwidth : int or None
The maximum width in characters of a column in the repr of
a pandas data structure. When the column overflows, a "..."
placeholder is embedded in the output. A 'None' value means unlimited.
[default: 50] [currently: 50]
display.max_dir_items : int
The number of items that will be added to `dir(...)`. 'None' value means
unlimited. Because dir is cached, changing this option will not immediately
affect already existing dataframes until a column is deleted or added.
This is for instance used to suggest columns from a dataframe to tab
completion.
[default: 100] [currently: 100]
display.max_info_columns : int
max_info_columns is used in DataFrame.info method to decide if
per column information will be printed.
[default: 100] [currently: 100]
display.max_info_rows : int or None
df.info() will usually show null-counts for each column.
For large frames this can be quite slow. max_info_rows and max_info_cols
limit this null check only to frames with smaller dimensions than
specified.
[default: 1690785] [currently: 1690785]
display.max_rows : int
If max_rows is exceeded, switch to truncate view. Depending on
`large_repr`, objects are either centrally truncated or printed as
a summary view. 'None' value means unlimited.
In case python/IPython is running in a terminal and `large_repr`
equals 'truncate' this can be set to 0 and pandas will auto-detect
the height of the terminal and print a truncated object which fits
the screen height. The IPython notebook, IPython qtconsole, or
IDLE do not run in a terminal and hence it is not possible to do
correct auto-detection.
[default: 60] [currently: 60]
display.max_seq_items : int or None
When pretty-printing a long sequence, no more than `max_seq_items`
will be printed. If items are omitted, they will be denoted by the
addition of "..." to the resulting string.
If set to None, the number of items to be printed is unlimited.
[default: 100] [currently: 100]
display.memory_usage : bool, string or None
This specifies if the memory usage of a DataFrame should be displayed when
df.info() is called. Valid values True,False,'deep'
[default: True] [currently: True]
display.min_rows : int
The numbers of rows to show in a truncated view (when `max_rows` is
exceeded). Ignored when `max_rows` is set to None or 0. When set to
None, follows the value of `max_rows`.
[default: 10] [currently: 10]
display.multi_sparse : boolean
"sparsify" MultiIndex display (don't display repeated
elements in outer levels within groups)
[default: True] [currently: True]
display.notebook_repr_html : boolean
When True, IPython notebook will use html representation for
pandas objects (if it is available).
[default: True] [currently: True]
display.pprint_nest_depth : int
Controls the number of nested levels to process when pretty-printing
[default: 3] [currently: 3]
display.precision : int
Floating point output precision in terms of number of places after the
decimal, for regular formatting as well as scientific notation. Similar
to ``precision`` in :meth:`numpy.set_printoptions`.
[default: 6] [currently: 6]
display.show_dimensions : boolean or 'truncate'
Whether to print out dimensions at the end of DataFrame repr.
If 'truncate' is specified, only print out the dimensions if the
frame is truncated (e.g. not display all rows and/or columns)
[default: truncate] [currently: truncate]
display.unicode.ambiguous_as_wide : boolean
Whether to use the Unicode East Asian Width to calculate the display text
width.
Enabling this may affect the performance (default: False)
[default: False] [currently: False]
display.unicode.east_asian_width : boolean
Whether to use the Unicode East Asian Width to calculate the display text
width.
Enabling this may affect the performance (default: False)
[default: False] [currently: False]
display.width : int
Width of the display in characters. In case python/IPython is running in
a terminal this can be set to None and pandas will correctly auto-detect
the width.
Note that the IPython notebook, IPython qtconsole, or IDLE do not run in a
terminal and hence it is not possible to correctly detect the width.
[default: 80] [currently: 80]
io.excel.ods.reader : string
The default Excel reader engine for 'ods' files. Available options:
auto, odf.
[default: auto] [currently: auto]
io.excel.ods.writer : string
The default Excel writer engine for 'ods' files. Available options:
auto, odf.
[default: auto] [currently: auto]
io.excel.xls.reader : string
The default Excel reader engine for 'xls' files. Available options:
auto, xlrd.
[default: auto] [currently: auto]
io.excel.xls.writer : string
The default Excel writer engine for 'xls' files. Available options:
auto, xlwt.
[default: auto] [currently: auto]
(Deprecated, use `` instead.)
io.excel.xlsb.reader : string
The default Excel reader engine for 'xlsb' files. Available options:
auto, pyxlsb.
[default: auto] [currently: auto]
io.excel.xlsm.reader : string
The default Excel reader engine for 'xlsm' files. Available options:
auto, xlrd, openpyxl.
[default: auto] [currently: auto]
io.excel.xlsm.writer : string
The default Excel writer engine for 'xlsm' files. Available options:
auto, openpyxl.
[default: auto] [currently: auto]
io.excel.xlsx.reader : string
The default Excel reader engine for 'xlsx' files. Available options:
auto, xlrd, openpyxl.
[default: auto] [currently: auto]
io.excel.xlsx.writer : string
The default Excel writer engine for 'xlsx' files. Available options:
auto, openpyxl, xlsxwriter.
[default: auto] [currently: auto]
io.hdf.default_format : format
default writing format; if None, then
put will default to 'fixed' and append will default to 'table'
[default: None] [currently: None]
io.hdf.dropna_table : boolean
drop ALL nan rows when appending to a table
[default: False] [currently: False]
io.parquet.engine : string
The default parquet reader/writer engine. Available options:
'auto', 'pyarrow', 'fastparquet', the default is 'auto'
[default: auto] [currently: auto]
io.sql.engine : string
The default sql reader/writer engine. Available options:
'auto', 'sqlalchemy', the default is 'auto'
[default: auto] [currently: auto]
mode.chained_assignment : string
Raise an exception, warn, or no action if trying to use chained assignment,
The default is warn
[default: warn] [currently: warn]
mode.copy_on_write : bool
Use new copy-view behaviour using Copy-on-Write. Defaults to False,
unless overridden by the 'PANDAS_COPY_ON_WRITE' environment variable
(if set to "1" for True, needs to be set before pandas is imported).
[default: False] [currently: False]
mode.data_manager : string
Internal data manager type; can be "block" or "array". Defaults to "block",
unless overridden by the 'PANDAS_DATA_MANAGER' environment variable (needs
to be set before pandas is imported).
[default: block] [currently: block]
mode.sim_interactive : boolean
Whether to simulate interactive mode for purposes of testing
[default: False] [currently: False]
mode.string_storage : string
The default storage for StringDtype.
[default: python] [currently: python]
mode.use_inf_as_na : boolean
True means treat None, NaN, INF, -INF as NA (old way),
False means None and NaN are null, but INF, -INF are not NA
(new way).
[default: False] [currently: False]
mode.use_inf_as_null : boolean
use_inf_as_null has been deprecated and will be removed in a future
version. Use `use_inf_as_na` instead.
[default: False] [currently: False]
(Deprecated, use `mode.use_inf_as_na` instead.)
plotting.backend : str
The plotting backend to use. The default value is "matplotlib", the
backend provided with pandas. Other backends can be specified by
providing the name of the module that implements the backend.
[default: matplotlib] [currently: matplotlib]
plotting.matplotlib.register_converters : bool or 'auto'.
Whether to register converters with matplotlib's units registry for
dates, times, datetimes, and Periods. Toggling to False will remove
the converters, restoring any converters that pandas overwrote.
[default: auto] [currently: auto]
styler.format.decimal : str
The character representation for the decimal separator for floats and complex.
[default: .] [currently: .]
styler.format.escape : str, optional
Whether to escape certain characters according to the given context; html or latex.
[default: None] [currently: None]
styler.format.formatter : str, callable, dict, optional
A formatter object to be used as default within ``Styler.format``.
[default: None] [currently: None]
styler.format.na_rep : str, optional
The string representation for values identified as missing.
[default: None] [currently: None]
styler.format.precision : int
The precision for floats and complex numbers.
[default: 6] [currently: 6]
styler.format.thousands : str, optional
The character representation for thousands separator for floats, int and complex.
[default: None] [currently: None]
styler.html.mathjax : bool
If False will render special CSS classes to table attributes that indicate Mathjax
will not be used in Jupyter Notebook.
[default: True] [currently: True]
styler.latex.environment : str
The environment to replace ``\begin{table}``. If "longtable" is used results
in a specific longtable environment format.
[default: None] [currently: None]
styler.latex.hrules : bool
Whether to add horizontal rules on top and bottom and below the headers.
[default: False] [currently: False]
styler.latex.multicol_align : {"r", "c", "l", "naive-l", "naive-r"}
The specifier for horizontal alignment of sparsified LaTeX multicolumns. Pipe
decorators can also be added to non-naive values to draw vertical
rules, e.g. "\|r" will draw a rule on the left side of right aligned merged cells.
[default: r] [currently: r]
styler.latex.multirow_align : {"c", "t", "b"}
The specifier for vertical alignment of sparsified LaTeX multirows.
[default: c] [currently: c]
styler.render.encoding : str
The encoding used for output HTML and LaTeX files.
[default: utf-8] [currently: utf-8]
styler.render.max_columns : int, optional
The maximum number of columns that will be rendered. May still be reduced to
satisfy ``max_elements``, which takes precedence.
[default: None] [currently: None]
styler.render.max_elements : int
The maximum number of data-cell (<td>) elements that will be rendered before
trimming will occur over columns, rows or both if needed.
[default: 262144] [currently: 262144]
styler.render.max_rows : int, optional
The maximum number of rows that will be rendered. May still be reduced to
satisfy ``max_elements``, which takes precedence.
[default: None] [currently: None]
styler.render.repr : str
Determine which output to use in Jupyter Notebook in {"html", "latex"}.
[default: html] [currently: html]
styler.sparse.columns : bool
Whether to sparsify the display of hierarchical columns. Setting to False will
display each explicit level element in a hierarchical key for each column.
[default: True] [currently: True]
styler.sparse.index : bool
Whether to sparsify the display of a hierarchical index. Setting to False will
display each explicit level element in a hierarchical key for each row.
[default: True] [currently: True]
Getting and setting options#
As described above, get_option() and set_option()
are available from the pandas namespace. To change an option, call
set_option('option regex', new_value).
In [12]: pd.get_option("mode.sim_interactive")
Out[12]: False
In [13]: pd.set_option("mode.sim_interactive", True)
In [14]: pd.get_option("mode.sim_interactive")
Out[14]: True
Note
The option 'mode.sim_interactive' is mostly used for debugging purposes.
You can use reset_option() to revert to a setting’s default value
In [15]: pd.get_option("display.max_rows")
Out[15]: 60
In [16]: pd.set_option("display.max_rows", 999)
In [17]: pd.get_option("display.max_rows")
Out[17]: 999
In [18]: pd.reset_option("display.max_rows")
In [19]: pd.get_option("display.max_rows")
Out[19]: 60
It’s also possible to reset multiple options at once (using a regex):
In [20]: pd.reset_option("^display")
The option_context() context manager has been exposed through
the top-level API, allowing you to execute code with given option values. Option values
are restored automatically when you exit the with block:
In [21]: with pd.option_context("display.max_rows", 10, "display.max_columns", 5):
....: print(pd.get_option("display.max_rows"))
....: print(pd.get_option("display.max_columns"))
....:
10
5
In [22]: print(pd.get_option("display.max_rows"))
60
In [23]: print(pd.get_option("display.max_columns"))
0
Setting startup options in Python/IPython environment#
Using startup scripts for the Python/IPython environment to import pandas and set options makes working with pandas more efficient.
To do this, create a .py or .ipy script in the startup directory of the desired profile.
An example where the startup folder is in a default IPython profile can be found at:
$IPYTHONDIR/profile_default/startup
More information can be found in the IPython documentation. An example startup script for pandas is displayed below:
import pandas as pd
pd.set_option("display.max_rows", 999)
pd.set_option("display.precision", 5)
Frequently used options#
The following demonstrates some of the more frequently used display options.
display.max_rows and display.max_columns sets the maximum number
of rows and columns displayed when a frame is pretty-printed. Truncated
lines are replaced by an ellipsis.
In [24]: df = pd.DataFrame(np.random.randn(7, 2))
In [25]: pd.set_option("display.max_rows", 7)
In [26]: df
Out[26]:
0 1
0 0.469112 -0.282863
1 -1.509059 -1.135632
2 1.212112 -0.173215
3 0.119209 -1.044236
4 -0.861849 -2.104569
5 -0.494929 1.071804
6 0.721555 -0.706771
In [27]: pd.set_option("display.max_rows", 5)
In [28]: df
Out[28]:
0 1
0 0.469112 -0.282863
1 -1.509059 -1.135632
.. ... ...
5 -0.494929 1.071804
6 0.721555 -0.706771
[7 rows x 2 columns]
In [29]: pd.reset_option("display.max_rows")
Once display.max_rows is exceeded, the display.min_rows option
determines how many rows are shown in the truncated repr.
In [30]: pd.set_option("display.max_rows", 8)
In [31]: pd.set_option("display.min_rows", 4)
# below max_rows -> all rows shown
In [32]: df = pd.DataFrame(np.random.randn(7, 2))
In [33]: df
Out[33]:
0 1
0 -1.039575 0.271860
1 -0.424972 0.567020
2 0.276232 -1.087401
3 -0.673690 0.113648
4 -1.478427 0.524988
5 0.404705 0.577046
6 -1.715002 -1.039268
# above max_rows -> only min_rows (4) rows shown
In [34]: df = pd.DataFrame(np.random.randn(9, 2))
In [35]: df
Out[35]:
0 1
0 -0.370647 -1.157892
1 -1.344312 0.844885
.. ... ...
7 0.276662 -0.472035
8 -0.013960 -0.362543
[9 rows x 2 columns]
In [36]: pd.reset_option("display.max_rows")
In [37]: pd.reset_option("display.min_rows")
display.expand_frame_repr allows the representation of a
DataFrame to stretch across pages, wrapped over all the columns.
In [38]: df = pd.DataFrame(np.random.randn(5, 10))
In [39]: pd.set_option("expand_frame_repr", True)
In [40]: df
Out[40]:
0 1 2 ... 7 8 9
0 -0.006154 -0.923061 0.895717 ... 1.340309 -1.170299 -0.226169
1 0.410835 0.813850 0.132003 ... -1.436737 -1.413681 1.607920
2 1.024180 0.569605 0.875906 ... -0.078638 0.545952 -1.219217
3 -1.226825 0.769804 -1.281247 ... 0.341734 0.959726 -1.110336
4 -0.619976 0.149748 -0.732339 ... 0.301624 -2.179861 -1.369849
[5 rows x 10 columns]
In [41]: pd.set_option("expand_frame_repr", False)
In [42]: df
Out[42]:
0 1 2 3 4 5 6 7 8 9
0 -0.006154 -0.923061 0.895717 0.805244 -1.206412 2.565646 1.431256 1.340309 -1.170299 -0.226169
1 0.410835 0.813850 0.132003 -0.827317 -0.076467 -1.187678 1.130127 -1.436737 -1.413681 1.607920
2 1.024180 0.569605 0.875906 -2.211372 0.974466 -2.006747 -0.410001 -0.078638 0.545952 -1.219217
3 -1.226825 0.769804 -1.281247 -0.727707 -0.121306 -0.097883 0.695775 0.341734 0.959726 -1.110336
4 -0.619976 0.149748 -0.732339 0.687738 0.176444 0.403310 -0.154951 0.301624 -2.179861 -1.369849
In [43]: pd.reset_option("expand_frame_repr")
display.large_repr displays a DataFrame that exceeds
max_columns or max_rows as a truncated frame or summary.
In [44]: df = pd.DataFrame(np.random.randn(10, 10))
In [45]: pd.set_option("display.max_rows", 5)
In [46]: pd.set_option("large_repr", "truncate")
In [47]: df
Out[47]:
0 1 2 ... 7 8 9
0 -0.954208 1.462696 -1.743161 ... 0.995761 2.396780 0.014871
1 3.357427 -0.317441 -1.236269 ... 0.380396 0.084844 0.432390
.. ... ... ... ... ... ... ...
8 -0.303421 -0.858447 0.306996 ... 0.476720 0.473424 -0.242861
9 -0.014805 -0.284319 0.650776 ... 1.613616 0.464000 0.227371
[10 rows x 10 columns]
In [48]: pd.set_option("large_repr", "info")
In [49]: df
Out[49]:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 0 10 non-null float64
1 1 10 non-null float64
2 2 10 non-null float64
3 3 10 non-null float64
4 4 10 non-null float64
5 5 10 non-null float64
6 6 10 non-null float64
7 7 10 non-null float64
8 8 10 non-null float64
9 9 10 non-null float64
dtypes: float64(10)
memory usage: 928.0 bytes
In [50]: pd.reset_option("large_repr")
In [51]: pd.reset_option("display.max_rows")
display.max_colwidth sets the maximum width of columns. Cells
of this length or longer will be truncated with an ellipsis.
In [52]: df = pd.DataFrame(
....: np.array(
....: [
....: ["foo", "bar", "bim", "uncomfortably long string"],
....: ["horse", "cow", "banana", "apple"],
....: ]
....: )
....: )
....:
In [53]: pd.set_option("max_colwidth", 40)
In [54]: df
Out[54]:
0 1 2 3
0 foo bar bim uncomfortably long string
1 horse cow banana apple
In [55]: pd.set_option("max_colwidth", 6)
In [56]: df
Out[56]:
0 1 2 3
0 foo bar bim un...
1 horse cow ba... apple
In [57]: pd.reset_option("max_colwidth")
display.max_info_columns sets a threshold for the number of columns
displayed when calling info().
In [58]: df = pd.DataFrame(np.random.randn(10, 10))
In [59]: pd.set_option("max_info_columns", 11)
In [60]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 0 10 non-null float64
1 1 10 non-null float64
2 2 10 non-null float64
3 3 10 non-null float64
4 4 10 non-null float64
5 5 10 non-null float64
6 6 10 non-null float64
7 7 10 non-null float64
8 8 10 non-null float64
9 9 10 non-null float64
dtypes: float64(10)
memory usage: 928.0 bytes
In [61]: pd.set_option("max_info_columns", 5)
In [62]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Columns: 10 entries, 0 to 9
dtypes: float64(10)
memory usage: 928.0 bytes
In [63]: pd.reset_option("max_info_columns")
display.max_info_rows: info() will usually show null-counts for each column.
For a large DataFrame, this can be quite slow. max_info_rows and max_info_cols
limit this null check to the specified rows and columns respectively. The info()
keyword argument null_counts=True will override this.
In [64]: df = pd.DataFrame(np.random.choice([0, 1, np.nan], size=(10, 10)))
In [65]: df
Out[65]:
0 1 2 3 4 5 6 7 8 9
0 0.0 NaN 1.0 NaN NaN 0.0 NaN 0.0 NaN 1.0
1 1.0 NaN 1.0 1.0 1.0 1.0 NaN 0.0 0.0 NaN
2 0.0 NaN 1.0 0.0 0.0 NaN NaN NaN NaN 0.0
3 NaN NaN NaN 0.0 1.0 1.0 NaN 1.0 NaN 1.0
4 0.0 NaN NaN NaN 0.0 NaN NaN NaN 1.0 0.0
5 0.0 1.0 1.0 1.0 1.0 0.0 NaN NaN 1.0 0.0
6 1.0 1.0 1.0 NaN 1.0 NaN 1.0 0.0 NaN NaN
7 0.0 0.0 1.0 0.0 1.0 0.0 1.0 1.0 0.0 NaN
8 NaN NaN NaN 0.0 NaN NaN NaN NaN 1.0 NaN
9 0.0 NaN 0.0 NaN NaN 0.0 NaN 1.0 1.0 0.0
In [66]: pd.set_option("max_info_rows", 11)
In [67]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 0 8 non-null float64
1 1 3 non-null float64
2 2 7 non-null float64
3 3 6 non-null float64
4 4 7 non-null float64
5 5 6 non-null float64
6 6 2 non-null float64
7 7 6 non-null float64
8 8 6 non-null float64
9 9 6 non-null float64
dtypes: float64(10)
memory usage: 928.0 bytes
In [68]: pd.set_option("max_info_rows", 5)
In [69]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 10 columns):
# Column Dtype
--- ------ -----
0 0 float64
1 1 float64
2 2 float64
3 3 float64
4 4 float64
5 5 float64
6 6 float64
7 7 float64
8 8 float64
9 9 float64
dtypes: float64(10)
memory usage: 928.0 bytes
In [70]: pd.reset_option("max_info_rows")
display.precision sets the output display precision in terms of decimal places.
In [71]: df = pd.DataFrame(np.random.randn(5, 5))
In [72]: pd.set_option("display.precision", 7)
In [73]: df
Out[73]:
0 1 2 3 4
0 -1.1506406 -0.7983341 -0.5576966 0.3813531 1.3371217
1 -1.5310949 1.3314582 -0.5713290 -0.0266708 -1.0856630
2 -1.1147378 -0.0582158 -0.4867681 1.6851483 0.1125723
3 -1.4953086 0.8984347 -0.1482168 -1.5960698 0.1596530
4 0.2621358 0.0362196 0.1847350 -0.2550694 -0.2710197
In [74]: pd.set_option("display.precision", 4)
In [75]: df
Out[75]:
0 1 2 3 4
0 -1.1506 -0.7983 -0.5577 0.3814 1.3371
1 -1.5311 1.3315 -0.5713 -0.0267 -1.0857
2 -1.1147 -0.0582 -0.4868 1.6851 0.1126
3 -1.4953 0.8984 -0.1482 -1.5961 0.1597
4 0.2621 0.0362 0.1847 -0.2551 -0.2710
display.chop_threshold sets the rounding threshold to zero when displaying a
Series or DataFrame. This setting does not change the
precision at which the number is stored.
In [76]: df = pd.DataFrame(np.random.randn(6, 6))
In [77]: pd.set_option("chop_threshold", 0)
In [78]: df
Out[78]:
0 1 2 3 4 5
0 1.2884 0.2946 -1.1658 0.8470 -0.6856 0.6091
1 -0.3040 0.6256 -0.0593 0.2497 1.1039 -1.0875
2 1.9980 -0.2445 0.1362 0.8863 -1.3507 -0.8863
3 -1.0133 1.9209 -0.3882 -2.3144 0.6655 0.4026
4 0.3996 -1.7660 0.8504 0.3881 0.9923 0.7441
5 -0.7398 -1.0549 -0.1796 0.6396 1.5850 1.9067
In [79]: pd.set_option("chop_threshold", 0.5)
In [80]: df
Out[80]:
0 1 2 3 4 5
0 1.2884 0.0000 -1.1658 0.8470 -0.6856 0.6091
1 0.0000 0.6256 0.0000 0.0000 1.1039 -1.0875
2 1.9980 0.0000 0.0000 0.8863 -1.3507 -0.8863
3 -1.0133 1.9209 0.0000 -2.3144 0.6655 0.0000
4 0.0000 -1.7660 0.8504 0.0000 0.9923 0.7441
5 -0.7398 -1.0549 0.0000 0.6396 1.5850 1.9067
In [81]: pd.reset_option("chop_threshold")
display.colheader_justify controls the justification of the headers.
The options are 'right', and 'left'.
In [82]: df = pd.DataFrame(
....: np.array([np.random.randn(6), np.random.randint(1, 9, 6) * 0.1, np.zeros(6)]).T,
....: columns=["A", "B", "C"],
....: dtype="float",
....: )
....:
In [83]: pd.set_option("colheader_justify", "right")
In [84]: df
Out[84]:
A B C
0 0.1040 0.1 0.0
1 0.1741 0.5 0.0
2 -0.4395 0.4 0.0
3 -0.7413 0.8 0.0
4 -0.0797 0.4 0.0
5 -0.9229 0.3 0.0
In [85]: pd.set_option("colheader_justify", "left")
In [86]: df
Out[86]:
A B C
0 0.1040 0.1 0.0
1 0.1741 0.5 0.0
2 -0.4395 0.4 0.0
3 -0.7413 0.8 0.0
4 -0.0797 0.4 0.0
5 -0.9229 0.3 0.0
In [87]: pd.reset_option("colheader_justify")
Number formatting#
pandas also allows you to set how numbers are displayed in the console.
This option is not set through the set_options API.
Use the set_eng_float_format function
to alter the floating-point formatting of pandas objects to produce a particular
format.
In [88]: import numpy as np
In [89]: pd.set_eng_float_format(accuracy=3, use_eng_prefix=True)
In [90]: s = pd.Series(np.random.randn(5), index=["a", "b", "c", "d", "e"])
In [91]: s / 1.0e3
Out[91]:
a 303.638u
b -721.084u
c -622.696u
d 648.250u
e -1.945m
dtype: float64
In [92]: s / 1.0e6
Out[92]:
a 303.638n
b -721.084n
c -622.696n
d 648.250n
e -1.945u
dtype: float64
Use round() to specifically control rounding of an individual DataFrame.
Unicode formatting#
Warning
Enabling this option will affect the performance for printing of DataFrame and Series (about 2 times slower).
Use only when it is actually required.
Some East Asian countries use Unicode characters whose width corresponds to two Latin characters.
If a DataFrame or Series contains these characters, the default output mode may not align them properly.
In [93]: df = pd.DataFrame({"国籍": ["UK", "日本"], "名前": ["Alice", "しのぶ"]})
In [94]: df
Out[94]:
国籍 名前
0 UK Alice
1 日本 しのぶ
Enabling display.unicode.east_asian_width allows pandas to check each character’s “East Asian Width” property.
These characters can be aligned properly by setting this option to True. However, this will result in longer render
times than the standard len function.
In [95]: pd.set_option("display.unicode.east_asian_width", True)
In [96]: df
Out[96]:
国籍 名前
0 UK Alice
1 日本 しのぶ
In addition, Unicode characters whose width is “ambiguous” can either be 1 or 2 characters wide depending on the
terminal setting or encoding. The option display.unicode.ambiguous_as_wide can be used to handle the ambiguity.
By default, an “ambiguous” character’s width, such as “¡” (inverted exclamation) in the example below, is taken to be 1.
In [97]: df = pd.DataFrame({"a": ["xxx", "¡¡"], "b": ["yyy", "¡¡"]})
In [98]: df
Out[98]:
a b
0 xxx yyy
1 ¡¡ ¡¡
Enabling display.unicode.ambiguous_as_wide makes pandas interpret these characters’ widths to be 2.
(Note that this option will only be effective when display.unicode.east_asian_width is enabled.)
However, setting this option incorrectly for your terminal will cause these characters to be aligned incorrectly:
In [99]: pd.set_option("display.unicode.ambiguous_as_wide", True)
In [100]: df
Out[100]:
a b
0 xxx yyy
1 ¡¡ ¡¡
Table schema display#
DataFrame and Series can publish a Table Schema representation.
This is disabled by default, but can be enabled globally with the
display.html.table_schema option:
In [101]: pd.set_option("display.html.table_schema", True)
Only 'display.max_rows' rows are serialized and published.
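As a small added note (not in the original text), the option can be switched back off with reset_option:
```
>>> pd.reset_option("display.html.table_schema")
```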
|
user_guide/options.html
|
pandas.io.formats.style.Styler.highlight_null
|
`pandas.io.formats.style.Styler.highlight_null`
Highlight missing values with a style.
Background color to use for highlighting.
|
Styler.highlight_null(color=None, subset=None, props=None, null_color=_NoDefault.no_default)[source]#
Highlight missing values with a style.
Parameters
colorstr, default ‘yellow’Background color to use for highlighting.
New in version 1.5.0.
subsetlabel, array-like, IndexSlice, optionalA valid 2d input to DataFrame.loc[<subset>], or, in the case of a 1d input
or single key, to DataFrame.loc[:, <subset>] where the columns are
prioritised, to limit data before applying the function.
New in version 1.1.0.
propsstr, default NoneCSS properties to use for highlighting. If props is given, color
is not used.
New in version 1.3.0.
null_colorstr, default NoneThe background color for highlighting.
Deprecated since version 1.5.0: Use color instead. If color is given null_color is
not used.
Returns
selfStyler
See also
Styler.highlight_maxHighlight the maximum with a style.
Styler.highlight_minHighlight the minimum with a style.
Styler.highlight_betweenHighlight a defined range with a style.
Styler.highlight_quantileHighlight values defined by a quantile with a style.
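The docstring above carries no example; a minimal sketch, assuming an arbitrary frame with a missing value:
```
>>> df = pd.DataFrame({"A": [1.0, None], "B": [3.0, 4.0]})
>>> df.style.highlight_null(color="red")  # the NaN cell in column A gets a red background
```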
|
reference/api/pandas.io.formats.style.Styler.highlight_null.html
|
pandas.tseries.offsets.BusinessMonthEnd.is_quarter_end
|
`pandas.tseries.offsets.BusinessMonthEnd.is_quarter_end`
Return boolean whether a timestamp occurs on the quarter end.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
```
|
BusinessMonthEnd.is_quarter_end()#
Return boolean whether a timestamp occurs on the quarter end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
|
reference/api/pandas.tseries.offsets.BusinessMonthEnd.is_quarter_end.html
|
pandas.arrays.IntervalArray.from_breaks
|
`pandas.arrays.IntervalArray.from_breaks`
Construct an IntervalArray from an array of splits.
Left and right bounds for each interval.
```
>>> pd.arrays.IntervalArray.from_breaks([0, 1, 2, 3])
<IntervalArray>
[(0, 1], (1, 2], (2, 3]]
Length: 3, dtype: interval[int64, right]
```
|
classmethod IntervalArray.from_breaks(breaks, closed='right', copy=False, dtype=None)[source]#
Construct an IntervalArray from an array of splits.
Parameters
breaksarray-like (1-dimensional)Left and right bounds for each interval.
closed{‘left’, ‘right’, ‘both’, ‘neither’}, default ‘right’Whether the intervals are closed on the left-side, right-side, both
or neither.
copybool, default FalseCopy the data.
dtypedtype or None, default NoneIf None, dtype will be inferred.
Returns
IntervalArray
See also
interval_rangeFunction to create a fixed frequency IntervalIndex.
IntervalArray.from_arraysConstruct from a left and right array.
IntervalArray.from_tuplesConstruct from a sequence of tuples.
Examples
>>> pd.arrays.IntervalArray.from_breaks([0, 1, 2, 3])
<IntervalArray>
[(0, 1], (1, 2], (2, 3]]
Length: 3, dtype: interval[int64, right]
|
reference/api/pandas.arrays.IntervalArray.from_breaks.html
|
pandas.Series.str.endswith
|
`pandas.Series.str.endswith`
Test if the end of each string element matches a pattern.
```
>>> s = pd.Series(['bat', 'bear', 'caT', np.nan])
>>> s
0 bat
1 bear
2 caT
3 NaN
dtype: object
```
|
Series.str.endswith(pat, na=None)[source]#
Test if the end of each string element matches a pattern.
Equivalent to str.endswith().
Parameters
patstr or tuple[str, …]Character sequence or tuple of strings. Regular expressions are not
accepted.
naobject, default NaNObject shown if element tested is not a string. The default depends
on dtype of the array. For object-dtype, numpy.nan is used.
For StringDtype, pandas.NA is used.
Returns
Series or Index of boolA Series of booleans indicating whether the given pattern matches
the end of each string element.
See also
str.endswithPython standard library string method.
Series.str.startswithSame as endswith, but tests the start of string.
Series.str.containsTests if string element contains a pattern.
Examples
>>> s = pd.Series(['bat', 'bear', 'caT', np.nan])
>>> s
0 bat
1 bear
2 caT
3 NaN
dtype: object
>>> s.str.endswith('t')
0 True
1 False
2 False
3 NaN
dtype: object
>>> s.str.endswith(('t', 'T'))
0 True
1 False
2 True
3 NaN
dtype: object
Specifying na to be False instead of NaN.
>>> s.str.endswith('t', na=False)
0 True
1 False
2 False
3 False
dtype: bool
|
reference/api/pandas.Series.str.endswith.html
|
pandas.DatetimeIndex.week
|
`pandas.DatetimeIndex.week`
The week ordinal of the year.
|
property DatetimeIndex.week[source]#
The week ordinal of the year.
Deprecated since version 1.1.0.
weekofyear and week have been deprecated.
Please use DatetimeIndex.isocalendar().week instead.
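A brief sketch of the suggested replacement (the dates are illustrative):
```
>>> idx = pd.DatetimeIndex(["2022-01-03", "2022-01-10"])
>>> idx.isocalendar().week
2022-01-03    1
2022-01-10    2
Name: week, dtype: UInt32
```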
|
reference/api/pandas.DatetimeIndex.week.html
|
pandas.tseries.offsets.Nano.is_quarter_start
|
`pandas.tseries.offsets.Nano.is_quarter_start`
Return boolean whether a timestamp occurs on the quarter start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
```
|
Nano.is_quarter_start()#
Return boolean whether a timestamp occurs on the quarter start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
|
reference/api/pandas.tseries.offsets.Nano.is_quarter_start.html
|
pandas.plotting.andrews_curves
|
`pandas.plotting.andrews_curves`
Generate a matplotlib plot for visualising clusters of multivariate data.
Andrews curves have the functional form:
```
>>> df = pd.read_csv(
... 'https://raw.githubusercontent.com/pandas-dev/'
... 'pandas/main/pandas/tests/io/data/csv/iris.csv'
... )
>>> pd.plotting.andrews_curves(df, 'Name')
<AxesSubplot: title={'center': 'width'}>
```
|
pandas.plotting.andrews_curves(frame, class_column, ax=None, samples=200, color=None, colormap=None, **kwargs)[source]#
Generate a matplotlib plot for visualising clusters of multivariate data.
Andrews curves have the functional form:
f(t) = x_1/sqrt(2) + x_2 sin(t) + x_3 cos(t) + x_4 sin(2t) + x_5 cos(2t) + …
Where x coefficients correspond to the values of each dimension and t is
linearly spaced between -pi and +pi. Each row of frame then corresponds to
a single curve.
Parameters
frameDataFrameData to be plotted, preferably normalized to (0.0, 1.0).
class_columnName of the column containing class names
axmatplotlib axes object, default None
samplesNumber of points to plot in each curve
colorlist or tuple, optionalColors to use for the different classes.
colormapstr or matplotlib colormap object, default NoneColormap to select colors from. If string, load colormap with that name
from matplotlib.
**kwargsOptions to pass to matplotlib plotting method.
Returns
matplotlib.axes.Axes
Examples
>>> df = pd.read_csv(
... 'https://raw.githubusercontent.com/pandas-dev/'
... 'pandas/main/pandas/tests/io/data/csv/iris.csv'
... )
>>> pd.plotting.andrews_curves(df, 'Name')
<AxesSubplot: title={'center': 'width'}>
|
reference/api/pandas.plotting.andrews_curves.html
|
pandas.TimedeltaIndex.mean
|
`pandas.TimedeltaIndex.mean`
Return the mean value of the Array.
|
TimedeltaIndex.mean(*args, **kwargs)[source]#
Return the mean value of the Array.
New in version 0.25.0.
Parameters
skipnabool, default TrueWhether to ignore any NaT elements.
axisint, optional, default 0
Returns
scalarTimestamp or Timedelta.
See also
numpy.ndarray.meanReturns the average of array elements along a given axis.
Series.meanReturn the mean value in a Series.
Notes
mean is only defined for Datetime and Timedelta dtypes, not for Period.
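No example is given above; a minimal sketch with illustrative values:
```
>>> tdi = pd.to_timedelta(["1 days", "2 days", "3 days"])
>>> tdi.mean()
Timedelta('2 days 00:00:00')
```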
|
reference/api/pandas.TimedeltaIndex.mean.html
|
pandas.Series.prod
|
`pandas.Series.prod`
Return the product of the values over the requested axis.
Axis for the function to be applied on.
For Series this parameter is unused and defaults to 0.
```
>>> pd.Series([], dtype="float64").prod()
1.0
```
|
Series.prod(axis=None, skipna=True, level=None, numeric_only=None, min_count=0, **kwargs)[source]#
Return the product of the values over the requested axis.
Parameters
axis{index (0)}Axis for the function to be applied on.
For Series this parameter is unused and defaults to 0.
skipnabool, default TrueExclude NA/null values when computing the result.
levelint or level name, default NoneIf the axis is a MultiIndex (hierarchical), count along a
particular level, collapsing into a scalar.
Deprecated since version 1.3.0: The level keyword is deprecated. Use groupby instead.
numeric_onlybool, default NoneInclude only float, int, boolean columns. If None, will attempt to use
everything, then use only numeric data. Not implemented for Series.
Deprecated since version 1.5.0: Specifying numeric_only=None is deprecated. The default value will be
False in a future version of pandas.
min_countint, default 0The required number of valid values to perform the operation. If fewer than
min_count non-NA values are present the result will be NA.
**kwargsAdditional keyword arguments to be passed to the function.
Returns
scalar or Series (if level specified)
See also
Series.sumReturn the sum.
Series.minReturn the minimum.
Series.maxReturn the maximum.
Series.idxminReturn the index of the minimum.
Series.idxmaxReturn the index of the maximum.
DataFrame.sumReturn the sum over the requested axis.
DataFrame.minReturn the minimum over the requested axis.
DataFrame.maxReturn the maximum over the requested axis.
DataFrame.idxminReturn the index of the minimum over the requested axis.
DataFrame.idxmaxReturn the index of the maximum over the requested axis.
Examples
By default, the product of an empty or all-NA Series is 1
>>> pd.Series([], dtype="float64").prod()
1.0
This can be controlled with the min_count parameter
>>> pd.Series([], dtype="float64").prod(min_count=1)
nan
Thanks to the skipna parameter, min_count handles all-NA and
empty series identically.
>>> pd.Series([np.nan]).prod()
1.0
>>> pd.Series([np.nan]).prod(min_count=1)
nan
|
reference/api/pandas.Series.prod.html
|
pandas.tseries.offsets.QuarterEnd.rule_code
|
pandas.tseries.offsets.QuarterEnd.rule_code
|
QuarterEnd.rule_code#
|
reference/api/pandas.tseries.offsets.QuarterEnd.rule_code.html
|
pandas.tseries.offsets.CustomBusinessMonthBegin.rollforward
|
`pandas.tseries.offsets.CustomBusinessMonthBegin.rollforward`
Roll provided date forward to next offset only if not on offset.
Rolled timestamp if not on offset, otherwise unchanged timestamp.
|
CustomBusinessMonthBegin.rollforward()#
Roll provided date forward to next offset only if not on offset.
Returns
TimestampRolled timestamp if not on offset, otherwise unchanged timestamp.
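A minimal sketch (the date is arbitrary; with the default calendar the first business day of September 2022 is the 1st):
```
>>> ts = pd.Timestamp("2022-08-05")
>>> pd.offsets.CustomBusinessMonthBegin().rollforward(ts)
Timestamp('2022-09-01 00:00:00')
```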
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthBegin.rollforward.html
|
pandas.DataFrame.bfill
|
`pandas.DataFrame.bfill`
Synonym for DataFrame.fillna() with method='bfill'.
|
DataFrame.bfill(*, axis=None, inplace=False, limit=None, downcast=None)[source]#
Synonym for DataFrame.fillna() with method='bfill'.
Returns
Series/DataFrame or NoneObject with missing values filled or None if inplace=True.
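No example is given above; a short sketch with arbitrary data:
```
>>> df = pd.DataFrame({"A": [None, 2, None], "B": [1, None, 3]})
>>> df.bfill()
     A    B
0  2.0  1.0
1  2.0  3.0
2  NaN  3.0
```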
|
reference/api/pandas.DataFrame.bfill.html
|
pandas.Series.cat.codes
|
`pandas.Series.cat.codes`
Return Series of codes as well as the index.
|
Series.cat.codes[source]#
Return Series of codes as well as the index.
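A minimal sketch (categories are arbitrary):
```
>>> s = pd.Series(["a", "b", "a"], dtype="category")
>>> s.cat.codes
0    0
1    1
2    0
dtype: int8
```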
|
reference/api/pandas.Series.cat.codes.html
|
pandas.DataFrame.swapaxes
|
`pandas.DataFrame.swapaxes`
Interchange axes and swap values axes appropriately.
|
DataFrame.swapaxes(axis1, axis2, copy=True)[source]#
Interchange axes and swap values axes appropriately.
Returns
ysame as input
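A short sketch; for a 2d frame, swapping axes 0 and 1 is equivalent to a transpose:
```
>>> df = pd.DataFrame({"A": [1, 2], "B": [3, 4]})
>>> df.swapaxes(0, 1)
   0  1
A  1  2
B  3  4
```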
|
reference/api/pandas.DataFrame.swapaxes.html
|
pandas.tseries.offsets.DateOffset.nanos
|
pandas.tseries.offsets.DateOffset.nanos
|
DateOffset.nanos#
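No description is given above; as an illustrative note, nanos is only defined for fixed-frequency (tick-like) offsets such as Hour, and raises a ValueError for non-fixed offsets:
```
>>> pd.offsets.Hour(5).nanos
18000000000000
```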
|
reference/api/pandas.tseries.offsets.DateOffset.nanos.html
|
pandas.Series.str.upper
|
`pandas.Series.str.upper`
Convert strings in the Series/Index to uppercase.
```
>>> s = pd.Series(['lower', 'CAPITALS', 'this is a sentence', 'SwApCaSe'])
>>> s
0 lower
1 CAPITALS
2 this is a sentence
3 SwApCaSe
dtype: object
```
|
Series.str.upper()[source]#
Convert strings in the Series/Index to uppercase.
Equivalent to str.upper().
Returns
Series or Index of object
See also
Series.str.lowerConverts all characters to lowercase.
Series.str.upperConverts all characters to uppercase.
Series.str.titleConverts first character of each word to uppercase and remaining to lowercase.
Series.str.capitalizeConverts first character to uppercase and remaining to lowercase.
Series.str.swapcaseConverts uppercase to lowercase and lowercase to uppercase.
Series.str.casefoldRemoves all case distinctions in the string.
Examples
>>> s = pd.Series(['lower', 'CAPITALS', 'this is a sentence', 'SwApCaSe'])
>>> s
0 lower
1 CAPITALS
2 this is a sentence
3 SwApCaSe
dtype: object
>>> s.str.lower()
0 lower
1 capitals
2 this is a sentence
3 swapcase
dtype: object
>>> s.str.upper()
0 LOWER
1 CAPITALS
2 THIS IS A SENTENCE
3 SWAPCASE
dtype: object
>>> s.str.title()
0 Lower
1 Capitals
2 This Is A Sentence
3 Swapcase
dtype: object
>>> s.str.capitalize()
0 Lower
1 Capitals
2 This is a sentence
3 Swapcase
dtype: object
>>> s.str.swapcase()
0 LOWER
1 capitals
2 THIS IS A SENTENCE
3 sWaPcAsE
dtype: object
|
reference/api/pandas.Series.str.upper.html
|
pandas.tseries.offsets.BYearEnd.__call__
|
`pandas.tseries.offsets.BYearEnd.__call__`
Call self as a function.
|
BYearEnd.__call__(*args, **kwargs)#
Call self as a function.
|
reference/api/pandas.tseries.offsets.BYearEnd.__call__.html
|
pandas.MultiIndex.get_level_values
|
`pandas.MultiIndex.get_level_values`
Return vector of label values for requested level.
Length of returned vector is equal to the length of the index.
```
>>> mi = pd.MultiIndex.from_arrays((list('abc'), list('def')))
>>> mi.names = ['level_1', 'level_2']
```
|
MultiIndex.get_level_values(level)[source]#
Return vector of label values for requested level.
Length of returned vector is equal to the length of the index.
Parameters
levelint or strlevel is either the integer position of the level in the
MultiIndex, or the name of the level.
Returns
valuesIndexValues is a level of this MultiIndex converted to
a single Index (or subclass thereof).
Notes
If the level contains missing values, the result may be cast to
float with missing values specified as NaN. This is because
the level is converted to a regular Index.
Examples
Create a MultiIndex:
>>> mi = pd.MultiIndex.from_arrays((list('abc'), list('def')))
>>> mi.names = ['level_1', 'level_2']
Get level values by supplying level as either integer or name:
>>> mi.get_level_values(0)
Index(['a', 'b', 'c'], dtype='object', name='level_1')
>>> mi.get_level_values('level_2')
Index(['d', 'e', 'f'], dtype='object', name='level_2')
If a level contains missing values, the return type of the level
may be cast to float.
>>> pd.MultiIndex.from_arrays([[1, None, 2], [3, 4, 5]]).dtypes
level_0 int64
level_1 int64
dtype: object
>>> pd.MultiIndex.from_arrays([[1, None, 2], [3, 4, 5]]).get_level_values(0)
Float64Index([1.0, nan, 2.0], dtype='float64')
|
reference/api/pandas.MultiIndex.get_level_values.html
|
pandas.tseries.offsets.BusinessDay.base
|
`pandas.tseries.offsets.BusinessDay.base`
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
|
BusinessDay.base#
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
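A minimal sketch:
```
>>> pd.offsets.BusinessDay(5).base
<BusinessDay>
```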
|
reference/api/pandas.tseries.offsets.BusinessDay.base.html
|
pandas.Interval.open_right
|
`pandas.Interval.open_right`
Check if the interval is open on the right side.
|
Interval.open_right#
Check if the interval is open on the right side.
For the meaning of closed and open see Interval.
Returns
boolTrue if the Interval is not closed on the right side.
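A brief sketch (bounds are arbitrary); an interval closed on the left is open on the right:
```
>>> iv = pd.Interval(0, 5, closed="left")
>>> iv.open_right
True
```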
|
reference/api/pandas.Interval.open_right.html
|
pandas.tseries.offsets.Milli.__call__
|
`pandas.tseries.offsets.Milli.__call__`
Call self as a function.
|
Milli.__call__(*args, **kwargs)#
Call self as a function.
|
reference/api/pandas.tseries.offsets.Milli.__call__.html
|
pandas.Index.ravel
|
`pandas.Index.ravel`
Return an ndarray of the flattened values of the underlying data.
|
final Index.ravel(order='C')[source]#
Return an ndarray of the flattened values of the underlying data.
Returns
numpy.ndarrayFlattened array.
See also
numpy.ndarray.ravelReturn a flattened array.
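A minimal sketch (values arbitrary):
```
>>> idx = pd.Index([1, 2, 3])
>>> arr = idx.ravel()  # flattened 1d numpy array of the underlying values
```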
|
reference/api/pandas.Index.ravel.html
|
pandas.tseries.offsets.Week.base
|
`pandas.tseries.offsets.Week.base`
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
|
Week.base#
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
|
reference/api/pandas.tseries.offsets.Week.base.html
|
pandas.Series.to_pickle
|
`pandas.Series.to_pickle`
Pickle (serialize) object to file.
String, path object (implementing os.PathLike[str]), or file-like
object implementing a binary write() function. File path where
the pickled object will be stored.
```
>>> original_df = pd.DataFrame({"foo": range(5), "bar": range(5, 10)})
>>> original_df
foo bar
0 0 5
1 1 6
2 2 7
3 3 8
4 4 9
>>> original_df.to_pickle("./dummy.pkl")
```
|
Series.to_pickle(path, compression='infer', protocol=5, storage_options=None)[source]#
Pickle (serialize) object to file.
Parameters
pathstr, path object, or file-like objectString, path object (implementing os.PathLike[str]), or file-like
object implementing a binary write() function. File path where
the pickled object will be stored.
compressionstr or dict, default ‘infer’For on-the-fly compression of the output data. If ‘infer’ and ‘path’ is
path-like, then detect compression from the following extensions: ‘.gz’,
‘.bz2’, ‘.zip’, ‘.xz’, ‘.zst’, ‘.tar’, ‘.tar.gz’, ‘.tar.xz’ or ‘.tar.bz2’
(otherwise no compression).
Set to None for no compression.
Can also be a dict with key 'method' set
to one of {'zip', 'gzip', 'bz2', 'zstd', 'tar'} and other
key-value pairs are forwarded to
zipfile.ZipFile, gzip.GzipFile,
bz2.BZ2File, zstandard.ZstdCompressor or
tarfile.TarFile, respectively.
As an example, the following could be passed for faster compression and to create
a reproducible gzip archive:
compression={'method': 'gzip', 'compresslevel': 1, 'mtime': 1}.
New in version 1.5.0: Added support for .tar files.
protocolintInt which indicates which protocol should be used by the pickler,
default HIGHEST_PROTOCOL (see [1] paragraph 12.1.2). The possible
values are 0, 1, 2, 3, 4, 5. A negative value for the protocol
parameter is equivalent to setting its value to HIGHEST_PROTOCOL.
1
https://docs.python.org/3/library/pickle.html.
storage_optionsdict, optionalExtra options that make sense for a particular storage connection, e.g.
host, port, username, password, etc. For HTTP(S) URLs the key-value pairs
are forwarded to urllib.request.Request as header options. For other
URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are
forwarded to fsspec.open. Please see fsspec and urllib for more
details, and for more examples on storage options refer here.
New in version 1.2.0.
See also
read_pickleLoad pickled pandas object (or any object) from file.
DataFrame.to_hdfWrite DataFrame to an HDF5 file.
DataFrame.to_sqlWrite DataFrame to a SQL database.
DataFrame.to_parquetWrite a DataFrame to the binary parquet format.
Examples
>>> original_df = pd.DataFrame({"foo": range(5), "bar": range(5, 10)})
>>> original_df
foo bar
0 0 5
1 1 6
2 2 7
3 3 8
4 4 9
>>> original_df.to_pickle("./dummy.pkl")
>>> unpickled_df = pd.read_pickle("./dummy.pkl")
>>> unpickled_df
foo bar
0 0 5
1 1 6
2 2 7
3 3 8
4 4 9
|
reference/api/pandas.Series.to_pickle.html
|
pandas.tseries.offsets.Milli.is_month_start
|
`pandas.tseries.offsets.Milli.is_month_start`
Return boolean whether a timestamp occurs on the month start.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
```
|
Milli.is_month_start()#
Return boolean whether a timestamp occurs on the month start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
|
reference/api/pandas.tseries.offsets.Milli.is_month_start.html
|
pandas.tseries.offsets.MonthBegin.rollforward
|
`pandas.tseries.offsets.MonthBegin.rollforward`
Roll provided date forward to next offset only if not on offset.
|
MonthBegin.rollforward()#
Roll provided date forward to next offset only if not on offset.
Returns
TimestampRolled timestamp if not on offset, otherwise unchanged timestamp.
|
reference/api/pandas.tseries.offsets.MonthBegin.rollforward.html
|
Style
|
Style
|
Styler objects are returned by pandas.DataFrame.style.
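For orientation, a minimal illustrative sketch of the typical flow (the data and the particular chained methods are arbitrary, not prescribed by this page):
```
>>> df = pd.DataFrame({"A": [1.0, None, 3.0]})
>>> styler = df.style.highlight_null(color="red").format(precision=2)
>>> html = styler.to_html()  # render the styled table as an HTML string
```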
Styler constructor#
Styler(data[, precision, table_styles, ...])
Helps style a DataFrame or Series according to the data with HTML and CSS.
Styler.from_custom_template(searchpath[, ...])
Factory function for creating a subclass of Styler.
Styler properties#
Styler.env
Styler.template_html
Styler.template_html_style
Styler.template_html_table
Styler.template_latex
Styler.template_string
Styler.loader
Style application#
Styler.apply(func[, axis, subset])
Apply a CSS-styling function column-wise, row-wise, or table-wise.
Styler.applymap(func[, subset])
Apply a CSS-styling function elementwise.
Styler.apply_index(func[, axis, level])
Apply a CSS-styling function to the index or column headers, level-wise.
Styler.applymap_index(func[, axis, level])
Apply a CSS-styling function to the index or column headers, elementwise.
Styler.format([formatter, subset, na_rep, ...])
Format the text display value of cells.
Styler.format_index([formatter, axis, ...])
Format the text display value of index labels or column headers.
Styler.relabel_index(labels[, axis, level])
Relabel the index, or column header, keys to display a set of specified values.
Styler.hide([subset, axis, level, names])
Hide the entire index / column headers, or specific rows / columns from display.
Styler.concat(other)
Append another Styler to combine the output into a single table.
Styler.set_td_classes(classes)
Set the class attribute of <td> HTML elements.
Styler.set_table_styles([table_styles, ...])
Set the table styles included within the <style> HTML element.
Styler.set_table_attributes(attributes)
Set the table attributes added to the <table> HTML element.
Styler.set_tooltips(ttips[, props, css_class])
Set the DataFrame of strings on Styler generating :hover tooltips.
Styler.set_caption(caption)
Set the text added to a <caption> HTML element.
Styler.set_sticky([axis, pixel_size, levels])
Add CSS to permanently display the index or column headers in a scrolling frame.
Styler.set_properties([subset])
Set defined CSS-properties to each <td> HTML element for the given subset.
Styler.set_uuid(uuid)
Set the uuid applied to id attributes of HTML elements.
Styler.clear()
Reset the Styler, removing any previously applied styles.
Styler.pipe(func, *args, **kwargs)
Apply func(self, *args, **kwargs), and return the result.
Builtin styles#
Styler.highlight_null([color, subset, ...])
Highlight missing values with a style.
Styler.highlight_max([subset, color, axis, ...])
Highlight the maximum with a style.
Styler.highlight_min([subset, color, axis, ...])
Highlight the minimum with a style.
Styler.highlight_between([subset, color, ...])
Highlight a defined range with a style.
Styler.highlight_quantile([subset, color, ...])
Highlight values defined by a quantile with a style.
Styler.background_gradient([cmap, low, ...])
Color the background in a gradient style.
Styler.text_gradient([cmap, low, high, ...])
Color the text in a gradient style.
Styler.bar([subset, axis, color, cmap, ...])
Draw bar chart in the cell backgrounds.
Style export and import#
Styler.to_html([buf, table_uuid, ...])
Write Styler to a file, buffer or string in HTML-CSS format.
Styler.to_latex([buf, column_format, ...])
Write Styler to a file, buffer or string in LaTeX format.
Styler.to_excel(excel_writer[, sheet_name, ...])
Write Styler to an Excel sheet.
Styler.to_string([buf, encoding, ...])
Write Styler to a file, buffer or string in text format.
Styler.export()
Export the styles applied to the current Styler.
Styler.use(styles)
Set the styles on the current Styler.
|
reference/style.html
|
pandas.DataFrame.merge
|
`pandas.DataFrame.merge`
Merge DataFrame or named Series objects with a database-style join.
A named Series object is treated as a DataFrame with a single named column.
```
>>> df1 = pd.DataFrame({'lkey': ['foo', 'bar', 'baz', 'foo'],
... 'value': [1, 2, 3, 5]})
>>> df2 = pd.DataFrame({'rkey': ['foo', 'bar', 'baz', 'foo'],
... 'value': [5, 6, 7, 8]})
>>> df1
lkey value
0 foo 1
1 bar 2
2 baz 3
3 foo 5
>>> df2
rkey value
0 foo 5
1 bar 6
2 baz 7
3 foo 8
```
|
DataFrame.merge(right, how='inner', on=None, left_on=None, right_on=None, left_index=False, right_index=False, sort=False, suffixes=('_x', '_y'), copy=True, indicator=False, validate=None)[source]#
Merge DataFrame or named Series objects with a database-style join.
A named Series object is treated as a DataFrame with a single named column.
The join is done on columns or indexes. If joining columns on
columns, the DataFrame indexes will be ignored. Otherwise if joining indexes
on indexes or indexes on a column or columns, the index will be passed on.
When performing a cross merge, no column specifications to merge on are
allowed.
Warning
If both key columns contain rows where the key is a null value, those
rows will be matched against each other. This is different from usual SQL
join behaviour and can lead to unexpected results.
Parameters
rightDataFrame or named SeriesObject to merge with.
how{‘left’, ‘right’, ‘outer’, ‘inner’, ‘cross’}, default ‘inner’Type of merge to be performed.
left: use only keys from left frame, similar to a SQL left outer join;
preserve key order.
right: use only keys from right frame, similar to a SQL right outer join;
preserve key order.
outer: use union of keys from both frames, similar to a SQL full outer
join; sort keys lexicographically.
inner: use intersection of keys from both frames, similar to a SQL inner
join; preserve the order of the left keys.
cross: creates the cartesian product from both frames, preserves the order
of the left keys.
New in version 1.2.0.
onlabel or listColumn or index level names to join on. These must be found in both
DataFrames. If on is None and not merging on indexes then this defaults
to the intersection of the columns in both DataFrames.
left_onlabel or list, or array-likeColumn or index level names to join on in the left DataFrame. Can also
be an array or list of arrays of the length of the left DataFrame.
These arrays are treated as if they are columns.
right_onlabel or list, or array-likeColumn or index level names to join on in the right DataFrame. Can also
be an array or list of arrays of the length of the right DataFrame.
These arrays are treated as if they are columns.
left_indexbool, default FalseUse the index from the left DataFrame as the join key(s). If it is a
MultiIndex, the number of keys in the other DataFrame (either the index
or a number of columns) must match the number of levels.
right_indexbool, default FalseUse the index from the right DataFrame as the join key. Same caveats as
left_index.
sortbool, default FalseSort the join keys lexicographically in the result DataFrame. If False,
the order of the join keys depends on the join type (how keyword).
suffixeslist-like, default is (“_x”, “_y”)A length-2 sequence where each element is optionally a string
indicating the suffix to add to overlapping column names in
left and right respectively. Pass a value of None instead
of a string to indicate that the column name from left or
right should be left as-is, with no suffix. At least one of the
values must not be None.
copybool, default TrueIf False, avoid copy if possible.
indicatorbool or str, default FalseIf True, adds a column to the output DataFrame called “_merge” with
information on the source of each row. The column can be given a different
name by providing a string argument. The column will have a Categorical
type with the value of “left_only” for observations whose merge key only
appears in the left DataFrame, “right_only” for observations
whose merge key only appears in the right DataFrame, and “both”
if the observation’s merge key is found in both DataFrames.
validatestr, optionalIf specified, checks if merge is of specified type.
“one_to_one” or “1:1”: check if merge keys are unique in both
left and right datasets.
“one_to_many” or “1:m”: check if merge keys are unique in left
dataset.
“many_to_one” or “m:1”: check if merge keys are unique in right
dataset.
“many_to_many” or “m:m”: allowed, but does not result in checks.
Returns
DataFrameA DataFrame of the two merged objects.
See also
merge_orderedMerge with optional filling/interpolation.
merge_asofMerge on nearest keys.
DataFrame.joinSimilar method using indices.
Notes
Support for specifying index levels as the on, left_on, and
right_on parameters was added in version 0.23.0
Support for merging named Series objects was added in version 0.24.0
Examples
>>> df1 = pd.DataFrame({'lkey': ['foo', 'bar', 'baz', 'foo'],
... 'value': [1, 2, 3, 5]})
>>> df2 = pd.DataFrame({'rkey': ['foo', 'bar', 'baz', 'foo'],
... 'value': [5, 6, 7, 8]})
>>> df1
lkey value
0 foo 1
1 bar 2
2 baz 3
3 foo 5
>>> df2
rkey value
0 foo 5
1 bar 6
2 baz 7
3 foo 8
Merge df1 and df2 on the lkey and rkey columns. The value columns have
the default suffixes, _x and _y, appended.
>>> df1.merge(df2, left_on='lkey', right_on='rkey')
lkey value_x rkey value_y
0 foo 1 foo 5
1 foo 1 foo 8
2 foo 5 foo 5
3 foo 5 foo 8
4 bar 2 bar 6
5 baz 3 baz 7
Merge DataFrames df1 and df2 with specified left and right suffixes
appended to any overlapping columns.
>>> df1.merge(df2, left_on='lkey', right_on='rkey',
... suffixes=('_left', '_right'))
lkey value_left rkey value_right
0 foo 1 foo 5
1 foo 1 foo 8
2 foo 5 foo 5
3 foo 5 foo 8
4 bar 2 bar 6
5 baz 3 baz 7
Merge DataFrames df1 and df2, but raise an exception if the DataFrames have
any overlapping columns.
>>> df1.merge(df2, left_on='lkey', right_on='rkey', suffixes=(False, False))
Traceback (most recent call last):
...
ValueError: columns overlap but no suffix specified:
Index(['value'], dtype='object')
>>> df1 = pd.DataFrame({'a': ['foo', 'bar'], 'b': [1, 2]})
>>> df2 = pd.DataFrame({'a': ['foo', 'baz'], 'c': [3, 4]})
>>> df1
a b
0 foo 1
1 bar 2
>>> df2
a c
0 foo 3
1 baz 4
>>> df1.merge(df2, how='inner', on='a')
a b c
0 foo 1 3
>>> df1.merge(df2, how='left', on='a')
a b c
0 foo 1 3.0
1 bar 2 NaN
>>> df1 = pd.DataFrame({'left': ['foo', 'bar']})
>>> df2 = pd.DataFrame({'right': [7, 8]})
>>> df1
left
0 foo
1 bar
>>> df2
right
0 7
1 8
>>> df1.merge(df2, how='cross')
left right
0 foo 7
1 foo 8
2 bar 7
3 bar 8
|
reference/api/pandas.DataFrame.merge.html
|
pandas.tseries.offsets.LastWeekOfMonth.name
|
`pandas.tseries.offsets.LastWeekOfMonth.name`
Return a string representing the base frequency.
```
>>> pd.offsets.Hour().name
'H'
```
|
LastWeekOfMonth.name#
Return a string representing the base frequency.
Examples
>>> pd.offsets.Hour().name
'H'
>>> pd.offsets.Hour(5).name
'H'
|
reference/api/pandas.tseries.offsets.LastWeekOfMonth.name.html
|
pandas.tseries.offsets.BYearEnd.month
|
pandas.tseries.offsets.BYearEnd.month
|
BYearEnd.month#
|
reference/api/pandas.tseries.offsets.BYearEnd.month.html
|
pandas.DataFrame.combine
|
`pandas.DataFrame.combine`
Perform column-wise combine with another DataFrame.
```
>>> df1 = pd.DataFrame({'A': [0, 0], 'B': [4, 4]})
>>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]})
>>> take_smaller = lambda s1, s2: s1 if s1.sum() < s2.sum() else s2
>>> df1.combine(df2, take_smaller)
A B
0 0 3
1 0 3
```
|
DataFrame.combine(other, func, fill_value=None, overwrite=True)[source]#
Perform column-wise combine with another DataFrame.
Combines a DataFrame with other DataFrame using func
to element-wise combine columns. The row and column indexes of the
resulting DataFrame will be the union of the two.
Parameters
otherDataFrameThe DataFrame to merge column-wise.
funcfunctionFunction that takes two series as inputs and return a Series or a
scalar. Used to merge the two dataframes column by columns.
fill_valuescalar value, default NoneThe value to fill NaNs with prior to passing any column to the
merge func.
overwritebool, default TrueIf True, columns in self that do not exist in other will be
overwritten with NaNs.
Returns
DataFrameCombination of the provided DataFrames.
See also
DataFrame.combine_firstCombine two DataFrame objects and default to non-null values in frame calling the method.
Examples
Combine using a simple function that chooses the smaller column.
>>> df1 = pd.DataFrame({'A': [0, 0], 'B': [4, 4]})
>>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]})
>>> take_smaller = lambda s1, s2: s1 if s1.sum() < s2.sum() else s2
>>> df1.combine(df2, take_smaller)
A B
0 0 3
1 0 3
Example using a true element-wise combine function.
>>> df1 = pd.DataFrame({'A': [5, 0], 'B': [2, 4]})
>>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]})
>>> df1.combine(df2, np.minimum)
A B
0 1 2
1 0 3
Using fill_value fills Nones prior to passing the column to the
merge function.
>>> df1 = pd.DataFrame({'A': [0, 0], 'B': [None, 4]})
>>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]})
>>> df1.combine(df2, take_smaller, fill_value=-5)
A B
0 0 -5.0
1 0 4.0
However, if the same element in both dataframes is None, that None
is preserved
>>> df1 = pd.DataFrame({'A': [0, 0], 'B': [None, 4]})
>>> df2 = pd.DataFrame({'A': [1, 1], 'B': [None, 3]})
>>> df1.combine(df2, take_smaller, fill_value=-5)
A B
0 0 -5.0
1 0 3.0
Example that demonstrates the use of overwrite and behavior when
the axis differ between the dataframes.
>>> df1 = pd.DataFrame({'A': [0, 0], 'B': [4, 4]})
>>> df2 = pd.DataFrame({'B': [3, 3], 'C': [-10, 1], }, index=[1, 2])
>>> df1.combine(df2, take_smaller)
A B C
0 NaN NaN NaN
1 NaN 3.0 -10.0
2 NaN 3.0 1.0
>>> df1.combine(df2, take_smaller, overwrite=False)
A B C
0 0.0 NaN NaN
1 0.0 3.0 -10.0
2 NaN 3.0 1.0
Demonstrating the preference of the passed in dataframe.
>>> df2 = pd.DataFrame({'B': [3, 3], 'C': [1, 1], }, index=[1, 2])
>>> df2.combine(df1, take_smaller)
A B C
0 0.0 NaN NaN
1 0.0 3.0 NaN
2 NaN 3.0 NaN
>>> df2.combine(df1, take_smaller, overwrite=False)
A B C
0 0.0 NaN NaN
1 0.0 3.0 1.0
2 NaN 3.0 1.0
|
reference/api/pandas.DataFrame.combine.html
|
pandas.DataFrame.values
|
`pandas.DataFrame.values`
Return a Numpy representation of the DataFrame.
```
>>> df = pd.DataFrame({'age': [ 3, 29],
... 'height': [94, 170],
... 'weight': [31, 115]})
>>> df
age height weight
0 3 94 31
1 29 170 115
>>> df.dtypes
age int64
height int64
weight int64
dtype: object
>>> df.values
array([[ 3, 94, 31],
[ 29, 170, 115]])
```
|
property DataFrame.values[source]#
Return a Numpy representation of the DataFrame.
Warning
We recommend using DataFrame.to_numpy() instead.
Only the values in the DataFrame will be returned, the axes labels
will be removed.
Returns
numpy.ndarrayThe values of the DataFrame.
See also
DataFrame.to_numpyRecommended alternative to this method.
DataFrame.indexRetrieve the index labels.
DataFrame.columnsRetrieve the column names.
Notes
The dtype will be a lower-common-denominator dtype (implicit
upcasting); that is to say if the dtypes (even of numeric types)
are mixed, the one that accommodates all will be chosen. Use this
with care if you are not dealing with the blocks.
e.g. If the dtypes are float16 and float32, dtype will be upcast to
float32. If dtypes are int32 and uint8, dtype will be upcast to
int32. By numpy.find_common_type() convention, mixing int64
and uint64 will result in a float64 dtype.
Examples
A DataFrame where all columns are the same type (e.g., int64) results
in an array of the same type.
>>> df = pd.DataFrame({'age': [ 3, 29],
... 'height': [94, 170],
... 'weight': [31, 115]})
>>> df
age height weight
0 3 94 31
1 29 170 115
>>> df.dtypes
age int64
height int64
weight int64
dtype: object
>>> df.values
array([[ 3, 94, 31],
[ 29, 170, 115]])
A DataFrame with mixed type columns (e.g., str/object, int64, float32)
results in an ndarray of the broadest type that accommodates these
mixed types (e.g., object).
>>> df2 = pd.DataFrame([('parrot', 24.0, 'second'),
... ('lion', 80.5, 1),
... ('monkey', np.nan, None)],
... columns=('name', 'max_speed', 'rank'))
>>> df2.dtypes
name object
max_speed float64
rank object
dtype: object
>>> df2.values
array([['parrot', 24.0, 'second'],
['lion', 80.5, 1],
['monkey', nan, None]], dtype=object)
|
reference/api/pandas.DataFrame.values.html
|
pandas.core.groupby.GroupBy.pct_change
|
`pandas.core.groupby.GroupBy.pct_change`
Calculate pct_change of each value to previous entry in group.
|
final GroupBy.pct_change(periods=1, fill_method='ffill', limit=None, freq=None, axis=0)[source]#
Calculate pct_change of each value to previous entry in group.
Returns
Series or DataFramePercentage changes within each group.
See also
Series.groupbyApply a function groupby to a Series.
DataFrame.groupbyApply a function groupby to each row or column of a DataFrame.
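No example is given above; a short sketch with arbitrary groups:
```
>>> df = pd.DataFrame({"g": ["a", "a", "b", "b"], "v": [1.0, 2.0, 4.0, 6.0]})
>>> df.groupby("g")["v"].pct_change()
0    NaN
1    1.0
2    NaN
3    0.5
Name: v, dtype: float64
```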
|
reference/api/pandas.core.groupby.GroupBy.pct_change.html
|
pandas.DataFrame.select_dtypes
|
`pandas.DataFrame.select_dtypes`
Return a subset of the DataFrame’s columns based on the column dtypes.
```
>>> df = pd.DataFrame({'a': [1, 2] * 3,
... 'b': [True, False] * 3,
... 'c': [1.0, 2.0] * 3})
>>> df
a b c
0 1 True 1.0
1 2 False 2.0
2 1 True 1.0
3 2 False 2.0
4 1 True 1.0
5 2 False 2.0
```
|
DataFrame.select_dtypes(include=None, exclude=None)[source]#
Return a subset of the DataFrame’s columns based on the column dtypes.
Parameters
include, excludescalar or list-likeA selection of dtypes or strings to be included/excluded. At least
one of these parameters must be supplied.
Returns
DataFrameThe subset of the frame including the dtypes in include and
excluding the dtypes in exclude.
Raises
ValueError
If both of include and exclude are empty
If include and exclude have overlapping elements
If any kind of string dtype is passed in.
See also
DataFrame.dtypesReturn Series with the data type of each column.
Notes
To select all numeric types, use np.number or 'number'
To select strings you must use the object dtype, but note that
this will return all object dtype columns
See the numpy dtype hierarchy
To select datetimes, use np.datetime64, 'datetime' or
'datetime64'
To select timedeltas, use np.timedelta64, 'timedelta' or
'timedelta64'
To select Pandas categorical dtypes, use 'category'
To select Pandas datetimetz dtypes, use 'datetimetz' (new in
0.20.0) or 'datetime64[ns, tz]'
Examples
>>> df = pd.DataFrame({'a': [1, 2] * 3,
... 'b': [True, False] * 3,
... 'c': [1.0, 2.0] * 3})
>>> df
a b c
0 1 True 1.0
1 2 False 2.0
2 1 True 1.0
3 2 False 2.0
4 1 True 1.0
5 2 False 2.0
>>> df.select_dtypes(include='bool')
b
0 True
1 False
2 True
3 False
4 True
5 False
>>> df.select_dtypes(include=['float64'])
c
0 1.0
1 2.0
2 1.0
3 2.0
4 1.0
5 2.0
>>> df.select_dtypes(exclude=['int64'])
b c
0 True 1.0
1 False 2.0
2 True 1.0
3 False 2.0
4 True 1.0
5 False 2.0
|
reference/api/pandas.DataFrame.select_dtypes.html
|
pandas.ExcelWriter.save
|
`pandas.ExcelWriter.save`
Save workbook to disk.
Deprecated since version 1.5.0.
|
ExcelWriter.save()[source]#
Save workbook to disk.
Deprecated since version 1.5.0.
|
reference/api/pandas.ExcelWriter.save.html
|
Index objects
|
Index objects
|
Index#
Many of these methods or variants thereof are available on the objects
that contain an index (Series/DataFrame) and those should most likely be
used before calling these methods directly.
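A brief illustrative sketch of a few of the properties and methods catalogued below (data arbitrary):
```
>>> idx = pd.Index(["a", "b", "a"])
>>> idx.is_unique
False
>>> idx.unique()
Index(['a', 'b'], dtype='object')
```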
Index([data, dtype, copy, name, tupleize_cols])
Immutable sequence used for indexing and alignment.
Properties#
Index.values
Return an array representing the data in the Index.
Index.is_monotonic
(DEPRECATED) Alias for is_monotonic_increasing.
Index.is_monotonic_increasing
Return a boolean if the values are equal or increasing.
Index.is_monotonic_decreasing
Return a boolean if the values are equal or decreasing.
Index.is_unique
Return if the index has unique values.
Index.has_duplicates
Check if the Index has duplicate values.
Index.hasnans
Return True if there are any NaNs.
Index.dtype
Return the dtype object of the underlying data.
Index.inferred_type
Return a string of the type inferred from the values.
Index.is_all_dates
Whether or not the index values only consist of dates.
Index.shape
Return a tuple of the shape of the underlying data.
Index.name
Return Index or MultiIndex name.
Index.names
Index.nbytes
Return the number of bytes in the underlying data.
Index.ndim
Number of dimensions of the underlying data, by definition 1.
Index.size
Return the number of elements in the underlying data.
Index.empty
Index.T
Return the transpose, which is by definition self.
Index.memory_usage([deep])
Memory usage of the values.
Modifying and computations#
Index.all(*args, **kwargs)
Return whether all elements are Truthy.
Index.any(*args, **kwargs)
Return whether any element is Truthy.
Index.argmin([axis, skipna])
Return int position of the smallest value in the Series.
Index.argmax([axis, skipna])
Return int position of the largest value in the Series.
Index.copy([name, deep, dtype, names])
Make a copy of this object.
Index.delete(loc)
Make new Index with passed location(-s) deleted.
Index.drop(labels[, errors])
Make new Index with passed list of labels deleted.
Index.drop_duplicates(*[, keep])
Return Index with duplicate values removed.
Index.duplicated([keep])
Indicate duplicate index values.
Index.equals(other)
Determine if two Index object are equal.
Index.factorize([sort, na_sentinel, ...])
Encode the object as an enumerated type or categorical variable.
Index.identical(other)
Similar to equals, but checks that object attributes and types are also equal.
Index.insert(loc, item)
Make new Index inserting new item at location.
Index.is_(other)
More flexible, faster check like is but that works through views.
Index.is_boolean()
Check if the Index only consists of booleans.
Index.is_categorical()
Check if the Index holds categorical data.
Index.is_floating()
Check if the Index is a floating type.
Index.is_integer()
Check if the Index only consists of integers.
Index.is_interval()
Check if the Index holds Interval objects.
Index.is_mixed()
Check if the Index holds data with mixed data types.
Index.is_numeric()
Check if the Index only consists of numeric data.
Index.is_object()
Check if the Index is of the object dtype.
Index.min([axis, skipna])
Return the minimum value of the Index.
Index.max([axis, skipna])
Return the maximum value of the Index.
Index.reindex(target[, method, level, ...])
Create index with target's values.
Index.rename(name[, inplace])
Alter Index or MultiIndex name.
Index.repeat(repeats[, axis])
Repeat elements of a Index.
Index.where(cond[, other])
Replace values where the condition is False.
Index.take(indices[, axis, allow_fill, ...])
Return a new Index of the values selected by the indices.
Index.putmask(mask, value)
Return a new Index of the values set with the mask.
Index.unique([level])
Return unique values in the index.
Index.nunique([dropna])
Return number of unique elements in the object.
Index.value_counts([normalize, sort, ...])
Return a Series containing counts of unique values.
Compatibility with MultiIndex#
Index.set_names(names, *[, level, inplace])
Set Index or MultiIndex name.
Index.droplevel([level])
Return index with requested level(s) removed.
Missing values#
Index.fillna([value, downcast])
Fill NA/NaN values with the specified value.
Index.dropna([how])
Return Index without NA/NaN values.
Index.isna()
Detect missing values.
Index.notna()
Detect existing (non-missing) values.
Conversion#
Index.astype(dtype[, copy])
Create an Index with values cast to dtypes.
Index.item()
Return the first element of the underlying data as a Python scalar.
Index.map(mapper[, na_action])
Map values using an input mapping or function.
Index.ravel([order])
Return an ndarray of the flattened values of the underlying data.
Index.to_list()
Return a list of the values.
Index.to_native_types([slicer])
(DEPRECATED) Format specified values of self and return them.
Index.to_series([index, name])
Create a Series with both index and values equal to the index keys.
Index.to_frame([index, name])
Create a DataFrame with a column containing the Index.
Index.view([cls])
Sorting#
Index.argsort(*args, **kwargs)
Return the integer indices that would sort the index.
Index.searchsorted(value[, side, sorter])
Find indices where elements should be inserted to maintain order.
Index.sort_values([return_indexer, ...])
Return a sorted copy of the index.
Time-specific operations#
Index.shift([periods, freq])
Shift index by desired number of time frequency increments.
Combining / joining / set operations#
Index.append(other)
Append a collection of Index options together.
Index.join(other, *[, how, level, ...])
Compute join_index and indexers to conform data structures to the new index.
Index.intersection(other[, sort])
Form the intersection of two Index objects.
Index.union(other[, sort])
Form the union of two Index objects.
Index.difference(other[, sort])
Return a new Index with elements of index not in other.
Index.symmetric_difference(other[, ...])
Compute the symmetric difference of two Index objects.
Selecting#
Index.asof(label)
Return the label from the index, or, if not present, the previous one.
Index.asof_locs(where, mask)
Return the locations (indices) of labels in the index.
Index.get_indexer(target[, method, limit, ...])
Compute indexer and mask for new index given the current index.
Index.get_indexer_for(target)
Guaranteed return of an indexer even when non-unique.
Index.get_indexer_non_unique(target)
Compute indexer and mask for new index given the current index.
Index.get_level_values(level)
Return an Index of values for requested level.
Index.get_loc(key[, method, tolerance])
Get integer location, slice or boolean mask for requested label.
Index.get_slice_bound(label, side[, kind])
Calculate slice bound that corresponds to given label.
Index.get_value(series, key)
Fast lookup of value from 1-dimensional ndarray.
Index.isin(values[, level])
Return a boolean array where the index values are in values.
Index.slice_indexer([start, end, step, kind])
Compute the slice indexer for input labels and step.
Index.slice_locs([start, end, step, kind])
Compute slice locations for input labels.
Numeric Index#
RangeIndex([start, stop, step, dtype, copy, ...])
Immutable Index implementing a monotonic integer range.
Int64Index([data, dtype, copy, name])
(DEPRECATED) Immutable sequence used for indexing and alignment.
UInt64Index([data, dtype, copy, name])
(DEPRECATED) Immutable sequence used for indexing and alignment.
Float64Index([data, dtype, copy, name])
(DEPRECATED) Immutable sequence used for indexing and alignment.
RangeIndex.start
The value of the start parameter (0 if this was not supplied).
RangeIndex.stop
The value of the stop parameter.
RangeIndex.step
The value of the step parameter (1 if this was not supplied).
RangeIndex.from_range(data[, name, dtype])
Create RangeIndex from a range object.
CategoricalIndex#
CategoricalIndex([data, categories, ...])
Index based on an underlying Categorical.
Categorical components#
CategoricalIndex.codes
The category codes of this categorical.
CategoricalIndex.categories
The categories of this categorical.
CategoricalIndex.ordered
Whether the categories have an ordered relationship.
CategoricalIndex.rename_categories(*args, ...)
Rename categories.
CategoricalIndex.reorder_categories(*args, ...)
Reorder categories as specified in new_categories.
CategoricalIndex.add_categories(*args, **kwargs)
Add new categories.
CategoricalIndex.remove_categories(*args, ...)
Remove the specified categories.
CategoricalIndex.remove_unused_categories(...)
Remove categories which are not used.
CategoricalIndex.set_categories(*args, **kwargs)
Set the categories to the specified new_categories.
CategoricalIndex.as_ordered(*args, **kwargs)
Set the Categorical to be ordered.
CategoricalIndex.as_unordered(*args, **kwargs)
Set the Categorical to be unordered.
Modifying and computations#
CategoricalIndex.map(mapper)
Map values using an input mapping or function.
CategoricalIndex.equals(other)
Determine if two CategoricalIndex objects contain the same elements.
IntervalIndex#
IntervalIndex(data[, closed, dtype, copy, ...])
Immutable index of intervals that are closed on the same side.
IntervalIndex components#
IntervalIndex.from_arrays(left, right[, ...])
Construct from two arrays defining the left and right bounds.
IntervalIndex.from_tuples(data[, closed, ...])
Construct an IntervalIndex from an array-like of tuples.
IntervalIndex.from_breaks(breaks[, closed, ...])
Construct an IntervalIndex from an array of splits.
IntervalIndex.left
IntervalIndex.right
IntervalIndex.mid
IntervalIndex.closed
String describing the inclusive side the intervals.
IntervalIndex.length
IntervalIndex.values
Return an array representing the data in the Index.
IntervalIndex.is_empty
Indicates if an interval is empty, meaning it contains no points.
IntervalIndex.is_non_overlapping_monotonic
Return a boolean whether the IntervalArray is non-overlapping and monotonic.
IntervalIndex.is_overlapping
Return True if the IntervalIndex has overlapping intervals, else False.
IntervalIndex.get_loc(key[, method, tolerance])
Get integer location, slice or boolean mask for requested label.
IntervalIndex.get_indexer(target[, method, ...])
Compute indexer and mask for new index given the current index.
IntervalIndex.set_closed(*args, **kwargs)
Return an identical IntervalArray closed on the specified side.
IntervalIndex.contains(*args, **kwargs)
Check elementwise if the Intervals contain the value.
IntervalIndex.overlaps(*args, **kwargs)
Check elementwise if an Interval overlaps the values in the IntervalArray.
IntervalIndex.to_tuples(*args, **kwargs)
Return an ndarray of tuples of the form (left, right).
MultiIndex#
MultiIndex([levels, codes, sortorder, ...])
A multi-level, or hierarchical, index object for pandas objects.
IndexSlice
Create an object to more easily perform multi-index slicing.
MultiIndex constructors#
MultiIndex.from_arrays(arrays[, sortorder, ...])
Convert arrays to MultiIndex.
MultiIndex.from_tuples(tuples[, sortorder, ...])
Convert list of tuples to MultiIndex.
MultiIndex.from_product(iterables[, ...])
Make a MultiIndex from the cartesian product of multiple iterables.
MultiIndex.from_frame(df[, sortorder, names])
Make a MultiIndex from a DataFrame.
MultiIndex properties#
MultiIndex.names
Names of levels in MultiIndex.
MultiIndex.levels
MultiIndex.codes
MultiIndex.nlevels
Integer number of levels in this MultiIndex.
MultiIndex.levshape
A tuple with the length of each level.
MultiIndex.dtypes
Return the dtypes as a Series for the underlying MultiIndex.
MultiIndex components#
MultiIndex.set_levels(levels, *[, level, ...])
Set new levels on MultiIndex.
MultiIndex.set_codes(codes, *[, level, ...])
Set new codes on MultiIndex.
MultiIndex.to_flat_index()
Convert a MultiIndex to an Index of Tuples containing the level values.
MultiIndex.to_frame([index, name, ...])
Create a DataFrame with the levels of the MultiIndex as columns.
MultiIndex.sortlevel([level, ascending, ...])
Sort MultiIndex at the requested level.
MultiIndex.droplevel([level])
Return index with requested level(s) removed.
MultiIndex.swaplevel([i, j])
Swap level i with level j.
MultiIndex.reorder_levels(order)
Rearrange levels using input order.
MultiIndex.remove_unused_levels()
Create new MultiIndex from current that removes unused levels.
MultiIndex selecting#
MultiIndex.get_loc(key[, method])
Get location for a label or a tuple of labels.
MultiIndex.get_locs(seq)
Get location for a sequence of labels.
MultiIndex.get_loc_level(key[, level, ...])
Get location and sliced index for requested label(s)/level(s).
MultiIndex.get_indexer(target[, method, ...])
Compute indexer and mask for new index given the current index.
MultiIndex.get_level_values(level)
Return vector of label values for requested level.
DatetimeIndex#
DatetimeIndex([data, freq, tz, normalize, ...])
Immutable ndarray-like of datetime64 data.
Time/date components#
DatetimeIndex.year
The year of the datetime.
DatetimeIndex.month
The month as January=1, December=12.
DatetimeIndex.day
The day of the datetime.
DatetimeIndex.hour
The hours of the datetime.
DatetimeIndex.minute
The minutes of the datetime.
DatetimeIndex.second
The seconds of the datetime.
DatetimeIndex.microsecond
The microseconds of the datetime.
DatetimeIndex.nanosecond
The nanoseconds of the datetime.
DatetimeIndex.date
Returns numpy array of python datetime.date objects.
DatetimeIndex.time
Returns numpy array of datetime.time objects.
DatetimeIndex.timetz
Returns numpy array of datetime.time objects with timezones.
DatetimeIndex.dayofyear
The ordinal day of the year.
DatetimeIndex.day_of_year
The ordinal day of the year.
DatetimeIndex.weekofyear
(DEPRECATED) The week ordinal of the year.
DatetimeIndex.week
(DEPRECATED) The week ordinal of the year.
DatetimeIndex.dayofweek
The day of the week with Monday=0, Sunday=6.
DatetimeIndex.day_of_week
The day of the week with Monday=0, Sunday=6.
DatetimeIndex.weekday
The day of the week with Monday=0, Sunday=6.
DatetimeIndex.quarter
The quarter of the date.
DatetimeIndex.tz
Return the timezone.
DatetimeIndex.freq
Return the frequency object if it is set, otherwise None.
DatetimeIndex.freqstr
Return the frequency object as a string if it is set, otherwise None.
DatetimeIndex.is_month_start
Indicates whether the date is the first day of the month.
DatetimeIndex.is_month_end
Indicates whether the date is the last day of the month.
DatetimeIndex.is_quarter_start
Indicator for whether the date is the first day of a quarter.
DatetimeIndex.is_quarter_end
Indicator for whether the date is the last day of a quarter.
DatetimeIndex.is_year_start
Indicate whether the date is the first day of a year.
DatetimeIndex.is_year_end
Indicate whether the date is the last day of the year.
DatetimeIndex.is_leap_year
Boolean indicator if the date belongs to a leap year.
DatetimeIndex.inferred_freq
Tries to return a string representing a frequency generated by infer_freq.
Selecting#
DatetimeIndex.indexer_at_time(time[, asof])
Return index locations of values at particular time of day.
DatetimeIndex.indexer_between_time(...[, ...])
Return index locations of values between particular times of day.
Time-specific operations#
DatetimeIndex.normalize(*args, **kwargs)
Convert times to midnight.
DatetimeIndex.strftime(date_format)
Convert to Index using specified date_format.
DatetimeIndex.snap([freq])
Snap time stamps to nearest occurring frequency.
DatetimeIndex.tz_convert(tz)
Convert tz-aware Datetime Array/Index from one time zone to another.
DatetimeIndex.tz_localize(tz[, ambiguous, ...])
Localize tz-naive Datetime Array/Index to tz-aware Datetime Array/Index.
DatetimeIndex.round(*args, **kwargs)
Perform round operation on the data to the specified freq.
DatetimeIndex.floor(*args, **kwargs)
Perform floor operation on the data to the specified freq.
DatetimeIndex.ceil(*args, **kwargs)
Perform ceil operation on the data to the specified freq.
DatetimeIndex.month_name(*args, **kwargs)
Return the month names with specified locale.
DatetimeIndex.day_name(*args, **kwargs)
Return the day names with specified locale.
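A minimal sketch of rounding and time-zone handling (timestamps are illustrative):
```
>>> idx = pd.DatetimeIndex(['2022-01-01 11:59:45'])
>>> idx.round('min')
DatetimeIndex(['2022-01-01 12:00:00'], dtype='datetime64[ns]', freq=None)
>>> idx.tz_localize('UTC').tz_convert('US/Eastern')
DatetimeIndex(['2022-01-01 06:59:45-05:00'], dtype='datetime64[ns, US/Eastern]', freq=None)
```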
Conversion#
DatetimeIndex.to_period(*args, **kwargs)
Cast to PeriodArray/Index at a particular frequency.
DatetimeIndex.to_perioddelta(freq)
Calculate deltas between self values and self converted to Periods at a freq.
DatetimeIndex.to_pydatetime(*args, **kwargs)
Return an ndarray of datetime.datetime objects.
DatetimeIndex.to_series([keep_tz, index, name])
Create a Series with both index and values equal to the index keys.
DatetimeIndex.to_frame([index, name])
Create a DataFrame with a column containing the Index.
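For example, converting month-end timestamps to monthly periods (repr as printed by pandas 1.x):
```
>>> idx = pd.date_range('2022-01-31', periods=3, freq='M')
>>> idx.to_period('M')
PeriodIndex(['2022-01', '2022-02', '2022-03'], dtype='period[M]', freq='M')
```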
Methods#
DatetimeIndex.mean(*args, **kwargs)
Return the mean value of the Array.
DatetimeIndex.std(*args, **kwargs)
Return sample standard deviation over requested axis.
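For instance (illustrative dates; std similarly returns a Timedelta):
```
>>> idx = pd.DatetimeIndex(['2022-01-01', '2022-01-03'])
>>> idx.mean()
Timestamp('2022-01-02 00:00:00')
```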
TimedeltaIndex#
TimedeltaIndex([data, unit, freq, closed, ...])
Immutable Index of timedelta64 data.
Components#
TimedeltaIndex.days
Number of days for each element.
TimedeltaIndex.seconds
Number of seconds (>= 0 and less than 1 day) for each element.
TimedeltaIndex.microseconds
Number of microseconds (>= 0 and less than 1 second) for each element.
TimedeltaIndex.nanoseconds
Number of nanoseconds (>= 0 and less than 1 microsecond) for each element.
TimedeltaIndex.components
Return a DataFrame of the individual resolution components of the Timedeltas.
TimedeltaIndex.inferred_freq
Try to return a string representing a frequency inferred by infer_freq.
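A small sketch of the component accessors above (Int64Index repr as in pandas 1.x):
```
>>> tdi = pd.TimedeltaIndex(['1 days 06:00:00', '2 days 12:30:00'])
>>> tdi.days
Int64Index([1, 2], dtype='int64')
>>> tdi.seconds
Int64Index([21600, 45000], dtype='int64')
```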
Conversion#
TimedeltaIndex.to_pytimedelta(*args, **kwargs)
Return an ndarray of datetime.timedelta objects.
TimedeltaIndex.to_series([index, name])
Create a Series with both index and values equal to the index keys.
TimedeltaIndex.round(*args, **kwargs)
Perform round operation on the data to the specified freq.
TimedeltaIndex.floor(*args, **kwargs)
Perform floor operation on the data to the specified freq.
TimedeltaIndex.ceil(*args, **kwargs)
Perform ceil operation on the data to the specified freq.
TimedeltaIndex.to_frame([index, name])
Create a DataFrame with a column containing the Index.
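An illustrative rounding example:
```
>>> tdi = pd.TimedeltaIndex(['1 days 00:00:45'])
>>> tdi.round('min')
TimedeltaIndex(['1 days 00:01:00'], dtype='timedelta64[ns]', freq=None)
>>> tdi.floor('min')
TimedeltaIndex(['1 days 00:00:00'], dtype='timedelta64[ns]', freq=None)
```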
Methods#
TimedeltaIndex.mean(*args, **kwargs)
Return the mean value of the Array.
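For instance:
```
>>> tdi = pd.TimedeltaIndex(['1 days', '3 days'])
>>> tdi.mean()
Timedelta('2 days 00:00:00')
```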
PeriodIndex#
PeriodIndex([data, ordinal, freq, dtype, ...])
Immutable ndarray holding ordinal values indicating regular periods in time.
Properties#
PeriodIndex.day
The days of the period.
PeriodIndex.dayofweek
The day of the week with Monday=0, Sunday=6.
PeriodIndex.day_of_week
The day of the week with Monday=0, Sunday=6.
PeriodIndex.dayofyear
The ordinal day of the year.
PeriodIndex.day_of_year
The ordinal day of the year.
PeriodIndex.days_in_month
The number of days in the month.
PeriodIndex.daysinmonth
The number of days in the month.
PeriodIndex.end_time
Get the Timestamp for the end of the period.
PeriodIndex.freq
Return the frequency object if it is set, otherwise None.
PeriodIndex.freqstr
Return the frequency object as a string if it is set, otherwise None.
PeriodIndex.hour
The hour of the period.
PeriodIndex.is_leap_year
Logical indicating if the date belongs to a leap year.
PeriodIndex.minute
The minute of the period.
PeriodIndex.month
The month as January=1, December=12.
PeriodIndex.quarter
The quarter of the date.
PeriodIndex.qyear
Fiscal year the Period lies in according to its starting-quarter.
PeriodIndex.second
The second of the period.
PeriodIndex.start_time
Get the Timestamp for the start of the period.
PeriodIndex.week
The week ordinal of the year.
PeriodIndex.weekday
The day of the week with Monday=0, Sunday=6.
PeriodIndex.weekofyear
The week ordinal of the year.
PeriodIndex.year
The year of the period.
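A short sketch of the property accessors above (illustrative monthly periods; Int64Index repr as in pandas 1.x):
```
>>> pidx = pd.period_range('2022-01', periods=3, freq='M')
>>> pidx.days_in_month
Int64Index([31, 28, 31], dtype='int64')
>>> pidx.is_leap_year
array([False, False, False])
```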
Methods#
PeriodIndex.asfreq([freq, how])
Convert the PeriodArray to the specified frequency freq.
PeriodIndex.strftime(*args, **kwargs)
Convert to Index using specified date_format.
PeriodIndex.to_timestamp([freq, how])
Cast to DatetimeArray/Index.
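A minimal sketch of frequency conversion (illustrative monthly periods; reprs as printed by pandas 1.x):
```
>>> pidx = pd.period_range('2022-01', periods=3, freq='M')
>>> pidx.asfreq('D', how='end')
PeriodIndex(['2022-01-31', '2022-02-28', '2022-03-31'], dtype='period[D]', freq='D')
>>> pidx.to_timestamp()
DatetimeIndex(['2022-01-01', '2022-02-01', '2022-03-01'], dtype='datetime64[ns]', freq='MS')
```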
|
reference/indexing.html
|
pandas.tseries.offsets.Minute.is_year_end
|
`pandas.tseries.offsets.Minute.is_year_end`
Return a boolean indicating whether a timestamp occurs on the year end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
```
|
Minute.is_year_end()#
Return a boolean indicating whether a timestamp occurs on the year end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
|
reference/api/pandas.tseries.offsets.Minute.is_year_end.html
|
pandas.io.formats.style.Styler.template_string
|
pandas.io.formats.style.Styler.template_string
|
Styler.template_string = <Template 'string.tpl'>#
|
reference/api/pandas.io.formats.style.Styler.template_string.html
|
pandas.tseries.offsets.FY5253Quarter.apply
|
pandas.tseries.offsets.FY5253Quarter.apply
|
FY5253Quarter.apply()#
|
reference/api/pandas.tseries.offsets.FY5253Quarter.apply.html
|
pandas.tseries.offsets.FY5253Quarter.kwds
|
`pandas.tseries.offsets.FY5253Quarter.kwds`
Return a dict of extra parameters for the offset.
```
>>> pd.DateOffset(5).kwds
{}
```
|
FY5253Quarter.kwds#
Return a dict of extra parameters for the offset.
Examples
>>> pd.DateOffset(5).kwds
{}
>>> pd.offsets.FY5253Quarter().kwds
{'weekday': 0,
'startingMonth': 1,
'qtr_with_extra_week': 1,
'variation': 'nearest'}
|
reference/api/pandas.tseries.offsets.FY5253Quarter.kwds.html
|