pandas.Series.mean
`pandas.Series.mean` Return the mean of the values over the requested axis.
Series.mean(axis=_NoDefault.no_default, skipna=True, level=None, numeric_only=None, **kwargs)[source]# Return the mean of the values over the requested axis. Parameters axis{index (0)}Axis for the function to be applied on. For Series this parameter is unused and defaults to 0. skipnabool, default TrueExclude NA/null values when computing the result. levelint or level name, default NoneIf the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a scalar. Deprecated since version 1.3.0: The level keyword is deprecated. Use groupby instead. numeric_onlybool, default NoneInclude only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series. Deprecated since version 1.5.0: Specifying numeric_only=None is deprecated. The default value will be False in a future version of pandas. **kwargsAdditional keyword arguments to be passed to the function. Returns scalar or Series (if level specified)
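The entry above has no example; a minimal sketch with hypothetical data, showing the default `skipna` behavior:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, np.nan])

# NaN is excluded by default (skipna=True): mean of [1, 2, 3].
print(s.mean())              # 2.0

# With skipna=False the NaN propagates into the result.
print(s.mean(skipna=False))  # nan
```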
reference/api/pandas.Series.mean.html
pandas.Index.drop_duplicates
`pandas.Index.drop_duplicates` Return Index with duplicate values removed. ‘first’ : Drop duplicates except for the first occurrence. ``` >>> idx = pd.Index(['lama', 'cow', 'lama', 'beetle', 'lama', 'hippo']) ```
Index.drop_duplicates(*, keep='first')[source]# Return Index with duplicate values removed. Parameters keep{‘first’, ‘last’, False}, default ‘first’ ‘first’ : Drop duplicates except for the first occurrence. ‘last’ : Drop duplicates except for the last occurrence. False : Drop all duplicates. Returns deduplicatedIndex See also Series.drop_duplicatesEquivalent method on Series. DataFrame.drop_duplicatesEquivalent method on DataFrame. Index.duplicatedRelated method on Index, indicating duplicate Index values. Examples Generate a pandas.Index with duplicate values. >>> idx = pd.Index(['lama', 'cow', 'lama', 'beetle', 'lama', 'hippo']) The keep parameter controls which duplicate values are removed. The value ‘first’ keeps the first occurrence for each set of duplicated entries. The default value of keep is ‘first’. >>> idx.drop_duplicates(keep='first') Index(['lama', 'cow', 'beetle', 'hippo'], dtype='object') The value ‘last’ keeps the last occurrence for each set of duplicated entries. >>> idx.drop_duplicates(keep='last') Index(['cow', 'beetle', 'lama', 'hippo'], dtype='object') The value False discards all sets of duplicated entries. >>> idx.drop_duplicates(keep=False) Index(['cow', 'beetle', 'hippo'], dtype='object')
reference/api/pandas.Index.drop_duplicates.html
pandas.core.groupby.SeriesGroupBy.aggregate
`pandas.core.groupby.SeriesGroupBy.aggregate` Aggregate using one or more operations over the specified axis. ``` >>> s = pd.Series([1, 2, 3, 4]) ```
SeriesGroupBy.aggregate(func=None, *args, engine=None, engine_kwargs=None, **kwargs)[source]# Aggregate using one or more operations over the specified axis. Parameters funcfunction, str, list or dictFunction to use for aggregating the data. If a function, must either work when passed a Series or when passed to Series.apply. Accepted combinations are: function string function name list of functions and/or function names, e.g. [np.sum, 'mean'] dict of axis labels -> functions, function names or list of such. Can also accept a Numba JIT function with engine='numba' specified. Only passing a single function is supported with this engine. If the 'numba' engine is chosen, the function must be a user defined function with values and index as the first and second arguments respectively in the function signature. Each group’s index will be passed to the user defined function and optionally available for use. Changed in version 1.1.0. *argsPositional arguments to pass to func. enginestr, default None 'cython' : Runs the function through C-extensions from cython. 'numba' : Runs the function through JIT compiled code from numba. None : Defaults to 'cython' or globally setting compute.use_numba New in version 1.1.0. engine_kwargsdict, default None For 'cython' engine, there are no accepted engine_kwargs For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} and will be applied to the function New in version 1.1.0. **kwargsKeyword arguments to be passed into func. Returns Series See also Series.groupby.applyApply function func group-wise and combine the results together. Series.groupby.transformTransforms the Series on each group based on the given function. Series.aggregateAggregate using one or more operations over the specified axis.
Notes When using engine='numba', there will be no “fall back” behavior internally. The group data and group index will be passed as numpy arrays to the JITed user defined function, and no alternative execution attempts will be tried. Functions that mutate the passed object can produce unexpected behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods for more details. Changed in version 1.3.0: The resulting dtype will reflect the return value of the passed func, see the examples below. Examples >>> s = pd.Series([1, 2, 3, 4]) >>> s 0 1 1 2 2 3 3 4 dtype: int64 >>> s.groupby([1, 1, 2, 2]).min() 1 1 2 3 dtype: int64 >>> s.groupby([1, 1, 2, 2]).agg('min') 1 1 2 3 dtype: int64 >>> s.groupby([1, 1, 2, 2]).agg(['min', 'max']) min max 1 1 2 2 3 4 The output column names can be controlled by passing the desired column names and aggregations as keyword arguments. >>> s.groupby([1, 1, 2, 2]).agg( ... minimum='min', ... maximum='max', ... ) minimum maximum 1 1 2 2 3 4 Changed in version 1.3.0: The resulting dtype will reflect the return value of the aggregating function. >>> s.groupby([1, 1, 2, 2]).agg(lambda x: x.astype(float).min()) 1 1.0 2 3.0 dtype: float64
reference/api/pandas.core.groupby.SeriesGroupBy.aggregate.html
pandas.tseries.offsets.MonthEnd.rollback
`pandas.tseries.offsets.MonthEnd.rollback` Roll provided date backward to next offset only if not on offset.
MonthEnd.rollback()# Roll provided date backward to next offset only if not on offset. Returns TimeStampRolled timestamp if not on offset, otherwise unchanged timestamp.
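The entry above has no example; a minimal sketch using hypothetical dates:

```python
import pandas as pd

offset = pd.offsets.MonthEnd()

# 2022-01-15 is not a month end, so it is rolled back to 2021-12-31.
print(offset.rollback(pd.Timestamp("2022-01-15")))  # 2021-12-31 00:00:00

# 2022-01-31 is already on the offset, so it is returned unchanged.
print(offset.rollback(pd.Timestamp("2022-01-31")))  # 2022-01-31 00:00:00
```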
reference/api/pandas.tseries.offsets.MonthEnd.rollback.html
pandas.Index.name
`pandas.Index.name` Return Index or MultiIndex name.
property Index.name[source]# Return Index or MultiIndex name.
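The entry above has no example; a minimal sketch with hypothetical data:

```python
import pandas as pd

# The name can be set at construction...
idx = pd.Index([10, 20, 30], name="id")
print(idx.name)  # 'id'

# ...or assigned afterwards.
idx.name = "key"
print(idx.name)  # 'key'
```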
reference/api/pandas.Index.name.html
pandas.tseries.offsets.CustomBusinessHour.name
`pandas.tseries.offsets.CustomBusinessHour.name` Return a string representing the base frequency. ``` >>> pd.offsets.Hour().name 'H' ```
CustomBusinessHour.name# Return a string representing the base frequency. Examples >>> pd.offsets.Hour().name 'H' >>> pd.offsets.Hour(5).name 'H'
reference/api/pandas.tseries.offsets.CustomBusinessHour.name.html
pandas.tseries.offsets.MonthBegin.is_quarter_start
`pandas.tseries.offsets.MonthBegin.is_quarter_start` Return boolean whether a timestamp occurs on the quarter start. Examples ``` >>> ts = pd.Timestamp(2022, 1, 1) >>> freq = pd.offsets.Hour(5) >>> freq.is_quarter_start(ts) True ```
MonthBegin.is_quarter_start()# Return boolean whether a timestamp occurs on the quarter start. Examples >>> ts = pd.Timestamp(2022, 1, 1) >>> freq = pd.offsets.Hour(5) >>> freq.is_quarter_start(ts) True
reference/api/pandas.tseries.offsets.MonthBegin.is_quarter_start.html
pandas.core.groupby.GroupBy.prod
`pandas.core.groupby.GroupBy.prod` Compute prod of group values. Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data.
final GroupBy.prod(numeric_only=_NoDefault.no_default, min_count=0)[source]# Compute prod of group values. Parameters numeric_onlybool, default TrueInclude only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. min_countint, default 0The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA. Returns Series or DataFrameComputed prod of values within each group.
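The entry above has no example; a minimal sketch with hypothetical data, also showing min_count:

```python
import pandas as pd

df = pd.DataFrame({"key": ["a", "a", "b"], "val": [2, 3, 4]})

# Product of values within each group: a -> 2*3 = 6, b -> 4.
print(df.groupby("key")["val"].prod())

# With min_count=2, group 'b' has only one valid value, so its result is NA.
print(df.groupby("key")["val"].prod(min_count=2))
```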
reference/api/pandas.core.groupby.GroupBy.prod.html
pandas.tseries.offsets.BusinessMonthEnd.is_quarter_start
`pandas.tseries.offsets.BusinessMonthEnd.is_quarter_start` Return boolean whether a timestamp occurs on the quarter start. ``` >>> ts = pd.Timestamp(2022, 1, 1) >>> freq = pd.offsets.Hour(5) >>> freq.is_quarter_start(ts) True ```
BusinessMonthEnd.is_quarter_start()# Return boolean whether a timestamp occurs on the quarter start. Examples >>> ts = pd.Timestamp(2022, 1, 1) >>> freq = pd.offsets.Hour(5) >>> freq.is_quarter_start(ts) True
reference/api/pandas.tseries.offsets.BusinessMonthEnd.is_quarter_start.html
pandas.Index.reindex
`pandas.Index.reindex` Create index with target’s values. default: exact matches only. ``` >>> idx = pd.Index(['car', 'bike', 'train', 'tractor']) >>> idx Index(['car', 'bike', 'train', 'tractor'], dtype='object') >>> idx.reindex(['car', 'bike']) (Index(['car', 'bike'], dtype='object'), array([0, 1])) ```
Index.reindex(target, method=None, level=None, limit=None, tolerance=None)[source]# Create index with target’s values. Parameters targetan iterable method{None, ‘pad’/’ffill’, ‘backfill’/’bfill’, ‘nearest’}, optional default: exact matches only. pad / ffill: find the PREVIOUS index value if no exact match. backfill / bfill: use NEXT index value if no exact match nearest: use the NEAREST index value if no exact match. Tied distances are broken by preferring the larger index value. levelint, optionalLevel of multiindex. limitint, optionalMaximum number of consecutive labels in target to match for inexact matches. toleranceint or float, optionalMaximum distance between original and new labels for inexact matches. The values of the index at the matching locations must satisfy the equation abs(index[indexer] - target) <= tolerance. Tolerance may be a scalar value, which applies the same tolerance to all values, or list-like, which applies variable tolerance per element. List-like includes list, tuple, array, Series, and must be the same size as the index and its dtype must exactly match the index’s type. Returns new_indexpd.IndexResulting index. indexernp.ndarray[np.intp] or NoneIndices of output values in original index. Raises TypeErrorIf method passed along with level. ValueErrorIf non-unique multi-index ValueErrorIf non-unique index and method or limit passed. See also Series.reindexConform Series to new index with optional filling logic. DataFrame.reindexConform DataFrame to new index with optional filling logic. Examples >>> idx = pd.Index(['car', 'bike', 'train', 'tractor']) >>> idx Index(['car', 'bike', 'train', 'tractor'], dtype='object') >>> idx.reindex(['car', 'bike']) (Index(['car', 'bike'], dtype='object'), array([0, 1]))
reference/api/pandas.Index.reindex.html
pandas.tseries.offsets.Micro.is_on_offset
`pandas.tseries.offsets.Micro.is_on_offset` Return boolean whether a timestamp intersects with this frequency. ``` >>> ts = pd.Timestamp(2022, 1, 1) >>> freq = pd.offsets.Day(1) >>> freq.is_on_offset(ts) True ```
Micro.is_on_offset()# Return boolean whether a timestamp intersects with this frequency. Parameters dtdatetime.datetimeTimestamp to check intersections with frequency. Examples >>> ts = pd.Timestamp(2022, 1, 1) >>> freq = pd.offsets.Day(1) >>> freq.is_on_offset(ts) True >>> ts = pd.Timestamp(2022, 8, 6) >>> ts.day_name() 'Saturday' >>> freq = pd.offsets.BusinessDay(1) >>> freq.is_on_offset(ts) False
reference/api/pandas.tseries.offsets.Micro.is_on_offset.html
pandas.Period.quarter
`pandas.Period.quarter` Return the quarter this Period falls on.
Period.quarter# Return the quarter this Period falls on.
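The entry above has no example; a minimal sketch with hypothetical monthly periods:

```python
import pandas as pd

print(pd.Period("2022-05", freq="M").quarter)  # 2  (May falls in Q2)
print(pd.Period("2022-11", freq="M").quarter)  # 4  (November falls in Q4)
```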
reference/api/pandas.Period.quarter.html
pandas.DataFrame.plot.bar
`pandas.DataFrame.plot.bar` Vertical bar plot. ``` >>> df = pd.DataFrame({'lab':['A', 'B', 'C'], 'val':[10, 30, 20]}) >>> ax = df.plot.bar(x='lab', y='val', rot=0) ```
DataFrame.plot.bar(x=None, y=None, **kwargs)[source]# Vertical bar plot. A bar plot is a plot that presents categorical data with rectangular bars with lengths proportional to the values that they represent. A bar plot shows comparisons among discrete categories. One axis of the plot shows the specific categories being compared, and the other axis represents a measured value. Parameters xlabel or position, optionalAllows plotting of one column versus another. If not specified, the index of the DataFrame is used. ylabel or position, optionalAllows plotting of one column versus another. If not specified, all numerical columns are used. colorstr, array-like, or dict, optionalThe color for each of the DataFrame’s columns. Possible values are: A single color string referred to by name, RGB or RGBA code, for instance ‘red’ or ‘#a98d19’. A sequence of color strings referred to by name, RGB or RGBA code, which will be used for each column recursively. For instance [‘green’,’yellow’] each column’s bar will be filled in green or yellow, alternatively. If there is only a single column to be plotted, then only the first color from the color list will be used. A dict of the form {column name: color}, so that each column will be colored accordingly. For example, if your columns are called a and b, then passing {‘a’: ‘green’, ‘b’: ‘red’} will color bars for column a in green and bars for column b in red. New in version 1.1.0. **kwargsAdditional keyword arguments are documented in DataFrame.plot(). Returns matplotlib.axes.Axes or np.ndarray of themAn ndarray is returned with one matplotlib.axes.Axes per column when subplots=True. See also DataFrame.plot.barhHorizontal bar plot. DataFrame.plotMake plots of a DataFrame. matplotlib.pyplot.barMake a bar plot with matplotlib. Examples Basic plot. >>> df = pd.DataFrame({'lab':['A', 'B', 'C'], 'val':[10, 30, 20]}) >>> ax = df.plot.bar(x='lab', y='val', rot=0) Plot a whole dataframe to a bar plot.
Each column is assigned a distinct color, and each row is nested in a group along the horizontal axis. >>> speed = [0.1, 17.5, 40, 48, 52, 69, 88] >>> lifespan = [2, 8, 70, 1.5, 25, 12, 28] >>> index = ['snail', 'pig', 'elephant', ... 'rabbit', 'giraffe', 'coyote', 'horse'] >>> df = pd.DataFrame({'speed': speed, ... 'lifespan': lifespan}, index=index) >>> ax = df.plot.bar(rot=0) Plot stacked bar charts for the DataFrame >>> ax = df.plot.bar(stacked=True) Instead of nesting, the figure can be split by column with subplots=True. In this case, a numpy.ndarray of matplotlib.axes.Axes are returned. >>> axes = df.plot.bar(rot=0, subplots=True) >>> axes[1].legend(loc=2) If you don’t like the default colours, you can specify how you’d like each column to be colored. >>> axes = df.plot.bar( ... rot=0, subplots=True, color={"speed": "red", "lifespan": "green"} ... ) >>> axes[1].legend(loc=2) Plot a single column. >>> ax = df.plot.bar(y='speed', rot=0) Plot only selected categories for the DataFrame. >>> ax = df.plot.bar(x='lifespan', rot=0)
reference/api/pandas.DataFrame.plot.bar.html
pandas.Period.to_timestamp
`pandas.Period.to_timestamp` Return the Timestamp representation of the Period. The Timestamp is taken at the part of the period specified by how, which is either Start or Finish, using the target frequency.
Period.to_timestamp()# Return the Timestamp representation of the Period. The Timestamp is taken at the part of the period specified by how, which is either Start or Finish, using the target frequency. Parameters freqstr or DateOffsetTarget frequency. Default is ‘D’ if self.freq is week or longer and ‘S’ otherwise. howstr, default ‘S’ (start)One of ‘S’, ‘E’. Can be aliased as case insensitive ‘Start’, ‘Finish’, ‘Begin’, ‘End’. Returns Timestamp
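The entry above has no example; a minimal sketch with a hypothetical quarterly period:

```python
import pandas as pd

p = pd.Period("2022Q1", freq="Q")

# how='S' (alias 'Start', the default) gives the first instant of the period.
print(p.to_timestamp(how="S"))  # 2022-01-01 00:00:00

# how='E' (alias 'End' or 'Finish') gives the last instant of the quarter.
print(p.to_timestamp(how="E"))
```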
reference/api/pandas.Period.to_timestamp.html
pandas.DataFrame.idxmin
`pandas.DataFrame.idxmin` Return index of first occurrence of minimum over requested axis. ``` >>> df = pd.DataFrame({'consumption': [10.51, 103.11, 55.48], ... 'co2_emissions': [37.2, 19.66, 1712]}, ... index=['Pork', 'Wheat Products', 'Beef']) ```
DataFrame.idxmin(axis=0, skipna=True, numeric_only=False)[source]# Return index of first occurrence of minimum over requested axis. NA/null values are excluded. Parameters axis{0 or ‘index’, 1 or ‘columns’}, default 0The axis to use. 0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise. skipnabool, default TrueExclude NA/null values. If an entire row/column is NA, the result will be NA. numeric_onlybool, default FalseInclude only float, int or boolean data. New in version 1.5.0. Returns SeriesIndexes of minima along the specified axis. Raises ValueError If the row/column is empty See also Series.idxminReturn index of the minimum element. Notes This method is the DataFrame version of ndarray.argmin. Examples Consider a dataset containing food consumption in Argentina. >>> df = pd.DataFrame({'consumption': [10.51, 103.11, 55.48], ... 'co2_emissions': [37.2, 19.66, 1712]}, ... index=['Pork', 'Wheat Products', 'Beef']) >>> df consumption co2_emissions Pork 10.51 37.20 Wheat Products 103.11 19.66 Beef 55.48 1712.00 By default, it returns the index for the minimum value in each column. >>> df.idxmin() consumption Pork co2_emissions Wheat Products dtype: object To return the index for the minimum value in each row, use axis="columns". >>> df.idxmin(axis="columns") Pork consumption Wheat Products co2_emissions Beef consumption dtype: object
reference/api/pandas.DataFrame.idxmin.html
General functions
General functions melt(frame[, id_vars, value_vars, var_name, ...]) Unpivot a DataFrame from wide to long format, optionally leaving identifiers set. pivot(data, *[, index, columns, values]) Return reshaped DataFrame organized by given index / column values. pivot_table(data[, values, index, columns, ...])
Data manipulations# melt(frame[, id_vars, value_vars, var_name, ...]) Unpivot a DataFrame from wide to long format, optionally leaving identifiers set. pivot(data, *[, index, columns, values]) Return reshaped DataFrame organized by given index / column values. pivot_table(data[, values, index, columns, ...]) Create a spreadsheet-style pivot table as a DataFrame. crosstab(index, columns[, values, rownames, ...]) Compute a simple cross tabulation of two (or more) factors. cut(x, bins[, right, labels, retbins, ...]) Bin values into discrete intervals. qcut(x, q[, labels, retbins, precision, ...]) Quantile-based discretization function. merge(left, right[, how, on, left_on, ...]) Merge DataFrame or named Series objects with a database-style join. merge_ordered(left, right[, on, left_on, ...]) Perform a merge for ordered data with optional filling/interpolation. merge_asof(left, right[, on, left_on, ...]) Perform a merge by key distance. concat(objs, *[, axis, join, ignore_index, ...]) Concatenate pandas objects along a particular axis. get_dummies(data[, prefix, prefix_sep, ...]) Convert categorical variable into dummy/indicator variables. from_dummies(data[, sep, default_category]) Create a categorical DataFrame from a DataFrame of dummy variables. factorize(values[, sort, na_sentinel, ...]) Encode the object as an enumerated type or categorical variable. unique(values) Return unique values based on a hash table. wide_to_long(df, stubnames, i, j[, sep, suffix]) Unpivot a DataFrame from wide to long format. Top-level missing data# isna(obj) Detect missing values for an array-like object. isnull(obj) Detect missing values for an array-like object. notna(obj) Detect non-missing values for an array-like object. notnull(obj) Detect non-missing values for an array-like object. Top-level dealing with numeric data# to_numeric(arg[, errors, downcast]) Convert argument to a numeric type. 
Top-level dealing with datetimelike data# to_datetime(arg[, errors, dayfirst, ...]) Convert argument to datetime. to_timedelta(arg[, unit, errors]) Convert argument to timedelta. date_range([start, end, periods, freq, tz, ...]) Return a fixed frequency DatetimeIndex. bdate_range([start, end, periods, freq, tz, ...]) Return a fixed frequency DatetimeIndex with business day as the default. period_range([start, end, periods, freq, name]) Return a fixed frequency PeriodIndex. timedelta_range([start, end, periods, freq, ...]) Return a fixed frequency TimedeltaIndex with day as the default. infer_freq(index[, warn]) Infer the most likely frequency given the input index. Top-level dealing with Interval data# interval_range([start, end, periods, freq, ...]) Return a fixed frequency IntervalIndex. Top-level evaluation# eval(expr[, parser, engine, truediv, ...]) Evaluate a Python expression as a string using various backends. Hashing# util.hash_array(vals[, encoding, hash_key, ...]) Given a 1d array, return an array of deterministic integers. util.hash_pandas_object(obj[, index, ...]) Return a data hash of the Index/Series/DataFrame. Importing from other DataFrame libraries# api.interchange.from_dataframe(df[, allow_copy]) Build a pd.DataFrame from any DataFrame supporting the interchange protocol.
reference/general_functions.html
pandas.DataFrame.between_time
`pandas.DataFrame.between_time` Select values between particular times of the day (e.g., 9:00-9:30 AM). By setting start_time to be later than end_time, you can get the times that are not between the two times. ``` >>> i = pd.date_range('2018-04-09', periods=4, freq='1D20min') >>> ts = pd.DataFrame({'A': [1, 2, 3, 4]}, index=i) >>> ts A 2018-04-09 00:00:00 1 2018-04-10 00:20:00 2 2018-04-11 00:40:00 3 2018-04-12 01:00:00 4 ```
DataFrame.between_time(start_time, end_time, include_start=_NoDefault.no_default, include_end=_NoDefault.no_default, inclusive=None, axis=None)[source]# Select values between particular times of the day (e.g., 9:00-9:30 AM). By setting start_time to be later than end_time, you can get the times that are not between the two times. Parameters start_timedatetime.time or strInitial time as a time filter limit. end_timedatetime.time or strEnd time as a time filter limit. include_startbool, default TrueWhether the start time needs to be included in the result. Deprecated since version 1.4.0: Arguments include_start and include_end have been deprecated to standardize boundary inputs. Use inclusive instead, to set each bound as closed or open. include_endbool, default TrueWhether the end time needs to be included in the result. Deprecated since version 1.4.0: Arguments include_start and include_end have been deprecated to standardize boundary inputs. Use inclusive instead, to set each bound as closed or open. inclusive{“both”, “neither”, “left”, “right”}, default “both”Include boundaries; whether to set each bound as closed or open. axis{0 or ‘index’, 1 or ‘columns’}, default 0Determine range time on index or columns value. For Series this parameter is unused and defaults to 0. Returns Series or DataFrameData from the original object filtered to the specified dates range. Raises TypeErrorIf the index is not a DatetimeIndex See also at_timeSelect values at a particular time of the day. firstSelect initial periods of time series based on a date offset. lastSelect final periods of time series based on a date offset. DatetimeIndex.indexer_between_timeGet just the index locations for values between particular times of the day. 
Examples >>> i = pd.date_range('2018-04-09', periods=4, freq='1D20min') >>> ts = pd.DataFrame({'A': [1, 2, 3, 4]}, index=i) >>> ts A 2018-04-09 00:00:00 1 2018-04-10 00:20:00 2 2018-04-11 00:40:00 3 2018-04-12 01:00:00 4 >>> ts.between_time('0:15', '0:45') A 2018-04-10 00:20:00 2 2018-04-11 00:40:00 3 You get the times that are not between two times by setting start_time later than end_time: >>> ts.between_time('0:45', '0:15') A 2018-04-09 00:00:00 1 2018-04-12 01:00:00 4
reference/api/pandas.DataFrame.between_time.html
pandas.tseries.offsets.BYearEnd.normalize
pandas.tseries.offsets.BYearEnd.normalize
BYearEnd.normalize#
reference/api/pandas.tseries.offsets.BYearEnd.normalize.html
pandas.tseries.offsets.FY5253.rollback
`pandas.tseries.offsets.FY5253.rollback` Roll provided date backward to next offset only if not on offset.
FY5253.rollback()# Roll provided date backward to next offset only if not on offset. Returns TimeStampRolled timestamp if not on offset, otherwise unchanged timestamp.
reference/api/pandas.tseries.offsets.FY5253.rollback.html
pandas.tseries.offsets.QuarterBegin.rule_code
pandas.tseries.offsets.QuarterBegin.rule_code
QuarterBegin.rule_code#
reference/api/pandas.tseries.offsets.QuarterBegin.rule_code.html
pandas.api.extensions.ExtensionArray.argsort
`pandas.api.extensions.ExtensionArray.argsort` Return the indices that would sort this array. Whether the indices should result in an ascending or descending sort.
ExtensionArray.argsort(*args, ascending=True, kind='quicksort', na_position='last', **kwargs)[source]# Return the indices that would sort this array. Parameters ascendingbool, default TrueWhether the indices should result in an ascending or descending sort. kind{‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, optionalSorting algorithm. *args, **kwargs:Passed through to numpy.argsort(). Returns np.ndarray[np.intp]Array of indices that sort self. If NaN values are contained, NaN values are placed at the end. See also numpy.argsortSorting implementation used internally.
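The entry above has no example; a minimal sketch with a hypothetical nullable integer array:

```python
import pandas as pd

# A nullable Int64 extension array containing a missing value.
arr = pd.array([3, 1, 2, pd.NA], dtype="Int64")

# Indices that would sort the array; per the docstring, NA values
# are placed at the end (indices 1, 2, 0 for values 1, 2, 3, then 3).
print(arr.argsort())
```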
reference/api/pandas.api.extensions.ExtensionArray.argsort.html
pandas.merge_ordered
`pandas.merge_ordered` Perform a merge for ordered data with optional filling/interpolation. ``` >>> df1 = pd.DataFrame( ... { ... "key": ["a", "c", "e", "a", "c", "e"], ... "lvalue": [1, 2, 3, 1, 2, 3], ... "group": ["a", "a", "a", "b", "b", "b"] ... } ... ) >>> df1 key lvalue group 0 a 1 a 1 c 2 a 2 e 3 a 3 a 1 b 4 c 2 b 5 e 3 b ```
pandas.merge_ordered(left, right, on=None, left_on=None, right_on=None, left_by=None, right_by=None, fill_method=None, suffixes=('_x', '_y'), how='outer')[source]# Perform a merge for ordered data with optional filling/interpolation. Designed for ordered data like time series data. Optionally perform group-wise merge (see examples). Parameters leftDataFrame rightDataFrame onlabel or listField names to join on. Must be found in both DataFrames. left_onlabel or list, or array-likeField names to join on in left DataFrame. Can be a vector or list of vectors of the length of the DataFrame to use a particular vector as the join key instead of columns. right_onlabel or list, or array-likeField names to join on in right DataFrame or vector/list of vectors per left_on docs. left_bycolumn name or list of column namesGroup left DataFrame by group columns and merge piece by piece with right DataFrame. right_bycolumn name or list of column namesGroup right DataFrame by group columns and merge piece by piece with left DataFrame. fill_method{‘ffill’, None}, default NoneInterpolation method for data. suffixeslist-like, default is (“_x”, “_y”)A length-2 sequence where each element is optionally a string indicating the suffix to add to overlapping column names in left and right respectively. Pass a value of None instead of a string to indicate that the column name from left or right should be left as-is, with no suffix. At least one of the values must not be None. Changed in version 0.25.0. how{‘left’, ‘right’, ‘outer’, ‘inner’}, default ‘outer’ left: use only keys from left frame (SQL: left outer join) right: use only keys from right frame (SQL: right outer join) outer: use union of keys from both frames (SQL: full outer join) inner: use intersection of keys from both frames (SQL: inner join). Returns DataFrameThe merged DataFrame output type will be the same as ‘left’, if it is a subclass of DataFrame. See also mergeMerge with a database-style join. 
merge_asofMerge on nearest keys. Examples >>> df1 = pd.DataFrame( ... { ... "key": ["a", "c", "e", "a", "c", "e"], ... "lvalue": [1, 2, 3, 1, 2, 3], ... "group": ["a", "a", "a", "b", "b", "b"] ... } ... ) >>> df1 key lvalue group 0 a 1 a 1 c 2 a 2 e 3 a 3 a 1 b 4 c 2 b 5 e 3 b >>> df2 = pd.DataFrame({"key": ["b", "c", "d"], "rvalue": [1, 2, 3]}) >>> df2 key rvalue 0 b 1 1 c 2 2 d 3 >>> pd.merge_ordered(df1, df2, fill_method="ffill", left_by="group") key lvalue group rvalue 0 a 1 a NaN 1 b 1 a 1.0 2 c 2 a 2.0 3 d 2 a 3.0 4 e 3 a 3.0 5 a 1 b NaN 6 b 1 b 1.0 7 c 2 b 2.0 8 d 2 b 3.0 9 e 3 b 3.0
reference/api/pandas.merge_ordered.html
pandas.UInt16Dtype
`pandas.UInt16Dtype` An ExtensionDtype for uint16 integer data.
class pandas.UInt16Dtype[source]# An ExtensionDtype for uint16 integer data. Changed in version 1.0.0: Now uses pandas.NA as its missing value, rather than numpy.nan. Attributes None Methods None
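The entry above has no example; a minimal sketch showing the nullable dtype with hypothetical data:

```python
import pandas as pd

# Nullable unsigned 16-bit integers; the missing value is pd.NA, not np.nan.
s = pd.Series([1, 2, None], dtype="UInt16")
print(s.dtype)        # UInt16
print(s[2] is pd.NA)  # True
```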
reference/api/pandas.UInt16Dtype.html
pandas.tseries.offsets.LastWeekOfMonth.nanos
pandas.tseries.offsets.LastWeekOfMonth.nanos
LastWeekOfMonth.nanos#
reference/api/pandas.tseries.offsets.LastWeekOfMonth.nanos.html
pandas.tseries.offsets.DateOffset.rollforward
`pandas.tseries.offsets.DateOffset.rollforward` Roll provided date forward to next offset only if not on offset.
DateOffset.rollforward()# Roll provided date forward to next offset only if not on offset. Returns TimeStampRolled timestamp if not on offset, otherwise unchanged timestamp.
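The entry above has no example; a minimal sketch using the MonthBegin subclass as a concrete anchored offset (an assumption for illustration; the method is shared by all offsets):

```python
import pandas as pd

offset = pd.offsets.MonthBegin()

# 2022-01-15 is not a month begin, so it is rolled forward to 2022-02-01.
print(offset.rollforward(pd.Timestamp("2022-01-15")))  # 2022-02-01 00:00:00

# 2022-02-01 is already on the offset, so it is returned unchanged.
print(offset.rollforward(pd.Timestamp("2022-02-01")))  # 2022-02-01 00:00:00
```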
reference/api/pandas.tseries.offsets.DateOffset.rollforward.html
pandas.DataFrame.to_latex
`pandas.DataFrame.to_latex` Render object to a LaTeX tabular, longtable, or nested table. ``` >>> df = pd.DataFrame(dict(name=['Raphael', 'Donatello'], ... mask=['red', 'purple'], ... weapon=['sai', 'bo staff'])) >>> print(df.to_latex(index=False)) \begin{tabular}{lll} \toprule name & mask & weapon \\ \midrule Raphael & red & sai \\ Donatello & purple & bo staff \\ \bottomrule \end{tabular} ```
DataFrame.to_latex(buf=None, columns=None, col_space=None, header=True, index=True, na_rep='NaN', formatters=None, float_format=None, sparsify=None, index_names=True, bold_rows=False, column_format=None, longtable=None, escape=None, encoding=None, decimal='.', multicolumn=None, multicolumn_format=None, multirow=None, caption=None, label=None, position=None)[source]# Render object to a LaTeX tabular, longtable, or nested table. Requires \usepackage{booktabs}. The output can be copy/pasted into a main LaTeX document or read from an external file with \input{table.tex}. Changed in version 1.0.0: Added caption and label arguments. Changed in version 1.2.0: Added position argument, changed meaning of caption argument. Parameters bufstr, Path or StringIO-like, optional, default NoneBuffer to write to. If None, the output is returned as a string. columnslist of label, optionalThe subset of columns to write. Writes all columns by default. col_spaceint, optionalThe minimum width of each column. headerbool or list of str, default TrueWrite out the column names. If a list of strings is given, it is assumed to be aliases for the column names. indexbool, default TrueWrite row names (index). na_repstr, default ‘NaN’Missing data representation. formatterslist of functions or dict of {str: function}, optionalFormatter functions to apply to columns’ elements by position or name. The result of each function must be a unicode string. List must be of length equal to the number of columns. float_formatone-parameter function or str, optional, default NoneFormatter for floating point numbers. For example float_format="%.2f" and float_format="{:0.2f}".format will both result in 0.1234 being formatted as 0.12. sparsifybool, optionalSet to False for a DataFrame with a hierarchical index to print every multiindex key at each row. By default, the value will be read from the config module. index_namesbool, default TruePrints the names of the indexes. 
bold_rowsbool, default FalseMake the row labels bold in the output. column_formatstr, optionalThe columns format as specified in LaTeX table format e.g. ‘rcl’ for 3 columns. By default, ‘l’ will be used for all columns except columns of numbers, which default to ‘r’. longtablebool, optionalBy default, the value will be read from the pandas config module. Use a longtable environment instead of tabular. Requires adding a \usepackage{longtable} to your LaTeX preamble. escapebool, optionalBy default, the value will be read from the pandas config module. When set to False prevents from escaping latex special characters in column names. encodingstr, optionalA string representing the encoding to use in the output file, defaults to ‘utf-8’. decimalstr, default ‘.’Character recognized as decimal separator, e.g. ‘,’ in Europe. multicolumnbool, default TrueUse multicolumn to enhance MultiIndex columns. The default will be read from the config module. multicolumn_formatstr, default ‘l’The alignment for multicolumns, similar to column_format The default will be read from the config module. multirowbool, default FalseUse multirow to enhance MultiIndex rows. Requires adding a \usepackage{multirow} to your LaTeX preamble. Will print centered labels (instead of top-aligned) across the contained rows, separating groups via clines. The default will be read from the pandas config module. captionstr or tuple, optionalTuple (full_caption, short_caption), which results in \caption[short_caption]{full_caption}; if a single string is passed, no short caption will be set. New in version 1.0.0. Changed in version 1.2.0: Optionally allow caption to be a tuple (full_caption, short_caption). labelstr, optionalThe LaTeX label to be placed inside \label{} in the output. This is used with \ref{} in the main .tex file. New in version 1.0.0. positionstr, optionalThe LaTeX positional argument for tables, to be placed after \begin{} in the output. New in version 1.2.0.
Returns str or NoneIf buf is None, returns the result as a string. Otherwise returns None. See also io.formats.style.Styler.to_latexRender a DataFrame to LaTeX with conditional formatting. DataFrame.to_stringRender a DataFrame to a console-friendly tabular output. DataFrame.to_htmlRender a DataFrame as an HTML table. Examples >>> df = pd.DataFrame(dict(name=['Raphael', 'Donatello'], ... mask=['red', 'purple'], ... weapon=['sai', 'bo staff'])) >>> print(df.to_latex(index=False)) \begin{tabular}{lll} \toprule name & mask & weapon \\ \midrule Raphael & red & sai \\ Donatello & purple & bo staff \\ \bottomrule \end{tabular}
reference/api/pandas.DataFrame.to_latex.html
pandas.tseries.offsets.Minute.normalize
pandas.tseries.offsets.Minute.normalize
Minute.normalize#
reference/api/pandas.tseries.offsets.Minute.normalize.html
pandas.tseries.offsets.CustomBusinessDay.is_anchored
`pandas.tseries.offsets.CustomBusinessDay.is_anchored` Return boolean whether the frequency is a unit frequency (n=1). Examples ``` >>> pd.DateOffset().is_anchored() True >>> pd.DateOffset(2).is_anchored() False ```
CustomBusinessDay.is_anchored()# Return boolean whether the frequency is a unit frequency (n=1). Examples >>> pd.DateOffset().is_anchored() True >>> pd.DateOffset(2).is_anchored() False
reference/api/pandas.tseries.offsets.CustomBusinessDay.is_anchored.html
pandas.Series.subtract
`pandas.Series.subtract` Return Subtraction of series and other, element-wise (binary operator sub). Equivalent to series - other, but with support to substitute a fill_value for missing data in either one of the inputs. ``` >>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd']) >>> a a 1.0 b 1.0 c 1.0 d NaN dtype: float64 >>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e']) >>> b a 1.0 b NaN d 1.0 e NaN dtype: float64 >>> a.subtract(b, fill_value=0) a 0.0 b 1.0 c 1.0 d -1.0 e NaN dtype: float64 ```
Series.subtract(other, level=None, fill_value=None, axis=0)[source]# Return Subtraction of series and other, element-wise (binary operator sub). Equivalent to series - other, but with support to substitute a fill_value for missing data in either one of the inputs. Parameters otherSeries or scalar value levelint or nameBroadcast across a level, matching Index values on the passed MultiIndex level. fill_valueNone or float value, default None (NaN)Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result of filling (at that location) will be missing. axis{0 or ‘index’}Unused. Parameter needed for compatibility with DataFrame. Returns SeriesThe result of the operation. See also Series.rsubReverse of the Subtraction operator, see Python documentation for more details. Examples >>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd']) >>> a a 1.0 b 1.0 c 1.0 d NaN dtype: float64 >>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e']) >>> b a 1.0 b NaN d 1.0 e NaN dtype: float64 >>> a.subtract(b, fill_value=0) a 0.0 b 1.0 c 1.0 d -1.0 e NaN dtype: float64
reference/api/pandas.Series.subtract.html
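The level parameter, which the example above does not exercise, matches index values on one level of a MultiIndex; a sketch assuming the documented broadcasting behavior (the index names are invented):

```python
import pandas as pd

# Hedged sketch of `level`: subtract a per-group baseline, matching on the
# outer ("grp") level of a MultiIndex. Names here are hypothetical.
midx = pd.MultiIndex.from_tuples(
    [("a", 1), ("a", 2), ("b", 1)], names=["grp", "obs"])
s = pd.Series([10.0, 20.0, 30.0], index=midx)
baseline = pd.Series([1.0, 2.0], index=pd.Index(["a", "b"], name="grp"))

# Each row has the baseline of its own group subtracted.
out = s.subtract(baseline, level="grp")
```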
pandas.tseries.offsets.BYearBegin.is_anchored
`pandas.tseries.offsets.BYearBegin.is_anchored` Return boolean whether the frequency is a unit frequency (n=1). ``` >>> pd.DateOffset().is_anchored() True >>> pd.DateOffset(2).is_anchored() False ```
BYearBegin.is_anchored()# Return boolean whether the frequency is a unit frequency (n=1). Examples >>> pd.DateOffset().is_anchored() True >>> pd.DateOffset(2).is_anchored() False
reference/api/pandas.tseries.offsets.BYearBegin.is_anchored.html
pandas.Series.size
`pandas.Series.size` Return the number of elements in the underlying data.
property Series.size[source]# Return the number of elements in the underlying data.
reference/api/pandas.Series.size.html
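Note that size counts every element, including missing values, which distinguishes it from count(); a quick illustration:

```python
import numpy as np
import pandas as pd

# size includes NaN elements; count() excludes them.
s = pd.Series([1.0, 2.0, np.nan])
assert s.size == 3      # NaN is still an element
assert s.count() == 2   # count() skips NaN
```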
pandas.Series.pct_change
`pandas.Series.pct_change` Percentage change between the current and a prior element. ``` >>> s = pd.Series([90, 91, 85]) >>> s 0 90 1 91 2 85 dtype: int64 ```
Series.pct_change(periods=1, fill_method='pad', limit=None, freq=None, **kwargs)[source]# Percentage change between the current and a prior element. Computes the percentage change from the immediately previous row by default. This is useful in comparing the percentage of change in a time series of elements. Parameters periodsint, default 1Periods to shift for forming percent change. fill_methodstr, default ‘pad’How to handle NAs before computing percent changes. limitint, default NoneThe number of consecutive NAs to fill before stopping. freqDateOffset, timedelta, or str, optionalIncrement to use from time series API (e.g. ‘M’ or BDay()). **kwargsAdditional keyword arguments are passed into DataFrame.shift or Series.shift. Returns chgSeries or DataFrameThe same type as the calling object. See also Series.diffCompute the difference of two elements in a Series. DataFrame.diffCompute the difference of two elements in a DataFrame. Series.shiftShift the index by some number of periods. DataFrame.shiftShift the index by some number of periods. Examples Series >>> s = pd.Series([90, 91, 85]) >>> s 0 90 1 91 2 85 dtype: int64 >>> s.pct_change() 0 NaN 1 0.011111 2 -0.065934 dtype: float64 >>> s.pct_change(periods=2) 0 NaN 1 NaN 2 -0.055556 dtype: float64 See the percentage change in a Series where filling NAs with last valid observation forward to next valid. >>> s = pd.Series([90, 91, None, 85]) >>> s 0 90.0 1 91.0 2 NaN 3 85.0 dtype: float64 >>> s.pct_change(fill_method='ffill') 0 NaN 1 0.011111 2 0.000000 3 -0.065934 dtype: float64 DataFrame Percentage change in French franc, Deutsche Mark, and Italian lira from 1980-01-01 to 1980-03-01. >>> df = pd.DataFrame({ ... 'FR': [4.0405, 4.0963, 4.3149], ... 'GR': [1.7246, 1.7482, 1.8519], ... 'IT': [804.74, 810.01, 860.13]}, ... 
index=['1980-01-01', '1980-02-01', '1980-03-01']) >>> df FR GR IT 1980-01-01 4.0405 1.7246 804.74 1980-02-01 4.0963 1.7482 810.01 1980-03-01 4.3149 1.8519 860.13 >>> df.pct_change() FR GR IT 1980-01-01 NaN NaN NaN 1980-02-01 0.013810 0.013684 0.006549 1980-03-01 0.053365 0.059318 0.061876 Percentage of change in GOOG and APPL stock volume. Shows computing the percentage change between columns. >>> df = pd.DataFrame({ ... '2016': [1769950, 30586265], ... '2015': [1500923, 40912316], ... '2014': [1371819, 41403351]}, ... index=['GOOG', 'APPL']) >>> df 2016 2015 2014 GOOG 1769950 1500923 1371819 APPL 30586265 40912316 41403351 >>> df.pct_change(axis='columns', periods=-1) 2016 2015 2014 GOOG 0.179241 0.094112 NaN APPL -0.252395 -0.011860 NaN
reference/api/pandas.Series.pct_change.html
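The computation above can be reproduced by hand: for NA-free data, pct_change is the series divided by its shifted self, minus one (so fill_method never comes into play):

```python
import pandas as pd

# Sanity check: pct_change(periods=1) equals s / s.shift(1) - 1
# when the data contains no missing values.
s = pd.Series([100.0, 110.0, 99.0])
manual = s / s.shift(1) - 1
assert s.pct_change().equals(manual)
```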
pandas.core.groupby.DataFrameGroupBy.describe
`pandas.core.groupby.DataFrameGroupBy.describe` Generate descriptive statistics. ``` >>> s = pd.Series([1, 2, 3]) >>> s.describe() count 3.0 mean 2.0 std 1.0 min 1.0 25% 1.5 50% 2.0 75% 2.5 max 3.0 dtype: float64 ```
DataFrameGroupBy.describe(**kwargs)[source]# Generate descriptive statistics. Descriptive statistics include those that summarize the central tendency, dispersion and shape of a dataset’s distribution, excluding NaN values. Analyzes both numeric and object series, as well as DataFrame column sets of mixed data types. The output will vary depending on what is provided. Refer to the notes below for more detail. Parameters percentileslist-like of numbers, optionalThe percentiles to include in the output. All should fall between 0 and 1. The default is [.25, .5, .75], which returns the 25th, 50th, and 75th percentiles. include‘all’, list-like of dtypes or None (default), optionalA white list of data types to include in the result. Ignored for Series. Here are the options: ‘all’ : All columns of the input will be included in the output. A list-like of dtypes : Limits the results to the provided data types. To limit the result to numeric types submit numpy.number. To limit it instead to object columns submit the numpy.object data type. Strings can also be used in the style of select_dtypes (e.g. df.describe(include=['O'])). To select pandas categorical columns, use 'category' None (default) : The result will include all numeric columns. excludelist-like of dtypes or None (default), optional,A black list of data types to omit from the result. Ignored for Series. Here are the options: A list-like of dtypes : Excludes the provided data types from the result. To exclude numeric types submit numpy.number. To exclude object columns submit the data type numpy.object. Strings can also be used in the style of select_dtypes (e.g. df.describe(exclude=['O'])). To exclude pandas categorical columns, use 'category' None (default) : The result will exclude nothing. datetime_is_numericbool, default FalseWhether to treat datetime dtypes as numeric. This affects statistics calculated for the column. For DataFrame input, this also controls whether datetime columns are included by default. 
New in version 1.1.0. Returns Series or DataFrameSummary statistics of the Series or Dataframe provided. See also DataFrame.countCount number of non-NA/null observations. DataFrame.maxMaximum of the values in the object. DataFrame.minMinimum of the values in the object. DataFrame.meanMean of the values. DataFrame.stdStandard deviation of the observations. DataFrame.select_dtypesSubset of a DataFrame including/excluding columns based on their dtype. Notes For numeric data, the result’s index will include count, mean, std, min, max as well as lower, 50 and upper percentiles. By default the lower percentile is 25 and the upper percentile is 75. The 50 percentile is the same as the median. For object data (e.g. strings or timestamps), the result’s index will include count, unique, top, and freq. The top is the most common value. The freq is the most common value’s frequency. Timestamps also include the first and last items. If multiple object values have the highest count, then the count and top results will be arbitrarily chosen from among those with the highest count. For mixed data types provided via a DataFrame, the default is to return only an analysis of numeric columns. If the dataframe consists only of object and categorical data without any numeric columns, the default is to return an analysis of both the object and categorical columns. If include='all' is provided as an option, the result will include a union of attributes of each type. The include and exclude parameters can be used to limit which columns in a DataFrame are analyzed for the output. The parameters are ignored when analyzing a Series. Examples Describing a numeric Series. >>> s = pd.Series([1, 2, 3]) >>> s.describe() count 3.0 mean 2.0 std 1.0 min 1.0 25% 1.5 50% 2.0 75% 2.5 max 3.0 dtype: float64 Describing a categorical Series. >>> s = pd.Series(['a', 'a', 'b', 'c']) >>> s.describe() count 4 unique 3 top a freq 2 dtype: object Describing a timestamp Series. >>> s = pd.Series([ ... 
np.datetime64("2000-01-01"), ... np.datetime64("2010-01-01"), ... np.datetime64("2010-01-01") ... ]) >>> s.describe(datetime_is_numeric=True) count 3 mean 2006-09-01 08:00:00 min 2000-01-01 00:00:00 25% 2004-12-31 12:00:00 50% 2010-01-01 00:00:00 75% 2010-01-01 00:00:00 max 2010-01-01 00:00:00 dtype: object Describing a DataFrame. By default only numeric fields are returned. >>> df = pd.DataFrame({'categorical': pd.Categorical(['d','e','f']), ... 'numeric': [1, 2, 3], ... 'object': ['a', 'b', 'c'] ... }) >>> df.describe() numeric count 3.0 mean 2.0 std 1.0 min 1.0 25% 1.5 50% 2.0 75% 2.5 max 3.0 Describing all columns of a DataFrame regardless of data type. >>> df.describe(include='all') categorical numeric object count 3 3.0 3 unique 3 NaN 3 top f NaN a freq 1 NaN 1 mean NaN 2.0 NaN std NaN 1.0 NaN min NaN 1.0 NaN 25% NaN 1.5 NaN 50% NaN 2.0 NaN 75% NaN 2.5 NaN max NaN 3.0 NaN Describing a column from a DataFrame by accessing it as an attribute. >>> df.numeric.describe() count 3.0 mean 2.0 std 1.0 min 1.0 25% 1.5 50% 2.0 75% 2.5 max 3.0 Name: numeric, dtype: float64 Including only numeric columns in a DataFrame description. >>> df.describe(include=[np.number]) numeric count 3.0 mean 2.0 std 1.0 min 1.0 25% 1.5 50% 2.0 75% 2.5 max 3.0 Including only string columns in a DataFrame description. >>> df.describe(include=[object]) object count 3 unique 3 top a freq 1 Including only categorical columns from a DataFrame description. >>> df.describe(include=['category']) categorical count 3 unique 3 top d freq 1 Excluding numeric columns from a DataFrame description. >>> df.describe(exclude=[np.number]) categorical object count 3 3 unique 3 3 top f a freq 1 1 Excluding object columns from a DataFrame description. >>> df.describe(exclude=[object]) categorical numeric count 3 3.0 unique 3 NaN top f NaN freq 1 NaN mean NaN 2.0 std NaN 1.0 min NaN 1.0 25% NaN 1.5 50% NaN 2.0 75% NaN 2.5 max NaN 3.0
reference/api/pandas.core.groupby.DataFrameGroupBy.describe.html
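In the groupby context specifically, describe returns one row per group with MultiIndex columns of (column name, statistic); a small sketch:

```python
import pandas as pd

# Per-group summary: one row per group, columns keyed by
# (original column, statistic).
df = pd.DataFrame({"g": ["a", "a", "b"], "v": [1.0, 3.0, 5.0]})
out = df.groupby("g").describe()

assert out.loc["a", ("v", "mean")] == 2.0
assert out.loc["b", ("v", "count")] == 1.0
```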
pandas.tseries.offsets.QuarterBegin.is_quarter_start
`pandas.tseries.offsets.QuarterBegin.is_quarter_start` Return boolean whether a timestamp occurs on the quarter start. ``` >>> ts = pd.Timestamp(2022, 1, 1) >>> freq = pd.offsets.Hour(5) >>> freq.is_quarter_start(ts) True ```
QuarterBegin.is_quarter_start()# Return boolean whether a timestamp occurs on the quarter start. Examples >>> ts = pd.Timestamp(2022, 1, 1) >>> freq = pd.offsets.Hour(5) >>> freq.is_quarter_start(ts) True
reference/api/pandas.tseries.offsets.QuarterBegin.is_quarter_start.html
pandas.tseries.offsets.BQuarterBegin.freqstr
`pandas.tseries.offsets.BQuarterBegin.freqstr` Return a string representing the frequency. ``` >>> pd.DateOffset(5).freqstr '<5 * DateOffsets>' ```
BQuarterBegin.freqstr# Return a string representing the frequency. Examples >>> pd.DateOffset(5).freqstr '<5 * DateOffsets>' >>> pd.offsets.BusinessHour(2).freqstr '2BH' >>> pd.offsets.Nano().freqstr 'N' >>> pd.offsets.Nano(-3).freqstr '-3N'
reference/api/pandas.tseries.offsets.BQuarterBegin.freqstr.html
API reference
API reference
This page gives an overview of all public pandas objects, functions and methods. All classes and functions exposed in pandas.* namespace are public. Some subpackages are public which include pandas.errors, pandas.plotting, and pandas.testing. Public functions in pandas.io and pandas.tseries submodules are mentioned in the documentation. pandas.api.types subpackage holds some public functions related to data types in pandas. Warning The pandas.core, pandas.compat, and pandas.util top-level modules are PRIVATE. Stable functionality in such modules is not guaranteed. Input/output Pickling Flat file Clipboard Excel JSON HTML XML Latex HDFStore: PyTables (HDF5) Feather Parquet ORC SAS SPSS SQL Google BigQuery STATA General functions Data manipulations Top-level missing data Top-level dealing with numeric data Top-level dealing with datetimelike data Top-level dealing with Interval data Top-level evaluation Hashing Importing from other DataFrame libraries Series Constructor Attributes Conversion Indexing, iteration Binary operator functions Function application, GroupBy & window Computations / descriptive stats Reindexing / selection / label manipulation Missing data handling Reshaping, sorting Combining / comparing / joining / merging Time Series-related Accessors Plotting Serialization / IO / conversion DataFrame Constructor Attributes and underlying data Conversion Indexing, iteration Binary operator functions Function application, GroupBy & window Computations / descriptive stats Reindexing / selection / label manipulation Missing data handling Reshaping, sorting, transposing Combining / comparing / joining / merging Time Series-related Flags Metadata Plotting Sparse accessor Serialization / IO / conversion pandas arrays, scalars, and data types Objects Utilities Index objects Index Numeric Index CategoricalIndex IntervalIndex MultiIndex DatetimeIndex TimedeltaIndex PeriodIndex Date offsets DateOffset BusinessDay BusinessHour CustomBusinessDay CustomBusinessHour 
MonthEnd MonthBegin BusinessMonthEnd BusinessMonthBegin CustomBusinessMonthEnd CustomBusinessMonthBegin SemiMonthEnd SemiMonthBegin Week WeekOfMonth LastWeekOfMonth BQuarterEnd BQuarterBegin QuarterEnd QuarterBegin BYearEnd BYearBegin YearEnd YearBegin FY5253 FY5253Quarter Easter Tick Day Hour Minute Second Milli Micro Nano Frequencies pandas.tseries.frequencies.to_offset Window Rolling window functions Weighted window functions Expanding window functions Exponentially-weighted window functions Window indexer GroupBy Indexing, iteration Function application Computations / descriptive stats Resampling Indexing, iteration Function application Upsampling Computations / descriptive stats Style Styler constructor Styler properties Style application Builtin styles Style export and import Plotting pandas.plotting.andrews_curves pandas.plotting.autocorrelation_plot pandas.plotting.bootstrap_plot pandas.plotting.boxplot pandas.plotting.deregister_matplotlib_converters pandas.plotting.lag_plot pandas.plotting.parallel_coordinates pandas.plotting.plot_params pandas.plotting.radviz pandas.plotting.register_matplotlib_converters pandas.plotting.scatter_matrix pandas.plotting.table Options and settings Working with options Extensions pandas.api.extensions.register_extension_dtype pandas.api.extensions.register_dataframe_accessor pandas.api.extensions.register_series_accessor pandas.api.extensions.register_index_accessor pandas.api.extensions.ExtensionDtype pandas.api.extensions.ExtensionArray pandas.arrays.PandasArray pandas.api.indexers.check_array_indexer Testing Assertion functions Exceptions and warnings Bug report function Test suite runner
reference/index.html
pandas.tseries.offsets.MonthEnd.is_year_start
`pandas.tseries.offsets.MonthEnd.is_year_start` Return boolean whether a timestamp occurs on the year start. Examples ``` >>> ts = pd.Timestamp(2022, 1, 1) >>> freq = pd.offsets.Hour(5) >>> freq.is_year_start(ts) True ```
MonthEnd.is_year_start()# Return boolean whether a timestamp occurs on the year start. Examples >>> ts = pd.Timestamp(2022, 1, 1) >>> freq = pd.offsets.Hour(5) >>> freq.is_year_start(ts) True
reference/api/pandas.tseries.offsets.MonthEnd.is_year_start.html
pandas.tseries.offsets.BYearEnd.n
pandas.tseries.offsets.BYearEnd.n
BYearEnd.n#
reference/api/pandas.tseries.offsets.BYearEnd.n.html
pandas.tseries.offsets.BusinessMonthEnd.is_year_end
`pandas.tseries.offsets.BusinessMonthEnd.is_year_end` Return boolean whether a timestamp occurs on the year end. ``` >>> ts = pd.Timestamp(2022, 1, 1) >>> freq = pd.offsets.Hour(5) >>> freq.is_year_end(ts) False ```
BusinessMonthEnd.is_year_end()# Return boolean whether a timestamp occurs on the year end. Examples >>> ts = pd.Timestamp(2022, 1, 1) >>> freq = pd.offsets.Hour(5) >>> freq.is_year_end(ts) False
reference/api/pandas.tseries.offsets.BusinessMonthEnd.is_year_end.html
pandas.Series.is_monotonic_increasing
`pandas.Series.is_monotonic_increasing` Return boolean if values in the object are monotonically increasing.
property Series.is_monotonic_increasing[source]# Return boolean if values in the object are monotonically increasing. Returns bool
reference/api/pandas.Series.is_monotonic_increasing.html
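The check is non-strict: repeated values still count as increasing. For example:

```python
import pandas as pd

# Non-strict monotonicity: equal neighbors are allowed.
assert pd.Series([1, 2, 2, 3]).is_monotonic_increasing
# Any decrease anywhere makes the property False.
assert not pd.Series([1, 3, 2]).is_monotonic_increasing
```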
pandas.Series.add
`pandas.Series.add` Return Addition of series and other, element-wise (binary operator add). Equivalent to series + other, but with support to substitute a fill_value for missing data in either one of the inputs. ``` >>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd']) >>> a a 1.0 b 1.0 c 1.0 d NaN dtype: float64 >>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e']) >>> b a 1.0 b NaN d 1.0 e NaN dtype: float64 >>> a.add(b, fill_value=0) a 2.0 b 1.0 c 1.0 d 1.0 e NaN dtype: float64 ```
Series.add(other, level=None, fill_value=None, axis=0)[source]# Return Addition of series and other, element-wise (binary operator add). Equivalent to series + other, but with support to substitute a fill_value for missing data in either one of the inputs. Parameters otherSeries or scalar value levelint or nameBroadcast across a level, matching Index values on the passed MultiIndex level. fill_valueNone or float value, default None (NaN)Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result of filling (at that location) will be missing. axis{0 or ‘index’}Unused. Parameter needed for compatibility with DataFrame. Returns SeriesThe result of the operation. See also Series.raddReverse of the Addition operator, see Python documentation for more details. Examples >>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd']) >>> a a 1.0 b 1.0 c 1.0 d NaN dtype: float64 >>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e']) >>> b a 1.0 b NaN d 1.0 e NaN dtype: float64 >>> a.add(b, fill_value=0) a 2.0 b 1.0 c 1.0 d 1.0 e NaN dtype: float64
reference/api/pandas.Series.add.html
pandas.api.types.is_dict_like
`pandas.api.types.is_dict_like` Check if the object is dict-like. ``` >>> is_dict_like({1: 2}) True >>> is_dict_like([1, 2, 3]) False >>> is_dict_like(dict) False >>> is_dict_like(dict()) True ```
pandas.api.types.is_dict_like(obj)[source]# Check if the object is dict-like. Parameters objThe object to check Returns is_dict_likeboolWhether obj has dict-like properties. Examples >>> is_dict_like({1: 2}) True >>> is_dict_like([1, 2, 3]) False >>> is_dict_like(dict) False >>> is_dict_like(dict()) True
reference/api/pandas.api.types.is_dict_like.html
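"Dict-like" here is duck-typed rather than inheritance-based: an instance exposing keys(), __getitem__, and __contains__ qualifies even if it does not subclass dict, while a class object never does (which is why `is_dict_like(dict)` above is False):

```python
import pandas as pd

# A minimal duck-typed mapping: no dict inheritance, just the protocol.
class DuckDict:
    def keys(self):
        return []
    def __getitem__(self, key):
        raise KeyError(key)
    def __contains__(self, key):
        return False

assert pd.api.types.is_dict_like(DuckDict())   # instance qualifies
assert not pd.api.types.is_dict_like(DuckDict) # the class itself does not
```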
pandas.core.groupby.DataFrameGroupBy.rank
`pandas.core.groupby.DataFrameGroupBy.rank` Provide the rank of values within each group. average: average rank of group. ``` >>> df = pd.DataFrame( ... { ... "group": ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"], ... "value": [2, 4, 2, 3, 5, 1, 2, 4, 1, 5], ... } ... ) >>> df group value 0 a 2 1 a 4 2 a 2 3 a 3 4 a 5 5 b 1 6 b 2 7 b 4 8 b 1 9 b 5 >>> for method in ['average', 'min', 'max', 'dense', 'first']: ... df[f'{method}_rank'] = df.groupby('group')['value'].rank(method) >>> df group value average_rank min_rank max_rank dense_rank first_rank 0 a 2 1.5 1.0 2.0 1.0 1.0 1 a 4 4.0 4.0 4.0 3.0 4.0 2 a 2 1.5 1.0 2.0 1.0 2.0 3 a 3 3.0 3.0 3.0 2.0 3.0 4 a 5 5.0 5.0 5.0 4.0 5.0 5 b 1 1.5 1.0 2.0 1.0 1.0 6 b 2 3.0 3.0 3.0 2.0 3.0 7 b 4 4.0 4.0 4.0 3.0 4.0 8 b 1 1.5 1.0 2.0 1.0 2.0 9 b 5 5.0 5.0 5.0 4.0 5.0 ```
DataFrameGroupBy.rank(method='average', ascending=True, na_option='keep', pct=False, axis=0)[source]# Provide the rank of values within each group. Parameters method{‘average’, ‘min’, ‘max’, ‘first’, ‘dense’}, default ‘average’ average: average rank of group. min: lowest rank in group. max: highest rank in group. first: ranks assigned in order they appear in the array. dense: like ‘min’, but rank always increases by 1 between groups. ascendingbool, default TrueFalse for ranks by high (1) to low (N). na_option{‘keep’, ‘top’, ‘bottom’}, default ‘keep’ keep: leave NA values where they are. top: smallest rank if ascending. bottom: smallest rank if descending. pctbool, default FalseCompute percentage rank of data within each group. axisint, default 0The axis of the object over which to compute the rank. Returns DataFrame with ranking of values within each group See also Series.groupbyApply a function groupby to a Series. DataFrame.groupbyApply a function groupby to each row or column of a DataFrame. Examples >>> df = pd.DataFrame( ... { ... "group": ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"], ... "value": [2, 4, 2, 3, 5, 1, 2, 4, 1, 5], ... } ... ) >>> df group value 0 a 2 1 a 4 2 a 2 3 a 3 4 a 5 5 b 1 6 b 2 7 b 4 8 b 1 9 b 5 >>> for method in ['average', 'min', 'max', 'dense', 'first']: ... df[f'{method}_rank'] = df.groupby('group')['value'].rank(method) >>> df group value average_rank min_rank max_rank dense_rank first_rank 0 a 2 1.5 1.0 2.0 1.0 1.0 1 a 4 4.0 4.0 4.0 3.0 4.0 2 a 2 1.5 1.0 2.0 1.0 2.0 3 a 3 3.0 3.0 3.0 2.0 3.0 4 a 5 5.0 5.0 5.0 4.0 5.0 5 b 1 1.5 1.0 2.0 1.0 1.0 6 b 2 3.0 3.0 3.0 2.0 3.0 7 b 4 4.0 4.0 4.0 3.0 4.0 8 b 1 1.5 1.0 2.0 1.0 2.0 9 b 5 5.0 5.0 5.0 4.0 5.0
reference/api/pandas.core.groupby.DataFrameGroupBy.rank.html
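The pct parameter, not shown in the example above, scales each rank by its group's size, giving a value's relative position within its own group:

```python
import pandas as pd

# pct=True: rank divided by the size of the value's own group.
df = pd.DataFrame({"g": ["a", "a", "a", "b", "b"],
                   "v": [1, 2, 2, 5, 7]})
pcts = df.groupby("g")["v"].rank(pct=True)

# group "b" has two values: ranks 1 and 2 out of 2
assert pcts.iloc[3] == 0.5 and pcts.iloc[4] == 1.0
# ties in group "a" share the average rank 2.5 out of 3
assert abs(pcts.iloc[1] - 2.5 / 3) < 1e-12
```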
pandas.Series.to_latex
`pandas.Series.to_latex` Render object to a LaTeX tabular, longtable, or nested table. ``` >>> df = pd.DataFrame(dict(name=['Raphael', 'Donatello'], ... mask=['red', 'purple'], ... weapon=['sai', 'bo staff'])) >>> print(df.to_latex(index=False)) \begin{tabular}{lll} \toprule name & mask & weapon \\ \midrule Raphael & red & sai \\ Donatello & purple & bo staff \\ \bottomrule \end{tabular} ```
Series.to_latex(buf=None, columns=None, col_space=None, header=True, index=True, na_rep='NaN', formatters=None, float_format=None, sparsify=None, index_names=True, bold_rows=False, column_format=None, longtable=None, escape=None, encoding=None, decimal='.', multicolumn=None, multicolumn_format=None, multirow=None, caption=None, label=None, position=None)[source]# Render object to a LaTeX tabular, longtable, or nested table. Requires \usepackage{booktabs}. The output can be copy/pasted into a main LaTeX document or read from an external file with \input{table.tex}. Changed in version 1.0.0: Added caption and label arguments. Changed in version 1.2.0: Added position argument, changed meaning of caption argument. Parameters bufstr, Path or StringIO-like, optional, default NoneBuffer to write to. If None, the output is returned as a string. columnslist of label, optionalThe subset of columns to write. Writes all columns by default. col_spaceint, optionalThe minimum width of each column. headerbool or list of str, default TrueWrite out the column names. If a list of strings is given, it is assumed to be aliases for the column names. indexbool, default TrueWrite row names (index). na_repstr, default ‘NaN’Missing data representation. formatterslist of functions or dict of {str: function}, optionalFormatter functions to apply to columns’ elements by position or name. The result of each function must be a unicode string. List must be of length equal to the number of columns. float_formatone-parameter function or str, optional, default NoneFormatter for floating point numbers. For example float_format="%.2f" and float_format="{:0.2f}".format will both result in 0.1234 being formatted as 0.12. sparsifybool, optionalSet to False for a DataFrame with a hierarchical index to print every multiindex key at each row. By default, the value will be read from the config module. index_namesbool, default TruePrints the names of the indexes. 
bold_rowsbool, default FalseMake the row labels bold in the output. column_formatstr, optionalThe columns format as specified in LaTeX table format e.g. ‘rcl’ for 3 columns. By default, ‘l’ will be used for all columns except columns of numbers, which default to ‘r’. longtablebool, optionalBy default, the value will be read from the pandas config module. Use a longtable environment instead of tabular. Requires adding a \usepackage{longtable} to your LaTeX preamble. escapebool, optionalBy default, the value will be read from the pandas config module. When set to False, prevents escaping of LaTeX special characters in column names. encodingstr, optionalA string representing the encoding to use in the output file, defaults to ‘utf-8’. decimalstr, default ‘.’Character recognized as decimal separator, e.g. ‘,’ in Europe. multicolumnbool, default TrueUse multicolumn to enhance MultiIndex columns. The default will be read from the config module. multicolumn_formatstr, default ‘l’The alignment for multicolumns, similar to column_format. The default will be read from the config module. multirowbool, default FalseUse multirow to enhance MultiIndex rows. Requires adding a \usepackage{multirow} to your LaTeX preamble. Will print centered labels (instead of top-aligned) across the contained rows, separating groups via clines. The default will be read from the pandas config module. captionstr or tuple, optionalTuple (full_caption, short_caption), which results in \caption[short_caption]{full_caption}; if a single string is passed, no short caption will be set. New in version 1.0.0. Changed in version 1.2.0: Optionally allow caption to be a tuple (full_caption, short_caption). labelstr, optionalThe LaTeX label to be placed inside \label{} in the output. This is used with \ref{} in the main .tex file. New in version 1.0.0. positionstr, optionalThe LaTeX positional argument for tables, to be placed after \begin{} in the output. New in version 1.2.0. 
Returns str or NoneIf buf is None, returns the result as a string. Otherwise returns None. See also io.formats.style.Styler.to_latexRender a DataFrame to LaTeX with conditional formatting. DataFrame.to_stringRender a DataFrame to a console-friendly tabular output. DataFrame.to_htmlRender a DataFrame as an HTML table. Examples >>> df = pd.DataFrame(dict(name=['Raphael', 'Donatello'], ... mask=['red', 'purple'], ... weapon=['sai', 'bo staff'])) >>> print(df.to_latex(index=False)) \begin{tabular}{lll} \toprule name & mask & weapon \\ \midrule Raphael & red & sai \\ Donatello & purple & bo staff \\ \bottomrule \end{tabular}
reference/api/pandas.Series.to_latex.html
pandas.tseries.offsets.Second.rollback
`pandas.tseries.offsets.Second.rollback` Roll provided date backward to next offset only if not on offset.
Second.rollback()# Roll provided date backward to next offset only if not on offset. Returns TimeStampRolled timestamp if not on offset, otherwise unchanged timestamp.
reference/api/pandas.tseries.offsets.Second.rollback.html
pandas.tseries.offsets.YearBegin.nanos
pandas.tseries.offsets.YearBegin.nanos
YearBegin.nanos#
reference/api/pandas.tseries.offsets.YearBegin.nanos.html
pandas.api.extensions.register_index_accessor
`pandas.api.extensions.register_index_accessor` Register a custom accessor on Index objects. Name under which the accessor should be registered. A warning is issued if this name conflicts with a preexisting attribute. ``` >>> pd.Series(['a', 'b']).dt Traceback (most recent call last): ... AttributeError: Can only use .dt accessor with datetimelike values ```
pandas.api.extensions.register_index_accessor(name)[source]# Register a custom accessor on Index objects. Parameters namestrName under which the accessor should be registered. A warning is issued if this name conflicts with a preexisting attribute. Returns callableA class decorator. See also register_dataframe_accessorRegister a custom accessor on DataFrame objects. register_series_accessorRegister a custom accessor on Series objects. register_index_accessorRegister a custom accessor on Index objects. Notes When accessed, your accessor will be initialized with the pandas object the user is interacting with. So the signature must be def __init__(self, pandas_object): # noqa: E999 ... For consistency with pandas methods, you should raise an AttributeError if the data passed to your accessor has an incorrect dtype. >>> pd.Series(['a', 'b']).dt Traceback (most recent call last): ... AttributeError: Can only use .dt accessor with datetimelike values Examples In your library code: import pandas as pd @pd.api.extensions.register_dataframe_accessor("geo") class GeoAccessor: def __init__(self, pandas_obj): self._obj = pandas_obj @property def center(self): # return the geographic center point of this DataFrame lat = self._obj.latitude lon = self._obj.longitude return (float(lon.mean()), float(lat.mean())) def plot(self): # plot this array's data on a map, e.g., using Cartopy pass Back in an interactive IPython session: In [1]: ds = pd.DataFrame({"longitude": np.linspace(0, 10), ...: "latitude": np.linspace(0, 20)}) In [2]: ds.geo.center Out[2]: (5.0, 10.0) In [3]: ds.geo.plot() # plots data on a map
reference/api/pandas.api.extensions.register_index_accessor.html
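The library example above registers a DataFrame accessor; a parallel sketch for Index objects specifically (the "stats" name and span property are invented for illustration):

```python
import pandas as pd

# Hedged sketch: a custom accessor registered on Index. The accessor name
# "stats" and its span property are hypothetical.
@pd.api.extensions.register_index_accessor("stats")
class StatsAccessor:
    def __init__(self, index):
        # The accessor is initialized with the Index the user accessed.
        self._index = index

    @property
    def span(self):
        # Range covered by the index values.
        return self._index.max() - self._index.min()

idx = pd.Index([3, 7, 12])
assert idx.stats.span == 9
```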
pandas.Timestamp.month_name
`pandas.Timestamp.month_name` Return the month name of the Timestamp with specified locale. ``` >>> ts = pd.Timestamp('2020-03-14T15:32:52.192548651') >>> ts.month_name() 'March' ```
Timestamp.month_name()# Return the month name of the Timestamp with specified locale. Parameters localestr, default None (English locale)Locale determining the language in which to return the month name. Returns str Examples >>> ts = pd.Timestamp('2020-03-14T15:32:52.192548651') >>> ts.month_name() 'March' Analogous for pd.NaT: >>> pd.NaT.month_name() nan
reference/api/pandas.Timestamp.month_name.html
pandas.Series.last
`pandas.Series.last` Select final periods of time series data based on a date offset. ``` >>> i = pd.date_range('2018-04-09', periods=4, freq='2D') >>> ts = pd.DataFrame({'A': [1, 2, 3, 4]}, index=i) >>> ts A 2018-04-09 1 2018-04-11 2 2018-04-13 3 2018-04-15 4 ```
Series.last(offset)[source]# Select final periods of time series data based on a date offset. For a Series or DataFrame with a sorted DatetimeIndex, this function selects the last few rows based on a date offset. Parameters offsetstr, DateOffset, dateutil.relativedeltaThe offset length of the data that will be selected. For instance, ‘3D’ will display all the rows having their index within the last 3 days. Returns Series or DataFrameA subset of the caller. Raises TypeErrorIf the index is not a DatetimeIndex. See also firstSelect initial periods of time series based on a date offset. at_timeSelect values at a particular time of the day. between_timeSelect values between particular times of the day. Examples >>> i = pd.date_range('2018-04-09', periods=4, freq='2D') >>> ts = pd.DataFrame({'A': [1, 2, 3, 4]}, index=i) >>> ts A 2018-04-09 1 2018-04-11 2 2018-04-13 3 2018-04-15 4 Get the rows for the last 3 days: >>> ts.last('3D') A 2018-04-13 3 2018-04-15 4 Notice that the data for the last 3 calendar days were returned, not the last 3 observed days in the dataset, and therefore data for 2018-04-11 was not returned.
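The example above uses a DataFrame; the same call works on a Series. A sketch, including an explicit boolean-mask equivalent of the '3D' selection:

```python
import pandas as pd

i = pd.date_range('2018-04-09', periods=4, freq='2D')
s = pd.Series([1, 2, 3, 4], index=i)

# Rows whose index falls within 3 days of the last index value.
print(s.last('3D'))  # 2018-04-13 -> 3, 2018-04-15 -> 4

# An explicit equivalent using a boolean mask:
print(s.loc[s.index > s.index[-1] - pd.Timedelta('3D')])
```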
reference/api/pandas.Series.last.html
pandas.Timestamp.year
pandas.Timestamp.year
Timestamp.year#
reference/api/pandas.Timestamp.year.html
pandas.core.groupby.SeriesGroupBy.unique
`pandas.core.groupby.SeriesGroupBy.unique` Return unique values of Series object. ``` >>> pd.Series([2, 1, 3, 3], name='A').unique() array([2, 1, 3]) ```
property SeriesGroupBy.unique[source]# Return unique values of Series object. Uniques are returned in order of appearance. Hash table-based unique, therefore does NOT sort. Returns ndarray or ExtensionArrayThe unique values returned as a NumPy array. See Notes. See also Series.drop_duplicatesReturn Series with duplicate values removed. uniqueTop-level unique method for any 1-d array-like object. Index.uniqueReturn Index with unique values from an Index object. Notes Returns the unique values as a NumPy array. In case of an extension-array backed Series, a new ExtensionArray of that type with just the unique values is returned. This includes Categorical Period Datetime with Timezone Interval Sparse IntegerNA See Examples section. Examples >>> pd.Series([2, 1, 3, 3], name='A').unique() array([2, 1, 3]) >>> pd.Series([pd.Timestamp('2016-01-01') for _ in range(3)]).unique() array(['2016-01-01T00:00:00.000000000'], dtype='datetime64[ns]') >>> pd.Series([pd.Timestamp('2016-01-01', tz='US/Eastern') ... for _ in range(3)]).unique() <DatetimeArray> ['2016-01-01 00:00:00-05:00'] Length: 1, dtype: datetime64[ns, US/Eastern] A Categorical will return categories in the order of appearance and with the same dtype. >>> pd.Series(pd.Categorical(list('baabc'))).unique() ['b', 'a', 'c'] Categories (3, object): ['a', 'b', 'c'] >>> pd.Series(pd.Categorical(list('baabc'), categories=list('abc'), ... ordered=True)).unique() ['b', 'a', 'c'] Categories (3, object): ['a' < 'b' < 'c']
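The examples above call unique on plain Series; in a groupby context the property returns the unique values per group. A small sketch:

```python
import pandas as pd

df = pd.DataFrame({'key': ['a', 'a', 'b', 'b'],
                   'val': [1, 1, 2, 3]})

# One array of unique values per group, in order of appearance.
out = df.groupby('key')['val'].unique()
print(out['a'])  # [1]
print(out['b'])  # [2 3]
```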
reference/api/pandas.core.groupby.SeriesGroupBy.unique.html
pandas.tseries.offsets.SemiMonthBegin.rollback
`pandas.tseries.offsets.SemiMonthBegin.rollback` Roll provided date backward to next offset only if not on offset.
SemiMonthBegin.rollback()# Roll provided date backward to next offset only if not on offset. Returns TimestampRolled timestamp if not on offset, otherwise unchanged timestamp.
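A short sketch of the rollback behavior (SemiMonthBegin anchors on day 1 and, by default, day 15 of each month):

```python
import pandas as pd

offset = pd.offsets.SemiMonthBegin()

# Not on offset: rolled back to the previous anchor date.
print(offset.rollback(pd.Timestamp('2022-01-20')))  # 2022-01-15
# Already on offset: returned unchanged.
print(offset.rollback(pd.Timestamp('2022-01-15')))  # 2022-01-15
```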
reference/api/pandas.tseries.offsets.SemiMonthBegin.rollback.html
pandas.tseries.offsets.YearEnd.is_quarter_start
`pandas.tseries.offsets.YearEnd.is_quarter_start` Return boolean whether a timestamp occurs on the quarter start. ``` >>> ts = pd.Timestamp(2022, 1, 1) >>> freq = pd.offsets.Hour(5) >>> freq.is_quarter_start(ts) True ```
YearEnd.is_quarter_start()# Return boolean whether a timestamp occurs on the quarter start. Examples >>> ts = pd.Timestamp(2022, 1, 1) >>> freq = pd.offsets.Hour(5) >>> freq.is_quarter_start(ts) True
reference/api/pandas.tseries.offsets.YearEnd.is_quarter_start.html
pandas.Series.ne
`pandas.Series.ne` Return Not equal to of series and other, element-wise (binary operator ne). ``` >>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd']) >>> a a 1.0 b 1.0 c 1.0 d NaN dtype: float64 >>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e']) >>> b a 1.0 b NaN d 1.0 e NaN dtype: float64 >>> a.ne(b, fill_value=0) a False b True c True d True e True dtype: bool ```
Series.ne(other, level=None, fill_value=None, axis=0)[source]# Return Not equal to of series and other, element-wise (binary operator ne). Equivalent to series != other, but with support to substitute a fill_value for missing data in either one of the inputs. Parameters otherSeries or scalar value levelint or nameBroadcast across a level, matching Index values on the passed MultiIndex level. fill_valueNone or float value, default None (NaN)Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result of filling (at that location) will be missing. axis{0 or ‘index’}Unused. Parameter needed for compatibility with DataFrame. Returns SeriesThe result of the operation. Examples >>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd']) >>> a a 1.0 b 1.0 c 1.0 d NaN dtype: float64 >>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e']) >>> b a 1.0 b NaN d 1.0 e NaN dtype: float64 >>> a.ne(b, fill_value=0) a False b True c True d True e True dtype: bool
reference/api/pandas.Series.ne.html
pandas.tseries.offsets.BusinessMonthEnd.is_on_offset
`pandas.tseries.offsets.BusinessMonthEnd.is_on_offset` Return boolean whether a timestamp intersects with this frequency. Timestamp to check intersections with frequency. ``` >>> ts = pd.Timestamp(2022, 1, 1) >>> freq = pd.offsets.Day(1) >>> freq.is_on_offset(ts) True ```
BusinessMonthEnd.is_on_offset()# Return boolean whether a timestamp intersects with this frequency. Parameters dtdatetime.datetimeTimestamp to check intersections with frequency. Examples >>> ts = pd.Timestamp(2022, 1, 1) >>> freq = pd.offsets.Day(1) >>> freq.is_on_offset(ts) True >>> ts = pd.Timestamp(2022, 8, 6) >>> ts.day_name() 'Saturday' >>> freq = pd.offsets.BusinessDay(1) >>> freq.is_on_offset(ts) False
reference/api/pandas.tseries.offsets.BusinessMonthEnd.is_on_offset.html
pandas.core.groupby.GroupBy.backfill
`pandas.core.groupby.GroupBy.backfill` Backward fill the values.
GroupBy.backfill(limit=None)[source]# Backward fill the values. Deprecated since version 1.4: Use bfill instead. Parameters limitint, optionalLimit of how many values to fill. Returns Series or DataFrameObject with missing values filled.
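Since backfill is deprecated, the equivalent bfill call is sketched here: missing values are filled from the next value within each group.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'key': [0, 0, 1, 1],
                   'val': [np.nan, 2.0, np.nan, 4.0]})

# bfill is the non-deprecated spelling of backfill.
print(df.groupby('key')['val'].bfill())  # 2.0, 2.0, 4.0, 4.0
```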
reference/api/pandas.core.groupby.GroupBy.backfill.html
pandas.DataFrame.style
`pandas.DataFrame.style` Returns a Styler object.
property DataFrame.style[source]# Returns a Styler object. Contains methods for building a styled HTML representation of the DataFrame. See also io.formats.style.StylerHelps style a DataFrame or Series according to the data with HTML and CSS.
reference/api/pandas.DataFrame.style.html
pandas.IndexSlice
`pandas.IndexSlice` Create an object to more easily perform multi-index slicing. ``` >>> midx = pd.MultiIndex.from_product([['A0','A1'], ['B0','B1','B2','B3']]) >>> columns = ['foo', 'bar'] >>> dfmi = pd.DataFrame(np.arange(16).reshape((len(midx), len(columns))), ... index=midx, columns=columns) ```
pandas.IndexSlice = <pandas.core.indexing._IndexSlice object># Create an object to more easily perform multi-index slicing. See also MultiIndex.remove_unused_levelsNew MultiIndex with no unused levels. Notes See Defined Levels for further info on slicing a MultiIndex. Examples >>> midx = pd.MultiIndex.from_product([['A0','A1'], ['B0','B1','B2','B3']]) >>> columns = ['foo', 'bar'] >>> dfmi = pd.DataFrame(np.arange(16).reshape((len(midx), len(columns))), ... index=midx, columns=columns) Using the default slice command: >>> dfmi.loc[(slice(None), slice('B0', 'B1')), :] foo bar A0 B0 0 1 B1 2 3 A1 B0 8 9 B1 10 11 Using the IndexSlice class for a more intuitive command: >>> idx = pd.IndexSlice >>> dfmi.loc[idx[:, 'B0':'B1'], :] foo bar A0 B0 0 1 B1 2 3 A1 B0 8 9 B1 10 11
reference/api/pandas.IndexSlice.html
pandas.Series.ravel
`pandas.Series.ravel` Return the flattened underlying data as an ndarray. Flattened data of the Series.
Series.ravel(order='C')[source]# Return the flattened underlying data as an ndarray. Returns numpy.ndarray or ndarray-likeFlattened data of the Series. See also numpy.ndarray.ravelReturn a flattened array.
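A minimal sketch (note that more recent pandas versions deprecate Series.ravel in favor of Series.to_numpy()):

```python
import pandas as pd

s = pd.Series([1, 2, 3])
print(s.ravel())     # the underlying data as a flat ndarray
# Equivalent, and preferred in newer pandas versions:
print(s.to_numpy())  # [1 2 3]
```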
reference/api/pandas.Series.ravel.html
pandas.tseries.offsets.FY5253Quarter.__call__
`pandas.tseries.offsets.FY5253Quarter.__call__` Call self as a function.
FY5253Quarter.__call__(*args, **kwargs)# Call self as a function.
reference/api/pandas.tseries.offsets.FY5253Quarter.__call__.html
pandas.DataFrame.quantile
`pandas.DataFrame.quantile` Return values at the given quantile over requested axis. ``` >>> df = pd.DataFrame(np.array([[1, 1], [2, 10], [3, 100], [4, 100]]), ... columns=['a', 'b']) >>> df.quantile(.1) a 1.3 b 3.7 Name: 0.1, dtype: float64 >>> df.quantile([.1, .5]) a b 0.1 1.3 3.7 0.5 2.5 55.0 ```
DataFrame.quantile(q=0.5, axis=0, numeric_only=_NoDefault.no_default, interpolation='linear', method='single')[source]# Return values at the given quantile over requested axis. Parameters qfloat or array-like, default 0.5 (50% quantile)Value between 0 <= q <= 1, the quantile(s) to compute. axis{0 or ‘index’, 1 or ‘columns’}, default 0Equals 0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise. numeric_onlybool, default TrueIf False, the quantile of datetime and timedelta data will be computed as well. Deprecated since version 1.5.0: The default value of numeric_only will be False in a future version of pandas. interpolation{‘linear’, ‘lower’, ‘higher’, ‘midpoint’, ‘nearest’}This optional parameter specifies the interpolation method to use, when the desired quantile lies between two data points i and j: linear: i + (j - i) * fraction, where fraction is the fractional part of the index surrounded by i and j. lower: i. higher: j. nearest: i or j whichever is nearest. midpoint: (i + j) / 2. method{‘single’, ‘table’}, default ‘single’Whether to compute quantiles per-column (‘single’) or over all columns (‘table’). When ‘table’, the only allowed interpolation methods are ‘nearest’, ‘lower’, and ‘higher’. Returns Series or DataFrame If q is an array, a DataFrame will be returned where the index is q, the columns are the columns of self, and the values are the quantiles. If q is a float, a Series will be returned where the index is the columns of self and the values are the quantiles. See also core.window.rolling.Rolling.quantileRolling quantile. numpy.percentileNumpy function to compute the percentile. Examples >>> df = pd.DataFrame(np.array([[1, 1], [2, 10], [3, 100], [4, 100]]), ... columns=['a', 'b']) >>> df.quantile(.1) a 1.3 b 3.7 Name: 0.1, dtype: float64 >>> df.quantile([.1, .5]) a b 0.1 1.3 3.7 0.5 2.5 55.0 Specifying method=’table’ will compute the quantile over all columns. 
>>> df.quantile(.1, method="table", interpolation="nearest") a 1 b 1 Name: 0.1, dtype: int64 >>> df.quantile([.1, .5], method="table", interpolation="nearest") a b 0.1 1 1 0.5 3 100 Specifying numeric_only=False will also compute the quantile of datetime and timedelta data. >>> df = pd.DataFrame({'A': [1, 2], ... 'B': [pd.Timestamp('2010'), ... pd.Timestamp('2011')], ... 'C': [pd.Timedelta('1 days'), ... pd.Timedelta('2 days')]}) >>> df.quantile(0.5, numeric_only=False) A 1.5 B 2010-07-02 12:00:00 C 1 days 12:00:00 Name: 0.5, dtype: object
reference/api/pandas.DataFrame.quantile.html
pandas.tseries.offsets.CustomBusinessMonthBegin.is_month_end
`pandas.tseries.offsets.CustomBusinessMonthBegin.is_month_end` Return boolean whether a timestamp occurs on the month end. Examples ``` >>> ts = pd.Timestamp(2022, 1, 1) >>> freq = pd.offsets.Hour(5) >>> freq.is_month_end(ts) False ```
CustomBusinessMonthBegin.is_month_end()# Return boolean whether a timestamp occurs on the month end. Examples >>> ts = pd.Timestamp(2022, 1, 1) >>> freq = pd.offsets.Hour(5) >>> freq.is_month_end(ts) False
reference/api/pandas.tseries.offsets.CustomBusinessMonthBegin.is_month_end.html
pandas.tseries.offsets.Day.apply_index
`pandas.tseries.offsets.Day.apply_index` Vectorized apply of DateOffset to DatetimeIndex.
Day.apply_index()# Vectorized apply of DateOffset to DatetimeIndex. Deprecated since version 1.1.0: Use offset + dtindex instead. Parameters indexDatetimeIndex Returns DatetimeIndex Raises NotImplementedErrorWhen the specific offset subclass does not have a vectorized implementation.
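A sketch of the replacement spelling: adding the offset to a DatetimeIndex directly.

```python
import pandas as pd

dtindex = pd.date_range('2022-01-01', periods=3, freq='D')

# The deprecated freq.apply_index(dtindex) is spelled as plain addition:
shifted = dtindex + pd.offsets.Day(2)
print(shifted)  # 2022-01-03, 2022-01-04, 2022-01-05
```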
reference/api/pandas.tseries.offsets.Day.apply_index.html
pandas.Series.dt.microseconds
`pandas.Series.dt.microseconds` Number of microseconds (>= 0 and less than 1 second) for each element.
Series.dt.microseconds[source]# Number of microseconds (>= 0 and less than 1 second) for each element.
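This accessor applies to Series of timedelta data. A small sketch:

```python
import pandas as pd

# Timedeltas of 0..3 microseconds.
s = pd.Series(pd.to_timedelta(range(4), unit='us'))
print(s.dt.microseconds)  # 0, 1, 2, 3
```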
reference/api/pandas.Series.dt.microseconds.html
pandas.tseries.offsets.QuarterBegin.base
`pandas.tseries.offsets.QuarterBegin.base` Returns a copy of the calling offset object with n=1 and all other attributes equal.
QuarterBegin.base# Returns a copy of the calling offset object with n=1 and all other attributes equal.
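A short sketch: base drops the count n back to 1 while keeping the anchor attributes.

```python
import pandas as pd

offset = pd.offsets.QuarterBegin(n=3, startingMonth=2)

# n is reset to 1; startingMonth is kept at 2.
print(offset.base)
print(offset.base.n, offset.base.startingMonth)
```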
reference/api/pandas.tseries.offsets.QuarterBegin.base.html
pandas.tseries.offsets.BYearBegin.is_quarter_end
`pandas.tseries.offsets.BYearBegin.is_quarter_end` Return boolean whether a timestamp occurs on the quarter end. ``` >>> ts = pd.Timestamp(2022, 1, 1) >>> freq = pd.offsets.Hour(5) >>> freq.is_quarter_end(ts) False ```
BYearBegin.is_quarter_end()# Return boolean whether a timestamp occurs on the quarter end. Examples >>> ts = pd.Timestamp(2022, 1, 1) >>> freq = pd.offsets.Hour(5) >>> freq.is_quarter_end(ts) False
reference/api/pandas.tseries.offsets.BYearBegin.is_quarter_end.html
pandas.api.extensions.ExtensionArray._values_for_factorize
`pandas.api.extensions.ExtensionArray._values_for_factorize` Return an array and missing value suitable for factorization.
ExtensionArray._values_for_factorize()[source]# Return an array and missing value suitable for factorization. Returns valuesndarrayAn array suitable for factorization. This should maintain order and be a supported dtype (Float64, Int64, UInt64, String, Object). By default, the extension array is cast to object dtype. na_valueobjectThe value in values to consider missing. This will be treated as NA in the factorization routines, so it will be coded as na_sentinel and not included in uniques. By default, np.nan is used. Notes The values returned by this method are also used in pandas.util.hash_pandas_object().
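A rough sketch of what the method returns, using a string-backed extension array (this is a private API, so the exact na_value may vary between pandas versions):

```python
import pandas as pd

arr = pd.array(['a', 'b', pd.NA], dtype='string')

# values: an ndarray suitable for factorization;
# na_value: the sentinel treated as missing within it.
values, na_value = arr._values_for_factorize()
print(values)
print(na_value)
```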
reference/api/pandas.api.extensions.ExtensionArray._values_for_factorize.html
Comparison with R / R libraries
Comparison with R / R libraries Since pandas aims to provide a lot of the data manipulation and analysis functionality that people use R for, this page was started to provide a more detailed look at the R language and its many third party libraries as they relate to pandas. In comparisons with R and CRAN libraries, we care about the following things: Functionality / flexibility: what can/cannot be done with each tool Performance: how fast are operations. Hard numbers/benchmarks are preferable Ease-of-use: Is one tool easier/harder to use (you may have to be the judge of this, given side-by-side code comparisons) This page is also here to offer a bit of a translation guide for users of these R packages.
Since pandas aims to provide a lot of the data manipulation and analysis functionality that people use R for, this page was started to provide a more detailed look at the R language and its many third party libraries as they relate to pandas. In comparisons with R and CRAN libraries, we care about the following things: Functionality / flexibility: what can/cannot be done with each tool Performance: how fast are operations. Hard numbers/benchmarks are preferable Ease-of-use: Is one tool easier/harder to use (you may have to be the judge of this, given side-by-side code comparisons) This page is also here to offer a bit of a translation guide for users of these R packages. For transfer of DataFrame objects from pandas to R, one option is to use HDF5 files, see External compatibility for an example. Quick reference# We’ll start off with a quick reference guide pairing some common R operations using dplyr with pandas equivalents. Querying, filtering, sampling# R pandas dim(df) df.shape head(df) df.head() slice(df, 1:10) df.iloc[:10] filter(df, col1 == 1, col2 == 1) df.query('col1 == 1 & col2 == 1') df[df$col1 == 1 & df$col2 == 1,] df[(df.col1 == 1) & (df.col2 == 1)] select(df, col1, col2) df[['col1', 'col2']] select(df, col1:col3) df.loc[:, 'col1':'col3'] select(df, -(col1:col3)) df.drop(cols_to_drop, axis=1) but see 1 distinct(select(df, col1)) df[['col1']].drop_duplicates() distinct(select(df, col1, col2)) df[['col1', 'col2']].drop_duplicates() sample_n(df, 10) df.sample(n=10) sample_frac(df, 0.01) df.sample(frac=0.01) 1 R’s shorthand for a subrange of columns (select(df, col1:col3)) can be approached cleanly in pandas, if you have the list of columns, for example df[cols[1:3]] or df.drop(cols[1:3]), but doing this by column name is a bit messy. 
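A runnable sketch of a few of the pairings in the querying/filtering table above (the toy frame is made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({'col1': [1, 1, 2], 'col2': [1, 0, 1]})

# filter(df, col1 == 1, col2 == 1)
print(df.query('col1 == 1 & col2 == 1'))

# distinct(select(df, col1))
print(df[['col1']].drop_duplicates())

# sample_n(df, 2); random_state pins the draw for reproducibility
print(df.sample(n=2, random_state=0))
```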
Sorting# R pandas arrange(df, col1, col2) df.sort_values(['col1', 'col2']) arrange(df, desc(col1)) df.sort_values('col1', ascending=False) Transforming# R pandas select(df, col_one = col1) df.rename(columns={'col1': 'col_one'})['col_one'] rename(df, col_one = col1) df.rename(columns={'col1': 'col_one'}) mutate(df, c=a-b) df.assign(c=df['a']-df['b']) Grouping and summarizing# R pandas summary(df) df.describe() gdf <- group_by(df, col1) gdf = df.groupby('col1') summarise(gdf, avg=mean(col1, na.rm=TRUE)) df.groupby('col1').agg({'col1': 'mean'}) summarise(gdf, total=sum(col1)) df.groupby('col1').sum() Base R# Slicing with R’s c# R makes it easy to access data.frame columns by name df <- data.frame(a=rnorm(5), b=rnorm(5), c=rnorm(5), d=rnorm(5), e=rnorm(5)) df[, c("a", "c", "e")] or by integer location df <- data.frame(matrix(rnorm(1000), ncol=100)) df[, c(1:10, 25:30, 40, 50:100)] Selecting multiple columns by name in pandas is straightforward In [1]: df = pd.DataFrame(np.random.randn(10, 3), columns=list("abc")) In [2]: df[["a", "c"]] Out[2]: a c 0 0.469112 -1.509059 1 -1.135632 -0.173215 2 0.119209 -0.861849 3 -2.104569 1.071804 4 0.721555 -1.039575 5 0.271860 0.567020 6 0.276232 -0.673690 7 0.113648 0.524988 8 0.404705 -1.715002 9 -1.039268 -1.157892 In [3]: df.loc[:, ["a", "c"]] Out[3]: a c 0 0.469112 -1.509059 1 -1.135632 -0.173215 2 0.119209 -0.861849 3 -2.104569 1.071804 4 0.721555 -1.039575 5 0.271860 0.567020 6 0.276232 -0.673690 7 0.113648 0.524988 8 0.404705 -1.715002 9 -1.039268 -1.157892 Selecting multiple noncontiguous columns by integer location can be achieved with a combination of the iloc indexer attribute and numpy.r_. In [4]: named = list("abcdefg") In [5]: n = 30 In [6]: columns = named + np.arange(len(named), n).tolist() In [7]: df = pd.DataFrame(np.random.randn(n, n), columns=columns) In [8]: df.iloc[:, np.r_[:10, 24:30]] Out[8]: a b c ... 27 28 29 0 -1.344312 0.844885 1.075770 ... 0.813850 0.132003 -0.827317 1 -0.076467 -1.187678 1.130127 ... 
0.149748 -0.732339 0.687738 2 0.176444 0.403310 -0.154951 ... -0.493662 0.600178 0.274230 3 0.132885 -0.023688 2.410179 ... 0.109121 1.126203 -0.977349 4 1.474071 -0.064034 -1.282782 ... -0.858447 0.306996 -0.028665 .. ... ... ... ... ... ... ... 25 1.492125 -0.068190 0.681456 ... 0.428572 0.880609 0.487645 26 0.725238 0.624607 -0.141185 ... 1.008500 1.424017 0.717110 27 1.262419 1.950057 0.301038 ... 1.007824 2.826008 1.458383 28 -1.585746 -0.899734 0.921494 ... 0.577223 -1.088417 0.326687 29 -0.986248 0.169729 -1.158091 ... -2.013086 -1.602549 0.333109 [30 rows x 16 columns] aggregate# In R you may want to split data into subsets and compute the mean for each. Using a data.frame called df and splitting it into groups by1 and by2: df <- data.frame( v1 = c(1,3,5,7,8,3,5,NA,4,5,7,9), v2 = c(11,33,55,77,88,33,55,NA,44,55,77,99), by1 = c("red", "blue", 1, 2, NA, "big", 1, 2, "red", 1, NA, 12), by2 = c("wet", "dry", 99, 95, NA, "damp", 95, 99, "red", 99, NA, NA)) aggregate(x=df[, c("v1", "v2")], by=list(df$by1, df$by2), FUN = mean) The groupby() method is similar to the base R aggregate function. In [9]: df = pd.DataFrame( ...: { ...: "v1": [1, 3, 5, 7, 8, 3, 5, np.nan, 4, 5, 7, 9], ...: "v2": [11, 33, 55, 77, 88, 33, 55, np.nan, 44, 55, 77, 99], ...: "by1": ["red", "blue", 1, 2, np.nan, "big", 1, 2, "red", 1, np.nan, 12], ...: "by2": [ ...: "wet", ...: "dry", ...: 99, ...: 95, ...: np.nan, ...: "damp", ...: 95, ...: 99, ...: "red", ...: 99, ...: np.nan, ...: np.nan, ...: ], ...: } ...: ) ...: In [10]: g = df.groupby(["by1", "by2"]) In [11]: g[["v1", "v2"]].mean() Out[11]: v1 v2 by1 by2 1 95 5.0 55.0 99 5.0 55.0 2 95 7.0 77.0 99 NaN NaN big damp 3.0 33.0 blue dry 3.0 33.0 red red 4.0 44.0 wet 1.0 11.0 For more details and examples see the groupby documentation. match / %in%# A common way to select data in R is using %in% which is defined using the function match. 
The operator %in% is used to return a logical vector indicating if there is a match or not: s <- 0:4 s %in% c(2,4) The isin() method is similar to R %in% operator: In [12]: s = pd.Series(np.arange(5), dtype=np.float32) In [13]: s.isin([2, 4]) Out[13]: 0 False 1 False 2 True 3 False 4 True dtype: bool The match function returns a vector of the positions of matches of its first argument in its second: s <- 0:4 match(s, c(2,4)) For more details and examples see the reshaping documentation. tapply# tapply is similar to aggregate, but data can be in a ragged array, since the subclass sizes are possibly irregular. Using a data.frame called baseball, and retrieving information based on the array team: baseball <- data.frame(team = gl(5, 5, labels = paste("Team", LETTERS[1:5])), player = sample(letters, 25), batting.average = runif(25, .200, .400)) tapply(baseball$batting.average, baseball$team, max) In pandas we may use the pivot_table() method to handle this: In [14]: import random In [15]: import string In [16]: baseball = pd.DataFrame( ....: { ....: "team": ["team %d" % (x + 1) for x in range(5)] * 5, ....: "player": random.sample(list(string.ascii_lowercase), 25), ....: "batting avg": np.random.uniform(0.200, 0.400, 25), ....: } ....: ) ....: In [17]: baseball.pivot_table(values="batting avg", columns="team", aggfunc=np.max) Out[17]: team team 1 team 2 team 3 team 4 team 5 batting avg 0.352134 0.295327 0.397191 0.394457 0.396194 For more details and examples see the reshaping documentation. subset# The query() method is similar to the base R subset function. In R you might want to get the rows of a data.frame where one column’s values are less than another column’s values: df <- data.frame(a=rnorm(10), b=rnorm(10)) subset(df, a <= b) df[df$a <= df$b,] # note the comma In pandas, there are a few ways to perform subsetting. 
You can use query() or pass an expression as if it were an index/slice as well as standard boolean indexing: In [18]: df = pd.DataFrame({"a": np.random.randn(10), "b": np.random.randn(10)}) In [19]: df.query("a <= b") Out[19]: a b 1 0.174950 0.552887 2 -0.023167 0.148084 3 -0.495291 -0.300218 4 -0.860736 0.197378 5 -1.134146 1.720780 7 -0.290098 0.083515 8 0.238636 0.946550 In [20]: df[df["a"] <= df["b"]] Out[20]: a b 1 0.174950 0.552887 2 -0.023167 0.148084 3 -0.495291 -0.300218 4 -0.860736 0.197378 5 -1.134146 1.720780 7 -0.290098 0.083515 8 0.238636 0.946550 In [21]: df.loc[df["a"] <= df["b"]] Out[21]: a b 1 0.174950 0.552887 2 -0.023167 0.148084 3 -0.495291 -0.300218 4 -0.860736 0.197378 5 -1.134146 1.720780 7 -0.290098 0.083515 8 0.238636 0.946550 For more details and examples see the query documentation. with# An expression using a data.frame called df in R with the columns a and b would be evaluated using with like so: df <- data.frame(a=rnorm(10), b=rnorm(10)) with(df, a + b) df$a + df$b # same as the previous expression In pandas the equivalent expression, using the eval() method, would be: In [22]: df = pd.DataFrame({"a": np.random.randn(10), "b": np.random.randn(10)}) In [23]: df.eval("a + b") Out[23]: 0 -0.091430 1 -2.483890 2 -0.252728 3 -0.626444 4 -0.261740 5 2.149503 6 -0.332214 7 0.799331 8 -2.377245 9 2.104677 dtype: float64 In [24]: df["a"] + df["b"] # same as the previous expression Out[24]: 0 -0.091430 1 -2.483890 2 -0.252728 3 -0.626444 4 -0.261740 5 2.149503 6 -0.332214 7 0.799331 8 -2.377245 9 2.104677 dtype: float64 In certain cases eval() will be much faster than evaluation in pure Python. For more details and examples see the eval documentation. plyr# plyr is an R library for the split-apply-combine strategy for data analysis. The functions revolve around three data structures in R, a for arrays, l for lists, and d for data.frame. The table below shows how these data structures could be mapped in Python. 
R → Python: array → list; lists → dictionary or list of objects; data.frame → dataframe. ddply# An expression using a data.frame called df in R where you want to summarize x by month: require(plyr) df <- data.frame( x = runif(120, 1, 168), y = runif(120, 7, 334), z = runif(120, 1.7, 20.7), month = rep(c(5,6,7,8),30), week = sample(1:4, 120, TRUE) ) ddply(df, .(month, week), summarize, mean = round(mean(x), 2), sd = round(sd(x), 2)) In pandas the equivalent expression, using the groupby() method, would be: In [25]: df = pd.DataFrame( ....: { ....: "x": np.random.uniform(1.0, 168.0, 120), ....: "y": np.random.uniform(7.0, 334.0, 120), ....: "z": np.random.uniform(1.7, 20.7, 120), ....: "month": [5, 6, 7, 8] * 30, ....: "week": np.random.randint(1, 4, 120), ....: } ....: ) ....: In [26]: grouped = df.groupby(["month", "week"]) In [27]: grouped["x"].agg([np.mean, np.std]) Out[27]: mean std month week 5 1 63.653367 40.601965 2 78.126605 53.342400 3 92.091886 57.630110 6 1 81.747070 54.339218 2 70.971205 54.687287 3 100.968344 54.010081 7 1 61.576332 38.844274 2 61.733510 48.209013 3 71.688795 37.595638 8 1 62.741922 34.618153 2 91.774627 49.790202 3 73.936856 60.773900 For more details and examples see the groupby documentation. reshape / reshape2# meltarray# An expression using a 3 dimensional array called a in R where you want to melt it into a data.frame: a <- array(c(1:23, NA), c(2,3,4)) data.frame(melt(a)) In Python, since a is a NumPy array, you can simply use a list comprehension over np.ndenumerate. In [28]: a = np.array(list(range(1, 24)) + [np.NAN]).reshape(2, 3, 4) In [29]: pd.DataFrame([tuple(list(x) + [val]) for x, val in np.ndenumerate(a)]) Out[29]: 0 1 2 3 0 0 0 0 1.0 1 0 0 1 2.0 2 0 0 2 3.0 3 0 0 3 4.0 4 0 1 0 5.0 .. .. .. .. ... 
19 1 1 3 20.0 20 1 2 0 21.0 21 1 2 1 22.0 22 1 2 2 23.0 23 1 2 3 NaN [24 rows x 4 columns] meltlist# An expression using a list called a in R where you want to melt it into a data.frame: a <- as.list(c(1:4, NA)) data.frame(melt(a)) In Python, this list would be a list of tuples, so the DataFrame() method converts it to a DataFrame as required. In [30]: a = list(enumerate(list(range(1, 5)) + [np.NAN])) In [31]: pd.DataFrame(a) Out[31]: 0 1 0 0 1.0 1 1 2.0 2 2 3.0 3 3 4.0 4 4 NaN For more details and examples see the Intro to Data Structures documentation. meltdf# An expression using a data.frame called cheese in R where you want to reshape the data.frame: cheese <- data.frame( first = c('John', 'Mary'), last = c('Doe', 'Bo'), height = c(5.5, 6.0), weight = c(130, 150) ) melt(cheese, id=c("first", "last")) In Python, the melt() method is the R equivalent: In [32]: cheese = pd.DataFrame( ....: { ....: "first": ["John", "Mary"], ....: "last": ["Doe", "Bo"], ....: "height": [5.5, 6.0], ....: "weight": [130, 150], ....: } ....: ) ....: In [33]: pd.melt(cheese, id_vars=["first", "last"]) Out[33]: first last variable value 0 John Doe height 5.5 1 Mary Bo height 6.0 2 John Doe weight 130.0 3 Mary Bo weight 150.0 In [34]: cheese.set_index(["first", "last"]).stack() # alternative way Out[34]: first last John Doe height 5.5 weight 130.0 Mary Bo height 6.0 weight 150.0 dtype: float64 For more details and examples see the reshaping documentation. 
cast# In R acast is an expression using a data.frame called df in R to cast into a higher dimensional array: df <- data.frame( x = runif(12, 1, 168), y = runif(12, 7, 334), z = runif(12, 1.7, 20.7), month = rep(c(5,6,7),4), week = rep(c(1,2), 6) ) mdf <- melt(df, id=c("month", "week")) acast(mdf, week ~ month ~ variable, mean) In Python the best way is to make use of pivot_table(): In [35]: df = pd.DataFrame( ....: { ....: "x": np.random.uniform(1.0, 168.0, 12), ....: "y": np.random.uniform(7.0, 334.0, 12), ....: "z": np.random.uniform(1.7, 20.7, 12), ....: "month": [5, 6, 7] * 4, ....: "week": [1, 2] * 6, ....: } ....: ) ....: In [36]: mdf = pd.melt(df, id_vars=["month", "week"]) In [37]: pd.pivot_table( ....: mdf, ....: values="value", ....: index=["variable", "week"], ....: columns=["month"], ....: aggfunc=np.mean, ....: ) ....: Out[37]: month 5 6 7 variable week x 1 93.888747 98.762034 55.219673 2 94.391427 38.112932 83.942781 y 1 94.306912 279.454811 227.840449 2 87.392662 193.028166 173.899260 z 1 11.016009 10.079307 16.170549 2 8.476111 17.638509 19.003494 Similarly for dcast which uses a data.frame called df in R to aggregate information based on Animal and FeedType: df <- data.frame( Animal = c('Animal1', 'Animal2', 'Animal3', 'Animal2', 'Animal1', 'Animal2', 'Animal3'), FeedType = c('A', 'B', 'A', 'A', 'B', 'B', 'A'), Amount = c(10, 7, 4, 2, 5, 6, 2) ) dcast(df, Animal ~ FeedType, sum, fill=NaN) # Alternative method using base R with(df, tapply(Amount, list(Animal, FeedType), sum)) Python can approach this in two different ways. 
Firstly, similar to above using pivot_table(): In [38]: df = pd.DataFrame( ....: { ....: "Animal": [ ....: "Animal1", ....: "Animal2", ....: "Animal3", ....: "Animal2", ....: "Animal1", ....: "Animal2", ....: "Animal3", ....: ], ....: "FeedType": ["A", "B", "A", "A", "B", "B", "A"], ....: "Amount": [10, 7, 4, 2, 5, 6, 2], ....: } ....: ) ....: In [39]: df.pivot_table(values="Amount", index="Animal", columns="FeedType", aggfunc="sum") Out[39]: FeedType A B Animal Animal1 10.0 5.0 Animal2 2.0 13.0 Animal3 6.0 NaN The second approach is to use the groupby() method: In [40]: df.groupby(["Animal", "FeedType"])["Amount"].sum() Out[40]: Animal FeedType Animal1 A 10 B 5 Animal2 A 2 B 13 Animal3 A 6 Name: Amount, dtype: int64 For more details and examples see the reshaping documentation or the groupby documentation. factor# pandas has a data type for categorical data. cut(c(1,2,3,4,5,6), 3) factor(c(1,2,3,2,2,3)) In pandas this is accomplished with pd.cut and astype("category"): In [41]: pd.cut(pd.Series([1, 2, 3, 4, 5, 6]), 3) Out[41]: 0 (0.995, 2.667] 1 (0.995, 2.667] 2 (2.667, 4.333] 3 (2.667, 4.333] 4 (4.333, 6.0] 5 (4.333, 6.0] dtype: category Categories (3, interval[float64, right]): [(0.995, 2.667] < (2.667, 4.333] < (4.333, 6.0]] In [42]: pd.Series([1, 2, 3, 2, 2, 3]).astype("category") Out[42]: 0 1 1 2 2 3 3 2 4 2 5 3 dtype: category Categories (3, int64): [1, 2, 3] For more details and examples see categorical introduction and the API documentation. There is also documentation on the differences from R’s factor.
getting_started/comparison/comparison_with_r.html
pandas.tseries.offsets.Week.n
pandas.tseries.offsets.Week.n
Week.n#
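The stub above documents the n attribute without an example. As a hedged sketch (not taken from the original page), n is simply the integer multiple of weeks the offset represents, as given at construction:

```python
import pandas as pd

# n is the number of weeks the offset spans; it defaults to 1.
assert pd.offsets.Week().n == 1

two_weeks = pd.offsets.Week(n=2)
print(two_weeks.n)  # 2
```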
reference/api/pandas.tseries.offsets.Week.n.html
pandas.tseries.offsets.BYearBegin.month
pandas.tseries.offsets.BYearBegin.month
BYearBegin.month#
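The stub above documents the month attribute without an example. As a hedged sketch (not taken from the original page), month records which calendar month the business year is anchored to, e.g. a fiscal year beginning in April:

```python
import pandas as pd

# month anchors the start of the business year; it defaults to 1 (January).
fiscal = pd.offsets.BYearBegin(month=4)
print(fiscal.month)  # 4
```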
reference/api/pandas.tseries.offsets.BYearBegin.month.html
pandas.tseries.offsets.BusinessMonthEnd.is_month_end
`pandas.tseries.offsets.BusinessMonthEnd.is_month_end` Return boolean whether a timestamp occurs on the month end. Examples ``` >>> ts = pd.Timestamp(2022, 1, 1) >>> freq = pd.offsets.Hour(5) >>> freq.is_month_end(ts) False ```
BusinessMonthEnd.is_month_end()# Return boolean whether a timestamp occurs on the month end. Examples >>> ts = pd.Timestamp(2022, 1, 1) >>> freq = pd.offsets.Hour(5) >>> freq.is_month_end(ts) False
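To complement the False case above, a small hedged sketch (not from the original page) showing that the check depends only on the timestamp passed in, not on the offset's own frequency:

```python
import pandas as pd

freq = pd.offsets.BusinessMonthEnd()

# January 31 is the last calendar day of the month, so the check passes;
# January 30 is not, so it fails.
print(freq.is_month_end(pd.Timestamp(2022, 1, 31)))  # True
print(freq.is_month_end(pd.Timestamp(2022, 1, 30)))  # False
```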
reference/api/pandas.tseries.offsets.BusinessMonthEnd.is_month_end.html
pandas.tseries.offsets.Hour.is_month_end
`pandas.tseries.offsets.Hour.is_month_end` Return boolean whether a timestamp occurs on the month end. Examples ``` >>> ts = pd.Timestamp(2022, 1, 1) >>> freq = pd.offsets.Hour(5) >>> freq.is_month_end(ts) False ```
Hour.is_month_end()# Return boolean whether a timestamp occurs on the month end. Examples >>> ts = pd.Timestamp(2022, 1, 1) >>> freq = pd.offsets.Hour(5) >>> freq.is_month_end(ts) False
reference/api/pandas.tseries.offsets.Hour.is_month_end.html
pandas.core.groupby.DataFrameGroupBy.tshift
`pandas.core.groupby.DataFrameGroupBy.tshift` Shift the time index, using the index’s frequency if available.
property DataFrameGroupBy.tshift[source]# Shift the time index, using the index’s frequency if available. Deprecated since version 1.1.0: Use shift instead. Parameters periodsintNumber of periods to move, can be positive or negative. freqDateOffset, timedelta, or str, default NoneIncrement to use from the tseries module or time rule expressed as a string (e.g. ‘EOM’). axis{0 or ‘index’, 1 or ‘columns’, None}, default 0Corresponds to the axis that contains the Index. For Series this parameter is unused and defaults to 0. Returns shiftedSeries/DataFrame Notes If freq is not specified then tries to use the freq or inferred_freq attributes of the index. If neither of those attributes exist, a ValueError is thrown
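Since tshift is deprecated, the documented replacement is shift with an explicit freq. A minimal sketch on a plain Series (the groupby case works the same way; this example is an assumption, not from the original page):

```python
import pandas as pd

s = pd.Series([1, 2, 3], index=pd.date_range("2023-01-01", periods=3, freq="D"))

# shift with freq moves the index, not the values; this is the behaviour
# tshift used to provide.
shifted = s.shift(periods=2, freq="D")
print(shifted.index[0].date())  # 2023-01-03
print(list(shifted.values))     # [1, 2, 3]
```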
reference/api/pandas.core.groupby.DataFrameGroupBy.tshift.html
pandas.infer_freq
`pandas.infer_freq` Infer the most likely frequency given the input index. If passed a Series will use the values of the series (NOT THE INDEX). ``` >>> idx = pd.date_range(start='2020/12/01', end='2020/12/30', periods=30) >>> pd.infer_freq(idx) 'D' ```
pandas.infer_freq(index, warn=True)[source]# Infer the most likely frequency given the input index. Parameters indexDatetimeIndex or TimedeltaIndexIf passed a Series will use the values of the series (NOT THE INDEX). warnbool, default True Deprecated since version 1.5.0. Returns str or NoneNone if no discernible frequency. Raises TypeErrorIf the index is not datetime-like. ValueErrorIf there are fewer than three values. Examples >>> idx = pd.date_range(start='2020/12/01', end='2020/12/30', periods=30) >>> pd.infer_freq(idx) 'D'
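Complementing the daily example above, a hedged sketch showing that sub-daily frequencies are inferred too (the exact alias string for hourly, 'H' vs 'h', varies by pandas version, so no exact output is shown):

```python
import pandas as pd

# An hourly index; infer_freq recovers the hourly alias from the spacing.
hourly = pd.date_range("2023-01-01", periods=5, freq=pd.offsets.Hour())
print(pd.infer_freq(hourly))
```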
reference/api/pandas.infer_freq.html
pandas.tseries.offsets.BusinessMonthEnd.rule_code
pandas.tseries.offsets.BusinessMonthEnd.rule_code
BusinessMonthEnd.rule_code#
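The stub above carries no example. As a hedged sketch (not from the original page), rule_code exposes the frequency string alias that string-based APIs such as date_range accept for this offset; the exact alias has changed across pandas versions ('BM' historically, 'BME' in newer releases):

```python
import pandas as pd

offset = pd.offsets.BusinessMonthEnd()

# The alias can be passed back to date_range as a freq string.
alias = offset.rule_code
print(alias)
print(pd.date_range("2022-01-01", periods=2, freq=alias))
```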
reference/api/pandas.tseries.offsets.BusinessMonthEnd.rule_code.html
Table Visualization
Table Visualization This section demonstrates visualization of tabular data using the Styler class. For information on visualization with charting please see Chart Visualization. This document is written as a Jupyter Notebook, and can be viewed or downloaded here. Styling should be performed after the data in a DataFrame has been processed. The Styler creates an HTML <table> and leverages CSS styling language to manipulate many parameters including colors, fonts, borders, background, etc. See here for more information on styling HTML tables. This allows a lot of flexibility out of the box, and even enables web developers to integrate DataFrames into their existing user interface designs. The DataFrame.style attribute is a property that returns a Styler object. It has a _repr_html_ method defined on it so it is rendered automatically in Jupyter Notebook. The above output looks very similar to the standard DataFrame HTML representation. But the HTML here has already attached some CSS classes to each cell, even if we haven’t yet created any styles. We can view these by calling the .to_html() method, which returns the raw HTML as a string, which is useful for further processing or adding to a file - read on in More about CSS and HTML. Below we will show how we can use these to format the DataFrame to be more communicative. For example, how we can build s: Before adding styles it is useful to show that the Styler can distinguish the display value from the actual value, in both data values and index or column headers. To control the display value, the text is printed in each cell as a string, and we can use the .format() and .format_index() methods to manipulate this according to a format spec string or a callable that takes a single value and returns a string. It is possible to define this for the whole table, or index, or for individual columns, or MultiIndex levels.
This section demonstrates visualization of tabular data using the Styler class. For information on visualization with charting please see Chart Visualization. This document is written as a Jupyter Notebook, and can be viewed or downloaded here. Styler Object and HTML# Styling should be performed after the data in a DataFrame has been processed. The Styler creates an HTML <table> and leverages CSS styling language to manipulate many parameters including colors, fonts, borders, background, etc. See here for more information on styling HTML tables. This allows a lot of flexibility out of the box, and even enables web developers to integrate DataFrames into their existing user interface designs. The DataFrame.style attribute is a property that returns a Styler object. It has a _repr_html_ method defined on it so it is rendered automatically in Jupyter Notebook. [2]: import pandas as pd import numpy as np import matplotlib as mpl df = pd.DataFrame([[38.0, 2.0, 18.0, 22.0, 21, np.nan],[19, 439, 6, 452, 226,232]], index=pd.Index(['Tumour (Positive)', 'Non-Tumour (Negative)'], name='Actual Label:'), columns=pd.MultiIndex.from_product([['Decision Tree', 'Regression', 'Random'],['Tumour', 'Non-Tumour']], names=['Model:', 'Predicted:'])) df.style [2]: Model: Decision Tree Regression Random Predicted: Tumour Non-Tumour Tumour Non-Tumour Tumour Non-Tumour Actual Label:             Tumour (Positive) 38.000000 2.000000 18.000000 22.000000 21 nan Non-Tumour (Negative) 19.000000 439.000000 6.000000 452.000000 226 232.000000 The above output looks very similar to the standard DataFrame HTML representation. But the HTML here has already attached some CSS classes to each cell, even if we haven’t yet created any styles. We can view these by calling the .to_html() method, which returns the raw HTML as a string, which is useful for further processing or adding to a file - read on in More about CSS and HTML. 
Below we will show how we can use these to format the DataFrame to be more communicative. For example, how we can build s: [4]: s [4]: Confusion matrix for multiple cancer prediction models. Model: Decision Tree Regression Predicted: Tumour Non-Tumour Tumour Non-Tumour Actual Label:         Tumour (Positive) 38 2 18 22 Non-Tumour (Negative) 19 439 6 452 Formatting the Display# Formatting Values# Before adding styles it is useful to show that the Styler can distinguish the display value from the actual value, in both data values and index or column headers. To control the display value, the text is printed in each cell as a string, and we can use the .format() and .format_index() methods to manipulate this according to a format spec string or a callable that takes a single value and returns a string. It is possible to define this for the whole table, or index, or for individual columns, or MultiIndex levels. Additionally, the format function has a precision argument to specifically help formatting floats, as well as decimal and thousands separators to support other locales, an na_rep argument to display missing data, and an escape argument to help displaying safe-HTML or safe-LaTeX. The default formatter is configured to adopt pandas’ styler.format.precision option, controllable using with pd.option_context('format.precision', 2): [5]: df.style.format(precision=0, na_rep='MISSING', thousands=" ", formatter={('Decision Tree', 'Tumour'): "{:.2f}", ('Regression', 'Non-Tumour'): lambda x: "$ {:,.1f}".format(x*-1e6) }) [5]: Model: Decision Tree Regression Random Predicted: Tumour Non-Tumour Tumour Non-Tumour Tumour Non-Tumour Actual Label:             Tumour (Positive) 38.00 2 18 $ -22 000 000.0 21 MISSING Non-Tumour (Negative) 19.00 439 6 $ -452 000 000.0 226 232 Using Styler to manipulate the display is a useful feature because maintaining the indexing and data values for other purposes gives greater control. 
You do not have to overwrite your DataFrame to display it how you like. Here is an example of using the formatting functions whilst still relying on the underlying data for indexing and calculations. [6]: weather_df = pd.DataFrame(np.random.rand(10,2)*5, index=pd.date_range(start="2021-01-01", periods=10), columns=["Tokyo", "Beijing"]) def rain_condition(v): if v < 1.75: return "Dry" elif v < 2.75: return "Rain" return "Heavy Rain" def make_pretty(styler): styler.set_caption("Weather Conditions") styler.format(rain_condition) styler.format_index(lambda v: v.strftime("%A")) styler.background_gradient(axis=None, vmin=1, vmax=5, cmap="YlGnBu") return styler weather_df [6]: Tokyo Beijing 2021-01-01 1.156896 0.483482 2021-01-02 4.274907 2.740275 2021-01-03 2.347367 4.046314 2021-01-04 4.762118 1.187866 2021-01-05 3.364955 1.436871 2021-01-06 1.714027 0.031307 2021-01-07 2.402132 4.665891 2021-01-08 3.262800 0.759015 2021-01-09 4.260268 2.226552 2021-01-10 4.277346 2.286653 [7]: weather_df.loc["2021-01-04":"2021-01-08"].style.pipe(make_pretty) [7]: Weather Conditions   Tokyo Beijing Monday Heavy Rain Dry Tuesday Heavy Rain Dry Wednesday Dry Dry Thursday Rain Heavy Rain Friday Heavy Rain Dry Hiding Data# The index and column headers can be completely hidden, as well as subselecting rows or columns that one wishes to exclude. Both these options are performed using the same methods. The index can be hidden from rendering by calling .hide() without any arguments, which might be useful if your index is integer based. Similarly column headers can be hidden by calling .hide(axis="columns") without any further arguments. Specific rows or columns can be hidden from rendering by calling the same .hide() method and passing in a row/column label, a list-like, or a slice of row/column labels for the subset argument. Hiding does not change the integer arrangement of CSS classes, e.g. 
hiding the first two columns of a DataFrame means the column class indexing will still start at col2, since col0 and col1 are simply ignored. We can update our Styler object from before to hide some data and format the values. [8]: s = df.style.format('{:.0f}').hide([('Random', 'Tumour'), ('Random', 'Non-Tumour')], axis="columns") s [8]: Model: Decision Tree Regression Predicted: Tumour Non-Tumour Tumour Non-Tumour Actual Label:         Tumour (Positive) 38 2 18 22 Non-Tumour (Negative) 19 439 6 452 Methods to Add Styles# There are 3 primary methods of adding custom CSS styles to Styler: Using .set_table_styles() to control broader areas of the table with specified internal CSS. Although table styles allow the flexibility to add CSS selectors and properties controlling all individual parts of the table, they are unwieldy for individual cell specifications. Also, note that table styles cannot be exported to Excel. Using .set_td_classes() to directly link either external CSS classes to your data cells or link the internal CSS classes created by .set_table_styles(). See here. These cannot be used on column header rows or indexes, and also won’t export to Excel. Using the .apply() and .applymap() functions to add direct internal CSS to specific data cells. See here. As of v1.4.0 there are also methods that work directly on column header rows or indexes; .apply_index() and .applymap_index(). Note that only these methods add styles that will export to Excel. These methods work in a similar way to DataFrame.apply() and DataFrame.applymap(). Table Styles# Table styles are flexible enough to control all individual parts of the table, including column headers and indexes. However, they can be unwieldy to type for individual data cells or for any kind of conditional formatting, so we recommend that table styles are used for broad styling, such as entire rows or columns at a time. 
Table styles are also used to control features which can apply to the whole table at once such as creating a generic hover functionality. The :hover pseudo-selector, as well as other pseudo-selectors, can only be used this way. To replicate the normal format of CSS selectors and properties (attribute value pairs), e.g. tr:hover { background-color: #ffff99; } the necessary format to pass styles to .set_table_styles() is as a list of dicts, each with a CSS-selector tag and CSS-properties. Properties can either be a list of 2-tuples, or a regular CSS-string, for example: [10]: cell_hover = { # for row hover use <tr> instead of <td> 'selector': 'td:hover', 'props': [('background-color', '#ffffb3')] } index_names = { 'selector': '.index_name', 'props': 'font-style: italic; color: darkgrey; font-weight:normal;' } headers = { 'selector': 'th:not(.index_name)', 'props': 'background-color: #000066; color: white;' } s.set_table_styles([cell_hover, index_names, headers]) [10]: Model: Decision Tree Regression Predicted: Tumour Non-Tumour Tumour Non-Tumour Actual Label:         Tumour (Positive) 38 2 18 22 Non-Tumour (Negative) 19 439 6 452 Next we just add a couple more styling artifacts targeting specific parts of the table. Be careful here, since we are chaining methods we need to explicitly instruct the method not to overwrite the existing styles. [12]: s.set_table_styles([ {'selector': 'th.col_heading', 'props': 'text-align: center;'}, {'selector': 'th.col_heading.level0', 'props': 'font-size: 1.5em;'}, {'selector': 'td', 'props': 'text-align: center; font-weight: bold;'}, ], overwrite=False) [12]: Model: Decision Tree Regression Predicted: Tumour Non-Tumour Tumour Non-Tumour Actual Label:         Tumour (Positive) 38 2 18 22 Non-Tumour (Negative) 19 439 6 452 As a convenience method (since version 1.2.0) we can also pass a dict to .set_table_styles() which contains row or column keys. 
Behind the scenes Styler just indexes the keys and adds relevant .col<m> or .row<n> classes as necessary to the given CSS selectors. [14]: s.set_table_styles({ ('Regression', 'Tumour'): [{'selector': 'th', 'props': 'border-left: 1px solid white'}, {'selector': 'td', 'props': 'border-left: 1px solid #000066'}] }, overwrite=False, axis=0) [14]: Model: Decision Tree Regression Predicted: Tumour Non-Tumour Tumour Non-Tumour Actual Label:         Tumour (Positive) 38 2 18 22 Non-Tumour (Negative) 19 439 6 452 Setting Classes and Linking to External CSS# If you have designed a website then it is likely you will already have an external CSS file that controls the styling of table and cell objects within it. You may want to use these native files rather than duplicate all the CSS in python (and duplicate any maintenance work). Table Attributes# It is very easy to add a class to the main <table> using .set_table_attributes(). This method can also attach inline styles - read more in CSS Hierarchies. [16]: out = s.set_table_attributes('class="my-table-cls"').to_html() print(out[out.find('<table'):][:109]) <table id="T_xyz01" class="my-table-cls"> <thead> <tr> <th class="index_name level0" >Model:</th> Data Cell CSS Classes# New in version 1.2.0 The .set_td_classes() method accepts a DataFrame with matching indices and columns to the underlying Styler’s DataFrame. That DataFrame will contain strings as css-classes to add to individual data cells: the <td> elements of the <table>. Rather than use external CSS we will create our classes internally and add them to table style. We will save adding the borders until the section on tooltips. 
[17]: s.set_table_styles([ # create internal CSS classes {'selector': '.true', 'props': 'background-color: #e6ffe6;'}, {'selector': '.false', 'props': 'background-color: #ffe6e6;'}, ], overwrite=False) cell_color = pd.DataFrame([['true ', 'false ', 'true ', 'false '], ['false ', 'true ', 'false ', 'true ']], index=df.index, columns=df.columns[:4]) s.set_td_classes(cell_color) [17]: Model: Decision Tree Regression Predicted: Tumour Non-Tumour Tumour Non-Tumour Actual Label:         Tumour (Positive) 38 2 18 22 Non-Tumour (Negative) 19 439 6 452 Styler Functions# Acting on Data# We use the following methods to pass your style functions. Both of those methods take a function (and some other keyword arguments) and apply it to the DataFrame in a certain way, rendering CSS styles. .applymap() (elementwise): accepts a function that takes a single value and returns a string with the CSS attribute-value pair. .apply() (column-/row-/table-wise): accepts a function that takes a Series or DataFrame and returns a Series, DataFrame, or numpy array with an identical shape where each element is a string with a CSS attribute-value pair. This method passes each column or row of your DataFrame one-at-a-time or the entire table at once, depending on the axis keyword argument. For columnwise use axis=0, rowwise use axis=1, and for the entire table at once use axis=None. This method is powerful for applying multiple, complex logic to data cells. We create a new DataFrame to demonstrate this. 
[19]: np.random.seed(0) df2 = pd.DataFrame(np.random.randn(10,4), columns=['A','B','C','D']) df2.style [19]:   A B C D 0 1.764052 0.400157 0.978738 2.240893 1 1.867558 -0.977278 0.950088 -0.151357 2 -0.103219 0.410599 0.144044 1.454274 3 0.761038 0.121675 0.443863 0.333674 4 1.494079 -0.205158 0.313068 -0.854096 5 -2.552990 0.653619 0.864436 -0.742165 6 2.269755 -1.454366 0.045759 -0.187184 7 1.532779 1.469359 0.154947 0.378163 8 -0.887786 -1.980796 -0.347912 0.156349 9 1.230291 1.202380 -0.387327 -0.302303 For example we can build a function that colors text if it is negative, and chain this with a function that partially fades cells of negligible value. Since this looks at each element in turn we use applymap. [20]: def style_negative(v, props=''): return props if v < 0 else None s2 = df2.style.applymap(style_negative, props='color:red;')\ .applymap(lambda v: 'opacity: 20%;' if (v < 0.3) and (v > -0.3) else None) s2 [20]:   A B C D 0 1.764052 0.400157 0.978738 2.240893 1 1.867558 -0.977278 0.950088 -0.151357 2 -0.103219 0.410599 0.144044 1.454274 3 0.761038 0.121675 0.443863 0.333674 4 1.494079 -0.205158 0.313068 -0.854096 5 -2.552990 0.653619 0.864436 -0.742165 6 2.269755 -1.454366 0.045759 -0.187184 7 1.532779 1.469359 0.154947 0.378163 8 -0.887786 -1.980796 -0.347912 0.156349 9 1.230291 1.202380 -0.387327 -0.302303 We can also build a function that highlights the maximum value across rows, cols, and the DataFrame all at once. In this case we use apply. Below we highlight the maximum in a column. 
[22]: def highlight_max(s, props=''): return np.where(s == np.nanmax(s.values), props, '') s2.apply(highlight_max, props='color:white;background-color:darkblue', axis=0) [22]:   A B C D 0 1.764052 0.400157 0.978738 2.240893 1 1.867558 -0.977278 0.950088 -0.151357 2 -0.103219 0.410599 0.144044 1.454274 3 0.761038 0.121675 0.443863 0.333674 4 1.494079 -0.205158 0.313068 -0.854096 5 -2.552990 0.653619 0.864436 -0.742165 6 2.269755 -1.454366 0.045759 -0.187184 7 1.532779 1.469359 0.154947 0.378163 8 -0.887786 -1.980796 -0.347912 0.156349 9 1.230291 1.202380 -0.387327 -0.302303 We can use the same function across the different axes, highlighting here the DataFrame maximum in purple, and row maximums in pink. [24]: s2.apply(highlight_max, props='color:white;background-color:pink;', axis=1)\ .apply(highlight_max, props='color:white;background-color:purple', axis=None) [24]:   A B C D 0 1.764052 0.400157 0.978738 2.240893 1 1.867558 -0.977278 0.950088 -0.151357 2 -0.103219 0.410599 0.144044 1.454274 3 0.761038 0.121675 0.443863 0.333674 4 1.494079 -0.205158 0.313068 -0.854096 5 -2.552990 0.653619 0.864436 -0.742165 6 2.269755 -1.454366 0.045759 -0.187184 7 1.532779 1.469359 0.154947 0.378163 8 -0.887786 -1.980796 -0.347912 0.156349 9 1.230291 1.202380 -0.387327 -0.302303 This last example shows how some styles have been overwritten by others. In general the most recent style applied is active but you can read more in the section on CSS hierarchies. You can also apply these styles to more granular parts of the DataFrame - read more in section on subset slicing. It is possible to replicate some of this functionality using just classes but it can be more cumbersome. See item 3) of Optimization Debugging Tip: If you’re having trouble writing your style function, try just passing it into DataFrame.apply. 
Internally, Styler.apply uses DataFrame.apply so the result should be the same, and with DataFrame.apply you will be able to inspect the CSS string output of your intended function in each cell. Acting on the Index and Column Headers# Similar application is achieved for headers by using: .applymap_index() (elementwise): accepts a function that takes a single value and returns a string with the CSS attribute-value pair. .apply_index() (level-wise): accepts a function that takes a Series and returns a Series, or numpy array with an identical shape where each element is a string with a CSS attribute-value pair. This method passes each level of your Index one-at-a-time. To style the index use axis=0 and to style the column headers use axis=1. You can select a level of a MultiIndex but currently no similar subset application is available for these methods. [26]: s2.applymap_index(lambda v: "color:pink;" if v>4 else "color:darkblue;", axis=0) s2.apply_index(lambda s: np.where(s.isin(["A", "B"]), "color:pink;", "color:darkblue;"), axis=1) [26]:   A B C D 0 1.764052 0.400157 0.978738 2.240893 1 1.867558 -0.977278 0.950088 -0.151357 2 -0.103219 0.410599 0.144044 1.454274 3 0.761038 0.121675 0.443863 0.333674 4 1.494079 -0.205158 0.313068 -0.854096 5 -2.552990 0.653619 0.864436 -0.742165 6 2.269755 -1.454366 0.045759 -0.187184 7 1.532779 1.469359 0.154947 0.378163 8 -0.887786 -1.980796 -0.347912 0.156349 9 1.230291 1.202380 -0.387327 -0.302303 Tooltips and Captions# Table captions can be added with the .set_caption() method. You can use table styles to control the CSS relevant to the caption. [27]: s.set_caption("Confusion matrix for multiple cancer prediction models.")\ .set_table_styles([{ 'selector': 'caption', 'props': 'caption-side: bottom; font-size:1.25em;' }], overwrite=False) [27]: Confusion matrix for multiple cancer prediction models. 
Model: Decision Tree Regression Predicted: Tumour Non-Tumour Tumour Non-Tumour Actual Label:         Tumour (Positive) 38 2 18 22 Non-Tumour (Negative) 19 439 6 452 Adding tooltips (since version 1.3.0) can be done using the .set_tooltips() method in the same way you can add CSS classes to data cells by providing a string based DataFrame with intersecting indices and columns. You don’t have to specify a css_class name or any css props for the tooltips, since there are standard defaults, but the option is there if you want more visual control. [29]: tt = pd.DataFrame([['This model has a very strong true positive rate', "This model's total number of false negatives is too high"]], index=['Tumour (Positive)'], columns=df.columns[[0,3]]) s.set_tooltips(tt, props='visibility: hidden; position: absolute; z-index: 1; border: 1px solid #000066;' 'background-color: white; color: #000066; font-size: 0.8em;' 'transform: translate(0px, -24px); padding: 0.6em; border-radius: 0.5em;') [29]: Confusion matrix for multiple cancer prediction models. Model: Decision Tree Regression Predicted: Tumour Non-Tumour Tumour Non-Tumour Actual Label:         Tumour (Positive) 38 2 18 22 Non-Tumour (Negative) 19 439 6 452 The only thing left to do for our table is to add the highlighting borders to draw the audience attention to the tooltips. We will create internal CSS classes as before using table styles. Setting classes always overwrites so we need to make sure we add the previous classes. [31]: s.set_table_styles([ # create internal CSS classes {'selector': '.border-red', 'props': 'border: 2px dashed red;'}, {'selector': '.border-green', 'props': 'border: 2px dashed green;'}, ], overwrite=False) cell_border = pd.DataFrame([['border-green ', ' ', ' ', 'border-red '], [' ', ' ', ' ', ' ']], index=df.index, columns=df.columns[:4]) s.set_td_classes(cell_color + cell_border) [31]: Confusion matrix for multiple cancer prediction models. 
Model: Decision Tree Regression Predicted: Tumour Non-Tumour Tumour Non-Tumour Actual Label:         Tumour (Positive) 38 2 18 22 Non-Tumour (Negative) 19 439 6 452 Finer Control with Slicing# The examples we have shown so far for the Styler.apply and Styler.applymap functions have not demonstrated the use of the subset argument. This is a useful argument which permits a lot of flexibility: it allows you to apply styles to specific rows or columns, without having to code that logic into your style function. The value passed to subset behaves similar to slicing a DataFrame; A scalar is treated as a column label A list (or Series or NumPy array) is treated as multiple column labels A tuple is treated as (row_indexer, column_indexer) Consider using pd.IndexSlice to construct the tuple for the last one. We will create a MultiIndexed DataFrame to demonstrate the functionality. [33]: df3 = pd.DataFrame(np.random.randn(4,4), pd.MultiIndex.from_product([['A', 'B'], ['r1', 'r2']]), columns=['c1','c2','c3','c4']) df3 [33]: c1 c2 c3 c4 A r1 -1.048553 -1.420018 -1.706270 1.950775 r2 -0.509652 -0.438074 -1.252795 0.777490 B r1 -1.613898 -0.212740 -0.895467 0.386902 r2 -0.510805 -1.180632 -0.028182 0.428332 We will use subset to highlight the maximum in the third and fourth columns with red text. We will highlight the subset sliced region in yellow. [34]: slice_ = ['c3', 'c4'] df3.style.apply(highlight_max, props='color:red;', axis=0, subset=slice_)\ .set_properties(**{'background-color': '#ffffb3'}, subset=slice_) [34]:     c1 c2 c3 c4 A r1 -1.048553 -1.420018 -1.706270 1.950775 r2 -0.509652 -0.438074 -1.252795 0.777490 B r1 -1.613898 -0.212740 -0.895467 0.386902 r2 -0.510805 -1.180632 -0.028182 0.428332 If combined with the IndexSlice as suggested then it can index across both dimensions with greater flexibility. 
[35]: idx = pd.IndexSlice slice_ = idx[idx[:,'r1'], idx['c2':'c4']] df3.style.apply(highlight_max, props='color:red;', axis=0, subset=slice_)\ .set_properties(**{'background-color': '#ffffb3'}, subset=slice_) [35]:     c1 c2 c3 c4 A r1 -1.048553 -1.420018 -1.706270 1.950775 r2 -0.509652 -0.438074 -1.252795 0.777490 B r1 -1.613898 -0.212740 -0.895467 0.386902 r2 -0.510805 -1.180632 -0.028182 0.428332 This also provides the flexibility to sub select rows when used with the axis=1. [36]: slice_ = idx[idx[:,'r2'], :] df3.style.apply(highlight_max, props='color:red;', axis=1, subset=slice_)\ .set_properties(**{'background-color': '#ffffb3'}, subset=slice_) [36]:     c1 c2 c3 c4 A r1 -1.048553 -1.420018 -1.706270 1.950775 r2 -0.509652 -0.438074 -1.252795 0.777490 B r1 -1.613898 -0.212740 -0.895467 0.386902 r2 -0.510805 -1.180632 -0.028182 0.428332 There is also scope to provide conditional filtering. Suppose we want to highlight the maximum across columns 2 and 4 only in the case that the sum of columns 1 and 3 is less than -2.0 (essentially excluding rows (:,'r2')). [37]: slice_ = idx[idx[(df3['c1'] + df3['c3']) < -2.0], ['c2', 'c4']] df3.style.apply(highlight_max, props='color:red;', axis=1, subset=slice_)\ .set_properties(**{'background-color': '#ffffb3'}, subset=slice_) [37]:     c1 c2 c3 c4 A r1 -1.048553 -1.420018 -1.706270 1.950775 r2 -0.509652 -0.438074 -1.252795 0.777490 B r1 -1.613898 -0.212740 -0.895467 0.386902 r2 -0.510805 -1.180632 -0.028182 0.428332 Only label-based slicing is supported right now, not positional, and not callables. If your style function uses a subset or axis keyword argument, consider wrapping your function in a functools.partial, partialing out that keyword. my_func2 = functools.partial(my_func, subset=42) Optimization# Generally, for smaller tables and most cases, the rendered HTML does not need to be optimized, and we don’t really recommend it. 
There are two cases where it is worth considering: If you are rendering and styling a very large HTML table, certain browsers have performance issues. If you are using Styler to dynamically create part of online user interfaces and want to improve network performance. Here we recommend the following steps to implement: 1. Remove UUID and cell_ids# Ignore the uuid and set cell_ids to False. This will prevent unnecessary HTML. This is sub-optimal: [38]: df4 = pd.DataFrame([[1,2],[3,4]]) s4 = df4.style This is better: [39]: from pandas.io.formats.style import Styler s4 = Styler(df4, uuid_len=0, cell_ids=False) 2. Use table styles# Use table styles where possible (e.g. for all cells or rows or columns at a time) since the CSS is nearly always more efficient than other formats. This is sub-optimal: [40]: props = 'font-family: "Times New Roman", Times, serif; color: #e83e8c; font-size:1.3em;' df4.style.applymap(lambda x: props, subset=[1]) [40]:   0 1 0 1 2 1 3 4 This is better: [41]: df4.style.set_table_styles([{'selector': 'td.col1', 'props': props}]) [41]:   0 1 0 1 2 1 3 4 3. Set classes instead of using Styler functions# For large DataFrames where the same style is applied to many cells it can be more efficient to declare the styles as classes and then apply those classes to data cells, rather than directly applying styles to cells. It is, however, probably still easier to use the Styler function api when you are not concerned about optimization. 
This is sub-optimal: [42]: df2.style.apply(highlight_max, props='color:white;background-color:darkblue;', axis=0)\ .apply(highlight_max, props='color:white;background-color:pink;', axis=1)\ .apply(highlight_max, props='color:white;background-color:purple', axis=None) [42]:   A B C D 0 1.764052 0.400157 0.978738 2.240893 1 1.867558 -0.977278 0.950088 -0.151357 2 -0.103219 0.410599 0.144044 1.454274 3 0.761038 0.121675 0.443863 0.333674 4 1.494079 -0.205158 0.313068 -0.854096 5 -2.552990 0.653619 0.864436 -0.742165 6 2.269755 -1.454366 0.045759 -0.187184 7 1.532779 1.469359 0.154947 0.378163 8 -0.887786 -1.980796 -0.347912 0.156349 9 1.230291 1.202380 -0.387327 -0.302303 This is better: [43]: build = lambda x: pd.DataFrame(x, index=df2.index, columns=df2.columns) cls1 = build(df2.apply(highlight_max, props='cls-1 ', axis=0)) cls2 = build(df2.apply(highlight_max, props='cls-2 ', axis=1, result_type='expand').values) cls3 = build(highlight_max(df2, props='cls-3 ')) df2.style.set_table_styles([ {'selector': '.cls-1', 'props': 'color:white;background-color:darkblue;'}, {'selector': '.cls-2', 'props': 'color:white;background-color:pink;'}, {'selector': '.cls-3', 'props': 'color:white;background-color:purple;'} ]).set_td_classes(cls1 + cls2 + cls3) [43]:   A B C D 0 1.764052 0.400157 0.978738 2.240893 1 1.867558 -0.977278 0.950088 -0.151357 2 -0.103219 0.410599 0.144044 1.454274 3 0.761038 0.121675 0.443863 0.333674 4 1.494079 -0.205158 0.313068 -0.854096 5 -2.552990 0.653619 0.864436 -0.742165 6 2.269755 -1.454366 0.045759 -0.187184 7 1.532779 1.469359 0.154947 0.378163 8 -0.887786 -1.980796 -0.347912 0.156349 9 1.230291 1.202380 -0.387327 -0.302303 4. Don’t use tooltips# Tooltips require cell_ids to work and they generate extra HTML elements for every data cell. 5. If every byte counts use string replacement# You can remove unnecessary HTML, or shorten the default class names by replacing the default css dict. You can read a little more about CSS below. 
[44]: my_css = {
          "row_heading": "",
          "col_heading": "",
          "index_name": "",
          "col": "c",
          "row": "r",
          "col_trim": "",
          "row_trim": "",
          "level": "l",
          "data": "",
          "blank": "",
      }
      html = Styler(df4, uuid_len=0, cell_ids=False)
      html.set_table_styles([{'selector': 'td', 'props': props},
                             {'selector': '.c1', 'props': 'color:green;'},
                             {'selector': '.l0', 'props': 'color:blue;'}],
                            css_class_names=my_css)
      print(html.to_html())

<style type="text/css">
#T_ td {
  font-family: "Times New Roman", Times, serif;
  color: #e83e8c;
  font-size: 1.3em;
}
#T_ .c1 {
  color: green;
}
#T_ .l0 {
  color: blue;
}
</style>
<table id="T_">
  <thead>
    <tr>
      <th class=" l0" >&nbsp;</th>
      <th class=" l0 c0" >0</th>
      <th class=" l0 c1" >1</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <th class=" l0 r0" >0</th>
      <td class=" r0 c0" >1</td>
      <td class=" r0 c1" >2</td>
    </tr>
    <tr>
      <th class=" l0 r1" >1</th>
      <td class=" r1 c0" >3</td>
      <td class=" r1 c1" >4</td>
    </tr>
  </tbody>
</table>

[45]: html

[45]:   0 1
      0 1 2
      1 3 4

Builtin Styles#
Some styling functions are common enough that we've "built them in" to the Styler, so you don't have to write them and apply them yourself. The current list of such functions is:

.highlight_null: for use with identifying missing data.
.highlight_min and .highlight_max: for use with identifying extremities in data.
.highlight_between and .highlight_quantile: for use with identifying classes within data.
.background_gradient: a flexible method for highlighting cells based on their, or other, values on a numeric scale.
.text_gradient: similar method for highlighting text based on their, or other, values on a numeric scale.
.bar: to display mini-charts within cell backgrounds.

The individual documentation on each function often gives more examples of their arguments.
Highlight Null#

[46]: df2.iloc[0, 2] = np.nan
      df2.iloc[4, 3] = np.nan
      df2.loc[:4].style.highlight_null(color='yellow')

[46]:   A B C D
      0 1.764052 0.400157 nan 2.240893
      1 1.867558 -0.977278 0.950088 -0.151357
      2 -0.103219 0.410599 0.144044 1.454274
      3 0.761038 0.121675 0.443863 0.333674
      4 1.494079 -0.205158 0.313068 nan

Highlight Min or Max#

[47]: df2.loc[:4].style.highlight_max(axis=1, props='color:white; font-weight:bold; background-color:darkblue;')

[47]: (same table as above, with row-wise maxima highlighted)

Highlight Between#
This method accepts ranges as float, or NumPy arrays or Series provided the indexes match.

[48]: left = pd.Series([1.0, 0.0, 1.0], index=["A", "B", "D"])
      df2.loc[:4].style.highlight_between(left=left, right=1.5, axis=1, props='color:white; background-color:purple;')

[48]: (same table as above, with in-range cells highlighted)

Highlight Quantile#
Useful for detecting the highest or lowest percentile values.

[49]: df2.loc[:4].style.highlight_quantile(q_left=0.85, axis=None, color='yellow')

[49]: (same table as above, with top-quantile cells highlighted)

Background Gradient and Text Gradient#
You can create "heatmaps" with the background_gradient and text_gradient methods. These require matplotlib, and we'll use Seaborn to get a nice colormap.
[50]: import seaborn as sns
      cm = sns.light_palette("green", as_cmap=True)
      df2.style.background_gradient(cmap=cm)

[50]:   A B C D
      0 1.764052 0.400157 nan 2.240893
      1 1.867558 -0.977278 0.950088 -0.151357
      2 -0.103219 0.410599 0.144044 1.454274
      3 0.761038 0.121675 0.443863 0.333674
      4 1.494079 -0.205158 0.313068 nan
      5 -2.552990 0.653619 0.864436 -0.742165
      6 2.269755 -1.454366 0.045759 -0.187184
      7 1.532779 1.469359 0.154947 0.378163
      8 -0.887786 -1.980796 -0.347912 0.156349
      9 1.230291 1.202380 -0.387327 -0.302303

[51]: df2.style.text_gradient(cmap=cm)

[51]: (same table as above, with the gradient applied to the text color instead of the background)

.background_gradient and .text_gradient have a number of keyword arguments to customise the gradients and colors. See the documentation.

Set properties#
Use Styler.set_properties when the style doesn't actually depend on the values. This is just a simple wrapper for .applymap where the function returns the same properties for all cells.

[52]: df2.loc[:4].style.set_properties(**{'background-color': 'black',
                                          'color': 'lawngreen',
                                          'border-color': 'white'})

[52]: (rows 0-4 of the table above, rendered lawngreen-on-black)

Bar charts#
You can include "bar charts" in your DataFrame.
[53]: df2.style.bar(subset=['A', 'B'], color='#d65f5f')

[53]: (the table above, with in-cell bars for columns A and B)

Additional keyword arguments give more control over centering and positioning, and you can pass a list of [color_negative, color_positive] to highlight lower and higher values, or a matplotlib colormap.

To showcase an example, here's how you can change the above with the new align option, combined with setting vmin and vmax limits, the width of the figure, and underlying CSS props of cells, leaving space to display both the text and the bars. We also use text_gradient to color the text the same as the bars, using a matplotlib colormap (although in this case the visualization is probably better without this additional effect).
[54]: df2.style.format('{:.3f}', na_rep="")\
          .bar(align=0, vmin=-2.5, vmax=2.5, cmap="bwr", height=50, width=60,
               props="width: 120px; border-right: 1px solid black;")\
          .text_gradient(cmap="bwr", vmin=-2.5, vmax=2.5)

[54]:   A B C D
      0 1.764 0.400  2.241
      1 1.868 -0.977 0.950 -0.151
      2 -0.103 0.411 0.144 1.454
      3 0.761 0.122 0.444 0.334
      4 1.494 -0.205 0.313
      5 -2.553 0.654 0.864 -0.742
      6 2.270 -1.454 0.046 -0.187
      7 1.533 1.469 0.155 0.378
      8 -0.888 -1.981 -0.348 0.156
      9 1.230 1.202 -0.387 -0.302

The following example aims to give a highlight of the behavior of the new align options:

[56]: HTML(head)

[56]: (demo table of in-cell bar charts for the align options left, right, zero, mid, mean and 99, rendered across all-negative, mixed, all-positive and large-positive data)

Sharing styles#
Say you have a lovely style built up for a DataFrame, and now you want to apply the same style to a second DataFrame.
Export the style with df1.style.export, and use it on a second DataFrame with df2.style.use.

[57]: style1 = df2.style\
          .applymap(style_negative, props='color:red;')\
          .applymap(lambda v: 'opacity: 20%;' if (v < 0.3) and (v > -0.3) else None)\
          .set_table_styles([{"selector": "th", "props": "color: blue;"}])\
          .hide(axis="index")
      style1

[57]: A B C D
      1.764052 0.400157 nan 2.240893
      1.867558 -0.977278 0.950088 -0.151357
      -0.103219 0.410599 0.144044 1.454274
      0.761038 0.121675 0.443863 0.333674
      1.494079 -0.205158 0.313068 nan
      -2.552990 0.653619 0.864436 -0.742165
      2.269755 -1.454366 0.045759 -0.187184
      1.532779 1.469359 0.154947 0.378163
      -0.887786 -1.980796 -0.347912 0.156349
      1.230291 1.202380 -0.387327 -0.302303

[58]: style2 = df3.style
      style2.use(style1.export())
      style2

[58]: c1 c2 c3 c4
      -1.048553 -1.420018 -1.706270 1.950775
      -0.509652 -0.438074 -1.252795 0.777490
      -1.613898 -0.212740 -0.895467 0.386902
      -0.510805 -1.180632 -0.028182 0.428332

Notice that you're able to share the styles even though they're data aware. The styles are re-evaluated on the new DataFrame they've been used upon.

Limitations#

DataFrame only (use Series.to_frame().style).
The index and columns do not need to be unique, but certain styling functions can only work with unique indexes.
No large repr, and construction performance isn't great; although we have some HTML optimizations.
You can only apply styles, you can't insert new HTML entities, except via subclassing.

Other Fun and Useful Stuff#
Here are a few interesting examples.

Widgets#
Styler interacts pretty well with widgets. If you're viewing this online instead of running the notebook yourself, you're missing out on interactively adjusting the color palette.
[59]: from ipywidgets import widgets
      @widgets.interact
      def f(h_neg=(0, 359, 1), h_pos=(0, 359), s=(0., 99.9), l=(0., 99.9)):
          return df2.style.background_gradient(
              cmap=sns.palettes.diverging_palette(h_neg=h_neg, h_pos=h_pos, s=s, l=l, as_cmap=True)
          )

Magnify#

[60]: def magnify():
          return [dict(selector="th", props=[("font-size", "4pt")]),
                  dict(selector="td", props=[('padding', "0em 0em")]),
                  dict(selector="th:hover", props=[("font-size", "12pt")]),
                  dict(selector="tr:hover td:hover", props=[('max-width', '200px'),
                                                            ('font-size', '12pt')])]

[61]: np.random.seed(25)
      cmap = sns.diverging_palette(5, 250, as_cmap=True)
      bigdf = pd.DataFrame(np.random.randn(20, 25)).cumsum()
      bigdf.style.background_gradient(cmap, axis=1)\
          .set_properties(**{'max-width': '80px', 'font-size': '1pt'})\
          .set_caption("Hover to magnify")\
          .format(precision=2)\
          .set_table_styles(magnify())

[61]: (rendered 20×25 gradient-styled table, captioned "Hover to magnify"; hovered cells enlarge to a readable size)

Sticky Headers#
If you display a large matrix or DataFrame in a notebook, but you want to always see the column and row headers you can use the .set_sticky method which manipulates the table
styles CSS.

[62]: bigdf = pd.DataFrame(np.random.randn(16, 100))
      bigdf.style.set_sticky(axis="index")

[62]: (rendered 16×100 table; the index column stays fixed while the table scrolls)

It is also possible to stick MultiIndexes and even only specific levels.
[63]: bigdf.index = pd.MultiIndex.from_product([["A", "B"], [0, 1], [0, 1, 2, 3]])
      bigdf.style.set_sticky(axis="index", pixel_size=18, levels=[1, 2])

[63]: (rendered table; MultiIndex levels 1 and 2 stay fixed while the table scrolls)
-1.638279 0.111737 -0.777037 0.251783 1.126303 -0.808798 0.422064 -0.349264 B 0 0 -0.356362 -0.089227 0.609373 0.542382 -0.768681 -0.048074 2.015458 -1.552351 0.251552 1.459635 0.949707 0.339465 -0.001372 1.798589 1.559163 0.231783 0.423141 -0.310530 0.353795 2.173336 -0.196247 -0.375636 -0.858221 0.258410 0.656430 0.960819 1.137893 1.553405 0.038981 -0.632038 -0.132009 -1.834997 -0.242576 -0.297879 -0.441559 -0.769691 0.224077 -0.153009 0.519526 -0.680188 0.535851 0.671496 -0.183064 0.301234 1.288256 -2.478240 -0.360403 0.424067 -0.834659 -0.128464 -0.489013 -0.014888 -1.461230 -1.435223 -1.319802 1.083675 0.979140 -0.375291 1.110189 -1.011351 0.587886 -0.822775 -1.183865 1.455173 1.134328 0.239403 -0.837991 -1.130932 0.783168 1.845520 1.437072 -1.198443 1.379098 2.129113 0.260096 -0.011975 0.043302 0.722941 1.028152 -0.235806 1.145245 -1.359598 0.232189 0.503712 -0.614264 -0.530606 -2.435803 -0.255238 -0.064423 0.784643 0.256346 0.128023 1.414103 -1.118659 0.877353 0.500561 0.463651 -2.034512 -0.981683 -0.691944 1 -1.113376 -1.169402 0.680539 -1.534212 1.653817 -1.295181 -0.566826 0.477014 1.413371 0.517105 1.401153 -0.872685 0.830957 0.181507 -0.145616 0.694592 -0.751208 0.324444 0.681973 -0.054972 0.917776 -1.024810 -0.206446 -0.600113 0.852805 1.455109 -0.079769 0.076076 0.207699 -1.850458 -0.124124 -0.610871 -0.883362 0.219049 -0.685094 -0.645330 -0.242805 -0.775602 0.233070 2.422642 -1.423040 -0.582421 0.968304 -0.701025 -0.167850 0.277264 1.301231 0.301205 -3.081249 -0.562868 0.192944 -0.664592 0.565686 0.190913 -0.841858 -1.856545 -1.022777 1.295968 0.451921 0.659955 0.065818 -0.319586 0.253495 -1.144646 -0.483404 0.555902 0.807069 0.714196 0.661196 0.053667 0.346833 -1.288977 -0.386734 -1.262127 0.477495 -0.494034 -0.911414 1.152963 -0.342365 -0.160187 0.470054 -0.853063 -1.387949 -0.257257 -1.030690 -0.110210 0.328911 -0.555923 0.987713 -0.501957 2.069887 -0.067503 0.316029 -1.506232 2.201621 0.492097 -0.085193 -0.977822 1.039147 -0.653932 2 -0.405638 
-1.402027 -1.166242 1.306184 0.856283 -1.236170 -0.646721 -1.474064 0.082960 0.090310 -0.169977 0.406345 0.915427 -0.974503 0.271637 1.539184 -0.098866 -0.525149 1.063933 0.085827 -0.129622 0.947959 -0.072496 -0.237592 0.012549 1.065761 0.996596 -0.172481 2.583139 -0.028578 -0.254856 1.328794 -1.592951 2.434350 -0.341500 -0.307719 -1.333273 -1.100845 0.209097 1.734777 0.639632 0.424779 -0.129327 0.905029 -0.482909 1.731628 -2.783425 -0.333677 -0.110895 1.212636 -0.208412 0.427117 1.348563 0.043859 1.772519 -1.416106 0.401155 0.807157 0.303427 -1.246288 0.178774 -0.066126 -1.862288 1.241295 0.377021 -0.822320 -0.749014 1.463652 1.602268 -1.043877 1.185290 -0.565783 -1.076879 1.360241 -0.121991 0.991043 1.007952 0.450185 -0.744376 1.388876 -0.316847 -0.841655 -1.056842 -0.500226 0.096959 1.176896 -2.939652 1.792213 0.316340 0.303218 1.024967 -0.590871 -0.453326 -0.795981 -0.393301 -0.374372 -1.270199 1.618372 1.197727 -0.914863 3 -0.625210 0.288911 0.288374 -1.372667 -0.591395 -0.478942 1.335664 -0.459855 -1.615975 -1.189676 0.374767 -2.488733 0.586656 -1.422008 0.496030 1.911128 -0.560660 -0.499614 -0.372171 -1.833069 0.237124 -0.944446 0.912140 0.359790 -1.359235 0.166966 -0.047107 -0.279789 -0.594454 -0.739013 -1.527645 0.401668 1.791252 -2.774848 0.523873 2.207585 0.488999 -0.339283 0.131711 0.018409 1.186551 -0.424318 1.554994 -0.205917 -0.934975 0.654102 -1.227761 -0.461025 -0.421201 -0.058615 -0.584563 0.336913 -0.477102 -1.381463 0.757745 -0.268968 0.034870 1.231686 0.236600 1.234720 -0.040247 0.029582 1.034905 0.380204 -0.012108 -0.859511 -0.990340 -1.205172 -1.030178 0.426676 0.497796 -0.876808 0.957963 0.173016 0.131612 -1.003556 -1.069908 -1.799207 1.429598 -0.116015 -1.454980 0.261917 0.444412 0.273290 0.844115 0.218745 -1.033350 -1.188295 0.058373 0.800523 -1.627068 0.861651 0.871018 -0.003733 -0.243354 0.947296 0.509406 0.044546 0.266896 1.337165 1 0 0.699142 -1.928033 0.105363 1.042322 0.715206 -0.763783 0.098798 -1.157898 0.134105 0.042041 0.674826 
0.165649 -1.622970 -3.131274 0.597649 -1.880331 0.663980 -0.256033 -1.524058 0.492799 0.221163 0.429622 -0.659584 1.264506 -0.032131 -2.114907 -0.264043 0.457835 -0.676837 -0.629003 0.489145 -0.551686 0.942622 -0.512043 -0.455893 0.021244 -0.178035 -2.498073 -0.171292 0.323510 -0.545163 -0.668909 -0.150031 0.521620 -0.428980 0.676463 0.369081 -0.724832 0.793542 1.237422 0.401275 2.141523 0.249012 0.486755 -0.163274 0.592222 -0.292600 -0.547168 0.619104 -0.013605 0.776734 0.131424 1.189480 -0.666317 -0.939036 1.105515 0.621452 1.586605 -0.760970 1.649646 0.283199 1.275812 -0.452012 0.301361 -0.976951 -0.268106 -0.079255 -1.258332 2.216658 -1.175988 -0.863497 -1.653022 -0.561514 0.450753 0.417200 0.094676 -2.231054 1.316862 -0.477441 0.646654 -0.200252 1.074354 -0.058176 0.120990 0.222522 -0.179507 0.421655 -0.914341 -0.234178 0.741524 1 0.932714 1.423761 -1.280835 0.347882 -0.863171 -0.852580 1.044933 2.094536 0.806206 0.416201 -1.109503 0.145302 -0.996871 0.325456 -0.605081 1.175326 1.645054 0.293432 -2.766822 1.032849 0.079115 -1.414132 1.463376 2.335486 0.411951 -0.048543 0.159284 -0.651554 -1.093128 1.568390 -0.077807 -2.390779 -0.842346 -0.229675 -0.999072 -1.367219 -0.792042 -1.878575 1.451452 1.266250 -0.734315 0.266152 0.735523 -0.430860 0.229864 0.850083 -2.241241 1.063850 0.289409 -0.354360 0.113063 -0.173006 1.386998 1.886236 0.587119 -0.961133 0.399295 1.461560 0.310823 0.280220 -0.879103 -1.326348 0.003337 -1.085908 -0.436723 2.111926 0.106068 0.615597 2.152996 -0.196155 0.025747 -0.039061 0.656823 -0.347105 2.513979 1.758070 1.288473 -0.739185 -0.691592 -0.098728 -0.276386 0.489981 0.516278 -0.838258 0.596673 -0.331053 0.521174 -0.145023 0.836693 -1.092166 0.361733 -1.169981 0.046731 0.655377 -0.756852 1.285805 -0.095019 0.360253 1.370621 0.083010 2 0.888893 2.288725 -1.032332 0.212273 -1.091826 1.692498 1.025367 0.550854 0.679430 -1.335712 -0.798341 2.265351 -1.006938 2.059761 0.420266 -1.189657 0.506674 0.260847 -0.533145 0.727267 1.412276 1.482106 
-0.996258 0.588641 -0.412642 -0.920733 -0.874691 0.839002 0.501668 -0.342493 -0.533806 -2.146352 -0.597339 0.115726 0.850683 -0.752239 0.377263 -0.561982 0.262783 -0.356676 -0.367462 0.753611 -1.267414 -1.330698 -0.536453 0.840938 -0.763108 -0.268100 -0.677424 1.606831 0.151732 -2.085701 1.219296 0.400863 0.591165 -1.485213 1.501979 1.196569 -0.214154 0.339554 -0.034446 1.176452 0.546340 -1.255630 -1.309210 -0.445437 0.189437 -0.737463 0.843767 -0.605632 -0.060777 0.409310 1.285569 -0.622638 1.018193 0.880680 0.046805 -1.818058 -0.809829 0.875224 0.409569 -0.116621 -1.238919 3.305724 -0.024121 -1.756500 1.328958 0.507593 -0.866554 -2.240848 -0.661376 -0.671824 0.215720 -0.296326 0.481402 0.829645 -0.721025 1.263914 0.549047 -1.234945 3 -1.978838 0.721823 -0.559067 -1.235243 0.420716 -0.598845 0.359576 -0.619366 -1.757772 -1.156251 0.705212 0.875071 -1.020376 0.394760 -0.147970 0.230249 1.355203 1.794488 2.678058 -0.153565 -0.460959 -0.098108 -1.407930 -2.487702 1.823014 0.099873 -0.517603 -0.509311 -1.833175 -0.900906 0.459493 -0.655440 1.466122 -1.531389 -0.422106 0.421422 0.578615 0.259795 0.018941 -0.168726 1.611107 -1.586550 -1.384941 0.858377 1.033242 1.701343 1.748344 -0.371182 -0.843575 2.089641 -0.345430 -1.740556 0.141915 -2.197138 0.689569 -0.150025 0.287456 0.654016 -1.521919 -0.918008 -0.587528 0.230636 0.262637 0.615674 0.600044 -0.494699 -0.743089 0.220026 -0.242207 0.528216 -0.328174 -1.536517 -1.476640 -1.162114 -1.260222 1.106252 -1.467408 -0.349341 -1.841217 0.031296 -0.076475 -0.353383 0.807545 0.779064 -2.398417 -0.267828 1.549734 0.814397 0.284770 -0.659369 0.761040 -0.722067 0.810332 1.501295 1.440865 -1.367459 -0.700301 -1.540662 0.159837 -0.625415 HTML Escaping# Suppose you have to display HTML within HTML, that can be a bit of pain when the renderer can’t distinguish. You can use the escape formatting option to handle this, and even use it within a formatter that contains HTML itself. 
[64]: df4 = pd.DataFrame([['<div></div>', '"&other"', '<span></span>']]) df4.style [64]:   0 1 2 0 "&other" (the unescaped <div> and <span> tags are rendered as real, empty HTML elements, so they vanish from the displayed table) [65]: df4.style.format(escape="html") [65]:   0 1 2 0 <div></div> "&other" <span></span> [66]: df4.style.format('<a href="https://pandas.pydata.org" target="_blank">{}</a>', escape="html") [66]:   0 1 2 0 <div></div> "&other" <span></span> (each escaped value is now wrapped in a working hyperlink) Export to Excel# Some support (since version 0.20.0) is available for exporting styled DataFrames to Excel worksheets using the OpenPyXL or XlsxWriter engines. CSS2.2 properties handled include: background-color, border-style properties, border-width properties, border-color properties, color, font-family, font-style, font-weight, text-align, text-decoration, vertical-align, and white-space: nowrap. Shorthand and side-specific border properties are supported (e.g. border-style and border-left-style) as well as the border shorthands for all sides (border: 1px solid green) or specified sides (border-left: 1px solid green). Using a border shorthand will override any border properties set before it (see the CSS Working Group for more details). Only CSS2 named colors and hex colors of the form #rgb or #rrggbb are currently supported. The following pseudo CSS properties are also available to set Excel-specific style properties: number-format, and border-style (for Excel-specific styles: "hair", "mediumDashDot", "dashDotDot", "mediumDashDotDot", "dashDot", "slantDashDot", or "mediumDashed"). Table level styles and data cell CSS-classes are not included in the export to Excel: individual cells must have their properties mapped by the Styler.apply and/or Styler.applymap methods. [67]: df2.style.\ applymap(style_negative, props='color:red;').\ highlight_max(axis=0).\ to_excel('styled.xlsx', engine='openpyxl') A screenshot of the output: (screenshot omitted) Export to LaTeX# There is support (since version 1.3.0) for exporting a Styler to LaTeX. The documentation for the .to_latex method gives further detail and numerous examples.
More About CSS and HTML# The Cascading Style Sheets (CSS) language, which is designed to influence how a browser renders HTML elements, has its own peculiarities. It never reports errors: it just silently ignores them and doesn't render your objects the way you intend, which can be frustrating. Here is a very brief primer on how Styler creates HTML and interacts with CSS, with advice on common pitfalls to avoid. CSS Classes and Ids# The precise structure of the CSS classes attached to each cell is as follows. Cells with Index and Column names include index_name and level<k>, where k is its level in a MultiIndex. Index label cells include row_heading, level<k> (where k is the level in a MultiIndex), and row<m> (where m is the numeric position of the row). Column label cells include col_heading, level<k> (where k is the level in a MultiIndex), and col<n> (where n is the numeric position of the column). Data cells include data, row<m> (where m is the numeric position of the cell), and col<n> (where n is the numeric position of the cell). Blank cells include blank. Trimmed cells include col_trim or row_trim. The structure of the id is T_uuid_level<k>_row<m>_col<n>, where level<k> is used only on headings, and headings will only have either row<m> or col<n>, whichever is needed. By default we've also prepended each row/column identifier with a UUID unique to each DataFrame so that the style from one doesn't collide with the styling from another within the same notebook or page. You can read more about the use of UUIDs in Optimization. We can see an example of the HTML by calling the .to_html() method.
[68]: print(pd.DataFrame([[1,2],[3,4]], index=['i1', 'i2'], columns=['c1', 'c2']).style.to_html()) <style type="text/css"> </style> <table id="T_d505a"> <thead> <tr> <th class="blank level0" >&nbsp;</th> <th id="T_d505a_level0_col0" class="col_heading level0 col0" >c1</th> <th id="T_d505a_level0_col1" class="col_heading level0 col1" >c2</th> </tr> </thead> <tbody> <tr> <th id="T_d505a_level0_row0" class="row_heading level0 row0" >i1</th> <td id="T_d505a_row0_col0" class="data row0 col0" >1</td> <td id="T_d505a_row0_col1" class="data row0 col1" >2</td> </tr> <tr> <th id="T_d505a_level0_row1" class="row_heading level0 row1" >i2</th> <td id="T_d505a_row1_col0" class="data row1 col0" >3</td> <td id="T_d505a_row1_col1" class="data row1 col1" >4</td> </tr> </tbody> </table> CSS Hierarchies# The examples have shown that when CSS styles overlap, the one that comes last in the HTML render takes precedence. So the following yield different results: [69]: df4 = pd.DataFrame([['text']]) df4.style.applymap(lambda x: 'color:green;')\ .applymap(lambda x: 'color:red;') [69]:   0 0 text (renders red: the second applymap comes last) [70]: df4.style.applymap(lambda x: 'color:red;')\ .applymap(lambda x: 'color:green;') [70]:   0 0 text (renders green: the second applymap comes last) This is only true for CSS rules that are equivalent in hierarchy, or importance.
You can read more about CSS specificity here, but for our purposes it suffices to summarize the key points: A CSS importance score for each HTML element is derived by starting at zero and adding: 1000 for an inline style attribute; 100 for each ID; 10 for each attribute, class or pseudo-class; and 1 for each element name or pseudo-element. Let's use this to describe the action of the following configurations. [71]: df4.style.set_uuid('a_')\ .set_table_styles([{'selector': 'td', 'props': 'color:red;'}])\ .applymap(lambda x: 'color:green;') [71]:   0 0 text This text is red because the generated selector #T_a_ td is worth 101 (ID plus element), whereas #T_a_row0_col0 is only worth 100 (ID), so it is considered inferior even though it comes later in the HTML. [72]: df4.style.set_uuid('b_')\ .set_table_styles([{'selector': 'td', 'props': 'color:red;'}, {'selector': '.cls-1', 'props': 'color:blue;'}])\ .applymap(lambda x: 'color:green;')\ .set_td_classes(pd.DataFrame([['cls-1']])) [72]:   0 0 text In the above case the text is blue because the selector #T_b_ .cls-1 is worth 110 (ID plus class), which takes precedence. [73]: df4.style.set_uuid('c_')\ .set_table_styles([{'selector': 'td', 'props': 'color:red;'}, {'selector': '.cls-1', 'props': 'color:blue;'}, {'selector': 'td.data', 'props': 'color:yellow;'}])\ .applymap(lambda x: 'color:green;')\ .set_td_classes(pd.DataFrame([['cls-1']])) [73]:   0 0 text Now we have created another table style, and this time the selector #T_c_ td.data (ID plus element plus class) is bumped up to 111. If your style fails to be applied and it's really frustrating, try the !important trump card. [74]: df4.style.set_uuid('d_')\ .set_table_styles([{'selector': 'td', 'props': 'color:red;'}, {'selector': '.cls-1', 'props': 'color:blue;'}, {'selector': 'td.data', 'props': 'color:yellow;'}])\ .applymap(lambda x: 'color:green !important;')\ .set_td_classes(pd.DataFrame([['cls-1']])) [74]:   0 0 text Finally got that green text after all!
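The scoring rule above can be sketched as a toy calculator. Note that the summed score is a simplification (real CSS specificity compares components in order rather than summing), but it reproduces the numbers used in these examples:

```python
def importance(inline=False, ids=0, classes=0, elements=0):
    """Simplified CSS importance score: 1000 for an inline style,
    100 per ID, 10 per class/attribute/pseudo-class, 1 per element."""
    return 1000 * inline + 100 * ids + 10 * classes + 1 * elements

print(importance(ids=1, elements=1))             # "#T_a_ td"       -> 101
print(importance(ids=1))                         # "#T_a_row0_col0" -> 100
print(importance(ids=1, classes=1))              # "#T_b_ .cls-1"   -> 110
print(importance(ids=1, classes=1, elements=1))  # "#T_c_ td.data"  -> 111
```

The highest score wins among competing rules, which is why the table-style selectors above beat the per-cell styles until !important is used.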
Extensibility# The core of pandas is, and will remain, its "high-performance, easy-to-use data structures". With that in mind, we hope that DataFrame.style accomplishes two goals: to provide an API that is pleasing to use interactively and is "good enough" for many tasks, and to provide the foundations for dedicated libraries to build on. If you build a great library on top of this, let us know and we'll link to it. Subclassing# If the default template doesn't quite suit your needs, you can subclass Styler and extend or override the template. We'll show an example of extending the default template to insert a custom header before each table. [75]: from jinja2 import Environment, ChoiceLoader, FileSystemLoader from IPython.display import HTML from pandas.io.formats.style import Styler We'll use the following template: [76]: with open("templates/myhtml.tpl") as f: print(f.read()) {% extends "html_table.tpl" %} {% block table %} <h1>{{ table_title|default("My Table") }}</h1> {{ super() }} {% endblock table %} Now that we've created a template, we need to set up a subclass of Styler that knows about it. [77]: class MyStyler(Styler): env = Environment( loader=ChoiceLoader([ FileSystemLoader("templates"), # contains ours Styler.loader, # the default ]) ) template_html_table = env.get_template("myhtml.tpl") Notice that we include the original loader in our environment's loader. That's because we extend the original template, so the Jinja environment needs to be able to find it. Now we can use that custom styler. Its __init__ takes a DataFrame. [78]: MyStyler(df3) [78]: My Table     c1 c2 c3 c4 A r1 -1.048553 -1.420018 -1.706270 1.950775 r2 -0.509652 -0.438074 -1.252795 0.777490 B r1 -1.613898 -0.212740 -0.895467 0.386902 r2 -0.510805 -1.180632 -0.028182 0.428332 Our custom template accepts a table_title keyword. We can provide the value in the .to_html method.
[79]: HTML(MyStyler(df3).to_html(table_title="Extending Example")) [79]: Extending Example     c1 c2 c3 c4 A r1 -1.048553 -1.420018 -1.706270 1.950775 r2 -0.509652 -0.438074 -1.252795 0.777490 B r1 -1.613898 -0.212740 -0.895467 0.386902 r2 -0.510805 -1.180632 -0.028182 0.428332 For convenience, we provide the Styler.from_custom_template method that does the same as the custom subclass. [80]: EasyStyler = Styler.from_custom_template("templates", "myhtml.tpl") HTML(EasyStyler(df3).to_html(table_title="Another Title")) [80]: Another Title     c1 c2 c3 c4 A r1 -1.048553 -1.420018 -1.706270 1.950775 r2 -0.509652 -0.438074 -1.252795 0.777490 B r1 -1.613898 -0.212740 -0.895467 0.386902 r2 -0.510805 -1.180632 -0.028182 0.428332 Template Structure# Here's the template structure for both the style generation template and the table generation template: Style template: [82]: HTML(style_structure) [82]: before_style style <style type="text/css"> table_styles before_cellstyle cellstyle </style> Table template: [84]: HTML(table_structure) [84]: before_table table <table ...> caption thead before_head_rows head_tr (loop over headers) after_head_rows tbody before_rows tr (loop over data rows) after_rows </table> after_table See the template in the GitHub repo for more details.
user_guide/style.html
Policies
Version policy# Changed in version 1.0.0. pandas uses a loose variant of semantic versioning (SemVer) to govern deprecations, API compatibility, and version numbering. A pandas release number is made up of MAJOR.MINOR.PATCH. API breaking changes should only occur in major releases. These changes will be documented, with clear guidance on what is changing, why it's changing, and how to migrate existing code to the new behavior. Whenever possible, a deprecation path will be provided rather than an outright breaking change. pandas will introduce deprecations in minor releases. These deprecations will preserve the existing behavior while emitting a warning that provides guidance on: how to achieve similar behavior if an alternative is available, and the pandas version in which the deprecation will be enforced. We will not introduce new deprecations in patch releases. Deprecations will only be enforced in major releases. For example, if a behavior is deprecated in pandas 1.2.0, it will continue to work, with a warning, for all releases in the 1.x series. The behavior will change and the deprecation will be removed in the next major release (2.0.0). Note pandas will sometimes make behavior-changing bug fixes as part of minor or patch releases. Whether a change is a bug fix or an API-breaking change is a judgement call. We'll do our best, and we invite you to participate in development discussion on the issue tracker or mailing list. These policies do not apply to features marked as experimental in the documentation. pandas may change the behavior of experimental features at any time. Python support# pandas mirrors the NumPy guidelines for Python support.
development/policies.html
DataFrame
Constructor# DataFrame([data, index, columns, dtype, copy]) Two-dimensional, size-mutable, potentially heterogeneous tabular data. Attributes and underlying data# Axes DataFrame.index The index (row labels) of the DataFrame. DataFrame.columns The column labels of the DataFrame. DataFrame.dtypes Return the dtypes in the DataFrame. DataFrame.info([verbose, buf, max_cols, ...]) Print a concise summary of a DataFrame. DataFrame.select_dtypes([include, exclude]) Return a subset of the DataFrame's columns based on the column dtypes. DataFrame.values Return a Numpy representation of the DataFrame. DataFrame.axes Return a list representing the axes of the DataFrame. DataFrame.ndim Return an int representing the number of axes / array dimensions. DataFrame.size Return an int representing the number of elements in this object. DataFrame.shape Return a tuple representing the dimensionality of the DataFrame. DataFrame.memory_usage([index, deep]) Return the memory usage of each column in bytes. DataFrame.empty Indicator whether Series/DataFrame is empty. DataFrame.set_flags(*[, copy, ...]) Return a new object with updated flags. Conversion# DataFrame.astype(dtype[, copy, errors]) Cast a pandas object to a specified dtype dtype. DataFrame.convert_dtypes([infer_objects, ...]) Convert columns to best possible dtypes using dtypes supporting pd.NA. DataFrame.infer_objects() Attempt to infer better dtypes for object columns. DataFrame.copy([deep]) Make a copy of this object's indices and data. DataFrame.bool() Return the bool of a single element Series or DataFrame. Indexing, iteration# DataFrame.head([n]) Return the first n rows. DataFrame.at Access a single value for a row/column label pair. DataFrame.iat Access a single value for a row/column pair by integer position. DataFrame.loc Access a group of rows and columns by label(s) or a boolean array. DataFrame.iloc Purely integer-location based indexing for selection by position. 
DataFrame.insert(loc, column, value[, ...]) Insert column into DataFrame at specified location. DataFrame.__iter__() Iterate over info axis. DataFrame.items() Iterate over (column name, Series) pairs. DataFrame.iteritems() (DEPRECATED) Iterate over (column name, Series) pairs. DataFrame.keys() Get the 'info axis' (see Indexing for more). DataFrame.iterrows() Iterate over DataFrame rows as (index, Series) pairs. DataFrame.itertuples([index, name]) Iterate over DataFrame rows as namedtuples. DataFrame.lookup(row_labels, col_labels) (DEPRECATED) Label-based "fancy indexing" function for DataFrame. DataFrame.pop(item) Return item and drop from frame. DataFrame.tail([n]) Return the last n rows. DataFrame.xs(key[, axis, level, drop_level]) Return cross-section from the Series/DataFrame. DataFrame.get(key[, default]) Get item from object for given key (ex: DataFrame column). DataFrame.isin(values) Whether each element in the DataFrame is contained in values. DataFrame.where(cond[, other, inplace, ...]) Replace values where the condition is False. DataFrame.mask(cond[, other, inplace, axis, ...]) Replace values where the condition is True. DataFrame.query(expr, *[, inplace]) Query the columns of a DataFrame with a boolean expression. For more information on .at, .iat, .loc, and .iloc, see the indexing documentation. Binary operator functions# DataFrame.add(other[, axis, level, fill_value]) Get Addition of dataframe and other, element-wise (binary operator add). DataFrame.sub(other[, axis, level, fill_value]) Get Subtraction of dataframe and other, element-wise (binary operator sub). DataFrame.mul(other[, axis, level, fill_value]) Get Multiplication of dataframe and other, element-wise (binary operator mul). DataFrame.div(other[, axis, level, fill_value]) Get Floating division of dataframe and other, element-wise (binary operator truediv). DataFrame.truediv(other[, axis, level, ...]) Get Floating division of dataframe and other, element-wise (binary operator truediv). 
DataFrame.floordiv(other[, axis, level, ...]) Get Integer division of dataframe and other, element-wise (binary operator floordiv). DataFrame.mod(other[, axis, level, fill_value]) Get Modulo of dataframe and other, element-wise (binary operator mod). DataFrame.pow(other[, axis, level, fill_value]) Get Exponential power of dataframe and other, element-wise (binary operator pow). DataFrame.dot(other) Compute the matrix multiplication between the DataFrame and other. DataFrame.radd(other[, axis, level, fill_value]) Get Addition of dataframe and other, element-wise (binary operator radd). DataFrame.rsub(other[, axis, level, fill_value]) Get Subtraction of dataframe and other, element-wise (binary operator rsub). DataFrame.rmul(other[, axis, level, fill_value]) Get Multiplication of dataframe and other, element-wise (binary operator rmul). DataFrame.rdiv(other[, axis, level, fill_value]) Get Floating division of dataframe and other, element-wise (binary operator rtruediv). DataFrame.rtruediv(other[, axis, level, ...]) Get Floating division of dataframe and other, element-wise (binary operator rtruediv). DataFrame.rfloordiv(other[, axis, level, ...]) Get Integer division of dataframe and other, element-wise (binary operator rfloordiv). DataFrame.rmod(other[, axis, level, fill_value]) Get Modulo of dataframe and other, element-wise (binary operator rmod). DataFrame.rpow(other[, axis, level, fill_value]) Get Exponential power of dataframe and other, element-wise (binary operator rpow). DataFrame.lt(other[, axis, level]) Get Less than of dataframe and other, element-wise (binary operator lt). DataFrame.gt(other[, axis, level]) Get Greater than of dataframe and other, element-wise (binary operator gt). DataFrame.le(other[, axis, level]) Get Less than or equal to of dataframe and other, element-wise (binary operator le). DataFrame.ge(other[, axis, level]) Get Greater than or equal to of dataframe and other, element-wise (binary operator ge). 
DataFrame.ne(other[, axis, level]) Get Not equal to of dataframe and other, element-wise (binary operator ne). DataFrame.eq(other[, axis, level]) Get Equal to of dataframe and other, element-wise (binary operator eq). DataFrame.combine(other, func[, fill_value, ...]) Perform column-wise combine with another DataFrame. DataFrame.combine_first(other) Update null elements with value in the same location in other. Function application, GroupBy & window# DataFrame.apply(func[, axis, raw, ...]) Apply a function along an axis of the DataFrame. DataFrame.applymap(func[, na_action]) Apply a function to a Dataframe elementwise. DataFrame.pipe(func, *args, **kwargs) Apply chainable functions that expect Series or DataFrames. DataFrame.agg([func, axis]) Aggregate using one or more operations over the specified axis. DataFrame.aggregate([func, axis]) Aggregate using one or more operations over the specified axis. DataFrame.transform(func[, axis]) Call func on self producing a DataFrame with the same axis shape as self. DataFrame.groupby([by, axis, level, ...]) Group DataFrame using a mapper or by a Series of columns. DataFrame.rolling(window[, min_periods, ...]) Provide rolling window calculations. DataFrame.expanding([min_periods, center, ...]) Provide expanding window calculations. DataFrame.ewm([com, span, halflife, alpha, ...]) Provide exponentially weighted (EW) calculations. Computations / descriptive stats# DataFrame.abs() Return a Series/DataFrame with absolute numeric value of each element. DataFrame.all([axis, bool_only, skipna, level]) Return whether all elements are True, potentially over an axis. DataFrame.any(*[, axis, bool_only, skipna, ...]) Return whether any element is True, potentially over an axis. DataFrame.clip([lower, upper, axis, inplace]) Trim values at input threshold(s). DataFrame.corr([method, min_periods, ...]) Compute pairwise correlation of columns, excluding NA/null values. 
DataFrame.corrwith(other[, axis, drop, ...]) Compute pairwise correlation. DataFrame.count([axis, level, numeric_only]) Count non-NA cells for each column or row. DataFrame.cov([min_periods, ddof, numeric_only]) Compute pairwise covariance of columns, excluding NA/null values. DataFrame.cummax([axis, skipna]) Return cumulative maximum over a DataFrame or Series axis. DataFrame.cummin([axis, skipna]) Return cumulative minimum over a DataFrame or Series axis. DataFrame.cumprod([axis, skipna]) Return cumulative product over a DataFrame or Series axis. DataFrame.cumsum([axis, skipna]) Return cumulative sum over a DataFrame or Series axis. DataFrame.describe([percentiles, include, ...]) Generate descriptive statistics. DataFrame.diff([periods, axis]) First discrete difference of element. DataFrame.eval(expr, *[, inplace]) Evaluate a string describing operations on DataFrame columns. DataFrame.kurt([axis, skipna, level, ...]) Return unbiased kurtosis over requested axis. DataFrame.kurtosis([axis, skipna, level, ...]) Return unbiased kurtosis over requested axis. DataFrame.mad([axis, skipna, level]) (DEPRECATED) Return the mean absolute deviation of the values over the requested axis. DataFrame.max([axis, skipna, level, ...]) Return the maximum of the values over the requested axis. DataFrame.mean([axis, skipna, level, ...]) Return the mean of the values over the requested axis. DataFrame.median([axis, skipna, level, ...]) Return the median of the values over the requested axis. DataFrame.min([axis, skipna, level, ...]) Return the minimum of the values over the requested axis. DataFrame.mode([axis, numeric_only, dropna]) Get the mode(s) of each element along the selected axis. DataFrame.pct_change([periods, fill_method, ...]) Percentage change between the current and a prior element. DataFrame.prod([axis, skipna, level, ...]) Return the product of the values over the requested axis. 
DataFrame.product([axis, skipna, level, ...]) Return the product of the values over the requested axis. DataFrame.quantile([q, axis, numeric_only, ...]) Return values at the given quantile over requested axis. DataFrame.rank([axis, method, numeric_only, ...]) Compute numerical data ranks (1 through n) along axis. DataFrame.round([decimals]) Round a DataFrame to a variable number of decimal places. DataFrame.sem([axis, skipna, level, ddof, ...]) Return unbiased standard error of the mean over requested axis. DataFrame.skew([axis, skipna, level, ...]) Return unbiased skew over requested axis. DataFrame.sum([axis, skipna, level, ...]) Return the sum of the values over the requested axis. DataFrame.std([axis, skipna, level, ddof, ...]) Return sample standard deviation over requested axis. DataFrame.var([axis, skipna, level, ddof, ...]) Return unbiased variance over requested axis. DataFrame.nunique([axis, dropna]) Count number of distinct elements in specified axis. DataFrame.value_counts([subset, normalize, ...]) Return a Series containing counts of unique rows in the DataFrame. Reindexing / selection / label manipulation# DataFrame.add_prefix(prefix) Prefix labels with string prefix. DataFrame.add_suffix(suffix) Suffix labels with string suffix. DataFrame.align(other[, join, axis, level, ...]) Align two objects on their axes with the specified join method. DataFrame.at_time(time[, asof, axis]) Select values at particular time of day (e.g., 9:30AM). DataFrame.between_time(start_time, end_time) Select values between particular times of the day (e.g., 9:00-9:30 AM). DataFrame.drop([labels, axis, index, ...]) Drop specified labels from rows or columns. DataFrame.drop_duplicates([subset, keep, ...]) Return DataFrame with duplicate rows removed. DataFrame.duplicated([subset, keep]) Return boolean Series denoting duplicate rows. DataFrame.equals(other) Test whether two objects contain the same elements. 
DataFrame.filter([items, like, regex, axis]) Subset the dataframe rows or columns according to the specified index labels. DataFrame.first(offset) Select initial periods of time series data based on a date offset. DataFrame.head([n]) Return the first n rows. DataFrame.idxmax([axis, skipna, numeric_only]) Return index of first occurrence of maximum over requested axis. DataFrame.idxmin([axis, skipna, numeric_only]) Return index of first occurrence of minimum over requested axis. DataFrame.last(offset) Select final periods of time series data based on a date offset. DataFrame.reindex([labels, index, columns, ...]) Conform Series/DataFrame to new index with optional filling logic. DataFrame.reindex_like(other[, method, ...]) Return an object with matching indices as other object. DataFrame.rename([mapper, index, columns, ...]) Alter axes labels. DataFrame.rename_axis([mapper, inplace]) Set the name of the axis for the index or columns. DataFrame.reset_index([level, drop, ...]) Reset the index, or a level of it. DataFrame.sample([n, frac, replace, ...]) Return a random sample of items from an axis of object. DataFrame.set_axis(labels, *[, axis, ...]) Assign desired index to given axis. DataFrame.set_index(keys, *[, drop, append, ...]) Set the DataFrame index using existing columns. DataFrame.tail([n]) Return the last n rows. DataFrame.take(indices[, axis, is_copy]) Return the elements in the given positional indices along an axis. DataFrame.truncate([before, after, axis, copy]) Truncate a Series or DataFrame before and after some index value. Missing data handling# DataFrame.backfill(*[, axis, inplace, ...]) Synonym for DataFrame.fillna() with method='bfill'. DataFrame.bfill(*[, axis, inplace, limit, ...]) Synonym for DataFrame.fillna() with method='bfill'. DataFrame.dropna(*[, axis, how, thresh, ...]) Remove missing values. DataFrame.ffill(*[, axis, inplace, limit, ...]) Synonym for DataFrame.fillna() with method='ffill'. 
DataFrame.fillna([value, method, axis, ...]) Fill NA/NaN values using the specified method. DataFrame.interpolate([method, axis, limit, ...]) Fill NaN values using an interpolation method. DataFrame.isna() Detect missing values. DataFrame.isnull() DataFrame.isnull is an alias for DataFrame.isna. DataFrame.notna() Detect existing (non-missing) values. DataFrame.notnull() DataFrame.notnull is an alias for DataFrame.notna. DataFrame.pad(*[, axis, inplace, limit, ...]) Synonym for DataFrame.fillna() with method='ffill'. DataFrame.replace([to_replace, value, ...]) Replace values given in to_replace with value. Reshaping, sorting, transposing# DataFrame.droplevel(level[, axis]) Return Series/DataFrame with requested index / column level(s) removed. DataFrame.pivot(*[, index, columns, values]) Return reshaped DataFrame organized by given index / column values. DataFrame.pivot_table([values, index, ...]) Create a spreadsheet-style pivot table as a DataFrame. DataFrame.reorder_levels(order[, axis]) Rearrange index levels using input order. DataFrame.sort_values(by, *[, axis, ...]) Sort by the values along either axis. DataFrame.sort_index(*[, axis, level, ...]) Sort object by labels (along an axis). DataFrame.nlargest(n, columns[, keep]) Return the first n rows ordered by columns in descending order. DataFrame.nsmallest(n, columns[, keep]) Return the first n rows ordered by columns in ascending order. DataFrame.swaplevel([i, j, axis]) Swap levels i and j in a MultiIndex. DataFrame.stack([level, dropna]) Stack the prescribed level(s) from columns to index. DataFrame.unstack([level, fill_value]) Pivot a level of the (necessarily hierarchical) index labels. DataFrame.swapaxes(axis1, axis2[, copy]) Interchange axes and swap values axes appropriately. DataFrame.melt([id_vars, value_vars, ...]) Unpivot a DataFrame from wide to long format, optionally leaving identifiers set. 
DataFrame.explode(column[, ignore_index]) Transform each element of a list-like to a row, replicating index values. DataFrame.squeeze([axis]) Squeeze 1 dimensional axis objects into scalars. DataFrame.to_xarray() Return an xarray object from the pandas object. DataFrame.T DataFrame.transpose(*args[, copy]) Transpose index and columns. Combining / comparing / joining / merging# DataFrame.append(other[, ignore_index, ...]) (DEPRECATED) Append rows of other to the end of caller, returning a new object. DataFrame.assign(**kwargs) Assign new columns to a DataFrame. DataFrame.compare(other[, align_axis, ...]) Compare to another DataFrame and show the differences. DataFrame.join(other[, on, how, lsuffix, ...]) Join columns of another DataFrame. DataFrame.merge(right[, how, on, left_on, ...]) Merge DataFrame or named Series objects with a database-style join. DataFrame.update(other[, join, overwrite, ...]) Modify in place using non-NA values from another DataFrame. Time Series-related# DataFrame.asfreq(freq[, method, how, ...]) Convert time series to specified frequency. DataFrame.asof(where[, subset]) Return the last row(s) without any NaNs before where. DataFrame.shift([periods, freq, axis, ...]) Shift index by desired number of periods with an optional time freq. DataFrame.slice_shift([periods, axis]) (DEPRECATED) Equivalent to shift without copying data. DataFrame.tshift([periods, freq, axis]) (DEPRECATED) Shift the time index, using the index's frequency if available. DataFrame.first_valid_index() Return index for first non-NA value or None, if no non-NA value is found. DataFrame.last_valid_index() Return index for last non-NA value or None, if no non-NA value is found. DataFrame.resample(rule[, axis, closed, ...]) Resample time-series data. DataFrame.to_period([freq, axis, copy]) Convert DataFrame from DatetimeIndex to PeriodIndex. DataFrame.to_timestamp([freq, how, axis, copy]) Cast to DatetimeIndex of timestamps, at beginning of period. 
DataFrame.tz_convert(tz[, axis, level, copy]) Convert tz-aware axis to target time zone. DataFrame.tz_localize(tz[, axis, level, ...]) Localize tz-naive index of a Series or DataFrame to target time zone. Flags# Flags refer to attributes of the pandas object. Properties of the dataset (like the date it was recorded, the URL it was accessed from, etc.) should be stored in DataFrame.attrs. Flags(obj, *, allows_duplicate_labels) Flags that apply to pandas objects. Metadata# DataFrame.attrs is a dictionary for storing global metadata for this DataFrame. Warning DataFrame.attrs is considered experimental and may change without warning. DataFrame.attrs Dictionary of global attributes of this dataset. Plotting# DataFrame.plot is both a callable method and a namespace attribute for specific plotting methods of the form DataFrame.plot.<kind>. DataFrame.plot([x, y, kind, ax, ...]) DataFrame plotting accessor and method DataFrame.plot.area([x, y]) Draw a stacked area plot. DataFrame.plot.bar([x, y]) Vertical bar plot. DataFrame.plot.barh([x, y]) Make a horizontal bar plot. DataFrame.plot.box([by]) Make a box plot of the DataFrame columns. DataFrame.plot.density([bw_method, ind]) Generate Kernel Density Estimate plot using Gaussian kernels. DataFrame.plot.hexbin(x, y[, C, ...]) Generate a hexagonal binning plot. DataFrame.plot.hist([by, bins]) Draw one histogram of the DataFrame's columns. DataFrame.plot.kde([bw_method, ind]) Generate Kernel Density Estimate plot using Gaussian kernels. DataFrame.plot.line([x, y]) Plot Series or DataFrame as lines. DataFrame.plot.pie(**kwargs) Generate a pie plot. DataFrame.plot.scatter(x, y[, s, c]) Create a scatter plot with varying marker point size and color. DataFrame.boxplot([column, by, ax, ...]) Make a box plot from DataFrame columns. DataFrame.hist([column, by, grid, ...]) Make a histogram of the DataFrame's columns. Sparse accessor# Sparse-dtype specific methods and attributes are provided under the DataFrame.sparse accessor. 
DataFrame.sparse.density Ratio of non-sparse points to total (dense) data points. DataFrame.sparse.from_spmatrix(data[, ...]) Create a new DataFrame from a scipy sparse matrix. DataFrame.sparse.to_coo() Return the contents of the frame as a sparse SciPy COO matrix. DataFrame.sparse.to_dense() Convert a DataFrame with sparse values to dense. Serialization / IO / conversion# DataFrame.from_dict(data[, orient, dtype, ...]) Construct DataFrame from dict of array-like or dicts. DataFrame.from_records(data[, index, ...]) Convert structured or record ndarray to DataFrame. DataFrame.to_orc([path, engine, index, ...]) Write a DataFrame to the ORC format. DataFrame.to_parquet([path, engine, ...]) Write a DataFrame to the binary parquet format. DataFrame.to_pickle(path[, compression, ...]) Pickle (serialize) object to file. DataFrame.to_csv([path_or_buf, sep, na_rep, ...]) Write object to a comma-separated values (csv) file. DataFrame.to_hdf(path_or_buf, key[, mode, ...]) Write the contained data to an HDF5 file using HDFStore. DataFrame.to_sql(name, con[, schema, ...]) Write records stored in a DataFrame to a SQL database. DataFrame.to_dict([orient, into]) Convert the DataFrame to a dictionary. DataFrame.to_excel(excel_writer[, ...]) Write object to an Excel sheet. DataFrame.to_json([path_or_buf, orient, ...]) Convert the object to a JSON string. DataFrame.to_html([buf, columns, col_space, ...]) Render a DataFrame as an HTML table. DataFrame.to_feather(path, **kwargs) Write a DataFrame to the binary Feather format. DataFrame.to_latex([buf, columns, ...]) Render object to a LaTeX tabular, longtable, or nested table. DataFrame.to_stata(path, *[, convert_dates, ...]) Export DataFrame object to Stata dta format. DataFrame.to_gbq(destination_table[, ...]) Write a DataFrame to a Google BigQuery table. DataFrame.to_records([index, column_dtypes, ...]) Convert DataFrame to a NumPy record array. 
DataFrame.to_string([buf, columns, ...]) Render a DataFrame to a console-friendly tabular output. DataFrame.to_clipboard([excel, sep]) Copy object to the system clipboard. DataFrame.to_markdown([buf, mode, index, ...]) Print DataFrame in Markdown-friendly format. DataFrame.style Returns a Styler object. DataFrame.__dataframe__([nan_as_null, ...]) Return the dataframe interchange object implementing the interchange protocol.
reference/frame.html
null
pandas.core.groupby.SeriesGroupBy.aggregate
`pandas.core.groupby.SeriesGroupBy.aggregate` Aggregate using one or more operations over the specified axis. Function to use for aggregating the data. If a function, must either work when passed a Series or when passed to Series.apply. ``` >>> s = pd.Series([1, 2, 3, 4]) ```
SeriesGroupBy.aggregate(func=None, *args, engine=None, engine_kwargs=None, **kwargs)[source]# Aggregate using one or more operations over the specified axis. Parameters funcfunction, str, list or dictFunction to use for aggregating the data. If a function, must either work when passed a Series or when passed to Series.apply. Accepted combinations are: function string function name list of functions and/or function names, e.g. [np.sum, 'mean'] dict of axis labels -> functions, function names or list of such. Can also accept a Numba JIT function with engine='numba' specified. Only passing a single function is supported with this engine. If the 'numba' engine is chosen, the function must be a user defined function with values and index as the first and second arguments respectively in the function signature. Each group’s index will be passed to the user defined function and optionally available for use. Changed in version 1.1.0. *argsPositional arguments to pass to func. enginestr, default None 'cython' : Runs the function through C-extensions from cython. 'numba' : Runs the function through JIT compiled code from numba. None : Defaults to 'cython' or globally setting compute.use_numba New in version 1.1.0. engine_kwargsdict, default None For 'cython' engine, there are no accepted engine_kwargs For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} and will be applied to the function New in version 1.1.0. **kwargsKeyword arguments to be passed into func. Returns Series See also Series.groupby.applyApply function func group-wise and combine the results together. Series.groupby.transformTransforms the Series on each group based on the given function. Series.aggregateAggregate using one or more operations over the specified axis. 
Notes When using engine='numba', there will be no “fall back” behavior internally. The group data and group index will be passed as numpy arrays to the JITed user defined function, and no alternative execution attempts will be tried. Functions that mutate the passed object can produce unexpected behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods for more details. Changed in version 1.3.0: The resulting dtype will reflect the return value of the passed func, see the examples below. Examples >>> s = pd.Series([1, 2, 3, 4]) >>> s 0 1 1 2 2 3 3 4 dtype: int64 >>> s.groupby([1, 1, 2, 2]).min() 1 1 2 3 dtype: int64 >>> s.groupby([1, 1, 2, 2]).agg('min') 1 1 2 3 dtype: int64 >>> s.groupby([1, 1, 2, 2]).agg(['min', 'max']) min max 1 1 2 2 3 4 The output column names can be controlled by passing the desired column names and aggregations as keyword arguments. >>> s.groupby([1, 1, 2, 2]).agg( ... minimum='min', ... maximum='max', ... ) minimum maximum 1 1 2 2 3 4 Changed in version 1.3.0: The resulting dtype will reflect the return value of the aggregating function. >>> s.groupby([1, 1, 2, 2]).agg(lambda x: x.astype(float).min()) 1 1.0 2 3.0 dtype: float64
reference/api/pandas.core.groupby.SeriesGroupBy.aggregate.html
pandas.read_spss
`pandas.read_spss` Load an SPSS file from the file path, returning a DataFrame.
pandas.read_spss(path, usecols=None, convert_categoricals=True)[source]# Load an SPSS file from the file path, returning a DataFrame. New in version 0.25.0. Parameters pathstr or PathFile path. usecolslist-like, optionalReturn a subset of the columns. If None, return all columns. convert_categoricalsbool, default is TrueConvert categorical columns into pd.Categorical. Returns DataFrame
reference/api/pandas.read_spss.html
pandas.Series.str.rstrip
`pandas.Series.str.rstrip` Remove trailing characters. Strip whitespaces (including newlines) or a set of specified characters from each string in the Series/Index from right side. Replaces any non-strings in Series with NaNs. Equivalent to str.rstrip(). ``` >>> s = pd.Series(['1. Ant. ', '2. Bee!\n', '3. Cat?\t', np.nan, 10, True]) >>> s 0 1. Ant. 1 2. Bee!\n 2 3. Cat?\t 3 NaN 4 10 5 True dtype: object ```
Series.str.rstrip(to_strip=None)[source]# Remove trailing characters. Strip whitespaces (including newlines) or a set of specified characters from each string in the Series/Index from right side. Replaces any non-strings in Series with NaNs. Equivalent to str.rstrip(). Parameters to_stripstr or None, default NoneSpecifying the set of characters to be removed. All combinations of this set of characters will be stripped. If None then whitespaces are removed. Returns Series or Index of object See also Series.str.stripRemove leading and trailing characters in Series/Index. Series.str.lstripRemove leading characters in Series/Index. Series.str.rstripRemove trailing characters in Series/Index. Examples >>> s = pd.Series(['1. Ant. ', '2. Bee!\n', '3. Cat?\t', np.nan, 10, True]) >>> s 0 1. Ant. 1 2. Bee!\n 2 3. Cat?\t 3 NaN 4 10 5 True dtype: object >>> s.str.strip() 0 1. Ant. 1 2. Bee! 2 3. Cat? 3 NaN 4 NaN 5 NaN dtype: object >>> s.str.lstrip('123.') 0 Ant. 1 Bee!\n 2 Cat?\t 3 NaN 4 NaN 5 NaN dtype: object >>> s.str.rstrip('.!? \n\t') 0 1. Ant 1 2. Bee 2 3. Cat 3 NaN 4 NaN 5 NaN dtype: object >>> s.str.strip('123.!? \n\t') 0 Ant 1 Bee 2 Cat 3 NaN 4 NaN 5 NaN dtype: object
reference/api/pandas.Series.str.rstrip.html
pandas.tseries.offsets.SemiMonthEnd.rollback
`pandas.tseries.offsets.SemiMonthEnd.rollback` Roll provided date backward to next offset only if not on offset.
SemiMonthEnd.rollback()# Roll provided date backward to next offset only if not on offset. Returns TimeStampRolled timestamp if not on offset, otherwise unchanged timestamp.
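The page above gives no example, so here is a minimal sketch with illustrative dates. SemiMonthEnd anchors, by default, on the 15th and the last day of each month, so a date between anchors rolls back to the previous anchor while an on-offset date is returned unchanged:

```python
import pandas as pd

offset = pd.offsets.SemiMonthEnd()

# A date between anchors rolls back to the previous anchor (the 15th).
rolled = offset.rollback(pd.Timestamp("2022-01-20"))
print(rolled)  # 2022-01-15 00:00:00

# A date already on an anchor (month end) is returned unchanged.
on_offset = offset.rollback(pd.Timestamp("2022-01-31"))
print(on_offset)  # 2022-01-31 00:00:00
```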
reference/api/pandas.tseries.offsets.SemiMonthEnd.rollback.html
pandas.Series.str.capitalize
`pandas.Series.str.capitalize` Convert strings in the Series/Index to be capitalized. Equivalent to str.capitalize(). ``` >>> s = pd.Series(['lower', 'CAPITALS', 'this is a sentence', 'SwApCaSe']) >>> s 0 lower 1 CAPITALS 2 this is a sentence 3 SwApCaSe dtype: object ```
Series.str.capitalize()[source]# Convert strings in the Series/Index to be capitalized. Equivalent to str.capitalize(). Returns Series or Index of object See also Series.str.lowerConverts all characters to lowercase. Series.str.upperConverts all characters to uppercase. Series.str.titleConverts first character of each word to uppercase and remaining to lowercase. Series.str.capitalizeConverts first character to uppercase and remaining to lowercase. Series.str.swapcaseConverts uppercase to lowercase and lowercase to uppercase. Series.str.casefoldRemoves all case distinctions in the string. Examples >>> s = pd.Series(['lower', 'CAPITALS', 'this is a sentence', 'SwApCaSe']) >>> s 0 lower 1 CAPITALS 2 this is a sentence 3 SwApCaSe dtype: object >>> s.str.lower() 0 lower 1 capitals 2 this is a sentence 3 swapcase dtype: object >>> s.str.upper() 0 LOWER 1 CAPITALS 2 THIS IS A SENTENCE 3 SWAPCASE dtype: object >>> s.str.title() 0 Lower 1 Capitals 2 This Is A Sentence 3 Swapcase dtype: object >>> s.str.capitalize() 0 Lower 1 Capitals 2 This is a sentence 3 Swapcase dtype: object >>> s.str.swapcase() 0 LOWER 1 capitals 2 THIS IS A SENTENCE 3 sWaPcAsE dtype: object
reference/api/pandas.Series.str.capitalize.html
pandas.DatetimeIndex.minute
`pandas.DatetimeIndex.minute` The minutes of the datetime. ``` >>> datetime_series = pd.Series( ... pd.date_range("2000-01-01", periods=3, freq="T") ... ) >>> datetime_series 0 2000-01-01 00:00:00 1 2000-01-01 00:01:00 2 2000-01-01 00:02:00 dtype: datetime64[ns] >>> datetime_series.dt.minute 0 0 1 1 2 2 dtype: int64 ```
property DatetimeIndex.minute[source]# The minutes of the datetime. Examples >>> datetime_series = pd.Series( ... pd.date_range("2000-01-01", periods=3, freq="T") ... ) >>> datetime_series 0 2000-01-01 00:00:00 1 2000-01-01 00:01:00 2 2000-01-01 00:02:00 dtype: datetime64[ns] >>> datetime_series.dt.minute 0 0 1 1 2 2 dtype: int64
reference/api/pandas.DatetimeIndex.minute.html
pandas.tseries.offsets.DateOffset.freqstr
`pandas.tseries.offsets.DateOffset.freqstr` Return a string representing the frequency. ``` >>> pd.DateOffset(5).freqstr '<5 * DateOffsets>' ```
DateOffset.freqstr# Return a string representing the frequency. Examples >>> pd.DateOffset(5).freqstr '<5 * DateOffsets>' >>> pd.offsets.BusinessHour(2).freqstr '2BH' >>> pd.offsets.Nano().freqstr 'N' >>> pd.offsets.Nano(-3).freqstr '-3N'
reference/api/pandas.tseries.offsets.DateOffset.freqstr.html
pandas.tseries.offsets.YearBegin.is_quarter_end
`pandas.tseries.offsets.YearBegin.is_quarter_end` Return boolean whether a timestamp occurs on the quarter end. Examples ``` >>> ts = pd.Timestamp(2022, 1, 1) >>> freq = pd.offsets.Hour(5) >>> freq.is_quarter_end(ts) False ```
YearBegin.is_quarter_end()# Return boolean whether a timestamp occurs on the quarter end. Examples >>> ts = pd.Timestamp(2022, 1, 1) >>> freq = pd.offsets.Hour(5) >>> freq.is_quarter_end(ts) False
reference/api/pandas.tseries.offsets.YearBegin.is_quarter_end.html
pandas.Series.dt.day_of_year
`pandas.Series.dt.day_of_year` The ordinal day of the year.
Series.dt.day_of_year[source]# The ordinal day of the year.
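The entry above carries no example; a minimal sketch with illustrative dates (note that 2020 is a leap year, so December 31 is ordinal day 366):

```python
import pandas as pd

s = pd.Series(pd.to_datetime(["2020-01-01", "2020-03-01", "2020-12-31"]))
print(s.dt.day_of_year)
# 0      1
# 1     61
# 2    366
# dtype: int64
```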
reference/api/pandas.Series.dt.day_of_year.html
pandas.tseries.offsets.BusinessMonthEnd
`pandas.tseries.offsets.BusinessMonthEnd` DateOffset increments between the last business day of the month. ``` >>> from pandas.tseries.offsets import BMonthEnd >>> ts = pd.Timestamp('2020-05-24 05:01:15') >>> ts + BMonthEnd() Timestamp('2020-05-29 05:01:15') >>> ts + BMonthEnd(2) Timestamp('2020-06-30 05:01:15') >>> ts + BMonthEnd(-2) Timestamp('2020-03-31 05:01:15') ```
class pandas.tseries.offsets.BusinessMonthEnd# DateOffset increments between the last business day of the month. Examples >>> from pandas.tseries.offsets import BMonthEnd >>> ts = pd.Timestamp('2020-05-24 05:01:15') >>> ts + BMonthEnd() Timestamp('2020-05-29 05:01:15') >>> ts + BMonthEnd(2) Timestamp('2020-06-30 05:01:15') >>> ts + BMonthEnd(-2) Timestamp('2020-03-31 05:01:15') Attributes base Returns a copy of the calling offset object with n=1 and all other attributes equal. freqstr Return a string representing the frequency. kwds Return a dict of extra parameters for the offset. name Return a string representing the base frequency. n nanos normalize rule_code Methods __call__(*args, **kwargs) Call self as a function. apply_index (DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex. copy Return a copy of the frequency. is_anchored Return boolean whether the frequency is a unit frequency (n=1). is_month_end Return boolean whether a timestamp occurs on the month end. is_month_start Return boolean whether a timestamp occurs on the month start. is_on_offset Return boolean whether a timestamp intersects with this frequency. is_quarter_end Return boolean whether a timestamp occurs on the quarter end. is_quarter_start Return boolean whether a timestamp occurs on the quarter start. is_year_end Return boolean whether a timestamp occurs on the year end. is_year_start Return boolean whether a timestamp occurs on the year start. rollback Roll provided date backward to next offset only if not on offset. rollforward Roll provided date forward to next offset only if not on offset. apply isAnchored onOffset
reference/api/pandas.tseries.offsets.BusinessMonthEnd.html
pandas.DataFrame.rtruediv
`pandas.DataFrame.rtruediv` Get Floating division of dataframe and other, element-wise (binary operator rtruediv). ``` >>> df = pd.DataFrame({'angles': [0, 3, 4], ... 'degrees': [360, 180, 360]}, ... index=['circle', 'triangle', 'rectangle']) >>> df angles degrees circle 0 360 triangle 3 180 rectangle 4 360 ```
DataFrame.rtruediv(other, axis='columns', level=None, fill_value=None)[source]# Get Floating division of dataframe and other, element-wise (binary operator rtruediv). Equivalent to other / dataframe, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, truediv. Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **. Parameters otherscalar, sequence, Series, dict or DataFrameAny single or multiple element data structure, or list-like object. axis{0 or ‘index’, 1 or ‘columns’}Whether to compare by the index (0 or ‘index’) or columns. (1 or ‘columns’). For Series input, axis to match Series index on. levelint or labelBroadcast across a level, matching Index values on the passed MultiIndex level. fill_valuefloat or None, default NoneFill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing. Returns DataFrameResult of the arithmetic operation. See also DataFrame.addAdd DataFrames. DataFrame.subSubtract DataFrames. DataFrame.mulMultiply DataFrames. DataFrame.divDivide DataFrames (float division). DataFrame.truedivDivide DataFrames (float division). DataFrame.floordivDivide DataFrames (integer division). DataFrame.modCalculate modulo (remainder after division). DataFrame.powCalculate exponential power. Notes Mismatched indices will be unioned together. Examples >>> df = pd.DataFrame({'angles': [0, 3, 4], ... 'degrees': [360, 180, 360]}, ... index=['circle', 'triangle', 'rectangle']) >>> df angles degrees circle 0 360 triangle 3 180 rectangle 4 360 Add a scalar with operator version which return the same results. >>> df + 1 angles degrees circle 1 361 triangle 4 181 rectangle 5 361 >>> df.add(1) angles degrees circle 1 361 triangle 4 181 rectangle 5 361 Divide by constant with reverse version. 
>>> df.div(10) angles degrees circle 0.0 36.0 triangle 0.3 18.0 rectangle 0.4 36.0 >>> df.rdiv(10) angles degrees circle inf 0.027778 triangle 3.333333 0.055556 rectangle 2.500000 0.027778 Subtract a list and Series by axis with operator version. >>> df - [1, 2] angles degrees circle -1 358 triangle 2 178 rectangle 3 358 >>> df.sub([1, 2], axis='columns') angles degrees circle -1 358 triangle 2 178 rectangle 3 358 >>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']), ... axis='index') angles degrees circle -1 359 triangle 2 179 rectangle 3 359 Multiply a dictionary by axis. >>> df.mul({'angles': 0, 'degrees': 2}) angles degrees circle 0 720 triangle 0 360 rectangle 0 720 >>> df.mul({'circle': 0, 'triangle': 2, 'rectangle': 3}, axis='index') angles degrees circle 0 0 triangle 6 360 rectangle 12 1080 Multiply a DataFrame of different shape with operator version. >>> other = pd.DataFrame({'angles': [0, 3, 4]}, ... index=['circle', 'triangle', 'rectangle']) >>> other angles circle 0 triangle 3 rectangle 4 >>> df * other angles degrees circle 0 NaN triangle 9 NaN rectangle 16 NaN >>> df.mul(other, fill_value=0) angles degrees circle 0 0.0 triangle 9 0.0 rectangle 16 0.0 Divide by a MultiIndex by level. >>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6], ... 'degrees': [360, 180, 360, 360, 540, 720]}, ... index=[['A', 'A', 'A', 'B', 'B', 'B'], ... ['circle', 'triangle', 'rectangle', ... 'square', 'pentagon', 'hexagon']]) >>> df_multindex angles degrees A circle 0 360 triangle 3 180 rectangle 4 360 B square 4 360 pentagon 5 540 hexagon 6 720 >>> df.div(df_multindex, level=1, fill_value=0) angles degrees A circle NaN 1.0 triangle 1.0 1.0 rectangle 1.0 1.0 B square 0.0 0.0 pentagon 0.0 0.0 hexagon 0.0 0.0
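The examples above demonstrate div and rdiv; since rdiv is an alias of rtruediv, calling rtruediv on the same data gives the same result as the rdiv example, i.e. 10 / df element-wise:

```python
import pandas as pd

df = pd.DataFrame({"angles": [0, 3, 4],
                   "degrees": [360, 180, 360]},
                  index=["circle", "triangle", "rectangle"])

# rtruediv computes other / dataframe; division by zero yields inf.
result = df.rtruediv(10)
print(result)
#              angles   degrees
# circle          inf  0.027778
# triangle   3.333333  0.055556
# rectangle  2.500000  0.027778
```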
reference/api/pandas.DataFrame.rtruediv.html
pandas.Timestamp.freq
pandas.Timestamp.freq
Timestamp.freq#
reference/api/pandas.Timestamp.freq.html
pandas.tseries.offsets.Day.delta
pandas.tseries.offsets.Day.delta
Day.delta#
reference/api/pandas.tseries.offsets.Day.delta.html
pandas.DataFrame.rename_axis
`pandas.DataFrame.rename_axis` Set the name of the axis for the index or columns. ``` >>> s = pd.Series(["dog", "cat", "monkey"]) >>> s 0 dog 1 cat 2 monkey dtype: object >>> s.rename_axis("animal") animal 0 dog 1 cat 2 monkey dtype: object ```
DataFrame.rename_axis(mapper=_NoDefault.no_default, *, inplace=False, **kwargs)[source]# Set the name of the axis for the index or columns. Parameters mapperscalar, list-like, optionalValue to set the axis name attribute. index, columnsscalar, list-like, dict-like or function, optionalA scalar, list-like, dict-like or functions transformations to apply to that axis’ values. Note that the columns parameter is not allowed if the object is a Series. This parameter only apply for DataFrame type objects. Use either mapper and axis to specify the axis to target with mapper, or index and/or columns. axis{0 or ‘index’, 1 or ‘columns’}, default 0The axis to rename. For Series this parameter is unused and defaults to 0. copybool, default TrueAlso copy underlying data. inplacebool, default FalseModifies the object directly, instead of creating a new Series or DataFrame. Returns Series, DataFrame, or NoneThe same type as the caller or None if inplace=True. See also Series.renameAlter Series index labels or name. DataFrame.renameAlter DataFrame index labels or name. Index.renameSet new names on index. Notes DataFrame.rename_axis supports two calling conventions (index=index_mapper, columns=columns_mapper, ...) (mapper, axis={'index', 'columns'}, ...) The first calling convention will only modify the names of the index and/or the names of the Index object that is the columns. In this case, the parameter copy is ignored. The second calling convention will modify the names of the corresponding index if mapper is a list or a scalar. However, if mapper is dict-like or a function, it will use the deprecated behavior of modifying the axis labels. We highly recommend using keyword arguments to clarify your intent. Examples Series >>> s = pd.Series(["dog", "cat", "monkey"]) >>> s 0 dog 1 cat 2 monkey dtype: object >>> s.rename_axis("animal") animal 0 dog 1 cat 2 monkey dtype: object DataFrame >>> df = pd.DataFrame({"num_legs": [4, 4, 2], ... "num_arms": [0, 0, 2]}, ... 
["dog", "cat", "monkey"]) >>> df num_legs num_arms dog 4 0 cat 4 0 monkey 2 2 >>> df = df.rename_axis("animal") >>> df num_legs num_arms animal dog 4 0 cat 4 0 monkey 2 2 >>> df = df.rename_axis("limbs", axis="columns") >>> df limbs num_legs num_arms animal dog 4 0 cat 4 0 monkey 2 2 MultiIndex >>> df.index = pd.MultiIndex.from_product([['mammal'], ... ['dog', 'cat', 'monkey']], ... names=['type', 'name']) >>> df limbs num_legs num_arms type name mammal dog 4 0 cat 4 0 monkey 2 2 >>> df.rename_axis(index={'type': 'class'}) limbs num_legs num_arms class name mammal dog 4 0 cat 4 0 monkey 2 2 >>> df.rename_axis(columns=str.upper) LIMBS num_legs num_arms type name mammal dog 4 0 cat 4 0 monkey 2 2
reference/api/pandas.DataFrame.rename_axis.html
pandas.Series.between
`pandas.Series.between` Return boolean Series equivalent to left <= series <= right. This function returns a boolean vector containing True wherever the corresponding Series element is between the boundary values left and right. NA values are treated as False. ``` >>> s = pd.Series([2, 0, 4, 8, np.nan]) ```
Series.between(left, right, inclusive='both')[source]# Return boolean Series equivalent to left <= series <= right. This function returns a boolean vector containing True wherever the corresponding Series element is between the boundary values left and right. NA values are treated as False. Parameters leftscalar or list-likeLeft boundary. rightscalar or list-likeRight boundary. inclusive{“both”, “neither”, “left”, “right”}Include boundaries. Whether to set each bound as closed or open. Changed in version 1.3.0. Returns SeriesSeries representing whether each element is between left and right (inclusive). See also Series.gtGreater than of series and other. Series.ltLess than of series and other. Notes This function is equivalent to (left <= ser) & (ser <= right) Examples >>> s = pd.Series([2, 0, 4, 8, np.nan]) Boundary values are included by default: >>> s.between(1, 4) 0 True 1 False 2 True 3 False 4 False dtype: bool With inclusive set to "neither" boundary values are excluded: >>> s.between(1, 4, inclusive="neither") 0 True 1 False 2 False 3 False 4 False dtype: bool left and right can be any scalar value: >>> s = pd.Series(['Alice', 'Bob', 'Carol', 'Eve']) >>> s.between('Anna', 'Daniel') 0 False 1 True 2 True 3 False dtype: bool
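The Notes section states that between(left, right) is equivalent to (left <= ser) & (ser <= right); a small sketch confirming this on the example data, including the NA-treated-as-False behavior:

```python
import pandas as pd

s = pd.Series([2, 0, 4, 8, float("nan")])

# between() matches the explicit comparison chain; the NaN element
# compares False on both sides, so it comes out False either way.
print(s.between(1, 4).equals((1 <= s) & (s <= 4)))  # True
print(list(s.between(1, 4)))  # [True, False, True, False, False]
```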
reference/api/pandas.Series.between.html
pandas.Timedelta.to_pytimedelta
`pandas.Timedelta.to_pytimedelta` Convert a pandas Timedelta object into a python datetime.timedelta object.
Timedelta.to_pytimedelta()# Convert a pandas Timedelta object into a python datetime.timedelta object. Timedelta objects are internally saved as numpy timedelta64[ns] dtype. Use to_pytimedelta() to convert to object dtype. Returns datetime.timedelta or numpy.array of datetime.timedelta See also to_timedeltaConvert argument to Timedelta type. Notes Any nanosecond resolution will be lost.
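The page above has no example; a minimal sketch showing the conversion and the loss of nanosecond resolution noted above:

```python
import datetime
import pandas as pd

td = pd.Timedelta("1 days 2 hours 3 us 42 ns")
py_td = td.to_pytimedelta()

# The result is a plain stdlib timedelta; the 42 ns are lost, since
# datetime.timedelta only resolves down to microseconds.
print(type(py_td))  # <class 'datetime.timedelta'>
print(py_td == datetime.timedelta(days=1, hours=2, microseconds=3))  # True
```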
reference/api/pandas.Timedelta.to_pytimedelta.html
pandas.MultiIndex.get_indexer
`pandas.MultiIndex.get_indexer` Compute indexer and mask for new index given the current index. The indexer should be then used as an input to ndarray.take to align the current data to the new index. ``` >>> index = pd.Index(['c', 'a', 'b']) >>> index.get_indexer(['a', 'b', 'x']) array([ 1, 2, -1]) ```
MultiIndex.get_indexer(target, method=None, limit=None, tolerance=None)[source]# Compute indexer and mask for new index given the current index. The indexer should be then used as an input to ndarray.take to align the current data to the new index. Parameters targetIndex method{None, ‘pad’/’ffill’, ‘backfill’/’bfill’, ‘nearest’}, optional default: exact matches only. pad / ffill: find the PREVIOUS index value if no exact match. backfill / bfill: use NEXT index value if no exact match nearest: use the NEAREST index value if no exact match. Tied distances are broken by preferring the larger index value. limitint, optionalMaximum number of consecutive labels in target to match for inexact matches. toleranceoptionalMaximum distance between original and new labels for inexact matches. The values of the index at the matching locations must satisfy the equation abs(index[indexer] - target) <= tolerance. Tolerance may be a scalar value, which applies the same tolerance to all values, or list-like, which applies variable tolerance per element. List-like includes list, tuple, array, Series, and must be the same size as the index and its dtype must exactly match the index’s type. Returns indexernp.ndarray[np.intp]Integers from 0 to n - 1 indicating that the index at these positions matches the corresponding target values. Missing values in the target are marked by -1. Notes Returns -1 for unmatched values, for further explanation see the example below. Examples >>> index = pd.Index(['c', 'a', 'b']) >>> index.get_indexer(['a', 'b', 'x']) array([ 1, 2, -1]) Notice that the return value is an array of locations in index and x is marked by -1, as it is not in index.
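The method options described above are easiest to see on a plain (non-Multi) monotonic Index; a small sketch:

```python
import pandas as pd

idx = pd.Index([10, 20, 30])

# Default: exact matches only, so 25 maps to -1.
print(idx.get_indexer([10, 25]))                   # [ 0 -1]

# method="pad" falls back to the PREVIOUS index value (20, at position 1).
print(idx.get_indexer([10, 25], method="pad"))     # [0 1]

# method="nearest" picks the closest value; 25 is tied between 20 and 30,
# and ties prefer the larger index value (30, at position 2).
print(idx.get_indexer([10, 25], method="nearest")) # [0 2]
```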
reference/api/pandas.MultiIndex.get_indexer.html
pandas.api.indexers.FixedForwardWindowIndexer
`pandas.api.indexers.FixedForwardWindowIndexer` Creates window boundaries for fixed-length windows that include the current row. ``` >>> df = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]}) >>> df B 0 0.0 1 1.0 2 2.0 3 NaN 4 4.0 ```
class pandas.api.indexers.FixedForwardWindowIndexer(index_array=None, window_size=0, **kwargs)[source]# Creates window boundaries for fixed-length windows that include the current row. Examples >>> df = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]}) >>> df B 0 0.0 1 1.0 2 2.0 3 NaN 4 4.0 >>> indexer = pd.api.indexers.FixedForwardWindowIndexer(window_size=2) >>> df.rolling(window=indexer, min_periods=1).sum() B 0 1.0 1 3.0 2 2.0 3 4.0 4 4.0 Methods get_window_bounds([num_values, min_periods, ...]) Computes the bounds of a window.
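The get_window_bounds method listed above can also be called directly to inspect the window positions the indexer produces; a sketch:

```python
import pandas as pd

indexer = pd.api.indexers.FixedForwardWindowIndexer(window_size=2)

# get_window_bounds returns the [start, end) positions of each window:
# each window begins at the current row and looks window_size rows ahead,
# clipped at the end of the data.
start, end = indexer.get_window_bounds(num_values=5)
print(start.tolist())  # [0, 1, 2, 3, 4]
print(end.tolist())    # [2, 3, 4, 5, 5]
```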
reference/api/pandas.api.indexers.FixedForwardWindowIndexer.html
pandas.DataFrame.add_prefix
`pandas.DataFrame.add_prefix` Prefix labels with string prefix. ``` >>> s = pd.Series([1, 2, 3, 4]) >>> s 0 1 1 2 2 3 3 4 dtype: int64 ```
DataFrame.add_prefix(prefix)[source]# Prefix labels with string prefix. For Series, the row labels are prefixed. For DataFrame, the column labels are prefixed. Parameters prefixstrThe string to add before each label. Returns Series or DataFrameNew Series or DataFrame with updated labels. See also Series.add_suffixSuffix row labels with string suffix. DataFrame.add_suffixSuffix column labels with string suffix. Examples >>> s = pd.Series([1, 2, 3, 4]) >>> s 0 1 1 2 2 3 3 4 dtype: int64 >>> s.add_prefix('item_') item_0 1 item_1 2 item_2 3 item_3 4 dtype: int64 >>> df = pd.DataFrame({'A': [1, 2, 3, 4], 'B': [3, 4, 5, 6]}) >>> df A B 0 1 3 1 2 4 2 3 5 3 4 6 >>> df.add_prefix('col_') col_A col_B 0 1 3 1 2 4 2 3 5 3 4 6
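A common practical use of the method above is disambiguating shared column names before combining two frames (the frame and column names here are illustrative):

```python
import pandas as pd

# Two frames with the same column name would collide when placed
# side by side; a prefix keeps them distinguishable.
q1 = pd.DataFrame({"sales": [10, 20]})
q2 = pd.DataFrame({"sales": [30, 40]})

combined = pd.concat([q1.add_prefix("q1_"), q2.add_prefix("q2_")], axis=1)
print(list(combined.columns))  # ['q1_sales', 'q2_sales']
```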
reference/api/pandas.DataFrame.add_prefix.html
pandas.Series.dt.day_name
`pandas.Series.dt.day_name` Return the day names with specified locale. ``` >>> s = pd.Series(pd.date_range(start='2018-01-01', freq='D', periods=3)) >>> s 0 2018-01-01 1 2018-01-02 2 2018-01-03 dtype: datetime64[ns] >>> s.dt.day_name() 0 Monday 1 Tuesday 2 Wednesday dtype: object ```
Series.dt.day_name(*args, **kwargs)[source]# Return the day names with specified locale. Parameters localestr, optionalLocale determining the language in which to return the day name. Default is English locale. Returns Series or IndexSeries or Index of day names. Examples >>> s = pd.Series(pd.date_range(start='2018-01-01', freq='D', periods=3)) >>> s 0 2018-01-01 1 2018-01-02 2 2018-01-03 dtype: datetime64[ns] >>> s.dt.day_name() 0 Monday 1 Tuesday 2 Wednesday dtype: object >>> idx = pd.date_range(start='2018-01-01', freq='D', periods=3) >>> idx DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03'], dtype='datetime64[ns]', freq='D') >>> idx.day_name() Index(['Monday', 'Tuesday', 'Wednesday'], dtype='object')
reference/api/pandas.Series.dt.day_name.html
pandas.Timedelta.asm8
`pandas.Timedelta.asm8` Return a numpy timedelta64 array scalar view. ``` >>> td = pd.Timedelta('1 days 2 min 3 us 42 ns') >>> td.asm8 numpy.timedelta64(86520000003042,'ns') ```
Timedelta.asm8# Return a numpy timedelta64 array scalar view. Provides access to the array scalar view (i.e. a combination of the value and the units) associated with the numpy.timedelta64().view(), including a 64-bit integer representation of the timedelta in nanoseconds (Python int compatible). Returns numpy timedelta64 array scalar viewArray scalar view of the timedelta in nanoseconds. Examples >>> td = pd.Timedelta('1 days 2 min 3 us 42 ns') >>> td.asm8 numpy.timedelta64(86520000003042,'ns') >>> td = pd.Timedelta('2 min 3 s') >>> td.asm8 numpy.timedelta64(123000000000,'ns') >>> td = pd.Timedelta('3 ms 5 us') >>> td.asm8 numpy.timedelta64(3005000,'ns') >>> td = pd.Timedelta(42, unit='ns') >>> td.asm8 numpy.timedelta64(42,'ns')
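As noted above, asm8 carries a 64-bit nanosecond integer; a sketch of recovering that raw count by viewing the scalar as an integer:

```python
import numpy as np
import pandas as pd

td = pd.Timedelta("1 days 2 min 3 us 42 ns")

# asm8 is a numpy.timedelta64 scalar in nanoseconds; viewing it as a
# 64-bit integer exposes the raw nanosecond count.
assert isinstance(td.asm8, np.timedelta64)
ns = int(td.asm8.view("i8"))
print(ns)  # 86520000003042
```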
reference/api/pandas.Timedelta.asm8.html
pandas.Index.is_all_dates
`pandas.Index.is_all_dates` Whether or not the index values only consist of dates.
Index.is_all_dates[source]# Whether or not the index values only consist of dates.
reference/api/pandas.Index.is_all_dates.html