title | summary | context | path
---|---|---|---|
pandas.DataFrame.nlargest
|
`pandas.DataFrame.nlargest`
Return the first n rows ordered by columns in descending order.
Return the first n rows with the largest values in columns, in
descending order. The columns that are not specified are returned as
well, but not used for ordering.
```
>>> df = pd.DataFrame({'population': [59000000, 65000000, 434000,
... 434000, 434000, 337000, 11300,
... 11300, 11300],
... 'GDP': [1937894, 2583560 , 12011, 4520, 12128,
... 17036, 182, 38, 311],
... 'alpha-2': ["IT", "FR", "MT", "MV", "BN",
... "IS", "NR", "TV", "AI"]},
... index=["Italy", "France", "Malta",
... "Maldives", "Brunei", "Iceland",
... "Nauru", "Tuvalu", "Anguilla"])
>>> df
population GDP alpha-2
Italy 59000000 1937894 IT
France 65000000 2583560 FR
Malta 434000 12011 MT
Maldives 434000 4520 MV
Brunei 434000 12128 BN
Iceland 337000 17036 IS
Nauru 11300 182 NR
Tuvalu 11300 38 TV
Anguilla 11300 311 AI
```
|
DataFrame.nlargest(n, columns, keep='first')[source]#
Return the first n rows ordered by columns in descending order.
Return the first n rows with the largest values in columns, in
descending order. The columns that are not specified are returned as
well, but not used for ordering.
This method is equivalent to
df.sort_values(columns, ascending=False).head(n), but more
performant.
Parameters
n : int
    Number of rows to return.
columns : label or list of labels
    Column label(s) to order by.
keep : {‘first’, ‘last’, ‘all’}, default ‘first’
    Where there are duplicate values:
    first : prioritize the first occurrence(s)
    last : prioritize the last occurrence(s)
    all : do not drop any duplicates, even if it means selecting more than n items.
Returns
DataFrame
    The first n rows ordered by the given columns in descending order.
See also
DataFrame.nsmallest : Return the first n rows ordered by columns in ascending order.
DataFrame.sort_values : Sort DataFrame by the values.
DataFrame.head : Return the first n rows without re-ordering.
Notes
This function cannot be used with all column types. For example, when
specifying columns with object or category dtypes, TypeError is
raised.
Examples
>>> df = pd.DataFrame({'population': [59000000, 65000000, 434000,
... 434000, 434000, 337000, 11300,
... 11300, 11300],
... 'GDP': [1937894, 2583560 , 12011, 4520, 12128,
... 17036, 182, 38, 311],
... 'alpha-2': ["IT", "FR", "MT", "MV", "BN",
... "IS", "NR", "TV", "AI"]},
... index=["Italy", "France", "Malta",
... "Maldives", "Brunei", "Iceland",
... "Nauru", "Tuvalu", "Anguilla"])
>>> df
population GDP alpha-2
Italy 59000000 1937894 IT
France 65000000 2583560 FR
Malta 434000 12011 MT
Maldives 434000 4520 MV
Brunei 434000 12128 BN
Iceland 337000 17036 IS
Nauru 11300 182 NR
Tuvalu 11300 38 TV
Anguilla 11300 311 AI
In the following example, we will use nlargest to select the three
rows having the largest values in column “population”.
>>> df.nlargest(3, 'population')
population GDP alpha-2
France 65000000 2583560 FR
Italy 59000000 1937894 IT
Malta 434000 12011 MT
When using keep='last', ties are resolved in reverse order:
>>> df.nlargest(3, 'population', keep='last')
population GDP alpha-2
France 65000000 2583560 FR
Italy 59000000 1937894 IT
Brunei 434000 12128 BN
When using keep='all', all duplicate items are maintained:
>>> df.nlargest(3, 'population', keep='all')
population GDP alpha-2
France 65000000 2583560 FR
Italy 59000000 1937894 IT
Malta 434000 12011 MT
Maldives 434000 4520 MV
Brunei 434000 12128 BN
To order by the largest values in column “population” and then “GDP”,
we can specify multiple columns like in the next example.
>>> df.nlargest(3, ['population', 'GDP'])
population GDP alpha-2
France 65000000 2583560 FR
Italy 59000000 1937894 IT
Brunei 434000 12128 BN
|
reference/api/pandas.DataFrame.nlargest.html
|
pandas.Series.str.encode
|
`pandas.Series.str.encode`
Encode character string in the Series/Index using indicated encoding.
|
Series.str.encode(encoding, errors='strict')[source]#
Encode character string in the Series/Index using indicated encoding.
Equivalent to str.encode().
Parameters
encoding : str
errors : str, optional
Returns
encoded : Series/Index of objects
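The reference page gives no example; a minimal illustrative sketch (not from the original page):
```
>>> import pandas as pd
>>> s = pd.Series(["cow", "über"])
>>> s.str.encode("utf-8")
0            b'cow'
1    b'\xc3\xbcber'
dtype: object
```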
|
reference/api/pandas.Series.str.encode.html
|
pandas.tseries.offsets.BusinessDay.weekmask
|
pandas.tseries.offsets.BusinessDay.weekmask
|
BusinessDay.weekmask#
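The attribute entry above is bare in the source; a hedged sketch using CustomBusinessDay, where weekmask is an explicit constructor parameter, may help:
```
>>> import pandas as pd
>>> offset = pd.offsets.CustomBusinessDay(weekmask="Mon Tue Wed Thu Fri")
>>> offset.weekmask
'Mon Tue Wed Thu Fri'
```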
|
reference/api/pandas.tseries.offsets.BusinessDay.weekmask.html
|
pandas.tseries.offsets.SemiMonthBegin.freqstr
|
`pandas.tseries.offsets.SemiMonthBegin.freqstr`
Return a string representing the frequency.
```
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
```
|
SemiMonthBegin.freqstr#
Return a string representing the frequency.
Examples
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
>>> pd.offsets.BusinessHour(2).freqstr
'2BH'
>>> pd.offsets.Nano().freqstr
'N'
>>> pd.offsets.Nano(-3).freqstr
'-3N'
|
reference/api/pandas.tseries.offsets.SemiMonthBegin.freqstr.html
|
pandas.TimedeltaIndex.days
|
`pandas.TimedeltaIndex.days`
Number of days for each element.
|
property TimedeltaIndex.days[source]#
Number of days for each element.
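No example appears on the original page; a short sketch (the exact Index repr varies across pandas versions):
```
>>> import pandas as pd
>>> tdi = pd.to_timedelta(["1 days", "2 days 4 hours", "3 days"])
>>> tdi.days
Int64Index([1, 2, 3], dtype='int64')
```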
|
reference/api/pandas.TimedeltaIndex.days.html
|
pandas.core.groupby.DataFrameGroupBy.count
|
`pandas.core.groupby.DataFrameGroupBy.count`
Compute count of group, excluding missing values.
|
DataFrameGroupBy.count()[source]#
Compute count of group, excluding missing values.
Returns
Series or DataFrame
    Count of values within each group.
See also
Series.groupby : Apply a function groupby to a Series.
DataFrame.groupby : Apply a function groupby to each row or column of a DataFrame.
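No example appears on the original page; a minimal sketch showing that missing values are excluded from the count:
```
>>> import numpy as np
>>> import pandas as pd
>>> df = pd.DataFrame({"a": [1, 1, 2], "b": [np.nan, 5, 6], "c": [7, 8, 9]})
>>> df.groupby("a").count()
   b  c
a
1  1  2
2  1  1
```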
|
reference/api/pandas.core.groupby.DataFrameGroupBy.count.html
|
pandas.Index.is_monotonic
|
`pandas.Index.is_monotonic`
Alias for is_monotonic_increasing.
|
property Index.is_monotonic[source]#
Alias for is_monotonic_increasing.
Deprecated since version 1.5.0: is_monotonic is deprecated and will be removed in a future version.
Use is_monotonic_increasing instead.
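Since the alias is deprecated, a short sketch using the replacement:
```
>>> import pandas as pd
>>> pd.Index([1, 2, 3]).is_monotonic_increasing
True
>>> pd.Index([3, 2, 1]).is_monotonic_increasing
False
```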
|
reference/api/pandas.Index.is_monotonic.html
|
pandas.Series.dt.year
|
`pandas.Series.dt.year`
The year of the datetime.
Examples
```
>>> datetime_series = pd.Series(
... pd.date_range("2000-01-01", periods=3, freq="Y")
... )
>>> datetime_series
0 2000-12-31
1 2001-12-31
2 2002-12-31
dtype: datetime64[ns]
>>> datetime_series.dt.year
0 2000
1 2001
2 2002
dtype: int64
```
|
Series.dt.year[source]#
The year of the datetime.
Examples
>>> datetime_series = pd.Series(
... pd.date_range("2000-01-01", periods=3, freq="Y")
... )
>>> datetime_series
0 2000-12-31
1 2001-12-31
2 2002-12-31
dtype: datetime64[ns]
>>> datetime_series.dt.year
0 2000
1 2001
2 2002
dtype: int64
|
reference/api/pandas.Series.dt.year.html
|
pandas.tseries.offsets.Micro.is_anchored
|
`pandas.tseries.offsets.Micro.is_anchored`
Return boolean whether the frequency is a unit frequency (n=1).
```
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
```
|
Micro.is_anchored()#
Return boolean whether the frequency is a unit frequency (n=1).
Examples
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
|
reference/api/pandas.tseries.offsets.Micro.is_anchored.html
|
pandas.Series.str.isalpha
|
`pandas.Series.str.isalpha`
Check whether all characters in each string are alphabetic.
```
>>> s1 = pd.Series(['one', 'one1', '1', ''])
```
|
Series.str.isalpha()[source]#
Check whether all characters in each string are alphabetic.
This is equivalent to running the Python string method
str.isalpha() for each element of the Series/Index. If a string
has zero characters, False is returned for that check.
Returns
Series or Index of bool
    Series or Index of boolean values with the same length as the original Series/Index.
See also
Series.str.isalpha : Check whether all characters are alphabetic.
Series.str.isnumeric : Check whether all characters are numeric.
Series.str.isalnum : Check whether all characters are alphanumeric.
Series.str.isdigit : Check whether all characters are digits.
Series.str.isdecimal : Check whether all characters are decimal.
Series.str.isspace : Check whether all characters are whitespace.
Series.str.islower : Check whether all characters are lowercase.
Series.str.isupper : Check whether all characters are uppercase.
Series.str.istitle : Check whether all characters are titlecase.
Examples
Checks for Alphabetic and Numeric Characters
>>> s1 = pd.Series(['one', 'one1', '1', ''])
>>> s1.str.isalpha()
0 True
1 False
2 False
3 False
dtype: bool
>>> s1.str.isnumeric()
0 False
1 False
2 True
3 False
dtype: bool
>>> s1.str.isalnum()
0 True
1 True
2 True
3 False
dtype: bool
Note that checks against characters mixed with any additional punctuation
or whitespace will evaluate to false for an alphanumeric check.
>>> s2 = pd.Series(['A B', '1.5', '3,000'])
>>> s2.str.isalnum()
0 False
1 False
2 False
dtype: bool
More Detailed Checks for Numeric Characters
There are several different but overlapping sets of numeric characters that
can be checked for.
>>> s3 = pd.Series(['23', '³', '⅕', ''])
The s3.str.isdecimal method checks for characters used to form numbers
in base 10.
>>> s3.str.isdecimal()
0 True
1 False
2 False
3 False
dtype: bool
The s3.str.isdigit method is the same as s3.str.isdecimal but also
includes special digits, like superscripted and subscripted digits in
Unicode.
>>> s3.str.isdigit()
0 True
1 True
2 False
3 False
dtype: bool
The s3.str.isnumeric method is the same as s3.str.isdigit but also
includes other characters that can represent quantities, such as Unicode
fractions.
>>> s3.str.isnumeric()
0 True
1 True
2 True
3 False
dtype: bool
Checks for Whitespace
>>> s4 = pd.Series([' ', '\t\r\n ', ''])
>>> s4.str.isspace()
0 True
1 True
2 False
dtype: bool
Checks for Character Case
>>> s5 = pd.Series(['leopard', 'Golden Eagle', 'SNAKE', ''])
>>> s5.str.islower()
0 True
1 False
2 False
3 False
dtype: bool
>>> s5.str.isupper()
0 False
1 False
2 True
3 False
dtype: bool
The s5.str.istitle method checks whether all words are in title
case (whether only the first letter of each word is capitalized). Words are
assumed to be any sequence of non-numeric characters separated by
whitespace characters.
>>> s5.str.istitle()
0 False
1 True
2 False
3 False
dtype: bool
|
reference/api/pandas.Series.str.isalpha.html
|
pandas.errors.SpecificationError
|
`pandas.errors.SpecificationError`
Exception raised by agg when the functions are ill-specified.
```
>>> df = pd.DataFrame({'A': [1, 1, 1, 2, 2],
... 'B': range(5),
... 'C': range(5)})
>>> df.groupby('A').B.agg({'foo': 'count'})
... # SpecificationError: nested renamer is not supported
```
|
exception pandas.errors.SpecificationError[source]#
Exception raised by agg when the functions are ill-specified.
The exception is raised in two scenarios.
The first is calling agg on a
DataFrame or Series using a nested renamer (dict-of-dict).
The second is calling agg on a DataFrame with duplicated function
names without assigning a column name.
Examples
>>> df = pd.DataFrame({'A': [1, 1, 1, 2, 2],
... 'B': range(5),
... 'C': range(5)})
>>> df.groupby('A').B.agg({'foo': 'count'})
... # SpecificationError: nested renamer is not supported
>>> df.groupby('A').agg({'B': {'foo': ['sum', 'max']}})
... # SpecificationError: nested renamer is not supported
>>> df.groupby('A').agg(['min', 'min'])
... # SpecificationError: nested renamer is not supported
|
reference/api/pandas.errors.SpecificationError.html
|
pandas.core.window.rolling.Rolling.kurt
|
`pandas.core.window.rolling.Rolling.kurt`
Calculate the rolling Fisher’s definition of kurtosis without bias.
Include only float, int, boolean columns.
```
>>> arr = [1, 2, 3, 4, 999]
>>> import scipy.stats
>>> print(f"{scipy.stats.kurtosis(arr[:-1], bias=False):.6f}")
-1.200000
>>> print(f"{scipy.stats.kurtosis(arr[1:], bias=False):.6f}")
3.999946
>>> s = pd.Series(arr)
>>> s.rolling(4).kurt()
0 NaN
1 NaN
2 NaN
3 -1.200000
4 3.999946
dtype: float64
```
|
Rolling.kurt(numeric_only=False, **kwargs)[source]#
Calculate the rolling Fisher’s definition of kurtosis without bias.
Parameters
numeric_only : bool, default False
    Include only float, int, boolean columns.
    New in version 1.5.0.
**kwargs
    For NumPy compatibility and will not have an effect on the result.
    Deprecated since version 1.5.0.
Returns
Series or DataFrame
    Return type is the same as the original object with np.float64 dtype.
See also
scipy.stats.kurtosis : Reference SciPy method.
pandas.Series.rolling : Calling rolling with Series data.
pandas.DataFrame.rolling : Calling rolling with DataFrames.
pandas.Series.kurt : Aggregating kurt for Series.
pandas.DataFrame.kurt : Aggregating kurt for DataFrame.
Notes
A minimum of four periods is required for the calculation.
Examples
The example below will show a rolling calculation with a window size of
four matching the equivalent function call using scipy.stats.
>>> arr = [1, 2, 3, 4, 999]
>>> import scipy.stats
>>> print(f"{scipy.stats.kurtosis(arr[:-1], bias=False):.6f}")
-1.200000
>>> print(f"{scipy.stats.kurtosis(arr[1:], bias=False):.6f}")
3.999946
>>> s = pd.Series(arr)
>>> s.rolling(4).kurt()
0 NaN
1 NaN
2 NaN
3 -1.200000
4 3.999946
dtype: float64
|
reference/api/pandas.core.window.rolling.Rolling.kurt.html
|
pandas.MultiIndex.levels
|
pandas.MultiIndex.levels
|
MultiIndex.levels[source]#
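The attribute entry above is bare in the source; a short illustrative sketch:
```
>>> import pandas as pd
>>> mi = pd.MultiIndex.from_arrays([["a", "a", "b"], [1, 2, 1]],
...                                names=["letter", "num"])
>>> mi.levels
FrozenList([['a', 'b'], [1, 2]])
```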
|
reference/api/pandas.MultiIndex.levels.html
|
pandas.tseries.offsets.CustomBusinessMonthEnd.apply_index
|
`pandas.tseries.offsets.CustomBusinessMonthEnd.apply_index`
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead.
|
CustomBusinessMonthEnd.apply_index()#
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead.
Parameters
index : DatetimeIndex
Returns
DatetimeIndex
Raises
NotImplementedError
    When the specific offset subclass does not have a vectorized implementation.
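Following the deprecation note, a hedged sketch of the replacement pattern (pandas may emit a PerformanceWarning for offsets without a vectorized implementation):
```
>>> import pandas as pd
>>> dtindex = pd.date_range("2022-01-31", periods=3, freq="M")
>>> shifted = pd.offsets.CustomBusinessMonthEnd() + dtindex  # replaces apply_index(dtindex)
```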
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthEnd.apply_index.html
|
pandas.Series.dt.to_pytimedelta
|
`pandas.Series.dt.to_pytimedelta`
Return an array of native datetime.timedelta objects.
```
>>> s = pd.Series(pd.to_timedelta(np.arange(5), unit="d"))
>>> s
0 0 days
1 1 days
2 2 days
3 3 days
4 4 days
dtype: timedelta64[ns]
```
|
Series.dt.to_pytimedelta()[source]#
Return an array of native datetime.timedelta objects.
Python’s standard datetime library uses a different representation for
timedeltas. This method converts a Series of pandas Timedeltas
to datetime.timedelta format with the same length as the original
Series.
Returns
numpy.ndarray
    1D array containing data with datetime.timedelta type.
See also
datetime.timedelta : A duration expressing the difference between two date, time, or datetime instances.
Examples
>>> s = pd.Series(pd.to_timedelta(np.arange(5), unit="d"))
>>> s
0 0 days
1 1 days
2 2 days
3 3 days
4 4 days
dtype: timedelta64[ns]
>>> s.dt.to_pytimedelta()
array([datetime.timedelta(0), datetime.timedelta(days=1),
datetime.timedelta(days=2), datetime.timedelta(days=3),
datetime.timedelta(days=4)], dtype=object)
|
reference/api/pandas.Series.dt.to_pytimedelta.html
|
pandas.tseries.offsets.Hour.nanos
|
`pandas.tseries.offsets.Hour.nanos`
Return an integer of the total number of nanoseconds.
Raises ValueError if the frequency is non-fixed.
```
>>> pd.offsets.Hour(5).nanos
18000000000000
```
|
Hour.nanos#
Return an integer of the total number of nanoseconds.
Raises
ValueError
    If the frequency is non-fixed.
Examples
>>> pd.offsets.Hour(5).nanos
18000000000000
|
reference/api/pandas.tseries.offsets.Hour.nanos.html
|
pandas.tseries.offsets.BusinessDay.onOffset
|
pandas.tseries.offsets.BusinessDay.onOffset
|
BusinessDay.onOffset()#
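onOffset is a deprecated camel-case alias for is_on_offset; a short sketch using the replacement:
```
>>> import pandas as pd
>>> ts = pd.Timestamp(2022, 8, 5)  # a Friday
>>> pd.offsets.BusinessDay().is_on_offset(ts)
True
```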
|
reference/api/pandas.tseries.offsets.BusinessDay.onOffset.html
|
pandas.plotting.plot_params
|
`pandas.plotting.plot_params`
Stores pandas plotting options.
|
pandas.plotting.plot_params = {'xaxis.compat': False}#
Stores pandas plotting options.
Allows for parameter aliasing so you can just use parameter names that are
the same as the plot function parameters, but stores them in a canonical
format that makes it easy to break down into groups later.
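A short illustrative use, assuming the documented "x_compat" alias for "xaxis.compat" (not shown on the original page):
```
>>> import pandas as pd
>>> with pd.plotting.plot_params.use("x_compat", True):
...     pass  # plotting calls inside this block see xaxis.compat=True
```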
|
reference/api/pandas.plotting.plot_params.html
|
pandas.Timestamp.is_month_start
|
`pandas.Timestamp.is_month_start`
Return True if date is first day of month.
```
>>> ts = pd.Timestamp(2020, 3, 14)
>>> ts.is_month_start
False
```
|
Timestamp.is_month_start#
Return True if date is first day of month.
Examples
>>> ts = pd.Timestamp(2020, 3, 14)
>>> ts.is_month_start
False
>>> ts = pd.Timestamp(2020, 1, 1)
>>> ts.is_month_start
True
|
reference/api/pandas.Timestamp.is_month_start.html
|
pandas.tseries.offsets.YearBegin.is_anchored
|
`pandas.tseries.offsets.YearBegin.is_anchored`
Return boolean whether the frequency is a unit frequency (n=1).
```
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
```
|
YearBegin.is_anchored()#
Return boolean whether the frequency is a unit frequency (n=1).
Examples
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
|
reference/api/pandas.tseries.offsets.YearBegin.is_anchored.html
|
pandas.tseries.offsets.Tick.is_year_end
|
`pandas.tseries.offsets.Tick.is_year_end`
Return boolean whether a timestamp occurs on the year end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
```
|
Tick.is_year_end()#
Return boolean whether a timestamp occurs on the year end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
|
reference/api/pandas.tseries.offsets.Tick.is_year_end.html
|
pandas.tseries.offsets.BusinessDay.isAnchored
|
pandas.tseries.offsets.BusinessDay.isAnchored
|
BusinessDay.isAnchored()#
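isAnchored is a deprecated camel-case alias for is_anchored; a short sketch using the replacement:
```
>>> import pandas as pd
>>> pd.offsets.BusinessDay().is_anchored()
True
>>> pd.offsets.BusinessDay(2).is_anchored()
False
```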
|
reference/api/pandas.tseries.offsets.BusinessDay.isAnchored.html
|
pandas.tseries.offsets.CustomBusinessMonthBegin.isAnchored
|
pandas.tseries.offsets.CustomBusinessMonthBegin.isAnchored
|
CustomBusinessMonthBegin.isAnchored()#
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthBegin.isAnchored.html
|
pandas.core.groupby.GroupBy.min
|
`pandas.core.groupby.GroupBy.min`
Compute min of group values.
|
final GroupBy.min(numeric_only=False, min_count=-1, engine=None, engine_kwargs=None)[source]#
Compute min of group values.
Parameters
numeric_only : bool, default False
    Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data.
min_count : int, default -1
    The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA.
Returns
Series or DataFrame
    Computed min of values within each group.
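No example appears on the original page; a minimal sketch:
```
>>> import pandas as pd
>>> df = pd.DataFrame({"key": ["a", "a", "b"], "val": [3, 1, 2]})
>>> df.groupby("key").min()
     val
key
a      1
b      2
```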
|
reference/api/pandas.core.groupby.GroupBy.min.html
|
pandas.tseries.offsets.BQuarterEnd.apply
|
pandas.tseries.offsets.BQuarterEnd.apply
|
BQuarterEnd.apply()#
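apply is deprecated in favor of plain addition; a short sketch of the replacement:
```
>>> import pandas as pd
>>> pd.Timestamp(2022, 1, 15) + pd.offsets.BQuarterEnd()
Timestamp('2022-03-31 00:00:00')
```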
|
reference/api/pandas.tseries.offsets.BQuarterEnd.apply.html
|
Working with text data
|
Working with text data
New in version 1.0.0.
There are two ways to store text data in pandas:
object-dtype NumPy array.
StringDtype extension type.
We recommend using StringDtype to store text data.
|
Text data types#
New in version 1.0.0.
There are two ways to store text data in pandas:
object-dtype NumPy array.
StringDtype extension type.
We recommend using StringDtype to store text data.
Prior to pandas 1.0, object dtype was the only option. This was unfortunate
for many reasons:
You can accidentally store a mixture of strings and non-strings in an
object dtype array. It’s better to have a dedicated dtype.
object dtype breaks dtype-specific operations like DataFrame.select_dtypes().
There isn’t a clear way to select just text while excluding non-text
but still object-dtype columns.
When reading code, the contents of an object dtype array is less clear
than 'string'.
Currently, the performance of object dtype arrays of strings and
arrays.StringArray are about the same. We expect future enhancements
to significantly increase the performance and lower the memory overhead of
StringArray.
Warning
StringArray is currently considered experimental. The implementation
and parts of the API may change without warning.
For backwards-compatibility, object dtype remains the default type we
infer a list of strings to:
In [1]: pd.Series(["a", "b", "c"])
Out[1]:
0 a
1 b
2 c
dtype: object
To explicitly request string dtype, specify the dtype
In [2]: pd.Series(["a", "b", "c"], dtype="string")
Out[2]:
0 a
1 b
2 c
dtype: string
In [3]: pd.Series(["a", "b", "c"], dtype=pd.StringDtype())
Out[3]:
0 a
1 b
2 c
dtype: string
Or astype after the Series or DataFrame is created
In [4]: s = pd.Series(["a", "b", "c"])
In [5]: s
Out[5]:
0 a
1 b
2 c
dtype: object
In [6]: s.astype("string")
Out[6]:
0 a
1 b
2 c
dtype: string
Changed in version 1.1.0.
You can also use StringDtype/"string" as the dtype on non-string data and
it will be converted to string dtype:
In [7]: s = pd.Series(["a", 2, np.nan], dtype="string")
In [8]: s
Out[8]:
0 a
1 2
2 <NA>
dtype: string
In [9]: type(s[1])
Out[9]: str
or convert from existing pandas data:
In [10]: s1 = pd.Series([1, 2, np.nan], dtype="Int64")
In [11]: s1
Out[11]:
0 1
1 2
2 <NA>
dtype: Int64
In [12]: s2 = s1.astype("string")
In [13]: s2
Out[13]:
0 1
1 2
2 <NA>
dtype: string
In [14]: type(s2[0])
Out[14]: str
Behavior differences#
These are places where the behavior of StringDtype objects differ from
object dtype
For StringDtype, string accessor methods
that return numeric output will always return a nullable integer dtype,
rather than either int or float dtype, depending on the presence of NA values.
Methods returning boolean output will return a nullable boolean dtype.
In [15]: s = pd.Series(["a", None, "b"], dtype="string")
In [16]: s
Out[16]:
0 a
1 <NA>
2 b
dtype: string
In [17]: s.str.count("a")
Out[17]:
0 1
1 <NA>
2 0
dtype: Int64
In [18]: s.dropna().str.count("a")
Out[18]:
0 1
2 0
dtype: Int64
Both outputs are Int64 dtype. Compare that with object-dtype
In [19]: s2 = pd.Series(["a", None, "b"], dtype="object")
In [20]: s2.str.count("a")
Out[20]:
0 1.0
1 NaN
2 0.0
dtype: float64
In [21]: s2.dropna().str.count("a")
Out[21]:
0 1
2 0
dtype: int64
When NA values are present, the output dtype is float64. Similarly for
methods returning boolean values.
In [22]: s.str.isdigit()
Out[22]:
0 False
1 <NA>
2 False
dtype: boolean
In [23]: s.str.match("a")
Out[23]:
0 True
1 <NA>
2 False
dtype: boolean
Some string methods, like Series.str.decode() are not available
on StringArray because StringArray only holds strings, not
bytes.
In comparison operations, arrays.StringArray and Series backed
by a StringArray will return an object with BooleanDtype,
rather than a bool dtype object. Missing values in a StringArray
will propagate in comparison operations, rather than always comparing
unequal like numpy.nan.
Everything else that follows in the rest of this document applies equally to
string and object dtype.
String methods#
Series and Index are equipped with a set of string processing methods
that make it easy to operate on each element of the array. Perhaps most
importantly, these methods exclude missing/NA values automatically. These are
accessed via the str attribute and generally have names matching
the equivalent (scalar) built-in string methods:
In [24]: s = pd.Series(
....: ["A", "B", "C", "Aaba", "Baca", np.nan, "CABA", "dog", "cat"], dtype="string"
....: )
....:
In [25]: s.str.lower()
Out[25]:
0 a
1 b
2 c
3 aaba
4 baca
5 <NA>
6 caba
7 dog
8 cat
dtype: string
In [26]: s.str.upper()
Out[26]:
0 A
1 B
2 C
3 AABA
4 BACA
5 <NA>
6 CABA
7 DOG
8 CAT
dtype: string
In [27]: s.str.len()
Out[27]:
0 1
1 1
2 1
3 4
4 4
5 <NA>
6 4
7 3
8 3
dtype: Int64
In [28]: idx = pd.Index([" jack", "jill ", " jesse ", "frank"])
In [29]: idx.str.strip()
Out[29]: Index(['jack', 'jill', 'jesse', 'frank'], dtype='object')
In [30]: idx.str.lstrip()
Out[30]: Index(['jack', 'jill ', 'jesse ', 'frank'], dtype='object')
In [31]: idx.str.rstrip()
Out[31]: Index([' jack', 'jill', ' jesse', 'frank'], dtype='object')
The string methods on Index are especially useful for cleaning up or
transforming DataFrame columns. For instance, you may have columns with
leading or trailing whitespace:
In [32]: df = pd.DataFrame(
....: np.random.randn(3, 2), columns=[" Column A ", " Column B "], index=range(3)
....: )
....:
In [33]: df
Out[33]:
Column A Column B
0 0.469112 -0.282863
1 -1.509059 -1.135632
2 1.212112 -0.173215
Since df.columns is an Index object, we can use the .str accessor
In [34]: df.columns.str.strip()
Out[34]: Index(['Column A', 'Column B'], dtype='object')
In [35]: df.columns.str.lower()
Out[35]: Index([' column a ', ' column b '], dtype='object')
These string methods can then be used to clean up the columns as needed.
Here we are removing leading and trailing whitespaces, lower casing all names,
and replacing any remaining whitespaces with underscores:
In [36]: df.columns = df.columns.str.strip().str.lower().str.replace(" ", "_")
In [37]: df
Out[37]:
column_a column_b
0 0.469112 -0.282863
1 -1.509059 -1.135632
2 1.212112 -0.173215
Note
If you have a Series where lots of elements are repeated
(i.e. the number of unique elements in the Series is a lot smaller than the length of the
Series), it can be faster to convert the original Series to one of type
category and then use .str.<method> or .dt.<property> on that.
The performance difference comes from the fact that, for Series of type category, the
string operations are done on the .categories and not on each element of the
Series.
Please note that a Series of type category with string .categories has
some limitations in comparison to Series of type string (e.g. you can’t add strings to
each other: s + " " + s won’t work if s is a Series of type category). Also,
.str methods which operate on elements of type list are not available on such a
Series.
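A minimal sketch of the conversion described in this note (the speedup claim is from the note above, not measured here):
```
>>> import pandas as pd
>>> s = pd.Series(["low", "high", "low"] * 1000)
>>> s_cat = s.astype("category")  # string ops now run once per category
>>> s_cat.str.upper().head(3)
0     LOW
1    HIGH
2     LOW
dtype: object
```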
Warning
Before v.0.25.0, the .str-accessor did only the most rudimentary type checks. Starting with
v.0.25.0, the type of the Series is inferred and the allowed types (i.e. strings) are enforced more rigorously.
Generally speaking, the .str accessor is intended to work only on strings. With very few
exceptions, other uses are not supported, and may be disabled at a later point.
Splitting and replacing strings#
Methods like split return a Series of lists:
In [38]: s2 = pd.Series(["a_b_c", "c_d_e", np.nan, "f_g_h"], dtype="string")
In [39]: s2.str.split("_")
Out[39]:
0 [a, b, c]
1 [c, d, e]
2 <NA>
3 [f, g, h]
dtype: object
Elements in the split lists can be accessed using get or [] notation:
In [40]: s2.str.split("_").str.get(1)
Out[40]:
0 b
1 d
2 <NA>
3 g
dtype: object
In [41]: s2.str.split("_").str[1]
Out[41]:
0 b
1 d
2 <NA>
3 g
dtype: object
It is easy to expand this to return a DataFrame using expand.
In [42]: s2.str.split("_", expand=True)
Out[42]:
0 1 2
0 a b c
1 c d e
2 <NA> <NA> <NA>
3 f g h
When original Series has StringDtype, the output columns will all
be StringDtype as well.
It is also possible to limit the number of splits:
In [43]: s2.str.split("_", expand=True, n=1)
Out[43]:
0 1
0 a b_c
1 c d_e
2 <NA> <NA>
3 f g_h
rsplit is similar to split except it works in the reverse direction,
i.e., from the end of the string to the beginning of the string:
In [44]: s2.str.rsplit("_", expand=True, n=1)
Out[44]:
0 1
0 a_b c
1 c_d e
2 <NA> <NA>
3 f_g h
replace optionally uses regular expressions:
In [45]: s3 = pd.Series(
....: ["A", "B", "C", "Aaba", "Baca", "", np.nan, "CABA", "dog", "cat"],
....: dtype="string",
....: )
....:
In [46]: s3
Out[46]:
0 A
1 B
2 C
3 Aaba
4 Baca
5
6 <NA>
7 CABA
8 dog
9 cat
dtype: string
In [47]: s3.str.replace("^.a|dog", "XX-XX ", case=False, regex=True)
Out[47]:
0 A
1 B
2 C
3 XX-XX ba
4 XX-XX ca
5
6 <NA>
7 XX-XX BA
8 XX-XX
9 XX-XX t
dtype: string
Warning
Some caution must be taken when dealing with regular expressions! The current behavior
is to treat single character patterns as literal strings, even when regex is set
to True. This behavior is deprecated and will be removed in a future version so
that the regex keyword is always respected.
Changed in version 1.2.0.
If you want literal replacement of a string (equivalent to str.replace()), you
can set the optional regex parameter to False, rather than escaping each
character. In this case both pat and repl must be strings:
In [48]: dollars = pd.Series(["12", "-$10", "$10,000"], dtype="string")
# These lines are equivalent
In [49]: dollars.str.replace(r"-\$", "-", regex=True)
Out[49]:
0 12
1 -10
2 $10,000
dtype: string
In [50]: dollars.str.replace("-$", "-", regex=False)
Out[50]:
0 12
1 -10
2 $10,000
dtype: string
The replace method can also take a callable as replacement. It is called
on every pat using re.sub(). The callable should expect one
positional argument (a regex object) and return a string.
# Reverse every lowercase alphabetic word
In [51]: pat = r"[a-z]+"
In [52]: def repl(m):
....: return m.group(0)[::-1]
....:
In [53]: pd.Series(["foo 123", "bar baz", np.nan], dtype="string").str.replace(
....: pat, repl, regex=True
....: )
....:
Out[53]:
0 oof 123
1 rab zab
2 <NA>
dtype: string
# Using regex groups
In [54]: pat = r"(?P<one>\w+) (?P<two>\w+) (?P<three>\w+)"
In [55]: def repl(m):
....: return m.group("two").swapcase()
....:
In [56]: pd.Series(["Foo Bar Baz", np.nan], dtype="string").str.replace(
....: pat, repl, regex=True
....: )
....:
Out[56]:
0 bAR
1 <NA>
dtype: string
The replace method also accepts a compiled regular expression object
from re.compile() as a pattern. All flags should be included in the
compiled regular expression object.
In [57]: import re
In [58]: regex_pat = re.compile(r"^.a|dog", flags=re.IGNORECASE)
In [59]: s3.str.replace(regex_pat, "XX-XX ", regex=True)
Out[59]:
0 A
1 B
2 C
3 XX-XX ba
4 XX-XX ca
5
6 <NA>
7 XX-XX BA
8 XX-XX
9 XX-XX t
dtype: string
Including a flags argument when calling replace with a compiled
regular expression object will raise a ValueError.
In [60]: s3.str.replace(regex_pat, 'XX-XX ', flags=re.IGNORECASE)
---------------------------------------------------------------------------
ValueError: case and flags cannot be set when pat is a compiled regex
removeprefix and removesuffix have the same effect as str.removeprefix and str.removesuffix added in Python 3.9 (https://docs.python.org/3/library/stdtypes.html#str.removeprefix):
New in version 1.4.0.
In [61]: s = pd.Series(["str_foo", "str_bar", "no_prefix"])
In [62]: s.str.removeprefix("str_")
Out[62]:
0 foo
1 bar
2 no_prefix
dtype: object
In [63]: s = pd.Series(["foo_str", "bar_str", "no_suffix"])
In [64]: s.str.removesuffix("_str")
Out[64]:
0 foo
1 bar
2 no_suffix
dtype: object
Concatenation#
There are several ways to concatenate a Series or Index, either with itself or others, all based on cat(),
resp. Index.str.cat.
Concatenating a single Series into a string#
The content of a Series (or Index) can be concatenated:
In [65]: s = pd.Series(["a", "b", "c", "d"], dtype="string")
In [66]: s.str.cat(sep=",")
Out[66]: 'a,b,c,d'
If not specified, the keyword sep for the separator defaults to the empty string, sep='':
In [67]: s.str.cat()
Out[67]: 'abcd'
By default, missing values are ignored. Using na_rep, they can be given a representation:
In [68]: t = pd.Series(["a", "b", np.nan, "d"], dtype="string")
In [69]: t.str.cat(sep=",")
Out[69]: 'a,b,d'
In [70]: t.str.cat(sep=",", na_rep="-")
Out[70]: 'a,b,-,d'
Concatenating a Series and something list-like into a Series#
The first argument to cat() can be a list-like object, provided that it matches the length of the calling Series (or Index).
In [71]: s.str.cat(["A", "B", "C", "D"])
Out[71]:
0 aA
1 bB
2 cC
3 dD
dtype: string
Missing values on either side will result in missing values in the result as well, unless na_rep is specified:
In [72]: s.str.cat(t)
Out[72]:
0 aa
1 bb
2 <NA>
3 dd
dtype: string
In [73]: s.str.cat(t, na_rep="-")
Out[73]:
0 aa
1 bb
2 c-
3 dd
dtype: string
Concatenating a Series and something array-like into a Series#
The parameter others can also be two-dimensional. In this case, the number of rows must match the lengths of the calling Series (or Index).
In [74]: d = pd.concat([t, s], axis=1)
In [75]: s
Out[75]:
0 a
1 b
2 c
3 d
dtype: string
In [76]: d
Out[76]:
0 1
0 a a
1 b b
2 <NA> c
3 d d
In [77]: s.str.cat(d, na_rep="-")
Out[77]:
0 aaa
1 bbb
2 c-c
3 ddd
dtype: string
Concatenating a Series and an indexed object into a Series, with alignment#
For concatenation with a Series or DataFrame, it is possible to align the indexes before concatenation by setting
the join-keyword.
In [78]: u = pd.Series(["b", "d", "a", "c"], index=[1, 3, 0, 2], dtype="string")
In [79]: s
Out[79]:
0 a
1 b
2 c
3 d
dtype: string
In [80]: u
Out[80]:
1 b
3 d
0 a
2 c
dtype: string
In [81]: s.str.cat(u)
Out[81]:
0 aa
1 bb
2 cc
3 dd
dtype: string
In [82]: s.str.cat(u, join="left")
Out[82]:
0 aa
1 bb
2 cc
3 dd
dtype: string
Warning
If the join keyword is not passed, the method cat() will currently fall back to the behavior before version 0.23.0 (i.e. no alignment),
but a FutureWarning will be raised if any of the involved indexes differ, since this default will change to join='left' in a future version.
The usual options are available for join (one of 'left', 'outer', 'inner', 'right').
In particular, alignment also means that the different lengths do not need to coincide anymore.
In [83]: v = pd.Series(["z", "a", "b", "d", "e"], index=[-1, 0, 1, 3, 4], dtype="string")
In [84]: s
Out[84]:
0 a
1 b
2 c
3 d
dtype: string
In [85]: v
Out[85]:
-1 z
0 a
1 b
3 d
4 e
dtype: string
In [86]: s.str.cat(v, join="left", na_rep="-")
Out[86]:
0 aa
1 bb
2 c-
3 dd
dtype: string
In [87]: s.str.cat(v, join="outer", na_rep="-")
Out[87]:
-1 -z
0 aa
1 bb
2 c-
3 dd
4 -e
dtype: string
The same alignment can be used when others is a DataFrame:
In [88]: f = d.loc[[3, 2, 1, 0], :]
In [89]: s
Out[89]:
0 a
1 b
2 c
3 d
dtype: string
In [90]: f
Out[90]:
0 1
3 d d
2 <NA> c
1 b b
0 a a
In [91]: s.str.cat(f, join="left", na_rep="-")
Out[91]:
0 aaa
1 bbb
2 c-c
3 ddd
dtype: string
Concatenating a Series and many objects into a Series#
Several array-like items (specifically: Series, Index, and 1-dimensional variants of np.ndarray)
can be combined in a list-like container (including iterators, dict-views, etc.).
In [92]: s
Out[92]:
0 a
1 b
2 c
3 d
dtype: string
In [93]: u
Out[93]:
1 b
3 d
0 a
2 c
dtype: string
In [94]: s.str.cat([u, u.to_numpy()], join="left")
Out[94]:
0 aab
1 bbd
2 cca
3 ddc
dtype: string
All elements without an index (e.g. np.ndarray) within the passed list-like must match in length to the calling Series (or Index),
but Series and Index may have arbitrary length (as long as alignment is not disabled with join=None):
In [95]: v
Out[95]:
-1 z
0 a
1 b
3 d
4 e
dtype: string
In [96]: s.str.cat([v, u, u.to_numpy()], join="outer", na_rep="-")
Out[96]:
-1 -z--
0 aaab
1 bbbd
2 c-ca
3 dddc
4 -e--
dtype: string
If using join='right' on a list-like of others that contains different indexes,
the union of these indexes will be used as the basis for the final concatenation:
In [97]: u.loc[[3]]
Out[97]:
3 d
dtype: string
In [98]: v.loc[[-1, 0]]
Out[98]:
-1 z
0 a
dtype: string
In [99]: s.str.cat([u.loc[[3]], v.loc[[-1, 0]]], join="right", na_rep="-")
Out[99]:
3 dd-
-1 --z
0 a-a
dtype: string
Indexing with .str#
You can use [] notation to directly index by position locations. If you index past the end
of the string, the result will be a NaN.
In [100]: s = pd.Series(
.....: ["A", "B", "C", "Aaba", "Baca", np.nan, "CABA", "dog", "cat"], dtype="string"
.....: )
.....:
In [101]: s.str[0]
Out[101]:
0 A
1 B
2 C
3 A
4 B
5 <NA>
6 C
7 d
8 c
dtype: string
In [102]: s.str[1]
Out[102]:
0 <NA>
1 <NA>
2 <NA>
3 a
4 a
5 <NA>
6 A
7 o
8 a
dtype: string
Extracting substrings#
Extract first match in each subject (extract)#
Warning
Before version 0.23, argument expand of the extract method defaulted to
False. When expand=False, extract returns a Series, Index, or
DataFrame, depending on the subject and regular expression
pattern. When expand=True, it always returns a DataFrame,
which is more consistent and less confusing from the perspective of a user.
expand=True has been the default since version 0.23.0.
The extract method accepts a regular expression with at least one
capture group.
Extracting a regular expression with more than one group returns a
DataFrame with one column per group.
In [103]: pd.Series(
.....: ["a1", "b2", "c3"],
.....: dtype="string",
.....: ).str.extract(r"([ab])(\d)", expand=False)
.....:
Out[103]:
0 1
0 a 1
1 b 2
2 <NA> <NA>
Elements that do not match return a row filled with NaN. Thus, a
Series of messy strings can be “converted” into a like-indexed Series
or DataFrame of cleaned-up or more useful strings, without
necessitating get() to access tuples or re.match objects. The
dtype of the result is always object, even if no match is found and
the result only contains NaN.
Named groups like
In [104]: pd.Series(["a1", "b2", "c3"], dtype="string").str.extract(
.....: r"(?P<letter>[ab])(?P<digit>\d)", expand=False
.....: )
.....:
Out[104]:
letter digit
0 a 1
1 b 2
2 <NA> <NA>
and optional groups like
In [105]: pd.Series(
.....: ["a1", "b2", "3"],
.....: dtype="string",
.....: ).str.extract(r"([ab])?(\d)", expand=False)
.....:
Out[105]:
0 1
0 a 1
1 b 2
2 <NA> 3
can also be used. Note that any capture group names in the regular
expression will be used for column names; otherwise capture group
numbers will be used.
Extracting a regular expression with one group returns a DataFrame
with one column if expand=True.
In [106]: pd.Series(["a1", "b2", "c3"], dtype="string").str.extract(r"[ab](\d)", expand=True)
Out[106]:
0
0 1
1 2
2 <NA>
It returns a Series if expand=False.
In [107]: pd.Series(["a1", "b2", "c3"], dtype="string").str.extract(r"[ab](\d)", expand=False)
Out[107]:
0 1
1 2
2 <NA>
dtype: string
Calling on an Index with a regex with exactly one capture group
returns a DataFrame with one column if expand=True.
In [108]: s = pd.Series(["a1", "b2", "c3"], ["A11", "B22", "C33"], dtype="string")
In [109]: s
Out[109]:
A11 a1
B22 b2
C33 c3
dtype: string
In [110]: s.index.str.extract("(?P<letter>[a-zA-Z])", expand=True)
Out[110]:
letter
0 A
1 B
2 C
It returns an Index if expand=False.
In [111]: s.index.str.extract("(?P<letter>[a-zA-Z])", expand=False)
Out[111]: Index(['A', 'B', 'C'], dtype='object', name='letter')
Calling on an Index with a regex with more than one capture group
returns a DataFrame if expand=True.
In [112]: s.index.str.extract("(?P<letter>[a-zA-Z])([0-9]+)", expand=True)
Out[112]:
letter 1
0 A 11
1 B 22
2 C 33
It raises ValueError if expand=False.
>>> s.index.str.extract("(?P<letter>[a-zA-Z])([0-9]+)", expand=False)
ValueError: only one regex group is supported with Index
The table below summarizes the behavior of extract(expand=False)
(input subject in first column, number of groups in regex in
first row):

| | 1 group | >1 group |
|---|---|---|
| Index | Index | ValueError |
| Series | Series | DataFrame |
Extract all matches in each subject (extractall)#
Unlike extract (which returns only the first match),
In [113]: s = pd.Series(["a1a2", "b1", "c1"], index=["A", "B", "C"], dtype="string")
In [114]: s
Out[114]:
A a1a2
B b1
C c1
dtype: string
In [115]: two_groups = "(?P<letter>[a-z])(?P<digit>[0-9])"
In [116]: s.str.extract(two_groups, expand=True)
Out[116]:
letter digit
A a 1
B b 1
C c 1
the extractall method returns every match. The result of
extractall is always a DataFrame with a MultiIndex on its
rows. The last level of the MultiIndex is named match and
indicates the order in the subject.
In [117]: s.str.extractall(two_groups)
Out[117]:
letter digit
match
A 0 a 1
1 a 2
B 0 b 1
C 0 c 1
When each subject string in the Series has exactly one match,
In [118]: s = pd.Series(["a3", "b3", "c2"], dtype="string")
In [119]: s
Out[119]:
0 a3
1 b3
2 c2
dtype: string
then extractall(pat).xs(0, level='match') gives the same result as
extract(pat).
In [120]: extract_result = s.str.extract(two_groups, expand=True)
In [121]: extract_result
Out[121]:
letter digit
0 a 3
1 b 3
2 c 2
In [122]: extractall_result = s.str.extractall(two_groups)
In [123]: extractall_result
Out[123]:
letter digit
match
0 0 a 3
1 0 b 3
2 0 c 2
In [124]: extractall_result.xs(0, level="match")
Out[124]:
letter digit
0 a 3
1 b 3
2 c 2
Index also supports .str.extractall. It returns a DataFrame which has the
same result as a Series.str.extractall with a default index (starts from 0).
In [125]: pd.Index(["a1a2", "b1", "c1"]).str.extractall(two_groups)
Out[125]:
letter digit
match
0 0 a 1
1 a 2
1 0 b 1
2 0 c 1
In [126]: pd.Series(["a1a2", "b1", "c1"], dtype="string").str.extractall(two_groups)
Out[126]:
letter digit
match
0 0 a 1
1 a 2
1 0 b 1
2 0 c 1
Testing for strings that match or contain a pattern#
You can check whether elements contain a pattern:
In [127]: pattern = r"[0-9][a-z]"
In [128]: pd.Series(
.....: ["1", "2", "3a", "3b", "03c", "4dx"],
.....: dtype="string",
.....: ).str.contains(pattern)
.....:
Out[128]:
0 False
1 False
2 True
3 True
4 True
5 True
dtype: boolean
Or whether elements match a pattern:
In [129]: pd.Series(
.....: ["1", "2", "3a", "3b", "03c", "4dx"],
.....: dtype="string",
.....: ).str.match(pattern)
.....:
Out[129]:
0 False
1 False
2 True
3 True
4 False
5 True
dtype: boolean
New in version 1.1.0.
In [130]: pd.Series(
.....: ["1", "2", "3a", "3b", "03c", "4dx"],
.....: dtype="string",
.....: ).str.fullmatch(pattern)
.....:
Out[130]:
0 False
1 False
2 True
3 True
4 False
5 False
dtype: boolean
Note
The distinction between match, fullmatch, and contains is strictness:
fullmatch tests whether the entire string matches the regular expression;
match tests whether there is a match of the regular expression that begins
at the first character of the string; and contains tests whether there is
a match of the regular expression at any position within the string.
The corresponding functions in the re package for these three match modes are
re.fullmatch,
re.match, and
re.search,
respectively.
Methods like match, fullmatch, contains, startswith, and
endswith take an extra na argument so missing values can be considered
True or False:
In [131]: s4 = pd.Series(
.....: ["A", "B", "C", "Aaba", "Baca", np.nan, "CABA", "dog", "cat"], dtype="string"
.....: )
.....:
In [132]: s4.str.contains("A", na=False)
Out[132]:
0 True
1 False
2 False
3 True
4 False
5 False
6 True
7 False
8 False
dtype: boolean
Creating indicator variables#
You can extract dummy variables from string columns.
For example if they are separated by a '|':
In [133]: s = pd.Series(["a", "a|b", np.nan, "a|c"], dtype="string")
In [134]: s.str.get_dummies(sep="|")
Out[134]:
a b c
0 1 0 0
1 1 1 0
2 0 0 0
3 1 0 1
String Index also supports get_dummies which returns a MultiIndex.
In [135]: idx = pd.Index(["a", "a|b", np.nan, "a|c"])
In [136]: idx.str.get_dummies(sep="|")
Out[136]:
MultiIndex([(1, 0, 0),
(1, 1, 0),
(0, 0, 0),
(1, 0, 1)],
names=['a', 'b', 'c'])
See also get_dummies().
Method summary#
cat() : Concatenate strings
split() : Split strings on delimiter
rsplit() : Split strings on delimiter working from the end of the string
get() : Index into each element (retrieve i-th element)
join() : Join strings in each element of the Series with passed separator
get_dummies() : Split strings on the delimiter returning DataFrame of dummy variables
contains() : Return boolean array if each string contains pattern/regex
replace() : Replace occurrences of pattern/regex/string with some other string or the return value of a callable given the occurrence
removeprefix() : Remove prefix from string, i.e. only remove if string starts with prefix.
removesuffix() : Remove suffix from string, i.e. only remove if string ends with suffix.
repeat() : Duplicate values (s.str.repeat(3) equivalent to x * 3)
pad() : Add whitespace to left, right, or both sides of strings
center() : Equivalent to str.center
ljust() : Equivalent to str.ljust
rjust() : Equivalent to str.rjust
zfill() : Equivalent to str.zfill
wrap() : Split long strings into lines with length less than a given width
slice() : Slice each string in the Series
slice_replace() : Replace slice in each string with passed value
count() : Count occurrences of pattern
startswith() : Equivalent to str.startswith(pat) for each element
endswith() : Equivalent to str.endswith(pat) for each element
findall() : Compute list of all occurrences of pattern/regex for each string
match() : Call re.match on each element, returning matched groups as list
extract() : Call re.search on each element, returning DataFrame with one row for each element and one column for each regex capture group
extractall() : Call re.findall on each element, returning DataFrame with one row for each match and one column for each regex capture group
len() : Compute string lengths
strip() : Equivalent to str.strip
rstrip() : Equivalent to str.rstrip
lstrip() : Equivalent to str.lstrip
partition() : Equivalent to str.partition
rpartition() : Equivalent to str.rpartition
lower() : Equivalent to str.lower
casefold() : Equivalent to str.casefold
upper() : Equivalent to str.upper
find() : Equivalent to str.find
rfind() : Equivalent to str.rfind
index() : Equivalent to str.index
rindex() : Equivalent to str.rindex
capitalize() : Equivalent to str.capitalize
swapcase() : Equivalent to str.swapcase
normalize() : Return Unicode normal form. Equivalent to unicodedata.normalize
translate() : Equivalent to str.translate
isalnum() : Equivalent to str.isalnum
isalpha() : Equivalent to str.isalpha
isdigit() : Equivalent to str.isdigit
isspace() : Equivalent to str.isspace
islower() : Equivalent to str.islower
isupper() : Equivalent to str.isupper
istitle() : Equivalent to str.istitle
isnumeric() : Equivalent to str.isnumeric
isdecimal() : Equivalent to str.isdecimal
|
user_guide/text.html
|
pandas.Series.dt.tz
|
`pandas.Series.dt.tz`
Return the timezone.
|
Series.dt.tz[source]#
Return the timezone.
Returns
datetime.tzinfo, pytz.tzinfo.BaseTZInfo, dateutil.tz.tz.tzfile, or None
    Returns None when the array is tz-naive.
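No example appears on the original page; a short sketch:
```
>>> import pandas as pd
>>> s = pd.Series(pd.date_range("2023-01-01", periods=2, tz="UTC"))
>>> s.dt.tz
<UTC>
>>> pd.Series(pd.date_range("2023-01-01", periods=2)).dt.tz is None
True
```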
|
reference/api/pandas.Series.dt.tz.html
|
pandas.Int8Dtype
|
`pandas.Int8Dtype`
An ExtensionDtype for int8 integer data.
|
class pandas.Int8Dtype[source]#
An ExtensionDtype for int8 integer data.
Changed in version 1.0.0: Now uses pandas.NA as its missing value,
rather than numpy.nan.
Attributes
None
Methods
None
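No example appears on the original page; a minimal sketch using the "Int8" string alias for this dtype:
```
>>> import pandas as pd
>>> s = pd.Series([1, 2, None], dtype="Int8")
>>> s
0       1
1       2
2    <NA>
dtype: Int8
```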
|
reference/api/pandas.Int8Dtype.html
|
pandas.Series.at
|
`pandas.Series.at`
Access a single value for a row/column label pair.
```
>>> df = pd.DataFrame([[0, 2, 3], [0, 4, 1], [10, 20, 30]],
... index=[4, 5, 6], columns=['A', 'B', 'C'])
>>> df
A B C
4 0 2 3
5 0 4 1
6 10 20 30
```
|
property Series.at[source]#
Access a single value for a row/column label pair.
Similar to loc, in that both provide label-based lookups. Use
at if you only need to get or set a single value in a DataFrame
or Series.
Raises
KeyError
    If getting a value and ‘label’ does not exist in a DataFrame or Series.
ValueError
    If row/column label pair is not a tuple or if any label from the pair is not a scalar for DataFrame.
    If label is list-like (excluding NamedTuple) for Series.
See also
DataFrame.at : Access a single value for a row/column pair by label.
DataFrame.iat : Access a single value for a row/column pair by integer position.
DataFrame.loc : Access a group of rows and columns by label(s).
DataFrame.iloc : Access a group of rows and columns by integer position(s).
Series.at : Access a single value by label.
Series.iat : Access a single value by integer position.
Series.loc : Access a group of rows by label(s).
Series.iloc : Access a group of rows by integer position(s).
Notes
See Fast scalar value getting and setting
for more details.
Examples
>>> df = pd.DataFrame([[0, 2, 3], [0, 4, 1], [10, 20, 30]],
... index=[4, 5, 6], columns=['A', 'B', 'C'])
>>> df
A B C
4 0 2 3
5 0 4 1
6 10 20 30
Get value at specified row/column pair
>>> df.at[4, 'B']
2
Set value at specified row/column pair
>>> df.at[4, 'B'] = 10
>>> df.at[4, 'B']
10
Get value within a Series
>>> df.loc[5].at['B']
4
|
reference/api/pandas.Series.at.html
|
pandas.HDFStore.select
|
`pandas.HDFStore.select`
Retrieve pandas object stored in file, optionally based on where criteria.
|
HDFStore.select(key, where=None, start=None, stop=None, columns=None, iterator=False, chunksize=None, auto_close=False)[source]#
Retrieve pandas object stored in file, optionally based on where criteria.
Warning
Pandas uses PyTables for reading and writing HDF5 files, which allows
serializing object-dtype data with pickle when using the “fixed” format.
Loading pickled data received from untrusted sources can be unsafe.
See: https://docs.python.org/3/library/pickle.html for more.
Parameters
key : str
    Object being retrieved from file.
where : list or None
    List of Term (or convertible) objects, optional.
start : int or None
    Row number to start selection.
stop : int, default None
    Row number to stop selection.
columns : list or None
    A list of columns that, if not None, will limit the return columns.
iterator : bool, default False
    Returns an iterator.
chunksize : int or None
    Number of rows to include in iteration, return an iterator.
auto_close : bool, default False
    Should automatically close the store when finished.
Returns
object
    Retrieved object from file.
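No example appears on the original page; a minimal sketch (assumes the optional PyTables dependency is installed; data_columns=True makes column x queryable in where-criteria):
```
>>> import pandas as pd
>>> df = pd.DataFrame({"x": range(5)})
>>> with pd.HDFStore("store.h5") as store:
...     store.put("df", df, format="table", data_columns=True)
...     subset = store.select("df", where="x > 2")
>>> subset
   x
3  3
4  4
```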
|
reference/api/pandas.HDFStore.select.html
|
pandas.tseries.offsets.WeekOfMonth.freqstr
|
`pandas.tseries.offsets.WeekOfMonth.freqstr`
Return a string representing the frequency.
```
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
```
|
WeekOfMonth.freqstr#
Return a string representing the frequency.
Examples
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
>>> pd.offsets.BusinessHour(2).freqstr
'2BH'
>>> pd.offsets.Nano().freqstr
'N'
>>> pd.offsets.Nano(-3).freqstr
'-3N'
|
reference/api/pandas.tseries.offsets.WeekOfMonth.freqstr.html
|
pandas.tseries.offsets.QuarterBegin.apply
|
pandas.tseries.offsets.QuarterBegin.apply
|
QuarterBegin.apply()#
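apply is deprecated in favor of plain addition; a short sketch of the replacement (QuarterBegin defaults to startingMonth=3):
```
>>> import pandas as pd
>>> pd.Timestamp(2022, 1, 15) + pd.offsets.QuarterBegin()
Timestamp('2022-03-01 00:00:00')
```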
|
reference/api/pandas.tseries.offsets.QuarterBegin.apply.html
|
Index objects
|
Index objects
|
Index#
Many of these methods or variants thereof are available on the objects
that contain an index (Series/DataFrame) and those should most likely be
used before calling these methods directly.
Index([data, dtype, copy, name, tupleize_cols])
Immutable sequence used for indexing and alignment.
Properties#
Index.values
Return an array representing the data in the Index.
Index.is_monotonic
(DEPRECATED) Alias for is_monotonic_increasing.
Index.is_monotonic_increasing
Return a boolean if the values are equal or increasing.
Index.is_monotonic_decreasing
Return a boolean if the values are equal or decreasing.
Index.is_unique
Return if the index has unique values.
Index.has_duplicates
Check if the Index has duplicate values.
Index.hasnans
Return True if there are any NaNs.
Index.dtype
Return the dtype object of the underlying data.
Index.inferred_type
Return a string of the type inferred from the values.
Index.is_all_dates
Whether or not the index values only consist of dates.
Index.shape
Return a tuple of the shape of the underlying data.
Index.name
Return Index or MultiIndex name.
Index.names
Index.nbytes
Return the number of bytes in the underlying data.
Index.ndim
Number of dimensions of the underlying data, by definition 1.
Index.size
Return the number of elements in the underlying data.
Index.empty
Index.T
Return the transpose, which is by definition self.
Index.memory_usage([deep])
Memory usage of the values.
Modifying and computations#
Index.all(*args, **kwargs)
Return whether all elements are Truthy.
Index.any(*args, **kwargs)
Return whether any element is Truthy.
Index.argmin([axis, skipna])
Return int position of the smallest value in the Series.
Index.argmax([axis, skipna])
Return int position of the largest value in the Series.
Index.copy([name, deep, dtype, names])
Make a copy of this object.
Index.delete(loc)
Make new Index with passed location(-s) deleted.
Index.drop(labels[, errors])
Make new Index with passed list of labels deleted.
Index.drop_duplicates(*[, keep])
Return Index with duplicate values removed.
Index.duplicated([keep])
Indicate duplicate index values.
Index.equals(other)
Determine if two Index object are equal.
Index.factorize([sort, na_sentinel, ...])
Encode the object as an enumerated type or categorical variable.
Index.identical(other)
Similar to equals, but checks that object attributes and types are also equal.
Index.insert(loc, item)
Make new Index inserting new item at location.
Index.is_(other)
More flexible, faster check like is but that works through views.
Index.is_boolean()
Check if the Index only consists of booleans.
Index.is_categorical()
Check if the Index holds categorical data.
Index.is_floating()
Check if the Index is a floating type.
Index.is_integer()
Check if the Index only consists of integers.
Index.is_interval()
Check if the Index holds Interval objects.
Index.is_mixed()
Check if the Index holds data with mixed data types.
Index.is_numeric()
Check if the Index only consists of numeric data.
Index.is_object()
Check if the Index is of the object dtype.
Index.min([axis, skipna])
Return the minimum value of the Index.
Index.max([axis, skipna])
Return the maximum value of the Index.
Index.reindex(target[, method, level, ...])
Create index with target's values.
Index.rename(name[, inplace])
Alter Index or MultiIndex name.
Index.repeat(repeats[, axis])
Repeat elements of a Index.
Index.where(cond[, other])
Replace values where the condition is False.
Index.take(indices[, axis, allow_fill, ...])
Return a new Index of the values selected by the indices.
Index.putmask(mask, value)
Return a new Index of the values set with the mask.
Index.unique([level])
Return unique values in the index.
Index.nunique([dropna])
Return number of unique elements in the object.
Index.value_counts([normalize, sort, ...])
Return a Series containing counts of unique values.
Compatibility with MultiIndex#
Index.set_names(names, *[, level, inplace])
Set Index or MultiIndex name.
Index.droplevel([level])
Return index with requested level(s) removed.
Missing values#
Index.fillna([value, downcast])
Fill NA/NaN values with the specified value.
Index.dropna([how])
Return Index without NA/NaN values.
Index.isna()
Detect missing values.
Index.notna()
Detect existing (non-missing) values.
Conversion#
Index.astype(dtype[, copy])
Create an Index with values cast to dtypes.
Index.item()
Return the first element of the underlying data as a Python scalar.
Index.map(mapper[, na_action])
Map values using an input mapping or function.
Index.ravel([order])
Return an ndarray of the flattened values of the underlying data.
Index.to_list()
Return a list of the values.
Index.to_native_types([slicer])
(DEPRECATED) Format specified values of self and return them.
Index.to_series([index, name])
Create a Series with both index and values equal to the index keys.
Index.to_frame([index, name])
Create a DataFrame with a column containing the Index.
Index.view([cls])
Sorting#
Index.argsort(*args, **kwargs)
Return the integer indices that would sort the index.
Index.searchsorted(value[, side, sorter])
Find indices where elements should be inserted to maintain order.
Index.sort_values([return_indexer, ...])
Return a sorted copy of the index.
Time-specific operations#
Index.shift([periods, freq])
Shift index by desired number of time frequency increments.
Combining / joining / set operations#
Index.append(other)
Append a collection of Index options together.
Index.join(other, *[, how, level, ...])
Compute join_index and indexers to conform data structures to the new index.
Index.intersection(other[, sort])
Form the intersection of two Index objects.
Index.union(other[, sort])
Form the union of two Index objects.
Index.difference(other[, sort])
Return a new Index with elements of index not in other.
Index.symmetric_difference(other[, ...])
Compute the symmetric difference of two Index objects.
Selecting#
Index.asof(label)
Return the label from the index, or, if not present, the previous one.
Index.asof_locs(where, mask)
Return the locations (indices) of labels in the index.
Index.get_indexer(target[, method, limit, ...])
Compute indexer and mask for new index given the current index.
Index.get_indexer_for(target)
Guaranteed return of an indexer even when non-unique.
Index.get_indexer_non_unique(target)
Compute indexer and mask for new index given the current index.
Index.get_level_values(level)
Return an Index of values for requested level.
Index.get_loc(key[, method, tolerance])
Get integer location, slice or boolean mask for requested label.
Index.get_slice_bound(label, side[, kind])
Calculate slice bound that corresponds to given label.
Index.get_value(series, key)
Fast lookup of value from 1-dimensional ndarray.
Index.isin(values[, level])
Return a boolean array where the index values are in values.
Index.slice_indexer([start, end, step, kind])
Compute the slice indexer for input labels and step.
Index.slice_locs([start, end, step, kind])
Compute slice locations for input labels.
Numeric Index#
RangeIndex([start, stop, step, dtype, copy, ...])
Immutable Index implementing a monotonic integer range.
Int64Index([data, dtype, copy, name])
(DEPRECATED) Immutable sequence used for indexing and alignment.
UInt64Index([data, dtype, copy, name])
(DEPRECATED) Immutable sequence used for indexing and alignment.
Float64Index([data, dtype, copy, name])
(DEPRECATED) Immutable sequence used for indexing and alignment.
RangeIndex.start
The value of the start parameter (0 if this was not supplied).
RangeIndex.stop
The value of the stop parameter.
RangeIndex.step
The value of the step parameter (1 if this was not supplied).
RangeIndex.from_range(data[, name, dtype])
Create RangeIndex from a range object.
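A quick illustration of the RangeIndex attributes and constructor listed above:
>>> ri = pd.RangeIndex(start=0, stop=10, step=2)
>>> (ri.start, ri.stop, ri.step)
(0, 10, 2)
>>> list(pd.RangeIndex.from_range(range(3)))
[0, 1, 2]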
CategoricalIndex#
CategoricalIndex([data, categories, ...])
Index based on an underlying Categorical.
Categorical components#
CategoricalIndex.codes
The category codes of this categorical.
CategoricalIndex.categories
The categories of this categorical.
CategoricalIndex.ordered
Whether the categories have an ordered relationship.
CategoricalIndex.rename_categories(*args, ...)
Rename categories.
CategoricalIndex.reorder_categories(*args, ...)
Reorder categories as specified in new_categories.
CategoricalIndex.add_categories(*args, **kwargs)
Add new categories.
CategoricalIndex.remove_categories(*args, ...)
Remove the specified categories.
CategoricalIndex.remove_unused_categories(...)
Remove categories which are not used.
CategoricalIndex.set_categories(*args, **kwargs)
Set the categories to the specified new_categories.
CategoricalIndex.as_ordered(*args, **kwargs)
Set the Categorical to be ordered.
CategoricalIndex.as_unordered(*args, **kwargs)
Set the Categorical to be unordered.
Modifying and computations#
CategoricalIndex.map(mapper)
Map values using an input mapping or function.
CategoricalIndex.equals(other)
Determine if two CategoricalIndex objects contain the same elements.
IntervalIndex#
IntervalIndex(data[, closed, dtype, copy, ...])
Immutable index of intervals that are closed on the same side.
IntervalIndex components#
IntervalIndex.from_arrays(left, right[, ...])
Construct from two arrays defining the left and right bounds.
IntervalIndex.from_tuples(data[, closed, ...])
Construct an IntervalIndex from an array-like of tuples.
IntervalIndex.from_breaks(breaks[, closed, ...])
Construct an IntervalIndex from an array of splits.
IntervalIndex.left
IntervalIndex.right
IntervalIndex.mid
IntervalIndex.closed
String describing the inclusive side the intervals.
IntervalIndex.length
IntervalIndex.values
Return an array representing the data in the Index.
IntervalIndex.is_empty
Indicates if an interval is empty, meaning it contains no points.
IntervalIndex.is_non_overlapping_monotonic
Return a boolean whether the IntervalArray is non-overlapping and monotonic.
IntervalIndex.is_overlapping
Return True if the IntervalIndex has overlapping intervals, else False.
IntervalIndex.get_loc(key[, method, tolerance])
Get integer location, slice or boolean mask for requested label.
IntervalIndex.get_indexer(target[, method, ...])
Compute indexer and mask for new index given the current index.
IntervalIndex.set_closed(*args, **kwargs)
Return an identical IntervalArray closed on the specified side.
IntervalIndex.contains(*args, **kwargs)
Check elementwise if the Intervals contain the value.
IntervalIndex.overlaps(*args, **kwargs)
Check elementwise if an Interval overlaps the values in the IntervalArray.
IntervalIndex.to_tuples(*args, **kwargs)
Return an ndarray of tuples of the form (left, right).
MultiIndex#
MultiIndex([levels, codes, sortorder, ...])
A multi-level, or hierarchical, index object for pandas objects.
IndexSlice
Create an object to more easily perform multi-index slicing.
MultiIndex constructors#
MultiIndex.from_arrays(arrays[, sortorder, ...])
Convert arrays to MultiIndex.
MultiIndex.from_tuples(tuples[, sortorder, ...])
Convert list of tuples to MultiIndex.
MultiIndex.from_product(iterables[, ...])
Make a MultiIndex from the cartesian product of multiple iterables.
MultiIndex.from_frame(df[, sortorder, names])
Make a MultiIndex from a DataFrame.
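For example, from_product builds the cartesian product of its inputs:
>>> pd.MultiIndex.from_product([['a', 'b'], [1, 2]], names=['letter', 'number'])
MultiIndex([('a', 1),
            ('a', 2),
            ('b', 1),
            ('b', 2)],
           names=['letter', 'number'])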
MultiIndex properties#
MultiIndex.names
Names of levels in MultiIndex.
MultiIndex.levels
MultiIndex.codes
MultiIndex.nlevels
Integer number of levels in this MultiIndex.
MultiIndex.levshape
A tuple with the length of each level.
MultiIndex.dtypes
Return the dtypes as a Series for the underlying MultiIndex.
MultiIndex components#
MultiIndex.set_levels(levels, *[, level, ...])
Set new levels on MultiIndex.
MultiIndex.set_codes(codes, *[, level, ...])
Set new codes on MultiIndex.
MultiIndex.to_flat_index()
Convert a MultiIndex to an Index of Tuples containing the level values.
MultiIndex.to_frame([index, name, ...])
Create a DataFrame with the levels of the MultiIndex as columns.
MultiIndex.sortlevel([level, ascending, ...])
Sort MultiIndex at the requested level.
MultiIndex.droplevel([level])
Return index with requested level(s) removed.
MultiIndex.swaplevel([i, j])
Swap level i with level j.
MultiIndex.reorder_levels(order)
Rearrange levels using input order.
MultiIndex.remove_unused_levels()
Create new MultiIndex from current that removes unused levels.
MultiIndex selecting#
MultiIndex.get_loc(key[, method])
Get location for a label or a tuple of labels.
MultiIndex.get_locs(seq)
Get location for a sequence of labels.
MultiIndex.get_loc_level(key[, level, ...])
Get location and sliced index for requested label(s)/level(s).
MultiIndex.get_indexer(target[, method, ...])
Compute indexer and mask for new index given the current index.
MultiIndex.get_level_values(level)
Return vector of label values for requested level.
DatetimeIndex#
DatetimeIndex([data, freq, tz, normalize, ...])
Immutable ndarray-like of datetime64 data.
Time/date components#
DatetimeIndex.year
The year of the datetime.
DatetimeIndex.month
The month as January=1, December=12.
DatetimeIndex.day
The day of the datetime.
DatetimeIndex.hour
The hours of the datetime.
DatetimeIndex.minute
The minutes of the datetime.
DatetimeIndex.second
The seconds of the datetime.
DatetimeIndex.microsecond
The microseconds of the datetime.
DatetimeIndex.nanosecond
The nanoseconds of the datetime.
DatetimeIndex.date
Returns numpy array of python datetime.date objects.
DatetimeIndex.time
Returns numpy array of datetime.time objects.
DatetimeIndex.timetz
Returns numpy array of datetime.time objects with timezones.
DatetimeIndex.dayofyear
The ordinal day of the year.
DatetimeIndex.day_of_year
The ordinal day of the year.
DatetimeIndex.weekofyear
(DEPRECATED) The week ordinal of the year.
DatetimeIndex.week
(DEPRECATED) The week ordinal of the year.
DatetimeIndex.dayofweek
The day of the week with Monday=0, Sunday=6.
DatetimeIndex.day_of_week
The day of the week with Monday=0, Sunday=6.
DatetimeIndex.weekday
The day of the week with Monday=0, Sunday=6.
DatetimeIndex.quarter
The quarter of the date.
DatetimeIndex.tz
Return the timezone.
DatetimeIndex.freq
Return the frequency object if it is set, otherwise None.
DatetimeIndex.freqstr
Return the frequency object as a string if it's set, otherwise None.
DatetimeIndex.is_month_start
Indicates whether the date is the first day of the month.
DatetimeIndex.is_month_end
Indicates whether the date is the last day of the month.
DatetimeIndex.is_quarter_start
Indicator for whether the date is the first day of a quarter.
DatetimeIndex.is_quarter_end
Indicator for whether the date is the last day of a quarter.
DatetimeIndex.is_year_start
Indicate whether the date is the first day of a year.
DatetimeIndex.is_year_end
Indicate whether the date is the last day of the year.
DatetimeIndex.is_leap_year
Boolean indicator if the date belongs to a leap year.
DatetimeIndex.inferred_freq
Tries to return a string representing a frequency generated by infer_freq.
Selecting#
DatetimeIndex.indexer_at_time(time[, asof])
Return index locations of values at particular time of day.
DatetimeIndex.indexer_between_time(...[, ...])
Return index locations of values between particular times of day.
Time-specific operations#
DatetimeIndex.normalize(*args, **kwargs)
Convert times to midnight.
DatetimeIndex.strftime(date_format)
Convert to Index using specified date_format.
DatetimeIndex.snap([freq])
Snap time stamps to nearest occurring frequency.
DatetimeIndex.tz_convert(tz)
Convert tz-aware Datetime Array/Index from one time zone to another.
DatetimeIndex.tz_localize(tz[, ambiguous, ...])
Localize tz-naive Datetime Array/Index to tz-aware Datetime Array/Index.
DatetimeIndex.round(*args, **kwargs)
Perform round operation on the data to the specified freq.
DatetimeIndex.floor(*args, **kwargs)
Perform floor operation on the data to the specified freq.
DatetimeIndex.ceil(*args, **kwargs)
Perform ceil operation on the data to the specified freq.
DatetimeIndex.month_name(*args, **kwargs)
Return the month names with specified locale.
DatetimeIndex.day_name(*args, **kwargs)
Return the day names with specified locale.
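A short sketch combining a few of these operations (.tolist() keeps the output version-stable):
>>> dti = pd.to_datetime(['2022-01-01 10:35', '2022-01-02 23:59'])
>>> dti.floor('H').strftime('%H:%M').tolist()
['10:00', '23:00']
>>> dti.tz_localize('UTC').tz_convert('US/Eastern').hour.tolist()
[5, 18]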
Conversion#
DatetimeIndex.to_period(*args, **kwargs)
Cast to PeriodArray/Index at a particular frequency.
DatetimeIndex.to_perioddelta(freq)
Calculate deltas between self values and self converted to Periods at a freq.
DatetimeIndex.to_pydatetime(*args, **kwargs)
Return an ndarray of datetime.datetime objects.
DatetimeIndex.to_series([keep_tz, index, name])
Create a Series with both index and values equal to the index keys.
DatetimeIndex.to_frame([index, name])
Create a DataFrame with a column containing the Index.
Methods#
DatetimeIndex.mean(*args, **kwargs)
Return the mean value of the Array.
DatetimeIndex.std(*args, **kwargs)
Return sample standard deviation over requested axis.
TimedeltaIndex#
TimedeltaIndex([data, unit, freq, closed, ...])
Immutable Index of timedelta64 data.
Components#
TimedeltaIndex.days
Number of days for each element.
TimedeltaIndex.seconds
Number of seconds (>= 0 and less than 1 day) for each element.
TimedeltaIndex.microseconds
Number of microseconds (>= 0 and less than 1 second) for each element.
TimedeltaIndex.nanoseconds
Number of nanoseconds (>= 0 and less than 1 microsecond) for each element.
TimedeltaIndex.components
Return a DataFrame of the individual resolution components of the Timedeltas.
TimedeltaIndex.inferred_freq
Tries to return a string representing a frequency generated by infer_freq.
Conversion#
TimedeltaIndex.to_pytimedelta(*args, **kwargs)
Return an ndarray of datetime.timedelta objects.
TimedeltaIndex.to_series([index, name])
Create a Series with both index and values equal to the index keys.
TimedeltaIndex.round(*args, **kwargs)
Perform round operation on the data to the specified freq.
TimedeltaIndex.floor(*args, **kwargs)
Perform floor operation on the data to the specified freq.
TimedeltaIndex.ceil(*args, **kwargs)
Perform ceil operation on the data to the specified freq.
TimedeltaIndex.to_frame([index, name])
Create a DataFrame with a column containing the Index.
Methods#
TimedeltaIndex.mean(*args, **kwargs)
Return the mean value of the Array.
PeriodIndex#
PeriodIndex([data, ordinal, freq, dtype, ...])
Immutable ndarray holding ordinal values indicating regular periods in time.
Properties#
PeriodIndex.day
The days of the period.
PeriodIndex.dayofweek
The day of the week with Monday=0, Sunday=6.
PeriodIndex.day_of_week
The day of the week with Monday=0, Sunday=6.
PeriodIndex.dayofyear
The ordinal day of the year.
PeriodIndex.day_of_year
The ordinal day of the year.
PeriodIndex.days_in_month
The number of days in the month.
PeriodIndex.daysinmonth
The number of days in the month.
PeriodIndex.end_time
Get the Timestamp for the end of the period.
PeriodIndex.freq
Return the frequency object if it is set, otherwise None.
PeriodIndex.freqstr
Return the frequency object as a string if it's set, otherwise None.
PeriodIndex.hour
The hour of the period.
PeriodIndex.is_leap_year
Logical indicating if the date belongs to a leap year.
PeriodIndex.minute
The minute of the period.
PeriodIndex.month
The month as January=1, December=12.
PeriodIndex.quarter
The quarter of the date.
PeriodIndex.qyear
PeriodIndex.second
The second of the period.
PeriodIndex.start_time
Get the Timestamp for the start of the period.
PeriodIndex.week
The week ordinal of the year.
PeriodIndex.weekday
The day of the week with Monday=0, Sunday=6.
PeriodIndex.weekofyear
The week ordinal of the year.
PeriodIndex.year
The year of the period.
Methods#
PeriodIndex.asfreq([freq, how])
Convert the PeriodArray to the specified frequency freq.
PeriodIndex.strftime(*args, **kwargs)
Convert to Index using specified date_format.
PeriodIndex.to_timestamp([freq, how])
Cast to DatetimeArray/Index.
|
reference/indexing.html
|
pandas.tseries.offsets.Easter.is_month_end
|
`pandas.tseries.offsets.Easter.is_month_end`
Return boolean whether a timestamp occurs on the month end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
```
|
Easter.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
|
reference/api/pandas.tseries.offsets.Easter.is_month_end.html
|
pandas.DataFrame.to_orc
|
`pandas.DataFrame.to_orc`
Write a DataFrame to the ORC format.
New in version 1.5.0.
```
>>> df = pd.DataFrame(data={'col1': [1, 2], 'col2': [4, 3]})
>>> df.to_orc('df.orc')
>>> pd.read_orc('df.orc')
col1 col2
0 1 4
1 2 3
```
|
DataFrame.to_orc(path=None, *, engine='pyarrow', index=None, engine_kwargs=None)[source]#
Write a DataFrame to the ORC format.
New in version 1.5.0.
Parameters
pathstr, file-like object or None, default NoneIf a string, it will be used as Root Directory path
when writing a partitioned dataset. By file-like object,
we refer to objects with a write() method, such as a file handle
(e.g. via builtin open function). If path is None,
a bytes object is returned.
enginestr, default ‘pyarrow’ORC library to use. Pyarrow must be >= 7.0.0.
indexbool, optionalIf True, include the dataframe’s index(es) in the file output.
If False, they will not be written to the file.
If None, the behavior is similar to True: the dataframe's index(es)
will be saved. However, instead of being saved as values,
a RangeIndex will be stored as a range in the metadata so it
doesn't require much space and is faster. Other indexes will
be included as columns in the file output.
engine_kwargsdict[str, Any] or None, default NoneAdditional keyword arguments passed to pyarrow.orc.write_table().
Returns
bytes if no path argument is provided else None
Raises
NotImplementedErrorDtype of one or more columns is category, unsigned integers, interval,
period or sparse.
ValueErrorengine is not pyarrow.
See also
read_orcRead an ORC file.
DataFrame.to_parquetWrite a parquet file.
DataFrame.to_csvWrite a csv file.
DataFrame.to_sqlWrite to a sql table.
DataFrame.to_hdfWrite to hdf.
Notes
Before using this function you should read the user guide about
ORC and install optional dependencies.
This function requires pyarrow
library.
For supported dtypes please refer to supported ORC features in Arrow.
Currently timezones in datetime columns are not preserved when a
dataframe is converted into ORC files.
Examples
>>> df = pd.DataFrame(data={'col1': [1, 2], 'col2': [4, 3]})
>>> df.to_orc('df.orc')
>>> pd.read_orc('df.orc')
col1 col2
0 1 4
1 2 3
If you want to get a buffer to the ORC content, you can write it to io.BytesIO:
>>> import io
>>> b = io.BytesIO(df.to_orc()) # doctest: +SKIP
>>> b.seek(0) # doctest: +SKIP
0
>>> content = b.read() # doctest: +SKIP
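Since timezones in datetime columns are not preserved (see Notes), one simple workaround sketch is to strip the timezone before writing; tz_localize(None) converts to naive wall time:
>>> df_tz = pd.DataFrame({'ts': pd.date_range('2022-01-01', periods=2, tz='UTC')})
>>> df_tz['ts'] = df_tz['ts'].dt.tz_localize(None)  # drop tz, keep wall time
>>> df_tz.to_orc('df_tz.orc')  # doctest: +SKIP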
|
reference/api/pandas.DataFrame.to_orc.html
|
API reference
|
API reference
|
This page gives an overview of all public pandas objects, functions and
methods. All classes and functions exposed in pandas.* namespace are public.
Some subpackages are public which include pandas.errors,
pandas.plotting, and pandas.testing. Public functions in
pandas.io and pandas.tseries submodules are mentioned in
the documentation. pandas.api.types subpackage holds some
public functions related to data types in pandas.
Warning
The pandas.core, pandas.compat, and pandas.util top-level modules are PRIVATE. Stable functionality in such modules is not guaranteed.
Input/output
Pickling
Flat file
Clipboard
Excel
JSON
HTML
XML
Latex
HDFStore: PyTables (HDF5)
Feather
Parquet
ORC
SAS
SPSS
SQL
Google BigQuery
STATA
General functions
Data manipulations
Top-level missing data
Top-level dealing with numeric data
Top-level dealing with datetimelike data
Top-level dealing with Interval data
Top-level evaluation
Hashing
Importing from other DataFrame libraries
Series
Constructor
Attributes
Conversion
Indexing, iteration
Binary operator functions
Function application, GroupBy & window
Computations / descriptive stats
Reindexing / selection / label manipulation
Missing data handling
Reshaping, sorting
Combining / comparing / joining / merging
Time Series-related
Accessors
Plotting
Serialization / IO / conversion
DataFrame
Constructor
Attributes and underlying data
Conversion
Indexing, iteration
Binary operator functions
Function application, GroupBy & window
Computations / descriptive stats
Reindexing / selection / label manipulation
Missing data handling
Reshaping, sorting, transposing
Combining / comparing / joining / merging
Time Series-related
Flags
Metadata
Plotting
Sparse accessor
Serialization / IO / conversion
pandas arrays, scalars, and data types
Objects
Utilities
Index objects
Index
Numeric Index
CategoricalIndex
IntervalIndex
MultiIndex
DatetimeIndex
TimedeltaIndex
PeriodIndex
Date offsets
DateOffset
BusinessDay
BusinessHour
CustomBusinessDay
CustomBusinessHour
MonthEnd
MonthBegin
BusinessMonthEnd
BusinessMonthBegin
CustomBusinessMonthEnd
CustomBusinessMonthBegin
SemiMonthEnd
SemiMonthBegin
Week
WeekOfMonth
LastWeekOfMonth
BQuarterEnd
BQuarterBegin
QuarterEnd
QuarterBegin
BYearEnd
BYearBegin
YearEnd
YearBegin
FY5253
FY5253Quarter
Easter
Tick
Day
Hour
Minute
Second
Milli
Micro
Nano
Frequencies
pandas.tseries.frequencies.to_offset
Window
Rolling window functions
Weighted window functions
Expanding window functions
Exponentially-weighted window functions
Window indexer
GroupBy
Indexing, iteration
Function application
Computations / descriptive stats
Resampling
Indexing, iteration
Function application
Upsampling
Computations / descriptive stats
Style
Styler constructor
Styler properties
Style application
Builtin styles
Style export and import
Plotting
pandas.plotting.andrews_curves
pandas.plotting.autocorrelation_plot
pandas.plotting.bootstrap_plot
pandas.plotting.boxplot
pandas.plotting.deregister_matplotlib_converters
pandas.plotting.lag_plot
pandas.plotting.parallel_coordinates
pandas.plotting.plot_params
pandas.plotting.radviz
pandas.plotting.register_matplotlib_converters
pandas.plotting.scatter_matrix
pandas.plotting.table
Options and settings
Working with options
Extensions
pandas.api.extensions.register_extension_dtype
pandas.api.extensions.register_dataframe_accessor
pandas.api.extensions.register_series_accessor
pandas.api.extensions.register_index_accessor
pandas.api.extensions.ExtensionDtype
pandas.api.extensions.ExtensionArray
pandas.arrays.PandasArray
pandas.api.indexers.check_array_indexer
Testing
Assertion functions
Exceptions and warnings
Bug report function
Test suite runner
|
reference/index.html
|
Style
|
Styler objects are returned by pandas.DataFrame.style.
Styler constructor#
Styler(data[, precision, table_styles, ...])
Helps style a DataFrame or Series according to the data with HTML and CSS.
Styler.from_custom_template(searchpath[, ...])
Factory function for creating a subclass of Styler.
Styler properties#
Styler.env
Styler.template_html
Styler.template_html_style
Styler.template_html_table
Styler.template_latex
Styler.template_string
Styler.loader
Style application#
Styler.apply(func[, axis, subset])
Apply a CSS-styling function column-wise, row-wise, or table-wise.
Styler.applymap(func[, subset])
Apply a CSS-styling function elementwise.
Styler.apply_index(func[, axis, level])
Apply a CSS-styling function to the index or column headers, level-wise.
Styler.applymap_index(func[, axis, level])
Apply a CSS-styling function to the index or column headers, elementwise.
Styler.format([formatter, subset, na_rep, ...])
Format the text display value of cells.
Styler.format_index([formatter, axis, ...])
Format the text display value of index labels or column headers.
Styler.relabel_index(labels[, axis, level])
Relabel the index, or column header, keys to display a set of specified values.
Styler.hide([subset, axis, level, names])
Hide the entire index / column headers, or specific rows / columns from display.
Styler.concat(other)
Append another Styler to combine the output into a single table.
Styler.set_td_classes(classes)
Set the class attribute of <td> HTML elements.
Styler.set_table_styles([table_styles, ...])
Set the table styles included within the <style> HTML element.
Styler.set_table_attributes(attributes)
Set the table attributes added to the <table> HTML element.
Styler.set_tooltips(ttips[, props, css_class])
Set the DataFrame of strings on Styler generating :hover tooltips.
Styler.set_caption(caption)
Set the text added to a <caption> HTML element.
Styler.set_sticky([axis, pixel_size, levels])
Add CSS to permanently display the index or column headers in a scrolling frame.
Styler.set_properties([subset])
Set defined CSS-properties to each <td> HTML element for the given subset.
Styler.set_uuid(uuid)
Set the uuid applied to id attributes of HTML elements.
Styler.clear()
Reset the Styler, removing any previously applied styles.
Styler.pipe(func, *args, **kwargs)
Apply func(self, *args, **kwargs), and return the result.
Builtin styles#
Styler.highlight_null([color, subset, ...])
Highlight missing values with a style.
Styler.highlight_max([subset, color, axis, ...])
Highlight the maximum with a style.
Styler.highlight_min([subset, color, axis, ...])
Highlight the minimum with a style.
Styler.highlight_between([subset, color, ...])
Highlight a defined range with a style.
Styler.highlight_quantile([subset, color, ...])
Highlight values defined by a quantile with a style.
Styler.background_gradient([cmap, low, ...])
Color the background in a gradient style.
Styler.text_gradient([cmap, low, high, ...])
Color the text in a gradient style.
Styler.bar([subset, axis, color, cmap, ...])
Draw bar chart in the cell backgrounds.
Style export and import#
Styler.to_html([buf, table_uuid, ...])
Write Styler to a file, buffer or string in HTML-CSS format.
Styler.to_latex([buf, column_format, ...])
Write Styler to a file, buffer or string in LaTeX format.
Styler.to_excel(excel_writer[, sheet_name, ...])
Write Styler to an Excel sheet.
Styler.to_string([buf, encoding, ...])
Write Styler to a file, buffer or string in text format.
Styler.export()
Export the styles applied to the current Styler.
Styler.use(styles)
Set the styles on the current Styler.
|
reference/style.html
|
pandas.Timestamp.daysinmonth
|
`pandas.Timestamp.daysinmonth`
Return the number of days in the month.
```
>>> ts = pd.Timestamp(2020, 3, 14)
>>> ts.days_in_month
31
```
|
Timestamp.daysinmonth#
Return the number of days in the month.
Examples
>>> ts = pd.Timestamp(2020, 3, 14)
>>> ts.days_in_month
31
|
reference/api/pandas.Timestamp.daysinmonth.html
|
pandas.Index.to_series
|
`pandas.Index.to_series`
Create a Series with both index and values equal to the index keys.
```
>>> idx = pd.Index(['Ant', 'Bear', 'Cow'], name='animal')
```
|
Index.to_series(index=None, name=None)[source]#
Create a Series with both index and values equal to the index keys.
Useful with map for returning an indexer based on an index.
Parameters
indexIndex, optionalIndex of resulting Series. If None, defaults to original index.
namestr, optionalName of resulting Series. If None, defaults to name of original
index.
Returns
SeriesThe dtype will be based on the type of the Index values.
See also
Index.to_frameConvert an Index to a DataFrame.
Series.to_frameConvert Series to DataFrame.
Examples
>>> idx = pd.Index(['Ant', 'Bear', 'Cow'], name='animal')
By default, the original Index and original name is reused.
>>> idx.to_series()
animal
Ant Ant
Bear Bear
Cow Cow
Name: animal, dtype: object
To enforce a new Index, specify new labels to index:
>>> idx.to_series(index=[0, 1, 2])
0 Ant
1 Bear
2 Cow
Name: animal, dtype: object
To override the name of the resulting column, specify name:
>>> idx.to_series(name='zoo')
animal
Ant Ant
Bear Bear
Cow Cow
Name: zoo, dtype: object
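And a sketch of the "useful with map" pattern from the description, reusing idx (the mapping dict is illustrative):
>>> mapping = {'Ant': 'insect', 'Bear': 'mammal', 'Cow': 'mammal'}
>>> idx.to_series().map(mapping)
animal
Ant     insect
Bear    mammal
Cow     mammal
Name: animal, dtype: object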
|
reference/api/pandas.Index.to_series.html
|
pandas.tseries.offsets.FY5253Quarter.name
|
`pandas.tseries.offsets.FY5253Quarter.name`
Return a string representing the base frequency.
```
>>> pd.offsets.Hour().name
'H'
```
|
FY5253Quarter.name#
Return a string representing the base frequency.
Examples
>>> pd.offsets.Hour().name
'H'
>>> pd.offsets.Hour(5).name
'H'
|
reference/api/pandas.tseries.offsets.FY5253Quarter.name.html
|
Table Visualization
|
Table Visualization
This section demonstrates visualization of tabular data using the Styler class. For information on visualization with charting please see Chart Visualization. This document is written as a Jupyter Notebook, and can be viewed or downloaded here.
Styling should be performed after the data in a DataFrame has been processed. The Styler creates an HTML <table> and leverages CSS styling language to manipulate many parameters including colors, fonts, borders, background, etc. See here for more information on styling HTML tables. This allows a lot of flexibility out of the box, and even enables web developers to integrate
DataFrames into their existing user interface designs.
The DataFrame.style attribute is a property that returns a Styler object. It has a _repr_html_ method defined on it so they are rendered automatically in Jupyter Notebook.
The above output looks very similar to the standard DataFrame HTML representation. But the HTML here has already attached some CSS classes to each cell, even if we haven't yet created any styles. We can view these by calling the .to_html() method, which returns the raw HTML as a string, which is useful for further processing or adding to a file - read on in More about CSS and HTML. Below we will show
how we can use these to format the DataFrame to be more communicative. For example, how we can build the styled table s:
Before adding styles it is useful to show that the Styler can distinguish the display value from the actual value, in both data values and index or column headers. To control the display value, the text is printed in each cell as a string, and we can use the .format() and .format_index() methods to
manipulate this according to a format spec string or a callable that takes a single value and returns a string. It is possible to define this for the whole table, or index, or for individual columns, or MultiIndex levels.
|
This section demonstrates visualization of tabular data using the Styler class. For information on visualization with charting please see Chart Visualization. This document is written as a Jupyter Notebook, and can be viewed or downloaded here.
Styler Object and HTML#
Styling should be performed after the data in a DataFrame has been processed. The Styler creates an HTML <table> and leverages CSS styling language to manipulate many parameters including colors, fonts, borders, background, etc. See here for more information on styling HTML tables. This allows a lot of flexibility out of the box, and even enables web developers to integrate
DataFrames into their existing user interface designs.
The DataFrame.style attribute is a property that returns a Styler object. It has a _repr_html_ method defined on it so they are rendered automatically in Jupyter Notebook.
[2]:
import pandas as pd
import numpy as np
import matplotlib as mpl
df = pd.DataFrame([[38.0, 2.0, 18.0, 22.0, 21, np.nan],[19, 439, 6, 452, 226,232]],
index=pd.Index(['Tumour (Positive)', 'Non-Tumour (Negative)'], name='Actual Label:'),
columns=pd.MultiIndex.from_product([['Decision Tree', 'Regression', 'Random'],['Tumour', 'Non-Tumour']], names=['Model:', 'Predicted:']))
df.style
[2]:
Model:                     Decision Tree              Regression                Random
Predicted:                Tumour  Non-Tumour     Tumour   Non-Tumour     Tumour  Non-Tumour
Actual Label:
Tumour (Positive)      38.000000    2.000000  18.000000    22.000000        21         nan
Non-Tumour (Negative)  19.000000  439.000000   6.000000   452.000000       226  232.000000
The above output looks very similar to the standard DataFrame HTML representation. But the HTML here has already attached some CSS classes to each cell, even if we haven't yet created any styles. We can view these by calling the .to_html() method, which returns the raw HTML as a string, which is useful for further processing or adding to a file - read on in More about CSS and HTML. Below we will show
how we can use these to format the DataFrame to be more communicative. For example, how we can build the styled table s:
[4]:
s
[4]:
Confusion matrix for multiple cancer prediction models.
Model:                 Decision Tree          Regression
Predicted:             Tumour  Non-Tumour  Tumour  Non-Tumour
Actual Label:
Tumour (Positive)          38           2      18          22
Non-Tumour (Negative)      19         439       6         452
Formatting the Display#
Formatting Values#
Before adding styles it is useful to show that the Styler can distinguish the display value from the actual value, in both data values and index or column headers. To control the display value, the text is printed in each cell as a string, and we can use the .format() and .format_index() methods to
manipulate this according to a format spec string or a callable that takes a single value and returns a string. It is possible to define this for the whole table, or index, or for individual columns, or MultiIndex levels.
Additionally, the format function has a precision argument to specifically help format floats, as well as decimal and thousands separators to support other locales, an na_rep argument to display missing data, and an escape argument to help display safe-HTML or safe-LaTeX. The default formatter is configured to adopt pandas' styler.format.precision option, controllable using with pd.option_context('format.precision', 2):
[5]:
df.style.format(precision=0, na_rep='MISSING', thousands=" ",
formatter={('Decision Tree', 'Tumour'): "{:.2f}",
('Regression', 'Non-Tumour'): lambda x: "$ {:,.1f}".format(x*-1e6)
})
[5]:
Model:                Decision Tree                Regression             Random
Predicted:            Tumour  Non-Tumour  Tumour        Non-Tumour  Tumour  Non-Tumour
Actual Label:
Tumour (Positive)      38.00           2      18   $ -22 000 000.0      21     MISSING
Non-Tumour (Negative)  19.00         439       6  $ -452 000 000.0     226         232
Using Styler to manipulate the display is a useful feature because maintaining the indexing and data values for other purposes gives greater control. You do not have to overwrite your DataFrame to display it how you like. Here is an example of using the formatting functions whilst still relying on the underlying data for indexing and calculations.
[6]:
weather_df = pd.DataFrame(np.random.rand(10,2)*5,
index=pd.date_range(start="2021-01-01", periods=10),
columns=["Tokyo", "Beijing"])
def rain_condition(v):
if v < 1.75:
return "Dry"
elif v < 2.75:
return "Rain"
return "Heavy Rain"
def make_pretty(styler):
styler.set_caption("Weather Conditions")
styler.format(rain_condition)
styler.format_index(lambda v: v.strftime("%A"))
styler.background_gradient(axis=None, vmin=1, vmax=5, cmap="YlGnBu")
return styler
weather_df
[6]:
               Tokyo   Beijing
2021-01-01  1.156896  0.483482
2021-01-02  4.274907  2.740275
2021-01-03  2.347367  4.046314
2021-01-04  4.762118  1.187866
2021-01-05  3.364955  1.436871
2021-01-06  1.714027  0.031307
2021-01-07  2.402132  4.665891
2021-01-08  3.262800  0.759015
2021-01-09  4.260268  2.226552
2021-01-10  4.277346  2.286653
[7]:
weather_df.loc["2021-01-04":"2021-01-08"].style.pipe(make_pretty)
[7]:
Weather Conditions
                Tokyo     Beijing
Monday     Heavy Rain         Dry
Tuesday    Heavy Rain         Dry
Wednesday         Dry         Dry
Thursday         Rain  Heavy Rain
Friday     Heavy Rain         Dry
Hiding Data#
The index and column headers can be completely hidden, as well as subselecting rows or columns that one wishes to exclude. Both of these options are performed using the same methods.
The index can be hidden from rendering by calling .hide() without any arguments, which might be useful if your index is integer based. Similarly column headers can be hidden by calling .hide(axis="columns") without any further arguments.
Specific rows or columns can be hidden from rendering by calling the same .hide() method and passing in a row/column label, a list-like or a slice of row/column labels for the subset argument.
Hiding does not change the integer arrangement of CSS classes, e.g. hiding the first two columns of a DataFrame means the column class indexing will still start at col2, since col0 and col1 are simply ignored.
We can update our Styler object from before to hide some data and format the values.
[8]:
s = df.style.format('{:.0f}').hide([('Random', 'Tumour'), ('Random', 'Non-Tumour')], axis="columns")
s
[8]:
Model:                 Decision Tree          Regression
Predicted:             Tumour  Non-Tumour  Tumour  Non-Tumour
Actual Label:
Tumour (Positive)          38           2      18          22
Non-Tumour (Negative)      19         439       6         452
Methods to Add Styles#
There are 3 primary methods of adding custom CSS styles to Styler:
Using .set_table_styles() to control broader areas of the table with specified internal CSS. Although table styles allow the flexibility to add CSS selectors and properties controlling all individual parts of the table, they are unwieldy for individual cell specifications. Also, note that table styles cannot be exported to Excel.
Using .set_td_classes() to directly link either external CSS classes to your data cells or link the internal CSS classes created by .set_table_styles(). See here. These cannot be used on column header rows or indexes, and also won’t export to Excel.
Using the .apply() and .applymap() functions to add direct internal CSS to specific data cells. See here. As of v1.4.0 there are also methods that work directly on column header rows or indexes: .apply_index() and
.applymap_index(). Note that only these methods add styles that will export to Excel. These methods work in a similar way to DataFrame.apply() and DataFrame.applymap().
Table Styles#
Table styles are flexible enough to control all individual parts of the table, including column headers and indexes. However, they can be unwieldy to type for individual data cells or for any kind of conditional formatting, so we recommend that table styles are used for broad styling, such as entire rows or columns at a time.
Table styles are also used to control features which can apply to the whole table at once such as creating a generic hover functionality. The :hover pseudo-selector, as well as other pseudo-selectors, can only be used this way.
To replicate the normal format of CSS selectors and properties (attribute value pairs), e.g.
tr:hover {
background-color: #ffff99;
}
the necessary format to pass styles to .set_table_styles() is as a list of dicts, each with a CSS-selector tag and CSS-properties. Properties can either be a list of 2-tuples, or a regular CSS-string, for example:
[10]:
cell_hover = { # for row hover use <tr> instead of <td>
'selector': 'td:hover',
'props': [('background-color', '#ffffb3')]
}
index_names = {
'selector': '.index_name',
'props': 'font-style: italic; color: darkgrey; font-weight:normal;'
}
headers = {
'selector': 'th:not(.index_name)',
'props': 'background-color: #000066; color: white;'
}
s.set_table_styles([cell_hover, index_names, headers])
[10]:
(The same table as above, now rendered with the hover, index-name and header styles applied.)
Next we just add a couple more styling artifacts targeting specific parts of the table. Be careful here: since we are chaining methods, we need to explicitly instruct the method not to overwrite the existing styles.
[12]:
s.set_table_styles([
{'selector': 'th.col_heading', 'props': 'text-align: center;'},
{'selector': 'th.col_heading.level0', 'props': 'font-size: 1.5em;'},
{'selector': 'td', 'props': 'text-align: center; font-weight: bold;'},
], overwrite=False)
[12]:
(Same table, with centered headers, enlarged level-0 headers, and bold centered data cells added.)
As a convenience method (since version 1.2.0) we can also pass a dict to .set_table_styles() which contains row or column keys. Behind the scenes Styler just indexes the keys and adds relevant .col<m> or .row<n> classes as necessary to the given CSS selectors.
[14]:
s.set_table_styles({
('Regression', 'Tumour'): [{'selector': 'th', 'props': 'border-left: 1px solid white'},
{'selector': 'td', 'props': 'border-left: 1px solid #000066'}]
}, overwrite=False, axis=0)
[14]:
(Same table, with a white/dark-blue left border separating the Regression 'Tumour' column.)
Setting Classes and Linking to External CSS#
If you have designed a website then it is likely you will already have an external CSS file that controls the styling of table and cell objects within it. You may want to use these native files rather than duplicate all the CSS in python (and duplicate any maintenance work).
Table Attributes#
It is very easy to add a class to the main <table> using .set_table_attributes(). This method can also attach inline styles - read more in CSS Hierarchies.
[16]:
out = s.set_table_attributes('class="my-table-cls"').to_html()
print(out[out.find('<table'):][:109])
<table id="T_xyz01" class="my-table-cls">
<thead>
<tr>
<th class="index_name level0" >Model:</th>
Data Cell CSS Classes#
New in version 1.2.0
The .set_td_classes() method accepts a DataFrame with matching indices and columns to the underlying Styler’s DataFrame. That DataFrame will contain strings as css-classes to add to individual data cells: the <td> elements of the <table>. Rather than use external CSS we will create our classes internally and add them to table style. We will save adding the
borders until the section on tooltips.
[17]:
s.set_table_styles([ # create internal CSS classes
{'selector': '.true', 'props': 'background-color: #e6ffe6;'},
{'selector': '.false', 'props': 'background-color: #ffe6e6;'},
], overwrite=False)
cell_color = pd.DataFrame([['true ', 'false ', 'true ', 'false '],
['false ', 'true ', 'false ', 'true ']],
index=df.index,
columns=df.columns[:4])
s.set_td_classes(cell_color)
[17]:
(Same table, with the true/false cell classes applied: alternating green and red cell backgrounds.)
Styler Functions#
Acting on Data#
We use the following methods to pass your style functions. Both of those methods take a function (and some other keyword arguments) and apply it to the DataFrame in a certain way, rendering CSS styles.
.applymap() (elementwise): accepts a function that takes a single value and returns a string with the CSS attribute-value pair.
.apply() (column-/row-/table-wise): accepts a function that takes a Series or DataFrame and returns a Series, DataFrame, or numpy array with an identical shape where each element is a string with a CSS attribute-value pair. This method passes each column or row of your DataFrame one-at-a-time or the entire table at once, depending on the axis keyword argument. For columnwise use axis=0, rowwise use axis=1, and for the
entire table at once use axis=None.
This method is powerful for applying multiple, complex logic to data cells. We create a new DataFrame to demonstrate this.
[19]:
np.random.seed(0)
df2 = pd.DataFrame(np.random.randn(10,4), columns=['A','B','C','D'])
df2.style
[19]:
           A         B         C         D
0   1.764052  0.400157  0.978738  2.240893
1   1.867558 -0.977278  0.950088 -0.151357
2  -0.103219  0.410599  0.144044  1.454274
3   0.761038  0.121675  0.443863  0.333674
4   1.494079 -0.205158  0.313068 -0.854096
5  -2.552990  0.653619  0.864436 -0.742165
6   2.269755 -1.454366  0.045759 -0.187184
7   1.532779  1.469359  0.154947  0.378163
8  -0.887786 -1.980796 -0.347912  0.156349
9   1.230291  1.202380 -0.387327 -0.302303
For example we can build a function that colors text if it is negative, and chain this with a function that partially fades cells of negligible value. Since this looks at each element in turn we use applymap.
[20]:
def style_negative(v, props=''):
return props if v < 0 else None
s2 = df2.style.applymap(style_negative, props='color:red;')\
.applymap(lambda v: 'opacity: 20%;' if (v < 0.3) and (v > -0.3) else None)
s2
[20]:
(Same data as above, with negative values in red and near-zero values faded to 20% opacity.)
We can also build a function that highlights the maximum value across rows, cols, and the DataFrame all at once. In this case we use apply. Below we highlight the maximum in a column.
[22]:
def highlight_max(s, props=''):
return np.where(s == np.nanmax(s.values), props, '')
s2.apply(highlight_max, props='color:white;background-color:darkblue', axis=0)
[22]:
(Same data, with each column's maximum rendered white on dark blue.)
We can use the same function across the different axes, highlighting here the DataFrame maximum in purple, and row maximums in pink.
[24]:
s2.apply(highlight_max, props='color:white;background-color:pink;', axis=1)\
.apply(highlight_max, props='color:white;background-color:purple', axis=None)
[24]:
(Same data, with row maxima in pink and the table-wide maximum in purple.)
This last example shows how some styles have been overwritten by others. In general the most recent style applied is active but you can read more in the section on CSS hierarchies. You can also apply these styles to more granular parts of the DataFrame - read more in section on subset slicing.
It is possible to replicate some of this functionality using just classes but it can be more cumbersome. See item 3) of Optimization
Debugging Tip: If you’re having trouble writing your style function, try just passing it into DataFrame.apply. Internally, Styler.apply uses DataFrame.apply so the result should be the same, and with DataFrame.apply you will be able to inspect the CSS string output of your intended function in each cell.
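For instance, a minimal sketch of that tip using the highlight_max function from above; with no Styler involved, the returned DataFrame holds the raw CSS strings:
css = df2.apply(highlight_max, props='color:red;', axis=0)
css.head(2)  # each cell is either 'color:red;' or ''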
Acting on the Index and Column Headers#
Similar application is achieved for headers by using:
.applymap_index() (elementwise): accepts a function that takes a single value and returns a string with the CSS attribute-value pair.
.apply_index() (level-wise): accepts a function that takes a Series and returns a Series, or numpy array with an identical shape where each element is a string with a CSS attribute-value pair. This method passes each level of your Index one-at-a-time. To style the index use axis=0 and to style the column headers use axis=1.
You can select a level of a MultiIndex but currently no similar subset application is available for these methods.
[26]:
s2.applymap_index(lambda v: "color:pink;" if v>4 else "color:darkblue;", axis=0)
s2.apply_index(lambda s: np.where(s.isin(["A", "B"]), "color:pink;", "color:darkblue;"), axis=1)
[26]:
(Same data, with index labels above 4 in pink and the rest dark blue, and column headers A and B in pink, C and D in dark blue.)
Tooltips and Captions#
Table captions can be added with the .set_caption() method. You can use table styles to control the CSS relevant to the caption.
[27]:
s.set_caption("Confusion matrix for multiple cancer prediction models.")\
.set_table_styles([{
'selector': 'caption',
'props': 'caption-side: bottom; font-size:1.25em;'
}], overwrite=False)
[27]:
(The styled confusion-matrix table, now with the caption "Confusion matrix for multiple cancer prediction models." rendered below it.)
Adding tooltips (since version 1.3.0) can be done using the .set_tooltips() method in the same way you can add CSS classes to data cells by providing a string based DataFrame with intersecting indices and columns. You don’t have to specify a css_class name or any css props for the tooltips, since there are standard defaults, but the option is there if you want more visual control.
[29]:
tt = pd.DataFrame([['This model has a very strong true positive rate',
"This model's total number of false negatives is too high"]],
index=['Tumour (Positive)'], columns=df.columns[[0,3]])
s.set_tooltips(tt, props='visibility: hidden; position: absolute; z-index: 1; border: 1px solid #000066;'
'background-color: white; color: #000066; font-size: 0.8em;'
'transform: translate(0px, -24px); padding: 0.6em; border-radius: 0.5em;')
[29]:
(Same table; hovering the two annotated cells reveals the tooltip text.)
The only thing left to do for our table is to add the highlighting borders to draw the audience attention to the tooltips. We will create internal CSS classes as before using table styles. Setting classes always overwrites so we need to make sure we add the previous classes.
[31]:
s.set_table_styles([ # create internal CSS classes
{'selector': '.border-red', 'props': 'border: 2px dashed red;'},
{'selector': '.border-green', 'props': 'border: 2px dashed green;'},
], overwrite=False)
cell_border = pd.DataFrame([['border-green ', ' ', ' ', 'border-red '],
[' ', ' ', ' ', ' ']],
index=df.index,
columns=df.columns[:4])
s.set_td_classes(cell_color + cell_border)
[31]:
(Same table, with a dashed green border on the top-left annotated cell and a dashed red border on the top-right one.)
Finer Control with Slicing#
The examples we have shown so far for the Styler.apply and Styler.applymap functions have not demonstrated the use of the subset argument. This is a useful argument which permits a lot of flexibility: it allows you to apply styles to specific rows or columns, without having to code that logic into your style function.
The value passed to subset behaves similarly to slicing a DataFrame:
A scalar is treated as a column label
A list (or Series or NumPy array) is treated as multiple column labels
A tuple is treated as (row_indexer, column_indexer)
Consider using pd.IndexSlice to construct the tuple for the last one. We will create a MultiIndexed DataFrame to demonstrate the functionality.
[33]:
df3 = pd.DataFrame(np.random.randn(4,4),
pd.MultiIndex.from_product([['A', 'B'], ['r1', 'r2']]),
columns=['c1','c2','c3','c4'])
df3
[33]:
            c1        c2        c3        c4
A r1 -1.048553 -1.420018 -1.706270  1.950775
  r2 -0.509652 -0.438074 -1.252795  0.777490
B r1 -1.613898 -0.212740 -0.895467  0.386902
  r2 -0.510805 -1.180632 -0.028182  0.428332
We will use subset to highlight the maximum in the third and fourth columns with red text. We will highlight the subset sliced region in yellow.
[34]:
slice_ = ['c3', 'c4']
df3.style.apply(highlight_max, props='color:red;', axis=0, subset=slice_)\
.set_properties(**{'background-color': '#ffffb3'}, subset=slice_)
[34]:
(df3 as above, with columns c3 and c4 highlighted in yellow and their column maxima in red text.)
If combined with the IndexSlice as suggested then it can index across both dimensions with greater flexibility.
[35]:
idx = pd.IndexSlice
slice_ = idx[idx[:,'r1'], idx['c2':'c4']]
df3.style.apply(highlight_max, props='color:red;', axis=0, subset=slice_)\
.set_properties(**{'background-color': '#ffffb3'}, subset=slice_)
[35]:
(df3, with the r1 rows of columns c2 through c4 highlighted and their column maxima in red.)
This also provides the flexibility to sub-select rows when used with axis=1.
[36]:
slice_ = idx[idx[:,'r2'], :]
df3.style.apply(highlight_max, props='color:red;', axis=1, subset=slice_)\
.set_properties(**{'background-color': '#ffffb3'}, subset=slice_)
[36]:
(df3, with the r2 rows highlighted and each r2 row's maximum in red.)
There is also scope to provide conditional filtering.
Suppose we want to highlight the maximum across columns 2 and 4 only in the case that the sum of columns 1 and 3 is less than -2.0 (essentially excluding rows (:,'r2')).
[37]:
slice_ = idx[idx[(df3['c1'] + df3['c3']) < -2.0], ['c2', 'c4']]
df3.style.apply(highlight_max, props='color:red;', axis=1, subset=slice_)\
.set_properties(**{'background-color': '#ffffb3'}, subset=slice_)
[37]:
(df3, with columns c2 and c4 highlighted only in rows where c1 + c3 < -2.0, i.e. the r1 rows.)
Only label-based slicing is supported right now, not positional, and not callables.
If your style function uses a subset or axis keyword argument, consider wrapping your function in a functools.partial, partialing out that keyword.
my_func2 = functools.partial(my_func, subset=42)
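A fuller, hedged sketch of that pattern; my_func and its threshold are hypothetical, and the point is that Styler.apply would otherwise consume the subset keyword itself:
import functools

def my_func(s, subset=0):
    # `subset` here is the function's own threshold keyword, which would
    # clash with Styler.apply's `subset` argument if passed directly
    return np.where(s > subset, 'color:red;', '')

my_func2 = functools.partial(my_func, subset=42)
df2.style.apply(my_func2, axis=0)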
Optimization#
Generally, for smaller tables and most cases, the rendered HTML does not need to be optimized, and we don’t really recommend it. There are two cases where it is worth considering:
If you are rendering and styling a very large HTML table, certain browsers have performance issues.
If you are using Styler to dynamically create part of online user interfaces and want to improve network performance.
Here we recommend the following steps to implement:
1. Remove UUID and cell_ids#
Ignore the uuid and set cell_ids to False. This will prevent unnecessary HTML.
This is sub-optimal:
[38]:
df4 = pd.DataFrame([[1,2],[3,4]])
s4 = df4.style
This is better:
[39]:
from pandas.io.formats.style import Styler
s4 = Styler(df4, uuid_len=0, cell_ids=False)
2. Use table styles#
Use table styles where possible (e.g. for all cells or rows or columns at a time) since the CSS is nearly always more efficient than other formats.
This is sub-optimal:
[40]:
props = 'font-family: "Times New Roman", Times, serif; color: #e83e8c; font-size:1.3em;'
df4.style.applymap(lambda x: props, subset=[1])
[40]:
   0  1
0  1  2
1  3  4
(column 1 rendered with the Times New Roman, pink, enlarged props)
This is better:
[41]:
df4.style.set_table_styles([{'selector': 'td.col1', 'props': props}])
[41]:
   0  1
0  1  2
1  3  4
(same rendering, now driven by a td.col1 table style)
3. Set classes instead of using Styler functions#
For large DataFrames where the same style is applied to many cells it can be more efficient to declare the styles as classes and then apply those classes to data cells, rather than directly applying styles to cells. It is, however, probably still easier to use the Styler function api when you are not concerned about optimization.
This is sub-optimal:
[42]:
df2.style.apply(highlight_max, props='color:white;background-color:darkblue;', axis=0)\
.apply(highlight_max, props='color:white;background-color:pink;', axis=1)\
.apply(highlight_max, props='color:white;background-color:purple', axis=None)
[42]:
(df2 rendered with the three highlight_max styles applied directly through Styler functions.)
This is better:
[43]:
build = lambda x: pd.DataFrame(x, index=df2.index, columns=df2.columns)
cls1 = build(df2.apply(highlight_max, props='cls-1 ', axis=0))
cls2 = build(df2.apply(highlight_max, props='cls-2 ', axis=1, result_type='expand').values)
cls3 = build(highlight_max(df2, props='cls-3 '))
df2.style.set_table_styles([
{'selector': '.cls-1', 'props': 'color:white;background-color:darkblue;'},
{'selector': '.cls-2', 'props': 'color:white;background-color:pink;'},
{'selector': '.cls-3', 'props': 'color:white;background-color:purple;'}
]).set_td_classes(cls1 + cls2 + cls3)
[43]:
(The same rendering as the previous cell, this time produced via table styles plus cell classes.)
4. Don’t use tooltips#
Tooltips require cell_ids to work and they generate extra HTML elements for every data cell.
5. If every byte counts use string replacement#
You can remove unnecessary HTML, or shorten the default class names by replacing the default css dict. You can read a little more about CSS below.
[44]:
my_css = {
"row_heading": "",
"col_heading": "",
"index_name": "",
"col": "c",
"row": "r",
"col_trim": "",
"row_trim": "",
"level": "l",
"data": "",
"blank": "",
}
html = Styler(df4, uuid_len=0, cell_ids=False)
html.set_table_styles([{'selector': 'td', 'props': props},
{'selector': '.c1', 'props': 'color:green;'},
{'selector': '.l0', 'props': 'color:blue;'}],
css_class_names=my_css)
print(html.to_html())
<style type="text/css">
#T_ td {
font-family: "Times New Roman", Times, serif;
color: #e83e8c;
font-size: 1.3em;
}
#T_ .c1 {
color: green;
}
#T_ .l0 {
color: blue;
}
</style>
<table id="T_">
<thead>
<tr>
<th class=" l0" > </th>
<th class=" l0 c0" >0</th>
<th class=" l0 c1" >1</th>
</tr>
</thead>
<tbody>
<tr>
<th class=" l0 r0" >0</th>
<td class=" r0 c0" >1</td>
<td class=" r0 c1" >2</td>
</tr>
<tr>
<th class=" l0 r1" >1</th>
<td class=" r1 c0" >3</td>
<td class=" r1 c1" >4</td>
</tr>
</tbody>
</table>
[45]:
html
[45]:
   0  1
0  1  2
1  3  4
(rendered with the shortened class names and the green/blue styles above)
Builtin Styles#
Some styling functions are common enough that we’ve “built them in” to the Styler, so you don’t have to write them and apply them yourself. The current list of such functions is:
.highlight_null: for use with identifying missing data.
.highlight_min and .highlight_max: for use with identifying extremities in data.
.highlight_between and .highlight_quantile: for use with identifying classes within data.
.background_gradient: a flexible method for highlighting cells based on their, or other, values on a numeric scale.
.text_gradient: similar method for highlighting text based on their, or other, values on a numeric scale.
.bar: to display mini-charts within cell backgrounds.
The individual documentation on each function often gives more examples of their arguments.
Highlight Null#
[46]:
df2.iloc[0,2] = np.nan
df2.iloc[4,3] = np.nan
df2.loc[:4].style.highlight_null(color='yellow')
[46]:
          A         B         C         D
0  1.764052  0.400157       nan  2.240893
1  1.867558 -0.977278  0.950088 -0.151357
2 -0.103219  0.410599  0.144044  1.454274
3  0.761038  0.121675  0.443863  0.333674
4  1.494079 -0.205158  0.313068       nan
(the two nan cells highlighted in yellow)
Highlight Min or Max#
[47]:
df2.loc[:4].style.highlight_max(axis=1, props='color:white; font-weight:bold; background-color:darkblue;')
[47]:
(Same 5-row slice, with each row's maximum in bold white on dark blue.)
Highlight Between#
This method accepts ranges as floats, or as NumPy arrays or Series provided the indexes match.
[48]:
left = pd.Series([1.0, 0.0, 1.0], index=["A", "B", "D"])
df2.loc[:4].style.highlight_between(left=left, right=1.5, axis=1, props='color:white; background-color:purple;')
[48]:
(Same slice, with cells falling between the per-column left bounds and 1.5 shown white on purple.)
Highlight Quantile#
Useful for detecting the highest or lowest percentile values.
[49]:
df2.loc[:4].style.highlight_quantile(q_left=0.85, axis=None, color='yellow')
[49]:
(Same slice, with values above the 85th percentile highlighted in yellow.)
Background Gradient and Text Gradient#
You can create “heatmaps” with the background_gradient and text_gradient methods. These require matplotlib, and we’ll use Seaborn to get a nice colormap.
[50]:
import seaborn as sns
cm = sns.light_palette("green", as_cmap=True)
df2.style.background_gradient(cmap=cm)
[50]:
(rendered table: cell backgrounds shaded on a light-to-green scale by value)
[51]:
df2.style.text_gradient(cmap=cm)
[51]:
(rendered table: cell text colored on the same light-to-green scale by value)
.background_gradient and .text_gradient have a number of keyword arguments to customise the gradients and colors. See the documentation.
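As a minimal sketch of some common keyword arguments (axis, vmin and vmax are real parameters; the clipping values here are illustrative):

```python
# One scale across the whole frame, clipped to [-2, 2].
df2.style.background_gradient(cmap=cm, axis=None, vmin=-2, vmax=2)
```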
Set properties#
Use Styler.set_properties when the style doesn’t actually depend on the values. This is just a simple wrapper for .applymap where the function returns the same properties for all cells.
[52]:
df2.loc[:4].style.set_properties(**{'background-color': 'black',
'color': 'lawngreen',
'border-color': 'white'})
[52]:
(rendered table: every cell black with lawn-green text and white borders)
Bar charts#
You can include “bar charts” in your DataFrame.
[53]:
df2.style.bar(subset=['A', 'B'], color='#d65f5f')
[53]:
(rendered table: columns A and B drawn with in-cell bars in #d65f5f, sized by value)
Additional keyword arguments give more control on centering and positioning, and you can pass a list of [color_negative, color_positive] to highlight lower and higher values or a matplotlib colormap.
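As a minimal sketch of the two-color form (the hex values are illustrative):

```python
# Negative bars in red, positive bars in green, drawn from the cell midpoint.
df2.style.bar(align='mid', color=['#d65f5f', '#5fba7d'])
```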
To showcase an example, here’s how you can change the above with the new align option, combined with setting vmin and vmax limits, the width of the figure, and the underlying CSS props of cells, leaving space to display both the text and the bars. We also use text_gradient to color the text the same as the bars, using a matplotlib colormap (although in this case the visualization is probably better without this additional effect).
[54]:
df2.style.format('{:.3f}', na_rep="")\
.bar(align=0, vmin=-2.5, vmax=2.5, cmap="bwr", height=50,
width=60, props="width: 120px; border-right: 1px solid black;")\
.text_gradient(cmap="bwr", vmin=-2.5, vmax=2.5)
[54]:
(rendered table: zero-centered bars colored by the bwr colormap, with matching text colors)
The following example highlights the behavior of the new align options:
[56]:
HTML(head)
[56]:
(rendered table: bar placement for each align option (left, right, zero, mid, mean, and the fixed value 99) across all-negative, mixed, all-positive, and large-positive data)
Sharing styles#
Say you have a lovely style built up for a DataFrame, and now you want to apply the same style to a second DataFrame. Export the style with df1.style.export, and import it on the second DataFrame with df2.style.use.
[57]:
style1 = df2.style\
.applymap(style_negative, props='color:red;')\
.applymap(lambda v: 'opacity: 20%;' if (v < 0.3) and (v > -0.3) else None)\
.set_table_styles([{"selector": "th", "props": "color: blue;"}])\
.hide(axis="index")
style1
[57]:
(rendered table: style1 applied to df2: index hidden, blue headers, negatives in red, near-zero values at 20% opacity)
[58]:
style2 = df3.style
style2.use(style1.export())
style2
[58]:
(rendered table: the exported styles re-applied to df3's data)
Notice that you’re able to share the styles even though they’re data aware. The styles are re-evaluated on the new DataFrame they are applied to.
Limitations#
DataFrame only (use Series.to_frame().style; see the sketch after this list)
The index and columns do not need to be unique, but certain styling functions can only work with unique indexes.
No large repr, and construction performance isn’t great, although we have some HTML optimizations.
You can only apply styles, you can’t insert new HTML entities, except via subclassing.
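For the first point, a minimal sketch of styling a Series by promoting it to a one-column DataFrame:

```python
# Series has no .style accessor; promote it to a DataFrame first.
s = pd.Series([1.0, -2.0, 3.0], name='x')
s.to_frame().style.highlight_max(color='yellow')
```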
Other Fun and Useful Stuff#
Here are a few interesting examples.
Widgets#
Styler interacts pretty well with widgets. If you’re viewing this online instead of running the notebook yourself, you’re missing out on interactively adjusting the color palette.
[59]:
from ipywidgets import widgets
@widgets.interact
def f(h_neg=(0, 359, 1), h_pos=(0, 359), s=(0., 99.9), l=(0., 99.9)):
return df2.style.background_gradient(
cmap=sns.palettes.diverging_palette(h_neg=h_neg, h_pos=h_pos, s=s, l=l,
as_cmap=True)
)
Magnify#
[60]:
def magnify():
return [dict(selector="th",
props=[("font-size", "4pt")]),
dict(selector="td",
props=[('padding', "0em 0em")]),
dict(selector="th:hover",
props=[("font-size", "12pt")]),
dict(selector="tr:hover td:hover",
props=[('max-width', '200px'),
('font-size', '12pt')])
]
[61]:
np.random.seed(25)
cmap = sns.diverging_palette(5, 250, as_cmap=True)
bigdf = pd.DataFrame(np.random.randn(20, 25)).cumsum()
bigdf.style.background_gradient(cmap, axis=1)\
.set_properties(**{'max-width': '80px', 'font-size': '1pt'})\
.set_caption("Hover to magnify")\
.format(precision=2)\
.set_table_styles(magnify())
[61]:
(rendered output: a 20x25 heatmap captioned "Hover to magnify"; 1pt cells that enlarge to 12pt on hover)
Sticky Headers#
If you display a large matrix or DataFrame in a notebook, but you want to always see the column and row headers, you can use the .set_sticky method, which manipulates the table styles CSS.
[62]:
bigdf = pd.DataFrame(np.random.randn(16, 100))
bigdf.style.set_sticky(axis="index")
[62]:
(rendered output: a 16x100 frame with the index column pinned while scrolling)
It is also possible to stick MultiIndexes, and even only specific levels.
[63]:
bigdf.index = pd.MultiIndex.from_product([["A","B"],[0,1],[0,1,2,3]])
bigdf.style.set_sticky(axis="index", pixel_size=18, levels=[1,2])
[63]:
(rendered output: the same frame under a MultiIndex, with levels 1 and 2 pinned while scrolling)
HTML Escaping#
Suppose you have to display HTML within HTML; that can be a bit of a pain when the renderer can’t distinguish the two. You can use the escape formatting option to handle this, and even use it within a formatter that contains HTML itself.
[64]:
df4 = pd.DataFrame([['<div></div>', '"&other"', '<span></span>']])
df4.style
[64]:
(rendered output: the browser interprets the raw <div> and <span> tags, so only "&other" is visible)
[65]:
df4.style.format(escape="html")
[65]:
(rendered table: the literal strings <div></div>, "&other" and <span></span> displayed safely)
[66]:
df4.style.format('<a href="https://pandas.pydata.org" target="_blank">{}</a>', escape="html")
[66]:
(rendered table: each escaped string wrapped in a hyperlink to pandas.pydata.org)
Export to Excel#
Some support (since version 0.20.0) is available for exporting styled DataFrames to Excel worksheets using the OpenPyXL or XlsxWriter engines. CSS2.2 properties handled include:
background-color
border-style properties
border-width properties
border-color properties
color
font-family
font-style
font-weight
text-align
text-decoration
vertical-align
white-space: nowrap
Shorthand and side-specific border properties are supported (e.g. border-style and border-left-style) as well as the border shorthands for all sides (border: 1px solid green) or specified sides (border-left: 1px solid green). Using a border shorthand will override any border properties set before it (see CSS Working Group for more details).
Only CSS2 named colors and hex colors of the form #rgb or #rrggbb are currently supported.
The following pseudo CSS properties are also available to set Excel specific style properties:
number-format
border-style (for Excel-specific styles: “hair”, “mediumDashDot”, “dashDotDot”, “mediumDashDotDot”, “dashDot”, “slantDashDot”, or “mediumDashed”)
Table-level styles and data cell CSS classes are not included in the export to Excel: individual cells must have their properties mapped by the Styler.apply and/or Styler.applymap methods.
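As a minimal sketch of the number-format pseudo property (the format string and file name are illustrative):

```python
# Export with every cell formatted as a two-decimal percentage in Excel.
df2.style.applymap(lambda v: 'number-format: 0.00%;')\
    .to_excel('formatted.xlsx', engine='openpyxl')
```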
[67]:
df2.style.\
applymap(style_negative, props='color:red;').\
highlight_max(axis=0).\
to_excel('styled.xlsx', engine='openpyxl')
A screenshot of the output:
Export to LaTeX#
There is support (since version 1.3.0) to export Styler to LaTeX. The documentation for the .to_latex method gives further detail and numerous examples.
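As a minimal sketch (the LaTeX-flavored props follow the convention documented for .to_latex; the styling is illustrative):

```python
# Bold the column maxima using a LaTeX command as the prop, then render.
styled = df2.loc[:4].style.format(precision=3).highlight_max(props='bfseries: ;')
print(styled.to_latex())
```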
More About CSS and HTML#
The Cascading Style Sheet (CSS) language, which is designed to influence how a browser renders HTML elements, has its own peculiarities. It never reports errors: it just silently ignores them and doesn’t render your objects how you intend, which can sometimes be frustrating. Here is a very brief primer on how Styler creates HTML and interacts with CSS, with advice on common pitfalls to avoid.
CSS Classes and Ids#
The precise structure of the CSS class attached to each cell is as follows.
Cells with Index and Column names include index_name and level<k> where k is its level in a MultiIndex
Index label cells include
row_heading
level<k> where k is the level in a MultiIndex
row<m> where m is the numeric position of the row
Column label cells include
col_heading
level<k> where k is the level in a MultiIndex
col<n> where n is the numeric position of the column
Data cells include
data
row<m>, where m is the numeric position of the cell.
col<n>, where n is the numeric position of the cell.
Blank cells include blank
Trimmed cells include col_trim or row_trim
The structure of the id is T_uuid_level<k>_row<m>_col<n> where level<k> is used only on headings, and headings will only have either row<m> or col<n> whichever is needed. By default we’ve also prepended each row/column identifier with a UUID unique to each DataFrame so that the style from one doesn’t collide with the styling from another within the same notebook or page. You can read more about the use of UUIDs in Optimization.
We can see an example of the HTML by calling the .to_html() method.
[68]:
print(pd.DataFrame([[1,2],[3,4]], index=['i1', 'i2'], columns=['c1', 'c2']).style.to_html())
<style type="text/css">
</style>
<table id="T_d505a">
<thead>
<tr>
<th class="blank level0" > </th>
<th id="T_d505a_level0_col0" class="col_heading level0 col0" >c1</th>
<th id="T_d505a_level0_col1" class="col_heading level0 col1" >c2</th>
</tr>
</thead>
<tbody>
<tr>
<th id="T_d505a_level0_row0" class="row_heading level0 row0" >i1</th>
<td id="T_d505a_row0_col0" class="data row0 col0" >1</td>
<td id="T_d505a_row0_col1" class="data row0 col1" >2</td>
</tr>
<tr>
<th id="T_d505a_level0_row1" class="row_heading level0 row1" >i2</th>
<td id="T_d505a_row1_col0" class="data row1 col0" >3</td>
<td id="T_d505a_row1_col1" class="data row1 col1" >4</td>
</tr>
</tbody>
</table>
CSS Hierarchies#
The examples have shown that when CSS styles overlap, the one that comes last in the HTML render takes precedence. So the following yield different results:
[69]:
df4 = pd.DataFrame([['text']])
df4.style.applymap(lambda x: 'color:green;')\
.applymap(lambda x: 'color:red;')
[69]:
(rendered output: 'text' in red, since the red rule comes last)
[70]:
df4.style.applymap(lambda x: 'color:red;')\
.applymap(lambda x: 'color:green;')
[70]:
(rendered output: 'text' in green, since the green rule comes last)
This is only true for CSS rules that are equivalent in hierarchy, or importance. You can read more about CSS specificity here, but for our purposes it suffices to summarize the key points:
A CSS importance score for each HTML element is derived by starting at zero and adding:
1000 for an inline style attribute
100 for each ID
10 for each attribute, class or pseudo-class
1 for each element name or pseudo-element
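A toy scorer for the simple selectors used below (a sketch, not part of pandas, and not a full CSS parser; inline styles, worth 1000, are not handled):

```python
import re

def specificity(selector: str) -> int:
    # Count IDs, classes, and bare element names in a simple selector.
    ids = len(re.findall(r'#[\w-]+', selector))
    classes = len(re.findall(r'\.[\w-]+', selector))
    elements = len(re.sub(r'[#.][\w-]+', ' ', selector).split())
    return 100 * ids + 10 * classes + elements

specificity('#T_a_ td')        # 101: one ID plus one element
specificity('#T_b_ .cls-1')    # 110: one ID plus one class
specificity('#T_c_ td.data')   # 111: ID plus element plus class
```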
Let’s use this to describe the action of the following configurations.
[71]:
df4.style.set_uuid('a_')\
.set_table_styles([{'selector': 'td', 'props': 'color:red;'}])\
.applymap(lambda x: 'color:green;')
[71]:
(rendered output: 'text' in red)
This text is red because the generated selector #T_a_ td is worth 101 (ID plus element), whereas #T_a_row0_col0 is only worth 100 (ID), so it is considered inferior even though, in the HTML, it comes after the previous rule.
[72]:
df4.style.set_uuid('b_')\
.set_table_styles([{'selector': 'td', 'props': 'color:red;'},
{'selector': '.cls-1', 'props': 'color:blue;'}])\
.applymap(lambda x: 'color:green;')\
.set_td_classes(pd.DataFrame([['cls-1']]))
[72]:
(rendered output: 'text' in blue)
In the above case the text is blue because the selector #T_b_ .cls-1 is worth 110 (ID plus class), which takes precedence.
[73]:
df4.style.set_uuid('c_')\
.set_table_styles([{'selector': 'td', 'props': 'color:red;'},
{'selector': '.cls-1', 'props': 'color:blue;'},
{'selector': 'td.data', 'props': 'color:yellow;'}])\
.applymap(lambda x: 'color:green;')\
.set_td_classes(pd.DataFrame([['cls-1']]))
[73]:
(rendered output: 'text' in yellow)
Now we have created another table style. This time the selector #T_c_ td.data (ID plus element plus class) gets bumped up to 111.
If your style fails to be applied and it’s really frustrating, try the !important trump card.
[74]:
df4.style.set_uuid('d_')\
.set_table_styles([{'selector': 'td', 'props': 'color:red;'},
{'selector': '.cls-1', 'props': 'color:blue;'},
{'selector': 'td.data', 'props': 'color:yellow;'}])\
.applymap(lambda x: 'color:green !important;')\
.set_td_classes(pd.DataFrame([['cls-1']]))
[74]:
(rendered output: 'text' in green)
Finally got that green text after all!
Extensibility#
The core of pandas is, and will remain, its “high-performance, easy-to-use data structures”. With that in mind, we hope that DataFrame.style accomplishes two goals:
Provide an API that is pleasing to use interactively and is “good enough” for many tasks
Provide the foundations for dedicated libraries to build on
If you build a great library on top of this, let us know and we’ll link to it.
Subclassing#
If the default template doesn’t quite suit your needs, you can subclass Styler and extend or override the template. We’ll show an example of extending the default template to insert a custom header before each table.
[75]:
from jinja2 import Environment, ChoiceLoader, FileSystemLoader
from IPython.display import HTML
from pandas.io.formats.style import Styler
We’ll use the following template:
[76]:
with open("templates/myhtml.tpl") as f:
print(f.read())
{% extends "html_table.tpl" %}
{% block table %}
<h1>{{ table_title|default("My Table") }}</h1>
{{ super() }}
{% endblock table %}
Now that we’ve created a template, we need to set up a subclass of Styler that knows about it.
[77]:
class MyStyler(Styler):
env = Environment(
loader=ChoiceLoader([
FileSystemLoader("templates"), # contains ours
Styler.loader, # the default
])
)
template_html_table = env.get_template("myhtml.tpl")
Notice that we include the original loader in our environment’s loader. That’s because we extend the original template, so the Jinja environment needs to be able to find it.
Now we can use that custom styler. Its __init__ takes a DataFrame.
[78]:
MyStyler(df3)
[78]:
(rendered output: the df3 table preceded by the custom "My Table" header)
Our custom template accepts a table_title keyword. We can provide the value in the .to_html method.
[79]:
HTML(MyStyler(df3).to_html(table_title="Extending Example"))
[79]:
(rendered output: the same df3 table headed "Extending Example")
For convenience, we provide the Styler.from_custom_template method that does the same as the custom subclass.
[80]:
EasyStyler = Styler.from_custom_template("templates", "myhtml.tpl")
HTML(EasyStyler(df3).to_html(table_title="Another Title"))
[80]:
(rendered output: the same df3 table headed "Another Title")
Template Structure#
Here’s the template structure for both the style generation template and the table generation template:
Style template:
[82]:
HTML(style_structure)
[82]:
before_style
style
    <style type="text/css">
    table_styles
    before_cellstyle
    cellstyle
    </style>
Table template:
[84]:
HTML(table_structure)
[84]:
before_table
table
    <table ...>
    caption
    thead
        before_head_rows
        head_tr (loop over headers)
        after_head_rows
    tbody
        before_rows
        tr (loop over data rows)
        after_rows
    </table>
after_table
See the template in the GitHub repo for more details.
|
user_guide/style.html
|
pandas.DataFrame.lookup
|
`pandas.DataFrame.lookup`
Label-based “fancy indexing” function for DataFrame.
|
DataFrame.lookup(row_labels, col_labels)[source]#
Label-based “fancy indexing” function for DataFrame.
Deprecated since version 1.2.0: DataFrame.lookup is deprecated,
use pandas.factorize and NumPy indexing instead.
For further details see
Looking up values by index/column labels.
Given equal-length arrays of row and column labels, return an
array of the values corresponding to each (row, col) pair.
Parameters
row_labels : sequence
    The row labels to use for lookup.
col_labels : sequence
    The column labels to use for lookup.
Returns
numpy.ndarray
    The found values.
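A sketch of the replacement using positional indexing (assumes unique row and column labels; the frame and labels are illustrative):

```python
import pandas as pd

df = pd.DataFrame({'A': [10, 20, 30], 'B': [40, 50, 60]},
                  index=['x', 'y', 'z'])

# Equivalent of the deprecated df.lookup(['x', 'z'], ['A', 'B']):
rows = df.index.get_indexer(['x', 'z'])
cols = df.columns.get_indexer(['A', 'B'])
df.to_numpy()[rows, cols]    # array([10, 60])
```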
|
reference/api/pandas.DataFrame.lookup.html
|
pandas.PeriodIndex.second
|
`pandas.PeriodIndex.second`
The second of the period.
|
property PeriodIndex.second[source]#
The second of the period.
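A minimal example (the timestamps are illustrative):

```python
import pandas as pd

pidx = pd.period_range('2023-01-01 10:00:30', periods=2, freq='S')
pidx.second    # integer index with values [30, 31]
```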
|
reference/api/pandas.PeriodIndex.second.html
|
pandas.plotting.autocorrelation_plot
|
`pandas.plotting.autocorrelation_plot`
Autocorrelation plot for time series.
```
>>> spacing = np.linspace(-9 * np.pi, 9 * np.pi, num=1000)
>>> s = pd.Series(0.7 * np.random.rand(1000) + 0.3 * np.sin(spacing))
>>> pd.plotting.autocorrelation_plot(s)
<AxesSubplot: title={'center': 'width'}, xlabel='Lag', ylabel='Autocorrelation'>
```
|
pandas.plotting.autocorrelation_plot(series, ax=None, **kwargs)[source]#
Autocorrelation plot for time series.
Parameters
series : Time series
ax : Matplotlib axis object, optional
**kwargs : Options to pass to the matplotlib plotting method.
Returns
matplotlib.axis.Axes
Examples
The horizontal lines in the plot correspond to 95% and 99% confidence bands.
The dashed line is 99% confidence band.
>>> spacing = np.linspace(-9 * np.pi, 9 * np.pi, num=1000)
>>> s = pd.Series(0.7 * np.random.rand(1000) + 0.3 * np.sin(spacing))
>>> pd.plotting.autocorrelation_plot(s)
<AxesSubplot: title={'center': 'width'}, xlabel='Lag', ylabel='Autocorrelation'>
|
reference/api/pandas.plotting.autocorrelation_plot.html
|
pandas.Categorical.dtype
|
`pandas.Categorical.dtype`
The CategoricalDtype for this instance.
|
property Categorical.dtype[source]#
The CategoricalDtype for this instance.
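A minimal example:

```python
import pandas as pd

# The dtype records both the categories and the orderedness.
cat = pd.Categorical(['a', 'b', 'a'], categories=['a', 'b'], ordered=True)
cat.dtype    # CategoricalDtype(categories=['a', 'b'], ordered=True)
```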
|
reference/api/pandas.Categorical.dtype.html
|
pandas.tseries.offsets.YearEnd.is_on_offset
|
`pandas.tseries.offsets.YearEnd.is_on_offset`
Return boolean whether a timestamp intersects with this frequency.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
```
|
YearEnd.is_on_offset()#
Return boolean whether a timestamp intersects with this frequency.
Parameters
dt : datetime.datetime
    Timestamp to check intersections with frequency.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
>>> ts = pd.Timestamp(2022, 8, 6)
>>> ts.day_name()
'Saturday'
>>> freq = pd.offsets.BusinessDay(1)
>>> freq.is_on_offset(ts)
False
|
reference/api/pandas.tseries.offsets.YearEnd.is_on_offset.html
|
pandas.Series.cat.as_ordered
|
`pandas.Series.cat.as_ordered`
Set the Categorical to be ordered.
|
Series.cat.as_ordered(*args, **kwargs)[source]#
Set the Categorical to be ordered.
Parameters
inplace : bool, default False
    Whether or not to set the ordered attribute in-place or return a copy of this categorical with ordered set to True.
Deprecated since version 1.5.0.
Returns
Categorical or None
    Ordered Categorical or None if inplace=True.
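A minimal example via the accessor:

```python
import pandas as pd

s = pd.Series(pd.Categorical(['a', 'b'], categories=['a', 'b']))
s.cat.ordered                    # False
s.cat.as_ordered().cat.ordered   # True
```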
|
reference/api/pandas.Series.cat.as_ordered.html
|
pandas.tseries.offsets.BusinessMonthBegin.copy
|
`pandas.tseries.offsets.BusinessMonthBegin.copy`
Return a copy of the frequency.
Examples
```
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
```
|
BusinessMonthBegin.copy()#
Return a copy of the frequency.
Examples
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
|
reference/api/pandas.tseries.offsets.BusinessMonthBegin.copy.html
|
pandas.Series.str.endswith
|
`pandas.Series.str.endswith`
Test if the end of each string element matches a pattern.
```
>>> s = pd.Series(['bat', 'bear', 'caT', np.nan])
>>> s
0 bat
1 bear
2 caT
3 NaN
dtype: object
```
|
Series.str.endswith(pat, na=None)[source]#
Test if the end of each string element matches a pattern.
Equivalent to str.endswith().
Parameters
pat : str or tuple[str, …]
    Character sequence or tuple of strings. Regular expressions are not accepted.
na : object, default NaN
    Object shown if element tested is not a string. The default depends on dtype of the array. For object-dtype, numpy.nan is used. For StringDtype, pandas.NA is used.
Returns
Series or Index of bool
    A Series of booleans indicating whether the given pattern matches the end of each string element.
See also
str.endswith : Python standard library string method.
Series.str.startswith : Same as endswith, but tests the start of string.
Series.str.contains : Tests if string element contains a pattern.
Examples
>>> s = pd.Series(['bat', 'bear', 'caT', np.nan])
>>> s
0 bat
1 bear
2 caT
3 NaN
dtype: object
>>> s.str.endswith('t')
0 True
1 False
2 False
3 NaN
dtype: object
>>> s.str.endswith(('t', 'T'))
0 True
1 False
2 True
3 NaN
dtype: object
Specifying na to be False instead of NaN.
>>> s.str.endswith('t', na=False)
0 True
1 False
2 False
3 False
dtype: bool
|
reference/api/pandas.Series.str.endswith.html
|
pandas.Series.str.removesuffix
|
`pandas.Series.str.removesuffix`
Remove a suffix from an object series.
```
>>> s = pd.Series(["str_foo", "str_bar", "no_prefix"])
>>> s
0 str_foo
1 str_bar
2 no_prefix
dtype: object
>>> s.str.removeprefix("str_")
0 foo
1 bar
2 no_prefix
dtype: object
```
|
Series.str.removesuffix(suffix)[source]#
Remove a suffix from an object series.
If the suffix is not present, the original string will be returned.
Parameters
suffix : str
    Remove the suffix of the string.
Returns
Series/Index: objectThe Series or Index with given suffix removed.
See also
Series.str.removeprefixRemove a prefix from an object series.
Examples
>>> s = pd.Series(["str_foo", "str_bar", "no_prefix"])
>>> s
0 str_foo
1 str_bar
2 no_prefix
dtype: object
>>> s.str.removeprefix("str_")
0 foo
1 bar
2 no_prefix
dtype: object
>>> s = pd.Series(["foo_str", "bar_str", "no_suffix"])
>>> s
0 foo_str
1 bar_str
2 no_suffix
dtype: object
>>> s.str.removesuffix("_str")
0 foo
1 bar
2 no_suffix
dtype: object
|
reference/api/pandas.Series.str.removesuffix.html
|
pandas.tseries.offsets.SemiMonthEnd.apply
|
pandas.tseries.offsets.SemiMonthEnd.apply
|
SemiMonthEnd.apply()#
|
reference/api/pandas.tseries.offsets.SemiMonthEnd.apply.html
|
pandas.IntervalIndex.left
|
pandas.IntervalIndex.left
|
IntervalIndex.left[source]#
|
reference/api/pandas.IntervalIndex.left.html
|
pandas.Series.shape
|
`pandas.Series.shape`
Return a tuple of the shape of the underlying data.
|
property Series.shape[source]#
Return a tuple of the shape of the underlying data.
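A minimal example:
>>> s = pd.Series([1, 2, 3])
>>> s.shape
(3,)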
|
reference/api/pandas.Series.shape.html
|
pandas.Series.describe
|
`pandas.Series.describe`
Generate descriptive statistics.
Descriptive statistics include those that summarize the central
tendency, dispersion and shape of a
dataset’s distribution, excluding NaN values.
```
>>> s = pd.Series([1, 2, 3])
>>> s.describe()
count 3.0
mean 2.0
std 1.0
min 1.0
25% 1.5
50% 2.0
75% 2.5
max 3.0
dtype: float64
```
|
Series.describe(percentiles=None, include=None, exclude=None, datetime_is_numeric=False)[source]#
Generate descriptive statistics.
Descriptive statistics include those that summarize the central
tendency, dispersion and shape of a
dataset’s distribution, excluding NaN values.
Analyzes both numeric and object series, as well
as DataFrame column sets of mixed data types. The output
will vary depending on what is provided. Refer to the notes
below for more detail.
Parameters
percentileslist-like of numbers, optionalThe percentiles to include in the output. All should
fall between 0 and 1. The default is
[.25, .5, .75], which returns the 25th, 50th, and
75th percentiles.
include‘all’, list-like of dtypes or None (default), optionalA white list of data types to include in the result. Ignored
for Series. Here are the options:
‘all’ : All columns of the input will be included in the output.
A list-like of dtypes : Limits the results to the
provided data types.
To limit the result to numeric types submit
numpy.number. To limit it instead to object columns submit
the numpy.object data type. Strings
can also be used in the style of
select_dtypes (e.g. df.describe(include=['O'])). To
select pandas categorical columns, use 'category'
None (default) : The result will include all numeric columns.
excludelist-like of dtypes or None (default), optionalA black list of data types to omit from the result. Ignored
for Series. Here are the options:
A list-like of dtypes : Excludes the provided data types
from the result. To exclude numeric types submit
numpy.number. To exclude object columns submit the data
type numpy.object. Strings can also be used in the style of
select_dtypes (e.g. df.describe(exclude=['O'])). To
exclude pandas categorical columns, use 'category'
None (default) : The result will exclude nothing.
datetime_is_numericbool, default FalseWhether to treat datetime dtypes as numeric. This affects statistics
calculated for the column. For DataFrame input, this also
controls whether datetime columns are included by default.
New in version 1.1.0.
Returns
Series or DataFrameSummary statistics of the Series or Dataframe provided.
See also
DataFrame.countCount number of non-NA/null observations.
DataFrame.maxMaximum of the values in the object.
DataFrame.minMinimum of the values in the object.
DataFrame.meanMean of the values.
DataFrame.stdStandard deviation of the observations.
DataFrame.select_dtypesSubset of a DataFrame including/excluding columns based on their dtype.
Notes
For numeric data, the result’s index will include count,
mean, std, min, max as well as lower, 50 and
upper percentiles. By default the lower percentile is 25 and the
upper percentile is 75. The 50 percentile is the
same as the median.
For object data (e.g. strings or timestamps), the result’s index
will include count, unique, top, and freq. The top
is the most common value. The freq is the most common value’s
frequency. Timestamps also include the first and last items.
If multiple object values have the highest count, then the
count and top results will be arbitrarily chosen from
among those with the highest count.
For mixed data types provided via a DataFrame, the default is to
return only an analysis of numeric columns. If the dataframe consists
only of object and categorical data without any numeric columns, the
default is to return an analysis of both the object and categorical
columns. If include='all' is provided as an option, the result
will include a union of attributes of each type.
The include and exclude parameters can be used to limit
which columns in a DataFrame are analyzed for the output.
The parameters are ignored when analyzing a Series.
Examples
Describing a numeric Series.
>>> s = pd.Series([1, 2, 3])
>>> s.describe()
count 3.0
mean 2.0
std 1.0
min 1.0
25% 1.5
50% 2.0
75% 2.5
max 3.0
dtype: float64
Describing a categorical Series.
>>> s = pd.Series(['a', 'a', 'b', 'c'])
>>> s.describe()
count 4
unique 3
top a
freq 2
dtype: object
Describing a timestamp Series.
>>> s = pd.Series([
... np.datetime64("2000-01-01"),
... np.datetime64("2010-01-01"),
... np.datetime64("2010-01-01")
... ])
>>> s.describe(datetime_is_numeric=True)
count 3
mean 2006-09-01 08:00:00
min 2000-01-01 00:00:00
25% 2004-12-31 12:00:00
50% 2010-01-01 00:00:00
75% 2010-01-01 00:00:00
max 2010-01-01 00:00:00
dtype: object
Describing a DataFrame. By default only numeric fields
are returned.
>>> df = pd.DataFrame({'categorical': pd.Categorical(['d','e','f']),
... 'numeric': [1, 2, 3],
... 'object': ['a', 'b', 'c']
... })
>>> df.describe()
numeric
count 3.0
mean 2.0
std 1.0
min 1.0
25% 1.5
50% 2.0
75% 2.5
max 3.0
Describing all columns of a DataFrame regardless of data type.
>>> df.describe(include='all')
categorical numeric object
count 3 3.0 3
unique 3 NaN 3
top f NaN a
freq 1 NaN 1
mean NaN 2.0 NaN
std NaN 1.0 NaN
min NaN 1.0 NaN
25% NaN 1.5 NaN
50% NaN 2.0 NaN
75% NaN 2.5 NaN
max NaN 3.0 NaN
Describing a column from a DataFrame by accessing it as
an attribute.
>>> df.numeric.describe()
count 3.0
mean 2.0
std 1.0
min 1.0
25% 1.5
50% 2.0
75% 2.5
max 3.0
Name: numeric, dtype: float64
Including only numeric columns in a DataFrame description.
>>> df.describe(include=[np.number])
numeric
count 3.0
mean 2.0
std 1.0
min 1.0
25% 1.5
50% 2.0
75% 2.5
max 3.0
Including only string columns in a DataFrame description.
>>> df.describe(include=[object])
object
count 3
unique 3
top a
freq 1
Including only categorical columns from a DataFrame description.
>>> df.describe(include=['category'])
categorical
count 3
unique 3
top d
freq 1
Excluding numeric columns from a DataFrame description.
>>> df.describe(exclude=[np.number])
categorical object
count 3 3
unique 3 3
top f a
freq 1 1
Excluding object columns from a DataFrame description.
>>> df.describe(exclude=[object])
categorical numeric
count 3 3.0
unique 3 NaN
top f NaN
freq 1 NaN
mean NaN 2.0
std NaN 1.0
min NaN 1.0
25% NaN 1.5
50% NaN 2.0
75% NaN 2.5
max NaN 3.0
|
reference/api/pandas.Series.describe.html
|
pandas.core.groupby.DataFrameGroupBy.idxmax
|
`pandas.core.groupby.DataFrameGroupBy.idxmax`
Return index of first occurrence of maximum over requested axis.
```
>>> df = pd.DataFrame({'consumption': [10.51, 103.11, 55.48],
... 'co2_emissions': [37.2, 19.66, 1712]},
... index=['Pork', 'Wheat Products', 'Beef'])
```
|
DataFrameGroupBy.idxmax(axis=0, skipna=True, numeric_only=_NoDefault.no_default)[source]#
Return index of first occurrence of maximum over requested axis.
NA/null values are excluded.
Parameters
axis{0 or ‘index’, 1 or ‘columns’}, default 0The axis to use. 0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise.
skipnabool, default TrueExclude NA/null values. If an entire row/column is NA, the result
will be NA.
numeric_onlybool, default True for axis=0, False for axis=1Include only float, int or boolean data.
New in version 1.5.0.
Returns
SeriesIndexes of maxima along the specified axis.
Raises
ValueError
If the row/column is empty
See also
Series.idxmaxReturn index of the maximum element.
Notes
This method is the DataFrame version of ndarray.argmax.
Examples
Consider a dataset containing food consumption in Argentina.
>>> df = pd.DataFrame({'consumption': [10.51, 103.11, 55.48],
... 'co2_emissions': [37.2, 19.66, 1712]},
... index=['Pork', 'Wheat Products', 'Beef'])
>>> df
consumption co2_emissions
Pork 10.51 37.20
Wheat Products 103.11 19.66
Beef 55.48 1712.00
By default, it returns the index for the maximum value in each column.
>>> df.idxmax()
consumption Wheat Products
co2_emissions Beef
dtype: object
To return the index for the maximum value in each row, use axis="columns".
>>> df.idxmax(axis="columns")
Pork co2_emissions
Wheat Products consumption
Beef co2_emissions
dtype: object
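The groupby version returns, for each group, the index label of the row holding the maximum (a minimal sketch with hypothetical data):
>>> df = pd.DataFrame({'key': ['a', 'a', 'b'],  # hypothetical data
...                    'value': [1, 3, 2]})
>>> df.groupby('key')['value'].idxmax()
key
a    1
b    2
Name: value, dtype: int64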
|
reference/api/pandas.core.groupby.DataFrameGroupBy.idxmax.html
|
pandas.api.extensions.ExtensionArray.insert
|
`pandas.api.extensions.ExtensionArray.insert`
Insert an item at the given position.
|
ExtensionArray.insert(loc, item)[source]#
Insert an item at the given position.
Parameters
locint
itemscalar-like
Returns
same type as self
Notes
This method should be both type and dtype-preserving. If the item
cannot be held in an array of this type/dtype, either ValueError or
TypeError should be raised.
The default implementation relies on _from_sequence to raise on invalid
items.
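A minimal sketch using a built-in extension array (the nullable Int64 dtype):
>>> arr = pd.array([1, 2, 4], dtype='Int64')
>>> arr.insert(2, 3)
<IntegerArray>
[1, 2, 3, 4]
Length: 4, dtype: Int64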
|
reference/api/pandas.api.extensions.ExtensionArray.insert.html
|
pandas.PeriodIndex.minute
|
`pandas.PeriodIndex.minute`
The minute of the period.
|
property PeriodIndex.minute[source]#
The minute of the period.
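A minimal example, assuming minute-frequency periods:
>>> idx = pd.PeriodIndex(['2023-01-01 10:15', '2023-01-01 10:45'], freq='T')
>>> idx.minute
Int64Index([15, 45], dtype='int64')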
|
reference/api/pandas.PeriodIndex.minute.html
|
Resampling
|
Resampling
Resampler objects are returned by resample calls: pandas.DataFrame.resample(), pandas.Series.resample().
Resampler.__iter__()
Groupby iterator.
Resampler.groups
Dict {group name -> group labels}.
|
Resampler objects are returned by resample calls: pandas.DataFrame.resample(), pandas.Series.resample().
Indexing, iteration#
Resampler.__iter__()
Groupby iterator.
Resampler.groups
Dict {group name -> group labels}.
Resampler.indices
Dict {group name -> group indices}.
Resampler.get_group(name[, obj])
Construct DataFrame from group with provided name.
Function application#
Resampler.apply([func])
Aggregate using one or more operations over the specified axis.
Resampler.aggregate([func])
Aggregate using one or more operations over the specified axis.
Resampler.transform(arg, *args, **kwargs)
Call function producing a like-indexed Series on each group.
Resampler.pipe(func, *args, **kwargs)
Apply a func with arguments to this Resampler object and return its result.
Upsampling#
Resampler.ffill([limit])
Forward fill the values.
Resampler.backfill([limit])
(DEPRECATED) Backward fill the values.
Resampler.bfill([limit])
Backward fill the new missing values in the resampled data.
Resampler.pad([limit])
(DEPRECATED) Forward fill the values.
Resampler.nearest([limit])
Resample by using the nearest value.
Resampler.fillna(method[, limit])
Fill missing values introduced by upsampling.
Resampler.asfreq([fill_value])
Return the values at the new freq, essentially a reindex.
Resampler.interpolate([method, axis, limit, ...])
Interpolate values according to different methods.
Computations / descriptive stats#
Resampler.count()
Compute count of group, excluding missing values.
Resampler.nunique(*args, **kwargs)
Return number of unique elements in the group.
Resampler.first([numeric_only, min_count])
Compute the first non-null entry of each column.
Resampler.last([numeric_only, min_count])
Compute the last non-null entry of each column.
Resampler.max([numeric_only, min_count])
Compute max of group values.
Resampler.mean([numeric_only])
Compute mean of groups, excluding missing values.
Resampler.median([numeric_only])
Compute median of groups, excluding missing values.
Resampler.min([numeric_only, min_count])
Compute min of group values.
Resampler.ohlc(*args, **kwargs)
Compute open, high, low and close values of a group, excluding missing values.
Resampler.prod([numeric_only, min_count])
Compute prod of group values.
Resampler.size()
Compute group sizes.
Resampler.sem([ddof, numeric_only])
Compute standard error of the mean of groups, excluding missing values.
Resampler.std([ddof, numeric_only])
Compute standard deviation of groups, excluding missing values.
Resampler.sum([numeric_only, min_count])
Compute sum of group values.
Resampler.var([ddof, numeric_only])
Compute variance of groups, excluding missing values.
Resampler.quantile([q])
Return value at the given quantile.
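A minimal sketch of the resample-then-aggregate pattern these methods support, using hypothetical data:
>>> idx = pd.date_range('2023-01-01', periods=4, freq='12H')
>>> s = pd.Series([1, 2, 3, 4], index=idx)
>>> s.resample('D').sum()
2023-01-01    3
2023-01-02    7
Freq: D, dtype: int64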
|
reference/resampling.html
|
pandas.Index.intersection
|
`pandas.Index.intersection`
Form the intersection of two Index objects.
```
>>> idx1 = pd.Index([1, 2, 3, 4])
>>> idx2 = pd.Index([3, 4, 5, 6])
>>> idx1.intersection(idx2)
Int64Index([3, 4], dtype='int64')
```
|
final Index.intersection(other, sort=False)[source]#
Form the intersection of two Index objects.
This returns a new Index with elements common to the index and other.
Parameters
otherIndex or array-like
sortFalse or None, default FalseWhether to sort the resulting index.
False : do not sort the result.
None : sort the result, except when self and other are equal
or when the values cannot be compared.
Returns
intersectionIndex
Examples
>>> idx1 = pd.Index([1, 2, 3, 4])
>>> idx2 = pd.Index([3, 4, 5, 6])
>>> idx1.intersection(idx2)
Int64Index([3, 4], dtype='int64')
|
reference/api/pandas.Index.intersection.html
|
pandas.tseries.offsets.LastWeekOfMonth.isAnchored
|
pandas.tseries.offsets.LastWeekOfMonth.isAnchored
|
LastWeekOfMonth.isAnchored()#
|
reference/api/pandas.tseries.offsets.LastWeekOfMonth.isAnchored.html
|
pandas.Index.copy
|
`pandas.Index.copy`
Make a copy of this object.
Name and dtype set those attributes on the new object.
|
Index.copy(name=None, deep=False, dtype=None, names=None)[source]#
Make a copy of this object.
Name and dtype set those attributes on the new object.
Parameters
nameLabel, optionalSet name for new object.
deepbool, default False
dtypenumpy dtype or pandas type, optionalSet dtype for new object.
Deprecated since version 1.2.0: use astype method instead.
nameslist-like, optionalKept for compatibility with MultiIndex. Should not be used.
Deprecated since version 1.4.0: use name instead.
Returns
IndexIndex refer to new object which is a copy of this object.
Notes
In most cases, there should be no functional difference from using
deep, but if deep is passed it will attempt to deepcopy.
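A minimal example:
>>> idx = pd.Index([1, 2, 3], name='x')
>>> new_idx = idx.copy(name='y')
>>> new_idx is idx
False
>>> new_idx.name
'y'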
|
reference/api/pandas.Index.copy.html
|
pandas.tseries.offsets.BQuarterEnd.is_anchored
|
`pandas.tseries.offsets.BQuarterEnd.is_anchored`
Return boolean whether the frequency is a unit frequency (n=1).
```
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
```
|
BQuarterEnd.is_anchored()#
Return boolean whether the frequency is a unit frequency (n=1).
Examples
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
|
reference/api/pandas.tseries.offsets.BQuarterEnd.is_anchored.html
|
pandas.tseries.offsets.Day.freqstr
|
`pandas.tseries.offsets.Day.freqstr`
Return a string representing the frequency.
```
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
```
|
Day.freqstr#
Return a string representing the frequency.
Examples
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
>>> pd.offsets.BusinessHour(2).freqstr
'2BH'
>>> pd.offsets.Nano().freqstr
'N'
>>> pd.offsets.Nano(-3).freqstr
'-3N'
|
reference/api/pandas.tseries.offsets.Day.freqstr.html
|
How to combine data from multiple tables?
|
How to combine data from multiple tables?
|
Data used for this tutorial:
Air quality Nitrate data
For this tutorial, air quality data about \(NO_2\) is used, made available by
OpenAQ and downloaded using the
py-openaq package.
The air_quality_no2_long.csv data set provides \(NO_2\)
values for the measurement stations FR04014, BETR801 and London
Westminster in respectively Paris, Antwerp and London.
To raw data
In [2]: air_quality_no2 = pd.read_csv("data/air_quality_no2_long.csv",
...: parse_dates=True)
...:
In [3]: air_quality_no2 = air_quality_no2[["date.utc", "location",
...: "parameter", "value"]]
...:
In [4]: air_quality_no2.head()
Out[4]:
date.utc location parameter value
0 2019-06-21 00:00:00+00:00 FR04014 no2 20.0
1 2019-06-20 23:00:00+00:00 FR04014 no2 21.8
2 2019-06-20 22:00:00+00:00 FR04014 no2 26.5
3 2019-06-20 21:00:00+00:00 FR04014 no2 24.9
4 2019-06-20 20:00:00+00:00 FR04014 no2 21.4
Air quality Particulate matter data
For this tutorial, air quality data about Particulate
matter less than 2.5 micrometers is used, made available by
OpenAQ and downloaded using the
py-openaq package.
The air_quality_pm25_long.csv data set provides \(PM_{25}\)
values for the measurement stations FR04014, BETR801 and London
Westminster in respectively Paris, Antwerp and London.
To raw data
In [5]: air_quality_pm25 = pd.read_csv("data/air_quality_pm25_long.csv",
...: parse_dates=True)
...:
In [6]: air_quality_pm25 = air_quality_pm25[["date.utc", "location",
...: "parameter", "value"]]
...:
In [7]: air_quality_pm25.head()
Out[7]:
date.utc location parameter value
0 2019-06-18 06:00:00+00:00 BETR801 pm25 18.0
1 2019-06-17 08:00:00+00:00 BETR801 pm25 6.5
2 2019-06-17 07:00:00+00:00 BETR801 pm25 18.5
3 2019-06-17 06:00:00+00:00 BETR801 pm25 16.0
4 2019-06-17 05:00:00+00:00 BETR801 pm25 7.5
How to combine data from multiple tables?#
Concatenating objects#
I want to combine the measurements of \(NO_2\) and \(PM_{25}\), two tables with a similar structure, in a single table.
In [8]: air_quality = pd.concat([air_quality_pm25, air_quality_no2], axis=0)
In [9]: air_quality.head()
Out[9]:
date.utc location parameter value
0 2019-06-18 06:00:00+00:00 BETR801 pm25 18.0
1 2019-06-17 08:00:00+00:00 BETR801 pm25 6.5
2 2019-06-17 07:00:00+00:00 BETR801 pm25 18.5
3 2019-06-17 06:00:00+00:00 BETR801 pm25 16.0
4 2019-06-17 05:00:00+00:00 BETR801 pm25 7.5
The concat() function performs concatenation operations of multiple
tables along one of the axes (row-wise or column-wise).
By default concatenation is along axis 0, so the resulting table combines the rows
of the input tables. Let’s check the shape of the original and the
concatenated tables to verify the operation:
In [10]: print('Shape of the ``air_quality_pm25`` table: ', air_quality_pm25.shape)
Shape of the ``air_quality_pm25`` table: (1110, 4)
In [11]: print('Shape of the ``air_quality_no2`` table: ', air_quality_no2.shape)
Shape of the ``air_quality_no2`` table: (2068, 4)
In [12]: print('Shape of the resulting ``air_quality`` table: ', air_quality.shape)
Shape of the resulting ``air_quality`` table: (3178, 4)
Hence, the resulting table has 3178 = 1110 + 2068 rows.
Note
The axis argument recurs in a number of pandas
methods that can be applied along an axis. A DataFrame has two
corresponding axes: the first running vertically downwards across rows
(axis 0), and the second running horizontally across columns (axis 1).
Most operations like concatenation or summary statistics are by default
across rows (axis 0), but can be applied across columns as well.
Sorting the table on the datetime information also illustrates the
combination of both tables, with the parameter column defining the
origin of the table (either no2 from table air_quality_no2 or
pm25 from table air_quality_pm25):
In [13]: air_quality = air_quality.sort_values("date.utc")
In [14]: air_quality.head()
Out[14]:
date.utc location parameter value
2067 2019-05-07 01:00:00+00:00 London Westminster no2 23.0
1003 2019-05-07 01:00:00+00:00 FR04014 no2 25.0
100 2019-05-07 01:00:00+00:00 BETR801 pm25 12.5
1098 2019-05-07 01:00:00+00:00 BETR801 no2 50.5
1109 2019-05-07 01:00:00+00:00 London Westminster pm25 8.0
In this specific example, the parameter column provided by the data
ensures that each of the original tables can be identified. This is not
always the case. The concat function provides a convenient solution
with the keys argument, adding an additional (hierarchical) row
index. For example:
In [15]: air_quality_ = pd.concat([air_quality_pm25, air_quality_no2], keys=["PM25", "NO2"])
In [16]: air_quality_.head()
Out[16]:
date.utc location parameter value
PM25 0 2019-06-18 06:00:00+00:00 BETR801 pm25 18.0
1 2019-06-17 08:00:00+00:00 BETR801 pm25 6.5
2 2019-06-17 07:00:00+00:00 BETR801 pm25 18.5
3 2019-06-17 06:00:00+00:00 BETR801 pm25 16.0
4 2019-06-17 05:00:00+00:00 BETR801 pm25 7.5
Note
The existence of multiple row/column indices at the same time
has not been mentioned within these tutorials. Hierarchical indexing
or MultiIndex is an advanced and powerful pandas feature to analyze
higher dimensional data.
Multi-indexing is out of scope for this pandas introduction. For the
moment, remember that the function reset_index can be used to
convert any level of an index to a column, e.g.
air_quality.reset_index(level=0)
To user guideFeel free to dive into the world of multi-indexing at the user guide section on advanced indexing.
To user guideMore options on table concatenation (row and column
wise) and how concat can be used to define the logic (union or
intersection) of the indexes on the other axes are provided at the section on
object concatenation.
Join tables using a common identifier#
Add the station coordinates, provided by the stations metadata table, to the corresponding rows in the measurements table.
Warning
The air quality measurement station coordinates are stored in a data
file air_quality_stations.csv, downloaded using the
py-openaq package.
In [17]: stations_coord = pd.read_csv("data/air_quality_stations.csv")
In [18]: stations_coord.head()
Out[18]:
location coordinates.latitude coordinates.longitude
0 BELAL01 51.23619 4.38522
1 BELHB23 51.17030 4.34100
2 BELLD01 51.10998 5.00486
3 BELLD02 51.12038 5.02155
4 BELR833 51.32766 4.36226
Note
The stations used in this example (FR04014, BETR801 and London
Westminster) are just three of the entries listed in the metadata table. We
only want to add the coordinates of these three to the measurements
table, each on the corresponding rows of the air_quality table.
In [19]: air_quality.head()
Out[19]:
date.utc location parameter value
2067 2019-05-07 01:00:00+00:00 London Westminster no2 23.0
1003 2019-05-07 01:00:00+00:00 FR04014 no2 25.0
100 2019-05-07 01:00:00+00:00 BETR801 pm25 12.5
1098 2019-05-07 01:00:00+00:00 BETR801 no2 50.5
1109 2019-05-07 01:00:00+00:00 London Westminster pm25 8.0
In [20]: air_quality = pd.merge(air_quality, stations_coord, how="left", on="location")
In [21]: air_quality.head()
Out[21]:
date.utc ... coordinates.longitude
0 2019-05-07 01:00:00+00:00 ... -0.13193
1 2019-05-07 01:00:00+00:00 ... 2.39390
2 2019-05-07 01:00:00+00:00 ... 2.39390
3 2019-05-07 01:00:00+00:00 ... 4.43182
4 2019-05-07 01:00:00+00:00 ... 4.43182
[5 rows x 6 columns]
Using the merge() function, for each of the rows in the
air_quality table, the corresponding coordinates are added from the
air_quality_stations_coord table. Both tables have the column
location in common which is used as a key to combine the
information. By choosing the left join, only the locations available
in the air_quality (left) table, i.e. FR04014, BETR801 and London
Westminster, end up in the resulting table. The merge function
supports multiple join options similar to database-style operations.
Add the parameters’ full description and name, provided by the parameters metadata table, to the measurements table.
Warning
The air quality parameters metadata are stored in a data file
air_quality_parameters.csv, downloaded using the
py-openaq package.
In [22]: air_quality_parameters = pd.read_csv("data/air_quality_parameters.csv")
In [23]: air_quality_parameters.head()
Out[23]:
id description name
0 bc Black Carbon BC
1 co Carbon Monoxide CO
2 no2 Nitrogen Dioxide NO2
3 o3 Ozone O3
4 pm10 Particulate matter less than 10 micrometers in... PM10
In [24]: air_quality = pd.merge(air_quality, air_quality_parameters,
....: how='left', left_on='parameter', right_on='id')
....:
In [25]: air_quality.head()
Out[25]:
date.utc ... name
0 2019-05-07 01:00:00+00:00 ... NO2
1 2019-05-07 01:00:00+00:00 ... NO2
2 2019-05-07 01:00:00+00:00 ... NO2
3 2019-05-07 01:00:00+00:00 ... PM2.5
4 2019-05-07 01:00:00+00:00 ... NO2
[5 rows x 9 columns]
Compared to the previous example, there is no common column name.
However, the parameter column in the air_quality table and the
id column in the air_quality_parameters table both provide the
measured variable in a common format. The left_on and right_on
arguments are used here (instead of just on) to make the link
between the two tables.
To user guidepandas also supports inner, outer, and right joins.
More information on join/merge of tables is provided in the user guide section on
database style merging of tables. Or have a look at the
comparison with SQL page.
REMEMBER
Multiple tables can be concatenated both column-wise and row-wise using
the concat function.
For database-like merging/joining of tables, use the merge
function.
To user guideSee the user guide for a full description of the various facilities to combine data tables.
|
getting_started/intro_tutorials/08_combine_dataframes.html
|
pandas.tseries.offsets.BusinessHour.is_month_end
|
`pandas.tseries.offsets.BusinessHour.is_month_end`
Return boolean whether a timestamp occurs on the month end.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
```
|
BusinessHour.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
|
reference/api/pandas.tseries.offsets.BusinessHour.is_month_end.html
|
pandas.Series.radd
|
`pandas.Series.radd`
Return Addition of series and other, element-wise (binary operator radd).
```
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.add(b, fill_value=0)
a 2.0
b 1.0
c 1.0
d 1.0
e NaN
dtype: float64
```
|
Series.radd(other, level=None, fill_value=None, axis=0)[source]#
Return Addition of series and other, element-wise (binary operator radd).
Equivalent to other + series, but with support to substitute a fill_value for
missing data in either one of the inputs.
Parameters
otherSeries or scalar value
levelint or nameBroadcast across a level, matching Index values on the
passed MultiIndex level.
fill_valueNone or float value, default None (NaN)Fill existing missing (NaN) values, and any new element needed for
successful Series alignment, with this value before computation.
If data in both corresponding Series locations is missing
the result of filling (at that location) will be missing.
axis{0 or ‘index’}Unused. Parameter needed for compatibility with DataFrame.
Returns
SeriesThe result of the operation.
See also
Series.addElement-wise Addition, see Python documentation for more details.
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.add(b, fill_value=0)
a 2.0
b 1.0
c 1.0
d 1.0
e NaN
dtype: float64
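Because addition is commutative, the reflected version produces the same result (a minimal sketch reusing the Series above):
>>> a.radd(b, fill_value=0)
a    2.0
b    1.0
c    1.0
d    1.0
e    NaN
dtype: float64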
|
reference/api/pandas.Series.radd.html
|
pandas.DataFrame.ndim
|
`pandas.DataFrame.ndim`
Return an int representing the number of axes / array dimensions.
Return 1 if Series. Otherwise return 2 if DataFrame.
```
>>> s = pd.Series({'a': 1, 'b': 2, 'c': 3})
>>> s.ndim
1
```
|
property DataFrame.ndim[source]#
Return an int representing the number of axes / array dimensions.
Return 1 if Series. Otherwise return 2 if DataFrame.
See also
ndarray.ndimNumber of array dimensions.
Examples
>>> s = pd.Series({'a': 1, 'b': 2, 'c': 3})
>>> s.ndim
1
>>> df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
>>> df.ndim
2
|
reference/api/pandas.DataFrame.ndim.html
|
Comparison with SQL
|
Comparison with SQL
Since many potential pandas users have some familiarity with
SQL, this page is meant to provide some examples of how
various SQL operations would be performed using pandas.
If you’re new to pandas, you might want to first read through 10 Minutes to pandas
to familiarize yourself with the library.
As is customary, we import pandas and NumPy as follows:
Most of the examples will utilize the tips dataset found within pandas tests. We’ll read
the data into a DataFrame called tips and assume we have a database table of the same name and
structure.
Most pandas operations return copies of the Series/DataFrame. To make the changes “stick”,
you’ll need to either assign to a new variable:
|
Since many potential pandas users have some familiarity with
SQL, this page is meant to provide some examples of how
various SQL operations would be performed using pandas.
If you’re new to pandas, you might want to first read through 10 Minutes to pandas
to familiarize yourself with the library.
As is customary, we import pandas and NumPy as follows:
In [1]: import pandas as pd
In [2]: import numpy as np
Most of the examples will utilize the tips dataset found within pandas tests. We’ll read
the data into a DataFrame called tips and assume we have a database table of the same name and
structure.
In [3]: url = (
...: "https://raw.githubusercontent.com/pandas-dev"
...: "/pandas/main/pandas/tests/io/data/csv/tips.csv"
...: )
...:
In [4]: tips = pd.read_csv(url)
In [5]: tips
Out[5]:
total_bill tip sex smoker day time size
0 16.99 1.01 Female No Sun Dinner 2
1 10.34 1.66 Male No Sun Dinner 3
2 21.01 3.50 Male No Sun Dinner 3
3 23.68 3.31 Male No Sun Dinner 2
4 24.59 3.61 Female No Sun Dinner 4
.. ... ... ... ... ... ... ...
239 29.03 5.92 Male No Sat Dinner 3
240 27.18 2.00 Female Yes Sat Dinner 2
241 22.67 2.00 Male Yes Sat Dinner 2
242 17.82 1.75 Male No Sat Dinner 2
243 18.78 3.00 Female No Thur Dinner 2
[244 rows x 7 columns]
Copies vs. in place operations#
Most pandas operations return copies of the Series/DataFrame. To make the changes “stick”,
you’ll need to either assign to a new variable:
sorted_df = df.sort_values("col1")
or overwrite the original one:
df = df.sort_values("col1")
Note
You will see an inplace=True keyword argument available for some methods:
df.sort_values("col1", inplace=True)
Its use is discouraged. More information.
SELECT#
In SQL, selection is done using a comma-separated list of columns you’d like to select (or a *
to select all columns):
SELECT total_bill, tip, smoker, time
FROM tips;
With pandas, column selection is done by passing a list of column names to your DataFrame:
In [6]: tips[["total_bill", "tip", "smoker", "time"]]
Out[6]:
total_bill tip smoker time
0 16.99 1.01 No Dinner
1 10.34 1.66 No Dinner
2 21.01 3.50 No Dinner
3 23.68 3.31 No Dinner
4 24.59 3.61 No Dinner
.. ... ... ... ...
239 29.03 5.92 No Dinner
240 27.18 2.00 Yes Dinner
241 22.67 2.00 Yes Dinner
242 17.82 1.75 No Dinner
243 18.78 3.00 No Dinner
[244 rows x 4 columns]
Calling the DataFrame without the list of column names would display all columns (akin to SQL’s
*).
In SQL, you can add a calculated column:
SELECT *, tip/total_bill as tip_rate
FROM tips;
With pandas, you can use the DataFrame.assign() method of a DataFrame to append a new column:
In [7]: tips.assign(tip_rate=tips["tip"] / tips["total_bill"])
Out[7]:
total_bill tip sex smoker day time size tip_rate
0 16.99 1.01 Female No Sun Dinner 2 0.059447
1 10.34 1.66 Male No Sun Dinner 3 0.160542
2 21.01 3.50 Male No Sun Dinner 3 0.166587
3 23.68 3.31 Male No Sun Dinner 2 0.139780
4 24.59 3.61 Female No Sun Dinner 4 0.146808
.. ... ... ... ... ... ... ... ...
239 29.03 5.92 Male No Sat Dinner 3 0.203927
240 27.18 2.00 Female Yes Sat Dinner 2 0.073584
241 22.67 2.00 Male Yes Sat Dinner 2 0.088222
242 17.82 1.75 Male No Sat Dinner 2 0.098204
243 18.78 3.00 Female No Thur Dinner 2 0.159744
[244 rows x 8 columns]
WHERE#
Filtering in SQL is done via a WHERE clause.
SELECT *
FROM tips
WHERE time = 'Dinner';
DataFrames can be filtered in multiple ways, the most intuitive of which is using
boolean indexing.
In [8]: tips[tips["total_bill"] > 10]
Out[8]:
total_bill tip sex smoker day time size
0 16.99 1.01 Female No Sun Dinner 2
1 10.34 1.66 Male No Sun Dinner 3
2 21.01 3.50 Male No Sun Dinner 3
3 23.68 3.31 Male No Sun Dinner 2
4 24.59 3.61 Female No Sun Dinner 4
.. ... ... ... ... ... ... ...
239 29.03 5.92 Male No Sat Dinner 3
240 27.18 2.00 Female Yes Sat Dinner 2
241 22.67 2.00 Male Yes Sat Dinner 2
242 17.82 1.75 Male No Sat Dinner 2
243 18.78 3.00 Female No Thur Dinner 2
[227 rows x 7 columns]
The above statement is simply passing a Series of True/False objects to the DataFrame,
returning all rows with True.
In [9]: is_dinner = tips["time"] == "Dinner"
In [10]: is_dinner
Out[10]:
0 True
1 True
2 True
3 True
4 True
...
239 True
240 True
241 True
242 True
243 True
Name: time, Length: 244, dtype: bool
In [11]: is_dinner.value_counts()
Out[11]:
True 176
False 68
Name: time, dtype: int64
In [12]: tips[is_dinner]
Out[12]:
total_bill tip sex smoker day time size
0 16.99 1.01 Female No Sun Dinner 2
1 10.34 1.66 Male No Sun Dinner 3
2 21.01 3.50 Male No Sun Dinner 3
3 23.68 3.31 Male No Sun Dinner 2
4 24.59 3.61 Female No Sun Dinner 4
.. ... ... ... ... ... ... ...
239 29.03 5.92 Male No Sat Dinner 3
240 27.18 2.00 Female Yes Sat Dinner 2
241 22.67 2.00 Male Yes Sat Dinner 2
242 17.82 1.75 Male No Sat Dinner 2
243 18.78 3.00 Female No Thur Dinner 2
[176 rows x 7 columns]
Just like SQL’s OR and AND, multiple conditions can be passed to a DataFrame using |
(OR) and & (AND).
Tips of more than $5 at Dinner meals:
SELECT *
FROM tips
WHERE time = 'Dinner' AND tip > 5.00;
In [13]: tips[(tips["time"] == "Dinner") & (tips["tip"] > 5.00)]
Out[13]:
total_bill tip sex smoker day time size
23 39.42 7.58 Male No Sat Dinner 4
44 30.40 5.60 Male No Sun Dinner 4
47 32.40 6.00 Male No Sun Dinner 4
52 34.81 5.20 Female No Sun Dinner 4
59 48.27 6.73 Male No Sat Dinner 4
116 29.93 5.07 Male No Sun Dinner 4
155 29.85 5.14 Female No Sun Dinner 5
170 50.81 10.00 Male Yes Sat Dinner 3
172 7.25 5.15 Male Yes Sun Dinner 2
181 23.33 5.65 Male Yes Sun Dinner 2
183 23.17 6.50 Male Yes Sun Dinner 4
211 25.89 5.16 Male Yes Sat Dinner 4
212 48.33 9.00 Male No Sat Dinner 4
214 28.17 6.50 Female Yes Sat Dinner 3
239 29.03 5.92 Male No Sat Dinner 3
Tips by parties of at least 5 diners OR bill total was more than $45:
SELECT *
FROM tips
WHERE size >= 5 OR total_bill > 45;
In [14]: tips[(tips["size"] >= 5) | (tips["total_bill"] > 45)]
Out[14]:
total_bill tip sex smoker day time size
59 48.27 6.73 Male No Sat Dinner 4
125 29.80 4.20 Female No Thur Lunch 6
141 34.30 6.70 Male No Thur Lunch 6
142 41.19 5.00 Male No Thur Lunch 5
143 27.05 5.00 Female No Thur Lunch 6
155 29.85 5.14 Female No Sun Dinner 5
156 48.17 5.00 Male No Sun Dinner 6
170 50.81 10.00 Male Yes Sat Dinner 3
182 45.35 3.50 Male Yes Sun Dinner 3
185 20.69 5.00 Male No Sun Dinner 5
187 30.46 2.00 Male Yes Sun Dinner 5
212 48.33 9.00 Male No Sat Dinner 4
216 28.15 3.00 Male Yes Sat Dinner 5
NULL checking is done using the notna() and isna()
methods.
In [15]: frame = pd.DataFrame(
....: {"col1": ["A", "B", np.NaN, "C", "D"], "col2": ["F", np.NaN, "G", "H", "I"]}
....: )
....:
In [16]: frame
Out[16]:
col1 col2
0 A F
1 B NaN
2 NaN G
3 C H
4 D I
Assume we have a table of the same structure as our DataFrame above. We can see only the records
where col2 IS NULL with the following query:
SELECT *
FROM frame
WHERE col2 IS NULL;
In [17]: frame[frame["col2"].isna()]
Out[17]:
col1 col2
1 B NaN
Getting items where col1 IS NOT NULL can be done with notna().
SELECT *
FROM frame
WHERE col1 IS NOT NULL;
In [18]: frame[frame["col1"].notna()]
Out[18]:
col1 col2
0 A F
1 B NaN
3 C H
4 D I
GROUP BY#
In pandas, SQL’s GROUP BY operations are performed using the similarly named
groupby() method. groupby() typically refers to a
process where we’d like to split a dataset into groups, apply some function (typically an
aggregation), and then combine the groups.
A common SQL operation would be getting the count of records in each group throughout a dataset.
For instance, a query getting us the number of tips left by sex:
SELECT sex, count(*)
FROM tips
GROUP BY sex;
/*
Female 87
Male 157
*/
The pandas equivalent would be:
In [19]: tips.groupby("sex").size()
Out[19]:
sex
Female 87
Male 157
dtype: int64
Notice that in the pandas code we used size() and not
count(). This is because
count() applies the function to each column, returning
the number of NOT NULL records within each.
In [20]: tips.groupby("sex").count()
Out[20]:
total_bill tip smoker day time size
sex
Female 87 87 87 87 87 87
Male 157 157 157 157 157 157
Alternatively, we could have applied the count() method
to an individual column:
In [21]: tips.groupby("sex")["total_bill"].count()
Out[21]:
sex
Female 87
Male 157
Name: total_bill, dtype: int64
Multiple functions can also be applied at once. For instance, say we’d like to see how tip amount
differs by day of the week - agg() allows you to pass a dictionary
to your grouped DataFrame, indicating which functions to apply to specific columns.
SELECT day, AVG(tip), COUNT(*)
FROM tips
GROUP BY day;
/*
Fri 2.734737 19
Sat 2.993103 87
Sun 3.255132 76
Thur 2.771452 62
*/
In [22]: tips.groupby("day").agg({"tip": np.mean, "day": np.size})
Out[22]:
tip day
day
Fri 2.734737 19
Sat 2.993103 87
Sun 3.255132 76
Thur 2.771452 62
Grouping by more than one column is done by passing a list of columns to the
groupby() method.
SELECT smoker, day, COUNT(*), AVG(tip)
FROM tips
GROUP BY smoker, day;
/*
smoker day
No Fri 4 2.812500
Sat 45 3.102889
Sun 57 3.167895
Thur 45 2.673778
Yes Fri 15 2.714000
Sat 42 2.875476
Sun 19 3.516842
Thur 17 3.030000
*/
In [23]: tips.groupby(["smoker", "day"]).agg({"tip": [np.size, np.mean]})
Out[23]:
tip
size mean
smoker day
No Fri 4 2.812500
Sat 45 3.102889
Sun 57 3.167895
Thur 45 2.673778
Yes Fri 15 2.714000
Sat 42 2.875476
Sun 19 3.516842
Thur 17 3.030000
JOIN#
JOINs can be performed with join() or merge(). By
default, join() will join the DataFrames on their indices. Each method has
parameters allowing you to specify the type of join to perform (LEFT, RIGHT, INNER,
FULL) or the columns to join on (column names or indices).
Warning
If both key columns contain rows where the key is a null value, those
rows will be matched against each other. This is different from usual SQL
join behaviour and can lead to unexpected results.
In [24]: df1 = pd.DataFrame({"key": ["A", "B", "C", "D"], "value": np.random.randn(4)})
In [25]: df2 = pd.DataFrame({"key": ["B", "D", "D", "E"], "value": np.random.randn(4)})
Assume we have two database tables of the same name and structure as our DataFrames.
Now let’s go over the various types of JOINs.
INNER JOIN#
SELECT *
FROM df1
INNER JOIN df2
ON df1.key = df2.key;
# merge performs an INNER JOIN by default
In [26]: pd.merge(df1, df2, on="key")
Out[26]:
key value_x value_y
0 B -0.282863 1.212112
1 D -1.135632 -0.173215
2 D -1.135632 0.119209
merge() also offers parameters for cases when you’d like to join one DataFrame’s
column with another DataFrame’s index.
In [27]: indexed_df2 = df2.set_index("key")
In [28]: pd.merge(df1, indexed_df2, left_on="key", right_index=True)
Out[28]:
key value_x value_y
1 B -0.282863 1.212112
3 D -1.135632 -0.173215
3 D -1.135632 0.119209
LEFT OUTER JOIN#
Show all records from df1.
SELECT *
FROM df1
LEFT OUTER JOIN df2
ON df1.key = df2.key;
In [29]: pd.merge(df1, df2, on="key", how="left")
Out[29]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 NaN
3 D -1.135632 -0.173215
4 D -1.135632 0.119209
RIGHT JOIN#
Show all records from df2.
SELECT *
FROM df1
RIGHT OUTER JOIN df2
ON df1.key = df2.key;
In [30]: pd.merge(df1, df2, on="key", how="right")
Out[30]:
key value_x value_y
0 B -0.282863 1.212112
1 D -1.135632 -0.173215
2 D -1.135632 0.119209
3 E NaN -1.044236
FULL JOIN#
pandas also allows for FULL JOINs, which display both sides of the dataset, whether or not the
joined columns find a match. As of this writing, FULL JOINs are not supported in all RDBMSs (e.g. MySQL).
Show all records from both tables.
SELECT *
FROM df1
FULL OUTER JOIN df2
ON df1.key = df2.key;
In [31]: pd.merge(df1, df2, on="key", how="outer")
Out[31]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 NaN
3 D -1.135632 -0.173215
4 D -1.135632 0.119209
5 E NaN -1.044236
UNION#
UNION ALL can be performed using concat().
In [32]: df1 = pd.DataFrame(
....: {"city": ["Chicago", "San Francisco", "New York City"], "rank": range(1, 4)}
....: )
....:
In [33]: df2 = pd.DataFrame(
....: {"city": ["Chicago", "Boston", "Los Angeles"], "rank": [1, 4, 5]}
....: )
....:
SELECT city, rank
FROM df1
UNION ALL
SELECT city, rank
FROM df2;
/*
city rank
Chicago 1
San Francisco 2
New York City 3
Chicago 1
Boston 4
Los Angeles 5
*/
In [34]: pd.concat([df1, df2])
Out[34]:
city rank
0 Chicago 1
1 San Francisco 2
2 New York City 3
0 Chicago 1
1 Boston 4
2 Los Angeles 5
SQL’s UNION is similar to UNION ALL; however, UNION will remove duplicate rows.
SELECT city, rank
FROM df1
UNION
SELECT city, rank
FROM df2;
-- notice that there is only one Chicago record this time
/*
city rank
Chicago 1
San Francisco 2
New York City 3
Boston 4
Los Angeles 5
*/
In pandas, you can use concat() in conjunction with
drop_duplicates().
In [35]: pd.concat([df1, df2]).drop_duplicates()
Out[35]:
city rank
0 Chicago 1
1 San Francisco 2
2 New York City 3
1 Boston 4
2 Los Angeles 5
LIMIT#
SELECT * FROM tips
LIMIT 10;
In [36]: tips.head(10)
Out[36]:
total_bill tip sex smoker day time size
0 16.99 1.01 Female No Sun Dinner 2
1 10.34 1.66 Male No Sun Dinner 3
2 21.01 3.50 Male No Sun Dinner 3
3 23.68 3.31 Male No Sun Dinner 2
4 24.59 3.61 Female No Sun Dinner 4
5 25.29 4.71 Male No Sun Dinner 4
6 8.77 2.00 Male No Sun Dinner 2
7 26.88 3.12 Male No Sun Dinner 4
8 15.04 1.96 Male No Sun Dinner 2
9 14.78 3.23 Male No Sun Dinner 2
pandas equivalents for some SQL analytic and aggregate functions#
Top n rows with offset#
-- MySQL
SELECT * FROM tips
ORDER BY tip DESC
LIMIT 10 OFFSET 5;
In [37]: tips.nlargest(10 + 5, columns="tip").tail(10)
Out[37]:
total_bill tip sex smoker day time size
183 23.17 6.50 Male Yes Sun Dinner 4
214 28.17 6.50 Female Yes Sat Dinner 3
47 32.40 6.00 Male No Sun Dinner 4
239 29.03 5.92 Male No Sat Dinner 3
88 24.71 5.85 Male No Thur Lunch 2
181 23.33 5.65 Male Yes Sun Dinner 2
44 30.40 5.60 Male No Sun Dinner 4
52 34.81 5.20 Female No Sun Dinner 4
85 34.83 5.17 Female No Thur Lunch 4
211 25.89 5.16 Male Yes Sat Dinner 4
Top n rows per group#
-- Oracle's ROW_NUMBER() analytic function
SELECT * FROM (
SELECT
t.*,
ROW_NUMBER() OVER(PARTITION BY day ORDER BY total_bill DESC) AS rn
FROM tips t
)
WHERE rn < 3
ORDER BY day, rn;
In [38]: (
....: tips.assign(
....: rn=tips.sort_values(["total_bill"], ascending=False)
....: .groupby(["day"])
....: .cumcount()
....: + 1
....: )
....: .query("rn < 3")
....: .sort_values(["day", "rn"])
....: )
....:
Out[38]:
total_bill tip sex smoker day time size rn
95 40.17 4.73 Male Yes Fri Dinner 4 1
90 28.97 3.00 Male Yes Fri Dinner 2 2
170 50.81 10.00 Male Yes Sat Dinner 3 1
212 48.33 9.00 Male No Sat Dinner 4 2
156 48.17 5.00 Male No Sun Dinner 6 1
182 45.35 3.50 Male Yes Sun Dinner 3 2
197 43.11 5.00 Female Yes Thur Lunch 4 1
142 41.19 5.00 Male No Thur Lunch 5 2
The same result can be obtained using the rank(method='first') function:
In [39]: (
....: tips.assign(
....: rnk=tips.groupby(["day"])["total_bill"].rank(
....: method="first", ascending=False
....: )
....: )
....: .query("rnk < 3")
....: .sort_values(["day", "rnk"])
....: )
....:
Out[39]:
total_bill tip sex smoker day time size rnk
95 40.17 4.73 Male Yes Fri Dinner 4 1.0
90 28.97 3.00 Male Yes Fri Dinner 2 2.0
170 50.81 10.00 Male Yes Sat Dinner 3 1.0
212 48.33 9.00 Male No Sat Dinner 4 2.0
156 48.17 5.00 Male No Sun Dinner 6 1.0
182 45.35 3.50 Male Yes Sun Dinner 3 2.0
197 43.11 5.00 Female Yes Thur Lunch 4 1.0
142 41.19 5.00 Male No Thur Lunch 5 2.0
-- Oracle's RANK() analytic function
SELECT * FROM (
SELECT
t.*,
RANK() OVER(PARTITION BY sex ORDER BY tip) AS rnk
FROM tips t
WHERE tip < 2
)
WHERE rnk < 3
ORDER BY sex, rnk;
Let’s find tips with rank < 3 per gender group, for tips < 2.
Notice that when using the rank(method='min') function,
rnk_min remains the same for equal tip values
(as with Oracle’s RANK() function):
In [40]: (
....: tips[tips["tip"] < 2]
....: .assign(rnk_min=tips.groupby(["sex"])["tip"].rank(method="min"))
....: .query("rnk_min < 3")
....: .sort_values(["sex", "rnk_min"])
....: )
....:
Out[40]:
total_bill tip sex smoker day time size rnk_min
67 3.07 1.00 Female Yes Sat Dinner 1 1.0
92 5.75 1.00 Female Yes Fri Dinner 2 1.0
111 7.25 1.00 Female No Sat Dinner 1 1.0
236 12.60 1.00 Male Yes Sat Dinner 2 1.0
237 32.83 1.17 Male Yes Sat Dinner 2 2.0
UPDATE#
UPDATE tips
SET tip = tip*2
WHERE tip < 2;
In [41]: tips.loc[tips["tip"] < 2, "tip"] *= 2
DELETE#
DELETE FROM tips
WHERE tip > 9;
In pandas we select the rows that should remain instead of deleting them:
In [42]: tips = tips.loc[tips["tip"] <= 9]
|
getting_started/comparison/comparison_with_sql.html
|
pandas.DatetimeIndex.day_of_year
|
`pandas.DatetimeIndex.day_of_year`
The ordinal day of the year.
|
property DatetimeIndex.day_of_year[source]#
The ordinal day of the year.
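A minimal example:
>>> idx = pd.DatetimeIndex(['2023-01-01', '2023-02-01'])
>>> idx.day_of_year
Int64Index([1, 32], dtype='int64')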
|
reference/api/pandas.DatetimeIndex.day_of_year.html
|
pandas.arrays.IntervalArray.overlaps
|
`pandas.arrays.IntervalArray.overlaps`
Check elementwise if an Interval overlaps the values in the IntervalArray.
```
>>> data = [(0, 1), (1, 3), (2, 4)]
>>> intervals = pd.arrays.IntervalArray.from_tuples(data)
>>> intervals
<IntervalArray>
[(0, 1], (1, 3], (2, 4]]
Length: 3, dtype: interval[int64, right]
```
|
IntervalArray.overlaps(other)[source]#
Check elementwise if an Interval overlaps the values in the IntervalArray.
Two intervals overlap if they share a common point, including closed
endpoints. Intervals that only have an open endpoint in common do not
overlap.
Parameters
otherIntervalArrayInterval to check against for an overlap.
Returns
ndarrayBoolean array positionally indicating where an overlap occurs.
See also
Interval.overlapsCheck whether two Interval objects overlap.
Examples
>>> data = [(0, 1), (1, 3), (2, 4)]
>>> intervals = pd.arrays.IntervalArray.from_tuples(data)
>>> intervals
<IntervalArray>
[(0, 1], (1, 3], (2, 4]]
Length: 3, dtype: interval[int64, right]
>>> intervals.overlaps(pd.Interval(0.5, 1.5))
array([ True, True, False])
Intervals that share closed endpoints overlap:
>>> intervals.overlaps(pd.Interval(1, 3, closed='left'))
array([ True, True, True])
Intervals that only have an open endpoint in common do not overlap:
>>> intervals.overlaps(pd.Interval(1, 2, closed='right'))
array([False, True, False])
|
reference/api/pandas.arrays.IntervalArray.overlaps.html
|
pandas.isna
|
`pandas.isna`
Detect missing values for an array-like object.
This function takes a scalar or array-like object and indicates
whether values are missing (NaN in numeric arrays, None or NaN
in object arrays, NaT in datetimelike).
```
>>> pd.isna('dog')
False
```
|
pandas.isna(obj)[source]#
Detect missing values for an array-like object.
This function takes a scalar or array-like object and indicates
whether values are missing (NaN in numeric arrays, None or NaN
in object arrays, NaT in datetimelike).
Parameters
objscalar or array-likeObject to check for null or missing values.
Returns
bool or array-like of boolFor scalar input, returns a scalar boolean.
For array input, returns an array of boolean indicating whether each
corresponding element is missing.
See also
notnaBoolean inverse of pandas.isna.
Series.isnaDetect missing values in a Series.
DataFrame.isnaDetect missing values in a DataFrame.
Index.isnaDetect missing values in an Index.
Examples
Scalar arguments (including strings) result in a scalar boolean.
>>> pd.isna('dog')
False
>>> pd.isna(pd.NA)
True
>>> pd.isna(np.nan)
True
ndarrays result in an ndarray of booleans.
>>> array = np.array([[1, np.nan, 3], [4, 5, np.nan]])
>>> array
array([[ 1., nan, 3.],
[ 4., 5., nan]])
>>> pd.isna(array)
array([[False, True, False],
[False, False, True]])
For indexes, an ndarray of booleans is returned.
>>> index = pd.DatetimeIndex(["2017-07-05", "2017-07-06", None,
... "2017-07-08"])
>>> index
DatetimeIndex(['2017-07-05', '2017-07-06', 'NaT', '2017-07-08'],
dtype='datetime64[ns]', freq=None)
>>> pd.isna(index)
array([False, False, True, False])
For Series and DataFrame, the same type is returned, containing booleans.
>>> df = pd.DataFrame([['ant', 'bee', 'cat'], ['dog', None, 'fly']])
>>> df
0 1 2
0 ant bee cat
1 dog None fly
>>> pd.isna(df)
0 1 2
0 False False False
1 False True False
>>> pd.isna(df[1])
0 False
1 True
Name: 1, dtype: bool
|
reference/api/pandas.isna.html
|
pandas.tseries.offsets.Minute.kwds
|
`pandas.tseries.offsets.Minute.kwds`
Return a dict of extra parameters for the offset.
Examples
```
>>> pd.DateOffset(5).kwds
{}
```
|
Minute.kwds#
Return a dict of extra parameters for the offset.
Examples
>>> pd.DateOffset(5).kwds
{}
>>> pd.offsets.FY5253Quarter().kwds
{'weekday': 0,
'startingMonth': 1,
'qtr_with_extra_week': 1,
'variation': 'nearest'}
|
reference/api/pandas.tseries.offsets.Minute.kwds.html
|
Options and settings
|
API for configuring global behavior. See the User Guide for more.
Working with options#
describe_option(pat[, _print_desc])
Prints the description for one or more registered options.
reset_option(pat)
Reset one or more options to their default value.
get_option(pat)
Retrieves the value of the specified option.
set_option(pat, value)
Sets the value of the specified option.
option_context(*args)
Context manager to temporarily set options in the with statement context.
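A minimal sketch of these functions in action, using the display.max_rows option:
>>> pd.set_option('display.max_rows', 100)
>>> pd.get_option('display.max_rows')
100
>>> with pd.option_context('display.max_rows', 10):
...     print(pd.get_option('display.max_rows'))
10
>>> pd.reset_option('display.max_rows')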
|
reference/options.html
|
pandas.DataFrame.truediv
|
`pandas.DataFrame.truediv`
Get Floating division of dataframe and other, element-wise (binary operator truediv).
Equivalent to dataframe / other, but with support to substitute a fill_value
for missing data in one of the inputs. With reverse version, rtruediv.
```
>>> df = pd.DataFrame({'angles': [0, 3, 4],
... 'degrees': [360, 180, 360]},
... index=['circle', 'triangle', 'rectangle'])
>>> df
angles degrees
circle 0 360
triangle 3 180
rectangle 4 360
```
|
DataFrame.truediv(other, axis='columns', level=None, fill_value=None)[source]#
Get Floating division of dataframe and other, element-wise (binary operator truediv).
Equivalent to dataframe / other, but with support to substitute a fill_value
for missing data in one of the inputs. With reverse version, rtruediv.
Among flexible wrappers (add, sub, mul, div, mod, pow) to
arithmetic operators: +, -, *, /, //, %, **.
Parameters
otherscalar, sequence, Series, dict or DataFrameAny single or multiple element data structure, or list-like object.
axis{0 or ‘index’, 1 or ‘columns’}Whether to compare by the index (0 or ‘index’) or columns.
(1 or ‘columns’). For Series input, axis to match Series index on.
levelint or labelBroadcast across a level, matching Index values on the
passed MultiIndex level.
fill_valuefloat or None, default NoneFill existing missing (NaN) values, and any new element needed for
successful DataFrame alignment, with this value before computation.
If data in both corresponding DataFrame locations is missing
the result will be missing.
Returns
DataFrameResult of the arithmetic operation.
See also
DataFrame.addAdd DataFrames.
DataFrame.subSubtract DataFrames.
DataFrame.mulMultiply DataFrames.
DataFrame.divDivide DataFrames (float division).
DataFrame.truedivDivide DataFrames (float division).
DataFrame.floordivDivide DataFrames (integer division).
DataFrame.modCalculate modulo (remainder after division).
DataFrame.powCalculate exponential power.
Notes
Mismatched indices will be unioned together.
Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4],
... 'degrees': [360, 180, 360]},
... index=['circle', 'triangle', 'rectangle'])
>>> df
angles degrees
circle 0 360
triangle 3 180
rectangle 4 360
Add a scalar with the operator version, which returns the same
results.
>>> df + 1
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
>>> df.add(1)
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
Divide by constant with reverse version.
>>> df.div(10)
angles degrees
circle 0.0 36.0
triangle 0.3 18.0
rectangle 0.4 36.0
>>> df.rdiv(10)
angles degrees
circle inf 0.027778
triangle 3.333333 0.055556
rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2]
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub([1, 2], axis='columns')
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
... axis='index')
angles degrees
circle -1 359
triangle 2 179
rectangle 3 359
Multiply a dictionary by axis.
>>> df.mul({'angles': 0, 'degrees': 2})
angles degrees
circle 0 720
triangle 0 360
rectangle 0 720
>>> df.mul({'circle': 0, 'triangle': 2, 'rectangle': 3}, axis='index')
angles degrees
circle 0 0
triangle 6 360
rectangle 12 1080
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]},
... index=['circle', 'triangle', 'rectangle'])
>>> other
angles
circle 0
triangle 3
rectangle 4
>>> df * other
angles degrees
circle 0 NaN
triangle 9 NaN
rectangle 16 NaN
>>> df.mul(other, fill_value=0)
angles degrees
circle 0 0.0
triangle 9 0.0
rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
... 'degrees': [360, 180, 360, 360, 540, 720]},
... index=[['A', 'A', 'A', 'B', 'B', 'B'],
... ['circle', 'triangle', 'rectangle',
... 'square', 'pentagon', 'hexagon']])
>>> df_multindex
angles degrees
A circle 0 360
triangle 3 180
rectangle 4 360
B square 4 360
pentagon 5 540
hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0)
angles degrees
A circle NaN 1.0
triangle 1.0 1.0
rectangle 1.0 1.0
B square 0.0 0.0
pentagon 0.0 0.0
hexagon 0.0 0.0
|
reference/api/pandas.DataFrame.truediv.html
|
pandas.core.resample.Resampler.aggregate
|
`pandas.core.resample.Resampler.aggregate`
Aggregate using one or more operations over the specified axis.
```
>>> s = pd.Series([1, 2, 3, 4, 5],
... index=pd.date_range('20130101', periods=5, freq='s'))
>>> s
2013-01-01 00:00:00 1
2013-01-01 00:00:01 2
2013-01-01 00:00:02 3
2013-01-01 00:00:03 4
2013-01-01 00:00:04 5
Freq: S, dtype: int64
```
|
Resampler.aggregate(func=None, *args, **kwargs)[source]#
Aggregate using one or more operations over the specified axis.
Parameters
funcfunction, str, list or dictFunction to use for aggregating the data. If a function, must either
work when passed a DataFrame or when passed to DataFrame.apply.
Accepted combinations are:
function
string function name
list of functions and/or function names, e.g. [np.sum, 'mean']
dict of axis labels -> functions, function names or list of such.
*argsPositional arguments to pass to func.
**kwargsKeyword arguments to pass to func.
Returns
scalar, Series or DataFrameThe return can be:
scalar : when Series.agg is called with single function
Series : when DataFrame.agg is called with a single function
DataFrame : when DataFrame.agg is called with several functions
Return scalar, Series or DataFrame.
See also
DataFrame.groupby.aggregateAggregate using callable, string, dict, or list of string/callables.
DataFrame.resample.transformTransforms the Series on each group based on the given function.
DataFrame.aggregateAggregate using one or more operations over the specified axis.
Notes
agg is an alias for aggregate. Use the alias.
Functions that mutate the passed object can produce unexpected
behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods
for more details.
A passed user-defined-function will be passed a Series for evaluation.
Examples
>>> s = pd.Series([1, 2, 3, 4, 5],
... index=pd.date_range('20130101', periods=5, freq='s'))
>>> s
2013-01-01 00:00:00 1
2013-01-01 00:00:01 2
2013-01-01 00:00:02 3
2013-01-01 00:00:03 4
2013-01-01 00:00:04 5
Freq: S, dtype: int64
>>> r = s.resample('2s')
>>> r.agg(np.sum)
2013-01-01 00:00:00 3
2013-01-01 00:00:02 7
2013-01-01 00:00:04 5
Freq: 2S, dtype: int64
>>> r.agg(['sum', 'mean', 'max'])
sum mean max
2013-01-01 00:00:00 3 1.5 2
2013-01-01 00:00:02 7 3.5 4
2013-01-01 00:00:04 5 5.0 5
>>> r.agg({'result': lambda x: x.mean() / x.std(),
... 'total': np.sum})
result total
2013-01-01 00:00:00 2.121320 3
2013-01-01 00:00:02 4.949747 7
2013-01-01 00:00:04 NaN 5
>>> r.agg(average="mean", total="sum")
average total
2013-01-01 00:00:00 1.5 3
2013-01-01 00:00:02 3.5 7
2013-01-01 00:00:04 5.0 5
|
reference/api/pandas.core.resample.Resampler.aggregate.html
|
pandas.core.groupby.GroupBy.cumprod
|
`pandas.core.groupby.GroupBy.cumprod`
Cumulative product for each group.
See also
|
final GroupBy.cumprod(axis=0, *args, **kwargs)[source]#
Cumulative product for each group.
Returns
Series or DataFrame
See also
Series.groupbyApply a function groupby to a Series.
DataFrame.groupbyApply a function groupby to each row or column of a DataFrame.
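The entry above ships no example; a minimal sketch with a hypothetical two-group frame, showing that the running product restarts in each group:
>>> df = pd.DataFrame({"key": ["a", "a", "b", "b"], "val": [2, 3, 4, 5]})
>>> df.groupby("key")["val"].cumprod()  # group 'a': 2, 6; group 'b': 4, 20
0     2
1     6
2     4
3    20
Name: val, dtype: int64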
|
reference/api/pandas.core.groupby.GroupBy.cumprod.html
|
pandas.tseries.offsets.Tick.isAnchored
|
pandas.tseries.offsets.Tick.isAnchored
|
Tick.isAnchored()#
|
reference/api/pandas.tseries.offsets.Tick.isAnchored.html
|
pandas.tseries.offsets.MonthEnd.__call__
|
`pandas.tseries.offsets.MonthEnd.__call__`
Call self as a function.
|
MonthEnd.__call__(*args, **kwargs)#
Call self as a function.
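Calling the offset applies it to a timestamp; recent pandas versions deprecate the call form in favor of addition, so a hedged sketch of the equivalent addition:
>>> ts = pd.Timestamp('2022-01-15')
>>> ts + pd.offsets.MonthEnd()  # same result as calling the offset on ts
Timestamp('2022-01-31 00:00:00')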
|
reference/api/pandas.tseries.offsets.MonthEnd.__call__.html
|
pandas.tseries.offsets.QuarterEnd.name
|
`pandas.tseries.offsets.QuarterEnd.name`
Return a string representing the base frequency.
Examples
```
>>> pd.offsets.Hour().name
'H'
```
|
QuarterEnd.name#
Return a string representing the base frequency.
Examples
>>> pd.offsets.Hour().name
'H'
>>> pd.offsets.Hour(5).name
'H'
|
reference/api/pandas.tseries.offsets.QuarterEnd.name.html
|
pandas.DataFrame.pivot_table
|
`pandas.DataFrame.pivot_table`
Create a spreadsheet-style pivot table as a DataFrame.
```
>>> df = pd.DataFrame({"A": ["foo", "foo", "foo", "foo", "foo",
... "bar", "bar", "bar", "bar"],
... "B": ["one", "one", "one", "two", "two",
... "one", "one", "two", "two"],
... "C": ["small", "large", "large", "small",
... "small", "large", "small", "small",
... "large"],
... "D": [1, 2, 2, 3, 3, 4, 5, 6, 7],
... "E": [2, 4, 5, 5, 6, 6, 8, 9, 9]})
>>> df
A B C D E
0 foo one small 1 2
1 foo one large 2 4
2 foo one large 2 5
3 foo two small 3 5
4 foo two small 3 6
5 bar one large 4 6
6 bar one small 5 8
7 bar two small 6 9
8 bar two large 7 9
```
|
DataFrame.pivot_table(values=None, index=None, columns=None, aggfunc='mean', fill_value=None, margins=False, dropna=True, margins_name='All', observed=False, sort=True)[source]#
Create a spreadsheet-style pivot table as a DataFrame.
The levels in the pivot table will be stored in MultiIndex objects
(hierarchical indexes) on the index and columns of the result DataFrame.
Parameters
valuescolumn to aggregate, optional
indexcolumn, Grouper, array, or list of the previousIf an array is passed, it must be the same length as the data. The
list can contain any of the other types (except list).
Keys to group by on the pivot table index. If an array is passed,
it is used in the same manner as column values.
columnscolumn, Grouper, array, or list of the previousIf an array is passed, it must be the same length as the data. The
list can contain any of the other types (except list).
Keys to group by on the pivot table column. If an array is passed,
it is used in the same manner as column values.
aggfuncfunction, list of functions, dict, default numpy.meanIf a list of functions is passed, the resulting pivot table will have
hierarchical columns whose top level are the function names
(inferred from the function objects themselves).
If a dict is passed, the key is the column to aggregate and the value
is a function or list of functions.
fill_valuescalar, default NoneValue to replace missing values with (in the resulting pivot table,
after aggregation).
marginsbool, default FalseAdd all rows/columns (e.g. subtotals / grand totals).
dropnabool, default TrueDo not include columns whose entries are all NaN. If True,
rows with a NaN value in any column will be omitted before
computing margins.
margins_namestr, default ‘All’Name of the row / column that will contain the totals
when margins is True.
observedbool, default FalseThis only applies if any of the groupers are Categoricals.
If True: only show observed values for categorical groupers.
If False: show all values for categorical groupers.
Changed in version 0.25.0.
sortbool, default TrueSpecifies if the result should be sorted.
New in version 1.3.0.
Returns
DataFrameAn Excel style pivot table.
See also
DataFrame.pivotPivot without aggregation that can handle non-numeric data.
DataFrame.meltUnpivot a DataFrame from wide to long format, optionally leaving identifiers set.
wide_to_longWide panel to long format. Less flexible but more user-friendly than melt.
Notes
Reference the user guide for more examples.
Examples
>>> df = pd.DataFrame({"A": ["foo", "foo", "foo", "foo", "foo",
... "bar", "bar", "bar", "bar"],
... "B": ["one", "one", "one", "two", "two",
... "one", "one", "two", "two"],
... "C": ["small", "large", "large", "small",
... "small", "large", "small", "small",
... "large"],
... "D": [1, 2, 2, 3, 3, 4, 5, 6, 7],
... "E": [2, 4, 5, 5, 6, 6, 8, 9, 9]})
>>> df
A B C D E
0 foo one small 1 2
1 foo one large 2 4
2 foo one large 2 5
3 foo two small 3 5
4 foo two small 3 6
5 bar one large 4 6
6 bar one small 5 8
7 bar two small 6 9
8 bar two large 7 9
This first example aggregates values by taking the sum.
>>> table = pd.pivot_table(df, values='D', index=['A', 'B'],
... columns=['C'], aggfunc=np.sum)
>>> table
C large small
A B
bar one 4.0 5.0
two 7.0 6.0
foo one 4.0 1.0
two NaN 6.0
We can also fill missing values using the fill_value parameter.
>>> table = pd.pivot_table(df, values='D', index=['A', 'B'],
... columns=['C'], aggfunc=np.sum, fill_value=0)
>>> table
C large small
A B
bar one 4 5
two 7 6
foo one 4 1
two 0 6
The next example aggregates by taking the mean across multiple columns.
>>> table = pd.pivot_table(df, values=['D', 'E'], index=['A', 'C'],
... aggfunc={'D': np.mean,
... 'E': np.mean})
>>> table
D E
A C
bar large 5.500000 7.500000
small 5.500000 8.500000
foo large 2.000000 4.500000
small 2.333333 4.333333
We can also calculate multiple types of aggregations for any given
value column.
>>> table = pd.pivot_table(df, values=['D', 'E'], index=['A', 'C'],
... aggfunc={'D': np.mean,
... 'E': [min, max, np.mean]})
>>> table
D E
mean max mean min
A C
bar large 5.500000 9 7.500000 6
small 5.500000 9 8.500000 8
foo large 2.000000 5 4.500000 4
small 2.333333 6 4.333333 2
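The margins parameter described above appends subtotal rows and columns; a sketch reusing the same df (values computed by hand; exact rendering and dtypes may vary by pandas version):
>>> pd.pivot_table(df, values='D', index=['A'], columns=['C'],
...                aggfunc=np.sum, margins=True)
C    large  small  All
A
bar     11     11   22
foo      4      7   11
All     15     18   33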
|
reference/api/pandas.DataFrame.pivot_table.html
|
pandas.Index.is_monotonic
|
`pandas.Index.is_monotonic`
Alias for is_monotonic_increasing.
Deprecated since version 1.5.0: is_monotonic is deprecated and will be removed in a future version.
Use is_monotonic_increasing instead.
|
property Index.is_monotonic[source]#
Alias for is_monotonic_increasing.
Deprecated since version 1.5.0: is_monotonic is deprecated and will be removed in a future version.
Use is_monotonic_increasing instead.
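A minimal sketch of the suggested replacement:
>>> idx = pd.Index([1, 2, 3])
>>> idx.is_monotonic_increasing  # preferred over the deprecated alias
True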
|
reference/api/pandas.Index.is_monotonic.html
|
pandas.tseries.offsets.CustomBusinessMonthBegin.isAnchored
|
pandas.tseries.offsets.CustomBusinessMonthBegin.isAnchored
|
CustomBusinessMonthBegin.isAnchored()#
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthBegin.isAnchored.html
|
pandas.Index.tolist
|
`pandas.Index.tolist`
Return a list of the values.
These are each a scalar type, which is a Python scalar
(for str, int, float) or a pandas scalar
(for Timestamp/Timedelta/Interval/Period)
|
Index.tolist()[source]#
Return a list of the values.
These are each a scalar type, which is a Python scalar
(for str, int, float) or a pandas scalar
(for Timestamp/Timedelta/Interval/Period)
Returns
list
See also
numpy.ndarray.tolistReturn the array as an a.ndim-levels deep nested list of Python scalars.
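A minimal sketch; the NumPy integers come back as plain Python ints:
>>> idx = pd.Index([1, 2, 3])
>>> idx.tolist()
[1, 2, 3]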
|
reference/api/pandas.Index.tolist.html
|
pandas.tseries.offsets.BusinessHour.is_quarter_end
|
`pandas.tseries.offsets.BusinessHour.is_quarter_end`
Return boolean whether a timestamp occurs on the quarter end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
```
|
BusinessHour.is_quarter_end()#
Return boolean whether a timestamp occurs on the quarter end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
|
reference/api/pandas.tseries.offsets.BusinessHour.is_quarter_end.html
|
How to combine data from multiple tables?
|
How to combine data from multiple tables?
For this tutorial, air quality data about \(NO_2\) is used, made available by
OpenAQ and downloaded using the
py-openaq package.
The air_quality_no2_long.csv data set provides \(NO_2\)
values for the measurement stations FR04014, BETR801 and London
Westminster, in Paris, Antwerp and London respectively.
For this tutorial, air quality data about Particulate
matter less than 2.5 micrometers is used, made available by
OpenAQ and downloaded using the
py-openaq package.
The air_quality_pm25_long.csv data set provides \(PM_{25}\)
values for the measurement stations FR04014, BETR801 and London
Westminster, in Paris, Antwerp and London respectively.
|
Data used for this tutorial:
Air quality Nitrate data
For this tutorial, air quality data about \(NO_2\) is used, made available by
OpenAQ and downloaded using the
py-openaq package.
The air_quality_no2_long.csv data set provides \(NO_2\)
values for the measurement stations FR04014, BETR801 and London
Westminster, in Paris, Antwerp and London respectively.
To raw data
In [2]: air_quality_no2 = pd.read_csv("data/air_quality_no2_long.csv",
...: parse_dates=True)
...:
In [3]: air_quality_no2 = air_quality_no2[["date.utc", "location",
...: "parameter", "value"]]
...:
In [4]: air_quality_no2.head()
Out[4]:
date.utc location parameter value
0 2019-06-21 00:00:00+00:00 FR04014 no2 20.0
1 2019-06-20 23:00:00+00:00 FR04014 no2 21.8
2 2019-06-20 22:00:00+00:00 FR04014 no2 26.5
3 2019-06-20 21:00:00+00:00 FR04014 no2 24.9
4 2019-06-20 20:00:00+00:00 FR04014 no2 21.4
Air quality Particulate matter data
For this tutorial, air quality data about Particulate
matter less than 2.5 micrometers is used, made available by
OpenAQ and downloaded using the
py-openaq package.
The air_quality_pm25_long.csv data set provides \(PM_{25}\)
values for the measurement stations FR04014, BETR801 and London
Westminster, in Paris, Antwerp and London respectively.
To raw data
In [5]: air_quality_pm25 = pd.read_csv("data/air_quality_pm25_long.csv",
...: parse_dates=True)
...:
In [6]: air_quality_pm25 = air_quality_pm25[["date.utc", "location",
...: "parameter", "value"]]
...:
In [7]: air_quality_pm25.head()
Out[7]:
date.utc location parameter value
0 2019-06-18 06:00:00+00:00 BETR801 pm25 18.0
1 2019-06-17 08:00:00+00:00 BETR801 pm25 6.5
2 2019-06-17 07:00:00+00:00 BETR801 pm25 18.5
3 2019-06-17 06:00:00+00:00 BETR801 pm25 16.0
4 2019-06-17 05:00:00+00:00 BETR801 pm25 7.5
How to combine data from multiple tables?#
Concatenating objects#
I want to combine the measurements of \(NO_2\) and \(PM_{25}\), two tables with a similar structure, in a single table.
In [8]: air_quality = pd.concat([air_quality_pm25, air_quality_no2], axis=0)
In [9]: air_quality.head()
Out[9]:
date.utc location parameter value
0 2019-06-18 06:00:00+00:00 BETR801 pm25 18.0
1 2019-06-17 08:00:00+00:00 BETR801 pm25 6.5
2 2019-06-17 07:00:00+00:00 BETR801 pm25 18.5
3 2019-06-17 06:00:00+00:00 BETR801 pm25 16.0
4 2019-06-17 05:00:00+00:00 BETR801 pm25 7.5
The concat() function performs concatenation operations of multiple
tables along one of the axes (row-wise or column-wise).
By default concatenation is along axis 0, so the resulting table combines the rows
of the input tables. Let’s check the shape of the original and the
concatenated tables to verify the operation:
In [10]: print('Shape of the ``air_quality_pm25`` table: ', air_quality_pm25.shape)
Shape of the ``air_quality_pm25`` table: (1110, 4)
In [11]: print('Shape of the ``air_quality_no2`` table: ', air_quality_no2.shape)
Shape of the ``air_quality_no2`` table: (2068, 4)
In [12]: print('Shape of the resulting ``air_quality`` table: ', air_quality.shape)
Shape of the resulting ``air_quality`` table: (3178, 4)
Hence, the resulting table has 3178 = 1110 + 2068 rows.
Note
The axis argument appears in a number of pandas
methods that can be applied along an axis. A DataFrame has two
corresponding axes: the first running vertically downwards across rows
(axis 0), and the second running horizontally across columns (axis 1).
Most operations like concatenation or summary statistics are by default
across rows (axis 0), but can be applied across columns as well.
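For instance, a small self-contained sketch (separate from the numbered session above) of column-wise concatenation with axis=1:
>>> left = pd.DataFrame({"a": [1, 2]})
>>> right = pd.DataFrame({"b": [3, 4]})
>>> pd.concat([left, right], axis=1)  # aligns on the row index
   a  b
0  1  3
1  2  4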
Sorting the table on the datetime information also illustrates the
combination of both tables, with the parameter column defining the
origin of the table (either no2 from table air_quality_no2 or
pm25 from table air_quality_pm25):
In [13]: air_quality = air_quality.sort_values("date.utc")
In [14]: air_quality.head()
Out[14]:
date.utc location parameter value
2067 2019-05-07 01:00:00+00:00 London Westminster no2 23.0
1003 2019-05-07 01:00:00+00:00 FR04014 no2 25.0
100 2019-05-07 01:00:00+00:00 BETR801 pm25 12.5
1098 2019-05-07 01:00:00+00:00 BETR801 no2 50.5
1109 2019-05-07 01:00:00+00:00 London Westminster pm25 8.0
In this specific example, the parameter column provided by the data
ensures that each of the original tables can be identified. This is not
always the case. The concat function provides a convenient solution
with the keys argument, adding an additional (hierarchical) row
index. For example:
In [15]: air_quality_ = pd.concat([air_quality_pm25, air_quality_no2], keys=["PM25", "NO2"])
In [16]: air_quality_.head()
Out[16]:
date.utc location parameter value
PM25 0 2019-06-18 06:00:00+00:00 BETR801 pm25 18.0
1 2019-06-17 08:00:00+00:00 BETR801 pm25 6.5
2 2019-06-17 07:00:00+00:00 BETR801 pm25 18.5
3 2019-06-17 06:00:00+00:00 BETR801 pm25 16.0
4 2019-06-17 05:00:00+00:00 BETR801 pm25 7.5
Note
The existence of multiple row/column indices at the same time
has not been mentioned within these tutorials. Hierarchical indexing
or MultiIndex is an advanced and powerful pandas feature to analyze
higher dimensional data.
Multi-indexing is out of scope for this pandas introduction. For the
moment, remember that the function reset_index can be used to
convert any level of an index to a column, e.g.
air_quality.reset_index(level=0)
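A small self-contained sketch of that reset_index hint, using a toy keyed concatenation (the unnamed key level becomes a level_0 column):
>>> keyed = pd.concat(
...     [pd.DataFrame({"v": [1]}), pd.DataFrame({"v": [2]})], keys=["PM25", "NO2"])
>>> keyed.reset_index(level=0)
  level_0  v
0    PM25  1
0     NO2  2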
To user guideFeel free to dive into the world of multi-indexing at the user guide section on advanced indexing.
To user guideMore options on table concatenation (row and column
wise) and how concat can be used to define the logic (union or
intersection) of the indexes on the other axes is provided at the section on
object concatenation.
Join tables using a common identifier#
Add the station coordinates, provided by the stations metadata table, to the corresponding rows in the measurements table.
Warning
The air quality measurement station coordinates are stored in a data
file air_quality_stations.csv, downloaded using the
py-openaq package.
In [17]: stations_coord = pd.read_csv("data/air_quality_stations.csv")
In [18]: stations_coord.head()
Out[18]:
location coordinates.latitude coordinates.longitude
0 BELAL01 51.23619 4.38522
1 BELHB23 51.17030 4.34100
2 BELLD01 51.10998 5.00486
3 BELLD02 51.12038 5.02155
4 BELR833 51.32766 4.36226
Note
The stations used in this example (FR04014, BETR801 and London
Westminster) are just three of the entries listed in the metadata table. We
only want to add the coordinates of these three to the measurements
table, each on the corresponding rows of the air_quality table.
In [19]: air_quality.head()
Out[19]:
date.utc location parameter value
2067 2019-05-07 01:00:00+00:00 London Westminster no2 23.0
1003 2019-05-07 01:00:00+00:00 FR04014 no2 25.0
100 2019-05-07 01:00:00+00:00 BETR801 pm25 12.5
1098 2019-05-07 01:00:00+00:00 BETR801 no2 50.5
1109 2019-05-07 01:00:00+00:00 London Westminster pm25 8.0
In [20]: air_quality = pd.merge(air_quality, stations_coord, how="left", on="location")
In [21]: air_quality.head()
Out[21]:
date.utc ... coordinates.longitude
0 2019-05-07 01:00:00+00:00 ... -0.13193
1 2019-05-07 01:00:00+00:00 ... 2.39390
2 2019-05-07 01:00:00+00:00 ... 2.39390
3 2019-05-07 01:00:00+00:00 ... 4.43182
4 2019-05-07 01:00:00+00:00 ... 4.43182
[5 rows x 6 columns]
Using the merge() function, for each of the rows in the
air_quality table, the corresponding coordinates are added from the
air_quality_stations_coord table. Both tables have the column
location in common which is used as a key to combine the
information. By choosing the left join, only the locations available
in the air_quality (left) table, i.e. FR04014, BETR801 and London
Westminster, end up in the resulting table. The merge function
supports multiple join options similar to database-style operations.
Add the parameters’ full description and name, provided by the parameters metadata table, to the measurements table.
Warning
The air quality parameters metadata are stored in a data file
air_quality_parameters.csv, downloaded using the
py-openaq package.
In [22]: air_quality_parameters = pd.read_csv("data/air_quality_parameters.csv")
In [23]: air_quality_parameters.head()
Out[23]:
id description name
0 bc Black Carbon BC
1 co Carbon Monoxide CO
2 no2 Nitrogen Dioxide NO2
3 o3 Ozone O3
4 pm10 Particulate matter less than 10 micrometers in... PM10
In [24]: air_quality = pd.merge(air_quality, air_quality_parameters,
....: how='left', left_on='parameter', right_on='id')
....:
In [25]: air_quality.head()
Out[25]:
date.utc ... name
0 2019-05-07 01:00:00+00:00 ... NO2
1 2019-05-07 01:00:00+00:00 ... NO2
2 2019-05-07 01:00:00+00:00 ... NO2
3 2019-05-07 01:00:00+00:00 ... PM2.5
4 2019-05-07 01:00:00+00:00 ... NO2
[5 rows x 9 columns]
Compared to the previous example, there is no common column name.
However, the parameter column in the air_quality table and the
id column in the air_quality_parameters_name both provide the
measured variable in a common format. The left_on and right_on
arguments are used here (instead of just on) to make the link
between the two tables.
To user guidepandas also supports inner, outer, and right joins.
More information on join/merge of tables is provided in the user guide section on
database style merging of tables. Or have a look at the
comparison with SQL page.
REMEMBER
Multiple tables can be concatenated both column-wise and row-wise using
the concat function.
For database-like merging/joining of tables, use the merge
function.
To user guideSee the user guide for a full description of the various facilities to combine data tables.
|
getting_started/intro_tutorials/08_combine_dataframes.html
|
pandas.MultiIndex.names
|
`pandas.MultiIndex.names`
Names of levels in MultiIndex.
```
>>> mi = pd.MultiIndex.from_arrays(
... [[1, 2], [3, 4], [5, 6]], names=['x', 'y', 'z'])
>>> mi
MultiIndex([(1, 3, 5),
(2, 4, 6)],
names=['x', 'y', 'z'])
>>> mi.names
FrozenList(['x', 'y', 'z'])
```
|
property MultiIndex.names[source]#
Names of levels in MultiIndex.
Examples
>>> mi = pd.MultiIndex.from_arrays(
... [[1, 2], [3, 4], [5, 6]], names=['x', 'y', 'z'])
>>> mi
MultiIndex([(1, 3, 5),
(2, 4, 6)],
names=['x', 'y', 'z'])
>>> mi.names
FrozenList(['x', 'y', 'z'])
|
reference/api/pandas.MultiIndex.names.html
|
pandas.core.groupby.GroupBy.groups
|
`pandas.core.groupby.GroupBy.groups`
Dict {group name -> group labels}.
|
property GroupBy.groups[source]#
Dict {group name -> group labels}.
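A minimal sketch (the values are Index objects; the exact mapping type is an implementation detail):
>>> df = pd.DataFrame({"key": ["a", "b", "a"], "val": [1, 2, 3]})
>>> df.groupby("key").groups  # maps 'a' -> Index([0, 2]), 'b' -> Index([1])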
|
reference/api/pandas.core.groupby.GroupBy.groups.html
|
pandas.tseries.offsets.CustomBusinessHour.onOffset
|
pandas.tseries.offsets.CustomBusinessHour.onOffset
|
CustomBusinessHour.onOffset()#
|
reference/api/pandas.tseries.offsets.CustomBusinessHour.onOffset.html
|
pandas.tseries.offsets.CustomBusinessDay.normalize
|
pandas.tseries.offsets.CustomBusinessDay.normalize
|
CustomBusinessDay.normalize#
|
reference/api/pandas.tseries.offsets.CustomBusinessDay.normalize.html
|
pandas.errors.OutOfBoundsTimedelta
|
`pandas.errors.OutOfBoundsTimedelta`
Raised when encountering a timedelta value that cannot be represented.
|
exception pandas.errors.OutOfBoundsTimedelta#
Raised when encountering a timedelta value that cannot be represented.
Representation should be within a timedelta64[ns].
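A sketch of a value that overflows the roughly 292-year span representable in timedelta64[ns]:
>>> try:
...     pd.Timedelta(days=10**6)  # about 2,700 years
... except pd.errors.OutOfBoundsTimedelta as err:
...     print(type(err).__name__)
OutOfBoundsTimedelta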
|
reference/api/pandas.errors.OutOfBoundsTimedelta.html
|
pandas.DatetimeIndex.timetz
|
`pandas.DatetimeIndex.timetz`
Returns numpy array of datetime.time objects with timezones.
|
property DatetimeIndex.timetz[source]#
Returns numpy array of datetime.time objects with timezones.
The time part of the Timestamps.
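A minimal sketch (the tzinfo repr shown assumes the pytz backend):
>>> idx = pd.DatetimeIndex(["2022-01-01 10:00"], tz="UTC")
>>> idx.timetz
array([datetime.time(10, 0, tzinfo=<UTC>)], dtype=object)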
|
reference/api/pandas.DatetimeIndex.timetz.html
|
pandas.Series.str.encode
|
`pandas.Series.str.encode`
Encode character string in the Series/Index using indicated encoding.
|
Series.str.encode(encoding, errors='strict')[source]#
Encode character string in the Series/Index using indicated encoding.
Equivalent to str.encode().
Parameters
encodingstr
errorsstr, optional
Returns
encodedSeries/Index of objects
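A minimal sketch; each element becomes a bytes object:
>>> s = pd.Series(["a", "b"])
>>> s.str.encode("utf-8")
0    b'a'
1    b'b'
dtype: object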
|
reference/api/pandas.Series.str.encode.html
|
pandas.Timestamp.date
|
`pandas.Timestamp.date`
Return date object with same year, month and day.
|
Timestamp.date()#
Return date object with same year, month and day.
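A minimal sketch; the time-of-day component is dropped:
>>> ts = pd.Timestamp("2023-01-01 10:00:00")
>>> ts.date()
datetime.date(2023, 1, 1)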
|
reference/api/pandas.Timestamp.date.html
|
pandas.DataFrame.cumprod
|
`pandas.DataFrame.cumprod`
Return cumulative product over a DataFrame or Series axis.
```
>>> s = pd.Series([2, np.nan, 5, -1, 0])
>>> s
0 2.0
1 NaN
2 5.0
3 -1.0
4 0.0
dtype: float64
```
|
DataFrame.cumprod(axis=None, skipna=True, *args, **kwargs)[source]#
Return cumulative product over a DataFrame or Series axis.
Returns a DataFrame or Series of the same size containing the cumulative
product.
Parameters
axis{0 or ‘index’, 1 or ‘columns’}, default 0The index or the name of the axis. 0 is equivalent to None or ‘index’.
For Series this parameter is unused and defaults to 0.
skipnabool, default TrueExclude NA/null values. If an entire row/column is NA, the result
will be NA.
*args, **kwargsAdditional keywords have no effect but might be accepted for
compatibility with NumPy.
Returns
Series or DataFrameReturn cumulative product of Series or DataFrame.
See also
core.window.expanding.Expanding.prodSimilar functionality but ignores NaN values.
DataFrame.prodReturn the product over DataFrame axis.
DataFrame.cummaxReturn cumulative maximum over DataFrame axis.
DataFrame.cumminReturn cumulative minimum over DataFrame axis.
DataFrame.cumsumReturn cumulative sum over DataFrame axis.
DataFrame.cumprodReturn cumulative product over DataFrame axis.
Examples
Series
>>> s = pd.Series([2, np.nan, 5, -1, 0])
>>> s
0 2.0
1 NaN
2 5.0
3 -1.0
4 0.0
dtype: float64
By default, NA values are ignored.
>>> s.cumprod()
0 2.0
1 NaN
2 10.0
3 -10.0
4 -0.0
dtype: float64
To include NA values in the operation, use skipna=False
>>> s.cumprod(skipna=False)
0 2.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
DataFrame
>>> df = pd.DataFrame([[2.0, 1.0],
... [3.0, np.nan],
... [1.0, 0.0]],
... columns=list('AB'))
>>> df
A B
0 2.0 1.0
1 3.0 NaN
2 1.0 0.0
By default, iterates over rows and finds the product
in each column. This is equivalent to axis=None or axis='index'.
>>> df.cumprod()
A B
0 2.0 1.0
1 6.0 NaN
2 6.0 0.0
To iterate over columns and find the product in each row,
use axis=1
>>> df.cumprod(axis=1)
A B
0 2.0 2.0
1 3.0 NaN
2 1.0 0.0
|
reference/api/pandas.DataFrame.cumprod.html
|
pandas.tseries.offsets.FY5253.rollback
|
`pandas.tseries.offsets.FY5253.rollback`
Roll provided date backward to next offset only if not on offset.
Rolled timestamp if not on offset, otherwise unchanged timestamp.
|
FY5253.rollback()#
Roll provided date backward to next offset only if not on offset.
Returns
TimeStampRolled timestamp if not on offset, otherwise unchanged timestamp.
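FY5253 takes several configuration parameters, so a sketch with the simpler MonthEnd offset illustrates the rollback behavior shared by all offsets:
>>> pd.offsets.MonthEnd().rollback(pd.Timestamp("2022-01-15"))
Timestamp('2021-12-31 00:00:00')
>>> pd.offsets.MonthEnd().rollback(pd.Timestamp("2022-01-31"))  # already on offset
Timestamp('2022-01-31 00:00:00')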
|
reference/api/pandas.tseries.offsets.FY5253.rollback.html
|
pandas.tseries.offsets.QuarterBegin.apply
|
pandas.tseries.offsets.QuarterBegin.apply
|
QuarterBegin.apply()#
|
reference/api/pandas.tseries.offsets.QuarterBegin.apply.html
|
pandas.Index.is_numeric
|
`pandas.Index.is_numeric`
Check if the Index only consists of numeric data.
Whether or not the Index only consists of numeric data.
```
>>> idx = pd.Index([1.0, 2.0, 3.0, 4.0])
>>> idx.is_numeric()
True
```
|
final Index.is_numeric()[source]#
Check if the Index only consists of numeric data.
Returns
boolWhether or not the Index only consists of numeric data.
See also
is_booleanCheck if the Index only consists of booleans.
is_integerCheck if the Index only consists of integers.
is_floatingCheck if the Index is a floating type.
is_objectCheck if the Index is of the object dtype.
is_categoricalCheck if the Index holds categorical data.
is_intervalCheck if the Index holds Interval objects.
is_mixedCheck if the Index holds data with mixed data types.
Examples
>>> idx = pd.Index([1.0, 2.0, 3.0, 4.0])
>>> idx.is_numeric()
True
>>> idx = pd.Index([1, 2, 3, 4.0])
>>> idx.is_numeric()
True
>>> idx = pd.Index([1, 2, 3, 4])
>>> idx.is_numeric()
True
>>> idx = pd.Index([1, 2, 3, 4.0, np.nan])
>>> idx.is_numeric()
True
>>> idx = pd.Index([1, 2, 3, 4.0, np.nan, "Apple"])
>>> idx.is_numeric()
False
|
reference/api/pandas.Index.is_numeric.html
|
pandas.tseries.offsets.DateOffset
|
`pandas.tseries.offsets.DateOffset`
Standard kind of date increment used for a date range.
```
>>> from pandas.tseries.offsets import DateOffset
>>> ts = pd.Timestamp('2017-01-01 09:10:11')
>>> ts + DateOffset(months=3)
Timestamp('2017-04-01 09:10:11')
```
|
class pandas.tseries.offsets.DateOffset#
Standard kind of date increment used for a date range.
Works exactly like the keyword argument form of relativedelta.
Note that the positional argument form of relativedelta is not
supported. Use of the keyword n is discouraged; you would be better
off specifying n in the keywords you use, but regardless it is
there for you. n is needed for DateOffset subclasses.
DateOffset works as follows. Each offset specifies a set of dates
that conform to the DateOffset. For example, Bday defines this
set to be the set of dates that are weekdays (M-F). To test if a
date is in the set of a DateOffset dateOffset we can use the
is_on_offset method: dateOffset.is_on_offset(date).
If a date is not a valid date for the offset, the rollback and rollforward
methods can be used to roll the date to the nearest valid date
before/after the date.
DateOffsets can be created to move dates forward a given number of
valid dates. For example, Bday(2) can be added to a date to move
it two business days forward. If the date does not start on a
valid date, first it is moved to a valid date. Thus pseudo code
is:
def __add__(date):
    date = rollback(date)  # does nothing if date is valid
    return date + <n number of periods>
When a date offset is created for a negative number of periods,
the date is first rolled forward. The pseudo code is:
def __add__(date):
    date = rollforward(date)  # does nothing if date is valid
    return date + <n number of periods>
Zero presents a problem. Should it roll forward or back? We
arbitrarily have it rollforward:
date + BDay(0) == BDay.rollforward(date)
Since 0 is a bit weird, we suggest avoiding its use.
Besides, adding a DateOffset specified by the singular form of a date
component (e.g. day=31) replaces that component of the timestamp.
Parameters
nint, default 1The number of time periods the offset represents.
If specified without a temporal pattern, defaults to n days.
normalizebool, default FalseWhether to round the result of a DateOffset addition down to the
previous midnight.
**kwdsTemporal parameters that add to or replace the offset value.
Parameters that add to the offset (like Timedelta):
years
months
weeks
days
hours
minutes
seconds
milliseconds
microseconds
nanoseconds
Parameters that replace the offset value:
year
month
day
weekday
hour
minute
second
microsecond
nanosecond
See also
dateutil.relativedelta.relativedeltaThe relativedelta type is designed to be applied to an existing datetime and can replace specific components of that datetime, or represent an interval of time.
Examples
>>> from pandas.tseries.offsets import DateOffset
>>> ts = pd.Timestamp('2017-01-01 09:10:11')
>>> ts + DateOffset(months=3)
Timestamp('2017-04-01 09:10:11')
>>> ts = pd.Timestamp('2017-01-01 09:10:11')
>>> ts + DateOffset(months=2)
Timestamp('2017-03-01 09:10:11')
>>> ts + DateOffset(day=31)
Timestamp('2017-01-31 09:10:11')
>>> ts + pd.DateOffset(hour=8)
Timestamp('2017-01-01 08:10:11')
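The rollback/rollforward behavior described in the notes can be sketched with Bday (2017-01-01 fell on a Sunday):
>>> bday = pd.offsets.BDay()
>>> bday.is_on_offset(pd.Timestamp('2017-01-01'))  # Sunday is not a business day
False
>>> bday.rollforward(pd.Timestamp('2017-01-01'))
Timestamp('2017-01-02 00:00:00')
>>> bday.rollback(pd.Timestamp('2017-01-01'))
Timestamp('2016-12-30 00:00:00')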
Attributes
base
Returns a copy of the calling offset object with n=1 and all other attributes equal.
freqstr
Return a string representing the frequency.
kwds
Return a dict of extra parameters for the offset.
name
Return a string representing the base frequency.
n
nanos
normalize
rule_code
Methods
__call__(*args, **kwargs)
Call self as a function.
apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
copy
Return a copy of the frequency.
is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
is_month_end
Return boolean whether a timestamp occurs on the month end.
is_month_start
Return boolean whether a timestamp occurs on the month start.
is_on_offset
Return boolean whether a timestamp intersects with this frequency.
is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
is_year_end
Return boolean whether a timestamp occurs on the year end.
is_year_start
Return boolean whether a timestamp occurs on the year start.
rollback
Roll provided date backward to next offset only if not on offset.
rollforward
Roll provided date forward to next offset only if not on offset.
apply
isAnchored
onOffset
|
reference/api/pandas.tseries.offsets.DateOffset.html
|
Testing
|
Testing
|
Assertion functions#
testing.assert_frame_equal(left, right[, ...])
Check that left and right DataFrame are equal.
testing.assert_series_equal(left, right[, ...])
Check that left and right Series are equal.
testing.assert_index_equal(left, right[, ...])
Check that left and right Index are equal.
testing.assert_extension_array_equal(left, right)
Check that left and right ExtensionArrays are equal.
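A sketch of the first assertion in use; it returns None on success and raises AssertionError with a diff on mismatch (check_dtype=False relaxes the int-versus-float comparison here):
>>> import pandas.testing as tm
>>> left = pd.DataFrame({"a": [1, 2]})
>>> right = pd.DataFrame({"a": [1.0, 2.0]})
>>> tm.assert_frame_equal(left, right, check_dtype=False)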
Exceptions and warnings#
errors.AbstractMethodError(class_instance[, ...])
Raise this error instead of NotImplementedError for abstract methods.
errors.AccessorRegistrationWarning
Warning for attribute conflicts in accessor registration.
errors.AttributeConflictWarning
Warning raised when index attributes conflict when using HDFStore.
errors.CategoricalConversionWarning
Warning raised when reading a partially labeled Stata file using an iterator.
errors.ClosedFileError
Exception is raised when trying to perform an operation on a closed HDFStore file.
errors.CSSWarning
Warning is raised when converting css styling fails.
errors.DatabaseError
Error is raised when executing sql with bad syntax or sql that throws an error.
errors.DataError
Exception raised when performing an operation on non-numerical data.
errors.DtypeWarning
Warning raised when reading different dtypes in a column from a file.
errors.DuplicateLabelError
Error raised when an operation would introduce duplicate labels.
errors.EmptyDataError
Exception raised in pd.read_csv when empty data or header is encountered.
errors.IncompatibilityWarning
Warning raised when trying to use where criteria on an incompatible HDF5 file.
errors.IndexingError
Exception is raised when trying to index and there is a mismatch in dimensions.
errors.InvalidColumnName
Warning raised by to_stata when a column contains a non-valid Stata name.
errors.InvalidIndexError
Exception raised when attempting to use an invalid index key.
errors.IntCastingNaNError
Exception raised when converting (astype) an array with NaN to an integer type.
errors.MergeError
Exception raised when merging data.
errors.NullFrequencyError
Exception raised when a freq cannot be null.
errors.NumbaUtilError
Error raised for unsupported Numba engine routines.
errors.NumExprClobberingError
Exception raised when trying to use a built-in numexpr name as a variable name.
errors.OptionError
Exception raised for pandas.options.
errors.OutOfBoundsDatetime
Raised when the datetime is outside the range that can be represented.
errors.OutOfBoundsTimedelta
Raised when encountering a timedelta value that cannot be represented.
errors.ParserError
Exception that is raised by an error encountered in parsing file contents.
errors.ParserWarning
Warning raised when reading a file that doesn't use the default 'c' parser.
errors.PerformanceWarning
Warning raised when there is a possible performance impact.
errors.PossibleDataLossError
Exception raised when trying to open a HDFStore file when already opened.
errors.PossiblePrecisionLoss
Warning raised by to_stata on a column whose values reach or exceed the int64 limit.
errors.PyperclipException
Exception raised when clipboard functionality is unsupported.
errors.PyperclipWindowsException(message)
Exception raised when clipboard functionality is unsupported by Windows.
errors.SettingWithCopyError
Exception raised when trying to set on a copied slice from a DataFrame.
errors.SettingWithCopyWarning
Warning raised when trying to set on a copied slice from a DataFrame.
errors.SpecificationError
Exception raised by agg when the functions are ill-specified.
errors.UndefinedVariableError(name[, is_local])
Exception raised by query or eval when using an undefined variable name.
errors.UnsortedIndexError
Error raised when slicing a MultiIndex which has not been lexsorted.
errors.UnsupportedFunctionCall
Exception raised when attempting to call an unsupported numpy function.
errors.ValueLabelTypeMismatch
Warning raised by to_stata on a category column that contains non-string values.
Bug report function#
show_versions([as_json])
Provide useful information, important for bug reports.
Test suite runner#
test([extra_args])
Run the pandas test suite using pytest.
|
reference/testing.html
|
pandas.Timestamp.is_quarter_start
|
`pandas.Timestamp.is_quarter_start`
Return True if date is first day of the quarter.
```
>>> ts = pd.Timestamp(2020, 3, 14)
>>> ts.is_quarter_start
False
```
|
Timestamp.is_quarter_start#
Return True if date is first day of the quarter.
Examples
>>> ts = pd.Timestamp(2020, 3, 14)
>>> ts.is_quarter_start
False
>>> ts = pd.Timestamp(2020, 4, 1)
>>> ts.is_quarter_start
True
|
reference/api/pandas.Timestamp.is_quarter_start.html
|