title | summary | context | path
---|---|---|---
pandas.tseries.offsets.MonthEnd.is_year_start | `pandas.tseries.offsets.MonthEnd.is_year_start`
Return boolean whether a timestamp occurs on the year start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
``` | MonthEnd.is_year_start()#
Return boolean whether a timestamp occurs on the year start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
| reference/api/pandas.tseries.offsets.MonthEnd.is_year_start.html |
pandas.Period.minute | `pandas.Period.minute`
Get minute of the hour component of the Period.
```
>>> p = pd.Period("2018-03-11 13:03:12.050000")
>>> p.minute
3
``` | Period.minute#
Get minute of the hour component of the Period.
Returns
int : The minute as an integer, between 0 and 59.
See also
Period.hour : Get the hour component of the Period.
Period.second : Get the second component of the Period.
Examples
>>> p = pd.Period("2018-03-11 13:03:12.050000")
>>> p.minute
3
| reference/api/pandas.Period.minute.html |
pandas.Series.dt.tz_localize | `pandas.Series.dt.tz_localize`
Localize tz-naive Datetime Array/Index to tz-aware Datetime Array/Index.
```
>>> tz_naive = pd.date_range('2018-03-01 09:00', periods=3)
>>> tz_naive
DatetimeIndex(['2018-03-01 09:00:00', '2018-03-02 09:00:00',
'2018-03-03 09:00:00'],
dtype='datetime64[ns]', freq='D')
``` | Series.dt.tz_localize(*args, **kwargs)[source]#
Localize tz-naive Datetime Array/Index to tz-aware Datetime Array/Index.
This method takes a time zone (tz) naive Datetime Array/Index object
and makes this time zone aware. It does not move the time to another
time zone.
This method can also be used to do the inverse – to create a time
zone unaware object from an aware object. To that end, pass tz=None.
Parameters
tz : str, pytz.timezone, dateutil.tz.tzfile or None
Time zone to convert timestamps to. Passing None will remove the time zone information, preserving local time.
ambiguous : ‘infer’, ‘NaT’, bool array, default ‘raise’
When clocks moved backward due to DST, ambiguous times may arise. For example in Central European Time (UTC+01), when going from 03:00 DST to 02:00 non-DST, 02:30:00 local time occurs both at 00:30:00 UTC and at 01:30:00 UTC. In such a situation, the ambiguous parameter dictates how ambiguous times should be handled.
‘infer’ will attempt to infer fall dst-transition hours based on order
bool-ndarray where True signifies a DST time, False signifies a non-DST time (note that this flag is only applicable for ambiguous times)
‘NaT’ will return NaT where there are ambiguous times
‘raise’ will raise an AmbiguousTimeError if there are ambiguous times.
nonexistent : ‘shift_forward’, ‘shift_backward’, ‘NaT’, timedelta, default ‘raise’
A nonexistent time does not exist in a particular timezone where clocks moved forward due to DST.
‘shift_forward’ will shift the nonexistent time forward to the closest existing time
‘shift_backward’ will shift the nonexistent time backward to the closest existing time
‘NaT’ will return NaT where there are nonexistent times
timedelta objects will shift nonexistent times by the timedelta
‘raise’ will raise a NonExistentTimeError if there are nonexistent times.
Returns
Same type as self : Array/Index converted to the specified time zone.
Raises
TypeError : If the Datetime Array/Index is tz-aware and tz is not None.
See also
DatetimeIndex.tz_convert : Convert tz-aware DatetimeIndex from one time zone to another.
Examples
>>> tz_naive = pd.date_range('2018-03-01 09:00', periods=3)
>>> tz_naive
DatetimeIndex(['2018-03-01 09:00:00', '2018-03-02 09:00:00',
'2018-03-03 09:00:00'],
dtype='datetime64[ns]', freq='D')
Localize DatetimeIndex in US/Eastern time zone:
>>> tz_aware = tz_naive.tz_localize(tz='US/Eastern')
>>> tz_aware
DatetimeIndex(['2018-03-01 09:00:00-05:00',
'2018-03-02 09:00:00-05:00',
'2018-03-03 09:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)
With tz=None, we can remove the time zone information
while keeping the local time (not converted to UTC):
>>> tz_aware.tz_localize(None)
DatetimeIndex(['2018-03-01 09:00:00', '2018-03-02 09:00:00',
'2018-03-03 09:00:00'],
dtype='datetime64[ns]', freq=None)
Be careful with DST changes. When there is sequential data, pandas can
infer the DST time:
>>> s = pd.to_datetime(pd.Series(['2018-10-28 01:30:00',
... '2018-10-28 02:00:00',
... '2018-10-28 02:30:00',
... '2018-10-28 02:00:00',
... '2018-10-28 02:30:00',
... '2018-10-28 03:00:00',
... '2018-10-28 03:30:00']))
>>> s.dt.tz_localize('CET', ambiguous='infer')
0 2018-10-28 01:30:00+02:00
1 2018-10-28 02:00:00+02:00
2 2018-10-28 02:30:00+02:00
3 2018-10-28 02:00:00+01:00
4 2018-10-28 02:30:00+01:00
5 2018-10-28 03:00:00+01:00
6 2018-10-28 03:30:00+01:00
dtype: datetime64[ns, CET]
In some cases, inferring the DST is impossible. In such cases, you can
pass an ndarray to the ambiguous parameter to set the DST explicitly
>>> s = pd.to_datetime(pd.Series(['2018-10-28 01:20:00',
... '2018-10-28 02:36:00',
... '2018-10-28 03:46:00']))
>>> s.dt.tz_localize('CET', ambiguous=np.array([True, True, False]))
0 2018-10-28 01:20:00+02:00
1 2018-10-28 02:36:00+02:00
2 2018-10-28 03:46:00+01:00
dtype: datetime64[ns, CET]
If the DST transition causes nonexistent times, you can shift these
dates forward or backward with a timedelta object or ‘shift_forward’
or ‘shift_backward’.
>>> s = pd.to_datetime(pd.Series(['2015-03-29 02:30:00',
... '2015-03-29 03:30:00']))
>>> s.dt.tz_localize('Europe/Warsaw', nonexistent='shift_forward')
0 2015-03-29 03:00:00+02:00
1 2015-03-29 03:30:00+02:00
dtype: datetime64[ns, Europe/Warsaw]
>>> s.dt.tz_localize('Europe/Warsaw', nonexistent='shift_backward')
0 2015-03-29 01:59:59.999999999+01:00
1 2015-03-29 03:30:00+02:00
dtype: datetime64[ns, Europe/Warsaw]
>>> s.dt.tz_localize('Europe/Warsaw', nonexistent=pd.Timedelta('1H'))
0 2015-03-29 03:30:00+02:00
1 2015-03-29 03:30:00+02:00
dtype: datetime64[ns, Europe/Warsaw]
| reference/api/pandas.Series.dt.tz_localize.html |
pandas.core.groupby.DataFrameGroupBy.mad | `pandas.core.groupby.DataFrameGroupBy.mad`
Return the mean absolute deviation of the values over the requested axis.
Deprecated since version 1.5.0: mad is deprecated. | property DataFrameGroupBy.mad[source]#
Return the mean absolute deviation of the values over the requested axis.
Deprecated since version 1.5.0: mad is deprecated.
Parameters
axis : {index (0), columns (1)}
Axis for the function to be applied on. For Series this parameter is unused and defaults to 0.
skipna : bool, default True
Exclude NA/null values when computing the result.
level : int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.
Returns
Series or DataFrame (if level specified)
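The docstring ships without an example; a minimal illustrative sketch (the frame and values are invented here, and note the method is deprecated):
>>> df = pd.DataFrame({'g': ['a', 'a', 'b'], 'x': [1.0, 3.0, 5.0]})  # illustrative data
>>> df.groupby('g').mad()  # deprecated since 1.5.0
     x
g
a  1.0
b  0.0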
| reference/api/pandas.core.groupby.DataFrameGroupBy.mad.html |
pandas.tseries.offsets.CustomBusinessMonthBegin.is_quarter_start | `pandas.tseries.offsets.CustomBusinessMonthBegin.is_quarter_start`
Return boolean whether a timestamp occurs on the quarter start.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
``` | CustomBusinessMonthBegin.is_quarter_start()#
Return boolean whether a timestamp occurs on the quarter start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
| reference/api/pandas.tseries.offsets.CustomBusinessMonthBegin.is_quarter_start.html |
pandas.tseries.offsets.Week.n | pandas.tseries.offsets.Week.n | Week.n#
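The docstring for this attribute is empty; as an illustrative note, n holds the integer multiple of the offset:
>>> pd.offsets.Week(3).n  # the multiple passed at construction
3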
| reference/api/pandas.tseries.offsets.Week.n.html |
pandas.DataFrame.first | `pandas.DataFrame.first`
Select initial periods of time series data based on a date offset.
```
>>> i = pd.date_range('2018-04-09', periods=4, freq='2D')
>>> ts = pd.DataFrame({'A': [1, 2, 3, 4]}, index=i)
>>> ts
A
2018-04-09 1
2018-04-11 2
2018-04-13 3
2018-04-15 4
``` | DataFrame.first(offset)[source]#
Select initial periods of time series data based on a date offset.
When having a DataFrame with dates as index, this function can
select the first few rows based on a date offset.
Parameters
offset : str, DateOffset or dateutil.relativedelta
The offset length of the data that will be selected. For instance, ‘1M’ will display all the rows having their index within the first month.
Returns
Series or DataFrame : A subset of the caller.
Raises
TypeError : If the index is not a DatetimeIndex.
See also
last : Select final periods of time series based on a date offset.
at_time : Select values at a particular time of the day.
between_time : Select values between particular times of the day.
Examples
>>> i = pd.date_range('2018-04-09', periods=4, freq='2D')
>>> ts = pd.DataFrame({'A': [1, 2, 3, 4]}, index=i)
>>> ts
A
2018-04-09 1
2018-04-11 2
2018-04-13 3
2018-04-15 4
Get the rows for the first 3 days:
>>> ts.first('3D')
A
2018-04-09 1
2018-04-11 2
Notice that data for the first 3 calendar days was returned, not the
first 3 days observed in the dataset, which is why data for 2018-04-13
was not returned.
| reference/api/pandas.DataFrame.first.html |
How do I read and write tabular data? | How do I read and write tabular data?
This tutorial uses the Titanic data set, stored as CSV. The data
consists of the following data columns:
PassengerId: Id of every passenger.
Survived: Indication whether passenger survived. 0 for no and 1 for yes.
Pclass: One out of the 3 ticket classes: Class 1, Class 2 and Class 3.
Name: Name of passenger.
Sex: Gender of passenger.
Age: Age of passenger in years.
SibSp: Number of siblings or spouses aboard.
Parch: Number of parents or children aboard.
Ticket: Ticket number of passenger.
Fare: The fare paid by the passenger.
Cabin: Cabin number of passenger.
Embarked: Port of embarkation.
| Data used for this tutorial:
Titanic data
This tutorial uses the Titanic data set, stored as CSV. The data
consists of the following data columns:
PassengerId: Id of every passenger.
Survived: Indication whether passenger survived. 0 for no and 1 for yes.
Pclass: One out of the 3 ticket classes: Class 1, Class 2 and Class 3.
Name: Name of passenger.
Sex: Gender of passenger.
Age: Age of passenger in years.
SibSp: Number of siblings or spouses aboard.
Parch: Number of parents or children aboard.
Ticket: Ticket number of passenger.
Fare: The fare paid by the passenger.
Cabin: Cabin number of passenger.
Embarked: Port of embarkation.
How do I read and write tabular data?#
I want to analyze the Titanic passenger data, available as a CSV file.
In [2]: titanic = pd.read_csv("data/titanic.csv")
pandas provides the read_csv() function to read data stored as a csv
file into a pandas DataFrame. pandas supports many different file
formats or data sources out of the box (csv, excel, sql, json, parquet,
…), each of them with the prefix read_*.
Make sure to always have a check on the data after reading in the
data. When displaying a DataFrame, the first and last 5 rows will be
shown by default:
In [3]: titanic
Out[3]:
PassengerId Survived Pclass ... Fare Cabin Embarked
0 1 0 3 ... 7.2500 NaN S
1 2 1 1 ... 71.2833 C85 C
2 3 1 3 ... 7.9250 NaN S
3 4 1 1 ... 53.1000 C123 S
4 5 0 3 ... 8.0500 NaN S
.. ... ... ... ... ... ... ...
886 887 0 2 ... 13.0000 NaN S
887 888 1 1 ... 30.0000 B42 S
888 889 0 3 ... 23.4500 NaN S
889 890 1 1 ... 30.0000 C148 C
890 891 0 3 ... 7.7500 NaN Q
[891 rows x 12 columns]
I want to see the first 8 rows of a pandas DataFrame.
In [4]: titanic.head(8)
Out[4]:
PassengerId Survived Pclass ... Fare Cabin Embarked
0 1 0 3 ... 7.2500 NaN S
1 2 1 1 ... 71.2833 C85 C
2 3 1 3 ... 7.9250 NaN S
3 4 1 1 ... 53.1000 C123 S
4 5 0 3 ... 8.0500 NaN S
5 6 0 3 ... 8.4583 NaN Q
6 7 0 1 ... 51.8625 E46 S
7 8 0 3 ... 21.0750 NaN S
[8 rows x 12 columns]
To see the first N rows of a DataFrame, use the head() method with
the required number of rows (in this case 8) as argument.
Note
Interested in the last N rows instead? pandas also provides a
tail() method. For example, titanic.tail(10) will return the last
10 rows of the DataFrame.
A check on how pandas interpreted each of the column data types can be
done by requesting the pandas dtypes attribute:
In [5]: titanic.dtypes
Out[5]:
PassengerId int64
Survived int64
Pclass int64
Name object
Sex object
Age float64
SibSp int64
Parch int64
Ticket object
Fare float64
Cabin object
Embarked object
dtype: object
For each of the columns, the data type used is listed. The data types
in this DataFrame are integers (int64), floats (float64) and
strings (object).
Note
When asking for the dtypes, no brackets are used!
dtypes is an attribute of a DataFrame and Series. Attributes
of DataFrame or Series do not need brackets. Attributes
represent a characteristic of a DataFrame/Series, whereas a
method (which requires brackets) does something with the
DataFrame/Series as introduced in the first tutorial.
My colleague requested the Titanic data as a spreadsheet.
In [6]: titanic.to_excel("titanic.xlsx", sheet_name="passengers", index=False)
Whereas read_* functions are used to read data to pandas, the
to_* methods are used to store data. The to_excel() method stores
the data as an excel file. In the example here, the sheet_name is
named passengers instead of the default Sheet1. By setting
index=False the row index labels are not saved in the spreadsheet.
The equivalent read function read_excel() will reload the data to a
DataFrame:
In [7]: titanic = pd.read_excel("titanic.xlsx", sheet_name="passengers")
In [8]: titanic.head()
Out[8]:
PassengerId Survived Pclass ... Fare Cabin Embarked
0 1 0 3 ... 7.2500 NaN S
1 2 1 1 ... 71.2833 C85 C
2 3 1 3 ... 7.9250 NaN S
3 4 1 1 ... 53.1000 C123 S
4 5 0 3 ... 8.0500 NaN S
[5 rows x 12 columns]
I’m interested in a technical summary of a DataFrame
In [9]: titanic.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 12 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 PassengerId 891 non-null int64
1 Survived 891 non-null int64
2 Pclass 891 non-null int64
3 Name 891 non-null object
4 Sex 891 non-null object
5 Age 714 non-null float64
6 SibSp 891 non-null int64
7 Parch 891 non-null int64
8 Ticket 891 non-null object
9 Fare 891 non-null float64
10 Cabin 204 non-null object
11 Embarked 889 non-null object
dtypes: float64(2), int64(5), object(5)
memory usage: 83.7+ KB
The method info() provides technical information about a
DataFrame, so let’s explain the output in more detail:
It is indeed a DataFrame.
There are 891 entries, i.e. 891 rows.
Each row has a row label (aka the index) with values ranging from
0 to 890.
The table has 12 columns. Most columns have a value for each of the
rows (all 891 values are non-null). Some columns do have missing
values and fewer than 891 non-null values.
The columns Name, Sex, Cabin and Embarked consist of
textual data (strings, aka object). The other columns are
numerical data, some of them whole numbers (aka integer) and
others real numbers (aka float).
The kind of data (characters, integers, …) in the different columns
is summarized by listing the dtypes.
The approximate amount of RAM used to hold the DataFrame is provided
as well.
REMEMBER
Getting data into pandas from many different file formats or data
sources is supported by read_* functions.
Exporting data out of pandas is provided by different
to_* methods.
The head/tail/info methods and the dtypes attribute
are convenient for a first check.
For a complete overview of the input and output possibilities from and to pandas, see the user guide section about reader and writer functions.
| getting_started/intro_tutorials/02_read_write.html |
pandas.tseries.offsets.MonthEnd.is_quarter_start | `pandas.tseries.offsets.MonthEnd.is_quarter_start`
Return boolean whether a timestamp occurs on the quarter start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
``` | MonthEnd.is_quarter_start()#
Return boolean whether a timestamp occurs on the quarter start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
| reference/api/pandas.tseries.offsets.MonthEnd.is_quarter_start.html |
pandas.Series.dt.ceil | `pandas.Series.dt.ceil`
Perform ceil operation on the data to the specified freq.
The frequency level to ceil the index to. Must be a fixed
frequency like ‘S’ (second) not ‘ME’ (month end). See
frequency aliases for
a list of possible freq values.
```
>>> rng = pd.date_range('1/1/2018 11:59:00', periods=3, freq='min')
>>> rng
DatetimeIndex(['2018-01-01 11:59:00', '2018-01-01 12:00:00',
'2018-01-01 12:01:00'],
dtype='datetime64[ns]', freq='T')
>>> rng.ceil('H')
DatetimeIndex(['2018-01-01 12:00:00', '2018-01-01 12:00:00',
'2018-01-01 13:00:00'],
dtype='datetime64[ns]', freq=None)
``` | Series.dt.ceil(*args, **kwargs)[source]#
Perform ceil operation on the data to the specified freq.
Parameters
freq : str or Offset
The frequency level to ceil the index to. Must be a fixed frequency like ‘S’ (second), not ‘ME’ (month end). See frequency aliases for a list of possible freq values.
ambiguous : ‘infer’, bool-ndarray, ‘NaT’, default ‘raise’
Only relevant for DatetimeIndex:
‘infer’ will attempt to infer fall dst-transition hours based on order
bool-ndarray where True signifies a DST time, False designates a non-DST time (note that this flag is only applicable for ambiguous times)
‘NaT’ will return NaT where there are ambiguous times
‘raise’ will raise an AmbiguousTimeError if there are ambiguous times.
nonexistent : ‘shift_forward’, ‘shift_backward’, ‘NaT’, timedelta, default ‘raise’
A nonexistent time does not exist in a particular timezone where clocks moved forward due to DST.
‘shift_forward’ will shift the nonexistent time forward to the closest existing time
‘shift_backward’ will shift the nonexistent time backward to the closest existing time
‘NaT’ will return NaT where there are nonexistent times
timedelta objects will shift nonexistent times by the timedelta
‘raise’ will raise a NonExistentTimeError if there are nonexistent times.
Returns
DatetimeIndex, TimedeltaIndex, or Series
Index of the same type for a DatetimeIndex or TimedeltaIndex, or a Series with the same index for a Series.
Raises
ValueError : if the freq cannot be converted.
Notes
If the timestamps have a timezone, ceiling will take place relative to the
local (“wall”) time and re-localized to the same timezone. When ceiling
near daylight savings time, use nonexistent and ambiguous to
control the re-localization behavior.
Examples
DatetimeIndex
>>> rng = pd.date_range('1/1/2018 11:59:00', periods=3, freq='min')
>>> rng
DatetimeIndex(['2018-01-01 11:59:00', '2018-01-01 12:00:00',
'2018-01-01 12:01:00'],
dtype='datetime64[ns]', freq='T')
>>> rng.ceil('H')
DatetimeIndex(['2018-01-01 12:00:00', '2018-01-01 12:00:00',
'2018-01-01 13:00:00'],
dtype='datetime64[ns]', freq=None)
Series
>>> pd.Series(rng).dt.ceil("H")
0 2018-01-01 12:00:00
1 2018-01-01 12:00:00
2 2018-01-01 13:00:00
dtype: datetime64[ns]
When rounding near a daylight savings time transition, use ambiguous or
nonexistent to control how the timestamp should be re-localized.
>>> rng_tz = pd.DatetimeIndex(["2021-10-31 01:30:00"], tz="Europe/Amsterdam")
>>> rng_tz.ceil("H", ambiguous=False)
DatetimeIndex(['2021-10-31 02:00:00+01:00'],
dtype='datetime64[ns, Europe/Amsterdam]', freq=None)
>>> rng_tz.ceil("H", ambiguous=True)
DatetimeIndex(['2021-10-31 02:00:00+02:00'],
dtype='datetime64[ns, Europe/Amsterdam]', freq=None)
| reference/api/pandas.Series.dt.ceil.html |
pandas.tseries.offsets.BusinessDay.rule_code | pandas.tseries.offsets.BusinessDay.rule_code | BusinessDay.rule_code#
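The docstring for this attribute is empty; as an illustrative note, rule_code holds the frequency alias of the offset:
>>> pd.offsets.BusinessDay().rule_code  # 'B' is the business-day alias
'B'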
| reference/api/pandas.tseries.offsets.BusinessDay.rule_code.html |
pandas.core.groupby.GroupBy.mean | `pandas.core.groupby.GroupBy.mean`
Compute mean of groups, excluding missing values.
```
>>> df = pd.DataFrame({'A': [1, 1, 2, 1, 2],
... 'B': [np.nan, 2, 3, 4, 5],
... 'C': [1, 2, 1, 1, 2]}, columns=['A', 'B', 'C'])
``` | final GroupBy.mean(numeric_only=_NoDefault.no_default, engine='cython', engine_kwargs=None)[source]#
Compute mean of groups, excluding missing values.
Parameters
numeric_only : bool, default True
Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data.
engine : str, default None
'cython' : Runs the operation through C-extensions from cython.
'numba' : Runs the operation through JIT compiled code from numba.
None : Defaults to 'cython' or the global setting compute.use_numba.
New in version 1.4.0.
engine_kwargs : dict, default None
For 'cython' engine, there are no accepted engine_kwargs.
For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False}.
New in version 1.4.0.
Returns
pandas.Series or pandas.DataFrame
See also
Series.groupby : Apply a function groupby to a Series.
DataFrame.groupby : Apply a function groupby to each row or column of a DataFrame.
Examples
>>> df = pd.DataFrame({'A': [1, 1, 2, 1, 2],
... 'B': [np.nan, 2, 3, 4, 5],
... 'C': [1, 2, 1, 1, 2]}, columns=['A', 'B', 'C'])
Groupby one column and return the mean of the remaining columns in
each group.
>>> df.groupby('A').mean()
B C
A
1 3.0 1.333333
2 4.0 1.500000
Groupby two columns and return the mean of the remaining column.
>>> df.groupby(['A', 'B']).mean()
C
A B
1 2.0 2.0
4.0 1.0
2 3.0 1.0
5.0 2.0
Groupby one column and return the mean of only particular column in
the group.
>>> df.groupby('A')['B'].mean()
A
1 3.0
2 4.0
Name: B, dtype: float64
| reference/api/pandas.core.groupby.GroupBy.mean.html |
pandas.core.groupby.GroupBy.prod | `pandas.core.groupby.GroupBy.prod`
Compute prod of group values. | final GroupBy.prod(numeric_only=_NoDefault.no_default, min_count=0)[source]#
Compute prod of group values.
Parameters
numeric_only : bool, default True
Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data.
min_count : int, default 0
The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA.
Returns
Series or DataFrame : Computed prod of values within each group.
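The docstring ships without an example; a minimal illustrative sketch (data invented here):
>>> df = pd.DataFrame({'g': ['a', 'a', 'b'], 'x': [2, 3, 4]})  # illustrative data
>>> df.groupby('g').prod()
   x
g
a  6
b  4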
| reference/api/pandas.core.groupby.GroupBy.prod.html |
pandas.Series.mask | `pandas.Series.mask`
Replace values where the condition is True.
Where cond is False, keep the original value. Where
True, replace with corresponding value from other.
If cond is callable, it is computed on the Series/DataFrame and
should return boolean Series/DataFrame or array. The callable must
not change input Series/DataFrame (though pandas doesn’t check it).
```
>>> s = pd.Series(range(5))
>>> s.where(s > 0)
0 NaN
1 1.0
2 2.0
3 3.0
4 4.0
dtype: float64
>>> s.mask(s > 0)
0 0.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
``` | Series.mask(cond, other=nan, *, inplace=False, axis=None, level=None, errors=_NoDefault.no_default, try_cast=_NoDefault.no_default)[source]#
Replace values where the condition is True.
Parameters
cond : bool Series/DataFrame, array-like, or callable
Where cond is False, keep the original value. Where True, replace with corresponding value from other. If cond is callable, it is computed on the Series/DataFrame and should return boolean Series/DataFrame or array. The callable must not change input Series/DataFrame (though pandas doesn’t check it).
other : scalar, Series/DataFrame, or callable
Entries where cond is True are replaced with corresponding value from other. If other is callable, it is computed on the Series/DataFrame and should return scalar or Series/DataFrame. The callable must not change input Series/DataFrame (though pandas doesn’t check it).
inplace : bool, default False
Whether to perform the operation in place on the data.
axis : int, default None
Alignment axis if needed. For Series this parameter is unused and defaults to 0.
level : int, default None
Alignment level if needed.
errors : str, {‘raise’, ‘ignore’}, default ‘raise’
Note that currently this parameter won’t affect the results and will always coerce to a suitable dtype.
‘raise’ : allow exceptions to be raised.
‘ignore’ : suppress exceptions. On error return original object.
Deprecated since version 1.5.0: This argument had no effect.
try_cast : bool, default None
Try to cast the result back to the input type (if possible).
Deprecated since version 1.3.0: Manually cast back if necessary.
Returns
Same type as caller or None if inplace=True.
See also
DataFrame.where() : Return an object of same shape as self.
Notes
The mask method is an application of the if-then idiom. For each
element in the calling DataFrame, if cond is False the
element is used; otherwise the corresponding element from the DataFrame
other is used. If the axis of other does not align with axis of
cond Series/DataFrame, the misaligned index positions will be filled with
True.
The signature for DataFrame.where() differs from
numpy.where(). Roughly df1.where(m, df2) is equivalent to
np.where(m, df1, df2).
For further details and examples see the mask documentation in
indexing.
The dtype of the object takes precedence. The fill value is cast to
the object’s dtype, if this can be done losslessly.
Examples
>>> s = pd.Series(range(5))
>>> s.where(s > 0)
0 NaN
1 1.0
2 2.0
3 3.0
4 4.0
dtype: float64
>>> s.mask(s > 0)
0 0.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
>>> s = pd.Series(range(5))
>>> t = pd.Series([True, False])
>>> s.where(t, 99)
0 0
1 99
2 99
3 99
4 99
dtype: int64
>>> s.mask(t, 99)
0 99
1 1
2 99
3 99
4 99
dtype: int64
>>> s.where(s > 1, 10)
0 10
1 10
2 2
3 3
4 4
dtype: int64
>>> s.mask(s > 1, 10)
0 0
1 1
2 10
3 10
4 10
dtype: int64
>>> df = pd.DataFrame(np.arange(10).reshape(-1, 2), columns=['A', 'B'])
>>> df
A B
0 0 1
1 2 3
2 4 5
3 6 7
4 8 9
>>> m = df % 3 == 0
>>> df.where(m, -df)
A B
0 0 -1
1 -2 3
2 -4 -5
3 6 -7
4 -8 9
>>> df.where(m, -df) == np.where(m, df, -df)
A B
0 True True
1 True True
2 True True
3 True True
4 True True
>>> df.where(m, -df) == df.mask(~m, -df)
A B
0 True True
1 True True
2 True True
3 True True
4 True True
| reference/api/pandas.Series.mask.html |
pandas.Timestamp.dst | `pandas.Timestamp.dst`
Return self.tzinfo.dst(self). | Timestamp.dst()#
Return self.tzinfo.dst(self).
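The docstring ships without an example; a hedged sketch (the result depends on the zone's DST rules, so the timestamp here is illustrative):
>>> ts = pd.Timestamp('2020-07-01', tz='Europe/Berlin')  # a date inside DST
>>> ts.dst()
datetime.timedelta(seconds=3600)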
| reference/api/pandas.Timestamp.dst.html |
pandas.Timestamp.to_pydatetime | `pandas.Timestamp.to_pydatetime`
Convert a Timestamp object to a native Python datetime object.
If warn=True, issue a warning if nanoseconds is nonzero.
```
>>> ts = pd.Timestamp('2020-03-14T15:32:52.192548')
>>> ts.to_pydatetime()
datetime.datetime(2020, 3, 14, 15, 32, 52, 192548)
``` | Timestamp.to_pydatetime()#
Convert a Timestamp object to a native Python datetime object.
If warn=True, issue a warning if nanoseconds is nonzero.
Examples
>>> ts = pd.Timestamp('2020-03-14T15:32:52.192548')
>>> ts.to_pydatetime()
datetime.datetime(2020, 3, 14, 15, 32, 52, 192548)
Analogous for pd.NaT:
>>> pd.NaT.to_pydatetime()
NaT
| reference/api/pandas.Timestamp.to_pydatetime.html |
pandas.tseries.offsets.BusinessHour.rollforward | `pandas.tseries.offsets.BusinessHour.rollforward`
Roll provided date forward to next offset only if not on offset. | BusinessHour.rollforward(other)#
Roll provided date forward to next offset only if not on offset.
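The docstring ships without an example; an illustrative sketch (2022-08-06 is a Saturday, hence off-offset for the default 09:00-17:00 business hours):
>>> pd.offsets.BusinessHour().rollforward(pd.Timestamp('2022-08-06 12:00'))
Timestamp('2022-08-08 09:00:00')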
| reference/api/pandas.tseries.offsets.BusinessHour.rollforward.html |
pandas.DatetimeIndex.is_year_start | `pandas.DatetimeIndex.is_year_start`
Indicate whether the date is the first day of a year.
The same type as the original data with boolean values. Series will
have the same name and index. DatetimeIndex will have the same
name.
```
>>> dates = pd.Series(pd.date_range("2017-12-30", periods=3))
>>> dates
0 2017-12-30
1 2017-12-31
2 2018-01-01
dtype: datetime64[ns]
``` | property DatetimeIndex.is_year_start[source]#
Indicate whether the date is the first day of a year.
Returns
Series or DatetimeIndex
The same type as the original data with boolean values. Series will have the same name and index. DatetimeIndex will have the same name.
See also
is_year_end : Similar property indicating the last day of the year.
Examples
This method is available on Series with datetime values under
the .dt accessor, and directly on DatetimeIndex.
>>> dates = pd.Series(pd.date_range("2017-12-30", periods=3))
>>> dates
0 2017-12-30
1 2017-12-31
2 2018-01-01
dtype: datetime64[ns]
>>> dates.dt.is_year_start
0 False
1 False
2 True
dtype: bool
>>> idx = pd.date_range("2017-12-30", periods=3)
>>> idx
DatetimeIndex(['2017-12-30', '2017-12-31', '2018-01-01'],
dtype='datetime64[ns]', freq='D')
>>> idx.is_year_start
array([False, False, True])
| reference/api/pandas.DatetimeIndex.is_year_start.html |
pandas.DataFrame.at | `pandas.DataFrame.at`
Access a single value for a row/column label pair.
```
>>> df = pd.DataFrame([[0, 2, 3], [0, 4, 1], [10, 20, 30]],
... index=[4, 5, 6], columns=['A', 'B', 'C'])
>>> df
A B C
4 0 2 3
5 0 4 1
6 10 20 30
``` | property DataFrame.at[source]#
Access a single value for a row/column label pair.
Similar to loc, in that both provide label-based lookups. Use
at if you only need to get or set a single value in a DataFrame
or Series.
Raises
KeyError
If getting a value and ‘label’ does not exist in a DataFrame or Series.
ValueError
If row/column label pair is not a tuple or if any label from the pair is not a scalar for DataFrame.
If label is list-like (excluding NamedTuple) for Series.
See also
DataFrame.at : Access a single value for a row/column pair by label.
DataFrame.iat : Access a single value for a row/column pair by integer position.
DataFrame.loc : Access a group of rows and columns by label(s).
DataFrame.iloc : Access a group of rows and columns by integer position(s).
Series.at : Access a single value by label.
Series.iat : Access a single value by integer position.
Series.loc : Access a group of rows by label(s).
Series.iloc : Access a group of rows by integer position(s).
Notes
See Fast scalar value getting and setting
for more details.
Examples
>>> df = pd.DataFrame([[0, 2, 3], [0, 4, 1], [10, 20, 30]],
... index=[4, 5, 6], columns=['A', 'B', 'C'])
>>> df
A B C
4 0 2 3
5 0 4 1
6 10 20 30
Get value at specified row/column pair
>>> df.at[4, 'B']
2
Set value at specified row/column pair
>>> df.at[4, 'B'] = 10
>>> df.at[4, 'B']
10
Get value within a Series
>>> df.loc[5].at['B']
4
| reference/api/pandas.DataFrame.at.html |
pandas.core.groupby.DataFrameGroupBy.resample | `pandas.core.groupby.DataFrameGroupBy.resample`
Provide resampling when using a TimeGrouper.
Given a grouper, the function resamples it according to a frequency string.
```
>>> idx = pd.date_range('1/1/2000', periods=4, freq='T')
>>> df = pd.DataFrame(data=4 * [range(2)],
... index=idx,
... columns=['a', 'b'])
>>> df.iloc[2, 0] = 5
>>> df
a b
2000-01-01 00:00:00 0 1
2000-01-01 00:01:00 0 1
2000-01-01 00:02:00 5 1
2000-01-01 00:03:00 0 1
``` | DataFrameGroupBy.resample(rule, *args, **kwargs)[source]#
Provide resampling when using a TimeGrouper.
Given a grouper, the function resamples it according to a frequency string.
See the frequency aliases
documentation for more details.
Parameters
rule : str or DateOffset
The offset string or object representing target grouper conversion.
*args, **kwargs
Possible arguments are how, fill_method, limit, kind and on, and other arguments of TimeGrouper.
Returns
Grouper : Return a new grouper with our resampler appended.
See also
Grouper : Specify a frequency to resample with when grouping by a key.
DatetimeIndex.resample : Frequency conversion and resampling of time series.
Examples
>>> idx = pd.date_range('1/1/2000', periods=4, freq='T')
>>> df = pd.DataFrame(data=4 * [range(2)],
... index=idx,
... columns=['a', 'b'])
>>> df.iloc[2, 0] = 5
>>> df
a b
2000-01-01 00:00:00 0 1
2000-01-01 00:01:00 0 1
2000-01-01 00:02:00 5 1
2000-01-01 00:03:00 0 1
Downsample the DataFrame into 3 minute bins and sum the values of
the timestamps falling into a bin.
>>> df.groupby('a').resample('3T').sum()
a b
a
0 2000-01-01 00:00:00 0 2
2000-01-01 00:03:00 0 1
5 2000-01-01 00:00:00 5 1
Upsample the series into 30 second bins.
>>> df.groupby('a').resample('30S').sum()
a b
a
0 2000-01-01 00:00:00 0 1
2000-01-01 00:00:30 0 0
2000-01-01 00:01:00 0 1
2000-01-01 00:01:30 0 0
2000-01-01 00:02:00 0 0
2000-01-01 00:02:30 0 0
2000-01-01 00:03:00 0 1
5 2000-01-01 00:02:00 5 1
Resample by month. Values are assigned to the month of the period.
>>> df.groupby('a').resample('M').sum()
a b
a
0 2000-01-31 0 3
5 2000-01-31 5 1
Downsample the series into 3 minute bins as above, but close the right
side of the bin interval.
>>> df.groupby('a').resample('3T', closed='right').sum()
a b
a
0 1999-12-31 23:57:00 0 1
2000-01-01 00:00:00 0 2
5 2000-01-01 00:00:00 5 1
Downsample the series into 3 minute bins and close the right side of
the bin interval, but label each bin using the right edge instead of
the left.
>>> df.groupby('a').resample('3T', closed='right', label='right').sum()
a b
a
0 2000-01-01 00:00:00 0 1
2000-01-01 00:03:00 0 2
5 2000-01-01 00:03:00 5 1
| reference/api/pandas.core.groupby.DataFrameGroupBy.resample.html |
General functions | Data manipulations#
melt(frame[, id_vars, value_vars, var_name, ...])
Unpivot a DataFrame from wide to long format, optionally leaving identifiers set.
pivot(data, *[, index, columns, values])
Return reshaped DataFrame organized by given index / column values.
pivot_table(data[, values, index, columns, ...])
Create a spreadsheet-style pivot table as a DataFrame.
crosstab(index, columns[, values, rownames, ...])
Compute a simple cross tabulation of two (or more) factors.
cut(x, bins[, right, labels, retbins, ...])
Bin values into discrete intervals.
qcut(x, q[, labels, retbins, precision, ...])
Quantile-based discretization function.
merge(left, right[, how, on, left_on, ...])
Merge DataFrame or named Series objects with a database-style join.
merge_ordered(left, right[, on, left_on, ...])
Perform a merge for ordered data with optional filling/interpolation.
merge_asof(left, right[, on, left_on, ...])
Perform a merge by key distance.
concat(objs, *[, axis, join, ignore_index, ...])
Concatenate pandas objects along a particular axis.
get_dummies(data[, prefix, prefix_sep, ...])
Convert categorical variable into dummy/indicator variables.
from_dummies(data[, sep, default_category])
Create a categorical DataFrame from a DataFrame of dummy variables.
factorize(values[, sort, na_sentinel, ...])
Encode the object as an enumerated type or categorical variable.
unique(values)
Return unique values based on a hash table.
wide_to_long(df, stubnames, i, j[, sep, suffix])
Unpivot a DataFrame from wide to long format.
Top-level missing data#
isna(obj)
Detect missing values for an array-like object.
isnull(obj)
Detect missing values for an array-like object.
notna(obj)
Detect non-missing values for an array-like object.
notnull(obj)
Detect non-missing values for an array-like object.
Top-level dealing with numeric data#
to_numeric(arg[, errors, downcast])
Convert argument to a numeric type.
Top-level dealing with datetimelike data#
to_datetime(arg[, errors, dayfirst, ...])
Convert argument to datetime.
to_timedelta(arg[, unit, errors])
Convert argument to timedelta.
date_range([start, end, periods, freq, tz, ...])
Return a fixed frequency DatetimeIndex.
bdate_range([start, end, periods, freq, tz, ...])
Return a fixed frequency DatetimeIndex with business day as the default.
period_range([start, end, periods, freq, name])
Return a fixed frequency PeriodIndex.
timedelta_range([start, end, periods, freq, ...])
Return a fixed frequency TimedeltaIndex with day as the default.
infer_freq(index[, warn])
Infer the most likely frequency given the input index.
Top-level dealing with Interval data#
interval_range([start, end, periods, freq, ...])
Return a fixed frequency IntervalIndex.
Top-level evaluation#
eval(expr[, parser, engine, truediv, ...])
Evaluate a Python expression as a string using various backends.
Hashing#
util.hash_array(vals[, encoding, hash_key, ...])
Given a 1d array, return an array of deterministic integers.
util.hash_pandas_object(obj[, index, ...])
Return a data hash of the Index/Series/DataFrame.
Importing from other DataFrame libraries#
api.interchange.from_dataframe(df[, allow_copy])
Build a pd.DataFrame from any DataFrame supporting the interchange protocol.
| reference/general_functions.html |
pandas.tseries.offsets.Micro.name | `pandas.tseries.offsets.Micro.name`
Return a string representing the base frequency.
```
>>> pd.offsets.Hour().name
'H'
``` | Micro.name#
Return a string representing the base frequency.
Examples
>>> pd.offsets.Hour().name
'H'
>>> pd.offsets.Hour(5).name
'H'
| reference/api/pandas.tseries.offsets.Micro.name.html |
pandas.Series.compare | `pandas.Series.compare`
Compare to another Series and show the differences.
New in version 1.1.0.
```
>>> s1 = pd.Series(["a", "b", "c", "d", "e"])
>>> s2 = pd.Series(["a", "a", "c", "b", "e"])
``` | Series.compare(other, align_axis=1, keep_shape=False, keep_equal=False, result_names=('self', 'other'))[source]#
Compare to another Series and show the differences.
New in version 1.1.0.
Parameters
other : Series
Object to compare with.
align_axis : {0 or ‘index’, 1 or ‘columns’}, default 1
Determine which axis to align the comparison on.
0, or ‘index’ : Resulting differences are stacked vertically with rows drawn alternately from self and other.
1, or ‘columns’ : Resulting differences are aligned horizontally with columns drawn alternately from self and other.
keep_shape : bool, default False
If true, all rows and columns are kept. Otherwise, only the ones with different values are kept.
keep_equal : bool, default False
If true, the result keeps values that are equal. Otherwise, equal values are shown as NaNs.
result_names : tuple, default (‘self’, ‘other’)
Set the names of self and other in the comparison.
New in version 1.5.0.
Returns
Series or DataFrame
If axis is 0 or ‘index’ the result will be a Series. The resulting index will be a MultiIndex with ‘self’ and ‘other’ stacked alternately at the inner level.
If axis is 1 or ‘columns’ the result will be a DataFrame. It will have two columns namely ‘self’ and ‘other’.
See also
DataFrame.compare : Compare with another DataFrame and show differences.
Notes
Matching NaNs will not appear as a difference.
Examples
>>> s1 = pd.Series(["a", "b", "c", "d", "e"])
>>> s2 = pd.Series(["a", "a", "c", "b", "e"])
Align the differences on columns
>>> s1.compare(s2)
self other
1 b a
3 d b
Stack the differences on indices
>>> s1.compare(s2, align_axis=0)
1 self b
other a
3 self d
other b
dtype: object
Keep all original rows
>>> s1.compare(s2, keep_shape=True)
self other
0 NaN NaN
1 b a
2 NaN NaN
3 d b
4 NaN NaN
Keep all original rows and also all original values
>>> s1.compare(s2, keep_shape=True, keep_equal=True)
self other
0 a a
1 b a
2 c c
3 d b
4 e e
| reference/api/pandas.Series.compare.html |
pandas.Series.le | `pandas.Series.le`
Return Less than or equal to of series and other, element-wise (binary operator le).
```
>>> a = pd.Series([1, 1, 1, np.nan, 1], index=['a', 'b', 'c', 'd', 'e'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
e 1.0
dtype: float64
>>> b = pd.Series([0, 1, 2, np.nan, 1], index=['a', 'b', 'c', 'd', 'f'])
>>> b
a 0.0
b 1.0
c 2.0
d NaN
f 1.0
dtype: float64
>>> a.le(b, fill_value=0)
a False
b True
c True
d False
e False
f True
dtype: bool
``` | Series.le(other, level=None, fill_value=None, axis=0)[source]#
Return Less than or equal to of series and other, element-wise (binary operator le).
Equivalent to series <= other, but with support to substitute a fill_value for
missing data in either one of the inputs.
Parameters
other : Series or scalar value
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level.
fill_value : None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result of filling (at that location) will be missing.
axis : {0 or ‘index’}
Unused. Parameter needed for compatibility with DataFrame.
Returns
Series : The result of the operation.
Examples
>>> a = pd.Series([1, 1, 1, np.nan, 1], index=['a', 'b', 'c', 'd', 'e'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
e 1.0
dtype: float64
>>> b = pd.Series([0, 1, 2, np.nan, 1], index=['a', 'b', 'c', 'd', 'f'])
>>> b
a 0.0
b 1.0
c 2.0
d NaN
f 1.0
dtype: float64
>>> a.le(b, fill_value=0)
a False
b True
c True
d False
e False
f True
dtype: bool
| reference/api/pandas.Series.le.html |
pandas.DataFrame.ge | `pandas.DataFrame.ge`
Get Greater than or equal to of dataframe and other, element-wise (binary operator ge).
```
>>> df = pd.DataFrame({'cost': [250, 150, 100],
... 'revenue': [100, 250, 300]},
... index=['A', 'B', 'C'])
>>> df
cost revenue
A 250 100
B 150 250
C 100 300
``` | DataFrame.ge(other, axis='columns', level=None)[source]#
Get Greater than or equal to of dataframe and other, element-wise (binary operator ge).
Among flexible wrappers (eq, ne, le, lt, ge, gt) to comparison
operators.
Equivalent to ==, !=, <=, <, >=, > with support to choose axis
(rows or columns) and level for comparison.
Parameters
other : scalar, sequence, Series, or DataFrame
Any single or multiple element data structure, or list-like object.
axis : {0 or ‘index’, 1 or ‘columns’}, default ‘columns’
Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’).
level : int or label
Broadcast across a level, matching Index values on the passed MultiIndex level.
Returns
DataFrame of bool : Result of the comparison.
See also
DataFrame.eq : Compare DataFrames for equality elementwise.
DataFrame.ne : Compare DataFrames for inequality elementwise.
DataFrame.le : Compare DataFrames for less than inequality or equality elementwise.
DataFrame.lt : Compare DataFrames for strictly less than inequality elementwise.
DataFrame.ge : Compare DataFrames for greater than inequality or equality elementwise.
DataFrame.gt : Compare DataFrames for strictly greater than inequality elementwise.
Notes
Mismatched indices will be unioned together.
NaN values are considered different (i.e. NaN != NaN).
Examples
>>> df = pd.DataFrame({'cost': [250, 150, 100],
... 'revenue': [100, 250, 300]},
... index=['A', 'B', 'C'])
>>> df
cost revenue
A 250 100
B 150 250
C 100 300
Comparison with a scalar, using either the operator or method:
>>> df == 100
cost revenue
A False True
B False False
C True False
>>> df.eq(100)
cost revenue
A False True
B False False
C True False
When other is a Series, the columns of a DataFrame are aligned
with the index of other and broadcast:
>>> df != pd.Series([100, 250], index=["cost", "revenue"])
cost revenue
A True True
B True False
C False True
Use the method to control the broadcast axis:
>>> df.ne(pd.Series([100, 300], index=["A", "D"]), axis='index')
cost revenue
A True False
B True True
C True True
D True True
When comparing to an arbitrary sequence, the number of columns must
match the number of elements in other:
>>> df == [250, 100]
cost revenue
A True True
B False False
C False False
Use the method to control the axis:
>>> df.eq([250, 250, 100], axis='index')
cost revenue
A True False
B False True
C True False
Compare to a DataFrame of different shape.
>>> other = pd.DataFrame({'revenue': [300, 250, 100, 150]},
... index=['A', 'B', 'C', 'D'])
>>> other
revenue
A 300
B 250
C 100
D 150
>>> df.gt(other)
cost revenue
A False False
B False False
C False True
D False False
Compare to a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'cost': [250, 150, 100, 150, 300, 220],
... 'revenue': [100, 250, 300, 200, 175, 225]},
... index=[['Q1', 'Q1', 'Q1', 'Q2', 'Q2', 'Q2'],
... ['A', 'B', 'C', 'A', 'B', 'C']])
>>> df_multindex
cost revenue
Q1 A 250 100
B 150 250
C 100 300
Q2 A 150 200
B 300 175
C 220 225
>>> df.le(df_multindex, level=1)
cost revenue
Q1 A True True
B True True
C True True
Q2 A False True
B True False
C True False
| reference/api/pandas.DataFrame.ge.html |
pandas.Series.var | `pandas.Series.var`
Return unbiased variance over requested axis.
Normalized by N-1 by default. This can be changed using the ddof argument.
```
>>> df = pd.DataFrame({'person_id': [0, 1, 2, 3],
... 'age': [21, 25, 62, 43],
... 'height': [1.61, 1.87, 1.49, 2.01]}
... ).set_index('person_id')
>>> df
age height
person_id
0 21 1.61
1 25 1.87
2 62 1.49
3 43 2.01
``` | Series.var(axis=None, skipna=True, level=None, ddof=1, numeric_only=None, **kwargs)[source]#
Return unbiased variance over requested axis.
Normalized by N-1 by default. This can be changed using the ddof argument.
Parameters
axis : {index (0)}
For Series this parameter is unused and defaults to 0.
skipna : bool, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA.
level : int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a scalar.
Deprecated since version 1.3.0: The level keyword is deprecated. Use groupby instead.
ddof : int, default 1
Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements.
numeric_only : bool, default None
Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.
Deprecated since version 1.5.0: Specifying numeric_only=None is deprecated. The default value will be False in a future version of pandas.
Returns
scalar or Series (if level specified)
Examples
>>> df = pd.DataFrame({'person_id': [0, 1, 2, 3],
... 'age': [21, 25, 62, 43],
... 'height': [1.61, 1.87, 1.49, 2.01]}
... ).set_index('person_id')
>>> df
age height
person_id
0 21 1.61
1 25 1.87
2 62 1.49
3 43 2.01
>>> df.var()
age 352.916667
height 0.056367
Alternatively, ddof=0 can be set to normalize by N instead of N-1:
>>> df.var(ddof=0)
age 264.687500
height 0.042275
| reference/api/pandas.Series.var.html |
pandas.Interval.open_right | `pandas.Interval.open_right`
Check if the interval is open on the right side.
For the meaning of closed and open see Interval. | Interval.open_right#
Check if the interval is open on the right side.
For the meaning of closed and open see Interval.
Returns
bool : True if the Interval is not closed on the right side.
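The docstring ships without an example; an illustrative sketch:
>>> iv = pd.Interval(0, 5, closed='left')  # closed on the left, open on the right
>>> iv.open_right
True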
| reference/api/pandas.Interval.open_right.html |
pandas.Float64Index | `pandas.Float64Index`
Immutable sequence used for indexing and alignment. | class pandas.Float64Index(data=None, dtype=None, copy=False, name=None)[source]#
Immutable sequence used for indexing and alignment.
Deprecated since version 1.4.0: In pandas v2.0 Float64Index will be removed and NumericIndex used instead.
Float64Index will remain fully functional for the duration of pandas 1.x.
The basic object storing axis labels for all pandas objects.
Float64Index is a special case of Index with purely float labels.
Parameters
data : array-like (1-dimensional)
dtype : NumPy dtype (default: float64)
copy : bool
Make a copy of input ndarray.
name : object
Name to be stored in the index.
See also
Index : The base pandas Index type.
NumericIndex : Index of numpy int/uint/float data.
Notes
An Index instance can only contain hashable objects.
Attributes
None
Methods
None
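The docstring ships without an example; a minimal construction sketch (keep in mind the class is deprecated since 1.4.0):
>>> pd.Float64Index([1.5, 2.5, 3.5])
Float64Index([1.5, 2.5, 3.5], dtype='float64')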
| reference/api/pandas.Float64Index.html |
pandas.tseries.offsets.FY5253Quarter.get_weeks | pandas.tseries.offsets.FY5253Quarter.get_weeks | FY5253Quarter.get_weeks()#
| reference/api/pandas.tseries.offsets.FY5253Quarter.get_weeks.html |
pandas.core.groupby.GroupBy.cummax | `pandas.core.groupby.GroupBy.cummax`
Cumulative max for each group.
See also | final GroupBy.cummax(axis=0, numeric_only=False, **kwargs)[source]#
Cumulative max for each group.
Returns
Series or DataFrame
See also
Series.groupby : Apply a function groupby to a Series.
DataFrame.groupby : Apply a function groupby to each row or column of a DataFrame.
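The docstring ships without an example; a minimal illustrative sketch (data invented here):
>>> df = pd.DataFrame({'g': ['a', 'a', 'b', 'b'], 'x': [1, 3, 5, 2]})  # illustrative data
>>> df.groupby('g').cummax()
   x
0  1
1  3
2  5
3  5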
| reference/api/pandas.core.groupby.GroupBy.cummax.html |
pandas.tseries.offsets.Hour.freqstr | `pandas.tseries.offsets.Hour.freqstr`
Return a string representing the frequency.
```
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
``` | Hour.freqstr#
Return a string representing the frequency.
Examples
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
>>> pd.offsets.BusinessHour(2).freqstr
'2BH'
>>> pd.offsets.Nano().freqstr
'N'
>>> pd.offsets.Nano(-3).freqstr
'-3N'
| reference/api/pandas.tseries.offsets.Hour.freqstr.html |
pandas.Timestamp.fromtimestamp | `pandas.Timestamp.fromtimestamp`
Transform timestamp[, tz] to tz’s local time from POSIX timestamp.
```
>>> pd.Timestamp.fromtimestamp(1584199972)
Timestamp('2020-03-14 15:32:52')
``` | classmethod Timestamp.fromtimestamp(ts)#
Transform timestamp[, tz] to tz’s local time from POSIX timestamp.
Examples
>>> pd.Timestamp.fromtimestamp(1584199972)
Timestamp('2020-03-14 15:32:52')
Note that the output may change depending on your local time.
| reference/api/pandas.Timestamp.fromtimestamp.html |
pandas.tseries.offsets.CustomBusinessDay.normalize | pandas.tseries.offsets.CustomBusinessDay.normalize | CustomBusinessDay.normalize#
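The docstring for this attribute is empty; as an illustrative note, normalize reports whether the offset snaps timestamps to midnight:
>>> pd.offsets.CustomBusinessDay(normalize=True).normalize  # echoes the constructor flag
True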
| reference/api/pandas.tseries.offsets.CustomBusinessDay.normalize.html |
pandas.arrays.IntervalArray.to_tuples | `pandas.arrays.IntervalArray.to_tuples`
Return an ndarray of tuples of the form (left, right). | IntervalArray.to_tuples(na_tuple=True)[source]#
Return an ndarray of tuples of the form (left, right).
Parameters
na_tuple : bool, default True
Returns NA as a tuple if True, (nan, nan), or just as the NA value itself if False, nan.
Returns
tuples : ndarray
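The docstring ships without an example; an illustrative sketch:
>>> arr = pd.arrays.IntervalArray.from_breaks([0, 1, 2])  # two adjacent intervals
>>> arr.to_tuples()
array([(0, 1), (1, 2)], dtype=object)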
| reference/api/pandas.arrays.IntervalArray.to_tuples.html |
Plotting | Plotting | The following functions are contained in the pandas.plotting module.
andrews_curves(frame, class_column[, ax, ...])
Generate a matplotlib plot for visualising clusters of multivariate data.
autocorrelation_plot(series[, ax])
Autocorrelation plot for time series.
bootstrap_plot(series[, fig, size, samples])
Bootstrap plot on mean, median and mid-range statistics.
boxplot(data[, column, by, ax, fontsize, ...])
Make a box plot from DataFrame columns.
deregister_matplotlib_converters()
Remove pandas formatters and converters.
lag_plot(series[, lag, ax])
Lag plot for time series.
parallel_coordinates(frame, class_column[, ...])
Parallel coordinates plotting.
plot_params
Stores pandas plotting options.
radviz(frame, class_column[, ax, color, ...])
Plot a multidimensional dataset in 2D.
register_matplotlib_converters()
Register pandas formatters and converters with matplotlib.
scatter_matrix(frame[, alpha, figsize, ax, ...])
Draw a matrix of scatter plots.
table(ax, data[, rowLabels, colLabels])
Helper function to convert DataFrame and Series to matplotlib.table.
| reference/plotting.html |
pandas.DataFrame.set_index | `pandas.DataFrame.set_index`
Set the DataFrame index using existing columns.
Set the DataFrame index (row labels) using one or more existing
columns or arrays (of the correct length). The index can replace the
existing index or expand on it.
```
>>> df = pd.DataFrame({'month': [1, 4, 7, 10],
... 'year': [2012, 2014, 2013, 2014],
... 'sale': [55, 40, 84, 31]})
>>> df
month year sale
0 1 2012 55
1 4 2014 40
2 7 2013 84
3 10 2014 31
``` | DataFrame.set_index(keys, *, drop=True, append=False, inplace=False, verify_integrity=False)[source]#
Set the DataFrame index using existing columns.
Set the DataFrame index (row labels) using one or more existing
columns or arrays (of the correct length). The index can replace the
existing index or expand on it.
Parameters
keys : label or array-like or list of labels/arrays
This parameter can be either a single column key, a single array of the same length as the calling DataFrame, or a list containing an arbitrary combination of column keys and arrays. Here, “array” encompasses Series, Index, np.ndarray, and instances of Iterator.
drop : bool, default True
Delete columns to be used as the new index.
append : bool, default False
Whether to append columns to existing index.
inplace : bool, default False
Whether to modify the DataFrame rather than creating a new one.
verify_integrity : bool, default False
Check the new index for duplicates. Otherwise defer the check until necessary. Setting to False will improve the performance of this method.
Returns
DataFrame or None
Changed row labels or None if inplace=True.
See also
DataFrame.reset_index : Opposite of set_index.
DataFrame.reindex : Change to new indices or expand indices.
DataFrame.reindex_like : Change to same indices as other DataFrame.
Examples
>>> df = pd.DataFrame({'month': [1, 4, 7, 10],
... 'year': [2012, 2014, 2013, 2014],
... 'sale': [55, 40, 84, 31]})
>>> df
month year sale
0 1 2012 55
1 4 2014 40
2 7 2013 84
3 10 2014 31
Set the index to become the ‘month’ column:
>>> df.set_index('month')
year sale
month
1 2012 55
4 2014 40
7 2013 84
10 2014 31
Create a MultiIndex using columns ‘year’ and ‘month’:
>>> df.set_index(['year', 'month'])
sale
year month
2012 1 55
2014 4 40
2013 7 84
2014 10 31
Create a MultiIndex using an Index and a column:
>>> df.set_index([pd.Index([1, 2, 3, 4]), 'year'])
month sale
year
1 2012 1 55
2 2014 4 40
3 2013 7 84
4 2014 10 31
Create a MultiIndex using two Series:
>>> s = pd.Series([1, 2, 3, 4])
>>> df.set_index([s, s**2])
month year sale
1 1 1 2012 55
2 4 4 2014 40
3 9 7 2013 84
4 16 10 2014 31
| reference/api/pandas.DataFrame.set_index.html |
pandas.tseries.offsets.MonthBegin.freqstr | `pandas.tseries.offsets.MonthBegin.freqstr`
Return a string representing the frequency.
```
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
``` | MonthBegin.freqstr#
Return a string representing the frequency.
Examples
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
>>> pd.offsets.BusinessHour(2).freqstr
'2BH'
>>> pd.offsets.Nano().freqstr
'N'
>>> pd.offsets.Nano(-3).freqstr
'-3N'
| reference/api/pandas.tseries.offsets.MonthBegin.freqstr.html |
pandas.io.formats.style.Styler.set_caption | `pandas.io.formats.style.Styler.set_caption`
Set the text added to a <caption> HTML element.
For HTML output either the string input is used or the first element of the
tuple. For LaTeX the string input provides a caption and the additional
tuple input allows for full captions and short captions, in that order. | Styler.set_caption(caption)[source]#
Set the text added to a <caption> HTML element.
Parameters
caption : str, tuple
For HTML output either the string input is used or the first element of the tuple. For LaTeX the string input provides a caption and the additional tuple input allows for full captions and short captions, in that order.
Returns
self : Styler
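The docstring ships without an example; a minimal illustrative sketch:
>>> df = pd.DataFrame({'A': [1, 2]})  # illustrative data
>>> styler = df.style.set_caption("Table 1: example data")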
| reference/api/pandas.io.formats.style.Styler.set_caption.html |
pandas.tseries.offsets.SemiMonthEnd.is_quarter_start | `pandas.tseries.offsets.SemiMonthEnd.is_quarter_start`
Return boolean whether a timestamp occurs on the quarter start.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
``` | SemiMonthEnd.is_quarter_start()#
Return boolean whether a timestamp occurs on the quarter start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
| reference/api/pandas.tseries.offsets.SemiMonthEnd.is_quarter_start.html |
pandas.tseries.offsets.LastWeekOfMonth.freqstr | `pandas.tseries.offsets.LastWeekOfMonth.freqstr`
Return a string representing the frequency.
Examples
```
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
``` | LastWeekOfMonth.freqstr#
Return a string representing the frequency.
Examples
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
>>> pd.offsets.BusinessHour(2).freqstr
'2BH'
>>> pd.offsets.Nano().freqstr
'N'
>>> pd.offsets.Nano(-3).freqstr
'-3N'
| reference/api/pandas.tseries.offsets.LastWeekOfMonth.freqstr.html |
pandas.Series.tz_localize | `pandas.Series.tz_localize`
Localize tz-naive index of a Series or DataFrame to target time zone.
```
>>> s = pd.Series([1],
... index=pd.DatetimeIndex(['2018-09-15 01:30:00']))
>>> s.tz_localize('CET')
2018-09-15 01:30:00+02:00 1
dtype: int64
``` | Series.tz_localize(tz, axis=0, level=None, copy=True, ambiguous='raise', nonexistent='raise')[source]#
Localize tz-naive index of a Series or DataFrame to target time zone.
This operation localizes the Index. To localize the values in a
timezone-naive Series, use Series.dt.tz_localize().
Parameters
tzstr or tzinfo
axisthe axis to localize
levelint, str, default NoneIf axis ia a MultiIndex, localize a specific level. Otherwise
must be None.
copybool, default TrueAlso make a copy of the underlying data.
ambiguous‘infer’, bool-ndarray, ‘NaT’, default ‘raise’When clocks moved backward due to DST, ambiguous times may arise.
For example in Central European Time (UTC+01), when going from
03:00 DST to 02:00 non-DST, 02:30:00 local time occurs both at
00:30:00 UTC and at 01:30:00 UTC. In such a situation, the
ambiguous parameter dictates how ambiguous times should be
handled.
‘infer’ will attempt to infer fall dst-transition hours based on
order
bool-ndarray where True signifies a DST time, False designates
a non-DST time (note that this flag is only applicable for
ambiguous times)
‘NaT’ will return NaT where there are ambiguous times
‘raise’ will raise an AmbiguousTimeError if there are ambiguous
times.
nonexistentstr, default ‘raise’A nonexistent time does not exist in a particular timezone
where clocks moved forward due to DST. Valid values are:
‘shift_forward’ will shift the nonexistent time forward to the
closest existing time
‘shift_backward’ will shift the nonexistent time backward to the
closest existing time
‘NaT’ will return NaT where there are nonexistent times
timedelta objects will shift nonexistent times by the timedelta
‘raise’ will raise a NonExistentTimeError if there are
nonexistent times.
Returns
Series/DataFrameSame type as the input.
Raises
TypeErrorIf the TimeSeries is tz-aware and tz is not None.
Examples
Localize local times:
>>> s = pd.Series([1],
... index=pd.DatetimeIndex(['2018-09-15 01:30:00']))
>>> s.tz_localize('CET')
2018-09-15 01:30:00+02:00 1
dtype: int64
Be careful with DST changes. When there is sequential data, pandas
can infer the DST time:
>>> s = pd.Series(range(7),
... index=pd.DatetimeIndex(['2018-10-28 01:30:00',
... '2018-10-28 02:00:00',
... '2018-10-28 02:30:00',
... '2018-10-28 02:00:00',
... '2018-10-28 02:30:00',
... '2018-10-28 03:00:00',
... '2018-10-28 03:30:00']))
>>> s.tz_localize('CET', ambiguous='infer')
2018-10-28 01:30:00+02:00 0
2018-10-28 02:00:00+02:00 1
2018-10-28 02:30:00+02:00 2
2018-10-28 02:00:00+01:00 3
2018-10-28 02:30:00+01:00 4
2018-10-28 03:00:00+01:00 5
2018-10-28 03:30:00+01:00 6
dtype: int64
In some cases, inferring the DST is impossible. In such cases, you can
pass an ndarray to the ambiguous parameter to set the DST explicitly
>>> s = pd.Series(range(3),
... index=pd.DatetimeIndex(['2018-10-28 01:20:00',
... '2018-10-28 02:36:00',
... '2018-10-28 03:46:00']))
>>> s.tz_localize('CET', ambiguous=np.array([True, True, False]))
2018-10-28 01:20:00+02:00 0
2018-10-28 02:36:00+02:00 1
2018-10-28 03:46:00+01:00 2
dtype: int64
If the DST transition causes nonexistent times, you can shift these
dates forward or backward with a timedelta object or ‘shift_forward’
or ‘shift_backward’.
>>> s = pd.Series(range(2),
... index=pd.DatetimeIndex(['2015-03-29 02:30:00',
... '2015-03-29 03:30:00']))
>>> s.tz_localize('Europe/Warsaw', nonexistent='shift_forward')
2015-03-29 03:00:00+02:00 0
2015-03-29 03:30:00+02:00 1
dtype: int64
>>> s.tz_localize('Europe/Warsaw', nonexistent='shift_backward')
2015-03-29 01:59:59.999999999+01:00 0
2015-03-29 03:30:00+02:00 1
dtype: int64
>>> s.tz_localize('Europe/Warsaw', nonexistent=pd.Timedelta('1H'))
2015-03-29 03:30:00+02:00 0
2015-03-29 03:30:00+02:00 1
dtype: int64
| reference/api/pandas.Series.tz_localize.html |
pandas.DatetimeIndex.hour | `pandas.DatetimeIndex.hour`
The hours of the datetime.
```
>>> datetime_series = pd.Series(
... pd.date_range("2000-01-01", periods=3, freq="h")
... )
>>> datetime_series
0 2000-01-01 00:00:00
1 2000-01-01 01:00:00
2 2000-01-01 02:00:00
dtype: datetime64[ns]
>>> datetime_series.dt.hour
0 0
1 1
2 2
dtype: int64
``` | property DatetimeIndex.hour[source]#
The hours of the datetime.
Examples
>>> datetime_series = pd.Series(
... pd.date_range("2000-01-01", periods=3, freq="h")
... )
>>> datetime_series
0 2000-01-01 00:00:00
1 2000-01-01 01:00:00
2 2000-01-01 02:00:00
dtype: datetime64[ns]
>>> datetime_series.dt.hour
0 0
1 1
2 2
dtype: int64
| reference/api/pandas.DatetimeIndex.hour.html |
pandas.tseries.offsets.Nano.onOffset | pandas.tseries.offsets.Nano.onOffset | Nano.onOffset()#
| reference/api/pandas.tseries.offsets.Nano.onOffset.html |
pandas.tseries.offsets.YearBegin.is_month_end | `pandas.tseries.offsets.YearBegin.is_month_end`
Return boolean whether a timestamp occurs on the month end.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
``` | YearBegin.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
| reference/api/pandas.tseries.offsets.YearBegin.is_month_end.html |
pandas.DataFrame.to_markdown | `pandas.DataFrame.to_markdown`
Print DataFrame in Markdown-friendly format.
```
>>> df = pd.DataFrame(
... data={"animal_1": ["elk", "pig"], "animal_2": ["dog", "quetzal"]}
... )
>>> print(df.to_markdown())
| | animal_1 | animal_2 |
|---:|:-----------|:-----------|
| 0 | elk | dog |
| 1 | pig | quetzal |
``` | DataFrame.to_markdown(buf=None, mode='wt', index=True, storage_options=None, **kwargs)[source]#
Print DataFrame in Markdown-friendly format.
New in version 1.0.0.
Parameters
bufstr, Path or StringIO-like, optional, default NoneBuffer to write to. If None, the output is returned as a string.
modestr, optionalMode in which file is opened, “wt” by default.
indexbool, optional, default TrueAdd index (row) labels.
New in version 1.1.0.
storage_optionsdict, optionalExtra options that make sense for a particular storage connection, e.g.
host, port, username, password, etc. For HTTP(S) URLs the key-value pairs
are forwarded to urllib.request.Request as header options. For other
URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are
forwarded to fsspec.open. Please see fsspec and urllib for more
details, and for more examples on storage options refer here.
New in version 1.2.0.
**kwargsThese parameters will be passed to tabulate.
Returns
strDataFrame in Markdown-friendly format.
Notes
Requires the tabulate package.
Examples
>>> df = pd.DataFrame(
... data={"animal_1": ["elk", "pig"], "animal_2": ["dog", "quetzal"]}
... )
>>> print(df.to_markdown())
| | animal_1 | animal_2 |
|---:|:-----------|:-----------|
| 0 | elk | dog |
| 1 | pig | quetzal |
Output markdown with a tabulate option.
>>> print(df.to_markdown(tablefmt="grid"))
+----+------------+------------+
| | animal_1 | animal_2 |
+====+============+============+
| 0 | elk | dog |
+----+------------+------------+
| 1 | pig | quetzal |
+----+------------+------------+
| reference/api/pandas.DataFrame.to_markdown.html |
pandas.IntervalIndex.is_empty | `pandas.IntervalIndex.is_empty`
Indicates if an interval is empty, meaning it contains no points.
```
>>> pd.Interval(0, 1, closed='right').is_empty
False
``` | property IntervalIndex.is_empty[source]#
Indicates if an interval is empty, meaning it contains no points.
New in version 0.25.0.
Returns
bool or ndarrayA boolean indicating if a scalar Interval is empty, or a
boolean ndarray positionally indicating if an Interval in
an IntervalArray or IntervalIndex is
empty.
Examples
An Interval that contains points is not empty:
>>> pd.Interval(0, 1, closed='right').is_empty
False
An Interval that does not contain any points is empty:
>>> pd.Interval(0, 0, closed='right').is_empty
True
>>> pd.Interval(0, 0, closed='left').is_empty
True
>>> pd.Interval(0, 0, closed='neither').is_empty
True
An Interval that contains a single point is not empty:
>>> pd.Interval(0, 0, closed='both').is_empty
False
An IntervalArray or IntervalIndex returns a
boolean ndarray positionally indicating if an Interval is
empty:
>>> ivs = [pd.Interval(0, 0, closed='neither'),
... pd.Interval(1, 2, closed='neither')]
>>> pd.arrays.IntervalArray(ivs).is_empty
array([ True, False])
Missing values are not considered empty:
>>> ivs = [pd.Interval(0, 0, closed='neither'), np.nan]
>>> pd.IntervalIndex(ivs).is_empty
array([ True, False])
| reference/api/pandas.IntervalIndex.is_empty.html |
pandas.tseries.offsets.DateOffset.is_year_end | `pandas.tseries.offsets.DateOffset.is_year_end`
Return boolean whether a timestamp occurs on the year end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
``` | DateOffset.is_year_end()#
Return boolean whether a timestamp occurs on the year end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
| reference/api/pandas.tseries.offsets.DateOffset.is_year_end.html |
pandas.io.formats.style.Styler.set_precision | `pandas.io.formats.style.Styler.set_precision`
Set the precision used to display values. | Styler.set_precision(precision)[source]#
Set the precision used to display values.
Deprecated since version 1.3.0.
Parameters
precisionint
Returns
selfStyler
Notes
This method is deprecated; see Styler.format.
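Examples
A brief sketch (not from the upstream docstring) showing the deprecated call next to its Styler.format replacement:
>>> df = pd.DataFrame({"a": [1.23456]})
>>> styler = df.style.set_precision(2)   # deprecated since 1.3.0
>>> styler = df.style.format(precision=2)  # preferred equivalent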
| reference/api/pandas.io.formats.style.Styler.set_precision.html |
pandas.tseries.offsets.FY5253.apply | pandas.tseries.offsets.FY5253.apply | FY5253.apply()#
| reference/api/pandas.tseries.offsets.FY5253.apply.html |
pandas.Index.union | `pandas.Index.union`
Form the union of two Index objects.
If the Index objects are incompatible, both Index objects will be
cast to dtype(‘object’) first.
```
>>> idx1 = pd.Index([1, 2, 3, 4])
>>> idx2 = pd.Index([3, 4, 5, 6])
>>> idx1.union(idx2)
Int64Index([1, 2, 3, 4, 5, 6], dtype='int64')
``` | final Index.union(other, sort=None)[source]#
Form the union of two Index objects.
If the Index objects are incompatible, both Index objects will be
cast to dtype(‘object’) first.
Changed in version 0.25.0.
Parameters
otherIndex or array-like
sortbool or None, default NoneWhether to sort the resulting Index.
None : Sort the result, except when
self and other are equal.
self or other has length 0.
Some values in self or other cannot be compared.
A RuntimeWarning is issued in this case.
False : do not sort the result.
Returns
unionIndex
Examples
Union matching dtypes
>>> idx1 = pd.Index([1, 2, 3, 4])
>>> idx2 = pd.Index([3, 4, 5, 6])
>>> idx1.union(idx2)
Int64Index([1, 2, 3, 4, 5, 6], dtype='int64')
Union mismatched dtypes
>>> idx1 = pd.Index(['a', 'b', 'c', 'd'])
>>> idx2 = pd.Index([1, 2, 3, 4])
>>> idx1.union(idx2)
Index(['a', 'b', 'c', 'd', 1, 2, 3, 4], dtype='object')
MultiIndex case
>>> idx1 = pd.MultiIndex.from_arrays(
... [[1, 1, 2, 2], ["Red", "Blue", "Red", "Blue"]]
... )
>>> idx1
MultiIndex([(1, 'Red'),
(1, 'Blue'),
(2, 'Red'),
(2, 'Blue')],
)
>>> idx2 = pd.MultiIndex.from_arrays(
... [[3, 3, 2, 2], ["Red", "Green", "Red", "Green"]]
... )
>>> idx2
MultiIndex([(3, 'Red'),
(3, 'Green'),
(2, 'Red'),
(2, 'Green')],
)
>>> idx1.union(idx2)
MultiIndex([(1, 'Blue'),
(1, 'Red'),
(2, 'Blue'),
(2, 'Green'),
(2, 'Red'),
(3, 'Green'),
(3, 'Red')],
)
>>> idx1.union(idx2, sort=False)
MultiIndex([(1, 'Red'),
(1, 'Blue'),
(2, 'Red'),
(2, 'Blue'),
(3, 'Red'),
(3, 'Green'),
(2, 'Green')],
)
| reference/api/pandas.Index.union.html |
pandas.tseries.offsets.BusinessMonthEnd.apply_index | `pandas.tseries.offsets.BusinessMonthEnd.apply_index`
Vectorized apply of DateOffset to DatetimeIndex. | BusinessMonthEnd.apply_index()#
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead.
Parameters
indexDatetimeIndex
Returns
DatetimeIndex
Raises
NotImplementedErrorWhen the specific offset subclass does not have a vectorized
implementation.
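Examples
A sketch of the recommended replacement, offset + dtindex (the index is illustrative; output shown as expected for pandas 1.x):
>>> dtindex = pd.date_range("2022-01-05", periods=3, freq="D")
>>> pd.offsets.BusinessMonthEnd() + dtindex
DatetimeIndex(['2022-01-31', '2022-01-31', '2022-01-31'], dtype='datetime64[ns]', freq=None)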
| reference/api/pandas.tseries.offsets.BusinessMonthEnd.apply_index.html |
pandas.errors.NumExprClobberingError | `pandas.errors.NumExprClobberingError`
Exception raised when trying to use a built-in numexpr name as a variable name.
```
>>> df = pd.DataFrame({'abs': [1, 1, 1]})
>>> df.query("abs > 2")
... # NumExprClobberingError: Variables in expression "(abs) > (2)" overlap...
>>> sin, a = 1, 2
>>> pd.eval("sin + a", engine='numexpr')
... # NumExprClobberingError: Variables in expression "(sin) + (a)" overlap...
``` | exception pandas.errors.NumExprClobberingError[source]#
Exception raised when trying to use a built-in numexpr name as a variable name.
eval or query will throw the error if the engine is set
to ‘numexpr’. ‘numexpr’ is the default engine value for these methods if the
numexpr package is installed.
Examples
>>> df = pd.DataFrame({'abs': [1, 1, 1]})
>>> df.query("abs > 2")
... # NumExprClobberingError: Variables in expression "(abs) > (2)" overlap...
>>> sin, a = 1, 2
>>> pd.eval("sin + a", engine='numexpr')
... # NumExprClobberingError: Variables in expression "(sin) + (a)" overlap...
| reference/api/pandas.errors.NumExprClobberingError.html |
pandas.tseries.offsets.CustomBusinessHour.weekmask | pandas.tseries.offsets.CustomBusinessHour.weekmask | CustomBusinessHour.weekmask#
| reference/api/pandas.tseries.offsets.CustomBusinessHour.weekmask.html |
pandas.core.groupby.DataFrameGroupBy.resample | `pandas.core.groupby.DataFrameGroupBy.resample`
Provide resampling when using a TimeGrouper.
```
>>> idx = pd.date_range('1/1/2000', periods=4, freq='T')
>>> df = pd.DataFrame(data=4 * [range(2)],
... index=idx,
... columns=['a', 'b'])
>>> df.iloc[2, 0] = 5
>>> df
a b
2000-01-01 00:00:00 0 1
2000-01-01 00:01:00 0 1
2000-01-01 00:02:00 5 1
2000-01-01 00:03:00 0 1
``` | DataFrameGroupBy.resample(rule, *args, **kwargs)[source]#
Provide resampling when using a TimeGrouper.
Given a grouper, the function resamples it according to a string
“string” -> “frequency”.
See the frequency aliases
documentation for more details.
Parameters
rulestr or DateOffsetThe offset string or object representing target grouper conversion.
*args, **kwargsPossible arguments are how, fill_method, limit, kind and
on, and other arguments of TimeGrouper.
Returns
GrouperReturn a new grouper with our resampler appended.
See also
GrouperSpecify a frequency to resample with when grouping by a key.
DatetimeIndex.resampleFrequency conversion and resampling of time series.
Examples
>>> idx = pd.date_range('1/1/2000', periods=4, freq='T')
>>> df = pd.DataFrame(data=4 * [range(2)],
... index=idx,
... columns=['a', 'b'])
>>> df.iloc[2, 0] = 5
>>> df
a b
2000-01-01 00:00:00 0 1
2000-01-01 00:01:00 0 1
2000-01-01 00:02:00 5 1
2000-01-01 00:03:00 0 1
Downsample the DataFrame into 3 minute bins and sum the values of
the timestamps falling into a bin.
>>> df.groupby('a').resample('3T').sum()
                       a  b
a
0 2000-01-01 00:00:00  0  2
  2000-01-01 00:03:00  0  1
5 2000-01-01 00:00:00  5  1
Upsample the series into 30 second bins.
>>> df.groupby('a').resample('30S').sum()
                       a  b
a
0 2000-01-01 00:00:00  0  1
  2000-01-01 00:00:30  0  0
  2000-01-01 00:01:00  0  1
  2000-01-01 00:01:30  0  0
  2000-01-01 00:02:00  0  0
  2000-01-01 00:02:30  0  0
  2000-01-01 00:03:00  0  1
5 2000-01-01 00:02:00  5  1
Resample by month. Values are assigned to the month of the period.
>>> df.groupby('a').resample('M').sum()
              a  b
a
0 2000-01-31  0  3
5 2000-01-31  5  1
Downsample the series into 3 minute bins as above, but close the right
side of the bin interval.
>>> df.groupby('a').resample('3T', closed='right').sum()
                       a  b
a
0 1999-12-31 23:57:00  0  1
  2000-01-01 00:00:00  0  2
5 2000-01-01 00:00:00  5  1
Downsample the series into 3 minute bins and close the right side of
the bin interval, but label each bin using the right edge instead of
the left.
>>> df.groupby('a').resample('3T', closed='right', label='right').sum()
                       a  b
a
0 2000-01-01 00:00:00  0  1
  2000-01-01 00:03:00  0  2
5 2000-01-01 00:03:00  5  1
| reference/api/pandas.core.groupby.DataFrameGroupBy.resample.html |
pandas.tseries.offsets.SemiMonthBegin.is_month_end | `pandas.tseries.offsets.SemiMonthBegin.is_month_end`
Return boolean whether a timestamp occurs on the month end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
``` | SemiMonthBegin.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
| reference/api/pandas.tseries.offsets.SemiMonthBegin.is_month_end.html |
pandas.tseries.offsets.Milli.is_quarter_end | `pandas.tseries.offsets.Milli.is_quarter_end`
Return boolean whether a timestamp occurs on the quarter end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
``` | Milli.is_quarter_end()#
Return boolean whether a timestamp occurs on the quarter end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
| reference/api/pandas.tseries.offsets.Milli.is_quarter_end.html |
pandas.tseries.offsets.SemiMonthEnd.n | pandas.tseries.offsets.SemiMonthEnd.n | SemiMonthEnd.n#
| reference/api/pandas.tseries.offsets.SemiMonthEnd.n.html |
pandas.DataFrame.cummin | `pandas.DataFrame.cummin`
Return cumulative minimum over a DataFrame or Series axis.
```
>>> s = pd.Series([2, np.nan, 5, -1, 0])
>>> s
0 2.0
1 NaN
2 5.0
3 -1.0
4 0.0
dtype: float64
``` | DataFrame.cummin(axis=None, skipna=True, *args, **kwargs)[source]#
Return cumulative minimum over a DataFrame or Series axis.
Returns a DataFrame or Series of the same size containing the cumulative
minimum.
Parameters
axis{0 or ‘index’, 1 or ‘columns’}, default 0The index or the name of the axis. 0 is equivalent to None or ‘index’.
For Series this parameter is unused and defaults to 0.
skipnabool, default TrueExclude NA/null values. If an entire row/column is NA, the result
will be NA.
*args, **kwargsAdditional keywords have no effect but might be accepted for
compatibility with NumPy.
Returns
Series or DataFrameReturn cumulative minimum of Series or DataFrame.
See also
core.window.expanding.Expanding.minSimilar functionality but ignores NaN values.
DataFrame.minReturn the minimum over DataFrame axis.
DataFrame.cummaxReturn cumulative maximum over DataFrame axis.
DataFrame.cumminReturn cumulative minimum over DataFrame axis.
DataFrame.cumsumReturn cumulative sum over DataFrame axis.
DataFrame.cumprodReturn cumulative product over DataFrame axis.
Examples
Series
>>> s = pd.Series([2, np.nan, 5, -1, 0])
>>> s
0 2.0
1 NaN
2 5.0
3 -1.0
4 0.0
dtype: float64
By default, NA values are ignored.
>>> s.cummin()
0 2.0
1 NaN
2 2.0
3 -1.0
4 -1.0
dtype: float64
To include NA values in the operation, use skipna=False
>>> s.cummin(skipna=False)
0 2.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
DataFrame
>>> df = pd.DataFrame([[2.0, 1.0],
... [3.0, np.nan],
... [1.0, 0.0]],
... columns=list('AB'))
>>> df
A B
0 2.0 1.0
1 3.0 NaN
2 1.0 0.0
By default, iterates over rows and finds the minimum
in each column. This is equivalent to axis=None or axis='index'.
>>> df.cummin()
A B
0 2.0 1.0
1 2.0 NaN
2 1.0 0.0
To iterate over columns and find the minimum in each row,
use axis=1
>>> df.cummin(axis=1)
A B
0 2.0 1.0
1 3.0 NaN
2 1.0 0.0
| reference/api/pandas.DataFrame.cummin.html |
pandas.Series.cat.rename_categories | `pandas.Series.cat.rename_categories`
Rename categories.
New categories which will replace old categories.
```
>>> c = pd.Categorical(['a', 'a', 'b'])
>>> c.rename_categories([0, 1])
[0, 0, 1]
Categories (2, int64): [0, 1]
``` | Series.cat.rename_categories(*args, **kwargs)[source]#
Rename categories.
Parameters
new_categorieslist-like, dict-like or callableNew categories which will replace old categories.
list-like: all items must be unique and the number of items in
the new categories must match the existing number of categories.
dict-like: specifies a mapping from
old categories to new. Categories not contained in the mapping
are passed through and extra categories in the mapping are
ignored.
callable : a callable that is called on all items in the old
categories and whose return values comprise the new categories.
inplacebool, default FalseWhether or not to rename the categories inplace or return a copy of
this categorical with renamed categories.
Deprecated since version 1.3.0.
Returns
catCategorical or NoneCategorical with renamed categories or None if inplace=True.
Raises
ValueErrorIf new categories are list-like and do not have the same number of
items as the current categories, or do not validate as categories.
See also
reorder_categoriesReorder categories.
add_categoriesAdd new categories.
remove_categoriesRemove the specified categories.
remove_unused_categoriesRemove categories which are not used.
set_categoriesSet the categories to the specified ones.
Examples
>>> c = pd.Categorical(['a', 'a', 'b'])
>>> c.rename_categories([0, 1])
[0, 0, 1]
Categories (2, int64): [0, 1]
For dict-like new_categories, extra keys are ignored and
categories not in the dictionary are passed through
>>> c.rename_categories({'a': 'A', 'c': 'C'})
['A', 'A', 'b']
Categories (2, object): ['A', 'b']
You may also provide a callable to create the new categories
>>> c.rename_categories(lambda x: x.upper())
['A', 'A', 'B']
Categories (2, object): ['A', 'B']
| reference/api/pandas.Series.cat.rename_categories.html |
pandas.tseries.offsets.Nano.base | `pandas.tseries.offsets.Nano.base`
Returns a copy of the calling offset object with n=1 and all other
attributes equal. | Nano.base#
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
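A short illustration (output shown as expected from the standard offset repr):
>>> pd.offsets.Nano(3).base
<Nano>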
| reference/api/pandas.tseries.offsets.Nano.base.html |
pandas.io.formats.style.Styler.set_uuid | `pandas.io.formats.style.Styler.set_uuid`
Set the uuid applied to id attributes of HTML elements. | Styler.set_uuid(uuid)[source]#
Set the uuid applied to id attributes of HTML elements.
Parameters
uuidstr
Returns
selfStyler
Notes
Almost all HTML elements within the table, and including the <table> element
are assigned id attributes. The format is T_uuid_<extra> where
<extra> is typically a more specific identifier, such as row1_col2.
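Examples
A minimal sketch (the uuid string is illustrative):
>>> df = pd.DataFrame({"a": [1, 2]})
>>> styler = df.style.set_uuid("my_table")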
| reference/api/pandas.io.formats.style.Styler.set_uuid.html |
pandas.core.window.rolling.Window.sum | `pandas.core.window.rolling.Window.sum`
Calculate the rolling weighted window sum.
Include only float, int, boolean columns. | Window.sum(numeric_only=False, *args, **kwargs)[source]#
Calculate the rolling weighted window sum.
Parameters
numeric_onlybool, default FalseInclude only float, int, boolean columns.
New in version 1.5.0.
**kwargsKeyword arguments to configure the SciPy weighted window type.
Returns
Series or DataFrameReturn type is the same as the original object with np.float64 dtype.
See also
pandas.Series.rollingCalling rolling with Series data.
pandas.DataFrame.rollingCalling rolling with DataFrames.
pandas.Series.sumAggregating sum for Series.
pandas.DataFrame.sumAggregating sum for DataFrame.
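Examples
A sketch assuming SciPy is installed (weighted win_type windows require it); values shown as expected:
>>> ser = pd.Series([0, 1, 5, 2, 8])
>>> ser.rolling(2, win_type="gaussian").sum(std=3)
0         NaN
1    0.986207
2    5.917243
3    6.903450
4    9.862071
dtype: float64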
| reference/api/pandas.core.window.rolling.Window.sum.html |
pandas.tseries.offsets.Minute.is_month_end | `pandas.tseries.offsets.Minute.is_month_end`
Return boolean whether a timestamp occurs on the month end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
``` | Minute.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
| reference/api/pandas.tseries.offsets.Minute.is_month_end.html |
pandas.tseries.offsets.WeekOfMonth.apply_index | `pandas.tseries.offsets.WeekOfMonth.apply_index`
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead. | WeekOfMonth.apply_index()#
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead.
Parameters
indexDatetimeIndex
Returns
DatetimeIndex
Raises
NotImplementedErrorWhen the specific offset subclass does not have a vectorized
implementation.
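Examples
A sketch of the recommended replacement applied to a single timestamp; WeekOfMonth(week=0, weekday=0) targets the first Monday of the month:
>>> pd.Timestamp("2022-01-01") + pd.offsets.WeekOfMonth(week=0, weekday=0)
Timestamp('2022-01-03 00:00:00')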
| reference/api/pandas.tseries.offsets.WeekOfMonth.apply_index.html |
pandas.Series.tz_convert | `pandas.Series.tz_convert`
Convert tz-aware axis to target time zone.
If axis is a MultiIndex, convert a specific level. Otherwise
must be None. | Series.tz_convert(tz, axis=0, level=None, copy=True)[source]#
Convert tz-aware axis to target time zone.
Parameters
tzstr or tzinfo object
axisthe axis to convert
levelint, str, default NoneIf axis is a MultiIndex, convert a specific level. Otherwise
must be None.
copybool, default TrueAlso make a copy of the underlying data.
Returns
Series/DataFrameObject with time zone converted axis.
Raises
TypeErrorIf the axis is tz-naive.
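Examples
A minimal sketch mirroring the tz_localize example above (output shown as expected):
>>> s = pd.Series([1],
...               index=pd.DatetimeIndex(['2018-09-15 01:30:00']))
>>> s = s.tz_localize('CET')
>>> s.tz_convert('US/Eastern')
2018-09-14 19:30:00-04:00    1
dtype: int64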
| reference/api/pandas.Series.tz_convert.html |
pandas.tseries.offsets.BYearBegin.is_month_end | `pandas.tseries.offsets.BYearBegin.is_month_end`
Return boolean whether a timestamp occurs on the month end.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
``` | BYearBegin.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
| reference/api/pandas.tseries.offsets.BYearBegin.is_month_end.html |
pandas.tseries.offsets.SemiMonthBegin.__call__ | `pandas.tseries.offsets.SemiMonthBegin.__call__`
Call self as a function. | SemiMonthBegin.__call__(*args, **kwargs)#
Call self as a function.
| reference/api/pandas.tseries.offsets.SemiMonthBegin.__call__.html |
pandas.tseries.offsets.Week.nanos | pandas.tseries.offsets.Week.nanos | Week.nanos#
| reference/api/pandas.tseries.offsets.Week.nanos.html |
pandas.tseries.offsets.BusinessHour.is_quarter_end | `pandas.tseries.offsets.BusinessHour.is_quarter_end`
Return boolean whether a timestamp occurs on the quarter end.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
``` | BusinessHour.is_quarter_end()#
Return boolean whether a timestamp occurs on the quarter end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
| reference/api/pandas.tseries.offsets.BusinessHour.is_quarter_end.html |
pandas.api.extensions.ExtensionArray._from_factorized | `pandas.api.extensions.ExtensionArray._from_factorized`
Reconstruct an ExtensionArray after factorization. | classmethod ExtensionArray._from_factorized(values, original)[source]#
Reconstruct an ExtensionArray after factorization.
Parameters
valuesndarrayAn integer ndarray with the factorized values.
originalExtensionArrayThe original ExtensionArray that factorize was called on.
See also
factorizeTop-level factorize method that dispatches here.
ExtensionArray.factorizeEncode the extension array as an enumerated type.
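Examples
A sketch of the factorize round trip that invokes this hook internally (the array contents are illustrative):
>>> arr = pd.array([1, 2, 1], dtype="Int64")
>>> codes, uniques = arr.factorize()
>>> codes
array([0, 1, 0])
>>> uniques
<IntegerArray>
[1, 2]
Length: 2, dtype: Int64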
| reference/api/pandas.api.extensions.ExtensionArray._from_factorized.html |
pandas.core.groupby.DataFrameGroupBy.idxmax | `pandas.core.groupby.DataFrameGroupBy.idxmax`
Return index of first occurrence of maximum over requested axis.
```
>>> df = pd.DataFrame({'consumption': [10.51, 103.11, 55.48],
... 'co2_emissions': [37.2, 19.66, 1712]},
... index=['Pork', 'Wheat Products', 'Beef'])
``` | DataFrameGroupBy.idxmax(axis=0, skipna=True, numeric_only=_NoDefault.no_default)[source]#
Return index of first occurrence of maximum over requested axis.
NA/null values are excluded.
Parameters
axis{0 or ‘index’, 1 or ‘columns’}, default 0The axis to use. 0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise.
skipnabool, default TrueExclude NA/null values. If an entire row/column is NA, the result
will be NA.
numeric_onlybool, default True for axis=0, False for axis=1Include only float, int or boolean data.
New in version 1.5.0.
Returns
SeriesIndexes of maxima along the specified axis.
Raises
ValueErrorIf the row/column is empty.
See also
Series.idxmaxReturn index of the maximum element.
Notes
This method is the DataFrame version of ndarray.argmax.
Examples
Consider a dataset containing food consumption in Argentina.
>>> df = pd.DataFrame({'consumption': [10.51, 103.11, 55.48],
... 'co2_emissions': [37.2, 19.66, 1712]},
... index=['Pork', 'Wheat Products', 'Beef'])
>>> df
consumption co2_emissions
Pork 10.51 37.20
Wheat Products 103.11 19.66
Beef 55.48 1712.00
By default, it returns the index for the maximum value in each column.
>>> df.idxmax()
consumption Wheat Products
co2_emissions Beef
dtype: object
To return the index for the maximum value in each row, use axis="columns".
>>> df.idxmax(axis="columns")
Pork co2_emissions
Wheat Products consumption
Beef co2_emissions
dtype: object
| reference/api/pandas.core.groupby.DataFrameGroupBy.idxmax.html |
pandas.tseries.offsets.Day.n | pandas.tseries.offsets.Day.n | Day.n#
| reference/api/pandas.tseries.offsets.Day.n.html |
pandas.tseries.offsets.DateOffset.rollback | `pandas.tseries.offsets.DateOffset.rollback`
Roll provided date backward to next offset only if not on offset. | DateOffset.rollback()#
Roll provided date backward to next offset only if not on offset.
Returns
TimeStampRolled timestamp if not on offset, otherwise unchanged timestamp.
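Examples
A short sketch using a month-end offset:
>>> pd.offsets.MonthEnd().rollback(pd.Timestamp('2022-01-15'))
Timestamp('2021-12-31 00:00:00')
>>> pd.offsets.MonthEnd().rollback(pd.Timestamp('2022-01-31'))
Timestamp('2022-01-31 00:00:00')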
| reference/api/pandas.tseries.offsets.DateOffset.rollback.html |
pandas.tseries.offsets.BusinessDay.is_month_end | `pandas.tseries.offsets.BusinessDay.is_month_end`
Return boolean whether a timestamp occurs on the month end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
``` | BusinessDay.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
| reference/api/pandas.tseries.offsets.BusinessDay.is_month_end.html |
pandas.tseries.offsets.Milli.is_on_offset | `pandas.tseries.offsets.Milli.is_on_offset`
Return boolean whether a timestamp intersects with this frequency.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
``` | Milli.is_on_offset()#
Return boolean whether a timestamp intersects with this frequency.
Parameters
dtdatetime.datetimeTimestamp to check intersections with frequency.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
>>> ts = pd.Timestamp(2022, 8, 6)
>>> ts.day_name()
'Saturday'
>>> freq = pd.offsets.BusinessDay(1)
>>> freq.is_on_offset(ts)
False
| reference/api/pandas.tseries.offsets.Milli.is_on_offset.html |
pandas.tseries.offsets.BQuarterEnd.startingMonth | pandas.tseries.offsets.BQuarterEnd.startingMonth | BQuarterEnd.startingMonth#
| reference/api/pandas.tseries.offsets.BQuarterEnd.startingMonth.html |
pandas.tseries.offsets.MonthBegin.is_anchored | `pandas.tseries.offsets.MonthBegin.is_anchored`
Return boolean whether the frequency is a unit frequency (n=1).
```
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
``` | MonthBegin.is_anchored()#
Return boolean whether the frequency is a unit frequency (n=1).
Examples
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
| reference/api/pandas.tseries.offsets.MonthBegin.is_anchored.html |
pandas.to_numeric | `pandas.to_numeric`
Convert argument to a numeric type.
The default return dtype is float64 or int64
depending on the data supplied. Use the downcast parameter
to obtain other dtypes.
```
>>> s = pd.Series(['1.0', '2', -3])
>>> pd.to_numeric(s)
0 1.0
1 2.0
2 -3.0
dtype: float64
>>> pd.to_numeric(s, downcast='float')
0 1.0
1 2.0
2 -3.0
dtype: float32
>>> pd.to_numeric(s, downcast='signed')
0 1
1 2
2 -3
dtype: int8
>>> s = pd.Series(['apple', '1.0', '2', -3])
>>> pd.to_numeric(s, errors='ignore')
0 apple
1 1.0
2 2
3 -3
dtype: object
>>> pd.to_numeric(s, errors='coerce')
0 NaN
1 1.0
2 2.0
3 -3.0
dtype: float64
``` | pandas.to_numeric(arg, errors='raise', downcast=None)[source]#
Convert argument to a numeric type.
The default return dtype is float64 or int64
depending on the data supplied. Use the downcast parameter
to obtain other dtypes.
Please note that precision loss may occur if really large numbers
are passed in. Due to the internal limitations of ndarray, if
numbers smaller than -9223372036854775808 (np.iinfo(np.int64).min)
or larger than 18446744073709551615 (np.iinfo(np.uint64).max) are
passed in, it is very likely they will be converted to float so that
they can be stored in an ndarray. These warnings apply similarly to
Series since it internally leverages ndarray.
Parameters
argscalar, list, tuple, 1-d array, or SeriesArgument to be converted.
errors{‘ignore’, ‘raise’, ‘coerce’}, default ‘raise’
If ‘raise’, then invalid parsing will raise an exception.
If ‘coerce’, then invalid parsing will be set as NaN.
If ‘ignore’, then invalid parsing will return the input.
downcaststr, default NoneCan be ‘integer’, ‘signed’, ‘unsigned’, or ‘float’.
If not None, and if the data has been successfully cast to a
numerical dtype (or if the data was numeric to begin with),
downcast that resulting data to the smallest numerical dtype
possible according to the following rules:
‘integer’ or ‘signed’: smallest signed int dtype (min.: np.int8)
‘unsigned’: smallest unsigned int dtype (min.: np.uint8)
‘float’: smallest float dtype (min.: np.float32)
As this behaviour is separate from the core conversion to
numeric values, any errors raised during the downcasting
will be surfaced regardless of the value of the ‘errors’ input.
In addition, downcasting will only occur if the size
of the resulting data’s dtype is strictly larger than
the dtype it is to be cast to, so if none of the dtypes
checked satisfy that specification, no downcasting will be
performed on the data.
Returns
retNumeric if parsing succeeded.
Return type depends on input. Series if Series, otherwise ndarray.
See also
DataFrame.astypeCast argument to a specified dtype.
to_datetimeConvert argument to datetime.
to_timedeltaConvert argument to timedelta.
numpy.ndarray.astypeCast a numpy array to a specified type.
DataFrame.convert_dtypesConvert dtypes.
Examples
Take separate series and convert to numeric, coercing when told to
>>> s = pd.Series(['1.0', '2', -3])
>>> pd.to_numeric(s)
0 1.0
1 2.0
2 -3.0
dtype: float64
>>> pd.to_numeric(s, downcast='float')
0 1.0
1 2.0
2 -3.0
dtype: float32
>>> pd.to_numeric(s, downcast='signed')
0 1
1 2
2 -3
dtype: int8
>>> s = pd.Series(['apple', '1.0', '2', -3])
>>> pd.to_numeric(s, errors='ignore')
0 apple
1 1.0
2 2
3 -3
dtype: object
>>> pd.to_numeric(s, errors='coerce')
0 NaN
1 1.0
2 2.0
3 -3.0
dtype: float64
Downcasting of nullable integer and floating dtypes is supported:
>>> s = pd.Series([1, 2, 3], dtype="Int64")
>>> pd.to_numeric(s, downcast="integer")
0 1
1 2
2 3
dtype: Int8
>>> s = pd.Series([1.0, 2.1, 3.0], dtype="Float64")
>>> pd.to_numeric(s, downcast="float")
0 1.0
1 2.1
2 3.0
dtype: Float32
| reference/api/pandas.to_numeric.html |
pandas.Period.year | `pandas.Period.year`
Return the year this Period falls on. | Period.year#
Return the year this Period falls on.
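Examples
A minimal illustration:
>>> pd.Period("2018-03-11", freq="D").year
2018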
| reference/api/pandas.Period.year.html |
pandas.tseries.offsets.QuarterEnd.rollforward | `pandas.tseries.offsets.QuarterEnd.rollforward`
Roll provided date forward to next offset only if not on offset.
Rolled timestamp if not on offset, otherwise unchanged timestamp. | QuarterEnd.rollforward()#
Roll provided date forward to next offset only if not on offset.
Returns
TimeStampRolled timestamp if not on offset, otherwise unchanged timestamp.
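Examples
A short sketch; QuarterEnd() anchors on calendar quarter ends (March, June, September, December):
>>> pd.offsets.QuarterEnd().rollforward(pd.Timestamp('2022-01-15'))
Timestamp('2022-03-31 00:00:00')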
| reference/api/pandas.tseries.offsets.QuarterEnd.rollforward.html |
pandas.Index.rename | `pandas.Index.rename`
Alter Index or MultiIndex name.
```
>>> idx = pd.Index(['A', 'C', 'A', 'B'], name='score')
>>> idx.rename('grade')
Index(['A', 'C', 'A', 'B'], dtype='object', name='grade')
``` | Index.rename(name, inplace=False)[source]#
Alter Index or MultiIndex name.
Able to set new names without level. Defaults to returning new index.
Length of names must match number of levels in MultiIndex.
Parameters
namelabel or list of labelsName(s) to set.
inplacebool, default FalseModifies the object directly, instead of creating a new Index or
MultiIndex.
Returns
Index or NoneThe same type as the caller or None if inplace=True.
See also
Index.set_namesAble to set new names partially and by level.
Examples
>>> idx = pd.Index(['A', 'C', 'A', 'B'], name='score')
>>> idx.rename('grade')
Index(['A', 'C', 'A', 'B'], dtype='object', name='grade')
>>> idx = pd.MultiIndex.from_product([['python', 'cobra'],
... [2018, 2019]],
... names=['kind', 'year'])
>>> idx
MultiIndex([('python', 2018),
('python', 2019),
( 'cobra', 2018),
( 'cobra', 2019)],
names=['kind', 'year'])
>>> idx.rename(['species', 'year'])
MultiIndex([('python', 2018),
('python', 2019),
( 'cobra', 2018),
( 'cobra', 2019)],
names=['species', 'year'])
>>> idx.rename('species')
Traceback (most recent call last):
TypeError: Must pass list-like as `names`.
| reference/api/pandas.Index.rename.html |
pandas.tseries.offsets.CustomBusinessDay.normalize | pandas.tseries.offsets.CustomBusinessDay.normalize | CustomBusinessDay.normalize#
| reference/api/pandas.tseries.offsets.CustomBusinessDay.normalize.html |
pandas.DataFrame.T | pandas.DataFrame.T | property DataFrame.T[source]#
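The transpose of the DataFrame: rows and columns are swapped. A minimal illustration:
>>> df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
>>> df.T
      0  1
col1  1  2
col2  3  4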
| reference/api/pandas.DataFrame.T.html |
pandas.tseries.offsets.MonthBegin.is_month_start | `pandas.tseries.offsets.MonthBegin.is_month_start`
Return boolean whether a timestamp occurs on the month start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
``` | MonthBegin.is_month_start()#
Return boolean whether a timestamp occurs on the month start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
| reference/api/pandas.tseries.offsets.MonthBegin.is_month_start.html |
pandas.tseries.offsets.Micro.rollforward | `pandas.tseries.offsets.Micro.rollforward`
Roll provided date forward to next offset only if not on offset. | Micro.rollforward()#
Roll provided date forward to next offset only if not on offset.
Returns
TimeStampRolled timestamp if not on offset, otherwise unchanged timestamp.
| reference/api/pandas.tseries.offsets.Micro.rollforward.html |
pandas.DataFrame.plot.pie | `pandas.DataFrame.plot.pie`
Generate a pie plot.
```
>>> df = pd.DataFrame({'mass': [0.330, 4.87 , 5.97],
... 'radius': [2439.7, 6051.8, 6378.1]},
... index=['Mercury', 'Venus', 'Earth'])
>>> plot = df.plot.pie(y='mass', figsize=(5, 5))
``` | DataFrame.plot.pie(**kwargs)[source]#
Generate a pie plot.
A pie plot is a proportional representation of the numerical data in a
column. This function wraps matplotlib.pyplot.pie() for the
specified column. If no column reference is passed and
subplots=True a pie plot is drawn for each numerical column
independently.
Parameters
yint or label, optionalLabel or position of the column to plot.
If not provided, subplots=True argument must be passed.
**kwargsKeyword arguments to pass on to DataFrame.plot().
Returns
matplotlib.axes.Axes or np.ndarray of themA NumPy array is returned when subplots is True.
See also
Series.plot.pieGenerate a pie plot for a Series.
DataFrame.plotMake plots of a DataFrame.
Examples
In the example below we have a DataFrame with the information about
planet’s mass and radius. We pass the ‘mass’ column to the
pie function to get a pie plot.
>>> df = pd.DataFrame({'mass': [0.330, 4.87 , 5.97],
... 'radius': [2439.7, 6051.8, 6378.1]},
... index=['Mercury', 'Venus', 'Earth'])
>>> plot = df.plot.pie(y='mass', figsize=(5, 5))
>>> plot = df.plot.pie(subplots=True, figsize=(11, 6))
| reference/api/pandas.DataFrame.plot.pie.html |
pandas.core.window.ewm.ExponentialMovingWindow.corr | `pandas.core.window.ewm.ExponentialMovingWindow.corr`
Calculate the ewm (exponential weighted moment) sample correlation. | ExponentialMovingWindow.corr(other=None, pairwise=None, numeric_only=False, **kwargs)[source]#
Calculate the ewm (exponential weighted moment) sample correlation.
Parameters
otherSeries or DataFrame, optionalIf not supplied then will default to self and produce pairwise
output.
pairwisebool, default NoneIf False then only matching columns between self and other will be
used and the output will be a DataFrame.
If True then all pairwise combinations will be calculated and the
output will be a MultiIndex DataFrame in the case of DataFrame
inputs. In the case of missing elements, only complete pairwise
observations will be used.
numeric_onlybool, default FalseInclude only float, int, boolean columns.
New in version 1.5.0.
**kwargsFor NumPy compatibility and will not have an effect on the result.
Deprecated since version 1.5.0.
Returns
Series or DataFrameReturn type is the same as the original object with np.float64 dtype.
See also
pandas.Series.ewmCalling ewm with Series data.
pandas.DataFrame.ewmCalling ewm with DataFrames.
pandas.Series.corrAggregating corr for Series.
pandas.DataFrame.corrAggregating corr for DataFrame.
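Examples
A minimal sketch; correlating a series with itself yields 1.0 once enough observations accumulate (output shown as expected):
>>> s = pd.Series([1, 2, 3, 4])
>>> s.ewm(alpha=0.5).corr(s)
0    NaN
1    1.0
2    1.0
3    1.0
dtype: float64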
| reference/api/pandas.core.window.ewm.ExponentialMovingWindow.corr.html |
pandas.core.groupby.DataFrameGroupBy.shift | `pandas.core.groupby.DataFrameGroupBy.shift`
Shift each group by periods observations.
If freq is passed, the index will be increased using the periods and the freq. | DataFrameGroupBy.shift(periods=1, freq=None, axis=0, fill_value=None)[source]#
Shift each group by periods observations.
If freq is passed, the index will be increased using the periods and the freq.
Parameters
periodsint, default 1Number of periods to shift.
freqstr, optionalFrequency string.
axisaxis to shift, default 0Shift direction.
fill_valueoptionalThe scalar value to use for newly introduced missing values.
Returns
Series or DataFrameObject shifted within each group.
See also
Index.shiftShift values of Index.
tshiftShift the time index, using the index’s frequency if available.
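Examples
A minimal sketch (the frame is illustrative); each group's first row has no prior observation, hence NaN:
>>> df = pd.DataFrame({"a": [1, 1, 2, 2], "b": [10, 20, 30, 40]})
>>> df.groupby("a").shift(1)
      b
0   NaN
1  10.0
2   NaN
3  30.0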
| reference/api/pandas.core.groupby.DataFrameGroupBy.shift.html |
pandas.io.formats.style.Styler.set_properties | `pandas.io.formats.style.Styler.set_properties`
Set defined CSS-properties to each <td> HTML element for the given subset.
```
>>> df = pd.DataFrame(np.random.randn(10, 4))
>>> df.style.set_properties(color="white", align="right")
>>> df.style.set_properties(**{'background-color': 'yellow'})
``` | Styler.set_properties(subset=None, **kwargs)[source]#
Set defined CSS-properties to each <td> HTML element for the given subset.
Parameters
subsetlabel, array-like, IndexSlice, optionalA valid 2d input to DataFrame.loc[<subset>], or, in the case of a 1d input
or single key, to DataFrame.loc[:, <subset>] where the columns are
prioritised, to limit data to before applying the function.
**kwargsdictA dictionary of property, value pairs to be set for each cell.
Returns
selfStyler
Notes
This is a convenience method which wraps Styler.applymap(), calling a
function that returns the CSS-properties independently of the data.
Examples
>>> df = pd.DataFrame(np.random.randn(10, 4))
>>> df.style.set_properties(color="white", align="right")
>>> df.style.set_properties(**{'background-color': 'yellow'})
See Table Visualization user guide for
more details.
| reference/api/pandas.io.formats.style.Styler.set_properties.html |
pandas.tseries.offsets.BYearEnd.rollforward | `pandas.tseries.offsets.BYearEnd.rollforward`
Roll provided date forward to next offset only if not on offset.
Rolled timestamp if not on offset, otherwise unchanged timestamp. | BYearEnd.rollforward()#
Roll provided date forward to next offset only if not on offset.
Returns
TimeStampRolled timestamp if not on offset, otherwise unchanged timestamp.
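Examples
A short sketch; December 31, 2022 falls on a Saturday, so the last business day of that year is December 30:
>>> pd.offsets.BYearEnd().rollforward(pd.Timestamp('2022-05-01'))
Timestamp('2022-12-30 00:00:00')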
| reference/api/pandas.tseries.offsets.BYearEnd.rollforward.html |
pandas.Series.gt | `pandas.Series.gt`
Return Greater than of series and other, element-wise (binary operator gt).
Equivalent to series > other, but with support to substitute a fill_value for
missing data in either one of the inputs.
```
>>> a = pd.Series([1, 1, 1, np.nan, 1], index=['a', 'b', 'c', 'd', 'e'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
e 1.0
dtype: float64
>>> b = pd.Series([0, 1, 2, np.nan, 1], index=['a', 'b', 'c', 'd', 'f'])
>>> b
a 0.0
b 1.0
c 2.0
d NaN
f 1.0
dtype: float64
>>> a.gt(b, fill_value=0)
a True
b False
c False
d False
e True
f False
dtype: bool
``` | Series.gt(other, level=None, fill_value=None, axis=0)[source]#
Return Greater than of series and other, element-wise (binary operator gt).
Equivalent to series > other, but with support to substitute a fill_value for
missing data in either one of the inputs.
Parameters
otherSeries or scalar value
levelint or nameBroadcast across a level, matching Index values on the
passed MultiIndex level.
fill_valueNone or float value, default None (NaN)Fill existing missing (NaN) values, and any new element needed for
successful Series alignment, with this value before computation.
If data in both corresponding Series locations is missing
the result of filling (at that location) will be missing.
axis{0 or ‘index’}Unused. Parameter needed for compatibility with DataFrame.
Returns
SeriesThe result of the operation.
Examples
>>> a = pd.Series([1, 1, 1, np.nan, 1], index=['a', 'b', 'c', 'd', 'e'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
e 1.0
dtype: float64
>>> b = pd.Series([0, 1, 2, np.nan, 1], index=['a', 'b', 'c', 'd', 'f'])
>>> b
a 0.0
b 1.0
c 2.0
d NaN
f 1.0
dtype: float64
>>> a.gt(b, fill_value=0)
a True
b False
c False
d False
e True
f False
dtype: bool
| reference/api/pandas.Series.gt.html |
pandas.tseries.offsets.Hour.name | `pandas.tseries.offsets.Hour.name`
Return a string representing the base frequency.
```
>>> pd.offsets.Hour().name
'H'
``` | Hour.name#
Return a string representing the base frequency.
Examples
>>> pd.offsets.Hour().name
'H'
>>> pd.offsets.Hour(5).name
'H'
| reference/api/pandas.tseries.offsets.Hour.name.html |
pandas.tseries.offsets.BQuarterBegin.is_year_start | `pandas.tseries.offsets.BQuarterBegin.is_year_start`
Return boolean whether a timestamp occurs on the year start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
``` | BQuarterBegin.is_year_start()#
Return boolean whether a timestamp occurs on the year start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
| reference/api/pandas.tseries.offsets.BQuarterBegin.is_year_start.html |
pandas.DataFrame.mean | `pandas.DataFrame.mean`
Return the mean of the values over the requested axis. | DataFrame.mean(axis=_NoDefault.no_default, skipna=True, level=None, numeric_only=None, **kwargs)[source]#
Return the mean of the values over the requested axis.
Parameters
axis{index (0), columns (1)}Axis for the function to be applied on.
For Series this parameter is unused and defaults to 0.
skipnabool, default TrueExclude NA/null values when computing the result.
levelint or level name, default NoneIf the axis is a MultiIndex (hierarchical), count along a
particular level, collapsing into a Series.
Deprecated since version 1.3.0: The level keyword is deprecated. Use groupby instead.
numeric_onlybool, default NoneInclude only float, int, boolean columns. If None, will attempt to use
everything, then use only numeric data. Not implemented for Series.
Deprecated since version 1.5.0: Specifying numeric_only=None is deprecated. The default value will be
False in a future version of pandas.
**kwargsAdditional keyword arguments to be passed to the function.
Returns
Series or DataFrame (if level specified)
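Examples
A minimal sketch:
>>> df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
>>> df.mean()
a    1.5
b    3.5
dtype: float64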
| reference/api/pandas.DataFrame.mean.html |
pandas.Timestamp.today | `pandas.Timestamp.today`
Return the current time in the local timezone.
This differs from datetime.today() in that it can be localized to a
passed timezone.
```
>>> pd.Timestamp.today()
Timestamp('2020-11-16 22:37:39.969883')
``` | classmethod Timestamp.today(tz=None)#
Return the current time in the local timezone.
This differs from datetime.today() in that it can be localized to a
passed timezone.
Parameters
tzstr or timezone object, default NoneTimezone to localize to.
Examples
>>> pd.Timestamp.today()
Timestamp('2020-11-16 22:37:39.969883')
Analogous for pd.NaT:
>>> pd.NaT.today()
NaT
| reference/api/pandas.Timestamp.today.html |
pandas.DataFrame.to_period | `pandas.DataFrame.to_period`
Convert DataFrame from DatetimeIndex to PeriodIndex.
```
>>> idx = pd.to_datetime(
... [
... "2001-03-31 00:00:00",
... "2002-05-31 00:00:00",
... "2003-08-31 00:00:00",
... ]
... )
``` | DataFrame.to_period(freq=None, axis=0, copy=True)[source]#
Convert DataFrame from DatetimeIndex to PeriodIndex.
Convert DataFrame from DatetimeIndex to PeriodIndex with desired
frequency (inferred from index if not passed).
Parameters
freqstr, defaultFrequency of the PeriodIndex.
axis{0 or ‘index’, 1 or ‘columns’}, default 0The axis to convert (the index by default).
copybool, default TrueIf False then underlying input data is not copied.
Returns
DataFrame with PeriodIndex
Examples
>>> idx = pd.to_datetime(
... [
... "2001-03-31 00:00:00",
... "2002-05-31 00:00:00",
... "2003-08-31 00:00:00",
... ]
... )
>>> idx
DatetimeIndex(['2001-03-31', '2002-05-31', '2003-08-31'],
dtype='datetime64[ns]', freq=None)
>>> idx.to_period("M")
PeriodIndex(['2001-03', '2002-05', '2003-08'], dtype='period[M]')
For the yearly frequency
>>> idx.to_period("Y")
PeriodIndex(['2001', '2002', '2003'], dtype='period[A-DEC]')
| reference/api/pandas.DataFrame.to_period.html |
pandas.Series.rpow | `pandas.Series.rpow`
Return Exponential power of series and other, element-wise (binary operator rpow).
Equivalent to other ** series, but with support to substitute a fill_value for
missing data in either one of the inputs.
```
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.pow(b, fill_value=0)
a 1.0
b 1.0
c 1.0
d 0.0
e NaN
dtype: float64
``` | Series.rpow(other, level=None, fill_value=None, axis=0)[source]#
Return Exponential power of series and other, element-wise (binary operator rpow).
Equivalent to other ** series, but with support to substitute a fill_value for
missing data in either one of the inputs.
Parameters
otherSeries or scalar value
levelint or nameBroadcast across a level, matching Index values on the
passed MultiIndex level.
fill_valueNone or float value, default None (NaN)Fill existing missing (NaN) values, and any new element needed for
successful Series alignment, with this value before computation.
If data in both corresponding Series locations is missing
the result of filling (at that location) will be missing.
axis{0 or ‘index’}Unused. Parameter needed for compatibility with DataFrame.
Returns
SeriesThe result of the operation.
See also
Series.powElement-wise Exponential power, see Python documentation for more details.
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.pow(b, fill_value=0)
a 1.0
b 1.0
c 1.0
d 0.0
e NaN
dtype: float64
| reference/api/pandas.Series.rpow.html |
pandas.tseries.offsets.FY5253.n | pandas.tseries.offsets.FY5253.n | FY5253.n#
| reference/api/pandas.tseries.offsets.FY5253.n.html |
pandas.Series.str.isalpha | `pandas.Series.str.isalpha`
Check whether all characters in each string are alphabetic.
This is equivalent to running the Python string method
str.isalpha() for each element of the Series/Index. If a string
has zero characters, False is returned for that check.
```
>>> s1 = pd.Series(['one', 'one1', '1', ''])
``` | Series.str.isalpha()[source]#
Check whether all characters in each string are alphabetic.
This is equivalent to running the Python string method
str.isalpha() for each element of the Series/Index. If a string
has zero characters, False is returned for that check.
Returns
Series or Index of boolSeries or Index of boolean values with the same length as the original
Series/Index.
See also
Series.str.isalphaCheck whether all characters are alphabetic.
Series.str.isnumericCheck whether all characters are numeric.
Series.str.isalnumCheck whether all characters are alphanumeric.
Series.str.isdigitCheck whether all characters are digits.
Series.str.isdecimalCheck whether all characters are decimal.
Series.str.isspaceCheck whether all characters are whitespace.
Series.str.islowerCheck whether all characters are lowercase.
Series.str.isupperCheck whether all characters are uppercase.
Series.str.istitleCheck whether all characters are titlecase.
Examples
Checks for Alphabetic and Numeric Characters
>>> s1 = pd.Series(['one', 'one1', '1', ''])
>>> s1.str.isalpha()
0 True
1 False
2 False
3 False
dtype: bool
>>> s1.str.isnumeric()
0 False
1 False
2 True
3 False
dtype: bool
>>> s1.str.isalnum()
0 True
1 True
2 True
3 False
dtype: bool
Note that checks against characters mixed with any additional punctuation
or whitespace will evaluate to false for an alphanumeric check.
>>> s2 = pd.Series(['A B', '1.5', '3,000'])
>>> s2.str.isalnum()
0 False
1 False
2 False
dtype: bool
More Detailed Checks for Numeric Characters
There are several different but overlapping sets of numeric characters that
can be checked for.
>>> s3 = pd.Series(['23', '³', '⅕', ''])
The s3.str.isdecimal method checks for characters used to form numbers
in base 10.
>>> s3.str.isdecimal()
0 True
1 False
2 False
3 False
dtype: bool
The s3.str.isdigit method is the same as s3.str.isdecimal but also
includes special digits, like superscripted and subscripted digits in
unicode.
>>> s3.str.isdigit()
0 True
1 True
2 False
3 False
dtype: bool
The s3.str.isnumeric method is the same as s3.str.isdigit but also
includes other characters that can represent quantities such as unicode
fractions.
>>> s3.str.isnumeric()
0 True
1 True
2 True
3 False
dtype: bool
Checks for Whitespace
>>> s4 = pd.Series([' ', '\t\r\n ', ''])
>>> s4.str.isspace()
0 True
1 True
2 False
dtype: bool
Checks for Character Case
>>> s5 = pd.Series(['leopard', 'Golden Eagle', 'SNAKE', ''])
>>> s5.str.islower()
0 True
1 False
2 False
3 False
dtype: bool
>>> s5.str.isupper()
0 False
1 False
2 True
3 False
dtype: bool
The s5.str.istitle method checks whether all words are in title
case (whether only the first letter of each word is capitalized). Words are
assumed to be any sequence of non-numeric characters separated by
whitespace characters.
>>> s5.str.istitle()
0 False
1 True
2 False
3 False
dtype: bool
| reference/api/pandas.Series.str.isalpha.html |
pandas.pivot_table | `pandas.pivot_table`
Create a spreadsheet-style pivot table as a DataFrame.
```
>>> df = pd.DataFrame({"A": ["foo", "foo", "foo", "foo", "foo",
... "bar", "bar", "bar", "bar"],
... "B": ["one", "one", "one", "two", "two",
... "one", "one", "two", "two"],
... "C": ["small", "large", "large", "small",
... "small", "large", "small", "small",
... "large"],
... "D": [1, 2, 2, 3, 3, 4, 5, 6, 7],
... "E": [2, 4, 5, 5, 6, 6, 8, 9, 9]})
>>> df
A B C D E
0 foo one small 1 2
1 foo one large 2 4
2 foo one large 2 5
3 foo two small 3 5
4 foo two small 3 6
5 bar one large 4 6
6 bar one small 5 8
7 bar two small 6 9
8 bar two large 7 9
``` | pandas.pivot_table(data, values=None, index=None, columns=None, aggfunc='mean', fill_value=None, margins=False, dropna=True, margins_name='All', observed=False, sort=True)[source]#
Create a spreadsheet-style pivot table as a DataFrame.
The levels in the pivot table will be stored in MultiIndex objects
(hierarchical indexes) on the index and columns of the result DataFrame.
Parameters
dataDataFrame
valuescolumn to aggregate, optional
indexcolumn, Grouper, array, or list of the previousIf an array is passed, it must be the same length as the data. The
list can contain any of the other types (except list).
Keys to group by on the pivot table index. If an array is passed,
it is used in the same manner as column values.
columnscolumn, Grouper, array, or list of the previousIf an array is passed, it must be the same length as the data. The
list can contain any of the other types (except list).
Keys to group by on the pivot table column. If an array is passed,
it is used in the same manner as column values.
aggfuncfunction, list of functions, dict, default numpy.meanIf a list of functions is passed, the resulting pivot table will have
hierarchical columns whose top level are the function names
(inferred from the function objects themselves).
If a dict is passed, the key is the column to aggregate and the value
is the function or list of functions.
fill_valuescalar, default NoneValue to replace missing values with (in the resulting pivot table,
after aggregation).
marginsbool, default FalseAdd all row / columns (e.g. for subtotal / grand totals).
dropnabool, default TrueDo not include columns whose entries are all NaN. If True,
rows with a NaN value in any column will be omitted before
computing margins.
margins_namestr, default ‘All’Name of the row / column that will contain the totals
when margins is True.
observedbool, default FalseThis only applies if any of the groupers are Categoricals.
If True: only show observed values for categorical groupers.
If False: show all values for categorical groupers.
Changed in version 0.25.0.
sortbool, default TrueSpecifies if the result should be sorted.
New in version 1.3.0.
Returns
DataFrameAn Excel style pivot table.
See also
DataFrame.pivotPivot without aggregation that can handle non-numeric data.
DataFrame.meltUnpivot a DataFrame from wide to long format, optionally leaving identifiers set.
wide_to_longWide panel to long format. Less flexible but more user-friendly than melt.
Notes
Reference the user guide for more examples.
Examples
>>> df = pd.DataFrame({"A": ["foo", "foo", "foo", "foo", "foo",
... "bar", "bar", "bar", "bar"],
... "B": ["one", "one", "one", "two", "two",
... "one", "one", "two", "two"],
... "C": ["small", "large", "large", "small",
... "small", "large", "small", "small",
... "large"],
... "D": [1, 2, 2, 3, 3, 4, 5, 6, 7],
... "E": [2, 4, 5, 5, 6, 6, 8, 9, 9]})
>>> df
A B C D E
0 foo one small 1 2
1 foo one large 2 4
2 foo one large 2 5
3 foo two small 3 5
4 foo two small 3 6
5 bar one large 4 6
6 bar one small 5 8
7 bar two small 6 9
8 bar two large 7 9
This first example aggregates values by taking the sum.
>>> table = pd.pivot_table(df, values='D', index=['A', 'B'],
... columns=['C'], aggfunc=np.sum)
>>> table
C        large  small
A   B
bar one    4.0    5.0
    two    7.0    6.0
foo one    4.0    1.0
    two    NaN    6.0
We can also fill missing values using the fill_value parameter.
>>> table = pd.pivot_table(df, values='D', index=['A', 'B'],
... columns=['C'], aggfunc=np.sum, fill_value=0)
>>> table
C        large  small
A   B
bar one      4      5
    two      7      6
foo one      4      1
    two      0      6
The next example aggregates by taking the mean across multiple columns.
>>> table = pd.pivot_table(df, values=['D', 'E'], index=['A', 'C'],
... aggfunc={'D': np.mean,
... 'E': np.mean})
>>> table
                  D         E
A   C
bar large  5.500000  7.500000
    small  5.500000  8.500000
foo large  2.000000  4.500000
    small  2.333333  4.333333
We can also calculate multiple types of aggregations for any given
value column.
>>> table = pd.pivot_table(df, values=['D', 'E'], index=['A', 'C'],
... aggfunc={'D': np.mean,
... 'E': [min, max, np.mean]})
>>> table
                  D                E
               mean max         mean min
A   C
bar large  5.500000   9     7.500000   6
    small  5.500000   9     8.500000   8
foo large  2.000000   5     4.500000   4
    small  2.333333   6     4.333333   2
| reference/api/pandas.pivot_table.html |