title | summary | context | path
---|---|---|---|
pandas.DataFrame.info
|
`pandas.DataFrame.info`
Print a concise summary of a DataFrame.
This method prints information about a DataFrame including
the index dtype and columns, non-null values and memory usage.
```
>>> int_values = [1, 2, 3, 4, 5]
>>> text_values = ['alpha', 'beta', 'gamma', 'delta', 'epsilon']
>>> float_values = [0.0, 0.25, 0.5, 0.75, 1.0]
>>> df = pd.DataFrame({"int_col": int_values, "text_col": text_values,
... "float_col": float_values})
>>> df
int_col text_col float_col
0 1 alpha 0.00
1 2 beta 0.25
2 3 gamma 0.50
3 4 delta 0.75
4 5 epsilon 1.00
```
|
DataFrame.info(verbose=None, buf=None, max_cols=None, memory_usage=None, show_counts=None, null_counts=None)[source]#
Print a concise summary of a DataFrame.
This method prints information about a DataFrame including
the index dtype and columns, non-null values and memory usage.
Parameters
verbose : bool, optional
    Whether to print the full summary. By default, the setting in
    pandas.options.display.max_info_columns is followed.
buf : writable buffer, defaults to sys.stdout
    Where to send the output. By default, the output is printed to
    sys.stdout. Pass a writable buffer if you need to further process
    the output.
max_cols : int, optional
    When to switch from the verbose to the truncated output. If the
    DataFrame has more than max_cols columns, the truncated output
    is used. By default, the setting in
    pandas.options.display.max_info_columns is used.
memory_usage : bool or str, optional
    Specifies whether total memory usage of the DataFrame
    elements (including the index) should be displayed. By default,
    this follows the pandas.options.display.memory_usage setting.
    True always shows memory usage; False never shows it.
    A value of 'deep' is equivalent to "True with deep introspection".
    Memory usage is shown in human-readable units (base-2
    representation). Without deep introspection, a memory estimate is
    made based on column dtype and number of rows, assuming values
    consume the same amount of memory for corresponding dtypes. With
    deep memory introspection, a real memory usage calculation is
    performed at the cost of computational resources. See the
    Frequently Asked Questions for more details.
show_counts : bool, optional
    Whether to show the non-null counts. By default, this is shown
    only if the DataFrame is smaller than both
    pandas.options.display.max_info_rows and
    pandas.options.display.max_info_columns. A value of True always
    shows the counts; False never shows them.
null_counts : bool, optional
    Deprecated since version 1.2.0: Use show_counts instead.
Returns
None
    This method prints a summary of a DataFrame and returns None.
See also
DataFrame.describe : Generate descriptive statistics of DataFrame columns.
DataFrame.memory_usage : Memory usage of DataFrame columns.
Examples
>>> int_values = [1, 2, 3, 4, 5]
>>> text_values = ['alpha', 'beta', 'gamma', 'delta', 'epsilon']
>>> float_values = [0.0, 0.25, 0.5, 0.75, 1.0]
>>> df = pd.DataFrame({"int_col": int_values, "text_col": text_values,
... "float_col": float_values})
>>> df
int_col text_col float_col
0 1 alpha 0.00
1 2 beta 0.25
2 3 gamma 0.50
3 4 delta 0.75
4 5 epsilon 1.00
Prints information of all columns:
>>> df.info(verbose=True)
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5 entries, 0 to 4
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 int_col 5 non-null int64
1 text_col 5 non-null object
2 float_col 5 non-null float64
dtypes: float64(1), int64(1), object(1)
memory usage: 248.0+ bytes
Prints a summary of the column count and dtypes, but not per-column
information:
>>> df.info(verbose=False)
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5 entries, 0 to 4
Columns: 3 entries, int_col to float_col
dtypes: float64(1), int64(1), object(1)
memory usage: 248.0+ bytes
Pipe the output of DataFrame.info to a buffer instead of sys.stdout, get
the buffer content, and write it to a text file:
>>> import io
>>> buffer = io.StringIO()
>>> df.info(buf=buffer)
>>> s = buffer.getvalue()
>>> with open("df_info.txt", "w",
... encoding="utf-8") as f:
... f.write(s)
260
The memory_usage parameter allows deep introspection mode, especially
useful for big DataFrames when fine-tuning memory optimization:
>>> random_strings_array = np.random.choice(['a', 'b', 'c'], 10 ** 6)
>>> df = pd.DataFrame({
... 'column_1': np.random.choice(['a', 'b', 'c'], 10 ** 6),
... 'column_2': np.random.choice(['a', 'b', 'c'], 10 ** 6),
... 'column_3': np.random.choice(['a', 'b', 'c'], 10 ** 6)
... })
>>> df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1000000 entries, 0 to 999999
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 column_1 1000000 non-null object
1 column_2 1000000 non-null object
2 column_3 1000000 non-null object
dtypes: object(3)
memory usage: 22.9+ MB
>>> df.info(memory_usage='deep')
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1000000 entries, 0 to 999999
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 column_1 1000000 non-null object
1 column_2 1000000 non-null object
2 column_3 1000000 non-null object
dtypes: object(3)
memory usage: 165.9 MB
|
reference/api/pandas.DataFrame.info.html
|
pandas.tseries.offsets.Micro.is_month_end
|
`pandas.tseries.offsets.Micro.is_month_end`
Return boolean whether a timestamp occurs on the month end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
```
|
Micro.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
|
reference/api/pandas.tseries.offsets.Micro.is_month_end.html
|
pandas.tseries.offsets.FY5253.freqstr
|
`pandas.tseries.offsets.FY5253.freqstr`
Return a string representing the frequency.
```
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
```
|
FY5253.freqstr#
Return a string representing the frequency.
Examples
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
>>> pd.offsets.BusinessHour(2).freqstr
'2BH'
>>> pd.offsets.Nano().freqstr
'N'
>>> pd.offsets.Nano(-3).freqstr
'-3N'
|
reference/api/pandas.tseries.offsets.FY5253.freqstr.html
|
pandas.core.groupby.GroupBy.apply
|
`pandas.core.groupby.GroupBy.apply`
Apply function func group-wise and combine the results together.
```
>>> df = pd.DataFrame({'A': 'a a b'.split(),
... 'B': [1,2,3],
... 'C': [4,6,5]})
>>> g1 = df.groupby('A', group_keys=False)
>>> g2 = df.groupby('A', group_keys=True)
```
|
GroupBy.apply(func, *args, **kwargs)[source]#
Apply function func group-wise and combine the results together.
The function passed to apply must take a dataframe as its first
argument and return a DataFrame, Series or scalar. apply will
then take care of combining the results back together into a single
dataframe or series. apply is therefore a highly flexible
grouping method.
While apply is a very flexible method, its downside is that
using it can be quite a bit slower than using more specific methods
like agg or transform. pandas offers a wide range of methods that will
be much faster than using apply for their specific purposes, so try to
use them before reaching for apply.
Parameters
func : callable
    A callable that takes a dataframe as its first argument, and
    returns a dataframe, a series or a scalar. In addition the
    callable may take positional and keyword arguments.
args, kwargs : tuple and dict
    Optional positional and keyword arguments to pass to func.
Returns
applied : Series or DataFrame
See also
pipe : Apply function to the full GroupBy object instead of to each group.
aggregate : Apply aggregate function to the GroupBy object.
transform : Apply function column-by-column to the GroupBy object.
Series.apply : Apply a function to a Series.
DataFrame.apply : Apply a function to each row or column of a DataFrame.
Notes
Changed in version 1.3.0: The resulting dtype will reflect the return value of the passed func,
see the examples below.
Functions that mutate the passed object can produce unexpected
behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods
for more details.
Examples
>>> df = pd.DataFrame({'A': 'a a b'.split(),
... 'B': [1,2,3],
... 'C': [4,6,5]})
>>> g1 = df.groupby('A', group_keys=False)
>>> g2 = df.groupby('A', group_keys=True)
Notice that g1 and g2 have two groups, a and b, and only
differ in their group_keys argument. Calling apply in various ways,
we can get different grouping results:
Example 1: below the function passed to apply takes a DataFrame as
its argument and returns a DataFrame. apply combines the result for
each group together into a new DataFrame:
>>> g1[['B', 'C']].apply(lambda x: x / x.sum())
B C
0 0.333333 0.4
1 0.666667 0.6
2 1.000000 1.0
In the above, the groups are not part of the index. We can have them included
by using g2 where group_keys=True:
>>> g2[['B', 'C']].apply(lambda x: x / x.sum())
B C
A
a 0 0.333333 0.4
1 0.666667 0.6
b 2 1.000000 1.0
Example 2: The function passed to apply takes a DataFrame as
its argument and returns a Series. apply combines the result for
each group together into a new DataFrame.
Changed in version 1.3.0: The resulting dtype will reflect the return value of the passed func.
>>> g1[['B', 'C']].apply(lambda x: x.astype(float).max() - x.min())
B C
A
a 1.0 2.0
b 0.0 0.0
>>> g2[['B', 'C']].apply(lambda x: x.astype(float).max() - x.min())
B C
A
a 1.0 2.0
b 0.0 0.0
The group_keys argument has no effect here because the result is not
like-indexed (i.e. a transform) when compared
to the input.
Example 3: The function passed to apply takes a DataFrame as
its argument and returns a scalar. apply combines the result for
each group together into a Series, including setting the index as
appropriate:
>>> g1.apply(lambda x: x.C.max() - x.B.min())
A
a 5
b 2
dtype: int64
|
reference/api/pandas.core.groupby.GroupBy.apply.html
|
pandas.tseries.offsets.Tick.n
|
pandas.tseries.offsets.Tick.n
|
Tick.n#
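The original stub has no text; as a minimal illustration (an addition, not from the pandas docs), n is the integer multiple of the tick unit:
>>> pd.offsets.Hour(5).n
5
>>> pd.offsets.Nano(-3).n
-3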
|
reference/api/pandas.tseries.offsets.Tick.n.html
|
pandas.DatetimeIndex.day_of_week
|
`pandas.DatetimeIndex.day_of_week`
The day of the week with Monday=0, Sunday=6.
```
>>> s = pd.date_range('2016-12-31', '2017-01-08', freq='D').to_series()
>>> s.dt.dayofweek
2016-12-31 5
2017-01-01 6
2017-01-02 0
2017-01-03 1
2017-01-04 2
2017-01-05 3
2017-01-06 4
2017-01-07 5
2017-01-08 6
Freq: D, dtype: int64
```
|
property DatetimeIndex.day_of_week[source]#
The day of the week with Monday=0, Sunday=6.
Return the day of the week. It is assumed the week starts on
Monday, which is denoted by 0 and ends on Sunday which is denoted
by 6. This method is available on both Series with datetime
values (using the dt accessor) or DatetimeIndex.
Returns
Series or Index
    Containing integers indicating the day number.
See also
Series.dt.dayofweek : Alias.
Series.dt.weekday : Alias.
Series.dt.day_name : Returns the name of the day of the week.
Examples
>>> s = pd.date_range('2016-12-31', '2017-01-08', freq='D').to_series()
>>> s.dt.dayofweek
2016-12-31 5
2017-01-01 6
2017-01-02 0
2017-01-03 1
2017-01-04 2
2017-01-05 3
2017-01-06 4
2017-01-07 5
2017-01-08 6
Freq: D, dtype: int64
|
reference/api/pandas.DatetimeIndex.day_of_week.html
|
pandas.errors.OutOfBoundsDatetime
|
`pandas.errors.OutOfBoundsDatetime`
Raised when the datetime is outside the range that can be represented.
|
exception pandas.errors.OutOfBoundsDatetime#
Raised when the datetime is outside the range that can be represented.
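An illustrative trigger (an addition, not from the original entry): nanosecond-resolution Timestamps cover roughly the years 1677 to 2262, and values outside that window raise this error.
>>> pd.Timestamp("1300-01-01")  # before the representable ns range
Traceback (most recent call last):
OutOfBoundsDatetime: Out of bounds nanosecond timestamp: 1300-01-01 00:00:00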
|
reference/api/pandas.errors.OutOfBoundsDatetime.html
|
pandas.DataFrame.corr
|
`pandas.DataFrame.corr`
Compute pairwise correlation of columns, excluding NA/null values.
```
>>> def histogram_intersection(a, b):
... v = np.minimum(a, b).sum().round(decimals=1)
... return v
>>> df = pd.DataFrame([(.2, .3), (.0, .6), (.6, .0), (.2, .1)],
... columns=['dogs', 'cats'])
>>> df.corr(method=histogram_intersection)
dogs cats
dogs 1.0 0.3
cats 0.3 1.0
```
|
DataFrame.corr(method='pearson', min_periods=1, numeric_only=_NoDefault.no_default)[source]#
Compute pairwise correlation of columns, excluding NA/null values.
Parameters
method{‘pearson’, ‘kendall’, ‘spearman’} or callableMethod of correlation:
pearson : standard correlation coefficient
kendall : Kendall Tau correlation coefficient
spearman : Spearman rank correlation
callable: callable with input two 1d ndarraysand returning a float. Note that the returned matrix from corr
will have 1 along the diagonals and will be symmetric
regardless of the callable’s behavior.
min_periodsint, optionalMinimum number of observations required per pair of columns
to have a valid result. Currently only available for Pearson
and Spearman correlation.
numeric_onlybool, default TrueInclude only float, int or boolean data.
New in version 1.5.0.
Deprecated since version 1.5.0: The default value of numeric_only will be False in a future
version of pandas.
Returns
DataFrameCorrelation matrix.
See also
DataFrame.corrwithCompute pairwise correlation with another DataFrame or Series.
Series.corrCompute the correlation between two Series.
Notes
Pearson, Kendall and Spearman correlation are currently computed using pairwise complete observations.
Pearson correlation coefficient
Kendall rank correlation coefficient
Spearman’s rank correlation coefficient
Examples
>>> def histogram_intersection(a, b):
... v = np.minimum(a, b).sum().round(decimals=1)
... return v
>>> df = pd.DataFrame([(.2, .3), (.0, .6), (.6, .0), (.2, .1)],
... columns=['dogs', 'cats'])
>>> df.corr(method=histogram_intersection)
dogs cats
dogs 1.0 0.3
cats 0.3 1.0
>>> df = pd.DataFrame([(1, 1), (2, np.nan), (np.nan, 3), (4, 4)],
... columns=['dogs', 'cats'])
>>> df.corr(min_periods=3)
dogs cats
dogs 1.0 NaN
cats NaN 1.0
|
reference/api/pandas.DataFrame.corr.html
|
pandas.core.groupby.DataFrameGroupBy.nunique
|
`pandas.core.groupby.DataFrameGroupBy.nunique`
Return DataFrame with counts of unique elements in each position.
```
>>> df = pd.DataFrame({'id': ['spam', 'egg', 'egg', 'spam',
... 'ham', 'ham'],
... 'value1': [1, 5, 5, 2, 5, 5],
... 'value2': list('abbaxy')})
>>> df
id value1 value2
0 spam 1 a
1 egg 5 b
2 egg 5 b
3 spam 2 a
4 ham 5 x
5 ham 5 y
```
|
DataFrameGroupBy.nunique(dropna=True)[source]#
Return DataFrame with counts of unique elements in each position.
Parameters
dropna : bool, default True
    Don't include NaN in the counts.
Returns
nunique : DataFrame
Examples
>>> df = pd.DataFrame({'id': ['spam', 'egg', 'egg', 'spam',
... 'ham', 'ham'],
... 'value1': [1, 5, 5, 2, 5, 5],
... 'value2': list('abbaxy')})
>>> df
id value1 value2
0 spam 1 a
1 egg 5 b
2 egg 5 b
3 spam 2 a
4 ham 5 x
5 ham 5 y
>>> df.groupby('id').nunique()
value1 value2
id
egg 1 1
ham 1 2
spam 2 1
Check for rows with the same id but conflicting values:
>>> df.groupby('id').filter(lambda g: (g.nunique() > 1).any())
id value1 value2
0 spam 1 a
3 spam 2 a
4 ham 5 x
5 ham 5 y
|
reference/api/pandas.core.groupby.DataFrameGroupBy.nunique.html
|
pandas.api.extensions.ExtensionDtype.construct_from_string
|
`pandas.api.extensions.ExtensionDtype.construct_from_string`
Construct this type from a string.
This is useful mainly for data types that accept parameters.
For example, a period dtype accepts a frequency parameter that
can be set as period[H] (where H means hourly frequency).
```
>>> @classmethod
... def construct_from_string(cls, string):
... pattern = re.compile(r"^my_type\[(?P<arg_name>.+)\]$")
... match = pattern.match(string)
... if match:
... return cls(**match.groupdict())
... else:
... raise TypeError(
... f"Cannot construct a '{cls.__name__}' from '{string}'"
... )
```
|
classmethod ExtensionDtype.construct_from_string(string)[source]#
Construct this type from a string.
This is useful mainly for data types that accept parameters.
For example, a period dtype accepts a frequency parameter that
can be set as period[H] (where H means hourly frequency).
By default, in the abstract class, just the name of the type is
expected. But subclasses can overwrite this method to accept
parameters.
Parameters
string : str
    The name of the type, for example category.
Returns
ExtensionDtype
    Instance of the dtype.
Raises
TypeError
    If a class cannot be constructed from this 'string'.
Examples
For extension dtypes with arguments the following may be an
adequate implementation.
>>> @classmethod
... def construct_from_string(cls, string):
... pattern = re.compile(r"^my_type\[(?P<arg_name>.+)\]$")
... match = pattern.match(string)
... if match:
... return cls(**match.groupdict())
... else:
... raise TypeError(
... f"Cannot construct a '{cls.__name__}' from '{string}'"
... )
|
reference/api/pandas.api.extensions.ExtensionDtype.construct_from_string.html
|
pandas.Series.to_clipboard
|
`pandas.Series.to_clipboard`
Copy object to the system clipboard.
```
>>> df = pd.DataFrame([[1, 2, 3], [4, 5, 6]], columns=['A', 'B', 'C'])
```
|
Series.to_clipboard(excel=True, sep=None, **kwargs)[source]#
Copy object to the system clipboard.
Write a text representation of object to the system clipboard.
This can be pasted into Excel, for example.
Parameters
excel : bool, default True
    Produce output in a csv format for easy pasting into excel.
    True, use the provided separator for csv pasting.
    False, write a string representation of the object to the clipboard.
sep : str, default '\t'
    Field delimiter.
**kwargs
    These parameters will be passed to DataFrame.to_csv.
See also
DataFrame.to_csv : Write a DataFrame to a comma-separated values (csv) file.
read_clipboard : Read text from clipboard and pass to read_csv.
Notes
Requirements for your platform.
Linux : xclip, or xsel (with PyQt4 modules)
Windows : none
macOS : none
This method uses the processes developed for the package pyperclip. A
solution to render any output string format is given in the examples.
Examples
Copy the contents of a DataFrame to the clipboard.
>>> df = pd.DataFrame([[1, 2, 3], [4, 5, 6]], columns=['A', 'B', 'C'])
>>> df.to_clipboard(sep=',')
... # Wrote the following to the system clipboard:
... # ,A,B,C
... # 0,1,2,3
... # 1,4,5,6
We can omit the index by passing the keyword index and setting
it to false.
>>> df.to_clipboard(sep=',', index=False)
... # Wrote the following to the system clipboard:
... # A,B,C
... # 1,2,3
... # 4,5,6
Using the original pyperclip package for any string output format.
import pyperclip
html = df.style.to_html()
pyperclip.copy(html)
|
reference/api/pandas.Series.to_clipboard.html
|
pandas.Series.str.title
|
`pandas.Series.str.title`
Convert strings in the Series/Index to titlecase.
```
>>> s = pd.Series(['lower', 'CAPITALS', 'this is a sentence', 'SwApCaSe'])
>>> s
0 lower
1 CAPITALS
2 this is a sentence
3 SwApCaSe
dtype: object
```
|
Series.str.title()[source]#
Convert strings in the Series/Index to titlecase.
Equivalent to str.title().
Returns
Series or Index of object
See also
Series.str.lower : Converts all characters to lowercase.
Series.str.upper : Converts all characters to uppercase.
Series.str.title : Converts first character of each word to uppercase and remaining to lowercase.
Series.str.capitalize : Converts first character to uppercase and remaining to lowercase.
Series.str.swapcase : Converts uppercase to lowercase and lowercase to uppercase.
Series.str.casefold : Removes all case distinctions in the string.
Examples
>>> s = pd.Series(['lower', 'CAPITALS', 'this is a sentence', 'SwApCaSe'])
>>> s
0 lower
1 CAPITALS
2 this is a sentence
3 SwApCaSe
dtype: object
>>> s.str.lower()
0 lower
1 capitals
2 this is a sentence
3 swapcase
dtype: object
>>> s.str.upper()
0 LOWER
1 CAPITALS
2 THIS IS A SENTENCE
3 SWAPCASE
dtype: object
>>> s.str.title()
0 Lower
1 Capitals
2 This Is A Sentence
3 Swapcase
dtype: object
>>> s.str.capitalize()
0 Lower
1 Capitals
2 This is a sentence
3 Swapcase
dtype: object
>>> s.str.swapcase()
0 LOWER
1 capitals
2 THIS IS A SENTENCE
3 sWaPcAsE
dtype: object
|
reference/api/pandas.Series.str.title.html
|
pandas.DataFrame.isnull
|
`pandas.DataFrame.isnull`
DataFrame.isnull is an alias for DataFrame.isna.
```
>>> df = pd.DataFrame(dict(age=[5, 6, np.NaN],
... born=[pd.NaT, pd.Timestamp('1939-05-27'),
... pd.Timestamp('1940-04-25')],
... name=['Alfred', 'Batman', ''],
... toy=[None, 'Batmobile', 'Joker']))
>>> df
age born name toy
0 5.0 NaT Alfred None
1 6.0 1939-05-27 Batman Batmobile
2 NaN 1940-04-25 Joker
```
|
DataFrame.isnull()[source]#
DataFrame.isnull is an alias for DataFrame.isna.
Detect missing values.
Return a boolean same-sized object indicating if the values are NA.
NA values, such as None or numpy.NaN, get mapped to True values.
Everything else gets mapped to False values. Characters such as empty
strings '' or numpy.inf are not considered NA values
(unless you set pandas.options.mode.use_inf_as_na = True).
Returns
DataFrame
    Mask of bool values for each element in DataFrame that
    indicates whether an element is an NA value.
See also
DataFrame.isnull : Alias of isna.
DataFrame.notna : Boolean inverse of isna.
DataFrame.dropna : Omit axes labels with missing values.
isna : Top-level isna.
Examples
Show which entries in a DataFrame are NA.
>>> df = pd.DataFrame(dict(age=[5, 6, np.NaN],
... born=[pd.NaT, pd.Timestamp('1939-05-27'),
... pd.Timestamp('1940-04-25')],
... name=['Alfred', 'Batman', ''],
... toy=[None, 'Batmobile', 'Joker']))
>>> df
age born name toy
0 5.0 NaT Alfred None
1 6.0 1939-05-27 Batman Batmobile
2 NaN 1940-04-25 Joker
>>> df.isna()
age born name toy
0 False True False True
1 False False False False
2 True False False False
Show which entries in a Series are NA.
>>> ser = pd.Series([5, 6, np.NaN])
>>> ser
0 5.0
1 6.0
2 NaN
dtype: float64
>>> ser.isna()
0 False
1 False
2 True
dtype: bool
|
reference/api/pandas.DataFrame.isnull.html
|
pandas.Series.ffill
|
`pandas.Series.ffill`
Synonym for DataFrame.fillna() with method='ffill'.
Object with missing values filled or None if inplace=True.
|
Series.ffill(*, axis=None, inplace=False, limit=None, downcast=None)[source]#
Synonym for DataFrame.fillna() with method='ffill'.
Returns
Series/DataFrame or None
    Object with missing values filled or None if inplace=True.
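A minimal sketch of the behavior (an addition, not part of the original entry): each missing value is replaced by the most recent non-missing value before it.
>>> s = pd.Series([1.0, None, None, 3.0])
>>> s.ffill()
0    1.0
1    1.0
2    1.0
3    3.0
dtype: float64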
|
reference/api/pandas.Series.ffill.html
|
pandas.ExcelWriter.handles
|
`pandas.ExcelWriter.handles`
Handles to Excel sheets.
|
property ExcelWriter.handles[source]#
Handles to Excel sheets.
Deprecated since version 1.5.0.
|
reference/api/pandas.ExcelWriter.handles.html
|
pandas.Series.swaplevel
|
`pandas.Series.swaplevel`
Swap levels i and j in a MultiIndex.
```
>>> s = pd.Series(
... ["A", "B", "A", "C"],
... index=[
... ["Final exam", "Final exam", "Coursework", "Coursework"],
... ["History", "Geography", "History", "Geography"],
... ["January", "February", "March", "April"],
... ],
... )
>>> s
Final exam History January A
Geography February B
Coursework History March A
Geography April C
dtype: object
```
|
Series.swaplevel(i=-2, j=-1, copy=True)[source]#
Swap levels i and j in a MultiIndex.
Default is to swap the two innermost levels of the index.
Parameters
i, j : int or str
    Levels of the indices to be swapped. Can pass level name as string.
copy : bool, default True
    Whether to copy underlying data.
Returns
Series
    Series with levels swapped in MultiIndex.
Examples
>>> s = pd.Series(
... ["A", "B", "A", "C"],
... index=[
... ["Final exam", "Final exam", "Coursework", "Coursework"],
... ["History", "Geography", "History", "Geography"],
... ["January", "February", "March", "April"],
... ],
... )
>>> s
Final exam History January A
Geography February B
Coursework History March A
Geography April C
dtype: object
In the following example, we will swap the levels of the indices.
Here, we will swap the levels column-wise, but levels can be swapped row-wise
in a similar manner. Note that column-wise is the default behaviour.
By not supplying any arguments for i and j, we swap the last and second to
last indices.
>>> s.swaplevel()
Final exam January History A
February Geography B
Coursework March History A
April Geography C
dtype: object
By supplying one argument, we can choose which index to swap the last
index with. We can for example swap the first index with the last one as
follows.
>>> s.swaplevel(0)
January History Final exam A
February Geography Final exam B
March History Coursework A
April Geography Coursework C
dtype: object
We can also define explicitly which indices we want to swap by supplying values
for both i and j. Here, we for example swap the first and second indices.
>>> s.swaplevel(0, 1)
History Final exam January A
Geography Final exam February B
History Coursework March A
Geography Coursework April C
dtype: object
|
reference/api/pandas.Series.swaplevel.html
|
pandas.Series.tshift
|
`pandas.Series.tshift`
Shift the time index, using the index’s frequency if available.
|
Series.tshift(periods=1, freq=None, axis=0)[source]#
Shift the time index, using the index’s frequency if available.
Deprecated since version 1.1.0: Use shift instead.
Parameters
periods : int
    Number of periods to move, can be positive or negative.
freq : DateOffset, timedelta, or str, default None
    Increment to use from the tseries module or time rule expressed
    as a string (e.g. 'EOM').
axis : {0 or 'index', 1 or 'columns', None}, default 0
    Corresponds to the axis that contains the Index.
    For Series this parameter is unused and defaults to 0.
Returns
shifted : Series/DataFrame
Notes
If freq is not specified, tshift tries to use the freq or inferred_freq
attributes of the index. If neither of those attributes exists, a
ValueError is raised.
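Since tshift is deprecated, a hedged sketch of the documented replacement, shift with an explicit freq, which moves the index rather than the values:
>>> s = pd.Series([1, 2, 3],
...               index=pd.date_range("2023-01-01", periods=3, freq="D"))
>>> s.shift(periods=1, freq="D")  # equivalent to the old s.tshift(1)
2023-01-02    1
2023-01-03    2
2023-01-04    3
Freq: D, dtype: int64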
|
reference/api/pandas.Series.tshift.html
|
pandas.tseries.offsets.BusinessMonthBegin.is_month_end
|
`pandas.tseries.offsets.BusinessMonthBegin.is_month_end`
Return boolean whether a timestamp occurs on the month end.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
```
|
BusinessMonthBegin.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
|
reference/api/pandas.tseries.offsets.BusinessMonthBegin.is_month_end.html
|
pandas.core.groupby.GroupBy.mean
|
`pandas.core.groupby.GroupBy.mean`
Compute mean of groups, excluding missing values.
Include only float, int, boolean columns. If None, will attempt to use
everything, then use only numeric data.
```
>>> df = pd.DataFrame({'A': [1, 1, 2, 1, 2],
... 'B': [np.nan, 2, 3, 4, 5],
... 'C': [1, 2, 1, 1, 2]}, columns=['A', 'B', 'C'])
```
|
final GroupBy.mean(numeric_only=_NoDefault.no_default, engine='cython', engine_kwargs=None)[source]#
Compute mean of groups, excluding missing values.
Parameters
numeric_only : bool, default True
    Include only float, int, boolean columns. If None, will attempt to use
    everything, then use only numeric data.
engine : str, default None
    'cython' : Runs the operation through C-extensions from cython.
    'numba' : Runs the operation through JIT compiled code from numba.
    None : Defaults to 'cython' or the global setting compute.use_numba.
    New in version 1.4.0.
engine_kwargs : dict, default None
    For the 'cython' engine, there are no accepted engine_kwargs.
    For the 'numba' engine, the engine can accept nopython, nogil
    and parallel dictionary keys. The values must either be True or
    False. The default engine_kwargs for the 'numba' engine is
    {'nopython': True, 'nogil': False, 'parallel': False}.
    New in version 1.4.0.
Returns
pandas.Series or pandas.DataFrame
See also
Series.groupby : Apply a function groupby to a Series.
DataFrame.groupby : Apply a function groupby to each row or column of a DataFrame.
Examples
>>> df = pd.DataFrame({'A': [1, 1, 2, 1, 2],
... 'B': [np.nan, 2, 3, 4, 5],
... 'C': [1, 2, 1, 1, 2]}, columns=['A', 'B', 'C'])
Groupby one column and return the mean of the remaining columns in
each group.
>>> df.groupby('A').mean()
B C
A
1 3.0 1.333333
2 4.0 1.500000
Groupby two columns and return the mean of the remaining column.
>>> df.groupby(['A', 'B']).mean()
C
A B
1 2.0 2.0
4.0 1.0
2 3.0 1.0
5.0 2.0
Groupby one column and return the mean of only particular column in
the group.
>>> df.groupby('A')['B'].mean()
A
1 3.0
2 4.0
Name: B, dtype: float64
|
reference/api/pandas.core.groupby.GroupBy.mean.html
|
pandas.tseries.offsets.MonthEnd.is_quarter_end
|
`pandas.tseries.offsets.MonthEnd.is_quarter_end`
Return boolean whether a timestamp occurs on the quarter end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
```
|
MonthEnd.is_quarter_end()#
Return boolean whether a timestamp occurs on the quarter end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
|
reference/api/pandas.tseries.offsets.MonthEnd.is_quarter_end.html
|
pandas.core.window.ewm.ExponentialMovingWindow.cov
|
`pandas.core.window.ewm.ExponentialMovingWindow.cov`
Calculate the ewm (exponential weighted moment) sample covariance.
|
ExponentialMovingWindow.cov(other=None, pairwise=None, bias=False, numeric_only=False, **kwargs)[source]#
Calculate the ewm (exponential weighted moment) sample covariance.
Parameters
other : Series or DataFrame, optional
    If not supplied then will default to self and produce pairwise output.
pairwise : bool, default None
    If False then only matching columns between self and other will be
    used and the output will be a DataFrame.
    If True then all pairwise combinations will be calculated and the
    output will be a MultiIndex DataFrame in the case of DataFrame
    inputs. In the case of missing elements, only complete pairwise
    observations will be used.
bias : bool, default False
    Use a standard estimation bias correction.
numeric_only : bool, default False
    Include only float, int, boolean columns.
    New in version 1.5.0.
**kwargs
    For NumPy compatibility and will not have an effect on the result.
    Deprecated since version 1.5.0.
Returns
Series or DataFrame
    Return type is the same as the original object with np.float64 dtype.
See also
pandas.Series.ewm : Calling ewm with Series data.
pandas.DataFrame.ewm : Calling ewm with DataFrames.
pandas.Series.cov : Aggregating cov for Series.
pandas.DataFrame.cov : Aggregating cov for DataFrame.
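A minimal usage sketch (an addition, not from the original entry); the exact values depend on the chosen decay parameter, so they are not asserted here.
>>> ser1 = pd.Series([1.0, 2.0, 3.0, 4.0])
>>> ser2 = pd.Series([10.0, 11.0, 13.0, 16.0])
>>> cov = ser1.ewm(alpha=0.5).cov(ser2)
>>> bool(cov.isna()[0])  # a single observation has no sample covariance
True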
|
reference/api/pandas.core.window.ewm.ExponentialMovingWindow.cov.html
|
pandas.tseries.offsets.FY5253Quarter.freqstr
|
`pandas.tseries.offsets.FY5253Quarter.freqstr`
Return a string representing the frequency.
```
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
```
|
FY5253Quarter.freqstr#
Return a string representing the frequency.
Examples
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
>>> pd.offsets.BusinessHour(2).freqstr
'2BH'
>>> pd.offsets.Nano().freqstr
'N'
>>> pd.offsets.Nano(-3).freqstr
'-3N'
|
reference/api/pandas.tseries.offsets.FY5253Quarter.freqstr.html
|
pandas.Series.to_markdown
|
`pandas.Series.to_markdown`
Print Series in Markdown-friendly format.
```
>>> s = pd.Series(["elk", "pig", "dog", "quetzal"], name="animal")
>>> print(s.to_markdown())
| | animal |
|---:|:---------|
| 0 | elk |
| 1 | pig |
| 2 | dog |
| 3 | quetzal |
```
|
Series.to_markdown(buf=None, mode='wt', index=True, storage_options=None, **kwargs)[source]#
Print Series in Markdown-friendly format.
New in version 1.0.0.
Parameters
buf : str, Path or StringIO-like, optional, default None
    Buffer to write to. If None, the output is returned as a string.
mode : str, optional
    Mode in which file is opened, "wt" by default.
index : bool, optional, default True
    Add index (row) labels.
    New in version 1.1.0.
storage_options : dict, optional
    Extra options that make sense for a particular storage connection, e.g.
    host, port, username, password, etc. For HTTP(S) URLs the key-value pairs
    are forwarded to urllib.request.Request as header options. For other
    URLs (e.g. starting with "s3://", and "gcs://") the key-value pairs are
    forwarded to fsspec.open. Please see fsspec and urllib for more
    details, and for more examples on storage options refer here.
    New in version 1.2.0.
**kwargs
    These parameters will be passed to tabulate.
Returns
str
    Series in Markdown-friendly format.
Notes
Requires the tabulate package.
Examples
>>> s = pd.Series(["elk", "pig", "dog", "quetzal"], name="animal")
>>> print(s.to_markdown())
| | animal |
|---:|:---------|
| 0 | elk |
| 1 | pig |
| 2 | dog |
| 3 | quetzal |
Output markdown with a tabulate option.
>>> print(s.to_markdown(tablefmt="grid"))
+----+----------+
| | animal |
+====+==========+
| 0 | elk |
+----+----------+
| 1 | pig |
+----+----------+
| 2 | dog |
+----+----------+
| 3 | quetzal |
+----+----------+
|
reference/api/pandas.Series.to_markdown.html
|
pandas.DatetimeIndex.is_month_end
|
`pandas.DatetimeIndex.is_month_end`
Indicates whether the date is the last day of the month.
For Series, returns a Series with boolean values.
For DatetimeIndex, returns a boolean array.
```
>>> s = pd.Series(pd.date_range("2018-02-27", periods=3))
>>> s
0 2018-02-27
1 2018-02-28
2 2018-03-01
dtype: datetime64[ns]
>>> s.dt.is_month_start
0 False
1 False
2 True
dtype: bool
>>> s.dt.is_month_end
0 False
1 True
2 False
dtype: bool
```
|
property DatetimeIndex.is_month_end[source]#
Indicates whether the date is the last day of the month.
Returns
Series or array
    For Series, returns a Series with boolean values.
    For DatetimeIndex, returns a boolean array.
See also
is_month_start : Return a boolean indicating whether the date is the first day of the month.
is_month_end : Return a boolean indicating whether the date is the last day of the month.
Examples
This method is available on Series with datetime values under
the .dt accessor, and directly on DatetimeIndex.
>>> s = pd.Series(pd.date_range("2018-02-27", periods=3))
>>> s
0 2018-02-27
1 2018-02-28
2 2018-03-01
dtype: datetime64[ns]
>>> s.dt.is_month_start
0 False
1 False
2 True
dtype: bool
>>> s.dt.is_month_end
0 False
1 True
2 False
dtype: bool
>>> idx = pd.date_range("2018-02-27", periods=3)
>>> idx.is_month_start
array([False, False, True])
>>> idx.is_month_end
array([False, True, False])
|
reference/api/pandas.DatetimeIndex.is_month_end.html
|
pandas.DataFrame.cov
|
`pandas.DataFrame.cov`
Compute pairwise covariance of columns, excluding NA/null values.
```
>>> df = pd.DataFrame([(1, 2), (0, 3), (2, 0), (1, 1)],
... columns=['dogs', 'cats'])
>>> df.cov()
dogs cats
dogs 0.666667 -1.000000
cats -1.000000 1.666667
```
|
DataFrame.cov(min_periods=None, ddof=1, numeric_only=_NoDefault.no_default)[source]#
Compute pairwise covariance of columns, excluding NA/null values.
Compute the pairwise covariance among the series of a DataFrame.
The returned data frame is the covariance matrix of the columns
of the DataFrame.
Both NA and null values are automatically excluded from the
calculation. (See the note below about bias from missing values.)
A threshold can be set for the minimum number of
observations for each value created. Comparisons with observations
below this threshold will be returned as NaN.
This method is generally used for the analysis of time series data to
understand the relationship between different measures
across time.
Parameters
min_periodsint, optionalMinimum number of observations required per pair of columns
to have a valid result.
ddofint, default 1Delta degrees of freedom. The divisor used in calculations
is N - ddof, where N represents the number of elements.
New in version 1.1.0.
numeric_onlybool, default TrueInclude only float, int or boolean data.
New in version 1.5.0.
Deprecated since version 1.5.0: The default value of numeric_only will be False in a future
version of pandas.
Returns
DataFrameThe covariance matrix of the series of the DataFrame.
See also
Series.covCompute covariance with another Series.
core.window.ewm.ExponentialMovingWindow.covExponential weighted sample covariance.
core.window.expanding.Expanding.covExpanding sample covariance.
core.window.rolling.Rolling.covRolling sample covariance.
Notes
Returns the covariance matrix of the DataFrame’s time series.
The covariance is normalized by N-ddof.
For DataFrames that have Series that are missing data (assuming that
data is missing at random)
the returned covariance matrix will be an unbiased estimate
of the variance and covariance between the member Series.
However, for many applications this estimate may not be acceptable
because the estimated covariance matrix is not guaranteed to be positive
semi-definite. This could lead to estimated correlations having
absolute values which are greater than one, and/or a non-invertible
covariance matrix. See Estimation of covariance matrices for more details.
Examples
>>> df = pd.DataFrame([(1, 2), (0, 3), (2, 0), (1, 1)],
... columns=['dogs', 'cats'])
>>> df.cov()
dogs cats
dogs 0.666667 -1.000000
cats -1.000000 1.666667
>>> np.random.seed(42)
>>> df = pd.DataFrame(np.random.randn(1000, 5),
... columns=['a', 'b', 'c', 'd', 'e'])
>>> df.cov()
a b c d e
a 0.998438 -0.020161 0.059277 -0.008943 0.014144
b -0.020161 1.059352 -0.008543 -0.024738 0.009826
c 0.059277 -0.008543 1.010670 -0.001486 -0.000271
d -0.008943 -0.024738 -0.001486 0.921297 -0.013692
e 0.014144 0.009826 -0.000271 -0.013692 0.977795
Minimum number of periods
This method also supports an optional min_periods keyword
that specifies the required minimum number of non-NA observations for
each column pair in order to have a valid result:
>>> np.random.seed(42)
>>> df = pd.DataFrame(np.random.randn(20, 3),
... columns=['a', 'b', 'c'])
>>> df.loc[df.index[:5], 'a'] = np.nan
>>> df.loc[df.index[5:10], 'b'] = np.nan
>>> df.cov(min_periods=12)
a b c
a 0.316741 NaN -0.150812
b NaN 1.248003 0.191417
c -0.150812 0.191417 0.895202
|
reference/api/pandas.DataFrame.cov.html
|
pandas.DataFrame.aggregate
|
`pandas.DataFrame.aggregate`
Aggregate using one or more operations over the specified axis.
```
>>> df = pd.DataFrame([[1, 2, 3],
... [4, 5, 6],
... [7, 8, 9],
... [np.nan, np.nan, np.nan]],
... columns=['A', 'B', 'C'])
```
|
DataFrame.aggregate(func=None, axis=0, *args, **kwargs)[source]#
Aggregate using one or more operations over the specified axis.
Parameters
func : function, str, list or dict
    Function to use for aggregating the data. If a function, must either
    work when passed a DataFrame or when passed to DataFrame.apply.
    Accepted combinations are:
    function
    string function name
    list of functions and/or function names, e.g. [np.sum, 'mean']
    dict of axis labels -> functions, function names or list of such.
axis : {0 or 'index', 1 or 'columns'}, default 0
    If 0 or 'index': apply function to each column.
    If 1 or 'columns': apply function to each row.
*args
    Positional arguments to pass to func.
**kwargs
    Keyword arguments to pass to func.
Returns
scalar, Series or DataFrame
    The return can be:
    scalar : when Series.agg is called with a single function
    Series : when DataFrame.agg is called with a single function
    DataFrame : when DataFrame.agg is called with several functions
The aggregation operations are always performed over an axis, either the
index (default) or the column axis. This behavior is different from
numpy aggregation functions (mean, median, prod, sum, std, var),
where the default is to compute the aggregation of the flattened
array, e.g., numpy.mean(arr_2d) as opposed to numpy.mean(arr_2d, axis=0).
See also
DataFrame.apply : Perform any type of operations.
DataFrame.transform : Perform transformation type operations.
core.groupby.GroupBy : Perform operations over groups.
core.resample.Resampler : Perform operations over resampled bins.
core.window.Rolling : Perform operations over rolling window.
core.window.Expanding : Perform operations over expanding window.
core.window.ExponentialMovingWindow : Perform operation over exponential weighted window.
Notes
agg is an alias for aggregate. Use the alias.
Functions that mutate the passed object can produce unexpected
behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods
for more details.
A passed user-defined-function will be passed a Series for evaluation.
Examples
>>> df = pd.DataFrame([[1, 2, 3],
... [4, 5, 6],
... [7, 8, 9],
... [np.nan, np.nan, np.nan]],
... columns=['A', 'B', 'C'])
Aggregate these functions over the rows.
>>> df.agg(['sum', 'min'])
A B C
sum 12.0 15.0 18.0
min 1.0 2.0 3.0
Different aggregations per column.
>>> df.agg({'A' : ['sum', 'min'], 'B' : ['min', 'max']})
A B
sum 12.0 NaN
min 1.0 2.0
max NaN 8.0
Aggregate different functions over the columns and rename the index of the resulting
DataFrame.
>>> df.agg(x=('A', max), y=('B', 'min'), z=('C', np.mean))
A B C
x 7.0 NaN NaN
y NaN 2.0 NaN
z NaN NaN 6.0
Aggregate over the columns.
>>> df.agg("mean", axis="columns")
0 2.0
1 5.0
2 8.0
3 NaN
dtype: float64
|
reference/api/pandas.DataFrame.aggregate.html
|
pandas.tseries.offsets.BusinessDay.kwds
|
`pandas.tseries.offsets.BusinessDay.kwds`
Return a dict of extra parameters for the offset.
Examples
```
>>> pd.DateOffset(5).kwds
{}
```
|
BusinessDay.kwds#
Return a dict of extra parameters for the offset.
Examples
>>> pd.DateOffset(5).kwds
{}
>>> pd.offsets.FY5253Quarter().kwds
{'weekday': 0,
'startingMonth': 1,
'qtr_with_extra_week': 1,
'variation': 'nearest'}
|
reference/api/pandas.tseries.offsets.BusinessDay.kwds.html
|
pandas.errors.NullFrequencyError
|
`pandas.errors.NullFrequencyError`
Exception raised when a freq cannot be null.
|
exception pandas.errors.NullFrequencyError[source]#
Exception raised when a freq cannot be null.
Particularly DatetimeIndex.shift, TimedeltaIndex.shift,
PeriodIndex.shift.
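An illustrative trigger (assumed, not from the original entry): shifting a DatetimeIndex whose freq is not set.
>>> idx = pd.DatetimeIndex(["2023-01-01", "2023-01-05"])  # irregular, freq is None
>>> idx.shift(1)
Traceback (most recent call last):
NullFrequencyError: Cannot shift with no freq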
|
reference/api/pandas.errors.NullFrequencyError.html
|
pandas.PeriodIndex.to_timestamp
|
`pandas.PeriodIndex.to_timestamp`
Cast to DatetimeArray/Index.
|
PeriodIndex.to_timestamp(freq=None, how='start')[source]#
Cast to DatetimeArray/Index.
Parameters
freq : str or DateOffset, optional
    Target frequency. The default is 'D' for week or longer, 'S' otherwise.
how : {'s', 'e', 'start', 'end'}
    Whether to use the start or end of the time period being converted.
Returns
DatetimeArray/Index
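A minimal sketch (an addition, not from the original entry), converting monthly periods to timestamps at the start and end of each period:
>>> idx = pd.period_range("2023-01", periods=3, freq="M")
>>> idx.to_timestamp()
DatetimeIndex(['2023-01-01', '2023-02-01', '2023-03-01'], dtype='datetime64[ns]', freq='MS')
>>> idx.to_timestamp(how="end").strftime("%Y-%m-%d").tolist()
['2023-01-31', '2023-02-28', '2023-03-31']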
|
reference/api/pandas.PeriodIndex.to_timestamp.html
|
pandas.tseries.offsets.BusinessDay.onOffset
|
pandas.tseries.offsets.BusinessDay.onOffset
|
BusinessDay.onOffset()#
|
reference/api/pandas.tseries.offsets.BusinessDay.onOffset.html
|
Roadmap
|
Roadmap
This page provides an overview of the major themes in pandas’ development. Each of
these items requires a relatively large amount of effort to implement. These may
be achieved more quickly with dedicated funding or interest from contributors.
An item being on the roadmap does not mean that it will necessarily happen, even
with unlimited funding. During the implementation period we may discover issues
preventing the adoption of the feature.
Additionally, an item not being on the roadmap does not exclude it from inclusion
in pandas. The roadmap is intended for larger, fundamental changes to the project that
are likely to take months or years of developer time. Smaller-scoped items will continue
to be tracked on our issue tracker.
See Roadmap evolution for proposing changes to this document.
pandas Extension types allow for extending NumPy types with custom
data types and array storage. pandas uses extension types internally, and provides
an interface for 3rd-party libraries to define their own custom data types.
|
This page provides an overview of the major themes in pandas’ development. Each of
these items requires a relatively large amount of effort to implement. These may
be achieved more quickly with dedicated funding or interest from contributors.
An item being on the roadmap does not mean that it will necessarily happen, even
with unlimited funding. During the implementation period we may discover issues
preventing the adoption of the feature.
Additionally, an item not being on the roadmap does not exclude it from inclusion
in pandas. The roadmap is intended for larger, fundamental changes to the project that
are likely to take months or years of developer time. Smaller-scoped items will continue
to be tracked on our issue tracker.
See Roadmap evolution for proposing changes to this document.
Extensibility#
pandas Extension types allow for extending NumPy types with custom
data types and array storage. pandas uses extension types internally, and provides
an interface for 3rd-party libraries to define their own custom data types.
Many parts of pandas still unintentionally convert data to a NumPy array.
These problems are especially pronounced for nested data.
We’d like to improve the handling of extension arrays throughout the library,
making their behavior more consistent with the handling of NumPy arrays. We’ll do this
by cleaning up pandas’ internals and adding new methods to the extension array interface.
String data type#
Currently, pandas stores text data in an object-dtype NumPy array.
The current implementation has two primary drawbacks: First, object-dtype
is not specific to strings: any Python object can be stored in an
object-dtype array, not just strings. Second: this is not efficient. The
NumPy memory model isn't especially well-suited to variable-width text data.
To solve the first issue, we propose a new extension type for string data. This
will initially be opt-in, with users explicitly requesting dtype="string".
The array backing this string dtype may initially be the current implementation:
an object-dtype NumPy array of Python strings.
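This opt-in dtype already ships; a brief sketch (an addition to the roadmap text) of requesting it explicitly:
>>> s = pd.Series(["pandas", "arrow", None], dtype="string")
>>> s
0    pandas
1     arrow
2      <NA>
dtype: string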
To solve the second issue (performance), we'll explore alternative in-memory
array libraries (for example, Apache Arrow). As part of the work, we may
need to implement certain operations expected by pandas users (for example
the algorithm used in Series.str.upper). That work may be done outside of
pandas.
Consistent missing value handling#
Currently, pandas handles missing data differently for different data types. We
use different types to indicate that a value is missing (np.nan for
floating-point data, np.nan or None for object-dtype data – typically
strings or booleans – with missing values, and pd.NaT for datetimelike
data). Integer data cannot store missing values and is cast to float when they occur. In addition,
pandas 1.0 introduced a new missing value sentinel, pd.NA, which is being
used for the experimental nullable integer, boolean, and string data types.
These different missing values have different behaviors in user-facing
operations. Specifically, we introduced different semantics for the nullable
data types for certain operations (e.g. propagating in comparison operations
instead of comparing as False).
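For example (a sketch added here, not from the original text), comparisons on the nullable integer dtype propagate pd.NA instead of evaluating to False:
>>> s = pd.Series([1, None], dtype="Int64")
>>> s == 1
0    True
1    <NA>
dtype: boolean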
Long term, we want to introduce consistent missing data handling for all data
types. This includes consistent behavior in all operations (indexing, arithmetic
operations, comparisons, etc.). There has been discussion of eventually making
the new semantics the default.
This has been discussed at GH28095 (and
linked issues), and described in more detail in this
design doc.
Apache Arrow interoperability#
Apache Arrow is a cross-language development
platform for in-memory data. The Arrow logical types are closely aligned with
typical pandas use cases.
We’d like to provide better-integrated support for Arrow memory and data types
within pandas. This will let us take advantage of its I/O capabilities and
provide for better interoperability with other languages and libraries
using Arrow.
Block manager rewrite#
We'd like to replace pandas' current internal data structures (a collection of
1- or 2-D arrays) with a simpler collection of 1-D arrays.
pandas' internal data model is quite complex. A DataFrame is made up of
one or more 2-dimensional “blocks”, with one or more blocks per dtype. This
collection of 2-D arrays is managed by the BlockManager.
The primary benefit of the BlockManager is improved performance on certain
operations (construction from a 2D array, binary operations, reductions across the columns),
especially for wide DataFrames. However, the BlockManager substantially increases the
complexity and maintenance burden of pandas.
By replacing the BlockManager we hope to achieve
Substantially simpler code
Easier extensibility with new logical types
Better user control over memory use and layout
Improved micro-performance
Option to provide a C / Cython API to pandas’ internals
See these design documents
for more.
Decoupling of indexing and internals#
The code for getting and setting values in pandas’ data structures needs refactoring.
In particular, we must clearly separate code that converts keys (e.g., the argument
to DataFrame.loc) to positions from code that uses these positions to get
or set values. This is related to the proposed BlockManager rewrite. Currently, the
BlockManager sometimes uses label-based, rather than position-based, indexing.
We propose that it should only work with positional indexing, and the translation of keys
to positions should be entirely done at a higher level.
Indexing is a complicated API with many subtleties. This refactor will require care
and attention. The following principles should inspire refactoring of indexing code and
should result on cleaner, simpler, and more performant code.
1. Label indexing must never involve looking in an axis twice for the same label(s).
This implies that any validation step must either:
limit validation to general features (e.g. dtype/structure of the key/index), or
reuse the result for the actual indexing.
2. Indexers must never rely on an explicit call to other indexers.
For instance, it is OK to have some internal method of .loc call some
internal method of __getitem__ (or of their common base class),
but never in the code flow of .loc should the_obj[something] appear.
3. Execution of positional indexing must never involve labels (as currently, sadly, happens).
That is, the code flow of a getter call (or a setter call in which the right hand side is non-indexed)
to .iloc should never involve the axes of the object in any way.
4. Indexing must never involve accessing/modifying values (i.e., act on ._data or .values) more than once.
The following steps must hence be clearly decoupled:
find positions we need to access/modify on each axis
(if we are accessing) derive the type of object we need to return (dimensionality)
actually access/modify the values
(if we are accessing) construct the return object
5. As a corollary to the decoupling between 4.i and 4.iii, any code which deals on how data is stored
(including any combination of handling multiple dtypes, and sparse storage, categoricals, third-party types)
must be independent from code that deals with identifying affected rows/columns,
and take place only once step 4.i is completed.
In particular, such code should most probably not live in pandas/core/indexing.py
… and must not depend in any way on the type(s) of axes (e.g. no MultiIndex special cases)
6. As a corollary to point 1.i, Index (sub)classes must provide separate methods for any desired validity check of label(s) which does not involve actual lookup,
on the one side, and for any required conversion/adaptation/lookup of label(s), on the other.
7. Use of trial and error should be limited, and anyway restricted to catch only exceptions
which are actually expected (typically KeyError).
In particular, code should never (intentionally) raise new exceptions in the except portion of a try... except.
8. Any code portion which is not specific to setters and getters must be shared,
and when small differences in behavior are expected (e.g. getting with .loc raises for
missing labels, setting still doesn’t), they can be managed with a specific parameter.
Numba-accelerated operations#
Numba is a JIT compiler for Python code. We’d like to provide
ways for users to apply their own Numba-jitted functions where pandas accepts user-defined functions
(for example, Series.apply(), DataFrame.apply(), DataFrame.applymap(),
and in groupby and window contexts). This will improve the performance of
user-defined-functions in these operations by staying within compiled code.
Performance monitoring#
pandas uses airspeed velocity to
monitor for performance regressions. ASV itself is a fabulous tool, but requires
some additional work to be integrated into an open source project’s workflow.
The asv-runner organization, currently made up
of pandas maintainers, provides tools built on top of ASV. We have a physical
machine for running a number of project’s benchmarks, and tools managing the
benchmark runs and reporting on results.
We’d like to fund improvements and maintenance of these tools to
Be more stable. Currently, they’re maintained on the nights and weekends when
a maintainer has free time.
Tune the system for benchmarks to improve stability, following
https://pyperf.readthedocs.io/en/latest/system.html
Build a GitHub bot to request ASV runs before a PR is merged. Currently, the
benchmarks are only run nightly.
Roadmap evolution#
pandas continues to evolve. The direction is primarily determined by community
interest. Everyone is welcome to review existing items on the roadmap and
to propose a new item.
Each item on the roadmap should be a short summary of a larger design proposal.
The proposal should include
Short summary of the changes, which would be appropriate for inclusion in
the roadmap if accepted.
Motivation for the changes.
An explanation of why the change is in scope for pandas.
Detailed design: Preferably with example-usage (even if not implemented yet)
and API documentation
API Change: Any API changes that may result from the proposal.
That proposal may then be submitted as a GitHub issue, where the pandas maintainers
can review and comment on the design. The pandas mailing list
should be notified of the proposal.
When there’s agreement that an implementation
would be welcome, the roadmap should be updated to include the summary and a
link to the discussion issue.
Completed items#
This section records now completed items from the pandas roadmap.
Documentation improvements#
We improved the pandas documentation
The pandas community worked with others to build the pydata-sphinx-theme,
which is now used for https://pandas.pydata.org/docs/ (GH15556).
Getting started contains a number of resources intended for new
pandas users coming from a variety of backgrounds (GH26831).
|
development/roadmap.html
|
pandas.CategoricalIndex.codes
|
`pandas.CategoricalIndex.codes`
The category codes of this categorical.
Codes are an array of integers which are the positions of the actual
values in the categories array.
|
property CategoricalIndex.codes[source]#
The category codes of this categorical.
Codes are an array of integers which are the positions of the actual
values in the categories array.
There is no setter, use the other categorical methods and the normal item
setter to change values in the categorical.
Returns
ndarray[int]
    A non-writable view of the codes array.
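A minimal illustration (an addition, not from the original entry):
>>> ci = pd.CategoricalIndex(["a", "b", "a", "c"])
>>> ci.codes
array([0, 1, 0, 2], dtype=int8)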
|
reference/api/pandas.CategoricalIndex.codes.html
|
pandas.tseries.offsets.YearBegin.rollforward
|
`pandas.tseries.offsets.YearBegin.rollforward`
Roll provided date forward to next offset only if not on offset.
Rolled timestamp if not on offset, otherwise unchanged timestamp.
|
YearBegin.rollforward()#
Roll provided date forward to next offset only if not on offset.
Returns
TimeStamp
    Rolled timestamp if not on offset, otherwise unchanged timestamp.
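A hedged sketch (an addition, not from the original entry), using the January-1 anchor of YearBegin:
>>> pd.offsets.YearBegin().rollforward(pd.Timestamp("2022-03-15"))
Timestamp('2023-01-01 00:00:00')
>>> pd.offsets.YearBegin().rollforward(pd.Timestamp("2022-01-01"))  # already on offset
Timestamp('2022-01-01 00:00:00')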
|
reference/api/pandas.tseries.offsets.YearBegin.rollforward.html
|
pandas.Series.dt.time
|
`pandas.Series.dt.time`
Returns numpy array of datetime.time objects.
|
Series.dt.time[source]#
Returns numpy array of datetime.time objects.
The time part of the Timestamps.
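A minimal illustration (an addition, not from the original entry):
>>> s = pd.Series(pd.to_datetime(["2023-01-01 10:00", "2023-01-01 11:30"]))
>>> s.dt.time
0    10:00:00
1    11:30:00
dtype: object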
|
reference/api/pandas.Series.dt.time.html
|
pandas.Index.memory_usage
|
`pandas.Index.memory_usage`
Memory usage of the values.
|
Index.memory_usage(deep=False)[source]#
Memory usage of the values.
Parameters
deep : bool, default False
    Introspect the data deeply, interrogate object dtypes for
    system-level memory consumption.
Returns
bytes used
See also
numpy.ndarray.nbytes : Total bytes consumed by the elements of the array.
Notes
Memory usage does not include memory consumed by elements that
are not components of the array if deep=False or if used on PyPy.
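A hedged sketch (an addition, not from the original entry); exact byte counts vary by platform and version, so only the relationship is asserted.
>>> idx = pd.Index(["a", "bb", "ccc"])
>>> idx.memory_usage(deep=True) > idx.memory_usage()  # deep also counts the strings
True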
|
reference/api/pandas.Index.memory_usage.html
|
pandas.errors.PossibleDataLossError
|
`pandas.errors.PossibleDataLossError`
Exception raised when trying to open a HDFStore file when already opened.
```
>>> store = pd.HDFStore('my-store', 'a')
>>> store.open("w")
... # PossibleDataLossError: Re-opening the file [my-store] with mode [a]...
```
|
exception pandas.errors.PossibleDataLossError[source]#
Exception raised when trying to open a HDFStore file when already opened.
Examples
>>> store = pd.HDFStore('my-store', 'a')
>>> store.open("w")
... # PossibleDataLossError: Re-opening the file [my-store] with mode [a]...
|
reference/api/pandas.errors.PossibleDataLossError.html
|
pandas.Series.equals
|
`pandas.Series.equals`
Test whether two objects contain the same elements.
```
>>> df = pd.DataFrame({1: [10], 2: [20]})
>>> df
1 2
0 10 20
```
|
Series.equals(other)[source]#
Test whether two objects contain the same elements.
This function allows two Series or DataFrames to be compared against
each other to see if they have the same shape and elements. NaNs in
the same location are considered equal.
The row/column index does not need to have the same type, as long
as the values are considered equal. Corresponding columns must be of
the same dtype.
Parameters
otherSeries or DataFrameThe other Series or DataFrame to be compared with the first.
Returns
boolTrue if all elements are the same in both objects, False
otherwise.
See also
Series.eqCompare two Series objects of the same length and return a Series where each element is True if the element in each Series is equal, False otherwise.
DataFrame.eqCompare two DataFrame objects of the same shape and return a DataFrame where each element is True if the respective element in each DataFrame is equal, False otherwise.
testing.assert_series_equalRaises an AssertionError if left and right are not equal. Provides an easy interface to ignore inequality in dtypes, indexes and precision among others.
testing.assert_frame_equalLike assert_series_equal, but targets DataFrames.
numpy.array_equalReturn True if two arrays have the same shape and elements, False otherwise.
Examples
>>> df = pd.DataFrame({1: [10], 2: [20]})
>>> df
1 2
0 10 20
DataFrames df and exactly_equal have the same types and values for
their elements and column labels, which will return True.
>>> exactly_equal = pd.DataFrame({1: [10], 2: [20]})
>>> exactly_equal
1 2
0 10 20
>>> df.equals(exactly_equal)
True
DataFrames df and different_column_type have the same element
types and values, but have different types for the column labels,
which will still return True.
>>> different_column_type = pd.DataFrame({1.0: [10], 2.0: [20]})
>>> different_column_type
1.0 2.0
0 10 20
>>> df.equals(different_column_type)
True
DataFrames df and different_data_type have different types for the
same values for their elements, and will return False even though
their column labels are the same values and types.
>>> different_data_type = pd.DataFrame({1: [10.0], 2: [20.0]})
>>> different_data_type
1 2
0 10.0 20.0
>>> df.equals(different_data_type)
False
|
reference/api/pandas.Series.equals.html
|
pandas.tseries.offsets.BusinessMonthEnd.is_anchored
|
`pandas.tseries.offsets.BusinessMonthEnd.is_anchored`
Return boolean whether the frequency is a unit frequency (n=1).
Examples
```
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
```
|
BusinessMonthEnd.is_anchored()#
Return boolean whether the frequency is a unit frequency (n=1).
Examples
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
|
reference/api/pandas.tseries.offsets.BusinessMonthEnd.is_anchored.html
|
pandas.tseries.offsets.Tick.apply_index
|
`pandas.tseries.offsets.Tick.apply_index`
Vectorized apply of DateOffset to DatetimeIndex.
|
Tick.apply_index()#
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead.
Parameters
indexDatetimeIndex
Returns
DatetimeIndex
Raises
NotImplementedErrorWhen the specific offset subclass does not have a vectorized
implementation.
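A minimal sketch of the documented replacement, offset + dtindex:
```
import pandas as pd

dti = pd.date_range("2022-01-01", periods=3, freq="D")
shifted = pd.offsets.Hour(2) + dti  # vectorized; replaces Tick.apply_index
```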
|
reference/api/pandas.tseries.offsets.Tick.apply_index.html
|
pandas.Series.reset_index
|
`pandas.Series.reset_index`
Generate a new DataFrame or Series with the index reset.
```
>>> s = pd.Series([1, 2, 3, 4], name='foo',
... index=pd.Index(['a', 'b', 'c', 'd'], name='idx'))
```
|
Series.reset_index(level=None, *, drop=False, name=_NoDefault.no_default, inplace=False, allow_duplicates=False)[source]#
Generate a new DataFrame or Series with the index reset.
This is useful when the index needs to be treated as a column, or
when the index is meaningless and needs to be reset to the default
before another operation.
Parameters
levelint, str, tuple, or list, optionalFor a Series with a MultiIndex, only remove the specified levels
from the index. Removes all levels by default.
dropbool, default FalseJust reset the index, without inserting it as a column in
the new DataFrame.
nameobject, optionalThe name to use for the column containing the original Series
values. Uses self.name by default. This argument is ignored
when drop is True.
inplacebool, default FalseModify the Series in place (do not create a new object).
allow_duplicatesbool, default FalseAllow duplicate column labels to be created.
New in version 1.5.0.
Returns
Series or DataFrame or NoneWhen drop is False (the default), a DataFrame is returned.
The newly created columns will come first in the DataFrame,
followed by the original Series values.
When drop is True, a Series is returned.
In either case, if inplace=True, no value is returned.
See also
DataFrame.reset_indexAnalogous function for DataFrame.
Examples
>>> s = pd.Series([1, 2, 3, 4], name='foo',
... index=pd.Index(['a', 'b', 'c', 'd'], name='idx'))
Generate a DataFrame with default index.
>>> s.reset_index()
idx foo
0 a 1
1 b 2
2 c 3
3 d 4
To specify the name of the new column use name.
>>> s.reset_index(name='values')
idx values
0 a 1
1 b 2
2 c 3
3 d 4
To generate a new Series with the default index, set drop to True.
>>> s.reset_index(drop=True)
0 1
1 2
2 3
3 4
Name: foo, dtype: int64
To update the Series in place, without generating a new one,
set inplace to True. Note that it also requires drop=True.
>>> s.reset_index(inplace=True, drop=True)
>>> s
0 1
1 2
2 3
3 4
Name: foo, dtype: int64
The level parameter is useful for a Series with a multi-level
index.
>>> arrays = [np.array(['bar', 'bar', 'baz', 'baz']),
... np.array(['one', 'two', 'one', 'two'])]
>>> s2 = pd.Series(
... range(4), name='foo',
... index=pd.MultiIndex.from_arrays(arrays,
... names=['a', 'b']))
To remove a specific level from the Index, use level.
>>> s2.reset_index(level='a')
a foo
b
one bar 0
two bar 1
one baz 2
two baz 3
If level is not set, all levels are removed from the Index.
>>> s2.reset_index()
a b foo
0 bar one 0
1 bar two 1
2 baz one 2
3 baz two 3
|
reference/api/pandas.Series.reset_index.html
|
pandas.tseries.offsets.Micro.is_year_end
|
`pandas.tseries.offsets.Micro.is_year_end`
Return boolean whether a timestamp occurs on the year end.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
```
|
Micro.is_year_end()#
Return boolean whether a timestamp occurs on the year end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
|
reference/api/pandas.tseries.offsets.Micro.is_year_end.html
|
pandas.errors.PyperclipWindowsException
|
`pandas.errors.PyperclipWindowsException`
Exception raised when clipboard functionality is unsupported by Windows.
Access to the clipboard handle is denied because another
window process is accessing it.
|
exception pandas.errors.PyperclipWindowsException(message)[source]#
Exception raised when clipboard functionality is unsupported by Windows.
Access to the clipboard handle is denied because another
window process is accessing it.
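An illustrative handling pattern (hypothetical, not from the original docstring); to_clipboard can only raise this on Windows when another process holds the clipboard:
```
import pandas as pd
from pandas.errors import PyperclipWindowsException

try:
    pd.DataFrame({"a": [1]}).to_clipboard()
except PyperclipWindowsException:
    pass  # clipboard held by another window process; retry or skip
```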
|
reference/api/pandas.errors.PyperclipWindowsException.html
|
pandas.Series.multiply
|
`pandas.Series.multiply`
Return Multiplication of series and other, element-wise (binary operator mul).
```
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.multiply(b, fill_value=0)
a 1.0
b 0.0
c 0.0
d 0.0
e NaN
dtype: float64
```
|
Series.multiply(other, level=None, fill_value=None, axis=0)[source]#
Return Multiplication of series and other, element-wise (binary operator mul).
Equivalent to series * other, but with support to substitute a fill_value for
missing data in either one of the inputs.
Parameters
otherSeries or scalar value
levelint or nameBroadcast across a level, matching Index values on the
passed MultiIndex level.
fill_valueNone or float value, default None (NaN)Fill existing missing (NaN) values, and any new element needed for
successful Series alignment, with this value before computation.
If data in both corresponding Series locations is missing
the result of filling (at that location) will be missing.
axis{0 or ‘index’}Unused. Parameter needed for compatibility with DataFrame.
Returns
SeriesThe result of the operation.
See also
Series.rmulReverse of the Multiplication operator, see Python documentation for more details.
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.multiply(b, fill_value=0)
a 1.0
b 0.0
c 0.0
d 0.0
e NaN
dtype: float64
|
reference/api/pandas.Series.multiply.html
|
pandas.MultiIndex.sortlevel
|
`pandas.MultiIndex.sortlevel`
Sort MultiIndex at the requested level.
```
>>> mi = pd.MultiIndex.from_arrays([[0, 0], [2, 1]])
>>> mi
MultiIndex([(0, 2),
(0, 1)],
)
```
|
MultiIndex.sortlevel(level=0, ascending=True, sort_remaining=True)[source]#
Sort MultiIndex at the requested level.
The result will respect the original ordering of the associated
factor at that level.
Parameters
levellist-like, int or str, default 0If a string is given, must be a name of the level.
If list-like must be names or ints of levels.
ascendingbool, default TrueFalse to sort in descending order.
Can also be a list to specify a directed ordering.
sort_remainingSort by the remaining levels after level.
Returns
sorted_indexpd.MultiIndexResulting index.
indexernp.ndarray[np.intp]Indices of output values in original index.
Examples
>>> mi = pd.MultiIndex.from_arrays([[0, 0], [2, 1]])
>>> mi
MultiIndex([(0, 2),
(0, 1)],
)
>>> mi.sortlevel()
(MultiIndex([(0, 1),
(0, 2)],
), array([1, 0]))
>>> mi.sortlevel(sort_remaining=False)
(MultiIndex([(0, 2),
(0, 1)],
), array([0, 1]))
>>> mi.sortlevel(1)
(MultiIndex([(0, 1),
(0, 2)],
), array([1, 0]))
>>> mi.sortlevel(1, ascending=False)
(MultiIndex([(0, 2),
(0, 1)],
), array([0, 1]))
|
reference/api/pandas.MultiIndex.sortlevel.html
|
pandas.tseries.offsets.CustomBusinessDay.offset
|
`pandas.tseries.offsets.CustomBusinessDay.offset`
Alias for self._offset.
|
CustomBusinessDay.offset#
Alias for self._offset.
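An illustrative example (not part of the original docstring), assuming a time offset passed at construction:
```
>>> from datetime import timedelta
>>> cbd = pd.offsets.CustomBusinessDay(offset=timedelta(hours=2))
>>> cbd.offset
datetime.timedelta(seconds=7200)
```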
|
reference/api/pandas.tseries.offsets.CustomBusinessDay.offset.html
|
pandas.DataFrame.product
|
`pandas.DataFrame.product`
Return the product of the values over the requested axis.
```
>>> pd.Series([], dtype="float64").prod()
1.0
```
|
DataFrame.product(axis=None, skipna=True, level=None, numeric_only=None, min_count=0, **kwargs)[source]#
Return the product of the values over the requested axis.
Parameters
axis{index (0), columns (1)}Axis for the function to be applied on.
For Series this parameter is unused and defaults to 0.
skipnabool, default TrueExclude NA/null values when computing the result.
levelint or level name, default NoneIf the axis is a MultiIndex (hierarchical), count along a
particular level, collapsing into a Series.
Deprecated since version 1.3.0: The level keyword is deprecated. Use groupby instead.
numeric_onlybool, default NoneInclude only float, int, boolean columns. If None, will attempt to use
everything, then use only numeric data. Not implemented for Series.
Deprecated since version 1.5.0: Specifying numeric_only=None is deprecated. The default value will be
False in a future version of pandas.
min_countint, default 0The required number of valid values to perform the operation. If fewer than
min_count non-NA values are present the result will be NA.
**kwargsAdditional keyword arguments to be passed to the function.
Returns
Series or DataFrame (if level specified)
See also
Series.sumReturn the sum.
Series.minReturn the minimum.
Series.maxReturn the maximum.
Series.idxminReturn the index of the minimum.
Series.idxmaxReturn the index of the maximum.
DataFrame.sumReturn the sum over the requested axis.
DataFrame.minReturn the minimum over the requested axis.
DataFrame.maxReturn the maximum over the requested axis.
DataFrame.idxminReturn the index of the minimum over the requested axis.
DataFrame.idxmaxReturn the index of the maximum over the requested axis.
Examples
By default, the product of an empty or all-NA Series is 1.
>>> pd.Series([], dtype="float64").prod()
1.0
This can be controlled with the min_count parameter.
>>> pd.Series([], dtype="float64").prod(min_count=1)
nan
Thanks to the skipna parameter, min_count handles all-NA and
empty series identically.
>>> pd.Series([np.nan]).prod()
1.0
>>> pd.Series([np.nan]).prod(min_count=1)
nan
|
reference/api/pandas.DataFrame.product.html
|
pandas.Series.abs
|
`pandas.Series.abs`
Return a Series/DataFrame with absolute numeric value of each element.
```
>>> s = pd.Series([-1.10, 2, -3.33, 4])
>>> s.abs()
0 1.10
1 2.00
2 3.33
3 4.00
dtype: float64
```
|
Series.abs()[source]#
Return a Series/DataFrame with absolute numeric value of each element.
This function only applies to elements that are all numeric.
Returns
absSeries/DataFrame containing the absolute value of each element.
See also
numpy.absoluteCalculate the absolute value element-wise.
Notes
For complex inputs, 1.2 + 1j, the absolute value is
\(\sqrt{ a^2 + b^2 }\).
Examples
Absolute numeric values in a Series.
>>> s = pd.Series([-1.10, 2, -3.33, 4])
>>> s.abs()
0 1.10
1 2.00
2 3.33
3 4.00
dtype: float64
Absolute numeric values in a Series with complex numbers.
>>> s = pd.Series([1.2 + 1j])
>>> s.abs()
0 1.56205
dtype: float64
Absolute numeric values in a Series with a Timedelta element.
>>> s = pd.Series([pd.Timedelta('1 days')])
>>> s.abs()
0 1 days
dtype: timedelta64[ns]
Select rows with data closest to certain value using argsort (from
StackOverflow).
>>> df = pd.DataFrame({
... 'a': [4, 5, 6, 7],
... 'b': [10, 20, 30, 40],
... 'c': [100, 50, -30, -50]
... })
>>> df
a b c
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
>>> df.loc[(df.c - 43).abs().argsort()]
a b c
1 5 20 50
0 4 10 100
2 6 30 -30
3 7 40 -50
|
reference/api/pandas.Series.abs.html
|
pandas.Series.transform
|
`pandas.Series.transform`
Call func on self producing a Series with the same axis shape as self.
```
>>> df = pd.DataFrame({'A': range(3), 'B': range(1, 4)})
>>> df
A B
0 0 1
1 1 2
2 2 3
>>> df.transform(lambda x: x + 1)
A B
0 1 2
1 2 3
2 3 4
```
|
Series.transform(func, axis=0, *args, **kwargs)[source]#
Call func on self producing a Series with the same axis shape as self.
Parameters
funcfunction, str, list-like or dict-likeFunction to use for transforming the data. If a function, must either
work when passed a Series or when passed to Series.apply. If func
is both list-like and dict-like, dict-like behavior takes precedence.
Accepted combinations are:
function
string function name
list-like of functions and/or function names, e.g. [np.exp, 'sqrt']
dict-like of axis labels -> functions, function names or list-like of such.
axis{0 or ‘index’}Unused. Parameter needed for compatibility with DataFrame.
*argsPositional arguments to pass to func.
**kwargsKeyword arguments to pass to func.
Returns
SeriesA Series that must have the same length as self.
Raises
ValueErrorIf the returned Series has a different length than self.
See also
Series.aggOnly perform aggregating type operations.
Series.applyInvoke function on a Series.
Notes
Functions that mutate the passed object can produce unexpected
behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods
for more details.
Examples
>>> df = pd.DataFrame({'A': range(3), 'B': range(1, 4)})
>>> df
A B
0 0 1
1 1 2
2 2 3
>>> df.transform(lambda x: x + 1)
A B
0 1 2
1 2 3
2 3 4
Even though the resulting Series must have the same length as the
input Series, it is possible to provide several input functions:
>>> s = pd.Series(range(3))
>>> s
0 0
1 1
2 2
dtype: int64
>>> s.transform([np.sqrt, np.exp])
sqrt exp
0 0.000000 1.000000
1 1.000000 2.718282
2 1.414214 7.389056
You can call transform on a GroupBy object:
>>> df = pd.DataFrame({
... "Date": [
... "2015-05-08", "2015-05-07", "2015-05-06", "2015-05-05",
... "2015-05-08", "2015-05-07", "2015-05-06", "2015-05-05"],
... "Data": [5, 8, 6, 1, 50, 100, 60, 120],
... })
>>> df
Date Data
0 2015-05-08 5
1 2015-05-07 8
2 2015-05-06 6
3 2015-05-05 1
4 2015-05-08 50
5 2015-05-07 100
6 2015-05-06 60
7 2015-05-05 120
>>> df.groupby('Date')['Data'].transform('sum')
0 55
1 108
2 66
3 121
4 55
5 108
6 66
7 121
Name: Data, dtype: int64
>>> df = pd.DataFrame({
... "c": [1, 1, 1, 2, 2, 2, 2],
... "type": ["m", "n", "o", "m", "m", "n", "n"]
... })
>>> df
c type
0 1 m
1 1 n
2 1 o
3 2 m
4 2 m
5 2 n
6 2 n
>>> df['size'] = df.groupby('c')['type'].transform(len)
>>> df
c type size
0 1 m 3
1 1 n 3
2 1 o 3
3 2 m 4
4 2 m 4
5 2 n 4
6 2 n 4
|
reference/api/pandas.Series.transform.html
|
pandas.tseries.offsets.BMonthBegin
|
`pandas.tseries.offsets.BMonthBegin`
alias of pandas._libs.tslibs.offsets.BusinessMonthBegin
|
pandas.tseries.offsets.BMonthBegin#
alias of pandas._libs.tslibs.offsets.BusinessMonthBegin
|
reference/api/pandas.tseries.offsets.BMonthBegin.html
|
pandas.tseries.offsets.Milli.is_year_start
|
`pandas.tseries.offsets.Milli.is_year_start`
Return boolean whether a timestamp occurs on the year start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
```
|
Milli.is_year_start()#
Return boolean whether a timestamp occurs on the year start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
|
reference/api/pandas.tseries.offsets.Milli.is_year_start.html
|
pandas.tseries.offsets.BusinessHour.is_on_offset
|
`pandas.tseries.offsets.BusinessHour.is_on_offset`
Return boolean whether a timestamp intersects with this frequency.
Timestamp to check intersections with frequency.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
```
|
BusinessHour.is_on_offset()#
Return boolean whether a timestamp intersects with this frequency.
Parameters
dtdatetime.datetimeTimestamp to check intersections with frequency.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
>>> ts = pd.Timestamp(2022, 8, 6)
>>> ts.day_name()
'Saturday'
>>> freq = pd.offsets.BusinessDay(1)
>>> freq.is_on_offset(ts)
False
|
reference/api/pandas.tseries.offsets.BusinessHour.is_on_offset.html
|
Window
|
Window
|
Rolling objects are returned by .rolling calls: pandas.DataFrame.rolling(), pandas.Series.rolling(), etc.
Expanding objects are returned by .expanding calls: pandas.DataFrame.expanding(), pandas.Series.expanding(), etc.
ExponentialMovingWindow objects are returned by .ewm calls: pandas.DataFrame.ewm(), pandas.Series.ewm(), etc.
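A minimal sketch (not part of the original page) of obtaining the three kinds of window objects:
```
import pandas as pd

s = pd.Series([1, 2, 3, 4, 5])
r = s.rolling(window=2).mean()  # Rolling object, then an aggregation
e = s.expanding().sum()         # Expanding object
w = s.ewm(alpha=0.5).mean()     # ExponentialMovingWindow object
```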
Rolling window functions#
Rolling.count([numeric_only])
Calculate the rolling count of non NaN observations.
Rolling.sum([numeric_only, engine, ...])
Calculate the rolling sum.
Rolling.mean([numeric_only, engine, ...])
Calculate the rolling mean.
Rolling.median([numeric_only, engine, ...])
Calculate the rolling median.
Rolling.var([ddof, numeric_only, engine, ...])
Calculate the rolling variance.
Rolling.std([ddof, numeric_only, engine, ...])
Calculate the rolling standard deviation.
Rolling.min([numeric_only, engine, ...])
Calculate the rolling minimum.
Rolling.max([numeric_only, engine, ...])
Calculate the rolling maximum.
Rolling.corr([other, pairwise, ddof, ...])
Calculate the rolling correlation.
Rolling.cov([other, pairwise, ddof, ...])
Calculate the rolling sample covariance.
Rolling.skew([numeric_only])
Calculate the rolling unbiased skewness.
Rolling.kurt([numeric_only])
Calculate the rolling Fisher's definition of kurtosis without bias.
Rolling.apply(func[, raw, engine, ...])
Calculate the rolling custom aggregation function.
Rolling.aggregate(func, *args, **kwargs)
Aggregate using one or more operations over the specified axis.
Rolling.quantile(quantile[, interpolation, ...])
Calculate the rolling quantile.
Rolling.sem([ddof, numeric_only])
Calculate the rolling standard error of mean.
Rolling.rank([method, ascending, pct, ...])
Calculate the rolling rank.
Weighted window functions#
Window.mean([numeric_only])
Calculate the rolling weighted window mean.
Window.sum([numeric_only])
Calculate the rolling weighted window sum.
Window.var([ddof, numeric_only])
Calculate the rolling weighted window variance.
Window.std([ddof, numeric_only])
Calculate the rolling weighted window standard deviation.
Expanding window functions#
Expanding.count([numeric_only])
Calculate the expanding count of non NaN observations.
Expanding.sum([numeric_only, engine, ...])
Calculate the expanding sum.
Expanding.mean([numeric_only, engine, ...])
Calculate the expanding mean.
Expanding.median([numeric_only, engine, ...])
Calculate the expanding median.
Expanding.var([ddof, numeric_only, engine, ...])
Calculate the expanding variance.
Expanding.std([ddof, numeric_only, engine, ...])
Calculate the expanding standard deviation.
Expanding.min([numeric_only, engine, ...])
Calculate the expanding minimum.
Expanding.max([numeric_only, engine, ...])
Calculate the expanding maximum.
Expanding.corr([other, pairwise, ddof, ...])
Calculate the expanding correlation.
Expanding.cov([other, pairwise, ddof, ...])
Calculate the expanding sample covariance.
Expanding.skew([numeric_only])
Calculate the expanding unbiased skewness.
Expanding.kurt([numeric_only])
Calculate the expanding Fisher's definition of kurtosis without bias.
Expanding.apply(func[, raw, engine, ...])
Calculate the expanding custom aggregation function.
Expanding.aggregate(func, *args, **kwargs)
Aggregate using one or more operations over the specified axis.
Expanding.quantile(quantile[, ...])
Calculate the expanding quantile.
Expanding.sem([ddof, numeric_only])
Calculate the expanding standard error of mean.
Expanding.rank([method, ascending, pct, ...])
Calculate the expanding rank.
Exponentially-weighted window functions#
ExponentialMovingWindow.mean([numeric_only, ...])
Calculate the ewm (exponential weighted moment) mean.
ExponentialMovingWindow.sum([numeric_only, ...])
Calculate the ewm (exponential weighted moment) sum.
ExponentialMovingWindow.std([bias, numeric_only])
Calculate the ewm (exponential weighted moment) standard deviation.
ExponentialMovingWindow.var([bias, numeric_only])
Calculate the ewm (exponential weighted moment) variance.
ExponentialMovingWindow.corr([other, ...])
Calculate the ewm (exponential weighted moment) sample correlation.
ExponentialMovingWindow.cov([other, ...])
Calculate the ewm (exponential weighted moment) sample covariance.
Window indexer#
Base class for defining custom window boundaries.
api.indexers.BaseIndexer([index_array, ...])
Base class for window bounds calculations.
api.indexers.FixedForwardWindowIndexer([...])
Creates window boundaries for fixed-length windows that include the current row.
api.indexers.VariableOffsetWindowIndexer([...])
Calculate window boundaries based on a non-fixed offset such as a BusinessDay.
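An illustrative example of a custom indexer (mirroring the FixedForwardWindowIndexer docstring):
```
>>> df = pd.DataFrame({"B": [0, 1, 2, 3, 4]})
>>> indexer = pd.api.indexers.FixedForwardWindowIndexer(window_size=2)
>>> df.rolling(window=indexer, min_periods=1).sum()
     B
0  1.0
1  3.0
2  5.0
3  7.0
4  4.0
```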
|
reference/window.html
|
pandas.Series.dt.is_leap_year
|
`pandas.Series.dt.is_leap_year`
Boolean indicator if the date belongs to a leap year.
A leap year is a year with 366 days (instead of 365), including
February 29th as an intercalary day.
Leap years are years which are multiples of four, with the exception
of years divisible by 100 but not by 400.
```
>>> idx = pd.date_range("2012-01-01", "2015-01-01", freq="Y")
>>> idx
DatetimeIndex(['2012-12-31', '2013-12-31', '2014-12-31'],
dtype='datetime64[ns]', freq='A-DEC')
>>> idx.is_leap_year
array([ True, False, False])
```
|
Series.dt.is_leap_year[source]#
Boolean indicator if the date belongs to a leap year.
A leap year is a year with 366 days (instead of 365), including
February 29th as an intercalary day.
Leap years are years which are multiples of four, with the exception
of years divisible by 100 but not by 400.
Returns
Series or ndarrayBooleans indicating if dates belong to a leap year.
Examples
This method is available on Series with datetime values under
the .dt accessor, and directly on DatetimeIndex.
>>> idx = pd.date_range("2012-01-01", "2015-01-01", freq="Y")
>>> idx
DatetimeIndex(['2012-12-31', '2013-12-31', '2014-12-31'],
dtype='datetime64[ns]', freq='A-DEC')
>>> idx.is_leap_year
array([ True, False, False])
>>> dates_series = pd.Series(idx)
>>> dates_series
0 2012-12-31
1 2013-12-31
2 2014-12-31
dtype: datetime64[ns]
>>> dates_series.dt.is_leap_year
0 True
1 False
2 False
dtype: bool
|
reference/api/pandas.Series.dt.is_leap_year.html
|
pandas.Index.sortlevel
|
`pandas.Index.sortlevel`
For internal compatibility with the Index API.
|
Index.sortlevel(level=None, ascending=True, sort_remaining=None)[source]#
For internal compatibility with the Index API.
Sort the Index. This is for compatibility with MultiIndex.
Parameters
ascendingbool, default TrueFalse to sort in descending order
level and sort_remaining are compatibility parameters.
Returns
Index
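An illustrative sketch (not from the original docstring); reprs are omitted since they vary across pandas versions:
```
import pandas as pd

idx = pd.Index([3, 1, 2])
sorted_idx, indexer = idx.sortlevel(ascending=False)
# sorted_idx holds [3, 2, 1]; indexer gives the positions in the original index
```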
|
reference/api/pandas.Index.sortlevel.html
|
pandas.DataFrame.plot.kde
|
`pandas.DataFrame.plot.kde`
Generate Kernel Density Estimate plot using Gaussian kernels.
In statistics, kernel density estimation (KDE) is a non-parametric
way to estimate the probability density function (PDF) of a random
variable. This function uses Gaussian kernels and includes automatic
bandwidth determination.
```
>>> s = pd.Series([1, 2, 2.5, 3, 3.5, 4, 5])
>>> ax = s.plot.kde()
```
|
DataFrame.plot.kde(bw_method=None, ind=None, **kwargs)[source]#
Generate Kernel Density Estimate plot using Gaussian kernels.
In statistics, kernel density estimation (KDE) is a non-parametric
way to estimate the probability density function (PDF) of a random
variable. This function uses Gaussian kernels and includes automatic
bandwidth determination.
Parameters
bw_methodstr, scalar or callable, optionalThe method used to calculate the estimator bandwidth. This can be
‘scott’, ‘silverman’, a scalar constant or a callable.
If None (default), ‘scott’ is used.
See scipy.stats.gaussian_kde for more information.
indNumPy array or int, optionalEvaluation points for the estimated PDF. If None (default),
1000 equally spaced points are used. If ind is a NumPy array, the
KDE is evaluated at the points passed. If ind is an integer,
ind number of equally spaced points are used.
**kwargsAdditional keyword arguments are documented in
DataFrame.plot().
Returns
matplotlib.axes.Axes or numpy.ndarray of them
See also
scipy.stats.gaussian_kdeRepresentation of a kernel-density estimate using Gaussian kernels. This is the function used internally to estimate the PDF.
Examples
Given a Series of points randomly sampled from an unknown
distribution, estimate its PDF using KDE with automatic
bandwidth determination and plot the results, evaluating them at
1000 equally spaced points (default):
>>> s = pd.Series([1, 2, 2.5, 3, 3.5, 4, 5])
>>> ax = s.plot.kde()
A scalar bandwidth can be specified. Using a small bandwidth value can
lead to over-fitting, while using a large bandwidth value may result
in under-fitting:
>>> ax = s.plot.kde(bw_method=0.3)
>>> ax = s.plot.kde(bw_method=3)
Finally, the ind parameter determines the evaluation points for the
plot of the estimated PDF:
>>> ax = s.plot.kde(ind=[1, 2, 3, 4, 5])
For DataFrame, it works in the same way:
>>> df = pd.DataFrame({
... 'x': [1, 2, 2.5, 3, 3.5, 4, 5],
... 'y': [4, 4, 4.5, 5, 5.5, 6, 6],
... })
>>> ax = df.plot.kde()
A scalar bandwidth can be specified. Using a small bandwidth value can
lead to over-fitting, while using a large bandwidth value may result
in under-fitting:
>>> ax = df.plot.kde(bw_method=0.3)
>>> ax = df.plot.kde(bw_method=3)
Finally, the ind parameter determines the evaluation points for the
plot of the estimated PDF:
>>> ax = df.plot.kde(ind=[1, 2, 3, 4, 5, 6])
|
reference/api/pandas.DataFrame.plot.kde.html
|
pandas arrays, scalars, and data types
|
pandas arrays, scalars, and data types
|
Objects#
For most data types, pandas uses NumPy arrays as the concrete
objects contained within an Index, Series, or
DataFrame.
For some data types, pandas extends NumPy’s type system. String aliases for these types
can be found at dtypes.
| Kind of Data | pandas Data Type | Scalar | Array |
| --- | --- | --- | --- |
| TZ-aware datetime | DatetimeTZDtype | Timestamp | Datetimes |
| Timedeltas | (none) | Timedelta | Timedeltas |
| Period (time spans) | PeriodDtype | Period | Periods |
| Intervals | IntervalDtype | Interval | Intervals |
| Nullable Integer | Int64Dtype, … | (none) | Nullable integer |
| Categorical | CategoricalDtype | (none) | Categoricals |
| Sparse | SparseDtype | (none) | Sparse |
| Strings | StringDtype | str | Strings |
| Boolean (with NA) | BooleanDtype | bool | Nullable Boolean |
| PyArrow | ArrowDtype | Python Scalars or NA | PyArrow |
pandas and third-party libraries can extend NumPy’s type system (see Extension types).
The top-level array() method can be used to create a new array, which may be
stored in a Series, Index, or as a column in a DataFrame.
array(data[, dtype, copy])
Create an array.
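For example (illustrative, assuming pandas 1.x reprs), array() infers a nullable extension type by default:
```
>>> pd.array([1, 2, None])
<IntegerArray>
[1, 2, <NA>]
Length: 3, dtype: Int64
```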
PyArrow#
Warning
This feature is experimental, and the API can change in a future release without warning.
The arrays.ArrowExtensionArray is backed by a pyarrow.ChunkedArray with a
pyarrow.DataType instead of a NumPy array and data type. The .dtype of an arrays.ArrowExtensionArray
is an ArrowDtype.
Pyarrow provides similar array and data type
support as NumPy including first-class nullability support for all data types, immutability and more.
Note
For string types (pyarrow.string(), string[pyarrow]), PyArrow support is still facilitated
by arrays.ArrowStringArray and StringDtype("pyarrow"). See the string section
below.
While individual values in an arrays.ArrowExtensionArray are stored as PyArrow objects, scalars are returned
as Python scalars corresponding to the data type, e.g. a PyArrow int64 will be returned as a Python int, or NA for missing
values.
arrays.ArrowExtensionArray(values)
Pandas ExtensionArray backed by a PyArrow ChunkedArray.
ArrowDtype(pyarrow_dtype)
An ExtensionDtype for PyArrow data types.
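An illustrative construction (requires pyarrow to be installed):
```
>>> import pyarrow as pa
>>> pd.Series([1, 2, None], dtype=pd.ArrowDtype(pa.int64()))
0       1
1       2
2    <NA>
dtype: int64[pyarrow]
```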
Datetimes#
NumPy cannot natively represent timezone-aware datetimes. pandas supports this
with the arrays.DatetimeArray extension array, which can hold timezone-naive
or timezone-aware values.
Timestamp, a subclass of datetime.datetime, is pandas’
scalar type for timezone-naive or timezone-aware datetime data.
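For example:
```
>>> pd.Timestamp("2023-01-01 12:00", tz="UTC")
Timestamp('2023-01-01 12:00:00+0000', tz='UTC')
```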
Timestamp([ts_input, freq, tz, unit, year, ...])
Pandas replacement for python datetime.datetime object.
Properties#
Timestamp.asm8
Return numpy datetime64 format in nanoseconds.
Timestamp.day
Timestamp.dayofweek
Return day of the week.
Timestamp.day_of_week
Return day of the week.
Timestamp.dayofyear
Return the day of the year.
Timestamp.day_of_year
Return the day of the year.
Timestamp.days_in_month
Return the number of days in the month.
Timestamp.daysinmonth
Return the number of days in the month.
Timestamp.fold
Timestamp.hour
Timestamp.is_leap_year
Return True if year is a leap year.
Timestamp.is_month_end
Return True if date is last day of month.
Timestamp.is_month_start
Return True if date is first day of month.
Timestamp.is_quarter_end
Return True if date is last day of the quarter.
Timestamp.is_quarter_start
Return True if date is first day of the quarter.
Timestamp.is_year_end
Return True if date is last day of the year.
Timestamp.is_year_start
Return True if date is first day of the year.
Timestamp.max
Timestamp.microsecond
Timestamp.min
Timestamp.minute
Timestamp.month
Timestamp.nanosecond
Timestamp.quarter
Return the quarter of the year.
Timestamp.resolution
Timestamp.second
Timestamp.tz
Alias for tzinfo.
Timestamp.tzinfo
Timestamp.value
Timestamp.week
Return the week number of the year.
Timestamp.weekofyear
Return the week number of the year.
Timestamp.year
Methods#
Timestamp.astimezone(tz)
Convert timezone-aware Timestamp to another time zone.
Timestamp.ceil(freq[, ambiguous, nonexistent])
Return a new Timestamp ceiled to this resolution.
Timestamp.combine(date, time)
Combine date, time into datetime with same date and time fields.
Timestamp.ctime
Return ctime() style string.
Timestamp.date
Return date object with same year, month and day.
Timestamp.day_name
Return the day name of the Timestamp with specified locale.
Timestamp.dst
Return self.tzinfo.dst(self).
Timestamp.floor(freq[, ambiguous, nonexistent])
Return a new Timestamp floored to this resolution.
Timestamp.freq
Timestamp.freqstr
Return the frequency string.
Timestamp.fromordinal(ordinal[, freq, tz])
Construct a timestamp from a proleptic Gregorian ordinal.
Timestamp.fromtimestamp(ts)
Transform timestamp[, tz] to tz's local time from POSIX timestamp.
Timestamp.isocalendar
Return a 3-tuple containing ISO year, week number, and weekday.
Timestamp.isoformat
Return the time formatted according to ISO 8601.
Timestamp.isoweekday()
Return the day of the week represented by the date.
Timestamp.month_name
Return the month name of the Timestamp with specified locale.
Timestamp.normalize
Normalize Timestamp to midnight, preserving tz information.
Timestamp.now([tz])
Return new Timestamp object representing current time local to tz.
Timestamp.replace([year, month, day, hour, ...])
Implements datetime.replace, handles nanoseconds.
Timestamp.round(freq[, ambiguous, nonexistent])
Round the Timestamp to the specified resolution.
Timestamp.strftime(format)
Return a formatted string of the Timestamp.
Timestamp.strptime(string, format)
Function is not implemented.
Timestamp.time
Return time object with same time but with tzinfo=None.
Timestamp.timestamp
Return POSIX timestamp as float.
Timestamp.timetuple
Return time tuple, compatible with time.localtime().
Timestamp.timetz
Return time object with same time and tzinfo.
Timestamp.to_datetime64
Return a numpy.datetime64 object with 'ns' precision.
Timestamp.to_numpy
Convert the Timestamp to a NumPy datetime64.
Timestamp.to_julian_date()
Convert TimeStamp to a Julian Date.
Timestamp.to_period
Return a Period of which this timestamp is an observation.
Timestamp.to_pydatetime
Convert a Timestamp object to a native Python datetime object.
Timestamp.today([tz])
Return the current time in the local timezone.
Timestamp.toordinal
Return proleptic Gregorian ordinal.
Timestamp.tz_convert(tz)
Convert timezone-aware Timestamp to another time zone.
Timestamp.tz_localize(tz[, ambiguous, ...])
Localize the Timestamp to a timezone.
Timestamp.tzname
Return self.tzinfo.tzname(self).
Timestamp.utcfromtimestamp(ts)
Construct a naive UTC datetime from a POSIX timestamp.
Timestamp.utcnow()
Return a new Timestamp representing UTC day and time.
Timestamp.utcoffset
Return self.tzinfo.utcoffset(self).
Timestamp.utctimetuple
Return UTC time tuple, compatible with time.localtime().
Timestamp.weekday()
Return the day of the week represented by the date.
A collection of timestamps may be stored in an arrays.DatetimeArray.
For timezone-aware data, the .dtype of a arrays.DatetimeArray is a
DatetimeTZDtype. For timezone-naive data, np.dtype("datetime64[ns]")
is used.
If the data are timezone-aware, then every value in the array must have the same timezone.
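An illustrative example of the two dtypes:
```
>>> pd.Series(pd.date_range("2023-01-01", periods=2)).dtype
dtype('<M8[ns]')
>>> pd.Series(pd.date_range("2023-01-01", periods=2, tz="US/Eastern")).dtype
datetime64[ns, US/Eastern]
```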
arrays.DatetimeArray(values[, dtype, freq, copy])
Pandas ExtensionArray for tz-naive or tz-aware datetime data.
DatetimeTZDtype([unit, tz])
An ExtensionDtype for timezone-aware datetime data.
Timedeltas#
NumPy can natively represent timedeltas. pandas provides Timedelta
for symmetry with Timestamp.
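For example:
```
>>> pd.Timedelta("1 days 2 hours")
Timedelta('1 days 02:00:00')
>>> pd.Timestamp("2023-01-02") - pd.Timestamp("2023-01-01")
Timedelta('1 days 00:00:00')
```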
Timedelta([value, unit])
Represents a duration, the difference between two dates or times.
Properties#
Timedelta.asm8
Return a numpy timedelta64 array scalar view.
Timedelta.components
Return a components namedtuple-like.
Timedelta.days
Timedelta.delta
(DEPRECATED) Return the timedelta in nanoseconds (ns), for internal compatibility.
Timedelta.freq
(DEPRECATED) Freq property.
Timedelta.is_populated
(DEPRECATED) Is_populated property.
Timedelta.max
Timedelta.microseconds
Timedelta.min
Timedelta.nanoseconds
Return the number of nanoseconds (n), where 0 <= n < 1 microsecond.
Timedelta.resolution
Timedelta.seconds
Timedelta.value
Timedelta.view
Array view compatibility.
Methods#
Timedelta.ceil(freq)
Return a new Timedelta ceiled to this resolution.
Timedelta.floor(freq)
Return a new Timedelta floored to this resolution.
Timedelta.isoformat
Format the Timedelta as ISO 8601 Duration.
Timedelta.round(freq)
Round the Timedelta to the specified resolution.
Timedelta.to_pytimedelta
Convert a pandas Timedelta object into a python datetime.timedelta object.
Timedelta.to_timedelta64
Return a numpy.timedelta64 object with 'ns' precision.
Timedelta.to_numpy
Convert the Timedelta to a NumPy timedelta64.
Timedelta.total_seconds
Total seconds in the duration.
A collection of Timedelta objects may be stored in a TimedeltaArray.
arrays.TimedeltaArray(values[, dtype, freq, ...])
Pandas ExtensionArray for timedelta data.
Periods#
pandas represents spans of times as Period objects.
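For example, a monthly period spans the whole month:
```
>>> p = pd.Period("2023-01", freq="M")
>>> p.start_time
Timestamp('2023-01-01 00:00:00')
>>> p.end_time
Timestamp('2023-01-31 23:59:59.999999999')
```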
Period#
Period([value, freq, ordinal, year, month, ...])
Represents a period of time.
Properties#
Period.day
Get day of the month that a Period falls on.
Period.dayofweek
Day of the week the period lies in, with Monday=0 and Sunday=6.
Period.day_of_week
Day of the week the period lies in, with Monday=0 and Sunday=6.
Period.dayofyear
Return the day of the year.
Period.day_of_year
Return the day of the year.
Period.days_in_month
Get the total number of days in the month that this period falls on.
Period.daysinmonth
Get the total number of days of the month that this period falls on.
Period.end_time
Get the Timestamp for the end of the period.
Period.freq
Period.freqstr
Return a string representation of the frequency.
Period.hour
Get the hour of the day component of the Period.
Period.is_leap_year
Return True if the period's year is in a leap year.
Period.minute
Get minute of the hour component of the Period.
Period.month
Return the month this Period falls on.
Period.ordinal
Period.quarter
Return the quarter this Period falls on.
Period.qyear
Fiscal year the Period lies in according to its starting-quarter.
Period.second
Get the second component of the Period.
Period.start_time
Get the Timestamp for the start of the period.
Period.week
Get the week of the year on the given Period.
Period.weekday
Day of the week the period lies in, with Monday=0 and Sunday=6.
Period.weekofyear
Get the week of the year on the given Period.
Period.year
Return the year this Period falls on.
Methods#
Period.asfreq
Convert Period to desired frequency, at the start or end of the interval.
Period.now
Return the period of the current date.
Period.strftime
Returns a formatted string representation of the Period.
Period.to_timestamp
Return the Timestamp representation of the Period.
A collection of Period objects may be stored in an arrays.PeriodArray.
Every period in an arrays.PeriodArray must have the same freq.
arrays.PeriodArray(values[, dtype, freq, copy])
Pandas ExtensionArray for storing Period data.
PeriodDtype([freq])
An ExtensionDtype for Period data.
Intervals#
Arbitrary intervals can be represented as Interval objects.
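For example:
```
>>> iv = pd.Interval(left=0, right=5, closed="right")
>>> 3 in iv
True
>>> iv.length
5
```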
Interval
Immutable object implementing an Interval, a bounded slice-like interval.
Properties#
Interval.closed
String describing the inclusive side of the intervals.
Interval.closed_left
Check if the interval is closed on the left side.
Interval.closed_right
Check if the interval is closed on the right side.
Interval.is_empty
Indicates if an interval is empty, meaning it contains no points.
Interval.left
Left bound for the interval.
Interval.length
Return the length of the Interval.
Interval.mid
Return the midpoint of the Interval.
Interval.open_left
Check if the interval is open on the left side.
Interval.open_right
Check if the interval is open on the right side.
Interval.overlaps
Check whether two Interval objects overlap.
Interval.right
Right bound for the interval.
A collection of intervals may be stored in an arrays.IntervalArray.
arrays.IntervalArray(data[, closed, dtype, ...])
Pandas array for interval data that are closed on the same side.
IntervalDtype([subtype, closed])
An ExtensionDtype for Interval data.
Nullable integer#
numpy.ndarray cannot natively represent integer data with missing values.
pandas provides this through arrays.IntegerArray.
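An illustrative example using the "Int64" string alias:
```
>>> pd.Series([1, 2, None], dtype="Int64")
0       1
1       2
2    <NA>
dtype: Int64
```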
arrays.IntegerArray(values, mask[, copy])
Array of integer (optional missing) values.
Int8Dtype()
An ExtensionDtype for int8 integer data.
Int16Dtype()
An ExtensionDtype for int16 integer data.
Int32Dtype()
An ExtensionDtype for int32 integer data.
Int64Dtype()
An ExtensionDtype for int64 integer data.
UInt8Dtype()
An ExtensionDtype for uint8 integer data.
UInt16Dtype()
An ExtensionDtype for uint16 integer data.
UInt32Dtype()
An ExtensionDtype for uint32 integer data.
UInt64Dtype()
An ExtensionDtype for uint64 integer data.
Categoricals#
pandas defines a custom data type for representing data that can take only a
limited, fixed set of values. The dtype of a Categorical can be described by
a CategoricalDtype.
CategoricalDtype([categories, ordered])
Type for categorical data with the categories and orderedness.
CategoricalDtype.categories
An Index containing the unique categories allowed.
CategoricalDtype.ordered
Whether the categories have an ordered relationship.
Categorical data can be stored in a pandas.Categorical
Categorical(values[, categories, ordered, ...])
Represent a categorical variable in classic R / S-plus fashion.
The alternative Categorical.from_codes() constructor can be used when you
have the categories and integer codes already:
Categorical.from_codes(codes[, categories, ...])
Make a Categorical type from codes and categories or dtype.
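For example:
```
>>> pd.Categorical.from_codes(codes=[0, 1, 0], categories=["a", "b"])
['a', 'b', 'a']
Categories (2, object): ['a', 'b']
```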
The dtype information is available on the Categorical
Categorical.dtype
The CategoricalDtype for this instance.
Categorical.categories
The categories of this categorical.
Categorical.ordered
Whether the categories have an ordered relationship.
Categorical.codes
The category codes of this categorical.
np.asarray(categorical) works by implementing the array interface. Be aware that this converts
the Categorical back to a NumPy array, so categories and order information is not preserved!
Categorical.__array__([dtype])
The numpy array interface.
A Categorical can be stored in a Series or DataFrame.
To create a Series of dtype category, use cat = s.astype(dtype) or
Series(..., dtype=dtype), where dtype is either
the string 'category', or
an instance of CategoricalDtype.
If the Series is of dtype CategoricalDtype, Series.cat can be used to change the categorical
data. See Categorical accessor for more.
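An illustrative example (not part of the original page):
```
>>> s = pd.Series(["a", "b", "a"], dtype="category")
>>> s.cat.categories
Index(['a', 'b'], dtype='object')
>>> s.cat.codes
0    0
1    1
2    0
dtype: int8
```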
Sparse#
Data where a single value is repeated many times (e.g. 0 or NaN) may
be stored efficiently as an arrays.SparseArray.
arrays.SparseArray(data[, sparse_index, ...])
An ExtensionArray for storing sparse data.
SparseDtype([dtype, fill_value])
Dtype for data stored in SparseArray.
The Series.sparse accessor may be used to access sparse-specific attributes
and methods if the Series contains sparse values. See
Sparse accessor and the user guide for more.
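An illustrative example using the "Sparse[int]" string alias:
```
>>> s = pd.Series([0, 0, 1, 0], dtype="Sparse[int]")
>>> s.sparse.density
0.25
```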
Strings#
When working with text data, where each valid element is a string or missing,
we recommend using StringDtype (with the alias "string").
arrays.StringArray(values[, copy])
Extension array for string data.
arrays.ArrowStringArray(values)
Extension array for string data in a pyarrow.ChunkedArray.
StringDtype([storage])
Extension dtype for string data.
The Series.str accessor is available for Series backed by an arrays.StringArray.
See String handling for more.
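For example:
```
>>> pd.Series(["a", None, "c"], dtype="string")
0       a
1    <NA>
2       c
dtype: string
```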
Nullable Boolean#
The boolean dtype (with the alias "boolean") provides support for storing
boolean data (True, False) with missing values, which is not possible
with a bool numpy.ndarray.
arrays.BooleanArray(values, mask[, copy])
Array of boolean (True/False) data with missing values.
BooleanDtype()
Extension dtype for boolean data.
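For example:
```
>>> pd.Series([True, None, False], dtype="boolean")
0     True
1     <NA>
2    False
dtype: boolean
```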
Utilities#
Constructors#
api.types.union_categoricals(to_union[, ...])
Combine list-like of Categorical-like, unioning categories.
api.types.infer_dtype
Return a string label of the type of a scalar or list-like of values.
api.types.pandas_dtype(dtype)
Convert input into a pandas only dtype object or a numpy dtype object.
Data type introspection#
api.types.is_bool_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a boolean dtype.
api.types.is_categorical_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the Categorical dtype.
api.types.is_complex_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a complex dtype.
api.types.is_datetime64_any_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the datetime64 dtype.
api.types.is_datetime64_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the datetime64 dtype.
api.types.is_datetime64_ns_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the datetime64[ns] dtype.
api.types.is_datetime64tz_dtype(arr_or_dtype)
Check whether an array-like or dtype is of a DatetimeTZDtype dtype.
api.types.is_extension_type(arr)
(DEPRECATED) Check whether an array-like is of a pandas extension class instance.
api.types.is_extension_array_dtype(arr_or_dtype)
Check if an object is a pandas extension array type.
api.types.is_float_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a float dtype.
api.types.is_int64_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the int64 dtype.
api.types.is_integer_dtype(arr_or_dtype)
Check whether the provided array or dtype is of an integer dtype.
api.types.is_interval_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the Interval dtype.
api.types.is_numeric_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a numeric dtype.
api.types.is_object_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the object dtype.
api.types.is_period_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the Period dtype.
api.types.is_signed_integer_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a signed integer dtype.
api.types.is_string_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the string dtype.
api.types.is_timedelta64_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the timedelta64 dtype.
api.types.is_timedelta64_ns_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the timedelta64[ns] dtype.
api.types.is_unsigned_integer_dtype(arr_or_dtype)
Check whether the provided array or dtype is of an unsigned integer dtype.
api.types.is_sparse(arr)
Check whether an array-like is a 1-D pandas sparse array.
Iterable introspection#
api.types.is_dict_like(obj)
Check if the object is dict-like.
api.types.is_file_like(obj)
Check if the object is a file-like object.
api.types.is_list_like
Check if the object is list-like.
api.types.is_named_tuple(obj)
Check if the object is a named tuple.
api.types.is_iterator
Check if the object is an iterator.
Scalar introspection#
api.types.is_bool
Return True if given object is boolean.
api.types.is_categorical(arr)
(DEPRECATED) Check whether an array-like is a Categorical instance.
api.types.is_complex
Return True if given object is complex.
api.types.is_float
Return True if given object is float.
api.types.is_hashable(obj)
Return True if hash(obj) will succeed, False otherwise.
api.types.is_integer
Return True if given object is integer.
api.types.is_interval
api.types.is_number(obj)
Check if the object is a number.
api.types.is_re(obj)
Check if the object is a regex pattern instance.
api.types.is_re_compilable(obj)
Check if the object can be compiled into a regex pattern instance.
api.types.is_scalar
Return True if given object is scalar.
|
reference/arrays.html
|
General functions
|
General functions
|
Data manipulations#
melt(frame[, id_vars, value_vars, var_name, ...])
Unpivot a DataFrame from wide to long format, optionally leaving identifiers set.
pivot(data, *[, index, columns, values])
Return reshaped DataFrame organized by given index / column values.
pivot_table(data[, values, index, columns, ...])
Create a spreadsheet-style pivot table as a DataFrame.
crosstab(index, columns[, values, rownames, ...])
Compute a simple cross tabulation of two (or more) factors.
cut(x, bins[, right, labels, retbins, ...])
Bin values into discrete intervals.
qcut(x, q[, labels, retbins, precision, ...])
Quantile-based discretization function.
merge(left, right[, how, on, left_on, ...])
Merge DataFrame or named Series objects with a database-style join.
merge_ordered(left, right[, on, left_on, ...])
Perform a merge for ordered data with optional filling/interpolation.
merge_asof(left, right[, on, left_on, ...])
Perform a merge by key distance.
concat(objs, *[, axis, join, ignore_index, ...])
Concatenate pandas objects along a particular axis.
get_dummies(data[, prefix, prefix_sep, ...])
Convert categorical variable into dummy/indicator variables.
from_dummies(data[, sep, default_category])
Create a categorical DataFrame from a DataFrame of dummy variables.
factorize(values[, sort, na_sentinel, ...])
Encode the object as an enumerated type or categorical variable.
unique(values)
Return unique values based on a hash table.
wide_to_long(df, stubnames, i, j[, sep, suffix])
Unpivot a DataFrame from wide to long format.
Top-level missing data#
isna(obj)
Detect missing values for an array-like object.
isnull(obj)
Detect missing values for an array-like object.
notna(obj)
Detect non-missing values for an array-like object.
notnull(obj)
Detect non-missing values for an array-like object.
Top-level dealing with numeric data#
to_numeric(arg[, errors, downcast])
Convert argument to a numeric type.
Top-level dealing with datetimelike data#
to_datetime(arg[, errors, dayfirst, ...])
Convert argument to datetime.
to_timedelta(arg[, unit, errors])
Convert argument to timedelta.
date_range([start, end, periods, freq, tz, ...])
Return a fixed frequency DatetimeIndex.
bdate_range([start, end, periods, freq, tz, ...])
Return a fixed frequency DatetimeIndex with business day as the default.
period_range([start, end, periods, freq, name])
Return a fixed frequency PeriodIndex.
timedelta_range([start, end, periods, freq, ...])
Return a fixed frequency TimedeltaIndex with day as the default.
infer_freq(index[, warn])
Infer the most likely frequency given the input index.
Top-level dealing with Interval data#
interval_range([start, end, periods, freq, ...])
Return a fixed frequency IntervalIndex.
Top-level evaluation#
eval(expr[, parser, engine, truediv, ...])
Evaluate a Python expression as a string using various backends.
Hashing#
util.hash_array(vals[, encoding, hash_key, ...])
Given a 1d array, return an array of deterministic integers.
util.hash_pandas_object(obj[, index, ...])
Return a data hash of the Index/Series/DataFrame.
Importing from other DataFrame libraries#
api.interchange.from_dataframe(df[, allow_copy])
Build a pd.DataFrame from any DataFrame supporting the interchange protocol.
|
reference/general_functions.html
|
Extensions
|
Extensions
|
These are primarily intended for library authors looking to extend pandas
objects.
api.extensions.register_extension_dtype(cls)
Register an ExtensionType with pandas as class decorator.
api.extensions.register_dataframe_accessor(name)
Register a custom accessor on DataFrame objects.
api.extensions.register_series_accessor(name)
Register a custom accessor on Series objects.
api.extensions.register_index_accessor(name)
Register a custom accessor on Index objects.
api.extensions.ExtensionDtype()
A custom data type, to be paired with an ExtensionArray.
api.extensions.ExtensionArray()
Abstract base class for custom 1-D array types.
arrays.PandasArray(values[, copy])
A pandas ExtensionArray for NumPy data.
Additionally, we have some utility methods for ensuring your object
behaves correctly.
api.indexers.check_array_indexer(array, indexer)
Check if indexer is a valid array indexer for array.
The sentinel pandas.api.extensions.no_default is used as the default
value in some methods. Use an is comparison to check if the user
provides a non-default value.
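A minimal sketch (the method name fetch is hypothetical):
```
from pandas.api.extensions import no_default

def fetch(value=no_default):  # hypothetical method using the sentinel
    if value is no_default:   # 'is' distinguishes "not passed" from None etc.
        return 0
    return value
```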
|
reference/extensions.html
|
pandas.api.extensions.register_dataframe_accessor
|
`pandas.api.extensions.register_dataframe_accessor`
Register a custom accessor on DataFrame objects.
Name under which the accessor should be registered. A warning is issued
if this name conflicts with a preexisting attribute.
```
>>> pd.Series(['a', 'b']).dt
Traceback (most recent call last):
...
AttributeError: Can only use .dt accessor with datetimelike values
```
|
pandas.api.extensions.register_dataframe_accessor(name)[source]#
Register a custom accessor on DataFrame objects.
Parameters
namestrName under which the accessor should be registered. A warning is issued
if this name conflicts with a preexisting attribute.
Returns
callableA class decorator.
See also
register_dataframe_accessorRegister a custom accessor on DataFrame objects.
register_series_accessorRegister a custom accessor on Series objects.
register_index_accessorRegister a custom accessor on Index objects.
Notes
When accessed, your accessor will be initialized with the pandas object
the user is interacting with. So the signature must be
def __init__(self, pandas_object): # noqa: E999
...
For consistency with pandas methods, you should raise an AttributeError
if the data passed to your accessor has an incorrect dtype.
>>> pd.Series(['a', 'b']).dt
Traceback (most recent call last):
...
AttributeError: Can only use .dt accessor with datetimelike values
Examples
In your library code:
import pandas as pd

@pd.api.extensions.register_dataframe_accessor("geo")
class GeoAccessor:
    def __init__(self, pandas_obj):
        self._obj = pandas_obj

    @property
    def center(self):
        # return the geographic center point of this DataFrame
        lat = self._obj.latitude
        lon = self._obj.longitude
        return (float(lon.mean()), float(lat.mean()))

    def plot(self):
        # plot this array's data on a map, e.g., using Cartopy
        pass
Back in an interactive IPython session:
In [1]: ds = pd.DataFrame({"longitude": np.linspace(0, 10),
   ...:                    "latitude": np.linspace(0, 20)})

In [2]: ds.geo.center
Out[2]: (5.0, 10.0)

In [3]: ds.geo.plot()  # plots data on a map
|
reference/api/pandas.api.extensions.register_dataframe_accessor.html
|
pandas.Series.update
|
`pandas.Series.update`
Modify Series in place using values from passed Series.
```
>>> s = pd.Series([1, 2, 3])
>>> s.update(pd.Series([4, 5, 6]))
>>> s
0 4
1 5
2 6
dtype: int64
```
|
Series.update(other)[source]#
Modify Series in place using values from passed Series.
Uses non-NA values from passed Series to make updates. Aligns
on index.
Parameters
otherSeries, or object coercible into Series
Examples
>>> s = pd.Series([1, 2, 3])
>>> s.update(pd.Series([4, 5, 6]))
>>> s
0 4
1 5
2 6
dtype: int64
>>> s = pd.Series(['a', 'b', 'c'])
>>> s.update(pd.Series(['d', 'e'], index=[0, 2]))
>>> s
0 d
1 b
2 e
dtype: object
>>> s = pd.Series([1, 2, 3])
>>> s.update(pd.Series([4, 5, 6, 7, 8]))
>>> s
0 4
1 5
2 6
dtype: int64
If other contains NaNs the corresponding values are not updated
in the original Series.
>>> s = pd.Series([1, 2, 3])
>>> s.update(pd.Series([4, np.nan, 6]))
>>> s
0 4
1 2
2 6
dtype: int64
other can also be a non-Series object type
that is coercible into a Series.
>>> s = pd.Series([1, 2, 3])
>>> s.update([4, np.nan, 6])
>>> s
0 4
1 2
2 6
dtype: int64
>>> s = pd.Series([1, 2, 3])
>>> s.update({1: 9})
>>> s
0 1
1 9
2 3
dtype: int64
|
reference/api/pandas.Series.update.html
|
pandas.MultiIndex.names
|
`pandas.MultiIndex.names`
Names of levels in MultiIndex.
Examples
```
>>> mi = pd.MultiIndex.from_arrays(
... [[1, 2], [3, 4], [5, 6]], names=['x', 'y', 'z'])
>>> mi
MultiIndex([(1, 3, 5),
(2, 4, 6)],
names=['x', 'y', 'z'])
>>> mi.names
FrozenList(['x', 'y', 'z'])
```
|
property MultiIndex.names[source]#
Names of levels in MultiIndex.
Examples
>>> mi = pd.MultiIndex.from_arrays(
... [[1, 2], [3, 4], [5, 6]], names=['x', 'y', 'z'])
>>> mi
MultiIndex([(1, 3, 5),
(2, 4, 6)],
names=['x', 'y', 'z'])
>>> mi.names
FrozenList(['x', 'y', 'z'])
|
reference/api/pandas.MultiIndex.names.html
|
pandas.Int16Dtype
|
`pandas.Int16Dtype`
An ExtensionDtype for int16 integer data.
|
class pandas.Int16Dtype[source]#
An ExtensionDtype for int16 integer data.
Changed in version 1.0.0: Now uses pandas.NA as its missing value,
rather than numpy.nan.
Attributes
None
Methods
None
|
reference/api/pandas.Int16Dtype.html
|
pandas.Series.sparse.density
|
`pandas.Series.sparse.density`
The percent of non-fill_value points, as decimal.
```
>>> s = pd.arrays.SparseArray([0, 0, 1, 1, 1], fill_value=0)
>>> s.density
0.6
```
|
Series.sparse.density[source]#
The percent of non-fill_value points, as decimal.
Examples
>>> s = pd.arrays.SparseArray([0, 0, 1, 1, 1], fill_value=0)
>>> s.density
0.6
|
reference/api/pandas.Series.sparse.density.html
|
pandas.Series.dt.strftime
|
`pandas.Series.dt.strftime`
Convert to Index using specified date_format.
```
>>> rng = pd.date_range(pd.Timestamp("2018-03-10 09:00"),
... periods=3, freq='s')
>>> rng.strftime('%B %d, %Y, %r')
Index(['March 10, 2018, 09:00:00 AM', 'March 10, 2018, 09:00:01 AM',
'March 10, 2018, 09:00:02 AM'],
dtype='object')
```
|
Series.dt.strftime(*args, **kwargs)[source]#
Convert to Index using specified date_format.
Return an Index of formatted strings specified by date_format, which
supports the same string format as the python standard library. Details
of the string format can be found in the Python string format
documentation.
Formats supported by the C strftime API but not by the Python string format
documentation (such as “%R”, “%r”) are not officially supported and should
preferably be replaced with their supported equivalents (such as “%H:%M”,
“%I:%M:%S %p”).
Note that PeriodIndex support additional directives, detailed in
Period.strftime.
Parameters
date_format : str
Date format string (e.g. "%Y-%m-%d").
Returns
ndarray[object]
NumPy ndarray of formatted strings.
See also
to_datetime : Convert the given argument to datetime.
DatetimeIndex.normalize : Return DatetimeIndex with times to midnight.
DatetimeIndex.round : Round the DatetimeIndex to the specified freq.
DatetimeIndex.floor : Floor the DatetimeIndex to the specified freq.
Timestamp.strftime : Format a single Timestamp.
Period.strftime : Format a single Period.
Examples
>>> rng = pd.date_range(pd.Timestamp("2018-03-10 09:00"),
... periods=3, freq='s')
>>> rng.strftime('%B %d, %Y, %r')
Index(['March 10, 2018, 09:00:00 AM', 'March 10, 2018, 09:00:01 AM',
'March 10, 2018, 09:00:02 AM'],
dtype='object')
|
reference/api/pandas.Series.dt.strftime.html
|
pandas.tseries.offsets.CustomBusinessMonthBegin.is_month_start
|
`pandas.tseries.offsets.CustomBusinessMonthBegin.is_month_start`
Return boolean whether a timestamp occurs on the month start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
```
|
CustomBusinessMonthBegin.is_month_start()#
Return boolean whether a timestamp occurs on the month start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthBegin.is_month_start.html
|
Date offsets
|
Date offsets
|
DateOffset#
DateOffset
Standard kind of date increment used for a date range.
Properties#
DateOffset.freqstr
Return a string representing the frequency.
DateOffset.kwds
Return a dict of extra parameters for the offset.
DateOffset.name
Return a string representing the base frequency.
DateOffset.nanos
DateOffset.normalize
DateOffset.rule_code
DateOffset.n
DateOffset.is_month_start
Return boolean whether a timestamp occurs on the month start.
DateOffset.is_month_end
Return boolean whether a timestamp occurs on the month end.
Methods#
DateOffset.apply
DateOffset.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
DateOffset.copy
Return a copy of the frequency.
DateOffset.isAnchored
DateOffset.onOffset
DateOffset.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
DateOffset.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
DateOffset.__call__(*args, **kwargs)
Call self as a function.
DateOffset.is_month_start
Return boolean whether a timestamp occurs on the month start.
DateOffset.is_month_end
Return boolean whether a timestamp occurs on the month end.
DateOffset.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
DateOffset.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
DateOffset.is_year_start
Return boolean whether a timestamp occurs on the year start.
DateOffset.is_year_end
Return boolean whether a timestamp occurs on the year end.
BusinessDay#
BusinessDay
DateOffset subclass representing possibly n business days.
Alias:
BDay
alias of pandas._libs.tslibs.offsets.BusinessDay
Properties#
BusinessDay.freqstr
Return a string representing the frequency.
BusinessDay.kwds
Return a dict of extra parameters for the offset.
BusinessDay.name
Return a string representing the base frequency.
BusinessDay.nanos
BusinessDay.normalize
BusinessDay.rule_code
BusinessDay.n
BusinessDay.weekmask
BusinessDay.holidays
BusinessDay.calendar
Methods#
BusinessDay.apply
BusinessDay.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
BusinessDay.copy
Return a copy of the frequency.
BusinessDay.isAnchored
BusinessDay.onOffset
BusinessDay.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
BusinessDay.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
BusinessDay.__call__(*args, **kwargs)
Call self as a function.
BusinessDay.is_month_start
Return boolean whether a timestamp occurs on the month start.
BusinessDay.is_month_end
Return boolean whether a timestamp occurs on the month end.
BusinessDay.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
BusinessDay.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
BusinessDay.is_year_start
Return boolean whether a timestamp occurs on the year start.
BusinessDay.is_year_end
Return boolean whether a timestamp occurs on the year end.
BusinessHour#
BusinessHour
DateOffset subclass representing possibly n business hours.
Properties#
BusinessHour.freqstr
Return a string representing the frequency.
BusinessHour.kwds
Return a dict of extra parameters for the offset.
BusinessHour.name
Return a string representing the base frequency.
BusinessHour.nanos
BusinessHour.normalize
BusinessHour.rule_code
BusinessHour.n
BusinessHour.start
BusinessHour.end
BusinessHour.weekmask
BusinessHour.holidays
BusinessHour.calendar
Methods#
BusinessHour.apply
BusinessHour.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
BusinessHour.copy
Return a copy of the frequency.
BusinessHour.isAnchored
BusinessHour.onOffset
BusinessHour.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
BusinessHour.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
BusinessHour.__call__(*args, **kwargs)
Call self as a function.
BusinessHour.is_month_start
Return boolean whether a timestamp occurs on the month start.
BusinessHour.is_month_end
Return boolean whether a timestamp occurs on the month end.
BusinessHour.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
BusinessHour.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
BusinessHour.is_year_start
Return boolean whether a timestamp occurs on the year start.
BusinessHour.is_year_end
Return boolean whether a timestamp occurs on the year end.
CustomBusinessDay#
CustomBusinessDay
DateOffset subclass representing custom business days excluding holidays.
Alias:
CDay
alias of pandas._libs.tslibs.offsets.CustomBusinessDay
Properties#
CustomBusinessDay.freqstr
Return a string representing the frequency.
CustomBusinessDay.kwds
Return a dict of extra parameters for the offset.
CustomBusinessDay.name
Return a string representing the base frequency.
CustomBusinessDay.nanos
CustomBusinessDay.normalize
CustomBusinessDay.rule_code
CustomBusinessDay.n
CustomBusinessDay.weekmask
CustomBusinessDay.calendar
CustomBusinessDay.holidays
Methods#
CustomBusinessDay.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
CustomBusinessDay.apply
CustomBusinessDay.copy
Return a copy of the frequency.
CustomBusinessDay.isAnchored
CustomBusinessDay.onOffset
CustomBusinessDay.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
CustomBusinessDay.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
CustomBusinessDay.__call__(*args, **kwargs)
Call self as a function.
CustomBusinessDay.is_month_start
Return boolean whether a timestamp occurs on the month start.
CustomBusinessDay.is_month_end
Return boolean whether a timestamp occurs on the month end.
CustomBusinessDay.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
CustomBusinessDay.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
CustomBusinessDay.is_year_start
Return boolean whether a timestamp occurs on the year start.
CustomBusinessDay.is_year_end
Return boolean whether a timestamp occurs on the year end.
CustomBusinessHour#
CustomBusinessHour
DateOffset subclass representing possibly n custom business hours.
Properties#
CustomBusinessHour.freqstr
Return a string representing the frequency.
CustomBusinessHour.kwds
Return a dict of extra parameters for the offset.
CustomBusinessHour.name
Return a string representing the base frequency.
CustomBusinessHour.nanos
CustomBusinessHour.normalize
CustomBusinessHour.rule_code
CustomBusinessHour.n
CustomBusinessHour.weekmask
CustomBusinessHour.calendar
CustomBusinessHour.holidays
CustomBusinessHour.start
CustomBusinessHour.end
Methods#
CustomBusinessHour.apply
CustomBusinessHour.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
CustomBusinessHour.copy
Return a copy of the frequency.
CustomBusinessHour.isAnchored
CustomBusinessHour.onOffset
CustomBusinessHour.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
CustomBusinessHour.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
CustomBusinessHour.__call__(*args, **kwargs)
Call self as a function.
CustomBusinessHour.is_month_start
Return boolean whether a timestamp occurs on the month start.
CustomBusinessHour.is_month_end
Return boolean whether a timestamp occurs on the month end.
CustomBusinessHour.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
CustomBusinessHour.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
CustomBusinessHour.is_year_start
Return boolean whether a timestamp occurs on the year start.
CustomBusinessHour.is_year_end
Return boolean whether a timestamp occurs on the year end.
MonthEnd#
MonthEnd
DateOffset of one month end.
Properties#
MonthEnd.freqstr
Return a string representing the frequency.
MonthEnd.kwds
Return a dict of extra parameters for the offset.
MonthEnd.name
Return a string representing the base frequency.
MonthEnd.nanos
MonthEnd.normalize
MonthEnd.rule_code
MonthEnd.n
Methods#
MonthEnd.apply
MonthEnd.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
MonthEnd.copy
Return a copy of the frequency.
MonthEnd.isAnchored
MonthEnd.onOffset
MonthEnd.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
MonthEnd.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
MonthEnd.__call__(*args, **kwargs)
Call self as a function.
MonthEnd.is_month_start
Return boolean whether a timestamp occurs on the month start.
MonthEnd.is_month_end
Return boolean whether a timestamp occurs on the month end.
MonthEnd.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
MonthEnd.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
MonthEnd.is_year_start
Return boolean whether a timestamp occurs on the year start.
MonthEnd.is_year_end
Return boolean whether a timestamp occurs on the year end.
MonthBegin#
MonthBegin
DateOffset of one month at the beginning.
Properties#
MonthBegin.freqstr
Return a string representing the frequency.
MonthBegin.kwds
Return a dict of extra parameters for the offset.
MonthBegin.name
Return a string representing the base frequency.
MonthBegin.nanos
MonthBegin.normalize
MonthBegin.rule_code
MonthBegin.n
Methods#
MonthBegin.apply
MonthBegin.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
MonthBegin.copy
Return a copy of the frequency.
MonthBegin.isAnchored
MonthBegin.onOffset
MonthBegin.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
MonthBegin.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
MonthBegin.__call__(*args, **kwargs)
Call self as a function.
MonthBegin.is_month_start
Return boolean whether a timestamp occurs on the month start.
MonthBegin.is_month_end
Return boolean whether a timestamp occurs on the month end.
MonthBegin.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
MonthBegin.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
MonthBegin.is_year_start
Return boolean whether a timestamp occurs on the year start.
MonthBegin.is_year_end
Return boolean whether a timestamp occurs on the year end.
BusinessMonthEnd#
BusinessMonthEnd
DateOffset increments between the last business day of the month.
Alias:
BMonthEnd
alias of pandas._libs.tslibs.offsets.BusinessMonthEnd
Properties#
BusinessMonthEnd.freqstr
Return a string representing the frequency.
BusinessMonthEnd.kwds
Return a dict of extra parameters for the offset.
BusinessMonthEnd.name
Return a string representing the base frequency.
BusinessMonthEnd.nanos
BusinessMonthEnd.normalize
BusinessMonthEnd.rule_code
BusinessMonthEnd.n
Methods#
BusinessMonthEnd.apply
BusinessMonthEnd.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
BusinessMonthEnd.copy
Return a copy of the frequency.
BusinessMonthEnd.isAnchored
BusinessMonthEnd.onOffset
BusinessMonthEnd.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
BusinessMonthEnd.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
BusinessMonthEnd.__call__(*args, **kwargs)
Call self as a function.
BusinessMonthEnd.is_month_start
Return boolean whether a timestamp occurs on the month start.
BusinessMonthEnd.is_month_end
Return boolean whether a timestamp occurs on the month end.
BusinessMonthEnd.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
BusinessMonthEnd.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
BusinessMonthEnd.is_year_start
Return boolean whether a timestamp occurs on the year start.
BusinessMonthEnd.is_year_end
Return boolean whether a timestamp occurs on the year end.
BusinessMonthBegin#
BusinessMonthBegin
DateOffset of one month at the first business day.
Alias:
BMonthBegin
alias of pandas._libs.tslibs.offsets.BusinessMonthBegin
Properties#
BusinessMonthBegin.freqstr
Return a string representing the frequency.
BusinessMonthBegin.kwds
Return a dict of extra parameters for the offset.
BusinessMonthBegin.name
Return a string representing the base frequency.
BusinessMonthBegin.nanos
BusinessMonthBegin.normalize
BusinessMonthBegin.rule_code
BusinessMonthBegin.n
Methods#
BusinessMonthBegin.apply
BusinessMonthBegin.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
BusinessMonthBegin.copy
Return a copy of the frequency.
BusinessMonthBegin.isAnchored
BusinessMonthBegin.onOffset
BusinessMonthBegin.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
BusinessMonthBegin.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
BusinessMonthBegin.__call__(*args, **kwargs)
Call self as a function.
BusinessMonthBegin.is_month_start
Return boolean whether a timestamp occurs on the month start.
BusinessMonthBegin.is_month_end
Return boolean whether a timestamp occurs on the month end.
BusinessMonthBegin.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
BusinessMonthBegin.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
BusinessMonthBegin.is_year_start
Return boolean whether a timestamp occurs on the year start.
BusinessMonthBegin.is_year_end
Return boolean whether a timestamp occurs on the year end.
CustomBusinessMonthEnd#
CustomBusinessMonthEnd
Attributes
Alias:
CBMonthEnd
alias of pandas._libs.tslibs.offsets.CustomBusinessMonthEnd
Properties#
CustomBusinessMonthEnd.freqstr
Return a string representing the frequency.
CustomBusinessMonthEnd.kwds
Return a dict of extra parameters for the offset.
CustomBusinessMonthEnd.m_offset
CustomBusinessMonthEnd.name
Return a string representing the base frequency.
CustomBusinessMonthEnd.nanos
CustomBusinessMonthEnd.normalize
CustomBusinessMonthEnd.rule_code
CustomBusinessMonthEnd.n
CustomBusinessMonthEnd.weekmask
CustomBusinessMonthEnd.calendar
CustomBusinessMonthEnd.holidays
Methods#
CustomBusinessMonthEnd.apply
CustomBusinessMonthEnd.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
CustomBusinessMonthEnd.copy
Return a copy of the frequency.
CustomBusinessMonthEnd.isAnchored
CustomBusinessMonthEnd.onOffset
CustomBusinessMonthEnd.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
CustomBusinessMonthEnd.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
CustomBusinessMonthEnd.__call__(*args, **kwargs)
Call self as a function.
CustomBusinessMonthEnd.is_month_start
Return boolean whether a timestamp occurs on the month start.
CustomBusinessMonthEnd.is_month_end
Return boolean whether a timestamp occurs on the month end.
CustomBusinessMonthEnd.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
CustomBusinessMonthEnd.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
CustomBusinessMonthEnd.is_year_start
Return boolean whether a timestamp occurs on the year start.
CustomBusinessMonthEnd.is_year_end
Return boolean whether a timestamp occurs on the year end.
CustomBusinessMonthBegin#
CustomBusinessMonthBegin
Attributes
Alias:
CBMonthBegin
alias of pandas._libs.tslibs.offsets.CustomBusinessMonthBegin
Properties#
CustomBusinessMonthBegin.freqstr
Return a string representing the frequency.
CustomBusinessMonthBegin.kwds
Return a dict of extra parameters for the offset.
CustomBusinessMonthBegin.m_offset
CustomBusinessMonthBegin.name
Return a string representing the base frequency.
CustomBusinessMonthBegin.nanos
CustomBusinessMonthBegin.normalize
CustomBusinessMonthBegin.rule_code
CustomBusinessMonthBegin.n
CustomBusinessMonthBegin.weekmask
CustomBusinessMonthBegin.calendar
CustomBusinessMonthBegin.holidays
Methods#
CustomBusinessMonthBegin.apply
CustomBusinessMonthBegin.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
CustomBusinessMonthBegin.copy
Return a copy of the frequency.
CustomBusinessMonthBegin.isAnchored
CustomBusinessMonthBegin.onOffset
CustomBusinessMonthBegin.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
CustomBusinessMonthBegin.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
CustomBusinessMonthBegin.__call__(*args, ...)
Call self as a function.
CustomBusinessMonthBegin.is_month_start
Return boolean whether a timestamp occurs on the month start.
CustomBusinessMonthBegin.is_month_end
Return boolean whether a timestamp occurs on the month end.
CustomBusinessMonthBegin.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
CustomBusinessMonthBegin.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
CustomBusinessMonthBegin.is_year_start
Return boolean whether a timestamp occurs on the year start.
CustomBusinessMonthBegin.is_year_end
Return boolean whether a timestamp occurs on the year end.
SemiMonthEnd#
SemiMonthEnd
Two DateOffsets per month, repeating on the last day of the month and day_of_month.
Properties#
SemiMonthEnd.freqstr
Return a string representing the frequency.
SemiMonthEnd.kwds
Return a dict of extra parameters for the offset.
SemiMonthEnd.name
Return a string representing the base frequency.
SemiMonthEnd.nanos
SemiMonthEnd.normalize
SemiMonthEnd.rule_code
SemiMonthEnd.n
SemiMonthEnd.day_of_month
Methods#
SemiMonthEnd.apply
SemiMonthEnd.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
SemiMonthEnd.copy
Return a copy of the frequency.
SemiMonthEnd.isAnchored
SemiMonthEnd.onOffset
SemiMonthEnd.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
SemiMonthEnd.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
SemiMonthEnd.__call__(*args, **kwargs)
Call self as a function.
SemiMonthEnd.is_month_start
Return boolean whether a timestamp occurs on the month start.
SemiMonthEnd.is_month_end
Return boolean whether a timestamp occurs on the month end.
SemiMonthEnd.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
SemiMonthEnd.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
SemiMonthEnd.is_year_start
Return boolean whether a timestamp occurs on the year start.
SemiMonthEnd.is_year_end
Return boolean whether a timestamp occurs on the year end.
SemiMonthBegin#
SemiMonthBegin
Two DateOffset's per month repeating on the first day of the month & day_of_month.
Properties#
SemiMonthBegin.freqstr
Return a string representing the frequency.
SemiMonthBegin.kwds
Return a dict of extra parameters for the offset.
SemiMonthBegin.name
Return a string representing the base frequency.
SemiMonthBegin.nanos
SemiMonthBegin.normalize
SemiMonthBegin.rule_code
SemiMonthBegin.n
SemiMonthBegin.day_of_month
Methods#
SemiMonthBegin.apply
SemiMonthBegin.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
SemiMonthBegin.copy
Return a copy of the frequency.
SemiMonthBegin.isAnchored
SemiMonthBegin.onOffset
SemiMonthBegin.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
SemiMonthBegin.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
SemiMonthBegin.__call__(*args, **kwargs)
Call self as a function.
SemiMonthBegin.is_month_start
Return boolean whether a timestamp occurs on the month start.
SemiMonthBegin.is_month_end
Return boolean whether a timestamp occurs on the month end.
SemiMonthBegin.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
SemiMonthBegin.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
SemiMonthBegin.is_year_start
Return boolean whether a timestamp occurs on the year start.
SemiMonthBegin.is_year_end
Return boolean whether a timestamp occurs on the year end.
Week#
Week
Weekly offset.
Properties#
Week.freqstr
Return a string representing the frequency.
Week.kwds
Return a dict of extra parameters for the offset.
Week.name
Return a string representing the base frequency.
Week.nanos
Week.normalize
Week.rule_code
Week.n
Week.weekday
Methods#
Week.apply
Week.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Week.copy
Return a copy of the frequency.
Week.isAnchored
Week.onOffset
Week.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Week.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Week.__call__(*args, **kwargs)
Call self as a function.
Week.is_month_start
Return boolean whether a timestamp occurs on the month start.
Week.is_month_end
Return boolean whether a timestamp occurs on the month end.
Week.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Week.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Week.is_year_start
Return boolean whether a timestamp occurs on the year start.
Week.is_year_end
Return boolean whether a timestamp occurs on the year end.
WeekOfMonth#
WeekOfMonth
Describes monthly dates like "the Tuesday of the 2nd week of each month".
Properties#
WeekOfMonth.freqstr
Return a string representing the frequency.
WeekOfMonth.kwds
Return a dict of extra parameters for the offset.
WeekOfMonth.name
Return a string representing the base frequency.
WeekOfMonth.nanos
WeekOfMonth.normalize
WeekOfMonth.rule_code
WeekOfMonth.n
WeekOfMonth.week
Methods#
WeekOfMonth.apply
WeekOfMonth.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
WeekOfMonth.copy
Return a copy of the frequency.
WeekOfMonth.isAnchored
WeekOfMonth.onOffset
WeekOfMonth.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
WeekOfMonth.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
WeekOfMonth.__call__(*args, **kwargs)
Call self as a function.
WeekOfMonth.weekday
WeekOfMonth.is_month_start
Return boolean whether a timestamp occurs on the month start.
WeekOfMonth.is_month_end
Return boolean whether a timestamp occurs on the month end.
WeekOfMonth.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
WeekOfMonth.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
WeekOfMonth.is_year_start
Return boolean whether a timestamp occurs on the year start.
WeekOfMonth.is_year_end
Return boolean whether a timestamp occurs on the year end.
LastWeekOfMonth#
LastWeekOfMonth
Describes monthly dates in the last week of the month.
Properties#
LastWeekOfMonth.freqstr
Return a string representing the frequency.
LastWeekOfMonth.kwds
Return a dict of extra parameters for the offset.
LastWeekOfMonth.name
Return a string representing the base frequency.
LastWeekOfMonth.nanos
LastWeekOfMonth.normalize
LastWeekOfMonth.rule_code
LastWeekOfMonth.n
LastWeekOfMonth.weekday
LastWeekOfMonth.week
Methods#
LastWeekOfMonth.apply
LastWeekOfMonth.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
LastWeekOfMonth.copy
Return a copy of the frequency.
LastWeekOfMonth.isAnchored
LastWeekOfMonth.onOffset
LastWeekOfMonth.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
LastWeekOfMonth.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
LastWeekOfMonth.__call__(*args, **kwargs)
Call self as a function.
LastWeekOfMonth.is_month_start
Return boolean whether a timestamp occurs on the month start.
LastWeekOfMonth.is_month_end
Return boolean whether a timestamp occurs on the month end.
LastWeekOfMonth.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
LastWeekOfMonth.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
LastWeekOfMonth.is_year_start
Return boolean whether a timestamp occurs on the year start.
LastWeekOfMonth.is_year_end
Return boolean whether a timestamp occurs on the year end.
BQuarterEnd#
BQuarterEnd
DateOffset increments between the last business day of each Quarter.
Properties#
BQuarterEnd.freqstr
Return a string representing the frequency.
BQuarterEnd.kwds
Return a dict of extra parameters for the offset.
BQuarterEnd.name
Return a string representing the base frequency.
BQuarterEnd.nanos
BQuarterEnd.normalize
BQuarterEnd.rule_code
BQuarterEnd.n
BQuarterEnd.startingMonth
Methods#
BQuarterEnd.apply
BQuarterEnd.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
BQuarterEnd.copy
Return a copy of the frequency.
BQuarterEnd.isAnchored
BQuarterEnd.onOffset
BQuarterEnd.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
BQuarterEnd.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
BQuarterEnd.__call__(*args, **kwargs)
Call self as a function.
BQuarterEnd.is_month_start
Return boolean whether a timestamp occurs on the month start.
BQuarterEnd.is_month_end
Return boolean whether a timestamp occurs on the month end.
BQuarterEnd.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
BQuarterEnd.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
BQuarterEnd.is_year_start
Return boolean whether a timestamp occurs on the year start.
BQuarterEnd.is_year_end
Return boolean whether a timestamp occurs on the year end.
BQuarterBegin#
BQuarterBegin
DateOffset increments between the first business day of each Quarter.
Properties#
BQuarterBegin.freqstr
Return a string representing the frequency.
BQuarterBegin.kwds
Return a dict of extra parameters for the offset.
BQuarterBegin.name
Return a string representing the base frequency.
BQuarterBegin.nanos
BQuarterBegin.normalize
BQuarterBegin.rule_code
BQuarterBegin.n
BQuarterBegin.startingMonth
Methods#
BQuarterBegin.apply
BQuarterBegin.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
BQuarterBegin.copy
Return a copy of the frequency.
BQuarterBegin.isAnchored
BQuarterBegin.onOffset
BQuarterBegin.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
BQuarterBegin.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
BQuarterBegin.__call__(*args, **kwargs)
Call self as a function.
BQuarterBegin.is_month_start
Return boolean whether a timestamp occurs on the month start.
BQuarterBegin.is_month_end
Return boolean whether a timestamp occurs on the month end.
BQuarterBegin.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
BQuarterBegin.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
BQuarterBegin.is_year_start
Return boolean whether a timestamp occurs on the year start.
BQuarterBegin.is_year_end
Return boolean whether a timestamp occurs on the year end.
QuarterEnd#
QuarterEnd
DateOffset increments between Quarter end dates.
Properties#
QuarterEnd.freqstr
Return a string representing the frequency.
QuarterEnd.kwds
Return a dict of extra parameters for the offset.
QuarterEnd.name
Return a string representing the base frequency.
QuarterEnd.nanos
QuarterEnd.normalize
QuarterEnd.rule_code
QuarterEnd.n
QuarterEnd.startingMonth
Methods#
QuarterEnd.apply
QuarterEnd.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
QuarterEnd.copy
Return a copy of the frequency.
QuarterEnd.isAnchored
QuarterEnd.onOffset
QuarterEnd.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
QuarterEnd.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
QuarterEnd.__call__(*args, **kwargs)
Call self as a function.
QuarterEnd.is_month_start
Return boolean whether a timestamp occurs on the month start.
QuarterEnd.is_month_end
Return boolean whether a timestamp occurs on the month end.
QuarterEnd.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
QuarterEnd.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
QuarterEnd.is_year_start
Return boolean whether a timestamp occurs on the year start.
QuarterEnd.is_year_end
Return boolean whether a timestamp occurs on the year end.
QuarterBegin#
QuarterBegin
DateOffset increments between Quarter start dates.
Properties#
QuarterBegin.freqstr
Return a string representing the frequency.
QuarterBegin.kwds
Return a dict of extra parameters for the offset.
QuarterBegin.name
Return a string representing the base frequency.
QuarterBegin.nanos
QuarterBegin.normalize
QuarterBegin.rule_code
QuarterBegin.n
QuarterBegin.startingMonth
Methods#
QuarterBegin.apply
QuarterBegin.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
QuarterBegin.copy
Return a copy of the frequency.
QuarterBegin.isAnchored
QuarterBegin.onOffset
QuarterBegin.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
QuarterBegin.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
QuarterBegin.__call__(*args, **kwargs)
Call self as a function.
QuarterBegin.is_month_start
Return boolean whether a timestamp occurs on the month start.
QuarterBegin.is_month_end
Return boolean whether a timestamp occurs on the month end.
QuarterBegin.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
QuarterBegin.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
QuarterBegin.is_year_start
Return boolean whether a timestamp occurs on the year start.
QuarterBegin.is_year_end
Return boolean whether a timestamp occurs on the year end.
BYearEnd#
BYearEnd
DateOffset increments between the last business day of the year.
Properties#
BYearEnd.freqstr
Return a string representing the frequency.
BYearEnd.kwds
Return a dict of extra parameters for the offset.
BYearEnd.name
Return a string representing the base frequency.
BYearEnd.nanos
BYearEnd.normalize
BYearEnd.rule_code
BYearEnd.n
BYearEnd.month
Methods#
BYearEnd.apply
BYearEnd.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
BYearEnd.copy
Return a copy of the frequency.
BYearEnd.isAnchored
BYearEnd.onOffset
BYearEnd.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
BYearEnd.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
BYearEnd.__call__(*args, **kwargs)
Call self as a function.
BYearEnd.is_month_start
Return boolean whether a timestamp occurs on the month start.
BYearEnd.is_month_end
Return boolean whether a timestamp occurs on the month end.
BYearEnd.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
BYearEnd.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
BYearEnd.is_year_start
Return boolean whether a timestamp occurs on the year start.
BYearEnd.is_year_end
Return boolean whether a timestamp occurs on the year end.
BYearBegin#
BYearBegin
DateOffset increments between the first business day of the year.
Properties#
BYearBegin.freqstr
Return a string representing the frequency.
BYearBegin.kwds
Return a dict of extra parameters for the offset.
BYearBegin.name
Return a string representing the base frequency.
BYearBegin.nanos
BYearBegin.normalize
BYearBegin.rule_code
BYearBegin.n
BYearBegin.month
Methods#
BYearBegin.apply
BYearBegin.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
BYearBegin.copy
Return a copy of the frequency.
BYearBegin.isAnchored
BYearBegin.onOffset
BYearBegin.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
BYearBegin.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
BYearBegin.__call__(*args, **kwargs)
Call self as a function.
BYearBegin.is_month_start
Return boolean whether a timestamp occurs on the month start.
BYearBegin.is_month_end
Return boolean whether a timestamp occurs on the month end.
BYearBegin.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
BYearBegin.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
BYearBegin.is_year_start
Return boolean whether a timestamp occurs on the year start.
BYearBegin.is_year_end
Return boolean whether a timestamp occurs on the year end.
YearEnd#
YearEnd
DateOffset increments between calendar year ends.
Properties#
YearEnd.freqstr
Return a string representing the frequency.
YearEnd.kwds
Return a dict of extra parameters for the offset.
YearEnd.name
Return a string representing the base frequency.
YearEnd.nanos
YearEnd.normalize
YearEnd.rule_code
YearEnd.n
YearEnd.month
Methods#
YearEnd.apply
YearEnd.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
YearEnd.copy
Return a copy of the frequency.
YearEnd.isAnchored
YearEnd.onOffset
YearEnd.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
YearEnd.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
YearEnd.__call__(*args, **kwargs)
Call self as a function.
YearEnd.is_month_start
Return boolean whether a timestamp occurs on the month start.
YearEnd.is_month_end
Return boolean whether a timestamp occurs on the month end.
YearEnd.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
YearEnd.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
YearEnd.is_year_start
Return boolean whether a timestamp occurs on the year start.
YearEnd.is_year_end
Return boolean whether a timestamp occurs on the year end.
YearBegin#
YearBegin
DateOffset increments between calendar year begin dates.
Properties#
YearBegin.freqstr
Return a string representing the frequency.
YearBegin.kwds
Return a dict of extra parameters for the offset.
YearBegin.name
Return a string representing the base frequency.
YearBegin.nanos
YearBegin.normalize
YearBegin.rule_code
YearBegin.n
YearBegin.month
Methods#
YearBegin.apply
YearBegin.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
YearBegin.copy
Return a copy of the frequency.
YearBegin.isAnchored
YearBegin.onOffset
YearBegin.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
YearBegin.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
YearBegin.__call__(*args, **kwargs)
Call self as a function.
YearBegin.is_month_start
Return boolean whether a timestamp occurs on the month start.
YearBegin.is_month_end
Return boolean whether a timestamp occurs on the month end.
YearBegin.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
YearBegin.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
YearBegin.is_year_start
Return boolean whether a timestamp occurs on the year start.
YearBegin.is_year_end
Return boolean whether a timestamp occurs on the year end.
FY5253#
FY5253
Describes a 52-53 week fiscal year.
Properties#
FY5253.freqstr
Return a string representing the frequency.
FY5253.kwds
Return a dict of extra parameters for the offset.
FY5253.name
Return a string representing the base frequency.
FY5253.nanos
FY5253.normalize
FY5253.rule_code
FY5253.n
FY5253.startingMonth
FY5253.variation
FY5253.weekday
Methods#
FY5253.apply
FY5253.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
FY5253.copy
Return a copy of the frequency.
FY5253.get_rule_code_suffix
FY5253.get_year_end
FY5253.isAnchored
FY5253.onOffset
FY5253.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
FY5253.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
FY5253.__call__(*args, **kwargs)
Call self as a function.
FY5253.is_month_start
Return boolean whether a timestamp occurs on the month start.
FY5253.is_month_end
Return boolean whether a timestamp occurs on the month end.
FY5253.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
FY5253.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
FY5253.is_year_start
Return boolean whether a timestamp occurs on the year start.
FY5253.is_year_end
Return boolean whether a timestamp occurs on the year end.
FY5253Quarter#
FY5253Quarter
DateOffset increments between business quarter dates for 52-53 week fiscal year.
Properties#
FY5253Quarter.freqstr
Return a string representing the frequency.
FY5253Quarter.kwds
Return a dict of extra parameters for the offset.
FY5253Quarter.name
Return a string representing the base frequency.
FY5253Quarter.nanos
FY5253Quarter.normalize
FY5253Quarter.rule_code
FY5253Quarter.n
FY5253Quarter.qtr_with_extra_week
FY5253Quarter.startingMonth
FY5253Quarter.variation
FY5253Quarter.weekday
Methods#
FY5253Quarter.apply
FY5253Quarter.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
FY5253Quarter.copy
Return a copy of the frequency.
FY5253Quarter.get_rule_code_suffix
FY5253Quarter.get_weeks
FY5253Quarter.isAnchored
FY5253Quarter.onOffset
FY5253Quarter.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
FY5253Quarter.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
FY5253Quarter.year_has_extra_week
FY5253Quarter.__call__(*args, **kwargs)
Call self as a function.
FY5253Quarter.is_month_start
Return boolean whether a timestamp occurs on the month start.
FY5253Quarter.is_month_end
Return boolean whether a timestamp occurs on the month end.
FY5253Quarter.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
FY5253Quarter.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
FY5253Quarter.is_year_start
Return boolean whether a timestamp occurs on the year start.
FY5253Quarter.is_year_end
Return boolean whether a timestamp occurs on the year end.
Easter#
Easter
DateOffset for the Easter holiday using logic defined in dateutil.
Properties#
Easter.freqstr
Return a string representing the frequency.
Easter.kwds
Return a dict of extra parameters for the offset.
Easter.name
Return a string representing the base frequency.
Easter.nanos
Easter.normalize
Easter.rule_code
Easter.n
Methods#
Easter.apply
Easter.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Easter.copy
Return a copy of the frequency.
Easter.isAnchored
Easter.onOffset
Easter.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Easter.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Easter.__call__(*args, **kwargs)
Call self as a function.
Easter.is_month_start
Return boolean whether a timestamp occurs on the month start.
Easter.is_month_end
Return boolean whether a timestamp occurs on the month end.
Easter.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Easter.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Easter.is_year_start
Return boolean whether a timestamp occurs on the year start.
Easter.is_year_end
Return boolean whether a timestamp occurs on the year end.
Tick#
Tick
Attributes
Properties#
Tick.delta
Tick.freqstr
Return a string representing the frequency.
Tick.kwds
Return a dict of extra parameters for the offset.
Tick.name
Return a string representing the base frequency.
Tick.nanos
Return an integer of the total number of nanoseconds.
Tick.normalize
Tick.rule_code
Tick.n
Methods#
Tick.copy
Return a copy of the frequency.
Tick.isAnchored
Tick.onOffset
Tick.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Tick.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Tick.__call__(*args, **kwargs)
Call self as a function.
Tick.apply
Tick.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Tick.is_month_start
Return boolean whether a timestamp occurs on the month start.
Tick.is_month_end
Return boolean whether a timestamp occurs on the month end.
Tick.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Tick.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Tick.is_year_start
Return boolean whether a timestamp occurs on the year start.
Tick.is_year_end
Return boolean whether a timestamp occurs on the year end.
Day#
Day
Attributes
Properties#
Day.delta
Day.freqstr
Return a string representing the frequency.
Day.kwds
Return a dict of extra parameters for the offset.
Day.name
Return a string representing the base frequency.
Day.nanos
Return an integer of the total number of nanoseconds.
Day.normalize
Day.rule_code
Day.n
Methods#
Day.copy
Return a copy of the frequency.
Day.isAnchored
Day.onOffset
Day.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Day.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Day.__call__(*args, **kwargs)
Call self as a function.
Day.apply
Day.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Day.is_month_start
Return boolean whether a timestamp occurs on the month start.
Day.is_month_end
Return boolean whether a timestamp occurs on the month end.
Day.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Day.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Day.is_year_start
Return boolean whether a timestamp occurs on the year start.
Day.is_year_end
Return boolean whether a timestamp occurs on the year end.
Hour#
Hour
Attributes
Properties#
Hour.delta
Hour.freqstr
Return a string representing the frequency.
Hour.kwds
Return a dict of extra parameters for the offset.
Hour.name
Return a string representing the base frequency.
Hour.nanos
Return an integer of the total number of nanoseconds.
Hour.normalize
Hour.rule_code
Hour.n
Methods#
Hour.copy
Return a copy of the frequency.
Hour.isAnchored
Hour.onOffset
Hour.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Hour.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Hour.__call__(*args, **kwargs)
Call self as a function.
Hour.apply
Hour.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Hour.is_month_start
Return boolean whether a timestamp occurs on the month start.
Hour.is_month_end
Return boolean whether a timestamp occurs on the month end.
Hour.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Hour.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Hour.is_year_start
Return boolean whether a timestamp occurs on the year start.
Hour.is_year_end
Return boolean whether a timestamp occurs on the year end.
Minute#
Minute
Attributes
Properties#
Minute.delta
Minute.freqstr
Return a string representing the frequency.
Minute.kwds
Return a dict of extra parameters for the offset.
Minute.name
Return a string representing the base frequency.
Minute.nanos
Return an integer of the total number of nanoseconds.
Minute.normalize
Minute.rule_code
Minute.n
Methods#
Minute.copy
Return a copy of the frequency.
Minute.isAnchored
Minute.onOffset
Minute.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Minute.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Minute.__call__(*args, **kwargs)
Call self as a function.
Minute.apply
Minute.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Minute.is_month_start
Return boolean whether a timestamp occurs on the month start.
Minute.is_month_end
Return boolean whether a timestamp occurs on the month end.
Minute.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Minute.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Minute.is_year_start
Return boolean whether a timestamp occurs on the year start.
Minute.is_year_end
Return boolean whether a timestamp occurs on the year end.
Second#
Second
Attributes
Properties#
Second.delta
Second.freqstr
Return a string representing the frequency.
Second.kwds
Return a dict of extra parameters for the offset.
Second.name
Return a string representing the base frequency.
Second.nanos
Return an integer of the total number of nanoseconds.
Second.normalize
Second.rule_code
Second.n
Methods#
Second.copy
Return a copy of the frequency.
Second.isAnchored
Second.onOffset
Second.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Second.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Second.__call__(*args, **kwargs)
Call self as a function.
Second.apply
Second.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Second.is_month_start
Return boolean whether a timestamp occurs on the month start.
Second.is_month_end
Return boolean whether a timestamp occurs on the month end.
Second.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Second.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Second.is_year_start
Return boolean whether a timestamp occurs on the year start.
Second.is_year_end
Return boolean whether a timestamp occurs on the year end.
Milli#
Milli
Attributes
Properties#
Milli.delta
Milli.freqstr
Return a string representing the frequency.
Milli.kwds
Return a dict of extra parameters for the offset.
Milli.name
Return a string representing the base frequency.
Milli.nanos
Return an integer of the total number of nanoseconds.
Milli.normalize
Milli.rule_code
Milli.n
Methods#
Milli.copy
Return a copy of the frequency.
Milli.isAnchored
Milli.onOffset
Milli.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Milli.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Milli.__call__(*args, **kwargs)
Call self as a function.
Milli.apply
Milli.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Milli.is_month_start
Return boolean whether a timestamp occurs on the month start.
Milli.is_month_end
Return boolean whether a timestamp occurs on the month end.
Milli.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Milli.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Milli.is_year_start
Return boolean whether a timestamp occurs on the year start.
Milli.is_year_end
Return boolean whether a timestamp occurs on the year end.
Micro#
Micro
Attributes
Properties#
Micro.delta
Micro.freqstr
Return a string representing the frequency.
Micro.kwds
Return a dict of extra parameters for the offset.
Micro.name
Return a string representing the base frequency.
Micro.nanos
Return an integer of the total number of nanoseconds.
Micro.normalize
Micro.rule_code
Micro.n
Methods#
Micro.copy
Return a copy of the frequency.
Micro.isAnchored
Micro.onOffset
Micro.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Micro.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Micro.__call__(*args, **kwargs)
Call self as a function.
Micro.apply
Micro.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Micro.is_month_start
Return boolean whether a timestamp occurs on the month start.
Micro.is_month_end
Return boolean whether a timestamp occurs on the month end.
Micro.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Micro.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Micro.is_year_start
Return boolean whether a timestamp occurs on the year start.
Micro.is_year_end
Return boolean whether a timestamp occurs on the year end.
Nano#
Nano
Attributes
Properties#
Nano.delta
Nano.freqstr
Return a string representing the frequency.
Nano.kwds
Return a dict of extra parameters for the offset.
Nano.name
Return a string representing the base frequency.
Nano.nanos
Return an integer of the total number of nanoseconds.
Nano.normalize
Nano.rule_code
Nano.n
Methods#
Nano.copy
Return a copy of the frequency.
Nano.isAnchored
Nano.onOffset
Nano.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Nano.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Nano.__call__(*args, **kwargs)
Call self as a function.
Nano.apply
Nano.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Nano.is_month_start
Return boolean whether a timestamp occurs on the month start.
Nano.is_month_end
Return boolean whether a timestamp occurs on the month end.
Nano.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Nano.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Nano.is_year_start
Return boolean whether a timestamp occurs on the year start.
Nano.is_year_end
Return boolean whether a timestamp occurs on the year end.
Frequencies#
to_offset
Return DateOffset object from string or datetime.timedelta object.
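For illustration, a small sketch (not part of the original index page) of building an offset from a frequency string with to_offset and shifting a timestamp with one of the offset classes listed above:
>>> from pandas.tseries.frequencies import to_offset
>>> to_offset("2W")
<2 * Weeks: weekday=6>
>>> pd.Timestamp("2022-01-01") + pd.offsets.BusinessDay(3)
Timestamp('2022-01-05 00:00:00')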
|
reference/offset_frequency.html
|
pandas.concat
|
`pandas.concat`
Concatenate pandas objects along a particular axis.
Allows optional set logic along the other axes.
```
>>> s1 = pd.Series(['a', 'b'])
>>> s2 = pd.Series(['c', 'd'])
>>> pd.concat([s1, s2])
0 a
1 b
0 c
1 d
dtype: object
```
|
pandas.concat(objs, *, axis=0, join='outer', ignore_index=False, keys=None, levels=None, names=None, verify_integrity=False, sort=False, copy=True)[source]#
Concatenate pandas objects along a particular axis.
Allows optional set logic along the other axes.
Can also add a layer of hierarchical indexing on the concatenation axis,
which may be useful if the labels are the same (or overlapping) on
the passed axis number.
Parameters
objsa sequence or mapping of Series or DataFrame objectsIf a mapping is passed, the sorted keys will be used as the keys
argument, unless it is passed, in which case the values will be
selected (see below). Any None objects will be dropped silently unless
they are all None in which case a ValueError will be raised.
axis{0/’index’, 1/’columns’}, default 0The axis to concatenate along.
join{‘inner’, ‘outer’}, default ‘outer’How to handle indexes on other axis (or axes).
ignore_indexbool, default FalseIf True, do not use the index values along the concatenation axis. The
resulting axis will be labeled 0, …, n - 1. This is useful if you are
concatenating objects where the concatenation axis does not have
meaningful indexing information. Note the index values on the other
axes are still respected in the join.
keyssequence, default NoneIf multiple levels passed, should contain tuples. Construct
hierarchical index using the passed keys as the outermost level.
levelslist of sequences, default NoneSpecific levels (unique values) to use for constructing a
MultiIndex. Otherwise they will be inferred from the keys.
nameslist, default NoneNames for the levels in the resulting hierarchical index.
verify_integritybool, default FalseCheck whether the new concatenated axis contains duplicates. This can
be very expensive relative to the actual data concatenation.
sortbool, default FalseSort non-concatenation axis if it is not already aligned when join
is ‘outer’.
This has no effect when join='inner', which already preserves
the order of the non-concatenation axis.
Changed in version 1.0.0: Changed to not sort by default.
copybool, default TrueIf False, do not copy data unnecessarily.
Returns
object, type of objs
When concatenating all Series along the index (axis=0), a
Series is returned. When objs contains at least one
DataFrame, a DataFrame is returned. When concatenating along
the columns (axis=1), a DataFrame is returned.
See also
DataFrame.join : Join DataFrames using indexes.
DataFrame.merge : Merge DataFrames by indexes or columns.
Notes
The keys, levels, and names arguments are all optional.
A walkthrough of how this method fits in with other tools for combining
pandas objects can be found here.
It is not recommended to build DataFrames by adding single rows in a
for loop. Build a list of rows and make a DataFrame in a single concat.
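To make that note concrete, a minimal sketch (not part of the original docstring): collect the pieces in a list, then concatenate once.
>>> rows = [pd.DataFrame({"a": [i], "b": [i ** 2]}) for i in range(3)]
>>> pd.concat(rows, ignore_index=True) # one concat, not one per loop iteration
a b
0 0 0
1 1 1
2 2 4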
Examples
Combine two Series.
>>> s1 = pd.Series(['a', 'b'])
>>> s2 = pd.Series(['c', 'd'])
>>> pd.concat([s1, s2])
0 a
1 b
0 c
1 d
dtype: object
Clear the existing index and reset it in the result
by setting the ignore_index option to True.
>>> pd.concat([s1, s2], ignore_index=True)
0 a
1 b
2 c
3 d
dtype: object
Add a hierarchical index at the outermost level of
the data with the keys option.
>>> pd.concat([s1, s2], keys=['s1', 's2'])
s1 0 a
1 b
s2 0 c
1 d
dtype: object
Label the index keys you create with the names option.
>>> pd.concat([s1, s2], keys=['s1', 's2'],
... names=['Series name', 'Row ID'])
Series name Row ID
s1 0 a
1 b
s2 0 c
1 d
dtype: object
Combine two DataFrame objects with identical columns.
>>> df1 = pd.DataFrame([['a', 1], ['b', 2]],
... columns=['letter', 'number'])
>>> df1
letter number
0 a 1
1 b 2
>>> df2 = pd.DataFrame([['c', 3], ['d', 4]],
... columns=['letter', 'number'])
>>> df2
letter number
0 c 3
1 d 4
>>> pd.concat([df1, df2])
letter number
0 a 1
1 b 2
0 c 3
1 d 4
Combine DataFrame objects with overlapping columns
and return everything. Columns outside the intersection will
be filled with NaN values.
>>> df3 = pd.DataFrame([['c', 3, 'cat'], ['d', 4, 'dog']],
... columns=['letter', 'number', 'animal'])
>>> df3
letter number animal
0 c 3 cat
1 d 4 dog
>>> pd.concat([df1, df3], sort=False)
letter number animal
0 a 1 NaN
1 b 2 NaN
0 c 3 cat
1 d 4 dog
Combine DataFrame objects with overlapping columns
and return only those that are shared by passing inner to
the join keyword argument.
>>> pd.concat([df1, df3], join="inner")
letter number
0 a 1
1 b 2
0 c 3
1 d 4
Combine DataFrame objects horizontally along the x axis by
passing in axis=1.
>>> df4 = pd.DataFrame([['bird', 'polly'], ['monkey', 'george']],
... columns=['animal', 'name'])
>>> pd.concat([df1, df4], axis=1)
letter number animal name
0 a 1 bird polly
1 b 2 monkey george
Prevent the result from including duplicate index values with the
verify_integrity option.
>>> df5 = pd.DataFrame([1], index=['a'])
>>> df5
0
a 1
>>> df6 = pd.DataFrame([2], index=['a'])
>>> df6
0
a 2
>>> pd.concat([df5, df6], verify_integrity=True)
Traceback (most recent call last):
...
ValueError: Indexes have overlapping values: ['a']
Append a single row to the end of a DataFrame object.
>>> df7 = pd.DataFrame({'a': 1, 'b': 2}, index=[0])
>>> df7
a b
0 1 2
>>> new_row = pd.Series({'a': 3, 'b': 4})
>>> new_row
a 3
b 4
dtype: int64
>>> pd.concat([df7, new_row.to_frame().T], ignore_index=True)
a b
0 1 2
1 3 4
|
reference/api/pandas.concat.html
|
pandas.tseries.offsets.CustomBusinessMonthEnd.normalize
|
pandas.tseries.offsets.CustomBusinessMonthEnd.normalize
|
CustomBusinessMonthEnd.normalize#
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthEnd.normalize.html
|
pandas.core.groupby.GroupBy.bfill
|
`pandas.core.groupby.GroupBy.bfill`
Backward fill the values.
|
final GroupBy.bfill(limit=None)[source]#
Backward fill the values.
Parameters
limit : int, optional
Limit of how many values to fill.
Returns
Series or DataFrame
Object with missing values filled.
See also
Series.bfill : Backward fill the missing values in the dataset.
DataFrame.bfill : Backward fill the missing values in the dataset.
Series.fillna : Fill NaN values of a Series.
DataFrame.fillna : Fill NaN values of a DataFrame.
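The page ships no example; a minimal sketch of backward filling within groups (hypothetical data):
>>> df = pd.DataFrame({"key": ["A", "A", "B", "B"],
...                    "val": [np.nan, 2.0, np.nan, 4.0]})
>>> df.groupby("key")["val"].bfill()
0 2.0
1 2.0
2 4.0
3 4.0
Name: val, dtype: float64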
|
reference/api/pandas.core.groupby.GroupBy.bfill.html
|
pandas.Series.cat.ordered
|
`pandas.Series.cat.ordered`
Whether the categories have an ordered relationship.
|
Series.cat.ordered[source]#
Whether the categories have an ordered relationship.
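A short sketch (not from the original page): ordered is True only when the categorical was created as ordered.
>>> pd.Series(pd.Categorical(["a", "b", "a"], ordered=True)).cat.ordered
True
>>> pd.Series(["a", "b"], dtype="category").cat.ordered
False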
|
reference/api/pandas.Series.cat.ordered.html
|
pandas.io.formats.style.Styler.pipe
|
`pandas.io.formats.style.Styler.pipe`
Apply func(self, *args, **kwargs), and return the result.
```
>>> def some_highlights(styler, min_color="red", max_color="blue"):
... styler.highlight_min(color=min_color, axis=None)
... styler.highlight_max(color=max_color, axis=None)
... styler.highlight_null()
... return styler
>>> df = pd.DataFrame([[1, 2, 3, pd.NA], [pd.NA, 4, 5, 6]], dtype="Int64")
>>> df.style.pipe(some_highlights, min_color="green")
```
|
Styler.pipe(func, *args, **kwargs)[source]#
Apply func(self, *args, **kwargs), and return the result.
Parameters
funcfunctionFunction to apply to the Styler. Alternatively, a
(callable, keyword) tuple where keyword is a string
indicating the keyword of callable that expects the Styler.
*argsoptionalArguments passed to func.
**kwargsoptionalA dictionary of keyword arguments passed into func.
Returns
objectThe value returned by func.
See also
DataFrame.pipeAnalogous method for DataFrame.
Styler.applyApply a CSS-styling function column-wise, row-wise, or table-wise.
Notes
Like DataFrame.pipe(), this method can simplify the
application of several user-defined functions to a styler. Instead
of writing:
f(g(df.style.format(precision=3), arg1=a), arg2=b, arg3=c)
users can write:
(df.style.format(precision=3)
.pipe(g, arg1=a)
.pipe(f, arg2=b, arg3=c))
In particular, this allows users to define functions that take a
styler object, along with other parameters, and return the styler after
making styling changes (such as calling Styler.apply() or
Styler.set_properties()).
Examples
Common Use
A common usage pattern is to pre-define styling operations which
can be easily applied to a generic styler in a single pipe call.
>>> def some_highlights(styler, min_color="red", max_color="blue"):
... styler.highlight_min(color=min_color, axis=None)
... styler.highlight_max(color=max_color, axis=None)
... styler.highlight_null()
... return styler
>>> df = pd.DataFrame([[1, 2, 3, pd.NA], [pd.NA, 4, 5, 6]], dtype="Int64")
>>> df.style.pipe(some_highlights, min_color="green")
Since the method returns a Styler object it can be chained with other
methods as if applying the underlying highlighters directly.
>>> (df.style.format("{:.1f}")
...  .pipe(some_highlights, min_color="green")
...  .highlight_between(left=2, right=5))
Advanced Use
Sometimes it is necessary to pre-define styling functions that rely on the
styler, its data, or its context. Since Styler.use and Styler.export are
designed to be non-data-dependent, they cannot be used for this purpose.
Additionally, the Styler.apply and Styler.format type methods are not context
aware, so a solution is to use pipe to dynamically wrap this functionality.
Suppose we want to code a generic styling function that highlights the final
level of a MultiIndex. The number of levels in the Index is dynamic so we
need the Styler context to define the level.
>>> def highlight_last_level(styler):
... return styler.apply_index(
... lambda v: "background-color: pink; color: yellow", axis="columns",
... level=styler.columns.nlevels-1
... )
>>> df.columns = pd.MultiIndex.from_product([["A", "B"], ["X", "Y"]])
>>> df.style.pipe(highlight_last_level)
Additionally suppose we want to highlight a column header if there is any
missing data in that column.
In this case we need the data object itself to determine the effect on the
column headers.
>>> def highlight_header_missing(styler, level):
... def dynamic_highlight(s):
... return np.where(
... styler.data.isna().any(), "background-color: red;", ""
... )
... return styler.apply_index(dynamic_highlight, axis=1, level=level)
>>> df.style.pipe(highlight_header_missing, level=1)
|
reference/api/pandas.io.formats.style.Styler.pipe.html
|
pandas.tseries.offsets.Minute.delta
|
pandas.tseries.offsets.Minute.delta
|
Minute.delta#
|
reference/api/pandas.tseries.offsets.Minute.delta.html
|
pandas.tseries.offsets.Micro.rule_code
|
pandas.tseries.offsets.Micro.rule_code
|
Micro.rule_code#
|
reference/api/pandas.tseries.offsets.Micro.rule_code.html
|
pandas.tseries.offsets.QuarterBegin.normalize
|
pandas.tseries.offsets.QuarterBegin.normalize
|
QuarterBegin.normalize#
|
reference/api/pandas.tseries.offsets.QuarterBegin.normalize.html
|
pandas.tseries.offsets.LastWeekOfMonth.freqstr
|
`pandas.tseries.offsets.LastWeekOfMonth.freqstr`
Return a string representing the frequency.
```
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
```
|
LastWeekOfMonth.freqstr#
Return a string representing the frequency.
Examples
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
>>> pd.offsets.BusinessHour(2).freqstr
'2BH'
>>> pd.offsets.Nano().freqstr
'N'
>>> pd.offsets.Nano(-3).freqstr
'-3N'
|
reference/api/pandas.tseries.offsets.LastWeekOfMonth.freqstr.html
|
pandas.UInt64Index
|
`pandas.UInt64Index`
Immutable sequence used for indexing and alignment.
|
class pandas.UInt64Index(data=None, dtype=None, copy=False, name=None)[source]#
Immutable sequence used for indexing and alignment.
Deprecated since version 1.4.0: In pandas v2.0 UInt64Index will be removed and NumericIndex used instead.
UInt64Index will remain fully functional for the duration of pandas 1.x.
The basic object storing axis labels for all pandas objects.
UInt64Index is a special case of Index with purely unsigned integer labels.
Parameters
dataarray-like (1-dimensional)
dtypeNumPy dtype (default: uint64)
copyboolMake a copy of input ndarray.
nameobjectName to be stored in the index.
See also
IndexThe base pandas Index type.
NumericIndexIndex of numpy int/uint/float data.
Notes
An Index instance can only contain hashable objects.
Attributes
None
Methods
None
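Examples
A minimal construction sketch (illustrative; applies to pandas 1.x, where this class still exists):
>>> pd.UInt64Index([1, 2, 3])
UInt64Index([1, 2, 3], dtype='uint64')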
|
reference/api/pandas.UInt64Index.html
|
pandas.Series.to_json
|
`pandas.Series.to_json`
Convert the object to a JSON string.
Note NaN’s and None will be converted to null and datetime objects
will be converted to UNIX timestamps.
```
>>> import json
>>> df = pd.DataFrame(
... [["a", "b"], ["c", "d"]],
... index=["row 1", "row 2"],
... columns=["col 1", "col 2"],
... )
```
|
Series.to_json(path_or_buf=None, orient=None, date_format=None, double_precision=10, force_ascii=True, date_unit='ms', default_handler=None, lines=False, compression='infer', index=True, indent=None, storage_options=None)[source]#
Convert the object to a JSON string.
Note NaN’s and None will be converted to null and datetime objects
will be converted to UNIX timestamps.
Parameters
path_or_bufstr, path object, file-like object, or None, default NoneString, path object (implementing os.PathLike[str]), or file-like
object implementing a write() function. If None, the result is
returned as a string.
orientstrIndication of expected JSON string format.
Series:
default is ‘index’
allowed values are: {‘split’, ‘records’, ‘index’, ‘table’}.
DataFrame:
default is ‘columns’
allowed values are: {‘split’, ‘records’, ‘index’, ‘columns’,
‘values’, ‘table’}.
The format of the JSON string:
‘split’ : dict like {‘index’ -> [index], ‘columns’ -> [columns],
‘data’ -> [values]}
‘records’ : list like [{column -> value}, … , {column -> value}]
‘index’ : dict like {index -> {column -> value}}
‘columns’ : dict like {column -> {index -> value}}
‘values’ : just the values array
‘table’ : dict like {‘schema’: {schema}, ‘data’: {data}}
Describing the data, where data component is like orient='records'.
date_format{None, ‘epoch’, ‘iso’}Type of date conversion. ‘epoch’ = epoch milliseconds,
‘iso’ = ISO8601. The default depends on the orient. For
orient='table', the default is ‘iso’. For all other orients,
the default is ‘epoch’.
double_precisionint, default 10The number of decimal places to use when encoding
floating point values.
force_asciibool, default TrueForce encoded string to be ASCII.
date_unitstr, default ‘ms’ (milliseconds)The time unit to encode to, governs timestamp and ISO8601
precision. One of ‘s’, ‘ms’, ‘us’, ‘ns’ for second, millisecond,
microsecond, and nanosecond respectively.
default_handlercallable, default NoneHandler to call if object cannot otherwise be converted to a
suitable format for JSON. Should receive a single argument which is
the object to convert and return a serialisable object.
linesbool, default FalseIf ‘orient’ is ‘records’ write out line-delimited json format. Will
throw ValueError if incorrect ‘orient’ since others are not
list-like.
compressionstr or dict, default ‘infer’For on-the-fly compression of the output data. If ‘infer’ and ‘path_or_buf’ is
path-like, then detect compression from the following extensions: ‘.gz’,
‘.bz2’, ‘.zip’, ‘.xz’, ‘.zst’, ‘.tar’, ‘.tar.gz’, ‘.tar.xz’ or ‘.tar.bz2’
(otherwise no compression).
Set to None for no compression.
Can also be a dict with key 'method' set
to one of {'zip', 'gzip', 'bz2', 'zstd', 'tar'} and other
key-value pairs are forwarded to
zipfile.ZipFile, gzip.GzipFile,
bz2.BZ2File, zstandard.ZstdCompressor or
tarfile.TarFile, respectively.
As an example, the following could be passed for faster compression and to create
a reproducible gzip archive:
compression={'method': 'gzip', 'compresslevel': 1, 'mtime': 1}.
New in version 1.5.0: Added support for .tar files.
Changed in version 1.4.0: Zstandard support.
indexbool, default TrueWhether to include the index values in the JSON string. Not
including the index (index=False) is only supported when
orient is ‘split’ or ‘table’.
indentint, optionalLength of whitespace used to indent each record.
New in version 1.0.0.
storage_optionsdict, optionalExtra options that make sense for a particular storage connection, e.g.
host, port, username, password, etc. For HTTP(S) URLs the key-value pairs
are forwarded to urllib.request.Request as header options. For other
URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are
forwarded to fsspec.open. Please see fsspec and urllib for more
details, and for more examples on storage options refer here.
New in version 1.2.0.
Returns
None or strIf path_or_buf is None, returns the resulting json format as a
string. Otherwise returns None.
See also
read_jsonConvert a JSON string to pandas object.
Notes
The behavior of indent=0 varies from the stdlib, which does not
indent the output but does insert newlines. Currently, indent=0
and the default indent=None are equivalent in pandas, though this
may change in a future release.
orient='table' contains a ‘pandas_version’ field under ‘schema’.
This stores the version of pandas used in the latest revision of the
schema.
Examples
>>> import json
>>> df = pd.DataFrame(
... [["a", "b"], ["c", "d"]],
... index=["row 1", "row 2"],
... columns=["col 1", "col 2"],
... )
>>> result = df.to_json(orient="split")
>>> parsed = json.loads(result)
>>> json.dumps(parsed, indent=4)
{
"columns": [
"col 1",
"col 2"
],
"index": [
"row 1",
"row 2"
],
"data": [
[
"a",
"b"
],
[
"c",
"d"
]
]
}
Encoding/decoding a Dataframe using 'records' formatted JSON.
Note that index labels are not preserved with this encoding.
>>> result = df.to_json(orient="records")
>>> parsed = json.loads(result)
>>> json.dumps(parsed, indent=4)
[
{
"col 1": "a",
"col 2": "b"
},
{
"col 1": "c",
"col 2": "d"
}
]
Encoding/decoding a Dataframe using 'index' formatted JSON:
>>> result = df.to_json(orient="index")
>>> parsed = json.loads(result)
>>> json.dumps(parsed, indent=4)
{
"row 1": {
"col 1": "a",
"col 2": "b"
},
"row 2": {
"col 1": "c",
"col 2": "d"
}
}
Encoding/decoding a Dataframe using 'columns' formatted JSON:
>>> result = df.to_json(orient="columns")
>>> parsed = json.loads(result)
>>> json.dumps(parsed, indent=4)
{
"col 1": {
"row 1": "a",
"row 2": "c"
},
"col 2": {
"row 1": "b",
"row 2": "d"
}
}
Encoding/decoding a Dataframe using 'values' formatted JSON:
>>> result = df.to_json(orient="values")
>>> parsed = json.loads(result)
>>> json.dumps(parsed, indent=4)
[
[
"a",
"b"
],
[
"c",
"d"
]
]
Encoding with Table Schema:
>>> result = df.to_json(orient="table")
>>> parsed = json.loads(result)
>>> json.dumps(parsed, indent=4)
{
"schema": {
"fields": [
{
"name": "index",
"type": "string"
},
{
"name": "col 1",
"type": "string"
},
{
"name": "col 2",
"type": "string"
}
],
"primaryKey": [
"index"
],
"pandas_version": "1.4.0"
},
"data": [
{
"index": "row 1",
"col 1": "a",
"col 2": "b"
},
{
"index": "row 2",
"col 1": "c",
"col 2": "d"
}
]
}
|
reference/api/pandas.Series.to_json.html
|
pandas.Series.str.index
|
`pandas.Series.str.index`
Return lowest indexes in each string in Series/Index.
|
Series.str.index(sub, start=0, end=None)[source]#
Return lowest indexes in each string in Series/Index.
Each of the returned indexes corresponds to the position where the
substring is fully contained between [start:end]. This is the same
as str.find except instead of returning -1, it raises a
ValueError when the substring is not found. Equivalent to standard
str.index.
Parameters
substrSubstring being searched.
startintLeft edge index.
endintRight edge index.
Returns
Series or Index of object
See also
rindexReturn highest indexes in each string.
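Examples
A short illustration (not part of the original page):
>>> s = pd.Series(['apple', 'banana'])
>>> s.str.index('a')
0    0
1    1
dtype: int64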
|
reference/api/pandas.Series.str.index.html
|
pandas.DataFrame.median
|
`pandas.DataFrame.median`
Return the median of the values over the requested axis.
|
DataFrame.median(axis=_NoDefault.no_default, skipna=True, level=None, numeric_only=None, **kwargs)[source]#
Return the median of the values over the requested axis.
Parameters
axis{index (0), columns (1)}Axis for the function to be applied on.
For Series this parameter is unused and defaults to 0.
skipnabool, default TrueExclude NA/null values when computing the result.
levelint or level name, default NoneIf the axis is a MultiIndex (hierarchical), count along a
particular level, collapsing into a Series.
Deprecated since version 1.3.0: The level keyword is deprecated. Use groupby instead.
numeric_onlybool, default NoneInclude only float, int, boolean columns. If None, will attempt to use
everything, then use only numeric data. Not implemented for Series.
Deprecated since version 1.5.0: Specifying numeric_only=None is deprecated. The default value will be
False in a future version of pandas.
**kwargsAdditional keyword arguments to be passed to the function.
Returns
Series or DataFrame (if level specified)
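Examples
A short illustration (not part of the original page):
>>> df = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
>>> df.median()
a    2.0
b    5.0
dtype: float64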
|
reference/api/pandas.DataFrame.median.html
|
pandas.isnull
|
`pandas.isnull`
Detect missing values for an array-like object.
```
>>> pd.isna('dog')
False
```
|
pandas.isnull(obj)[source]#
Detect missing values for an array-like object.
This function takes a scalar or array-like object and indicates
whether values are missing (NaN in numeric arrays, None or NaN
in object arrays, NaT in datetimelike).
Parameters
objscalar or array-likeObject to check for null or missing values.
Returns
bool or array-like of boolFor scalar input, returns a scalar boolean.
For array input, returns an array of boolean indicating whether each
corresponding element is missing.
See also
notnaBoolean inverse of pandas.isna.
Series.isnaDetect missing values in a Series.
DataFrame.isnaDetect missing values in a DataFrame.
Index.isnaDetect missing values in an Index.
Examples
Scalar arguments (including strings) result in a scalar boolean.
>>> pd.isna('dog')
False
>>> pd.isna(pd.NA)
True
>>> pd.isna(np.nan)
True
ndarrays result in an ndarray of booleans.
>>> array = np.array([[1, np.nan, 3], [4, 5, np.nan]])
>>> array
array([[ 1., nan, 3.],
[ 4., 5., nan]])
>>> pd.isna(array)
array([[False, True, False],
[False, False, True]])
For indexes, an ndarray of booleans is returned.
>>> index = pd.DatetimeIndex(["2017-07-05", "2017-07-06", None,
... "2017-07-08"])
>>> index
DatetimeIndex(['2017-07-05', '2017-07-06', 'NaT', '2017-07-08'],
dtype='datetime64[ns]', freq=None)
>>> pd.isna(index)
array([False, False, True, False])
For Series and DataFrame, the same type is returned, containing booleans.
>>> df = pd.DataFrame([['ant', 'bee', 'cat'], ['dog', None, 'fly']])
>>> df
0 1 2
0 ant bee cat
1 dog None fly
>>> pd.isna(df)
0 1 2
0 False False False
1 False True False
>>> pd.isna(df[1])
0 False
1 True
Name: 1, dtype: bool
|
reference/api/pandas.isnull.html
|
pandas.Timestamp.utcoffset
|
`pandas.Timestamp.utcoffset`
Return self.tzinfo.utcoffset(self).
|
Timestamp.utcoffset()#
Return self.tzinfo.utcoffset(self).
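Examples
An illustrative call (the timezone is chosen arbitrarily; Brussels is UTC+1 in January):
>>> ts = pd.Timestamp('2023-01-01 10:00:00', tz='Europe/Brussels')
>>> ts.utcoffset()
datetime.timedelta(seconds=3600)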
|
reference/api/pandas.Timestamp.utcoffset.html
|
pandas.Series.mode
|
`pandas.Series.mode`
Return the mode(s) of the Series.
The mode is the value that appears most often. There can be multiple modes.
|
Series.mode(dropna=True)[source]#
Return the mode(s) of the Series.
The mode is the value that appears most often. There can be multiple modes.
Always returns Series even if only one value is returned.
Parameters
dropnabool, default TrueDon’t consider counts of NaN/NaT.
Returns
SeriesModes of the Series in sorted order.
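Examples
A short sketch (illustrative data; assumes numpy imported as np):
>>> pd.Series([2, 4, 2, 2, 4, np.nan]).mode()
0    2.0
dtype: float64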
|
reference/api/pandas.Series.mode.html
|
pandas.Series.mul
|
`pandas.Series.mul`
Return Multiplication of series and other, element-wise (binary operator mul).
```
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.multiply(b, fill_value=0)
a 1.0
b 0.0
c 0.0
d 0.0
e NaN
dtype: float64
```
|
Series.mul(other, level=None, fill_value=None, axis=0)[source]#
Return Multiplication of series and other, element-wise (binary operator mul).
Equivalent to series * other, but with support to substitute a fill_value for
missing data in either one of the inputs.
Parameters
otherSeries or scalar value
levelint or nameBroadcast across a level, matching Index values on the
passed MultiIndex level.
fill_valueNone or float value, default None (NaN)Fill existing missing (NaN) values, and any new element needed for
successful Series alignment, with this value before computation.
If data in both corresponding Series locations is missing
the result of filling (at that location) will be missing.
axis{0 or ‘index’}Unused. Parameter needed for compatibility with DataFrame.
Returns
SeriesThe result of the operation.
See also
Series.rmulReverse of the Multiplication operator, see Python documentation for more details.
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.multiply(b, fill_value=0)
a 1.0
b 0.0
c 0.0
d 0.0
e NaN
dtype: float64
|
reference/api/pandas.Series.mul.html
|
pandas.tseries.offsets.Milli.apply_index
|
`pandas.tseries.offsets.Milli.apply_index`
Vectorized apply of DateOffset to DatetimeIndex.
|
Milli.apply_index()#
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead.
Parameters
indexDatetimeIndex
Returns
DatetimeIndex
Raises
NotImplementedErrorWhen the specific offset subclass does not have a vectorized
implementation.
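As the deprecation note says, plain addition replaces this method. An illustrative sketch:
>>> dtindex = pd.DatetimeIndex(["2023-01-01", "2023-01-02"])
>>> shifted = pd.offsets.Milli(5) + dtindex
>>> shifted[0]
Timestamp('2023-01-01 00:00:00.005000')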
|
reference/api/pandas.tseries.offsets.Milli.apply_index.html
|
pandas.tseries.offsets.FY5253Quarter.onOffset
|
pandas.tseries.offsets.FY5253Quarter.onOffset
|
FY5253Quarter.onOffset()#
|
reference/api/pandas.tseries.offsets.FY5253Quarter.onOffset.html
|
pandas.Series.dt.to_period
|
`pandas.Series.dt.to_period`
Cast to PeriodArray/Index at a particular frequency.
```
>>> df = pd.DataFrame({"y": [1, 2, 3]},
... index=pd.to_datetime(["2000-03-31 00:00:00",
... "2000-05-31 00:00:00",
... "2000-08-31 00:00:00"]))
>>> df.index.to_period("M")
PeriodIndex(['2000-03', '2000-05', '2000-08'],
dtype='period[M]')
```
|
Series.dt.to_period(*args, **kwargs)[source]#
Cast to PeriodArray/Index at a particular frequency.
Converts DatetimeArray/Index to PeriodArray/Index.
Parameters
freqstr or Offset, optionalOne of pandas’ offset strings
or an Offset object. Will be inferred by default.
Returns
PeriodArray/Index
Raises
ValueErrorWhen converting a DatetimeArray/Index with non-regular values,
so that a frequency cannot be inferred.
See also
PeriodIndexImmutable ndarray holding ordinal values.
DatetimeIndex.to_pydatetimeReturn DatetimeIndex as object.
Examples
>>> df = pd.DataFrame({"y": [1, 2, 3]},
... index=pd.to_datetime(["2000-03-31 00:00:00",
... "2000-05-31 00:00:00",
... "2000-08-31 00:00:00"]))
>>> df.index.to_period("M")
PeriodIndex(['2000-03', '2000-05', '2000-08'],
dtype='period[M]')
Infer the daily frequency
>>> idx = pd.date_range("2017-01-01", periods=2)
>>> idx.to_period()
PeriodIndex(['2017-01-01', '2017-01-02'],
dtype='period[D]')
|
reference/api/pandas.Series.dt.to_period.html
|
pandas ecosystem
|
pandas ecosystem
|
Increasingly, packages are being built on top of pandas to address specific needs
in data preparation, analysis and visualization.
This is encouraging because it means pandas is not only helping users to handle
their data tasks but also providing a better starting point for developers to
build powerful and more focused data tools.
The creation of libraries that complement pandas’ functionality also allows pandas
development to remain focused around its original requirements.
This is an inexhaustive list of projects that build on pandas in order to provide
tools in the PyData space. For a list of projects that depend on pandas,
see the
Github network dependents for pandas
or search pypi for pandas.
We’d like to make it easier for users to find these projects. If you know of other
substantial projects that you feel should be on this list, please let us know.
Data cleaning and validation#
Pyjanitor#
Pyjanitor provides a clean API for cleaning data, using method chaining.
Pandera#
Pandera provides a flexible and expressive API for performing data validation on dataframes
to make data processing pipelines more readable and robust.
Dataframes contain information that pandera explicitly validates at runtime. This is useful in
production-critical data pipelines or reproducible research settings.
pandas-path#
Since Python 3.4, pathlib has been
included in the Python standard library. Path objects provide a simple
and delightful way to interact with the file system. The pandas-path package enables the
Path API for pandas through a custom accessor .path. Getting just the filenames from
a series of full file paths is as simple as my_files.path.name. Other convenient operations like
joining paths, replacing file extensions, and checking if files exist are also available.
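A sketch of the accessor in use, assuming the package’s documented import convention (file names are made up):
from pandas_path import path  # importing registers the .path accessor
import pandas as pd
my_files = pd.Series(["data/raw/a.csv", "data/raw/b.csv"])
names = my_files.path.name  # Series: ["a.csv", "b.csv"]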
Statistics and machine learning#
pandas-tfrecords#
Easily save pandas DataFrames to the TensorFlow TFRecords format and read TFRecords back into pandas.
Statsmodels#
Statsmodels is the prominent Python “statistics and econometrics library” and it has
a long-standing special relationship with pandas. Statsmodels provides powerful statistics,
econometrics, analysis and modeling functionality that is out of pandas’ scope.
Statsmodels leverages pandas objects as the underlying data container for computation.
sklearn-pandas#
Use pandas DataFrames in your scikit-learn
ML pipeline.
Featuretools#
Featuretools is a Python library for automated feature engineering built on top of pandas. It excels at transforming temporal and relational datasets into feature matrices for machine learning using reusable feature engineering “primitives”. Users can contribute their own primitives in Python and share them with the rest of the community.
Compose#
Compose is a machine learning tool for labeling data and prediction engineering. It allows you to structure the labeling process by parameterizing prediction problems and transforming time-driven relational data into target values with cutoff times that can be used for supervised learning.
STUMPY#
STUMPY is a powerful and scalable Python library for modern time series analysis.
At its core, STUMPY efficiently computes something called a
matrix profile,
which can be used for a wide variety of time series data mining tasks.
Visualization#
Pandas has its own Styler class for table visualization, and while
pandas also has built-in support for data visualization through charts with matplotlib,
there are a number of other pandas-compatible libraries.
Altair#
Altair is a declarative statistical visualization library for Python.
With Altair, you can spend more time understanding your data and its
meaning. Altair’s API is simple, friendly and consistent and built on
top of the powerful Vega-Lite JSON specification. This elegant
simplicity produces beautiful and effective visualizations with a
minimal amount of code. Altair works with pandas DataFrames.
Bokeh#
Bokeh is a Python interactive visualization library for large datasets that natively uses
the latest web technologies. Its goal is to provide elegant, concise construction of novel
graphics in the style of Protovis/D3, while delivering high-performance interactivity over
large data to thin clients.
Pandas-Bokeh provides a high level API
for Bokeh that can be loaded as a native pandas plotting backend via
pd.set_option("plotting.backend", "pandas_bokeh")
It is very similar to the matplotlib plotting backend, but provides interactive
web-based charts and maps.
Seaborn#
Seaborn is a Python visualization library based on
matplotlib. It provides a high-level, dataset-oriented
interface for creating attractive statistical graphics. The plotting functions
in seaborn understand pandas objects and leverage pandas grouping operations
internally to support concise specification of complex visualizations. Seaborn
also goes beyond matplotlib and pandas with the option to perform statistical
estimation while plotting, aggregating across observations and visualizing the
fit of statistical models to emphasize patterns in a dataset.
plotnine#
Hadley Wickham’s ggplot2 is a foundational exploratory visualization package for the R language.
Based on “The Grammar of Graphics” it
provides a powerful, declarative and extremely general way to generate bespoke plots of any kind of data.
Various implementations in other languages are available.
A good implementation for Python users is has2k1/plotnine.
IPython vega#
IPython Vega leverages Vega to create plots within Jupyter Notebook.
Plotly#
Plotly’s Python API enables interactive figures and web shareability. Maps, 2D, 3D, and live-streaming graphs are rendered with WebGL and D3.js. The library supports plotting directly from a pandas DataFrame and cloud-based collaboration. Users of matplotlib, ggplot for Python, and Seaborn can convert figures into interactive web-based plots. Plots can be drawn in IPython Notebooks, edited with R or MATLAB, modified in a GUI, or embedded in apps and dashboards. Plotly is free for unlimited sharing and offers offline or on-premise accounts for private use.
Lux#
Lux is a Python library that facilitates fast and easy experimentation with data by automating the visual data exploration process. To use Lux, simply add an extra import alongside pandas:
import lux
import pandas as pd
df = pd.read_csv("data.csv")
df # discover interesting insights!
By printing out a dataframe, Lux automatically recommends a set of visualizations that highlights interesting trends and patterns in the dataframe. Users can leverage any existing pandas commands without modifying their code, while being able to visualize their pandas data structures (e.g., DataFrame, Series, Index) at the same time. Lux also offers a powerful, intuitive language that allows users to create Altair, matplotlib, or Vega-Lite visualizations without having to think at the level of code.
Qtpandas#
Spun off from the main pandas library, the qtpandas
library enables DataFrame visualization and manipulation in PyQt4 and PySide applications.
D-Tale#
D-Tale is a lightweight web client for visualizing pandas data structures. It
provides a rich spreadsheet-style grid which acts as a wrapper for a lot of
pandas functionality (query, sort, describe, corr…) so users can quickly
manipulate their data. There is also an interactive chart-builder using Plotly
Dash allowing users to build nice portable visualizations. D-Tale can be
invoked with the following command
import dtale
dtale.show(df)
D-Tale integrates seamlessly with Jupyter notebooks, Python terminals, Kaggle
& Google Colab. Here are some demos of the grid.
hvplot#
hvPlot is a high-level plotting API for the PyData ecosystem built on HoloViews.
It can be loaded as a native pandas plotting backend via
pd.set_option("plotting.backend", "hvplot")
IDE#
IPython#
IPython is an interactive command shell and distributed computing
environment. IPython tab completion works with pandas methods and also
attributes like DataFrame columns.
Jupyter Notebook / Jupyter Lab#
Jupyter Notebook is a web application for creating Jupyter notebooks.
A Jupyter notebook is a JSON document containing an ordered list
of input/output cells which can contain code, text, mathematics, plots
and rich media.
Jupyter notebooks can be converted to a number of open standard output formats
(HTML, HTML presentation slides, LaTeX, PDF, ReStructuredText, Markdown,
Python) through ‘Download As’ in the web interface and jupyter nbconvert
in a shell.
pandas DataFrames implement _repr_html_ and _repr_latex_ methods
which are utilized by Jupyter Notebook for displaying
(abbreviated) HTML or LaTeX tables. LaTeX output is properly escaped.
(Note: HTML tables may or may not be
compatible with non-HTML Jupyter output formats.)
See Options and Settings and
Available Options
for pandas display settings.
Quantopian/qgrid#
qgrid is “an interactive grid for sorting and filtering
DataFrames in IPython Notebook” built with SlickGrid.
Spyder#
Spyder is a cross-platform PyQt-based IDE combining the editing, analysis,
debugging and profiling functionality of a software development tool with the
data exploration, interactive execution, deep inspection and rich visualization
capabilities of a scientific environment like MATLAB or Rstudio.
Its Variable Explorer
allows users to view, manipulate and edit pandas Index, Series,
and DataFrame objects like a “spreadsheet”, including copying and modifying
values, sorting, displaying a “heatmap”, converting data types and more.
pandas objects can also be renamed or duplicated, new columns can be added, and data can be
copied/pasted to/from the clipboard (as TSV) and saved/loaded to/from a file.
Spyder can also import data from a variety of plain text and binary files
or the clipboard into a new pandas DataFrame via a sophisticated import wizard.
Most pandas classes, methods and data attributes can be autocompleted in
Spyder’s Editor and
IPython Console,
and Spyder’s Help pane can retrieve
and render Numpydoc documentation on pandas objects in rich text with Sphinx
both automatically and on-demand.
API#
pandas-datareader#
pandas-datareader is a remote data access library for pandas (PyPI:pandas-datareader).
It is based on functionality that was located in pandas.io.data and pandas.io.wb but was
split off in v0.19.
See more in the pandas-datareader docs:
The following data feeds are available:
Google Finance
Tiingo
Morningstar
IEX
Robinhood
Enigma
Quandl
FRED
Fama/French
World Bank
OECD
Eurostat
TSP Fund Data
Nasdaq Trader Symbol Definitions
Stooq Index Data
MOEX Data
Quandl/Python#
Quandl API for Python wraps the Quandl REST API to return
pandas DataFrames with timeseries indexes.
Pydatastream#
PyDatastream is a Python interface to the
Refinitiv Datastream (DWS)
REST API to return indexed pandas DataFrames with financial data.
This package requires valid credentials for this API (non free).
pandaSDMX#
pandaSDMX is a library to retrieve and acquire statistical data
and metadata disseminated in
SDMX 2.1, an ISO-standard
widely used by institutions such as statistics offices, central banks,
and international organisations. pandaSDMX can expose datasets and related
structural metadata including data flows, code-lists,
and data structure definitions as pandas Series
or MultiIndexed DataFrames.
fredapi#
fredapi is a Python interface to the Federal Reserve Economic Data (FRED)
provided by the Federal Reserve Bank of St. Louis. It works with both the FRED database and the ALFRED database, which
contains point-in-time data (i.e. historic data revisions). fredapi provides a wrapper in Python to the FRED
HTTP API, and also provides several convenient methods for parsing and analyzing point-in-time data from ALFRED.
fredapi makes use of pandas and returns data in a Series or DataFrame. This module requires a FRED API key that
you can obtain for free on the FRED website.
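A minimal sketch (the API key and series code are placeholders):
from fredapi import Fred
fred = Fred(api_key="your-api-key")
sp500 = fred.get_series("SP500")  # returns a pandas Series indexed by date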
dataframe_sql#
dataframe_sql is a Python package that translates SQL syntax directly into
operations on pandas DataFrames. This is useful when migrating from a database to
using pandas or for users more comfortable with SQL looking for a way to interface
with pandas.
Domain specific#
Geopandas#
Geopandas extends pandas data objects to include geographic information which support
geometric operations. If your work entails maps and geographical coordinates, and
you love pandas, you should take a close look at Geopandas.
staircase#
staircase is a data analysis package, built upon pandas and numpy, for modelling and
manipulation of mathematical step functions. It provides a rich variety of arithmetic
operations, relational operations, logical operations, statistical operations and
aggregations for step functions defined over real numbers, datetime and timedelta domains.
xarray#
xarray brings the labeled data power of pandas to the physical sciences by
providing N-dimensional variants of the core pandas data structures. It aims to
provide a pandas-like and pandas-compatible toolkit for analytics on
multi-dimensional arrays, rather than the tabular data for which pandas excels.
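pandas itself provides a bridge: DataFrame.to_xarray converts a frame into an xarray Dataset (illustrative data):
import pandas as pd
df = pd.DataFrame({"temp": [11.2, 13.5]},
                  index=pd.Index(["2021-01-01", "2021-01-02"], name="date"))
ds = df.to_xarray()  # xarray.Dataset with a "date" dimension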
IO#
BCPandas#
BCPandas provides high performance writes from pandas to Microsoft SQL Server,
far exceeding the performance of the native df.to_sql method. Internally, it uses
Microsoft’s BCP utility, but the complexity is fully abstracted away from the end user.
Rigorously tested, it is a complete replacement for df.to_sql.
Deltalake#
Deltalake python package lets you access tables stored in
Delta Lake natively in Python without the need to use Spark or
JVM. It provides the delta_table.to_pyarrow_table().to_pandas() method to convert
any Delta table into a pandas DataFrame.
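A sketch of the conversion named above (the table path is hypothetical):
from deltalake import DeltaTable
dt = DeltaTable("/path/to/delta-table")
df = dt.to_pyarrow_table().to_pandas()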
Out-of-core#
Blaze#
Blaze provides a standard API for doing computations with various
in-memory and on-disk backends: NumPy, pandas, SQLAlchemy, MongoDB, PyTables,
PySpark.
Cylon#
Cylon is a fast, scalable, distributed memory parallel runtime with a pandas-like
Python DataFrame API. “Core Cylon” is implemented in C++ using the Apache
Arrow format to represent the data in-memory. The Cylon DataFrame API implements
most of the core operators of pandas such as merge, filter, join, concat,
group-by, drop_duplicates, etc. These operators are designed to work across
thousands of cores to scale applications. It can interoperate with pandas
DataFrame by reading data from pandas or converting data to pandas so users
can selectively scale parts of their pandas DataFrame applications.
from pycylon import read_csv, DataFrame, CylonEnv
from pycylon.net import MPIConfig
# Initialize Cylon distributed environment
config: MPIConfig = MPIConfig()
env: CylonEnv = CylonEnv(config=config, distributed=True)
df1: DataFrame = read_csv('/tmp/csv1.csv')
df2: DataFrame = read_csv('/tmp/csv2.csv')
# Using 1000s of cores across the cluster to compute the join
df3: DataFrame = df1.join(other=df2, on=[0], algorithm="hash", env=env)
print(df3)
Dask#
Dask is a flexible parallel computing library for analytics. Dask
provides a familiar DataFrame interface for out-of-core, parallel and distributed computing.
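A minimal sketch of the out-of-core workflow (the file pattern and column names are made up):
import dask.dataframe as dd
ddf = dd.read_csv("data-*.csv")  # lazily partitions many files
result = ddf.groupby("key")["value"].mean().compute()  # compute() triggers parallel execution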
Dask-ML#
Dask-ML enables parallel and distributed machine learning using Dask alongside existing machine learning libraries like Scikit-Learn, XGBoost, and TensorFlow.
Ibis#
Ibis offers a standard way to write analytics code that can be run in multiple engines. It helps in bridging the gap between local Python environments (like pandas) and remote storage and execution systems like Hadoop components (such as HDFS, Impala, Hive, Spark) and SQL databases (Postgres, etc.).
Koalas#
Koalas provides a familiar pandas DataFrame interface on top of Apache Spark. It enables users to leverage multi-cores on one machine or a cluster of machines to speed up or scale their DataFrame code.
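A short sketch (file and column names are made up):
import databricks.koalas as ks
kdf = ks.read_csv("data.csv")  # pandas-like API, executed by Spark
kdf.groupby("key").count()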
Modin#
The modin.pandas DataFrame is a parallel and distributed drop-in replacement
for pandas. This means that you can use Modin with existing pandas code or write
new code with the existing pandas API. Modin can leverage your entire machine or
cluster to speed up and scale your pandas workloads, including traditionally
time-consuming tasks like ingesting data (read_csv, read_excel,
read_parquet, etc.).
# import pandas as pd
import modin.pandas as pd
df = pd.read_csv("big.csv") # use all your cores!
Odo#
Odo provides a uniform API for moving data between different formats. It uses
pandas’ own read_csv for CSV IO and leverages many existing packages such as
PyTables, h5py, and pymongo to move data between non pandas formats. Its graph
based approach is also extensible by end users for custom formats that may be
too specific for the core of odo.
Pandarallel#
Pandarallel provides a simple way to parallelize your pandas operations on all your CPUs by changing only one line of code.
It also displays progress bars.
from pandarallel import pandarallel
pandarallel.initialize(progress_bar=True)
# df.apply(func)
df.parallel_apply(func)
Vaex#
Vaex is a Python library for Out-of-Core DataFrames (similar to pandas), to visualize and explore big tabular datasets. It can calculate statistics such as mean, sum, count, and standard deviation on an N-dimensional grid at up to a billion (10⁹) objects/rows per second. Visualization is done using histograms, density plots and 3d volume rendering, allowing interactive exploration of big data. Vaex uses memory mapping, a zero memory copy policy and lazy computations for best performance (no memory wasted). Conversion to and from pandas is provided by vaex.from_pandas and vaex.to_pandas_df.
Extension data types#
pandas provides an interface for defining
extension types to extend NumPy’s type
system. The following libraries implement that interface to provide types not
found in NumPy or pandas, which work well with pandas’ data containers.
Cyberpandas#
Cyberpandas provides an extension type for storing arrays of IP Addresses. These
arrays can be stored inside pandas’ Series and DataFrame.
Pandas-Genomics#
Pandas-Genomics provides extension types, extension arrays, and extension accessors for working with genomics data
Pint-Pandas#
Pint-Pandas provides an extension type for
storing numeric arrays with units. These arrays can be stored inside pandas’
Series and DataFrame. Operations between Series and DataFrame columns which
use pint’s extension array are then units aware.
Text Extensions for Pandas#
Text Extensions for Pandas
provides extension types to cover common data structures for representing natural language
data, plus library integrations that convert the outputs of popular natural language
processing libraries into Pandas DataFrames.
Accessors#
A directory of projects providing
extension accessors. This is for users to
discover new accessors and for library authors to coordinate on the namespace.
cyberpandas: ip accessor (Series). Provides common operations for working with IP addresses.
pdvega: vgplot accessor (Series, DataFrame). Provides plotting functions from the Altair library.
pandas-genomics: genomics accessor (Series, DataFrame). Provides common operations for quality control and analysis of genomics data.
pandas_path: path accessor (Index, Series). Provides pathlib.Path functions for Series.
pint-pandas: pint accessor (Series, DataFrame). Provides units support for numeric Series and DataFrames.
composeml: slice accessor (DataFrame). Provides a generator for enhanced data slicing.
datatest: validate accessor (Series, DataFrame, Index). Provides validation, differences, and acceptance managers.
woodwork: ww accessor (Series, DataFrame). Provides physical, logical, and semantic data typing information for Series and DataFrames.
staircase: sc accessor (Series). Provides methods for querying, aggregating and plotting step functions.
Development tools#
pandas-stubs#
While the pandas repository is partially typed, the package itself doesn’t expose this information for external use.
Install pandas-stubs to enable basic type coverage of pandas API.
Learn more by reading through GH14468, GH26766, GH28142.
See installation and usage instructions on the github page.
|
ecosystem.html
|
pandas.tseries.offsets.CustomBusinessMonthBegin.rollback
|
`pandas.tseries.offsets.CustomBusinessMonthBegin.rollback`
Roll provided date backward to the previous offset only if not on offset.
Rolled timestamp if not on offset, otherwise unchanged timestamp.
|
CustomBusinessMonthBegin.rollback()#
Roll provided date backward to the previous offset only if not on offset.
Returns
TimeStampRolled timestamp if not on offset, otherwise unchanged timestamp.
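Examples
An illustrative roll (2022-08-01 was a Monday, hence the month’s first business day):
>>> ts = pd.Timestamp('2022-08-05')
>>> pd.offsets.CustomBusinessMonthBegin().rollback(ts)
Timestamp('2022-08-01 00:00:00')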
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthBegin.rollback.html
|
pandas.tseries.offsets.BusinessMonthBegin.kwds
|
`pandas.tseries.offsets.BusinessMonthBegin.kwds`
Return a dict of extra parameters for the offset.
```
>>> pd.DateOffset(5).kwds
{}
```
|
BusinessMonthBegin.kwds#
Return a dict of extra parameters for the offset.
Examples
>>> pd.DateOffset(5).kwds
{}
>>> pd.offsets.FY5253Quarter().kwds
{'weekday': 0,
'startingMonth': 1,
'qtr_with_extra_week': 1,
'variation': 'nearest'}
|
reference/api/pandas.tseries.offsets.BusinessMonthBegin.kwds.html
|
pandas.api.types.is_numeric_dtype
|
`pandas.api.types.is_numeric_dtype`
Check whether the provided array or dtype is of a numeric dtype.
The array or dtype to check.
```
>>> is_numeric_dtype(str)
False
>>> is_numeric_dtype(int)
True
>>> is_numeric_dtype(float)
True
>>> is_numeric_dtype(np.uint64)
True
>>> is_numeric_dtype(np.datetime64)
False
>>> is_numeric_dtype(np.timedelta64)
False
>>> is_numeric_dtype(np.array(['a', 'b']))
False
>>> is_numeric_dtype(pd.Series([1, 2]))
True
>>> is_numeric_dtype(pd.Index([1, 2.]))
True
>>> is_numeric_dtype(np.array([], dtype=np.timedelta64))
False
```
|
pandas.api.types.is_numeric_dtype(arr_or_dtype)[source]#
Check whether the provided array or dtype is of a numeric dtype.
Parameters
arr_or_dtypearray-like or dtypeThe array or dtype to check.
Returns
booleanWhether or not the array or dtype is of a numeric dtype.
Examples
>>> is_numeric_dtype(str)
False
>>> is_numeric_dtype(int)
True
>>> is_numeric_dtype(float)
True
>>> is_numeric_dtype(np.uint64)
True
>>> is_numeric_dtype(np.datetime64)
False
>>> is_numeric_dtype(np.timedelta64)
False
>>> is_numeric_dtype(np.array(['a', 'b']))
False
>>> is_numeric_dtype(pd.Series([1, 2]))
True
>>> is_numeric_dtype(pd.Index([1, 2.]))
True
>>> is_numeric_dtype(np.array([], dtype=np.timedelta64))
False
|
reference/api/pandas.api.types.is_numeric_dtype.html
|
pandas.tseries.offsets.BusinessMonthEnd.freqstr
|
`pandas.tseries.offsets.BusinessMonthEnd.freqstr`
Return a string representing the frequency.
```
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
```
|
BusinessMonthEnd.freqstr#
Return a string representing the frequency.
Examples
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
>>> pd.offsets.BusinessHour(2).freqstr
'2BH'
>>> pd.offsets.Nano().freqstr
'N'
>>> pd.offsets.Nano(-3).freqstr
'-3N'
|
reference/api/pandas.tseries.offsets.BusinessMonthEnd.freqstr.html
|
pandas.tseries.offsets.QuarterEnd.apply_index
|
`pandas.tseries.offsets.QuarterEnd.apply_index`
Vectorized apply of DateOffset to DatetimeIndex.
|
QuarterEnd.apply_index()#
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead.
Parameters
indexDatetimeIndex
Returns
DatetimeIndex
Raises
NotImplementedErrorWhen the specific offset subclass does not have a vectorized
implementation.
|
reference/api/pandas.tseries.offsets.QuarterEnd.apply_index.html
|
pandas.tseries.offsets.MonthBegin.apply
|
pandas.tseries.offsets.MonthBegin.apply
|
MonthBegin.apply()#
|
reference/api/pandas.tseries.offsets.MonthBegin.apply.html
|
pandas.tseries.offsets.CustomBusinessMonthBegin.offset
|
`pandas.tseries.offsets.CustomBusinessMonthBegin.offset`
Alias for self._offset.
|
CustomBusinessMonthBegin.offset#
Alias for self._offset.
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthBegin.offset.html
|
pandas.tseries.offsets.Hour.nanos
|
`pandas.tseries.offsets.Hour.nanos`
Return an integer of the total number of nanoseconds.
```
>>> pd.offsets.Hour(5).nanos
18000000000000
```
|
Hour.nanos#
Return an integer of the total number of nanoseconds.
Raises
ValueErrorIf the frequency is non-fixed.
Examples
>>> pd.offsets.Hour(5).nanos
18000000000000
|
reference/api/pandas.tseries.offsets.Hour.nanos.html
|
pandas.tseries.offsets.CustomBusinessMonthEnd.n
|
pandas.tseries.offsets.CustomBusinessMonthEnd.n
|
CustomBusinessMonthEnd.n#
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthEnd.n.html
|
pandas.tseries.offsets.QuarterEnd.copy
|
`pandas.tseries.offsets.QuarterEnd.copy`
Return a copy of the frequency.
```
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
```
|
QuarterEnd.copy()#
Return a copy of the frequency.
Examples
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
|
reference/api/pandas.tseries.offsets.QuarterEnd.copy.html
|
pandas.io.formats.style.Styler.template_html_style
|
pandas.io.formats.style.Styler.template_html_style
|
Styler.template_html_style = <Template 'html_style.tpl'>#
|
reference/api/pandas.io.formats.style.Styler.template_html_style.html
|
pandas.Series.equals
|
`pandas.Series.equals`
Test whether two objects contain the same elements.
```
>>> df = pd.DataFrame({1: [10], 2: [20]})
>>> df
1 2
0 10 20
```
|
Series.equals(other)[source]#
Test whether two objects contain the same elements.
This function allows two Series or DataFrames to be compared against
each other to see if they have the same shape and elements. NaNs in
the same location are considered equal.
The row/column index do not need to have the same type, as long
as the values are considered equal. Corresponding columns must be of
the same dtype.
Parameters
otherSeries or DataFrameThe other Series or DataFrame to be compared with the first.
Returns
boolTrue if all elements are the same in both objects, False
otherwise.
See also
Series.eqCompare two Series objects of the same length and return a Series where each element is True if the element in each Series is equal, False otherwise.
DataFrame.eqCompare two DataFrame objects of the same shape and return a DataFrame where each element is True if the respective element in each DataFrame is equal, False otherwise.
testing.assert_series_equalRaises an AssertionError if left and right are not equal. Provides an easy interface to ignore inequality in dtypes, indexes and precision among others.
testing.assert_frame_equalLike assert_series_equal, but targets DataFrames.
numpy.array_equalReturn True if two arrays have the same shape and elements, False otherwise.
Examples
>>> df = pd.DataFrame({1: [10], 2: [20]})
>>> df
1 2
0 10 20
DataFrames df and exactly_equal have the same types and values for
their elements and column labels, which will return True.
>>> exactly_equal = pd.DataFrame({1: [10], 2: [20]})
>>> exactly_equal
1 2
0 10 20
>>> df.equals(exactly_equal)
True
DataFrames df and different_column_type have the same element
types and values, but have different types for the column labels,
which will still return True.
>>> different_column_type = pd.DataFrame({1.0: [10], 2.0: [20]})
>>> different_column_type
1.0 2.0
0 10 20
>>> df.equals(different_column_type)
True
DataFrames df and different_data_type have different types for the
same values for their elements, and will return False even though
their column labels are the same values and types.
>>> different_data_type = pd.DataFrame({1: [10.0], 2: [20.0]})
>>> different_data_type
1 2
0 10.0 20.0
>>> df.equals(different_data_type)
False
|
reference/api/pandas.Series.equals.html