title | summary | context | path
---|---|---|---
pandas.tseries.offsets.YearBegin
|
`pandas.tseries.offsets.YearBegin`
DateOffset increments between calendar year begin dates.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> ts + pd.offsets.YearBegin()
Timestamp('2023-01-01 00:00:00')
```
|
class pandas.tseries.offsets.YearBegin#
DateOffset increments between calendar year begin dates.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> ts + pd.offsets.YearBegin()
Timestamp('2023-01-01 00:00:00')
Attributes
base
Returns a copy of the calling offset object with n=1 and all other attributes equal.
freqstr
Return a string representing the frequency.
kwds
Return a dict of extra parameters for the offset.
name
Return a string representing the base frequency.
month
n
nanos
normalize
rule_code
Methods
__call__(*args, **kwargs)
Call self as a function.
apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
copy
Return a copy of the frequency.
is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
is_month_end
Return boolean whether a timestamp occurs on the month end.
is_month_start
Return boolean whether a timestamp occurs on the month start.
is_on_offset
Return boolean whether a timestamp intersects with this frequency.
is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
is_year_end
Return boolean whether a timestamp occurs on the year end.
is_year_start
Return boolean whether a timestamp occurs on the year start.
rollback
Roll provided date backward to next offset only if not on offset.
rollforward
Roll provided date forward to next offset only if not on offset.
apply
isAnchored
onOffset
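A minimal illustration of rollback and rollforward (not part of the original docstring): both snap a mid-year timestamp to the nearest year start on either side.
```
>>> ts = pd.Timestamp(2022, 6, 15)
>>> pd.offsets.YearBegin().rollback(ts)
Timestamp('2022-01-01 00:00:00')
>>> pd.offsets.YearBegin().rollforward(ts)
Timestamp('2023-01-01 00:00:00')
```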
|
reference/api/pandas.tseries.offsets.YearBegin.html
|
pandas.tseries.offsets.CustomBusinessDay.apply_index
|
`pandas.tseries.offsets.CustomBusinessDay.apply_index`
Vectorized apply of DateOffset to DatetimeIndex.
|
CustomBusinessDay.apply_index()#
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead.
Parameters
indexDatetimeIndex
Returns
DatetimeIndex
Raises
NotImplementedErrorWhen the specific offset subclass does not have a vectorized
implementation.
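A hedged sketch of the recommended replacement (offset + dtindex), assuming the default business-day calendar:
```
>>> dtindex = pd.date_range('2022-01-03', periods=3, freq='D')  # Mon-Wed
>>> dtindex + pd.offsets.CustomBusinessDay()  # 2022-01-04, 2022-01-05, 2022-01-06
```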
|
reference/api/pandas.tseries.offsets.CustomBusinessDay.apply_index.html
|
pandas.tseries.offsets.QuarterBegin.apply_index
|
`pandas.tseries.offsets.QuarterBegin.apply_index`
Vectorized apply of DateOffset to DatetimeIndex.
|
QuarterBegin.apply_index()#
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead.
Parameters
indexDatetimeIndex
Returns
DatetimeIndex
Raises
NotImplementedErrorWhen the specific offset subclass does not have a vectorized
implementation.
|
reference/api/pandas.tseries.offsets.QuarterBegin.apply_index.html
|
pandas.tseries.offsets.Easter.copy
|
`pandas.tseries.offsets.Easter.copy`
Return a copy of the frequency.
```
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
```
|
Easter.copy()#
Return a copy of the frequency.
Examples
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
|
reference/api/pandas.tseries.offsets.Easter.copy.html
|
pandas.Series.last_valid_index
|
`pandas.Series.last_valid_index`
Return index for last non-NA value or None, if no non-NA value is found.
Notes
|
Series.last_valid_index()[source]#
Return index for last non-NA value or None, if no non-NA value is found.
Returns
scalartype of index
Notes
If all elements are NA/null, returns None.
Also returns None for empty Series/DataFrame.
|
reference/api/pandas.Series.last_valid_index.html
|
pandas.Timestamp.isoweekday
|
`pandas.Timestamp.isoweekday`
Return the day of the week represented by the date.
|
Timestamp.isoweekday()#
Return the day of the week represented by the date.
Monday == 1 … Sunday == 7.
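A short example (not from the original docstring); 2023-01-02 is a Monday:
```
>>> pd.Timestamp('2023-01-02').isoweekday()
1
```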
|
reference/api/pandas.Timestamp.isoweekday.html
|
pandas.tseries.offsets.Second.kwds
|
`pandas.tseries.offsets.Second.kwds`
Return a dict of extra parameters for the offset.
```
>>> pd.DateOffset(5).kwds
{}
```
|
Second.kwds#
Return a dict of extra parameters for the offset.
Examples
>>> pd.DateOffset(5).kwds
{}
>>> pd.offsets.FY5253Quarter().kwds
{'weekday': 0,
'startingMonth': 1,
'qtr_with_extra_week': 1,
'variation': 'nearest'}
|
reference/api/pandas.tseries.offsets.Second.kwds.html
|
pandas.DataFrame.plot.hist
|
`pandas.DataFrame.plot.hist`
Draw one histogram of the DataFrame’s columns.
```
>>> df = pd.DataFrame(
... np.random.randint(1, 7, 6000),
... columns = ['one'])
>>> df['two'] = df['one'] + np.random.randint(1, 7, 6000)
>>> ax = df.plot.hist(bins=12, alpha=0.5)
```
|
DataFrame.plot.hist(by=None, bins=10, **kwargs)[source]#
Draw one histogram of the DataFrame’s columns.
A histogram is a representation of the distribution of data.
This function groups the values of all given Series in the DataFrame
into bins and draws all bins in one matplotlib.axes.Axes.
This is useful when the DataFrame’s Series are in a similar scale.
Parameters
bystr or sequence, optionalColumn in the DataFrame to group by.
Changed in version 1.4.0: Previously, by was silently ignored and made no groupings
binsint, default 10Number of histogram bins to be used.
**kwargsAdditional keyword arguments are documented in
DataFrame.plot().
Returns
matplotlib.AxesSubplotReturn a histogram plot.
See also
DataFrame.histDraw histograms per DataFrame’s Series.
Series.histDraw a histogram with Series’ data.
Examples
When we roll a die 6000 times, we expect to get each value around 1000
times. But when we roll two dice and sum the result, the distribution
is going to be quite different. A histogram illustrates those
distributions.
>>> df = pd.DataFrame(
... np.random.randint(1, 7, 6000),
... columns = ['one'])
>>> df['two'] = df['one'] + np.random.randint(1, 7, 6000)
>>> ax = df.plot.hist(bins=12, alpha=0.5)
A grouped histogram can be generated by providing the parameter by (which
can be a column name, or a list of column names):
>>> age_list = [8, 10, 12, 14, 72, 74, 76, 78, 20, 25, 30, 35, 60, 85]
>>> df = pd.DataFrame({"gender": list("MMMMMMMMFFFFFF"), "age": age_list})
>>> ax = df.plot.hist(column=["age"], by="gender", figsize=(10, 8))
|
reference/api/pandas.DataFrame.plot.hist.html
|
pandas.tseries.offsets.BusinessMonthEnd.apply
|
pandas.tseries.offsets.BusinessMonthEnd.apply
|
BusinessMonthEnd.apply()#
|
reference/api/pandas.tseries.offsets.BusinessMonthEnd.apply.html
|
pandas.Timedelta.view
|
`pandas.Timedelta.view`
Array view compatibility.
|
Timedelta.view()#
Array view compatibility.
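A minimal sketch of what the view exposes, assuming an int64 view of the underlying nanosecond count:
```
>>> td = pd.Timedelta(days=1)
>>> td.view('int64')  # underlying nanosecond count: 86400000000000
```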
|
reference/api/pandas.Timedelta.view.html
|
pandas.Timestamp.timetz
|
`pandas.Timestamp.timetz`
Return time object with same time and tzinfo.
|
Timestamp.timetz()#
Return time object with same time and tzinfo.
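A minimal sketch (not from the original docstring): the returned datetime.time keeps the timestamp's tzinfo.
```
>>> ts = pd.Timestamp('2020-03-14 15:32:52', tz='UTC')
>>> ts.timetz()  # datetime.time(15, 32, 52, tzinfo=<UTC>)
```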
|
reference/api/pandas.Timestamp.timetz.html
|
pandas.Series.rdiv
|
`pandas.Series.rdiv`
Return floating division of series and other, element-wise (binary operator rtruediv).
```
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.divide(b, fill_value=0)
a 1.0
b inf
c inf
d 0.0
e NaN
dtype: float64
```
|
Series.rdiv(other, level=None, fill_value=None, axis=0)[source]#
Return floating division of series and other, element-wise (binary operator rtruediv).
Equivalent to other / series, but with support to substitute a fill_value for
missing data in either one of the inputs.
Parameters
otherSeries or scalar value
levelint or nameBroadcast across a level, matching Index values on the
passed MultiIndex level.
fill_valueNone or float value, default None (NaN)Fill existing missing (NaN) values, and any new element needed for
successful Series alignment, with this value before computation.
If data in both corresponding Series locations is missing
the result of filling (at that location) will be missing.
axis{0 or ‘index’}Unused. Parameter needed for compatibility with DataFrame.
Returns
SeriesThe result of the operation.
See also
Series.truedivElement-wise Floating division, see Python documentation for more details.
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.divide(b, fill_value=0)
a 1.0
b inf
c inf
d 0.0
e NaN
dtype: float64
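Note that the docstring example above demonstrates divide; a hedged sketch of rdiv itself (other / series), reusing a and b from above:
```
>>> a.rdiv(b, fill_value=0)
a    1.0
b    0.0
c    0.0
d    inf
e    NaN
dtype: float64
```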
|
reference/api/pandas.Series.rdiv.html
|
pandas.Series.pop
|
`pandas.Series.pop`
Return item and drop it from series. Raise KeyError if not found.
```
>>> ser = pd.Series([1,2,3])
```
|
Series.pop(item)[source]#
Return item and drop it from series. Raise KeyError if not found.
Parameters
itemlabelIndex of the element that needs to be removed.
Returns
Value that is popped from series.
Examples
>>> ser = pd.Series([1,2,3])
>>> ser.pop(0)
1
>>> ser
1 2
2 3
dtype: int64
|
reference/api/pandas.Series.pop.html
|
pandas.tseries.offsets.BusinessHour.name
|
`pandas.tseries.offsets.BusinessHour.name`
Return a string representing the base frequency.
```
>>> pd.offsets.Hour().name
'H'
```
|
BusinessHour.name#
Return a string representing the base frequency.
Examples
>>> pd.offsets.Hour().name
'H'
>>> pd.offsets.Hour(5).name
'H'
|
reference/api/pandas.tseries.offsets.BusinessHour.name.html
|
pandas.tseries.offsets.Hour.__call__
|
`pandas.tseries.offsets.Hour.__call__`
Call self as a function.
|
Hour.__call__(*args, **kwargs)#
Call self as a function.
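A hedged sketch: calling an offset applies it to the argument, though recent pandas versions deprecate this in favor of addition.
```
>>> offset = pd.offsets.Hour(2)
>>> offset(pd.Timestamp('2022-01-01'))   # Timestamp('2022-01-01 02:00:00'); may warn
>>> pd.Timestamp('2022-01-01') + offset  # preferred, same result
```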
|
reference/api/pandas.tseries.offsets.Hour.__call__.html
|
pandas.Series.skew
|
`pandas.Series.skew`
Return unbiased skew over requested axis.
|
Series.skew(axis=_NoDefault.no_default, skipna=True, level=None, numeric_only=None, **kwargs)[source]#
Return unbiased skew over requested axis.
Normalized by N-1.
Parameters
axis{index (0)}Axis for the function to be applied on.
For Series this parameter is unused and defaults to 0.
skipnabool, default TrueExclude NA/null values when computing the result.
levelint or level name, default NoneIf the axis is a MultiIndex (hierarchical), count along a
particular level, collapsing into a scalar.
Deprecated since version 1.3.0: The level keyword is deprecated. Use groupby instead.
numeric_onlybool, default NoneInclude only float, int, boolean columns. If None, will attempt to use
everything, then use only numeric data. Not implemented for Series.
Deprecated since version 1.5.0: Specifying numeric_only=None is deprecated. The default value will be
False in a future version of pandas.
**kwargsAdditional keyword arguments to be passed to the function.
Returns
scalar or Series (if level specified)
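A minimal usage sketch (not from the original docstring); the value shown is approximate:
```
>>> s = pd.Series([1, 2, 3, 10])
>>> s.skew()  # positive, roughly 1.76: the long right tail skews the data
```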
|
reference/api/pandas.Series.skew.html
|
pandas.core.window.ewm.ExponentialMovingWindow.std
|
`pandas.core.window.ewm.ExponentialMovingWindow.std`
Calculate the ewm (exponential weighted moment) standard deviation.
|
ExponentialMovingWindow.std(bias=False, numeric_only=False, *args, **kwargs)[source]#
Calculate the ewm (exponential weighted moment) standard deviation.
Parameters
biasbool, default FalseUse a standard estimation bias correction.
numeric_onlybool, default FalseInclude only float, int, boolean columns.
New in version 1.5.0.
*argsFor NumPy compatibility and will not have an effect on the result.
Deprecated since version 1.5.0.
**kwargsFor NumPy compatibility and will not have an effect on the result.
Deprecated since version 1.5.0.
Returns
Series or DataFrameReturn type is the same as the original object with np.float64 dtype.
See also
pandas.Series.ewmCalling ewm with Series data.
pandas.DataFrame.ewmCalling ewm with DataFrames.
pandas.Series.stdAggregating std for Series.
pandas.DataFrame.stdAggregating std for DataFrame.
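A minimal usage sketch (not from the original docstring); exact values depend on the exponential weights:
```
>>> s = pd.Series([1, 2, 3, 4])
>>> s.ewm(alpha=0.5).std()  # NaN for the first element, then exponentially weighted stds
```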
|
reference/api/pandas.core.window.ewm.ExponentialMovingWindow.std.html
|
pandas.ExcelWriter.check_extension
|
`pandas.ExcelWriter.check_extension`
Check the path’s extension against the Writer’s supported
extensions and raise UnsupportedFiletypeError if it isn’t supported.
|
classmethod ExcelWriter.check_extension(ext)[source]#
Check the path’s extension against the Writer’s supported
extensions and raise UnsupportedFiletypeError if it isn’t supported.
|
reference/api/pandas.ExcelWriter.check_extension.html
|
pandas.ExcelWriter.close
|
`pandas.ExcelWriter.close`
Synonym for save, to make the writer more file-like.
|
ExcelWriter.close()[source]#
Synonym for save, to make the writer more file-like.
|
reference/api/pandas.ExcelWriter.close.html
|
pandas.read_spss
|
`pandas.read_spss`
Load an SPSS file from the file path, returning a DataFrame.
|
pandas.read_spss(path, usecols=None, convert_categoricals=True)[source]#
Load an SPSS file from the file path, returning a DataFrame.
New in version 0.25.0.
Parameters
pathstr or PathFile path.
usecolslist-like, optionalReturn a subset of the columns. If None, return all columns.
convert_categoricalsbool, default is TrueConvert categorical columns into pd.Categorical.
Returns
DataFrame
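A hedged usage sketch; 'survey.sav' and 'age' are hypothetical names:
```
>>> df = pd.read_spss('survey.sav')  # hypothetical file
>>> ages = pd.read_spss('survey.sav', usecols=['age'], convert_categoricals=False)
```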
|
reference/api/pandas.read_spss.html
|
pandas.Series.str.len
|
`pandas.Series.str.len`
Compute the length of each element in the Series/Index.
```
>>> s = pd.Series(['dog',
... '',
... 5,
... {'foo' : 'bar'},
... [2, 3, 5, 7],
... ('one', 'two', 'three')])
>>> s
0 dog
1
2 5
3 {'foo': 'bar'}
4 [2, 3, 5, 7]
5 (one, two, three)
dtype: object
>>> s.str.len()
0 3.0
1 0.0
2 NaN
3 1.0
4 4.0
5 3.0
dtype: float64
```
|
Series.str.len()[source]#
Compute the length of each element in the Series/Index.
The element may be a sequence (such as a string, tuple or list) or a collection
(such as a dictionary).
Returns
Series or Index of intA Series or Index of integer values indicating the length of each
element in the Series or Index.
See also
str.lenPython built-in function returning the length of an object.
Series.sizeReturns the length of the Series.
Examples
Returns the length (number of characters) in a string. Returns the
number of entries for dictionaries, lists or tuples.
>>> s = pd.Series(['dog',
... '',
... 5,
... {'foo' : 'bar'},
... [2, 3, 5, 7],
... ('one', 'two', 'three')])
>>> s
0 dog
1
2 5
3 {'foo': 'bar'}
4 [2, 3, 5, 7]
5 (one, two, three)
dtype: object
>>> s.str.len()
0 3.0
1 0.0
2 NaN
3 1.0
4 4.0
5 3.0
dtype: float64
|
reference/api/pandas.Series.str.len.html
|
pandas.Series.prod
|
`pandas.Series.prod`
Return the product of the values over the requested axis.
```
>>> pd.Series([], dtype="float64").prod()
1.0
```
|
Series.prod(axis=None, skipna=True, level=None, numeric_only=None, min_count=0, **kwargs)[source]#
Return the product of the values over the requested axis.
Parameters
axis{index (0)}Axis for the function to be applied on.
For Series this parameter is unused and defaults to 0.
skipnabool, default TrueExclude NA/null values when computing the result.
levelint or level name, default NoneIf the axis is a MultiIndex (hierarchical), count along a
particular level, collapsing into a scalar.
Deprecated since version 1.3.0: The level keyword is deprecated. Use groupby instead.
numeric_onlybool, default NoneInclude only float, int, boolean columns. If None, will attempt to use
everything, then use only numeric data. Not implemented for Series.
Deprecated since version 1.5.0: Specifying numeric_only=None is deprecated. The default value will be
False in a future version of pandas.
min_countint, default 0The required number of valid values to perform the operation. If fewer than
min_count non-NA values are present the result will be NA.
**kwargsAdditional keyword arguments to be passed to the function.
Returns
scalar or Series (if level specified)
See also
Series.sumReturn the sum.
Series.minReturn the minimum.
Series.maxReturn the maximum.
Series.idxminReturn the index of the minimum.
Series.idxmaxReturn the index of the maximum.
DataFrame.sumReturn the sum over the requested axis.
DataFrame.minReturn the minimum over the requested axis.
DataFrame.maxReturn the maximum over the requested axis.
DataFrame.idxminReturn the index of the minimum over the requested axis.
DataFrame.idxmaxReturn the index of the maximum over the requested axis.
Examples
By default, the product of an empty or all-NA Series is 1
>>> pd.Series([], dtype="float64").prod()
1.0
This can be controlled with the min_count parameter
>>> pd.Series([], dtype="float64").prod(min_count=1)
nan
Thanks to the skipna parameter, min_count handles all-NA and
empty series identically.
>>> pd.Series([np.nan]).prod()
1.0
>>> pd.Series([np.nan]).prod(min_count=1)
nan
|
reference/api/pandas.Series.prod.html
|
pandas.Series.to_dict
|
`pandas.Series.to_dict`
Convert Series to {label -> value} dict or dict-like object.
The collections.abc.Mapping subclass to use as the return
object. Can be the actual class or an empty
instance of the mapping type you want. If you want a
collections.defaultdict, you must pass it initialized.
```
>>> s = pd.Series([1, 2, 3, 4])
>>> s.to_dict()
{0: 1, 1: 2, 2: 3, 3: 4}
>>> from collections import OrderedDict, defaultdict
>>> s.to_dict(OrderedDict)
OrderedDict([(0, 1), (1, 2), (2, 3), (3, 4)])
>>> dd = defaultdict(list)
>>> s.to_dict(dd)
defaultdict(<class 'list'>, {0: 1, 1: 2, 2: 3, 3: 4})
```
|
Series.to_dict(into=<class 'dict'>)[source]#
Convert Series to {label -> value} dict or dict-like object.
Parameters
intoclass, default dictThe collections.abc.Mapping subclass to use as the return
object. Can be the actual class or an empty
instance of the mapping type you want. If you want a
collections.defaultdict, you must pass it initialized.
Returns
collections.abc.MappingKey-value representation of Series.
Examples
>>> s = pd.Series([1, 2, 3, 4])
>>> s.to_dict()
{0: 1, 1: 2, 2: 3, 3: 4}
>>> from collections import OrderedDict, defaultdict
>>> s.to_dict(OrderedDict)
OrderedDict([(0, 1), (1, 2), (2, 3), (3, 4)])
>>> dd = defaultdict(list)
>>> s.to_dict(dd)
defaultdict(<class 'list'>, {0: 1, 1: 2, 2: 3, 3: 4})
|
reference/api/pandas.Series.to_dict.html
|
pandas.DataFrame.cov
|
`pandas.DataFrame.cov`
Compute pairwise covariance of columns, excluding NA/null values.
```
>>> df = pd.DataFrame([(1, 2), (0, 3), (2, 0), (1, 1)],
... columns=['dogs', 'cats'])
>>> df.cov()
dogs cats
dogs 0.666667 -1.000000
cats -1.000000 1.666667
```
|
DataFrame.cov(min_periods=None, ddof=1, numeric_only=_NoDefault.no_default)[source]#
Compute pairwise covariance of columns, excluding NA/null values.
Compute the pairwise covariance among the series of a DataFrame.
The returned data frame is the covariance matrix of the columns
of the DataFrame.
Both NA and null values are automatically excluded from the
calculation. (See the note below about bias from missing values.)
A threshold can be set for the minimum number of
observations for each value created. Comparisons with observations
below this threshold will be returned as NaN.
This method is generally used for the analysis of time series data to
understand the relationship between different measures
across time.
Parameters
min_periodsint, optionalMinimum number of observations required per pair of columns
to have a valid result.
ddofint, default 1Delta degrees of freedom. The divisor used in calculations
is N - ddof, where N represents the number of elements.
New in version 1.1.0.
numeric_onlybool, default TrueInclude only float, int or boolean data.
New in version 1.5.0.
Deprecated since version 1.5.0: The default value of numeric_only will be False in a future
version of pandas.
Returns
DataFrameThe covariance matrix of the series of the DataFrame.
See also
Series.covCompute covariance with another Series.
core.window.ewm.ExponentialMovingWindow.covExponential weighted sample covariance.
core.window.expanding.Expanding.covExpanding sample covariance.
core.window.rolling.Rolling.covRolling sample covariance.
Notes
Returns the covariance matrix of the DataFrame’s time series.
The covariance is normalized by N-ddof.
For DataFrames that have Series that are missing data (assuming that
data is missing at random)
the returned covariance matrix will be an unbiased estimate
of the variance and covariance between the member Series.
However, for many applications this estimate may not be acceptable
because the estimate covariance matrix is not guaranteed to be positive
semi-definite. This could lead to estimate correlations having
absolute values which are greater than one, and/or a non-invertible
covariance matrix. See Estimation of covariance matrices for more details.
Examples
>>> df = pd.DataFrame([(1, 2), (0, 3), (2, 0), (1, 1)],
... columns=['dogs', 'cats'])
>>> df.cov()
dogs cats
dogs 0.666667 -1.000000
cats -1.000000 1.666667
>>> np.random.seed(42)
>>> df = pd.DataFrame(np.random.randn(1000, 5),
... columns=['a', 'b', 'c', 'd', 'e'])
>>> df.cov()
a b c d e
a 0.998438 -0.020161 0.059277 -0.008943 0.014144
b -0.020161 1.059352 -0.008543 -0.024738 0.009826
c 0.059277 -0.008543 1.010670 -0.001486 -0.000271
d -0.008943 -0.024738 -0.001486 0.921297 -0.013692
e 0.014144 0.009826 -0.000271 -0.013692 0.977795
Minimum number of periods
This method also supports an optional min_periods keyword
that specifies the required minimum number of non-NA observations for
each column pair in order to have a valid result:
>>> np.random.seed(42)
>>> df = pd.DataFrame(np.random.randn(20, 3),
... columns=['a', 'b', 'c'])
>>> df.loc[df.index[:5], 'a'] = np.nan
>>> df.loc[df.index[5:10], 'b'] = np.nan
>>> df.cov(min_periods=12)
a b c
a 0.316741 NaN -0.150812
b NaN 1.248003 0.191417
c -0.150812 0.191417 0.895202
|
reference/api/pandas.DataFrame.cov.html
|
pandas.DataFrame.astype
|
`pandas.DataFrame.astype`
Cast a pandas object to a specified dtype dtype.
```
>>> d = {'col1': [1, 2], 'col2': [3, 4]}
>>> df = pd.DataFrame(data=d)
>>> df.dtypes
col1 int64
col2 int64
dtype: object
```
|
DataFrame.astype(dtype, copy=True, errors='raise')[source]#
Cast a pandas object to a specified dtype dtype.
Parameters
dtypedata type, or dict of column name -> data typeUse a numpy.dtype or Python type to cast entire pandas object to
the same type. Alternatively, use {col: dtype, …}, where col is a
column label and dtype is a numpy.dtype or Python type to cast one
or more of the DataFrame’s columns to column-specific types.
copybool, default TrueReturn a copy when copy=True (be very careful setting
copy=False as changes to values then may propagate to other
pandas objects).
errors{‘raise’, ‘ignore’}, default ‘raise’Control raising of exceptions on invalid data for provided dtype.
raise : allow exceptions to be raised
ignore : suppress exceptions. On error return original object.
Returns
castedsame type as caller
See also
to_datetimeConvert argument to datetime.
to_timedeltaConvert argument to timedelta.
to_numericConvert argument to a numeric type.
numpy.ndarray.astypeCast a numpy array to a specified type.
Notes
Deprecated since version 1.3.0: Using astype to convert from timezone-naive dtype to
timezone-aware dtype is deprecated and will raise in a
future version. Use Series.dt.tz_localize() instead.
Examples
Create a DataFrame:
>>> d = {'col1': [1, 2], 'col2': [3, 4]}
>>> df = pd.DataFrame(data=d)
>>> df.dtypes
col1 int64
col2 int64
dtype: object
Cast all columns to int32:
>>> df.astype('int32').dtypes
col1 int32
col2 int32
dtype: object
Cast col1 to int32 using a dictionary:
>>> df.astype({'col1': 'int32'}).dtypes
col1 int32
col2 int64
dtype: object
Create a series:
>>> ser = pd.Series([1, 2], dtype='int32')
>>> ser
0 1
1 2
dtype: int32
>>> ser.astype('int64')
0 1
1 2
dtype: int64
Convert to categorical type:
>>> ser.astype('category')
0 1
1 2
dtype: category
Categories (2, int64): [1, 2]
Convert to ordered categorical type with custom ordering:
>>> from pandas.api.types import CategoricalDtype
>>> cat_dtype = CategoricalDtype(
... categories=[2, 1], ordered=True)
>>> ser.astype(cat_dtype)
0 1
1 2
dtype: category
Categories (2, int64): [2 < 1]
Note that using copy=False and changing data on a new
pandas object may propagate changes:
>>> s1 = pd.Series([1, 2])
>>> s2 = s1.astype('int64', copy=False)
>>> s2[0] = 10
>>> s1 # note that s1[0] has changed too
0 10
1 2
dtype: int64
Create a series of dates:
>>> ser_date = pd.Series(pd.date_range('20200101', periods=3))
>>> ser_date
0 2020-01-01
1 2020-01-02
2 2020-01-03
dtype: datetime64[ns]
|
reference/api/pandas.DataFrame.astype.html
|
pandas.tseries.offsets.Tick.apply
|
pandas.tseries.offsets.Tick.apply
|
Tick.apply()#
|
reference/api/pandas.tseries.offsets.Tick.apply.html
|
pandas.Timestamp.isoformat
|
`pandas.Timestamp.isoformat`
Return the time formatted according to ISO 8601.
The full format looks like ‘YYYY-MM-DD HH:MM:SS.mmmmmmnnn’.
By default, the fractional part is omitted if self.microsecond == 0
and self.nanosecond == 0.
```
>>> ts = pd.Timestamp('2020-03-14T15:32:52.192548651')
>>> ts.isoformat()
'2020-03-14T15:32:52.192548651'
>>> ts.isoformat(timespec='microseconds')
'2020-03-14T15:32:52.192548'
```
|
Timestamp.isoformat()#
Return the time formatted according to ISO 8601.
The full format looks like ‘YYYY-MM-DD HH:MM:SS.mmmmmmnnn’.
By default, the fractional part is omitted if self.microsecond == 0
and self.nanosecond == 0.
If self.tzinfo is not None, the UTC offset is also attached, giving
a full format of ‘YYYY-MM-DD HH:MM:SS.mmmmmmnnn+HH:MM’.
Parameters
sepstr, default ‘T’String used as the separator between the date and time.
timespecstr, default ‘auto’Specifies the number of additional terms of the time to include.
The valid values are ‘auto’, ‘hours’, ‘minutes’, ‘seconds’,
‘milliseconds’, ‘microseconds’, and ‘nanoseconds’.
Returns
str
Examples
>>> ts = pd.Timestamp('2020-03-14T15:32:52.192548651')
>>> ts.isoformat()
'2020-03-14T15:32:52.192548651'
>>> ts.isoformat(timespec='microseconds')
'2020-03-14T15:32:52.192548'
|
reference/api/pandas.Timestamp.isoformat.html
|
pandas.Series.median
|
`pandas.Series.median`
Return the median of the values over the requested axis.
Axis for the function to be applied on.
For Series this parameter is unused and defaults to 0.
|
Series.median(axis=_NoDefault.no_default, skipna=True, level=None, numeric_only=None, **kwargs)[source]#
Return the median of the values over the requested axis.
Parameters
axis{index (0)}Axis for the function to be applied on.
For Series this parameter is unused and defaults to 0.
skipnabool, default TrueExclude NA/null values when computing the result.
levelint or level name, default NoneIf the axis is a MultiIndex (hierarchical), count along a
particular level, collapsing into a scalar.
Deprecated since version 1.3.0: The level keyword is deprecated. Use groupby instead.
numeric_onlybool, default NoneInclude only float, int, boolean columns. If None, will attempt to use
everything, then use only numeric data. Not implemented for Series.
Deprecated since version 1.5.0: Specifying numeric_only=None is deprecated. The default value will be
False in a future version of pandas.
**kwargsAdditional keyword arguments to be passed to the function.
Returns
scalar or Series (if level specified)
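A short example (not from the original docstring): the median of an even-length Series interpolates between the two middle values.
```
>>> pd.Series([1, 2, 3, 4]).median()
2.5
```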
|
reference/api/pandas.Series.median.html
|
pandas.DataFrame.prod
|
`pandas.DataFrame.prod`
Return the product of the values over the requested axis.
```
>>> pd.Series([], dtype="float64").prod()
1.0
```
|
DataFrame.prod(axis=None, skipna=True, level=None, numeric_only=None, min_count=0, **kwargs)[source]#
Return the product of the values over the requested axis.
Parameters
axis{index (0), columns (1)}Axis for the function to be applied on.
For Series this parameter is unused and defaults to 0.
skipnabool, default TrueExclude NA/null values when computing the result.
levelint or level name, default NoneIf the axis is a MultiIndex (hierarchical), count along a
particular level, collapsing into a Series.
Deprecated since version 1.3.0: The level keyword is deprecated. Use groupby instead.
numeric_onlybool, default NoneInclude only float, int, boolean columns. If None, will attempt to use
everything, then use only numeric data. Not implemented for Series.
Deprecated since version 1.5.0: Specifying numeric_only=None is deprecated. The default value will be
False in a future version of pandas.
min_countint, default 0The required number of valid values to perform the operation. If fewer than
min_count non-NA values are present the result will be NA.
**kwargsAdditional keyword arguments to be passed to the function.
Returns
Series or DataFrame (if level specified)
See also
Series.sumReturn the sum.
Series.minReturn the minimum.
Series.maxReturn the maximum.
Series.idxminReturn the index of the minimum.
Series.idxmaxReturn the index of the maximum.
DataFrame.sumReturn the sum over the requested axis.
DataFrame.minReturn the minimum over the requested axis.
DataFrame.maxReturn the maximum over the requested axis.
DataFrame.idxminReturn the index of the minimum over the requested axis.
DataFrame.idxmaxReturn the index of the maximum over the requested axis.
Examples
By default, the product of an empty or all-NA Series is 1
>>> pd.Series([], dtype="float64").prod()
1.0
This can be controlled with the min_count parameter
>>> pd.Series([], dtype="float64").prod(min_count=1)
nan
Thanks to the skipna parameter, min_count handles all-NA and
empty series identically.
>>> pd.Series([np.nan]).prod()
1.0
>>> pd.Series([np.nan]).prod(min_count=1)
nan
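The docstring examples above use Series; a hedged DataFrame sketch, where the product is taken column-wise along the default axis:
```
>>> df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
>>> df.prod()
a     2
b    12
dtype: int64
```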
|
reference/api/pandas.DataFrame.prod.html
|
pandas.IntervalDtype
|
`pandas.IntervalDtype`
An ExtensionDtype for Interval data.
This is not an actual numpy dtype, but a duck type.
```
>>> pd.IntervalDtype(subtype='int64', closed='both')
interval[int64, both]
```
|
class pandas.IntervalDtype(subtype=None, closed=None)[source]#
An ExtensionDtype for Interval data.
This is not an actual numpy dtype, but a duck type.
Parameters
subtypestr, np.dtypeThe dtype of the Interval bounds.
Examples
>>> pd.IntervalDtype(subtype='int64', closed='both')
interval[int64, both]
Attributes
subtype
The dtype of the Interval bounds.
Methods
None
|
reference/api/pandas.IntervalDtype.html
|
pandas.TimedeltaIndex.ceil
|
`pandas.TimedeltaIndex.ceil`
Perform ceil operation on the data to the specified freq.
```
>>> rng = pd.date_range('1/1/2018 11:59:00', periods=3, freq='min')
>>> rng
DatetimeIndex(['2018-01-01 11:59:00', '2018-01-01 12:00:00',
'2018-01-01 12:01:00'],
dtype='datetime64[ns]', freq='T')
>>> rng.ceil('H')
DatetimeIndex(['2018-01-01 12:00:00', '2018-01-01 12:00:00',
'2018-01-01 13:00:00'],
dtype='datetime64[ns]', freq=None)
```
|
TimedeltaIndex.ceil(*args, **kwargs)[source]#
Perform ceil operation on the data to the specified freq.
Parameters
freqstr or OffsetThe frequency level to ceil the index to. Must be a fixed
frequency like ‘S’ (second) not ‘ME’ (month end). See
frequency aliases for
a list of possible freq values.
ambiguous‘infer’, bool-ndarray, ‘NaT’, default ‘raise’Only relevant for DatetimeIndex:
‘infer’ will attempt to infer fall dst-transition hours based on
order
bool-ndarray where True signifies a DST time, False designates
a non-DST time (note that this flag is only applicable for
ambiguous times)
‘NaT’ will return NaT where there are ambiguous times
‘raise’ will raise an AmbiguousTimeError if there are ambiguous
times.
nonexistent‘shift_forward’, ‘shift_backward’, ‘NaT’, timedelta, default ‘raise’A nonexistent time does not exist in a particular timezone
where clocks moved forward due to DST.
‘shift_forward’ will shift the nonexistent time forward to the
closest existing time
‘shift_backward’ will shift the nonexistent time backward to the
closest existing time
‘NaT’ will return NaT where there are nonexistent times
timedelta objects will shift nonexistent times by the timedelta
‘raise’ will raise an NonExistentTimeError if there are
nonexistent times.
Returns
DatetimeIndex, TimedeltaIndex, or SeriesIndex of the same type for a DatetimeIndex or TimedeltaIndex,
or a Series with the same index for a Series.
Raises
ValueError if the freq cannot be converted.
Notes
If the timestamps have a timezone, ceiling will take place relative to the
local (“wall”) time and re-localized to the same timezone. When ceiling
near daylight savings time, use nonexistent and ambiguous to
control the re-localization behavior.
Examples
DatetimeIndex
>>> rng = pd.date_range('1/1/2018 11:59:00', periods=3, freq='min')
>>> rng
DatetimeIndex(['2018-01-01 11:59:00', '2018-01-01 12:00:00',
'2018-01-01 12:01:00'],
dtype='datetime64[ns]', freq='T')
>>> rng.ceil('H')
DatetimeIndex(['2018-01-01 12:00:00', '2018-01-01 12:00:00',
'2018-01-01 13:00:00'],
dtype='datetime64[ns]', freq=None)
Series
>>> pd.Series(rng).dt.ceil("H")
0 2018-01-01 12:00:00
1 2018-01-01 12:00:00
2 2018-01-01 13:00:00
dtype: datetime64[ns]
When rounding near a daylight savings time transition, use ambiguous or
nonexistent to control how the timestamp should be re-localized.
>>> rng_tz = pd.DatetimeIndex(["2021-10-31 01:30:00"], tz="Europe/Amsterdam")
>>> rng_tz.ceil("H", ambiguous=False)
DatetimeIndex(['2021-10-31 02:00:00+01:00'],
dtype='datetime64[ns, Europe/Amsterdam]', freq=None)
>>> rng_tz.ceil("H", ambiguous=True)
DatetimeIndex(['2021-10-31 02:00:00+02:00'],
dtype='datetime64[ns, Europe/Amsterdam]', freq=None)
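The docstring examples above use DatetimeIndex; a hedged sketch on a TimedeltaIndex itself:
```
>>> tdi = pd.TimedeltaIndex(['1 days 02:13:00', '2 days 21:45:00'])
>>> tdi.ceil('H')
TimedeltaIndex(['1 days 03:00:00', '2 days 22:00:00'], dtype='timedelta64[ns]', freq=None)
```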
|
reference/api/pandas.TimedeltaIndex.ceil.html
|
pandas.Series.str.isdigit
|
`pandas.Series.str.isdigit`
Check whether all characters in each string are digits.
```
>>> s1 = pd.Series(['one', 'one1', '1', ''])
```
|
Series.str.isdigit()[source]#
Check whether all characters in each string are digits.
This is equivalent to running the Python string method
str.isdigit() for each element of the Series/Index. If a string
has zero characters, False is returned for that check.
Returns
Series or Index of boolSeries or Index of boolean values with the same length as the original
Series/Index.
See also
Series.str.isalphaCheck whether all characters are alphabetic.
Series.str.isnumericCheck whether all characters are numeric.
Series.str.isalnumCheck whether all characters are alphanumeric.
Series.str.isdigitCheck whether all characters are digits.
Series.str.isdecimalCheck whether all characters are decimal.
Series.str.isspaceCheck whether all characters are whitespace.
Series.str.islowerCheck whether all characters are lowercase.
Series.str.isupperCheck whether all characters are uppercase.
Series.str.istitleCheck whether all characters are titlecase.
Examples
Checks for Alphabetic and Numeric Characters
>>> s1 = pd.Series(['one', 'one1', '1', ''])
>>> s1.str.isalpha()
0 True
1 False
2 False
3 False
dtype: bool
>>> s1.str.isnumeric()
0 False
1 False
2 True
3 False
dtype: bool
>>> s1.str.isalnum()
0 True
1 True
2 True
3 False
dtype: bool
Note that checks against characters mixed with any additional punctuation
or whitespace will evaluate to false for an alphanumeric check.
>>> s2 = pd.Series(['A B', '1.5', '3,000'])
>>> s2.str.isalnum()
0 False
1 False
2 False
dtype: bool
More Detailed Checks for Numeric Characters
There are several different but overlapping sets of numeric characters that
can be checked for.
>>> s3 = pd.Series(['23', '³', '⅕', ''])
The s3.str.isdecimal method checks for characters used to form numbers
in base 10.
>>> s3.str.isdecimal()
0 True
1 False
2 False
3 False
dtype: bool
The s.str.isdigit method is the same as s3.str.isdecimal but also
includes special digits, like superscripted and subscripted digits in
unicode.
>>> s3.str.isdigit()
0 True
1 True
2 False
3 False
dtype: bool
The s.str.isnumeric method is the same as s3.str.isdigit but also
includes other characters that can represent quantities such as unicode
fractions.
>>> s3.str.isnumeric()
0 True
1 True
2 True
3 False
dtype: bool
Checks for Whitespace
>>> s4 = pd.Series([' ', '\t\r\n ', ''])
>>> s4.str.isspace()
0 True
1 True
2 False
dtype: bool
Checks for Character Case
>>> s5 = pd.Series(['leopard', 'Golden Eagle', 'SNAKE', ''])
>>> s5.str.islower()
0 True
1 False
2 False
3 False
dtype: bool
>>> s5.str.isupper()
0 False
1 False
2 True
3 False
dtype: bool
The s5.str.istitle method checks for whether all words are in title
case (whether only the first letter of each word is capitalized). Words are
assumed to be any sequence of non-numeric characters separated by
whitespace characters.
>>> s5.str.istitle()
0 False
1 True
2 False
3 False
dtype: bool
|
reference/api/pandas.Series.str.isdigit.html
|
pandas.tseries.offsets.QuarterEnd.__call__
|
`pandas.tseries.offsets.QuarterEnd.__call__`
Call self as a function.
|
QuarterEnd.__call__(*args, **kwargs)#
Call self as a function.
|
reference/api/pandas.tseries.offsets.QuarterEnd.__call__.html
|
pandas.tseries.offsets.Minute.kwds
|
`pandas.tseries.offsets.Minute.kwds`
Return a dict of extra parameters for the offset.
```
>>> pd.DateOffset(5).kwds
{}
```
|
Minute.kwds#
Return a dict of extra parameters for the offset.
Examples
>>> pd.DateOffset(5).kwds
{}
>>> pd.offsets.FY5253Quarter().kwds
{'weekday': 0,
'startingMonth': 1,
'qtr_with_extra_week': 1,
'variation': 'nearest'}
|
reference/api/pandas.tseries.offsets.Minute.kwds.html
|
pandas.Categorical.dtype
|
`pandas.Categorical.dtype`
The CategoricalDtype for this instance.
|
property Categorical.dtype[source]#
The CategoricalDtype for this instance.
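A minimal sketch (not from the original docstring):
```
>>> cat = pd.Categorical(['a', 'b', 'a'])
>>> cat.dtype
CategoricalDtype(categories=['a', 'b'], ordered=False)
```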
|
reference/api/pandas.Categorical.dtype.html
|
pandas.tseries.offsets.BusinessDay.is_month_start
|
`pandas.tseries.offsets.BusinessDay.is_month_start`
Return boolean whether a timestamp occurs on the month start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
```
|
BusinessDay.is_month_start()#
Return boolean whether a timestamp occurs on the month start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
|
reference/api/pandas.tseries.offsets.BusinessDay.is_month_start.html
|
pandas.core.groupby.GroupBy.cumcount
|
`pandas.core.groupby.GroupBy.cumcount`
Number each item in each group from 0 to the length of that group - 1.
```
>>> df = pd.DataFrame([['a'], ['a'], ['a'], ['b'], ['b'], ['a']],
... columns=['A'])
>>> df
A
0 a
1 a
2 a
3 b
4 b
5 a
>>> df.groupby('A').cumcount()
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
>>> df.groupby('A').cumcount(ascending=False)
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
```
|
final GroupBy.cumcount(ascending=True)[source]#
Number each item in each group from 0 to the length of that group - 1.
Essentially this is equivalent to
self.apply(lambda x: pd.Series(np.arange(len(x)), x.index))
Parameters
ascendingbool, default TrueIf False, number in reverse, from length of group - 1 to 0.
Returns
SeriesSequence number of each element within each group.
See also
ngroupNumber the groups themselves.
Examples
>>> df = pd.DataFrame([['a'], ['a'], ['a'], ['b'], ['b'], ['a']],
... columns=['A'])
>>> df
A
0 a
1 a
2 a
3 b
4 b
5 a
>>> df.groupby('A').cumcount()
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
>>> df.groupby('A').cumcount(ascending=False)
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
|
reference/api/pandas.core.groupby.GroupBy.cumcount.html
|
pandas.IntervalIndex.contains
|
`pandas.IntervalIndex.contains`
Check elementwise if the Intervals contain the value.
```
>>> intervals = pd.arrays.IntervalArray.from_tuples([(0, 1), (1, 3), (2, 4)])
>>> intervals
<IntervalArray>
[(0, 1], (1, 3], (2, 4]]
Length: 3, dtype: interval[int64, right]
```
|
IntervalIndex.contains(*args, **kwargs)[source]#
Check elementwise if the Intervals contain the value.
Return a boolean mask whether the value is contained in the Intervals
of the IntervalArray.
New in version 0.25.0.
Parameters
otherscalarThe value to check whether it is contained in the Intervals.
Returns
boolean array
See also
Interval.containsCheck whether Interval object contains value.
IntervalArray.overlapsCheck if an Interval overlaps the values in the IntervalArray.
Examples
>>> intervals = pd.arrays.IntervalArray.from_tuples([(0, 1), (1, 3), (2, 4)])
>>> intervals
<IntervalArray>
[(0, 1], (1, 3], (2, 4]]
Length: 3, dtype: interval[int64, right]
>>> intervals.contains(0.5)
array([ True, False, False])
|
reference/api/pandas.IntervalIndex.contains.html
|
pandas.DataFrame.expanding
|
`pandas.DataFrame.expanding`
Provide expanding window calculations.
```
>>> df = pd.DataFrame({"B": [0, 1, 2, np.nan, 4]})
>>> df
B
0 0.0
1 1.0
2 2.0
3 NaN
4 4.0
```
|
DataFrame.expanding(min_periods=1, center=None, axis=0, method='single')[source]#
Provide expanding window calculations.
Parameters
min_periodsint, default 1Minimum number of observations in window required to have a value;
otherwise, result is np.nan.
centerbool, default FalseIf False, set the window labels as the right edge of the window index.
If True, set the window labels as the center of the window index.
Deprecated since version 1.1.0.
axisint or str, default 0If 0 or 'index', roll across the rows.
If 1 or 'columns', roll across the columns.
For Series this parameter is unused and defaults to 0.
methodstr {‘single’, ‘table’}, default ‘single’Execute the rolling operation per single column or row ('single')
or over the entire object ('table').
This argument is only implemented when specifying engine='numba'
in the method call.
New in version 1.3.0.
Returns
Expanding subclass
See also
rollingProvides rolling window calculations.
ewmProvides exponential weighted functions.
Notes
See Windowing Operations for further usage details
and examples.
Examples
>>> df = pd.DataFrame({"B": [0, 1, 2, np.nan, 4]})
>>> df
B
0 0.0
1 1.0
2 2.0
3 NaN
4 4.0
min_periods
Expanding sum with 1 vs 3 observations needed to calculate a value.
>>> df.expanding(1).sum()
B
0 0.0
1 1.0
2 3.0
3 3.0
4 7.0
>>> df.expanding(3).sum()
B
0 NaN
1 NaN
2 3.0
3 3.0
4 7.0
|
reference/api/pandas.DataFrame.expanding.html
|
pandas.DatetimeIndex.day
|
`pandas.DatetimeIndex.day`
The day of the datetime.
Examples
```
>>> datetime_series = pd.Series(
... pd.date_range("2000-01-01", periods=3, freq="D")
... )
>>> datetime_series
0 2000-01-01
1 2000-01-02
2 2000-01-03
dtype: datetime64[ns]
>>> datetime_series.dt.day
0 1
1 2
2 3
dtype: int64
```
|
property DatetimeIndex.day[source]#
The day of the datetime.
Examples
>>> datetime_series = pd.Series(
... pd.date_range("2000-01-01", periods=3, freq="D")
... )
>>> datetime_series
0 2000-01-01
1 2000-01-02
2 2000-01-03
dtype: datetime64[ns]
>>> datetime_series.dt.day
0 1
1 2
2 3
dtype: int64
|
reference/api/pandas.DatetimeIndex.day.html
|
pandas.TimedeltaIndex.floor
|
`pandas.TimedeltaIndex.floor`
Perform floor operation on the data to the specified freq.
```
>>> rng = pd.date_range('1/1/2018 11:59:00', periods=3, freq='min')
>>> rng
DatetimeIndex(['2018-01-01 11:59:00', '2018-01-01 12:00:00',
'2018-01-01 12:01:00'],
dtype='datetime64[ns]', freq='T')
>>> rng.floor('H')
DatetimeIndex(['2018-01-01 11:00:00', '2018-01-01 12:00:00',
'2018-01-01 12:00:00'],
dtype='datetime64[ns]', freq=None)
```
|
TimedeltaIndex.floor(*args, **kwargs)[source]#
Perform floor operation on the data to the specified freq.
Parameters
freqstr or OffsetThe frequency level to floor the index to. Must be a fixed
frequency like ‘S’ (second) not ‘ME’ (month end). See
frequency aliases for
a list of possible freq values.
ambiguous‘infer’, bool-ndarray, ‘NaT’, default ‘raise’Only relevant for DatetimeIndex:
‘infer’ will attempt to infer fall dst-transition hours based on
order
bool-ndarray where True signifies a DST time, False designates
a non-DST time (note that this flag is only applicable for
ambiguous times)
‘NaT’ will return NaT where there are ambiguous times
‘raise’ will raise an AmbiguousTimeError if there are ambiguous
times.
nonexistent‘shift_forward’, ‘shift_backward’, ‘NaT’, timedelta, default ‘raise’A nonexistent time does not exist in a particular timezone
where clocks moved forward due to DST.
‘shift_forward’ will shift the nonexistent time forward to the
closest existing time
‘shift_backward’ will shift the nonexistent time backward to the
closest existing time
‘NaT’ will return NaT where there are nonexistent times
timedelta objects will shift nonexistent times by the timedelta
‘raise’ will raise an NonExistentTimeError if there are
nonexistent times.
Returns
DatetimeIndex, TimedeltaIndex, or SeriesIndex of the same type for a DatetimeIndex or TimedeltaIndex,
or a Series with the same index for a Series.
Raises
ValueError if the freq cannot be converted.
Notes
If the timestamps have a timezone, flooring will take place relative to the
local (“wall”) time and re-localized to the same timezone. When flooring
near daylight savings time, use nonexistent and ambiguous to
control the re-localization behavior.
Examples
DatetimeIndex
>>> rng = pd.date_range('1/1/2018 11:59:00', periods=3, freq='min')
>>> rng
DatetimeIndex(['2018-01-01 11:59:00', '2018-01-01 12:00:00',
'2018-01-01 12:01:00'],
dtype='datetime64[ns]', freq='T')
>>> rng.floor('H')
DatetimeIndex(['2018-01-01 11:00:00', '2018-01-01 12:00:00',
'2018-01-01 12:00:00'],
dtype='datetime64[ns]', freq=None)
Series
>>> pd.Series(rng).dt.floor("H")
0 2018-01-01 11:00:00
1 2018-01-01 12:00:00
2 2018-01-01 12:00:00
dtype: datetime64[ns]
When rounding near a daylight savings time transition, use ambiguous or
nonexistent to control how the timestamp should be re-localized.
>>> rng_tz = pd.DatetimeIndex(["2021-10-31 03:30:00"], tz="Europe/Amsterdam")
>>> rng_tz.floor("2H", ambiguous=False)
DatetimeIndex(['2021-10-31 02:00:00+01:00'],
dtype='datetime64[ns, Europe/Amsterdam]', freq=None)
>>> rng_tz.floor("2H", ambiguous=True)
DatetimeIndex(['2021-10-31 02:00:00+02:00'],
dtype='datetime64[ns, Europe/Amsterdam]', freq=None)
|
reference/api/pandas.TimedeltaIndex.floor.html
|
pandas.tseries.offsets.CustomBusinessHour.copy
|
`pandas.tseries.offsets.CustomBusinessHour.copy`
Return a copy of the frequency.
```
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
```
|
CustomBusinessHour.copy()#
Return a copy of the frequency.
Examples
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
|
reference/api/pandas.tseries.offsets.CustomBusinessHour.copy.html
|
pandas.notna
|
`pandas.notna`
Detect non-missing values for an array-like object.
```
>>> pd.notna('dog')
True
```
|
pandas.notna(obj)[source]#
Detect non-missing values for an array-like object.
This function takes a scalar or array-like object and indicates
whether values are valid (not missing, which is NaN in numeric
arrays, None or NaN in object arrays, NaT in datetimelike).
Parameters
objarray-like or object valueObject to check for not null or non-missing values.
Returns
bool or array-like of boolFor scalar input, returns a scalar boolean.
For array input, returns an array of boolean indicating whether each
corresponding element is valid.
See also
isnaBoolean inverse of pandas.notna.
Series.notnaDetect valid values in a Series.
DataFrame.notnaDetect valid values in a DataFrame.
Index.notnaDetect valid values in an Index.
Examples
Scalar arguments (including strings) result in a scalar boolean.
>>> pd.notna('dog')
True
>>> pd.notna(pd.NA)
False
>>> pd.notna(np.nan)
False
ndarrays result in an ndarray of booleans.
>>> array = np.array([[1, np.nan, 3], [4, 5, np.nan]])
>>> array
array([[ 1., nan, 3.],
[ 4., 5., nan]])
>>> pd.notna(array)
array([[ True, False, True],
[ True, True, False]])
For indexes, an ndarray of booleans is returned.
>>> index = pd.DatetimeIndex(["2017-07-05", "2017-07-06", None,
... "2017-07-08"])
>>> index
DatetimeIndex(['2017-07-05', '2017-07-06', 'NaT', '2017-07-08'],
dtype='datetime64[ns]', freq=None)
>>> pd.notna(index)
array([ True, True, False, True])
For Series and DataFrame, the same type is returned, containing booleans.
>>> df = pd.DataFrame([['ant', 'bee', 'cat'], ['dog', None, 'fly']])
>>> df
0 1 2
0 ant bee cat
1 dog None fly
>>> pd.notna(df)
0 1 2
0 True True True
1 True False True
>>> pd.notna(df[1])
0 True
1 False
Name: 1, dtype: bool
|
reference/api/pandas.notna.html
|
pandas.Index.T
|
`pandas.Index.T`
Return the transpose, which is by definition self.
|
property Index.T[source]#
Return the transpose, which is by definition self.
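A short illustration (not from the original docstring): since the transpose is self, the very same object is returned.
```
>>> idx = pd.Index([1, 2, 3])
>>> idx.T is idx
True
```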
|
reference/api/pandas.Index.T.html
|
pandas.Series.explode
|
`pandas.Series.explode`
Transform each element of a list-like to a row.
```
>>> s = pd.Series([[1, 2, 3], 'foo', [], [3, 4]])
>>> s
0 [1, 2, 3]
1 foo
2 []
3 [3, 4]
dtype: object
```
|
Series.explode(ignore_index=False)[source]#
Transform each element of a list-like to a row.
New in version 0.25.0.
Parameters
ignore_indexbool, default FalseIf True, the resulting index will be labeled 0, 1, …, n - 1.
New in version 1.1.0.
Returns
SeriesExploded lists to rows; index will be duplicated for these rows.
See also
Series.str.splitSplit string values on specified separator.
Series.unstackUnstack, a.k.a. pivot, Series with MultiIndex to produce DataFrame.
DataFrame.meltUnpivot a DataFrame from wide format to long format.
DataFrame.explodeExplode a DataFrame from list-like columns to long format.
Notes
This routine will explode list-likes including lists, tuples, sets,
Series, and np.ndarray. The result dtype of the subset rows will
be object. Scalars will be returned unchanged, and empty list-likes will
result in a np.nan for that row. In addition, the ordering of elements in
the output will be non-deterministic when exploding sets.
Reference the user guide for more examples.
Examples
>>> s = pd.Series([[1, 2, 3], 'foo', [], [3, 4]])
>>> s
0 [1, 2, 3]
1 foo
2 []
3 [3, 4]
dtype: object
>>> s.explode()
0 1
0 2
0 3
1 foo
2 NaN
3 3
3 4
dtype: object
|
reference/api/pandas.Series.explode.html
|
pandas.tseries.offsets.Milli.is_month_end
|
`pandas.tseries.offsets.Milli.is_month_end`
Return boolean whether a timestamp occurs on the month end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
```
|
Milli.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
|
reference/api/pandas.tseries.offsets.Milli.is_month_end.html
|
pandas.Series.flags
|
`pandas.Series.flags`
Get the properties associated with this pandas object.
The available flags are
```
>>> df = pd.DataFrame({"A": [1, 2]})
>>> df.flags
<Flags(allows_duplicate_labels=True)>
```
|
property Series.flags[source]#
Get the properties associated with this pandas object.
The available flags are
Flags.allows_duplicate_labels
See also
FlagsFlags that apply to pandas objects.
DataFrame.attrsGlobal metadata applying to this dataset.
Notes
“Flags” differ from “metadata”. Flags reflect properties of the
pandas object (the Series or DataFrame). Metadata refer to properties
of the dataset, and should be stored in DataFrame.attrs.
Examples
>>> df = pd.DataFrame({"A": [1, 2]})
>>> df.flags
<Flags(allows_duplicate_labels=True)>
Flags can be read or set using dotted attribute access.
>>> df.flags.allows_duplicate_labels
True
>>> df.flags.allows_duplicate_labels = False
Or by slicing with a key
>>> df.flags["allows_duplicate_labels"]
False
>>> df.flags["allows_duplicate_labels"] = True
|
reference/api/pandas.Series.flags.html
|
pandas.core.window.rolling.Rolling.sum
|
`pandas.core.window.rolling.Rolling.sum`
Calculate the rolling sum.
Include only float, int, boolean columns.
```
>>> s = pd.Series([1, 2, 3, 4, 5])
>>> s
0 1
1 2
2 3
3 4
4 5
dtype: int64
```
|
Rolling.sum(numeric_only=False, *args, engine=None, engine_kwargs=None, **kwargs)[source]#
Calculate the rolling sum.
Parameters
numeric_onlybool, default FalseInclude only float, int, boolean columns.
New in version 1.5.0.
*argsFor NumPy compatibility and will not have an effect on the result.
Deprecated since version 1.5.0.
enginestr, default None
'cython' : Runs the operation through C-extensions from cython.
'numba' : Runs the operation through JIT compiled code from numba.
None : Defaults to 'cython' or globally setting compute.use_numba
New in version 1.3.0.
engine_kwargsdict, default None
For 'cython' engine, there are no accepted engine_kwargs
For 'numba' engine, the engine can accept nopython, nogil
and parallel dictionary keys. The values must either be True or
False. The default engine_kwargs for the 'numba' engine is
{'nopython': True, 'nogil': False, 'parallel': False}
New in version 1.3.0.
**kwargsFor NumPy compatibility and will not have an effect on the result.
Deprecated since version 1.5.0.
Returns
Series or DataFrameReturn type is the same as the original object with np.float64 dtype.
See also
pandas.Series.rollingCalling rolling with Series data.
pandas.DataFrame.rollingCalling rolling with DataFrames.
pandas.Series.sumAggregating sum for Series.
pandas.DataFrame.sumAggregating sum for DataFrame.
Notes
See Numba engine and Numba (JIT compilation) for extended documentation and performance considerations for the Numba engine.
Examples
>>> s = pd.Series([1, 2, 3, 4, 5])
>>> s
0 1
1 2
2 3
3 4
4 5
dtype: int64
>>> s.rolling(3).sum()
0 NaN
1 NaN
2 6.0
3 9.0
4 12.0
dtype: float64
>>> s.rolling(3, center=True).sum()
0 NaN
1 6.0
2 9.0
3 12.0
4 NaN
dtype: float64
For DataFrame, each sum is computed column-wise.
>>> df = pd.DataFrame({"A": s, "B": s ** 2})
>>> df
A B
0 1 1
1 2 4
2 3 9
3 4 16
4 5 25
>>> df.rolling(3).sum()
A B
0 NaN NaN
1 NaN NaN
2 6.0 14.0
3 9.0 29.0
4 12.0 50.0
|
reference/api/pandas.core.window.rolling.Rolling.sum.html
|
pandas.core.window.rolling.Rolling.sem
|
`pandas.core.window.rolling.Rolling.sem`
Calculate the rolling standard error of mean.
Delta Degrees of Freedom. The divisor used in calculations
is N - ddof, where N represents the number of elements.
```
>>> s = pd.Series([0, 1, 2, 3])
>>> s.rolling(2, min_periods=1).sem()
0 NaN
1 0.707107
2 0.707107
3 0.707107
dtype: float64
```
|
Rolling.sem(ddof=1, numeric_only=False, *args, **kwargs)[source]#
Calculate the rolling standard error of mean.
Parameters
ddofint, default 1Delta Degrees of Freedom. The divisor used in calculations
is N - ddof, where N represents the number of elements.
numeric_onlybool, default FalseInclude only float, int, boolean columns.
New in version 1.5.0.
*argsFor NumPy compatibility and will not have an effect on the result.
Deprecated since version 1.5.0.
**kwargsFor NumPy compatibility and will not have an effect on the result.
Deprecated since version 1.5.0.
Returns
Series or DataFrameReturn type is the same as the original object with np.float64 dtype.
See also
pandas.Series.rollingCalling rolling with Series data.
pandas.DataFrame.rollingCalling rolling with DataFrames.
pandas.Series.semAggregating sem for Series.
pandas.DataFrame.semAggregating sem for DataFrame.
Notes
A minimum of one period is required for the calculation.
Examples
>>> s = pd.Series([0, 1, 2, 3])
>>> s.rolling(2, min_periods=1).sem()
0 NaN
1 0.707107
2 0.707107
3 0.707107
dtype: float64
|
reference/api/pandas.core.window.rolling.Rolling.sem.html
|
pandas.Timestamp.asm8
|
`pandas.Timestamp.asm8`
Return numpy datetime64 format in nanoseconds.
```
>>> ts = pd.Timestamp(2020, 3, 14, 15)
>>> ts.asm8
numpy.datetime64('2020-03-14T15:00:00.000000000')
```
|
Timestamp.asm8#
Return numpy datetime64 format in nanoseconds.
Examples
>>> ts = pd.Timestamp(2020, 3, 14, 15)
>>> ts.asm8
numpy.datetime64('2020-03-14T15:00:00.000000000')
|
reference/api/pandas.Timestamp.asm8.html
|
pandas.Series.compare
|
`pandas.Series.compare`
Compare to another Series and show the differences.
```
>>> s1 = pd.Series(["a", "b", "c", "d", "e"])
>>> s2 = pd.Series(["a", "a", "c", "b", "e"])
```
|
Series.compare(other, align_axis=1, keep_shape=False, keep_equal=False, result_names=('self', 'other'))[source]#
Compare to another Series and show the differences.
New in version 1.1.0.
Parameters
otherSeriesObject to compare with.
align_axis{0 or ‘index’, 1 or ‘columns’}, default 1Determine which axis to align the comparison on.
0, or ‘index’Resulting differences are stacked vertically with rows drawn alternately from self and other.
1, or ‘columns’Resulting differences are aligned horizontally with columns drawn alternately from self and other.
keep_shapebool, default FalseIf true, all rows and columns are kept.
Otherwise, only the ones with different values are kept.
keep_equalbool, default FalseIf true, the result keeps values that are equal.
Otherwise, equal values are shown as NaNs.
result_namestuple, default (‘self’, ‘other’)Set the dataframes names in the comparison.
New in version 1.5.0.
Returns
Series or DataFrameIf axis is 0 or ‘index’ the result will be a Series.
The resulting index will be a MultiIndex with ‘self’ and ‘other’
stacked alternately at the inner level.
If axis is 1 or ‘columns’ the result will be a DataFrame.
It will have two columns namely ‘self’ and ‘other’.
See also
DataFrame.compareCompare with another DataFrame and show differences.
Notes
Matching NaNs will not appear as a difference.
Examples
>>> s1 = pd.Series(["a", "b", "c", "d", "e"])
>>> s2 = pd.Series(["a", "a", "c", "b", "e"])
Align the differences on columns
>>> s1.compare(s2)
self other
1 b a
3 d b
Stack the differences on indices
>>> s1.compare(s2, align_axis=0)
1 self b
other a
3 self d
other b
dtype: object
Keep all original rows
>>> s1.compare(s2, keep_shape=True)
self other
0 NaN NaN
1 b a
2 NaN NaN
3 d b
4 NaN NaN
Keep all original rows and also all original values
>>> s1.compare(s2, keep_shape=True, keep_equal=True)
self other
0 a a
1 b a
2 c c
3 d b
4 e e
|
reference/api/pandas.Series.compare.html
|
pandas.Series.axes
|
`pandas.Series.axes`
Return a list of the row axis labels.
|
property Series.axes[source]#
Return a list of the row axis labels.
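As an illustrative sketch (example Series assumed, not from the original page):
>>> s = pd.Series([1, 2, 3], index=["a", "b", "c"])
>>> s.axes
[Index(['a', 'b', 'c'], dtype='object')]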
|
reference/api/pandas.Series.axes.html
|
pandas.tseries.offsets.CustomBusinessMonthEnd.cbday_roll
|
`pandas.tseries.offsets.CustomBusinessMonthEnd.cbday_roll`
Define default roll function to be called in apply method.
|
CustomBusinessMonthEnd.cbday_roll#
Define default roll function to be called in apply method.
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthEnd.cbday_roll.html
|
pandas.DataFrame.take
|
`pandas.DataFrame.take`
Return the elements in the given positional indices along an axis.
```
>>> df = pd.DataFrame([('falcon', 'bird', 389.0),
... ('parrot', 'bird', 24.0),
... ('lion', 'mammal', 80.5),
... ('monkey', 'mammal', np.nan)],
... columns=['name', 'class', 'max_speed'],
... index=[0, 2, 3, 1])
>>> df
name class max_speed
0 falcon bird 389.0
2 parrot bird 24.0
3 lion mammal 80.5
1 monkey mammal NaN
```
|
DataFrame.take(indices, axis=0, is_copy=None, **kwargs)[source]#
Return the elements in the given positional indices along an axis.
This means that we are not indexing according to actual values in
the index attribute of the object. We are indexing according to the
actual position of the element in the object.
Parameters
indicesarray-likeAn array of ints indicating which positions to take.
axis{0 or ‘index’, 1 or ‘columns’, None}, default 0The axis on which to select elements. 0 means that we are
selecting rows, 1 means that we are selecting columns.
For Series this parameter is unused and defaults to 0.
is_copyboolBefore pandas 1.0, is_copy=False could be specified to ensure
that the return value is an actual copy. Starting with pandas 1.0,
take always returns a copy, and the keyword is therefore
deprecated.
Deprecated since version 1.0.0.
**kwargsFor compatibility with numpy.take(). Has no effect on the
output.
Returns
takensame type as callerAn array-like containing the elements taken from the object.
See also
DataFrame.locSelect a subset of a DataFrame by labels.
DataFrame.ilocSelect a subset of a DataFrame by positions.
numpy.takeTake elements from an array along an axis.
Examples
>>> df = pd.DataFrame([('falcon', 'bird', 389.0),
... ('parrot', 'bird', 24.0),
... ('lion', 'mammal', 80.5),
... ('monkey', 'mammal', np.nan)],
... columns=['name', 'class', 'max_speed'],
... index=[0, 2, 3, 1])
>>> df
name class max_speed
0 falcon bird 389.0
2 parrot bird 24.0
3 lion mammal 80.5
1 monkey mammal NaN
Take elements at positions 0 and 3 along the axis 0 (default).
Note how the actual indices selected (0 and 1) do not correspond to
our selected indices 0 and 3. That’s because we are selecting the 0th
and 3rd rows, not rows whose indices equal 0 and 3.
>>> df.take([0, 3])
name class max_speed
0 falcon bird 389.0
1 monkey mammal NaN
Take elements at indices 1 and 2 along the axis 1 (column selection).
>>> df.take([1, 2], axis=1)
class max_speed
0 bird 389.0
2 bird 24.0
3 mammal 80.5
1 mammal NaN
We may take elements using negative integers for positive indices,
starting from the end of the object, just like with Python lists.
>>> df.take([-1, -2])
name class max_speed
1 monkey mammal NaN
3 lion mammal 80.5
|
reference/api/pandas.DataFrame.take.html
|
How to handle time series data with ease?
|
How to handle time series data with ease?
For this tutorial, air quality data about \(NO_2\) and Particulate
matter less than 2.5 micrometers is used, made available by
OpenAQ and downloaded using the
py-openaq package.
The air_quality_no2_long.csv" data set provides \(NO_2\) values
for the measurement stations FR04014, BETR801 and London
Westminster in respectively Paris, Antwerp and London.
I want to work with the dates in the column datetime as datetime objects instead of plain text
Initially, the values in datetime are character strings and do not
provide any datetime operations (e.g. extracting the year, day of the
week,…). By applying the to_datetime function, pandas interprets the
strings and converts these to datetime (i.e. datetime64[ns, UTC])
objects. In pandas we call these datetime objects, which are similar to
datetime.datetime from the standard library, pandas.Timestamp.
|
import matplotlib.pyplot as plt
Data used for this tutorial:
Air quality data
For this tutorial, air quality data about \(NO_2\) and Particulate
matter less than 2.5 micrometers is used, made available by
OpenAQ and downloaded using the
py-openaq package.
The air_quality_no2_long.csv" data set provides \(NO_2\) values
for the measurement stations FR04014, BETR801 and London
Westminster in respectively Paris, Antwerp and London.
To raw data
In [3]: air_quality = pd.read_csv("data/air_quality_no2_long.csv")
In [4]: air_quality = air_quality.rename(columns={"date.utc": "datetime"})
In [5]: air_quality.head()
Out[5]:
city country datetime location parameter value unit
0 Paris FR 2019-06-21 00:00:00+00:00 FR04014 no2 20.0 µg/m³
1 Paris FR 2019-06-20 23:00:00+00:00 FR04014 no2 21.8 µg/m³
2 Paris FR 2019-06-20 22:00:00+00:00 FR04014 no2 26.5 µg/m³
3 Paris FR 2019-06-20 21:00:00+00:00 FR04014 no2 24.9 µg/m³
4 Paris FR 2019-06-20 20:00:00+00:00 FR04014 no2 21.4 µg/m³
In [6]: air_quality.city.unique()
Out[6]: array(['Paris', 'Antwerpen', 'London'], dtype=object)
How to handle time series data with ease?#
Using pandas datetime properties#
I want to work with the dates in the column datetime as datetime objects instead of plain text
In [7]: air_quality["datetime"] = pd.to_datetime(air_quality["datetime"])
In [8]: air_quality["datetime"]
Out[8]:
0 2019-06-21 00:00:00+00:00
1 2019-06-20 23:00:00+00:00
2 2019-06-20 22:00:00+00:00
3 2019-06-20 21:00:00+00:00
4 2019-06-20 20:00:00+00:00
...
2063 2019-05-07 06:00:00+00:00
2064 2019-05-07 04:00:00+00:00
2065 2019-05-07 03:00:00+00:00
2066 2019-05-07 02:00:00+00:00
2067 2019-05-07 01:00:00+00:00
Name: datetime, Length: 2068, dtype: datetime64[ns, UTC]
Initially, the values in datetime are character strings and do not
provide any datetime operations (e.g. extracting the year, day of the
week,…). By applying the to_datetime function, pandas interprets the
strings and converts these to datetime (i.e. datetime64[ns, UTC])
objects. In pandas we call these datetime objects, which are similar to
datetime.datetime from the standard library, pandas.Timestamp.
Note
As many data sets contain datetime information in one of their
columns, pandas input functions like pandas.read_csv() and pandas.read_json()
can do the transformation to dates when reading the data, using the
parse_dates parameter with a list of the columns to read as
Timestamp:
pd.read_csv("../data/air_quality_no2_long.csv", parse_dates=["datetime"])
Why are these pandas.Timestamp objects useful? Let’s illustrate the added
value with some example cases.
What is the start and end date of the time series data set we are working
with?
In [9]: air_quality["datetime"].min(), air_quality["datetime"].max()
Out[9]:
(Timestamp('2019-05-07 01:00:00+0000', tz='UTC'),
Timestamp('2019-06-21 00:00:00+0000', tz='UTC'))
Using pandas.Timestamp for datetimes enables us to calculate with date
information and make them comparable. Hence, we can use this to get the
length of our time series:
In [10]: air_quality["datetime"].max() - air_quality["datetime"].min()
Out[10]: Timedelta('44 days 23:00:00')
The result is a pandas.Timedelta object, similar to datetime.timedelta
from the standard Python library and defining a time duration.
To user guideThe various time concepts supported by pandas are explained in the user guide section on time related concepts.
I want to add a new column to the DataFrame containing only the month of the measurement
In [11]: air_quality["month"] = air_quality["datetime"].dt.month
In [12]: air_quality.head()
Out[12]:
city country datetime ... value unit month
0 Paris FR 2019-06-21 00:00:00+00:00 ... 20.0 µg/m³ 6
1 Paris FR 2019-06-20 23:00:00+00:00 ... 21.8 µg/m³ 6
2 Paris FR 2019-06-20 22:00:00+00:00 ... 26.5 µg/m³ 6
3 Paris FR 2019-06-20 21:00:00+00:00 ... 24.9 µg/m³ 6
4 Paris FR 2019-06-20 20:00:00+00:00 ... 21.4 µg/m³ 6
[5 rows x 8 columns]
By using Timestamp objects for dates, a lot of time-related
properties are provided by pandas. For example the month, but also
year, weekofyear, quarter,… All of these properties are
accessible by the dt accessor.
To user guideAn overview of the existing date properties is given in the
time and date components overview table. More details about the dt accessor
to return datetime like properties are explained in a dedicated section on the dt accessor.
What is the average \(NO_2\) concentration for each day of the week for each of the measurement locations?
In [13]: air_quality.groupby(
....: [air_quality["datetime"].dt.weekday, "location"])["value"].mean()
....:
Out[13]:
datetime location
0 BETR801 27.875000
FR04014 24.856250
London Westminster 23.969697
1 BETR801 22.214286
FR04014 30.999359
...
5 FR04014 25.266154
London Westminster 24.977612
6 BETR801 21.896552
FR04014 23.274306
London Westminster 24.859155
Name: value, Length: 21, dtype: float64
Remember the split-apply-combine pattern provided by groupby from the
tutorial on statistics calculation?
Here, we want to calculate a given statistic (e.g. mean \(NO_2\))
for each weekday and for each measurement location. To group on
weekdays, we use the datetime property weekday (with Monday=0 and
Sunday=6) of pandas Timestamp, which is also accessible by the
dt accessor. The grouping on both locations and weekdays can be done
to split the calculation of the mean on each of these combinations.
Danger
As we are working with a very short time series in these
examples, the analysis does not provide a long-term representative
result!
Plot the typical \(NO_2\) pattern during the day of our time series of all stations together. In other words, what is the average value for each hour of the day?
In [14]: fig, axs = plt.subplots(figsize=(12, 4))
In [15]: air_quality.groupby(air_quality["datetime"].dt.hour)["value"].mean().plot(
....: kind='bar', rot=0, ax=axs
....: )
....:
Out[15]: <AxesSubplot: xlabel='datetime'>
In [16]: plt.xlabel("Hour of the day"); # custom x label using Matplotlib
In [17]: plt.ylabel("$NO_2 (µg/m^3)$");
Similar to the previous case, we want to calculate a given statistic
(e.g. mean \(NO_2\)) for each hour of the day and we can use the
split-apply-combine approach again. For this case, we use the datetime property hour
of pandas Timestamp, which is also accessible by the dt accessor.
Datetime as index#
In the tutorial on reshaping,
pivot() was introduced to reshape the data table with each of the
measurements locations as a separate column:
In [18]: no_2 = air_quality.pivot(index="datetime", columns="location", values="value")
In [19]: no_2.head()
Out[19]:
location BETR801 FR04014 London Westminster
datetime
2019-05-07 01:00:00+00:00 50.5 25.0 23.0
2019-05-07 02:00:00+00:00 45.0 27.7 19.0
2019-05-07 03:00:00+00:00 NaN 50.4 19.0
2019-05-07 04:00:00+00:00 NaN 61.9 16.0
2019-05-07 05:00:00+00:00 NaN 72.4 NaN
Note
By pivoting the data, the datetime information became the
index of the table. In general, setting a column as an index can be
achieved by the set_index function.
Working with a datetime index (i.e. DatetimeIndex) provides powerful
functionalities. For example, we do not need the dt accessor to get
the time series properties, but have these properties available on the
index directly:
In [20]: no_2.index.year, no_2.index.weekday
Out[20]:
(Int64Index([2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019,
...
2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019],
dtype='int64', name='datetime', length=1033),
Int64Index([1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
...
3, 3, 3, 3, 3, 3, 3, 3, 3, 4],
dtype='int64', name='datetime', length=1033))
Some other advantages are the convenient subsetting of time periods and
the adapted time scale on plots. Let’s apply this to our data.
Create a plot of the \(NO_2\) values in the different stations from the 20th of May till the end of 21st of May
In [21]: no_2["2019-05-20":"2019-05-21"].plot();
By providing a string that parses to a datetime, a specific subset of the data can be selected on a DatetimeIndex.
To user guideMore information on the DatetimeIndex and the slicing by using strings is provided in the section on time series indexing.
Resample a time series to another frequency#
Aggregate the current hourly time series values to the monthly maximum value in each of the stations.
In [22]: monthly_max = no_2.resample("M").max()
In [23]: monthly_max
Out[23]:
location BETR801 FR04014 London Westminster
datetime
2019-05-31 00:00:00+00:00 74.5 97.0 97.0
2019-06-30 00:00:00+00:00 52.5 84.7 52.0
A very powerful method on time series data with a datetime index, is the
ability to resample() time series to another frequency (e.g.,
converting per-second data into 5-minute data).
The resample() method is similar to a groupby operation:
it provides a time-based grouping, by using a string (e.g. M,
5H,…) that defines the target frequency
it requires an aggregation function such as mean, max,…
To user guideAn overview of the aliases used to define time series frequencies is given in the offset aliases overview table.
When defined, the frequency of the time series is provided by the
freq attribute:
In [24]: monthly_max.index.freq
Out[24]: <MonthEnd>
Make a plot of the daily mean \(NO_2\) value in each of the stations.
In [25]: no_2.resample("D").mean().plot(style="-o", figsize=(10, 5));
To user guideMore details on the power of time series resampling is provided in the user guide section on resampling.
REMEMBER
Valid date strings can be converted to datetime objects using
to_datetime function or as part of read functions.
Datetime objects in pandas support calculations, logical operations
and convenient date-related properties using the dt accessor.
A DatetimeIndex contains these date-related properties and
supports convenient slicing.
Resample is a powerful method to change the frequency of a time
series.
To user guideA full overview on time series is given on the pages on time series and date functionality.
|
getting_started/intro_tutorials/09_timeseries.html
|
pandas.Index.isnull
|
`pandas.Index.isnull`
Detect missing values.
```
>>> idx = pd.Index([5.2, 6.0, np.NaN])
>>> idx
Float64Index([5.2, 6.0, nan], dtype='float64')
>>> idx.isna()
array([False, False, True])
```
|
Index.isnull()[source]#
Detect missing values.
Return a boolean same-sized object indicating if the values are NA.
NA values, such as None, numpy.NaN or pd.NaT, get
mapped to True values.
Everything else gets mapped to False values. Characters such as
empty strings ‘’ or numpy.inf are not considered NA values
(unless you set pandas.options.mode.use_inf_as_na = True).
Returns
numpy.ndarray[bool]A boolean array of whether my values are NA.
See also
Index.notnaBoolean inverse of isna.
Index.dropnaOmit entries with missing values.
isnaTop-level isna.
Series.isnaDetect missing values in Series object.
Examples
Show which entries in a pandas.Index are NA. The result is an
array.
>>> idx = pd.Index([5.2, 6.0, np.NaN])
>>> idx
Float64Index([5.2, 6.0, nan], dtype='float64')
>>> idx.isna()
array([False, False, True])
Empty strings are not considered NA values. None is considered an NA
value.
>>> idx = pd.Index(['black', '', 'red', None])
>>> idx
Index(['black', '', 'red', None], dtype='object')
>>> idx.isna()
array([False, False, False, True])
For datetimes, NaT (Not a Time) is considered as an NA value.
>>> idx = pd.DatetimeIndex([pd.Timestamp('1940-04-25'),
... pd.Timestamp(''), None, pd.NaT])
>>> idx
DatetimeIndex(['1940-04-25', 'NaT', 'NaT', 'NaT'],
dtype='datetime64[ns]', freq=None)
>>> idx.isna()
array([False, True, True, True])
|
reference/api/pandas.Index.isnull.html
|
pandas.Series.unstack
|
`pandas.Series.unstack`
Unstack, also known as pivot, Series with MultiIndex to produce DataFrame.
```
>>> s = pd.Series([1, 2, 3, 4],
... index=pd.MultiIndex.from_product([['one', 'two'],
... ['a', 'b']]))
>>> s
one a 1
b 2
two a 3
b 4
dtype: int64
```
|
Series.unstack(level=- 1, fill_value=None)[source]#
Unstack, also known as pivot, Series with MultiIndex to produce DataFrame.
Parameters
levelint, str, or list of these, default last levelLevel(s) to unstack, can pass level name.
fill_valuescalar value, default NoneValue to use when replacing NaN values.
Returns
DataFrameUnstacked Series.
Notes
Reference the user guide for more examples.
Examples
>>> s = pd.Series([1, 2, 3, 4],
... index=pd.MultiIndex.from_product([['one', 'two'],
... ['a', 'b']]))
>>> s
one a 1
b 2
two a 3
b 4
dtype: int64
>>> s.unstack(level=-1)
a b
one 1 2
two 3 4
>>> s.unstack(level=0)
one two
a 1 3
b 2 4
|
reference/api/pandas.Series.unstack.html
|
pandas.DataFrame.query
|
`pandas.DataFrame.query`
Query the columns of a DataFrame with a boolean expression.
The query string to evaluate.
```
>>> df = pd.DataFrame({'A': range(1, 6),
... 'B': range(10, 0, -2),
... 'C C': range(10, 5, -1)})
>>> df
A B C C
0 1 10 10
1 2 8 9
2 3 6 8
3 4 4 7
4 5 2 6
>>> df.query('A > B')
A B C C
4 5 2 6
```
|
DataFrame.query(expr, *, inplace=False, **kwargs)[source]#
Query the columns of a DataFrame with a boolean expression.
Parameters
exprstrThe query string to evaluate.
You can refer to variables
in the environment by prefixing them with an ‘@’ character like
@a + b.
You can refer to column names that are not valid Python variable names
by surrounding them in backticks. Thus, column names containing spaces
or punctuations (besides underscores) or starting with digits must be
surrounded by backticks. (For example, a column named “Area (cm^2)” would
be referenced as `Area (cm^2)`). Column names which are Python keywords
(like “list”, “for”, “import”, etc) cannot be used.
For example, if one of your columns is called a a and you want
to sum it with b, your query should be `a a` + b.
New in version 0.25.0: Backtick quoting introduced.
New in version 1.0.0: Expanding functionality of backtick quoting for more than only spaces.
inplaceboolWhether to modify the DataFrame rather than creating a new one.
**kwargsSee the documentation for eval() for complete details
on the keyword arguments accepted by DataFrame.query().
Returns
DataFrame or NoneDataFrame resulting from the provided query expression or
None if inplace=True.
See also
evalEvaluate a string describing operations on DataFrame columns.
DataFrame.evalEvaluate a string describing operations on DataFrame columns.
Notes
The result of the evaluation of this expression is first passed to
DataFrame.loc and if that fails because of a
multidimensional key (e.g., a DataFrame) then the result will be passed
to DataFrame.__getitem__().
This method uses the top-level eval() function to
evaluate the passed query.
The query() method uses a slightly
modified Python syntax by default. For example, the & and |
(bitwise) operators have the precedence of their boolean cousins,
and and or. This is syntactically valid Python,
however the semantics are different.
You can change the semantics of the expression by passing the keyword
argument parser='python'. This enforces the same semantics as
evaluation in Python space. Likewise, you can pass engine='python'
to evaluate an expression using Python itself as a backend. This is not
recommended as it is inefficient compared to using numexpr as the
engine.
The DataFrame.index and
DataFrame.columns attributes of the
DataFrame instance are placed in the query namespace
by default, which allows you to treat both the index and columns of the
frame as a column in the frame.
The identifier index is used for the frame index; you can also
use the name of the index to identify it in a query. Please note that
Python keywords may not be used as identifiers.
For further details and examples see the query documentation in
indexing.
Backtick quoted variables
Backtick quoted variables are parsed as literal Python code and
are converted internally to a valid Python identifier.
This can lead to the following problems.
During parsing a number of disallowed characters inside the backtick
quoted string are replaced by strings that are allowed as a Python identifier.
These characters include all operators in Python, the space character, the
question mark, the exclamation mark, the dollar sign, and the euro sign.
For other characters that fall outside the ASCII range (U+0001..U+007F)
and those that are not further specified in PEP 3131,
the query parser will raise an error.
This excludes whitespace different than the space character,
but also the hashtag (as it is used for comments) and the backtick
itself (backtick can also not be escaped).
In a special case, quotes that make a pair around a backtick can
confuse the parser.
For example, `it's` > `that's` will raise an error,
as it forms a quoted string ('s > `that') with a backtick inside.
See also the Python documentation about lexical analysis
(https://docs.python.org/3/reference/lexical_analysis.html)
in combination with the source code in pandas.core.computation.parsing.
Examples
>>> df = pd.DataFrame({'A': range(1, 6),
... 'B': range(10, 0, -2),
... 'C C': range(10, 5, -1)})
>>> df
A B C C
0 1 10 10
1 2 8 9
2 3 6 8
3 4 4 7
4 5 2 6
>>> df.query('A > B')
A B C C
4 5 2 6
The previous expression is equivalent to
>>> df[df.A > df.B]
A B C C
4 5 2 6
For columns with spaces in their name, you can use backtick quoting.
>>> df.query('B == `C C`')
A B C C
0 1 10 10
The previous expression is equivalent to
>>> df[df.B == df['C C']]
A B C C
0 1 10 10
|
reference/api/pandas.DataFrame.query.html
|
pandas.Index.holds_integer
|
`pandas.Index.holds_integer`
Whether the type is an integer type.
|
final Index.holds_integer()[source]#
Whether the type is an integer type.
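As an illustrative sketch of typical behavior (example indexes assumed):
>>> pd.Index([1, 2, 3]).holds_integer()
True
>>> pd.Index(['a', 'b', 'c']).holds_integer()
False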
|
reference/api/pandas.Index.holds_integer.html
|
pandas.DataFrame.columns
|
`pandas.DataFrame.columns`
The column labels of the DataFrame.
|
DataFrame.columns#
The column labels of the DataFrame.
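As an illustrative sketch (example frame assumed):
>>> df = pd.DataFrame({'A': [1, 2], 'B': [3, 4]})
>>> df.columns
Index(['A', 'B'], dtype='object')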
|
reference/api/pandas.DataFrame.columns.html
|
pandas.DataFrame.pivot
|
`pandas.DataFrame.pivot`
Return reshaped DataFrame organized by given index / column values.
```
>>> df = pd.DataFrame({'foo': ['one', 'one', 'one', 'two', 'two',
... 'two'],
... 'bar': ['A', 'B', 'C', 'A', 'B', 'C'],
... 'baz': [1, 2, 3, 4, 5, 6],
... 'zoo': ['x', 'y', 'z', 'q', 'w', 't']})
>>> df
foo bar baz zoo
0 one A 1 x
1 one B 2 y
2 one C 3 z
3 two A 4 q
4 two B 5 w
5 two C 6 t
```
|
DataFrame.pivot(*, index=None, columns=None, values=None)[source]#
Return reshaped DataFrame organized by given index / column values.
Reshape data (produce a “pivot” table) based on column values. Uses
unique values from specified index / columns to form axes of the
resulting DataFrame. This function does not support data
aggregation, multiple values will result in a MultiIndex in the
columns. See the User Guide for more on reshaping.
Parameters
indexstr or object or a list of str, optionalColumn to use to make new frame’s index. If None, uses
existing index.
Changed in version 1.1.0: Also accept list of index names.
columnsstr or object or a list of strColumn to use to make new frame’s columns.
Changed in version 1.1.0: Also accept list of columns names.
valuesstr, object or a list of the previous, optionalColumn(s) to use for populating new frame’s values. If not
specified, all remaining columns will be used and the result will
have hierarchically indexed columns.
Returns
DataFrameReturns reshaped DataFrame.
Raises
ValueError:When there are any index, columns combinations with multiple
values. Use DataFrame.pivot_table when you need to aggregate.
See also
DataFrame.pivot_tableGeneralization of pivot that can handle duplicate values for one index/column pair.
DataFrame.unstackPivot based on the index values instead of a column.
wide_to_longWide panel to long format. Less flexible but more user-friendly than melt.
Notes
For finer-tuned control, see hierarchical indexing documentation along
with the related stack/unstack methods.
Reference the user guide for more examples.
Examples
>>> df = pd.DataFrame({'foo': ['one', 'one', 'one', 'two', 'two',
... 'two'],
... 'bar': ['A', 'B', 'C', 'A', 'B', 'C'],
... 'baz': [1, 2, 3, 4, 5, 6],
... 'zoo': ['x', 'y', 'z', 'q', 'w', 't']})
>>> df
foo bar baz zoo
0 one A 1 x
1 one B 2 y
2 one C 3 z
3 two A 4 q
4 two B 5 w
5 two C 6 t
>>> df.pivot(index='foo', columns='bar', values='baz')
bar A B C
foo
one 1 2 3
two 4 5 6
>>> df.pivot(index='foo', columns='bar')['baz']
bar A B C
foo
one 1 2 3
two 4 5 6
>>> df.pivot(index='foo', columns='bar', values=['baz', 'zoo'])
baz zoo
bar A B C A B C
foo
one 1 2 3 x y z
two 4 5 6 q w t
You could also assign a list of column names or a list of index names.
>>> df = pd.DataFrame({
... "lev1": [1, 1, 1, 2, 2, 2],
... "lev2": [1, 1, 2, 1, 1, 2],
... "lev3": [1, 2, 1, 2, 1, 2],
... "lev4": [1, 2, 3, 4, 5, 6],
... "values": [0, 1, 2, 3, 4, 5]})
>>> df
lev1 lev2 lev3 lev4 values
0 1 1 1 1 0
1 1 1 2 2 1
2 1 2 1 3 2
3 2 1 2 4 3
4 2 1 1 5 4
5 2 2 2 6 5
>>> df.pivot(index="lev1", columns=["lev2", "lev3"],values="values")
lev2 1 2
lev3 1 2 1 2
lev1
1 0.0 1.0 2.0 NaN
2 4.0 3.0 NaN 5.0
>>> df.pivot(index=["lev1", "lev2"], columns=["lev3"],values="values")
lev3 1 2
lev1 lev2
1 1 0.0 1.0
2 2.0 NaN
2 1 4.0 3.0
2 NaN 5.0
A ValueError is raised if there are any duplicates.
>>> df = pd.DataFrame({"foo": ['one', 'one', 'two', 'two'],
... "bar": ['A', 'A', 'B', 'C'],
... "baz": [1, 2, 3, 4]})
>>> df
foo bar baz
0 one A 1
1 one A 2
2 two B 3
3 two C 4
Notice that the first two rows are the same for our index
and columns arguments.
>>> df.pivot(index='foo', columns='bar', values='baz')
Traceback (most recent call last):
...
ValueError: Index contains duplicate entries, cannot reshape
|
reference/api/pandas.DataFrame.pivot.html
|
pandas.read_xml
|
`pandas.read_xml`
Read XML document into a DataFrame object.
```
>>> xml = '''<?xml version='1.0' encoding='utf-8'?>
... <data xmlns="http://example.com">
... <row>
... <shape>square</shape>
... <degrees>360</degrees>
... <sides>4.0</sides>
... </row>
... <row>
... <shape>circle</shape>
... <degrees>360</degrees>
... <sides/>
... </row>
... <row>
... <shape>triangle</shape>
... <degrees>180</degrees>
... <sides>3.0</sides>
... </row>
... </data>'''
```
|
pandas.read_xml(path_or_buffer, *, xpath='./*', namespaces=None, elems_only=False, attrs_only=False, names=None, dtype=None, converters=None, parse_dates=None, encoding='utf-8', parser='lxml', stylesheet=None, iterparse=None, compression='infer', storage_options=None)[source]#
Read XML document into a DataFrame object.
New in version 1.3.0.
Parameters
path_or_bufferstr, path object, or file-like objectString, path object (implementing os.PathLike[str]), or file-like
object implementing a read() function. The string can be any valid XML
string or a path. The string can further be a URL. Valid URL schemes
include http, ftp, s3, and file.
xpathstr, optional, default ‘./*’The XPath to parse required set of nodes for migration to DataFrame.
XPath should return a collection of elements and not a single
element. Note: The etree parser supports limited XPath
expressions. For more complex XPath, use lxml which requires
installation.
namespacesdict, optionalThe namespaces defined in XML document as dicts with key being
namespace prefix and value the URI. There is no need to include all
namespaces in XML, only the ones used in xpath expression.
Note: if XML document uses default namespace denoted as
xmlns=’<URI>’ without a prefix, you must assign any temporary
namespace prefix such as ‘doc’ to the URI in order to parse
underlying nodes and/or attributes. For example,
namespaces = {"doc": "https://example.com"}
elems_onlybool, optional, default FalseParse only the child elements at the specified xpath. By default,
all child elements and non-empty text nodes are returned.
attrs_onlybool, optional, default FalseParse only the attributes at the specified xpath.
By default, all attributes are returned.
nameslist-like, optionalColumn names for DataFrame of parsed XML data. Use this parameter to
rename original element names and distinguish same named elements and
attributes.
dtypeType name or dict of column -> type, optionalData type for data or columns. E.g. {‘a’: np.float64, ‘b’: np.int32,
‘c’: ‘Int64’}
Use str or object together with suitable na_values settings
to preserve and not interpret dtype.
If converters are specified, they will be applied INSTEAD
of dtype conversion.
New in version 1.5.0.
convertersdict, optionalDict of functions for converting values in certain columns. Keys can either
be integers or column labels.
New in version 1.5.0.
parse_datesbool or list of int or names or list of lists or dict, default FalseIdentifiers to parse index or columns to datetime. The behavior is as follows:
boolean. If True -> try parsing the index.
list of int or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3
each as a separate date column.
list of lists. e.g. If [[1, 3]] -> combine columns 1 and 3 and parse as
a single date column.
dict, e.g. {‘foo’ : [1, 3]} -> parse columns 1, 3 as date and call
result ‘foo’
New in version 1.5.0.
encodingstr, optional, default ‘utf-8’Encoding of XML document.
parser{‘lxml’,’etree’}, default ‘lxml’Parser module to use for retrieval of data. Only ‘lxml’ and
‘etree’ are supported. With ‘lxml’ more complex XPath searches
and ability to use XSLT stylesheet are supported.
stylesheetstr, path object or file-like objectA URL, file-like object, or a raw string containing an XSLT script.
This stylesheet should flatten complex, deeply nested XML documents
for easier parsing. To use this feature you must have lxml module
installed and specify ‘lxml’ as parser. The xpath must
reference nodes of transformed XML document generated after XSLT
transformation and not the original XML document. Only XSLT 1.0
scripts, and not later versions, are currently supported.
iterparsedict, optionalThe nodes or attributes to retrieve in iterparsing of XML document
as a dict with key being the name of repeating element and value being
list of elements or attribute names that are descendants of the repeated
element. Note: If this option is used, it will replace xpath parsing
and unlike xpath, descendants do not need to relate to each other but can
exist anywhere in the document under the repeating element. This memory-
efficient method should be used for very large XML files (500MB, 1GB, or 5GB+).
For example,
iterparse = {"row_element": ["child_elem", "attr", "grandchild_elem"]}
New in version 1.5.0.
compressionstr or dict, default ‘infer’For on-the-fly decompression of on-disk data. If ‘infer’ and ‘path_or_buffer’ is
path-like, then detect compression from the following extensions: ‘.gz’,
‘.bz2’, ‘.zip’, ‘.xz’, ‘.zst’, ‘.tar’, ‘.tar.gz’, ‘.tar.xz’ or ‘.tar.bz2’
(otherwise no compression).
If using ‘zip’ or ‘tar’, the ZIP file must contain only one data file to be read in.
Set to None for no decompression.
Can also be a dict with key 'method' set
to one of {'zip', 'gzip', 'bz2', 'zstd', 'tar'} and other
key-value pairs are forwarded to
zipfile.ZipFile, gzip.GzipFile,
bz2.BZ2File, zstandard.ZstdDecompressor or
tarfile.TarFile, respectively.
As an example, the following could be passed for Zstandard decompression using a
custom compression dictionary:
compression={'method': 'zstd', 'dict_data': my_compression_dict}.
New in version 1.5.0: Added support for .tar files.
Changed in version 1.4.0: Zstandard support.
storage_optionsdict, optionalExtra options that make sense for a particular storage connection, e.g.
host, port, username, password, etc. For HTTP(S) URLs the key-value pairs
are forwarded to urllib.request.Request as header options. For other
URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are
forwarded to fsspec.open. Please see fsspec and urllib for more
details, and for more examples on storage options refer here.
Returns
dfA DataFrame.
See also
read_jsonConvert a JSON string to pandas object.
read_htmlRead HTML tables into a list of DataFrame objects.
Notes
This method is best designed to import shallow XML documents in
following format which is the ideal fit for the two-dimensions of a
DataFrame (row by column).
<root>
<row>
<column1>data</column1>
<column2>data</column2>
<column3>data</column3>
...
</row>
<row>
...
</row>
...
</root>
As a file format, XML documents can be designed any way including
layout of elements and attributes as long as it conforms to W3C
specifications. Therefore, this method is a convenience handler for
a specific flatter design and not all possible XML structures.
However, for more complex XML documents, stylesheet allows you to
temporarily redesign the original document with XSLT (a special-purpose
language) for a flatter version for migration to a DataFrame.
This function will always return a single DataFrame or raise
exceptions due to issues with XML document, xpath, or other
parameters.
See the read_xml documentation in the IO section of the docs for more information in using this method to parse XML
files to DataFrames.
Examples
>>> xml = '''<?xml version='1.0' encoding='utf-8'?>
... <data xmlns="http://example.com">
... <row>
... <shape>square</shape>
... <degrees>360</degrees>
... <sides>4.0</sides>
... </row>
... <row>
... <shape>circle</shape>
... <degrees>360</degrees>
... <sides/>
... </row>
... <row>
... <shape>triangle</shape>
... <degrees>180</degrees>
... <sides>3.0</sides>
... </row>
... </data>'''
>>> df = pd.read_xml(xml)
>>> df
shape degrees sides
0 square 360 4.0
1 circle 360 NaN
2 triangle 180 3.0
>>> xml = '''<?xml version='1.0' encoding='utf-8'?>
... <data>
... <row shape="square" degrees="360" sides="4.0"/>
... <row shape="circle" degrees="360"/>
... <row shape="triangle" degrees="180" sides="3.0"/>
... </data>'''
>>> df = pd.read_xml(xml, xpath=".//row")
>>> df
shape degrees sides
0 square 360 4.0
1 circle 360 NaN
2 triangle 180 3.0
>>> xml = '''<?xml version='1.0' encoding='utf-8'?>
... <doc:data xmlns:doc="https://example.com">
... <doc:row>
... <doc:shape>square</doc:shape>
... <doc:degrees>360</doc:degrees>
... <doc:sides>4.0</doc:sides>
... </doc:row>
... <doc:row>
... <doc:shape>circle</doc:shape>
... <doc:degrees>360</doc:degrees>
... <doc:sides/>
... </doc:row>
... <doc:row>
... <doc:shape>triangle</doc:shape>
... <doc:degrees>180</doc:degrees>
... <doc:sides>3.0</doc:sides>
... </doc:row>
... </doc:data>'''
>>> df = pd.read_xml(xml,
... xpath="//doc:row",
... namespaces={"doc": "https://example.com"})
>>> df
shape degrees sides
0 square 360 4.0
1 circle 360 NaN
2 triangle 180 3.0
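For very large documents, a minimal iterparse sketch (assuming a local file data.xml with repeating <row> elements; iterparse reads from an on-disk file rather than an XML string):
>>> df = pd.read_xml("data.xml",
...                  iterparse={"row": ["shape", "degrees", "sides"]})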
|
reference/api/pandas.read_xml.html
|
pandas.Series.any
|
`pandas.Series.any`
Return whether any element is True, potentially over an axis.
```
>>> pd.Series([False, False]).any()
False
>>> pd.Series([True, False]).any()
True
>>> pd.Series([], dtype="float64").any()
False
>>> pd.Series([np.nan]).any()
False
>>> pd.Series([np.nan]).any(skipna=False)
True
```
|
Series.any(*, axis=0, bool_only=None, skipna=True, level=None, **kwargs)[source]#
Return whether any element is True, potentially over an axis.
Returns False unless there is at least one element within a series or
along a Dataframe axis that is True or equivalent (e.g. non-zero or
non-empty).
Parameters
axis{0 or ‘index’, 1 or ‘columns’, None}, default 0Indicate which axis or axes should be reduced. For Series this parameter
is unused and defaults to 0.
0 / ‘index’ : reduce the index, return a Series whose index is the
original column labels.
1 / ‘columns’ : reduce the columns, return a Series whose index is the
original index.
None : reduce all axes, return a scalar.
bool_onlybool, default NoneInclude only boolean columns. If None, will attempt to use everything,
then use only boolean data. Not implemented for Series.
skipnabool, default TrueExclude NA/null values. If the entire row/column is NA and skipna is
True, then the result will be False, as for an empty row/column.
If skipna is False, then NA are treated as True, because these are not
equal to zero.
levelint or level name, default NoneIf the axis is a MultiIndex (hierarchical), count along a
particular level, collapsing into a scalar.
Deprecated since version 1.3.0: The level keyword is deprecated. Use groupby instead.
**kwargsany, default NoneAdditional keywords have no effect but might be accepted for
compatibility with NumPy.
Returns
scalar or SeriesIf level is specified, then a Series is returned; otherwise, a scalar
is returned.
See also
numpy.anyNumpy version of this method.
Series.anyReturn whether any element is True.
Series.allReturn whether all elements are True.
DataFrame.anyReturn whether any element is True over requested axis.
DataFrame.allReturn whether all elements are True over requested axis.
Examples
Series
For Series input, the output is a scalar indicating whether any element
is True.
>>> pd.Series([False, False]).any()
False
>>> pd.Series([True, False]).any()
True
>>> pd.Series([], dtype="float64").any()
False
>>> pd.Series([np.nan]).any()
False
>>> pd.Series([np.nan]).any(skipna=False)
True
DataFrame
Whether each column contains at least one True element (the default).
>>> df = pd.DataFrame({"A": [1, 2], "B": [0, 2], "C": [0, 0]})
>>> df
A B C
0 1 0 0
1 2 2 0
>>> df.any()
A True
B True
C False
dtype: bool
Aggregating over the columns.
>>> df = pd.DataFrame({"A": [True, False], "B": [1, 2]})
>>> df
A B
0 True 1
1 False 2
>>> df.any(axis='columns')
0 True
1 True
dtype: bool
>>> df = pd.DataFrame({"A": [True, False], "B": [1, 0]})
>>> df
A B
0 True 1
1 False 0
>>> df.any(axis='columns')
0 True
1 False
dtype: bool
Aggregating over the entire DataFrame with axis=None.
>>> df.any(axis=None)
True
any for an empty DataFrame is an empty Series.
>>> pd.DataFrame([]).any()
Series([], dtype: bool)
|
reference/api/pandas.Series.any.html
|
pandas.api.types.is_int64_dtype
|
`pandas.api.types.is_int64_dtype`
Check whether the provided array or dtype is of the int64 dtype.
The array or dtype to check.
```
>>> is_int64_dtype(str)
False
>>> is_int64_dtype(np.int32)
False
>>> is_int64_dtype(np.int64)
True
>>> is_int64_dtype('int8')
False
>>> is_int64_dtype('Int8')
False
>>> is_int64_dtype(pd.Int64Dtype)
True
>>> is_int64_dtype(float)
False
>>> is_int64_dtype(np.uint64) # unsigned
False
>>> is_int64_dtype(np.array(['a', 'b']))
False
>>> is_int64_dtype(np.array([1, 2], dtype=np.int64))
True
>>> is_int64_dtype(pd.Index([1, 2.])) # float
False
>>> is_int64_dtype(np.array([1, 2], dtype=np.uint32)) # unsigned
False
```
|
pandas.api.types.is_int64_dtype(arr_or_dtype)[source]#
Check whether the provided array or dtype is of the int64 dtype.
Parameters
arr_or_dtypearray-like or dtypeThe array or dtype to check.
Returns
booleanWhether or not the array or dtype is of the int64 dtype.
Notes
Depending on system architecture, the return value of is_int64_dtype(
int) will be True if the OS uses 64-bit integers and False if the OS
uses 32-bit integers.
Examples
>>> is_int64_dtype(str)
False
>>> is_int64_dtype(np.int32)
False
>>> is_int64_dtype(np.int64)
True
>>> is_int64_dtype('int8')
False
>>> is_int64_dtype('Int8')
False
>>> is_int64_dtype(pd.Int64Dtype)
True
>>> is_int64_dtype(float)
False
>>> is_int64_dtype(np.uint64) # unsigned
False
>>> is_int64_dtype(np.array(['a', 'b']))
False
>>> is_int64_dtype(np.array([1, 2], dtype=np.int64))
True
>>> is_int64_dtype(pd.Index([1, 2.])) # float
False
>>> is_int64_dtype(np.array([1, 2], dtype=np.uint32)) # unsigned
False
|
reference/api/pandas.api.types.is_int64_dtype.html
|
pandas.tseries.offsets.YearBegin.rollforward
|
`pandas.tseries.offsets.YearBegin.rollforward`
Roll provided date forward to next offset only if not on offset.
|
YearBegin.rollforward()#
Roll provided date forward to next offset only if not on offset.
Returns
TimeStampRolled timestamp if not on offset, otherwise unchanged timestamp.
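As an illustrative sketch (dates assumed):
>>> pd.offsets.YearBegin().rollforward(pd.Timestamp(2022, 8, 5))
Timestamp('2023-01-01 00:00:00')
>>> pd.offsets.YearBegin().rollforward(pd.Timestamp(2022, 1, 1))
Timestamp('2022-01-01 00:00:00')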
|
reference/api/pandas.tseries.offsets.YearBegin.rollforward.html
|
pandas.Series.pipe
|
`pandas.Series.pipe`
Apply chainable functions that expect Series or DataFrames.
Function to apply to the Series/DataFrame.
args, and kwargs are passed into func.
Alternatively a (callable, data_keyword) tuple where
data_keyword is a string indicating the keyword of
callable that expects the Series/DataFrame.
```
>>> func(g(h(df), arg1=a), arg2=b, arg3=c)
```
|
Series.pipe(func, *args, **kwargs)[source]#
Apply chainable functions that expect Series or DataFrames.
Parameters
funcfunctionFunction to apply to the Series/DataFrame.
args, and kwargs are passed into func.
Alternatively a (callable, data_keyword) tuple where
data_keyword is a string indicating the keyword of
callable that expects the Series/DataFrame.
argsiterable, optionalPositional arguments passed into func.
kwargsmapping, optionalA dictionary of keyword arguments passed into func.
Returns
objectthe return type of func.
See also
DataFrame.applyApply a function along input axis of DataFrame.
DataFrame.applymapApply a function elementwise on a whole DataFrame.
Series.mapApply a mapping correspondence on a Series.
Notes
Use .pipe when chaining together functions that expect
Series, DataFrames or GroupBy objects. Instead of writing
>>> func(g(h(df), arg1=a), arg2=b, arg3=c)
You can write
>>> (df.pipe(h)
... .pipe(g, arg1=a)
... .pipe(func, arg2=b, arg3=c)
... )
If you have a function that takes the data as (say) the second
argument, pass a tuple indicating which keyword expects the
data. For example, suppose f takes its data as arg2:
>>> (df.pipe(h)
... .pipe(g, arg1=a)
... .pipe((func, 'arg2'), arg1=a, arg3=c)
... )
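As a concrete, runnable sketch (toy function assumed):
>>> s = pd.Series([1, 2, 3])
>>> s.pipe(lambda ser, k: ser * k, k=10)
0 10
1 20
2 30
dtype: int64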
|
reference/api/pandas.Series.pipe.html
|
pandas.api.types.is_re_compilable
|
`pandas.api.types.is_re_compilable`
Check if the object can be compiled into a regex pattern instance.
Whether obj can be compiled as a regex pattern.
```
>>> is_re_compilable(".*")
True
>>> is_re_compilable(1)
False
```
|
pandas.api.types.is_re_compilable(obj)[source]#
Check if the object can be compiled into a regex pattern instance.
Parameters
objThe object to check
Returns
is_regex_compilableboolWhether obj can be compiled as a regex pattern.
Examples
>>> is_re_compilable(".*")
True
>>> is_re_compilable(1)
False
|
reference/api/pandas.api.types.is_re_compilable.html
|
pandas.errors.UnsortedIndexError
|
`pandas.errors.UnsortedIndexError`
Error raised when slicing a MultiIndex which has not been lexsorted.
|
exception pandas.errors.UnsortedIndexError[source]#
Error raised when slicing a MultiIndex which has not been lexsorted.
Subclass of KeyError.
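A sketch of a situation that can raise it (example index assumed; the exact message varies):
>>> mi = pd.MultiIndex.from_tuples([("b", 1), ("a", 2)])
>>> s = pd.Series([1, 2], index=mi)
>>> s.loc[("a", 2):("b", 1)] # may raise UnsortedIndexError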
|
reference/api/pandas.errors.UnsortedIndexError.html
|
pandas.DatetimeIndex.inferred_freq
|
`pandas.DatetimeIndex.inferred_freq`
Tries to return a string representing a frequency generated by infer_freq.
|
DatetimeIndex.inferred_freq[source]#
Tries to return a string representing a frequency generated by infer_freq.
Returns None if it can’t autodetect the frequency.
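As an illustrative sketch (dates assumed):
>>> idx = pd.DatetimeIndex(["2018-01-01", "2018-01-03", "2018-01-05"])
>>> idx.inferred_freq
'2D'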
|
reference/api/pandas.DatetimeIndex.inferred_freq.html
|
pandas.DataFrame.copy
|
`pandas.DataFrame.copy`
Make a copy of this object’s indices and data.
```
>>> s = pd.Series([1, 2], index=["a", "b"])
>>> s
a 1
b 2
dtype: int64
```
|
DataFrame.copy(deep=True)[source]#
Make a copy of this object’s indices and data.
When deep=True (default), a new object will be created with a
copy of the calling object’s data and indices. Modifications to
the data or indices of the copy will not be reflected in the
original object (see notes below).
When deep=False, a new object will be created without copying
the calling object’s data or index (only references to the data
and index are copied). Any changes to the data of the original
will be reflected in the shallow copy (and vice versa).
Parameters
deepbool, default TrueMake a deep copy, including a copy of the data and the indices.
With deep=False neither the indices nor the data are copied.
Returns
copySeries or DataFrameObject type matches caller.
Notes
When deep=True, data is copied but actual Python objects
will not be copied recursively, only the reference to the object.
This is in contrast to copy.deepcopy in the Standard Library,
which recursively copies object data (see examples below).
While Index objects are copied when deep=True, the underlying
numpy array is not copied for performance reasons. Since Index is
immutable, the underlying data can be safely shared and a copy
is not needed.
Since pandas is not thread safe, see the
gotchas when copying in a threading
environment.
Examples
>>> s = pd.Series([1, 2], index=["a", "b"])
>>> s
a 1
b 2
dtype: int64
>>> s_copy = s.copy()
>>> s_copy
a 1
b 2
dtype: int64
Shallow copy versus default (deep) copy:
>>> s = pd.Series([1, 2], index=["a", "b"])
>>> deep = s.copy()
>>> shallow = s.copy(deep=False)
Shallow copy shares data and index with original.
>>> s is shallow
False
>>> s.values is shallow.values and s.index is shallow.index
True
Deep copy has own copy of data and index.
>>> s is deep
False
>>> s.values is deep.values or s.index is deep.index
False
Updates to the data shared by shallow copy and original are reflected
in both; the deep copy remains unchanged.
>>> s[0] = 3
>>> shallow[1] = 4
>>> s
a 3
b 4
dtype: int64
>>> shallow
a 3
b 4
dtype: int64
>>> deep
a 1
b 2
dtype: int64
Note that when copying an object containing Python objects, a deep copy
will copy the data, but will not do so recursively. Updating a nested
data object will be reflected in the deep copy.
>>> s = pd.Series([[1, 2], [3, 4]])
>>> deep = s.copy()
>>> s[0][0] = 10
>>> s
0 [10, 2]
1 [3, 4]
dtype: object
>>> deep
0 [10, 2]
1 [3, 4]
dtype: object
|
reference/api/pandas.DataFrame.copy.html
|
pandas.Series.radd
|
`pandas.Series.radd`
Return Addition of series and other, element-wise (binary operator radd).
Equivalent to other + series, but with support to substitute a fill_value for
missing data in either one of the inputs.
```
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.add(b, fill_value=0)
a 2.0
b 1.0
c 1.0
d 1.0
e NaN
dtype: float64
```
|
Series.radd(other, level=None, fill_value=None, axis=0)[source]#
Return Addition of series and other, element-wise (binary operator radd).
Equivalent to other + series, but with support to substitute a fill_value for
missing data in either one of the inputs.
Parameters
otherSeries or scalar value
levelint or nameBroadcast across a level, matching Index values on the
passed MultiIndex level.
fill_valueNone or float value, default None (NaN)Fill existing missing (NaN) values, and any new element needed for
successful Series alignment, with this value before computation.
If data in both corresponding Series locations is missing
the result of filling (at that location) will be missing.
axis{0 or ‘index’}Unused. Parameter needed for compatibility with DataFrame.
Returns
SeriesThe result of the operation.
See also
Series.addElement-wise Addition, see Python documentation for more details.
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.add(b, fill_value=0)
a 2.0
b 1.0
c 1.0
d 1.0
e NaN
dtype: float64
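The example above exercises add; a sketch of the reflected variant on the same data (the result is identical here because addition is commutative):
>>> a.radd(b, fill_value=0)
a 2.0
b 1.0
c 1.0
d 1.0
e NaN
dtype: float64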
|
reference/api/pandas.Series.radd.html
|
pandas.tseries.offsets.Milli.apply_index
|
`pandas.tseries.offsets.Milli.apply_index`
Vectorized apply of DateOffset to DatetimeIndex.
|
Milli.apply_index()#
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead.
Parameters
indexDatetimeIndex
Returns
DatetimeIndex
Raises
NotImplementedErrorWhen the specific offset subclass does not have a vectorized
implementation.
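Since this method is deprecated, a sketch of the documented replacement, offset + dtindex (index values assumed):
>>> dtindex = pd.DatetimeIndex(["2022-01-01", "2022-01-02"])
>>> pd.offsets.Milli() + dtindex
DatetimeIndex(['2022-01-01 00:00:00.001', '2022-01-02 00:00:00.001'], dtype='datetime64[ns]', freq=None)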
|
reference/api/pandas.tseries.offsets.Milli.apply_index.html
|
pandas.Series.str.casefold
|
`pandas.Series.str.casefold`
Convert strings in the Series/Index to be casefolded.
```
>>> s = pd.Series(['lower', 'CAPITALS', 'this is a sentence', 'SwApCaSe'])
>>> s
0 lower
1 CAPITALS
2 this is a sentence
3 SwApCaSe
dtype: object
```
|
Series.str.casefold()[source]#
Convert strings in the Series/Index to be casefolded.
New in version 0.25.0.
Equivalent to str.casefold().
Returns
Series or Index of object
See also
Series.str.lowerConverts all characters to lowercase.
Series.str.upperConverts all characters to uppercase.
Series.str.titleConverts first character of each word to uppercase and remaining to lowercase.
Series.str.capitalizeConverts first character to uppercase and remaining to lowercase.
Series.str.swapcaseConverts uppercase to lowercase and lowercase to uppercase.
Series.str.casefoldRemoves all case distinctions in the string.
Examples
>>> s = pd.Series(['lower', 'CAPITALS', 'this is a sentence', 'SwApCaSe'])
>>> s
0 lower
1 CAPITALS
2 this is a sentence
3 SwApCaSe
dtype: object
>>> s.str.lower()
0 lower
1 capitals
2 this is a sentence
3 swapcase
dtype: object
>>> s.str.upper()
0 LOWER
1 CAPITALS
2 THIS IS A SENTENCE
3 SWAPCASE
dtype: object
>>> s.str.title()
0 Lower
1 Capitals
2 This Is A Sentence
3 Swapcase
dtype: object
>>> s.str.capitalize()
0 Lower
1 Capitals
2 This is a sentence
3 Swapcase
dtype: object
>>> s.str.swapcase()
0 LOWER
1 capitals
2 THIS IS A SENTENCE
3 sWaPcAsE
dtype: object
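The examples above exercise the related methods; a sketch of casefold itself on the same Series:
>>> s.str.casefold()
0 lower
1 capitals
2 this is a sentence
3 swapcase
dtype: object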
|
reference/api/pandas.Series.str.casefold.html
|
pandas.tseries.offsets.Nano.apply
|
`pandas.tseries.offsets.Nano.apply`
|
Nano.apply()#
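The page itself carries no description. As a hedged sketch, the addition form recommended elsewhere in the offset API for apply-style methods would be:
>>> ts = pd.Timestamp(2022, 1, 1)
>>> pd.offsets.Nano() + ts
Timestamp('2022-01-01 00:00:00.000000001')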
|
reference/api/pandas.tseries.offsets.Nano.apply.html
|
pandas.PeriodIndex.asfreq
|
`pandas.PeriodIndex.asfreq`
Convert the PeriodArray to the specified frequency freq.
```
>>> pidx = pd.period_range('2010-01-01', '2015-01-01', freq='A')
>>> pidx
PeriodIndex(['2010', '2011', '2012', '2013', '2014', '2015'],
dtype='period[A-DEC]')
```
|
PeriodIndex.asfreq(freq=None, how='E')[source]#
Convert the PeriodArray to the specified frequency freq.
Equivalent to applying pandas.Period.asfreq() with the given arguments
to each Period in this PeriodArray.
Parameters
freqstrA frequency.
howstr {‘E’, ‘S’}, default ‘E’Whether the elements should be aligned to the end
or start within the period.
‘E’, ‘END’, or ‘FINISH’ for end,
‘S’, ‘START’, or ‘BEGIN’ for start.
January 31st (‘END’) vs. January 1st (‘START’) for example.
Returns
PeriodArrayThe transformed PeriodArray with the new frequency.
See also
pandas.arrays.PeriodArray.asfreqConvert each Period in a PeriodArray to the given frequency.
Period.asfreqConvert a Period object to the given frequency.
Examples
>>> pidx = pd.period_range('2010-01-01', '2015-01-01', freq='A')
>>> pidx
PeriodIndex(['2010', '2011', '2012', '2013', '2014', '2015'],
dtype='period[A-DEC]')
>>> pidx.asfreq('M')
PeriodIndex(['2010-12', '2011-12', '2012-12', '2013-12', '2014-12',
'2015-12'], dtype='period[M]')
>>> pidx.asfreq('M', how='S')
PeriodIndex(['2010-01', '2011-01', '2012-01', '2013-01', '2014-01',
'2015-01'], dtype='period[M]')
|
reference/api/pandas.PeriodIndex.asfreq.html
|
pandas.Period.hour
|
`pandas.Period.hour`
Get the hour of the day component of the Period.
```
>>> p = pd.Period("2018-03-11 13:03:12.050000")
>>> p.hour
13
```
|
Period.hour#
Get the hour of the day component of the Period.
Returns
intThe hour as an integer, between 0 and 23.
See also
Period.secondGet the second component of the Period.
Period.minuteGet the minute component of the Period.
Examples
>>> p = pd.Period("2018-03-11 13:03:12.050000")
>>> p.hour
13
Period longer than a day
>>> p = pd.Period("2018-03-11", freq="M")
>>> p.hour
0
|
reference/api/pandas.Period.hour.html
|
pandas.DataFrame.reindex_like
|
`pandas.DataFrame.reindex_like`
Return an object with matching indices as other object.
Conform the object to the same index on all axes. Optional
filling logic, placing NaN in locations having no value
in the previous index. A new object is produced unless the
new index is equivalent to the current one and copy=False.
```
>>> df1 = pd.DataFrame([[24.3, 75.7, 'high'],
... [31, 87.8, 'high'],
... [22, 71.6, 'medium'],
... [35, 95, 'medium']],
... columns=['temp_celsius', 'temp_fahrenheit',
... 'windspeed'],
... index=pd.date_range(start='2014-02-12',
... end='2014-02-15', freq='D'))
```
|
DataFrame.reindex_like(other, method=None, copy=True, limit=None, tolerance=None)[source]#
Return an object with matching indices as other object.
Conform the object to the same index on all axes. Optional
filling logic, placing NaN in locations having no value
in the previous index. A new object is produced unless the
new index is equivalent to the current one and copy=False.
Parameters
otherObject of the same data typeIts row and column indices are used to define the new indices
of this object.
method{None, ‘backfill’/’bfill’, ‘pad’/’ffill’, ‘nearest’}Method to use for filling holes in reindexed DataFrame.
Please note: this is only applicable to DataFrames/Series with a
monotonically increasing/decreasing index.
None (default): don’t fill gaps
pad / ffill: propagate last valid observation forward to next
valid
backfill / bfill: use next valid observation to fill gap
nearest: use nearest valid observations to fill gap.
copybool, default TrueReturn a new object, even if the passed indexes are the same.
limitint, default NoneMaximum number of consecutive labels to fill for inexact matches.
toleranceoptionalMaximum distance between original and new labels for inexact
matches. The values of the index at the matching locations must
satisfy the equation abs(index[indexer] - target) <= tolerance.
Tolerance may be a scalar value, which applies the same tolerance
to all values, or list-like, which applies variable tolerance per
element. List-like includes list, tuple, array, Series, and must be
the same size as the index and its dtype must exactly match the
index’s type.
Returns
Series or DataFrameSame type as caller, but with changed indices on each axis.
See also
DataFrame.set_indexSet row labels.
DataFrame.reset_indexRemove row labels or move them to new columns.
DataFrame.reindexChange to new indices or expand indices.
Notes
Same as calling
.reindex(index=other.index, columns=other.columns,...).
Examples
>>> df1 = pd.DataFrame([[24.3, 75.7, 'high'],
... [31, 87.8, 'high'],
... [22, 71.6, 'medium'],
... [35, 95, 'medium']],
... columns=['temp_celsius', 'temp_fahrenheit',
... 'windspeed'],
... index=pd.date_range(start='2014-02-12',
... end='2014-02-15', freq='D'))
>>> df1
temp_celsius temp_fahrenheit windspeed
2014-02-12 24.3 75.7 high
2014-02-13 31.0 87.8 high
2014-02-14 22.0 71.6 medium
2014-02-15 35.0 95.0 medium
>>> df2 = pd.DataFrame([[28, 'low'],
... [30, 'low'],
... [35.1, 'medium']],
... columns=['temp_celsius', 'windspeed'],
... index=pd.DatetimeIndex(['2014-02-12', '2014-02-13',
... '2014-02-15']))
>>> df2
temp_celsius windspeed
2014-02-12 28.0 low
2014-02-13 30.0 low
2014-02-15 35.1 medium
>>> df2.reindex_like(df1)
temp_celsius temp_fahrenheit windspeed
2014-02-12 28.0 NaN low
2014-02-13 30.0 NaN low
2014-02-14 NaN NaN NaN
2014-02-15 35.1 NaN medium
|
reference/api/pandas.DataFrame.reindex_like.html
|
pandas.DatetimeIndex.microsecond
|
`pandas.DatetimeIndex.microsecond`
The microseconds of the datetime.
```
>>> datetime_series = pd.Series(
... pd.date_range("2000-01-01", periods=3, freq="us")
... )
>>> datetime_series
0 2000-01-01 00:00:00.000000
1 2000-01-01 00:00:00.000001
2 2000-01-01 00:00:00.000002
dtype: datetime64[ns]
>>> datetime_series.dt.microsecond
0 0
1 1
2 2
dtype: int64
```
|
property DatetimeIndex.microsecond[source]#
The microseconds of the datetime.
Examples
>>> datetime_series = pd.Series(
... pd.date_range("2000-01-01", periods=3, freq="us")
... )
>>> datetime_series
0 2000-01-01 00:00:00.000000
1 2000-01-01 00:00:00.000001
2 2000-01-01 00:00:00.000002
dtype: datetime64[ns]
>>> datetime_series.dt.microsecond
0 0
1 1
2 2
dtype: int64
|
reference/api/pandas.DatetimeIndex.microsecond.html
|
pandas.Series.index
|
`pandas.Series.index`
The index (axis labels) of the Series.
|
Series.index#
The index (axis labels) of the Series.
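As an illustrative sketch (example Series assumed):
>>> s = pd.Series([10, 20, 30], index=["x", "y", "z"])
>>> s.index
Index(['x', 'y', 'z'], dtype='object')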
|
reference/api/pandas.Series.index.html
|
pandas.errors.DtypeWarning
|
`pandas.errors.DtypeWarning`
Warning raised when reading different dtypes in a column from a file.
```
>>> df = pd.DataFrame({'a': (['1'] * 100000 + ['X'] * 100000 +
... ['1'] * 100000),
... 'b': ['b'] * 300000})
>>> df.to_csv('test.csv', index=False)
>>> df2 = pd.read_csv('test.csv')
... # DtypeWarning: Columns (0) have mixed types
```
|
exception pandas.errors.DtypeWarning[source]#
Warning raised when reading different dtypes in a column from a file.
Raised for a dtype incompatibility. This can happen whenever read_csv
or read_table encounter non-uniform dtypes in a column(s) of a given
CSV file.
See also
read_csv : Read CSV (comma-separated) file into a DataFrame.
read_table : Read general delimited file into a DataFrame.
Notes
This warning is issued when dealing with larger files because the dtype
checking happens per chunk read.
Despite the warning, the CSV file is read with mixed types in a single
column which will be an object type. See the examples below to better
understand this issue.
Examples
This example creates and reads a large CSV file with a column that contains
int and str.
>>> df = pd.DataFrame({'a': (['1'] * 100000 + ['X'] * 100000 +
... ['1'] * 100000),
... 'b': ['b'] * 300000})
>>> df.to_csv('test.csv', index=False)
>>> df2 = pd.read_csv('test.csv')
... # DtypeWarning: Columns (0) have mixed types
Note that df2 will contain both str and int for the
same input, ‘1’.
>>> df2.iloc[262140, 0]
'1'
>>> type(df2.iloc[262140, 0])
<class 'str'>
>>> df2.iloc[262150, 0]
1
>>> type(df2.iloc[262150, 0])
<class 'int'>
One way to solve this issue is using the dtype parameter in the
read_csv and read_table functions to make the conversion explicit:
>>> df2 = pd.read_csv('test.csv', sep=',', dtype={'a': str})
No warning was issued.
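Alternatively, since the warning stems from chunked dtype inference, reading
the file in a single pass with the low_memory option also avoids it; note the
column still ends up as object dtype with mixed values:
>>> df2 = pd.read_csv('test.csv', low_memory=False)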
|
reference/api/pandas.errors.DtypeWarning.html
|
pandas.tseries.offsets.FY5253Quarter.is_month_end
|
`pandas.tseries.offsets.FY5253Quarter.is_month_end`
Return boolean whether a timestamp occurs on the month end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
```
|
FY5253Quarter.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
|
reference/api/pandas.tseries.offsets.FY5253Quarter.is_month_end.html
|
pandas.tseries.offsets.Week.is_quarter_start
|
`pandas.tseries.offsets.Week.is_quarter_start`
Return boolean whether a timestamp occurs on the quarter start.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
```
|
Week.is_quarter_start()#
Return boolean whether a timestamp occurs on the quarter start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
|
reference/api/pandas.tseries.offsets.Week.is_quarter_start.html
|
Internals
|
Internals
This section will provide a look into some of pandas internals. It’s primarily
intended for developers of pandas itself.
In pandas there are a few objects implemented which can serve as valid
containers for the axis labels:
Index: the generic “ordered set” object, an ndarray of object dtype
assuming nothing about its contents. The labels must be hashable (and
likely immutable) and unique. Populates a dict of label to location in
Cython to do O(1) lookups.
Int64Index: a version of Index highly optimized for 64-bit integer
data, such as time stamps
Float64Index: a version of Index highly optimized for 64-bit float data
|
This section will provide a look into some of pandas internals. It’s primarily
intended for developers of pandas itself.
Indexing#
In pandas there are a few objects implemented which can serve as valid
containers for the axis labels:
Index: the generic “ordered set” object, an ndarray of object dtype
assuming nothing about its contents. The labels must be hashable (and
likely immutable) and unique. Populates a dict of label to location in
Cython to do O(1) lookups.
Int64Index: a version of Index highly optimized for 64-bit integer
data, such as time stamps
Float64Index: a version of Index highly optimized for 64-bit float data
MultiIndex: the standard hierarchical index object
DatetimeIndex: An Index object with Timestamp boxed elements (implemented using the underlying int64 values)
TimedeltaIndex: An Index object with Timedelta boxed elements (implemented using the underlying int64 values)
PeriodIndex: An Index object with Period elements
There are functions that make the creation of a regular index easy:
date_range: fixed frequency date range generated from a time rule or
DateOffset. An ndarray of Python datetime objects
period_range: fixed frequency date range generated from a time rule or
DateOffset. An ndarray of Period objects, representing timespans
The motivation for having an Index class in the first place was to enable
different implementations of indexing. This means that it’s possible for you,
the user, to implement a custom Index subclass that may be better suited to
a particular application than the ones provided in pandas.
From an internal implementation point of view, the relevant methods that an
Index must define are one or more of the following (depending on how
incompatible the new object internals are with the Index functions):
get_loc: returns an “indexer” (an integer, or in some cases a
slice object) for a label
slice_locs: returns the “range” to slice between two labels
get_indexer: Computes the indexing vector for reindexing / data
alignment purposes. See the source / docstrings for more on this
get_indexer_non_unique: Computes the indexing vector for reindexing / data
alignment purposes when the index is non-unique. See the source / docstrings
for more on this
reindex: Does any pre-conversion of the input index then calls
get_indexer
union, intersection: computes the union or intersection of two
Index objects
insert: Inserts a new label into an Index, yielding a new object
delete: Delete a label, yielding a new object
drop: Deletes a set of labels
take: Analogous to ndarray.take
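As an illustrative sketch, three of the methods above exercised on a plain
Index:
>>> idx = pd.Index(['a', 'b', 'c', 'd'])
>>> idx.get_loc('b')
1
>>> idx.slice_locs('b', 'c')
(1, 3)
>>> idx.get_indexer(['c', 'a', 'x'])
array([ 2,  0, -1])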
MultiIndex#
Internally, the MultiIndex consists of a few things: the levels, the
integer codes (until version 0.24 named labels), and the level names:
In [1]: index = pd.MultiIndex.from_product(
...: [range(3), ["one", "two"]], names=["first", "second"]
...: )
...:
In [2]: index
Out[2]:
MultiIndex([(0, 'one'),
(0, 'two'),
(1, 'one'),
(1, 'two'),
(2, 'one'),
(2, 'two')],
names=['first', 'second'])
In [3]: index.levels
Out[3]: FrozenList([[0, 1, 2], ['one', 'two']])
In [4]: index.codes
Out[4]: FrozenList([[0, 0, 1, 1, 2, 2], [0, 1, 0, 1, 0, 1]])
In [5]: index.names
Out[5]: FrozenList(['first', 'second'])
You can probably guess that the codes determine which unique element is
identified with that location at each layer of the index. It’s important to
note that sortedness is determined solely from the integer codes and does
not check (or care) whether the levels themselves are sorted. Fortunately, the
constructors from_tuples and from_arrays ensure that this is true, but
if you compute the levels and codes yourself, please be careful.
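A small sketch of that guarantee: from_arrays returns sorted levels, with the
original ordering carried entirely by the codes:
>>> mi = pd.MultiIndex.from_arrays([['b', 'a'], [2, 1]])
>>> mi.levels
FrozenList([['a', 'b'], [1, 2]])
>>> mi.codes
FrozenList([[1, 0], [1, 0]])
>>> mi.is_monotonic_increasing
False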
Values#
pandas extends NumPy’s type system with custom types, like Categorical or
datetimes with a timezone, so we have multiple notions of “values”. For 1-D
containers (Index classes and Series) we have the following convention:
cls._values is the “best possible” array. This could be an
ndarray or ExtensionArray.
So, for example, Series[category]._values is a Categorical.
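For illustration (keeping in mind that _values is internal, not public API):
>>> s = pd.Series(pd.Categorical(['a', 'b', 'a']))
>>> type(s._values)
<class 'pandas.core.arrays.categorical.Categorical'>
>>> type(s.to_numpy())
<class 'numpy.ndarray'>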
Subclassing pandas data structures#
This section has been moved to Subclassing pandas data structures.
|
development/internals.html
|
pandas.tseries.offsets.Second.n
|
pandas.tseries.offsets.Second.n
|
Second.n#
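Examples
A minimal example; n is the integer multiple of the offset:
>>> pd.offsets.Second(5).n
5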
|
reference/api/pandas.tseries.offsets.Second.n.html
|
pandas.read_stata
|
`pandas.read_stata`
Read Stata file into DataFrame.
Any valid string path is acceptable. The string could be a URL. Valid
URL schemes include http, ftp, s3, and file. For file URLs, a host is
expected. A local file could be: file://localhost/path/to/table.dta.
```
>>> df = pd.read_stata('animals.dta')
```
|
pandas.read_stata(filepath_or_buffer, *, convert_dates=True, convert_categoricals=True, index_col=None, convert_missing=False, preserve_dtypes=True, columns=None, order_categoricals=True, chunksize=None, iterator=False, compression='infer', storage_options=None)[source]#
Read Stata file into DataFrame.
Parameters
filepath_or_buffer : str, path object or file-like object
Any valid string path is acceptable. The string could be a URL. Valid
URL schemes include http, ftp, s3, and file. For file URLs, a host is
expected. A local file could be: file://localhost/path/to/table.dta.
If you want to pass in a path object, pandas accepts any os.PathLike.
By file-like object, we refer to objects with a read() method,
such as a file handle (e.g. via builtin open function)
or StringIO.
convert_dates : bool, default True
Convert date variables to DataFrame time values.
convert_categoricals : bool, default True
Read value labels and convert columns to Categorical/Factor variables.
index_col : str, optional
Column to set as index.
convert_missing : bool, default False
Flag indicating whether to convert missing values to their Stata
representations. If False, missing values are replaced with nan.
If True, columns containing missing values are returned with
object data types and missing values are represented by
StataMissingValue objects.
preserve_dtypes : bool, default True
Preserve Stata datatypes. If False, numeric data are upcast to pandas
default types for foreign data (float64 or int64).
columns : list or None
Columns to retain. Columns will be returned in the given order. None
returns all columns.
order_categoricals : bool, default True
Flag indicating whether converted categorical data are ordered.
chunksize : int, default None
Return StataReader object for iterations, returns chunks with
given number of lines.
iterator : bool, default False
Return StataReader object.
compression : str or dict, default ‘infer’
For on-the-fly decompression of on-disk data. If ‘infer’ and ‘filepath_or_buffer’ is
path-like, then detect compression from the following extensions: ‘.gz’,
‘.bz2’, ‘.zip’, ‘.xz’, ‘.zst’, ‘.tar’, ‘.tar.gz’, ‘.tar.xz’ or ‘.tar.bz2’
(otherwise no compression).
If using ‘zip’ or ‘tar’, the ZIP file must contain only one data file to be read in.
Set to None for no decompression.
Can also be a dict with key 'method' set
to one of {'zip', 'gzip', 'bz2', 'zstd', 'tar'} and other
key-value pairs are forwarded to
zipfile.ZipFile, gzip.GzipFile,
bz2.BZ2File, zstandard.ZstdDecompressor or
tarfile.TarFile, respectively.
As an example, the following could be passed for Zstandard decompression using a
custom compression dictionary:
compression={'method': 'zstd', 'dict_data': my_compression_dict}.
New in version 1.5.0: Added support for .tar files.
storage_options : dict, optional
Extra options that make sense for a particular storage connection, e.g.
host, port, username, password, etc. For HTTP(S) URLs the key-value pairs
are forwarded to urllib.request.Request as header options. For other
URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are
forwarded to fsspec.open. Please see fsspec and urllib for more
details, and for more examples on storage options refer here.
Returns
DataFrame or StataReader
See also
io.stata.StataReader : Low-level reader for Stata data files.
DataFrame.to_stata : Export Stata data files.
Notes
Categorical variables read through an iterator may not have the same
categories and dtype. This occurs when a variable stored in a DTA
file is associated to an incomplete set of value labels that only
label a strict subset of the values.
Examples
Creating a dummy stata for this example
>>> df = pd.DataFrame({'animal': ['falcon', 'parrot', 'falcon',
...                               'parrot'],
...                    'speed': [350, 18, 361, 15]})  # doctest: +SKIP
>>> df.to_stata('animals.dta')  # doctest: +SKIP
Read a Stata dta file:
>>> df = pd.read_stata('animals.dta')
Read a Stata dta file in 10,000 line chunks:
>>> values = np.random.randint(0, 10, size=(20_000, 1), dtype="uint8")  # doctest: +SKIP
>>> df = pd.DataFrame(values, columns=["i"])  # doctest: +SKIP
>>> df.to_stata('filename.dta')  # doctest: +SKIP
>>> itr = pd.read_stata('filename.dta', chunksize=10000)
>>> for chunk in itr:
... # Operate on a single chunk, e.g., chunk.mean()
... pass
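The columns and index_col parameters can restrict and index the result; a
hedged sketch reusing the animals.dta file created above:
>>> df = pd.read_stata('animals.dta', columns=['animal', 'speed'],
...                    index_col='animal')  # doctest: +SKIP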
|
reference/api/pandas.read_stata.html
|
pandas.tseries.offsets.FY5253Quarter.onOffset
|
pandas.tseries.offsets.FY5253Quarter.onOffset
|
FY5253Quarter.onOffset()#
|
reference/api/pandas.tseries.offsets.FY5253Quarter.onOffset.html
|
pandas.Series.nsmallest
|
`pandas.Series.nsmallest`
Return the smallest n elements.
```
>>> countries_population = {"Italy": 59000000, "France": 65000000,
... "Brunei": 434000, "Malta": 434000,
... "Maldives": 434000, "Iceland": 337000,
... "Nauru": 11300, "Tuvalu": 11300,
... "Anguilla": 11300, "Montserrat": 5200}
>>> s = pd.Series(countries_population)
>>> s
Italy 59000000
France 65000000
Brunei 434000
Malta 434000
Maldives 434000
Iceland 337000
Nauru 11300
Tuvalu 11300
Anguilla 11300
Montserrat 5200
dtype: int64
```
|
Series.nsmallest(n=5, keep='first')[source]#
Return the smallest n elements.
Parameters
n : int, default 5
Return this many ascending sorted values.
keep : {‘first’, ‘last’, ‘all’}, default ‘first’
When there are duplicate values that cannot all fit in a
Series of n elements:
first : return the first n occurrences in order
of appearance.
last : return the last n occurrences in reverse
order of appearance.
all : keep all occurrences. This can result in a Series of
size larger than n.
Returns
Series
The n smallest values in the Series, sorted in increasing order.
See also
Series.nlargest : Get the n largest elements.
Series.sort_values : Sort Series by values.
Series.head : Return the first n rows.
Notes
Faster than .sort_values().head(n) for small n relative to
the size of the Series object.
Examples
>>> countries_population = {"Italy": 59000000, "France": 65000000,
... "Brunei": 434000, "Malta": 434000,
... "Maldives": 434000, "Iceland": 337000,
... "Nauru": 11300, "Tuvalu": 11300,
... "Anguilla": 11300, "Montserrat": 5200}
>>> s = pd.Series(countries_population)
>>> s
Italy 59000000
France 65000000
Brunei 434000
Malta 434000
Maldives 434000
Iceland 337000
Nauru 11300
Tuvalu 11300
Anguilla 11300
Montserrat 5200
dtype: int64
The n smallest elements where n=5 by default.
>>> s.nsmallest()
Montserrat 5200
Nauru 11300
Tuvalu 11300
Anguilla 11300
Iceland 337000
dtype: int64
The n smallest elements where n=3. Default keep value is
‘first’ so Nauru and Tuvalu will be kept.
>>> s.nsmallest(3)
Montserrat 5200
Nauru 11300
Tuvalu 11300
dtype: int64
The n smallest elements where n=3 and keeping the last
duplicates. Anguilla and Tuvalu will be kept since they are the last
with value 11300 based on the index order.
>>> s.nsmallest(3, keep='last')
Montserrat 5200
Anguilla 11300
Tuvalu 11300
dtype: int64
The n smallest elements where n=3 with all duplicates kept. Note
that the returned Series has four elements due to the three duplicates.
>>> s.nsmallest(3, keep='all')
Montserrat 5200
Nauru 11300
Tuvalu 11300
Anguilla 11300
dtype: int64
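For comparison with the Notes above, a stable sort gives the same three rows
as keep='first', since ties keep their order of appearance:
>>> s.sort_values(kind='stable').head(3)
Montserrat 5200
Nauru 11300
Tuvalu 11300
dtype: int64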
|
reference/api/pandas.Series.nsmallest.html
|
pandas.SparseDtype
|
`pandas.SparseDtype`
Dtype for data stored in SparseArray.
|
class pandas.SparseDtype(dtype=<class 'numpy.float64'>, fill_value=None)[source]#
Dtype for data stored in SparseArray.
This dtype implements the pandas ExtensionDtype interface.
Parameters
dtype : str, ExtensionDtype, numpy.dtype, type, default numpy.float64
The dtype of the underlying array storing the non-fill value values.
fill_value : scalar, optional
The scalar value not stored in the SparseArray. By default, this
depends on dtype:

dtype         na_value
float         np.nan
int           0
bool          False
datetime64    pd.NaT
timedelta64   pd.NaT
The default value may be overridden by specifying a fill_value.
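Examples
A minimal sketch; the density check via the .sparse accessor is illustrative:
>>> dtype = pd.SparseDtype(np.int64, fill_value=0)
>>> s = pd.Series([0, 0, 1, 2], dtype=dtype)
>>> s.dtype
Sparse[int64, 0]
>>> s.sparse.density
0.5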
Attributes
None
Methods
None
|
reference/api/pandas.SparseDtype.html
|
pandas.tseries.offsets.QuarterEnd.is_year_start
|
`pandas.tseries.offsets.QuarterEnd.is_year_start`
Return boolean whether a timestamp occurs on the year start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
```
|
QuarterEnd.is_year_start()#
Return boolean whether a timestamp occurs on the year start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
|
reference/api/pandas.tseries.offsets.QuarterEnd.is_year_start.html
|
pandas.Series.cat.as_ordered
|
`pandas.Series.cat.as_ordered`
Set the Categorical to be ordered.
Whether or not to set the ordered attribute in-place or return
a copy of this categorical with ordered set to True.
|
Series.cat.as_ordered(*args, **kwargs)[source]#
Set the Categorical to be ordered.
Parameters
inplace : bool, default False
Whether or not to set the ordered attribute in-place or return
a copy of this categorical with ordered set to True.
Deprecated since version 1.5.0.
Returns
Categorical or None
Ordered Categorical or None if inplace=True.
|
reference/api/pandas.Series.cat.as_ordered.html
|
pandas.Timestamp.ctime
|
`pandas.Timestamp.ctime`
Return ctime() style string.
|
Timestamp.ctime()#
Return ctime() style string.
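Examples
A minimal example:
>>> ts = pd.Timestamp('2022-01-01 10:00:00')
>>> ts.ctime()
'Sat Jan  1 10:00:00 2022'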
|
reference/api/pandas.Timestamp.ctime.html
|
pandas.tseries.offsets.YearEnd.rollforward
|
`pandas.tseries.offsets.YearEnd.rollforward`
Roll provided date forward to next offset only if not on offset.
|
YearEnd.rollforward()#
Roll provided date forward to next offset only if not on offset.
Returns
Timestamp
Rolled timestamp if not on offset, otherwise unchanged timestamp.
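Examples
A minimal example:
>>> ts = pd.Timestamp('2022-06-01')
>>> pd.offsets.YearEnd().rollforward(ts)
Timestamp('2022-12-31 00:00:00')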
|
reference/api/pandas.tseries.offsets.YearEnd.rollforward.html
|
pandas.MultiIndex.from_frame
|
`pandas.MultiIndex.from_frame`
Make a MultiIndex from a DataFrame.
```
>>> df = pd.DataFrame([['HI', 'Temp'], ['HI', 'Precip'],
... ['NJ', 'Temp'], ['NJ', 'Precip']],
... columns=['a', 'b'])
>>> df
a b
0 HI Temp
1 HI Precip
2 NJ Temp
3 NJ Precip
```
|
classmethod MultiIndex.from_frame(df, sortorder=None, names=None)[source]#
Make a MultiIndex from a DataFrame.
Parameters
df : DataFrame
DataFrame to be converted to MultiIndex.
sortorder : int, optional
Level of sortedness (must be lexicographically sorted by that
level).
names : list-like, optional
If no names are provided, use the column names, or tuple of column
names if the columns is a MultiIndex. If a sequence, overwrite
names with the given sequence.
Returns
MultiIndex
The MultiIndex representation of the given DataFrame.
See also
MultiIndex.from_arrays : Convert list of arrays to MultiIndex.
MultiIndex.from_tuples : Convert list of tuples to MultiIndex.
MultiIndex.from_product : Make a MultiIndex from cartesian product of iterables.
Examples
>>> df = pd.DataFrame([['HI', 'Temp'], ['HI', 'Precip'],
... ['NJ', 'Temp'], ['NJ', 'Precip']],
... columns=['a', 'b'])
>>> df
a b
0 HI Temp
1 HI Precip
2 NJ Temp
3 NJ Precip
>>> pd.MultiIndex.from_frame(df)
MultiIndex([('HI', 'Temp'),
('HI', 'Precip'),
('NJ', 'Temp'),
('NJ', 'Precip')],
names=['a', 'b'])
Using explicit names, instead of the column names
>>> pd.MultiIndex.from_frame(df, names=['state', 'observation'])
MultiIndex([('HI', 'Temp'),
('HI', 'Precip'),
('NJ', 'Temp'),
('NJ', 'Precip')],
names=['state', 'observation'])
|
reference/api/pandas.MultiIndex.from_frame.html
|
pandas.DataFrame.iterrows
|
`pandas.DataFrame.iterrows`
Iterate over DataFrame rows as (index, Series) pairs.
```
>>> df = pd.DataFrame([[1, 1.5]], columns=['int', 'float'])
>>> row = next(df.iterrows())[1]
>>> row
int 1.0
float 1.5
Name: 0, dtype: float64
>>> print(row['int'].dtype)
float64
>>> print(df['int'].dtype)
int64
```
|
DataFrame.iterrows()[source]#
Iterate over DataFrame rows as (index, Series) pairs.
Yields
index : label or tuple of label
The index of the row. A tuple for a MultiIndex.
data : Series
The data of the row as a Series.
See also
DataFrame.itertuples : Iterate over DataFrame rows as namedtuples of the values.
DataFrame.items : Iterate over (column name, Series) pairs.
Notes
Because iterrows returns a Series for each row,
it does not preserve dtypes across the rows (dtypes are
preserved across columns for DataFrames). For example,
>>> df = pd.DataFrame([[1, 1.5]], columns=['int', 'float'])
>>> row = next(df.iterrows())[1]
>>> row
int 1.0
float 1.5
Name: 0, dtype: float64
>>> print(row['int'].dtype)
float64
>>> print(df['int'].dtype)
int64
To preserve dtypes while iterating over the rows, it is better
to use itertuples() which returns namedtuples of the values
and which is generally faster than iterrows.
You should never modify something you are iterating over.
This is not guaranteed to work in all cases. Depending on the
data types, the iterator returns a copy and not a view, and writing
to it will have no effect.
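A short sketch of the itertuples alternative, reusing df from above; the
integer value is not upcast to float:
>>> next(df.itertuples())
Pandas(Index=0, int=1, float=1.5)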
|
reference/api/pandas.DataFrame.iterrows.html
|
pandas.DataFrame.gt
|
`pandas.DataFrame.gt`
Get Greater than of dataframe and other, element-wise (binary operator gt).
Among flexible wrappers (eq, ne, le, lt, ge, gt) to comparison
operators.
```
>>> df = pd.DataFrame({'cost': [250, 150, 100],
... 'revenue': [100, 250, 300]},
... index=['A', 'B', 'C'])
>>> df
cost revenue
A 250 100
B 150 250
C 100 300
```
|
DataFrame.gt(other, axis='columns', level=None)[source]#
Get Greater than of dataframe and other, element-wise (binary operator gt).
Among flexible wrappers (eq, ne, le, lt, ge, gt) to comparison
operators.
Equivalent to ==, !=, <=, <, >=, > with support to choose axis
(rows or columns) and level for comparison.
Parameters
other : scalar, sequence, Series, or DataFrame
Any single or multiple element data structure, or list-like object.
axis : {0 or ‘index’, 1 or ‘columns’}, default ‘columns’
Whether to compare by the index (0 or ‘index’) or columns
(1 or ‘columns’).
level : int or label
Broadcast across a level, matching Index values on the passed
MultiIndex level.
Returns
DataFrame of bool
Result of the comparison.
See also
DataFrame.eq : Compare DataFrames for equality elementwise.
DataFrame.ne : Compare DataFrames for inequality elementwise.
DataFrame.le : Compare DataFrames for less than inequality or equality elementwise.
DataFrame.lt : Compare DataFrames for strictly less than inequality elementwise.
DataFrame.ge : Compare DataFrames for greater than inequality or equality elementwise.
DataFrame.gt : Compare DataFrames for strictly greater than inequality elementwise.
Notes
Mismatched indices will be unioned together.
NaN values are considered different (i.e. NaN != NaN).
Examples
>>> df = pd.DataFrame({'cost': [250, 150, 100],
... 'revenue': [100, 250, 300]},
... index=['A', 'B', 'C'])
>>> df
cost revenue
A 250 100
B 150 250
C 100 300
Comparison with a scalar, using either the operator or method:
>>> df == 100
cost revenue
A False True
B False False
C True False
>>> df.eq(100)
cost revenue
A False True
B False False
C True False
When other is a Series, the columns of a DataFrame are aligned
with the index of other and broadcast:
>>> df != pd.Series([100, 250], index=["cost", "revenue"])
cost revenue
A True True
B True False
C False True
Use the method to control the broadcast axis:
>>> df.ne(pd.Series([100, 300], index=["A", "D"]), axis='index')
cost revenue
A True False
B True True
C True True
D True True
When comparing to an arbitrary sequence, the number of columns must
match the number of elements in other:
>>> df == [250, 100]
cost revenue
A True True
B False False
C False False
Use the method to control the axis:
>>> df.eq([250, 250, 100], axis='index')
cost revenue
A True False
B False True
C True False
Compare to a DataFrame of different shape.
>>> other = pd.DataFrame({'revenue': [300, 250, 100, 150]},
... index=['A', 'B', 'C', 'D'])
>>> other
revenue
A 300
B 250
C 100
D 150
>>> df.gt(other)
cost revenue
A False False
B False False
C False True
D False False
Compare to a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'cost': [250, 150, 100, 150, 300, 220],
... 'revenue': [100, 250, 300, 200, 175, 225]},
... index=[['Q1', 'Q1', 'Q1', 'Q2', 'Q2', 'Q2'],
... ['A', 'B', 'C', 'A', 'B', 'C']])
>>> df_multindex
cost revenue
Q1 A 250 100
B 150 250
C 100 300
Q2 A 150 200
B 300 175
C 220 225
>>> df.le(df_multindex, level=1)
cost revenue
Q1 A True True
B True True
C True True
Q2 A False True
B True False
C True False
|
reference/api/pandas.DataFrame.gt.html
|
pandas.Index.nbytes
|
`pandas.Index.nbytes`
Return the number of bytes in the underlying data.
|
property Index.nbytes[source]#
Return the number of bytes in the underlying data.
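Examples
A minimal example; the exact value assumes 8-byte int64 data:
>>> idx = pd.Index([1, 2, 3])
>>> idx.nbytes
24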
|
reference/api/pandas.Index.nbytes.html
|
pandas.io.formats.style.Styler.format
|
`pandas.io.formats.style.Styler.format`
Format the text display value of cells.
Object to define how values are displayed. See notes.
```
>>> df = pd.DataFrame([[np.nan, 1.0, 'A'], [2.0, np.nan, 3.0]])
>>> df.style.format(na_rep='MISS', precision=3)
0 1 2
0 MISS 1.000 A
1 2.000 MISS 3.000
```
|
Styler.format(formatter=None, subset=None, na_rep=None, precision=None, decimal='.', thousands=None, escape=None, hyperlinks=None)[source]#
Format the text display value of cells.
Parameters
formatter : str, callable, dict or None
Object to define how values are displayed. See notes.
subset : label, array-like, IndexSlice, optional
A valid 2d input to DataFrame.loc[<subset>], or, in the case of a 1d input
or single key, to DataFrame.loc[:, <subset>] where the columns are
prioritised; this limits the data to which the formatting is applied.
na_rep : str, optional
Representation for missing values.
If na_rep is None, no special formatting is applied.
New in version 1.0.0.
precision : int, optional
Floating point precision to use for display purposes, if not determined by
the specified formatter.
New in version 1.3.0.
decimal : str, default “.”
Character used as decimal separator for floats, complex and integers.
New in version 1.3.0.
thousands : str, optional, default None
Character used as thousands separator for floats, complex and integers.
New in version 1.3.0.
escape : str, optional
Use ‘html’ to replace the characters &, <, >, ', and "
in cell display string with HTML-safe sequences.
Use ‘latex’ to replace the characters &, %, $, #, _,
{, }, ~, ^, and \ in the cell display string with
LaTeX-safe sequences.
Escaping is done before formatter.
New in version 1.3.0.
hyperlinks : {“html”, “latex”}, optional
Convert string patterns containing https://, http://, ftp:// or www. to
HTML <a> tags as clickable URL hyperlinks if “html”, or LaTeX href
commands if “latex”.
New in version 1.4.0.
Returns
self : Styler
See also
Styler.format_index : Format the text display value of index labels.
Notes
This method assigns a formatting function, formatter, to each cell in the
DataFrame. If formatter is None, then the default formatter is used.
If a callable then that function should take a data value as input and return
a displayable representation, such as a string. If formatter is
given as a string this is assumed to be a valid Python format specification
and is wrapped to a callable as string.format(x). If a dict is given,
keys should correspond to column names, and values should be string or
callable, as above.
The default formatter currently expresses floats and complex numbers with the
pandas display precision unless using the precision argument here. The
default formatter does not adjust the representation of missing values unless
the na_rep argument is used.
The subset argument defines which region to apply the formatting function
to. If the formatter argument is given in dict form but does not include
all columns within the subset then these columns will have the default formatter
applied. Any columns in the formatter dict excluded from the subset will
be ignored.
When using a formatter string the dtypes must be compatible, otherwise a
ValueError will be raised.
When instantiating a Styler, default formatting can be applied by setting the
pandas.options:
styler.format.formatter: default None.
styler.format.na_rep: default None.
styler.format.precision: default 6.
styler.format.decimal: default “.”.
styler.format.thousands: default None.
styler.format.escape: default None.
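As a hedged sketch, these options can be set temporarily through
option_context before rendering:
>>> df = pd.DataFrame([[np.nan, 1.23456]])
>>> with pd.option_context('styler.format.precision', 2,
...                        'styler.format.na_rep', 'MISS'):
...     html = df.style.to_html()  # doctest: +SKIP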
Warning
Styler.format is ignored when using the output format Styler.to_excel,
since Excel and Python have inherently different formatting structures.
However, it is possible to use the number-format pseudo CSS attribute
to force Excel permissible formatting. See examples.
Examples
Using na_rep and precision with the default formatter
>>> df = pd.DataFrame([[np.nan, 1.0, 'A'], [2.0, np.nan, 3.0]])
>>> df.style.format(na_rep='MISS', precision=3)
0 1 2
0 MISS 1.000 A
1 2.000 MISS 3.000
Using a formatter specification on consistent column dtypes
>>> df.style.format('{:.2f}', na_rep='MISS', subset=[0,1])
0 1 2
0 MISS 1.00 A
1 2.00 MISS 3.000000
Using the default formatter for unspecified columns
>>> df.style.format({0: '{:.2f}', 1: '£ {:.1f}'}, na_rep='MISS', precision=1)
...
0 1 2
0 MISS £ 1.0 A
1 2.00 MISS 3.0
Multiple na_rep or precision specifications under the default
formatter.
>>> df.style.format(na_rep='MISS', precision=1, subset=[0])
... .format(na_rep='PASS', precision=2, subset=[1, 2])
0 1 2
0 MISS 1.00 A
1 2.0 PASS 3.00
Using a callable formatter function.
>>> func = lambda s: 'STRING' if isinstance(s, str) else 'FLOAT'
>>> df.style.format({0: '{:.1f}', 2: func}, precision=4, na_rep='MISS')
...
0 1 2
0 MISS 1.0000 STRING
1 2.0 MISS FLOAT
Using a formatter with HTML escape and na_rep.
>>> df = pd.DataFrame([['<div></div>', '"A&B"', None]])
>>> s = df.style.format(
... '<a href="a.com/{0}">{0}</a>', escape="html", na_rep="NA"
... )
>>> s.to_html()
...
<td .. ><a href="a.com/<div></div>"><div></div></a></td>
<td .. ><a href="a.com/"A&B"">"A&B"</a></td>
<td .. >NA</td>
...
Using a formatter with LaTeX escape.
>>> df = pd.DataFrame([["123"], ["~ ^"], ["$%#"]])
>>> df.style.format("\\textbf{{{}}}", escape="latex").to_latex()
...
\begin{tabular}{ll}
{} & {0} \\
0 & \textbf{123} \\
1 & \textbf{\textasciitilde \space \textasciicircum } \\
2 & \textbf{\$\%\#} \\
\end{tabular}
Pandas defines a number-format pseudo CSS attribute instead of the .format
method to create to_excel permissible formatting. Note that semi-colons are
CSS protected characters but used as separators in Excel’s format string.
Replace semi-colons with the section separator character (ASCII-245) when
defining the formatting here.
>>> df = pd.DataFrame({"A": [1, 0, -1]})
>>> pseudo_css = "number-format: 0§[Red](0)§-§@;"
>>> df.style.applymap(lambda v: pseudo_css).to_excel("formatted_file.xlsx")
...
|
reference/api/pandas.io.formats.style.Styler.format.html
|
pandas.Series.dt.minute
|
`pandas.Series.dt.minute`
The minutes of the datetime.
```
>>> datetime_series = pd.Series(
... pd.date_range("2000-01-01", periods=3, freq="T")
... )
>>> datetime_series
0 2000-01-01 00:00:00
1 2000-01-01 00:01:00
2 2000-01-01 00:02:00
dtype: datetime64[ns]
>>> datetime_series.dt.minute
0 0
1 1
2 2
dtype: int64
```
|
Series.dt.minute[source]#
The minutes of the datetime.
Examples
>>> datetime_series = pd.Series(
... pd.date_range("2000-01-01", periods=3, freq="T")
... )
>>> datetime_series
0 2000-01-01 00:00:00
1 2000-01-01 00:01:00
2 2000-01-01 00:02:00
dtype: datetime64[ns]
>>> datetime_series.dt.minute
0 0
1 1
2 2
dtype: int64
|
reference/api/pandas.Series.dt.minute.html
|
pandas.DataFrame.first
|
`pandas.DataFrame.first`
Select initial periods of time series data based on a date offset.
When having a DataFrame with dates as index, this function can
select the first few rows based on a date offset.
```
>>> i = pd.date_range('2018-04-09', periods=4, freq='2D')
>>> ts = pd.DataFrame({'A': [1, 2, 3, 4]}, index=i)
>>> ts
A
2018-04-09 1
2018-04-11 2
2018-04-13 3
2018-04-15 4
```
|
DataFrame.first(offset)[source]#
Select initial periods of time series data based on a date offset.
When having a DataFrame with dates as index, this function can
select the first few rows based on a date offset.
Parameters
offset : str, DateOffset or dateutil.relativedelta
The offset length of the data that will be selected. For instance,
‘1M’ will display all the rows having their index within the first month.
Returns
Series or DataFrame
A subset of the caller.
Raises
TypeError
If the index is not a DatetimeIndex
See also
last : Select final periods of time series based on a date offset.
at_time : Select values at a particular time of the day.
between_time : Select values between particular times of the day.
Examples
>>> i = pd.date_range('2018-04-09', periods=4, freq='2D')
>>> ts = pd.DataFrame({'A': [1, 2, 3, 4]}, index=i)
>>> ts
A
2018-04-09 1
2018-04-11 2
2018-04-13 3
2018-04-15 4
Get the rows for the first 3 days:
>>> ts.first('3D')
A
2018-04-09 1
2018-04-11 2
Notice that the data for the first 3 calendar days were returned, not the
first 3 days observed in the dataset, and therefore data for 2018-04-13 was
not returned.
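Similarly, an anchored offset such as ‘1M’ should select everything within the
first calendar month (a hedged sketch; here all four rows fall in April):
>>> ts.first('1M')  # doctest: +SKIP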
|
reference/api/pandas.DataFrame.first.html
|