title | diff | body | url | created_at | closed_at | merged_at | updated_at
---|---|---|---|---|---|---|---|
clarified the documentation for DF.drop_duplicates | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 78c9f2aa96472..ade05ab27093e 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -4625,7 +4625,8 @@ def dropna(self, axis=0, how='any', thresh=None, subset=None,
def drop_duplicates(self, subset=None, keep='first', inplace=False):
"""
Return DataFrame with duplicate rows removed, optionally only
- considering certain columns.
+ considering certain columns. Indexes, including time indexes,
+ are ignored.
Parameters
----------
| I hit an issue with a time-series index: I wanted to keep rows whose column values were duplicated but whose time values differed, and drop only rows where both the time index and the column values matched. This documentation change would have saved me a lot of time. A short sketch of this behavior follows the checklist below.
- [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
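A minimal sketch of the behavior the docstring change describes (hypothetical data, not part of the PR): ``drop_duplicates`` compares column values only, so rows that differ only in their time index still count as duplicates.

```python
# Hypothetical illustration: drop_duplicates ignores the (time) index.
import pandas as pd

idx = pd.to_datetime(["2019-01-01 00:00", "2019-01-01 00:05"])
df = pd.DataFrame({"a": [1, 1]}, index=idx)

# Identical column values: the second row is dropped even though its
# timestamp differs from the first.
print(df.drop_duplicates())

# To treat the time index as significant, move it into a column first.
print(df.reset_index().drop_duplicates().set_index("index"))
```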
| https://api.github.com/repos/pandas-dev/pandas/pulls/25056 | 2019-01-31T17:14:09Z | 2019-02-01T18:24:37Z | 2019-02-01T18:24:37Z | 2019-02-02T14:50:36Z |
Clarification of docstring for value_counts | diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index 58e1b2370c7c8..b23a0f10e9e2b 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -989,6 +989,36 @@ a single date rather than the entire array.
os.remove('tmp.csv')
+
+.. _io.csv.mixed_timezones:
+
+Parsing a CSV with mixed Timezones
+++++++++++++++++++++++++++++++++++
+
+Pandas cannot natively represent a column or index with mixed timezones. If your CSV
+file contains columns with a mixture of timezones, the default result will be
+an object-dtype column with strings, even with ``parse_dates``.
+
+
+.. ipython:: python
+
+ content = """\
+ a
+ 2000-01-01T00:00:00+05:00
+ 2000-01-01T00:00:00+06:00"""
+ df = pd.read_csv(StringIO(content), parse_dates=['a'])
+ df['a']
+
+To parse the mixed-timezone values as a datetime column, pass a partially-applied
+:func:`to_datetime` with ``utc=True`` as the ``date_parser``.
+
+.. ipython:: python
+
+ df = pd.read_csv(StringIO(content), parse_dates=['a'],
+ date_parser=lambda col: pd.to_datetime(col, utc=True))
+ df['a']
+
+
.. _io.dayfirst:
diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index fc963fce37a5b..a49ea2cf493a6 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -6,7 +6,8 @@ What's New in 0.24.0 (January 25, 2019)
.. warning::
The 0.24.x series of releases will be the last to support Python 2. Future feature
- releases will support Python 3 only. See :ref:`install.dropping-27` for more.
+ releases will support Python 3 only. See :ref:`install.dropping-27` for more
+ details.
{{ header }}
@@ -244,7 +245,7 @@ the new extension arrays that back interval and period data.
Joining with two multi-indexes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-:func:`DataFrame.merge` and :func:`DataFrame.join` can now be used to join multi-indexed ``Dataframe`` instances on the overlaping index levels (:issue:`6360`)
+:func:`DataFrame.merge` and :func:`DataFrame.join` can now be used to join multi-indexed ``Dataframe`` instances on the overlapping index levels (:issue:`6360`)
See the :ref:`Merge, join, and concatenate
<merging.Join_with_two_multi_indexes>` documentation section.
@@ -647,6 +648,52 @@ that the dates have been converted to UTC
pd.to_datetime(["2015-11-18 15:30:00+05:30",
"2015-11-18 16:30:00+06:30"], utc=True)
+
+.. _whatsnew_0240.api_breaking.read_csv_mixed_tz:
+
+Parsing mixed-timezones with :func:`read_csv`
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+:func:`read_csv` no longer silently converts mixed-timezone columns to UTC (:issue:`24987`).
+
+*Previous Behavior*
+
+.. code-block:: python
+
+ >>> import io
+ >>> content = """\
+ ... a
+ ... 2000-01-01T00:00:00+05:00
+ ... 2000-01-01T00:00:00+06:00"""
+ >>> df = pd.read_csv(io.StringIO(content), parse_dates=['a'])
+ >>> df.a
+ 0 1999-12-31 19:00:00
+ 1 1999-12-31 18:00:00
+ Name: a, dtype: datetime64[ns]
+
+*New Behavior*
+
+.. ipython:: python
+
+ import io
+ content = """\
+ a
+ 2000-01-01T00:00:00+05:00
+ 2000-01-01T00:00:00+06:00"""
+ df = pd.read_csv(io.StringIO(content), parse_dates=['a'])
+ df.a
+
+As can be seen, the ``dtype`` is object; each value in the column is a string.
+To convert the strings to an array of datetimes, pass a partially-applied
+:func:`to_datetime` with ``utc=True`` as the ``date_parser`` argument.
+
+.. ipython:: python
+
+ df = pd.read_csv(io.StringIO(content), parse_dates=['a'],
+ date_parser=lambda col: pd.to_datetime(col, utc=True))
+ df.a
+
+See :ref:`whatsnew_0240.api.timezone_offset_parsing` for more.
+
.. _whatsnew_0240.api_breaking.period_end_time:
Time values in ``dt.end_time`` and ``to_timestamp(how='end')``
diff --git a/doc/source/whatsnew/v0.24.1.rst b/doc/source/whatsnew/v0.24.1.rst
index ee4b7ab62b31a..047404e93914b 100644
--- a/doc/source/whatsnew/v0.24.1.rst
+++ b/doc/source/whatsnew/v0.24.1.rst
@@ -15,6 +15,15 @@ Whats New in 0.24.1 (February XX, 2019)
These are the changes in pandas 0.24.1. See :ref:`release` for a full changelog
including other versions of pandas.
+.. _whatsnew_0241.regressions:
+
+Fixed Regressions
+^^^^^^^^^^^^^^^^^
+
+- Fixed regression in :meth:`DataFrame.to_dict` with ``records`` orient raising an ``AttributeError`` when the ``DataFrame`` contained more than 255 columns (:issue:`24939`)
+- Fixed regression in :meth:`DataFrame.to_dict` with ``records`` orient converting integer column names to strings prepended with an underscore (:issue:`24940`)
+- Fixed regression in :class:`Index.intersection` incorrectly sorting the values by default (:issue:`24959`).
+- Fixed regression in :func:`merge` when merging an empty ``DataFrame`` with multiple timezone-aware columns on one of the timezone-aware columns (:issue:`25014`).
.. _whatsnew_0241.enhancements:
@@ -58,11 +67,19 @@ Bug Fixes
-
**Timedelta**
-
+- Bug in :func:`to_timedelta` with `box=False` incorrectly returning a ``datetime64`` object instead of a ``timedelta64`` object (:issue:`24961`)
-
-
-
+**Reshaping**
+
+- Bug in :meth:`DataFrame.groupby` with :class:`Grouper` when there is a time change (DST) and grouping frequency is ``'1d'`` (:issue:`24972`)
+
+**Visualization**
+
+- Fixed the warning for implicitly registered matplotlib converters not showing. See :ref:`whatsnew_0211.converters` for more (:issue:`24963`).
+
**Other**
diff --git a/pandas/core/arrays/numpy_.py b/pandas/core/arrays/numpy_.py
index 47517782e2bbf..791ff44303e96 100644
--- a/pandas/core/arrays/numpy_.py
+++ b/pandas/core/arrays/numpy_.py
@@ -222,7 +222,7 @@ def __getitem__(self, item):
item = item._ndarray
result = self._ndarray[item]
- if not lib.is_scalar(result):
+ if not lib.is_scalar(item):
result = type(self)(result)
return result
diff --git a/pandas/core/base.py b/pandas/core/base.py
index c02ba88ea7fda..7b3152595e4b2 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -1234,7 +1234,7 @@ def value_counts(self, normalize=False, sort=True, ascending=False,
If True then the object returned will contain the relative
frequencies of the unique values.
sort : boolean, default True
- Sort by values.
+ Sort by frequencies.
ascending : boolean, default False
Sort in ascending order.
bins : integer, optional
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index b4f79bda25517..28c6f3c23a3ce 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -847,7 +847,7 @@ def itertuples(self, index=True, name="Pandas"):
----------
index : bool, default True
If True, return the index as the first element of the tuple.
- name : str, default "Pandas"
+ name : str or None, default "Pandas"
The name of the returned namedtuples or None to return regular
tuples.
@@ -1290,23 +1290,26 @@ def to_dict(self, orient='dict', into=dict):
('columns', self.columns.tolist()),
('data', [
list(map(com.maybe_box_datetimelike, t))
- for t in self.itertuples(index=False)]
- )))
+ for t in self.itertuples(index=False, name=None)
+ ])))
elif orient.lower().startswith('s'):
return into_c((k, com.maybe_box_datetimelike(v))
for k, v in compat.iteritems(self))
elif orient.lower().startswith('r'):
+ columns = self.columns.tolist()
+ rows = (dict(zip(columns, row))
+ for row in self.itertuples(index=False, name=None))
return [
into_c((k, com.maybe_box_datetimelike(v))
- for k, v in compat.iteritems(row._asdict()))
- for row in self.itertuples(index=False)]
+ for k, v in compat.iteritems(row))
+ for row in rows]
elif orient.lower().startswith('i'):
if not self.index.is_unique:
raise ValueError(
"DataFrame index must be unique for orient='index'."
)
return into_c((t[0], dict(zip(self.columns, t[1:])))
- for t in self.itertuples())
+ for t in self.itertuples(name=None))
else:
raise ValueError("orient '{o}' not understood".format(o=orient))
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 767da81c5c43a..3d176012df22b 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -2333,7 +2333,7 @@ def union(self, other, sort=True):
def _wrap_setop_result(self, other, result):
return self._constructor(result, name=get_op_result_name(self, other))
- def intersection(self, other, sort=True):
+ def intersection(self, other, sort=False):
"""
Form the intersection of two Index objects.
@@ -2342,11 +2342,15 @@ def intersection(self, other, sort=True):
Parameters
----------
other : Index or array-like
- sort : bool, default True
+ sort : bool, default False
Sort the resulting index if possible
.. versionadded:: 0.24.0
+ .. versionchanged:: 0.24.1
+
+ Changed the default from ``True`` to ``False``.
+
Returns
-------
intersection : Index
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index cc373c06efcc9..ef941ab87ba12 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -594,7 +594,7 @@ def _wrap_setop_result(self, other, result):
name = get_op_result_name(self, other)
return self._shallow_copy(result, name=name, freq=None, tz=self.tz)
- def intersection(self, other, sort=True):
+ def intersection(self, other, sort=False):
"""
Specialized intersection for DatetimeIndex objects. May be much faster
than Index.intersection
@@ -602,6 +602,14 @@ def intersection(self, other, sort=True):
Parameters
----------
other : DatetimeIndex or array-like
+ sort : bool, default False
+ Sort the resulting index if possible.
+
+ .. versionadded:: 0.24.0
+
+ .. versionchanged:: 0.24.1
+
+ Changed the default from ``True`` to ``False``.
Returns
-------
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index 0210560aaa21f..736de94991181 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -1093,8 +1093,8 @@ def equals(self, other):
def overlaps(self, other):
return self._data.overlaps(other)
- def _setop(op_name):
- def func(self, other, sort=True):
+ def _setop(op_name, sort=True):
+ def func(self, other, sort=sort):
other = self._as_like_interval_index(other)
# GH 19016: ensure set op will not return a prohibited dtype
@@ -1128,7 +1128,7 @@ def is_all_dates(self):
return False
union = _setop('union')
- intersection = _setop('intersection')
+ intersection = _setop('intersection', sort=False)
difference = _setop('difference')
symmetric_difference = _setop('symmetric_difference')
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index e4d01a40bd181..16af3fe8eef26 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -2910,7 +2910,7 @@ def union(self, other, sort=True):
return MultiIndex.from_arrays(lzip(*uniq_tuples), sortorder=0,
names=result_names)
- def intersection(self, other, sort=True):
+ def intersection(self, other, sort=False):
"""
Form the intersection of two MultiIndex objects.
@@ -2922,6 +2922,10 @@ def intersection(self, other, sort=True):
.. versionadded:: 0.24.0
+ .. versionchanged:: 0.24.1
+
+ Changed the default from ``True`` to ``False``.
+
Returns
-------
Index
diff --git a/pandas/core/indexes/range.py b/pandas/core/indexes/range.py
index ebf5b279563cf..e17a6a682af40 100644
--- a/pandas/core/indexes/range.py
+++ b/pandas/core/indexes/range.py
@@ -343,7 +343,7 @@ def equals(self, other):
return super(RangeIndex, self).equals(other)
- def intersection(self, other, sort=True):
+ def intersection(self, other, sort=False):
"""
Form the intersection of two Index objects.
@@ -355,6 +355,10 @@ def intersection(self, other, sort=True):
.. versionadded:: 0.24.0
+ .. versionchanged:: 0.24.1
+
+ Changed the default from ``True`` to ``False``.
+
Returns
-------
intersection : Index
diff --git a/pandas/core/internals/concat.py b/pandas/core/internals/concat.py
index 4a16707a376e9..640587b7f9f31 100644
--- a/pandas/core/internals/concat.py
+++ b/pandas/core/internals/concat.py
@@ -183,7 +183,7 @@ def get_reindexed_values(self, empty_dtype, upcasted_na):
is_datetime64tz_dtype(empty_dtype)):
if self.block is None:
array = empty_dtype.construct_array_type()
- return array(np.full(self.shape[1], fill_value),
+ return array(np.full(self.shape[1], fill_value.value),
dtype=empty_dtype)
pass
elif getattr(self.block, 'is_categorical', False):
@@ -335,8 +335,10 @@ def get_empty_dtype_and_na(join_units):
elif 'category' in upcast_classes:
return np.dtype(np.object_), np.nan
elif 'datetimetz' in upcast_classes:
+ # GH-25014. We use NaT instead of iNaT, since this eventually
+ # ends up in DatetimeArray.take, which does not allow iNaT.
dtype = upcast_classes['datetimetz']
- return dtype[0], tslibs.iNaT
+ return dtype[0], tslibs.NaT
elif 'datetime' in upcast_classes:
return np.dtype('M8[ns]'), tslibs.iNaT
elif 'timedelta' in upcast_classes:
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index 6822225273906..7723827ff478a 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -30,8 +30,7 @@
from pandas.core.indexes.timedeltas import TimedeltaIndex, timedelta_range
from pandas.tseries.frequencies import to_offset
-from pandas.tseries.offsets import (
- DateOffset, Day, Nano, Tick, delta_to_nanoseconds)
+from pandas.tseries.offsets import DateOffset, Day, Nano, Tick
_shared_docs_kwargs = dict()
@@ -1613,20 +1612,20 @@ def _get_timestamp_range_edges(first, last, offset, closed='left', base=0):
A tuple of length 2, containing the adjusted pd.Timestamp objects.
"""
if isinstance(offset, Tick):
- is_day = isinstance(offset, Day)
- day_nanos = delta_to_nanoseconds(timedelta(1))
-
- # #1165 and #24127
- if (is_day and not offset.nanos % day_nanos) or not is_day:
- first, last = _adjust_dates_anchored(first, last, offset,
- closed=closed, base=base)
- if is_day and first.tz is not None:
- # _adjust_dates_anchored assumes 'D' means 24H, but first/last
- # might contain a DST transition (23H, 24H, or 25H).
- # Ensure first/last snap to midnight.
- first = first.normalize()
- last = last.normalize()
- return first, last
+ if isinstance(offset, Day):
+ # _adjust_dates_anchored assumes 'D' means 24H, but first/last
+ # might contain a DST transition (23H, 24H, or 25H).
+ # So "pretend" the dates are naive when adjusting the endpoints
+ tz = first.tz
+ first = first.tz_localize(None)
+ last = last.tz_localize(None)
+
+ first, last = _adjust_dates_anchored(first, last, offset,
+ closed=closed, base=base)
+ if isinstance(offset, Day):
+ first = first.tz_localize(tz)
+ last = last.tz_localize(tz)
+ return first, last
else:
first = first.normalize()
diff --git a/pandas/core/tools/timedeltas.py b/pandas/core/tools/timedeltas.py
index e3428146b91d8..ddd21d0f62d08 100644
--- a/pandas/core/tools/timedeltas.py
+++ b/pandas/core/tools/timedeltas.py
@@ -120,7 +120,8 @@ def _coerce_scalar_to_timedelta_type(r, unit='ns', box=True, errors='raise'):
try:
result = Timedelta(r, unit)
if not box:
- result = result.asm8
+ # explicitly view as timedelta64 for case when result is pd.NaT
+ result = result.asm8.view('timedelta64[ns]')
except ValueError:
if errors == 'raise':
raise
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index b31d3f665f47f..4163a571df800 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -203,9 +203,14 @@
* dict, e.g. {{'foo' : [1, 3]}} -> parse columns 1, 3 as date and call
result 'foo'
- If a column or index contains an unparseable date, the entire column or
- index will be returned unaltered as an object data type. For non-standard
- datetime parsing, use ``pd.to_datetime`` after ``pd.read_csv``
+ If a column or index cannot be represented as an array of datetimes,
+ say because of an unparseable value or a mixture of timezones, the column
+ or index will be returned unaltered as an object data type. For
+ non-standard datetime parsing, use ``pd.to_datetime`` after
+ ``pd.read_csv``. To parse an index or column with a mixture of timezones,
+ specify ``date_parser`` to be a partially-applied
+ :func:`pandas.to_datetime` with ``utc=True``. See
+ :ref:`io.csv.mixed_timezones` for more.
Note: A fast-path exists for iso8601-formatted dates.
infer_datetime_format : bool, default False
diff --git a/pandas/plotting/_core.py b/pandas/plotting/_core.py
index e543ab88f53b2..85549bafa8dc0 100644
--- a/pandas/plotting/_core.py
+++ b/pandas/plotting/_core.py
@@ -39,7 +39,7 @@
else:
_HAS_MPL = True
if get_option('plotting.matplotlib.register_converters'):
- _converter.register(explicit=True)
+ _converter.register(explicit=False)
def _raise_if_no_mpl():
diff --git a/pandas/tests/extension/numpy_/__init__.py b/pandas/tests/extension/numpy_/__init__.py
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/pandas/tests/extension/numpy_/conftest.py b/pandas/tests/extension/numpy_/conftest.py
new file mode 100644
index 0000000000000..daa93571c2957
--- /dev/null
+++ b/pandas/tests/extension/numpy_/conftest.py
@@ -0,0 +1,38 @@
+import numpy as np
+import pytest
+
+from pandas.core.arrays.numpy_ import PandasArray
+
+
+@pytest.fixture
+def allow_in_pandas(monkeypatch):
+ """
+ A monkeypatch to tell pandas to let us in.
+
+ By default, passing a PandasArray to an index / series / frame
+ constructor will unbox that PandasArray to an ndarray, and treat
+ it as a non-EA column. We don't want people using EAs without
+ reason.
+
+ The mechanism for this is a check against ABCPandasArray
+ in each constructor.
+
+ But, for testing, we need to allow them in pandas. So we patch
+ the _typ of PandasArray, so that we evade the ABCPandasArray
+ check.
+ """
+ with monkeypatch.context() as m:
+ m.setattr(PandasArray, '_typ', 'extension')
+ yield
+
+
+@pytest.fixture
+def na_value():
+ return np.nan
+
+
+@pytest.fixture
+def na_cmp():
+ def cmp(a, b):
+ return np.isnan(a) and np.isnan(b)
+ return cmp
diff --git a/pandas/tests/extension/test_numpy.py b/pandas/tests/extension/numpy_/test_numpy.py
similarity index 84%
rename from pandas/tests/extension/test_numpy.py
rename to pandas/tests/extension/numpy_/test_numpy.py
index 7ca6882c7441b..4c93d5ee0b9d7 100644
--- a/pandas/tests/extension/test_numpy.py
+++ b/pandas/tests/extension/numpy_/test_numpy.py
@@ -6,7 +6,7 @@
from pandas.core.arrays.numpy_ import PandasArray, PandasDtype
import pandas.util.testing as tm
-from . import base
+from .. import base
@pytest.fixture
@@ -14,28 +14,6 @@ def dtype():
return PandasDtype(np.dtype('float'))
-@pytest.fixture
-def allow_in_pandas(monkeypatch):
- """
- A monkeypatch to tells pandas to let us in.
-
- By default, passing a PandasArray to an index / series / frame
- constructor will unbox that PandasArray to an ndarray, and treat
- it as a non-EA column. We don't want people using EAs without
- reason.
-
- The mechanism for this is a check against ABCPandasArray
- in each constructor.
-
- But, for testing, we need to allow them in pandas. So we patch
- the _typ of PandasArray, so that we evade the ABCPandasArray
- check.
- """
- with monkeypatch.context() as m:
- m.setattr(PandasArray, '_typ', 'extension')
- yield
-
-
@pytest.fixture
def data(allow_in_pandas, dtype):
return PandasArray(np.arange(1, 101, dtype=dtype._dtype))
@@ -46,18 +24,6 @@ def data_missing(allow_in_pandas):
return PandasArray(np.array([np.nan, 1.0]))
-@pytest.fixture
-def na_value():
- return np.nan
-
-
-@pytest.fixture
-def na_cmp():
- def cmp(a, b):
- return np.isnan(a) and np.isnan(b)
- return cmp
-
-
@pytest.fixture
def data_for_sorting(allow_in_pandas):
"""Length-3 array with a known sort order.
diff --git a/pandas/tests/extension/numpy_/test_numpy_nested.py b/pandas/tests/extension/numpy_/test_numpy_nested.py
new file mode 100644
index 0000000000000..cf9b34dd08798
--- /dev/null
+++ b/pandas/tests/extension/numpy_/test_numpy_nested.py
@@ -0,0 +1,286 @@
+"""
+Tests for PandasArray with nested data. Users typically won't create
+these objects via `pd.array`, but they can show up through `.array`
+on a Series with nested data.
+
+We partition these tests into their own file, as many of the base
+tests fail because they aren't appropriate for nested data. It is easier
+to have a separate file with its own data-generating fixtures than
+trying to skip based upon the value of a fixture.
+"""
+import pytest
+
+import pandas as pd
+from pandas.core.arrays.numpy_ import PandasArray, PandasDtype
+
+from .. import base
+
+# For NumPy <1.16, np.array([np.nan, (1,)]) raises
+# ValueError: setting an array element with a sequence.
+np = pytest.importorskip('numpy', minversion='1.16.0')
+
+
+@pytest.fixture
+def dtype():
+ return PandasDtype(np.dtype('object'))
+
+
+@pytest.fixture
+def data(allow_in_pandas, dtype):
+ return pd.Series([(i,) for i in range(100)]).array
+
+
+@pytest.fixture
+def data_missing(allow_in_pandas):
+ return PandasArray(np.array([np.nan, (1,)]))
+
+
+@pytest.fixture
+def data_for_sorting(allow_in_pandas):
+ """Length-3 array with a known sort order.
+
+ This should be three items [B, C, A] with
+ A < B < C
+ """
+ # Use an empty tuple for first element, then remove,
+ # to disable np.array's shape inference.
+ return PandasArray(
+ np.array([(), (2,), (3,), (1,)])[1:]
+ )
+
+
+@pytest.fixture
+def data_missing_for_sorting(allow_in_pandas):
+ """Length-3 array with a known sort order.
+
+ This should be three items [B, NA, A] with
+ A < B and NA missing.
+ """
+ return PandasArray(
+ np.array([(1,), np.nan, (0,)])
+ )
+
+
+@pytest.fixture
+def data_for_grouping(allow_in_pandas):
+ """Data for factorization, grouping, and unique tests.
+
+ Expected to be like [B, B, NA, NA, A, A, B, C]
+
+ Where A < B < C and NA is missing
+ """
+ a, b, c = (1,), (2,), (3,)
+ return PandasArray(np.array(
+ [b, b, np.nan, np.nan, a, a, b, c]
+ ))
+
+
+skip_nested = pytest.mark.skip(reason="Skipping for nested PandasArray")
+
+
+class BaseNumPyTests(object):
+ pass
+
+
+class TestCasting(BaseNumPyTests, base.BaseCastingTests):
+
+ @skip_nested
+ def test_astype_str(self, data):
+ pass
+
+
+class TestConstructors(BaseNumPyTests, base.BaseConstructorsTests):
+ @pytest.mark.skip(reason="We don't register our dtype")
+ # We don't want to register. This test should probably be split in two.
+ def test_from_dtype(self, data):
+ pass
+
+ @skip_nested
+ def test_array_from_scalars(self, data):
+ pass
+
+
+class TestDtype(BaseNumPyTests, base.BaseDtypeTests):
+
+ @pytest.mark.skip(reason="Incorrect expected.")
+ # we unsurprisingly clash with a NumPy name.
+ def test_check_dtype(self, data):
+ pass
+
+
+class TestGetitem(BaseNumPyTests, base.BaseGetitemTests):
+
+ @skip_nested
+ def test_getitem_scalar(self, data):
+ pass
+
+ @skip_nested
+ def test_take_series(self, data):
+ pass
+
+
+class TestGroupby(BaseNumPyTests, base.BaseGroupbyTests):
+ @skip_nested
+ def test_groupby_extension_apply(self, data_for_grouping, op):
+ pass
+
+
+class TestInterface(BaseNumPyTests, base.BaseInterfaceTests):
+ @skip_nested
+ def test_array_interface(self, data):
+ # NumPy array shape inference
+ pass
+
+
+class TestMethods(BaseNumPyTests, base.BaseMethodsTests):
+
+ @pytest.mark.skip(reason="TODO: remove?")
+ def test_value_counts(self, all_data, dropna):
+ pass
+
+ @pytest.mark.skip(reason="Incorrect expected")
+ # We have a bool dtype, so the result is an ExtensionArray
+ # but expected is not
+ def test_combine_le(self, data_repeated):
+ super(TestMethods, self).test_combine_le(data_repeated)
+
+ @skip_nested
+ def test_combine_add(self, data_repeated):
+ # Not numeric
+ pass
+
+ @skip_nested
+ def test_shift_fill_value(self, data):
+ # np.array shape inference. Shift implementation fails.
+ super().test_shift_fill_value(data)
+
+ @skip_nested
+ def test_unique(self, data, box, method):
+ # Fails creating expected
+ pass
+
+ @skip_nested
+ def test_fillna_copy_frame(self, data_missing):
+ # The "scalar" for this array isn't a scalar.
+ pass
+
+ @skip_nested
+ def test_fillna_copy_series(self, data_missing):
+ # The "scalar" for this array isn't a scalar.
+ pass
+
+ @skip_nested
+ def test_hash_pandas_object_works(self, data, as_frame):
+ # ndarray of tuples not hashable
+ pass
+
+ @skip_nested
+ def test_searchsorted(self, data_for_sorting, as_series):
+ # Test setup fails.
+ pass
+
+ @skip_nested
+ def test_where_series(self, data, na_value, as_frame):
+ # Test setup fails.
+ pass
+
+ @skip_nested
+ def test_repeat(self, data, repeats, as_series, use_numpy):
+ # Fails creating expected
+ pass
+
+
+class TestPrinting(BaseNumPyTests, base.BasePrintingTests):
+ pass
+
+
+class TestMissing(BaseNumPyTests, base.BaseMissingTests):
+
+ @skip_nested
+ def test_fillna_scalar(self, data_missing):
+ # Non-scalar "scalar" values.
+ pass
+
+ @skip_nested
+ def test_fillna_series_method(self, data_missing, method):
+ # Non-scalar "scalar" values.
+ pass
+
+ @skip_nested
+ def test_fillna_series(self, data_missing):
+ # Non-scalar "scalar" values.
+ pass
+
+ @skip_nested
+ def test_fillna_frame(self, data_missing):
+ # Non-scalar "scalar" values.
+ pass
+
+
+class TestReshaping(BaseNumPyTests, base.BaseReshapingTests):
+
+ @pytest.mark.skip("Incorrect parent test")
+ # not actually a mixed concat, since we concat int and int.
+ def test_concat_mixed_dtypes(self, data):
+ super(TestReshaping, self).test_concat_mixed_dtypes(data)
+
+ @skip_nested
+ def test_merge(self, data, na_value):
+ # Fails creating expected
+ pass
+
+ @skip_nested
+ def test_merge_on_extension_array(self, data):
+ # Fails creating expected
+ pass
+
+ @skip_nested
+ def test_merge_on_extension_array_duplicates(self, data):
+ # Fails creating expected
+ pass
+
+
+class TestSetitem(BaseNumPyTests, base.BaseSetitemTests):
+
+ @skip_nested
+ def test_setitem_scalar_series(self, data, box_in_series):
+ pass
+
+ @skip_nested
+ def test_setitem_sequence(self, data, box_in_series):
+ pass
+
+ @skip_nested
+ def test_setitem_sequence_mismatched_length_raises(self, data, as_array):
+ pass
+
+ @skip_nested
+ def test_setitem_sequence_broadcasts(self, data, box_in_series):
+ pass
+
+ @skip_nested
+ def test_setitem_loc_scalar_mixed(self, data):
+ pass
+
+ @skip_nested
+ def test_setitem_loc_scalar_multiple_homogoneous(self, data):
+ pass
+
+ @skip_nested
+ def test_setitem_iloc_scalar_mixed(self, data):
+ pass
+
+ @skip_nested
+ def test_setitem_iloc_scalar_multiple_homogoneous(self, data):
+ pass
+
+ @skip_nested
+ def test_setitem_mask_broadcast(self, data, setter):
+ pass
+
+ @skip_nested
+ def test_setitem_scalar_key_sequence_raise(self, data):
+ pass
+
+
+# Skip Arithmetics, NumericReduce, BooleanReduce, Parsing
diff --git a/pandas/tests/frame/test_convert_to.py b/pandas/tests/frame/test_convert_to.py
index ddf85136126a1..7b98395dd6dec 100644
--- a/pandas/tests/frame/test_convert_to.py
+++ b/pandas/tests/frame/test_convert_to.py
@@ -488,3 +488,17 @@ def test_to_dict_index_dtypes(self, into, expected):
result = DataFrame.from_dict(result, orient='index')[cols]
expected = DataFrame.from_dict(expected, orient='index')[cols]
tm.assert_frame_equal(result, expected)
+
+ def test_to_dict_numeric_names(self):
+ # https://github.com/pandas-dev/pandas/issues/24940
+ df = DataFrame({str(i): [i] for i in range(5)})
+ result = set(df.to_dict('records')[0].keys())
+ expected = set(df.columns)
+ assert result == expected
+
+ def test_to_dict_wide(self):
+ # https://github.com/pandas-dev/pandas/issues/24939
+ df = DataFrame({('A_{:d}'.format(i)): [i] for i in range(256)})
+ result = df.to_dict('records')[0]
+ expected = {'A_{:d}'.format(i): i for i in range(256)}
+ assert result == expected
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index f3e9d835c7391..20e439de46bde 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -765,6 +765,11 @@ def test_intersect_str_dates(self, sort):
assert len(result) == 0
+ def test_intersect_nosort(self):
+ result = pd.Index(['c', 'b', 'a']).intersection(['b', 'a'])
+ expected = pd.Index(['b', 'a'])
+ tm.assert_index_equal(result, expected)
+
@pytest.mark.parametrize("sort", [True, False])
def test_chained_union(self, sort):
# Chained unions handles names correctly
@@ -1595,20 +1600,27 @@ def test_drop_tuple(self, values, to_drop):
for drop_me in to_drop[1], [to_drop[1]]:
pytest.raises(KeyError, removed.drop, drop_me)
- @pytest.mark.parametrize("method,expected", [
+ @pytest.mark.parametrize("method,expected,sort", [
+ ('intersection', np.array([(1, 'A'), (2, 'A'), (1, 'B'), (2, 'B')],
+ dtype=[('num', int), ('let', 'a1')]),
+ False),
+
('intersection', np.array([(1, 'A'), (1, 'B'), (2, 'A'), (2, 'B')],
- dtype=[('num', int), ('let', 'a1')])),
+ dtype=[('num', int), ('let', 'a1')]),
+ True),
+
('union', np.array([(1, 'A'), (1, 'B'), (1, 'C'), (2, 'A'), (2, 'B'),
- (2, 'C')], dtype=[('num', int), ('let', 'a1')]))
+ (2, 'C')], dtype=[('num', int), ('let', 'a1')]),
+ True)
])
- def test_tuple_union_bug(self, method, expected):
+ def test_tuple_union_bug(self, method, expected, sort):
index1 = Index(np.array([(1, 'A'), (2, 'A'), (1, 'B'), (2, 'B')],
dtype=[('num', int), ('let', 'a1')]))
index2 = Index(np.array([(1, 'A'), (2, 'A'), (1, 'B'),
(2, 'B'), (1, 'C'), (2, 'C')],
dtype=[('num', int), ('let', 'a1')]))
- result = getattr(index1, method)(index2)
+ result = getattr(index1, method)(index2, sort=sort)
assert result.ndim == 1
expected = Index(expected)
diff --git a/pandas/tests/resample/test_datetime_index.py b/pandas/tests/resample/test_datetime_index.py
index 73995cbe79ecd..b743aeecdc756 100644
--- a/pandas/tests/resample/test_datetime_index.py
+++ b/pandas/tests/resample/test_datetime_index.py
@@ -1276,6 +1276,21 @@ def test_resample_across_dst():
assert_frame_equal(result, expected)
+def test_groupby_with_dst_time_change():
+ # GH 24972
+ index = pd.DatetimeIndex([1478064900001000000, 1480037118776792000],
+ tz='UTC').tz_convert('America/Chicago')
+
+ df = pd.DataFrame([1, 2], index=index)
+ result = df.groupby(pd.Grouper(freq='1d')).last()
+ expected_index_values = pd.date_range('2016-11-02', '2016-11-24',
+ freq='d', tz='America/Chicago')
+
+ index = pd.DatetimeIndex(expected_index_values)
+ expected = pd.DataFrame([1.0] + ([np.nan] * 21) + [2.0], index=index)
+ assert_frame_equal(result, expected)
+
+
def test_resample_dst_anchor():
# 5172
dti = DatetimeIndex([datetime(2012, 11, 4, 23)], tz='US/Eastern')
diff --git a/pandas/tests/reshape/merge/test_merge.py b/pandas/tests/reshape/merge/test_merge.py
index f0a3ddc8ce8a4..1e60fdbebfeb3 100644
--- a/pandas/tests/reshape/merge/test_merge.py
+++ b/pandas/tests/reshape/merge/test_merge.py
@@ -616,6 +616,24 @@ def test_merge_on_datetime64tz(self):
assert result['value_x'].dtype == 'datetime64[ns, US/Eastern]'
assert result['value_y'].dtype == 'datetime64[ns, US/Eastern]'
+ def test_merge_on_datetime64tz_empty(self):
+ # https://github.com/pandas-dev/pandas/issues/25014
+ dtz = pd.DatetimeTZDtype(tz='UTC')
+ right = pd.DataFrame({'date': [pd.Timestamp('2018', tz=dtz.tz)],
+ 'value': [4.0],
+ 'date2': [pd.Timestamp('2019', tz=dtz.tz)]},
+ columns=['date', 'value', 'date2'])
+ left = right[:0]
+ result = left.merge(right, on='date')
+ expected = pd.DataFrame({
+ 'value_x': pd.Series(dtype=float),
+ 'date2_x': pd.Series(dtype=dtz),
+ 'date': pd.Series(dtype=dtz),
+ 'value_y': pd.Series(dtype=float),
+ 'date2_y': pd.Series(dtype=dtz),
+ }, columns=['value_x', 'date2_x', 'date', 'value_y', 'date2_y'])
+ tm.assert_frame_equal(result, expected)
+
def test_merge_datetime64tz_with_dst_transition(self):
# GH 18885
df1 = pd.DataFrame(pd.date_range(
diff --git a/pandas/tests/scalar/timedelta/test_timedelta.py b/pandas/tests/scalar/timedelta/test_timedelta.py
index 9b5fdfb06a9fa..e1838e0160fec 100644
--- a/pandas/tests/scalar/timedelta/test_timedelta.py
+++ b/pandas/tests/scalar/timedelta/test_timedelta.py
@@ -309,8 +309,13 @@ def test_iso_conversion(self):
assert to_timedelta('P0DT0H0M1S') == expected
def test_nat_converters(self):
- assert to_timedelta('nat', box=False).astype('int64') == iNaT
- assert to_timedelta('nan', box=False).astype('int64') == iNaT
+ result = to_timedelta('nat', box=False)
+ assert result.dtype.kind == 'm'
+ assert result.astype('int64') == iNaT
+
+ result = to_timedelta('nan', box=False)
+ assert result.dtype.kind == 'm'
+ assert result.astype('int64') == iNaT
@pytest.mark.parametrize('units, np_unit',
[(['Y', 'y'], 'Y'),
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
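Two of the changes above are easy to exercise directly. First, the mixed-timezone parsing documented in io.rst and the whatsnew; a minimal sketch using the 0.24-era ``date_parser`` argument:

```python
# Sketch of the documented behavior: a column with mixed UTC offsets stays
# object dtype (strings) unless explicitly parsed to UTC.
import io
import pandas as pd

content = """a
2000-01-01T00:00:00+05:00
2000-01-01T00:00:00+06:00"""

df = pd.read_csv(io.StringIO(content), parse_dates=["a"])
print(df["a"].dtype)  # object

# Partially-applied to_datetime with utc=True as the date_parser.
df = pd.read_csv(io.StringIO(content), parse_dates=["a"],
                 date_parser=lambda col: pd.to_datetime(col, utc=True))
print(df["a"].dtype)  # datetime64[ns, UTC]
```

Second, the ``Index.intersection`` default flip; a sketch mirroring the new ``test_intersect_nosort`` test:

```python
import pandas as pd

# With sort=False now the default, the calling index's order is preserved.
result = pd.Index(["c", "b", "a"]).intersection(["b", "a"])
print(list(result))  # ['b', 'a']
```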
| https://api.github.com/repos/pandas-dev/pandas/pulls/25055 | 2019-01-31T17:05:55Z | 2019-01-31T20:30:20Z | null | 2019-01-31T20:30:20Z |
Backport PR #25039 on branch 0.24.x (BUG: avoid usage in_qtconsole for recent IPython versions) | diff --git a/doc/source/whatsnew/v0.24.1.rst b/doc/source/whatsnew/v0.24.1.rst
index 047404e93914b..521319c55a503 100644
--- a/doc/source/whatsnew/v0.24.1.rst
+++ b/doc/source/whatsnew/v0.24.1.rst
@@ -83,7 +83,7 @@ Bug Fixes
**Other**
--
+- Fixed AttributeError when printing a DataFrame's HTML repr after accessing the IPython config object (:issue:`25036`)
-
.. _whatsnew_0.241.contributors:
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 28c6f3c23a3ce..5b462b949abf9 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -17,6 +17,7 @@
import itertools
import sys
import warnings
+from distutils.version import LooseVersion
from textwrap import dedent
import numpy as np
@@ -646,9 +647,15 @@ def _repr_html_(self):
# XXX: In IPython 3.x and above, the Qt console will not attempt to
# display HTML, so this check can be removed when support for
# IPython 2.x is no longer needed.
- if console.in_qtconsole():
- # 'HTML output is disabled in QtConsole'
- return None
+ try:
+ import IPython
+ except ImportError:
+ pass
+ else:
+ if LooseVersion(IPython.__version__) < LooseVersion('3.0'):
+ if console.in_qtconsole():
+ # 'HTML output is disabled in QtConsole'
+ return None
if self._info_repr():
buf = StringIO(u(""))
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index 5d922ccaf1fd5..b0cf5a2f17609 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -12,6 +12,7 @@
import os
import re
import sys
+import textwrap
import warnings
import dateutil
@@ -2777,3 +2778,17 @@ def test_format_percentiles():
fmt.format_percentiles([2, 0.1, 0.5])
with pytest.raises(ValueError, match=msg):
fmt.format_percentiles([0.1, 0.5, 'a'])
+
+
+def test_repr_html_ipython_config(ip):
+ code = textwrap.dedent("""\
+ import pandas as pd
+ df = pd.DataFrame({"A": [1, 2]})
+ df._repr_html_()
+
+ cfg = get_ipython().config
+ cfg['IPKernelApp']['parent_appname']
+ df._repr_html_()
+ """)
+ result = ip.run_cell(code)
+ assert not result.error_in_exec
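A minimal sketch of the version gate this backport introduces (``html_repr_allowed`` is a hypothetical helper, not part of the patch): only IPython < 3.0 needs the ``in_qtconsole()`` check, so newer IPython never takes the code path that raised the ``AttributeError`` after touching ``get_ipython().config``.

```python
# Sketch of the gating logic, assuming pandas.io.formats.console.in_qtconsole
# is available (it was in the 0.24 series).
from distutils.version import LooseVersion


def html_repr_allowed():
    try:
        import IPython
    except ImportError:
        return True
    if LooseVersion(IPython.__version__) < LooseVersion("3.0"):
        from pandas.io.formats import console
        if console.in_qtconsole():
            return False  # HTML output is disabled in QtConsole
    return True
```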
| Backport PR #25039: BUG: avoid usage in_qtconsole for recent IPython versions | https://api.github.com/repos/pandas-dev/pandas/pulls/25054 | 2019-01-31T16:03:27Z | 2019-01-31T20:17:47Z | 2019-01-31T20:17:47Z | 2019-01-31T20:17:47Z |
DEPR: remove PanelGroupBy, disable DataFrame.to_panel | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
index 09626be713c4f..a3fb1c575e7f1 100644
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -51,7 +51,7 @@ Deprecations
Removal of prior version deprecations/changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
+- Removed (parts of) :class:`Panel` (:issue:`25047`)
-
-
-
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index afc4194e71eb1..ad4709fb3b870 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1974,45 +1974,7 @@ def to_panel(self):
-------
panel : Panel
"""
- # only support this kind for now
- if (not isinstance(self.index, MultiIndex) or # pragma: no cover
- len(self.index.levels) != 2):
- raise NotImplementedError('Only 2-level MultiIndex are supported.')
-
- if not self.index.is_unique:
- raise ValueError("Can't convert non-uniquely indexed "
- "DataFrame to Panel")
-
- self._consolidate_inplace()
-
- # minor axis must be sorted
- if self.index.lexsort_depth < 2:
- selfsorted = self.sort_index(level=0)
- else:
- selfsorted = self
-
- major_axis, minor_axis = selfsorted.index.levels
- major_codes, minor_codes = selfsorted.index.codes
- shape = len(major_axis), len(minor_axis)
-
- # preserve names, if any
- major_axis = major_axis.copy()
- major_axis.name = self.index.names[0]
-
- minor_axis = minor_axis.copy()
- minor_axis.name = self.index.names[1]
-
- # create new axes
- new_axes = [selfsorted.columns, major_axis, minor_axis]
-
- # create new manager
- new_mgr = selfsorted._data.reshape_nd(axes=new_axes,
- labels=[major_codes,
- minor_codes],
- shape=shape,
- ref_items=selfsorted.columns)
-
- return self._constructor_expanddim(new_mgr)
+ raise NotImplementedError("Panel is being removed in pandas 0.25.0.")
@deprecate_kwarg(old_arg_name='encoding', new_arg_name=None)
def to_stata(self, fname, convert_dates=None, write_index=True,
diff --git a/pandas/core/groupby/__init__.py b/pandas/core/groupby/__init__.py
index 9c15a5ebfe0f2..ac35f3825e5e8 100644
--- a/pandas/core/groupby/__init__.py
+++ b/pandas/core/groupby/__init__.py
@@ -1,4 +1,4 @@
from pandas.core.groupby.groupby import GroupBy # noqa: F401
from pandas.core.groupby.generic import ( # noqa: F401
- SeriesGroupBy, DataFrameGroupBy, PanelGroupBy)
+ SeriesGroupBy, DataFrameGroupBy)
from pandas.core.groupby.grouper import Grouper # noqa: F401
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 78aa6d13a9e02..c8ea9ce689871 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -1,5 +1,5 @@
"""
-Define the SeriesGroupBy, DataFrameGroupBy, and PanelGroupBy
+Define the SeriesGroupBy and DataFrameGroupBy
classes that hold the groupby interfaces (and some implementations).
These are user facing as the result of the ``df.groupby(...)`` operations,
@@ -39,7 +39,6 @@
from pandas.core.index import CategoricalIndex, Index, MultiIndex
import pandas.core.indexes.base as ibase
from pandas.core.internals import BlockManager, make_block
-from pandas.core.panel import Panel
from pandas.core.series import Series
from pandas.plotting._core import boxplot_frame_groupby
@@ -1586,90 +1585,3 @@ def groupby_series(obj, col=None):
return results
boxplot = boxplot_frame_groupby
-
-
-class PanelGroupBy(NDFrameGroupBy):
-
- def aggregate(self, arg, *args, **kwargs):
- return super(PanelGroupBy, self).aggregate(arg, *args, **kwargs)
-
- agg = aggregate
-
- def _iterate_slices(self):
- if self.axis == 0:
- # kludge
- if self._selection is None:
- slice_axis = self._selected_obj.items
- else:
- slice_axis = self._selection_list
- slicer = lambda x: self._selected_obj[x]
- else:
- raise NotImplementedError("axis other than 0 is not supported")
-
- for val in slice_axis:
- if val in self.exclusions:
- continue
-
- yield val, slicer(val)
-
- def aggregate(self, arg, *args, **kwargs):
- """
- Aggregate using input function or dict of {column -> function}
-
- Parameters
- ----------
- arg : function or dict
- Function to use for aggregating groups. If a function, must either
- work when passed a Panel or when passed to Panel.apply. If
- pass a dict, the keys must be DataFrame column names
-
- Returns
- -------
- aggregated : Panel
- """
- if isinstance(arg, compat.string_types):
- return getattr(self, arg)(*args, **kwargs)
-
- return self._aggregate_generic(arg, *args, **kwargs)
-
- def _wrap_generic_output(self, result, obj):
- if self.axis == 0:
- new_axes = list(obj.axes)
- new_axes[0] = self.grouper.result_index
- elif self.axis == 1:
- x, y, z = obj.axes
- new_axes = [self.grouper.result_index, z, x]
- else:
- x, y, z = obj.axes
- new_axes = [self.grouper.result_index, y, x]
-
- result = Panel._from_axes(result, new_axes)
-
- if self.axis == 1:
- result = result.swapaxes(0, 1).swapaxes(0, 2)
- elif self.axis == 2:
- result = result.swapaxes(0, 2)
-
- return result
-
- def _aggregate_item_by_item(self, func, *args, **kwargs):
- obj = self._obj_with_exclusions
- result = {}
-
- if self.axis > 0:
- for item in obj:
- try:
- itemg = DataFrameGroupBy(obj[item],
- axis=self.axis - 1,
- grouper=self.grouper)
- result[item] = itemg.aggregate(func, *args, **kwargs)
- except (ValueError, TypeError):
- raise
- new_axes = list(obj.axes)
- new_axes[self.axis] = self.grouper.result_index
- return Panel._from_axes(result, new_axes)
- else:
- raise ValueError("axis value must be greater than 0")
-
- def _wrap_aggregated_output(self, output, names=None):
- raise AbstractMethodError(self)
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index c8afafde48ac2..de535eeea4b5e 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -917,9 +917,7 @@ def groupby(self, function, axis='major'):
-------
grouped : PanelGroupBy
"""
- from pandas.core.groupby import PanelGroupBy
- axis = self._get_axis_number(axis)
- return PanelGroupBy(self, function, axis=axis)
+ raise NotImplementedError("Panel is removed in pandas 0.25.0")
def to_frame(self, filter_observations=True):
"""
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index a7204fcd9dd20..fbddc9ff29ce9 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -20,7 +20,7 @@
import pandas.core.algorithms as algos
from pandas.core.generic import _shared_docs
from pandas.core.groupby.base import GroupByMixin
-from pandas.core.groupby.generic import PanelGroupBy, SeriesGroupBy
+from pandas.core.groupby.generic import SeriesGroupBy
from pandas.core.groupby.groupby import (
GroupBy, _GroupBy, _pipe_template, groupby)
from pandas.core.groupby.grouper import Grouper
@@ -340,12 +340,7 @@ def _groupby_and_aggregate(self, how, grouper=None, *args, **kwargs):
obj = self._selected_obj
- try:
- grouped = groupby(obj, by=None, grouper=grouper, axis=self.axis)
- except TypeError:
-
- # panel grouper
- grouped = PanelGroupBy(obj, grouper=grouper, axis=self.axis)
+ grouped = groupby(obj, by=None, grouper=grouper, axis=self.axis)
try:
if isinstance(obj, ABCDataFrame) and compat.callable(how):
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 2ab6ddb5b25c7..00fa01bb23c8c 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -31,7 +31,7 @@
PeriodIndex, Series, SparseDataFrame, SparseSeries, TimedeltaIndex, compat,
concat, isna, to_datetime)
from pandas.core import config
-from pandas.core.algorithms import match, unique
+from pandas.core.algorithms import unique
from pandas.core.arrays.categorical import (
Categorical, _factorize_from_iterables)
from pandas.core.arrays.sparse import BlockIndex, IntIndex
@@ -3944,29 +3944,7 @@ def read(self, where=None, columns=None, **kwargs):
objs.append(obj)
else:
- warnings.warn(duplicate_doc, DuplicateWarning, stacklevel=5)
-
- # reconstruct
- long_index = MultiIndex.from_arrays(
- [i.values for i in self.index_axes])
-
- for c in self.values_axes:
- lp = DataFrame(c.data, index=long_index, columns=c.values)
-
- # need a better algorithm
- tuple_index = long_index.values
-
- unique_tuples = unique(tuple_index)
- unique_tuples = com.asarray_tuplesafe(unique_tuples)
-
- indexer = match(unique_tuples, tuple_index)
- indexer = ensure_platform_int(indexer)
-
- new_index = long_index.take(indexer)
- new_values = lp.values.take(indexer, axis=0)
-
- lp = DataFrame(new_values, index=new_index, columns=lp.columns)
- objs.append(lp.to_panel())
+ raise NotImplementedError("Panel is removed in pandas 0.25.0")
# create the composite object
if len(objs) == 1:
@@ -4875,16 +4853,3 @@ def select_coords(self):
return self.coordinates
return np.arange(start, stop)
-
-# utilities ###
-
-
-def timeit(key, df, fn=None, remove=True, **kwargs):
- if fn is None:
- fn = 'timeit.h5'
- store = HDFStore(fn, mode='w')
- store.append(key, df, **kwargs)
- store.close()
-
- if remove:
- os.remove(fn)
diff --git a/pandas/tests/dtypes/test_generic.py b/pandas/tests/dtypes/test_generic.py
index 1622088d05f4d..2bb3559d56d61 100644
--- a/pandas/tests/dtypes/test_generic.py
+++ b/pandas/tests/dtypes/test_generic.py
@@ -1,6 +1,6 @@
# -*- coding: utf-8 -*-
-from warnings import catch_warnings, simplefilter
+from warnings import catch_warnings
import numpy as np
@@ -39,9 +39,6 @@ def test_abc_types(self):
assert isinstance(pd.Int64Index([1, 2, 3]), gt.ABCIndexClass)
assert isinstance(pd.Series([1, 2, 3]), gt.ABCSeries)
assert isinstance(self.df, gt.ABCDataFrame)
- with catch_warnings(record=True):
- simplefilter('ignore', FutureWarning)
- assert isinstance(self.df.to_panel(), gt.ABCPanel)
assert isinstance(self.sparse_series, gt.ABCSparseSeries)
assert isinstance(self.sparse_array, gt.ABCSparseArray)
assert isinstance(self.sparse_frame, gt.ABCSparseDataFrame)
diff --git a/pandas/tests/frame/test_subclass.py b/pandas/tests/frame/test_subclass.py
index 4f0747c0d6945..2e3696e7e04cc 100644
--- a/pandas/tests/frame/test_subclass.py
+++ b/pandas/tests/frame/test_subclass.py
@@ -6,7 +6,7 @@
import pytest
import pandas as pd
-from pandas import DataFrame, Index, MultiIndex, Panel, Series
+from pandas import DataFrame, Index, MultiIndex, Series
from pandas.tests.frame.common import TestData
import pandas.util.testing as tm
@@ -125,29 +125,6 @@ def test_indexing_sliced(self):
tm.assert_series_equal(res, exp)
assert isinstance(res, tm.SubclassedSeries)
- @pytest.mark.filterwarnings("ignore:\\nPanel:FutureWarning")
- def test_to_panel_expanddim(self):
- # GH 9762
-
- class SubclassedFrame(DataFrame):
-
- @property
- def _constructor_expanddim(self):
- return SubclassedPanel
-
- class SubclassedPanel(Panel):
- pass
-
- index = MultiIndex.from_tuples([(0, 0), (0, 1), (0, 2)])
- df = SubclassedFrame({'X': [1, 2, 3], 'Y': [4, 5, 6]}, index=index)
- result = df.to_panel()
- assert isinstance(result, SubclassedPanel)
- expected = SubclassedPanel([[[1, 2, 3]], [[4, 5, 6]]],
- items=['X', 'Y'], major_axis=[0],
- minor_axis=[0, 1, 2],
- dtype='int64')
- tm.assert_panel_equal(result, expected)
-
def test_subclass_attr_err_propagation(self):
# GH 11808
class A(DataFrame):
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index 98c917a6eca3c..0bfc7ababd18a 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -1239,31 +1239,6 @@ def _check_work(gp):
# _check_work(panel.groupby(lambda x: x.month, axis=1))
-@pytest.mark.filterwarnings("ignore:\\nPanel:FutureWarning")
-def test_panel_groupby():
- panel = tm.makePanel()
- tm.add_nans(panel)
- grouped = panel.groupby({'ItemA': 0, 'ItemB': 0, 'ItemC': 1},
- axis='items')
- agged = grouped.mean()
- agged2 = grouped.agg(lambda x: x.mean('items'))
-
- tm.assert_panel_equal(agged, agged2)
-
- tm.assert_index_equal(agged.items, Index([0, 1]))
-
- grouped = panel.groupby(lambda x: x.month, axis='major')
- agged = grouped.mean()
-
- exp = Index(sorted(list(set(panel.major_axis.month))))
- tm.assert_index_equal(agged.major_axis, exp)
-
- grouped = panel.groupby({'A': 0, 'B': 0, 'C': 1, 'D': 1},
- axis='minor')
- agged = grouped.mean()
- tm.assert_index_equal(agged.minor_axis, Index([0, 1]))
-
-
def test_groupby_2d_malformed():
d = DataFrame(index=lrange(2))
d['group'] = ['g1', 'g2']
diff --git a/pandas/tests/groupby/test_grouping.py b/pandas/tests/groupby/test_grouping.py
index a509a7cb57c97..44b5bd5f13992 100644
--- a/pandas/tests/groupby/test_grouping.py
+++ b/pandas/tests/groupby/test_grouping.py
@@ -14,8 +14,7 @@
from pandas.core.groupby.grouper import Grouping
import pandas.util.testing as tm
from pandas.util.testing import (
- assert_almost_equal, assert_frame_equal, assert_panel_equal,
- assert_series_equal)
+ assert_almost_equal, assert_frame_equal, assert_series_equal)
# selection
# --------------------------------
@@ -563,17 +562,7 @@ def test_list_grouper_with_nat(self):
# --------------------------------
class TestGetGroup():
-
- @pytest.mark.filterwarnings("ignore:\\nPanel:FutureWarning")
def test_get_group(self):
- wp = tm.makePanel()
- grouped = wp.groupby(lambda x: x.month, axis='major')
-
- gp = grouped.get_group(1)
- expected = wp.reindex(
- major=[x for x in wp.major_axis if x.month == 1])
- assert_panel_equal(gp, expected)
-
# GH 5267
# be datelike friendly
df = DataFrame({'DATE': pd.to_datetime(
@@ -755,19 +744,6 @@ def test_multi_iter_frame(self, three_group):
for key, group in grouped:
pass
- @pytest.mark.filterwarnings("ignore:\\nPanel:FutureWarning")
- def test_multi_iter_panel(self):
- wp = tm.makePanel()
- grouped = wp.groupby([lambda x: x.month, lambda x: x.weekday()],
- axis=1)
-
- for (month, wd), group in grouped:
- exp_axis = [x
- for x in wp.major_axis
- if x.month == month and x.weekday() == wd]
- expected = wp.reindex(major=exp_axis)
- assert_panel_equal(group, expected)
-
def test_dictify(self, df):
dict(iter(df.groupby('A')))
dict(iter(df.groupby(['A', 'B'])))
diff --git a/pandas/tests/io/test_pytables.py b/pandas/tests/io/test_pytables.py
index 9430011288f27..c339c33751b5f 100644
--- a/pandas/tests/io/test_pytables.py
+++ b/pandas/tests/io/test_pytables.py
@@ -3050,29 +3050,6 @@ def test_select_with_dups(self):
result = store.select('df', columns=['B', 'A'])
assert_frame_equal(result, expected, by_blocks=True)
- @pytest.mark.filterwarnings(
- "ignore:\\nduplicate:pandas.io.pytables.DuplicateWarning"
- )
- def test_wide_table_dups(self):
- with ensure_clean_store(self.path) as store:
- with catch_warnings(record=True):
-
- wp = tm.makePanel()
- store.put('panel', wp, format='table')
- store.put('panel', wp, format='table', append=True)
-
- recons = store['panel']
-
- assert_panel_equal(recons, wp)
-
- def test_long(self):
- def _check(left, right):
- assert_panel_equal(left.to_panel(), right.to_panel())
-
- with catch_warnings(record=True):
- wp = tm.makePanel()
- self._check_roundtrip(wp.to_frame(), _check)
-
def test_overwrite_node(self):
with ensure_clean_store(self.path) as store:
diff --git a/pandas/tests/resample/test_datetime_index.py b/pandas/tests/resample/test_datetime_index.py
index 856c4df5380e5..ceccb48194f85 100644
--- a/pandas/tests/resample/test_datetime_index.py
+++ b/pandas/tests/resample/test_datetime_index.py
@@ -1,6 +1,5 @@
from datetime import datetime, timedelta
from functools import partial
-from warnings import catch_warnings, simplefilter
import numpy as np
import pytest
@@ -10,7 +9,7 @@
from pandas.errors import UnsupportedFunctionCall
import pandas as pd
-from pandas import DataFrame, Panel, Series, Timedelta, Timestamp, isna, notna
+from pandas import DataFrame, Series, Timedelta, Timestamp, isna, notna
from pandas.core.indexes.datetimes import date_range
from pandas.core.indexes.period import Period, period_range
from pandas.core.resample import (
@@ -692,56 +691,6 @@ def test_resample_axis1():
tm.assert_frame_equal(result, expected)
-def test_resample_panel():
- rng = date_range('1/1/2000', '6/30/2000')
- n = len(rng)
-
- with catch_warnings(record=True):
- simplefilter("ignore", FutureWarning)
- panel = Panel(np.random.randn(3, n, 5),
- items=['one', 'two', 'three'],
- major_axis=rng,
- minor_axis=['a', 'b', 'c', 'd', 'e'])
-
- result = panel.resample('M', axis=1).mean()
-
- def p_apply(panel, f):
- result = {}
- for item in panel.items:
- result[item] = f(panel[item])
- return Panel(result, items=panel.items)
-
- expected = p_apply(panel, lambda x: x.resample('M').mean())
- tm.assert_panel_equal(result, expected)
-
- panel2 = panel.swapaxes(1, 2)
- result = panel2.resample('M', axis=2).mean()
- expected = p_apply(panel2,
- lambda x: x.resample('M', axis=1).mean())
- tm.assert_panel_equal(result, expected)
-
-
-@pytest.mark.filterwarnings("ignore:\\nPanel:FutureWarning")
-def test_resample_panel_numpy():
- rng = date_range('1/1/2000', '6/30/2000')
- n = len(rng)
-
- with catch_warnings(record=True):
- panel = Panel(np.random.randn(3, n, 5),
- items=['one', 'two', 'three'],
- major_axis=rng,
- minor_axis=['a', 'b', 'c', 'd', 'e'])
-
- result = panel.resample('M', axis=1).apply(lambda x: x.mean(1))
- expected = panel.resample('M', axis=1).mean()
- tm.assert_panel_equal(result, expected)
-
- panel = panel.swapaxes(1, 2)
- result = panel.resample('M', axis=2).apply(lambda x: x.mean(2))
- expected = panel.resample('M', axis=2).mean()
- tm.assert_panel_equal(result, expected)
-
-
def test_resample_anchored_ticks():
# If a fixed delta (5 minute, 4 hour) evenly divides a day, we should
# "anchor" the origin at midnight so we get regular intervals rather
diff --git a/pandas/tests/resample/test_time_grouper.py b/pandas/tests/resample/test_time_grouper.py
index a4eb7933738c0..2f330d1f2484b 100644
--- a/pandas/tests/resample/test_time_grouper.py
+++ b/pandas/tests/resample/test_time_grouper.py
@@ -5,7 +5,7 @@
import pytest
import pandas as pd
-from pandas import DataFrame, Panel, Series
+from pandas import DataFrame, Series
from pandas.core.indexes.datetimes import date_range
from pandas.core.resample import TimeGrouper
import pandas.util.testing as tm
@@ -79,27 +79,6 @@ def f(df):
tm.assert_index_equal(result.index, df.index)
-@pytest.mark.filterwarnings("ignore:\\nPanel:FutureWarning")
-def test_panel_aggregation():
- ind = pd.date_range('1/1/2000', periods=100)
- data = np.random.randn(2, len(ind), 4)
-
- wp = Panel(data, items=['Item1', 'Item2'], major_axis=ind,
- minor_axis=['A', 'B', 'C', 'D'])
-
- tg = TimeGrouper('M', axis=1)
- _, grouper, _ = tg._get_grouper(wp)
- bingrouped = wp.groupby(grouper)
- binagg = bingrouped.mean()
-
- def f(x):
- assert (isinstance(x, Panel))
- return x.mean(1)
-
- result = bingrouped.agg(f)
- tm.assert_panel_equal(result, binagg)
-
-
@pytest.mark.parametrize('name, func', [
('Int64Index', tm.makeIntIndex),
('Index', tm.makeUnicodeIndex),
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
index ba0ad72e624f7..6b20acc844829 100644
--- a/pandas/tests/test_panel.py
+++ b/pandas/tests/test_panel.py
@@ -1653,61 +1653,6 @@ def test_transpose_copy(self):
panel.values[0, 1, 1] = np.nan
assert notna(result.values[1, 0, 1])
- def test_to_frame(self):
- # filtered
- filtered = self.panel.to_frame()
- expected = self.panel.to_frame().dropna(how='any')
- assert_frame_equal(filtered, expected)
-
- # unfiltered
- unfiltered = self.panel.to_frame(filter_observations=False)
- assert_panel_equal(unfiltered.to_panel(), self.panel)
-
- # names
- assert unfiltered.index.names == ('major', 'minor')
-
- # unsorted, round trip
- df = self.panel.to_frame(filter_observations=False)
- unsorted = df.take(np.random.permutation(len(df)))
- pan = unsorted.to_panel()
- assert_panel_equal(pan, self.panel)
-
- # preserve original index names
- df = DataFrame(np.random.randn(6, 2),
- index=[['a', 'a', 'b', 'b', 'c', 'c'],
- [0, 1, 0, 1, 0, 1]],
- columns=['one', 'two'])
- df.index.names = ['foo', 'bar']
- df.columns.name = 'baz'
-
- rdf = df.to_panel().to_frame()
- assert rdf.index.names == df.index.names
- assert rdf.columns.names == df.columns.names
-
- def test_to_frame_mixed(self):
- panel = self.panel.fillna(0)
- panel['str'] = 'foo'
- panel['bool'] = panel['ItemA'] > 0
-
- lp = panel.to_frame()
- wp = lp.to_panel()
- assert wp['bool'].values.dtype == np.bool_
- # Previously, this was mutating the underlying
- # index and changing its name
- assert_frame_equal(wp['bool'], panel['bool'], check_names=False)
-
- # GH 8704
- # with categorical
- df = panel.to_frame()
- df['category'] = df['str'].astype('category')
-
- # to_panel
- # TODO: this converts back to object
- p = df.to_panel()
- expected = panel.copy()
- expected['category'] = 'foo'
- assert_panel_equal(p, expected)
-
def test_to_frame_multi_major(self):
idx = MultiIndex.from_tuples(
[(1, 'one'), (1, 'two'), (2, 'one'), (2, 'two')])
@@ -1808,22 +1753,6 @@ def test_to_frame_multi_drop_level(self):
expected = DataFrame({'i1': [1., 2], 'i2': [1., 2]}, index=exp_idx)
assert_frame_equal(result, expected)
- def test_to_panel_na_handling(self):
- df = DataFrame(np.random.randint(0, 10, size=20).reshape((10, 2)),
- index=[[0, 0, 0, 0, 0, 0, 1, 1, 1, 1],
- [0, 1, 2, 3, 4, 5, 2, 3, 4, 5]])
-
- panel = df.to_panel()
- assert isna(panel[0].loc[1, [0, 1]]).all()
-
- def test_to_panel_duplicates(self):
- # #2441
- df = DataFrame({'a': [0, 0, 1], 'b': [1, 1, 1], 'c': [1, 2, 3]})
- idf = df.set_index(['a', 'b'])
-
- with pytest.raises(ValueError, match='non-uniquely indexed'):
- idf.to_panel()
-
def test_panel_dups(self):
# GH 4960
@@ -2121,14 +2050,6 @@ def test_get_attr(self):
self.panel['i'] = self.panel['ItemA']
assert_frame_equal(self.panel['i'], self.panel.i)
- def test_from_frame_level1_unsorted(self):
- tuples = [('MSFT', 3), ('MSFT', 2), ('AAPL', 2), ('AAPL', 1),
- ('MSFT', 1)]
- midx = MultiIndex.from_tuples(tuples)
- df = DataFrame(np.random.rand(5, 4), index=midx)
- p = df.to_panel()
- assert_frame_equal(p.minor_xs(2), df.xs(2, level=1).sort_index())
-
def test_to_excel(self):
try:
import xlwt # noqa
@@ -2404,40 +2325,11 @@ def setup_method(self, method):
self.panel = panel.to_frame()
self.unfiltered_panel = panel.to_frame(filter_observations=False)
- def test_ops_differently_indexed(self):
- # trying to set non-identically indexed panel
- wp = self.panel.to_panel()
- wp2 = wp.reindex(major=wp.major_axis[:-1])
- lp2 = wp2.to_frame()
-
- result = self.panel + lp2
- assert_frame_equal(result.reindex(lp2.index), lp2 * 2)
-
- # careful, mutation
- self.panel['foo'] = lp2['ItemA']
- assert_series_equal(self.panel['foo'].reindex(lp2.index),
- lp2['ItemA'],
- check_names=False)
-
def test_ops_scalar(self):
result = self.panel.mul(2)
expected = DataFrame.__mul__(self.panel, 2)
assert_frame_equal(result, expected)
- def test_combineFrame(self):
- wp = self.panel.to_panel()
- result = self.panel.add(wp['ItemA'].stack(), axis=0)
- assert_frame_equal(result.to_panel()['ItemA'], wp['ItemA'] * 2)
-
- def test_combinePanel(self):
- wp = self.panel.to_panel()
- result = self.panel.add(self.panel)
- wide_result = result.to_panel()
- assert_frame_equal(wp['ItemA'] * 2, wide_result['ItemA'])
-
- # one item
- result = self.panel.add(self.panel.filter(['ItemA']))
-
def test_combine_scalar(self):
result = self.panel.mul(2)
expected = DataFrame(self.panel._data) * 2
@@ -2454,34 +2346,6 @@ def test_combine_series(self):
expected = DataFrame.add(self.panel, s, axis=1)
assert_frame_equal(result, expected)
- def test_operators(self):
- wp = self.panel.to_panel()
- result = (self.panel + 1).to_panel()
- assert_frame_equal(wp['ItemA'] + 1, result['ItemA'])
-
- def test_arith_flex_panel(self):
- ops = ['add', 'sub', 'mul', 'div',
- 'truediv', 'pow', 'floordiv', 'mod']
- if not compat.PY3:
- aliases = {}
- else:
- aliases = {'div': 'truediv'}
- self.panel = self.panel.to_panel()
-
- for n in [np.random.randint(-50, -1), np.random.randint(1, 50), 0]:
- for op in ops:
- alias = aliases.get(op, op)
- f = getattr(operator, alias)
- exp = f(self.panel, n)
- result = getattr(self.panel, op)(n)
- assert_panel_equal(result, exp, check_panel_type=True)
-
- # rops
- r_f = lambda x, y: f(y, x)
- exp = r_f(self.panel, n)
- result = getattr(self.panel, 'r' + op)(n)
- assert_panel_equal(result, exp)
-
def test_sort(self):
def is_sorted(arr):
return (arr[1:] > arr[:-1]).any()
@@ -2502,45 +2366,6 @@ def test_to_sparse(self):
with pytest.raises(NotImplementedError, match=msg):
self.panel.to_sparse
- def test_truncate(self):
- dates = self.panel.index.levels[0]
- start, end = dates[1], dates[5]
-
- trunced = self.panel.truncate(start, end).to_panel()
- expected = self.panel.to_panel()['ItemA'].truncate(start, end)
-
- # TODO truncate drops index.names
- assert_frame_equal(trunced['ItemA'], expected, check_names=False)
-
- trunced = self.panel.truncate(before=start).to_panel()
- expected = self.panel.to_panel()['ItemA'].truncate(before=start)
-
- # TODO truncate drops index.names
- assert_frame_equal(trunced['ItemA'], expected, check_names=False)
-
- trunced = self.panel.truncate(after=end).to_panel()
- expected = self.panel.to_panel()['ItemA'].truncate(after=end)
-
- # TODO truncate drops index.names
- assert_frame_equal(trunced['ItemA'], expected, check_names=False)
-
- # truncate on dates that aren't in there
- wp = self.panel.to_panel()
- new_index = wp.major_axis[::5]
-
- wp2 = wp.reindex(major=new_index)
-
- lp2 = wp2.to_frame()
- lp_trunc = lp2.truncate(wp.major_axis[2], wp.major_axis[-2])
-
- wp_trunc = wp2.truncate(wp.major_axis[2], wp.major_axis[-2])
-
- assert_panel_equal(wp_trunc, lp_trunc.to_panel())
-
- # throw proper exception
- pytest.raises(Exception, lp2.truncate, wp.major_axis[-2],
- wp.major_axis[2])
-
def test_axis_dummies(self):
from pandas.core.reshape.reshape import make_axis_dummies
@@ -2567,20 +2392,6 @@ def test_get_dummies(self):
dummies = get_dummies(self.panel['Label'])
tm.assert_numpy_array_equal(dummies.values, minor_dummies.values)
- def test_mean(self):
- means = self.panel.mean(level='minor')
-
- # test versus Panel version
- wide_means = self.panel.to_panel().mean('major')
- assert_frame_equal(means, wide_means)
-
- def test_sum(self):
- sums = self.panel.sum(level='minor')
-
- # test versus Panel version
- wide_sums = self.panel.to_panel().sum('major')
- assert_frame_equal(sums, wide_sums)
-
def test_count(self):
index = self.panel.index
| My understanding is that we're removing Panel in 0.25.0. A local attempt to do this all-at-once got messy quickly (largely due to io.pytables and io.msgpack). This gets the ball rolling by removing only PanelGroupBy and DataFrame.to_panel, followed by all of the code+tests that rely on either of these. | https://api.github.com/repos/pandas-dev/pandas/pulls/25047 | 2019-01-31T03:50:28Z | 2019-02-06T03:47:26Z | 2019-02-06T03:47:26Z | 2019-02-09T08:53:29Z |
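A minimal sketch (not from the PR) of how Panel-shaped data is typically represented once `Panel` is gone: a DataFrame with a two-level MultiIndex. The names and the monthly-mean step are illustrative.

```python
import numpy as np
import pandas as pd

# items x major_axis x minor_axis, flattened into a (item, date) MultiIndex
ind = pd.date_range('1/1/2000', periods=100)
data = {item: pd.DataFrame(np.random.randn(len(ind), 4),
                           index=ind, columns=list('ABCD'))
        for item in ['Item1', 'Item2']}
df = pd.concat(data, names=['item', 'date'])

# the Panel-era "mean over the major axis, binned by month", per item
monthly = df.groupby([pd.Grouper(level='item'),
                      pd.Grouper(level='date', freq='M')]).mean()
```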
ENH: Support fold argument in Timestamp.replace | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
index a9fa8b2174dd0..8e1fc352ba4f7 100644
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -19,7 +19,7 @@ including other versions of pandas.
Other Enhancements
^^^^^^^^^^^^^^^^^^
--
+- :meth:`Timestamp.replace` now supports the ``fold`` argument to disambiguate DST transition times (:issue:`25017`)
-
-
diff --git a/pandas/_libs/tslibs/nattype.pyx b/pandas/_libs/tslibs/nattype.pyx
index a55d15a7c4e85..c719bcb2ef135 100644
--- a/pandas/_libs/tslibs/nattype.pyx
+++ b/pandas/_libs/tslibs/nattype.pyx
@@ -669,7 +669,6 @@ class NaTType(_NaT):
nanosecond : int, optional
tzinfo : tz-convertible, optional
fold : int, optional, default is 0
- added in 3.6, NotImplemented
Returns
-------
diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx
index fe0564cb62c30..85d94f822056b 100644
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -1,4 +1,5 @@
# -*- coding: utf-8 -*-
+import sys
import warnings
from cpython cimport (PyObject_RichCompareBool, PyObject_RichCompare,
@@ -43,10 +44,11 @@ from pandas._libs.tslibs.timezones import UTC
# Constants
_zero_time = datetime_time(0, 0)
_no_input = object()
-
+PY36 = sys.version_info >= (3, 6)
# ----------------------------------------------------------------------
+
def maybe_integer_op_deprecated(obj):
# GH#22535 add/sub of integers and int-arrays is deprecated
if obj.freq is not None:
@@ -1195,7 +1197,6 @@ class Timestamp(_Timestamp):
nanosecond : int, optional
tzinfo : tz-convertible, optional
fold : int, optional, default is 0
- added in 3.6, NotImplemented
Returns
-------
@@ -1252,12 +1253,16 @@ class Timestamp(_Timestamp):
# see GH#18319
ts_input = _tzinfo.localize(datetime(dts.year, dts.month, dts.day,
dts.hour, dts.min, dts.sec,
- dts.us))
+ dts.us),
+ is_dst=not bool(fold))
_tzinfo = ts_input.tzinfo
else:
- ts_input = datetime(dts.year, dts.month, dts.day,
- dts.hour, dts.min, dts.sec, dts.us,
- tzinfo=_tzinfo)
+ kwargs = {'year': dts.year, 'month': dts.month, 'day': dts.day,
+ 'hour': dts.hour, 'minute': dts.min, 'second': dts.sec,
+ 'microsecond': dts.us, 'tzinfo': _tzinfo}
+ if PY36:
+ kwargs['fold'] = fold
+ ts_input = datetime(**kwargs)
ts = convert_datetime_to_tsobject(ts_input, _tzinfo)
value = ts.value + (dts.ps // 1000)
diff --git a/pandas/tests/scalar/timestamp/test_unary_ops.py b/pandas/tests/scalar/timestamp/test_unary_ops.py
index 3f9a30d254126..adcf66200a672 100644
--- a/pandas/tests/scalar/timestamp/test_unary_ops.py
+++ b/pandas/tests/scalar/timestamp/test_unary_ops.py
@@ -8,7 +8,7 @@
from pandas._libs.tslibs import conversion
from pandas._libs.tslibs.frequencies import INVALID_FREQ_ERR_MSG
-from pandas.compat import PY3
+from pandas.compat import PY3, PY36
import pandas.util._test_decorators as td
from pandas import NaT, Timestamp
@@ -329,6 +329,19 @@ def test_replace_dst_border(self):
expected = Timestamp('2013-11-3 03:00:00', tz='America/Chicago')
assert result == expected
+ @pytest.mark.skipif(not PY36, reason='Fold not available until PY3.6')
+ @pytest.mark.parametrize('fold', [0, 1])
+ @pytest.mark.parametrize('tz', ['dateutil/Europe/London', 'Europe/London'])
+ def test_replace_dst_fold(self, fold, tz):
+ # GH 25017
+ d = datetime(2019, 10, 27, 2, 30)
+ ts = Timestamp(d, tz=tz)
+ result = ts.replace(hour=1, fold=fold)
+ expected = Timestamp(datetime(2019, 10, 27, 1, 30)).tz_localize(
+ tz, ambiguous=not fold
+ )
+ assert result == expected
+
# --------------------------------------------------------------
# Timestamp.normalize
| - [x] closes #25017
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
Since `Timestamp` has its own `replace` method, I think we can introduce this while still supporting PY3.5 (`datetime.replace` only gained the `fold` argument in 3.6), since it mimics the PY3.6 functionality | https://api.github.com/repos/pandas-dev/pandas/pulls/25046 | 2019-01-31T01:54:09Z | 2019-02-01T18:40:56Z | 2019-02-01T18:40:56Z | 2019-02-01T18:51:05Z |
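The added test boils down to this usage; a small example of the new behaviour, following the parameters in that test (Europe/London falls back at 02:00 on 2019-10-27, so 01:30 wall time occurs twice):

```python
import pandas as pd
from datetime import datetime

ts = pd.Timestamp(datetime(2019, 10, 27, 2, 30), tz='Europe/London')
first = ts.replace(hour=1, fold=0)   # pre-transition occurrence (BST, UTC+1)
second = ts.replace(hour=1, fold=1)  # post-transition occurrence (GMT, UTC+0)
```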
PERF: use new to_records() argument in to_stata() | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
index 939fb8b9415bd..130477f588c26 100644
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -23,14 +23,6 @@ Other Enhancements
-
-
-.. _whatsnew_0250.performance:
-
-Performance Improvements
-~~~~~~~~~~~~~~~~~~~~~~~~
- - Significant speedup in `SparseArray` initialization that benefits most operations, fixing performance regression introduced in v0.20.0 (:issue:`24985`)
-
-
-
.. _whatsnew_0250.api_breaking:
Backwards incompatible API changes
@@ -69,8 +61,8 @@ Removal of prior version deprecations/changes
Performance Improvements
~~~~~~~~~~~~~~~~~~~~~~~~
--
--
+- Significant speedup in `SparseArray` initialization that benefits most operations, fixing performance regression introduced in v0.20.0 (:issue:`24985`)
+- `DataFrame.to_stata()` is now faster when outputting data with any string or non-native endian columns (:issue:`25045`)
-
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
index 1b0660171ecac..0bd084f4e5df7 100644
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -2385,32 +2385,22 @@ def _prepare_data(self):
data = self._convert_strls(data)
# 3. Convert bad string data to '' and pad to correct length
- dtypes = []
- data_cols = []
- has_strings = False
+ dtypes = {}
native_byteorder = self._byteorder == _set_endianness(sys.byteorder)
for i, col in enumerate(data):
typ = typlist[i]
if typ <= self._max_string_length:
- has_strings = True
data[col] = data[col].fillna('').apply(_pad_bytes, args=(typ,))
stype = 'S{type}'.format(type=typ)
- dtypes.append(('c' + str(i), stype))
- string = data[col].str.encode(self._encoding)
- data_cols.append(string.values.astype(stype))
+ dtypes[col] = stype
+ data[col] = data[col].str.encode(self._encoding).astype(stype)
else:
- values = data[col].values
dtype = data[col].dtype
if not native_byteorder:
dtype = dtype.newbyteorder(self._byteorder)
- dtypes.append(('c' + str(i), dtype))
- data_cols.append(values)
- dtypes = np.dtype(dtypes)
+ dtypes[col] = dtype
- if has_strings or not native_byteorder:
- self.data = np.fromiter(zip(*data_cols), dtype=dtypes)
- else:
- self.data = data.to_records(index=False)
+ self.data = data.to_records(index=False, column_dtypes=dtypes)
def _write_data(self):
data = self.data
| The `to_stata()` function spends ~25-50% of its time massaging string or non-native-endian data and creating a `np.recarray` in a roundabout way. Using `column_dtypes` in `to_records()` allows some cleanup and a decent performance bump:
```
$ asv compare upstream/master HEAD -s --sort ratio
Benchmarks that have improved:
before after ratio
[4cbee179] [9bf67cc5]
<to_stata~1> <to_stata>
- 709±9ms 552±20ms 0.78 io.stata.Stata.time_write_stata('tw')
- 409±30ms 233±30ms 0.57 io.stata.Stata.time_write_stata('tq')
- 402±20ms 227±30ms 0.56 io.stata.Stata.time_write_stata('tc')
- 398±9ms 222±30ms 0.56 io.stata.Stata.time_write_stata('th')
- 420±20ms 231±30ms 0.55 io.stata.Stata.time_write_stata('tm')
- 396±10ms 214±3ms 0.54 io.stata.Stata.time_write_stata('ty')
- 389±8ms 207±10ms 0.53 io.stata.Stata.time_write_stata('td')
Benchmarks that have stayed the same:
before after ratio
[4cbee179] [9bf67cc5]
<to_stata~1> <to_stata>
527±6ms 563±30ms 1.07 io.stata.Stata.time_read_stata('th')
507±20ms 531±9ms 1.05 io.stata.Stata.time_read_stata('ty')
519±10ms 543±30ms 1.05 io.stata.Stata.time_read_stata('tm')
484±10ms 504±10ms 1.04 io.stata.Stata.time_read_stata('tw')
149±6ms 152±2ms 1.02 io.stata.Stata.time_read_stata('tc')
152±3ms 153±8ms 1.01 io.stata.Stata.time_read_stata('td')
533±20ms 533±6ms 1.00 io.stata.Stata.time_read_stata('tq')
```
- [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/25045 | 2019-01-30T23:41:47Z | 2019-02-01T20:56:06Z | 2019-02-01T20:56:06Z | 2019-02-01T20:56:09Z |
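For reference, a small sketch of the `to_records()` feature the patch leans on: `column_dtypes` (added in 0.24) converts columns while building the record array, which is what replaces the manual `np.fromiter` path above.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'s': ['ab', 'cd'], 'x': np.arange(2, dtype='int64')})
rec = df.to_records(index=False,
                    column_dtypes={'s': 'S2',  # fixed-width bytes, as Stata needs
                                   'x': np.dtype('int64').newbyteorder('>')})
rec.dtype  # fields: ('s', 'S2'), ('x', '>i8')
```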
CLN: to_pickle internals | diff --git a/pandas/compat/pickle_compat.py b/pandas/compat/pickle_compat.py
index 61295b8249f58..8f16f8154b952 100644
--- a/pandas/compat/pickle_compat.py
+++ b/pandas/compat/pickle_compat.py
@@ -201,7 +201,7 @@ def load_newobj_ex(self):
pass
-def load(fh, encoding=None, compat=False, is_verbose=False):
+def load(fh, encoding=None, is_verbose=False):
"""load a pickle, with a provided encoding
if compat is True:
@@ -212,7 +212,6 @@ def load(fh, encoding=None, compat=False, is_verbose=False):
----------
fh : a filelike object
encoding : an optional encoding
- compat : provide Series compatibility mode, boolean, default False
is_verbose : show exception output
"""
diff --git a/pandas/io/pickle.py b/pandas/io/pickle.py
index 789f55a62dc58..ab4a266853a78 100644
--- a/pandas/io/pickle.py
+++ b/pandas/io/pickle.py
@@ -1,8 +1,7 @@
""" pickle compat """
import warnings
-import numpy as np
-from numpy.lib.format import read_array, write_array
+from numpy.lib.format import read_array
from pandas.compat import PY3, BytesIO, cPickle as pkl, pickle_compat as pc
@@ -76,6 +75,7 @@ def to_pickle(obj, path, compression='infer', protocol=pkl.HIGHEST_PROTOCOL):
try:
f.write(pkl.dumps(obj, protocol=protocol))
finally:
+ f.close()
for _f in fh:
_f.close()
@@ -138,63 +138,32 @@ def read_pickle(path, compression='infer'):
>>> os.remove("./dummy.pkl")
"""
path = _stringify_path(path)
+ f, fh = _get_handle(path, 'rb', compression=compression, is_text=False)
+
+ # 1) try with cPickle
+ # 2) try with the compat pickle to handle subclass changes
+ # 3) pass encoding only if its not None as py2 doesn't handle the param
- def read_wrapper(func):
- # wrapper file handle open/close operation
- f, fh = _get_handle(path, 'rb',
- compression=compression,
- is_text=False)
- try:
- return func(f)
- finally:
- for _f in fh:
- _f.close()
-
- def try_read(path, encoding=None):
- # try with cPickle
- # try with current pickle, if we have a Type Error then
- # try with the compat pickle to handle subclass changes
- # pass encoding only if its not None as py2 doesn't handle
- # the param
-
- # cpickle
- # GH 6899
- try:
- with warnings.catch_warnings(record=True):
- # We want to silence any warnings about, e.g. moved modules.
- warnings.simplefilter("ignore", Warning)
- return read_wrapper(lambda f: pkl.load(f))
- except Exception: # noqa: E722
- # reg/patched pickle
- # compat not used in pandas/compat/pickle_compat.py::load
- # TODO: remove except block OR modify pc.load to use compat
- try:
- return read_wrapper(
- lambda f: pc.load(f, encoding=encoding, compat=False))
- # compat pickle
- except Exception: # noqa: E722
- return read_wrapper(
- lambda f: pc.load(f, encoding=encoding, compat=True))
try:
- return try_read(path)
+ with warnings.catch_warnings(record=True):
+ # We want to silence any warnings about, e.g. moved modules.
+ warnings.simplefilter("ignore", Warning)
+ return pkl.load(f)
except Exception: # noqa: E722
- if PY3:
- return try_read(path, encoding='latin1')
- raise
-
+ try:
+ return pc.load(f, encoding=None)
+ except Exception: # noqa: E722
+ if PY3:
+ return pc.load(f, encoding='latin1')
+ raise
+ finally:
+ f.close()
+ for _f in fh:
+ _f.close()
# compat with sparse pickle / unpickle
-def _pickle_array(arr):
- arr = arr.view(np.ndarray)
-
- buf = BytesIO()
- write_array(buf, arr)
-
- return buf.getvalue()
-
-
def _unpickle_array(bytes):
arr = read_array(BytesIO(bytes))
| - [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/25044 | 2019-01-30T23:27:07Z | 2019-02-01T18:50:36Z | 2019-02-01T18:50:36Z | 2019-02-01T18:51:56Z |
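A hand-written sketch of the flattened fallback chain after this cleanup (not copied from the patch; `pc` is pandas' internal pickle-compat shim, and the file-handle bookkeeping is omitted):

```python
import pickle
import warnings

from pandas.compat import pickle_compat as pc  # internal module

def _read_pickle(f):
    # 1) plain pickle; 2) pandas compat shim for renamed/moved classes;
    # 3) latin-1 decoding for Python-2-era pickles (Python 3 only)
    try:
        with warnings.catch_warnings(record=True):
            warnings.simplefilter("ignore", Warning)
            return pickle.load(f)
    except Exception:
        try:
            return pc.load(f, encoding=None)
        except Exception:
            return pc.load(f, encoding='latin1')
```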
Backport PR #24993 on branch 0.24.x (Test nested PandasArray) | diff --git a/pandas/core/arrays/numpy_.py b/pandas/core/arrays/numpy_.py
index 47517782e2bbf..791ff44303e96 100644
--- a/pandas/core/arrays/numpy_.py
+++ b/pandas/core/arrays/numpy_.py
@@ -222,7 +222,7 @@ def __getitem__(self, item):
item = item._ndarray
result = self._ndarray[item]
- if not lib.is_scalar(result):
+ if not lib.is_scalar(item):
result = type(self)(result)
return result
diff --git a/pandas/tests/extension/numpy_/__init__.py b/pandas/tests/extension/numpy_/__init__.py
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/pandas/tests/extension/numpy_/conftest.py b/pandas/tests/extension/numpy_/conftest.py
new file mode 100644
index 0000000000000..daa93571c2957
--- /dev/null
+++ b/pandas/tests/extension/numpy_/conftest.py
@@ -0,0 +1,38 @@
+import numpy as np
+import pytest
+
+from pandas.core.arrays.numpy_ import PandasArray
+
+
+@pytest.fixture
+def allow_in_pandas(monkeypatch):
+ """
+ A monkeypatch to tell pandas to let us in.
+
+ By default, passing a PandasArray to an index / series / frame
+ constructor will unbox that PandasArray to an ndarray, and treat
+ it as a non-EA column. We don't want people using EAs without
+ reason.
+
+ The mechanism for this is a check against ABCPandasArray
+ in each constructor.
+
+ But, for testing, we need to allow them in pandas. So we patch
+ the _typ of PandasArray, so that we evade the ABCPandasArray
+ check.
+ """
+ with monkeypatch.context() as m:
+ m.setattr(PandasArray, '_typ', 'extension')
+ yield
+
+
+@pytest.fixture
+def na_value():
+ return np.nan
+
+
+@pytest.fixture
+def na_cmp():
+ def cmp(a, b):
+ return np.isnan(a) and np.isnan(b)
+ return cmp
diff --git a/pandas/tests/extension/test_numpy.py b/pandas/tests/extension/numpy_/test_numpy.py
similarity index 84%
rename from pandas/tests/extension/test_numpy.py
rename to pandas/tests/extension/numpy_/test_numpy.py
index 7ca6882c7441b..4c93d5ee0b9d7 100644
--- a/pandas/tests/extension/test_numpy.py
+++ b/pandas/tests/extension/numpy_/test_numpy.py
@@ -6,7 +6,7 @@
from pandas.core.arrays.numpy_ import PandasArray, PandasDtype
import pandas.util.testing as tm
-from . import base
+from .. import base
@pytest.fixture
@@ -14,28 +14,6 @@ def dtype():
return PandasDtype(np.dtype('float'))
-@pytest.fixture
-def allow_in_pandas(monkeypatch):
- """
- A monkeypatch to tells pandas to let us in.
-
- By default, passing a PandasArray to an index / series / frame
- constructor will unbox that PandasArray to an ndarray, and treat
- it as a non-EA column. We don't want people using EAs without
- reason.
-
- The mechanism for this is a check against ABCPandasArray
- in each constructor.
-
- But, for testing, we need to allow them in pandas. So we patch
- the _typ of PandasArray, so that we evade the ABCPandasArray
- check.
- """
- with monkeypatch.context() as m:
- m.setattr(PandasArray, '_typ', 'extension')
- yield
-
-
@pytest.fixture
def data(allow_in_pandas, dtype):
return PandasArray(np.arange(1, 101, dtype=dtype._dtype))
@@ -46,18 +24,6 @@ def data_missing(allow_in_pandas):
return PandasArray(np.array([np.nan, 1.0]))
-@pytest.fixture
-def na_value():
- return np.nan
-
-
-@pytest.fixture
-def na_cmp():
- def cmp(a, b):
- return np.isnan(a) and np.isnan(b)
- return cmp
-
-
@pytest.fixture
def data_for_sorting(allow_in_pandas):
"""Length-3 array with a known sort order.
diff --git a/pandas/tests/extension/numpy_/test_numpy_nested.py b/pandas/tests/extension/numpy_/test_numpy_nested.py
new file mode 100644
index 0000000000000..cf9b34dd08798
--- /dev/null
+++ b/pandas/tests/extension/numpy_/test_numpy_nested.py
@@ -0,0 +1,286 @@
+"""
+Tests for PandasArray with nested data. Users typically won't create
+these objects via `pd.array`, but they can show up through `.array`
+on a Series with nested data.
+
+We partition these tests into their own file, as many of the base
+tests fail, as they aren't appropriate for nested data. It is easier
+to have a separate file with its own data generating fixtures, than
+trying to skip based upon the value of a fixture.
+"""
+import pytest
+
+import pandas as pd
+from pandas.core.arrays.numpy_ import PandasArray, PandasDtype
+
+from .. import base
+
+# For NumPy <1.16, np.array([np.nan, (1,)]) raises
+# ValueError: setting an array element with a sequence.
+np = pytest.importorskip('numpy', minversion='1.16.0')
+
+
+@pytest.fixture
+def dtype():
+ return PandasDtype(np.dtype('object'))
+
+
+@pytest.fixture
+def data(allow_in_pandas, dtype):
+ return pd.Series([(i,) for i in range(100)]).array
+
+
+@pytest.fixture
+def data_missing(allow_in_pandas):
+ return PandasArray(np.array([np.nan, (1,)]))
+
+
+@pytest.fixture
+def data_for_sorting(allow_in_pandas):
+ """Length-3 array with a known sort order.
+
+ This should be three items [B, C, A] with
+ A < B < C
+ """
+ # Use an empty tuple for first element, then remove,
+ # to disable np.array's shape inference.
+ return PandasArray(
+ np.array([(), (2,), (3,), (1,)])[1:]
+ )
+
+
+@pytest.fixture
+def data_missing_for_sorting(allow_in_pandas):
+ """Length-3 array with a known sort order.
+
+ This should be three items [B, NA, A] with
+ A < B and NA missing.
+ """
+ return PandasArray(
+ np.array([(1,), np.nan, (0,)])
+ )
+
+
+@pytest.fixture
+def data_for_grouping(allow_in_pandas):
+ """Data for factorization, grouping, and unique tests.
+
+ Expected to be like [B, B, NA, NA, A, A, B, C]
+
+ Where A < B < C and NA is missing
+ """
+ a, b, c = (1,), (2,), (3,)
+ return PandasArray(np.array(
+ [b, b, np.nan, np.nan, a, a, b, c]
+ ))
+
+
+skip_nested = pytest.mark.skip(reason="Skipping for nested PandasArray")
+
+
+class BaseNumPyTests(object):
+ pass
+
+
+class TestCasting(BaseNumPyTests, base.BaseCastingTests):
+
+ @skip_nested
+ def test_astype_str(self, data):
+ pass
+
+
+class TestConstructors(BaseNumPyTests, base.BaseConstructorsTests):
+ @pytest.mark.skip(reason="We don't register our dtype")
+ # We don't want to register. This test should probably be split in two.
+ def test_from_dtype(self, data):
+ pass
+
+ @skip_nested
+ def test_array_from_scalars(self, data):
+ pass
+
+
+class TestDtype(BaseNumPyTests, base.BaseDtypeTests):
+
+ @pytest.mark.skip(reason="Incorrect expected.")
+ # we unsurprisingly clash with a NumPy name.
+ def test_check_dtype(self, data):
+ pass
+
+
+class TestGetitem(BaseNumPyTests, base.BaseGetitemTests):
+
+ @skip_nested
+ def test_getitem_scalar(self, data):
+ pass
+
+ @skip_nested
+ def test_take_series(self, data):
+ pass
+
+
+class TestGroupby(BaseNumPyTests, base.BaseGroupbyTests):
+ @skip_nested
+ def test_groupby_extension_apply(self, data_for_grouping, op):
+ pass
+
+
+class TestInterface(BaseNumPyTests, base.BaseInterfaceTests):
+ @skip_nested
+ def test_array_interface(self, data):
+ # NumPy array shape inference
+ pass
+
+
+class TestMethods(BaseNumPyTests, base.BaseMethodsTests):
+
+ @pytest.mark.skip(reason="TODO: remove?")
+ def test_value_counts(self, all_data, dropna):
+ pass
+
+ @pytest.mark.skip(reason="Incorrect expected")
+ # We have a bool dtype, so the result is an ExtensionArray
+ # but expected is not
+ def test_combine_le(self, data_repeated):
+ super(TestMethods, self).test_combine_le(data_repeated)
+
+ @skip_nested
+ def test_combine_add(self, data_repeated):
+ # Not numeric
+ pass
+
+ @skip_nested
+ def test_shift_fill_value(self, data):
+ # np.array shape inference. Shift implementation fails.
+ super().test_shift_fill_value(data)
+
+ @skip_nested
+ def test_unique(self, data, box, method):
+ # Fails creating expected
+ pass
+
+ @skip_nested
+ def test_fillna_copy_frame(self, data_missing):
+ # The "scalar" for this array isn't a scalar.
+ pass
+
+ @skip_nested
+ def test_fillna_copy_series(self, data_missing):
+ # The "scalar" for this array isn't a scalar.
+ pass
+
+ @skip_nested
+ def test_hash_pandas_object_works(self, data, as_frame):
+ # ndarray of tuples not hashable
+ pass
+
+ @skip_nested
+ def test_searchsorted(self, data_for_sorting, as_series):
+ # Test setup fails.
+ pass
+
+ @skip_nested
+ def test_where_series(self, data, na_value, as_frame):
+ # Test setup fails.
+ pass
+
+ @skip_nested
+ def test_repeat(self, data, repeats, as_series, use_numpy):
+ # Fails creating expected
+ pass
+
+
+class TestPrinting(BaseNumPyTests, base.BasePrintingTests):
+ pass
+
+
+class TestMissing(BaseNumPyTests, base.BaseMissingTests):
+
+ @skip_nested
+ def test_fillna_scalar(self, data_missing):
+ # Non-scalar "scalar" values.
+ pass
+
+ @skip_nested
+ def test_fillna_series_method(self, data_missing, method):
+ # Non-scalar "scalar" values.
+ pass
+
+ @skip_nested
+ def test_fillna_series(self, data_missing):
+ # Non-scalar "scalar" values.
+ pass
+
+ @skip_nested
+ def test_fillna_frame(self, data_missing):
+ # Non-scalar "scalar" values.
+ pass
+
+
+class TestReshaping(BaseNumPyTests, base.BaseReshapingTests):
+
+ @pytest.mark.skip("Incorrect parent test")
+ # not actually a mixed concat, since we concat int and int.
+ def test_concat_mixed_dtypes(self, data):
+ super(TestReshaping, self).test_concat_mixed_dtypes(data)
+
+ @skip_nested
+ def test_merge(self, data, na_value):
+ # Fails creating expected
+ pass
+
+ @skip_nested
+ def test_merge_on_extension_array(self, data):
+ # Fails creating expected
+ pass
+
+ @skip_nested
+ def test_merge_on_extension_array_duplicates(self, data):
+ # Fails creating expected
+ pass
+
+
+class TestSetitem(BaseNumPyTests, base.BaseSetitemTests):
+
+ @skip_nested
+ def test_setitem_scalar_series(self, data, box_in_series):
+ pass
+
+ @skip_nested
+ def test_setitem_sequence(self, data, box_in_series):
+ pass
+
+ @skip_nested
+ def test_setitem_sequence_mismatched_length_raises(self, data, as_array):
+ pass
+
+ @skip_nested
+ def test_setitem_sequence_broadcasts(self, data, box_in_series):
+ pass
+
+ @skip_nested
+ def test_setitem_loc_scalar_mixed(self, data):
+ pass
+
+ @skip_nested
+ def test_setitem_loc_scalar_multiple_homogoneous(self, data):
+ pass
+
+ @skip_nested
+ def test_setitem_iloc_scalar_mixed(self, data):
+ pass
+
+ @skip_nested
+ def test_setitem_iloc_scalar_multiple_homogoneous(self, data):
+ pass
+
+ @skip_nested
+ def test_setitem_mask_broadcast(self, data, setter):
+ pass
+
+ @skip_nested
+ def test_setitem_scalar_key_sequence_raise(self, data):
+ pass
+
+
+# Skip Arithmetics, NumericReduce, BooleanReduce, Parsing
| Backport PR #24993: Test nested PandasArray | https://api.github.com/repos/pandas-dev/pandas/pulls/25042 | 2019-01-30T21:18:28Z | 2019-01-30T22:28:45Z | 2019-01-30T22:28:45Z | 2019-01-30T22:28:46Z |
Backport PR #25033 on branch 0.24.x (BUG: Fixed merging on tz-aware) | diff --git a/doc/source/whatsnew/v0.24.1.rst b/doc/source/whatsnew/v0.24.1.rst
index 57fdff041db28..047404e93914b 100644
--- a/doc/source/whatsnew/v0.24.1.rst
+++ b/doc/source/whatsnew/v0.24.1.rst
@@ -23,6 +23,7 @@ Fixed Regressions
- Bug in :meth:`DataFrame.itertuples` with ``records`` orient raising an ``AttributeError`` when the ``DataFrame`` contained more than 255 columns (:issue:`24939`)
- Bug in :meth:`DataFrame.itertuples` orient converting integer column names to strings prepended with an underscore (:issue:`24940`)
- Fixed regression in :class:`Index.intersection` incorrectly sorting the values by default (:issue:`24959`).
+- Fixed regression in :func:`merge` when merging an empty ``DataFrame`` with multiple timezone-aware columns on one of the timezone-aware columns (:issue:`25014`).
.. _whatsnew_0241.enhancements:
diff --git a/pandas/core/internals/concat.py b/pandas/core/internals/concat.py
index 4a16707a376e9..640587b7f9f31 100644
--- a/pandas/core/internals/concat.py
+++ b/pandas/core/internals/concat.py
@@ -183,7 +183,7 @@ def get_reindexed_values(self, empty_dtype, upcasted_na):
is_datetime64tz_dtype(empty_dtype)):
if self.block is None:
array = empty_dtype.construct_array_type()
- return array(np.full(self.shape[1], fill_value),
+ return array(np.full(self.shape[1], fill_value.value),
dtype=empty_dtype)
pass
elif getattr(self.block, 'is_categorical', False):
@@ -335,8 +335,10 @@ def get_empty_dtype_and_na(join_units):
elif 'category' in upcast_classes:
return np.dtype(np.object_), np.nan
elif 'datetimetz' in upcast_classes:
+ # GH-25014. We use NaT instead of iNaT, since this eventually
+ # ends up in DatetimeArray.take, which does not allow iNaT.
dtype = upcast_classes['datetimetz']
- return dtype[0], tslibs.iNaT
+ return dtype[0], tslibs.NaT
elif 'datetime' in upcast_classes:
return np.dtype('M8[ns]'), tslibs.iNaT
elif 'timedelta' in upcast_classes:
diff --git a/pandas/tests/reshape/merge/test_merge.py b/pandas/tests/reshape/merge/test_merge.py
index f0a3ddc8ce8a4..1e60fdbebfeb3 100644
--- a/pandas/tests/reshape/merge/test_merge.py
+++ b/pandas/tests/reshape/merge/test_merge.py
@@ -616,6 +616,24 @@ def test_merge_on_datetime64tz(self):
assert result['value_x'].dtype == 'datetime64[ns, US/Eastern]'
assert result['value_y'].dtype == 'datetime64[ns, US/Eastern]'
+ def test_merge_on_datetime64tz_empty(self):
+ # https://github.com/pandas-dev/pandas/issues/25014
+ dtz = pd.DatetimeTZDtype(tz='UTC')
+ right = pd.DataFrame({'date': [pd.Timestamp('2018', tz=dtz.tz)],
+ 'value': [4.0],
+ 'date2': [pd.Timestamp('2019', tz=dtz.tz)]},
+ columns=['date', 'value', 'date2'])
+ left = right[:0]
+ result = left.merge(right, on='date')
+ expected = pd.DataFrame({
+ 'value_x': pd.Series(dtype=float),
+ 'date2_x': pd.Series(dtype=dtz),
+ 'date': pd.Series(dtype=dtz),
+ 'value_y': pd.Series(dtype=float),
+ 'date2_y': pd.Series(dtype=dtz),
+ }, columns=['value_x', 'date2_x', 'date', 'value_y', 'date2_y'])
+ tm.assert_frame_equal(result, expected)
+
def test_merge_datetime64tz_with_dst_transition(self):
# GH 18885
df1 = pd.DataFrame(pd.date_range(
| Backport PR #25033: BUG: Fixed merging on tz-aware | https://api.github.com/repos/pandas-dev/pandas/pulls/25041 | 2019-01-30T21:17:40Z | 2019-01-30T22:27:20Z | 2019-01-30T22:27:20Z | 2019-01-30T22:27:20Z |
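The repro from the test, as a standalone snippet (before this fix, merging an empty frame carrying several tz-aware columns on one of them failed; see GH-25014):

```python
import pandas as pd

right = pd.DataFrame({'date': [pd.Timestamp('2018', tz='UTC')],
                      'value': [4.0],
                      'date2': [pd.Timestamp('2019', tz='UTC')]})
left = right[:0]              # empty, same tz-aware columns
left.merge(right, on='date')  # now returns an empty frame with correct dtypes
```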
BUG: to_clipboard text truncated for Python 3 on Windows for UTF-16 text | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
index a9fa8b2174dd0..880eaed3b5dfb 100644
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -163,6 +163,7 @@ MultiIndex
I/O
^^^
+- Fixed bug in missing text when using :meth:`to_clipboard` if copying utf-16 characters in Python 3 on Windows (:issue:`25040`)
-
-
-
diff --git a/pandas/io/clipboard/windows.py b/pandas/io/clipboard/windows.py
index 3d979a61b5f2d..4f5275af693b7 100644
--- a/pandas/io/clipboard/windows.py
+++ b/pandas/io/clipboard/windows.py
@@ -29,6 +29,7 @@ def init_windows_clipboard():
HINSTANCE, HMENU, BOOL, UINT, HANDLE)
windll = ctypes.windll
+ msvcrt = ctypes.CDLL('msvcrt')
safeCreateWindowExA = CheckedCall(windll.user32.CreateWindowExA)
safeCreateWindowExA.argtypes = [DWORD, LPCSTR, LPCSTR, DWORD, INT, INT,
@@ -71,6 +72,10 @@ def init_windows_clipboard():
safeGlobalUnlock.argtypes = [HGLOBAL]
safeGlobalUnlock.restype = BOOL
+ wcslen = CheckedCall(msvcrt.wcslen)
+ wcslen.argtypes = [c_wchar_p]
+ wcslen.restype = UINT
+
GMEM_MOVEABLE = 0x0002
CF_UNICODETEXT = 13
@@ -129,13 +134,13 @@ def copy_windows(text):
# If the hMem parameter identifies a memory object,
# the object must have been allocated using the
# function with the GMEM_MOVEABLE flag.
- count = len(text) + 1
+ count = wcslen(text) + 1
handle = safeGlobalAlloc(GMEM_MOVEABLE,
count * sizeof(c_wchar))
locked_handle = safeGlobalLock(handle)
- ctypes.memmove(c_wchar_p(locked_handle),
- c_wchar_p(text), count * sizeof(c_wchar))
+ ctypes.memmove(c_wchar_p(locked_handle), c_wchar_p(text),
+ count * sizeof(c_wchar))
safeGlobalUnlock(handle)
safeSetClipboardData(CF_UNICODETEXT, handle)
diff --git a/pandas/tests/io/test_clipboard.py b/pandas/tests/io/test_clipboard.py
index 8eb26d9f3dec5..565db92210b0a 100644
--- a/pandas/tests/io/test_clipboard.py
+++ b/pandas/tests/io/test_clipboard.py
@@ -12,6 +12,7 @@
from pandas.util import testing as tm
from pandas.util.testing import makeCustomDataframe as mkdf
+from pandas.io.clipboard import clipboard_get, clipboard_set
from pandas.io.clipboard.exceptions import PyperclipException
try:
@@ -30,8 +31,8 @@ def build_kwargs(sep, excel):
return kwargs
-@pytest.fixture(params=['delims', 'utf8', 'string', 'long', 'nonascii',
- 'colwidth', 'mixed', 'float', 'int'])
+@pytest.fixture(params=['delims', 'utf8', 'utf16', 'string', 'long',
+ 'nonascii', 'colwidth', 'mixed', 'float', 'int'])
def df(request):
data_type = request.param
@@ -41,6 +42,10 @@ def df(request):
elif data_type == 'utf8':
return pd.DataFrame({'a': ['µasd', 'Ωœ∑´'],
'b': ['øπ∆˚¬', 'œ∑´®']})
+ elif data_type == 'utf16':
+ return pd.DataFrame({'a': ['\U0001f44d\U0001f44d',
+ '\U0001f44d\U0001f44d'],
+ 'b': ['abc', 'def']})
elif data_type == 'string':
return mkdf(5, 3, c_idx_type='s', r_idx_type='i',
c_idx_names=[None], r_idx_names=[None])
@@ -225,3 +230,14 @@ def test_invalid_encoding(self, df):
@pytest.mark.parametrize('enc', ['UTF-8', 'utf-8', 'utf8'])
def test_round_trip_valid_encodings(self, enc, df):
self.check_round_trip_frame(df, encoding=enc)
+
+
+@pytest.mark.single
+@pytest.mark.clipboard
+@pytest.mark.skipif(not _DEPS_INSTALLED,
+ reason="clipboard primitives not installed")
+@pytest.mark.parametrize('data', [u'\U0001f44d...', u'Ωœ∑´...', 'abcd...'])
+def test_raw_roundtrip(data):
+ # PR #25040 wide unicode wasn't copied correctly on PY3 on windows
+ clipboard_set(data)
+ assert data == clipboard_get()
| - [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
For Windows users whose Python is compiled with UCS-4 (primarily Python 3), tables copied to the clipboard are missing data from the end when the dataframe contains any Unicode characters with a 4-byte representation in UTF-16 (i.e. in the U+010000 to U+10FFFF range). The bug can be reproduced here:
```python
import pandas
obj=pandas.DataFrame([u'\U0001f44d\U0001f44d',
u'12345'])
obj.to_clipboard()
```
where the clipboard text results in
```
0
0 👍👍
1 1234
```
One character is chopped from the end of the clipboard string for each 4-byte unicode character copied.
or more to the point:
```python
pandas.io.clipboard.clipboard_set(u'\U0001f44d 12345')
```
produces
```
👍 1234
```
The cause of this issue is that ```len(u'\U0001f44d')==1``` when Python is built with UCS-4, and pandas allocates 2 bytes per Python character in the clipboard buffer, but the character consumes 4 bytes, displacing another character at the end of the string to be copied. In UCS-2 (most Python 2 builds), ```len(u'\U0001f44d')==2```, so 4 bytes are allocated and consumed by the character.
My proposed change (affecting only Windows clipboard operations) first converts the text to UTF-16 little-endian, because that is the format used by Windows, then measures the length of the resulting byte string, rather than using Python's ```len(text) * 2``` to determine how many bytes should be allocated to the clipboard buffer.
I've tested this change with Python 3.6 and 2.7 on Windows 7 x64. I don't expect this to cause issues with other versions of Windows, but I would appreciate it if anyone on older versions of Windows would double-check.
| https://api.github.com/repos/pandas-dev/pandas/pulls/25040 | 2019-01-30T21:17:08Z | 2019-02-01T20:53:57Z | 2019-02-01T20:53:56Z | 2019-02-01T20:53:59Z |
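Why `len(text)` under-counts here — an illustrative snippet (Python 3 / UCS-4 build assumed):

```python
s = u'\U0001f44d'                 # U+1F44D, outside the BMP
len(s)                            # 1 Python code point on UCS-4 builds
len(s.encode('utf-16-le')) // 2   # 2 UTF-16 code units -- what wcslen counts
```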
BUG: avoid usage in_qtconsole for recent IPython versions | diff --git a/doc/source/whatsnew/v0.24.1.rst b/doc/source/whatsnew/v0.24.1.rst
index 047404e93914b..521319c55a503 100644
--- a/doc/source/whatsnew/v0.24.1.rst
+++ b/doc/source/whatsnew/v0.24.1.rst
@@ -83,7 +83,7 @@ Bug Fixes
**Other**
--
+- Fixed AttributeError when printing a DataFrame's HTML repr after accessing the IPython config object (:issue:`25036`)
-
.. _whatsnew_0.241.contributors:
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 2049a8aa960bf..78c9f2aa96472 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -17,6 +17,7 @@
import itertools
import sys
import warnings
+from distutils.version import LooseVersion
from textwrap import dedent
import numpy as np
@@ -646,9 +647,15 @@ def _repr_html_(self):
# XXX: In IPython 3.x and above, the Qt console will not attempt to
# display HTML, so this check can be removed when support for
# IPython 2.x is no longer needed.
- if console.in_qtconsole():
- # 'HTML output is disabled in QtConsole'
- return None
+ try:
+ import IPython
+ except ImportError:
+ pass
+ else:
+ if LooseVersion(IPython.__version__) < LooseVersion('3.0'):
+ if console.in_qtconsole():
+ # 'HTML output is disabled in QtConsole'
+ return None
if self._info_repr():
buf = StringIO(u(""))
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index 5d922ccaf1fd5..b0cf5a2f17609 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -12,6 +12,7 @@
import os
import re
import sys
+import textwrap
import warnings
import dateutil
@@ -2777,3 +2778,17 @@ def test_format_percentiles():
fmt.format_percentiles([2, 0.1, 0.5])
with pytest.raises(ValueError, match=msg):
fmt.format_percentiles([0.1, 0.5, 'a'])
+
+
+def test_repr_html_ipython_config(ip):
+ code = textwrap.dedent("""\
+ import pandas as pd
+ df = pd.DataFrame({"A": [1, 2]})
+ df._repr_html_()
+
+ cfg = get_ipython().config
+ cfg['IPKernelApp']['parent_appname']
+ df._repr_html_()
+ """)
+ result = ip.run_cell(code)
+ assert not result.error_in_exec
| I've verified this manually with qtconsole 4.4.0, but if others want to check that'd be helpful.
![screen shot 2019-01-30 at 2 22 06 pm](https://user-images.githubusercontent.com/1312546/52010178-794a8080-249a-11e9-9376-a9254ce6bbf9.png)
What release should this be done in? 0.24.1, 0.24.2 or 0.25.0?
Closes https://github.com/pandas-dev/pandas/issues/25036 | https://api.github.com/repos/pandas-dev/pandas/pulls/25039 | 2019-01-30T20:23:35Z | 2019-01-31T16:02:38Z | 2019-01-31T16:02:37Z | 2019-01-31T16:02:38Z |
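The test added above scripts essentially this session; to reproduce by hand inside IPython (`get_ipython` only exists there):

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2]})
df._repr_html_()                 # fine

cfg = get_ipython().config       # touching the config object...
cfg['IPKernelApp']['parent_appname']
df._repr_html_()                 # ...previously raised AttributeError
```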
DOC: fix error in documentation #24981 | diff --git a/doc/source/user_guide/groupby.rst b/doc/source/user_guide/groupby.rst
index 953f40d1afebe..2c2e5c5425216 100644
--- a/doc/source/user_guide/groupby.rst
+++ b/doc/source/user_guide/groupby.rst
@@ -15,7 +15,7 @@ steps:
Out of these, the split step is the most straightforward. In fact, in many
situations we may wish to split the data set into groups and do something with
-those groups. In the apply step, we might wish to one of the
+those groups. In the apply step, we might wish to do one of the
following:
* **Aggregation**: compute a summary statistic (or statistics) for each
| Added "do" in the last sentence of the second paragraph.
- [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/25038 | 2019-01-30T20:06:46Z | 2019-01-30T21:56:44Z | 2019-01-30T21:56:44Z | 2019-01-30T21:56:47Z |
DOC: Example from docstring was proposing wrong interpolation order | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index a351233a77465..cff685c2ad7cb 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -6601,7 +6601,7 @@ def replace(self, to_replace=None, value=None, inplace=False, limit=None,
'barycentric', 'polynomial': Passed to
`scipy.interpolate.interp1d`. Both 'polynomial' and 'spline'
require that you also specify an `order` (int),
- e.g. ``df.interpolate(method='polynomial', order=4)``.
+ e.g. ``df.interpolate(method='polynomial', order=5)``.
These use the numerical values of the index.
* 'krogh', 'piecewise_polynomial', 'spline', 'pchip', 'akima':
Wrappers around the SciPy interpolation methods of similar
| Currently, the docstring explaining interpolation proposes using polynomial interpolation with order equal to 4. Unfortunately, scipy does not allow that value to be used, throwing a ValueError from here: https://github.com/scipy/scipy/blob/5875fd397eb4e6adcfa0c65f7b9006424c066cb0/scipy/interpolate/_bsplines.py#L583
Looking at the blame, the last edit was 5 years ago, so this does not depend on any particularly recent scipy version.
Interpolations with order equal to 2 that are spread around the docstrings (and doctests) do not pass through the method throwing that exception, so they are okay.
- [-] closes #xxxx
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [-] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/25035 | 2019-01-30T17:55:28Z | 2019-01-31T12:25:55Z | 2019-01-31T12:25:55Z | 2019-01-31T12:25:57Z |
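A quick check of the corrected docstring example (requires scipy; per this PR, `order=4` raises a ValueError inside scipy, while odd orders work):

```python
import numpy as np
import pandas as pd

s = pd.Series([0.0, np.nan, 2.0, np.nan, 4.0, np.nan, 6.0])
s.interpolate(method='polynomial', order=5)    # ok
# s.interpolate(method='polynomial', order=4)  # ValueError from scipy
```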
BUG: Fixed merging on tz-aware | diff --git a/doc/source/whatsnew/v0.24.1.rst b/doc/source/whatsnew/v0.24.1.rst
index 57fdff041db28..047404e93914b 100644
--- a/doc/source/whatsnew/v0.24.1.rst
+++ b/doc/source/whatsnew/v0.24.1.rst
@@ -23,6 +23,7 @@ Fixed Regressions
- Bug in :meth:`DataFrame.itertuples` with ``records`` orient raising an ``AttributeError`` when the ``DataFrame`` contained more than 255 columns (:issue:`24939`)
- Bug in :meth:`DataFrame.itertuples` orient converting integer column names to strings prepended with an underscore (:issue:`24940`)
- Fixed regression in :class:`Index.intersection` incorrectly sorting the values by default (:issue:`24959`).
+- Fixed regression in :func:`merge` when merging an empty ``DataFrame`` with multiple timezone-aware columns on one of the timezone-aware columns (:issue:`25014`).
.. _whatsnew_0241.enhancements:
diff --git a/pandas/core/internals/concat.py b/pandas/core/internals/concat.py
index 4a16707a376e9..640587b7f9f31 100644
--- a/pandas/core/internals/concat.py
+++ b/pandas/core/internals/concat.py
@@ -183,7 +183,7 @@ def get_reindexed_values(self, empty_dtype, upcasted_na):
is_datetime64tz_dtype(empty_dtype)):
if self.block is None:
array = empty_dtype.construct_array_type()
- return array(np.full(self.shape[1], fill_value),
+ return array(np.full(self.shape[1], fill_value.value),
dtype=empty_dtype)
pass
elif getattr(self.block, 'is_categorical', False):
@@ -335,8 +335,10 @@ def get_empty_dtype_and_na(join_units):
elif 'category' in upcast_classes:
return np.dtype(np.object_), np.nan
elif 'datetimetz' in upcast_classes:
+ # GH-25014. We use NaT instead of iNaT, since this eventually
+ # ends up in DatetimeArray.take, which does not allow iNaT.
dtype = upcast_classes['datetimetz']
- return dtype[0], tslibs.iNaT
+ return dtype[0], tslibs.NaT
elif 'datetime' in upcast_classes:
return np.dtype('M8[ns]'), tslibs.iNaT
elif 'timedelta' in upcast_classes:
diff --git a/pandas/tests/reshape/merge/test_merge.py b/pandas/tests/reshape/merge/test_merge.py
index c17c301968269..a0a20d1da6cef 100644
--- a/pandas/tests/reshape/merge/test_merge.py
+++ b/pandas/tests/reshape/merge/test_merge.py
@@ -616,6 +616,24 @@ def test_merge_on_datetime64tz(self):
assert result['value_x'].dtype == 'datetime64[ns, US/Eastern]'
assert result['value_y'].dtype == 'datetime64[ns, US/Eastern]'
+ def test_merge_on_datetime64tz_empty(self):
+ # https://github.com/pandas-dev/pandas/issues/25014
+ dtz = pd.DatetimeTZDtype(tz='UTC')
+ right = pd.DataFrame({'date': [pd.Timestamp('2018', tz=dtz.tz)],
+ 'value': [4.0],
+ 'date2': [pd.Timestamp('2019', tz=dtz.tz)]},
+ columns=['date', 'value', 'date2'])
+ left = right[:0]
+ result = left.merge(right, on='date')
+ expected = pd.DataFrame({
+ 'value_x': pd.Series(dtype=float),
+ 'date2_x': pd.Series(dtype=dtz),
+ 'date': pd.Series(dtype=dtz),
+ 'value_y': pd.Series(dtype=float),
+ 'date2_y': pd.Series(dtype=dtz),
+ }, columns=['value_x', 'date2_x', 'date', 'value_y', 'date2_y'])
+ tm.assert_frame_equal(result, expected)
+
def test_merge_datetime64tz_with_dst_transition(self):
# GH 18885
df1 = pd.DataFrame(pd.date_range(
| Closes https://github.com/pandas-dev/pandas/issues/25014
| https://api.github.com/repos/pandas-dev/pandas/pulls/25033 | 2019-01-30T16:05:41Z | 2019-01-30T21:17:31Z | 2019-01-30T21:17:31Z | 2019-01-30T21:17:35Z |
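What the `NaT` vs `iNaT` comment in the patch refers to, sketched (the `iNaT` import is pandas-internal and shown only for illustration):

```python
import pandas as pd
from pandas._libs.tslibs import iNaT  # internal integer NaT sentinel

arr = pd.array(pd.to_datetime(['2018-01-01']).tz_localize('UTC'))
arr.take([0, -1], allow_fill=True, fill_value=pd.NaT)  # ok
# arr.take([0, -1], allow_fill=True, fill_value=iNaT)  # ValueError
```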
(Closes #25029) Removed extra bracket from cheatsheet code example. | diff --git a/doc/cheatsheet/Pandas_Cheat_Sheet.pdf b/doc/cheatsheet/Pandas_Cheat_Sheet.pdf
index 696ed288cf7a6..d50896dc5ccc5 100644
Binary files a/doc/cheatsheet/Pandas_Cheat_Sheet.pdf and b/doc/cheatsheet/Pandas_Cheat_Sheet.pdf differ
diff --git a/doc/cheatsheet/Pandas_Cheat_Sheet.pptx b/doc/cheatsheet/Pandas_Cheat_Sheet.pptx
index f8b98a6f1f8e4..95f2771017db5 100644
Binary files a/doc/cheatsheet/Pandas_Cheat_Sheet.pptx and b/doc/cheatsheet/Pandas_Cheat_Sheet.pptx differ
diff --git a/doc/cheatsheet/Pandas_Cheat_Sheet_JA.pdf b/doc/cheatsheet/Pandas_Cheat_Sheet_JA.pdf
index daa65a944e68a..05e4b87f6a210 100644
Binary files a/doc/cheatsheet/Pandas_Cheat_Sheet_JA.pdf and b/doc/cheatsheet/Pandas_Cheat_Sheet_JA.pdf differ
diff --git a/doc/cheatsheet/Pandas_Cheat_Sheet_JA.pptx b/doc/cheatsheet/Pandas_Cheat_Sheet_JA.pptx
index 6270a71e20ee8..cb0f058db5448 100644
Binary files a/doc/cheatsheet/Pandas_Cheat_Sheet_JA.pptx and b/doc/cheatsheet/Pandas_Cheat_Sheet_JA.pptx differ
| Closes #25029
There was an additional bracket present under the "Create DataFrame with a MultiIndex" code example.
I removed this in both the English and Japanese versions of the cheatsheet. | https://api.github.com/repos/pandas-dev/pandas/pulls/25032 | 2019-01-30T15:58:02Z | 2019-02-09T17:26:39Z | 2019-02-09T17:26:39Z | 2019-02-09T17:26:42Z |
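For context, the cheatsheet's "Create DataFrame with a MultiIndex" example looks roughly like this (values reconstructed, since the PDFs are binary):

```python
import pandas as pd

df = pd.DataFrame(
    {"a": [4, 5, 6],
     "b": [7, 8, 9],
     "c": [10, 11, 12]},
    index=pd.MultiIndex.from_tuples(
        [('d', 1), ('d', 2), ('e', 2)], names=['n', 'v']))
```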
ENH: Support index=True for io.sql.get_schema | diff --git a/doc/source/whatsnew/v0.24.1.rst b/doc/source/whatsnew/v0.24.1.rst
index 222963a7ff71a..0923b05d41479 100644
--- a/doc/source/whatsnew/v0.24.1.rst
+++ b/doc/source/whatsnew/v0.24.1.rst
@@ -31,7 +31,6 @@ Fixed Regressions
Enhancements
^^^^^^^^^^^^
-
.. _whatsnew_0241.bug_fixes:
Bug Fixes
diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
index 939fb8b9415bd..052f052420e41 100644
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -165,7 +165,7 @@ MultiIndex
I/O
^^^
--
+- :func:`get_schema` now accepts an `index` parameter (default: `False`) that includes the index in the generated schema. (:issue:`9084`)
-
-
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index aaface5415384..7e4cefddc2746 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -1223,8 +1223,9 @@ def drop_table(self, table_name, schema=None):
self.get_table(table_name, schema).drop()
self.meta.clear()
- def _create_sql_schema(self, frame, table_name, keys=None, dtype=None):
- table = SQLTable(table_name, self, frame=frame, index=False, keys=keys,
+ def _create_sql_schema(self, frame, table_name, keys=None, dtype=None,
+ index=False):
+ table = SQLTable(table_name, self, frame=frame, index=index, keys=keys,
dtype=dtype)
return str(table.sql_schema())
@@ -1565,13 +1566,14 @@ def drop_table(self, name, schema=None):
name=_get_valid_sqlite_name(name))
self.execute(drop_sql)
- def _create_sql_schema(self, frame, table_name, keys=None, dtype=None):
- table = SQLiteTable(table_name, self, frame=frame, index=False,
+ def _create_sql_schema(self, frame, table_name, keys=None, dtype=None,
+ index=False):
+ table = SQLiteTable(table_name, self, frame=frame, index=index,
keys=keys, dtype=dtype)
return str(table.sql_schema())
-def get_schema(frame, name, keys=None, con=None, dtype=None):
+def get_schema(frame, name, keys=None, con=None, dtype=None, index=False):
"""
Get the SQL db table schema for the given frame.
@@ -1589,8 +1591,11 @@ def get_schema(frame, name, keys=None, con=None, dtype=None):
dtype : dict of column name to SQL type, default None
Optional specifying the datatype for columns. The SQL type should
be a SQLAlchemy type, or a string for sqlite3 fallback connection.
+ index : boolean, default False
+ Whether to include DataFrame index as a column
"""
pandas_sql = pandasSQL_builder(con=con)
- return pandas_sql._create_sql_schema(frame, name, keys=keys, dtype=dtype)
+ return pandas_sql._create_sql_schema(
+ frame, name, keys=keys, dtype=dtype, index=index)
diff --git a/pandas/tests/io/test_sql.py b/pandas/tests/io/test_sql.py
index 75a6d8d009083..e37921441596b 100644
--- a/pandas/tests/io/test_sql.py
+++ b/pandas/tests/io/test_sql.py
@@ -823,6 +823,21 @@ def test_get_schema_keys(self):
constraint_sentence = 'CONSTRAINT test_pk PRIMARY KEY ("A", "B")'
assert constraint_sentence in create_sql
+ @pytest.mark.parametrize("index_arg, expected", [
+ ({}, False),
+ ({"index": False}, False),
+ ({"index": True}, True),
+ ])
+ def test_get_schema_with_index(self, index_arg, expected):
+ frame = DataFrame({
+ 'one': pd.Series([1, 2, 3], index=['a', 'b', 'c']),
+ 'two': pd.Series([1, 2, 3], index=['a', 'b', 'c'])
+ })
+ frame.index.name = 'alphabet'
+
+ create_sql = sql.get_schema(frame, 'test', con=self.conn, **index_arg)
+ assert ('alphabet' in create_sql) == expected
+
def test_chunksize_read(self):
df = DataFrame(np.random.randn(22, 5), columns=list('abcde'))
df.to_sql('test_chunksize', self.conn, index=False)
| Closes pandas-dev/pandas#9084
- Decided to keep the default as `index=False` to keep the API consistent. `to_sql` has `index=True`.
- Tempted to name the parameter `include_dataframe_index`, as "index" has a different meaning in a SQL context.
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/25030 | 2019-01-30T15:46:49Z | 2019-05-03T05:37:19Z | null | 2019-05-03T05:37:20Z |
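Usage of the proposed keyword, assuming this (unmerged) patch is applied; without a connection, `get_schema` falls back to the sqlite flavor:

```python
import pandas as pd
from pandas.io import sql

frame = pd.DataFrame({'one': [1, 2, 3]},
                     index=pd.Index(['a', 'b', 'c'], name='alphabet'))
print(sql.get_schema(frame, 'test', index=True))  # schema now includes "alphabet"
```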
CLN: typo fixups | diff --git a/pandas/_libs/interval.pyx b/pandas/_libs/interval.pyx
index 3147f36dcc835..eb511b1adb28a 100644
--- a/pandas/_libs/interval.pyx
+++ b/pandas/_libs/interval.pyx
@@ -18,7 +18,6 @@ cnp.import_array()
cimport pandas._libs.util as util
-util.import_array()
from pandas._libs.hashtable cimport Int64Vector, Int64VectorData
diff --git a/pandas/_libs/tslibs/nattype.pyx b/pandas/_libs/tslibs/nattype.pyx
index a55d15a7c4e85..92cbcce6c7042 100644
--- a/pandas/_libs/tslibs/nattype.pyx
+++ b/pandas/_libs/tslibs/nattype.pyx
@@ -382,7 +382,7 @@ class NaTType(_NaT):
)
combine = _make_error_func('combine', # noqa:E128
"""
- Timsetamp.combine(date, time)
+ Timestamp.combine(date, time)
date, time -> datetime with same date and time fields
"""
diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx
index fe0564cb62c30..3e6763e226a4a 100644
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -197,7 +197,7 @@ def round_nsint64(values, mode, freq):
# This is PITA. Because we inherit from datetime, which has very specific
# construction requirements, we need to do object instantiation in python
-# (see Timestamp class above). This will serve as a C extension type that
+# (see Timestamp class below). This will serve as a C extension type that
# shadows the python class, where we do any heavy lifting.
cdef class _Timestamp(datetime):
@@ -670,7 +670,7 @@ class Timestamp(_Timestamp):
@classmethod
def combine(cls, date, time):
"""
- Timsetamp.combine(date, time)
+ Timestamp.combine(date, time)
date, time -> datetime with same date and time fields
"""
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index df764aa4ba666..36144c31dfef9 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -2072,17 +2072,9 @@ def get_values(self, dtype=None):
return object dtype as boxed values, such as Timestamps/Timedelta
"""
if is_object_dtype(dtype):
- values = self.values
-
- if self.ndim > 1:
- values = values.ravel()
-
- values = lib.map_infer(values, self._box_func)
-
- if self.ndim > 1:
- values = values.reshape(self.values.shape)
-
- return values
+ values = self.values.ravel()
+ result = self._holder(values).astype(object)
+ return result.reshape(self.values.shape)
return self.values
| Also edit DatetimeLikeBlockMixin.get_values to be much simpler. | https://api.github.com/repos/pandas-dev/pandas/pulls/25028 | 2019-01-30T14:57:35Z | 2019-01-31T12:27:49Z | 2019-01-31T12:27:49Z | 2020-04-05T17:36:54Z |
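The corrected docstring documents this API; a one-liner for reference:

```python
import pandas as pd
from datetime import date, time

pd.Timestamp.combine(date(2019, 1, 30), time(9, 30))
# Timestamp('2019-01-30 09:30:00')
```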
DOC: 0.24.1 whatsnew | diff --git a/doc/source/index.rst.template b/doc/source/index.rst.template
index 51487c0d325b5..df2a29a76f3c5 100644
--- a/doc/source/index.rst.template
+++ b/doc/source/index.rst.template
@@ -39,7 +39,7 @@ See the :ref:`overview` for more detail about what's in the library.
{% endif %}
{% if not single_doc -%}
- What's New in 0.24.0 <whatsnew/v0.24.0>
+ What's New in 0.24.1 <whatsnew/v0.24.1>
install
getting_started/index
user_guide/index
| This PR has the documentation changes that are just for 0.24.x. I'll have another PR later with changes to 0.24.1.rst that should go to master first, before being backported. | https://api.github.com/repos/pandas-dev/pandas/pulls/25027 | 2019-01-30T14:52:06Z | 2019-02-01T20:09:00Z | 2019-02-01T20:09:00Z | 2019-02-01T20:09:03Z |
DOC: Start 0.24.2.rst | diff --git a/doc/source/whatsnew/v0.24.2.rst b/doc/source/whatsnew/v0.24.2.rst
new file mode 100644
index 0000000000000..cba21ce7ee1e6
--- /dev/null
+++ b/doc/source/whatsnew/v0.24.2.rst
@@ -0,0 +1,99 @@
+:orphan:
+
+.. _whatsnew_0242:
+
+Whats New in 0.24.2 (February XX, 2019)
+---------------------------------------
+
+.. warning::
+
+ The 0.24.x series of releases will be the last to support Python 2. Future feature
+ releases will support Python 3 only. See :ref:`install.dropping-27` for more.
+
+{{ header }}
+
+These are the changes in pandas 0.24.2. See :ref:`release` for a full changelog
+including other versions of pandas.
+
+.. _whatsnew_0242.regressions:
+
+Fixed Regressions
+^^^^^^^^^^^^^^^^^
+
+-
+-
+-
+
+.. _whatsnew_0242.enhancements:
+
+Enhancements
+^^^^^^^^^^^^
+
+-
+-
+
+.. _whatsnew_0242.bug_fixes:
+
+Bug Fixes
+~~~~~~~~~
+
+**Conversion**
+
+-
+-
+-
+
+**Indexing**
+
+-
+-
+-
+
+**I/O**
+
+-
+-
+-
+
+**Categorical**
+
+-
+-
+-
+
+**Timezones**
+
+-
+-
+-
+
+**Timedelta**
+
+-
+-
+-
+
+**Reshaping**
+
+-
+-
+-
+
+**Visualization**
+
+-
+-
+-
+
+**Other**
+
+-
+-
+-
+
+.. _whatsnew_0.242.contributors:
+
+Contributors
+~~~~~~~~~~~~
+
+.. contributors:: v0.24.1..v0.24.2
\ No newline at end of file
| [ci skip]
| https://api.github.com/repos/pandas-dev/pandas/pulls/25026 | 2019-01-30T14:18:55Z | 2019-02-01T12:27:16Z | 2019-02-01T12:27:16Z | 2019-02-01T12:27:16Z |
Backport PR #24961 on branch 0.24.x (fix+test to_timedelta('NaT', box=False)) | diff --git a/doc/source/whatsnew/v0.24.1.rst b/doc/source/whatsnew/v0.24.1.rst
index 8f4c3982c745f..82885f851e86b 100644
--- a/doc/source/whatsnew/v0.24.1.rst
+++ b/doc/source/whatsnew/v0.24.1.rst
@@ -65,7 +65,7 @@ Bug Fixes
-
**Timedelta**
-
+- Bug in :func:`to_timedelta` with `box=False` incorrectly returning a ``datetime64`` object instead of a ``timedelta64`` object (:issue:`24961`)
-
-
-
diff --git a/pandas/core/tools/timedeltas.py b/pandas/core/tools/timedeltas.py
index e3428146b91d8..ddd21d0f62d08 100644
--- a/pandas/core/tools/timedeltas.py
+++ b/pandas/core/tools/timedeltas.py
@@ -120,7 +120,8 @@ def _coerce_scalar_to_timedelta_type(r, unit='ns', box=True, errors='raise'):
try:
result = Timedelta(r, unit)
if not box:
- result = result.asm8
+ # explicitly view as timedelta64 for case when result is pd.NaT
+ result = result.asm8.view('timedelta64[ns]')
except ValueError:
if errors == 'raise':
raise
diff --git a/pandas/tests/scalar/timedelta/test_timedelta.py b/pandas/tests/scalar/timedelta/test_timedelta.py
index 9b5fdfb06a9fa..e1838e0160fec 100644
--- a/pandas/tests/scalar/timedelta/test_timedelta.py
+++ b/pandas/tests/scalar/timedelta/test_timedelta.py
@@ -309,8 +309,13 @@ def test_iso_conversion(self):
assert to_timedelta('P0DT0H0M1S') == expected
def test_nat_converters(self):
- assert to_timedelta('nat', box=False).astype('int64') == iNaT
- assert to_timedelta('nan', box=False).astype('int64') == iNaT
+ result = to_timedelta('nat', box=False)
+ assert result.dtype.kind == 'm'
+ assert result.astype('int64') == iNaT
+
+ result = to_timedelta('nan', box=False)
+ assert result.dtype.kind == 'm'
+ assert result.astype('int64') == iNaT
@pytest.mark.parametrize('units, np_unit',
[(['Y', 'y'], 'Y'),
| Backport PR #24961: fix+test to_timedelta('NaT', box=False) | https://api.github.com/repos/pandas-dev/pandas/pulls/25025 | 2019-01-30T12:43:22Z | 2019-01-30T13:18:39Z | 2019-01-30T13:18:39Z | 2019-01-30T13:20:24Z |
REGR: fix read_sql delegation for queries on MySQL/pymysql | diff --git a/doc/source/whatsnew/v0.24.1.rst b/doc/source/whatsnew/v0.24.1.rst
index 828c35c10e958..defb84f438e3a 100644
--- a/doc/source/whatsnew/v0.24.1.rst
+++ b/doc/source/whatsnew/v0.24.1.rst
@@ -22,6 +22,7 @@ Fixed Regressions
- Bug in :meth:`DataFrame.itertuples` with ``records`` orient raising an ``AttributeError`` when the ``DataFrame`` contained more than 255 columns (:issue:`24939`)
- Bug in :meth:`DataFrame.itertuples` orient converting integer column names to strings prepended with an underscore (:issue:`24940`)
+- Fixed regression in :func:`read_sql` when passing certain queries with MySQL/pymysql (:issue:`24988`).
- Fixed regression in :class:`Index.intersection` incorrectly sorting the values by default (:issue:`24959`).
.. _whatsnew_0241.enhancements:
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index 5d1163b3e0024..aaface5415384 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -381,7 +381,8 @@ def read_sql(sql, con, index_col=None, coerce_float=True, params=None,
try:
_is_table_name = pandas_sql.has_table(sql)
- except (ImportError, AttributeError):
+ except Exception:
+ # using generic exception to catch errors from sql drivers (GH24988)
_is_table_name = False
if _is_table_name:
| Closes #24988, see discussion there regarding lack of test. | https://api.github.com/repos/pandas-dev/pandas/pulls/25024 | 2019-01-30T09:49:43Z | 2019-01-31T21:24:58Z | 2019-01-31T21:24:57Z | 2019-01-31T21:24:58Z |
BUG: to_datetime(strs, utc=True) used previous UTC offset | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
index 867007b2ba7f5..24e3b42859416 100644
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -103,7 +103,7 @@ Timedelta
Timezones
^^^^^^^^^
--
+- Bug in :func:`to_datetime` with ``utc=True`` and datetime strings that would apply previously parsed UTC offsets to subsequent arguments (:issue:`24992`)
-
-
diff --git a/pandas/_libs/tslib.pyx b/pandas/_libs/tslib.pyx
index 798e338d5581b..f932e236b5218 100644
--- a/pandas/_libs/tslib.pyx
+++ b/pandas/_libs/tslib.pyx
@@ -645,6 +645,8 @@ cpdef array_to_datetime(ndarray[object] values, str errors='raise',
out_tzoffset_vals.add(out_tzoffset * 60.)
tz = pytz.FixedOffset(out_tzoffset)
value = tz_convert_single(value, tz, UTC)
+ out_local = 0
+ out_tzoffset = 0
else:
# Add a marker for naive string, to track if we are
# parsing mixed naive and aware strings
diff --git a/pandas/tests/indexes/datetimes/test_tools.py b/pandas/tests/indexes/datetimes/test_tools.py
index 38f5eab15041f..b94935d2521eb 100644
--- a/pandas/tests/indexes/datetimes/test_tools.py
+++ b/pandas/tests/indexes/datetimes/test_tools.py
@@ -714,6 +714,29 @@ def test_iso_8601_strings_with_different_offsets(self):
NaT], tz='UTC')
tm.assert_index_equal(result, expected)
+ def test_iss8601_strings_mixed_offsets_with_naive(self):
+ # GH 24992
+ result = pd.to_datetime([
+ '2018-11-28T00:00:00',
+ '2018-11-28T00:00:00+12:00',
+ '2018-11-28T00:00:00',
+ '2018-11-28T00:00:00+06:00',
+ '2018-11-28T00:00:00'
+ ], utc=True)
+ expected = pd.to_datetime([
+ '2018-11-28T00:00:00',
+ '2018-11-27T12:00:00',
+ '2018-11-28T00:00:00',
+ '2018-11-27T18:00:00',
+ '2018-11-28T00:00:00'
+ ], utc=True)
+ tm.assert_index_equal(result, expected)
+
+ items = ['2018-11-28T00:00:00+12:00', '2018-11-28T00:00:00']
+ result = pd.to_datetime(items, utc=True)
+ expected = pd.to_datetime(list(reversed(items)), utc=True)[::-1]
+ tm.assert_index_equal(result, expected)
+
def test_non_iso_strings_with_tz_offset(self):
result = to_datetime(['March 1, 2018 12:00:00+0400'] * 2)
expected = DatetimeIndex([datetime(2018, 3, 1, 12,
| - [x] closes #24992
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
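For reference, a minimal reproduction of the reported bug (mirroring the new test above; the comment shows the fixed behavior):

```python
import pandas as pd

# Before this fix, the +12:00 offset parsed from the first string leaked
# into the parse of the second (tz-naive) string.
result = pd.to_datetime(
    ['2018-11-28T00:00:00+12:00', '2018-11-28T00:00:00'], utc=True)
# Fixed behavior: the naive string is treated as UTC, giving
# DatetimeIndex(['2018-11-27 12:00:00+00:00',
#                '2018-11-28 00:00:00+00:00'], dtype='datetime64[ns, UTC]')
```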
| https://api.github.com/repos/pandas-dev/pandas/pulls/25020 | 2019-01-30T07:04:27Z | 2019-01-31T12:29:33Z | 2019-01-31T12:29:32Z | 2019-01-31T15:59:28Z |
CLN: do not use .repeat asv setting for storing benchmark data | diff --git a/asv_bench/benchmarks/strings.py b/asv_bench/benchmarks/strings.py
index e9f2727f64e15..b5b2c955f0133 100644
--- a/asv_bench/benchmarks/strings.py
+++ b/asv_bench/benchmarks/strings.py
@@ -102,10 +102,10 @@ def setup(self, repeats):
N = 10**5
self.s = Series(tm.makeStringIndex(N))
repeat = {'int': 1, 'array': np.random.randint(1, 3, N)}
- self.repeat = repeat[repeats]
+ self.values = repeat[repeats]
def time_repeat(self, repeats):
- self.s.str.repeat(self.repeat)
+ self.s.str.repeat(self.values)
class Cat(object):
| `asv` uses the `.repeat` attribute to specify the number of times a benchmark should be repeated; our `strings.Repeat` benchmark inadvertently used this attribute to store benchmark data. This doesn't cause issues for the first parameter, but the second one fails:
```
[ 99.87%] ··· strings.Repeat.time_repeat 1/2 failed
[ 99.87%] ··· ========= ===========
repeats
--------- -----------
int 151±0.9ms
array failed
========= ===========
[ 99.87%] ···· For parameters: 'array'
Traceback (most recent call last):
File "/home/chris/code/asv/asv/benchmark.py", line 595, in run
min_repeat, max_repeat, max_time = self.repeat
ValueError: too many values to unpack (expected 3)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/chris/code/asv/asv/benchmark.py", line 1170, in main_run_server
main_run(run_args)
File "/home/chris/code/asv/asv/benchmark.py", line 1044, in main_run
result = benchmark.do_run()
File "/home/chris/code/asv/asv/benchmark.py", line 523, in do_run
return self.run(*self._current_params)
File "/home/chris/code/asv/asv/benchmark.py", line 597, in run
if self.repeat == 0:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```
With this PR, both parameters now succeed:
```
[ 0.00%] · For pandas commit 8825f78e <repeat>:
[ 0.00%] ·· Benchmarking conda-py3.6-Cython-matplotlib-numexpr-numpy-openpyxl-pytables-pytest-scipy-sqlalchemy-xlrd-xlsxwriter-xlwt
[ 50.00%] ··· Running (strings.Repeat.time_repeat--).
[100.00%] ··· strings.Repeat.time_repeat ok
[100.00%] ··· ========= ===========
repeats
--------- -----------
int 152±1ms
array 150±0.6ms
========= ===========
```
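A trimmed-down sketch of the fixed benchmark (the full change is in the diff above; `N` is shrunk here for brevity):

```python
import numpy as np
import pandas as pd
import pandas.util.testing as tm


class Repeat(object):
    # asv reserves the ``repeat`` attribute for its timing settings (an int
    # or a (min_repeat, max_repeat, max_time) tuple), so benchmark data must
    # live under a different name.
    params = ['int', 'array']
    param_names = ['repeats']

    def setup(self, repeats):
        N = 10 ** 3
        self.s = pd.Series(tm.makeStringIndex(N))
        repeat = {'int': 1, 'array': np.random.randint(1, 3, N)}
        self.values = repeat[repeats]  # was ``self.repeat`` before the fix

    def time_repeat(self, repeats):
        self.s.str.repeat(self.values)
```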
- [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/25015 | 2019-01-29T23:21:55Z | 2019-01-30T05:48:52Z | 2019-01-30T05:48:52Z | 2019-01-30T05:48:59Z |
Backport PR #24967 on branch 0.24.x (REGR: Preserve order by default in Index.difference) | diff --git a/doc/source/whatsnew/v0.24.1.rst b/doc/source/whatsnew/v0.24.1.rst
index 8f4c3982c745f..828c35c10e958 100644
--- a/doc/source/whatsnew/v0.24.1.rst
+++ b/doc/source/whatsnew/v0.24.1.rst
@@ -22,6 +22,7 @@ Fixed Regressions
- Bug in :meth:`DataFrame.itertuples` with ``records`` orient raising an ``AttributeError`` when the ``DataFrame`` contained more than 255 columns (:issue:`24939`)
- Bug in :meth:`DataFrame.itertuples` orient converting integer column names to strings prepended with an underscore (:issue:`24940`)
+- Fixed regression in :class:`Index.intersection` incorrectly sorting the values by default (:issue:`24959`).
.. _whatsnew_0241.enhancements:
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 767da81c5c43a..3d176012df22b 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -2333,7 +2333,7 @@ def union(self, other, sort=True):
def _wrap_setop_result(self, other, result):
return self._constructor(result, name=get_op_result_name(self, other))
- def intersection(self, other, sort=True):
+ def intersection(self, other, sort=False):
"""
Form the intersection of two Index objects.
@@ -2342,11 +2342,15 @@ def intersection(self, other, sort=True):
Parameters
----------
other : Index or array-like
- sort : bool, default True
+ sort : bool, default False
Sort the resulting index if possible
.. versionadded:: 0.24.0
+ .. versionchanged:: 0.24.1
+
+ Changed the default from ``True`` to ``False``.
+
Returns
-------
intersection : Index
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index cc373c06efcc9..ef941ab87ba12 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -594,7 +594,7 @@ def _wrap_setop_result(self, other, result):
name = get_op_result_name(self, other)
return self._shallow_copy(result, name=name, freq=None, tz=self.tz)
- def intersection(self, other, sort=True):
+ def intersection(self, other, sort=False):
"""
Specialized intersection for DatetimeIndex objects. May be much faster
than Index.intersection
@@ -602,6 +602,14 @@ def intersection(self, other, sort=True):
Parameters
----------
other : DatetimeIndex or array-like
+ sort : bool, default True
+ Sort the resulting index if possible.
+
+ .. versionadded:: 0.24.0
+
+ .. versionchanged:: 0.24.1
+
+ Changed the default from ``True`` to ``False``.
Returns
-------
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index 0210560aaa21f..736de94991181 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -1093,8 +1093,8 @@ def equals(self, other):
def overlaps(self, other):
return self._data.overlaps(other)
- def _setop(op_name):
- def func(self, other, sort=True):
+ def _setop(op_name, sort=True):
+ def func(self, other, sort=sort):
other = self._as_like_interval_index(other)
# GH 19016: ensure set op will not return a prohibited dtype
@@ -1128,7 +1128,7 @@ def is_all_dates(self):
return False
union = _setop('union')
- intersection = _setop('intersection')
+ intersection = _setop('intersection', sort=False)
difference = _setop('difference')
symmetric_difference = _setop('symmetric_difference')
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index e4d01a40bd181..16af3fe8eef26 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -2910,7 +2910,7 @@ def union(self, other, sort=True):
return MultiIndex.from_arrays(lzip(*uniq_tuples), sortorder=0,
names=result_names)
- def intersection(self, other, sort=True):
+ def intersection(self, other, sort=False):
"""
Form the intersection of two MultiIndex objects.
@@ -2922,6 +2922,10 @@ def intersection(self, other, sort=True):
.. versionadded:: 0.24.0
+ .. versionchanged:: 0.24.1
+
+ Changed the default from ``True`` to ``False``.
+
Returns
-------
Index
diff --git a/pandas/core/indexes/range.py b/pandas/core/indexes/range.py
index ebf5b279563cf..e17a6a682af40 100644
--- a/pandas/core/indexes/range.py
+++ b/pandas/core/indexes/range.py
@@ -343,7 +343,7 @@ def equals(self, other):
return super(RangeIndex, self).equals(other)
- def intersection(self, other, sort=True):
+ def intersection(self, other, sort=False):
"""
Form the intersection of two Index objects.
@@ -355,6 +355,10 @@ def intersection(self, other, sort=True):
.. versionadded:: 0.24.0
+ .. versionchanged:: 0.24.1
+
+ Changed the default from ``True`` to ``False``.
+
Returns
-------
intersection : Index
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index f3e9d835c7391..20e439de46bde 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -765,6 +765,11 @@ def test_intersect_str_dates(self, sort):
assert len(result) == 0
+ def test_intersect_nosort(self):
+ result = pd.Index(['c', 'b', 'a']).intersection(['b', 'a'])
+ expected = pd.Index(['b', 'a'])
+ tm.assert_index_equal(result, expected)
+
@pytest.mark.parametrize("sort", [True, False])
def test_chained_union(self, sort):
# Chained unions handles names correctly
@@ -1595,20 +1600,27 @@ def test_drop_tuple(self, values, to_drop):
for drop_me in to_drop[1], [to_drop[1]]:
pytest.raises(KeyError, removed.drop, drop_me)
- @pytest.mark.parametrize("method,expected", [
+ @pytest.mark.parametrize("method,expected,sort", [
+ ('intersection', np.array([(1, 'A'), (2, 'A'), (1, 'B'), (2, 'B')],
+ dtype=[('num', int), ('let', 'a1')]),
+ False),
+
('intersection', np.array([(1, 'A'), (1, 'B'), (2, 'A'), (2, 'B')],
- dtype=[('num', int), ('let', 'a1')])),
+ dtype=[('num', int), ('let', 'a1')]),
+ True),
+
('union', np.array([(1, 'A'), (1, 'B'), (1, 'C'), (2, 'A'), (2, 'B'),
- (2, 'C')], dtype=[('num', int), ('let', 'a1')]))
+ (2, 'C')], dtype=[('num', int), ('let', 'a1')]),
+ True)
])
- def test_tuple_union_bug(self, method, expected):
+ def test_tuple_union_bug(self, method, expected, sort):
index1 = Index(np.array([(1, 'A'), (2, 'A'), (1, 'B'), (2, 'B')],
dtype=[('num', int), ('let', 'a1')]))
index2 = Index(np.array([(1, 'A'), (2, 'A'), (1, 'B'),
(2, 'B'), (1, 'C'), (2, 'C')],
dtype=[('num', int), ('let', 'a1')]))
- result = getattr(index1, method)(index2)
+ result = getattr(index1, method)(index2, sort=sort)
assert result.ndim == 1
expected = Index(expected)
| Backport PR #24967: REGR: Preserve order by default in Index.difference | https://api.github.com/repos/pandas-dev/pandas/pulls/25013 | 2019-01-29T21:43:34Z | 2019-01-30T12:50:04Z | 2019-01-30T12:50:04Z | 2019-01-30T12:50:04Z |
BUG-24212 fix when other_index has incompatible dtype | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
index b2a379d9fe6f5..bb7fdf97c9383 100644
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -399,7 +399,7 @@ Reshaping
^^^^^^^^^
- Bug in :func:`pandas.merge` adds a string of ``None``, if ``None`` is assigned in suffixes instead of remain the column name as-is (:issue:`24782`).
-- Bug in :func:`merge` when merging by index name would sometimes result in an incorrectly numbered index (:issue:`24212`)
+- Bug in :func:`merge` when merging by index name would sometimes result in an incorrectly numbered index (missing index values are now assigned NA) (:issue:`24212`, :issue:`25009`)
- :func:`to_records` now accepts dtypes to its ``column_dtypes`` parameter (:issue:`24895`)
- Bug in :func:`concat` where order of ``OrderedDict`` (and ``dict`` in Python 3.6+) is not respected, when passed in as ``objs`` argument (:issue:`21510`)
- Bug in :func:`pivot_table` where columns with ``NaN`` values are dropped even if ``dropna`` argument is ``False``, when the ``aggfunc`` argument contains a ``list`` (:issue:`22159`)
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index 0837186e33267..78309ce9c863c 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -803,22 +803,18 @@ def _create_join_index(self, index, other_index, indexer,
-------
join_index
"""
- join_index = index.take(indexer)
if (self.how in (how, 'outer') and
not isinstance(other_index, MultiIndex)):
# if final index requires values in other_index but not target
# index, indexer may hold missing (-1) values, causing Index.take
- # to take the final value in target index
+ # to take the final value in target index. So, we set the last
+ # element to be the desired fill value. We do not use allow_fill
+ # and fill_value because it throws a ValueError on integer indices
mask = indexer == -1
if np.any(mask):
- # if values missing (-1) from target index,
- # take from other_index instead
- join_list = join_index.to_numpy()
- other_list = other_index.take(other_indexer).to_numpy()
- join_list[mask] = other_list[mask]
- join_index = Index(join_list, dtype=join_index.dtype,
- name=join_index.name)
- return join_index
+ fill_value = na_value_for_dtype(index.dtype, compat=False)
+ index = index.append(Index([fill_value]))
+ return index.take(indexer)
def _get_merge_keys(self):
"""
diff --git a/pandas/tests/reshape/merge/test_merge.py b/pandas/tests/reshape/merge/test_merge.py
index b4a58628faa4d..8bc68cc7f8fc2 100644
--- a/pandas/tests/reshape/merge/test_merge.py
+++ b/pandas/tests/reshape/merge/test_merge.py
@@ -15,7 +15,8 @@
import pandas as pd
from pandas import (
Categorical, CategoricalIndex, DataFrame, DatetimeIndex, Float64Index,
- Int64Index, MultiIndex, RangeIndex, Series, UInt64Index)
+ Int64Index, IntervalIndex, MultiIndex, PeriodIndex, RangeIndex, Series,
+ TimedeltaIndex, UInt64Index)
from pandas.api.types import CategoricalDtype as CDT
from pandas.core.reshape.concat import concat
from pandas.core.reshape.merge import MergeError, merge
@@ -1034,11 +1035,30 @@ def test_merge_two_empty_df_no_division_error(self):
merge(a, a, on=('a', 'b'))
@pytest.mark.parametrize('how', ['right', 'outer'])
- def test_merge_on_index_with_more_values(self, how):
+ @pytest.mark.parametrize(
+ 'index,expected_index',
+ [(CategoricalIndex([1, 2, 4]),
+ CategoricalIndex([1, 2, 4, None, None, None])),
+ (DatetimeIndex(['2001-01-01', '2002-02-02', '2003-03-03']),
+ DatetimeIndex(['2001-01-01', '2002-02-02', '2003-03-03',
+ pd.NaT, pd.NaT, pd.NaT])),
+ (Float64Index([1, 2, 3]),
+ Float64Index([1, 2, 3, None, None, None])),
+ (Int64Index([1, 2, 3]),
+ Float64Index([1, 2, 3, None, None, None])),
+ (IntervalIndex.from_tuples([(1, 2), (2, 3), (3, 4)]),
+ IntervalIndex.from_tuples([(1, 2), (2, 3), (3, 4),
+ np.nan, np.nan, np.nan])),
+ (PeriodIndex(['2001-01-01', '2001-01-02', '2001-01-03'], freq='D'),
+ PeriodIndex(['2001-01-01', '2001-01-02', '2001-01-03',
+ pd.NaT, pd.NaT, pd.NaT], freq='D')),
+ (TimedeltaIndex(['1d', '2d', '3d']),
+ TimedeltaIndex(['1d', '2d', '3d', pd.NaT, pd.NaT, pd.NaT]))])
+ def test_merge_on_index_with_more_values(self, how, index, expected_index):
# GH 24212
# pd.merge gets [0, 1, 2, -1, -1, -1] as left_indexer, ensure that
# -1 is interpreted as a missing value instead of the last element
- df1 = pd.DataFrame({'a': [1, 2, 3], 'key': [0, 2, 2]})
+ df1 = pd.DataFrame({'a': [1, 2, 3], 'key': [0, 2, 2]}, index=index)
df2 = pd.DataFrame({'b': [1, 2, 3, 4, 5]})
result = df1.merge(df2, left_on='key', right_index=True, how=how)
expected = pd.DataFrame([[1.0, 0, 1],
@@ -1048,7 +1068,7 @@ def test_merge_on_index_with_more_values(self, how):
[np.nan, 3, 4],
[np.nan, 4, 5]],
columns=['a', 'key', 'b'])
- expected.set_index(Int64Index([0, 1, 2, 1, 3, 4]), inplace=True)
+ expected.set_index(expected_index, inplace=True)
assert_frame_equal(result, expected)
def test_merge_right_index_right(self):
@@ -1062,11 +1082,27 @@ def test_merge_right_index_right(self):
'key': [0, 1, 1, 2],
'b': [1, 2, 2, 3]},
columns=['a', 'key', 'b'],
- index=[0, 1, 2, 2])
+ index=[0, 1, 2, np.nan])
result = left.merge(right, left_on='key', right_index=True,
how='right')
tm.assert_frame_equal(result, expected)
+ def test_merge_take_missing_values_from_index_of_other_dtype(self):
+ # GH 24212
+ left = pd.DataFrame({'a': [1, 2, 3],
+ 'key': pd.Categorical(['a', 'a', 'b'],
+ categories=list('abc'))})
+ right = pd.DataFrame({'b': [1, 2, 3]},
+ index=pd.CategoricalIndex(['a', 'b', 'c']))
+ result = left.merge(right, left_on='key',
+ right_index=True, how='right')
+ expected = pd.DataFrame({'a': [1, 2, 3, None],
+ 'key': pd.Categorical(['a', 'a', 'b', 'c']),
+ 'b': [1, 1, 2, 3]},
+ index=[0, 1, 2, np.nan])
+ expected = expected.reindex(columns=['a', 'key', 'b'])
+ tm.assert_frame_equal(result, expected)
+
def _check_merge(x, y):
for how in ['inner', 'left', 'outer']:
| - [X] closes #25001
- [X] tests added / passed
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
Followup to #24916, addresses the case when the other index has an incompatible dtype, so we cannot take directly from it. Currently, this PR ~naively replaces the missing index values with the number of the rows in the other index that caused them~ replaces the missing index values with the appropriate NA value.
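To illustrate the intended behavior (adapted from the new test in the diff above), a right-only row now gets an NA index label instead of a value borrowed from the other index:

```python
import pandas as pd

left = pd.DataFrame({'a': [1, 2, 3],
                     'key': pd.Categorical(['a', 'a', 'b'],
                                           categories=list('abc'))})
right = pd.DataFrame({'b': [1, 2, 3]},
                     index=pd.CategoricalIndex(['a', 'b', 'c']))
left.merge(right, left_on='key', right_index=True, how='right')
# The right-only row 'c' has no position in left's index, so the result's
# index is [0, 1, 2, NaN] and column 'a' is NaN in that row.
```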
~Still working on adding cases when it is possible to combine indices of sparse/categorical dtypes without densifying.~ | https://api.github.com/repos/pandas-dev/pandas/pulls/25009 | 2019-01-29T19:32:15Z | 2019-05-05T21:21:55Z | 2019-05-05T21:21:55Z | 2019-05-05T21:22:00Z |
require Return section only if return is not None nor commentary | diff --git a/scripts/tests/test_validate_docstrings.py b/scripts/tests/test_validate_docstrings.py
index bb58449843096..6f78b91653a3f 100644
--- a/scripts/tests/test_validate_docstrings.py
+++ b/scripts/tests/test_validate_docstrings.py
@@ -229,6 +229,27 @@ def good_imports(self):
"""
pass
+ def no_returns(self):
+ """
+ Say hello and have no returns.
+ """
+ pass
+
+ def empty_returns(self):
+ """
+ Say hello and always return None.
+
+ Since this function never returns a value, this
+ docstring doesn't need a return section.
+ """
+ def say_hello():
+ return "Hello World!"
+ say_hello()
+ if True:
+ return
+ else:
+ return None
+
class BadGenericDocStrings(object):
"""Everything here has a bad docstring
@@ -783,7 +804,7 @@ def test_good_class(self, capsys):
@pytest.mark.parametrize("func", [
'plot', 'sample', 'random_letters', 'sample_values', 'head', 'head1',
- 'contains', 'mode', 'good_imports'])
+ 'contains', 'mode', 'good_imports', 'no_returns', 'empty_returns'])
def test_good_functions(self, capsys, func):
errors = validate_one(self._import_path(
klass='GoodDocStrings', func=func))['errors']
diff --git a/scripts/validate_docstrings.py b/scripts/validate_docstrings.py
index bce33f7e78daa..446cd60968312 100755
--- a/scripts/validate_docstrings.py
+++ b/scripts/validate_docstrings.py
@@ -26,6 +26,8 @@
import importlib
import doctest
import tempfile
+import ast
+import textwrap
import flake8.main.application
@@ -490,9 +492,45 @@ def yields(self):
@property
def method_source(self):
try:
- return inspect.getsource(self.obj)
+ source = inspect.getsource(self.obj)
except TypeError:
return ''
+ return textwrap.dedent(source)
+
+ @property
+ def method_returns_something(self):
+ '''
+ Check if the docstrings method can return something.
+
+ Bare returns, returns valued None, and returns from nested functions
+ are disregarded.
+
+ Returns
+ -------
+ bool
+ Whether the docstrings method can return something.
+ '''
+
+ def get_returns_not_on_nested_functions(node):
+ returns = [node] if isinstance(node, ast.Return) else []
+ for child in ast.iter_child_nodes(node):
+ # Ignore nested functions and its subtrees.
+ if not isinstance(child, ast.FunctionDef):
+ child_returns = get_returns_not_on_nested_functions(child)
+ returns.extend(child_returns)
+ return returns
+
+ tree = ast.parse(self.method_source).body
+ if tree:
+ returns = get_returns_not_on_nested_functions(tree[0])
+ return_values = [r.value for r in returns]
+ # Replace NameConstant nodes valued None for None.
+ for i, v in enumerate(return_values):
+ if isinstance(v, ast.NameConstant) and v.value is None:
+ return_values[i] = None
+ return any(return_values)
+ else:
+ return False
@property
def first_line_ends_in_dot(self):
@@ -691,7 +729,7 @@ def get_validation_data(doc):
if doc.is_function_or_method:
if not doc.returns:
- if 'return' in doc.method_source:
+ if doc.method_returns_something:
errs.append(error('RT01'))
else:
if len(doc.returns) == 1 and doc.returns[0][1]:
| - [ ] closes #23488
Updated return lookup at source in validate_docstrings.py:
- ignore "return None"
- ignore bare `return` statements
- ignore the word "return" appearing in comments (see the example below)
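For example, a function like this one (adapted from the new `empty_returns` test case above) no longer triggers the missing-Returns-section error:

```python
def empty_returns():
    """
    Say hello and always return None.
    """
    def say_hello():
        return "Hello World!"  # return inside a nested function: ignored

    say_hello()
    if True:
        return  # bare return: ignored
    else:
        return None  # explicit 'return None': ignored
```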
Updated test_validate_docstrings.py:
- added a test which contains the "returns" listed above and has a valid docstring with no Return section | https://api.github.com/repos/pandas-dev/pandas/pulls/25008 | 2019-01-29T18:23:09Z | 2019-03-11T12:02:05Z | 2019-03-11T12:02:04Z | 2019-03-11T12:02:05Z |
API: Change default for Index.union sort | diff --git a/doc/source/whatsnew/v0.24.1.rst b/doc/source/whatsnew/v0.24.1.rst
index 047404e93914b..948350df140eb 100644
--- a/doc/source/whatsnew/v0.24.1.rst
+++ b/doc/source/whatsnew/v0.24.1.rst
@@ -15,10 +15,84 @@ Whats New in 0.24.1 (February XX, 2019)
These are the changes in pandas 0.24.1. See :ref:`release` for a full changelog
including other versions of pandas.
+.. _whatsnew_0241.api:
+
+API Changes
+~~~~~~~~~~~
+
+Changing the ``sort`` parameter for :meth:`Index.union`
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The default ``sort`` value for :meth:`Index.union` has changed from ``True`` to ``None`` (:issue:`24959`).
+The default *behavior* remains the same: The result is sorted, unless
+
+1. ``self`` and ``other`` are identical
+2. ``self`` or ``other`` is empty
+3. ``self`` or ``other`` contain values that can not be compared (a ``RuntimeWarning`` is raised).
+
+This allows ``sort=True`` to now mean "always sort". A ``TypeError`` is raised if the values cannot be compared.
+
+**Behavior in 0.24.0**
+
+.. ipython:: python
+
+ In [1]: idx = pd.Index(['b', 'a'])
+
+ In [2]: idx.union(idx) # sort=True was the default.
+ Out[2]: Index(['b', 'a'], dtype='object')
+
+ In [3]: idx.union(idx, sort=True) # result is still not sorted.
+ Out[32]: Index(['b', 'a'], dtype='object')
+
+**New Behavior**
+
+.. ipython:: python
+
+ idx = pd.Index(['b', 'a'])
+ idx.union(idx) # sort=None is the default. Don't sort identical operands.
+
+ idx.union(idx, sort=True)
+
+The same change applies to :meth:`Index.difference` and :meth:`Index.symmetric_difference`, which
+would previously not sort the result when ``sort=True`` but the values could not be compared.
+
+Changed the behavior of :meth:`Index.intersection` with ``sort=True``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When ``sort=True`` is provided to :meth:`Index.intersection`, the values are always sorted. In 0.24.0,
+the values would not be sorted when ``self`` and ``other`` were identical. Pass ``sort=False`` to not
+sort the values. This matches the behavior of pandas 0.23.4 and earlier.
+
+**Behavior in 0.23.4**
+
+.. ipython:: python
+
+ In [2]: idx = pd.Index(['b', 'a'])
+
+ In [3]: idx.intersection(idx) # sort was not a keyword.
+ Out[3]: Index(['b', 'a'], dtype='object')
+
+**Behavior in 0.24.0**
+
+.. ipython:: python
+
+ In [5]: idx.intersection(idx) # sort=True by default. Don't sort identical.
+ Out[5]: Index(['b', 'a'], dtype='object')
+
+ In [6]: idx.intersection(idx, sort=True)
+ Out[6]: Index(['b', 'a'], dtype='object')
+
+**New Behavior**
+
+.. ipython:: python
+
+ idx.intersection(idx) # sort=False by default
+ idx.intersection(idx, sort=True)
+
.. _whatsnew_0241.regressions:
Fixed Regressions
-^^^^^^^^^^^^^^^^^
+~~~~~~~~~~~~~~~~~
- Bug in :meth:`DataFrame.itertuples` with ``records`` orient raising an ``AttributeError`` when the ``DataFrame`` contained more than 255 columns (:issue:`24939`)
- Bug in :meth:`DataFrame.itertuples` orient converting integer column names to strings prepended with an underscore (:issue:`24940`)
@@ -28,7 +102,7 @@ Fixed Regressions
.. _whatsnew_0241.enhancements:
Enhancements
-^^^^^^^^^^^^
+~~~~~~~~~~~~
.. _whatsnew_0241.bug_fixes:
diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
index 4a3440e14ba14..c9473149d8a84 100644
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -233,11 +233,14 @@ def fast_unique_multiple(list arrays, sort: bool=True):
if val not in table:
table[val] = stub
uniques.append(val)
- if sort:
+ if sort is None:
try:
uniques.sort()
except Exception:
+ # TODO: RuntimeWarning?
pass
+ elif sort:
+ uniques.sort()
return uniques
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 3d176012df22b..12880ed93cc2a 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -2245,18 +2245,34 @@ def _get_reconciled_name_object(self, other):
return self._shallow_copy(name=name)
return self
- def union(self, other, sort=True):
+ def union(self, other, sort=None):
"""
Form the union of two Index objects.
Parameters
----------
other : Index or array-like
- sort : bool, default True
- Sort the resulting index if possible
+ sort : bool or None, default None
+ Whether to sort the resulting Index.
+
+ * None : Sort the result, except when
+
+ 1. `self` and `other` are equal.
+ 2. `self` or `other` has length 0.
+ 3. Some values in `self` or `other` cannot be compared.
+ A RuntimeWarning is issued in this case.
+
+ * True : sort the result. A TypeError is raised when the
+ values cannot be compared.
+ * False : do not sort the result.
.. versionadded:: 0.24.0
+ .. versionchanged:: 0.24.1
+
+ Changed the default `sort` to None, matching the
+ behavior of pandas 0.23.4 and earlier.
+
Returns
-------
union : Index
@@ -2273,10 +2289,16 @@ def union(self, other, sort=True):
other = ensure_index(other)
if len(other) == 0 or self.equals(other):
- return self._get_reconciled_name_object(other)
+ result = self._get_reconciled_name_object(other)
+ if sort:
+ result = result.sort_values()
+ return result
if len(self) == 0:
- return other._get_reconciled_name_object(self)
+ result = other._get_reconciled_name_object(self)
+ if sort:
+ result = result.sort_values()
+ return result
# TODO: is_dtype_union_equal is a hack around
# 1. buggy set ops with duplicates (GH #13432)
@@ -2319,13 +2341,16 @@ def union(self, other, sort=True):
else:
result = lvals
- if sort:
+ if sort is None:
try:
result = sorting.safe_sort(result)
except TypeError as e:
warnings.warn("{}, sort order is undefined for "
"incomparable objects".format(e),
RuntimeWarning, stacklevel=3)
+ elif sort:
+ # raise if not sortable.
+ result = sorting.safe_sort(result)
# for subclasses
return self._wrap_setop_result(other, result)
@@ -2342,8 +2367,12 @@ def intersection(self, other, sort=False):
Parameters
----------
other : Index or array-like
- sort : bool, default False
- Sort the resulting index if possible
+ sort : bool or None, default False
+ Whether to sort the resulting index.
+
+ * False : do not sort the result.
+ * True : sort the result. A TypeError is raised when the
+ values cannot be compared.
.. versionadded:: 0.24.0
@@ -2367,7 +2396,10 @@ def intersection(self, other, sort=False):
other = ensure_index(other)
if self.equals(other):
- return self._get_reconciled_name_object(other)
+ result = self._get_reconciled_name_object(other)
+ if sort:
+ result = result.sort_values()
+ return result
if not is_dtype_equal(self.dtype, other.dtype):
this = self.astype('O')
@@ -2415,7 +2447,7 @@ def intersection(self, other, sort=False):
return taken
- def difference(self, other, sort=True):
+ def difference(self, other, sort=None):
"""
Return a new Index with elements from the index that are not in
`other`.
@@ -2425,11 +2457,24 @@ def difference(self, other, sort=True):
Parameters
----------
other : Index or array-like
- sort : bool, default True
- Sort the resulting index if possible
+ sort : bool or None, default None
+ Whether to sort the resulting index. By default, the
+ values are attempted to be sorted, but any TypeError from
+ incomparable elements is caught by pandas.
+
+ * None : Attempt to sort the result, but catch any TypeErrors
+ from comparing incomparable elements.
+ * False : Do not sort the result.
+ * True : Sort the result, raising a TypeError if any elements
+ cannot be compared.
.. versionadded:: 0.24.0
+ .. versionchanged:: 0.24.1
+
+ Added the `None` option, which matches the behavior of
+ pandas 0.23.4 and earlier.
+
Returns
-------
difference : Index
@@ -2460,15 +2505,17 @@ def difference(self, other, sort=True):
label_diff = np.setdiff1d(np.arange(this.size), indexer,
assume_unique=True)
the_diff = this.values.take(label_diff)
- if sort:
+ if sort is None:
try:
the_diff = sorting.safe_sort(the_diff)
except TypeError:
pass
+ elif sort:
+ the_diff = sorting.safe_sort(the_diff)
return this._shallow_copy(the_diff, name=result_name, freq=None)
- def symmetric_difference(self, other, result_name=None, sort=True):
+ def symmetric_difference(self, other, result_name=None, sort=None):
"""
Compute the symmetric difference of two Index objects.
@@ -2476,11 +2523,24 @@ def symmetric_difference(self, other, result_name=None, sort=True):
----------
other : Index or array-like
result_name : str
- sort : bool, default True
- Sort the resulting index if possible
+ sort : bool or None, default None
+ Whether to sort the resulting index. By default, the
+ values are attempted to be sorted, but any TypeError from
+ incomparable elements is caught by pandas.
+
+ * None : Attempt to sort the result, but catch any TypeErrors
+ from comparing incomparable elements.
+ * False : Do not sort the result.
+ * True : Sort the result, raising a TypeError if any elements
+ cannot be compared.
.. versionadded:: 0.24.0
+ .. versionchanged:: 0.24.1
+
+ Added the `None` option, which matches the behavior of
+ pandas 0.23.4 and earlier.
+
Returns
-------
symmetric_difference : Index
@@ -2524,11 +2584,13 @@ def symmetric_difference(self, other, result_name=None, sort=True):
right_diff = other.values.take(right_indexer)
the_diff = _concat._concat_compat([left_diff, right_diff])
- if sort:
+ if sort is None:
try:
the_diff = sorting.safe_sort(the_diff)
except TypeError:
pass
+ elif sort:
+ the_diff = sorting.safe_sort(the_diff)
attribs = self._get_attributes_dict()
attribs['name'] = result_name
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 16af3fe8eef26..32a5a09359019 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -2879,18 +2879,34 @@ def equal_levels(self, other):
return False
return True
- def union(self, other, sort=True):
+ def union(self, other, sort=None):
"""
Form the union of two MultiIndex objects
Parameters
----------
other : MultiIndex or array / Index of tuples
- sort : bool, default True
- Sort the resulting MultiIndex if possible
+ sort : bool or None, default None
+ Whether to sort the resulting Index.
+
+ * None : Sort the result, except when
+
+ 1. `self` and `other` are equal.
+ 2. `self` has length 0.
+ 3. Some values in `self` or `other` cannot be compared.
+ A RuntimeWarning is issued in this case.
+
+ * True : sort the result. A TypeError is raised when the
+ values cannot be compared.
+ * False : do not sort the result.
.. versionadded:: 0.24.0
+ .. versionchanged:: 0.24.1
+
+ Changed the default `sort` to None, matching the
+ behavior of pandas 0.23.4 and earlier.
+
Returns
-------
Index
@@ -2901,8 +2917,12 @@ def union(self, other, sort=True):
other, result_names = self._convert_can_do_setop(other)
if len(other) == 0 or self.equals(other):
+ if sort:
+ return self.sort_values()
return self
+ # TODO: Index.union returns other when `len(self)` is 0.
+
uniq_tuples = lib.fast_unique_multiple([self._ndarray_values,
other._ndarray_values],
sort=sort)
@@ -2917,7 +2937,7 @@ def intersection(self, other, sort=False):
Parameters
----------
other : MultiIndex or array / Index of tuples
- sort : bool, default True
+ sort : bool, default False
Sort the resulting MultiIndex if possible
.. versionadded:: 0.24.0
@@ -2934,6 +2954,8 @@ def intersection(self, other, sort=False):
other, result_names = self._convert_can_do_setop(other)
if self.equals(other):
+ if sort:
+ return self.sort_values()
return self
self_tuples = self._ndarray_values
@@ -2951,7 +2973,7 @@ def intersection(self, other, sort=False):
return MultiIndex.from_arrays(lzip(*uniq_tuples), sortorder=0,
names=result_names)
- def difference(self, other, sort=True):
+ def difference(self, other, sort=None):
"""
Compute set difference of two MultiIndex objects
@@ -2971,6 +2993,8 @@ def difference(self, other, sort=True):
other, result_names = self._convert_can_do_setop(other)
if len(other) == 0:
+ if sort:
+ return self.sort_values()
return self
if self.equals(other):
diff --git a/pandas/tests/indexes/multi/test_set_ops.py b/pandas/tests/indexes/multi/test_set_ops.py
index 208d6cf1c639f..6a42e29aa8f5c 100644
--- a/pandas/tests/indexes/multi/test_set_ops.py
+++ b/pandas/tests/indexes/multi/test_set_ops.py
@@ -174,7 +174,10 @@ def test_difference(idx, sort):
# name from empty array
result = first.difference([], sort=sort)
- assert first.equals(result)
+ if sort:
+ assert first.sort_values().equals(result)
+ else:
+ assert first.equals(result)
assert first.names == result.names
# name from non-empty array
@@ -189,6 +192,36 @@ def test_difference(idx, sort):
first.difference([1, 2, 3, 4, 5], sort=sort)
+def test_difference_sort_special():
+ idx = pd.MultiIndex.from_product([[1, 0], ['a', 'b']])
+ # sort=None, the default
+ result = idx.difference([])
+ tm.assert_index_equal(result, idx)
+
+ result = idx.difference([], sort=True)
+ expected = pd.MultiIndex.from_product([[0, 1], ['a', 'b']])
+ tm.assert_index_equal(result, expected)
+
+
+def test_difference_sort_incomparable():
+ idx = pd.MultiIndex.from_product([[1, pd.Timestamp('2000'), 2],
+ ['a', 'b']])
+
+ other = pd.MultiIndex.from_product([[3, pd.Timestamp('2000'), 4],
+ ['c', 'd']])
+ # sort=None, the default
+ # result = idx.difference(other)
+ # tm.assert_index_equal(result, idx)
+
+ # sort=False
+ result = idx.difference(other)
+ tm.assert_index_equal(result, idx)
+
+ # sort=True, raises
+ with pytest.raises(TypeError):
+ idx.difference(other, sort=True)
+
+
@pytest.mark.parametrize("sort", [True, False])
def test_union(idx, sort):
piece1 = idx[:5][::-1]
@@ -203,10 +236,16 @@ def test_union(idx, sort):
# corner case, pass self or empty thing:
the_union = idx.union(idx, sort=sort)
- assert the_union is idx
+ if sort:
+ tm.assert_index_equal(the_union, idx.sort_values())
+ else:
+ assert the_union is idx
the_union = idx.union(idx[:0], sort=sort)
- assert the_union is idx
+ if sort:
+ tm.assert_index_equal(the_union, idx.sort_values())
+ else:
+ assert the_union is idx
# won't work in python 3
# tuples = _index.values
@@ -238,7 +277,10 @@ def test_intersection(idx, sort):
# corner case, pass self
the_int = idx.intersection(idx, sort=sort)
- assert the_int is idx
+ if sort:
+ tm.assert_index_equal(the_int, idx.sort_values())
+ else:
+ assert the_int is idx
# empty intersection: disjoint
empty = idx[:2].intersection(idx[2:], sort=sort)
@@ -249,3 +291,47 @@ def test_intersection(idx, sort):
# tuples = _index.values
# result = _index & tuples
# assert result.equals(tuples)
+
+
+def test_intersect_equal_sort():
+ idx = pd.MultiIndex.from_product([[1, 0], ['a', 'b']])
+ sorted_ = pd.MultiIndex.from_product([[0, 1], ['a', 'b']])
+ tm.assert_index_equal(idx.intersection(idx, sort=False), idx)
+ tm.assert_index_equal(idx.intersection(idx, sort=True), sorted_)
+
+
+@pytest.mark.parametrize('slice_', [slice(None), slice(0)])
+def test_union_sort_other_empty(slice_):
+ # https://github.com/pandas-dev/pandas/issues/24959
+ idx = pd.MultiIndex.from_product([[1, 0], ['a', 'b']])
+
+ # default, sort=None
+ other = idx[slice_]
+ tm.assert_index_equal(idx.union(other), idx)
+ # MultiIndex does not special case empty.union(idx)
+ # tm.assert_index_equal(other.union(idx), idx)
+
+ # sort=False
+ tm.assert_index_equal(idx.union(other, sort=False), idx)
+
+ # sort=True
+ result = idx.union(other, sort=True)
+ expected = pd.MultiIndex.from_product([[0, 1], ['a', 'b']])
+ tm.assert_index_equal(result, expected)
+
+
+def test_union_sort_other_incomparable():
+ # https://github.com/pandas-dev/pandas/issues/24959
+ idx = pd.MultiIndex.from_product([[1, pd.Timestamp('2000')], ['a', 'b']])
+
+ # default, sort=None
+ result = idx.union(idx[:1])
+ tm.assert_index_equal(result, idx)
+
+ # sort=False
+ result = idx.union(idx[:1], sort=False)
+ tm.assert_index_equal(result, idx)
+
+ # sort=True
+ with pytest.raises(TypeError, match='Cannot compare'):
+ idx.union(idx[:1], sort=True)
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index 20e439de46bde..4e8555cbe1aab 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -3,6 +3,7 @@
from collections import defaultdict
from datetime import datetime, timedelta
import math
+import operator
import sys
import numpy as np
@@ -695,7 +696,10 @@ def test_intersection(self, sort):
# Corner cases
inter = first.intersection(first, sort=sort)
- assert inter is first
+ if sort:
+ tm.assert_index_equal(inter, first.sort_values())
+ else:
+ assert inter is first
@pytest.mark.parametrize("index2,keeps_name", [
(Index([3, 4, 5, 6, 7], name="index"), True), # preserve same name
@@ -770,6 +774,12 @@ def test_intersect_nosort(self):
expected = pd.Index(['b', 'a'])
tm.assert_index_equal(result, expected)
+ def test_intersect_equal_sort(self):
+ idx = pd.Index(['c', 'a', 'b'])
+ sorted_ = pd.Index(['a', 'b', 'c'])
+ tm.assert_index_equal(idx.intersection(idx, sort=False), idx)
+ tm.assert_index_equal(idx.intersection(idx, sort=True), sorted_)
+
@pytest.mark.parametrize("sort", [True, False])
def test_chained_union(self, sort):
# Chained unions handles names correctly
@@ -799,6 +809,41 @@ def test_union(self, sort):
tm.assert_index_equal(union, everything.sort_values())
assert tm.equalContents(union, everything)
+ @pytest.mark.parametrize('slice_', [slice(None), slice(0)])
+ def test_union_sort_other_special(self, slice_):
+ # https://github.com/pandas-dev/pandas/issues/24959
+
+ idx = pd.Index([1, 0, 2])
+ # default, sort=None
+ other = idx[slice_]
+ tm.assert_index_equal(idx.union(other), idx)
+ tm.assert_index_equal(other.union(idx), idx)
+
+ # sort=False
+ tm.assert_index_equal(idx.union(other, sort=False), idx)
+
+ # sort=True
+ result = idx.union(other, sort=True)
+ expected = pd.Index([0, 1, 2])
+ tm.assert_index_equal(result, expected)
+
+ def test_union_sort_other_incomparable(self):
+ # https://github.com/pandas-dev/pandas/issues/24959
+ idx = pd.Index([1, pd.Timestamp('2000')])
+ # default, sort=None
+ with tm.assert_produces_warning(RuntimeWarning):
+ result = idx.union(idx[:1])
+
+ tm.assert_index_equal(result, idx)
+
+ # sort=True
+ with pytest.raises(TypeError, match='.*'):
+ idx.union(idx[:1], sort=True)
+
+ # sort=False
+ result = idx.union(idx[:1], sort=False)
+ tm.assert_index_equal(result, idx)
+
@pytest.mark.parametrize("klass", [
np.array, Series, list])
@pytest.mark.parametrize("sort", [True, False])
@@ -815,19 +860,20 @@ def test_union_from_iterables(self, klass, sort):
tm.assert_index_equal(result, everything.sort_values())
assert tm.equalContents(result, everything)
- @pytest.mark.parametrize("sort", [True, False])
+ @pytest.mark.parametrize("sort", [None, True, False])
def test_union_identity(self, sort):
# TODO: replace with fixturesult
first = self.strIndex[5:20]
union = first.union(first, sort=sort)
- assert union is first
+ # i.e. identity is not preserved when sort is True
+ assert (union is first) is (not sort)
union = first.union([], sort=sort)
- assert union is first
+ assert (union is first) is (not sort)
union = Index([]).union(first, sort=sort)
- assert union is first
+ assert (union is first) is (not sort)
@pytest.mark.parametrize("first_list", [list('ba'), list()])
@pytest.mark.parametrize("second_list", [list('ab'), list()])
@@ -1054,6 +1100,29 @@ def test_symmetric_difference(self, sort):
assert tm.equalContents(result, expected)
assert result.name is None
+ @pytest.mark.parametrize('opname', ['difference', 'symmetric_difference'])
+ def test_difference_incomparable(self, opname):
+ a = pd.Index([3, pd.Timestamp('2000'), 1])
+ b = pd.Index([2, pd.Timestamp('1999'), 1])
+ op = operator.methodcaller(opname, b)
+
+ # sort=None, the default
+ result = op(a)
+ expected = pd.Index([3, pd.Timestamp('2000'), 2, pd.Timestamp('1999')])
+ if opname == 'difference':
+ expected = expected[:2]
+ tm.assert_index_equal(result, expected)
+
+ # sort=False
+ op = operator.methodcaller(opname, b, sort=False)
+ result = op(a)
+ tm.assert_index_equal(result, expected)
+
+ # sort=True, raises
+ op = operator.methodcaller(opname, b, sort=True)
+ with pytest.raises(TypeError, match='Cannot compare'):
+ op(a)
+
@pytest.mark.parametrize("sort", [True, False])
def test_symmetric_difference_mi(self, sort):
index1 = MultiIndex.from_tuples(self.tuples)
| Closes https://github.com/pandas-dev/pandas/issues/24959
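In short, the new semantics (per the whatsnew entry in the diff above):

```python
import pandas as pd

idx = pd.Index(['b', 'a'])
idx.union(idx)             # sort=None (default): identical operands stay unsorted
idx.union(idx, sort=True)  # now means "always sort" -> Index(['a', 'b'], ...)
```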
Haven't done MultiIndex yet, just opening for discussion on *whether* we should do this for 0.24.1. | https://api.github.com/repos/pandas-dev/pandas/pulls/25007 | 2019-01-29T18:02:12Z | 2019-02-04T22:12:40Z | null | 2019-02-04T22:12:43Z |
Backport PR #24973: fix for BUG: grouping with tz-aware: Values falls… | diff --git a/doc/source/whatsnew/v0.24.1.rst b/doc/source/whatsnew/v0.24.1.rst
index 7647e199030d2..8f4c3982c745f 100644
--- a/doc/source/whatsnew/v0.24.1.rst
+++ b/doc/source/whatsnew/v0.24.1.rst
@@ -70,6 +70,9 @@ Bug Fixes
-
-
+**Reshaping**
+
+- Bug in :meth:`DataFrame.groupby` with :class:`Grouper` when there is a time change (DST) and grouping frequency is ``'1d'`` (:issue:`24972`)
**Visualization**
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index 6822225273906..7723827ff478a 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -30,8 +30,7 @@
from pandas.core.indexes.timedeltas import TimedeltaIndex, timedelta_range
from pandas.tseries.frequencies import to_offset
-from pandas.tseries.offsets import (
- DateOffset, Day, Nano, Tick, delta_to_nanoseconds)
+from pandas.tseries.offsets import DateOffset, Day, Nano, Tick
_shared_docs_kwargs = dict()
@@ -1613,20 +1612,20 @@ def _get_timestamp_range_edges(first, last, offset, closed='left', base=0):
A tuple of length 2, containing the adjusted pd.Timestamp objects.
"""
if isinstance(offset, Tick):
- is_day = isinstance(offset, Day)
- day_nanos = delta_to_nanoseconds(timedelta(1))
-
- # #1165 and #24127
- if (is_day and not offset.nanos % day_nanos) or not is_day:
- first, last = _adjust_dates_anchored(first, last, offset,
- closed=closed, base=base)
- if is_day and first.tz is not None:
- # _adjust_dates_anchored assumes 'D' means 24H, but first/last
- # might contain a DST transition (23H, 24H, or 25H).
- # Ensure first/last snap to midnight.
- first = first.normalize()
- last = last.normalize()
- return first, last
+ if isinstance(offset, Day):
+ # _adjust_dates_anchored assumes 'D' means 24H, but first/last
+ # might contain a DST transition (23H, 24H, or 25H).
+ # So "pretend" the dates are naive when adjusting the endpoints
+ tz = first.tz
+ first = first.tz_localize(None)
+ last = last.tz_localize(None)
+
+ first, last = _adjust_dates_anchored(first, last, offset,
+ closed=closed, base=base)
+ if isinstance(offset, Day):
+ first = first.tz_localize(tz)
+ last = last.tz_localize(tz)
+ return first, last
else:
first = first.normalize()
diff --git a/pandas/tests/resample/test_datetime_index.py b/pandas/tests/resample/test_datetime_index.py
index 73995cbe79ecd..b743aeecdc756 100644
--- a/pandas/tests/resample/test_datetime_index.py
+++ b/pandas/tests/resample/test_datetime_index.py
@@ -1276,6 +1276,21 @@ def test_resample_across_dst():
assert_frame_equal(result, expected)
+def test_groupby_with_dst_time_change():
+ # GH 24972
+ index = pd.DatetimeIndex([1478064900001000000, 1480037118776792000],
+ tz='UTC').tz_convert('America/Chicago')
+
+ df = pd.DataFrame([1, 2], index=index)
+ result = df.groupby(pd.Grouper(freq='1d')).last()
+ expected_index_values = pd.date_range('2016-11-02', '2016-11-24',
+ freq='d', tz='America/Chicago')
+
+ index = pd.DatetimeIndex(expected_index_values)
+ expected = pd.DataFrame([1.0] + ([np.nan] * 21) + [2.0], index=index)
+ assert_frame_equal(result, expected)
+
+
def test_resample_dst_anchor():
# 5172
dti = DatetimeIndex([datetime(2012, 11, 4, 23)], tz='US/Eastern')
| … after last bin
| https://api.github.com/repos/pandas-dev/pandas/pulls/25005 | 2019-01-29T16:12:28Z | 2019-01-29T16:46:52Z | 2019-01-29T16:46:52Z | 2019-01-29T16:46:56Z |
TST: Split out test_pytables.py to sub-module of tests | diff --git a/pandas/tests/io/pytables/__init__.py b/pandas/tests/io/pytables/__init__.py
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/pandas/tests/io/pytables/base.py b/pandas/tests/io/pytables/base.py
new file mode 100644
index 0000000000000..0df4d2d1f5740
--- /dev/null
+++ b/pandas/tests/io/pytables/base.py
@@ -0,0 +1,98 @@
+from contextlib import contextmanager
+import os
+import tempfile
+
+import pandas.util.testing as tm
+
+from pandas.io.pytables import HDFStore
+
+
+class Base(object):
+
+ @classmethod
+ def setup_class(cls):
+
+ # Pytables 3.0.0 deprecates lots of things
+ tm.reset_testing_mode()
+
+ @classmethod
+ def teardown_class(cls):
+
+ # Pytables 3.0.0 deprecates lots of things
+ tm.set_testing_mode()
+
+ def setup_method(self, method):
+ self.path = 'tmp.__%s__.h5' % tm.rands(10)
+
+ def teardown_method(self, method):
+ pass
+
+
+def safe_close(store):
+ try:
+ if store is not None:
+ store.close()
+ except IOError:
+ pass
+
+
+@contextmanager
+def ensure_clean_store(path, mode='a', complevel=None, complib=None,
+ fletcher32=False):
+
+ store = None
+ try:
+
+ # put in the temporary path if we don't have one already
+ if not len(os.path.dirname(path)):
+ path = create_tempfile(path)
+
+ store = HDFStore(path, mode=mode, complevel=complevel,
+ complib=complib, fletcher32=False)
+ yield store
+ finally:
+ safe_close(store)
+ if mode == 'w' or mode == 'a':
+ safe_remove(path)
+
+
+@contextmanager
+def ensure_clean_path(path):
+ """
+ return essentially a named temporary file that is not opened
+ and deleted on exiting; if path is a list, then create and
+ return list of filenames
+ """
+ filenames = []
+ try:
+ if isinstance(path, list):
+ filenames = [create_tempfile(p) for p in path]
+ yield filenames
+ else:
+ filenames = [create_tempfile(path)]
+ yield filenames[0]
+ finally:
+ for f in filenames:
+ safe_remove(f)
+
+
+def safe_remove(path):
+ if path is not None:
+ try:
+ os.remove(path)
+ except OSError:
+ pass
+
+
+def create_tempfile(path):
+ """ create an unopened named temporary file """
+ return os.path.join(tempfile.gettempdir(), path)
+
+
+def maybe_remove(store, key):
+ """For tests using tables, try removing the table to be sure there is
+ no content from previous tests using the same table name."""
+ try:
+ store.remove(key)
+ except (ValueError, KeyError):
+ pass
diff --git a/pandas/tests/io/pytables/test_complex_values.py b/pandas/tests/io/pytables/test_complex_values.py
new file mode 100644
index 0000000000000..96634799c5a10
--- /dev/null
+++ b/pandas/tests/io/pytables/test_complex_values.py
@@ -0,0 +1,159 @@
+from warnings import catch_warnings
+
+import numpy as np
+import pytest
+
+from pandas import DataFrame, Panel, Series, concat
+from pandas.util.testing import (
+ assert_frame_equal, assert_panel_equal, assert_series_equal)
+
+from pandas.io.pytables import read_hdf
+
+from .base import Base, ensure_clean_path, ensure_clean_store
+
+
+class TestHDFComplexValues(Base):
+ # GH10447
+
+ def test_complex_fixed(self):
+ df = DataFrame(np.random.rand(4, 5).astype(np.complex64),
+ index=list('abcd'),
+ columns=list('ABCDE'))
+
+ with ensure_clean_path(self.path) as path:
+ df.to_hdf(path, 'df')
+ reread = read_hdf(path, 'df')
+ assert_frame_equal(df, reread)
+
+ df = DataFrame(np.random.rand(4, 5).astype(np.complex128),
+ index=list('abcd'),
+ columns=list('ABCDE'))
+ with ensure_clean_path(self.path) as path:
+ df.to_hdf(path, 'df')
+ reread = read_hdf(path, 'df')
+ assert_frame_equal(df, reread)
+
+ def test_complex_table(self):
+ df = DataFrame(np.random.rand(4, 5).astype(np.complex64),
+ index=list('abcd'),
+ columns=list('ABCDE'))
+
+ with ensure_clean_path(self.path) as path:
+ df.to_hdf(path, 'df', format='table')
+ reread = read_hdf(path, 'df')
+ assert_frame_equal(df, reread)
+
+ df = DataFrame(np.random.rand(4, 5).astype(np.complex128),
+ index=list('abcd'),
+ columns=list('ABCDE'))
+
+ with ensure_clean_path(self.path) as path:
+ df.to_hdf(path, 'df', format='table', mode='w')
+ reread = read_hdf(path, 'df')
+ assert_frame_equal(df, reread)
+
+ def test_complex_mixed_fixed(self):
+ complex64 = np.array([1.0 + 1.0j, 1.0 + 1.0j,
+ 1.0 + 1.0j, 1.0 + 1.0j], dtype=np.complex64)
+ complex128 = np.array([1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j],
+ dtype=np.complex128)
+ df = DataFrame({'A': [1, 2, 3, 4],
+ 'B': ['a', 'b', 'c', 'd'],
+ 'C': complex64,
+ 'D': complex128,
+ 'E': [1.0, 2.0, 3.0, 4.0]},
+ index=list('abcd'))
+ with ensure_clean_path(self.path) as path:
+ df.to_hdf(path, 'df')
+ reread = read_hdf(path, 'df')
+ assert_frame_equal(df, reread)
+
+ def test_complex_mixed_table(self):
+ complex64 = np.array([1.0 + 1.0j, 1.0 + 1.0j,
+ 1.0 + 1.0j, 1.0 + 1.0j], dtype=np.complex64)
+ complex128 = np.array([1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j],
+ dtype=np.complex128)
+ df = DataFrame({'A': [1, 2, 3, 4],
+ 'B': ['a', 'b', 'c', 'd'],
+ 'C': complex64,
+ 'D': complex128,
+ 'E': [1.0, 2.0, 3.0, 4.0]},
+ index=list('abcd'))
+
+ with ensure_clean_store(self.path) as store:
+ store.append('df', df, data_columns=['A', 'B'])
+ result = store.select('df', where='A>2')
+ assert_frame_equal(df.loc[df.A > 2], result)
+
+ with ensure_clean_path(self.path) as path:
+ df.to_hdf(path, 'df', format='table')
+ reread = read_hdf(path, 'df')
+ assert_frame_equal(df, reread)
+
+ @pytest.mark.filterwarnings("ignore:\\nPanel:FutureWarning")
+ def test_complex_across_dimensions_fixed(self):
+ with catch_warnings(record=True):
+ complex128 = np.array(
+ [1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j])
+ s = Series(complex128, index=list('abcd'))
+ df = DataFrame({'A': s, 'B': s})
+ p = Panel({'One': df, 'Two': df})
+
+ objs = [s, df, p]
+ comps = [assert_series_equal, assert_frame_equal,
+ assert_panel_equal]
+ for obj, comp in zip(objs, comps):
+ with ensure_clean_path(self.path) as path:
+ obj.to_hdf(path, 'obj', format='fixed')
+ reread = read_hdf(path, 'obj')
+ comp(obj, reread)
+
+ @pytest.mark.filterwarnings("ignore:\\nPanel:FutureWarning")
+ def test_complex_across_dimensions(self):
+ complex128 = np.array([1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j])
+ s = Series(complex128, index=list('abcd'))
+ df = DataFrame({'A': s, 'B': s})
+
+ with catch_warnings(record=True):
+ p = Panel({'One': df, 'Two': df})
+
+ objs = [df, p]
+ comps = [assert_frame_equal, assert_panel_equal]
+ for obj, comp in zip(objs, comps):
+ with ensure_clean_path(self.path) as path:
+ obj.to_hdf(path, 'obj', format='table')
+ reread = read_hdf(path, 'obj')
+ comp(obj, reread)
+
+ def test_complex_indexing_error(self):
+ complex128 = np.array([1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j],
+ dtype=np.complex128)
+ df = DataFrame({'A': [1, 2, 3, 4],
+ 'B': ['a', 'b', 'c', 'd'],
+ 'C': complex128},
+ index=list('abcd'))
+ with ensure_clean_store(self.path) as store:
+ pytest.raises(TypeError, store.append,
+ 'df', df, data_columns=['C'])
+
+ def test_complex_series_error(self):
+ complex128 = np.array([1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j])
+ s = Series(complex128, index=list('abcd'))
+
+ with ensure_clean_path(self.path) as path:
+ pytest.raises(TypeError, s.to_hdf, path, 'obj', format='t')
+
+ with ensure_clean_path(self.path) as path:
+ s.to_hdf(path, 'obj', format='t', index=False)
+ reread = read_hdf(path, 'obj')
+ assert_series_equal(s, reread)
+
+ def test_complex_append(self):
+ df = DataFrame({'a': np.random.randn(100).astype(np.complex128),
+ 'b': np.random.randn(100)})
+
+ with ensure_clean_store(self.path) as store:
+ store.append('df', df, data_columns=['b'])
+ store.append('df', df)
+ result = store.select('df')
+ assert_frame_equal(concat([df, df], 0), result)
diff --git a/pandas/tests/io/test_pytables.py b/pandas/tests/io/pytables/test_pytables.py
similarity index 89%
rename from pandas/tests/io/test_pytables.py
rename to pandas/tests/io/pytables/test_pytables.py
index 517a3e059469c..818c7d5ba618e 100644
--- a/pandas/tests/io/test_pytables.py
+++ b/pandas/tests/io/pytables/test_pytables.py
@@ -1,9 +1,7 @@
-from contextlib import contextmanager
import datetime
from datetime import timedelta
from distutils.version import LooseVersion
import os
-import tempfile
from warnings import catch_warnings, simplefilter
import numpy as np
@@ -23,7 +21,7 @@
date_range, isna, timedelta_range)
import pandas.util.testing as tm
from pandas.util.testing import (
- assert_frame_equal, assert_panel_equal, assert_series_equal, set_timezone)
+ assert_frame_equal, assert_panel_equal, assert_series_equal)
from pandas.io import pytables as pytables # noqa:E402
from pandas.io.formats.printing import pprint_thing
@@ -31,6 +29,10 @@
ClosedFileError, HDFStore, PossibleDataLossError, Term, read_hdf)
from pandas.io.pytables import TableIterator # noqa:E402
+from .base import (
+ Base, create_tempfile, ensure_clean_path, ensure_clean_store, maybe_remove,
+ safe_close, safe_remove)
+
tables = pytest.importorskip('tables')
@@ -42,67 +44,6 @@
"ignore:object name:tables.exceptions.NaturalNameWarning"
)
-# contextmanager to ensure the file cleanup
-
-
-def safe_remove(path):
- if path is not None:
- try:
- os.remove(path)
- except OSError:
- pass
-
-
-def safe_close(store):
- try:
- if store is not None:
- store.close()
- except IOError:
- pass
-
-
-def create_tempfile(path):
- """ create an unopened named temporary file """
- return os.path.join(tempfile.gettempdir(), path)
-
-
-@contextmanager
-def ensure_clean_store(path, mode='a', complevel=None, complib=None,
- fletcher32=False):
-
- try:
-
- # put in the temporary path if we don't have one already
- if not len(os.path.dirname(path)):
- path = create_tempfile(path)
-
- store = HDFStore(path, mode=mode, complevel=complevel,
- complib=complib, fletcher32=False)
- yield store
- finally:
- safe_close(store)
- if mode == 'w' or mode == 'a':
- safe_remove(path)
-
-
-@contextmanager
-def ensure_clean_path(path):
- """
- return essentially a named temporary file that is not opened
- and deleted on existing; if path is a list, then create and
- return list of filenames
- """
- try:
- if isinstance(path, list):
- filenames = [create_tempfile(p) for p in path]
- yield filenames
- else:
- filenames = [create_tempfile(path)]
- yield filenames[0]
- finally:
- for f in filenames:
- safe_remove(f)
-
# set these parameters so we don't have file sharing
tables.parameters.MAX_NUMEXPR_THREADS = 1
@@ -110,36 +51,6 @@ def ensure_clean_path(path):
tables.parameters.MAX_THREADS = 1
-def _maybe_remove(store, key):
- """For tests using tables, try removing the table to be sure there is
- no content from previous tests using the same table name."""
- try:
- store.remove(key)
- except (ValueError, KeyError):
- pass
-
-
-class Base(object):
-
- @classmethod
- def setup_class(cls):
-
- # Pytables 3.0.0 deprecates lots of things
- tm.reset_testing_mode()
-
- @classmethod
- def teardown_class(cls):
-
- # Pytables 3.0.0 deprecates lots of things
- tm.set_testing_mode()
-
- def setup_method(self, method):
- self.path = 'tmp.__%s__.h5' % tm.rands(10)
-
- def teardown_method(self, method):
- pass
-
-
@pytest.mark.single
@pytest.mark.filterwarnings("ignore:\\nPanel:FutureWarning")
class TestHDFStore(Base):
@@ -259,24 +170,24 @@ def test_api(self):
path = store._path
df = tm.makeDataFrame()
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df.iloc[:10], append=True, format='table')
store.append('df', df.iloc[10:], append=True, format='table')
assert_frame_equal(store.select('df'), df)
# append to False
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df.iloc[:10], append=False, format='table')
store.append('df', df.iloc[10:], append=True, format='table')
assert_frame_equal(store.select('df'), df)
# formats
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df.iloc[:10], append=False, format='table')
store.append('df', df.iloc[10:], append=True, format='table')
assert_frame_equal(store.select('df'), df)
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df.iloc[:10], append=False, format='table')
store.append('df', df.iloc[10:], append=True, format=None)
assert_frame_equal(store.select('df'), df)
@@ -307,16 +218,16 @@ def test_api_default_format(self):
df = tm.makeDataFrame()
pd.set_option('io.hdf.default_format', 'fixed')
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.put('df', df)
assert not store.get_storer('df').is_table
pytest.raises(ValueError, store.append, 'df2', df)
pd.set_option('io.hdf.default_format', 'table')
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.put('df', df)
assert store.get_storer('df').is_table
- _maybe_remove(store, 'df2')
+ maybe_remove(store, 'df2')
store.append('df2', df)
assert store.get_storer('df').is_table
@@ -455,7 +366,7 @@ def test_versioning(self):
store['a'] = tm.makeTimeSeries()
store['b'] = tm.makeDataFrame()
df = tm.makeTimeDataFrame()
- _maybe_remove(store, 'df1')
+ maybe_remove(store, 'df1')
store.append('df1', df[:10])
store.append('df1', df[10:])
assert store.root.a._v_attrs.pandas_version == '0.15.2'
@@ -463,7 +374,7 @@ def test_versioning(self):
assert store.root.df1._v_attrs.pandas_version == '0.15.2'
# write a file and wipe its versioning
- _maybe_remove(store, 'df2')
+ maybe_remove(store, 'df2')
store.append('df2', df)
# this is an error because its table_type is appendable, but no
@@ -717,7 +628,7 @@ def test_put(self):
# node does not currently exist, test _is_table_type returns False
# in this case
- # _maybe_remove(store, 'f')
+ # maybe_remove(store, 'f')
# pytest.raises(ValueError, store.put, 'f', df[10:],
# append=True)
@@ -892,7 +803,7 @@ def test_put_mixed_type(self):
df = df._consolidate()._convert(datetime=True)
with ensure_clean_store(self.path) as store:
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
# PerformanceWarning
with catch_warnings(record=True):
@@ -914,37 +825,37 @@ def test_append(self):
with catch_warnings(record=True):
df = tm.makeTimeDataFrame()
- _maybe_remove(store, 'df1')
+ maybe_remove(store, 'df1')
store.append('df1', df[:10])
store.append('df1', df[10:])
tm.assert_frame_equal(store['df1'], df)
- _maybe_remove(store, 'df2')
+ maybe_remove(store, 'df2')
store.put('df2', df[:10], format='table')
store.append('df2', df[10:])
tm.assert_frame_equal(store['df2'], df)
- _maybe_remove(store, 'df3')
+ maybe_remove(store, 'df3')
store.append('/df3', df[:10])
store.append('/df3', df[10:])
tm.assert_frame_equal(store['df3'], df)
            # this is allowed but you almost always don't want to do it
# tables.NaturalNameWarning
- _maybe_remove(store, '/df3 foo')
+ maybe_remove(store, '/df3 foo')
store.append('/df3 foo', df[:10])
store.append('/df3 foo', df[10:])
tm.assert_frame_equal(store['df3 foo'], df)
# panel
wp = tm.makePanel()
- _maybe_remove(store, 'wp1')
+ maybe_remove(store, 'wp1')
store.append('wp1', wp.iloc[:, :10, :])
store.append('wp1', wp.iloc[:, 10:, :])
assert_panel_equal(store['wp1'], wp)
            # test using different order of items on the non-index axes
- _maybe_remove(store, 'wp1')
+ maybe_remove(store, 'wp1')
wp_append1 = wp.iloc[:, :10, :]
store.append('wp1', wp_append1)
wp_append2 = wp.iloc[:, 10:, :].reindex(items=wp.items[::-1])
@@ -955,7 +866,7 @@ def test_append(self):
df = DataFrame(data=[[1, 2], [0, 1], [1, 2], [0, 0]])
df['mixed_column'] = 'testing'
df.loc[2, 'mixed_column'] = np.nan
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df)
tm.assert_frame_equal(store['df'], df)
@@ -969,12 +880,12 @@ def test_append(self):
dtype=np.uint32),
'u64': Series([2**58, 2**59, 2**60, 2**61, 2**62],
dtype=np.uint64)}, index=np.arange(5))
- _maybe_remove(store, 'uints')
+ maybe_remove(store, 'uints')
store.append('uints', uint_data)
tm.assert_frame_equal(store['uints'], uint_data)
# uints - test storage of uints in indexable columns
- _maybe_remove(store, 'uints')
+ maybe_remove(store, 'uints')
# 64-bit indices not yet supported
store.append('uints', uint_data, data_columns=[
'u08', 'u16', 'u32'])
@@ -1036,7 +947,7 @@ def check(format, index):
df = DataFrame(np.random.randn(10, 2), columns=list('AB'))
df.index = index(len(df))
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.put('df', df, format=format)
assert_frame_equal(df, store['df'])
@@ -1074,7 +985,7 @@ def test_encoding(self):
df = DataFrame(dict(A='foo', B='bar'), index=range(5))
df.loc[2, 'A'] = np.nan
df.loc[3, 'B'] = np.nan
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df, encoding='ascii')
tm.assert_frame_equal(store['df'], df)
@@ -1141,7 +1052,7 @@ def test_append_some_nans(self):
'E': datetime.datetime(2001, 1, 2, 0, 0)},
index=np.arange(20))
# some nans
- _maybe_remove(store, 'df1')
+ maybe_remove(store, 'df1')
df.loc[0:15, ['A1', 'B', 'D', 'E']] = np.nan
store.append('df1', df[:10])
store.append('df1', df[10:])
@@ -1150,7 +1061,7 @@ def test_append_some_nans(self):
# first column
df1 = df.copy()
df1.loc[:, 'A1'] = np.nan
- _maybe_remove(store, 'df1')
+ maybe_remove(store, 'df1')
store.append('df1', df1[:10])
store.append('df1', df1[10:])
tm.assert_frame_equal(store['df1'], df1)
@@ -1158,7 +1069,7 @@ def test_append_some_nans(self):
# 2nd column
df2 = df.copy()
df2.loc[:, 'A2'] = np.nan
- _maybe_remove(store, 'df2')
+ maybe_remove(store, 'df2')
store.append('df2', df2[:10])
store.append('df2', df2[10:])
tm.assert_frame_equal(store['df2'], df2)
@@ -1166,7 +1077,7 @@ def test_append_some_nans(self):
# datetimes
df3 = df.copy()
df3.loc[:, 'E'] = np.nan
- _maybe_remove(store, 'df3')
+ maybe_remove(store, 'df3')
store.append('df3', df3[:10])
store.append('df3', df3[10:])
tm.assert_frame_equal(store['df3'], df3)
@@ -1181,26 +1092,26 @@ def test_append_all_nans(self):
df.loc[0:15, :] = np.nan
# nan some entire rows (dropna=True)
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df[:10], dropna=True)
store.append('df', df[10:], dropna=True)
tm.assert_frame_equal(store['df'], df[-4:])
# nan some entire rows (dropna=False)
- _maybe_remove(store, 'df2')
+ maybe_remove(store, 'df2')
store.append('df2', df[:10], dropna=False)
store.append('df2', df[10:], dropna=False)
tm.assert_frame_equal(store['df2'], df)
# tests the option io.hdf.dropna_table
pd.set_option('io.hdf.dropna_table', False)
- _maybe_remove(store, 'df3')
+ maybe_remove(store, 'df3')
store.append('df3', df[:10])
store.append('df3', df[10:])
tm.assert_frame_equal(store['df3'], df)
pd.set_option('io.hdf.dropna_table', True)
- _maybe_remove(store, 'df4')
+ maybe_remove(store, 'df4')
store.append('df4', df[:10])
store.append('df4', df[10:])
tm.assert_frame_equal(store['df4'], df[-4:])
@@ -1213,12 +1124,12 @@ def test_append_all_nans(self):
df.loc[0:15, :] = np.nan
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df[:10], dropna=True)
store.append('df', df[10:], dropna=True)
tm.assert_frame_equal(store['df'], df)
- _maybe_remove(store, 'df2')
+ maybe_remove(store, 'df2')
store.append('df2', df[:10], dropna=False)
store.append('df2', df[10:], dropna=False)
tm.assert_frame_equal(store['df2'], df)
@@ -1234,12 +1145,12 @@ def test_append_all_nans(self):
df.loc[0:15, :] = np.nan
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df[:10], dropna=True)
store.append('df', df[10:], dropna=True)
tm.assert_frame_equal(store['df'], df)
- _maybe_remove(store, 'df2')
+ maybe_remove(store, 'df2')
store.append('df2', df[:10], dropna=False)
store.append('df2', df[10:], dropna=False)
tm.assert_frame_equal(store['df2'], df)
@@ -1276,7 +1187,7 @@ def test_append_frame_column_oriented(self):
# column oriented
df = tm.makeTimeDataFrame()
- _maybe_remove(store, 'df1')
+ maybe_remove(store, 'df1')
store.append('df1', df.iloc[:, :2], axes=['columns'])
store.append('df1', df.iloc[:, 2:])
tm.assert_frame_equal(store['df1'], df)
@@ -1428,7 +1339,7 @@ def check_col(key, name, size):
pd.concat([df['B'], df2['B']]))
# with nans
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
df = tm.makeTimeDataFrame()
df['string'] = 'foo'
df.loc[1:4, 'string'] = np.nan
@@ -1449,19 +1360,19 @@ def check_col(key, name, size):
df = DataFrame(dict(A='foo', B='bar'), index=range(10))
# a min_itemsize that creates a data_column
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df, min_itemsize={'A': 200})
check_col('df', 'A', 200)
assert store.get_storer('df').data_columns == ['A']
# a min_itemsize that creates a data_column2
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df, data_columns=['B'], min_itemsize={'A': 200})
check_col('df', 'A', 200)
assert store.get_storer('df').data_columns == ['B', 'A']
# a min_itemsize that creates a data_column2
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df, data_columns=[
'B'], min_itemsize={'values': 200})
check_col('df', 'B', 200)
@@ -1469,7 +1380,7 @@ def check_col(key, name, size):
assert store.get_storer('df').data_columns == ['B']
# infer the .typ on subsequent appends
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df[:5], min_itemsize=200)
store.append('df', df[5:], min_itemsize=200)
tm.assert_frame_equal(store['df'], df)
@@ -1477,7 +1388,7 @@ def check_col(key, name, size):
# invalid min_itemsize keys
df = DataFrame(['foo', 'foo', 'foo', 'barh',
'barh', 'barh'], columns=['A'])
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
pytest.raises(ValueError, store.append, 'df',
df, min_itemsize={'foo': 20, 'foobar': 20})
@@ -1528,7 +1439,7 @@ def test_append_with_data_columns(self):
with ensure_clean_store(self.path) as store:
df = tm.makeTimeDataFrame()
df.iloc[0, df.columns.get_loc('B')] = 1.
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df[:2], data_columns=['B'])
store.append('df', df[2:])
tm.assert_frame_equal(store['df'], df)
@@ -1554,7 +1465,7 @@ def test_append_with_data_columns(self):
df_new['string'] = 'foo'
df_new.loc[1:4, 'string'] = np.nan
df_new.loc[5:6, 'string'] = 'bar'
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df_new, data_columns=['string'])
result = store.select('df', "string='foo'")
expected = df_new[df_new.string == 'foo']
@@ -1566,15 +1477,15 @@ def check_col(key, name, size):
.table.description, name).itemsize == size
with ensure_clean_store(self.path) as store:
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df_new, data_columns=['string'],
min_itemsize={'string': 30})
check_col('df', 'string', 30)
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append(
'df', df_new, data_columns=['string'], min_itemsize=30)
check_col('df', 'string', 30)
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df_new, data_columns=['string'],
min_itemsize={'values': 30})
check_col('df', 'string', 30)
@@ -1583,7 +1494,7 @@ def check_col(key, name, size):
df_new['string2'] = 'foobarbah'
df_new['string_block1'] = 'foobarbah1'
df_new['string_block2'] = 'foobarbah2'
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df_new, data_columns=['string', 'string2'],
min_itemsize={'string': 30, 'string2': 40,
'values': 50})
@@ -1606,7 +1517,7 @@ def check_col(key, name, size):
sl = df_new.columns.get_loc('string2')
df_new.iloc[2:5, sl] = np.nan
df_new.iloc[7:8, sl] = 'bar'
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append(
'df', df_new, data_columns=['A', 'B', 'string', 'string2'])
result = store.select('df',
@@ -1633,7 +1544,7 @@ def check_col(key, name, size):
df_dc = df_dc._convert(datetime=True)
df_dc.loc[3:5, ['A', 'B', 'datetime']] = np.nan
- _maybe_remove(store, 'df_dc')
+ maybe_remove(store, 'df_dc')
store.append('df_dc', df_dc,
data_columns=['B', 'C', 'string',
'string2', 'datetime'])
@@ -1757,7 +1668,7 @@ def col(t, column):
assert(col('f2', 'string2').is_indexed is False)
# try to index a non-table
- _maybe_remove(store, 'f2')
+ maybe_remove(store, 'f2')
store.put('f2', df)
pytest.raises(TypeError, store.create_table_index, 'f2')
@@ -1864,21 +1775,21 @@ def make_index(names=None):
names=names)
# no names
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
df = DataFrame(np.zeros((12, 2)), columns=[
'a', 'b'], index=make_index())
store.append('df', df)
tm.assert_frame_equal(store.select('df'), df)
# partial names
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
df = DataFrame(np.zeros((12, 2)), columns=[
'a', 'b'], index=make_index(['date', None, None]))
store.append('df', df)
tm.assert_frame_equal(store.select('df'), df)
# series
- _maybe_remove(store, 's')
+ maybe_remove(store, 's')
s = Series(np.zeros(12), index=make_index(['date', None, None]))
store.append('s', s)
xp = Series(np.zeros(12), index=make_index(
@@ -1886,19 +1797,19 @@ def make_index(names=None):
tm.assert_series_equal(store.select('s'), xp)
# dup with column
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
df = DataFrame(np.zeros((12, 2)), columns=[
'a', 'b'], index=make_index(['date', 'a', 't']))
pytest.raises(ValueError, store.append, 'df', df)
# dup within level
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
df = DataFrame(np.zeros((12, 2)), columns=['a', 'b'],
index=make_index(['date', 'date', 'date']))
pytest.raises(ValueError, store.append, 'df', df)
# fully names
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
df = DataFrame(np.zeros((12, 2)), columns=[
'a', 'b'], index=make_index(['date', 's', 't']))
store.append('df', df)
@@ -2241,7 +2152,7 @@ def test_append_with_timedelta(self):
with ensure_clean_store(self.path) as store:
# table
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df, data_columns=True)
result = store.select('df')
assert_frame_equal(result, df)
@@ -2266,7 +2177,7 @@ def test_append_with_timedelta(self):
assert_frame_equal(result, df.iloc[4:])
# fixed
- _maybe_remove(store, 'df2')
+ maybe_remove(store, 'df2')
store.put('df2', df)
result = store.select('df2')
assert_frame_equal(result, df)
@@ -2279,11 +2190,11 @@ def test_remove(self):
df = tm.makeDataFrame()
store['a'] = ts
store['b'] = df
- _maybe_remove(store, 'a')
+ maybe_remove(store, 'a')
assert len(store) == 1
tm.assert_frame_equal(df, store['b'])
- _maybe_remove(store, 'b')
+ maybe_remove(store, 'b')
assert len(store) == 0
# nonexistence
@@ -2292,13 +2203,13 @@ def test_remove(self):
# pathing
store['a'] = ts
store['b/foo'] = df
- _maybe_remove(store, 'foo')
- _maybe_remove(store, 'b/foo')
+ maybe_remove(store, 'foo')
+ maybe_remove(store, 'b/foo')
assert len(store) == 1
store['a'] = ts
store['b/foo'] = df
- _maybe_remove(store, 'b')
+ maybe_remove(store, 'b')
assert len(store) == 1
# __delitem__
@@ -2328,7 +2239,7 @@ def test_remove_where(self):
assert_panel_equal(rs, expected)
# empty where
- _maybe_remove(store, 'wp')
+ maybe_remove(store, 'wp')
store.put('wp', wp, format='table')
# deleted number (entire table)
@@ -2336,7 +2247,7 @@ def test_remove_where(self):
assert n == 120
# non - empty where
- _maybe_remove(store, 'wp')
+ maybe_remove(store, 'wp')
store.put('wp', wp, format='table')
pytest.raises(ValueError, store.remove,
'wp', ['foo'])
@@ -2350,7 +2261,7 @@ def test_remove_startstop(self):
wp = tm.makePanel(30)
# start
- _maybe_remove(store, 'wp1')
+ maybe_remove(store, 'wp1')
store.put('wp1', wp, format='t')
n = store.remove('wp1', start=32)
assert n == 120 - 32
@@ -2358,7 +2269,7 @@ def test_remove_startstop(self):
expected = wp.reindex(major_axis=wp.major_axis[:32 // 4])
assert_panel_equal(result, expected)
- _maybe_remove(store, 'wp2')
+ maybe_remove(store, 'wp2')
store.put('wp2', wp, format='t')
n = store.remove('wp2', start=-32)
assert n == 32
@@ -2367,7 +2278,7 @@ def test_remove_startstop(self):
assert_panel_equal(result, expected)
# stop
- _maybe_remove(store, 'wp3')
+ maybe_remove(store, 'wp3')
store.put('wp3', wp, format='t')
n = store.remove('wp3', stop=32)
assert n == 32
@@ -2375,7 +2286,7 @@ def test_remove_startstop(self):
expected = wp.reindex(major_axis=wp.major_axis[32 // 4:])
assert_panel_equal(result, expected)
- _maybe_remove(store, 'wp4')
+ maybe_remove(store, 'wp4')
store.put('wp4', wp, format='t')
n = store.remove('wp4', stop=-32)
assert n == 120 - 32
@@ -2384,7 +2295,7 @@ def test_remove_startstop(self):
assert_panel_equal(result, expected)
# start n stop
- _maybe_remove(store, 'wp5')
+ maybe_remove(store, 'wp5')
store.put('wp5', wp, format='t')
n = store.remove('wp5', start=16, stop=-16)
assert n == 120 - 32
@@ -2394,7 +2305,7 @@ def test_remove_startstop(self):
.union(wp.major_axis[-16 // 4:])))
assert_panel_equal(result, expected)
- _maybe_remove(store, 'wp6')
+ maybe_remove(store, 'wp6')
store.put('wp6', wp, format='t')
n = store.remove('wp6', start=16, stop=16)
assert n == 0
@@ -2403,7 +2314,7 @@ def test_remove_startstop(self):
assert_panel_equal(result, expected)
# with where
- _maybe_remove(store, 'wp7')
+ maybe_remove(store, 'wp7')
# TODO: unused?
date = wp.major_axis.take(np.arange(0, 30, 3)) # noqa
@@ -2425,7 +2336,7 @@ def test_remove_crit(self):
wp = tm.makePanel(30)
# group row removal
- _maybe_remove(store, 'wp3')
+ maybe_remove(store, 'wp3')
date4 = wp.major_axis.take([0, 1, 2, 4, 5, 6, 8, 9, 10])
crit4 = 'major_axis=date4'
store.put('wp3', wp, format='t')
@@ -2438,7 +2349,7 @@ def test_remove_crit(self):
assert_panel_equal(result, expected)
# upper half
- _maybe_remove(store, 'wp')
+ maybe_remove(store, 'wp')
store.put('wp', wp, format='table')
date = wp.major_axis[len(wp.major_axis) // 2]
@@ -2455,7 +2366,7 @@ def test_remove_crit(self):
assert_panel_equal(result, expected)
# individual row elements
- _maybe_remove(store, 'wp2')
+ maybe_remove(store, 'wp2')
store.put('wp2', wp, format='table')
date1 = wp.major_axis[1:3]
@@ -2488,7 +2399,7 @@ def test_remove_crit(self):
assert_panel_equal(result, expected)
# corners
- _maybe_remove(store, 'wp4')
+ maybe_remove(store, 'wp4')
store.put('wp4', wp, format='table')
n = store.remove(
'wp4', where="major_axis>wp.major_axis[-1]")
@@ -3122,12 +3033,12 @@ def test_select(self):
wp = tm.makePanel()
# put/select ok
- _maybe_remove(store, 'wp')
+ maybe_remove(store, 'wp')
store.put('wp', wp, format='table')
store.select('wp')
# non-table ok (where = None)
- _maybe_remove(store, 'wp')
+ maybe_remove(store, 'wp')
store.put('wp2', wp)
store.select('wp2')
@@ -3137,7 +3048,7 @@ def test_select(self):
major_axis=date_range('1/1/2000', periods=100),
minor_axis=['E%03d' % i for i in range(100)])
- _maybe_remove(store, 'wp')
+ maybe_remove(store, 'wp')
store.append('wp', wp)
items = ['Item%03d' % i for i in range(80)]
result = store.select('wp', 'items=items')
@@ -3150,7 +3061,7 @@ def test_select(self):
# select with columns=
df = tm.makeTimeDataFrame()
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df)
result = store.select('df', columns=['A', 'B'])
expected = df.reindex(columns=['A', 'B'])
@@ -3162,21 +3073,21 @@ def test_select(self):
tm.assert_frame_equal(expected, result)
# with a data column
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df, data_columns=['A'])
result = store.select('df', ['A > 0'], columns=['A', 'B'])
expected = df[df.A > 0].reindex(columns=['A', 'B'])
tm.assert_frame_equal(expected, result)
# all a data columns
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df, data_columns=True)
result = store.select('df', ['A > 0'], columns=['A', 'B'])
expected = df[df.A > 0].reindex(columns=['A', 'B'])
tm.assert_frame_equal(expected, result)
# with a data column, but different columns
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df, data_columns=['A'])
result = store.select('df', ['A > 0'], columns=['C', 'D'])
expected = df[df.A > 0].reindex(columns=['C', 'D'])
@@ -3189,7 +3100,7 @@ def test_select_dtypes(self):
df = DataFrame(dict(
ts=bdate_range('2012-01-01', periods=300),
A=np.random.randn(300)))
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df, data_columns=['ts', 'A'])
result = store.select('df', "ts>=Timestamp('2012-02-01')")
@@ -3201,7 +3112,7 @@ def test_select_dtypes(self):
df['object'] = 'foo'
df.loc[4:5, 'object'] = 'bar'
df['boolv'] = df['A'] > 0
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df, data_columns=True)
expected = (df[df.boolv == True] # noqa
@@ -3220,7 +3131,7 @@ def test_select_dtypes(self):
# integer index
df = DataFrame(dict(A=np.random.rand(20), B=np.random.rand(20)))
- _maybe_remove(store, 'df_int')
+ maybe_remove(store, 'df_int')
store.append('df_int', df)
result = store.select(
'df_int', "index<10 and columns=['A']")
@@ -3230,7 +3141,7 @@ def test_select_dtypes(self):
# float index
df = DataFrame(dict(A=np.random.rand(
20), B=np.random.rand(20), index=np.arange(20, dtype='f8')))
- _maybe_remove(store, 'df_float')
+ maybe_remove(store, 'df_float')
store.append('df_float', df)
result = store.select(
'df_float', "index<10.0 and columns=['A']")
@@ -3300,7 +3211,7 @@ def test_select_with_many_inputs(self):
B=range(300),
users=['a'] * 50 + ['b'] * 50 + ['c'] * 100 +
['a%03d' % i for i in range(100)]))
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df, data_columns=['ts', 'A', 'B', 'users'])
# regular select
@@ -3344,7 +3255,7 @@ def test_select_iterator(self):
with ensure_clean_store(self.path) as store:
df = tm.makeTimeDataFrame(500)
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df)
expected = store.select('df')
@@ -3414,7 +3325,7 @@ def test_select_iterator_complete_8014(self):
with ensure_clean_store(self.path) as store:
expected = tm.makeTimeDataFrame(100064, 'S')
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', expected)
beg_dt = expected.index[0]
@@ -3446,7 +3357,7 @@ def test_select_iterator_complete_8014(self):
with ensure_clean_store(self.path) as store:
expected = tm.makeTimeDataFrame(100064, 'S')
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', expected)
beg_dt = expected.index[0]
@@ -3488,7 +3399,7 @@ def test_select_iterator_non_complete_8014(self):
with ensure_clean_store(self.path) as store:
expected = tm.makeTimeDataFrame(100064, 'S')
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', expected)
beg_dt = expected.index[1]
@@ -3523,7 +3434,7 @@ def test_select_iterator_non_complete_8014(self):
with ensure_clean_store(self.path) as store:
expected = tm.makeTimeDataFrame(100064, 'S')
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', expected)
end_dt = expected.index[-1]
@@ -3545,7 +3456,7 @@ def test_select_iterator_many_empty_frames(self):
with ensure_clean_store(self.path) as store:
expected = tm.makeTimeDataFrame(100000, 'S')
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', expected)
beg_dt = expected.index[0]
@@ -3606,7 +3517,7 @@ def test_retain_index_attributes(self):
index=date_range('2000-1-1', periods=3, freq='H'))))
with ensure_clean_store(self.path) as store:
- _maybe_remove(store, 'data')
+ maybe_remove(store, 'data')
store.put('data', df, format='table')
result = store.get('data')
@@ -3628,7 +3539,7 @@ def test_retain_index_attributes(self):
assert store.get_storer('data').info['index']['freq'] is None
# this is ok
- _maybe_remove(store, 'df2')
+ maybe_remove(store, 'df2')
df2 = DataFrame(dict(
A=Series(lrange(3),
index=[Timestamp('20010101'), Timestamp('20010102'),
@@ -3917,7 +3828,7 @@ def test_read_column(self):
df = tm.makeTimeDataFrame()
with ensure_clean_store(self.path) as store:
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
# GH 17912
# HDFStore.select_column should raise a KeyError
@@ -3989,7 +3900,7 @@ def test_coordinates(self):
with ensure_clean_store(self.path) as store:
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df)
# all
@@ -3997,7 +3908,7 @@ def test_coordinates(self):
assert((c.values == np.arange(len(df.index))).all())
# get coordinates back & test vs frame
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
df = DataFrame(dict(A=lrange(5), B=lrange(5)))
store.append('df', df)
@@ -4015,8 +3926,8 @@ def test_coordinates(self):
assert isinstance(c, Index)
# multiple tables
- _maybe_remove(store, 'df1')
- _maybe_remove(store, 'df2')
+ maybe_remove(store, 'df1')
+ maybe_remove(store, 'df2')
df1 = tm.makeTimeDataFrame()
df2 = tm.makeTimeDataFrame().rename(columns=lambda x: "%s_2" % x)
store.append('df1', df1, data_columns=['A', 'B'])
@@ -4756,41 +4667,41 @@ def test_categorical(self):
with ensure_clean_store(self.path) as store:
# Basic
- _maybe_remove(store, 's')
+ maybe_remove(store, 's')
s = Series(Categorical(['a', 'b', 'b', 'a', 'a', 'c'], categories=[
'a', 'b', 'c', 'd'], ordered=False))
store.append('s', s, format='table')
result = store.select('s')
tm.assert_series_equal(s, result)
- _maybe_remove(store, 's_ordered')
+ maybe_remove(store, 's_ordered')
s = Series(Categorical(['a', 'b', 'b', 'a', 'a', 'c'], categories=[
'a', 'b', 'c', 'd'], ordered=True))
store.append('s_ordered', s, format='table')
result = store.select('s_ordered')
tm.assert_series_equal(s, result)
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
df = DataFrame({"s": s, "vals": [1, 2, 3, 4, 5, 6]})
store.append('df', df, format='table')
result = store.select('df')
tm.assert_frame_equal(result, df)
# Dtypes
- _maybe_remove(store, 'si')
+ maybe_remove(store, 'si')
s = Series([1, 1, 2, 2, 3, 4, 5]).astype('category')
store.append('si', s)
result = store.select('si')
tm.assert_series_equal(result, s)
- _maybe_remove(store, 'si2')
+ maybe_remove(store, 'si2')
s = Series([1, 1, np.nan, 2, 3, 4, 5]).astype('category')
store.append('si2', s)
result = store.select('si2')
tm.assert_series_equal(result, s)
# Multiple
- _maybe_remove(store, 'df2')
+ maybe_remove(store, 'df2')
df2 = df.copy()
df2['s2'] = Series(list('abcdefg')).astype('category')
store.append('df2', df2)
@@ -4804,7 +4715,7 @@ def test_categorical(self):
assert '/df2/meta/values_block_1/meta' in info
# unordered
- _maybe_remove(store, 's2')
+ maybe_remove(store, 's2')
s = Series(Categorical(['a', 'b', 'b', 'a', 'a', 'c'], categories=[
'a', 'b', 'c', 'd'], ordered=False))
store.append('s2', s, format='table')
@@ -4812,7 +4723,7 @@ def test_categorical(self):
tm.assert_series_equal(result, s)
# Query
- _maybe_remove(store, 'df3')
+ maybe_remove(store, 'df3')
store.append('df3', df, data_columns=['s'])
expected = df[df.s.isin(['b', 'c'])]
result = store.select('df3', where=['s in ["b","c"]'])
@@ -5228,424 +5139,3 @@ def test_read_py2_hdf_file_in_py3(self, datapath):
mode='r') as store:
result = store['p']
assert_frame_equal(result, expected)
-
-
-class TestHDFComplexValues(Base):
- # GH10447
-
- def test_complex_fixed(self):
- df = DataFrame(np.random.rand(4, 5).astype(np.complex64),
- index=list('abcd'),
- columns=list('ABCDE'))
-
- with ensure_clean_path(self.path) as path:
- df.to_hdf(path, 'df')
- reread = read_hdf(path, 'df')
- assert_frame_equal(df, reread)
-
- df = DataFrame(np.random.rand(4, 5).astype(np.complex128),
- index=list('abcd'),
- columns=list('ABCDE'))
- with ensure_clean_path(self.path) as path:
- df.to_hdf(path, 'df')
- reread = read_hdf(path, 'df')
- assert_frame_equal(df, reread)
-
- def test_complex_table(self):
- df = DataFrame(np.random.rand(4, 5).astype(np.complex64),
- index=list('abcd'),
- columns=list('ABCDE'))
-
- with ensure_clean_path(self.path) as path:
- df.to_hdf(path, 'df', format='table')
- reread = read_hdf(path, 'df')
- assert_frame_equal(df, reread)
-
- df = DataFrame(np.random.rand(4, 5).astype(np.complex128),
- index=list('abcd'),
- columns=list('ABCDE'))
-
- with ensure_clean_path(self.path) as path:
- df.to_hdf(path, 'df', format='table', mode='w')
- reread = read_hdf(path, 'df')
- assert_frame_equal(df, reread)
-
- def test_complex_mixed_fixed(self):
- complex64 = np.array([1.0 + 1.0j, 1.0 + 1.0j,
- 1.0 + 1.0j, 1.0 + 1.0j], dtype=np.complex64)
- complex128 = np.array([1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j],
- dtype=np.complex128)
- df = DataFrame({'A': [1, 2, 3, 4],
- 'B': ['a', 'b', 'c', 'd'],
- 'C': complex64,
- 'D': complex128,
- 'E': [1.0, 2.0, 3.0, 4.0]},
- index=list('abcd'))
- with ensure_clean_path(self.path) as path:
- df.to_hdf(path, 'df')
- reread = read_hdf(path, 'df')
- assert_frame_equal(df, reread)
-
- def test_complex_mixed_table(self):
- complex64 = np.array([1.0 + 1.0j, 1.0 + 1.0j,
- 1.0 + 1.0j, 1.0 + 1.0j], dtype=np.complex64)
- complex128 = np.array([1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j],
- dtype=np.complex128)
- df = DataFrame({'A': [1, 2, 3, 4],
- 'B': ['a', 'b', 'c', 'd'],
- 'C': complex64,
- 'D': complex128,
- 'E': [1.0, 2.0, 3.0, 4.0]},
- index=list('abcd'))
-
- with ensure_clean_store(self.path) as store:
- store.append('df', df, data_columns=['A', 'B'])
- result = store.select('df', where='A>2')
- assert_frame_equal(df.loc[df.A > 2], result)
-
- with ensure_clean_path(self.path) as path:
- df.to_hdf(path, 'df', format='table')
- reread = read_hdf(path, 'df')
- assert_frame_equal(df, reread)
-
- @pytest.mark.filterwarnings("ignore:\\nPanel:FutureWarning")
- def test_complex_across_dimensions_fixed(self):
- with catch_warnings(record=True):
- complex128 = np.array(
- [1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j])
- s = Series(complex128, index=list('abcd'))
- df = DataFrame({'A': s, 'B': s})
- p = Panel({'One': df, 'Two': df})
-
- objs = [s, df, p]
- comps = [tm.assert_series_equal, tm.assert_frame_equal,
- tm.assert_panel_equal]
- for obj, comp in zip(objs, comps):
- with ensure_clean_path(self.path) as path:
- obj.to_hdf(path, 'obj', format='fixed')
- reread = read_hdf(path, 'obj')
- comp(obj, reread)
-
- @pytest.mark.filterwarnings("ignore:\\nPanel:FutureWarning")
- def test_complex_across_dimensions(self):
- complex128 = np.array([1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j])
- s = Series(complex128, index=list('abcd'))
- df = DataFrame({'A': s, 'B': s})
-
- with catch_warnings(record=True):
- p = Panel({'One': df, 'Two': df})
-
- objs = [df, p]
- comps = [tm.assert_frame_equal, tm.assert_panel_equal]
- for obj, comp in zip(objs, comps):
- with ensure_clean_path(self.path) as path:
- obj.to_hdf(path, 'obj', format='table')
- reread = read_hdf(path, 'obj')
- comp(obj, reread)
-
- def test_complex_indexing_error(self):
- complex128 = np.array([1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j],
- dtype=np.complex128)
- df = DataFrame({'A': [1, 2, 3, 4],
- 'B': ['a', 'b', 'c', 'd'],
- 'C': complex128},
- index=list('abcd'))
- with ensure_clean_store(self.path) as store:
- pytest.raises(TypeError, store.append,
- 'df', df, data_columns=['C'])
-
- def test_complex_series_error(self):
- complex128 = np.array([1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j])
- s = Series(complex128, index=list('abcd'))
-
- with ensure_clean_path(self.path) as path:
- pytest.raises(TypeError, s.to_hdf, path, 'obj', format='t')
-
- with ensure_clean_path(self.path) as path:
- s.to_hdf(path, 'obj', format='t', index=False)
- reread = read_hdf(path, 'obj')
- tm.assert_series_equal(s, reread)
-
- def test_complex_append(self):
- df = DataFrame({'a': np.random.randn(100).astype(np.complex128),
- 'b': np.random.randn(100)})
-
- with ensure_clean_store(self.path) as store:
- store.append('df', df, data_columns=['b'])
- store.append('df', df)
- result = store.select('df')
- assert_frame_equal(pd.concat([df, df], 0), result)
-
-
-class TestTimezones(Base):
-
- def _compare_with_tz(self, a, b):
- tm.assert_frame_equal(a, b)
-
- # compare the zones on each element
- for c in a.columns:
- for i in a.index:
- a_e = a.loc[i, c]
- b_e = b.loc[i, c]
- if not (a_e == b_e and a_e.tz == b_e.tz):
- raise AssertionError(
- "invalid tz comparison [%s] [%s]" % (a_e, b_e))
-
- def test_append_with_timezones_dateutil(self):
-
- from datetime import timedelta
-
- # use maybe_get_tz instead of dateutil.tz.gettz to handle the windows
- # filename issues.
- from pandas._libs.tslibs.timezones import maybe_get_tz
- gettz = lambda x: maybe_get_tz('dateutil/' + x)
-
- # as columns
- with ensure_clean_store(self.path) as store:
-
- _maybe_remove(store, 'df_tz')
- df = DataFrame(dict(A=[Timestamp('20130102 2:00:00', tz=gettz(
- 'US/Eastern')) + timedelta(hours=1) * i for i in range(5)]))
-
- store.append('df_tz', df, data_columns=['A'])
- result = store['df_tz']
- self._compare_with_tz(result, df)
- assert_frame_equal(result, df)
-
- # select with tz aware
- expected = df[df.A >= df.A[3]]
- result = store.select('df_tz', where='A>=df.A[3]')
- self._compare_with_tz(result, expected)
-
- # ensure we include dates in DST and STD time here.
- _maybe_remove(store, 'df_tz')
- df = DataFrame(dict(A=Timestamp('20130102',
- tz=gettz('US/Eastern')),
- B=Timestamp('20130603',
- tz=gettz('US/Eastern'))),
- index=range(5))
- store.append('df_tz', df)
- result = store['df_tz']
- self._compare_with_tz(result, df)
- assert_frame_equal(result, df)
-
- df = DataFrame(dict(A=Timestamp('20130102',
- tz=gettz('US/Eastern')),
- B=Timestamp('20130102', tz=gettz('EET'))),
- index=range(5))
- pytest.raises(ValueError, store.append, 'df_tz', df)
-
- # this is ok
- _maybe_remove(store, 'df_tz')
- store.append('df_tz', df, data_columns=['A', 'B'])
- result = store['df_tz']
- self._compare_with_tz(result, df)
- assert_frame_equal(result, df)
-
- # can't append with diff timezone
- df = DataFrame(dict(A=Timestamp('20130102',
- tz=gettz('US/Eastern')),
- B=Timestamp('20130102', tz=gettz('CET'))),
- index=range(5))
- pytest.raises(ValueError, store.append, 'df_tz', df)
-
- # as index
- with ensure_clean_store(self.path) as store:
-
- # GH 4098 example
- df = DataFrame(dict(A=Series(lrange(3), index=date_range(
- '2000-1-1', periods=3, freq='H', tz=gettz('US/Eastern')))))
-
- _maybe_remove(store, 'df')
- store.put('df', df)
- result = store.select('df')
- assert_frame_equal(result, df)
-
- _maybe_remove(store, 'df')
- store.append('df', df)
- result = store.select('df')
- assert_frame_equal(result, df)
-
- def test_append_with_timezones_pytz(self):
-
- from datetime import timedelta
-
- # as columns
- with ensure_clean_store(self.path) as store:
-
- _maybe_remove(store, 'df_tz')
- df = DataFrame(dict(A=[Timestamp('20130102 2:00:00',
- tz='US/Eastern') +
- timedelta(hours=1) * i
- for i in range(5)]))
- store.append('df_tz', df, data_columns=['A'])
- result = store['df_tz']
- self._compare_with_tz(result, df)
- assert_frame_equal(result, df)
-
- # select with tz aware
- self._compare_with_tz(store.select(
- 'df_tz', where='A>=df.A[3]'), df[df.A >= df.A[3]])
-
- _maybe_remove(store, 'df_tz')
- # ensure we include dates in DST and STD time here.
- df = DataFrame(dict(A=Timestamp('20130102', tz='US/Eastern'),
- B=Timestamp('20130603', tz='US/Eastern')),
- index=range(5))
- store.append('df_tz', df)
- result = store['df_tz']
- self._compare_with_tz(result, df)
- assert_frame_equal(result, df)
-
- df = DataFrame(dict(A=Timestamp('20130102', tz='US/Eastern'),
- B=Timestamp('20130102', tz='EET')),
- index=range(5))
- pytest.raises(ValueError, store.append, 'df_tz', df)
-
- # this is ok
- _maybe_remove(store, 'df_tz')
- store.append('df_tz', df, data_columns=['A', 'B'])
- result = store['df_tz']
- self._compare_with_tz(result, df)
- assert_frame_equal(result, df)
-
- # can't append with diff timezone
- df = DataFrame(dict(A=Timestamp('20130102', tz='US/Eastern'),
- B=Timestamp('20130102', tz='CET')),
- index=range(5))
- pytest.raises(ValueError, store.append, 'df_tz', df)
-
- # as index
- with ensure_clean_store(self.path) as store:
-
- # GH 4098 example
- df = DataFrame(dict(A=Series(lrange(3), index=date_range(
- '2000-1-1', periods=3, freq='H', tz='US/Eastern'))))
-
- _maybe_remove(store, 'df')
- store.put('df', df)
- result = store.select('df')
- assert_frame_equal(result, df)
-
- _maybe_remove(store, 'df')
- store.append('df', df)
- result = store.select('df')
- assert_frame_equal(result, df)
-
- def test_tseries_select_index_column(self):
- # GH7777
- # selecting a UTC datetimeindex column did
- # not preserve UTC tzinfo set before storing
-
- # check that no tz still works
- rng = date_range('1/1/2000', '1/30/2000')
- frame = DataFrame(np.random.randn(len(rng), 4), index=rng)
-
- with ensure_clean_store(self.path) as store:
- store.append('frame', frame)
- result = store.select_column('frame', 'index')
- assert rng.tz == DatetimeIndex(result.values).tz
-
- # check utc
- rng = date_range('1/1/2000', '1/30/2000', tz='UTC')
- frame = DataFrame(np.random.randn(len(rng), 4), index=rng)
-
- with ensure_clean_store(self.path) as store:
- store.append('frame', frame)
- result = store.select_column('frame', 'index')
- assert rng.tz == result.dt.tz
-
- # double check non-utc
- rng = date_range('1/1/2000', '1/30/2000', tz='US/Eastern')
- frame = DataFrame(np.random.randn(len(rng), 4), index=rng)
-
- with ensure_clean_store(self.path) as store:
- store.append('frame', frame)
- result = store.select_column('frame', 'index')
- assert rng.tz == result.dt.tz
-
- def test_timezones_fixed(self):
- with ensure_clean_store(self.path) as store:
-
- # index
- rng = date_range('1/1/2000', '1/30/2000', tz='US/Eastern')
- df = DataFrame(np.random.randn(len(rng), 4), index=rng)
- store['df'] = df
- result = store['df']
- assert_frame_equal(result, df)
-
- # as data
- # GH11411
- _maybe_remove(store, 'df')
- df = DataFrame({'A': rng,
- 'B': rng.tz_convert('UTC').tz_localize(None),
- 'C': rng.tz_convert('CET'),
- 'D': range(len(rng))}, index=rng)
- store['df'] = df
- result = store['df']
- assert_frame_equal(result, df)
-
- def test_fixed_offset_tz(self):
- rng = date_range('1/1/2000 00:00:00-07:00', '1/30/2000 00:00:00-07:00')
- frame = DataFrame(np.random.randn(len(rng), 4), index=rng)
-
- with ensure_clean_store(self.path) as store:
- store['frame'] = frame
- recons = store['frame']
- tm.assert_index_equal(recons.index, rng)
- assert rng.tz == recons.index.tz
-
- @td.skip_if_windows
- def test_store_timezone(self):
- # GH2852
- # issue storing datetime.date with a timezone as it resets when read
- # back in a new timezone
-
- # original method
- with ensure_clean_store(self.path) as store:
-
- today = datetime.date(2013, 9, 10)
- df = DataFrame([1, 2, 3], index=[today, today, today])
- store['obj1'] = df
- result = store['obj1']
- assert_frame_equal(result, df)
-
- # with tz setting
- with ensure_clean_store(self.path) as store:
-
- with set_timezone('EST5EDT'):
- today = datetime.date(2013, 9, 10)
- df = DataFrame([1, 2, 3], index=[today, today, today])
- store['obj1'] = df
-
- with set_timezone('CST6CDT'):
- result = store['obj1']
-
- assert_frame_equal(result, df)
-
- def test_legacy_datetimetz_object(self, datapath):
- # legacy from < 0.17.0
- # 8260
- expected = DataFrame(dict(A=Timestamp('20130102', tz='US/Eastern'),
- B=Timestamp('20130603', tz='CET')),
- index=range(5))
- with ensure_clean_store(
- datapath('io', 'data', 'legacy_hdf', 'datetimetz_object.h5'),
- mode='r') as store:
- result = store['df']
- assert_frame_equal(result, expected)
-
- def test_dst_transitions(self):
- # make sure we are not failing on transaitions
- with ensure_clean_store(self.path) as store:
- times = pd.date_range("2013-10-26 23:00", "2013-10-27 01:00",
- tz="Europe/London",
- freq="H",
- ambiguous='infer')
-
- for i in [times, times + pd.Timedelta('10min')]:
- _maybe_remove(store, 'df')
- df = DataFrame({'A': range(len(i)), 'B': i}, index=i)
- store.append('df', df)
- result = store.select('df')
- assert_frame_equal(result, df)
diff --git a/pandas/tests/io/pytables/test_timezones.py b/pandas/tests/io/pytables/test_timezones.py
new file mode 100644
index 0000000000000..e00000cdea8ed
--- /dev/null
+++ b/pandas/tests/io/pytables/test_timezones.py
@@ -0,0 +1,289 @@
+import datetime
+
+import numpy as np
+import pytest
+
+from pandas.compat import lrange, range
+import pandas.util._test_decorators as td
+
+import pandas as pd
+from pandas import DataFrame, DatetimeIndex, Series, Timestamp, date_range
+import pandas.util.testing as tm
+from pandas.util.testing import assert_frame_equal, set_timezone
+
+from .base import Base, ensure_clean_store, maybe_remove
+
+
+class TestTimezones(Base):
+
+ def _compare_with_tz(self, a, b):
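+        """Assert that frames are equal and each element has the same tz."""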
+ tm.assert_frame_equal(a, b)
+
+ # compare the zones on each element
+ for c in a.columns:
+ for i in a.index:
+ a_e = a.loc[i, c]
+ b_e = b.loc[i, c]
+ if not (a_e == b_e and a_e.tz == b_e.tz):
+ raise AssertionError(
+ "invalid tz comparison [%s] [%s]" % (a_e, b_e))
+
+ def test_append_with_timezones_dateutil(self):
+
+ from datetime import timedelta
+
+ # use maybe_get_tz instead of dateutil.tz.gettz to handle the windows
+ # filename issues.
+ from pandas._libs.tslibs.timezones import maybe_get_tz
+ gettz = lambda x: maybe_get_tz('dateutil/' + x)
+
+ # as columns
+ with ensure_clean_store(self.path) as store:
+
+ maybe_remove(store, 'df_tz')
+ df = DataFrame(dict(A=[Timestamp('20130102 2:00:00', tz=gettz(
+ 'US/Eastern')) + timedelta(hours=1) * i for i in range(5)]))
+
+ store.append('df_tz', df, data_columns=['A'])
+ result = store['df_tz']
+ self._compare_with_tz(result, df)
+ assert_frame_equal(result, df)
+
+ # select with tz aware
+ expected = df[df.A >= df.A[3]]
+ result = store.select('df_tz', where='A>=df.A[3]')
+ self._compare_with_tz(result, expected)
+
+ # ensure we include dates in DST and STD time here.
+ maybe_remove(store, 'df_tz')
+ df = DataFrame(dict(A=Timestamp('20130102',
+ tz=gettz('US/Eastern')),
+ B=Timestamp('20130603',
+ tz=gettz('US/Eastern'))),
+ index=range(5))
+ store.append('df_tz', df)
+ result = store['df_tz']
+ self._compare_with_tz(result, df)
+ assert_frame_equal(result, df)
+
+ df = DataFrame(dict(A=Timestamp('20130102',
+ tz=gettz('US/Eastern')),
+ B=Timestamp('20130102', tz=gettz('EET'))),
+ index=range(5))
+ pytest.raises(ValueError, store.append, 'df_tz', df)
+
+ # this is ok
+ maybe_remove(store, 'df_tz')
+ store.append('df_tz', df, data_columns=['A', 'B'])
+ result = store['df_tz']
+ self._compare_with_tz(result, df)
+ assert_frame_equal(result, df)
+
+ # can't append with diff timezone
+ df = DataFrame(dict(A=Timestamp('20130102',
+ tz=gettz('US/Eastern')),
+ B=Timestamp('20130102', tz=gettz('CET'))),
+ index=range(5))
+ pytest.raises(ValueError, store.append, 'df_tz', df)
+
+ # as index
+ with ensure_clean_store(self.path) as store:
+
+ # GH 4098 example
+ df = DataFrame(dict(A=Series(lrange(3), index=date_range(
+ '2000-1-1', periods=3, freq='H', tz=gettz('US/Eastern')))))
+
+ maybe_remove(store, 'df')
+ store.put('df', df)
+ result = store.select('df')
+ assert_frame_equal(result, df)
+
+ maybe_remove(store, 'df')
+ store.append('df', df)
+ result = store.select('df')
+ assert_frame_equal(result, df)
+
+ def test_append_with_timezones_pytz(self):
+
+ from datetime import timedelta
+
+ # as columns
+ with ensure_clean_store(self.path) as store:
+
+ maybe_remove(store, 'df_tz')
+ df = DataFrame(dict(A=[Timestamp('20130102 2:00:00',
+ tz='US/Eastern') +
+ timedelta(hours=1) * i
+ for i in range(5)]))
+ store.append('df_tz', df, data_columns=['A'])
+ result = store['df_tz']
+ self._compare_with_tz(result, df)
+ assert_frame_equal(result, df)
+
+ # select with tz aware
+ self._compare_with_tz(store.select(
+ 'df_tz', where='A>=df.A[3]'), df[df.A >= df.A[3]])
+
+ maybe_remove(store, 'df_tz')
+ # ensure we include dates in DST and STD time here.
+ df = DataFrame(dict(A=Timestamp('20130102', tz='US/Eastern'),
+ B=Timestamp('20130603', tz='US/Eastern')),
+ index=range(5))
+ store.append('df_tz', df)
+ result = store['df_tz']
+ self._compare_with_tz(result, df)
+ assert_frame_equal(result, df)
+
+ df = DataFrame(dict(A=Timestamp('20130102', tz='US/Eastern'),
+ B=Timestamp('20130102', tz='EET')),
+ index=range(5))
+ pytest.raises(ValueError, store.append, 'df_tz', df)
+
+ # this is ok
+ maybe_remove(store, 'df_tz')
+ store.append('df_tz', df, data_columns=['A', 'B'])
+ result = store['df_tz']
+ self._compare_with_tz(result, df)
+ assert_frame_equal(result, df)
+
+ # can't append with diff timezone
+ df = DataFrame(dict(A=Timestamp('20130102', tz='US/Eastern'),
+ B=Timestamp('20130102', tz='CET')),
+ index=range(5))
+ pytest.raises(ValueError, store.append, 'df_tz', df)
+
+ # as index
+ with ensure_clean_store(self.path) as store:
+
+ # GH 4098 example
+ df = DataFrame(dict(A=Series(lrange(3), index=date_range(
+ '2000-1-1', periods=3, freq='H', tz='US/Eastern'))))
+
+ maybe_remove(store, 'df')
+ store.put('df', df)
+ result = store.select('df')
+ assert_frame_equal(result, df)
+
+ maybe_remove(store, 'df')
+ store.append('df', df)
+ result = store.select('df')
+ assert_frame_equal(result, df)
+
+ def test_tseries_select_index_column(self):
+ # GH7777
+ # selecting a UTC datetimeindex column did
+ # not preserve UTC tzinfo set before storing
+
+ # check that no tz still works
+ rng = date_range('1/1/2000', '1/30/2000')
+ frame = DataFrame(np.random.randn(len(rng), 4), index=rng)
+
+ with ensure_clean_store(self.path) as store:
+ store.append('frame', frame)
+ result = store.select_column('frame', 'index')
+ assert rng.tz == DatetimeIndex(result.values).tz
+
+ # check utc
+ rng = date_range('1/1/2000', '1/30/2000', tz='UTC')
+ frame = DataFrame(np.random.randn(len(rng), 4), index=rng)
+
+ with ensure_clean_store(self.path) as store:
+ store.append('frame', frame)
+ result = store.select_column('frame', 'index')
+ assert rng.tz == result.dt.tz
+
+ # double check non-utc
+ rng = date_range('1/1/2000', '1/30/2000', tz='US/Eastern')
+ frame = DataFrame(np.random.randn(len(rng), 4), index=rng)
+
+ with ensure_clean_store(self.path) as store:
+ store.append('frame', frame)
+ result = store.select_column('frame', 'index')
+ assert rng.tz == result.dt.tz
+
+ def test_timezones_fixed(self):
+ with ensure_clean_store(self.path) as store:
+
+ # index
+ rng = date_range('1/1/2000', '1/30/2000', tz='US/Eastern')
+ df = DataFrame(np.random.randn(len(rng), 4), index=rng)
+ store['df'] = df
+ result = store['df']
+ assert_frame_equal(result, df)
+
+ # as data
+ # GH11411
+ maybe_remove(store, 'df')
+ df = DataFrame({'A': rng,
+ 'B': rng.tz_convert('UTC').tz_localize(None),
+ 'C': rng.tz_convert('CET'),
+ 'D': range(len(rng))}, index=rng)
+ store['df'] = df
+ result = store['df']
+ assert_frame_equal(result, df)
+
+ def test_fixed_offset_tz(self):
+ rng = date_range('1/1/2000 00:00:00-07:00', '1/30/2000 00:00:00-07:00')
+ frame = DataFrame(np.random.randn(len(rng), 4), index=rng)
+
+ with ensure_clean_store(self.path) as store:
+ store['frame'] = frame
+ recons = store['frame']
+ tm.assert_index_equal(recons.index, rng)
+ assert rng.tz == recons.index.tz
+
+ @td.skip_if_windows
+ def test_store_timezone(self):
+ # GH2852
+ # issue storing datetime.date with a timezone as it resets when read
+ # back in a new timezone
+
+ # original method
+ with ensure_clean_store(self.path) as store:
+
+ today = datetime.date(2013, 9, 10)
+ df = DataFrame([1, 2, 3], index=[today, today, today])
+ store['obj1'] = df
+ result = store['obj1']
+ assert_frame_equal(result, df)
+
+ # with tz setting
+ with ensure_clean_store(self.path) as store:
+
+ with set_timezone('EST5EDT'):
+ today = datetime.date(2013, 9, 10)
+ df = DataFrame([1, 2, 3], index=[today, today, today])
+ store['obj1'] = df
+
+ with set_timezone('CST6CDT'):
+ result = store['obj1']
+
+ assert_frame_equal(result, df)
+
+ def test_legacy_datetimetz_object(self, datapath):
+ # legacy from < 0.17.0
+ # 8260
+ expected = DataFrame(dict(A=Timestamp('20130102', tz='US/Eastern'),
+ B=Timestamp('20130603', tz='CET')),
+ index=range(5))
+ with ensure_clean_store(
+ datapath('io', 'data', 'legacy_hdf', 'datetimetz_object.h5'),
+ mode='r') as store:
+ result = store['df']
+ assert_frame_equal(result, expected)
+
+ def test_dst_transitions(self):
+        # make sure we are not failing on transitions
+ with ensure_clean_store(self.path) as store:
+ times = pd.date_range("2013-10-26 23:00", "2013-10-27 01:00",
+ tz="Europe/London",
+ freq="H",
+ ambiguous='infer')
+
+ for i in [times, times + pd.Timedelta('10min')]:
+ maybe_remove(store, 'df')
+ df = DataFrame({'A': range(len(i)), 'B': i}, index=i)
+ store.append('df', df)
+ result = store.select('df')
+ assert_frame_equal(result, df)
| Fixes pandas-dev/pandas#18498
Split test_pytables.py into:
- pytables/base.py
- pytables/test_complex_values.py
- pytables/test_pytables.py
- pytables/test_timezones.py | https://api.github.com/repos/pandas-dev/pandas/pulls/25004 | 2019-01-29T14:44:00Z | 2019-03-20T02:08:50Z | null | 2019-03-20T02:08:50Z |
Split out test_pytables.py to sub-module of tests | diff --git a/pandas/tests/io/pytables/__init__.py b/pandas/tests/io/pytables/__init__.py
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/pandas/tests/io/pytables/base.py b/pandas/tests/io/pytables/base.py
new file mode 100644
index 0000000000000..0df4d2d1f5740
--- /dev/null
+++ b/pandas/tests/io/pytables/base.py
@@ -0,0 +1,107 @@
+from contextlib import contextmanager
+import os
+import tempfile
+
+import pandas.util.testing as tm
+
+from pandas.io.pytables import HDFStore
+
+
+class Base(object):
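+    """Shared fixture base for the pytables tests: toggles the testing
+    mode around each test class and assigns a fresh temporary HDF5 file
+    name to ``self.path`` before every test method."""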
+
+ @classmethod
+ def setup_class(cls):
+
+ # Pytables 3.0.0 deprecates lots of things
+ tm.reset_testing_mode()
+
+ @classmethod
+ def teardown_class(cls):
+
+ # Pytables 3.0.0 deprecates lots of things
+ tm.set_testing_mode()
+
+ def setup_method(self, method):
+ self.path = 'tmp.__%s__.h5' % tm.rands(10)
+
+ def teardown_method(self, method):
+ pass
+
+
+def safe_close(store):
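+    # Close the store if one was opened, swallowing IOError so cleanup
+    # during teardown never masks the original test failure.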
+ try:
+ if store is not None:
+ store.close()
+ except IOError:
+ pass
+
+
+@contextmanager
+def ensure_clean_store(path, mode='a', complevel=None, complib=None,
+ fletcher32=False):
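+    """Yield an HDFStore backed by a temporary file; the store is closed
+    on exit and, for write/append modes, the file is removed as well."""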
+
+ store = None
+ try:
+
+ # put in the temporary path if we don't have one already
+ if not len(os.path.dirname(path)):
+ path = create_tempfile(path)
+
+ store = HDFStore(path, mode=mode, complevel=complevel,
+                         complib=complib, fletcher32=fletcher32)
+ yield store
+ finally:
+ safe_close(store)
+ if mode == 'w' or mode == 'a':
+ safe_remove(path)
+
+
+@contextmanager
+def ensure_clean_path(path):
+ """
+ return essentially a named temporary file that is not opened
+ and deleted on existing; if path is a list, then create and
+ return list of filenames
+ """
+ filenames = []
+ try:
+ if isinstance(path, list):
+ filenames = [create_tempfile(p) for p in path]
+ yield filenames
+ else:
+ filenames = [create_tempfile(path)]
+ yield filenames[0]
+ finally:
+ for f in filenames:
+ safe_remove(f)
+
+
+def safe_remove(path):
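+    # Remove the file at ``path`` if one was given, ignoring OSError
+    # (e.g. when the file was never actually created).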
+ if path is not None:
+ try:
+ os.remove(path)
+ except OSError:
+ pass
+
+
+def create_tempfile(path):
+ """ create an unopened named temporary file """
+ return os.path.join(tempfile.gettempdir(), path)
+
+
+def maybe_remove(store, key):
+ """For tests using tables, try removing the table to be sure there is
+ no content from previous tests using the same table name."""
+ try:
+ store.remove(key)
+ except (ValueError, KeyError):
+ pass
diff --git a/pandas/tests/io/pytables/test_complex_values.py b/pandas/tests/io/pytables/test_complex_values.py
new file mode 100644
index 0000000000000..96634799c5a10
--- /dev/null
+++ b/pandas/tests/io/pytables/test_complex_values.py
@@ -0,0 +1,161 @@
+from warnings import catch_warnings
+
+import numpy as np
+import pytest
+
+from pandas import DataFrame, Panel, Series, concat
+from pandas.util.testing import (
+ assert_frame_equal, assert_panel_equal, assert_series_equal)
+
+from pandas.io.pytables import read_hdf
+
+from .base import Base, ensure_clean_path, ensure_clean_store
+
+
+class TestHDFComplexValues(Base):
+ # GH10447
+
+ def test_complex_fixed(self):
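+        # Round-trip complex64 and complex128 frames through the fixed
+        # HDF format and check that they survive unchanged.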
+ df = DataFrame(np.random.rand(4, 5).astype(np.complex64),
+ index=list('abcd'),
+ columns=list('ABCDE'))
+
+ with ensure_clean_path(self.path) as path:
+ df.to_hdf(path, 'df')
+ reread = read_hdf(path, 'df')
+ assert_frame_equal(df, reread)
+
+ df = DataFrame(np.random.rand(4, 5).astype(np.complex128),
+ index=list('abcd'),
+ columns=list('ABCDE'))
+ with ensure_clean_path(self.path) as path:
+ df.to_hdf(path, 'df')
+ reread = read_hdf(path, 'df')
+ assert_frame_equal(df, reread)
+
+ def test_complex_table(self):
+ df = DataFrame(np.random.rand(4, 5).astype(np.complex64),
+ index=list('abcd'),
+ columns=list('ABCDE'))
+
+ with ensure_clean_path(self.path) as path:
+ df.to_hdf(path, 'df', format='table')
+ reread = read_hdf(path, 'df')
+ assert_frame_equal(df, reread)
+
+ df = DataFrame(np.random.rand(4, 5).astype(np.complex128),
+ index=list('abcd'),
+ columns=list('ABCDE'))
+
+ with ensure_clean_path(self.path) as path:
+ df.to_hdf(path, 'df', format='table', mode='w')
+ reread = read_hdf(path, 'df')
+ assert_frame_equal(df, reread)
+
+ def test_complex_mixed_fixed(self):
+ complex64 = np.array([1.0 + 1.0j, 1.0 + 1.0j,
+ 1.0 + 1.0j, 1.0 + 1.0j], dtype=np.complex64)
+ complex128 = np.array([1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j],
+ dtype=np.complex128)
+ df = DataFrame({'A': [1, 2, 3, 4],
+ 'B': ['a', 'b', 'c', 'd'],
+ 'C': complex64,
+ 'D': complex128,
+ 'E': [1.0, 2.0, 3.0, 4.0]},
+ index=list('abcd'))
+ with ensure_clean_path(self.path) as path:
+ df.to_hdf(path, 'df')
+ reread = read_hdf(path, 'df')
+ assert_frame_equal(df, reread)
+
+ def test_complex_mixed_table(self):
+ complex64 = np.array([1.0 + 1.0j, 1.0 + 1.0j,
+ 1.0 + 1.0j, 1.0 + 1.0j], dtype=np.complex64)
+ complex128 = np.array([1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j],
+ dtype=np.complex128)
+ df = DataFrame({'A': [1, 2, 3, 4],
+ 'B': ['a', 'b', 'c', 'd'],
+ 'C': complex64,
+ 'D': complex128,
+ 'E': [1.0, 2.0, 3.0, 4.0]},
+ index=list('abcd'))
+
+ with ensure_clean_store(self.path) as store:
+ store.append('df', df, data_columns=['A', 'B'])
+ result = store.select('df', where='A>2')
+ assert_frame_equal(df.loc[df.A > 2], result)
+
+ with ensure_clean_path(self.path) as path:
+ df.to_hdf(path, 'df', format='table')
+ reread = read_hdf(path, 'df')
+ assert_frame_equal(df, reread)
+
+ @pytest.mark.filterwarnings("ignore:\\nPanel:FutureWarning")
+ def test_complex_across_dimensions_fixed(self):
+ with catch_warnings(record=True):
+ complex128 = np.array(
+ [1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j])
+ s = Series(complex128, index=list('abcd'))
+ df = DataFrame({'A': s, 'B': s})
+ p = Panel({'One': df, 'Two': df})
+
+ objs = [s, df, p]
+ comps = [assert_series_equal, assert_frame_equal,
+ assert_panel_equal]
+ for obj, comp in zip(objs, comps):
+ with ensure_clean_path(self.path) as path:
+ obj.to_hdf(path, 'obj', format='fixed')
+ reread = read_hdf(path, 'obj')
+ comp(obj, reread)
+
+ @pytest.mark.filterwarnings("ignore:\\nPanel:FutureWarning")
+ def test_complex_across_dimensions(self):
+ complex128 = np.array([1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j])
+ s = Series(complex128, index=list('abcd'))
+ df = DataFrame({'A': s, 'B': s})
+
+ with catch_warnings(record=True):
+ p = Panel({'One': df, 'Two': df})
+
+ objs = [df, p]
+ comps = [assert_frame_equal, assert_panel_equal]
+ for obj, comp in zip(objs, comps):
+ with ensure_clean_path(self.path) as path:
+ obj.to_hdf(path, 'obj', format='table')
+ reread = read_hdf(path, 'obj')
+ comp(obj, reread)
+
+ def test_complex_indexing_error(self):
+ complex128 = np.array([1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j],
+ dtype=np.complex128)
+ df = DataFrame({'A': [1, 2, 3, 4],
+ 'B': ['a', 'b', 'c', 'd'],
+ 'C': complex128},
+ index=list('abcd'))
+ with ensure_clean_store(self.path) as store:
+ pytest.raises(TypeError, store.append,
+ 'df', df, data_columns=['C'])
+
+ def test_complex_series_error(self):
+ complex128 = np.array([1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j])
+ s = Series(complex128, index=list('abcd'))
+
+ with ensure_clean_path(self.path) as path:
+ pytest.raises(TypeError, s.to_hdf, path, 'obj', format='t')
+
+ with ensure_clean_path(self.path) as path:
+ s.to_hdf(path, 'obj', format='t', index=False)
+ reread = read_hdf(path, 'obj')
+ assert_series_equal(s, reread)
+
+ def test_complex_append(self):
+ df = DataFrame({'a': np.random.randn(100).astype(np.complex128),
+ 'b': np.random.randn(100)})
+
+ with ensure_clean_store(self.path) as store:
+ store.append('df', df, data_columns=['b'])
+ store.append('df', df)
+ result = store.select('df')
+ assert_frame_equal(concat([df, df], 0), result)
diff --git a/pandas/tests/io/test_pytables.py b/pandas/tests/io/pytables/test_pytables.py
similarity index 89%
rename from pandas/tests/io/test_pytables.py
rename to pandas/tests/io/pytables/test_pytables.py
index 517a3e059469c..818c7d5ba618e 100644
--- a/pandas/tests/io/test_pytables.py
+++ b/pandas/tests/io/pytables/test_pytables.py
@@ -1,9 +1,7 @@
-from contextlib import contextmanager
import datetime
from datetime import timedelta
from distutils.version import LooseVersion
import os
-import tempfile
from warnings import catch_warnings, simplefilter
import numpy as np
@@ -23,7 +21,7 @@
date_range, isna, timedelta_range)
import pandas.util.testing as tm
from pandas.util.testing import (
- assert_frame_equal, assert_panel_equal, assert_series_equal, set_timezone)
+ assert_frame_equal, assert_panel_equal, assert_series_equal)
from pandas.io import pytables as pytables # noqa:E402
from pandas.io.formats.printing import pprint_thing
@@ -31,6 +29,10 @@
ClosedFileError, HDFStore, PossibleDataLossError, Term, read_hdf)
from pandas.io.pytables import TableIterator # noqa:E402
+from .base import (
+ Base, create_tempfile, ensure_clean_path, ensure_clean_store, maybe_remove,
+ safe_close, safe_remove)
+
tables = pytest.importorskip('tables')
@@ -42,67 +44,6 @@
"ignore:object name:tables.exceptions.NaturalNameWarning"
)
-# contextmanager to ensure the file cleanup
-
-
-def safe_remove(path):
- if path is not None:
- try:
- os.remove(path)
- except OSError:
- pass
-
-
-def safe_close(store):
- try:
- if store is not None:
- store.close()
- except IOError:
- pass
-
-
-def create_tempfile(path):
- """ create an unopened named temporary file """
- return os.path.join(tempfile.gettempdir(), path)
-
-
-@contextmanager
-def ensure_clean_store(path, mode='a', complevel=None, complib=None,
- fletcher32=False):
-
- try:
-
- # put in the temporary path if we don't have one already
- if not len(os.path.dirname(path)):
- path = create_tempfile(path)
-
- store = HDFStore(path, mode=mode, complevel=complevel,
- complib=complib, fletcher32=False)
- yield store
- finally:
- safe_close(store)
- if mode == 'w' or mode == 'a':
- safe_remove(path)
-
-
-@contextmanager
-def ensure_clean_path(path):
- """
- return essentially a named temporary file that is not opened
- and deleted on existing; if path is a list, then create and
- return list of filenames
- """
- try:
- if isinstance(path, list):
- filenames = [create_tempfile(p) for p in path]
- yield filenames
- else:
- filenames = [create_tempfile(path)]
- yield filenames[0]
- finally:
- for f in filenames:
- safe_remove(f)
-
# set these parameters so we don't have file sharing
tables.parameters.MAX_NUMEXPR_THREADS = 1
@@ -110,36 +51,6 @@ def ensure_clean_path(path):
tables.parameters.MAX_THREADS = 1
-def _maybe_remove(store, key):
- """For tests using tables, try removing the table to be sure there is
- no content from previous tests using the same table name."""
- try:
- store.remove(key)
- except (ValueError, KeyError):
- pass
-
-
-class Base(object):
-
- @classmethod
- def setup_class(cls):
-
- # Pytables 3.0.0 deprecates lots of things
- tm.reset_testing_mode()
-
- @classmethod
- def teardown_class(cls):
-
- # Pytables 3.0.0 deprecates lots of things
- tm.set_testing_mode()
-
- def setup_method(self, method):
- self.path = 'tmp.__%s__.h5' % tm.rands(10)
-
- def teardown_method(self, method):
- pass
-
-
@pytest.mark.single
@pytest.mark.filterwarnings("ignore:\\nPanel:FutureWarning")
class TestHDFStore(Base):
@@ -259,24 +170,24 @@ def test_api(self):
path = store._path
df = tm.makeDataFrame()
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df.iloc[:10], append=True, format='table')
store.append('df', df.iloc[10:], append=True, format='table')
assert_frame_equal(store.select('df'), df)
# append to False
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df.iloc[:10], append=False, format='table')
store.append('df', df.iloc[10:], append=True, format='table')
assert_frame_equal(store.select('df'), df)
# formats
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df.iloc[:10], append=False, format='table')
store.append('df', df.iloc[10:], append=True, format='table')
assert_frame_equal(store.select('df'), df)
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df.iloc[:10], append=False, format='table')
store.append('df', df.iloc[10:], append=True, format=None)
assert_frame_equal(store.select('df'), df)
@@ -307,16 +218,16 @@ def test_api_default_format(self):
df = tm.makeDataFrame()
pd.set_option('io.hdf.default_format', 'fixed')
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.put('df', df)
assert not store.get_storer('df').is_table
pytest.raises(ValueError, store.append, 'df2', df)
pd.set_option('io.hdf.default_format', 'table')
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.put('df', df)
assert store.get_storer('df').is_table
- _maybe_remove(store, 'df2')
+ maybe_remove(store, 'df2')
store.append('df2', df)
assert store.get_storer('df').is_table
@@ -455,7 +366,7 @@ def test_versioning(self):
store['a'] = tm.makeTimeSeries()
store['b'] = tm.makeDataFrame()
df = tm.makeTimeDataFrame()
- _maybe_remove(store, 'df1')
+ maybe_remove(store, 'df1')
store.append('df1', df[:10])
store.append('df1', df[10:])
assert store.root.a._v_attrs.pandas_version == '0.15.2'
@@ -463,7 +374,7 @@ def test_versioning(self):
assert store.root.df1._v_attrs.pandas_version == '0.15.2'
# write a file and wipe its versioning
- _maybe_remove(store, 'df2')
+ maybe_remove(store, 'df2')
store.append('df2', df)
# this is an error because its table_type is appendable, but no
@@ -717,7 +628,7 @@ def test_put(self):
# node does not currently exist, test _is_table_type returns False
# in this case
- # _maybe_remove(store, 'f')
+ # maybe_remove(store, 'f')
# pytest.raises(ValueError, store.put, 'f', df[10:],
# append=True)
@@ -892,7 +803,7 @@ def test_put_mixed_type(self):
df = df._consolidate()._convert(datetime=True)
with ensure_clean_store(self.path) as store:
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
# PerformanceWarning
with catch_warnings(record=True):
@@ -914,37 +825,37 @@ def test_append(self):
with catch_warnings(record=True):
df = tm.makeTimeDataFrame()
- _maybe_remove(store, 'df1')
+ maybe_remove(store, 'df1')
store.append('df1', df[:10])
store.append('df1', df[10:])
tm.assert_frame_equal(store['df1'], df)
- _maybe_remove(store, 'df2')
+ maybe_remove(store, 'df2')
store.put('df2', df[:10], format='table')
store.append('df2', df[10:])
tm.assert_frame_equal(store['df2'], df)
- _maybe_remove(store, 'df3')
+ maybe_remove(store, 'df3')
store.append('/df3', df[:10])
store.append('/df3', df[10:])
tm.assert_frame_equal(store['df3'], df)
            # this is allowed but you almost always don't want to do it
# tables.NaturalNameWarning
- _maybe_remove(store, '/df3 foo')
+ maybe_remove(store, '/df3 foo')
store.append('/df3 foo', df[:10])
store.append('/df3 foo', df[10:])
tm.assert_frame_equal(store['df3 foo'], df)
# panel
wp = tm.makePanel()
- _maybe_remove(store, 'wp1')
+ maybe_remove(store, 'wp1')
store.append('wp1', wp.iloc[:, :10, :])
store.append('wp1', wp.iloc[:, 10:, :])
assert_panel_equal(store['wp1'], wp)
            # test using different order of items on the non-index axes
- _maybe_remove(store, 'wp1')
+ maybe_remove(store, 'wp1')
wp_append1 = wp.iloc[:, :10, :]
store.append('wp1', wp_append1)
wp_append2 = wp.iloc[:, 10:, :].reindex(items=wp.items[::-1])
@@ -955,7 +866,7 @@ def test_append(self):
df = DataFrame(data=[[1, 2], [0, 1], [1, 2], [0, 0]])
df['mixed_column'] = 'testing'
df.loc[2, 'mixed_column'] = np.nan
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df)
tm.assert_frame_equal(store['df'], df)
@@ -969,12 +880,12 @@ def test_append(self):
dtype=np.uint32),
'u64': Series([2**58, 2**59, 2**60, 2**61, 2**62],
dtype=np.uint64)}, index=np.arange(5))
- _maybe_remove(store, 'uints')
+ maybe_remove(store, 'uints')
store.append('uints', uint_data)
tm.assert_frame_equal(store['uints'], uint_data)
# uints - test storage of uints in indexable columns
- _maybe_remove(store, 'uints')
+ maybe_remove(store, 'uints')
# 64-bit indices not yet supported
store.append('uints', uint_data, data_columns=[
'u08', 'u16', 'u32'])
@@ -1036,7 +947,7 @@ def check(format, index):
df = DataFrame(np.random.randn(10, 2), columns=list('AB'))
df.index = index(len(df))
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.put('df', df, format=format)
assert_frame_equal(df, store['df'])
@@ -1074,7 +985,7 @@ def test_encoding(self):
df = DataFrame(dict(A='foo', B='bar'), index=range(5))
df.loc[2, 'A'] = np.nan
df.loc[3, 'B'] = np.nan
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df, encoding='ascii')
tm.assert_frame_equal(store['df'], df)
@@ -1141,7 +1052,7 @@ def test_append_some_nans(self):
'E': datetime.datetime(2001, 1, 2, 0, 0)},
index=np.arange(20))
# some nans
- _maybe_remove(store, 'df1')
+ maybe_remove(store, 'df1')
df.loc[0:15, ['A1', 'B', 'D', 'E']] = np.nan
store.append('df1', df[:10])
store.append('df1', df[10:])
@@ -1150,7 +1061,7 @@ def test_append_some_nans(self):
# first column
df1 = df.copy()
df1.loc[:, 'A1'] = np.nan
- _maybe_remove(store, 'df1')
+ maybe_remove(store, 'df1')
store.append('df1', df1[:10])
store.append('df1', df1[10:])
tm.assert_frame_equal(store['df1'], df1)
@@ -1158,7 +1069,7 @@ def test_append_some_nans(self):
# 2nd column
df2 = df.copy()
df2.loc[:, 'A2'] = np.nan
- _maybe_remove(store, 'df2')
+ maybe_remove(store, 'df2')
store.append('df2', df2[:10])
store.append('df2', df2[10:])
tm.assert_frame_equal(store['df2'], df2)
@@ -1166,7 +1077,7 @@ def test_append_some_nans(self):
# datetimes
df3 = df.copy()
df3.loc[:, 'E'] = np.nan
- _maybe_remove(store, 'df3')
+ maybe_remove(store, 'df3')
store.append('df3', df3[:10])
store.append('df3', df3[10:])
tm.assert_frame_equal(store['df3'], df3)
@@ -1181,26 +1092,26 @@ def test_append_all_nans(self):
df.loc[0:15, :] = np.nan
# nan some entire rows (dropna=True)
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df[:10], dropna=True)
store.append('df', df[10:], dropna=True)
tm.assert_frame_equal(store['df'], df[-4:])
# nan some entire rows (dropna=False)
- _maybe_remove(store, 'df2')
+ maybe_remove(store, 'df2')
store.append('df2', df[:10], dropna=False)
store.append('df2', df[10:], dropna=False)
tm.assert_frame_equal(store['df2'], df)
# tests the option io.hdf.dropna_table
pd.set_option('io.hdf.dropna_table', False)
- _maybe_remove(store, 'df3')
+ maybe_remove(store, 'df3')
store.append('df3', df[:10])
store.append('df3', df[10:])
tm.assert_frame_equal(store['df3'], df)
pd.set_option('io.hdf.dropna_table', True)
- _maybe_remove(store, 'df4')
+ maybe_remove(store, 'df4')
store.append('df4', df[:10])
store.append('df4', df[10:])
tm.assert_frame_equal(store['df4'], df[-4:])
@@ -1213,12 +1124,12 @@ def test_append_all_nans(self):
df.loc[0:15, :] = np.nan
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df[:10], dropna=True)
store.append('df', df[10:], dropna=True)
tm.assert_frame_equal(store['df'], df)
- _maybe_remove(store, 'df2')
+ maybe_remove(store, 'df2')
store.append('df2', df[:10], dropna=False)
store.append('df2', df[10:], dropna=False)
tm.assert_frame_equal(store['df2'], df)
@@ -1234,12 +1145,12 @@ def test_append_all_nans(self):
df.loc[0:15, :] = np.nan
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df[:10], dropna=True)
store.append('df', df[10:], dropna=True)
tm.assert_frame_equal(store['df'], df)
- _maybe_remove(store, 'df2')
+ maybe_remove(store, 'df2')
store.append('df2', df[:10], dropna=False)
store.append('df2', df[10:], dropna=False)
tm.assert_frame_equal(store['df2'], df)
@@ -1276,7 +1187,7 @@ def test_append_frame_column_oriented(self):
# column oriented
df = tm.makeTimeDataFrame()
- _maybe_remove(store, 'df1')
+ maybe_remove(store, 'df1')
store.append('df1', df.iloc[:, :2], axes=['columns'])
store.append('df1', df.iloc[:, 2:])
tm.assert_frame_equal(store['df1'], df)
@@ -1428,7 +1339,7 @@ def check_col(key, name, size):
pd.concat([df['B'], df2['B']]))
# with nans
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
df = tm.makeTimeDataFrame()
df['string'] = 'foo'
df.loc[1:4, 'string'] = np.nan
@@ -1449,19 +1360,19 @@ def check_col(key, name, size):
df = DataFrame(dict(A='foo', B='bar'), index=range(10))
# a min_itemsize that creates a data_column
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df, min_itemsize={'A': 200})
check_col('df', 'A', 200)
assert store.get_storer('df').data_columns == ['A']
# a min_itemsize that creates a data_column2
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df, data_columns=['B'], min_itemsize={'A': 200})
check_col('df', 'A', 200)
assert store.get_storer('df').data_columns == ['B', 'A']
# a min_itemsize that creates a data_column2
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df, data_columns=[
'B'], min_itemsize={'values': 200})
check_col('df', 'B', 200)
@@ -1469,7 +1380,7 @@ def check_col(key, name, size):
assert store.get_storer('df').data_columns == ['B']
# infer the .typ on subsequent appends
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df[:5], min_itemsize=200)
store.append('df', df[5:], min_itemsize=200)
tm.assert_frame_equal(store['df'], df)
@@ -1477,7 +1388,7 @@ def check_col(key, name, size):
# invalid min_itemsize keys
df = DataFrame(['foo', 'foo', 'foo', 'barh',
'barh', 'barh'], columns=['A'])
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
pytest.raises(ValueError, store.append, 'df',
df, min_itemsize={'foo': 20, 'foobar': 20})
@@ -1528,7 +1439,7 @@ def test_append_with_data_columns(self):
with ensure_clean_store(self.path) as store:
df = tm.makeTimeDataFrame()
df.iloc[0, df.columns.get_loc('B')] = 1.
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df[:2], data_columns=['B'])
store.append('df', df[2:])
tm.assert_frame_equal(store['df'], df)
@@ -1554,7 +1465,7 @@ def test_append_with_data_columns(self):
df_new['string'] = 'foo'
df_new.loc[1:4, 'string'] = np.nan
df_new.loc[5:6, 'string'] = 'bar'
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df_new, data_columns=['string'])
result = store.select('df', "string='foo'")
expected = df_new[df_new.string == 'foo']
@@ -1566,15 +1477,15 @@ def check_col(key, name, size):
.table.description, name).itemsize == size
with ensure_clean_store(self.path) as store:
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df_new, data_columns=['string'],
min_itemsize={'string': 30})
check_col('df', 'string', 30)
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append(
'df', df_new, data_columns=['string'], min_itemsize=30)
check_col('df', 'string', 30)
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df_new, data_columns=['string'],
min_itemsize={'values': 30})
check_col('df', 'string', 30)
@@ -1583,7 +1494,7 @@ def check_col(key, name, size):
df_new['string2'] = 'foobarbah'
df_new['string_block1'] = 'foobarbah1'
df_new['string_block2'] = 'foobarbah2'
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df_new, data_columns=['string', 'string2'],
min_itemsize={'string': 30, 'string2': 40,
'values': 50})
@@ -1606,7 +1517,7 @@ def check_col(key, name, size):
sl = df_new.columns.get_loc('string2')
df_new.iloc[2:5, sl] = np.nan
df_new.iloc[7:8, sl] = 'bar'
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append(
'df', df_new, data_columns=['A', 'B', 'string', 'string2'])
result = store.select('df',
@@ -1633,7 +1544,7 @@ def check_col(key, name, size):
df_dc = df_dc._convert(datetime=True)
df_dc.loc[3:5, ['A', 'B', 'datetime']] = np.nan
- _maybe_remove(store, 'df_dc')
+ maybe_remove(store, 'df_dc')
store.append('df_dc', df_dc,
data_columns=['B', 'C', 'string',
'string2', 'datetime'])
@@ -1757,7 +1668,7 @@ def col(t, column):
assert(col('f2', 'string2').is_indexed is False)
# try to index a non-table
- _maybe_remove(store, 'f2')
+ maybe_remove(store, 'f2')
store.put('f2', df)
pytest.raises(TypeError, store.create_table_index, 'f2')
@@ -1864,21 +1775,21 @@ def make_index(names=None):
names=names)
# no names
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
df = DataFrame(np.zeros((12, 2)), columns=[
'a', 'b'], index=make_index())
store.append('df', df)
tm.assert_frame_equal(store.select('df'), df)
# partial names
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
df = DataFrame(np.zeros((12, 2)), columns=[
'a', 'b'], index=make_index(['date', None, None]))
store.append('df', df)
tm.assert_frame_equal(store.select('df'), df)
# series
- _maybe_remove(store, 's')
+ maybe_remove(store, 's')
s = Series(np.zeros(12), index=make_index(['date', None, None]))
store.append('s', s)
xp = Series(np.zeros(12), index=make_index(
@@ -1886,19 +1797,19 @@ def make_index(names=None):
tm.assert_series_equal(store.select('s'), xp)
# dup with column
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
df = DataFrame(np.zeros((12, 2)), columns=[
'a', 'b'], index=make_index(['date', 'a', 't']))
pytest.raises(ValueError, store.append, 'df', df)
# dup within level
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
df = DataFrame(np.zeros((12, 2)), columns=['a', 'b'],
index=make_index(['date', 'date', 'date']))
pytest.raises(ValueError, store.append, 'df', df)
# fully names
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
df = DataFrame(np.zeros((12, 2)), columns=[
'a', 'b'], index=make_index(['date', 's', 't']))
store.append('df', df)
@@ -2241,7 +2152,7 @@ def test_append_with_timedelta(self):
with ensure_clean_store(self.path) as store:
# table
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df, data_columns=True)
result = store.select('df')
assert_frame_equal(result, df)
@@ -2266,7 +2177,7 @@ def test_append_with_timedelta(self):
assert_frame_equal(result, df.iloc[4:])
# fixed
- _maybe_remove(store, 'df2')
+ maybe_remove(store, 'df2')
store.put('df2', df)
result = store.select('df2')
assert_frame_equal(result, df)
@@ -2279,11 +2190,11 @@ def test_remove(self):
df = tm.makeDataFrame()
store['a'] = ts
store['b'] = df
- _maybe_remove(store, 'a')
+ maybe_remove(store, 'a')
assert len(store) == 1
tm.assert_frame_equal(df, store['b'])
- _maybe_remove(store, 'b')
+ maybe_remove(store, 'b')
assert len(store) == 0
# nonexistence
@@ -2292,13 +2203,13 @@ def test_remove(self):
# pathing
store['a'] = ts
store['b/foo'] = df
- _maybe_remove(store, 'foo')
- _maybe_remove(store, 'b/foo')
+ maybe_remove(store, 'foo')
+ maybe_remove(store, 'b/foo')
assert len(store) == 1
store['a'] = ts
store['b/foo'] = df
- _maybe_remove(store, 'b')
+ maybe_remove(store, 'b')
assert len(store) == 1
# __delitem__
@@ -2328,7 +2239,7 @@ def test_remove_where(self):
assert_panel_equal(rs, expected)
# empty where
- _maybe_remove(store, 'wp')
+ maybe_remove(store, 'wp')
store.put('wp', wp, format='table')
# deleted number (entire table)
@@ -2336,7 +2247,7 @@ def test_remove_where(self):
assert n == 120
# non - empty where
- _maybe_remove(store, 'wp')
+ maybe_remove(store, 'wp')
store.put('wp', wp, format='table')
pytest.raises(ValueError, store.remove,
'wp', ['foo'])
@@ -2350,7 +2261,7 @@ def test_remove_startstop(self):
wp = tm.makePanel(30)
# start
- _maybe_remove(store, 'wp1')
+ maybe_remove(store, 'wp1')
store.put('wp1', wp, format='t')
n = store.remove('wp1', start=32)
assert n == 120 - 32
@@ -2358,7 +2269,7 @@ def test_remove_startstop(self):
expected = wp.reindex(major_axis=wp.major_axis[:32 // 4])
assert_panel_equal(result, expected)
- _maybe_remove(store, 'wp2')
+ maybe_remove(store, 'wp2')
store.put('wp2', wp, format='t')
n = store.remove('wp2', start=-32)
assert n == 32
@@ -2367,7 +2278,7 @@ def test_remove_startstop(self):
assert_panel_equal(result, expected)
# stop
- _maybe_remove(store, 'wp3')
+ maybe_remove(store, 'wp3')
store.put('wp3', wp, format='t')
n = store.remove('wp3', stop=32)
assert n == 32
@@ -2375,7 +2286,7 @@ def test_remove_startstop(self):
expected = wp.reindex(major_axis=wp.major_axis[32 // 4:])
assert_panel_equal(result, expected)
- _maybe_remove(store, 'wp4')
+ maybe_remove(store, 'wp4')
store.put('wp4', wp, format='t')
n = store.remove('wp4', stop=-32)
assert n == 120 - 32
@@ -2384,7 +2295,7 @@ def test_remove_startstop(self):
assert_panel_equal(result, expected)
# start n stop
- _maybe_remove(store, 'wp5')
+ maybe_remove(store, 'wp5')
store.put('wp5', wp, format='t')
n = store.remove('wp5', start=16, stop=-16)
assert n == 120 - 32
@@ -2394,7 +2305,7 @@ def test_remove_startstop(self):
.union(wp.major_axis[-16 // 4:])))
assert_panel_equal(result, expected)
- _maybe_remove(store, 'wp6')
+ maybe_remove(store, 'wp6')
store.put('wp6', wp, format='t')
n = store.remove('wp6', start=16, stop=16)
assert n == 0
@@ -2403,7 +2314,7 @@ def test_remove_startstop(self):
assert_panel_equal(result, expected)
# with where
- _maybe_remove(store, 'wp7')
+ maybe_remove(store, 'wp7')
# TODO: unused?
date = wp.major_axis.take(np.arange(0, 30, 3)) # noqa
@@ -2425,7 +2336,7 @@ def test_remove_crit(self):
wp = tm.makePanel(30)
# group row removal
- _maybe_remove(store, 'wp3')
+ maybe_remove(store, 'wp3')
date4 = wp.major_axis.take([0, 1, 2, 4, 5, 6, 8, 9, 10])
crit4 = 'major_axis=date4'
store.put('wp3', wp, format='t')
@@ -2438,7 +2349,7 @@ def test_remove_crit(self):
assert_panel_equal(result, expected)
# upper half
- _maybe_remove(store, 'wp')
+ maybe_remove(store, 'wp')
store.put('wp', wp, format='table')
date = wp.major_axis[len(wp.major_axis) // 2]
@@ -2455,7 +2366,7 @@ def test_remove_crit(self):
assert_panel_equal(result, expected)
# individual row elements
- _maybe_remove(store, 'wp2')
+ maybe_remove(store, 'wp2')
store.put('wp2', wp, format='table')
date1 = wp.major_axis[1:3]
@@ -2488,7 +2399,7 @@ def test_remove_crit(self):
assert_panel_equal(result, expected)
# corners
- _maybe_remove(store, 'wp4')
+ maybe_remove(store, 'wp4')
store.put('wp4', wp, format='table')
n = store.remove(
'wp4', where="major_axis>wp.major_axis[-1]")
@@ -3122,12 +3033,12 @@ def test_select(self):
wp = tm.makePanel()
# put/select ok
- _maybe_remove(store, 'wp')
+ maybe_remove(store, 'wp')
store.put('wp', wp, format='table')
store.select('wp')
# non-table ok (where = None)
- _maybe_remove(store, 'wp')
+ maybe_remove(store, 'wp')
store.put('wp2', wp)
store.select('wp2')
@@ -3137,7 +3048,7 @@ def test_select(self):
major_axis=date_range('1/1/2000', periods=100),
minor_axis=['E%03d' % i for i in range(100)])
- _maybe_remove(store, 'wp')
+ maybe_remove(store, 'wp')
store.append('wp', wp)
items = ['Item%03d' % i for i in range(80)]
result = store.select('wp', 'items=items')
@@ -3150,7 +3061,7 @@ def test_select(self):
# select with columns=
df = tm.makeTimeDataFrame()
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df)
result = store.select('df', columns=['A', 'B'])
expected = df.reindex(columns=['A', 'B'])
@@ -3162,21 +3073,21 @@ def test_select(self):
tm.assert_frame_equal(expected, result)
# with a data column
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df, data_columns=['A'])
result = store.select('df', ['A > 0'], columns=['A', 'B'])
expected = df[df.A > 0].reindex(columns=['A', 'B'])
tm.assert_frame_equal(expected, result)
# all a data columns
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df, data_columns=True)
result = store.select('df', ['A > 0'], columns=['A', 'B'])
expected = df[df.A > 0].reindex(columns=['A', 'B'])
tm.assert_frame_equal(expected, result)
# with a data column, but different columns
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df, data_columns=['A'])
result = store.select('df', ['A > 0'], columns=['C', 'D'])
expected = df[df.A > 0].reindex(columns=['C', 'D'])
@@ -3189,7 +3100,7 @@ def test_select_dtypes(self):
df = DataFrame(dict(
ts=bdate_range('2012-01-01', periods=300),
A=np.random.randn(300)))
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df, data_columns=['ts', 'A'])
result = store.select('df', "ts>=Timestamp('2012-02-01')")
@@ -3201,7 +3112,7 @@ def test_select_dtypes(self):
df['object'] = 'foo'
df.loc[4:5, 'object'] = 'bar'
df['boolv'] = df['A'] > 0
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df, data_columns=True)
expected = (df[df.boolv == True] # noqa
@@ -3220,7 +3131,7 @@ def test_select_dtypes(self):
# integer index
df = DataFrame(dict(A=np.random.rand(20), B=np.random.rand(20)))
- _maybe_remove(store, 'df_int')
+ maybe_remove(store, 'df_int')
store.append('df_int', df)
result = store.select(
'df_int', "index<10 and columns=['A']")
@@ -3230,7 +3141,7 @@ def test_select_dtypes(self):
# float index
df = DataFrame(dict(A=np.random.rand(
20), B=np.random.rand(20), index=np.arange(20, dtype='f8')))
- _maybe_remove(store, 'df_float')
+ maybe_remove(store, 'df_float')
store.append('df_float', df)
result = store.select(
'df_float', "index<10.0 and columns=['A']")
@@ -3300,7 +3211,7 @@ def test_select_with_many_inputs(self):
B=range(300),
users=['a'] * 50 + ['b'] * 50 + ['c'] * 100 +
['a%03d' % i for i in range(100)]))
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df, data_columns=['ts', 'A', 'B', 'users'])
# regular select
@@ -3344,7 +3255,7 @@ def test_select_iterator(self):
with ensure_clean_store(self.path) as store:
df = tm.makeTimeDataFrame(500)
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df)
expected = store.select('df')
@@ -3414,7 +3325,7 @@ def test_select_iterator_complete_8014(self):
with ensure_clean_store(self.path) as store:
expected = tm.makeTimeDataFrame(100064, 'S')
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', expected)
beg_dt = expected.index[0]
@@ -3446,7 +3357,7 @@ def test_select_iterator_complete_8014(self):
with ensure_clean_store(self.path) as store:
expected = tm.makeTimeDataFrame(100064, 'S')
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', expected)
beg_dt = expected.index[0]
@@ -3488,7 +3399,7 @@ def test_select_iterator_non_complete_8014(self):
with ensure_clean_store(self.path) as store:
expected = tm.makeTimeDataFrame(100064, 'S')
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', expected)
beg_dt = expected.index[1]
@@ -3523,7 +3434,7 @@ def test_select_iterator_non_complete_8014(self):
with ensure_clean_store(self.path) as store:
expected = tm.makeTimeDataFrame(100064, 'S')
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', expected)
end_dt = expected.index[-1]
@@ -3545,7 +3456,7 @@ def test_select_iterator_many_empty_frames(self):
with ensure_clean_store(self.path) as store:
expected = tm.makeTimeDataFrame(100000, 'S')
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', expected)
beg_dt = expected.index[0]
@@ -3606,7 +3517,7 @@ def test_retain_index_attributes(self):
index=date_range('2000-1-1', periods=3, freq='H'))))
with ensure_clean_store(self.path) as store:
- _maybe_remove(store, 'data')
+ maybe_remove(store, 'data')
store.put('data', df, format='table')
result = store.get('data')
@@ -3628,7 +3539,7 @@ def test_retain_index_attributes(self):
assert store.get_storer('data').info['index']['freq'] is None
# this is ok
- _maybe_remove(store, 'df2')
+ maybe_remove(store, 'df2')
df2 = DataFrame(dict(
A=Series(lrange(3),
index=[Timestamp('20010101'), Timestamp('20010102'),
@@ -3917,7 +3828,7 @@ def test_read_column(self):
df = tm.makeTimeDataFrame()
with ensure_clean_store(self.path) as store:
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
# GH 17912
# HDFStore.select_column should raise a KeyError
@@ -3989,7 +3900,7 @@ def test_coordinates(self):
with ensure_clean_store(self.path) as store:
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
store.append('df', df)
# all
@@ -3997,7 +3908,7 @@ def test_coordinates(self):
assert((c.values == np.arange(len(df.index))).all())
# get coordinates back & test vs frame
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
df = DataFrame(dict(A=lrange(5), B=lrange(5)))
store.append('df', df)
@@ -4015,8 +3926,8 @@ def test_coordinates(self):
assert isinstance(c, Index)
# multiple tables
- _maybe_remove(store, 'df1')
- _maybe_remove(store, 'df2')
+ maybe_remove(store, 'df1')
+ maybe_remove(store, 'df2')
df1 = tm.makeTimeDataFrame()
df2 = tm.makeTimeDataFrame().rename(columns=lambda x: "%s_2" % x)
store.append('df1', df1, data_columns=['A', 'B'])
@@ -4756,41 +4667,41 @@ def test_categorical(self):
with ensure_clean_store(self.path) as store:
# Basic
- _maybe_remove(store, 's')
+ maybe_remove(store, 's')
s = Series(Categorical(['a', 'b', 'b', 'a', 'a', 'c'], categories=[
'a', 'b', 'c', 'd'], ordered=False))
store.append('s', s, format='table')
result = store.select('s')
tm.assert_series_equal(s, result)
- _maybe_remove(store, 's_ordered')
+ maybe_remove(store, 's_ordered')
s = Series(Categorical(['a', 'b', 'b', 'a', 'a', 'c'], categories=[
'a', 'b', 'c', 'd'], ordered=True))
store.append('s_ordered', s, format='table')
result = store.select('s_ordered')
tm.assert_series_equal(s, result)
- _maybe_remove(store, 'df')
+ maybe_remove(store, 'df')
df = DataFrame({"s": s, "vals": [1, 2, 3, 4, 5, 6]})
store.append('df', df, format='table')
result = store.select('df')
tm.assert_frame_equal(result, df)
# Dtypes
- _maybe_remove(store, 'si')
+ maybe_remove(store, 'si')
s = Series([1, 1, 2, 2, 3, 4, 5]).astype('category')
store.append('si', s)
result = store.select('si')
tm.assert_series_equal(result, s)
- _maybe_remove(store, 'si2')
+ maybe_remove(store, 'si2')
s = Series([1, 1, np.nan, 2, 3, 4, 5]).astype('category')
store.append('si2', s)
result = store.select('si2')
tm.assert_series_equal(result, s)
# Multiple
- _maybe_remove(store, 'df2')
+ maybe_remove(store, 'df2')
df2 = df.copy()
df2['s2'] = Series(list('abcdefg')).astype('category')
store.append('df2', df2)
@@ -4804,7 +4715,7 @@ def test_categorical(self):
assert '/df2/meta/values_block_1/meta' in info
# unordered
- _maybe_remove(store, 's2')
+ maybe_remove(store, 's2')
s = Series(Categorical(['a', 'b', 'b', 'a', 'a', 'c'], categories=[
'a', 'b', 'c', 'd'], ordered=False))
store.append('s2', s, format='table')
@@ -4812,7 +4723,7 @@ def test_categorical(self):
tm.assert_series_equal(result, s)
# Query
- _maybe_remove(store, 'df3')
+ maybe_remove(store, 'df3')
store.append('df3', df, data_columns=['s'])
expected = df[df.s.isin(['b', 'c'])]
result = store.select('df3', where=['s in ["b","c"]'])
@@ -5228,424 +5139,3 @@ def test_read_py2_hdf_file_in_py3(self, datapath):
mode='r') as store:
result = store['p']
assert_frame_equal(result, expected)
-
-
-class TestHDFComplexValues(Base):
- # GH10447
-
- def test_complex_fixed(self):
- df = DataFrame(np.random.rand(4, 5).astype(np.complex64),
- index=list('abcd'),
- columns=list('ABCDE'))
-
- with ensure_clean_path(self.path) as path:
- df.to_hdf(path, 'df')
- reread = read_hdf(path, 'df')
- assert_frame_equal(df, reread)
-
- df = DataFrame(np.random.rand(4, 5).astype(np.complex128),
- index=list('abcd'),
- columns=list('ABCDE'))
- with ensure_clean_path(self.path) as path:
- df.to_hdf(path, 'df')
- reread = read_hdf(path, 'df')
- assert_frame_equal(df, reread)
-
- def test_complex_table(self):
- df = DataFrame(np.random.rand(4, 5).astype(np.complex64),
- index=list('abcd'),
- columns=list('ABCDE'))
-
- with ensure_clean_path(self.path) as path:
- df.to_hdf(path, 'df', format='table')
- reread = read_hdf(path, 'df')
- assert_frame_equal(df, reread)
-
- df = DataFrame(np.random.rand(4, 5).astype(np.complex128),
- index=list('abcd'),
- columns=list('ABCDE'))
-
- with ensure_clean_path(self.path) as path:
- df.to_hdf(path, 'df', format='table', mode='w')
- reread = read_hdf(path, 'df')
- assert_frame_equal(df, reread)
-
- def test_complex_mixed_fixed(self):
- complex64 = np.array([1.0 + 1.0j, 1.0 + 1.0j,
- 1.0 + 1.0j, 1.0 + 1.0j], dtype=np.complex64)
- complex128 = np.array([1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j],
- dtype=np.complex128)
- df = DataFrame({'A': [1, 2, 3, 4],
- 'B': ['a', 'b', 'c', 'd'],
- 'C': complex64,
- 'D': complex128,
- 'E': [1.0, 2.0, 3.0, 4.0]},
- index=list('abcd'))
- with ensure_clean_path(self.path) as path:
- df.to_hdf(path, 'df')
- reread = read_hdf(path, 'df')
- assert_frame_equal(df, reread)
-
- def test_complex_mixed_table(self):
- complex64 = np.array([1.0 + 1.0j, 1.0 + 1.0j,
- 1.0 + 1.0j, 1.0 + 1.0j], dtype=np.complex64)
- complex128 = np.array([1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j],
- dtype=np.complex128)
- df = DataFrame({'A': [1, 2, 3, 4],
- 'B': ['a', 'b', 'c', 'd'],
- 'C': complex64,
- 'D': complex128,
- 'E': [1.0, 2.0, 3.0, 4.0]},
- index=list('abcd'))
-
- with ensure_clean_store(self.path) as store:
- store.append('df', df, data_columns=['A', 'B'])
- result = store.select('df', where='A>2')
- assert_frame_equal(df.loc[df.A > 2], result)
-
- with ensure_clean_path(self.path) as path:
- df.to_hdf(path, 'df', format='table')
- reread = read_hdf(path, 'df')
- assert_frame_equal(df, reread)
-
- @pytest.mark.filterwarnings("ignore:\\nPanel:FutureWarning")
- def test_complex_across_dimensions_fixed(self):
- with catch_warnings(record=True):
- complex128 = np.array(
- [1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j])
- s = Series(complex128, index=list('abcd'))
- df = DataFrame({'A': s, 'B': s})
- p = Panel({'One': df, 'Two': df})
-
- objs = [s, df, p]
- comps = [tm.assert_series_equal, tm.assert_frame_equal,
- tm.assert_panel_equal]
- for obj, comp in zip(objs, comps):
- with ensure_clean_path(self.path) as path:
- obj.to_hdf(path, 'obj', format='fixed')
- reread = read_hdf(path, 'obj')
- comp(obj, reread)
-
- @pytest.mark.filterwarnings("ignore:\\nPanel:FutureWarning")
- def test_complex_across_dimensions(self):
- complex128 = np.array([1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j])
- s = Series(complex128, index=list('abcd'))
- df = DataFrame({'A': s, 'B': s})
-
- with catch_warnings(record=True):
- p = Panel({'One': df, 'Two': df})
-
- objs = [df, p]
- comps = [tm.assert_frame_equal, tm.assert_panel_equal]
- for obj, comp in zip(objs, comps):
- with ensure_clean_path(self.path) as path:
- obj.to_hdf(path, 'obj', format='table')
- reread = read_hdf(path, 'obj')
- comp(obj, reread)
-
- def test_complex_indexing_error(self):
- complex128 = np.array([1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j],
- dtype=np.complex128)
- df = DataFrame({'A': [1, 2, 3, 4],
- 'B': ['a', 'b', 'c', 'd'],
- 'C': complex128},
- index=list('abcd'))
- with ensure_clean_store(self.path) as store:
- pytest.raises(TypeError, store.append,
- 'df', df, data_columns=['C'])
-
- def test_complex_series_error(self):
- complex128 = np.array([1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j])
- s = Series(complex128, index=list('abcd'))
-
- with ensure_clean_path(self.path) as path:
- pytest.raises(TypeError, s.to_hdf, path, 'obj', format='t')
-
- with ensure_clean_path(self.path) as path:
- s.to_hdf(path, 'obj', format='t', index=False)
- reread = read_hdf(path, 'obj')
- tm.assert_series_equal(s, reread)
-
- def test_complex_append(self):
- df = DataFrame({'a': np.random.randn(100).astype(np.complex128),
- 'b': np.random.randn(100)})
-
- with ensure_clean_store(self.path) as store:
- store.append('df', df, data_columns=['b'])
- store.append('df', df)
- result = store.select('df')
- assert_frame_equal(pd.concat([df, df], 0), result)
-
-
-class TestTimezones(Base):
-
- def _compare_with_tz(self, a, b):
- tm.assert_frame_equal(a, b)
-
- # compare the zones on each element
- for c in a.columns:
- for i in a.index:
- a_e = a.loc[i, c]
- b_e = b.loc[i, c]
- if not (a_e == b_e and a_e.tz == b_e.tz):
- raise AssertionError(
- "invalid tz comparison [%s] [%s]" % (a_e, b_e))
-
- def test_append_with_timezones_dateutil(self):
-
- from datetime import timedelta
-
- # use maybe_get_tz instead of dateutil.tz.gettz to handle the windows
- # filename issues.
- from pandas._libs.tslibs.timezones import maybe_get_tz
- gettz = lambda x: maybe_get_tz('dateutil/' + x)
-
- # as columns
- with ensure_clean_store(self.path) as store:
-
- _maybe_remove(store, 'df_tz')
- df = DataFrame(dict(A=[Timestamp('20130102 2:00:00', tz=gettz(
- 'US/Eastern')) + timedelta(hours=1) * i for i in range(5)]))
-
- store.append('df_tz', df, data_columns=['A'])
- result = store['df_tz']
- self._compare_with_tz(result, df)
- assert_frame_equal(result, df)
-
- # select with tz aware
- expected = df[df.A >= df.A[3]]
- result = store.select('df_tz', where='A>=df.A[3]')
- self._compare_with_tz(result, expected)
-
- # ensure we include dates in DST and STD time here.
- _maybe_remove(store, 'df_tz')
- df = DataFrame(dict(A=Timestamp('20130102',
- tz=gettz('US/Eastern')),
- B=Timestamp('20130603',
- tz=gettz('US/Eastern'))),
- index=range(5))
- store.append('df_tz', df)
- result = store['df_tz']
- self._compare_with_tz(result, df)
- assert_frame_equal(result, df)
-
- df = DataFrame(dict(A=Timestamp('20130102',
- tz=gettz('US/Eastern')),
- B=Timestamp('20130102', tz=gettz('EET'))),
- index=range(5))
- pytest.raises(ValueError, store.append, 'df_tz', df)
-
- # this is ok
- _maybe_remove(store, 'df_tz')
- store.append('df_tz', df, data_columns=['A', 'B'])
- result = store['df_tz']
- self._compare_with_tz(result, df)
- assert_frame_equal(result, df)
-
- # can't append with diff timezone
- df = DataFrame(dict(A=Timestamp('20130102',
- tz=gettz('US/Eastern')),
- B=Timestamp('20130102', tz=gettz('CET'))),
- index=range(5))
- pytest.raises(ValueError, store.append, 'df_tz', df)
-
- # as index
- with ensure_clean_store(self.path) as store:
-
- # GH 4098 example
- df = DataFrame(dict(A=Series(lrange(3), index=date_range(
- '2000-1-1', periods=3, freq='H', tz=gettz('US/Eastern')))))
-
- _maybe_remove(store, 'df')
- store.put('df', df)
- result = store.select('df')
- assert_frame_equal(result, df)
-
- _maybe_remove(store, 'df')
- store.append('df', df)
- result = store.select('df')
- assert_frame_equal(result, df)
-
- def test_append_with_timezones_pytz(self):
-
- from datetime import timedelta
-
- # as columns
- with ensure_clean_store(self.path) as store:
-
- _maybe_remove(store, 'df_tz')
- df = DataFrame(dict(A=[Timestamp('20130102 2:00:00',
- tz='US/Eastern') +
- timedelta(hours=1) * i
- for i in range(5)]))
- store.append('df_tz', df, data_columns=['A'])
- result = store['df_tz']
- self._compare_with_tz(result, df)
- assert_frame_equal(result, df)
-
- # select with tz aware
- self._compare_with_tz(store.select(
- 'df_tz', where='A>=df.A[3]'), df[df.A >= df.A[3]])
-
- _maybe_remove(store, 'df_tz')
- # ensure we include dates in DST and STD time here.
- df = DataFrame(dict(A=Timestamp('20130102', tz='US/Eastern'),
- B=Timestamp('20130603', tz='US/Eastern')),
- index=range(5))
- store.append('df_tz', df)
- result = store['df_tz']
- self._compare_with_tz(result, df)
- assert_frame_equal(result, df)
-
- df = DataFrame(dict(A=Timestamp('20130102', tz='US/Eastern'),
- B=Timestamp('20130102', tz='EET')),
- index=range(5))
- pytest.raises(ValueError, store.append, 'df_tz', df)
-
- # this is ok
- _maybe_remove(store, 'df_tz')
- store.append('df_tz', df, data_columns=['A', 'B'])
- result = store['df_tz']
- self._compare_with_tz(result, df)
- assert_frame_equal(result, df)
-
- # can't append with diff timezone
- df = DataFrame(dict(A=Timestamp('20130102', tz='US/Eastern'),
- B=Timestamp('20130102', tz='CET')),
- index=range(5))
- pytest.raises(ValueError, store.append, 'df_tz', df)
-
- # as index
- with ensure_clean_store(self.path) as store:
-
- # GH 4098 example
- df = DataFrame(dict(A=Series(lrange(3), index=date_range(
- '2000-1-1', periods=3, freq='H', tz='US/Eastern'))))
-
- _maybe_remove(store, 'df')
- store.put('df', df)
- result = store.select('df')
- assert_frame_equal(result, df)
-
- _maybe_remove(store, 'df')
- store.append('df', df)
- result = store.select('df')
- assert_frame_equal(result, df)
-
- def test_tseries_select_index_column(self):
- # GH7777
- # selecting a UTC datetimeindex column did
- # not preserve UTC tzinfo set before storing
-
- # check that no tz still works
- rng = date_range('1/1/2000', '1/30/2000')
- frame = DataFrame(np.random.randn(len(rng), 4), index=rng)
-
- with ensure_clean_store(self.path) as store:
- store.append('frame', frame)
- result = store.select_column('frame', 'index')
- assert rng.tz == DatetimeIndex(result.values).tz
-
- # check utc
- rng = date_range('1/1/2000', '1/30/2000', tz='UTC')
- frame = DataFrame(np.random.randn(len(rng), 4), index=rng)
-
- with ensure_clean_store(self.path) as store:
- store.append('frame', frame)
- result = store.select_column('frame', 'index')
- assert rng.tz == result.dt.tz
-
- # double check non-utc
- rng = date_range('1/1/2000', '1/30/2000', tz='US/Eastern')
- frame = DataFrame(np.random.randn(len(rng), 4), index=rng)
-
- with ensure_clean_store(self.path) as store:
- store.append('frame', frame)
- result = store.select_column('frame', 'index')
- assert rng.tz == result.dt.tz
-
- def test_timezones_fixed(self):
- with ensure_clean_store(self.path) as store:
-
- # index
- rng = date_range('1/1/2000', '1/30/2000', tz='US/Eastern')
- df = DataFrame(np.random.randn(len(rng), 4), index=rng)
- store['df'] = df
- result = store['df']
- assert_frame_equal(result, df)
-
- # as data
- # GH11411
- _maybe_remove(store, 'df')
- df = DataFrame({'A': rng,
- 'B': rng.tz_convert('UTC').tz_localize(None),
- 'C': rng.tz_convert('CET'),
- 'D': range(len(rng))}, index=rng)
- store['df'] = df
- result = store['df']
- assert_frame_equal(result, df)
-
- def test_fixed_offset_tz(self):
- rng = date_range('1/1/2000 00:00:00-07:00', '1/30/2000 00:00:00-07:00')
- frame = DataFrame(np.random.randn(len(rng), 4), index=rng)
-
- with ensure_clean_store(self.path) as store:
- store['frame'] = frame
- recons = store['frame']
- tm.assert_index_equal(recons.index, rng)
- assert rng.tz == recons.index.tz
-
- @td.skip_if_windows
- def test_store_timezone(self):
- # GH2852
- # issue storing datetime.date with a timezone as it resets when read
- # back in a new timezone
-
- # original method
- with ensure_clean_store(self.path) as store:
-
- today = datetime.date(2013, 9, 10)
- df = DataFrame([1, 2, 3], index=[today, today, today])
- store['obj1'] = df
- result = store['obj1']
- assert_frame_equal(result, df)
-
- # with tz setting
- with ensure_clean_store(self.path) as store:
-
- with set_timezone('EST5EDT'):
- today = datetime.date(2013, 9, 10)
- df = DataFrame([1, 2, 3], index=[today, today, today])
- store['obj1'] = df
-
- with set_timezone('CST6CDT'):
- result = store['obj1']
-
- assert_frame_equal(result, df)
-
- def test_legacy_datetimetz_object(self, datapath):
- # legacy from < 0.17.0
- # 8260
- expected = DataFrame(dict(A=Timestamp('20130102', tz='US/Eastern'),
- B=Timestamp('20130603', tz='CET')),
- index=range(5))
- with ensure_clean_store(
- datapath('io', 'data', 'legacy_hdf', 'datetimetz_object.h5'),
- mode='r') as store:
- result = store['df']
- assert_frame_equal(result, expected)
-
- def test_dst_transitions(self):
- # make sure we are not failing on transaitions
- with ensure_clean_store(self.path) as store:
- times = pd.date_range("2013-10-26 23:00", "2013-10-27 01:00",
- tz="Europe/London",
- freq="H",
- ambiguous='infer')
-
- for i in [times, times + pd.Timedelta('10min')]:
- _maybe_remove(store, 'df')
- df = DataFrame({'A': range(len(i)), 'B': i}, index=i)
- store.append('df', df)
- result = store.select('df')
- assert_frame_equal(result, df)
diff --git a/pandas/tests/io/pytables/test_timezones.py b/pandas/tests/io/pytables/test_timezones.py
new file mode 100644
index 0000000000000..e00000cdea8ed
--- /dev/null
+++ b/pandas/tests/io/pytables/test_timezones.py
@@ -0,0 +1,288 @@
+import datetime
+
+import numpy as np
+import pytest
+
+from pandas.compat import lrange, range
+import pandas.util._test_decorators as td
+
+import pandas as pd
+from pandas import DataFrame, DatetimeIndex, Series, Timestamp, date_range
+import pandas.util.testing as tm
+from pandas.util.testing import assert_frame_equal, set_timezone
+
+from .base import Base, ensure_clean_store, maybe_remove
+
+
+class TestTimezones(Base):
+
+ def _compare_with_tz(self, a, b):
+ tm.assert_frame_equal(a, b)
+
+ # compare the zones on each element
+ for c in a.columns:
+ for i in a.index:
+ a_e = a.loc[i, c]
+ b_e = b.loc[i, c]
+ if not (a_e == b_e and a_e.tz == b_e.tz):
+ raise AssertionError(
+ "invalid tz comparison [%s] [%s]" % (a_e, b_e))
+
+ def test_append_with_timezones_dateutil(self):
+
+ from datetime import timedelta
+
+ # use maybe_get_tz instead of dateutil.tz.gettz to handle the windows
+ # filename issues.
+ from pandas._libs.tslibs.timezones import maybe_get_tz
+ gettz = lambda x: maybe_get_tz('dateutil/' + x)
+
+ # as columns
+ with ensure_clean_store(self.path) as store:
+
+ maybe_remove(store, 'df_tz')
+ df = DataFrame(dict(A=[Timestamp('20130102 2:00:00', tz=gettz(
+ 'US/Eastern')) + timedelta(hours=1) * i for i in range(5)]))
+
+ store.append('df_tz', df, data_columns=['A'])
+ result = store['df_tz']
+ self._compare_with_tz(result, df)
+ assert_frame_equal(result, df)
+
+ # select with tz aware
+ expected = df[df.A >= df.A[3]]
+ result = store.select('df_tz', where='A>=df.A[3]')
+ self._compare_with_tz(result, expected)
+
+ # ensure we include dates in DST and STD time here.
+ maybe_remove(store, 'df_tz')
+ df = DataFrame(dict(A=Timestamp('20130102',
+ tz=gettz('US/Eastern')),
+ B=Timestamp('20130603',
+ tz=gettz('US/Eastern'))),
+ index=range(5))
+ store.append('df_tz', df)
+ result = store['df_tz']
+ self._compare_with_tz(result, df)
+ assert_frame_equal(result, df)
+
+ df = DataFrame(dict(A=Timestamp('20130102',
+ tz=gettz('US/Eastern')),
+ B=Timestamp('20130102', tz=gettz('EET'))),
+ index=range(5))
+ pytest.raises(ValueError, store.append, 'df_tz', df)
+
+ # this is ok
+ maybe_remove(store, 'df_tz')
+ store.append('df_tz', df, data_columns=['A', 'B'])
+ result = store['df_tz']
+ self._compare_with_tz(result, df)
+ assert_frame_equal(result, df)
+
+ # can't append with diff timezone
+ df = DataFrame(dict(A=Timestamp('20130102',
+ tz=gettz('US/Eastern')),
+ B=Timestamp('20130102', tz=gettz('CET'))),
+ index=range(5))
+ pytest.raises(ValueError, store.append, 'df_tz', df)
+
+ # as index
+ with ensure_clean_store(self.path) as store:
+
+ # GH 4098 example
+ df = DataFrame(dict(A=Series(lrange(3), index=date_range(
+ '2000-1-1', periods=3, freq='H', tz=gettz('US/Eastern')))))
+
+ maybe_remove(store, 'df')
+ store.put('df', df)
+ result = store.select('df')
+ assert_frame_equal(result, df)
+
+ maybe_remove(store, 'df')
+ store.append('df', df)
+ result = store.select('df')
+ assert_frame_equal(result, df)
+
+ def test_append_with_timezones_pytz(self):
+
+ from datetime import timedelta
+
+ # as columns
+ with ensure_clean_store(self.path) as store:
+
+ maybe_remove(store, 'df_tz')
+ df = DataFrame(dict(A=[Timestamp('20130102 2:00:00',
+ tz='US/Eastern') +
+ timedelta(hours=1) * i
+ for i in range(5)]))
+ store.append('df_tz', df, data_columns=['A'])
+ result = store['df_tz']
+ self._compare_with_tz(result, df)
+ assert_frame_equal(result, df)
+
+ # select with tz aware
+ self._compare_with_tz(store.select(
+ 'df_tz', where='A>=df.A[3]'), df[df.A >= df.A[3]])
+
+ maybe_remove(store, 'df_tz')
+ # ensure we include dates in DST and STD time here.
+ df = DataFrame(dict(A=Timestamp('20130102', tz='US/Eastern'),
+ B=Timestamp('20130603', tz='US/Eastern')),
+ index=range(5))
+ store.append('df_tz', df)
+ result = store['df_tz']
+ self._compare_with_tz(result, df)
+ assert_frame_equal(result, df)
+
+ df = DataFrame(dict(A=Timestamp('20130102', tz='US/Eastern'),
+ B=Timestamp('20130102', tz='EET')),
+ index=range(5))
+ pytest.raises(ValueError, store.append, 'df_tz', df)
+
+ # this is ok
+ maybe_remove(store, 'df_tz')
+ store.append('df_tz', df, data_columns=['A', 'B'])
+ result = store['df_tz']
+ self._compare_with_tz(result, df)
+ assert_frame_equal(result, df)
+
+ # can't append with diff timezone
+ df = DataFrame(dict(A=Timestamp('20130102', tz='US/Eastern'),
+ B=Timestamp('20130102', tz='CET')),
+ index=range(5))
+ pytest.raises(ValueError, store.append, 'df_tz', df)
+
+ # as index
+ with ensure_clean_store(self.path) as store:
+
+ # GH 4098 example
+ df = DataFrame(dict(A=Series(lrange(3), index=date_range(
+ '2000-1-1', periods=3, freq='H', tz='US/Eastern'))))
+
+ maybe_remove(store, 'df')
+ store.put('df', df)
+ result = store.select('df')
+ assert_frame_equal(result, df)
+
+ maybe_remove(store, 'df')
+ store.append('df', df)
+ result = store.select('df')
+ assert_frame_equal(result, df)
+
+ def test_tseries_select_index_column(self):
+ # GH7777
+ # selecting a UTC datetimeindex column did
+ # not preserve UTC tzinfo set before storing
+
+ # check that no tz still works
+ rng = date_range('1/1/2000', '1/30/2000')
+ frame = DataFrame(np.random.randn(len(rng), 4), index=rng)
+
+ with ensure_clean_store(self.path) as store:
+ store.append('frame', frame)
+ result = store.select_column('frame', 'index')
+ assert rng.tz == DatetimeIndex(result.values).tz
+
+ # check utc
+ rng = date_range('1/1/2000', '1/30/2000', tz='UTC')
+ frame = DataFrame(np.random.randn(len(rng), 4), index=rng)
+
+ with ensure_clean_store(self.path) as store:
+ store.append('frame', frame)
+ result = store.select_column('frame', 'index')
+ assert rng.tz == result.dt.tz
+
+ # double check non-utc
+ rng = date_range('1/1/2000', '1/30/2000', tz='US/Eastern')
+ frame = DataFrame(np.random.randn(len(rng), 4), index=rng)
+
+ with ensure_clean_store(self.path) as store:
+ store.append('frame', frame)
+ result = store.select_column('frame', 'index')
+ assert rng.tz == result.dt.tz
+
+ def test_timezones_fixed(self):
+ with ensure_clean_store(self.path) as store:
+
+ # index
+ rng = date_range('1/1/2000', '1/30/2000', tz='US/Eastern')
+ df = DataFrame(np.random.randn(len(rng), 4), index=rng)
+ store['df'] = df
+ result = store['df']
+ assert_frame_equal(result, df)
+
+ # as data
+ # GH11411
+ maybe_remove(store, 'df')
+ df = DataFrame({'A': rng,
+ 'B': rng.tz_convert('UTC').tz_localize(None),
+ 'C': rng.tz_convert('CET'),
+ 'D': range(len(rng))}, index=rng)
+ store['df'] = df
+ result = store['df']
+ assert_frame_equal(result, df)
+
+ def test_fixed_offset_tz(self):
+ rng = date_range('1/1/2000 00:00:00-07:00', '1/30/2000 00:00:00-07:00')
+ frame = DataFrame(np.random.randn(len(rng), 4), index=rng)
+
+ with ensure_clean_store(self.path) as store:
+ store['frame'] = frame
+ recons = store['frame']
+ tm.assert_index_equal(recons.index, rng)
+ assert rng.tz == recons.index.tz
+
+ @td.skip_if_windows
+ def test_store_timezone(self):
+ # GH2852
+ # issue storing datetime.date with a timezone as it resets when read
+ # back in a new timezone
+
+ # original method
+ with ensure_clean_store(self.path) as store:
+
+ today = datetime.date(2013, 9, 10)
+ df = DataFrame([1, 2, 3], index=[today, today, today])
+ store['obj1'] = df
+ result = store['obj1']
+ assert_frame_equal(result, df)
+
+ # with tz setting
+ with ensure_clean_store(self.path) as store:
+
+ with set_timezone('EST5EDT'):
+ today = datetime.date(2013, 9, 10)
+ df = DataFrame([1, 2, 3], index=[today, today, today])
+ store['obj1'] = df
+
+ with set_timezone('CST6CDT'):
+ result = store['obj1']
+
+ assert_frame_equal(result, df)
+
+ def test_legacy_datetimetz_object(self, datapath):
+ # legacy from < 0.17.0
+ # 8260
+ expected = DataFrame(dict(A=Timestamp('20130102', tz='US/Eastern'),
+ B=Timestamp('20130603', tz='CET')),
+ index=range(5))
+ with ensure_clean_store(
+ datapath('io', 'data', 'legacy_hdf', 'datetimetz_object.h5'),
+ mode='r') as store:
+ result = store['df']
+ assert_frame_equal(result, expected)
+
+ def test_dst_transitions(self):
+        # make sure we are not failing on transitions
+ with ensure_clean_store(self.path) as store:
+ times = pd.date_range("2013-10-26 23:00", "2013-10-27 01:00",
+ tz="Europe/London",
+ freq="H",
+ ambiguous='infer')
+
+ for i in [times, times + pd.Timedelta('10min')]:
+ maybe_remove(store, 'df')
+ df = DataFrame({'A': range(len(i)), 'B': i}, index=i)
+ store.append('df', df)
+ result = store.select('df')
+ assert_frame_equal(result, df)
| Fixes pandas-dev/pandas#18498
Split test_pytables.py into the following modules (see the sketch after this list):
- pytables/base.py
- pytables/test_complex_values.py
- pytables/test_pytables.py
- pytables/test_timezones.py
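
To make the new layout concrete: each split module pulls the shared fixtures from `pytables/base.py` instead of redefining them per file. A minimal sketch using only names that appear in the diff (the module and test names are invented for illustration):

```python
# pandas/tests/io/pytables/test_example.py  (hypothetical module)
import pandas.util.testing as tm

from .base import Base, ensure_clean_store, maybe_remove


class TestExample(Base):

    def test_roundtrip(self):
        df = tm.makeDataFrame()
        with ensure_clean_store(self.path) as store:
            # clear any leftovers from earlier tests using this key
            maybe_remove(store, 'df')
            store.put('df', df, format='table')
            tm.assert_frame_equal(store.select('df'), df)
```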
- [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
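
For a quick local check of the reorganized suite, something like the following should work (a hypothetical invocation; the `single` marker comes from the `@pytest.mark.single` decorator in the diff):

```python
# Equivalent to `pytest pandas/tests/io/pytables/ -m single` on the CLI.
import pytest

pytest.main(['pandas/tests/io/pytables/', '-m', 'single'])
```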
| https://api.github.com/repos/pandas-dev/pandas/pulls/25003 | 2019-01-29T14:39:46Z | 2019-01-29T14:40:11Z | null | 2019-01-29T14:40:11Z |
BUG: Fix broken links | diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md
index 21df1a3aacd59..faff68b636109 100644
--- a/.github/CONTRIBUTING.md
+++ b/.github/CONTRIBUTING.md
@@ -8,16 +8,16 @@ Our main contributing guide can be found [in this repo](https://github.com/panda
If you are looking to contribute to the *pandas* codebase, the best place to start is the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues). This is also a great place for filing bug reports and making suggestions for ways in which we can improve the code and documentation.
-If you have additional questions, feel free to ask them on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas). Further information can also be found in the "[Where to start?](https://github.com/pandas-dev/pandas/blob/master/doc/source/contributing.rst#where-to-start)" section.
+If you have additional questions, feel free to ask them on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas). Further information can also be found in the "[Where to start?](https://github.com/pandas-dev/pandas/blob/master/doc/source/development/contributing.rst#where-to-start)" section.
## Filing Issues
-If you notice a bug in the code or documentation, or have suggestions for how we can improve either, feel free to create an issue on the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) using [GitHub's "issue" form](https://github.com/pandas-dev/pandas/issues/new). The form contains some questions that will help us best address your issue. For more information regarding how to file issues against *pandas*, please refer to the "[Bug reports and enhancement requests](https://github.com/pandas-dev/pandas/blob/master/doc/source/contributing.rst#bug-reports-and-enhancement-requests)" section.
+If you notice a bug in the code or documentation, or have suggestions for how we can improve either, feel free to create an issue on the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) using [GitHub's "issue" form](https://github.com/pandas-dev/pandas/issues/new). The form contains some questions that will help us best address your issue. For more information regarding how to file issues against *pandas*, please refer to the "[Bug reports and enhancement requests](https://github.com/pandas-dev/pandas/blob/master/doc/source/development/contributing.rst#bug-reports-and-enhancement-requests)" section.
## Contributing to the Codebase
-The code is hosted on [GitHub](https://www.github.com/pandas-dev/pandas), so you will need to use [Git](http://git-scm.com/) to clone the project and make changes to the codebase. Once you have obtained a copy of the code, you should create a development environment that is separate from your existing Python environment so that you can make and test changes without compromising your own work environment. For more information, please refer to the "[Working with the code](https://github.com/pandas-dev/pandas/blob/master/doc/source/contributing.rst#working-with-the-code)" section.
+The code is hosted on [GitHub](https://www.github.com/pandas-dev/pandas), so you will need to use [Git](http://git-scm.com/) to clone the project and make changes to the codebase. Once you have obtained a copy of the code, you should create a development environment that is separate from your existing Python environment so that you can make and test changes without compromising your own work environment. For more information, please refer to the "[Working with the code](https://github.com/pandas-dev/pandas/blob/master/doc/source/development/contributing.rst#working-with-the-code)" section.
-Before submitting your changes for review, make sure to check that your changes do not break any tests. You can find more information about our test suites in the "[Test-driven development/code writing](https://github.com/pandas-dev/pandas/blob/master/doc/source/contributing.rst#test-driven-development-code-writing)" section. We also have guidelines regarding coding style that will be enforced during testing, which can be found in the "[Code standards](https://github.com/pandas-dev/pandas/blob/master/doc/source/contributing.rst#code-standards)" section.
+Before submitting your changes for review, make sure to check that your changes do not break any tests. You can find more information about our test suites in the "[Test-driven development/code writing](https://github.com/pandas-dev/pandas/blob/master/doc/source/development/contributing.rst#test-driven-development-code-writing)" section. We also have guidelines regarding coding style that will be enforced during testing, which can be found in the "[Code standards](https://github.com/pandas-dev/pandas/blob/master/doc/source/development/contributing.rst#code-standards)" section.
-Once your changes are ready to be submitted, make sure to push your changes to GitHub before creating a pull request. Details about how to do that can be found in the "[Contributing your changes to pandas](https://github.com/pandas-dev/pandas/blob/master/doc/source/contributing.rst#contributing-your-changes-to-pandas)" section. We will review your changes, and you will most likely be asked to make additional changes before it is finally ready to merge. However, once it's ready, we will merge it, and you will have successfully contributed to the codebase!
+Once your changes are ready to be submitted, make sure to push your changes to GitHub before creating a pull request. Details about how to do that can be found in the "[Contributing your changes to pandas](https://github.com/pandas-dev/pandas/blob/master/doc/source/development/contributing.rst#contributing-your-changes-to-pandas)" section. We will review your changes, and you will most likely be asked to make additional changes before it is finally ready to merge. However, once it's ready, we will merge it, and you will have successfully contributed to the codebase!
| The previous location of the contributing.rst file was
/doc/source/contributing.rst, but it has been moved to
/doc/source/development/contributing.rst | https://api.github.com/repos/pandas-dev/pandas/pulls/25002 | 2019-01-29T14:09:04Z | 2019-01-29T15:09:38Z | 2019-01-29T15:09:38Z | 2019-01-29T15:11:19Z |
BUG: on .to_string(index=False) | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
index 76ee21b4c9a50..0203392d9fc5a 100644
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -647,6 +647,7 @@ I/O
- Bug in :func:`DataFrame.to_html()` where values were truncated using display options instead of outputting the full content (:issue:`17004`)
- Fixed bug in missing text when using :meth:`to_clipboard` if copying utf-16 characters in Python 3 on Windows (:issue:`25040`)
+- Bug in :meth:`Series.to_string` adding a leading space when ``index=False`` (:issue:`24980`)
- Bug in :func:`read_json` for ``orient='table'`` when it tries to infer dtypes by default, which is not applicable as dtypes are already defined in the JSON schema (:issue:`21345`)
- Bug in :func:`read_json` for ``orient='table'`` and float index, as it infers index dtype by default, which is not applicable because index dtype is already defined in the JSON schema (:issue:`25433`)
- Bug in :func:`read_json` for ``orient='table'`` and string of float column names, as it makes a column name type conversion to :class:`Timestamp`, which is not applicable because column names are already defined in the JSON schema (:issue:`25435`)
@@ -671,6 +672,7 @@ I/O
- :func:`read_excel` now raises a ``ValueError`` when input is of type :class:`pandas.io.excel.ExcelFile` and ``engine`` param is passed since :class:`pandas.io.excel.ExcelFile` has an engine defined (:issue:`26566`)
- Bug while selecting from :class:`HDFStore` with ``where=''`` specified (:issue:`26610`).
+
Plotting
^^^^^^^^
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 903fd7ffe706a..fa36723324ea2 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -2859,9 +2859,9 @@ def to_latex(self, buf=None, columns=None, col_space=None, header=True,
... 'mask': ['red', 'purple'],
... 'weapon': ['sai', 'bo staff']})
>>> df.to_latex(index=False) # doctest: +NORMALIZE_WHITESPACE
- '\\begin{tabular}{lll}\n\\toprule\n name & mask & weapon
- \\\\\n\\midrule\n Raphael & red & sai \\\\\n Donatello &
- purple & bo staff \\\\\n\\bottomrule\n\\end{tabular}\n'
+ '\\begin{tabular}{lll}\n\\toprule\n name & mask & weapon
+ \\\\\n\\midrule\n Raphael & red & sai \\\\\nDonatello &
+ purple & bo staff \\\\\n\\bottomrule\n\\end{tabular}\n'
"""
# Get defaults from the pandas config
if self.ndim == 1:
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index f632bc13a5b24..83b7b03b7254b 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -247,8 +247,15 @@ def _get_formatted_index(self):
def _get_formatted_values(self):
values_to_format = self.tr_series._formatting_values()
+
+ if self.index:
+ leading_space = 'compat'
+ else:
+ leading_space = False
return format_array(values_to_format, None,
- float_format=self.float_format, na_rep=self.na_rep)
+ float_format=self.float_format,
+ na_rep=self.na_rep,
+ leading_space=leading_space)
def to_string(self):
series = self.tr_series
@@ -712,9 +719,15 @@ def _format_col(self, i):
frame = self.tr_frame
formatter = self._get_formatter(i)
values_to_format = frame.iloc[:, i]._formatting_values()
+
+ if self.index:
+ leading_space = 'compat'
+ else:
+ leading_space = False
return format_array(values_to_format, formatter,
float_format=self.float_format, na_rep=self.na_rep,
- space=self.col_space, decimal=self.decimal)
+ space=self.col_space, decimal=self.decimal,
+ leading_space=leading_space)
def to_html(self, classes=None, notebook=False, border=None):
"""
@@ -851,7 +864,7 @@ def _get_column_name_list(self):
def format_array(values, formatter, float_format=None, na_rep='NaN',
digits=None, space=None, justify='right', decimal='.',
- leading_space=None):
+ leading_space='compat'):
"""
Format an array for printing.
@@ -865,7 +878,7 @@ def format_array(values, formatter, float_format=None, na_rep='NaN',
space
justify
decimal
- leading_space : bool, optional
+    leading_space : bool or 'compat', default 'compat'
Whether the array should be formatted with a leading space.
When an array as a column of a Series or DataFrame, we do want
the leading space to pad between columns.
@@ -915,7 +928,7 @@ class GenericArrayFormatter:
def __init__(self, values, digits=7, formatter=None, na_rep='NaN',
space=12, float_format=None, justify='right', decimal='.',
- quoting=None, fixed_width=True, leading_space=None):
+ quoting=None, fixed_width=True, leading_space='compat'):
self.values = values
self.digits = digits
self.na_rep = na_rep
@@ -973,7 +986,7 @@ def _format(x):
is_float_type = lib.map_infer(vals, is_float) & notna(vals)
leading_space = self.leading_space
- if leading_space is None:
+ if leading_space == 'compat':
leading_space = is_float_type.any()
fmt_values = []
@@ -1101,7 +1114,11 @@ def format_values_with(float_format):
# The default is otherwise to use str instead of a formatting string
if self.float_format is None:
if self.fixed_width:
- float_format = partial('{value: .{digits:d}f}'.format,
+ if self.leading_space is not False:
+ fmt_str = '{value: .{digits:d}f}'
+ else:
+ fmt_str = '{value:.{digits:d}f}'
+ float_format = partial(fmt_str.format,
digits=self.digits)
else:
float_format = self.float_format
@@ -1133,7 +1150,11 @@ def format_values_with(float_format):
(abs_vals > 0)).any()
if has_small_values or (too_long and has_large_values):
- float_format = partial('{value: .{digits:d}e}'.format,
+ if self.leading_space is not False:
+ fmt_str = '{value: .{digits:d}e}'
+ else:
+ fmt_str = '{value:.{digits:d}e}'
+ float_format = partial(fmt_str.format,
digits=self.digits)
formatted_values = format_values_with(float_format)
@@ -1150,7 +1171,11 @@ def _format_strings(self):
class IntArrayFormatter(GenericArrayFormatter):
def _format_strings(self):
- formatter = self.formatter or (lambda x: '{x: d}'.format(x=x))
+ if self.leading_space is False:
+ fmt_str = '{x:d}'
+ else:
+ fmt_str = '{x: d}'
+ formatter = self.formatter or (lambda x: fmt_str.format(x=x))
fmt_values = [formatter(x) for x in self.values]
return fmt_values
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index edb7c2136825d..5c63be92b9226 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -1232,15 +1232,15 @@ def test_to_string_no_index(self):
df_s = df.to_string(index=False)
# Leading space is expected for positive numbers.
- expected = (" x y z\n"
- " 11 33 AAA\n"
- " 22 -44 ")
+ expected = (" x y z\n"
+ "11 33 AAA\n"
+ "22 -44 ")
assert df_s == expected
df_s = df[['y', 'x', 'z']].to_string(index=False)
- expected = (" y x z\n"
- " 33 11 AAA\n"
- "-44 22 ")
+ expected = (" y x z\n"
+ " 33 11 AAA\n"
+ "-44 22 ")
assert df_s == expected
def test_to_string_line_width_no_index(self):
@@ -1255,7 +1255,7 @@ def test_to_string_line_width_no_index(self):
df = DataFrame({'x': [11, 22, 33], 'y': [4, 5, 6]})
df_s = df.to_string(line_width=1, index=False)
- expected = " x \\\n 11 \n 22 \n 33 \n\n y \n 4 \n 5 \n 6 "
+ expected = " x \\\n11 \n22 \n33 \n\n y \n 4 \n 5 \n 6 "
assert df_s == expected
@@ -1844,7 +1844,7 @@ def test_to_string_without_index(self):
# GH 11729 Test index=False option
s = Series([1, 2, 3, 4])
result = s.to_string(index=False)
- expected = (' 1\n' + ' 2\n' + ' 3\n' + ' 4')
+ expected = ('1\n' + '2\n' + '3\n' + '4')
assert result == expected
def test_unicode_name_in_footer(self):
@@ -2332,6 +2332,15 @@ def test_to_string_header(self):
exp = '0 0\n ..\n9 9'
assert res == exp
+ @pytest.mark.parametrize("inputs, expected", [
+ ([' a', ' b'], ' a\n b'),
+ (['.1', '1'], '.1\n 1'),
+ (['10', '-10'], ' 10\n-10')
+ ])
+ def test_to_string_index_false_corner_case(self, inputs, expected):
+ s = pd.Series(inputs).to_string(index=False)
+ assert s == expected
+
def test_to_string_multindex_header(self):
# GH 16718
df = (pd.DataFrame({'a': [0], 'b': [1], 'c': [2], 'd': [3]})
@@ -2740,6 +2749,31 @@ def test_format_percentiles():
fmt.format_percentiles([0.1, 0.5, 'a'])
+@pytest.mark.parametrize("input_array, expected", [
+ ("a", "a"),
+ (["a", "b"], "a\nb"),
+ ([1, "a"], "1\na"),
+ (1, "1"),
+ ([0, -1], " 0\n-1"),
+ (1.0, '1.0')
+])
+def test_format_remove_leading_space_series(input_array, expected):
+ # GH: 24980
+ s = pd.Series(input_array).to_string(index=False)
+ assert s == expected
+
+
+@pytest.mark.parametrize("input_array, expected", [
+ ({"A": ["a"]}, "A\na"),
+ ({"A": ["a", "b"], "B": ["c", "dd"]}, "A B\na c\nb dd"),
+ ({"A": ["a", 1], "B": ["aa", 1]}, "A B\na aa\n1 1")
+])
+def test_format_remove_leading_space_dataframe(input_array, expected):
+ # GH: 24980
+ df = pd.DataFrame(input_array).to_string(index=False)
+ assert df == expected
+
+
def test_format_percentiles_integer_idx():
# Issue #26660
result = fmt.format_percentiles(np.linspace(0, 1, 10 + 1))
diff --git a/pandas/tests/io/formats/test_to_latex.py b/pandas/tests/io/formats/test_to_latex.py
index b9f28ec36d021..0d2e12c051725 100644
--- a/pandas/tests/io/formats/test_to_latex.py
+++ b/pandas/tests/io/formats/test_to_latex.py
@@ -51,10 +51,10 @@ def test_to_latex(self, float_frame):
withoutindex_result = df.to_latex(index=False)
withoutindex_expected = r"""\begin{tabular}{rl}
\toprule
- a & b \\
+ a & b \\
\midrule
- 1 & b1 \\
- 2 & b2 \\
+ 1 & b1 \\
+ 2 & b2 \\
\bottomrule
\end{tabular}
"""
@@ -410,7 +410,7 @@ def test_to_latex_longtable(self, float_frame):
withoutindex_result = df.to_latex(index=False, longtable=True)
withoutindex_expected = r"""\begin{longtable}{rl}
\toprule
- a & b \\
+ a & b \\
\midrule
\endhead
\midrule
@@ -420,8 +420,8 @@ def test_to_latex_longtable(self, float_frame):
\bottomrule
\endlastfoot
- 1 & b1 \\
- 2 & b2 \\
+ 1 & b1 \\
+ 2 & b2 \\
\end{longtable}
"""
@@ -477,8 +477,8 @@ def test_to_latex_no_header(self):
withoutindex_result = df.to_latex(index=False, header=False)
withoutindex_expected = r"""\begin{tabular}{rl}
\toprule
- 1 & b1 \\
- 2 & b2 \\
+1 & b1 \\
+2 & b2 \\
\bottomrule
\end{tabular}
"""
@@ -504,10 +504,10 @@ def test_to_latex_specified_header(self):
withoutindex_result = df.to_latex(header=['AA', 'BB'], index=False)
withoutindex_expected = r"""\begin{tabular}{rl}
\toprule
-AA & BB \\
+AA & BB \\
\midrule
- 1 & b1 \\
- 2 & b2 \\
+ 1 & b1 \\
+ 2 & b2 \\
\bottomrule
\end{tabular}
"""
| - [ ] closes #24980
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
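
For illustration, the behavior proposed here, inferred from the updated tests above (a sketch, not part of the diff):

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4])
# before this change: ' 1\n 2\n 3\n 4' (a leading pad space on every line)
print(s.to_string(index=False))   # proposed: '1\n2\n3\n4'

df = pd.DataFrame({'x': [11, 22], 'y': [33, -44]})
# a space is kept only where needed to align negative signs
print(df.to_string(index=False))  # proposed: ' x   y\n11  33\n22 -44'
```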
| https://api.github.com/repos/pandas-dev/pandas/pulls/25000 | 2019-01-29T13:37:31Z | 2019-07-15T01:08:55Z | null | 2019-07-15T01:08:55Z |
DOC: move whatsnew note of #24916 | diff --git a/doc/source/whatsnew/v0.24.1.rst b/doc/source/whatsnew/v0.24.1.rst
index 78eba1fe5d025..85a2ba5bb03b6 100644
--- a/doc/source/whatsnew/v0.24.1.rst
+++ b/doc/source/whatsnew/v0.24.1.rst
@@ -72,7 +72,8 @@ Bug Fixes
**Reshaping**
-- Bug in :func:`merge` when merging by index name would sometimes result in an incorrectly numbered index (:issue:`24212`)
+-
+-
**Visualization**
diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
index 5129449e4fdf3..d0ddb6e09d555 100644
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -179,7 +179,7 @@ Groupby/Resample/Rolling
Reshaping
^^^^^^^^^
--
+- Bug in :func:`merge` when merging by index name would sometimes result in an incorrectly numbered index (:issue:`24212`)
-
-
| See https://github.com/pandas-dev/pandas/pull/24951#issuecomment-458146101 | https://api.github.com/repos/pandas-dev/pandas/pulls/24999 | 2019-01-29T13:24:40Z | 2019-01-29T13:24:53Z | 2019-01-29T13:24:53Z | 2019-01-29T13:27:04Z |
DOC: move whatsnew | diff --git a/doc/source/whatsnew/v0.24.1.rst b/doc/source/whatsnew/v0.24.1.rst
index 78eba1fe5d025..08d184f9df416 100644
--- a/doc/source/whatsnew/v0.24.1.rst
+++ b/doc/source/whatsnew/v0.24.1.rst
@@ -70,10 +70,6 @@ Bug Fixes
-
-
-**Reshaping**
-
-- Bug in :func:`merge` when merging by index name would sometimes result in an incorrectly numbered index (:issue:`24212`)
-
**Visualization**
- Fixed the warning for implicitly registered matplotlib converters not showing. See :ref:`whatsnew_0211.converters` for more (:issue:`24963`).
diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
index 5129449e4fdf3..d0ddb6e09d555 100644
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -179,7 +179,7 @@ Groupby/Resample/Rolling
Reshaping
^^^^^^^^^
--
+- Bug in :func:`merge` when merging by index name would sometimes result in an incorrectly numbered index (:issue:`24212`)
-
-
| https://github.com/pandas-dev/pandas/pull/24951#issuecomment-458532222 | https://api.github.com/repos/pandas-dev/pandas/pulls/24998 | 2019-01-29T13:22:36Z | 2019-01-29T13:26:05Z | null | 2019-01-29T14:38:23Z |
Backport PR #24964 on branch 0.24.x (DEPR: Fixed warning for implicit registration) | diff --git a/doc/source/whatsnew/v0.24.1.rst b/doc/source/whatsnew/v0.24.1.rst
index 16044fb00d4f5..7647e199030d2 100644
--- a/doc/source/whatsnew/v0.24.1.rst
+++ b/doc/source/whatsnew/v0.24.1.rst
@@ -71,6 +71,11 @@ Bug Fixes
-
+**Visualization**
+
+- Fixed the warning for implicitly registered matplotlib converters not showing. See :ref:`whatsnew_0211.converters` for more (:issue:`24963`).
+
+
**Other**
-
diff --git a/pandas/plotting/_core.py b/pandas/plotting/_core.py
index e543ab88f53b2..85549bafa8dc0 100644
--- a/pandas/plotting/_core.py
+++ b/pandas/plotting/_core.py
@@ -39,7 +39,7 @@
else:
_HAS_MPL = True
if get_option('plotting.matplotlib.register_converters'):
- _converter.register(explicit=True)
+ _converter.register(explicit=False)
def _raise_if_no_mpl():
| Backport PR #24964: DEPR: Fixed warning for implicit registration | https://api.github.com/repos/pandas-dev/pandas/pulls/24997 | 2019-01-29T12:23:51Z | 2019-01-29T13:26:30Z | 2019-01-29T13:26:30Z | 2019-01-29T13:26:30Z |
Backport PR #24989 on branch 0.24.x (DOC: Document breaking change to read_csv) | diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index 58e1b2370c7c8..b23a0f10e9e2b 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -989,6 +989,36 @@ a single date rather than the entire array.
os.remove('tmp.csv')
+
+.. _io.csv.mixed_timezones:
+
+Parsing a CSV with mixed Timezones
+++++++++++++++++++++++++++++++++++
+
+Pandas cannot natively represent a column or index with mixed timezones. If your CSV
+file contains columns with a mixture of timezones, the default result will be
+an object-dtype column with strings, even with ``parse_dates``.
+
+
+.. ipython:: python
+
+ content = """\
+ a
+ 2000-01-01T00:00:00+05:00
+ 2000-01-01T00:00:00+06:00"""
+ df = pd.read_csv(StringIO(content), parse_dates=['a'])
+ df['a']
+
+To parse the mixed-timezone values as a datetime column, pass a partially-applied
+:func:`to_datetime` with ``utc=True`` as the ``date_parser``.
+
+.. ipython:: python
+
+ df = pd.read_csv(StringIO(content), parse_dates=['a'],
+ date_parser=lambda col: pd.to_datetime(col, utc=True))
+ df['a']
+
+
.. _io.dayfirst:
diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 16319a3b83ca4..a49ea2cf493a6 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -648,6 +648,52 @@ that the dates have been converted to UTC
pd.to_datetime(["2015-11-18 15:30:00+05:30",
"2015-11-18 16:30:00+06:30"], utc=True)
+
+.. _whatsnew_0240.api_breaking.read_csv_mixed_tz:
+
+Parsing mixed-timezones with :func:`read_csv`
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+:func:`read_csv` no longer silently converts mixed-timezone columns to UTC (:issue:`24987`).
+
+*Previous Behavior*
+
+.. code-block:: python
+
+ >>> import io
+ >>> content = """\
+ ... a
+ ... 2000-01-01T00:00:00+05:00
+ ... 2000-01-01T00:00:00+06:00"""
+ >>> df = pd.read_csv(io.StringIO(content), parse_dates=['a'])
+ >>> df.a
+ 0 1999-12-31 19:00:00
+ 1 1999-12-31 18:00:00
+ Name: a, dtype: datetime64[ns]
+
+*New Behavior*
+
+.. ipython:: python
+
+ import io
+ content = """\
+ a
+ 2000-01-01T00:00:00+05:00
+ 2000-01-01T00:00:00+06:00"""
+ df = pd.read_csv(io.StringIO(content), parse_dates=['a'])
+ df.a
+
+As can be seen, the ``dtype`` is object; each value in the column is a string.
+To convert the strings to an array of datetimes, use the ``date_parser`` argument:
+
+.. ipython:: python
+
+ df = pd.read_csv(io.StringIO(content), parse_dates=['a'],
+ date_parser=lambda col: pd.to_datetime(col, utc=True))
+ df.a
+
+See :ref:`whatsnew_0240.api.timezone_offset_parsing` for more.
+
.. _whatsnew_0240.api_breaking.period_end_time:
Time values in ``dt.end_time`` and ``to_timestamp(how='end')``
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index b31d3f665f47f..4163a571df800 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -203,9 +203,14 @@
* dict, e.g. {{'foo' : [1, 3]}} -> parse columns 1, 3 as date and call
result 'foo'
- If a column or index contains an unparseable date, the entire column or
- index will be returned unaltered as an object data type. For non-standard
- datetime parsing, use ``pd.to_datetime`` after ``pd.read_csv``
+ If a column or index cannot be represented as an array of datetimes,
+ say because of an unparseable value or a mixture of timezones, the column
+ or index will be returned unaltered as an object data type. For
+ non-standard datetime parsing, use ``pd.to_datetime`` after
+ ``pd.read_csv``. To parse an index or column with a mixture of timezones,
+ specify ``date_parser`` to be a partially-applied
+ :func:`pandas.to_datetime` with ``utc=True``. See
+ :ref:`io.csv.mixed_timezones` for more.
Note: A fast-path exists for iso8601-formatted dates.
infer_datetime_format : bool, default False
| Backport PR #24989: DOC: Document breaking change to read_csv | https://api.github.com/repos/pandas-dev/pandas/pulls/24996 | 2019-01-29T12:23:41Z | 2019-01-29T13:26:17Z | 2019-01-29T13:26:17Z | 2019-01-29T13:26:17Z |
STY: use pytest.raises context manager (indexes/datetimes) | diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index d7a8417a71be2..f9aef2401a4d3 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -2058,7 +2058,7 @@ def validate_tz_from_dtype(dtype, tz):
# tz-naive dtype (i.e. datetime64[ns])
if tz is not None and not timezones.tz_compare(tz, dtz):
raise ValueError("cannot supply both a tz and a "
- "timezone-naive dtype (i.e. datetime64[ns]")
+ "timezone-naive dtype (i.e. datetime64[ns])")
return tz
diff --git a/pandas/tests/indexes/common.py b/pandas/tests/indexes/common.py
index 499f01f0e7f7b..d72dccadf0ac0 100644
--- a/pandas/tests/indexes/common.py
+++ b/pandas/tests/indexes/common.py
@@ -30,7 +30,12 @@ def setup_indices(self):
def test_pickle_compat_construction(self):
# need an object to create with
- pytest.raises(TypeError, self._holder)
+ msg = (r"Index\(\.\.\.\) must be called with a collection of some"
+ r" kind, None was passed|"
+ r"__new__\(\) missing 1 required positional argument: 'data'|"
+ r"__new__\(\) takes at least 2 arguments \(1 given\)")
+ with pytest.raises(TypeError, match=msg):
+ self._holder()
def test_to_series(self):
# assert that we are creating a copy of the index
@@ -84,8 +89,11 @@ def test_shift(self):
# GH8083 test the base class for shift
idx = self.create_index()
- pytest.raises(NotImplementedError, idx.shift, 1)
- pytest.raises(NotImplementedError, idx.shift, 1, 2)
+ msg = "Not supported for type {}".format(type(idx).__name__)
+ with pytest.raises(NotImplementedError, match=msg):
+ idx.shift(1)
+ with pytest.raises(NotImplementedError, match=msg):
+ idx.shift(1, 2)
def test_create_index_existing_name(self):
diff --git a/pandas/tests/indexes/datetimes/test_construction.py b/pandas/tests/indexes/datetimes/test_construction.py
index 7ebebbf6dee28..6893f635c82ac 100644
--- a/pandas/tests/indexes/datetimes/test_construction.py
+++ b/pandas/tests/indexes/datetimes/test_construction.py
@@ -135,8 +135,10 @@ def test_construction_with_alt_tz_localize(self, kwargs, tz_aware_fixture):
tm.assert_index_equal(i2, expected)
# incompat tz/dtype
- pytest.raises(ValueError, lambda: DatetimeIndex(
- i.tz_localize(None).asi8, dtype=i.dtype, tz='US/Pacific'))
+ msg = "cannot supply both a tz and a dtype with a tz"
+ with pytest.raises(ValueError, match=msg):
+ DatetimeIndex(i.tz_localize(None).asi8,
+ dtype=i.dtype, tz='US/Pacific')
def test_construction_index_with_mixed_timezones(self):
# gh-11488: no tz results in DatetimeIndex
@@ -439,14 +441,19 @@ def test_constructor_coverage(self):
tm.assert_index_equal(from_ints, expected)
# non-conforming
- pytest.raises(ValueError, DatetimeIndex,
- ['2000-01-01', '2000-01-02', '2000-01-04'], freq='D')
+ msg = ("Inferred frequency None from passed values does not conform"
+ " to passed frequency D")
+ with pytest.raises(ValueError, match=msg):
+ DatetimeIndex(['2000-01-01', '2000-01-02', '2000-01-04'], freq='D')
- pytest.raises(ValueError, date_range, start='2011-01-01',
- freq='b')
- pytest.raises(ValueError, date_range, end='2011-01-01',
- freq='B')
- pytest.raises(ValueError, date_range, periods=10, freq='D')
+ msg = ("Of the four parameters: start, end, periods, and freq, exactly"
+ " three must be specified")
+ with pytest.raises(ValueError, match=msg):
+ date_range(start='2011-01-01', freq='b')
+ with pytest.raises(ValueError, match=msg):
+ date_range(end='2011-01-01', freq='B')
+ with pytest.raises(ValueError, match=msg):
+ date_range(periods=10, freq='D')
@pytest.mark.parametrize('freq', ['AS', 'W-SUN'])
def test_constructor_datetime64_tzformat(self, freq):
@@ -511,18 +518,20 @@ def test_constructor_dtype(self):
idx = DatetimeIndex(['2013-01-01', '2013-01-02'],
dtype='datetime64[ns, US/Eastern]')
- pytest.raises(ValueError,
- lambda: DatetimeIndex(idx,
- dtype='datetime64[ns]'))
+ msg = ("cannot supply both a tz and a timezone-naive dtype"
+ r" \(i\.e\. datetime64\[ns\]\)")
+ with pytest.raises(ValueError, match=msg):
+ DatetimeIndex(idx, dtype='datetime64[ns]')
# this is effectively trying to convert tz's
- pytest.raises(TypeError,
- lambda: DatetimeIndex(idx,
- dtype='datetime64[ns, CET]'))
- pytest.raises(ValueError,
- lambda: DatetimeIndex(
- idx, tz='CET',
- dtype='datetime64[ns, US/Eastern]'))
+ msg = ("data is already tz-aware US/Eastern, unable to set specified"
+ " tz: CET")
+ with pytest.raises(TypeError, match=msg):
+ DatetimeIndex(idx, dtype='datetime64[ns, CET]')
+ msg = "cannot supply both a tz and a dtype with a tz"
+ with pytest.raises(ValueError, match=msg):
+ DatetimeIndex(idx, tz='CET', dtype='datetime64[ns, US/Eastern]')
+
result = DatetimeIndex(idx, dtype='datetime64[ns, US/Eastern]')
tm.assert_index_equal(idx, result)
@@ -732,7 +741,9 @@ def test_from_freq_recreate_from_data(self, freq):
def test_datetimeindex_constructor_misc(self):
arr = ['1/1/2005', '1/2/2005', 'Jn 3, 2005', '2005-01-04']
- pytest.raises(Exception, DatetimeIndex, arr)
+ msg = r"(\(u?')?Unknown string format(:', 'Jn 3, 2005'\))?"
+ with pytest.raises(ValueError, match=msg):
+ DatetimeIndex(arr)
arr = ['1/1/2005', '1/2/2005', '1/3/2005', '2005-01-04']
idx1 = DatetimeIndex(arr)
diff --git a/pandas/tests/indexes/datetimes/test_date_range.py b/pandas/tests/indexes/datetimes/test_date_range.py
index a9bece248e9d0..a38ee264d362c 100644
--- a/pandas/tests/indexes/datetimes/test_date_range.py
+++ b/pandas/tests/indexes/datetimes/test_date_range.py
@@ -346,8 +346,10 @@ def test_compat_replace(self, f):
def test_catch_infinite_loop(self):
offset = offsets.DateOffset(minute=5)
# blow up, don't loop forever
- pytest.raises(Exception, date_range, datetime(2011, 11, 11),
- datetime(2011, 11, 12), freq=offset)
+ msg = "Offset <DateOffset: minute=5> did not increment date"
+ with pytest.raises(ValueError, match=msg):
+ date_range(datetime(2011, 11, 11), datetime(2011, 11, 12),
+ freq=offset)
@pytest.mark.parametrize('periods', (1, 2))
def test_wom_len(self, periods):
diff --git a/pandas/tests/indexes/datetimes/test_misc.py b/pandas/tests/indexes/datetimes/test_misc.py
index cec181161fc11..fc6080e68a803 100644
--- a/pandas/tests/indexes/datetimes/test_misc.py
+++ b/pandas/tests/indexes/datetimes/test_misc.py
@@ -190,7 +190,9 @@ def test_datetimeindex_accessors(self):
# Ensure is_start/end accessors throw ValueError for CustomBusinessDay,
bday_egypt = offsets.CustomBusinessDay(weekmask='Sun Mon Tue Wed Thu')
dti = date_range(datetime(2013, 4, 30), periods=5, freq=bday_egypt)
- pytest.raises(ValueError, lambda: dti.is_month_start)
+ msg = "Custom business days is not supported by is_month_start"
+ with pytest.raises(ValueError, match=msg):
+ dti.is_month_start
dti = DatetimeIndex(['2000-01-01', '2000-01-02', '2000-01-03'])
diff --git a/pandas/tests/indexes/datetimes/test_ops.py b/pandas/tests/indexes/datetimes/test_ops.py
index 2a546af79931e..84085141fcf92 100644
--- a/pandas/tests/indexes/datetimes/test_ops.py
+++ b/pandas/tests/indexes/datetimes/test_ops.py
@@ -37,15 +37,19 @@ def test_ops_properties_basic(self):
# sanity check that the behavior didn't change
# GH#7206
+ msg = "'Series' object has no attribute '{}'"
for op in ['year', 'day', 'second', 'weekday']:
- pytest.raises(TypeError, lambda x: getattr(self.dt_series, op))
+ with pytest.raises(AttributeError, match=msg.format(op)):
+ getattr(self.dt_series, op)
# attribute access should still work!
s = Series(dict(year=2000, month=1, day=10))
assert s.year == 2000
assert s.month == 1
assert s.day == 10
- pytest.raises(AttributeError, lambda: s.weekday)
+ msg = "'Series' object has no attribute 'weekday'"
+ with pytest.raises(AttributeError, match=msg):
+ s.weekday
def test_repeat_range(self, tz_naive_fixture):
tz = tz_naive_fixture
diff --git a/pandas/tests/indexes/datetimes/test_partial_slicing.py b/pandas/tests/indexes/datetimes/test_partial_slicing.py
index 1b2aab9d370a3..a0c9d9f02385c 100644
--- a/pandas/tests/indexes/datetimes/test_partial_slicing.py
+++ b/pandas/tests/indexes/datetimes/test_partial_slicing.py
@@ -170,7 +170,8 @@ def test_partial_slice(self):
result = s['2005-1-1']
assert result == s.iloc[0]
- pytest.raises(Exception, s.__getitem__, '2004-12-31')
+ with pytest.raises(KeyError, match=r"^'2004-12-31'$"):
+ s['2004-12-31']
def test_partial_slice_daily(self):
rng = date_range(freq='H', start=datetime(2005, 1, 31), periods=500)
@@ -179,7 +180,8 @@ def test_partial_slice_daily(self):
result = s['2005-1-31']
tm.assert_series_equal(result, s.iloc[:24])
- pytest.raises(Exception, s.__getitem__, '2004-12-31 00')
+ with pytest.raises(KeyError, match=r"^'2004-12-31 00'$"):
+ s['2004-12-31 00']
def test_partial_slice_hourly(self):
rng = date_range(freq='T', start=datetime(2005, 1, 1, 20, 0, 0),
@@ -193,7 +195,8 @@ def test_partial_slice_hourly(self):
tm.assert_series_equal(result, s.iloc[:60])
assert s['2005-1-1 20:00'] == s.iloc[0]
- pytest.raises(Exception, s.__getitem__, '2004-12-31 00:15')
+ with pytest.raises(KeyError, match=r"^'2004-12-31 00:15'$"):
+ s['2004-12-31 00:15']
def test_partial_slice_minutely(self):
rng = date_range(freq='S', start=datetime(2005, 1, 1, 23, 59, 0),
@@ -207,7 +210,8 @@ def test_partial_slice_minutely(self):
tm.assert_series_equal(result, s.iloc[:60])
assert s[Timestamp('2005-1-1 23:59:00')] == s.iloc[0]
- pytest.raises(Exception, s.__getitem__, '2004-12-31 00:00:00')
+ with pytest.raises(KeyError, match=r"^'2004-12-31 00:00:00'$"):
+ s['2004-12-31 00:00:00']
def test_partial_slice_second_precision(self):
rng = date_range(start=datetime(2005, 1, 1, 0, 0, 59,
@@ -255,7 +259,9 @@ def test_partial_slicing_dataframe(self):
result = df['a'][ts_string]
assert isinstance(result, np.int64)
assert result == expected
- pytest.raises(KeyError, df.__getitem__, ts_string)
+ msg = r"^'{}'$".format(ts_string)
+ with pytest.raises(KeyError, match=msg):
+ df[ts_string]
# Timestamp with resolution less precise than index
for fmt in formats[:rnum]:
@@ -282,15 +288,20 @@ def test_partial_slicing_dataframe(self):
result = df['a'][ts_string]
assert isinstance(result, np.int64)
assert result == 2
- pytest.raises(KeyError, df.__getitem__, ts_string)
+ msg = r"^'{}'$".format(ts_string)
+ with pytest.raises(KeyError, match=msg):
+ df[ts_string]
# Not compatible with existing key
# Should raise KeyError
for fmt, res in list(zip(formats, resolutions))[rnum + 1:]:
ts = index[1] + Timedelta("1 " + res)
ts_string = ts.strftime(fmt)
- pytest.raises(KeyError, df['a'].__getitem__, ts_string)
- pytest.raises(KeyError, df.__getitem__, ts_string)
+ msg = r"^'{}'$".format(ts_string)
+ with pytest.raises(KeyError, match=msg):
+ df['a'][ts_string]
+ with pytest.raises(KeyError, match=msg):
+ df[ts_string]
def test_partial_slicing_with_multiindex(self):
@@ -316,11 +327,10 @@ def test_partial_slicing_with_multiindex(self):
# this is an IndexingError as we don't do partial string selection on
# multi-levels.
- def f():
+ msg = "Too many indexers"
+ with pytest.raises(IndexingError, match=msg):
df_multi.loc[('2013-06-19', 'ACCT1', 'ABC')]
- pytest.raises(IndexingError, f)
-
# GH 4294
# partial slice on a series mi
s = pd.DataFrame(np.random.rand(1000, 1000), index=pd.date_range(
diff --git a/pandas/tests/indexes/datetimes/test_scalar_compat.py b/pandas/tests/indexes/datetimes/test_scalar_compat.py
index 680eddd27cf9f..42338a751e0fc 100644
--- a/pandas/tests/indexes/datetimes/test_scalar_compat.py
+++ b/pandas/tests/indexes/datetimes/test_scalar_compat.py
@@ -7,6 +7,8 @@
import numpy as np
import pytest
+from pandas._libs.tslibs.np_datetime import OutOfBoundsDatetime
+
import pandas as pd
from pandas import DatetimeIndex, Timestamp, date_range
import pandas.util.testing as tm
@@ -27,10 +29,14 @@ def test_dti_date(self):
expected = [t.date() for t in rng]
assert (result == expected).all()
- def test_dti_date_out_of_range(self):
+ @pytest.mark.parametrize('data', [
+ ['1400-01-01'],
+ [datetime(1400, 1, 1)]])
+ def test_dti_date_out_of_range(self, data):
# GH#1475
- pytest.raises(ValueError, DatetimeIndex, ['1400-01-01'])
- pytest.raises(ValueError, DatetimeIndex, [datetime(1400, 1, 1)])
+ msg = "Out of bounds nanosecond timestamp: 1400-01-01 00:00:00"
+ with pytest.raises(OutOfBoundsDatetime, match=msg):
+ DatetimeIndex(data)
@pytest.mark.parametrize('field', [
'dayofweek', 'dayofyear', 'week', 'weekofyear', 'quarter',
@@ -74,9 +80,15 @@ def test_round_daily(self):
result = dti.round('s')
tm.assert_index_equal(result, dti)
- # invalid
- for freq in ['Y', 'M', 'foobar']:
- pytest.raises(ValueError, lambda: dti.round(freq))
+ @pytest.mark.parametrize('freq, error_msg', [
+ ('Y', '<YearEnd: month=12> is a non-fixed frequency'),
+ ('M', '<MonthEnd> is a non-fixed frequency'),
+ ('foobar', 'Invalid frequency: foobar')])
+ def test_round_invalid(self, freq, error_msg):
+ dti = date_range('20130101 09:10:11', periods=5)
+ dti = dti.tz_localize('UTC').tz_convert('US/Eastern')
+ with pytest.raises(ValueError, match=error_msg):
+ dti.round(freq)
def test_round(self, tz_naive_fixture):
tz = tz_naive_fixture
diff --git a/pandas/tests/indexes/datetimes/test_tools.py b/pandas/tests/indexes/datetimes/test_tools.py
index bec2fa66c43cd..38f5eab15041f 100644
--- a/pandas/tests/indexes/datetimes/test_tools.py
+++ b/pandas/tests/indexes/datetimes/test_tools.py
@@ -346,12 +346,16 @@ def test_to_datetime_dt64s(self, cache):
for dt in in_bound_dts:
assert pd.to_datetime(dt, cache=cache) == Timestamp(dt)
- oob_dts = [np.datetime64('1000-01-01'), np.datetime64('5000-01-02'), ]
-
- for dt in oob_dts:
- pytest.raises(ValueError, pd.to_datetime, dt, errors='raise')
- pytest.raises(ValueError, Timestamp, dt)
- assert pd.to_datetime(dt, errors='coerce', cache=cache) is NaT
+ @pytest.mark.parametrize('dt', [np.datetime64('1000-01-01'),
+ np.datetime64('5000-01-02')])
+ @pytest.mark.parametrize('cache', [True, False])
+ def test_to_datetime_dt64s_out_of_bounds(self, cache, dt):
+ msg = "Out of bounds nanosecond timestamp: {}".format(dt)
+ with pytest.raises(OutOfBoundsDatetime, match=msg):
+ pd.to_datetime(dt, errors='raise')
+ with pytest.raises(OutOfBoundsDatetime, match=msg):
+ Timestamp(dt)
+ assert pd.to_datetime(dt, errors='coerce', cache=cache) is NaT
@pytest.mark.parametrize('cache', [True, False])
def test_to_datetime_array_of_dt64s(self, cache):
@@ -367,8 +371,9 @@ def test_to_datetime_array_of_dt64s(self, cache):
# A list of datetimes where the last one is out of bounds
dts_with_oob = dts + [np.datetime64('9999-01-01')]
- pytest.raises(ValueError, pd.to_datetime, dts_with_oob,
- errors='raise')
+ msg = "Out of bounds nanosecond timestamp: 9999-01-01 00:00:00"
+ with pytest.raises(OutOfBoundsDatetime, match=msg):
+ pd.to_datetime(dts_with_oob, errors='raise')
tm.assert_numpy_array_equal(
pd.to_datetime(dts_with_oob, box=False, errors='coerce',
@@ -410,7 +415,10 @@ def test_to_datetime_tz(self, cache):
# mixed tzs will raise
arr = [pd.Timestamp('2013-01-01 13:00:00', tz='US/Pacific'),
pd.Timestamp('2013-01-02 14:00:00', tz='US/Eastern')]
- pytest.raises(ValueError, lambda: pd.to_datetime(arr, cache=cache))
+ msg = ("Tz-aware datetime.datetime cannot be converted to datetime64"
+ " unless utc=True")
+ with pytest.raises(ValueError, match=msg):
+ pd.to_datetime(arr, cache=cache)
@pytest.mark.parametrize('cache', [True, False])
def test_to_datetime_tz_pytz(self, cache):
@@ -1088,9 +1096,9 @@ def test_to_datetime_on_datetime64_series(self, cache):
def test_to_datetime_with_space_in_series(self, cache):
# GH 6428
s = Series(['10/18/2006', '10/18/2008', ' '])
- pytest.raises(ValueError, lambda: to_datetime(s,
- errors='raise',
- cache=cache))
+ msg = r"(\(u?')?String does not contain a date(:', ' '\))?"
+ with pytest.raises(ValueError, match=msg):
+ to_datetime(s, errors='raise', cache=cache)
result_coerce = to_datetime(s, errors='coerce', cache=cache)
expected_coerce = Series([datetime(2006, 10, 18),
datetime(2008, 10, 18),
@@ -1111,13 +1119,12 @@ def test_to_datetime_with_apply(self, cache):
assert_series_equal(result, expected)
td = pd.Series(['May 04', 'Jun 02', ''], index=[1, 2, 3])
- pytest.raises(ValueError,
- lambda: pd.to_datetime(td, format='%b %y',
- errors='raise',
- cache=cache))
- pytest.raises(ValueError,
- lambda: td.apply(pd.to_datetime, format='%b %y',
- errors='raise', cache=cache))
+ msg = r"time data '' does not match format '%b %y' \(match\)"
+ with pytest.raises(ValueError, match=msg):
+ pd.to_datetime(td, format='%b %y', errors='raise', cache=cache)
+ with pytest.raises(ValueError, match=msg):
+ td.apply(pd.to_datetime, format='%b %y',
+ errors='raise', cache=cache)
expected = pd.to_datetime(td, format='%b %y', errors='coerce',
cache=cache)
@@ -1168,8 +1175,9 @@ def test_to_datetime_unprocessable_input(self, cache, box, klass):
result = to_datetime([1, '1'], errors='ignore', cache=cache, box=box)
expected = klass(np.array([1, '1'], dtype='O'))
tm.assert_equal(result, expected)
- pytest.raises(TypeError, to_datetime, [1, '1'], errors='raise',
- cache=cache, box=box)
+ msg = "invalid string coercion to datetime"
+ with pytest.raises(TypeError, match=msg):
+ to_datetime([1, '1'], errors='raise', cache=cache, box=box)
def test_to_datetime_other_datetime64_units(self):
# 5/25/2012
@@ -1225,17 +1233,18 @@ def test_string_na_nat_conversion(self, cache):
malformed = np.array(['1/100/2000', np.nan], dtype=object)
# GH 10636, default is now 'raise'
- pytest.raises(ValueError,
- lambda: to_datetime(malformed, errors='raise',
- cache=cache))
+ msg = (r"\(u?'Unknown string format:', '1/100/2000'\)|"
+ "day is out of range for month")
+ with pytest.raises(ValueError, match=msg):
+ to_datetime(malformed, errors='raise', cache=cache)
result = to_datetime(malformed, errors='ignore', cache=cache)
# GH 21864
expected = Index(malformed)
tm.assert_index_equal(result, expected)
- pytest.raises(ValueError, to_datetime, malformed, errors='raise',
- cache=cache)
+ with pytest.raises(ValueError, match=msg):
+ to_datetime(malformed, errors='raise', cache=cache)
idx = ['a', 'b', 'c', 'd', 'e']
series = Series(['1/1/2000', np.nan, '1/3/2000', np.nan,
@@ -1414,14 +1423,24 @@ def test_day_not_in_month_coerce(self, cache):
@pytest.mark.parametrize('cache', [True, False])
def test_day_not_in_month_raise(self, cache):
- pytest.raises(ValueError, to_datetime, '2015-02-29',
- errors='raise', cache=cache)
- pytest.raises(ValueError, to_datetime, '2015-02-29',
- errors='raise', format="%Y-%m-%d", cache=cache)
- pytest.raises(ValueError, to_datetime, '2015-02-32',
- errors='raise', format="%Y-%m-%d", cache=cache)
- pytest.raises(ValueError, to_datetime, '2015-04-31',
- errors='raise', format="%Y-%m-%d", cache=cache)
+ msg = "day is out of range for month"
+ with pytest.raises(ValueError, match=msg):
+ to_datetime('2015-02-29', errors='raise', cache=cache)
+
+ msg = "time data 2015-02-29 doesn't match format specified"
+ with pytest.raises(ValueError, match=msg):
+ to_datetime('2015-02-29', errors='raise', format="%Y-%m-%d",
+ cache=cache)
+
+ msg = "time data 2015-02-32 doesn't match format specified"
+ with pytest.raises(ValueError, match=msg):
+ to_datetime('2015-02-32', errors='raise', format="%Y-%m-%d",
+ cache=cache)
+
+ msg = "time data 2015-04-31 doesn't match format specified"
+ with pytest.raises(ValueError, match=msg):
+ to_datetime('2015-04-31', errors='raise', format="%Y-%m-%d",
+ cache=cache)
@pytest.mark.parametrize('cache', [True, False])
def test_day_not_in_month_ignore(self, cache):
@@ -1656,7 +1675,9 @@ def test_parsers_time(self):
assert tools.to_time(time_string) == expected
new_string = "14.15"
- pytest.raises(ValueError, tools.to_time, new_string)
+ msg = r"Cannot convert arg \['14\.15'\] to a time"
+ with pytest.raises(ValueError, match=msg):
+ tools.to_time(new_string)
assert tools.to_time(new_string, format="%H.%M") == expected
arg = ["14:15", "20:20"]
| xref #24332
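
The mechanical pattern applied throughout, shown on one case taken from the diff (imports as used in the touched test modules):

```python
import pytest
from pandas import DatetimeIndex
from pandas._libs.tslibs.np_datetime import OutOfBoundsDatetime

# before: passes as long as some ValueError-compatible exception is raised
pytest.raises(ValueError, DatetimeIndex, ['1400-01-01'])

# after: the exception type *and* its message are both asserted
with pytest.raises(OutOfBoundsDatetime,
                   match="Out of bounds nanosecond timestamp: 1400-01-01 00:00:00"):
    DatetimeIndex(['1400-01-01'])
```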
| https://api.github.com/repos/pandas-dev/pandas/pulls/24995 | 2019-01-29T10:57:30Z | 2019-01-29T12:37:43Z | 2019-01-29T12:37:43Z | 2019-01-29T12:45:29Z |
Test nested PandasArray | diff --git a/pandas/core/arrays/numpy_.py b/pandas/core/arrays/numpy_.py
index 47517782e2bbf..791ff44303e96 100644
--- a/pandas/core/arrays/numpy_.py
+++ b/pandas/core/arrays/numpy_.py
@@ -222,7 +222,7 @@ def __getitem__(self, item):
item = item._ndarray
result = self._ndarray[item]
- if not lib.is_scalar(result):
+ if not lib.is_scalar(item):
result = type(self)(result)
return result
diff --git a/pandas/tests/extension/numpy_/__init__.py b/pandas/tests/extension/numpy_/__init__.py
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/pandas/tests/extension/numpy_/conftest.py b/pandas/tests/extension/numpy_/conftest.py
new file mode 100644
index 0000000000000..daa93571c2957
--- /dev/null
+++ b/pandas/tests/extension/numpy_/conftest.py
@@ -0,0 +1,38 @@
+import numpy as np
+import pytest
+
+from pandas.core.arrays.numpy_ import PandasArray
+
+
+@pytest.fixture
+def allow_in_pandas(monkeypatch):
+ """
+ A monkeypatch to tell pandas to let us in.
+
+ By default, passing a PandasArray to an index / series / frame
+ constructor will unbox that PandasArray to an ndarray, and treat
+ it as a non-EA column. We don't want people using EAs without
+ reason.
+
+ The mechanism for this is a check against ABCPandasArray
+ in each constructor.
+
+ But, for testing, we need to allow them in pandas. So we patch
+ the _typ of PandasArray, so that we evade the ABCPandasArray
+ check.
+ """
+ with monkeypatch.context() as m:
+ m.setattr(PandasArray, '_typ', 'extension')
+ yield
+
+
+@pytest.fixture
+def na_value():
+ return np.nan
+
+
+@pytest.fixture
+def na_cmp():
+ def cmp(a, b):
+ return np.isnan(a) and np.isnan(b)
+ return cmp
diff --git a/pandas/tests/extension/test_numpy.py b/pandas/tests/extension/numpy_/test_numpy.py
similarity index 84%
rename from pandas/tests/extension/test_numpy.py
rename to pandas/tests/extension/numpy_/test_numpy.py
index 7ca6882c7441b..4c93d5ee0b9d7 100644
--- a/pandas/tests/extension/test_numpy.py
+++ b/pandas/tests/extension/numpy_/test_numpy.py
@@ -6,7 +6,7 @@
from pandas.core.arrays.numpy_ import PandasArray, PandasDtype
import pandas.util.testing as tm
-from . import base
+from .. import base
@pytest.fixture
@@ -14,28 +14,6 @@ def dtype():
return PandasDtype(np.dtype('float'))
-@pytest.fixture
-def allow_in_pandas(monkeypatch):
- """
- A monkeypatch to tells pandas to let us in.
-
- By default, passing a PandasArray to an index / series / frame
- constructor will unbox that PandasArray to an ndarray, and treat
- it as a non-EA column. We don't want people using EAs without
- reason.
-
- The mechanism for this is a check against ABCPandasArray
- in each constructor.
-
- But, for testing, we need to allow them in pandas. So we patch
- the _typ of PandasArray, so that we evade the ABCPandasArray
- check.
- """
- with monkeypatch.context() as m:
- m.setattr(PandasArray, '_typ', 'extension')
- yield
-
-
@pytest.fixture
def data(allow_in_pandas, dtype):
return PandasArray(np.arange(1, 101, dtype=dtype._dtype))
@@ -46,18 +24,6 @@ def data_missing(allow_in_pandas):
return PandasArray(np.array([np.nan, 1.0]))
-@pytest.fixture
-def na_value():
- return np.nan
-
-
-@pytest.fixture
-def na_cmp():
- def cmp(a, b):
- return np.isnan(a) and np.isnan(b)
- return cmp
-
-
@pytest.fixture
def data_for_sorting(allow_in_pandas):
"""Length-3 array with a known sort order.
diff --git a/pandas/tests/extension/numpy_/test_numpy_nested.py b/pandas/tests/extension/numpy_/test_numpy_nested.py
new file mode 100644
index 0000000000000..cf9b34dd08798
--- /dev/null
+++ b/pandas/tests/extension/numpy_/test_numpy_nested.py
@@ -0,0 +1,286 @@
+"""
+Tests for PandasArray with nested data. Users typically won't create
+these objects via `pd.array`, but they can show up through `.array`
+on a Series with nested data.
+
+We partition these tests into their own file, as many of the base
+tests fail because they aren't appropriate for nested data. It is
+easier to have a separate file with its own data-generating fixtures
+than to skip based upon the value of a fixture.
+"""
+import pytest
+
+import pandas as pd
+from pandas.core.arrays.numpy_ import PandasArray, PandasDtype
+
+from .. import base
+
+# For NumPy <1.16, np.array([np.nan, (1,)]) raises
+# ValueError: setting an array element with a sequence.
+np = pytest.importorskip('numpy', minversion='1.16.0')
+
+
+@pytest.fixture
+def dtype():
+ return PandasDtype(np.dtype('object'))
+
+
+@pytest.fixture
+def data(allow_in_pandas, dtype):
+ return pd.Series([(i,) for i in range(100)]).array
+
+
+@pytest.fixture
+def data_missing(allow_in_pandas):
+ return PandasArray(np.array([np.nan, (1,)]))
+
+
+@pytest.fixture
+def data_for_sorting(allow_in_pandas):
+ """Length-3 array with a known sort order.
+
+ This should be three items [B, C, A] with
+ A < B < C
+ """
+ # Use an empty tuple for first element, then remove,
+ # to disable np.array's shape inference.
+ return PandasArray(
+ np.array([(), (2,), (3,), (1,)])[1:]
+ )
+
+
+@pytest.fixture
+def data_missing_for_sorting(allow_in_pandas):
+ """Length-3 array with a known sort order.
+
+ This should be three items [B, NA, A] with
+ A < B and NA missing.
+ """
+ return PandasArray(
+ np.array([(1,), np.nan, (0,)])
+ )
+
+
+@pytest.fixture
+def data_for_grouping(allow_in_pandas):
+ """Data for factorization, grouping, and unique tests.
+
+ Expected to be like [B, B, NA, NA, A, A, B, C]
+
+ Where A < B < C and NA is missing
+ """
+ a, b, c = (1,), (2,), (3,)
+ return PandasArray(np.array(
+ [b, b, np.nan, np.nan, a, a, b, c]
+ ))
+
+
+skip_nested = pytest.mark.skip(reason="Skipping for nested PandasArray")
+
+
+class BaseNumPyTests(object):
+ pass
+
+
+class TestCasting(BaseNumPyTests, base.BaseCastingTests):
+
+ @skip_nested
+ def test_astype_str(self, data):
+ pass
+
+
+class TestConstructors(BaseNumPyTests, base.BaseConstructorsTests):
+ @pytest.mark.skip(reason="We don't register our dtype")
+ # We don't want to register. This test should probably be split in two.
+ def test_from_dtype(self, data):
+ pass
+
+ @skip_nested
+ def test_array_from_scalars(self, data):
+ pass
+
+
+class TestDtype(BaseNumPyTests, base.BaseDtypeTests):
+
+ @pytest.mark.skip(reason="Incorrect expected.")
+ # we unsurprisingly clash with a NumPy name.
+ def test_check_dtype(self, data):
+ pass
+
+
+class TestGetitem(BaseNumPyTests, base.BaseGetitemTests):
+
+ @skip_nested
+ def test_getitem_scalar(self, data):
+ pass
+
+ @skip_nested
+ def test_take_series(self, data):
+ pass
+
+
+class TestGroupby(BaseNumPyTests, base.BaseGroupbyTests):
+ @skip_nested
+ def test_groupby_extension_apply(self, data_for_grouping, op):
+ pass
+
+
+class TestInterface(BaseNumPyTests, base.BaseInterfaceTests):
+ @skip_nested
+ def test_array_interface(self, data):
+ # NumPy array shape inference
+ pass
+
+
+class TestMethods(BaseNumPyTests, base.BaseMethodsTests):
+
+ @pytest.mark.skip(reason="TODO: remove?")
+ def test_value_counts(self, all_data, dropna):
+ pass
+
+ @pytest.mark.skip(reason="Incorrect expected")
+ # We have a bool dtype, so the result is an ExtensionArray
+ # but expected is not
+ def test_combine_le(self, data_repeated):
+ super(TestMethods, self).test_combine_le(data_repeated)
+
+ @skip_nested
+ def test_combine_add(self, data_repeated):
+ # Not numeric
+ pass
+
+ @skip_nested
+ def test_shift_fill_value(self, data):
+ # np.array shape inference. Shift implementation fails.
+ super().test_shift_fill_value(data)
+
+ @skip_nested
+ def test_unique(self, data, box, method):
+ # Fails creating expected
+ pass
+
+ @skip_nested
+ def test_fillna_copy_frame(self, data_missing):
+ # The "scalar" for this array isn't a scalar.
+ pass
+
+ @skip_nested
+ def test_fillna_copy_series(self, data_missing):
+ # The "scalar" for this array isn't a scalar.
+ pass
+
+ @skip_nested
+ def test_hash_pandas_object_works(self, data, as_frame):
+ # ndarray of tuples not hashable
+ pass
+
+ @skip_nested
+ def test_searchsorted(self, data_for_sorting, as_series):
+ # Test setup fails.
+ pass
+
+ @skip_nested
+ def test_where_series(self, data, na_value, as_frame):
+ # Test setup fails.
+ pass
+
+ @skip_nested
+ def test_repeat(self, data, repeats, as_series, use_numpy):
+ # Fails creating expected
+ pass
+
+
+class TestPrinting(BaseNumPyTests, base.BasePrintingTests):
+ pass
+
+
+class TestMissing(BaseNumPyTests, base.BaseMissingTests):
+
+ @skip_nested
+ def test_fillna_scalar(self, data_missing):
+ # Non-scalar "scalar" values.
+ pass
+
+ @skip_nested
+ def test_fillna_series_method(self, data_missing, method):
+ # Non-scalar "scalar" values.
+ pass
+
+ @skip_nested
+ def test_fillna_series(self, data_missing):
+ # Non-scalar "scalar" values.
+ pass
+
+ @skip_nested
+ def test_fillna_frame(self, data_missing):
+ # Non-scalar "scalar" values.
+ pass
+
+
+class TestReshaping(BaseNumPyTests, base.BaseReshapingTests):
+
+ @pytest.mark.skip("Incorrect parent test")
+ # not actually a mixed concat, since we concat int and int.
+ def test_concat_mixed_dtypes(self, data):
+ super(TestReshaping, self).test_concat_mixed_dtypes(data)
+
+ @skip_nested
+ def test_merge(self, data, na_value):
+ # Fails creating expected
+ pass
+
+ @skip_nested
+ def test_merge_on_extension_array(self, data):
+ # Fails creating expected
+ pass
+
+ @skip_nested
+ def test_merge_on_extension_array_duplicates(self, data):
+ # Fails creating expected
+ pass
+
+
+class TestSetitem(BaseNumPyTests, base.BaseSetitemTests):
+
+ @skip_nested
+ def test_setitem_scalar_series(self, data, box_in_series):
+ pass
+
+ @skip_nested
+ def test_setitem_sequence(self, data, box_in_series):
+ pass
+
+ @skip_nested
+ def test_setitem_sequence_mismatched_length_raises(self, data, as_array):
+ pass
+
+ @skip_nested
+ def test_setitem_sequence_broadcasts(self, data, box_in_series):
+ pass
+
+ @skip_nested
+ def test_setitem_loc_scalar_mixed(self, data):
+ pass
+
+ @skip_nested
+ def test_setitem_loc_scalar_multiple_homogoneous(self, data):
+ pass
+
+ @skip_nested
+ def test_setitem_iloc_scalar_mixed(self, data):
+ pass
+
+ @skip_nested
+ def test_setitem_iloc_scalar_multiple_homogoneous(self, data):
+ pass
+
+ @skip_nested
+ def test_setitem_mask_broadcast(self, data, setter):
+ pass
+
+ @skip_nested
+ def test_setitem_scalar_key_sequence_raise(self, data):
+ pass
+
+
+# Skip Arithmetics, NumericReduce, BooleanReduce, Parsing
| May be easiest to view the commits individually.
https://github.com/pandas-dev/pandas/commit/38e7413f3f650230eca75ab50580630838930c34
has the commit fixing the actual issue in #24986. That one was easy.
https://github.com/pandas-dev/pandas/commit/558cdbec47c9d9e7862a3d3c688fcf51260d367d is just moving the numpy-backed EA tests into their own directory, to facilitate the real changes in
https://github.com/pandas-dev/pandas/commit/9122bb6d5e7f0eb2296016f35e9d8eb7ef7293e1
Having a PandasArray with nested data breaks lots of tests. Some of these break because of how we construct the expected values, and how NumPy handles nested data (not the best). Adding checks to the individual tests like
```
def test_foo(self, data):
if data.dtype == 'object':
raise pytest.skip('skipping for object')
pass
```
wasn't really feasible, as we'd need to duplicate all the parametrized fixtures defined on the tests themselves.
So, I opted to define a `pytest_collection_modifyitems` to add skips to a known list of tests that won't pass for nested data.
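
For context, a rough sketch of what such a hook could look like (hypothetical names; the diff above ends up annotating the known-failing tests with a `skip_nested` marker instead):

```python
import pytest

# illustrative subset; the real list would enumerate every known-failing test
NESTED_INCOMPATIBLE = {'test_astype_str', 'test_fillna_scalar'}

def pytest_collection_modifyitems(config, items):
    for item in items:
        # item.name carries the parametrization suffix, e.g. 'test_foo[True]'
        if item.name.split('[')[0] in NESTED_INCOMPATIBLE:
            item.add_marker(
                pytest.mark.skip(reason="Skipping for nested PandasArray"))
```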
Closes #24986 | https://api.github.com/repos/pandas-dev/pandas/pulls/24993 | 2019-01-29T03:49:49Z | 2019-01-30T21:18:20Z | 2019-01-30T21:18:20Z | 2019-01-30T21:18:24Z |
DOC: Document breaking change to read_csv | diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index 58e1b2370c7c8..b23a0f10e9e2b 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -989,6 +989,36 @@ a single date rather than the entire array.
os.remove('tmp.csv')
+
+.. _io.csv.mixed_timezones:
+
+Parsing a CSV with mixed Timezones
+++++++++++++++++++++++++++++++++++
+
+Pandas cannot natively represent a column or index with mixed timezones. If your CSV
+file contains columns with a mixture of timezones, the default result will be
+an object-dtype column with strings, even with ``parse_dates``.
+
+
+.. ipython:: python
+
+ content = """\
+ a
+ 2000-01-01T00:00:00+05:00
+ 2000-01-01T00:00:00+06:00"""
+ df = pd.read_csv(StringIO(content), parse_dates=['a'])
+ df['a']
+
+To parse the mixed-timezone values as a datetime column, pass a partially-applied
+:func:`to_datetime` with ``utc=True`` as the ``date_parser``.
+
+.. ipython:: python
+
+ df = pd.read_csv(StringIO(content), parse_dates=['a'],
+ date_parser=lambda col: pd.to_datetime(col, utc=True))
+ df['a']
+
+
.. _io.dayfirst:
diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 16319a3b83ca4..a49ea2cf493a6 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -648,6 +648,52 @@ that the dates have been converted to UTC
pd.to_datetime(["2015-11-18 15:30:00+05:30",
"2015-11-18 16:30:00+06:30"], utc=True)
+
+.. _whatsnew_0240.api_breaking.read_csv_mixed_tz:
+
+Parsing mixed-timezones with :func:`read_csv`
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+:func:`read_csv` no longer silently converts mixed-timezone columns to UTC (:issue:`24987`).
+
+*Previous Behavior*
+
+.. code-block:: python
+
+ >>> import io
+ >>> content = """\
+ ... a
+ ... 2000-01-01T00:00:00+05:00
+ ... 2000-01-01T00:00:00+06:00"""
+ >>> df = pd.read_csv(io.StringIO(content), parse_dates=['a'])
+ >>> df.a
+ 0 1999-12-31 19:00:00
+ 1 1999-12-31 18:00:00
+ Name: a, dtype: datetime64[ns]
+
+*New Behavior*
+
+.. ipython:: python
+
+ import io
+ content = """\
+ a
+ 2000-01-01T00:00:00+05:00
+ 2000-01-01T00:00:00+06:00"""
+ df = pd.read_csv(io.StringIO(content), parse_dates=['a'])
+ df.a
+
+As can be seen, the ``dtype`` is object; each value in the column is a string.
+To convert the strings to an array of datetimes, use the ``date_parser`` argument:
+
+.. ipython:: python
+
+ df = pd.read_csv(io.StringIO(content), parse_dates=['a'],
+ date_parser=lambda col: pd.to_datetime(col, utc=True))
+ df.a
+
+See :ref:`whatsnew_0240.api.timezone_offset_parsing` for more.
+
.. _whatsnew_0240.api_breaking.period_end_time:
Time values in ``dt.end_time`` and ``to_timestamp(how='end')``
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index b31d3f665f47f..4163a571df800 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -203,9 +203,14 @@
* dict, e.g. {{'foo' : [1, 3]}} -> parse columns 1, 3 as date and call
result 'foo'
- If a column or index contains an unparseable date, the entire column or
- index will be returned unaltered as an object data type. For non-standard
- datetime parsing, use ``pd.to_datetime`` after ``pd.read_csv``
+ If a column or index cannot be represented as an array of datetimes,
+ say because of an unparseable value or a mixture of timezones, the column
+ or index will be returned unaltered as an object data type. For
+ non-standard datetime parsing, use ``pd.to_datetime`` after
+ ``pd.read_csv``. To parse an index or column with a mixture of timezones,
+ specify ``date_parser`` to be a partially-applied
+ :func:`pandas.to_datetime` with ``utc=True``. See
+ :ref:`io.csv.mixed_timezones` for more.
Note: A fast-path exists for iso8601-formatted dates.
infer_datetime_format : bool, default False
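For readers following along, a runnable sketch of the "partially-applied :func:`pandas.to_datetime` with ``utc=True``" recipe the docstring refers to; ``functools.partial`` is equivalent to the lambda used in the docs (0.24-era ``date_parser`` API):

```python
import functools
import io

import pandas as pd

content = "a\n2000-01-01T00:00:00+05:00\n2000-01-01T00:00:00+06:00"
# pre-bind utc=True so the mixed offsets are parsed straight to UTC
utc_parser = functools.partial(pd.to_datetime, utc=True)
df = pd.read_csv(io.StringIO(content), parse_dates=["a"], date_parser=utc_parser)
print(df["a"].dtype)  # datetime64[ns, UTC]
```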
| Closes https://github.com/pandas-dev/pandas/issues/24987 | https://api.github.com/repos/pandas-dev/pandas/pulls/24989 | 2019-01-28T23:47:59Z | 2019-01-29T12:22:40Z | 2019-01-29T12:22:40Z | 2019-01-29T12:22:46Z |
PERF: ~40x speedup in sparse init and ops by using numpy in check_integrity | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
index d0ddb6e09d555..a9fa8b2174dd0 100644
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -23,6 +23,13 @@ Other Enhancements
-
-
+.. _whatsnew_0250.performance:
+
+Performance Improvements
+~~~~~~~~~~~~~~~~~~~~~~~~
+ - Significant speedup in `SparseArray` initialization that benefits most operations, fixing performance regression introduced in v0.20.0 (:issue:`24985`)
+
+
.. _whatsnew_0250.api_breaking:
@@ -187,7 +194,7 @@ Reshaping
Sparse
^^^^^^
--
+- Significant speedup in `SparseArray` initialization that benefits most operations, fixing performance regression introduced in v0.20.0 (:issue:`24985`)
-
-
diff --git a/pandas/_libs/sparse.pyx b/pandas/_libs/sparse.pyx
index f5980998f6db4..5471c8184e458 100644
--- a/pandas/_libs/sparse.pyx
+++ b/pandas/_libs/sparse.pyx
@@ -72,9 +72,6 @@ cdef class IntIndex(SparseIndex):
A ValueError is raised if any of these conditions is violated.
"""
- cdef:
- int32_t index, prev = -1
-
if self.npoints > self.length:
msg = ("Too many indices. Expected "
"{exp} but found {act}").format(
@@ -86,17 +83,15 @@ cdef class IntIndex(SparseIndex):
if self.npoints == 0:
return
- if min(self.indices) < 0:
+ if self.indices.min() < 0:
raise ValueError("No index can be less than zero")
- if max(self.indices) >= self.length:
+ if self.indices.max() >= self.length:
raise ValueError("All indices must be less than the length")
- for index in self.indices:
- if prev != -1 and index <= prev:
- raise ValueError("Indices must be strictly increasing")
-
- prev = index
+ monotonic = np.all(self.indices[:-1] < self.indices[1:])
+ if not monotonic:
+ raise ValueError("Indices must be strictly increasing")
def equals(self, other):
if not isinstance(other, IntIndex):
| A pretty significant regression was introduced into `SparseArray` operations around the release of v0.20.0:
![089e89af-a40d-4443-ad3b-16018f9f44fd](https://user-images.githubusercontent.com/440095/51856811-79554f80-22e5-11e9-8227-fc646c269b33.png)
A run of `asv find` identified https://github.com/pandas-dev/pandas/pull/15863 as the source; simply using numpy operations rather than python-in-cython yields a ~40x speedup:
```
$ asv compare master HEAD -s --sort ratio
Benchmarks that have improved:
before after ratio
[2b16e2e6] [0e35de9b]
<master> <sparse_check_integrity>
- 46.4±0.4ms 42.0±0.1ms 0.91 sparse.SparseArrayConstructor.time_sparse_array(0.01, nan, <class 'object'>)
- 216±0.4μs 193±0.4μs 0.89 sparse.Arithmetic.time_intersect(0.01, 0)
- 60.1±0.2ms 48.1±0.1ms 0.80 sparse.SparseArrayConstructor.time_sparse_array(0.01, 0, <class 'object'>)
- 1.02±0.01s 594±6ms 0.58 reshape.GetDummies.time_get_dummies_1d_sparse
- 96.3±10ms 54.1±0.2ms 0.56 sparse.SparseArrayConstructor.time_sparse_array(0.1, nan, <class 'object'>)
- 105±0.3ms 56.1±0.3ms 0.54 sparse.SparseArrayConstructor.time_sparse_array(0.1, 0, <class 'object'>)
- 7.43±0.2ms 3.17±0.08ms 0.43 sparse.SparseArrayConstructor.time_sparse_array(0.01, nan, <class 'numpy.float64'>)
- 6.67±0.02ms 2.50±0.03ms 0.37 sparse.SparseArrayConstructor.time_sparse_array(0.01, 0, <class 'numpy.int64'>)
- 660±9ms 237±0.2ms 0.36 sparse.Arithmetic.time_make_union(0.1, nan)
- 674±10ms 237±0.2ms 0.35 sparse.Arithmetic.time_make_union(0.01, nan)
- 6.27±0.04ms 2.03±0.02ms 0.32 sparse.SparseArrayConstructor.time_sparse_array(0.01, 0, <class 'numpy.float64'>)
- 5.71±0.08ms 1.58±0.01ms 0.28 sparse.Arithmetic.time_intersect(0.1, 0)
- 50.0±0.6ms 7.94±0.04ms 0.16 sparse.SparseArrayConstructor.time_sparse_array(0.1, nan, <class 'numpy.float64'>)
- 93.4±0.6ms 14.7±0.04ms 0.16 sparse.Arithmetic.time_divide(0.1, 0)
- 93.4±0.6ms 14.5±0.05ms 0.16 sparse.Arithmetic.time_add(0.1, 0)
- 9.69±0.03ms 1.46±0ms 0.15 sparse.Arithmetic.time_divide(0.01, 0)
- 48.7±0.4ms 7.29±0.01ms 0.15 sparse.SparseArrayConstructor.time_sparse_array(0.1, 0, <class 'numpy.int64'>)
- 9.71±0.04ms 1.45±0ms 0.15 sparse.Arithmetic.time_add(0.01, 0)
- 91.6±0.5ms 12.9±0.04ms 0.14 sparse.Arithmetic.time_make_union(0.1, 0)
- 49.8±0.8ms 6.84±0.01ms 0.14 sparse.SparseArrayConstructor.time_sparse_array(0.1, 0, <class 'numpy.float64'>)
- 9.37±0.01ms 1.18±0ms 0.13 sparse.Arithmetic.time_make_union(0.01, 0)
- 9.60±0.1ms 1.05±0ms 0.11 sparse.ArithmeticBlock.time_division(nan)
- 9.50±0.2ms 1.03±0ms 0.11 sparse.ArithmeticBlock.time_addition(nan)
- 9.49±0.1ms 991±2μs 0.10 sparse.ArithmeticBlock.time_division(0)
- 9.52±0.1ms 981±2μs 0.10 sparse.ArithmeticBlock.time_addition(0)
- 9.19±0.1ms 827±2μs 0.09 sparse.ArithmeticBlock.time_make_union(nan)
- 9.01±0.1ms 796±2μs 0.09 sparse.ArithmeticBlock.time_make_union(0)
- 4.31±0.04ms 156±0.3μs 0.04 sparse.ArithmeticBlock.time_intersect(nan)
- 4.33±0.09ms 156±0.5μs 0.04 sparse.ArithmeticBlock.time_intersect(0)
- 464±20ms 14.6±1ms 0.03 sparse.SparseArrayConstructor.time_sparse_array(0.01, nan, <class 'numpy.int64'>)
- 439±4ms 12.1±1ms 0.03 sparse.SparseArrayConstructor.time_sparse_array(0.1, nan, <class 'numpy.int64'>)
- 440±2ms 10.7±0.01ms 0.02 sparse.Arithmetic.time_intersect(0.01, nan)
- 444±10ms 10.7±0.01ms 0.02 sparse.Arithmetic.time_intersect(0.1, nan)
Benchmarks that have stayed the same:
before after ratio
[2b16e2e6] [0e35de9b]
<master> <sparse_check_integrity>
246±10ms 251±7ms 1.02 sparse.SparseSeriesToFrame.time_series_to_frame
597±2ms 605±10ms 1.01 sparse.SparseDataFrameConstructor.time_from_scipy
6.21±0s 6.26±0.02s 1.01 sparse.SparseDataFrameConstructor.time_constructor
2.87±0.01ms 2.88±0.04ms 1.01 sparse.FromCoo.time_sparse_series_from_coo
246±1ms 247±2ms 1.00 sparse.SparseDataFrameConstructor.time_from_dict
2.29±0.05ms 2.29±0.01ms 1.00 sparse.Arithmetic.time_add(0.1, nan)
2.06±0.01ms 2.06±0.01ms 1.00 reshape.SparseIndex.time_unstack
4.61±0.01ms 4.62±0.01ms 1.00 sparse.Arithmetic.time_divide(0.1, nan)
4.05±0.01ms 4.04±0.01ms 1.00 sparse.Arithmetic.time_divide(0.01, nan)
46.2±0.3ms 46.1±0.1ms 1.00 sparse.ToCoo.time_sparse_series_to_coo
2.31±0.03ms 2.29±0.02ms 0.99 sparse.Arithmetic.time_add(0.01, nan)
```
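For reference, a standalone NumPy sketch of the vectorized strict-monotonicity check the patch uses in place of the per-element Python loop:

```python
import numpy as np

def check_strictly_increasing(indices):
    # vectorized equivalent of the old "prev < index" loop: compare each
    # element with its successor in a single pass
    if len(indices) and indices.min() < 0:
        raise ValueError("No index can be less than zero")
    if not np.all(indices[:-1] < indices[1:]):
        raise ValueError("Indices must be strictly increasing")

check_strictly_increasing(np.array([0, 2, 5], dtype=np.int32))   # passes
# check_strictly_increasing(np.array([0, 5, 5], dtype=np.int32)) # would raise
```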
- [ ] closes #xxxx
- [ ] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/24985 | 2019-01-28T18:21:29Z | 2019-01-30T13:01:42Z | 2019-01-30T13:01:42Z | 2019-01-30T13:01:46Z |
API/ERR: allow iterators in df.set_index & improve errors | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
index 83ca93bdfa703..4ea5d935a6920 100644
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -22,6 +22,7 @@ Other Enhancements
- Indexing of ``DataFrame`` and ``Series`` now accepts zerodim ``np.ndarray`` (:issue:`24919`)
- :meth:`Timestamp.replace` now supports the ``fold`` argument to disambiguate DST transition times (:issue:`25017`)
- :meth:`DataFrame.at_time` and :meth:`Series.at_time` now support :meth:`datetime.time` objects with timezones (:issue:`24043`)
+- :meth:`DataFrame.set_index` now works for instances of ``abc.Iterator``, provided their output is of the same length as the calling frame (:issue:`22484`, :issue:`24984`)
- :meth:`DatetimeIndex.union` now supports the ``sort`` argument. The behaviour of the sort parameter matches that of :meth:`Index.union` (:issue:`24994`)
-
diff --git a/pandas/compat/__init__.py b/pandas/compat/__init__.py
index d7ca7f8963f70..4036af85b7212 100644
--- a/pandas/compat/__init__.py
+++ b/pandas/compat/__init__.py
@@ -137,6 +137,7 @@ def lfilter(*args, **kwargs):
reload = reload
Hashable = collections.abc.Hashable
Iterable = collections.abc.Iterable
+ Iterator = collections.abc.Iterator
Mapping = collections.abc.Mapping
MutableMapping = collections.abc.MutableMapping
Sequence = collections.abc.Sequence
@@ -199,6 +200,7 @@ def get_range_parameters(data):
Hashable = collections.Hashable
Iterable = collections.Iterable
+ Iterator = collections.Iterator
Mapping = collections.Mapping
MutableMapping = collections.MutableMapping
Sequence = collections.Sequence
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 79f209f9ebc0a..608e5c53ec094 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -33,7 +33,7 @@
from pandas import compat
from pandas.compat import (range, map, zip, lmap, lzip, StringIO, u,
- PY36, raise_with_traceback,
+ PY36, raise_with_traceback, Iterator,
string_and_binary_types)
from pandas.compat.numpy import function as nv
from pandas.core.dtypes.cast import (
@@ -4025,7 +4025,8 @@ def set_index(self, keys, drop=True, append=False, inplace=False,
This parameter can be either a single column key, a single array of
the same length as the calling DataFrame, or a list containing an
arbitrary combination of column keys and arrays. Here, "array"
- encompasses :class:`Series`, :class:`Index` and ``np.ndarray``.
+ encompasses :class:`Series`, :class:`Index`, ``np.ndarray``, and
+ instances of :class:`abc.Iterator`.
drop : bool, default True
Delete columns to be used as the new index.
append : bool, default False
@@ -4104,6 +4105,32 @@ def set_index(self, keys, drop=True, append=False, inplace=False,
if not isinstance(keys, list):
keys = [keys]
+ err_msg = ('The parameter "keys" may be a column key, one-dimensional '
+ 'array, or a list containing only valid column keys and '
+ 'one-dimensional arrays.')
+
+ missing = []
+ for col in keys:
+ if isinstance(col, (ABCIndexClass, ABCSeries, np.ndarray,
+ list, Iterator)):
+ # arrays are fine as long as they are one-dimensional
+ # iterators get converted to list below
+ if getattr(col, 'ndim', 1) != 1:
+ raise ValueError(err_msg)
+ else:
+ # everything else gets tried as a key; see GH 24969
+ try:
+ found = col in self.columns
+ except TypeError:
+ raise TypeError(err_msg + ' Received column of '
+ 'type {}'.format(type(col)))
+ else:
+ if not found:
+ missing.append(col)
+
+ if missing:
+ raise KeyError('None of {} are in the columns'.format(missing))
+
if inplace:
frame = self
else:
@@ -4132,6 +4159,9 @@ def set_index(self, keys, drop=True, append=False, inplace=False,
elif isinstance(col, (list, np.ndarray)):
arrays.append(col)
names.append(None)
+ elif isinstance(col, Iterator):
+ arrays.append(list(col))
+ names.append(None)
# from here, col can only be a column label
else:
arrays.append(frame[col]._values)
@@ -4139,6 +4169,15 @@ def set_index(self, keys, drop=True, append=False, inplace=False,
if drop:
to_remove.append(col)
+ if len(arrays[-1]) != len(self):
+ # check newest element against length of calling frame, since
+ # ensure_index_from_sequences would not raise for append=False.
+ raise ValueError('Length mismatch: Expected {len_self} rows, '
+ 'received array of length {len_col}'.format(
+ len_self=len(self),
+ len_col=len(arrays[-1])
+ ))
+
index = ensure_index_from_sequences(arrays, names)
if verify_integrity and not index.is_unique:
diff --git a/pandas/tests/frame/test_alter_axes.py b/pandas/tests/frame/test_alter_axes.py
index cc3687f856b4e..a25e893e08900 100644
--- a/pandas/tests/frame/test_alter_axes.py
+++ b/pandas/tests/frame/test_alter_axes.py
@@ -178,10 +178,10 @@ def test_set_index_pass_arrays(self, frame_of_index_cols,
# MultiIndex constructor does not work directly on Series -> lambda
# We also emulate a "constructor" for the label -> lambda
# also test index name if append=True (name is duplicate here for A)
- @pytest.mark.parametrize('box2', [Series, Index, np.array, list,
+ @pytest.mark.parametrize('box2', [Series, Index, np.array, list, iter,
lambda x: MultiIndex.from_arrays([x]),
lambda x: x.name])
- @pytest.mark.parametrize('box1', [Series, Index, np.array, list,
+ @pytest.mark.parametrize('box1', [Series, Index, np.array, list, iter,
lambda x: MultiIndex.from_arrays([x]),
lambda x: x.name])
@pytest.mark.parametrize('append, index_name', [(True, None),
@@ -195,6 +195,9 @@ def test_set_index_pass_arrays_duplicate(self, frame_of_index_cols, drop,
keys = [box1(df['A']), box2(df['A'])]
result = df.set_index(keys, drop=drop, append=append)
+ # if either box is iter, it has been consumed; re-read
+ keys = [box1(df['A']), box2(df['A'])]
+
# need to adapt first drop for case that both keys are 'A' --
# cannot drop the same column twice;
# use "is" because == would give ambiguous Boolean error for containers
@@ -253,25 +256,48 @@ def test_set_index_raise_keys(self, frame_of_index_cols, drop, append):
df.set_index(['A', df['A'], tuple(df['A'])],
drop=drop, append=append)
- @pytest.mark.xfail(reason='broken due to revert, see GH 25085')
@pytest.mark.parametrize('append', [True, False])
@pytest.mark.parametrize('drop', [True, False])
- @pytest.mark.parametrize('box', [set, iter, lambda x: (y for y in x)],
- ids=['set', 'iter', 'generator'])
+ @pytest.mark.parametrize('box', [set], ids=['set'])
def test_set_index_raise_on_type(self, frame_of_index_cols, box,
drop, append):
df = frame_of_index_cols
msg = 'The parameter "keys" may be a column key, .*'
- # forbidden type, e.g. set/iter/generator
+ # forbidden type, e.g. set
with pytest.raises(TypeError, match=msg):
df.set_index(box(df['A']), drop=drop, append=append)
- # forbidden type in list, e.g. set/iter/generator
+ # forbidden type in list, e.g. set
with pytest.raises(TypeError, match=msg):
df.set_index(['A', df['A'], box(df['A'])],
drop=drop, append=append)
+ # MultiIndex constructor does not work directly on Series -> lambda
+ @pytest.mark.parametrize('box', [Series, Index, np.array, iter,
+ lambda x: MultiIndex.from_arrays([x])],
+ ids=['Series', 'Index', 'np.array',
+ 'iter', 'MultiIndex'])
+ @pytest.mark.parametrize('length', [4, 6], ids=['too_short', 'too_long'])
+ @pytest.mark.parametrize('append', [True, False])
+ @pytest.mark.parametrize('drop', [True, False])
+ def test_set_index_raise_on_len(self, frame_of_index_cols, box, length,
+ drop, append):
+ # GH 24984
+ df = frame_of_index_cols # has length 5
+
+ values = np.random.randint(0, 10, (length,))
+
+ msg = 'Length mismatch: Expected 5 rows, received array of length.*'
+
+ # wrong length directly
+ with pytest.raises(ValueError, match=msg):
+ df.set_index(box(values), drop=drop, append=append)
+
+ # wrong length in list
+ with pytest.raises(ValueError, match=msg):
+ df.set_index(['A', df.A, box(values)], drop=drop, append=append)
+
def test_set_index_custom_label_type(self):
# GH 24969
@@ -341,7 +367,7 @@ def __repr__(self):
# missing key
thing3 = Thing(['Three', 'pink'])
- msg = '.*' # due to revert, see GH 25085
+ msg = r"frozenset\(\{'Three', 'pink'\}\)"
with pytest.raises(KeyError, match=msg):
# missing label directly
df.set_index(thing3)
@@ -366,7 +392,7 @@ def __str__(self):
thing2 = Thing('Two', 'blue')
df = DataFrame([[0, 2], [1, 3]], columns=[thing1, thing2])
- msg = 'unhashable type.*'
+ msg = 'The parameter "keys" may be a column key, .*'
with pytest.raises(TypeError, match=msg):
# use custom label directly
| - [x] closes the parts of #22484 (resp. those worth keeping) that were reverted in #25085 due to #24969 ~closes #24969~
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
This is a quick fix for the regression - however, I think this should immediately (i.e. in 0.24.1) be deprecated. I haven't yet added a deprecation warning here, pending further discussion in the issue.
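For illustration, a sketch of the behavior the new tests exercise (assuming this branch is applied):

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3]})

# iterators/generators are materialized with list(...) before indexing
indexed = df.set_index(iter(["x", "y", "z"]))
print(list(indexed.index))  # ['x', 'y', 'z']

# a length mismatch now raises a descriptive ValueError
try:
    df.set_index(iter(["x", "y"]))
except ValueError as exc:
    print(exc)  # Length mismatch: Expected 3 rows, received array of length 2
```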
@jorisvandenbossche @TomAugspurger @jreback
| https://api.github.com/repos/pandas-dev/pandas/pulls/24984 | 2019-01-28T17:52:56Z | 2019-02-24T03:34:20Z | 2019-02-24T03:34:19Z | 2019-02-24T10:09:38Z |
Backport PR #24965 on branch 0.24.x (Fixed itertuples usage in to_dict) | diff --git a/doc/source/whatsnew/v0.24.1.rst b/doc/source/whatsnew/v0.24.1.rst
index ee4b7ab62b31a..16044fb00d4f5 100644
--- a/doc/source/whatsnew/v0.24.1.rst
+++ b/doc/source/whatsnew/v0.24.1.rst
@@ -15,6 +15,13 @@ Whats New in 0.24.1 (February XX, 2019)
These are the changes in pandas 0.24.1. See :ref:`release` for a full changelog
including other versions of pandas.
+.. _whatsnew_0241.regressions:
+
+Fixed Regressions
+^^^^^^^^^^^^^^^^^
+
+- Bug in :meth:`DataFrame.to_dict` with ``records`` orient raising an ``AttributeError`` when the ``DataFrame`` contained more than 255 columns (:issue:`24939`)
+- Bug in :meth:`DataFrame.to_dict` with ``records`` orient converting integer column names to strings prepended with an underscore (:issue:`24940`)
.. _whatsnew_0241.enhancements:
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index b4f79bda25517..28c6f3c23a3ce 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -847,7 +847,7 @@ def itertuples(self, index=True, name="Pandas"):
----------
index : bool, default True
If True, return the index as the first element of the tuple.
- name : str, default "Pandas"
+ name : str or None, default "Pandas"
The name of the returned namedtuples or None to return regular
tuples.
@@ -1290,23 +1290,26 @@ def to_dict(self, orient='dict', into=dict):
('columns', self.columns.tolist()),
('data', [
list(map(com.maybe_box_datetimelike, t))
- for t in self.itertuples(index=False)]
- )))
+ for t in self.itertuples(index=False, name=None)
+ ])))
elif orient.lower().startswith('s'):
return into_c((k, com.maybe_box_datetimelike(v))
for k, v in compat.iteritems(self))
elif orient.lower().startswith('r'):
+ columns = self.columns.tolist()
+ rows = (dict(zip(columns, row))
+ for row in self.itertuples(index=False, name=None))
return [
into_c((k, com.maybe_box_datetimelike(v))
- for k, v in compat.iteritems(row._asdict()))
- for row in self.itertuples(index=False)]
+ for k, v in compat.iteritems(row))
+ for row in rows]
elif orient.lower().startswith('i'):
if not self.index.is_unique:
raise ValueError(
"DataFrame index must be unique for orient='index'."
)
return into_c((t[0], dict(zip(self.columns, t[1:])))
- for t in self.itertuples())
+ for t in self.itertuples(name=None))
else:
raise ValueError("orient '{o}' not understood".format(o=orient))
diff --git a/pandas/tests/frame/test_convert_to.py b/pandas/tests/frame/test_convert_to.py
index ddf85136126a1..7b98395dd6dec 100644
--- a/pandas/tests/frame/test_convert_to.py
+++ b/pandas/tests/frame/test_convert_to.py
@@ -488,3 +488,17 @@ def test_to_dict_index_dtypes(self, into, expected):
result = DataFrame.from_dict(result, orient='index')[cols]
expected = DataFrame.from_dict(expected, orient='index')[cols]
tm.assert_frame_equal(result, expected)
+
+ def test_to_dict_numeric_names(self):
+ # https://github.com/pandas-dev/pandas/issues/24940
+ df = DataFrame({str(i): [i] for i in range(5)})
+ result = set(df.to_dict('records')[0].keys())
+ expected = set(df.columns)
+ assert result == expected
+
+ def test_to_dict_wide(self):
+ # https://github.com/pandas-dev/pandas/issues/24939
+ df = DataFrame({('A_{:d}'.format(i)): [i] for i in range(256)})
+ result = df.to_dict('records')[0]
+ expected = {'A_{:d}'.format(i): i for i in range(256)}
+ assert result == expected
| Backport PR #24965: Fixed itertuples usage in to_dict | https://api.github.com/repos/pandas-dev/pandas/pulls/24978 | 2019-01-28T15:32:55Z | 2019-01-28T16:17:37Z | 2019-01-28T16:17:36Z | 2019-01-28T16:17:37Z |
STY: use pytest.raises context manager (resample) | diff --git a/pandas/tests/resample/test_base.py b/pandas/tests/resample/test_base.py
index 911cd990ab881..48debfa2848e7 100644
--- a/pandas/tests/resample/test_base.py
+++ b/pandas/tests/resample/test_base.py
@@ -95,7 +95,10 @@ def test_resample_interpolate_all_ts(frame):
def test_raises_on_non_datetimelike_index():
# this is a non datetimelike index
xp = DataFrame()
- pytest.raises(TypeError, lambda: xp.resample('A').mean())
+ msg = ("Only valid with DatetimeIndex, TimedeltaIndex or PeriodIndex,"
+ " but got an instance of 'Index'")
+ with pytest.raises(TypeError, match=msg):
+ xp.resample('A').mean()
@pytest.mark.parametrize('freq', ['M', 'D', 'H'])
@@ -189,8 +192,10 @@ def test_resample_loffset_arg_type_all_ts(frame, create_index):
# GH 13022, 7687 - TODO: fix resample w/ TimedeltaIndex
if isinstance(expected.index, TimedeltaIndex):
- with pytest.raises(AssertionError):
+ msg = "DataFrame are different"
+ with pytest.raises(AssertionError, match=msg):
assert_frame_equal(result_agg, expected)
+ with pytest.raises(AssertionError, match=msg):
assert_frame_equal(result_how, expected)
else:
assert_frame_equal(result_agg, expected)
diff --git a/pandas/tests/resample/test_datetime_index.py b/pandas/tests/resample/test_datetime_index.py
index 73995cbe79ecd..74487052f8982 100644
--- a/pandas/tests/resample/test_datetime_index.py
+++ b/pandas/tests/resample/test_datetime_index.py
@@ -113,16 +113,18 @@ def test_resample_basic_grouper(series):
@pytest.mark.parametrize(
'_index_start,_index_end,_index_name',
[('1/1/2000 00:00:00', '1/1/2000 00:13:00', 'index')])
-@pytest.mark.parametrize('kwargs', [
- dict(label='righttt'),
- dict(closed='righttt'),
- dict(convention='starttt')
+@pytest.mark.parametrize('keyword,value', [
+ ('label', 'righttt'),
+ ('closed', 'righttt'),
+ ('convention', 'starttt')
])
-def test_resample_string_kwargs(series, kwargs):
+def test_resample_string_kwargs(series, keyword, value):
# see gh-19303
# Check that wrong keyword argument strings raise an error
- with pytest.raises(ValueError, match='Unsupported value'):
- series.resample('5min', **kwargs)
+ msg = "Unsupported value {value} for `{keyword}`".format(
+ value=value, keyword=keyword)
+ with pytest.raises(ValueError, match=msg):
+ series.resample('5min', **({keyword: value}))
@pytest.mark.parametrize(
@@ -676,7 +678,7 @@ def test_asfreq_non_unique():
ts = Series(np.random.randn(len(rng2)), index=rng2)
msg = 'cannot reindex from a duplicate axis'
- with pytest.raises(Exception, match=msg):
+ with pytest.raises(ValueError, match=msg):
ts.asfreq('B')
diff --git a/pandas/tests/resample/test_period_index.py b/pandas/tests/resample/test_period_index.py
index c2fbb5bbb088c..8abdf9034527b 100644
--- a/pandas/tests/resample/test_period_index.py
+++ b/pandas/tests/resample/test_period_index.py
@@ -11,6 +11,7 @@
import pandas as pd
from pandas import DataFrame, Series, Timestamp
+from pandas.core.indexes.base import InvalidIndexError
from pandas.core.indexes.datetimes import date_range
from pandas.core.indexes.period import Period, PeriodIndex, period_range
from pandas.core.resample import _get_period_range_edges
@@ -72,17 +73,19 @@ def test_asfreq_fill_value(self, series):
@pytest.mark.parametrize('freq', ['H', '12H', '2D', 'W'])
@pytest.mark.parametrize('kind', [None, 'period', 'timestamp'])
- def test_selection(self, index, freq, kind):
+ @pytest.mark.parametrize('kwargs', [dict(on='date'), dict(level='d')])
+ def test_selection(self, index, freq, kind, kwargs):
# This is a bug, these should be implemented
# GH 14008
rng = np.arange(len(index), dtype=np.int64)
df = DataFrame({'date': index, 'a': rng},
index=pd.MultiIndex.from_arrays([rng, index],
names=['v', 'd']))
- with pytest.raises(NotImplementedError):
- df.resample(freq, on='date', kind=kind)
- with pytest.raises(NotImplementedError):
- df.resample(freq, level='d', kind=kind)
+ msg = ("Resampling from level= or on= selection with a PeriodIndex is"
+ r" not currently supported, use \.set_index\(\.\.\.\) to"
+ " explicitly set index")
+ with pytest.raises(NotImplementedError, match=msg):
+ df.resample(freq, kind=kind, **kwargs)
@pytest.mark.parametrize('month', MONTHS)
@pytest.mark.parametrize('meth', ['ffill', 'bfill'])
@@ -110,13 +113,20 @@ def test_basic_downsample(self, simple_period_range_series):
assert_series_equal(ts.resample('a-dec').mean(), result)
assert_series_equal(ts.resample('a').mean(), result)
- def test_not_subperiod(self, simple_period_range_series):
+ @pytest.mark.parametrize('rule,expected_error_msg', [
+ ('a-dec', '<YearEnd: month=12>'),
+ ('q-mar', '<QuarterEnd: startingMonth=3>'),
+ ('M', '<MonthEnd>'),
+ ('w-thu', '<Week: weekday=3>')
+ ])
+ def test_not_subperiod(
+ self, simple_period_range_series, rule, expected_error_msg):
# These are incompatible period rules for resampling
ts = simple_period_range_series('1/1/1990', '6/30/1995', freq='w-wed')
- pytest.raises(ValueError, lambda: ts.resample('a-dec').mean())
- pytest.raises(ValueError, lambda: ts.resample('q-mar').mean())
- pytest.raises(ValueError, lambda: ts.resample('M').mean())
- pytest.raises(ValueError, lambda: ts.resample('w-thu').mean())
+ msg = ("Frequency <Week: weekday=2> cannot be resampled to {}, as they"
+ " are not sub or super periods").format(expected_error_msg)
+ with pytest.raises(IncompatibleFrequency, match=msg):
+ ts.resample(rule).mean()
@pytest.mark.parametrize('freq', ['D', '2D'])
def test_basic_upsample(self, freq, simple_period_range_series):
@@ -212,8 +222,9 @@ def test_resample_same_freq(self, resample_method):
assert_series_equal(result, expected)
def test_resample_incompat_freq(self):
-
- with pytest.raises(IncompatibleFrequency):
+ msg = ("Frequency <MonthEnd> cannot be resampled to <Week: weekday=6>,"
+ " as they are not sub or super periods")
+ with pytest.raises(IncompatibleFrequency, match=msg):
Series(range(3), index=pd.period_range(
start='2000', periods=3, freq='M')).resample('W').mean()
@@ -373,7 +384,9 @@ def test_resample_fill_missing(self):
def test_cant_fill_missing_dups(self):
rng = PeriodIndex([2000, 2005, 2005, 2007, 2007], freq='A')
s = Series(np.random.randn(5), index=rng)
- pytest.raises(Exception, lambda: s.resample('A').ffill())
+ msg = "Reindexing only valid with uniquely valued Index objects"
+ with pytest.raises(InvalidIndexError, match=msg):
+ s.resample('A').ffill()
@pytest.mark.parametrize('freq', ['5min'])
@pytest.mark.parametrize('kind', ['period', None, 'timestamp'])
diff --git a/pandas/tests/resample/test_resample_api.py b/pandas/tests/resample/test_resample_api.py
index 69684daf05f3d..a694942cc4c40 100644
--- a/pandas/tests/resample/test_resample_api.py
+++ b/pandas/tests/resample/test_resample_api.py
@@ -113,16 +113,14 @@ def test_getitem():
test_frame.columns[[0, 1]])
-def test_select_bad_cols():
-
+@pytest.mark.parametrize('key', [['D'], ['A', 'D']])
+def test_select_bad_cols(key):
g = test_frame.resample('H')
- pytest.raises(KeyError, g.__getitem__, ['D'])
-
- pytest.raises(KeyError, g.__getitem__, ['A', 'D'])
- with pytest.raises(KeyError, match='^[^A]+$'):
- # A should not be referenced as a bad column...
- # will have to rethink regex if you change message!
- g[['A', 'D']]
+ # 'A' should not be referenced as a bad column...
+ # will have to rethink regex if you change message!
+ msg = r"^\"Columns not found: 'D'\"$"
+ with pytest.raises(KeyError, match=msg):
+ g[key]
def test_attribute_access():
@@ -216,7 +214,9 @@ def test_fillna():
result = r.fillna(method='bfill')
assert_series_equal(result, expected)
- with pytest.raises(ValueError):
+ msg = (r"Invalid fill method\. Expecting pad \(ffill\), backfill"
+ r" \(bfill\) or nearest\. Got 0")
+ with pytest.raises(ValueError, match=msg):
r.fillna(0)
@@ -437,12 +437,11 @@ def test_agg_misc():
# errors
# invalid names in the agg specification
+ msg = "\"Column 'B' does not exist!\""
for t in cases:
- with pytest.raises(KeyError):
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- t[['A']].agg({'A': ['sum', 'std'],
- 'B': ['mean', 'std']})
+ with pytest.raises(KeyError, match=msg):
+ t[['A']].agg({'A': ['sum', 'std'],
+ 'B': ['mean', 'std']})
def test_agg_nested_dicts():
@@ -464,11 +463,11 @@ def test_agg_nested_dicts():
df.groupby(pd.Grouper(freq='2D'))
]
+ msg = r"cannot perform renaming for r(1|2) with a nested dictionary"
for t in cases:
- def f():
+ with pytest.raises(pd.core.base.SpecificationError, match=msg):
t.aggregate({'r1': {'A': ['mean', 'sum']},
'r2': {'B': ['mean', 'sum']}})
- pytest.raises(ValueError, f)
for t in cases:
expected = pd.concat([t['A'].mean(), t['A'].std(), t['B'].mean(),
@@ -499,7 +498,8 @@ def test_try_aggregate_non_existing_column():
df = DataFrame(data).set_index('dt')
# Error as we don't have 'z' column
- with pytest.raises(KeyError):
+ msg = "\"Column 'z' does not exist!\""
+ with pytest.raises(KeyError, match=msg):
df.resample('30T').agg({'x': ['mean'],
'y': ['median'],
'z': ['sum']})
@@ -517,23 +517,29 @@ def test_selection_api_validation():
df_exp = DataFrame({'a': rng}, index=index)
# non DatetimeIndex
- with pytest.raises(TypeError):
+ msg = ("Only valid with DatetimeIndex, TimedeltaIndex or PeriodIndex,"
+ " but got an instance of 'Int64Index'")
+ with pytest.raises(TypeError, match=msg):
df.resample('2D', level='v')
- with pytest.raises(ValueError):
+ msg = "The Grouper cannot specify both a key and a level!"
+ with pytest.raises(ValueError, match=msg):
df.resample('2D', on='date', level='d')
- with pytest.raises(TypeError):
+ msg = "unhashable type: 'list'"
+ with pytest.raises(TypeError, match=msg):
df.resample('2D', on=['a', 'date'])
- with pytest.raises(KeyError):
+ msg = r"\"Level \['a', 'date'\] not found\""
+ with pytest.raises(KeyError, match=msg):
df.resample('2D', level=['a', 'date'])
# upsampling not allowed
- with pytest.raises(ValueError):
+ msg = ("Upsampling from level= or on= selection is not supported, use"
+ r" \.set_index\(\.\.\.\) to explicitly set index to datetime-like")
+ with pytest.raises(ValueError, match=msg):
df.resample('2D', level='d').asfreq()
-
- with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match=msg):
df.resample('2D', on='date').asfreq()
exp = df_exp.resample('2D').sum()
diff --git a/pandas/tests/resample/test_time_grouper.py b/pandas/tests/resample/test_time_grouper.py
index ec29b55ac9d67..a4eb7933738c0 100644
--- a/pandas/tests/resample/test_time_grouper.py
+++ b/pandas/tests/resample/test_time_grouper.py
@@ -112,7 +112,7 @@ def test_fails_on_no_datetime_index(name, func):
df = DataFrame({'a': np.random.randn(n)}, index=index)
msg = ("Only valid with DatetimeIndex, TimedeltaIndex "
- "or PeriodIndex, but got an instance of %r" % name)
+ "or PeriodIndex, but got an instance of '{}'".format(name))
with pytest.raises(TypeError, match=msg):
df.groupby(TimeGrouper('D'))
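The pattern applied throughout the diff, reduced to a toy example (not taken from the test suite):

```python
import pytest

def divide(a, b):
    return a / b

def test_divide_by_zero():
    # the context-manager form replaces bare pytest.raises(Err, func, *args)
    # and additionally asserts on the message via the `match` regex
    with pytest.raises(ZeroDivisionError, match="division by zero"):
        divide(1, 0)
```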
| xref #24332 | https://api.github.com/repos/pandas-dev/pandas/pulls/24977 | 2019-01-28T13:51:37Z | 2019-01-28T22:15:38Z | 2019-01-28T22:15:38Z | 2019-01-29T12:44:58Z |
fix for BUG: grouping with tz-aware: Values falls after last bin | diff --git a/doc/source/whatsnew/v0.24.1.rst b/doc/source/whatsnew/v0.24.1.rst
index 85a2ba5bb03b6..8f4c3982c745f 100644
--- a/doc/source/whatsnew/v0.24.1.rst
+++ b/doc/source/whatsnew/v0.24.1.rst
@@ -72,8 +72,7 @@ Bug Fixes
**Reshaping**
--
--
+- Bug in :meth:`DataFrame.groupby` with :class:`Grouper` when there is a time change (DST) and grouping frequency is ``'1d'`` (:issue:`24972`)
**Visualization**
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index 6822225273906..7723827ff478a 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -30,8 +30,7 @@
from pandas.core.indexes.timedeltas import TimedeltaIndex, timedelta_range
from pandas.tseries.frequencies import to_offset
-from pandas.tseries.offsets import (
- DateOffset, Day, Nano, Tick, delta_to_nanoseconds)
+from pandas.tseries.offsets import DateOffset, Day, Nano, Tick
_shared_docs_kwargs = dict()
@@ -1613,20 +1612,20 @@ def _get_timestamp_range_edges(first, last, offset, closed='left', base=0):
A tuple of length 2, containing the adjusted pd.Timestamp objects.
"""
if isinstance(offset, Tick):
- is_day = isinstance(offset, Day)
- day_nanos = delta_to_nanoseconds(timedelta(1))
-
- # #1165 and #24127
- if (is_day and not offset.nanos % day_nanos) or not is_day:
- first, last = _adjust_dates_anchored(first, last, offset,
- closed=closed, base=base)
- if is_day and first.tz is not None:
- # _adjust_dates_anchored assumes 'D' means 24H, but first/last
- # might contain a DST transition (23H, 24H, or 25H).
- # Ensure first/last snap to midnight.
- first = first.normalize()
- last = last.normalize()
- return first, last
+ if isinstance(offset, Day):
+ # _adjust_dates_anchored assumes 'D' means 24H, but first/last
+ # might contain a DST transition (23H, 24H, or 25H).
+ # So "pretend" the dates are naive when adjusting the endpoints
+ tz = first.tz
+ first = first.tz_localize(None)
+ last = last.tz_localize(None)
+
+ first, last = _adjust_dates_anchored(first, last, offset,
+ closed=closed, base=base)
+ if isinstance(offset, Day):
+ first = first.tz_localize(tz)
+ last = last.tz_localize(tz)
+ return first, last
else:
first = first.normalize()
diff --git a/pandas/tests/resample/test_datetime_index.py b/pandas/tests/resample/test_datetime_index.py
index 74487052f8982..856c4df5380e5 100644
--- a/pandas/tests/resample/test_datetime_index.py
+++ b/pandas/tests/resample/test_datetime_index.py
@@ -1278,6 +1278,21 @@ def test_resample_across_dst():
assert_frame_equal(result, expected)
+def test_groupby_with_dst_time_change():
+ # GH 24972
+ index = pd.DatetimeIndex([1478064900001000000, 1480037118776792000],
+ tz='UTC').tz_convert('America/Chicago')
+
+ df = pd.DataFrame([1, 2], index=index)
+ result = df.groupby(pd.Grouper(freq='1d')).last()
+ expected_index_values = pd.date_range('2016-11-02', '2016-11-24',
+ freq='d', tz='America/Chicago')
+
+ index = pd.DatetimeIndex(expected_index_values)
+ expected = pd.DataFrame([1.0] + ([np.nan] * 21) + [2.0], index=index)
+ assert_frame_equal(result, expected)
+
+
def test_resample_dst_anchor():
# 5172
dti = DatetimeIndex([datetime(2012, 11, 4, 23)], tz='US/Eastern')
| - [ ] closes #24972
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
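A minimal reproduction of the fixed scenario, adapted from the regression test added above (assuming this branch):

```python
import pandas as pd

# two timestamps straddling the 2016-11-06 US DST transition; daily grouping
# previously raised "Values falls after last bin"
index = pd.DatetimeIndex([1478064900001000000, 1480037118776792000],
                         tz='UTC').tz_convert('America/Chicago')
df = pd.DataFrame([1, 2], index=index)
result = df.groupby(pd.Grouper(freq='1d')).last()
print(result.shape)  # (23, 1): one bin per day, 2016-11-02 through 2016-11-24
```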
| https://api.github.com/repos/pandas-dev/pandas/pulls/24973 | 2019-01-28T00:59:40Z | 2019-01-29T15:54:38Z | 2019-01-29T15:54:38Z | 2019-01-29T16:47:23Z |
ENH: Expose symlog scaling in plotting API | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
index ccf5c43280765..be208a434f77b 100644
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -23,7 +23,7 @@ including other versions of pandas.
Other Enhancements
^^^^^^^^^^^^^^^^^^
-
+- :func:`DataFrame.plot` keywords ``logy``, ``logx`` and ``loglog`` can now accept the value ``'sym'`` for symlog scaling. (:issue:`24867`)
- Added support for ISO week year format ('%G-%V-%u') when parsing datetimes using :meth: `to_datetime` (:issue:`16607`)
- Indexing of ``DataFrame`` and ``Series`` now accepts zerodim ``np.ndarray`` (:issue:`24919`)
- :meth:`Timestamp.replace` now supports the ``fold`` argument to disambiguate DST transition times (:issue:`25017`)
diff --git a/pandas/plotting/_core.py b/pandas/plotting/_core.py
index 5ed6c2f4e14b6..e75e8bb4f8821 100644
--- a/pandas/plotting/_core.py
+++ b/pandas/plotting/_core.py
@@ -288,8 +288,10 @@ def _maybe_right_yaxis(self, ax, axes_num):
if not self._has_plotted_object(orig_ax): # no data on left y
orig_ax.get_yaxis().set_visible(False)
- if self.logy or self.loglog:
+ if self.logy is True or self.loglog is True:
new_ax.set_yscale('log')
+ elif self.logy == 'sym' or self.loglog == 'sym':
+ new_ax.set_yscale('symlog')
return new_ax
def _setup_subplots(self):
@@ -311,10 +313,24 @@ def _setup_subplots(self):
axes = _flatten(axes)
- if self.logx or self.loglog:
+ valid_log = {False, True, 'sym', None}
+ input_log = {self.logx, self.logy, self.loglog}
+ if input_log - valid_log:
+ invalid_log = next(iter((input_log - valid_log)))
+ raise ValueError(
+ "Boolean, None and 'sym' are valid options,"
+ " '{}' is given.".format(invalid_log)
+ )
+
+ if self.logx is True or self.loglog is True:
[a.set_xscale('log') for a in axes]
- if self.logy or self.loglog:
+ elif self.logx == 'sym' or self.loglog == 'sym':
+ [a.set_xscale('symlog') for a in axes]
+
+ if self.logy is True or self.loglog is True:
[a.set_yscale('log') for a in axes]
+ elif self.logy == 'sym' or self.loglog == 'sym':
+ [a.set_yscale('symlog') for a in axes]
self.fig = fig
self.axes = axes
@@ -1909,12 +1925,18 @@ def _plot(data, x=None, y=None, subplots=False,
Place legend on axis subplots
style : list or dict
matplotlib line style per column
- logx : bool, default False
- Use log scaling on x axis
- logy : bool, default False
- Use log scaling on y axis
- loglog : bool, default False
- Use log scaling on both x and y axes
+ logx : bool or 'sym', default False
+ Use log scaling or symlog scaling on x axis
+ .. versionchanged:: 0.25.0
+
+ logy : bool or 'sym' default False
+ Use log scaling or symlog scaling on y axis
+ .. versionchanged:: 0.25.0
+
+ loglog : bool or 'sym', default False
+ Use log scaling or symlog scaling on both x and y axes
+ .. versionchanged:: 0.25.0
+
xticks : sequence
Values to use for the xticks
yticks : sequence
diff --git a/pandas/tests/plotting/test_frame.py b/pandas/tests/plotting/test_frame.py
index 2b17377c7b9bc..768fb7519c1ce 100644
--- a/pandas/tests/plotting/test_frame.py
+++ b/pandas/tests/plotting/test_frame.py
@@ -248,16 +248,34 @@ def test_plot_xy(self):
# TODO add MultiIndex test
@pytest.mark.slow
- def test_logscales(self):
+ @pytest.mark.parametrize("input_log, expected_log", [
+ (True, 'log'),
+ ('sym', 'symlog')
+ ])
+ def test_logscales(self, input_log, expected_log):
df = DataFrame({'a': np.arange(100)}, index=np.arange(100))
- ax = df.plot(logy=True)
- self._check_ax_scales(ax, yaxis='log')
- ax = df.plot(logx=True)
- self._check_ax_scales(ax, xaxis='log')
+ ax = df.plot(logy=input_log)
+ self._check_ax_scales(ax, yaxis=expected_log)
+ assert ax.get_yscale() == expected_log
+
+ ax = df.plot(logx=input_log)
+ self._check_ax_scales(ax, xaxis=expected_log)
+ assert ax.get_xscale() == expected_log
+
+ ax = df.plot(loglog=input_log)
+ self._check_ax_scales(ax, xaxis=expected_log, yaxis=expected_log)
+ assert ax.get_xscale() == expected_log
+ assert ax.get_yscale() == expected_log
+
+ @pytest.mark.parametrize("input_param", ["logx", "logy", "loglog"])
+ def test_invalid_logscale(self, input_param):
+ # GH: 24867
+ df = DataFrame({'a': np.arange(100)}, index=np.arange(100))
- ax = df.plot(loglog=True)
- self._check_ax_scales(ax, xaxis='log', yaxis='log')
+ msg = "Boolean, None and 'sym' are valid options, 'sm' is given."
+ with pytest.raises(ValueError, match=msg):
+ df.plot(**{input_param: "sm"})
@pytest.mark.slow
def test_xcompat(self):
diff --git a/pandas/tests/plotting/test_series.py b/pandas/tests/plotting/test_series.py
index e384c578aa446..daa17c173ca08 100644
--- a/pandas/tests/plotting/test_series.py
+++ b/pandas/tests/plotting/test_series.py
@@ -571,16 +571,21 @@ def test_df_series_secondary_legend(self):
tm.close()
@pytest.mark.slow
- def test_secondary_logy(self):
+ @pytest.mark.parametrize("input_logy, expected_scale", [
+ (True, 'log'),
+ ('sym', 'symlog')
+ ])
+ def test_secondary_logy(self, input_logy, expected_scale):
# GH 25545
s1 = Series(np.random.randn(30))
s2 = Series(np.random.randn(30))
- ax1 = s1.plot(logy=True)
- ax2 = s2.plot(secondary_y=True, logy=True)
+ # GH 24980
+ ax1 = s1.plot(logy=input_logy)
+ ax2 = s2.plot(secondary_y=True, logy=input_logy)
- assert ax1.get_yscale() == 'log'
- assert ax2.get_yscale() == 'log'
+ assert ax1.get_yscale() == expected_scale
+ assert ax2.get_yscale() == expected_scale
@pytest.mark.slow
def test_plot_fails_with_dupe_color_and_style(self):
| - [x] closes #24867
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
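A usage sketch, assuming the patched branch; symlog is useful when values cross zero, where plain log scaling fails:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for the sketch

import numpy as np
import pandas as pd

df = pd.DataFrame({"a": np.arange(-50, 50)})
ax = df.plot(logy="sym")  # 'sym' maps to ax.set_yscale('symlog')
assert ax.get_yscale() == "symlog"
# df.plot(logy="smth")    # would raise: "Boolean, None and 'sym' are valid options..."
```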
| https://api.github.com/repos/pandas-dev/pandas/pulls/24968 | 2019-01-27T22:20:37Z | 2019-04-12T11:54:20Z | 2019-04-12T11:54:20Z | 2019-04-12T11:54:26Z |
REGR: Preserve order by default in Index.intersection | diff --git a/doc/source/whatsnew/v0.24.1.rst b/doc/source/whatsnew/v0.24.1.rst
index de33ce64c1597..a9d13b556ce8b 100644
--- a/doc/source/whatsnew/v0.24.1.rst
+++ b/doc/source/whatsnew/v0.24.1.rst
@@ -22,6 +22,7 @@ Fixed Regressions
- Bug in :meth:`DataFrame.itertuples` with ``records`` orient raising an ``AttributeError`` when the ``DataFrame`` contained more than 255 columns (:issue:`24939`)
- Bug in :meth:`DataFrame.itertuples` orient converting integer column names to strings prepended with an underscore (:issue:`24940`)
+- Fixed regression in :class:`Index.intersection` incorrectly sorting the values by default (:issue:`24959`).
.. _whatsnew_0241.enhancements:
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 767da81c5c43a..3d176012df22b 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -2333,7 +2333,7 @@ def union(self, other, sort=True):
def _wrap_setop_result(self, other, result):
return self._constructor(result, name=get_op_result_name(self, other))
- def intersection(self, other, sort=True):
+ def intersection(self, other, sort=False):
"""
Form the intersection of two Index objects.
@@ -2342,11 +2342,15 @@ def intersection(self, other, sort=True):
Parameters
----------
other : Index or array-like
- sort : bool, default True
+ sort : bool, default False
Sort the resulting index if possible
.. versionadded:: 0.24.0
+ .. versionchanged:: 0.24.1
+
+ Changed the default from ``True`` to ``False``.
+
Returns
-------
intersection : Index
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index cc373c06efcc9..ef941ab87ba12 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -594,7 +594,7 @@ def _wrap_setop_result(self, other, result):
name = get_op_result_name(self, other)
return self._shallow_copy(result, name=name, freq=None, tz=self.tz)
- def intersection(self, other, sort=True):
+ def intersection(self, other, sort=False):
"""
Specialized intersection for DatetimeIndex objects. May be much faster
than Index.intersection
@@ -602,6 +602,14 @@ def intersection(self, other, sort=True):
Parameters
----------
other : DatetimeIndex or array-like
+ sort : bool, default True
+ Sort the resulting index if possible.
+
+ .. versionadded:: 0.24.0
+
+ .. versionchanged:: 0.24.1
+
+ Changed the default from ``True`` to ``False``.
Returns
-------
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index 0210560aaa21f..736de94991181 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -1093,8 +1093,8 @@ def equals(self, other):
def overlaps(self, other):
return self._data.overlaps(other)
- def _setop(op_name):
- def func(self, other, sort=True):
+ def _setop(op_name, sort=True):
+ def func(self, other, sort=sort):
other = self._as_like_interval_index(other)
# GH 19016: ensure set op will not return a prohibited dtype
@@ -1128,7 +1128,7 @@ def is_all_dates(self):
return False
union = _setop('union')
- intersection = _setop('intersection')
+ intersection = _setop('intersection', sort=False)
difference = _setop('difference')
symmetric_difference = _setop('symmetric_difference')
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index e4d01a40bd181..16af3fe8eef26 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -2910,7 +2910,7 @@ def union(self, other, sort=True):
return MultiIndex.from_arrays(lzip(*uniq_tuples), sortorder=0,
names=result_names)
- def intersection(self, other, sort=True):
+ def intersection(self, other, sort=False):
"""
Form the intersection of two MultiIndex objects.
@@ -2922,6 +2922,10 @@ def intersection(self, other, sort=True):
.. versionadded:: 0.24.0
+ .. versionchanged:: 0.24.1
+
+ Changed the default from ``True`` to ``False``.
+
Returns
-------
Index
diff --git a/pandas/core/indexes/range.py b/pandas/core/indexes/range.py
index ebf5b279563cf..e17a6a682af40 100644
--- a/pandas/core/indexes/range.py
+++ b/pandas/core/indexes/range.py
@@ -343,7 +343,7 @@ def equals(self, other):
return super(RangeIndex, self).equals(other)
- def intersection(self, other, sort=True):
+ def intersection(self, other, sort=False):
"""
Form the intersection of two Index objects.
@@ -355,6 +355,10 @@ def intersection(self, other, sort=True):
.. versionadded:: 0.24.0
+ .. versionchanged:: 0.24.1
+
+ Changed the default from ``True`` to ``False``.
+
Returns
-------
intersection : Index
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index f3e9d835c7391..20e439de46bde 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -765,6 +765,11 @@ def test_intersect_str_dates(self, sort):
assert len(result) == 0
+ def test_intersect_nosort(self):
+ result = pd.Index(['c', 'b', 'a']).intersection(['b', 'a'])
+ expected = pd.Index(['b', 'a'])
+ tm.assert_index_equal(result, expected)
+
@pytest.mark.parametrize("sort", [True, False])
def test_chained_union(self, sort):
# Chained unions handles names correctly
@@ -1595,20 +1600,27 @@ def test_drop_tuple(self, values, to_drop):
for drop_me in to_drop[1], [to_drop[1]]:
pytest.raises(KeyError, removed.drop, drop_me)
- @pytest.mark.parametrize("method,expected", [
+ @pytest.mark.parametrize("method,expected,sort", [
+ ('intersection', np.array([(1, 'A'), (2, 'A'), (1, 'B'), (2, 'B')],
+ dtype=[('num', int), ('let', 'a1')]),
+ False),
+
('intersection', np.array([(1, 'A'), (1, 'B'), (2, 'A'), (2, 'B')],
- dtype=[('num', int), ('let', 'a1')])),
+ dtype=[('num', int), ('let', 'a1')]),
+ True),
+
('union', np.array([(1, 'A'), (1, 'B'), (1, 'C'), (2, 'A'), (2, 'B'),
- (2, 'C')], dtype=[('num', int), ('let', 'a1')]))
+ (2, 'C')], dtype=[('num', int), ('let', 'a1')]),
+ True)
])
- def test_tuple_union_bug(self, method, expected):
+ def test_tuple_union_bug(self, method, expected, sort):
index1 = Index(np.array([(1, 'A'), (2, 'A'), (1, 'B'), (2, 'B')],
dtype=[('num', int), ('let', 'a1')]))
index2 = Index(np.array([(1, 'A'), (2, 'A'), (1, 'B'),
(2, 'B'), (1, 'C'), (2, 'C')],
dtype=[('num', int), ('let', 'a1')]))
- result = getattr(index1, method)(index2)
+ result = getattr(index1, method)(index2, sort=sort)
assert result.ndim == 1
expected = Index(expected)
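The restored default in action, mirroring the ``test_intersect_nosort`` case added above (assuming this branch):

```python
import pandas as pd

# with sort=False as the default again, intersection preserves the calling
# Index's order instead of sorting the result
result = pd.Index(['c', 'b', 'a']).intersection(['b', 'a'])
print(list(result))  # ['b', 'a']
```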
| Closes https://github.com/pandas-dev/pandas/issues/24959 | https://api.github.com/repos/pandas-dev/pandas/pulls/24967 | 2019-01-27T22:16:04Z | 2019-01-29T21:43:08Z | 2019-01-29T21:43:08Z | 2019-01-31T14:34:42Z |
Fixed itertuples usage in to_dict | diff --git a/doc/source/whatsnew/v0.24.1.rst b/doc/source/whatsnew/v0.24.1.rst
index 3ac2ed73ea53f..de33ce64c1597 100644
--- a/doc/source/whatsnew/v0.24.1.rst
+++ b/doc/source/whatsnew/v0.24.1.rst
@@ -15,6 +15,13 @@ Whats New in 0.24.1 (February XX, 2019)
These are the changes in pandas 0.24.1. See :ref:`release` for a full changelog
including other versions of pandas.
+.. _whatsnew_0241.regressions:
+
+Fixed Regressions
+^^^^^^^^^^^^^^^^^
+
+- Bug in :meth:`DataFrame.to_dict` with ``records`` orient raising an ``AttributeError`` when the ``DataFrame`` contained more than 255 columns (:issue:`24939`)
+- Bug in :meth:`DataFrame.to_dict` with ``records`` orient converting integer column names to strings prepended with an underscore (:issue:`24940`)
.. _whatsnew_0241.enhancements:
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index b4f79bda25517..28c6f3c23a3ce 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -847,7 +847,7 @@ def itertuples(self, index=True, name="Pandas"):
----------
index : bool, default True
If True, return the index as the first element of the tuple.
- name : str, default "Pandas"
+ name : str or None, default "Pandas"
The name of the returned namedtuples or None to return regular
tuples.
@@ -1290,23 +1290,26 @@ def to_dict(self, orient='dict', into=dict):
('columns', self.columns.tolist()),
('data', [
list(map(com.maybe_box_datetimelike, t))
- for t in self.itertuples(index=False)]
- )))
+ for t in self.itertuples(index=False, name=None)
+ ])))
elif orient.lower().startswith('s'):
return into_c((k, com.maybe_box_datetimelike(v))
for k, v in compat.iteritems(self))
elif orient.lower().startswith('r'):
+ columns = self.columns.tolist()
+ rows = (dict(zip(columns, row))
+ for row in self.itertuples(index=False, name=None))
return [
into_c((k, com.maybe_box_datetimelike(v))
- for k, v in compat.iteritems(row._asdict()))
- for row in self.itertuples(index=False)]
+ for k, v in compat.iteritems(row))
+ for row in rows]
elif orient.lower().startswith('i'):
if not self.index.is_unique:
raise ValueError(
"DataFrame index must be unique for orient='index'."
)
return into_c((t[0], dict(zip(self.columns, t[1:])))
- for t in self.itertuples())
+ for t in self.itertuples(name=None))
else:
raise ValueError("orient '{o}' not understood".format(o=orient))
diff --git a/pandas/tests/frame/test_convert_to.py b/pandas/tests/frame/test_convert_to.py
index ddf85136126a1..7b98395dd6dec 100644
--- a/pandas/tests/frame/test_convert_to.py
+++ b/pandas/tests/frame/test_convert_to.py
@@ -488,3 +488,17 @@ def test_to_dict_index_dtypes(self, into, expected):
result = DataFrame.from_dict(result, orient='index')[cols]
expected = DataFrame.from_dict(expected, orient='index')[cols]
tm.assert_frame_equal(result, expected)
+
+ def test_to_dict_numeric_names(self):
+ # https://github.com/pandas-dev/pandas/issues/24940
+ df = DataFrame({str(i): [i] for i in range(5)})
+ result = set(df.to_dict('records')[0].keys())
+ expected = set(df.columns)
+ assert result == expected
+
+ def test_to_dict_wide(self):
+ # https://github.com/pandas-dev/pandas/issues/24939
+ df = DataFrame({('A_{:d}'.format(i)): [i] for i in range(256)})
+ result = df.to_dict('records')[0]
+ expected = {'A_{:d}'.format(i): i for i in range(256)}
+ assert result == expected
| Closes https://github.com/pandas-dev/pandas/issues/24940
Closes https://github.com/pandas-dev/pandas/issues/24939
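For context, a small illustration of the two ``itertuples`` pitfalls that passing ``name=None`` sidesteps (expected output in comments, assuming this branch):

```python
import pandas as pd

df = pd.DataFrame({0: [1], 1: [2]})

# namedtuples must rename non-identifier columns, here to positional names
print(next(df.itertuples(index=False)))             # Pandas(_0=1, _1=2)

# name=None yields plain tuples, so the real labels can be zipped back in and
# the 255-field namedtuple limit (pre Python 3.7) no longer bites wide frames
print(next(df.itertuples(index=False, name=None)))  # (1, 2)
print(df.to_dict('records'))                        # [{0: 1, 1: 2}]
```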
| https://api.github.com/repos/pandas-dev/pandas/pulls/24965 | 2019-01-27T21:53:37Z | 2019-01-28T15:31:56Z | 2019-01-28T15:31:55Z | 2019-01-28T15:32:01Z |
DEPR: Fixed warning for implicit registration | diff --git a/doc/source/whatsnew/v0.24.1.rst b/doc/source/whatsnew/v0.24.1.rst
index de33ce64c1597..78eba1fe5d025 100644
--- a/doc/source/whatsnew/v0.24.1.rst
+++ b/doc/source/whatsnew/v0.24.1.rst
@@ -74,6 +74,11 @@ Bug Fixes
- Bug in :func:`merge` when merging by index name would sometimes result in an incorrectly numbered index (:issue:`24212`)
+**Visualization**
+
+- Fixed the warning for implicitly registered matplotlib converters not showing. See :ref:`whatsnew_0211.converters` for more (:issue:`24963`).
+
+
**Other**
-
diff --git a/pandas/plotting/_core.py b/pandas/plotting/_core.py
index e543ab88f53b2..85549bafa8dc0 100644
--- a/pandas/plotting/_core.py
+++ b/pandas/plotting/_core.py
@@ -39,7 +39,7 @@
else:
_HAS_MPL = True
if get_option('plotting.matplotlib.register_converters'):
- _converter.register(explicit=True)
+ _converter.register(explicit=False)
def _raise_if_no_mpl():
| Reminder: We want people to explicitly call `pandas.plotting.register_matplotlib_converters()` when using a matplotlib plotting method with a pandas object. We implicitly call it as part of Series/Frame.plot.
```python
In [1]: import pandas as pd
In [2]: import matplotlib.pyplot as plt
In [3]: fig, ax = plt.subplots()
In [4]: s = pd.Series(range(12), index=pd.date_range('2017', periods=12))
In [5]: ax.plot(s)
## -- End pasted text --
Using matplotlib backend: TkAgg
/Users/taugspurger/sandbox/pandas/pandas/plotting/_converter.py:129: FutureWarning: Using an implicitly registered datetime converter for a matplotlib plotting method. The converter was registered by pandas on import. Future versions of pandas will require you to explicitly register matplotlib converters.
To register the converters:
>>> from pandas.plotting import register_matplotlib_converters
>>> register_matplotlib_converters()
warnings.warn(msg, FutureWarning)
Out[1]: [<matplotlib.lines.Line2D at 0x1077d7b38>]
```
I'm not quite sure what happened in
https://github.com/pandas-dev/pandas/pull/18307, but the warning wasn't showing up by default.
Closes https://github.com/pandas-dev/pandas/issues/24963 | https://api.github.com/repos/pandas-dev/pandas/pulls/24964 | 2019-01-27T21:31:00Z | 2019-01-29T12:23:33Z | 2019-01-29T12:23:33Z | 2019-05-31T11:11:50Z |
fix+test to_timedelta('NaT', box=False) | diff --git a/doc/source/whatsnew/v0.24.1.rst b/doc/source/whatsnew/v0.24.1.rst
index 828c35c10e958..57fdff041db28 100644
--- a/doc/source/whatsnew/v0.24.1.rst
+++ b/doc/source/whatsnew/v0.24.1.rst
@@ -66,7 +66,7 @@ Bug Fixes
-
**Timedelta**
-
+- Bug in :func:`to_timedelta` with `box=False` incorrectly returning a ``datetime64`` object instead of a ``timedelta64`` object (:issue:`24961`)
-
-
-
diff --git a/pandas/core/tools/timedeltas.py b/pandas/core/tools/timedeltas.py
index e3428146b91d8..ddd21d0f62d08 100644
--- a/pandas/core/tools/timedeltas.py
+++ b/pandas/core/tools/timedeltas.py
@@ -120,7 +120,8 @@ def _coerce_scalar_to_timedelta_type(r, unit='ns', box=True, errors='raise'):
try:
result = Timedelta(r, unit)
if not box:
- result = result.asm8
+ # explicitly view as timedelta64 for case when result is pd.NaT
+ result = result.asm8.view('timedelta64[ns]')
except ValueError:
if errors == 'raise':
raise
diff --git a/pandas/tests/scalar/timedelta/test_timedelta.py b/pandas/tests/scalar/timedelta/test_timedelta.py
index 9b5fdfb06a9fa..e1838e0160fec 100644
--- a/pandas/tests/scalar/timedelta/test_timedelta.py
+++ b/pandas/tests/scalar/timedelta/test_timedelta.py
@@ -309,8 +309,13 @@ def test_iso_conversion(self):
assert to_timedelta('P0DT0H0M1S') == expected
def test_nat_converters(self):
- assert to_timedelta('nat', box=False).astype('int64') == iNaT
- assert to_timedelta('nan', box=False).astype('int64') == iNaT
+ result = to_timedelta('nat', box=False)
+ assert result.dtype.kind == 'm'
+ assert result.astype('int64') == iNaT
+
+ result = to_timedelta('nan', box=False)
+ assert result.dtype.kind == 'm'
+ assert result.astype('int64') == iNaT
@pytest.mark.parametrize('units, np_unit',
[(['Y', 'y'], 'Y'),
| - [x] closes #24957
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
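A quick check of the fix (sketch, assuming this branch); per the comment added in the diff, ``pd.NaT``'s ``.asm8`` comes back as a ``datetime64``, hence the explicit re-view:

```python
import pandas as pd

result = pd.to_timedelta("NaT", box=False)
print(result.dtype)            # timedelta64[ns] after the fix (was datetime64[ns])
print(result.astype("int64"))  # iNaT sentinel: -9223372036854775808
```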
| https://api.github.com/repos/pandas-dev/pandas/pulls/24961 | 2019-01-27T20:21:43Z | 2019-01-30T12:42:50Z | 2019-01-30T12:42:50Z | 2019-01-30T14:39:46Z |
STY: use pytest.raises context syntax (indexing) | diff --git a/pandas/tests/indexing/multiindex/test_loc.py b/pandas/tests/indexing/multiindex/test_loc.py
index ea451d40eb5d3..073d40001a16b 100644
--- a/pandas/tests/indexing/multiindex/test_loc.py
+++ b/pandas/tests/indexing/multiindex/test_loc.py
@@ -123,10 +123,12 @@ def test_loc_multiindex(self):
tm.assert_frame_equal(rs, xp)
# missing label
- pytest.raises(KeyError, lambda: mi_int.loc[2])
+ with pytest.raises(KeyError, match=r"^2L?$"):
+ mi_int.loc[2]
with catch_warnings(record=True):
# GH 21593
- pytest.raises(KeyError, lambda: mi_int.ix[2])
+ with pytest.raises(KeyError, match=r"^2L?$"):
+ mi_int.ix[2]
def test_loc_multiindex_indexer_none(self):
diff --git a/pandas/tests/indexing/multiindex/test_partial.py b/pandas/tests/indexing/multiindex/test_partial.py
index 2e37ebe4a0629..473463def2b87 100644
--- a/pandas/tests/indexing/multiindex/test_partial.py
+++ b/pandas/tests/indexing/multiindex/test_partial.py
@@ -104,8 +104,8 @@ def test_getitem_partial_column_select(self):
result = df.ix[('a', 'y'), [1, 0]]
tm.assert_frame_equal(result, expected)
- pytest.raises(KeyError, df.loc.__getitem__,
- (('a', 'foo'), slice(None, None)))
+ with pytest.raises(KeyError, match=r"\('a', 'foo'\)"):
+ df.loc[('a', 'foo'), :]
def test_partial_set(
self, multiindex_year_month_day_dataframe_random_data):
diff --git a/pandas/tests/indexing/multiindex/test_slice.py b/pandas/tests/indexing/multiindex/test_slice.py
index fcecb2b454eb6..db7d079186708 100644
--- a/pandas/tests/indexing/multiindex/test_slice.py
+++ b/pandas/tests/indexing/multiindex/test_slice.py
@@ -107,7 +107,8 @@ def test_per_axis_per_level_getitem(self):
# ambiguous cases
# these can be multiply interpreted (e.g. in this case
# as df.loc[slice(None),[1]] as well
- pytest.raises(KeyError, lambda: df.loc[slice(None), [1]])
+ with pytest.raises(KeyError, match=r"'\[1\] not in index'"):
+ df.loc[slice(None), [1]]
result = df.loc[(slice(None), [1]), :]
expected = df.iloc[[0, 3]]
diff --git a/pandas/tests/indexing/test_categorical.py b/pandas/tests/indexing/test_categorical.py
index b7443e242137b..317aac1766cf8 100644
--- a/pandas/tests/indexing/test_categorical.py
+++ b/pandas/tests/indexing/test_categorical.py
@@ -53,23 +53,20 @@ def test_loc_scalar(self):
assert_frame_equal(df, expected)
# value not in the categories
- pytest.raises(KeyError, lambda: df.loc['d'])
+ with pytest.raises(KeyError, match=r"^'d'$"):
+ df.loc['d']
- def f():
+ msg = "cannot append a non-category item to a CategoricalIndex"
+ with pytest.raises(TypeError, match=msg):
df.loc['d'] = 10
- pytest.raises(TypeError, f)
-
- def f():
+ msg = ("cannot insert an item into a CategoricalIndex that is not"
+ " already an existing category")
+ with pytest.raises(TypeError, match=msg):
df.loc['d', 'A'] = 10
-
- pytest.raises(TypeError, f)
-
- def f():
+ with pytest.raises(TypeError, match=msg):
df.loc['d', 'C'] = 10
- pytest.raises(TypeError, f)
-
def test_getitem_scalar(self):
cats = Categorical([Timestamp('12-31-1999'),
@@ -318,7 +315,8 @@ def test_loc_listlike(self):
assert_frame_equal(result, expected, check_index_type=True)
# element in the categories but not in the values
- pytest.raises(KeyError, lambda: self.df2.loc['e'])
+ with pytest.raises(KeyError, match=r"^'e'$"):
+ self.df2.loc['e']
# assign is ok
df = self.df2.copy()
@@ -616,22 +614,29 @@ def test_reindexing(self):
assert_frame_equal(result, expected, check_index_type=True)
# passed duplicate indexers are not allowed
- pytest.raises(ValueError, lambda: self.df2.reindex(['a', 'a']))
+ msg = "cannot reindex with a non-unique indexer"
+ with pytest.raises(ValueError, match=msg):
+ self.df2.reindex(['a', 'a'])
# args NotImplemented ATM
- pytest.raises(NotImplementedError,
- lambda: self.df2.reindex(['a'], method='ffill'))
- pytest.raises(NotImplementedError,
- lambda: self.df2.reindex(['a'], level=1))
- pytest.raises(NotImplementedError,
- lambda: self.df2.reindex(['a'], limit=2))
+ msg = r"argument {} is not implemented for CategoricalIndex\.reindex"
+ with pytest.raises(NotImplementedError, match=msg.format('method')):
+ self.df2.reindex(['a'], method='ffill')
+ with pytest.raises(NotImplementedError, match=msg.format('level')):
+ self.df2.reindex(['a'], level=1)
+ with pytest.raises(NotImplementedError, match=msg.format('limit')):
+ self.df2.reindex(['a'], limit=2)
def test_loc_slice(self):
# slicing
# not implemented ATM
# GH9748
- pytest.raises(TypeError, lambda: self.df.loc[1:5])
+ msg = ("cannot do slice indexing on {klass} with these "
+ r"indexers \[1\] of {kind}".format(
+ klass=str(CategoricalIndex), kind=str(int)))
+ with pytest.raises(TypeError, match=msg):
+ self.df.loc[1:5]
# result = df.loc[1:5]
# expected = df.iloc[[1,2,3,4]]
@@ -679,8 +684,11 @@ def test_boolean_selection(self):
# categories=[3, 2, 1],
# ordered=False,
# name=u'B')
- pytest.raises(TypeError, lambda: df4[df4.index < 2])
- pytest.raises(TypeError, lambda: df4[df4.index > 1])
+ msg = "Unordered Categoricals can only compare equality or not"
+ with pytest.raises(TypeError, match=msg):
+ df4[df4.index < 2]
+ with pytest.raises(TypeError, match=msg):
+ df4[df4.index > 1]
def test_indexing_with_category(self):
diff --git a/pandas/tests/indexing/test_chaining_and_caching.py b/pandas/tests/indexing/test_chaining_and_caching.py
index e38c1b16b3b60..be0d9c5cf24ca 100644
--- a/pandas/tests/indexing/test_chaining_and_caching.py
+++ b/pandas/tests/indexing/test_chaining_and_caching.py
@@ -302,11 +302,11 @@ def test_setting_with_copy_bug(self):
'c': ['a', 'b', np.nan, 'd']})
mask = pd.isna(df.c)
- def f():
+ msg = ("A value is trying to be set on a copy of a slice from a"
+ " DataFrame")
+ with pytest.raises(com.SettingWithCopyError, match=msg):
df[['c']][mask] = df[['b']][mask]
- pytest.raises(com.SettingWithCopyError, f)
-
# invalid warning as we are returning a new object
# GH 8730
df1 = DataFrame({'x': Series(['a', 'b', 'c']),
diff --git a/pandas/tests/indexing/test_floats.py b/pandas/tests/indexing/test_floats.py
index de91b8f4a796c..b9b47338c9de2 100644
--- a/pandas/tests/indexing/test_floats.py
+++ b/pandas/tests/indexing/test_floats.py
@@ -6,7 +6,7 @@
import pytest
from pandas import (
- DataFrame, Float64Index, Index, Int64Index, RangeIndex, Series)
+ DataFrame, Float64Index, Index, Int64Index, RangeIndex, Series, compat)
import pandas.util.testing as tm
from pandas.util.testing import assert_almost_equal, assert_series_equal
@@ -54,9 +54,11 @@ def test_scalar_error(self):
with pytest.raises(TypeError, match=msg):
s.iloc[3.0]
- def f():
+ msg = ("cannot do positional indexing on {klass} with these "
+ r"indexers \[3\.0\] of {kind}".format(
+ klass=type(i), kind=str(float)))
+ with pytest.raises(TypeError, match=msg):
s.iloc[3.0] = 0
- pytest.raises(TypeError, f)
@ignore_ix
def test_scalar_non_numeric(self):
@@ -82,35 +84,46 @@ def test_scalar_non_numeric(self):
(lambda x: x.iloc, False),
(lambda x: x, True)]:
- def f():
- with catch_warnings(record=True):
- idxr(s)[3.0]
-
            # getitem on a DataFrame is a KeyError as it is indexing
# via labels on the columns
if getitem and isinstance(s, DataFrame):
error = KeyError
+ msg = r"^3(\.0)?$"
else:
error = TypeError
- pytest.raises(error, f)
+ msg = (r"cannot do (label|index|positional) indexing"
+ r" on {klass} with these indexers \[3\.0\] of"
+ r" {kind}|"
+ "Cannot index by location index with a"
+ " non-integer key"
+ .format(klass=type(i), kind=str(float)))
+ with catch_warnings(record=True):
+ with pytest.raises(error, match=msg):
+ idxr(s)[3.0]
# label based can be a TypeError or KeyError
- def f():
- s.loc[3.0]
-
if s.index.inferred_type in ['string', 'unicode', 'mixed']:
error = KeyError
+ msg = r"^3$"
else:
error = TypeError
- pytest.raises(error, f)
+ msg = (r"cannot do (label|index) indexing"
+ r" on {klass} with these indexers \[3\.0\] of"
+ r" {kind}"
+ .format(klass=type(i), kind=str(float)))
+ with pytest.raises(error, match=msg):
+ s.loc[3.0]
# contains
assert 3.0 not in s
# setting with a float fails with iloc
- def f():
+ msg = (r"cannot do (label|index|positional) indexing"
+ r" on {klass} with these indexers \[3\.0\] of"
+ r" {kind}"
+ .format(klass=type(i), kind=str(float)))
+ with pytest.raises(TypeError, match=msg):
s.iloc[3.0] = 0
- pytest.raises(TypeError, f)
# setting with an indexer
if s.index.inferred_type in ['categorical']:
@@ -145,7 +158,12 @@ def f():
        # falls back to position selection, series only
s = Series(np.arange(len(i)), index=i)
s[3]
- pytest.raises(TypeError, lambda: s[3.0])
+ msg = (r"cannot do (label|index) indexing"
+ r" on {klass} with these indexers \[3\.0\] of"
+ r" {kind}"
+ .format(klass=type(i), kind=str(float)))
+ with pytest.raises(TypeError, match=msg):
+ s[3.0]
@ignore_ix
def test_scalar_with_mixed(self):
@@ -153,19 +171,23 @@ def test_scalar_with_mixed(self):
s2 = Series([1, 2, 3], index=['a', 'b', 'c'])
s3 = Series([1, 2, 3], index=['a', 'b', 1.5])
- # lookup in a pure string index
+        # lookup in a pure string index
# with an invalid indexer
for idxr in [lambda x: x.ix,
lambda x: x,
lambda x: x.iloc]:
- def f():
- with catch_warnings(record=True):
+ msg = (r"cannot do label indexing"
+ r" on {klass} with these indexers \[1\.0\] of"
+ r" {kind}|"
+ "Cannot index by location index with a non-integer key"
+ .format(klass=str(Index), kind=str(float)))
+ with catch_warnings(record=True):
+ with pytest.raises(TypeError, match=msg):
idxr(s2)[1.0]
- pytest.raises(TypeError, f)
-
- pytest.raises(KeyError, lambda: s2.loc[1.0])
+ with pytest.raises(KeyError, match=r"^1$"):
+ s2.loc[1.0]
result = s2.loc['b']
expected = 2
@@ -175,11 +197,13 @@ def f():
# indexing
for idxr in [lambda x: x]:
- def f():
+ msg = (r"cannot do label indexing"
+ r" on {klass} with these indexers \[1\.0\] of"
+ r" {kind}"
+ .format(klass=str(Index), kind=str(float)))
+ with pytest.raises(TypeError, match=msg):
idxr(s3)[1.0]
- pytest.raises(TypeError, f)
-
result = idxr(s3)[1]
expected = 2
assert result == expected
@@ -189,17 +213,22 @@ def f():
for idxr in [lambda x: x.ix]:
with catch_warnings(record=True):
- def f():
+ msg = (r"cannot do label indexing"
+ r" on {klass} with these indexers \[1\.0\] of"
+ r" {kind}"
+ .format(klass=str(Index), kind=str(float)))
+ with pytest.raises(TypeError, match=msg):
idxr(s3)[1.0]
- pytest.raises(TypeError, f)
-
result = idxr(s3)[1]
expected = 2
assert result == expected
- pytest.raises(TypeError, lambda: s3.iloc[1.0])
- pytest.raises(KeyError, lambda: s3.loc[1.0])
+ msg = "Cannot index by location index with a non-integer key"
+ with pytest.raises(TypeError, match=msg):
+ s3.iloc[1.0]
+ with pytest.raises(KeyError, match=r"^1$"):
+ s3.loc[1.0]
result = s3.loc[1.5]
expected = 3
@@ -280,16 +309,14 @@ def test_scalar_float(self):
# setting
s2 = s.copy()
- def f():
- with catch_warnings(record=True):
- idxr(s2)[indexer] = expected
with catch_warnings(record=True):
result = idxr(s2)[indexer]
self.check(result, s, 3, getitem)
# random integer is a KeyError
with catch_warnings(record=True):
- pytest.raises(KeyError, lambda: idxr(s)[3.5])
+ with pytest.raises(KeyError, match=r"^3\.5$"):
+ idxr(s)[3.5]
# contains
assert 3.0 in s
@@ -303,11 +330,16 @@ def f():
self.check(result, s, 3, False)
# iloc raises with a float
- pytest.raises(TypeError, lambda: s.iloc[3.0])
+ msg = "Cannot index by location index with a non-integer key"
+ with pytest.raises(TypeError, match=msg):
+ s.iloc[3.0]
- def g():
+ msg = (r"cannot do positional indexing"
+ r" on {klass} with these indexers \[3\.0\] of"
+ r" {kind}"
+ .format(klass=str(Float64Index), kind=str(float)))
+ with pytest.raises(TypeError, match=msg):
s2.iloc[3.0] = 0
- pytest.raises(TypeError, g)
@ignore_ix
def test_slice_non_numeric(self):
@@ -329,37 +361,55 @@ def test_slice_non_numeric(self):
slice(3, 4.0),
slice(3.0, 4.0)]:
- def f():
+ msg = ("cannot do slice indexing"
+ r" on {klass} with these indexers \[(3|4)\.0\] of"
+ " {kind}"
+ .format(klass=type(index), kind=str(float)))
+ with pytest.raises(TypeError, match=msg):
s.iloc[l]
- pytest.raises(TypeError, f)
for idxr in [lambda x: x.ix,
lambda x: x.loc,
lambda x: x.iloc,
lambda x: x]:
- def f():
- with catch_warnings(record=True):
+ msg = ("cannot do slice indexing"
+ r" on {klass} with these indexers"
+ r" \[(3|4)(\.0)?\]"
+ r" of ({kind_float}|{kind_int})"
+ .format(klass=type(index),
+ kind_float=str(float),
+ kind_int=str(int)))
+ with catch_warnings(record=True):
+ with pytest.raises(TypeError, match=msg):
idxr(s)[l]
- pytest.raises(TypeError, f)
# setitem
for l in [slice(3.0, 4),
slice(3, 4.0),
slice(3.0, 4.0)]:
- def f():
+ msg = ("cannot do slice indexing"
+ r" on {klass} with these indexers \[(3|4)\.0\] of"
+ " {kind}"
+ .format(klass=type(index), kind=str(float)))
+ with pytest.raises(TypeError, match=msg):
s.iloc[l] = 0
- pytest.raises(TypeError, f)
for idxr in [lambda x: x.ix,
lambda x: x.loc,
lambda x: x.iloc,
lambda x: x]:
- def f():
- with catch_warnings(record=True):
+ msg = ("cannot do slice indexing"
+ r" on {klass} with these indexers"
+ r" \[(3|4)(\.0)?\]"
+ r" of ({kind_float}|{kind_int})"
+ .format(klass=type(index),
+ kind_float=str(float),
+ kind_int=str(int)))
+ with catch_warnings(record=True):
+ with pytest.raises(TypeError, match=msg):
idxr(s)[l] = 0
- pytest.raises(TypeError, f)
@ignore_ix
def test_slice_integer(self):
@@ -396,11 +446,13 @@ def test_slice_integer(self):
self.check(result, s, indexer, False)
# positional indexing
- def f():
+ msg = ("cannot do slice indexing"
+ r" on {klass} with these indexers \[(3|4)\.0\] of"
+ " {kind}"
+ .format(klass=type(index), kind=str(float)))
+ with pytest.raises(TypeError, match=msg):
s[l]
- pytest.raises(TypeError, f)
-
# getitem out-of-bounds
for l in [slice(-6, 6),
slice(-6.0, 6.0)]:
@@ -420,11 +472,13 @@ def f():
self.check(result, s, indexer, False)
# positional indexing
- def f():
+ msg = ("cannot do slice indexing"
+ r" on {klass} with these indexers \[-6\.0\] of"
+ " {kind}"
+ .format(klass=type(index), kind=str(float)))
+ with pytest.raises(TypeError, match=msg):
s[slice(-6.0, 6.0)]
- pytest.raises(TypeError, f)
-
# getitem odd floats
for l, res1 in [(slice(2.5, 4), slice(3, 5)),
(slice(2, 3.5), slice(2, 4)),
@@ -443,11 +497,13 @@ def f():
self.check(result, s, res, False)
# positional indexing
- def f():
+ msg = ("cannot do slice indexing"
+ r" on {klass} with these indexers \[(2|3)\.5\] of"
+ " {kind}"
+ .format(klass=type(index), kind=str(float)))
+ with pytest.raises(TypeError, match=msg):
s[l]
- pytest.raises(TypeError, f)
-
# setitem
for l in [slice(3.0, 4),
slice(3, 4.0),
@@ -462,11 +518,13 @@ def f():
assert (result == 0).all()
# positional indexing
- def f():
+ msg = ("cannot do slice indexing"
+ r" on {klass} with these indexers \[(3|4)\.0\] of"
+ " {kind}"
+ .format(klass=type(index), kind=str(float)))
+ with pytest.raises(TypeError, match=msg):
s[l] = 0
- pytest.raises(TypeError, f)
-
def test_integer_positional_indexing(self):
""" make sure that we are raising on positional indexing
w.r.t. an integer index """
@@ -484,11 +542,17 @@ def test_integer_positional_indexing(self):
slice(2.0, 4),
slice(2.0, 4.0)]:
- def f():
+ if compat.PY2:
+ klass = Int64Index
+ else:
+ klass = RangeIndex
+ msg = ("cannot do slice indexing"
+ r" on {klass} with these indexers \[(2|4)\.0\] of"
+ " {kind}"
+ .format(klass=str(klass), kind=str(float)))
+ with pytest.raises(TypeError, match=msg):
idxr(s)[l]
- pytest.raises(TypeError, f)
-
@ignore_ix
def test_slice_integer_frame_getitem(self):
@@ -509,11 +573,13 @@ def f(idxr):
self.check(result, s, indexer, False)
# positional indexing
- def f():
+ msg = ("cannot do slice indexing"
+ r" on {klass} with these indexers \[(0|1)\.0\] of"
+ " {kind}"
+ .format(klass=type(index), kind=str(float)))
+ with pytest.raises(TypeError, match=msg):
s[l]
- pytest.raises(TypeError, f)
-
# getitem out-of-bounds
for l in [slice(-10, 10),
slice(-10.0, 10.0)]:
@@ -522,11 +588,13 @@ def f():
self.check(result, s, slice(-10, 10), True)
# positional indexing
- def f():
+ msg = ("cannot do slice indexing"
+ r" on {klass} with these indexers \[-10\.0\] of"
+ " {kind}"
+ .format(klass=type(index), kind=str(float)))
+ with pytest.raises(TypeError, match=msg):
s[slice(-10.0, 10.0)]
- pytest.raises(TypeError, f)
-
# getitem odd floats
for l, res in [(slice(0.5, 1), slice(1, 2)),
(slice(0, 0.5), slice(0, 1)),
@@ -536,11 +604,13 @@ def f():
self.check(result, s, res, False)
# positional indexing
- def f():
+ msg = ("cannot do slice indexing"
+ r" on {klass} with these indexers \[0\.5\] of"
+ " {kind}"
+ .format(klass=type(index), kind=str(float)))
+ with pytest.raises(TypeError, match=msg):
s[l]
- pytest.raises(TypeError, f)
-
# setitem
for l in [slice(3.0, 4),
slice(3, 4.0),
@@ -552,11 +622,13 @@ def f():
assert (result == 0).all()
# positional indexing
- def f():
+ msg = ("cannot do slice indexing"
+ r" on {klass} with these indexers \[(3|4)\.0\] of"
+ " {kind}"
+ .format(klass=type(index), kind=str(float)))
+ with pytest.raises(TypeError, match=msg):
s[l] = 0
- pytest.raises(TypeError, f)
-
f(lambda x: x.loc)
with catch_warnings(record=True):
f(lambda x: x.ix)
@@ -632,9 +704,12 @@ def test_floating_misc(self):
# value not found (and no fallbacking at all)
# scalar integers
- pytest.raises(KeyError, lambda: s.loc[4])
- pytest.raises(KeyError, lambda: s.loc[4])
- pytest.raises(KeyError, lambda: s[4])
+ with pytest.raises(KeyError, match=r"^4\.0$"):
+ s.loc[4]
+ with pytest.raises(KeyError, match=r"^4\.0$"):
+ s.loc[4]
+ with pytest.raises(KeyError, match=r"^4\.0$"):
+ s[4]
# fancy floats/integers create the correct entry (as nan)
# fancy tests
diff --git a/pandas/tests/indexing/test_iloc.py b/pandas/tests/indexing/test_iloc.py
index a867387db4b46..5c87d553daba3 100644
--- a/pandas/tests/indexing/test_iloc.py
+++ b/pandas/tests/indexing/test_iloc.py
@@ -26,26 +26,33 @@ def test_iloc_exceeds_bounds(self):
msg = 'positional indexers are out-of-bounds'
with pytest.raises(IndexError, match=msg):
df.iloc[:, [0, 1, 2, 3, 4, 5]]
- pytest.raises(IndexError, lambda: df.iloc[[1, 30]])
- pytest.raises(IndexError, lambda: df.iloc[[1, -30]])
- pytest.raises(IndexError, lambda: df.iloc[[100]])
+ with pytest.raises(IndexError, match=msg):
+ df.iloc[[1, 30]]
+ with pytest.raises(IndexError, match=msg):
+ df.iloc[[1, -30]]
+ with pytest.raises(IndexError, match=msg):
+ df.iloc[[100]]
s = df['A']
- pytest.raises(IndexError, lambda: s.iloc[[100]])
- pytest.raises(IndexError, lambda: s.iloc[[-100]])
+ with pytest.raises(IndexError, match=msg):
+ s.iloc[[100]]
+ with pytest.raises(IndexError, match=msg):
+ s.iloc[[-100]]
# still raise on a single indexer
msg = 'single positional indexer is out-of-bounds'
with pytest.raises(IndexError, match=msg):
df.iloc[30]
- pytest.raises(IndexError, lambda: df.iloc[-30])
+ with pytest.raises(IndexError, match=msg):
+ df.iloc[-30]
# GH10779
# single positive/negative indexer exceeding Series bounds should raise
# an IndexError
with pytest.raises(IndexError, match=msg):
s.iloc[30]
- pytest.raises(IndexError, lambda: s.iloc[-30])
+ with pytest.raises(IndexError, match=msg):
+ s.iloc[-30]
# slices are ok
result = df.iloc[:, 4:10] # 0 < start < len < stop
@@ -104,8 +111,12 @@ def check(result, expected):
check(dfl.iloc[:, 1:3], dfl.iloc[:, [1]])
check(dfl.iloc[4:6], dfl.iloc[[4]])
- pytest.raises(IndexError, lambda: dfl.iloc[[4, 5, 6]])
- pytest.raises(IndexError, lambda: dfl.iloc[:, 4])
+ msg = "positional indexers are out-of-bounds"
+ with pytest.raises(IndexError, match=msg):
+ dfl.iloc[[4, 5, 6]]
+ msg = "single positional indexer is out-of-bounds"
+ with pytest.raises(IndexError, match=msg):
+ dfl.iloc[:, 4]
def test_iloc_getitem_int(self):
@@ -437,10 +448,16 @@ def test_iloc_getitem_labelled_frame(self):
assert result == exp
# out-of-bounds exception
- pytest.raises(IndexError, df.iloc.__getitem__, tuple([10, 5]))
+ msg = "single positional indexer is out-of-bounds"
+ with pytest.raises(IndexError, match=msg):
+ df.iloc[10, 5]
# trying to use a label
- pytest.raises(ValueError, df.iloc.__getitem__, tuple(['j', 'D']))
+ msg = (r"Location based indexing can only have \[integer, integer"
+ r" slice \(START point is INCLUDED, END point is EXCLUDED\),"
+ r" listlike of integers, boolean array\] types")
+ with pytest.raises(ValueError, match=msg):
+ df.iloc['j', 'D']
def test_iloc_getitem_doc_issue(self):
@@ -555,10 +572,15 @@ def test_iloc_mask(self):
# GH 3631, iloc with a mask (of a series) should raise
df = DataFrame(lrange(5), list('ABCDE'), columns=['a'])
mask = (df.a % 2 == 0)
- pytest.raises(ValueError, df.iloc.__getitem__, tuple([mask]))
+ msg = ("iLocation based boolean indexing cannot use an indexable as"
+ " a mask")
+ with pytest.raises(ValueError, match=msg):
+ df.iloc[mask]
mask.index = lrange(len(mask))
- pytest.raises(NotImplementedError, df.iloc.__getitem__,
- tuple([mask]))
+ msg = ("iLocation based boolean indexing on an integer type is not"
+ " available")
+ with pytest.raises(NotImplementedError, match=msg):
+ df.iloc[mask]
# ndarray ok
result = df.iloc[np.array([True] * len(mask), dtype=bool)]
diff --git a/pandas/tests/indexing/test_ix.py b/pandas/tests/indexing/test_ix.py
index 35805bce07705..fb4dfbb39ce94 100644
--- a/pandas/tests/indexing/test_ix.py
+++ b/pandas/tests/indexing/test_ix.py
@@ -102,7 +102,12 @@ def compare(result, expected):
with catch_warnings(record=True):
df.ix[key]
- pytest.raises(TypeError, lambda: df.loc[key])
+ msg = (r"cannot do slice indexing"
+ r" on {klass} with these indexers \[(0|1)\] of"
+ r" {kind}"
+ .format(klass=type(df.index), kind=str(int)))
+ with pytest.raises(TypeError, match=msg):
+ df.loc[key]
df = DataFrame(np.random.randn(5, 4), columns=list('ABCD'),
index=pd.date_range('2012-01-01', periods=5))
@@ -122,7 +127,8 @@ def compare(result, expected):
with catch_warnings(record=True):
expected = df.ix[key]
except KeyError:
- pytest.raises(KeyError, lambda: df.loc[key])
+ with pytest.raises(KeyError, match=r"^'2012-01-31'$"):
+ df.loc[key]
continue
result = df.loc[key]
@@ -279,14 +285,18 @@ def test_ix_setitem_out_of_bounds_axis_0(self):
np.random.randn(2, 5), index=["row%s" % i for i in range(2)],
columns=["col%s" % i for i in range(5)])
with catch_warnings(record=True):
- pytest.raises(ValueError, df.ix.__setitem__, (2, 0), 100)
+ msg = "cannot set by positional indexing with enlargement"
+ with pytest.raises(ValueError, match=msg):
+ df.ix[2, 0] = 100
def test_ix_setitem_out_of_bounds_axis_1(self):
df = DataFrame(
np.random.randn(5, 2), index=["row%s" % i for i in range(5)],
columns=["col%s" % i for i in range(2)])
with catch_warnings(record=True):
- pytest.raises(ValueError, df.ix.__setitem__, (0, 2), 100)
+ msg = "cannot set by positional indexing with enlargement"
+ with pytest.raises(ValueError, match=msg):
+ df.ix[0, 2] = 100
def test_ix_empty_list_indexer_is_ok(self):
with catch_warnings(record=True):
diff --git a/pandas/tests/indexing/test_loc.py b/pandas/tests/indexing/test_loc.py
index 17e107c7a1130..3bf4a6bee4af9 100644
--- a/pandas/tests/indexing/test_loc.py
+++ b/pandas/tests/indexing/test_loc.py
@@ -233,8 +233,10 @@ def test_loc_to_fail(self):
columns=['e', 'f', 'g'])
# raise a KeyError?
- pytest.raises(KeyError, df.loc.__getitem__,
- tuple([[1, 2], [1, 2]]))
+ msg = (r"\"None of \[Int64Index\(\[1, 2\], dtype='int64'\)\] are"
+ r" in the \[index\]\"")
+ with pytest.raises(KeyError, match=msg):
+ df.loc[[1, 2], [1, 2]]
# GH 7496
# loc should not fallback
@@ -243,10 +245,18 @@ def test_loc_to_fail(self):
s.loc[1] = 1
s.loc['a'] = 2
- pytest.raises(KeyError, lambda: s.loc[-1])
- pytest.raises(KeyError, lambda: s.loc[[-1, -2]])
+ with pytest.raises(KeyError, match=r"^-1$"):
+ s.loc[-1]
+
+ msg = (r"\"None of \[Int64Index\(\[-1, -2\], dtype='int64'\)\] are"
+ r" in the \[index\]\"")
+ with pytest.raises(KeyError, match=msg):
+ s.loc[[-1, -2]]
- pytest.raises(KeyError, lambda: s.loc[['4']])
+ msg = (r"\"None of \[Index\(\[u?'4'\], dtype='object'\)\] are"
+ r" in the \[index\]\"")
+ with pytest.raises(KeyError, match=msg):
+ s.loc[['4']]
s.loc[-1] = 3
with tm.assert_produces_warning(FutureWarning,
@@ -256,29 +266,28 @@ def test_loc_to_fail(self):
tm.assert_series_equal(result, expected)
s['a'] = 2
- pytest.raises(KeyError, lambda: s.loc[[-2]])
+ msg = (r"\"None of \[Int64Index\(\[-2\], dtype='int64'\)\] are"
+ r" in the \[index\]\"")
+ with pytest.raises(KeyError, match=msg):
+ s.loc[[-2]]
del s['a']
- def f():
+ with pytest.raises(KeyError, match=msg):
s.loc[[-2]] = 0
- pytest.raises(KeyError, f)
-
# inconsistency between .loc[values] and .loc[values,:]
# GH 7999
df = DataFrame([['a'], ['b']], index=[1, 2], columns=['value'])
- def f():
+ msg = (r"\"None of \[Int64Index\(\[3\], dtype='int64'\)\] are"
+ r" in the \[index\]\"")
+ with pytest.raises(KeyError, match=msg):
df.loc[[3], :]
- pytest.raises(KeyError, f)
-
- def f():
+ with pytest.raises(KeyError, match=msg):
df.loc[[3]]
- pytest.raises(KeyError, f)
-
def test_loc_getitem_list_with_fail(self):
# 15747
# should KeyError if *any* missing labels
@@ -600,11 +609,15 @@ def test_loc_non_unique(self):
        # these are going to raise because we are non monotonic
df = DataFrame({'A': [1, 2, 3, 4, 5, 6],
'B': [3, 4, 5, 6, 7, 8]}, index=[0, 1, 0, 1, 2, 3])
- pytest.raises(KeyError, df.loc.__getitem__,
- tuple([slice(1, None)]))
- pytest.raises(KeyError, df.loc.__getitem__,
- tuple([slice(0, None)]))
- pytest.raises(KeyError, df.loc.__getitem__, tuple([slice(1, 2)]))
+ msg = "'Cannot get left slice bound for non-unique label: 1'"
+ with pytest.raises(KeyError, match=msg):
+ df.loc[1:]
+ msg = "'Cannot get left slice bound for non-unique label: 0'"
+ with pytest.raises(KeyError, match=msg):
+ df.loc[0:]
+ msg = "'Cannot get left slice bound for non-unique label: 1'"
+ with pytest.raises(KeyError, match=msg):
+ df.loc[1:2]
# monotonic are ok
df = DataFrame({'A': [1, 2, 3, 4, 5, 6],
diff --git a/pandas/tests/indexing/test_partial.py b/pandas/tests/indexing/test_partial.py
index b863afe02c2e8..5b6a5ab9ecf7b 100644
--- a/pandas/tests/indexing/test_partial.py
+++ b/pandas/tests/indexing/test_partial.py
@@ -246,7 +246,10 @@ def test_series_partial_set(self):
tm.assert_series_equal(result, expected, check_index_type=True)
        # raises as nothing is in the index
- pytest.raises(KeyError, lambda: ser.loc[[3, 3, 3]])
+ msg = (r"\"None of \[Int64Index\(\[3, 3, 3\], dtype='int64'\)\] are"
+ r" in the \[index\]\"")
+ with pytest.raises(KeyError, match=msg):
+ ser.loc[[3, 3, 3]]
expected = Series([0.2, 0.2, np.nan], index=[2, 2, 3])
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
@@ -342,7 +345,10 @@ def test_series_partial_set_with_name(self):
tm.assert_series_equal(result, expected, check_index_type=True)
        # raises as nothing is in the index
- pytest.raises(KeyError, lambda: ser.loc[[3, 3, 3]])
+ msg = (r"\"None of \[Int64Index\(\[3, 3, 3\], dtype='int64',"
+ r" name=u?'idx'\)\] are in the \[index\]\"")
+ with pytest.raises(KeyError, match=msg):
+ ser.loc[[3, 3, 3]]
exp_idx = Index([2, 2, 3], dtype='int64', name='idx')
expected = Series([0.2, 0.2, np.nan], index=exp_idx, name='s')
diff --git a/pandas/tests/indexing/test_scalar.py b/pandas/tests/indexing/test_scalar.py
index e4b8181a67514..6d607ce86c08e 100644
--- a/pandas/tests/indexing/test_scalar.py
+++ b/pandas/tests/indexing/test_scalar.py
@@ -30,7 +30,9 @@ def _check(f, func, values=False):
for f in [d['labels'], d['ts'], d['floats']]:
if f is not None:
- pytest.raises(ValueError, self.check_values, f, 'iat')
+ msg = "iAt based indexing can only have integer indexers"
+ with pytest.raises(ValueError, match=msg):
+ self.check_values(f, 'iat')
# at
for f in [d['ints'], d['uints'], d['labels'],
@@ -57,7 +59,9 @@ def _check(f, func, values=False):
for f in [d['labels'], d['ts'], d['floats']]:
if f is not None:
- pytest.raises(ValueError, _check, f, 'iat')
+ msg = "iAt based indexing can only have integer indexers"
+ with pytest.raises(ValueError, match=msg):
+ _check(f, 'iat')
# at
for f in [d['ints'], d['uints'], d['labels'],
@@ -107,8 +111,12 @@ def test_imethods_with_dups(self):
result = s.iat[2]
assert result == 2
- pytest.raises(IndexError, lambda: s.iat[10])
- pytest.raises(IndexError, lambda: s.iat[-10])
+ msg = "index 10 is out of bounds for axis 0 with size 5"
+ with pytest.raises(IndexError, match=msg):
+ s.iat[10]
+ msg = "index -10 is out of bounds for axis 0 with size 5"
+ with pytest.raises(IndexError, match=msg):
+ s.iat[-10]
result = s.iloc[[2, 3]]
expected = Series([2, 3], [2, 2], dtype='int64')
@@ -128,22 +136,30 @@ def test_at_to_fail(self):
s = Series([1, 2, 3], index=list('abc'))
result = s.at['a']
assert result == 1
- pytest.raises(ValueError, lambda: s.at[0])
+ msg = ("At based indexing on an non-integer index can only have"
+ " non-integer indexers")
+ with pytest.raises(ValueError, match=msg):
+ s.at[0]
df = DataFrame({'A': [1, 2, 3]}, index=list('abc'))
result = df.at['a', 'A']
assert result == 1
- pytest.raises(ValueError, lambda: df.at['a', 0])
+ with pytest.raises(ValueError, match=msg):
+ df.at['a', 0]
s = Series([1, 2, 3], index=[3, 2, 1])
result = s.at[1]
assert result == 3
- pytest.raises(ValueError, lambda: s.at['a'])
+ msg = ("At based indexing on an integer index can only have integer"
+ " indexers")
+ with pytest.raises(ValueError, match=msg):
+ s.at['a']
df = DataFrame({0: [1, 2, 3]}, index=[3, 2, 1])
result = df.at[1, 0]
assert result == 3
- pytest.raises(ValueError, lambda: df.at['a', 0])
+ with pytest.raises(ValueError, match=msg):
+ df.at['a', 0]
# GH 13822, incorrect error string with non-unique columns when missing
# column is accessed
| xref #24332 | https://api.github.com/repos/pandas-dev/pandas/pulls/24960 | 2019-01-27T20:00:01Z | 2019-01-28T12:38:30Z | 2019-01-28T12:38:30Z | 2019-01-29T12:44:27Z |
CLN: isort asv_bench/benchmark/algorithms.py | diff --git a/asv_bench/benchmarks/algorithms.py b/asv_bench/benchmarks/algorithms.py
index 34fb161e5afcb..74849d330f2bc 100644
--- a/asv_bench/benchmarks/algorithms.py
+++ b/asv_bench/benchmarks/algorithms.py
@@ -5,7 +5,6 @@
import pandas as pd
from pandas.util import testing as tm
-
for imp in ['pandas.util', 'pandas.tools.hashing']:
try:
hashing = import_module(imp)
@@ -142,4 +141,4 @@ def time_quantile(self, quantile, interpolation, dtype):
self.idx.quantile(quantile, interpolation=interpolation)
-from .pandas_vb_common import setup # noqa: F401
+from .pandas_vb_common import setup # noqa: F401 isort:skip
diff --git a/setup.cfg b/setup.cfg
index 7155cc1013544..b15c3ce8a110a 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -114,7 +114,6 @@ force_sort_within_sections=True
skip=
pandas/core/api.py,
pandas/core/frame.py,
- asv_bench/benchmarks/algorithms.py,
asv_bench/benchmarks/attrs_caching.py,
asv_bench/benchmarks/binary_ops.py,
asv_bench/benchmarks/categoricals.py,
| - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
I deleted an empty line of code and removed the file from the isort skip list. Per issue #22947, I made sure the `setup` import stayed at the bottom of the file before committing (see the sketch below). It is a small contribution, but I am looking forward to contributing more!
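
For context, the inline marker that keeps isort happy while preserving the intentionally late import (as used in the diff above) is:

```python
from .pandas_vb_common import setup  # noqa: F401 isort:skip
```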
Thanks everyone for making the documentation for contributing easy to follow! | https://api.github.com/repos/pandas-dev/pandas/pulls/24958 | 2019-01-27T00:38:37Z | 2019-01-30T05:53:18Z | 2019-01-30T05:53:18Z | 2019-01-30T05:53:25Z |
BUG: Better handle larger numbers in to_numeric | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
index a9fa8b2174dd0..383c9ffee8241 100644
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -110,6 +110,8 @@ Timezones
Numeric
^^^^^^^
+- Bug in :meth:`to_numeric` in which large negative numbers were being improperly handled (:issue:`24910`)
+- Bug in :meth:`to_numeric` in which numbers were being coerced to float, even though ``errors`` was not ``coerce`` (:issue:`24910`)
-
-
-
diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
index 4745916eb0ce2..4a3440e14ba14 100644
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -1828,7 +1828,7 @@ def maybe_convert_numeric(ndarray[object] values, set na_values,
except (ValueError, OverflowError, TypeError):
pass
- # otherwise, iterate and do full infererence
+ # Otherwise, iterate and do full inference.
cdef:
int status, maybe_int
Py_ssize_t i, n = values.size
@@ -1865,10 +1865,10 @@ def maybe_convert_numeric(ndarray[object] values, set na_values,
else:
seen.float_ = True
- if val <= oINT64_MAX:
+ if oINT64_MIN <= val <= oINT64_MAX:
ints[i] = val
- if seen.sint_ and seen.uint_:
+ if val < oINT64_MIN or (seen.sint_ and seen.uint_):
seen.float_ = True
elif util.is_bool_object(val):
@@ -1910,23 +1910,28 @@ def maybe_convert_numeric(ndarray[object] values, set na_values,
else:
seen.saw_int(as_int)
- if not (seen.float_ or as_int in na_values):
+ if as_int not in na_values:
if as_int < oINT64_MIN or as_int > oUINT64_MAX:
- raise ValueError('Integer out of range.')
+ if seen.coerce_numeric:
+ seen.float_ = True
+ else:
+ raise ValueError("Integer out of range.")
+ else:
+ if as_int >= 0:
+ uints[i] = as_int
- if as_int >= 0:
- uints[i] = as_int
- if as_int <= oINT64_MAX:
- ints[i] = as_int
+ if as_int <= oINT64_MAX:
+ ints[i] = as_int
seen.float_ = seen.float_ or (seen.uint_ and seen.sint_)
else:
seen.float_ = True
except (TypeError, ValueError) as e:
if not seen.coerce_numeric:
- raise type(e)(str(e) + ' at position {pos}'.format(pos=i))
+ raise type(e)(str(e) + " at position {pos}".format(pos=i))
elif "uint64" in str(e): # Exception from check functions.
raise
+
seen.saw_null()
floats[i] = NaN
diff --git a/pandas/core/tools/numeric.py b/pandas/core/tools/numeric.py
index 79d8ee38637f9..24f3e6753e500 100644
--- a/pandas/core/tools/numeric.py
+++ b/pandas/core/tools/numeric.py
@@ -19,6 +19,14 @@ def to_numeric(arg, errors='raise', downcast=None):
depending on the data supplied. Use the `downcast` parameter
to obtain other dtypes.
+ Please note that precision loss may occur if really large numbers
+ are passed in. Due to the internal limitations of `ndarray`, if
+ numbers smaller than `-9223372036854775808` (np.iinfo(np.int64).min)
+ or larger than `18446744073709551615` (np.iinfo(np.uint64).max) are
+ passed in, it is very likely they will be converted to float so that
+    they can be stored in an `ndarray`. These warnings apply similarly to
+ `Series` since it internally leverages `ndarray`.
+
Parameters
----------
arg : scalar, list, tuple, 1-d array, or Series
diff --git a/pandas/tests/tools/test_numeric.py b/pandas/tests/tools/test_numeric.py
index 3822170d884aa..97e1dc2f6aefc 100644
--- a/pandas/tests/tools/test_numeric.py
+++ b/pandas/tests/tools/test_numeric.py
@@ -4,11 +4,50 @@
from numpy import iinfo
import pytest
+import pandas.compat as compat
+
import pandas as pd
from pandas import DataFrame, Index, Series, to_numeric
from pandas.util import testing as tm
+@pytest.fixture(params=[None, "ignore", "raise", "coerce"])
+def errors(request):
+ return request.param
+
+
+@pytest.fixture(params=[True, False])
+def signed(request):
+ return request.param
+
+
+@pytest.fixture(params=[lambda x: x, str], ids=["identity", "str"])
+def transform(request):
+ return request.param
+
+
+@pytest.fixture(params=[
+ 47393996303418497800,
+ 100000000000000000000
+])
+def large_val(request):
+ return request.param
+
+
+@pytest.fixture(params=[True, False])
+def multiple_elts(request):
+ return request.param
+
+
+@pytest.fixture(params=[
+ (lambda x: Index(x, name="idx"), tm.assert_index_equal),
+ (lambda x: Series(x, name="ser"), tm.assert_series_equal),
+ (lambda x: np.array(Index(x).values), tm.assert_numpy_array_equal)
+])
+def transform_assert_equal(request):
+ return request.param
+
+
@pytest.mark.parametrize("input_kwargs,result_kwargs", [
(dict(), dict(dtype=np.int64)),
(dict(errors="coerce", downcast="integer"), dict(dtype=np.int8))
@@ -172,7 +211,6 @@ def test_all_nan():
tm.assert_series_equal(result, expected)
-@pytest.mark.parametrize("errors", [None, "ignore", "raise", "coerce"])
def test_type_check(errors):
# see gh-11776
df = DataFrame({"a": [1, -3.14, 7], "b": ["4", "5", "6"]})
@@ -183,11 +221,100 @@ def test_type_check(errors):
to_numeric(df, **kwargs)
-@pytest.mark.parametrize("val", [
- 1, 1.1, "1", "1.1", -1.5, "-1.5"
-])
-def test_scalar(val):
- assert to_numeric(val) == float(val)
+@pytest.mark.parametrize("val", [1, 1.1, 20001])
+def test_scalar(val, signed, transform):
+ val = -val if signed else val
+ assert to_numeric(transform(val)) == float(val)
+
+
+def test_really_large_scalar(large_val, signed, transform, errors):
+ # see gh-24910
+ kwargs = dict(errors=errors) if errors is not None else dict()
+ val = -large_val if signed else large_val
+
+ val = transform(val)
+ val_is_string = isinstance(val, str)
+
+ if val_is_string and errors in (None, "raise"):
+ msg = "Integer out of range. at position 0"
+ with pytest.raises(ValueError, match=msg):
+ to_numeric(val, **kwargs)
+ else:
+ expected = float(val) if (errors == "coerce" and
+ val_is_string) else val
+ assert tm.assert_almost_equal(to_numeric(val, **kwargs), expected)
+
+
+def test_really_large_in_arr(large_val, signed, transform,
+ multiple_elts, errors):
+ # see gh-24910
+ kwargs = dict(errors=errors) if errors is not None else dict()
+ val = -large_val if signed else large_val
+ val = transform(val)
+
+ extra_elt = "string"
+ arr = [val] + multiple_elts * [extra_elt]
+
+ val_is_string = isinstance(val, str)
+ coercing = errors == "coerce"
+
+ if errors in (None, "raise") and (val_is_string or multiple_elts):
+ if val_is_string:
+ msg = "Integer out of range. at position 0"
+ else:
+ msg = 'Unable to parse string "string" at position 1'
+
+ with pytest.raises(ValueError, match=msg):
+ to_numeric(arr, **kwargs)
+ else:
+ result = to_numeric(arr, **kwargs)
+
+ exp_val = float(val) if (coercing and val_is_string) else val
+ expected = [exp_val]
+
+ if multiple_elts:
+ if coercing:
+ expected.append(np.nan)
+ exp_dtype = float
+ else:
+ expected.append(extra_elt)
+ exp_dtype = object
+ else:
+ exp_dtype = float if isinstance(exp_val, (
+ int, compat.long, float)) else object
+
+ tm.assert_almost_equal(result, np.array(expected, dtype=exp_dtype))
+
+
+def test_really_large_in_arr_consistent(large_val, signed,
+ multiple_elts, errors):
+ # see gh-24910
+ #
+ # Even if we discover that we have to hold float, does not mean
+    # Even if we discover that we have to hold float, it does not mean
+ kwargs = dict(errors=errors) if errors is not None else dict()
+ arr = [str(-large_val if signed else large_val)]
+
+ if multiple_elts:
+ arr.insert(0, large_val)
+
+ if errors in (None, "raise"):
+ index = int(multiple_elts)
+ msg = "Integer out of range. at position {index}".format(index=index)
+
+ with pytest.raises(ValueError, match=msg):
+ to_numeric(arr, **kwargs)
+ else:
+ result = to_numeric(arr, **kwargs)
+
+ if errors == "coerce":
+ expected = [float(i) for i in arr]
+ exp_dtype = float
+ else:
+ expected = arr
+ exp_dtype = object
+
+ tm.assert_almost_equal(result, np.array(expected, dtype=exp_dtype))
@pytest.mark.parametrize("errors,checker", [
@@ -205,15 +332,6 @@ def test_scalar_fail(errors, checker):
assert checker(to_numeric(scalar, errors=errors))
-@pytest.fixture(params=[
- (lambda x: Index(x, name="idx"), tm.assert_index_equal),
- (lambda x: Series(x, name="ser"), tm.assert_series_equal),
- (lambda x: np.array(Index(x).values), tm.assert_numpy_array_equal)
-])
-def transform_assert_equal(request):
- return request.param
-
-
@pytest.mark.parametrize("data", [
[1, 2, 3],
[1., np.nan, 3, np.nan]
| Fun edge case bug fixes for `to_numeric`:
* Warn about lossiness when passing really large numbers that exceed `(u)int64` ranges.
* Coerce negative numbers to float when requested instead of crashing and returning `object`.
* Consistently parse numbers as integers / floats, even if we know that the resulting container has to be float. This is to ensure consistent error behavior when input numbers are too large (see the sketch below).
Closes #24910.
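
A behavior sketch, assuming the semantics encoded in the new tests (annotated results are expectations, not captured output):

```python
import numpy as np
import pandas as pd

big = "100000000000000000000"   # > np.iinfo(np.uint64).max

# Out-of-range *strings* are now coerced to float (with precision loss)
# instead of raising or falling back to object dtype.
pd.to_numeric(big, errors="coerce")                        # expected: 1e+20
pd.to_numeric(["-47393996303418497800"], errors="coerce")
# expected: array([-4.73939963e+19])

# With the default errors="raise", parsing still fails loudly:
# pd.to_numeric(big)  # ValueError: Integer out of range. at position 0
```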
| https://api.github.com/repos/pandas-dev/pandas/pulls/24956 | 2019-01-27T00:27:45Z | 2019-01-31T12:46:46Z | 2019-01-31T12:46:45Z | 2019-01-31T17:15:05Z |
ENH: Quoting column names containing spaces with backticks to use them in query and eval. | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
index 73eb6a15a1b47..700a7c0e72074 100644
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -29,6 +29,7 @@ Other Enhancements
- :meth:`DataFrame.rename` now supports the ``errors`` argument to raise errors when attempting to rename nonexistent keys (:issue:`13473`)
- :class:`RangeIndex` has gained :attr:`~RangeIndex.start`, :attr:`~RangeIndex.stop`, and :attr:`~RangeIndex.step` attributes (:issue:`25710`)
- :class:`datetime.timezone` objects are now supported as arguments to timezone methods and constructors (:issue:`25065`)
+- :meth:`DataFrame.query` and :meth:`DataFrame.eval` now support quoting column names with backticks to refer to names with spaces (:issue:`6508`)
.. _whatsnew_0250.api_breaking:
diff --git a/pandas/core/computation/common.py b/pandas/core/computation/common.py
index e7eca04e413c5..1e38919affcdd 100644
--- a/pandas/core/computation/common.py
+++ b/pandas/core/computation/common.py
@@ -1,9 +1,12 @@
import numpy as np
-from pandas.compat import reduce
+from pandas.compat import reduce, string_types
import pandas as pd
+# A token value Python's tokenizer probably will never use.
+_BACKTICK_QUOTED_STRING = 100
+
def _ensure_decoded(s):
""" if we have bytes, decode them to unicode """
@@ -22,5 +25,14 @@ def _result_type_many(*arrays_and_dtypes):
return reduce(np.result_type, arrays_and_dtypes)
+def _remove_spaces_column_name(name):
+    """Check if name contains any spaces; if it does, the spaces
+    are removed and an underscore suffix is added."""
+ if not isinstance(name, string_types) or " " not in name:
+ return name
+
+ return name.replace(" ", "_") + "_BACKTICK_QUOTED_STRING"
+
+
class NameResolutionError(NameError):
pass
diff --git a/pandas/core/computation/expr.py b/pandas/core/computation/expr.py
index d840bf6ae71a2..4ab34b7349af5 100644
--- a/pandas/core/computation/expr.py
+++ b/pandas/core/computation/expr.py
@@ -3,16 +3,20 @@
import ast
from functools import partial
+import itertools as it
+import operator
import tokenize
import numpy as np
-from pandas.compat import StringIO, lmap, reduce, string_types, zip
+from pandas.compat import StringIO, lmap, map, reduce, string_types, zip
import pandas as pd
from pandas import compat
from pandas.core import common as com
from pandas.core.base import StringMixin
+from pandas.core.computation.common import (
+ _BACKTICK_QUOTED_STRING, _remove_spaces_column_name)
from pandas.core.computation.ops import (
_LOCAL_TAG, BinOp, Constant, Div, FuncNode, Op, Term, UnaryOp,
UndefinedVariableError, _arith_ops_syms, _bool_ops_syms, _cmp_ops_syms,
@@ -31,7 +35,17 @@ def tokenize_string(source):
A Python source code string
"""
line_reader = StringIO(source).readline
- for toknum, tokval, _, _, _ in tokenize.generate_tokens(line_reader):
+ token_generator = tokenize.generate_tokens(line_reader)
+
+ # Loop over all tokens till a backtick (`) is found.
+ # Then, take all tokens till the next backtick to form a backtick quoted
+ # string.
+ for toknum, tokval, _, _, _ in token_generator:
+ if tokval == '`':
+ tokval = " ".join(it.takewhile(
+ lambda tokval: tokval != '`',
+ map(operator.itemgetter(1), token_generator)))
+ toknum = _BACKTICK_QUOTED_STRING
yield toknum, tokval
@@ -102,6 +116,31 @@ def _replace_locals(tok):
return toknum, tokval
+def _clean_spaces_backtick_quoted_names(tok):
+ """Clean up a column name if surrounded by backticks.
+
+    Backtick quoted strings are indicated by a certain tokval value. If a
+    string is a backtick quoted token it will be processed by
+    :func:`_remove_spaces_column_name` so that the parser can find this
+    string when the query is executed.
+    See also :meth:`NDFrame._get_space_character_free_column_resolvers`.
+
+ Parameters
+ ----------
+ tok : tuple of int, str
+ ints correspond to the all caps constants in the tokenize module
+
+ Returns
+ -------
+ t : tuple of int, str
+        Either the input token or the replacement values
+ """
+ toknum, tokval = tok
+ if toknum == _BACKTICK_QUOTED_STRING:
+ return tokenize.NAME, _remove_spaces_column_name(tokval)
+ return toknum, tokval
+
+
def _compose2(f, g):
"""Compose 2 callables"""
return lambda *args, **kwargs: f(g(*args, **kwargs))
@@ -114,7 +153,8 @@ def _compose(*funcs):
def _preparse(source, f=_compose(_replace_locals, _replace_booleans,
- _rewrite_assign)):
+ _rewrite_assign,
+ _clean_spaces_backtick_quoted_names)):
"""Compose a collection of tokenization functions
Parameters
@@ -711,8 +751,9 @@ def visitor(x, y):
class PandasExprVisitor(BaseExprVisitor):
def __init__(self, env, engine, parser,
- preparser=partial(_preparse, f=_compose(_replace_locals,
- _replace_booleans))):
+ preparser=partial(_preparse, f=_compose(
+ _replace_locals, _replace_booleans,
+ _clean_spaces_backtick_quoted_names))):
super(PandasExprVisitor, self).__init__(env, engine, parser, preparser)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index b4f15905afc44..2dc885d198f48 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2967,6 +2967,15 @@ def query(self, expr, inplace=False, **kwargs):
The query string to evaluate. You can refer to variables
in the environment by prefixing them with an '@' character like
``@a + b``.
+
+ .. versionadded:: 0.25.0
+
+ You can refer to column names that contain spaces by surrounding
+ them in backticks.
+
+ For example, if one of your columns is called ``a a`` and you want
+ to sum it with ``b``, your query should be ```a a` + b``.
+
inplace : bool
Whether the query should modify the data in place or return
a modified copy.
@@ -3025,23 +3034,37 @@ def query(self, expr, inplace=False, **kwargs):
Examples
--------
- >>> df = pd.DataFrame({'A': range(1, 6), 'B': range(10, 0, -2)})
+ >>> df = pd.DataFrame({'A': range(1, 6),
+ ... 'B': range(10, 0, -2),
+ ... 'C C': range(10, 5, -1)})
>>> df
- A B
- 0 1 10
- 1 2 8
- 2 3 6
- 3 4 4
- 4 5 2
+ A B C C
+ 0 1 10 10
+ 1 2 8 9
+ 2 3 6 8
+ 3 4 4 7
+ 4 5 2 6
>>> df.query('A > B')
- A B
- 4 5 2
+ A B C C
+ 4 5 2 6
The previous expression is equivalent to
>>> df[df.A > df.B]
- A B
- 4 5 2
+ A B C C
+ 4 5 2 6
+
+ For columns with spaces in their name, you can use backtick quoting.
+
+ >>> df.query('B == `C C`')
+ A B C C
+ 0 1 10 10
+
+ The previous expression is equivalent to
+
+ >>> df[df.B == df['C C']]
+ A B C C
+ 0 1 10 10
"""
inplace = validate_bool_kwarg(inplace, 'inplace')
if not isinstance(expr, compat.string_types):
@@ -3160,7 +3183,9 @@ def eval(self, expr, inplace=False, **kwargs):
kwargs['level'] = kwargs.pop('level', 0) + 1
if resolvers is None:
index_resolvers = self._get_index_resolvers()
- resolvers = dict(self.iteritems()), index_resolvers
+ column_resolvers = \
+ self._get_space_character_free_column_resolvers()
+ resolvers = column_resolvers, index_resolvers
if 'target' not in kwargs:
kwargs['target'] = self
kwargs['resolvers'] = kwargs.get('resolvers', ()) + tuple(resolvers)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index ac2ec40d6305d..f69ba51e59784 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -423,6 +423,18 @@ def _get_index_resolvers(self):
d.update(self._get_axis_resolvers(axis_name))
return d
+ def _get_space_character_free_column_resolvers(self):
+ """Return the space character free column resolvers of a dataframe.
+
+ Column names with spaces are 'cleaned up' so that they can be referred
+ to by backtick quoting.
+ Used in :meth:`DataFrame.eval`.
+ """
+ from pandas.core.computation.common import _remove_spaces_column_name
+
+ return {_remove_spaces_column_name(k): v for k, v
+ in self.iteritems()}
+
@property
def _info_axis(self):
return getattr(self, self._info_axis_name)
diff --git a/pandas/tests/frame/test_query_eval.py b/pandas/tests/frame/test_query_eval.py
index ba02cb54bcea1..a8a9a278a0ebb 100644
--- a/pandas/tests/frame/test_query_eval.py
+++ b/pandas/tests/frame/test_query_eval.py
@@ -1031,3 +1031,54 @@ def test_invalid_type_for_operator_raises(self, parser, engine, op):
with pytest.raises(TypeError, match=msg):
df.eval('a {0} b'.format(op), engine=engine, parser=parser)
+
+
+class TestDataFrameQueryBacktickQuoting(object):
+
+ @pytest.fixture(scope='class')
+ def df(self):
+ yield DataFrame({'A': [1, 2, 3],
+ 'B B': [3, 2, 1],
+ 'C C': [4, 5, 6],
+ 'C_C': [8, 9, 10],
+ 'D_D D': [11, 1, 101]})
+
+ def test_single_backtick_variable_query(self, df):
+ res = df.query('1 < `B B`')
+ expect = df[1 < df['B B']]
+ assert_frame_equal(res, expect)
+
+ def test_two_backtick_variables_query(self, df):
+ res = df.query('1 < `B B` and 4 < `C C`')
+ expect = df[(1 < df['B B']) & (4 < df['C C'])]
+ assert_frame_equal(res, expect)
+
+ def test_single_backtick_variable_expr(self, df):
+ res = df.eval('A + `B B`')
+ expect = df['A'] + df['B B']
+ assert_series_equal(res, expect)
+
+ def test_two_backtick_variables_expr(self, df):
+ res = df.eval('`B B` + `C C`')
+ expect = df['B B'] + df['C C']
+ assert_series_equal(res, expect)
+
+ def test_already_underscore_variable(self, df):
+ res = df.eval('`C_C` + A')
+ expect = df['C_C'] + df['A']
+ assert_series_equal(res, expect)
+
+ def test_same_name_but_underscores(self, df):
+ res = df.eval('C_C + `C C`')
+ expect = df['C_C'] + df['C C']
+ assert_series_equal(res, expect)
+
+ def test_mixed_underscores_and_spaces(self, df):
+ res = df.eval('A + `D_D D`')
+ expect = df['A'] + df['D_D D']
+ assert_series_equal(res, expect)
+
+    def test_backtick_quote_name_with_no_spaces(self, df):
+ res = df.eval('A + `C_C`')
+ expect = df['A'] + df['C_C']
+ assert_series_equal(res, expect)
| - [x] closes #6508
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
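
A minimal usage sketch of the new syntax (mirroring the added tests; the frame contents are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3], "B B": [3, 2, 1]})

# Backtick quoting lets query()/eval() reference columns whose names
# contain spaces.
df.query("A > `B B`")   # equivalent to df[df.A > df["B B"]]
df.eval("A + `B B`")    # elementwise sum of the two columns
```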
| https://api.github.com/repos/pandas-dev/pandas/pulls/24955 | 2019-01-26T22:55:13Z | 2019-03-20T12:24:18Z | 2019-03-20T12:24:17Z | 2019-03-20T12:24:22Z |
Refactor groupby helper from tempita to fused types | diff --git a/pandas/_libs/groupby.pyx b/pandas/_libs/groupby.pyx
index e6036654c71c3..950ba3f89ffb7 100644
--- a/pandas/_libs/groupby.pyx
+++ b/pandas/_libs/groupby.pyx
@@ -2,6 +2,7 @@
import cython
from cython import Py_ssize_t
+from cython cimport floating
from libc.stdlib cimport malloc, free
@@ -382,5 +383,55 @@ def group_any_all(uint8_t[:] out,
out[lab] = flag_val
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def _group_add(floating[:, :] out,
+ int64_t[:] counts,
+ floating[:, :] values,
+ const int64_t[:] labels,
+ Py_ssize_t min_count=0):
+ """
+ Only aggregates on axis=0
+ """
+ cdef:
+ Py_ssize_t i, j, N, K, lab, ncounts = len(counts)
+ floating val, count
+ ndarray[floating, ndim=2] sumx, nobs
+
+ if not len(values) == len(labels):
+ raise AssertionError("len(index) != len(labels)")
+
+ nobs = np.zeros_like(out)
+ sumx = np.zeros_like(out)
+
+ N, K = (<object>values).shape
+
+ with nogil:
+
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[lab, j] += 1
+ sumx[lab, j] += val
+
+ for i in range(ncounts):
+ for j in range(K):
+ if nobs[i, j] < min_count:
+ out[i, j] = NAN
+ else:
+ out[i, j] = sumx[i, j]
+
+
+group_add_float32 = _group_add['float']
+group_add_float64 = _group_add['double']
+
# generated from template
include "groupby_helper.pxi"
diff --git a/pandas/_libs/groupby_helper.pxi.in b/pandas/_libs/groupby_helper.pxi.in
index 858039f038d02..db7018e1a7254 100644
--- a/pandas/_libs/groupby_helper.pxi.in
+++ b/pandas/_libs/groupby_helper.pxi.in
@@ -9,7 +9,7 @@ cdef extern from "numpy/npy_math.h":
_int64_max = np.iinfo(np.int64).max
# ----------------------------------------------------------------------
-# group_add, group_prod, group_var, group_mean, group_ohlc
+# group_prod, group_var, group_mean, group_ohlc
# ----------------------------------------------------------------------
{{py:
@@ -27,53 +27,6 @@ def get_dispatch(dtypes):
{{for name, c_type in get_dispatch(dtypes)}}
-@cython.wraparound(False)
-@cython.boundscheck(False)
-def group_add_{{name}}({{c_type}}[:, :] out,
- int64_t[:] counts,
- {{c_type}}[:, :] values,
- const int64_t[:] labels,
- Py_ssize_t min_count=0):
- """
- Only aggregates on axis=0
- """
- cdef:
- Py_ssize_t i, j, N, K, lab, ncounts = len(counts)
- {{c_type}} val, count
- ndarray[{{c_type}}, ndim=2] sumx, nobs
-
- if not len(values) == len(labels):
- raise AssertionError("len(index) != len(labels)")
-
- nobs = np.zeros_like(out)
- sumx = np.zeros_like(out)
-
- N, K = (<object>values).shape
-
- with nogil:
-
- for i in range(N):
- lab = labels[i]
- if lab < 0:
- continue
-
- counts[lab] += 1
- for j in range(K):
- val = values[i, j]
-
- # not nan
- if val == val:
- nobs[lab, j] += 1
- sumx[lab, j] += val
-
- for i in range(ncounts):
- for j in range(K):
- if nobs[i, j] < min_count:
- out[i, j] = NAN
- else:
- out[i, j] = sumx[i, j]
-
-
@cython.wraparound(False)
@cython.boundscheck(False)
def group_prod_{{name}}({{c_type}}[:, :] out,
diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py
index 87f48d5a40554..78c9aa9187135 100644
--- a/pandas/core/groupby/ops.py
+++ b/pandas/core/groupby/ops.py
@@ -380,7 +380,7 @@ def get_func(fname):
# otherwise find dtype-specific version, falling back to object
for dt in [dtype_str, 'object']:
f = getattr(libgroupby, "{fname}_{dtype_str}".format(
- fname=fname, dtype_str=dtype_str), None)
+ fname=fname, dtype_str=dt), None)
if f is not None:
return f
| Refactoring groupby_helper to use fused types instead of tempita for functions such as group_add.
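
A minimal sketch of the pattern with a hypothetical `_demo_sum` (not part of this PR): one fused-type implementation replaces the tempita-generated float32/float64 copies, and indexing the fused function pulls out the concrete specializations, just like `_group_add['float']` above.

```cython
from cython cimport floating   # fused type covering float32 and float64

def _demo_sum(floating[:] values):
    cdef:
        Py_ssize_t i
        floating total = 0
    for i in range(values.shape[0]):
        total += values[i]
    return total

# Dtype-specialized versions, analogous to group_add_float32/float64.
demo_sum_float32 = _demo_sum['float']
demo_sum_float64 = _demo_sum['double']
```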
@jbrockmendel if this is what you meant in the previous [PR](https://github.com/pandas-dev/pandas/pull/24932#discussion_r251094683) I'll do the refactoring for the rest of the functions in groupby_helper.pxi.in. | https://api.github.com/repos/pandas-dev/pandas/pulls/24954 | 2019-01-26T19:10:46Z | 2019-02-09T18:00:16Z | 2019-02-09T18:00:16Z | 2019-02-09T18:00:16Z |
pivot_table very slow on Categorical data; how about an observed keyword argument? #24923 | diff --git a/asv_bench/benchmarks/reshape.py b/asv_bench/benchmarks/reshape.py
index bead5a5996d1a..678403d837805 100644
--- a/asv_bench/benchmarks/reshape.py
+++ b/asv_bench/benchmarks/reshape.py
@@ -127,6 +127,10 @@ def setup(self):
'value1': np.random.randn(N),
'value2': np.random.randn(N),
'value3': np.random.randn(N)})
+ self.df2 = DataFrame({'col1': list('abcde'), 'col2': list('fghij'),
+ 'col3': [1, 2, 3, 4, 5]})
+ self.df2.col1 = self.df2.col1.astype('category')
+ self.df2.col2 = self.df2.col2.astype('category')
def time_pivot_table(self):
self.df.pivot_table(index='key1', columns=['key2', 'key3'])
@@ -139,6 +143,14 @@ def time_pivot_table_margins(self):
self.df.pivot_table(index='key1', columns=['key2', 'key3'],
margins=True)
+ def time_pivot_table_categorical(self):
+ self.df2.pivot_table(index='col1', values='col3', columns='col2',
+ aggfunc=np.sum, fill_value=0)
+
+ def time_pivot_table_categorical_observed(self):
+ self.df2.pivot_table(index='col1', values='col3', columns='col2',
+ aggfunc=np.sum, fill_value=0, observed=True)
+
class Crosstab:
diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
index 9afcf3ddcdbb1..fb36a083ec290 100644
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -28,6 +28,7 @@ Other Enhancements
- Indexing of ``DataFrame`` and ``Series`` now accepts zerodim ``np.ndarray`` (:issue:`24919`)
- :meth:`Timestamp.replace` now supports the ``fold`` argument to disambiguate DST transition times (:issue:`25017`)
- :meth:`DataFrame.at_time` and :meth:`Series.at_time` now support :meth:`datetime.time` objects with timezones (:issue:`24043`)
+- :meth:`DataFrame.pivot_table` now accepts an ``observed`` parameter which is passed to underlying calls to :meth:`DataFrame.groupby` to speed up grouping categorical data. (:issue:`24923`)
- ``Series.str`` has gained :meth:`Series.str.casefold` method to removes all case distinctions present in a string (:issue:`25405`)
- :meth:`DataFrame.set_index` now works for instances of ``abc.Iterator``, provided their output is of the same length as the calling frame (:issue:`22484`, :issue:`24984`)
- :meth:`DatetimeIndex.union` now supports the ``sort`` argument. The behaviour of the sort parameter matches that of :meth:`Index.union` (:issue:`24994`)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 2008c444fad5e..b127a1d28e22b 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -5701,6 +5701,12 @@ def pivot(self, index=None, columns=None, values=None):
margins_name : string, default 'All'
Name of the row / column that will contain the totals
when margins is True.
+ observed : boolean, default False
+ This only applies if any of the groupers are Categoricals.
+ If True: only show observed values for categorical groupers.
+ If False: show all values for categorical groupers.
+
+            .. versionadded:: 0.25.0
Returns
-------
@@ -5791,12 +5797,12 @@ def pivot(self, index=None, columns=None, values=None):
@Appender(_shared_docs['pivot_table'])
def pivot_table(self, values=None, index=None, columns=None,
aggfunc='mean', fill_value=None, margins=False,
- dropna=True, margins_name='All'):
+ dropna=True, margins_name='All', observed=False):
from pandas.core.reshape.pivot import pivot_table
return pivot_table(self, values=values, index=index, columns=columns,
aggfunc=aggfunc, fill_value=fill_value,
margins=margins, dropna=dropna,
- margins_name=margins_name)
+ margins_name=margins_name, observed=observed)
def stack(self, level=-1, dropna=True):
"""
diff --git a/pandas/core/reshape/pivot.py b/pandas/core/reshape/pivot.py
index 3aaae3b59a0d4..be0d74b460850 100644
--- a/pandas/core/reshape/pivot.py
+++ b/pandas/core/reshape/pivot.py
@@ -22,7 +22,7 @@
@Appender(_shared_docs['pivot_table'], indents=1)
def pivot_table(data, values=None, index=None, columns=None, aggfunc='mean',
fill_value=None, margins=False, dropna=True,
- margins_name='All'):
+ margins_name='All', observed=False):
index = _convert_by(index)
columns = _convert_by(columns)
@@ -34,7 +34,8 @@ def pivot_table(data, values=None, index=None, columns=None, aggfunc='mean',
columns=columns,
fill_value=fill_value, aggfunc=func,
margins=margins, dropna=dropna,
- margins_name=margins_name)
+ margins_name=margins_name,
+ observed=observed)
pieces.append(table)
keys.append(getattr(func, '__name__', func))
@@ -77,7 +78,7 @@ def pivot_table(data, values=None, index=None, columns=None, aggfunc='mean',
pass
values = list(values)
- grouped = data.groupby(keys, observed=False)
+ grouped = data.groupby(keys, observed=observed)
agged = grouped.agg(aggfunc)
if dropna and isinstance(agged, ABCDataFrame) and len(agged.columns):
agged = agged.dropna(how='all')
diff --git a/pandas/tests/reshape/test_pivot.py b/pandas/tests/reshape/test_pivot.py
index 5b757ac156078..64374cd9646eb 100644
--- a/pandas/tests/reshape/test_pivot.py
+++ b/pandas/tests/reshape/test_pivot.py
@@ -37,18 +37,18 @@ def setup_method(self, method):
'E': np.random.randn(11),
'F': np.random.randn(11)})
- def test_pivot_table(self):
+ def test_pivot_table(self, observed):
index = ['A', 'B']
columns = 'C'
table = pivot_table(self.data, values='D',
- index=index, columns=columns)
+ index=index, columns=columns, observed=observed)
table2 = self.data.pivot_table(
- values='D', index=index, columns=columns)
+ values='D', index=index, columns=columns, observed=observed)
tm.assert_frame_equal(table, table2)
# this works
- pivot_table(self.data, values='D', index=index)
+ pivot_table(self.data, values='D', index=index, observed=observed)
if len(index) > 1:
assert table.index.names == tuple(index)
@@ -64,6 +64,28 @@ def test_pivot_table(self):
index + [columns])['D'].agg(np.mean).unstack()
tm.assert_frame_equal(table, expected)
+ def test_pivot_table_categorical_observed_equal(self, observed):
+ # issue #24923
+ df = pd.DataFrame({'col1': list('abcde'),
+ 'col2': list('fghij'),
+ 'col3': [1, 2, 3, 4, 5]})
+
+ expected = df.pivot_table(index='col1', values='col3',
+ columns='col2', aggfunc=np.sum,
+ fill_value=0)
+
+ expected.index = expected.index.astype('category')
+ expected.columns = expected.columns.astype('category')
+
+ df.col1 = df.col1.astype('category')
+ df.col2 = df.col2.astype('category')
+
+ result = df.pivot_table(index='col1', values='col3',
+ columns='col2', aggfunc=np.sum,
+ fill_value=0, observed=observed)
+
+ tm.assert_frame_equal(result, expected)
+
def test_pivot_table_nocols(self):
df = DataFrame({'rows': ['a', 'b', 'c'],
'cols': ['x', 'y', 'z'],
| - [x] closes #24923
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
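
A minimal usage sketch of the new keyword (hypothetical data; as I understand it, `observed=True` limits the result to categories that actually occur, which is what makes grouping wide categoricals fast):

```python
import numpy as np
import pandas as pd

# 'cat' declares three categories, but only two ever appear in the data.
df = pd.DataFrame({
    "cat": pd.Categorical(["a", "a", "b"], categories=["a", "b", "c"]),
    "val": [1, 2, 3],
})

# Default (observed=False): the unused category 'c' still shows up.
print(df.pivot_table(index="cat", values="val", aggfunc=np.sum))

# observed=True: only categories present in the data are grouped.
print(df.pivot_table(index="cat", values="val", aggfunc=np.sum,
                     observed=True))
```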
| https://api.github.com/repos/pandas-dev/pandas/pulls/24953 | 2019-01-26T17:56:25Z | 2019-04-26T01:16:10Z | 2019-04-26T01:16:10Z | 2019-04-26T01:16:25Z |
DOC/CLN: Fix errors in DataFrame docstrings | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 85aa13526e77c..242f4e7b605c2 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1065,7 +1065,7 @@ def from_dict(cls, data, orient='columns', dtype=None, columns=None):
Returns
-------
- pandas.DataFrame
+ DataFrame
See Also
--------
@@ -1145,7 +1145,7 @@ def to_numpy(self, dtype=None, copy=False):
Returns
-------
- array : numpy.ndarray
+ numpy.ndarray
See Also
--------
@@ -1439,7 +1439,7 @@ def from_records(cls, data, index=None, exclude=None, columns=None,
Returns
-------
- df : DataFrame
+ DataFrame
"""
# Make a copy of the input columns so we can modify it
@@ -1755,7 +1755,7 @@ def from_items(cls, items, columns=None, orient='columns'):
Returns
-------
- frame : DataFrame
+ DataFrame
"""
warnings.warn("from_items is deprecated. Please use "
@@ -1866,7 +1866,7 @@ def from_csv(cls, path, header=0, sep=',', index_col=0, parse_dates=True,
Returns
-------
- y : DataFrame
+ DataFrame
See Also
--------
@@ -1956,7 +1956,7 @@ def to_panel(self):
Returns
-------
- panel : Panel
+ Panel
"""
raise NotImplementedError("Panel is being removed in pandas 0.25.0.")
@@ -2478,7 +2478,7 @@ def memory_usage(self, index=True, deep=False):
Returns
-------
- sizes : Series
+ Series
A Series whose index is the original column names and whose values
is the memory usage of each column in bytes.
@@ -2696,7 +2696,7 @@ def get_value(self, index, col, takeable=False):
Returns
-------
- value : scalar value
+ scalar value
"""
warnings.warn("get_value is deprecated and will be removed "
@@ -2741,7 +2741,7 @@ def set_value(self, index, col, value, takeable=False):
Returns
-------
- frame : DataFrame
+ DataFrame
If label pair is contained, will be reference to calling DataFrame,
otherwise a new object
"""
@@ -3177,7 +3177,7 @@ def select_dtypes(self, include=None, exclude=None):
Returns
-------
- subset : DataFrame
+ DataFrame
The subset of the frame including the dtypes in ``include`` and
excluding the dtypes in ``exclude``.
@@ -3542,7 +3542,7 @@ def _sanitize_column(self, key, value, broadcast=True):
Returns
-------
- sanitized_column : numpy-array
+ numpy.ndarray
"""
def reindexer(value):
@@ -3811,7 +3811,7 @@ def drop(self, labels=None, axis=0, index=None, columns=None,
Returns
-------
- dropped : pandas.DataFrame
+ DataFrame
Raises
------
@@ -3936,7 +3936,7 @@ def rename(self, *args, **kwargs):
Returns
-------
- renamed : DataFrame
+ DataFrame
See Also
--------
@@ -4579,7 +4579,7 @@ def drop_duplicates(self, subset=None, keep='first', inplace=False):
Returns
-------
- deduplicated : DataFrame
+ DataFrame
"""
if self.empty:
return self.copy()
@@ -4613,7 +4613,7 @@ def duplicated(self, subset=None, keep='first'):
Returns
-------
- duplicated : Series
+ Series
"""
from pandas.core.sorting import get_group_index
from pandas._libs.hashtable import duplicated_int64, _SIZE_HINT_LIMIT
@@ -4981,7 +4981,7 @@ def swaplevel(self, i=-2, j=-1, axis=0):
Returns
-------
- swapped : same type as caller (new object)
+ DataFrame
.. versionchanged:: 0.18.1
@@ -5260,7 +5260,7 @@ def combine_first(self, other):
Returns
-------
- combined : DataFrame
+ DataFrame
See Also
--------
@@ -5621,7 +5621,7 @@ def pivot(self, index=None, columns=None, values=None):
Returns
-------
- table : DataFrame
+ DataFrame
See Also
--------
@@ -5907,7 +5907,7 @@ def unstack(self, level=-1, fill_value=None):
Returns
-------
- unstacked : DataFrame or Series
+ Series or DataFrame
See Also
--------
@@ -6073,7 +6073,7 @@ def diff(self, periods=1, axis=0):
Returns
-------
- diffed : DataFrame
+ DataFrame
See Also
--------
@@ -6345,7 +6345,7 @@ def apply(self, func, axis=0, broadcast=None, raw=False, reduce=None,
Returns
-------
- applied : Series or DataFrame
+ Series or DataFrame
See Also
--------
@@ -6538,7 +6538,7 @@ def append(self, other, ignore_index=False,
Returns
-------
- appended : DataFrame
+ DataFrame
See Also
--------
@@ -6956,12 +6956,13 @@ def corr(self, method='pearson', min_periods=1):
min_periods : int, optional
Minimum number of observations required per pair of columns
- to have a valid result. Currently only available for pearson
- and spearman correlation
+ to have a valid result. Currently only available for Pearson
+ and Spearman correlation.
Returns
-------
- y : DataFrame
+ DataFrame
+ Correlation matrix.
See Also
--------
@@ -6970,14 +6971,15 @@ def corr(self, method='pearson', min_periods=1):
Examples
--------
- >>> histogram_intersection = lambda a, b: np.minimum(a, b
- ... ).sum().round(decimals=1)
+ >>> def histogram_intersection(a, b):
+ ... v = np.minimum(a, b).sum().round(decimals=1)
+ ... return v
>>> df = pd.DataFrame([(.2, .3), (.0, .6), (.6, .0), (.2, .1)],
... columns=['dogs', 'cats'])
>>> df.corr(method=histogram_intersection)
- dogs cats
- dogs 1.0 0.3
- cats 0.3 1.0
+ dogs cats
+ dogs 1.0 0.3
+ cats 0.3 1.0
"""
numeric_df = self._get_numeric_data()
cols = numeric_df.columns
@@ -7140,10 +7142,11 @@ def corrwith(self, other, axis=0, drop=False, method='pearson'):
Parameters
----------
other : DataFrame, Series
+ Object with which to compute correlations.
axis : {0 or 'index', 1 or 'columns'}, default 0
- 0 or 'index' to compute column-wise, 1 or 'columns' for row-wise
- drop : boolean, default False
- Drop missing indices from result
+ 0 or 'index' to compute column-wise, 1 or 'columns' for row-wise.
+ drop : bool, default False
+ Drop missing indices from result.
method : {'pearson', 'kendall', 'spearman'} or callable
* pearson : standard correlation coefficient
* kendall : Kendall Tau correlation coefficient
@@ -7155,7 +7158,8 @@ def corrwith(self, other, axis=0, drop=False, method='pearson'):
Returns
-------
- correls : Series
+ Series
+ Pairwise correlations.
See Also
-------
@@ -7485,7 +7489,7 @@ def nunique(self, axis=0, dropna=True):
Returns
-------
- nunique : Series
+ Series
See Also
--------
@@ -7523,7 +7527,8 @@ def idxmin(self, axis=0, skipna=True):
Returns
-------
- idxmin : Series
+ Series
+ Indexes of minima along the specified axis.
Raises
------
@@ -7559,7 +7564,8 @@ def idxmax(self, axis=0, skipna=True):
Returns
-------
- idxmax : Series
+ Series
+ Indexes of maxima along the specified axis.
Raises
------
@@ -7706,7 +7712,7 @@ def quantile(self, q=0.5, axis=0, numeric_only=True,
Returns
-------
- quantiles : Series or DataFrame
+ Series or DataFrame
If ``q`` is an array, a DataFrame will be returned where the
index is ``q``, the columns are the columns of self, and the
@@ -7776,19 +7782,19 @@ def to_timestamp(self, freq=None, how='start', axis=0, copy=True):
Parameters
----------
- freq : string, default frequency of PeriodIndex
- Desired frequency
+ freq : str, default frequency of PeriodIndex
+ Desired frequency.
how : {'s', 'e', 'start', 'end'}
Convention for converting period to timestamp; start of period
- vs. end
+ vs. end.
axis : {0 or 'index', 1 or 'columns'}, default 0
- The axis to convert (the index by default)
- copy : boolean, default True
- If false then underlying input data is not copied
+ The axis to convert (the index by default).
+ copy : bool, default True
+ If False then underlying input data is not copied.
Returns
-------
- df : DataFrame with DatetimeIndex
+ DataFrame with DatetimeIndex
"""
new_data = self._data
if copy:
@@ -7812,15 +7818,16 @@ def to_period(self, freq=None, axis=0, copy=True):
Parameters
----------
- freq : string, default
+        freq : str, optional
+ Frequency of the PeriodIndex.
axis : {0 or 'index', 1 or 'columns'}, default 0
- The axis to convert (the index by default)
- copy : boolean, default True
- If False then underlying input data is not copied
+ The axis to convert (the index by default).
+ copy : bool, default True
+ If False then underlying input data is not copied.
Returns
-------
- ts : TimeSeries with PeriodIndex
+ TimeSeries with PeriodIndex
"""
new_data = self._data
if copy:
@@ -7893,7 +7900,7 @@ def isin(self, values):
match. Note that 'falcon' does not match based on the number of legs
in df2.
- >>> other = pd.DataFrame({'num_legs': [8, 2],'num_wings': [0, 2]},
+ >>> other = pd.DataFrame({'num_legs': [8, 2], 'num_wings': [0, 2]},
... index=['spider', 'falcon'])
>>> df.isin(other)
num_legs num_wings
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index c886493f90eaf..1a404630b660e 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -774,18 +774,18 @@ def pop(self, item):
Parameters
----------
item : str
- Column label to be popped
+ Label of column to be popped.
Returns
-------
- popped : Series
+ Series
Examples
--------
- >>> df = pd.DataFrame([('falcon', 'bird', 389.0),
- ... ('parrot', 'bird', 24.0),
- ... ('lion', 'mammal', 80.5),
- ... ('monkey', 'mammal', np.nan)],
+ >>> df = pd.DataFrame([('falcon', 'bird', 389.0),
+ ... ('parrot', 'bird', 24.0),
+ ... ('lion', 'mammal', 80.5),
+        ...                    ('monkey', 'mammal', np.nan)],
... columns=('name', 'class', 'max_speed'))
>>> df
name class max_speed
@@ -937,7 +937,7 @@ def swaplevel(self, i=-2, j=-1, axis=0):
Parameters
----------
- i, j : int, string (can be mixed)
+ i, j : int, str (can be mixed)
Level of index to be swapped. Can pass level name as string.
Returns
@@ -973,9 +973,9 @@ def rename(self, *args, **kwargs):
and raise on DataFrame or Panel.
dict-like or functions are transformations to apply to
that axis' values
- copy : boolean, default True
- Also copy underlying data
- inplace : boolean, default False
+ copy : bool, default True
+ Also copy underlying data.
+ inplace : bool, default False
Whether to return a new %(klass)s. If True then value of copy is
ignored.
level : int or level name, default None
@@ -2947,7 +2947,7 @@ def to_csv(self, path_or_buf=None, sep=",", na_rep='', float_format=None,
will treat them as non-numeric.
quotechar : str, default '\"'
String of length 1. Character used to quote fields.
- line_terminator : string, optional
+ line_terminator : str, optional
The newline character or character sequence to use in the output
file. Defaults to `os.linesep`, which depends on the OS in which
this method is called ('\n' for linux, '\r\n' for Windows, i.e.).
@@ -10282,7 +10282,7 @@ def _doc_parms(cls):
Parameters
----------
axis : %(axis_descr)s
-skipna : boolean, default True
+skipna : bool, default True
Exclude NA/null values. If an entire row/column is NA, the result
will be NA
level : int or level name, default None
@@ -10291,7 +10291,7 @@ def _doc_parms(cls):
ddof : int, default 1
Delta Degrees of Freedom. The divisor used in calculations is N - ddof,
where N represents the number of elements.
-numeric_only : boolean, default None
+numeric_only : bool, default None
Include only float, int, boolean columns. If None, will attempt to use
everything, then use only numeric data. Not implemented for Series.
| A few small edits to `DataFrame` docstrings | https://api.github.com/repos/pandas-dev/pandas/pulls/24952 | 2019-01-26T16:13:28Z | 2019-02-14T07:05:41Z | 2019-02-14T07:05:41Z | 2019-02-14T14:52:02Z |
Backport PR #24916 on branch 0.24.x (BUG-24212 fix regression in #24897) | diff --git a/doc/source/whatsnew/v0.24.1.rst b/doc/source/whatsnew/v0.24.1.rst
index ee4b7ab62b31a..3ac2ed73ea53f 100644
--- a/doc/source/whatsnew/v0.24.1.rst
+++ b/doc/source/whatsnew/v0.24.1.rst
@@ -63,6 +63,9 @@ Bug Fixes
-
-
+**Reshaping**
+
+- Bug in :func:`merge` when merging by index name would sometimes result in an incorrectly numbered index (:issue:`24212`)
**Other**
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index e11847d2b8ce2..1dd19a7c1514e 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -757,13 +757,21 @@ def _get_join_info(self):
if self.right_index:
if len(self.left) > 0:
- join_index = self.left.index.take(left_indexer)
+ join_index = self._create_join_index(self.left.index,
+ self.right.index,
+ left_indexer,
+ right_indexer,
+ how='right')
else:
join_index = self.right.index.take(right_indexer)
left_indexer = np.array([-1] * len(join_index))
elif self.left_index:
if len(self.right) > 0:
- join_index = self.right.index.take(right_indexer)
+ join_index = self._create_join_index(self.right.index,
+ self.left.index,
+ right_indexer,
+ left_indexer,
+ how='left')
else:
join_index = self.left.index.take(left_indexer)
right_indexer = np.array([-1] * len(join_index))
@@ -774,6 +782,39 @@ def _get_join_info(self):
join_index = join_index.astype(object)
return join_index, left_indexer, right_indexer
+ def _create_join_index(self, index, other_index, indexer,
+ other_indexer, how='left'):
+ """
+        Create a join index by rearranging one index to match another.
+
+ Parameters
+ ----------
+        index : Index being rearranged
+        other_index : Index used to supply values not found in index
+        indexer : how to rearrange index
+        other_indexer : how to rearrange other_index
+        how : replacement is only necessary if indexer based on other_index
+
+ Returns
+ -------
+ join_index
+ """
+ join_index = index.take(indexer)
+ if (self.how in (how, 'outer') and
+ not isinstance(other_index, MultiIndex)):
+ # if final index requires values in other_index but not target
+ # index, indexer may hold missing (-1) values, causing Index.take
+ # to take the final value in target index
+ mask = indexer == -1
+ if np.any(mask):
+ # if values missing (-1) from target index,
+ # take from other_index instead
+ join_list = join_index.to_numpy()
+ other_list = other_index.take(other_indexer).to_numpy()
+ join_list[mask] = other_list[mask]
+ join_index = Index(join_list, dtype=join_index.dtype,
+ name=join_index.name)
+ return join_index
+
def _get_merge_keys(self):
"""
Note: has side effects (copy/delete key columns)
diff --git a/pandas/tests/reshape/merge/test_merge.py b/pandas/tests/reshape/merge/test_merge.py
index f0a3ddc8ce8a4..c17c301968269 100644
--- a/pandas/tests/reshape/merge/test_merge.py
+++ b/pandas/tests/reshape/merge/test_merge.py
@@ -939,25 +939,22 @@ def test_merge_two_empty_df_no_division_error(self):
with np.errstate(divide='raise'):
merge(a, a, on=('a', 'b'))
- @pytest.mark.parametrize('how', ['left', 'outer'])
- @pytest.mark.xfail(reason="GH-24897")
+ @pytest.mark.parametrize('how', ['right', 'outer'])
def test_merge_on_index_with_more_values(self, how):
# GH 24212
- # pd.merge gets [-1, -1, 0, 1] as right_indexer, ensure that -1 is
- # interpreted as a missing value instead of the last element
- df1 = pd.DataFrame([[1, 2], [2, 4], [3, 6], [4, 8]],
- columns=['a', 'b'])
- df2 = pd.DataFrame([[3, 30], [4, 40]],
- columns=['a', 'c'])
- df1.set_index('a', drop=False, inplace=True)
- df2.set_index('a', inplace=True)
- result = pd.merge(df1, df2, left_index=True, right_on='a', how=how)
- expected = pd.DataFrame([[1, 2, np.nan],
- [2, 4, np.nan],
- [3, 6, 30.0],
- [4, 8, 40.0]],
- columns=['a', 'b', 'c'])
- expected.set_index('a', drop=False, inplace=True)
+ # pd.merge gets [0, 1, 2, -1, -1, -1] as left_indexer, ensure that
+ # -1 is interpreted as a missing value instead of the last element
+ df1 = pd.DataFrame({'a': [1, 2, 3], 'key': [0, 2, 2]})
+ df2 = pd.DataFrame({'b': [1, 2, 3, 4, 5]})
+ result = df1.merge(df2, left_on='key', right_index=True, how=how)
+ expected = pd.DataFrame([[1.0, 0, 1],
+ [2.0, 2, 3],
+ [3.0, 2, 3],
+ [np.nan, 1, 2],
+ [np.nan, 3, 4],
+ [np.nan, 4, 5]],
+ columns=['a', 'key', 'b'])
+ expected.set_index(Int64Index([0, 1, 2, 1, 3, 4]), inplace=True)
assert_frame_equal(result, expected)
def test_merge_right_index_right(self):
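
A minimal repro sketch of the regression (same data as the reworked test above; before this fix, the `-1` entries in the indexer made `Index.take` grab the *last* label instead of treating the row as missing):

```python
import pandas as pd

df1 = pd.DataFrame({"a": [1, 2, 3], "key": [0, 2, 2]})
df2 = pd.DataFrame({"b": [1, 2, 3, 4, 5]})

# Rows of df2 with no matching 'key' should keep their own index labels.
result = df1.merge(df2, left_on="key", right_index=True, how="right")
print(result.index)  # expected: Int64Index([0, 1, 2, 1, 3, 4])
```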
| Backport PR #24916: BUG-24212 fix regression in #24897 | https://api.github.com/repos/pandas-dev/pandas/pulls/24951 | 2019-01-26T14:58:41Z | 2019-01-26T17:47:00Z | 2019-01-26T17:47:00Z | 2023-05-11T01:18:39Z |
[DOC] Fix issues with DataFrame.aggregate page | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index a239ff4b4d5db..b5bfb74ee24d2 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -6172,16 +6172,7 @@ def _gotitem(self,
# TODO: _shallow_copy(subset)?
return subset[key]
- _agg_summary_and_see_also_doc = dedent("""
- The aggregation operations are always performed over an axis, either the
- index (default) or the column axis. This behavior is different from
- `numpy` aggregation functions (`mean`, `median`, `prod`, `sum`, `std`,
- `var`), where the default is to compute the aggregation of the flattened
- array, e.g., ``numpy.mean(arr_2d)`` as opposed to ``numpy.mean(arr_2d,
- axis=0)``.
-
- `agg` is an alias for `aggregate`. Use the alias.
-
+ _agg_see_also_doc = dedent("""
See Also
--------
DataFrame.apply : Perform any type of operations.
@@ -6226,9 +6217,19 @@ def _gotitem(self,
2 8.0
3 NaN
dtype: float64
+ """)
+ _agg_summary = dedent("""
+ The aggregation operations are always performed over an axis, either
+ the index (default) or the column axis. This behavior is different from
+ `numpy` aggregation functions (`mean`, `median`, `prod`, `sum`, `std`,
+ `var`), where the default is to compute the aggregation of the
+ flattened array, e.g., ``numpy.mean(arr_2d)`` as opposed to
+ ``numpy.mean(arr_2d, axis=0)``.
+
""")
- @Substitution(see_also=_agg_summary_and_see_also_doc,
+ @Substitution(summary=_agg_summary,
+ see_also=_agg_see_also_doc,
examples=_agg_examples_doc,
versionadded='.. versionadded:: 0.20.0',
**_shared_doc_kwargs)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 6e79c02d7dbdd..d127ae02219ae 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -4926,6 +4926,8 @@ def pipe(self, func, *args, **kwargs):
_shared_docs['aggregate'] = dedent("""
Aggregate using one or more operations over the specified axis.
+ %(summary)s
+
%(versionadded)s
Parameters
@@ -4949,10 +4951,10 @@ def pipe(self, func, *args, **kwargs):
Returns
-------
DataFrame, Series or scalar
- If DataFrame.agg is called with a single function, returns a Series
- If DataFrame.agg is called with several functions, returns a DataFrame
- If Series.agg is called with single function, returns a scalar
- If Series.agg is called with several functions, returns a Series.
+ - If DataFrame.agg is called with a single function, returns a Series
+ - If DataFrame.agg is called with several functions, returns a DataFrame
+        - If Series.agg is called with a single function, returns a scalar
+ - If Series.agg is called with several functions, returns a Series
%(see_also)s
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 52056a6842ed9..317e6f3b39f35 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -747,7 +747,8 @@ def _selection_name(self):
def apply(self, func, *args, **kwargs):
return super(SeriesGroupBy, self).apply(func, *args, **kwargs)
- @Substitution(see_also=_agg_see_also_doc,
+ @Substitution(summary='',
+ see_also=_agg_see_also_doc,
examples=_agg_examples_doc,
versionadded='',
klass='Series',
@@ -1306,7 +1307,8 @@ class DataFrameGroupBy(NDFrameGroupBy):
2 3 4 0.704907
""")
- @Substitution(see_also=_agg_see_also_doc,
+ @Substitution(summary='',
+ see_also=_agg_see_also_doc,
examples=_agg_examples_doc,
versionadded='',
klass='DataFrame',
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index b3b28d7772713..a206c949d6bf9 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -254,7 +254,8 @@ def pipe(self, func, *args, **kwargs):
2013-01-01 00:00:04 5 NaN
""")
- @Substitution(see_also=_agg_see_also_doc,
+ @Substitution(summary='',
+ see_also=_agg_see_also_doc,
examples=_agg_examples_doc,
versionadded='',
klass='DataFrame',
diff --git a/pandas/core/series.py b/pandas/core/series.py
index a5dfe8d43c336..2511f3acaac2e 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -3492,7 +3492,8 @@ def _gotitem(self, key, ndim, subset=None):
dtype: int64
""")
- @Substitution(see_also=_agg_see_also_doc,
+ @Substitution(summary='',
+ see_also=_agg_see_also_doc,
examples=_agg_examples_doc,
versionadded='.. versionadded:: 0.20.0',
**_shared_doc_kwargs)
diff --git a/pandas/core/window.py b/pandas/core/window.py
index 9e29fdb94c1e0..cd266f09532fa 100644
--- a/pandas/core/window.py
+++ b/pandas/core/window.py
@@ -733,7 +733,8 @@ def f(arg, *args, **kwargs):
9 0.070889 0.134399 -0.031308
""")
- @Substitution(see_also=_agg_see_also_doc,
+ @Substitution(summary='',
+ see_also=_agg_see_also_doc,
examples=_agg_examples_doc,
versionadded='',
klass='Series/DataFrame',
@@ -1675,7 +1676,8 @@ def _validate_freq(self):
9 0.212668 -1.647453
""")
- @Substitution(see_also=_agg_see_also_doc,
+ @Substitution(summary='',
+ see_also=_agg_see_also_doc,
examples=_agg_examples_doc,
versionadded='',
klass='Series/Dataframe',
@@ -1953,7 +1955,8 @@ def _get_window(self, other=None):
9 -0.286980 0.618493 -0.694496
""")
- @Substitution(see_also=_agg_see_also_doc,
+ @Substitution(summary='',
+ see_also=_agg_see_also_doc,
examples=_agg_examples_doc,
versionadded='',
klass='Series/Dataframe',
@@ -2265,7 +2268,8 @@ def _constructor(self):
9 -0.286980 0.618493 -0.694496
""")
- @Substitution(see_also=_agg_see_also_doc,
+ @Substitution(summary='',
+ see_also=_agg_see_also_doc,
examples=_agg_examples_doc,
versionadded='',
klass='Series/Dataframe',
| Fixed some formatting issues. The description was getting rendered in the
wrong location.
Moved a line from generic.py to frame.py to help with the rendering, and
moved the summary that was clubbed together with the See Also section into
the docstring next to the function declaration in frame.py. A quick sketch
of the return-type rules the corrected docstring describes is shown below.
- closes #24668
- passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
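
A quick sketch of those return-type rules (toy data, illustration only):

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
df.agg("sum")            # single function on a DataFrame -> Series
df.agg(["sum", "min"])   # several functions on a DataFrame -> DataFrame

ser = pd.Series([1, 2, 3])
ser.agg("sum")           # single function on a Series -> scalar
ser.agg(["sum", "min"])  # several functions on a Series -> Series
```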
| https://api.github.com/repos/pandas-dev/pandas/pulls/24948 | 2019-01-26T13:24:08Z | 2019-03-28T05:33:45Z | null | 2019-03-28T05:33:45Z |
TST/REF: Add pytest idiom to test_numeric.py | diff --git a/pandas/tests/tools/test_numeric.py b/pandas/tests/tools/test_numeric.py
index 537881f3a5e85..3822170d884aa 100644
--- a/pandas/tests/tools/test_numeric.py
+++ b/pandas/tests/tools/test_numeric.py
@@ -5,436 +5,461 @@
import pytest
import pandas as pd
-from pandas import to_numeric
+from pandas import DataFrame, Index, Series, to_numeric
from pandas.util import testing as tm
-class TestToNumeric(object):
-
- def test_empty(self):
- # see gh-16302
- s = pd.Series([], dtype=object)
-
- res = to_numeric(s)
- expected = pd.Series([], dtype=np.int64)
-
- tm.assert_series_equal(res, expected)
-
- # Original issue example
- res = to_numeric(s, errors='coerce', downcast='integer')
- expected = pd.Series([], dtype=np.int8)
-
- tm.assert_series_equal(res, expected)
-
- def test_series(self):
- s = pd.Series(['1', '-3.14', '7'])
- res = to_numeric(s)
- expected = pd.Series([1, -3.14, 7])
- tm.assert_series_equal(res, expected)
-
- s = pd.Series(['1', '-3.14', 7])
- res = to_numeric(s)
- tm.assert_series_equal(res, expected)
-
- def test_series_numeric(self):
- s = pd.Series([1, 3, 4, 5], index=list('ABCD'), name='XXX')
- res = to_numeric(s)
- tm.assert_series_equal(res, s)
-
- s = pd.Series([1., 3., 4., 5.], index=list('ABCD'), name='XXX')
- res = to_numeric(s)
- tm.assert_series_equal(res, s)
-
- # bool is regarded as numeric
- s = pd.Series([True, False, True, True],
- index=list('ABCD'), name='XXX')
- res = to_numeric(s)
- tm.assert_series_equal(res, s)
-
- def test_error(self):
- s = pd.Series([1, -3.14, 'apple'])
- msg = 'Unable to parse string "apple" at position 2'
- with pytest.raises(ValueError, match=msg):
- to_numeric(s, errors='raise')
-
- res = to_numeric(s, errors='ignore')
- expected = pd.Series([1, -3.14, 'apple'])
- tm.assert_series_equal(res, expected)
-
- res = to_numeric(s, errors='coerce')
- expected = pd.Series([1, -3.14, np.nan])
- tm.assert_series_equal(res, expected)
-
- s = pd.Series(['orange', 1, -3.14, 'apple'])
- msg = 'Unable to parse string "orange" at position 0'
- with pytest.raises(ValueError, match=msg):
- to_numeric(s, errors='raise')
-
- def test_error_seen_bool(self):
- s = pd.Series([True, False, 'apple'])
- msg = 'Unable to parse string "apple" at position 2'
- with pytest.raises(ValueError, match=msg):
- to_numeric(s, errors='raise')
-
- res = to_numeric(s, errors='ignore')
- expected = pd.Series([True, False, 'apple'])
- tm.assert_series_equal(res, expected)
-
- # coerces to float
- res = to_numeric(s, errors='coerce')
- expected = pd.Series([1., 0., np.nan])
- tm.assert_series_equal(res, expected)
-
- def test_list(self):
- s = ['1', '-3.14', '7']
- res = to_numeric(s)
- expected = np.array([1, -3.14, 7])
- tm.assert_numpy_array_equal(res, expected)
-
- def test_list_numeric(self):
- s = [1, 3, 4, 5]
- res = to_numeric(s)
- tm.assert_numpy_array_equal(res, np.array(s, dtype=np.int64))
-
- s = [1., 3., 4., 5.]
- res = to_numeric(s)
- tm.assert_numpy_array_equal(res, np.array(s))
-
- # bool is regarded as numeric
- s = [True, False, True, True]
- res = to_numeric(s)
- tm.assert_numpy_array_equal(res, np.array(s))
-
- def test_numeric(self):
- s = pd.Series([1, -3.14, 7], dtype='O')
- res = to_numeric(s)
- expected = pd.Series([1, -3.14, 7])
- tm.assert_series_equal(res, expected)
-
- s = pd.Series([1, -3.14, 7])
- res = to_numeric(s)
- tm.assert_series_equal(res, expected)
-
- # GH 14827
- df = pd.DataFrame(dict(
- a=[1.2, decimal.Decimal(3.14), decimal.Decimal("infinity"), '0.1'],
- b=[1.0, 2.0, 3.0, 4.0],
- ))
- expected = pd.DataFrame(dict(
- a=[1.2, 3.14, np.inf, 0.1],
- b=[1.0, 2.0, 3.0, 4.0],
- ))
-
- # Test to_numeric over one column
- df_copy = df.copy()
- df_copy['a'] = df_copy['a'].apply(to_numeric)
- tm.assert_frame_equal(df_copy, expected)
-
- # Test to_numeric over multiple columns
- df_copy = df.copy()
- df_copy[['a', 'b']] = df_copy[['a', 'b']].apply(to_numeric)
- tm.assert_frame_equal(df_copy, expected)
-
- def test_numeric_lists_and_arrays(self):
- # Test to_numeric with embedded lists and arrays
- df = pd.DataFrame(dict(
- a=[[decimal.Decimal(3.14), 1.0], decimal.Decimal(1.6), 0.1]
- ))
- df['a'] = df['a'].apply(to_numeric)
- expected = pd.DataFrame(dict(
- a=[[3.14, 1.0], 1.6, 0.1],
- ))
- tm.assert_frame_equal(df, expected)
-
- df = pd.DataFrame(dict(
- a=[np.array([decimal.Decimal(3.14), 1.0]), 0.1]
- ))
- df['a'] = df['a'].apply(to_numeric)
- expected = pd.DataFrame(dict(
- a=[[3.14, 1.0], 0.1],
- ))
- tm.assert_frame_equal(df, expected)
-
- def test_all_nan(self):
- s = pd.Series(['a', 'b', 'c'])
- res = to_numeric(s, errors='coerce')
- expected = pd.Series([np.nan, np.nan, np.nan])
- tm.assert_series_equal(res, expected)
-
- @pytest.mark.parametrize("errors", [None, "ignore", "raise", "coerce"])
- def test_type_check(self, errors):
- # see gh-11776
- df = pd.DataFrame({"a": [1, -3.14, 7], "b": ["4", "5", "6"]})
- kwargs = dict(errors=errors) if errors is not None else dict()
- error_ctx = pytest.raises(TypeError, match="1-d array")
-
- with error_ctx:
- to_numeric(df, **kwargs)
-
- def test_scalar(self):
- assert pd.to_numeric(1) == 1
- assert pd.to_numeric(1.1) == 1.1
-
- assert pd.to_numeric('1') == 1
- assert pd.to_numeric('1.1') == 1.1
-
- with pytest.raises(ValueError):
- to_numeric('XX', errors='raise')
-
- assert to_numeric('XX', errors='ignore') == 'XX'
- assert np.isnan(to_numeric('XX', errors='coerce'))
-
- def test_numeric_dtypes(self):
- idx = pd.Index([1, 2, 3], name='xxx')
- res = pd.to_numeric(idx)
- tm.assert_index_equal(res, idx)
-
- res = pd.to_numeric(pd.Series(idx, name='xxx'))
- tm.assert_series_equal(res, pd.Series(idx, name='xxx'))
-
- res = pd.to_numeric(idx.values)
- tm.assert_numpy_array_equal(res, idx.values)
-
- idx = pd.Index([1., np.nan, 3., np.nan], name='xxx')
- res = pd.to_numeric(idx)
- tm.assert_index_equal(res, idx)
-
- res = pd.to_numeric(pd.Series(idx, name='xxx'))
- tm.assert_series_equal(res, pd.Series(idx, name='xxx'))
-
- res = pd.to_numeric(idx.values)
- tm.assert_numpy_array_equal(res, idx.values)
-
- def test_str(self):
- idx = pd.Index(['1', '2', '3'], name='xxx')
- exp = np.array([1, 2, 3], dtype='int64')
- res = pd.to_numeric(idx)
- tm.assert_index_equal(res, pd.Index(exp, name='xxx'))
-
- res = pd.to_numeric(pd.Series(idx, name='xxx'))
- tm.assert_series_equal(res, pd.Series(exp, name='xxx'))
-
- res = pd.to_numeric(idx.values)
- tm.assert_numpy_array_equal(res, exp)
-
- idx = pd.Index(['1.5', '2.7', '3.4'], name='xxx')
- exp = np.array([1.5, 2.7, 3.4])
- res = pd.to_numeric(idx)
- tm.assert_index_equal(res, pd.Index(exp, name='xxx'))
-
- res = pd.to_numeric(pd.Series(idx, name='xxx'))
- tm.assert_series_equal(res, pd.Series(exp, name='xxx'))
-
- res = pd.to_numeric(idx.values)
- tm.assert_numpy_array_equal(res, exp)
-
- def test_datetime_like(self, tz_naive_fixture):
- idx = pd.date_range("20130101", periods=3,
- tz=tz_naive_fixture, name="xxx")
- res = pd.to_numeric(idx)
- tm.assert_index_equal(res, pd.Index(idx.asi8, name="xxx"))
-
- res = pd.to_numeric(pd.Series(idx, name="xxx"))
- tm.assert_series_equal(res, pd.Series(idx.asi8, name="xxx"))
-
- res = pd.to_numeric(idx.values)
- tm.assert_numpy_array_equal(res, idx.asi8)
-
- def test_timedelta(self):
- idx = pd.timedelta_range('1 days', periods=3, freq='D', name='xxx')
- res = pd.to_numeric(idx)
- tm.assert_index_equal(res, pd.Index(idx.asi8, name='xxx'))
-
- res = pd.to_numeric(pd.Series(idx, name='xxx'))
- tm.assert_series_equal(res, pd.Series(idx.asi8, name='xxx'))
-
- res = pd.to_numeric(idx.values)
- tm.assert_numpy_array_equal(res, idx.asi8)
-
- def test_period(self):
- idx = pd.period_range('2011-01', periods=3, freq='M', name='xxx')
- res = pd.to_numeric(idx)
- tm.assert_index_equal(res, pd.Index(idx.asi8, name='xxx'))
-
- # TODO: enable when we can support native PeriodDtype
- # res = pd.to_numeric(pd.Series(idx, name='xxx'))
- # tm.assert_series_equal(res, pd.Series(idx.asi8, name='xxx'))
-
- def test_non_hashable(self):
- # Test for Bug #13324
- s = pd.Series([[10.0, 2], 1.0, 'apple'])
- res = pd.to_numeric(s, errors='coerce')
- tm.assert_series_equal(res, pd.Series([np.nan, 1.0, np.nan]))
-
- res = pd.to_numeric(s, errors='ignore')
- tm.assert_series_equal(res, pd.Series([[10.0, 2], 1.0, 'apple']))
-
- with pytest.raises(TypeError, match="Invalid object type"):
- pd.to_numeric(s)
-
- @pytest.mark.parametrize("data", [
- ["1", 2, 3],
- [1, 2, 3],
- np.array(["1970-01-02", "1970-01-03",
- "1970-01-04"], dtype="datetime64[D]")
- ])
- def test_downcast_basic(self, data):
- # see gh-13352
- invalid_downcast = "unsigned-integer"
- msg = "invalid downcasting method provided"
-
- with pytest.raises(ValueError, match=msg):
- pd.to_numeric(data, downcast=invalid_downcast)
-
- expected = np.array([1, 2, 3], dtype=np.int64)
-
- # Basic function tests.
- res = pd.to_numeric(data)
- tm.assert_numpy_array_equal(res, expected)
-
- res = pd.to_numeric(data, downcast=None)
- tm.assert_numpy_array_equal(res, expected)
-
- # Basic dtype support.
- smallest_uint_dtype = np.dtype(np.typecodes["UnsignedInteger"][0])
-
- # Support below np.float32 is rare and far between.
- float_32_char = np.dtype(np.float32).char
- smallest_float_dtype = float_32_char
-
- expected = np.array([1, 2, 3], dtype=smallest_uint_dtype)
- res = pd.to_numeric(data, downcast="unsigned")
- tm.assert_numpy_array_equal(res, expected)
-
- expected = np.array([1, 2, 3], dtype=smallest_float_dtype)
- res = pd.to_numeric(data, downcast="float")
- tm.assert_numpy_array_equal(res, expected)
-
- @pytest.mark.parametrize("signed_downcast", ["integer", "signed"])
- @pytest.mark.parametrize("data", [
- ["1", 2, 3],
- [1, 2, 3],
- np.array(["1970-01-02", "1970-01-03",
- "1970-01-04"], dtype="datetime64[D]")
- ])
- def test_signed_downcast(self, data, signed_downcast):
- # see gh-13352
- smallest_int_dtype = np.dtype(np.typecodes["Integer"][0])
- expected = np.array([1, 2, 3], dtype=smallest_int_dtype)
-
- res = pd.to_numeric(data, downcast=signed_downcast)
- tm.assert_numpy_array_equal(res, expected)
-
- def test_ignore_downcast_invalid_data(self):
- # If we can't successfully cast the given
- # data to a numeric dtype, do not bother
- # with the downcast parameter.
- data = ["foo", 2, 3]
- expected = np.array(data, dtype=object)
-
- res = pd.to_numeric(data, errors="ignore",
- downcast="unsigned")
- tm.assert_numpy_array_equal(res, expected)
-
- def test_ignore_downcast_neg_to_unsigned(self):
- # Cannot cast to an unsigned integer
- # because we have a negative number.
- data = ["-1", 2, 3]
- expected = np.array([-1, 2, 3], dtype=np.int64)
-
- res = pd.to_numeric(data, downcast="unsigned")
- tm.assert_numpy_array_equal(res, expected)
-
- @pytest.mark.parametrize("downcast", ["integer", "signed", "unsigned"])
- @pytest.mark.parametrize("data,expected", [
- (["1.1", 2, 3],
- np.array([1.1, 2, 3], dtype=np.float64)),
- ([10000.0, 20000, 3000, 40000.36, 50000, 50000.00],
- np.array([10000.0, 20000, 3000,
- 40000.36, 50000, 50000.00], dtype=np.float64))
- ])
- def test_ignore_downcast_cannot_convert_float(
- self, data, expected, downcast):
- # Cannot cast to an integer (signed or unsigned)
- # because we have a float number.
- res = pd.to_numeric(data, downcast=downcast)
- tm.assert_numpy_array_equal(res, expected)
-
- @pytest.mark.parametrize("downcast,expected_dtype", [
- ("integer", np.int16),
- ("signed", np.int16),
- ("unsigned", np.uint16)
- ])
- def test_downcast_not8bit(self, downcast, expected_dtype):
- # the smallest integer dtype need not be np.(u)int8
- data = ["256", 257, 258]
-
- expected = np.array([256, 257, 258], dtype=expected_dtype)
- res = pd.to_numeric(data, downcast=downcast)
- tm.assert_numpy_array_equal(res, expected)
-
- @pytest.mark.parametrize("dtype,downcast,min_max", [
- ("int8", "integer", [iinfo(np.int8).min,
- iinfo(np.int8).max]),
- ("int16", "integer", [iinfo(np.int16).min,
- iinfo(np.int16).max]),
- ('int32', "integer", [iinfo(np.int32).min,
- iinfo(np.int32).max]),
- ('int64', "integer", [iinfo(np.int64).min,
- iinfo(np.int64).max]),
- ('uint8', "unsigned", [iinfo(np.uint8).min,
- iinfo(np.uint8).max]),
- ('uint16', "unsigned", [iinfo(np.uint16).min,
- iinfo(np.uint16).max]),
- ('uint32', "unsigned", [iinfo(np.uint32).min,
- iinfo(np.uint32).max]),
- ('uint64', "unsigned", [iinfo(np.uint64).min,
- iinfo(np.uint64).max]),
- ('int16', "integer", [iinfo(np.int8).min,
- iinfo(np.int8).max + 1]),
- ('int32', "integer", [iinfo(np.int16).min,
- iinfo(np.int16).max + 1]),
- ('int64', "integer", [iinfo(np.int32).min,
- iinfo(np.int32).max + 1]),
- ('int16', "integer", [iinfo(np.int8).min - 1,
- iinfo(np.int16).max]),
- ('int32', "integer", [iinfo(np.int16).min - 1,
- iinfo(np.int32).max]),
- ('int64', "integer", [iinfo(np.int32).min - 1,
- iinfo(np.int64).max]),
- ('uint16', "unsigned", [iinfo(np.uint8).min,
- iinfo(np.uint8).max + 1]),
- ('uint32', "unsigned", [iinfo(np.uint16).min,
- iinfo(np.uint16).max + 1]),
- ('uint64', "unsigned", [iinfo(np.uint32).min,
- iinfo(np.uint32).max + 1])
- ])
- def test_downcast_limits(self, dtype, downcast, min_max):
- # see gh-14404: test the limits of each downcast.
- series = pd.to_numeric(pd.Series(min_max), downcast=downcast)
- assert series.dtype == dtype
-
- def test_coerce_uint64_conflict(self):
- # see gh-17007 and gh-17125
- #
- # Still returns float despite the uint64-nan conflict,
- # which would normally force the casting to object.
- df = pd.DataFrame({"a": [200, 300, "", "NaN", 30000000000000000000]})
- expected = pd.Series([200, 300, np.nan, np.nan,
- 30000000000000000000], dtype=float, name="a")
- result = to_numeric(df["a"], errors="coerce")
+@pytest.mark.parametrize("input_kwargs,result_kwargs", [
+ (dict(), dict(dtype=np.int64)),
+ (dict(errors="coerce", downcast="integer"), dict(dtype=np.int8))
+])
+def test_empty(input_kwargs, result_kwargs):
+ # see gh-16302
+ ser = Series([], dtype=object)
+ result = to_numeric(ser, **input_kwargs)
+
+ expected = Series([], **result_kwargs)
+ tm.assert_series_equal(result, expected)
+
+
+@pytest.mark.parametrize("last_val", ["7", 7])
+def test_series(last_val):
+ ser = Series(["1", "-3.14", last_val])
+ result = to_numeric(ser)
+
+ expected = Series([1, -3.14, 7])
+ tm.assert_series_equal(result, expected)
+
+
+@pytest.mark.parametrize("data", [
+ [1, 3, 4, 5],
+ [1., 3., 4., 5.],
+
+ # Bool is regarded as numeric.
+ [True, False, True, True]
+])
+def test_series_numeric(data):
+ ser = Series(data, index=list("ABCD"), name="EFG")
+
+ result = to_numeric(ser)
+ tm.assert_series_equal(result, ser)
+
+
+@pytest.mark.parametrize("data,msg", [
+ ([1, -3.14, "apple"],
+ 'Unable to parse string "apple" at position 2'),
+ (["orange", 1, -3.14, "apple"],
+ 'Unable to parse string "orange" at position 0')
+])
+def test_error(data, msg):
+ ser = Series(data)
+
+ with pytest.raises(ValueError, match=msg):
+ to_numeric(ser, errors="raise")
+
+
+@pytest.mark.parametrize("errors,exp_data", [
+ ("ignore", [1, -3.14, "apple"]),
+ ("coerce", [1, -3.14, np.nan])
+])
+def test_ignore_error(errors, exp_data):
+ ser = Series([1, -3.14, "apple"])
+ result = to_numeric(ser, errors=errors)
+
+ expected = Series(exp_data)
+ tm.assert_series_equal(result, expected)
+
+
+@pytest.mark.parametrize("errors,exp", [
+ ("raise", 'Unable to parse string "apple" at position 2'),
+ ("ignore", [True, False, "apple"]),
+
+ # Coerces to float.
+ ("coerce", [1., 0., np.nan])
+])
+def test_bool_handling(errors, exp):
+ ser = Series([True, False, "apple"])
+
+ if isinstance(exp, str):
+ with pytest.raises(ValueError, match=exp):
+ to_numeric(ser, errors=errors)
+ else:
+ result = to_numeric(ser, errors=errors)
+ expected = Series(exp)
+
tm.assert_series_equal(result, expected)
- s = pd.Series(["12345678901234567890", "1234567890", "ITEM"])
- expected = pd.Series([12345678901234567890,
- 1234567890, np.nan], dtype=float)
- result = to_numeric(s, errors="coerce")
+
+def test_list():
+ ser = ["1", "-3.14", "7"]
+ res = to_numeric(ser)
+
+ expected = np.array([1, -3.14, 7])
+ tm.assert_numpy_array_equal(res, expected)
+
+
+@pytest.mark.parametrize("data,arr_kwargs", [
+ ([1, 3, 4, 5], dict(dtype=np.int64)),
+ ([1., 3., 4., 5.], dict()),
+
+ # Boolean is regarded as numeric.
+ ([True, False, True, True], dict())
+])
+def test_list_numeric(data, arr_kwargs):
+ result = to_numeric(data)
+ expected = np.array(data, **arr_kwargs)
+ tm.assert_numpy_array_equal(result, expected)
+
+
+@pytest.mark.parametrize("kwargs", [
+ dict(dtype="O"), dict()
+])
+def test_numeric(kwargs):
+ data = [1, -3.14, 7]
+
+ ser = Series(data, **kwargs)
+ result = to_numeric(ser)
+
+ expected = Series(data)
+ tm.assert_series_equal(result, expected)
+
+
+@pytest.mark.parametrize("columns", [
+ # One column.
+ "a",
+
+ # Multiple columns.
+ ["a", "b"]
+])
+def test_numeric_df_columns(columns):
+ # see gh-14827
+ df = DataFrame(dict(
+ a=[1.2, decimal.Decimal(3.14), decimal.Decimal("infinity"), "0.1"],
+ b=[1.0, 2.0, 3.0, 4.0],
+ ))
+
+ expected = DataFrame(dict(
+ a=[1.2, 3.14, np.inf, 0.1],
+ b=[1.0, 2.0, 3.0, 4.0],
+ ))
+
+ df_copy = df.copy()
+ df_copy[columns] = df_copy[columns].apply(to_numeric)
+
+ tm.assert_frame_equal(df_copy, expected)
+
+
+@pytest.mark.parametrize("data,exp_data", [
+ ([[decimal.Decimal(3.14), 1.0], decimal.Decimal(1.6), 0.1],
+ [[3.14, 1.0], 1.6, 0.1]),
+ ([np.array([decimal.Decimal(3.14), 1.0]), 0.1],
+ [[3.14, 1.0], 0.1])
+])
+def test_numeric_embedded_arr_likes(data, exp_data):
+ # Test to_numeric with embedded lists and arrays
+ df = DataFrame(dict(a=data))
+ df["a"] = df["a"].apply(to_numeric)
+
+ expected = DataFrame(dict(a=exp_data))
+ tm.assert_frame_equal(df, expected)
+
+
+def test_all_nan():
+ ser = Series(["a", "b", "c"])
+ result = to_numeric(ser, errors="coerce")
+
+ expected = Series([np.nan, np.nan, np.nan])
+ tm.assert_series_equal(result, expected)
+
+
+@pytest.mark.parametrize("errors", [None, "ignore", "raise", "coerce"])
+def test_type_check(errors):
+ # see gh-11776
+ df = DataFrame({"a": [1, -3.14, 7], "b": ["4", "5", "6"]})
+ kwargs = dict(errors=errors) if errors is not None else dict()
+ error_ctx = pytest.raises(TypeError, match="1-d array")
+
+ with error_ctx:
+ to_numeric(df, **kwargs)
+
+
+@pytest.mark.parametrize("val", [
+ 1, 1.1, "1", "1.1", -1.5, "-1.5"
+])
+def test_scalar(val):
+ assert to_numeric(val) == float(val)
+
+
+@pytest.mark.parametrize("errors,checker", [
+ ("raise", 'Unable to parse string "fail" at position 0'),
+ ("ignore", lambda x: x == "fail"),
+ ("coerce", lambda x: np.isnan(x))
+])
+def test_scalar_fail(errors, checker):
+ scalar = "fail"
+
+ if isinstance(checker, str):
+ with pytest.raises(ValueError, match=checker):
+ to_numeric(scalar, errors=errors)
+ else:
+ assert checker(to_numeric(scalar, errors=errors))
+
+
+@pytest.fixture(params=[
+ (lambda x: Index(x, name="idx"), tm.assert_index_equal),
+ (lambda x: Series(x, name="ser"), tm.assert_series_equal),
+ (lambda x: np.array(Index(x).values), tm.assert_numpy_array_equal)
+])
+def transform_assert_equal(request):
+ return request.param
+
+
+@pytest.mark.parametrize("data", [
+ [1, 2, 3],
+ [1., np.nan, 3, np.nan]
+])
+def test_numeric_dtypes(data, transform_assert_equal):
+ transform, assert_equal = transform_assert_equal
+ data = transform(data)
+
+ result = to_numeric(data)
+ assert_equal(result, data)
+
+
+@pytest.mark.parametrize("data,exp", [
+ (["1", "2", "3"], np.array([1, 2, 3], dtype="int64")),
+ (["1.5", "2.7", "3.4"], np.array([1.5, 2.7, 3.4]))
+])
+def test_str(data, exp, transform_assert_equal):
+ transform, assert_equal = transform_assert_equal
+ result = to_numeric(transform(data))
+
+ expected = transform(exp)
+ assert_equal(result, expected)
+
+
+def test_datetime_like(tz_naive_fixture, transform_assert_equal):
+ transform, assert_equal = transform_assert_equal
+ idx = pd.date_range("20130101", periods=3, tz=tz_naive_fixture)
+
+ result = to_numeric(transform(idx))
+ expected = transform(idx.asi8)
+ assert_equal(result, expected)
+
+
+def test_timedelta(transform_assert_equal):
+ transform, assert_equal = transform_assert_equal
+ idx = pd.timedelta_range("1 days", periods=3, freq="D")
+
+ result = to_numeric(transform(idx))
+ expected = transform(idx.asi8)
+ assert_equal(result, expected)
+
+
+def test_period(transform_assert_equal):
+ transform, assert_equal = transform_assert_equal
+
+ idx = pd.period_range("2011-01", periods=3, freq="M", name="")
+ inp = transform(idx)
+
+ if isinstance(inp, Index):
+ result = to_numeric(inp)
+ expected = transform(idx.asi8)
+ assert_equal(result, expected)
+ else:
+        # TODO: the Series variant has a PeriodDtype; support it in to_numeric.
+ pytest.skip("Missing PeriodDtype support in to_numeric")
+
+
+@pytest.mark.parametrize("errors,expected", [
+ ("raise", "Invalid object type at position 0"),
+ ("ignore", Series([[10.0, 2], 1.0, "apple"])),
+ ("coerce", Series([np.nan, 1.0, np.nan]))
+])
+def test_non_hashable(errors, expected):
+ # see gh-13324
+ ser = Series([[10.0, 2], 1.0, "apple"])
+
+ if isinstance(expected, str):
+ with pytest.raises(TypeError, match=expected):
+ to_numeric(ser, errors=errors)
+ else:
+ result = to_numeric(ser, errors=errors)
tm.assert_series_equal(result, expected)
- # For completeness, check against "ignore" and "raise"
- result = to_numeric(s, errors="ignore")
- tm.assert_series_equal(result, s)
- msg = "Unable to parse string"
- with pytest.raises(ValueError, match=msg):
- to_numeric(s, errors="raise")
+def test_downcast_invalid_cast():
+ # see gh-13352
+ data = ["1", 2, 3]
+ invalid_downcast = "unsigned-integer"
+ msg = "invalid downcasting method provided"
+
+ with pytest.raises(ValueError, match=msg):
+ to_numeric(data, downcast=invalid_downcast)
+
+
+@pytest.mark.parametrize("data", [
+ ["1", 2, 3],
+ [1, 2, 3],
+ np.array(["1970-01-02", "1970-01-03",
+ "1970-01-04"], dtype="datetime64[D]")
+])
+@pytest.mark.parametrize("kwargs,exp_dtype", [
+ # Basic function tests.
+ (dict(), np.int64),
+ (dict(downcast=None), np.int64),
+
+ # Support below np.float32 is rare and far between.
+ (dict(downcast="float"), np.dtype(np.float32).char),
+
+ # Basic dtype support.
+ (dict(downcast="unsigned"), np.dtype(np.typecodes["UnsignedInteger"][0]))
+])
+def test_downcast_basic(data, kwargs, exp_dtype):
+ # see gh-13352
+ result = to_numeric(data, **kwargs)
+ expected = np.array([1, 2, 3], dtype=exp_dtype)
+ tm.assert_numpy_array_equal(result, expected)
+
+
+@pytest.mark.parametrize("signed_downcast", ["integer", "signed"])
+@pytest.mark.parametrize("data", [
+ ["1", 2, 3],
+ [1, 2, 3],
+ np.array(["1970-01-02", "1970-01-03",
+ "1970-01-04"], dtype="datetime64[D]")
+])
+def test_signed_downcast(data, signed_downcast):
+ # see gh-13352
+ smallest_int_dtype = np.dtype(np.typecodes["Integer"][0])
+ expected = np.array([1, 2, 3], dtype=smallest_int_dtype)
+
+ res = to_numeric(data, downcast=signed_downcast)
+ tm.assert_numpy_array_equal(res, expected)
+
+
+def test_ignore_downcast_invalid_data():
+ # If we can't successfully cast the given
+ # data to a numeric dtype, do not bother
+ # with the downcast parameter.
+ data = ["foo", 2, 3]
+ expected = np.array(data, dtype=object)
+
+ res = to_numeric(data, errors="ignore",
+ downcast="unsigned")
+ tm.assert_numpy_array_equal(res, expected)
+
+
+def test_ignore_downcast_neg_to_unsigned():
+ # Cannot cast to an unsigned integer
+ # because we have a negative number.
+ data = ["-1", 2, 3]
+ expected = np.array([-1, 2, 3], dtype=np.int64)
+
+ res = to_numeric(data, downcast="unsigned")
+ tm.assert_numpy_array_equal(res, expected)
+
+
+@pytest.mark.parametrize("downcast", ["integer", "signed", "unsigned"])
+@pytest.mark.parametrize("data,expected", [
+ (["1.1", 2, 3],
+ np.array([1.1, 2, 3], dtype=np.float64)),
+ ([10000.0, 20000, 3000, 40000.36, 50000, 50000.00],
+ np.array([10000.0, 20000, 3000,
+ 40000.36, 50000, 50000.00], dtype=np.float64))
+])
+def test_ignore_downcast_cannot_convert_float(data, expected, downcast):
+ # Cannot cast to an integer (signed or unsigned)
+ # because we have a float number.
+ res = to_numeric(data, downcast=downcast)
+ tm.assert_numpy_array_equal(res, expected)
+
+
+@pytest.mark.parametrize("downcast,expected_dtype", [
+ ("integer", np.int16),
+ ("signed", np.int16),
+ ("unsigned", np.uint16)
+])
+def test_downcast_not8bit(downcast, expected_dtype):
+ # the smallest integer dtype need not be np.(u)int8
+ data = ["256", 257, 258]
+
+ expected = np.array([256, 257, 258], dtype=expected_dtype)
+ res = to_numeric(data, downcast=downcast)
+ tm.assert_numpy_array_equal(res, expected)
+
+
+@pytest.mark.parametrize("dtype,downcast,min_max", [
+ ("int8", "integer", [iinfo(np.int8).min,
+ iinfo(np.int8).max]),
+ ("int16", "integer", [iinfo(np.int16).min,
+ iinfo(np.int16).max]),
+ ("int32", "integer", [iinfo(np.int32).min,
+ iinfo(np.int32).max]),
+ ("int64", "integer", [iinfo(np.int64).min,
+ iinfo(np.int64).max]),
+ ("uint8", "unsigned", [iinfo(np.uint8).min,
+ iinfo(np.uint8).max]),
+ ("uint16", "unsigned", [iinfo(np.uint16).min,
+ iinfo(np.uint16).max]),
+ ("uint32", "unsigned", [iinfo(np.uint32).min,
+ iinfo(np.uint32).max]),
+ ("uint64", "unsigned", [iinfo(np.uint64).min,
+ iinfo(np.uint64).max]),
+ ("int16", "integer", [iinfo(np.int8).min,
+ iinfo(np.int8).max + 1]),
+ ("int32", "integer", [iinfo(np.int16).min,
+ iinfo(np.int16).max + 1]),
+ ("int64", "integer", [iinfo(np.int32).min,
+ iinfo(np.int32).max + 1]),
+ ("int16", "integer", [iinfo(np.int8).min - 1,
+ iinfo(np.int16).max]),
+ ("int32", "integer", [iinfo(np.int16).min - 1,
+ iinfo(np.int32).max]),
+ ("int64", "integer", [iinfo(np.int32).min - 1,
+ iinfo(np.int64).max]),
+ ("uint16", "unsigned", [iinfo(np.uint8).min,
+ iinfo(np.uint8).max + 1]),
+ ("uint32", "unsigned", [iinfo(np.uint16).min,
+ iinfo(np.uint16).max + 1]),
+ ("uint64", "unsigned", [iinfo(np.uint32).min,
+ iinfo(np.uint32).max + 1])
+])
+def test_downcast_limits(dtype, downcast, min_max):
+ # see gh-14404: test the limits of each downcast.
+ series = to_numeric(Series(min_max), downcast=downcast)
+ assert series.dtype == dtype
+
+
+@pytest.mark.parametrize("data,exp_data", [
+ ([200, 300, "", "NaN", 30000000000000000000],
+ [200, 300, np.nan, np.nan, 30000000000000000000]),
+ (["12345678901234567890", "1234567890", "ITEM"],
+ [12345678901234567890, 1234567890, np.nan])
+])
+def test_coerce_uint64_conflict(data, exp_data):
+ # see gh-17007 and gh-17125
+ #
+ # Still returns float despite the uint64-nan conflict,
+ # which would normally force the casting to object.
+ result = to_numeric(Series(data), errors="coerce")
+ expected = Series(exp_data, dtype=float)
+ tm.assert_series_equal(result, expected)
+
+
+@pytest.mark.parametrize("errors,exp", [
+ ("ignore", Series(["12345678901234567890", "1234567890", "ITEM"])),
+ ("raise", "Unable to parse string")
+])
+def test_non_coerce_uint64_conflict(errors, exp):
+ # see gh-17007 and gh-17125
+ #
+ # For completeness.
+ ser = Series(["12345678901234567890", "1234567890", "ITEM"])
+
+ if isinstance(exp, str):
+ with pytest.raises(ValueError, match=exp):
+ to_numeric(ser, errors=errors)
+ else:
+ result = to_numeric(ser, errors=errors)
+ tm.assert_series_equal(result, ser)
| https://api.github.com/repos/pandas-dev/pandas/pulls/24946 | 2019-01-26T10:19:11Z | 2019-01-26T15:42:23Z | 2019-01-26T15:42:23Z | 2019-01-26T21:22:34Z |
|
DOC/CLN: Fix errors in Series docstrings | diff --git a/pandas/core/series.py b/pandas/core/series.py
index 427da96c5e1c4..db8a15932106a 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -129,7 +129,7 @@ class Series(base.IndexOpsMixin, generic.NDFrame):
sequence are used, the index will override the keys found in the
dict.
dtype : str, numpy.dtype, or ExtensionDtype, optional
- dtype for the output Series. If not specified, this will be
+ Data type for the output Series. If not specified, this will be
inferred from `data`.
See the :ref:`user guide <basics.dtypes>` for more usages.
copy : bool, default False
@@ -444,7 +444,7 @@ def values(self):
Returns
-------
- arr : numpy.ndarray or ndarray-like
+ numpy.ndarray or ndarray-like
See Also
--------
@@ -513,6 +513,11 @@ def ravel(self, order='C'):
"""
Return the flattened underlying data as an ndarray.
+ Returns
+ -------
+ numpy.ndarray or ndarray-like
+ Flattened data of the Series.
+
See Also
--------
numpy.ndarray.ravel
@@ -830,7 +835,7 @@ def _ixs(self, i, axis=0):
Returns
-------
- value : scalar (int) or Series (slice, sequence)
+ scalar (int) or Series (slice, sequence)
"""
try:
@@ -1173,7 +1178,7 @@ def get_value(self, label, takeable=False):
Returns
-------
- value : scalar value
+ scalar value
"""
warnings.warn("get_value is deprecated and will be removed "
"in a future release. Please use "
@@ -1207,7 +1212,7 @@ def set_value(self, label, value, takeable=False):
Returns
-------
- series : Series
+ Series
If label is contained, will be reference to calling Series,
otherwise a new object
"""
@@ -1394,29 +1399,30 @@ def to_string(self, buf=None, na_rep='NaN', float_format=None, header=True,
Parameters
----------
buf : StringIO-like, optional
- buffer to write to
- na_rep : string, optional
- string representation of NAN to use, default 'NaN'
+ Buffer to write to.
+ na_rep : str, optional
+ String representation of NaN to use, default 'NaN'.
float_format : one-parameter function, optional
- formatter function to apply to columns' elements if they are floats
- default None
- header : boolean, default True
- Add the Series header (index name)
+ Formatter function to apply to columns' elements if they are
+ floats, default None.
+ header : bool, default True
+ Add the Series header (index name).
index : bool, optional
- Add index (row) labels, default True
- length : boolean, default False
- Add the Series length
- dtype : boolean, default False
- Add the Series dtype
- name : boolean, default False
- Add the Series name if not None
+ Add index (row) labels, default True.
+ length : bool, default False
+ Add the Series length.
+ dtype : bool, default False
+ Add the Series dtype.
+ name : bool, default False
+ Add the Series name if not None.
max_rows : int, optional
Maximum number of rows to show before truncating. If None, show
all.
Returns
-------
- formatted : string (if not buffer passed)
+ str or None
+ String representation of Series if ``buf=None``, otherwise None.
"""
formatter = fmt.SeriesFormatter(self, name=name, length=length,
@@ -1476,7 +1482,8 @@ def to_dict(self, into=dict):
Returns
-------
- value_dict : collections.Mapping
+ collections.Mapping
+ Key-value representation of Series.
Examples
--------
@@ -1488,7 +1495,7 @@ def to_dict(self, into=dict):
OrderedDict([(0, 1), (1, 2), (2, 3), (3, 4)])
>>> dd = defaultdict(list)
>>> s.to_dict(dd)
- defaultdict(<type 'list'>, {0: 1, 1: 2, 2: 3, 3: 4})
+ defaultdict(<class 'list'>, {0: 1, 1: 2, 2: 3, 3: 4})
"""
# GH16122
into_c = com.standardize_mapping(into)
@@ -1506,7 +1513,18 @@ def to_frame(self, name=None):
Returns
-------
- data_frame : DataFrame
+ DataFrame
+ DataFrame representation of Series.
+
+ Examples
+ --------
+ >>> s = pd.Series(["a", "b", "c"],
+ ... name="vals")
+ >>> s.to_frame()
+ vals
+ 0 a
+ 1 b
+ 2 c
"""
if name is None:
df = self._constructor_expanddim(self)
@@ -1521,12 +1539,14 @@ def to_sparse(self, kind='block', fill_value=None):
Parameters
----------
- kind : {'block', 'integer'}
+ kind : {'block', 'integer'}, default 'block'
fill_value : float, defaults to NaN (missing)
+ Value to use for filling NaN values.
Returns
-------
- sp : SparseSeries
+ SparseSeries
+ Sparse representation of the Series.
"""
# TODO: deprecate
from pandas.core.sparse.series import SparseSeries
@@ -1564,11 +1584,18 @@ def count(self, level=None):
----------
level : int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a
- particular level, collapsing into a smaller Series
+ particular level, collapsing into a smaller Series.
Returns
-------
- nobs : int or Series (if level specified)
+ int or Series (if level specified)
+ Number of non-null values in the Series.
+
+ Examples
+ --------
+ >>> s = pd.Series([0.0, 1.0, np.nan])
+ >>> s.count()
+ 2
"""
if level is None:
return notna(com.values_from_object(self)).sum()
@@ -1597,14 +1624,15 @@ def mode(self, dropna=True):
Parameters
----------
- dropna : boolean, default True
+ dropna : bool, default True
Don't consider counts of NaN/NaT.
.. versionadded:: 0.24.0
Returns
-------
- modes : Series (sorted)
+ Series
+ Modes of the Series in sorted order.
"""
# TODO: Add option for bins like value_counts()
return algorithms.mode(self, dropna=dropna)
@@ -1677,12 +1705,13 @@ def drop_duplicates(self, keep='first', inplace=False):
- 'first' : Drop duplicates except for the first occurrence.
- 'last' : Drop duplicates except for the last occurrence.
- ``False`` : Drop all duplicates.
- inplace : boolean, default ``False``
+ inplace : bool, default ``False``
If ``True``, performs operation inplace and returns None.
Returns
-------
- deduplicated : Series
+ Series
+ Series with duplicates dropped.
See Also
--------
@@ -1759,7 +1788,9 @@ def duplicated(self, keep='first'):
Returns
-------
- pandas.core.series.Series
+ Series
+ Series indicating whether each value has occurred in the
+ preceding values.
See Also
--------
@@ -1823,7 +1854,7 @@ def idxmin(self, axis=0, skipna=True, *args, **kwargs):
Parameters
----------
- skipna : boolean, default True
+ skipna : bool, default True
Exclude NA/null values. If the entire Series is NA, the result
will be NA.
axis : int, default 0
@@ -1835,7 +1866,8 @@ def idxmin(self, axis=0, skipna=True, *args, **kwargs):
Returns
-------
- idxmin : Index of minimum of values.
+ Index
+ Label of the minimum value.
Raises
------
@@ -1860,7 +1892,7 @@ def idxmin(self, axis=0, skipna=True, *args, **kwargs):
Examples
--------
>>> s = pd.Series(data=[1, None, 4, 1],
- ... index=['A' ,'B' ,'C' ,'D'])
+ ... index=['A', 'B', 'C', 'D'])
>>> s
A 1.0
B NaN
@@ -1892,7 +1924,7 @@ def idxmax(self, axis=0, skipna=True, *args, **kwargs):
Parameters
----------
- skipna : boolean, default True
+ skipna : bool, default True
Exclude NA/null values. If the entire Series is NA, the result
will be NA.
axis : int, default 0
@@ -1904,7 +1936,8 @@ def idxmax(self, axis=0, skipna=True, *args, **kwargs):
Returns
-------
- idxmax : Index of maximum of values.
+ Index
+ Label of the maximum value.
Raises
------
@@ -1988,12 +2021,22 @@ def round(self, decimals=0, *args, **kwargs):
Returns
-------
- Series object
+ Series
+ Rounded values of the Series.
See Also
--------
- numpy.around
- DataFrame.round
+ numpy.around : Round values of an np.array.
+ DataFrame.round : Round values of a DataFrame.
+
+ Examples
+ --------
+ >>> s = pd.Series([0.1, 1.3, 2.7])
+ >>> s.round()
+ 0 0.0
+ 1 1.0
+ 2 3.0
+ dtype: float64
"""
nv.validate_round(args, kwargs)
result = com.values_from_object(self).round(decimals)
@@ -2008,7 +2051,7 @@ def quantile(self, q=0.5, interpolation='linear'):
Parameters
----------
q : float or array-like, default 0.5 (50% quantile)
- 0 <= q <= 1, the quantile(s) to compute
+ 0 <= q <= 1, the quantile(s) to compute.
interpolation : {'linear', 'lower', 'higher', 'midpoint', 'nearest'}
.. versionadded:: 0.18.0
@@ -2024,9 +2067,10 @@ def quantile(self, q=0.5, interpolation='linear'):
Returns
-------
- quantile : float or Series
+ float or Series
If ``q`` is an array, a Series will be returned where the
- index is ``q`` and the values are the quantiles.
+ index is ``q`` and the values are the quantiles, otherwise
+ a float will be returned.
See Also
--------
@@ -2072,6 +2116,7 @@ def corr(self, other, method='pearson', min_periods=None):
Parameters
----------
other : Series
+ Series with which to compute the correlation.
method : {'pearson', 'kendall', 'spearman'} or callable
* pearson : standard correlation coefficient
* kendall : Kendall Tau correlation coefficient
@@ -2081,16 +2126,18 @@ def corr(self, other, method='pearson', min_periods=None):
.. versionadded:: 0.24.0
min_periods : int, optional
- Minimum number of observations needed to have a valid result
+ Minimum number of observations needed to have a valid result.
Returns
-------
- correlation : float
+ float
+ Correlation with other.
Examples
--------
- >>> histogram_intersection = lambda a, b: np.minimum(a, b
- ... ).sum().round(decimals=1)
+ >>> def histogram_intersection(a, b):
+ ... v = np.minimum(a, b).sum().round(decimals=1)
+ ... return v
>>> s1 = pd.Series([.2, .0, .6, .2])
>>> s2 = pd.Series([.3, .6, .0, .1])
>>> s1.corr(s2, method=histogram_intersection)
@@ -2115,14 +2162,22 @@ def cov(self, other, min_periods=None):
Parameters
----------
other : Series
+ Series with which to compute the covariance.
min_periods : int, optional
- Minimum number of observations needed to have a valid result
+ Minimum number of observations needed to have a valid result.
Returns
-------
- covariance : float
+ float
+ Covariance between Series and other normalized by N-1
+ (unbiased estimator).
- Normalized by N-1 (unbiased estimator).
+ Examples
+ --------
+ >>> s1 = pd.Series([0.90010907, 0.13484424, 0.62036035])
+ >>> s2 = pd.Series([0.12528585, 0.26962463, 0.51111198])
+ >>> s1.cov(s2)
+ -0.01685762652715874
"""
this, other = self.align(other, join='inner', copy=False)
if len(this) == 0:
@@ -2145,7 +2200,8 @@ def diff(self, periods=1):
Returns
-------
- diffed : Series
+ Series
+ First differences of the Series.
See Also
--------
@@ -2279,7 +2335,7 @@ def dot(self, other):
8
>>> s @ other
8
- >>> df = pd.DataFrame([[0 ,1], [-2, 3], [4, -5], [6, 7]])
+ >>> df = pd.DataFrame([[0, 1], [-2, 3], [4, -5], [6, 7]])
>>> s.dot(df)
0 24
1 14
@@ -2348,17 +2404,19 @@ def append(self, to_append, ignore_index=False, verify_integrity=False):
Parameters
----------
to_append : Series or list/tuple of Series
- ignore_index : boolean, default False
+ Series to append with self.
+ ignore_index : bool, default False
If True, do not use the index labels.
.. versionadded:: 0.19.0
- verify_integrity : boolean, default False
- If True, raise Exception on creating index with duplicates
+ verify_integrity : bool, default False
+ If True, raise Exception on creating index with duplicates.
Returns
-------
- appended : Series
+ Series
+ Concatenated Series.
See Also
--------
@@ -2376,7 +2434,7 @@ def append(self, to_append, ignore_index=False, verify_integrity=False):
--------
>>> s1 = pd.Series([1, 2, 3])
>>> s2 = pd.Series([4, 5, 6])
- >>> s3 = pd.Series([4, 5, 6], index=[3,4,5])
+ >>> s3 = pd.Series([4, 5, 6], index=[3, 4, 5])
>>> s1.append(s2)
0 1
1 2
@@ -2439,7 +2497,7 @@ def _binop(self, other, func, level=None, fill_value=None):
Returns
-------
- combined : Series
+ Series
"""
if not isinstance(other, Series):
raise AssertionError('Other operand must be Series')
@@ -2862,7 +2920,7 @@ def sort_index(self, axis=0, level=None, ascending=True, inplace=False,
Returns
-------
- pandas.Series
+ Series
The original Series sorted by the labels.
See Also
@@ -3002,7 +3060,9 @@ def argsort(self, axis=0, kind='quicksort', order=None):
Returns
-------
- argsorted : Series, with -1 indicated where nan values are present
+ Series
+ Positions of values within the sort order with -1 indicating
+ nan values.
See Also
--------
@@ -3220,12 +3280,13 @@ def swaplevel(self, i=-2, j=-1, copy=True):
Parameters
----------
- i, j : int, string (can be mixed)
+ i, j : int, str (can be mixed)
Level of index to be swapped. Can pass level name as string.
Returns
-------
- swapped : Series
+ Series
+ Series with levels swapped in MultiIndex.
.. versionchanged:: 0.18.1
@@ -3265,21 +3326,23 @@ def unstack(self, level=-1, fill_value=None):
Parameters
----------
- level : int, string, or list of these, default last level
- Level(s) to unstack, can pass level name
- fill_value : replace NaN with this value if the unstack produces
- missing values
+ level : int, str, or list of these, default last level
+ Level(s) to unstack, can pass level name.
+ fill_value : scalar value, default None
+ Value to use when replacing NaN values.
.. versionadded:: 0.18.0
Returns
-------
- unstacked : DataFrame
+ DataFrame
+ Unstacked Series.
Examples
--------
>>> s = pd.Series([1, 2, 3, 4],
- ... index=pd.MultiIndex.from_product([['one', 'two'], ['a', 'b']]))
+ ... index=pd.MultiIndex.from_product([['one', 'two'],
+ ... ['a', 'b']]))
>>> s
one a 1
b 2
@@ -3679,7 +3742,7 @@ def rename(self, index=None, **kwargs):
Scalar or hashable sequence-like will alter the ``Series.name``
attribute.
copy : bool, default True
- Also copy underlying data
+ Whether to copy underlying data.
inplace : bool, default False
Whether to return a new Series. If True then value of copy is
ignored.
@@ -3689,11 +3752,12 @@ def rename(self, index=None, **kwargs):
Returns
-------
- renamed : Series (new object)
+ Series
+ Series with index labels or name altered.
See Also
--------
- Series.rename_axis
+ Series.rename_axis : Set the name of the axis.
Examples
--------
@@ -3703,7 +3767,7 @@ def rename(self, index=None, **kwargs):
1 2
2 3
dtype: int64
- >>> s.rename("my_name") # scalar, changes Series.name
+ >>> s.rename("my_name") # scalar, changes Series.name
0 1
1 2
2 3
@@ -3762,7 +3826,8 @@ def drop(self, labels=None, axis=0, index=None, columns=None,
Returns
-------
- dropped : pandas.Series
+ Series
+ Series with specified index labels removed.
Raises
------
@@ -3778,7 +3843,7 @@ def drop(self, labels=None, axis=0, index=None, columns=None,
Examples
--------
- >>> s = pd.Series(data=np.arange(3), index=['A','B','C'])
+ >>> s = pd.Series(data=np.arange(3), index=['A', 'B', 'C'])
>>> s
A 0
B 1
@@ -3787,7 +3852,7 @@ def drop(self, labels=None, axis=0, index=None, columns=None,
Drop labels B and C
- >>> s.drop(labels=['B','C'])
+ >>> s.drop(labels=['B', 'C'])
A 0
dtype: int64
@@ -3960,7 +4025,8 @@ def isin(self, values):
Returns
-------
- isin : Series (bool dtype)
+ Series
+ Series of booleans indicating if each element is in values.
Raises
------
@@ -4019,7 +4085,8 @@ def between(self, left, right, inclusive=True):
Returns
-------
Series
- Each element will be a boolean.
+ Series representing whether each element is between left and
+ right (inclusive).
See Also
--------
@@ -4101,27 +4168,27 @@ def from_csv(cls, path, sep=',', parse_dates=True, header=None,
Parameters
----------
- path : string file path or file handle / StringIO
- sep : string, default ','
- Field delimiter
- parse_dates : boolean, default True
- Parse dates. Different default from read_table
+ path : str, file path, or file handle / StringIO
+ sep : str, default ','
+ Field delimiter.
+ parse_dates : bool, default True
+ Parse dates. Different default from read_table.
header : int, default None
- Row to use as header (skip prior rows)
+ Row to use as header (skip prior rows).
index_col : int or sequence, default 0
Column to use for index. If a sequence is given, a MultiIndex
- is used. Different default from read_table
- encoding : string, optional
- a string representing the encoding to use if the contents are
- non-ascii, for python versions prior to 3
- infer_datetime_format : boolean, default False
+ is used. Different default from read_table.
+ encoding : str, optional
+ A string representing the encoding to use if the contents are
+ non-ascii, for python versions prior to 3.
+ infer_datetime_format : bool, default False
If True and `parse_dates` is True for a column, try to infer the
datetime format based on the first datetime string. If the format
can be inferred, there often will be a large parsing speed-up.
Returns
-------
- y : Series
+ Series
See Also
--------
@@ -4322,19 +4389,21 @@ def valid(self, inplace=False, **kwargs):
def to_timestamp(self, freq=None, how='start', copy=True):
"""
- Cast to datetimeindex of timestamps, at *beginning* of period.
+ Cast to DatetimeIndex of Timestamps, at *beginning* of period.
Parameters
----------
- freq : string, default frequency of PeriodIndex
- Desired frequency
+ freq : str, default frequency of PeriodIndex
+ Desired frequency.
how : {'s', 'e', 'start', 'end'}
Convention for converting period to timestamp; start of period
- vs. end
+ vs. end.
+ copy : bool, default True
+ Whether or not to return a copy.
Returns
-------
- ts : Series with DatetimeIndex
+ Series with DatetimeIndex
"""
new_values = self._values
if copy:
@@ -4351,11 +4420,15 @@ def to_period(self, freq=None, copy=True):
Parameters
----------
- freq : string, default
+ freq : str, default None
+ Frequency associated with the PeriodIndex.
+ copy : bool, default True
+ Whether or not to return a copy.
Returns
-------
- ts : Series with PeriodIndex
+ Series
+ Series with index converted to PeriodIndex.
"""
new_values = self._values
if copy:
| Fixes (or starts to fix) a few errors in `Series` docstrings. On a related note, it seems `validate_docstrings.py` yields this error for a large number of docstrings:
```
The first line of the Returns section should contain only the type, unless multiple values are being returned
```
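For context, a hypothetical docstring shaped like the ones being flagged (the function and names are illustrative, not taken from this diff):

```python
def total(values):
    """
    Sum the values.

    Returns
    -------
    result : int
        The sum of `values`.
    """
    return sum(values)
```

Writing the first line of the `Returns` block as just `int` silences the check.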
In other words, the error seems to be raised whenever the return value is named, as in the sketch above. Should these be errors? | https://api.github.com/repos/pandas-dev/pandas/pulls/24945 | 2019-01-26T02:05:09Z | 2019-02-07T04:56:55Z | 2019-02-07T04:56:55Z | 2019-02-07T13:40:24Z |
DOC: State that we support scalars in to_numeric | diff --git a/pandas/core/tools/numeric.py b/pandas/core/tools/numeric.py
index 803723dab46ff..79d8ee38637f9 100644
--- a/pandas/core/tools/numeric.py
+++ b/pandas/core/tools/numeric.py
@@ -21,7 +21,7 @@ def to_numeric(arg, errors='raise', downcast=None):
Parameters
----------
- arg : list, tuple, 1-d array, or Series
+ arg : scalar, list, tuple, 1-d array, or Series
errors : {'ignore', 'raise', 'coerce'}, default 'raise'
- If 'raise', then invalid parsing will raise an exception
- If 'coerce', then invalid parsing will be set as NaN
| We [support it](https://github.com/pandas-dev/pandas/blob/0c4113f/pandas/core/tools/numeric.py#L114-L120) and [test it](https://github.com/pandas-dev/pandas/blob/0c4113f/pandas/tests/tools/test_numeric.py#L174-L185) already.
xref #24910.
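For illustration, the scalar path in action (my own sketch of the behaviour in the linked code; the commented values are what I'd expect):

```python
import pandas as pd

# Scalars go through the same conversion machinery as list-likes
pd.to_numeric("1.5")                     # 1.5
pd.to_numeric(42, downcast="integer")    # 42, downcast to a small integer dtype
pd.to_numeric("apple", errors="coerce")  # nan
```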
| https://api.github.com/repos/pandas-dev/pandas/pulls/24944 | 2019-01-26T01:56:07Z | 2019-01-26T08:01:00Z | 2019-01-26T08:01:00Z | 2019-01-26T10:18:16Z |
Remove py27 CI jobs | diff --git a/.travis.yml b/.travis.yml
index e478d71a5c350..f8302f4718ef2 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -1,6 +1,5 @@
sudo: false
language: python
-# Default Python version is usually 2.7
python: 3.5
# To turn off cached cython files and compiler cache
@@ -36,14 +35,6 @@ matrix:
env:
- JOB="3.7" ENV_FILE="ci/deps/travis-37.yaml" PATTERN="(not slow and not network)"
- - dist: trusty
- env:
- - JOB="2.7" ENV_FILE="ci/deps/travis-27.yaml" PATTERN="(not slow or (single and db))"
- addons:
- apt:
- packages:
- - python-gtk2
-
- dist: trusty
env:
- JOB="3.6, locale" ENV_FILE="ci/deps/travis-36-locale.yaml" PATTERN="((not slow and not network) or (single and db))" LOCALE_OVERRIDE="zh_CN.UTF-8"
diff --git a/README.md b/README.md
index 633673d5cd04f..dcf39864e46e2 100644
--- a/README.md
+++ b/README.md
@@ -166,7 +166,7 @@ pip install pandas
## Dependencies
- [NumPy](https://www.numpy.org): 1.12.0 or higher
- [python-dateutil](https://labix.org/python-dateutil): 2.5.0 or higher
-- [pytz](https://pythonhosted.org/pytz): 2011k or higher
+- [pytz](https://pythonhosted.org/pytz): 2015.4 or higher
See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies)
for recommended and optional dependencies.
diff --git a/azure-pipelines.yml b/azure-pipelines.yml
index f0567d76659b6..6c30ec641f292 100644
--- a/azure-pipelines.yml
+++ b/azure-pipelines.yml
@@ -10,7 +10,6 @@ jobs:
name: Linux
vmImage: ubuntu-16.04
-# Windows Python 2.7 needs VC 9.0 installed, handled in the template
- template: ci/azure/windows.yml
parameters:
name: Windows
diff --git a/ci/azure/posix.yml b/ci/azure/posix.yml
index b9e0cd0b9258c..65f78c2786927 100644
--- a/ci/azure/posix.yml
+++ b/ci/azure/posix.yml
@@ -9,24 +9,12 @@ jobs:
strategy:
matrix:
${{ if eq(parameters.name, 'macOS') }}:
- py35_np_120:
+ py35_macos:
ENV_FILE: ci/deps/azure-macos-35.yaml
CONDA_PY: "35"
PATTERN: "not slow and not network"
${{ if eq(parameters.name, 'Linux') }}:
- py27_np_120:
- ENV_FILE: ci/deps/azure-27-compat.yaml
- CONDA_PY: "27"
- PATTERN: "not slow and not network"
-
- py27_locale_slow_old_np:
- ENV_FILE: ci/deps/azure-27-locale.yaml
- CONDA_PY: "27"
- PATTERN: "slow"
- LOCALE_OVERRIDE: "zh_CN.UTF-8"
- EXTRA_APT: "language-pack-zh-hans"
-
py36_locale_slow:
ENV_FILE: ci/deps/azure-36-locale_slow.yaml
CONDA_PY: "36"
diff --git a/ci/azure/windows.yml b/ci/azure/windows.yml
index cece002024936..13f2442806422 100644
--- a/ci/azure/windows.yml
+++ b/ci/azure/windows.yml
@@ -12,23 +12,12 @@ jobs:
ENV_FILE: ci/deps/azure-windows-36.yaml
CONDA_PY: "36"
- py27_np121:
- ENV_FILE: ci/deps/azure-windows-27.yaml
- CONDA_PY: "27"
-
steps:
- task: CondaEnvironment@1
inputs:
updateConda: no
packageSpecs: ''
- - powershell: |
- $wc = New-Object net.webclient
- $wc.Downloadfile("https://download.microsoft.com/download/7/9/6/796EF2E4-801B-4FC4-AB28-B59FBF6D907B/VCForPython27.msi", "VCForPython27.msi")
- Start-Process "VCForPython27.msi" /qn -Wait
- displayName: 'Install VC 9.0 only for Python 2.7'
- condition: eq(variables.CONDA_PY, '27')
-
- script: |
ci\\incremental\\setup_conda_environment.cmd
displayName: 'Before Install'
diff --git a/ci/deps/azure-27-compat.yaml b/ci/deps/azure-27-compat.yaml
deleted file mode 100644
index a7784f17d1956..0000000000000
--- a/ci/deps/azure-27-compat.yaml
+++ /dev/null
@@ -1,28 +0,0 @@
-name: pandas-dev
-channels:
- - defaults
- - conda-forge
-dependencies:
- - bottleneck=1.2.0
- - cython=0.28.2
- - jinja2=2.8
- - numexpr=2.6.1
- - numpy=1.12.0
- - openpyxl=2.5.5
- - pytables=3.4.2
- - python-dateutil=2.5.0
- - python=2.7*
- - pytz=2013b
- - scipy=0.18.1
- - xlrd=1.0.0
- - xlsxwriter=0.5.2
- - xlwt=0.7.5
- # universal
- - pytest>=4.0.2
- - pytest-xdist
- - pytest-mock
- - isort
- - pip:
- - html5lib==1.0b2
- - beautifulsoup4==4.2.1
- - hypothesis>=3.58.0
diff --git a/ci/deps/azure-27-locale.yaml b/ci/deps/azure-27-locale.yaml
deleted file mode 100644
index 8636a63d02fed..0000000000000
--- a/ci/deps/azure-27-locale.yaml
+++ /dev/null
@@ -1,30 +0,0 @@
-name: pandas-dev
-channels:
- - defaults
- - conda-forge
-dependencies:
- - bottleneck=1.2.0
- - cython=0.28.2
- - lxml
- - matplotlib=2.0.0
- - numpy=1.12.0
- - openpyxl=2.4.0
- - python-dateutil
- - python-blosc
- - python=2.7
- - pytz
- - pytz=2013b
- - scipy
- - sqlalchemy=0.8.1
- - xlrd=1.0.0
- - xlsxwriter=0.5.2
- - xlwt=0.7.5
- # universal
- - pytest>=4.0.2
- - pytest-xdist
- - pytest-mock
- - hypothesis>=3.58.0
- - isort
- - pip:
- - html5lib==1.0b2
- - beautifulsoup4==4.2.1
diff --git a/ci/deps/azure-windows-27.yaml b/ci/deps/azure-windows-27.yaml
deleted file mode 100644
index f40efdfca3cbd..0000000000000
--- a/ci/deps/azure-windows-27.yaml
+++ /dev/null
@@ -1,33 +0,0 @@
-name: pandas-dev
-channels:
- - defaults
- - conda-forge
-dependencies:
- - beautifulsoup4
- - bottleneck
- - dateutil
- - gcsfs
- - html5lib
- - jinja2=2.8
- - lxml
- - matplotlib=2.0.1
- - numexpr
- - numpy=1.12*
- - openpyxl
- - pytables
- - python=2.7.*
- - pytz
- - s3fs
- - scipy
- - sqlalchemy
- - xlrd
- - xlsxwriter
- - xlwt
- # universal
- - cython>=0.28.2
- - pytest>=4.0.2
- - pytest-xdist
- - pytest-mock
- - moto
- - hypothesis>=3.58.0
- - isort
diff --git a/ci/deps/azure-windows-36.yaml b/ci/deps/azure-windows-36.yaml
index 8517d340f2ba8..5ce55a4cb4c0e 100644
--- a/ci/deps/azure-windows-36.yaml
+++ b/ci/deps/azure-windows-36.yaml
@@ -15,7 +15,7 @@ dependencies:
- pyarrow
- pytables
- python-dateutil
- - python=3.6.6
+ - python=3.6.*
- pytz
- scipy
- xlrd
diff --git a/ci/deps/travis-27.yaml b/ci/deps/travis-27.yaml
deleted file mode 100644
index a910af36a6b10..0000000000000
--- a/ci/deps/travis-27.yaml
+++ /dev/null
@@ -1,51 +0,0 @@
-name: pandas-dev
-channels:
- - defaults
- - conda-forge
-dependencies:
- - beautifulsoup4
- - bottleneck
- - cython=0.28.2
- - fastparquet>=0.2.1
- - gcsfs
- - html5lib
- - ipython
- - jemalloc=4.5.0.post
- - jinja2=2.8
- - lxml
- - matplotlib=2.2.2
- - mock
- - nomkl
- - numexpr
- - numpy=1.13*
- - openpyxl=2.4.0
- - patsy
- - psycopg2
- - py
- - pyarrow=0.9.0
- - PyCrypto
- - pymysql=0.6.3
- - pytables
- - blosc=1.14.3
- - python-blosc
- - python-dateutil=2.5.0
- - python=2.7*
- - pytz=2013b
- - s3fs
- - scipy
- - sqlalchemy=0.9.6
- - xarray=0.9.6
- - xlrd=1.0.0
- - xlsxwriter=0.5.2
- - xlwt=0.7.5
- # universal
- - pytest>=4.0.2
- - pytest-xdist
- - pytest-mock
- - moto==1.3.4
- - hypothesis>=3.58.0
- - isort
- - pip:
- - backports.lzma
- - pandas-gbq
- - pathlib
diff --git a/ci/deps/travis-36-doc.yaml b/ci/deps/travis-36-doc.yaml
index 6f33bc58a8b21..8015f7bdc81c6 100644
--- a/ci/deps/travis-36-doc.yaml
+++ b/ci/deps/travis-36-doc.yaml
@@ -15,12 +15,12 @@ dependencies:
- ipywidgets
- lxml
- matplotlib
- - nbconvert
+ - nbconvert>=5.4.1
- nbformat
- nbsphinx
- - notebook
+ - notebook>=5.7.5
- numexpr
- - numpy=1.13*
+ - numpy
- numpydoc
- openpyxl
- pandoc
diff --git a/ci/run_with_env.cmd b/ci/run_with_env.cmd
index 848f4608c8627..0661039a21fae 100644
--- a/ci/run_with_env.cmd
+++ b/ci/run_with_env.cmd
@@ -1,5 +1,5 @@
:: EXPECTED ENV VARS: PYTHON_ARCH (either x86 or x64)
-:: CONDA_PY (either 27, 33, 35 etc. - only major version is extracted)
+:: CONDA_PY (either 35, 36 etc. - only major version is extracted)
::
::
:: To build extensions for 64 bit Python 3, we need to configure environment
@@ -45,7 +45,7 @@ SET WIN_SDK_ROOT=C:\Program Files\Microsoft SDKs\Windows
SET MAJOR_PYTHON_VERSION=%CONDA_PY:~0,1%
IF "%CONDA_PY:~2,1%" == "" (
- :: CONDA_PY style, such as 27, 34 etc.
+ :: CONDA_PY style, such as 36, 37 etc.
SET MINOR_PYTHON_VERSION=%CONDA_PY:~1,1%
) ELSE (
IF "%CONDA_PY:~3,1%" == "." (
diff --git a/doc/source/install.rst b/doc/source/install.rst
index 5310667c403e5..9ecd78c9c19fa 100644
--- a/doc/source/install.rst
+++ b/doc/source/install.rst
@@ -226,7 +226,7 @@ Dependencies
* `setuptools <https://setuptools.readthedocs.io/en/latest/>`__: 24.2.0 or higher
* `NumPy <http://www.numpy.org>`__: 1.12.0 or higher
* `python-dateutil <https://dateutil.readthedocs.io/en/stable/>`__: 2.5.0 or higher
-* `pytz <http://pytz.sourceforge.net/>`__
+* `pytz <http://pytz.sourceforge.net/>`__: 2015.4 or higher
.. _install.recommended_dependencies:
@@ -259,7 +259,7 @@ Optional Dependencies
* `PyTables <http://www.pytables.org>`__: necessary for HDF5-based storage, Version 3.4.2 or higher
* `pyarrow <http://arrow.apache.org/docs/python/>`__ (>= 0.9.0): necessary for feather-based storage.
* `Apache Parquet <https://parquet.apache.org/>`__, either `pyarrow <http://arrow.apache.org/docs/python/>`__ (>= 0.7.0) or `fastparquet <https://fastparquet.readthedocs.io/en/latest>`__ (>= 0.2.1) for parquet-based storage. The `snappy <https://pypi.org/project/python-snappy>`__ and `brotli <https://pypi.org/project/brotlipy>`__ are available for compression support.
-* `SQLAlchemy <http://www.sqlalchemy.org>`__: for SQL database support. Version 0.8.1 or higher recommended. Besides SQLAlchemy, you also need a database specific driver. You can find an overview of supported drivers for each SQL dialect in the `SQLAlchemy docs <http://docs.sqlalchemy.org/en/latest/dialects/index.html>`__. Some common drivers are:
+* `SQLAlchemy <http://www.sqlalchemy.org>`__: for SQL database support. Version 1.0.8 or higher recommended. Besides SQLAlchemy, you also need a database specific driver. You can find an overview of supported drivers for each SQL dialect in the `SQLAlchemy docs <http://docs.sqlalchemy.org/en/latest/dialects/index.html>`__. Some common drivers are:
* `psycopg2 <http://initd.org/psycopg/>`__: for PostgreSQL
* `pymysql <https://github.com/PyMySQL/PyMySQL>`__: for MySQL.
@@ -298,7 +298,7 @@ Optional Dependencies
.. note::
- If using BeautifulSoup4 a minimum version of 4.2.1 is required
+ If using BeautifulSoup4 a minimum version of 4.4.1 is required
* `BeautifulSoup4`_ and `html5lib`_ (Any recent version of `html5lib`_ is
okay.)
diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
index ddc5e543c6165..1d2466adf9265 100644
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -70,12 +70,27 @@ is respected in indexing. (:issue:`24076`, :issue:`16785`)
Increased minimum versions for dependencies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-We have updated our minimum supported versions of dependencies (:issue:`23519`).
+Due to dropping support for Python 2.7, a number of optional dependencies have updated minimum versions.
+Independently, some minimum supported versions of dependencies were updated (:issue:`23519`, :issue:`24942`).
If installed, we now require:
+-----------------+-----------------+----------+
| Package | Minimum Version | Required |
+=================+=================+==========+
+| beautifulsoup4 | 4.4.1 | |
++-----------------+-----------------+----------+
+| openpyxl | 2.2.6 | |
++-----------------+-----------------+----------+
+| pymysql | 0.6.6 | |
++-----------------+-----------------+----------+
+| pytz | 2015.4 | |
++-----------------+-----------------+----------+
+| sqlalchemy | 1.0.8 | |
++-----------------+-----------------+----------+
+| xlsxwriter | 0.7.7 | |
++-----------------+-----------------+----------+
+| xlwt | 1.0.0 | |
++-----------------+-----------------+----------+
| pytest (dev) | 4.0.2 | |
+-----------------+-----------------+----------+
diff --git a/setup.py b/setup.py
index a83e07b50ed57..d58d444f9a481 100755
--- a/setup.py
+++ b/setup.py
@@ -34,7 +34,7 @@ def is_platform_mac():
setuptools_kwargs = {
'install_requires': [
'python-dateutil >= 2.5.0',
- 'pytz >= 2011k',
+ 'pytz >= 2015.4',
'numpy >= {numpy_ver}'.format(numpy_ver=min_numpy_ver),
],
'setup_requires': ['numpy >= {numpy_ver}'.format(numpy_ver=min_numpy_ver)],
diff --git a/tox.ini b/tox.ini
deleted file mode 100644
index f055251581a93..0000000000000
--- a/tox.ini
+++ /dev/null
@@ -1,82 +0,0 @@
-# Tox (http://tox.testrun.org/) is a tool for running tests
-# in multiple virtualenvs. This configuration file will run the
-# test suite on all supported python versions. To use it, "pip install tox"
-# and then run "tox" from this directory.
-
-[tox]
-envlist = py27, py35, py36
-
-[testenv]
-deps =
- cython
- nose
- pytest
- pytz>=2011k
- python-dateutil
- beautifulsoup4
- lxml
- xlsxwriter
- xlrd
- six
- sqlalchemy
- moto
-
-# cd to anything but the default {toxinidir} which
-# contains the pandas subdirectory and confuses
-# nose away from the fresh install in site-packages
-changedir = {envdir}
-
-commands =
- # TODO: --exe because of GH #761
- {envbindir}/pytest pandas {posargs:-A "not network and not disabled"}
- # cleanup the temp. build dir created by the tox build
-# /bin/rm -rf {toxinidir}/build
-
- # quietly rollback the install.
- # Note this line will only be reached if the
- # previous lines succeed (in particular, the tests),
- # but an uninstall is really only required when
- # files are removed from the source tree, in which case,
- # stale versions of files will will remain in the venv
- # until the next time uninstall is run.
- #
- # tox should provide a preinstall-commands hook.
- pip uninstall pandas -qy
-
-[testenv:py27]
-deps =
- numpy==1.8.1
- boto
- bigquery
- {[testenv]deps}
-
-[testenv:py35]
-deps =
- numpy==1.10.0
- {[testenv]deps}
-
-[testenv:py36]
-deps =
- numpy
- {[testenv]deps}
-
-[testenv:openpyxl1]
-usedevelop = True
-deps =
- {[testenv]deps}
- openpyxl<2.0.0
-commands = {envbindir}/pytest {toxinidir}/pandas/io/tests/test_excel.py
-
-[testenv:openpyxl20]
-usedevelop = True
-deps =
- {[testenv]deps}
- openpyxl<2.2.0
-commands = {envbindir}/pytest {posargs} {toxinidir}/pandas/io/tests/test_excel.py
-
-[testenv:openpyxl22]
-usedevelop = True
-deps =
- {[testenv]deps}
- openpyxl>=2.2.0
-commands = {envbindir}/pytest {posargs} {toxinidir}/pandas/io/tests/test_excel.py
| The start of a looooooooong process to get rid of the py2 compat code.
I kept most of the CI jobs that used to run 2.7 at the lower end of the supported versions: two jobs moved from 2.7 to 3.5, one from 2.7 to 3.6, and only one real upgrade from 2.7 to 3.7 (the first CI job for 3.7 on Windows).
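As an aside, a quick way to sanity-check an environment against the bumped pins (a hypothetical snippet, not part of this diff):

```python
# Hypothetical check mirroring the new setup.py minimum for pytz; not in this PR
from distutils.version import LooseVersion

import pytz

if LooseVersion(pytz.__version__) < LooseVersion("2015.4"):
    raise ImportError("pandas now requires pytz >= 2015.4, found {}"
                      .format(pytz.__version__))
```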
@jreback @TomAugspurger
| https://api.github.com/repos/pandas-dev/pandas/pulls/24942 | 2019-01-26T00:24:17Z | 2019-03-17T15:09:47Z | 2019-03-17T15:09:47Z | 2019-03-25T06:51:06Z |
DOC: switch headline whatsnew to 0.25 | diff --git a/doc/source/index.rst.template b/doc/source/index.rst.template
index 51487c0d325b5..d04e9194e71dc 100644
--- a/doc/source/index.rst.template
+++ b/doc/source/index.rst.template
@@ -39,7 +39,7 @@ See the :ref:`overview` for more detail about what's in the library.
{% endif %}
{% if not single_doc -%}
- What's New in 0.24.0 <whatsnew/v0.24.0>
+ What's New in 0.25.0 <whatsnew/v0.25.0>
install
getting_started/index
user_guide/index
diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
index fac42dbd9c7c8..5129449e4fdf3 100644
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -1,10 +1,13 @@
-:orphan:
-
.. _whatsnew_0250:
What's New in 0.25.0 (April XX, 2019)
-------------------------------------
+.. warning::
+
+ Starting with the 0.25.x series of releases, pandas only supports Python 3.5 and higher.
+ See :ref:`install.dropping-27` for more details.
+
{{ header }}
These are the changes in pandas 0.25.0. See :ref:`release` for a full changelog
| @TomAugspurger
Not sure if the warning is still desired or necessary. I'm just thinking that "explicit is better than implicit".
| https://api.github.com/repos/pandas-dev/pandas/pulls/24941 | 2019-01-25T23:55:15Z | 2019-01-26T14:53:59Z | 2019-01-26T14:53:59Z | 2019-01-27T19:13:04Z |
TST: GH#23922 Add missing match params to pytest.raises | diff --git a/pandas/tests/arithmetic/test_datetime64.py b/pandas/tests/arithmetic/test_datetime64.py
index f97a1651163e8..405dc0805a285 100644
--- a/pandas/tests/arithmetic/test_datetime64.py
+++ b/pandas/tests/arithmetic/test_datetime64.py
@@ -124,14 +124,14 @@ def test_comparison_invalid(self, box_with_array):
result = x != y
expected = tm.box_expected([True] * 5, xbox)
tm.assert_equal(result, expected)
-
- with pytest.raises(TypeError):
+ msg = 'Invalid comparison between'
+ with pytest.raises(TypeError, match=msg):
x >= y
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
x > y
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
x < y
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
x <= y
@pytest.mark.parametrize('data', [
@@ -327,9 +327,10 @@ def test_comparison_tzawareness_compat(self, op):
# raise
naive_series = Series(dr)
aware_series = Series(dz)
- with pytest.raises(TypeError):
+ msg = 'Cannot compare tz-naive and tz-aware'
+ with pytest.raises(TypeError, match=msg):
op(dz, naive_series)
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
op(dr, aware_series)
# TODO: implement _assert_tzawareness_compat for the reverse
@@ -428,14 +429,14 @@ def test_dti_cmp_null_scalar_inequality(self, tz_naive_fixture, other,
dti = pd.date_range('2016-01-01', periods=2, tz=tz)
# FIXME: ValueError with transpose
dtarr = tm.box_expected(dti, box_with_array, transpose=False)
-
- with pytest.raises(TypeError):
+ msg = 'Invalid comparison between'
+ with pytest.raises(TypeError, match=msg):
dtarr < other
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
dtarr <= other
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
dtarr > other
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
dtarr >= other
@pytest.mark.parametrize('dtype', [None, object])
@@ -584,22 +585,23 @@ def test_comparison_tzawareness_compat(self, op, box_with_array):
dr = tm.box_expected(dr, box_with_array, transpose=False)
dz = tm.box_expected(dz, box_with_array, transpose=False)
- with pytest.raises(TypeError):
+ msg = 'Cannot compare tz-naive and tz-aware'
+ with pytest.raises(TypeError, match=msg):
op(dr, dz)
if box_with_array is not pd.DataFrame:
# DataFrame op is invalid until transpose bug is fixed
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
op(dr, list(dz))
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
op(dr, np.array(list(dz), dtype=object))
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
op(dz, dr)
if box_with_array is not pd.DataFrame:
# DataFrame op is invalid until transpose bug is fixed
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
op(dz, list(dr))
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
op(dz, np.array(list(dr), dtype=object))
# Check that there isn't a problem aware-aware and naive-naive do not
@@ -617,15 +619,15 @@ def test_comparison_tzawareness_compat(self, op, box_with_array):
ts_tz = pd.Timestamp('2000-03-14 01:59', tz='Europe/Amsterdam')
assert_all(dr > ts)
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
op(dr, ts_tz)
assert_all(dz > ts_tz)
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
op(dz, ts)
# GH#12601: Check comparison against Timestamps and DatetimeIndex
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
op(ts, dz)
@pytest.mark.parametrize('op', [operator.eq, operator.ne,
@@ -641,10 +643,10 @@ def test_scalar_comparison_tzawareness(self, op, other, tz_aware_fixture,
# FIXME: ValueError with transpose
dtarr = tm.box_expected(dti, box_with_array, transpose=False)
-
- with pytest.raises(TypeError):
+ msg = 'Cannot compare tz-naive and tz-aware'
+ with pytest.raises(TypeError, match=msg):
op(dtarr, other)
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
op(other, dtarr)
@pytest.mark.parametrize('op', [operator.eq, operator.ne,
@@ -714,14 +716,14 @@ def test_dt64arr_cmp_scalar_invalid(self, other, tz_naive_fixture,
expected = np.array([True] * 10)
expected = tm.box_expected(expected, xbox, transpose=False)
tm.assert_equal(result, expected)
-
- with pytest.raises(TypeError):
+ msg = 'Invalid comparison between'
+ with pytest.raises(TypeError, match=msg):
rng < other
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
rng <= other
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
rng > other
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
rng >= other
def test_dti_cmp_list(self):
@@ -749,14 +751,14 @@ def test_dti_cmp_tdi_tzawareness(self, other):
result = dti != other
expected = np.array([True] * 10)
tm.assert_numpy_array_equal(result, expected)
-
- with pytest.raises(TypeError):
+ msg = 'Invalid comparison between'
+ with pytest.raises(TypeError, match=msg):
dti < other
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
dti <= other
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
dti > other
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
dti >= other
def test_dti_cmp_object_dtype(self):
@@ -770,7 +772,8 @@ def test_dti_cmp_object_dtype(self):
tm.assert_numpy_array_equal(result, expected)
other = dti.tz_localize(None)
- with pytest.raises(TypeError):
+ msg = 'Cannot compare tz-naive and tz-aware'
+ with pytest.raises(TypeError, match=msg):
# tzawareness failure
dti != other
@@ -778,8 +781,8 @@ def test_dti_cmp_object_dtype(self):
result = dti == other
expected = np.array([True] * 5 + [False] * 5)
tm.assert_numpy_array_equal(result, expected)
-
- with pytest.raises(TypeError):
+ msg = "Cannot compare type"
+ with pytest.raises(TypeError, match=msg):
dti >= other
@@ -898,7 +901,8 @@ def test_dt64arr_add_sub_td64_nat(self, box_with_array, tz_naive_fixture):
tm.assert_equal(result, expected)
result = obj - other
tm.assert_equal(result, expected)
- with pytest.raises(TypeError):
+ msg = 'cannot subtract'
+ with pytest.raises(TypeError, match=msg):
other - obj
def test_dt64arr_add_sub_td64ndarray(self, tz_naive_fixture,
@@ -927,8 +931,8 @@ def test_dt64arr_add_sub_td64ndarray(self, tz_naive_fixture,
result = dtarr - tdarr
tm.assert_equal(result, expected)
-
- with pytest.raises(TypeError):
+ msg = 'cannot subtract'
+ with pytest.raises(TypeError, match=msg):
tdarr - dtarr
# -----------------------------------------------------------------
@@ -1028,10 +1032,10 @@ def test_dt64arr_aware_sub_dt64ndarray_raises(self, tz_aware_fixture,
dt64vals = dti.values
dtarr = tm.box_expected(dti, box_with_array)
-
- with pytest.raises(TypeError):
+ msg = 'DatetimeArray subtraction must have the same timezones or'
+ with pytest.raises(TypeError, match=msg):
dtarr - dt64vals
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
dt64vals - dtarr
# -------------------------------------------------------------
@@ -1048,17 +1052,17 @@ def test_dt64arr_add_dt64ndarray_raises(self, tz_naive_fixture,
dt64vals = dti.values
dtarr = tm.box_expected(dti, box_with_array)
-
- with pytest.raises(TypeError):
+ msg = 'cannot add'
+ with pytest.raises(TypeError, match=msg):
dtarr + dt64vals
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
dt64vals + dtarr
def test_dt64arr_add_timestamp_raises(self, box_with_array):
# GH#22163 ensure DataFrame doesn't cast Timestamp to i8
idx = DatetimeIndex(['2011-01-01', '2011-01-02'])
idx = tm.box_expected(idx, box_with_array)
- msg = "cannot add"
+ msg = 'cannot add'
with pytest.raises(TypeError, match=msg):
idx + Timestamp('2011-01-01')
with pytest.raises(TypeError, match=msg):
@@ -1071,13 +1075,14 @@ def test_dt64arr_add_timestamp_raises(self, box_with_array):
def test_dt64arr_add_sub_float(self, other, box_with_array):
dti = DatetimeIndex(['2011-01-01', '2011-01-02'], freq='D')
dtarr = tm.box_expected(dti, box_with_array)
- with pytest.raises(TypeError):
+ msg = '|'.join(['unsupported operand type', 'cannot (add|subtract)'])
+ with pytest.raises(TypeError, match=msg):
dtarr + other
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
other + dtarr
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
dtarr - other
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
other - dtarr
@pytest.mark.parametrize('pi_freq', ['D', 'W', 'Q', 'H'])
@@ -1090,14 +1095,15 @@ def test_dt64arr_add_sub_parr(self, dti_freq, pi_freq,
dtarr = tm.box_expected(dti, box_with_array)
parr = tm.box_expected(pi, box_with_array2)
-
- with pytest.raises(TypeError):
+ msg = '|'.join(['cannot (add|subtract)', 'unsupported operand',
+ 'descriptor.*requires', 'ufunc.*cannot use operands'])
+ with pytest.raises(TypeError, match=msg):
dtarr + parr
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
parr + dtarr
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
dtarr - parr
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
parr - dtarr
@pytest.mark.parametrize('dti_freq', [None, 'D'])
@@ -1108,14 +1114,14 @@ def test_dt64arr_add_sub_period_scalar(self, dti_freq, box_with_array):
idx = pd.DatetimeIndex(['2011-01-01', '2011-01-02'], freq=dti_freq)
dtarr = tm.box_expected(idx, box_with_array)
-
- with pytest.raises(TypeError):
+ msg = '|'.join(['unsupported operand type', 'cannot (add|subtract)'])
+ with pytest.raises(TypeError, match=msg):
dtarr + per
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
per + dtarr
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
dtarr - per
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
per - dtarr
@@ -1156,8 +1162,8 @@ def test_dt64arr_series_sub_tick_DateOffset(self, box_with_array):
result2 = -pd.offsets.Second(5) + ser
tm.assert_equal(result2, expected)
-
- with pytest.raises(TypeError):
+ msg = "bad operand type for unary"
+ with pytest.raises(TypeError, match=msg):
pd.offsets.Second(5) - ser
@pytest.mark.parametrize('cls_name', ['Day', 'Hour', 'Minute', 'Second',
@@ -1239,8 +1245,8 @@ def test_dt64arr_add_sub_relativedelta_offsets(self, box_with_array):
expected = DatetimeIndex([x - off for x in vec_items])
expected = tm.box_expected(expected, box_with_array)
tm.assert_equal(expected, vec - off)
-
- with pytest.raises(TypeError):
+ msg = "bad operand type for unary"
+ with pytest.raises(TypeError, match=msg):
off - vec
# -------------------------------------------------------------
@@ -1320,8 +1326,8 @@ def test_dt64arr_add_sub_DateOffsets(self, box_with_array,
expected = DatetimeIndex([offset + x for x in vec_items])
expected = tm.box_expected(expected, box_with_array)
tm.assert_equal(expected, offset + vec)
-
- with pytest.raises(TypeError):
+ msg = "bad operand type for unary"
+ with pytest.raises(TypeError, match=msg):
offset - vec
def test_dt64arr_add_sub_DateOffset(self, box_with_array):
@@ -1440,13 +1446,14 @@ def test_dt64_series_arith_overflow(self):
td = pd.Timedelta('20000 Days')
dti = pd.date_range('1949-09-30', freq='100Y', periods=4)
ser = pd.Series(dti)
- with pytest.raises(OverflowError):
+ msg = 'Overflow in int64 addition'
+ with pytest.raises(OverflowError, match=msg):
ser - dt
- with pytest.raises(OverflowError):
+ with pytest.raises(OverflowError, match=msg):
dt - ser
- with pytest.raises(OverflowError):
+ with pytest.raises(OverflowError, match=msg):
ser + td
- with pytest.raises(OverflowError):
+ with pytest.raises(OverflowError, match=msg):
td + ser
ser.iloc[-1] = pd.NaT
@@ -1480,9 +1487,9 @@ def test_datetimeindex_sub_timestamp_overflow(self):
tspos.to_pydatetime(),
tspos.to_datetime64().astype('datetime64[ns]'),
tspos.to_datetime64().astype('datetime64[D]')]
-
+ msg = 'Overflow in int64 addition'
for variant in ts_neg_variants:
- with pytest.raises(OverflowError):
+ with pytest.raises(OverflowError, match=msg):
dtimax - variant
expected = pd.Timestamp.max.value - tspos.value
@@ -1496,7 +1503,7 @@ def test_datetimeindex_sub_timestamp_overflow(self):
assert res[1].value == expected
for variant in ts_pos_variants:
- with pytest.raises(OverflowError):
+ with pytest.raises(OverflowError, match=msg):
dtimin - variant
def test_datetimeindex_sub_datetimeindex_overflow(self):
@@ -1515,22 +1522,22 @@ def test_datetimeindex_sub_datetimeindex_overflow(self):
expected = pd.Timestamp.min.value - ts_neg[1].value
result = dtimin - ts_neg
assert result[1].value == expected
-
- with pytest.raises(OverflowError):
+ msg = 'Overflow in int64 addition'
+ with pytest.raises(OverflowError, match=msg):
dtimax - ts_neg
- with pytest.raises(OverflowError):
+ with pytest.raises(OverflowError, match=msg):
dtimin - ts_pos
# Edge cases
tmin = pd.to_datetime([pd.Timestamp.min])
t1 = tmin + pd.Timedelta.max + pd.Timedelta('1us')
- with pytest.raises(OverflowError):
+ with pytest.raises(OverflowError, match=msg):
t1 - tmin
tmax = pd.to_datetime([pd.Timestamp.max])
t2 = tmax + pd.Timedelta.min - pd.Timedelta('1us')
- with pytest.raises(OverflowError):
+ with pytest.raises(OverflowError, match=msg):
tmax - t2
@@ -1543,7 +1550,8 @@ def test_empty_series_add_sub(self):
tm.assert_series_equal(a, a + b)
tm.assert_series_equal(a, a - b)
tm.assert_series_equal(a, b + a)
- with pytest.raises(TypeError):
+ msg = 'cannot subtract'
+ with pytest.raises(TypeError, match=msg):
b - a
def test_operators_datetimelike(self):
@@ -1688,12 +1696,13 @@ def test_datetime64_ops_nat(self):
# subtraction
tm.assert_series_equal(-NaT + datetime_series,
nat_series_dtype_timestamp)
- with pytest.raises(TypeError):
+ msg = 'Unary negative expects'
+ with pytest.raises(TypeError, match=msg):
-single_nat_dtype_datetime + datetime_series
tm.assert_series_equal(-NaT + nat_series_dtype_timestamp,
nat_series_dtype_timestamp)
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
-single_nat_dtype_datetime + nat_series_dtype_timestamp
# addition
@@ -1718,15 +1727,16 @@ def test_datetime64_ops_nat(self):
@pytest.mark.parametrize('one', [1, 1.0, np.array(1)])
def test_dt64_mul_div_numeric_invalid(self, one, dt64_series):
# multiplication
- with pytest.raises(TypeError):
+ msg = 'cannot perform .* with this index type'
+ with pytest.raises(TypeError, match=msg):
dt64_series * one
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
one * dt64_series
# division
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
dt64_series / one
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
one / dt64_series
@pytest.mark.parametrize('op', ['__add__', '__radd__',
@@ -1740,13 +1750,17 @@ def test_dt64_series_add_intlike(self, tz, op):
other = Series([20, 30, 40], dtype='uint8')
method = getattr(ser, op)
- with pytest.raises(TypeError):
+ msg = '|'.join(['incompatible type for a .* operation',
+ 'cannot evaluate a numeric op',
+ 'ufunc .* cannot use operands',
+ 'cannot (add|subtract)'])
+ with pytest.raises(TypeError, match=msg):
method(1)
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
method(other)
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
method(other.values)
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
method(pd.Index(other))
# -------------------------------------------------------------
@@ -1783,13 +1797,14 @@ def test_operators_datetimelike_with_timezones(self):
result = dt1 - td1[0]
exp = (dt1.dt.tz_localize(None) - td1[0]).dt.tz_localize(tz)
tm.assert_series_equal(result, exp)
- with pytest.raises(TypeError):
+ msg = "bad operand type for unary"
+ with pytest.raises(TypeError, match=msg):
td1[0] - dt1
result = dt2 - td2[0]
exp = (dt2.dt.tz_localize(None) - td2[0]).dt.tz_localize(tz)
tm.assert_series_equal(result, exp)
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
td2[0] - dt2
result = dt1 + td1
@@ -1807,10 +1822,10 @@ def test_operators_datetimelike_with_timezones(self):
result = dt2 - td2
exp = (dt2.dt.tz_localize(None) - td2).dt.tz_localize(tz)
tm.assert_series_equal(result, exp)
-
- with pytest.raises(TypeError):
+ msg = 'cannot (add|subtract)'
+ with pytest.raises(TypeError, match=msg):
td1 - dt1
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
td2 - dt2
@@ -1909,13 +1924,15 @@ def test_dti_add_intarray_no_freq(self, int_holder):
# GH#19959
dti = pd.DatetimeIndex(['2016-01-01', 'NaT', '2017-04-05 06:07:08'])
other = int_holder([9, 4, -1])
- with pytest.raises(NullFrequencyError):
+ nfmsg = 'Cannot shift with no freq'
+ tmsg = 'cannot subtract DatetimeArray from'
+ with pytest.raises(NullFrequencyError, match=nfmsg):
dti + other
- with pytest.raises(NullFrequencyError):
+ with pytest.raises(NullFrequencyError, match=nfmsg):
other + dti
- with pytest.raises(NullFrequencyError):
+ with pytest.raises(NullFrequencyError, match=nfmsg):
dti - other
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=tmsg):
other - dti
# -------------------------------------------------------------
@@ -2057,14 +2074,14 @@ def test_sub_dti_dti(self):
result = dti_tz - dti_tz
tm.assert_index_equal(result, expected)
-
- with pytest.raises(TypeError):
+ msg = 'DatetimeArray subtraction must have the same timezones or'
+ with pytest.raises(TypeError, match=msg):
dti_tz - dti
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
dti - dti_tz
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
dti_tz - dti_tz2
# isub
@@ -2074,7 +2091,8 @@ def test_sub_dti_dti(self):
# different length raises ValueError
dti1 = date_range('20130101', periods=3)
dti2 = date_range('20130101', periods=4)
- with pytest.raises(ValueError):
+ msg = 'cannot add indices of unequal length'
+ with pytest.raises(ValueError, match=msg):
dti1 - dti2
# NaN propagation
@@ -2148,8 +2166,8 @@ def test_ops_nat_mixed_datetime64_timedelta64(self):
tm.assert_series_equal(-single_nat_dtype_timedelta +
nat_series_dtype_timestamp,
nat_series_dtype_timestamp)
-
- with pytest.raises(TypeError):
+ msg = 'cannot subtract a datelike'
+ with pytest.raises(TypeError, match=msg):
timedelta_series - single_nat_dtype_datetime
# addition
| - [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
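For anyone skimming, the pattern being rolled out, as a standalone hypothetical test (not lifted from the diff, though the message is one asserted there):

```python
import pandas as pd
import pytest


def test_tz_compare_raises_with_message():
    # match= asserts on the exception message (a regex search), not just the type
    naive = pd.Timestamp("2019-01-01")
    aware = pd.Timestamp("2019-01-01", tz="UTC")
    with pytest.raises(TypeError, match="Cannot compare tz-naive and tz-aware"):
        naive < aware
```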
| https://api.github.com/repos/pandas-dev/pandas/pulls/24937 | 2019-01-25T20:50:38Z | 2019-01-26T14:49:31Z | 2019-01-26T14:49:31Z | 2019-01-28T22:23:56Z |
BUG: fix str.replace('.','') should replace every character | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
index 76ee21b4c9a50..bac5b95741dd0 100644
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -495,7 +495,7 @@ Other Deprecations
Use the public attributes :attr:`~RangeIndex.start`, :attr:`~RangeIndex.stop` and :attr:`~RangeIndex.step` instead (:issue:`26581`).
- The :meth:`Series.ftype`, :meth:`Series.ftypes` and :meth:`DataFrame.ftypes` methods are deprecated and will be removed in a future version.
Instead, use :meth:`Series.dtype` and :meth:`DataFrame.dtypes` (:issue:`26705`).
-
+- :func:`Series.str.replace`, when ``pat`` is a single special regex character (such as ``.`` or ``|``) and ``regex`` is not specified, currently treats ``pat`` as a literal (i.e. ``regex=False``); this default may be deprecated in the future. (:issue:`24804`)
.. _whatsnew_0250.prior_deprecations:
@@ -605,6 +605,7 @@ Conversion
Strings
^^^^^^^
+- Bug in :func:`Series.str.replace` not applying regex in patterns of length 1 (:issue:`24804`)
- Bug in the ``__name__`` attribute of several methods of :class:`Series.str`, which were set incorrectly (:issue:`23551`)
-
diff --git a/pandas/core/reshape/melt.py b/pandas/core/reshape/melt.py
index d655a8be13de7..332ad04ff674e 100644
--- a/pandas/core/reshape/melt.py
+++ b/pandas/core/reshape/melt.py
@@ -413,7 +413,8 @@ def melt_stub(df, stub, i, j, value_vars, sep):
newdf = melt(df, id_vars=i, value_vars=value_vars,
value_name=stub.rstrip(sep), var_name=j)
newdf[j] = Categorical(newdf[j])
- newdf[j] = newdf[j].str.replace(re.escape(stub + sep), "")
+ newdf[j] = newdf[j].str.replace(re.escape(stub + sep), "",
+ regex=True)
# GH17627 Cast numerics suffixes to int/float
newdf[j] = to_numeric(newdf[j], errors='ignore')
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index bd756491abd2f..812e8c70580fa 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -421,7 +421,7 @@ def str_endswith(arr, pat, na=np.nan):
return _na_map(f, arr, na, dtype=bool)
-def str_replace(arr, pat, repl, n=-1, case=None, flags=0, regex=True):
+def str_replace(arr, pat, repl, n=-1, case=None, flags=0, regex=None):
r"""
Replace occurrences of pattern/regex in the Series/Index with
some other string. Equivalent to :meth:`str.replace` or
@@ -452,9 +452,13 @@ def str_replace(arr, pat, repl, n=-1, case=None, flags=0, regex=True):
flags : int, default 0 (no flags)
- re module flags, e.g. re.IGNORECASE
- Cannot be set if `pat` is a compiled regex
- regex : bool, default True
+ regex : boolean, default None
- If True, assumes the passed-in pattern is a regular expression.
- If False, treats the pattern as a literal string
+ - If `pat` is a single character and `regex` is not specified, `pat`
+ is interpreted as a string literal. If `pat` is also a regular
+ expression symbol, a warning is issued that in the future `pat`
+ will be interpreted as a regex, rather than a literal.
- Cannot be set to False if `pat` is a compiled regex or `repl` is
a callable.
@@ -561,7 +565,7 @@ def str_replace(arr, pat, repl, n=-1, case=None, flags=0, regex=True):
# add case flag, if provided
if case is False:
flags |= re.IGNORECASE
- if is_compiled_re or len(pat) > 1 or flags or callable(repl):
+ if is_compiled_re or pat or flags or callable(repl):
n = n if n >= 0 else 0
compiled = re.compile(pat, flags=flags)
f = lambda x: compiled.sub(repl=repl, string=x, count=n)
@@ -574,6 +578,12 @@ def str_replace(arr, pat, repl, n=-1, case=None, flags=0, regex=True):
if callable(repl):
raise ValueError("Cannot use a callable replacement when "
"regex=False")
+ # if regex is default None, and a single special character is given
+ # in pat, still take it as a literal, and raise the Future warning
+ if regex is None and len(pat) == 1 and pat in list(r"[\^$.|?*+()]"):
+ warnings.warn("'{}' is interpreted as a literal by ".format(pat) +
+ "default, not as a regex. This will change in the future.",
+ FutureWarning)
f = lambda x: x.replace(pat, repl, n)
return _na_map(f, arr)
diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index a1d522930e9aa..983e064e514d2 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -6,6 +6,7 @@
from numpy.random import randint
import pytest
+
from pandas import DataFrame, Index, MultiIndex, Series, concat, isna, notna
import pandas.core.strings as strings
import pandas.util.testing as tm
@@ -892,11 +893,11 @@ def test_casemethods(self):
def test_replace(self):
values = Series(['fooBAD__barBAD', NA])
- result = values.str.replace('BAD[_]*', '')
+ result = values.str.replace('BAD[_]*', '', regex=True)
exp = Series(['foobar', NA])
tm.assert_series_equal(result, exp)
- result = values.str.replace('BAD[_]*', '', n=1)
+ result = values.str.replace('BAD[_]*', '', regex=True, n=1)
exp = Series(['foobarBAD', NA])
tm.assert_series_equal(result, exp)
@@ -904,15 +905,27 @@ def test_replace(self):
mixed = Series(['aBAD', NA, 'bBAD', True, datetime.today(), 'fooBAD',
None, 1, 2.])
- rs = Series(mixed).str.replace('BAD[_]*', '')
+ rs = Series(mixed).str.replace('BAD[_]*', '', regex=True)
xp = Series(['a', NA, 'b', NA, NA, 'foo', NA, NA, NA])
assert isinstance(rs, Series)
tm.assert_almost_equal(rs, xp)
+ # unicode
+ values = Series([u'fooBAD__barBAD', NA])
+
+ result = values.str.replace('BAD[_]*', '', regex=True)
+ exp = Series([u'foobar', NA])
+ tm.assert_series_equal(result, exp)
+
+ result = values.str.replace('BAD[_]*', '', n=1, regex=True)
+ exp = Series([u'foobarBAD', NA])
+ tm.assert_series_equal(result, exp)
+
# flags + unicode
values = Series([b"abcd,\xc3\xa0".decode("utf-8")])
exp = Series([b"abcd, \xc3\xa0".decode("utf-8")])
- result = values.str.replace(r"(?<=\w),(?=\w)", ", ", flags=re.UNICODE)
+ result = values.str.replace(r"(?<=\w),(?=\w)", ", ", regex=True,
+ flags=re.UNICODE)
tm.assert_series_equal(result, exp)
# GH 13438
@@ -930,7 +943,7 @@ def test_replace_callable(self):
# test with callable
repl = lambda m: m.group(0).swapcase()
- result = values.str.replace('[a-z][A-Z]{2}', repl, n=2)
+ result = values.str.replace('[a-z][A-Z]{2}', repl, n=2, regex=True)
exp = Series(['foObaD__baRbaD', NA])
tm.assert_series_equal(result, exp)
@@ -940,21 +953,21 @@ def test_replace_callable(self):
repl = lambda: None
with pytest.raises(TypeError, match=p_err):
- values.str.replace('a', repl)
+ values.str.replace('a', repl, regex=True)
repl = lambda m, x: None
with pytest.raises(TypeError, match=p_err):
- values.str.replace('a', repl)
+ values.str.replace('a', repl, regex=True)
repl = lambda m, x, y=None: None
with pytest.raises(TypeError, match=p_err):
- values.str.replace('a', repl)
+ values.str.replace('a', repl, regex=True)
# test regex named groups
values = Series(['Foo Bar Baz', NA])
pat = r"(?P<first>\w+) (?P<middle>\w+) (?P<last>\w+)"
repl = lambda m: m.group('middle').swapcase()
- result = values.str.replace(pat, repl)
+ result = values.str.replace(pat, repl, regex=True)
exp = Series(['bAR', NA])
tm.assert_series_equal(result, exp)
@@ -964,11 +977,11 @@ def test_replace_compiled_regex(self):
# test with compiled regex
pat = re.compile(r'BAD[_]*')
- result = values.str.replace(pat, '')
+ result = values.str.replace(pat, '', regex=True)
exp = Series(['foobar', NA])
tm.assert_series_equal(result, exp)
- result = values.str.replace(pat, '', n=1)
+ result = values.str.replace(pat, '', n=1, regex=True)
exp = Series(['foobarBAD', NA])
tm.assert_series_equal(result, exp)
@@ -976,16 +989,27 @@ def test_replace_compiled_regex(self):
mixed = Series(['aBAD', NA, 'bBAD', True, datetime.today(), 'fooBAD',
None, 1, 2.])
- rs = Series(mixed).str.replace(pat, '')
+ rs = Series(mixed).str.replace(pat, '', regex=True)
xp = Series(['a', NA, 'b', NA, NA, 'foo', NA, NA, NA])
assert isinstance(rs, Series)
tm.assert_almost_equal(rs, xp)
+ # unicode
+ values = Series([u'fooBAD__barBAD', NA])
+
+ result = values.str.replace(pat, '', regex=True)
+ exp = Series([u'foobar', NA])
+ tm.assert_series_equal(result, exp)
+
+ result = values.str.replace(pat, '', n=1, regex=True)
+ exp = Series([u'foobarBAD', NA])
+ tm.assert_series_equal(result, exp)
+
# flags + unicode
values = Series([b"abcd,\xc3\xa0".decode("utf-8")])
exp = Series([b"abcd, \xc3\xa0".decode("utf-8")])
pat = re.compile(r"(?<=\w),(?=\w)", flags=re.UNICODE)
- result = values.str.replace(pat, ", ")
+ result = values.str.replace(pat, ", ", regex=True)
tm.assert_series_equal(result, exp)
# case and flags provided to str.replace will have no effect
@@ -995,21 +1019,22 @@ def test_replace_compiled_regex(self):
with pytest.raises(ValueError,
match="case and flags cannot be"):
- result = values.str.replace(pat, '', flags=re.IGNORECASE)
+ result = values.str.replace(pat, '', flags=re.IGNORECASE,
+ regex=True)
with pytest.raises(ValueError,
match="case and flags cannot be"):
- result = values.str.replace(pat, '', case=False)
+ result = values.str.replace(pat, '', case=False, regex=True)
with pytest.raises(ValueError,
match="case and flags cannot be"):
- result = values.str.replace(pat, '', case=True)
+ result = values.str.replace(pat, '', case=True, regex=True)
# test with callable
values = Series(['fooBAD__barBAD', NA])
repl = lambda m: m.group(0).swapcase()
pat = re.compile('[a-z][A-Z]{2}')
- result = values.str.replace(pat, repl, n=2)
+ result = values.str.replace(pat, repl, n=2, regex=True)
exp = Series(['foObaD__baRbaD', NA])
tm.assert_series_equal(result, exp)
@@ -1017,7 +1042,7 @@ def test_replace_literal(self):
# GH16808 literal replace (regex=False vs regex=True)
values = Series(['f.o', 'foo', NA])
exp = Series(['bao', 'bao', NA])
- result = values.str.replace('f.', 'ba')
+ result = values.str.replace('f.', 'ba', regex=True)
tm.assert_series_equal(result, exp)
exp = Series(['bao', 'foo', NA])
@@ -2710,6 +2735,7 @@ def test_partition_deprecation(self):
result = values.str.rpartition(pat='_')
tm.assert_frame_equal(result, expected)
+ @pytest.mark.filterwarnings("ignore: '|' is interpreted as a literal")
def test_pipe_failures(self):
# #2119
s = Series(['A|B|C'])
@@ -2719,7 +2745,7 @@ def test_pipe_failures(self):
tm.assert_series_equal(result, exp)
- result = s.str.replace('|', ' ')
+ result = s.str.replace('|', ' ', regex=None)
exp = Series(['A B C'])
tm.assert_series_equal(result, exp)
@@ -2980,17 +3006,17 @@ def test_replace_moar(self):
s = Series(['A', 'B', 'C', 'Aaba', 'Baca', '', NA, 'CABA',
'dog', 'cat'])
- result = s.str.replace('A', 'YYY')
+ result = s.str.replace('A', 'YYY', regex=True)
expected = Series(['YYY', 'B', 'C', 'YYYaba', 'Baca', '', NA,
'CYYYBYYY', 'dog', 'cat'])
assert_series_equal(result, expected)
- result = s.str.replace('A', 'YYY', case=False)
+ result = s.str.replace('A', 'YYY', case=False, regex=True)
expected = Series(['YYY', 'B', 'C', 'YYYYYYbYYY', 'BYYYcYYY', '', NA,
'CYYYBYYY', 'dog', 'cYYYt'])
assert_series_equal(result, expected)
- result = s.str.replace('^.a|dog', 'XX-XX ', case=False)
+ result = s.str.replace('^.a|dog', 'XX-XX ', case=False, regex=True)
expected = Series(['A', 'B', 'C', 'XX-XX ba', 'XX-XX ca', '', NA,
'XX-XX BA', 'XX-XX ', 'XX-XX t'])
assert_series_equal(result, expected)
@@ -3162,6 +3188,40 @@ def test_method_on_bytes(self):
match="Cannot use .str.cat with values of.*"):
lhs.str.cat(rhs)
+ @pytest.mark.filterwarnings("ignore: '.' is interpreted as a literal")
+ @pytest.mark.parametrize("regex, expected_array", [
+ (True, ['foofoofoo', 'foofoofoo']),
+ (False, ['abc', '123']),
+ (None, ['abc', '123'])
+ ])
+ def test_replace_single_pattern(self, regex, expected_array):
+ values = Series(['abc', '123'])
+ # GH: 24804
+ result = values.str.replace('.', 'foo', regex=regex)
+ expected = Series(expected_array)
+ tm.assert_series_equal(result, expected)
+
+ @pytest.mark.parametrize("input_array, single_char, replace_char, "
+ "expect_array, warn",
+ [("a.c", ".", "b", "abc", True),
+ ("a@c", "@", "at", "aatc", False)]
+ )
+ def test_replace_warning_single_character(self, input_array,
+ single_char, replace_char,
+ expect_array, warn):
+ # GH: 24804
+ values = Series([input_array])
+ if warn:
+ with tm.assert_produces_warning(FutureWarning,
+ check_stacklevel=False):
+ result = values.str.replace(single_char, replace_char,
+ regex=None)
+ else:
+ result = values.str.replace(single_char, replace_char)
+
+ expected = Series([expect_array])
+ tm.assert_series_equal(result, expected)
+
def test_casefold(self):
# GH25405
expected = Series(['ss', NA, 'case', 'ssd'])
| - [ ] closes #24804
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/24935 | 2019-01-25T18:30:44Z | 2019-06-17T23:29:58Z | null | 2019-06-17T23:29:59Z |
DOC Minor what's new fix for v0.24 | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index fc963fce37a5b..16319a3b83ca4 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -6,7 +6,8 @@ What's New in 0.24.0 (January 25, 2019)
.. warning::
The 0.24.x series of releases will be the last to support Python 2. Future feature
- releases will support Python 3 only. See :ref:`install.dropping-27` for more.
+ releases will support Python 3 only. See :ref:`install.dropping-27` for more
+ details.
{{ header }}
@@ -244,7 +245,7 @@ the new extension arrays that back interval and period data.
Joining with two multi-indexes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-:func:`DataFrame.merge` and :func:`DataFrame.join` can now be used to join multi-indexed ``Dataframe`` instances on the overlaping index levels (:issue:`6360`)
+:func:`DataFrame.merge` and :func:`DataFrame.join` can now be used to join multi-indexed ``Dataframe`` instances on the overlapping index levels (:issue:`6360`)
See the :ref:`Merge, join, and concatenate
<merging.Join_with_two_multi_indexes>` documentation section.
I think there is a word missing in the first sentence of the 0.24 what's new. Maybe it's just me, but I was expecting another word after "See :ref:`install.dropping-27` for more.".
cc @TomAugspurger | https://api.github.com/repos/pandas-dev/pandas/pulls/24933 | 2019-01-25T16:49:26Z | 2019-01-26T08:01:55Z | 2019-01-26T08:01:55Z | 2019-01-26T14:59:04Z |
CLN: Refactor cython to use memory views | diff --git a/pandas/_libs/algos.pyx b/pandas/_libs/algos.pyx
index b3c519ab99b6e..663411ad984c2 100644
--- a/pandas/_libs/algos.pyx
+++ b/pandas/_libs/algos.pyx
@@ -76,7 +76,7 @@ class NegInfinity(object):
@cython.wraparound(False)
@cython.boundscheck(False)
-cpdef ndarray[int64_t, ndim=1] unique_deltas(ndarray[int64_t] arr):
+cpdef ndarray[int64_t, ndim=1] unique_deltas(const int64_t[:] arr):
"""
Efficiently find the unique first-differences of the given array.
@@ -150,7 +150,7 @@ def is_lexsorted(list_of_arrays: list) -> bint:
@cython.boundscheck(False)
@cython.wraparound(False)
-def groupsort_indexer(ndarray[int64_t] index, Py_ssize_t ngroups):
+def groupsort_indexer(const int64_t[:] index, Py_ssize_t ngroups):
"""
compute a 1-d indexer that is an ordering of the passed index,
ordered by the groups. This is a reverse of the label
@@ -230,7 +230,7 @@ def kth_smallest(numeric[:] a, Py_ssize_t k) -> numeric:
@cython.boundscheck(False)
@cython.wraparound(False)
-def nancorr(ndarray[float64_t, ndim=2] mat, bint cov=0, minp=None):
+def nancorr(const float64_t[:, :] mat, bint cov=0, minp=None):
cdef:
Py_ssize_t i, j, xi, yi, N, K
bint minpv
@@ -294,7 +294,7 @@ def nancorr(ndarray[float64_t, ndim=2] mat, bint cov=0, minp=None):
@cython.boundscheck(False)
@cython.wraparound(False)
-def nancorr_spearman(ndarray[float64_t, ndim=2] mat, Py_ssize_t minp=1):
+def nancorr_spearman(const float64_t[:, :] mat, Py_ssize_t minp=1):
cdef:
Py_ssize_t i, j, xi, yi, N, K
ndarray[float64_t, ndim=2] result
@@ -435,8 +435,8 @@ def pad(ndarray[algos_t] old, ndarray[algos_t] new, limit=None):
@cython.boundscheck(False)
@cython.wraparound(False)
-def pad_inplace(ndarray[algos_t] values,
- ndarray[uint8_t, cast=True] mask,
+def pad_inplace(algos_t[:] values,
+ const uint8_t[:] mask,
limit=None):
cdef:
Py_ssize_t i, N
@@ -472,8 +472,8 @@ def pad_inplace(ndarray[algos_t] values,
@cython.boundscheck(False)
@cython.wraparound(False)
-def pad_2d_inplace(ndarray[algos_t, ndim=2] values,
- ndarray[uint8_t, ndim=2] mask,
+def pad_2d_inplace(algos_t[:, :] values,
+ const uint8_t[:, :] mask,
limit=None):
cdef:
Py_ssize_t i, j, N, K
@@ -602,8 +602,8 @@ def backfill(ndarray[algos_t] old, ndarray[algos_t] new, limit=None):
@cython.boundscheck(False)
@cython.wraparound(False)
-def backfill_inplace(ndarray[algos_t] values,
- ndarray[uint8_t, cast=True] mask,
+def backfill_inplace(algos_t[:] values,
+ const uint8_t[:] mask,
limit=None):
cdef:
Py_ssize_t i, N
@@ -639,8 +639,8 @@ def backfill_inplace(ndarray[algos_t] values,
@cython.boundscheck(False)
@cython.wraparound(False)
-def backfill_2d_inplace(ndarray[algos_t, ndim=2] values,
- ndarray[uint8_t, ndim=2] mask,
+def backfill_2d_inplace(algos_t[:, :] values,
+ const uint8_t[:, :] mask,
limit=None):
cdef:
Py_ssize_t i, j, N, K
@@ -678,7 +678,7 @@ def backfill_2d_inplace(ndarray[algos_t, ndim=2] values,
@cython.wraparound(False)
@cython.boundscheck(False)
-def arrmap(ndarray[algos_t] index, object func):
+def arrmap(algos_t[:] index, object func):
cdef:
Py_ssize_t length = index.shape[0]
Py_ssize_t i = 0
diff --git a/pandas/_libs/groupby_helper.pxi.in b/pandas/_libs/groupby_helper.pxi.in
index abac9f147848e..858039f038d02 100644
--- a/pandas/_libs/groupby_helper.pxi.in
+++ b/pandas/_libs/groupby_helper.pxi.in
@@ -29,10 +29,10 @@ def get_dispatch(dtypes):
@cython.wraparound(False)
@cython.boundscheck(False)
-def group_add_{{name}}(ndarray[{{c_type}}, ndim=2] out,
- ndarray[int64_t] counts,
- ndarray[{{c_type}}, ndim=2] values,
- ndarray[int64_t] labels,
+def group_add_{{name}}({{c_type}}[:, :] out,
+ int64_t[:] counts,
+ {{c_type}}[:, :] values,
+ const int64_t[:] labels,
Py_ssize_t min_count=0):
"""
Only aggregates on axis=0
@@ -76,10 +76,10 @@ def group_add_{{name}}(ndarray[{{c_type}}, ndim=2] out,
@cython.wraparound(False)
@cython.boundscheck(False)
-def group_prod_{{name}}(ndarray[{{c_type}}, ndim=2] out,
- ndarray[int64_t] counts,
- ndarray[{{c_type}}, ndim=2] values,
- ndarray[int64_t] labels,
+def group_prod_{{name}}({{c_type}}[:, :] out,
+ int64_t[:] counts,
+ {{c_type}}[:, :] values,
+ const int64_t[:] labels,
Py_ssize_t min_count=0):
"""
Only aggregates on axis=0
@@ -123,10 +123,10 @@ def group_prod_{{name}}(ndarray[{{c_type}}, ndim=2] out,
@cython.wraparound(False)
@cython.boundscheck(False)
@cython.cdivision(True)
-def group_var_{{name}}(ndarray[{{c_type}}, ndim=2] out,
- ndarray[int64_t] counts,
- ndarray[{{c_type}}, ndim=2] values,
- ndarray[int64_t] labels,
+def group_var_{{name}}({{c_type}}[:, :] out,
+ int64_t[:] counts,
+ {{c_type}}[:, :] values,
+ const int64_t[:] labels,
Py_ssize_t min_count=-1):
cdef:
Py_ssize_t i, j, N, K, lab, ncounts = len(counts)
@@ -175,10 +175,10 @@ def group_var_{{name}}(ndarray[{{c_type}}, ndim=2] out,
@cython.wraparound(False)
@cython.boundscheck(False)
-def group_mean_{{name}}(ndarray[{{c_type}}, ndim=2] out,
- ndarray[int64_t] counts,
- ndarray[{{c_type}}, ndim=2] values,
- ndarray[int64_t] labels,
+def group_mean_{{name}}({{c_type}}[:, :] out,
+ int64_t[:] counts,
+ {{c_type}}[:, :] values,
+ const int64_t[:] labels,
Py_ssize_t min_count=-1):
cdef:
Py_ssize_t i, j, N, K, lab, ncounts = len(counts)
@@ -220,11 +220,11 @@ def group_mean_{{name}}(ndarray[{{c_type}}, ndim=2] out,
@cython.wraparound(False)
@cython.boundscheck(False)
-def group_ohlc_{{name}}(ndarray[{{c_type}}, ndim=2] out,
- ndarray[int64_t] counts,
- ndarray[{{c_type}}, ndim=2] values,
- ndarray[int64_t] labels,
- Py_ssize_t min_count=-1):
+def group_ohlc_{{name}}({{c_type}}[:, :] out,
+ int64_t[:] counts,
+ {{c_type}}[:, :] values,
+ const int64_t[:] labels,
+ Py_ssize_t min_count=-1):
"""
Only aggregates on axis=0
"""
@@ -293,10 +293,10 @@ def get_dispatch(dtypes):
@cython.wraparound(False)
@cython.boundscheck(False)
-def group_last_{{name}}(ndarray[{{c_type}}, ndim=2] out,
- ndarray[int64_t] counts,
- ndarray[{{c_type}}, ndim=2] values,
- ndarray[int64_t] labels,
+def group_last_{{name}}({{c_type}}[:, :] out,
+ int64_t[:] counts,
+ {{c_type}}[:, :] values,
+ const int64_t[:] labels,
Py_ssize_t min_count=-1):
"""
Only aggregates on axis=0
@@ -350,10 +350,10 @@ def group_last_{{name}}(ndarray[{{c_type}}, ndim=2] out,
@cython.wraparound(False)
@cython.boundscheck(False)
-def group_nth_{{name}}(ndarray[{{c_type}}, ndim=2] out,
- ndarray[int64_t] counts,
- ndarray[{{c_type}}, ndim=2] values,
- ndarray[int64_t] labels, int64_t rank,
+def group_nth_{{name}}({{c_type}}[:, :] out,
+ int64_t[:] counts,
+ {{c_type}}[:, :] values,
+ const int64_t[:] labels, int64_t rank,
Py_ssize_t min_count=-1):
"""
Only aggregates on axis=0
@@ -411,9 +411,9 @@ def group_nth_{{name}}(ndarray[{{c_type}}, ndim=2] out,
@cython.boundscheck(False)
@cython.wraparound(False)
-def group_rank_{{name}}(ndarray[float64_t, ndim=2] out,
- ndarray[{{c_type}}, ndim=2] values,
- ndarray[int64_t] labels,
+def group_rank_{{name}}(float64_t[:, :] out,
+ {{c_type}}[:, :] values,
+ const int64_t[:] labels,
bint is_datetimelike, object ties_method,
bint ascending, bint pct, object na_option):
"""
@@ -606,10 +606,10 @@ ctypedef fused groupby_t:
@cython.wraparound(False)
@cython.boundscheck(False)
-def group_max(ndarray[groupby_t, ndim=2] out,
- ndarray[int64_t] counts,
- ndarray[groupby_t, ndim=2] values,
- ndarray[int64_t] labels,
+def group_max(groupby_t[:, :] out,
+ int64_t[:] counts,
+ groupby_t[:, :] values,
+ const int64_t[:] labels,
Py_ssize_t min_count=-1):
"""
Only aggregates on axis=0
@@ -669,10 +669,10 @@ def group_max(ndarray[groupby_t, ndim=2] out,
@cython.wraparound(False)
@cython.boundscheck(False)
-def group_min(ndarray[groupby_t, ndim=2] out,
- ndarray[int64_t] counts,
- ndarray[groupby_t, ndim=2] values,
- ndarray[int64_t] labels,
+def group_min(groupby_t[:, :] out,
+ int64_t[:] counts,
+ groupby_t[:, :] values,
+ const int64_t[:] labels,
Py_ssize_t min_count=-1):
"""
Only aggregates on axis=0
@@ -731,9 +731,9 @@ def group_min(ndarray[groupby_t, ndim=2] out,
@cython.boundscheck(False)
@cython.wraparound(False)
-def group_cummin(ndarray[groupby_t, ndim=2] out,
- ndarray[groupby_t, ndim=2] values,
- ndarray[int64_t] labels,
+def group_cummin(groupby_t[:, :] out,
+ groupby_t[:, :] values,
+ const int64_t[:] labels,
bint is_datetimelike):
"""
Only transforms on axis=0
@@ -779,9 +779,9 @@ def group_cummin(ndarray[groupby_t, ndim=2] out,
@cython.boundscheck(False)
@cython.wraparound(False)
-def group_cummax(ndarray[groupby_t, ndim=2] out,
- ndarray[groupby_t, ndim=2] values,
- ndarray[int64_t] labels,
+def group_cummax(groupby_t[:, :] out,
+ groupby_t[:, :] values,
+ const int64_t[:] labels,
bint is_datetimelike):
"""
Only transforms on axis=0
diff --git a/pandas/_libs/hashtable.pyx b/pandas/_libs/hashtable.pyx
index 47fa5932290af..8d0c451ad0ab8 100644
--- a/pandas/_libs/hashtable.pyx
+++ b/pandas/_libs/hashtable.pyx
@@ -52,9 +52,10 @@ include "hashtable_class_helper.pxi"
include "hashtable_func_helper.pxi"
cdef class Factorizer:
- cdef public PyObjectHashTable table
- cdef public ObjectVector uniques
- cdef public Py_ssize_t count
+ cdef public:
+ PyObjectHashTable table
+ ObjectVector uniques
+ Py_ssize_t count
def __init__(self, size_hint):
self.table = PyObjectHashTable(size_hint)
@@ -96,9 +97,10 @@ cdef class Factorizer:
cdef class Int64Factorizer:
- cdef public Int64HashTable table
- cdef public Int64Vector uniques
- cdef public Py_ssize_t count
+ cdef public:
+ Int64HashTable table
+ Int64Vector uniques
+ Py_ssize_t count
def __init__(self, size_hint):
self.table = Int64HashTable(size_hint)
@@ -140,7 +142,7 @@ cdef class Int64Factorizer:
@cython.wraparound(False)
@cython.boundscheck(False)
-def unique_label_indices(ndarray[int64_t, ndim=1] labels):
+def unique_label_indices(const int64_t[:] labels):
"""
indices of the first occurrences of the unique labels
*excluding* -1. equivalent to:
@@ -168,6 +170,6 @@ def unique_label_indices(ndarray[int64_t, ndim=1] labels):
kh_destroy_int64(table)
arr = idx.to_array()
- arr = arr[labels[arr].argsort()]
+ arr = arr[np.asarray(labels)[arr].argsort()]
return arr[1:] if arr.size != 0 and labels[arr[0]] == -1 else arr
diff --git a/pandas/_libs/hashtable_class_helper.pxi.in b/pandas/_libs/hashtable_class_helper.pxi.in
index eac35588b6fc3..3644928d8dedc 100644
--- a/pandas/_libs/hashtable_class_helper.pxi.in
+++ b/pandas/_libs/hashtable_class_helper.pxi.in
@@ -322,7 +322,7 @@ cdef class {{name}}HashTable(HashTable):
self.table.vals[k] = <Py_ssize_t>values[i]
@cython.boundscheck(False)
- def map_locations(self, ndarray[{{dtype}}_t, ndim=1] values):
+ def map_locations(self, const {{dtype}}_t[:] values):
cdef:
Py_ssize_t i, n = len(values)
int ret = 0
diff --git a/pandas/_libs/internals.pyx b/pandas/_libs/internals.pyx
index 72a1cf16f96b6..f23d2666b4bf4 100644
--- a/pandas/_libs/internals.pyx
+++ b/pandas/_libs/internals.pyx
@@ -23,10 +23,11 @@ from pandas._libs.algos import ensure_int64
cdef class BlockPlacement:
# __slots__ = '_as_slice', '_as_array', '_len'
- cdef slice _as_slice
- cdef object _as_array
+ cdef:
+ slice _as_slice
+ object _as_array
- cdef bint _has_slice, _has_array, _is_known_slice_like
+ bint _has_slice, _has_array, _is_known_slice_like
def __init__(self, val):
cdef:
diff --git a/pandas/_libs/join.pyx b/pandas/_libs/join.pyx
index e4440ac3d9fd8..503867058b3c8 100644
--- a/pandas/_libs/join.pyx
+++ b/pandas/_libs/join.pyx
@@ -14,7 +14,7 @@ from pandas._libs.algos import groupsort_indexer, ensure_platform_int
from pandas.core.algorithms import take_nd
-def inner_join(ndarray[int64_t] left, ndarray[int64_t] right,
+def inner_join(const int64_t[:] left, const int64_t[:] right,
Py_ssize_t max_groups):
cdef:
Py_ssize_t i, j, k, count = 0
@@ -65,7 +65,7 @@ def inner_join(ndarray[int64_t] left, ndarray[int64_t] right,
_get_result_indexer(right_sorter, right_indexer))
-def left_outer_join(ndarray[int64_t] left, ndarray[int64_t] right,
+def left_outer_join(const int64_t[:] left, const int64_t[:] right,
Py_ssize_t max_groups, sort=True):
cdef:
Py_ssize_t i, j, k, count = 0
@@ -139,7 +139,7 @@ def left_outer_join(ndarray[int64_t] left, ndarray[int64_t] right,
return left_indexer, right_indexer
-def full_outer_join(ndarray[int64_t] left, ndarray[int64_t] right,
+def full_outer_join(const int64_t[:] left, const int64_t[:] right,
Py_ssize_t max_groups):
cdef:
Py_ssize_t i, j, k, count = 0
@@ -213,7 +213,7 @@ def _get_result_indexer(sorter, indexer):
return res
-def ffill_indexer(ndarray[int64_t] indexer):
+def ffill_indexer(const int64_t[:] indexer):
cdef:
Py_ssize_t i, n = len(indexer)
ndarray[int64_t] result
@@ -252,7 +252,7 @@ ctypedef fused join_t:
@cython.wraparound(False)
@cython.boundscheck(False)
-def left_join_indexer_unique(ndarray[join_t] left, ndarray[join_t] right):
+def left_join_indexer_unique(join_t[:] left, join_t[:] right):
cdef:
Py_ssize_t i, j, nleft, nright
ndarray[int64_t] indexer
@@ -677,10 +677,10 @@ ctypedef fused by_t:
uint64_t
-def asof_join_backward_on_X_by_Y(ndarray[asof_t] left_values,
- ndarray[asof_t] right_values,
- ndarray[by_t] left_by_values,
- ndarray[by_t] right_by_values,
+def asof_join_backward_on_X_by_Y(asof_t[:] left_values,
+ asof_t[:] right_values,
+ by_t[:] left_by_values,
+ by_t[:] right_by_values,
bint allow_exact_matches=1,
tolerance=None):
@@ -746,10 +746,10 @@ def asof_join_backward_on_X_by_Y(ndarray[asof_t] left_values,
return left_indexer, right_indexer
-def asof_join_forward_on_X_by_Y(ndarray[asof_t] left_values,
- ndarray[asof_t] right_values,
- ndarray[by_t] left_by_values,
- ndarray[by_t] right_by_values,
+def asof_join_forward_on_X_by_Y(asof_t[:] left_values,
+ asof_t[:] right_values,
+ by_t[:] left_by_values,
+ by_t[:] right_by_values,
bint allow_exact_matches=1,
tolerance=None):
@@ -815,10 +815,10 @@ def asof_join_forward_on_X_by_Y(ndarray[asof_t] left_values,
return left_indexer, right_indexer
-def asof_join_nearest_on_X_by_Y(ndarray[asof_t] left_values,
- ndarray[asof_t] right_values,
- ndarray[by_t] left_by_values,
- ndarray[by_t] right_by_values,
+def asof_join_nearest_on_X_by_Y(asof_t[:] left_values,
+ asof_t[:] right_values,
+ by_t[:] left_by_values,
+ by_t[:] right_by_values,
bint allow_exact_matches=1,
tolerance=None):
@@ -864,8 +864,8 @@ def asof_join_nearest_on_X_by_Y(ndarray[asof_t] left_values,
# asof_join
# ----------------------------------------------------------------------
-def asof_join_backward(ndarray[asof_t] left_values,
- ndarray[asof_t] right_values,
+def asof_join_backward(asof_t[:] left_values,
+ asof_t[:] right_values,
bint allow_exact_matches=1,
tolerance=None):
@@ -917,8 +917,8 @@ def asof_join_backward(ndarray[asof_t] left_values,
return left_indexer, right_indexer
-def asof_join_forward(ndarray[asof_t] left_values,
- ndarray[asof_t] right_values,
+def asof_join_forward(asof_t[:] left_values,
+ asof_t[:] right_values,
bint allow_exact_matches=1,
tolerance=None):
@@ -971,8 +971,8 @@ def asof_join_forward(ndarray[asof_t] left_values,
return left_indexer, right_indexer
-def asof_join_nearest(ndarray[asof_t] left_values,
- ndarray[asof_t] right_values,
+def asof_join_nearest(asof_t[:] left_values,
+ asof_t[:] right_values,
bint allow_exact_matches=1,
tolerance=None):
diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
index f845a5437ded4..4745916eb0ce2 100644
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -40,11 +40,12 @@ cdef extern from "numpy/arrayobject.h":
# Use PyDataType_* macros when possible, however there are no macros
# for accessing some of the fields, so some are defined. Please
# ask on cython-dev if you need more.
- cdef int type_num
- cdef int itemsize "elsize"
- cdef char byteorder
- cdef object fields
- cdef tuple names
+ cdef:
+ int type_num
+ int itemsize "elsize"
+ char byteorder
+ object fields
+ tuple names
cdef extern from "src/parse_helper.h":
@@ -67,12 +68,13 @@ from pandas._libs.missing cimport (
# constants that will be compared to potentially arbitrarily large
# python int
-cdef object oINT64_MAX = <int64_t>INT64_MAX
-cdef object oINT64_MIN = <int64_t>INT64_MIN
-cdef object oUINT64_MAX = <uint64_t>UINT64_MAX
+cdef:
+ object oINT64_MAX = <int64_t>INT64_MAX
+ object oINT64_MIN = <int64_t>INT64_MIN
+ object oUINT64_MAX = <uint64_t>UINT64_MAX
-cdef bint PY2 = sys.version_info[0] == 2
-cdef float64_t NaN = <float64_t>np.NaN
+ bint PY2 = sys.version_info[0] == 2
+ float64_t NaN = <float64_t>np.NaN
def values_from_object(obj: object):
@@ -376,7 +378,7 @@ def fast_zip(list ndarrays):
return result
-def get_reverse_indexer(ndarray[int64_t] indexer, Py_ssize_t length):
+def get_reverse_indexer(const int64_t[:] indexer, Py_ssize_t length):
"""
Reverse indexing operation.
@@ -405,7 +407,7 @@ def get_reverse_indexer(ndarray[int64_t] indexer, Py_ssize_t length):
@cython.wraparound(False)
@cython.boundscheck(False)
-def has_infs_f4(ndarray[float32_t] arr) -> bool:
+def has_infs_f4(const float32_t[:] arr) -> bool:
cdef:
Py_ssize_t i, n = len(arr)
float32_t inf, neginf, val
@@ -422,7 +424,7 @@ def has_infs_f4(ndarray[float32_t] arr) -> bool:
@cython.wraparound(False)
@cython.boundscheck(False)
-def has_infs_f8(ndarray[float64_t] arr) -> bool:
+def has_infs_f8(const float64_t[:] arr) -> bool:
cdef:
Py_ssize_t i, n = len(arr)
float64_t inf, neginf, val
@@ -660,7 +662,7 @@ def clean_index_list(obj: list):
# is a general, O(max(len(values), len(binner))) method.
@cython.boundscheck(False)
@cython.wraparound(False)
-def generate_bins_dt64(ndarray[int64_t] values, ndarray[int64_t] binner,
+def generate_bins_dt64(ndarray[int64_t] values, const int64_t[:] binner,
object closed='left', bint hasnans=0):
"""
Int64 (datetime64) version of generic python version in groupby.py
@@ -723,7 +725,7 @@ def generate_bins_dt64(ndarray[int64_t] values, ndarray[int64_t] binner,
@cython.boundscheck(False)
@cython.wraparound(False)
-def row_bool_subset(ndarray[float64_t, ndim=2] values,
+def row_bool_subset(const float64_t[:, :] values,
ndarray[uint8_t, cast=True] mask):
cdef:
Py_ssize_t i, j, n, k, pos = 0
@@ -767,8 +769,8 @@ def row_bool_subset_object(ndarray[object, ndim=2] values,
@cython.boundscheck(False)
@cython.wraparound(False)
-def get_level_sorter(ndarray[int64_t, ndim=1] label,
- ndarray[int64_t, ndim=1] starts):
+def get_level_sorter(const int64_t[:] label,
+ const int64_t[:] starts):
"""
argsort for a single level of a multi-index, keeping the order of higher
levels unchanged. `starts` points to starts of same-key indices w.r.t
@@ -780,10 +782,11 @@ def get_level_sorter(ndarray[int64_t, ndim=1] label,
int64_t l, r
Py_ssize_t i
ndarray[int64_t, ndim=1] out = np.empty(len(label), dtype=np.int64)
+ ndarray[int64_t, ndim=1] label_arr = np.asarray(label)
for i in range(len(starts) - 1):
l, r = starts[i], starts[i + 1]
- out[l:r] = l + label[l:r].argsort(kind='mergesort')
+ out[l:r] = l + label_arr[l:r].argsort(kind='mergesort')
return out
@@ -791,7 +794,7 @@ def get_level_sorter(ndarray[int64_t, ndim=1] label,
@cython.boundscheck(False)
@cython.wraparound(False)
def count_level_2d(ndarray[uint8_t, ndim=2, cast=True] mask,
- ndarray[int64_t, ndim=1] labels,
+ const int64_t[:] labels,
Py_ssize_t max_bin,
int axis):
cdef:
@@ -818,7 +821,7 @@ def count_level_2d(ndarray[uint8_t, ndim=2, cast=True] mask,
return counts
-def generate_slices(ndarray[int64_t] labels, Py_ssize_t ngroups):
+def generate_slices(const int64_t[:] labels, Py_ssize_t ngroups):
cdef:
Py_ssize_t i, group_size, n, start
int64_t lab
@@ -847,7 +850,7 @@ def generate_slices(ndarray[int64_t] labels, Py_ssize_t ngroups):
return starts, ends
-def indices_fast(object index, ndarray[int64_t] labels, list keys,
+def indices_fast(object index, const int64_t[:] labels, list keys,
list sorted_labels):
cdef:
Py_ssize_t i, j, k, lab, cur, start, n = len(labels)
@@ -2146,7 +2149,7 @@ def maybe_convert_objects(ndarray[object] objects, bint try_float=0,
@cython.boundscheck(False)
@cython.wraparound(False)
-def map_infer_mask(ndarray arr, object f, ndarray[uint8_t] mask,
+def map_infer_mask(ndarray arr, object f, const uint8_t[:] mask,
bint convert=1):
"""
Substitute for np.vectorize with pandas-friendly dtype inference
diff --git a/pandas/_libs/missing.pyx b/pandas/_libs/missing.pyx
index 229edbac4992d..ab0e4cd6cc765 100644
--- a/pandas/_libs/missing.pyx
+++ b/pandas/_libs/missing.pyx
@@ -16,10 +16,11 @@ from pandas._libs.tslibs.nattype cimport (
checknull_with_nat, c_NaT as NaT, is_null_datetimelike)
-cdef float64_t INF = <float64_t>np.inf
-cdef float64_t NEGINF = -INF
+cdef:
+ float64_t INF = <float64_t>np.inf
+ float64_t NEGINF = -INF
-cdef int64_t NPY_NAT = util.get_nat()
+ int64_t NPY_NAT = util.get_nat()
cpdef bint checknull(object val):
diff --git a/pandas/_libs/parsers.pyx b/pandas/_libs/parsers.pyx
index 6cb6ed749f87b..f679746643643 100644
--- a/pandas/_libs/parsers.pyx
+++ b/pandas/_libs/parsers.pyx
@@ -64,10 +64,11 @@ from pandas.errors import (ParserError, DtypeWarning,
CParserError = ParserError
-cdef bint PY3 = (sys.version_info[0] >= 3)
+cdef:
+ bint PY3 = (sys.version_info[0] >= 3)
-cdef float64_t INF = <float64_t>np.inf
-cdef float64_t NEGINF = -INF
+ float64_t INF = <float64_t>np.inf
+ float64_t NEGINF = -INF
cdef extern from "errno.h":
@@ -735,7 +736,7 @@ cdef class TextReader:
int status
int64_t hr, data_line
char *errors = "strict"
- cdef StringPath path = _string_path(self.c_encoding)
+ StringPath path = _string_path(self.c_encoding)
header = []
unnamed_cols = set()
@@ -1389,8 +1390,9 @@ cdef class TextReader:
return None
-cdef object _true_values = [b'True', b'TRUE', b'true']
-cdef object _false_values = [b'False', b'FALSE', b'false']
+cdef:
+ object _true_values = [b'True', b'TRUE', b'true']
+ object _false_values = [b'False', b'FALSE', b'false']
def _ensure_encoded(list lst):
@@ -1637,7 +1639,7 @@ cdef _categorical_convert(parser_t *parser, int64_t col,
int64_t current_category = 0
char *errors = "strict"
- cdef StringPath path = _string_path(encoding)
+ StringPath path = _string_path(encoding)
int ret = 0
kh_str_t *table
@@ -1727,9 +1729,10 @@ cdef inline void _to_fw_string_nogil(parser_t *parser, int64_t col,
data += width
-cdef char* cinf = b'inf'
-cdef char* cposinf = b'+inf'
-cdef char* cneginf = b'-inf'
+cdef:
+ char* cinf = b'inf'
+ char* cposinf = b'+inf'
+ char* cneginf = b'-inf'
cdef _try_double(parser_t *parser, int64_t col,
diff --git a/pandas/_libs/reduction.pyx b/pandas/_libs/reduction.pyx
index ca39c4de4d309..507567cf480d7 100644
--- a/pandas/_libs/reduction.pyx
+++ b/pandas/_libs/reduction.pyx
@@ -494,7 +494,7 @@ class InvalidApply(Exception):
def apply_frame_axis0(object frame, object f, object names,
- ndarray[int64_t] starts, ndarray[int64_t] ends):
+ const int64_t[:] starts, const int64_t[:] ends):
cdef:
BlockSlider slider
Py_ssize_t i, n = len(starts)
diff --git a/pandas/_libs/skiplist.pyx b/pandas/_libs/skiplist.pyx
index 6698fcb767d7c..2fdee72f9d588 100644
--- a/pandas/_libs/skiplist.pyx
+++ b/pandas/_libs/skiplist.pyx
@@ -57,8 +57,9 @@ cdef class IndexableSkiplist:
return self.get(i)
cpdef get(self, Py_ssize_t i):
- cdef Py_ssize_t level
- cdef Node node
+ cdef:
+ Py_ssize_t level
+ Node node
node = self.head
i += 1
@@ -71,9 +72,10 @@ cdef class IndexableSkiplist:
return node.value
cpdef insert(self, double value):
- cdef Py_ssize_t level, steps, d
- cdef Node node, prevnode, newnode, next_at_level, tmp
- cdef list chain, steps_at_level
+ cdef:
+ Py_ssize_t level, steps, d
+ Node node, prevnode, newnode, next_at_level, tmp
+ list chain, steps_at_level
# find first node on each level where node.next[levels].value > value
chain = [None] * self.maxlevels
@@ -110,9 +112,10 @@ cdef class IndexableSkiplist:
self.size += 1
cpdef remove(self, double value):
- cdef Py_ssize_t level, d
- cdef Node node, prevnode, tmpnode, next_at_level
- cdef list chain
+ cdef:
+ Py_ssize_t level, d
+ Node node, prevnode, tmpnode, next_at_level
+ list chain
# find first node on each level where node.next[levels].value >= value
chain = [None] * self.maxlevels
diff --git a/pandas/_libs/sparse_op_helper.pxi.in b/pandas/_libs/sparse_op_helper.pxi.in
index c6621ab5977ca..5949a3fd0ed81 100644
--- a/pandas/_libs/sparse_op_helper.pxi.in
+++ b/pandas/_libs/sparse_op_helper.pxi.in
@@ -125,10 +125,10 @@ def get_dispatch(dtypes):
@cython.wraparound(False)
@cython.boundscheck(False)
-cdef inline tuple block_op_{{opname}}_{{dtype}}(ndarray x_,
+cdef inline tuple block_op_{{opname}}_{{dtype}}({{dtype}}_t[:] x_,
BlockIndex xindex,
{{dtype}}_t xfill,
- ndarray y_,
+ {{dtype}}_t[:] y_,
BlockIndex yindex,
{{dtype}}_t yfill):
'''
@@ -142,7 +142,7 @@ cdef inline tuple block_op_{{opname}}_{{dtype}}(ndarray x_,
int32_t xloc, yloc
Py_ssize_t xblock = 0, yblock = 0 # block numbers
- ndarray[{{dtype}}_t, ndim=1] x, y
+ {{dtype}}_t[:] x, y
ndarray[{{rdtype}}_t, ndim=1] out
# to suppress Cython warning
@@ -226,16 +226,18 @@ cdef inline tuple block_op_{{opname}}_{{dtype}}(ndarray x_,
@cython.wraparound(False)
@cython.boundscheck(False)
-cdef inline tuple int_op_{{opname}}_{{dtype}}(ndarray x_, IntIndex xindex,
+cdef inline tuple int_op_{{opname}}_{{dtype}}({{dtype}}_t[:] x_,
+ IntIndex xindex,
{{dtype}}_t xfill,
- ndarray y_, IntIndex yindex,
+ {{dtype}}_t[:] y_,
+ IntIndex yindex,
{{dtype}}_t yfill):
cdef:
IntIndex out_index
Py_ssize_t xi = 0, yi = 0, out_i = 0 # fp buf indices
int32_t xloc, yloc
- ndarray[int32_t, ndim=1] xindices, yindices, out_indices
- ndarray[{{dtype}}_t, ndim=1] x, y
+ int32_t[:] xindices, yindices, out_indices
+ {{dtype}}_t[:] x, y
ndarray[{{rdtype}}_t, ndim=1] out
# suppress Cython compiler warnings due to inlining
@@ -284,9 +286,9 @@ cdef inline tuple int_op_{{opname}}_{{dtype}}(ndarray x_, IntIndex xindex,
return out, out_index, {{(opname, 'xfill', 'yfill', dtype) | get_op}}
-cpdef sparse_{{opname}}_{{dtype}}(ndarray[{{dtype}}_t, ndim=1] x,
+cpdef sparse_{{opname}}_{{dtype}}({{dtype}}_t[:] x,
SparseIndex xindex, {{dtype}}_t xfill,
- ndarray[{{dtype}}_t, ndim=1] y,
+ {{dtype}}_t[:] y,
SparseIndex yindex, {{dtype}}_t yfill):
if isinstance(xindex, BlockIndex):
diff --git a/pandas/_libs/tslibs/conversion.pyx b/pandas/_libs/tslibs/conversion.pyx
index 6c8b732928bc3..1c0adaaa288a9 100644
--- a/pandas/_libs/tslibs/conversion.pyx
+++ b/pandas/_libs/tslibs/conversion.pyx
@@ -147,7 +147,7 @@ def ensure_timedelta64ns(arr: ndarray, copy: bool=True):
@cython.boundscheck(False)
@cython.wraparound(False)
-def datetime_to_datetime64(values: object[:]):
+def datetime_to_datetime64(object[:] values):
"""
Convert ndarray of datetime-like objects to int64 array representing
nanosecond timestamps.
diff --git a/pandas/_libs/tslibs/fields.pyx b/pandas/_libs/tslibs/fields.pyx
index 5cda7992369fc..240f008394099 100644
--- a/pandas/_libs/tslibs/fields.pyx
+++ b/pandas/_libs/tslibs/fields.pyx
@@ -381,7 +381,7 @@ def get_start_end_field(int64_t[:] dtindex, object field,
@cython.wraparound(False)
@cython.boundscheck(False)
-def get_date_field(ndarray[int64_t] dtindex, object field):
+def get_date_field(int64_t[:] dtindex, object field):
"""
Given a int64-based datetime index, extract the year, month, etc.,
field and return an array of these values.
diff --git a/pandas/_libs/tslibs/parsing.pyx b/pandas/_libs/tslibs/parsing.pyx
index 82719de2dbdbd..7759e165b7193 100644
--- a/pandas/_libs/tslibs/parsing.pyx
+++ b/pandas/_libs/tslibs/parsing.pyx
@@ -44,9 +44,10 @@ class DateParseError(ValueError):
_DEFAULT_DATETIME = datetime(1, 1, 1).replace(hour=0, minute=0,
second=0, microsecond=0)
-cdef object _TIMEPAT = re.compile(r'^([01]?[0-9]|2[0-3]):([0-5][0-9])')
+cdef:
+ object _TIMEPAT = re.compile(r'^([01]?[0-9]|2[0-3]):([0-5][0-9])')
-cdef set _not_datelike_strings = {'a', 'A', 'm', 'M', 'p', 'P', 't', 'T'}
+ set _not_datelike_strings = {'a', 'A', 'm', 'M', 'p', 'P', 't', 'T'}
# ----------------------------------------------------------------------
diff --git a/pandas/_libs/tslibs/period.pyx b/pandas/_libs/tslibs/period.pyx
index 2f4edb7de8f95..e38e9a1ca5df6 100644
--- a/pandas/_libs/tslibs/period.pyx
+++ b/pandas/_libs/tslibs/period.pyx
@@ -52,9 +52,10 @@ from pandas._libs.tslibs.nattype cimport (
from pandas._libs.tslibs.offsets cimport to_offset
from pandas._libs.tslibs.offsets import _Tick
-cdef bint PY2 = str == bytes
-cdef enum:
- INT32_MIN = -2147483648
+cdef:
+ bint PY2 = str == bytes
+ enum:
+ INT32_MIN = -2147483648
ctypedef struct asfreq_info:
diff --git a/pandas/_libs/tslibs/resolution.pyx b/pandas/_libs/tslibs/resolution.pyx
index f80c1e9841abe..13a4f5ba48557 100644
--- a/pandas/_libs/tslibs/resolution.pyx
+++ b/pandas/_libs/tslibs/resolution.pyx
@@ -16,15 +16,16 @@ from pandas._libs.tslibs.ccalendar cimport get_days_in_month
# ----------------------------------------------------------------------
# Constants
-cdef int64_t NPY_NAT = get_nat()
-
-cdef int RESO_NS = 0
-cdef int RESO_US = 1
-cdef int RESO_MS = 2
-cdef int RESO_SEC = 3
-cdef int RESO_MIN = 4
-cdef int RESO_HR = 5
-cdef int RESO_DAY = 6
+cdef:
+ int64_t NPY_NAT = get_nat()
+
+ int RESO_NS = 0
+ int RESO_US = 1
+ int RESO_MS = 2
+ int RESO_SEC = 3
+ int RESO_MIN = 4
+ int RESO_HR = 5
+ int RESO_DAY = 6
# ----------------------------------------------------------------------
diff --git a/pandas/_libs/window.pyx b/pandas/_libs/window.pyx
index e8f3de64c3823..cc5b3b63f5b04 100644
--- a/pandas/_libs/window.pyx
+++ b/pandas/_libs/window.pyx
@@ -26,13 +26,14 @@ from pandas._libs.skiplist cimport (
skiplist_t, skiplist_init, skiplist_destroy, skiplist_get, skiplist_insert,
skiplist_remove)
-cdef float32_t MINfloat32 = np.NINF
-cdef float64_t MINfloat64 = np.NINF
+cdef:
+ float32_t MINfloat32 = np.NINF
+ float64_t MINfloat64 = np.NINF
-cdef float32_t MAXfloat32 = np.inf
-cdef float64_t MAXfloat64 = np.inf
+ float32_t MAXfloat32 = np.inf
+ float64_t MAXfloat64 = np.inf
-cdef float64_t NaN = <float64_t>np.NaN
+ float64_t NaN = <float64_t>np.NaN
cdef inline int int_max(int a, int b): return a if a >= b else b
cdef inline int int_min(int a, int b): return a if a <= b else b
@@ -242,7 +243,7 @@ cdef class VariableWindowIndexer(WindowIndexer):
# max window size
self.win = (self.end - self.start).max()
- def build(self, ndarray[int64_t] index, int64_t win, bint left_closed,
+ def build(self, const int64_t[:] index, int64_t win, bint left_closed,
bint right_closed):
cdef:
diff --git a/pandas/io/msgpack/_packer.pyx b/pandas/io/msgpack/_packer.pyx
index d67c632188e62..8e2d943d8ddb1 100644
--- a/pandas/io/msgpack/_packer.pyx
+++ b/pandas/io/msgpack/_packer.pyx
@@ -74,14 +74,15 @@ cdef class Packer(object):
Use bin type introduced in msgpack spec 2.0 for bytes.
It also enable str8 type for unicode.
"""
- cdef msgpack_packer pk
- cdef object _default
- cdef object _bencoding
- cdef object _berrors
- cdef char *encoding
- cdef char *unicode_errors
- cdef bint use_float
- cdef bint autoreset
+ cdef:
+ msgpack_packer pk
+ object _default
+ object _bencoding
+ object _berrors
+ char *encoding
+ char *unicode_errors
+ bint use_float
+ bint autoreset
def __cinit__(self):
cdef int buf_size = 1024 * 1024
@@ -123,16 +124,17 @@ cdef class Packer(object):
cdef int _pack(self, object o,
int nest_limit=DEFAULT_RECURSE_LIMIT) except -1:
- cdef long long llval
- cdef unsigned long long ullval
- cdef long longval
- cdef float fval
- cdef double dval
- cdef char* rawval
- cdef int ret
- cdef dict d
- cdef size_t L
- cdef int default_used = 0
+ cdef:
+ long long llval
+ unsigned long long ullval
+ long longval
+ float fval
+ double dval
+ char* rawval
+ int ret
+ dict d
+ size_t L
+ int default_used = 0
if nest_limit < 0:
raise PackValueError("recursion limit exceeded.")
diff --git a/pandas/io/msgpack/_unpacker.pyx b/pandas/io/msgpack/_unpacker.pyx
index 0c50aa5e68103..9bbfe749ef9ba 100644
--- a/pandas/io/msgpack/_unpacker.pyx
+++ b/pandas/io/msgpack/_unpacker.pyx
@@ -120,14 +120,15 @@ def unpackb(object packed, object object_hook=None, object list_hook=None,
See :class:`Unpacker` for options.
"""
- cdef unpack_context ctx
- cdef size_t off = 0
- cdef int ret
+ cdef:
+ unpack_context ctx
+ size_t off = 0
+ int ret
- cdef char* buf
- cdef Py_ssize_t buf_len
- cdef char* cenc = NULL
- cdef char* cerr = NULL
+ char* buf
+ Py_ssize_t buf_len
+ char* cenc = NULL
+ char* cerr = NULL
PyObject_AsReadBuffer(packed, <const void**>&buf, &buf_len)
@@ -243,16 +244,17 @@ cdef class Unpacker(object):
for o in unpacker:
process(o)
"""
- cdef unpack_context ctx
- cdef char* buf
- cdef size_t buf_size, buf_head, buf_tail
- cdef object file_like
- cdef object file_like_read
- cdef Py_ssize_t read_size
- # To maintain refcnt.
- cdef object object_hook, object_pairs_hook, list_hook, ext_hook
- cdef object encoding, unicode_errors
- cdef size_t max_buffer_size
+ cdef:
+ unpack_context ctx
+ char* buf
+ size_t buf_size, buf_head, buf_tail
+ object file_like
+ object file_like_read
+ Py_ssize_t read_size
+ # To maintain refcnt.
+ object object_hook, object_pairs_hook, list_hook, ext_hook
+ object encoding, unicode_errors
+ size_t max_buffer_size
def __cinit__(self):
self.buf = NULL
@@ -270,8 +272,9 @@ cdef class Unpacker(object):
Py_ssize_t max_array_len=2147483647,
Py_ssize_t max_map_len=2147483647,
Py_ssize_t max_ext_len=2147483647):
- cdef char *cenc=NULL,
- cdef char *cerr=NULL
+ cdef:
+ char *cenc=NULL,
+ char *cerr=NULL
self.object_hook = object_hook
self.object_pairs_hook = object_pairs_hook
@@ -388,9 +391,10 @@ cdef class Unpacker(object):
cdef object _unpack(self, execute_fn execute,
object write_bytes, bint iter=0):
- cdef int ret
- cdef object obj
- cdef size_t prev_head
+ cdef:
+ int ret
+ object obj
+ size_t prev_head
if self.buf_head >= self.buf_tail and self.file_like is not None:
self.read_from_file()
diff --git a/pandas/io/sas/sas.pyx b/pandas/io/sas/sas.pyx
index a5bfd5866a261..9b8fba16741f6 100644
--- a/pandas/io/sas/sas.pyx
+++ b/pandas/io/sas/sas.pyx
@@ -203,11 +203,12 @@ cdef enum ColumnTypes:
# type the page_data types
-cdef int page_meta_type = const.page_meta_type
-cdef int page_mix_types_0 = const.page_mix_types[0]
-cdef int page_mix_types_1 = const.page_mix_types[1]
-cdef int page_data_type = const.page_data_type
-cdef int subheader_pointers_offset = const.subheader_pointers_offset
+cdef:
+ int page_meta_type = const.page_meta_type
+ int page_mix_types_0 = const.page_mix_types[0]
+ int page_mix_types_1 = const.page_mix_types[1]
+ int page_data_type = const.page_data_type
+ int subheader_pointers_offset = const.subheader_pointers_offset
cdef class Parser(object):
| Refactored the Cython files (.pxi.in, .pyx):
1. Made cdef usage consistent (using a single cdef block instead of repeated cdef modifiers where possible)
2. Replaced Python-style annotations with Cython type declarations in function signatures
3. Replaced ndarray with memory views in function signatures
4. Added the const modifier to memory views in function signatures (see the sketch below)
@jreback I took your suggestion from a previous [PR](https://github.com/pandas-dev/pandas/pull/24795#issuecomment-454815511) and implemented it on a broader scope.
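
One practical payoff of the `const` qualifier is that these functions now accept read-only input buffers. A hedged sketch, using `unique_deltas` from the diff above and assuming a development build where `pandas._libs` is importable:

```python
import numpy as np
from pandas._libs import algos

arr = np.arange(5, dtype=np.int64)
arr.setflags(write=False)  # make the buffer read-only

# const int64_t[:] accepts a read-only array; a plain (non-const)
# memory view would raise "buffer source array is read-only"
algos.unique_deltas(arr)   # array([1])
```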
| https://api.github.com/repos/pandas-dev/pandas/pulls/24932 | 2019-01-25T15:28:27Z | 2019-01-26T17:48:41Z | 2019-01-26T17:48:41Z | 2019-01-26T17:48:45Z |
DOC: 0.24 release date | diff --git a/.gitignore b/.gitignore
index 9891883879cf1..816aff376fc83 100644
--- a/.gitignore
+++ b/.gitignore
@@ -101,6 +101,7 @@ asv_bench/pandas/
# Documentation generated files #
#################################
doc/source/generated
+doc/source/user_guide/styled.xlsx
doc/source/reference/api
doc/source/_static
doc/source/vbench
@@ -109,6 +110,5 @@ doc/source/index.rst
doc/build/html/index.html
# Windows specific leftover:
doc/tmp.sv
-doc/source/styled.xlsx
env/
doc/source/savefig/
diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index f0f99d2def136..489d505cb8f67 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1,6 +1,6 @@
.. _whatsnew_0240:
-What's New in 0.24.0 (January XX, 2019)
+What's New in 0.24.0 (January 25, 2019)
---------------------------------------
.. warning::
| Also some gitignore cleanup. | https://api.github.com/repos/pandas-dev/pandas/pulls/24930 | 2019-01-25T14:02:03Z | 2019-01-25T14:23:07Z | 2019-01-25T14:23:07Z | 2019-01-25T14:55:16Z |
DOC: Adding version to the whatsnew section in the home page | diff --git a/doc/source/index.rst.template b/doc/source/index.rst.template
index 55b95868c01dd..51487c0d325b5 100644
--- a/doc/source/index.rst.template
+++ b/doc/source/index.rst.template
@@ -39,7 +39,7 @@ See the :ref:`overview` for more detail about what's in the library.
{% endif %}
{% if not single_doc -%}
- What's New <whatsnew/v0.24.0>
+ What's New in 0.24.0 <whatsnew/v0.24.0>
install
getting_started/index
user_guide/index
| @TomAugspurger @jorisvandenbossche
Not so important, but looking at the home page, it feels quite weird to have the "What's New" section as the first thing in the toctree without specifying which version it covers.
Maybe it's just me, feel free to dismiss this, but I wanted to open it as it's a trivial change, and IMO it makes things much clearer.
You can see how this looks here: https://datapythonista.github.io/pandas-doc-preview/
Sorry for the last-minute stuff. | https://api.github.com/repos/pandas-dev/pandas/pulls/24929 | 2019-01-25T13:46:33Z | 2019-01-25T14:54:37Z | 2019-01-25T14:54:37Z | 2019-01-25T15:24:19Z
DOC: Making home page links more compact and clearer | diff --git a/doc/source/index.rst.template b/doc/source/index.rst.template
index ab51911a610e3..93d74ece3115c 100644
--- a/doc/source/index.rst.template
+++ b/doc/source/index.rst.template
@@ -1,26 +1,21 @@
.. pandas documentation master file, created by
+.. module:: pandas
+
*********************************************
pandas: powerful Python data analysis toolkit
*********************************************
-`PDF Version <pandas.pdf>`__
-
-`Zipped HTML <pandas.zip>`__
-
-.. module:: pandas
-
**Date**: |today| **Version**: |version|
-**Binary Installers:** https://pypi.org/project/pandas
-
-**Source Repository:** https://github.com/pandas-dev/pandas
-
-**Issues & Ideas:** https://github.com/pandas-dev/pandas/issues
-
-**Q&A Support:** https://stackoverflow.com/questions/tagged/pandas
+**Download documentation**: `PDF Version <pandas.pdf>`__ | `Zipped HTML <pandas.zip>`__
-**Developer Mailing List:** https://groups.google.com/forum/#!forum/pydata
+**Useful links**:
+`Binary Installers <https://pypi.org/project/pandas>`__ |
+`Source Repository <https://github.com/pandas-dev/pandas>`__ |
+`Issues & Ideas <https://github.com/pandas-dev/pandas/issues>`__ |
+`Q&A Support <https://stackoverflow.com/questions/tagged/pandas>`__ |
+`Mailing List <https://groups.google.com/forum/#!forum/pydata>`__
:mod:`pandas` is an open source, BSD-licensed library providing high-performance,
easy-to-use data structures and data analysis tools for the `Python <https://www.python.org/>`__
| @TomAugspurger @jorisvandenbossche
I'll share a link in a minute so you can see these changes rendered.
| https://api.github.com/repos/pandas-dev/pandas/pulls/24928 | 2019-01-25T13:10:10Z | 2019-01-25T13:19:33Z | 2019-01-25T13:19:33Z | 2019-01-25T19:07:59Z |
API: Remove IntervalArray from top-level | diff --git a/doc/source/reference/arrays.rst b/doc/source/reference/arrays.rst
index 7281f4f748d6f..1dc74ad83b7e6 100644
--- a/doc/source/reference/arrays.rst
+++ b/doc/source/reference/arrays.rst
@@ -288,12 +288,12 @@ Properties
Interval.overlaps
Interval.right
-A collection of intervals may be stored in an :class:`IntervalArray`.
+A collection of intervals may be stored in an :class:`arrays.IntervalArray`.
.. autosummary::
:toctree: api/
- IntervalArray
+ arrays.IntervalArray
IntervalDtype
.. _api.arrays.integer_na:
diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index f0f99d2def136..d3d8863df9fd5 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -225,7 +225,7 @@ from the ``Series``:
ser.array
pser.array
-These return an instance of :class:`IntervalArray` or :class:`arrays.PeriodArray`,
+These return an instance of :class:`arrays.IntervalArray` or :class:`arrays.PeriodArray`,
the new extension arrays that back interval and period data.
.. warning::
@@ -411,7 +411,7 @@ Other Enhancements
- :meth:`Categorical.from_codes` now can take a ``dtype`` parameter as an alternative to passing ``categories`` and ``ordered`` (:issue:`24398`).
- New attribute ``__git_version__`` will return git commit sha of current build (:issue:`21295`).
- Compatibility with Matplotlib 3.0 (:issue:`22790`).
-- Added :meth:`Interval.overlaps`, :meth:`IntervalArray.overlaps`, and :meth:`IntervalIndex.overlaps` for determining overlaps between interval-like objects (:issue:`21998`)
+- Added :meth:`Interval.overlaps`, :meth:`arrays.IntervalArray.overlaps`, and :meth:`IntervalIndex.overlaps` for determining overlaps between interval-like objects (:issue:`21998`)
- :func:`read_fwf` now accepts keyword ``infer_nrows`` (:issue:`15138`).
- :func:`~DataFrame.to_parquet` now supports writing a ``DataFrame`` as a directory of parquet files partitioned by a subset of the columns when ``engine = 'pyarrow'`` (:issue:`23283`)
- :meth:`Timestamp.tz_localize`, :meth:`DatetimeIndex.tz_localize`, and :meth:`Series.tz_localize` have gained the ``nonexistent`` argument for alternative handling of nonexistent times. See :ref:`timeseries.timezone_nonexistent` (:issue:`8917`, :issue:`24466`)
diff --git a/pandas/core/api.py b/pandas/core/api.py
index afc929c39086c..8c92287e212a6 100644
--- a/pandas/core/api.py
+++ b/pandas/core/api.py
@@ -4,7 +4,6 @@
import numpy as np
-from pandas.core.arrays import IntervalArray
from pandas.core.arrays.integer import (
Int8Dtype,
Int16Dtype,
diff --git a/pandas/core/arrays/array_.py b/pandas/core/arrays/array_.py
index c7be8e3f745c4..41d623c7efd9c 100644
--- a/pandas/core/arrays/array_.py
+++ b/pandas/core/arrays/array_.py
@@ -50,7 +50,7 @@ def array(data, # type: Sequence[object]
============================== =====================================
Scalar Type Array Type
============================== =====================================
- :class:`pandas.Interval` :class:`pandas.IntervalArray`
+ :class:`pandas.Interval` :class:`pandas.arrays.IntervalArray`
:class:`pandas.Period` :class:`pandas.arrays.PeriodArray`
:class:`datetime.datetime` :class:`pandas.arrays.DatetimeArray`
:class:`datetime.timedelta` :class:`pandas.arrays.TimedeltaArray`
diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index 45470e03c041a..1e671c7bd956a 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -32,6 +32,7 @@
_shared_docs_kwargs = dict(
klass='IntervalArray',
+ qualname='arrays.IntervalArray',
name=''
)
@@ -115,7 +116,7 @@
A new ``IntervalArray`` can be constructed directly from an array-like of
``Interval`` objects:
- >>> pd.IntervalArray([pd.Interval(0, 1), pd.Interval(1, 5)])
+ >>> pd.arrays.IntervalArray([pd.Interval(0, 1), pd.Interval(1, 5)])
IntervalArray([(0, 1], (1, 5]],
closed='right',
dtype='interval[int64]')
@@ -248,8 +249,8 @@ def _from_factorized(cls, values, original):
Examples
--------
- >>> pd.%(klass)s.from_breaks([0, 1, 2, 3])
- %(klass)s([(0, 1], (1, 2], (2, 3]]
+ >>> pd.%(qualname)s.from_breaks([0, 1, 2, 3])
+ %(klass)s([(0, 1], (1, 2], (2, 3]],
closed='right',
dtype='interval[int64]')
"""
@@ -311,7 +312,7 @@ def from_breaks(cls, breaks, closed='right', copy=False, dtype=None):
Examples
--------
>>> %(klass)s.from_arrays([0, 1, 2], [1, 2, 3])
- %(klass)s([(0, 1], (1, 2], (2, 3]]
+ %(klass)s([(0, 1], (1, 2], (2, 3]],
closed='right',
dtype='interval[int64]')
"""
@@ -354,16 +355,16 @@ def from_arrays(cls, left, right, closed='right', copy=False, dtype=None):
Examples
--------
- >>> pd.%(klass)s.from_intervals([pd.Interval(0, 1),
+ >>> pd.%(qualname)s.from_intervals([pd.Interval(0, 1),
... pd.Interval(1, 2)])
- %(klass)s([(0, 1], (1, 2]]
+ %(klass)s([(0, 1], (1, 2]],
closed='right', dtype='interval[int64]')
The generic Index constructor work identically when it infers an array
of all intervals:
>>> pd.Index([pd.Interval(0, 1), pd.Interval(1, 2)])
- %(klass)s([(0, 1], (1, 2]]
+ %(klass)s([(0, 1], (1, 2]],
closed='right', dtype='interval[int64]')
"""
@@ -394,7 +395,7 @@ def from_arrays(cls, left, right, closed='right', copy=False, dtype=None):
Examples
--------
- >>> pd.%(klass)s.from_tuples([(0, 1), (1, 2)])
+ >>> pd.%(qualname)s.from_tuples([(0, 1), (1, 2)])
%(klass)s([(0, 1], (1, 2]],
closed='right', dtype='interval[int64]')
"""
@@ -891,13 +892,13 @@ def closed(self):
Examples
--------
- >>> index = pd.interval_range(0, 3)
- >>> index
- %(klass)s([(0, 1], (1, 2], (2, 3]]
+ >>> index = pd.interval_range(0, 3)
+ >>> index
+ IntervalIndex([(0, 1], (1, 2], (2, 3]],
closed='right',
dtype='interval[int64]')
- >>> index.set_closed('both')
- %(klass)s([[0, 1], [1, 2], [2, 3]]
+ >>> index.set_closed('both')
+ IntervalIndex([[0, 1], [1, 2], [2, 3]],
closed='both',
dtype='interval[int64]')
"""
@@ -1039,7 +1040,7 @@ def repeat(self, repeats, axis=None):
Examples
--------
- >>> intervals = pd.%(klass)s.from_tuples([(0, 1), (1, 3), (2, 4)])
+ >>> intervals = pd.%(qualname)s.from_tuples([(0, 1), (1, 3), (2, 4)])
>>> intervals
%(klass)s([(0, 1], (1, 3], (2, 4]],
closed='right',
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index 2a6044fb0a08b..0210560aaa21f 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -38,6 +38,7 @@
_index_doc_kwargs.update(
dict(klass='IntervalIndex',
+ qualname="IntervalIndex",
target_klass='IntervalIndex or list of Intervals',
name=textwrap.dedent("""\
name : object, optional
@@ -282,10 +283,10 @@ def contains(self, key):
examples="""
Examples
--------
- >>> idx = pd.IntervalIndex.from_arrays([0, np.nan, 2], [1, np.nan, 3])
- >>> idx.to_tuples()
+ >>> idx = pd.IntervalIndex.from_arrays([0, np.nan, 2], [1, np.nan, 3])
+ >>> idx.to_tuples()
Index([(0.0, 1.0), (nan, nan), (2.0, 3.0)], dtype='object')
- >>> idx.to_tuples(na_tuple=False)
+ >>> idx.to_tuples(na_tuple=False)
Index([(0.0, 1.0), nan, (2.0, 3.0)], dtype='object')""",
))
def to_tuples(self, na_tuple=True):
@@ -1201,15 +1202,15 @@ def interval_range(start=None, end=None, periods=None, freq=None,
Numeric ``start`` and ``end`` is supported.
>>> pd.interval_range(start=0, end=5)
- IntervalIndex([(0, 1], (1, 2], (2, 3], (3, 4], (4, 5]]
+ IntervalIndex([(0, 1], (1, 2], (2, 3], (3, 4], (4, 5]],
closed='right', dtype='interval[int64]')
Additionally, datetime-like input is also supported.
>>> pd.interval_range(start=pd.Timestamp('2017-01-01'),
- end=pd.Timestamp('2017-01-04'))
+ ... end=pd.Timestamp('2017-01-04'))
IntervalIndex([(2017-01-01, 2017-01-02], (2017-01-02, 2017-01-03],
- (2017-01-03, 2017-01-04]]
+ (2017-01-03, 2017-01-04]],
closed='right', dtype='interval[datetime64[ns]]')
The ``freq`` parameter specifies the frequency between the left and right.
@@ -1217,23 +1218,23 @@ def interval_range(start=None, end=None, periods=None, freq=None,
numeric ``start`` and ``end``, the frequency must also be numeric.
>>> pd.interval_range(start=0, periods=4, freq=1.5)
- IntervalIndex([(0.0, 1.5], (1.5, 3.0], (3.0, 4.5], (4.5, 6.0]]
+ IntervalIndex([(0.0, 1.5], (1.5, 3.0], (3.0, 4.5], (4.5, 6.0]],
closed='right', dtype='interval[float64]')
Similarly, for datetime-like ``start`` and ``end``, the frequency must be
convertible to a DateOffset.
>>> pd.interval_range(start=pd.Timestamp('2017-01-01'),
- periods=3, freq='MS')
+ ... periods=3, freq='MS')
IntervalIndex([(2017-01-01, 2017-02-01], (2017-02-01, 2017-03-01],
- (2017-03-01, 2017-04-01]]
+ (2017-03-01, 2017-04-01]],
closed='right', dtype='interval[datetime64[ns]]')
Specify ``start``, ``end``, and ``periods``; the frequency is generated
automatically (linearly spaced).
>>> pd.interval_range(start=0, end=6, periods=4)
- IntervalIndex([(0.0, 1.5], (1.5, 3.0], (3.0, 4.5], (4.5, 6.0]]
+ IntervalIndex([(0.0, 1.5], (1.5, 3.0], (3.0, 4.5], (4.5, 6.0]],
closed='right',
dtype='interval[float64]')
@@ -1241,7 +1242,7 @@ def interval_range(start=None, end=None, periods=None, freq=None,
intervals within the ``IntervalIndex`` are closed.
>>> pd.interval_range(end=5, periods=4, closed='both')
- IntervalIndex([[1, 2], [2, 3], [3, 4], [4, 5]]
+ IntervalIndex([[1, 2], [2, 3], [3, 4], [4, 5]],
closed='both', dtype='interval[int64]')
"""
start = com.maybe_box_datetimelike(start)
diff --git a/pandas/tests/api/test_api.py b/pandas/tests/api/test_api.py
index 07cf358c765b3..599ab9a3c5f7c 100644
--- a/pandas/tests/api/test_api.py
+++ b/pandas/tests/api/test_api.py
@@ -46,7 +46,6 @@ class TestPDApi(Base):
'Series', 'SparseArray', 'SparseDataFrame', 'SparseDtype',
'SparseSeries', 'Timedelta',
'TimedeltaIndex', 'Timestamp', 'Interval', 'IntervalIndex',
- 'IntervalArray',
'CategoricalDtype', 'PeriodDtype', 'IntervalDtype',
'DatetimeTZDtype',
'Int8Dtype', 'Int16Dtype', 'Int32Dtype', 'Int64Dtype',
diff --git a/pandas/tests/arrays/test_array.py b/pandas/tests/arrays/test_array.py
index 4a51fd63d963b..9fea1989e46df 100644
--- a/pandas/tests/arrays/test_array.py
+++ b/pandas/tests/arrays/test_array.py
@@ -74,7 +74,7 @@
# Interval
([pd.Interval(1, 2), pd.Interval(3, 4)], 'interval',
- pd.IntervalArray.from_tuples([(1, 2), (3, 4)])),
+ pd.arrays.IntervalArray.from_tuples([(1, 2), (3, 4)])),
# Sparse
([0, 1], 'Sparse[int64]', pd.SparseArray([0, 1], dtype='int64')),
@@ -129,7 +129,7 @@ def test_array_copy():
# interval
([pd.Interval(0, 1), pd.Interval(1, 2)],
- pd.IntervalArray.from_breaks([0, 1, 2])),
+ pd.arrays.IntervalArray.from_breaks([0, 1, 2])),
# datetime
([pd.Timestamp('2000',), pd.Timestamp('2001')],
| cc @jschendel @jreback @jorisvandenbossche
Originally, we put IntervalArray in the top-level since it has the special alternative constructors. But it's a bit inconsistent with all our new arrays, so this is a proposal to remove it from the top-level and just have it in `pandas.arrays`. I don't think the extra typing is too bad, and it's more consistent with our other (new) arrays. | https://api.github.com/repos/pandas-dev/pandas/pulls/24926 | 2019-01-25T12:24:15Z | 2019-01-25T14:55:50Z | 2019-01-25T14:55:50Z | 2019-01-25T14:57:05Z |
ENH: indexing and __getitem__ of dataframe and series accept zerodim integer np.array as int | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
index 686c5ad0165e7..658521803824b 100644
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -19,6 +19,7 @@ including other versions of pandas.
Other Enhancements
^^^^^^^^^^^^^^^^^^
+- Indexing of ``DataFrame`` and ``Series`` now accepts zerodim ``np.ndarray`` (:issue:`24919`)
- :meth:`Timestamp.replace` now supports the ``fold`` argument to disambiguate DST transition times (:issue:`25017`)
- :meth:`DataFrame.at_time` and :meth:`Series.at_time` now support :meth:`datetime.time` objects with timezones (:issue:`24043`)
-
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index a239ff4b4d5db..79f209f9ebc0a 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2838,6 +2838,7 @@ def _ixs(self, i, axis=0):
return result
def __getitem__(self, key):
+ key = lib.item_from_zerodim(key)
key = com.apply_if_callable(key, self)
# shortcut if the key is in columns
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 539da0beaefb4..623a48acdd48b 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -5,6 +5,7 @@
import numpy as np
from pandas._libs.indexing import _NDFrameIndexerBase
+from pandas._libs.lib import item_from_zerodim
import pandas.compat as compat
from pandas.compat import range, zip
from pandas.errors import AbstractMethodError
@@ -1856,6 +1857,7 @@ def _getitem_axis(self, key, axis=None):
if axis is None:
axis = self.axis or 0
+ key = item_from_zerodim(key)
if is_iterator(key):
key = list(key)
@@ -2222,6 +2224,7 @@ def _getitem_axis(self, key, axis=None):
# a single integer
else:
+ key = item_from_zerodim(key)
if not is_integer(key):
raise TypeError("Cannot index by location index with a "
"non-integer key")
diff --git a/pandas/tests/indexing/test_iloc.py b/pandas/tests/indexing/test_iloc.py
index 5c87d553daba3..69ec6454e952a 100644
--- a/pandas/tests/indexing/test_iloc.py
+++ b/pandas/tests/indexing/test_iloc.py
@@ -697,3 +697,16 @@ def test_identity_slice_returns_new_object(self):
# should also be a shallow copy
original_series[:3] = [7, 8, 9]
assert all(sliced_series[:3] == [7, 8, 9])
+
+ def test_indexing_zerodim_np_array(self):
+ # GH24919
+ df = DataFrame([[1, 2], [3, 4]])
+ result = df.iloc[np.array(0)]
+ s = pd.Series([1, 2], name=0)
+ tm.assert_series_equal(result, s)
+
+ def test_series_indexing_zerodim_np_array(self):
+ # GH24919
+ s = Series([1, 2])
+ result = s.iloc[np.array(0)]
+ assert result == 1
diff --git a/pandas/tests/indexing/test_loc.py b/pandas/tests/indexing/test_loc.py
index 3bf4a6bee4af9..29f70929624fc 100644
--- a/pandas/tests/indexing/test_loc.py
+++ b/pandas/tests/indexing/test_loc.py
@@ -778,3 +778,16 @@ def test_loc_setitem_empty_append_raises(self):
msg = "cannot copy sequence with size 2 to array axis with dimension 0"
with pytest.raises(ValueError, match=msg):
df.loc[0:2, 'x'] = data
+
+ def test_indexing_zerodim_np_array(self):
+ # GH24924
+ df = DataFrame([[1, 2], [3, 4]])
+ result = df.loc[np.array(0)]
+ s = pd.Series([1, 2], name=0)
+ tm.assert_series_equal(result, s)
+
+ def test_series_indexing_zerodim_np_array(self):
+ # GH24924
+ s = Series([1, 2])
+ result = s.loc[np.array(0)]
+ assert result == 1
diff --git a/pandas/tests/indexing/test_scalar.py b/pandas/tests/indexing/test_scalar.py
index 6d607ce86c08e..0cd41562541d1 100644
--- a/pandas/tests/indexing/test_scalar.py
+++ b/pandas/tests/indexing/test_scalar.py
@@ -221,3 +221,16 @@ def test_iat_setter_incompatible_assignment(self):
result.iat[0, 0] = None
expected = DataFrame({"a": [None, 1], "b": [4, 5]})
tm.assert_frame_equal(result, expected)
+
+ def test_getitem_zerodim_np_array(self):
+ # GH24924
+ # dataframe __getitem__
+ df = DataFrame([[1, 2], [3, 4]])
+ result = df[np.array(0)]
+ expected = Series([1, 3], name=0)
+ tm.assert_series_equal(result, expected)
+
+ # series __getitem__
+ s = Series([1, 2])
+ result = s[np.array(0)]
+ assert result == 1
| - [ ] closes #24919
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
- Now `df.iloc`, `df.loc`, and `df.__getitem__` accept a zero-dim integer numpy array, as follows:
```python
df.iloc[np.array(0)]
df.loc[np.array(0)]
df[np.array(0)]
```
- I think this change possibly hurts the performance of `df.iloc`, since every key now passes through `item_from_zerodim` | https://api.github.com/repos/pandas-dev/pandas/pulls/24924 | 2019-01-25T06:03:36Z | 2019-02-20T10:28:44Z | 2019-02-20T10:28:44Z | 2019-02-20T10:28:50Z
Add tests for NaT when performing dt.to_period | diff --git a/pandas/tests/series/test_period.py b/pandas/tests/series/test_period.py
index 0a86bb0b67797..7e0feb418e8df 100644
--- a/pandas/tests/series/test_period.py
+++ b/pandas/tests/series/test_period.py
@@ -164,3 +164,12 @@ def test_end_time_timevalues(self, input_vals):
result = s.dt.end_time
expected = s.apply(lambda x: x.end_time)
tm.assert_series_equal(result, expected)
+
+ @pytest.mark.parametrize('input_vals', [
+ ('2001'), ('NaT')
+ ])
+ def test_to_period(self, input_vals):
+ # GH 21205
+ expected = Series([input_vals], dtype='Period[D]')
+ result = Series([input_vals], dtype='datetime64[ns]').dt.to_period('D')
+ tm.assert_series_equal(result, expected)
|
@mroeschke Here is a test covering the functionality discussed in the issue. Please let me know if it needs to be done differently or if more tests should be added.
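For reference, the behavior the parametrized test exercises; a minimal sketch using the same values as the test:

```python
import pandas as pd

s = pd.Series(['NaT'], dtype='datetime64[ns]')
# NaT should survive the conversion instead of raising
result = s.dt.to_period('D')
expected = pd.Series(['NaT'], dtype='Period[D]')
```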
- [x] closes #21205
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/24921 | 2019-01-25T04:28:37Z | 2019-01-26T14:50:50Z | 2019-01-26T14:50:50Z | 2019-01-26T14:50:54Z |
BUG-24915 fix unhashable Series.name in df aggregation
index c02ba88ea7fda..f306c9d9c3744 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -16,8 +16,8 @@
from pandas.core.dtypes.common import (
is_datetime64_ns_dtype, is_datetime64tz_dtype, is_datetimelike,
- is_extension_array_dtype, is_extension_type, is_list_like, is_object_dtype,
- is_scalar, is_timedelta64_ns_dtype)
+ is_extension_array_dtype, is_extension_type, is_hashable, is_list_like,
+ is_object_dtype, is_scalar, is_timedelta64_ns_dtype)
from pandas.core.dtypes.generic import ABCDataFrame, ABCIndexClass, ABCSeries
from pandas.core.dtypes.missing import isna
@@ -546,10 +546,10 @@ def is_any_frame():
try:
result = DataFrame(result)
except ValueError:
-
+ name_attr = getattr(self, 'name', None)
+ name_attr = name_attr if is_hashable(name_attr) else None
# we have a dict of scalars
- result = Series(result,
- name=getattr(self, 'name', None))
+ result = Series(result, name=name_attr)
return result, True
elif is_list_like(arg) and arg not in compat.string_types:
| - [x] closes #24915
- [ ] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
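In isolation, the guard added to `pandas/core/base.py` amounts to the following (wrapped in a helper here purely for illustration; `safe_name` is not part of the patch):

```python
from pandas.core.dtypes.common import is_hashable

def safe_name(obj):
    # an unhashable name (e.g. a list) cannot be used as a Series name,
    # so fall back to None instead of raising
    name_attr = getattr(obj, 'name', None)
    return name_attr if is_hashable(name_attr) else None
```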
Before creating a `pd.Series` and passing `name` as an argument, check whether `name` is hashable; if not, fall back to the default `None`. | https://api.github.com/repos/pandas-dev/pandas/pulls/24920 | 2019-01-25T03:44:19Z | 2019-02-28T00:34:12Z | null | 2019-02-28T00:34:13Z
CLN: fix typo in asv benchmark of non_unique_sorted, which was not sorted | diff --git a/asv_bench/benchmarks/index_object.py b/asv_bench/benchmarks/index_object.py
index f76040921393f..bbe164d4858ab 100644
--- a/asv_bench/benchmarks/index_object.py
+++ b/asv_bench/benchmarks/index_object.py
@@ -138,7 +138,8 @@ def setup(self, dtype):
self.sorted = self.idx.sort_values()
half = N // 2
self.non_unique = self.idx[:half].append(self.idx[:half])
- self.non_unique_sorted = self.sorted[:half].append(self.sorted[:half])
+ self.non_unique_sorted = (self.sorted[:half].append(self.sorted[:half])
+ .sort_values())
self.key = self.sorted[N // 4]
def time_boolean_array(self, dtype):
| Our benchmark for sorted, non-unique indexes is not sorted. This PR sorts the test data, leading to significantly different benchmark results. Before:
```
[ 0.01%] ··· index_object.Indexing.time_get_loc_non_unique_sorted
ok
[ 0.01%] ··· ======== ============
dtype
-------- ------------
String 319±4ms
Float 2.74±0.1ms
Int 1.80±0.1ms
======== ============
```
New:
```
[ 0.01%] ··· index_object.Indexing.time_get_loc_non_unique_sorted ok
[ 0.01%] ··· ======== ==========
dtype
-------- ----------
String 187±20ms
Float 57.2±2μs
Int 16.6±3μs
======== ==========
```
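The underlying mistake, in miniature: appending a sorted half-slice to itself does not produce a sorted index, which is what the old setup did:

```python
import pandas as pd

idx = pd.Index([1, 2, 3, 4])
half = len(idx) // 2

old = idx[:half].append(idx[:half])                # Int64Index([1, 2, 1, 2])
old.is_monotonic_increasing                        # False

new = idx[:half].append(idx[:half]).sort_values()  # Int64Index([1, 1, 2, 2])
new.is_monotonic_increasing                        # True
```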
- [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/24917 | 2019-01-24T23:18:46Z | 2019-01-25T02:26:45Z | 2019-01-25T02:26:45Z | 2019-01-25T02:50:37Z |
BUG-24212 fix regression in #24897 | diff --git a/doc/source/whatsnew/v0.24.1.rst b/doc/source/whatsnew/v0.24.1.rst
index ee4b7ab62b31a..3ac2ed73ea53f 100644
--- a/doc/source/whatsnew/v0.24.1.rst
+++ b/doc/source/whatsnew/v0.24.1.rst
@@ -63,6 +63,9 @@ Bug Fixes
-
-
+**Reshaping**
+
+- Bug in :func:`merge` when merging by index name would sometimes result in an incorrectly numbered index (:issue:`24212`)
**Other**
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index e11847d2b8ce2..1dd19a7c1514e 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -757,13 +757,21 @@ def _get_join_info(self):
if self.right_index:
if len(self.left) > 0:
- join_index = self.left.index.take(left_indexer)
+ join_index = self._create_join_index(self.left.index,
+ self.right.index,
+ left_indexer,
+ right_indexer,
+ how='right')
else:
join_index = self.right.index.take(right_indexer)
left_indexer = np.array([-1] * len(join_index))
elif self.left_index:
if len(self.right) > 0:
- join_index = self.right.index.take(right_indexer)
+ join_index = self._create_join_index(self.right.index,
+ self.left.index,
+ right_indexer,
+ left_indexer,
+ how='left')
else:
join_index = self.left.index.take(left_indexer)
right_indexer = np.array([-1] * len(join_index))
@@ -774,6 +782,39 @@ def _get_join_info(self):
join_index = join_index.astype(object)
return join_index, left_indexer, right_indexer
+ def _create_join_index(self, index, other_index, indexer,
+ other_indexer, how='left'):
+ """
+ Create a join index by rearranging one index to match another
+
+ Parameters
+ ----------
+ index: Index being rearranged
+ other_index: Index used to supply values not found in index
+ indexer: how to rearrange index
+ how: replacement is only necessary if indexer based on other_index
+
+ Returns
+ -------
+ join_index
+ """
+ join_index = index.take(indexer)
+ if (self.how in (how, 'outer') and
+ not isinstance(other_index, MultiIndex)):
+ # if final index requires values in other_index but not target
+ # index, indexer may hold missing (-1) values, causing Index.take
+ # to take the final value in target index
+ mask = indexer == -1
+ if np.any(mask):
+ # if values missing (-1) from target index,
+ # take from other_index instead
+ join_list = join_index.to_numpy()
+ other_list = other_index.take(other_indexer).to_numpy()
+ join_list[mask] = other_list[mask]
+ join_index = Index(join_list, dtype=join_index.dtype,
+ name=join_index.name)
+ return join_index
+
def _get_merge_keys(self):
"""
Note: has side effects (copy/delete key columns)
diff --git a/pandas/tests/reshape/merge/test_merge.py b/pandas/tests/reshape/merge/test_merge.py
index f0a3ddc8ce8a4..c17c301968269 100644
--- a/pandas/tests/reshape/merge/test_merge.py
+++ b/pandas/tests/reshape/merge/test_merge.py
@@ -939,25 +939,22 @@ def test_merge_two_empty_df_no_division_error(self):
with np.errstate(divide='raise'):
merge(a, a, on=('a', 'b'))
- @pytest.mark.parametrize('how', ['left', 'outer'])
- @pytest.mark.xfail(reason="GH-24897")
+ @pytest.mark.parametrize('how', ['right', 'outer'])
def test_merge_on_index_with_more_values(self, how):
# GH 24212
- # pd.merge gets [-1, -1, 0, 1] as right_indexer, ensure that -1 is
- # interpreted as a missing value instead of the last element
- df1 = pd.DataFrame([[1, 2], [2, 4], [3, 6], [4, 8]],
- columns=['a', 'b'])
- df2 = pd.DataFrame([[3, 30], [4, 40]],
- columns=['a', 'c'])
- df1.set_index('a', drop=False, inplace=True)
- df2.set_index('a', inplace=True)
- result = pd.merge(df1, df2, left_index=True, right_on='a', how=how)
- expected = pd.DataFrame([[1, 2, np.nan],
- [2, 4, np.nan],
- [3, 6, 30.0],
- [4, 8, 40.0]],
- columns=['a', 'b', 'c'])
- expected.set_index('a', drop=False, inplace=True)
+ # pd.merge gets [0, 1, 2, -1, -1, -1] as left_indexer, ensure that
+ # -1 is interpreted as a missing value instead of the last element
+ df1 = pd.DataFrame({'a': [1, 2, 3], 'key': [0, 2, 2]})
+ df2 = pd.DataFrame({'b': [1, 2, 3, 4, 5]})
+ result = df1.merge(df2, left_on='key', right_index=True, how=how)
+ expected = pd.DataFrame([[1.0, 0, 1],
+ [2.0, 2, 3],
+ [3.0, 2, 3],
+ [np.nan, 1, 2],
+ [np.nan, 3, 4],
+ [np.nan, 4, 5]],
+ columns=['a', 'key', 'b'])
+ expected.set_index(Int64Index([0, 1, 2, 1, 3, 4]), inplace=True)
assert_frame_equal(result, expected)
def test_merge_right_index_right(self):
| - [ ] closes #24897
- [x] tests added / passed
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
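A minimal reproduction, adapted from the test added in this PR:

```python
import pandas as pd

df1 = pd.DataFrame({'a': [1, 2, 3], 'key': [0, 2, 2]})
df2 = pd.DataFrame({'b': [1, 2, 3, 4, 5]})

# left_indexer comes back as [0, 1, 2, -1, -1, -1]; previously the -1
# entries were treated as "take the last element" when building the
# join index instead of as missing values
result = df1.merge(df2, left_on='key', right_index=True, how='right')
```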
Use an extra `take` to ensure `other_index` is arranged correctly before its values are used as replacements (in the original issue, `other_index` happened to be taken from [0, 1, 2, 3] and so was left unchanged, which is why this step was overlooked) | https://api.github.com/repos/pandas-dev/pandas/pulls/24916 | 2019-01-24T21:28:14Z | 2019-01-26T14:58:08Z | 2019-01-26T14:58:08Z | 2019-01-30T16:58:45Z
TST/REF: collect DataFrame reduction tests | diff --git a/pandas/tests/frame/test_analytics.py b/pandas/tests/frame/test_analytics.py
index 386e5f57617cf..2d3431965bbf6 100644
--- a/pandas/tests/frame/test_analytics.py
+++ b/pandas/tests/frame/test_analytics.py
@@ -231,9 +231,9 @@ def assert_bool_op_api(opname, bool_frame_with_na, float_string_frame,
getattr(bool_frame_with_na, opname)(axis=1, bool_only=False)
-class TestDataFrameAnalytics():
+class TestDataFrameAnalytics(object):
- # ---------------------------------------------------------------------=
+ # ---------------------------------------------------------------------
# Correlation and covariance
@td.skip_if_no_scipy
@@ -502,6 +502,9 @@ def test_corrwith_kendall(self):
expected = Series(np.ones(len(result)))
tm.assert_series_equal(result, expected)
+ # ---------------------------------------------------------------------
+ # Describe
+
def test_bool_describe_in_mixed_frame(self):
df = DataFrame({
'string_data': ['a', 'b', 'c', 'd', 'e'],
@@ -693,82 +696,113 @@ def test_describe_tz_values(self, tz_naive_fixture):
result = df.describe(include='all')
tm.assert_frame_equal(result, expected)
- def test_reduce_mixed_frame(self):
- # GH 6806
- df = DataFrame({
- 'bool_data': [True, True, False, False, False],
- 'int_data': [10, 20, 30, 40, 50],
- 'string_data': ['a', 'b', 'c', 'd', 'e'],
- })
- df.reindex(columns=['bool_data', 'int_data', 'string_data'])
- test = df.sum(axis=0)
- tm.assert_numpy_array_equal(test.values,
- np.array([2, 150, 'abcde'], dtype=object))
- tm.assert_series_equal(test, df.T.sum(axis=1))
+ # ---------------------------------------------------------------------
+ # Reductions
- def test_count(self, float_frame_with_na, float_frame, float_string_frame):
- f = lambda s: notna(s).sum()
- assert_stat_op_calc('count', f, float_frame_with_na, has_skipna=False,
- check_dtype=False, check_dates=True)
+ def test_stat_op_api(self, float_frame, float_string_frame):
assert_stat_op_api('count', float_frame, float_string_frame,
has_numeric_only=True)
+ assert_stat_op_api('sum', float_frame, float_string_frame,
+ has_numeric_only=True)
- # corner case
- frame = DataFrame()
- ct1 = frame.count(1)
- assert isinstance(ct1, Series)
+ assert_stat_op_api('nunique', float_frame, float_string_frame)
+ assert_stat_op_api('mean', float_frame, float_string_frame)
+ assert_stat_op_api('product', float_frame, float_string_frame)
+ assert_stat_op_api('median', float_frame, float_string_frame)
+ assert_stat_op_api('min', float_frame, float_string_frame)
+ assert_stat_op_api('max', float_frame, float_string_frame)
+ assert_stat_op_api('mad', float_frame, float_string_frame)
+ assert_stat_op_api('var', float_frame, float_string_frame)
+ assert_stat_op_api('std', float_frame, float_string_frame)
+ assert_stat_op_api('sem', float_frame, float_string_frame)
+ assert_stat_op_api('median', float_frame, float_string_frame)
- ct2 = frame.count(0)
- assert isinstance(ct2, Series)
+ try:
+ from scipy.stats import skew, kurtosis # noqa:F401
+ assert_stat_op_api('skew', float_frame, float_string_frame)
+ assert_stat_op_api('kurt', float_frame, float_string_frame)
+ except ImportError:
+ pass
- # GH 423
- df = DataFrame(index=lrange(10))
- result = df.count(1)
- expected = Series(0, index=df.index)
- tm.assert_series_equal(result, expected)
+ def test_stat_op_calc(self, float_frame_with_na, mixed_float_frame):
- df = DataFrame(columns=lrange(10))
- result = df.count(0)
- expected = Series(0, index=df.columns)
- tm.assert_series_equal(result, expected)
+ def count(s):
+ return notna(s).sum()
- df = DataFrame()
- result = df.count()
- expected = Series(0, index=[])
- tm.assert_series_equal(result, expected)
+ def nunique(s):
+ return len(algorithms.unique1d(s.dropna()))
- def test_nunique(self, float_frame_with_na, float_frame,
- float_string_frame):
- f = lambda s: len(algorithms.unique1d(s.dropna()))
- assert_stat_op_calc('nunique', f, float_frame_with_na,
+ def mad(x):
+ return np.abs(x - x.mean()).mean()
+
+ def var(x):
+ return np.var(x, ddof=1)
+
+ def std(x):
+ return np.std(x, ddof=1)
+
+ def sem(x):
+ return np.std(x, ddof=1) / np.sqrt(len(x))
+
+ def skewness(x):
+ from scipy.stats import skew # noqa:F811
+ if len(x) < 3:
+ return np.nan
+ return skew(x, bias=False)
+
+ def kurt(x):
+ from scipy.stats import kurtosis # noqa:F811
+ if len(x) < 4:
+ return np.nan
+ return kurtosis(x, bias=False)
+
+ assert_stat_op_calc('nunique', nunique, float_frame_with_na,
has_skipna=False, check_dtype=False,
check_dates=True)
- assert_stat_op_api('nunique', float_frame, float_string_frame)
- df = DataFrame({'A': [1, 1, 1],
- 'B': [1, 2, 3],
- 'C': [1, np.nan, 3]})
- tm.assert_series_equal(df.nunique(), Series({'A': 1, 'B': 3, 'C': 2}))
- tm.assert_series_equal(df.nunique(dropna=False),
- Series({'A': 1, 'B': 3, 'C': 3}))
- tm.assert_series_equal(df.nunique(axis=1), Series({0: 1, 1: 2, 2: 2}))
- tm.assert_series_equal(df.nunique(axis=1, dropna=False),
- Series({0: 1, 1: 3, 2: 2}))
-
- def test_sum(self, float_frame_with_na, mixed_float_frame,
- float_frame, float_string_frame):
- assert_stat_op_api('sum', float_frame, float_string_frame,
- has_numeric_only=True)
- assert_stat_op_calc('sum', np.sum, float_frame_with_na,
- skipna_alternative=np.nansum)
# mixed types (with upcasting happening)
assert_stat_op_calc('sum', np.sum, mixed_float_frame.astype('float32'),
check_dtype=False, check_less_precise=True)
+ assert_stat_op_calc('sum', np.sum, float_frame_with_na,
+ skipna_alternative=np.nansum)
+ assert_stat_op_calc('mean', np.mean, float_frame_with_na,
+ check_dates=True)
+ assert_stat_op_calc('product', np.prod, float_frame_with_na)
+
+ assert_stat_op_calc('mad', mad, float_frame_with_na)
+ assert_stat_op_calc('var', var, float_frame_with_na)
+ assert_stat_op_calc('std', std, float_frame_with_na)
+ assert_stat_op_calc('sem', sem, float_frame_with_na)
+
+ assert_stat_op_calc('count', count, float_frame_with_na,
+ has_skipna=False, check_dtype=False,
+ check_dates=True)
+
+ try:
+ from scipy import skew, kurtosis # noqa:F401
+ assert_stat_op_calc('skew', skewness, float_frame_with_na)
+ assert_stat_op_calc('kurt', kurt, float_frame_with_na)
+ except ImportError:
+ pass
+
+ # TODO: Ensure warning isn't emitted in the first place
+ @pytest.mark.filterwarnings("ignore:All-NaN:RuntimeWarning")
+ def test_median(self, float_frame_with_na, int_frame):
+ def wrapper(x):
+ if isna(x).any():
+ return np.nan
+ return np.median(x)
+
+ assert_stat_op_calc('median', wrapper, float_frame_with_na,
+ check_dates=True)
+ assert_stat_op_calc('median', wrapper, int_frame, check_dtype=False,
+ check_dates=True)
+
@pytest.mark.parametrize('method', ['sum', 'mean', 'prod', 'var',
'std', 'skew', 'min', 'max'])
def test_stat_operators_attempt_obj_array(self, method):
- # GH 676
+ # GH#676
data = {
'a': [-0.00049987540199591344, -0.0016467257772919831,
0.00067695870775883013],
@@ -789,10 +823,44 @@ def test_stat_operators_attempt_obj_array(self, method):
if method in ['sum', 'prod']:
tm.assert_series_equal(result, expected)
- def test_mean(self, float_frame_with_na, float_frame, float_string_frame):
- assert_stat_op_calc('mean', np.mean, float_frame_with_na,
- check_dates=True)
- assert_stat_op_api('mean', float_frame, float_string_frame)
+ @pytest.mark.parametrize('op', ['mean', 'std', 'var',
+ 'skew', 'kurt', 'sem'])
+ def test_mixed_ops(self, op):
+ # GH#16116
+ df = DataFrame({'int': [1, 2, 3, 4],
+ 'float': [1., 2., 3., 4.],
+ 'str': ['a', 'b', 'c', 'd']})
+
+ result = getattr(df, op)()
+ assert len(result) == 2
+
+ with pd.option_context('use_bottleneck', False):
+ result = getattr(df, op)()
+ assert len(result) == 2
+
+ def test_reduce_mixed_frame(self):
+ # GH 6806
+ df = DataFrame({
+ 'bool_data': [True, True, False, False, False],
+ 'int_data': [10, 20, 30, 40, 50],
+ 'string_data': ['a', 'b', 'c', 'd', 'e'],
+ })
+ df.reindex(columns=['bool_data', 'int_data', 'string_data'])
+ test = df.sum(axis=0)
+ tm.assert_numpy_array_equal(test.values,
+ np.array([2, 150, 'abcde'], dtype=object))
+ tm.assert_series_equal(test, df.T.sum(axis=1))
+
+ def test_nunique(self):
+ df = DataFrame({'A': [1, 1, 1],
+ 'B': [1, 2, 3],
+ 'C': [1, np.nan, 3]})
+ tm.assert_series_equal(df.nunique(), Series({'A': 1, 'B': 3, 'C': 2}))
+ tm.assert_series_equal(df.nunique(dropna=False),
+ Series({'A': 1, 'B': 3, 'C': 3}))
+ tm.assert_series_equal(df.nunique(axis=1), Series({0: 1, 1: 2, 2: 2}))
+ tm.assert_series_equal(df.nunique(axis=1, dropna=False),
+ Series({0: 1, 1: 3, 2: 2}))
@pytest.mark.parametrize('tz', [None, 'UTC'])
def test_mean_mixed_datetime_numeric(self, tz):
@@ -813,103 +881,7 @@ def test_mean_excludeds_datetimes(self, tz):
expected = pd.Series()
tm.assert_series_equal(result, expected)
- def test_product(self, float_frame_with_na, float_frame,
- float_string_frame):
- assert_stat_op_calc('product', np.prod, float_frame_with_na)
- assert_stat_op_api('product', float_frame, float_string_frame)
-
- # TODO: Ensure warning isn't emitted in the first place
- @pytest.mark.filterwarnings("ignore:All-NaN:RuntimeWarning")
- def test_median(self, float_frame_with_na, float_frame,
- float_string_frame):
- def wrapper(x):
- if isna(x).any():
- return np.nan
- return np.median(x)
-
- assert_stat_op_calc('median', wrapper, float_frame_with_na,
- check_dates=True)
- assert_stat_op_api('median', float_frame, float_string_frame)
-
- def test_min(self, float_frame_with_na, int_frame,
- float_frame, float_string_frame):
- with warnings.catch_warnings(record=True):
- warnings.simplefilter("ignore", RuntimeWarning)
- assert_stat_op_calc('min', np.min, float_frame_with_na,
- check_dates=True)
- assert_stat_op_calc('min', np.min, int_frame)
- assert_stat_op_api('min', float_frame, float_string_frame)
-
- def test_cummin(self, datetime_frame):
- datetime_frame.loc[5:10, 0] = np.nan
- datetime_frame.loc[10:15, 1] = np.nan
- datetime_frame.loc[15:, 2] = np.nan
-
- # axis = 0
- cummin = datetime_frame.cummin()
- expected = datetime_frame.apply(Series.cummin)
- tm.assert_frame_equal(cummin, expected)
-
- # axis = 1
- cummin = datetime_frame.cummin(axis=1)
- expected = datetime_frame.apply(Series.cummin, axis=1)
- tm.assert_frame_equal(cummin, expected)
-
- # it works
- df = DataFrame({'A': np.arange(20)}, index=np.arange(20))
- result = df.cummin() # noqa
-
- # fix issue
- cummin_xs = datetime_frame.cummin(axis=1)
- assert np.shape(cummin_xs) == np.shape(datetime_frame)
-
- def test_cummax(self, datetime_frame):
- datetime_frame.loc[5:10, 0] = np.nan
- datetime_frame.loc[10:15, 1] = np.nan
- datetime_frame.loc[15:, 2] = np.nan
-
- # axis = 0
- cummax = datetime_frame.cummax()
- expected = datetime_frame.apply(Series.cummax)
- tm.assert_frame_equal(cummax, expected)
-
- # axis = 1
- cummax = datetime_frame.cummax(axis=1)
- expected = datetime_frame.apply(Series.cummax, axis=1)
- tm.assert_frame_equal(cummax, expected)
-
- # it works
- df = DataFrame({'A': np.arange(20)}, index=np.arange(20))
- result = df.cummax() # noqa
-
- # fix issue
- cummax_xs = datetime_frame.cummax(axis=1)
- assert np.shape(cummax_xs) == np.shape(datetime_frame)
-
- def test_max(self, float_frame_with_na, int_frame,
- float_frame, float_string_frame):
- with warnings.catch_warnings(record=True):
- warnings.simplefilter("ignore", RuntimeWarning)
- assert_stat_op_calc('max', np.max, float_frame_with_na,
- check_dates=True)
- assert_stat_op_calc('max', np.max, int_frame)
- assert_stat_op_api('max', float_frame, float_string_frame)
-
- def test_mad(self, float_frame_with_na, float_frame, float_string_frame):
- f = lambda x: np.abs(x - x.mean()).mean()
- assert_stat_op_calc('mad', f, float_frame_with_na)
- assert_stat_op_api('mad', float_frame, float_string_frame)
-
- def test_var_std(self, float_frame_with_na, datetime_frame, float_frame,
- float_string_frame):
- alt = lambda x: np.var(x, ddof=1)
- assert_stat_op_calc('var', alt, float_frame_with_na)
- assert_stat_op_api('var', float_frame, float_string_frame)
-
- alt = lambda x: np.std(x, ddof=1)
- assert_stat_op_calc('std', alt, float_frame_with_na)
- assert_stat_op_api('std', float_frame, float_string_frame)
-
+ def test_var_std(self, datetime_frame):
result = datetime_frame.std(ddof=4)
expected = datetime_frame.apply(lambda x: x.std(ddof=4))
tm.assert_almost_equal(result, expected)
@@ -952,79 +924,7 @@ def test_numeric_only_flag(self, meth):
pytest.raises(TypeError, lambda: getattr(df2, meth)(
axis=1, numeric_only=False))
- @pytest.mark.parametrize('op', ['mean', 'std', 'var',
- 'skew', 'kurt', 'sem'])
- def test_mixed_ops(self, op):
- # GH 16116
- df = DataFrame({'int': [1, 2, 3, 4],
- 'float': [1., 2., 3., 4.],
- 'str': ['a', 'b', 'c', 'd']})
-
- result = getattr(df, op)()
- assert len(result) == 2
-
- with pd.option_context('use_bottleneck', False):
- result = getattr(df, op)()
- assert len(result) == 2
-
- def test_cumsum(self, datetime_frame):
- datetime_frame.loc[5:10, 0] = np.nan
- datetime_frame.loc[10:15, 1] = np.nan
- datetime_frame.loc[15:, 2] = np.nan
-
- # axis = 0
- cumsum = datetime_frame.cumsum()
- expected = datetime_frame.apply(Series.cumsum)
- tm.assert_frame_equal(cumsum, expected)
-
- # axis = 1
- cumsum = datetime_frame.cumsum(axis=1)
- expected = datetime_frame.apply(Series.cumsum, axis=1)
- tm.assert_frame_equal(cumsum, expected)
-
- # works
- df = DataFrame({'A': np.arange(20)}, index=np.arange(20))
- result = df.cumsum() # noqa
-
- # fix issue
- cumsum_xs = datetime_frame.cumsum(axis=1)
- assert np.shape(cumsum_xs) == np.shape(datetime_frame)
-
- def test_cumprod(self, datetime_frame):
- datetime_frame.loc[5:10, 0] = np.nan
- datetime_frame.loc[10:15, 1] = np.nan
- datetime_frame.loc[15:, 2] = np.nan
-
- # axis = 0
- cumprod = datetime_frame.cumprod()
- expected = datetime_frame.apply(Series.cumprod)
- tm.assert_frame_equal(cumprod, expected)
-
- # axis = 1
- cumprod = datetime_frame.cumprod(axis=1)
- expected = datetime_frame.apply(Series.cumprod, axis=1)
- tm.assert_frame_equal(cumprod, expected)
-
- # fix issue
- cumprod_xs = datetime_frame.cumprod(axis=1)
- assert np.shape(cumprod_xs) == np.shape(datetime_frame)
-
- # ints
- df = datetime_frame.fillna(0).astype(int)
- df.cumprod(0)
- df.cumprod(1)
-
- # ints32
- df = datetime_frame.fillna(0).astype(np.int32)
- df.cumprod(0)
- df.cumprod(1)
-
- def test_sem(self, float_frame_with_na, datetime_frame,
- float_frame, float_string_frame):
- alt = lambda x: np.std(x, ddof=1) / np.sqrt(len(x))
- assert_stat_op_calc('sem', alt, float_frame_with_na)
- assert_stat_op_api('sem', float_frame, float_string_frame)
-
+ def test_sem(self, datetime_frame):
result = datetime_frame.sem(ddof=4)
expected = datetime_frame.apply(
lambda x: x.std(ddof=4) / np.sqrt(len(x)))
@@ -1039,29 +939,7 @@ def test_sem(self, float_frame_with_na, datetime_frame,
assert not (result < 0).any()
@td.skip_if_no_scipy
- def test_skew(self, float_frame_with_na, float_frame, float_string_frame):
- from scipy.stats import skew
-
- def alt(x):
- if len(x) < 3:
- return np.nan
- return skew(x, bias=False)
-
- assert_stat_op_calc('skew', alt, float_frame_with_na)
- assert_stat_op_api('skew', float_frame, float_string_frame)
-
- @td.skip_if_no_scipy
- def test_kurt(self, float_frame_with_na, float_frame, float_string_frame):
- from scipy.stats import kurtosis
-
- def alt(x):
- if len(x) < 4:
- return np.nan
- return kurtosis(x, bias=False)
-
- assert_stat_op_calc('kurt', alt, float_frame_with_na)
- assert_stat_op_api('kurt', float_frame, float_string_frame)
-
+ def test_kurt(self):
index = MultiIndex(levels=[['bar'], ['one', 'two', 'three'], [0, 1]],
codes=[[0, 0, 0, 0, 0, 0],
[0, 1, 2, 0, 1, 2],
@@ -1323,20 +1201,146 @@ def test_stats_mixed_type(self, float_string_frame):
float_string_frame.mean(1)
float_string_frame.skew(1)
- # TODO: Ensure warning isn't emitted in the first place
- @pytest.mark.filterwarnings("ignore:All-NaN:RuntimeWarning")
- def test_median_corner(self, int_frame, float_frame, float_string_frame):
- def wrapper(x):
- if isna(x).any():
- return np.nan
- return np.median(x)
+ def test_sum_bools(self):
+ df = DataFrame(index=lrange(1), columns=lrange(10))
+ bools = isna(df)
+ assert bools.sum(axis=1)[0] == 10
- assert_stat_op_calc('median', wrapper, int_frame, check_dtype=False,
- check_dates=True)
- assert_stat_op_api('median', float_frame, float_string_frame)
+ # ---------------------------------------------------------------------
+ # Cumulative Reductions - cumsum, cummax, ...
+
+ def test_cumsum_corner(self):
+ dm = DataFrame(np.arange(20).reshape(4, 5),
+ index=lrange(4), columns=lrange(5))
+ # ?(wesm)
+ result = dm.cumsum() # noqa
+
+ def test_cumsum(self, datetime_frame):
+ datetime_frame.loc[5:10, 0] = np.nan
+ datetime_frame.loc[10:15, 1] = np.nan
+ datetime_frame.loc[15:, 2] = np.nan
+
+ # axis = 0
+ cumsum = datetime_frame.cumsum()
+ expected = datetime_frame.apply(Series.cumsum)
+ tm.assert_frame_equal(cumsum, expected)
+
+ # axis = 1
+ cumsum = datetime_frame.cumsum(axis=1)
+ expected = datetime_frame.apply(Series.cumsum, axis=1)
+ tm.assert_frame_equal(cumsum, expected)
+
+ # works
+ df = DataFrame({'A': np.arange(20)}, index=np.arange(20))
+ result = df.cumsum() # noqa
+
+ # fix issue
+ cumsum_xs = datetime_frame.cumsum(axis=1)
+ assert np.shape(cumsum_xs) == np.shape(datetime_frame)
+
+ def test_cumprod(self, datetime_frame):
+ datetime_frame.loc[5:10, 0] = np.nan
+ datetime_frame.loc[10:15, 1] = np.nan
+ datetime_frame.loc[15:, 2] = np.nan
+
+ # axis = 0
+ cumprod = datetime_frame.cumprod()
+ expected = datetime_frame.apply(Series.cumprod)
+ tm.assert_frame_equal(cumprod, expected)
+
+ # axis = 1
+ cumprod = datetime_frame.cumprod(axis=1)
+ expected = datetime_frame.apply(Series.cumprod, axis=1)
+ tm.assert_frame_equal(cumprod, expected)
+
+ # fix issue
+ cumprod_xs = datetime_frame.cumprod(axis=1)
+ assert np.shape(cumprod_xs) == np.shape(datetime_frame)
+ # ints
+ df = datetime_frame.fillna(0).astype(int)
+ df.cumprod(0)
+ df.cumprod(1)
+
+ # ints32
+ df = datetime_frame.fillna(0).astype(np.int32)
+ df.cumprod(0)
+ df.cumprod(1)
+
+ def test_cummin(self, datetime_frame):
+ datetime_frame.loc[5:10, 0] = np.nan
+ datetime_frame.loc[10:15, 1] = np.nan
+ datetime_frame.loc[15:, 2] = np.nan
+
+ # axis = 0
+ cummin = datetime_frame.cummin()
+ expected = datetime_frame.apply(Series.cummin)
+ tm.assert_frame_equal(cummin, expected)
+
+ # axis = 1
+ cummin = datetime_frame.cummin(axis=1)
+ expected = datetime_frame.apply(Series.cummin, axis=1)
+ tm.assert_frame_equal(cummin, expected)
+
+ # it works
+ df = DataFrame({'A': np.arange(20)}, index=np.arange(20))
+ result = df.cummin() # noqa
+
+ # fix issue
+ cummin_xs = datetime_frame.cummin(axis=1)
+ assert np.shape(cummin_xs) == np.shape(datetime_frame)
+
+ def test_cummax(self, datetime_frame):
+ datetime_frame.loc[5:10, 0] = np.nan
+ datetime_frame.loc[10:15, 1] = np.nan
+ datetime_frame.loc[15:, 2] = np.nan
+
+ # axis = 0
+ cummax = datetime_frame.cummax()
+ expected = datetime_frame.apply(Series.cummax)
+ tm.assert_frame_equal(cummax, expected)
+
+ # axis = 1
+ cummax = datetime_frame.cummax(axis=1)
+ expected = datetime_frame.apply(Series.cummax, axis=1)
+ tm.assert_frame_equal(cummax, expected)
+
+ # it works
+ df = DataFrame({'A': np.arange(20)}, index=np.arange(20))
+ result = df.cummax() # noqa
+
+ # fix issue
+ cummax_xs = datetime_frame.cummax(axis=1)
+ assert np.shape(cummax_xs) == np.shape(datetime_frame)
+
+ # ---------------------------------------------------------------------
# Miscellanea
+ def test_count(self):
+ # corner case
+ frame = DataFrame()
+ ct1 = frame.count(1)
+ assert isinstance(ct1, Series)
+
+ ct2 = frame.count(0)
+ assert isinstance(ct2, Series)
+
+ # GH#423
+ df = DataFrame(index=lrange(10))
+ result = df.count(1)
+ expected = Series(0, index=df.index)
+ tm.assert_series_equal(result, expected)
+
+ df = DataFrame(columns=lrange(10))
+ result = df.count(0)
+ expected = Series(0, index=df.columns)
+ tm.assert_series_equal(result, expected)
+
+ df = DataFrame()
+ result = df.count()
+ expected = Series(0, index=[])
+ tm.assert_series_equal(result, expected)
+
def test_count_objects(self, float_string_frame):
dm = DataFrame(float_string_frame._series)
df = DataFrame(float_string_frame._series)
@@ -1344,17 +1348,23 @@ def test_count_objects(self, float_string_frame):
tm.assert_series_equal(dm.count(), df.count())
tm.assert_series_equal(dm.count(1), df.count(1))
- def test_cumsum_corner(self):
- dm = DataFrame(np.arange(20).reshape(4, 5),
- index=lrange(4), columns=lrange(5))
- # ?(wesm)
- result = dm.cumsum() # noqa
+ def test_pct_change(self):
+ # GH#11150
+ pnl = DataFrame([np.arange(0, 40, 10),
+ np.arange(0, 40, 10),
+ np.arange(0, 40, 10)]).astype(np.float64)
+ pnl.iat[1, 0] = np.nan
+ pnl.iat[1, 1] = np.nan
+ pnl.iat[2, 3] = 60
- def test_sum_bools(self):
- df = DataFrame(index=lrange(1), columns=lrange(10))
- bools = isna(df)
- assert bools.sum(axis=1)[0] == 10
+ for axis in range(2):
+ expected = pnl.ffill(axis=axis) / pnl.ffill(axis=axis).shift(
+ axis=axis) - 1
+ result = pnl.pct_change(axis=axis, fill_method='pad')
+ tm.assert_frame_equal(result, expected)
+
+ # ----------------------------------------------------------------------
# Index of max / min
def test_idxmin(self, float_frame, int_frame):
@@ -1680,7 +1690,9 @@ def test_isin_empty_datetimelike(self):
result = df1_td.isin(df3)
tm.assert_frame_equal(result, expected)
+ # ---------------------------------------------------------------------
# Rounding
+
def test_round(self):
# GH 2665
@@ -1868,22 +1880,9 @@ def test_round_nonunique_categorical(self):
tm.assert_frame_equal(result, expected)
- def test_pct_change(self):
- # GH 11150
- pnl = DataFrame([np.arange(0, 40, 10), np.arange(0, 40, 10), np.arange(
- 0, 40, 10)]).astype(np.float64)
- pnl.iat[1, 0] = np.nan
- pnl.iat[1, 1] = np.nan
- pnl.iat[2, 3] = 60
-
- for axis in range(2):
- expected = pnl.ffill(axis=axis) / pnl.ffill(axis=axis).shift(
- axis=axis) - 1
- result = pnl.pct_change(axis=axis, fill_method='pad')
-
- tm.assert_frame_equal(result, expected)
-
+ # ---------------------------------------------------------------------
# Clip
+
def test_clip(self, float_frame):
median = float_frame.median().median()
original = float_frame.copy()
@@ -2056,7 +2055,9 @@ def test_clip_with_na_args(self, float_frame):
'col_2': [np.nan, np.nan, np.nan]})
tm.assert_frame_equal(result, expected)
+ # ---------------------------------------------------------------------
# Matrix-like
+
def test_dot(self):
a = DataFrame(np.random.randn(3, 4), index=['a', 'b', 'c'],
columns=['p', 'q', 'r', 's'])
| NB: this retains all extant fixture usage.
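As one illustration of the follow-up parametrization described in the next paragraph (a sketch only, not part of this diff):

```python
import numpy as np
import pytest

# hypothetical follow-up: fold the near-identical min/max tests into one
@pytest.mark.filterwarnings("ignore::RuntimeWarning")
@pytest.mark.parametrize('opname, alt', [('min', np.min), ('max', np.max)])
def test_min_max(opname, alt, float_frame_with_na, int_frame):
    assert_stat_op_calc(opname, alt, float_frame_with_na, check_dates=True)
    assert_stat_op_calc(opname, alt, int_frame)
```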
Collect tests by the method being tested. Collect all calls to assert_stat_op_api in one test (which can be usefully parametrized/fixturized in a follow-up), removing a ton of noise from the other test function signatures. As a result, it becomes much easier to see places where we can _usefully_ parametrize/fixturize tests in this module; e.g. test_min and test_max are nearly identical, ditto test_cumsum/test_cumprod/test_cummin/test_cummax, ditto test_idxmin/test_idxmax... | https://api.github.com/repos/pandas-dev/pandas/pulls/24914 | 2019-01-24T19:32:23Z | 2019-02-04T13:35:36Z | 2019-02-04T13:35:36Z | 2019-02-04T16:11:41Z
CLN: reduce overhead in setup for categoricals benchmarks in asv | diff --git a/asv_bench/benchmarks/categoricals.py b/asv_bench/benchmarks/categoricals.py
index e5dab0cb066aa..4b5b2848f7e0f 100644
--- a/asv_bench/benchmarks/categoricals.py
+++ b/asv_bench/benchmarks/categoricals.py
@@ -223,12 +223,19 @@ class CategoricalSlicing(object):
def setup(self, index):
N = 10**6
- values = list('a' * N + 'b' * N + 'c' * N)
- indices = {
- 'monotonic_incr': pd.Categorical(values),
- 'monotonic_decr': pd.Categorical(reversed(values)),
- 'non_monotonic': pd.Categorical(list('abc' * N))}
- self.data = indices[index]
+ categories = ['a', 'b', 'c']
+ values = [0] * N + [1] * N + [2] * N
+ if index == 'monotonic_incr':
+ self.data = pd.Categorical.from_codes(values,
+ categories=categories)
+ elif index == 'monotonic_decr':
+ self.data = pd.Categorical.from_codes(list(reversed(values)),
+ categories=categories)
+ elif index == 'non_monotonic':
+ self.data = pd.Categorical.from_codes([0, 1, 2] * N,
+ categories=categories)
+ else:
+ raise ValueError('Invalid index param: {}'.format(index))
self.scalar = 10000
self.list = list(range(10000))
| The setup functions for the `categoricals.CategoricalSlicing` suite have ~40s of overhead that can be eliminated by two approaches:
- Use `pd.Categorical.from_codes()` instead of `pd.Categorical()`
- ~30s of the speedup comes from this
- Replace the dict of all cases with an `if`/`else` block to minimize unnecessary construction
- Another ~10s speedup from this
Before:
```
$ time asv dev -b CategoricalSlicing
[...]
real 1m3.747s
user 0m46.250s
sys 0m15.141s
```
After:
```
$ time asv dev -b CategoricalSlicing
real 0m23.893s
user 0m14.578s
sys 0m6.578s
```
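The reason `from_codes` is so much cheaper, in miniature: the plain constructor has to factorize three million Python strings, while `from_codes` takes the integer codes as given (a sketch, not part of the diff):

```python
import numpy as np
import pandas as pd

N = 10**6

# slow path: builds and factorizes a 3M-element list of strings
slow = pd.Categorical(['a'] * N + ['b'] * N + ['c'] * N)

# fast path: the codes are used directly, no factorization pass
codes = np.repeat([0, 1, 2], N)
fast = pd.Categorical.from_codes(codes, categories=['a', 'b', 'c'])
```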
- [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/24913 | 2019-01-24T19:29:45Z | 2019-01-26T15:08:51Z | 2019-01-26T15:08:51Z | 2019-01-26T15:08:55Z |
API/VIS: remove misc plotting methods from plot accessor (revert #23811) | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 3b3fad22ce949..e7c9a4752db06 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -429,7 +429,6 @@ Other Enhancements
- :meth:`MultiIndex.to_flat_index` has been added to flatten multiple levels into a single-level :class:`Index` object.
- :meth:`DataFrame.to_stata` and :class:`pandas.io.stata.StataWriter117` can write mixed sting columns to Stata strl format (:issue:`23633`)
- :meth:`DataFrame.between_time` and :meth:`DataFrame.at_time` have gained the ``axis`` parameter (:issue:`8839`)
-- The ``scatter_matrix``, ``andrews_curves``, ``parallel_coordinates``, ``lag_plot``, ``autocorrelation_plot``, ``bootstrap_plot``, and ``radviz`` plots from the ``pandas.plotting`` module are now accessible from calling :meth:`DataFrame.plot` (:issue:`11978`)
- :meth:`DataFrame.to_records` now accepts ``index_dtypes`` and ``column_dtypes`` parameters to allow different data types in stored column and index records (:issue:`18146`)
- :class:`IntervalIndex` has gained the :attr:`~IntervalIndex.is_overlapping` attribute to indicate if the ``IntervalIndex`` contains any overlapping intervals (:issue:`23309`)
- :func:`pandas.DataFrame.to_sql` has gained the ``method`` argument to control SQL insertion clause. See the :ref:`insertion method <io.sql.method>` section in the documentation. (:issue:`8953`)
diff --git a/pandas/plotting/_core.py b/pandas/plotting/_core.py
index 3ba06c0638317..e543ab88f53b2 100644
--- a/pandas/plotting/_core.py
+++ b/pandas/plotting/_core.py
@@ -26,7 +26,6 @@
from pandas.core.generic import _shared_doc_kwargs, _shared_docs
from pandas.io.formats.printing import pprint_thing
-from pandas.plotting import _misc as misc
from pandas.plotting._compat import _mpl_ge_3_0_0
from pandas.plotting._style import _get_standard_colors, plot_params
from pandas.plotting._tools import (
@@ -2906,15 +2905,6 @@ def pie(self, **kwds):
"""
return self(kind='pie', **kwds)
- def lag(self, *args, **kwds):
- return misc.lag_plot(self._parent, *args, **kwds)
-
- def autocorrelation(self, *args, **kwds):
- return misc.autocorrelation_plot(self._parent, *args, **kwds)
-
- def bootstrap(self, *args, **kwds):
- return misc.bootstrap_plot(self._parent, *args, **kwds)
-
class FramePlotMethods(BasePlotMethods):
"""DataFrame plotting accessor and method
@@ -3610,16 +3600,3 @@ def hexbin(self, x, y, C=None, reduce_C_function=None, gridsize=None,
if gridsize is not None:
kwds['gridsize'] = gridsize
return self(kind='hexbin', x=x, y=y, C=C, **kwds)
-
- def scatter_matrix(self, *args, **kwds):
- return misc.scatter_matrix(self._parent, *args, **kwds)
-
- def andrews_curves(self, class_column, *args, **kwds):
- return misc.andrews_curves(self._parent, class_column, *args, **kwds)
-
- def parallel_coordinates(self, class_column, *args, **kwds):
- return misc.parallel_coordinates(self._parent, class_column,
- *args, **kwds)
-
- def radviz(self, class_column, *args, **kwds):
- return misc.radviz(self._parent, class_column, *args, **kwds)
diff --git a/pandas/tests/plotting/test_frame.py b/pandas/tests/plotting/test_frame.py
index 0e7672f4e2f9d..98b241f5c8206 100644
--- a/pandas/tests/plotting/test_frame.py
+++ b/pandas/tests/plotting/test_frame.py
@@ -2988,22 +2988,6 @@ def test_secondary_axis_font_size(self, method):
self._check_ticks_props(axes=ax.right_ax,
ylabelsize=fontsize)
- def test_misc_bindings(self, monkeypatch):
- df = pd.DataFrame(randn(10, 10), columns=list('abcdefghij'))
- monkeypatch.setattr('pandas.plotting._misc.scatter_matrix',
- lambda x: 2)
- monkeypatch.setattr('pandas.plotting._misc.andrews_curves',
- lambda x, y: 2)
- monkeypatch.setattr('pandas.plotting._misc.parallel_coordinates',
- lambda x, y: 2)
- monkeypatch.setattr('pandas.plotting._misc.radviz',
- lambda x, y: 2)
-
- assert df.plot.scatter_matrix() == 2
- assert df.plot.andrews_curves('a') == 2
- assert df.plot.parallel_coordinates('a') == 2
- assert df.plot.radviz('a') == 2
-
def _generate_4_axes_via_gridspec():
import matplotlib.pyplot as plt
diff --git a/pandas/tests/plotting/test_series.py b/pandas/tests/plotting/test_series.py
index 1e223c20f55b7..07a4b168a66f1 100644
--- a/pandas/tests/plotting/test_series.py
+++ b/pandas/tests/plotting/test_series.py
@@ -878,19 +878,6 @@ def test_custom_business_day_freq(self):
_check_plot_works(s.plot)
- def test_misc_bindings(self, monkeypatch):
- s = Series(randn(10))
- monkeypatch.setattr('pandas.plotting._misc.lag_plot',
- lambda x: 2)
- monkeypatch.setattr('pandas.plotting._misc.autocorrelation_plot',
- lambda x: 2)
- monkeypatch.setattr('pandas.plotting._misc.bootstrap_plot',
- lambda x: 2)
-
- assert s.plot.lag() == 2
- assert s.plot.autocorrelation() == 2
- assert s.plot.bootstrap() == 2
-
@pytest.mark.xfail
def test_plot_accessor_updates_on_inplace(self):
s = Series([1, 2, 3, 4])
| To be clear: I am opening this for discussion, but directly as a PR instead of an issue so it is easier to merge it in case we agree.
https://github.com/pandas-dev/pandas/pull/23811 added the misc plotting methods (like 'andrews_curves', 'parallel_coordinates', etc.; see http://pandas.pydata.org/pandas-docs/stable/visualization.html#plotting-tools) to the `DataFrame/Series.plot` accessor.
But personally, I think we really shouldn't expose those methods more prominently (we should rather be deprecating them; they were only kept for historical reasons).
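For context, the removed bindings simply delegated to the long-standing `pandas.plotting` functions, so nothing is lost; e.g. (hypothetical frame, requires matplotlib):

```python
import pandas as pd
from pandas.plotting import andrews_curves

df = pd.DataFrame({'x': [1.0, 2.0], 'y': [3.0, 4.0], 'cls': ['a', 'b']})

# kept: the function form
andrews_curves(df, 'cls')

# removed by this PR: the accessor binding added in gh-23811
# df.plot.andrews_curves('cls')
```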
cc @jreback @mroeschke | https://api.github.com/repos/pandas-dev/pandas/pulls/24912 | 2019-01-24T19:05:41Z | 2019-01-25T07:10:40Z | 2019-01-25T07:10:40Z | 2019-01-25T07:20:35Z |
DOC: some 0.24.0 whatsnew clean-up | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 9198c610f0f44..b0d3fead40ef0 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -10,27 +10,34 @@ What's New in 0.24.0 (January XX, 2019)
{{ header }}
-These are the changes in pandas 0.24.0. See :ref:`release` for a full changelog
-including other versions of pandas.
+This is a major release from 0.23.4 and includes a number of API changes, new
+features, enhancements, and performance improvements along with a large number
+of bug fixes.
-Enhancements
-~~~~~~~~~~~~
+Highlights include:
-Highlights include
-
-* :ref:`Optional Nullable Integer Support <whatsnew_0240.enhancements.intna>`
+* :ref:`Optional Integer NA Support <whatsnew_0240.enhancements.intna>`
* :ref:`New APIs for accessing the array backing a Series or Index <whatsnew_0240.values_api>`
* :ref:`A new top-level method for creating arrays <whatsnew_0240.enhancements.array>`
* :ref:`Store Interval and Period data in a Series or DataFrame <whatsnew_0240.enhancements.interval>`
* :ref:`Support for joining on two MultiIndexes <whatsnew_0240.enhancements.join_with_two_multiindexes>`
+
+Check the :ref:`API Changes <whatsnew_0240.api_breaking>` and :ref:`deprecations <whatsnew_0240.deprecations>` before updating.
+
+These are the changes in pandas 0.24.0. See :ref:`release` for a full changelog
+including other versions of pandas.
+
+
+Enhancements
+~~~~~~~~~~~~
+
.. _whatsnew_0240.enhancements.intna:
Optional Integer NA Support
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Pandas has gained the ability to hold integer dtypes with missing values. This long requested feature is enabled through the use of :ref:`extension types <extending.extension-types>`.
-Here is an example of the usage.
We can construct a ``Series`` with the specified dtype. The dtype string ``Int64`` is a pandas ``ExtensionDtype``. Specifying a list or array using the traditional missing value
marker of ``np.nan`` will infer to integer dtype. The display of the ``Series`` will also use the ``NaN`` to indicate missing values in string outputs. (:issue:`20700`, :issue:`20747`, :issue:`22441`, :issue:`21789`, :issue:`22346`)
@@ -60,7 +67,7 @@ Operations on these dtypes will propagate ``NaN`` as other pandas operations.
# coerce when needed
s + 0.01
-These dtypes can operate as part of of ``DataFrame``.
+These dtypes can operate as part of a ``DataFrame``.
.. ipython:: python
@@ -69,7 +76,7 @@ These dtypes can operate as part of of ``DataFrame``.
df.dtypes
-These dtypes can be merged & reshaped & casted.
+These dtypes can be merged, reshaped, and casted.
.. ipython:: python
@@ -112,6 +119,7 @@ a new ndarray of period objects each time.
.. ipython:: python
+ idx.values
id(idx.values)
id(idx.values)
@@ -124,7 +132,7 @@ If you need an actual NumPy array, use :meth:`Series.to_numpy` or :meth:`Index.t
For Series and Indexes backed by normal NumPy arrays, :attr:`Series.array` will return a
new :class:`arrays.PandasArray`, which is a thin (no-copy) wrapper around a
-:class:`numpy.ndarray`. :class:`arrays.PandasArray` isn't especially useful on its own,
+:class:`numpy.ndarray`. :class:`~arrays.PandasArray` isn't especially useful on its own,
but it does provide the same interface as any extension array defined in pandas or by
a third-party library.
@@ -142,14 +150,13 @@ See :ref:`Dtypes <basics.dtypes>` and :ref:`Attributes and Underlying Data <basi
.. _whatsnew_0240.enhancements.array:
-Array
-^^^^^
+``pandas.array``: a new top-level method for creating arrays
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A new top-level method :func:`array` has been added for creating 1-dimensional arrays (:issue:`22860`).
This can be used to create any :ref:`extension array <extending.extension-types>`, including
-extension arrays registered by :ref:`3rd party libraries <ecosystem.extensions>`. See
-
-See :ref:`Dtypes <basics.dtypes>` for more on extension arrays.
+extension arrays registered by :ref:`3rd party libraries <ecosystem.extensions>`.
+See the :ref:`dtypes docs <basics.dtypes>` for more on extension arrays.
.. ipython:: python
@@ -158,15 +165,15 @@ See :ref:`Dtypes <basics.dtypes>` for more on extension arrays.
Passing data for which there isn't dedicated extension type (e.g. float, integer, etc.)
will return a new :class:`arrays.PandasArray`, which is just a thin (no-copy)
-wrapper around a :class:`numpy.ndarray` that satisfies the extension array interface.
+wrapper around a :class:`numpy.ndarray` that satisfies the pandas extension array interface.
.. ipython:: python
pd.array([1, 2, 3])
-On their own, a :class:`arrays.PandasArray` isn't a very useful object.
+On their own, a :class:`~arrays.PandasArray` isn't a very useful object.
But if you need write low-level code that works generically for any
-:class:`~pandas.api.extensions.ExtensionArray`, :class:`arrays.PandasArray`
+:class:`~pandas.api.extensions.ExtensionArray`, :class:`~arrays.PandasArray`
satisfies that need.
Notice that by default, if no ``dtype`` is specified, the dtype of the returned
@@ -197,7 +204,7 @@ For periods:
.. ipython:: python
- pser = pd.Series(pd.date_range("2000", freq="D", periods=5))
+ pser = pd.Series(pd.period_range("2000", freq="D", periods=5))
pser
pser.dtype
@@ -259,23 +266,6 @@ For earlier versions this can be done using the following.
pd.merge(left.reset_index(), right.reset_index(),
on=['key'], how='inner').set_index(['key', 'X', 'Y'])
-
-.. _whatsnew_0240.enhancements.extension_array_operators:
-
-``ExtensionArray`` operator support
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-A ``Series`` based on an ``ExtensionArray`` now supports arithmetic and comparison
-operators (:issue:`19577`). There are two approaches for providing operator support for an ``ExtensionArray``:
-
-1. Define each of the operators on your ``ExtensionArray`` subclass.
-2. Use an operator implementation from pandas that depends on operators that are already defined
- on the underlying elements (scalars) of the ``ExtensionArray``.
-
-See the :ref:`ExtensionArray Operator Support
-<extending.extension.operator>` documentation section for details on both
-ways of adding operator support.
-
.. _whatsnew_0240.enhancements.read_html:
``read_html`` Enhancements
@@ -335,7 +325,7 @@ convenient way to apply users' predefined styling functions, and can help reduce
df.style.pipe(format_and_align).set_caption('Summary of results.')
Similar methods already exist for other classes in pandas, including :meth:`DataFrame.pipe`,
-:meth:`pandas.core.groupby.GroupBy.pipe`, and :meth:`pandas.core.resample.Resampler.pipe`.
+:meth:`GroupBy.pipe() <pandas.core.groupby.GroupBy.pipe>`, and :meth:`Resampler.pipe() <pandas.core.resample.Resampler.pipe>`.
.. _whatsnew_0240.enhancements.rename_axis:
@@ -343,7 +333,7 @@ Renaming names in a MultiIndex
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
:func:`DataFrame.rename_axis` now supports ``index`` and ``columns`` arguments
-and :func:`Series.rename_axis` supports ``index`` argument (:issue:`19978`)
+and :func:`Series.rename_axis` supports ``index`` argument (:issue:`19978`).
This change allows a dictionary to be passed so that some of the names
of a ``MultiIndex`` can be changed.
@@ -371,13 +361,13 @@ Other Enhancements
- :func:`DataFrame.to_parquet` now accepts ``index`` as an argument, allowing
the user to override the engine's default behavior to include or omit the
dataframe's indexes from the resulting Parquet file. (:issue:`20768`)
+- :func:`read_feather` now accepts ``columns`` as an argument, allowing the user to specify which columns should be read. (:issue:`24025`)
- :meth:`DataFrame.corr` and :meth:`Series.corr` now accept a callable for generic calculation methods of correlation, e.g. histogram intersection (:issue:`22684`)
- :func:`DataFrame.to_string` now accepts ``decimal`` as an argument, allowing the user to specify which decimal separator should be used in the output. (:issue:`23614`)
-- :func:`read_feather` now accepts ``columns`` as an argument, allowing the user to specify which columns should be read. (:issue:`24025`)
- :func:`DataFrame.to_html` now accepts ``render_links`` as an argument, allowing the user to generate HTML with links to any URLs that appear in the DataFrame.
See the :ref:`section on writing HTML <io.html>` in the IO docs for example usage. (:issue:`2679`)
- :func:`pandas.read_csv` now supports pandas extension types as an argument to ``dtype``, allowing the user to use pandas extension types when reading CSVs. (:issue:`23228`)
-- :meth:`DataFrame.shift` :meth:`Series.shift`, :meth:`ExtensionArray.shift`, :meth:`SparseArray.shift`, :meth:`Period.shift`, :meth:`GroupBy.shift`, :meth:`Categorical.shift`, :meth:`NDFrame.shift` and :meth:`Block.shift` now accept `fill_value` as an argument, allowing the user to specify a value which will be used instead of NA/NaT in the empty periods. (:issue:`15486`)
+- The :meth:`~DataFrame.shift` method now accepts `fill_value` as an argument, allowing the user to specify a value which will be used instead of NA/NaT in the empty periods. (:issue:`15486`)
- :func:`to_datetime` now supports the ``%Z`` and ``%z`` directive when passed into ``format`` (:issue:`13486`)
- :func:`Series.mode` and :func:`DataFrame.mode` now support the ``dropna`` parameter which can be used to specify whether ``NaN``/``NaT`` values should be considered (:issue:`17534`)
- :func:`DataFrame.to_csv` and :func:`Series.to_csv` now support the ``compression`` keyword when a file handle is passed. (:issue:`21227`)
@@ -399,18 +389,19 @@ Other Enhancements
The default compression for ``to_csv``, ``to_json``, and ``to_pickle`` methods has been updated to ``'infer'`` (:issue:`22004`).
- :meth:`DataFrame.to_sql` now supports writing ``TIMESTAMP WITH TIME ZONE`` types for supported databases. For databases that don't support timezones, datetime data will be stored as timezone unaware local timestamps. See the :ref:`io.sql_datetime_data` for implications (:issue:`9086`).
- :func:`to_timedelta` now supports iso-formated timedelta strings (:issue:`21877`)
-- :class:`Series` and :class:`DataFrame` now support :class:`Iterable` in constructor (:issue:`2193`)
+- :class:`Series` and :class:`DataFrame` now support :class:`Iterable` objects in the constructor (:issue:`2193`)
- :class:`DatetimeIndex` has gained the :attr:`DatetimeIndex.timetz` attribute. This returns the local time with timezone information. (:issue:`21358`)
-- :meth:`Timestamp.round`, :meth:`Timestamp.ceil`, and :meth:`Timestamp.floor` for :class:`DatetimeIndex` and :class:`Timestamp` now support an ``ambiguous`` argument for handling datetimes that are rounded to ambiguous times (:issue:`18946`)
-- :meth:`Timestamp.round`, :meth:`Timestamp.ceil`, and :meth:`Timestamp.floor` for :class:`DatetimeIndex` and :class:`Timestamp` now support a ``nonexistent`` argument for handling datetimes that are rounded to nonexistent times. See :ref:`timeseries.timezone_nonexistent` (:issue:`22647`)
-- :class:`pandas.core.resample.Resampler` now is iterable like :class:`pandas.core.groupby.GroupBy` (:issue:`15314`).
+- :meth:`~Timestamp.round`, :meth:`~Timestamp.ceil`, and :meth:`~Timestamp.floor` for :class:`DatetimeIndex` and :class:`Timestamp`
+ now support an ``ambiguous`` argument for handling datetimes that are rounded to ambiguous times (:issue:`18946`)
+ and a ``nonexistent`` argument for handling datetimes that are rounded to nonexistent times. See :ref:`timeseries.timezone_nonexistent` (:issue:`22647`)
+- The result of :meth:`~DataFrame.resample` is now iterable similar to ``groupby()`` (:issue:`15314`).
- :meth:`Series.resample` and :meth:`DataFrame.resample` have gained the :meth:`pandas.core.resample.Resampler.quantile` (:issue:`15023`).
- :meth:`DataFrame.resample` and :meth:`Series.resample` with a :class:`PeriodIndex` will now respect the ``base`` argument in the same fashion as with a :class:`DatetimeIndex`. (:issue:`23882`)
- :meth:`pandas.api.types.is_list_like` has gained a keyword ``allow_sets`` which is ``True`` by default; if ``False``,
all instances of ``set`` will not be considered "list-like" anymore (:issue:`23061`)
- :meth:`Index.to_frame` now supports overriding column name(s) (:issue:`22580`).
- :meth:`Categorical.from_codes` now can take a ``dtype`` parameter as an alternative to passing ``categories`` and ``ordered`` (:issue:`24398`).
-- New attribute :attr:`__git_version__` will return git commit sha of current build (:issue:`21295`).
+- New attribute ``__git_version__`` will return git commit sha of current build (:issue:`21295`).
- Compatibility with Matplotlib 3.0 (:issue:`22790`).
- Added :meth:`Interval.overlaps`, :meth:`IntervalArray.overlaps`, and :meth:`IntervalIndex.overlaps` for determining overlaps between interval-like objects (:issue:`21998`)
- :func:`read_fwf` now accepts keyword ``infer_nrows`` (:issue:`15138`).
@@ -426,7 +417,7 @@ Other Enhancements
- :class:`IntervalIndex` has gained the :attr:`~IntervalIndex.is_overlapping` attribute to indicate if the ``IntervalIndex`` contains any overlapping intervals (:issue:`23309`)
- :func:`pandas.DataFrame.to_sql` has gained the ``method`` argument to control SQL insertion clause. See the :ref:`insertion method <io.sql.method>` section in the documentation. (:issue:`8953`)
- :meth:`DataFrame.corrwith` now supports Spearman's rank correlation, Kendall's tau as well as callable correlation methods. (:issue:`21925`)
-- :meth:`DataFrame.to_json`, :meth:`DataFrame.to_csv`, :meth:`DataFrame.to_pickle`, and :meth:`DataFrame.to_XXX` etc. now support tilde(~) in path argument. (:issue:`23473`)
+- :meth:`DataFrame.to_json`, :meth:`DataFrame.to_csv`, :meth:`DataFrame.to_pickle`, and other export methods now support tilde(~) in path argument. (:issue:`23473`)
.. _whatsnew_0240.api_breaking:
@@ -438,8 +429,8 @@ Pandas 0.24.0 includes a number of API breaking changes.
.. _whatsnew_0240.api_breaking.deps:
-Dependencies have increased minimum versions
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Increased minimum versions for dependencies
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We have updated our minimum supported versions of dependencies (:issue:`21242`, :issue:`18742`, :issue:`23774`, :issue:`24767`).
If installed, we now require:
@@ -1167,17 +1158,19 @@ Other API Changes
.. _whatsnew_0240.api.extension:
-ExtensionType Changes
-~~~~~~~~~~~~~~~~~~~~~
+Extension Type Changes
+~~~~~~~~~~~~~~~~~~~~~~
**Equality and Hashability**
-Pandas now requires that extension dtypes be hashable. The base class implements
+Pandas now requires that extension dtypes be hashable (i.e. the respective
+``ExtensionDtype`` objects; hashability is not a requirement for the values
+of the corresponding ``ExtensionArray``). The base class implements
a default ``__eq__`` and ``__hash__``. If you have a parametrized dtype, you should
update the ``ExtensionDtype._metadata`` tuple to match the signature of your
``__init__`` method. See :class:`pandas.api.extensions.ExtensionDtype` for more (:issue:`22476`).
-**Reshaping changes**
+**New and changed methods**
- :meth:`~pandas.api.types.ExtensionArray.dropna` has been added (:issue:`21185`)
- :meth:`~pandas.api.types.ExtensionArray.repeat` has been added (:issue:`24349`)
@@ -1195,9 +1188,25 @@ update the ``ExtensionDtype._metadata`` tuple to match the signature of your
- Added :meth:`pandas.api.types.register_extension_dtype` to register an extension type with pandas (:issue:`22664`)
- Updated the ``.type`` attribute for ``PeriodDtype``, ``DatetimeTZDtype``, and ``IntervalDtype`` to be instances of the dtype (``Period``, ``Timestamp``, and ``Interval`` respectively) (:issue:`22938`)
+.. _whatsnew_0240.enhancements.extension_array_operators:
+
+**Operator support**
+
+A ``Series`` based on an ``ExtensionArray`` now supports arithmetic and comparison
+operators (:issue:`19577`). There are two approaches for providing operator support for an ``ExtensionArray``:
+
+1. Define each of the operators on your ``ExtensionArray`` subclass.
+2. Use an operator implementation from pandas that depends on operators that are already defined
+ on the underlying elements (scalars) of the ``ExtensionArray``.
+
+See the :ref:`ExtensionArray Operator Support
+<extending.extension.operator>` documentation section for details on both
+ways of adding operator support.
+
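A quick demonstration of the end result, using the nullable-integer ``ExtensionArray`` that ships with pandas (a third-party array would instead take one of the two routes above; approach 2 is typically wired up via pandas' ``ExtensionScalarOpsMixin``, whose ``_add_arithmetic_ops()``/``_add_comparison_ops()`` class methods attach operators that work elementwise on the array's scalars):

```python
import pandas as pd

# A Series backed by an ExtensionArray (nullable Int64, pandas >= 0.24)
# supports arithmetic and comparison operators directly.
s = pd.Series(pd.array([1, 2, None], dtype="Int64"))

print(s + 1)   # arithmetic: 2, 3 and a missing value
print(s == 2)  # elementwise comparison
```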
**Other changes**
- A default repr for :class:`pandas.api.extensions.ExtensionArray` is now provided (:issue:`23601`).
+- :meth:`ExtensionArray._formatting_values` is deprecated. Use :attr:`ExtensionArray._formatter` instead. (:issue:`23601`)
- An ``ExtensionArray`` with a boolean dtype now works correctly as a boolean indexer. :meth:`pandas.api.types.is_bool_dtype` now properly considers them boolean (:issue:`22326`)
**Bug Fixes**
@@ -1246,7 +1255,6 @@ Deprecations
- The methods :meth:`DataFrame.update` and :meth:`Panel.update` have deprecated the ``raise_conflict=False|True`` keyword in favor of ``errors='ignore'|'raise'`` (:issue:`23585`)
- The methods :meth:`Series.str.partition` and :meth:`Series.str.rpartition` have deprecated the ``pat`` keyword in favor of ``sep`` (:issue:`22676`)
- Deprecated the ``nthreads`` keyword of :func:`pandas.read_feather` in favor of ``use_threads`` to reflect the changes in ``pyarrow>=0.11.0``. (:issue:`23053`)
-- :meth:`ExtensionArray._formatting_values` is deprecated. Use :attr:`ExtensionArray._formatter` instead. (:issue:`23601`)
- :func:`pandas.read_excel` has deprecated accepting ``usecols`` as an integer. Please pass in a list of ints from 0 to ``usecols`` inclusive instead (:issue:`23527`)
- Constructing a :class:`TimedeltaIndex` from data with ``datetime64``-dtyped data is deprecated, will raise ``TypeError`` in a future version (:issue:`23539`)
- Constructing a :class:`DatetimeIndex` from data with ``timedelta64``-dtyped data is deprecated, will raise ``TypeError`` in a future version (:issue:`23675`)
| Did some proofreading on the train
cc @TomAugspurger | https://api.github.com/repos/pandas-dev/pandas/pulls/24911 | 2019-01-24T18:16:39Z | 2019-01-25T08:45:36Z | 2019-01-25T08:45:36Z | 2019-01-25T08:45:40Z |
DOC: Adding redirects to API moved pages | diff --git a/.gitignore b/.gitignore
index 4598714db6c6a..9891883879cf1 100644
--- a/.gitignore
+++ b/.gitignore
@@ -101,7 +101,7 @@ asv_bench/pandas/
# Documentation generated files #
#################################
doc/source/generated
-doc/source/api/generated
+doc/source/reference/api
doc/source/_static
doc/source/vbench
doc/source/vbench.rst
diff --git a/doc/make.py b/doc/make.py
index bc458d6b53cb0..438c4a04a3f08 100755
--- a/doc/make.py
+++ b/doc/make.py
@@ -53,7 +53,7 @@ def __init__(self, num_jobs=0, include_api=True, single_doc=None,
if single_doc and single_doc.endswith('.rst'):
self.single_doc_html = os.path.splitext(single_doc)[0] + '.html'
elif single_doc:
- self.single_doc_html = 'api/generated/pandas.{}.html'.format(
+ self.single_doc_html = 'reference/api/pandas.{}.html'.format(
single_doc)
def _process_single_doc(self, single_doc):
@@ -63,7 +63,7 @@ def _process_single_doc(self, single_doc):
For example, categorial.rst or pandas.DataFrame.head. For the latter,
return the corresponding file path
- (e.g. generated/pandas.DataFrame.head.rst).
+ (e.g. reference/api/pandas.DataFrame.head.rst).
"""
base_name, extension = os.path.splitext(single_doc)
if extension in ('.rst', '.ipynb'):
@@ -258,7 +258,7 @@ def clean():
Clean documentation generated files.
"""
shutil.rmtree(BUILD_PATH, ignore_errors=True)
- shutil.rmtree(os.path.join(SOURCE_PATH, 'api', 'generated'),
+ shutil.rmtree(os.path.join(SOURCE_PATH, 'reference', 'api'),
ignore_errors=True)
def zip_html(self):
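Each row of ``doc/redirects.csv`` maps an old page path to its new location, which the doc build turns into stub pages that forward the browser. A standalone sketch of that mechanism (the real handling lives in ``doc/make.py``; directory names here are assumptions):

```python
import csv
import os

BUILD_HTML = "doc/build/html"  # assumed output directory of the Sphinx build

with open("doc/redirects.csv") as f:
    for row in csv.reader(f):
        if not row or row[0].startswith("#"):
            continue  # skip blank lines and comments such as "# api"
        old, new = row
        path = os.path.join(BUILD_HTML, old + ".html")
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "w") as stub:
            # tiny HTML page that immediately redirects to the new URL
            stub.write('<html><head><meta http-equiv="refresh" '
                       'content="0;URL={}.html"/></head></html>'.format(new))
```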
diff --git a/doc/redirects.csv b/doc/redirects.csv
index e0de03745aaa8..43542258799e9 100644
--- a/doc/redirects.csv
+++ b/doc/redirects.csv
@@ -39,3 +39,1538 @@ contributing_docstring,development/contributing_docstring
developer,development/developer
extending,development/extending
internals,development/internals
+
+# api
+api,reference/index
+generated/pandas.api.extensions.ExtensionArray.argsort,../reference/api/pandas.api.extensions.ExtensionArray.argsort
+generated/pandas.api.extensions.ExtensionArray.astype,../reference/api/pandas.api.extensions.ExtensionArray.astype
+generated/pandas.api.extensions.ExtensionArray.copy,../reference/api/pandas.api.extensions.ExtensionArray.copy
+generated/pandas.api.extensions.ExtensionArray.dropna,../reference/api/pandas.api.extensions.ExtensionArray.dropna
+generated/pandas.api.extensions.ExtensionArray.dtype,../reference/api/pandas.api.extensions.ExtensionArray.dtype
+generated/pandas.api.extensions.ExtensionArray.factorize,../reference/api/pandas.api.extensions.ExtensionArray.factorize
+generated/pandas.api.extensions.ExtensionArray.fillna,../reference/api/pandas.api.extensions.ExtensionArray.fillna
+generated/pandas.api.extensions.ExtensionArray,../reference/api/pandas.api.extensions.ExtensionArray
+generated/pandas.api.extensions.ExtensionArray.isna,../reference/api/pandas.api.extensions.ExtensionArray.isna
+generated/pandas.api.extensions.ExtensionArray.nbytes,../reference/api/pandas.api.extensions.ExtensionArray.nbytes
+generated/pandas.api.extensions.ExtensionArray.ndim,../reference/api/pandas.api.extensions.ExtensionArray.ndim
+generated/pandas.api.extensions.ExtensionArray.shape,../reference/api/pandas.api.extensions.ExtensionArray.shape
+generated/pandas.api.extensions.ExtensionArray.take,../reference/api/pandas.api.extensions.ExtensionArray.take
+generated/pandas.api.extensions.ExtensionArray.unique,../reference/api/pandas.api.extensions.ExtensionArray.unique
+generated/pandas.api.extensions.ExtensionDtype.construct_array_type,../reference/api/pandas.api.extensions.ExtensionDtype.construct_array_type
+generated/pandas.api.extensions.ExtensionDtype.construct_from_string,../reference/api/pandas.api.extensions.ExtensionDtype.construct_from_string
+generated/pandas.api.extensions.ExtensionDtype,../reference/api/pandas.api.extensions.ExtensionDtype
+generated/pandas.api.extensions.ExtensionDtype.is_dtype,../reference/api/pandas.api.extensions.ExtensionDtype.is_dtype
+generated/pandas.api.extensions.ExtensionDtype.kind,../reference/api/pandas.api.extensions.ExtensionDtype.kind
+generated/pandas.api.extensions.ExtensionDtype.name,../reference/api/pandas.api.extensions.ExtensionDtype.name
+generated/pandas.api.extensions.ExtensionDtype.names,../reference/api/pandas.api.extensions.ExtensionDtype.names
+generated/pandas.api.extensions.ExtensionDtype.na_value,../reference/api/pandas.api.extensions.ExtensionDtype.na_value
+generated/pandas.api.extensions.ExtensionDtype.type,../reference/api/pandas.api.extensions.ExtensionDtype.type
+generated/pandas.api.extensions.register_dataframe_accessor,../reference/api/pandas.api.extensions.register_dataframe_accessor
+generated/pandas.api.extensions.register_extension_dtype,../reference/api/pandas.api.extensions.register_extension_dtype
+generated/pandas.api.extensions.register_index_accessor,../reference/api/pandas.api.extensions.register_index_accessor
+generated/pandas.api.extensions.register_series_accessor,../reference/api/pandas.api.extensions.register_series_accessor
+generated/pandas.api.types.infer_dtype,../reference/api/pandas.api.types.infer_dtype
+generated/pandas.api.types.is_bool_dtype,../reference/api/pandas.api.types.is_bool_dtype
+generated/pandas.api.types.is_bool,../reference/api/pandas.api.types.is_bool
+generated/pandas.api.types.is_categorical_dtype,../reference/api/pandas.api.types.is_categorical_dtype
+generated/pandas.api.types.is_categorical,../reference/api/pandas.api.types.is_categorical
+generated/pandas.api.types.is_complex_dtype,../reference/api/pandas.api.types.is_complex_dtype
+generated/pandas.api.types.is_complex,../reference/api/pandas.api.types.is_complex
+generated/pandas.api.types.is_datetime64_any_dtype,../reference/api/pandas.api.types.is_datetime64_any_dtype
+generated/pandas.api.types.is_datetime64_dtype,../reference/api/pandas.api.types.is_datetime64_dtype
+generated/pandas.api.types.is_datetime64_ns_dtype,../reference/api/pandas.api.types.is_datetime64_ns_dtype
+generated/pandas.api.types.is_datetime64tz_dtype,../reference/api/pandas.api.types.is_datetime64tz_dtype
+generated/pandas.api.types.is_datetimetz,../reference/api/pandas.api.types.is_datetimetz
+generated/pandas.api.types.is_dict_like,../reference/api/pandas.api.types.is_dict_like
+generated/pandas.api.types.is_extension_array_dtype,../reference/api/pandas.api.types.is_extension_array_dtype
+generated/pandas.api.types.is_extension_type,../reference/api/pandas.api.types.is_extension_type
+generated/pandas.api.types.is_file_like,../reference/api/pandas.api.types.is_file_like
+generated/pandas.api.types.is_float_dtype,../reference/api/pandas.api.types.is_float_dtype
+generated/pandas.api.types.is_float,../reference/api/pandas.api.types.is_float
+generated/pandas.api.types.is_hashable,../reference/api/pandas.api.types.is_hashable
+generated/pandas.api.types.is_int64_dtype,../reference/api/pandas.api.types.is_int64_dtype
+generated/pandas.api.types.is_integer_dtype,../reference/api/pandas.api.types.is_integer_dtype
+generated/pandas.api.types.is_integer,../reference/api/pandas.api.types.is_integer
+generated/pandas.api.types.is_interval_dtype,../reference/api/pandas.api.types.is_interval_dtype
+generated/pandas.api.types.is_interval,../reference/api/pandas.api.types.is_interval
+generated/pandas.api.types.is_iterator,../reference/api/pandas.api.types.is_iterator
+generated/pandas.api.types.is_list_like,../reference/api/pandas.api.types.is_list_like
+generated/pandas.api.types.is_named_tuple,../reference/api/pandas.api.types.is_named_tuple
+generated/pandas.api.types.is_number,../reference/api/pandas.api.types.is_number
+generated/pandas.api.types.is_numeric_dtype,../reference/api/pandas.api.types.is_numeric_dtype
+generated/pandas.api.types.is_object_dtype,../reference/api/pandas.api.types.is_object_dtype
+generated/pandas.api.types.is_period_dtype,../reference/api/pandas.api.types.is_period_dtype
+generated/pandas.api.types.is_period,../reference/api/pandas.api.types.is_period
+generated/pandas.api.types.is_re_compilable,../reference/api/pandas.api.types.is_re_compilable
+generated/pandas.api.types.is_re,../reference/api/pandas.api.types.is_re
+generated/pandas.api.types.is_scalar,../reference/api/pandas.api.types.is_scalar
+generated/pandas.api.types.is_signed_integer_dtype,../reference/api/pandas.api.types.is_signed_integer_dtype
+generated/pandas.api.types.is_sparse,../reference/api/pandas.api.types.is_sparse
+generated/pandas.api.types.is_string_dtype,../reference/api/pandas.api.types.is_string_dtype
+generated/pandas.api.types.is_timedelta64_dtype,../reference/api/pandas.api.types.is_timedelta64_dtype
+generated/pandas.api.types.is_timedelta64_ns_dtype,../reference/api/pandas.api.types.is_timedelta64_ns_dtype
+generated/pandas.api.types.is_unsigned_integer_dtype,../reference/api/pandas.api.types.is_unsigned_integer_dtype
+generated/pandas.api.types.pandas_dtype,../reference/api/pandas.api.types.pandas_dtype
+generated/pandas.api.types.union_categoricals,../reference/api/pandas.api.types.union_categoricals
+generated/pandas.bdate_range,../reference/api/pandas.bdate_range
+generated/pandas.Categorical.__array__,../reference/api/pandas.Categorical.__array__
+generated/pandas.Categorical.categories,../reference/api/pandas.Categorical.categories
+generated/pandas.Categorical.codes,../reference/api/pandas.Categorical.codes
+generated/pandas.CategoricalDtype.categories,../reference/api/pandas.CategoricalDtype.categories
+generated/pandas.Categorical.dtype,../reference/api/pandas.Categorical.dtype
+generated/pandas.CategoricalDtype,../reference/api/pandas.CategoricalDtype
+generated/pandas.CategoricalDtype.ordered,../reference/api/pandas.CategoricalDtype.ordered
+generated/pandas.Categorical.from_codes,../reference/api/pandas.Categorical.from_codes
+generated/pandas.Categorical,../reference/api/pandas.Categorical
+generated/pandas.CategoricalIndex.add_categories,../reference/api/pandas.CategoricalIndex.add_categories
+generated/pandas.CategoricalIndex.as_ordered,../reference/api/pandas.CategoricalIndex.as_ordered
+generated/pandas.CategoricalIndex.as_unordered,../reference/api/pandas.CategoricalIndex.as_unordered
+generated/pandas.CategoricalIndex.categories,../reference/api/pandas.CategoricalIndex.categories
+generated/pandas.CategoricalIndex.codes,../reference/api/pandas.CategoricalIndex.codes
+generated/pandas.CategoricalIndex.equals,../reference/api/pandas.CategoricalIndex.equals
+generated/pandas.CategoricalIndex,../reference/api/pandas.CategoricalIndex
+generated/pandas.CategoricalIndex.map,../reference/api/pandas.CategoricalIndex.map
+generated/pandas.CategoricalIndex.ordered,../reference/api/pandas.CategoricalIndex.ordered
+generated/pandas.CategoricalIndex.remove_categories,../reference/api/pandas.CategoricalIndex.remove_categories
+generated/pandas.CategoricalIndex.remove_unused_categories,../reference/api/pandas.CategoricalIndex.remove_unused_categories
+generated/pandas.CategoricalIndex.rename_categories,../reference/api/pandas.CategoricalIndex.rename_categories
+generated/pandas.CategoricalIndex.reorder_categories,../reference/api/pandas.CategoricalIndex.reorder_categories
+generated/pandas.CategoricalIndex.set_categories,../reference/api/pandas.CategoricalIndex.set_categories
+generated/pandas.Categorical.ordered,../reference/api/pandas.Categorical.ordered
+generated/pandas.concat,../reference/api/pandas.concat
+generated/pandas.core.groupby.DataFrameGroupBy.all,../reference/api/pandas.core.groupby.DataFrameGroupBy.all
+generated/pandas.core.groupby.DataFrameGroupBy.any,../reference/api/pandas.core.groupby.DataFrameGroupBy.any
+generated/pandas.core.groupby.DataFrameGroupBy.bfill,../reference/api/pandas.core.groupby.DataFrameGroupBy.bfill
+generated/pandas.core.groupby.DataFrameGroupBy.boxplot,../reference/api/pandas.core.groupby.DataFrameGroupBy.boxplot
+generated/pandas.core.groupby.DataFrameGroupBy.corr,../reference/api/pandas.core.groupby.DataFrameGroupBy.corr
+generated/pandas.core.groupby.DataFrameGroupBy.corrwith,../reference/api/pandas.core.groupby.DataFrameGroupBy.corrwith
+generated/pandas.core.groupby.DataFrameGroupBy.count,../reference/api/pandas.core.groupby.DataFrameGroupBy.count
+generated/pandas.core.groupby.DataFrameGroupBy.cov,../reference/api/pandas.core.groupby.DataFrameGroupBy.cov
+generated/pandas.core.groupby.DataFrameGroupBy.cummax,../reference/api/pandas.core.groupby.DataFrameGroupBy.cummax
+generated/pandas.core.groupby.DataFrameGroupBy.cummin,../reference/api/pandas.core.groupby.DataFrameGroupBy.cummin
+generated/pandas.core.groupby.DataFrameGroupBy.cumprod,../reference/api/pandas.core.groupby.DataFrameGroupBy.cumprod
+generated/pandas.core.groupby.DataFrameGroupBy.cumsum,../reference/api/pandas.core.groupby.DataFrameGroupBy.cumsum
+generated/pandas.core.groupby.DataFrameGroupBy.describe,../reference/api/pandas.core.groupby.DataFrameGroupBy.describe
+generated/pandas.core.groupby.DataFrameGroupBy.diff,../reference/api/pandas.core.groupby.DataFrameGroupBy.diff
+generated/pandas.core.groupby.DataFrameGroupBy.ffill,../reference/api/pandas.core.groupby.DataFrameGroupBy.ffill
+generated/pandas.core.groupby.DataFrameGroupBy.fillna,../reference/api/pandas.core.groupby.DataFrameGroupBy.fillna
+generated/pandas.core.groupby.DataFrameGroupBy.filter,../reference/api/pandas.core.groupby.DataFrameGroupBy.filter
+generated/pandas.core.groupby.DataFrameGroupBy.hist,../reference/api/pandas.core.groupby.DataFrameGroupBy.hist
+generated/pandas.core.groupby.DataFrameGroupBy.idxmax,../reference/api/pandas.core.groupby.DataFrameGroupBy.idxmax
+generated/pandas.core.groupby.DataFrameGroupBy.idxmin,../reference/api/pandas.core.groupby.DataFrameGroupBy.idxmin
+generated/pandas.core.groupby.DataFrameGroupBy.mad,../reference/api/pandas.core.groupby.DataFrameGroupBy.mad
+generated/pandas.core.groupby.DataFrameGroupBy.pct_change,../reference/api/pandas.core.groupby.DataFrameGroupBy.pct_change
+generated/pandas.core.groupby.DataFrameGroupBy.plot,../reference/api/pandas.core.groupby.DataFrameGroupBy.plot
+generated/pandas.core.groupby.DataFrameGroupBy.quantile,../reference/api/pandas.core.groupby.DataFrameGroupBy.quantile
+generated/pandas.core.groupby.DataFrameGroupBy.rank,../reference/api/pandas.core.groupby.DataFrameGroupBy.rank
+generated/pandas.core.groupby.DataFrameGroupBy.resample,../reference/api/pandas.core.groupby.DataFrameGroupBy.resample
+generated/pandas.core.groupby.DataFrameGroupBy.shift,../reference/api/pandas.core.groupby.DataFrameGroupBy.shift
+generated/pandas.core.groupby.DataFrameGroupBy.size,../reference/api/pandas.core.groupby.DataFrameGroupBy.size
+generated/pandas.core.groupby.DataFrameGroupBy.skew,../reference/api/pandas.core.groupby.DataFrameGroupBy.skew
+generated/pandas.core.groupby.DataFrameGroupBy.take,../reference/api/pandas.core.groupby.DataFrameGroupBy.take
+generated/pandas.core.groupby.DataFrameGroupBy.tshift,../reference/api/pandas.core.groupby.DataFrameGroupBy.tshift
+generated/pandas.core.groupby.GroupBy.agg,../reference/api/pandas.core.groupby.GroupBy.agg
+generated/pandas.core.groupby.GroupBy.aggregate,../reference/api/pandas.core.groupby.GroupBy.aggregate
+generated/pandas.core.groupby.GroupBy.all,../reference/api/pandas.core.groupby.GroupBy.all
+generated/pandas.core.groupby.GroupBy.any,../reference/api/pandas.core.groupby.GroupBy.any
+generated/pandas.core.groupby.GroupBy.apply,../reference/api/pandas.core.groupby.GroupBy.apply
+generated/pandas.core.groupby.GroupBy.bfill,../reference/api/pandas.core.groupby.GroupBy.bfill
+generated/pandas.core.groupby.GroupBy.count,../reference/api/pandas.core.groupby.GroupBy.count
+generated/pandas.core.groupby.GroupBy.cumcount,../reference/api/pandas.core.groupby.GroupBy.cumcount
+generated/pandas.core.groupby.GroupBy.ffill,../reference/api/pandas.core.groupby.GroupBy.ffill
+generated/pandas.core.groupby.GroupBy.first,../reference/api/pandas.core.groupby.GroupBy.first
+generated/pandas.core.groupby.GroupBy.get_group,../reference/api/pandas.core.groupby.GroupBy.get_group
+generated/pandas.core.groupby.GroupBy.groups,../reference/api/pandas.core.groupby.GroupBy.groups
+generated/pandas.core.groupby.GroupBy.head,../reference/api/pandas.core.groupby.GroupBy.head
+generated/pandas.core.groupby.GroupBy.indices,../reference/api/pandas.core.groupby.GroupBy.indices
+generated/pandas.core.groupby.GroupBy.__iter__,../reference/api/pandas.core.groupby.GroupBy.__iter__
+generated/pandas.core.groupby.GroupBy.last,../reference/api/pandas.core.groupby.GroupBy.last
+generated/pandas.core.groupby.GroupBy.max,../reference/api/pandas.core.groupby.GroupBy.max
+generated/pandas.core.groupby.GroupBy.mean,../reference/api/pandas.core.groupby.GroupBy.mean
+generated/pandas.core.groupby.GroupBy.median,../reference/api/pandas.core.groupby.GroupBy.median
+generated/pandas.core.groupby.GroupBy.min,../reference/api/pandas.core.groupby.GroupBy.min
+generated/pandas.core.groupby.GroupBy.ngroup,../reference/api/pandas.core.groupby.GroupBy.ngroup
+generated/pandas.core.groupby.GroupBy.nth,../reference/api/pandas.core.groupby.GroupBy.nth
+generated/pandas.core.groupby.GroupBy.ohlc,../reference/api/pandas.core.groupby.GroupBy.ohlc
+generated/pandas.core.groupby.GroupBy.pct_change,../reference/api/pandas.core.groupby.GroupBy.pct_change
+generated/pandas.core.groupby.GroupBy.pipe,../reference/api/pandas.core.groupby.GroupBy.pipe
+generated/pandas.core.groupby.GroupBy.prod,../reference/api/pandas.core.groupby.GroupBy.prod
+generated/pandas.core.groupby.GroupBy.rank,../reference/api/pandas.core.groupby.GroupBy.rank
+generated/pandas.core.groupby.GroupBy.sem,../reference/api/pandas.core.groupby.GroupBy.sem
+generated/pandas.core.groupby.GroupBy.size,../reference/api/pandas.core.groupby.GroupBy.size
+generated/pandas.core.groupby.GroupBy.std,../reference/api/pandas.core.groupby.GroupBy.std
+generated/pandas.core.groupby.GroupBy.sum,../reference/api/pandas.core.groupby.GroupBy.sum
+generated/pandas.core.groupby.GroupBy.tail,../reference/api/pandas.core.groupby.GroupBy.tail
+generated/pandas.core.groupby.GroupBy.transform,../reference/api/pandas.core.groupby.GroupBy.transform
+generated/pandas.core.groupby.GroupBy.var,../reference/api/pandas.core.groupby.GroupBy.var
+generated/pandas.core.groupby.SeriesGroupBy.is_monotonic_decreasing,../reference/api/pandas.core.groupby.SeriesGroupBy.is_monotonic_decreasing
+generated/pandas.core.groupby.SeriesGroupBy.is_monotonic_increasing,../reference/api/pandas.core.groupby.SeriesGroupBy.is_monotonic_increasing
+generated/pandas.core.groupby.SeriesGroupBy.nlargest,../reference/api/pandas.core.groupby.SeriesGroupBy.nlargest
+generated/pandas.core.groupby.SeriesGroupBy.nsmallest,../reference/api/pandas.core.groupby.SeriesGroupBy.nsmallest
+generated/pandas.core.groupby.SeriesGroupBy.nunique,../reference/api/pandas.core.groupby.SeriesGroupBy.nunique
+generated/pandas.core.groupby.SeriesGroupBy.unique,../reference/api/pandas.core.groupby.SeriesGroupBy.unique
+generated/pandas.core.groupby.SeriesGroupBy.value_counts,../reference/api/pandas.core.groupby.SeriesGroupBy.value_counts
+generated/pandas.core.resample.Resampler.aggregate,../reference/api/pandas.core.resample.Resampler.aggregate
+generated/pandas.core.resample.Resampler.apply,../reference/api/pandas.core.resample.Resampler.apply
+generated/pandas.core.resample.Resampler.asfreq,../reference/api/pandas.core.resample.Resampler.asfreq
+generated/pandas.core.resample.Resampler.backfill,../reference/api/pandas.core.resample.Resampler.backfill
+generated/pandas.core.resample.Resampler.bfill,../reference/api/pandas.core.resample.Resampler.bfill
+generated/pandas.core.resample.Resampler.count,../reference/api/pandas.core.resample.Resampler.count
+generated/pandas.core.resample.Resampler.ffill,../reference/api/pandas.core.resample.Resampler.ffill
+generated/pandas.core.resample.Resampler.fillna,../reference/api/pandas.core.resample.Resampler.fillna
+generated/pandas.core.resample.Resampler.first,../reference/api/pandas.core.resample.Resampler.first
+generated/pandas.core.resample.Resampler.get_group,../reference/api/pandas.core.resample.Resampler.get_group
+generated/pandas.core.resample.Resampler.groups,../reference/api/pandas.core.resample.Resampler.groups
+generated/pandas.core.resample.Resampler.indices,../reference/api/pandas.core.resample.Resampler.indices
+generated/pandas.core.resample.Resampler.interpolate,../reference/api/pandas.core.resample.Resampler.interpolate
+generated/pandas.core.resample.Resampler.__iter__,../reference/api/pandas.core.resample.Resampler.__iter__
+generated/pandas.core.resample.Resampler.last,../reference/api/pandas.core.resample.Resampler.last
+generated/pandas.core.resample.Resampler.max,../reference/api/pandas.core.resample.Resampler.max
+generated/pandas.core.resample.Resampler.mean,../reference/api/pandas.core.resample.Resampler.mean
+generated/pandas.core.resample.Resampler.median,../reference/api/pandas.core.resample.Resampler.median
+generated/pandas.core.resample.Resampler.min,../reference/api/pandas.core.resample.Resampler.min
+generated/pandas.core.resample.Resampler.nearest,../reference/api/pandas.core.resample.Resampler.nearest
+generated/pandas.core.resample.Resampler.nunique,../reference/api/pandas.core.resample.Resampler.nunique
+generated/pandas.core.resample.Resampler.ohlc,../reference/api/pandas.core.resample.Resampler.ohlc
+generated/pandas.core.resample.Resampler.pad,../reference/api/pandas.core.resample.Resampler.pad
+generated/pandas.core.resample.Resampler.pipe,../reference/api/pandas.core.resample.Resampler.pipe
+generated/pandas.core.resample.Resampler.prod,../reference/api/pandas.core.resample.Resampler.prod
+generated/pandas.core.resample.Resampler.quantile,../reference/api/pandas.core.resample.Resampler.quantile
+generated/pandas.core.resample.Resampler.sem,../reference/api/pandas.core.resample.Resampler.sem
+generated/pandas.core.resample.Resampler.size,../reference/api/pandas.core.resample.Resampler.size
+generated/pandas.core.resample.Resampler.std,../reference/api/pandas.core.resample.Resampler.std
+generated/pandas.core.resample.Resampler.sum,../reference/api/pandas.core.resample.Resampler.sum
+generated/pandas.core.resample.Resampler.transform,../reference/api/pandas.core.resample.Resampler.transform
+generated/pandas.core.resample.Resampler.var,../reference/api/pandas.core.resample.Resampler.var
+generated/pandas.core.window.EWM.corr,../reference/api/pandas.core.window.EWM.corr
+generated/pandas.core.window.EWM.cov,../reference/api/pandas.core.window.EWM.cov
+generated/pandas.core.window.EWM.mean,../reference/api/pandas.core.window.EWM.mean
+generated/pandas.core.window.EWM.std,../reference/api/pandas.core.window.EWM.std
+generated/pandas.core.window.EWM.var,../reference/api/pandas.core.window.EWM.var
+generated/pandas.core.window.Expanding.aggregate,../reference/api/pandas.core.window.Expanding.aggregate
+generated/pandas.core.window.Expanding.apply,../reference/api/pandas.core.window.Expanding.apply
+generated/pandas.core.window.Expanding.corr,../reference/api/pandas.core.window.Expanding.corr
+generated/pandas.core.window.Expanding.count,../reference/api/pandas.core.window.Expanding.count
+generated/pandas.core.window.Expanding.cov,../reference/api/pandas.core.window.Expanding.cov
+generated/pandas.core.window.Expanding.kurt,../reference/api/pandas.core.window.Expanding.kurt
+generated/pandas.core.window.Expanding.max,../reference/api/pandas.core.window.Expanding.max
+generated/pandas.core.window.Expanding.mean,../reference/api/pandas.core.window.Expanding.mean
+generated/pandas.core.window.Expanding.median,../reference/api/pandas.core.window.Expanding.median
+generated/pandas.core.window.Expanding.min,../reference/api/pandas.core.window.Expanding.min
+generated/pandas.core.window.Expanding.quantile,../reference/api/pandas.core.window.Expanding.quantile
+generated/pandas.core.window.Expanding.skew,../reference/api/pandas.core.window.Expanding.skew
+generated/pandas.core.window.Expanding.std,../reference/api/pandas.core.window.Expanding.std
+generated/pandas.core.window.Expanding.sum,../reference/api/pandas.core.window.Expanding.sum
+generated/pandas.core.window.Expanding.var,../reference/api/pandas.core.window.Expanding.var
+generated/pandas.core.window.Rolling.aggregate,../reference/api/pandas.core.window.Rolling.aggregate
+generated/pandas.core.window.Rolling.apply,../reference/api/pandas.core.window.Rolling.apply
+generated/pandas.core.window.Rolling.corr,../reference/api/pandas.core.window.Rolling.corr
+generated/pandas.core.window.Rolling.count,../reference/api/pandas.core.window.Rolling.count
+generated/pandas.core.window.Rolling.cov,../reference/api/pandas.core.window.Rolling.cov
+generated/pandas.core.window.Rolling.kurt,../reference/api/pandas.core.window.Rolling.kurt
+generated/pandas.core.window.Rolling.max,../reference/api/pandas.core.window.Rolling.max
+generated/pandas.core.window.Rolling.mean,../reference/api/pandas.core.window.Rolling.mean
+generated/pandas.core.window.Rolling.median,../reference/api/pandas.core.window.Rolling.median
+generated/pandas.core.window.Rolling.min,../reference/api/pandas.core.window.Rolling.min
+generated/pandas.core.window.Rolling.quantile,../reference/api/pandas.core.window.Rolling.quantile
+generated/pandas.core.window.Rolling.skew,../reference/api/pandas.core.window.Rolling.skew
+generated/pandas.core.window.Rolling.std,../reference/api/pandas.core.window.Rolling.std
+generated/pandas.core.window.Rolling.sum,../reference/api/pandas.core.window.Rolling.sum
+generated/pandas.core.window.Rolling.var,../reference/api/pandas.core.window.Rolling.var
+generated/pandas.core.window.Window.mean,../reference/api/pandas.core.window.Window.mean
+generated/pandas.core.window.Window.sum,../reference/api/pandas.core.window.Window.sum
+generated/pandas.crosstab,../reference/api/pandas.crosstab
+generated/pandas.cut,../reference/api/pandas.cut
+generated/pandas.DataFrame.abs,../reference/api/pandas.DataFrame.abs
+generated/pandas.DataFrame.add,../reference/api/pandas.DataFrame.add
+generated/pandas.DataFrame.add_prefix,../reference/api/pandas.DataFrame.add_prefix
+generated/pandas.DataFrame.add_suffix,../reference/api/pandas.DataFrame.add_suffix
+generated/pandas.DataFrame.agg,../reference/api/pandas.DataFrame.agg
+generated/pandas.DataFrame.aggregate,../reference/api/pandas.DataFrame.aggregate
+generated/pandas.DataFrame.align,../reference/api/pandas.DataFrame.align
+generated/pandas.DataFrame.all,../reference/api/pandas.DataFrame.all
+generated/pandas.DataFrame.any,../reference/api/pandas.DataFrame.any
+generated/pandas.DataFrame.append,../reference/api/pandas.DataFrame.append
+generated/pandas.DataFrame.apply,../reference/api/pandas.DataFrame.apply
+generated/pandas.DataFrame.applymap,../reference/api/pandas.DataFrame.applymap
+generated/pandas.DataFrame.as_blocks,../reference/api/pandas.DataFrame.as_blocks
+generated/pandas.DataFrame.asfreq,../reference/api/pandas.DataFrame.asfreq
+generated/pandas.DataFrame.as_matrix,../reference/api/pandas.DataFrame.as_matrix
+generated/pandas.DataFrame.asof,../reference/api/pandas.DataFrame.asof
+generated/pandas.DataFrame.assign,../reference/api/pandas.DataFrame.assign
+generated/pandas.DataFrame.astype,../reference/api/pandas.DataFrame.astype
+generated/pandas.DataFrame.at,../reference/api/pandas.DataFrame.at
+generated/pandas.DataFrame.at_time,../reference/api/pandas.DataFrame.at_time
+generated/pandas.DataFrame.axes,../reference/api/pandas.DataFrame.axes
+generated/pandas.DataFrame.between_time,../reference/api/pandas.DataFrame.between_time
+generated/pandas.DataFrame.bfill,../reference/api/pandas.DataFrame.bfill
+generated/pandas.DataFrame.blocks,../reference/api/pandas.DataFrame.blocks
+generated/pandas.DataFrame.bool,../reference/api/pandas.DataFrame.bool
+generated/pandas.DataFrame.boxplot,../reference/api/pandas.DataFrame.boxplot
+generated/pandas.DataFrame.clip,../reference/api/pandas.DataFrame.clip
+generated/pandas.DataFrame.clip_lower,../reference/api/pandas.DataFrame.clip_lower
+generated/pandas.DataFrame.clip_upper,../reference/api/pandas.DataFrame.clip_upper
+generated/pandas.DataFrame.columns,../reference/api/pandas.DataFrame.columns
+generated/pandas.DataFrame.combine_first,../reference/api/pandas.DataFrame.combine_first
+generated/pandas.DataFrame.combine,../reference/api/pandas.DataFrame.combine
+generated/pandas.DataFrame.compound,../reference/api/pandas.DataFrame.compound
+generated/pandas.DataFrame.convert_objects,../reference/api/pandas.DataFrame.convert_objects
+generated/pandas.DataFrame.copy,../reference/api/pandas.DataFrame.copy
+generated/pandas.DataFrame.corr,../reference/api/pandas.DataFrame.corr
+generated/pandas.DataFrame.corrwith,../reference/api/pandas.DataFrame.corrwith
+generated/pandas.DataFrame.count,../reference/api/pandas.DataFrame.count
+generated/pandas.DataFrame.cov,../reference/api/pandas.DataFrame.cov
+generated/pandas.DataFrame.cummax,../reference/api/pandas.DataFrame.cummax
+generated/pandas.DataFrame.cummin,../reference/api/pandas.DataFrame.cummin
+generated/pandas.DataFrame.cumprod,../reference/api/pandas.DataFrame.cumprod
+generated/pandas.DataFrame.cumsum,../reference/api/pandas.DataFrame.cumsum
+generated/pandas.DataFrame.describe,../reference/api/pandas.DataFrame.describe
+generated/pandas.DataFrame.diff,../reference/api/pandas.DataFrame.diff
+generated/pandas.DataFrame.div,../reference/api/pandas.DataFrame.div
+generated/pandas.DataFrame.divide,../reference/api/pandas.DataFrame.divide
+generated/pandas.DataFrame.dot,../reference/api/pandas.DataFrame.dot
+generated/pandas.DataFrame.drop_duplicates,../reference/api/pandas.DataFrame.drop_duplicates
+generated/pandas.DataFrame.drop,../reference/api/pandas.DataFrame.drop
+generated/pandas.DataFrame.droplevel,../reference/api/pandas.DataFrame.droplevel
+generated/pandas.DataFrame.dropna,../reference/api/pandas.DataFrame.dropna
+generated/pandas.DataFrame.dtypes,../reference/api/pandas.DataFrame.dtypes
+generated/pandas.DataFrame.duplicated,../reference/api/pandas.DataFrame.duplicated
+generated/pandas.DataFrame.empty,../reference/api/pandas.DataFrame.empty
+generated/pandas.DataFrame.eq,../reference/api/pandas.DataFrame.eq
+generated/pandas.DataFrame.equals,../reference/api/pandas.DataFrame.equals
+generated/pandas.DataFrame.eval,../reference/api/pandas.DataFrame.eval
+generated/pandas.DataFrame.ewm,../reference/api/pandas.DataFrame.ewm
+generated/pandas.DataFrame.expanding,../reference/api/pandas.DataFrame.expanding
+generated/pandas.DataFrame.ffill,../reference/api/pandas.DataFrame.ffill
+generated/pandas.DataFrame.fillna,../reference/api/pandas.DataFrame.fillna
+generated/pandas.DataFrame.filter,../reference/api/pandas.DataFrame.filter
+generated/pandas.DataFrame.first,../reference/api/pandas.DataFrame.first
+generated/pandas.DataFrame.first_valid_index,../reference/api/pandas.DataFrame.first_valid_index
+generated/pandas.DataFrame.floordiv,../reference/api/pandas.DataFrame.floordiv
+generated/pandas.DataFrame.from_csv,../reference/api/pandas.DataFrame.from_csv
+generated/pandas.DataFrame.from_dict,../reference/api/pandas.DataFrame.from_dict
+generated/pandas.DataFrame.from_items,../reference/api/pandas.DataFrame.from_items
+generated/pandas.DataFrame.from_records,../reference/api/pandas.DataFrame.from_records
+generated/pandas.DataFrame.ftypes,../reference/api/pandas.DataFrame.ftypes
+generated/pandas.DataFrame.ge,../reference/api/pandas.DataFrame.ge
+generated/pandas.DataFrame.get_dtype_counts,../reference/api/pandas.DataFrame.get_dtype_counts
+generated/pandas.DataFrame.get_ftype_counts,../reference/api/pandas.DataFrame.get_ftype_counts
+generated/pandas.DataFrame.get,../reference/api/pandas.DataFrame.get
+generated/pandas.DataFrame.get_value,../reference/api/pandas.DataFrame.get_value
+generated/pandas.DataFrame.get_values,../reference/api/pandas.DataFrame.get_values
+generated/pandas.DataFrame.groupby,../reference/api/pandas.DataFrame.groupby
+generated/pandas.DataFrame.gt,../reference/api/pandas.DataFrame.gt
+generated/pandas.DataFrame.head,../reference/api/pandas.DataFrame.head
+generated/pandas.DataFrame.hist,../reference/api/pandas.DataFrame.hist
+generated/pandas.DataFrame,../reference/api/pandas.DataFrame
+generated/pandas.DataFrame.iat,../reference/api/pandas.DataFrame.iat
+generated/pandas.DataFrame.idxmax,../reference/api/pandas.DataFrame.idxmax
+generated/pandas.DataFrame.idxmin,../reference/api/pandas.DataFrame.idxmin
+generated/pandas.DataFrame.iloc,../reference/api/pandas.DataFrame.iloc
+generated/pandas.DataFrame.index,../reference/api/pandas.DataFrame.index
+generated/pandas.DataFrame.infer_objects,../reference/api/pandas.DataFrame.infer_objects
+generated/pandas.DataFrame.info,../reference/api/pandas.DataFrame.info
+generated/pandas.DataFrame.insert,../reference/api/pandas.DataFrame.insert
+generated/pandas.DataFrame.interpolate,../reference/api/pandas.DataFrame.interpolate
+generated/pandas.DataFrame.is_copy,../reference/api/pandas.DataFrame.is_copy
+generated/pandas.DataFrame.isin,../reference/api/pandas.DataFrame.isin
+generated/pandas.DataFrame.isna,../reference/api/pandas.DataFrame.isna
+generated/pandas.DataFrame.isnull,../reference/api/pandas.DataFrame.isnull
+generated/pandas.DataFrame.items,../reference/api/pandas.DataFrame.items
+generated/pandas.DataFrame.__iter__,../reference/api/pandas.DataFrame.__iter__
+generated/pandas.DataFrame.iteritems,../reference/api/pandas.DataFrame.iteritems
+generated/pandas.DataFrame.iterrows,../reference/api/pandas.DataFrame.iterrows
+generated/pandas.DataFrame.itertuples,../reference/api/pandas.DataFrame.itertuples
+generated/pandas.DataFrame.ix,../reference/api/pandas.DataFrame.ix
+generated/pandas.DataFrame.join,../reference/api/pandas.DataFrame.join
+generated/pandas.DataFrame.keys,../reference/api/pandas.DataFrame.keys
+generated/pandas.DataFrame.kurt,../reference/api/pandas.DataFrame.kurt
+generated/pandas.DataFrame.kurtosis,../reference/api/pandas.DataFrame.kurtosis
+generated/pandas.DataFrame.last,../reference/api/pandas.DataFrame.last
+generated/pandas.DataFrame.last_valid_index,../reference/api/pandas.DataFrame.last_valid_index
+generated/pandas.DataFrame.le,../reference/api/pandas.DataFrame.le
+generated/pandas.DataFrame.loc,../reference/api/pandas.DataFrame.loc
+generated/pandas.DataFrame.lookup,../reference/api/pandas.DataFrame.lookup
+generated/pandas.DataFrame.lt,../reference/api/pandas.DataFrame.lt
+generated/pandas.DataFrame.mad,../reference/api/pandas.DataFrame.mad
+generated/pandas.DataFrame.mask,../reference/api/pandas.DataFrame.mask
+generated/pandas.DataFrame.max,../reference/api/pandas.DataFrame.max
+generated/pandas.DataFrame.mean,../reference/api/pandas.DataFrame.mean
+generated/pandas.DataFrame.median,../reference/api/pandas.DataFrame.median
+generated/pandas.DataFrame.melt,../reference/api/pandas.DataFrame.melt
+generated/pandas.DataFrame.memory_usage,../reference/api/pandas.DataFrame.memory_usage
+generated/pandas.DataFrame.merge,../reference/api/pandas.DataFrame.merge
+generated/pandas.DataFrame.min,../reference/api/pandas.DataFrame.min
+generated/pandas.DataFrame.mode,../reference/api/pandas.DataFrame.mode
+generated/pandas.DataFrame.mod,../reference/api/pandas.DataFrame.mod
+generated/pandas.DataFrame.mul,../reference/api/pandas.DataFrame.mul
+generated/pandas.DataFrame.multiply,../reference/api/pandas.DataFrame.multiply
+generated/pandas.DataFrame.ndim,../reference/api/pandas.DataFrame.ndim
+generated/pandas.DataFrame.ne,../reference/api/pandas.DataFrame.ne
+generated/pandas.DataFrame.nlargest,../reference/api/pandas.DataFrame.nlargest
+generated/pandas.DataFrame.notna,../reference/api/pandas.DataFrame.notna
+generated/pandas.DataFrame.notnull,../reference/api/pandas.DataFrame.notnull
+generated/pandas.DataFrame.nsmallest,../reference/api/pandas.DataFrame.nsmallest
+generated/pandas.DataFrame.nunique,../reference/api/pandas.DataFrame.nunique
+generated/pandas.DataFrame.pct_change,../reference/api/pandas.DataFrame.pct_change
+generated/pandas.DataFrame.pipe,../reference/api/pandas.DataFrame.pipe
+generated/pandas.DataFrame.pivot,../reference/api/pandas.DataFrame.pivot
+generated/pandas.DataFrame.pivot_table,../reference/api/pandas.DataFrame.pivot_table
+generated/pandas.DataFrame.plot.barh,../reference/api/pandas.DataFrame.plot.barh
+generated/pandas.DataFrame.plot.bar,../reference/api/pandas.DataFrame.plot.bar
+generated/pandas.DataFrame.plot.box,../reference/api/pandas.DataFrame.plot.box
+generated/pandas.DataFrame.plot.density,../reference/api/pandas.DataFrame.plot.density
+generated/pandas.DataFrame.plot.hexbin,../reference/api/pandas.DataFrame.plot.hexbin
+generated/pandas.DataFrame.plot.hist,../reference/api/pandas.DataFrame.plot.hist
+generated/pandas.DataFrame.plot,../reference/api/pandas.DataFrame.plot
+generated/pandas.DataFrame.plot.kde,../reference/api/pandas.DataFrame.plot.kde
+generated/pandas.DataFrame.plot.line,../reference/api/pandas.DataFrame.plot.line
+generated/pandas.DataFrame.plot.pie,../reference/api/pandas.DataFrame.plot.pie
+generated/pandas.DataFrame.plot.scatter,../reference/api/pandas.DataFrame.plot.scatter
+generated/pandas.DataFrame.pop,../reference/api/pandas.DataFrame.pop
+generated/pandas.DataFrame.pow,../reference/api/pandas.DataFrame.pow
+generated/pandas.DataFrame.prod,../reference/api/pandas.DataFrame.prod
+generated/pandas.DataFrame.product,../reference/api/pandas.DataFrame.product
+generated/pandas.DataFrame.quantile,../reference/api/pandas.DataFrame.quantile
+generated/pandas.DataFrame.query,../reference/api/pandas.DataFrame.query
+generated/pandas.DataFrame.radd,../reference/api/pandas.DataFrame.radd
+generated/pandas.DataFrame.rank,../reference/api/pandas.DataFrame.rank
+generated/pandas.DataFrame.rdiv,../reference/api/pandas.DataFrame.rdiv
+generated/pandas.DataFrame.reindex_axis,../reference/api/pandas.DataFrame.reindex_axis
+generated/pandas.DataFrame.reindex,../reference/api/pandas.DataFrame.reindex
+generated/pandas.DataFrame.reindex_like,../reference/api/pandas.DataFrame.reindex_like
+generated/pandas.DataFrame.rename_axis,../reference/api/pandas.DataFrame.rename_axis
+generated/pandas.DataFrame.rename,../reference/api/pandas.DataFrame.rename
+generated/pandas.DataFrame.reorder_levels,../reference/api/pandas.DataFrame.reorder_levels
+generated/pandas.DataFrame.replace,../reference/api/pandas.DataFrame.replace
+generated/pandas.DataFrame.resample,../reference/api/pandas.DataFrame.resample
+generated/pandas.DataFrame.reset_index,../reference/api/pandas.DataFrame.reset_index
+generated/pandas.DataFrame.rfloordiv,../reference/api/pandas.DataFrame.rfloordiv
+generated/pandas.DataFrame.rmod,../reference/api/pandas.DataFrame.rmod
+generated/pandas.DataFrame.rmul,../reference/api/pandas.DataFrame.rmul
+generated/pandas.DataFrame.rolling,../reference/api/pandas.DataFrame.rolling
+generated/pandas.DataFrame.round,../reference/api/pandas.DataFrame.round
+generated/pandas.DataFrame.rpow,../reference/api/pandas.DataFrame.rpow
+generated/pandas.DataFrame.rsub,../reference/api/pandas.DataFrame.rsub
+generated/pandas.DataFrame.rtruediv,../reference/api/pandas.DataFrame.rtruediv
+generated/pandas.DataFrame.sample,../reference/api/pandas.DataFrame.sample
+generated/pandas.DataFrame.select_dtypes,../reference/api/pandas.DataFrame.select_dtypes
+generated/pandas.DataFrame.select,../reference/api/pandas.DataFrame.select
+generated/pandas.DataFrame.sem,../reference/api/pandas.DataFrame.sem
+generated/pandas.DataFrame.set_axis,../reference/api/pandas.DataFrame.set_axis
+generated/pandas.DataFrame.set_index,../reference/api/pandas.DataFrame.set_index
+generated/pandas.DataFrame.set_value,../reference/api/pandas.DataFrame.set_value
+generated/pandas.DataFrame.shape,../reference/api/pandas.DataFrame.shape
+generated/pandas.DataFrame.shift,../reference/api/pandas.DataFrame.shift
+generated/pandas.DataFrame.size,../reference/api/pandas.DataFrame.size
+generated/pandas.DataFrame.skew,../reference/api/pandas.DataFrame.skew
+generated/pandas.DataFrame.slice_shift,../reference/api/pandas.DataFrame.slice_shift
+generated/pandas.DataFrame.sort_index,../reference/api/pandas.DataFrame.sort_index
+generated/pandas.DataFrame.sort_values,../reference/api/pandas.DataFrame.sort_values
+generated/pandas.DataFrame.squeeze,../reference/api/pandas.DataFrame.squeeze
+generated/pandas.DataFrame.stack,../reference/api/pandas.DataFrame.stack
+generated/pandas.DataFrame.std,../reference/api/pandas.DataFrame.std
+generated/pandas.DataFrame.style,../reference/api/pandas.DataFrame.style
+generated/pandas.DataFrame.sub,../reference/api/pandas.DataFrame.sub
+generated/pandas.DataFrame.subtract,../reference/api/pandas.DataFrame.subtract
+generated/pandas.DataFrame.sum,../reference/api/pandas.DataFrame.sum
+generated/pandas.DataFrame.swapaxes,../reference/api/pandas.DataFrame.swapaxes
+generated/pandas.DataFrame.swaplevel,../reference/api/pandas.DataFrame.swaplevel
+generated/pandas.DataFrame.tail,../reference/api/pandas.DataFrame.tail
+generated/pandas.DataFrame.take,../reference/api/pandas.DataFrame.take
+generated/pandas.DataFrame.T,../reference/api/pandas.DataFrame.T
+generated/pandas.DataFrame.timetuple,../reference/api/pandas.DataFrame.timetuple
+generated/pandas.DataFrame.to_clipboard,../reference/api/pandas.DataFrame.to_clipboard
+generated/pandas.DataFrame.to_csv,../reference/api/pandas.DataFrame.to_csv
+generated/pandas.DataFrame.to_dense,../reference/api/pandas.DataFrame.to_dense
+generated/pandas.DataFrame.to_dict,../reference/api/pandas.DataFrame.to_dict
+generated/pandas.DataFrame.to_excel,../reference/api/pandas.DataFrame.to_excel
+generated/pandas.DataFrame.to_feather,../reference/api/pandas.DataFrame.to_feather
+generated/pandas.DataFrame.to_gbq,../reference/api/pandas.DataFrame.to_gbq
+generated/pandas.DataFrame.to_hdf,../reference/api/pandas.DataFrame.to_hdf
+generated/pandas.DataFrame.to,../reference/api/pandas.DataFrame.to
+generated/pandas.DataFrame.to_json,../reference/api/pandas.DataFrame.to_json
+generated/pandas.DataFrame.to_latex,../reference/api/pandas.DataFrame.to_latex
+generated/pandas.DataFrame.to_msgpack,../reference/api/pandas.DataFrame.to_msgpack
+generated/pandas.DataFrame.to_numpy,../reference/api/pandas.DataFrame.to_numpy
+generated/pandas.DataFrame.to_panel,../reference/api/pandas.DataFrame.to_panel
+generated/pandas.DataFrame.to_parquet,../reference/api/pandas.DataFrame.to_parquet
+generated/pandas.DataFrame.to_period,../reference/api/pandas.DataFrame.to_period
+generated/pandas.DataFrame.to_pickle,../reference/api/pandas.DataFrame.to_pickle
+generated/pandas.DataFrame.to_records,../reference/api/pandas.DataFrame.to_records
+generated/pandas.DataFrame.to_sparse,../reference/api/pandas.DataFrame.to_sparse
+generated/pandas.DataFrame.to_sql,../reference/api/pandas.DataFrame.to_sql
+generated/pandas.DataFrame.to_stata,../reference/api/pandas.DataFrame.to_stata
+generated/pandas.DataFrame.to_string,../reference/api/pandas.DataFrame.to_string
+generated/pandas.DataFrame.to_timestamp,../reference/api/pandas.DataFrame.to_timestamp
+generated/pandas.DataFrame.to_xarray,../reference/api/pandas.DataFrame.to_xarray
+generated/pandas.DataFrame.transform,../reference/api/pandas.DataFrame.transform
+generated/pandas.DataFrame.transpose,../reference/api/pandas.DataFrame.transpose
+generated/pandas.DataFrame.truediv,../reference/api/pandas.DataFrame.truediv
+generated/pandas.DataFrame.truncate,../reference/api/pandas.DataFrame.truncate
+generated/pandas.DataFrame.tshift,../reference/api/pandas.DataFrame.tshift
+generated/pandas.DataFrame.tz_convert,../reference/api/pandas.DataFrame.tz_convert
+generated/pandas.DataFrame.tz_localize,../reference/api/pandas.DataFrame.tz_localize
+generated/pandas.DataFrame.unstack,../reference/api/pandas.DataFrame.unstack
+generated/pandas.DataFrame.update,../reference/api/pandas.DataFrame.update
+generated/pandas.DataFrame.values,../reference/api/pandas.DataFrame.values
+generated/pandas.DataFrame.var,../reference/api/pandas.DataFrame.var
+generated/pandas.DataFrame.where,../reference/api/pandas.DataFrame.where
+generated/pandas.DataFrame.xs,../reference/api/pandas.DataFrame.xs
+generated/pandas.date_range,../reference/api/pandas.date_range
+generated/pandas.DatetimeIndex.ceil,../reference/api/pandas.DatetimeIndex.ceil
+generated/pandas.DatetimeIndex.date,../reference/api/pandas.DatetimeIndex.date
+generated/pandas.DatetimeIndex.day,../reference/api/pandas.DatetimeIndex.day
+generated/pandas.DatetimeIndex.day_name,../reference/api/pandas.DatetimeIndex.day_name
+generated/pandas.DatetimeIndex.dayofweek,../reference/api/pandas.DatetimeIndex.dayofweek
+generated/pandas.DatetimeIndex.dayofyear,../reference/api/pandas.DatetimeIndex.dayofyear
+generated/pandas.DatetimeIndex.floor,../reference/api/pandas.DatetimeIndex.floor
+generated/pandas.DatetimeIndex.freq,../reference/api/pandas.DatetimeIndex.freq
+generated/pandas.DatetimeIndex.freqstr,../reference/api/pandas.DatetimeIndex.freqstr
+generated/pandas.DatetimeIndex.hour,../reference/api/pandas.DatetimeIndex.hour
+generated/pandas.DatetimeIndex,../reference/api/pandas.DatetimeIndex
+generated/pandas.DatetimeIndex.indexer_at_time,../reference/api/pandas.DatetimeIndex.indexer_at_time
+generated/pandas.DatetimeIndex.indexer_between_time,../reference/api/pandas.DatetimeIndex.indexer_between_time
+generated/pandas.DatetimeIndex.inferred_freq,../reference/api/pandas.DatetimeIndex.inferred_freq
+generated/pandas.DatetimeIndex.is_leap_year,../reference/api/pandas.DatetimeIndex.is_leap_year
+generated/pandas.DatetimeIndex.is_month_end,../reference/api/pandas.DatetimeIndex.is_month_end
+generated/pandas.DatetimeIndex.is_month_start,../reference/api/pandas.DatetimeIndex.is_month_start
+generated/pandas.DatetimeIndex.is_quarter_end,../reference/api/pandas.DatetimeIndex.is_quarter_end
+generated/pandas.DatetimeIndex.is_quarter_start,../reference/api/pandas.DatetimeIndex.is_quarter_start
+generated/pandas.DatetimeIndex.is_year_end,../reference/api/pandas.DatetimeIndex.is_year_end
+generated/pandas.DatetimeIndex.is_year_start,../reference/api/pandas.DatetimeIndex.is_year_start
+generated/pandas.DatetimeIndex.microsecond,../reference/api/pandas.DatetimeIndex.microsecond
+generated/pandas.DatetimeIndex.minute,../reference/api/pandas.DatetimeIndex.minute
+generated/pandas.DatetimeIndex.month,../reference/api/pandas.DatetimeIndex.month
+generated/pandas.DatetimeIndex.month_name,../reference/api/pandas.DatetimeIndex.month_name
+generated/pandas.DatetimeIndex.nanosecond,../reference/api/pandas.DatetimeIndex.nanosecond
+generated/pandas.DatetimeIndex.normalize,../reference/api/pandas.DatetimeIndex.normalize
+generated/pandas.DatetimeIndex.quarter,../reference/api/pandas.DatetimeIndex.quarter
+generated/pandas.DatetimeIndex.round,../reference/api/pandas.DatetimeIndex.round
+generated/pandas.DatetimeIndex.second,../reference/api/pandas.DatetimeIndex.second
+generated/pandas.DatetimeIndex.snap,../reference/api/pandas.DatetimeIndex.snap
+generated/pandas.DatetimeIndex.strftime,../reference/api/pandas.DatetimeIndex.strftime
+generated/pandas.DatetimeIndex.time,../reference/api/pandas.DatetimeIndex.time
+generated/pandas.DatetimeIndex.timetz,../reference/api/pandas.DatetimeIndex.timetz
+generated/pandas.DatetimeIndex.to_frame,../reference/api/pandas.DatetimeIndex.to_frame
+generated/pandas.DatetimeIndex.to_perioddelta,../reference/api/pandas.DatetimeIndex.to_perioddelta
+generated/pandas.DatetimeIndex.to_period,../reference/api/pandas.DatetimeIndex.to_period
+generated/pandas.DatetimeIndex.to_pydatetime,../reference/api/pandas.DatetimeIndex.to_pydatetime
+generated/pandas.DatetimeIndex.to_series,../reference/api/pandas.DatetimeIndex.to_series
+generated/pandas.DatetimeIndex.tz_convert,../reference/api/pandas.DatetimeIndex.tz_convert
+generated/pandas.DatetimeIndex.tz,../reference/api/pandas.DatetimeIndex.tz
+generated/pandas.DatetimeIndex.tz_localize,../reference/api/pandas.DatetimeIndex.tz_localize
+generated/pandas.DatetimeIndex.weekday,../reference/api/pandas.DatetimeIndex.weekday
+generated/pandas.DatetimeIndex.week,../reference/api/pandas.DatetimeIndex.week
+generated/pandas.DatetimeIndex.weekofyear,../reference/api/pandas.DatetimeIndex.weekofyear
+generated/pandas.DatetimeIndex.year,../reference/api/pandas.DatetimeIndex.year
+generated/pandas.DatetimeTZDtype.base,../reference/api/pandas.DatetimeTZDtype.base
+generated/pandas.DatetimeTZDtype.construct_array_type,../reference/api/pandas.DatetimeTZDtype.construct_array_type
+generated/pandas.DatetimeTZDtype.construct_from_string,../reference/api/pandas.DatetimeTZDtype.construct_from_string
+generated/pandas.DatetimeTZDtype,../reference/api/pandas.DatetimeTZDtype
+generated/pandas.DatetimeTZDtype.isbuiltin,../reference/api/pandas.DatetimeTZDtype.isbuiltin
+generated/pandas.DatetimeTZDtype.is_dtype,../reference/api/pandas.DatetimeTZDtype.is_dtype
+generated/pandas.DatetimeTZDtype.isnative,../reference/api/pandas.DatetimeTZDtype.isnative
+generated/pandas.DatetimeTZDtype.itemsize,../reference/api/pandas.DatetimeTZDtype.itemsize
+generated/pandas.DatetimeTZDtype.kind,../reference/api/pandas.DatetimeTZDtype.kind
+generated/pandas.DatetimeTZDtype.name,../reference/api/pandas.DatetimeTZDtype.name
+generated/pandas.DatetimeTZDtype.names,../reference/api/pandas.DatetimeTZDtype.names
+generated/pandas.DatetimeTZDtype.na_value,../reference/api/pandas.DatetimeTZDtype.na_value
+generated/pandas.DatetimeTZDtype.num,../reference/api/pandas.DatetimeTZDtype.num
+generated/pandas.DatetimeTZDtype.reset_cache,../reference/api/pandas.DatetimeTZDtype.reset_cache
+generated/pandas.DatetimeTZDtype.shape,../reference/api/pandas.DatetimeTZDtype.shape
+generated/pandas.DatetimeTZDtype.str,../reference/api/pandas.DatetimeTZDtype.str
+generated/pandas.DatetimeTZDtype.subdtype,../reference/api/pandas.DatetimeTZDtype.subdtype
+generated/pandas.DatetimeTZDtype.tz,../reference/api/pandas.DatetimeTZDtype.tz
+generated/pandas.DatetimeTZDtype.unit,../reference/api/pandas.DatetimeTZDtype.unit
+generated/pandas.describe_option,../reference/api/pandas.describe_option
+generated/pandas.errors.DtypeWarning,../reference/api/pandas.errors.DtypeWarning
+generated/pandas.errors.EmptyDataError,../reference/api/pandas.errors.EmptyDataError
+generated/pandas.errors.OutOfBoundsDatetime,../reference/api/pandas.errors.OutOfBoundsDatetime
+generated/pandas.errors.ParserError,../reference/api/pandas.errors.ParserError
+generated/pandas.errors.ParserWarning,../reference/api/pandas.errors.ParserWarning
+generated/pandas.errors.PerformanceWarning,../reference/api/pandas.errors.PerformanceWarning
+generated/pandas.errors.UnsortedIndexError,../reference/api/pandas.errors.UnsortedIndexError
+generated/pandas.errors.UnsupportedFunctionCall,../reference/api/pandas.errors.UnsupportedFunctionCall
+generated/pandas.eval,../reference/api/pandas.eval
+generated/pandas.ExcelFile.parse,../reference/api/pandas.ExcelFile.parse
+generated/pandas.ExcelWriter,../reference/api/pandas.ExcelWriter
+generated/pandas.factorize,../reference/api/pandas.factorize
+generated/pandas.Float64Index,../reference/api/pandas.Float64Index
+generated/pandas.get_dummies,../reference/api/pandas.get_dummies
+generated/pandas.get_option,../reference/api/pandas.get_option
+generated/pandas.Grouper,../reference/api/pandas.Grouper
+generated/pandas.HDFStore.append,../reference/api/pandas.HDFStore.append
+generated/pandas.HDFStore.get,../reference/api/pandas.HDFStore.get
+generated/pandas.HDFStore.groups,../reference/api/pandas.HDFStore.groups
+generated/pandas.HDFStore.info,../reference/api/pandas.HDFStore.info
+generated/pandas.HDFStore.keys,../reference/api/pandas.HDFStore.keys
+generated/pandas.HDFStore.put,../reference/api/pandas.HDFStore.put
+generated/pandas.HDFStore.select,../reference/api/pandas.HDFStore.select
+generated/pandas.HDFStore.walk,../reference/api/pandas.HDFStore.walk
+generated/pandas.Index.all,../reference/api/pandas.Index.all
+generated/pandas.Index.any,../reference/api/pandas.Index.any
+generated/pandas.Index.append,../reference/api/pandas.Index.append
+generated/pandas.Index.argmax,../reference/api/pandas.Index.argmax
+generated/pandas.Index.argmin,../reference/api/pandas.Index.argmin
+generated/pandas.Index.argsort,../reference/api/pandas.Index.argsort
+generated/pandas.Index.array,../reference/api/pandas.Index.array
+generated/pandas.Index.asi8,../reference/api/pandas.Index.asi8
+generated/pandas.Index.asof,../reference/api/pandas.Index.asof
+generated/pandas.Index.asof_locs,../reference/api/pandas.Index.asof_locs
+generated/pandas.Index.astype,../reference/api/pandas.Index.astype
+generated/pandas.Index.base,../reference/api/pandas.Index.base
+generated/pandas.Index.contains,../reference/api/pandas.Index.contains
+generated/pandas.Index.copy,../reference/api/pandas.Index.copy
+generated/pandas.Index.data,../reference/api/pandas.Index.data
+generated/pandas.Index.delete,../reference/api/pandas.Index.delete
+generated/pandas.Index.difference,../reference/api/pandas.Index.difference
+generated/pandas.Index.drop_duplicates,../reference/api/pandas.Index.drop_duplicates
+generated/pandas.Index.drop,../reference/api/pandas.Index.drop
+generated/pandas.Index.droplevel,../reference/api/pandas.Index.droplevel
+generated/pandas.Index.dropna,../reference/api/pandas.Index.dropna
+generated/pandas.Index.dtype,../reference/api/pandas.Index.dtype
+generated/pandas.Index.dtype_str,../reference/api/pandas.Index.dtype_str
+generated/pandas.Index.duplicated,../reference/api/pandas.Index.duplicated
+generated/pandas.Index.empty,../reference/api/pandas.Index.empty
+generated/pandas.Index.equals,../reference/api/pandas.Index.equals
+generated/pandas.Index.factorize,../reference/api/pandas.Index.factorize
+generated/pandas.Index.fillna,../reference/api/pandas.Index.fillna
+generated/pandas.Index.flags,../reference/api/pandas.Index.flags
+generated/pandas.Index.format,../reference/api/pandas.Index.format
+generated/pandas.Index.get_duplicates,../reference/api/pandas.Index.get_duplicates
+generated/pandas.Index.get_indexer_for,../reference/api/pandas.Index.get_indexer_for
+generated/pandas.Index.get_indexer,../reference/api/pandas.Index.get_indexer
+generated/pandas.Index.get_indexer_non_unique,../reference/api/pandas.Index.get_indexer_non_unique
+generated/pandas.Index.get_level_values,../reference/api/pandas.Index.get_level_values
+generated/pandas.Index.get_loc,../reference/api/pandas.Index.get_loc
+generated/pandas.Index.get_slice_bound,../reference/api/pandas.Index.get_slice_bound
+generated/pandas.Index.get_value,../reference/api/pandas.Index.get_value
+generated/pandas.Index.get_values,../reference/api/pandas.Index.get_values
+generated/pandas.Index.groupby,../reference/api/pandas.Index.groupby
+generated/pandas.Index.has_duplicates,../reference/api/pandas.Index.has_duplicates
+generated/pandas.Index.hasnans,../reference/api/pandas.Index.hasnans
+generated/pandas.Index.holds_integer,../reference/api/pandas.Index.holds_integer
+generated/pandas.Index,../reference/api/pandas.Index
+generated/pandas.Index.identical,../reference/api/pandas.Index.identical
+generated/pandas.Index.inferred_type,../reference/api/pandas.Index.inferred_type
+generated/pandas.Index.insert,../reference/api/pandas.Index.insert
+generated/pandas.Index.intersection,../reference/api/pandas.Index.intersection
+generated/pandas.Index.is_all_dates,../reference/api/pandas.Index.is_all_dates
+generated/pandas.Index.is_boolean,../reference/api/pandas.Index.is_boolean
+generated/pandas.Index.is_categorical,../reference/api/pandas.Index.is_categorical
+generated/pandas.Index.is_floating,../reference/api/pandas.Index.is_floating
+generated/pandas.Index.is_,../reference/api/pandas.Index.is_
+generated/pandas.Index.isin,../reference/api/pandas.Index.isin
+generated/pandas.Index.is_integer,../reference/api/pandas.Index.is_integer
+generated/pandas.Index.is_interval,../reference/api/pandas.Index.is_interval
+generated/pandas.Index.is_lexsorted_for_tuple,../reference/api/pandas.Index.is_lexsorted_for_tuple
+generated/pandas.Index.is_mixed,../reference/api/pandas.Index.is_mixed
+generated/pandas.Index.is_monotonic_decreasing,../reference/api/pandas.Index.is_monotonic_decreasing
+generated/pandas.Index.is_monotonic,../reference/api/pandas.Index.is_monotonic
+generated/pandas.Index.is_monotonic_increasing,../reference/api/pandas.Index.is_monotonic_increasing
+generated/pandas.Index.isna,../reference/api/pandas.Index.isna
+generated/pandas.Index.isnull,../reference/api/pandas.Index.isnull
+generated/pandas.Index.is_numeric,../reference/api/pandas.Index.is_numeric
+generated/pandas.Index.is_object,../reference/api/pandas.Index.is_object
+generated/pandas.Index.is_type_compatible,../reference/api/pandas.Index.is_type_compatible
+generated/pandas.Index.is_unique,../reference/api/pandas.Index.is_unique
+generated/pandas.Index.item,../reference/api/pandas.Index.item
+generated/pandas.Index.itemsize,../reference/api/pandas.Index.itemsize
+generated/pandas.Index.join,../reference/api/pandas.Index.join
+generated/pandas.Index.map,../reference/api/pandas.Index.map
+generated/pandas.Index.max,../reference/api/pandas.Index.max
+generated/pandas.Index.memory_usage,../reference/api/pandas.Index.memory_usage
+generated/pandas.Index.min,../reference/api/pandas.Index.min
+generated/pandas.Index.name,../reference/api/pandas.Index.name
+generated/pandas.Index.names,../reference/api/pandas.Index.names
+generated/pandas.Index.nbytes,../reference/api/pandas.Index.nbytes
+generated/pandas.Index.ndim,../reference/api/pandas.Index.ndim
+generated/pandas.Index.nlevels,../reference/api/pandas.Index.nlevels
+generated/pandas.Index.notna,../reference/api/pandas.Index.notna
+generated/pandas.Index.notnull,../reference/api/pandas.Index.notnull
+generated/pandas.Index.nunique,../reference/api/pandas.Index.nunique
+generated/pandas.Index.putmask,../reference/api/pandas.Index.putmask
+generated/pandas.Index.ravel,../reference/api/pandas.Index.ravel
+generated/pandas.Index.reindex,../reference/api/pandas.Index.reindex
+generated/pandas.Index.rename,../reference/api/pandas.Index.rename
+generated/pandas.Index.repeat,../reference/api/pandas.Index.repeat
+generated/pandas.Index.searchsorted,../reference/api/pandas.Index.searchsorted
+generated/pandas.Index.set_names,../reference/api/pandas.Index.set_names
+generated/pandas.Index.set_value,../reference/api/pandas.Index.set_value
+generated/pandas.Index.shape,../reference/api/pandas.Index.shape
+generated/pandas.Index.shift,../reference/api/pandas.Index.shift
+generated/pandas.Index.size,../reference/api/pandas.Index.size
+generated/pandas.IndexSlice,../reference/api/pandas.IndexSlice
+generated/pandas.Index.slice_indexer,../reference/api/pandas.Index.slice_indexer
+generated/pandas.Index.slice_locs,../reference/api/pandas.Index.slice_locs
+generated/pandas.Index.sort,../reference/api/pandas.Index.sort
+generated/pandas.Index.sortlevel,../reference/api/pandas.Index.sortlevel
+generated/pandas.Index.sort_values,../reference/api/pandas.Index.sort_values
+generated/pandas.Index.str,../reference/api/pandas.Index.str
+generated/pandas.Index.strides,../reference/api/pandas.Index.strides
+generated/pandas.Index.summary,../reference/api/pandas.Index.summary
+generated/pandas.Index.symmetric_difference,../reference/api/pandas.Index.symmetric_difference
+generated/pandas.Index.take,../reference/api/pandas.Index.take
+generated/pandas.Index.T,../reference/api/pandas.Index.T
+generated/pandas.Index.to_flat_index,../reference/api/pandas.Index.to_flat_index
+generated/pandas.Index.to_frame,../reference/api/pandas.Index.to_frame
+generated/pandas.Index.to_list,../reference/api/pandas.Index.to_list
+generated/pandas.Index.tolist,../reference/api/pandas.Index.tolist
+generated/pandas.Index.to_native_types,../reference/api/pandas.Index.to_native_types
+generated/pandas.Index.to_numpy,../reference/api/pandas.Index.to_numpy
+generated/pandas.Index.to_series,../reference/api/pandas.Index.to_series
+generated/pandas.Index.transpose,../reference/api/pandas.Index.transpose
+generated/pandas.Index.union,../reference/api/pandas.Index.union
+generated/pandas.Index.unique,../reference/api/pandas.Index.unique
+generated/pandas.Index.value_counts,../reference/api/pandas.Index.value_counts
+generated/pandas.Index.values,../reference/api/pandas.Index.values
+generated/pandas.Index.view,../reference/api/pandas.Index.view
+generated/pandas.Index.where,../reference/api/pandas.Index.where
+generated/pandas.infer_freq,../reference/api/pandas.infer_freq
+generated/pandas.Interval.closed,../reference/api/pandas.Interval.closed
+generated/pandas.Interval.closed_left,../reference/api/pandas.Interval.closed_left
+generated/pandas.Interval.closed_right,../reference/api/pandas.Interval.closed_right
+generated/pandas.Interval,../reference/api/pandas.Interval
+generated/pandas.IntervalIndex.closed,../reference/api/pandas.IntervalIndex.closed
+generated/pandas.IntervalIndex.contains,../reference/api/pandas.IntervalIndex.contains
+generated/pandas.IntervalIndex.from_arrays,../reference/api/pandas.IntervalIndex.from_arrays
+generated/pandas.IntervalIndex.from_breaks,../reference/api/pandas.IntervalIndex.from_breaks
+generated/pandas.IntervalIndex.from_tuples,../reference/api/pandas.IntervalIndex.from_tuples
+generated/pandas.IntervalIndex.get_indexer,../reference/api/pandas.IntervalIndex.get_indexer
+generated/pandas.IntervalIndex.get_loc,../reference/api/pandas.IntervalIndex.get_loc
+generated/pandas.IntervalIndex,../reference/api/pandas.IntervalIndex
+generated/pandas.IntervalIndex.is_non_overlapping_monotonic,../reference/api/pandas.IntervalIndex.is_non_overlapping_monotonic
+generated/pandas.IntervalIndex.is_overlapping,../reference/api/pandas.IntervalIndex.is_overlapping
+generated/pandas.IntervalIndex.left,../reference/api/pandas.IntervalIndex.left
+generated/pandas.IntervalIndex.length,../reference/api/pandas.IntervalIndex.length
+generated/pandas.IntervalIndex.mid,../reference/api/pandas.IntervalIndex.mid
+generated/pandas.IntervalIndex.overlaps,../reference/api/pandas.IntervalIndex.overlaps
+generated/pandas.IntervalIndex.right,../reference/api/pandas.IntervalIndex.right
+generated/pandas.IntervalIndex.set_closed,../reference/api/pandas.IntervalIndex.set_closed
+generated/pandas.IntervalIndex.to_tuples,../reference/api/pandas.IntervalIndex.to_tuples
+generated/pandas.IntervalIndex.values,../reference/api/pandas.IntervalIndex.values
+generated/pandas.Interval.left,../reference/api/pandas.Interval.left
+generated/pandas.Interval.length,../reference/api/pandas.Interval.length
+generated/pandas.Interval.mid,../reference/api/pandas.Interval.mid
+generated/pandas.Interval.open_left,../reference/api/pandas.Interval.open_left
+generated/pandas.Interval.open_right,../reference/api/pandas.Interval.open_right
+generated/pandas.Interval.overlaps,../reference/api/pandas.Interval.overlaps
+generated/pandas.interval_range,../reference/api/pandas.interval_range
+generated/pandas.Interval.right,../reference/api/pandas.Interval.right
+generated/pandas.io.formats.style.Styler.apply,../reference/api/pandas.io.formats.style.Styler.apply
+generated/pandas.io.formats.style.Styler.applymap,../reference/api/pandas.io.formats.style.Styler.applymap
+generated/pandas.io.formats.style.Styler.background_gradient,../reference/api/pandas.io.formats.style.Styler.background_gradient
+generated/pandas.io.formats.style.Styler.bar,../reference/api/pandas.io.formats.style.Styler.bar
+generated/pandas.io.formats.style.Styler.clear,../reference/api/pandas.io.formats.style.Styler.clear
+generated/pandas.io.formats.style.Styler.env,../reference/api/pandas.io.formats.style.Styler.env
+generated/pandas.io.formats.style.Styler.export,../reference/api/pandas.io.formats.style.Styler.export
+generated/pandas.io.formats.style.Styler.format,../reference/api/pandas.io.formats.style.Styler.format
+generated/pandas.io.formats.style.Styler.from_custom_template,../reference/api/pandas.io.formats.style.Styler.from_custom_template
+generated/pandas.io.formats.style.Styler.hide_columns,../reference/api/pandas.io.formats.style.Styler.hide_columns
+generated/pandas.io.formats.style.Styler.hide_index,../reference/api/pandas.io.formats.style.Styler.hide_index
+generated/pandas.io.formats.style.Styler.highlight_max,../reference/api/pandas.io.formats.style.Styler.highlight_max
+generated/pandas.io.formats.style.Styler.highlight_min,../reference/api/pandas.io.formats.style.Styler.highlight_min
+generated/pandas.io.formats.style.Styler.highlight_null,../reference/api/pandas.io.formats.style.Styler.highlight_null
+generated/pandas.io.formats.style.Styler,../reference/api/pandas.io.formats.style.Styler
+generated/pandas.io.formats.style.Styler.loader,../reference/api/pandas.io.formats.style.Styler.loader
+generated/pandas.io.formats.style.Styler.pipe,../reference/api/pandas.io.formats.style.Styler.pipe
+generated/pandas.io.formats.style.Styler.render,../reference/api/pandas.io.formats.style.Styler.render
+generated/pandas.io.formats.style.Styler.set_caption,../reference/api/pandas.io.formats.style.Styler.set_caption
+generated/pandas.io.formats.style.Styler.set_precision,../reference/api/pandas.io.formats.style.Styler.set_precision
+generated/pandas.io.formats.style.Styler.set_properties,../reference/api/pandas.io.formats.style.Styler.set_properties
+generated/pandas.io.formats.style.Styler.set_table_attributes,../reference/api/pandas.io.formats.style.Styler.set_table_attributes
+generated/pandas.io.formats.style.Styler.set_table_styles,../reference/api/pandas.io.formats.style.Styler.set_table_styles
+generated/pandas.io.formats.style.Styler.set_uuid,../reference/api/pandas.io.formats.style.Styler.set_uuid
+generated/pandas.io.formats.style.Styler.template,../reference/api/pandas.io.formats.style.Styler.template
+generated/pandas.io.formats.style.Styler.to_excel,../reference/api/pandas.io.formats.style.Styler.to_excel
+generated/pandas.io.formats.style.Styler.use,../reference/api/pandas.io.formats.style.Styler.use
+generated/pandas.io.formats.style.Styler.where,../reference/api/pandas.io.formats.style.Styler.where
+generated/pandas.io.json.build_table_schema,../reference/api/pandas.io.json.build_table_schema
+generated/pandas.io.json.json_normalize,../reference/api/pandas.io.json.json_normalize
+generated/pandas.io.stata.StataReader.data,../reference/api/pandas.io.stata.StataReader.data
+generated/pandas.io.stata.StataReader.data_label,../reference/api/pandas.io.stata.StataReader.data_label
+generated/pandas.io.stata.StataReader.value_labels,../reference/api/pandas.io.stata.StataReader.value_labels
+generated/pandas.io.stata.StataReader.variable_labels,../reference/api/pandas.io.stata.StataReader.variable_labels
+generated/pandas.io.stata.StataWriter.write_file,../reference/api/pandas.io.stata.StataWriter.write_file
+generated/pandas.isna,../reference/api/pandas.isna
+generated/pandas.isnull,../reference/api/pandas.isnull
+generated/pandas.melt,../reference/api/pandas.melt
+generated/pandas.merge_asof,../reference/api/pandas.merge_asof
+generated/pandas.merge,../reference/api/pandas.merge
+generated/pandas.merge_ordered,../reference/api/pandas.merge_ordered
+generated/pandas.MultiIndex.codes,../reference/api/pandas.MultiIndex.codes
+generated/pandas.MultiIndex.droplevel,../reference/api/pandas.MultiIndex.droplevel
+generated/pandas.MultiIndex.from_arrays,../reference/api/pandas.MultiIndex.from_arrays
+generated/pandas.MultiIndex.from_frame,../reference/api/pandas.MultiIndex.from_frame
+generated/pandas.MultiIndex.from_product,../reference/api/pandas.MultiIndex.from_product
+generated/pandas.MultiIndex.from_tuples,../reference/api/pandas.MultiIndex.from_tuples
+generated/pandas.MultiIndex.get_indexer,../reference/api/pandas.MultiIndex.get_indexer
+generated/pandas.MultiIndex.get_level_values,../reference/api/pandas.MultiIndex.get_level_values
+generated/pandas.MultiIndex.get_loc,../reference/api/pandas.MultiIndex.get_loc
+generated/pandas.MultiIndex.get_loc_level,../reference/api/pandas.MultiIndex.get_loc_level
+generated/pandas.MultiIndex,../reference/api/pandas.MultiIndex
+generated/pandas.MultiIndex.is_lexsorted,../reference/api/pandas.MultiIndex.is_lexsorted
+generated/pandas.MultiIndex.levels,../reference/api/pandas.MultiIndex.levels
+generated/pandas.MultiIndex.levshape,../reference/api/pandas.MultiIndex.levshape
+generated/pandas.MultiIndex.names,../reference/api/pandas.MultiIndex.names
+generated/pandas.MultiIndex.nlevels,../reference/api/pandas.MultiIndex.nlevels
+generated/pandas.MultiIndex.remove_unused_levels,../reference/api/pandas.MultiIndex.remove_unused_levels
+generated/pandas.MultiIndex.reorder_levels,../reference/api/pandas.MultiIndex.reorder_levels
+generated/pandas.MultiIndex.set_codes,../reference/api/pandas.MultiIndex.set_codes
+generated/pandas.MultiIndex.set_levels,../reference/api/pandas.MultiIndex.set_levels
+generated/pandas.MultiIndex.sortlevel,../reference/api/pandas.MultiIndex.sortlevel
+generated/pandas.MultiIndex.swaplevel,../reference/api/pandas.MultiIndex.swaplevel
+generated/pandas.MultiIndex.to_flat_index,../reference/api/pandas.MultiIndex.to_flat_index
+generated/pandas.MultiIndex.to_frame,../reference/api/pandas.MultiIndex.to_frame
+generated/pandas.MultiIndex.to_hierarchical,../reference/api/pandas.MultiIndex.to_hierarchical
+generated/pandas.notna,../reference/api/pandas.notna
+generated/pandas.notnull,../reference/api/pandas.notnull
+generated/pandas.option_context,../reference/api/pandas.option_context
+generated/pandas.Panel.abs,../reference/api/pandas.Panel.abs
+generated/pandas.Panel.add,../reference/api/pandas.Panel.add
+generated/pandas.Panel.add_prefix,../reference/api/pandas.Panel.add_prefix
+generated/pandas.Panel.add_suffix,../reference/api/pandas.Panel.add_suffix
+generated/pandas.Panel.agg,../reference/api/pandas.Panel.agg
+generated/pandas.Panel.aggregate,../reference/api/pandas.Panel.aggregate
+generated/pandas.Panel.align,../reference/api/pandas.Panel.align
+generated/pandas.Panel.all,../reference/api/pandas.Panel.all
+generated/pandas.Panel.any,../reference/api/pandas.Panel.any
+generated/pandas.Panel.apply,../reference/api/pandas.Panel.apply
+generated/pandas.Panel.as_blocks,../reference/api/pandas.Panel.as_blocks
+generated/pandas.Panel.asfreq,../reference/api/pandas.Panel.asfreq
+generated/pandas.Panel.as_matrix,../reference/api/pandas.Panel.as_matrix
+generated/pandas.Panel.asof,../reference/api/pandas.Panel.asof
+generated/pandas.Panel.astype,../reference/api/pandas.Panel.astype
+generated/pandas.Panel.at,../reference/api/pandas.Panel.at
+generated/pandas.Panel.at_time,../reference/api/pandas.Panel.at_time
+generated/pandas.Panel.axes,../reference/api/pandas.Panel.axes
+generated/pandas.Panel.between_time,../reference/api/pandas.Panel.between_time
+generated/pandas.Panel.bfill,../reference/api/pandas.Panel.bfill
+generated/pandas.Panel.blocks,../reference/api/pandas.Panel.blocks
+generated/pandas.Panel.bool,../reference/api/pandas.Panel.bool
+generated/pandas.Panel.clip,../reference/api/pandas.Panel.clip
+generated/pandas.Panel.clip_lower,../reference/api/pandas.Panel.clip_lower
+generated/pandas.Panel.clip_upper,../reference/api/pandas.Panel.clip_upper
+generated/pandas.Panel.compound,../reference/api/pandas.Panel.compound
+generated/pandas.Panel.conform,../reference/api/pandas.Panel.conform
+generated/pandas.Panel.convert_objects,../reference/api/pandas.Panel.convert_objects
+generated/pandas.Panel.copy,../reference/api/pandas.Panel.copy
+generated/pandas.Panel.count,../reference/api/pandas.Panel.count
+generated/pandas.Panel.cummax,../reference/api/pandas.Panel.cummax
+generated/pandas.Panel.cummin,../reference/api/pandas.Panel.cummin
+generated/pandas.Panel.cumprod,../reference/api/pandas.Panel.cumprod
+generated/pandas.Panel.cumsum,../reference/api/pandas.Panel.cumsum
+generated/pandas.Panel.describe,../reference/api/pandas.Panel.describe
+generated/pandas.Panel.div,../reference/api/pandas.Panel.div
+generated/pandas.Panel.divide,../reference/api/pandas.Panel.divide
+generated/pandas.Panel.drop,../reference/api/pandas.Panel.drop
+generated/pandas.Panel.droplevel,../reference/api/pandas.Panel.droplevel
+generated/pandas.Panel.dropna,../reference/api/pandas.Panel.dropna
+generated/pandas.Panel.dtypes,../reference/api/pandas.Panel.dtypes
+generated/pandas.Panel.empty,../reference/api/pandas.Panel.empty
+generated/pandas.Panel.eq,../reference/api/pandas.Panel.eq
+generated/pandas.Panel.equals,../reference/api/pandas.Panel.equals
+generated/pandas.Panel.ffill,../reference/api/pandas.Panel.ffill
+generated/pandas.Panel.fillna,../reference/api/pandas.Panel.fillna
+generated/pandas.Panel.filter,../reference/api/pandas.Panel.filter
+generated/pandas.Panel.first,../reference/api/pandas.Panel.first
+generated/pandas.Panel.first_valid_index,../reference/api/pandas.Panel.first_valid_index
+generated/pandas.Panel.floordiv,../reference/api/pandas.Panel.floordiv
+generated/pandas.Panel.from_dict,../reference/api/pandas.Panel.from_dict
+generated/pandas.Panel.fromDict,../reference/api/pandas.Panel.fromDict
+generated/pandas.Panel.ftypes,../reference/api/pandas.Panel.ftypes
+generated/pandas.Panel.ge,../reference/api/pandas.Panel.ge
+generated/pandas.Panel.get_dtype_counts,../reference/api/pandas.Panel.get_dtype_counts
+generated/pandas.Panel.get_ftype_counts,../reference/api/pandas.Panel.get_ftype_counts
+generated/pandas.Panel.get,../reference/api/pandas.Panel.get
+generated/pandas.Panel.get_value,../reference/api/pandas.Panel.get_value
+generated/pandas.Panel.get_values,../reference/api/pandas.Panel.get_values
+generated/pandas.Panel.groupby,../reference/api/pandas.Panel.groupby
+generated/pandas.Panel.gt,../reference/api/pandas.Panel.gt
+generated/pandas.Panel.head,../reference/api/pandas.Panel.head
+generated/pandas.Panel,../reference/api/pandas.Panel
+generated/pandas.Panel.iat,../reference/api/pandas.Panel.iat
+generated/pandas.Panel.iloc,../reference/api/pandas.Panel.iloc
+generated/pandas.Panel.infer_objects,../reference/api/pandas.Panel.infer_objects
+generated/pandas.Panel.interpolate,../reference/api/pandas.Panel.interpolate
+generated/pandas.Panel.is_copy,../reference/api/pandas.Panel.is_copy
+generated/pandas.Panel.isna,../reference/api/pandas.Panel.isna
+generated/pandas.Panel.isnull,../reference/api/pandas.Panel.isnull
+generated/pandas.Panel.items,../reference/api/pandas.Panel.items
+generated/pandas.Panel.__iter__,../reference/api/pandas.Panel.__iter__
+generated/pandas.Panel.iteritems,../reference/api/pandas.Panel.iteritems
+generated/pandas.Panel.ix,../reference/api/pandas.Panel.ix
+generated/pandas.Panel.join,../reference/api/pandas.Panel.join
+generated/pandas.Panel.keys,../reference/api/pandas.Panel.keys
+generated/pandas.Panel.kurt,../reference/api/pandas.Panel.kurt
+generated/pandas.Panel.kurtosis,../reference/api/pandas.Panel.kurtosis
+generated/pandas.Panel.last,../reference/api/pandas.Panel.last
+generated/pandas.Panel.last_valid_index,../reference/api/pandas.Panel.last_valid_index
+generated/pandas.Panel.le,../reference/api/pandas.Panel.le
+generated/pandas.Panel.loc,../reference/api/pandas.Panel.loc
+generated/pandas.Panel.lt,../reference/api/pandas.Panel.lt
+generated/pandas.Panel.mad,../reference/api/pandas.Panel.mad
+generated/pandas.Panel.major_axis,../reference/api/pandas.Panel.major_axis
+generated/pandas.Panel.major_xs,../reference/api/pandas.Panel.major_xs
+generated/pandas.Panel.mask,../reference/api/pandas.Panel.mask
+generated/pandas.Panel.max,../reference/api/pandas.Panel.max
+generated/pandas.Panel.mean,../reference/api/pandas.Panel.mean
+generated/pandas.Panel.median,../reference/api/pandas.Panel.median
+generated/pandas.Panel.min,../reference/api/pandas.Panel.min
+generated/pandas.Panel.minor_axis,../reference/api/pandas.Panel.minor_axis
+generated/pandas.Panel.minor_xs,../reference/api/pandas.Panel.minor_xs
+generated/pandas.Panel.mod,../reference/api/pandas.Panel.mod
+generated/pandas.Panel.mul,../reference/api/pandas.Panel.mul
+generated/pandas.Panel.multiply,../reference/api/pandas.Panel.multiply
+generated/pandas.Panel.ndim,../reference/api/pandas.Panel.ndim
+generated/pandas.Panel.ne,../reference/api/pandas.Panel.ne
+generated/pandas.Panel.notna,../reference/api/pandas.Panel.notna
+generated/pandas.Panel.notnull,../reference/api/pandas.Panel.notnull
+generated/pandas.Panel.pct_change,../reference/api/pandas.Panel.pct_change
+generated/pandas.Panel.pipe,../reference/api/pandas.Panel.pipe
+generated/pandas.Panel.pop,../reference/api/pandas.Panel.pop
+generated/pandas.Panel.pow,../reference/api/pandas.Panel.pow
+generated/pandas.Panel.prod,../reference/api/pandas.Panel.prod
+generated/pandas.Panel.product,../reference/api/pandas.Panel.product
+generated/pandas.Panel.radd,../reference/api/pandas.Panel.radd
+generated/pandas.Panel.rank,../reference/api/pandas.Panel.rank
+generated/pandas.Panel.rdiv,../reference/api/pandas.Panel.rdiv
+generated/pandas.Panel.reindex_axis,../reference/api/pandas.Panel.reindex_axis
+generated/pandas.Panel.reindex,../reference/api/pandas.Panel.reindex
+generated/pandas.Panel.reindex_like,../reference/api/pandas.Panel.reindex_like
+generated/pandas.Panel.rename_axis,../reference/api/pandas.Panel.rename_axis
+generated/pandas.Panel.rename,../reference/api/pandas.Panel.rename
+generated/pandas.Panel.replace,../reference/api/pandas.Panel.replace
+generated/pandas.Panel.resample,../reference/api/pandas.Panel.resample
+generated/pandas.Panel.rfloordiv,../reference/api/pandas.Panel.rfloordiv
+generated/pandas.Panel.rmod,../reference/api/pandas.Panel.rmod
+generated/pandas.Panel.rmul,../reference/api/pandas.Panel.rmul
+generated/pandas.Panel.round,../reference/api/pandas.Panel.round
+generated/pandas.Panel.rpow,../reference/api/pandas.Panel.rpow
+generated/pandas.Panel.rsub,../reference/api/pandas.Panel.rsub
+generated/pandas.Panel.rtruediv,../reference/api/pandas.Panel.rtruediv
+generated/pandas.Panel.sample,../reference/api/pandas.Panel.sample
+generated/pandas.Panel.select,../reference/api/pandas.Panel.select
+generated/pandas.Panel.sem,../reference/api/pandas.Panel.sem
+generated/pandas.Panel.set_axis,../reference/api/pandas.Panel.set_axis
+generated/pandas.Panel.set_value,../reference/api/pandas.Panel.set_value
+generated/pandas.Panel.shape,../reference/api/pandas.Panel.shape
+generated/pandas.Panel.shift,../reference/api/pandas.Panel.shift
+generated/pandas.Panel.size,../reference/api/pandas.Panel.size
+generated/pandas.Panel.skew,../reference/api/pandas.Panel.skew
+generated/pandas.Panel.slice_shift,../reference/api/pandas.Panel.slice_shift
+generated/pandas.Panel.sort_index,../reference/api/pandas.Panel.sort_index
+generated/pandas.Panel.sort_values,../reference/api/pandas.Panel.sort_values
+generated/pandas.Panel.squeeze,../reference/api/pandas.Panel.squeeze
+generated/pandas.Panel.std,../reference/api/pandas.Panel.std
+generated/pandas.Panel.sub,../reference/api/pandas.Panel.sub
+generated/pandas.Panel.subtract,../reference/api/pandas.Panel.subtract
+generated/pandas.Panel.sum,../reference/api/pandas.Panel.sum
+generated/pandas.Panel.swapaxes,../reference/api/pandas.Panel.swapaxes
+generated/pandas.Panel.swaplevel,../reference/api/pandas.Panel.swaplevel
+generated/pandas.Panel.tail,../reference/api/pandas.Panel.tail
+generated/pandas.Panel.take,../reference/api/pandas.Panel.take
+generated/pandas.Panel.timetuple,../reference/api/pandas.Panel.timetuple
+generated/pandas.Panel.to_clipboard,../reference/api/pandas.Panel.to_clipboard
+generated/pandas.Panel.to_csv,../reference/api/pandas.Panel.to_csv
+generated/pandas.Panel.to_dense,../reference/api/pandas.Panel.to_dense
+generated/pandas.Panel.to_excel,../reference/api/pandas.Panel.to_excel
+generated/pandas.Panel.to_frame,../reference/api/pandas.Panel.to_frame
+generated/pandas.Panel.to_hdf,../reference/api/pandas.Panel.to_hdf
+generated/pandas.Panel.to_json,../reference/api/pandas.Panel.to_json
+generated/pandas.Panel.to_latex,../reference/api/pandas.Panel.to_latex
+generated/pandas.Panel.to_msgpack,../reference/api/pandas.Panel.to_msgpack
+generated/pandas.Panel.to_pickle,../reference/api/pandas.Panel.to_pickle
+generated/pandas.Panel.to_sparse,../reference/api/pandas.Panel.to_sparse
+generated/pandas.Panel.to_sql,../reference/api/pandas.Panel.to_sql
+generated/pandas.Panel.to_xarray,../reference/api/pandas.Panel.to_xarray
+generated/pandas.Panel.transform,../reference/api/pandas.Panel.transform
+generated/pandas.Panel.transpose,../reference/api/pandas.Panel.transpose
+generated/pandas.Panel.truediv,../reference/api/pandas.Panel.truediv
+generated/pandas.Panel.truncate,../reference/api/pandas.Panel.truncate
+generated/pandas.Panel.tshift,../reference/api/pandas.Panel.tshift
+generated/pandas.Panel.tz_convert,../reference/api/pandas.Panel.tz_convert
+generated/pandas.Panel.tz_localize,../reference/api/pandas.Panel.tz_localize
+generated/pandas.Panel.update,../reference/api/pandas.Panel.update
+generated/pandas.Panel.values,../reference/api/pandas.Panel.values
+generated/pandas.Panel.var,../reference/api/pandas.Panel.var
+generated/pandas.Panel.where,../reference/api/pandas.Panel.where
+generated/pandas.Panel.xs,../reference/api/pandas.Panel.xs
+generated/pandas.Period.asfreq,../reference/api/pandas.Period.asfreq
+generated/pandas.Period.day,../reference/api/pandas.Period.day
+generated/pandas.Period.dayofweek,../reference/api/pandas.Period.dayofweek
+generated/pandas.Period.dayofyear,../reference/api/pandas.Period.dayofyear
+generated/pandas.Period.days_in_month,../reference/api/pandas.Period.days_in_month
+generated/pandas.Period.daysinmonth,../reference/api/pandas.Period.daysinmonth
+generated/pandas.Period.end_time,../reference/api/pandas.Period.end_time
+generated/pandas.Period.freq,../reference/api/pandas.Period.freq
+generated/pandas.Period.freqstr,../reference/api/pandas.Period.freqstr
+generated/pandas.Period.hour,../reference/api/pandas.Period.hour
+generated/pandas.Period,../reference/api/pandas.Period
+generated/pandas.PeriodIndex.asfreq,../reference/api/pandas.PeriodIndex.asfreq
+generated/pandas.PeriodIndex.day,../reference/api/pandas.PeriodIndex.day
+generated/pandas.PeriodIndex.dayofweek,../reference/api/pandas.PeriodIndex.dayofweek
+generated/pandas.PeriodIndex.dayofyear,../reference/api/pandas.PeriodIndex.dayofyear
+generated/pandas.PeriodIndex.days_in_month,../reference/api/pandas.PeriodIndex.days_in_month
+generated/pandas.PeriodIndex.daysinmonth,../reference/api/pandas.PeriodIndex.daysinmonth
+generated/pandas.PeriodIndex.end_time,../reference/api/pandas.PeriodIndex.end_time
+generated/pandas.PeriodIndex.freq,../reference/api/pandas.PeriodIndex.freq
+generated/pandas.PeriodIndex.freqstr,../reference/api/pandas.PeriodIndex.freqstr
+generated/pandas.PeriodIndex.hour,../reference/api/pandas.PeriodIndex.hour
+generated/pandas.PeriodIndex,../reference/api/pandas.PeriodIndex
+generated/pandas.PeriodIndex.is_leap_year,../reference/api/pandas.PeriodIndex.is_leap_year
+generated/pandas.PeriodIndex.minute,../reference/api/pandas.PeriodIndex.minute
+generated/pandas.PeriodIndex.month,../reference/api/pandas.PeriodIndex.month
+generated/pandas.PeriodIndex.quarter,../reference/api/pandas.PeriodIndex.quarter
+generated/pandas.PeriodIndex.qyear,../reference/api/pandas.PeriodIndex.qyear
+generated/pandas.PeriodIndex.second,../reference/api/pandas.PeriodIndex.second
+generated/pandas.PeriodIndex.start_time,../reference/api/pandas.PeriodIndex.start_time
+generated/pandas.PeriodIndex.strftime,../reference/api/pandas.PeriodIndex.strftime
+generated/pandas.PeriodIndex.to_timestamp,../reference/api/pandas.PeriodIndex.to_timestamp
+generated/pandas.PeriodIndex.weekday,../reference/api/pandas.PeriodIndex.weekday
+generated/pandas.PeriodIndex.week,../reference/api/pandas.PeriodIndex.week
+generated/pandas.PeriodIndex.weekofyear,../reference/api/pandas.PeriodIndex.weekofyear
+generated/pandas.PeriodIndex.year,../reference/api/pandas.PeriodIndex.year
+generated/pandas.Period.is_leap_year,../reference/api/pandas.Period.is_leap_year
+generated/pandas.Period.minute,../reference/api/pandas.Period.minute
+generated/pandas.Period.month,../reference/api/pandas.Period.month
+generated/pandas.Period.now,../reference/api/pandas.Period.now
+generated/pandas.Period.ordinal,../reference/api/pandas.Period.ordinal
+generated/pandas.Period.quarter,../reference/api/pandas.Period.quarter
+generated/pandas.Period.qyear,../reference/api/pandas.Period.qyear
+generated/pandas.period_range,../reference/api/pandas.period_range
+generated/pandas.Period.second,../reference/api/pandas.Period.second
+generated/pandas.Period.start_time,../reference/api/pandas.Period.start_time
+generated/pandas.Period.strftime,../reference/api/pandas.Period.strftime
+generated/pandas.Period.to_timestamp,../reference/api/pandas.Period.to_timestamp
+generated/pandas.Period.weekday,../reference/api/pandas.Period.weekday
+generated/pandas.Period.week,../reference/api/pandas.Period.week
+generated/pandas.Period.weekofyear,../reference/api/pandas.Period.weekofyear
+generated/pandas.Period.year,../reference/api/pandas.Period.year
+generated/pandas.pivot,../reference/api/pandas.pivot
+generated/pandas.pivot_table,../reference/api/pandas.pivot_table
+generated/pandas.plotting.andrews_curves,../reference/api/pandas.plotting.andrews_curves
+generated/pandas.plotting.bootstrap_plot,../reference/api/pandas.plotting.bootstrap_plot
+generated/pandas.plotting.deregister_matplotlib_converters,../reference/api/pandas.plotting.deregister_matplotlib_converters
+generated/pandas.plotting.lag_plot,../reference/api/pandas.plotting.lag_plot
+generated/pandas.plotting.parallel_coordinates,../reference/api/pandas.plotting.parallel_coordinates
+generated/pandas.plotting.radviz,../reference/api/pandas.plotting.radviz
+generated/pandas.plotting.register_matplotlib_converters,../reference/api/pandas.plotting.register_matplotlib_converters
+generated/pandas.plotting.scatter_matrix,../reference/api/pandas.plotting.scatter_matrix
+generated/pandas.qcut,../reference/api/pandas.qcut
+generated/pandas.RangeIndex.from_range,../reference/api/pandas.RangeIndex.from_range
+generated/pandas.RangeIndex,../reference/api/pandas.RangeIndex
+generated/pandas.read_clipboard,../reference/api/pandas.read_clipboard
+generated/pandas.read_csv,../reference/api/pandas.read_csv
+generated/pandas.read_excel,../reference/api/pandas.read_excel
+generated/pandas.read_feather,../reference/api/pandas.read_feather
+generated/pandas.read_fwf,../reference/api/pandas.read_fwf
+generated/pandas.read_gbq,../reference/api/pandas.read_gbq
+generated/pandas.read_hdf,../reference/api/pandas.read_hdf
+generated/pandas.read,../reference/api/pandas.read
+generated/pandas.read_json,../reference/api/pandas.read_json
+generated/pandas.read_msgpack,../reference/api/pandas.read_msgpack
+generated/pandas.read_parquet,../reference/api/pandas.read_parquet
+generated/pandas.read_pickle,../reference/api/pandas.read_pickle
+generated/pandas.read_sas,../reference/api/pandas.read_sas
+generated/pandas.read_sql,../reference/api/pandas.read_sql
+generated/pandas.read_sql_query,../reference/api/pandas.read_sql_query
+generated/pandas.read_sql_table,../reference/api/pandas.read_sql_table
+generated/pandas.read_stata,../reference/api/pandas.read_stata
+generated/pandas.read_table,../reference/api/pandas.read_table
+generated/pandas.reset_option,../reference/api/pandas.reset_option
+generated/pandas.Series.abs,../reference/api/pandas.Series.abs
+generated/pandas.Series.add,../reference/api/pandas.Series.add
+generated/pandas.Series.add_prefix,../reference/api/pandas.Series.add_prefix
+generated/pandas.Series.add_suffix,../reference/api/pandas.Series.add_suffix
+generated/pandas.Series.agg,../reference/api/pandas.Series.agg
+generated/pandas.Series.aggregate,../reference/api/pandas.Series.aggregate
+generated/pandas.Series.align,../reference/api/pandas.Series.align
+generated/pandas.Series.all,../reference/api/pandas.Series.all
+generated/pandas.Series.any,../reference/api/pandas.Series.any
+generated/pandas.Series.append,../reference/api/pandas.Series.append
+generated/pandas.Series.apply,../reference/api/pandas.Series.apply
+generated/pandas.Series.argmax,../reference/api/pandas.Series.argmax
+generated/pandas.Series.argmin,../reference/api/pandas.Series.argmin
+generated/pandas.Series.argsort,../reference/api/pandas.Series.argsort
+generated/pandas.Series.__array__,../reference/api/pandas.Series.__array__
+generated/pandas.Series.array,../reference/api/pandas.Series.array
+generated/pandas.Series.as_blocks,../reference/api/pandas.Series.as_blocks
+generated/pandas.Series.asfreq,../reference/api/pandas.Series.asfreq
+generated/pandas.Series.as_matrix,../reference/api/pandas.Series.as_matrix
+generated/pandas.Series.asobject,../reference/api/pandas.Series.asobject
+generated/pandas.Series.asof,../reference/api/pandas.Series.asof
+generated/pandas.Series.astype,../reference/api/pandas.Series.astype
+generated/pandas.Series.at,../reference/api/pandas.Series.at
+generated/pandas.Series.at_time,../reference/api/pandas.Series.at_time
+generated/pandas.Series.autocorr,../reference/api/pandas.Series.autocorr
+generated/pandas.Series.axes,../reference/api/pandas.Series.axes
+generated/pandas.Series.base,../reference/api/pandas.Series.base
+generated/pandas.Series.between,../reference/api/pandas.Series.between
+generated/pandas.Series.between_time,../reference/api/pandas.Series.between_time
+generated/pandas.Series.bfill,../reference/api/pandas.Series.bfill
+generated/pandas.Series.blocks,../reference/api/pandas.Series.blocks
+generated/pandas.Series.bool,../reference/api/pandas.Series.bool
+generated/pandas.Series.cat.add_categories,../reference/api/pandas.Series.cat.add_categories
+generated/pandas.Series.cat.as_ordered,../reference/api/pandas.Series.cat.as_ordered
+generated/pandas.Series.cat.as_unordered,../reference/api/pandas.Series.cat.as_unordered
+generated/pandas.Series.cat.categories,../reference/api/pandas.Series.cat.categories
+generated/pandas.Series.cat.codes,../reference/api/pandas.Series.cat.codes
+generated/pandas.Series.cat,../reference/api/pandas.Series.cat
+generated/pandas.Series.cat.ordered,../reference/api/pandas.Series.cat.ordered
+generated/pandas.Series.cat.remove_categories,../reference/api/pandas.Series.cat.remove_categories
+generated/pandas.Series.cat.remove_unused_categories,../reference/api/pandas.Series.cat.remove_unused_categories
+generated/pandas.Series.cat.rename_categories,../reference/api/pandas.Series.cat.rename_categories
+generated/pandas.Series.cat.reorder_categories,../reference/api/pandas.Series.cat.reorder_categories
+generated/pandas.Series.cat.set_categories,../reference/api/pandas.Series.cat.set_categories
+generated/pandas.Series.clip,../reference/api/pandas.Series.clip
+generated/pandas.Series.clip_lower,../reference/api/pandas.Series.clip_lower
+generated/pandas.Series.clip_upper,../reference/api/pandas.Series.clip_upper
+generated/pandas.Series.combine_first,../reference/api/pandas.Series.combine_first
+generated/pandas.Series.combine,../reference/api/pandas.Series.combine
+generated/pandas.Series.compound,../reference/api/pandas.Series.compound
+generated/pandas.Series.compress,../reference/api/pandas.Series.compress
+generated/pandas.Series.convert_objects,../reference/api/pandas.Series.convert_objects
+generated/pandas.Series.copy,../reference/api/pandas.Series.copy
+generated/pandas.Series.corr,../reference/api/pandas.Series.corr
+generated/pandas.Series.count,../reference/api/pandas.Series.count
+generated/pandas.Series.cov,../reference/api/pandas.Series.cov
+generated/pandas.Series.cummax,../reference/api/pandas.Series.cummax
+generated/pandas.Series.cummin,../reference/api/pandas.Series.cummin
+generated/pandas.Series.cumprod,../reference/api/pandas.Series.cumprod
+generated/pandas.Series.cumsum,../reference/api/pandas.Series.cumsum
+generated/pandas.Series.data,../reference/api/pandas.Series.data
+generated/pandas.Series.describe,../reference/api/pandas.Series.describe
+generated/pandas.Series.diff,../reference/api/pandas.Series.diff
+generated/pandas.Series.div,../reference/api/pandas.Series.div
+generated/pandas.Series.divide,../reference/api/pandas.Series.divide
+generated/pandas.Series.divmod,../reference/api/pandas.Series.divmod
+generated/pandas.Series.dot,../reference/api/pandas.Series.dot
+generated/pandas.Series.drop_duplicates,../reference/api/pandas.Series.drop_duplicates
+generated/pandas.Series.drop,../reference/api/pandas.Series.drop
+generated/pandas.Series.droplevel,../reference/api/pandas.Series.droplevel
+generated/pandas.Series.dropna,../reference/api/pandas.Series.dropna
+generated/pandas.Series.dt.ceil,../reference/api/pandas.Series.dt.ceil
+generated/pandas.Series.dt.components,../reference/api/pandas.Series.dt.components
+generated/pandas.Series.dt.date,../reference/api/pandas.Series.dt.date
+generated/pandas.Series.dt.day,../reference/api/pandas.Series.dt.day
+generated/pandas.Series.dt.day_name,../reference/api/pandas.Series.dt.day_name
+generated/pandas.Series.dt.dayofweek,../reference/api/pandas.Series.dt.dayofweek
+generated/pandas.Series.dt.dayofyear,../reference/api/pandas.Series.dt.dayofyear
+generated/pandas.Series.dt.days,../reference/api/pandas.Series.dt.days
+generated/pandas.Series.dt.days_in_month,../reference/api/pandas.Series.dt.days_in_month
+generated/pandas.Series.dt.daysinmonth,../reference/api/pandas.Series.dt.daysinmonth
+generated/pandas.Series.dt.end_time,../reference/api/pandas.Series.dt.end_time
+generated/pandas.Series.dt.floor,../reference/api/pandas.Series.dt.floor
+generated/pandas.Series.dt.freq,../reference/api/pandas.Series.dt.freq
+generated/pandas.Series.dt.hour,../reference/api/pandas.Series.dt.hour
+generated/pandas.Series.dt,../reference/api/pandas.Series.dt
+generated/pandas.Series.dt.is_leap_year,../reference/api/pandas.Series.dt.is_leap_year
+generated/pandas.Series.dt.is_month_end,../reference/api/pandas.Series.dt.is_month_end
+generated/pandas.Series.dt.is_month_start,../reference/api/pandas.Series.dt.is_month_start
+generated/pandas.Series.dt.is_quarter_end,../reference/api/pandas.Series.dt.is_quarter_end
+generated/pandas.Series.dt.is_quarter_start,../reference/api/pandas.Series.dt.is_quarter_start
+generated/pandas.Series.dt.is_year_end,../reference/api/pandas.Series.dt.is_year_end
+generated/pandas.Series.dt.is_year_start,../reference/api/pandas.Series.dt.is_year_start
+generated/pandas.Series.dt.microsecond,../reference/api/pandas.Series.dt.microsecond
+generated/pandas.Series.dt.microseconds,../reference/api/pandas.Series.dt.microseconds
+generated/pandas.Series.dt.minute,../reference/api/pandas.Series.dt.minute
+generated/pandas.Series.dt.month,../reference/api/pandas.Series.dt.month
+generated/pandas.Series.dt.month_name,../reference/api/pandas.Series.dt.month_name
+generated/pandas.Series.dt.nanosecond,../reference/api/pandas.Series.dt.nanosecond
+generated/pandas.Series.dt.nanoseconds,../reference/api/pandas.Series.dt.nanoseconds
+generated/pandas.Series.dt.normalize,../reference/api/pandas.Series.dt.normalize
+generated/pandas.Series.dt.quarter,../reference/api/pandas.Series.dt.quarter
+generated/pandas.Series.dt.qyear,../reference/api/pandas.Series.dt.qyear
+generated/pandas.Series.dt.round,../reference/api/pandas.Series.dt.round
+generated/pandas.Series.dt.second,../reference/api/pandas.Series.dt.second
+generated/pandas.Series.dt.seconds,../reference/api/pandas.Series.dt.seconds
+generated/pandas.Series.dt.start_time,../reference/api/pandas.Series.dt.start_time
+generated/pandas.Series.dt.strftime,../reference/api/pandas.Series.dt.strftime
+generated/pandas.Series.dt.time,../reference/api/pandas.Series.dt.time
+generated/pandas.Series.dt.timetz,../reference/api/pandas.Series.dt.timetz
+generated/pandas.Series.dt.to_period,../reference/api/pandas.Series.dt.to_period
+generated/pandas.Series.dt.to_pydatetime,../reference/api/pandas.Series.dt.to_pydatetime
+generated/pandas.Series.dt.to_pytimedelta,../reference/api/pandas.Series.dt.to_pytimedelta
+generated/pandas.Series.dt.total_seconds,../reference/api/pandas.Series.dt.total_seconds
+generated/pandas.Series.dt.tz_convert,../reference/api/pandas.Series.dt.tz_convert
+generated/pandas.Series.dt.tz,../reference/api/pandas.Series.dt.tz
+generated/pandas.Series.dt.tz_localize,../reference/api/pandas.Series.dt.tz_localize
+generated/pandas.Series.dt.weekday,../reference/api/pandas.Series.dt.weekday
+generated/pandas.Series.dt.week,../reference/api/pandas.Series.dt.week
+generated/pandas.Series.dt.weekofyear,../reference/api/pandas.Series.dt.weekofyear
+generated/pandas.Series.dt.year,../reference/api/pandas.Series.dt.year
+generated/pandas.Series.dtype,../reference/api/pandas.Series.dtype
+generated/pandas.Series.dtypes,../reference/api/pandas.Series.dtypes
+generated/pandas.Series.duplicated,../reference/api/pandas.Series.duplicated
+generated/pandas.Series.empty,../reference/api/pandas.Series.empty
+generated/pandas.Series.eq,../reference/api/pandas.Series.eq
+generated/pandas.Series.equals,../reference/api/pandas.Series.equals
+generated/pandas.Series.ewm,../reference/api/pandas.Series.ewm
+generated/pandas.Series.expanding,../reference/api/pandas.Series.expanding
+generated/pandas.Series.factorize,../reference/api/pandas.Series.factorize
+generated/pandas.Series.ffill,../reference/api/pandas.Series.ffill
+generated/pandas.Series.fillna,../reference/api/pandas.Series.fillna
+generated/pandas.Series.filter,../reference/api/pandas.Series.filter
+generated/pandas.Series.first,../reference/api/pandas.Series.first
+generated/pandas.Series.first_valid_index,../reference/api/pandas.Series.first_valid_index
+generated/pandas.Series.flags,../reference/api/pandas.Series.flags
+generated/pandas.Series.floordiv,../reference/api/pandas.Series.floordiv
+generated/pandas.Series.from_array,../reference/api/pandas.Series.from_array
+generated/pandas.Series.from_csv,../reference/api/pandas.Series.from_csv
+generated/pandas.Series.ftype,../reference/api/pandas.Series.ftype
+generated/pandas.Series.ftypes,../reference/api/pandas.Series.ftypes
+generated/pandas.Series.ge,../reference/api/pandas.Series.ge
+generated/pandas.Series.get_dtype_counts,../reference/api/pandas.Series.get_dtype_counts
+generated/pandas.Series.get_ftype_counts,../reference/api/pandas.Series.get_ftype_counts
+generated/pandas.Series.get,../reference/api/pandas.Series.get
+generated/pandas.Series.get_value,../reference/api/pandas.Series.get_value
+generated/pandas.Series.get_values,../reference/api/pandas.Series.get_values
+generated/pandas.Series.groupby,../reference/api/pandas.Series.groupby
+generated/pandas.Series.gt,../reference/api/pandas.Series.gt
+generated/pandas.Series.hasnans,../reference/api/pandas.Series.hasnans
+generated/pandas.Series.head,../reference/api/pandas.Series.head
+generated/pandas.Series.hist,../reference/api/pandas.Series.hist
+generated/pandas.Series,../reference/api/pandas.Series
+generated/pandas.Series.iat,../reference/api/pandas.Series.iat
+generated/pandas.Series.idxmax,../reference/api/pandas.Series.idxmax
+generated/pandas.Series.idxmin,../reference/api/pandas.Series.idxmin
+generated/pandas.Series.iloc,../reference/api/pandas.Series.iloc
+generated/pandas.Series.imag,../reference/api/pandas.Series.imag
+generated/pandas.Series.index,../reference/api/pandas.Series.index
+generated/pandas.Series.infer_objects,../reference/api/pandas.Series.infer_objects
+generated/pandas.Series.interpolate,../reference/api/pandas.Series.interpolate
+generated/pandas.Series.is_copy,../reference/api/pandas.Series.is_copy
+generated/pandas.Series.isin,../reference/api/pandas.Series.isin
+generated/pandas.Series.is_monotonic_decreasing,../reference/api/pandas.Series.is_monotonic_decreasing
+generated/pandas.Series.is_monotonic,../reference/api/pandas.Series.is_monotonic
+generated/pandas.Series.is_monotonic_increasing,../reference/api/pandas.Series.is_monotonic_increasing
+generated/pandas.Series.isna,../reference/api/pandas.Series.isna
+generated/pandas.Series.isnull,../reference/api/pandas.Series.isnull
+generated/pandas.Series.is_unique,../reference/api/pandas.Series.is_unique
+generated/pandas.Series.item,../reference/api/pandas.Series.item
+generated/pandas.Series.items,../reference/api/pandas.Series.items
+generated/pandas.Series.itemsize,../reference/api/pandas.Series.itemsize
+generated/pandas.Series.__iter__,../reference/api/pandas.Series.__iter__
+generated/pandas.Series.iteritems,../reference/api/pandas.Series.iteritems
+generated/pandas.Series.ix,../reference/api/pandas.Series.ix
+generated/pandas.Series.keys,../reference/api/pandas.Series.keys
+generated/pandas.Series.kurt,../reference/api/pandas.Series.kurt
+generated/pandas.Series.kurtosis,../reference/api/pandas.Series.kurtosis
+generated/pandas.Series.last,../reference/api/pandas.Series.last
+generated/pandas.Series.last_valid_index,../reference/api/pandas.Series.last_valid_index
+generated/pandas.Series.le,../reference/api/pandas.Series.le
+generated/pandas.Series.loc,../reference/api/pandas.Series.loc
+generated/pandas.Series.lt,../reference/api/pandas.Series.lt
+generated/pandas.Series.mad,../reference/api/pandas.Series.mad
+generated/pandas.Series.map,../reference/api/pandas.Series.map
+generated/pandas.Series.mask,../reference/api/pandas.Series.mask
+generated/pandas.Series.max,../reference/api/pandas.Series.max
+generated/pandas.Series.mean,../reference/api/pandas.Series.mean
+generated/pandas.Series.median,../reference/api/pandas.Series.median
+generated/pandas.Series.memory_usage,../reference/api/pandas.Series.memory_usage
+generated/pandas.Series.min,../reference/api/pandas.Series.min
+generated/pandas.Series.mode,../reference/api/pandas.Series.mode
+generated/pandas.Series.mod,../reference/api/pandas.Series.mod
+generated/pandas.Series.mul,../reference/api/pandas.Series.mul
+generated/pandas.Series.multiply,../reference/api/pandas.Series.multiply
+generated/pandas.Series.name,../reference/api/pandas.Series.name
+generated/pandas.Series.nbytes,../reference/api/pandas.Series.nbytes
+generated/pandas.Series.ndim,../reference/api/pandas.Series.ndim
+generated/pandas.Series.ne,../reference/api/pandas.Series.ne
+generated/pandas.Series.nlargest,../reference/api/pandas.Series.nlargest
+generated/pandas.Series.nonzero,../reference/api/pandas.Series.nonzero
+generated/pandas.Series.notna,../reference/api/pandas.Series.notna
+generated/pandas.Series.notnull,../reference/api/pandas.Series.notnull
+generated/pandas.Series.nsmallest,../reference/api/pandas.Series.nsmallest
+generated/pandas.Series.nunique,../reference/api/pandas.Series.nunique
+generated/pandas.Series.pct_change,../reference/api/pandas.Series.pct_change
+generated/pandas.Series.pipe,../reference/api/pandas.Series.pipe
+generated/pandas.Series.plot.area,../reference/api/pandas.Series.plot.area
+generated/pandas.Series.plot.barh,../reference/api/pandas.Series.plot.barh
+generated/pandas.Series.plot.bar,../reference/api/pandas.Series.plot.bar
+generated/pandas.Series.plot.box,../reference/api/pandas.Series.plot.box
+generated/pandas.Series.plot.density,../reference/api/pandas.Series.plot.density
+generated/pandas.Series.plot.hist,../reference/api/pandas.Series.plot.hist
+generated/pandas.Series.plot,../reference/api/pandas.Series.plot
+generated/pandas.Series.plot.kde,../reference/api/pandas.Series.plot.kde
+generated/pandas.Series.plot.line,../reference/api/pandas.Series.plot.line
+generated/pandas.Series.plot.pie,../reference/api/pandas.Series.plot.pie
+generated/pandas.Series.pop,../reference/api/pandas.Series.pop
+generated/pandas.Series.pow,../reference/api/pandas.Series.pow
+generated/pandas.Series.prod,../reference/api/pandas.Series.prod
+generated/pandas.Series.product,../reference/api/pandas.Series.product
+generated/pandas.Series.ptp,../reference/api/pandas.Series.ptp
+generated/pandas.Series.put,../reference/api/pandas.Series.put
+generated/pandas.Series.quantile,../reference/api/pandas.Series.quantile
+generated/pandas.Series.radd,../reference/api/pandas.Series.radd
+generated/pandas.Series.rank,../reference/api/pandas.Series.rank
+generated/pandas.Series.ravel,../reference/api/pandas.Series.ravel
+generated/pandas.Series.rdiv,../reference/api/pandas.Series.rdiv
+generated/pandas.Series.rdivmod,../reference/api/pandas.Series.rdivmod
+generated/pandas.Series.real,../reference/api/pandas.Series.real
+generated/pandas.Series.reindex_axis,../reference/api/pandas.Series.reindex_axis
+generated/pandas.Series.reindex,../reference/api/pandas.Series.reindex
+generated/pandas.Series.reindex_like,../reference/api/pandas.Series.reindex_like
+generated/pandas.Series.rename_axis,../reference/api/pandas.Series.rename_axis
+generated/pandas.Series.rename,../reference/api/pandas.Series.rename
+generated/pandas.Series.reorder_levels,../reference/api/pandas.Series.reorder_levels
+generated/pandas.Series.repeat,../reference/api/pandas.Series.repeat
+generated/pandas.Series.replace,../reference/api/pandas.Series.replace
+generated/pandas.Series.resample,../reference/api/pandas.Series.resample
+generated/pandas.Series.reset_index,../reference/api/pandas.Series.reset_index
+generated/pandas.Series.rfloordiv,../reference/api/pandas.Series.rfloordiv
+generated/pandas.Series.rmod,../reference/api/pandas.Series.rmod
+generated/pandas.Series.rmul,../reference/api/pandas.Series.rmul
+generated/pandas.Series.rolling,../reference/api/pandas.Series.rolling
+generated/pandas.Series.round,../reference/api/pandas.Series.round
+generated/pandas.Series.rpow,../reference/api/pandas.Series.rpow
+generated/pandas.Series.rsub,../reference/api/pandas.Series.rsub
+generated/pandas.Series.rtruediv,../reference/api/pandas.Series.rtruediv
+generated/pandas.Series.sample,../reference/api/pandas.Series.sample
+generated/pandas.Series.searchsorted,../reference/api/pandas.Series.searchsorted
+generated/pandas.Series.select,../reference/api/pandas.Series.select
+generated/pandas.Series.sem,../reference/api/pandas.Series.sem
+generated/pandas.Series.set_axis,../reference/api/pandas.Series.set_axis
+generated/pandas.Series.set_value,../reference/api/pandas.Series.set_value
+generated/pandas.Series.shape,../reference/api/pandas.Series.shape
+generated/pandas.Series.shift,../reference/api/pandas.Series.shift
+generated/pandas.Series.size,../reference/api/pandas.Series.size
+generated/pandas.Series.skew,../reference/api/pandas.Series.skew
+generated/pandas.Series.slice_shift,../reference/api/pandas.Series.slice_shift
+generated/pandas.Series.sort_index,../reference/api/pandas.Series.sort_index
+generated/pandas.Series.sort_values,../reference/api/pandas.Series.sort_values
+generated/pandas.Series.sparse.density,../reference/api/pandas.Series.sparse.density
+generated/pandas.Series.sparse.fill_value,../reference/api/pandas.Series.sparse.fill_value
+generated/pandas.Series.sparse.from_coo,../reference/api/pandas.Series.sparse.from_coo
+generated/pandas.Series.sparse.npoints,../reference/api/pandas.Series.sparse.npoints
+generated/pandas.Series.sparse.sp_values,../reference/api/pandas.Series.sparse.sp_values
+generated/pandas.Series.sparse.to_coo,../reference/api/pandas.Series.sparse.to_coo
+generated/pandas.Series.squeeze,../reference/api/pandas.Series.squeeze
+generated/pandas.Series.std,../reference/api/pandas.Series.std
+generated/pandas.Series.str.capitalize,../reference/api/pandas.Series.str.capitalize
+generated/pandas.Series.str.cat,../reference/api/pandas.Series.str.cat
+generated/pandas.Series.str.center,../reference/api/pandas.Series.str.center
+generated/pandas.Series.str.contains,../reference/api/pandas.Series.str.contains
+generated/pandas.Series.str.count,../reference/api/pandas.Series.str.count
+generated/pandas.Series.str.decode,../reference/api/pandas.Series.str.decode
+generated/pandas.Series.str.encode,../reference/api/pandas.Series.str.encode
+generated/pandas.Series.str.endswith,../reference/api/pandas.Series.str.endswith
+generated/pandas.Series.str.extractall,../reference/api/pandas.Series.str.extractall
+generated/pandas.Series.str.extract,../reference/api/pandas.Series.str.extract
+generated/pandas.Series.str.findall,../reference/api/pandas.Series.str.findall
+generated/pandas.Series.str.find,../reference/api/pandas.Series.str.find
+generated/pandas.Series.str.get_dummies,../reference/api/pandas.Series.str.get_dummies
+generated/pandas.Series.str.get,../reference/api/pandas.Series.str.get
+generated/pandas.Series.str,../reference/api/pandas.Series.str
+generated/pandas.Series.strides,../reference/api/pandas.Series.strides
+generated/pandas.Series.str.index,../reference/api/pandas.Series.str.index
+generated/pandas.Series.str.isalnum,../reference/api/pandas.Series.str.isalnum
+generated/pandas.Series.str.isalpha,../reference/api/pandas.Series.str.isalpha
+generated/pandas.Series.str.isdecimal,../reference/api/pandas.Series.str.isdecimal
+generated/pandas.Series.str.isdigit,../reference/api/pandas.Series.str.isdigit
+generated/pandas.Series.str.islower,../reference/api/pandas.Series.str.islower
+generated/pandas.Series.str.isnumeric,../reference/api/pandas.Series.str.isnumeric
+generated/pandas.Series.str.isspace,../reference/api/pandas.Series.str.isspace
+generated/pandas.Series.str.istitle,../reference/api/pandas.Series.str.istitle
+generated/pandas.Series.str.isupper,../reference/api/pandas.Series.str.isupper
+generated/pandas.Series.str.join,../reference/api/pandas.Series.str.join
+generated/pandas.Series.str.len,../reference/api/pandas.Series.str.len
+generated/pandas.Series.str.ljust,../reference/api/pandas.Series.str.ljust
+generated/pandas.Series.str.lower,../reference/api/pandas.Series.str.lower
+generated/pandas.Series.str.lstrip,../reference/api/pandas.Series.str.lstrip
+generated/pandas.Series.str.match,../reference/api/pandas.Series.str.match
+generated/pandas.Series.str.normalize,../reference/api/pandas.Series.str.normalize
+generated/pandas.Series.str.pad,../reference/api/pandas.Series.str.pad
+generated/pandas.Series.str.partition,../reference/api/pandas.Series.str.partition
+generated/pandas.Series.str.repeat,../reference/api/pandas.Series.str.repeat
+generated/pandas.Series.str.replace,../reference/api/pandas.Series.str.replace
+generated/pandas.Series.str.rfind,../reference/api/pandas.Series.str.rfind
+generated/pandas.Series.str.rindex,../reference/api/pandas.Series.str.rindex
+generated/pandas.Series.str.rjust,../reference/api/pandas.Series.str.rjust
+generated/pandas.Series.str.rpartition,../reference/api/pandas.Series.str.rpartition
+generated/pandas.Series.str.rsplit,../reference/api/pandas.Series.str.rsplit
+generated/pandas.Series.str.rstrip,../reference/api/pandas.Series.str.rstrip
+generated/pandas.Series.str.slice,../reference/api/pandas.Series.str.slice
+generated/pandas.Series.str.slice_replace,../reference/api/pandas.Series.str.slice_replace
+generated/pandas.Series.str.split,../reference/api/pandas.Series.str.split
+generated/pandas.Series.str.startswith,../reference/api/pandas.Series.str.startswith
+generated/pandas.Series.str.strip,../reference/api/pandas.Series.str.strip
+generated/pandas.Series.str.swapcase,../reference/api/pandas.Series.str.swapcase
+generated/pandas.Series.str.title,../reference/api/pandas.Series.str.title
+generated/pandas.Series.str.translate,../reference/api/pandas.Series.str.translate
+generated/pandas.Series.str.upper,../reference/api/pandas.Series.str.upper
+generated/pandas.Series.str.wrap,../reference/api/pandas.Series.str.wrap
+generated/pandas.Series.str.zfill,../reference/api/pandas.Series.str.zfill
+generated/pandas.Series.sub,../reference/api/pandas.Series.sub
+generated/pandas.Series.subtract,../reference/api/pandas.Series.subtract
+generated/pandas.Series.sum,../reference/api/pandas.Series.sum
+generated/pandas.Series.swapaxes,../reference/api/pandas.Series.swapaxes
+generated/pandas.Series.swaplevel,../reference/api/pandas.Series.swaplevel
+generated/pandas.Series.tail,../reference/api/pandas.Series.tail
+generated/pandas.Series.take,../reference/api/pandas.Series.take
+generated/pandas.Series.T,../reference/api/pandas.Series.T
+generated/pandas.Series.timetuple,../reference/api/pandas.Series.timetuple
+generated/pandas.Series.to_clipboard,../reference/api/pandas.Series.to_clipboard
+generated/pandas.Series.to_csv,../reference/api/pandas.Series.to_csv
+generated/pandas.Series.to_dense,../reference/api/pandas.Series.to_dense
+generated/pandas.Series.to_dict,../reference/api/pandas.Series.to_dict
+generated/pandas.Series.to_excel,../reference/api/pandas.Series.to_excel
+generated/pandas.Series.to_frame,../reference/api/pandas.Series.to_frame
+generated/pandas.Series.to_hdf,../reference/api/pandas.Series.to_hdf
+generated/pandas.Series.to_json,../reference/api/pandas.Series.to_json
+generated/pandas.Series.to_latex,../reference/api/pandas.Series.to_latex
+generated/pandas.Series.to_list,../reference/api/pandas.Series.to_list
+generated/pandas.Series.tolist,../reference/api/pandas.Series.tolist
+generated/pandas.Series.to_msgpack,../reference/api/pandas.Series.to_msgpack
+generated/pandas.Series.to_numpy,../reference/api/pandas.Series.to_numpy
+generated/pandas.Series.to_period,../reference/api/pandas.Series.to_period
+generated/pandas.Series.to_pickle,../reference/api/pandas.Series.to_pickle
+generated/pandas.Series.to_sparse,../reference/api/pandas.Series.to_sparse
+generated/pandas.Series.to_sql,../reference/api/pandas.Series.to_sql
+generated/pandas.Series.to_string,../reference/api/pandas.Series.to_string
+generated/pandas.Series.to_timestamp,../reference/api/pandas.Series.to_timestamp
+generated/pandas.Series.to_xarray,../reference/api/pandas.Series.to_xarray
+generated/pandas.Series.transform,../reference/api/pandas.Series.transform
+generated/pandas.Series.transpose,../reference/api/pandas.Series.transpose
+generated/pandas.Series.truediv,../reference/api/pandas.Series.truediv
+generated/pandas.Series.truncate,../reference/api/pandas.Series.truncate
+generated/pandas.Series.tshift,../reference/api/pandas.Series.tshift
+generated/pandas.Series.tz_convert,../reference/api/pandas.Series.tz_convert
+generated/pandas.Series.tz_localize,../reference/api/pandas.Series.tz_localize
+generated/pandas.Series.unique,../reference/api/pandas.Series.unique
+generated/pandas.Series.unstack,../reference/api/pandas.Series.unstack
+generated/pandas.Series.update,../reference/api/pandas.Series.update
+generated/pandas.Series.valid,../reference/api/pandas.Series.valid
+generated/pandas.Series.value_counts,../reference/api/pandas.Series.value_counts
+generated/pandas.Series.values,../reference/api/pandas.Series.values
+generated/pandas.Series.var,../reference/api/pandas.Series.var
+generated/pandas.Series.view,../reference/api/pandas.Series.view
+generated/pandas.Series.where,../reference/api/pandas.Series.where
+generated/pandas.Series.xs,../reference/api/pandas.Series.xs
+generated/pandas.set_option,../reference/api/pandas.set_option
+generated/pandas.SparseDataFrame.to_coo,../reference/api/pandas.SparseDataFrame.to_coo
+generated/pandas.SparseSeries.from_coo,../reference/api/pandas.SparseSeries.from_coo
+generated/pandas.SparseSeries.to_coo,../reference/api/pandas.SparseSeries.to_coo
+generated/pandas.test,../reference/api/pandas.test
+generated/pandas.testing.assert_frame_equal,../reference/api/pandas.testing.assert_frame_equal
+generated/pandas.testing.assert_index_equal,../reference/api/pandas.testing.assert_index_equal
+generated/pandas.testing.assert_series_equal,../reference/api/pandas.testing.assert_series_equal
+generated/pandas.Timedelta.asm8,../reference/api/pandas.Timedelta.asm8
+generated/pandas.Timedelta.ceil,../reference/api/pandas.Timedelta.ceil
+generated/pandas.Timedelta.components,../reference/api/pandas.Timedelta.components
+generated/pandas.Timedelta.days,../reference/api/pandas.Timedelta.days
+generated/pandas.Timedelta.delta,../reference/api/pandas.Timedelta.delta
+generated/pandas.Timedelta.floor,../reference/api/pandas.Timedelta.floor
+generated/pandas.Timedelta.freq,../reference/api/pandas.Timedelta.freq
+generated/pandas.Timedelta,../reference/api/pandas.Timedelta
+generated/pandas.TimedeltaIndex.ceil,../reference/api/pandas.TimedeltaIndex.ceil
+generated/pandas.TimedeltaIndex.components,../reference/api/pandas.TimedeltaIndex.components
+generated/pandas.TimedeltaIndex.days,../reference/api/pandas.TimedeltaIndex.days
+generated/pandas.TimedeltaIndex.floor,../reference/api/pandas.TimedeltaIndex.floor
+generated/pandas.TimedeltaIndex,../reference/api/pandas.TimedeltaIndex
+generated/pandas.TimedeltaIndex.inferred_freq,../reference/api/pandas.TimedeltaIndex.inferred_freq
+generated/pandas.TimedeltaIndex.microseconds,../reference/api/pandas.TimedeltaIndex.microseconds
+generated/pandas.TimedeltaIndex.nanoseconds,../reference/api/pandas.TimedeltaIndex.nanoseconds
+generated/pandas.TimedeltaIndex.round,../reference/api/pandas.TimedeltaIndex.round
+generated/pandas.TimedeltaIndex.seconds,../reference/api/pandas.TimedeltaIndex.seconds
+generated/pandas.TimedeltaIndex.to_frame,../reference/api/pandas.TimedeltaIndex.to_frame
+generated/pandas.TimedeltaIndex.to_pytimedelta,../reference/api/pandas.TimedeltaIndex.to_pytimedelta
+generated/pandas.TimedeltaIndex.to_series,../reference/api/pandas.TimedeltaIndex.to_series
+generated/pandas.Timedelta.isoformat,../reference/api/pandas.Timedelta.isoformat
+generated/pandas.Timedelta.is_populated,../reference/api/pandas.Timedelta.is_populated
+generated/pandas.Timedelta.max,../reference/api/pandas.Timedelta.max
+generated/pandas.Timedelta.microseconds,../reference/api/pandas.Timedelta.microseconds
+generated/pandas.Timedelta.min,../reference/api/pandas.Timedelta.min
+generated/pandas.Timedelta.nanoseconds,../reference/api/pandas.Timedelta.nanoseconds
+generated/pandas.timedelta_range,../reference/api/pandas.timedelta_range
+generated/pandas.Timedelta.resolution,../reference/api/pandas.Timedelta.resolution
+generated/pandas.Timedelta.round,../reference/api/pandas.Timedelta.round
+generated/pandas.Timedelta.seconds,../reference/api/pandas.Timedelta.seconds
+generated/pandas.Timedelta.to_pytimedelta,../reference/api/pandas.Timedelta.to_pytimedelta
+generated/pandas.Timedelta.total_seconds,../reference/api/pandas.Timedelta.total_seconds
+generated/pandas.Timedelta.to_timedelta64,../reference/api/pandas.Timedelta.to_timedelta64
+generated/pandas.Timedelta.value,../reference/api/pandas.Timedelta.value
+generated/pandas.Timedelta.view,../reference/api/pandas.Timedelta.view
+generated/pandas.Timestamp.asm8,../reference/api/pandas.Timestamp.asm8
+generated/pandas.Timestamp.astimezone,../reference/api/pandas.Timestamp.astimezone
+generated/pandas.Timestamp.ceil,../reference/api/pandas.Timestamp.ceil
+generated/pandas.Timestamp.combine,../reference/api/pandas.Timestamp.combine
+generated/pandas.Timestamp.ctime,../reference/api/pandas.Timestamp.ctime
+generated/pandas.Timestamp.date,../reference/api/pandas.Timestamp.date
+generated/pandas.Timestamp.day,../reference/api/pandas.Timestamp.day
+generated/pandas.Timestamp.day_name,../reference/api/pandas.Timestamp.day_name
+generated/pandas.Timestamp.dayofweek,../reference/api/pandas.Timestamp.dayofweek
+generated/pandas.Timestamp.dayofyear,../reference/api/pandas.Timestamp.dayofyear
+generated/pandas.Timestamp.days_in_month,../reference/api/pandas.Timestamp.days_in_month
+generated/pandas.Timestamp.daysinmonth,../reference/api/pandas.Timestamp.daysinmonth
+generated/pandas.Timestamp.dst,../reference/api/pandas.Timestamp.dst
+generated/pandas.Timestamp.floor,../reference/api/pandas.Timestamp.floor
+generated/pandas.Timestamp.fold,../reference/api/pandas.Timestamp.fold
+generated/pandas.Timestamp.freq,../reference/api/pandas.Timestamp.freq
+generated/pandas.Timestamp.freqstr,../reference/api/pandas.Timestamp.freqstr
+generated/pandas.Timestamp.fromisoformat,../reference/api/pandas.Timestamp.fromisoformat
+generated/pandas.Timestamp.fromordinal,../reference/api/pandas.Timestamp.fromordinal
+generated/pandas.Timestamp.fromtimestamp,../reference/api/pandas.Timestamp.fromtimestamp
+generated/pandas.Timestamp.hour,../reference/api/pandas.Timestamp.hour
+generated/pandas.Timestamp,../reference/api/pandas.Timestamp
+generated/pandas.Timestamp.is_leap_year,../reference/api/pandas.Timestamp.is_leap_year
+generated/pandas.Timestamp.is_month_end,../reference/api/pandas.Timestamp.is_month_end
+generated/pandas.Timestamp.is_month_start,../reference/api/pandas.Timestamp.is_month_start
+generated/pandas.Timestamp.isocalendar,../reference/api/pandas.Timestamp.isocalendar
+generated/pandas.Timestamp.isoformat,../reference/api/pandas.Timestamp.isoformat
+generated/pandas.Timestamp.isoweekday,../reference/api/pandas.Timestamp.isoweekday
+generated/pandas.Timestamp.is_quarter_end,../reference/api/pandas.Timestamp.is_quarter_end
+generated/pandas.Timestamp.is_quarter_start,../reference/api/pandas.Timestamp.is_quarter_start
+generated/pandas.Timestamp.is_year_end,../reference/api/pandas.Timestamp.is_year_end
+generated/pandas.Timestamp.is_year_start,../reference/api/pandas.Timestamp.is_year_start
+generated/pandas.Timestamp.max,../reference/api/pandas.Timestamp.max
+generated/pandas.Timestamp.microsecond,../reference/api/pandas.Timestamp.microsecond
+generated/pandas.Timestamp.min,../reference/api/pandas.Timestamp.min
+generated/pandas.Timestamp.minute,../reference/api/pandas.Timestamp.minute
+generated/pandas.Timestamp.month,../reference/api/pandas.Timestamp.month
+generated/pandas.Timestamp.month_name,../reference/api/pandas.Timestamp.month_name
+generated/pandas.Timestamp.nanosecond,../reference/api/pandas.Timestamp.nanosecond
+generated/pandas.Timestamp.normalize,../reference/api/pandas.Timestamp.normalize
+generated/pandas.Timestamp.now,../reference/api/pandas.Timestamp.now
+generated/pandas.Timestamp.quarter,../reference/api/pandas.Timestamp.quarter
+generated/pandas.Timestamp.replace,../reference/api/pandas.Timestamp.replace
+generated/pandas.Timestamp.resolution,../reference/api/pandas.Timestamp.resolution
+generated/pandas.Timestamp.round,../reference/api/pandas.Timestamp.round
+generated/pandas.Timestamp.second,../reference/api/pandas.Timestamp.second
+generated/pandas.Timestamp.strftime,../reference/api/pandas.Timestamp.strftime
+generated/pandas.Timestamp.strptime,../reference/api/pandas.Timestamp.strptime
+generated/pandas.Timestamp.time,../reference/api/pandas.Timestamp.time
+generated/pandas.Timestamp.timestamp,../reference/api/pandas.Timestamp.timestamp
+generated/pandas.Timestamp.timetuple,../reference/api/pandas.Timestamp.timetuple
+generated/pandas.Timestamp.timetz,../reference/api/pandas.Timestamp.timetz
+generated/pandas.Timestamp.to_datetime64,../reference/api/pandas.Timestamp.to_datetime64
+generated/pandas.Timestamp.today,../reference/api/pandas.Timestamp.today
+generated/pandas.Timestamp.to_julian_date,../reference/api/pandas.Timestamp.to_julian_date
+generated/pandas.Timestamp.toordinal,../reference/api/pandas.Timestamp.toordinal
+generated/pandas.Timestamp.to_period,../reference/api/pandas.Timestamp.to_period
+generated/pandas.Timestamp.to_pydatetime,../reference/api/pandas.Timestamp.to_pydatetime
+generated/pandas.Timestamp.tz_convert,../reference/api/pandas.Timestamp.tz_convert
+generated/pandas.Timestamp.tz,../reference/api/pandas.Timestamp.tz
+generated/pandas.Timestamp.tzinfo,../reference/api/pandas.Timestamp.tzinfo
+generated/pandas.Timestamp.tz_localize,../reference/api/pandas.Timestamp.tz_localize
+generated/pandas.Timestamp.tzname,../reference/api/pandas.Timestamp.tzname
+generated/pandas.Timestamp.utcfromtimestamp,../reference/api/pandas.Timestamp.utcfromtimestamp
+generated/pandas.Timestamp.utcnow,../reference/api/pandas.Timestamp.utcnow
+generated/pandas.Timestamp.utcoffset,../reference/api/pandas.Timestamp.utcoffset
+generated/pandas.Timestamp.utctimetuple,../reference/api/pandas.Timestamp.utctimetuple
+generated/pandas.Timestamp.value,../reference/api/pandas.Timestamp.value
+generated/pandas.Timestamp.weekday,../reference/api/pandas.Timestamp.weekday
+generated/pandas.Timestamp.weekday_name,../reference/api/pandas.Timestamp.weekday_name
+generated/pandas.Timestamp.week,../reference/api/pandas.Timestamp.week
+generated/pandas.Timestamp.weekofyear,../reference/api/pandas.Timestamp.weekofyear
+generated/pandas.Timestamp.year,../reference/api/pandas.Timestamp.year
+generated/pandas.to_datetime,../reference/api/pandas.to_datetime
+generated/pandas.to_numeric,../reference/api/pandas.to_numeric
+generated/pandas.to_timedelta,../reference/api/pandas.to_timedelta
+generated/pandas.tseries.frequencies.to_offset,../reference/api/pandas.tseries.frequencies.to_offset
+generated/pandas.unique,../reference/api/pandas.unique
+generated/pandas.util.hash_array,../reference/api/pandas.util.hash_array
+generated/pandas.util.hash_pandas_object,../reference/api/pandas.util.hash_pandas_object
+generated/pandas.wide_to_long,../reference/api/pandas.wide_to_long
diff --git a/doc/source/index.rst.template b/doc/source/index.rst.template
index bc420a906b59c..e0c7ab4806a11 100644
--- a/doc/source/index.rst.template
+++ b/doc/source/index.rst.template
@@ -113,7 +113,7 @@ See the package overview for more detail about what's in the library.
{{ single_doc[:-4] }}
{% elif single_doc %}
.. autosummary::
- :toctree: api/generated/
+ :toctree: reference/api/
{{ single_doc }}
{% else -%}
@@ -135,7 +135,7 @@ See the package overview for more detail about what's in the library.
comparison_with_stata
{% endif -%}
{% if include_api -%}
- api/index
+ reference/index
{% endif -%}
{% if not single_doc -%}
development/index
diff --git a/doc/source/api/arrays.rst b/doc/source/reference/arrays.rst
similarity index 93%
rename from doc/source/api/arrays.rst
rename to doc/source/reference/arrays.rst
index 5ecc5181af22c..7281f4f748d6f 100644
--- a/doc/source/api/arrays.rst
+++ b/doc/source/reference/arrays.rst
@@ -31,7 +31,7 @@ The top-level :meth:`array` method can be used to create a new array, which may
stored in a :class:`Series`, :class:`Index`, or as a column in a :class:`DataFrame`.
.. autosummary::
- :toctree: generated/
+ :toctree: api/
array
@@ -48,14 +48,14 @@ or timezone-aware values.
scalar type for timezone-naive or timezone-aware datetime data.
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Timestamp
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Timestamp.asm8
Timestamp.day
@@ -91,7 +91,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Timestamp.astimezone
Timestamp.ceil
@@ -142,7 +142,7 @@ is used.
If the data are tz-aware, then every value in the array must have the same timezone.
.. autosummary::
- :toctree: generated/
+ :toctree: api/
arrays.DatetimeArray
DatetimeTZDtype
@@ -156,14 +156,14 @@ NumPy can natively represent timedeltas. Pandas provides :class:`Timedelta`
for symmetry with :class:`Timestamp`.
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Timedelta
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Timedelta.asm8
Timedelta.components
@@ -183,7 +183,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Timedelta.ceil
Timedelta.floor
@@ -196,7 +196,7 @@ Methods
A collection of timedeltas may be stored in a :class:`TimedeltaArray`.
.. autosummary::
- :toctree: generated/
+ :toctree: api/
arrays.TimedeltaArray
@@ -210,14 +210,14 @@ Pandas represents spans of times as :class:`Period` objects.
Period
------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Period
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Period.day
Period.dayofweek
@@ -244,7 +244,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Period.asfreq
Period.now
@@ -255,7 +255,7 @@ A collection of timedeltas may be stored in a :class:`arrays.PeriodArray`.
Every period in a ``PeriodArray`` must have the same ``freq``.
.. autosummary::
- :toctree: generated/
+ :toctree: api/
arrays.DatetimeArray
PeriodDtype
@@ -268,14 +268,14 @@ Interval Data
Arbitrary intervals can be represented as :class:`Interval` objects.
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Interval
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Interval.closed
Interval.closed_left
@@ -291,7 +291,7 @@ Properties
A collection of intervals may be stored in an :class:`IntervalArray`.
.. autosummary::
- :toctree: generated/
+ :toctree: api/
IntervalArray
IntervalDtype
@@ -305,7 +305,7 @@ Nullable Integer
Pandas provides this through :class:`arrays.IntegerArray`.
.. autosummary::
- :toctree: generated/
+ :toctree: api/
arrays.IntegerArray
Int8Dtype
@@ -327,13 +327,13 @@ limited, fixed set of values. The dtype of a ``Categorical`` can be described by
a :class:`pandas.api.types.CategoricalDtype`.
.. autosummary::
- :toctree: generated/
+ :toctree: api/
:template: autosummary/class_without_autosummary.rst
CategoricalDtype
.. autosummary::
- :toctree: generated/
+ :toctree: api/
CategoricalDtype.categories
CategoricalDtype.ordered
@@ -341,7 +341,7 @@ a :class:`pandas.api.types.CategoricalDtype`.
Categorical data can be stored in a :class:`pandas.Categorical`
.. autosummary::
- :toctree: generated/
+ :toctree: api/
:template: autosummary/class_without_autosummary.rst
Categorical
@@ -350,14 +350,14 @@ The alternative :meth:`Categorical.from_codes` constructor can be used when you
have the categories and integer codes already:
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Categorical.from_codes
The dtype information is available on the ``Categorical``
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Categorical.dtype
Categorical.categories
@@ -368,7 +368,7 @@ The dtype information is available on the ``Categorical``
the Categorical back to a NumPy array, so categories and order information is not preserved!
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Categorical.__array__
@@ -391,7 +391,7 @@ Data where a single value is repeated many times (e.g. ``0`` or ``NaN``) may
be stored efficiently as a :class:`SparseArray`.
.. autosummary::
- :toctree: generated/
+ :toctree: api/
SparseArray
SparseDtype
diff --git a/doc/source/api/extensions.rst b/doc/source/reference/extensions.rst
similarity index 95%
rename from doc/source/api/extensions.rst
rename to doc/source/reference/extensions.rst
index 3972354ff9651..6146e34fab274 100644
--- a/doc/source/api/extensions.rst
+++ b/doc/source/reference/extensions.rst
@@ -11,7 +11,7 @@ These are primarily intended for library authors looking to extend pandas
objects.
.. autosummary::
- :toctree: generated/
+ :toctree: api/
api.extensions.register_extension_dtype
api.extensions.register_dataframe_accessor
diff --git a/doc/source/api/frame.rst b/doc/source/reference/frame.rst
similarity index 93%
rename from doc/source/api/frame.rst
rename to doc/source/reference/frame.rst
index de16d59fe7c40..568acd5207bd1 100644
--- a/doc/source/api/frame.rst
+++ b/doc/source/reference/frame.rst
@@ -10,7 +10,7 @@ DataFrame
Constructor
~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
DataFrame
@@ -19,13 +19,13 @@ Attributes and underlying data
**Axes**
.. autosummary::
- :toctree: generated/
+ :toctree: api/
DataFrame.index
DataFrame.columns
.. autosummary::
- :toctree: generated/
+ :toctree: api/
DataFrame.dtypes
DataFrame.ftypes
@@ -45,7 +45,7 @@ Attributes and underlying data
Conversion
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
DataFrame.astype
DataFrame.convert_objects
@@ -58,7 +58,7 @@ Conversion
Indexing, iteration
~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
DataFrame.head
DataFrame.at
@@ -88,7 +88,7 @@ For more information on ``.at``, ``.iat``, ``.loc``, and
Binary operator functions
~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
DataFrame.add
DataFrame.sub
@@ -119,7 +119,7 @@ Binary operator functions
Function application, GroupBy & Window
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
DataFrame.apply
DataFrame.applymap
@@ -137,7 +137,7 @@ Function application, GroupBy & Window
Computations / Descriptive Stats
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
DataFrame.abs
DataFrame.all
@@ -181,7 +181,7 @@ Computations / Descriptive Stats
Reindexing / Selection / Label manipulation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
DataFrame.add_prefix
DataFrame.add_suffix
@@ -217,7 +217,7 @@ Reindexing / Selection / Label manipulation
Missing data handling
~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
DataFrame.dropna
DataFrame.fillna
@@ -227,7 +227,7 @@ Missing data handling
Reshaping, sorting, transposing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
DataFrame.droplevel
DataFrame.pivot
@@ -251,7 +251,7 @@ Reshaping, sorting, transposing
Combining / joining / merging
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
DataFrame.append
DataFrame.assign
@@ -262,7 +262,7 @@ Combining / joining / merging
Time series-related
~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
DataFrame.asfreq
DataFrame.asof
@@ -285,13 +285,13 @@ Plotting
specific plotting methods of the form ``DataFrame.plot.<kind>``.
.. autosummary::
- :toctree: generated/
+ :toctree: api/
:template: autosummary/accessor_callable.rst
DataFrame.plot
.. autosummary::
- :toctree: generated/
+ :toctree: api/
:template: autosummary/accessor_method.rst
DataFrame.plot.area
@@ -307,7 +307,7 @@ specific plotting methods of the form ``DataFrame.plot.<kind>``.
DataFrame.plot.scatter
.. autosummary::
- :toctree: generated/
+ :toctree: api/
DataFrame.boxplot
DataFrame.hist
@@ -315,7 +315,7 @@ specific plotting methods of the form ``DataFrame.plot.<kind>``.
Serialization / IO / Conversion
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
DataFrame.from_csv
DataFrame.from_dict
@@ -346,6 +346,6 @@ Serialization / IO / Conversion
Sparse
~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
SparseDataFrame.to_coo
diff --git a/doc/source/api/general_functions.rst b/doc/source/reference/general_functions.rst
similarity index 84%
rename from doc/source/api/general_functions.rst
rename to doc/source/reference/general_functions.rst
index cef5d8cac6abc..b5832cb8aa591 100644
--- a/doc/source/api/general_functions.rst
+++ b/doc/source/reference/general_functions.rst
@@ -10,7 +10,7 @@ General functions
Data manipulations
~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
melt
pivot
@@ -30,7 +30,7 @@ Data manipulations
Top-level missing data
~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
isna
isnull
@@ -40,14 +40,14 @@ Top-level missing data
Top-level conversions
~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
to_numeric
Top-level dealing with datetimelike
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
to_datetime
to_timedelta
@@ -60,21 +60,21 @@ Top-level dealing with datetimelike
Top-level dealing with intervals
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
interval_range
Top-level evaluation
~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
eval
Hashing
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
util.hash_array
util.hash_pandas_object
@@ -82,6 +82,6 @@ Hashing
Testing
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
test
diff --git a/doc/source/api/general_utility_functions.rst b/doc/source/reference/general_utility_functions.rst
similarity index 93%
rename from doc/source/api/general_utility_functions.rst
rename to doc/source/reference/general_utility_functions.rst
index e151f8f57ed5e..9c69770c0f1b7 100644
--- a/doc/source/api/general_utility_functions.rst
+++ b/doc/source/reference/general_utility_functions.rst
@@ -10,7 +10,7 @@ General utility functions
Working with options
--------------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
describe_option
reset_option
@@ -21,7 +21,7 @@ Working with options
Testing functions
-----------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
testing.assert_frame_equal
testing.assert_series_equal
@@ -30,7 +30,7 @@ Testing functions
Exceptions and warnings
-----------------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
errors.DtypeWarning
errors.EmptyDataError
@@ -44,7 +44,7 @@ Exceptions and warnings
Data types related functionality
--------------------------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
api.types.union_categoricals
api.types.infer_dtype
@@ -53,7 +53,7 @@ Data types related functionality
Dtype introspection
~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
api.types.is_bool_dtype
api.types.is_categorical_dtype
@@ -81,7 +81,7 @@ Dtype introspection
Iterable introspection
~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
api.types.is_dict_like
api.types.is_file_like
@@ -92,7 +92,7 @@ Iterable introspection
Scalar introspection
~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
api.types.is_bool
api.types.is_categorical
diff --git a/doc/source/api/groupby.rst b/doc/source/reference/groupby.rst
similarity index 94%
rename from doc/source/api/groupby.rst
rename to doc/source/reference/groupby.rst
index d67c7e0889522..6ed85ff2fac43 100644
--- a/doc/source/api/groupby.rst
+++ b/doc/source/reference/groupby.rst
@@ -12,7 +12,7 @@ GroupBy objects are returned by groupby calls: :func:`pandas.DataFrame.groupby`,
Indexing, iteration
-------------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
GroupBy.__iter__
GroupBy.groups
@@ -22,7 +22,7 @@ Indexing, iteration
.. currentmodule:: pandas
.. autosummary::
- :toctree: generated/
+ :toctree: api/
:template: autosummary/class_without_autosummary.rst
Grouper
@@ -32,7 +32,7 @@ Indexing, iteration
Function application
--------------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
GroupBy.apply
GroupBy.agg
@@ -43,7 +43,7 @@ Function application
Computations / Descriptive Stats
--------------------------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
GroupBy.all
GroupBy.any
@@ -78,7 +78,7 @@ axis argument, and often an argument indicating whether to restrict
application to columns of a specific data type.
.. autosummary::
- :toctree: generated/
+ :toctree: api/
DataFrameGroupBy.all
DataFrameGroupBy.any
@@ -113,7 +113,7 @@ application to columns of a specific data type.
The following methods are available only for ``SeriesGroupBy`` objects.
.. autosummary::
- :toctree: generated/
+ :toctree: api/
SeriesGroupBy.nlargest
SeriesGroupBy.nsmallest
@@ -126,7 +126,7 @@ The following methods are available only for ``SeriesGroupBy`` objects.
The following methods are available only for ``DataFrameGroupBy`` objects.
.. autosummary::
- :toctree: generated/
+ :toctree: api/
DataFrameGroupBy.corrwith
DataFrameGroupBy.boxplot
diff --git a/doc/source/api/index.rst b/doc/source/reference/index.rst
similarity index 56%
rename from doc/source/api/index.rst
rename to doc/source/reference/index.rst
index e4d118e278128..ef4676054473a 100644
--- a/doc/source/api/index.rst
+++ b/doc/source/reference/index.rst
@@ -44,31 +44,31 @@ public functions related to data types in pandas.
.. toctree::
:hidden:
- generated/pandas.DataFrame.blocks
- generated/pandas.DataFrame.as_matrix
- generated/pandas.DataFrame.ix
- generated/pandas.Index.asi8
- generated/pandas.Index.data
- generated/pandas.Index.flags
- generated/pandas.Index.holds_integer
- generated/pandas.Index.is_type_compatible
- generated/pandas.Index.nlevels
- generated/pandas.Index.sort
- generated/pandas.Panel.agg
- generated/pandas.Panel.aggregate
- generated/pandas.Panel.blocks
- generated/pandas.Panel.empty
- generated/pandas.Panel.is_copy
- generated/pandas.Panel.items
- generated/pandas.Panel.ix
- generated/pandas.Panel.major_axis
- generated/pandas.Panel.minor_axis
- generated/pandas.Series.asobject
- generated/pandas.Series.blocks
- generated/pandas.Series.from_array
- generated/pandas.Series.ix
- generated/pandas.Series.imag
- generated/pandas.Series.real
+ api/pandas.DataFrame.blocks
+ api/pandas.DataFrame.as_matrix
+ api/pandas.DataFrame.ix
+ api/pandas.Index.asi8
+ api/pandas.Index.data
+ api/pandas.Index.flags
+ api/pandas.Index.holds_integer
+ api/pandas.Index.is_type_compatible
+ api/pandas.Index.nlevels
+ api/pandas.Index.sort
+ api/pandas.Panel.agg
+ api/pandas.Panel.aggregate
+ api/pandas.Panel.blocks
+ api/pandas.Panel.empty
+ api/pandas.Panel.is_copy
+ api/pandas.Panel.items
+ api/pandas.Panel.ix
+ api/pandas.Panel.major_axis
+ api/pandas.Panel.minor_axis
+ api/pandas.Series.asobject
+ api/pandas.Series.blocks
+ api/pandas.Series.from_array
+ api/pandas.Series.ix
+ api/pandas.Series.imag
+ api/pandas.Series.real
.. Can't convince sphinx to generate toctree for this class attribute.
@@ -77,4 +77,4 @@ public functions related to data types in pandas.
.. toctree::
:hidden:
- generated/pandas.api.extensions.ExtensionDtype.na_value
+ api/pandas.api.extensions.ExtensionDtype.na_value
diff --git a/doc/source/api/indexing.rst b/doc/source/reference/indexing.rst
similarity index 91%
rename from doc/source/api/indexing.rst
rename to doc/source/reference/indexing.rst
index d27b05322c1f2..680cb7e3dac91 100644
--- a/doc/source/api/indexing.rst
+++ b/doc/source/reference/indexing.rst
@@ -15,14 +15,14 @@ that contain an index (Series/DataFrame) and those should most likely be
used before calling these methods directly.**
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Index
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Index.values
Index.is_monotonic
@@ -51,7 +51,7 @@ Properties
Modifying and Computations
~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Index.all
Index.any
@@ -90,7 +90,7 @@ Modifying and Computations
Compatibility with MultiIndex
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Index.set_names
Index.is_lexsorted_for_tuple
@@ -99,7 +99,7 @@ Compatibility with MultiIndex
Missing Values
~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Index.fillna
Index.dropna
@@ -109,7 +109,7 @@ Missing Values
Conversion
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Index.astype
Index.item
@@ -124,7 +124,7 @@ Conversion
Sorting
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Index.argsort
Index.searchsorted
@@ -133,14 +133,14 @@ Sorting
Time-specific operations
~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Index.shift
Combining / joining / set operations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Index.append
Index.join
@@ -152,7 +152,7 @@ Combining / joining / set operations
Selecting
~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Index.asof
Index.asof_locs
@@ -176,7 +176,7 @@ Selecting
Numeric Index
-------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
:template: autosummary/class_without_autosummary.rst
RangeIndex
@@ -188,7 +188,7 @@ Numeric Index
.. Separate block, since they aren't classes.
.. autosummary::
- :toctree: generated/
+ :toctree: api/
RangeIndex.from_range
@@ -197,7 +197,7 @@ Numeric Index
CategoricalIndex
----------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
:template: autosummary/class_without_autosummary.rst
CategoricalIndex
@@ -205,7 +205,7 @@ CategoricalIndex
Categorical Components
~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
CategoricalIndex.codes
CategoricalIndex.categories
@@ -222,7 +222,7 @@ Categorical Components
Modifying and Computations
~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
CategoricalIndex.map
CategoricalIndex.equals
@@ -232,7 +232,7 @@ Modifying and Computations
IntervalIndex
-------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
:template: autosummary/class_without_autosummary.rst
IntervalIndex
@@ -240,7 +240,7 @@ IntervalIndex
IntervalIndex Components
~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
IntervalIndex.from_arrays
IntervalIndex.from_tuples
@@ -265,20 +265,20 @@ IntervalIndex Components
MultiIndex
----------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
:template: autosummary/class_without_autosummary.rst
MultiIndex
.. autosummary::
- :toctree: generated/
+ :toctree: api/
IndexSlice
MultiIndex Constructors
~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
MultiIndex.from_arrays
MultiIndex.from_tuples
@@ -288,7 +288,7 @@ MultiIndex Constructors
MultiIndex Properties
~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
MultiIndex.names
MultiIndex.levels
@@ -299,7 +299,7 @@ MultiIndex Properties
MultiIndex Components
~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
MultiIndex.set_levels
MultiIndex.set_codes
@@ -316,7 +316,7 @@ MultiIndex Components
MultiIndex Selecting
~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
MultiIndex.get_loc
MultiIndex.get_loc_level
@@ -328,7 +328,7 @@ MultiIndex Selecting
DatetimeIndex
-------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
:template: autosummary/class_without_autosummary.rst
DatetimeIndex
@@ -336,7 +336,7 @@ DatetimeIndex
Time/Date Components
~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
DatetimeIndex.year
DatetimeIndex.month
@@ -370,7 +370,7 @@ Time/Date Components
Selecting
~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
DatetimeIndex.indexer_at_time
DatetimeIndex.indexer_between_time
@@ -379,7 +379,7 @@ Selecting
Time-specific operations
~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
DatetimeIndex.normalize
DatetimeIndex.strftime
@@ -395,7 +395,7 @@ Time-specific operations
Conversion
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
DatetimeIndex.to_period
DatetimeIndex.to_perioddelta
@@ -406,7 +406,7 @@ Conversion
TimedeltaIndex
--------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
:template: autosummary/class_without_autosummary.rst
TimedeltaIndex
@@ -414,7 +414,7 @@ TimedeltaIndex
Components
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
TimedeltaIndex.days
TimedeltaIndex.seconds
@@ -426,7 +426,7 @@ Components
Conversion
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
TimedeltaIndex.to_pytimedelta
TimedeltaIndex.to_series
@@ -440,7 +440,7 @@ Conversion
PeriodIndex
-----------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
:template: autosummary/class_without_autosummary.rst
PeriodIndex
@@ -448,7 +448,7 @@ PeriodIndex
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
PeriodIndex.day
PeriodIndex.dayofweek
@@ -474,7 +474,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
PeriodIndex.asfreq
PeriodIndex.strftime
diff --git a/doc/source/api/io.rst b/doc/source/reference/io.rst
similarity index 78%
rename from doc/source/api/io.rst
rename to doc/source/reference/io.rst
index f2060b7c05413..9c776e3ff8a82 100644
--- a/doc/source/api/io.rst
+++ b/doc/source/reference/io.rst
@@ -10,14 +10,14 @@ Input/Output
Pickling
~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
read_pickle
Flat File
~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
read_table
read_csv
@@ -27,20 +27,20 @@ Flat File
Clipboard
~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
read_clipboard
Excel
~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
read_excel
ExcelFile.parse
.. autosummary::
- :toctree: generated/
+ :toctree: api/
:template: autosummary/class_without_autosummary.rst
ExcelWriter
@@ -48,14 +48,14 @@ Excel
JSON
~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
read_json
.. currentmodule:: pandas.io.json
.. autosummary::
- :toctree: generated/
+ :toctree: api/
json_normalize
build_table_schema
@@ -65,14 +65,14 @@ JSON
HTML
~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
read_html
HDFStore: PyTables (HDF5)
~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
read_hdf
HDFStore.put
@@ -87,28 +87,28 @@ HDFStore: PyTables (HDF5)
Feather
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
read_feather
Parquet
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
read_parquet
SAS
~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
read_sas
SQL
~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
read_sql_table
read_sql_query
@@ -117,21 +117,21 @@ SQL
Google BigQuery
~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
read_gbq
STATA
~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
read_stata
.. currentmodule:: pandas.io.stata
.. autosummary::
- :toctree: generated/
+ :toctree: api/
StataReader.data
StataReader.data_label
diff --git a/doc/source/api/offset_frequency.rst b/doc/source/reference/offset_frequency.rst
similarity index 84%
rename from doc/source/api/offset_frequency.rst
rename to doc/source/reference/offset_frequency.rst
index 42894fe8d7f2f..ccc1c7e171d22 100644
--- a/doc/source/api/offset_frequency.rst
+++ b/doc/source/reference/offset_frequency.rst
@@ -10,14 +10,14 @@ Date Offsets
DateOffset
----------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
DateOffset
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
DateOffset.freqstr
DateOffset.kwds
@@ -29,7 +29,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
DateOffset.apply
DateOffset.copy
@@ -39,14 +39,14 @@ Methods
BusinessDay
-----------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BusinessDay
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BusinessDay.freqstr
BusinessDay.kwds
@@ -58,7 +58,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BusinessDay.apply
BusinessDay.apply_index
@@ -69,14 +69,14 @@ Methods
BusinessHour
------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BusinessHour
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BusinessHour.freqstr
BusinessHour.kwds
@@ -88,7 +88,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BusinessHour.apply
BusinessHour.copy
@@ -98,14 +98,14 @@ Methods
CustomBusinessDay
-----------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
CustomBusinessDay
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
CustomBusinessDay.freqstr
CustomBusinessDay.kwds
@@ -117,7 +117,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
CustomBusinessDay.apply
CustomBusinessDay.copy
@@ -127,14 +127,14 @@ Methods
CustomBusinessHour
------------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
CustomBusinessHour
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
CustomBusinessHour.freqstr
CustomBusinessHour.kwds
@@ -146,7 +146,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
CustomBusinessHour.apply
CustomBusinessHour.copy
@@ -156,14 +156,14 @@ Methods
MonthOffset
-----------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
MonthOffset
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
MonthOffset.freqstr
MonthOffset.kwds
@@ -175,7 +175,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
MonthOffset.apply
MonthOffset.apply_index
@@ -186,14 +186,14 @@ Methods
MonthEnd
--------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
MonthEnd
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
MonthEnd.freqstr
MonthEnd.kwds
@@ -205,7 +205,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
MonthEnd.apply
MonthEnd.apply_index
@@ -216,14 +216,14 @@ Methods
MonthBegin
----------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
MonthBegin
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
MonthBegin.freqstr
MonthBegin.kwds
@@ -235,7 +235,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
MonthBegin.apply
MonthBegin.apply_index
@@ -246,14 +246,14 @@ Methods
BusinessMonthEnd
----------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BusinessMonthEnd
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BusinessMonthEnd.freqstr
BusinessMonthEnd.kwds
@@ -265,7 +265,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BusinessMonthEnd.apply
BusinessMonthEnd.apply_index
@@ -276,14 +276,14 @@ Methods
BusinessMonthBegin
------------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BusinessMonthBegin
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BusinessMonthBegin.freqstr
BusinessMonthBegin.kwds
@@ -295,7 +295,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BusinessMonthBegin.apply
BusinessMonthBegin.apply_index
@@ -306,14 +306,14 @@ Methods
CustomBusinessMonthEnd
----------------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
CustomBusinessMonthEnd
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
CustomBusinessMonthEnd.freqstr
CustomBusinessMonthEnd.kwds
@@ -326,7 +326,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
CustomBusinessMonthEnd.apply
CustomBusinessMonthEnd.copy
@@ -336,14 +336,14 @@ Methods
CustomBusinessMonthBegin
------------------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
CustomBusinessMonthBegin
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
CustomBusinessMonthBegin.freqstr
CustomBusinessMonthBegin.kwds
@@ -356,7 +356,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
CustomBusinessMonthBegin.apply
CustomBusinessMonthBegin.copy
@@ -366,14 +366,14 @@ Methods
SemiMonthOffset
---------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
SemiMonthOffset
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
SemiMonthOffset.freqstr
SemiMonthOffset.kwds
@@ -385,7 +385,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
SemiMonthOffset.apply
SemiMonthOffset.apply_index
@@ -396,14 +396,14 @@ Methods
SemiMonthEnd
------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
SemiMonthEnd
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
SemiMonthEnd.freqstr
SemiMonthEnd.kwds
@@ -415,7 +415,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
SemiMonthEnd.apply
SemiMonthEnd.apply_index
@@ -426,14 +426,14 @@ Methods
SemiMonthBegin
--------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
SemiMonthBegin
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
SemiMonthBegin.freqstr
SemiMonthBegin.kwds
@@ -445,7 +445,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
SemiMonthBegin.apply
SemiMonthBegin.apply_index
@@ -456,14 +456,14 @@ Methods
Week
----
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Week
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Week.freqstr
Week.kwds
@@ -475,7 +475,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Week.apply
Week.apply_index
@@ -486,14 +486,14 @@ Methods
WeekOfMonth
-----------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
WeekOfMonth
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
WeekOfMonth.freqstr
WeekOfMonth.kwds
@@ -505,7 +505,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
WeekOfMonth.apply
WeekOfMonth.copy
@@ -515,14 +515,14 @@ Methods
LastWeekOfMonth
---------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
LastWeekOfMonth
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
LastWeekOfMonth.freqstr
LastWeekOfMonth.kwds
@@ -534,7 +534,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
LastWeekOfMonth.apply
LastWeekOfMonth.copy
@@ -544,14 +544,14 @@ Methods
QuarterOffset
-------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
QuarterOffset
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
QuarterOffset.freqstr
QuarterOffset.kwds
@@ -563,7 +563,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
QuarterOffset.apply
QuarterOffset.apply_index
@@ -574,14 +574,14 @@ Methods
BQuarterEnd
-----------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BQuarterEnd
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BQuarterEnd.freqstr
BQuarterEnd.kwds
@@ -593,7 +593,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BQuarterEnd.apply
BQuarterEnd.apply_index
@@ -604,14 +604,14 @@ Methods
BQuarterBegin
-------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BQuarterBegin
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BQuarterBegin.freqstr
BQuarterBegin.kwds
@@ -623,7 +623,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BQuarterBegin.apply
BQuarterBegin.apply_index
@@ -634,14 +634,14 @@ Methods
QuarterEnd
----------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
QuarterEnd
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
QuarterEnd.freqstr
QuarterEnd.kwds
@@ -653,7 +653,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
QuarterEnd.apply
QuarterEnd.apply_index
@@ -664,14 +664,14 @@ Methods
QuarterBegin
------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
QuarterBegin
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
QuarterBegin.freqstr
QuarterBegin.kwds
@@ -683,7 +683,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
QuarterBegin.apply
QuarterBegin.apply_index
@@ -694,14 +694,14 @@ Methods
YearOffset
----------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
YearOffset
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
YearOffset.freqstr
YearOffset.kwds
@@ -713,7 +713,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
YearOffset.apply
YearOffset.apply_index
@@ -724,14 +724,14 @@ Methods
BYearEnd
--------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BYearEnd
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BYearEnd.freqstr
BYearEnd.kwds
@@ -743,7 +743,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BYearEnd.apply
BYearEnd.apply_index
@@ -754,14 +754,14 @@ Methods
BYearBegin
----------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BYearBegin
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BYearBegin.freqstr
BYearBegin.kwds
@@ -773,7 +773,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BYearBegin.apply
BYearBegin.apply_index
@@ -784,14 +784,14 @@ Methods
YearEnd
-------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
YearEnd
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
YearEnd.freqstr
YearEnd.kwds
@@ -803,7 +803,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
YearEnd.apply
YearEnd.apply_index
@@ -814,14 +814,14 @@ Methods
YearBegin
---------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
YearBegin
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
YearBegin.freqstr
YearBegin.kwds
@@ -833,7 +833,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
YearBegin.apply
YearBegin.apply_index
@@ -844,14 +844,14 @@ Methods
FY5253
------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
FY5253
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
FY5253.freqstr
FY5253.kwds
@@ -863,7 +863,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
FY5253.apply
FY5253.copy
@@ -875,14 +875,14 @@ Methods
FY5253Quarter
-------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
FY5253Quarter
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
FY5253Quarter.freqstr
FY5253Quarter.kwds
@@ -894,7 +894,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
FY5253Quarter.apply
FY5253Quarter.copy
@@ -906,14 +906,14 @@ Methods
Easter
------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Easter
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Easter.freqstr
Easter.kwds
@@ -925,7 +925,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Easter.apply
Easter.copy
@@ -935,14 +935,14 @@ Methods
Tick
----
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Tick
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Tick.delta
Tick.freqstr
@@ -955,7 +955,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Tick.copy
Tick.isAnchored
@@ -964,14 +964,14 @@ Methods
Day
---
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Day
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Day.delta
Day.freqstr
@@ -984,7 +984,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Day.copy
Day.isAnchored
@@ -993,14 +993,14 @@ Methods
Hour
----
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Hour
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Hour.delta
Hour.freqstr
@@ -1013,7 +1013,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Hour.copy
Hour.isAnchored
@@ -1022,14 +1022,14 @@ Methods
Minute
------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Minute
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Minute.delta
Minute.freqstr
@@ -1042,7 +1042,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Minute.copy
Minute.isAnchored
@@ -1051,14 +1051,14 @@ Methods
Second
------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Second
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Second.delta
Second.freqstr
@@ -1071,7 +1071,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Second.copy
Second.isAnchored
@@ -1080,14 +1080,14 @@ Methods
Milli
-----
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Milli
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Milli.delta
Milli.freqstr
@@ -1100,7 +1100,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Milli.copy
Milli.isAnchored
@@ -1109,14 +1109,14 @@ Methods
Micro
-----
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Micro
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Micro.delta
Micro.freqstr
@@ -1129,7 +1129,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Micro.copy
Micro.isAnchored
@@ -1138,14 +1138,14 @@ Methods
Nano
----
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Nano
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Nano.delta
Nano.freqstr
@@ -1158,7 +1158,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Nano.copy
Nano.isAnchored
@@ -1167,14 +1167,14 @@ Methods
BDay
----
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BDay
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BDay.base
BDay.freqstr
@@ -1188,7 +1188,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BDay.apply
BDay.apply_index
@@ -1201,14 +1201,14 @@ Methods
BMonthEnd
---------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BMonthEnd
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BMonthEnd.base
BMonthEnd.freqstr
@@ -1221,7 +1221,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BMonthEnd.apply
BMonthEnd.apply_index
@@ -1234,14 +1234,14 @@ Methods
BMonthBegin
-----------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BMonthBegin
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BMonthBegin.base
BMonthBegin.freqstr
@@ -1254,7 +1254,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
BMonthBegin.apply
BMonthBegin.apply_index
@@ -1267,14 +1267,14 @@ Methods
CBMonthEnd
----------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
CBMonthEnd
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
CBMonthEnd.base
CBMonthEnd.cbday_roll
@@ -1291,7 +1291,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
CBMonthEnd.apply
CBMonthEnd.apply_index
@@ -1304,14 +1304,14 @@ Methods
CBMonthBegin
------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
CBMonthBegin
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
CBMonthBegin.base
CBMonthBegin.cbday_roll
@@ -1328,7 +1328,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
CBMonthBegin.apply
CBMonthBegin.apply_index
@@ -1341,14 +1341,14 @@ Methods
CDay
----
.. autosummary::
- :toctree: generated/
+ :toctree: api/
CDay
Properties
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
CDay.base
CDay.freqstr
@@ -1362,7 +1362,7 @@ Properties
Methods
~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
CDay.apply
CDay.apply_index
@@ -1382,6 +1382,6 @@ Frequencies
.. _api.offsets:
.. autosummary::
- :toctree: generated/
+ :toctree: api/
to_offset
diff --git a/doc/source/api/panel.rst b/doc/source/reference/panel.rst
similarity index 90%
rename from doc/source/api/panel.rst
rename to doc/source/reference/panel.rst
index 4edcd22d2685d..39c8ba0828859 100644
--- a/doc/source/api/panel.rst
+++ b/doc/source/reference/panel.rst
@@ -10,7 +10,7 @@ Panel
Constructor
~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Panel
@@ -23,7 +23,7 @@ Properties and underlying data
* **minor_axis**: axis 2; the columns of each of the DataFrames
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Panel.values
Panel.axes
@@ -38,7 +38,7 @@ Properties and underlying data
Conversion
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Panel.astype
Panel.copy
@@ -48,7 +48,7 @@ Conversion
Getting and setting
~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Panel.get_value
Panel.set_value
@@ -56,7 +56,7 @@ Getting and setting
Indexing, iteration, slicing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Panel.at
Panel.iat
@@ -75,7 +75,7 @@ For more information on ``.at``, ``.iat``, ``.loc``, and
Binary operator functions
~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Panel.add
Panel.sub
@@ -103,7 +103,7 @@ Binary operator functions
Function application, GroupBy
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Panel.apply
Panel.groupby
@@ -113,7 +113,7 @@ Function application, GroupBy
Computations / Descriptive Stats
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Panel.abs
Panel.clip
@@ -139,7 +139,7 @@ Computations / Descriptive Stats
Reindexing / Selection / Label manipulation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Panel.add_prefix
Panel.add_suffix
@@ -160,14 +160,14 @@ Reindexing / Selection / Label manipulation
Missing data handling
~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Panel.dropna
Reshaping, sorting, transposing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Panel.sort_index
Panel.swaplevel
@@ -178,7 +178,7 @@ Reshaping, sorting, transposing
Combining / joining / merging
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Panel.join
Panel.update
@@ -186,7 +186,7 @@ Combining / joining / merging
Time series-related
~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Panel.asfreq
Panel.shift
@@ -197,7 +197,7 @@ Time series-related
Serialization / IO / Conversion
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Panel.from_dict
Panel.to_pickle
diff --git a/doc/source/api/plotting.rst b/doc/source/reference/plotting.rst
similarity index 93%
rename from doc/source/api/plotting.rst
rename to doc/source/reference/plotting.rst
index c4e6333ebda37..7615e1d20f5e2 100644
--- a/doc/source/api/plotting.rst
+++ b/doc/source/reference/plotting.rst
@@ -10,7 +10,7 @@ Plotting
The following functions are contained in the `pandas.plotting` module.
.. autosummary::
- :toctree: generated/
+ :toctree: api/
andrews_curves
bootstrap_plot
diff --git a/doc/source/api/resampling.rst b/doc/source/reference/resampling.rst
similarity index 91%
rename from doc/source/api/resampling.rst
rename to doc/source/reference/resampling.rst
index f5c6ccce3cdd7..2a52defa3c68f 100644
--- a/doc/source/api/resampling.rst
+++ b/doc/source/reference/resampling.rst
@@ -12,7 +12,7 @@ Resampler objects are returned by resample calls: :func:`pandas.DataFrame.resamp
Indexing, iteration
~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Resampler.__iter__
Resampler.groups
@@ -22,7 +22,7 @@ Indexing, iteration
Function application
~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Resampler.apply
Resampler.aggregate
@@ -32,7 +32,7 @@ Function application
Upsampling
~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Resampler.ffill
Resampler.backfill
@@ -46,7 +46,7 @@ Upsampling
Computations / Descriptive Stats
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Resampler.count
Resampler.nunique
diff --git a/doc/source/api/series.rst b/doc/source/reference/series.rst
similarity index 93%
rename from doc/source/api/series.rst
rename to doc/source/reference/series.rst
index aa43c8b643d44..a6ac40b5203bf 100644
--- a/doc/source/api/series.rst
+++ b/doc/source/reference/series.rst
@@ -10,7 +10,7 @@ Series
Constructor
-----------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Series
@@ -19,12 +19,12 @@ Attributes
**Axes**
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Series.index
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Series.array
Series.values
@@ -52,7 +52,7 @@ Attributes
Conversion
----------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Series.astype
Series.infer_objects
@@ -69,7 +69,7 @@ Conversion
Indexing, iteration
-------------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Series.get
Series.at
@@ -90,7 +90,7 @@ For more information on ``.at``, ``.iat``, ``.loc``, and
Binary operator functions
-------------------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Series.add
Series.sub
@@ -123,7 +123,7 @@ Binary operator functions
Function application, GroupBy & Window
--------------------------------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Series.apply
Series.agg
@@ -141,7 +141,7 @@ Function application, GroupBy & Window
Computations / Descriptive Stats
--------------------------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Series.abs
Series.all
@@ -192,7 +192,7 @@ Computations / Descriptive Stats
Reindexing / Selection / Label manipulation
-------------------------------------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Series.align
Series.drop
@@ -226,7 +226,7 @@ Reindexing / Selection / Label manipulation
Missing data handling
---------------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Series.isna
Series.notna
@@ -237,7 +237,7 @@ Missing data handling
Reshaping, sorting
------------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Series.argsort
Series.argmin
@@ -256,7 +256,7 @@ Reshaping, sorting
Combining / joining / merging
-----------------------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Series.append
Series.replace
@@ -265,7 +265,7 @@ Combining / joining / merging
Time series-related
-------------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Series.asfreq
Series.asof
@@ -309,7 +309,7 @@ Datetime Properties
^^^^^^^^^^^^^^^^^^^
.. autosummary::
- :toctree: generated/
+ :toctree: api/
:template: autosummary/accessor_attribute.rst
Series.dt.date
@@ -345,7 +345,7 @@ Datetime Methods
^^^^^^^^^^^^^^^^
.. autosummary::
- :toctree: generated/
+ :toctree: api/
:template: autosummary/accessor_method.rst
Series.dt.to_period
@@ -364,7 +364,7 @@ Period Properties
^^^^^^^^^^^^^^^^^
.. autosummary::
- :toctree: generated/
+ :toctree: api/
:template: autosummary/accessor_attribute.rst
Series.dt.qyear
@@ -375,7 +375,7 @@ Timedelta Properties
^^^^^^^^^^^^^^^^^^^^
.. autosummary::
- :toctree: generated/
+ :toctree: api/
:template: autosummary/accessor_attribute.rst
Series.dt.days
@@ -388,7 +388,7 @@ Timedelta Methods
^^^^^^^^^^^^^^^^^
.. autosummary::
- :toctree: generated/
+ :toctree: api/
:template: autosummary/accessor_method.rst
Series.dt.to_pytimedelta
@@ -405,7 +405,7 @@ strings and apply several methods to it. These can be accessed like
``Series.str.<function/property>``.
.. autosummary::
- :toctree: generated/
+ :toctree: api/
:template: autosummary/accessor_method.rst
Series.str.capitalize
@@ -467,7 +467,7 @@ strings and apply several methods to it. These can be accessed like
..
.. autosummary::
- :toctree: generated/
+ :toctree: api/
:template: autosummary/accessor.rst
Series.str
@@ -484,7 +484,7 @@ Categorical-dtype specific methods and attributes are available under
the ``Series.cat`` accessor.
.. autosummary::
- :toctree: generated/
+ :toctree: api/
:template: autosummary/accessor_attribute.rst
Series.cat.categories
@@ -492,7 +492,7 @@ the ``Series.cat`` accessor.
Series.cat.codes
.. autosummary::
- :toctree: generated/
+ :toctree: api/
:template: autosummary/accessor_method.rst
Series.cat.rename_categories
@@ -514,7 +514,7 @@ Sparse-dtype specific methods and attributes are provided under the
``Series.sparse`` accessor.
.. autosummary::
- :toctree: generated/
+ :toctree: api/
:template: autosummary/accessor_attribute.rst
Series.sparse.npoints
@@ -523,7 +523,7 @@ Sparse-dtype specific methods and attributes are provided under the
Series.sparse.sp_values
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Series.sparse.from_coo
Series.sparse.to_coo
@@ -535,13 +535,13 @@ Plotting
specific plotting methods of the form ``Series.plot.<kind>``.
.. autosummary::
- :toctree: generated/
+ :toctree: api/
:template: autosummary/accessor_callable.rst
Series.plot
.. autosummary::
- :toctree: generated/
+ :toctree: api/
:template: autosummary/accessor_method.rst
Series.plot.area
@@ -555,14 +555,14 @@ specific plotting methods of the form ``Series.plot.<kind>``.
Series.plot.pie
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Series.hist
Serialization / IO / Conversion
-------------------------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Series.to_pickle
Series.to_csv
@@ -585,7 +585,7 @@ Sparse
------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
SparseSeries.to_coo
SparseSeries.from_coo
diff --git a/doc/source/api/style.rst b/doc/source/reference/style.rst
similarity index 88%
rename from doc/source/api/style.rst
rename to doc/source/reference/style.rst
index 70913bbec410d..bd9635b41e343 100644
--- a/doc/source/api/style.rst
+++ b/doc/source/reference/style.rst
@@ -12,7 +12,7 @@ Style
Styler Constructor
------------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Styler
Styler.from_custom_template
@@ -20,7 +20,7 @@ Styler Constructor
Styler Properties
-----------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Styler.env
Styler.template
@@ -29,7 +29,7 @@ Styler Properties
Style Application
-----------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Styler.apply
Styler.applymap
@@ -47,7 +47,7 @@ Style Application
Builtin Styles
--------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Styler.highlight_max
Styler.highlight_min
@@ -58,7 +58,7 @@ Builtin Styles
Style Export and Import
-----------------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Styler.render
Styler.export
diff --git a/doc/source/api/window.rst b/doc/source/reference/window.rst
similarity index 95%
rename from doc/source/api/window.rst
rename to doc/source/reference/window.rst
index 3245f5f831688..9e1374a3bd8e4 100644
--- a/doc/source/api/window.rst
+++ b/doc/source/reference/window.rst
@@ -14,7 +14,7 @@ EWM objects are returned by ``.ewm`` calls: :func:`pandas.DataFrame.ewm`, :func:
Standard moving window functions
--------------------------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Rolling.count
Rolling.sum
@@ -39,7 +39,7 @@ Standard moving window functions
Standard expanding window functions
-----------------------------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
Expanding.count
Expanding.sum
@@ -60,7 +60,7 @@ Standard expanding window functions
Exponentially-weighted moving window functions
----------------------------------------------
.. autosummary::
- :toctree: generated/
+ :toctree: api/
EWM.mean
EWM.std
diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index 0132392aacaff..58e1b2370c7c8 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -4850,7 +4850,7 @@ See also some :ref:`cookbook examples <cookbook.sql>` for some advanced strategi
The key functions are:
.. autosummary::
- :toctree: generated/
+ :toctree: ../reference/api/
read_sql_table
read_sql_query
diff --git a/scripts/validate_docstrings.py b/scripts/validate_docstrings.py
index 4e389aed2b0d2..bce33f7e78daa 100755
--- a/scripts/validate_docstrings.py
+++ b/scripts/validate_docstrings.py
@@ -796,7 +796,8 @@ def validate_all(prefix, ignore_deprecated=False):
seen = {}
# functions from the API docs
- api_doc_fnames = os.path.join(BASE_PATH, 'doc', 'source', 'api', '*.rst')
+ api_doc_fnames = os.path.join(
+ BASE_PATH, 'doc', 'source', 'reference', '*.rst')
api_items = []
for api_doc_fname in glob.glob(api_doc_fnames):
with open(api_doc_fname) as f:
| - [X] closes #24451
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry | https://api.github.com/repos/pandas-dev/pandas/pulls/24909 | 2019-01-24T17:36:39Z | 2019-01-25T13:07:04Z | 2019-01-25T13:07:04Z | 2019-01-25T13:17:40Z |
Disable M8 in nanops | diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index cafd3a9915fa0..86c3c380636c9 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -14,7 +14,8 @@
_get_dtype, is_any_int_dtype, is_bool_dtype, is_complex, is_complex_dtype,
is_datetime64_dtype, is_datetime64tz_dtype, is_datetime_or_timedelta_dtype,
is_float, is_float_dtype, is_integer, is_integer_dtype, is_numeric_dtype,
- is_object_dtype, is_scalar, is_timedelta64_dtype)
+ is_object_dtype, is_scalar, is_timedelta64_dtype, pandas_dtype)
+from pandas.core.dtypes.dtypes import DatetimeTZDtype
from pandas.core.dtypes.missing import isna, na_value_for_dtype, notna
import pandas.core.common as com
@@ -57,7 +58,7 @@ class disallow(object):
def __init__(self, *dtypes):
super(disallow, self).__init__()
- self.dtypes = tuple(np.dtype(dtype).type for dtype in dtypes)
+ self.dtypes = tuple(pandas_dtype(dtype).type for dtype in dtypes)
def check(self, obj):
return hasattr(obj, 'dtype') and issubclass(obj.dtype.type,
@@ -437,6 +438,7 @@ def nansum(values, axis=None, skipna=True, min_count=0, mask=None):
return _wrap_results(the_sum, dtype)
+@disallow('M8', DatetimeTZDtype)
@bottleneck_switch()
def nanmean(values, axis=None, skipna=True, mask=None):
"""
diff --git a/pandas/tests/frame/test_analytics.py b/pandas/tests/frame/test_analytics.py
index f2c3f50c291c3..386e5f57617cf 100644
--- a/pandas/tests/frame/test_analytics.py
+++ b/pandas/tests/frame/test_analytics.py
@@ -794,6 +794,25 @@ def test_mean(self, float_frame_with_na, float_frame, float_string_frame):
check_dates=True)
assert_stat_op_api('mean', float_frame, float_string_frame)
+ @pytest.mark.parametrize('tz', [None, 'UTC'])
+ def test_mean_mixed_datetime_numeric(self, tz):
+ # https://github.com/pandas-dev/pandas/issues/24752
+ df = pd.DataFrame({"A": [1, 1],
+ "B": [pd.Timestamp('2000', tz=tz)] * 2})
+ result = df.mean()
+ expected = pd.Series([1.0], index=['A'])
+ tm.assert_series_equal(result, expected)
+
+ @pytest.mark.parametrize('tz', [None, 'UTC'])
+ def test_mean_excludeds_datetimes(self, tz):
+ # https://github.com/pandas-dev/pandas/issues/24752
+ # Our long-term desired behavior is unclear, but the behavior in
+ # 0.24.0rc1 was buggy.
+ df = pd.DataFrame({"A": [pd.Timestamp('2000', tz=tz)] * 2})
+ result = df.mean()
+ expected = pd.Series()
+ tm.assert_series_equal(result, expected)
+
def test_product(self, float_frame_with_na, float_frame,
float_string_frame):
assert_stat_op_calc('product', np.prod, float_frame_with_na)
diff --git a/pandas/tests/test_nanops.py b/pandas/tests/test_nanops.py
index 4bcd16a86e865..cf5ef6cf15eca 100644
--- a/pandas/tests/test_nanops.py
+++ b/pandas/tests/test_nanops.py
@@ -971,6 +971,9 @@ def prng(self):
class TestDatetime64NaNOps(object):
@pytest.mark.parametrize('tz', [None, 'UTC'])
+ @pytest.mark.xfail(reason="disabled")
+ # Enabling mean changes the behavior of DataFrame.mean
+ # See https://github.com/pandas-dev/pandas/issues/24752
def test_nanmean(self, tz):
dti = pd.date_range('2016-01-01', periods=3, tz=tz)
expected = dti[1]
| Closes https://github.com/pandas-dev/pandas/issues/24752
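With `nanmean` disallowed for `M8`/tz-aware dtypes, datetime columns are simply skipped by `DataFrame.mean`. A quick sketch mirroring the new tests (assumes a build of this branch):
```python
import pandas as pd

df = pd.DataFrame({"A": [1, 1],
                   "B": [pd.Timestamp("2000", tz="UTC")] * 2})
# the datetime column B is excluded from the reduction
print(df.mean())  # the tests expect: A    1.0
```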
xref https://github.com/pandas-dev/pandas/commit/fe696e453b9236b226a0656028be43b4706ce1a1#diff-12444145caa7d0c14da51377a05e46d6 | https://api.github.com/repos/pandas-dev/pandas/pulls/24907 | 2019-01-24T15:48:13Z | 2019-01-24T19:01:24Z | 2019-01-24T19:01:23Z | 2019-01-24T19:16:08Z |
DOC: also redirect old whatsnew url | diff --git a/doc/redirects.csv b/doc/redirects.csv
index 4f4b3d7fc0780..e0de03745aaa8 100644
--- a/doc/redirects.csv
+++ b/doc/redirects.csv
@@ -1,6 +1,10 @@
# This file should contain all the redirects in the documentation
# in the format `<old_path>,<new_path>`
+# whatsnew
+whatsnew,whatsnew/index
+release,whatsnew/index
+
# getting started
10min,getting_started/10min
basics,getting_started/basics
| The whatsnew was already split a while ago, but I think we can also add a redirect for the main release page. | https://api.github.com/repos/pandas-dev/pandas/pulls/24906 | 2019-01-24T14:31:38Z | 2019-01-24T15:26:06Z | 2019-01-24T15:26:06Z | 2019-01-24T15:29:23Z
BUG (output formatting): use fixed width for truncation column instead of inferring from last column | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 3dd345890881c..78bc4cf751c4f 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1752,6 +1752,7 @@ I/O
- Bug in :meth:`DataFrame.to_stata` and :class:`pandas.io.stata.StataWriter117` that produced invalid files when using strLs with non-ASCII characters (:issue:`23573`)
- Bug in :class:`HDFStore` that caused it to raise ``ValueError`` when reading a Dataframe in Python 3 from fixed format written in Python 2 (:issue:`24510`)
- Bug in :func:`DataFrame.to_string()` and more generally in the floating ``repr`` formatter. Zeros were not trimmed if ``inf`` was present in a columns while it was the case with NA values. Zeros are now trimmed as in the presence of NA (:issue:`24861`).
+- Bug in the ``repr`` when truncating the number of columns and having a wide last column (:issue:`24849`).
Plotting
^^^^^^^^
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 2c1fcab1ebde9..62fa04e784072 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -435,9 +435,6 @@ def _chk_truncate(self):
"""
from pandas.core.reshape.concat import concat
- # Column of which first element is used to determine width of a dot col
- self.tr_size_col = -1
-
# Cut the data to the information actually printed
max_cols = self.max_cols
max_rows = self.max_rows
@@ -556,10 +553,7 @@ def _to_str_columns(self):
if truncate_h:
col_num = self.tr_col_num
- # infer from column header
- col_width = self.adj.len(strcols[self.tr_size_col][0])
- strcols.insert(self.tr_col_num + 1, ['...'.center(col_width)] *
- (len(str_index)))
+ strcols.insert(self.tr_col_num + 1, [' ...'] * (len(str_index)))
if truncate_v:
n_header_rows = len(str_index) - len(frame)
row_num = self.tr_row_num
@@ -577,8 +571,8 @@ def _to_str_columns(self):
if ix == 0:
dot_mode = 'left'
elif is_dot_col:
- cwidth = self.adj.len(strcols[self.tr_size_col][0])
- dot_mode = 'center'
+ cwidth = 4
+ dot_mode = 'right'
else:
dot_mode = 'right'
dot_str = self.adj.justify([my_str], cwidth, mode=dot_mode)[0]
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index 31ab1e050d95c..5d922ccaf1fd5 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -345,6 +345,15 @@ def test_repr_truncates_terminal_size_full(self, monkeypatch):
lambda: terminal_size)
assert "..." not in str(df)
+ def test_repr_truncation_column_size(self):
+ # dataframe with last column very wide -> check it is not used to
+ # determine size of truncation (...) column
+ df = pd.DataFrame({'a': [108480, 30830], 'b': [12345, 12345],
+ 'c': [12345, 12345], 'd': [12345, 12345],
+ 'e': ['a' * 50] * 2})
+ assert "..." in str(df)
+ assert " ... " not in str(df)
+
def test_repr_max_columns_max_rows(self):
term_width, term_height = get_terminal_size()
if term_width < 10 or term_height < 10:
@@ -543,7 +552,7 @@ def test_to_string_with_formatters_unicode(self):
formatters={u('c/\u03c3'): lambda x: '{x}'.format(x=x)})
assert result == u(' c/\u03c3\n') + '0 1\n1 2\n2 3'
- def test_east_asian_unicode_frame(self):
+ def test_east_asian_unicode_false(self):
if PY3:
_rep = repr
else:
@@ -643,17 +652,23 @@ def test_east_asian_unicode_frame(self):
u'ああああ': [u'さ', u'し', u'す', u'せ']},
columns=['a', 'b', 'c', u'ああああ'])
- expected = (u" a ... ああああ\n0 あああああ ... さ\n"
- u".. ... ... ...\n3 えええ ... せ\n"
+ expected = (u" a ... ああああ\n0 あああああ ... さ\n"
+ u".. ... ... ...\n3 えええ ... せ\n"
u"\n[4 rows x 4 columns]")
assert _rep(df) == expected
df.index = [u'あああ', u'いいいい', u'う', 'aaa']
- expected = (u" a ... ああああ\nあああ あああああ ... さ\n"
- u".. ... ... ...\naaa えええ ... せ\n"
+ expected = (u" a ... ああああ\nあああ あああああ ... さ\n"
+ u".. ... ... ...\naaa えええ ... せ\n"
u"\n[4 rows x 4 columns]")
assert _rep(df) == expected
+ def test_east_asian_unicode_true(self):
+ if PY3:
+ _rep = repr
+ else:
+ _rep = unicode # noqa
+
# Emable Unicode option -----------------------------------------
with option_context('display.unicode.east_asian_width', True):
@@ -757,18 +772,18 @@ def test_east_asian_unicode_frame(self):
u'ああああ': [u'さ', u'し', u'す', u'せ']},
columns=['a', 'b', 'c', u'ああああ'])
- expected = (u" a ... ああああ\n"
- u"0 あああああ ... さ\n"
- u".. ... ... ...\n"
- u"3 えええ ... せ\n"
+ expected = (u" a ... ああああ\n"
+ u"0 あああああ ... さ\n"
+ u".. ... ... ...\n"
+ u"3 えええ ... せ\n"
u"\n[4 rows x 4 columns]")
assert _rep(df) == expected
df.index = [u'あああ', u'いいいい', u'う', 'aaa']
- expected = (u" a ... ああああ\n"
- u"あああ あああああ ... さ\n"
- u"... ... ... ...\n"
- u"aaa えええ ... せ\n"
+ expected = (u" a ... ああああ\n"
+ u"あああ あああああ ... さ\n"
+ u"... ... ... ...\n"
+ u"aaa えええ ... せ\n"
u"\n[4 rows x 4 columns]")
assert _rep(df) == expected
| Fixes https://github.com/pandas-dev/pandas/issues/24849
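A minimal reproduction, adapted from the regression test added here (assumes a display width small enough to trigger column truncation):
```python
import pandas as pd

# the last column is very wide
df = pd.DataFrame({'a': [108480, 30830], 'b': [12345, 12345],
                   'c': [12345, 12345], 'd': [12345, 12345],
                   'e': ['a' * 50] * 2})
# previously the '...' truncation column was padded to the width of 'e';
# with this fix it has a fixed width
print(df)
```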
We were using the last column to infer the width of the truncation column (the column of `...` shown when not all columns fit). This gives wrong results when the last column is very wide; since the content of this column is always fixed to `...`, we can use a fixed width instead. | https://api.github.com/repos/pandas-dev/pandas/pulls/24905 | 2019-01-24T14:22:18Z | 2019-01-24T15:25:30Z | 2019-01-24T15:25:30Z | 2019-01-24T15:25:35Z
Revert BUG-24212 fix usage of Index.take in pd.merge | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 9198c610f0f44..4f297d376ec57 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1826,7 +1826,6 @@ Reshaping
- Bug in :func:`DataFrame.unstack` where a ``ValueError`` was raised when unstacking timezone aware values (:issue:`18338`)
- Bug in :func:`DataFrame.stack` where timezone aware values were converted to timezone naive values (:issue:`19420`)
- Bug in :func:`merge_asof` where a ``TypeError`` was raised when ``by_col`` were timezone aware values (:issue:`21184`)
-- Bug in :func:`merge` when merging by index name would sometimes result in an incorrectly numbered index (:issue:`24212`)
- Bug showing an incorrect shape when throwing error during ``DataFrame`` construction. (:issue:`20742`)
.. _whatsnew_0240.bug_fixes.sparse:
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index 0a51f2ee0dce7..e11847d2b8ce2 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -757,19 +757,13 @@ def _get_join_info(self):
if self.right_index:
if len(self.left) > 0:
- join_index = self._create_join_index(self.left.index,
- self.right.index,
- left_indexer,
- how='right')
+ join_index = self.left.index.take(left_indexer)
else:
join_index = self.right.index.take(right_indexer)
left_indexer = np.array([-1] * len(join_index))
elif self.left_index:
if len(self.right) > 0:
- join_index = self._create_join_index(self.right.index,
- self.left.index,
- right_indexer,
- how='left')
+ join_index = self.right.index.take(right_indexer)
else:
join_index = self.left.index.take(left_indexer)
right_indexer = np.array([-1] * len(join_index))
@@ -780,37 +774,6 @@ def _get_join_info(self):
join_index = join_index.astype(object)
return join_index, left_indexer, right_indexer
- def _create_join_index(self, index, other_index, indexer, how='left'):
- """
- Create a join index by rearranging one index to match another
-
- Parameters
- ----------
- index: Index being rearranged
- other_index: Index used to supply values not found in index
- indexer: how to rearrange index
- how: replacement is only necessary if indexer based on other_index
-
- Returns
- -------
- join_index
- """
- join_index = index.take(indexer)
- if (self.how in (how, 'outer') and
- not isinstance(other_index, MultiIndex)):
- # if final index requires values in other_index but not target
- # index, indexer may hold missing (-1) values, causing Index.take
- # to take the final value in target index
- mask = indexer == -1
- if np.any(mask):
- # if values missing (-1) from target index,
- # take from other_index instead
- join_list = join_index.to_numpy()
- join_list[mask] = other_index.to_numpy()[mask]
- join_index = Index(join_list, dtype=join_index.dtype,
- name=join_index.name)
- return join_index
-
def _get_merge_keys(self):
"""
Note: has side effects (copy/delete key columns)
diff --git a/pandas/tests/reshape/merge/test_merge.py b/pandas/tests/reshape/merge/test_merge.py
index e123a5171769d..f0a3ddc8ce8a4 100644
--- a/pandas/tests/reshape/merge/test_merge.py
+++ b/pandas/tests/reshape/merge/test_merge.py
@@ -940,6 +940,7 @@ def test_merge_two_empty_df_no_division_error(self):
merge(a, a, on=('a', 'b'))
@pytest.mark.parametrize('how', ['left', 'outer'])
+ @pytest.mark.xfail(reason="GH-24897")
def test_merge_on_index_with_more_values(self, how):
# GH 24212
# pd.merge gets [-1, -1, 0, 1] as right_indexer, ensure that -1 is
@@ -959,6 +960,22 @@ def test_merge_on_index_with_more_values(self, how):
expected.set_index('a', drop=False, inplace=True)
assert_frame_equal(result, expected)
+ def test_merge_right_index_right(self):
+ # Note: the expected output here is probably incorrect.
+ # See https://github.com/pandas-dev/pandas/issues/17257 for more.
+ # We include this as a regression test for GH-24897.
+ left = pd.DataFrame({'a': [1, 2, 3], 'key': [0, 1, 1]})
+ right = pd.DataFrame({'b': [1, 2, 3]})
+
+ expected = pd.DataFrame({'a': [1, 2, 3, None],
+ 'key': [0, 1, 1, 2],
+ 'b': [1, 2, 2, 3]},
+ columns=['a', 'key', 'b'],
+ index=[0, 1, 2, 2])
+ result = left.merge(right, left_on='key', right_index=True,
+ how='right')
+ tm.assert_frame_equal(result, expected)
+
def _check_merge(x, y):
for how in ['inner', 'left', 'outer']:
| xref https://github.com/pandas-dev/pandas/pull/24733/
closes https://github.com/pandas-dev/pandas/issues/24897
reopens https://github.com/pandas-dev/pandas/issues/24212
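The restored behavior is pinned by the new regression test; roughly (per the test's own comment, the expected output is probably incorrect, see GH 17257, but it is the long-standing behavior):
```python
import pandas as pd

left = pd.DataFrame({'a': [1, 2, 3], 'key': [0, 1, 1]})
right = pd.DataFrame({'b': [1, 2, 3]})
# pre-#24733 behavior, restored by this revert
result = left.merge(right, left_on='key', right_index=True, how='right')
print(result)  # the test expects index [0, 1, 2, 2]
```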
| https://api.github.com/repos/pandas-dev/pandas/pulls/24904 | 2019-01-24T14:14:05Z | 2019-01-24T16:45:07Z | 2019-01-24T16:45:06Z | 2019-01-25T12:02:29Z |
DOC: No clean in sphinx_build | diff --git a/doc/make.py b/doc/make.py
index eb4a33a569c5a..bc458d6b53cb0 100755
--- a/doc/make.py
+++ b/doc/make.py
@@ -121,8 +121,6 @@ def _sphinx_build(self, kind):
raise ValueError('kind must be html or latex, '
'not {}'.format(kind))
- self.clean()
-
cmd = ['sphinx-build', '-b', kind]
if self.num_jobs:
cmd += ['-j', str(self.num_jobs)]
| Closes https://github.com/pandas-dev/pandas/issues/24727 | https://api.github.com/repos/pandas-dev/pandas/pulls/24902 | 2019-01-24T13:26:50Z | 2019-01-24T14:44:12Z | 2019-01-24T14:44:11Z | 2019-01-25T12:02:22Z |
[WIP] Excel table output | diff --git a/ci/deps/azure-35-compat.yaml b/ci/deps/azure-35-compat.yaml
index 97c45b2be27d7..6597ceb0ef086 100644
--- a/ci/deps/azure-35-compat.yaml
+++ b/ci/deps/azure-35-compat.yaml
@@ -8,7 +8,7 @@ dependencies:
- jinja2=2.8
- numexpr=2.6.2
- numpy=1.13.3
- - openpyxl=2.4.8
+ - openpyxl=2.5.0
- pytables=3.4.2
- python-dateutil=2.6.1
- python=3.5.3
diff --git a/ci/deps/azure-36-locale.yaml b/ci/deps/azure-36-locale.yaml
index 8f8273f57c3fe..d0d7ffea7e50f 100644
--- a/ci/deps/azure-36-locale.yaml
+++ b/ci/deps/azure-36-locale.yaml
@@ -9,7 +9,7 @@ dependencies:
- lxml
- matplotlib=2.2.2
- numpy=1.14.*
- - openpyxl=2.4.8
+ - openpyxl=2.5.0
- python-dateutil
- python-blosc
- python=3.6.*
diff --git a/doc/source/install.rst b/doc/source/install.rst
index 352b56ebd3020..e1ab22e8bea55 100644
--- a/doc/source/install.rst
+++ b/doc/source/install.rst
@@ -280,7 +280,7 @@ gcsfs 0.2.2 Google Cloud Storage access
html5lib HTML parser for read_html (see :ref:`note <optional_html>`)
lxml 3.8.0 HTML parser for read_html (see :ref:`note <optional_html>`)
matplotlib 2.2.2 Visualization
-openpyxl 2.4.8 Reading / writing for xlsx files
+openpyxl 2.5.0 Reading / writing for xlsx files
pandas-gbq 0.8.0 Google Big Query access
psycopg2 PostgreSQL engine for sqlalchemy
pyarrow 0.9.0 Parquet and feather reading / writing
diff --git a/pandas/compat/_optional.py b/pandas/compat/_optional.py
index cd4e1b7e8aa4d..bc2eacfd6f8cc 100644
--- a/pandas/compat/_optional.py
+++ b/pandas/compat/_optional.py
@@ -14,7 +14,7 @@
"matplotlib": "2.2.2",
"numexpr": "2.6.2",
"odfpy": "1.3.0",
- "openpyxl": "2.4.8",
+ "openpyxl": "2.5.0",
"pandas_gbq": "0.8.0",
"pyarrow": "0.9.0",
"pytables": "3.4.2",
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index b79bde9cc3cb1..30959bd0866bf 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -2185,6 +2185,9 @@ def _repr_data_resource_(self):
.. versionadded:: 0.20.0.
+ table : string, default None
+ Write the dataframe to a named and formatted excel table object
+
See Also
--------
to_csv : Write DataFrame to a comma-separated values (csv) file.
@@ -2249,6 +2252,7 @@ def to_excel(
inf_rep="inf",
verbose=True,
freeze_panes=None,
+ table=None,
):
df = self if isinstance(self, ABCDataFrame) else self.to_frame()
@@ -2272,6 +2276,7 @@ def to_excel(
startcol=startcol,
freeze_panes=freeze_panes,
engine=engine,
+ table=table,
)
def to_json(
diff --git a/pandas/io/excel/_openpyxl.py b/pandas/io/excel/_openpyxl.py
index d8f5da5ab5bc6..eea3d9ece434b 100644
--- a/pandas/io/excel/_openpyxl.py
+++ b/pandas/io/excel/_openpyxl.py
@@ -409,7 +409,11 @@ def write_cells(
row=freeze_panes[0] + 1, column=freeze_panes[1] + 1
)
+ n_cols = 0
+ n_rows = 0
for cell in cells:
+ n_cols = max(n_cols, cell.col)
+ n_rows = max(n_rows, cell.row)
xcell = wks.cell(
row=startrow + cell.row + 1, column=startcol + cell.col + 1
)
@@ -456,6 +460,38 @@ def write_cells(
for k, v in style_kwargs.items():
setattr(xcell, k, v)
+ return wks, n_rows, n_cols, False
+
+ def format_table(
+ self, wks, table_name, table_range, header=True, index=True, first_row=None
+ ):
+ # Format the written cells as table
+
+ from openpyxl.worksheet.table import Table, TableStyleInfo
+ from openpyxl.worksheet.cell_range import CellRange
+
+ ref = str(
+ CellRange(
+ min_row=table_range[0] + 1,
+ min_col=table_range[1] + 1,
+ max_row=table_range[2] + 1,
+ max_col=table_range[3] + 1,
+ )
+ )
+
+ tab = Table(displayName=table_name, ref=ref, headerRowCount=1 if header else 0)
+
+ # Add a default style with striped rows
+ style = TableStyleInfo(
+ name="TableStyleMedium9",
+ showFirstColumn=index,
+ showLastColumn=False,
+ showRowStripes=True,
+ showColumnStripes=False,
+ )
+ tab.tableStyleInfo = style
+ wks.add_table(tab)
+
class _OpenpyxlReader(_BaseExcelReader):
def __init__(self, filepath_or_buffer: FilePathOrBuffer) -> None:
diff --git a/pandas/io/excel/_xlsxwriter.py b/pandas/io/excel/_xlsxwriter.py
index 07bf265da4863..1b6769329c24b 100644
--- a/pandas/io/excel/_xlsxwriter.py
+++ b/pandas/io/excel/_xlsxwriter.py
@@ -211,8 +211,15 @@ def write_cells(
if _validate_freeze_panes(freeze_panes):
wks.freeze_panes(*(freeze_panes))
+ n_cols = 0
+ n_rows = 0
+ first_row = {}
for cell in cells:
+ n_cols = max(n_cols, cell.col)
+ n_rows = max(n_rows, cell.row)
val, fmt = self._value_with_fmt(cell.val)
+ if cell.row == 0:
+ first_row[cell.col] = val
stylekey = json.dumps(cell.style)
if fmt:
@@ -235,3 +242,27 @@ def write_cells(
)
else:
wks.write(startrow + cell.row, startcol + cell.col, val, style)
+
+ return wks, n_rows, n_cols, first_row
+
+ def format_table(
+ self, wks, table_name, table_range, first_row={}, header=True, index=True
+ ):
+ # Format the written cells as table
+ options = dict(
+ autofilter=True,
+ header_row=header,
+ banded_columns=False,
+ banded_rows=True,
+ first_column=index,
+ last_column=False,
+ style="Table Style Medium 9",
+ total_row=False,
+ name=table_name,
+ )
+ if header:
+ options["columns"] = [
+ {"header": first_row[i]} for i in range(len(first_row))
+ ]
+
+ wks.add_table(*table_range, options=options)
diff --git a/pandas/io/formats/excel.py b/pandas/io/formats/excel.py
index 012d2d9358241..e346cc09ddfb4 100644
--- a/pandas/io/formats/excel.py
+++ b/pandas/io/formats/excel.py
@@ -373,6 +373,16 @@ def __init__(
merge_cells=False,
inf_rep="inf",
style_converter=None,
+ header_style={
+ "font": {"bold": True},
+ "borders": {
+ "top": "thin",
+ "right": "thin",
+ "bottom": "thin",
+ "left": "thin",
+ },
+ "alignment": {"horizontal": "center", "vertical": "top"},
+ },
):
self.rowcounter = 0
self.na_rep = na_rep
@@ -408,19 +418,7 @@ def __init__(
self.header = header
self.merge_cells = merge_cells
self.inf_rep = inf_rep
-
- @property
- def header_style(self):
- return {
- "font": {"bold": True},
- "borders": {
- "top": "thin",
- "right": "thin",
- "bottom": "thin",
- "left": "thin",
- },
- "alignment": {"horizontal": "center", "vertical": "top"},
- }
+ self.header_style = header_style
def _format_value(self, val):
if is_scalar(val) and missing.isna(val):
@@ -695,6 +693,7 @@ def write(
startcol=0,
freeze_panes=None,
engine=None,
+ table=None,
):
"""
writer : string or ExcelWriter object
@@ -712,6 +711,9 @@ def write(
write engine to use if writer is a path - you can also set this
via the options ``io.excel.xlsx.writer``, ``io.excel.xls.writer``,
and ``io.excel.xlsm.writer``.
+ table : string, default None
+ Write the dataframe to a named and formatted excel table object
+
"""
from pandas.io.excel import ExcelWriter
from pandas.io.common import _stringify_path
@@ -730,13 +732,27 @@ def write(
writer = ExcelWriter(_stringify_path(writer), engine=engine)
need_save = True
+ if table is not None:
+ self.header_style = {}
formatted_cells = self.get_formatted_cells()
- writer.write_cells(
+
+ worksheet, n_rows, n_cols, first_row = writer.write_cells(
formatted_cells,
sheet_name,
startrow=startrow,
startcol=startcol,
freeze_panes=freeze_panes,
)
+
+ if table is not None:
+ table_range = (startrow, startcol, startrow + n_rows, startcol + n_cols)
+ writer.format_table(
+ worksheet,
+ table_name=table,
+ table_range=table_range,
+ first_row=first_row,
+ header=self.header,
+ index=self.index,
+ )
if need_save:
writer.save()
diff --git a/pandas/tests/io/excel/test_writers.py b/pandas/tests/io/excel/test_writers.py
index 0908ed885a6ca..1be75310c482c 100644
--- a/pandas/tests/io/excel/test_writers.py
+++ b/pandas/tests/io/excel/test_writers.py
@@ -1212,6 +1212,67 @@ def test_raise_when_saving_timezones(self, engine, ext, dtype, tz_aware_fixture)
df.to_excel(self.path)
+@td.skip_if_no("xlrd")
+@td.skip_if_no("openpyxl")
+@pytest.mark.parametrize(
+ "engine,ext",
+ [
+ pytest.param("openpyxl", ".xlsx"),
+ pytest.param("openpyxl", ".xlsm"),
+ pytest.param("xlsxwriter", ".xlsx", marks=td.skip_if_no("xlsxwriter")),
+ ],
+)
+class TestTable(_WriterBase):
+ def read_table(self, tablename):
+ from openpyxl import load_workbook
+
+ wbk = load_workbook(self.path, data_only=True, read_only=False)
+
+ # first discover all tables in workbook
+ tables = {}
+ for wks in wbk:
+ for table in wks._tables:
+ tables[table.name] = (table, wks)
+
+ # then retrieve the desired one
+ table, wks = tables[tablename]
+
+ columns = [col.name for col in table.tableColumns]
+ data_rows = wks[table.ref][
+ (table.headerRowCount or 0) : -table.totalsRowCount
+ if table.totalsRowCount is not None
+ else None
+ ]
+
+ data = [[cell.value for cell in row] for row in data_rows]
+ frame = DataFrame(data, columns=columns, index=None)
+
+ if table.tableStyleInfo.showFirstColumn:
+ frame = frame.set_index(columns[0])
+
+ return frame
+
+ @pytest.mark.parametrize("header", (True, False))
+ @pytest.mark.parametrize("index", (True, False))
+ def test_excel_table_options(self, header, index):
+ df = DataFrame(np.random.randn(2, 4))
+
+ df.columns = ["1", "2", "a", "b"]
+ df.index.name = "foo"
+
+ df.to_excel(self.path, header=header, index=index, table="TestTable1")
+ result = self.read_table("TestTable1")
+ if not header:
+ result.columns = df.columns
+ if index:
+ result.index.name = df.index.name
+
+ if not index:
+ result.index = df.index
+
+ tm.assert_frame_equal(df, result)
+
+
class TestExcelWriterEngineTests:
@pytest.mark.parametrize(
"klass,ext",
| - [x] closes #24862
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
This PR should allow writing Excel table objects. Proof of concept implemented for XlsxWriter and OpenPyXL. Exactly which options of the `to_excel` API should be supported is up for discussion, see #24862.
```python
import pandas as pd
data = {'B': dict(col1=1), 'A': dict(col1=2), 'C': dict(col1=3, col2=4.1)}
df = pd.DataFrame.from_dict(data, orient='index')
# `table` takes the name of the Excel table object, matching this diff's API
df.to_excel('test1.xlsx', engine='xlsxwriter', table='Table1', index=True)
df.to_excel('test2.xlsx', engine='openpyxl', table='Table2', index=False)
```
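The new tests verify the output by reading the table back with openpyxl; a condensed sketch of that round-trip check (it relies on the private `_tables` attribute, as the test does):
```python
from openpyxl import load_workbook

wbk = load_workbook('test2.xlsx', data_only=True)
for wks in wbk:
    for tab in wks._tables:  # private attribute, mirroring the test
        print(tab.name, tab.ref)
```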
| https://api.github.com/repos/pandas-dev/pandas/pulls/24899 | 2019-01-24T09:45:33Z | 2019-08-24T08:02:00Z | null | 2019-08-24T08:02:00Z |
BUG: support dtypes in column_dtypes for to_records() | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
index a9fa8b2174dd0..867007b2ba7f5 100644
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -187,7 +187,7 @@ Reshaping
^^^^^^^^^
- Bug in :func:`merge` when merging by index name would sometimes result in an incorrectly numbered index (:issue:`24212`)
--
+- :func:`to_records` now accepts dtypes to its `column_dtypes` parameter (:issue:`24895`)
-
@@ -213,4 +213,3 @@ Contributors
~~~~~~~~~~~~
.. contributors:: v0.24.x..HEAD
-
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 28c6f3c23a3ce..2049a8aa960bf 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1719,7 +1719,8 @@ def to_records(self, index=True, convert_datetime64=None,
# string naming a type.
if dtype_mapping is None:
formats.append(v.dtype)
- elif isinstance(dtype_mapping, (type, compat.string_types)):
+ elif isinstance(dtype_mapping, (type, np.dtype,
+ compat.string_types)):
formats.append(dtype_mapping)
else:
element = "row" if i < index_len else "column"
diff --git a/pandas/tests/frame/test_convert_to.py b/pandas/tests/frame/test_convert_to.py
index 7b98395dd6dec..601a4c6b72fe3 100644
--- a/pandas/tests/frame/test_convert_to.py
+++ b/pandas/tests/frame/test_convert_to.py
@@ -10,7 +10,9 @@
from pandas.compat import long
-from pandas import DataFrame, MultiIndex, Series, Timestamp, compat, date_range
+from pandas import (
+ CategoricalDtype, DataFrame, MultiIndex, Series, Timestamp, compat,
+ date_range)
from pandas.tests.frame.common import TestData
import pandas.util.testing as tm
@@ -220,6 +222,12 @@ def test_to_records_with_categorical(self):
dtype=[("index", "<i8"), ("A", "<U"),
("B", "<U"), ("C", "<U")])),
+ # Pass in a dtype instance.
+ (dict(column_dtypes=np.dtype('unicode')),
+ np.rec.array([("0", "1", "0.2", "a"), ("1", "2", "1.5", "bc")],
+ dtype=[("index", "<i8"), ("A", "<U"),
+ ("B", "<U"), ("C", "<U")])),
+
# Pass in a dictionary (name-only).
(dict(column_dtypes={"A": np.int8, "B": np.float32, "C": "<U2"}),
np.rec.array([("0", "1", "0.2", "a"), ("1", "2", "1.5", "bc")],
@@ -249,6 +257,12 @@ def test_to_records_with_categorical(self):
dtype=[("index", "<i8"), ("A", "i1"),
("B", "<f4"), ("C", "O")])),
+ # Names / indices not in dtype mapping default to array dtype.
+ (dict(column_dtypes={"A": np.dtype('int8'), "B": np.dtype('float32')}),
+ np.rec.array([("0", "1", "0.2", "a"), ("1", "2", "1.5", "bc")],
+ dtype=[("index", "<i8"), ("A", "i1"),
+ ("B", "<f4"), ("C", "O")])),
+
# Mixture of everything.
(dict(column_dtypes={"A": np.int8, "B": np.float32},
index_dtypes="<U2"),
@@ -258,17 +272,26 @@ def test_to_records_with_categorical(self):
# Invalid dype values.
(dict(index=False, column_dtypes=list()),
- "Invalid dtype \\[\\] specified for column A"),
+ (ValueError, "Invalid dtype \\[\\] specified for column A")),
(dict(index=False, column_dtypes={"A": "int32", "B": 5}),
- "Invalid dtype 5 specified for column B"),
+ (ValueError, "Invalid dtype 5 specified for column B")),
+
+ # Numpy can't handle EA types, so check error is raised
+ (dict(index=False, column_dtypes={"A": "int32",
+ "B": CategoricalDtype(['a', 'b'])}),
+ (ValueError, 'Invalid dtype category specified for column B')),
+
+ # Check that bad types raise
+ (dict(index=False, column_dtypes={"A": "int32", "B": "foo"}),
+ (TypeError, 'data type "foo" not understood')),
])
def test_to_records_dtype(self, kwargs, expected):
# see gh-18146
df = DataFrame({"A": [1, 2], "B": [0.2, 1.5], "C": ["a", "bc"]})
- if isinstance(expected, str):
- with pytest.raises(ValueError, match=expected):
+ if not isinstance(expected, np.recarray):
+ with pytest.raises(expected[0], match=expected[1]):
df.to_records(**kwargs)
else:
result = df.to_records(**kwargs)
| Despite the name, the `column_dtypes` param in `DataFrame.to_records()` doesn't accept `np.dtype` objects:
```
In [3]: df = pd.DataFrame([[0, 1], [2, 3]])
In [4]: df.to_records(column_dtypes=df.dtypes)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-29-e3541f39084e> in <module>()
----> 1 df.to_records(column_dtypes=df.dtypes)
~/projects/pandas/pandas/core/frame.py in to_records(self, index, convert_datetime64, column_dtypes, index_dtypes)
1724 msg = ("Invalid dtype {dtype} specified for "
1725 "{element} {name}").format(dtype=dtype_mapping,
-> 1726 element=element, name=name)
1727 raise ValueError(msg)
1728
ValueError: Invalid dtype int64 specified for column 0
```
The above also fails when `df.dtypes` is cast into a `dict` for overriding.
This PR simply adds `np.dtype` to the accepted types and adopts a subset of the tests that currently cover the numpy scalar types (`np.int64` and similar, which are not dtypes).
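With the change in place, dtype instances work the same as type objects and strings; a short sketch mirroring the added tests:
```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": [1, 2], "B": [0.2, 1.5]})
rec = df.to_records(index=False,
                    column_dtypes={"A": np.dtype("int8"),
                                   "B": np.dtype("float32")})
print(rec.dtype)  # the tests expect [('A', 'i1'), ('B', '<f4')]
```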
- [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/24895 | 2019-01-24T08:31:56Z | 2019-01-30T22:29:35Z | 2019-01-30T22:29:35Z | 2019-01-30T22:50:27Z |
CLN: fix typo in ctors.SeriesDtypesConstructors setup | diff --git a/asv_bench/benchmarks/ctors.py b/asv_bench/benchmarks/ctors.py
index 9082b4186bfa4..5715c4fb2d0d4 100644
--- a/asv_bench/benchmarks/ctors.py
+++ b/asv_bench/benchmarks/ctors.py
@@ -72,7 +72,7 @@ class SeriesDtypesConstructors(object):
def setup(self):
N = 10**4
- self.arr = np.random.randn(N, N)
+ self.arr = np.random.randn(N)
self.arr_str = np.array(['foo', 'bar', 'baz'], dtype=object)
self.s = Series([Timestamp('20110101'), Timestamp('20120101'),
Timestamp('20130101')] * N * 10)
| Our benchmark for creating an `Index` from an array of floats does so from a **2-d** array, seemingly dating from when this benchmark tested `DataFrame` creation (see https://github.com/pandas-dev/pandas/commit/04db779d4c93d286bb0ab87780a85d50ec490266#diff-408a9d5de471447a365854703ca1856dL22).
Changing this to a 1-d array has no significant impact on benchmark times while reducing setup by ~40 seconds.
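(For N = 10**4, `np.random.randn(N, N)` fills 10**8 float64 values, roughly 800 MB, while `np.random.randn(N)` needs only about 80 kB, which accounts for the setup-time gap.)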
Before:
```
$ time asv dev -b ctors.SeriesDtypes
· Discovering benchmarks
· Running 4 total benchmarks (1 commits * 1 environments * 4 benchmarks)
[ 0.00%] ·· Benchmarking existing-py_home_chris_anaconda3_bin_python
[ 12.50%] ··· ctors.SeriesDtypesConstructors.time_dtindex_from_index_with_series 2.44±0ms
[ 25.00%] ··· ctors.SeriesDtypesConstructors.time_dtindex_from_series 2.25±0ms
[ 37.50%] ··· ctors.SeriesDtypesConstructors.time_index_from_array_floats 760±0μs
[ 50.00%] ··· ctors.SeriesDtypesConstructors.time_index_from_array_string 942±0μs
real 0m51.040s
user 0m39.813s
sys 0m10.172s
```
After:
```
$ time asv dev -b ctors.SeriesDtypes
· Discovering benchmarks
· Running 4 total benchmarks (1 commits * 1 environments * 4 benchmarks)
[ 0.00%] ·· Benchmarking existing-py_home_chris_anaconda3_bin_python
[ 12.50%] ··· ctors.SeriesDtypesConstructors.time_dtindex_from_index_with_series 2.37±0ms
[ 25.00%] ··· ctors.SeriesDtypesConstructors.time_dtindex_from_series 2.33±0ms
[ 37.50%] ··· ctors.SeriesDtypesConstructors.time_index_from_array_floats 763±0μs
[ 50.00%] ··· ctors.SeriesDtypesConstructors.time_index_from_array_string 885±0μs
real 0m13.000s
user 0m9.031s
sys 0m3.547s
```
- [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/24894 | 2019-01-24T07:19:39Z | 2019-01-24T12:44:55Z | 2019-01-24T12:44:55Z | 2019-01-24T12:44:55Z |
DOC: Final reorganization of documentation pages | diff --git a/doc/redirects.csv b/doc/redirects.csv
index 4f4b3d7fc0780..8c62ecc362ccd 100644
--- a/doc/redirects.csv
+++ b/doc/redirects.csv
@@ -4,6 +4,10 @@
# getting started
10min,getting_started/10min
basics,getting_started/basics
+comparison_with_r,getting_started/comparison/comparison_with_r
+comparison_with_sql,getting_started/comparison/comparison_with_sql
+comparison_with_sas,getting_started/comparison/comparison_with_sas
+comparison_with_stata,getting_started/comparison/comparison_with_stata
dsintro,getting_started/dsintro
overview,getting_started/overview
tutorials,getting_started/tutorials
@@ -12,6 +16,7 @@ tutorials,getting_started/tutorials
advanced,user_guide/advanced
categorical,user_guide/categorical
computation,user_guide/computation
+cookbook,user_guide/cookbook
enhancingperf,user_guide/enhancingperf
gotchas,user_guide/gotchas
groupby,user_guide/groupby
diff --git a/doc/source/comparison_with_r.rst b/doc/source/getting_started/comparison/comparison_with_r.rst
similarity index 100%
rename from doc/source/comparison_with_r.rst
rename to doc/source/getting_started/comparison/comparison_with_r.rst
diff --git a/doc/source/comparison_with_sas.rst b/doc/source/getting_started/comparison/comparison_with_sas.rst
similarity index 100%
rename from doc/source/comparison_with_sas.rst
rename to doc/source/getting_started/comparison/comparison_with_sas.rst
diff --git a/doc/source/comparison_with_sql.rst b/doc/source/getting_started/comparison/comparison_with_sql.rst
similarity index 100%
rename from doc/source/comparison_with_sql.rst
rename to doc/source/getting_started/comparison/comparison_with_sql.rst
diff --git a/doc/source/comparison_with_stata.rst b/doc/source/getting_started/comparison/comparison_with_stata.rst
similarity index 100%
rename from doc/source/comparison_with_stata.rst
rename to doc/source/getting_started/comparison/comparison_with_stata.rst
diff --git a/doc/source/getting_started/comparison/index.rst b/doc/source/getting_started/comparison/index.rst
new file mode 100644
index 0000000000000..998706ce0c639
--- /dev/null
+++ b/doc/source/getting_started/comparison/index.rst
@@ -0,0 +1,15 @@
+{{ header }}
+
+.. _comparison:
+
+===========================
+Comparison with other tools
+===========================
+
+.. toctree::
+ :maxdepth: 2
+
+ comparison_with_r
+ comparison_with_sql
+ comparison_with_sas
+ comparison_with_stata
diff --git a/doc/source/getting_started/index.rst b/doc/source/getting_started/index.rst
index 116efe79beef1..4c5d26461a667 100644
--- a/doc/source/getting_started/index.rst
+++ b/doc/source/getting_started/index.rst
@@ -13,4 +13,5 @@ Getting started
10min
basics
dsintro
+ comparison/index
tutorials
diff --git a/doc/source/getting_started/overview.rst b/doc/source/getting_started/overview.rst
index 1e07df47aadca..b531f686951fc 100644
--- a/doc/source/getting_started/overview.rst
+++ b/doc/source/getting_started/overview.rst
@@ -6,25 +6,80 @@
Package overview
****************
-:mod:`pandas` is an open source, BSD-licensed library providing high-performance,
-easy-to-use data structures and data analysis tools for the `Python <https://www.python.org/>`__
-programming language.
-
-:mod:`pandas` consists of the following elements:
-
-* A set of labeled array data structures, the primary of which are
- Series and DataFrame.
-* Index objects enabling both simple axis indexing and multi-level /
- hierarchical axis indexing.
-* An integrated group by engine for aggregating and transforming data sets.
-* Date range generation (date_range) and custom date offsets enabling the
- implementation of customized frequencies.
-* Input/Output tools: loading tabular data from flat files (CSV, delimited,
- Excel 2003), and saving and loading pandas objects from the fast and
- efficient PyTables/HDF5 format.
-* Memory-efficient "sparse" versions of the standard data structures for storing
- data that is mostly missing or mostly constant (some fixed value).
-* Moving window statistics (rolling mean, rolling standard deviation, etc.).
+**pandas** is a `Python <https://www.python.org>`__ package providing fast,
+flexible, and expressive data structures designed to make working with
+"relational" or "labeled" data both easy and intuitive. It aims to be the
+fundamental high-level building block for doing practical, **real world** data
+analysis in Python. Additionally, it has the broader goal of becoming **the
+most powerful and flexible open source data analysis / manipulation tool
+available in any language**. It is already well on its way toward this goal.
+
+pandas is well suited for many different kinds of data:
+
+ - Tabular data with heterogeneously-typed columns, as in an SQL table or
+ Excel spreadsheet
+ - Ordered and unordered (not necessarily fixed-frequency) time series data.
+ - Arbitrary matrix data (homogeneously typed or heterogeneous) with row and
+ column labels
+ - Any other form of observational / statistical data sets. The data actually
+ need not be labeled at all to be placed into a pandas data structure
+
+The two primary data structures of pandas, :class:`Series` (1-dimensional)
+and :class:`DataFrame` (2-dimensional), handle the vast majority of typical use
+cases in finance, statistics, social science, and many areas of
+engineering. For R users, :class:`DataFrame` provides everything that R's
+``data.frame`` provides and much more. pandas is built on top of `NumPy
+<https://www.numpy.org>`__ and is intended to integrate well within a scientific
+computing environment with many other 3rd party libraries.
+
+Here are just a few of the things that pandas does well:
+
+ - Easy handling of **missing data** (represented as NaN) in floating point as
+ well as non-floating point data
+ - Size mutability: columns can be **inserted and deleted** from DataFrame and
+ higher dimensional objects
+ - Automatic and explicit **data alignment**: objects can be explicitly
+ aligned to a set of labels, or the user can simply ignore the labels and
+ let `Series`, `DataFrame`, etc. automatically align the data for you in
+ computations
+ - Powerful, flexible **group by** functionality to perform
+ split-apply-combine operations on data sets, for both aggregating and
+ transforming data
+ - Make it **easy to convert** ragged, differently-indexed data in other
+ Python and NumPy data structures into DataFrame objects
+ - Intelligent label-based **slicing**, **fancy indexing**, and **subsetting**
+ of large data sets
+ - Intuitive **merging** and **joining** data sets
+ - Flexible **reshaping** and pivoting of data sets
+ - **Hierarchical** labeling of axes (possible to have multiple labels per
+ tick)
+ - Robust IO tools for loading data from **flat files** (CSV and delimited),
+ Excel files, databases, and saving / loading data from the ultrafast **HDF5
+ format**
+ - **Time series**-specific functionality: date range generation and frequency
+ conversion, moving window statistics, moving window linear regressions,
+ date shifting and lagging, etc.
+
+Many of these principles are here to address the shortcomings frequently
+experienced using other languages / scientific research environments. For data
+scientists, working with data is typically divided into multiple stages:
+munging and cleaning data, analyzing / modeling it, then organizing the results
+of the analysis into a form suitable for plotting or tabular display. pandas
+is the ideal tool for all of these tasks.
+
+Some other notes
+
+ - pandas is **fast**. Many of the low-level algorithmic bits have been
+ extensively tweaked in `Cython <https://cython.org>`__ code. However, as with
+ anything else generalization usually sacrifices performance. So if you focus
+ on one feature for your application you may be able to create a faster
+ specialized tool.
+
+ - pandas is a dependency of `statsmodels
+ <https://www.statsmodels.org/stable/index.html>`__, making it an important part of the
+ statistical computing ecosystem in Python.
+
+ - pandas has been used extensively in production in financial applications.
Data Structures
---------------
diff --git a/doc/source/index.rst.template b/doc/source/index.rst.template
index bc420a906b59c..ab51911a610e3 100644
--- a/doc/source/index.rst.template
+++ b/doc/source/index.rst.template
@@ -22,93 +22,15 @@ pandas: powerful Python data analysis toolkit
**Developer Mailing List:** https://groups.google.com/forum/#!forum/pydata
-**pandas** is a `Python <https://www.python.org>`__ package providing fast,
-flexible, and expressive data structures designed to make working with
-"relational" or "labeled" data both easy and intuitive. It aims to be the
-fundamental high-level building block for doing practical, **real world** data
-analysis in Python. Additionally, it has the broader goal of becoming **the
-most powerful and flexible open source data analysis / manipulation tool
-available in any language**. It is already well on its way toward this goal.
-
-pandas is well suited for many different kinds of data:
-
- - Tabular data with heterogeneously-typed columns, as in an SQL table or
- Excel spreadsheet
- - Ordered and unordered (not necessarily fixed-frequency) time series data.
- - Arbitrary matrix data (homogeneously typed or heterogeneous) with row and
- column labels
- - Any other form of observational / statistical data sets. The data actually
- need not be labeled at all to be placed into a pandas data structure
-
-The two primary data structures of pandas, :class:`Series` (1-dimensional)
-and :class:`DataFrame` (2-dimensional), handle the vast majority of typical use
-cases in finance, statistics, social science, and many areas of
-engineering. For R users, :class:`DataFrame` provides everything that R's
-``data.frame`` provides and much more. pandas is built on top of `NumPy
-<https://www.numpy.org>`__ and is intended to integrate well within a scientific
-computing environment with many other 3rd party libraries.
-
-Here are just a few of the things that pandas does well:
-
- - Easy handling of **missing data** (represented as NaN) in floating point as
- well as non-floating point data
- - Size mutability: columns can be **inserted and deleted** from DataFrame and
- higher dimensional objects
- - Automatic and explicit **data alignment**: objects can be explicitly
- aligned to a set of labels, or the user can simply ignore the labels and
- let `Series`, `DataFrame`, etc. automatically align the data for you in
- computations
- - Powerful, flexible **group by** functionality to perform
- split-apply-combine operations on data sets, for both aggregating and
- transforming data
- - Make it **easy to convert** ragged, differently-indexed data in other
- Python and NumPy data structures into DataFrame objects
- - Intelligent label-based **slicing**, **fancy indexing**, and **subsetting**
- of large data sets
- - Intuitive **merging** and **joining** data sets
- - Flexible **reshaping** and pivoting of data sets
- - **Hierarchical** labeling of axes (possible to have multiple labels per
- tick)
- - Robust IO tools for loading data from **flat files** (CSV and delimited),
- Excel files, databases, and saving / loading data from the ultrafast **HDF5
- format**
- - **Time series**-specific functionality: date range generation and frequency
- conversion, moving window statistics, moving window linear regressions,
- date shifting and lagging, etc.
-
-Many of these principles are here to address the shortcomings frequently
-experienced using other languages / scientific research environments. For data
-scientists, working with data is typically divided into multiple stages:
-munging and cleaning data, analyzing / modeling it, then organizing the results
-of the analysis into a form suitable for plotting or tabular display. pandas
-is the ideal tool for all of these tasks.
-
-Some other notes
-
- - pandas is **fast**. Many of the low-level algorithmic bits have been
- extensively tweaked in `Cython <https://cython.org>`__ code. However, as with
- anything else generalization usually sacrifices performance. So if you focus
- on one feature for your application you may be able to create a faster
- specialized tool.
-
- - pandas is a dependency of `statsmodels
- <https://www.statsmodels.org/stable/index.html>`__, making it an important part of the
- statistical computing ecosystem in Python.
-
- - pandas has been used extensively in production in financial applications.
-
-.. note::
-
- This documentation assumes general familiarity with NumPy. If you haven't
- used NumPy much or at all, do invest some time in `learning about NumPy
- <https://docs.scipy.org>`__ first.
-
-See the package overview for more detail about what's in the library.
+:mod:`pandas` is an open source, BSD-licensed library providing high-performance,
+easy-to-use data structures and data analysis tools for the `Python <https://www.python.org/>`__
+programming language.
+See the :ref:`overview` for more detail about what's in the library.
{% if single_doc and single_doc.endswith('.rst') -%}
.. toctree::
- :maxdepth: 4
+ :maxdepth: 2
{{ single_doc[:-4] }}
{% elif single_doc %}
@@ -118,21 +40,15 @@ See the package overview for more detail about what's in the library.
{{ single_doc }}
{% else -%}
.. toctree::
- :maxdepth: 4
+ :maxdepth: 2
{% endif %}
{% if not single_doc -%}
What's New <whatsnew/v0.24.0>
install
getting_started/index
- cookbook
user_guide/index
- r_interface
ecosystem
- comparison_with_r
- comparison_with_sql
- comparison_with_sas
- comparison_with_stata
{% endif -%}
{% if include_api -%}
api/index
diff --git a/doc/source/r_interface.rst b/doc/source/r_interface.rst
deleted file mode 100644
index 9839bba4884d4..0000000000000
--- a/doc/source/r_interface.rst
+++ /dev/null
@@ -1,94 +0,0 @@
-.. _rpy:
-
-{{ header }}
-
-******************
-rpy2 / R interface
-******************
-
-.. warning::
-
- Up to pandas 0.19, a ``pandas.rpy`` module existed with functionality to
- convert between pandas and ``rpy2`` objects. This functionality now lives in
- the `rpy2 <https://rpy2.readthedocs.io/>`__ project itself.
- See the `updating section <http://pandas.pydata.org/pandas-docs/version/0.19.0/r_interface.html#updating-your-code-to-use-rpy2-functions>`__
- of the previous documentation for a guide to port your code from the
- removed ``pandas.rpy`` to ``rpy2`` functions.
-
-
-`rpy2 <http://rpy2.bitbucket.org/>`__ is an interface to R running embedded in a Python process, and also includes functionality to deal with pandas DataFrames.
-Converting data frames back and forth between rpy2 and pandas should be largely
-automated (no need to convert explicitly, it will be done on the fly in most
-rpy2 functions).
-To convert explicitly, the functions are ``pandas2ri.py2ri()`` and
-``pandas2ri.ri2py()``.
-
-
-See also the documentation of the `rpy2 <http://rpy2.bitbucket.org/>`__ project: https://rpy2.readthedocs.io.
-
-In the remainder of this page, a few examples of explicit conversion is given. The pandas conversion of rpy2 needs first to be activated:
-
-.. ipython::
- :verbatim:
-
- In [1]: from rpy2.robjects import pandas2ri
- ...: pandas2ri.activate()
-
-Transferring R data sets into Python
-------------------------------------
-
-Once the pandas conversion is activated (``pandas2ri.activate()``), many conversions
-of R to pandas objects will be done automatically. For example, to obtain the 'iris' dataset as a pandas DataFrame:
-
-.. ipython::
- :verbatim:
-
- In [2]: from rpy2.robjects import r
-
- In [3]: r.data('iris')
-
- In [4]: r['iris'].head()
- Out[4]:
- Sepal.Length Sepal.Width Petal.Length Petal.Width Species
- 0 5.1 3.5 1.4 0.2 setosa
- 1 4.9 3.0 1.4 0.2 setosa
- 2 4.7 3.2 1.3 0.2 setosa
- 3 4.6 3.1 1.5 0.2 setosa
- 4 5.0 3.6 1.4 0.2 setosa
-
-If the pandas conversion was not activated, the above could also be accomplished
-by explicitly converting it with the ``pandas2ri.ri2py`` function
-(``pandas2ri.ri2py(r['iris'])``).
-
-Converting DataFrames into R objects
-------------------------------------
-
-The ``pandas2ri.py2ri`` function support the reverse operation to convert
-DataFrames into the equivalent R object (that is, **data.frame**):
-
-.. ipython::
- :verbatim:
-
- In [5]: df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6], 'C': [7, 8, 9]},
- ...: index=["one", "two", "three"])
-
- In [6]: r_dataframe = pandas2ri.py2ri(df)
-
- In [7]: print(type(r_dataframe))
- Out[7]: <class 'rpy2.robjects.vectors.DataFrame'>
-
- In [8]: print(r_dataframe)
- Out[8]:
- A B C
- one 1 4 7
- two 2 5 8
- three 3 6 9
-
-
-The DataFrame's index is stored as the ``rownames`` attribute of the
-data.frame instance.
-
-
-..
- Calling R functions with pandas objects
- High-level interface to R estimators
diff --git a/doc/source/cookbook.rst b/doc/source/user_guide/cookbook.rst
similarity index 100%
rename from doc/source/cookbook.rst
rename to doc/source/user_guide/cookbook.rst
diff --git a/doc/source/user_guide/index.rst b/doc/source/user_guide/index.rst
index 60e722808d647..d39cf7103ab63 100644
--- a/doc/source/user_guide/index.rst
+++ b/doc/source/user_guide/index.rst
@@ -37,3 +37,4 @@ Further information on any specific method can be obtained in the
enhancingperf
sparse
gotchas
+ cookbook
diff --git a/doc/source/user_guide/style.ipynb b/doc/source/user_guide/style.ipynb
index a238c3b16e9ad..79a9848704eec 100644
--- a/doc/source/user_guide/style.ipynb
+++ b/doc/source/user_guide/style.ipynb
@@ -1133,7 +1133,7 @@
"metadata": {},
"outputs": [],
"source": [
- "with open(\"template_structure.html\") as f:\n",
+ "with open(\"templates/template_structure.html\") as f:\n",
" structure = f.read()\n",
" \n",
"HTML(structure)"
diff --git a/doc/source/templates/myhtml.tpl b/doc/source/user_guide/templates/myhtml.tpl
similarity index 100%
rename from doc/source/templates/myhtml.tpl
rename to doc/source/user_guide/templates/myhtml.tpl
diff --git a/doc/source/template_structure.html b/doc/source/user_guide/templates/template_structure.html
similarity index 100%
rename from doc/source/template_structure.html
rename to doc/source/user_guide/templates/template_structure.html
| - [X] xref #24499
You can see the changes here: https://datapythonista.github.io/pandas-doc-preview/
Pages moved:
- comparison_with_whatever -> getting_started/comparison/comparison_with_whatever
- ecosystem -> getting_started/ecosystem
- cookbook -> user_guide/cookbook
- rpy2 has been removed
Changes to the documentation home page:
- Moved the long description to the Package overview page, leaving a short paragraph and a link to it
- Made the toctree show just 2 levels, so it is useful and it is clear what is in it
It may be useful to compare with how the master docs look: https://pandas-docs.github.io/pandas-docs-travis/
I will follow up with:
- A change of the style (the one we saw in the Austin sprint)
- An integration with the pandas website (pandas.pydata.org). I'll create an issue for this first, as some discussion is needed.
@TomAugspurger @jorisvandenbossche @jreback
| https://api.github.com/repos/pandas-dev/pandas/pulls/24890 | 2019-01-24T00:26:41Z | 2019-01-25T12:40:38Z | 2019-01-25T12:40:38Z | 2019-01-29T17:33:44Z |
DOC: fixups | diff --git a/doc/source/api/scalars.rst b/doc/source/api/scalars.rst
deleted file mode 100644
index e69de29bb2d1d..0000000000000
diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 3dd345890881c..9198c610f0f44 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -13,6 +13,9 @@ What's New in 0.24.0 (January XX, 2019)
These are the changes in pandas 0.24.0. See :ref:`release` for a full changelog
including other versions of pandas.
+Enhancements
+~~~~~~~~~~~~
+
Highlights include
* :ref:`Optional Nullable Integer Support <whatsnew_0240.enhancements.intna>`
@@ -1165,7 +1168,7 @@ Other API Changes
.. _whatsnew_0240.api.extension:
ExtensionType Changes
-^^^^^^^^^^^^^^^^^^^^^
+~~~~~~~~~~~~~~~~~~~~~
**Equality and Hashability**
| * Fixed heading on whatsnew
* Removed empty scalars.rst
If you look at http://pandas-docs.github.io/pandas-docs-travis/whatsnew/v0.24.0.html, you'll see that the TOC-tree is messed up. It includes every "level-3" header, since the highlights aren't nested under a "level-2" header. | https://api.github.com/repos/pandas-dev/pandas/pulls/24888 | 2019-01-23T20:51:12Z | 2019-01-23T21:37:15Z | 2019-01-23T21:37:15Z | 2019-01-23T21:37:20Z |
TST: fixturize test_to_html alignment checks | diff --git a/pandas/tests/io/formats/conftest.py b/pandas/tests/io/formats/conftest.py
new file mode 100644
index 0000000000000..df0393d7ee71f
--- /dev/null
+++ b/pandas/tests/io/formats/conftest.py
@@ -0,0 +1,166 @@
+from io import open
+
+import numpy as np
+import pytest
+
+from pandas import DataFrame, Index, MultiIndex
+
+
+@pytest.fixture
+def expected_html(datapath):
+ def _expected_html(name):
+ """
+ Read HTML file from formats data directory.
+
+ Parameters
+ ----------
+ name : str
+ The name of the HTML file without the suffix.
+
+ Returns
+ -------
+ str : contents of HTML file.
+ """
+ filename = '.'.join([name, 'html'])
+ filepath = datapath('io', 'formats', 'data', 'html', filename)
+ with open(filepath, encoding='utf-8') as f:
+ html = f.read()
+ return html.rstrip()
+ return _expected_html
+
+
+@pytest.fixture(params=[True, False])
+def index_names_fixture(request):
+ return request.param
+
+
+@pytest.fixture(params=[True, False])
+def header_fixture(request):
+ return request.param
+
+
+@pytest.fixture(params=[True, False])
+def index_fixture(request):
+ return request.param
+
+
+@pytest.fixture(params=['standard', 'multi'])
+def index_type_fixture(request):
+ return request.param
+
+
+@pytest.fixture(params=['standard', 'multi'])
+def columns_index_type_fixture(request):
+ return request.param
+
+
+@pytest.fixture(params=['unnamed', 'named'])
+def index_naming_fixture(request):
+ return request.param
+
+
+@pytest.fixture(params=['unnamed', 'named'])
+def columns_index_naming_fixture(request):
+ return request.param
+
+
+@pytest.fixture(params=[[0, 1]])
+def index_labels_fixture(request):
+ return request.param
+
+
+def _df_index(index_labels, index_naming, index_name):
+ if index_naming == 'unnamed':
+ return Index(index_labels)
+ return Index(index_labels, name=index_name)
+
+
+@pytest.fixture()
+def df_index_fixture(index_labels_fixture, index_naming_fixture):
+ return _df_index(index_labels_fixture, index_naming_fixture, 'index.name')
+
+
+@pytest.fixture()
+def df_columns_index_fixture(
+ index_labels_fixture, columns_index_naming_fixture):
+ return _df_index(index_labels_fixture, columns_index_naming_fixture,
+ 'columns.name')
+
+
+@pytest.fixture(params=[[['a'], ['b', 'c']]])
+def multi_index_labels_fixture(request):
+ return request.param
+
+
+def _df_multi_index(multi_index_labels, index_naming, index_names):
+ if index_naming == 'unnamed':
+ return MultiIndex.from_product(multi_index_labels)
+ return MultiIndex.from_product(multi_index_labels, names=index_names)
+
+
+@pytest.fixture(params=[['index.name.0', 'index.name.1']])
+def df_multi_index_fixture(
+ request, multi_index_labels_fixture, index_naming_fixture):
+ names = request.param
+ return _df_multi_index(
+ multi_index_labels_fixture, index_naming_fixture, names)
+
+
+@pytest.fixture(params=[['columns.name.0', 'columns.name.1']])
+def df_columns_multi_index_fixture(
+ request, multi_index_labels_fixture, columns_index_naming_fixture):
+ names = request.param
+ return _df_multi_index(
+ multi_index_labels_fixture, columns_index_naming_fixture, names)
+
+
+@pytest.fixture()
+def df_indexes_fixture(
+ index_type_fixture, df_index_fixture, df_multi_index_fixture):
+ if index_type_fixture == 'multi':
+ return df_multi_index_fixture
+ return df_index_fixture
+
+
+@pytest.fixture()
+def df_columns_indexes_fixture(
+ columns_index_type_fixture, df_columns_index_fixture,
+ df_columns_multi_index_fixture):
+ if columns_index_type_fixture == 'multi':
+ return df_columns_multi_index_fixture
+ return df_columns_index_fixture
+
+
+@pytest.fixture(params=[np.zeros((2, 2), dtype=int)])
+def df_fixture(request, df_indexes_fixture, df_columns_indexes_fixture):
+ data = request.param
+ return DataFrame(data, index=df_indexes_fixture,
+ columns=df_columns_indexes_fixture)
+
+
+@pytest.fixture(params=[None])
+def expected_fixture(
+ request, expected_html, index_type_fixture, index_naming_fixture,
+ columns_index_type_fixture, columns_index_naming_fixture,
+ index_fixture, header_fixture, index_names_fixture):
+ filename_prefix = request.param
+ if not index_fixture:
+ index_naming_fixture = 'none'
+ else:
+ if not index_names_fixture:
+ index_naming_fixture = 'unnamed'
+ index_naming_fixture = index_naming_fixture + '_' + index_type_fixture
+
+ if not header_fixture:
+ columns_index_naming_fixture = 'none'
+ else:
+ if not index_names_fixture:
+ columns_index_naming_fixture = 'unnamed'
+ columns_index_naming_fixture = (
+ columns_index_naming_fixture + '_' + columns_index_type_fixture)
+
+ filename = '_'.join(['index', index_naming_fixture,
+ 'columns', columns_index_naming_fixture])
+ if filename_prefix:
+ filename = filename_prefix + filename
+ return expected_html(filename)
diff --git a/pandas/tests/io/formats/test_to_html.py b/pandas/tests/io/formats/test_to_html.py
index 554cfd306e2a7..52af3696331f4 100644
--- a/pandas/tests/io/formats/test_to_html.py
+++ b/pandas/tests/io/formats/test_to_html.py
@@ -1,7 +1,6 @@
# -*- coding: utf-8 -*-
from datetime import datetime
-from io import open
import re
import numpy as np
@@ -16,28 +15,6 @@
import pandas.io.formats.format as fmt
-def expected_html(datapath, name):
- """
- Read HTML file from formats data directory.
-
- Parameters
- ----------
- datapath : pytest fixture
- The datapath fixture injected into a test by pytest.
- name : str
- The name of the HTML file without the suffix.
-
- Returns
- -------
- str : contents of HTML file.
- """
- filename = '.'.join([name, 'html'])
- filepath = datapath('io', 'formats', 'data', 'html', filename)
- with open(filepath, encoding='utf-8') as f:
- html = f.read()
- return html.rstrip()
-
-
@pytest.fixture(params=['mixed', 'empty'])
def biggie_df_fixture(request):
"""Fixture for a big mixed Dataframe and an empty Dataframe"""
@@ -83,17 +60,17 @@ def test_to_html_with_empty_string_label():
(DataFrame({u('\u03c3'): np.arange(10.)}), 'unicode_1'),
(DataFrame({'A': [u('\u03c3')]}), 'unicode_2')
])
-def test_to_html_unicode(df, expected, datapath):
- expected = expected_html(datapath, expected)
+def test_to_html_unicode(df, expected, expected_html):
+ expected = expected_html(expected)
result = df.to_html()
assert result == expected
-def test_to_html_decimal(datapath):
+def test_to_html_decimal(expected_html):
# GH 12031
df = DataFrame({'A': [6.0, 3.1, 2.2]})
result = df.to_html(decimal=',')
- expected = expected_html(datapath, 'gh12031_expected_output')
+ expected = expected_html('gh12031_expected_output')
assert result == expected
@@ -101,7 +78,7 @@ def test_to_html_decimal(datapath):
(dict(), "<type 'str'>", 'escaped'),
(dict(escape=False), "<b>bold</b>", 'escape_disabled')
])
-def test_to_html_escaped(kwargs, string, expected, datapath):
+def test_to_html_escaped(kwargs, string, expected, expected_html):
a = 'str<ing1 &'
b = 'stri>ng2 &'
@@ -110,12 +87,12 @@ def test_to_html_escaped(kwargs, string, expected, datapath):
'co>l2': {a: string,
b: string}}
result = DataFrame(test_dict).to_html(**kwargs)
- expected = expected_html(datapath, expected)
+ expected = expected_html(expected)
assert result == expected
@pytest.mark.parametrize('index_is_named', [True, False])
-def test_to_html_multiindex_index_false(index_is_named, datapath):
+def test_to_html_multiindex_index_false(index_is_named, expected_html):
# GH 8452
df = DataFrame({
'a': range(2),
@@ -127,7 +104,7 @@ def test_to_html_multiindex_index_false(index_is_named, datapath):
if index_is_named:
df.index = Index(df.index.values, name='idx')
result = df.to_html(index=False)
- expected = expected_html(datapath, 'gh8452_expected_output')
+ expected = expected_html('gh8452_expected_output')
assert result == expected
@@ -137,7 +114,7 @@ def test_to_html_multiindex_index_false(index_is_named, datapath):
(True, 'multiindex_sparsify_1'),
(True, 'multiindex_sparsify_2')
])
-def test_to_html_multiindex_sparsify(multi_sparse, expected, datapath):
+def test_to_html_multiindex_sparsify(multi_sparse, expected, expected_html):
index = MultiIndex.from_arrays([[0, 0, 1, 1], [0, 1, 0, 1]],
names=['foo', None])
df = DataFrame([[0, 1], [2, 3], [4, 5], [6, 7]], index=index)
@@ -145,7 +122,7 @@ def test_to_html_multiindex_sparsify(multi_sparse, expected, datapath):
df.columns = index[::2]
with option_context('display.multi_sparse', multi_sparse):
result = df.to_html()
- expected = expected_html(datapath, expected)
+ expected = expected_html(expected)
assert result == expected
@@ -155,7 +132,8 @@ def test_to_html_multiindex_sparsify(multi_sparse, expected, datapath):
# Test that ... appears in a middle level
(56, 'gh14882_expected_output_2')
])
-def test_to_html_multiindex_odd_even_truncate(max_rows, expected, datapath):
+def test_to_html_multiindex_odd_even_truncate(
+ max_rows, expected, expected_html):
# GH 14882 - Issue on truncation with odd length DataFrame
index = MultiIndex.from_product([[100, 200, 300],
[10, 20, 30],
@@ -163,7 +141,7 @@ def test_to_html_multiindex_odd_even_truncate(max_rows, expected, datapath):
names=['a', 'b', 'c'])
df = DataFrame({'n': range(len(index))}, index=index)
result = df.to_html(max_rows=max_rows)
- expected = expected_html(datapath, expected)
+ expected = expected_html(expected)
assert result == expected
@@ -184,8 +162,8 @@ def test_to_html_multiindex_odd_even_truncate(max_rows, expected, datapath):
{'hod': lambda x: x.strftime('%H:%M')},
'datetime64_hourformatter')
])
-def test_to_html_formatters(df, formatters, expected, datapath):
- expected = expected_html(datapath, expected)
+def test_to_html_formatters(df, formatters, expected, expected_html):
+ expected = expected_html(expected)
result = df.to_html(formatters=formatters)
assert result == expected
@@ -201,11 +179,11 @@ def test_to_html_regression_GH6098():
df.pivot_table(index=[u('clé1')], columns=[u('clé2')])._repr_html_()
-def test_to_html_truncate(datapath):
+def test_to_html_truncate(expected_html):
index = pd.date_range(start='20010101', freq='D', periods=20)
df = DataFrame(index=index, columns=range(20))
result = df.to_html(max_rows=8, max_cols=4)
- expected = expected_html(datapath, 'truncate')
+ expected = expected_html('truncate')
assert result == expected
@@ -213,12 +191,12 @@ def test_to_html_truncate(datapath):
(True, 'truncate_multi_index'),
(False, 'truncate_multi_index_sparse_off')
])
-def test_to_html_truncate_multi_index(sparsify, expected, datapath):
+def test_to_html_truncate_multi_index(sparsify, expected, expected_html):
arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
df = DataFrame(index=arrays, columns=arrays)
result = df.to_html(max_rows=7, max_cols=7, sparsify=sparsify)
- expected = expected_html(datapath, expected)
+ expected = expected_html(expected)
assert result == expected
@@ -306,20 +284,20 @@ def test_to_html_columns_arg():
'right',
'multiindex_2')
])
-def test_to_html_multiindex(columns, justify, expected, datapath):
+def test_to_html_multiindex(columns, justify, expected, expected_html):
df = DataFrame([list('abcd'), list('efgh')], columns=columns)
result = df.to_html(justify=justify)
- expected = expected_html(datapath, expected)
+ expected = expected_html(expected)
assert result == expected
-def test_to_html_justify(justify, datapath):
+def test_to_html_justify(justify, expected_html):
df = DataFrame({'A': [6, 30000, 2],
'B': [1, 2, 70000],
'C': [223442, 0, 1]},
columns=['A', 'B', 'C'])
result = df.to_html(justify=justify)
- expected = expected_html(datapath, 'justify').format(justify=justify)
+ expected = expected_html('justify').format(justify=justify)
assert result == expected
@@ -334,7 +312,7 @@ def test_to_html_invalid_justify(justify):
df.to_html(justify=justify)
-def test_to_html_index(datapath):
+def test_to_html_index(expected_html):
# TODO: split this test
index = ['foo', 'bar', 'baz']
df = DataFrame({'A': [1, 2, 3],
@@ -342,23 +320,23 @@ def test_to_html_index(datapath):
'C': ['one', 'two', np.nan]},
columns=['A', 'B', 'C'],
index=index)
- expected_with_index = expected_html(datapath, 'index_1')
+ expected_with_index = expected_html('index_1')
assert df.to_html() == expected_with_index
- expected_without_index = expected_html(datapath, 'index_2')
+ expected_without_index = expected_html('index_2')
result = df.to_html(index=False)
for i in index:
assert i not in result
assert result == expected_without_index
df.index = Index(['foo', 'bar', 'baz'], name='idx')
- expected_with_index = expected_html(datapath, 'index_3')
+ expected_with_index = expected_html('index_3')
assert df.to_html() == expected_with_index
assert df.to_html(index=False) == expected_without_index
tuples = [('foo', 'car'), ('foo', 'bike'), ('bar', 'car')]
df.index = MultiIndex.from_tuples(tuples)
- expected_with_index = expected_html(datapath, 'index_4')
+ expected_with_index = expected_html('index_4')
assert df.to_html() == expected_with_index
result = df.to_html(index=False)
@@ -368,7 +346,7 @@ def test_to_html_index(datapath):
assert result == expected_without_index
df.index = MultiIndex.from_tuples(tuples, names=['idx1', 'idx2'])
- expected_with_index = expected_html(datapath, 'index_5')
+ expected_with_index = expected_html('index_5')
assert df.to_html() == expected_with_index
assert df.to_html(index=False) == expected_without_index
@@ -377,22 +355,22 @@ def test_to_html_index(datapath):
"sortable draggable",
["sortable", "draggable"]
])
-def test_to_html_with_classes(classes, datapath):
+def test_to_html_with_classes(classes, expected_html):
df = DataFrame()
- expected = expected_html(datapath, 'with_classes')
+ expected = expected_html('with_classes')
result = df.to_html(classes=classes)
assert result == expected
-def test_to_html_no_index_max_rows(datapath):
+def test_to_html_no_index_max_rows(expected_html):
# GH 14998
df = DataFrame({"A": [1, 2, 3, 4]})
result = df.to_html(index=False, max_rows=1)
- expected = expected_html(datapath, 'gh14998_expected_output')
+ expected = expected_html('gh14998_expected_output')
assert result == expected
-def test_to_html_multiindex_max_cols(datapath):
+def test_to_html_multiindex_max_cols(expected_html):
# GH 6131
index = MultiIndex(levels=[['ba', 'bb', 'bc'], ['ca', 'cb', 'cc']],
codes=[[0, 1, 2], [0, 1, 2]],
@@ -404,11 +382,11 @@ def test_to_html_multiindex_max_cols(datapath):
[[1., np.nan, np.nan], [np.nan, 2., np.nan], [np.nan, np.nan, 3.]])
df = DataFrame(data, index, columns)
result = df.to_html(max_cols=2)
- expected = expected_html(datapath, 'gh6131_expected_output')
+ expected = expected_html('gh6131_expected_output')
assert result == expected
-def test_to_html_multi_indexes_index_false(datapath):
+def test_to_html_multi_indexes_index_false(expected_html):
# GH 22579
df = DataFrame({'a': range(10), 'b': range(10, 20), 'c': range(10, 20),
'd': range(10, 20)})
@@ -416,101 +394,42 @@ def test_to_html_multi_indexes_index_false(datapath):
df.index = MultiIndex.from_product([['a', 'b'],
['c', 'd', 'e', 'f', 'g']])
result = df.to_html(index=False)
- expected = expected_html(datapath, 'gh22579_expected_output')
+ expected = expected_html('gh22579_expected_output')
assert result == expected
-@pytest.mark.parametrize('index_names', [True, False])
-@pytest.mark.parametrize('header', [True, False])
-@pytest.mark.parametrize('index', [True, False])
-@pytest.mark.parametrize('column_index, column_type', [
- (Index([0, 1]), 'unnamed_standard'),
- (Index([0, 1], name='columns.name'), 'named_standard'),
- (MultiIndex.from_product([['a'], ['b', 'c']]), 'unnamed_multi'),
- (MultiIndex.from_product(
- [['a'], ['b', 'c']], names=['columns.name.0',
- 'columns.name.1']), 'named_multi')
-])
-@pytest.mark.parametrize('row_index, row_type', [
- (Index([0, 1]), 'unnamed_standard'),
- (Index([0, 1], name='index.name'), 'named_standard'),
- (MultiIndex.from_product([['a'], ['b', 'c']]), 'unnamed_multi'),
- (MultiIndex.from_product(
- [['a'], ['b', 'c']], names=['index.name.0',
- 'index.name.1']), 'named_multi')
-])
-def test_to_html_basic_alignment(
- datapath, row_index, row_type, column_index, column_type,
- index, header, index_names):
+def test_to_html_basic_alignment(df_fixture, index_fixture, header_fixture,
+ index_names_fixture, expected_fixture):
# GH 22747, GH 22579
- df = DataFrame(np.zeros((2, 2), dtype=int),
- index=row_index, columns=column_index)
- result = df.to_html(
- index=index, header=header, index_names=index_names)
-
- if not index:
- row_type = 'none'
- elif not index_names and row_type.startswith('named'):
- row_type = 'un' + row_type
-
- if not header:
- column_type = 'none'
- elif not index_names and column_type.startswith('named'):
- column_type = 'un' + column_type
-
- filename = 'index_' + row_type + '_columns_' + column_type
- expected = expected_html(datapath, filename)
- assert result == expected
-
-
-@pytest.mark.parametrize('index_names', [True, False])
-@pytest.mark.parametrize('header', [True, False])
-@pytest.mark.parametrize('index', [True, False])
-@pytest.mark.parametrize('column_index, column_type', [
- (Index(np.arange(8)), 'unnamed_standard'),
- (Index(np.arange(8), name='columns.name'), 'named_standard'),
- (MultiIndex.from_product(
- [['a', 'b'], ['c', 'd'], ['e', 'f']]), 'unnamed_multi'),
- (MultiIndex.from_product(
- [['a', 'b'], ['c', 'd'], ['e', 'f']], names=['foo', None, 'baz']),
- 'named_multi')
-])
-@pytest.mark.parametrize('row_index, row_type', [
- (Index(np.arange(8)), 'unnamed_standard'),
- (Index(np.arange(8), name='index.name'), 'named_standard'),
- (MultiIndex.from_product(
- [['a', 'b'], ['c', 'd'], ['e', 'f']]), 'unnamed_multi'),
- (MultiIndex.from_product(
- [['a', 'b'], ['c', 'd'], ['e', 'f']], names=['foo', None, 'baz']),
- 'named_multi')
-])
+ result = df_fixture.to_html(index=index_fixture, header=header_fixture,
+ index_names=index_names_fixture)
+ assert result == expected_fixture
+
+
+@pytest.mark.parametrize(
+ ('df_fixture', 'index_labels_fixture', 'multi_index_labels_fixture',
+ 'df_multi_index_fixture', 'df_columns_multi_index_fixture',
+ 'expected_fixture'),
+ [(np.arange(64).reshape(8, 8),
+ np.arange(8),
+ [['a', 'b'], ['c', 'd'], ['e', 'f']],
+ ['foo', None, 'baz'],
+ ['foo', None, 'baz'],
+ 'trunc_df_')
+ ], indirect=True)
def test_to_html_alignment_with_truncation(
- datapath, row_index, row_type, column_index, column_type,
- index, header, index_names):
+ df_fixture, index_fixture, header_fixture, index_names_fixture,
+ expected_fixture):
# GH 22747, GH 22579
- df = DataFrame(np.arange(64).reshape(8, 8),
- index=row_index, columns=column_index)
- result = df.to_html(
+ result = df_fixture.to_html(
max_rows=4, max_cols=4,
- index=index, header=header, index_names=index_names)
-
- if not index:
- row_type = 'none'
- elif not index_names and row_type.startswith('named'):
- row_type = 'un' + row_type
-
- if not header:
- column_type = 'none'
- elif not index_names and column_type.startswith('named'):
- column_type = 'un' + column_type
-
- filename = 'trunc_df_index_' + row_type + '_columns_' + column_type
- expected = expected_html(datapath, filename)
- assert result == expected
+ index=index_fixture, header=header_fixture,
+ index_names=index_names_fixture)
+ assert result == expected_fixture
@pytest.mark.parametrize('index', [False, 0])
-def test_to_html_truncation_index_false_max_rows(datapath, index):
+def test_to_html_truncation_index_false_max_rows(expected_html, index):
# GH 15019
data = [[1.764052, 0.400157],
[0.978738, 2.240893],
@@ -519,7 +438,7 @@ def test_to_html_truncation_index_false_max_rows(datapath, index):
[-0.103219, 0.410599]]
df = DataFrame(data)
result = df.to_html(max_rows=4, index=index)
- expected = expected_html(datapath, 'gh15019_expected_output')
+ expected = expected_html('gh15019_expected_output')
assert result == expected
@@ -529,7 +448,7 @@ def test_to_html_truncation_index_false_max_rows(datapath, index):
(True, 'gh22783_named_columns_index')
])
def test_to_html_truncation_index_false_max_cols(
- datapath, index, col_index_named, expected_output):
+ expected_html, index, col_index_named, expected_output):
# GH 22783
data = [[1.764052, 0.400157, 0.978738, 2.240893, 1.867558],
[-0.977278, 0.950088, -0.151357, -0.103219, 0.410599]]
@@ -537,7 +456,7 @@ def test_to_html_truncation_index_false_max_cols(
if col_index_named:
df.columns.rename('columns.name', inplace=True)
result = df.to_html(max_cols=4, index=index)
- expected = expected_html(datapath, expected_output)
+ expected = expected_html(expected_output)
assert result == expected
@@ -577,10 +496,10 @@ def test_to_html_with_id():
(100.0, '%.0f', 'gh22270_expected_output'),
])
def test_to_html_float_format_no_fixed_width(
- value, float_format, expected, datapath):
+ value, float_format, expected, expected_html):
# GH 21625, GH 22270
df = DataFrame({'x': [value]})
- expected = expected_html(datapath, expected)
+ expected = expected_html(expected)
result = df.to_html(float_format=float_format)
assert result == expected
@@ -589,7 +508,7 @@ def test_to_html_float_format_no_fixed_width(
(True, 'render_links_true'),
(False, 'render_links_false'),
])
-def test_to_html_render_links(render_links, expected, datapath):
+def test_to_html_render_links(render_links, expected, expected_html):
# GH 2679
data = [
[0, 'http://pandas.pydata.org/?q1=a&q2=b', 'pydata.org'],
@@ -598,5 +517,5 @@ def test_to_html_render_links(render_links, expected, datapath):
df = DataFrame(data, columns=['foo', 'bar', None])
result = df.to_html(render_links=render_links)
- expected = expected_html(datapath, expected)
+ expected = expected_html(expected)
assert result == expected
| https://github.com/pandas-dev/pandas/pull/24873#issuecomment-456641936
Re-uses fixtures through indirect parametrization.
A hierarchy of fixtures ultimately builds a DataFrame fixture covering 16 combinations of named/unnamed and single-level/MultiIndex, varied independently on the row and column index.
This DataFrame fixture is then reused through indirect parametrization of the DataFrame values, the row and column index labels, and the row and column index names.
Both test functions now consist of only two lines:
```
result = ...
assert result == ...
```
The diff is a bit cluttered because a helper used by the fixtures, `expected_html`, has been moved into conftest.py and converted to a fixture.
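For anyone unfamiliar with the pattern, here is a minimal standalone sketch (hypothetical fixture and test names, not pandas test code) of the indirect-parametrization mechanism the conftest fixtures rely on:
```python
import pytest


@pytest.fixture(params=[[0, 1]])
def labels_fixture(request):
    # Default labels; a test can override them via indirect
    # parametrization, which routes the values into request.param.
    return request.param


def test_default_labels(labels_fixture):
    assert labels_fixture == [0, 1]


@pytest.mark.parametrize('labels_fixture', [[10, 20, 30]], indirect=True)
def test_overridden_labels(labels_fixture):
    # With indirect=True the parametrized value is delivered to the
    # fixture (as request.param) instead of directly to the test.
    assert labels_fixture == [10, 20, 30]
```
This is the same mechanism `test_to_html_alignment_with_truncation` uses above to swap in the 8x8 frame without duplicating the fixture hierarchy.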
| https://api.github.com/repos/pandas-dev/pandas/pulls/24887 | 2019-01-23T19:37:16Z | 2019-01-24T01:30:31Z | null | 2019-01-24T01:30:32Z |
TST: inline empty_frame = DataFrame({}) fixture | diff --git a/pandas/tests/frame/common.py b/pandas/tests/frame/common.py
index 2ea087c0510bf..5624f7c1303b6 100644
--- a/pandas/tests/frame/common.py
+++ b/pandas/tests/frame/common.py
@@ -85,7 +85,7 @@ def tzframe(self):
@cache_readonly
def empty(self):
- return pd.DataFrame({})
+ return pd.DataFrame()
@cache_readonly
def ts1(self):
diff --git a/pandas/tests/frame/conftest.py b/pandas/tests/frame/conftest.py
index 69ee614ab8d2a..fbe03325a3ad9 100644
--- a/pandas/tests/frame/conftest.py
+++ b/pandas/tests/frame/conftest.py
@@ -127,14 +127,6 @@ def timezone_frame():
return df
-@pytest.fixture
-def empty_frame():
- """
- Fixture for empty DataFrame
- """
- return DataFrame({})
-
-
@pytest.fixture
def simple_frame():
"""
diff --git a/pandas/tests/frame/test_analytics.py b/pandas/tests/frame/test_analytics.py
index 43a45bb915819..994187a62d862 100644
--- a/pandas/tests/frame/test_analytics.py
+++ b/pandas/tests/frame/test_analytics.py
@@ -1096,7 +1096,9 @@ def test_operators_timedelta64(self):
assert df['off1'].dtype == 'timedelta64[ns]'
assert df['off2'].dtype == 'timedelta64[ns]'
- def test_sum_corner(self, empty_frame):
+ def test_sum_corner(self):
+ empty_frame = DataFrame()
+
axis0 = empty_frame.sum(0)
axis1 = empty_frame.sum(1)
assert isinstance(axis0, Series)
diff --git a/pandas/tests/frame/test_api.py b/pandas/tests/frame/test_api.py
index 0934dd20638e4..e561b327e4fb0 100644
--- a/pandas/tests/frame/test_api.py
+++ b/pandas/tests/frame/test_api.py
@@ -142,7 +142,9 @@ def test_tab_completion(self):
assert key not in dir(df)
assert isinstance(df.__getitem__('A'), pd.DataFrame)
- def test_not_hashable(self, empty_frame):
+ def test_not_hashable(self):
+ empty_frame = DataFrame()
+
df = self.klass([1])
pytest.raises(TypeError, hash, df)
pytest.raises(TypeError, hash, empty_frame)
@@ -171,7 +173,8 @@ def test_get_agg_axis(self, float_frame):
pytest.raises(ValueError, float_frame._get_agg_axis, 2)
- def test_nonzero(self, float_frame, float_string_frame, empty_frame):
+ def test_nonzero(self, float_frame, float_string_frame):
+ empty_frame = DataFrame()
assert empty_frame.empty
assert not float_frame.empty
diff --git a/pandas/tests/frame/test_apply.py b/pandas/tests/frame/test_apply.py
index a4cd1aa3bacb6..4d1e3e7ae1f38 100644
--- a/pandas/tests/frame/test_apply.py
+++ b/pandas/tests/frame/test_apply.py
@@ -74,8 +74,10 @@ def test_apply_mixed_datetimelike(self):
result = df.apply(lambda x: x, axis=1)
assert_frame_equal(result, df)
- def test_apply_empty(self, float_frame, empty_frame):
+ def test_apply_empty(self, float_frame):
# empty
+ empty_frame = DataFrame()
+
applied = empty_frame.apply(np.sqrt)
assert applied.empty
@@ -97,8 +99,10 @@ def test_apply_empty(self, float_frame, empty_frame):
result = expected.apply(lambda x: x['a'], axis=1)
assert_frame_equal(expected, result)
- def test_apply_with_reduce_empty(self, empty_frame):
+ def test_apply_with_reduce_empty(self):
# reduce with an empty DataFrame
+ empty_frame = DataFrame()
+
x = []
result = empty_frame.apply(x.append, axis=1, result_type='expand')
assert_frame_equal(result, empty_frame)
@@ -116,7 +120,9 @@ def test_apply_with_reduce_empty(self, empty_frame):
# Ensure that x.append hasn't been called
assert x == []
- def test_apply_deprecate_reduce(self, empty_frame):
+ def test_apply_deprecate_reduce(self):
+ empty_frame = DataFrame()
+
x = []
with tm.assert_produces_warning(FutureWarning):
empty_frame.apply(x.append, axis=1, reduce=True)
diff --git a/pandas/tests/frame/test_block_internals.py b/pandas/tests/frame/test_block_internals.py
index 5419f4d5127f6..39d84f2e6086c 100644
--- a/pandas/tests/frame/test_block_internals.py
+++ b/pandas/tests/frame/test_block_internals.py
@@ -347,7 +347,9 @@ def test_copy(self, float_frame, float_string_frame):
copy = float_string_frame.copy()
assert copy._data is not float_string_frame._data
- def test_pickle(self, float_string_frame, empty_frame, timezone_frame):
+ def test_pickle(self, float_string_frame, timezone_frame):
+ empty_frame = DataFrame()
+
unpickled = tm.round_trip_pickle(float_string_frame)
assert_frame_equal(float_string_frame, unpickled)
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index a8a78b26e317c..b32255da324f4 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -247,7 +247,7 @@ def test_constructor_dict(self):
assert isna(frame['col3']).all()
# Corner cases
- assert len(DataFrame({})) == 0
+ assert len(DataFrame()) == 0
# mix dict and array, wrong size - no spec for which error should raise
# first
diff --git a/pandas/tests/frame/test_reshape.py b/pandas/tests/frame/test_reshape.py
index daac084f657af..4fe5172fefbcd 100644
--- a/pandas/tests/frame/test_reshape.py
+++ b/pandas/tests/frame/test_reshape.py
@@ -58,7 +58,7 @@ def test_pivot_duplicates(self):
def test_pivot_empty(self):
df = DataFrame({}, columns=['a', 'b', 'c'])
result = df.pivot('a', 'b', 'c')
- expected = DataFrame({})
+ expected = DataFrame()
tm.assert_frame_equal(result, expected, check_names=False)
def test_pivot_integer_bug(self):
diff --git a/pandas/tests/series/conftest.py b/pandas/tests/series/conftest.py
index 431aacb1c8d56..367e7a1baa7f3 100644
--- a/pandas/tests/series/conftest.py
+++ b/pandas/tests/series/conftest.py
@@ -1,6 +1,5 @@
import pytest
-from pandas import Series
import pandas.util.testing as tm
@@ -32,11 +31,3 @@ def object_series():
s = tm.makeObjectSeries()
s.name = 'objects'
return s
-
-
-@pytest.fixture
-def empty_series():
- """
- Fixture for empty Series
- """
- return Series([], index=[])
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index d92ca48751d0a..8525b877618c9 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -47,7 +47,9 @@ def test_scalar_conversion(self):
assert int(Series([1.])) == 1
assert long(Series([1.])) == 1
- def test_constructor(self, datetime_series, empty_series):
+ def test_constructor(self, datetime_series):
+ empty_series = Series()
+
assert datetime_series.index.is_all_dates
# Pass in Series
Broken off from #24873. | https://api.github.com/repos/pandas-dev/pandas/pulls/24886 | 2019-01-23T18:05:06Z | 2019-03-04T18:40:36Z | 2019-03-04T18:40:36Z | 2019-03-04T18:41:41Z |
TST: remove never-used singleton fixtures | diff --git a/pandas/tests/frame/conftest.py b/pandas/tests/frame/conftest.py
index 377e737a53158..69ee614ab8d2a 100644
--- a/pandas/tests/frame/conftest.py
+++ b/pandas/tests/frame/conftest.py
@@ -29,16 +29,6 @@ def float_frame_with_na():
return df
-@pytest.fixture
-def float_frame2():
- """
- Fixture for DataFrame of floats with index of unique strings
-
- Columns are ['D', 'C', 'B', 'A']
- """
- return DataFrame(tm.getSeriesData(), columns=['D', 'C', 'B', 'A'])
-
-
@pytest.fixture
def bool_frame_with_na():
"""
@@ -104,21 +94,6 @@ def mixed_float_frame():
return df
-@pytest.fixture
-def mixed_float_frame2():
- """
- Fixture for DataFrame of different float types with index of unique strings
-
- Columns are ['A', 'B', 'C', 'D'].
- """
- df = DataFrame(tm.getSeriesData())
- df.D = df.D.astype('float32')
- df.C = df.C.astype('float32')
- df.B = df.B.astype('float16')
- df.D = df.D.astype('float64')
- return df
-
-
@pytest.fixture
def mixed_int_frame():
"""
@@ -135,19 +110,6 @@ def mixed_int_frame():
return df
-@pytest.fixture
-def mixed_type_frame():
- """
- Fixture for DataFrame of float/int/string columns with RangeIndex
-
- Columns are ['a', 'b', 'c', 'float32', 'int32'].
- """
- return DataFrame({'a': 1., 'b': 2, 'c': 'foo',
- 'float32': np.array([1.] * 10, dtype='float32'),
- 'int32': np.array([1] * 10, dtype='int32')},
- index=np.arange(10))
-
-
@pytest.fixture
def timezone_frame():
"""
@@ -173,22 +135,6 @@ def empty_frame():
return DataFrame({})
-@pytest.fixture
-def datetime_series():
- """
- Fixture for Series of floats with DatetimeIndex
- """
- return tm.makeTimeSeries(nper=30)
-
-
-@pytest.fixture
-def datetime_series_short():
- """
- Fixture for Series of floats with DatetimeIndex
- """
- return tm.makeTimeSeries(nper=30)[5:]
-
-
@pytest.fixture
def simple_frame():
"""
| The fact that these fixtures are never used would be obvious if they lived in a file not ignored by coverage, i.e. if they did not rely on pytest's name-based injection magic.
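An illustrative sketch (hypothetical names) of why such dead fixtures go unnoticed — pytest only executes a fixture when a test requests it by parameter name, so an unused definition is never run and never flagged:
```python
import pytest


@pytest.fixture
def never_requested():
    # No test names this fixture as a parameter, so this body never
    # runs and neither coverage nor the test run will complain.
    raise AssertionError('never executed')


def test_passes_without_it():
    assert 1 + 1 == 2
```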
Broken off from #24873. | https://api.github.com/repos/pandas-dev/pandas/pulls/24885 | 2019-01-23T17:30:28Z | 2019-02-24T03:45:29Z | 2019-02-24T03:45:28Z | 2019-02-25T21:11:23Z |
DOC: Add experimental note to DatetimeArray and TimedeltaArray | diff --git a/doc/source/user_guide/integer_na.rst b/doc/source/user_guide/integer_na.rst
index eb0c5e3d05863..c5667e9319ca6 100644
--- a/doc/source/user_guide/integer_na.rst
+++ b/doc/source/user_guide/integer_na.rst
@@ -10,6 +10,12 @@ Nullable Integer Data Type
.. versionadded:: 0.24.0
+.. note::
+
+ IntegerArray is currently experimental. Its API or implementation may
+ change without warning.
+
+
In :ref:`missing_data`, we saw that pandas primarily uses ``NaN`` to represent
missing data. Because ``NaN`` is a float, this forces an array of integers with
any missing values to become floating point. In some cases, this may not matter
diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 4efe24789af28..3b3fad22ce949 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -32,6 +32,11 @@ Optional Integer NA Support
Pandas has gained the ability to hold integer dtypes with missing values. This long requested feature is enabled through the use of :ref:`extension types <extending.extension-types>`.
Here is an example of the usage.
+.. note::
+
+ IntegerArray is currently experimental. Its API or implementation may
+ change without warning.
+
We can construct a ``Series`` with the specified dtype. The dtype string ``Int64`` is a pandas ``ExtensionDtype``. Specifying a list or array using the traditional missing value
marker of ``np.nan`` will infer to integer dtype. The display of the ``Series`` will also use the ``NaN`` to indicate missing values in string outputs. (:issue:`20700`, :issue:`20747`, :issue:`22441`, :issue:`21789`, :issue:`22346`)
@@ -213,6 +218,9 @@ from the ``Series``:
ser.array
pser.array
+These return an instance of :class:`IntervalArray` or :class:`arrays.PeriodArray`,
+the new extension arrays that back interval and period data.
+
.. warning::
For backwards compatibility, :attr:`Series.values` continues to return
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index f2aeb1c1309de..d7a8417a71be2 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -218,6 +218,13 @@ class DatetimeArray(dtl.DatetimeLikeArrayMixin,
.. versionadded:: 0.24.0
+ .. warning::
+
+ DatetimeArray is currently experimental, and its API may change
+ without warning. In particular, :attr:`DatetimeArray.dtype` is
+ expected to change to always be an instance of an ``ExtensionDtype``
+ subclass.
+
Parameters
----------
values : Series, Index, DatetimeArray, ndarray
@@ -511,6 +518,12 @@ def dtype(self):
"""
The dtype for the DatetimeArray.
+ .. warning::
+
+ A future version of pandas will change dtype to never be a
+ ``numpy.dtype``. Instead, :attr:`DatetimeArray.dtype` will
+ always be an instance of an ``ExtensionDtype`` subclass.
+
Returns
-------
numpy.dtype or DatetimeTZDtype
diff --git a/pandas/core/arrays/integer.py b/pandas/core/arrays/integer.py
index b3dde6bf2bd93..a6a4a49d3a939 100644
--- a/pandas/core/arrays/integer.py
+++ b/pandas/core/arrays/integer.py
@@ -225,24 +225,57 @@ class IntegerArray(ExtensionArray, ExtensionOpsMixin):
"""
Array of integer (optional missing) values.
+ .. versionadded:: 0.24.0
+
+ .. warning::
+
+ IntegerArray is currently experimental, and its API or internal
+ implementation may change without warning.
+
We represent an IntegerArray with 2 numpy arrays:
- data: contains a numpy integer array of the appropriate dtype
- mask: a boolean array holding a mask on the data, True is missing
To construct an IntegerArray from generic array-like input, use
- ``integer_array`` function instead.
+ :func:`pandas.array` with one of the integer dtypes (see examples).
+
+ See :ref:`integer_na` for more.
Parameters
----------
- values : integer 1D numpy array
- mask : boolean 1D numpy array
+ values : numpy.ndarray
+ A 1-d integer-dtype array.
+ mask : numpy.ndarray
+ A 1-d boolean-dtype array indicating missing values.
copy : bool, default False
+ Whether to copy the `values` and `mask`.
Returns
-------
IntegerArray
+ Examples
+ --------
+ Create an IntegerArray with :func:`pandas.array`.
+
+ >>> int_array = pd.array([1, None, 3], dtype=pd.Int32Dtype())
+ >>> int_array
+ <IntegerArray>
+ [1, NaN, 3]
+ Length: 3, dtype: Int32
+
+ String aliases for the dtypes are also available. They are capitalized.
+
+ >>> pd.array([1, None, 3], dtype='Int32')
+ <IntegerArray>
+ [1, NaN, 3]
+ Length: 3, dtype: Int32
+
+ >>> pd.array([1, None, 3], dtype='UInt16')
+ <IntegerArray>
+ [1, NaN, 3]
+ Length: 3, dtype: UInt16
"""
@cache_readonly
diff --git a/pandas/core/arrays/timedeltas.py b/pandas/core/arrays/timedeltas.py
index 910cb96a86216..4f0c96f7927da 100644
--- a/pandas/core/arrays/timedeltas.py
+++ b/pandas/core/arrays/timedeltas.py
@@ -107,6 +107,29 @@ def wrapper(self, other):
class TimedeltaArray(dtl.DatetimeLikeArrayMixin, dtl.TimelikeOps):
+ """
+ Pandas ExtensionArray for timedelta data.
+
+ .. versionadded:: 0.24.0
+
+ .. warning::
+
+ TimedeltaArray is currently experimental, and its API may change
+ without warning. In particular, :attr:`TimedeltaArray.dtype` is
+ expected to change to be an instance of an ``ExtensionDtype``
+ subclass.
+
+ Parameters
+ ----------
+ values : array-like
+ The timedelta data.
+
+ dtype : numpy.dtype
+ Currently, only ``numpy.dtype("timedelta64[ns]")`` is accepted.
+ freq : Offset, optional
+ copy : bool, default False
+ Whether to copy the underlying array of data.
+ """
_typ = "timedeltaarray"
_scalar_type = Timedelta
__array_priority__ = 1000
@@ -128,6 +151,19 @@ def _box_func(self):
@property
def dtype(self):
+ """
+ The dtype for the TimedeltaArray.
+
+ .. warning::
+
+ A future version of pandas will change dtype to be an instance
+ of a :class:`pandas.api.extensions.ExtensionDtype` subclass,
+ not a ``numpy.dtype``.
+
+ Returns
+ -------
+ numpy.dtype
+ """
return _TD_DTYPE
# ----------------------------------------------------------------
| DOC: Mention PeriodArray and IntervalArray
Closes #24870 | https://api.github.com/repos/pandas-dev/pandas/pulls/24882 | 2019-01-23T13:31:28Z | 2019-01-24T16:48:55Z | 2019-01-24T16:48:55Z | 2019-01-24T16:48:59Z |
DOC: Improve the docstring of Series.iteritems | diff --git a/pandas/core/series.py b/pandas/core/series.py
index 04fe3b4407149..120dab3f7a7ae 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1453,6 +1453,29 @@ def to_string(self, buf=None, na_rep='NaN', float_format=None, header=True,
def iteritems(self):
"""
Lazily iterate over (index, value) tuples.
+
+ This method returns an iterable tuple (index, value). This is
+        convenient if you want to create a lazy iterator. Note that the
+ methods Series.items and Series.iteritems are the same methods.
+
+ Returns
+ -------
+ iterable
+ Iterable of tuples containing the (index, value) pairs from a
+ Series.
+
+ See Also
+ --------
+ DataFrame.iteritems : Equivalent to Series.iteritems for DataFrame.
+
+ Examples
+ --------
+ >>> s = pd.Series(['A', 'B', 'C'])
+ >>> for index, value in s.iteritems():
+ ... print("Index : {}, Value : {}".format(index, value))
+ Index : 0, Value : A
+ Index : 1, Value : B
+ Index : 2, Value : C
"""
return zip(iter(self.index), iter(self))
| - [ ] closes #xxxx
- [ ] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/24879 | 2019-01-22T22:31:40Z | 2019-03-19T03:48:17Z | 2019-03-19T03:48:17Z | 2019-03-19T03:48:23Z |
REF/CLN: Move private method | diff --git a/pandas/core/computation/expr.py b/pandas/core/computation/expr.py
index 9a44198ba3b86..d840bf6ae71a2 100644
--- a/pandas/core/computation/expr.py
+++ b/pandas/core/computation/expr.py
@@ -18,7 +18,6 @@
UndefinedVariableError, _arith_ops_syms, _bool_ops_syms, _cmp_ops_syms,
_mathops, _reductions, _unary_ops_syms, is_term)
from pandas.core.computation.scope import Scope
-from pandas.core.reshape.util import compose
import pandas.io.formats.printing as printing
@@ -103,8 +102,19 @@ def _replace_locals(tok):
return toknum, tokval
-def _preparse(source, f=compose(_replace_locals, _replace_booleans,
- _rewrite_assign)):
+def _compose2(f, g):
+ """Compose 2 callables"""
+ return lambda *args, **kwargs: f(g(*args, **kwargs))
+
+
+def _compose(*funcs):
+ """Compose 2 or more callables"""
+ assert len(funcs) > 1, 'At least 2 callables must be passed to compose'
+ return reduce(_compose2, funcs)
+
+
+def _preparse(source, f=_compose(_replace_locals, _replace_booleans,
+ _rewrite_assign)):
"""Compose a collection of tokenization functions
Parameters
@@ -701,8 +711,8 @@ def visitor(x, y):
class PandasExprVisitor(BaseExprVisitor):
def __init__(self, env, engine, parser,
- preparser=partial(_preparse, f=compose(_replace_locals,
- _replace_booleans))):
+ preparser=partial(_preparse, f=_compose(_replace_locals,
+ _replace_booleans))):
super(PandasExprVisitor, self).__init__(env, engine, parser, preparser)
diff --git a/pandas/core/reshape/util.py b/pandas/core/reshape/util.py
index 7f43a0e9719b8..9d4135a7f310e 100644
--- a/pandas/core/reshape/util.py
+++ b/pandas/core/reshape/util.py
@@ -1,7 +1,5 @@
import numpy as np
-from pandas.compat import reduce
-
from pandas.core.dtypes.common import is_list_like
from pandas.core import common as com
@@ -57,14 +55,3 @@ def cartesian_product(X):
return [np.tile(np.repeat(np.asarray(com.values_from_object(x)), b[i]),
np.product(a[i]))
for i, x in enumerate(X)]
-
-
-def _compose2(f, g):
- """Compose 2 callables"""
- return lambda *args, **kwargs: f(g(*args, **kwargs))
-
-
-def compose(*funcs):
- """Compose 2 or more callables"""
- assert len(funcs) > 1, 'At least 2 callables must be passed to compose'
- return reduce(_compose2, funcs)
| Moved these two internal functions to the only location in the codebase where they are used.
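For reference, the moved helpers implement plain right-to-left function composition; a standalone sketch of their behavior (using `functools.reduce` here in place of `pandas.compat.reduce`):
```python
from functools import reduce


def _compose2(f, g):
    """Compose 2 callables: _compose2(f, g)(x) == f(g(x))."""
    return lambda *args, **kwargs: f(g(*args, **kwargs))


def _compose(*funcs):
    """Compose 2 or more callables, applied right to left."""
    assert len(funcs) > 1, 'At least 2 callables must be passed to compose'
    return reduce(_compose2, funcs)


inc = lambda x: x + 1
dbl = lambda x: x * 2
assert _compose(inc, dbl)(3) == 7  # inc(dbl(3))
assert _compose(dbl, inc)(3) == 8  # dbl(inc(3))
```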
| https://api.github.com/repos/pandas-dev/pandas/pulls/24875 | 2019-01-22T09:44:14Z | 2019-01-22T14:49:33Z | 2019-01-22T14:49:33Z | 2019-01-24T20:17:23Z |
BUG: DataFrame respects dtype with masked recarray | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 69b59793f7c0d..d782e3d6858a4 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1692,8 +1692,8 @@ Missing
- Bug in :func:`Series.hasnans` that could be incorrectly cached and return incorrect answers if null elements are introduced after an initial call (:issue:`19700`)
- :func:`Series.isin` now treats all NaN-floats as equal also for ``np.object``-dtype. This behavior is consistent with the behavior for float64 (:issue:`22119`)
- :func:`unique` no longer mangles NaN-floats and the ``NaT``-object for ``np.object``-dtype, i.e. ``NaT`` is no longer coerced to a NaN-value and is treated as a different entity. (:issue:`22295`)
-- :func:`DataFrame` and :func:`Series` now properly handle numpy masked arrays with hardened masks. Previously, constructing a DataFrame or Series from a masked array with a hard mask would create a pandas object containing the underlying value, rather than the expected NaN. (:issue:`24574`)
-
+- :class:`DataFrame` and :class:`Series` now properly handle numpy masked arrays with hardened masks. Previously, constructing a DataFrame or Series from a masked array with a hard mask would create a pandas object containing the underlying value, rather than the expected NaN. (:issue:`24574`)
+- Bug in :class:`DataFrame` constructor where ``dtype`` argument was not honored when handling numpy masked record arrays. (:issue:`24874`)
MultiIndex
^^^^^^^^^^
diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py
index 7af347a141781..c05a9a0f8f3c7 100644
--- a/pandas/core/internals/construction.py
+++ b/pandas/core/internals/construction.py
@@ -93,7 +93,7 @@ def masked_rec_array_to_mgr(data, index, columns, dtype, copy):
if columns is None:
columns = arr_columns
- mgr = arrays_to_mgr(arrays, arr_columns, index, columns)
+ mgr = arrays_to_mgr(arrays, arr_columns, index, columns, dtype)
if copy:
mgr = mgr.copy()
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index 4f6a2e2bfbebf..90ad48cac3a5f 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -787,6 +787,17 @@ def test_constructor_maskedarray_hardened(self):
dtype=float)
tm.assert_frame_equal(result, expected)
+ def test_constructor_maskedrecarray_dtype(self):
+ # Ensure constructor honors dtype
+ data = np.ma.array(
+ np.ma.zeros(5, dtype=[('date', '<f8'), ('price', '<f8')]),
+ mask=[False] * 5)
+ data = data.view(ma.mrecords.mrecarray)
+ result = pd.DataFrame(data, dtype=int)
+ expected = pd.DataFrame(np.zeros((5, 2), dtype=int),
+ columns=['date', 'price'])
+ tm.assert_frame_equal(result, expected)
+
def test_constructor_mrecarray(self):
# Ensure mrecarray produces frame identical to dict of masked arrays
# from GH3479
| - [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
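For reference, a minimal repro sketch mirroring the new test — before this fix the requested dtype was silently dropped and both columns stayed float64:
```python
import numpy as np
from numpy.ma import mrecords
import pandas as pd

# Masked record array with two float fields, nothing masked.
data = np.ma.array(
    np.ma.zeros(5, dtype=[('date', '<f8'), ('price', '<f8')]),
    mask=[False] * 5,
).view(mrecords.mrecarray)

df = pd.DataFrame(data, dtype=int)
print(df.dtypes)  # int64 for both columns with the fix, float64 before
```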
It appeared that `dtype` was simply not being passed into `arrays_to_mgr`. | https://api.github.com/repos/pandas-dev/pandas/pulls/24874 | 2019-01-22T09:39:31Z | 2019-01-22T13:04:02Z | 2019-01-22T13:04:02Z | 2019-01-24T20:17:15Z |
TST: Remove subset of singleton fixtures | diff --git a/pandas/tests/frame/conftest.py b/pandas/tests/frame/conftest.py
index 377e737a53158..aaad4fd29804c 100644
--- a/pandas/tests/frame/conftest.py
+++ b/pandas/tests/frame/conftest.py
@@ -1,7 +1,7 @@
import numpy as np
import pytest
-from pandas import DataFrame, NaT, compat, date_range
+from pandas import DataFrame
import pandas.util.testing as tm
@@ -15,30 +15,6 @@ def float_frame():
return DataFrame(tm.getSeriesData())
-@pytest.fixture
-def float_frame_with_na():
- """
- Fixture for DataFrame of floats with index of unique strings
-
- Columns are ['A', 'B', 'C', 'D']; some entries are missing
- """
- df = DataFrame(tm.getSeriesData())
- # set some NAs
- df.loc[5:10] = np.nan
- df.loc[15:20, -2:] = np.nan
- return df
-
-
-@pytest.fixture
-def float_frame2():
- """
- Fixture for DataFrame of floats with index of unique strings
-
- Columns are ['D', 'C', 'B', 'A']
- """
- return DataFrame(tm.getSeriesData(), columns=['D', 'C', 'B', 'A'])
-
-
@pytest.fixture
def bool_frame_with_na():
"""
@@ -54,168 +30,9 @@ def bool_frame_with_na():
return df
-@pytest.fixture
-def int_frame():
- """
- Fixture for DataFrame of ints with index of unique strings
-
- Columns are ['A', 'B', 'C', 'D']
- """
- df = DataFrame({k: v.astype(int)
- for k, v in compat.iteritems(tm.getSeriesData())})
- # force these all to int64 to avoid platform testing issues
- return DataFrame({c: s for c, s in compat.iteritems(df)}, dtype=np.int64)
-
-
-@pytest.fixture
-def datetime_frame():
- """
- Fixture for DataFrame of floats with DatetimeIndex
-
- Columns are ['A', 'B', 'C', 'D']
- """
- return DataFrame(tm.getTimeSeriesData())
-
-
-@pytest.fixture
-def float_string_frame():
- """
- Fixture for DataFrame of floats and strings with index of unique strings
-
- Columns are ['A', 'B', 'C', 'D', 'foo'].
- """
- df = DataFrame(tm.getSeriesData())
- df['foo'] = 'bar'
- return df
-
-
-@pytest.fixture
-def mixed_float_frame():
- """
- Fixture for DataFrame of different float types with index of unique strings
-
- Columns are ['A', 'B', 'C', 'D'].
- """
- df = DataFrame(tm.getSeriesData())
- df.A = df.A.astype('float32')
- df.B = df.B.astype('float32')
- df.C = df.C.astype('float16')
- df.D = df.D.astype('float64')
- return df
-
-
-@pytest.fixture
-def mixed_float_frame2():
- """
- Fixture for DataFrame of different float types with index of unique strings
-
- Columns are ['A', 'B', 'C', 'D'].
- """
- df = DataFrame(tm.getSeriesData())
- df.D = df.D.astype('float32')
- df.C = df.C.astype('float32')
- df.B = df.B.astype('float16')
- df.D = df.D.astype('float64')
- return df
-
-
-@pytest.fixture
-def mixed_int_frame():
- """
- Fixture for DataFrame of different int types with index of unique strings
-
- Columns are ['A', 'B', 'C', 'D'].
- """
- df = DataFrame({k: v.astype(int)
- for k, v in compat.iteritems(tm.getSeriesData())})
- df.A = df.A.astype('int32')
- df.B = np.ones(len(df.B), dtype='uint64')
- df.C = df.C.astype('uint8')
- df.D = df.C.astype('int64')
- return df
-
-
-@pytest.fixture
-def mixed_type_frame():
- """
- Fixture for DataFrame of float/int/string columns with RangeIndex
-
- Columns are ['a', 'b', 'c', 'float32', 'int32'].
- """
- return DataFrame({'a': 1., 'b': 2, 'c': 'foo',
- 'float32': np.array([1.] * 10, dtype='float32'),
- 'int32': np.array([1] * 10, dtype='int32')},
- index=np.arange(10))
-
-
-@pytest.fixture
-def timezone_frame():
- """
- Fixture for DataFrame of date_range Series with different time zones
-
- Columns are ['A', 'B', 'C']; some entries are missing
- """
- df = DataFrame({'A': date_range('20130101', periods=3),
- 'B': date_range('20130101', periods=3,
- tz='US/Eastern'),
- 'C': date_range('20130101', periods=3,
- tz='CET')})
- df.iloc[1, 1] = NaT
- df.iloc[1, 2] = NaT
- return df
-
-
@pytest.fixture
def empty_frame():
"""
Fixture for empty DataFrame
"""
return DataFrame({})
-
-
-@pytest.fixture
-def datetime_series():
- """
- Fixture for Series of floats with DatetimeIndex
- """
- return tm.makeTimeSeries(nper=30)
-
-
-@pytest.fixture
-def datetime_series_short():
- """
- Fixture for Series of floats with DatetimeIndex
- """
- return tm.makeTimeSeries(nper=30)[5:]
-
-
-@pytest.fixture
-def simple_frame():
- """
- Fixture for simple 3x3 DataFrame
-
- Columns are ['one', 'two', 'three'], index is ['a', 'b', 'c'].
- """
- arr = np.array([[1., 2., 3.],
- [4., 5., 6.],
- [7., 8., 9.]])
-
- return DataFrame(arr, columns=['one', 'two', 'three'],
- index=['a', 'b', 'c'])
-
-
-@pytest.fixture
-def frame_of_index_cols():
- """
- Fixture for DataFrame of columns that can be used for indexing
-
- Columns are ['A', 'B', 'C', 'D', 'E', ('tuple', 'as', 'label')];
- 'A' & 'B' contain duplicates (but are jointly unique), the rest are unique.
- """
- df = DataFrame({'A': ['foo', 'foo', 'foo', 'bar', 'bar'],
- 'B': ['one', 'two', 'three', 'one', 'two'],
- 'C': ['a', 'b', 'c', 'd', 'e'],
- 'D': np.random.randn(5),
- 'E': np.random.randn(5),
- ('tuple', 'as', 'label'): np.random.randn(5)})
- return df
diff --git a/pandas/tests/frame/test_alter_axes.py b/pandas/tests/frame/test_alter_axes.py
index c2355742199dc..a2832ed722e0d 100644
--- a/pandas/tests/frame/test_alter_axes.py
+++ b/pandas/tests/frame/test_alter_axes.py
@@ -21,8 +21,8 @@
class TestDataFrameAlterAxes():
- def test_set_index_directly(self, float_string_frame):
- df = float_string_frame
+ def test_set_index_directly(self):
+ df = tm.get_float_string_frame()
idx = Index(np.arange(len(df))[::-1])
df.index = idx
@@ -30,8 +30,8 @@ def test_set_index_directly(self, float_string_frame):
with pytest.raises(ValueError, match='Length mismatch'):
df.index = idx[::2]
- def test_set_index(self, float_string_frame):
- df = float_string_frame
+ def test_set_index(self):
+ df = tm.get_float_string_frame()
idx = Index(np.arange(len(df))[::-1])
df = df.set_index(idx)
@@ -51,9 +51,8 @@ def test_set_index_cast(self):
('tuple', 'as', 'label')])
@pytest.mark.parametrize('inplace', [True, False])
@pytest.mark.parametrize('drop', [True, False])
- def test_set_index_drop_inplace(self, frame_of_index_cols,
- drop, inplace, keys):
- df = frame_of_index_cols
+ def test_set_index_drop_inplace(self, drop, inplace, keys):
+ df = tm.get_frame_of_index_cols()
if isinstance(keys, list):
idx = MultiIndex.from_arrays([df[x] for x in keys], names=keys)
@@ -74,8 +73,8 @@ def test_set_index_drop_inplace(self, frame_of_index_cols,
@pytest.mark.parametrize('keys', ['A', 'C', ['A', 'B'],
('tuple', 'as', 'label')])
@pytest.mark.parametrize('drop', [True, False])
- def test_set_index_append(self, frame_of_index_cols, drop, keys):
- df = frame_of_index_cols
+ def test_set_index_append(self, drop, keys):
+ df = tm.get_frame_of_index_cols()
keys = keys if isinstance(keys, list) else [keys]
idx = MultiIndex.from_arrays([df.index] + [df[x] for x in keys],
@@ -91,8 +90,8 @@ def test_set_index_append(self, frame_of_index_cols, drop, keys):
@pytest.mark.parametrize('keys', ['A', 'C', ['A', 'B'],
('tuple', 'as', 'label')])
@pytest.mark.parametrize('drop', [True, False])
- def test_set_index_append_to_multiindex(self, frame_of_index_cols,
- drop, keys):
+ def test_set_index_append_to_multiindex(self, drop, keys):
+ frame_of_index_cols = tm.get_frame_of_index_cols()
# append to existing multiindex
df = frame_of_index_cols.set_index(['D'], drop=drop, append=True)
@@ -123,9 +122,8 @@ def test_set_index_after_mutation(self):
@pytest.mark.parametrize('append, index_name', [(True, None),
(True, 'B'), (True, 'test'), (False, None)])
@pytest.mark.parametrize('drop', [True, False])
- def test_set_index_pass_single_array(self, frame_of_index_cols,
- drop, append, index_name, box):
- df = frame_of_index_cols
+ def test_set_index_pass_single_array(self, drop, append, index_name, box):
+ df = tm.get_frame_of_index_cols()
df.index.name = index_name
key = box(df['B'])
@@ -156,9 +154,8 @@ def test_set_index_pass_single_array(self, frame_of_index_cols,
[(True, None), (True, 'A'), (True, 'B'),
(True, 'test'), (False, None)])
@pytest.mark.parametrize('drop', [True, False])
- def test_set_index_pass_arrays(self, frame_of_index_cols,
- drop, append, index_name, box):
- df = frame_of_index_cols
+ def test_set_index_pass_arrays(self, drop, append, index_name, box):
+ df = tm.get_frame_of_index_cols()
df.index.name = index_name
keys = ['A', box(df['B'])]
@@ -187,9 +184,9 @@ def test_set_index_pass_arrays(self, frame_of_index_cols,
@pytest.mark.parametrize('append, index_name', [(True, None),
(True, 'A'), (True, 'test'), (False, None)])
@pytest.mark.parametrize('drop', [True, False])
- def test_set_index_pass_arrays_duplicate(self, frame_of_index_cols, drop,
+ def test_set_index_pass_arrays_duplicate(self, drop,
append, index_name, box1, box2):
- df = frame_of_index_cols
+ df = tm.get_frame_of_index_cols()
df.index.name = index_name
keys = [box1(df['A']), box2(df['A'])]
@@ -209,9 +206,8 @@ def test_set_index_pass_arrays_duplicate(self, frame_of_index_cols, drop,
@pytest.mark.parametrize('append', [True, False])
@pytest.mark.parametrize('drop', [True, False])
- def test_set_index_pass_multiindex(self, frame_of_index_cols,
- drop, append):
- df = frame_of_index_cols
+ def test_set_index_pass_multiindex(self, drop, append):
+ df = tm.get_frame_of_index_cols()
keys = MultiIndex.from_arrays([df['A'], df['B']], names=['A', 'B'])
result = df.set_index(keys, drop=drop, append=append)
@@ -221,8 +217,8 @@ def test_set_index_pass_multiindex(self, frame_of_index_cols,
tm.assert_frame_equal(result, expected)
- def test_set_index_verify_integrity(self, frame_of_index_cols):
- df = frame_of_index_cols
+ def test_set_index_verify_integrity(self):
+ df = tm.get_frame_of_index_cols()
with pytest.raises(ValueError, match='Index has duplicate keys'):
df.set_index('A', verify_integrity=True)
@@ -232,8 +228,8 @@ def test_set_index_verify_integrity(self, frame_of_index_cols):
@pytest.mark.parametrize('append', [True, False])
@pytest.mark.parametrize('drop', [True, False])
- def test_set_index_raise_keys(self, frame_of_index_cols, drop, append):
- df = frame_of_index_cols
+ def test_set_index_raise_keys(self, drop, append):
+ df = tm.get_frame_of_index_cols()
with pytest.raises(KeyError, match="['foo', 'bar', 'baz']"):
# column names are A-E, as well as one tuple
@@ -256,9 +252,8 @@ def test_set_index_raise_keys(self, frame_of_index_cols, drop, append):
@pytest.mark.parametrize('append', [True, False])
@pytest.mark.parametrize('drop', [True, False])
@pytest.mark.parametrize('box', [set, iter])
- def test_set_index_raise_on_type(self, frame_of_index_cols, box,
- drop, append):
- df = frame_of_index_cols
+ def test_set_index_raise_on_type(self, box, drop, append):
+ df = tm.get_frame_of_index_cols()
msg = 'The parameter "keys" may be a column key, .*'
# forbidden type, e.g. set/tuple/iter
@@ -440,7 +435,9 @@ def test_set_index_empty_column(self):
names=['a', 'x'])
tm.assert_frame_equal(result, expected)
- def test_set_columns(self, float_string_frame):
+ def test_set_columns(self):
+ float_string_frame = tm.get_float_string_frame()
+
cols = Index(np.arange(len(float_string_frame.columns)))
float_string_frame.columns = cols
with pytest.raises(ValueError, match='Length mismatch'):
@@ -1015,7 +1012,8 @@ def test_set_index_names(self):
# Check equality
tm.assert_index_equal(df.set_index([df.index, idx2]).index, mi2)
- def test_rename_objects(self, float_string_frame):
+ def test_rename_objects(self):
+ float_string_frame = tm.get_float_string_frame()
renamed = float_string_frame.rename(columns=str.upper)
assert 'FOO' in renamed
diff --git a/pandas/tests/frame/test_analytics.py b/pandas/tests/frame/test_analytics.py
index f2c3f50c291c3..8c47b45ea5c41 100644
--- a/pandas/tests/frame/test_analytics.py
+++ b/pandas/tests/frame/test_analytics.py
@@ -263,7 +263,9 @@ def _check_method(self, frame, method='pearson'):
tm.assert_almost_equal(correls['A']['C'], expected)
@td.skip_if_no_scipy
- def test_corr_non_numeric(self, float_frame, float_string_frame):
+ def test_corr_non_numeric(self, float_frame):
+ float_string_frame = tm.get_float_string_frame()
+
float_frame['A'][:5] = np.nan
float_frame['B'][5:10] = np.nan
@@ -337,7 +339,9 @@ def test_corr_invalid_method(self):
with pytest.raises(ValueError, match=msg):
df.corr(method="____")
- def test_cov(self, float_frame, float_string_frame):
+ def test_cov(self, float_frame):
+ float_string_frame = tm.get_float_string_frame()
+
# min_periods no NAs (corner case)
expected = float_frame.cov()
result = float_frame.cov(min_periods=len(float_frame))
@@ -381,7 +385,8 @@ def test_cov(self, float_frame, float_string_frame):
index=df.columns, columns=df.columns)
tm.assert_frame_equal(result, expected)
- def test_corrwith(self, datetime_frame):
+ def test_corrwith(self):
+ datetime_frame = DataFrame(tm.getTimeSeriesData())
a = datetime_frame
noise = Series(np.random.randn(len(a)), index=a.index)
@@ -431,7 +436,9 @@ def test_corrwith_with_objects(self):
expected = df1.loc[:, cols].corrwith(df2.loc[:, cols], axis=1)
tm.assert_series_equal(result, expected)
- def test_corrwith_series(self, datetime_frame):
+ def test_corrwith_series(self):
+ datetime_frame = DataFrame(tm.getTimeSeriesData())
+
result = datetime_frame.corrwith(datetime_frame['A'])
expected = datetime_frame.apply(datetime_frame['A'].corr)
@@ -706,7 +713,10 @@ def test_reduce_mixed_frame(self):
np.array([2, 150, 'abcde'], dtype=object))
tm.assert_series_equal(test, df.T.sum(axis=1))
- def test_count(self, float_frame_with_na, float_frame, float_string_frame):
+ def test_count(self, float_frame):
+ float_frame_with_na = tm.get_float_frame_with_na()
+ float_string_frame = tm.get_float_string_frame()
+
f = lambda s: notna(s).sum()
assert_stat_op_calc('count', f, float_frame_with_na, has_skipna=False,
check_dtype=False, check_dates=True)
@@ -737,8 +747,10 @@ def test_count(self, float_frame_with_na, float_frame, float_string_frame):
expected = Series(0, index=[])
tm.assert_series_equal(result, expected)
- def test_nunique(self, float_frame_with_na, float_frame,
- float_string_frame):
+ def test_nunique(self, float_frame):
+ float_frame_with_na = tm.get_float_frame_with_na()
+ float_string_frame = tm.get_float_string_frame()
+
f = lambda s: len(algorithms.unique1d(s.dropna()))
assert_stat_op_calc('nunique', f, float_frame_with_na,
has_skipna=False, check_dtype=False,
@@ -755,8 +767,11 @@ def test_nunique(self, float_frame_with_na, float_frame,
tm.assert_series_equal(df.nunique(axis=1, dropna=False),
Series({0: 1, 1: 3, 2: 2}))
- def test_sum(self, float_frame_with_na, mixed_float_frame,
- float_frame, float_string_frame):
+ def test_sum(self, float_frame):
+ float_frame_with_na = tm.get_float_frame_with_na()
+ mixed_float_frame = tm.get_mixed_float_frame()
+ float_string_frame = tm.get_float_string_frame()
+
assert_stat_op_api('sum', float_frame, float_string_frame,
has_numeric_only=True)
assert_stat_op_calc('sum', np.sum, float_frame_with_na,
@@ -789,20 +804,27 @@ def test_stat_operators_attempt_obj_array(self, method):
if method in ['sum', 'prod']:
tm.assert_series_equal(result, expected)
- def test_mean(self, float_frame_with_na, float_frame, float_string_frame):
+ def test_mean(self, float_frame):
+ float_frame_with_na = tm.get_float_frame_with_na()
+ float_string_frame = tm.get_float_string_frame()
+
assert_stat_op_calc('mean', np.mean, float_frame_with_na,
check_dates=True)
assert_stat_op_api('mean', float_frame, float_string_frame)
- def test_product(self, float_frame_with_na, float_frame,
- float_string_frame):
+ def test_product(self, float_frame):
+ float_frame_with_na = tm.get_float_frame_with_na()
+ float_string_frame = tm.get_float_string_frame()
+
assert_stat_op_calc('product', np.prod, float_frame_with_na)
assert_stat_op_api('product', float_frame, float_string_frame)
# TODO: Ensure warning isn't emitted in the first place
@pytest.mark.filterwarnings("ignore:All-NaN:RuntimeWarning")
- def test_median(self, float_frame_with_na, float_frame,
- float_string_frame):
+ def test_median(self, float_frame):
+ float_frame_with_na = tm.get_float_frame_with_na()
+ float_string_frame = tm.get_float_string_frame()
+
def wrapper(x):
if isna(x).any():
return np.nan
@@ -812,8 +834,11 @@ def wrapper(x):
check_dates=True)
assert_stat_op_api('median', float_frame, float_string_frame)
- def test_min(self, float_frame_with_na, int_frame,
- float_frame, float_string_frame):
+ def test_min(self, float_frame):
+ float_frame_with_na = tm.get_float_frame_with_na()
+ float_string_frame = tm.get_float_string_frame()
+ int_frame = tm.get_int_frame()
+
with warnings.catch_warnings(record=True):
warnings.simplefilter("ignore", RuntimeWarning)
assert_stat_op_calc('min', np.min, float_frame_with_na,
@@ -821,7 +846,9 @@ def test_min(self, float_frame_with_na, int_frame,
assert_stat_op_calc('min', np.min, int_frame)
assert_stat_op_api('min', float_frame, float_string_frame)
- def test_cummin(self, datetime_frame):
+ def test_cummin(self):
+ datetime_frame = DataFrame(tm.getTimeSeriesData())
+
datetime_frame.loc[5:10, 0] = np.nan
datetime_frame.loc[10:15, 1] = np.nan
datetime_frame.loc[15:, 2] = np.nan
@@ -844,7 +871,9 @@ def test_cummin(self, datetime_frame):
cummin_xs = datetime_frame.cummin(axis=1)
assert np.shape(cummin_xs) == np.shape(datetime_frame)
- def test_cummax(self, datetime_frame):
+ def test_cummax(self):
+ datetime_frame = DataFrame(tm.getTimeSeriesData())
+
datetime_frame.loc[5:10, 0] = np.nan
datetime_frame.loc[10:15, 1] = np.nan
datetime_frame.loc[15:, 2] = np.nan
@@ -867,8 +896,11 @@ def test_cummax(self, datetime_frame):
cummax_xs = datetime_frame.cummax(axis=1)
assert np.shape(cummax_xs) == np.shape(datetime_frame)
- def test_max(self, float_frame_with_na, int_frame,
- float_frame, float_string_frame):
+ def test_max(self, float_frame):
+ float_frame_with_na = tm.get_float_frame_with_na()
+ float_string_frame = tm.get_float_string_frame()
+ int_frame = tm.get_int_frame()
+
with warnings.catch_warnings(record=True):
warnings.simplefilter("ignore", RuntimeWarning)
assert_stat_op_calc('max', np.max, float_frame_with_na,
@@ -876,13 +908,19 @@ def test_max(self, float_frame_with_na, int_frame,
assert_stat_op_calc('max', np.max, int_frame)
assert_stat_op_api('max', float_frame, float_string_frame)
- def test_mad(self, float_frame_with_na, float_frame, float_string_frame):
+ def test_mad(self, float_frame):
+ float_frame_with_na = tm.get_float_frame_with_na()
+ float_string_frame = tm.get_float_string_frame()
+
f = lambda x: np.abs(x - x.mean()).mean()
assert_stat_op_calc('mad', f, float_frame_with_na)
assert_stat_op_api('mad', float_frame, float_string_frame)
- def test_var_std(self, float_frame_with_na, datetime_frame, float_frame,
- float_string_frame):
+ def test_var_std(self, float_frame):
+ float_frame_with_na = tm.get_float_frame_with_na()
+ datetime_frame = DataFrame(tm.getTimeSeriesData())
+ float_string_frame = tm.get_float_string_frame()
+
alt = lambda x: np.var(x, ddof=1)
assert_stat_op_calc('var', alt, float_frame_with_na)
assert_stat_op_api('var', float_frame, float_string_frame)
@@ -948,7 +986,9 @@ def test_mixed_ops(self, op):
result = getattr(df, op)()
assert len(result) == 2
- def test_cumsum(self, datetime_frame):
+ def test_cumsum(self):
+ datetime_frame = DataFrame(tm.getTimeSeriesData())
+
datetime_frame.loc[5:10, 0] = np.nan
datetime_frame.loc[10:15, 1] = np.nan
datetime_frame.loc[15:, 2] = np.nan
@@ -971,7 +1011,9 @@ def test_cumsum(self, datetime_frame):
cumsum_xs = datetime_frame.cumsum(axis=1)
assert np.shape(cumsum_xs) == np.shape(datetime_frame)
- def test_cumprod(self, datetime_frame):
+ def test_cumprod(self):
+ datetime_frame = DataFrame(tm.getTimeSeriesData())
+
datetime_frame.loc[5:10, 0] = np.nan
datetime_frame.loc[10:15, 1] = np.nan
datetime_frame.loc[15:, 2] = np.nan
@@ -1000,8 +1042,11 @@ def test_cumprod(self, datetime_frame):
df.cumprod(0)
df.cumprod(1)
- def test_sem(self, float_frame_with_na, datetime_frame,
- float_frame, float_string_frame):
+ def test_sem(self, float_frame):
+ float_frame_with_na = tm.get_float_frame_with_na()
+ datetime_frame = DataFrame(tm.getTimeSeriesData())
+ float_string_frame = tm.get_float_string_frame()
+
alt = lambda x: np.std(x, ddof=1) / np.sqrt(len(x))
assert_stat_op_calc('sem', alt, float_frame_with_na)
assert_stat_op_api('sem', float_frame, float_string_frame)
@@ -1020,9 +1065,12 @@ def test_sem(self, float_frame_with_na, datetime_frame,
assert not (result < 0).any()
@td.skip_if_no_scipy
- def test_skew(self, float_frame_with_na, float_frame, float_string_frame):
+ def test_skew(self, float_frame):
from scipy.stats import skew
+ float_frame_with_na = tm.get_float_frame_with_na()
+ float_string_frame = tm.get_float_string_frame()
+
def alt(x):
if len(x) < 3:
return np.nan
@@ -1032,9 +1080,12 @@ def alt(x):
assert_stat_op_api('skew', float_frame, float_string_frame)
@td.skip_if_no_scipy
- def test_kurt(self, float_frame_with_na, float_frame, float_string_frame):
+ def test_kurt(self, float_frame):
from scipy.stats import kurtosis
+ float_frame_with_na = tm.get_float_frame_with_na()
+ float_string_frame = tm.get_float_string_frame()
+
def alt(x):
if len(x) < 4:
return np.nan
@@ -1280,8 +1331,10 @@ def test_sum_bool(self, float_frame):
bools.sum(1)
bools.sum(0)
- def test_mean_corner(self, float_frame, float_string_frame):
+ def test_mean_corner(self, float_frame):
# unit test when have object data
+ float_string_frame = tm.get_float_string_frame()
+
the_mean = float_string_frame.mean(axis=0)
the_sum = float_string_frame.sum(axis=0, numeric_only=True)
tm.assert_index_equal(the_sum.index, the_mean.index)
@@ -1297,8 +1350,10 @@ def test_mean_corner(self, float_frame, float_string_frame):
means = float_frame.mean(0)
assert means['bool'] == float_frame['bool'].values.mean()
- def test_stats_mixed_type(self, float_string_frame):
+ def test_stats_mixed_type(self):
# don't blow up
+ float_string_frame = tm.get_float_string_frame()
+
float_string_frame.std(1)
float_string_frame.var(1)
float_string_frame.mean(1)
@@ -1306,7 +1361,10 @@ def test_stats_mixed_type(self, float_string_frame):
# TODO: Ensure warning isn't emitted in the first place
@pytest.mark.filterwarnings("ignore:All-NaN:RuntimeWarning")
- def test_median_corner(self, int_frame, float_frame, float_string_frame):
+ def test_median_corner(self, float_frame):
+ float_string_frame = tm.get_float_string_frame()
+ int_frame = tm.get_int_frame()
+
def wrapper(x):
if isna(x).any():
return np.nan
@@ -1318,7 +1376,9 @@ def wrapper(x):
# Miscellanea
- def test_count_objects(self, float_string_frame):
+ def test_count_objects(self):
+ float_string_frame = tm.get_float_string_frame()
+
dm = DataFrame(float_string_frame._series)
df = DataFrame(float_string_frame._series)
@@ -1338,8 +1398,10 @@ def test_sum_bools(self):
# Index of max / min
- def test_idxmin(self, float_frame, int_frame):
+ def test_idxmin(self, float_frame):
+ int_frame = tm.get_int_frame()
frame = float_frame
+
frame.loc[5:10] = np.nan
frame.loc[15:20, -2:] = np.nan
for skipna in [True, False]:
@@ -1352,8 +1414,10 @@ def test_idxmin(self, float_frame, int_frame):
pytest.raises(ValueError, frame.idxmin, axis=2)
- def test_idxmax(self, float_frame, int_frame):
+ def test_idxmax(self, float_frame):
+ int_frame = tm.get_int_frame()
frame = float_frame
+
frame.loc[5:10] = np.nan
frame.loc[15:20, -2:] = np.nan
for skipna in [True, False]:
@@ -1370,7 +1434,9 @@ def test_idxmax(self, float_frame, int_frame):
# Logical reductions
@pytest.mark.parametrize('opname', ['any', 'all'])
- def test_any_all(self, opname, bool_frame_with_na, float_string_frame):
+ def test_any_all(self, opname, bool_frame_with_na):
+ float_string_frame = tm.get_float_string_frame()
+
assert_bool_op_calc(opname, getattr(np, opname), bool_frame_with_na,
has_skipna=True)
assert_bool_op_api(opname, bool_frame_with_na, float_string_frame,
@@ -1969,10 +2035,10 @@ def test_clip_against_series(self, inplace):
(0, [[2., 2., 3.], [4., 5., 6.], [7., 7., 7.]]),
(1, [[2., 3., 4.], [4., 5., 6.], [5., 6., 7.]])
])
- def test_clip_against_list_like(self, simple_frame,
- inplace, lower, axis, res):
+ def test_clip_against_list_like(self, inplace, lower, axis, res):
# GH 15390
- original = simple_frame.copy(deep=True)
+
+ original = tm.get_simple_frame()
result = original.clip(lower=lower, upper=[5, 6, 7],
axis=axis, inplace=inplace)
diff --git a/pandas/tests/frame/test_api.py b/pandas/tests/frame/test_api.py
index 0934dd20638e4..c823ef087a106 100644
--- a/pandas/tests/frame/test_api.py
+++ b/pandas/tests/frame/test_api.py
@@ -171,7 +171,9 @@ def test_get_agg_axis(self, float_frame):
pytest.raises(ValueError, float_frame._get_agg_axis, 2)
- def test_nonzero(self, float_frame, float_string_frame, empty_frame):
+ def test_nonzero(self, float_frame, empty_frame):
+ float_string_frame = tm.get_float_string_frame()
+
assert empty_frame.empty
assert not float_frame.empty
@@ -201,7 +203,9 @@ def test_items(self):
def test_iter(self, float_frame):
assert tm.equalContents(list(float_frame), float_frame.columns)
- def test_iterrows(self, float_frame, float_string_frame):
+ def test_iterrows(self, float_frame):
+ float_string_frame = tm.get_float_string_frame()
+
for k, v in float_frame.iterrows():
exp = float_frame.loc[k]
self._assert_series_equal(v, exp)
@@ -288,7 +292,9 @@ def test_sequence_like_with_categorical(self):
def test_len(self, float_frame):
assert len(float_frame) == len(float_frame.index)
- def test_values(self, float_frame, float_string_frame):
+ def test_values(self, float_frame):
+ float_string_frame = tm.get_float_string_frame()
+
frame = float_frame
arr = frame.values
@@ -376,22 +382,29 @@ def test_class_axis(self):
assert pydoc.getdoc(DataFrame.index)
assert pydoc.getdoc(DataFrame.columns)
- def test_more_values(self, float_string_frame):
+ def test_more_values(self):
+ float_string_frame = tm.get_float_string_frame()
+
values = float_string_frame.values
assert values.shape[1] == len(float_string_frame.columns)
- def test_repr_with_mi_nat(self, float_string_frame):
+ def test_repr_with_mi_nat(self):
+
df = self.klass({'X': [1, 2]},
index=[[pd.NaT, pd.Timestamp('20130101')], ['a', 'b']])
result = repr(df)
expected = ' X\nNaT a 1\n2013-01-01 b 2'
assert result == expected
- def test_iteritems_names(self, float_string_frame):
+ def test_iteritems_names(self):
+ float_string_frame = tm.get_float_string_frame()
+
for k, v in compat.iteritems(float_string_frame):
assert v.name == k
- def test_series_put_names(self, float_string_frame):
+ def test_series_put_names(self):
+ float_string_frame = tm.get_float_string_frame()
+
series = float_string_frame._series
for k, v in compat.iteritems(series):
assert v.name == k
diff --git a/pandas/tests/frame/test_apply.py b/pandas/tests/frame/test_apply.py
index ade527a16c902..dc7262b92ca48 100644
--- a/pandas/tests/frame/test_apply.py
+++ b/pandas/tests/frame/test_apply.py
@@ -228,7 +228,9 @@ def test_apply_axis1(self, float_frame):
tapplied = float_frame.apply(np.mean, axis=1)
assert tapplied[d] == np.mean(float_frame.xs(d))
- def test_apply_ignore_failures(self, float_string_frame):
+ def test_apply_ignore_failures(self):
+ float_string_frame = tm.get_float_string_frame()
+
result = frame_apply(float_string_frame, np.mean, 0,
ignore_failures=True).apply_standard()
expected = float_string_frame._get_numeric_data().apply(np.mean)
diff --git a/pandas/tests/frame/test_arithmetic.py b/pandas/tests/frame/test_arithmetic.py
index f14ecae448723..6b48f15be0f6f 100644
--- a/pandas/tests/frame/test_arithmetic.py
+++ b/pandas/tests/frame/test_arithmetic.py
@@ -322,11 +322,12 @@ def test_df_add_flex_filled_mixed_dtypes(self):
'B': ser * 2})
tm.assert_frame_equal(result, expected)
- def test_arith_flex_frame(self, all_arithmetic_operators, float_frame,
- mixed_float_frame):
+ def test_arith_flex_frame(self, all_arithmetic_operators, float_frame):
# one instance of parametrized fixture
op = all_arithmetic_operators
+ mixed_float_frame = tm.get_mixed_float_frame()
+
def f(x, y):
# r-versions not in operator-stdlib; get op without "r" and invert
if op.startswith('__r'):
@@ -344,10 +345,13 @@ def f(x, y):
_check_mixed_float(result, dtype=dict(C=None))
@pytest.mark.parametrize('op', ['__add__', '__sub__', '__mul__'])
- def test_arith_flex_frame_mixed(self, op, int_frame, mixed_int_frame,
- mixed_float_frame):
+ def test_arith_flex_frame_mixed(self, op):
f = getattr(operator, op)
+ int_frame = tm.get_int_frame()
+ mixed_int_frame = tm.get_mixed_int_frame()
+ mixed_float_frame = tm.get_mixed_float_frame()
+
# vs mix int
result = getattr(mixed_int_frame, op)(2 + mixed_int_frame)
expected = f(mixed_int_frame, 2 + mixed_int_frame)
@@ -402,8 +406,8 @@ def test_arith_flex_frame_corner(self, float_frame):
with pytest.raises(NotImplementedError, match='fill_value'):
float_frame.add(float_frame.iloc[0], axis='index', fill_value=3)
- def test_arith_flex_series(self, simple_frame):
- df = simple_frame
+ def test_arith_flex_series(self):
+ df = tm.get_simple_frame()
row = df.xs('a')
col = df['two']
diff --git a/pandas/tests/frame/test_block_internals.py b/pandas/tests/frame/test_block_internals.py
index 5419f4d5127f6..e981466773989 100644
--- a/pandas/tests/frame/test_block_internals.py
+++ b/pandas/tests/frame/test_block_internals.py
@@ -104,7 +104,10 @@ def test_values_numeric_cols(self, float_frame):
values = float_frame[['A', 'B', 'C', 'D']].values
assert values.dtype == np.float64
- def test_values_lcd(self, mixed_float_frame, mixed_int_frame):
+ def test_values_lcd(self):
+
+ mixed_int_frame = tm.get_mixed_int_frame()
+ mixed_float_frame = tm.get_mixed_float_frame()
# mixed lcd
values = mixed_float_frame[['A', 'B', 'C', 'D']].values
@@ -211,8 +214,9 @@ def test_constructor_with_convert(self):
None], np.object_), name='A')
assert_series_equal(result, expected)
- def test_construction_with_mixed(self, float_string_frame):
+ def test_construction_with_mixed(self):
# test construction edge cases with mixed types
+ float_string_frame = tm.get_float_string_frame()
# f7u12, this does not work without extensive workaround
data = [[datetime(2001, 1, 5), np.nan, datetime(2001, 1, 2)],
@@ -338,7 +342,9 @@ def test_no_copy_blocks(self, float_frame):
# make sure we did change the original DataFrame
assert _df[column].equals(df[column])
- def test_copy(self, float_frame, float_string_frame):
+ def test_copy(self, float_frame):
+ float_string_frame = tm.get_float_string_frame()
+
cop = float_frame.copy()
cop['E'] = cop['A']
assert 'E' not in float_frame
@@ -347,7 +353,10 @@ def test_copy(self, float_frame, float_string_frame):
copy = float_string_frame.copy()
assert copy._data is not float_string_frame._data
- def test_pickle(self, float_string_frame, empty_frame, timezone_frame):
+ def test_pickle(self, empty_frame):
+ timezone_frame = tm.get_timezone_frame()
+ float_string_frame = tm.get_float_string_frame()
+
unpickled = tm.round_trip_pickle(float_string_frame)
assert_frame_equal(float_string_frame, unpickled)
@@ -394,7 +403,9 @@ def test_consolidate_datetime64(self):
df.starting), ser_starting.index)
tm.assert_index_equal(pd.DatetimeIndex(df.ending), ser_ending.index)
- def test_is_mixed_type(self, float_frame, float_string_frame):
+ def test_is_mixed_type(self, float_frame):
+ float_string_frame = tm.get_float_string_frame()
+
assert not float_frame._is_mixed_type
assert float_string_frame._is_mixed_type
@@ -454,7 +465,8 @@ def test_get_numeric_data_extension_dtype(self):
expected = df.loc[:, ['A', 'C']]
assert_frame_equal(result, expected)
- def test_convert_objects(self, float_string_frame):
+ def test_convert_objects(self):
+ float_string_frame = tm.get_float_string_frame()
oops = float_string_frame.T.T
converted = oops._convert(datetime=True)
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index f441dd20f3982..b197a5f73d801 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -33,7 +33,8 @@
import pandas as pd
from pandas import (
Categorical, CategoricalIndex, DataFrame, DatetimeIndex, Index,
- IntervalIndex, MultiIndex, Panel, RangeIndex, Series, bdate_range)
+ IntervalIndex, MultiIndex, NaT, Panel, RangeIndex, Series, bdate_range,
+ date_range)
from pandas.core.algorithms import take_1d
from pandas.core.arrays import (
DatetimeArray, ExtensionArray, IntervalArray, PeriodArray, TimedeltaArray,
@@ -3065,3 +3066,117 @@ def convert_rows_list_to_csv_str(rows_list):
sep = os.linesep
expected = sep.join(rows_list) + sep
return expected
+
+
+# -----------------------------------------------------------------------------
+# Fixture-Like Singletons
+
+def get_simple_frame():
+ """
+ Fixture for simple 3x3 DataFrame
+
+ Columns are ['one', 'two', 'three'], index is ['a', 'b', 'c'].
+ """
+ arr = np.array([[1., 2., 3.],
+ [4., 5., 6.],
+ [7., 8., 9.]])
+
+ return DataFrame(arr, columns=['one', 'two', 'three'],
+ index=['a', 'b', 'c'])
+
+
+def get_int_frame():
+ """
+ Fixture for DataFrame of ints with index of unique strings
+
+ Columns are ['A', 'B', 'C', 'D']
+ """
+ df = DataFrame({k: v.astype(int)
+ for k, v in compat.iteritems(getSeriesData())})
+ # force these all to int64 to avoid platform testing issues
+ return DataFrame({c: s for c, s in compat.iteritems(df)}, dtype=np.int64)
+
+
+def get_mixed_int_frame():
+ """
+ Fixture for DataFrame of different int types with index of unique strings
+
+ Columns are ['A', 'B', 'C', 'D'].
+ """
+ df = DataFrame({k: v.astype(int)
+ for k, v in compat.iteritems(getSeriesData())})
+ df.A = df.A.astype('int32')
+ df.B = np.ones(len(df.B), dtype='uint64')
+ df.C = df.C.astype('uint8')
+ df.D = df.C.astype('int64')
+ return df
+
+
+def get_float_frame_with_na():
+ """
+ Fixture for DataFrame of floats with index of unique strings
+
+ Columns are ['A', 'B', 'C', 'D']; some entries are missing
+ """
+ df = DataFrame(getSeriesData())
+ # set some NAs
+ df.loc[5:10] = np.nan
+ df.loc[15:20, -2:] = np.nan
+ return df
+
+
+def get_float_string_frame():
+ """
+ Fixture for DataFrame of floats and strings with index of unique strings
+
+ Columns are ['A', 'B', 'C', 'D', 'foo'].
+ """
+ df = DataFrame(getSeriesData())
+ df['foo'] = 'bar'
+ return df
+
+
+def get_mixed_float_frame():
+ """
+ Fixture for DataFrame of different float types with index of unique strings
+
+ Columns are ['A', 'B', 'C', 'D'].
+ """
+ df = DataFrame(getSeriesData())
+ df.A = df.A.astype('float32')
+ df.B = df.B.astype('float32')
+ df.C = df.C.astype('float16')
+ df.D = df.D.astype('float64')
+ return df
+
+
+def get_timezone_frame():
+ """
+ Fixture for DataFrame of date_range Series with different time zones
+
+ Columns are ['A', 'B', 'C']; some entries are missing
+ """
+ df = DataFrame({'A': date_range('20130101', periods=3),
+ 'B': date_range('20130101', periods=3,
+ tz='US/Eastern'),
+ 'C': date_range('20130101', periods=3,
+ tz='CET')})
+ df.iloc[1, 1] = NaT
+ df.iloc[1, 2] = NaT
+ return df
+
+
+def get_frame_of_index_cols():
+ """
+ Fixture for DataFrame of columns that can be used for indexing
+
+ Columns are ['A', 'B', 'C', 'D', 'E', ('tuple', 'as', 'label')];
+ 'A' & 'B' contain duplicates (but are jointly unique), the rest are unique.
+ """
+ df = DataFrame({'A': ['foo', 'foo', 'foo', 'bar', 'bar'],
+ 'B': ['one', 'two', 'three', 'one', 'two'],
+ 'C': ['a', 'b', 'c', 'd', 'e'],
+ 'D': np.random.randn(5),
+ 'E': np.random.randn(5),
+ ('tuple', 'as', 'label'): np.random.randn(5)})
+ return df
| Broken off of #24769, where we learned that some test behavior depends on whether or not a fixture is being used.
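
As a minimal sketch of one way fixture and non-fixture code paths can diverge (the names and frames here are hypothetical, and this is not necessarily the exact mechanism from #24769): a module-scoped pytest fixture hands every test in the module the same cached object, so an in-place mutation in one test leaks into later tests, whereas a plain function returns a fresh object on every call.

```python
import pandas as pd
import pytest


@pytest.fixture(scope="module")
def shared_frame():
    # Constructed once per module; every test receives the same object.
    return pd.DataFrame({"A": [1.0, 2.0, 3.0]})


def get_fresh_frame():
    # Plain function: each caller gets an independent object.
    return pd.DataFrame({"A": [1.0, 2.0, 3.0]})


def test_mutates(shared_frame):
    shared_frame.loc[0, "A"] = -1.0  # leaks into later tests in this module


def test_sees_mutation(shared_frame):
    # Passes only because of the mutation above -- order-dependent.
    assert shared_frame.loc[0, "A"] == -1.0


def test_isolated():
    df = get_fresh_frame()  # no cross-test state, regardless of ordering
    assert df.loc[0, "A"] == 1.0
```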
I claim this is another point in favor of not using fixtures when a plain-Python alternative exists (in this case, an ordinary function). Whenever with-fixture behavior differs from without-fixture behavior, it is the latter that better represents user runtime environments, and that is what we should be testing. | https://api.github.com/repos/pandas-dev/pandas/pulls/24873 | 2019-01-22T03:00:08Z | 2019-02-03T04:04:46Z | null | 2020-04-05T17:37:09Z |
POC: linting cython files | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index c8bfc564e7573..4c808a5b5ef55 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -69,6 +69,10 @@ if [[ -z "$CHECK" || "$CHECK" == "lint" ]]; then
flake8 --format="$FLAKE8_FORMAT" pandas/_libs --filename=*.pxi.in,*.pxd --select=E501,E302,E203,E111,E114,E221,E303,E231,E126,F403
RET=$(($RET + $?)) ; echo $MSG "DONE"
+ MSG='Linting python-compatible .pyx files' ; echo $MSG
+ python $BASE_DIR/scripts/cyflake.py pandas/_libs/indexing.pyx
+ RET=$(($RET + $?)) ; echo $MSG "DONE"
+
echo "flake8-rst --version"
flake8-rst --version
diff --git a/pandas/_libs/indexing.pyx b/pandas/_libs/indexing.pyx
index af6e00bad7f6b..15fc9aa7fa7b0 100644
--- a/pandas/_libs/indexing.pyx
+++ b/pandas/_libs/indexing.pyx
@@ -1,12 +1,16 @@
# -*- coding: utf-8 -*-
+import cython
-cdef class _NDFrameIndexerBase:
+@cython.cclass
+class _NDFrameIndexerBase:
"""
A base class for _NDFrameIndexer for fast instantiation and attribute
access.
"""
- cdef public object obj, name, _ndim
+ obj = cython.declare(object, visibility='public')
+ name = cython.declare(object, visibility='public')
+ _ndim = cython.declare(object, visibility='public')
def __init__(self, name, obj):
self.obj = obj
diff --git a/scripts/cyflake.py b/scripts/cyflake.py
new file mode 100644
index 0000000000000..ad87d9d84d915
--- /dev/null
+++ b/scripts/cyflake.py
@@ -0,0 +1,119 @@
+import contextlib
+import os
+import re
+import subprocess
+import sys
+import tempfile
+
+
+def check_file(path):
+ """
+ Run a flake8-like check on the cython file at the given path.
+
+ Parameters
+ ----------
+ path : str
+
+ Returns
+ -------
+ return_code : int
+ """
+ with open(path, 'rb') as fd:
+ content = fd.read().decode('utf-8')
+
+ py_content = clean_cy_content(content)
+
+ fname = os.path.split(path)[1]
+
+ with ensure_clean(fname) as temp_path:
+ with open(temp_path, 'wb') as fd:
+ fd.write(py_content.encode('utf-8'))
+
+ rc = call_flake8(temp_path, path)
+
+ return rc
+
+
+@contextlib.contextmanager
+def ensure_clean(filename):
+ """
+ A poor-man's version of pandas.util.testing.ensure_clean
+ """
+ fd, filename = tempfile.mkstemp(suffix=filename)
+
+ try:
+ yield filename
+ finally:
+ try:
+ os.close(fd)
+ except Exception:
+ print("Couldn't close file descriptor: {fdesc} (file: {fname})"
+ .format(fdesc=fd, fname=filename))
+ try:
+ if os.path.exists(filename):
+ os.remove(filename)
+ except Exception as e:
+ print("Exception on removing file: {error}".format(error=e))
+
+
+def clean_cy_content(content):
+ """
+ For cython code that we cannot make into valid python, try to edit it
+ into something that the linter will recognize.
+
+ Parameters
+ ----------
+ content : unicode
+
+ Returns
+ -------
+ unicode
+ """
+
+ # Note: this may mess up subsequent lines indentation alignment if there
+ # are multi-line cimports
+ content = re.sub(u'^cimport ', u'import ', content)
+ content = re.sub(r'(?<=\s)cimport ', u'import ', content)
+ return content
+
+
+def call_flake8(temp_path, orig_path):
+ """
+ Wrapper to call flake8 on the file at the given temp_path, editing any
+ messages to point to the original file's path.
+
+ Parameters
+ ----------
+ temp_path : str
+ orig_path : str
+
+ Returns
+ -------
+ return_code : int
+ """
+ p = subprocess.Popen(['flake8', temp_path],
+ close_fds=True,
+ stderr=subprocess.PIPE,
+ stdout=subprocess.PIPE)
+ stdout, stderr = p.communicate()
+
+ # Edit the messages to include the original path
+ stdout = stdout.decode('utf-8').replace(temp_path, orig_path)
+ stderr = stderr.decode('utf-8').replace(temp_path, orig_path)
+
+ # TODO: better to just print?
+ sys.stdout.write(stdout)
+ sys.stderr.write(stderr)
+
+ return p.returncode
+
+
+if __name__ == "__main__":
+ args = sys.argv[1:]
+ rc = 0
+ for path in args:
+ # no validation, we just assume these are paths
+ rc2 = check_file(path)
+ rc = rc or rc2
+
+ sys.exit(rc)
| The edits in indexing.pyx are only there to get the linting to pass, for demonstration purposes.
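
For reference, a sketch of what the `cimport` rewrite in `clean_cy_content` does before the temp file is handed to flake8. The function body below mirrors the two substitutions in `scripts/cyflake.py` so the snippet is self-contained; the sample source string is illustrative.

```python
import re


def clean_cy_content(content):
    # Same two substitutions as scripts/cyflake.py: rewrite a `cimport` at
    # the very start of the file, then any `cimport` preceded by whitespace
    # (which covers later lines, since "\n" matches the lookbehind).
    content = re.sub(u'^cimport ', u'import ', content)
    content = re.sub(r'(?<=\s)cimport ', u'import ', content)
    return content


src = u"cimport numpy as np\nfrom pandas cimport util\n"
print(clean_cy_content(src))
# import numpy as np
# from pandas import util
```

Any flake8 findings on the cleaned copy are reported with the temp path mapped back to the original `.pyx` path, and the script exits with flake8's nonzero return code if any file produced findings.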
| https://api.github.com/repos/pandas-dev/pandas/pulls/24872 | 2019-01-22T00:31:08Z | 2019-02-02T20:41:43Z | null | 2019-09-17T18:57:12Z |