title | diff | body | url | created_at | closed_at | merged_at | updated_at
---|---|---|---|---|---|---|---|
move 3.9 travis build to allowed failures | diff --git a/.travis.yml b/.travis.yml
index c5dbddacc6a43..fdea9876d5d89 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -69,9 +69,9 @@ matrix:
env:
- JOB="3.7, arm64" PYTEST_WORKERS=8 ENV_FILE="ci/deps/travis-37-arm64.yaml" PATTERN="(not slow and not network and not clipboard)"
- dist: bionic
- python: 3.9-dev
env:
- - JOB="3.9-dev" PATTERN="(not slow and not network)"
+ - JOB="3.9-dev" PATTERN="(not slow and not network and not clipboard)"
+
before_install:
- echo "before_install"
| xref #34881
| https://api.github.com/repos/pandas-dev/pandas/pulls/34894 | 2020-06-20T12:47:03Z | 2020-06-20T13:07:52Z | 2020-06-20T13:07:52Z | 2020-06-20T13:07:53Z |
DOC: add mention of recommended dependencies in users guide | diff --git a/doc/source/user_guide/enhancingperf.rst b/doc/source/user_guide/enhancingperf.rst
index 2056fe2f754f8..24fcb369804c6 100644
--- a/doc/source/user_guide/enhancingperf.rst
+++ b/doc/source/user_guide/enhancingperf.rst
@@ -13,6 +13,14 @@ when we use Cython and Numba on a test function operating row-wise on the
``DataFrame``. Using :func:`pandas.eval` we will speed up a sum by an order of
~2.
+.. note::
+
+ In addition to following the steps in this tutorial, users interested in enhancing
+ performance are highly encouraged to install the
+ :ref:`recommended dependencies<install.recommended_dependencies>` for pandas.
+ These dependencies are often not installed by default, but will offer speed
+ improvements if present.
+
.. _enhancingperf.cython:
Cython (writing C extensions for pandas)
| First-time contributor here!
Ian Ozsvald mentioned installing the optional dependencies for pandas in his talk at PyData Amsterdam 2020. Although these are mentioned in the Getting Started section, I thought they should also be mentioned in the User Guide, which is where I typically go for pandas information. It seems many users aren't aware that these dependencies are available to increase performance.
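Not part of the diff, just a minimal sketch of the kind of speed-up these dependencies enable, assuming `numexpr` is installed (`pd.eval` dispatches to it automatically when present):

``` python
import numpy as np
import pandas as pd

# with numexpr installed, pd.eval evaluates this expression with its fast engine
df = pd.DataFrame({"a": np.random.randn(100_000), "b": np.random.randn(100_000)})
result = pd.eval("df.a * df.b + 1")
```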
- [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/34890 | 2020-06-20T11:59:46Z | 2020-06-20T13:45:11Z | 2020-06-20T13:45:11Z | 2020-06-20T17:14:05Z |
TST: IntegerNA Support for DataFrame.diff() | diff --git a/pandas/tests/frame/methods/test_diff.py b/pandas/tests/frame/methods/test_diff.py
index e876e40aa2eb1..45f134a93a23a 100644
--- a/pandas/tests/frame/methods/test_diff.py
+++ b/pandas/tests/frame/methods/test_diff.py
@@ -169,3 +169,48 @@ def test_diff_sparse(self):
)
tm.assert_frame_equal(result, expected)
+
+ @pytest.mark.parametrize(
+ "axis,expected",
+ [
+ (
+ 0,
+ pd.DataFrame(
+ {
+ "a": [np.nan, 0, 1, 0, np.nan, np.nan, np.nan, 0],
+ "b": [np.nan, 1, np.nan, np.nan, -2, 1, np.nan, np.nan],
+ "c": np.repeat(np.nan, 8),
+ "d": [np.nan, 3, 5, 7, 9, 11, 13, 15],
+ },
+ dtype="Int64",
+ ),
+ ),
+ (
+ 1,
+ pd.DataFrame(
+ {
+ "a": np.repeat(np.nan, 8),
+ "b": [0, 1, np.nan, 1, np.nan, np.nan, np.nan, 0],
+ "c": np.repeat(np.nan, 8),
+ "d": np.repeat(np.nan, 8),
+ },
+ dtype="Int64",
+ ),
+ ),
+ ],
+ )
+ def test_diff_integer_na(self, axis, expected):
+ # GH#24171 IntegerNA Support for DataFrame.diff()
+ df = pd.DataFrame(
+ {
+ "a": np.repeat([0, 1, np.nan, 2], 2),
+ "b": np.tile([0, 1, np.nan, 2], 2),
+ "c": np.repeat(np.nan, 8),
+ "d": np.arange(1, 9) ** 2,
+ },
+ dtype="Int64",
+ )
+
+ # Test case for default behaviour of diff
+ result = df.diff(axis=axis)
+ tm.assert_frame_equal(result, expected)
| - [x] closes #24171
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
Test for IntegerNA support in DataFrame.diff().
``` bash
pytest tests/frame/methods/test_diff.py::TestDataFrameDiff::test_diff_integer_na
```
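For illustration (not part of the PR), the behaviour being locked in — `diff` on a nullable integer column keeps the `Int64` dtype:

``` python
import pandas as pd

df = pd.DataFrame({"a": pd.array([1, 2, None, 4], dtype="Int64")})
df.diff()  # result stays Int64, with <NA> where the input is missing
```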
| https://api.github.com/repos/pandas-dev/pandas/pulls/34889 | 2020-06-20T11:36:49Z | 2020-06-20T16:50:34Z | 2020-06-20T16:50:33Z | 2020-06-20T16:50:37Z |
DOC: document support for in-memory HDFStore GH33166 | diff --git a/doc/source/user_guide/cookbook.rst b/doc/source/user_guide/cookbook.rst
index 56ef6fc479f2c..50b946999092a 100644
--- a/doc/source/user_guide/cookbook.rst
+++ b/doc/source/user_guide/cookbook.rst
@@ -1166,6 +1166,25 @@ Storing Attributes to a group node
store.close()
os.remove('test.h5')
+You can create or load a HDFStore in-memory by passing the ``driver``
+parameter to PyTables. Changes are only written to disk when the HDFStore
+is closed.
+
+.. ipython:: python
+
+    store = pd.HDFStore('test.h5', 'w', driver='H5FD_CORE')
+
+ df = pd.DataFrame(np.random.randn(8, 3))
+ store['test'] = df
+
+ # only after closing the store, data is written to disk:
+ store.close()
+
+.. ipython:: python
+ :suppress:
+
+ os.remove('test.h5')
+
.. _cookbook.binary:
Binary files
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 8aac8f9531512..800e9474cc0f8 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -447,8 +447,8 @@ class HDFStore:
Parameters
----------
- path : string
- File path to HDF5 file
+ path : str
+ File path to HDF5 file.
mode : {'a', 'w', 'r', 'r+'}, default 'a'
``'r'``
@@ -462,18 +462,20 @@ class HDFStore:
``'r+'``
It is similar to ``'a'``, but the file must already exist.
complevel : int, 0-9, default None
- Specifies a compression level for data.
- A value of 0 or None disables compression.
+ Specifies a compression level for data.
+ A value of 0 or None disables compression.
complib : {'zlib', 'lzo', 'bzip2', 'blosc'}, default 'zlib'
- Specifies the compression library to be used.
- As of v0.20.2 these additional compressors for Blosc are supported
- (default if no compressor specified: 'blosc:blosclz'):
- {'blosc:blosclz', 'blosc:lz4', 'blosc:lz4hc', 'blosc:snappy',
- 'blosc:zlib', 'blosc:zstd'}.
- Specifying a compression library which is not available issues
- a ValueError.
+ Specifies the compression library to be used.
+ As of v0.20.2 these additional compressors for Blosc are supported
+ (default if no compressor specified: 'blosc:blosclz'):
+ {'blosc:blosclz', 'blosc:lz4', 'blosc:lz4hc', 'blosc:snappy',
+ 'blosc:zlib', 'blosc:zstd'}.
+ Specifying a compression library which is not available issues
+ a ValueError.
fletcher32 : bool, default False
- If applying compression use the fletcher32 checksum
+ If applying compression use the fletcher32 checksum.
+ **kwargs
+ These parameters will be passed to the PyTables open_file method.
Examples
--------
@@ -482,6 +484,17 @@ class HDFStore:
>>> store['foo'] = bar # write to HDF5
>>> bar = store['foo'] # retrieve
>>> store.close()
+
+ **Create or load HDF5 file in-memory**
+
+ When passing the `driver` option to the PyTables open_file method through
+ **kwargs, the HDF5 file is loaded or created in-memory and will only be
+ written when closed:
+
+ >>> bar = pd.DataFrame(np.random.randn(10, 4))
+ >>> store = pd.HDFStore('test.h5', driver='H5FD_CORE')
+ >>> store['foo'] = bar
+ >>> store.close() # only now, data is written to disk
"""
_handle: Optional["File"]
@@ -634,6 +647,8 @@ def open(self, mode: str = "a", **kwargs):
----------
mode : {'a', 'w', 'r', 'r+'}, default 'a'
See HDFStore docstring or tables.open_file for info about modes
+ **kwargs
+ These parameters will be passed to the PyTables open_file method.
"""
tables = _tables()
| I have added to the docstring that **kwargs passes its parameters to
PyTables. Maybe this is too much, but I also added an example using **kwargs,
passing the driver parameter to create an in-memory HDFStore.
Furthermore, I have added the HDFStore class to the reference API, as it was not
autogenerated, and made it clearer that **kwargs passes its parameters
to PyTables.
Added to the cookbook the method of creating in-memory HDFStores, including
an example.
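A minimal sketch of the in-memory usage described above (assumes PyTables is installed):

``` python
import numpy as np
import pandas as pd

# 'H5FD_CORE' keeps the HDF5 file in memory until the store is closed
store = pd.HDFStore("test.h5", mode="w", driver="H5FD_CORE")
store["df"] = pd.DataFrame(np.random.randn(8, 3))
store.close()  # only now is anything written to disk
```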
- [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
closes #33166 | https://api.github.com/repos/pandas-dev/pandas/pulls/34888 | 2020-06-20T10:52:30Z | 2020-06-20T20:08:40Z | 2020-06-20T20:08:40Z | 2020-06-20T20:08:44Z |
Deprecate `center` on `df.expanding` | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index 9bd4ddbb624d9..e7f417cbe52c3 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -820,6 +820,9 @@ Deprecations
precision through the ``rtol``, and ``atol`` parameters, thus deprecating the
``check_less_precise`` parameter. (:issue:`13357`).
- :func:`DataFrame.melt` accepting a value_name that already exists is deprecated, and will be removed in a future version (:issue:`34731`)
+- the ``center`` keyword in the :meth:`DataFrame.expanding` function is deprecated and will be removed in a future version (:issue:`20647`)
+
+
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index d892e2487b31c..fa15385fed930 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -10501,8 +10501,18 @@ def rolling(
cls.rolling = rolling
@doc(Expanding)
- def expanding(self, min_periods=1, center=False, axis=0):
+ def expanding(self, min_periods=1, center=None, axis=0):
axis = self._get_axis_number(axis)
+ if center is not None:
+ warnings.warn(
+ "The `center` argument on `expanding` "
+ "will be removed in the future",
+ FutureWarning,
+ stacklevel=2,
+ )
+ else:
+ center = False
+
return Expanding(self, min_periods=min_periods, center=center, axis=axis)
cls.expanding = expanding
diff --git a/pandas/core/window/expanding.py b/pandas/core/window/expanding.py
index bbc19fad8b799..8267cd4f0971e 100644
--- a/pandas/core/window/expanding.py
+++ b/pandas/core/window/expanding.py
@@ -57,7 +57,7 @@ class Expanding(_Rolling_and_Expanding):
_attributes = ["min_periods", "center", "axis"]
- def __init__(self, obj, min_periods=1, center=False, axis=0, **kwargs):
+ def __init__(self, obj, min_periods=1, center=None, axis=0, **kwargs):
super().__init__(obj=obj, min_periods=min_periods, center=center, axis=axis)
@property
diff --git a/pandas/tests/window/moments/test_moments_consistency_expanding.py b/pandas/tests/window/moments/test_moments_consistency_expanding.py
index ee3579d76d1db..3ec91dcb60610 100644
--- a/pandas/tests/window/moments/test_moments_consistency_expanding.py
+++ b/pandas/tests/window/moments/test_moments_consistency_expanding.py
@@ -119,8 +119,8 @@ def test_expanding_corr_pairwise(frame):
ids=["sum", "mean", "max", "min"],
)
def test_expanding_func(func, static_comp, has_min_periods, series, frame, nan_locs):
- def expanding_func(x, min_periods=1, center=False, axis=0):
- exp = x.expanding(min_periods=min_periods, center=center, axis=axis)
+ def expanding_func(x, min_periods=1, axis=0):
+ exp = x.expanding(min_periods=min_periods, axis=axis)
return getattr(exp, func)()
_check_expanding(
@@ -166,7 +166,7 @@ def test_expanding_apply_consistency(
with warnings.catch_warnings():
warnings.filterwarnings(
- "ignore", message=".*(empty slice|0 for slice).*", category=RuntimeWarning,
+ "ignore", message=".*(empty slice|0 for slice).*", category=RuntimeWarning
)
# test consistency between expanding_xyz() and either (a)
# expanding_apply of Series.xyz(), or (b) expanding_apply of
@@ -267,7 +267,7 @@ def test_expanding_consistency(consistency_data, min_periods):
# with empty/0-length Series/DataFrames
with warnings.catch_warnings():
warnings.filterwarnings(
- "ignore", message=".*(empty slice|0 for slice).*", category=RuntimeWarning,
+ "ignore", message=".*(empty slice|0 for slice).*", category=RuntimeWarning
)
# test consistency between different expanding_* moments
@@ -454,7 +454,7 @@ def test_expanding_cov_pairwise_diff_length():
def test_expanding_corr_pairwise_diff_length():
# GH 7512
df1 = DataFrame(
- [[1, 2], [3, 2], [3, 4]], columns=["A", "B"], index=Index(range(3), name="bar"),
+ [[1, 2], [3, 2], [3, 4]], columns=["A", "B"], index=Index(range(3), name="bar")
)
df1a = DataFrame(
[[1, 2], [3, 4]], index=Index([0, 2], name="bar"), columns=["A", "B"]
diff --git a/pandas/tests/window/test_api.py b/pandas/tests/window/test_api.py
index 33fb79d98a324..2c3d8b4608806 100644
--- a/pandas/tests/window/test_api.py
+++ b/pandas/tests/window/test_api.py
@@ -107,10 +107,7 @@ def test_agg():
with pytest.raises(SpecificationError, match=msg):
r.aggregate(
- {
- "A": {"mean": "mean", "sum": "sum"},
- "B": {"mean2": "mean", "sum2": "sum"},
- }
+ {"A": {"mean": "mean", "sum": "sum"}, "B": {"mean2": "mean", "sum2": "sum"}}
)
result = r.aggregate({"A": ["mean", "std"], "B": ["mean", "std"]})
@@ -191,11 +188,7 @@ def test_count_nonnumeric_types():
"dt_nat",
"periods_nat",
]
- dt_nat_col = [
- Timestamp("20170101"),
- Timestamp("20170203"),
- Timestamp(None),
- ]
+ dt_nat_col = [Timestamp("20170101"), Timestamp("20170203"), Timestamp(None)]
df = DataFrame(
{
diff --git a/pandas/tests/window/test_expanding.py b/pandas/tests/window/test_expanding.py
index 30d65ebe84a1f..146eca07c523e 100644
--- a/pandas/tests/window/test_expanding.py
+++ b/pandas/tests/window/test_expanding.py
@@ -16,6 +16,9 @@ def test_doc_string():
df.expanding(2).sum()
+@pytest.mark.filterwarnings(
+ "ignore:The `center` argument on `expanding` will be removed in the future"
+)
def test_constructor(which):
# GH 12669
@@ -213,3 +216,16 @@ def test_iter_expanding_series(ser, expected, min_periods):
for (expected, actual) in zip(expected, ser.expanding(min_periods)):
tm.assert_series_equal(actual, expected)
+
+
+def test_center_deprecate_warning():
+ # GH 20647
+ df = pd.DataFrame()
+ with tm.assert_produces_warning(FutureWarning):
+ df.expanding(center=True)
+
+ with tm.assert_produces_warning(FutureWarning):
+ df.expanding(center=False)
+
+ with tm.assert_produces_warning(None):
+ df.expanding()
| `df.expanding(center=True)` currently returns non-sensical results and it is unclear what results would be expected. It was previously removed in #7934 for that same reason, but was reintroduced for unclear reasons in a later refactoring.
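A sketch of the behaviour after this change (illustrative):

``` python
import pandas as pd

df = pd.DataFrame({"a": range(5)})
df.expanding().sum()             # unchanged, no warning
df.expanding(center=True).sum()  # now emits a FutureWarning for `center`
```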
- [x] closes #20647
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/34887 | 2020-06-20T10:52:14Z | 2020-07-14T17:08:45Z | 2020-07-14T17:08:45Z | 2020-07-14T17:08:50Z |
TST: Ensure dtypes are set correctly for empty integer columns #24386 | diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index baac87755c6d2..6207b270faefc 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -2245,6 +2245,33 @@ def test_from_records_empty_with_nonempty_fields_gh3682(self):
tm.assert_index_equal(df.index, Index([], name="id"))
assert df.index.name == "id"
+ @pytest.mark.parametrize(
+ "dtype",
+ tm.ALL_INT_DTYPES
+ + tm.ALL_EA_INT_DTYPES
+ + tm.FLOAT_DTYPES
+ + tm.COMPLEX_DTYPES
+ + tm.DATETIME64_DTYPES
+ + tm.TIMEDELTA64_DTYPES
+ + tm.BOOL_DTYPES,
+ )
+ def test_check_dtype_empty_numeric_column(self, dtype):
+ # GH24386: Ensure dtypes are set correctly for an empty DataFrame.
+ # Empty DataFrame is generated via dictionary data with non-overlapping columns.
+ data = pd.DataFrame({"a": [1, 2]}, columns=["b"], dtype=dtype)
+
+ assert data.b.dtype == dtype
+
+ @pytest.mark.parametrize(
+ "dtype", tm.STRING_DTYPES + tm.BYTES_DTYPES + tm.OBJECT_DTYPES
+ )
+ def test_check_dtype_empty_string_column(self, dtype):
+ # GH24386: Ensure dtypes are set correctly for an empty DataFrame.
+ # Empty DataFrame is generated via dictionary data with non-overlapping columns.
+ data = pd.DataFrame({"a": [1, 2]}, columns=["b"], dtype=dtype)
+
+ assert data.b.dtype.name == "object"
+
def test_from_records_with_datetimes(self):
# this may fail on certain platforms because of a numpy issue
| - [x] closes #24386
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] added tests to verify whether empty integer columns are loaded in as integer columns
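A sketch of the behaviour the new tests pin down (see the parametrized cases above):

``` python
import pandas as pd

# the dict keys don't overlap the requested columns, so "b" is empty,
# but the requested dtype should still be honoured
df = pd.DataFrame({"a": [1, 2]}, columns=["b"], dtype="int64")
df["b"].dtype  # int64, not object
```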
| https://api.github.com/repos/pandas-dev/pandas/pulls/34886 | 2020-06-20T10:51:35Z | 2020-06-20T22:26:00Z | 2020-06-20T22:25:59Z | 2020-06-20T22:26:14Z |
DOC: Fixed table formatting in box plot section | diff --git a/doc/source/user_guide/visualization.rst b/doc/source/user_guide/visualization.rst
index 4cd7b9e8cecca..305221b767aff 100644
--- a/doc/source/user_guide/visualization.rst
+++ b/doc/source/user_guide/visualization.rst
@@ -443,9 +443,8 @@ Faceting, created by ``DataFrame.boxplot`` with the ``by``
keyword, will affect the output type as well:
================ ======= ==========================
-``return_type=`` Faceted Output type
----------------- ------- --------------------------
-
+``return_type`` Faceted Output type
+================ ======= ==========================
``None`` No axes
``None`` Yes 2-D ndarray of axes
``'axes'`` No axes
| [DOC] Fixed table formatting in the box plot section, keeping the simple rst table format
- [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/34885 | 2020-06-20T10:40:06Z | 2020-06-20T11:58:14Z | 2020-06-20T11:58:14Z | 2020-06-20T12:19:13Z |
DOC: fixed labels in "Plotting with error bars" | diff --git a/doc/source/user_guide/visualization.rst b/doc/source/user_guide/visualization.rst
index 4cd7b9e8cecca..228f67dadf2d9 100644
--- a/doc/source/user_guide/visualization.rst
+++ b/doc/source/user_guide/visualization.rst
@@ -1424,7 +1424,7 @@ Here is an example of one way to easily plot group means with standard deviation
# Plot
fig, ax = plt.subplots()
@savefig errorbar_example.png
- means.plot.bar(yerr=errors, ax=ax, capsize=4)
+ means.plot.bar(yerr=errors, ax=ax, capsize=4, rot=0)
.. ipython:: python
:suppress:
@@ -1445,9 +1445,9 @@ Plotting with matplotlib table is now supported in :meth:`DataFrame.plot` and :
.. ipython:: python
- fig, ax = plt.subplots(1, 1)
+ fig, ax = plt.subplots(1, 1, figsize=(7, 6.5))
df = pd.DataFrame(np.random.rand(5, 3), columns=['a', 'b', 'c'])
- ax.get_xaxis().set_visible(False) # Hide Ticks
+ ax.xaxis.tick_top() # Display x-axis ticks on top.
@savefig line_plot_table_true.png
df.plot(table=True, ax=ax)
@@ -1464,8 +1464,9 @@ as seen in the example below.
.. ipython:: python
- fig, ax = plt.subplots(1, 1)
- ax.get_xaxis().set_visible(False) # Hide Ticks
+ fig, ax = plt.subplots(1, 1, figsize=(7, 6.75))
+ ax.xaxis.tick_top() # Display x-axis ticks on top.
+
@savefig line_plot_table_data.png
df.plot(table=np.round(df.T, 2), ax=ax)
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
x-labels should now be visible in the plot. | https://api.github.com/repos/pandas-dev/pandas/pulls/34884 | 2020-06-20T10:35:51Z | 2020-06-20T18:26:40Z | 2020-06-20T18:26:40Z | 2020-06-20T18:26:53Z |
TST: Feather RoundTrip Column Ordering | diff --git a/pandas/tests/io/test_feather.py b/pandas/tests/io/test_feather.py
index e59100146249a..a8a5c8f00e6bf 100644
--- a/pandas/tests/io/test_feather.py
+++ b/pandas/tests/io/test_feather.py
@@ -115,6 +115,12 @@ def test_read_columns(self):
columns = ["col1", "col3"]
self.check_round_trip(df, expected=df[columns], columns=columns)
+ @td.skip_if_no("pyarrow", min_version="0.17.1")
+    def test_read_columns_different_order(self):
+ # GH 33878
+ df = pd.DataFrame({"A": [1, 2], "B": ["x", "y"], "C": [True, False]})
+ self.check_round_trip(df, columns=["B", "A"])
+
def test_unsupported_other(self):
# mixed python objects
| Looks like this is fixed as of pyarrow 0.17.1 -> added a test.
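Illustrative round trip (the file name is arbitrary):

``` python
import pandas as pd

df = pd.DataFrame({"A": [1, 2], "B": ["x", "y"], "C": [True, False]})
df.to_feather("tmp.feather")
# requesting columns in a different order works on pyarrow >= 0.17.1
pd.read_feather("tmp.feather", columns=["B", "A"])
```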
- [x] closes #33878
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/34883 | 2020-06-20T09:44:58Z | 2020-06-20T12:30:43Z | 2020-06-20T12:30:43Z | 2020-06-20T12:30:47Z |
DOC: Fix validation issues with Index.is_ docstring | diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 057adceda7efd..b12a556a8291d 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -518,7 +518,12 @@ def is_(self, other) -> bool:
Returns
-------
- True if both have same underlying data, False otherwise : bool
+ bool
+ True if both have same underlying data, False otherwise.
+
+ See Also
+ --------
+ Index.identical : Works like ``Index.is_`` but also checks metadata.
"""
# use something other than None to be clearer
return self._id is getattr(other, "_id", Ellipsis) and self._id is not None
| - [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
Fix validation issues with the `Index.is_` docstring and add `Index.identical` to the See Also section. Trying to finish what was started in #32357 | https://api.github.com/repos/pandas-dev/pandas/pulls/34882 | 2020-06-20T09:19:57Z | 2020-06-20T11:50:50Z | 2020-06-20T11:50:50Z | 2020-06-20T12:19:17Z |
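A quick sketch of the `Index.is_` vs `Index.identical` distinction the new See Also entry points at:

``` python
import pandas as pd

a = pd.Index([1, 2, 3])
b = pd.Index([1, 2, 3])
a.is_(a)        # True: same underlying data
a.is_(b)        # False: equal values but different underlying data
a.identical(b)  # True: compares values and metadata instead
```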
TST: Add test to verify align behaviour on CategoricalIndex | diff --git a/pandas/tests/frame/methods/test_align.py b/pandas/tests/frame/methods/test_align.py
index 5dae719283d17..d19b59debfdea 100644
--- a/pandas/tests/frame/methods/test_align.py
+++ b/pandas/tests/frame/methods/test_align.py
@@ -129,6 +129,39 @@ def test_align_mixed_int(self, mixed_int_frame):
)
tm.assert_index_equal(bf.index, Index([]))
+ @pytest.mark.parametrize(
+ "l_ordered,r_ordered,expected",
+ [
+ [True, True, pd.CategoricalIndex],
+ [True, False, pd.Index],
+ [False, True, pd.Index],
+ [False, False, pd.CategoricalIndex],
+ ],
+ )
+ def test_align_categorical(self, l_ordered, r_ordered, expected):
+ # GH-28397
+ df_1 = DataFrame(
+ {
+ "A": np.arange(6, dtype="int64"),
+ "B": Series(list("aabbca")).astype(
+ pd.CategoricalDtype(list("cab"), ordered=l_ordered)
+ ),
+ }
+ ).set_index("B")
+ df_2 = DataFrame(
+ {
+ "A": np.arange(5, dtype="int64"),
+ "B": Series(list("babca")).astype(
+ pd.CategoricalDtype(list("cab"), ordered=r_ordered)
+ ),
+ }
+ ).set_index("B")
+
+ aligned_1, aligned_2 = df_1.align(df_2)
+ assert isinstance(aligned_1.index, expected)
+ assert isinstance(aligned_2.index, expected)
+ tm.assert_index_equal(aligned_1.index, aligned_2.index)
+
def test_align_multiindex(self):
# GH#10665
# same test cases as test_align_multiindex in test_series.py
| Verify that aligning two DataFrames with a `CategoricalIndex` does not change the type of the index.
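An illustrative sketch of the behaviour under test:

``` python
import pandas as pd

idx1 = pd.CategoricalIndex(list("aabbca"), categories=list("cab"), ordered=True)
idx2 = pd.CategoricalIndex(list("babca"), categories=list("cab"), ordered=True)
df1 = pd.DataFrame({"A": range(6)}, index=idx1)
df2 = pd.DataFrame({"A": range(5)}, index=idx2)
left, right = df1.align(df2)
type(left.index)  # stays CategoricalIndex when both orderings match
```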
- [x] closes #28397
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry (not applicable?)
| https://api.github.com/repos/pandas-dev/pandas/pulls/34880 | 2020-06-20T08:53:23Z | 2020-07-06T23:27:27Z | 2020-07-06T23:27:27Z | 2020-07-06T23:27:32Z |
TST: Add test to verify 'dropna' behaviour on SparseArray | diff --git a/pandas/tests/arrays/sparse/test_array.py b/pandas/tests/arrays/sparse/test_array.py
index 2f2907fbaaebc..d0cdec712f39d 100644
--- a/pandas/tests/arrays/sparse/test_array.py
+++ b/pandas/tests/arrays/sparse/test_array.py
@@ -1295,3 +1295,15 @@ def test_map_missing():
result = arr.map({0: 10, 1: 11})
tm.assert_sp_array_equal(result, expected)
+
+
+@pytest.mark.parametrize("fill_value", [np.nan, 1])
+def test_dropna(fill_value):
+ # GH-28287
+ arr = SparseArray([np.nan, 1], fill_value=fill_value)
+ exp = SparseArray([1.0], fill_value=fill_value)
+ tm.assert_sp_array_equal(arr.dropna(), exp)
+
+ df = pd.DataFrame({"a": [0, 1], "b": arr})
+ expected_df = pd.DataFrame({"a": [1], "b": exp}, index=pd.Int64Index([1]))
+ tm.assert_equal(df.dropna(), expected_df)
| Verify that calling `dropna` on a `pd.SparseArray` does not inadvertently drop non-NA records, both on DataFrames and on the SparseArray itself.
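A sketch of the expected behaviour:

``` python
import numpy as np
import pandas as pd

arr = pd.arrays.SparseArray([np.nan, 1.0], fill_value=1)
arr.dropna()  # keeps the non-NA entry 1.0 and preserves fill_value
```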
- [x] closes #28287
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry (not applicable?)
| https://api.github.com/repos/pandas-dev/pandas/pulls/34879 | 2020-06-20T08:38:43Z | 2020-06-20T13:35:18Z | 2020-06-20T13:35:18Z | 2020-06-20T15:08:10Z |
BUG/TST: Read from Public s3 Bucket Without Creds | diff --git a/pandas/io/common.py b/pandas/io/common.py
index 51323c5ff3ef5..32ec088f00d88 100644
--- a/pandas/io/common.py
+++ b/pandas/io/common.py
@@ -202,9 +202,37 @@ def get_filepath_or_buffer(
filepath_or_buffer = filepath_or_buffer.replace("s3n://", "s3://")
fsspec = import_optional_dependency("fsspec")
- file_obj = fsspec.open(
- filepath_or_buffer, mode=mode or "rb", **(storage_options or {})
- ).open()
+ # If botocore is installed we fallback to reading with anon=True
+ # to allow reads from public buckets
+ err_types_to_retry_with_anon: List[Any] = []
+ try:
+ import_optional_dependency("botocore")
+ from botocore.exceptions import ClientError, NoCredentialsError
+
+ err_types_to_retry_with_anon = [
+ ClientError,
+ NoCredentialsError,
+ PermissionError,
+ ]
+ except ImportError:
+ pass
+
+ try:
+ file_obj = fsspec.open(
+ filepath_or_buffer, mode=mode or "rb", **(storage_options or {})
+ ).open()
+ # GH 34626 Reads from Public Buckets without Credentials needs anon=True
+ except tuple(err_types_to_retry_with_anon):
+ if storage_options is None:
+ storage_options = {"anon": True}
+ else:
+ # don't mutate user input.
+ storage_options = dict(storage_options)
+ storage_options["anon"] = True
+ file_obj = fsspec.open(
+ filepath_or_buffer, mode=mode or "rb", **(storage_options or {})
+ ).open()
+
return file_obj, encoding, compression, True
if isinstance(filepath_or_buffer, (str, bytes, mmap.mmap)):
diff --git a/pandas/tests/io/test_s3.py b/pandas/tests/io/test_s3.py
index a76be9465f62a..5e0f7edf4d8ae 100644
--- a/pandas/tests/io/test_s3.py
+++ b/pandas/tests/io/test_s3.py
@@ -1,8 +1,12 @@
from io import BytesIO
+import os
import pytest
+import pandas.util._test_decorators as td
+
from pandas import read_csv
+import pandas._testing as tm
def test_streaming_s3_objects():
@@ -15,3 +19,30 @@ def test_streaming_s3_objects():
for el in data:
body = StreamingBody(BytesIO(el), content_length=len(el))
read_csv(body)
+
+
+@tm.network
+@td.skip_if_no("s3fs")
+def test_read_without_creds_from_pub_bucket():
+ # GH 34626
+ # Use Amazon Open Data Registry - https://registry.opendata.aws/gdelt
+ result = read_csv("s3://gdelt-open-data/events/1981.csv", nrows=3)
+ assert len(result) == 3
+
+
+@tm.network
+@td.skip_if_no("s3fs")
+def test_read_with_creds_from_pub_bucket():
+ # Ensure we can read from a public bucket with credentials
+ # GH 34626
+ # Use Amazon Open Data Registry - https://registry.opendata.aws/gdelt
+
+ with tm.ensure_safe_environment_variables():
+ # temporary workaround as moto fails for botocore >= 1.11 otherwise,
+ # see https://github.com/spulec/moto/issues/1924 & 1952
+ os.environ.setdefault("AWS_ACCESS_KEY_ID", "foobar_key")
+ os.environ.setdefault("AWS_SECRET_ACCESS_KEY", "foobar_secret")
+ df = read_csv(
+ "s3://gdelt-open-data/events/1981.csv", nrows=5, sep="\t", header=None,
+ )
+ assert len(df) == 5
| - [x] Closes #34626
We should potentially merge this before https://github.com/pandas-dev/pandas/pull/34266 to confirm reading from s3 without credentials works.
Ref discussion here: https://github.com/pandas-dev/pandas/pull/34793#issuecomment-645023794
cc. @jorisvandenbossche | https://api.github.com/repos/pandas-dev/pandas/pulls/34877 | 2020-06-19T23:07:29Z | 2020-07-15T19:07:53Z | 2020-07-15T19:07:52Z | 2020-07-15T19:07:56Z |
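A sketch of the read this fallback enables (requires network access and `s3fs`):

``` python
import pandas as pd

# public bucket, no AWS credentials configured: the anon=True retry applies
df = pd.read_csv("s3://gdelt-open-data/events/1981.csv", nrows=3)
```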
DF.__setitem__ creates extension column when given extension scalar | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index 798a3d838ef7e..9795e58995ab5 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -1144,6 +1144,7 @@ ExtensionArray
- Fixed bug where :meth:`StringArray.memory_usage` was not implemented (:issue:`33963`)
- Fixed bug where :meth:`DataFrameGroupBy` would ignore the ``min_count`` argument for aggregations on nullable boolean dtypes (:issue:`34051`)
- Fixed bug that `DataFrame(columns=.., dtype='string')` would fail (:issue:`27953`, :issue:`33623`)
+- Bug where :class:`DataFrame` column set to scalar extension type was considered an object type rather than the extension type (:issue:`34832`)
- Fixed bug in ``IntegerArray.astype`` to correctly copy the mask as well (:issue:`34931`).
Other
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 3d2200cb45c6e..bc09761803b75 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -519,25 +519,43 @@ def __init__(
mgr = init_ndarray(data, index, columns, dtype=dtype, copy=copy)
else:
mgr = init_dict({}, index, columns, dtype=dtype)
+ # For data is scalar
else:
- try:
- arr = np.array(data, dtype=dtype, copy=copy)
- except (ValueError, TypeError) as err:
- exc = TypeError(
- "DataFrame constructor called with "
- f"incompatible data and dtype: {err}"
- )
- raise exc from err
+ if index is None or columns is None:
+ raise ValueError("DataFrame constructor not properly called!")
+
+ if not dtype:
+ dtype, _ = infer_dtype_from_scalar(data, pandas_dtype=True)
+
+ # For data is a scalar extension dtype
+ if is_extension_array_dtype(dtype):
+
+ values = [
+ construct_1d_arraylike_from_scalar(data, len(index), dtype)
+ for _ in range(len(columns))
+ ]
+ mgr = arrays_to_mgr(values, columns, index, columns, dtype=None)
+ else:
+ # Attempt to coerce to a numpy array
+ try:
+ arr = np.array(data, dtype=dtype, copy=copy)
+ except (ValueError, TypeError) as err:
+ exc = TypeError(
+ "DataFrame constructor called with "
+ f"incompatible data and dtype: {err}"
+ )
+ raise exc from err
+
+ if arr.ndim != 0:
+ raise ValueError("DataFrame constructor not properly called!")
- if arr.ndim == 0 and index is not None and columns is not None:
values = cast_scalar_to_array(
(len(index), len(columns)), data, dtype=dtype
)
+
mgr = init_ndarray(
values, index, columns, dtype=values.dtype, copy=False
)
- else:
- raise ValueError("DataFrame constructor not properly called!")
NDFrame.__init__(self, mgr)
@@ -3739,7 +3757,13 @@ def reindexer(value):
infer_dtype, _ = infer_dtype_from_scalar(value, pandas_dtype=True)
# upcast
- value = cast_scalar_to_array(len(self.index), value)
+ if is_extension_array_dtype(infer_dtype):
+ value = construct_1d_arraylike_from_scalar(
+ value, len(self.index), infer_dtype
+ )
+ else:
+ value = cast_scalar_to_array(len(self.index), value)
+
value = maybe_cast_to_datetime(value, infer_dtype)
# return internal types directly
diff --git a/pandas/tests/frame/indexing/test_setitem.py b/pandas/tests/frame/indexing/test_setitem.py
index 8fcdae95fbab5..9bb5338f1e07f 100644
--- a/pandas/tests/frame/indexing/test_setitem.py
+++ b/pandas/tests/frame/indexing/test_setitem.py
@@ -1,7 +1,18 @@
import numpy as np
import pytest
-from pandas import Categorical, DataFrame, Index, Series, Timestamp, date_range
+from pandas.core.dtypes.dtypes import DatetimeTZDtype, IntervalDtype, PeriodDtype
+
+from pandas import (
+ Categorical,
+ DataFrame,
+ Index,
+ Interval,
+ Period,
+ Series,
+ Timestamp,
+ date_range,
+)
import pandas._testing as tm
from pandas.core.arrays import SparseArray
@@ -150,3 +161,23 @@ def test_setitem_dict_preserves_dtypes(self):
"c": float(b),
}
tm.assert_frame_equal(df, expected)
+
+ @pytest.mark.parametrize(
+ "obj,dtype",
+ [
+ (Period("2020-01"), PeriodDtype("M")),
+ (Interval(left=0, right=5), IntervalDtype("int64")),
+ (
+ Timestamp("2011-01-01", tz="US/Eastern"),
+ DatetimeTZDtype(tz="US/Eastern"),
+ ),
+ ],
+ )
+ def test_setitem_extension_types(self, obj, dtype):
+ # GH: 34832
+ expected = DataFrame({"idx": [1, 2, 3], "obj": Series([obj] * 3, dtype=dtype)})
+
+ df = DataFrame({"idx": [1, 2, 3]})
+ df["obj"] = obj
+
+ tm.assert_frame_equal(df, expected)
diff --git a/pandas/tests/frame/methods/test_combine_first.py b/pandas/tests/frame/methods/test_combine_first.py
index 7715cb1cb6eec..78f265d32f8df 100644
--- a/pandas/tests/frame/methods/test_combine_first.py
+++ b/pandas/tests/frame/methods/test_combine_first.py
@@ -199,12 +199,14 @@ def test_combine_first_timezone(self):
columns=["UTCdatetime", "abc"],
data=data1,
index=pd.date_range("20140627", periods=1),
+ dtype="object",
)
data2 = pd.to_datetime("20121212 12:12").tz_localize("UTC")
df2 = pd.DataFrame(
columns=["UTCdatetime", "xyz"],
data=data2,
index=pd.date_range("20140628", periods=1),
+ dtype="object",
)
res = df2[["UTCdatetime"]].combine_first(df1)
exp = pd.DataFrame(
@@ -217,10 +219,14 @@ def test_combine_first_timezone(self):
},
columns=["UTCdatetime", "abc"],
index=pd.date_range("20140627", periods=2, freq="D"),
+ dtype="object",
)
- tm.assert_frame_equal(res, exp)
assert res["UTCdatetime"].dtype == "datetime64[ns, UTC]"
assert res["abc"].dtype == "datetime64[ns, UTC]"
+    # Need to cast all to "object" because combine_first does not retain dtypes:
+ # GH Issue 7509
+ res = res.astype("object")
+ tm.assert_frame_equal(res, exp)
# see gh-10567
dts1 = pd.date_range("2015-01-01", "2015-01-05", tz="UTC")
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index ab4f7781467e7..64ae29e6de63c 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -14,13 +14,16 @@
from pandas.compat.numpy import _np_version_under1p19
from pandas.core.dtypes.common import is_integer_dtype
+from pandas.core.dtypes.dtypes import DatetimeTZDtype, IntervalDtype, PeriodDtype
import pandas as pd
from pandas import (
Categorical,
DataFrame,
Index,
+ Interval,
MultiIndex,
+ Period,
RangeIndex,
Series,
Timedelta,
@@ -700,7 +703,7 @@ def create_data(constructor):
tm.assert_frame_equal(result_timedelta, expected)
tm.assert_frame_equal(result_Timedelta, expected)
- def test_constructor_period(self):
+ def test_constructor_period_dict(self):
# PeriodIndex
a = pd.PeriodIndex(["2012-01", "NaT", "2012-04"], freq="M")
b = pd.PeriodIndex(["2012-02-01", "2012-03-01", "NaT"], freq="D")
@@ -713,6 +716,29 @@ def test_constructor_period(self):
assert df["a"].dtype == a.dtype
assert df["b"].dtype == b.dtype
+ @pytest.mark.parametrize(
+ "data,dtype",
+ [
+ (Period("2020-01"), PeriodDtype("M")),
+ (Interval(left=0, right=5), IntervalDtype("int64")),
+ (
+ Timestamp("2011-01-01", tz="US/Eastern"),
+ DatetimeTZDtype(tz="US/Eastern"),
+ ),
+ ],
+ )
+ def test_constructor_extension_scalar_data(self, data, dtype):
+ # GH 34832
+ df = DataFrame(index=[0, 1], columns=["a", "b"], data=data)
+
+ assert df["a"].dtype == dtype
+ assert df["b"].dtype == dtype
+
+ arr = pd.array([data] * 2, dtype=dtype)
+ expected = DataFrame({"a": arr, "b": arr})
+
+ tm.assert_frame_equal(df, expected)
+
def test_nested_dict_frame_constructor(self):
rng = pd.period_range("1/1/2000", periods=5)
df = DataFrame(np.random.randn(10, 5), columns=rng)
| This PR is in response to [Issue 34832](https://github.com/pandas-dev/pandas/issues/34832).
These changes follow the suggestions from reviewers on the PR to fix an issue where `df['a']=pd.Period('2020-01')` would create an object column instead of a `period[M]` column.
One potential issue that I see with these changes is that this code requires the `shape` parameter passed into `cast_scalar_to_array` to be an int, even though the function's docstring claims that it accepts a tuple.
I'm happy to work on resolving that issue, but first wanted to clarify that it is necessary. Based on my perspective, there would be no reason to call this function to create a multi-dimensional array, but I surely don't understand the full interconnectivity of this package.
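For illustration, the behaviour this fixes:

``` python
import pandas as pd

df = pd.DataFrame({"idx": [1, 2, 3]})
df["obj"] = pd.Period("2020-01")
df["obj"].dtype  # period[M] with this change; previously object
```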
- [x] closes #xxxx
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/34875 | 2020-06-19T18:54:51Z | 2020-07-11T02:02:55Z | 2020-07-11T02:02:55Z | 2020-07-13T13:01:58Z |
PERF: to speed up rendering of styler | diff --git a/asv_bench/asv.conf.json b/asv_bench/asv.conf.json
index 7c10a2d17775a..4583fac85b776 100644
--- a/asv_bench/asv.conf.json
+++ b/asv_bench/asv.conf.json
@@ -53,6 +53,7 @@
"xlwt": [],
"odfpy": [],
"pytest": [],
+ "jinja2": [],
// If using Windows with python 2.7 and want to build using the
// mingw toolchain (rather than MSVC), uncomment the following line.
// "libpython": [],
diff --git a/asv_bench/benchmarks/io/style.py b/asv_bench/benchmarks/io/style.py
new file mode 100644
index 0000000000000..4fc07bbabda06
--- /dev/null
+++ b/asv_bench/benchmarks/io/style.py
@@ -0,0 +1,34 @@
+import numpy as np
+
+from pandas import DataFrame
+
+
+class RenderApply:
+
+ params = [[12, 24, 36], [12, 120]]
+ param_names = ["cols", "rows"]
+
+ def setup(self, cols, rows):
+ self.df = DataFrame(
+ np.random.randn(rows, cols),
+ columns=[f"float_{i+1}" for i in range(cols)],
+ index=[f"row_{i+1}" for i in range(rows)],
+ )
+ self._style_apply()
+
+ def time_render(self, cols, rows):
+ self.st.render()
+
+ def peakmem_apply(self, cols, rows):
+ self._style_apply()
+
+ def peakmem_render(self, cols, rows):
+ self.st.render()
+
+ def _style_apply(self):
+ def _apply_func(s):
+ return [
+ "background-color: lightcyan" if s.name == "row_1" else "" for v in s
+ ]
+
+ self.st = self.df.style.apply(_apply_func, axis=1)
diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index 14d1e1b49a726..8b8c17689e136 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -806,6 +806,7 @@ Performance improvements
- Performance improvement in :class:`pandas.core.groupby.RollingGroupby` (:issue:`34052`)
- Performance improvement in arithmetic operations (sub, add, mul, div) for MultiIndex (:issue:`34297`)
- Performance improvement in `DataFrame[bool_indexer]` when `bool_indexer` is a list (:issue:`33924`)
+- Significant performance improvement of :meth:`io.formats.style.Styler.render` with styles added with various ways such as :meth:`io.formats.style.Styler.apply`, :meth:`io.formats.style.Styler.applymap` or :meth:`io.formats.style.Styler.bar` (:issue:`19917`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index f7ba4750bc2ad..8405c8ae8bee5 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -561,11 +561,19 @@ def _update_ctx(self, attrs: DataFrame) -> None:
Whitespace shouldn't matter and the final trailing ';' shouldn't
matter.
"""
- for row_label, v in attrs.iterrows():
- for col_label, col in v.items():
- i = self.index.get_indexer([row_label])[0]
- j = self.columns.get_indexer([col_label])[0]
- for pair in col.rstrip(";").split(";"):
+ coli = {k: i for i, k in enumerate(self.columns)}
+ rowi = {k: i for i, k in enumerate(self.index)}
+ for jj in range(len(attrs.columns)):
+ cn = attrs.columns[jj]
+ j = coli[cn]
+ for rn, c in attrs[[cn]].itertuples():
+ if not c:
+ continue
+ c = c.rstrip(";")
+ if not c:
+ continue
+ i = rowi[rn]
+ for pair in c.split(";"):
self.ctx[(i, j)].append(pair)
def _copy(self, deepcopy: bool = False) -> "Styler":
diff --git a/pandas/tests/io/formats/test_style.py b/pandas/tests/io/formats/test_style.py
index ec4614538004c..9c6910637fa7e 100644
--- a/pandas/tests/io/formats/test_style.py
+++ b/pandas/tests/io/formats/test_style.py
@@ -405,9 +405,10 @@ def f(x):
result = self.df.style.where(f, style1)._compute().ctx
expected = {
- (r, c): [style1 if f(self.df.loc[row, col]) else ""]
+ (r, c): [style1]
for r, row in enumerate(self.df.index)
for c, col in enumerate(self.df.columns)
+ if f(self.df.loc[row, col])
}
assert result == expected
@@ -966,7 +967,6 @@ def test_bar_align_mid_nans(self):
"transparent 25.0%, #d65f5f 25.0%, "
"#d65f5f 50.0%, transparent 50.0%)",
],
- (1, 0): [""],
(0, 1): [
"width: 10em",
" height: 80%",
@@ -994,7 +994,6 @@ def test_bar_align_zero_nans(self):
"transparent 50.0%, #d65f5f 50.0%, "
"#d65f5f 75.0%, transparent 75.0%)",
],
- (1, 0): [""],
(0, 1): [
"width: 10em",
" height: 80%",
@@ -1091,7 +1090,7 @@ def test_format_with_bad_na_rep(self):
def test_highlight_null(self, null_color="red"):
df = pd.DataFrame({"A": [0, np.nan]})
result = df.style.highlight_null()._compute().ctx
- expected = {(0, 0): [""], (1, 0): ["background-color: red"]}
+ expected = {(1, 0): ["background-color: red"]}
assert result == expected
def test_highlight_null_subset(self):
@@ -1104,9 +1103,7 @@ def test_highlight_null_subset(self):
.ctx
)
expected = {
- (0, 0): [""],
(1, 0): ["background-color: red"],
- (0, 1): [""],
(1, 1): ["background-color: green"],
}
assert result == expected
@@ -1219,8 +1216,6 @@ def test_highlight_max(self):
expected = {
(1, 0): ["background-color: yellow"],
(1, 1): ["background-color: yellow"],
- (0, 1): [""],
- (0, 0): [""],
}
assert result == expected
@@ -1228,8 +1223,6 @@ def test_highlight_max(self):
expected = {
(0, 1): ["background-color: yellow"],
(1, 1): ["background-color: yellow"],
- (0, 0): [""],
- (1, 0): [""],
}
assert result == expected
| see https://github.com/pandas-dev/pandas/issues/19917#issuecomment-646209895
- [x] closes #19917
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
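For reference, a sketch of the workload the new ASV benchmark times (sizes are illustrative):

``` python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(120, 12))
styler = df.style.apply(
    lambda s: ["background-color: lightcyan" if s.name == 0 else "" for _ in s],
    axis=1,
)
html = styler.render()  # exercises the optimized _update_ctx path
```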
| https://api.github.com/repos/pandas-dev/pandas/pulls/34863 | 2020-06-18T18:59:20Z | 2020-07-09T23:35:58Z | 2020-07-09T23:35:58Z | 2020-07-09T23:36:05Z |
DOC: Fix syntax in 1.1.0 whatsnew | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index 10522ff797c59..6b11ddc8149e6 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -292,7 +292,7 @@ Other enhancements
- :meth:`DataFrame.to_csv` and :meth:`Series.to_csv` now accept an ``errors`` argument (:issue:`22610`)
- :meth:`groupby.transform` now allows ``func`` to be ``pad``, ``backfill`` and ``cumcount`` (:issue:`31269`).
- :meth:`~pandas.io.json.read_json` now accepts `nrows` parameter. (:issue:`33916`).
-- :meth `~pandas.io.gbq.read_gbq` now allows to disable progress bar (:issue:`33360`).
+- :meth:`~pandas.io.gbq.read_gbq` now allows to disable progress bar (:issue:`33360`).
- :meth:`~pandas.io.gbq.read_gbq` now supports the ``max_results`` kwarg from ``pandas-gbq`` (:issue:`34639`).
.. ---------------------------------------------------------------------------
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/34856 | 2020-06-18T05:37:34Z | 2020-06-18T07:22:44Z | 2020-06-18T07:22:44Z | 2020-06-20T11:08:40Z |
DEPR: to_perioddelta | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index 10522ff797c59..2c792d5f10b00 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -776,6 +776,7 @@ Deprecations
instead (:issue:`34191`).
- The ``squeeze`` keyword in the ``groupby`` function is deprecated and will be removed in a future version (:issue:`32380`)
- The ``tz`` keyword in :meth:`Period.to_timestamp` is deprecated and will be removed in a future version; use `per.to_timestamp(...).tz_localize(tz)`` instead (:issue:`34522`)
+- :meth:`DatetimeIndex.to_perioddelta` is deprecated and will be removed in a future version. Use ``index - index.to_period(freq).to_timestamp()`` instead (:issue:`34853`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index b6c27abc321e1..461f71ff821fa 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -1125,7 +1125,14 @@ def to_perioddelta(self, freq):
-------
TimedeltaArray/Index
"""
- # TODO: consider privatizing (discussion in GH#23113)
+        # Deprecation GH#34853
+ warnings.warn(
+ "to_perioddelta is deprecated and will be removed in a "
+ "future version. "
+ "Use `dtindex - dtindex.to_period(freq).to_timestamp()` instead",
+ FutureWarning,
+ stacklevel=3,
+ )
from pandas.core.arrays.timedeltas import TimedeltaArray
i8delta = self.asi8 - self.to_period(freq).to_timestamp().asi8
diff --git a/pandas/tests/arrays/test_datetimelike.py b/pandas/tests/arrays/test_datetimelike.py
index 1a61b379de943..b1ab700427c28 100644
--- a/pandas/tests/arrays/test_datetimelike.py
+++ b/pandas/tests/arrays/test_datetimelike.py
@@ -511,8 +511,13 @@ def test_to_perioddelta(self, datetime_index, freqstr):
dti = datetime_index
arr = DatetimeArray(dti)
- expected = dti.to_perioddelta(freq=freqstr)
- result = arr.to_perioddelta(freq=freqstr)
+ with tm.assert_produces_warning(FutureWarning):
+ # Deprecation GH#34853
+ expected = dti.to_perioddelta(freq=freqstr)
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ # stacklevel is chosen to be "correct" for DatetimeIndex, not
+ # DatetimeArray
+ result = arr.to_perioddelta(freq=freqstr)
assert isinstance(result, TimedeltaArray)
# placeholder until these become actual EA subclasses and we can use
| - [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
The only usage was in liboffsets and that has now been removed. | https://api.github.com/repos/pandas-dev/pandas/pulls/34853 | 2020-06-17T15:19:24Z | 2020-06-18T16:33:53Z | 2020-06-18T16:33:53Z | 2020-06-18T16:47:15Z |
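A sketch of the replacement recommended in the deprecation message:

``` python
import pandas as pd

dti = pd.date_range("2020-01-01", periods=3, freq="H")
# equivalent to the deprecated dti.to_perioddelta("D")
tdi = dti - dti.to_period("D").to_timestamp()
```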
base/test_unique.py: regression test for bad unicode string | diff --git a/pandas/tests/base/test_unique.py b/pandas/tests/base/test_unique.py
index 8cf234012d02f..e5592cef59592 100644
--- a/pandas/tests/base/test_unique.py
+++ b/pandas/tests/base/test_unique.py
@@ -105,3 +105,19 @@ def test_nunique_null(null_obj, index_or_series_obj):
num_unique_values = len(obj.unique())
assert obj.nunique() == max(0, num_unique_values - 1)
assert obj.nunique(dropna=False) == max(0, num_unique_values)
+
+
+@pytest.mark.parametrize(
+ "idx_or_series_w_bad_unicode", [pd.Index(["\ud83d"] * 2), pd.Series(["\ud83d"] * 2)]
+)
+def test_unique_bad_unicode(idx_or_series_w_bad_unicode):
+ # regression test for #34550
+ obj = idx_or_series_w_bad_unicode
+ result = obj.unique()
+
+ if isinstance(obj, pd.Index):
+ expected = pd.Index(["\ud83d"], dtype=object)
+ tm.assert_index_equal(result, expected)
+ else:
+ expected = np.array(["\ud83d"], dtype=object)
+ tm.assert_numpy_array_equal(result, expected)
| - [x] closes #34550
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
I ran the tests with pandas 1.0.4, where it segfaults as expected (shown below), and `master` passes the test.
```
tests/test_mytest.py::test_unique2[idx_or_series_w_bad_unicode0] Fatal Python error: Segmentation fault
Current thread 0x00007f09d2dc4740 (most recent call first):
File "/opt/conda/envs/py38/lib/python3.8/site-packages/pandas/core/algorithms.py", line 382 in unique
File "/opt/conda/envs/py38/lib/python3.8/site-packages/pandas/core/base.py", line 1246 in unique
File "/opt/conda/envs/py38/lib/python3.8/site-packages/pandas/core/indexes/base.py", line 1974 in unique
File "/home/suvali/tmp/pandas-tests/tests/test_mytest.py", line 16 in test_unique2
File "/opt/conda/envs/py38/lib/python3.8/site-packages/_pytest/python.py", line 182 in pytest_pyfunc_call
File "/opt/conda/envs/py38/lib/python3.8/site-packages/pluggy/callers.py", line 187 in _multicall
File "/opt/conda/envs/py38/lib/python3.8/site-packages/pluggy/manager.py", line 84 in <lambda>
File "/opt/conda/envs/py38/lib/python3.8/site-packages/pluggy/manager.py", line 93 in _hookexec
File "/opt/conda/envs/py38/lib/python3.8/site-packages/pluggy/hooks.py", line 286 in __call__
File "/opt/conda/envs/py38/lib/python3.8/site-packages/_pytest/python.py", line 1477 in runtest
File "/opt/conda/envs/py38/lib/python3.8/site-packages/_pytest/runner.py", line 135 in pytest_runtest_call
File "/opt/conda/envs/py38/lib/python3.8/site-packages/pluggy/callers.py", line 187 in _multicall
File "/opt/conda/envs/py38/lib/python3.8/site-packages/pluggy/manager.py", line 84 in <lambda>
File "/opt/conda/envs/py38/lib/python3.8/site-packages/pluggy/manager.py", line 93 in _hookexec
File "/opt/conda/envs/py38/lib/python3.8/site-packages/pluggy/hooks.py", line 286 in __call__
File "/opt/conda/envs/py38/lib/python3.8/site-packages/_pytest/runner.py", line 217 in <lambda>
File "/opt/conda/envs/py38/lib/python3.8/site-packages/_pytest/runner.py", line 244 in from_call
File "/opt/conda/envs/py38/lib/python3.8/site-packages/_pytest/runner.py", line 216 in call_runtest_hook
File "/opt/conda/envs/py38/lib/python3.8/site-packages/_pytest/runner.py", line 186 in call_and_report
File "/opt/conda/envs/py38/lib/python3.8/site-packages/_pytest/runner.py", line 100 in runtestprotocol
File "/opt/conda/envs/py38/lib/python3.8/site-packages/_pytest/runner.py", line 85 in pytest_runtest_protocol
File "/opt/conda/envs/py38/lib/python3.8/site-packages/pluggy/callers.py", line 187 in _multicall
File "/opt/conda/envs/py38/lib/python3.8/site-packages/pluggy/manager.py", line 84 in <lambda>
File "/opt/conda/envs/py38/lib/python3.8/site-packages/pluggy/manager.py", line 93 in _hookexec
File "/opt/conda/envs/py38/lib/python3.8/site-packages/pluggy/hooks.py", line 286 in __call__
File "/opt/conda/envs/py38/lib/python3.8/site-packages/_pytest/main.py", line 272 in pytest_runtestloop
File "/opt/conda/envs/py38/lib/python3.8/site-packages/pluggy/callers.py", line 187 in _multicall
File "/opt/conda/envs/py38/lib/python3.8/site-packages/pluggy/manager.py", line 84 in <lambda>
File "/opt/conda/envs/py38/lib/python3.8/site-packages/pluggy/manager.py", line 93 in _hookexec
File "/opt/conda/envs/py38/lib/python3.8/site-packages/pluggy/hooks.py", line 286 in __call__
File "/opt/conda/envs/py38/lib/python3.8/site-packages/_pytest/main.py", line 247 in _main
File "/opt/conda/envs/py38/lib/python3.8/site-packages/_pytest/main.py", line 191 in wrap_session
File "/opt/conda/envs/py38/lib/python3.8/site-packages/_pytest/main.py", line 240 in pytest_cmdline_main
File "/opt/conda/envs/py38/lib/python3.8/site-packages/pluggy/callers.py", line 187 in _multicall
File "/opt/conda/envs/py38/lib/python3.8/site-packages/pluggy/manager.py", line 84 in <lambda>
File "/opt/conda/envs/py38/lib/python3.8/site-packages/pluggy/manager.py", line 93 in _hookexec
File "/opt/conda/envs/py38/lib/python3.8/site-packages/pluggy/hooks.py", line 286 in __call__
File "/opt/conda/envs/py38/lib/python3.8/site-packages/_pytest/config/__init__.py", line 124 in main
File "/opt/conda/envs/py38/bin/pytest", line 11 in <module>
Segmentation fault (core dumped)
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/34851 | 2020-06-17T13:27:02Z | 2020-06-18T23:08:41Z | 2020-06-18T23:08:41Z | 2020-06-18T23:08:46Z |
Remove redundant lists in array.py | diff --git a/pandas/tests/extension/json/array.py b/pandas/tests/extension/json/array.py
index 3132b39a7d6d6..447a6108fc3c7 100644
--- a/pandas/tests/extension/json/array.py
+++ b/pandas/tests/extension/json/array.py
@@ -179,13 +179,11 @@ def astype(self, dtype, copy=True):
def unique(self):
# Parent method doesn't work since np.array will try to infer
# a 2-dim object.
- return type(self)(
- [dict(x) for x in list({tuple(d.items()) for d in self.data})]
- )
+ return type(self)([dict(x) for x in {tuple(d.items()) for d in self.data}])
@classmethod
def _concat_same_type(cls, to_concat):
- data = list(itertools.chain.from_iterable([x.data for x in to_concat]))
+ data = list(itertools.chain.from_iterable(x.data for x in to_concat))
return cls(data)
def _values_for_factorize(self):
| As far as I know, these lists do nothing but take up memory.
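A minimal illustration of the pattern, with a hypothetical `Box` class standing in for the extension array:

``` python
import itertools

class Box:
    def __init__(self, data):
        self.data = data  # stand-in for JSONArray.data

to_concat = [Box([1, 2]), Box([3])]
# a generator expression feeds chain.from_iterable lazily; wrapping it
# in a list first only materializes an intermediate copy
flat = list(itertools.chain.from_iterable(x.data for x in to_concat))  # [1, 2, 3]
```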
Backport PR #34845 on branch 1.0.x (DOC: 1.0.5 release date) | diff --git a/doc/source/whatsnew/v1.0.5.rst b/doc/source/whatsnew/v1.0.5.rst
index fdf08dd381050..9a5128a07bbfd 100644
--- a/doc/source/whatsnew/v1.0.5.rst
+++ b/doc/source/whatsnew/v1.0.5.rst
@@ -1,7 +1,7 @@
.. _whatsnew_105:
-What's new in 1.0.5 (June XX, 2020)
+What's new in 1.0.5 (June 17, 2020)
-----------------------------------
These are the changes in pandas 1.0.5. See :ref:`release` for a full changelog
| Backport PR #34845: DOC: 1.0.5 release date | https://api.github.com/repos/pandas-dev/pandas/pulls/34846 | 2020-06-17T10:57:34Z | 2020-06-17T11:45:01Z | 2020-06-17T11:45:01Z | 2020-06-17T11:45:01Z |
DOC: 1.0.5 release date | diff --git a/doc/source/whatsnew/v1.0.5.rst b/doc/source/whatsnew/v1.0.5.rst
index fdf08dd381050..9a5128a07bbfd 100644
--- a/doc/source/whatsnew/v1.0.5.rst
+++ b/doc/source/whatsnew/v1.0.5.rst
@@ -1,7 +1,7 @@
.. _whatsnew_105:
-What's new in 1.0.5 (June XX, 2020)
+What's new in 1.0.5 (June 17, 2020)
-----------------------------------
These are the changes in pandas 1.0.5. See :ref:`release` for a full changelog
| https://api.github.com/repos/pandas-dev/pandas/pulls/34845 | 2020-06-17T09:15:02Z | 2020-06-17T10:57:00Z | 2020-06-17T10:57:00Z | 2020-06-17T11:00:19Z |
|
BUG: fix construction from read-only non-ns datetime64 numpy array | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index 10522ff797c59..6a6c7ebd49db1 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -843,6 +843,9 @@ Datetimelike
- Bug in :meth:`DatetimeIndex.intersection` and :meth:`TimedeltaIndex.intersection` with results not having the correct ``name`` attribute (:issue:`33904`)
- Bug in :meth:`DatetimeArray.__setitem__`, :meth:`TimedeltaArray.__setitem__`, :meth:`PeriodArray.__setitem__` incorrectly allowing values with ``int64`` dtype to be silently cast (:issue:`33717`)
- Bug in subtracting :class:`TimedeltaIndex` from :class:`Period` incorrectly raising ``TypeError`` in some cases where it should succeed and ``IncompatibleFrequency`` in some cases where it should raise ``TypeError`` (:issue:`33883`)
+- Bug in constructing a Series or Index from a read-only NumPy array with non-ns
+ resolution which converted to object dtype instead of coercing to ``datetime64[ns]``
+ dtype when within the timestamp bounds (:issue:`34843`).
- The ``freq`` keyword in :class:`Period`, :func:`date_range`, :func:`period_range`, :func:`pd.tseries.frequencies.to_offset` no longer allows tuples, pass as string instead (:issue:`34703`)
Timedelta
diff --git a/pandas/_libs/tslibs/conversion.pyx b/pandas/_libs/tslibs/conversion.pyx
index 40b2d44235d8b..0811ba22977fd 100644
--- a/pandas/_libs/tslibs/conversion.pyx
+++ b/pandas/_libs/tslibs/conversion.pyx
@@ -167,7 +167,8 @@ def ensure_datetime64ns(arr: ndarray, copy: bool=True):
"""
cdef:
Py_ssize_t i, n = arr.size
- int64_t[:] ivalues, iresult
+ const int64_t[:] ivalues
+ int64_t[:] iresult
NPY_DATETIMEUNIT unit
npy_datetimestruct dts
diff --git a/pandas/tests/base/test_constructors.py b/pandas/tests/base/test_constructors.py
index e27b5c307cd99..697364fc87175 100644
--- a/pandas/tests/base/test_constructors.py
+++ b/pandas/tests/base/test_constructors.py
@@ -13,6 +13,19 @@
from pandas.core.base import NoNewAttributesMixin, PandasObject
+@pytest.fixture(
+ params=[
+ Series,
+ lambda x, **kwargs: DataFrame({"a": x}, **kwargs)["a"],
+ lambda x, **kwargs: DataFrame(x, **kwargs)[0],
+ Index,
+ ],
+ ids=["Series", "DataFrame-dict", "DataFrame-array", "Index"],
+)
+def constructor(request):
+ return request.param
+
+
class TestPandasDelegate:
class Delegator:
_properties = ["foo"]
@@ -145,3 +158,14 @@ def test_constructor_datetime_outofbound(self, a, klass):
msg = "Out of bounds"
with pytest.raises(pd.errors.OutOfBoundsDatetime, match=msg):
klass(a, dtype="datetime64[ns]")
+
+ def test_constructor_datetime_nonns(self, constructor):
+ arr = np.array(["2020-01-01T00:00:00.000000"], dtype="datetime64[us]")
+ expected = constructor(pd.to_datetime(["2020-01-01"]))
+ result = constructor(arr)
+ tm.assert_equal(result, expected)
+
+ # https://github.com/pandas-dev/pandas/issues/34843
+ arr.flags.writeable = False
+ result = constructor(arr)
+ tm.assert_equal(result, expected)
| Closes #34843 | https://api.github.com/repos/pandas-dev/pandas/pulls/34844 | 2020-06-17T09:03:34Z | 2020-06-18T06:47:45Z | 2020-06-18T06:47:45Z | 2020-06-18T10:26:49Z |
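An illustration of the fixed case:

``` python
import numpy as np
import pandas as pd

arr = np.array(["2020-01-01T00:00:00.000000"], dtype="datetime64[us]")
arr.flags.writeable = False
pd.Series(arr).dtype  # datetime64[ns] with the fix; previously object
```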
ENH: Add support for calculating EWMA with a time component | diff --git a/asv_bench/benchmarks/rolling.py b/asv_bench/benchmarks/rolling.py
index b1f6d052919bd..f0dd908f81043 100644
--- a/asv_bench/benchmarks/rolling.py
+++ b/asv_bench/benchmarks/rolling.py
@@ -91,11 +91,18 @@ class EWMMethods:
def setup(self, constructor, window, dtype, method):
N = 10 ** 5
arr = (100 * np.random.random(N)).astype(dtype)
+ times = pd.date_range("1900", periods=N, freq="23s")
self.ewm = getattr(pd, constructor)(arr).ewm(halflife=window)
+ self.ewm_times = getattr(pd, constructor)(arr).ewm(
+ halflife="1 Day", times=times
+ )
def time_ewm(self, constructor, window, dtype, method):
getattr(self.ewm, method)()
+ def time_ewm_times(self, constructor, window, dtype, method):
+        self.ewm_times.mean()
+
class VariableWindowMethods(Methods):
params = (
diff --git a/doc/source/user_guide/computation.rst b/doc/source/user_guide/computation.rst
index f36c6e06044f2..d7875e5b8d861 100644
--- a/doc/source/user_guide/computation.rst
+++ b/doc/source/user_guide/computation.rst
@@ -1095,6 +1095,25 @@ and **alpha** to the EW functions:
one half.
* **Alpha** specifies the smoothing factor directly.
+.. versionadded:: 1.1.0
+
+When you also pass a sequence of ``times``, you can express ``halflife`` as a
+timedelta-convertible unit giving the amount of time it takes for an observation
+to decay to half its value.
+
+.. ipython:: python
+
+ df = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]})
+ df
+ times = ['2020-01-01', '2020-01-03', '2020-01-10', '2020-01-15', '2020-01-17']
+ df.ewm(halflife='4 days', times=pd.DatetimeIndex(times)).mean()
+
+The following formula is used to compute the exponentially weighted mean with an input vector of times:
+
+.. math::
+
+    y_t = \frac{\sum_{i=0}^t 0.5^\frac{t_{t} - t_{i}}{\lambda} x_i}{\sum_{i=0}^t 0.5^\frac{t_{t} - t_{i}}{\lambda}},
+
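+As a worked check against the example above, the second smoothed value combines the
+observations at 2020-01-01 and 2020-01-03 with ``halflife='4 days'``:
+:math:`(0.5^{2/4} \cdot 0 + 0.5^{0} \cdot 1) / (0.5^{2/4} + 0.5^{0}) \approx 0.5858`.
+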
Here is an example for a univariate time series:
.. ipython:: python
diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index 9bd4ddbb624d9..563ae6d1d2ed1 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -329,6 +329,7 @@ Other enhancements
- :meth:`DataFrame.to_excel` can now also write OpenOffice spreadsheet (.ods) files (:issue:`27222`)
- :meth:`~Series.explode` now accepts ``ignore_index`` to reset the index, similarly to :meth:`pd.concat` or :meth:`DataFrame.sort_values` (:issue:`34932`).
- :meth:`read_csv` now accepts string values like "0", "0.0", "1", "1.0" as convertible to the nullable boolean dtype (:issue:`34859`)
+- :class:`pandas.core.window.ExponentialMovingWindow` now supports a ``times`` argument that allows ``mean`` to be calculated with observations spaced by the timestamps in ``times`` (:issue:`34839`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/_libs/window/aggregations.pyx b/pandas/_libs/window/aggregations.pyx
index ec4a412b5adc7..362d0e6263697 100644
--- a/pandas/_libs/window/aggregations.pyx
+++ b/pandas/_libs/window/aggregations.pyx
@@ -8,7 +8,7 @@ from libc.stdlib cimport malloc, free
import numpy as np
cimport numpy as cnp
-from numpy cimport ndarray, int64_t, float64_t, float32_t
+from numpy cimport ndarray, int64_t, float64_t, float32_t, uint8_t
cnp.import_array()
@@ -1752,6 +1752,51 @@ def roll_weighted_var(float64_t[:] values, float64_t[:] weights,
# ----------------------------------------------------------------------
# Exponentially weighted moving average
+def ewma_time(ndarray[float64_t] vals, int minp, ndarray[int64_t] times,
+ int64_t halflife):
+ """
+ Compute exponentially-weighted moving average using halflife and time
+ distances.
+
+ Parameters
+ ----------
+    vals : ndarray[float64]
+ minp : int
+ times : ndarray[int64]
+ halflife : int64
+
+ Returns
+ -------
+ ndarray
+ """
+ cdef:
+ Py_ssize_t i, num_not_nan = 0, N = len(vals)
+ bint is_not_nan
+ float64_t last_result
+ ndarray[uint8_t] mask = np.zeros(N, dtype=np.uint8)
+ ndarray[float64_t] weights, observations, output = np.empty(N, dtype=np.float64)
+
+ if N == 0:
+ return output
+
+ last_result = vals[0]
+
+ for i in range(N):
+ is_not_nan = vals[i] == vals[i]
+ num_not_nan += is_not_nan
+ if is_not_nan:
+ mask[i] = 1
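+            # recompute decay weights for every non-nan observation seen so
+            # far, relative to the current timestamp (times/halflife in ns)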
+ weights = 0.5 ** ((times[i] - times[mask.view(np.bool_)]) / halflife)
+ observations = vals[mask.view(np.bool_)]
+ last_result = np.sum(weights * observations) / np.sum(weights)
+
+ if num_not_nan >= minp:
+ output[i] = last_result
+ else:
+ output[i] = NaN
+
+ return output
+
def ewma(float64_t[:] vals, float64_t com, bint adjust, bint ignore_na, int minp):
"""
@@ -1761,9 +1806,9 @@ def ewma(float64_t[:] vals, float64_t com, bint adjust, bint ignore_na, int minp
----------
vals : ndarray (float64 type)
com : float64
- adjust: int
- ignore_na: bool
- minp: int
+ adjust : int
+ ignore_na : bool
+ minp : int
Returns
-------
@@ -1831,10 +1876,10 @@ def ewmcov(float64_t[:] input_x, float64_t[:] input_y,
input_x : ndarray (float64 type)
input_y : ndarray (float64 type)
com : float64
- adjust: int
- ignore_na: bool
- minp: int
- bias: int
+ adjust : int
+ ignore_na : bool
+ minp : int
+ bias : int
Returns
-------
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index d892e2487b31c..0b76960558721 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -10518,6 +10518,7 @@ def ewm(
adjust=True,
ignore_na=False,
axis=0,
+ times=None,
):
axis = self._get_axis_number(axis)
return ExponentialMovingWindow(
@@ -10530,6 +10531,7 @@ def ewm(
adjust=adjust,
ignore_na=ignore_na,
axis=axis,
+ times=times,
)
cls.ewm = ewm
diff --git a/pandas/core/window/ewm.py b/pandas/core/window/ewm.py
index ee80f80b320e4..7a2d8e84bec76 100644
--- a/pandas/core/window/ewm.py
+++ b/pandas/core/window/ewm.py
@@ -1,18 +1,21 @@
+import datetime
from functools import partial
from textwrap import dedent
from typing import Optional, Union
import numpy as np
+from pandas._libs.tslibs import Timedelta
import pandas._libs.window.aggregations as window_aggregations
-from pandas._typing import FrameOrSeries
+from pandas._typing import FrameOrSeries, TimedeltaConvertibleTypes
from pandas.compat.numpy import function as nv
from pandas.util._decorators import Appender, Substitution, doc
+from pandas.core.dtypes.common import is_datetime64_ns_dtype
from pandas.core.dtypes.generic import ABCDataFrame
from pandas.core.base import DataError
-import pandas.core.common as com
+import pandas.core.common as common
from pandas.core.window.common import _doc_template, _shared_docs, zsqrt
from pandas.core.window.rolling import _flex_binary_moment, _Rolling
@@ -32,7 +35,7 @@ def get_center_of_mass(
halflife: Optional[float],
alpha: Optional[float],
) -> float:
- valid_count = com.count_not_none(comass, span, halflife, alpha)
+ valid_count = common.count_not_none(comass, span, halflife, alpha)
if valid_count > 1:
raise ValueError("comass, span, halflife, and alpha are mutually exclusive")
@@ -76,10 +79,17 @@ class ExponentialMovingWindow(_Rolling):
span : float, optional
Specify decay in terms of span,
:math:`\alpha = 2 / (span + 1)`, for :math:`span \geq 1`.
- halflife : float, optional
+ halflife : float, str, timedelta, optional
Specify decay in terms of half-life,
:math:`\alpha = 1 - \exp\left(-\ln(2) / halflife\right)`, for
:math:`halflife > 0`.
+
+ If ``times`` is specified, the time unit (str or timedelta) over which an
+        observation decays to half its value. Only applicable to ``mean()``;
+        the halflife value does not apply to the other functions.
+
+ .. versionadded:: 1.1.0
+
alpha : float, optional
Specify smoothing factor :math:`\alpha` directly,
:math:`0 < \alpha \leq 1`.
@@ -124,6 +134,18 @@ class ExponentialMovingWindow(_Rolling):
axis : {0, 1}, default 0
The axis to use. The value 0 identifies the rows, and 1
identifies the columns.
+ times : str, np.ndarray, Series, default None
+
+ .. versionadded:: 1.1.0
+
+ Times corresponding to the observations. Must be monotonically increasing and
+ ``datetime64[ns]`` dtype.
+
+ If str, the name of the column in the DataFrame representing the times.
+
+ If 1-D array like, a sequence with the same shape as the observations.
+
+ Only applicable to ``mean()``.
Returns
-------
@@ -159,6 +181,17 @@ class ExponentialMovingWindow(_Rolling):
2 1.615385
3 1.615385
4 3.670213
+
+ Specifying ``times`` with a timedelta ``halflife`` when computing mean.
+
+ >>> times = ['2020-01-01', '2020-01-03', '2020-01-10', '2020-01-15', '2020-01-17']
+ >>> df.ewm(halflife='4 days', times=pd.DatetimeIndex(times)).mean()
+ B
+ 0 0.000000
+ 1 0.585786
+ 2 1.523889
+ 3 1.523889
+ 4 3.233686
"""
_attributes = ["com", "min_periods", "adjust", "ignore_na", "axis"]
@@ -168,20 +201,49 @@ def __init__(
obj,
com: Optional[float] = None,
span: Optional[float] = None,
- halflife: Optional[float] = None,
+ halflife: Optional[Union[float, TimedeltaConvertibleTypes]] = None,
alpha: Optional[float] = None,
min_periods: int = 0,
adjust: bool = True,
ignore_na: bool = False,
axis: int = 0,
+ times: Optional[Union[str, np.ndarray, FrameOrSeries]] = None,
):
+ self.com: Optional[float]
self.obj = obj
- self.com = get_center_of_mass(com, span, halflife, alpha)
self.min_periods = max(int(min_periods), 1)
self.adjust = adjust
self.ignore_na = ignore_na
self.axis = axis
self.on = None
+ if times is not None:
+ if isinstance(times, str):
+ times = self._selected_obj[times]
+ if not is_datetime64_ns_dtype(times):
+ raise ValueError("times must be datetime64[ns] dtype.")
+ if len(times) != len(obj):
+ raise ValueError("times must be the same length as the object.")
+ if not isinstance(halflife, (str, datetime.timedelta)):
+ raise ValueError(
+ "halflife must be a string or datetime.timedelta object"
+ )
+ self.times = np.asarray(times.astype(np.int64))
+ self.halflife = Timedelta(halflife).value
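+            # both stored as int64 nanoseconds for the Cython ewma_time kernel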
+ # Halflife is no longer applicable when calculating COM
+ # But allow COM to still be calculated if the user passes other decay args
+ if common.count_not_none(com, span, alpha) > 0:
+ self.com = get_center_of_mass(com, span, None, alpha)
+ else:
+ self.com = None
+ else:
+ if halflife is not None and isinstance(halflife, (str, datetime.timedelta)):
+ raise ValueError(
+ "halflife can only be a timedelta convertible argument if "
+ "times is not None."
+ )
+ self.times = None
+ self.halflife = None
+ self.com = get_center_of_mass(com, span, halflife, alpha)
@property
def _constructor(self):
@@ -277,14 +339,23 @@ def mean(self, *args, **kwargs):
Arguments and keyword arguments to be passed into func.
"""
nv.validate_window_func("mean", args, kwargs)
- window_func = self._get_roll_func("ewma")
- window_func = partial(
- window_func,
- com=self.com,
- adjust=self.adjust,
- ignore_na=self.ignore_na,
- minp=self.min_periods,
- )
+ if self.times is not None:
+ window_func = self._get_roll_func("ewma_time")
+ window_func = partial(
+ window_func,
+ minp=self.min_periods,
+ times=self.times,
+ halflife=self.halflife,
+ )
+ else:
+ window_func = self._get_roll_func("ewma")
+ window_func = partial(
+ window_func,
+ com=self.com,
+ adjust=self.adjust,
+ ignore_na=self.ignore_na,
+ minp=self.min_periods,
+ )
return self._apply(window_func)
@Substitution(name="ewm", func_name="std")
diff --git a/pandas/tests/window/conftest.py b/pandas/tests/window/conftest.py
index 74f3406d30225..eb8252d5731be 100644
--- a/pandas/tests/window/conftest.py
+++ b/pandas/tests/window/conftest.py
@@ -1,4 +1,4 @@
-from datetime import datetime
+from datetime import datetime, timedelta
import numpy as np
from numpy.random import randn
@@ -302,3 +302,9 @@ def series():
def which(request):
"""Turn parametrized which as fixture for series and frame"""
return request.param
+
+
+@pytest.fixture(params=["1 day", timedelta(days=1)])
+def halflife_with_times(request):
+ """Halflife argument for EWM when times is specified."""
+ return request.param
diff --git a/pandas/tests/window/test_ewm.py b/pandas/tests/window/test_ewm.py
index 44015597ddb19..12c314d5e9ec9 100644
--- a/pandas/tests/window/test_ewm.py
+++ b/pandas/tests/window/test_ewm.py
@@ -3,7 +3,8 @@
from pandas.errors import UnsupportedFunctionCall
-from pandas import DataFrame, Series
+from pandas import DataFrame, DatetimeIndex, Series, date_range
+import pandas._testing as tm
from pandas.core.window import ExponentialMovingWindow
@@ -69,3 +70,60 @@ def test_numpy_compat(method):
getattr(e, method)(1, 2, 3)
with pytest.raises(UnsupportedFunctionCall, match=msg):
getattr(e, method)(dtype=np.float64)
+
+
+def test_ewma_times_not_datetime_type():
+ msg = r"times must be datetime64\[ns\] dtype."
+ with pytest.raises(ValueError, match=msg):
+ Series(range(5)).ewm(times=np.arange(5))
+
+
+def test_ewma_times_not_same_length():
+ msg = "times must be the same length as the object."
+ with pytest.raises(ValueError, match=msg):
+ Series(range(5)).ewm(times=np.arange(4).astype("datetime64[ns]"))
+
+
+def test_ewma_halflife_not_correct_type():
+ msg = "halflife must be a string or datetime.timedelta object"
+ with pytest.raises(ValueError, match=msg):
+ Series(range(5)).ewm(halflife=1, times=np.arange(5).astype("datetime64[ns]"))
+
+
+def test_ewma_halflife_without_times(halflife_with_times):
+ msg = "halflife can only be a timedelta convertible argument if times is not None."
+ with pytest.raises(ValueError, match=msg):
+ Series(range(5)).ewm(halflife=halflife_with_times)
+
+
+@pytest.mark.parametrize(
+ "times",
+ [
+ np.arange(10).astype("datetime64[D]").astype("datetime64[ns]"),
+ date_range("2000", freq="D", periods=10),
+ date_range("2000", freq="D", periods=10).tz_localize("UTC"),
+ "time_col",
+ ],
+)
+@pytest.mark.parametrize("min_periods", [0, 2])
+def test_ewma_with_times_equal_spacing(halflife_with_times, times, min_periods):
+ halflife = halflife_with_times
+ data = np.arange(10)
+ data[::2] = np.nan
+ df = DataFrame({"A": data, "time_col": date_range("2000", freq="D", periods=10)})
+ result = df.ewm(halflife=halflife, min_periods=min_periods, times=times).mean()
+ expected = df.ewm(halflife=1.0, min_periods=min_periods).mean()
+ tm.assert_frame_equal(result, expected)
+
+
+def test_ewma_with_times_variable_spacing(tz_aware_fixture):
+ tz = tz_aware_fixture
+ halflife = "23 days"
+ times = DatetimeIndex(
+ ["2020-01-01", "2020-01-10T00:04:05", "2020-02-23T05:00:23"]
+ ).tz_localize(tz)
+ data = np.arange(3)
+ df = DataFrame(data)
+ result = df.ewm(halflife=halflife, times=times).mean()
+ expected = DataFrame([0.0, 0.5674161888241773, 1.545239952073459])
+ tm.assert_frame_equal(result, expected)
| - [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
This enhancement allows EWMA to be calculated relative to the timestamp at which each observation occurs, instead of assuming observations are equally spaced. A short usage sketch is shown below.
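A minimal usage sketch (values taken from the new docstring example; requires this patch):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"B": [0, 1, 2, np.nan, 4]})
times = pd.DatetimeIndex(
    ["2020-01-01", "2020-01-03", "2020-01-10", "2020-01-15", "2020-01-17"]
)
# halflife must be a str or timedelta (not a float) when times is given
print(df.ewm(halflife="4 days", times=times).mean())
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/34839 | 2020-06-17T05:21:43Z | 2020-07-06T22:37:57Z | 2020-07-06T22:37:57Z | 2020-07-07T00:31:46Z |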
REF: implement _shared_docs to de-circularize dependencies | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index e5de9b428e2d5..e57b88f1be040 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -135,6 +135,7 @@
sanitize_index,
to_arrays,
)
+from pandas.core.reshape.melt import melt
from pandas.core.series import Series
from pandas.core.sorting import ensure_key_mapped
@@ -7069,104 +7070,6 @@ def unstack(self, level=-1, fill_value=None):
return unstack(self, level, fill_value)
- _shared_docs[
- "melt"
- ] = """
- Unpivot a DataFrame from wide to long format, optionally leaving identifiers set.
-
- This function is useful to massage a DataFrame into a format where one
- or more columns are identifier variables (`id_vars`), while all other
- columns, considered measured variables (`value_vars`), are "unpivoted" to
- the row axis, leaving just two non-identifier columns, 'variable' and
- 'value'.
- %(versionadded)s
- Parameters
- ----------
- id_vars : tuple, list, or ndarray, optional
- Column(s) to use as identifier variables.
- value_vars : tuple, list, or ndarray, optional
- Column(s) to unpivot. If not specified, uses all columns that
- are not set as `id_vars`.
- var_name : scalar
- Name to use for the 'variable' column. If None it uses
- ``frame.columns.name`` or 'variable'.
- value_name : scalar, default 'value'
- Name to use for the 'value' column.
- col_level : int or str, optional
- If columns are a MultiIndex then use this level to melt.
-
- Returns
- -------
- DataFrame
- Unpivoted DataFrame.
-
- See Also
- --------
- %(other)s : Identical method.
- pivot_table : Create a spreadsheet-style pivot table as a DataFrame.
- DataFrame.pivot : Return reshaped DataFrame organized
- by given index / column values.
- DataFrame.explode : Explode a DataFrame from list-like
- columns to long format.
-
- Examples
- --------
- >>> df = pd.DataFrame({'A': {0: 'a', 1: 'b', 2: 'c'},
- ... 'B': {0: 1, 1: 3, 2: 5},
- ... 'C': {0: 2, 1: 4, 2: 6}})
- >>> df
- A B C
- 0 a 1 2
- 1 b 3 4
- 2 c 5 6
-
- >>> %(caller)sid_vars=['A'], value_vars=['B'])
- A variable value
- 0 a B 1
- 1 b B 3
- 2 c B 5
-
- >>> %(caller)sid_vars=['A'], value_vars=['B', 'C'])
- A variable value
- 0 a B 1
- 1 b B 3
- 2 c B 5
- 3 a C 2
- 4 b C 4
- 5 c C 6
-
- The names of 'variable' and 'value' columns can be customized:
-
- >>> %(caller)sid_vars=['A'], value_vars=['B'],
- ... var_name='myVarname', value_name='myValname')
- A myVarname myValname
- 0 a B 1
- 1 b B 3
- 2 c B 5
-
- If you have multi-index columns:
-
- >>> df.columns = [list('ABC'), list('DEF')]
- >>> df
- A B C
- D E F
- 0 a 1 2
- 1 b 3 4
- 2 c 5 6
-
- >>> %(caller)scol_level=0, id_vars=['A'], value_vars=['B'])
- A variable value
- 0 a B 1
- 1 b B 3
- 2 c B 5
-
- >>> %(caller)sid_vars=[('A', 'D')], value_vars=[('B', 'E')])
- (A, D) variable_0 variable_1 value
- 0 a B E 1
- 1 b B E 3
- 2 c B E 5
- """
-
@Appender(
_shared_docs["melt"]
% dict(
@@ -7183,7 +7086,6 @@ def melt(
value_name="value",
col_level=None,
) -> "DataFrame":
- from pandas.core.reshape.melt import melt
return melt(
self,
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 61361c3331d5e..a459f2f403550 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -97,6 +97,7 @@
from pandas.core.internals import BlockManager
from pandas.core.missing import find_valid_index
from pandas.core.ops import _align_method_FRAME
+from pandas.core.shared_docs import _shared_docs
from pandas.io.formats import format as fmt
from pandas.io.formats.format import DataFrameFormatter, format_percentiles
@@ -109,7 +110,6 @@
# goal is to be able to define the docs close to function, while still being
# able to share
-_shared_docs: Dict[str, str] = dict()
_shared_doc_kwargs = dict(
axes="keywords for axes",
klass="Series/DataFrame",
diff --git a/pandas/core/reshape/concat.py b/pandas/core/reshape/concat.py
index db7e9265ac21d..299b68c6e71e0 100644
--- a/pandas/core/reshape/concat.py
+++ b/pandas/core/reshape/concat.py
@@ -3,7 +3,7 @@
"""
from collections import abc
-from typing import Iterable, List, Mapping, Union, overload
+from typing import TYPE_CHECKING, Iterable, List, Mapping, Union, overload
import numpy as np
@@ -12,14 +12,14 @@
from pandas.core.dtypes.concat import concat_compat
from pandas.core.dtypes.generic import ABCDataFrame, ABCSeries
-from pandas import DataFrame, Index, MultiIndex, Series
from pandas.core.arrays.categorical import (
factorize_from_iterable,
factorize_from_iterables,
)
import pandas.core.common as com
-from pandas.core.generic import NDFrame
from pandas.core.indexes.api import (
+ Index,
+ MultiIndex,
all_indexes_same,
ensure_index,
get_consensus_names,
@@ -28,6 +28,9 @@
import pandas.core.indexes.base as ibase
from pandas.core.internals import concatenate_block_managers
+if TYPE_CHECKING:
+ from pandas import DataFrame
+
# ---------------------------------------------------------------------
# Concatenate DataFrame objects
@@ -291,7 +294,7 @@ class _Concatenator:
def __init__(
self,
- objs,
+ objs: Union[Iterable[FrameOrSeries], Mapping[Label, FrameOrSeries]],
axis=0,
join: str = "outer",
keys=None,
@@ -302,7 +305,7 @@ def __init__(
copy: bool = True,
sort=False,
):
- if isinstance(objs, (NDFrame, str)):
+ if isinstance(objs, (ABCSeries, ABCDataFrame, str)):
raise TypeError(
"first argument must be an iterable of pandas "
f'objects, you passed an object of type "{type(objs).__name__}"'
@@ -348,7 +351,7 @@ def __init__(
# consolidate data & figure out what our result ndim is going to be
ndims = set()
for obj in objs:
- if not isinstance(obj, (Series, DataFrame)):
+ if not isinstance(obj, (ABCSeries, ABCDataFrame)):
msg = (
f"cannot concatenate object of type '{type(obj)}'; "
"only Series and DataFrame objs are valid"
@@ -374,7 +377,7 @@ def __init__(
# filter out the empties if we have not multi-index possibilities
# note to keep empty Series as it affect to result columns / name
non_empties = [
- obj for obj in objs if sum(obj.shape) > 0 or isinstance(obj, Series)
+ obj for obj in objs if sum(obj.shape) > 0 or isinstance(obj, ABCSeries)
]
if len(non_empties) and (
@@ -388,15 +391,15 @@ def __init__(
self.objs = objs
# Standardize axis parameter to int
- if isinstance(sample, Series):
- axis = DataFrame._get_axis_number(axis)
+ if isinstance(sample, ABCSeries):
+ axis = sample._constructor_expanddim._get_axis_number(axis)
else:
axis = sample._get_axis_number(axis)
# Need to flip BlockManager axis in the DataFrame special case
self._is_frame = isinstance(sample, ABCDataFrame)
if self._is_frame:
- axis = DataFrame._get_block_manager_axis(axis)
+ axis = sample._get_block_manager_axis(axis)
self._is_series = isinstance(sample, ABCSeries)
if not 0 <= axis <= sample.ndim:
@@ -543,7 +546,7 @@ def _get_concat_axis(self) -> Index:
num = 0
has_names = False
for i, x in enumerate(self.objs):
- if not isinstance(x, Series):
+ if not isinstance(x, ABCSeries):
raise TypeError(
f"Cannot concatenate type 'Series' with "
f"object of type '{type(x).__name__}'"
diff --git a/pandas/core/reshape/melt.py b/pandas/core/reshape/melt.py
index 845f6b67693f4..cd0619738677d 100644
--- a/pandas/core/reshape/melt.py
+++ b/pandas/core/reshape/melt.py
@@ -11,13 +11,13 @@
from pandas.core.arrays import Categorical
import pandas.core.common as com
-from pandas.core.frame import DataFrame, _shared_docs
from pandas.core.indexes.api import Index, MultiIndex
from pandas.core.reshape.concat import concat
+from pandas.core.shared_docs import _shared_docs
from pandas.core.tools.numeric import to_numeric
if TYPE_CHECKING:
- from pandas import Series # noqa: F401
+ from pandas import DataFrame, Series # noqa: F401
@Appender(
@@ -25,13 +25,13 @@
% dict(caller="pd.melt(df, ", versionadded="", other="DataFrame.melt")
)
def melt(
- frame: DataFrame,
+ frame: "DataFrame",
id_vars=None,
value_vars=None,
var_name=None,
value_name="value",
col_level=None,
-) -> DataFrame:
+) -> "DataFrame":
# TODO: what about the existing index?
# If multiindex, gather names of columns on all level for checking presence
# of `id_vars` and `value_vars`
@@ -125,7 +125,7 @@ def melt(
@deprecate_kwarg(old_arg_name="label", new_arg_name=None)
-def lreshape(data: DataFrame, groups, dropna: bool = True, label=None) -> DataFrame:
+def lreshape(data: "DataFrame", groups, dropna: bool = True, label=None) -> "DataFrame":
"""
Reshape long-format data to wide. Generalized inverse of DataFrame.pivot
@@ -195,8 +195,8 @@ def lreshape(data: DataFrame, groups, dropna: bool = True, label=None) -> DataFr
def wide_to_long(
- df: DataFrame, stubnames, i, j, sep: str = "", suffix: str = r"\d+"
-) -> DataFrame:
+ df: "DataFrame", stubnames, i, j, sep: str = "", suffix: str = r"\d+"
+) -> "DataFrame":
r"""
Wide panel to long format. Less flexible but more user-friendly than melt.
diff --git a/pandas/core/shared_docs.py b/pandas/core/shared_docs.py
new file mode 100644
index 0000000000000..1894f551afea5
--- /dev/null
+++ b/pandas/core/shared_docs.py
@@ -0,0 +1,102 @@
+from typing import Dict
+
+_shared_docs: Dict[str, str] = dict()
+
+
+_shared_docs[
+ "melt"
+] = """
+ Unpivot a DataFrame from wide to long format, optionally leaving identifiers set.
+
+ This function is useful to massage a DataFrame into a format where one
+ or more columns are identifier variables (`id_vars`), while all other
+ columns, considered measured variables (`value_vars`), are "unpivoted" to
+ the row axis, leaving just two non-identifier columns, 'variable' and
+ 'value'.
+ %(versionadded)s
+ Parameters
+ ----------
+ id_vars : tuple, list, or ndarray, optional
+ Column(s) to use as identifier variables.
+ value_vars : tuple, list, or ndarray, optional
+ Column(s) to unpivot. If not specified, uses all columns that
+ are not set as `id_vars`.
+ var_name : scalar
+ Name to use for the 'variable' column. If None it uses
+ ``frame.columns.name`` or 'variable'.
+ value_name : scalar, default 'value'
+ Name to use for the 'value' column.
+ col_level : int or str, optional
+ If columns are a MultiIndex then use this level to melt.
+
+ Returns
+ -------
+ DataFrame
+ Unpivoted DataFrame.
+
+ See Also
+ --------
+ %(other)s : Identical method.
+ pivot_table : Create a spreadsheet-style pivot table as a DataFrame.
+ DataFrame.pivot : Return reshaped DataFrame organized
+ by given index / column values.
+ DataFrame.explode : Explode a DataFrame from list-like
+ columns to long format.
+
+ Examples
+ --------
+ >>> df = pd.DataFrame({'A': {0: 'a', 1: 'b', 2: 'c'},
+ ... 'B': {0: 1, 1: 3, 2: 5},
+ ... 'C': {0: 2, 1: 4, 2: 6}})
+ >>> df
+ A B C
+ 0 a 1 2
+ 1 b 3 4
+ 2 c 5 6
+
+ >>> %(caller)sid_vars=['A'], value_vars=['B'])
+ A variable value
+ 0 a B 1
+ 1 b B 3
+ 2 c B 5
+
+ >>> %(caller)sid_vars=['A'], value_vars=['B', 'C'])
+ A variable value
+ 0 a B 1
+ 1 b B 3
+ 2 c B 5
+ 3 a C 2
+ 4 b C 4
+ 5 c C 6
+
+ The names of 'variable' and 'value' columns can be customized:
+
+ >>> %(caller)sid_vars=['A'], value_vars=['B'],
+ ... var_name='myVarname', value_name='myValname')
+ A myVarname myValname
+ 0 a B 1
+ 1 b B 3
+ 2 c B 5
+
+ If you have multi-index columns:
+
+ >>> df.columns = [list('ABC'), list('DEF')]
+ >>> df
+ A B C
+ D E F
+ 0 a 1 2
+ 1 b 3 4
+ 2 c 5 6
+
+ >>> %(caller)scol_level=0, id_vars=['A'], value_vars=['B'])
+ A variable value
+ 0 a B 1
+ 1 b B 3
+ 2 c B 5
+
+ >>> %(caller)sid_vars=[('A', 'D')], value_vars=[('B', 'E')])
+ (A, D) variable_0 variable_1 value
+ 0 a B E 1
+ 1 b B E 3
+ 2 c B E 5
+ """
| A side effect of this is that a number of pd.concat imports can be done at import time instead of at runtime; that will be done in a separate pass. A sketch of the centralized docstring pattern is shown below.
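A hedged sketch of the pattern this PR centralizes (mirroring the diff, not a new public API): the shared docstring templates live in a plain dict with no pandas imports, so `pd.melt` and `DataFrame.melt` can both decorate themselves without importing each other at module load.

```python
# pandas/core/shared_docs.py holds only a Dict[str, str]; each call site
# fills the template, so neither module needs the other at import time.
from pandas.core.shared_docs import _shared_docs
from pandas.util._decorators import Appender


@Appender(
    _shared_docs["melt"]
    % dict(caller="pd.melt(df, ", versionadded="", other="DataFrame.melt")
)
def melt(frame, id_vars=None, value_vars=None, var_name=None,
         value_name="value", col_level=None):
    ...
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/34837 | 2020-06-17T02:20:16Z | 2020-06-24T23:36:11Z | 2020-06-24T23:36:11Z | 2020-06-24T23:51:38Z |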
Replaced np.bool refs; fix CI failure | diff --git a/asv_bench/benchmarks/pandas_vb_common.py b/asv_bench/benchmarks/pandas_vb_common.py
index fd1770df8e5d3..23286343d7367 100644
--- a/asv_bench/benchmarks/pandas_vb_common.py
+++ b/asv_bench/benchmarks/pandas_vb_common.py
@@ -33,7 +33,7 @@
np.uint8,
]
datetime_dtypes = [np.datetime64, np.timedelta64]
-string_dtypes = [np.object]
+string_dtypes = [object]
try:
extension_dtypes = [
pd.Int8Dtype,
diff --git a/asv_bench/benchmarks/series_methods.py b/asv_bench/benchmarks/series_methods.py
index d78419c12ce0d..258c29c145721 100644
--- a/asv_bench/benchmarks/series_methods.py
+++ b/asv_bench/benchmarks/series_methods.py
@@ -58,17 +58,15 @@ def time_isin_nan_values(self):
class IsInForObjects:
def setup(self):
- self.s_nans = Series(np.full(10 ** 4, np.nan)).astype(np.object)
- self.vals_nans = np.full(10 ** 4, np.nan).astype(np.object)
- self.s_short = Series(np.arange(2)).astype(np.object)
- self.s_long = Series(np.arange(10 ** 5)).astype(np.object)
- self.vals_short = np.arange(2).astype(np.object)
- self.vals_long = np.arange(10 ** 5).astype(np.object)
+ self.s_nans = Series(np.full(10 ** 4, np.nan)).astype(object)
+ self.vals_nans = np.full(10 ** 4, np.nan).astype(object)
+ self.s_short = Series(np.arange(2)).astype(object)
+ self.s_long = Series(np.arange(10 ** 5)).astype(object)
+ self.vals_short = np.arange(2).astype(object)
+ self.vals_long = np.arange(10 ** 5).astype(object)
# because of nans floats are special:
- self.s_long_floats = Series(np.arange(10 ** 5, dtype=np.float)).astype(
- np.object
- )
- self.vals_long_floats = np.arange(10 ** 5, dtype=np.float).astype(np.object)
+ self.s_long_floats = Series(np.arange(10 ** 5, dtype=np.float)).astype(object)
+ self.vals_long_floats = np.arange(10 ** 5, dtype=np.float).astype(object)
def time_isin_nans(self):
# if nan-objects are different objects,
diff --git a/asv_bench/benchmarks/sparse.py b/asv_bench/benchmarks/sparse.py
index d6aa41a7e0f32..28ceb25eebd96 100644
--- a/asv_bench/benchmarks/sparse.py
+++ b/asv_bench/benchmarks/sparse.py
@@ -32,7 +32,7 @@ def time_series_to_frame(self):
class SparseArrayConstructor:
- params = ([0.1, 0.01], [0, np.nan], [np.int64, np.float64, np.object])
+ params = ([0.1, 0.01], [0, np.nan], [np.int64, np.float64, object])
param_names = ["dense_proportion", "fill_value", "dtype"]
def setup(self, dense_proportion, fill_value, dtype):
diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index df6b44ac654ce..d4be9d802d697 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -1884,7 +1884,7 @@ Fallback behavior
If the JSON serializer cannot handle the container contents directly it will
fall back in the following manner:
-* if the dtype is unsupported (e.g. ``np.complex``) then the ``default_handler``, if provided, will be called
+* if the dtype is unsupported (e.g. ``np.complex_``) then the ``default_handler``, if provided, will be called
for each value, otherwise an exception is raised.
* if an object is unsupported it will attempt the following:
diff --git a/pandas/_libs/hashtable_class_helper.pxi.in b/pandas/_libs/hashtable_class_helper.pxi.in
index ad65f9707610b..e0e026fe7cb5e 100644
--- a/pandas/_libs/hashtable_class_helper.pxi.in
+++ b/pandas/_libs/hashtable_class_helper.pxi.in
@@ -178,7 +178,7 @@ cdef class StringVector:
Py_ssize_t n
object val
- ao = np.empty(self.data.n, dtype=np.object)
+ ao = np.empty(self.data.n, dtype=object)
for i in range(self.data.n):
val = self.data.data[i]
ao[i] = val
diff --git a/pandas/_libs/hashtable_func_helper.pxi.in b/pandas/_libs/hashtable_func_helper.pxi.in
index 326ae36c6a12c..0cc0a6b192df5 100644
--- a/pandas/_libs/hashtable_func_helper.pxi.in
+++ b/pandas/_libs/hashtable_func_helper.pxi.in
@@ -94,7 +94,7 @@ cpdef value_count_{{dtype}}({{c_type}}[:] values, bint dropna):
build_count_table_{{dtype}}(values, table, dropna)
{{endif}}
- result_keys = np.empty(table.n_occupied, dtype=np.{{dtype}})
+ result_keys = np.empty(table.n_occupied, '{{dtype}}')
result_counts = np.zeros(table.n_occupied, dtype=np.int64)
{{if dtype == 'object'}}
diff --git a/pandas/_libs/parsers.pyx b/pandas/_libs/parsers.pyx
index 461419239c730..6ffb036e01595 100644
--- a/pandas/_libs/parsers.pyx
+++ b/pandas/_libs/parsers.pyx
@@ -2037,7 +2037,7 @@ def _concatenate_chunks(list chunks):
numpy_dtypes = {x for x in dtypes if not is_categorical_dtype(x)}
if len(numpy_dtypes) > 1:
common_type = np.find_common_type(numpy_dtypes, [])
- if common_type == np.object:
+ if common_type == object:
warning_columns.append(str(name))
dtype = dtypes.pop()
diff --git a/pandas/_libs/sparse.pyx b/pandas/_libs/sparse.pyx
index d853ddf3de7d4..7c9575d921dc9 100644
--- a/pandas/_libs/sparse.pyx
+++ b/pandas/_libs/sparse.pyx
@@ -791,4 +791,4 @@ def make_mask_object_ndarray(ndarray[object, ndim=1] arr, object fill_value):
if value == fill_value and type(value) == type(fill_value):
mask[i] = 0
- return mask.view(dtype=np.bool)
+ return mask.view(dtype=bool)
diff --git a/pandas/_libs/testing.pyx b/pandas/_libs/testing.pyx
index 9d3959d0a070a..ca18afebf410b 100644
--- a/pandas/_libs/testing.pyx
+++ b/pandas/_libs/testing.pyx
@@ -11,7 +11,7 @@ cdef NUMERIC_TYPES = (
bool,
int,
float,
- np.bool,
+ np.bool_,
np.int8,
np.int16,
np.int32,
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index d270a6431be56..dcf2015245518 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -171,7 +171,7 @@ def _ensure_data(
return values, dtype
# we have failed, return object
- values = np.asarray(values, dtype=np.object)
+ values = np.asarray(values, dtype=object)
return ensure_object(values), np.dtype("object")
diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py
index 9b89ec99e8df6..4996a10002c63 100644
--- a/pandas/core/arrays/sparse/array.py
+++ b/pandas/core/arrays/sparse/array.py
@@ -150,7 +150,7 @@ def _sparse_array_op(
# to make template simple, cast here
left_sp_values = left.sp_values.view(np.uint8)
right_sp_values = right.sp_values.view(np.uint8)
- result_dtype = np.bool
+ result_dtype = bool
else:
opname = f"sparse_{name}_{dtype}"
left_sp_values = left.sp_values
@@ -183,7 +183,7 @@ def _wrap_result(name, data, sparse_index, fill_value, dtype=None):
name = name[2:-2]
if name in ("eq", "ne", "lt", "gt", "le", "ge"):
- dtype = np.bool
+ dtype = bool
fill_value = lib.item_from_zerodim(fill_value)
diff --git a/pandas/core/base.py b/pandas/core/base.py
index bb1afc8f8ef20..e790b1d7f106e 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -1520,7 +1520,7 @@ def drop_duplicates(self, keep="first"):
def duplicated(self, keep="first"):
if isinstance(self, ABCIndexClass):
if self.is_unique:
- return np.zeros(len(self), dtype=np.bool)
+ return np.zeros(len(self), dtype=bool)
return duplicated(self, keep=keep)
else:
return self._constructor(
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 2a47a03b8d387..e69e3bab10af8 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -225,7 +225,7 @@ def trans(x):
# if we have any nulls, then we are done
return result
- elif not isinstance(r[0], (np.integer, np.floating, np.bool, int, float, bool)):
+ elif not isinstance(r[0], (np.integer, np.floating, int, float, bool)):
# a comparable, e.g. a Decimal may slip in here
return result
@@ -315,7 +315,7 @@ def maybe_cast_result_dtype(dtype: DtypeObj, how: str) -> DtypeObj:
from pandas.core.arrays.boolean import BooleanDtype
from pandas.core.arrays.integer import Int64Dtype
- if how in ["add", "cumsum", "sum"] and (dtype == np.dtype(np.bool)):
+ if how in ["add", "cumsum", "sum"] and (dtype == np.dtype(bool)):
return np.dtype(np.int64)
elif how in ["add", "cumsum", "sum"] and isinstance(dtype, BooleanDtype):
return Int64Dtype()
@@ -597,7 +597,7 @@ def _ensure_dtype_type(value, dtype):
"""
Ensure that the given value is an instance of the given dtype.
- e.g. if out dtype is np.complex64, we should have an instance of that
+ e.g. if out dtype is np.complex64_, we should have an instance of that
as opposed to a python complex object.
Parameters
@@ -1483,7 +1483,7 @@ def find_common_type(types: List[DtypeObj]) -> DtypeObj:
if has_bools:
for t in types:
if is_integer_dtype(t) or is_float_dtype(t) or is_complex_dtype(t):
- return np.object
+ return object
return np.find_common_type(types, [])
@@ -1742,7 +1742,7 @@ def validate_numeric_casting(dtype: np.dtype, value):
if is_float(value) and np.isnan(value):
raise ValueError("Cannot assign nan to integer series")
- if issubclass(dtype.type, (np.integer, np.floating, np.complex)) and not issubclass(
+ if issubclass(dtype.type, (np.integer, np.floating, complex)) and not issubclass(
dtype.type, np.bool_
):
if is_bool(value):
diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py
index a4a5ae1bfefff..9e960375e9bf4 100644
--- a/pandas/core/dtypes/common.py
+++ b/pandas/core/dtypes/common.py
@@ -1354,7 +1354,7 @@ def is_bool_dtype(arr_or_dtype) -> bool:
False
>>> is_bool_dtype(bool)
True
- >>> is_bool_dtype(np.bool)
+ >>> is_bool_dtype(np.bool_)
True
>>> is_bool_dtype(np.array(['a', 'b']))
False
@@ -1526,7 +1526,7 @@ def is_complex_dtype(arr_or_dtype) -> bool:
False
>>> is_complex_dtype(int)
False
- >>> is_complex_dtype(np.complex)
+ >>> is_complex_dtype(np.complex_)
True
>>> is_complex_dtype(np.array(['a', 'b']))
False
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 9014e576eeb39..26770efb5c9f9 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -10024,7 +10024,7 @@ def describe(
Including only string columns in a ``DataFrame`` description.
- >>> df.describe(include=[np.object]) # doctest: +SKIP
+ >>> df.describe(include=[object]) # doctest: +SKIP
object
count 3
unique 3
@@ -10051,7 +10051,7 @@ def describe(
Excluding object columns from a ``DataFrame`` description.
- >>> df.describe(exclude=[np.object]) # doctest: +SKIP
+ >>> df.describe(exclude=[object]) # doctest: +SKIP
categorical numeric
count 3 3.0
unique 3 NaN
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 904049923859d..48fdb14ebe90c 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -1267,9 +1267,9 @@ def objs_to_bool(vals: np.ndarray) -> Tuple[np.ndarray, Type]:
if is_object_dtype(vals):
vals = np.array([bool(x) for x in vals])
else:
- vals = vals.astype(np.bool)
+ vals = vals.astype(bool)
- return vals.view(np.uint8), np.bool
+ return vals.view(np.uint8), bool
def result_to_bool(result: np.ndarray, inference: Type) -> np.ndarray:
return result.astype(inference, copy=False)
@@ -2059,7 +2059,7 @@ def pre_processor(vals: np.ndarray) -> Tuple[np.ndarray, Optional[Type]]:
vals = vals.to_numpy(dtype=float, na_value=np.nan)
elif is_datetime64_dtype(vals.dtype):
inference = "datetime64[ns]"
- vals = np.asarray(vals).astype(np.float)
+ vals = np.asarray(vals).astype(float)
return vals, inference
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index c046d6465ce67..057adceda7efd 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -374,7 +374,7 @@ def __new__(
return UInt64Index(data, copy=copy, dtype=dtype, name=name)
elif is_float_dtype(data.dtype):
return Float64Index(data, copy=copy, dtype=dtype, name=name)
- elif issubclass(data.dtype.type, np.bool) or is_bool_dtype(data):
+ elif issubclass(data.dtype.type, bool) or is_bool_dtype(data):
subarr = data.astype("object")
else:
subarr = com.asarray_tuplesafe(data, dtype=object)
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index e496694ee7899..eaf59051205d6 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -1951,7 +1951,7 @@ def _check_comparison_types(
if isinstance(result, np.ndarray):
# The shape of the mask can differ to that of the result
# since we may compare only a subset of a's or b's elements
- tmp = np.zeros(mask.shape, dtype=np.bool)
+ tmp = np.zeros(mask.shape, dtype=np.bool_)
tmp[mask] = result
result = tmp
diff --git a/pandas/core/util/hashing.py b/pandas/core/util/hashing.py
index 1d6e02254e44a..1b56b6d5a46fa 100644
--- a/pandas/core/util/hashing.py
+++ b/pandas/core/util/hashing.py
@@ -264,7 +264,7 @@ def hash_array(
# First, turn whatever array this is into unsigned 64-bit ints, if we can
# manage it.
- elif isinstance(dtype, np.bool):
+ elif isinstance(dtype, bool):
vals = vals.astype("u8")
elif issubclass(dtype.type, (np.datetime64, np.timedelta64)):
vals = vals.view("i8").astype("u8", copy=False)
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index c54e264faedd2..679cf4c2d8929 100644
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -3476,13 +3476,13 @@ def _get_empty_meta(columns, index_col, index_names, dtype=None):
# This will enable us to write `dtype[col_name]`
# without worrying about KeyError issues later on.
if not isinstance(dtype, dict):
- # if dtype == None, default will be np.object.
- default_dtype = dtype or np.object
+ # if dtype == None, default will be object.
+ default_dtype = dtype or object
dtype = defaultdict(lambda: default_dtype)
else:
# Save a copy of the dictionary.
_dtype = dtype.copy()
- dtype = defaultdict(lambda: np.object)
+ dtype = defaultdict(lambda: object)
# Convert column indexes to column names.
for k, v in _dtype.items():
diff --git a/pandas/io/sas/sas7bdat.py b/pandas/io/sas/sas7bdat.py
index c8f1336bcec60..3d9be7c15726b 100644
--- a/pandas/io/sas/sas7bdat.py
+++ b/pandas/io/sas/sas7bdat.py
@@ -685,7 +685,7 @@ def read(self, nrows=None):
nd = self._column_types.count(b"d")
ns = self._column_types.count(b"s")
- self._string_chunk = np.empty((ns, nrows), dtype=np.object)
+ self._string_chunk = np.empty((ns, nrows), dtype=object)
self._byte_chunk = np.zeros((nd, 8 * nrows), dtype=np.uint8)
self._current_row_in_chunk_index = 0
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
index e9adf5292ef6f..7677d8a94d521 100644
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -322,7 +322,7 @@ def convert_delta_safe(base, deltas, unit) -> Series:
elif fmt.startswith(("%tC", "tC")):
warnings.warn("Encountered %tC format. Leaving in Stata Internal Format.")
- conv_dates = Series(dates, dtype=np.object)
+ conv_dates = Series(dates, dtype=object)
if has_bad_values:
conv_dates[bad_locs] = NaT
return conv_dates
@@ -451,7 +451,7 @@ def g(x: datetime.datetime) -> int:
conv_dates = 4 * (d.year - stata_epoch.year) + (d.month - 1) // 3
elif fmt in ["%th", "th"]:
d = parse_dates_safe(dates, year=True)
- conv_dates = 2 * (d.year - stata_epoch.year) + (d.month > 6).astype(np.int)
+ conv_dates = 2 * (d.year - stata_epoch.year) + (d.month > 6).astype(int)
elif fmt in ["%ty", "ty"]:
d = parse_dates_safe(dates, year=True)
conv_dates = d.year
@@ -553,7 +553,7 @@ def _cast_to_stata_types(data: DataFrame) -> DataFrame:
ws = ""
# original, if small, if large
conversion_data = (
- (np.bool, np.int8, np.int8),
+ (np.bool_, np.int8, np.int8),
(np.uint8, np.int8, np.int16),
(np.uint16, np.int16, np.int32),
(np.uint32, np.int32, np.int64),
@@ -1725,7 +1725,7 @@ def _do_convert_missing(self, data: DataFrame, convert_missing: bool) -> DataFra
if convert_missing: # Replacement follows Stata notation
missing_loc = np.nonzero(np.asarray(missing))[0]
umissing, umissing_loc = np.unique(series[missing], return_inverse=True)
- replacement = Series(series, dtype=np.object)
+ replacement = Series(series, dtype=object)
for j, um in enumerate(umissing):
missing_value = StataMissingValue(um)
diff --git a/pandas/plotting/_matplotlib/tools.py b/pandas/plotting/_matplotlib/tools.py
index ef8376bfef8a9..caf2f27de9276 100644
--- a/pandas/plotting/_matplotlib/tools.py
+++ b/pandas/plotting/_matplotlib/tools.py
@@ -301,7 +301,7 @@ def _handle_shared_axes(axarr, nplots, naxes, nrows, ncols, sharex, sharey):
try:
# first find out the ax layout,
# so that we can correctly handle 'gaps"
- layout = np.zeros((nrows + 1, ncols + 1), dtype=np.bool)
+ layout = np.zeros((nrows + 1, ncols + 1), dtype=np.bool_)
for ax in axarr:
layout[row_num(ax), col_num(ax)] = ax.get_visible()
diff --git a/pandas/tests/arithmetic/test_period.py b/pandas/tests/arithmetic/test_period.py
index ccd03e841a40d..6c7b989bb9f2e 100644
--- a/pandas/tests/arithmetic/test_period.py
+++ b/pandas/tests/arithmetic/test_period.py
@@ -457,27 +457,27 @@ def test_pi_comp_period(self):
)
f = lambda x: x == pd.Period("2011-03", freq="M")
- exp = np.array([False, False, True, False], dtype=np.bool)
+ exp = np.array([False, False, True, False], dtype=np.bool_)
self._check(idx, f, exp)
f = lambda x: pd.Period("2011-03", freq="M") == x
self._check(idx, f, exp)
f = lambda x: x != pd.Period("2011-03", freq="M")
- exp = np.array([True, True, False, True], dtype=np.bool)
+ exp = np.array([True, True, False, True], dtype=np.bool_)
self._check(idx, f, exp)
f = lambda x: pd.Period("2011-03", freq="M") != x
self._check(idx, f, exp)
f = lambda x: pd.Period("2011-03", freq="M") >= x
- exp = np.array([True, True, True, False], dtype=np.bool)
+ exp = np.array([True, True, True, False], dtype=np.bool_)
self._check(idx, f, exp)
f = lambda x: x > pd.Period("2011-03", freq="M")
- exp = np.array([False, False, False, True], dtype=np.bool)
+ exp = np.array([False, False, False, True], dtype=np.bool_)
self._check(idx, f, exp)
f = lambda x: pd.Period("2011-03", freq="M") >= x
- exp = np.array([True, True, True, False], dtype=np.bool)
+ exp = np.array([True, True, True, False], dtype=np.bool_)
self._check(idx, f, exp)
def test_pi_comp_period_nat(self):
@@ -486,43 +486,43 @@ def test_pi_comp_period_nat(self):
)
f = lambda x: x == pd.Period("2011-03", freq="M")
- exp = np.array([False, False, True, False], dtype=np.bool)
+ exp = np.array([False, False, True, False], dtype=np.bool_)
self._check(idx, f, exp)
f = lambda x: pd.Period("2011-03", freq="M") == x
self._check(idx, f, exp)
f = lambda x: x == pd.NaT
- exp = np.array([False, False, False, False], dtype=np.bool)
+ exp = np.array([False, False, False, False], dtype=np.bool_)
self._check(idx, f, exp)
f = lambda x: pd.NaT == x
self._check(idx, f, exp)
f = lambda x: x != pd.Period("2011-03", freq="M")
- exp = np.array([True, True, False, True], dtype=np.bool)
+ exp = np.array([True, True, False, True], dtype=np.bool_)
self._check(idx, f, exp)
f = lambda x: pd.Period("2011-03", freq="M") != x
self._check(idx, f, exp)
f = lambda x: x != pd.NaT
- exp = np.array([True, True, True, True], dtype=np.bool)
+ exp = np.array([True, True, True, True], dtype=np.bool_)
self._check(idx, f, exp)
f = lambda x: pd.NaT != x
self._check(idx, f, exp)
f = lambda x: pd.Period("2011-03", freq="M") >= x
- exp = np.array([True, False, True, False], dtype=np.bool)
+ exp = np.array([True, False, True, False], dtype=np.bool_)
self._check(idx, f, exp)
f = lambda x: x < pd.Period("2011-03", freq="M")
- exp = np.array([True, False, False, False], dtype=np.bool)
+ exp = np.array([True, False, False, False], dtype=np.bool_)
self._check(idx, f, exp)
f = lambda x: x > pd.NaT
- exp = np.array([False, False, False, False], dtype=np.bool)
+ exp = np.array([False, False, False, False], dtype=np.bool_)
self._check(idx, f, exp)
f = lambda x: pd.NaT >= x
- exp = np.array([False, False, False, False], dtype=np.bool)
+ exp = np.array([False, False, False, False], dtype=np.bool_)
self._check(idx, f, exp)
diff --git a/pandas/tests/arrays/boolean/test_logical.py b/pandas/tests/arrays/boolean/test_logical.py
index bf4775bbd7b32..e79262e1b7934 100644
--- a/pandas/tests/arrays/boolean/test_logical.py
+++ b/pandas/tests/arrays/boolean/test_logical.py
@@ -14,8 +14,8 @@ def test_numpy_scalars_ok(self, all_logical_operators):
a = pd.array([True, False, None], dtype="boolean")
op = getattr(a, all_logical_operators)
- tm.assert_extension_array_equal(op(True), op(np.bool(True)))
- tm.assert_extension_array_equal(op(False), op(np.bool(False)))
+ tm.assert_extension_array_equal(op(True), op(np.bool_(True)))
+ tm.assert_extension_array_equal(op(False), op(np.bool_(False)))
def get_op_from_name(self, op_name):
short_opname = op_name.strip("_")
diff --git a/pandas/tests/arrays/categorical/test_dtypes.py b/pandas/tests/arrays/categorical/test_dtypes.py
index 9922a8863ebc2..47ce9cb4089f9 100644
--- a/pandas/tests/arrays/categorical/test_dtypes.py
+++ b/pandas/tests/arrays/categorical/test_dtypes.py
@@ -127,11 +127,11 @@ def test_astype(self, ordered):
tm.assert_numpy_array_equal(result, expected)
result = cat.astype(int)
- expected = np.array(cat, dtype=np.int)
+ expected = np.array(cat, dtype=int)
tm.assert_numpy_array_equal(result, expected)
result = cat.astype(float)
- expected = np.array(cat, dtype=np.float)
+ expected = np.array(cat, dtype=float)
tm.assert_numpy_array_equal(result, expected)
@pytest.mark.parametrize("dtype_ordered", [True, False])
diff --git a/pandas/tests/arrays/sparse/test_arithmetics.py b/pandas/tests/arrays/sparse/test_arithmetics.py
index 4ae1c1e6b63ce..c9f1dd7f589fc 100644
--- a/pandas/tests/arrays/sparse/test_arithmetics.py
+++ b/pandas/tests/arrays/sparse/test_arithmetics.py
@@ -53,7 +53,7 @@ def _check_numeric_ops(self, a, b, a_dense, b_dense, mix, op):
def _check_bool_result(self, res):
assert isinstance(res, self._klass)
assert isinstance(res.dtype, SparseDtype)
- assert res.dtype.subtype == np.bool
+ assert res.dtype.subtype == np.bool_
assert isinstance(res.fill_value, bool)
def _check_comparison_ops(self, a, b, a_dense, b_dense):
@@ -306,22 +306,22 @@ def test_int_array_comparison(self, kind):
def test_bool_same_index(self, kind, fill_value):
# GH 14000
# when sp_index are the same
- values = self._base([True, False, True, True], dtype=np.bool)
- rvalues = self._base([True, False, True, True], dtype=np.bool)
+ values = self._base([True, False, True, True], dtype=np.bool_)
+ rvalues = self._base([True, False, True, True], dtype=np.bool_)
- a = self._klass(values, kind=kind, dtype=np.bool, fill_value=fill_value)
- b = self._klass(rvalues, kind=kind, dtype=np.bool, fill_value=fill_value)
+ a = self._klass(values, kind=kind, dtype=np.bool_, fill_value=fill_value)
+ b = self._klass(rvalues, kind=kind, dtype=np.bool_, fill_value=fill_value)
self._check_logical_ops(a, b, values, rvalues)
@pytest.mark.parametrize("fill_value", [True, False, np.nan])
def test_bool_array_logical(self, kind, fill_value):
# GH 14000
# when sp_index are the same
- values = self._base([True, False, True, False, True, True], dtype=np.bool)
- rvalues = self._base([True, False, False, True, False, True], dtype=np.bool)
+ values = self._base([True, False, True, False, True, True], dtype=np.bool_)
+ rvalues = self._base([True, False, False, True, False, True], dtype=np.bool_)
- a = self._klass(values, kind=kind, dtype=np.bool, fill_value=fill_value)
- b = self._klass(rvalues, kind=kind, dtype=np.bool, fill_value=fill_value)
+ a = self._klass(values, kind=kind, dtype=np.bool_, fill_value=fill_value)
+ b = self._klass(rvalues, kind=kind, dtype=np.bool_, fill_value=fill_value)
self._check_logical_ops(a, b, values, rvalues)
def test_mixed_array_float_int(self, kind, mix, all_arithmetic_functions):
diff --git a/pandas/tests/arrays/sparse/test_array.py b/pandas/tests/arrays/sparse/test_array.py
index 8450253f853c3..2f2907fbaaebc 100644
--- a/pandas/tests/arrays/sparse/test_array.py
+++ b/pandas/tests/arrays/sparse/test_array.py
@@ -74,22 +74,22 @@ def test_constructor_sparse_dtype_str(self):
def test_constructor_object_dtype(self):
# GH 11856
- arr = SparseArray(["A", "A", np.nan, "B"], dtype=np.object)
- assert arr.dtype == SparseDtype(np.object)
+ arr = SparseArray(["A", "A", np.nan, "B"], dtype=object)
+ assert arr.dtype == SparseDtype(object)
assert np.isnan(arr.fill_value)
- arr = SparseArray(["A", "A", np.nan, "B"], dtype=np.object, fill_value="A")
- assert arr.dtype == SparseDtype(np.object, "A")
+ arr = SparseArray(["A", "A", np.nan, "B"], dtype=object, fill_value="A")
+ assert arr.dtype == SparseDtype(object, "A")
assert arr.fill_value == "A"
# GH 17574
data = [False, 0, 100.0, 0.0]
- arr = SparseArray(data, dtype=np.object, fill_value=False)
- assert arr.dtype == SparseDtype(np.object, False)
+ arr = SparseArray(data, dtype=object, fill_value=False)
+ assert arr.dtype == SparseDtype(object, False)
assert arr.fill_value is False
- arr_expected = np.array(data, dtype=np.object)
+ arr_expected = np.array(data, dtype=object)
it = (type(x) == type(y) and x == y for x, y in zip(arr, arr_expected))
- assert np.fromiter(it, dtype=np.bool).all()
+ assert np.fromiter(it, dtype=np.bool_).all()
@pytest.mark.parametrize("dtype", [SparseDtype(int, 0), int])
def test_constructor_na_dtype(self, dtype):
@@ -445,15 +445,15 @@ def test_constructor_bool(self):
def test_constructor_bool_fill_value(self):
arr = SparseArray([True, False, True], dtype=None)
- assert arr.dtype == SparseDtype(np.bool)
+ assert arr.dtype == SparseDtype(np.bool_)
assert not arr.fill_value
- arr = SparseArray([True, False, True], dtype=np.bool)
- assert arr.dtype == SparseDtype(np.bool)
+ arr = SparseArray([True, False, True], dtype=np.bool_)
+ assert arr.dtype == SparseDtype(np.bool_)
assert not arr.fill_value
- arr = SparseArray([True, False, True], dtype=np.bool, fill_value=True)
- assert arr.dtype == SparseDtype(np.bool, True)
+ arr = SparseArray([True, False, True], dtype=np.bool_, fill_value=True)
+ assert arr.dtype == SparseDtype(np.bool_, True)
assert arr.fill_value
def test_constructor_float32(self):
@@ -588,7 +588,7 @@ def test_set_fill_value(self):
arr.fill_value = np.nan
assert np.isnan(arr.fill_value)
- arr = SparseArray([True, False, True], fill_value=False, dtype=np.bool)
+ arr = SparseArray([True, False, True], fill_value=False, dtype=np.bool_)
arr.fill_value = True
assert arr.fill_value
@@ -605,7 +605,7 @@ def test_set_fill_value(self):
@pytest.mark.parametrize("val", [[1, 2, 3], np.array([1, 2]), (1, 2, 3)])
def test_set_fill_invalid_non_scalar(self, val):
- arr = SparseArray([True, False, True], fill_value=False, dtype=np.bool)
+ arr = SparseArray([True, False, True], fill_value=False, dtype=np.bool_)
msg = "fill_value must be a scalar"
with pytest.raises(ValueError, match=msg):
@@ -625,7 +625,7 @@ def test_values_asarray(self):
([0, 0, 0, 0, 0], (5,), None),
([], (0,), None),
([0], (1,), None),
- (["A", "A", np.nan, "B"], (4,), np.object),
+ (["A", "A", np.nan, "B"], (4,), object),
],
)
def test_shape(self, data, shape, dtype):
diff --git a/pandas/tests/dtypes/cast/test_find_common_type.py b/pandas/tests/dtypes/cast/test_find_common_type.py
index ac7a5221d3469..8dac92f469703 100644
--- a/pandas/tests/dtypes/cast/test_find_common_type.py
+++ b/pandas/tests/dtypes/cast/test_find_common_type.py
@@ -11,7 +11,7 @@
((np.int64,), np.int64),
((np.uint64,), np.uint64),
((np.float32,), np.float32),
- ((np.object,), np.object),
+ ((object,), object),
# Into ints.
((np.int16, np.int64), np.int64),
((np.int32, np.uint32), np.int64),
@@ -25,20 +25,20 @@
((np.float16, np.int64), np.float64),
# Into others.
((np.complex128, np.int32), np.complex128),
- ((np.object, np.float32), np.object),
- ((np.object, np.int16), np.object),
+ ((object, np.float32), object),
+ ((object, np.int16), object),
# Bool with int.
- ((np.dtype("bool"), np.int64), np.object),
- ((np.dtype("bool"), np.int32), np.object),
- ((np.dtype("bool"), np.int16), np.object),
- ((np.dtype("bool"), np.int8), np.object),
- ((np.dtype("bool"), np.uint64), np.object),
- ((np.dtype("bool"), np.uint32), np.object),
- ((np.dtype("bool"), np.uint16), np.object),
- ((np.dtype("bool"), np.uint8), np.object),
+ ((np.dtype("bool"), np.int64), object),
+ ((np.dtype("bool"), np.int32), object),
+ ((np.dtype("bool"), np.int16), object),
+ ((np.dtype("bool"), np.int8), object),
+ ((np.dtype("bool"), np.uint64), object),
+ ((np.dtype("bool"), np.uint32), object),
+ ((np.dtype("bool"), np.uint16), object),
+ ((np.dtype("bool"), np.uint8), object),
# Bool with float.
- ((np.dtype("bool"), np.float64), np.object),
- ((np.dtype("bool"), np.float32), np.object),
+ ((np.dtype("bool"), np.float64), object),
+ ((np.dtype("bool"), np.float32), object),
(
(np.dtype("datetime64[ns]"), np.dtype("datetime64[ns]")),
np.dtype("datetime64[ns]"),
@@ -55,8 +55,8 @@
(np.dtype("timedelta64[ms]"), np.dtype("timedelta64[ns]")),
np.dtype("timedelta64[ns]"),
),
- ((np.dtype("datetime64[ns]"), np.dtype("timedelta64[ns]")), np.object),
- ((np.dtype("datetime64[ns]"), np.int64), np.object),
+ ((np.dtype("datetime64[ns]"), np.dtype("timedelta64[ns]")), object),
+ ((np.dtype("datetime64[ns]"), np.int64), object),
],
)
def test_numpy_dtypes(source_dtypes, expected_common_dtype):
@@ -72,7 +72,7 @@ def test_raises_empty_input():
"dtypes,exp_type",
[
([CategoricalDtype()], "category"),
- ([np.object, CategoricalDtype()], np.object),
+ ([object, CategoricalDtype()], object),
([CategoricalDtype(), CategoricalDtype()], "category"),
],
)
@@ -90,14 +90,14 @@ def test_datetimetz_dtype_match():
[
DatetimeTZDtype(unit="ns", tz="Asia/Tokyo"),
np.dtype("datetime64[ns]"),
- np.object,
+ object,
np.int64,
],
)
def test_datetimetz_dtype_mismatch(dtype2):
dtype = DatetimeTZDtype(unit="ns", tz="US/Eastern")
- assert find_common_type([dtype, dtype2]) == np.object
- assert find_common_type([dtype2, dtype]) == np.object
+ assert find_common_type([dtype, dtype2]) == object
+ assert find_common_type([dtype2, dtype]) == object
def test_period_dtype_match():
@@ -112,11 +112,11 @@ def test_period_dtype_match():
PeriodDtype(freq="2D"),
PeriodDtype(freq="H"),
np.dtype("datetime64[ns]"),
- np.object,
+ object,
np.int64,
],
)
def test_period_dtype_mismatch(dtype2):
dtype = PeriodDtype(freq="D")
- assert find_common_type([dtype, dtype2]) == np.object
- assert find_common_type([dtype2, dtype]) == np.object
+ assert find_common_type([dtype, dtype2]) == object
+ assert find_common_type([dtype2, dtype]) == object
diff --git a/pandas/tests/dtypes/cast/test_infer_dtype.py b/pandas/tests/dtypes/cast/test_infer_dtype.py
index 2744cfa8ddc62..70d38aad951cc 100644
--- a/pandas/tests/dtypes/cast/test_infer_dtype.py
+++ b/pandas/tests/dtypes/cast/test_infer_dtype.py
@@ -43,7 +43,9 @@ def test_infer_dtype_from_float_scalar(float_dtype):
assert dtype == float_dtype
-@pytest.mark.parametrize("data,exp_dtype", [(12, np.int64), (np.float(12), np.float64)])
+@pytest.mark.parametrize(
+ "data,exp_dtype", [(12, np.int64), (np.float_(12), np.float64)]
+)
def test_infer_dtype_from_python_scalar(data, exp_dtype):
dtype, val = infer_dtype_from_scalar(data)
assert dtype == exp_dtype
@@ -184,8 +186,8 @@ def test_infer_dtype_from_array(arr, expected, pandas_dtype):
(1, np.int64),
(1.1, np.float64),
(Timestamp("2011-01-01"), "datetime64[ns]"),
- (Timestamp("2011-01-01", tz="US/Eastern"), np.object),
- (Period("2011-01-01", freq="D"), np.object),
+ (Timestamp("2011-01-01", tz="US/Eastern"), object),
+ (Period("2011-01-01", freq="D"), object),
],
)
def test_cast_scalar_to_array(obj, dtype):
diff --git a/pandas/tests/dtypes/test_common.py b/pandas/tests/dtypes/test_common.py
index 1708139a397ab..ce12718e48d0d 100644
--- a/pandas/tests/dtypes/test_common.py
+++ b/pandas/tests/dtypes/test_common.py
@@ -112,7 +112,7 @@ def test_period_dtype(self, dtype):
period=PeriodDtype("D"),
integer=np.dtype(np.int64),
float=np.dtype(np.float64),
- object=np.dtype(np.object),
+ object=np.dtype(object),
category=com.pandas_dtype("category"),
)
@@ -547,7 +547,7 @@ def test_is_bool_dtype():
assert not com.is_bool_dtype(pd.Index(["a", "b"]))
assert com.is_bool_dtype(bool)
- assert com.is_bool_dtype(np.bool)
+ assert com.is_bool_dtype(np.bool_)
assert com.is_bool_dtype(np.array([True, False]))
assert com.is_bool_dtype(pd.Index([True, False]))
@@ -615,7 +615,8 @@ def test_is_complex_dtype():
assert not com.is_complex_dtype(pd.Series([1, 2]))
assert not com.is_complex_dtype(np.array(["a", "b"]))
- assert com.is_complex_dtype(np.complex)
+ assert com.is_complex_dtype(np.complex_)
+ assert com.is_complex_dtype(complex)
assert com.is_complex_dtype(np.array([1 + 1j, 5]))
diff --git a/pandas/tests/dtypes/test_dtypes.py b/pandas/tests/dtypes/test_dtypes.py
index 3b9d3dc0b91f6..b1fe673e9e2f1 100644
--- a/pandas/tests/dtypes/test_dtypes.py
+++ b/pandas/tests/dtypes/test_dtypes.py
@@ -951,7 +951,7 @@ def test_registry_find(dtype, expected):
(str, False),
(int, False),
(bool, True),
- (np.bool, True),
+ (np.bool_, True),
(np.array(["a", "b"]), False),
(pd.Series([1, 2]), False),
(np.array([True, False]), True),
diff --git a/pandas/tests/dtypes/test_inference.py b/pandas/tests/dtypes/test_inference.py
index e97716f7a5e9c..e40a12f7bc8d1 100644
--- a/pandas/tests/dtypes/test_inference.py
+++ b/pandas/tests/dtypes/test_inference.py
@@ -1246,7 +1246,6 @@ def test_is_number(self):
assert is_number(1)
assert is_number(1.1)
assert is_number(1 + 3j)
- assert is_number(np.bool(False))
assert is_number(np.int64(1))
assert is_number(np.float64(1.1))
assert is_number(np.complex128(1 + 3j))
@@ -1267,7 +1266,7 @@ def test_is_number(self):
def test_is_bool(self):
assert is_bool(True)
- assert is_bool(np.bool(False))
+ assert is_bool(False)
assert is_bool(np.bool_(False))
assert not is_bool(1)
@@ -1294,7 +1293,7 @@ def test_is_integer(self):
assert not is_integer(True)
assert not is_integer(1.1)
assert not is_integer(1 + 3j)
- assert not is_integer(np.bool(False))
+ assert not is_integer(False)
assert not is_integer(np.bool_(False))
assert not is_integer(np.float64(1.1))
assert not is_integer(np.complex128(1 + 3j))
@@ -1317,7 +1316,7 @@ def test_is_float(self):
assert not is_float(True)
assert not is_float(1)
assert not is_float(1 + 3j)
- assert not is_float(np.bool(False))
+ assert not is_float(False)
assert not is_float(np.bool_(False))
assert not is_float(np.int64(1))
assert not is_float(np.complex128(1 + 3j))
diff --git a/pandas/tests/extension/base/setitem.py b/pandas/tests/extension/base/setitem.py
index eed9a584cc030..bfa53ad02525b 100644
--- a/pandas/tests/extension/base/setitem.py
+++ b/pandas/tests/extension/base/setitem.py
@@ -319,13 +319,13 @@ def test_setitem_dataframe_column_without_index(self, data):
def test_setitem_series_with_index(self, data):
# https://github.com/pandas-dev/pandas/issues/32395
ser = expected = pd.Series(data, name="data")
- result = pd.Series(index=ser.index, dtype=np.object, name="data")
+ result = pd.Series(index=ser.index, dtype=object, name="data")
result.loc[ser.index] = ser
self.assert_series_equal(result, expected)
def test_setitem_series_without_index(self, data):
# https://github.com/pandas-dev/pandas/issues/32395
ser = expected = pd.Series(data, name="data")
- result = pd.Series(index=ser.index, dtype=np.object, name="data")
+ result = pd.Series(index=ser.index, dtype=object, name="data")
result.loc[:] = ser
self.assert_series_equal(result, expected)
diff --git a/pandas/tests/frame/methods/test_duplicated.py b/pandas/tests/frame/methods/test_duplicated.py
index 82fd6d88b82b9..7a1c16adc2a09 100644
--- a/pandas/tests/frame/methods/test_duplicated.py
+++ b/pandas/tests/frame/methods/test_duplicated.py
@@ -30,7 +30,7 @@ def test_duplicated_do_not_fail_on_wide_dataframes():
# calculation. Actual values doesn't matter here, though usually it's all
# False in this case
assert isinstance(result, Series)
- assert result.dtype == np.bool
+ assert result.dtype == np.bool_
@pytest.mark.parametrize(
diff --git a/pandas/tests/frame/methods/test_isin.py b/pandas/tests/frame/methods/test_isin.py
index 6307738021f68..79ea70a38f145 100644
--- a/pandas/tests/frame/methods/test_isin.py
+++ b/pandas/tests/frame/methods/test_isin.py
@@ -164,7 +164,7 @@ def test_isin_multiIndex(self):
tm.assert_frame_equal(result, expected)
df2.index = idx
- expected = df2.values.astype(np.bool)
+ expected = df2.values.astype(bool)
expected[:, 1] = ~expected[:, 1]
expected = DataFrame(expected, columns=["A", "B"], index=idx)
diff --git a/pandas/tests/frame/methods/test_to_records.py b/pandas/tests/frame/methods/test_to_records.py
index 34b323e55d8cd..d9c999c9119f4 100644
--- a/pandas/tests/frame/methods/test_to_records.py
+++ b/pandas/tests/frame/methods/test_to_records.py
@@ -163,7 +163,7 @@ def test_to_records_with_categorical(self):
),
# Pass in a type instance.
(
- dict(column_dtypes=np.unicode),
+ dict(column_dtypes=str),
np.rec.array(
[("0", "1", "0.2", "a"), ("1", "2", "1.5", "bc")],
dtype=[("index", "<i8"), ("A", "<U"), ("B", "<U"), ("C", "<U")],
diff --git a/pandas/tests/frame/test_analytics.py b/pandas/tests/frame/test_analytics.py
index b4842e8d5e8ed..db21161f84cf7 100644
--- a/pandas/tests/frame/test_analytics.py
+++ b/pandas/tests/frame/test_analytics.py
@@ -1021,7 +1021,7 @@ def test_any_all_bool_only(self):
)
result = df.all(bool_only=True)
- expected = Series(dtype=np.bool)
+ expected = Series(dtype=np.bool_)
tm.assert_series_equal(result, expected)
df = DataFrame(
diff --git a/pandas/tests/frame/test_dtypes.py b/pandas/tests/frame/test_dtypes.py
index 9d0c221923cda..9c415564fd99a 100644
--- a/pandas/tests/frame/test_dtypes.py
+++ b/pandas/tests/frame/test_dtypes.py
@@ -37,15 +37,13 @@ def test_concat_empty_dataframe_dtypes(self):
def test_empty_frame_dtypes(self):
empty_df = pd.DataFrame()
- tm.assert_series_equal(empty_df.dtypes, pd.Series(dtype=np.object))
+ tm.assert_series_equal(empty_df.dtypes, pd.Series(dtype=object))
nocols_df = pd.DataFrame(index=[1, 2, 3])
- tm.assert_series_equal(nocols_df.dtypes, pd.Series(dtype=np.object))
+ tm.assert_series_equal(nocols_df.dtypes, pd.Series(dtype=object))
norows_df = pd.DataFrame(columns=list("abc"))
- tm.assert_series_equal(
- norows_df.dtypes, pd.Series(np.object, index=list("abc"))
- )
+ tm.assert_series_equal(norows_df.dtypes, pd.Series(object, index=list("abc")))
norows_int_df = pd.DataFrame(columns=list("abc")).astype(np.int32)
tm.assert_series_equal(
@@ -55,7 +53,7 @@ def test_empty_frame_dtypes(self):
odict = OrderedDict
df = pd.DataFrame(odict([("a", 1), ("b", True), ("c", 1.0)]), index=[1, 2, 3])
ex_dtypes = pd.Series(
- odict([("a", np.int64), ("b", np.bool), ("c", np.float64)])
+ odict([("a", np.int64), ("b", np.bool_), ("c", np.float64)])
)
tm.assert_series_equal(df.dtypes, ex_dtypes)
diff --git a/pandas/tests/frame/test_reshape.py b/pandas/tests/frame/test_reshape.py
index a6c4089dc71e6..1634baacf6d6e 100644
--- a/pandas/tests/frame/test_reshape.py
+++ b/pandas/tests/frame/test_reshape.py
@@ -171,7 +171,7 @@ def test_unstack_fill(self):
# From a series with incorrect data type for fill_value
result = data.unstack(fill_value=0.5)
expected = DataFrame(
- {"a": [1, 0.5, 5], "b": [2, 4, 0.5]}, index=["x", "y", "z"], dtype=np.float
+ {"a": [1, 0.5, 5], "b": [2, 4, 0.5]}, index=["x", "y", "z"], dtype=float
)
tm.assert_frame_equal(result, expected)
@@ -229,7 +229,7 @@ def test_unstack_fill_frame(self):
result = df.unstack(fill_value=0.5)
rows = [[1, 3, 2, 4], [0.5, 5, 0.5, 6], [7, 0.5, 8, 0.5]]
- expected = DataFrame(rows, index=list("xyz"), dtype=np.float)
+ expected = DataFrame(rows, index=list("xyz"), dtype=float)
expected.columns = MultiIndex.from_tuples(
[("A", "a"), ("A", "b"), ("B", "a"), ("B", "b")]
)
diff --git a/pandas/tests/frame/test_to_csv.py b/pandas/tests/frame/test_to_csv.py
index 9c656dd69abe2..2b7b3af8f4705 100644
--- a/pandas/tests/frame/test_to_csv.py
+++ b/pandas/tests/frame/test_to_csv.py
@@ -771,8 +771,8 @@ def create_cols(name):
for n, dtype in [
("float", np.float64),
("int", np.int64),
- ("bool", np.bool),
- ("object", np.object),
+ ("bool", np.bool_),
+ ("object", object),
]:
for c in create_cols(n):
dtypes[c] = dtype
diff --git a/pandas/tests/generic/test_finalize.py b/pandas/tests/generic/test_finalize.py
index a152bc203721f..4d0f1a326225d 100644
--- a/pandas/tests/generic/test_finalize.py
+++ b/pandas/tests/generic/test_finalize.py
@@ -700,8 +700,6 @@ def test_datetime_method(method):
"second",
"microsecond",
"nanosecond",
- "week",
- "weekofyear",
"dayofweek",
"dayofyear",
"quarter",
diff --git a/pandas/tests/groupby/aggregate/test_aggregate.py b/pandas/tests/groupby/aggregate/test_aggregate.py
index 1b726860eeb66..96db519578106 100644
--- a/pandas/tests/groupby/aggregate/test_aggregate.py
+++ b/pandas/tests/groupby/aggregate/test_aggregate.py
@@ -232,7 +232,7 @@ def test_wrap_agg_out(three_group):
grouped = three_group.groupby(["A", "B"])
def func(ser):
- if ser.dtype == np.object:
+ if ser.dtype == object:
raise TypeError
else:
return ser.sum()
diff --git a/pandas/tests/groupby/test_apply.py b/pandas/tests/groupby/test_apply.py
index 8468a21904bf8..d03b03b3f862c 100644
--- a/pandas/tests/groupby/test_apply.py
+++ b/pandas/tests/groupby/test_apply.py
@@ -868,7 +868,7 @@ def test_groupby_apply_datetime_result_dtypes():
)
result = data.groupby("color").apply(lambda g: g.iloc[0]).dtypes
expected = Series(
- [np.dtype("datetime64[ns]"), np.object, np.object, np.int64, np.object],
+ [np.dtype("datetime64[ns]"), object, object, np.int64, object],
index=["observation", "color", "mood", "intensity", "score"],
)
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/indexes/categorical/test_category.py b/pandas/tests/indexes/categorical/test_category.py
index 8a84090ea6e94..7a1ccba08853b 100644
--- a/pandas/tests/indexes/categorical/test_category.py
+++ b/pandas/tests/indexes/categorical/test_category.py
@@ -270,9 +270,9 @@ def test_has_duplicates(self):
[2, "a", "b"],
list("abc"),
{
- "first": np.zeros(shape=(3), dtype=np.bool),
- "last": np.zeros(shape=(3), dtype=np.bool),
- False: np.zeros(shape=(3), dtype=np.bool),
+ "first": np.zeros(shape=(3), dtype=np.bool_),
+ "last": np.zeros(shape=(3), dtype=np.bool_),
+ False: np.zeros(shape=(3), dtype=np.bool_),
},
),
(
diff --git a/pandas/tests/indexes/common.py b/pandas/tests/indexes/common.py
index 37ff97f028e81..ae297bf1069b0 100644
--- a/pandas/tests/indexes/common.py
+++ b/pandas/tests/indexes/common.py
@@ -795,10 +795,10 @@ def test_putmask_with_wrong_mask(self):
msg = "putmask: mask and data must be the same size"
with pytest.raises(ValueError, match=msg):
- index.putmask(np.ones(len(index) + 1, np.bool), 1)
+ index.putmask(np.ones(len(index) + 1, np.bool_), 1)
with pytest.raises(ValueError, match=msg):
- index.putmask(np.ones(len(index) - 1, np.bool), 1)
+ index.putmask(np.ones(len(index) - 1, np.bool_), 1)
with pytest.raises(ValueError, match=msg):
index.putmask("foo", 1)
diff --git a/pandas/tests/indexes/multi/test_indexing.py b/pandas/tests/indexes/multi/test_indexing.py
index 03ae2ae6a1f85..6b27682ed5674 100644
--- a/pandas/tests/indexes/multi/test_indexing.py
+++ b/pandas/tests/indexes/multi/test_indexing.py
@@ -132,10 +132,10 @@ def test_putmask_with_wrong_mask(idx):
msg = "putmask: mask and data must be the same size"
with pytest.raises(ValueError, match=msg):
- idx.putmask(np.ones(len(idx) + 1, np.bool), 1)
+ idx.putmask(np.ones(len(idx) + 1, np.bool_), 1)
with pytest.raises(ValueError, match=msg):
- idx.putmask(np.ones(len(idx) - 1, np.bool), 1)
+ idx.putmask(np.ones(len(idx) - 1, np.bool_), 1)
with pytest.raises(ValueError, match=msg):
idx.putmask("foo", 1)
diff --git a/pandas/tests/indexes/period/test_period.py b/pandas/tests/indexes/period/test_period.py
index 47617802be11c..8d767663fc208 100644
--- a/pandas/tests/indexes/period/test_period.py
+++ b/pandas/tests/indexes/period/test_period.py
@@ -121,7 +121,7 @@ def test_view_asi8(self):
def test_values(self):
idx = PeriodIndex([], freq="M")
- exp = np.array([], dtype=np.object)
+ exp = np.array([], dtype=object)
tm.assert_numpy_array_equal(idx.values, exp)
tm.assert_numpy_array_equal(idx.to_numpy(), exp)
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index 466b491eb7a2c..f31b49ab82f3b 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -1375,8 +1375,8 @@ def test_get_indexer_with_NA_values(
# is mangled
if unique_nulls_fixture is unique_nulls_fixture2:
return # skip it, values are not unique
- arr = np.array([unique_nulls_fixture, unique_nulls_fixture2], dtype=np.object)
- index = pd.Index(arr, dtype=np.object)
+ arr = np.array([unique_nulls_fixture, unique_nulls_fixture2], dtype=object)
+ index = pd.Index(arr, dtype=object)
result = index.get_indexer(
[unique_nulls_fixture, unique_nulls_fixture2, "Unknown"]
)
diff --git a/pandas/tests/indexing/test_coercion.py b/pandas/tests/indexing/test_coercion.py
index 8c528a521f0ed..1512c88a68778 100644
--- a/pandas/tests/indexing/test_coercion.py
+++ b/pandas/tests/indexing/test_coercion.py
@@ -87,19 +87,18 @@ def _assert_setitem_series_conversion(
# tm.assert_series_equal(temp, expected_series)
@pytest.mark.parametrize(
- "val,exp_dtype",
- [(1, np.object), (1.1, np.object), (1 + 1j, np.object), (True, np.object)],
+ "val,exp_dtype", [(1, object), (1.1, object), (1 + 1j, object), (True, object)],
)
def test_setitem_series_object(self, val, exp_dtype):
obj = pd.Series(list("abcd"))
- assert obj.dtype == np.object
+ assert obj.dtype == object
exp = pd.Series(["a", val, "c", "d"])
self._assert_setitem_series_conversion(obj, val, exp, exp_dtype)
@pytest.mark.parametrize(
"val,exp_dtype",
- [(1, np.int64), (1.1, np.float64), (1 + 1j, np.complex128), (True, np.object)],
+ [(1, np.int64), (1.1, np.float64), (1 + 1j, np.complex128), (True, object)],
)
def test_setitem_series_int64(self, val, exp_dtype, request):
obj = pd.Series([1, 2, 3, 4])
@@ -134,12 +133,7 @@ def test_setitem_series_int8(self, val, exp_dtype, request):
@pytest.mark.parametrize(
"val,exp_dtype",
- [
- (1, np.float64),
- (1.1, np.float64),
- (1 + 1j, np.complex128),
- (True, np.object),
- ],
+ [(1, np.float64), (1.1, np.float64), (1 + 1j, np.complex128), (True, object)],
)
def test_setitem_series_float64(self, val, exp_dtype):
obj = pd.Series([1.1, 2.2, 3.3, 4.4])
@@ -154,7 +148,7 @@ def test_setitem_series_float64(self, val, exp_dtype):
(1, np.complex128),
(1.1, np.complex128),
(1 + 1j, np.complex128),
- (True, np.object),
+ (True, object),
],
)
def test_setitem_series_complex128(self, val, exp_dtype):
@@ -171,25 +165,25 @@ def test_setitem_series_complex128(self, val, exp_dtype):
(3, np.int64),
(1.1, np.float64),
(1 + 1j, np.complex128),
- (True, np.bool),
+ (True, np.bool_),
],
)
def test_setitem_series_bool(self, val, exp_dtype, request):
obj = pd.Series([True, False, True, False])
- assert obj.dtype == np.bool
+ assert obj.dtype == np.bool_
mark = None
if exp_dtype is np.int64:
exp = pd.Series([True, True, True, False])
- self._assert_setitem_series_conversion(obj, val, exp, np.bool)
+ self._assert_setitem_series_conversion(obj, val, exp, np.bool_)
mark = pytest.mark.xfail(reason="TODO_GH12747 The result must be int")
elif exp_dtype is np.float64:
exp = pd.Series([True, True, True, False])
- self._assert_setitem_series_conversion(obj, val, exp, np.bool)
+ self._assert_setitem_series_conversion(obj, val, exp, np.bool_)
mark = pytest.mark.xfail(reason="TODO_GH12747 The result must be float")
elif exp_dtype is np.complex128:
exp = pd.Series([True, True, True, False])
- self._assert_setitem_series_conversion(obj, val, exp, np.bool)
+ self._assert_setitem_series_conversion(obj, val, exp, np.bool_)
mark = pytest.mark.xfail(reason="TODO_GH12747 The result must be complex")
if mark is not None:
request.node.add_marker(mark)
@@ -199,11 +193,7 @@ def test_setitem_series_bool(self, val, exp_dtype, request):
@pytest.mark.parametrize(
"val,exp_dtype",
- [
- (pd.Timestamp("2012-01-01"), "datetime64[ns]"),
- (1, np.object),
- ("x", np.object),
- ],
+ [(pd.Timestamp("2012-01-01"), "datetime64[ns]"), (1, object), ("x", object)],
)
def test_setitem_series_datetime64(self, val, exp_dtype):
obj = pd.Series(
@@ -230,9 +220,9 @@ def test_setitem_series_datetime64(self, val, exp_dtype):
"val,exp_dtype",
[
(pd.Timestamp("2012-01-01", tz="US/Eastern"), "datetime64[ns, US/Eastern]"),
- (pd.Timestamp("2012-01-01", tz="US/Pacific"), np.object),
- (pd.Timestamp("2012-01-01"), np.object),
- (1, np.object),
+ (pd.Timestamp("2012-01-01", tz="US/Pacific"), object),
+ (pd.Timestamp("2012-01-01"), object),
+ (1, object),
],
)
def test_setitem_series_datetime64tz(self, val, exp_dtype):
@@ -259,7 +249,7 @@ def test_setitem_series_datetime64tz(self, val, exp_dtype):
@pytest.mark.parametrize(
"val,exp_dtype",
- [(pd.Timedelta("12 day"), "timedelta64[ns]"), (1, np.object), ("x", np.object)],
+ [(pd.Timedelta("12 day"), "timedelta64[ns]"), (1, object), ("x", object)],
)
def test_setitem_series_timedelta64(self, val, exp_dtype):
obj = pd.Series(
@@ -296,11 +286,11 @@ def _assert_setitem_index_conversion(
assert temp.index.dtype == expected_dtype
@pytest.mark.parametrize(
- "val,exp_dtype", [("x", np.object), (5, IndexError), (1.1, np.object)]
+ "val,exp_dtype", [("x", object), (5, IndexError), (1.1, object)]
)
def test_setitem_index_object(self, val, exp_dtype):
obj = pd.Series([1, 2, 3, 4], index=list("abcd"))
- assert obj.index.dtype == np.object
+ assert obj.index.dtype == object
if exp_dtype is IndexError:
temp = obj.copy()
@@ -312,7 +302,7 @@ def test_setitem_index_object(self, val, exp_dtype):
self._assert_setitem_index_conversion(obj, val, exp_index, exp_dtype)
@pytest.mark.parametrize(
- "val,exp_dtype", [(5, np.int64), (1.1, np.float64), ("x", np.object)]
+ "val,exp_dtype", [(5, np.int64), (1.1, np.float64), ("x", object)]
)
def test_setitem_index_int64(self, val, exp_dtype):
obj = pd.Series([1, 2, 3, 4])
@@ -322,7 +312,7 @@ def test_setitem_index_int64(self, val, exp_dtype):
self._assert_setitem_index_conversion(obj, val, exp_index, exp_dtype)
@pytest.mark.parametrize(
- "val,exp_dtype", [(5, IndexError), (5.1, np.float64), ("x", np.object)]
+ "val,exp_dtype", [(5, IndexError), (5.1, np.float64), ("x", object)]
)
def test_setitem_index_float64(self, val, exp_dtype, request):
obj = pd.Series([1, 2, 3, 4], index=[1.1, 2.1, 3.1, 4.1])
@@ -375,15 +365,15 @@ def _assert_insert_conversion(self, original, value, expected, expected_dtype):
@pytest.mark.parametrize(
"insert, coerced_val, coerced_dtype",
[
- (1, 1, np.object),
- (1.1, 1.1, np.object),
- (False, False, np.object),
- ("x", "x", np.object),
+ (1, 1, object),
+ (1.1, 1.1, object),
+ (False, False, object),
+ ("x", "x", object),
],
)
def test_insert_index_object(self, insert, coerced_val, coerced_dtype):
obj = pd.Index(list("abcd"))
- assert obj.dtype == np.object
+ assert obj.dtype == object
exp = pd.Index(["a", coerced_val, "b", "c", "d"])
self._assert_insert_conversion(obj, insert, exp, coerced_dtype)
@@ -394,7 +384,7 @@ def test_insert_index_object(self, insert, coerced_val, coerced_dtype):
(1, 1, np.int64),
(1.1, 1.1, np.float64),
(False, 0, np.int64),
- ("x", "x", np.object),
+ ("x", "x", object),
],
)
def test_insert_index_int64(self, insert, coerced_val, coerced_dtype):
@@ -410,7 +400,7 @@ def test_insert_index_int64(self, insert, coerced_val, coerced_dtype):
(1, 1.0, np.float64),
(1.1, 1.1, np.float64),
(False, 0.0, np.float64),
- ("x", "x", np.object),
+ ("x", "x", object),
],
)
def test_insert_index_float64(self, insert, coerced_val, coerced_dtype):
@@ -484,9 +474,9 @@ def test_insert_index_timedelta64(self):
"insert, coerced_val, coerced_dtype",
[
(pd.Period("2012-01", freq="M"), "2012-01", "period[M]"),
- (pd.Timestamp("2012-01-01"), pd.Timestamp("2012-01-01"), np.object),
- (1, 1, np.object),
- ("x", "x", np.object),
+ (pd.Timestamp("2012-01-01"), pd.Timestamp("2012-01-01"), object),
+ (1, 1, object),
+ ("x", "x", object),
],
)
def test_insert_index_period(self, insert, coerced_val, coerced_dtype):
@@ -529,12 +519,12 @@ def _assert_where_conversion(
@pytest.mark.parametrize(
"fill_val,exp_dtype",
- [(1, np.object), (1.1, np.object), (1 + 1j, np.object), (True, np.object)],
+ [(1, object), (1.1, object), (1 + 1j, object), (True, object)],
)
def test_where_object(self, index_or_series, fill_val, exp_dtype):
klass = index_or_series
obj = klass(list("abcd"))
- assert obj.dtype == np.object
+ assert obj.dtype == object
cond = klass([True, False, True, False])
if fill_val is True and klass is pd.Series:
@@ -555,7 +545,7 @@ def test_where_object(self, index_or_series, fill_val, exp_dtype):
@pytest.mark.parametrize(
"fill_val,exp_dtype",
- [(1, np.int64), (1.1, np.float64), (1 + 1j, np.complex128), (True, np.object)],
+ [(1, np.int64), (1.1, np.float64), (1 + 1j, np.complex128), (True, object)],
)
def test_where_int64(self, index_or_series, fill_val, exp_dtype):
klass = index_or_series
@@ -577,12 +567,7 @@ def test_where_int64(self, index_or_series, fill_val, exp_dtype):
@pytest.mark.parametrize(
"fill_val, exp_dtype",
- [
- (1, np.float64),
- (1.1, np.float64),
- (1 + 1j, np.complex128),
- (True, np.object),
- ],
+ [(1, np.float64), (1.1, np.float64), (1 + 1j, np.complex128), (True, object)],
)
def test_where_float64(self, index_or_series, fill_val, exp_dtype):
klass = index_or_series
@@ -608,7 +593,7 @@ def test_where_float64(self, index_or_series, fill_val, exp_dtype):
(1, np.complex128),
(1.1, np.complex128),
(1 + 1j, np.complex128),
- (True, np.object),
+ (True, object),
],
)
def test_where_series_complex128(self, fill_val, exp_dtype):
@@ -628,12 +613,12 @@ def test_where_series_complex128(self, fill_val, exp_dtype):
@pytest.mark.parametrize(
"fill_val,exp_dtype",
- [(1, np.object), (1.1, np.object), (1 + 1j, np.object), (True, np.bool)],
+ [(1, object), (1.1, object), (1 + 1j, object), (True, np.bool_)],
)
def test_where_series_bool(self, fill_val, exp_dtype):
obj = pd.Series([True, False, True, False])
- assert obj.dtype == np.bool
+ assert obj.dtype == np.bool_
cond = pd.Series([True, False, True, False])
exp = pd.Series([True, fill_val, True, fill_val])
@@ -650,7 +635,7 @@ def test_where_series_bool(self, fill_val, exp_dtype):
"fill_val,exp_dtype",
[
(pd.Timestamp("2012-01-01"), "datetime64[ns]"),
- (pd.Timestamp("2012-01-01", tz="US/Eastern"), np.object),
+ (pd.Timestamp("2012-01-01", tz="US/Eastern"), object),
],
ids=["datetime64", "datetime64tz"],
)
@@ -733,7 +718,7 @@ def test_where_index_datetime(self, fill_val):
@pytest.mark.xfail(reason="GH 22839: do not ignore timezone, must be object")
def test_where_index_datetime64tz(self):
fill_val = pd.Timestamp("2012-01-01", tz="US/Eastern")
- exp_dtype = np.object
+ exp_dtype = object
obj = pd.Index(
[
pd.Timestamp("2011-01-01"),
@@ -834,24 +819,19 @@ def _assert_fillna_conversion(self, original, value, expected, expected_dtype):
@pytest.mark.parametrize(
"fill_val, fill_dtype",
- [(1, np.object), (1.1, np.object), (1 + 1j, np.object), (True, np.object)],
+ [(1, object), (1.1, object), (1 + 1j, object), (True, object)],
)
def test_fillna_object(self, index_or_series, fill_val, fill_dtype):
klass = index_or_series
obj = klass(["a", np.nan, "c", "d"])
- assert obj.dtype == np.object
+ assert obj.dtype == object
exp = klass(["a", fill_val, "c", "d"])
self._assert_fillna_conversion(obj, fill_val, exp, fill_dtype)
@pytest.mark.parametrize(
"fill_val,fill_dtype",
- [
- (1, np.float64),
- (1.1, np.float64),
- (1 + 1j, np.complex128),
- (True, np.object),
- ],
+ [(1, np.float64), (1.1, np.float64), (1 + 1j, np.complex128), (True, object)],
)
def test_fillna_float64(self, index_or_series, fill_val, fill_dtype):
klass = index_or_series
@@ -863,7 +843,7 @@ def test_fillna_float64(self, index_or_series, fill_val, fill_dtype):
# complex for Series,
# object for Index
if fill_dtype == np.complex128 and klass == pd.Index:
- fill_dtype = np.object
+ fill_dtype = object
self._assert_fillna_conversion(obj, fill_val, exp, fill_dtype)
@pytest.mark.parametrize(
@@ -872,7 +852,7 @@ def test_fillna_float64(self, index_or_series, fill_val, fill_dtype):
(1, np.complex128),
(1.1, np.complex128),
(1 + 1j, np.complex128),
- (True, np.object),
+ (True, object),
],
)
def test_fillna_series_complex128(self, fill_val, fill_dtype):
@@ -886,9 +866,9 @@ def test_fillna_series_complex128(self, fill_val, fill_dtype):
"fill_val,fill_dtype",
[
(pd.Timestamp("2012-01-01"), "datetime64[ns]"),
- (pd.Timestamp("2012-01-01", tz="US/Eastern"), np.object),
- (1, np.object),
- ("x", np.object),
+ (pd.Timestamp("2012-01-01", tz="US/Eastern"), object),
+ (1, object),
+ ("x", object),
],
ids=["datetime64", "datetime64tz", "object", "object"],
)
@@ -918,10 +898,10 @@ def test_fillna_datetime(self, index_or_series, fill_val, fill_dtype):
"fill_val,fill_dtype",
[
(pd.Timestamp("2012-01-01", tz="US/Eastern"), "datetime64[ns, US/Eastern]"),
- (pd.Timestamp("2012-01-01"), np.object),
- (pd.Timestamp("2012-01-01", tz="Asia/Tokyo"), np.object),
- (1, np.object),
- ("x", np.object),
+ (pd.Timestamp("2012-01-01"), object),
+ (pd.Timestamp("2012-01-01", tz="Asia/Tokyo"), object),
+ (1, object),
+ ("x", object),
],
)
def test_fillna_datetime64tz(self, index_or_series, fill_val, fill_dtype):
diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py
index 51a7aa9bb586b..dd63a26f139e9 100644
--- a/pandas/tests/indexing/test_indexing.py
+++ b/pandas/tests/indexing/test_indexing.py
@@ -28,7 +28,7 @@ def test_setitem_ndarray_1d(self):
# len of indexer vs length of the 1d ndarray
df = DataFrame(index=Index(np.arange(1, 11)))
df["foo"] = np.zeros(10, dtype=np.float64)
- df["bar"] = np.zeros(10, dtype=np.complex)
+ df["bar"] = np.zeros(10, dtype=complex)
# invalid
with pytest.raises(ValueError):
@@ -46,7 +46,7 @@ def test_setitem_ndarray_1d(self):
# dtype getting changed?
df = DataFrame(index=Index(np.arange(1, 11)))
df["foo"] = np.zeros(10, dtype=np.float64)
- df["bar"] = np.zeros(10, dtype=np.complex)
+ df["bar"] = np.zeros(10, dtype=complex)
with pytest.raises(ValueError):
df[2:5] = np.arange(1, 4) * 1j
diff --git a/pandas/tests/indexing/test_loc.py b/pandas/tests/indexing/test_loc.py
index 30416985f2020..47980e88f76d4 100644
--- a/pandas/tests/indexing/test_loc.py
+++ b/pandas/tests/indexing/test_loc.py
@@ -925,7 +925,7 @@ def test_loc_setitem_empty_append(self):
# only appends one value
expected = DataFrame({"x": [1.0], "y": [np.nan]})
- df = DataFrame(columns=["x", "y"], dtype=np.float)
+ df = DataFrame(columns=["x", "y"], dtype=float)
df.loc[0, "x"] = expected.loc[0, "x"]
tm.assert_frame_equal(df, expected)
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index c1850826926d8..23cad043f2177 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -2813,7 +2813,7 @@ def test_to_string_multindex_header(self):
class TestGenericArrayFormatter:
def test_1d_array(self):
# GenericArrayFormatter is used on types for which there isn't a dedicated
- # formatter. np.bool is one of those types.
+ # formatter. np.bool_ is one of those types.
obj = fmt.GenericArrayFormatter(np.array([True, False]))
res = obj.get_result()
assert len(res) == 2
diff --git a/pandas/tests/io/json/test_json_table_schema.py b/pandas/tests/io/json/test_json_table_schema.py
index 0437052e2740d..df64af6ac2265 100644
--- a/pandas/tests/io/json/test_json_table_schema.py
+++ b/pandas/tests/io/json/test_json_table_schema.py
@@ -100,21 +100,19 @@ def test_multiindex(self):
class TestTableSchemaType:
- @pytest.mark.parametrize("int_type", [np.int, np.int16, np.int32, np.int64])
+ @pytest.mark.parametrize("int_type", [int, np.int16, np.int32, np.int64])
def test_as_json_table_type_int_data(self, int_type):
int_data = [1, 2, 3]
assert as_json_table_type(np.array(int_data, dtype=int_type).dtype) == "integer"
- @pytest.mark.parametrize(
- "float_type", [np.float, np.float16, np.float32, np.float64]
- )
+ @pytest.mark.parametrize("float_type", [float, np.float16, np.float32, np.float64])
def test_as_json_table_type_float_data(self, float_type):
float_data = [1.0, 2.0, 3.0]
assert (
as_json_table_type(np.array(float_data, dtype=float_type).dtype) == "number"
)
- @pytest.mark.parametrize("bool_type", [bool, np.bool])
+ @pytest.mark.parametrize("bool_type", [bool, np.bool_])
def test_as_json_table_type_bool_data(self, bool_type):
bool_data = [True, False]
assert (
@@ -154,17 +152,15 @@ def test_as_json_table_type_categorical_data(self, cat_data):
# ------
# dtypes
# ------
- @pytest.mark.parametrize("int_dtype", [np.int, np.int16, np.int32, np.int64])
+ @pytest.mark.parametrize("int_dtype", [int, np.int16, np.int32, np.int64])
def test_as_json_table_type_int_dtypes(self, int_dtype):
assert as_json_table_type(int_dtype) == "integer"
- @pytest.mark.parametrize(
- "float_dtype", [np.float, np.float16, np.float32, np.float64]
- )
+ @pytest.mark.parametrize("float_dtype", [float, np.float16, np.float32, np.float64])
def test_as_json_table_type_float_dtypes(self, float_dtype):
assert as_json_table_type(float_dtype) == "number"
- @pytest.mark.parametrize("bool_dtype", [bool, np.bool])
+ @pytest.mark.parametrize("bool_dtype", [bool, np.bool_])
def test_as_json_table_type_bool_dtypes(self, bool_dtype):
assert as_json_table_type(bool_dtype) == "boolean"
diff --git a/pandas/tests/io/json/test_pandas.py b/pandas/tests/io/json/test_pandas.py
index 137e4c991d080..56b854bee77d7 100644
--- a/pandas/tests/io/json/test_pandas.py
+++ b/pandas/tests/io/json/test_pandas.py
@@ -159,7 +159,7 @@ def test_roundtrip_intframe(self, orient, convert_axes, numpy, dtype, int_frame)
assert_json_roundtrip_equal(result, expected, orient)
- @pytest.mark.parametrize("dtype", [None, np.float64, np.int, "U3"])
+ @pytest.mark.parametrize("dtype", [None, np.float64, int, "U3"])
@pytest.mark.parametrize("convert_axes", [True, False])
@pytest.mark.parametrize("numpy", [True, False])
def test_roundtrip_str_axes(self, orient, convert_axes, numpy, dtype):
@@ -673,7 +673,7 @@ def test_series_roundtrip_timeseries(self, orient, numpy, datetime_series):
tm.assert_series_equal(result, expected)
- @pytest.mark.parametrize("dtype", [np.float64, np.int])
+ @pytest.mark.parametrize("dtype", [np.float64, int])
@pytest.mark.parametrize("numpy", [True, False])
def test_series_roundtrip_numeric(self, orient, numpy, dtype):
s = Series(range(6), index=["a", "b", "c", "d", "e", "f"])
diff --git a/pandas/tests/io/json/test_ujson.py b/pandas/tests/io/json/test_ujson.py
index 28b043e65b848..7dc73d5be1538 100644
--- a/pandas/tests/io/json/test_ujson.py
+++ b/pandas/tests/io/json/test_ujson.py
@@ -676,14 +676,14 @@ def my_obj_handler(_):
class TestNumpyJSONTests:
@pytest.mark.parametrize("bool_input", [True, False])
def test_bool(self, bool_input):
- b = np.bool(bool_input)
+ b = bool(bool_input)
assert ujson.decode(ujson.encode(b)) == b
def test_bool_array(self):
bool_array = np.array(
- [True, False, True, True, False, True, False, False], dtype=np.bool
+ [True, False, True, True, False, True, False, False], dtype=bool
)
- output = np.array(ujson.decode(ujson.encode(bool_array)), dtype=np.bool)
+ output = np.array(ujson.decode(ujson.encode(bool_array)), dtype=bool)
tm.assert_numpy_array_equal(bool_array, output)
def test_int(self, any_int_dtype):
@@ -693,7 +693,7 @@ def test_int(self, any_int_dtype):
assert klass(ujson.decode(ujson.encode(num))) == num
def test_int_array(self, any_int_dtype):
- arr = np.arange(100, dtype=np.int)
+ arr = np.arange(100, dtype=int)
arr_input = arr.astype(any_int_dtype)
arr_output = np.array(
@@ -723,7 +723,7 @@ def test_float(self, float_dtype):
assert klass(ujson.decode(ujson.encode(num))) == num
def test_float_array(self, float_dtype):
- arr = np.arange(12.5, 185.72, 1.7322, dtype=np.float)
+ arr = np.arange(12.5, 185.72, 1.7322, dtype=float)
float_input = arr.astype(float_dtype)
float_output = np.array(
@@ -901,7 +901,7 @@ def test_dataframe_numpy_labelled(self, orient):
[[1, 2, 3], [4, 5, 6]],
index=["a", "b"],
columns=["x", "y", "z"],
- dtype=np.int,
+ dtype=int,
)
kwargs = {} if orient is None else dict(orient=orient)
diff --git a/pandas/tests/io/parser/test_c_parser_only.py b/pandas/tests/io/parser/test_c_parser_only.py
index 5bbabc8e18c47..d76d01904731a 100644
--- a/pandas/tests/io/parser/test_c_parser_only.py
+++ b/pandas/tests/io/parser/test_c_parser_only.py
@@ -207,8 +207,8 @@ def test_usecols_dtypes(c_parser_only):
dtype={"b": int, "c": float},
)
- assert (result.dtypes == [object, np.int, np.float]).all()
- assert (result2.dtypes == [object, np.float]).all()
+ assert (result.dtypes == [object, int, float]).all()
+ assert (result2.dtypes == [object, float]).all()
def test_disable_bool_parsing(c_parser_only):
diff --git a/pandas/tests/io/parser/test_common.py b/pandas/tests/io/parser/test_common.py
index 55256499c6bb2..e38fcf1380220 100644
--- a/pandas/tests/io/parser/test_common.py
+++ b/pandas/tests/io/parser/test_common.py
@@ -1147,7 +1147,7 @@ def test_chunks_have_consistent_numerical_type(all_parsers):
result = parser.read_csv(StringIO(data))
assert type(result.a[0]) is np.float64
- assert result.a.dtype == np.float
+ assert result.a.dtype == float
def test_warn_if_chunks_have_mismatched_type(all_parsers):
@@ -1163,7 +1163,7 @@ def test_warn_if_chunks_have_mismatched_type(all_parsers):
with tm.assert_produces_warning(warning_type):
df = parser.read_csv(StringIO(data))
- assert df.a.dtype == np.object
+ assert df.a.dtype == object
@pytest.mark.parametrize("sep", [" ", r"\s+"])
diff --git a/pandas/tests/io/parser/test_dtypes.py b/pandas/tests/io/parser/test_dtypes.py
index d1ed85cc6f466..6298d1e5498f3 100644
--- a/pandas/tests/io/parser/test_dtypes.py
+++ b/pandas/tests/io/parser/test_dtypes.py
@@ -368,7 +368,7 @@ def test_empty_pass_dtype(all_parsers):
result = parser.read_csv(StringIO(data), dtype={"one": "u1"})
expected = DataFrame(
- {"one": np.empty(0, dtype="u1"), "two": np.empty(0, dtype=np.object)},
+ {"one": np.empty(0, dtype="u1"), "two": np.empty(0, dtype=object)},
index=Index([], dtype=object),
)
tm.assert_frame_equal(result, expected)
@@ -399,7 +399,7 @@ def test_empty_with_multi_index_pass_dtype(all_parsers):
exp_idx = MultiIndex.from_arrays(
[np.empty(0, dtype="u1"), np.empty(0, dtype=np.float64)], names=["one", "two"]
)
- expected = DataFrame({"three": np.empty(0, dtype=np.object)}, index=exp_idx)
+ expected = DataFrame({"three": np.empty(0, dtype=object)}, index=exp_idx)
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/io/pytables/test_store.py b/pandas/tests/io/pytables/test_store.py
index 30b64b1750aa9..524e9f41a7731 100644
--- a/pandas/tests/io/pytables/test_store.py
+++ b/pandas/tests/io/pytables/test_store.py
@@ -2493,7 +2493,7 @@ def test_empty_series_frame(self, setup_path):
@td.xfail_non_writeable
@pytest.mark.parametrize(
- "dtype", [np.int64, np.float64, np.object, "m8[ns]", "M8[ns]"]
+ "dtype", [np.int64, np.float64, object, "m8[ns]", "M8[ns]"]
)
def test_empty_series(self, dtype, setup_path):
s = Series(dtype=dtype)
diff --git a/pandas/tests/io/test_sql.py b/pandas/tests/io/test_sql.py
index 7d4716e1b7d0c..fa04eabb71627 100644
--- a/pandas/tests/io/test_sql.py
+++ b/pandas/tests/io/test_sql.py
@@ -1364,7 +1364,7 @@ def test_default_type_conversion(self):
# Int column with NA values stays as float
assert issubclass(df.IntColWithNull.dtype.type, np.floating)
# Bool column with NA values becomes object
- assert issubclass(df.BoolColWithNull.dtype.type, np.object)
+ assert issubclass(df.BoolColWithNull.dtype.type, object)
def test_bigint(self):
# int64 should be converted to BigInteger, GH7433
diff --git a/pandas/tests/io/test_stata.py b/pandas/tests/io/test_stata.py
index aa3aa61bbb984..6d7fec803a8e0 100644
--- a/pandas/tests/io/test_stata.py
+++ b/pandas/tests/io/test_stata.py
@@ -689,7 +689,7 @@ def test_write_missing_strings(self):
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
@pytest.mark.parametrize("byteorder", [">", "<"])
def test_bool_uint(self, byteorder, version):
- s0 = Series([0, 1, True], dtype=np.bool)
+ s0 = Series([0, 1, True], dtype=np.bool_)
s1 = Series([0, 1, 100], dtype=np.uint8)
s2 = Series([0, 1, 255], dtype=np.uint8)
s3 = Series([0, 1, 2 ** 15 - 100], dtype=np.uint16)
@@ -855,7 +855,7 @@ def test_big_dates(self):
expected[5][2] = expected[5][3] = expected[5][4] = datetime(1677, 10, 1)
expected[5][5] = expected[5][6] = datetime(1678, 1, 1)
- expected = DataFrame(expected, columns=columns, dtype=np.object)
+ expected = DataFrame(expected, columns=columns, dtype=object)
parsed_115 = read_stata(self.dta18_115)
parsed_117 = read_stata(self.dta18_117)
tm.assert_frame_equal(expected, parsed_115, check_datetimelike_compat=True)
diff --git a/pandas/tests/plotting/test_hist_method.py b/pandas/tests/plotting/test_hist_method.py
index 5a30e9fbb91c6..0d3425d001229 100644
--- a/pandas/tests/plotting/test_hist_method.py
+++ b/pandas/tests/plotting/test_hist_method.py
@@ -205,7 +205,7 @@ def test_hist_df_legacy(self):
def test_hist_non_numerical_raises(self):
# gh-10444
df = DataFrame(np.random.rand(10, 2))
- df_o = df.astype(np.object)
+ df_o = df.astype(object)
msg = "hist method requires numerical columns, nothing to plot."
with pytest.raises(ValueError, match=msg):
diff --git a/pandas/tests/plotting/test_series.py b/pandas/tests/plotting/test_series.py
index 5341878d4986e..6da892c15f489 100644
--- a/pandas/tests/plotting/test_series.py
+++ b/pandas/tests/plotting/test_series.py
@@ -617,7 +617,7 @@ def test_kde_kwargs(self):
sample_points = np.linspace(-100, 100, 20)
_check_plot_works(self.ts.plot.kde, bw_method="scott", ind=20)
_check_plot_works(self.ts.plot.kde, bw_method=None, ind=20)
- _check_plot_works(self.ts.plot.kde, bw_method=None, ind=np.int(20))
+ _check_plot_works(self.ts.plot.kde, bw_method=None, ind=np.int_(20))
_check_plot_works(self.ts.plot.kde, bw_method=0.5, ind=sample_points)
_check_plot_works(self.ts.plot.density, bw_method=0.5, ind=sample_points)
_, ax = self.plt.subplots()
diff --git a/pandas/tests/resample/test_base.py b/pandas/tests/resample/test_base.py
index 485535bec20d0..28d33ebb23c20 100644
--- a/pandas/tests/resample/test_base.py
+++ b/pandas/tests/resample/test_base.py
@@ -180,7 +180,7 @@ def test_resample_size_empty_dataframe(freq, empty_frame_dti):
@pytest.mark.parametrize("index", tm.all_timeseries_index_generator(0))
-@pytest.mark.parametrize("dtype", [np.float, np.int, np.object, "datetime64[ns]"])
+@pytest.mark.parametrize("dtype", [float, int, object, "datetime64[ns]"])
def test_resample_empty_dtypes(index, dtype, resample_method):
# Empty series were sometimes causing a segfault (for the functions
# with Cython bounds-checking disabled) or an IndexError. We just run
diff --git a/pandas/tests/reshape/test_concat.py b/pandas/tests/reshape/test_concat.py
index 19fd8db5322ed..1c9d00a4b4c90 100644
--- a/pandas/tests/reshape/test_concat.py
+++ b/pandas/tests/reshape/test_concat.py
@@ -2759,8 +2759,8 @@ def test_concat_sparse():
def test_concat_dense_sparse():
# GH 30668
- a = pd.Series(pd.arrays.SparseArray([1, None]), dtype=np.float)
- b = pd.Series([1], dtype=np.float)
+ a = pd.Series(pd.arrays.SparseArray([1, None]), dtype=float)
+ b = pd.Series([1], dtype=float)
expected = pd.Series(data=[1, None, 1], index=[0, 1, 0]).astype(
pd.SparseDtype(np.float64, None)
)
diff --git a/pandas/tests/series/indexing/test_where.py b/pandas/tests/series/indexing/test_where.py
index 8daea84492871..3f85abb4b2817 100644
--- a/pandas/tests/series/indexing/test_where.py
+++ b/pandas/tests/series/indexing/test_where.py
@@ -278,7 +278,7 @@ def test_where_setitem_invalid():
"mask", [[True, False, False, False, False], [True, False], [False]]
)
@pytest.mark.parametrize(
- "item", [2.0, np.nan, np.finfo(np.float).max, np.finfo(np.float).min]
+ "item", [2.0, np.nan, np.finfo(float).max, np.finfo(float).min]
)
# Test numpy arrays, lists and tuples as the input to be
# broadcast
diff --git a/pandas/tests/series/test_apply.py b/pandas/tests/series/test_apply.py
index e6f86dda05893..d51dceae53a1c 100644
--- a/pandas/tests/series/test_apply.py
+++ b/pandas/tests/series/test_apply.py
@@ -180,7 +180,7 @@ def test_apply_categorical(self):
result = ser.apply(lambda x: "A")
exp = pd.Series(["A"] * 7, name="XX", index=list("abcdefg"))
tm.assert_series_equal(result, exp)
- assert result.dtype == np.object
+ assert result.dtype == object
@pytest.mark.parametrize("series", [["1-1", "1-1", np.NaN], ["1-1", "1-2", np.NaN]])
def test_apply_categorical_with_nan_values(self, series):
@@ -717,7 +717,7 @@ def test_map_categorical(self):
result = s.map(lambda x: "A")
exp = pd.Series(["A"] * 7, name="XX", index=list("abcdefg"))
tm.assert_series_equal(result, exp)
- assert result.dtype == np.object
+ assert result.dtype == object
with pytest.raises(NotImplementedError):
s.map(lambda x: x, na_action="ignore")
diff --git a/pandas/tests/series/test_combine_concat.py b/pandas/tests/series/test_combine_concat.py
index 0766bfc37d7ca..95eba6ccc4df8 100644
--- a/pandas/tests/series/test_combine_concat.py
+++ b/pandas/tests/series/test_combine_concat.py
@@ -68,9 +68,9 @@ def get_result_type(dtype, dtype2):
(np.bool_, np.int32, np.int32),
(np.bool_, np.float32, np.object_),
# datetime-like
- ("m8[ns]", np.bool, np.object_),
+ ("m8[ns]", np.bool_, np.object_),
("m8[ns]", np.int64, np.object_),
- ("M8[ns]", np.bool, np.object_),
+ ("M8[ns]", np.bool_, np.object_),
("M8[ns]", np.int64, np.object_),
# categorical
("category", "category", "category"),
diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index ff5f890cc41f8..44a8452964f5a 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -713,7 +713,7 @@ def test_first_nan_kept(self):
NAN2 = struct.unpack("d", struct.pack("=Q", bits_for_nan2))[0]
assert NAN1 != NAN1
assert NAN2 != NAN2
- for el_type in [np.float64, np.object]:
+ for el_type in [np.float64, object]:
a = np.array([NAN1, NAN2], dtype=el_type)
result = pd.unique(a)
assert result.size == 1
@@ -725,7 +725,7 @@ def test_do_not_mangle_na_values(self, unique_nulls_fixture, unique_nulls_fixtur
# GH 22295
if unique_nulls_fixture is unique_nulls_fixture2:
return # skip it, values not unique
- a = np.array([unique_nulls_fixture, unique_nulls_fixture2], dtype=np.object)
+ a = np.array([unique_nulls_fixture, unique_nulls_fixture2], dtype=object)
result = pd.unique(a)
assert result.size == 2
assert a[0] is unique_nulls_fixture
@@ -886,7 +886,7 @@ def test_different_nans(self):
# as object-array:
result = algos.isin(
- np.asarray(comps, dtype=np.object), np.asarray(values, dtype=np.object)
+ np.asarray(comps, dtype=object), np.asarray(values, dtype=object)
)
tm.assert_numpy_array_equal(np.array([True]), result)
@@ -916,8 +916,8 @@ def test_empty(self, empty):
def test_different_nan_objects(self):
# GH 22119
- comps = np.array(["nan", np.nan * 1j, float("nan")], dtype=np.object)
- vals = np.array([float("nan")], dtype=np.object)
+ comps = np.array(["nan", np.nan * 1j, float("nan")], dtype=object)
+ vals = np.array([float("nan")], dtype=object)
expected = np.array([False, False, True])
result = algos.isin(comps, vals)
tm.assert_numpy_array_equal(expected, result)
@@ -1157,7 +1157,7 @@ def test_dropna(self):
def test_value_counts_normalized(self):
# GH12558
s = Series([1, 2, np.nan, np.nan, np.nan])
- dtypes = (np.float64, np.object, "M8[ns]")
+ dtypes = (np.float64, object, "M8[ns]")
for t in dtypes:
s_typed = s.astype(t)
result = s_typed.value_counts(normalize=True, dropna=False)
@@ -2290,10 +2290,10 @@ def test_mode_single(self):
exp = Series(exp_multi, dtype=dt)
tm.assert_series_equal(algos.mode(s), exp)
- exp = Series([1], dtype=np.int)
+ exp = Series([1], dtype=int)
tm.assert_series_equal(algos.mode([1]), exp)
- exp = Series(["a", "b", "c"], dtype=np.object)
+ exp = Series(["a", "b", "c"], dtype=object)
tm.assert_series_equal(algos.mode(["a", "b", "c"]), exp)
def test_number_mode(self):
diff --git a/pandas/tests/test_take.py b/pandas/tests/test_take.py
index 2a42eb5d73136..9f0632917037c 100644
--- a/pandas/tests/test_take.py
+++ b/pandas/tests/test_take.py
@@ -31,7 +31,7 @@ def writeable(request):
(np.int16, False),
(np.int8, False),
(np.object_, True),
- (np.bool, False),
+ (np.bool_, False),
]
)
def dtype_can_hold_na(request):
diff --git a/pandas/tests/tslibs/test_fields.py b/pandas/tests/tslibs/test_fields.py
index 943f4207df543..a45fcab56759f 100644
--- a/pandas/tests/tslibs/test_fields.py
+++ b/pandas/tests/tslibs/test_fields.py
@@ -12,9 +12,7 @@ def test_fields_readonly():
dtindex.flags.writeable = False
result = fields.get_date_name_field(dtindex, "month_name")
- expected = np.array(
- ["January", "February", "March", "April", "May"], dtype=np.object
- )
+ expected = np.array(["January", "February", "March", "April", "May"], dtype=object)
tm.assert_numpy_array_equal(result, expected)
result = fields.get_date_field(dtindex, "Y")
diff --git a/pandas/tests/window/moments/test_moments_rolling.py b/pandas/tests/window/moments/test_moments_rolling.py
index 3e5475e6b274f..f6e2834965da3 100644
--- a/pandas/tests/window/moments/test_moments_rolling.py
+++ b/pandas/tests/window/moments/test_moments_rolling.py
@@ -515,7 +515,7 @@ def test_cmov_window_regular(win_types):
@td.skip_if_no_scipy
def test_cmov_window_regular_linear_range(win_types):
# GH 8238
- vals = np.array(range(10), dtype=np.float)
+ vals = np.array(range(10), dtype=float)
xp = vals.copy()
xp[:2] = np.nan
xp[-2:] = np.nan
@@ -718,7 +718,7 @@ def test_cmov_window_special_linear_range(win_types_special):
"exponential": {"tau": 10},
}
- vals = np.array(range(10), dtype=np.float)
+ vals = np.array(range(10), dtype=float)
xp = vals.copy()
xp[:2] = np.nan
xp[-2:] = np.nan
| I mostly just replaced `np.bool` with `np.bool_`, though in some cases I used the builtin `bool` instead. I don't know whether the distinction matters, so I'm happy to conform either way.
For now, just checking whether this fixes CI.
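For reviewers unfamiliar with the aliases, here is a minimal sketch (not part of this diff; assumes NumPy >= 1.20, where the plain `np.bool`/`np.object` aliases are deprecated) of why the replacements are interchangeable:

```python
import numpy as np
import pandas as pd

# np.bool_ is the NumPy boolean scalar type; for dtype comparisons it is
# equivalent to the builtin bool, so either replacement is valid.
s = pd.Series([True, False, True])
assert s.dtype == np.bool_
assert s.dtype == bool

# same story for array construction and for object dtype
mask = np.zeros(3, dtype=np.bool_)   # previously dtype=np.bool
vals = np.empty(0, dtype=object)     # previously dtype=np.object
assert mask.dtype == bool and vals.dtype == object
```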
Closes https://github.com/pandas-dev/pandas/issues/34848. | https://api.github.com/repos/pandas-dev/pandas/pulls/34835 | 2020-06-16T23:48:06Z | 2020-06-17T17:26:57Z | 2020-06-17T17:26:57Z | 2020-06-30T16:05:52Z |
CLN: GH29547 change string formatting with f-strings (6 files changed) | diff --git a/pandas/tests/io/parser/test_header.py b/pandas/tests/io/parser/test_header.py
index 7dc106ef0c186..4cd110136d7b0 100644
--- a/pandas/tests/io/parser/test_header.py
+++ b/pandas/tests/io/parser/test_header.py
@@ -528,12 +528,11 @@ def test_multi_index_unnamed(all_parsers, index_col, columns):
parser.read_csv(StringIO(data), header=header, index_col=index_col)
else:
result = parser.read_csv(StringIO(data), header=header, index_col=index_col)
- template = "Unnamed: {i}_level_0"
exp_columns = []
for i, col in enumerate(columns):
if not col: # Unnamed.
- col = template.format(i=i if index_col is None else i + 1)
+ col = f"Unnamed: {i if index_col is None else i + 1}_level_0"
exp_columns.append(col)
diff --git a/pandas/tests/io/test_sql.py b/pandas/tests/io/test_sql.py
index fa04eabb71627..70713768c8d1e 100644
--- a/pandas/tests/io/test_sql.py
+++ b/pandas/tests/io/test_sql.py
@@ -1900,9 +1900,9 @@ class _TestMySQLAlchemy:
@classmethod
def connect(cls):
- url = "mysql+{driver}://root@localhost/pandas_nosetest"
return sqlalchemy.create_engine(
- url.format(driver=cls.driver), connect_args=cls.connect_args
+ f"mysql+{cls.driver}://root@localhost/pandas_nosetest",
+ connect_args=cls.connect_args,
)
@classmethod
@@ -1969,8 +1969,9 @@ class _TestPostgreSQLAlchemy:
@classmethod
def connect(cls):
- url = "postgresql+{driver}://postgres@localhost/pandas_nosetest"
- return sqlalchemy.create_engine(url.format(driver=cls.driver))
+ return sqlalchemy.create_engine(
+ f"postgresql+{cls.driver}://postgres@localhost/pandas_nosetest"
+ )
@classmethod
def setup_driver(cls):
diff --git a/pandas/tests/reductions/test_reductions.py b/pandas/tests/reductions/test_reductions.py
index f6e0d2f0c1751..a112bc80b60b0 100644
--- a/pandas/tests/reductions/test_reductions.py
+++ b/pandas/tests/reductions/test_reductions.py
@@ -349,11 +349,10 @@ def test_invalid_td64_reductions(self, opname):
msg = "|".join(
[
- "reduction operation '{op}' not allowed for this dtype",
- r"cannot perform {op} with type timedelta64\[ns\]",
+ f"reduction operation '{opname}' not allowed for this dtype",
+ rf"cannot perform {opname} with type timedelta64\[ns\]",
]
)
- msg = msg.format(op=opname)
with pytest.raises(TypeError, match=msg):
getattr(td, opname)()
| Replace old-style string formatting syntax with f-strings (#29547).
- [X] passes `black pandas`
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
Changed 6 files:
- pandas/tests/io/parser/test_header.py
- pandas/tests/io/test_sql.py
- pandas/tests/io/test_html.py
- pandas/tests/reductions/test_reductions.py
- pandas/tests/reshape/test_melt.py
- pandas/tests/scalar/timedelta/test_timedelta.py
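
As an illustration (hypothetical snippet, not taken from the diff), the conversion pattern applied throughout these files looks like:

```python
# before: a template string filled in with str.format
driver = "pymysql"  # hypothetical value for illustration
url = "mysql+{driver}://root@localhost/pandas_nosetest".format(driver=driver)

# after: the same string as an f-string, evaluated inline
url_f = f"mysql+{driver}://root@localhost/pandas_nosetest"
assert url == url_f == "mysql+pymysql://root@localhost/pandas_nosetest"
```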
| https://api.github.com/repos/pandas-dev/pandas/pulls/34831 | 2020-06-16T21:09:49Z | 2020-06-18T13:59:00Z | 2020-06-18T13:59:00Z | 2020-06-20T09:01:29Z |
REF: move registry, Registry to dtypes.base | diff --git a/pandas/api/extensions/__init__.py b/pandas/api/extensions/__init__.py
index 3019dd0e9b371..401e7081d2422 100644
--- a/pandas/api/extensions/__init__.py
+++ b/pandas/api/extensions/__init__.py
@@ -4,7 +4,7 @@
from pandas._libs.lib import no_default
-from pandas.core.dtypes.dtypes import ExtensionDtype, register_extension_dtype
+from pandas.core.dtypes.base import ExtensionDtype, register_extension_dtype
from pandas.core.accessor import (
register_dataframe_accessor,
diff --git a/pandas/core/arrays/integer.py b/pandas/core/arrays/integer.py
index b5cb681812939..b0958af41158c 100644
--- a/pandas/core/arrays/integer.py
+++ b/pandas/core/arrays/integer.py
@@ -10,6 +10,7 @@
from pandas.compat.numpy import function as nv
from pandas.util._decorators import cache_readonly
+from pandas.core.dtypes.base import register_extension_dtype
from pandas.core.dtypes.common import (
is_bool_dtype,
is_datetime64_dtype,
@@ -21,7 +22,6 @@
is_object_dtype,
pandas_dtype,
)
-from pandas.core.dtypes.dtypes import register_extension_dtype
from pandas.core.dtypes.missing import isna
from pandas.core import ops
diff --git a/pandas/core/arrays/sparse/dtype.py b/pandas/core/arrays/sparse/dtype.py
index b3da9cbeb44af..ccf2825162f51 100644
--- a/pandas/core/arrays/sparse/dtype.py
+++ b/pandas/core/arrays/sparse/dtype.py
@@ -9,7 +9,7 @@
from pandas._typing import Dtype, DtypeObj
from pandas.errors import PerformanceWarning
-from pandas.core.dtypes.base import ExtensionDtype
+from pandas.core.dtypes.base import ExtensionDtype, register_extension_dtype
from pandas.core.dtypes.cast import astype_nansafe
from pandas.core.dtypes.common import (
is_bool_dtype,
@@ -19,7 +19,6 @@
is_string_dtype,
pandas_dtype,
)
-from pandas.core.dtypes.dtypes import register_extension_dtype
from pandas.core.dtypes.missing import isna, na_value_for_dtype
if TYPE_CHECKING:
diff --git a/pandas/core/arrays/string_.py b/pandas/core/arrays/string_.py
index ac501a8afbe09..5104e3f12f5b4 100644
--- a/pandas/core/arrays/string_.py
+++ b/pandas/core/arrays/string_.py
@@ -5,9 +5,8 @@
from pandas._libs import lib, missing as libmissing
-from pandas.core.dtypes.base import ExtensionDtype
+from pandas.core.dtypes.base import ExtensionDtype, register_extension_dtype
from pandas.core.dtypes.common import pandas_dtype
-from pandas.core.dtypes.dtypes import register_extension_dtype
from pandas.core.dtypes.generic import ABCDataFrame, ABCIndexClass, ABCSeries
from pandas.core.dtypes.inference import is_array_like
diff --git a/pandas/core/construction.py b/pandas/core/construction.py
index 9ac661f97a56e..6c58698989e96 100644
--- a/pandas/core/construction.py
+++ b/pandas/core/construction.py
@@ -15,6 +15,7 @@
from pandas._libs.tslibs import IncompatibleFrequency, OutOfBoundsDatetime
from pandas._typing import AnyArrayLike, ArrayLike, Dtype, DtypeObj
+from pandas.core.dtypes.base import ExtensionDtype, registry
from pandas.core.dtypes.cast import (
construct_1d_arraylike_from_scalar,
construct_1d_ndarray_preserving_na,
@@ -36,7 +37,6 @@
is_object_dtype,
is_timedelta64_ns_dtype,
)
-from pandas.core.dtypes.dtypes import ExtensionDtype, registry
from pandas.core.dtypes.generic import (
ABCExtensionArray,
ABCIndexClass,
diff --git a/pandas/core/dtypes/base.py b/pandas/core/dtypes/base.py
index 2d81dd4d884a3..07c73876954d0 100644
--- a/pandas/core/dtypes/base.py
+++ b/pandas/core/dtypes/base.py
@@ -2,7 +2,7 @@
Extend pandas with custom array types.
"""
-from typing import TYPE_CHECKING, Any, List, Optional, Tuple, Type
+from typing import TYPE_CHECKING, Any, List, Optional, Tuple, Type, Union
import numpy as np
@@ -352,3 +352,92 @@ def _get_common_dtype(self, dtypes: List[DtypeObj]) -> Optional[DtypeObj]:
return self
else:
return None
+
+
+def register_extension_dtype(cls: Type[ExtensionDtype]) -> Type[ExtensionDtype]:
+ """
+ Register an ExtensionType with pandas as class decorator.
+
+ .. versionadded:: 0.24.0
+
+ This enables operations like ``.astype(name)`` for the name
+ of the ExtensionDtype.
+
+ Returns
+ -------
+ callable
+ A class decorator.
+
+ Examples
+ --------
+ >>> from pandas.api.extensions import register_extension_dtype
+ >>> from pandas.api.extensions import ExtensionDtype
+ >>> @register_extension_dtype
+ ... class MyExtensionDtype(ExtensionDtype):
+ ... name = "myextension"
+ """
+ registry.register(cls)
+ return cls
+
+
+class Registry:
+ """
+ Registry for dtype inference.
+
+ The registry allows one to map a string repr of a extension
+ dtype to an extension dtype. The string alias can be used in several
+ places, including
+
+ * Series and Index constructors
+ * :meth:`pandas.array`
+ * :meth:`pandas.Series.astype`
+
+ Multiple extension types can be registered.
+ These are tried in order.
+ """
+
+ def __init__(self):
+ self.dtypes: List[Type[ExtensionDtype]] = []
+
+ def register(self, dtype: Type[ExtensionDtype]) -> None:
+ """
+ Parameters
+ ----------
+ dtype : ExtensionDtype class
+ """
+ if not issubclass(dtype, ExtensionDtype):
+ raise ValueError("can only register pandas extension dtypes")
+
+ self.dtypes.append(dtype)
+
+ def find(
+ self, dtype: Union[Type[ExtensionDtype], str]
+ ) -> Optional[Type[ExtensionDtype]]:
+ """
+ Parameters
+ ----------
+ dtype : Type[ExtensionDtype] or str
+
+ Returns
+ -------
+ return the first matching dtype, otherwise return None
+ """
+ if not isinstance(dtype, str):
+ dtype_type = dtype
+ if not isinstance(dtype, type):
+ dtype_type = type(dtype)
+ if issubclass(dtype_type, ExtensionDtype):
+ return dtype
+
+ return None
+
+ for dtype_type in self.dtypes:
+ try:
+ return dtype_type.construct_from_string(dtype)
+ except TypeError:
+ pass
+
+ return None
+
+
+registry = Registry()
diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py
index 9e960375e9bf4..a2ca4d84b2bf6 100644
--- a/pandas/core/dtypes/common.py
+++ b/pandas/core/dtypes/common.py
@@ -11,13 +11,13 @@
from pandas._libs.tslibs import conversion
from pandas._typing import ArrayLike, DtypeObj
+from pandas.core.dtypes.base import registry
from pandas.core.dtypes.dtypes import (
CategoricalDtype,
DatetimeTZDtype,
ExtensionDtype,
IntervalDtype,
PeriodDtype,
- registry,
)
from pandas.core.dtypes.generic import ABCCategorical, ABCIndexClass
from pandas.core.dtypes.inference import ( # noqa:F401
diff --git a/pandas/core/dtypes/dtypes.py b/pandas/core/dtypes/dtypes.py
index a9d2430717e4f..22480fbc47508 100644
--- a/pandas/core/dtypes/dtypes.py
+++ b/pandas/core/dtypes/dtypes.py
@@ -24,7 +24,7 @@
from pandas._libs.tslibs.offsets import BaseOffset
from pandas._typing import DtypeObj, Ordered
-from pandas.core.dtypes.base import ExtensionDtype
+from pandas.core.dtypes.base import ExtensionDtype, register_extension_dtype
from pandas.core.dtypes.generic import ABCCategoricalIndex, ABCIndexClass
from pandas.core.dtypes.inference import is_bool, is_list_like
@@ -40,95 +40,6 @@
str_type = str
-def register_extension_dtype(cls: Type[ExtensionDtype]) -> Type[ExtensionDtype]:
- """
- Register an ExtensionType with pandas as class decorator.
-
- .. versionadded:: 0.24.0
-
- This enables operations like ``.astype(name)`` for the name
- of the ExtensionDtype.
-
- Returns
- -------
- callable
- A class decorator.
-
- Examples
- --------
- >>> from pandas.api.extensions import register_extension_dtype
- >>> from pandas.api.extensions import ExtensionDtype
- >>> @register_extension_dtype
- ... class MyExtensionDtype(ExtensionDtype):
- ... pass
- """
- registry.register(cls)
- return cls
-
-
-class Registry:
- """
- Registry for dtype inference.
-
- The registry allows one to map a string repr of a extension
- dtype to an extension dtype. The string alias can be used in several
- places, including
-
- * Series and Index constructors
- * :meth:`pandas.array`
- * :meth:`pandas.Series.astype`
-
- Multiple extension types can be registered.
- These are tried in order.
- """
-
- def __init__(self):
- self.dtypes: List[Type[ExtensionDtype]] = []
-
- def register(self, dtype: Type[ExtensionDtype]) -> None:
- """
- Parameters
- ----------
- dtype : ExtensionDtype class
- """
- if not issubclass(dtype, ExtensionDtype):
- raise ValueError("can only register pandas extension dtypes")
-
- self.dtypes.append(dtype)
-
- def find(
- self, dtype: Union[Type[ExtensionDtype], str]
- ) -> Optional[Type[ExtensionDtype]]:
- """
- Parameters
- ----------
- dtype : Type[ExtensionDtype] or str
-
- Returns
- -------
- return the first matching dtype, otherwise return None
- """
- if not isinstance(dtype, str):
- dtype_type = dtype
- if not isinstance(dtype, type):
- dtype_type = type(dtype)
- if issubclass(dtype_type, ExtensionDtype):
- return dtype
-
- return None
-
- for dtype_type in self.dtypes:
- try:
- return dtype_type.construct_from_string(dtype)
- except TypeError:
- pass
-
- return None
-
-
-registry = Registry()
-
-
class PandasExtensionDtype(ExtensionDtype):
"""
A np.dtype duck-typed class, suitable for holding a custom dtype.
diff --git a/pandas/tests/arrays/test_array.py b/pandas/tests/arrays/test_array.py
index ad6e6e4a98057..a0525aa511ee2 100644
--- a/pandas/tests/arrays/test_array.py
+++ b/pandas/tests/arrays/test_array.py
@@ -5,7 +5,7 @@
import pytest
import pytz
-from pandas.core.dtypes.dtypes import registry
+from pandas.core.dtypes.base import registry
import pandas as pd
import pandas._testing as tm
diff --git a/pandas/tests/arrays/test_period.py b/pandas/tests/arrays/test_period.py
index 27e6334788284..8887dd0278afe 100644
--- a/pandas/tests/arrays/test_period.py
+++ b/pandas/tests/arrays/test_period.py
@@ -5,7 +5,8 @@
from pandas._libs.tslibs.period import IncompatibleFrequency
import pandas.util._test_decorators as td
-from pandas.core.dtypes.dtypes import PeriodDtype, registry
+from pandas.core.dtypes.base import registry
+from pandas.core.dtypes.dtypes import PeriodDtype
import pandas as pd
import pandas._testing as tm
diff --git a/pandas/tests/dtypes/test_dtypes.py b/pandas/tests/dtypes/test_dtypes.py
index b1fe673e9e2f1..a58dc5e5ec74a 100644
--- a/pandas/tests/dtypes/test_dtypes.py
+++ b/pandas/tests/dtypes/test_dtypes.py
@@ -4,6 +4,7 @@
import pytest
import pytz
+from pandas.core.dtypes.base import registry
from pandas.core.dtypes.common import (
is_bool_dtype,
is_categorical,
@@ -22,7 +23,6 @@
DatetimeTZDtype,
IntervalDtype,
PeriodDtype,
- registry,
)
import pandas as pd
| These are more closely related to ExtensionDtype than to our internal EADtypes. More concretely, they are low-dependency, while dtypes.dtypes has some unfortunate circular dependencies.
I'd also like to move pandas_dtype and is_dtype_equal from dtypes.common so that those can be strictly "above" dtypes.dtypes in the dependency structure. Saving that for another pass. | https://api.github.com/repos/pandas-dev/pandas/pulls/34830 | 2020-06-16T21:07:39Z | 2020-07-10T23:54:35Z | 2020-07-10T23:54:35Z | 2020-07-11T00:27:07Z |
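To see the moved machinery end to end, here is a minimal sketch (the `MyExtensionDtype` class is illustrative, and `registry` remains private API at its new `pandas.core.dtypes.base` home):

```python
from pandas.api.extensions import ExtensionDtype, register_extension_dtype
from pandas.core.dtypes.base import registry  # new home after this move


@register_extension_dtype
class MyExtensionDtype(ExtensionDtype):
    # Toy dtype for illustration only; a real dtype would also define
    # `type`, `construct_array_type`, etc.
    name = "myextension"


# Registry.find tries construct_from_string on each registered dtype in
# order and returns the first match.
assert isinstance(registry.find("myextension"), MyExtensionDtype)
```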
REF: remove libfrequencies | diff --git a/pandas/_libs/tslibs/frequencies.pxd b/pandas/_libs/tslibs/frequencies.pxd
deleted file mode 100644
index b3ad6e6c19ee3..0000000000000
--- a/pandas/_libs/tslibs/frequencies.pxd
+++ /dev/null
@@ -1 +0,0 @@
-cpdef int get_to_timestamp_base(int base)
diff --git a/pandas/_libs/tslibs/frequencies.pyx b/pandas/_libs/tslibs/frequencies.pyx
deleted file mode 100644
index fd28240abd882..0000000000000
--- a/pandas/_libs/tslibs/frequencies.pyx
+++ /dev/null
@@ -1,40 +0,0 @@
-
-from .dtypes import FreqGroup
-
-# ----------------------------------------------------------------------
-
-
-cpdef int get_to_timestamp_base(int base):
- """
- Return frequency code group used for base of to_timestamp against
- frequency code.
-
- Parameters
- ----------
- base : int (member of FreqGroup)
-
- Returns
- -------
- base : int
-
- Examples
- --------
- # Return day freq code against longer freq than day
- >>> get_to_timestamp_base(get_freq_code('D')[0])
- 6000
- >>> get_to_timestamp_base(get_freq_code('W')[0])
- 6000
- >>> get_to_timestamp_base(get_freq_code('M')[0])
- 6000
-
- # Return second freq code against hour between second
- >>> get_to_timestamp_base(get_freq_code('H')[0])
- 9000
- >>> get_to_timestamp_base(get_freq_code('S')[0])
- 9000
- """
- if base < FreqGroup.FR_BUS:
- return FreqGroup.FR_DAY
- elif FreqGroup.FR_HR <= base <= FreqGroup.FR_SEC:
- return FreqGroup.FR_SEC
- return base
diff --git a/pandas/_libs/tslibs/period.pyx b/pandas/_libs/tslibs/period.pyx
index 30caddf81b6e8..a2250234dbd14 100644
--- a/pandas/_libs/tslibs/period.pyx
+++ b/pandas/_libs/tslibs/period.pyx
@@ -74,7 +74,6 @@ from pandas._libs.tslibs.dtypes cimport (
attrname_to_abbrevs,
)
-from pandas._libs.tslibs.frequencies cimport get_to_timestamp_base
from pandas._libs.tslibs.parsing cimport get_rule_month
from pandas._libs.tslibs.parsing import parse_time_string
from pandas._libs.tslibs.nattype cimport (
@@ -1478,7 +1477,30 @@ class IncompatibleFrequency(ValueError):
pass
-cdef class _Period:
+cdef class PeriodMixin:
+ # Methods shared between Period and PeriodArray
+
+ cpdef int _get_to_timestamp_base(self):
+ """
+ Return frequency code group used for base of to_timestamp against
+ frequency code.
+
+ Return day freq code against longer freq than day.
+ Return second freq code against hour between second.
+
+ Returns
+ -------
+ int
+ """
+ base = self._dtype._dtype_code
+ if base < FR_BUS:
+ return FR_DAY
+ elif FR_HR <= base <= FR_SEC:
+ return FR_SEC
+ return base
+
+
+cdef class _Period(PeriodMixin):
cdef readonly:
int64_t ordinal
@@ -1734,8 +1756,7 @@ cdef class _Period:
return endpoint - Timedelta(1, 'ns')
if freq is None:
- base = self._dtype._dtype_code
- freq = get_to_timestamp_base(base)
+ freq = self._get_to_timestamp_base()
base = freq
else:
freq = self._maybe_convert_freq(freq)
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index 0d866aa7eae26..7902dd0410910 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -9,17 +9,18 @@
NaTType,
Timedelta,
delta_to_nanoseconds,
- frequencies as libfrequencies,
iNaT,
period as libperiod,
to_offset,
)
+from pandas._libs.tslibs.dtypes import FreqGroup
from pandas._libs.tslibs.fields import isleapyear_arr
from pandas._libs.tslibs.offsets import Tick, delta_to_tick
from pandas._libs.tslibs.period import (
DIFFERENT_FREQ,
IncompatibleFrequency,
Period,
+ PeriodMixin,
get_period_field_arr,
period_asfreq_arr,
)
@@ -61,7 +62,7 @@ def f(self):
return property(f)
-class PeriodArray(dtl.DatetimeLikeArrayMixin, dtl.DatelikeOps):
+class PeriodArray(PeriodMixin, dtl.DatetimeLikeArrayMixin, dtl.DatelikeOps):
"""
Pandas ExtensionArray for storing Period data.
@@ -440,8 +441,7 @@ def to_timestamp(self, freq=None, how="start"):
return (self + self.freq).to_timestamp(how="start") - adjust
if freq is None:
- base = self.freq._period_dtype_code
- freq = libfrequencies.get_to_timestamp_base(base)
+ freq = self._get_to_timestamp_base()
base = freq
else:
freq = Period._maybe_convert_freq(freq)
@@ -1027,11 +1027,11 @@ def _range_from_fields(
if quarter is not None:
if freq is None:
freq = to_offset("Q")
- base = libfrequencies.FreqGroup.FR_QTR
+ base = FreqGroup.FR_QTR
else:
freq = to_offset(freq)
base = libperiod.freq_to_dtype_code(freq)
- if base != libfrequencies.FreqGroup.FR_QTR:
+ if base != FreqGroup.FR_QTR:
raise AssertionError("base must equal FR_QTR")
year, quarter = _make_field_arrays(year, quarter)
diff --git a/pandas/plotting/_matplotlib/timeseries.py b/pandas/plotting/_matplotlib/timeseries.py
index fa8051954e435..8ffd30567b9ac 100644
--- a/pandas/plotting/_matplotlib/timeseries.py
+++ b/pandas/plotting/_matplotlib/timeseries.py
@@ -6,7 +6,7 @@
import numpy as np
from pandas._libs.tslibs import Period, to_offset
-from pandas._libs.tslibs.frequencies import FreqGroup
+from pandas._libs.tslibs.dtypes import FreqGroup
from pandas._typing import FrameOrSeriesUnion
from pandas.core.dtypes.generic import (
diff --git a/pandas/tests/tseries/frequencies/test_freq_code.py b/pandas/tests/tseries/frequencies/test_freq_code.py
index 5383c1ff1c2c9..20cadde45e7a0 100644
--- a/pandas/tests/tseries/frequencies/test_freq_code.py
+++ b/pandas/tests/tseries/frequencies/test_freq_code.py
@@ -1,8 +1,7 @@
import pytest
-from pandas._libs.tslibs import Resolution, to_offset
+from pandas._libs.tslibs import Period, Resolution, to_offset
from pandas._libs.tslibs.dtypes import _attrname_to_abbrevs
-from pandas._libs.tslibs.frequencies import get_to_timestamp_base
@pytest.mark.parametrize(
@@ -10,9 +9,12 @@
[("D", "D"), ("W", "D"), ("M", "D"), ("S", "S"), ("T", "S"), ("H", "S")],
)
def test_get_to_timestamp_base(freqstr, exp_freqstr):
- left_code = to_offset(freqstr)._period_dtype_code
+ off = to_offset(freqstr)
+ per = Period._from_ordinal(1, off)
exp_code = to_offset(exp_freqstr)._period_dtype_code
- assert get_to_timestamp_base(left_code) == exp_code
+
+ result_code = per._get_to_timestamp_base()
+ assert result_code == exp_code
@pytest.mark.parametrize(
diff --git a/pandas/tests/tslibs/test_api.py b/pandas/tests/tslibs/test_api.py
index b0c524a257684..a119db6c68635 100644
--- a/pandas/tests/tslibs/test_api.py
+++ b/pandas/tests/tslibs/test_api.py
@@ -11,7 +11,6 @@ def test_namespace():
"conversion",
"dtypes",
"fields",
- "frequencies",
"nattype",
"np_datetime",
"offsets",
diff --git a/setup.py b/setup.py
index 3caea5c5e79da..e9d305d831653 100755
--- a/setup.py
+++ b/setup.py
@@ -319,7 +319,6 @@ class CheckSDist(sdist_class):
"pandas/_libs/tslibs/conversion.pyx",
"pandas/_libs/tslibs/fields.pyx",
"pandas/_libs/tslibs/offsets.pyx",
- "pandas/_libs/tslibs/frequencies.pyx",
"pandas/_libs/tslibs/resolution.pyx",
"pandas/_libs/tslibs/parsing.pyx",
"pandas/_libs/tslibs/tzconversion.pyx",
@@ -615,7 +614,6 @@ def srcpath(name=None, suffix=".pyx", subdir="src"):
"pyxfile": "_libs/tslibs/fields",
"depends": tseries_depends,
},
- "_libs.tslibs.frequencies": {"pyxfile": "_libs/tslibs/frequencies"},
"_libs.tslibs.nattype": {"pyxfile": "_libs/tslibs/nattype"},
"_libs.tslibs.np_datetime": {
"pyxfile": "_libs/tslibs/np_datetime",
| https://api.github.com/repos/pandas-dev/pandas/pulls/34828 | 2020-06-16T17:30:02Z | 2020-06-16T20:47:02Z | 2020-06-16T20:47:02Z | 2020-06-16T20:51:28Z |
|
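For reference, a quick demonstration of the behavior the inlined `_get_to_timestamp_base` preserves — not part of the diff above, just the observable `to_timestamp()` defaults:

```python
import pandas as pd

# Quarterly period: base < FR_BUS, so to_timestamp() defaults to daily.
print(pd.Period("2020Q1", freq="Q").to_timestamp())
# 2020-01-01 00:00:00

# Hourly period: FR_HR <= base <= FR_SEC, so it defaults to seconds.
print(pd.Period("2020-01-01 05:00", freq="H").to_timestamp())
# 2020-01-01 05:00:00
```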
Backport PR #34733 on branch 1.0.x (BUG: Fixed Series.replace for EA with casting) | diff --git a/doc/source/whatsnew/v1.0.5.rst b/doc/source/whatsnew/v1.0.5.rst
index 7dfac54279e6f..fdf08dd381050 100644
--- a/doc/source/whatsnew/v1.0.5.rst
+++ b/doc/source/whatsnew/v1.0.5.rst
@@ -24,6 +24,8 @@ Note this disables the ability to read Parquet files from directories on S3
again (:issue:`26388`, :issue:`34632`), which was added in the 1.0.4 release,
but is now targeted for pandas 1.1.0.
+- Fixed regression in :meth:`~DataFrame.replace` raising an ``AssertionError`` when replacing values in an extension dtype with values of a different dtype (:issue:`34530`)
+
.. _whatsnew_105.bug_fixes:
Bug fixes
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 84dd189a8a512..317d3c303011b 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -779,7 +779,11 @@ def replace(
if is_object_dtype(self):
raise
- assert not self._can_hold_element(value), value
+ if not self.is_extension:
+ # TODO: https://github.com/pandas-dev/pandas/issues/32586
+ # Need an ExtensionArray._can_hold_element to indicate whether
+ # a scalar value can be placed in the array.
+ assert not self._can_hold_element(value), value
# try again with a compatible block
block = self.astype(object)
diff --git a/pandas/tests/series/methods/test_replace.py b/pandas/tests/series/methods/test_replace.py
index b20baa2836363..e5ccf16685c4c 100644
--- a/pandas/tests/series/methods/test_replace.py
+++ b/pandas/tests/series/methods/test_replace.py
@@ -362,3 +362,8 @@ def test_replace_no_cast(self, ser, exp):
expected = pd.Series(exp)
tm.assert_series_equal(result, expected)
+
+ def test_replace_extension_other(self):
+ # https://github.com/pandas-dev/pandas/issues/34530
+ ser = pd.Series(pd.array([1, 2, 3], dtype="Int64"))
+ ser.replace("", "") # no exception
| xref #34733 | https://api.github.com/repos/pandas-dev/pandas/pulls/34819 | 2020-06-16T09:46:06Z | 2020-06-16T11:24:26Z | 2020-06-16T11:24:26Z | 2020-06-16T11:24:42Z |
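A minimal reproduction of the regression this backport fixes, mirroring the added test:

```python
import pandas as pd

ser = pd.Series(pd.array([1, 2, 3], dtype="Int64"))
# On 1.0.4 this tripped `assert not self._can_hold_element(value)` and
# raised AssertionError; with the fix it completes without an exception.
ser.replace("", "")
```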
CLN: remove unused args/kwargs in BlockManager.reduce | diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 8e16d31b49150..e496694ee7899 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -327,16 +327,16 @@ def _verify_integrity(self) -> None:
f"tot_items: {tot_items}"
)
- def reduce(self, func, *args, **kwargs):
+ def reduce(self, func):
# If 2D, we assume that we're operating column-wise
if self.ndim == 1:
# we'll be returning a scalar
blk = self.blocks[0]
- return func(blk.values, *args, **kwargs)
+ return func(blk.values)
res = {}
for blk in self.blocks:
- bres = func(blk.values, *args, **kwargs)
+ bres = func(blk.values)
if np.ndim(bres) == 0:
# EA
@@ -344,7 +344,7 @@ def reduce(self, func, *args, **kwargs):
new_res = zip(blk.mgr_locs.as_array, [bres])
else:
assert bres.ndim == 1, bres.shape
- assert blk.shape[0] == len(bres), (blk.shape, bres.shape, args, kwargs)
+ assert blk.shape[0] == len(bres), (blk.shape, bres.shape)
new_res = zip(blk.mgr_locs.as_array, bres)
nr = dict(new_res)
| Small clean-up broken off from https://github.com/pandas-dev/pandas/pull/32867 cc @jbrockmendel | https://api.github.com/repos/pandas-dev/pandas/pulls/34818 | 2020-06-16T07:48:52Z | 2020-06-16T12:44:28Z | 2020-06-16T12:44:28Z | 2020-06-16T15:33:28Z |
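Schematically, the simplified `reduce` boils down to the pattern below; the data and names here are stand-ins, not pandas internals — the point is that the reducer arrives pre-bound, so no argument plumbing is needed:

```python
import numpy as np

# (mgr_locs, 2D values) pairs standing in for BlockManager.blocks.
blocks = [
    (np.array([0, 2]), np.arange(6).reshape(2, 3)),
    (np.array([1]), np.ones((1, 3))),
]


def func(values):
    return values.sum(axis=1)  # reducer already carries its arguments


res = {}
for locs, values in blocks:
    bres = func(values)
    assert values.shape[0] == len(bres)  # one result per column of the block
    res.update(zip(locs, bres))

print(res)  # e.g. {0: 3, 2: 12, 1: 3.0}
```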
DOC: move 'Other API changes' under correct section | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index f68135bf8cf9c..47f9d526dfd79 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -297,116 +297,10 @@ Other enhancements
.. ---------------------------------------------------------------------------
-Increased minimum versions for dependencies
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Some minimum supported versions of dependencies were updated (:issue:`33718`, :issue:`29766`, :issue:`29723`, pytables >= 3.4.3).
-If installed, we now require:
-
-+-----------------+-----------------+----------+---------+
-| Package | Minimum Version | Required | Changed |
-+=================+=================+==========+=========+
-| numpy | 1.15.4 | X | X |
-+-----------------+-----------------+----------+---------+
-| pytz | 2015.4 | X | |
-+-----------------+-----------------+----------+---------+
-| python-dateutil | 2.7.3 | X | X |
-+-----------------+-----------------+----------+---------+
-| bottleneck | 1.2.1 | | |
-+-----------------+-----------------+----------+---------+
-| numexpr | 2.6.2 | | |
-+-----------------+-----------------+----------+---------+
-| pytest (dev) | 4.0.2 | | |
-+-----------------+-----------------+----------+---------+
-
-For `optional libraries <https://dev.pandas.io/docs/install.html#dependencies>`_ the general recommendation is to use the latest version.
-The following table lists the lowest version per library that is currently being tested throughout the development of pandas.
-Optional libraries below the lowest tested version may still work, but are not considered supported.
-
-+-----------------+-----------------+---------+
-| Package | Minimum Version | Changed |
-+=================+=================+=========+
-| beautifulsoup4 | 4.6.0 | |
-+-----------------+-----------------+---------+
-| fastparquet | 0.3.2 | |
-+-----------------+-----------------+---------+
-| gcsfs | 0.2.2 | |
-+-----------------+-----------------+---------+
-| lxml | 3.8.0 | |
-+-----------------+-----------------+---------+
-| matplotlib | 2.2.2 | |
-+-----------------+-----------------+---------+
-| numba | 0.46.0 | |
-+-----------------+-----------------+---------+
-| openpyxl | 2.5.7 | |
-+-----------------+-----------------+---------+
-| pyarrow | 0.13.0 | |
-+-----------------+-----------------+---------+
-| pymysql | 0.7.1 | |
-+-----------------+-----------------+---------+
-| pytables | 3.4.3 | X |
-+-----------------+-----------------+---------+
-| s3fs | 0.3.0 | |
-+-----------------+-----------------+---------+
-| scipy | 1.2.0 | X |
-+-----------------+-----------------+---------+
-| sqlalchemy | 1.1.4 | |
-+-----------------+-----------------+---------+
-| xarray | 0.8.2 | |
-+-----------------+-----------------+---------+
-| xlrd | 1.1.0 | |
-+-----------------+-----------------+---------+
-| xlsxwriter | 0.9.8 | |
-+-----------------+-----------------+---------+
-| xlwt | 1.2.0 | |
-+-----------------+-----------------+---------+
-| pandas-gbq | 1.2.0 | X |
-+-----------------+-----------------+---------+
-
-See :ref:`install.dependencies` and :ref:`install.optional_dependencies` for more.
-
-Development Changes
-^^^^^^^^^^^^^^^^^^^
-
-- The minimum version of Cython is now the most recent bug-fix version (0.29.16) (:issue:`33334`).
-
-.. _whatsnew_110.api.other:
-
-Other API changes
-^^^^^^^^^^^^^^^^^
-
-- :meth:`Series.describe` will now show distribution percentiles for ``datetime`` dtypes, statistics ``first`` and ``last``
- will now be ``min`` and ``max`` to match with numeric dtypes in :meth:`DataFrame.describe` (:issue:`30164`)
-- Added :meth:`DataFrame.value_counts` (:issue:`5377`)
-- :meth:`Groupby.groups` now returns an abbreviated representation when called on large dataframes (:issue:`1135`)
-- ``loc`` lookups with an object-dtype :class:`Index` and an integer key will now raise ``KeyError`` instead of ``TypeError`` when key is missing (:issue:`31905`)
-- Using a :func:`pandas.api.indexers.BaseIndexer` with ``count``, ``min``, ``max``, ``median``, ``skew``, ``cov``, ``corr`` will now return correct results for any monotonic :func:`pandas.api.indexers.BaseIndexer` descendant (:issue:`32865`)
-- Added a :func:`pandas.api.indexers.FixedForwardWindowIndexer` class to support forward-looking windows during ``rolling`` operations.
-- Added :class:`pandas.errors.InvalidIndexError` (:issue:`34570`).
+.. _whatsnew_110.api:
Backwards incompatible API changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- :meth:`DataFrame.swaplevels` now raises a ``TypeError`` if the axis is not a :class:`MultiIndex`.
- Previously an ``AttributeError`` was raised (:issue:`31126`)
-- :meth:`DataFrame.xs` now raises a ``TypeError`` if a ``level`` keyword is supplied and the axis is not a :class:`MultiIndex`.
- Previously an ``AttributeError`` was raised (:issue:`33610`)
-- :meth:`DataFrameGroupby.mean` and :meth:`SeriesGroupby.mean` (and similarly for :meth:`~DataFrameGroupby.median`, :meth:`~DataFrameGroupby.std` and :meth:`~DataFrameGroupby.var`)
- now raise a ``TypeError`` if a not-accepted keyword argument is passed into it.
- Previously a ``UnsupportedFunctionCall`` was raised (``AssertionError`` if ``min_count`` passed into :meth:`~DataFrameGroupby.median`) (:issue:`31485`)
-- :meth:`DataFrame.at` and :meth:`Series.at` will raise a ``TypeError`` instead of a ``ValueError`` if an incompatible key is passed, and ``KeyError`` if a missing key is passed, matching the behavior of ``.loc[]`` (:issue:`31722`)
-- Passing an integer dtype other than ``int64`` to ``np.array(period_index, dtype=...)`` will now raise ``TypeError`` instead of incorrectly using ``int64`` (:issue:`32255`)
-- Passing an invalid ``fill_value`` to :meth:`Categorical.take` raises a ``ValueError`` instead of ``TypeError`` (:issue:`33660`)
-- Combining a ``Categorical`` with integer categories and which contains missing values
- with a float dtype column in operations such as :func:`concat` or :meth:`~DataFrame.append`
- will now result in a float column instead of an object dtyped column (:issue:`33607`)
-- :meth:`Series.to_timestamp` now raises a ``TypeError`` if the axis is not a :class:`PeriodIndex`. Previously an ``AttributeError`` was raised (:issue:`33327`)
-- :meth:`Series.to_period` now raises a ``TypeError`` if the axis is not a :class:`DatetimeIndex`. Previously an ``AttributeError`` was raised (:issue:`33327`)
-- :func: `pandas.api.dtypes.is_string_dtype` no longer incorrectly identifies categorical series as string.
-- :func:`read_excel` no longer takes ``**kwds`` arguments. This means that passing in keyword ``chunksize`` now raises a ``TypeError``
- (previously raised a ``NotImplementedError``), while passing in keyword ``encoding`` now raises a ``TypeError`` (:issue:`34464`)
-- :func: `merge` now checks ``suffixes`` parameter type to be ``tuple`` and raises ``TypeError``, whereas before a ``list`` or ``set`` were accepted and that the ``set`` could produce unexpected results (:issue:`33740`)
-- :class:`Period` no longer accepts tuples for the ``freq`` argument (:issue:`34658`)
-- :meth:`Series.interpolate` and :meth:`DataFrame.interpolate` now raises ValueError if ``limit_direction`` is 'forward' or 'both' and ``method`` is 'backfill' or 'bfill' or ``limit_direction`` is 'backward' or 'both' and ``method`` is 'pad' or 'ffill' (:issue:`34746`)
``MultiIndex.get_indexer`` interprets `method` argument differently
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -733,6 +627,115 @@ apply and applymap on ``DataFrame`` evaluates first row/column only once
df.apply(func, axis=1)
+.. _whatsnew_110.api.other:
+
+Other API changes
+^^^^^^^^^^^^^^^^^
+
+- :meth:`Series.describe` will now show distribution percentiles for ``datetime`` dtypes, statistics ``first`` and ``last``
+ will now be ``min`` and ``max`` to match with numeric dtypes in :meth:`DataFrame.describe` (:issue:`30164`)
+- Added :meth:`DataFrame.value_counts` (:issue:`5377`)
+- :meth:`Groupby.groups` now returns an abbreviated representation when called on large dataframes (:issue:`1135`)
+- ``loc`` lookups with an object-dtype :class:`Index` and an integer key will now raise ``KeyError`` instead of ``TypeError`` when key is missing (:issue:`31905`)
+- Using a :func:`pandas.api.indexers.BaseIndexer` with ``count``, ``min``, ``max``, ``median``, ``skew``, ``cov``, ``corr`` will now return correct results for any monotonic :func:`pandas.api.indexers.BaseIndexer` descendant (:issue:`32865`)
+- Added a :func:`pandas.api.indexers.FixedForwardWindowIndexer` class to support forward-looking windows during ``rolling`` operations.
+- Added :class:`pandas.errors.InvalidIndexError` (:issue:`34570`).
+- :meth:`DataFrame.swaplevels` now raises a ``TypeError`` if the axis is not a :class:`MultiIndex`.
+ Previously an ``AttributeError`` was raised (:issue:`31126`)
+- :meth:`DataFrame.xs` now raises a ``TypeError`` if a ``level`` keyword is supplied and the axis is not a :class:`MultiIndex`.
+ Previously an ``AttributeError`` was raised (:issue:`33610`)
+- :meth:`DataFrameGroupby.mean` and :meth:`SeriesGroupby.mean` (and similarly for :meth:`~DataFrameGroupby.median`, :meth:`~DataFrameGroupby.std` and :meth:`~DataFrameGroupby.var`)
+ now raise a ``TypeError`` if a not-accepted keyword argument is passed into it.
+ Previously a ``UnsupportedFunctionCall`` was raised (``AssertionError`` if ``min_count`` passed into :meth:`~DataFrameGroupby.median`) (:issue:`31485`)
+- :meth:`DataFrame.at` and :meth:`Series.at` will raise a ``TypeError`` instead of a ``ValueError`` if an incompatible key is passed, and ``KeyError`` if a missing key is passed, matching the behavior of ``.loc[]`` (:issue:`31722`)
+- Passing an integer dtype other than ``int64`` to ``np.array(period_index, dtype=...)`` will now raise ``TypeError`` instead of incorrectly using ``int64`` (:issue:`32255`)
+- Passing an invalid ``fill_value`` to :meth:`Categorical.take` raises a ``ValueError`` instead of ``TypeError`` (:issue:`33660`)
+- Combining a ``Categorical`` with integer categories and which contains missing values
+ with a float dtype column in operations such as :func:`concat` or :meth:`~DataFrame.append`
+ will now result in a float column instead of an object dtyped column (:issue:`33607`)
+- :meth:`Series.to_timestamp` now raises a ``TypeError`` if the axis is not a :class:`PeriodIndex`. Previously an ``AttributeError`` was raised (:issue:`33327`)
+- :meth:`Series.to_period` now raises a ``TypeError`` if the axis is not a :class:`DatetimeIndex`. Previously an ``AttributeError`` was raised (:issue:`33327`)
+- :func: `pandas.api.dtypes.is_string_dtype` no longer incorrectly identifies categorical series as string.
+- :func:`read_excel` no longer takes ``**kwds`` arguments. This means that passing in keyword ``chunksize`` now raises a ``TypeError``
+ (previously raised a ``NotImplementedError``), while passing in keyword ``encoding`` now raises a ``TypeError`` (:issue:`34464`)
+- :func: `merge` now checks ``suffixes`` parameter type to be ``tuple`` and raises ``TypeError``, whereas before a ``list`` or ``set`` were accepted and that the ``set`` could produce unexpected results (:issue:`33740`)
+- :class:`Period` no longer accepts tuples for the ``freq`` argument (:issue:`34658`)
+- :meth:`Series.interpolate` and :meth:`DataFrame.interpolate` now raises ValueError if ``limit_direction`` is 'forward' or 'both' and ``method`` is 'backfill' or 'bfill' or ``limit_direction`` is 'backward' or 'both' and ``method`` is 'pad' or 'ffill' (:issue:`34746`)
+
+
+Increased minimum versions for dependencies
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Some minimum supported versions of dependencies were updated (:issue:`33718`, :issue:`29766`, :issue:`29723`, pytables >= 3.4.3).
+If installed, we now require:
+
++-----------------+-----------------+----------+---------+
+| Package | Minimum Version | Required | Changed |
++=================+=================+==========+=========+
+| numpy | 1.15.4 | X | X |
++-----------------+-----------------+----------+---------+
+| pytz | 2015.4 | X | |
++-----------------+-----------------+----------+---------+
+| python-dateutil | 2.7.3 | X | X |
++-----------------+-----------------+----------+---------+
+| bottleneck | 1.2.1 | | |
++-----------------+-----------------+----------+---------+
+| numexpr | 2.6.2 | | |
++-----------------+-----------------+----------+---------+
+| pytest (dev) | 4.0.2 | | |
++-----------------+-----------------+----------+---------+
+
+For `optional libraries <https://dev.pandas.io/docs/install.html#dependencies>`_ the general recommendation is to use the latest version.
+The following table lists the lowest version per library that is currently being tested throughout the development of pandas.
+Optional libraries below the lowest tested version may still work, but are not considered supported.
+
++-----------------+-----------------+---------+
+| Package | Minimum Version | Changed |
++=================+=================+=========+
+| beautifulsoup4 | 4.6.0 | |
++-----------------+-----------------+---------+
+| fastparquet | 0.3.2 | |
++-----------------+-----------------+---------+
+| gcsfs | 0.2.2 | |
++-----------------+-----------------+---------+
+| lxml | 3.8.0 | |
++-----------------+-----------------+---------+
+| matplotlib | 2.2.2 | |
++-----------------+-----------------+---------+
+| numba | 0.46.0 | |
++-----------------+-----------------+---------+
+| openpyxl | 2.5.7 | |
++-----------------+-----------------+---------+
+| pyarrow | 0.13.0 | |
++-----------------+-----------------+---------+
+| pymysql | 0.7.1 | |
++-----------------+-----------------+---------+
+| pytables | 3.4.3 | X |
++-----------------+-----------------+---------+
+| s3fs | 0.3.0 | |
++-----------------+-----------------+---------+
+| scipy | 1.2.0 | X |
++-----------------+-----------------+---------+
+| sqlalchemy | 1.1.4 | |
++-----------------+-----------------+---------+
+| xarray | 0.8.2 | |
++-----------------+-----------------+---------+
+| xlrd | 1.1.0 | |
++-----------------+-----------------+---------+
+| xlsxwriter | 0.9.8 | |
++-----------------+-----------------+---------+
+| xlwt | 1.2.0 | |
++-----------------+-----------------+---------+
+| pandas-gbq | 1.2.0 | X |
++-----------------+-----------------+---------+
+
+See :ref:`install.dependencies` and :ref:`install.optional_dependencies` for more.
+
+Development Changes
+^^^^^^^^^^^^^^^^^^^
+
+- The minimum version of Cython is now the most recent bug-fix version (0.29.16) (:issue:`33334`).
+
.. _whatsnew_110.deprecations:
| The "Other API changes" bullet points were split in two (one part under "Other API changes", and one part directly under "Backwards incompatible API changes"), and in previous whatsnew files, the "Other API changes" is also a subsection of "Backwards incompatible API changes".
So moved a few things around to make this consistent. | https://api.github.com/repos/pandas-dev/pandas/pulls/34817 | 2020-06-16T07:27:09Z | 2020-06-16T12:53:05Z | 2020-06-16T12:53:05Z | 2020-06-16T13:00:35Z |
BUG: Respect center=True in rolling.apply when numba engine is used | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index f68135bf8cf9c..7e04d8f906cb0 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -1015,7 +1015,7 @@ Groupby/resample/rolling
The behaviour now is consistent, independent of internal heuristics. (:issue:`31612`, :issue:`14927`, :issue:`13056`)
- Bug in :meth:`SeriesGroupBy.agg` where any column name was accepted in the named aggregation of ``SeriesGroupBy`` previously. The behaviour now allows only ``str`` and callables else would raise ``TypeError``. (:issue:`34422`)
- Bug in :meth:`DataFrame.groupby` lost index, when one of the ``agg`` keys referenced an empty list (:issue:`32580`)
-
+- Bug in :meth:`Rolling.apply` where ``center=True`` was ignored when ``engine='numba'`` was specified (:issue:`34784`)
Reshaping
^^^^^^^^^
diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index 92be2d056cfcb..ce0a2a9b95025 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -150,7 +150,7 @@ def __init__(
obj,
window=None,
min_periods: Optional[int] = None,
- center: Optional[bool] = False,
+ center: bool = False,
win_type: Optional[str] = None,
axis: Axis = 0,
on: Optional[Union[str, Index]] = None,
@@ -1353,17 +1353,20 @@ def apply(
kwargs = {}
kwargs.pop("_level", None)
kwargs.pop("floor", None)
- window = self._get_window()
- offset = calculate_center_offset(window) if self.center else 0
if not is_bool(raw):
raise ValueError("raw parameter must be `True` or `False`")
if engine == "cython":
if engine_kwargs is not None:
raise ValueError("cython engine does not accept engine_kwargs")
+ # Cython apply functions handle center, so don't need to use
+ # _apply's center handling
+ window = self._get_window()
+ offset = calculate_center_offset(window) if self.center else 0
apply_func = self._generate_cython_apply_func(
args, kwargs, raw, offset, func
)
+ center = False
elif engine == "numba":
if raw is False:
raise ValueError("raw must be `True` when using the numba engine")
@@ -1375,14 +1378,14 @@ def apply(
apply_func = generate_numba_apply_func(
args, kwargs, func, engine_kwargs
)
+ center = self.center
else:
raise ValueError("engine must be either 'numba' or 'cython'")
- # TODO: Why do we always pass center=False?
# name=func & raw=raw for WindowGroupByMixin._apply
return self._apply(
apply_func,
- center=False,
+ center=center,
floor=0,
name=func,
use_numba_cache=engine == "numba",
diff --git a/pandas/tests/window/test_numba.py b/pandas/tests/window/test_numba.py
index 8ecf64b171df4..7e049af0ca1f8 100644
--- a/pandas/tests/window/test_numba.py
+++ b/pandas/tests/window/test_numba.py
@@ -13,7 +13,7 @@
# Filter warnings when parallel=True and the function can't be parallelized by Numba
class TestApply:
@pytest.mark.parametrize("jit", [True, False])
- def test_numba_vs_cython(self, jit, nogil, parallel, nopython):
+ def test_numba_vs_cython(self, jit, nogil, parallel, nopython, center):
def f(x, *args):
arg_sum = 0
for arg in args:
@@ -29,10 +29,12 @@ def f(x, *args):
args = (2,)
s = Series(range(10))
- result = s.rolling(2).apply(
+ result = s.rolling(2, center=center).apply(
f, args=args, engine="numba", engine_kwargs=engine_kwargs, raw=True
)
- expected = s.rolling(2).apply(f, engine="cython", args=args, raw=True)
+ expected = s.rolling(2, center=center).apply(
+ f, engine="cython", args=args, raw=True
+ )
tm.assert_series_equal(result, expected)
@pytest.mark.parametrize("jit", [True, False])
| - [x] closes #34784
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/34816 | 2020-06-16T04:07:34Z | 2020-06-16T12:47:57Z | 2020-06-16T12:47:57Z | 2020-06-16T15:40:30Z |
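A sketch of the fixed behavior (requires the optional numba dependency; `f` is an arbitrary jittable reduction): with `center=True`, both engines now produce identically aligned output.

```python
import pandas as pd
import pandas._testing as tm


def f(x):
    return x.sum()


s = pd.Series(range(10))
expected = s.rolling(3, center=True).apply(f, raw=True, engine="cython")
result = s.rolling(3, center=True).apply(f, raw=True, engine="numba")
tm.assert_series_equal(result, expected)  # previously shifted by the offset
```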
CLN: liboffsets annotations | diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
index bf2998bfcd9d1..df43ebcfd9df2 100644
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -358,7 +358,6 @@ cdef class BaseOffset:
Base class for DateOffset methods that are not overridden by subclasses
and will (after pickle errors are resolved) go into a cdef class.
"""
- _typ = "dateoffset"
_day_opt = None
_attributes = tuple(["n", "normalize"])
_use_relativedelta = False
@@ -394,7 +393,7 @@ cdef class BaseOffset:
def __ne__(self, other):
return not self == other
- def __hash__(self):
+ def __hash__(self) -> int:
return hash(self._params)
@cache_readonly
@@ -422,10 +421,10 @@ cdef class BaseOffset:
return params
@property
- def kwds(self):
+ def kwds(self) -> dict:
# for backwards-compatibility
kwds = {name: getattr(self, name, None) for name in self._attributes
- if name not in ['n', 'normalize']}
+ if name not in ["n", "normalize"]}
return {name: kwds[name] for name in kwds if kwds[name] is not None}
@property
@@ -582,7 +581,7 @@ cdef class BaseOffset:
"does not have a vectorized implementation"
)
- def rollback(self, dt):
+ def rollback(self, dt) -> datetime:
"""
Roll provided date backward to next offset only if not on offset.
@@ -596,7 +595,7 @@ cdef class BaseOffset:
dt = dt - type(self)(1, normalize=self.normalize, **self.kwds)
return dt
- def rollforward(self, dt):
+ def rollforward(self, dt) -> datetime:
"""
Roll provided date forward to next offset only if not on offset.
@@ -618,7 +617,7 @@ cdef class BaseOffset:
pydate_to_dtstruct(other, &dts)
return get_day_of_month(&dts, self._day_opt)
- def is_on_offset(self, dt) -> bool:
+ def is_on_offset(self, dt: datetime) -> bool:
if self.normalize and not _is_normalized(dt):
return False
@@ -780,6 +779,8 @@ cdef class Tick(SingleConstructorOffset):
def nanos(self) -> int64_t:
return self.n * self._nanos_inc
+ # FIXME: This should be typed as datetime, but DatetimeLikeIndex.insert
+ # checks self.freq.is_on_offset with a Timedelta sometimes.
def is_on_offset(self, dt) -> bool:
return True
@@ -861,16 +862,8 @@ cdef class Tick(SingleConstructorOffset):
def apply(self, other):
# Timestamp can handle tz and nano sec, thus no need to use apply_wraps
if isinstance(other, _Timestamp):
-
# GH#15126
- # in order to avoid a recursive
- # call of __add__ and __radd__ if there is
- # an exception, when we call using the + operator,
- # we directly call the known method
- result = other.__add__(self)
- if result is NotImplemented:
- raise OverflowError
- return result
+ return other + self.delta
elif other is NaT:
return NaT
elif is_datetime64_object(other) or PyDate_Check(other):
@@ -1097,7 +1090,7 @@ cdef class RelativeDeltaOffset(BaseOffset):
"applied vectorized"
)
- def is_on_offset(self, dt) -> bool:
+ def is_on_offset(self, dt: datetime) -> bool:
if self.normalize and not _is_normalized(dt):
return False
# TODO: see GH#1395
@@ -1384,7 +1377,7 @@ cdef class BusinessDay(BusinessMixin):
i8other = dtindex.view("i8")
return shift_bdays(i8other, self.n)
- def is_on_offset(self, dt) -> bool:
+ def is_on_offset(self, dt: datetime) -> bool:
if self.normalize and not _is_normalized(dt):
return False
return dt.weekday() < 5
@@ -1788,7 +1781,7 @@ cdef class WeekOfMonthMixin(SingleConstructorOffset):
to_day = self._get_offset_day(shifted)
return shift_day(shifted, to_day - shifted.day)
- def is_on_offset(self, dt) -> bool:
+ def is_on_offset(self, dt: datetime) -> bool:
if self.normalize and not _is_normalized(dt):
return False
return dt.day == self._get_offset_day(dt)
@@ -1843,12 +1836,12 @@ cdef class YearOffset(SingleConstructorOffset):
month = MONTH_ALIASES[self.month]
return f"{self._prefix}-{month}"
- def is_on_offset(self, dt) -> bool:
+ def is_on_offset(self, dt: datetime) -> bool:
if self.normalize and not _is_normalized(dt):
return False
return dt.month == self.month and dt.day == self._get_offset_day(dt)
- def _get_offset_day(self, other) -> int:
+ def _get_offset_day(self, other: datetime) -> int:
# override BaseOffset method to use self.month instead of other.month
cdef:
npy_datetimestruct dts
@@ -1995,7 +1988,7 @@ cdef class QuarterOffset(SingleConstructorOffset):
def is_anchored(self) -> bool:
return self.n == 1 and self.startingMonth is not None
- def is_on_offset(self, dt) -> bool:
+ def is_on_offset(self, dt: datetime) -> bool:
if self.normalize and not _is_normalized(dt):
return False
mod_month = (dt.month - self.startingMonth) % 3
@@ -2119,7 +2112,7 @@ cdef class QuarterBegin(QuarterOffset):
# Month-Based Offset Classes
cdef class MonthOffset(SingleConstructorOffset):
- def is_on_offset(self, dt) -> bool:
+ def is_on_offset(self, dt: datetime) -> bool:
if self.normalize and not _is_normalized(dt):
return False
return dt.day == self._get_offset_day(dt)
@@ -2339,7 +2332,7 @@ cdef class SemiMonthEnd(SemiMonthOffset):
_prefix = "SM"
_min_day_of_month = 1
- def is_on_offset(self, dt) -> bool:
+ def is_on_offset(self, dt: datetime) -> bool:
if self.normalize and not _is_normalized(dt):
return False
days_in_month = get_days_in_month(dt.year, dt.month)
@@ -2360,7 +2353,7 @@ cdef class SemiMonthBegin(SemiMonthOffset):
_prefix = "SMS"
- def is_on_offset(self, dt) -> bool:
+ def is_on_offset(self, dt: datetime) -> bool:
if self.normalize and not _is_normalized(dt):
return False
return dt.day in (1, self.day_of_month)
@@ -2375,8 +2368,8 @@ cdef class Week(SingleConstructorOffset):
Weekly offset.
Parameters
- ----------f
- weekday : int, default None
+ ----------
+ weekday : int or None, default None
Always generate specific day of week. 0 for Monday.
"""
| It looks like for rollforward/rollback we have some tests that pass `date` instead of `datetime`. That and a couple of other outliers mean there are some things that are not yet annotated. | https://api.github.com/repos/pandas-dev/pandas/pulls/34815 | 2020-06-16T03:35:11Z | 2020-06-16T12:49:30Z | 2020-06-16T12:49:30Z | 2020-06-16T15:34:51Z |
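As a quick sanity check on the newly annotated methods — nothing here is part of the diff, just the public behavior they describe:

```python
import pandas as pd

bday = pd.offsets.BusinessDay()
sat = pd.Timestamp("2020-06-20")        # a Saturday

print(bday.is_on_offset(sat))           # False
print(bday.rollforward(sat))            # 2020-06-22 (next business day)
print(bday.rollback(sat))               # 2020-06-19 (previous business day)

# Tick.apply now adds self.delta directly instead of re-dispatching
# through Timestamp.__add__.
print(pd.Timestamp("2020-01-01") + pd.offsets.Hour(2))  # 2020-01-01 02:00:00
```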
ENH: add masked algorithm for mean() function | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index c7573ee860744..2d82ffd95adb6 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -160,7 +160,7 @@ Deprecations
Performance improvements
~~~~~~~~~~~~~~~~~~~~~~~~
- Performance improvement in :meth:`IntervalIndex.isin` (:issue:`38353`)
--
+- Performance improvement in :meth:`Series.mean` for nullable data types (:issue:`34814`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/array_algos/masked_reductions.py b/pandas/core/array_algos/masked_reductions.py
index bce6f1aafb2c5..ec0f2c61e0a29 100644
--- a/pandas/core/array_algos/masked_reductions.py
+++ b/pandas/core/array_algos/masked_reductions.py
@@ -107,3 +107,12 @@ def min(values: np.ndarray, mask: np.ndarray, *, skipna: bool = True):
def max(values: np.ndarray, mask: np.ndarray, *, skipna: bool = True):
return _minmax(np.max, values=values, mask=mask, skipna=skipna)
+
+
+def mean(values: np.ndarray, mask: np.ndarray, skipna: bool = True):
+ if not values.size or mask.all():
+ return libmissing.NA
+ _sum = _sumprod(np.sum, values=values, mask=mask, skipna=skipna)
+ count = np.count_nonzero(~mask)
+ mean_value = _sum / count
+ return mean_value
diff --git a/pandas/core/arrays/masked.py b/pandas/core/arrays/masked.py
index 7821f103909da..3cf25847ed3d0 100644
--- a/pandas/core/arrays/masked.py
+++ b/pandas/core/arrays/masked.py
@@ -394,7 +394,7 @@ def _reduce(self, name: str, *, skipna: bool = True, **kwargs):
data = self._data
mask = self._mask
- if name in {"sum", "prod", "min", "max"}:
+ if name in {"sum", "prod", "min", "max", "mean"}:
op = getattr(masked_reductions, name)
return op(data, mask, skipna=skipna, **kwargs)
diff --git a/pandas/tests/reductions/test_reductions.py b/pandas/tests/reductions/test_reductions.py
index 8c2297699807d..cc5dc675c36e6 100644
--- a/pandas/tests/reductions/test_reductions.py
+++ b/pandas/tests/reductions/test_reductions.py
@@ -679,6 +679,23 @@ def test_empty_multi(self, method, unit):
expected = Series([1, np.nan], index=["a", "b"])
tm.assert_series_equal(result, expected)
+ @pytest.mark.parametrize("method", ["mean"])
+ @pytest.mark.parametrize("dtype", ["Float64", "Int64", "boolean"])
+ def test_ops_consistency_on_empty_nullable(self, method, dtype):
+
+ # GH#34814
+ # consistency for nullable dtypes on empty or ALL-NA mean
+
+ # empty series
+ eser = Series([], dtype=dtype)
+ result = getattr(eser, method)()
+ assert result is pd.NA
+
+ # ALL-NA series
+ nser = Series([np.nan], dtype=dtype)
+ result = getattr(nser, method)()
+ assert result is pd.NA
+
@pytest.mark.parametrize("method", ["mean", "median", "std", "var"])
def test_ops_consistency_on_empty(self, method):
| - [x] closes #34754
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/34814 | 2020-06-16T02:57:34Z | 2021-01-01T21:46:48Z | 2021-01-01T21:46:48Z | 2021-01-01T21:46:52Z |
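What the masked path yields in practice; the final lines restate the helper's arithmetic (masked sum over unmasked count), not its exact code:

```python
import numpy as np
import pandas as pd

s = pd.Series([1, 2, None], dtype="Int64")
print(s.mean())                               # 1.5
print(pd.Series([], dtype="Int64").mean())    # <NA>, also for all-NA input

# Same arithmetic as masked_reductions.mean: sum of unmasked values
# divided by the count of unmasked entries.
values = np.array([1, 2, 0], dtype="int64")
mask = np.array([False, False, True])
print(values[~mask].sum() / np.count_nonzero(~mask))  # 1.5
```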
REF: move Resolution to tslibs.dtypes | diff --git a/pandas/_libs/tslibs/dtypes.pyx b/pandas/_libs/tslibs/dtypes.pyx
index 143eac7f1ef6e..70acb42712201 100644
--- a/pandas/_libs/tslibs/dtypes.pyx
+++ b/pandas/_libs/tslibs/dtypes.pyx
@@ -1,5 +1,6 @@
# period frequency constants corresponding to scikits timeseries
# originals
+from enum import Enum
cdef class PeriodDtypeBase:
@@ -112,6 +113,9 @@ _period_code_map.update({
"C": 5000, # Custom Business Day
})
+cdef set _month_names = {
+ x.split("-")[-1] for x in _period_code_map.keys() if x.startswith("A-")
+}
# Map attribute-name resolutions to resolution abbreviations
_attrname_to_abbrevs = {
@@ -127,6 +131,7 @@ _attrname_to_abbrevs = {
"nanosecond": "N",
}
cdef dict attrname_to_abbrevs = _attrname_to_abbrevs
+cdef dict _abbrev_to_attrnames = {v: k for k, v in attrname_to_abbrevs.items()}
class FreqGroup:
@@ -149,3 +154,124 @@ class FreqGroup:
def get_freq_group(code: int) -> int:
# See also: PeriodDtypeBase.freq_group
return (code // 1000) * 1000
+
+
+class Resolution(Enum):
+
+ # Note: cython won't allow us to reference the cdef versions at the
+ # module level
+ RESO_NS = 0
+ RESO_US = 1
+ RESO_MS = 2
+ RESO_SEC = 3
+ RESO_MIN = 4
+ RESO_HR = 5
+ RESO_DAY = 6
+ RESO_MTH = 7
+ RESO_QTR = 8
+ RESO_YR = 9
+
+ def __lt__(self, other):
+ return self.value < other.value
+
+ def __ge__(self, other):
+ return self.value >= other.value
+
+ @property
+ def freq_group(self):
+ # TODO: annotate as returning FreqGroup once that is an enum
+ if self == Resolution.RESO_NS:
+ return FreqGroup.FR_NS
+ elif self == Resolution.RESO_US:
+ return FreqGroup.FR_US
+ elif self == Resolution.RESO_MS:
+ return FreqGroup.FR_MS
+ elif self == Resolution.RESO_SEC:
+ return FreqGroup.FR_SEC
+ elif self == Resolution.RESO_MIN:
+ return FreqGroup.FR_MIN
+ elif self == Resolution.RESO_HR:
+ return FreqGroup.FR_HR
+ elif self == Resolution.RESO_DAY:
+ return FreqGroup.FR_DAY
+ elif self == Resolution.RESO_MTH:
+ return FreqGroup.FR_MTH
+ elif self == Resolution.RESO_QTR:
+ return FreqGroup.FR_QTR
+ elif self == Resolution.RESO_YR:
+ return FreqGroup.FR_ANN
+ else:
+ raise ValueError(self)
+
+ @property
+ def attrname(self) -> str:
+ """
+ Return datetime attribute name corresponding to this Resolution.
+
+ Examples
+ --------
+ >>> Resolution.RESO_SEC.attrname
+ 'second'
+ """
+ return _reso_str_map[self.value]
+
+ @classmethod
+ def from_attrname(cls, attrname: str) -> "Resolution":
+ """
+ Return resolution str against resolution code.
+
+ Examples
+ --------
+ >>> Resolution.from_attrname('second')
+        <Resolution.RESO_SEC: 3>
+
+ >>> Resolution.from_attrname('second') == Resolution.RESO_SEC
+ True
+ """
+ return cls(_str_reso_map[attrname])
+
+ @classmethod
+ def get_reso_from_freq(cls, freq: str) -> "Resolution":
+ """
+ Return resolution code against frequency str.
+
+ `freq` is given by the `offset.freqstr` for some DateOffset object.
+
+ Examples
+ --------
+ >>> Resolution.get_reso_from_freq('H')
+        <Resolution.RESO_HR: 5>
+
+ >>> Resolution.get_reso_from_freq('H') == Resolution.RESO_HR
+ True
+ """
+ try:
+ attr_name = _abbrev_to_attrnames[freq]
+ except KeyError:
+ # For quarterly and yearly resolutions, we need to chop off
+ # a month string.
+ split_freq = freq.split("-")
+ if len(split_freq) != 2:
+ raise
+ if split_freq[1] not in _month_names:
+ # i.e. we want e.g. "Q-DEC", not "Q-INVALID"
+ raise
+ attr_name = _abbrev_to_attrnames[split_freq[0]]
+
+ return cls.from_attrname(attr_name)
+
+
+cdef dict _reso_str_map = {
+ Resolution.RESO_NS.value: "nanosecond",
+ Resolution.RESO_US.value: "microsecond",
+ Resolution.RESO_MS.value: "millisecond",
+ Resolution.RESO_SEC.value: "second",
+ Resolution.RESO_MIN.value: "minute",
+ Resolution.RESO_HR.value: "hour",
+ Resolution.RESO_DAY.value: "day",
+ Resolution.RESO_MTH.value: "month",
+ Resolution.RESO_QTR.value: "quarter",
+ Resolution.RESO_YR.value: "year",
+}
+
+cdef dict _str_reso_map = {v: k for k, v in _reso_str_map.items()}
diff --git a/pandas/_libs/tslibs/resolution.pyx b/pandas/_libs/tslibs/resolution.pyx
index 55522e99459cb..4dbecc76ad986 100644
--- a/pandas/_libs/tslibs/resolution.pyx
+++ b/pandas/_libs/tslibs/resolution.pyx
@@ -1,17 +1,15 @@
-from enum import Enum
import numpy as np
from numpy cimport ndarray, int64_t, int32_t
from pandas._libs.tslibs.util cimport get_nat
-from pandas._libs.tslibs.dtypes cimport attrname_to_abbrevs
+from pandas._libs.tslibs.dtypes import Resolution
from pandas._libs.tslibs.np_datetime cimport (
npy_datetimestruct, dt64_to_dtstruct)
-from pandas._libs.tslibs.frequencies import FreqGroup
from pandas._libs.tslibs.timezones cimport (
is_utc, is_tzlocal, maybe_get_tz, get_dst_info)
-from pandas._libs.tslibs.ccalendar cimport get_days_in_month, c_MONTH_NUMBERS
+from pandas._libs.tslibs.ccalendar cimport get_days_in_month
from pandas._libs.tslibs.tzconversion cimport tz_convert_utc_to_tzlocal
# ----------------------------------------------------------------------
@@ -31,22 +29,6 @@ cdef:
int RESO_QTR = 8
int RESO_YR = 9
-_abbrev_to_attrnames = {v: k for k, v in attrname_to_abbrevs.items()}
-
-_reso_str_map = {
- RESO_NS: "nanosecond",
- RESO_US: "microsecond",
- RESO_MS: "millisecond",
- RESO_SEC: "second",
- RESO_MIN: "minute",
- RESO_HR: "hour",
- RESO_DAY: "day",
- RESO_MTH: "month",
- RESO_QTR: "quarter",
- RESO_YR: "year",
-}
-
-_str_reso_map = {v: k for k, v in _reso_str_map.items()}
# ----------------------------------------------------------------------
@@ -122,111 +104,6 @@ cdef inline int _reso_stamp(npy_datetimestruct *dts):
return RESO_DAY
-class Resolution(Enum):
-
- # Note: cython won't allow us to reference the cdef versions at the
- # module level
- RESO_NS = 0
- RESO_US = 1
- RESO_MS = 2
- RESO_SEC = 3
- RESO_MIN = 4
- RESO_HR = 5
- RESO_DAY = 6
- RESO_MTH = 7
- RESO_QTR = 8
- RESO_YR = 9
-
- def __lt__(self, other):
- return self.value < other.value
-
- def __ge__(self, other):
- return self.value >= other.value
-
- @property
- def freq_group(self):
- # TODO: annotate as returning FreqGroup once that is an enum
- if self == Resolution.RESO_NS:
- return FreqGroup.FR_NS
- elif self == Resolution.RESO_US:
- return FreqGroup.FR_US
- elif self == Resolution.RESO_MS:
- return FreqGroup.FR_MS
- elif self == Resolution.RESO_SEC:
- return FreqGroup.FR_SEC
- elif self == Resolution.RESO_MIN:
- return FreqGroup.FR_MIN
- elif self == Resolution.RESO_HR:
- return FreqGroup.FR_HR
- elif self == Resolution.RESO_DAY:
- return FreqGroup.FR_DAY
- elif self == Resolution.RESO_MTH:
- return FreqGroup.FR_MTH
- elif self == Resolution.RESO_QTR:
- return FreqGroup.FR_QTR
- elif self == Resolution.RESO_YR:
- return FreqGroup.FR_ANN
- else:
- raise ValueError(self)
-
- @property
- def attrname(self) -> str:
- """
- Return datetime attribute name corresponding to this Resolution.
-
- Examples
- --------
- >>> Resolution.RESO_SEC.attrname
- 'second'
- """
- return _reso_str_map[self.value]
-
- @classmethod
- def from_attrname(cls, attrname: str) -> "Resolution":
- """
- Return resolution str against resolution code.
-
- Examples
- --------
- >>> Resolution.from_attrname('second')
- 2
-
- >>> Resolution.from_attrname('second') == Resolution.RESO_SEC
- True
- """
- return cls(_str_reso_map[attrname])
-
- @classmethod
- def get_reso_from_freq(cls, freq: str) -> "Resolution":
- """
- Return resolution code against frequency str.
-
- `freq` is given by the `offset.freqstr` for some DateOffset object.
-
- Examples
- --------
- >>> Resolution.get_reso_from_freq('H')
- 4
-
- >>> Resolution.get_reso_from_freq('H') == Resolution.RESO_HR
- True
- """
- try:
- attr_name = _abbrev_to_attrnames[freq]
- except KeyError:
- # For quarterly and yearly resolutions, we need to chop off
- # a month string.
- split_freq = freq.split("-")
- if len(split_freq) != 2:
- raise
- if split_freq[1] not in c_MONTH_NUMBERS:
- # i.e. we want e.g. "Q-DEC", not "Q-INVALID"
- raise
- attr_name = _abbrev_to_attrnames[split_freq[0]]
-
- return cls.from_attrname(attr_name)
-
-
# ----------------------------------------------------------------------
# Frequency Inference
| This is pretty much a clean move, in preparation for making FreqGroup an enum and trying to de-duplicate our 3+ enum-like classes | https://api.github.com/repos/pandas-dev/pandas/pulls/34813 | 2020-06-16T02:04:17Z | 2020-06-16T12:51:02Z | 2020-06-16T12:51:02Z | 2020-06-16T15:51:10Z |
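Exercising the relocated enum from its new module (this is internal API, so the import path is only meaningful from this change onward):

```python
from pandas._libs.tslibs.dtypes import Resolution

reso = Resolution.get_reso_from_freq("H")
print(reso is Resolution.RESO_HR)                 # True
print(reso.attrname)                              # 'hour'
# Finer resolutions sort below coarser ones: RESO_SEC (3) < RESO_HR (5).
print(Resolution.from_attrname("second") < reso)  # True
```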
BUG: apply() fails on some value types | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index 10522ff797c59..9ef60e2c8bf2e 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -1050,6 +1050,7 @@ Reshaping
- Bug in :func:`Dataframe.aggregate` and :func:`Series.aggregate` was causing recursive loop in some cases (:issue:`34224`)
- Fixed bug in :func:`melt` where melting MultiIndex columns with ``col_level`` > 0 would raise a ``KeyError`` on ``id_vars`` (:issue:`34129`)
- Bug in :meth:`Series.where` with an empty Series and empty ``cond`` having non-bool dtype (:issue:`34592`)
+- Fixed regression where :meth:`DataFrame.apply` would raise ``ValueError`` for elements with ``S`` dtype (:issue:`34529`)
Sparse
^^^^^^
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index e69e3bab10af8..d0417d51da497 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -1608,7 +1608,7 @@ def construct_1d_ndarray_preserving_na(
"""
subarr = np.array(values, dtype=dtype, copy=copy)
- if dtype is not None and dtype.kind in ("U", "S"):
+ if dtype is not None and dtype.kind == "U":
# GH-21083
# We can't just return np.array(subarr, dtype='str') since
# NumPy will convert the non-string objects into strings
diff --git a/pandas/tests/frame/test_apply.py b/pandas/tests/frame/test_apply.py
index d12699397d1e4..48a141a657cbb 100644
--- a/pandas/tests/frame/test_apply.py
+++ b/pandas/tests/frame/test_apply.py
@@ -785,6 +785,17 @@ def non_reducing_function(val):
df.applymap(func)
assert values == df.a.to_list()
+ def test_apply_with_byte_string(self):
+ # GH 34529
+ df = pd.DataFrame(np.array([b"abcd", b"efgh"]), columns=["col"])
+ expected = pd.DataFrame(
+ np.array([b"abcd", b"efgh"]), columns=["col"], dtype=object
+ )
+ # After we make the apply we expect a dataframe just
+ # like the original but with the object dtype
+ result = df.apply(lambda x: x.astype("object"))
+ tm.assert_frame_equal(result, expected)
+
class TestInferOutputShape:
# the user has supplied an opaque UDF where
| - [x] closes #34529
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/34812 | 2020-06-15T23:47:32Z | 2020-06-19T22:33:21Z | 2020-06-19T22:33:21Z | 2020-06-19T22:33:26Z |
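A minimal reproduction mirroring the added test; before the fix the byte-string column fell into the str-only preserving-NA branch (restricted to kind ``"U"`` above) and raised ``ValueError``:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.array([b"abcd", b"efgh"]), columns=["col"])
result = df.apply(lambda x: x.astype("object"))  # raised ValueError before
print(result["col"].tolist())                    # [b'abcd', b'efgh']
```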
BUG: reading line-format JSON from file url #27135 | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index c5eb2febe8ae9..70c45acec9f35 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -1040,6 +1040,7 @@ I/O
- Bug in :meth:`read_excel` for ODS files removes 0.0 values (:issue:`27222`)
- Bug in :meth:`ujson.encode` was raising an `OverflowError` with numbers larger than sys.maxsize (:issue: `34395`)
- Bug in :meth:`HDFStore.append_to_multiple` was raising a ``ValueError`` when the min_itemsize parameter is set (:issue:`11238`)
+- :meth:`read_json` can now read a line-delimited JSON file from a file url while ``lines`` and ``chunksize`` are set (:issue:`27135`).
Plotting
^^^^^^^^
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
index b973553a767ba..ff37c36962aec 100644
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -1,6 +1,6 @@
from collections import abc
import functools
-from io import StringIO
+from io import BytesIO, StringIO
from itertools import islice
import os
from typing import Any, Callable, Optional, Type
@@ -724,6 +724,9 @@ def _get_data_from_filepath(self, filepath_or_buffer):
self.should_close = True
self.open_stream = data
+ if isinstance(data, BytesIO):
+ data = data.getvalue().decode()
+
return data
def _combine_lines(self, lines) -> str:
diff --git a/pandas/tests/io/json/data/line_delimited.json b/pandas/tests/io/json/data/line_delimited.json
new file mode 100644
index 0000000000000..be84245329583
--- /dev/null
+++ b/pandas/tests/io/json/data/line_delimited.json
@@ -0,0 +1,3 @@
+ {"a": 1, "b": 2}
+ {"a": 3, "b": 4}
+ {"a": 5, "b": 6}
diff --git a/pandas/tests/io/json/test_readlines.py b/pandas/tests/io/json/test_readlines.py
index 53462eaaada8d..b475fa2c514ff 100644
--- a/pandas/tests/io/json/test_readlines.py
+++ b/pandas/tests/io/json/test_readlines.py
@@ -1,4 +1,5 @@
from io import StringIO
+from pathlib import Path
import pytest
@@ -219,3 +220,18 @@ def test_readjson_nrows_requires_lines():
msg = "nrows can only be passed if lines=True"
with pytest.raises(ValueError, match=msg):
pd.read_json(jsonl, lines=False, nrows=2)
+
+
+def test_readjson_lines_chunks_fileurl(datapath):
+ # GH 27135
+ # Test reading line-format JSON from file url
+ df_list_expected = [
+ pd.DataFrame([[1, 2]], columns=["a", "b"], index=[0]),
+ pd.DataFrame([[3, 4]], columns=["a", "b"], index=[1]),
+ pd.DataFrame([[5, 6]], columns=["a", "b"], index=[2]),
+ ]
+ os_path = datapath("io", "json", "data", "line_delimited.json")
+ file_url = Path(os_path).as_uri()
+ url_reader = pd.read_json(file_url, lines=True, chunksize=1)
+ for index, chunk in enumerate(url_reader):
+ tm.assert_frame_equal(chunk, df_list_expected[index])
| - [x] closes #27135
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew note entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/34811 | 2020-06-15T21:05:52Z | 2020-06-29T21:42:19Z | 2020-06-29T21:42:19Z | 2020-06-29T21:51:40Z |
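A self-contained repro (the file name is illustrative): write a line-delimited JSON file, then stream it back in chunks through its `file://` URL. Internally, `urlopen` hands back a `BytesIO` here, which the fix decodes before line-splitting.

```python
from pathlib import Path

import pandas as pd

path = Path("line_delimited.json").resolve()   # hypothetical scratch file
path.write_text('{"a": 1, "b": 2}\n{"a": 3, "b": 4}\n')

reader = pd.read_json(path.as_uri(), lines=True, chunksize=1)
for chunk in reader:
    print(chunk)
```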
API: Allow non-tuples in pandas.merge | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index 0ca19ffd1f496..d32eeb493b2c2 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -693,7 +693,6 @@ Other API changes
- :func: `pandas.api.dtypes.is_string_dtype` no longer incorrectly identifies categorical series as string.
- :func:`read_excel` no longer takes ``**kwds`` arguments. This means that passing in keyword ``chunksize`` now raises a ``TypeError``
(previously raised a ``NotImplementedError``), while passing in keyword ``encoding`` now raises a ``TypeError`` (:issue:`34464`)
-- :func: `merge` now checks ``suffixes`` parameter type to be ``tuple`` and raises ``TypeError``, whereas before a ``list`` or ``set`` were accepted and that the ``set`` could produce unexpected results (:issue:`33740`)
- :class:`Period` no longer accepts tuples for the ``freq`` argument (:issue:`34658`)
- :meth:`Series.interpolate` and :meth:`DataFrame.interpolate` now raises ValueError if ``limit_direction`` is 'forward' or 'both' and ``method`` is 'backfill' or 'bfill' or ``limit_direction`` is 'backward' or 'both' and ``method`` is 'pad' or 'ffill' (:issue:`34746`)
- The :class:`DataFrame` constructor no longer accepts a list of ``DataFrame`` objects. Because of changes to NumPy, ``DataFrame`` objects are now consistently treated as 2D objects, so a list of ``DataFrames`` is considered 3D, and no longer acceptible for the ``DataFrame`` constructor (:issue:`32289`).
@@ -787,6 +786,7 @@ Deprecations
- :meth:`DataFrame.to_dict` has deprecated accepting short names for ``orient`` in future versions (:issue:`32515`)
- :meth:`Categorical.to_dense` is deprecated and will be removed in a future version, use ``np.asarray(cat)`` instead (:issue:`32639`)
- The ``fastpath`` keyword in the ``SingleBlockManager`` constructor is deprecated and will be removed in a future version (:issue:`33092`)
+- Providing ``suffixes`` as a ``set`` in :func:`pandas.merge` is deprecated. Provide a tuple instead (:issue:`33740`, :issue:`34741`).
- :meth:`Index.is_mixed` is deprecated and will be removed in a future version, check ``index.inferred_type`` directly instead (:issue:`32922`)
- Passing any arguments but the first one to :func:`read_html` as
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index a21a45f415a47..b6993e9ed851a 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -227,10 +227,13 @@
sort : bool, default False
Sort the join keys lexicographically in the result DataFrame. If False,
the order of the join keys depends on the join type (how keyword).
-suffixes : tuple of (str, str), default ('_x', '_y')
- Suffix to apply to overlapping column names in the left and right
- side, respectively. To raise an exception on overlapping columns use
- (False, False).
+suffixes : list-like, default is ("_x", "_y")
+ A length-2 sequence where each element is optionally a string
+ indicating the suffix to add to overlapping column names in
+ `left` and `right` respectively. Pass a value of `None` instead
+ of a string to indicate that the column name from `left` or
+ `right` should be left as-is, with no suffix. At least one of the
+ values must not be None.
copy : bool, default True
If False, avoid copy if possible.
indicator : bool or str, default False
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index 5e4eb89f0b45f..27b331babe692 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -194,7 +194,7 @@ def merge_ordered(
left DataFrame.
fill_method : {'ffill', None}, default None
Interpolation method for data.
- suffixes : Sequence, default is ("_x", "_y")
+ suffixes : list-like, default is ("_x", "_y")
A length-2 sequence where each element is optionally a string
indicating the suffix to add to overlapping column names in
`left` and `right` respectively. Pass a value of `None` instead
@@ -2072,9 +2072,13 @@ def _items_overlap_with_suffix(left: Index, right: Index, suffixes: Tuple[str, s
If corresponding suffix is empty, the entry is simply converted to string.
"""
- if not isinstance(suffixes, tuple):
- raise TypeError(
- f"suffixes should be tuple of (str, str). But got {type(suffixes).__name__}"
+ if not is_list_like(suffixes, allow_sets=False):
+ warnings.warn(
+ f"Passing 'suffixes' as a {type(suffixes)}, is not supported and may give "
+ "unexpected results. Provide 'suffixes' as a tuple instead. In the "
+ "future a 'TypeError' will be raised.",
+ FutureWarning,
+ stacklevel=4,
)
to_rename = left.intersection(right)
diff --git a/pandas/tests/reshape/merge/test_merge.py b/pandas/tests/reshape/merge/test_merge.py
index 0a4d5f17a48cc..4fd3c688b8771 100644
--- a/pandas/tests/reshape/merge/test_merge.py
+++ b/pandas/tests/reshape/merge/test_merge.py
@@ -1999,6 +1999,7 @@ def test_merge_series(on, left_on, right_on, left_index, right_index, nm):
(0, 0, dict(suffixes=("", "_dup")), ["0", "0_dup"]),
(0, 0, dict(suffixes=(None, "_dup")), [0, "0_dup"]),
(0, 0, dict(suffixes=("_x", "_y")), ["0_x", "0_y"]),
+ (0, 0, dict(suffixes=["_x", "_y"]), ["0_x", "0_y"]),
("a", 0, dict(suffixes=(None, "_y")), ["a", 0]),
(0.0, 0.0, dict(suffixes=("_x", None)), ["0.0_x", 0.0]),
("b", "b", dict(suffixes=(None, "_y")), ["b", "b_y"]),
@@ -2069,18 +2070,13 @@ def test_merge_suffix_error(col1, col2, suffixes):
pd.merge(a, b, left_index=True, right_index=True, suffixes=suffixes)
-@pytest.mark.parametrize(
- "col1, col2, suffixes", [("a", "a", {"a", "b"}), ("a", "a", None), (0, 0, None)],
-)
-def test_merge_suffix_type_error(col1, col2, suffixes):
- a = pd.DataFrame({col1: [1, 2, 3]})
- b = pd.DataFrame({col2: [3, 4, 5]})
+@pytest.mark.parametrize("suffixes", [{"left", "right"}, {"left": 0, "right": 0}])
+def test_merge_suffix_warns(suffixes):
+ a = pd.DataFrame({"a": [1, 2, 3]})
+ b = pd.DataFrame({"b": [3, 4, 5]})
- msg = (
- f"suffixes should be tuple of \\(str, str\\). But got {type(suffixes).__name__}"
- )
- with pytest.raises(TypeError, match=msg):
- pd.merge(a, b, left_index=True, right_index=True, suffixes=suffixes)
+ with tm.assert_produces_warning(FutureWarning):
+ pd.merge(a, b, left_index=True, right_index=True, suffixes={"left", "right"})
@pytest.mark.parametrize(
| Closes https://github.com/pandas-dev/pandas/issues/34741, while
retaining the spirit of
https://github.com/pandas-dev/pandas/pull/34208.
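
A rough sketch of the resulting behavior (example frames are made up, not from the PR):

```python
import pandas as pd

left = pd.DataFrame({"a": [1, 2, 3]})
right = pd.DataFrame({"a": [3, 4, 5]})

# Lists (and other ordered list-likes) are accepted for ``suffixes`` again.
pd.merge(left, right, left_index=True, right_index=True, suffixes=["_x", "_y"])

# Sets are unordered, so instead of the TypeError from #34208 they now
# emit a FutureWarning; a TypeError will return in a future release.
pd.merge(left, right, left_index=True, right_index=True, suffixes={"_l", "_r"})
```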
| https://api.github.com/repos/pandas-dev/pandas/pulls/34810 | 2020-06-15T21:03:27Z | 2020-06-30T20:58:44Z | 2020-06-30T20:58:43Z | 2020-07-06T13:14:06Z |
CLN: liboffsets annotate, de-duplicate | diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
index d22f2b9117326..bf2998bfcd9d1 100644
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -491,18 +491,20 @@ cdef class BaseOffset:
# Name and Rendering Methods
def __repr__(self) -> str:
- className = getattr(self, '_outputName', type(self).__name__)
+ # _output_name used by B(Year|Quarter)(End|Begin) to
+ # expand "B" -> "Business"
+ class_name = getattr(self, "_output_name", type(self).__name__)
if abs(self.n) != 1:
- plural = 's'
+ plural = "s"
else:
- plural = ''
+ plural = ""
n_str = ""
if self.n != 1:
n_str = f"{self.n} * "
- out = f'<{n_str}{className}{plural}{self._repr_attrs()}>'
+ out = f"<{n_str}{class_name}{plural}{self._repr_attrs()}>"
return out
def _repr_attrs(self) -> str:
@@ -608,7 +610,7 @@ cdef class BaseOffset:
dt = dt + type(self)(1, normalize=self.normalize, **self.kwds)
return dt
- def _get_offset_day(self, datetime other):
+ def _get_offset_day(self, other: datetime) -> int:
# subclass must implement `_day_opt`; calling from the base class
# will raise NotImplementedError.
cdef:
@@ -632,7 +634,7 @@ cdef class BaseOffset:
# Staticmethod so we can call from Tick.__init__, will be unnecessary
# once BaseOffset is a cdef class and is inherited by Tick
@staticmethod
- def _validate_n(n):
+ def _validate_n(n) -> int:
"""
Require that `n` be an integer.
@@ -1010,7 +1012,7 @@ cdef class RelativeDeltaOffset(BaseOffset):
self.__dict__.update(state)
@apply_wraps
- def apply(self, other):
+ def apply(self, other: datetime) -> datetime:
if self._use_relativedelta:
other = _as_datetime(other)
@@ -1379,7 +1381,7 @@ cdef class BusinessDay(BusinessMixin):
@apply_index_wraps
def apply_index(self, dtindex):
- i8other = dtindex.asi8
+ i8other = dtindex.view("i8")
return shift_bdays(i8other, self.n)
def is_on_offset(self, dt) -> bool:
@@ -1482,7 +1484,7 @@ cdef class BusinessHour(BusinessMixin):
until = datetime(2014, 4, day, end.hour, end.minute)
return int((until - dtstart).total_seconds())
- def _get_closing_time(self, dt):
+ def _get_closing_time(self, dt: datetime) -> datetime:
"""
Get the closing time of a business hour interval by its opening time.
@@ -1582,7 +1584,7 @@ cdef class BusinessHour(BusinessMixin):
return datetime(other.year, other.month, other.day, hour, minute)
- def _prev_opening_time(self, other):
+ def _prev_opening_time(self, other: datetime) -> datetime:
"""
If n is positive, return the latest opening time earlier than or equal
to current time.
@@ -1602,7 +1604,7 @@ cdef class BusinessHour(BusinessMixin):
return self._next_opening_time(other, sign=-1)
@apply_wraps
- def rollback(self, dt):
+ def rollback(self, dt: datetime) -> datetime:
"""
Roll provided date backward to next offset only if not on offset.
"""
@@ -1615,7 +1617,7 @@ cdef class BusinessHour(BusinessMixin):
return dt
@apply_wraps
- def rollforward(self, dt):
+ def rollforward(self, dt: datetime) -> datetime:
"""
Roll provided date forward to next offset only if not on offset.
"""
@@ -1627,108 +1629,105 @@ cdef class BusinessHour(BusinessMixin):
return dt
@apply_wraps
- def apply(self, other):
- if PyDateTime_Check(other):
- # used for detecting edge condition
- nanosecond = getattr(other, "nanosecond", 0)
- # reset timezone and nanosecond
- # other may be a Timestamp, thus not use replace
- other = datetime(
- other.year,
- other.month,
- other.day,
- other.hour,
- other.minute,
- other.second,
- other.microsecond,
- )
- n = self.n
+ def apply(self, other: datetime) -> datetime:
+ # used for detecting edge condition
+ nanosecond = getattr(other, "nanosecond", 0)
+ # reset timezone and nanosecond
+ # other may be a Timestamp, thus not use replace
+ other = datetime(
+ other.year,
+ other.month,
+ other.day,
+ other.hour,
+ other.minute,
+ other.second,
+ other.microsecond,
+ )
+ n = self.n
- # adjust other to reduce number of cases to handle
- if n >= 0:
- if other.time() in self.end or not self._is_on_offset(other):
- other = self._next_opening_time(other)
+ # adjust other to reduce number of cases to handle
+ if n >= 0:
+ if other.time() in self.end or not self._is_on_offset(other):
+ other = self._next_opening_time(other)
+ else:
+ if other.time() in self.start:
+ # adjustment to move to previous business day
+ other = other - timedelta(seconds=1)
+ if not self._is_on_offset(other):
+ other = self._next_opening_time(other)
+ other = self._get_closing_time(other)
+
+ # get total business hours by sec in one business day
+ businesshours = sum(
+ self._get_business_hours_by_sec(st, en)
+ for st, en in zip(self.start, self.end)
+ )
+
+ bd, r = divmod(abs(n * 60), businesshours // 60)
+ if n < 0:
+ bd, r = -bd, -r
+
+ # adjust by business days first
+ if bd != 0:
+ if self._prefix.startswith("C"):
+ # GH#30593 this is a Custom offset
+ skip_bd = CustomBusinessDay(
+ n=bd,
+ weekmask=self.weekmask,
+ holidays=self.holidays,
+ calendar=self.calendar,
+ )
else:
- if other.time() in self.start:
- # adjustment to move to previous business day
- other = other - timedelta(seconds=1)
- if not self._is_on_offset(other):
- other = self._next_opening_time(other)
- other = self._get_closing_time(other)
-
- # get total business hours by sec in one business day
- businesshours = sum(
- self._get_business_hours_by_sec(st, en)
- for st, en in zip(self.start, self.end)
- )
+ skip_bd = BusinessDay(n=bd)
+ # midnight business hour may not on BusinessDay
+ if not self.next_bday.is_on_offset(other):
+ prev_open = self._prev_opening_time(other)
+ remain = other - prev_open
+ other = prev_open + skip_bd + remain
+ else:
+ other = other + skip_bd
- bd, r = divmod(abs(n * 60), businesshours // 60)
- if n < 0:
- bd, r = -bd, -r
-
- # adjust by business days first
- if bd != 0:
- if self._prefix.startswith("C"):
- # GH#30593 this is a Custom offset
- skip_bd = CustomBusinessDay(
- n=bd,
- weekmask=self.weekmask,
- holidays=self.holidays,
- calendar=self.calendar,
- )
+ # remaining business hours to adjust
+ bhour_remain = timedelta(minutes=r)
+
+ if n >= 0:
+ while bhour_remain != timedelta(0):
+ # business hour left in this business time interval
+ bhour = (
+ self._get_closing_time(self._prev_opening_time(other)) - other
+ )
+ if bhour_remain < bhour:
+ # finish adjusting if possible
+ other += bhour_remain
+ bhour_remain = timedelta(0)
else:
- skip_bd = BusinessDay(n=bd)
- # midnight business hour may not on BusinessDay
- if not self.next_bday.is_on_offset(other):
- prev_open = self._prev_opening_time(other)
- remain = other - prev_open
- other = prev_open + skip_bd + remain
+ # go to next business time interval
+ bhour_remain -= bhour
+ other = self._next_opening_time(other + bhour)
+ else:
+ while bhour_remain != timedelta(0):
+ # business hour left in this business time interval
+ bhour = self._next_opening_time(other) - other
+ if (
+ bhour_remain > bhour
+ or bhour_remain == bhour
+ and nanosecond != 0
+ ):
+ # finish adjusting if possible
+ other += bhour_remain
+ bhour_remain = timedelta(0)
else:
- other = other + skip_bd
-
- # remaining business hours to adjust
- bhour_remain = timedelta(minutes=r)
-
- if n >= 0:
- while bhour_remain != timedelta(0):
- # business hour left in this business time interval
- bhour = (
- self._get_closing_time(self._prev_opening_time(other)) - other
- )
- if bhour_remain < bhour:
- # finish adjusting if possible
- other += bhour_remain
- bhour_remain = timedelta(0)
- else:
- # go to next business time interval
- bhour_remain -= bhour
- other = self._next_opening_time(other + bhour)
- else:
- while bhour_remain != timedelta(0):
- # business hour left in this business time interval
- bhour = self._next_opening_time(other) - other
- if (
- bhour_remain > bhour
- or bhour_remain == bhour
- and nanosecond != 0
- ):
- # finish adjusting if possible
- other += bhour_remain
- bhour_remain = timedelta(0)
- else:
- # go to next business time interval
- bhour_remain -= bhour
- other = self._get_closing_time(
- self._next_opening_time(
- other + bhour - timedelta(seconds=1)
- )
+ # go to next business time interval
+ bhour_remain -= bhour
+ other = self._get_closing_time(
+ self._next_opening_time(
+ other + bhour - timedelta(seconds=1)
)
+ )
- return other
- else:
- raise ApplyTypeError("Only know how to combine business hour with datetime")
+ return other
- def is_on_offset(self, dt):
+ def is_on_offset(self, dt: datetime) -> bool:
if self.normalize and not _is_normalized(dt):
return False
@@ -1740,7 +1739,7 @@ cdef class BusinessHour(BusinessMixin):
# Distinguish by the time spent from previous opening time
return self._is_on_offset(dt)
- def _is_on_offset(self, dt):
+ def _is_on_offset(self, dt: datetime) -> bool:
"""
Slight speedups using calculated values.
"""
@@ -1779,14 +1778,11 @@ cdef class WeekOfMonthMixin(SingleConstructorOffset):
raise ValueError(f"Day must be 0<=day<=6, got {weekday}")
@apply_wraps
- def apply(self, other):
+ def apply(self, other: datetime) -> datetime:
compare_day = self._get_offset_day(other)
months = self.n
- if months > 0 and compare_day > other.day:
- months -= 1
- elif months <= 0 and compare_day < other.day:
- months += 1
+ months = roll_convention(other.day, months, compare_day)
shifted = shift_month(other, months, "start")
to_day = self._get_offset_day(shifted)
@@ -1861,7 +1857,7 @@ cdef class YearOffset(SingleConstructorOffset):
return get_day_of_month(&dts, self._day_opt)
@apply_wraps
- def apply(self, other):
+ def apply(self, other: datetime) -> datetime:
years = roll_qtrday(other, self.n, self.month, self._day_opt, modby=12)
months = years * 12 + (self.month - other.month)
return shift_month(other, months, self._day_opt)
@@ -1869,7 +1865,7 @@ cdef class YearOffset(SingleConstructorOffset):
@apply_index_wraps
def apply_index(self, dtindex):
shifted = shift_quarters(
- dtindex.asi8, self.n, self.month, self._day_opt, modby=12
+ dtindex.view("i8"), self.n, self.month, self._day_opt, modby=12
)
return shifted
@@ -1963,8 +1959,8 @@ cdef class QuarterOffset(SingleConstructorOffset):
# startingMonth vs month attr names are resolved
# FIXME: python annotations here breaks things
- # _default_startingMonth: int
- # _from_name_startingMonth: int
+ # _default_starting_month: int
+ # _from_name_starting_month: int
cdef readonly:
int startingMonth
@@ -1973,7 +1969,7 @@ cdef class QuarterOffset(SingleConstructorOffset):
BaseOffset.__init__(self, n, normalize)
if startingMonth is None:
- startingMonth = self._default_startingMonth
+ startingMonth = self._default_starting_month
self.startingMonth = startingMonth
cpdef __setstate__(self, state):
@@ -1987,8 +1983,8 @@ cdef class QuarterOffset(SingleConstructorOffset):
if suffix:
kwargs["startingMonth"] = MONTH_TO_CAL_NUM[suffix]
else:
- if cls._from_name_startingMonth is not None:
- kwargs["startingMonth"] = cls._from_name_startingMonth
+ if cls._from_name_starting_month is not None:
+ kwargs["startingMonth"] = cls._from_name_starting_month
return cls(**kwargs)
@property
@@ -2006,7 +2002,7 @@ cdef class QuarterOffset(SingleConstructorOffset):
return mod_month == 0 and dt.day == self._get_offset_day(dt)
@apply_wraps
- def apply(self, other):
+ def apply(self, other: datetime) -> datetime:
# months_since: find the calendar quarter containing other.month,
# e.g. if other.month == 8, the calendar quarter is [Jul, Aug, Sep].
# Then find the month in that quarter containing an is_on_offset date for
@@ -2022,7 +2018,7 @@ cdef class QuarterOffset(SingleConstructorOffset):
@apply_index_wraps
def apply_index(self, dtindex):
shifted = shift_quarters(
- dtindex.asi8, self.n, self.startingMonth, self._day_opt
+ dtindex.view("i8"), self.n, self.startingMonth, self._day_opt
)
return shifted
@@ -2048,9 +2044,9 @@ cdef class BQuarterEnd(QuarterOffset):
>>> ts + BQuarterEnd(startingMonth=2)
Timestamp('2020-05-29 05:01:15')
"""
- _outputName = "BusinessQuarterEnd"
- _default_startingMonth = 3
- _from_name_startingMonth = 12
+ _output_name = "BusinessQuarterEnd"
+ _default_starting_month = 3
+ _from_name_starting_month = 12
_prefix = "BQ"
_day_opt = "business_end"
@@ -2076,9 +2072,9 @@ cdef class BQuarterBegin(QuarterOffset):
>>> ts + BQuarterBegin(-1)
Timestamp('2020-03-02 05:01:15')
"""
- _outputName = "BusinessQuarterBegin"
- _default_startingMonth = 3
- _from_name_startingMonth = 1
+ _output_name = "BusinessQuarterBegin"
+ _default_starting_month = 3
+ _from_name_starting_month = 1
_prefix = "BQS"
_day_opt = "business_start"
@@ -2091,8 +2087,7 @@ cdef class QuarterEnd(QuarterOffset):
startingMonth = 2 corresponds to dates like 2/28/2007, 5/31/2007, ...
startingMonth = 3 corresponds to dates like 3/31/2007, 6/30/2007, ...
"""
- _outputName = "QuarterEnd"
- _default_startingMonth = 3
+ _default_starting_month = 3
_prefix = "Q"
_day_opt = "end"
@@ -2105,6 +2100,7 @@ cdef class QuarterEnd(QuarterOffset):
QuarterOffset.__init__(self, n, normalize, startingMonth)
self._period_dtype_code = PeriodDtypeCode.Q_DEC + self.startingMonth % 12
+
cdef class QuarterBegin(QuarterOffset):
"""
DateOffset increments between Quarter start dates.
@@ -2113,9 +2109,8 @@ cdef class QuarterBegin(QuarterOffset):
startingMonth = 2 corresponds to dates like 2/01/2007, 5/01/2007, ...
startingMonth = 3 corresponds to dates like 3/01/2007, 6/01/2007, ...
"""
- _outputName = "QuarterBegin"
- _default_startingMonth = 3
- _from_name_startingMonth = 1
+ _default_starting_month = 3
+ _from_name_starting_month = 1
_prefix = "QS"
_day_opt = "start"
@@ -2130,14 +2125,14 @@ cdef class MonthOffset(SingleConstructorOffset):
return dt.day == self._get_offset_day(dt)
@apply_wraps
- def apply(self, other):
+ def apply(self, other: datetime) -> datetime:
compare_day = self._get_offset_day(other)
n = roll_convention(other.day, self.n, compare_day)
return shift_month(other, n, self._day_opt)
@apply_index_wraps
def apply_index(self, dtindex):
- shifted = shift_months(dtindex.asi8, self.n, self._day_opt)
+ shifted = shift_months(dtindex.view("i8"), self.n, self._day_opt)
return shifted
cpdef __setstate__(self, state):
@@ -2244,29 +2239,31 @@ cdef class SemiMonthOffset(SingleConstructorOffset):
return self._prefix + suffix
@apply_wraps
- def apply(self, other):
+ def apply(self, other: datetime) -> datetime:
+ is_start = isinstance(self, SemiMonthBegin)
+
# shift `other` to self.day_of_month, incrementing `n` if necessary
n = roll_convention(other.day, self.n, self.day_of_month)
days_in_month = get_days_in_month(other.year, other.month)
-
# For SemiMonthBegin on other.day == 1 and
# SemiMonthEnd on other.day == days_in_month,
# shifting `other` to `self.day_of_month` _always_ requires
# incrementing/decrementing `n`, regardless of whether it is
# initially positive.
- if type(self) is SemiMonthBegin and (self.n <= 0 and other.day == 1):
+ if is_start and (self.n <= 0 and other.day == 1):
n -= 1
- elif type(self) is SemiMonthEnd and (self.n > 0 and other.day == days_in_month):
+ elif (not is_start) and (self.n > 0 and other.day == days_in_month):
n += 1
- return self._apply(n, other)
+ if is_start:
+ months = n // 2 + n % 2
+ to_day = 1 if n % 2 else self.day_of_month
+ else:
+ months = n // 2
+ to_day = 31 if n % 2 else self.day_of_month
- def _apply(self, n, other):
- """
- Handle specific apply logic for child classes.
- """
- raise NotImplementedError(self)
+ return shift_month(other, months, to_day)
@apply_index_wraps
@cython.wraparound(False)
@@ -2348,11 +2345,6 @@ cdef class SemiMonthEnd(SemiMonthOffset):
days_in_month = get_days_in_month(dt.year, dt.month)
return dt.day in (self.day_of_month, days_in_month)
- def _apply(self, n, other):
- months = n // 2
- day = 31 if n % 2 else self.day_of_month
- return shift_month(other, months, day)
-
cdef class SemiMonthBegin(SemiMonthOffset):
"""
@@ -2373,11 +2365,6 @@ cdef class SemiMonthBegin(SemiMonthOffset):
return False
return dt.day in (1, self.day_of_month)
- def _apply(self, n, other):
- months = n // 2 + n % 2
- day = 1 if n % 2 else self.day_of_month
- return shift_month(other, months, day)
-
# ---------------------------------------------------------------------
# Week-Based Offset Classes
@@ -2446,25 +2433,25 @@ cdef class Week(SingleConstructorOffset):
td64 = np.timedelta64(td, "ns")
return dtindex + td64
else:
- return self._end_apply_index(dtindex)
+ i8other = dtindex.view("i8")
+ return self._end_apply_index(i8other)
@cython.wraparound(False)
@cython.boundscheck(False)
- def _end_apply_index(self, dtindex):
+ cdef _end_apply_index(self, const int64_t[:] i8other):
"""
Add self to the given DatetimeIndex, specialized for case where
self.weekday is non-null.
Parameters
----------
- dtindex : DatetimeIndex
+ i8other : const int64_t[:]
Returns
-------
ndarray[int64_t]
"""
cdef:
- int64_t[:] i8other = dtindex.view("i8")
Py_ssize_t i, count = len(i8other)
int64_t val
int64_t[:] out = np.empty(count, dtype="i8")
@@ -2493,7 +2480,7 @@ cdef class Week(SingleConstructorOffset):
return out.base
- def is_on_offset(self, dt) -> bool:
+ def is_on_offset(self, dt: datetime) -> bool:
if self.normalize and not _is_normalized(dt):
return False
elif self.weekday is None:
@@ -2647,6 +2634,7 @@ cdef class LastWeekOfMonth(WeekOfMonthMixin):
weekday = weekday_to_int[suffix]
return cls(weekday=weekday)
+
# ---------------------------------------------------------------------
# Special Offset Classes
@@ -2767,7 +2755,7 @@ cdef class FY5253(FY5253Mixin):
return year_end == dt
@apply_wraps
- def apply(self, other):
+ def apply(self, other: datetime) -> datetime:
norm = Timestamp(other).normalize()
n = self.n
@@ -2822,7 +2810,7 @@ cdef class FY5253(FY5253Mixin):
)
return result
- def get_year_end(self, dt):
+ def get_year_end(self, dt: datetime) -> datetime:
assert dt.tzinfo is None
dim = get_days_in_month(dt.year, self.startingMonth)
@@ -2968,7 +2956,7 @@ cdef class FY5253Quarter(FY5253Mixin):
variation=self.variation,
)
- def _rollback_to_year(self, other):
+ def _rollback_to_year(self, other: datetime):
"""
Roll `other` back to the most recent date that was on a fiscal year
end.
@@ -3016,7 +3004,7 @@ cdef class FY5253Quarter(FY5253Mixin):
return start, num_qtrs, tdelta
@apply_wraps
- def apply(self, other):
+ def apply(self, other: datetime) -> datetime:
# Note: self.n == 0 is not allowed.
n = self.n
@@ -3044,7 +3032,7 @@ cdef class FY5253Quarter(FY5253Mixin):
return res
- def get_weeks(self, dt):
+ def get_weeks(self, dt: datetime):
ret = [13] * 4
year_has_extra_week = self.year_has_extra_week(dt)
@@ -3107,7 +3095,7 @@ cdef class Easter(SingleConstructorOffset):
self.normalize = state.pop("normalize")
@apply_wraps
- def apply(self, other):
+ def apply(self, other: datetime) -> datetime:
current_easter = easter(other.year)
current_easter = datetime(
current_easter.year, current_easter.month, current_easter.day
@@ -3329,7 +3317,7 @@ cdef class _CustomBusinessMonth(BusinessMixin):
return roll_func
@apply_wraps
- def apply(self, other):
+ def apply(self, other: datetime) -> datetime:
# First move to month offset
cur_month_offset_date = self.month_roll(other)
@@ -3947,7 +3935,7 @@ cpdef int roll_convention(int other, int n, int compare) nogil:
def roll_qtrday(other: datetime, n: int, month: int,
- day_opt: object, modby: int) -> int:
+ day_opt: str, modby: int) -> int:
"""
Possibly increment or decrement the number of periods to shift
based on rollforward/rollbackward conventions.
@@ -3957,7 +3945,7 @@ def roll_qtrday(other: datetime, n: int, month: int,
other : datetime or Timestamp
n : number of periods to increment, before adjusting for rolling
month : int reference month giving the first month of the year
- day_opt : 'start', 'end', 'business_start', 'business_end', or int
+ day_opt : {'start', 'end', 'business_start', 'business_end'}
The convention to use in finding the day in a given month against
which to compare for rollforward/rollbackward decisions.
modby : int 3 for quarters, 12 for years
| https://api.github.com/repos/pandas-dev/pandas/pulls/34808 | 2020-06-15T18:11:49Z | 2020-06-15T19:19:55Z | 2020-06-15T19:19:55Z | 2020-06-15T22:40:42Z |
|
Backport PR #34804 on branch 1.0.x (TST: ensure read_parquet filter argument is correctly passed through (pyarrow engine)) | diff --git a/pandas/tests/io/test_parquet.py b/pandas/tests/io/test_parquet.py
index 853b4e754bcd0..0b883e2bd142f 100644
--- a/pandas/tests/io/test_parquet.py
+++ b/pandas/tests/io/test_parquet.py
@@ -591,6 +591,17 @@ def test_additional_extension_types(self, pa):
)
check_round_trip(df, pa)
+ @td.skip_if_no("pyarrow", min_version="0.17")
+ def test_filter_row_groups(self, pa):
+ # https://github.com/pandas-dev/pandas/issues/26551
+ df = pd.DataFrame({"a": list(range(0, 3))})
+ with tm.ensure_clean() as path:
+ df.to_parquet(path, pa)
+ result = read_parquet(
+ path, pa, filters=[("a", "==", 0)], use_legacy_dataset=False
+ )
+ assert len(result) == 1
+
class TestParquetFastParquet(Base):
@td.skip_if_no("fastparquet", min_version="0.3.2")
| xref #34804 | https://api.github.com/repos/pandas-dev/pandas/pulls/34807 | 2020-06-15T17:04:51Z | 2020-06-15T17:49:05Z | 2020-06-15T17:49:05Z | 2020-06-15T17:49:57Z |
Regression in to_timedelta with errors="coerce" and unit | diff --git a/pandas/_libs/tslibs/timedeltas.pyx b/pandas/_libs/tslibs/timedeltas.pyx
index a5b502f3f4071..1c3e69e21aa18 100644
--- a/pandas/_libs/tslibs/timedeltas.pyx
+++ b/pandas/_libs/tslibs/timedeltas.pyx
@@ -237,7 +237,7 @@ def array_to_timedelta64(object[:] values, unit=None, errors='raise'):
if unit is not None:
for i in range(n):
- if isinstance(values[i], str):
+ if isinstance(values[i], str) and errors != "coerce":
raise ValueError(
"unit must not be specified if the input contains a str"
)
diff --git a/pandas/core/arrays/timedeltas.py b/pandas/core/arrays/timedeltas.py
index d0657994dd81c..f33b569b3d1f7 100644
--- a/pandas/core/arrays/timedeltas.py
+++ b/pandas/core/arrays/timedeltas.py
@@ -882,9 +882,10 @@ def sequence_to_td64ns(data, copy=False, unit=None, errors="raise"):
----------
data : list-like
copy : bool, default False
- unit : str, default "ns"
- The timedelta unit to treat integers as multiples of.
- Must be un-specifed if the data contains a str.
+ unit : str, optional
+ The timedelta unit to treat integers as multiples of. For numeric
+ data this defaults to ``'ns'``.
+ Must be un-specified if the data contains a str and ``errors=="raise"``.
errors : {"raise", "coerce", "ignore"}, default "raise"
How to handle elements that cannot be converted to timedelta64[ns].
See ``pandas.to_timedelta`` for details.
diff --git a/pandas/core/tools/timedeltas.py b/pandas/core/tools/timedeltas.py
index 87eac93a6072c..a643c312ec358 100644
--- a/pandas/core/tools/timedeltas.py
+++ b/pandas/core/tools/timedeltas.py
@@ -26,15 +26,24 @@ def to_timedelta(arg, unit=None, errors="raise"):
----------
arg : str, timedelta, list-like or Series
The data to be converted to timedelta.
- unit : str, default 'ns'
- Must not be specified if the arg is/contains a str.
- Denotes the unit of the arg. Possible values:
- ('W', 'D', 'days', 'day', 'hours', hour', 'hr', 'h',
- 'm', 'minute', 'min', 'minutes', 'T', 'S', 'seconds',
- 'sec', 'second', 'ms', 'milliseconds', 'millisecond',
- 'milli', 'millis', 'L', 'us', 'microseconds', 'microsecond',
- 'micro', 'micros', 'U', 'ns', 'nanoseconds', 'nano', 'nanos',
- 'nanosecond', 'N').
+ unit : str, optional
+ Denotes the unit of the arg for numeric `arg`. Defaults to ``"ns"``.
+
+ Possible values:
+
+ * 'W'
+ * 'D' / 'days' / 'day'
+ * 'hours' / 'hour' / 'hr' / 'h'
+ * 'm' / 'minute' / 'min' / 'minutes' / 'T'
+ * 'S' / 'seconds' / 'sec' / 'second'
+ * 'ms' / 'milliseconds' / 'millisecond' / 'milli' / 'millis' / 'L'
+ * 'us' / 'microseconds' / 'microsecond' / 'micro' / 'micros' / 'U'
+ * 'ns' / 'nanoseconds' / 'nano' / 'nanos' / 'nanosecond' / 'N'
+
+ .. versionchanged:: 1.1.0
+
+ Must not be specified when `arg` contains strings and
+ ``errors="raise"``.
errors : {'ignore', 'raise', 'coerce'}, default 'raise'
- If 'raise', then invalid parsing will raise an exception.
diff --git a/pandas/tests/tools/test_to_timedelta.py b/pandas/tests/tools/test_to_timedelta.py
index e3cf3a7f16a82..1e193f22a6698 100644
--- a/pandas/tests/tools/test_to_timedelta.py
+++ b/pandas/tests/tools/test_to_timedelta.py
@@ -155,3 +155,14 @@ def test_to_timedelta_float(self):
result = pd.to_timedelta(arr, unit="s")
expected_asi8 = np.arange(999990000, int(1e9), 1000, dtype="int64")
tm.assert_numpy_array_equal(result.asi8, expected_asi8)
+
+ def test_to_timedelta_coerce_strings_unit(self):
+ arr = np.array([1, 2, "error"], dtype=object)
+ result = pd.to_timedelta(arr, unit="ns", errors="coerce")
+ expected = pd.to_timedelta([1, 2, pd.NaT], unit="ns")
+ tm.assert_index_equal(result, expected)
+
+ def test_to_timedelta_ignore_strings_unit(self):
+ arr = np.array([1, 2, "error"], dtype=object)
+ result = pd.to_timedelta(arr, unit="ns", errors="ignore")
+ tm.assert_numpy_array_equal(result, arr)
| Introduced in https://github.com/pandas-dev/pandas/commit/d3f686bb50c14594087171aa0493cb07eb5a874c
In pandas 1.0.3
```python
In [2]: pd.to_timedelta([1, 2, 'error'], errors="coerce", unit="ns")
Out[2]: TimedeltaIndex(['00:00:00.000000', '00:00:00.000000', NaT], dtype='timedelta64[ns]', freq=None)
```
In master, we raise.
```pytb
In [2]: pd.to_timedelta([1, 2, 'error'], errors="coerce", unit="ns")
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-2-a3691c044041> in <module>
----> 1 pd.to_timedelta([1, 2, 'error'], errors="coerce", unit="ns")
~/Envs/dask-dev/lib/python3.7/site-packages/pandas/core/tools/timedeltas.py in to_timedelta(arg, unit, errors)
101 arg = arg.item()
102 elif is_list_like(arg) and getattr(arg, "ndim", 1) == 1:
--> 103 return _convert_listlike(arg, unit=unit, errors=errors)
104 elif getattr(arg, "ndim", 1) > 1:
105 raise TypeError(
~/Envs/dask-dev/lib/python3.7/site-packages/pandas/core/tools/timedeltas.py in _convert_listlike(arg, unit, errors, name)
140
141 try:
--> 142 value = sequence_to_td64ns(arg, unit=unit, errors=errors, copy=False)[0]
143 except ValueError:
144 if errors == "ignore":
~/Envs/dask-dev/lib/python3.7/site-packages/pandas/core/arrays/timedeltas.py in sequence_to_td64ns(data, copy, unit, errors)
927 if is_object_dtype(data.dtype) or is_string_dtype(data.dtype):
928 # no need to make a copy, need to convert if string-dtyped
--> 929 data = objects_to_td64ns(data, unit=unit, errors=errors)
930 copy = False
931
~/Envs/dask-dev/lib/python3.7/site-packages/pandas/core/arrays/timedeltas.py in objects_to_td64ns(data, unit, errors)
1037 values = np.array(data, dtype=np.object_, copy=False)
1038
-> 1039 result = array_to_timedelta64(values, unit=unit, errors=errors)
1040 return result.view("timedelta64[ns]")
1041
pandas/_libs/tslibs/timedeltas.pyx in pandas._libs.tslibs.timedeltas.array_to_timedelta64()
ValueError: unit must not be specified if the input contains a str
```
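
A quick sketch of the calls exercised by the fix and the new tests:

```python
import numpy as np
import pandas as pd

# errors="coerce": the str element becomes NaT again, even with a unit.
pd.to_timedelta([1, 2, "error"], errors="coerce", unit="ns")

# errors="ignore": the ValueError is swallowed and the input is returned.
pd.to_timedelta(np.array([1, 2, "error"], dtype=object), errors="ignore", unit="ns")
```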
This restores the 1.0.3 behavior, adds an additional test for `errors="ignore"`, and cleans up the `to_timedelta` docstring. | https://api.github.com/repos/pandas-dev/pandas/pulls/34806 | 2020-06-15T16:33:03Z | 2020-06-15T17:27:29Z | 2020-06-15T17:27:29Z | 2020-06-15T20:16:28Z |
TST: ensure read_parquet filter argument is correctly passed through (pyarrow engine) | diff --git a/pandas/tests/io/test_parquet.py b/pandas/tests/io/test_parquet.py
index 7ee551194bf76..efd34c58d7d19 100644
--- a/pandas/tests/io/test_parquet.py
+++ b/pandas/tests/io/test_parquet.py
@@ -671,6 +671,17 @@ def test_timestamp_nanoseconds(self, pa):
df = pd.DataFrame({"a": pd.date_range("2017-01-01", freq="1n", periods=10)})
check_round_trip(df, pa, write_kwargs={"version": "2.0"})
+ @td.skip_if_no("pyarrow", min_version="0.17")
+ def test_filter_row_groups(self, pa):
+ # https://github.com/pandas-dev/pandas/issues/26551
+ df = pd.DataFrame({"a": list(range(0, 3))})
+ with tm.ensure_clean() as path:
+ df.to_parquet(path, pa)
+ result = read_parquet(
+ path, pa, filters=[("a", "==", 0)], use_legacy_dataset=False
+ )
+ assert len(result) == 1
+
class TestParquetFastParquet(Base):
@td.skip_if_no("fastparquet", min_version="0.3.2")
| xref https://github.com/pandas-dev/pandas/issues/26551#issuecomment-643882543 | https://api.github.com/repos/pandas-dev/pandas/pulls/34804 | 2020-06-15T15:25:52Z | 2020-06-15T16:25:10Z | 2020-06-15T16:25:09Z | 2020-06-16T09:24:46Z |
Backport PR #34718 on branch 1.0.x (Removed __div__ impls) | diff --git a/pandas/_libs/interval.pyx b/pandas/_libs/interval.pyx
index 1166768472449..08daedf5e5096 100644
--- a/pandas/_libs/interval.pyx
+++ b/pandas/_libs/interval.pyx
@@ -401,11 +401,6 @@ cdef class Interval(IntervalMixin):
return Interval(y.left * self, y.right * self, closed=y.closed)
return NotImplemented
- def __div__(self, y):
- if isinstance(y, numbers.Number):
- return Interval(self.left / y, self.right / y, closed=self.closed)
- return NotImplemented
-
def __truediv__(self, y):
if isinstance(y, numbers.Number):
return Interval(self.left / y, self.right / y, closed=self.closed)
diff --git a/pandas/_libs/tslibs/nattype.pyx b/pandas/_libs/tslibs/nattype.pyx
index 2f972a3153e5e..fe447f57974fe 100644
--- a/pandas/_libs/tslibs/nattype.pyx
+++ b/pandas/_libs/tslibs/nattype.pyx
@@ -215,9 +215,6 @@ cdef class _NaT(datetime):
def __neg__(self):
return NaT
- def __div__(self, other):
- return _nat_divide_op(self, other)
-
def __truediv__(self, other):
return _nat_divide_op(self, other)
| https://github.com/pandas-dev/pandas/pull/34711#issuecomment-644148383
@WillAyd on 1.0.x we have cython>=0.29.13 and on master cython>=0.29.16. Is this an issue?
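
For context (not part of the diff), on Python 3 the `/` operator only ever dispatches to `__truediv__`, so the removed `__div__` hooks were dead code:

```python
import pandas as pd

# Both of these go through __truediv__ on Python 3.
print(pd.Interval(0.0, 2.0) / 2)  # Interval(0.0, 1.0, closed='right')
print(pd.NaT / 2)                 # NaT
```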
| https://api.github.com/repos/pandas-dev/pandas/pulls/34802 | 2020-06-15T14:47:33Z | 2020-06-15T16:10:49Z | 2020-06-15T16:10:49Z | 2020-06-15T17:54:07Z |
API: Make describe changes backwards compatible | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index 088f1d1946fa9..cfac916157649 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -280,6 +280,7 @@ Other enhancements
- Added :meth:`DataFrame.value_counts` (:issue:`5377`)
- Added a :func:`pandas.api.indexers.FixedForwardWindowIndexer` class to support forward-looking windows during ``rolling`` operations.
- Added a :func:`pandas.api.indexers.VariableOffsetWindowIndexer` class to support ``rolling`` operations with non-fixed offsets (:issue:`34994`)
+- :meth:`~DataFrame.describe` now includes a ``datetime_is_numeric`` keyword to control how datetime columns are summarized (:issue:`30164`, :issue:`34798`)
- :class:`Styler` may now render CSS more efficiently where multiple cells have the same styling (:issue:`30876`)
- :meth:`Styler.highlight_null` now accepts ``subset`` argument (:issue:`31345`)
- When writing directly to a sqlite connection :func:`to_sql` now supports the ``multi`` method (:issue:`29921`)
@@ -675,15 +676,6 @@ apply and applymap on ``DataFrame`` evaluates first row/column only once
df.apply(func, axis=1)
-.. _whatsnew_110.api.other:
-
-Other API changes
-^^^^^^^^^^^^^^^^^
-
-- :meth:`Series.describe` will now show distribution percentiles for ``datetime`` dtypes, statistics ``first`` and ``last``
- will now be ``min`` and ``max`` to match with numeric dtypes in :meth:`DataFrame.describe` (:issue:`30164`)
-
-
Increased minimum versions for dependencies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index ece4281af3208..eb55369d83593 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -9711,7 +9711,11 @@ def abs(self: FrameOrSeries) -> FrameOrSeries:
return np.abs(self)
def describe(
- self: FrameOrSeries, percentiles=None, include=None, exclude=None
+ self: FrameOrSeries,
+ percentiles=None,
+ include=None,
+ exclude=None,
+ datetime_is_numeric=False,
) -> FrameOrSeries:
"""
Generate descriptive statistics.
@@ -9757,6 +9761,12 @@ def describe(
``select_dtypes`` (e.g. ``df.describe(include=['O'])``). To
exclude pandas categorical columns, use ``'category'``
- None (default) : The result will exclude nothing.
+ datetime_is_numeric : bool, default False
+ Whether to treat datetime dtypes as numeric. This affects statistics
+ calculated for the column. For DataFrame input, this also
+ controls whether datetime columns are included by default.
+
+ .. versionadded:: 1.1.0
Returns
-------
@@ -9834,7 +9844,7 @@ def describe(
... np.datetime64("2010-01-01"),
... np.datetime64("2010-01-01")
... ])
- >>> s.describe()
+ >>> s.describe(datetime_is_numeric=True)
count 3
mean 2006-09-01 08:00:00
min 2000-01-01 00:00:00
@@ -9992,8 +10002,37 @@ def describe_categorical_1d(data):
dtype = None
if result[1] > 0:
top, freq = objcounts.index[0], objcounts.iloc[0]
- names += ["top", "freq"]
- result += [top, freq]
+ if is_datetime64_any_dtype(data.dtype):
+ if self.ndim == 1:
+ stacklevel = 4
+ else:
+ stacklevel = 5
+ warnings.warn(
+ "Treating datetime data as categorical rather than numeric in "
+ "`.describe` is deprecated and will be removed in a future "
+ "version of pandas. Specify `datetime_is_numeric=True` to "
+ "silence this warning and adopt the future behavior now.",
+ FutureWarning,
+ stacklevel=stacklevel,
+ )
+ tz = data.dt.tz
+ asint = data.dropna().values.view("i8")
+ top = Timestamp(top)
+ if top.tzinfo is not None and tz is not None:
+ # Don't tz_localize(None) if key is already tz-aware
+ top = top.tz_convert(tz)
+ else:
+ top = top.tz_localize(tz)
+ names += ["top", "freq", "first", "last"]
+ result += [
+ top,
+ freq,
+ Timestamp(asint.min(), tz=tz),
+ Timestamp(asint.max(), tz=tz),
+ ]
+ else:
+ names += ["top", "freq"]
+ result += [top, freq]
# If the DataFrame is empty, set 'top' and 'freq' to None
# to maintain output shape consistency
@@ -10019,7 +10058,7 @@ def describe_1d(data):
return describe_categorical_1d(data)
elif is_numeric_dtype(data):
return describe_numeric_1d(data)
- elif is_datetime64_any_dtype(data.dtype):
+ elif is_datetime64_any_dtype(data.dtype) and datetime_is_numeric:
return describe_timestamp_1d(data)
elif is_timedelta64_dtype(data.dtype):
return describe_numeric_1d(data)
@@ -10030,7 +10069,10 @@ def describe_1d(data):
return describe_1d(self)
elif (include is None) and (exclude is None):
# when some numerics are found, keep only numerics
- data = self.select_dtypes(include=[np.number])
+ default_include = [np.number]
+ if datetime_is_numeric:
+ default_include.append("datetime")
+ data = self.select_dtypes(include=default_include)
if len(data.columns) == 0:
data = self
elif include == "all":
diff --git a/pandas/tests/frame/methods/test_describe.py b/pandas/tests/frame/methods/test_describe.py
index b61d0d28e2fba..0b70bead375da 100644
--- a/pandas/tests/frame/methods/test_describe.py
+++ b/pandas/tests/frame/methods/test_describe.py
@@ -267,7 +267,69 @@ def test_describe_tz_values(self, tz_naive_fixture):
},
index=["count", "mean", "min", "25%", "50%", "75%", "max", "std"],
)
- result = df.describe(include="all")
+ result = df.describe(include="all", datetime_is_numeric=True)
+ tm.assert_frame_equal(result, expected)
+
+ def test_datetime_is_numeric_includes_datetime(self):
+ df = pd.DataFrame({"a": pd.date_range("2012", periods=3), "b": [1, 2, 3]})
+ result = df.describe(datetime_is_numeric=True)
+ expected = pd.DataFrame(
+ {
+ "a": [
+ 3,
+ pd.Timestamp("2012-01-02"),
+ pd.Timestamp("2012-01-01"),
+ pd.Timestamp("2012-01-01T12:00:00"),
+ pd.Timestamp("2012-01-02"),
+ pd.Timestamp("2012-01-02T12:00:00"),
+ pd.Timestamp("2012-01-03"),
+ np.nan,
+ ],
+ "b": [3, 2, 1, 1.5, 2, 2.5, 3, 1],
+ },
+ index=["count", "mean", "min", "25%", "50%", "75%", "max", "std"],
+ )
+ tm.assert_frame_equal(result, expected)
+
+ def test_describe_tz_values2(self):
+ tz = "CET"
+ s1 = Series(range(5))
+ start = Timestamp(2018, 1, 1)
+ end = Timestamp(2018, 1, 5)
+ s2 = Series(date_range(start, end, tz=tz))
+ df = pd.DataFrame({"s1": s1, "s2": s2})
+
+ s1_ = s1.describe()
+ s2_ = pd.Series(
+ [
+ 5,
+ 5,
+ s2.value_counts().index[0],
+ 1,
+ start.tz_localize(tz),
+ end.tz_localize(tz),
+ ],
+ index=["count", "unique", "top", "freq", "first", "last"],
+ )
+ idx = [
+ "count",
+ "unique",
+ "top",
+ "freq",
+ "first",
+ "last",
+ "mean",
+ "std",
+ "min",
+ "25%",
+ "50%",
+ "75%",
+ "max",
+ ]
+ expected = pd.concat([s1_, s2_], axis=1, keys=["s1", "s2"]).loc[idx]
+
+ with tm.assert_produces_warning(FutureWarning):
+ result = df.describe(include="all")
tm.assert_frame_equal(result, expected)
def test_describe_percentiles_integer_idx(self):
diff --git a/pandas/tests/series/methods/test_describe.py b/pandas/tests/series/methods/test_describe.py
index 4e59c6995f4f2..a15dc0751aa7d 100644
--- a/pandas/tests/series/methods/test_describe.py
+++ b/pandas/tests/series/methods/test_describe.py
@@ -83,7 +83,7 @@ def test_describe_with_tz(self, tz_naive_fixture):
start = Timestamp(2018, 1, 1)
end = Timestamp(2018, 1, 5)
s = Series(date_range(start, end, tz=tz), name=name)
- result = s.describe()
+ result = s.describe(datetime_is_numeric=True)
expected = Series(
[
5,
@@ -98,3 +98,43 @@ def test_describe_with_tz(self, tz_naive_fixture):
index=["count", "mean", "min", "25%", "50%", "75%", "max"],
)
tm.assert_series_equal(result, expected)
+
+ def test_describe_with_tz_warns(self):
+ name = tz = "CET"
+ start = Timestamp(2018, 1, 1)
+ end = Timestamp(2018, 1, 5)
+ s = Series(date_range(start, end, tz=tz), name=name)
+
+ with tm.assert_produces_warning(FutureWarning):
+ result = s.describe()
+
+ expected = Series(
+ [
+ 5,
+ 5,
+ s.value_counts().index[0],
+ 1,
+ start.tz_localize(tz),
+ end.tz_localize(tz),
+ ],
+ name=name,
+ index=["count", "unique", "top", "freq", "first", "last"],
+ )
+ tm.assert_series_equal(result, expected)
+
+ def test_datetime_is_numeric_includes_datetime(self):
+ s = Series(date_range("2012", periods=3))
+ result = s.describe(datetime_is_numeric=True)
+ expected = Series(
+ [
+ 3,
+ Timestamp("2012-01-02"),
+ Timestamp("2012-01-01"),
+ Timestamp("2012-01-01T12:00:00"),
+ Timestamp("2012-01-02"),
+ Timestamp("2012-01-02T12:00:00"),
+ Timestamp("2012-01-03"),
+ ],
+ index=["count", "mean", "min", "25%", "50%", "75%", "max"],
+ )
+ tm.assert_series_equal(result, expected)
| Adds the new behavior as a feature flag / deprecation.
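
A short sketch of the flag and the deprecation path (frame contents are made up):

```python
import pandas as pd

df = pd.DataFrame({"a": pd.date_range("2012", periods=3), "b": [1, 2, 3]})

# Opt-in: datetime columns get the numeric-style summary (mean, min,
# percentiles, max) and are included in the default column selection.
df.describe(datetime_is_numeric=True)

# Default: the old categorical-style summary is kept for datetime data,
# but a FutureWarning announces the upcoming numeric treatment.
df["a"].describe()
```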
Closes https://github.com/pandas-dev/pandas/issues/33903
(Do we have a list of issues for deprecations introduced in 1.x?) | https://api.github.com/repos/pandas-dev/pandas/pulls/34798 | 2020-06-15T13:46:51Z | 2020-07-14T20:21:58Z | 2020-07-14T20:21:57Z | 2020-07-14T20:30:06Z |
Backport PR #34785 on branch 1.0.x (DOC: add release note about revert for 1.0.5) | diff --git a/doc/source/whatsnew/v1.0.5.rst b/doc/source/whatsnew/v1.0.5.rst
index 5dbc911407784..7dfac54279e6f 100644
--- a/doc/source/whatsnew/v1.0.5.rst
+++ b/doc/source/whatsnew/v1.0.5.rst
@@ -15,8 +15,14 @@ including other versions of pandas.
Fixed regressions
~~~~~~~~~~~~~~~~~
--
--
+
+- Fix regression in :meth:`read_parquet` when reading from file-like objects
+ (:issue:`34467`).
+- Fix regression in reading from public S3 buckets (:issue:`34626`).
+
+Note this disables the ability to read Parquet files from directories on S3
+again (:issue:`26388`, :issue:`34632`), which was added in the 1.0.4 release,
+but is now targeted for pandas 1.1.0.
.. _whatsnew_105.bug_fixes:
@@ -24,7 +30,6 @@ Bug fixes
~~~~~~~~~
- Fixed building from source with Python 3.8 fetching the wrong version of NumPy (:issue:`34666`)
--
Contributors
~~~~~~~~~~~~
| Backport PR #34785: DOC: add release note about revert for 1.0.5 | https://api.github.com/repos/pandas-dev/pandas/pulls/34797 | 2020-06-15T13:36:51Z | 2020-06-15T14:20:07Z | 2020-06-15T14:20:07Z | 2020-06-15T14:20:08Z |
API: Removed PeriodDtype.dtype_code from public API | diff --git a/pandas/_libs/tslibs/dtypes.pxd b/pandas/_libs/tslibs/dtypes.pxd
index f43bc283d98c7..71b4eeabbaaf5 100644
--- a/pandas/_libs/tslibs/dtypes.pxd
+++ b/pandas/_libs/tslibs/dtypes.pxd
@@ -73,4 +73,4 @@ cdef enum PeriodDtypeCode:
cdef class PeriodDtypeBase:
cdef readonly:
- PeriodDtypeCode dtype_code
+ PeriodDtypeCode _dtype_code
diff --git a/pandas/_libs/tslibs/dtypes.pyx b/pandas/_libs/tslibs/dtypes.pyx
index 0752910317077..143eac7f1ef6e 100644
--- a/pandas/_libs/tslibs/dtypes.pyx
+++ b/pandas/_libs/tslibs/dtypes.pyx
@@ -8,10 +8,10 @@ cdef class PeriodDtypeBase:
describing a PeriodDtype in an integer code.
"""
# cdef readonly:
- # PeriodDtypeCode dtype_code
+ # PeriodDtypeCode _dtype_code
def __cinit__(self, PeriodDtypeCode code):
- self.dtype_code = code
+ self._dtype_code = code
def __eq__(self, other):
if not isinstance(other, PeriodDtypeBase):
@@ -19,12 +19,12 @@ cdef class PeriodDtypeBase:
if not isinstance(self, PeriodDtypeBase):
# cython semantics, this is a reversed op
return False
- return self.dtype_code == other.dtype_code
+ return self._dtype_code == other._dtype_code
@property
def freq_group(self) -> int:
# See also: libperiod.get_freq_group
- return (self.dtype_code // 1000) * 1000
+ return (self._dtype_code // 1000) * 1000
@property
def date_offset(self):
@@ -35,8 +35,8 @@ cdef class PeriodDtypeBase:
"""
from .offsets import to_offset
- freqstr = _reverse_period_code_map.get(self.dtype_code)
- # equiv: freqstr = libfrequencies.get_freq_str(self.dtype_code)
+ freqstr = _reverse_period_code_map.get(self._dtype_code)
+ # equiv: freqstr = libfrequencies.get_freq_str(self._dtype_code)
return to_offset(freqstr)
diff --git a/pandas/_libs/tslibs/period.pyx b/pandas/_libs/tslibs/period.pyx
index d14f9d82eb5be..30caddf81b6e8 100644
--- a/pandas/_libs/tslibs/period.pyx
+++ b/pandas/_libs/tslibs/period.pyx
@@ -1645,7 +1645,7 @@ cdef class _Period:
"""
freq = self._maybe_convert_freq(freq)
how = validate_end_alias(how)
- base1 = self._dtype.dtype_code
+ base1 = self._dtype._dtype_code
base2 = freq_to_dtype_code(freq)
# self.n can't be negative or 0
@@ -1734,7 +1734,7 @@ cdef class _Period:
return endpoint - Timedelta(1, 'ns')
if freq is None:
- base = self._dtype.dtype_code
+ base = self._dtype._dtype_code
freq = get_to_timestamp_base(base)
base = freq
else:
@@ -1748,12 +1748,12 @@ cdef class _Period:
@property
def year(self) -> int:
- base = self._dtype.dtype_code
+ base = self._dtype._dtype_code
return pyear(self.ordinal, base)
@property
def month(self) -> int:
- base = self._dtype.dtype_code
+ base = self._dtype._dtype_code
return pmonth(self.ordinal, base)
@property
@@ -1776,7 +1776,7 @@ cdef class _Period:
>>> p.day
11
"""
- base = self._dtype.dtype_code
+ base = self._dtype._dtype_code
return pday(self.ordinal, base)
@property
@@ -1806,7 +1806,7 @@ cdef class _Period:
>>> p.hour
0
"""
- base = self._dtype.dtype_code
+ base = self._dtype._dtype_code
return phour(self.ordinal, base)
@property
@@ -1830,7 +1830,7 @@ cdef class _Period:
>>> p.minute
3
"""
- base = self._dtype.dtype_code
+ base = self._dtype._dtype_code
return pminute(self.ordinal, base)
@property
@@ -1854,12 +1854,12 @@ cdef class _Period:
>>> p.second
12
"""
- base = self._dtype.dtype_code
+ base = self._dtype._dtype_code
return psecond(self.ordinal, base)
@property
def weekofyear(self) -> int:
- base = self._dtype.dtype_code
+ base = self._dtype._dtype_code
return pweek(self.ordinal, base)
@property
@@ -1940,7 +1940,7 @@ cdef class _Period:
>>> per.end_time.dayofweek
2
"""
- base = self._dtype.dtype_code
+ base = self._dtype._dtype_code
return pweekday(self.ordinal, base)
@property
@@ -2028,12 +2028,12 @@ cdef class _Period:
>>> period.dayofyear
1
"""
- base = self._dtype.dtype_code
+ base = self._dtype._dtype_code
return pday_of_year(self.ordinal, base)
@property
def quarter(self) -> int:
- base = self._dtype.dtype_code
+ base = self._dtype._dtype_code
return pquarter(self.ordinal, base)
@property
@@ -2077,7 +2077,7 @@ cdef class _Period:
>>> per.year
2017
"""
- base = self._dtype.dtype_code
+ base = self._dtype._dtype_code
return pqyear(self.ordinal, base)
@property
@@ -2111,7 +2111,7 @@ cdef class _Period:
>>> p.days_in_month
29
"""
- base = self._dtype.dtype_code
+ base = self._dtype._dtype_code
return pdays_in_month(self.ordinal, base)
@property
@@ -2149,7 +2149,7 @@ cdef class _Period:
return self.freq.freqstr
def __repr__(self) -> str:
- base = self._dtype.dtype_code
+ base = self._dtype._dtype_code
formatted = period_format(self.ordinal, base)
return f"Period('{formatted}', '{self.freqstr}')"
@@ -2157,7 +2157,7 @@ cdef class _Period:
"""
Return a string representation for a particular DataFrame
"""
- base = self._dtype.dtype_code
+ base = self._dtype._dtype_code
formatted = period_format(self.ordinal, base)
value = str(formatted)
return value
@@ -2309,7 +2309,7 @@ cdef class _Period:
>>> a.strftime('%b. %d, %Y was a %A')
'Jan. 01, 2001 was a Monday'
"""
- base = self._dtype.dtype_code
+ base = self._dtype._dtype_code
return period_format(self.ordinal, base, fmt)
| Closes #34735 | https://api.github.com/repos/pandas-dev/pandas/pulls/34796 | 2020-06-15T12:41:52Z | 2020-06-15T14:24:12Z | 2020-06-15T14:24:12Z | 2020-06-15T14:24:13Z |
Backport PR #34721 on branch 1.0.x (Debug CI Issue) | diff --git a/pandas/compat/numpy/__init__.py b/pandas/compat/numpy/__init__.py
index 27f1c32058941..7691eea230730 100644
--- a/pandas/compat/numpy/__init__.py
+++ b/pandas/compat/numpy/__init__.py
@@ -13,6 +13,8 @@
_np_version_under1p16 = _nlv < LooseVersion("1.16")
_np_version_under1p17 = _nlv < LooseVersion("1.17")
_np_version_under1p18 = _nlv < LooseVersion("1.18")
+_np_version_under1p19 = _nlv < LooseVersion("1.19")
+_np_version_under1p20 = _nlv < LooseVersion("1.20")
_is_numpy_dev = ".dev" in str(_nlv)
diff --git a/pandas/tests/extension/base/dtype.py b/pandas/tests/extension/base/dtype.py
index b6c12b5844086..3cb3c25d557ce 100644
--- a/pandas/tests/extension/base/dtype.py
+++ b/pandas/tests/extension/base/dtype.py
@@ -68,18 +68,22 @@ def test_check_dtype(self, data):
{"A": pd.Series(data, dtype=dtype), "B": data, "C": "foo", "D": 1}
)
- # np.dtype('int64') == 'Int64' == 'int64'
- # so can't distinguish
- if dtype.name == "Int64":
- expected = pd.Series([True, True, False, True], index=list("ABCD"))
- else:
- expected = pd.Series([True, True, False, False], index=list("ABCD"))
-
- # XXX: This should probably be *fixed* not ignored.
- # See libops.scalar_compare
+ # TODO(numpy-1.20): This warnings filter and if block can be removed
+ # once we require numpy>=1.20
with warnings.catch_warnings():
warnings.simplefilter("ignore", DeprecationWarning)
result = df.dtypes == str(dtype)
+ # NumPy>=1.20.0, but not pandas.compat.numpy till there
+ # is a wheel available with this change.
+ try:
+ new_numpy_behavior = np.dtype("int64") != "Int64"
+ except TypeError:
+ new_numpy_behavior = True
+
+ if dtype.name == "Int64" and not new_numpy_behavior:
+ expected = pd.Series([True, True, False, True], index=list("ABCD"))
+ else:
+ expected = pd.Series([True, True, False, False], index=list("ABCD"))
self.assert_series_equal(result, expected)
diff --git a/pandas/tests/plotting/test_misc.py b/pandas/tests/plotting/test_misc.py
index c8aa1f23ccf1f..60788aaca9f6d 100644
--- a/pandas/tests/plotting/test_misc.py
+++ b/pandas/tests/plotting/test_misc.py
@@ -98,13 +98,17 @@ def test_bootstrap_plot(self):
class TestDataFramePlots(TestPlotBase):
@td.skip_if_no_scipy
def test_scatter_matrix_axis(self):
+ from pandas.plotting._matplotlib.compat import _mpl_ge_3_0_0
+
scatter_matrix = plotting.scatter_matrix
with tm.RNGContext(42):
df = DataFrame(randn(100, 3))
# we are plotting multiples on a sub-plot
- with tm.assert_produces_warning(UserWarning):
+ with tm.assert_produces_warning(
+ UserWarning, raise_on_extra_warnings=_mpl_ge_3_0_0()
+ ):
axes = _check_plot_works(
scatter_matrix, filterwarnings="always", frame=df, range_padding=0.1
)
| xref #34721 | https://api.github.com/repos/pandas-dev/pandas/pulls/34788 | 2020-06-15T09:24:33Z | 2020-06-15T09:57:55Z | 2020-06-15T09:57:55Z | 2020-06-15T09:58:01Z |
Backport Test Only from PR #34500 on branch 1.0.x (REG: Fix read_parquet from file-like objects) | diff --git a/pandas/tests/io/data/parquet/simple.parquet b/pandas/tests/io/data/parquet/simple.parquet
new file mode 100644
index 0000000000000..2862a91f508ea
Binary files /dev/null and b/pandas/tests/io/data/parquet/simple.parquet differ
diff --git a/pandas/tests/io/test_parquet.py b/pandas/tests/io/test_parquet.py
index 70a05b93c9cc3..853b4e754bcd0 100644
--- a/pandas/tests/io/test_parquet.py
+++ b/pandas/tests/io/test_parquet.py
@@ -1,6 +1,7 @@
""" test parquet compat """
import datetime
from distutils.version import LooseVersion
+from io import BytesIO
import locale
import os
from warnings import catch_warnings
@@ -494,6 +495,23 @@ def test_s3_roundtrip(self, df_compat, s3_resource, pa):
# GH #19134
check_round_trip(df_compat, pa, path="s3://pandas-test/pyarrow.parquet")
+ @tm.network
+ @td.skip_if_no("pyarrow")
+ def test_parquet_read_from_url(self, df_compat):
+ url = (
+ "https://raw.githubusercontent.com/pandas-dev/pandas/"
+ "master/pandas/tests/io/data/parquet/simple.parquet"
+ )
+ df = pd.read_parquet(url)
+ tm.assert_frame_equal(df, df_compat)
+
+ @td.skip_if_no("pyarrow")
+ def test_read_file_like_obj_support(self, df_compat):
+ buffer = BytesIO()
+ df_compat.to_parquet(buffer)
+ df_from_buf = pd.read_parquet(buffer)
+ tm.assert_frame_equal(df_compat, df_from_buf)
+
def test_partition_cols_supported(self, pa, df_full):
# GH #23283
partition_cols = ["bool", "int"]
| xref #34500 | https://api.github.com/repos/pandas-dev/pandas/pulls/34787 | 2020-06-15T09:02:42Z | 2020-06-15T11:19:39Z | 2020-06-15T11:19:39Z | 2020-06-15T11:25:47Z |
Backport PR #34667 on branch 1.0.x (BLD: pyproject.toml for Py38) | diff --git a/doc/source/whatsnew/v1.0.5.rst b/doc/source/whatsnew/v1.0.5.rst
index 1edc7e1cad72f..5dbc911407784 100644
--- a/doc/source/whatsnew/v1.0.5.rst
+++ b/doc/source/whatsnew/v1.0.5.rst
@@ -22,7 +22,8 @@ Fixed regressions
Bug fixes
~~~~~~~~~
--
+
+- Fixed building from source with Python 3.8 fetching the wrong version of NumPy (:issue:`34666`)
-
Contributors
diff --git a/pyproject.toml b/pyproject.toml
index 28d7c3d55c919..05490d5060a28 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -6,9 +6,11 @@ requires = [
"wheel",
"Cython>=0.29.13", # Note: sync with setup.py
"numpy==1.13.3; python_version=='3.6' and platform_system!='AIX'",
- "numpy==1.14.5; python_version>='3.7' and platform_system!='AIX'",
+ "numpy==1.14.5; python_version=='3.7' and platform_system!='AIX'",
+ "numpy==1.17.3; python_version>='3.8' and platform_system!='AIX'",
"numpy==1.16.0; python_version=='3.6' and platform_system=='AIX'",
- "numpy==1.16.0; python_version>='3.7' and platform_system=='AIX'",
+ "numpy==1.16.0; python_version=='3.7' and platform_system=='AIX'",
+ "numpy==1.17.3; python_version>='3.8' and platform_system=='AIX'",
]
[tool.black]
| xref #34667
@jorisvandenbossche another set of eyes needed as this was not a clean cherry-pick | https://api.github.com/repos/pandas-dev/pandas/pulls/34786 | 2020-06-15T08:35:45Z | 2020-06-15T11:17:51Z | 2020-06-15T11:17:51Z | 2020-06-15T11:23:53Z |
DOC: add release note about revert for 1.0.5 | diff --git a/doc/source/whatsnew/v1.0.5.rst b/doc/source/whatsnew/v1.0.5.rst
index 5dbc911407784..7dfac54279e6f 100644
--- a/doc/source/whatsnew/v1.0.5.rst
+++ b/doc/source/whatsnew/v1.0.5.rst
@@ -15,8 +15,14 @@ including other versions of pandas.
Fixed regressions
~~~~~~~~~~~~~~~~~
--
--
+
+- Fix regression in :meth:`read_parquet` when reading from file-like objects
+ (:issue:`34467`).
+- Fix regression in reading from public S3 buckets (:issue:`34626`).
+
+Note this disables the ability to read Parquet files from directories on S3
+again (:issue:`26388`, :issue:`34632`), which was added in the 1.0.4 release,
+but is now targeted for pandas 1.1.0.
.. _whatsnew_105.bug_fixes:
@@ -24,7 +30,6 @@ Bug fixes
~~~~~~~~~
- Fixed building from source with Python 3.8 fetching the wrong version of NumPy (:issue:`34666`)
--
Contributors
~~~~~~~~~~~~
| Whatsnew for https://github.com/pandas-dev/pandas/pull/34632
cc @simonjayhawkins @alimcmaster1 | https://api.github.com/repos/pandas-dev/pandas/pulls/34785 | 2020-06-15T08:08:22Z | 2020-06-15T13:36:13Z | 2020-06-15T13:36:13Z | 2020-06-15T13:38:02Z |
REF: avoid DTA/PA methods in SemiMonthOffset.apply_index | diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
index 1dae34e1ac49c..f7f50cbaf582c 100644
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -2269,56 +2269,62 @@ cdef class SemiMonthOffset(SingleConstructorOffset):
raise NotImplementedError(self)
@apply_index_wraps
+ @cython.wraparound(False)
+ @cython.boundscheck(False)
def apply_index(self, dtindex):
- # determine how many days away from the 1st of the month we are
-
- dti = dtindex
- i8other = dtindex.asi8
- days_from_start = dtindex.to_perioddelta("M").asi8
- delta = Timedelta(days=self.day_of_month - 1).value
-
- # get boolean array for each element before the day_of_month
- before_day_of_month = days_from_start < delta
-
- # get boolean array for each element after the day_of_month
- after_day_of_month = days_from_start > delta
-
- # determine the correct n for each date in dtindex
- roll = self._get_roll(i8other, before_day_of_month, after_day_of_month)
-
- # isolate the time since it will be striped away one the next line
- time = (i8other % DAY_NANOS).view("timedelta64[ns]")
-
- # apply the correct number of months
-
- # integer-array addition on PeriodIndex is deprecated,
- # so we use _addsub_int_array directly
- asper = dtindex.to_period("M")
+ cdef:
+ int64_t[:] i8other = dtindex.view("i8")
+ Py_ssize_t i, count = len(i8other)
+ int64_t val
+ int64_t[:] out = np.empty(count, dtype="i8")
+ npy_datetimestruct dts
+ int months, to_day, nadj, n = self.n
+ int days_in_month, day, anchor_dom = self.day_of_month
+ bint is_start = isinstance(self, SemiMonthBegin)
- shifted = asper._addsub_int_array(roll // 2, operator.add)
- dtindex = type(dti)(shifted.to_timestamp())
- dt64other = np.asarray(dtindex)
+ with nogil:
+ for i in range(count):
+ val = i8other[i]
+ if val == NPY_NAT:
+ out[i] = NPY_NAT
+ continue
- # apply the correct day
- dt64result = self._apply_index_days(dt64other, roll)
+ dt64_to_dtstruct(val, &dts)
+ day = dts.day
+
+ # Adjust so that we are always looking at self.day_of_month,
+ # incrementing/decrementing n if necessary.
+ nadj = roll_convention(day, n, anchor_dom)
+
+ days_in_month = get_days_in_month(dts.year, dts.month)
+ # For SemiMonthBegin on other.day == 1 and
+ # SemiMonthEnd on other.day == days_in_month,
+ # shifting `other` to `self.day_of_month` _always_ requires
+ # incrementing/decrementing `n`, regardless of whether it is
+ # initially positive.
+ if is_start and (n <= 0 and day == 1):
+ nadj -= 1
+ elif (not is_start) and (n > 0 and day == days_in_month):
+ nadj += 1
+
+ if is_start:
+ # See also: SemiMonthBegin._apply
+ months = nadj // 2 + nadj % 2
+ to_day = 1 if nadj % 2 else anchor_dom
- return dt64result + time
+ else:
+ # See also: SemiMonthEnd._apply
+ months = nadj // 2
+ to_day = 31 if nadj % 2 else anchor_dom
- def _get_roll(self, i8other, before_day_of_month, after_day_of_month):
- """
- Return an array with the correct n for each date in dtindex.
+ dts.year = year_add_months(dts, months)
+ dts.month = month_add_months(dts, months)
+ days_in_month = get_days_in_month(dts.year, dts.month)
+ dts.day = min(to_day, days_in_month)
- The roll array is based on the fact that dtindex gets rolled back to
- the first day of the month.
- """
- # before_day_of_month and after_day_of_month are ndarray[bool]
- raise NotImplementedError
+ out[i] = dtstruct_to_dt64(&dts)
- def _apply_index_days(self, dt64other, roll):
- """
- Apply the correct day for each date in dt64other.
- """
- raise NotImplementedError
+ return out.base
cdef class SemiMonthEnd(SemiMonthOffset):
@@ -2347,39 +2353,6 @@ cdef class SemiMonthEnd(SemiMonthOffset):
day = 31 if n % 2 else self.day_of_month
return shift_month(other, months, day)
- def _get_roll(self, i8other, before_day_of_month, after_day_of_month):
- # before_day_of_month and after_day_of_month are ndarray[bool]
- n = self.n
- is_month_end = get_start_end_field(i8other, "is_month_end")
- if n > 0:
- roll_end = np.where(is_month_end, 1, 0)
- roll_before = np.where(before_day_of_month, n, n + 1)
- roll = roll_end + roll_before
- elif n == 0:
- roll_after = np.where(after_day_of_month, 2, 0)
- roll_before = np.where(~after_day_of_month, 1, 0)
- roll = roll_before + roll_after
- else:
- roll = np.where(after_day_of_month, n + 2, n + 1)
- return roll
-
- def _apply_index_days(self, dt64other, roll):
- """
- Add days portion of offset to dt64other.
-
- Parameters
- ----------
- dt64other : ndarray[datetime64[ns]]
- roll : ndarray[int64_t]
-
- Returns
- -------
- ndarray[datetime64[ns]]
- """
- nanos = (roll % 2) * Timedelta(days=self.day_of_month).value
- dt64other += nanos.astype("timedelta64[ns]")
- return dt64other + Timedelta(days=-1)
-
cdef class SemiMonthBegin(SemiMonthOffset):
"""
@@ -2405,38 +2378,6 @@ cdef class SemiMonthBegin(SemiMonthOffset):
day = 1 if n % 2 else self.day_of_month
return shift_month(other, months, day)
- def _get_roll(self, i8other, before_day_of_month, after_day_of_month):
- # before_day_of_month and after_day_of_month are ndarray[bool]
- n = self.n
- is_month_start = get_start_end_field(i8other, "is_month_start")
- if n > 0:
- roll = np.where(before_day_of_month, n, n + 1)
- elif n == 0:
- roll_start = np.where(is_month_start, 0, 1)
- roll_after = np.where(after_day_of_month, 1, 0)
- roll = roll_start + roll_after
- else:
- roll_after = np.where(after_day_of_month, n + 2, n + 1)
- roll_start = np.where(is_month_start, -1, 0)
- roll = roll_after + roll_start
- return roll
-
- def _apply_index_days(self, dt64other, roll):
- """
- Add days portion of offset to dt64other.
-
- Parameters
- ----------
- dt64other : ndarray[datetime64[ns]]
- roll : ndarray[int64_t]
-
- Returns
- -------
- ndarray[datetime64[ns]]
- """
- nanos = (roll % 2) * Timedelta(days=self.day_of_month - 1).value
- return dt64other + nanos.astype("timedelta64[ns]")
-
# ---------------------------------------------------------------------
# Week-Based Offset Classes
| I think this removes the last of the implicit external dependencies in tslibs.
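For reviewers, a minimal equivalence check between the new vectorized path and the element-wise path (a sketch of my own, not part of the PR's test changes) could look like:
```python
import pandas as pd

dti = pd.date_range("2016-01-29", periods=40, freq="D")  # spans month boundaries
for n in (-2, -1, 0, 1, 2):
    off = pd.offsets.SemiMonthEnd(n)
    vectorized = dti + off                                    # exercises apply_index
    elementwise = pd.DatetimeIndex([ts + off for ts in dti])  # exercises the scalar path
    assert vectorized.equals(elementwise)
```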
```
In [2]: dti = pd.date_range("2016-01-01", periods=10000, freq="S")
In [3]: off = pd.offsets.SemiMonthEnd(-2)
In [4]: %timeit dti + off
3.08 ms ± 108 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) # <-- master
486 µs ± 7.16 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) # <-- PR
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/34783 | 2020-06-15T03:31:47Z | 2020-06-15T12:23:06Z | 2020-06-15T12:23:06Z | 2020-06-15T15:00:32Z |
REF: avoid using DTA/PA methods in Week.apply_index | diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
index 1dae34e1ac49c..0c947ca519a7d 100644
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -2507,6 +2507,8 @@ cdef class Week(SingleConstructorOffset):
else:
return self._end_apply_index(dtindex)
+ @cython.wraparound(False)
+ @cython.boundscheck(False)
def _end_apply_index(self, dtindex):
"""
Add self to the given DatetimeIndex, specialized for case where
@@ -2518,31 +2520,37 @@ cdef class Week(SingleConstructorOffset):
Returns
-------
- result : DatetimeIndex
+ ndarray[int64_t]
"""
- i8other = dtindex.asi8
- off = (i8other % DAY_NANOS).view("timedelta64[ns]")
+ cdef:
+ int64_t[:] i8other = dtindex.view("i8")
+ Py_ssize_t i, count = len(i8other)
+ int64_t val
+ int64_t[:] out = np.empty(count, dtype="i8")
+ npy_datetimestruct dts
+ int wday, days, weeks, n = self.n
+ int anchor_weekday = self.weekday
- base = self._period_dtype_code
- base_period = dtindex.to_period(base)
+ with nogil:
+ for i in range(count):
+ val = i8other[i]
+ if val == NPY_NAT:
+ out[i] = NPY_NAT
+ continue
- if self.n > 0:
- # when adding, dates on end roll to next
- normed = dtindex - off + Timedelta(1, "D") - Timedelta(1, "ns")
- roll = np.where(
- base_period.to_timestamp(how="end") == normed, self.n, self.n - 1
- )
- # integer-array addition on PeriodIndex is deprecated,
- # so we use _addsub_int_array directly
- shifted = base_period._addsub_int_array(roll, operator.add)
- base = shifted.to_timestamp(how="end")
- else:
- # integer addition on PeriodIndex is deprecated,
- # so we use _time_shift directly
- roll = self.n
- base = base_period._time_shift(roll).to_timestamp(how="end")
+ dt64_to_dtstruct(val, &dts)
+ wday = dayofweek(dts.year, dts.month, dts.day)
+
+ days = 0
+ weeks = n
+ if wday != anchor_weekday:
+ days = (anchor_weekday - wday) % 7
+ if weeks > 0:
+ weeks -= 1
+
+ out[i] = val + (7 * weeks + days) * DAY_NANOS
- return base + off + Timedelta(1, "ns") - Timedelta(1, "D")
+ return out.base
def is_on_offset(self, dt) -> bool:
if self.normalize and not _is_normalized(dt):
| We avoid a couple of array allocations in the process.
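The new loop amounts to the following pure-Python logic for an anchored `Week` (my reading of the code, sketched here rather than taken from the PR):
```python
from datetime import timedelta

import pandas as pd

def week_shift(ts, n, anchor_weekday):
    # roll forward to the anchor weekday; the roll consumes one of the
    # n weeks when n > 0, matching roll-forward offset semantics
    days = (anchor_weekday - ts.weekday()) % 7
    weeks = n - 1 if (days and n > 0) else n
    return ts + timedelta(weeks=weeks, days=days)

ts = pd.Timestamp("2016-01-01")      # a Friday
off = pd.offsets.Week(1, weekday=3)  # anchored on Thursday
assert ts + off == week_shift(ts, 1, 3)
```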
```
In [2]: dti = pd.date_range("2016-01-01", periods=10000, freq="S")
In [3]: off = pd.offsets.Week(1, False, 3)
In [4]: %timeit dti + off
3.22 ms ± 30.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) # <-- master
344 µs ± 4.97 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) # <-- PR
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/34782 | 2020-06-15T01:35:35Z | 2020-06-15T02:20:08Z | 2020-06-15T02:20:08Z | 2020-06-15T03:14:31Z |
REF: reuse roll_qtrday in liboffsets | diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
index 1dae34e1ac49c..1e002f4a1af88 100644
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -3786,7 +3786,7 @@ cdef inline void _shift_quarters(const int64_t[:] dtindex,
"""See shift_quarters.__doc__"""
cdef:
Py_ssize_t i
- int months_since, compare_day, n
+ int months_since, n
npy_datetimestruct dts
for i in range(count):
@@ -3798,18 +3798,7 @@ cdef inline void _shift_quarters(const int64_t[:] dtindex,
n = quarters
months_since = (dts.month - q1start_month) % modby
- compare_day = get_day_of_month(&dts, day_opt)
-
- # offset semantics - if on the anchor point and going backwards
- # shift to next
- if n <= 0 and (months_since != 0 or
- (months_since == 0 and dts.day > compare_day)):
- # make sure to roll forward, so negate
- n += 1
- elif n > 0 and (months_since == 0 and dts.day < compare_day):
- # pretend to roll back if on same month but
- # before compare_day
- n -= 1
+ n = _roll_qtrday(&dts, n, months_since, day_opt)
dts.year = year_add_months(dts, modby * n - months_since)
dts.month = month_add_months(dts, modby * n - months_since)
@@ -4009,7 +3998,7 @@ cpdef int roll_convention(int other, int n, int compare) nogil:
def roll_qtrday(other: datetime, n: int, month: int,
- day_opt: object, modby: int=3) -> int:
+ day_opt: object, modby: int) -> int:
"""
Possibly increment or decrement the number of periods to shift
based on rollforward/rollbackward conventions.
@@ -4037,25 +4026,30 @@ def roll_qtrday(other: datetime, n: int, month: int,
npy_datetimestruct dts
pydate_to_dtstruct(other, &dts)
- # TODO: with small adjustments this could be used in shift_quarters
-
if modby == 12:
# We care about the month-of-year, not month-of-quarter, so skip mod
months_since = other.month - month
else:
months_since = other.month % modby - month % modby
+ return _roll_qtrday(&dts, n, months_since, day_opt)
+
+
+cdef inline int _roll_qtrday(npy_datetimestruct* dts,
+ int n,
+ int months_since,
+ str day_opt) nogil except? -1:
+ """See roll_qtrday.__doc__"""
+
if n > 0:
if months_since < 0 or (months_since == 0 and
- other.day < get_day_of_month(&dts,
- day_opt)):
+ dts.day < get_day_of_month(dts, day_opt)):
# pretend to roll back if on same month but
# before compare_day
n -= 1
else:
if months_since > 0 or (months_since == 0 and
- other.day > get_day_of_month(&dts,
- day_opt)):
+ dts.day > get_day_of_month(dts, day_opt)):
# make sure to roll forward, so negate
n += 1
return n
| Last one in this sequence. | https://api.github.com/repos/pandas-dev/pandas/pulls/34781 | 2020-06-15T01:18:58Z | 2020-06-15T02:19:54Z | 2020-06-15T02:19:54Z | 2020-06-15T03:13:37Z |
TST: remove super slow cases on upsample_nearest_limit | diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 896765722bf32..b7fd797fb7230 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -3,7 +3,7 @@ repos:
rev: 19.10b0
hooks:
- id: black
- language_version: python3.7
+ language_version: python3
- repo: https://gitlab.com/pycqa/flake8
rev: 3.7.7
hooks:
diff --git a/pandas/tests/io/parser/test_multi_thread.py b/pandas/tests/io/parser/test_multi_thread.py
index 458ff4da55ed3..d50560c684084 100644
--- a/pandas/tests/io/parser/test_multi_thread.py
+++ b/pandas/tests/io/parser/test_multi_thread.py
@@ -6,6 +6,7 @@
from multiprocessing.pool import ThreadPool
import numpy as np
+import pytest
import pandas as pd
from pandas import DataFrame
@@ -34,6 +35,7 @@ def _construct_dataframe(num_rows):
return df
+@pytest.mark.slow
def test_multi_thread_string_io_read_csv(all_parsers):
# see gh-11786
parser = all_parsers
@@ -126,6 +128,7 @@ def reader(arg):
return final_dataframe
+@pytest.mark.slow
def test_multi_thread_path_multipart_read_csv(all_parsers):
# see gh-11786
num_tasks = 4
diff --git a/pandas/tests/resample/test_datetime_index.py b/pandas/tests/resample/test_datetime_index.py
index 8d7d45f54ad5f..43d2bf80505db 100644
--- a/pandas/tests/resample/test_datetime_index.py
+++ b/pandas/tests/resample/test_datetime_index.py
@@ -2,7 +2,6 @@
from functools import partial
from io import StringIO
-from dateutil.tz import tzlocal
import numpy as np
import pytest
import pytz
@@ -477,15 +476,10 @@ def test_upsample_with_limit():
tm.assert_series_equal(result, expected)
-@pytest.mark.parametrize("freq", ["Y", "10M", "5D", "10H", "5Min", "10S"])
+@pytest.mark.parametrize("freq", ["5D", "10H", "5Min", "10S"])
@pytest.mark.parametrize("rule", ["Y", "3M", "15D", "30H", "15Min", "30S"])
def test_nearest_upsample_with_limit(tz_aware_fixture, freq, rule):
# GH 33939
- tz = tz_aware_fixture
- if str(tz) == "tzlocal()" and rule == "30S" and freq in ["Y", "10M"]:
- # GH#34413 separate these so we can mark as slow, see
- # test_nearest_upsample_with_limit_tzlocal
- return
rng = date_range("1/1/2000", periods=3, freq=freq, tz=tz_aware_fixture)
ts = Series(np.random.randn(len(rng)), rng)
@@ -494,20 +488,6 @@ def test_nearest_upsample_with_limit(tz_aware_fixture, freq, rule):
tm.assert_series_equal(result, expected)
-@pytest.mark.slow
-@pytest.mark.parametrize("freq", ["Y", "10M"])
-def test_nearest_upsample_with_limit_tzlocal(freq):
- # GH#33939, GH#34413 split off from test_nearest_upsample_with_limit
- rule = "30S"
- tz = tzlocal()
- rng = date_range("1/1/2000", periods=3, freq=freq, tz=tz)
- ts = Series(np.random.randn(len(rng)), rng)
-
- result = ts.resample(rule).nearest(limit=2)
- expected = ts.reindex(result.index, method="nearest", limit=2)
- tm.assert_series_equal(result, expected)
-
-
def test_resample_ohlc(series):
s = series
| Also mark the multi-thread CSV parsing tests as slow. | https://api.github.com/repos/pandas-dev/pandas/pulls/34780 | 2020-06-15T00:42:37Z | 2020-06-15T02:19:39Z | 2020-06-15T02:19:39Z | 2020-06-15T02:19:40Z |
REF: de-duplicate code in liboffsets | diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
index 093d53db21dc1..37be1e7aeda40 100644
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -3723,136 +3723,14 @@ cdef shift_quarters(
out : ndarray[int64_t]
"""
cdef:
- Py_ssize_t i
- npy_datetimestruct dts
- int count = len(dtindex)
- int months_to_roll, months_since, n, compare_day
+ Py_ssize_t count = len(dtindex)
int64_t[:] out = np.empty(count, dtype="int64")
- if day_opt == "start":
- with nogil:
- for i in range(count):
- if dtindex[i] == NPY_NAT:
- out[i] = NPY_NAT
- continue
-
- dt64_to_dtstruct(dtindex[i], &dts)
- n = quarters
-
- months_since = (dts.month - q1start_month) % modby
- compare_day = get_day_of_month(&dts, day_opt)
-
- # offset semantics - if on the anchor point and going backwards
- # shift to next
- if n <= 0 and (months_since != 0 or
- (months_since == 0 and dts.day > compare_day)):
- # make sure to roll forward, so negate
- n += 1
- elif n > 0 and (months_since == 0 and dts.day < compare_day):
- # pretend to roll back if on same month but
- # before compare_day
- n -= 1
-
- dts.year = year_add_months(dts, modby * n - months_since)
- dts.month = month_add_months(dts, modby * n - months_since)
- dts.day = get_day_of_month(&dts, day_opt)
-
- out[i] = dtstruct_to_dt64(&dts)
-
- elif day_opt == "end":
- with nogil:
- for i in range(count):
- if dtindex[i] == NPY_NAT:
- out[i] = NPY_NAT
- continue
-
- dt64_to_dtstruct(dtindex[i], &dts)
- n = quarters
-
- months_since = (dts.month - q1start_month) % modby
- compare_day = get_day_of_month(&dts, day_opt)
-
- if n <= 0 and (months_since != 0 or
- (months_since == 0 and dts.day > compare_day)):
- # make sure to roll forward, so negate
- n += 1
- elif n > 0 and (months_since == 0 and dts.day < compare_day):
- # pretend to roll back if on same month but
- # before compare_day
- n -= 1
-
- dts.year = year_add_months(dts, modby * n - months_since)
- dts.month = month_add_months(dts, modby * n - months_since)
- dts.day = get_day_of_month(&dts, day_opt)
-
- out[i] = dtstruct_to_dt64(&dts)
-
- elif day_opt == "business_start":
- with nogil:
- for i in range(count):
- if dtindex[i] == NPY_NAT:
- out[i] = NPY_NAT
- continue
-
- dt64_to_dtstruct(dtindex[i], &dts)
- n = quarters
-
- months_since = (dts.month - q1start_month) % modby
- # compare_day is only relevant for comparison in the case
- # where months_since == 0.
- compare_day = get_day_of_month(&dts, day_opt)
-
- if n <= 0 and (months_since != 0 or
- (months_since == 0 and dts.day > compare_day)):
- # make sure to roll forward, so negate
- n += 1
- elif n > 0 and (months_since == 0 and dts.day < compare_day):
- # pretend to roll back if on same month but
- # before compare_day
- n -= 1
-
- dts.year = year_add_months(dts, modby * n - months_since)
- dts.month = month_add_months(dts, modby * n - months_since)
-
- dts.day = get_day_of_month(&dts, day_opt)
-
- out[i] = dtstruct_to_dt64(&dts)
-
- elif day_opt == "business_end":
- with nogil:
- for i in range(count):
- if dtindex[i] == NPY_NAT:
- out[i] = NPY_NAT
- continue
-
- dt64_to_dtstruct(dtindex[i], &dts)
- n = quarters
-
- months_since = (dts.month - q1start_month) % modby
- # compare_day is only relevant for comparison in the case
- # where months_since == 0.
- compare_day = get_day_of_month(&dts, day_opt)
-
- if n <= 0 and (months_since != 0 or
- (months_since == 0 and dts.day > compare_day)):
- # make sure to roll forward, so negate
- n += 1
- elif n > 0 and (months_since == 0 and dts.day < compare_day):
- # pretend to roll back if on same month but
- # before compare_day
- n -= 1
-
- dts.year = year_add_months(dts, modby * n - months_since)
- dts.month = month_add_months(dts, modby * n - months_since)
-
- dts.day = get_day_of_month(&dts, day_opt)
-
- out[i] = dtstruct_to_dt64(&dts)
-
- else:
+ if day_opt not in ["start", "end", "business_start", "business_end"]:
raise ValueError("day must be None, 'start', 'end', "
"'business_start', or 'business_end'")
+ _shift_quarters(dtindex, out, count, quarters, q1start_month, day_opt, modby)
return np.asarray(out)
@@ -3872,7 +3750,6 @@ def shift_months(const int64_t[:] dtindex, int months, object day_opt=None):
Py_ssize_t i
npy_datetimestruct dts
int count = len(dtindex)
- int months_to_roll
int64_t[:] out = np.empty(count, dtype="int64")
if day_opt is None:
@@ -3888,94 +3765,90 @@ def shift_months(const int64_t[:] dtindex, int months, object day_opt=None):
dts.day = min(dts.day, get_days_in_month(dts.year, dts.month))
out[i] = dtstruct_to_dt64(&dts)
- elif day_opt == "start":
- with nogil:
- for i in range(count):
- if dtindex[i] == NPY_NAT:
- out[i] = NPY_NAT
- continue
-
- dt64_to_dtstruct(dtindex[i], &dts)
- months_to_roll = months
- compare_day = get_day_of_month(&dts, day_opt)
+ elif day_opt in ["start", "end", "business_start", "business_end"]:
+ _shift_months(dtindex, out, count, months, day_opt)
- # offset semantics - if on the anchor point and going backwards
- # shift to next
- months_to_roll = roll_convention(dts.day, months_to_roll,
- compare_day)
-
- dts.year = year_add_months(dts, months_to_roll)
- dts.month = month_add_months(dts, months_to_roll)
- dts.day = get_day_of_month(&dts, day_opt)
-
- out[i] = dtstruct_to_dt64(&dts)
- elif day_opt == "end":
- with nogil:
- for i in range(count):
- if dtindex[i] == NPY_NAT:
- out[i] = NPY_NAT
- continue
+ else:
+ raise ValueError("day must be None, 'start', 'end', "
+ "'business_start', or 'business_end'")
- dt64_to_dtstruct(dtindex[i], &dts)
- months_to_roll = months
- compare_day = get_day_of_month(&dts, day_opt)
+ return np.asarray(out)
- # similar semantics - when adding shift forward by one
- # month if already at an end of month
- months_to_roll = roll_convention(dts.day, months_to_roll,
- compare_day)
- dts.year = year_add_months(dts, months_to_roll)
- dts.month = month_add_months(dts, months_to_roll)
+@cython.wraparound(False)
+@cython.boundscheck(False)
+cdef inline void _shift_months(const int64_t[:] dtindex,
+ int64_t[:] out,
+ Py_ssize_t count,
+ int months,
+ str day_opt) nogil:
+ """See shift_months.__doc__"""
+ cdef:
+ Py_ssize_t i
+ int months_to_roll, compare_day
+ npy_datetimestruct dts
- dts.day = get_day_of_month(&dts, day_opt)
- out[i] = dtstruct_to_dt64(&dts)
+ for i in range(count):
+ if dtindex[i] == NPY_NAT:
+ out[i] = NPY_NAT
+ continue
- elif day_opt == "business_start":
- with nogil:
- for i in range(count):
- if dtindex[i] == NPY_NAT:
- out[i] = NPY_NAT
- continue
+ dt64_to_dtstruct(dtindex[i], &dts)
+ months_to_roll = months
+ compare_day = get_day_of_month(&dts, day_opt)
- dt64_to_dtstruct(dtindex[i], &dts)
- months_to_roll = months
- compare_day = get_day_of_month(&dts, day_opt)
+ months_to_roll = roll_convention(dts.day, months_to_roll,
+ compare_day)
- months_to_roll = roll_convention(dts.day, months_to_roll,
- compare_day)
+ dts.year = year_add_months(dts, months_to_roll)
+ dts.month = month_add_months(dts, months_to_roll)
+ dts.day = get_day_of_month(&dts, day_opt)
- dts.year = year_add_months(dts, months_to_roll)
- dts.month = month_add_months(dts, months_to_roll)
+ out[i] = dtstruct_to_dt64(&dts)
- dts.day = get_day_of_month(&dts, day_opt)
- out[i] = dtstruct_to_dt64(&dts)
- elif day_opt == "business_end":
- with nogil:
- for i in range(count):
- if dtindex[i] == NPY_NAT:
- out[i] = NPY_NAT
- continue
+@cython.wraparound(False)
+@cython.boundscheck(False)
+cdef inline void _shift_quarters(const int64_t[:] dtindex,
+ int64_t[:] out,
+ Py_ssize_t count,
+ int quarters,
+ int q1start_month,
+ str day_opt,
+ int modby) nogil:
+ """See shift_quarters.__doc__"""
+ cdef:
+ Py_ssize_t i
+ int months_since, compare_day, n
+ npy_datetimestruct dts
- dt64_to_dtstruct(dtindex[i], &dts)
- months_to_roll = months
- compare_day = get_day_of_month(&dts, day_opt)
+ for i in range(count):
+ if dtindex[i] == NPY_NAT:
+ out[i] = NPY_NAT
+ continue
- months_to_roll = roll_convention(dts.day, months_to_roll,
- compare_day)
+ dt64_to_dtstruct(dtindex[i], &dts)
+ n = quarters
- dts.year = year_add_months(dts, months_to_roll)
- dts.month = month_add_months(dts, months_to_roll)
+ months_since = (dts.month - q1start_month) % modby
+ compare_day = get_day_of_month(&dts, day_opt)
- dts.day = get_day_of_month(&dts, day_opt)
- out[i] = dtstruct_to_dt64(&dts)
+ # offset semantics - if on the anchor point and going backwards
+ # shift to next
+ if n <= 0 and (months_since != 0 or
+ (months_since == 0 and dts.day > compare_day)):
+ # make sure to roll forward, so negate
+ n += 1
+ elif n > 0 and (months_since == 0 and dts.day < compare_day):
+ # pretend to roll back if on same month but
+ # before compare_day
+ n -= 1
- else:
- raise ValueError("day must be None, 'start', 'end', "
- "'business_start', or 'business_end'")
+ dts.year = year_add_months(dts, modby * n - months_since)
+ dts.month = month_add_months(dts, modby * n - months_since)
+ dts.day = get_day_of_month(&dts, day_opt)
- return np.asarray(out)
+ out[i] = dtstruct_to_dt64(&dts)
cdef ndarray[int64_t] shift_bdays(const int64_t[:] i8other, int periods):
@@ -4035,8 +3908,7 @@ cdef ndarray[int64_t] shift_bdays(const int64_t[:] i8other, int periods):
return result.base
-def shift_month(stamp: datetime, months: int,
- day_opt: object=None) -> datetime:
+def shift_month(stamp: datetime, months: int, day_opt: object=None) -> datetime:
"""
Given a datetime (or Timestamp) `stamp`, an integer `months` and an
option `day_opt`, return a new datetimelike that many months later,
@@ -4078,14 +3950,14 @@ def shift_month(stamp: datetime, months: int,
if day_opt is None:
days_in_month = get_days_in_month(year, month)
day = min(stamp.day, days_in_month)
- elif day_opt == 'start':
+ elif day_opt == "start":
day = 1
- elif day_opt == 'end':
+ elif day_opt == "end":
day = get_days_in_month(year, month)
- elif day_opt == 'business_start':
+ elif day_opt == "business_start":
# first business day of month
day = get_firstbday(year, month)
- elif day_opt == 'business_end':
+ elif day_opt == "business_end":
# last business day of month
day = get_lastbday(year, month)
elif is_integer_object(day_opt):
@@ -4126,15 +3998,15 @@ cdef inline int get_day_of_month(npy_datetimestruct* dts, day_opt) nogil except?
cdef:
int days_in_month
- if day_opt == 'start':
+ if day_opt == "start":
return 1
- elif day_opt == 'end':
+ elif day_opt == "end":
days_in_month = get_days_in_month(dts.year, dts.month)
return days_in_month
- elif day_opt == 'business_start':
+ elif day_opt == "business_start":
# first business day of month
return get_firstbday(dts.year, dts.month)
- elif day_opt == 'business_end':
+ elif day_opt == "business_end":
# last business day of month
return get_lastbday(dts.year, dts.month)
elif day_opt is not None:
| ```
In [2]: dti = pd.date_range("2016-01-01", periods=10000, freq="S")
In [3]: off = pd.offsets.BMonthEnd(2)
In [4]: %timeit dti + off
3.36 ms ± 14 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) # <-- master
998 µs ± 12.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) # <-- PR
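# editor's sketch (not from the PR): the refactor is meant to be
# behavior-preserving, so a quick sanity check is that
# (dti + off).equals(pd.DatetimeIndex([ts + off for ts in dti]))
# holds both before and after this change.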
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/34778 | 2020-06-14T21:58:07Z | 2020-06-15T00:17:51Z | 2020-06-15T00:17:51Z | 2020-06-15T00:59:10Z |
DOC: updated multi.py docstring for SS06 errors | diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index fc2d4cf4621c4..fa35d683101c3 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -903,8 +903,7 @@ def _set_codes(
def set_codes(self, codes, level=None, inplace=False, verify_integrity=True):
"""
- Set new codes on MultiIndex. Defaults to returning
- new index.
+ Set new codes on MultiIndex. Defaults to returning new index.
.. versionadded:: 0.24.0
@@ -1541,8 +1540,9 @@ def _get_level_values(self, level, unique=False):
def get_level_values(self, level):
"""
- Return vector of label values for requested level,
- equal to the length of the index.
+ Return vector of label values for requested level.
+
+ Length of returned vector is equal to the length of the index.
Parameters
----------
@@ -1797,12 +1797,12 @@ def _sort_levels_monotonic(self):
def remove_unused_levels(self):
"""
- Create a new MultiIndex from the current that removes
- unused levels, meaning that they are not expressed in the labels.
+ Create new MultiIndex from current that removes unused levels.
- The resulting MultiIndex will have the same outward
- appearance, meaning the same .values and ordering. It will also
- be .equals() to the original.
+ Unused level(s) means levels that are not expressed in the
+ labels. The resulting MultiIndex will have the same outward
+ appearance, meaning the same .values and ordering. It will
+ also be .equals() to the original.
Returns
-------
@@ -2195,8 +2195,10 @@ def cats(level_codes):
def sortlevel(self, level=0, ascending=True, sort_remaining=True):
"""
- Sort MultiIndex at the requested level. The result will respect the
- original ordering of the associated factor at that level.
+ Sort MultiIndex at the requested level.
+
+ The result will respect the original ordering of the associated
+ factor at that level.
Parameters
----------
@@ -2634,8 +2636,10 @@ def _get_loc_single_level_index(self, level_index: Index, key: Hashable) -> int:
def get_loc(self, key, method=None):
"""
- Get location for a label or a tuple of labels as an integer, slice or
- boolean mask.
+ Get location for a label or a tuple of labels.
+
+ The location is returned as an integer/slice or boolean
+ mask.
Parameters
----------
@@ -2743,8 +2747,7 @@ def _maybe_to_slice(loc):
def get_loc_level(self, key, level=0, drop_level: bool = True):
"""
- Get both the location for the requested label(s) and the
- resulting sliced index.
+ Get location and sliced index for requested label(s)/level(s).
Parameters
----------
| https://api.github.com/repos/pandas-dev/pandas/pulls/34775 | 2020-06-14T19:29:09Z | 2020-06-14T22:21:47Z | 2020-06-14T22:21:47Z | 2020-06-14T22:21:51Z |
|
REF: inline get_day_of_month | diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
index 6931360997420..9e6356b55dcec 100644
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -3757,16 +3757,22 @@ cdef shift_quarters(
n = quarters
months_since = (dts.month - q1start_month) % modby
+ compare_day = get_day_of_month(&dts, day_opt)
# offset semantics - if on the anchor point and going backwards
# shift to next
if n <= 0 and (months_since != 0 or
- (months_since == 0 and dts.day > 1)):
+ (months_since == 0 and dts.day > compare_day)):
+ # make sure to roll forward, so negate
n += 1
+ elif n > 0 and (months_since == 0 and dts.day < compare_day):
+ # pretend to roll back if on same month but
+ # before compare_day
+ n -= 1
dts.year = year_add_months(dts, modby * n - months_since)
dts.month = month_add_months(dts, modby * n - months_since)
- dts.day = 1
+ dts.day = get_day_of_month(&dts, day_opt)
out[i] = dtstruct_to_dt64(&dts)
@@ -3781,21 +3787,20 @@ cdef shift_quarters(
n = quarters
months_since = (dts.month - q1start_month) % modby
+ compare_day = get_day_of_month(&dts, day_opt)
- if n <= 0 and months_since != 0:
- # The general case of this condition would be
- # `months_since != 0 or (months_since == 0 and
- # dts.day > get_days_in_month(dts.year, dts.month))`
- # but the get_days_in_month inequality would never hold.
+ if n <= 0 and (months_since != 0 or
+ (months_since == 0 and dts.day > compare_day)):
+ # make sure to roll forward, so negate
n += 1
- elif n > 0 and (months_since == 0 and
- dts.day < get_days_in_month(dts.year,
- dts.month)):
+ elif n > 0 and (months_since == 0 and dts.day < compare_day):
+ # pretend to roll back if on same month but
+ # before compare_day
n -= 1
dts.year = year_add_months(dts, modby * n - months_since)
dts.month = month_add_months(dts, modby * n - months_since)
- dts.day = get_days_in_month(dts.year, dts.month)
+ dts.day = get_day_of_month(&dts, day_opt)
out[i] = dtstruct_to_dt64(&dts)
@@ -3812,7 +3817,7 @@ cdef shift_quarters(
months_since = (dts.month - q1start_month) % modby
# compare_day is only relevant for comparison in the case
# where months_since == 0.
- compare_day = get_firstbday(dts.year, dts.month)
+ compare_day = get_day_of_month(&dts, day_opt)
if n <= 0 and (months_since != 0 or
(months_since == 0 and dts.day > compare_day)):
@@ -3826,7 +3831,7 @@ cdef shift_quarters(
dts.year = year_add_months(dts, modby * n - months_since)
dts.month = month_add_months(dts, modby * n - months_since)
- dts.day = get_firstbday(dts.year, dts.month)
+ dts.day = get_day_of_month(&dts, day_opt)
out[i] = dtstruct_to_dt64(&dts)
@@ -3843,7 +3848,7 @@ cdef shift_quarters(
months_since = (dts.month - q1start_month) % modby
# compare_day is only relevant for comparison in the case
# where months_since == 0.
- compare_day = get_lastbday(dts.year, dts.month)
+ compare_day = get_day_of_month(&dts, day_opt)
if n <= 0 and (months_since != 0 or
(months_since == 0 and dts.day > compare_day)):
@@ -3857,7 +3862,7 @@ cdef shift_quarters(
dts.year = year_add_months(dts, modby * n - months_since)
dts.month = month_add_months(dts, modby * n - months_since)
- dts.day = get_lastbday(dts.year, dts.month)
+ dts.day = get_day_of_month(&dts, day_opt)
out[i] = dtstruct_to_dt64(&dts)
@@ -3909,7 +3914,7 @@ def shift_months(const int64_t[:] dtindex, int months, object day_opt=None):
dt64_to_dtstruct(dtindex[i], &dts)
months_to_roll = months
- compare_day = 1
+ compare_day = get_day_of_month(&dts, day_opt)
# offset semantics - if on the anchor point and going backwards
# shift to next
@@ -3918,7 +3923,7 @@ def shift_months(const int64_t[:] dtindex, int months, object day_opt=None):
dts.year = year_add_months(dts, months_to_roll)
dts.month = month_add_months(dts, months_to_roll)
- dts.day = 1
+ dts.day = get_day_of_month(&dts, day_opt)
out[i] = dtstruct_to_dt64(&dts)
elif day_opt == "end":
@@ -3930,7 +3935,7 @@ def shift_months(const int64_t[:] dtindex, int months, object day_opt=None):
dt64_to_dtstruct(dtindex[i], &dts)
months_to_roll = months
- compare_day = get_days_in_month(dts.year, dts.month)
+ compare_day = get_day_of_month(&dts, day_opt)
# similar semantics - when adding shift forward by one
# month if already at an end of month
@@ -3940,7 +3945,7 @@ def shift_months(const int64_t[:] dtindex, int months, object day_opt=None):
dts.year = year_add_months(dts, months_to_roll)
dts.month = month_add_months(dts, months_to_roll)
- dts.day = get_days_in_month(dts.year, dts.month)
+ dts.day = get_day_of_month(&dts, day_opt)
out[i] = dtstruct_to_dt64(&dts)
elif day_opt == "business_start":
@@ -3952,7 +3957,7 @@ def shift_months(const int64_t[:] dtindex, int months, object day_opt=None):
dt64_to_dtstruct(dtindex[i], &dts)
months_to_roll = months
- compare_day = get_firstbday(dts.year, dts.month)
+ compare_day = get_day_of_month(&dts, day_opt)
months_to_roll = roll_convention(dts.day, months_to_roll,
compare_day)
@@ -3960,7 +3965,7 @@ def shift_months(const int64_t[:] dtindex, int months, object day_opt=None):
dts.year = year_add_months(dts, months_to_roll)
dts.month = month_add_months(dts, months_to_roll)
- dts.day = get_firstbday(dts.year, dts.month)
+ dts.day = get_day_of_month(&dts, day_opt)
out[i] = dtstruct_to_dt64(&dts)
elif day_opt == "business_end":
@@ -3972,7 +3977,7 @@ def shift_months(const int64_t[:] dtindex, int months, object day_opt=None):
dt64_to_dtstruct(dtindex[i], &dts)
months_to_roll = months
- compare_day = get_lastbday(dts.year, dts.month)
+ compare_day = get_day_of_month(&dts, day_opt)
months_to_roll = roll_convention(dts.day, months_to_roll,
compare_day)
@@ -3980,7 +3985,7 @@ def shift_months(const int64_t[:] dtindex, int months, object day_opt=None):
dts.year = year_add_months(dts, months_to_roll)
dts.month = month_add_months(dts, months_to_roll)
- dts.day = get_lastbday(dts.year, dts.month)
+ dts.day = get_day_of_month(&dts, day_opt)
out[i] = dtstruct_to_dt64(&dts)
else:
@@ -4051,7 +4056,7 @@ def shift_month(stamp: datetime, months: int,
return stamp.replace(year=year, month=month, day=day)
-cdef int get_day_of_month(npy_datetimestruct* dts, day_opt) nogil except? -1:
+cdef inline int get_day_of_month(npy_datetimestruct* dts, day_opt) nogil except? -1:
"""
Find the day in `other`'s month that satisfies a DateOffset's is_on_offset
policy, as described by the `day_opt` argument.
| Makes the logic in all cases of shift_months identical, same for shift_quarters. Next pass does de-duplication. | https://api.github.com/repos/pandas-dev/pandas/pulls/34772 | 2020-06-14T17:45:56Z | 2020-06-14T19:45:25Z | 2020-06-14T19:45:25Z | 2020-06-14T19:52:45Z |
CLN/TYPE: EWM | diff --git a/pandas/_libs/window/aggregations.pyx b/pandas/_libs/window/aggregations.pyx
index 9e088062d7280..646444d10e416 100644
--- a/pandas/_libs/window/aggregations.pyx
+++ b/pandas/_libs/window/aggregations.pyx
@@ -1759,7 +1759,7 @@ def roll_weighted_var(float64_t[:] values, float64_t[:] weights,
# Exponentially weighted moving average
-def ewma(float64_t[:] vals, float64_t com, int adjust, bint ignore_na, int minp):
+def ewma(float64_t[:] vals, float64_t com, bint adjust, bint ignore_na, int minp):
"""
Compute exponentially-weighted moving average using center-of-mass.
@@ -1777,17 +1777,14 @@ def ewma(float64_t[:] vals, float64_t com, int adjust, bint ignore_na, int minp)
"""
cdef:
- Py_ssize_t N = len(vals)
+ Py_ssize_t i, nobs, N = len(vals)
ndarray[float64_t] output = np.empty(N, dtype=float)
float64_t alpha, old_wt_factor, new_wt, weighted_avg, old_wt, cur
- Py_ssize_t i, nobs
bint is_observation
if N == 0:
return output
- minp = max(minp, 1)
-
alpha = 1. / (1. + com)
old_wt_factor = 1. - alpha
new_wt = 1. if adjust else alpha
@@ -1831,7 +1828,7 @@ def ewma(float64_t[:] vals, float64_t com, int adjust, bint ignore_na, int minp)
def ewmcov(float64_t[:] input_x, float64_t[:] input_y,
- float64_t com, int adjust, bint ignore_na, int minp, int bias):
+ float64_t com, bint adjust, bint ignore_na, int minp, bint bias):
"""
Compute exponentially-weighted moving variance using center-of-mass.
@@ -1851,11 +1848,10 @@ def ewmcov(float64_t[:] input_x, float64_t[:] input_y,
"""
cdef:
- Py_ssize_t N = len(input_x), M = len(input_y)
+ Py_ssize_t i, nobs, N = len(input_x), M = len(input_y)
float64_t alpha, old_wt_factor, new_wt, mean_x, mean_y, cov
float64_t sum_wt, sum_wt2, old_wt, cur_x, cur_y, old_mean_x, old_mean_y
float64_t numerator, denominator
- Py_ssize_t i, nobs
ndarray[float64_t] output
bint is_observation
@@ -1866,8 +1862,6 @@ def ewmcov(float64_t[:] input_x, float64_t[:] input_y,
if N == 0:
return output
- minp = max(minp, 1)
-
alpha = 1. / (1. + com)
old_wt_factor = 1. - alpha
new_wt = 1. if adjust else alpha
diff --git a/pandas/core/window/ewm.py b/pandas/core/window/ewm.py
index a5e30c900cae2..0e39b94574a12 100644
--- a/pandas/core/window/ewm.py
+++ b/pandas/core/window/ewm.py
@@ -1,9 +1,11 @@
from functools import partial
from textwrap import dedent
+from typing import Optional, Union
import numpy as np
import pandas._libs.window.aggregations as window_aggregations
+from pandas._typing import FrameOrSeries
from pandas.compat.numpy import function as nv
from pandas.util._decorators import Appender, Substitution
@@ -24,7 +26,12 @@
"""
-def get_center_of_mass(comass, span, halflife, alpha) -> float:
+def get_center_of_mass(
+ comass: Optional[float],
+ span: Optional[float],
+ halflife: Optional[float],
+ alpha: Optional[float],
+) -> float:
valid_count = com.count_not_none(comass, span, halflife, alpha)
if valid_count > 1:
raise ValueError("comass, span, halflife, and alpha are mutually exclusive")
@@ -114,7 +121,7 @@ class EWM(_Rolling):
used in calculating the final weighted average of
[:math:`x_0`, None, :math:`x_2`] are :math:`1-\alpha` and :math:`1` if
``adjust=True``, and :math:`1-\alpha` and :math:`\alpha` if ``adjust=False``.
- axis : {0 or 'index', 1 or 'columns'}, default 0
+ axis : {0, 1}, default 0
The axis to use. The value 0 identifies the rows, and 1
identifies the columns.
@@ -159,18 +166,18 @@ class EWM(_Rolling):
def __init__(
self,
obj,
- com=None,
- span=None,
- halflife=None,
- alpha=None,
- min_periods=0,
- adjust=True,
- ignore_na=False,
- axis=0,
+ com: Optional[float] = None,
+ span: Optional[float] = None,
+ halflife: Optional[float] = None,
+ alpha: Optional[float] = None,
+ min_periods: int = 0,
+ adjust: bool = True,
+ ignore_na: bool = False,
+ axis: int = 0,
):
self.obj = obj
self.com = get_center_of_mass(com, span, halflife, alpha)
- self.min_periods = min_periods
+ self.min_periods = max(int(min_periods), 1)
self.adjust = adjust
self.ignore_na = ignore_na
self.axis = axis
@@ -274,16 +281,16 @@ def mean(self, *args, **kwargs):
window_func = partial(
window_func,
com=self.com,
- adjust=int(self.adjust),
+ adjust=self.adjust,
ignore_na=self.ignore_na,
- minp=int(self.min_periods),
+ minp=self.min_periods,
)
return self._apply(window_func)
@Substitution(name="ewm", func_name="std")
@Appender(_doc_template)
@Appender(_bias_template)
- def std(self, bias=False, *args, **kwargs):
+ def std(self, bias: bool = False, *args, **kwargs):
"""
Exponential weighted moving stddev.
"""
@@ -295,7 +302,7 @@ def std(self, bias=False, *args, **kwargs):
@Substitution(name="ewm", func_name="var")
@Appender(_doc_template)
@Appender(_bias_template)
- def var(self, bias=False, *args, **kwargs):
+ def var(self, bias: bool = False, *args, **kwargs):
"""
Exponential weighted moving variance.
"""
@@ -303,20 +310,20 @@ def var(self, bias=False, *args, **kwargs):
def f(arg):
return window_aggregations.ewmcov(
- arg,
- arg,
- self.com,
- int(self.adjust),
- int(self.ignore_na),
- int(self.min_periods),
- int(bias),
+ arg, arg, self.com, self.adjust, self.ignore_na, self.min_periods, bias,
)
return self._apply(f)
@Substitution(name="ewm", func_name="cov")
@Appender(_doc_template)
- def cov(self, other=None, pairwise=None, bias=False, **kwargs):
+ def cov(
+ self,
+ other: Optional[Union[np.ndarray, FrameOrSeries]] = None,
+ pairwise: Optional[bool] = None,
+ bias: bool = False,
+ **kwargs,
+ ):
"""
Exponential weighted sample covariance.
@@ -350,10 +357,10 @@ def _get_cov(X, Y):
X._prep_values(),
Y._prep_values(),
self.com,
- int(self.adjust),
- int(self.ignore_na),
- int(self.min_periods),
- int(bias),
+ self.adjust,
+ self.ignore_na,
+ self.min_periods,
+ bias,
)
return X._wrap_result(cov)
@@ -363,7 +370,12 @@ def _get_cov(X, Y):
@Substitution(name="ewm", func_name="corr")
@Appender(_doc_template)
- def corr(self, other=None, pairwise=None, **kwargs):
+ def corr(
+ self,
+ other: Optional[Union[np.ndarray, FrameOrSeries]] = None,
+ pairwise: Optional[bool] = None,
+ **kwargs,
+ ):
"""
Exponential weighted sample correlation.
@@ -394,13 +406,7 @@ def _get_corr(X, Y):
def _cov(x, y):
return window_aggregations.ewmcov(
- x,
- y,
- self.com,
- int(self.adjust),
- int(self.ignore_na),
- int(self.min_periods),
- 1,
+ x, y, self.com, self.adjust, self.ignore_na, self.min_periods, 1,
)
x_values = X._prep_values()
| - [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
1. Move `min_periods` validation to constructor
2. Stronger Cython type definitions
3. Typing EWM methods | https://api.github.com/repos/pandas-dev/pandas/pulls/34770 | 2020-06-14T16:53:17Z | 2020-06-14T18:36:23Z | 2020-06-14T18:36:23Z | 2020-06-14T18:36:26Z |
BUG, TST: fix-_check_ticks_props | diff --git a/pandas/tests/plotting/common.py b/pandas/tests/plotting/common.py
index f2f7b37170ec9..896d3278cdde1 100644
--- a/pandas/tests/plotting/common.py
+++ b/pandas/tests/plotting/common.py
@@ -272,7 +272,7 @@ def _check_ticks_props(
axes = self._flatten_visible(axes)
for ax in axes:
- if xlabelsize or xrot:
+ if xlabelsize is not None or xrot is not None:
if isinstance(ax.xaxis.get_minor_formatter(), NullFormatter):
# If minor ticks has NullFormatter, rot / fontsize are not
# retained
@@ -286,7 +286,7 @@ def _check_ticks_props(
if xrot is not None:
tm.assert_almost_equal(label.get_rotation(), xrot)
- if ylabelsize or yrot:
+ if ylabelsize is not None or yrot is not None:
if isinstance(ax.yaxis.get_minor_formatter(), NullFormatter):
labels = ax.get_yticklabels()
else:
diff --git a/pandas/tests/plotting/test_common.py b/pandas/tests/plotting/test_common.py
new file mode 100644
index 0000000000000..af67ed7ec215b
--- /dev/null
+++ b/pandas/tests/plotting/test_common.py
@@ -0,0 +1,24 @@
+import pytest
+
+import pandas.util._test_decorators as td
+
+from pandas import DataFrame
+from pandas.tests.plotting.common import TestPlotBase, _check_plot_works
+
+
+@td.skip_if_no_mpl
+class TestCommon(TestPlotBase):
+ def test__check_ticks_props(self):
+ # GH 34768
+ df = DataFrame({"b": [0, 1, 0], "a": [1, 2, 3]})
+ ax = _check_plot_works(df.plot, rot=30)
+ ax.yaxis.set_tick_params(rotation=30)
+ msg = "expected 0.00000 but got "
+ with pytest.raises(AssertionError, match=msg):
+ self._check_ticks_props(ax, xrot=0)
+ with pytest.raises(AssertionError, match=msg):
+ self._check_ticks_props(ax, xlabelsize=0)
+ with pytest.raises(AssertionError, match=msg):
+ self._check_ticks_props(ax, yrot=0)
+ with pytest.raises(AssertionError, match=msg):
+ self._check_ticks_props(ax, ylabelsize=0)
diff --git a/pandas/tests/plotting/test_frame.py b/pandas/tests/plotting/test_frame.py
index c84a09f21f46b..fb8f62c946caf 100644
--- a/pandas/tests/plotting/test_frame.py
+++ b/pandas/tests/plotting/test_frame.py
@@ -48,6 +48,7 @@ def _assert_xtickslabels_visibility(self, axes, expected):
for ax, exp in zip(axes, expected):
self._check_visible(ax.get_xticklabels(), visible=exp)
+ @pytest.mark.xfail(reason="Waiting for PR 34334", strict=True)
@pytest.mark.slow
def test_plot(self):
from pandas.plotting._matplotlib.compat import _mpl_ge_3_1_0
@@ -467,6 +468,7 @@ def test_groupby_boxplot_sharex(self):
expected = [False, False, True, True]
self._assert_xtickslabels_visibility(axes, expected)
+ @pytest.mark.xfail(reason="Waiting for PR 34334", strict=True)
@pytest.mark.slow
def test_subplots_timeseries(self):
idx = date_range(start="2014-07-01", freq="M", periods=10)
| Here's something I noticed while working on #34334: `self._check_ticks_props(ax, ylabelsize=0)` always passes!
This is because of
```python
if ylabelsize
```
instead of
```python
if ylabelsize is not None
```
being used.
The test I've added fails on master:
```python-traceback
(pandas-dev) marco@marco-Predator-PH315-52:~/pandas-dev$ pytest pandas/tests/plotting/test_frame.py::TestDataFramePlots::test_plot_with_rot
============================================================== test session starts ==============================================================
platform linux -- Python 3.8.2, pytest-5.4.1, py-1.8.1, pluggy-0.13.1
rootdir: /home/marco/pandas-dev, inifile: setup.cfg
plugins: cov-2.8.1, xdist-1.31.0, asyncio-0.10.0, hypothesis-5.8.0, forked-1.1.2
collected 1 item
pandas/tests/plotting/test_frame.py . [100%]
=============================================================== 1 passed in 0.33s ===============================================================
(pandas-dev) marco@marco-Predator-PH315-52:~/pandas-dev$ git checkout upstream/master -- pandas/tests/plotting/common.py
(pandas-dev) marco@marco-Predator-PH315-52:~/pandas-dev$ pytest pandas/tests/plotting/test_frame.py::TestDataFramePlots::test_plot_with_rot
============================================================== test session starts ==============================================================
platform linux -- Python 3.8.2, pytest-5.4.1, py-1.8.1, pluggy-0.13.1
rootdir: /home/marco/pandas-dev, inifile: setup.cfg
plugins: cov-2.8.1, xdist-1.31.0, asyncio-0.10.0, hypothesis-5.8.0, forked-1.1.2
collected 1 item
pandas/tests/plotting/test_frame.py F [100%]
=================================================================== FAILURES ====================================================================
_____________________________________________________ TestDataFramePlots.test_plot_with_rot _____________________________________________________
self = <pandas.tests.plotting.test_frame.TestDataFramePlots object at 0x7f8c7578f3a0>
def test_plot_with_rot(self):
# GH 34768
df = pd.DataFrame({"b": [0, 1, 0], "a": [1, 2, 3]})
ax = _check_plot_works(df.plot, rot=30)
ax.yaxis.set_tick_params(rotation=30)
msg = "expected 0.00000 but got "
with pytest.raises(AssertionError, match=msg):
> self._check_ticks_props(ax, xrot=0)
E Failed: DID NOT RAISE <class 'AssertionError'>
pandas/tests/plotting/test_frame.py:3360: Failed
============================================================ short test summary info ============================================================
FAILED pandas/tests/plotting/test_frame.py::TestDataFramePlots::test_plot_with_rot - Failed: DID NOT RAISE <class 'AssertionError'>
=============================================================== 1 failed in 0.46s ===============================================================
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/34768 | 2020-06-14T14:02:33Z | 2020-06-16T19:29:38Z | 2020-06-16T19:29:38Z | 2020-06-16T19:34:12Z |
BUG: Groupby with as_index=False raises error when type is Category | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index 2243790a663df..e998c60d3ce85 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -670,6 +670,25 @@ Using :meth:`DataFrame.groupby` with ``as_index=False`` and the function ``idxma
df.groupby("a", as_index=False).nunique()
+The method :meth:`core.DataFrameGroupBy.size` would previously ignore ``as_index=False``. Now the grouping columns are returned as columns, making the result a `DataFrame` instead of a `Series`. (:issue:`32599`)
+
+*Previous behavior*:
+
+.. code-block:: ipython
+
+ In [3]: df.groupby("a", as_index=False).size()
+ Out[4]:
+ a
+ x 2
+ y 2
+ dtype: int64
+
+*New behavior*:
+
+.. ipython:: python
+
+ df.groupby("a", as_index=False).size()
+
.. _whatsnew_110.api_breaking.apply_applymap_first_once:
apply and applymap on ``DataFrame`` evaluates first row/column only once
@@ -983,6 +1002,7 @@ Groupby/resample/rolling
The behaviour now is consistent, independent of internal heuristics. (:issue:`31612`, :issue:`14927`, :issue:`13056`)
- Bug in :meth:`SeriesGroupBy.agg` where any column name was accepted in the named aggregation of ``SeriesGroupBy`` previously. The behaviour now allows only ``str`` and callables else would raise ``TypeError``. (:issue:`34422`)
+
Reshaping
^^^^^^^^^
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 5f8ab8966c1f0..560c4acf10d06 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -5440,7 +5440,7 @@ def value_counts(
if subset is None:
subset = self.columns.tolist()
- counts = self.groupby(subset).size()
+ counts = self.groupby(subset).grouper.size()
if sort:
counts = counts.sort_values(ascending=ascending)
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 9838cff9b34f9..5c3dd8ea4fac0 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -942,9 +942,9 @@ def _transform_should_cast(self, func_nm: str) -> bool:
bool
Whether transform should attempt to cast the result of aggregation
"""
- return (self.size().fillna(0) > 0).any() and (
- func_nm not in base.cython_cast_blacklist
- )
+ filled_series = self.grouper.size().fillna(0)
+ assert filled_series is not None
+ return filled_series.gt(0).any() and func_nm not in base.cython_cast_blacklist
def _cython_transform(self, how: str, numeric_only: bool = True, **kwargs):
output: Dict[base.OutputKey, np.ndarray] = {}
@@ -1507,14 +1507,15 @@ def sem(self, ddof: int = 1):
@Substitution(name="groupby")
@Appender(_common_see_also)
- def size(self):
+ def size(self) -> FrameOrSeriesUnion:
"""
Compute group sizes.
Returns
-------
- Series
- Number of rows in each group.
+ DataFrame or Series
+ Number of rows in each group as a Series if as_index is True
+ or a DataFrame if as_index is False.
"""
result = self.grouper.size()
@@ -1523,6 +1524,10 @@ def size(self):
result = self._obj_1d_constructor(result, name=self.obj.name)
else:
result = self._obj_1d_constructor(result)
+
+ if not self.as_index:
+ result = result.rename("size").reset_index()
+
return self._reindex_output(result, fill_value=0)
@doc(_groupby_agg_method_template, fname="sum", no=True, mc=0)
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index 80f34bb91cdfd..664c30e003632 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -668,11 +668,14 @@ def test_ops_not_as_index(reduction_func):
if reduction_func in ("corrwith",):
pytest.skip("Test not applicable")
- if reduction_func in ("nth", "ngroup", "size",):
+ if reduction_func in ("nth", "ngroup",):
pytest.skip("Skip until behavior is determined (GH #5755)")
df = DataFrame(np.random.randint(0, 5, size=(100, 2)), columns=["a", "b"])
- expected = getattr(df.groupby("a"), reduction_func)().reset_index()
+ expected = getattr(df.groupby("a"), reduction_func)()
+ if reduction_func == "size":
+ expected = expected.rename("size")
+ expected = expected.reset_index()
g = df.groupby("a", as_index=False)
diff --git a/pandas/tests/groupby/test_size.py b/pandas/tests/groupby/test_size.py
index 42bccc67fe0f8..9cff8b966dad0 100644
--- a/pandas/tests/groupby/test_size.py
+++ b/pandas/tests/groupby/test_size.py
@@ -44,3 +44,19 @@ def test_size_period_index():
grp = ser.groupby(level="A")
result = grp.size()
tm.assert_series_equal(result, ser)
+
+
+@pytest.mark.parametrize("as_index", [True, False])
+def test_size_on_categorical(as_index):
+ df = DataFrame([[1, 1], [2, 2]], columns=["A", "B"])
+ df["A"] = df["A"].astype("category")
+ result = df.groupby(["A", "B"], as_index=as_index).size()
+
+ expected = DataFrame(
+ [[1, 1, 1], [1, 2, 0], [2, 1, 0], [2, 2, 1]], columns=["A", "B", "size"],
+ )
+ expected["A"] = expected["A"].astype("category")
+ if as_index:
+ expected = expected.set_index(["A", "B"])["size"].rename(None)
+
+ tm.assert_equal(result, expected)
| - [x] closes #32599
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
This regression was a symptom of `DataFrameGroupBy.size` ignoring `as_index=False`. Fixing this now makes the result either a frame or series, so where the internals use `size()` they now use `grouper.size()` which always returns a series.
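To make the new contract concrete, a sketch mirroring the whatsnew example above (illustration only, not additional test code):
```python
import pandas as pd

df = pd.DataFrame({"a": ["x", "x", "y", "y"], "b": range(4)})
df.groupby("a").size()                  # Series, unchanged
df.groupby("a", as_index=False).size()  # now a DataFrame with columns ["a", "size"]
```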
I also implemented @dsaxton's suggestion to name the resulting column "size" when `as_index=False` (https://github.com/pandas-dev/pandas/issues/5755#issuecomment-522321259). | https://api.github.com/repos/pandas-dev/pandas/pulls/34767 | 2020-06-14T12:58:48Z | 2020-06-15T12:29:20Z | 2020-06-15T12:29:19Z | 2020-07-11T16:02:03Z |
REF: make get_day_of_month nogil | diff --git a/pandas/_libs/tslibs/np_datetime.pxd b/pandas/_libs/tslibs/np_datetime.pxd
index 038632e1575c3..eebdcb3ace507 100644
--- a/pandas/_libs/tslibs/np_datetime.pxd
+++ b/pandas/_libs/tslibs/np_datetime.pxd
@@ -63,6 +63,7 @@ cdef void td64_to_tdstruct(int64_t td64, pandas_timedeltastruct* out) nogil
cdef int64_t pydatetime_to_dt64(datetime val, npy_datetimestruct *dts)
cdef int64_t pydate_to_dt64(date val, npy_datetimestruct *dts)
+cdef void pydate_to_dtstruct(date val, npy_datetimestruct *dts)
cdef npy_datetime get_datetime64_value(object obj) nogil
cdef npy_timedelta get_timedelta64_value(object obj) nogil
diff --git a/pandas/_libs/tslibs/np_datetime.pyx b/pandas/_libs/tslibs/np_datetime.pyx
index 5ac0e4fa44bee..31cc55ad981bb 100644
--- a/pandas/_libs/tslibs/np_datetime.pyx
+++ b/pandas/_libs/tslibs/np_datetime.pyx
@@ -152,12 +152,16 @@ cdef inline int64_t pydatetime_to_dt64(datetime val,
return dtstruct_to_dt64(dts)
-cdef inline int64_t pydate_to_dt64(date val, npy_datetimestruct *dts):
+cdef inline void pydate_to_dtstruct(date val, npy_datetimestruct *dts):
dts.year = PyDateTime_GET_YEAR(val)
dts.month = PyDateTime_GET_MONTH(val)
dts.day = PyDateTime_GET_DAY(val)
dts.hour = dts.min = dts.sec = dts.us = 0
dts.ps = dts.as = 0
+ return
+
+cdef inline int64_t pydate_to_dt64(date val, npy_datetimestruct *dts):
+ pydate_to_dtstruct(val, dts)
return dtstruct_to_dt64(dts)
diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
index c9c2672c55be0..3d6a9c2310c2f 100644
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -42,7 +42,11 @@ from pandas._libs.tslibs.conversion cimport (
)
from pandas._libs.tslibs.nattype cimport NPY_NAT, c_NaT as NaT
from pandas._libs.tslibs.np_datetime cimport (
- npy_datetimestruct, dtstruct_to_dt64, dt64_to_dtstruct)
+ npy_datetimestruct,
+ dtstruct_to_dt64,
+ dt64_to_dtstruct,
+ pydate_to_dtstruct,
+)
from pandas._libs.tslibs.timezones cimport utc_pytz as UTC
from pandas._libs.tslibs.tzconversion cimport tz_convert_single
@@ -607,7 +611,10 @@ cdef class BaseOffset:
def _get_offset_day(self, datetime other):
# subclass must implement `_day_opt`; calling from the base class
# will raise NotImplementedError.
- return get_day_of_month(other, self._day_opt)
+ cdef:
+ npy_datetimestruct dts
+ pydate_to_dtstruct(other, &dts)
+ return get_day_of_month(&dts, self._day_opt)
def is_on_offset(self, dt) -> bool:
if self.normalize and not _is_normalized(dt):
@@ -1864,10 +1871,11 @@ cdef class YearOffset(SingleConstructorOffset):
def _get_offset_day(self, other) -> int:
# override BaseOffset method to use self.month instead of other.month
- # TODO: there may be a more performant way to do this
- return get_day_of_month(
- other.replace(month=self.month), self._day_opt
- )
+ cdef:
+ npy_datetimestruct dts
+ pydate_to_dtstruct(other, &dts)
+ dts.month = self.month
+ return get_day_of_month(&dts, self._day_opt)
@apply_wraps
def apply(self, other):
@@ -4052,14 +4060,14 @@ def shift_month(stamp: datetime, months: int,
return stamp.replace(year=year, month=month, day=day)
-cdef int get_day_of_month(datetime other, day_opt) except? -1:
+cdef int get_day_of_month(npy_datetimestruct* dts, day_opt) nogil except? -1:
"""
Find the day in `other`'s month that satisfies a DateOffset's is_on_offset
policy, as described by the `day_opt` argument.
Parameters
----------
- other : datetime or Timestamp
+ dts : npy_datetimestruct*
day_opt : {'start', 'end', 'business_start', 'business_end'}
'start': returns 1
'end': returns last day of the month
@@ -4085,20 +4093,20 @@ cdef int get_day_of_month(datetime other, day_opt) except? -1:
if day_opt == 'start':
return 1
elif day_opt == 'end':
- days_in_month = get_days_in_month(other.year, other.month)
+ days_in_month = get_days_in_month(dts.year, dts.month)
return days_in_month
elif day_opt == 'business_start':
# first business day of month
- return get_firstbday(other.year, other.month)
+ return get_firstbday(dts.year, dts.month)
elif day_opt == 'business_end':
# last business day of month
- return get_lastbday(other.year, other.month)
+ return get_lastbday(dts.year, dts.month)
+ elif day_opt is not None:
+ raise ValueError(day_opt)
elif day_opt is None:
# Note: unlike `shift_month`, get_day_of_month does not
# allow day_opt = None
raise NotImplementedError
- else:
- raise ValueError(day_opt)
cpdef int roll_convention(int other, int n, int compare) nogil:
@@ -4151,6 +4159,9 @@ def roll_qtrday(other: datetime, n: int, month: int,
"""
cdef:
int months_since
+ npy_datetimestruct dts
+ pydate_to_dtstruct(other, &dts)
+
# TODO: Merge this with roll_yearday by setting modby=12 there?
# code de-duplication versus perf hit?
# TODO: with small adjustments this could be used in shift_quarters
@@ -4158,14 +4169,14 @@ def roll_qtrday(other: datetime, n: int, month: int,
if n > 0:
if months_since < 0 or (months_since == 0 and
- other.day < get_day_of_month(other,
+ other.day < get_day_of_month(&dts,
day_opt)):
# pretend to roll back if on same month but
# before compare_day
n -= 1
else:
if months_since > 0 or (months_since == 0 and
- other.day > get_day_of_month(other,
+ other.day > get_day_of_month(&dts,
day_opt)):
# make sure to roll forward, so negate
n += 1
@@ -4232,18 +4243,22 @@ def roll_yearday(other: datetime, n: int, month: int, day_opt: object) -> int:
-6
"""
+ cdef:
+ npy_datetimestruct dts
+ pydate_to_dtstruct(other, &dts)
+
# Note: The other.day < ... condition will never hold when day_opt=='start'
# and the other.day > ... condition will never hold when day_opt=='end'.
# At some point these extra checks may need to be optimized away.
# But that point isn't today.
if n > 0:
if other.month < month or (other.month == month and
- other.day < get_day_of_month(other,
+ other.day < get_day_of_month(&dts,
day_opt)):
n -= 1
else:
if other.month > month or (other.month == month and
- other.day > get_day_of_month(other,
+ other.day > get_day_of_month(&dts,
day_opt)):
n += 1
return n
| https://api.github.com/repos/pandas-dev/pandas/pulls/34764 | 2020-06-14T03:08:24Z | 2020-06-14T16:39:03Z | 2020-06-14T16:39:03Z | 2020-06-14T16:53:00Z |
|
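Annotation for the PR above: the dispatch in `get_day_of_month` can be sketched in plain Python. This is an illustrative re-implementation, not the library code; like the `get_firstbday`/`get_lastbday` helpers it mirrors, it considers weekends only and ignores holidays.

```python
import calendar

def get_day_of_month(year: int, month: int, day_opt: str) -> int:
    """Plain-Python sketch of the Cython helper above."""
    n_days = calendar.monthrange(year, month)[1]
    if day_opt == "start":
        return 1
    if day_opt == "end":
        return n_days
    if day_opt == "business_start":
        # first Mon-Fri day of the month
        return next(d for d in range(1, n_days + 1)
                    if calendar.weekday(year, month, d) < 5)
    if day_opt == "business_end":
        # last Mon-Fri day of the month
        return next(d for d in range(n_days, 0, -1)
                    if calendar.weekday(year, month, d) < 5)
    raise ValueError(day_opt)
```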
REF: remove roll_check, use roll_convention | diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
index c9c2672c55be0..ef22a90c24775 100644
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -3736,7 +3736,6 @@ cdef shift_quarters(
npy_datetimestruct dts
int count = len(dtindex)
int months_to_roll, months_since, n, compare_day
- bint roll_check
int64_t[:] out = np.empty(count, dtype="int64")
if day_opt == "start":
@@ -3878,7 +3877,6 @@ def shift_months(const int64_t[:] dtindex, int months, object day_opt=None):
npy_datetimestruct dts
int count = len(dtindex)
int months_to_roll
- bint roll_check
int64_t[:] out = np.empty(count, dtype="int64")
if day_opt is None:
@@ -3895,10 +3893,6 @@ def shift_months(const int64_t[:] dtindex, int months, object day_opt=None):
dts.day = min(dts.day, get_days_in_month(dts.year, dts.month))
out[i] = dtstruct_to_dt64(&dts)
elif day_opt == "start":
- roll_check = False
- if months <= 0:
- months += 1
- roll_check = True
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -3907,11 +3901,12 @@ def shift_months(const int64_t[:] dtindex, int months, object day_opt=None):
dt64_to_dtstruct(dtindex[i], &dts)
months_to_roll = months
+ compare_day = 1
# offset semantics - if on the anchor point and going backwards
# shift to next
- if roll_check and dts.day == 1:
- months_to_roll -= 1
+ months_to_roll = roll_convention(dts.day, months_to_roll,
+ compare_day)
dts.year = year_add_months(dts, months_to_roll)
dts.month = month_add_months(dts, months_to_roll)
@@ -3919,10 +3914,6 @@ def shift_months(const int64_t[:] dtindex, int months, object day_opt=None):
out[i] = dtstruct_to_dt64(&dts)
elif day_opt == "end":
- roll_check = False
- if months > 0:
- months -= 1
- roll_check = True
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -3931,12 +3922,12 @@ def shift_months(const int64_t[:] dtindex, int months, object day_opt=None):
dt64_to_dtstruct(dtindex[i], &dts)
months_to_roll = months
+ compare_day = get_days_in_month(dts.year, dts.month)
# similar semantics - when adding shift forward by one
# month if already at an end of month
- if roll_check and dts.day == get_days_in_month(dts.year,
- dts.month):
- months_to_roll += 1
+ months_to_roll = roll_convention(dts.day, months_to_roll,
+ compare_day)
dts.year = year_add_months(dts, months_to_roll)
dts.month = month_add_months(dts, months_to_roll)
| Avoid bespoke logic for these two cases. This will make it feasible to collapse shift_months down to a single case (following #34762 and the follow-up that makes get_day_of_month nogil). | https://api.github.com/repos/pandas-dev/pandas/pulls/34763 | 2020-06-14T01:49:07Z | 2020-06-14T16:52:40Z | 2020-06-14T16:52:40Z | 2020-06-14T17:01:22Z |
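The `roll_convention` helper that replaces the ad-hoc `roll_check` flags encodes a single rolling rule. A plain-Python sketch of that rule (mirroring the `cpdef` signature visible in the diff):

```python
def roll_convention(other: int, n: int, compare: int) -> int:
    # If shifting forward but currently before the anchor day, the anchor
    # itself counts as the first hit, so one fewer shift is needed; the
    # symmetric adjustment applies when shifting backward (or when n == 0).
    if n > 0 and other < compare:
        n -= 1
    elif n <= 0 and other > compare:
        n += 1
    return n

# e.g. with a "start" anchor (compare == 1): shifting by -1 month from
# day 1 stays at -1, while from day 15 it becomes 0, i.e. roll back to
# this month's start
assert roll_convention(1, -1, 1) == -1
assert roll_convention(15, -1, 1) == 0
```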
CLN: day->day_opt, remove unused case in liboffsets.get_day_of_month | diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
index 4069d192d9e88..75049cabe81d5 100644
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -3712,7 +3712,7 @@ cdef shift_quarters(
const int64_t[:] dtindex,
int quarters,
int q1start_month,
- object day,
+ object day_opt,
int modby=3,
):
"""
@@ -3724,7 +3724,7 @@ cdef shift_quarters(
dtindex : int64_t[:] timestamps for input dates
quarters : int number of quarters to shift
q1start_month : int month in which Q1 begins by convention
- day : {'start', 'end', 'business_start', 'business_end'}
+ day_opt : {'start', 'end', 'business_start', 'business_end'}
modby : int (3 for quarters, 12 for years)
Returns
@@ -3737,9 +3737,9 @@ cdef shift_quarters(
int count = len(dtindex)
int months_to_roll, months_since, n, compare_day
bint roll_check
- int64_t[:] out = np.empty(count, dtype='int64')
+ int64_t[:] out = np.empty(count, dtype="int64")
- if day == 'start':
+ if day_opt == "start":
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -3763,7 +3763,7 @@ cdef shift_quarters(
out[i] = dtstruct_to_dt64(&dts)
- elif day == 'end':
+ elif day_opt == "end":
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -3792,7 +3792,7 @@ cdef shift_quarters(
out[i] = dtstruct_to_dt64(&dts)
- elif day == 'business_start':
+ elif day_opt == "business_start":
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -3823,7 +3823,7 @@ cdef shift_quarters(
out[i] = dtstruct_to_dt64(&dts)
- elif day == 'business_end':
+ elif day_opt == "business_end":
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -3863,12 +3863,12 @@ cdef shift_quarters(
@cython.wraparound(False)
@cython.boundscheck(False)
-def shift_months(const int64_t[:] dtindex, int months, object day=None):
+def shift_months(const int64_t[:] dtindex, int months, object day_opt=None):
"""
Given an int64-based datetime index, shift all elements
specified number of months using DateOffset semantics
- day: {None, 'start', 'end'}
+ day_opt: {None, 'start', 'end', 'business_start', 'business_end'}
* None: day of month
* 'start' 1st day of month
* 'end' last day of month
@@ -3879,9 +3879,9 @@ def shift_months(const int64_t[:] dtindex, int months, object day=None):
int count = len(dtindex)
int months_to_roll
bint roll_check
- int64_t[:] out = np.empty(count, dtype='int64')
+ int64_t[:] out = np.empty(count, dtype="int64")
- if day is None:
+ if day_opt is None:
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -3894,7 +3894,7 @@ def shift_months(const int64_t[:] dtindex, int months, object day=None):
dts.day = min(dts.day, get_days_in_month(dts.year, dts.month))
out[i] = dtstruct_to_dt64(&dts)
- elif day == 'start':
+ elif day_opt == "start":
roll_check = False
if months <= 0:
months += 1
@@ -3918,7 +3918,7 @@ def shift_months(const int64_t[:] dtindex, int months, object day=None):
dts.day = 1
out[i] = dtstruct_to_dt64(&dts)
- elif day == 'end':
+ elif day_opt == "end":
roll_check = False
if months > 0:
months -= 1
@@ -3944,7 +3944,7 @@ def shift_months(const int64_t[:] dtindex, int months, object day=None):
dts.day = get_days_in_month(dts.year, dts.month)
out[i] = dtstruct_to_dt64(&dts)
- elif day == 'business_start':
+ elif day_opt == "business_start":
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -3964,7 +3964,7 @@ def shift_months(const int64_t[:] dtindex, int months, object day=None):
dts.day = get_firstbday(dts.year, dts.month)
out[i] = dtstruct_to_dt64(&dts)
- elif day == 'business_end':
+ elif day_opt == "business_end":
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -4060,13 +4060,11 @@ cdef int get_day_of_month(datetime other, day_opt) except? -1:
Parameters
----------
other : datetime or Timestamp
- day_opt : 'start', 'end', 'business_start', 'business_end', or int
+ day_opt : {'start', 'end', 'business_start', 'business_end'}
'start': returns 1
'end': returns last day of the month
'business_start': returns the first business day of the month
'business_end': returns the last business day of the month
- int: returns the day in the month indicated by `other`, or the last of
- day the month if the value exceeds in that month's number of days.
Returns
-------
@@ -4095,9 +4093,6 @@ cdef int get_day_of_month(datetime other, day_opt) except? -1:
elif day_opt == 'business_end':
# last business day of month
return get_lastbday(other.year, other.month)
- elif is_integer_object(day_opt):
- days_in_month = get_days_in_month(other.year, other.month)
- return min(day_opt, days_in_month)
elif day_opt is None:
# Note: unlike `shift_month`, get_day_of_month does not
# allow day_opt = None
| This unused case turns out to be a blocker to making get_day_of_month nogil; removing it will in turn allow a bunch of de-duplication in this file. | https://api.github.com/repos/pandas-dev/pandas/pulls/34762 | 2020-06-14T01:46:21Z | 2020-06-14T14:21:54Z | 2020-06-14T14:21:54Z | 2020-06-14T15:20:08Z |
REF: implement shift_bday | diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
index 6931360997420..15de4332d9992 100644
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -18,7 +18,7 @@ from dateutil.easter import easter
import numpy as np
cimport numpy as cnp
-from numpy cimport int64_t
+from numpy cimport int64_t, ndarray
cnp.import_array()
# TODO: formalize having _libs.properties "above" tslibs in the dependency structure
@@ -1380,24 +1380,7 @@ cdef class BusinessDay(BusinessMixin):
@apply_index_wraps
def apply_index(self, dtindex):
i8other = dtindex.asi8
- time = (i8other % DAY_NANOS).view("timedelta64[ns]")
-
- # to_period rolls forward to next BDay; track and
- # reduce n where it does when rolling forward
- asper = dtindex.to_period("B")
-
- if self.n > 0:
- shifted = (dtindex.to_perioddelta("B") - time).asi8 != 0
-
- roll = np.where(shifted, self.n - 1, self.n)
- shifted = asper._addsub_int_array(roll, operator.add)
- else:
- # Integer addition is deprecated, so we use _time_shift directly
- roll = self.n
- shifted = asper._time_shift(roll)
-
- result = shifted.to_timestamp() + time
- return result
+ return shift_bdays(i8other, self.n)
def is_on_offset(self, dt) -> bool:
if self.normalize and not _is_normalized(dt):
@@ -3990,6 +3973,63 @@ def shift_months(const int64_t[:] dtindex, int months, object day_opt=None):
return np.asarray(out)
+cdef ndarray[int64_t] shift_bdays(const int64_t[:] i8other, int periods):
+ """
+ Implementation of BusinessDay.apply_offset.
+
+ Parameters
+ ----------
+ i8other : const int64_t[:]
+ periods : int
+
+ Returns
+ -------
+ ndarray[int64_t]
+ """
+ cdef:
+ Py_ssize_t i, n = len(i8other)
+ int64_t[:] result = np.empty(n, dtype="i8")
+ int64_t val, res
+ int wday, nadj, days
+ npy_datetimestruct dts
+
+ for i in range(n):
+ val = i8other[i]
+ if val == NPY_NAT:
+ result[i] = NPY_NAT
+ else:
+ # The rest of this is effectively a copy of BusinessDay.apply
+ nadj = periods
+ weeks = nadj // 5
+ dt64_to_dtstruct(val, &dts)
+ wday = dayofweek(dts.year, dts.month, dts.day)
+
+ if nadj <= 0 and wday > 4:
+ # roll forward
+ nadj += 1
+
+ nadj -= 5 * weeks
+
+ # nadj is always >= 0 at this point
+ if nadj == 0 and wday > 4:
+ # roll back
+ days = 4 - wday
+ elif wday > 4:
+ # roll forward
+ days = (7 - wday) + (nadj - 1)
+ elif wday + nadj <= 4:
+ # shift by n days without leaving the current week
+ days = nadj
+ else:
+ # shift by nadj days plus 2 to get past the weekend
+ days = nadj + 2
+
+ res = val + (7 * weeks + days) * DAY_NANOS
+ result[i] = res
+
+ return result.base
+
+
def shift_month(stamp: datetime, months: int,
day_opt: object=None) -> datetime:
"""
| Avoids depending on DatetimeArray/PeriodArray methods, and also avoids a couple of array allocations:
```
In [2]: dti = pd.date_range("2016-01-01", periods=10**5, freq="S")
In [3]: off = pd.offsets.BDay()
In [4]: %timeit dti + off
24.1 ms ± 202 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) # <-- master
20.1 ms ± 664 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) # <-- PR
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/34761 | 2020-06-14T01:33:30Z | 2020-06-14T22:55:41Z | 2020-06-14T22:55:40Z | 2020-06-14T23:06:50Z |
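The weekday arithmetic inside `shift_bdays` can be transcribed to plain Python for readability. This is an illustrative sketch, not the library code; `datetime.weekday()` uses Monday == 0, matching the `dayofweek` helper in the diff.

```python
from datetime import datetime, timedelta

def shift_bday(dt: datetime, n: int) -> datetime:
    nadj = n
    weeks = nadj // 5            # floor division: negative n rolls backward
    wday = dt.weekday()          # Monday == 0 ... Sunday == 6

    if nadj <= 0 and wday > 4:
        nadj += 1                # roll forward off the weekend
    nadj -= 5 * weeks            # nadj is now >= 0

    if nadj == 0 and wday > 4:
        days = 4 - wday          # roll back to Friday
    elif wday > 4:
        days = (7 - wday) + (nadj - 1)
    elif wday + nadj <= 4:
        days = nadj              # shift stays within the current week
    else:
        days = nadj + 2          # skip one weekend

    return dt + timedelta(days=7 * weeks + days)

assert shift_bday(datetime(2020, 6, 12), 1) == datetime(2020, 6, 15)  # Fri -> Mon
```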
REF: De-duplicate roll_yearday/roll_qtrday | diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
index 6931360997420..b1bd7c6a1461c 100644
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -1879,7 +1879,7 @@ cdef class YearOffset(SingleConstructorOffset):
@apply_wraps
def apply(self, other):
- years = roll_yearday(other, self.n, self.month, self._day_opt)
+ years = roll_qtrday(other, self.n, self.month, self._day_opt, modby=12)
months = years * 12 + (self.month - other.month)
return shift_month(other, months, self._day_opt)
@@ -4153,10 +4153,13 @@ def roll_qtrday(other: datetime, n: int, month: int,
npy_datetimestruct dts
pydate_to_dtstruct(other, &dts)
- # TODO: Merge this with roll_yearday by setting modby=12 there?
- # code de-duplication versus perf hit?
# TODO: with small adjustments this could be used in shift_quarters
- months_since = other.month % modby - month % modby
+
+ if modby == 12:
+ # We care about the month-of-year, not month-of-quarter, so skip mod
+ months_since = other.month - month
+ else:
+ months_since = other.month % modby - month % modby
if n > 0:
if months_since < 0 or (months_since == 0 and
@@ -4172,84 +4175,3 @@ def roll_qtrday(other: datetime, n: int, month: int,
# make sure to roll forward, so negate
n += 1
return n
-
-
-def roll_yearday(other: datetime, n: int, month: int, day_opt: object) -> int:
- """
- Possibly increment or decrement the number of periods to shift
- based on rollforward/rollbackward conventions.
-
- Parameters
- ----------
- other : datetime or Timestamp
- n : number of periods to increment, before adjusting for rolling
- month : reference month giving the first month of the year
- day_opt : 'start', 'end', 'business_start', 'business_end', or int
- The day of the month to compare against that of `other` when
- incrementing or decrementing the number of periods:
-
- 'start': 1
- 'end': last day of the month
- 'business_start': first business day of the month
- 'business_end': last business day of the month
- int: day in the month indicated by `other`, or the last of day
- the month if the value exceeds in that month's number of days.
-
- Returns
- -------
- n : int number of periods to increment
-
- Notes
- -----
- * Mirrors `roll_check` in shift_months
-
- Examples
- -------
- >>> month = 3
- >>> day_opt = 'start' # `other` will be compared to March 1
- >>> other = datetime(2017, 2, 10) # before March 1
- >>> roll_yearday(other, 2, month, day_opt)
- 1
- >>> roll_yearday(other, -7, month, day_opt)
- -7
- >>>
- >>> other = Timestamp('2014-03-15', tz='US/Eastern') # after March 1
- >>> roll_yearday(other, 2, month, day_opt)
- 2
- >>> roll_yearday(other, -7, month, day_opt)
- -6
-
- >>> month = 6
- >>> day_opt = 'end' # `other` will be compared to June 30
- >>> other = datetime(1999, 6, 29) # before June 30
- >>> roll_yearday(other, 5, month, day_opt)
- 4
- >>> roll_yearday(other, -7, month, day_opt)
- -7
- >>>
- >>> other = Timestamp(2072, 8, 24, 6, 17, 18) # after June 30
- >>> roll_yearday(other, 5, month, day_opt)
- 5
- >>> roll_yearday(other, -7, month, day_opt)
- -6
-
- """
- cdef:
- npy_datetimestruct dts
- pydate_to_dtstruct(other, &dts)
-
- # Note: The other.day < ... condition will never hold when day_opt=='start'
- # and the other.day > ... condition will never hold when day_opt=='end'.
- # At some point these extra checks may need to be optimized away.
- # But that point isn't today.
- if n > 0:
- if other.month < month or (other.month == month and
- other.day < get_day_of_month(&dts,
- day_opt)):
- n -= 1
- else:
- if other.month > month or (other.month == month and
- other.day > get_day_of_month(&dts,
- day_opt)):
- n += 1
- return n
diff --git a/pandas/tests/tslibs/test_liboffsets.py b/pandas/tests/tslibs/test_liboffsets.py
index 6ff2ae669c8df..206a604788c7e 100644
--- a/pandas/tests/tslibs/test_liboffsets.py
+++ b/pandas/tests/tslibs/test_liboffsets.py
@@ -88,11 +88,11 @@ def test_shift_month_error():
],
)
@pytest.mark.parametrize("n", [2, -7, 0])
-def test_roll_yearday(other, expected, n):
+def test_roll_qtrday_year(other, expected, n):
month = 3
day_opt = "start" # `other` will be compared to March 1.
- assert liboffsets.roll_yearday(other, n, month, day_opt) == expected[n]
+ assert roll_qtrday(other, n, month, day_opt, modby=12) == expected[n]
@pytest.mark.parametrize(
@@ -105,22 +105,22 @@ def test_roll_yearday(other, expected, n):
],
)
@pytest.mark.parametrize("n", [5, -7, 0])
-def test_roll_yearday2(other, expected, n):
+def test_roll_qtrday_year2(other, expected, n):
month = 6
day_opt = "end" # `other` will be compared to June 30.
- assert liboffsets.roll_yearday(other, n, month, day_opt) == expected[n]
+ assert roll_qtrday(other, n, month, day_opt, modby=12) == expected[n]
def test_get_day_of_month_error():
# get_day_of_month is not directly exposed.
- # We test it via roll_yearday.
+ # We test it via roll_qtrday.
dt = datetime(2017, 11, 15)
day_opt = "foo"
with pytest.raises(ValueError, match=day_opt):
# To hit the raising case we need month == dt.month and n > 0.
- liboffsets.roll_yearday(dt, n=3, month=11, day_opt=day_opt)
+ roll_qtrday(dt, n=3, month=11, day_opt=day_opt, modby=12)
@pytest.mark.parametrize(
| https://api.github.com/repos/pandas-dev/pandas/pulls/34760 | 2020-06-14T00:55:42Z | 2020-06-14T20:47:19Z | 2020-06-14T20:47:19Z | 2020-06-14T21:23:21Z |
|
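The key to folding `roll_yearday` into `roll_qtrday` in the PR above is the `modby` handling visible in the diff; a worked illustration with hypothetical values:

```python
# month-of-period distance as computed in roll_qtrday
other_month, anchor_month = 11, 2

# quarterly offsets (modby=3): November and February occupy the same
# position within their respective quarters, so the distance is 0
assert other_month % 3 - anchor_month % 3 == 0

# yearly offsets (modby=12): the mod is a no-op, so the helper reduces
# to the plain month difference that roll_yearday used to compute
assert other_month - anchor_month == 9
```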
typo: pivot_table -> pivot | diff --git a/doc/source/getting_started/intro_tutorials/07_reshape_table_layout.rst b/doc/source/getting_started/intro_tutorials/07_reshape_table_layout.rst
index a9652969ffc79..c16fec6aaba9f 100644
--- a/doc/source/getting_started/intro_tutorials/07_reshape_table_layout.rst
+++ b/doc/source/getting_started/intro_tutorials/07_reshape_table_layout.rst
@@ -196,7 +196,7 @@ I want the values for the three stations as separate columns next to each other
no2_subset.pivot(columns="location", values="value")
-The :meth:`~pandas.pivot_table` function is purely reshaping of the data: a single value
+The :meth:`~pandas.pivot` function is purely reshaping of the data: a single value
for each index/column combination is required.
.. raw:: html
| I'm new to pandas and reading the docs for the first time.
I believe that the reference in the *pivot* section should be to `pivot`, not `pivot_table`, to match the code in the example.
- [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/34758 | 2020-06-13T23:28:48Z | 2020-06-14T01:29:28Z | 2020-06-14T01:29:28Z | 2020-06-14T01:29:40Z |
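A small sketch of the distinction the corrected sentence draws (illustrative data): `pivot` purely reshapes and therefore rejects duplicate index/column pairs, while `pivot_table` would aggregate them.

```python
import pandas as pd

df = pd.DataFrame({"location": ["A", "A"], "date": [1, 1], "value": [2.0, 3.0]})
try:
    df.pivot(index="date", columns="location", values="value")
except ValueError as err:
    print(err)  # Index contains duplicate entries, cannot reshape

# pivot_table aggregates the duplicates instead (mean by default -> 2.5)
print(df.pivot_table(index="date", columns="location", values="value"))
```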
typo: rows -> columns | diff --git a/doc/source/getting_started/intro_tutorials/01_table_oriented.rst b/doc/source/getting_started/intro_tutorials/01_table_oriented.rst
index 9ee3bfc3b8e79..dc9bec2284aab 100644
--- a/doc/source/getting_started/intro_tutorials/01_table_oriented.rst
+++ b/doc/source/getting_started/intro_tutorials/01_table_oriented.rst
@@ -51,7 +51,7 @@ I want to store passenger data of the Titanic. For a number of passengers, I kno
df
To manually store data in a table, create a ``DataFrame``. When using a Python dictionary of lists, the dictionary keys will be used as column headers and
-the values in each list as rows of the ``DataFrame``.
+the values in each list as columns of the ``DataFrame``.
.. raw:: html
@@ -215,4 +215,4 @@ A more extended explanation to ``DataFrame`` and ``Series`` is provided in the :
.. raw:: html
- </div>
\ No newline at end of file
+ </div>
| I'm new to pandas and reading the docs for the first time. By my reading of the first part of this paragraph, there's a dictionary of lists and the lists are `["Braund...", "Allen...", ...]`, `[22, 35, 58]`, and `["male", "male", "female"]`.
The original sentence says: "... and the values in each list as rows of the DataFrame", but the values in the first list (`["Braund...]`) are the values in the first **column** of the illustration.
Likewise, the list `[22,...]` is the second column and `["male",...]` is the third.
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/34757 | 2020-06-13T20:32:47Z | 2020-06-14T01:30:27Z | 2020-06-14T01:30:27Z | 2020-06-14T13:56:04Z |
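A quick sketch of the corrected claim, with illustrative data: each dictionary value becomes a column, not a row.

```python
import pandas as pd

df = pd.DataFrame({"Name": ["Braund", "Allen"], "Age": [22, 35]})
print(df)
#      Name  Age
# 0  Braund   22
# 1   Allen   35
```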
BUG: DataFrameGroupBy.quantile raises for non-numeric dtypes rather than dropping columns | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index de3a05a2ccdfb..31d2d29c71386 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -1122,6 +1122,7 @@ Groupby/resample/rolling
- Bug in :meth:`DataFrame.groupby` lost index, when one of the ``agg`` keys referenced an empty list (:issue:`32580`)
- Bug in :meth:`Rolling.apply` where ``center=True`` was ignored when ``engine='numba'`` was specified (:issue:`34784`)
- Bug in :meth:`DataFrame.ewm.cov` was throwing ``AssertionError`` for :class:`MultiIndex` inputs (:issue:`34440`)
+- Bug in :meth:`core.groupby.DataFrameGroupBy.quantile` raises ``TypeError`` for non-numeric types rather than dropping columns (:issue:`27892`)
- Bug in :meth:`core.groupby.DataFrameGroupBy.transform` when ``func='nunique'`` and columns are of type ``datetime64``, the result would also be of type ``datetime64`` instead of ``int64`` (:issue:`35109`)
- Bug in :meth:'DataFrameGroupBy.first' and :meth:'DataFrameGroupBy.last' that would raise an unnecessary ``ValueError`` when grouping on multiple ``Categoricals`` (:issue:`34951`)
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 65483abbd2a6e..ac45222625569 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -2403,7 +2403,7 @@ def _get_cythonized_result(
signature
needs_2d : bool, default False
Whether the values and result of the Cython call signature
- are at least 2-dimensional.
+ are 2-dimensional.
min_count : int, default None
When not None, min_count for the Cython call
needs_mask : bool, default False
@@ -2419,7 +2419,9 @@ def _get_cythonized_result(
Function should return a tuple where the first element is the
values to be passed to Cython and the second element is an optional
type which the values should be converted to after being returned
- by the Cython operation. Raises if `needs_values` is False.
+ by the Cython operation. This function is also responsible for
+ raising a TypeError if the values have an invalid type. Raises
+ if `needs_values` is False.
post_processing : function, default None
Function to be applied to result of Cython function. Should accept
an array of values as the first argument and type inferences as its
@@ -2451,6 +2453,7 @@ def _get_cythonized_result(
output: Dict[base.OutputKey, np.ndarray] = {}
base_func = getattr(libgroupby, how)
+ error_msg = ""
for idx, obj in enumerate(self._iterate_slices()):
name = obj.name
values = obj._values
@@ -2477,7 +2480,11 @@ def _get_cythonized_result(
if needs_values:
vals = values
if pre_processing:
- vals, inferences = pre_processing(vals)
+ try:
+ vals, inferences = pre_processing(vals)
+ except TypeError as e:
+ error_msg = str(e)
+ continue
if needs_2d:
vals = vals.reshape((-1, 1))
vals = vals.astype(cython_dtype, copy=False)
@@ -2509,6 +2516,10 @@ def _get_cythonized_result(
key = base.OutputKey(label=name, position=idx)
output[key] = result
+ # error_msg is "" on an frame/series with no rows or columns
+ if len(output) == 0 and error_msg != "":
+ raise TypeError(error_msg)
+
if aggregate:
return self._wrap_aggregated_output(output)
else:
diff --git a/pandas/tests/groupby/test_quantile.py b/pandas/tests/groupby/test_quantile.py
index 8cfd8035502c3..9338742195bfe 100644
--- a/pandas/tests/groupby/test_quantile.py
+++ b/pandas/tests/groupby/test_quantile.py
@@ -232,3 +232,11 @@ def test_groupby_quantile_nullable_array(values, q):
expected = pd.Series(true_quantiles * 2, index=idx, name="b")
tm.assert_series_equal(result, expected)
+
+
+@pytest.mark.parametrize("q", [0.5, [0.0, 0.5, 1.0]])
+def test_groupby_quantile_skips_invalid_dtype(q):
+ df = pd.DataFrame({"a": [1], "b": [2.0], "c": ["x"]})
+ result = df.groupby("a").quantile(q)
+ expected = df.groupby("a")[["b"]].quantile(q)
+ tm.assert_frame_equal(result, expected)
| - [x] closes #27892
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
Unlike what is mentioned in #27892, this will raise if there are no columns to aggregate. Both mean and median raise with "No numeric types to aggregate" in such a case, so I was thinking perhaps we should be consistent with them. Any thoughts @WillAyd and @TomAugspurger?
| https://api.github.com/repos/pandas-dev/pandas/pulls/34756 | 2020-06-13T20:17:17Z | 2020-07-16T22:48:30Z | 2020-07-16T22:48:30Z | 2020-10-11T13:22:14Z |
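Usage sketch of the fixed behavior (assuming a pandas build containing this patch; data is illustrative):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 1], "b": [2.0, 3.0], "c": ["x", "y"]})

# the object column "c" is now dropped rather than raising TypeError
print(df.groupby("a").quantile(0.5))
#        b
# a
# 1    2.5

# with *only* invalid columns, the saved error message is re-raised
# df.groupby("a")[["c"]].quantile(0.5)  # -> TypeError
```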
TST: Period with Timestamp overflow | diff --git a/pandas/tests/scalar/period/test_period.py b/pandas/tests/scalar/period/test_period.py
index 795021a260028..5006e16b6a7e0 100644
--- a/pandas/tests/scalar/period/test_period.py
+++ b/pandas/tests/scalar/period/test_period.py
@@ -6,6 +6,7 @@
from pandas._libs.tslibs import iNaT, period as libperiod
from pandas._libs.tslibs.ccalendar import DAYS, MONTHS
+from pandas._libs.tslibs.np_datetime import OutOfBoundsDatetime
from pandas._libs.tslibs.parsing import DateParseError
from pandas._libs.tslibs.period import INVALID_FREQ_ERR_MSG, IncompatibleFrequency
from pandas._libs.tslibs.timezones import dateutil_gettz, maybe_get_tz
@@ -776,6 +777,35 @@ def test_period_deprecated_freq(self):
assert isinstance(p1, Period)
assert isinstance(p2, Period)
+ def _period_constructor(bound, offset):
+ return Period(
+ year=bound.year,
+ month=bound.month,
+ day=bound.day,
+ hour=bound.hour,
+ minute=bound.minute,
+ second=bound.second + offset,
+ freq="us",
+ )
+
+ @pytest.mark.parametrize("bound, offset", [(Timestamp.min, -1), (Timestamp.max, 1)])
+ @pytest.mark.parametrize("period_property", ["start_time", "end_time"])
+ def test_outter_bounds_start_and_end_time(self, bound, offset, period_property):
+ # GH #13346
+ period = TestPeriodProperties._period_constructor(bound, offset)
+ with pytest.raises(OutOfBoundsDatetime, match="Out of bounds nanosecond"):
+ getattr(period, period_property)
+
+ @pytest.mark.parametrize("bound, offset", [(Timestamp.min, -1), (Timestamp.max, 1)])
+ @pytest.mark.parametrize("period_property", ["start_time", "end_time"])
+ def test_inner_bounds_start_and_end_time(self, bound, offset, period_property):
+ # GH #13346
+ period = TestPeriodProperties._period_constructor(bound, -offset)
+ expected = period.to_timestamp().round(freq="S")
+ assert getattr(period, period_property).round(freq="S") == expected
+ expected = (bound - offset * Timedelta(1, unit="S")).floor("S")
+ assert getattr(period, period_property).floor("S") == expected
+
def test_start_time(self):
freq_lst = ["A", "Q", "M", "D", "H", "T", "S"]
xp = datetime(2012, 1, 1)
| - [x] closes #13346
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
One part of the test checks that a period inside the boundaries does not raise an exception when calling the `start_time` and `end_time` methods; the other part checks that these methods raise exceptions when the period falls outside the boundaries.
The parameters are `Timestamp.min` and `Timestamp.max`, so the tests will track any future change to the boundaries.
| https://api.github.com/repos/pandas-dev/pandas/pulls/34755 | 2020-06-13T19:04:44Z | 2020-10-05T20:52:52Z | 2020-10-05T20:52:52Z | 2021-01-02T08:31:59Z |
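The scenario these tests exercise, as a standalone sketch (the construction mirrors `_period_constructor` above; the exact bounds come from `Timestamp.min`/`Timestamp.max`):

```python
import pandas as pd

bound = pd.Timestamp.max  # 2262-04-11 23:47:16.854775807
p = pd.Period(year=bound.year, month=bound.month, day=bound.day,
              hour=bound.hour, minute=bound.minute,
              second=bound.second + 1, freq="us")

# the Period itself is representable, but converting its start/end to a
# Timestamp overflows the nanosecond range
try:
    p.start_time
except pd.errors.OutOfBoundsDatetime as err:
    print(err)  # Out of bounds nanosecond timestamp: ...
```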
REF: refactor NDFrame.interpolate to avoid dispatching to fillna | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 6183638ab587e..823a0a6a35f9e 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -6888,42 +6888,33 @@ def interpolate(
inplace = validate_bool_kwarg(inplace, "inplace")
axis = self._get_axis_number(axis)
- index = self._get_axis(axis)
- if isinstance(self.index, MultiIndex) and method != "linear":
+ fillna_methods = ["ffill", "bfill", "pad", "backfill"]
+ should_transpose = axis == 1 and method not in fillna_methods
+
+ obj = self.T if should_transpose else self
+
+ if method not in fillna_methods:
+ axis = self._info_axis_number
+
+ if isinstance(obj.index, MultiIndex) and method != "linear":
raise ValueError(
"Only `method=linear` interpolation is supported on MultiIndexes."
)
- # for the methods backfill, bfill, pad, ffill limit_direction and limit_area
- # are being ignored, see gh-26796 for more information
- if method in ["backfill", "bfill", "pad", "ffill"]:
- return self.fillna(
- method=method,
- axis=axis,
- inplace=inplace,
- limit=limit,
- downcast=downcast,
- )
-
- # Currently we need this to call the axis correctly inside the various
- # interpolation methods
- if axis == 0:
- df = self
- else:
- df = self.T
-
- if self.ndim == 2 and np.all(self.dtypes == np.dtype(object)):
+ if obj.ndim == 2 and np.all(obj.dtypes == np.dtype(object)):
raise TypeError(
"Cannot interpolate with all object-dtype columns "
"in the DataFrame. Try setting at least one "
"column to a numeric dtype."
)
+ # create/use the index
if method == "linear":
# prior default
- index = np.arange(len(df.index))
+ index = np.arange(len(obj.index))
else:
+ index = obj.index
methods = {"index", "values", "nearest", "time"}
is_numeric_or_datetime = (
is_numeric_dtype(index.dtype)
@@ -6944,10 +6935,9 @@ def interpolate(
"has not been implemented. Try filling "
"those NaNs before interpolating."
)
- data = df._mgr
- new_data = data.interpolate(
+ new_data = obj._mgr.interpolate(
method=method,
- axis=self._info_axis_number,
+ axis=axis,
index=index,
limit=limit,
limit_direction=limit_direction,
@@ -6958,7 +6948,7 @@ def interpolate(
)
result = self._constructor(new_data)
- if axis == 1:
+ if should_transpose:
result = result.T
if inplace:
return self._update_inplace(result)
| xref https://github.com/pandas-dev/pandas/pull/31048#issuecomment-643658604, #33959 | https://api.github.com/repos/pandas-dev/pandas/pulls/34752 | 2020-06-13T18:41:21Z | 2020-06-14T14:31:07Z | 2020-06-14T14:31:06Z | 2020-06-14T15:25:57Z |
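One way to see the `should_transpose` invariant in the refactor above (an illustrative check, not library code):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([[1.0, np.nan, 3.0]])

# interpolation along axis=1 is implemented as: transpose, interpolate
# along axis=0, transpose back (except for the fillna-style methods)
assert df.interpolate(axis=1).equals(df.T.interpolate(axis=0).T)
```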
Bump up minimum numpy version in windows37 job | diff --git a/ci/azure/windows.yml b/ci/azure/windows.yml
index 187a5db99802f..87f1bfd2adb79 100644
--- a/ci/azure/windows.yml
+++ b/ci/azure/windows.yml
@@ -13,7 +13,7 @@ jobs:
CONDA_PY: "36"
PATTERN: "not slow and not network"
- py37_np141:
+ py37_np18:
ENV_FILE: ci/deps/azure-windows-37.yaml
CONDA_PY: "37"
PATTERN: "not slow and not network"
diff --git a/ci/deps/azure-windows-37.yaml b/ci/deps/azure-windows-37.yaml
index e491fd57b240b..889d5c1bcfcdd 100644
--- a/ci/deps/azure-windows-37.yaml
+++ b/ci/deps/azure-windows-37.yaml
@@ -22,7 +22,7 @@ dependencies:
- matplotlib=2.2.*
- moto
- numexpr
- - numpy=1.14.*
+ - numpy=1.18.*
- openpyxl
- pyarrow=0.14
- pytables
| - [x] closes #34724
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/34750 | 2020-06-13T17:14:27Z | 2020-06-14T14:29:35Z | 2020-06-14T14:29:35Z | 2020-06-14T16:04:55Z |
API: validate `limit_direction` parameter of NDFrame.interpolate | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index e680c2db55a43..a1be07f3fc386 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -406,6 +406,7 @@ Backwards incompatible API changes
(previously raised a ``NotImplementedError``), while passing in keyword ``encoding`` now raises a ``TypeError`` (:issue:`34464`)
- :func: `merge` now checks ``suffixes`` parameter type to be ``tuple`` and raises ``TypeError``, whereas before a ``list`` or ``set`` were accepted and that the ``set`` could produce unexpected results (:issue:`33740`)
- :class:`Period` no longer accepts tuples for the ``freq`` argument (:issue:`34658`)
+- :meth:`Series.interpolate` and :meth:`DataFrame.interpolate` now raises ValueError if ``limit_direction`` is 'forward' or 'both' and ``method`` is 'backfill' or 'bfill' or ``limit_direction`` is 'backward' or 'both' and ``method`` is 'pad' or 'ffill' (:issue:`34746`)
``MultiIndex.get_indexer`` interprets `method` argument differently
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index c340460857b9f..bad61a440b8c5 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -6728,9 +6728,24 @@ def replace(
0.
inplace : bool, default False
Update the data in place if possible.
- limit_direction : {'forward', 'backward', 'both'}, default 'forward'
- If limit is specified, consecutive NaNs will be filled in this
- direction.
+ limit_direction : {'forward', 'backward', 'both'}, Optional
+ Consecutive NaNs will be filled in this direction.
+
+ If limit is specified:
+ * If 'method' is 'pad' or 'ffill', 'limit_direction' must be 'forward'.
+ * If 'method' is 'backfill' or 'bfill', 'limit_direction' must be
+ 'backwards'.
+
+ If 'limit' is not specified:
+ * If 'method' is 'backfill' or 'bfill', the default is 'backward'
+ * else the default is 'forward'
+
+ .. versionchanged:: 1.1.0
+ raises ValueError if `limit_direction` is 'forward' or 'both' and
+ method is 'backfill' or 'bfill'.
+ raises ValueError if `limit_direction` is 'backward' or 'both' and
+ method is 'pad' or 'ffill'.
+
limit_area : {`None`, 'inside', 'outside'}, default None
If limit is specified, consecutive NaNs will be filled with this
restriction.
@@ -6881,7 +6896,7 @@ def interpolate(
axis: Axis = 0,
limit: Optional[int] = None,
inplace: bool_t = False,
- limit_direction: str = "forward",
+ limit_direction: Optional[str] = None,
limit_area: Optional[str] = None,
downcast: Optional[str] = None,
**kwargs,
@@ -6906,6 +6921,21 @@ def interpolate(
"Only `method=linear` interpolation is supported on MultiIndexes."
)
+ # Set `limit_direction` depending on `method`
+ if limit_direction is None:
+ limit_direction = (
+ "backward" if method in ("backfill", "bfill") else "forward"
+ )
+ else:
+ if method in ("pad", "ffill") and limit_direction != "forward":
+ raise ValueError(
+ f"`limit_direction` must be 'forward' for method `{method}`"
+ )
+ if method in ("backfill", "bfill") and limit_direction != "backward":
+ raise ValueError(
+ f"`limit_direction` must be 'backward' for method `{method}`"
+ )
+
if obj.ndim == 2 and np.all(obj.dtypes == np.dtype(object)):
raise TypeError(
"Cannot interpolate with all object-dtype columns "
diff --git a/pandas/tests/series/methods/test_interpolate.py b/pandas/tests/series/methods/test_interpolate.py
index db1c07e1bd276..c4b10e0ccdc3e 100644
--- a/pandas/tests/series/methods/test_interpolate.py
+++ b/pandas/tests/series/methods/test_interpolate.py
@@ -429,6 +429,27 @@ def test_interp_limit_area(self):
with pytest.raises(ValueError, match=msg):
s.interpolate(method="linear", limit_area="abc")
+ @pytest.mark.parametrize(
+ "method, limit_direction, expected",
+ [
+ ("pad", "backward", "forward"),
+ ("ffill", "backward", "forward"),
+ ("backfill", "forward", "backward"),
+ ("bfill", "forward", "backward"),
+ ("pad", "both", "forward"),
+ ("ffill", "both", "forward"),
+ ("backfill", "both", "backward"),
+ ("bfill", "both", "backward"),
+ ],
+ )
+ def test_interp_limit_direction_raises(self, method, limit_direction, expected):
+ # https://github.com/pandas-dev/pandas/pull/34746
+ s = Series([1, 2, 3])
+
+ msg = f"`limit_direction` must be '{expected}' for method `{method}`"
+ with pytest.raises(ValueError, match=msg):
+ s.interpolate(method=method, limit_direction=limit_direction)
+
def test_interp_limit_direction(self):
# These tests are for issue #9218 -- fill NaNs in both directions.
s = Series([1, 3, np.nan, np.nan, np.nan, 11])
| broken off #31048
| https://api.github.com/repos/pandas-dev/pandas/pulls/34746 | 2020-06-13T13:14:10Z | 2020-06-14T18:06:41Z | 2020-06-14T18:06:41Z | 2020-06-14T18:43:12Z |
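Behavior sketch after this change (assuming a pandas build containing the patch):

```python
import pandas as pd

s = pd.Series([1.0, None, 3.0])

# an inconsistent combination now raises instead of being silently ignored
try:
    s.interpolate(method="pad", limit_direction="backward")
except ValueError as err:
    print(err)  # `limit_direction` must be 'forward' for method `pad`

# when limit_direction is omitted, the default now follows the method
s.interpolate(method="bfill")  # implies limit_direction="backward"
```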
DOC: updated strings.py for SS06 errors | diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index b27ad744dbdba..a1db7742916de 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -570,9 +570,9 @@ def str_endswith(arr, pat, na=np.nan):
def str_replace(arr, pat, repl, n=-1, case=None, flags=0, regex=True):
r"""
- Replace occurrences of pattern/regex in the Series/Index with
- some other string. Equivalent to :meth:`str.replace` or
- :func:`re.sub`, depending on the regex value.
+ Replace each occurrence of pattern/regex in the Series/Index.
+
+ Equivalent to :meth:`str.replace` or :func:`re.sub`, depending on the regex value.
Parameters
----------
@@ -1063,6 +1063,8 @@ def str_extract(arr, pat, flags=0, expand=True):
def str_extractall(arr, pat, flags=0):
r"""
+ Extract capture groups in the regex `pat` as columns in DataFrame.
+
For each subject string in the Series, extract groups from all
matches of regular expression pat. When each subject string in the
Series has exactly one match, extractall(pat).xs(0, level='match')
@@ -1174,7 +1176,9 @@ def str_extractall(arr, pat, flags=0):
def str_get_dummies(arr, sep="|"):
"""
- Split each string in the Series by sep and return a DataFrame
+ Return DataFrame of dummy/indicator variables for Series.
+
+ Each string in Series is split by sep and returned as a DataFrame
of dummy/indicator variables.
Parameters
@@ -1743,8 +1747,7 @@ def str_strip(arr, to_strip=None, side="both"):
def str_wrap(arr, width, **kwargs):
r"""
- Wrap long strings in the Series/Index to be formatted in
- paragraphs with length less than a given width.
+ Wrap strings in Series/Index at specified line width.
This method has the same keyword parameters and defaults as
:class:`textwrap.TextWrapper`.
@@ -1807,6 +1810,7 @@ def str_wrap(arr, width, **kwargs):
def str_translate(arr, table):
"""
Map all characters in the string through the given mapping table.
+
Equivalent to standard :meth:`str.translate`.
Parameters
@@ -1889,6 +1893,7 @@ def f(x):
def str_decode(arr, encoding, errors="strict"):
"""
Decode character string in the Series/Index using indicated encoding.
+
Equivalent to :meth:`str.decode` in python2 and :meth:`bytes.decode` in
python3.
@@ -1913,6 +1918,7 @@ def str_decode(arr, encoding, errors="strict"):
def str_encode(arr, encoding, errors="strict"):
"""
Encode character string in the Series/Index using indicated encoding.
+
Equivalent to :meth:`str.encode`.
Parameters
@@ -2068,9 +2074,11 @@ def do_copy(target):
class StringMethods(NoNewAttributesMixin):
"""
- Vectorized string functions for Series and Index. NAs stay NA unless
- handled otherwise by a particular method. Patterned after Python's string
- methods, with some inspiration from R's stringr package.
+ Vectorized string functions for Series and Index.
+
+ NAs stay NA unless handled otherwise by a particular method.
+ Patterned after Python's string methods, with some inspiration from
+ R's stringr package.
Examples
--------
@@ -2853,8 +2861,9 @@ def pad(self, width, side="left", fillchar=" "):
_shared_docs[
"str_pad"
] = """
- Filling %(side)s side of strings in the Series/Index with an
- additional character. Equivalent to :meth:`str.%(method)s`.
+ Pad %(side)s side of strings in the Series/Index.
+
+ Equivalent to :meth:`str.%(method)s`.
Parameters
----------
@@ -3117,9 +3126,11 @@ def extractall(self, pat, flags=0):
_shared_docs[
"find"
] = """
- Return %(side)s indexes in each strings in the Series/Index
- where the substring is fully contained between [start:end].
- Return -1 on failure. Equivalent to standard :meth:`str.%(method)s`.
+ Return %(side)s indexes in each strings in the Series/Index.
+
+ Each of returned indexes corresponds to the position where the
+ substring is fully contained between [start:end]. Return -1 on
+ failure. Equivalent to standard :meth:`str.%(method)s`.
Parameters
----------
@@ -3169,6 +3180,7 @@ def rfind(self, sub, start=0, end=None):
def normalize(self, form):
"""
Return the Unicode normal form for the strings in the Series/Index.
+
For more information on the forms, see the
:func:`unicodedata.normalize`.
@@ -3190,10 +3202,13 @@ def normalize(self, form):
_shared_docs[
"index"
] = """
- Return %(side)s indexes in each strings where the substring is
- fully contained between [start:end]. This is the same as
- ``str.%(similar)s`` except instead of returning -1, it raises a ValueError
- when the substring is not found. Equivalent to standard ``str.%(method)s``.
+ Return %(side)s indexes in each string in Series/Index.
+
+ Each of the returned indexes corresponds to the position where the
+ substring is fully contained between [start:end]. This is the same
+ as ``str.%(similar)s`` except instead of returning -1, it raises a
+ ValueError when the substring is not found. Equivalent to standard
+ ``str.%(method)s``.
Parameters
----------
@@ -3244,8 +3259,9 @@ def rindex(self, sub, start=0, end=None):
_shared_docs[
"len"
] = """
- Compute the length of each element in the Series/Index. The element may be
- a sequence (such as a string, tuple or list) or a collection
+ Compute the length of each element in the Series/Index.
+
+ The element may be a sequence (such as a string, tuple or list) or a collection
(such as a dictionary).
Returns
| https://api.github.com/repos/pandas-dev/pandas/pulls/34745 | 2020-06-13T12:43:38Z | 2020-06-13T17:04:13Z | 2020-06-13T17:04:13Z | 2020-06-13T17:04:18Z |
|
CLN: clean and deduplicate in core.missing.interpolate_1d | diff --git a/pandas/core/missing.py b/pandas/core/missing.py
index d8671616f944e..7802c5cbdbfb3 100644
--- a/pandas/core/missing.py
+++ b/pandas/core/missing.py
@@ -94,30 +94,37 @@ def clean_fill_method(method, allow_nearest=False):
return method
+# interpolation methods that dispatch to np.interp
+
+NP_METHODS = ["linear", "time", "index", "values"]
+
+# interpolation methods that dispatch to _interpolate_scipy_wrapper
+
+SP_METHODS = [
+ "nearest",
+ "zero",
+ "slinear",
+ "quadratic",
+ "cubic",
+ "barycentric",
+ "krogh",
+ "spline",
+ "polynomial",
+ "from_derivatives",
+ "piecewise_polynomial",
+ "pchip",
+ "akima",
+ "cubicspline",
+]
+
+
def clean_interp_method(method: str, **kwargs) -> str:
order = kwargs.get("order")
- valid = [
- "linear",
- "time",
- "index",
- "values",
- "nearest",
- "zero",
- "slinear",
- "quadratic",
- "cubic",
- "barycentric",
- "polynomial",
- "krogh",
- "piecewise_polynomial",
- "pchip",
- "akima",
- "spline",
- "from_derivatives",
- "cubicspline",
- ]
+
if method in ("spline", "polynomial") and order is None:
raise ValueError("You must specify the order of the spline or polynomial.")
+
+ valid = NP_METHODS + SP_METHODS
if method not in valid:
raise ValueError(f"method must be one of {valid}. Got '{method}' instead.")
@@ -180,8 +187,6 @@ def interpolate_1d(
Bounds_error is currently hardcoded to False since non-scipy ones don't
take it as an argument.
"""
- # Treat the original, non-scipy methods first.
-
invalid = isna(yvalues)
valid = ~invalid
@@ -261,50 +266,32 @@ def interpolate_1d(
# sort preserve_nans and covert to list
preserve_nans = sorted(preserve_nans)
- xvalues = getattr(xvalues, "values", xvalues)
yvalues = getattr(yvalues, "values", yvalues)
result = yvalues.copy()
- if method in ["linear", "time", "index", "values"]:
+ # xvalues to pass to NumPy/SciPy
+
+ xvalues = getattr(xvalues, "values", xvalues)
+ if method == "linear":
+ inds = xvalues
+ else:
+ inds = np.asarray(xvalues)
+
+ # hack for DatetimeIndex, #1646
+ if needs_i8_conversion(inds.dtype):
+ inds = inds.view(np.int64)
+
if method in ("values", "index"):
- inds = np.asarray(xvalues)
- # hack for DatetimeIndex, #1646
- if needs_i8_conversion(inds.dtype):
- inds = inds.view(np.int64)
if inds.dtype == np.object_:
inds = lib.maybe_convert_objects(inds)
- else:
- inds = xvalues
+
+ if method in NP_METHODS:
# np.interp requires sorted X values, #21037
indexer = np.argsort(inds[valid])
result[invalid] = np.interp(
inds[invalid], inds[valid][indexer], yvalues[valid][indexer]
)
- result[preserve_nans] = np.nan
- return result
-
- sp_methods = [
- "nearest",
- "zero",
- "slinear",
- "quadratic",
- "cubic",
- "barycentric",
- "krogh",
- "spline",
- "polynomial",
- "from_derivatives",
- "piecewise_polynomial",
- "pchip",
- "akima",
- "cubicspline",
- ]
-
- if method in sp_methods:
- inds = np.asarray(xvalues)
- # hack for DatetimeIndex, #1646
- if issubclass(inds.dtype.type, np.datetime64):
- inds = inds.view(np.int64)
+ else:
result[invalid] = _interpolate_scipy_wrapper(
inds[valid],
yvalues[valid],
@@ -315,8 +302,9 @@ def interpolate_1d(
order=order,
**kwargs,
)
- result[preserve_nans] = np.nan
- return result
+
+ result[preserve_nans] = np.nan
+ return result
def _interpolate_scipy_wrapper(
| broken-off #34728 | https://api.github.com/repos/pandas-dev/pandas/pulls/34744 | 2020-06-13T10:48:26Z | 2020-06-13T20:12:53Z | 2020-06-13T20:12:53Z | 2020-06-13T20:34:32Z |
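The shape of the deduplicated dispatch, restated as a standalone illustrative sketch (the real `interpolate_1d` also handles the `limit_*` bookkeeping, and delegates the `SP_METHODS` branch to `_interpolate_scipy_wrapper`, which is omitted here to keep the sketch dependency-free):

```python
import numpy as np

NP_METHODS = ["linear", "time", "index", "values"]

def interp_1d_sketch(inds, yvalues, method="linear"):
    """Illustrative core of the NumPy branch after the deduplication."""
    invalid = np.isnan(yvalues)
    valid = ~invalid
    result = yvalues.copy()
    if method in NP_METHODS:
        # np.interp requires sorted x values (#21037)
        order = np.argsort(inds[valid])
        result[invalid] = np.interp(
            inds[invalid], inds[valid][order], yvalues[valid][order]
        )
    else:
        # SP_METHODS go through _interpolate_scipy_wrapper in the real code
        raise NotImplementedError(method)
    return result

x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, np.nan, 4.0])
print(interp_1d_sketch(x, y))  # [0. 2. 4.]
```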
CLN: make Info and DataFrameInfo subclasses | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index d12ebeafe8510..5134ddcf1cc67 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -140,7 +140,7 @@
from pandas.io.common import get_filepath_or_buffer
from pandas.io.formats import console, format as fmt
-from pandas.io.formats.info import info
+from pandas.io.formats.info import DataFrameInfo
import pandas.plotting
if TYPE_CHECKING:
@@ -2459,11 +2459,11 @@ def to_html(
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5 entries, 0 to 4
Data columns (total 3 columns):
- # Column Non-Null Count Dtype
+ # Column Non-Null Count Dtype
--- ------ -------------- -----
- 0 int_col 5 non-null int64
- 1 text_col 5 non-null object
- 2 float_col 5 non-null float64
+ 0 int_col 5 non-null int64
+ 1 text_col 5 non-null object
+ 2 float_col 5 non-null float64
dtypes: float64(1), int64(1), object(1)
memory usage: 248.0+ bytes
@@ -2502,11 +2502,11 @@ def to_html(
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1000000 entries, 0 to 999999
Data columns (total 3 columns):
- # Column Non-Null Count Dtype
+ # Column Non-Null Count Dtype
--- ------ -------------- -----
- 0 column_1 1000000 non-null object
- 1 column_2 1000000 non-null object
- 2 column_3 1000000 non-null object
+ 0 column_1 1000000 non-null object
+ 1 column_2 1000000 non-null object
+ 2 column_3 1000000 non-null object
dtypes: object(3)
memory usage: 22.9+ MB
@@ -2514,11 +2514,11 @@ def to_html(
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1000000 entries, 0 to 999999
Data columns (total 3 columns):
- # Column Non-Null Count Dtype
+ # Column Non-Null Count Dtype
--- ------ -------------- -----
- 0 column_1 1000000 non-null object
- 1 column_2 1000000 non-null object
- 2 column_3 1000000 non-null object
+ 0 column_1 1000000 non-null object
+ 1 column_2 1000000 non-null object
+ 2 column_3 1000000 non-null object
dtypes: object(3)
memory usage: 188.8 MB"""
),
@@ -2529,7 +2529,7 @@ def to_html(
DataFrame.memory_usage: Memory usage of DataFrame columns."""
),
)
- @doc(info)
+ @doc(DataFrameInfo.info)
def info(
self,
verbose: Optional[bool] = None,
@@ -2538,7 +2538,9 @@ def info(
memory_usage: Optional[Union[bool, str]] = None,
null_counts: Optional[bool] = None,
) -> None:
- return info(self, verbose, buf, max_cols, memory_usage, null_counts)
+ return DataFrameInfo(
+ self, verbose, buf, max_cols, memory_usage, null_counts
+ ).info()
def memory_usage(self, index=True, deep=False) -> Series:
"""
diff --git a/pandas/io/formats/info.py b/pandas/io/formats/info.py
index b1dcafa7a7a8f..7a53b46a4ac0f 100644
--- a/pandas/io/formats/info.py
+++ b/pandas/io/formats/info.py
@@ -1,15 +1,17 @@
+from abc import ABCMeta, abstractmethod
import sys
-from typing import IO, TYPE_CHECKING, Optional, Tuple, Union
+from typing import IO, TYPE_CHECKING, List, Optional, Tuple, Union
from pandas._config import get_option
from pandas._typing import Dtype, FrameOrSeries
+from pandas.core.indexes.api import Index
+
from pandas.io.formats import format as fmt
from pandas.io.formats.printing import pprint_thing
if TYPE_CHECKING:
- from pandas.core.indexes.api import Index # noqa: F401
from pandas.core.series import Series # noqa: F401
@@ -39,115 +41,247 @@ def _put_str(s: Union[str, Dtype], space: int) -> str:
return str(s)[:space].ljust(space)
-def _get_ids_and_dtypes(data: FrameOrSeries) -> Tuple["Index", "Series"]:
+def _sizeof_fmt(num: Union[int, float], size_qualifier: str) -> str:
"""
- Get DataFrame's columns and dtypes.
+ Return size in human readable format.
Parameters
----------
- data : DataFrame
- Object that `info` was called on.
+ num : int
+ Size in bytes.
+ size_qualifier : str
+ Either empty, or '+' (if lower bound).
Returns
-------
- ids : Index
- DataFrame's columns.
- dtypes : Series
- Dtype of each of the DataFrame's columns.
- """
- ids = data.columns
- dtypes = data.dtypes
- return ids, dtypes
-
-
-def info(
- data: FrameOrSeries,
- verbose: Optional[bool] = None,
- buf: Optional[IO[str]] = None,
- max_cols: Optional[int] = None,
- memory_usage: Optional[Union[bool, str]] = None,
- null_counts: Optional[bool] = None,
-) -> None:
- """
- Print a concise summary of a %(klass)s.
-
- This method prints information about a %(klass)s including
- the index dtype%(type_sub)s, non-null values and memory usage.
-
- Parameters
- ----------
- data : %(klass)s
- %(klass)s to print information about.
- verbose : bool, optional
- Whether to print the full summary. By default, the setting in
- ``pandas.options.display.max_info_columns`` is followed.
- buf : writable buffer, defaults to sys.stdout
- Where to send the output. By default, the output is printed to
- sys.stdout. Pass a writable buffer if you need to further process
- the output.
- %(max_cols_sub)s
- memory_usage : bool, str, optional
- Specifies whether total memory usage of the %(klass)s
- elements (including the index) should be displayed. By default,
- this follows the ``pandas.options.display.memory_usage`` setting.
-
- True always show memory usage. False never shows memory usage.
- A value of 'deep' is equivalent to "True with deep introspection".
- Memory usage is shown in human-readable units (base-2
- representation). Without deep introspection a memory estimation is
- made based in column dtype and number of rows assuming values
- consume the same memory amount for corresponding dtypes. With deep
- memory introspection, a real memory usage calculation is performed
- at the cost of computational resources.
- null_counts : bool, optional
- Whether to show the non-null counts. By default, this is shown
- only if the %(klass)s is smaller than
- ``pandas.options.display.max_info_rows`` and
- ``pandas.options.display.max_info_columns``. A value of True always
- shows the counts, and False never shows the counts.
-
- Returns
- -------
- None
- This method prints a summary of a %(klass)s and returns None.
-
- See Also
- --------
- %(see_also_sub)s
+ str
+ Size in human readable format.
Examples
--------
- %(examples_sub)s
- """
- if buf is None: # pragma: no cover
- buf = sys.stdout
-
- lines = []
-
- lines.append(str(type(data)))
- lines.append(data.index._summary())
-
- ids, dtypes = _get_ids_and_dtypes(data)
- col_count = len(ids)
-
- if col_count == 0:
- lines.append(f"Empty {type(data).__name__}")
- fmt.buffer_put_lines(buf, lines)
- return
-
- # hack
- if max_cols is None:
- max_cols = get_option("display.max_info_columns", col_count + 1)
-
- max_rows = get_option("display.max_info_rows", len(data) + 1)
+ >>> _sizeof_fmt(23028, '')
+ '22.5 KB'
- if null_counts is None:
- show_counts = (col_count <= max_cols) and (len(data) < max_rows)
- else:
- show_counts = null_counts
- exceeds_info_cols = col_count > max_cols
+ >>> _sizeof_fmt(23028, '+')
+ '22.5+ KB'
+ """
+ for x in ["bytes", "KB", "MB", "GB", "TB"]:
+ if num < 1024.0:
+ return f"{num:3.1f}{size_qualifier} {x}"
+ num /= 1024.0
+ return f"{num:3.1f}{size_qualifier} PB"
+
+
+class BaseInfo(metaclass=ABCMeta):
+ def __init__(
+ self,
+ data: FrameOrSeries,
+ verbose: Optional[bool] = None,
+ buf: Optional[IO[str]] = None,
+ max_cols: Optional[int] = None,
+ memory_usage: Optional[Union[bool, str]] = None,
+ null_counts: Optional[bool] = None,
+ ):
+ if buf is None: # pragma: no cover
+ buf = sys.stdout
+ if memory_usage is None:
+ memory_usage = get_option("display.memory_usage")
+
+ self.data = data
+ self.verbose = verbose
+ self.buf = buf
+ self.max_cols = max_cols
+ self.memory_usage = memory_usage
+ self.null_counts = null_counts
+
+ @abstractmethod
+ def _get_mem_usage(self, deep: bool) -> int:
+ """
+ Get memory usage in bytes.
+
+ Parameters
+ ----------
+ deep : bool
+ If True, introspect the data deeply by interrogating object dtypes
+ for system-level memory consumption, and include it in the returned
+ values.
+
+ Returns
+ -------
+ mem_usage : int
+ Object's total memory usage in bytes.
+ """
+ pass
+
+ @abstractmethod
+ def _get_ids_and_dtypes(self) -> Tuple["Index", "Series"]:
+ """
+ Get column names and dtypes.
+
+ Returns
+ -------
+ ids : Index
+ DataFrame's column names.
+ dtypes : Series
+ Dtype of each of the DataFrame's columns.
+ """
+ pass
+
+ @abstractmethod
+ def _verbose_repr(
+ self, lines: List[str], ids: "Index", dtypes: "Series", show_counts: bool
+ ) -> None:
+ """
+ Append name, non-null count (optional), and dtype for each column to `lines`.
+
+ Parameters
+ ----------
+ lines : List[str]
+ Lines that will contain `info` representation.
+ ids : Index
+ The DataFrame's column names.
+ dtypes : Series
+ The DataFrame's columns' dtypes.
+ show_counts : bool
+ If True, count of non-NA cells for each column will be appended to `lines`.
+ """
+ pass
+
+ @abstractmethod
+ def _non_verbose_repr(self, lines: List[str], ids: "Index") -> None:
+ """
+ Append short summary of columns' names to `lines`.
+
+ Parameters
+ ----------
+ lines : List[str]
+ Lines that will contain `info` representation.
+ ids : Index
+ The DataFrame's column names.
+ """
+ pass
+
+ def info(self) -> None:
+ """
+ Print a concise summary of a %(klass)s.
+
+ This method prints information about a %(klass)s including
+ the index dtype%(type_sub)s, non-null values and memory usage.
+
+ Parameters
+ ----------
+ data : %(klass)s
+ %(klass)s to print information about.
+ verbose : bool, optional
+ Whether to print the full summary. By default, the setting in
+ ``pandas.options.display.max_info_columns`` is followed.
+ buf : writable buffer, defaults to sys.stdout
+ Where to send the output. By default, the output is printed to
+ sys.stdout. Pass a writable buffer if you need to further process
+ the output.
+ %(max_cols_sub)s
+ memory_usage : bool, str, optional
+ Specifies whether total memory usage of the %(klass)s
+ elements (including the index) should be displayed. By default,
+ this follows the ``pandas.options.display.memory_usage`` setting.
+
+            True always shows memory usage. False never shows memory usage.
+            A value of 'deep' is equivalent to "True with deep introspection".
+            Memory usage is shown in human-readable units (base-2
+            representation). Without deep introspection a memory estimation is
+            made based on column dtype and number of rows assuming values
+            consume the same memory amount for corresponding dtypes. With deep
+            memory introspection, a real memory usage calculation is performed
+            at the cost of computational resources.
+ null_counts : bool, optional
+ Whether to show the non-null counts. By default, this is shown
+ only if the %(klass)s is smaller than
+ ``pandas.options.display.max_info_rows`` and
+ ``pandas.options.display.max_info_columns``. A value of True always
+ shows the counts, and False never shows the counts.
+
+ Returns
+ -------
+ None
+ This method prints a summary of a %(klass)s and returns None.
+
+ See Also
+ --------
+ %(see_also_sub)s
+
+ Examples
+ --------
+ %(examples_sub)s
+ """
+ lines = []
+
+ lines.append(str(type(self.data)))
+ lines.append(self.data.index._summary())
+
+ ids, dtypes = self._get_ids_and_dtypes()
+ col_count = len(ids)
+
+ if col_count == 0:
+ lines.append(f"Empty {type(self.data).__name__}")
+ fmt.buffer_put_lines(self.buf, lines)
+ return
+
+ # hack
+ max_cols = self.max_cols
+ if max_cols is None:
+ max_cols = get_option("display.max_info_columns", col_count + 1)
+
+ max_rows = get_option("display.max_info_rows", len(self.data) + 1)
+
+ if self.null_counts is None:
+ show_counts = (col_count <= max_cols) and (len(self.data) < max_rows)
+ else:
+ show_counts = self.null_counts
+ exceeds_info_cols = col_count > max_cols
- def _verbose_repr():
+ if self.verbose:
+ self._verbose_repr(lines, ids, dtypes, show_counts)
+ elif self.verbose is False: # specifically set to False, not necessarily None
+ self._non_verbose_repr(lines, ids)
+ else:
+ if exceeds_info_cols:
+ self._non_verbose_repr(lines, ids)
+ else:
+ self._verbose_repr(lines, ids, dtypes, show_counts)
+
+ # groupby dtype.name to collect e.g. Categorical columns
+ counts = dtypes.value_counts().groupby(lambda x: x.name).sum()
+ collected_dtypes = [f"{k[0]}({k[1]:d})" for k in sorted(counts.items())]
+ lines.append(f"dtypes: {', '.join(collected_dtypes)}")
+
+ if self.memory_usage:
+ # append memory usage of df to display
+ size_qualifier = ""
+ if self.memory_usage == "deep":
+ deep = True
+ else:
+ # size_qualifier is just a best effort; not guaranteed to catch
+ # all cases (e.g., it misses categorical data even with object
+ # categories)
+ deep = False
+ if "object" in counts or self.data.index._is_memory_usage_qualified():
+ size_qualifier = "+"
+ mem_usage = self._get_mem_usage(deep=deep)
+ lines.append(f"memory usage: {_sizeof_fmt(mem_usage, size_qualifier)}\n")
+ fmt.buffer_put_lines(self.buf, lines)
+
+
+class DataFrameInfo(BaseInfo):
+ def _get_mem_usage(self, deep: bool) -> int:
+ return self.data.memory_usage(index=True, deep=deep).sum()
+
+ def _get_ids_and_dtypes(self) -> Tuple["Index", "Series"]:
+ return self.data.columns, self.data.dtypes
+
+ def _verbose_repr(
+ self, lines: List[str], ids: "Index", dtypes: "Series", show_counts: bool
+ ) -> None:
+ col_count = len(ids)
lines.append(f"Data columns (total {col_count} columns):")
id_head = " # "
@@ -164,7 +298,7 @@ def _verbose_repr():
header = _put_str(id_head, space_num) + _put_str(column_head, space)
if show_counts:
- counts = data.count()
+ counts = self.data.count()
if col_count != len(counts): # pragma: no cover
raise AssertionError(
f"Columns must equal counts ({col_count} != {len(counts)})"
@@ -213,46 +347,5 @@ def _verbose_repr():
+ _put_str(dtype, space_dtype)
)
- def _non_verbose_repr():
+ def _non_verbose_repr(self, lines: List[str], ids: "Index") -> None:
lines.append(ids._summary(name="Columns"))
-
- def _sizeof_fmt(num, size_qualifier):
- # returns size in human readable format
- for x in ["bytes", "KB", "MB", "GB", "TB"]:
- if num < 1024.0:
- return f"{num:3.1f}{size_qualifier} {x}"
- num /= 1024.0
- return f"{num:3.1f}{size_qualifier} PB"
-
- if verbose:
- _verbose_repr()
- elif verbose is False: # specifically set to False, not necessarily None
- _non_verbose_repr()
- else:
- if exceeds_info_cols:
- _non_verbose_repr()
- else:
- _verbose_repr()
-
- # groupby dtype.name to collect e.g. Categorical columns
- counts = dtypes.value_counts().groupby(lambda x: x.name).sum()
- collected_dtypes = [f"{k[0]}({k[1]:d})" for k in sorted(counts.items())]
- lines.append(f"dtypes: {', '.join(collected_dtypes)}")
-
- if memory_usage is None:
- memory_usage = get_option("display.memory_usage")
- if memory_usage:
- # append memory usage of df to display
- size_qualifier = ""
- if memory_usage == "deep":
- deep = True
- else:
- # size_qualifier is just a best effort; not guaranteed to catch
- # all cases (e.g., it misses categorical data even with object
- # categories)
- deep = False
- if "object" in counts or data.index._is_memory_usage_qualified():
- size_qualifier = "+"
- mem_usage = data.memory_usage(index=True, deep=deep).sum()
- lines.append(f"memory usage: {_sizeof_fmt(mem_usage, size_qualifier)}\n")
- fmt.buffer_put_lines(buf, lines)
| Precursor to #31796.
Makes a `BaseInfo` class and a `DataFrameInfo` subclass so there's no need for all the checks like
```python
if isinstance(data, ABCDataFrame)
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/34743 | 2020-06-13T10:19:10Z | 2020-06-29T23:19:45Z | 2020-06-29T23:19:44Z | 2020-10-10T14:14:45Z |
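For illustration, a minimal sketch of the dispatch pattern this refactor enables (simplified; the `SeriesInfo` subclass is hypothetical here, gesturing at a possible follow-up):

```python
from abc import ABCMeta, abstractmethod


class BaseInfo(metaclass=ABCMeta):
    def __init__(self, data):
        self.data = data

    @abstractmethod
    def _get_mem_usage(self, deep: bool) -> int:
        """Return the object's total memory usage in bytes."""


class DataFrameInfo(BaseInfo):
    def _get_mem_usage(self, deep: bool) -> int:
        # DataFrame: sum per-column memory usage, including the index.
        return self.data.memory_usage(index=True, deep=deep).sum()


class SeriesInfo(BaseInfo):  # hypothetical follow-up subclass
    def _get_mem_usage(self, deep: bool) -> int:
        return self.data.memory_usage(index=True, deep=deep)
```

Shared logic lives once in the base class and calls the abstract method, so it never has to branch on the concrete type.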
CLN: remove the old 'nature_with_gtoc' sphinx doc theme | diff --git a/doc/source/themes/nature_with_gtoc/layout.html b/doc/source/themes/nature_with_gtoc/layout.html
deleted file mode 100644
index 6e7d8ece35133..0000000000000
--- a/doc/source/themes/nature_with_gtoc/layout.html
+++ /dev/null
@@ -1,108 +0,0 @@
-{#
-
-Subset of agogo theme
-agogo/layout.html
-
-Sphinx layout template for the agogo theme, originally written
-by Andi Albrecht.
-
-:copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
-:license: BSD, see LICENSE for details.
-#}
-{% extends "basic/layout.html" %}
-
-{%- block content %}
-<div class="content-wrapper">
- <div class="content">
- <div class="document">
- <div class="sphinxsidebar">
- {%- block sidebar1 %}
- {%- block sidebartoc %}
- <h3>{{ _('Table Of Contents') }}</h3>
- {{ toctree(includehidden=True) }}
- {%- endblock %}
- {%- block sidebarsearch %}
- <h3 style="margin-top: 1.5em;">{{ _('Search') }}</h3>
-
- <form class="search" action="{{ pathto('search') }}" method="get">
- <input type="text" name="q" size="18"/>
- <input type="submit" value="{{ _('Go') }}"/>
- <input type="hidden" name="check_keywords" value="yes"/>
- <input type="hidden" name="area" value="default"/>
- </form>
- <p class="searchtip" style="font-size: 90%">
- {{ _('Enter search terms or a module, class or function name.') }}
- </p>
-
- </div>
- {%- endblock %}
- {# possible location for sidebar #} {% endblock %}
-
-
- {%- block document %}
- <div class="documentwrapper">
- {%- if render_sidebar %}
- <div class="bodywrapper">
- {%- endif %}
- <div class="body">
- {% block body %} {% endblock %}
- </div>
- {%- if render_sidebar %}
- </div>
- {%- endif %}
- </div>
- {%- endblock %}
-
- {%- block sidebar2 %}
-
- {% endblock %}
- <div class="clearer"></div>
- </div>
- </div>
-</div>
-{%- endblock %}
-
-{%- block footer %}
-<style type="text/css">
- .scrollToTop {
- text-align: center;
- font-weight: bold;
- position: fixed;
- bottom: 60px;
- right: 40px;
- display: none;
- }
-</style>
-<a href="#" class="scrollToTop">Scroll To Top</a>
-<script type="text/javascript">
-$(document).ready(function() {
- //Check to see if the window is top if not then display button
- $(window).scroll(function() {
- if ($(this).scrollTop() > 200) {
- $('.scrollToTop').fadeIn();
- } else {
- $('.scrollToTop').fadeOut();
- }
- });
-
- //Click event to scroll to top
- $('.scrollToTop').click(function() {
- $('html, body').animate({
- scrollTop: 0
- }, 500);
- return false;
- });
-});
-</script>
-
-<!-- Google Analytics -->
-<script>
-window.ga=window.ga||function(){(ga.q=ga.q||[]).push(arguments)};ga.l=+new Date;
-ga('create', 'UA-27880019-2', 'auto');
-ga('set', 'anonymizeIp', true);
-ga('send', 'pageview');
-</script>
-<script async src='https://www.google-analytics.com/analytics.js'></script>
-<!-- End Google Analytics -->
-
-{% endblock %}
diff --git a/doc/source/themes/nature_with_gtoc/static/nature.css_t b/doc/source/themes/nature_with_gtoc/static/nature.css_t
deleted file mode 100644
index 4571d97ec50ba..0000000000000
--- a/doc/source/themes/nature_with_gtoc/static/nature.css_t
+++ /dev/null
@@ -1,356 +0,0 @@
-/*
- * nature.css_t
- * ~~~~~~~~~~~~
- *
- * Sphinx stylesheet -- nature theme.
- *
- * :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
- * :license: BSD, see LICENSE for details.
- *
- */
-
-@import url("basic.css");
-
-/* -- page layout ----------------------------------------------------------- */
-
-body {
- font-family: Arial, sans-serif;
- font-size: 100%;
- background-color: #111;
- color: #555;
- margin: 0;
- padding: 0;
-}
-
-
-div.documentwrapper {
- width: 100%;
-}
-
-div.bodywrapper {
-/* ugly hack, probably not attractive with other font size for re*/
- margin: 0 0 0 {{ theme_sidebarwidth|toint}}px;
- min-width: 540px;
- max-width: 800px;
-}
-
-
-hr {
- border: 1px solid #B1B4B6;
-}
-
-div.document {
- background-color: #eee;
-}
-
-div.body {
- background-color: #ffffff;
- color: #3E4349;
- padding: 0 30px 30px 30px;
- font-size: 0.9em;
-}
-
-div.footer {
- color: #555;
- width: 100%;
- padding: 13px 0;
- text-align: center;
- font-size: 75%;
-}
-
-div.footer a {
- color: #444;
- text-decoration: underline;
-}
-
-div.related {
- background-color: #6BA81E;
- line-height: 32px;
- color: #fff;
- text-shadow: 0px 1px 0 #444;
- font-size: 0.9em;
-}
-
-div.related a {
- color: #E2F3CC;
-}
-
-div.sphinxsidebar {
- font-size: 0.75em;
- line-height: 1.5em;
- width: {{ theme_sidebarwidth|toint }}px;
- margin: 0 ;
- float: left;
-
- background-color: #eee;
-}
-/*
-div.sphinxsidebarwrapper{
- padding: 20px 0;
-}
-*/
-div.sphinxsidebar h3,
-div.sphinxsidebar h4 {
- font-family: Arial, sans-serif;
- color: #222;
- font-size: 1.2em;
- font-weight: normal;
- margin: 20px 0 0 0;
- padding: 5px 10px;
- background-color: #ddd;
- text-shadow: 1px 1px 0 white
-}
-
-div.sphinxsidebar h4{
- font-size: 1.1em;
-}
-
-div.sphinxsidebar h3 a {
- color: #444;
-}
-
-
-div.sphinxsidebar p {
- color: #888;
-/* padding: 5px 20px;*/
-}
-
-div.sphinxsidebar p.searchtip {
- color: #888;
- padding: 5px 20px;
-}
-
-
-div.sphinxsidebar p.topless {
-}
-
-div.sphinxsidebar ul {
- margin: 10px 20px;
- padding: 0;
- color: #000;
-}
-
-div.sphinxsidebar a {
- color: #444;
-}
-
-div.sphinxsidebar input {
- border: 1px solid #ccc;
- font-family: sans-serif;
- font-size: 1em;
-}
-
-div.sphinxsidebar input[type=text]{
- margin-left: 20px;
-}
-
-/* -- body styles ----------------------------------------------------------- */
-
-a {
- color: #005B81;
- text-decoration: none;
-}
-
-a:hover {
- color: #E32E00;
- text-decoration: underline;
-}
-
-div.body h1,
-div.body h2,
-div.body h3,
-div.body h4,
-div.body h5,
-div.body h6 {
- font-family: Arial, sans-serif;
- background-color: #BED4EB;
- font-weight: normal;
- color: #212224;
- margin: 30px 0px 10px 0px;
- padding: 5px 0 5px 10px;
- text-shadow: 0px 1px 0 white
-}
-
-div.body h1 { border-top: 20px solid white; margin-top: 0; font-size: 200%; }
-div.body h2 { font-size: 150%; background-color: #C8D5E3; }
-div.body h3 { font-size: 120%; background-color: #D8DEE3; }
-div.body h4 { font-size: 110%; background-color: #D8DEE3; }
-div.body h5 { font-size: 100%; background-color: #D8DEE3; }
-div.body h6 { font-size: 100%; background-color: #D8DEE3; }
-
-p.rubric {
- border-bottom: 1px solid rgb(201, 201, 201);
-}
-
-a.headerlink {
- color: #c60f0f;
- font-size: 0.8em;
- padding: 0 4px 0 4px;
- text-decoration: none;
-}
-
-a.headerlink:hover {
- background-color: #c60f0f;
- color: white;
-}
-
-div.body p, div.body dd, div.body li {
- line-height: 1.5em;
-}
-
-div.admonition p.admonition-title + p, div.deprecated p {
- display: inline;
-}
-
-div.deprecated {
- margin-bottom: 10px;
- margin-top: 10px;
- padding: 7px;
- background-color: #ffe4e4;
- border: 1px solid #f66;
-}
-
-div.highlight{
- background-color: white;
-}
-
-div.note {
- background-color: #eee;
- border: 1px solid #ccc;
-}
-
-div.seealso {
- background-color: #ffc;
- border: 1px solid #ff6;
-}
-
-div.topic {
- background-color: #eee;
-}
-
-div.warning {
- background-color: #ffe4e4;
- border: 1px solid #f66;
-}
-
-p.admonition-title {
- display: inline;
-}
-
-p.admonition-title:after {
- content: ":";
-}
-
-pre {
- padding: 10px;
- background-color: rgb(250,250,250);
- color: #222;
- line-height: 1.2em;
- border: 1px solid rgb(201,201,201);
- font-size: 1.1em;
- margin: 1.5em 0 1.5em 0;
- -webkit-box-shadow: 1px 1px 1px #d8d8d8;
- -moz-box-shadow: 1px 1px 1px #d8d8d8;
-}
-
-tt {
- background-color: #ecf0f3;
- color: #222;
- /* padding: 1px 2px; */
- font-size: 1.1em;
- font-family: monospace;
-}
-
-.viewcode-back {
- font-family: Arial, sans-serif;
-}
-
-div.viewcode-block:target {
- background-color: #f4debf;
- border-top: 1px solid #ac9;
- border-bottom: 1px solid #ac9;
-}
-
-
-/**
- * Styling for field lists
- */
-
- /* grey highlighting of 'parameter' and 'returns' field */
-table.field-list {
- border-collapse: separate;
- border-spacing: 10px;
- margin-left: 1px;
- /* border-left: 5px solid rgb(238, 238, 238) !important; */
-}
-
-table.field-list th.field-name {
- /* display: inline-block; */
- padding: 1px 8px 1px 5px;
- white-space: nowrap;
- background-color: rgb(238, 238, 238);
-}
-
-/* italic font for parameter types */
-table.field-list td.field-body > p {
- font-style: italic;
-}
-
-table.field-list td.field-body > p > strong {
- font-style: normal;
-}
-
-/* reduced space around parameter description */
-td.field-body blockquote {
- border-left: none;
- margin: 0em 0em 0.3em;
- padding-left: 30px;
-}
-
-// Adapted from the new Jupyter notebook style
-// https://github.com/jupyter/notebook/blob/c8841b68c4c0739bbee1291e0214771f24194079/notebook/static/notebook/less/renderedhtml.less#L59
-table {
- margin-left: auto;
- margin-right: auto;
- border: none;
- border-collapse: collapse;
- border-spacing: 0;
- color: @rendered_html_border_color;
- table-layout: fixed;
-}
-thead {
- border-bottom: 1px solid @rendered_html_border_color;
- vertical-align: bottom;
-}
-tr, th, td {
- vertical-align: middle;
- padding: 0.5em 0.5em;
- line-height: normal;
- white-space: normal;
- max-width: none;
- border: none;
-}
-th {
- font-weight: bold;
-}
-th.col_heading {
- text-align: right;
-}
-tbody tr:nth-child(odd) {
- background: #f5f5f5;
-}
-
-table td.data, table th.row_heading table th.col_heading {
- font-family: monospace;
- text-align: right;
-}
-
-
-/**
- * See also
- */
-
-div.seealso dd {
- margin-top: 0;
- margin-bottom: 0;
-}
diff --git a/doc/source/themes/nature_with_gtoc/theme.conf b/doc/source/themes/nature_with_gtoc/theme.conf
deleted file mode 100644
index 290a07bde8806..0000000000000
--- a/doc/source/themes/nature_with_gtoc/theme.conf
+++ /dev/null
@@ -1,7 +0,0 @@
-[theme]
-inherit = basic
-stylesheet = nature.css
-pygments_style = tango
-
-[options]
-sidebarwidth = 270
| Noticed we still have the old theme customizations in the doc sources. | https://api.github.com/repos/pandas-dev/pandas/pulls/34742 | 2020-06-13T10:11:10Z | 2020-06-13T17:06:51Z | 2020-06-13T17:06:51Z | 2020-06-13T18:31:48Z |
BLD: Suppressing errors while compiling pandas/_libs/groupby
index 35a6963165194..e7ac3b8442c6d 100644
--- a/pandas/_libs/groupby.pyx
+++ b/pandas/_libs/groupby.pyx
@@ -869,7 +869,9 @@ def group_last(rank_t[:, :] out,
assert min_count == -1, "'min_count' only used in add and prod"
- if not len(values) == len(labels):
+ # TODO(cython 3.0):
+ # Instead of `labels.shape[0]` use `len(labels)`
+ if not len(values) == labels.shape[0]:
raise AssertionError("len(index) != len(labels)")
nobs = np.zeros((<object>out).shape, dtype=np.int64)
@@ -960,7 +962,9 @@ def group_nth(rank_t[:, :] out,
assert min_count == -1, "'min_count' only used in add and prod"
- if not len(values) == len(labels):
+ # TODO(cython 3.0):
+ # Instead of `labels.shape[0]` use `len(labels)`
+ if not len(values) == labels.shape[0]:
raise AssertionError("len(index) != len(labels)")
nobs = np.zeros((<object>out).shape, dtype=np.int64)
@@ -1254,7 +1258,9 @@ def group_max(groupby_t[:, :] out,
assert min_count == -1, "'min_count' only used in add and prod"
- if not len(values) == len(labels):
+ # TODO(cython 3.0):
+ # Instead of `labels.shape[0]` use `len(labels)`
+ if not len(values) == labels.shape[0]:
raise AssertionError("len(index) != len(labels)")
nobs = np.zeros((<object>out).shape, dtype=np.int64)
@@ -1327,7 +1333,9 @@ def group_min(groupby_t[:, :] out,
assert min_count == -1, "'min_count' only used in add and prod"
- if not len(values) == len(labels):
+ # TODO(cython 3.0):
+ # Instead of `labels.shape[0]` use `len(labels)`
+ if not len(values) == labels.shape[0]:
raise AssertionError("len(index) != len(labels)")
nobs = np.zeros((<object>out).shape, dtype=np.int64)
| - [x] ref #32163
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
These are the compiler warnings that this PR is getting rid of:
```
pandas/_libs/groupby.c: In function ‘__pyx_pf_6pandas_5_libs_7groupby_116group_last’:
pandas/_libs/groupby.c:37458:30: warning: comparison of integer expressions of different signedness: ‘Py_ssize_t’ {aka ‘long int’} and ‘size_t’ {aka ‘long unsigned int’} [-Wsign-compare]
37458 | __pyx_t_3 = ((!((__pyx_t_2 == __pyx_t_1) != 0)) != 0);
| ^~
pandas/_libs/groupby.c: In function ‘__pyx_pf_6pandas_5_libs_7groupby_118group_last’:
pandas/_libs/groupby.c:38233:30: warning: comparison of integer expressions of different signedness: ‘Py_ssize_t’ {aka ‘long int’} and ‘size_t’ {aka ‘long unsigned int’} [-Wsign-compare]
38233 | __pyx_t_3 = ((!((__pyx_t_2 == __pyx_t_1) != 0)) != 0);
| ^~
pandas/_libs/groupby.c: In function ‘__pyx_pf_6pandas_5_libs_7groupby_120group_last’:
pandas/_libs/groupby.c:39008:30: warning: comparison of integer expressions of different signedness: ‘Py_ssize_t’ {aka ‘long int’} and ‘size_t’ {aka ‘long unsigned int’} [-Wsign-compare]
39008 | __pyx_t_3 = ((!((__pyx_t_2 == __pyx_t_1) != 0)) != 0);
| ^~
pandas/_libs/groupby.c: In function ‘__pyx_pf_6pandas_5_libs_7groupby_122group_last’:
pandas/_libs/groupby.c:39781:30: warning: comparison of integer expressions of different signedness: ‘Py_ssize_t’ {aka ‘long int’} and ‘size_t’ {aka ‘long unsigned int’} [-Wsign-compare]
39781 | __pyx_t_3 = ((!((__pyx_t_2 == __pyx_t_1) != 0)) != 0);
| ^~
pandas/_libs/groupby.c: In function ‘__pyx_pf_6pandas_5_libs_7groupby_124group_last’:
pandas/_libs/groupby.c:40563:30: warning: comparison of integer expressions of different signedness: ‘Py_ssize_t’ {aka ‘long int’} and ‘size_t’ {aka ‘long unsigned int’} [-Wsign-compare]
40563 | __pyx_t_3 = ((!((__pyx_t_2 == __pyx_t_1) != 0)) != 0);
| ^~
pandas/_libs/groupby.c: In function ‘__pyx_pf_6pandas_5_libs_7groupby_128group_nth’:
pandas/_libs/groupby.c:41999:30: warning: comparison of integer expressions of different signedness: ‘Py_ssize_t’ {aka ‘long int’} and ‘size_t’ {aka ‘long unsigned int’} [-Wsign-compare]
41999 | __pyx_t_3 = ((!((__pyx_t_2 == __pyx_t_1) != 0)) != 0);
| ^~
pandas/_libs/groupby.c: In function ‘__pyx_pf_6pandas_5_libs_7groupby_130group_nth’:
pandas/_libs/groupby.c:42820:30: warning: comparison of integer expressions of different signedness: ‘Py_ssize_t’ {aka ‘long int’} and ‘size_t’ {aka ‘long unsigned int’} [-Wsign-compare]
42820 | __pyx_t_3 = ((!((__pyx_t_2 == __pyx_t_1) != 0)) != 0);
| ^~
pandas/_libs/groupby.c: In function ‘__pyx_pf_6pandas_5_libs_7groupby_132group_nth’:
pandas/_libs/groupby.c:43641:30: warning: comparison of integer expressions of different signedness: ‘Py_ssize_t’ {aka ‘long int’} and ‘size_t’ {aka ‘long unsigned int’} [-Wsign-compare]
43641 | __pyx_t_3 = ((!((__pyx_t_2 == __pyx_t_1) != 0)) != 0);
| ^~
pandas/_libs/groupby.c: In function ‘__pyx_pf_6pandas_5_libs_7groupby_134group_nth’:
pandas/_libs/groupby.c:44460:30: warning: comparison of integer expressions of different signedness: ‘Py_ssize_t’ {aka ‘long int’} and ‘size_t’ {aka ‘long unsigned int’} [-Wsign-compare]
44460 | __pyx_t_3 = ((!((__pyx_t_2 == __pyx_t_1) != 0)) != 0);
| ^~
pandas/_libs/groupby.c: In function ‘__pyx_pf_6pandas_5_libs_7groupby_136group_nth’:
pandas/_libs/groupby.c:45288:30: warning: comparison of integer expressions of different signedness: ‘Py_ssize_t’ {aka ‘long int’} and ‘size_t’ {aka ‘long unsigned int’} [-Wsign-compare]
45288 | __pyx_t_3 = ((!((__pyx_t_2 == __pyx_t_1) != 0)) != 0);
| ^~
pandas/_libs/groupby.c: In function ‘__pyx_pf_6pandas_5_libs_7groupby_152group_max’:
pandas/_libs/groupby.c:55168:30: warning: comparison of integer expressions of different signedness: ‘Py_ssize_t’ {aka ‘long int’} and ‘size_t’ {aka ‘long unsigned int’} [-Wsign-compare]
55168 | __pyx_t_3 = ((!((__pyx_t_2 == __pyx_t_1) != 0)) != 0);
| ^~
pandas/_libs/groupby.c: In function ‘__pyx_pf_6pandas_5_libs_7groupby_154group_max’:
pandas/_libs/groupby.c:55970:30: warning: comparison of integer expressions of different signedness: ‘Py_ssize_t’ {aka ‘long int’} and ‘size_t’ {aka ‘long unsigned int’} [-Wsign-compare]
55970 | __pyx_t_3 = ((!((__pyx_t_2 == __pyx_t_1) != 0)) != 0);
| ^~
pandas/_libs/groupby.c: In function ‘__pyx_pf_6pandas_5_libs_7groupby_156group_max’:
pandas/_libs/groupby.c:56772:30: warning: comparison of integer expressions of different signedness: ‘Py_ssize_t’ {aka ‘long int’} and ‘size_t’ {aka ‘long unsigned int’} [-Wsign-compare]
56772 | __pyx_t_3 = ((!((__pyx_t_2 == __pyx_t_1) != 0)) != 0);
| ^~
pandas/_libs/groupby.c: In function ‘__pyx_pf_6pandas_5_libs_7groupby_158group_max’:
pandas/_libs/groupby.c:57568:30: warning: comparison of integer expressions of different signedness: ‘Py_ssize_t’ {aka ‘long int’} and ‘size_t’ {aka ‘long unsigned int’} [-Wsign-compare]
57568 | __pyx_t_3 = ((!((__pyx_t_2 == __pyx_t_1) != 0)) != 0);
| ^~
pandas/_libs/groupby.c: In function ‘__pyx_pf_6pandas_5_libs_7groupby_162group_min’:
pandas/_libs/groupby.c:58985:30: warning: comparison of integer expressions of different signedness: ‘Py_ssize_t’ {aka ‘long int’} and ‘size_t’ {aka ‘long unsigned int’} [-Wsign-compare]
58985 | __pyx_t_3 = ((!((__pyx_t_2 == __pyx_t_1) != 0)) != 0);
| ^~
pandas/_libs/groupby.c: In function ‘__pyx_pf_6pandas_5_libs_7groupby_164group_min’:
pandas/_libs/groupby.c:59784:30: warning: comparison of integer expressions of different signedness: ‘Py_ssize_t’ {aka ‘long int’} and ‘size_t’ {aka ‘long unsigned int’} [-Wsign-compare]
59784 | __pyx_t_3 = ((!((__pyx_t_2 == __pyx_t_1) != 0)) != 0);
| ^~
pandas/_libs/groupby.c: In function ‘__pyx_pf_6pandas_5_libs_7groupby_166group_min’:
pandas/_libs/groupby.c:60583:30: warning: comparison of integer expressions of different signedness: ‘Py_ssize_t’ {aka ‘long int’} and ‘size_t’ {aka ‘long unsigned int’} [-Wsign-compare]
60583 | __pyx_t_3 = ((!((__pyx_t_2 == __pyx_t_1) != 0)) != 0);
| ^~
pandas/_libs/groupby.c: In function ‘__pyx_pf_6pandas_5_libs_7groupby_168group_min’:
pandas/_libs/groupby.c:61376:30: warning: comparison of integer expressions of different signedness: ‘Py_ssize_t’ {aka ‘long int’} and ‘size_t’ {aka ‘long unsigned int’} [-Wsign-compare]
61376 | __pyx_t_3 = ((!((__pyx_t_2 == __pyx_t_1) != 0)) != 0);
```
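All of the above are `-Wsign-compare` warnings: in the C code Cython generates, a signed `Py_ssize_t` is compared against an unsigned `size_t`. At the Python level the two spellings in the patch agree for a 1-D array, so the change is behavior-preserving and only alters the integer types in the generated C. A quick sanity check in plain NumPy (not the Cython build):

```python
import numpy as np

# len(a) and a.shape[0] are the same number for a 1-D array; the patch
# swaps one spelling for the other only to change the integer type in
# the C code Cython emits, which silences -Wsign-compare.
labels = np.zeros(8, dtype=np.intp)
assert len(labels) == labels.shape[0] == 8
```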
| https://api.github.com/repos/pandas-dev/pandas/pulls/32794 | 2020-03-17T23:52:40Z | 2020-03-19T00:25:33Z | 2020-03-19T00:25:33Z | 2020-03-20T00:05:35Z |
CLN: remove align kwarg from Block.where, Block.putmask | diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index f2f8b6067c415..c429a65ed3369 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -909,13 +909,7 @@ def setitem(self, indexer, value):
return block
def putmask(
- self,
- mask,
- new,
- align: bool = True,
- inplace: bool = False,
- axis: int = 0,
- transpose: bool = False,
+ self, mask, new, inplace: bool = False, axis: int = 0, transpose: bool = False,
):
"""
putmask the data to the block; it is possible that we may create a
@@ -927,7 +921,6 @@ def putmask(
----------
mask : the condition to respect
new : a ndarray/object
- align : boolean, perform alignment on other/cond, default is True
inplace : perform inplace modification, default is False
axis : int
transpose : boolean
@@ -1312,13 +1305,7 @@ def shift(self, periods, axis: int = 0, fill_value=None):
return [self.make_block(new_values)]
def where(
- self,
- other,
- cond,
- align: bool = True,
- errors="raise",
- try_cast: bool = False,
- axis: int = 0,
+ self, other, cond, errors="raise", try_cast: bool = False, axis: int = 0,
) -> List["Block"]:
"""
evaluate the block; return result block(s) from the result
@@ -1327,8 +1314,6 @@ def where(
----------
other : a ndarray/object
cond : the condition to respect
- align : bool, default True
- Perform alignment on other/cond.
errors : str, {'raise', 'ignore'}, default 'raise'
- ``raise`` : allow exceptions to be raised
- ``ignore`` : suppress exceptions. On error return original object
@@ -1394,12 +1379,7 @@ def where_func(cond, values, other):
# we are explicitly ignoring errors
block = self.coerce_to_target_dtype(other)
blocks = block.where(
- orig_other,
- cond,
- align=align,
- errors=errors,
- try_cast=try_cast,
- axis=axis,
+ orig_other, cond, errors=errors, try_cast=try_cast, axis=axis,
)
return self._maybe_downcast(blocks, "infer")
@@ -1646,7 +1626,7 @@ def set(self, locs, values):
self.values[:] = values
def putmask(
- self, mask, new, align=True, inplace=False, axis=0, transpose=False,
+ self, mask, new, inplace=False, axis=0, transpose=False,
):
"""
putmask the data to the block; we must be a single block and not
@@ -1658,7 +1638,6 @@ def putmask(
----------
mask : the condition to respect
new : a ndarray/object
- align : boolean, perform alignment on other/cond, default is True
inplace : perform inplace modification, default is False
Returns
@@ -1896,13 +1875,7 @@ def shift(
]
def where(
- self,
- other,
- cond,
- align=True,
- errors="raise",
- try_cast: bool = False,
- axis: int = 0,
+ self, other, cond, errors="raise", try_cast: bool = False, axis: int = 0,
) -> List["Block"]:
if isinstance(other, ABCDataFrame):
# ExtensionArrays are 1-D, so if we get here then
| https://api.github.com/repos/pandas-dev/pandas/pulls/32791 | 2020-03-17T22:04:40Z | 2020-03-22T00:14:25Z | 2020-03-22T00:14:25Z | 2020-03-22T00:49:39Z |
|
ERR: Raise on invalid na_action in Series.map | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index 720ce7af47a18..346b603344e9a 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -405,6 +405,7 @@ Other
- Fixed :func:`pandas.testing.assert_series_equal` to correctly raise if left object is a different subclass with ``check_series_type=True`` (:issue:`32670`).
- :meth:`IntegerArray.astype` now supports ``datetime64`` dtype (:issue:32538`)
- Fixed bug in :func:`pandas.testing.assert_series_equal` where dtypes were checked for ``Interval`` and ``ExtensionArray`` operands when ``check_dtype`` was ``False`` (:issue:`32747`)
+- Bug in :meth:`Series.map` not raising on invalid ``na_action`` (:issue:`32815`)
- Bug in :meth:`DataFrame.__dir__` caused a segfault when using unicode surrogates in a column name (:issue:`25509`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/base.py b/pandas/core/base.py
index e1c6bef66239d..de9c0ca1ccd78 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -1156,8 +1156,14 @@ def _map_values(self, mapper, na_action=None):
def map_f(values, f):
return lib.map_infer_mask(values, f, isna(values).view(np.uint8))
- else:
+ elif na_action is None:
map_f = lib.map_infer
+ else:
+ msg = (
+ "na_action must either be 'ignore' or None, "
+ f"{na_action} was passed"
+ )
+ raise ValueError(msg)
# mapper is a function
new_values = map_f(values, mapper)
diff --git a/pandas/tests/series/test_apply.py b/pandas/tests/series/test_apply.py
index dbe3ca27fa06d..f3717ff895a59 100644
--- a/pandas/tests/series/test_apply.py
+++ b/pandas/tests/series/test_apply.py
@@ -788,6 +788,13 @@ def test_map_float_to_string_precision(self):
expected = {0: "0.3333333333333333"}
assert result == expected
+ def test_map_with_invalid_na_action_raises(self):
+ # https://github.com/pandas-dev/pandas/issues/32815
+ s = pd.Series([1, 2, 3])
+ msg = "na_action must either be 'ignore' or None"
+ with pytest.raises(ValueError, match=msg):
+ s.map(lambda x: x, na_action="____")
+
def test_apply_to_timedelta(self):
list_of_valid_strings = ["00:00:01", "00:00:02"]
a = pd.to_timedelta(list_of_valid_strings)
| - [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
`Series.map` silently treats any `na_action` other than `"ignore"` like `None`, but it should raise if someone provides an invalid option, as this pre-fix session shows (the post-fix behavior is sketched below):
```python
In [1]: import pandas as pd
In [2]: pd.Series([1, 2, 3]).map(lambda x: x, na_action="xxxxx")
Out[2]:
0 1
1 2
2 3
dtype: int64
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/32790 | 2020-03-17T21:56:43Z | 2020-03-22T00:13:12Z | 2020-03-22T00:13:12Z | 2020-03-22T00:14:03Z |
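With the patch applied, the same call raises instead. A minimal check of the post-fix behavior (assumes a pandas build that includes this fix):

```python
import pandas as pd

s = pd.Series([1, 2, 3])
try:
    s.map(lambda x: x, na_action="xxxxx")
except ValueError as err:
    print(err)
# na_action must either be 'ignore' or None, xxxxx was passed
```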
TST: Define sections in pandas/conftest.py | diff --git a/pandas/conftest.py b/pandas/conftest.py
index e12acb5dd56d5..903e1a5dec132 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -1,3 +1,23 @@
+"""
+This file is very long and growing, but it was decided to not split it yet, as
+it's still manageable (2020-03-17, ~1.1k LoC). See gh-31989
+
+Instead of splitting, it was decided to define sections here:
+- Configuration / Settings
+- Autouse fixtures
+- Common arguments
+- Missing values & co.
+- Classes
+- Indices
+- Series'
+- DataFrames
+- Operators & Operations
+- Data sets/files
+- Time zones
+- Dtypes
+- Misc
+"""
+
from collections import abc
from datetime import date, time, timedelta, timezone
from decimal import Decimal
@@ -19,19 +39,11 @@
from pandas.core import ops
from pandas.core.indexes.api import Index, MultiIndex
-hypothesis.settings.register_profile(
- "ci",
- # Hypothesis timing checks are tuned for scalars by default, so we bump
- # them from 200ms to 500ms per test case as the global default. If this
- # is too short for a specific test, (a) try to make it faster, and (b)
- # if it really is slow add `@settings(deadline=...)` with a working value,
- # or `deadline=None` to entirely disable timeouts for that test.
- deadline=500,
- suppress_health_check=(hypothesis.HealthCheck.too_slow,),
-)
-hypothesis.settings.load_profile("ci")
-
+# ----------------------------------------------------------------
+# Configuration / Settings
+# ----------------------------------------------------------------
+# pytest
def pytest_addoption(parser):
parser.addoption("--skip-slow", action="store_true", help="skip slow tests")
parser.addoption("--skip-network", action="store_true", help="skip network tests")
@@ -66,6 +78,55 @@ def pytest_runtest_setup(item):
pytest.skip("skipping high memory test since --run-high-memory was not set")
+# Hypothesis
+hypothesis.settings.register_profile(
+ "ci",
+ # Hypothesis timing checks are tuned for scalars by default, so we bump
+ # them from 200ms to 500ms per test case as the global default. If this
+ # is too short for a specific test, (a) try to make it faster, and (b)
+ # if it really is slow add `@settings(deadline=...)` with a working value,
+ # or `deadline=None` to entirely disable timeouts for that test.
+ deadline=500,
+ suppress_health_check=(hypothesis.HealthCheck.too_slow,),
+)
+hypothesis.settings.load_profile("ci")
+
+# Registering these strategies makes them globally available via st.from_type,
+# which is use for offsets in tests/tseries/offsets/test_offsets_properties.py
+for name in "MonthBegin MonthEnd BMonthBegin BMonthEnd".split():
+ cls = getattr(pd.tseries.offsets, name)
+ st.register_type_strategy(
+ cls, st.builds(cls, n=st.integers(-99, 99), normalize=st.booleans())
+ )
+
+for name in "YearBegin YearEnd BYearBegin BYearEnd".split():
+ cls = getattr(pd.tseries.offsets, name)
+ st.register_type_strategy(
+ cls,
+ st.builds(
+ cls,
+ n=st.integers(-5, 5),
+ normalize=st.booleans(),
+ month=st.integers(min_value=1, max_value=12),
+ ),
+ )
+
+for name in "QuarterBegin QuarterEnd BQuarterBegin BQuarterEnd".split():
+ cls = getattr(pd.tseries.offsets, name)
+ st.register_type_strategy(
+ cls,
+ st.builds(
+ cls,
+ n=st.integers(-24, 24),
+ normalize=st.booleans(),
+ startingMonth=st.integers(min_value=1, max_value=12),
+ ),
+ )
+
+
+# ----------------------------------------------------------------
+# Autouse fixtures
+# ----------------------------------------------------------------
@pytest.fixture(autouse=True)
def configure_tests():
"""
@@ -83,16 +144,9 @@ def add_imports(doctest_namespace):
doctest_namespace["pd"] = pd
-@pytest.fixture(params=["bsr", "coo", "csc", "csr", "dia", "dok", "lil"])
-def spmatrix(request):
- """
- Yields scipy sparse matrix classes.
- """
- from scipy import sparse
-
- return getattr(sparse, request.param + "_matrix")
-
-
+# ----------------------------------------------------------------
+# Common arguments
+# ----------------------------------------------------------------
@pytest.fixture(params=[0, 1, "index", "columns"], ids=lambda x: f"axis {repr(x)}")
def axis(request):
"""
@@ -112,19 +166,6 @@ def axis_series(request):
return request.param
-@pytest.fixture
-def ip():
- """
- Get an instance of IPython.InteractiveShell.
-
- Will raise a skip if IPython is not installed.
- """
- pytest.importorskip("IPython", minversion="6.0.0")
- from IPython.core.interactiveshell import InteractiveShell
-
- return InteractiveShell()
-
-
@pytest.fixture(params=[True, False, None])
def observed(request):
"""
@@ -146,338 +187,547 @@ def ordered_fixture(request):
return request.param
-_all_arithmetic_operators = [
- "__add__",
- "__radd__",
- "__sub__",
- "__rsub__",
- "__mul__",
- "__rmul__",
- "__floordiv__",
- "__rfloordiv__",
- "__truediv__",
- "__rtruediv__",
- "__pow__",
- "__rpow__",
- "__mod__",
- "__rmod__",
-]
-
-
-@pytest.fixture(params=_all_arithmetic_operators)
-def all_arithmetic_operators(request):
+@pytest.fixture(params=["first", "last", False])
+def keep(request):
"""
- Fixture for dunder names for common arithmetic operations.
+ Valid values for the 'keep' parameter used in
+ .duplicated or .drop_duplicates
"""
return request.param
-@pytest.fixture(
- params=[
- operator.add,
- ops.radd,
- operator.sub,
- ops.rsub,
- operator.mul,
- ops.rmul,
- operator.truediv,
- ops.rtruediv,
- operator.floordiv,
- ops.rfloordiv,
- operator.mod,
- ops.rmod,
- operator.pow,
- ops.rpow,
- ]
-)
-def all_arithmetic_functions(request):
+@pytest.fixture(params=["left", "right", "both", "neither"])
+def closed(request):
"""
- Fixture for operator and roperator arithmetic functions.
-
- Notes
- -----
- This includes divmod and rdivmod, whereas all_arithmetic_operators
- does not.
+ Fixture for trying all interval closed parameters.
"""
return request.param
-_all_numeric_reductions = [
- "sum",
- "max",
- "min",
- "mean",
- "prod",
- "std",
- "var",
- "median",
- "kurt",
- "skew",
-]
-
-
-@pytest.fixture(params=_all_numeric_reductions)
-def all_numeric_reductions(request):
+@pytest.fixture(params=["left", "right", "both", "neither"])
+def other_closed(request):
"""
- Fixture for numeric reduction names.
+ Secondary closed fixture to allow parametrizing over all pairs of closed.
"""
return request.param
-_all_boolean_reductions = ["all", "any"]
-
-
-@pytest.fixture(params=_all_boolean_reductions)
-def all_boolean_reductions(request):
+@pytest.fixture(params=[None, "gzip", "bz2", "zip", "xz"])
+def compression(request):
"""
- Fixture for boolean reduction names.
+ Fixture for trying common compression types in compression tests.
"""
return request.param
-_cython_table = pd.core.base.SelectionMixin._cython_table.items()
-
-
-@pytest.fixture(params=list(_cython_table))
-def cython_table_items(request):
+@pytest.fixture(params=["gzip", "bz2", "zip", "xz"])
+def compression_only(request):
"""
- Yields a tuple of a function and its corresponding name. Correspond to
- the list of aggregator "Cython functions" used on selected table items.
+ Fixture for trying common compression types in compression tests excluding
+ uncompressed case.
"""
return request.param
-def _get_cython_table_params(ndframe, func_names_and_expected):
+@pytest.fixture(params=[True, False])
+def writable(request):
"""
- Combine frame, functions from SelectionMixin._cython_table
- keys and expected result.
-
- Parameters
- ----------
- ndframe : DataFrame or Series
- func_names_and_expected : Sequence of two items
- The first item is a name of a NDFrame method ('sum', 'prod') etc.
- The second item is the expected return value.
-
- Returns
- -------
- list
- List of three items (DataFrame, function, expected result)
+ Fixture that an array is writable.
"""
- results = []
- for func_name, expected in func_names_and_expected:
- results.append((ndframe, func_name, expected))
- results += [
- (ndframe, func, expected)
- for func, name in _cython_table
- if name == func_name
- ]
- return results
+ return request.param
-@pytest.fixture(params=["__eq__", "__ne__", "__le__", "__lt__", "__ge__", "__gt__"])
-def all_compare_operators(request):
+@pytest.fixture(params=["inner", "outer", "left", "right"])
+def join_type(request):
"""
- Fixture for dunder names for common compare operations
-
- * >=
- * >
- * ==
- * !=
- * <
- * <=
+ Fixture for trying all types of join operations.
"""
return request.param
-@pytest.fixture(params=["__le__", "__lt__", "__ge__", "__gt__"])
-def compare_operators_no_eq_ne(request):
+@pytest.fixture(params=["nlargest", "nsmallest"])
+def nselect_method(request):
"""
- Fixture for dunder names for compare operations except == and !=
-
- * >=
- * >
- * <
- * <=
+ Fixture for trying all nselect methods.
"""
return request.param
-@pytest.fixture(
- params=["__and__", "__rand__", "__or__", "__ror__", "__xor__", "__rxor__"]
-)
-def all_logical_operators(request):
+# ----------------------------------------------------------------
+# Missing values & co.
+# ----------------------------------------------------------------
+@pytest.fixture(params=[None, np.nan, pd.NaT, float("nan"), np.float("NaN"), pd.NA])
+def nulls_fixture(request):
"""
- Fixture for dunder names for common logical operations
-
- * |
- * &
- * ^
+ Fixture for each null type in pandas.
"""
return request.param
-@pytest.fixture(params=[None, "gzip", "bz2", "zip", "xz"])
-def compression(request):
+nulls_fixture2 = nulls_fixture # Generate cartesian product of nulls_fixture
+
+
+@pytest.fixture(params=[None, np.nan, pd.NaT])
+def unique_nulls_fixture(request):
"""
- Fixture for trying common compression types in compression tests.
+ Fixture for each null type in pandas, each null type exactly once.
"""
return request.param
-@pytest.fixture(params=["gzip", "bz2", "zip", "xz"])
-def compression_only(request):
- """
- Fixture for trying common compression types in compression tests excluding
- uncompressed case.
- """
- return request.param
+# Generate cartesian product of unique_nulls_fixture:
+unique_nulls_fixture2 = unique_nulls_fixture
-@pytest.fixture(params=[True, False])
-def writable(request):
+# ----------------------------------------------------------------
+# Classes
+# ----------------------------------------------------------------
+@pytest.fixture(params=[pd.Index, pd.Series], ids=["index", "series"])
+def index_or_series(request):
"""
- Fixture that an array is writable.
+ Fixture to parametrize over Index and Series, made necessary by a mypy
+ bug, giving an error:
+
+ List item 0 has incompatible type "Type[Series]"; expected "Type[PandasObject]"
+
+ See GH#29725
"""
return request.param
-@pytest.fixture(scope="module")
-def datetime_tz_utc():
+@pytest.fixture
+def dict_subclass():
"""
- Yields the UTC timezone object from the datetime module.
+ Fixture for a dictionary subclass.
"""
- return timezone.utc
+ class TestSubDict(dict):
+ def __init__(self, *args, **kwargs):
+ dict.__init__(self, *args, **kwargs)
-@pytest.fixture(params=["utc", "dateutil/UTC", utc, tzutc(), timezone.utc])
-def utc_fixture(request):
- """
- Fixture to provide variants of UTC timezone strings and tzinfo objects.
- """
- return request.param
+ return TestSubDict
-@pytest.fixture(params=["inner", "outer", "left", "right"])
-def join_type(request):
+@pytest.fixture
+def non_mapping_dict_subclass():
"""
- Fixture for trying all types of join operations.
+ Fixture for a non-mapping dictionary subclass.
"""
- return request.param
+
+ class TestNonDictMapping(abc.Mapping):
+ def __init__(self, underlying_dict):
+ self._data = underlying_dict
+
+ def __getitem__(self, key):
+ return self._data.__getitem__(key)
+
+ def __iter__(self):
+ return self._data.__iter__()
+
+ def __len__(self):
+ return self._data.__len__()
+
+ return TestNonDictMapping
+# ----------------------------------------------------------------
+# Indices
+# ----------------------------------------------------------------
@pytest.fixture
-def strict_data_files(pytestconfig):
+def multiindex_year_month_day_dataframe_random_data():
"""
- Returns the configuration for the test setting `--strict-data-files`.
+ DataFrame with 3 level MultiIndex (year, month, day) covering
+ first 100 business days from 2000-01-01 with random data
"""
- return pytestconfig.getoption("--strict-data-files")
+ tdf = tm.makeTimeDataFrame(100)
+ ymd = tdf.groupby([lambda x: x.year, lambda x: x.month, lambda x: x.day]).sum()
+ # use Int64Index, to make sure things work
+ ymd.index.set_levels([lev.astype("i8") for lev in ymd.index.levels], inplace=True)
+ ymd.index.set_names(["year", "month", "day"], inplace=True)
+ return ymd
-@pytest.fixture
-def datapath(strict_data_files):
+def _create_multiindex():
+ """
+ MultiIndex used to test the general functionality of this object
"""
- Get the path to a data file.
- Parameters
- ----------
- path : str
- Path to the file, relative to ``pandas/tests/``
+ # See Also: tests.multi.conftest.idx
+ major_axis = Index(["foo", "bar", "baz", "qux"])
+ minor_axis = Index(["one", "two"])
- Returns
- -------
- path including ``pandas/tests``.
+ major_codes = np.array([0, 0, 1, 2, 3, 3])
+ minor_codes = np.array([0, 1, 0, 1, 0, 1])
+ index_names = ["first", "second"]
+ mi = MultiIndex(
+ levels=[major_axis, minor_axis],
+ codes=[major_codes, minor_codes],
+ names=index_names,
+ verify_integrity=False,
+ )
+ return mi
- Raises
- ------
- ValueError
- If the path doesn't exist and the --strict-data-files option is set.
+
+indices_dict = {
+ "unicode": tm.makeUnicodeIndex(100),
+ "string": tm.makeStringIndex(100),
+ "datetime": tm.makeDateIndex(100),
+ "datetime-tz": tm.makeDateIndex(100, tz="US/Pacific"),
+ "period": tm.makePeriodIndex(100),
+ "timedelta": tm.makeTimedeltaIndex(100),
+ "int": tm.makeIntIndex(100),
+ "uint": tm.makeUIntIndex(100),
+ "range": tm.makeRangeIndex(100),
+ "float": tm.makeFloatIndex(100),
+ "bool": tm.makeBoolIndex(10),
+ "categorical": tm.makeCategoricalIndex(100),
+ "interval": tm.makeIntervalIndex(100),
+ "empty": Index([]),
+ "tuples": MultiIndex.from_tuples(zip(["foo", "bar", "baz"], [1, 2, 3])),
+ "multi": _create_multiindex(),
+ "repeats": Index([0, 0, 1, 1, 2, 2]),
+}
+
+
+@pytest.fixture(params=indices_dict.keys())
+def indices(request):
"""
- BASE_PATH = os.path.join(os.path.dirname(__file__), "tests")
+ Fixture for many "simple" kinds of indices.
- def deco(*args):
- path = os.path.join(BASE_PATH, *args)
- if not os.path.exists(path):
- if strict_data_files:
- raise ValueError(
- f"Could not find file {path} and --strict-data-files is set."
- )
- else:
- pytest.skip(f"Could not find {path}.")
- return path
+ These indices are unlikely to cover corner cases, e.g.
+ - no names
+ - no NaTs/NaNs
+ - no values near implementation bounds
+ - ...
+ """
+ # copy to avoid mutation, e.g. setting .name
+ return indices_dict[request.param].copy()
- return deco
+
+# ----------------------------------------------------------------
+# Series'
+# ----------------------------------------------------------------
+@pytest.fixture
+def empty_series():
+ return pd.Series([], index=[], dtype=np.float64)
@pytest.fixture
-def iris(datapath):
+def string_series():
"""
- The iris dataset as a DataFrame.
+ Fixture for Series of floats with Index of unique strings
"""
- return pd.read_csv(datapath("data", "iris.csv"))
+ s = tm.makeStringSeries()
+ s.name = "series"
+ return s
-@pytest.fixture(params=["nlargest", "nsmallest"])
-def nselect_method(request):
+@pytest.fixture
+def object_series():
"""
- Fixture for trying all nselect methods.
+ Fixture for Series of dtype object with Index of unique strings
"""
- return request.param
+ s = tm.makeObjectSeries()
+ s.name = "objects"
+ return s
-@pytest.fixture(params=["first", "last", False])
-def keep(request):
+@pytest.fixture
+def datetime_series():
"""
- Valid values for the 'keep' parameter used in
- .duplicated or .drop_duplicates
+ Fixture for Series of floats with DatetimeIndex
"""
- return request.param
+ s = tm.makeTimeSeries()
+ s.name = "ts"
+ return s
-@pytest.fixture(params=["left", "right", "both", "neither"])
-def closed(request):
- """
- Fixture for trying all interval closed parameters.
- """
- return request.param
+def _create_series(index):
+ """ Helper for the _series dict """
+ size = len(index)
+ data = np.random.randn(size)
+ return pd.Series(data, index=index, name="a")
-@pytest.fixture(params=["left", "right", "both", "neither"])
-def other_closed(request):
+_series = {
+ f"series-with-{index_id}-index": _create_series(index)
+ for index_id, index in indices_dict.items()
+}
+
+
+@pytest.fixture
+def series_with_simple_index(indices):
"""
- Secondary closed fixture to allow parametrizing over all pairs of closed.
+ Fixture for tests on series with changing types of indices.
"""
- return request.param
+ return _create_series(indices)
-@pytest.fixture(params=[None, np.nan, pd.NaT, float("nan"), np.float("NaN"), pd.NA])
-def nulls_fixture(request):
+_narrow_dtypes = [
+ np.float16,
+ np.float32,
+ np.int8,
+ np.int16,
+ np.int32,
+ np.uint8,
+ np.uint16,
+ np.uint32,
+]
+_narrow_series = {
+ f"{dtype.__name__}-series": tm.makeFloatSeries(name="a").astype(dtype)
+ for dtype in _narrow_dtypes
+}
+
+
+@pytest.fixture(params=_narrow_series.keys())
+def narrow_series(request):
"""
- Fixture for each null type in pandas.
+ Fixture for Series with low precision data types
"""
- return request.param
+ # copy to avoid mutation, e.g. setting .name
+ return _narrow_series[request.param].copy()
-nulls_fixture2 = nulls_fixture # Generate cartesian product of nulls_fixture
+_index_or_series_objs = {**indices_dict, **_series, **_narrow_series}
-@pytest.fixture(params=[None, np.nan, pd.NaT])
-def unique_nulls_fixture(request):
+@pytest.fixture(params=_index_or_series_objs.keys())
+def index_or_series_obj(request):
"""
- Fixture for each null type in pandas, each null type exactly once.
+ Fixture for tests on indexes, series and series with a narrow dtype
+ copy to avoid mutation, e.g. setting .name
"""
- return request.param
+ return _index_or_series_objs[request.param].copy(deep=True)
-# Generate cartesian product of unique_nulls_fixture:
-unique_nulls_fixture2 = unique_nulls_fixture
+# ----------------------------------------------------------------
+# DataFrames
+# ----------------------------------------------------------------
+@pytest.fixture
+def float_frame():
+ """
+ Fixture for DataFrame of floats with index of unique strings
+ Columns are ['A', 'B', 'C', 'D'].
-TIMEZONES = [
- None,
- "UTC",
- "US/Eastern",
- "Asia/Tokyo",
+ A B C D
+ P7GACiRnxd -0.465578 -0.361863 0.886172 -0.053465
+ qZKh6afn8n -0.466693 -0.373773 0.266873 1.673901
+ tkp0r6Qble 0.148691 -0.059051 0.174817 1.598433
+ wP70WOCtv8 0.133045 -0.581994 -0.992240 0.261651
+ M2AeYQMnCz -1.207959 -0.185775 0.588206 0.563938
+ QEPzyGDYDo -0.381843 -0.758281 0.502575 -0.565053
+ r78Jwns6dn -0.653707 0.883127 0.682199 0.206159
+ ... ... ... ... ...
+ IHEGx9NO0T -0.277360 0.113021 -1.018314 0.196316
+ lPMj8K27FA -1.313667 -0.604776 -1.305618 -0.863999
+ qa66YMWQa5 1.110525 0.475310 -0.747865 0.032121
+ yOa0ATsmcE -0.431457 0.067094 0.096567 -0.264962
+ 65znX3uRNG 1.528446 0.160416 -0.109635 -0.032987
+ eCOBvKqf3e 0.235281 1.622222 0.781255 0.392871
+ xSucinXxuV -1.263557 0.252799 -0.552247 0.400426
+
+ [30 rows x 4 columns]
+ """
+ return DataFrame(tm.getSeriesData())
+
+
+# ----------------------------------------------------------------
+# Operators & Operations
+# ----------------------------------------------------------------
+_all_arithmetic_operators = [
+ "__add__",
+ "__radd__",
+ "__sub__",
+ "__rsub__",
+ "__mul__",
+ "__rmul__",
+ "__floordiv__",
+ "__rfloordiv__",
+ "__truediv__",
+ "__rtruediv__",
+ "__pow__",
+ "__rpow__",
+ "__mod__",
+ "__rmod__",
+]
+
+
+@pytest.fixture(params=_all_arithmetic_operators)
+def all_arithmetic_operators(request):
+ """
+ Fixture for dunder names for common arithmetic operations.
+ """
+ return request.param
+
+
+@pytest.fixture(
+ params=[
+ operator.add,
+ ops.radd,
+ operator.sub,
+ ops.rsub,
+ operator.mul,
+ ops.rmul,
+ operator.truediv,
+ ops.rtruediv,
+ operator.floordiv,
+ ops.rfloordiv,
+ operator.mod,
+ ops.rmod,
+ operator.pow,
+ ops.rpow,
+ ]
+)
+def all_arithmetic_functions(request):
+ """
+ Fixture for operator and roperator arithmetic functions.
+
+ Notes
+ -----
+ This includes divmod and rdivmod, whereas all_arithmetic_operators
+ does not.
+ """
+ return request.param
+
+
+_all_numeric_reductions = [
+ "sum",
+ "max",
+ "min",
+ "mean",
+ "prod",
+ "std",
+ "var",
+ "median",
+ "kurt",
+ "skew",
+]
+
+
+@pytest.fixture(params=_all_numeric_reductions)
+def all_numeric_reductions(request):
+ """
+ Fixture for numeric reduction names.
+ """
+ return request.param
+
+
+_all_boolean_reductions = ["all", "any"]
+
+
+@pytest.fixture(params=_all_boolean_reductions)
+def all_boolean_reductions(request):
+ """
+ Fixture for boolean reduction names.
+ """
+ return request.param
+
+
+@pytest.fixture(params=["__eq__", "__ne__", "__le__", "__lt__", "__ge__", "__gt__"])
+def all_compare_operators(request):
+ """
+ Fixture for dunder names for common compare operations
+
+ * >=
+ * >
+ * ==
+ * !=
+ * <
+ * <=
+ """
+ return request.param
+
+
+@pytest.fixture(params=["__le__", "__lt__", "__ge__", "__gt__"])
+def compare_operators_no_eq_ne(request):
+ """
+ Fixture for dunder names for compare operations except == and !=
+
+ * >=
+ * >
+ * <
+ * <=
+ """
+ return request.param
+
+
+@pytest.fixture(
+ params=["__and__", "__rand__", "__or__", "__ror__", "__xor__", "__rxor__"]
+)
+def all_logical_operators(request):
+ """
+ Fixture for dunder names for common logical operations
+
+ * |
+ * &
+ * ^
+ """
+ return request.param
+
+
+# ----------------------------------------------------------------
+# Data sets/files
+# ----------------------------------------------------------------
+@pytest.fixture
+def strict_data_files(pytestconfig):
+ """
+ Returns the configuration for the test setting `--strict-data-files`.
+ """
+ return pytestconfig.getoption("--strict-data-files")
+
+
+@pytest.fixture
+def datapath(strict_data_files):
+ """
+ Get the path to a data file.
+
+ Parameters
+ ----------
+ path : str
+ Path to the file, relative to ``pandas/tests/``
+
+ Returns
+ -------
+ path including ``pandas/tests``.
+
+ Raises
+ ------
+ ValueError
+ If the path doesn't exist and the --strict-data-files option is set.
+ """
+ BASE_PATH = os.path.join(os.path.dirname(__file__), "tests")
+
+ def deco(*args):
+ path = os.path.join(BASE_PATH, *args)
+ if not os.path.exists(path):
+ if strict_data_files:
+ raise ValueError(
+ f"Could not find file {path} and --strict-data-files is set."
+ )
+ else:
+ pytest.skip(f"Could not find {path}.")
+ return path
+
+ return deco
+
+
+@pytest.fixture
+def iris(datapath):
+ """
+ The iris dataset as a DataFrame.
+ """
+ return pd.read_csv(datapath("data", "iris.csv"))
+
+
+# ----------------------------------------------------------------
+# Time zones
+# ----------------------------------------------------------------
+TIMEZONES = [
+ None,
+ "UTC",
+ "US/Eastern",
+ "Asia/Tokyo",
"dateutil/US/Pacific",
"dateutil/Asia/Singapore",
tzutc(),
@@ -514,6 +764,22 @@ def tz_aware_fixture(request):
tz_aware_fixture2 = tz_aware_fixture
+@pytest.fixture(scope="module")
+def datetime_tz_utc():
+ """
+ Yields the UTC timezone object from the datetime module.
+ """
+ return timezone.utc
+
+
+@pytest.fixture(params=["utc", "dateutil/UTC", utc, tzutc(), timezone.utc])
+def utc_fixture(request):
+ """
+ Fixture to provide variants of UTC timezone strings and tzinfo objects.
+ """
+ return request.param
+
+
# ----------------------------------------------------------------
# Dtypes
# ----------------------------------------------------------------
@@ -827,291 +1093,81 @@ def any_skipna_inferred_dtype(request):
return inferred_dtype, values
-@pytest.fixture(
- params=[
- getattr(pd.offsets, o)
- for o in pd.offsets.__all__
- if issubclass(getattr(pd.offsets, o), pd.offsets.Tick)
- ]
-)
-def tick_classes(request):
- """
- Fixture for Tick based datetime offsets available for a time series.
+# ----------------------------------------------------------------
+# Misc
+# ----------------------------------------------------------------
+@pytest.fixture
+def ip():
"""
- return request.param
+ Get an instance of IPython.InteractiveShell.
+ Will raise a skip if IPython is not installed.
+ """
+ pytest.importorskip("IPython", minversion="6.0.0")
+ from IPython.core.interactiveshell import InteractiveShell
-# ----------------------------------------------------------------
-# Global setup for tests using Hypothesis
+ return InteractiveShell()
-# Registering these strategies makes them globally available via st.from_type,
-# which is use for offsets in tests/tseries/offsets/test_offsets_properties.py
-for name in "MonthBegin MonthEnd BMonthBegin BMonthEnd".split():
- cls = getattr(pd.tseries.offsets, name)
- st.register_type_strategy(
- cls, st.builds(cls, n=st.integers(-99, 99), normalize=st.booleans())
- )
-
-for name in "YearBegin YearEnd BYearBegin BYearEnd".split():
- cls = getattr(pd.tseries.offsets, name)
- st.register_type_strategy(
- cls,
- st.builds(
- cls,
- n=st.integers(-5, 5),
- normalize=st.booleans(),
- month=st.integers(min_value=1, max_value=12),
- ),
- )
-
-for name in "QuarterBegin QuarterEnd BQuarterBegin BQuarterEnd".split():
- cls = getattr(pd.tseries.offsets, name)
- st.register_type_strategy(
- cls,
- st.builds(
- cls,
- n=st.integers(-24, 24),
- normalize=st.booleans(),
- startingMonth=st.integers(min_value=1, max_value=12),
- ),
- )
-
-
-@pytest.fixture
-def empty_series():
- return pd.Series([], index=[], dtype=np.float64)
-
-
-@pytest.fixture
-def datetime_series():
- """
- Fixture for Series of floats with DatetimeIndex
- """
- s = tm.makeTimeSeries()
- s.name = "ts"
- return s
-
-
-@pytest.fixture
-def string_series():
- """
- Fixture for Series of floats with Index of unique strings
- """
- s = tm.makeStringSeries()
- s.name = "series"
- return s
-
-
-@pytest.fixture
-def object_series():
- """
- Fixture for Series of dtype object with Index of unique strings
+@pytest.fixture(params=["bsr", "coo", "csc", "csr", "dia", "dok", "lil"])
+def spmatrix(request):
"""
- s = tm.makeObjectSeries()
- s.name = "objects"
- return s
-
-
-@pytest.fixture
-def float_frame():
+ Yields scipy sparse matrix classes.
"""
- Fixture for DataFrame of floats with index of unique strings
+ from scipy import sparse
- Columns are ['A', 'B', 'C', 'D'].
+ return getattr(sparse, request.param + "_matrix")
- A B C D
- P7GACiRnxd -0.465578 -0.361863 0.886172 -0.053465
- qZKh6afn8n -0.466693 -0.373773 0.266873 1.673901
- tkp0r6Qble 0.148691 -0.059051 0.174817 1.598433
- wP70WOCtv8 0.133045 -0.581994 -0.992240 0.261651
- M2AeYQMnCz -1.207959 -0.185775 0.588206 0.563938
- QEPzyGDYDo -0.381843 -0.758281 0.502575 -0.565053
- r78Jwns6dn -0.653707 0.883127 0.682199 0.206159
- ... ... ... ... ...
- IHEGx9NO0T -0.277360 0.113021 -1.018314 0.196316
- lPMj8K27FA -1.313667 -0.604776 -1.305618 -0.863999
- qa66YMWQa5 1.110525 0.475310 -0.747865 0.032121
- yOa0ATsmcE -0.431457 0.067094 0.096567 -0.264962
- 65znX3uRNG 1.528446 0.160416 -0.109635 -0.032987
- eCOBvKqf3e 0.235281 1.622222 0.781255 0.392871
- xSucinXxuV -1.263557 0.252799 -0.552247 0.400426
- [30 rows x 4 columns]
- """
- return DataFrame(tm.getSeriesData())
+_cython_table = pd.core.base.SelectionMixin._cython_table.items()
-@pytest.fixture(params=[pd.Index, pd.Series], ids=["index", "series"])
-def index_or_series(request):
+@pytest.fixture(params=list(_cython_table))
+def cython_table_items(request):
"""
- Fixture to parametrize over Index and Series, made necessary by a mypy
- bug, giving an error:
-
- List item 0 has incompatible type "Type[Series]"; expected "Type[PandasObject]"
-
- See GH#29725
+ Yields a tuple of a function and its corresponding name. These correspond to
+ the list of aggregator "Cython functions" used on selected table items.
"""
return request.param
-@pytest.fixture
-def dict_subclass():
- """
- Fixture for a dictionary subclass.
- """
-
- class TestSubDict(dict):
- def __init__(self, *args, **kwargs):
- dict.__init__(self, *args, **kwargs)
-
- return TestSubDict
-
-
-@pytest.fixture
-def non_mapping_dict_subclass():
- """
- Fixture for a non-mapping dictionary subclass.
- """
-
- class TestNonDictMapping(abc.Mapping):
- def __init__(self, underlying_dict):
- self._data = underlying_dict
-
- def __getitem__(self, key):
- return self._data.__getitem__(key)
-
- def __iter__(self):
- return self._data.__iter__()
-
- def __len__(self):
- return self._data.__len__()
-
- return TestNonDictMapping
-
-
-def _gen_mi():
- # a MultiIndex used to test the general functionality of this object
-
- # See Also: tests.multi.conftest.idx
- major_axis = Index(["foo", "bar", "baz", "qux"])
- minor_axis = Index(["one", "two"])
-
- major_codes = np.array([0, 0, 1, 2, 3, 3])
- minor_codes = np.array([0, 1, 0, 1, 0, 1])
- index_names = ["first", "second"]
- mi = MultiIndex(
- levels=[major_axis, minor_axis],
- codes=[major_codes, minor_codes],
- names=index_names,
- verify_integrity=False,
- )
- return mi
-
-
-indices_dict = {
- "unicode": tm.makeUnicodeIndex(100),
- "string": tm.makeStringIndex(100),
- "datetime": tm.makeDateIndex(100),
- "datetime-tz": tm.makeDateIndex(100, tz="US/Pacific"),
- "period": tm.makePeriodIndex(100),
- "timedelta": tm.makeTimedeltaIndex(100),
- "int": tm.makeIntIndex(100),
- "uint": tm.makeUIntIndex(100),
- "range": tm.makeRangeIndex(100),
- "float": tm.makeFloatIndex(100),
- "bool": tm.makeBoolIndex(10),
- "categorical": tm.makeCategoricalIndex(100),
- "interval": tm.makeIntervalIndex(100),
- "empty": Index([]),
- "tuples": MultiIndex.from_tuples(zip(["foo", "bar", "baz"], [1, 2, 3])),
- "multi": _gen_mi(),
- "repeats": Index([0, 0, 1, 1, 2, 2]),
-}
-
-
-@pytest.fixture(params=indices_dict.keys())
-def indices(request):
- """
- Fixture for many "simple" kinds of indices.
-
- These indices are unlikely to cover corner cases, e.g.
- - no names
- - no NaTs/NaNs
- - no values near implementation bounds
- - ...
- """
- # copy to avoid mutation, e.g. setting .name
- return indices_dict[request.param].copy()
-
-
-def _create_series(index):
- """ Helper for the _series dict """
- size = len(index)
- data = np.random.randn(size)
- return pd.Series(data, index=index, name="a")
-
-
-_series = {
- f"series-with-{index_id}-index": _create_series(index)
- for index_id, index in indices_dict.items()
-}
-
-
-@pytest.fixture
-def series_with_simple_index(indices):
- """
- Fixture for tests on series with changing types of indices.
- """
- return _create_series(indices)
-
-
-_narrow_dtypes = [
- np.float16,
- np.float32,
- np.int8,
- np.int16,
- np.int32,
- np.uint8,
- np.uint16,
- np.uint32,
-]
-_narrow_series = {
- f"{dtype.__name__}-series": tm.makeFloatSeries(name="a").astype(dtype)
- for dtype in _narrow_dtypes
-}
-
-
-@pytest.fixture(params=_narrow_series.keys())
-def narrow_series(request):
- """
- Fixture for Series with low precision data types
+def _get_cython_table_params(ndframe, func_names_and_expected):
"""
- # copy to avoid mutation, e.g. setting .name
- return _narrow_series[request.param].copy()
-
-
-_index_or_series_objs = {**indices_dict, **_series, **_narrow_series}
+ Combine frame, functions from SelectionMixin._cython_table
+ keys and expected result.
+ Parameters
+ ----------
+ ndframe : DataFrame or Series
+ func_names_and_expected : Sequence of two items
+ The first item is the name of an NDFrame method ('sum', 'prod'), etc.
+ The second item is the expected return value.
-@pytest.fixture(params=_index_or_series_objs.keys())
-def index_or_series_obj(request):
- """
- Fixture for tests on indexes, series and series with a narrow dtype
- copy to avoid mutation, e.g. setting .name
+ Returns
+ -------
+ list
+ List of three items (DataFrame, function, expected result)
"""
- return _index_or_series_objs[request.param].copy(deep=True)
+ results = []
+ for func_name, expected in func_names_and_expected:
+ results.append((ndframe, func_name, expected))
+ results += [
+ (ndframe, func, expected)
+ for func, name in _cython_table
+ if name == func_name
+ ]
+ return results
-@pytest.fixture
-def multiindex_year_month_day_dataframe_random_data():
+@pytest.fixture(
+ params=[
+ getattr(pd.offsets, o)
+ for o in pd.offsets.__all__
+ if issubclass(getattr(pd.offsets, o), pd.offsets.Tick)
+ ]
+)
+def tick_classes(request):
"""
- DataFrame with 3 level MultiIndex (year, month, day) covering
- first 100 business days from 2000-01-01 with random data
+ Fixture for Tick based datetime offsets available for a time series.
"""
- tdf = tm.makeTimeDataFrame(100)
- ymd = tdf.groupby([lambda x: x.year, lambda x: x.month, lambda x: x.day]).sum()
- # use Int64Index, to make sure things work
- ymd.index.set_levels([lev.astype("i8") for lev in ymd.index.levels], inplace=True)
- ymd.index.set_names(["year", "month", "day"], inplace=True)
- return ymd
+ return request.param
| part of #31989
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
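For illustration, a minimal sketch of how a test module would consume one of the fixtures moved into `conftest.py` above (the test name is hypothetical; the same pattern applies to `utc_fixture`, `cython_table_items`, and the rest):

```python
import numpy as np

def test_sparse_roundtrip(spmatrix):
    # pytest injects one scipy sparse matrix class per format
    # ("bsr", "coo", "csc", ...) parametrized in the shared fixture
    mat = spmatrix(np.eye(3))
    assert (mat.toarray() == np.eye(3)).all()
```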
| https://api.github.com/repos/pandas-dev/pandas/pulls/32789 | 2020-03-17T21:41:09Z | 2020-03-20T07:48:07Z | 2020-03-20T07:48:07Z | 2020-03-20T07:48:21Z |
replaced one occurrence of `Appender` by `doc` decorator | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 6b0f7de11a3e7..83df09d6b2cf3 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -1912,117 +1912,7 @@ def _repr_data_resource_(self):
%(klass)s in Markdown-friendly format.
"""
- _shared_docs[
- "to_excel"
- ] = """
- Write %(klass)s to an Excel sheet.
-
- To write a single %(klass)s to an Excel .xlsx file it is only necessary to
- specify a target file name. To write to multiple sheets it is necessary to
- create an `ExcelWriter` object with a target file name, and specify a sheet
- in the file to write to.
-
- Multiple sheets may be written to by specifying unique `sheet_name`.
- With all data written to the file it is necessary to save the changes.
- Note that creating an `ExcelWriter` object with a file name that already
- exists will result in the contents of the existing file being erased.
-
- Parameters
- ----------
- excel_writer : str or ExcelWriter object
- File path or existing ExcelWriter.
- sheet_name : str, default 'Sheet1'
- Name of sheet which will contain DataFrame.
- na_rep : str, default ''
- Missing data representation.
- float_format : str, optional
- Format string for floating point numbers. For example
- ``float_format="%%.2f"`` will format 0.1234 to 0.12.
- columns : sequence or list of str, optional
- Columns to write.
- header : bool or list of str, default True
- Write out the column names. If a list of string is given it is
- assumed to be aliases for the column names.
- index : bool, default True
- Write row names (index).
- index_label : str or sequence, optional
- Column label for index column(s) if desired. If not specified, and
- `header` and `index` are True, then the index names are used. A
- sequence should be given if the DataFrame uses MultiIndex.
- startrow : int, default 0
- Upper left cell row to dump data frame.
- startcol : int, default 0
- Upper left cell column to dump data frame.
- engine : str, optional
- Write engine to use, 'openpyxl' or 'xlsxwriter'. You can also set this
- via the options ``io.excel.xlsx.writer``, ``io.excel.xls.writer``, and
- ``io.excel.xlsm.writer``.
- merge_cells : bool, default True
- Write MultiIndex and Hierarchical Rows as merged cells.
- encoding : str, optional
- Encoding of the resulting excel file. Only necessary for xlwt,
- other writers support unicode natively.
- inf_rep : str, default 'inf'
- Representation for infinity (there is no native representation for
- infinity in Excel).
- verbose : bool, default True
- Display more information in the error logs.
- freeze_panes : tuple of int (length 2), optional
- Specifies the one-based bottommost row and rightmost column that
- is to be frozen.
-
- See Also
- --------
- to_csv : Write DataFrame to a comma-separated values (csv) file.
- ExcelWriter : Class for writing DataFrame objects into excel sheets.
- read_excel : Read an Excel file into a pandas DataFrame.
- read_csv : Read a comma-separated values (csv) file into DataFrame.
-
- Notes
- -----
- For compatibility with :meth:`~DataFrame.to_csv`,
- to_excel serializes lists and dicts to strings before writing.
-
- Once a workbook has been saved it is not possible to write further data
- without rewriting the whole workbook.
-
- Examples
- --------
-
- Create, write to and save a workbook:
-
- >>> df1 = pd.DataFrame([['a', 'b'], ['c', 'd']],
- ... index=['row 1', 'row 2'],
- ... columns=['col 1', 'col 2'])
- >>> df1.to_excel("output.xlsx") # doctest: +SKIP
-
- To specify the sheet name:
-
- >>> df1.to_excel("output.xlsx",
- ... sheet_name='Sheet_name_1') # doctest: +SKIP
-
- If you wish to write to more than one sheet in the workbook, it is
- necessary to specify an ExcelWriter object:
-
- >>> df2 = df1.copy()
- >>> with pd.ExcelWriter('output.xlsx') as writer: # doctest: +SKIP
- ... df1.to_excel(writer, sheet_name='Sheet_name_1')
- ... df2.to_excel(writer, sheet_name='Sheet_name_2')
-
- ExcelWriter can also be used to append to an existing Excel file:
-
- >>> with pd.ExcelWriter('output.xlsx',
- ... mode='a') as writer: # doctest: +SKIP
- ... df.to_excel(writer, sheet_name='Sheet_name_3')
-
- To set the library that is used to write the Excel file,
- you can pass the `engine` keyword (the default engine is
- automatically chosen depending on the file extension):
-
- >>> df1.to_excel('output1.xlsx', engine='xlsxwriter') # doctest: +SKIP
- """
-
- @Appender(_shared_docs["to_excel"] % dict(klass="object"))
+ @doc(klass="object")
def to_excel(
self,
excel_writer,
@@ -2042,6 +1932,114 @@ def to_excel(
verbose=True,
freeze_panes=None,
) -> None:
+ """
+ Write {klass} to an Excel sheet.
+
+ To write a single {klass} to an Excel .xlsx file it is only necessary to
+ specify a target file name. To write to multiple sheets it is necessary to
+ create an `ExcelWriter` object with a target file name, and specify a sheet
+ in the file to write to.
+
+ Multiple sheets may be written to by specifying unique `sheet_name`.
+ With all data written to the file it is necessary to save the changes.
+ Note that creating an `ExcelWriter` object with a file name that already
+ exists will result in the contents of the existing file being erased.
+
+ Parameters
+ ----------
+ excel_writer : str or ExcelWriter object
+ File path or existing ExcelWriter.
+ sheet_name : str, default 'Sheet1'
+ Name of sheet which will contain DataFrame.
+ na_rep : str, default ''
+ Missing data representation.
+ float_format : str, optional
+ Format string for floating point numbers. For example
+ ``float_format="%.2f"`` will format 0.1234 to 0.12.
+ columns : sequence or list of str, optional
+ Columns to write.
+ header : bool or list of str, default True
+ Write out the column names. If a list of string is given it is
+ assumed to be aliases for the column names.
+ index : bool, default True
+ Write row names (index).
+ index_label : str or sequence, optional
+ Column label for index column(s) if desired. If not specified, and
+ `header` and `index` are True, then the index names are used. A
+ sequence should be given if the DataFrame uses MultiIndex.
+ startrow : int, default 0
+ Upper left cell row to dump data frame.
+ startcol : int, default 0
+ Upper left cell column to dump data frame.
+ engine : str, optional
+ Write engine to use, 'openpyxl' or 'xlsxwriter'. You can also set this
+ via the options ``io.excel.xlsx.writer``, ``io.excel.xls.writer``, and
+ ``io.excel.xlsm.writer``.
+ merge_cells : bool, default True
+ Write MultiIndex and Hierarchical Rows as merged cells.
+ encoding : str, optional
+ Encoding of the resulting excel file. Only necessary for xlwt,
+ other writers support unicode natively.
+ inf_rep : str, default 'inf'
+ Representation for infinity (there is no native representation for
+ infinity in Excel).
+ verbose : bool, default True
+ Display more information in the error logs.
+ freeze_panes : tuple of int (length 2), optional
+ Specifies the one-based bottommost row and rightmost column that
+ is to be frozen.
+
+ See Also
+ --------
+ to_csv : Write DataFrame to a comma-separated values (csv) file.
+ ExcelWriter : Class for writing DataFrame objects into excel sheets.
+ read_excel : Read an Excel file into a pandas DataFrame.
+ read_csv : Read a comma-separated values (csv) file into DataFrame.
+
+ Notes
+ -----
+ For compatibility with :meth:`~DataFrame.to_csv`,
+ to_excel serializes lists and dicts to strings before writing.
+
+ Once a workbook has been saved it is not possible to write further data
+ without rewriting the whole workbook.
+
+ Examples
+ --------
+
+ Create, write to and save a workbook:
+
+ >>> df1 = pd.DataFrame([['a', 'b'], ['c', 'd']],
+ ... index=['row 1', 'row 2'],
+ ... columns=['col 1', 'col 2'])
+ >>> df1.to_excel("output.xlsx") # doctest: +SKIP
+
+ To specify the sheet name:
+
+ >>> df1.to_excel("output.xlsx",
+ ... sheet_name='Sheet_name_1') # doctest: +SKIP
+
+ If you wish to write to more than one sheet in the workbook, it is
+ necessary to specify an ExcelWriter object:
+
+ >>> df2 = df1.copy()
+ >>> with pd.ExcelWriter('output.xlsx') as writer: # doctest: +SKIP
+ ... df1.to_excel(writer, sheet_name='Sheet_name_1')
+ ... df2.to_excel(writer, sheet_name='Sheet_name_2')
+
+ ExcelWriter can also be used to append to an existing Excel file:
+
+ >>> with pd.ExcelWriter('output.xlsx',
+ ... mode='a') as writer: # doctest: +SKIP
+ ... df.to_excel(writer, sheet_name='Sheet_name_3')
+
+ To set the library that is used to write the Excel file,
+ you can pass the `engine` keyword (the default engine is
+ automatically chosen depending on the file extension):
+
+ >>> df1.to_excel('output1.xlsx', engine='xlsxwriter') # doctest: +SKIP
+ """
+
df = self if isinstance(self, ABCDataFrame) else self.to_frame()
from pandas.io.formats.excel import ExcelFormatter
diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index 9cdb56dc6362a..718534e42ec25 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -27,7 +27,7 @@
from pandas._libs import lib
from pandas._typing import Axis, FrameOrSeries, FrameOrSeriesUnion, Label
from pandas.compat._optional import import_optional_dependency
-from pandas.util._decorators import Appender
+from pandas.util._decorators import doc
from pandas.core.dtypes.common import is_float
@@ -35,7 +35,7 @@
from pandas.api.types import is_dict_like, is_list_like
import pandas.core.common as com
from pandas.core.frame import DataFrame
-from pandas.core.generic import _shared_docs
+from pandas.core.generic import NDFrame
from pandas.core.indexing import _maybe_numeric_slice, _non_reducing_slice
jinja2 = import_optional_dependency("jinja2", extra="DataFrame.style requires jinja2.")
@@ -192,18 +192,7 @@ def _repr_html_(self) -> str:
"""
return self.render()
- @Appender(
- _shared_docs["to_excel"]
- % dict(
- axes="index, columns",
- klass="Styler",
- axes_single_arg="{0 or 'index', 1 or 'columns'}",
- optional_by="""
- by : str or list of str
- Name or list of names which refer to the axis items.""",
- versionadded_to_excel="\n .. versionadded:: 0.20",
- )
- )
+ @doc(NDFrame.to_excel, klass="Styler")
def to_excel(
self,
excel_writer,
| - [X] xref #31942
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] replaces one use of `Appender` in pandas/core/generic.py with `doc`
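For reference, a hedged sketch of the docstring templating that `doc` performs, assuming its behaviour at the time of this PR (the decorated function is hypothetical):

```python
from pandas.util._decorators import doc

@doc(klass="DataFrame")
def summarize(obj):
    """Summarize a {klass}."""

# the {klass} placeholder is substituted at decoration time
assert summarize.__doc__ == "Summarize a DataFrame."
```

Passing a callable first, as in `@doc(NDFrame.to_excel, klass="Styler")` above, reuses the parent's docstring template instead of duplicating it.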
| https://api.github.com/repos/pandas-dev/pandas/pulls/32787 | 2020-03-17T21:08:17Z | 2020-03-20T22:19:09Z | 2020-03-20T22:19:09Z | 2020-03-20T22:21:11Z |
CLN: remove unnecessary alias | diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 88cefd3ebfebf..da334561385d6 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -825,17 +825,15 @@ def as_array(self, transpose: bool = False) -> np.ndarray:
arr = np.empty(self.shape, dtype=float)
return arr.transpose() if transpose else arr
- mgr = self
-
- if self._is_single_block and mgr.blocks[0].is_datetimetz:
+ if self._is_single_block and self.blocks[0].is_datetimetz:
# TODO(Block.get_values): Make DatetimeTZBlock.get_values
# always be object dtype. Some callers seem to want the
# DatetimeArray (previously DTI)
- arr = mgr.blocks[0].get_values(dtype=object)
+ arr = self.blocks[0].get_values(dtype=object)
elif self._is_single_block or not self.is_mixed_type:
- arr = np.asarray(mgr.blocks[0].get_values())
+ arr = np.asarray(self.blocks[0].get_values())
else:
- arr = mgr._interleave()
+ arr = self._interleave()
return arr.transpose() if transpose else arr
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/32786 | 2020-03-17T20:54:56Z | 2020-03-17T22:03:25Z | 2020-03-17T22:03:25Z | 2020-03-17T22:13:44Z |
Backport PR #32783 on branch 1.0.x (DOC: 1.0.3 release date) | diff --git a/doc/source/whatsnew/v1.0.3.rst b/doc/source/whatsnew/v1.0.3.rst
index 0ca5f5f548885..26d06433bda0c 100644
--- a/doc/source/whatsnew/v1.0.3.rst
+++ b/doc/source/whatsnew/v1.0.3.rst
@@ -1,7 +1,7 @@
.. _whatsnew_103:
-What's new in 1.0.3 (March ??, 2020)
+What's new in 1.0.3 (March 17, 2020)
------------------------------------
These are the changes in pandas 1.0.3. See :ref:`release` for a full changelog
| Backport PR #32783: DOC: 1.0.3 release date | https://api.github.com/repos/pandas-dev/pandas/pulls/32784 | 2020-03-17T19:34:00Z | 2020-03-17T20:52:03Z | 2020-03-17T20:52:03Z | 2020-03-17T20:52:03Z |
DOC: 1.0.3 release date | diff --git a/doc/source/whatsnew/v1.0.3.rst b/doc/source/whatsnew/v1.0.3.rst
index 482222fbddbb8..14f982212f1e5 100644
--- a/doc/source/whatsnew/v1.0.3.rst
+++ b/doc/source/whatsnew/v1.0.3.rst
@@ -1,7 +1,7 @@
.. _whatsnew_103:
-What's new in 1.0.3 (March ??, 2020)
+What's new in 1.0.3 (March 17, 2020)
------------------------------------
These are the changes in pandas 1.0.3. See :ref:`release` for a full changelog
| https://api.github.com/repos/pandas-dev/pandas/pulls/32783 | 2020-03-17T19:33:00Z | 2020-03-17T19:33:22Z | 2020-03-17T19:33:22Z | 2020-03-17T19:34:11Z |
|
BUG/API: prohibit dtype-changing IntervalArray.__setitem__ | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index cbfc6d63e8ea3..8e5b083b7b058 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -346,7 +346,7 @@ Strings
Interval
^^^^^^^^
--
+- Bug in :class:`IntervalArray` incorrectly allowing the underlying data to be changed when setting values (:issue:`32782`)
-
Indexing
diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index 22ce5a6f87a43..220b70ff71b28 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -542,19 +542,19 @@ def __setitem__(self, key, value):
msg = f"'value' should be an interval type, got {type(value)} instead."
raise TypeError(msg) from err
+ if needs_float_conversion:
+ raise ValueError("Cannot set float NaN to integer-backed IntervalArray")
+
key = check_array_indexer(self, key)
+
# Need to ensure that left and right are updated atomically, so we're
# forced to copy, update the copy, and swap in the new values.
left = self.left.copy(deep=True)
- if needs_float_conversion:
- left = left.astype("float")
- left.values[key] = value_left
+ left._values[key] = value_left
self._left = left
right = self.right.copy(deep=True)
- if needs_float_conversion:
- right = right.astype("float")
- right.values[key] = value_right
+ right._values[key] = value_right
self._right = right
def __eq__(self, other):
diff --git a/pandas/tests/arrays/interval/test_interval.py b/pandas/tests/arrays/interval/test_interval.py
index 7e7762d8973a0..fef11f0ff3bb2 100644
--- a/pandas/tests/arrays/interval/test_interval.py
+++ b/pandas/tests/arrays/interval/test_interval.py
@@ -104,6 +104,13 @@ class TestSetitem:
def test_set_na(self, left_right_dtypes):
left, right = left_right_dtypes
result = IntervalArray.from_arrays(left, right)
+
+ if result.dtype.subtype.kind in ["i", "u"]:
+ msg = "Cannot set float NaN to integer-backed IntervalArray"
+ with pytest.raises(ValueError, match=msg):
+ result[0] = np.NaN
+ return
+
result[0] = np.nan
expected_left = Index([left._na_value] + list(left[1:]))
@@ -182,7 +189,7 @@ def test_arrow_array_missing():
import pyarrow as pa
from pandas.core.arrays._arrow_utils import ArrowIntervalType
- arr = IntervalArray.from_breaks([0, 1, 2, 3])
+ arr = IntervalArray.from_breaks([0.0, 1.0, 2.0, 3.0])
arr[1] = None
result = pa.array(arr)
@@ -209,8 +216,8 @@ def test_arrow_array_missing():
@pyarrow_skip
@pytest.mark.parametrize(
"breaks",
- [[0, 1, 2, 3], pd.date_range("2017", periods=4, freq="D")],
- ids=["int", "datetime64[ns]"],
+ [[0.0, 1.0, 2.0, 3.0], pd.date_range("2017", periods=4, freq="D")],
+ ids=["float", "datetime64[ns]"],
)
def test_arrow_table_roundtrip(breaks):
import pyarrow as pa
diff --git a/pandas/tests/series/methods/test_convert_dtypes.py b/pandas/tests/series/methods/test_convert_dtypes.py
index a41f893e3753f..dd4bf642e68e8 100644
--- a/pandas/tests/series/methods/test_convert_dtypes.py
+++ b/pandas/tests/series/methods/test_convert_dtypes.py
@@ -3,6 +3,8 @@
import numpy as np
import pytest
+from pandas.core.dtypes.common import is_interval_dtype
+
import pandas as pd
import pandas._testing as tm
@@ -266,7 +268,12 @@ def test_convert_dtypes(self, data, maindtype, params, answerdict):
# Test that it is a copy
copy = series.copy(deep=True)
- ns[ns.notna()] = np.nan
+ if is_interval_dtype(ns.dtype) and ns.dtype.subtype.kind in ["i", "u"]:
+ msg = "Cannot set float NaN to integer-backed IntervalArray"
+ with pytest.raises(ValueError, match=msg):
+ ns[ns.notna()] = np.nan
+ else:
+ ns[ns.notna()] = np.nan
# Make sure original not changed
tm.assert_series_equal(series, copy)
| - [ ] <s>closes #27147</s> Not quite
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
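A minimal illustration of the behaviour change (error message copied from the diff above):

```python
import numpy as np
import pandas as pd

arr = pd.arrays.IntervalArray.from_breaks([0, 1, 2, 3])  # int64-backed
try:
    # previously this silently cast the underlying data to float64
    arr[0] = np.nan
except ValueError as err:
    print(err)  # Cannot set float NaN to integer-backed IntervalArray
```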
Still needs a dedicated test, but this is also a non-trivial API change, so I want to get the ball rolling on discussion. cc @jschendel | https://api.github.com/repos/pandas-dev/pandas/pulls/32782 | 2020-03-17T19:21:30Z | 2020-04-10T16:17:34Z | 2020-04-10T16:17:34Z | 2020-04-10T17:25:46Z |
CLN: .value -> ._values outside of core/ | diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index f011293273c5b..17cc897136aad 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -58,11 +58,8 @@
)
from pandas.core.dtypes.generic import (
ABCDatetimeIndex,
- ABCIndexClass,
ABCMultiIndex,
ABCPeriodIndex,
- ABCSeries,
- ABCSparseArray,
ABCTimedeltaIndex,
)
from pandas.core.dtypes.missing import isna, notna
@@ -71,6 +68,7 @@
from pandas.core.arrays.timedeltas import TimedeltaArray
from pandas.core.base import PandasObject
import pandas.core.common as com
+from pandas.core.construction import extract_array
from pandas.core.indexes.api import Index, ensure_index
from pandas.core.indexes.datetimes import DatetimeIndex
from pandas.core.indexes.timedeltas import TimedeltaIndex
@@ -1228,11 +1226,7 @@ def _format(x):
# object dtype
return str(formatter(x))
- vals = self.values
- if isinstance(vals, Index):
- vals = vals._values
- elif isinstance(vals, ABCSparseArray):
- vals = vals.values
+ vals = extract_array(self.values, extract_numpy=True)
is_float_type = lib.map_infer(vals, is_float) & notna(vals)
leading_space = self.leading_space
@@ -1457,9 +1451,7 @@ def _format_strings(self) -> List[str]:
class ExtensionArrayFormatter(GenericArrayFormatter):
def _format_strings(self) -> List[str]:
- values = self.values
- if isinstance(values, (ABCIndexClass, ABCSeries)):
- values = values._values
+ values = extract_array(self.values, extract_numpy=True)
formatter = values._formatter(boxed=True)
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
index 77a0c2f99496b..d6b90ae99973e 100644
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -973,9 +973,9 @@ def _try_convert_to_date(self, data):
# ignore numbers that are out of range
if issubclass(new_data.dtype.type, np.number):
in_range = (
- isna(new_data.values)
+ isna(new_data._values)
| (new_data > self.min_stamp)
- | (new_data.values == iNaT)
+ | (new_data._values == iNaT)
)
if not in_range.all():
return data, False
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 544d45999c14b..527f68bd228ca 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -2382,7 +2382,7 @@ def convert(self, values: np.ndarray, nan_rep, encoding: str, errors: str):
mask = isna(categories)
if mask.any():
categories = categories[~mask]
- codes[codes != -1] -= mask.astype(int).cumsum().values
+ codes[codes != -1] -= mask.astype(int).cumsum()._values
converted = Categorical.from_codes(
codes, categories=categories, ordered=ordered
@@ -4826,7 +4826,9 @@ def _convert_string_array(data: np.ndarray, encoding: str, errors: str) -> np.nd
# encode if needed
if len(data):
data = (
- Series(data.ravel()).str.encode(encoding, errors).values.reshape(data.shape)
+ Series(data.ravel())
+ .str.encode(encoding, errors)
+ ._values.reshape(data.shape)
)
# create the sized dtype
@@ -4865,7 +4867,7 @@ def _unconvert_string_array(
dtype = f"U{itemsize}"
if isinstance(data[0], bytes):
- data = Series(data).str.decode(encoding, errors=errors).values
+ data = Series(data).str.decode(encoding, errors=errors)._values
else:
data = data.astype(dtype, copy=False).astype(object, copy=False)
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
index 6e79f5890f76d..8f3aa60b7a9cc 100644
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -351,10 +351,10 @@ def _datetime_to_stata_elapsed_vec(dates: Series, fmt: str) -> Series:
def parse_dates_safe(dates, delta=False, year=False, days=False):
d = {}
- if is_datetime64_dtype(dates.values):
+ if is_datetime64_dtype(dates.dtype):
if delta:
time_delta = dates - stata_epoch
- d["delta"] = time_delta.values.astype(np.int64) // 1000 # microseconds
+ d["delta"] = time_delta._values.astype(np.int64) // 1000 # microseconds
if days or year:
# ignore since mypy reports that DatetimeIndex has no year/month
date_index = DatetimeIndex(dates)
@@ -368,7 +368,7 @@ def parse_dates_safe(dates, delta=False, year=False, days=False):
elif infer_dtype(dates, skipna=False) == "datetime":
if delta:
- delta = dates.values - stata_epoch
+ delta = dates._values - stata_epoch
def f(x: datetime.timedelta) -> float:
return US_PER_DAY * x.days + 1000000 * x.seconds + x.microseconds
@@ -377,8 +377,8 @@ def f(x: datetime.timedelta) -> float:
d["delta"] = v(delta)
if year:
year_month = dates.apply(lambda x: 100 * x.year + x.month)
- d["year"] = year_month.values // 100
- d["month"] = year_month.values - d["year"] * 100
+ d["year"] = year_month._values // 100
+ d["month"] = year_month._values - d["year"] * 100
if days:
def g(x: datetime.datetime) -> int:
@@ -1956,7 +1956,7 @@ def _dtype_to_stata_type(dtype: np.dtype, column: Series) -> int:
if dtype.type == np.object_: # try to coerce it to the biggest string
# not memory efficient, what else could we
# do?
- itemsize = max_len_string_array(ensure_object(column.values))
+ itemsize = max_len_string_array(ensure_object(column._values))
return max(itemsize, 1)
elif dtype == np.float64:
return 255
@@ -1998,7 +1998,7 @@ def _dtype_to_default_stata_fmt(
if force_strl:
return "%9s"
if dtype.type == np.object_:
- itemsize = max_len_string_array(ensure_object(column.values))
+ itemsize = max_len_string_array(ensure_object(column._values))
if itemsize > max_str_len:
if dta_version >= 117:
return "%9s"
@@ -2151,7 +2151,7 @@ def _prepare_categoricals(self, data: DataFrame) -> DataFrame:
"It is not possible to export "
"int64-based categorical data to Stata."
)
- values = data[col].cat.codes.values.copy()
+ values = data[col].cat.codes._values.copy()
# Upcast if needed so that correct missing values can be set
if values.max() >= get_base_missing_value(dtype):
@@ -2384,7 +2384,7 @@ def _encode_strings(self) -> None:
encoded = self.data[col].str.encode(self._encoding)
# If larger than _max_string_length do nothing
if (
- max_len_string_array(ensure_object(encoded.values))
+ max_len_string_array(ensure_object(encoded._values))
<= self._max_string_length
):
self.data[col] = encoded
@@ -2650,7 +2650,7 @@ def _dtype_to_stata_type_117(dtype: np.dtype, column: Series, force_strl: bool)
if dtype.type == np.object_: # try to coerce it to the biggest string
# not memory efficient, what else could we
# do?
- itemsize = max_len_string_array(ensure_object(column.values))
+ itemsize = max_len_string_array(ensure_object(column._values))
itemsize = max(itemsize, 1)
if itemsize <= 2045:
return itemsize
diff --git a/pandas/plotting/_matplotlib/misc.py b/pandas/plotting/_matplotlib/misc.py
index 0720f544203f7..7319e8de3ec6e 100644
--- a/pandas/plotting/_matplotlib/misc.py
+++ b/pandas/plotting/_matplotlib/misc.py
@@ -260,6 +260,7 @@ def bootstrap_plot(series, fig=None, size=50, samples=500, **kwds):
import matplotlib.pyplot as plt
+ # TODO: is the failure mentioned below still relevant?
# random.sample(ndarray, int) fails on python 3.3, sigh
data = list(series.values)
samplings = [random.sample(data, size) for _ in range(samples)]
diff --git a/pandas/tseries/frequencies.py b/pandas/tseries/frequencies.py
index 2477ff29fbfd5..03a9f2e879dd8 100644
--- a/pandas/tseries/frequencies.py
+++ b/pandas/tseries/frequencies.py
@@ -289,7 +289,7 @@ def infer_freq(index, warn: bool = True) -> Optional[str]:
raise TypeError(
f"cannot infer freq from a non-convertible index type {type(index)}"
)
- index = index.values
+ index = index._values
if not isinstance(index, pd.DatetimeIndex):
index = pd.DatetimeIndex(index)
@@ -305,13 +305,13 @@ class _FrequencyInferer:
def __init__(self, index, warn: bool = True):
self.index = index
- self.values = index.asi8
+ self.i8values = index.asi8
# This moves the values, which are implicitly in UTC, to the
# the timezone so they are in local time
if hasattr(index, "tz"):
if index.tz is not None:
- self.values = tz_convert(self.values, UTC, index.tz)
+ self.i8values = tz_convert(self.i8values, UTC, index.tz)
self.warn = warn
@@ -324,10 +324,12 @@ def __init__(self, index, warn: bool = True):
@cache_readonly
def deltas(self):
- return unique_deltas(self.values)
+ return unique_deltas(self.i8values)
@cache_readonly
def deltas_asi8(self):
+ # NB: we cannot use self.i8values here because we may have converted
+ # the tz in __init__
return unique_deltas(self.index.asi8)
@cache_readonly
@@ -341,7 +343,7 @@ def is_unique_asi8(self) -> bool:
def get_freq(self) -> Optional[str]:
"""
Find the appropriate frequency string to describe the inferred
- frequency of self.values
+ frequency of self.i8values
Returns
-------
@@ -393,11 +395,11 @@ def hour_deltas(self):
@cache_readonly
def fields(self):
- return build_field_sarray(self.values)
+ return build_field_sarray(self.i8values)
@cache_readonly
def rep_stamp(self):
- return Timestamp(self.values[0])
+ return Timestamp(self.i8values[0])
def month_position_check(self):
return libresolution.month_position_check(self.fields, self.index.dayofweek)
diff --git a/pandas/util/_doctools.py b/pandas/util/_doctools.py
index 8fd4566d7763b..71965b8e7dd9d 100644
--- a/pandas/util/_doctools.py
+++ b/pandas/util/_doctools.py
@@ -126,7 +126,7 @@ def _insert_index(self, data):
if col_nlevels > 1:
col = data.columns._get_level_values(0)
values = [
- data.columns._get_level_values(i).values for i in range(1, col_nlevels)
+ data.columns._get_level_values(i)._values for i in range(1, col_nlevels)
]
col_df = pd.DataFrame(values)
data.columns = col_df.columns
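For context, a hedged sketch of the `.values` versus `._values` distinction this cleanup leans on (output types reflect pandas around the time of this PR):

```python
import pandas as pd

ser = pd.Series(pd.period_range("2000-01-01", periods=3, freq="D"))
print(type(ser.values))   # numpy.ndarray of boxed Period objects
print(type(ser._values))  # PeriodArray, the actual backing extension array
```

Using `._values` avoids materializing object arrays and keeps extension types intact.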
| I left `pandas.plotting` alone. | https://api.github.com/repos/pandas-dev/pandas/pulls/32781 | 2020-03-17T18:56:15Z | 2020-03-21T21:06:49Z | 2020-03-21T21:06:49Z | 2020-03-21T21:10:05Z
CLN: Split up Boolean array tests | diff --git a/pandas/tests/arrays/boolean/__init__.py b/pandas/tests/arrays/boolean/__init__.py
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/pandas/tests/arrays/boolean/test_arithmetic.py b/pandas/tests/arrays/boolean/test_arithmetic.py
new file mode 100644
index 0000000000000..df4c218cbf9bf
--- /dev/null
+++ b/pandas/tests/arrays/boolean/test_arithmetic.py
@@ -0,0 +1,42 @@
+import numpy as np
+import pytest
+
+import pandas as pd
+from pandas.tests.extension.base import BaseOpsUtil
+
+
+@pytest.fixture
+def data():
+ return pd.array(
+ [True, False] * 4 + [np.nan] + [True, False] * 44 + [np.nan] + [True, False],
+ dtype="boolean",
+ )
+
+
+class TestArithmeticOps(BaseOpsUtil):
+ def test_error(self, data, all_arithmetic_operators):
+ # invalid ops
+
+ op = all_arithmetic_operators
+ s = pd.Series(data)
+ ops = getattr(s, op)
+ opa = getattr(data, op)
+
+ # invalid scalars
+ with pytest.raises(TypeError):
+ ops("foo")
+ with pytest.raises(TypeError):
+ ops(pd.Timestamp("20180101"))
+
+ # invalid array-likes
+ if op not in ("__mul__", "__rmul__"):
+ # TODO(extension) numpy's mul with object array sees booleans as numbers
+ with pytest.raises(TypeError):
+ ops(pd.Series("foo", index=s.index))
+
+ # 2d
+ result = opa(pd.DataFrame({"A": s}))
+ assert result is NotImplemented
+
+ with pytest.raises(NotImplementedError):
+ opa(np.arange(len(s)).reshape(-1, len(s)))
diff --git a/pandas/tests/arrays/boolean/test_astype.py b/pandas/tests/arrays/boolean/test_astype.py
new file mode 100644
index 0000000000000..90fe9a6905d40
--- /dev/null
+++ b/pandas/tests/arrays/boolean/test_astype.py
@@ -0,0 +1,53 @@
+import numpy as np
+import pytest
+
+import pandas as pd
+import pandas._testing as tm
+
+
+def test_astype():
+ # with missing values
+ arr = pd.array([True, False, None], dtype="boolean")
+
+ with pytest.raises(ValueError, match="cannot convert NA to integer"):
+ arr.astype("int64")
+
+ with pytest.raises(ValueError, match="cannot convert float NaN to"):
+ arr.astype("bool")
+
+ result = arr.astype("float64")
+ expected = np.array([1, 0, np.nan], dtype="float64")
+ tm.assert_numpy_array_equal(result, expected)
+
+ result = arr.astype("str")
+ expected = np.array(["True", "False", "<NA>"], dtype="object")
+ tm.assert_numpy_array_equal(result, expected)
+
+ # no missing values
+ arr = pd.array([True, False, True], dtype="boolean")
+ result = arr.astype("int64")
+ expected = np.array([1, 0, 1], dtype="int64")
+ tm.assert_numpy_array_equal(result, expected)
+
+ result = arr.astype("bool")
+ expected = np.array([True, False, True], dtype="bool")
+ tm.assert_numpy_array_equal(result, expected)
+
+
+def test_astype_to_boolean_array():
+ # astype to BooleanArray
+ arr = pd.array([True, False, None], dtype="boolean")
+
+ result = arr.astype("boolean")
+ tm.assert_extension_array_equal(result, arr)
+ result = arr.astype(pd.BooleanDtype())
+ tm.assert_extension_array_equal(result, arr)
+
+
+def test_astype_to_integer_array():
+ # astype to IntegerArray
+ arr = pd.array([True, False, None], dtype="boolean")
+
+ result = arr.astype("Int64")
+ expected = pd.array([1, 0, None], dtype="Int64")
+ tm.assert_extension_array_equal(result, expected)
diff --git a/pandas/tests/arrays/boolean/test_comparison.py b/pandas/tests/arrays/boolean/test_comparison.py
new file mode 100644
index 0000000000000..726b78fbd43bd
--- /dev/null
+++ b/pandas/tests/arrays/boolean/test_comparison.py
@@ -0,0 +1,94 @@
+import numpy as np
+import pytest
+
+import pandas as pd
+import pandas._testing as tm
+from pandas.arrays import BooleanArray
+from pandas.tests.extension.base import BaseOpsUtil
+
+
+@pytest.fixture
+def data():
+ return pd.array(
+ [True, False] * 4 + [np.nan] + [True, False] * 44 + [np.nan] + [True, False],
+ dtype="boolean",
+ )
+
+
+class TestComparisonOps(BaseOpsUtil):
+ def _compare_other(self, data, op_name, other):
+ op = self.get_op_from_name(op_name)
+
+ # array
+ result = pd.Series(op(data, other))
+ expected = pd.Series(op(data._data, other), dtype="boolean")
+ # propagate NAs
+ expected[data._mask] = pd.NA
+
+ tm.assert_series_equal(result, expected)
+
+ # series
+ s = pd.Series(data)
+ result = op(s, other)
+
+ expected = pd.Series(data._data)
+ expected = op(expected, other)
+ expected = expected.astype("boolean")
+ # propagate NAs
+ expected[data._mask] = pd.NA
+
+ tm.assert_series_equal(result, expected)
+
+ def test_compare_scalar(self, data, all_compare_operators):
+ op_name = all_compare_operators
+ self._compare_other(data, op_name, True)
+
+ def test_compare_array(self, data, all_compare_operators):
+ op_name = all_compare_operators
+ other = pd.array([True] * len(data), dtype="boolean")
+ self._compare_other(data, op_name, other)
+ other = np.array([True] * len(data))
+ self._compare_other(data, op_name, other)
+ other = pd.Series([True] * len(data))
+ self._compare_other(data, op_name, other)
+
+ @pytest.mark.parametrize("other", [True, False, pd.NA])
+ def test_scalar(self, other, all_compare_operators):
+ op = self.get_op_from_name(all_compare_operators)
+ a = pd.array([True, False, None], dtype="boolean")
+
+ result = op(a, other)
+
+ if other is pd.NA:
+ expected = pd.array([None, None, None], dtype="boolean")
+ else:
+ values = op(a._data, other)
+ expected = BooleanArray(values, a._mask, copy=True)
+ tm.assert_extension_array_equal(result, expected)
+
+ # ensure we haven't mutated anything inplace
+ result[0] = None
+ tm.assert_extension_array_equal(
+ a, pd.array([True, False, None], dtype="boolean")
+ )
+
+ def test_array(self, all_compare_operators):
+ op = self.get_op_from_name(all_compare_operators)
+ a = pd.array([True] * 3 + [False] * 3 + [None] * 3, dtype="boolean")
+ b = pd.array([True, False, None] * 3, dtype="boolean")
+
+ result = op(a, b)
+
+ values = op(a._data, b._data)
+ mask = a._mask | b._mask
+ expected = BooleanArray(values, mask)
+ tm.assert_extension_array_equal(result, expected)
+
+ # ensure we haven't mutated anything inplace
+ result[0] = None
+ tm.assert_extension_array_equal(
+ a, pd.array([True] * 3 + [False] * 3 + [None] * 3, dtype="boolean")
+ )
+ tm.assert_extension_array_equal(
+ b, pd.array([True, False, None] * 3, dtype="boolean")
+ )
diff --git a/pandas/tests/arrays/boolean/test_construction.py b/pandas/tests/arrays/boolean/test_construction.py
new file mode 100644
index 0000000000000..bf1aba190f3e2
--- /dev/null
+++ b/pandas/tests/arrays/boolean/test_construction.py
@@ -0,0 +1,376 @@
+import numpy as np
+import pytest
+
+import pandas.util._test_decorators as td
+
+import pandas as pd
+import pandas._testing as tm
+from pandas.arrays import BooleanArray
+from pandas.core.arrays.boolean import coerce_to_array
+
+
+@pytest.fixture
+def data():
+ return pd.array(
+ [True, False] * 4 + [np.nan] + [True, False] * 44 + [np.nan] + [True, False],
+ dtype="boolean",
+ )
+
+
+def test_boolean_array_constructor():
+ values = np.array([True, False, True, False], dtype="bool")
+ mask = np.array([False, False, False, True], dtype="bool")
+
+ result = BooleanArray(values, mask)
+ expected = pd.array([True, False, True, None], dtype="boolean")
+ tm.assert_extension_array_equal(result, expected)
+
+ with pytest.raises(TypeError, match="values should be boolean numpy array"):
+ BooleanArray(values.tolist(), mask)
+
+ with pytest.raises(TypeError, match="mask should be boolean numpy array"):
+ BooleanArray(values, mask.tolist())
+
+ with pytest.raises(TypeError, match="values should be boolean numpy array"):
+ BooleanArray(values.astype(int), mask)
+
+ with pytest.raises(TypeError, match="mask should be boolean numpy array"):
+ BooleanArray(values, None)
+
+ with pytest.raises(ValueError, match="values must be a 1D array"):
+ BooleanArray(values.reshape(1, -1), mask)
+
+ with pytest.raises(ValueError, match="mask must be a 1D array"):
+ BooleanArray(values, mask.reshape(1, -1))
+
+
+def test_boolean_array_constructor_copy():
+ values = np.array([True, False, True, False], dtype="bool")
+ mask = np.array([False, False, False, True], dtype="bool")
+
+ result = BooleanArray(values, mask)
+ assert result._data is values
+ assert result._mask is mask
+
+ result = BooleanArray(values, mask, copy=True)
+ assert result._data is not values
+ assert result._mask is not mask
+
+
+def test_to_boolean_array():
+ expected = BooleanArray(
+ np.array([True, False, True]), np.array([False, False, False])
+ )
+
+ result = pd.array([True, False, True], dtype="boolean")
+ tm.assert_extension_array_equal(result, expected)
+ result = pd.array(np.array([True, False, True]), dtype="boolean")
+ tm.assert_extension_array_equal(result, expected)
+ result = pd.array(np.array([True, False, True], dtype=object), dtype="boolean")
+ tm.assert_extension_array_equal(result, expected)
+
+ # with missing values
+ expected = BooleanArray(
+ np.array([True, False, True]), np.array([False, False, True])
+ )
+
+ result = pd.array([True, False, None], dtype="boolean")
+ tm.assert_extension_array_equal(result, expected)
+ result = pd.array(np.array([True, False, None], dtype=object), dtype="boolean")
+ tm.assert_extension_array_equal(result, expected)
+
+
+def test_to_boolean_array_all_none():
+ expected = BooleanArray(np.array([True, True, True]), np.array([True, True, True]))
+
+ result = pd.array([None, None, None], dtype="boolean")
+ tm.assert_extension_array_equal(result, expected)
+ result = pd.array(np.array([None, None, None], dtype=object), dtype="boolean")
+ tm.assert_extension_array_equal(result, expected)
+
+
+@pytest.mark.parametrize(
+ "a, b",
+ [
+ ([True, False, None, np.nan, pd.NA], [True, False, None, None, None]),
+ ([True, np.nan], [True, None]),
+ ([True, pd.NA], [True, None]),
+ ([np.nan, np.nan], [None, None]),
+ (np.array([np.nan, np.nan], dtype=float), [None, None]),
+ ],
+)
+def test_to_boolean_array_missing_indicators(a, b):
+ result = pd.array(a, dtype="boolean")
+ expected = pd.array(b, dtype="boolean")
+ tm.assert_extension_array_equal(result, expected)
+
+
+@pytest.mark.parametrize(
+ "values",
+ [
+ ["foo", "bar"],
+ ["1", "2"],
+ # "foo",
+ [1, 2],
+ [1.0, 2.0],
+ pd.date_range("20130101", periods=2),
+ np.array(["foo"]),
+ np.array([1, 2]),
+ np.array([1.0, 2.0]),
+ [np.nan, {"a": 1}],
+ ],
+)
+def test_to_boolean_array_error(values):
+ # error in converting existing arrays to BooleanArray
+ msg = "Need to pass bool-like value"
+ with pytest.raises(TypeError, match=msg):
+ pd.array(values, dtype="boolean")
+
+
+def test_to_boolean_array_from_integer_array():
+ result = pd.array(np.array([1, 0, 1, 0]), dtype="boolean")
+ expected = pd.array([True, False, True, False], dtype="boolean")
+ tm.assert_extension_array_equal(result, expected)
+
+ # with missing values
+ result = pd.array(np.array([1, 0, 1, None]), dtype="boolean")
+ expected = pd.array([True, False, True, None], dtype="boolean")
+ tm.assert_extension_array_equal(result, expected)
+
+
+def test_to_boolean_array_from_float_array():
+ result = pd.array(np.array([1.0, 0.0, 1.0, 0.0]), dtype="boolean")
+ expected = pd.array([True, False, True, False], dtype="boolean")
+ tm.assert_extension_array_equal(result, expected)
+
+ # with missing values
+ result = pd.array(np.array([1.0, 0.0, 1.0, np.nan]), dtype="boolean")
+ expected = pd.array([True, False, True, None], dtype="boolean")
+ tm.assert_extension_array_equal(result, expected)
+
+
+def test_to_boolean_array_integer_like():
+ # integers of 0's and 1's
+ result = pd.array([1, 0, 1, 0], dtype="boolean")
+ expected = pd.array([True, False, True, False], dtype="boolean")
+ tm.assert_extension_array_equal(result, expected)
+
+ # with missing values
+ result = pd.array([1, 0, 1, None], dtype="boolean")
+ expected = pd.array([True, False, True, None], dtype="boolean")
+ tm.assert_extension_array_equal(result, expected)
+
+
+def test_coerce_to_array():
+ # TODO this is currently not public API
+ values = np.array([True, False, True, False], dtype="bool")
+ mask = np.array([False, False, False, True], dtype="bool")
+ result = BooleanArray(*coerce_to_array(values, mask=mask))
+ expected = BooleanArray(values, mask)
+ tm.assert_extension_array_equal(result, expected)
+ assert result._data is values
+ assert result._mask is mask
+ result = BooleanArray(*coerce_to_array(values, mask=mask, copy=True))
+ expected = BooleanArray(values, mask)
+ tm.assert_extension_array_equal(result, expected)
+ assert result._data is not values
+ assert result._mask is not mask
+
+ # mixed missing from values and mask
+ values = [True, False, None, False]
+ mask = np.array([False, False, False, True], dtype="bool")
+ result = BooleanArray(*coerce_to_array(values, mask=mask))
+ expected = BooleanArray(
+ np.array([True, False, True, True]), np.array([False, False, True, True])
+ )
+ tm.assert_extension_array_equal(result, expected)
+ result = BooleanArray(*coerce_to_array(np.array(values, dtype=object), mask=mask))
+ tm.assert_extension_array_equal(result, expected)
+ result = BooleanArray(*coerce_to_array(values, mask=mask.tolist()))
+ tm.assert_extension_array_equal(result, expected)
+
+ # raise errors for wrong dimension
+ values = np.array([True, False, True, False], dtype="bool")
+ mask = np.array([False, False, False, True], dtype="bool")
+
+ with pytest.raises(ValueError, match="values must be a 1D list-like"):
+ coerce_to_array(values.reshape(1, -1))
+
+ with pytest.raises(ValueError, match="mask must be a 1D list-like"):
+ coerce_to_array(values, mask=mask.reshape(1, -1))
+
+
+def test_coerce_to_array_from_boolean_array():
+ # passing BooleanArray to coerce_to_array
+ values = np.array([True, False, True, False], dtype="bool")
+ mask = np.array([False, False, False, True], dtype="bool")
+ arr = BooleanArray(values, mask)
+ result = BooleanArray(*coerce_to_array(arr))
+ tm.assert_extension_array_equal(result, arr)
+ # no copy
+ assert result._data is arr._data
+ assert result._mask is arr._mask
+
+ result = BooleanArray(*coerce_to_array(arr), copy=True)
+ tm.assert_extension_array_equal(result, arr)
+ assert result._data is not arr._data
+ assert result._mask is not arr._mask
+
+ with pytest.raises(ValueError, match="cannot pass mask for BooleanArray input"):
+ coerce_to_array(arr, mask=mask)
+
+
+def test_coerce_to_numpy_array():
+ # with missing values -> object dtype
+ arr = pd.array([True, False, None], dtype="boolean")
+ result = np.array(arr)
+ expected = np.array([True, False, pd.NA], dtype="object")
+ tm.assert_numpy_array_equal(result, expected)
+
+ # also with no missing values -> object dtype
+ arr = pd.array([True, False, True], dtype="boolean")
+ result = np.array(arr)
+ expected = np.array([True, False, True], dtype="object")
+ tm.assert_numpy_array_equal(result, expected)
+
+ # force bool dtype
+ result = np.array(arr, dtype="bool")
+ expected = np.array([True, False, True], dtype="bool")
+ tm.assert_numpy_array_equal(result, expected)
+ # with missing values will raise error
+ arr = pd.array([True, False, None], dtype="boolean")
+ msg = (
+ "cannot convert to 'bool'-dtype NumPy array with missing values. "
+ "Specify an appropriate 'na_value' for this dtype."
+ )
+ with pytest.raises(ValueError, match=msg):
+ np.array(arr, dtype="bool")
+
+
+def test_to_boolean_array_from_strings():
+ result = BooleanArray._from_sequence_of_strings(
+ np.array(["True", "False", np.nan], dtype=object)
+ )
+ expected = BooleanArray(
+ np.array([True, False, False]), np.array([False, False, True])
+ )
+
+ tm.assert_extension_array_equal(result, expected)
+
+
+def test_to_boolean_array_from_strings_invalid_string():
+ with pytest.raises(ValueError, match="cannot be cast"):
+ BooleanArray._from_sequence_of_strings(["donkey"])
+
+
+@pytest.mark.parametrize("box", [True, False], ids=["series", "array"])
+def test_to_numpy(box):
+ con = pd.Series if box else pd.array
+ # default (with or without missing values) -> object dtype
+ arr = con([True, False, True], dtype="boolean")
+ result = arr.to_numpy()
+ expected = np.array([True, False, True], dtype="object")
+ tm.assert_numpy_array_equal(result, expected)
+
+ arr = con([True, False, None], dtype="boolean")
+ result = arr.to_numpy()
+ expected = np.array([True, False, pd.NA], dtype="object")
+ tm.assert_numpy_array_equal(result, expected)
+
+ arr = con([True, False, None], dtype="boolean")
+ result = arr.to_numpy(dtype="str")
+ expected = np.array([True, False, pd.NA], dtype="<U5")
+ tm.assert_numpy_array_equal(result, expected)
+
+ # no missing values -> can convert to bool, otherwise raises
+ arr = con([True, False, True], dtype="boolean")
+ result = arr.to_numpy(dtype="bool")
+ expected = np.array([True, False, True], dtype="bool")
+ tm.assert_numpy_array_equal(result, expected)
+
+ arr = con([True, False, None], dtype="boolean")
+ with pytest.raises(ValueError, match="cannot convert to 'bool'-dtype"):
+ result = arr.to_numpy(dtype="bool")
+
+ # specify dtype and na_value
+ arr = con([True, False, None], dtype="boolean")
+ result = arr.to_numpy(dtype=object, na_value=None)
+ expected = np.array([True, False, None], dtype="object")
+ tm.assert_numpy_array_equal(result, expected)
+
+ result = arr.to_numpy(dtype=bool, na_value=False)
+ expected = np.array([True, False, False], dtype="bool")
+ tm.assert_numpy_array_equal(result, expected)
+
+ result = arr.to_numpy(dtype="int64", na_value=-99)
+ expected = np.array([1, 0, -99], dtype="int64")
+ tm.assert_numpy_array_equal(result, expected)
+
+ result = arr.to_numpy(dtype="float64", na_value=np.nan)
+ expected = np.array([1, 0, np.nan], dtype="float64")
+ tm.assert_numpy_array_equal(result, expected)
+
+ # converting to int or float without specifying na_value raises
+ with pytest.raises(ValueError, match="cannot convert to 'int64'-dtype"):
+ arr.to_numpy(dtype="int64")
+ with pytest.raises(ValueError, match="cannot convert to 'float64'-dtype"):
+ arr.to_numpy(dtype="float64")
+
+
+def test_to_numpy_copy():
+ # to_numpy can be zero-copy if no missing values
+ arr = pd.array([True, False, True], dtype="boolean")
+ result = arr.to_numpy(dtype=bool)
+ result[0] = False
+ tm.assert_extension_array_equal(
+ arr, pd.array([False, False, True], dtype="boolean")
+ )
+
+ arr = pd.array([True, False, True], dtype="boolean")
+ result = arr.to_numpy(dtype=bool, copy=True)
+ result[0] = False
+ tm.assert_extension_array_equal(arr, pd.array([True, False, True], dtype="boolean"))
+
+
+# FIXME: don't leave commented out
+# TODO when BooleanArray coerces to object dtype numpy array, need to do conversion
+# manually in the indexing code
+# def test_indexing_boolean_mask():
+# arr = pd.array([1, 2, 3, 4], dtype="Int64")
+# mask = pd.array([True, False, True, False], dtype="boolean")
+# result = arr[mask]
+# expected = pd.array([1, 3], dtype="Int64")
+# tm.assert_extension_array_equal(result, expected)
+
+# # missing values -> error
+# mask = pd.array([True, False, True, None], dtype="boolean")
+# with pytest.raises(IndexError):
+# result = arr[mask]
+
+
+@td.skip_if_no("pyarrow", min_version="0.15.0")
+def test_arrow_array(data):
+ # protocol added in 0.15.0
+ import pyarrow as pa
+
+ arr = pa.array(data)
+
+ # TODO use to_numpy(na_value=None) here
+ data_object = np.array(data, dtype=object)
+ data_object[data.isna()] = None
+ expected = pa.array(data_object, type=pa.bool_(), from_pandas=True)
+ assert arr.equals(expected)
+
+
+@td.skip_if_no("pyarrow", min_version="0.15.1.dev")
+def test_arrow_roundtrip():
+ # roundtrip possible from arrow 1.0.0
+ import pyarrow as pa
+
+ data = pd.array([True, False, None], dtype="boolean")
+ df = pd.DataFrame({"a": data})
+ table = pa.table(df)
+ assert table.field("a").type == "bool"
+ result = table.to_pandas()
+ assert isinstance(result["a"].dtype, pd.BooleanDtype)
+ tm.assert_frame_equal(result, df)
diff --git a/pandas/tests/arrays/boolean/test_function.py b/pandas/tests/arrays/boolean/test_function.py
new file mode 100644
index 0000000000000..c2987dc37b960
--- /dev/null
+++ b/pandas/tests/arrays/boolean/test_function.py
@@ -0,0 +1,107 @@
+import numpy as np
+import pytest
+
+import pandas as pd
+import pandas._testing as tm
+
+
+@pytest.fixture
+def data():
+ return pd.array(
+ [True, False] * 4 + [np.nan] + [True, False] * 44 + [np.nan] + [True, False],
+ dtype="boolean",
+ )
+
+
+@pytest.mark.parametrize(
+ "ufunc", [np.add, np.logical_or, np.logical_and, np.logical_xor]
+)
+def test_ufuncs_binary(ufunc):
+ # two BooleanArrays
+ a = pd.array([True, False, None], dtype="boolean")
+ result = ufunc(a, a)
+ expected = pd.array(ufunc(a._data, a._data), dtype="boolean")
+ expected[a._mask] = np.nan
+ tm.assert_extension_array_equal(result, expected)
+
+ s = pd.Series(a)
+ result = ufunc(s, a)
+ expected = pd.Series(ufunc(a._data, a._data), dtype="boolean")
+ expected[a._mask] = np.nan
+ tm.assert_series_equal(result, expected)
+
+ # Boolean with numpy array
+ arr = np.array([True, True, False])
+ result = ufunc(a, arr)
+ expected = pd.array(ufunc(a._data, arr), dtype="boolean")
+ expected[a._mask] = np.nan
+ tm.assert_extension_array_equal(result, expected)
+
+ result = ufunc(arr, a)
+ expected = pd.array(ufunc(arr, a._data), dtype="boolean")
+ expected[a._mask] = np.nan
+ tm.assert_extension_array_equal(result, expected)
+
+ # BooleanArray with scalar
+ result = ufunc(a, True)
+ expected = pd.array(ufunc(a._data, True), dtype="boolean")
+ expected[a._mask] = np.nan
+ tm.assert_extension_array_equal(result, expected)
+
+ result = ufunc(True, a)
+ expected = pd.array(ufunc(True, a._data), dtype="boolean")
+ expected[a._mask] = np.nan
+ tm.assert_extension_array_equal(result, expected)
+
+ # not handled types
+ with pytest.raises(TypeError):
+ ufunc(a, "test")
+
+
+@pytest.mark.parametrize("ufunc", [np.logical_not])
+def test_ufuncs_unary(ufunc):
+ a = pd.array([True, False, None], dtype="boolean")
+ result = ufunc(a)
+ expected = pd.array(ufunc(a._data), dtype="boolean")
+ expected[a._mask] = np.nan
+ tm.assert_extension_array_equal(result, expected)
+
+ s = pd.Series(a)
+ result = ufunc(s)
+ expected = pd.Series(ufunc(a._data), dtype="boolean")
+ expected[a._mask] = np.nan
+ tm.assert_series_equal(result, expected)
+
+
+@pytest.mark.parametrize("values", [[True, False], [True, None]])
+def test_ufunc_reduce_raises(values):
+ a = pd.array(values, dtype="boolean")
+ with pytest.raises(NotImplementedError):
+ np.add.reduce(a)
+
+
+def test_value_counts_na():
+ arr = pd.array([True, False, pd.NA], dtype="boolean")
+ result = arr.value_counts(dropna=False)
+ expected = pd.Series([1, 1, 1], index=[True, False, pd.NA], dtype="Int64")
+ tm.assert_series_equal(result, expected)
+
+ result = arr.value_counts(dropna=True)
+ expected = pd.Series([1, 1], index=[True, False], dtype="Int64")
+ tm.assert_series_equal(result, expected)
+
+
+def test_diff():
+ a = pd.array(
+ [True, True, False, False, True, None, True, None, False], dtype="boolean"
+ )
+ result = pd.core.algorithms.diff(a, 1)
+ expected = pd.array(
+ [None, False, True, False, True, None, None, None, None], dtype="boolean"
+ )
+ tm.assert_extension_array_equal(result, expected)
+
+ s = pd.Series(a)
+ result = s.diff()
+ expected = pd.Series(expected)
+ tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/arrays/boolean/test_indexing.py b/pandas/tests/arrays/boolean/test_indexing.py
new file mode 100644
index 0000000000000..6a7daea16963c
--- /dev/null
+++ b/pandas/tests/arrays/boolean/test_indexing.py
@@ -0,0 +1,13 @@
+import numpy as np
+import pytest
+
+import pandas as pd
+import pandas._testing as tm
+
+
+@pytest.mark.parametrize("na", [None, np.nan, pd.NA])
+def test_setitem_missing_values(na):
+ arr = pd.array([True, False, None], dtype="boolean")
+ expected = pd.array([True, None, None], dtype="boolean")
+ arr[1] = na
+ tm.assert_extension_array_equal(arr, expected)
diff --git a/pandas/tests/arrays/boolean/test_logical.py b/pandas/tests/arrays/boolean/test_logical.py
new file mode 100644
index 0000000000000..6cfe19e2fe3eb
--- /dev/null
+++ b/pandas/tests/arrays/boolean/test_logical.py
@@ -0,0 +1,230 @@
+import operator
+
+import numpy as np
+import pytest
+
+import pandas as pd
+import pandas._testing as tm
+from pandas.arrays import BooleanArray
+from pandas.tests.extension.base import BaseOpsUtil
+
+
+class TestLogicalOps(BaseOpsUtil):
+ def test_numpy_scalars_ok(self, all_logical_operators):
+ a = pd.array([True, False, None], dtype="boolean")
+ op = getattr(a, all_logical_operators)
+
+ tm.assert_extension_array_equal(op(True), op(np.bool(True)))
+ tm.assert_extension_array_equal(op(False), op(np.bool(False)))
+
+ def get_op_from_name(self, op_name):
+ short_opname = op_name.strip("_")
+ short_opname = short_opname if "xor" in short_opname else short_opname + "_"
+ try:
+ op = getattr(operator, short_opname)
+ except AttributeError:
+ # Assume it is the reverse operator
+ rop = getattr(operator, short_opname[1:])
+ op = lambda x, y: rop(y, x)
+
+ return op
+
+ def test_empty_ok(self, all_logical_operators):
+ a = pd.array([], dtype="boolean")
+ op_name = all_logical_operators
+ result = getattr(a, op_name)(True)
+ tm.assert_extension_array_equal(a, result)
+
+ result = getattr(a, op_name)(False)
+ tm.assert_extension_array_equal(a, result)
+
+ # TODO: pd.NA
+ # result = getattr(a, op_name)(pd.NA)
+ # tm.assert_extension_array_equal(a, result)
+
+ def test_logical_length_mismatch_raises(self, all_logical_operators):
+ op_name = all_logical_operators
+ a = pd.array([True, False, None], dtype="boolean")
+ msg = "Lengths must match to compare"
+
+ with pytest.raises(ValueError, match=msg):
+ getattr(a, op_name)([True, False])
+
+ with pytest.raises(ValueError, match=msg):
+ getattr(a, op_name)(np.array([True, False]))
+
+ with pytest.raises(ValueError, match=msg):
+ getattr(a, op_name)(pd.array([True, False], dtype="boolean"))
+
+ def test_logical_nan_raises(self, all_logical_operators):
+ op_name = all_logical_operators
+ a = pd.array([True, False, None], dtype="boolean")
+ msg = "Got float instead"
+
+ with pytest.raises(TypeError, match=msg):
+ getattr(a, op_name)(np.nan)
+
+ @pytest.mark.parametrize("other", ["a", 1])
+ def test_non_bool_or_na_other_raises(self, other, all_logical_operators):
+ a = pd.array([True, False], dtype="boolean")
+ with pytest.raises(TypeError, match=str(type(other).__name__)):
+ getattr(a, all_logical_operators)(other)
+
+ def test_kleene_or(self):
+ # A clear test of behavior.
+ a = pd.array([True] * 3 + [False] * 3 + [None] * 3, dtype="boolean")
+ b = pd.array([True, False, None] * 3, dtype="boolean")
+ result = a | b
+ expected = pd.array(
+ [True, True, True, True, False, None, True, None, None], dtype="boolean"
+ )
+ tm.assert_extension_array_equal(result, expected)
+
+ result = b | a
+ tm.assert_extension_array_equal(result, expected)
+
+ # ensure we haven't mutated anything inplace
+ tm.assert_extension_array_equal(
+ a, pd.array([True] * 3 + [False] * 3 + [None] * 3, dtype="boolean")
+ )
+ tm.assert_extension_array_equal(
+ b, pd.array([True, False, None] * 3, dtype="boolean")
+ )
+
+ @pytest.mark.parametrize(
+ "other, expected",
+ [
+ (pd.NA, [True, None, None]),
+ (True, [True, True, True]),
+ (np.bool_(True), [True, True, True]),
+ (False, [True, False, None]),
+ (np.bool_(False), [True, False, None]),
+ ],
+ )
+ def test_kleene_or_scalar(self, other, expected):
+ # TODO: test True & False
+ a = pd.array([True, False, None], dtype="boolean")
+ result = a | other
+ expected = pd.array(expected, dtype="boolean")
+ tm.assert_extension_array_equal(result, expected)
+
+ result = other | a
+ tm.assert_extension_array_equal(result, expected)
+
+ # ensure we haven't mutated anything inplace
+ tm.assert_extension_array_equal(
+ a, pd.array([True, False, None], dtype="boolean")
+ )
+
+ def test_kleene_and(self):
+ # A clear test of behavior.
+ a = pd.array([True] * 3 + [False] * 3 + [None] * 3, dtype="boolean")
+ b = pd.array([True, False, None] * 3, dtype="boolean")
+ result = a & b
+ expected = pd.array(
+ [True, False, None, False, False, False, None, False, None], dtype="boolean"
+ )
+ tm.assert_extension_array_equal(result, expected)
+
+ result = b & a
+ tm.assert_extension_array_equal(result, expected)
+
+ # ensure we haven't mutated anything inplace
+ tm.assert_extension_array_equal(
+ a, pd.array([True] * 3 + [False] * 3 + [None] * 3, dtype="boolean")
+ )
+ tm.assert_extension_array_equal(
+ b, pd.array([True, False, None] * 3, dtype="boolean")
+ )
+
+ @pytest.mark.parametrize(
+ "other, expected",
+ [
+ (pd.NA, [None, False, None]),
+ (True, [True, False, None]),
+ (False, [False, False, False]),
+ (np.bool_(True), [True, False, None]),
+ (np.bool_(False), [False, False, False]),
+ ],
+ )
+ def test_kleene_and_scalar(self, other, expected):
+ a = pd.array([True, False, None], dtype="boolean")
+ result = a & other
+ expected = pd.array(expected, dtype="boolean")
+ tm.assert_extension_array_equal(result, expected)
+
+ result = other & a
+ tm.assert_extension_array_equal(result, expected)
+
+ # ensure we haven't mutated anything inplace
+ tm.assert_extension_array_equal(
+ a, pd.array([True, False, None], dtype="boolean")
+ )
+
+ def test_kleene_xor(self):
+ a = pd.array([True] * 3 + [False] * 3 + [None] * 3, dtype="boolean")
+ b = pd.array([True, False, None] * 3, dtype="boolean")
+ result = a ^ b
+ expected = pd.array(
+ [False, True, None, True, False, None, None, None, None], dtype="boolean"
+ )
+ tm.assert_extension_array_equal(result, expected)
+
+ result = b ^ a
+ tm.assert_extension_array_equal(result, expected)
+
+ # ensure we haven't mutated anything inplace
+ tm.assert_extension_array_equal(
+ a, pd.array([True] * 3 + [False] * 3 + [None] * 3, dtype="boolean")
+ )
+ tm.assert_extension_array_equal(
+ b, pd.array([True, False, None] * 3, dtype="boolean")
+ )
+
+ @pytest.mark.parametrize(
+ "other, expected",
+ [
+ (pd.NA, [None, None, None]),
+ (True, [False, True, None]),
+ (np.bool_(True), [False, True, None]),
+ (np.bool_(False), [True, False, None]),
+ ],
+ )
+ def test_kleene_xor_scalar(self, other, expected):
+ a = pd.array([True, False, None], dtype="boolean")
+ result = a ^ other
+ expected = pd.array(expected, dtype="boolean")
+ tm.assert_extension_array_equal(result, expected)
+
+ result = other ^ a
+ tm.assert_extension_array_equal(result, expected)
+
+ # ensure we haven't mutated anything inplace
+ tm.assert_extension_array_equal(
+ a, pd.array([True, False, None], dtype="boolean")
+ )
+
+ @pytest.mark.parametrize(
+ "other", [True, False, pd.NA, [True, False, None] * 3],
+ )
+ def test_no_masked_assumptions(self, other, all_logical_operators):
+ # The logical operations should not assume that masked values are False!
+ a = pd.arrays.BooleanArray(
+ np.array([True, True, True, False, False, False, True, False, True]),
+ np.array([False] * 6 + [True, True, True]),
+ )
+ b = pd.array([True] * 3 + [False] * 3 + [None] * 3, dtype="boolean")
+ if isinstance(other, list):
+ other = pd.array(other, dtype="boolean")
+
+ result = getattr(a, all_logical_operators)(other)
+ expected = getattr(b, all_logical_operators)(other)
+ tm.assert_extension_array_equal(result, expected)
+
+ if isinstance(other, BooleanArray):
+ other._data[other._mask] = True
+ a._data[a._mask] = False
+
+ result = getattr(a, all_logical_operators)(other)
+ expected = getattr(b, all_logical_operators)(other)
+ tm.assert_extension_array_equal(result, expected)
diff --git a/pandas/tests/arrays/boolean/test_ops.py b/pandas/tests/arrays/boolean/test_ops.py
new file mode 100644
index 0000000000000..52f602258a049
--- /dev/null
+++ b/pandas/tests/arrays/boolean/test_ops.py
@@ -0,0 +1,20 @@
+import pandas as pd
+import pandas._testing as tm
+
+
+class TestUnaryOps:
+ def test_invert(self):
+ a = pd.array([True, False, None], dtype="boolean")
+ expected = pd.array([False, True, None], dtype="boolean")
+ tm.assert_extension_array_equal(~a, expected)
+
+ expected = pd.Series(expected, index=["a", "b", "c"], name="name")
+ result = ~pd.Series(a, index=["a", "b", "c"], name="name")
+ tm.assert_series_equal(result, expected)
+
+ df = pd.DataFrame({"A": a, "B": [True, False, False]}, index=["a", "b", "c"])
+ result = ~df
+ expected = pd.DataFrame(
+ {"A": expected, "B": [False, True, True]}, index=["a", "b", "c"]
+ )
+ tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/arrays/boolean/test_reduction.py b/pandas/tests/arrays/boolean/test_reduction.py
new file mode 100644
index 0000000000000..7a8146ef14de0
--- /dev/null
+++ b/pandas/tests/arrays/boolean/test_reduction.py
@@ -0,0 +1,55 @@
+import numpy as np
+import pytest
+
+import pandas as pd
+
+
+@pytest.fixture
+def data():
+ return pd.array(
+ [True, False] * 4 + [np.nan] + [True, False] * 44 + [np.nan] + [True, False],
+ dtype="boolean",
+ )
+
+
+@pytest.mark.parametrize(
+ "values, exp_any, exp_all, exp_any_noskip, exp_all_noskip",
+ [
+ ([True, pd.NA], True, True, True, pd.NA),
+ ([False, pd.NA], False, False, pd.NA, False),
+ ([pd.NA], False, True, pd.NA, pd.NA),
+ ([], False, True, False, True),
+ ],
+)
+def test_any_all(values, exp_any, exp_all, exp_any_noskip, exp_all_noskip):
+ # the methods return numpy scalars
+ exp_any = pd.NA if exp_any is pd.NA else np.bool_(exp_any)
+ exp_all = pd.NA if exp_all is pd.NA else np.bool_(exp_all)
+ exp_any_noskip = pd.NA if exp_any_noskip is pd.NA else np.bool_(exp_any_noskip)
+ exp_all_noskip = pd.NA if exp_all_noskip is pd.NA else np.bool_(exp_all_noskip)
+
+ for con in [pd.array, pd.Series]:
+ a = con(values, dtype="boolean")
+ assert a.any() is exp_any
+ assert a.all() is exp_all
+ assert a.any(skipna=False) is exp_any_noskip
+ assert a.all(skipna=False) is exp_all_noskip
+
+ assert np.any(a.any()) is exp_any
+ assert np.all(a.all()) is exp_all
+
+
+@pytest.mark.parametrize("dropna", [True, False])
+def test_reductions_return_types(dropna, data, all_numeric_reductions):
+ op = all_numeric_reductions
+ s = pd.Series(data)
+ if dropna:
+ s = s.dropna()
+
+ if op in ("sum", "prod"):
+ assert isinstance(getattr(s, op)(), np.int64)
+ elif op in ("min", "max"):
+ assert isinstance(getattr(s, op)(), np.bool_)
+ else:
+ # "mean", "std", "var", "median", "kurt", "skew"
+ assert isinstance(getattr(s, op)(), np.float64)
diff --git a/pandas/tests/arrays/boolean/test_repr.py b/pandas/tests/arrays/boolean/test_repr.py
new file mode 100644
index 0000000000000..0ee904b18cc9e
--- /dev/null
+++ b/pandas/tests/arrays/boolean/test_repr.py
@@ -0,0 +1,13 @@
+import pandas as pd
+
+
+def test_repr():
+ df = pd.DataFrame({"A": pd.array([True, False, None], dtype="boolean")})
+ expected = " A\n0 True\n1 False\n2 <NA>"
+ assert repr(df) == expected
+
+ expected = "0 True\n1 False\n2 <NA>\nName: A, dtype: boolean"
+ assert repr(df.A) == expected
+
+ expected = "<BooleanArray>\n[True, False, <NA>]\nLength: 3, dtype: boolean"
+ assert repr(df.A.array) == expected
diff --git a/pandas/tests/arrays/test_boolean.py b/pandas/tests/arrays/test_boolean.py
deleted file mode 100644
index f4b466f4804c7..0000000000000
--- a/pandas/tests/arrays/test_boolean.py
+++ /dev/null
@@ -1,936 +0,0 @@
-import operator
-
-import numpy as np
-import pytest
-
-import pandas.util._test_decorators as td
-
-import pandas as pd
-import pandas._testing as tm
-from pandas.arrays import BooleanArray
-from pandas.core.arrays.boolean import coerce_to_array
-from pandas.tests.extension.base import BaseOpsUtil
-
-
-def make_data():
- return [True, False] * 4 + [np.nan] + [True, False] * 44 + [np.nan] + [True, False]
-
-
-@pytest.fixture
-def dtype():
- return pd.BooleanDtype()
-
-
-@pytest.fixture
-def data(dtype):
- return pd.array(make_data(), dtype=dtype)
-
-
-def test_boolean_array_constructor():
- values = np.array([True, False, True, False], dtype="bool")
- mask = np.array([False, False, False, True], dtype="bool")
-
- result = BooleanArray(values, mask)
- expected = pd.array([True, False, True, None], dtype="boolean")
- tm.assert_extension_array_equal(result, expected)
-
- with pytest.raises(TypeError, match="values should be boolean numpy array"):
- BooleanArray(values.tolist(), mask)
-
- with pytest.raises(TypeError, match="mask should be boolean numpy array"):
- BooleanArray(values, mask.tolist())
-
- with pytest.raises(TypeError, match="values should be boolean numpy array"):
- BooleanArray(values.astype(int), mask)
-
- with pytest.raises(TypeError, match="mask should be boolean numpy array"):
- BooleanArray(values, None)
-
- with pytest.raises(ValueError, match="values must be a 1D array"):
- BooleanArray(values.reshape(1, -1), mask)
-
- with pytest.raises(ValueError, match="mask must be a 1D array"):
- BooleanArray(values, mask.reshape(1, -1))
-
-
-def test_boolean_array_constructor_copy():
- values = np.array([True, False, True, False], dtype="bool")
- mask = np.array([False, False, False, True], dtype="bool")
-
- result = BooleanArray(values, mask)
- assert result._data is values
- assert result._mask is mask
-
- result = BooleanArray(values, mask, copy=True)
- assert result._data is not values
- assert result._mask is not mask
-
-
-def test_to_boolean_array():
- expected = BooleanArray(
- np.array([True, False, True]), np.array([False, False, False])
- )
-
- result = pd.array([True, False, True], dtype="boolean")
- tm.assert_extension_array_equal(result, expected)
- result = pd.array(np.array([True, False, True]), dtype="boolean")
- tm.assert_extension_array_equal(result, expected)
- result = pd.array(np.array([True, False, True], dtype=object), dtype="boolean")
- tm.assert_extension_array_equal(result, expected)
-
- # with missing values
- expected = BooleanArray(
- np.array([True, False, True]), np.array([False, False, True])
- )
-
- result = pd.array([True, False, None], dtype="boolean")
- tm.assert_extension_array_equal(result, expected)
- result = pd.array(np.array([True, False, None], dtype=object), dtype="boolean")
- tm.assert_extension_array_equal(result, expected)
-
-
-def test_to_boolean_array_all_none():
- expected = BooleanArray(np.array([True, True, True]), np.array([True, True, True]))
-
- result = pd.array([None, None, None], dtype="boolean")
- tm.assert_extension_array_equal(result, expected)
- result = pd.array(np.array([None, None, None], dtype=object), dtype="boolean")
- tm.assert_extension_array_equal(result, expected)
-
-
-@pytest.mark.parametrize(
- "a, b",
- [
- ([True, False, None, np.nan, pd.NA], [True, False, None, None, None]),
- ([True, np.nan], [True, None]),
- ([True, pd.NA], [True, None]),
- ([np.nan, np.nan], [None, None]),
- (np.array([np.nan, np.nan], dtype=float), [None, None]),
- ],
-)
-def test_to_boolean_array_missing_indicators(a, b):
- result = pd.array(a, dtype="boolean")
- expected = pd.array(b, dtype="boolean")
- tm.assert_extension_array_equal(result, expected)
-
-
-@pytest.mark.parametrize(
- "values",
- [
- ["foo", "bar"],
- ["1", "2"],
- # "foo",
- [1, 2],
- [1.0, 2.0],
- pd.date_range("20130101", periods=2),
- np.array(["foo"]),
- np.array([1, 2]),
- np.array([1.0, 2.0]),
- [np.nan, {"a": 1}],
- ],
-)
-def test_to_boolean_array_error(values):
- # error in converting existing arrays to BooleanArray
- msg = "Need to pass bool-like value"
- with pytest.raises(TypeError, match=msg):
- pd.array(values, dtype="boolean")
-
-
-def test_to_boolean_array_from_integer_array():
- result = pd.array(np.array([1, 0, 1, 0]), dtype="boolean")
- expected = pd.array([True, False, True, False], dtype="boolean")
- tm.assert_extension_array_equal(result, expected)
-
- # with missing values
- result = pd.array(np.array([1, 0, 1, None]), dtype="boolean")
- expected = pd.array([True, False, True, None], dtype="boolean")
- tm.assert_extension_array_equal(result, expected)
-
-
-def test_to_boolean_array_from_float_array():
- result = pd.array(np.array([1.0, 0.0, 1.0, 0.0]), dtype="boolean")
- expected = pd.array([True, False, True, False], dtype="boolean")
- tm.assert_extension_array_equal(result, expected)
-
- # with missing values
- result = pd.array(np.array([1.0, 0.0, 1.0, np.nan]), dtype="boolean")
- expected = pd.array([True, False, True, None], dtype="boolean")
- tm.assert_extension_array_equal(result, expected)
-
-
-def test_to_boolean_array_integer_like():
- # integers of 0's and 1's
- result = pd.array([1, 0, 1, 0], dtype="boolean")
- expected = pd.array([True, False, True, False], dtype="boolean")
- tm.assert_extension_array_equal(result, expected)
-
- # with missing values
- result = pd.array([1, 0, 1, None], dtype="boolean")
- expected = pd.array([True, False, True, None], dtype="boolean")
- tm.assert_extension_array_equal(result, expected)
-
-
-def test_coerce_to_array():
- # TODO this is currently not public API
- values = np.array([True, False, True, False], dtype="bool")
- mask = np.array([False, False, False, True], dtype="bool")
- result = BooleanArray(*coerce_to_array(values, mask=mask))
- expected = BooleanArray(values, mask)
- tm.assert_extension_array_equal(result, expected)
- assert result._data is values
- assert result._mask is mask
- result = BooleanArray(*coerce_to_array(values, mask=mask, copy=True))
- expected = BooleanArray(values, mask)
- tm.assert_extension_array_equal(result, expected)
- assert result._data is not values
- assert result._mask is not mask
-
- # mixed missing from values and mask
- values = [True, False, None, False]
- mask = np.array([False, False, False, True], dtype="bool")
- result = BooleanArray(*coerce_to_array(values, mask=mask))
- expected = BooleanArray(
- np.array([True, False, True, True]), np.array([False, False, True, True])
- )
- tm.assert_extension_array_equal(result, expected)
- result = BooleanArray(*coerce_to_array(np.array(values, dtype=object), mask=mask))
- tm.assert_extension_array_equal(result, expected)
- result = BooleanArray(*coerce_to_array(values, mask=mask.tolist()))
- tm.assert_extension_array_equal(result, expected)
-
- # raise errors for wrong dimension
- values = np.array([True, False, True, False], dtype="bool")
- mask = np.array([False, False, False, True], dtype="bool")
-
- with pytest.raises(ValueError, match="values must be a 1D list-like"):
- coerce_to_array(values.reshape(1, -1))
-
- with pytest.raises(ValueError, match="mask must be a 1D list-like"):
- coerce_to_array(values, mask=mask.reshape(1, -1))
-
-
-def test_coerce_to_array_from_boolean_array():
- # passing BooleanArray to coerce_to_array
- values = np.array([True, False, True, False], dtype="bool")
- mask = np.array([False, False, False, True], dtype="bool")
- arr = BooleanArray(values, mask)
- result = BooleanArray(*coerce_to_array(arr))
- tm.assert_extension_array_equal(result, arr)
- # no copy
- assert result._data is arr._data
- assert result._mask is arr._mask
-
- result = BooleanArray(*coerce_to_array(arr), copy=True)
- tm.assert_extension_array_equal(result, arr)
- assert result._data is not arr._data
- assert result._mask is not arr._mask
-
- with pytest.raises(ValueError, match="cannot pass mask for BooleanArray input"):
- coerce_to_array(arr, mask=mask)
-
-
-def test_coerce_to_numpy_array():
- # with missing values -> object dtype
- arr = pd.array([True, False, None], dtype="boolean")
- result = np.array(arr)
- expected = np.array([True, False, pd.NA], dtype="object")
- tm.assert_numpy_array_equal(result, expected)
-
- # also with no missing values -> object dtype
- arr = pd.array([True, False, True], dtype="boolean")
- result = np.array(arr)
- expected = np.array([True, False, True], dtype="object")
- tm.assert_numpy_array_equal(result, expected)
-
- # force bool dtype
- result = np.array(arr, dtype="bool")
- expected = np.array([True, False, True], dtype="bool")
- tm.assert_numpy_array_equal(result, expected)
- # with missing values will raise error
- arr = pd.array([True, False, None], dtype="boolean")
- msg = (
- "cannot convert to 'bool'-dtype NumPy array with missing values. "
- "Specify an appropriate 'na_value' for this dtype."
- )
- with pytest.raises(ValueError, match=msg):
- np.array(arr, dtype="bool")
-
-
-def test_to_boolean_array_from_strings():
- result = BooleanArray._from_sequence_of_strings(
- np.array(["True", "False", np.nan], dtype=object)
- )
- expected = BooleanArray(
- np.array([True, False, False]), np.array([False, False, True])
- )
-
- tm.assert_extension_array_equal(result, expected)
-
-
-def test_to_boolean_array_from_strings_invalid_string():
- with pytest.raises(ValueError, match="cannot be cast"):
- BooleanArray._from_sequence_of_strings(["donkey"])
-
-
-def test_repr():
- df = pd.DataFrame({"A": pd.array([True, False, None], dtype="boolean")})
- expected = " A\n0 True\n1 False\n2 <NA>"
- assert repr(df) == expected
-
- expected = "0 True\n1 False\n2 <NA>\nName: A, dtype: boolean"
- assert repr(df.A) == expected
-
- expected = "<BooleanArray>\n[True, False, <NA>]\nLength: 3, dtype: boolean"
- assert repr(df.A.array) == expected
-
-
-@pytest.mark.parametrize("box", [True, False], ids=["series", "array"])
-def test_to_numpy(box):
- con = pd.Series if box else pd.array
- # default (with or without missing values) -> object dtype
- arr = con([True, False, True], dtype="boolean")
- result = arr.to_numpy()
- expected = np.array([True, False, True], dtype="object")
- tm.assert_numpy_array_equal(result, expected)
-
- arr = con([True, False, None], dtype="boolean")
- result = arr.to_numpy()
- expected = np.array([True, False, pd.NA], dtype="object")
- tm.assert_numpy_array_equal(result, expected)
-
- arr = con([True, False, None], dtype="boolean")
- result = arr.to_numpy(dtype="str")
- expected = np.array([True, False, pd.NA], dtype="<U5")
- tm.assert_numpy_array_equal(result, expected)
-
- # no missing values -> can convert to bool, otherwise raises
- arr = con([True, False, True], dtype="boolean")
- result = arr.to_numpy(dtype="bool")
- expected = np.array([True, False, True], dtype="bool")
- tm.assert_numpy_array_equal(result, expected)
-
- arr = con([True, False, None], dtype="boolean")
- with pytest.raises(ValueError, match="cannot convert to 'bool'-dtype"):
- result = arr.to_numpy(dtype="bool")
-
- # specify dtype and na_value
- arr = con([True, False, None], dtype="boolean")
- result = arr.to_numpy(dtype=object, na_value=None)
- expected = np.array([True, False, None], dtype="object")
- tm.assert_numpy_array_equal(result, expected)
-
- result = arr.to_numpy(dtype=bool, na_value=False)
- expected = np.array([True, False, False], dtype="bool")
- tm.assert_numpy_array_equal(result, expected)
-
- result = arr.to_numpy(dtype="int64", na_value=-99)
- expected = np.array([1, 0, -99], dtype="int64")
- tm.assert_numpy_array_equal(result, expected)
-
- result = arr.to_numpy(dtype="float64", na_value=np.nan)
- expected = np.array([1, 0, np.nan], dtype="float64")
- tm.assert_numpy_array_equal(result, expected)
-
- # converting to int or float without specifying na_value raises
- with pytest.raises(ValueError, match="cannot convert to 'int64'-dtype"):
- arr.to_numpy(dtype="int64")
- with pytest.raises(ValueError, match="cannot convert to 'float64'-dtype"):
- arr.to_numpy(dtype="float64")
-
-
-def test_to_numpy_copy():
- # to_numpy can be zero-copy if no missing values
- arr = pd.array([True, False, True], dtype="boolean")
- result = arr.to_numpy(dtype=bool)
- result[0] = False
- tm.assert_extension_array_equal(
- arr, pd.array([False, False, True], dtype="boolean")
- )
-
- arr = pd.array([True, False, True], dtype="boolean")
- result = arr.to_numpy(dtype=bool, copy=True)
- result[0] = False
- tm.assert_extension_array_equal(arr, pd.array([True, False, True], dtype="boolean"))
-
-
-def test_astype():
- # with missing values
- arr = pd.array([True, False, None], dtype="boolean")
-
- with pytest.raises(ValueError, match="cannot convert NA to integer"):
- arr.astype("int64")
-
- with pytest.raises(ValueError, match="cannot convert float NaN to"):
- arr.astype("bool")
-
- result = arr.astype("float64")
- expected = np.array([1, 0, np.nan], dtype="float64")
- tm.assert_numpy_array_equal(result, expected)
-
- result = arr.astype("str")
- expected = np.array(["True", "False", "<NA>"], dtype="object")
- tm.assert_numpy_array_equal(result, expected)
-
- # no missing values
- arr = pd.array([True, False, True], dtype="boolean")
- result = arr.astype("int64")
- expected = np.array([1, 0, 1], dtype="int64")
- tm.assert_numpy_array_equal(result, expected)
-
- result = arr.astype("bool")
- expected = np.array([True, False, True], dtype="bool")
- tm.assert_numpy_array_equal(result, expected)
-
-
-def test_astype_to_boolean_array():
- # astype to BooleanArray
- arr = pd.array([True, False, None], dtype="boolean")
-
- result = arr.astype("boolean")
- tm.assert_extension_array_equal(result, arr)
- result = arr.astype(pd.BooleanDtype())
- tm.assert_extension_array_equal(result, arr)
-
-
-def test_astype_to_integer_array():
- # astype to IntegerArray
- arr = pd.array([True, False, None], dtype="boolean")
-
- result = arr.astype("Int64")
- expected = pd.array([1, 0, None], dtype="Int64")
- tm.assert_extension_array_equal(result, expected)
-
-
-@pytest.mark.parametrize("na", [None, np.nan, pd.NA])
-def test_setitem_missing_values(na):
- arr = pd.array([True, False, None], dtype="boolean")
- expected = pd.array([True, None, None], dtype="boolean")
- arr[1] = na
- tm.assert_extension_array_equal(arr, expected)
-
-
-@pytest.mark.parametrize(
- "ufunc", [np.add, np.logical_or, np.logical_and, np.logical_xor]
-)
-def test_ufuncs_binary(ufunc):
- # two BooleanArrays
- a = pd.array([True, False, None], dtype="boolean")
- result = ufunc(a, a)
- expected = pd.array(ufunc(a._data, a._data), dtype="boolean")
- expected[a._mask] = np.nan
- tm.assert_extension_array_equal(result, expected)
-
- s = pd.Series(a)
- result = ufunc(s, a)
- expected = pd.Series(ufunc(a._data, a._data), dtype="boolean")
- expected[a._mask] = np.nan
- tm.assert_series_equal(result, expected)
-
- # Boolean with numpy array
- arr = np.array([True, True, False])
- result = ufunc(a, arr)
- expected = pd.array(ufunc(a._data, arr), dtype="boolean")
- expected[a._mask] = np.nan
- tm.assert_extension_array_equal(result, expected)
-
- result = ufunc(arr, a)
- expected = pd.array(ufunc(arr, a._data), dtype="boolean")
- expected[a._mask] = np.nan
- tm.assert_extension_array_equal(result, expected)
-
- # BooleanArray with scalar
- result = ufunc(a, True)
- expected = pd.array(ufunc(a._data, True), dtype="boolean")
- expected[a._mask] = np.nan
- tm.assert_extension_array_equal(result, expected)
-
- result = ufunc(True, a)
- expected = pd.array(ufunc(True, a._data), dtype="boolean")
- expected[a._mask] = np.nan
- tm.assert_extension_array_equal(result, expected)
-
- # not handled types
- with pytest.raises(TypeError):
- ufunc(a, "test")
-
-
-@pytest.mark.parametrize("ufunc", [np.logical_not])
-def test_ufuncs_unary(ufunc):
- a = pd.array([True, False, None], dtype="boolean")
- result = ufunc(a)
- expected = pd.array(ufunc(a._data), dtype="boolean")
- expected[a._mask] = np.nan
- tm.assert_extension_array_equal(result, expected)
-
- s = pd.Series(a)
- result = ufunc(s)
- expected = pd.Series(ufunc(a._data), dtype="boolean")
- expected[a._mask] = np.nan
- tm.assert_series_equal(result, expected)
-
-
-@pytest.mark.parametrize("values", [[True, False], [True, None]])
-def test_ufunc_reduce_raises(values):
- a = pd.array(values, dtype="boolean")
- with pytest.raises(NotImplementedError):
- np.add.reduce(a)
-
-
-class TestUnaryOps:
- def test_invert(self):
- a = pd.array([True, False, None], dtype="boolean")
- expected = pd.array([False, True, None], dtype="boolean")
- tm.assert_extension_array_equal(~a, expected)
-
- expected = pd.Series(expected, index=["a", "b", "c"], name="name")
- result = ~pd.Series(a, index=["a", "b", "c"], name="name")
- tm.assert_series_equal(result, expected)
-
- df = pd.DataFrame({"A": a, "B": [True, False, False]}, index=["a", "b", "c"])
- result = ~df
- expected = pd.DataFrame(
- {"A": expected, "B": [False, True, True]}, index=["a", "b", "c"]
- )
- tm.assert_frame_equal(result, expected)
-
-
-class TestLogicalOps(BaseOpsUtil):
- def test_numpy_scalars_ok(self, all_logical_operators):
- a = pd.array([True, False, None], dtype="boolean")
- op = getattr(a, all_logical_operators)
-
- tm.assert_extension_array_equal(op(True), op(np.bool(True)))
- tm.assert_extension_array_equal(op(False), op(np.bool(False)))
-
- def get_op_from_name(self, op_name):
- short_opname = op_name.strip("_")
- short_opname = short_opname if "xor" in short_opname else short_opname + "_"
- try:
- op = getattr(operator, short_opname)
- except AttributeError:
- # Assume it is the reverse operator
- rop = getattr(operator, short_opname[1:])
- op = lambda x, y: rop(y, x)
-
- return op
-
- def test_empty_ok(self, all_logical_operators):
- a = pd.array([], dtype="boolean")
- op_name = all_logical_operators
- result = getattr(a, op_name)(True)
- tm.assert_extension_array_equal(a, result)
-
- result = getattr(a, op_name)(False)
- tm.assert_extension_array_equal(a, result)
-
- # TODO: pd.NA
- # result = getattr(a, op_name)(pd.NA)
- # tm.assert_extension_array_equal(a, result)
-
- def test_logical_length_mismatch_raises(self, all_logical_operators):
- op_name = all_logical_operators
- a = pd.array([True, False, None], dtype="boolean")
- msg = "Lengths must match to compare"
-
- with pytest.raises(ValueError, match=msg):
- getattr(a, op_name)([True, False])
-
- with pytest.raises(ValueError, match=msg):
- getattr(a, op_name)(np.array([True, False]))
-
- with pytest.raises(ValueError, match=msg):
- getattr(a, op_name)(pd.array([True, False], dtype="boolean"))
-
- def test_logical_nan_raises(self, all_logical_operators):
- op_name = all_logical_operators
- a = pd.array([True, False, None], dtype="boolean")
- msg = "Got float instead"
-
- with pytest.raises(TypeError, match=msg):
- getattr(a, op_name)(np.nan)
-
- @pytest.mark.parametrize("other", ["a", 1])
- def test_non_bool_or_na_other_raises(self, other, all_logical_operators):
- a = pd.array([True, False], dtype="boolean")
- with pytest.raises(TypeError, match=str(type(other).__name__)):
- getattr(a, all_logical_operators)(other)
-
- def test_kleene_or(self):
- # A clear test of behavior.
- a = pd.array([True] * 3 + [False] * 3 + [None] * 3, dtype="boolean")
- b = pd.array([True, False, None] * 3, dtype="boolean")
- result = a | b
- expected = pd.array(
- [True, True, True, True, False, None, True, None, None], dtype="boolean"
- )
- tm.assert_extension_array_equal(result, expected)
-
- result = b | a
- tm.assert_extension_array_equal(result, expected)
-
- # ensure we haven't mutated anything inplace
- tm.assert_extension_array_equal(
- a, pd.array([True] * 3 + [False] * 3 + [None] * 3, dtype="boolean")
- )
- tm.assert_extension_array_equal(
- b, pd.array([True, False, None] * 3, dtype="boolean")
- )
-
- @pytest.mark.parametrize(
- "other, expected",
- [
- (pd.NA, [True, None, None]),
- (True, [True, True, True]),
- (np.bool_(True), [True, True, True]),
- (False, [True, False, None]),
- (np.bool_(False), [True, False, None]),
- ],
- )
- def test_kleene_or_scalar(self, other, expected):
- # TODO: test True & False
- a = pd.array([True, False, None], dtype="boolean")
- result = a | other
- expected = pd.array(expected, dtype="boolean")
- tm.assert_extension_array_equal(result, expected)
-
- result = other | a
- tm.assert_extension_array_equal(result, expected)
-
- # ensure we haven't mutated anything inplace
- tm.assert_extension_array_equal(
- a, pd.array([True, False, None], dtype="boolean")
- )
-
- def test_kleene_and(self):
- # A clear test of behavior.
- a = pd.array([True] * 3 + [False] * 3 + [None] * 3, dtype="boolean")
- b = pd.array([True, False, None] * 3, dtype="boolean")
- result = a & b
- expected = pd.array(
- [True, False, None, False, False, False, None, False, None], dtype="boolean"
- )
- tm.assert_extension_array_equal(result, expected)
-
- result = b & a
- tm.assert_extension_array_equal(result, expected)
-
- # ensure we haven't mutated anything inplace
- tm.assert_extension_array_equal(
- a, pd.array([True] * 3 + [False] * 3 + [None] * 3, dtype="boolean")
- )
- tm.assert_extension_array_equal(
- b, pd.array([True, False, None] * 3, dtype="boolean")
- )
-
- @pytest.mark.parametrize(
- "other, expected",
- [
- (pd.NA, [None, False, None]),
- (True, [True, False, None]),
- (False, [False, False, False]),
- (np.bool_(True), [True, False, None]),
- (np.bool_(False), [False, False, False]),
- ],
- )
- def test_kleene_and_scalar(self, other, expected):
- a = pd.array([True, False, None], dtype="boolean")
- result = a & other
- expected = pd.array(expected, dtype="boolean")
- tm.assert_extension_array_equal(result, expected)
-
- result = other & a
- tm.assert_extension_array_equal(result, expected)
-
- # ensure we haven't mutated anything inplace
- tm.assert_extension_array_equal(
- a, pd.array([True, False, None], dtype="boolean")
- )
-
- def test_kleene_xor(self):
- a = pd.array([True] * 3 + [False] * 3 + [None] * 3, dtype="boolean")
- b = pd.array([True, False, None] * 3, dtype="boolean")
- result = a ^ b
- expected = pd.array(
- [False, True, None, True, False, None, None, None, None], dtype="boolean"
- )
- tm.assert_extension_array_equal(result, expected)
-
- result = b ^ a
- tm.assert_extension_array_equal(result, expected)
-
- # ensure we haven't mutated anything inplace
- tm.assert_extension_array_equal(
- a, pd.array([True] * 3 + [False] * 3 + [None] * 3, dtype="boolean")
- )
- tm.assert_extension_array_equal(
- b, pd.array([True, False, None] * 3, dtype="boolean")
- )
-
- @pytest.mark.parametrize(
- "other, expected",
- [
- (pd.NA, [None, None, None]),
- (True, [False, True, None]),
- (np.bool_(True), [False, True, None]),
- (np.bool_(False), [True, False, None]),
- ],
- )
- def test_kleene_xor_scalar(self, other, expected):
- a = pd.array([True, False, None], dtype="boolean")
- result = a ^ other
- expected = pd.array(expected, dtype="boolean")
- tm.assert_extension_array_equal(result, expected)
-
- result = other ^ a
- tm.assert_extension_array_equal(result, expected)
-
- # ensure we haven't mutated anything inplace
- tm.assert_extension_array_equal(
- a, pd.array([True, False, None], dtype="boolean")
- )
-
- @pytest.mark.parametrize(
- "other", [True, False, pd.NA, [True, False, None] * 3],
- )
- def test_no_masked_assumptions(self, other, all_logical_operators):
- # The logical operations should not assume that masked values are False!
- a = pd.arrays.BooleanArray(
- np.array([True, True, True, False, False, False, True, False, True]),
- np.array([False] * 6 + [True, True, True]),
- )
- b = pd.array([True] * 3 + [False] * 3 + [None] * 3, dtype="boolean")
- if isinstance(other, list):
- other = pd.array(other, dtype="boolean")
-
- result = getattr(a, all_logical_operators)(other)
- expected = getattr(b, all_logical_operators)(other)
- tm.assert_extension_array_equal(result, expected)
-
- if isinstance(other, BooleanArray):
- other._data[other._mask] = True
- a._data[a._mask] = False
-
- result = getattr(a, all_logical_operators)(other)
- expected = getattr(b, all_logical_operators)(other)
- tm.assert_extension_array_equal(result, expected)
-
-
-class TestComparisonOps(BaseOpsUtil):
- def _compare_other(self, data, op_name, other):
- op = self.get_op_from_name(op_name)
-
- # array
- result = pd.Series(op(data, other))
- expected = pd.Series(op(data._data, other), dtype="boolean")
- # propagate NAs
- expected[data._mask] = pd.NA
-
- tm.assert_series_equal(result, expected)
-
- # series
- s = pd.Series(data)
- result = op(s, other)
-
- expected = pd.Series(data._data)
- expected = op(expected, other)
- expected = expected.astype("boolean")
- # propagate NAs
- expected[data._mask] = pd.NA
-
- tm.assert_series_equal(result, expected)
-
- def test_compare_scalar(self, data, all_compare_operators):
- op_name = all_compare_operators
- self._compare_other(data, op_name, True)
-
- def test_compare_array(self, data, all_compare_operators):
- op_name = all_compare_operators
- other = pd.array([True] * len(data), dtype="boolean")
- self._compare_other(data, op_name, other)
- other = np.array([True] * len(data))
- self._compare_other(data, op_name, other)
- other = pd.Series([True] * len(data))
- self._compare_other(data, op_name, other)
-
- @pytest.mark.parametrize("other", [True, False, pd.NA])
- def test_scalar(self, other, all_compare_operators):
- op = self.get_op_from_name(all_compare_operators)
- a = pd.array([True, False, None], dtype="boolean")
-
- result = op(a, other)
-
- if other is pd.NA:
- expected = pd.array([None, None, None], dtype="boolean")
- else:
- values = op(a._data, other)
- expected = BooleanArray(values, a._mask, copy=True)
- tm.assert_extension_array_equal(result, expected)
-
- # ensure we haven't mutated anything inplace
- result[0] = None
- tm.assert_extension_array_equal(
- a, pd.array([True, False, None], dtype="boolean")
- )
-
- def test_array(self, all_compare_operators):
- op = self.get_op_from_name(all_compare_operators)
- a = pd.array([True] * 3 + [False] * 3 + [None] * 3, dtype="boolean")
- b = pd.array([True, False, None] * 3, dtype="boolean")
-
- result = op(a, b)
-
- values = op(a._data, b._data)
- mask = a._mask | b._mask
- expected = BooleanArray(values, mask)
- tm.assert_extension_array_equal(result, expected)
-
- # ensure we haven't mutated anything inplace
- result[0] = None
- tm.assert_extension_array_equal(
- a, pd.array([True] * 3 + [False] * 3 + [None] * 3, dtype="boolean")
- )
- tm.assert_extension_array_equal(
- b, pd.array([True, False, None] * 3, dtype="boolean")
- )
-
-
-class TestArithmeticOps(BaseOpsUtil):
- def test_error(self, data, all_arithmetic_operators):
- # invalid ops
-
- op = all_arithmetic_operators
- s = pd.Series(data)
- ops = getattr(s, op)
- opa = getattr(data, op)
-
- # invalid scalars
- with pytest.raises(TypeError):
- ops("foo")
- with pytest.raises(TypeError):
- ops(pd.Timestamp("20180101"))
-
- # invalid array-likes
- if op not in ("__mul__", "__rmul__"):
- # TODO(extension) numpy's mul with object array sees booleans as numbers
- with pytest.raises(TypeError):
- ops(pd.Series("foo", index=s.index))
-
- # 2d
- result = opa(pd.DataFrame({"A": s}))
- assert result is NotImplemented
-
- with pytest.raises(NotImplementedError):
- opa(np.arange(len(s)).reshape(-1, len(s)))
-
-
-@pytest.mark.parametrize("dropna", [True, False])
-def test_reductions_return_types(dropna, data, all_numeric_reductions):
- op = all_numeric_reductions
- s = pd.Series(data)
- if dropna:
- s = s.dropna()
-
- if op in ("sum", "prod"):
- assert isinstance(getattr(s, op)(), np.int64)
- elif op in ("min", "max"):
- assert isinstance(getattr(s, op)(), np.bool_)
- else:
- # "mean", "std", "var", "median", "kurt", "skew"
- assert isinstance(getattr(s, op)(), np.float64)
-
-
-@pytest.mark.parametrize(
- "values, exp_any, exp_all, exp_any_noskip, exp_all_noskip",
- [
- ([True, pd.NA], True, True, True, pd.NA),
- ([False, pd.NA], False, False, pd.NA, False),
- ([pd.NA], False, True, pd.NA, pd.NA),
- ([], False, True, False, True),
- ],
-)
-def test_any_all(values, exp_any, exp_all, exp_any_noskip, exp_all_noskip):
- # the methods return numpy scalars
- exp_any = pd.NA if exp_any is pd.NA else np.bool_(exp_any)
- exp_all = pd.NA if exp_all is pd.NA else np.bool_(exp_all)
- exp_any_noskip = pd.NA if exp_any_noskip is pd.NA else np.bool_(exp_any_noskip)
- exp_all_noskip = pd.NA if exp_all_noskip is pd.NA else np.bool_(exp_all_noskip)
-
- for con in [pd.array, pd.Series]:
- a = con(values, dtype="boolean")
- assert a.any() is exp_any
- assert a.all() is exp_all
- assert a.any(skipna=False) is exp_any_noskip
- assert a.all(skipna=False) is exp_all_noskip
-
- assert np.any(a.any()) is exp_any
- assert np.all(a.all()) is exp_all
-
-
-# TODO when BooleanArray coerces to object dtype numpy array, need to do conversion
-# manually in the indexing code
-# def test_indexing_boolean_mask():
-# arr = pd.array([1, 2, 3, 4], dtype="Int64")
-# mask = pd.array([True, False, True, False], dtype="boolean")
-# result = arr[mask]
-# expected = pd.array([1, 3], dtype="Int64")
-# tm.assert_extension_array_equal(result, expected)
-
-# # missing values -> error
-# mask = pd.array([True, False, True, None], dtype="boolean")
-# with pytest.raises(IndexError):
-# result = arr[mask]
-
-
-@td.skip_if_no("pyarrow", min_version="0.15.0")
-def test_arrow_array(data):
- # protocol added in 0.15.0
- import pyarrow as pa
-
- arr = pa.array(data)
-
- # TODO use to_numpy(na_value=None) here
- data_object = np.array(data, dtype=object)
- data_object[data.isna()] = None
- expected = pa.array(data_object, type=pa.bool_(), from_pandas=True)
- assert arr.equals(expected)
-
-
-@td.skip_if_no("pyarrow", min_version="0.15.1.dev")
-def test_arrow_roundtrip():
- # roundtrip possible from arrow 1.0.0
- import pyarrow as pa
-
- data = pd.array([True, False, None], dtype="boolean")
- df = pd.DataFrame({"a": data})
- table = pa.table(df)
- assert table.field("a").type == "bool"
- result = table.to_pandas()
- assert isinstance(result["a"].dtype, pd.BooleanDtype)
- tm.assert_frame_equal(result, df)
-
-
-def test_value_counts_na():
- arr = pd.array([True, False, pd.NA], dtype="boolean")
- result = arr.value_counts(dropna=False)
- expected = pd.Series([1, 1, 1], index=[True, False, pd.NA], dtype="Int64")
- tm.assert_series_equal(result, expected)
-
- result = arr.value_counts(dropna=True)
- expected = pd.Series([1, 1], index=[True, False], dtype="Int64")
- tm.assert_series_equal(result, expected)
-
-
-def test_diff():
- a = pd.array(
- [True, True, False, False, True, None, True, None, False], dtype="boolean"
- )
- result = pd.core.algorithms.diff(a, 1)
- expected = pd.array(
- [None, False, True, False, True, None, None, None, None], dtype="boolean"
- )
- tm.assert_extension_array_equal(result, expected)
-
- s = pd.Series(a)
- result = s.diff()
- expected = pd.Series(expected)
- tm.assert_series_equal(result, expected)
| Apologies for the size of this PR, but it looks like the ExtensionArray tests sometimes have their own directories by type, and other times all tests for the type are in a single file (e.g., https://github.com/pandas-dev/pandas/tree/master/pandas/tests/arrays/categorical vs. https://github.com/pandas-dev/pandas/blob/master/pandas/tests/arrays/test_boolean.py). I think the "whole directory for type" structure makes it easier to find and add new tests, so I'm doing that here for the Boolean tests. There shouldn't be any changes other than moving things around. | https://api.github.com/repos/pandas-dev/pandas/pulls/32780 | 2020-03-17T17:57:07Z | 2020-03-21T21:09:27Z | 2020-03-21T21:09:27Z | 2020-03-21T21:14:27Z |
PERF: block-wise arithmetic for frame-with-frame | diff --git a/asv_bench/benchmarks/arithmetic.py b/asv_bench/benchmarks/arithmetic.py
index 8aa29468559b2..08a11ba2607a5 100644
--- a/asv_bench/benchmarks/arithmetic.py
+++ b/asv_bench/benchmarks/arithmetic.py
@@ -101,6 +101,59 @@ def time_frame_op_with_series_axis1(self, opname):
getattr(operator, opname)(self.df, self.ser)
+class FrameWithFrameWide:
+ # Many-columns, mixed dtypes
+
+ params = [
+ [
+ # GH#32779 has discussion of which operators are included here
+ operator.add,
+ operator.floordiv,
+ operator.gt,
+ ]
+ ]
+ param_names = ["op"]
+
+ def setup(self, op):
+ # we choose dtypes so as to make the blocks
+ # a) not perfectly match between right and left
+ # b) appreciably bigger than single columns
+ n_cols = 2000
+ n_rows = 500
+
+ # construct dataframe with 2 blocks
+ arr1 = np.random.randn(n_rows, int(n_cols / 2)).astype("f8")
+ arr2 = np.random.randn(n_rows, int(n_cols / 2)).astype("f4")
+ df = pd.concat(
+ [pd.DataFrame(arr1), pd.DataFrame(arr2)], axis=1, ignore_index=True,
+ )
+ # should already be the case, but just to be sure
+ df._consolidate_inplace()
+
+ # TODO: GH#33198 the setting here shoudlnt need two steps
+ arr1 = np.random.randn(n_rows, int(n_cols / 4)).astype("f8")
+ arr2 = np.random.randn(n_rows, int(n_cols / 2)).astype("i8")
+ arr3 = np.random.randn(n_rows, int(n_cols / 4)).astype("f8")
+ df2 = pd.concat(
+ [pd.DataFrame(arr1), pd.DataFrame(arr2), pd.DataFrame(arr3)],
+ axis=1,
+ ignore_index=True,
+ )
+ # should already be the case, but just to be sure
+ df2._consolidate_inplace()
+
+ self.left = df
+ self.right = df2
+
+ def time_op_different_blocks(self, op):
+ # blocks (and dtypes) are not aligned
+ op(self.left, self.right)
+
+ def time_op_same_blocks(self, op):
+ # blocks (and dtypes) are aligned
+ op(self.left, self.left)
+
+
class Ops:
params = [[True, False], ["default", 1]]
diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index 73892da2cbf71..e04c8cbcf68c6 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -610,7 +610,7 @@ Performance improvements
and :meth:`~pandas.core.groupby.groupby.Groupby.last` (:issue:`34178`)
- Performance improvement in :func:`factorize` for nullable (integer and boolean) dtypes (:issue:`33064`).
- Performance improvement in reductions (sum, prod, min, max) for nullable (integer and boolean) dtypes (:issue:`30982`, :issue:`33261`, :issue:`33442`).
-
+- Performance improvement in arithmetic operations between two :class:`DataFrame` objects (:issue:`32779`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/_libs/internals.pyx b/pandas/_libs/internals.pyx
index 1aa95e92b73d1..db452cb0f1fa4 100644
--- a/pandas/_libs/internals.pyx
+++ b/pandas/_libs/internals.pyx
@@ -49,7 +49,7 @@ cdef class BlockPlacement:
else:
# Cython memoryview interface requires ndarray to be writeable.
arr = np.require(val, dtype=np.int64, requirements='W')
- assert arr.ndim == 1
+ assert arr.ndim == 1, arr.shape
self._as_array = arr
self._has_array = True
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index 145654805cc6b..fabe0f03be011 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -97,6 +97,10 @@ def _validate_comparison_value(self, other):
@unpack_zerodim_and_defer(opname)
def wrapper(self, other):
+ if self.ndim > 1 and getattr(other, "shape", None) == self.shape:
+ # TODO: handle 2D-like listlikes
+ return op(self.ravel(), other.ravel()).reshape(self.shape)
+
try:
other = _validate_comparison_value(self, other)
except InvalidComparison:
@@ -1307,10 +1311,12 @@ def _addsub_object_array(self, other: np.ndarray, op):
"""
assert op in [operator.add, operator.sub]
if len(other) == 1:
+ # If both 1D then broadcasting is unambiguous
+ # TODO(EA2D): require self.ndim == other.ndim here
return op(self, other[0])
warnings.warn(
- "Adding/subtracting array of DateOffsets to "
+ "Adding/subtracting object-dtype array to "
f"{type(self).__name__} not vectorized",
PerformanceWarning,
)
@@ -1318,7 +1324,7 @@ def _addsub_object_array(self, other: np.ndarray, op):
# Caller is responsible for broadcasting if necessary
assert self.shape == other.shape, (self.shape, other.shape)
- res_values = op(self.astype("O"), np.array(other))
+ res_values = op(self.astype("O"), np.asarray(other))
result = array(res_values.ravel())
result = extract_array(result, extract_numpy=True).reshape(self.shape)
return result
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 31015e3095e7d..f2d8e38df6842 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -455,6 +455,7 @@ def __init__(
mgr = self._init_mgr(
data, axes=dict(index=index, columns=columns), dtype=dtype, copy=copy
)
+
elif isinstance(data, dict):
mgr = init_dict(data, index, columns, dtype=dtype)
elif isinstance(data, ma.MaskedArray):
@@ -5754,10 +5755,11 @@ def _construct_result(self, result) -> "DataFrame":
-------
DataFrame
"""
- out = self._constructor(result, index=self.index, copy=False)
+ out = self._constructor(result, copy=False)
# Pin columns instead of passing to constructor for compat with
# non-unique columns case
out.columns = self.columns
+ out.index = self.index
return out
def combine(
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 4f6d84e52ea54..590b92481feca 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -1269,12 +1269,22 @@ def reindex_indexer(
return type(self).from_blocks(new_blocks, new_axes)
- def _slice_take_blocks_ax0(self, slice_or_indexer, fill_value=lib.no_default):
+ def _slice_take_blocks_ax0(
+ self, slice_or_indexer, fill_value=lib.no_default, only_slice: bool = False
+ ):
"""
Slice/take blocks along axis=0.
Overloaded for SingleBlock
+ Parameters
+ ----------
+ slice_or_indexer : slice, ndarray[bool], or list-like of ints
+ fill_value : scalar, default lib.no_default
+ only_slice : bool, default False
+ If True, we always return views on existing arrays, never copies.
+ This is used when called from ops.blockwise.operate_blockwise.
+
Returns
-------
new_blocks : list of Block
@@ -1298,14 +1308,23 @@ def _slice_take_blocks_ax0(self, slice_or_indexer, fill_value=lib.no_default):
if allow_fill and fill_value is None:
_, fill_value = maybe_promote(blk.dtype)
- return [
- blk.take_nd(
- slobj,
- axis=0,
- new_mgr_locs=slice(0, sllen),
- fill_value=fill_value,
- )
- ]
+ if not allow_fill and only_slice:
+ # GH#33597 slice instead of take, so we get
+ # views instead of copies
+ blocks = [
+ blk.getitem_block([ml], new_mgr_locs=i)
+ for i, ml in enumerate(slobj)
+ ]
+ return blocks
+ else:
+ return [
+ blk.take_nd(
+ slobj,
+ axis=0,
+ new_mgr_locs=slice(0, sllen),
+ fill_value=fill_value,
+ )
+ ]
if sl_type in ("slice", "mask"):
blknos = self.blknos[slobj]
@@ -1342,11 +1361,25 @@ def _slice_take_blocks_ax0(self, slice_or_indexer, fill_value=lib.no_default):
blocks.append(newblk)
else:
- blocks.append(
- blk.take_nd(
- blklocs[mgr_locs.indexer], axis=0, new_mgr_locs=mgr_locs,
- )
- )
+ # GH#32779 to avoid the performance penalty of copying,
+ # we may try to only slice
+ taker = blklocs[mgr_locs.indexer]
+ max_len = max(len(mgr_locs), taker.max() + 1)
+ if only_slice:
+ taker = lib.maybe_indices_to_slice(taker, max_len)
+
+ if isinstance(taker, slice):
+ nb = blk.getitem_block(taker, new_mgr_locs=mgr_locs)
+ blocks.append(nb)
+ elif only_slice:
+ # GH#33597 slice instead of take, so we get
+ # views instead of copies
+ for i, ml in zip(taker, mgr_locs):
+ nb = blk.getitem_block([i], new_mgr_locs=ml)
+ blocks.append(nb)
+ else:
+ nb = blk.take_nd(taker, axis=0, new_mgr_locs=mgr_locs)
+ blocks.append(nb)
return blocks
diff --git a/pandas/core/ops/__init__.py b/pandas/core/ops/__init__.py
index da1caea13b598..585e6d0eb0811 100644
--- a/pandas/core/ops/__init__.py
+++ b/pandas/core/ops/__init__.py
@@ -26,6 +26,7 @@
logical_op,
)
from pandas.core.ops.array_ops import comp_method_OBJECT_ARRAY # noqa:F401
+from pandas.core.ops.blockwise import operate_blockwise
from pandas.core.ops.common import unpack_zerodim_and_defer
from pandas.core.ops.dispatch import should_series_dispatch
from pandas.core.ops.docstrings import (
@@ -325,8 +326,9 @@ def dispatch_to_series(left, right, func, str_rep=None, axis=None):
elif isinstance(right, ABCDataFrame):
assert right._indexed_same(left)
- def column_op(a, b):
- return {i: func(a.iloc[:, i], b.iloc[:, i]) for i in range(len(a.columns))}
+ array_op = get_array_op(func, str_rep=str_rep)
+ bm = operate_blockwise(left, right, array_op)
+ return type(left)(bm)
elif isinstance(right, ABCSeries) and axis == "columns":
# We only get here if called via _combine_series_frame,
diff --git a/pandas/core/ops/array_ops.py b/pandas/core/ops/array_ops.py
index 59ac2a2071f0a..eef42592d2b30 100644
--- a/pandas/core/ops/array_ops.py
+++ b/pandas/core/ops/array_ops.py
@@ -6,6 +6,7 @@
from functools import partial
import operator
from typing import Any, Optional, Tuple
+import warnings
import numpy as np
@@ -120,7 +121,7 @@ def masked_arith_op(x: np.ndarray, y, op):
return result
-def define_na_arithmetic_op(op, str_rep: str):
+def define_na_arithmetic_op(op, str_rep: Optional[str]):
def na_op(x, y):
return na_arithmetic_op(x, y, op, str_rep)
@@ -191,7 +192,8 @@ def arithmetic_op(left: ArrayLike, right: Any, op, str_rep: str):
# NB: We assume that extract_array has already been called
# on `left` and `right`.
lvalues = maybe_upcast_datetimelike_array(left)
- rvalues = maybe_upcast_for_op(right, lvalues.shape)
+ rvalues = maybe_upcast_datetimelike_array(right)
+ rvalues = maybe_upcast_for_op(rvalues, lvalues.shape)
if should_extension_dispatch(lvalues, rvalues) or isinstance(rvalues, Timedelta):
# Timedelta is included because numexpr will fail on it, see GH#31457
@@ -254,8 +256,13 @@ def comparison_op(
res_values = comp_method_OBJECT_ARRAY(op, lvalues, rvalues)
else:
- with np.errstate(all="ignore"):
- res_values = na_arithmetic_op(lvalues, rvalues, op, str_rep, is_cmp=True)
+ with warnings.catch_warnings():
+ # suppress warnings from numpy about element-wise comparison
+ warnings.simplefilter("ignore", DeprecationWarning)
+ with np.errstate(all="ignore"):
+ res_values = na_arithmetic_op(
+ lvalues, rvalues, op, str_rep, is_cmp=True
+ )
return res_values
diff --git a/pandas/core/ops/blockwise.py b/pandas/core/ops/blockwise.py
new file mode 100644
index 0000000000000..f41a30b136637
--- /dev/null
+++ b/pandas/core/ops/blockwise.py
@@ -0,0 +1,102 @@
+from typing import TYPE_CHECKING, List, Tuple
+
+import numpy as np
+
+from pandas._typing import ArrayLike
+
+if TYPE_CHECKING:
+ from pandas.core.internals.blocks import Block # noqa:F401
+
+
+def operate_blockwise(left, right, array_op):
+ # At this point we have already checked
+ # assert right._indexed_same(left)
+
+ res_blks: List["Block"] = []
+ rmgr = right._mgr
+ for n, blk in enumerate(left._mgr.blocks):
+ locs = blk.mgr_locs
+ blk_vals = blk.values
+
+ left_ea = not isinstance(blk_vals, np.ndarray)
+
+ rblks = rmgr._slice_take_blocks_ax0(locs.indexer, only_slice=True)
+
+ # Assertions are disabled for performance, but should hold:
+ # if left_ea:
+ # assert len(locs) == 1, locs
+ # assert len(rblks) == 1, rblks
+ # assert rblks[0].shape[0] == 1, rblks[0].shape
+
+ for k, rblk in enumerate(rblks):
+ right_ea = not isinstance(rblk.values, np.ndarray)
+
+ lvals, rvals = _get_same_shape_values(blk, rblk, left_ea, right_ea)
+
+ res_values = array_op(lvals, rvals)
+ if left_ea and not right_ea and hasattr(res_values, "reshape"):
+ res_values = res_values.reshape(1, -1)
+ nbs = rblk._split_op_result(res_values)
+
+ # Assertions are disabled for performance, but should hold:
+ # if right_ea or left_ea:
+ # assert len(nbs) == 1
+ # else:
+ # assert res_values.shape == lvals.shape, (res_values.shape, lvals.shape)
+
+ _reset_block_mgr_locs(nbs, locs)
+
+ res_blks.extend(nbs)
+
+ # Assertions are disabled for performance, but should hold:
+ # slocs = {y for nb in res_blks for y in nb.mgr_locs.as_array}
+ # nlocs = sum(len(nb.mgr_locs.as_array) for nb in res_blks)
+ # assert nlocs == len(left.columns), (nlocs, len(left.columns))
+ # assert len(slocs) == nlocs, (len(slocs), nlocs)
+ # assert slocs == set(range(nlocs)), slocs
+
+ new_mgr = type(rmgr)(res_blks, axes=rmgr.axes, do_integrity_check=False)
+ return new_mgr
+
+
+def _reset_block_mgr_locs(nbs: List["Block"], locs):
+ """
+ Reset mgr_locs to correspond to our original DataFrame.
+ """
+ for nb in nbs:
+ nblocs = locs.as_array[nb.mgr_locs.indexer]
+ nb.mgr_locs = nblocs
+ # Assertions are disabled for performance, but should hold:
+ # assert len(nblocs) == nb.shape[0], (len(nblocs), nb.shape)
+ # assert all(x in locs.as_array for x in nb.mgr_locs.as_array)
+
+
+def _get_same_shape_values(
+ lblk: "Block", rblk: "Block", left_ea: bool, right_ea: bool
+) -> Tuple[ArrayLike, ArrayLike]:
+ """
+ Slice lblk.values to align with rblk. Squeeze if we have EAs.
+ """
+ lvals = lblk.values
+ rvals = rblk.values
+
+ # Require that the indexing into lvals be slice-like
+ assert rblk.mgr_locs.is_slice_like, rblk.mgr_locs
+
+    # TODO(EA2D): with 2D EAs only this first clause would be needed
+ if not (left_ea or right_ea):
+ lvals = lvals[rblk.mgr_locs.indexer, :]
+ assert lvals.shape == rvals.shape, (lvals.shape, rvals.shape)
+ elif left_ea and right_ea:
+ assert lvals.shape == rvals.shape, (lvals.shape, rvals.shape)
+ elif right_ea:
+ # lvals are 2D, rvals are 1D
+ lvals = lvals[rblk.mgr_locs.indexer, :]
+ assert lvals.shape[0] == 1, lvals.shape
+ lvals = lvals[0, :]
+ else:
+ # lvals are 1D, rvals are 2D
+ assert rvals.shape[0] == 1, rvals.shape
+ rvals = rvals[0, :]
+
+ return lvals, rvals
diff --git a/pandas/tests/arithmetic/common.py b/pandas/tests/arithmetic/common.py
index ccc49adc5da82..755fbd0d9036c 100644
--- a/pandas/tests/arithmetic/common.py
+++ b/pandas/tests/arithmetic/common.py
@@ -70,7 +70,14 @@ def assert_invalid_comparison(left, right, box):
result = right != left
tm.assert_equal(result, ~expected)
- msg = "Invalid comparison between|Cannot compare type|not supported between"
+ msg = "|".join(
+ [
+ "Invalid comparison between",
+ "Cannot compare type",
+ "not supported between",
+ "invalid type promotion",
+ ]
+ )
with pytest.raises(TypeError, match=msg):
left < right
with pytest.raises(TypeError, match=msg):
diff --git a/pandas/tests/arithmetic/test_datetime64.py b/pandas/tests/arithmetic/test_datetime64.py
index 8c480faa4ee81..b3f4d5f5d9ee5 100644
--- a/pandas/tests/arithmetic/test_datetime64.py
+++ b/pandas/tests/arithmetic/test_datetime64.py
@@ -962,7 +962,9 @@ def test_dt64arr_sub_dt64object_array(self, box_with_array, tz_naive_fixture):
obj = tm.box_expected(dti, box_with_array)
expected = tm.box_expected(expected, box_with_array)
- warn = PerformanceWarning if box_with_array is not pd.DataFrame else None
+ warn = None
+ if box_with_array is not pd.DataFrame or tz_naive_fixture is None:
+ warn = PerformanceWarning
with tm.assert_produces_warning(warn):
result = obj - obj.astype(object)
tm.assert_equal(result, expected)
@@ -1465,7 +1467,7 @@ def test_dt64arr_add_sub_offset_array(
other = tm.box_expected(other, box_with_array)
warn = PerformanceWarning
- if box_with_array is pd.DataFrame and not (tz is None and not box_other):
+ if box_with_array is pd.DataFrame and tz is not None:
warn = None
with tm.assert_produces_warning(warn):
res = op(dtarr, other)
diff --git a/pandas/tests/arithmetic/test_timedelta64.py b/pandas/tests/arithmetic/test_timedelta64.py
index 65e3c6a07d4f3..904846c5fa099 100644
--- a/pandas/tests/arithmetic/test_timedelta64.py
+++ b/pandas/tests/arithmetic/test_timedelta64.py
@@ -552,8 +552,7 @@ def test_tda_add_dt64_object_array(self, box_with_array, tz_naive_fixture):
obj = tm.box_expected(tdi, box)
other = tm.box_expected(dti, box)
- warn = PerformanceWarning if box is not pd.DataFrame else None
- with tm.assert_produces_warning(warn):
+ with tm.assert_produces_warning(PerformanceWarning):
result = obj + other.astype(object)
tm.assert_equal(result, other)
diff --git a/pandas/tests/frame/test_arithmetic.py b/pandas/tests/frame/test_arithmetic.py
index b9102b1f84c4a..5cb27c697a64d 100644
--- a/pandas/tests/frame/test_arithmetic.py
+++ b/pandas/tests/frame/test_arithmetic.py
@@ -49,9 +49,11 @@ def check(df, df2):
)
tm.assert_frame_equal(result, expected)
- msg = re.escape(
- "Invalid comparison between dtype=datetime64[ns] and ndarray"
- )
+ msgs = [
+ r"Invalid comparison between dtype=datetime64\[ns\] and ndarray",
+ "invalid type promotion",
+ ]
+ msg = "|".join(msgs)
with pytest.raises(TypeError, match=msg):
x >= y
with pytest.raises(TypeError, match=msg):
| - [ ] closes #xxxx
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
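
For context, a toy public-API example (not part of this PR's tests) of the arithmetic that now routes through `operate_blockwise`: each dtype block of the left frame is paired with the aligned values from the right frame, and the array op runs once per block pair.

```python
import pandas as pd

left = pd.DataFrame({"a": [1, 2], "b": [3.0, 4.0]})    # one int block, one float block
right = pd.DataFrame({"a": [10, 20], "b": [0.5, 0.5]})
left + right   # result is unchanged; internally the op now runs per block pair
```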
| https://api.github.com/repos/pandas-dev/pandas/pulls/32779 | 2020-03-17T17:01:08Z | 2020-05-19T13:16:14Z | 2020-05-19T13:16:14Z | 2020-05-19T15:48:31Z |
CLN: .values -> ._values | diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index 52423c4008399..2cae09ed08f36 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -243,8 +243,11 @@ def _simple_new(cls, values: Categorical, name: Label = None):
@Appender(Index._shallow_copy.__doc__)
def _shallow_copy(self, values=None, name: Label = no_default):
+ name = self.name if name is no_default else name
+
if values is not None:
values = Categorical(values, dtype=self.dtype)
+
return super()._shallow_copy(values=values, name=name)
def _is_dtype_compat(self, other) -> bool:
| https://api.github.com/repos/pandas-dev/pandas/pulls/32778 | 2020-03-17T16:29:33Z | 2020-03-26T00:54:18Z | 2020-03-26T00:54:18Z | 2020-03-26T01:04:31Z |
Backport PR #32734 on branch 1.0.x | diff --git a/doc/source/whatsnew/v1.0.3.rst b/doc/source/whatsnew/v1.0.3.rst
index 482222fbddbb8..0ca5f5f548885 100644
--- a/doc/source/whatsnew/v1.0.3.rst
+++ b/doc/source/whatsnew/v1.0.3.rst
@@ -16,6 +16,7 @@ including other versions of pandas.
Fixed regressions
~~~~~~~~~~~~~~~~~
- Fixed regression in ``resample.agg`` when the underlying data is non-writeable (:issue:`31710`)
+- Fixed regression in :class:`DataFrame` exponentiation with reindexing (:issue:`32685`)
.. _whatsnew_103.bug_fixes:
diff --git a/pandas/core/ops/__init__.py b/pandas/core/ops/__init__.py
index 9db0306e53a50..42a0bacbd902a 100644
--- a/pandas/core/ops/__init__.py
+++ b/pandas/core/ops/__init__.py
@@ -679,13 +679,17 @@ def to_series(right):
def _should_reindex_frame_op(
- left: "DataFrame", right, axis, default_axis: int, fill_value, level
+ left: "DataFrame", right, op, axis, default_axis: int, fill_value, level
) -> bool:
"""
Check if this is an operation between DataFrames that will need to reindex.
"""
assert isinstance(left, ABCDataFrame)
+ if op is operator.pow or op is rpow:
+ # GH#32685 pow has special semantics for operating with null values
+ return False
+
if not isinstance(right, ABCDataFrame):
return False
@@ -747,7 +751,9 @@ def _arith_method_FRAME(cls, op, special):
@Appender(doc)
def f(self, other, axis=default_axis, level=None, fill_value=None):
- if _should_reindex_frame_op(self, other, axis, default_axis, fill_value, level):
+ if _should_reindex_frame_op(
+ self, other, op, axis, default_axis, fill_value, level
+ ):
return _frame_arith_method_with_reindex(self, other, op)
other = _align_method_FRAME(self, other, axis)
diff --git a/pandas/tests/frame/test_arithmetic.py b/pandas/tests/frame/test_arithmetic.py
index fadba0a8673d0..141144c13717c 100644
--- a/pandas/tests/frame/test_arithmetic.py
+++ b/pandas/tests/frame/test_arithmetic.py
@@ -756,3 +756,27 @@ def test_frame_single_columns_object_sum_axis_1():
result = df.sum(axis=1)
expected = pd.Series(["A", 1.2, 0])
tm.assert_series_equal(result, expected)
+
+
+def test_pow_with_realignment():
+ # GH#32685 pow has special semantics for operating with null values
+ left = pd.DataFrame({"A": [0, 1, 2]})
+ right = pd.DataFrame(index=[0, 1, 2])
+
+ result = left ** right
+ expected = pd.DataFrame({"A": [np.nan, 1.0, np.nan]})
+ tm.assert_frame_equal(result, expected)
+
+
+# TODO: move to tests.arithmetic and parametrize
+def test_pow_nan_with_zero():
+ left = pd.DataFrame({"A": [np.nan, np.nan, np.nan]})
+ right = pd.DataFrame({"A": [0, 0, 0]})
+
+ expected = pd.DataFrame({"A": [1.0, 1.0, 1.0]})
+
+ result = left ** right
+ tm.assert_frame_equal(result, expected)
+
+ result = left["A"] ** right["A"]
+ tm.assert_series_equal(result, expected["A"])
| https://api.github.com/repos/pandas-dev/pandas/pulls/32777 | 2020-03-17T16:08:20Z | 2020-03-17T19:14:17Z | 2020-03-17T19:14:17Z | 2020-04-04T20:41:41Z |
|
TYP: PandasObject._cache | diff --git a/pandas/core/base.py b/pandas/core/base.py
index 40ff0640a5bc4..0685ce5c92815 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -4,7 +4,7 @@
import builtins
import textwrap
-from typing import Dict, FrozenSet, List, Optional, Union
+from typing import Any, Dict, FrozenSet, List, Optional, Union
import numpy as np
@@ -49,6 +49,8 @@ class PandasObject(DirNamesMixin):
Baseclass for various pandas objects.
"""
+ _cache: Dict[str, Any]
+
@property
def _constructor(self):
"""
@@ -63,7 +65,7 @@ def __repr__(self) -> str:
# Should be overwritten by base classes
return object.__repr__(self)
- def _reset_cache(self, key=None):
+ def _reset_cache(self, key: Optional[str] = None) -> None:
"""
Reset cached properties. If ``key`` is passed, only clears that key.
"""
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 31966489403f4..d73c5de0253e8 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -1,7 +1,7 @@
from datetime import datetime
import operator
from textwrap import dedent
-from typing import TYPE_CHECKING, Any, Dict, FrozenSet, Hashable, Union
+from typing import TYPE_CHECKING, Any, FrozenSet, Hashable, Union
import warnings
import numpy as np
@@ -250,7 +250,6 @@ def _outer_indexer(self, left, right):
_typ = "index"
_data: Union[ExtensionArray, np.ndarray]
- _cache: Dict[str, Any]
_id = None
_name: Label = None
# MultiIndex.levels previously allowed setting the index name. We
| Move ``_cache: Dict[str, Any]`` fragment to PandasObject, where it belongs + related changes. | https://api.github.com/repos/pandas-dev/pandas/pulls/32775 | 2020-03-17T14:19:12Z | 2020-03-19T00:52:10Z | 2020-03-19T00:52:10Z | 2020-03-21T11:45:31Z |
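A rough standalone sketch (not pandas code) of the pattern the moved annotation supports: subclasses fill a per-instance dict that ``_reset_cache`` can clear wholesale or per key.

```python
from typing import Any, Dict, Optional

class Cached:
    _cache: Dict[str, Any]  # class-level annotation, as on PandasObject above

    def __init__(self) -> None:
        self._cache = {}

    def _reset_cache(self, key: Optional[str] = None) -> None:
        # clear everything, or evict a single cached entry
        if key is None:
            self._cache.clear()
        else:
            self._cache.pop(key, None)
```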
CLN: simplify MultiIndex._shallow_copy | diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 5bffc4ec552af..cf5127757b356 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -990,15 +990,11 @@ def _constructor(self):
def _shallow_copy(self, values=None, **kwargs):
if values is not None:
names = kwargs.pop("names", kwargs.pop("name", self.names))
- # discards freq
- kwargs.pop("freq", None)
return MultiIndex.from_tuples(values, names=names, **kwargs)
result = self.copy(**kwargs)
result._cache = self._cache.copy()
- # GH32669
- if "levels" in result._cache:
- del result._cache["levels"]
+ result._cache.pop("levels", None) # GH32669
return result
def _shallow_copy_with_infer(self, values, **kwargs):
| Minor simplification of ``MultiIndex._shallow_copy``. | https://api.github.com/repos/pandas-dev/pandas/pulls/32772 | 2020-03-17T08:12:32Z | 2020-03-19T21:30:21Z | 2020-03-19T21:30:21Z | 2020-03-21T11:45:21Z |
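The diff swaps an explicit membership check for ``dict.pop`` with a default, which evicts the cached entry when present and is a no-op otherwise:

```python
cache = {"levels": "stale", "codes": "fresh"}
cache.pop("levels", None)  # same effect as: if "levels" in cache: del cache["levels"]
```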
Backport PR #32708 on branch 1.0.x (skip 32 bit linux) | diff --git a/ci/azure/posix.yml b/ci/azure/posix.yml
index c9a2e4eefd19d..437cc9b161e8a 100644
--- a/ci/azure/posix.yml
+++ b/ci/azure/posix.yml
@@ -38,11 +38,11 @@ jobs:
LC_ALL: "it_IT.utf8"
EXTRA_APT: "language-pack-it xsel"
- py36_32bit:
- ENV_FILE: ci/deps/azure-36-32bit.yaml
- CONDA_PY: "36"
- PATTERN: "not slow and not network and not clipboard"
- BITS32: "yes"
+ #py36_32bit:
+ # ENV_FILE: ci/deps/azure-36-32bit.yaml
+ # CONDA_PY: "36"
+ # PATTERN: "not slow and not network and not clipboard"
+ # BITS32: "yes"
py37_locale:
ENV_FILE: ci/deps/azure-37-locale.yaml
| Backport PR #32708: skip 32 bit linux | https://api.github.com/repos/pandas-dev/pandas/pulls/32771 | 2020-03-17T07:57:50Z | 2020-03-17T12:45:46Z | 2020-03-17T12:45:46Z | 2020-03-17T12:46:24Z |
CLN: Consolidate numba facilities | diff --git a/pandas/core/util/numba_.py b/pandas/core/util/numba_.py
new file mode 100644
index 0000000000000..e4debab2c22ee
--- /dev/null
+++ b/pandas/core/util/numba_.py
@@ -0,0 +1,58 @@
+"""Common utilities for Numba operations"""
+import types
+from typing import Callable, Dict, Optional
+
+import numpy as np
+
+from pandas.compat._optional import import_optional_dependency
+
+
+def check_kwargs_and_nopython(
+ kwargs: Optional[Dict] = None, nopython: Optional[bool] = None
+):
+ if kwargs and nopython:
+ raise ValueError(
+ "numba does not support kwargs with nopython=True: "
+ "https://github.com/numba/numba/issues/2916"
+ )
+
+
+def get_jit_arguments(engine_kwargs: Optional[Dict[str, bool]] = None):
+ """
+ Return arguments to pass to numba.JIT, falling back on pandas default JIT settings.
+ """
+ if engine_kwargs is None:
+ engine_kwargs = {}
+
+ nopython = engine_kwargs.get("nopython", True)
+ nogil = engine_kwargs.get("nogil", False)
+ parallel = engine_kwargs.get("parallel", False)
+ return nopython, nogil, parallel
+
+
+def jit_user_function(func: Callable, nopython: bool, nogil: bool, parallel: bool):
+ """
+ JIT the user's function given the configurable arguments.
+ """
+ numba = import_optional_dependency("numba")
+
+ if isinstance(func, numba.targets.registry.CPUDispatcher):
+ # Don't jit a user passed jitted function
+ numba_func = func
+ else:
+
+ @numba.generated_jit(nopython=nopython, nogil=nogil, parallel=parallel)
+ def numba_func(data, *_args):
+ if getattr(np, func.__name__, False) is func or isinstance(
+ func, types.BuiltinFunctionType
+ ):
+ jf = func
+ else:
+ jf = numba.jit(func, nopython=nopython, nogil=nogil)
+
+ def impl(data, *_args):
+ return jf(data, *_args)
+
+ return impl
+
+ return numba_func
diff --git a/pandas/core/window/numba_.py b/pandas/core/window/numba_.py
index d6e8194c861fa..5d35ec7457ab0 100644
--- a/pandas/core/window/numba_.py
+++ b/pandas/core/window/numba_.py
@@ -1,4 +1,3 @@
-import types
from typing import Any, Callable, Dict, Optional, Tuple
import numpy as np
@@ -6,35 +5,49 @@
from pandas._typing import Scalar
from pandas.compat._optional import import_optional_dependency
+from pandas.core.util.numba_ import (
+ check_kwargs_and_nopython,
+ get_jit_arguments,
+ jit_user_function,
+)
-def make_rolling_apply(
- func: Callable[..., Scalar],
+
+def generate_numba_apply_func(
args: Tuple,
- nogil: bool,
- parallel: bool,
- nopython: bool,
+ kwargs: Dict[str, Any],
+ func: Callable[..., Scalar],
+ engine_kwargs: Optional[Dict[str, bool]],
):
"""
- Creates a JITted rolling apply function with a JITted version of
- the user's function.
+ Generate a numba jitted apply function specified by values from engine_kwargs.
+
+ 1. jit the user's function
+ 2. Return a rolling apply function with the jitted function inline
+
+ Configurations specified in engine_kwargs apply to both the user's
+ function _AND_ the rolling apply function.
Parameters
----------
- func : function
- function to be applied to each window and will be JITed
args : tuple
*args to be passed into the function
- nogil : bool
- nogil parameter from engine_kwargs for numba.jit
- parallel : bool
- parallel parameter from engine_kwargs for numba.jit
- nopython : bool
- nopython parameter from engine_kwargs for numba.jit
+ kwargs : dict
+ **kwargs to be passed into the function
+ func : function
+ function to be applied to each window and will be JITed
+ engine_kwargs : dict
+ dictionary of arguments to be passed into numba.jit
Returns
-------
Numba function
"""
+ nopython, nogil, parallel = get_jit_arguments(engine_kwargs)
+
+ check_kwargs_and_nopython(kwargs, nopython)
+
+ numba_func = jit_user_function(func, nopython, nogil, parallel)
+
numba = import_optional_dependency("numba")
if parallel:
@@ -42,25 +55,6 @@ def make_rolling_apply(
else:
loop_range = range
- if isinstance(func, numba.targets.registry.CPUDispatcher):
- # Don't jit a user passed jitted function
- numba_func = func
- else:
-
- @numba.generated_jit(nopython=nopython, nogil=nogil, parallel=parallel)
- def numba_func(window, *_args):
- if getattr(np, func.__name__, False) is func or isinstance(
- func, types.BuiltinFunctionType
- ):
- jf = func
- else:
- jf = numba.jit(func, nopython=nopython, nogil=nogil)
-
- def impl(window, *_args):
- return jf(window, *_args)
-
- return impl
-
@numba.jit(nopython=nopython, nogil=nogil, parallel=parallel)
def roll_apply(
values: np.ndarray, begin: np.ndarray, end: np.ndarray, minimum_periods: int,
@@ -78,49 +72,3 @@ def roll_apply(
return result
return roll_apply
-
-
-def generate_numba_apply_func(
- args: Tuple,
- kwargs: Dict[str, Any],
- func: Callable[..., Scalar],
- engine_kwargs: Optional[Dict[str, bool]],
-):
- """
- Generate a numba jitted apply function specified by values from engine_kwargs.
-
- 1. jit the user's function
- 2. Return a rolling apply function with the jitted function inline
-
- Configurations specified in engine_kwargs apply to both the user's
- function _AND_ the rolling apply function.
-
- Parameters
- ----------
- args : tuple
- *args to be passed into the function
- kwargs : dict
- **kwargs to be passed into the function
- func : function
- function to be applied to each window and will be JITed
- engine_kwargs : dict
- dictionary of arguments to be passed into numba.jit
-
- Returns
- -------
- Numba function
- """
- if engine_kwargs is None:
- engine_kwargs = {}
-
- nopython = engine_kwargs.get("nopython", True)
- nogil = engine_kwargs.get("nogil", False)
- parallel = engine_kwargs.get("parallel", False)
-
- if kwargs and nopython:
- raise ValueError(
- "numba does not support kwargs with nopython=True: "
- "https://github.com/numba/numba/issues/2916"
- )
-
- return make_rolling_apply(func, args, nogil, parallel, nopython)
| - [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
Precursor to https://github.com/pandas-dev/pandas/issues/31845 and other numba engine additions, creates `pandas/core/util/numba_.py` (open to move elsewhere) as a shared place for common numba operations like default parameters and jitting functions; a usage sketch follows this row. | https://api.github.com/repos/pandas-dev/pandas/pulls/32770 | 2020-03-17T07:35:10Z | 2020-03-19T00:23:26Z | 2020-03-19T00:23:26Z | 2020-03-19T03:49:06Z |
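A minimal usage sketch of the consolidated helpers (import path taken from the diff above; assumes numba is installed):

```python
import numpy as np
from pandas.core.util.numba_ import (
    check_kwargs_and_nopython,
    get_jit_arguments,
    jit_user_function,
)

def window_mean(window):
    return np.mean(window)

# engine_kwargs -> (nopython, nogil, parallel), with pandas defaults filled in
nopython, nogil, parallel = get_jit_arguments({"nopython": True})
check_kwargs_and_nopython({}, nopython)  # raises only when kwargs AND nopython
jitted = jit_user_function(window_mean, nopython, nogil, parallel)
```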
TYP: annotate Block/BlockManager putmask | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 63f5bd547074a..8c6a5c9d020b4 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -8652,12 +8652,7 @@ def _where(
self._check_inplace_setting(other)
new_data = self._data.putmask(
- mask=cond,
- new=other,
- align=align,
- inplace=True,
- axis=block_axis,
- transpose=self._AXIS_REVERSED,
+ mask=cond, new=other, align=align, axis=block_axis,
)
self._update_inplace(new_data)
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index c429a65ed3369..935ff09585b17 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -910,25 +910,26 @@ def setitem(self, indexer, value):
def putmask(
self, mask, new, inplace: bool = False, axis: int = 0, transpose: bool = False,
- ):
+ ) -> List["Block"]:
"""
putmask the data to the block; it is possible that we may create a
new dtype of block
- return the resulting block(s)
+ Return the resulting block(s).
Parameters
----------
- mask : the condition to respect
+ mask : the condition to respect
new : a ndarray/object
- inplace : perform inplace modification, default is False
+ inplace : bool, default False
+ Perform inplace modification.
axis : int
- transpose : boolean
- Set to True if self is stored with axes reversed
+ transpose : bool, default False
+ Set to True if self is stored with axes reversed.
Returns
-------
- a list of new blocks, the result of the putmask
+ List[Block]
"""
new_values = self.values if inplace else self.values.copy()
@@ -1626,23 +1627,10 @@ def set(self, locs, values):
self.values[:] = values
def putmask(
- self, mask, new, inplace=False, axis=0, transpose=False,
- ):
+ self, mask, new, inplace: bool = False, axis: int = 0, transpose: bool = False,
+ ) -> List["Block"]:
"""
- putmask the data to the block; we must be a single block and not
- generate other blocks
-
- return the resulting block
-
- Parameters
- ----------
- mask : the condition to respect
- new : a ndarray/object
- inplace : perform inplace modification, default is False
-
- Returns
- -------
- a new block, the result of the putmask
+ See Block.putmask.__doc__
"""
inplace = validate_bool_kwarg(inplace, "inplace")
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 330acec46f5cd..b245ac09029a2 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -558,14 +558,25 @@ def where(self, **kwargs) -> "BlockManager":
def setitem(self, indexer, value) -> "BlockManager":
return self.apply("setitem", indexer=indexer, value=value)
- def putmask(self, **kwargs):
+ def putmask(
+ self, mask, new, align: bool = True, axis: int = 0,
+ ):
+ transpose = self.ndim == 2
- if kwargs.pop("align", True):
+ if align:
align_keys = ["new", "mask"]
else:
align_keys = ["mask"]
- return self.apply("putmask", align_keys=align_keys, **kwargs)
+ return self.apply(
+ "putmask",
+ align_keys=align_keys,
+ mask=mask,
+ new=new,
+ inplace=True,
+ axis=axis,
+ transpose=transpose,
+ )
def diff(self, n: int, axis: int) -> "BlockManager":
return self.apply("diff", n=n, axis=axis)
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 9c1f4134746a8..1e1c9963ab3f1 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2812,7 +2812,7 @@ def update(self, other) -> None:
other = other.reindex_like(self)
mask = notna(other)
- self._data = self._data.putmask(mask=mask, new=other, inplace=True)
+ self._data = self._data.putmask(mask=mask, new=other)
self._maybe_update_cacher()
# ----------------------------------------------------------------------
| medium-term goal is to avoid passing Series/DataFrame objects to Block methods via BlockManager.apply | https://api.github.com/repos/pandas-dev/pandas/pulls/32769 | 2020-03-17T02:54:58Z | 2020-03-22T20:38:48Z | 2020-03-22T20:38:48Z | 2020-03-22T21:01:45Z |
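One public path through ``BlockManager.putmask`` is ``Series.update`` (see the ``pandas/core/series.py`` hunk above); NA positions in the updating series are masked out:

```python
import numpy as np
import pandas as pd

ser = pd.Series([1.0, 2.0, 3.0])
ser.update(pd.Series([10.0, np.nan, 30.0]))  # the NaN slot leaves ser untouched
ser.tolist()  # [10.0, 2.0, 30.0]
```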
CLN: remove _ndarray_values | diff --git a/doc/source/development/internals.rst b/doc/source/development/internals.rst
index 4ad045a91b5fe..8f1c3d5d818c2 100644
--- a/doc/source/development/internals.rst
+++ b/doc/source/development/internals.rst
@@ -89,16 +89,10 @@ pandas extends NumPy's type system with custom types, like ``Categorical`` or
datetimes with a timezone, so we have multiple notions of "values". For 1-D
containers (``Index`` classes and ``Series``) we have the following convention:
-* ``cls._ndarray_values`` is *always* a NumPy ``ndarray``. Ideally,
- ``_ndarray_values`` is cheap to compute. For example, for a ``Categorical``,
- this returns the codes, not the array of objects.
* ``cls._values`` is the "best possible" array. This could be an
- ``ndarray``, ``ExtensionArray``, or in ``Index`` subclass (note: we're in the
- process of removing the index subclasses here so that it's always an
- ``ndarray`` or ``ExtensionArray``).
+ ``ndarray`` or ``ExtensionArray``.
-So, for example, ``Series[category]._values`` is a ``Categorical``, while
-``Series[category]._ndarray_values`` is the underlying codes.
+So, for example, ``Series[category]._values`` is a ``Categorical``.
.. _ref-subclassing-pandas:
diff --git a/doc/source/reference/extensions.rst b/doc/source/reference/extensions.rst
index 78fdfbfd28144..4c0763e091b75 100644
--- a/doc/source/reference/extensions.rst
+++ b/doc/source/reference/extensions.rst
@@ -37,7 +37,6 @@ objects.
api.extensions.ExtensionArray._from_factorized
api.extensions.ExtensionArray._from_sequence
api.extensions.ExtensionArray._from_sequence_of_strings
- api.extensions.ExtensionArray._ndarray_values
api.extensions.ExtensionArray._reduce
api.extensions.ExtensionArray._values_for_argsort
api.extensions.ExtensionArray._values_for_factorize
diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index 6aa303dd04703..06dd0111ccb41 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -93,7 +93,6 @@ class ExtensionArray:
_from_factorized
_from_sequence
_from_sequence_of_strings
- _ndarray_values
_reduce
_values_for_argsort
_values_for_factorize
@@ -1044,22 +1043,6 @@ def _concat_same_type(
# of objects
_can_hold_na = True
- @property
- def _ndarray_values(self) -> np.ndarray:
- """
- Internal pandas method for lossy conversion to a NumPy ndarray.
-
- This method is not part of the pandas interface.
-
- The expectation is that this is cheap to compute, and is primarily
- used for interacting with our indexers.
-
- Returns
- -------
- array : ndarray
- """
- return np.array(self)
-
def _reduce(self, name, skipna=True, **kwargs):
"""
Return a scalar result of performing the reduction operation.
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 497a9893e6c66..bfccc6f244219 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -451,10 +451,6 @@ def dtype(self) -> CategoricalDtype:
"""
return self._dtype
- @property
- def _ndarray_values(self) -> np.ndarray:
- return self.codes
-
@property
def _constructor(self) -> Type["Categorical"]:
return Categorical
@@ -2567,12 +2563,7 @@ def _get_codes_for_values(values, categories):
"""
dtype_equal = is_dtype_equal(values.dtype, categories.dtype)
- if dtype_equal:
- # To prevent erroneous dtype coercion in _get_data_algo, retrieve
- # the underlying numpy array. gh-22702
- values = getattr(values, "_ndarray_values", values)
- categories = getattr(categories, "_ndarray_values", categories)
- elif is_extension_array_dtype(categories.dtype) and is_object_dtype(values):
+ if is_extension_array_dtype(categories.dtype) and is_object_dtype(values):
# Support inferring the correct extension dtype from an array of
# scalar objects. e.g.
# Categorical(array[Period, Period], categories=PeriodIndex(...))
@@ -2582,7 +2573,7 @@ def _get_codes_for_values(values, categories):
# exception raised in _from_sequence
values = ensure_object(values)
categories = ensure_object(categories)
- else:
+ elif not dtype_equal:
values = ensure_object(values)
categories = ensure_object(categories)
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index 105d9581b1a25..41bef7385b8b7 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -455,10 +455,6 @@ def asi8(self) -> np.ndarray:
# do not cache or you'll create a memory leak
return self._data.view("i8")
- @property
- def _ndarray_values(self):
- return self._data
-
# ----------------------------------------------------------------
# Rendering Methods
diff --git a/pandas/core/arrays/integer.py b/pandas/core/arrays/integer.py
index fb33840ad757c..f2880c5cbee42 100644
--- a/pandas/core/arrays/integer.py
+++ b/pandas/core/arrays/integer.py
@@ -478,18 +478,6 @@ def astype(self, dtype, copy: bool = True) -> ArrayLike:
data = self.to_numpy(dtype=dtype, **kwargs)
return astype_nansafe(data, dtype, copy=False)
- @property
- def _ndarray_values(self) -> np.ndarray:
- """
- Internal pandas method for lossy conversion to a NumPy ndarray.
-
- This method is not part of the pandas interface.
-
- The expectation is that this is cheap to compute, and is primarily
- used for interacting with our indexers.
- """
- return self._data
-
def _values_for_factorize(self) -> Tuple[np.ndarray, float]:
# TODO: https://github.com/pandas-dev/pandas/issues/30037
# use masked algorithms, rather than object-dtype / np.nan.
diff --git a/pandas/core/base.py b/pandas/core/base.py
index bf2ed02c57a29..9281d2f72b409 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -855,23 +855,6 @@ def to_numpy(self, dtype=None, copy=False, na_value=lib.no_default, **kwargs):
result[self.isna()] = na_value
return result
- @property
- def _ndarray_values(self) -> np.ndarray:
- """
- The data as an ndarray, possibly losing information.
-
- The expectation is that this is cheap to compute, and is primarily
- used for interacting with our indexers.
-
- - categorical -> codes
- """
- if is_extension_array_dtype(self):
- return self.array._ndarray_values
- # As a mixin, we depend on the mixing class having values.
- # Special mixin syntax may be developed in the future:
- # https://github.com/python/typing/issues/246
- return self.values # type: ignore
-
@property
def empty(self):
return not self.size
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 162d69d957669..4fa771dfbcf82 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -464,8 +464,7 @@ def _simple_new(cls, values, name: Label = None):
# _index_data is a (temporary?) fix to ensure that the direct data
# manipulation we do in `_libs/reduction.pyx` continues to work.
# We need access to the actual ndarray, since we're messing with
- # data buffers and strides. We don't re-use `_ndarray_values`, since
- # we actually set this value too.
+ # data buffers and strides.
result._index_data = values
result._name = name
result._cache = {}
@@ -625,7 +624,8 @@ def ravel(self, order="C"):
--------
numpy.ndarray.ravel
"""
- return self._ndarray_values.ravel(order=order)
+ values = self._get_engine_target()
+ return values.ravel(order=order)
def view(self, cls=None):
@@ -3846,29 +3846,24 @@ def _values(self) -> Union[ExtensionArray, np.ndarray]:
"""
The best array representation.
- This is an ndarray or ExtensionArray. This differs from
- ``_ndarray_values``, which always returns an ndarray.
+ This is an ndarray or ExtensionArray.
- Both ``_values`` and ``_ndarray_values`` are consistent between
- ``Series`` and ``Index`` (except for datetime64[ns], which returns
- a DatetimeArray for _values on the Index, but ndarray[M8ns] on the
- Series).
+        ``_values`` is consistent between ``Series`` and ``Index``.
It may differ from the public '.values' method.
- index | values | _values | _ndarray_values |
- ----------------- | --------------- | ------------- | --------------- |
- Index | ndarray | ndarray | ndarray |
- CategoricalIndex | Categorical | Categorical | ndarray[int] |
- DatetimeIndex | ndarray[M8ns] | DatetimeArray | ndarray[M8ns] |
- DatetimeIndex[tz] | ndarray[M8ns] | DatetimeArray | ndarray[M8ns] |
- PeriodIndex | ndarray[object] | PeriodArray | ndarray[int] |
- IntervalIndex | IntervalArray | IntervalArray | ndarray[object] |
+ index | values | _values |
+ ----------------- | --------------- | ------------- |
+ Index | ndarray | ndarray |
+ CategoricalIndex | Categorical | Categorical |
+ DatetimeIndex | ndarray[M8ns] | DatetimeArray |
+ DatetimeIndex[tz] | ndarray[M8ns] | DatetimeArray |
+ PeriodIndex | ndarray[object] | PeriodArray |
+ IntervalIndex | IntervalArray | IntervalArray |
See Also
--------
values
- _ndarray_values
"""
return self._data
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 9c55d2de946a8..2f641a3d4c111 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -179,7 +179,7 @@ def sort_values(self, return_indexer=False, ascending=True):
sorted_index = self.take(_as)
return sorted_index, _as
else:
- # NB: using asi8 instead of _ndarray_values matters in numpy 1.18
+ # NB: using asi8 instead of _data matters in numpy 1.18
# because the treatment of NaT has been changed to put NaT last
# instead of first.
sorted_values = np.sort(self.asi8)
diff --git a/pandas/core/indexes/extension.py b/pandas/core/indexes/extension.py
index 6d5f0dbb830f9..6851aeec0ca40 100644
--- a/pandas/core/indexes/extension.py
+++ b/pandas/core/indexes/extension.py
@@ -228,10 +228,6 @@ def __iter__(self):
def __array__(self, dtype=None) -> np.ndarray:
return np.asarray(self._data, dtype=dtype)
- @property
- def _ndarray_values(self) -> np.ndarray:
- return self._data._ndarray_values
-
def _get_engine_target(self) -> np.ndarray:
return self._data._values_for_argsort()
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 8a6839b4fb181..dc0ae9569b135 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -550,21 +550,17 @@ def _values(self):
timedelta64 dtypes), while ``.array`` ensures to always return an
ExtensionArray.
- Differs from ``._ndarray_values``, as that ensures to always return a
- numpy array (it will call ``_ndarray_values`` on the ExtensionArray, if
- the Series was backed by an ExtensionArray).
-
Overview:
- dtype | values | _values | array | _ndarray_values |
- ----------- | ------------- | ------------- | ------------- | --------------- |
- Numeric | ndarray | ndarray | PandasArray | ndarray |
- Category | Categorical | Categorical | Categorical | ndarray[int] |
- dt64[ns] | ndarray[M8ns] | DatetimeArray | DatetimeArray | ndarray[M8ns] |
- dt64[ns tz] | ndarray[M8ns] | DatetimeArray | DatetimeArray | ndarray[M8ns] |
- td64[ns] | ndarray[m8ns] | TimedeltaArray| ndarray[m8ns] | ndarray[m8ns] |
- Period | ndarray[obj] | PeriodArray | PeriodArray | ndarray[int] |
- Nullable | EA | EA | EA | ndarray |
+ dtype | values | _values | array |
+ ----------- | ------------- | ------------- | ------------- |
+ Numeric | ndarray | ndarray | PandasArray |
+ Category | Categorical | Categorical | Categorical |
+ dt64[ns] | ndarray[M8ns] | DatetimeArray | DatetimeArray |
+ dt64[ns tz] | ndarray[M8ns] | DatetimeArray | DatetimeArray |
+ td64[ns] | ndarray[m8ns] | TimedeltaArray| ndarray[m8ns] |
+ Period | ndarray[obj] | PeriodArray | PeriodArray |
+ Nullable | EA | EA | EA |
"""
return self._data.internal_values()
diff --git a/pandas/tests/base/test_conversion.py b/pandas/tests/base/test_conversion.py
index 46fd1551e6170..59f9103072fe9 100644
--- a/pandas/tests/base/test_conversion.py
+++ b/pandas/tests/base/test_conversion.py
@@ -220,34 +220,6 @@ def test_values_consistent(array, expected_type, dtype):
tm.assert_equal(l_values, r_values)
-@pytest.mark.parametrize(
- "array, expected",
- [
- (np.array([0, 1], dtype=np.int64), np.array([0, 1], dtype=np.int64)),
- (np.array(["0", "1"]), np.array(["0", "1"], dtype=object)),
- (pd.Categorical(["a", "a"]), np.array([0, 0], dtype="int8")),
- (
- pd.DatetimeIndex(["2017-01-01T00:00:00"]),
- np.array(["2017-01-01T00:00:00"], dtype="M8[ns]"),
- ),
- (
- pd.DatetimeIndex(["2017-01-01T00:00:00"], tz="US/Eastern"),
- np.array(["2017-01-01T05:00:00"], dtype="M8[ns]"),
- ),
- (pd.TimedeltaIndex([10 ** 10]), np.array([10 ** 10], dtype="m8[ns]")),
- (
- pd.PeriodIndex(["2017", "2018"], freq="D"),
- np.array([17167, 17532], dtype=np.int64),
- ),
- ],
-)
-def test_ndarray_values(array, expected):
- l_values = pd.Series(array)._ndarray_values
- r_values = pd.Index(array)._ndarray_values
- tm.assert_numpy_array_equal(l_values, r_values)
- tm.assert_numpy_array_equal(l_values, expected)
-
-
@pytest.mark.parametrize("arr", [np.array([1, 2, 3])])
def test_numpy_array(arr):
ser = pd.Series(arr)
diff --git a/pandas/tests/indexes/common.py b/pandas/tests/indexes/common.py
index c13385c135e9f..43f696e0b13db 100644
--- a/pandas/tests/indexes/common.py
+++ b/pandas/tests/indexes/common.py
@@ -313,16 +313,11 @@ def test_ensure_copied_data(self, indices):
result = result.tz_localize("UTC").tz_convert(indices.tz)
tm.assert_index_equal(indices, result)
- tm.assert_numpy_array_equal(
- indices._ndarray_values, result._ndarray_values, check_same="copy"
- )
if isinstance(indices, PeriodIndex):
# .values an object array of Period, thus copied
result = index_type(ordinal=indices.asi8, copy=False, **init_kwargs)
- tm.assert_numpy_array_equal(
- indices._ndarray_values, result._ndarray_values, check_same="same"
- )
+ tm.assert_numpy_array_equal(indices.asi8, result.asi8, check_same="same")
elif isinstance(indices, IntervalIndex):
# checked in test_interval.py
pass
@@ -331,9 +326,6 @@ def test_ensure_copied_data(self, indices):
tm.assert_numpy_array_equal(
indices.values, result.values, check_same="same"
)
- tm.assert_numpy_array_equal(
- indices._ndarray_values, result._ndarray_values, check_same="same"
- )
def test_memory_usage(self, indices):
indices._engine.clear_mapping()
diff --git a/pandas/tests/indexes/interval/test_constructors.py b/pandas/tests/indexes/interval/test_constructors.py
index 837c124db2bed..fa881df8139c6 100644
--- a/pandas/tests/indexes/interval/test_constructors.py
+++ b/pandas/tests/indexes/interval/test_constructors.py
@@ -91,7 +91,7 @@ def test_constructor_nan(self, constructor, breaks, closed):
assert result.closed == closed
assert result.dtype.subtype == expected_subtype
- tm.assert_numpy_array_equal(result._ndarray_values, expected_values)
+ tm.assert_numpy_array_equal(np.array(result), expected_values)
@pytest.mark.parametrize(
"breaks",
@@ -114,7 +114,7 @@ def test_constructor_empty(self, constructor, breaks, closed):
assert result.empty
assert result.closed == closed
assert result.dtype.subtype == expected_subtype
- tm.assert_numpy_array_equal(result._ndarray_values, expected_values)
+ tm.assert_numpy_array_equal(np.array(result), expected_values)
@pytest.mark.parametrize(
"breaks",
diff --git a/pandas/tests/indexes/interval/test_interval.py b/pandas/tests/indexes/interval/test_interval.py
index c2b209c810af9..efdd3fc9907a2 100644
--- a/pandas/tests/indexes/interval/test_interval.py
+++ b/pandas/tests/indexes/interval/test_interval.py
@@ -147,7 +147,7 @@ def test_ensure_copied_data(self, closed):
)
# by-definition make a copy
- result = IntervalIndex(index._ndarray_values, copy=False)
+ result = IntervalIndex(np.array(index), copy=False)
tm.assert_numpy_array_equal(
index.left.values, result.left.values, check_same="copy"
)
diff --git a/pandas/tests/indexes/period/test_constructors.py b/pandas/tests/indexes/period/test_constructors.py
index b5ff83ec7514d..cb2140d0b4025 100644
--- a/pandas/tests/indexes/period/test_constructors.py
+++ b/pandas/tests/indexes/period/test_constructors.py
@@ -147,9 +147,9 @@ def test_constructor_fromarraylike(self):
msg = "freq not specified and cannot be inferred"
with pytest.raises(ValueError, match=msg):
- PeriodIndex(idx._ndarray_values)
+ PeriodIndex(idx.asi8)
with pytest.raises(ValueError, match=msg):
- PeriodIndex(list(idx._ndarray_values))
+ PeriodIndex(list(idx.asi8))
msg = "'Period' object is not iterable"
with pytest.raises(TypeError, match=msg):
diff --git a/pandas/tests/indexes/period/test_period.py b/pandas/tests/indexes/period/test_period.py
index 03f0be3f368cb..df2f85cd7f1e2 100644
--- a/pandas/tests/indexes/period/test_period.py
+++ b/pandas/tests/indexes/period/test_period.py
@@ -161,7 +161,7 @@ def test_values(self):
tm.assert_numpy_array_equal(idx.to_numpy(), exp)
exp = np.array([], dtype=np.int64)
- tm.assert_numpy_array_equal(idx._ndarray_values, exp)
+ tm.assert_numpy_array_equal(idx.asi8, exp)
idx = PeriodIndex(["2011-01", NaT], freq="M")
@@ -169,7 +169,7 @@ def test_values(self):
tm.assert_numpy_array_equal(idx.values, exp)
tm.assert_numpy_array_equal(idx.to_numpy(), exp)
exp = np.array([492, -9223372036854775808], dtype=np.int64)
- tm.assert_numpy_array_equal(idx._ndarray_values, exp)
+ tm.assert_numpy_array_equal(idx.asi8, exp)
idx = PeriodIndex(["2011-01-01", NaT], freq="D")
@@ -177,7 +177,7 @@ def test_values(self):
tm.assert_numpy_array_equal(idx.values, exp)
tm.assert_numpy_array_equal(idx.to_numpy(), exp)
exp = np.array([14975, -9223372036854775808], dtype=np.int64)
- tm.assert_numpy_array_equal(idx._ndarray_values, exp)
+ tm.assert_numpy_array_equal(idx.asi8, exp)
def test_period_index_length(self):
pi = period_range(freq="A", start="1/1/2001", end="12/1/2009")
diff --git a/pandas/tests/reductions/test_reductions.py b/pandas/tests/reductions/test_reductions.py
index 211d0d52d8357..abd99aadfb484 100644
--- a/pandas/tests/reductions/test_reductions.py
+++ b/pandas/tests/reductions/test_reductions.py
@@ -55,9 +55,7 @@ def test_ops(self, opname, obj):
if not isinstance(obj, PeriodIndex):
expected = getattr(obj.values, opname)()
else:
- expected = pd.Period(
- ordinal=getattr(obj._ndarray_values, opname)(), freq=obj.freq
- )
+ expected = pd.Period(ordinal=getattr(obj.asi8, opname)(), freq=obj.freq)
try:
assert result == expected
except TypeError:
| - [x] closes #23565
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
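A quick illustration of the accessors that remain after the removal (mirrors the docs table in the diff):

```python
import pandas as pd

ser = pd.Series(pd.Categorical(["a", "a"]))
ser._values  # Categorical -- the "best possible" array
ser.array    # Categorical
# ._ndarray_values used to return the int8 codes; callers that need the
# codes now ask the Categorical for them explicitly:
ser._values.codes  # array([0, 0], dtype=int8)
```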
| https://api.github.com/repos/pandas-dev/pandas/pulls/32768 | 2020-03-17T02:47:51Z | 2020-03-19T00:24:24Z | 2020-03-19T00:24:24Z | 2020-03-19T01:30:09Z |
REF: Implement core._algos | diff --git a/pandas/core/array_algos/__init__.py b/pandas/core/array_algos/__init__.py
new file mode 100644
index 0000000000000..a7655a013c6cf
--- /dev/null
+++ b/pandas/core/array_algos/__init__.py
@@ -0,0 +1,9 @@
+"""
+core.array_algos is for algorithms that operate on ndarray and ExtensionArray.
+These should:
+
+- Assume that any Index, Series, or DataFrame objects have already been unwrapped.
+- Assume that any list arguments have already been cast to ndarray/EA.
+- Not depend on Index, Series, or DataFrame, nor import any of these.
+- May dispatch to ExtensionArray methods, but should not import from core.arrays.
+"""
diff --git a/pandas/core/array_algos/transforms.py b/pandas/core/array_algos/transforms.py
new file mode 100644
index 0000000000000..f775b6d733d9c
--- /dev/null
+++ b/pandas/core/array_algos/transforms.py
@@ -0,0 +1,33 @@
+"""
+transforms.py is for shape-preserving functions.
+"""
+
+import numpy as np
+
+from pandas.core.dtypes.common import ensure_platform_int
+
+
+def shift(values: np.ndarray, periods: int, axis: int, fill_value) -> np.ndarray:
+ new_values = values
+
+ # make sure array sent to np.roll is c_contiguous
+ f_ordered = values.flags.f_contiguous
+ if f_ordered:
+ new_values = new_values.T
+ axis = new_values.ndim - axis - 1
+
+ if np.prod(new_values.shape):
+ new_values = np.roll(new_values, ensure_platform_int(periods), axis=axis)
+
+ axis_indexer = [slice(None)] * values.ndim
+ if periods > 0:
+ axis_indexer[axis] = slice(None, periods)
+ else:
+ axis_indexer[axis] = slice(periods, None)
+ new_values[tuple(axis_indexer)] = fill_value
+
+ # restore original order
+ if f_ordered:
+ new_values = new_values.T
+
+ return new_values
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index 105d9581b1a25..7510bfd1f67ad 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -40,6 +40,7 @@
from pandas.core import missing, nanops, ops
from pandas.core.algorithms import checked_add_with_arr, take, unique1d, value_counts
+from pandas.core.array_algos.transforms import shift
from pandas.core.arrays.base import ExtensionArray, ExtensionOpsMixin
import pandas.core.common as com
from pandas.core.construction import array, extract_array
@@ -773,26 +774,7 @@ def shift(self, periods=1, fill_value=None, axis=0):
fill_value = self._unbox_scalar(fill_value)
- new_values = self._data
-
- # make sure array sent to np.roll is c_contiguous
- f_ordered = new_values.flags.f_contiguous
- if f_ordered:
- new_values = new_values.T
- axis = new_values.ndim - axis - 1
-
- new_values = np.roll(new_values, periods, axis=axis)
-
- axis_indexer = [slice(None)] * self.ndim
- if periods > 0:
- axis_indexer[axis] = slice(None, periods)
- else:
- axis_indexer[axis] = slice(periods, None)
- new_values[tuple(axis_indexer)] = fill_value
-
- # restore original order
- if f_ordered:
- new_values = new_values.T
+ new_values = shift(self._data, periods, axis, fill_value)
return type(self)._simple_new(new_values, dtype=self.dtype)
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index cab2bd5146745..adeb1ae04a58d 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -29,7 +29,6 @@
from pandas.core.dtypes.common import (
_NS_DTYPE,
_TD_DTYPE,
- ensure_platform_int,
is_bool_dtype,
is_categorical,
is_categorical_dtype,
@@ -66,6 +65,7 @@
)
import pandas.core.algorithms as algos
+from pandas.core.array_algos.transforms import shift
from pandas.core.arrays import (
Categorical,
DatetimeArray,
@@ -1316,25 +1316,7 @@ def shift(self, periods, axis: int = 0, fill_value=None):
# that, handle boolean etc also
new_values, fill_value = maybe_upcast(self.values, fill_value)
- # make sure array sent to np.roll is c_contiguous
- f_ordered = new_values.flags.f_contiguous
- if f_ordered:
- new_values = new_values.T
- axis = new_values.ndim - axis - 1
-
- if np.prod(new_values.shape):
- new_values = np.roll(new_values, ensure_platform_int(periods), axis=axis)
-
- axis_indexer = [slice(None)] * self.ndim
- if periods > 0:
- axis_indexer[axis] = slice(None, periods)
- else:
- axis_indexer[axis] = slice(periods, None)
- new_values[tuple(axis_indexer)] = fill_value
-
- # restore original order
- if f_ordered:
- new_values = new_values.T
+ new_values = shift(new_values, periods, axis, fill_value)
return [self.make_block(new_values)]
ATM core.algorithms and core.nanops are a mish-mash in terms of what inputs they expect. This implements a `core.array_algos` directory (spelled `core._algos` in the title) intended for guaranteed-ndarray/EA-only implementations.
For the first function to move I de-duplicated a `shift` method. Need suggestions for what to call this module. | https://api.github.com/repos/pandas-dev/pandas/pulls/32767 | 2020-03-17T02:06:48Z | 2020-03-19T00:19:52Z | 2020-03-19T00:19:52Z | 2020-04-08T17:54:07Z |
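A minimal sketch of the extracted helper (import path from the diff above):

```python
import numpy as np
from pandas.core.array_algos.transforms import shift

arr = np.arange(6, dtype=float).reshape(2, 3)
shift(arr, periods=1, axis=1, fill_value=np.nan)
# array([[nan,  0.,  1.],
#        [nan,  3.,  4.]])
```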
Backport PR #32758 on branch 1.0.x (BUG: resample.agg with read-only data) | diff --git a/doc/source/whatsnew/v1.0.3.rst b/doc/source/whatsnew/v1.0.3.rst
index 17f1bdc365518..482222fbddbb8 100644
--- a/doc/source/whatsnew/v1.0.3.rst
+++ b/doc/source/whatsnew/v1.0.3.rst
@@ -15,6 +15,7 @@ including other versions of pandas.
Fixed regressions
~~~~~~~~~~~~~~~~~
+- Fixed regression in ``resample.agg`` when the underlying data is non-writeable (:issue:`31710`)
.. _whatsnew_103.bug_fixes:
diff --git a/pandas/_libs/groupby.pyx b/pandas/_libs/groupby.pyx
index 2fe8137788f61..3bfbd1a88650a 100644
--- a/pandas/_libs/groupby.pyx
+++ b/pandas/_libs/groupby.pyx
@@ -849,11 +849,13 @@ cdef inline bint _treat_as_na(rank_t val, bint is_datetimelike) nogil:
return val != val
+# GH#31710 use memoryviews once cython 0.30 is released so we can
+# use `const rank_t[:, :] values`
@cython.wraparound(False)
@cython.boundscheck(False)
def group_last(rank_t[:, :] out,
int64_t[:] counts,
- rank_t[:, :] values,
+ ndarray[rank_t, ndim=2] values,
const int64_t[:] labels,
Py_ssize_t min_count=-1):
"""
@@ -938,11 +940,13 @@ def group_last(rank_t[:, :] out,
raise RuntimeError("empty group with uint64_t")
+# GH#31710 use memoryviews once cython 0.30 is released so we can
+# use `const rank_t[:, :] values`
@cython.wraparound(False)
@cython.boundscheck(False)
def group_nth(rank_t[:, :] out,
int64_t[:] counts,
- rank_t[:, :] values,
+ ndarray[rank_t, ndim=2] values,
const int64_t[:] labels, int64_t rank=1,
Py_ssize_t min_count=-1):
"""
@@ -1236,7 +1240,7 @@ ctypedef fused groupby_t:
@cython.boundscheck(False)
def group_max(groupby_t[:, :] out,
int64_t[:] counts,
- groupby_t[:, :] values,
+ ndarray[groupby_t, ndim=2] values,
const int64_t[:] labels,
Py_ssize_t min_count=-1):
"""
@@ -1309,7 +1313,7 @@ def group_max(groupby_t[:, :] out,
@cython.boundscheck(False)
def group_min(groupby_t[:, :] out,
int64_t[:] counts,
- groupby_t[:, :] values,
+ ndarray[groupby_t, ndim=2] values,
const int64_t[:] labels,
Py_ssize_t min_count=-1):
"""
diff --git a/pandas/tests/resample/test_resample_api.py b/pandas/tests/resample/test_resample_api.py
index 170201b4f8e5c..ee3b53649b92b 100644
--- a/pandas/tests/resample/test_resample_api.py
+++ b/pandas/tests/resample/test_resample_api.py
@@ -580,3 +580,27 @@ def test_agg_with_datetime_index_list_agg_func(col_name):
columns=pd.MultiIndex(levels=[[col_name], ["mean"]], codes=[[0], [0]]),
)
tm.assert_frame_equal(result, expected)
+
+
+def test_resample_agg_readonly():
+ # GH#31710 cython needs to allow readonly data
+ index = pd.date_range("2020-01-01", "2020-01-02", freq="1h")
+ arr = np.zeros_like(index)
+ arr.setflags(write=False)
+
+ ser = pd.Series(arr, index=index)
+ rs = ser.resample("1D")
+
+ expected = pd.Series([pd.Timestamp(0), pd.Timestamp(0)], index=index[::24])
+
+ result = rs.agg("last")
+ tm.assert_series_equal(result, expected)
+
+ result = rs.agg("first")
+ tm.assert_series_equal(result, expected)
+
+ result = rs.agg("max")
+ tm.assert_series_equal(result, expected)
+
+ result = rs.agg("min")
+ tm.assert_series_equal(result, expected)
| Backport PR #32758: BUG: resample.agg with read-only data | https://api.github.com/repos/pandas-dev/pandas/pulls/32765 | 2020-03-17T01:34:52Z | 2020-03-17T07:57:54Z | 2020-03-17T07:57:54Z | 2020-03-17T07:57:54Z |
BUG: Allow list-like in DatetimeIndex.searchsorted | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index dcbfe6aeb9a12..d0d67e14f2cbc 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -255,6 +255,7 @@ Datetimelike
- Bug in :meth:`Period.to_timestamp`, :meth:`Period.start_time` with microsecond frequency returning a timestamp one nanosecond earlier than the correct time (:issue:`31475`)
- :class:`Timestamp` raising confusing error message when year, month or day is missing (:issue:`31200`)
- Bug in :class:`DatetimeIndex` constructor incorrectly accepting ``bool``-dtyped inputs (:issue:`32668`)
+- Bug in :meth:`DatetimeIndex.searchsorted` not accepting a ``list`` or :class:`Series` as its argument (:issue:`32762`)
Timedelta
^^^^^^^^^
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index 9cde636f6bd2c..a153b4e06157b 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -846,14 +846,14 @@ def searchsorted(self, value, side="left", sorter=None):
elif isinstance(value, self._recognized_scalars):
value = self._scalar_type(value)
- elif isinstance(value, np.ndarray):
+ elif is_list_like(value) and not isinstance(value, type(self)):
+ value = array(value)
+
if not type(self)._is_recognized_dtype(value):
raise TypeError(
"searchsorted requires compatible dtype or scalar, "
f"not {type(value).__name__}"
)
- value = type(self)(value)
- self._check_compatible_with(value)
if not (isinstance(value, (self._scalar_type, type(self))) or (value is NaT)):
raise TypeError(f"Unexpected type for 'value': {type(value)}")
diff --git a/pandas/tests/arrays/test_datetimelike.py b/pandas/tests/arrays/test_datetimelike.py
index e505917da1dc4..928173aa82797 100644
--- a/pandas/tests/arrays/test_datetimelike.py
+++ b/pandas/tests/arrays/test_datetimelike.py
@@ -812,3 +812,38 @@ def test_to_numpy_extra(array):
assert result[0] == result[1]
tm.assert_equal(array, original)
+
+
+@pytest.mark.parametrize(
+ "values",
+ [
+ pd.to_datetime(["2020-01-01", "2020-02-01"]),
+ pd.TimedeltaIndex([1, 2], unit="D"),
+ pd.PeriodIndex(["2020-01-01", "2020-02-01"], freq="D"),
+ ],
+)
+@pytest.mark.parametrize("klass", [list, np.array, pd.array, pd.Series])
+def test_searchsorted_datetimelike_with_listlike(values, klass):
+ # https://github.com/pandas-dev/pandas/issues/32762
+ result = values.searchsorted(klass(values))
+ expected = np.array([0, 1], dtype=result.dtype)
+
+ tm.assert_numpy_array_equal(result, expected)
+
+
+@pytest.mark.parametrize(
+ "values",
+ [
+ pd.to_datetime(["2020-01-01", "2020-02-01"]),
+ pd.TimedeltaIndex([1, 2], unit="D"),
+ pd.PeriodIndex(["2020-01-01", "2020-02-01"], freq="D"),
+ ],
+)
+@pytest.mark.parametrize(
+ "arg", [[1, 2], ["a", "b"], [pd.Timestamp("2020-01-01", tz="Europe/London")] * 2]
+)
+def test_searchsorted_datetimelike_with_listlike_invalid_dtype(values, arg):
+ # https://github.com/pandas-dev/pandas/issues/32762
+    msg = "Unexpected type|Cannot compare"
+ with pytest.raises(TypeError, match=msg):
+ values.searchsorted(arg)
diff --git a/pandas/tests/indexes/interval/test_interval.py b/pandas/tests/indexes/interval/test_interval.py
index efdd3fc9907a2..1b2bfa8573c21 100644
--- a/pandas/tests/indexes/interval/test_interval.py
+++ b/pandas/tests/indexes/interval/test_interval.py
@@ -863,3 +863,25 @@ def test_dir():
index = IntervalIndex.from_arrays([0, 1], [1, 2])
result = dir(index)
assert "str" not in result
+
+
+@pytest.mark.parametrize("klass", [list, np.array, pd.array, pd.Series])
+def test_searchsorted_different_argument_classes(klass):
+ # https://github.com/pandas-dev/pandas/issues/32762
+ values = IntervalIndex([Interval(0, 1), Interval(1, 2)])
+ result = values.searchsorted(klass(values))
+ expected = np.array([0, 1], dtype=result.dtype)
+ tm.assert_numpy_array_equal(result, expected)
+
+ result = values._data.searchsorted(klass(values))
+ tm.assert_numpy_array_equal(result, expected)
+
+
+@pytest.mark.parametrize(
+ "arg", [[1, 2], ["a", "b"], [pd.Timestamp("2020-01-01", tz="Europe/London")] * 2]
+)
+def test_searchsorted_invalid_argument(arg):
+ values = IntervalIndex([Interval(0, 1), Interval(1, 2)])
+ msg = "unorderable types"
+ with pytest.raises(TypeError, match=msg):
+ values.searchsorted(arg)
diff --git a/pandas/tests/indexes/period/test_tools.py b/pandas/tests/indexes/period/test_tools.py
index dae220006ebe0..16a32019bf0cb 100644
--- a/pandas/tests/indexes/period/test_tools.py
+++ b/pandas/tests/indexes/period/test_tools.py
@@ -10,8 +10,10 @@
NaT,
Period,
PeriodIndex,
+ Series,
Timedelta,
Timestamp,
+ array,
date_range,
period_range,
)
@@ -64,6 +66,19 @@ def test_searchsorted(self, freq):
with pytest.raises(IncompatibleFrequency, match=msg):
pidx.searchsorted(Period("2014-01-01", freq="5D"))
+ @pytest.mark.parametrize("klass", [list, np.array, array, Series])
+ def test_searchsorted_different_argument_classes(self, klass):
+ pidx = PeriodIndex(
+ ["2014-01-01", "2014-01-02", "2014-01-03", "2014-01-04", "2014-01-05"],
+ freq="D",
+ )
+ result = pidx.searchsorted(klass(pidx))
+ expected = np.arange(len(pidx), dtype=result.dtype)
+ tm.assert_numpy_array_equal(result, expected)
+
+ result = pidx._data.searchsorted(klass(pidx))
+ tm.assert_numpy_array_equal(result, expected)
+
def test_searchsorted_invalid(self):
pidx = PeriodIndex(
["2014-01-01", "2014-01-02", "2014-01-03", "2014-01-04", "2014-01-05"],
diff --git a/pandas/tests/indexes/timedeltas/test_timedelta.py b/pandas/tests/indexes/timedeltas/test_timedelta.py
index a159baefd60ea..4606fc47c34e1 100644
--- a/pandas/tests/indexes/timedeltas/test_timedelta.py
+++ b/pandas/tests/indexes/timedeltas/test_timedelta.py
@@ -11,6 +11,7 @@
Series,
Timedelta,
TimedeltaIndex,
+ array,
date_range,
timedelta_range,
)
@@ -111,6 +112,26 @@ def test_sort_values(self):
tm.assert_numpy_array_equal(dexer, np.array([0, 2, 1]), check_dtype=False)
+ @pytest.mark.parametrize("klass", [list, np.array, array, Series])
+ def test_searchsorted_different_argument_classes(self, klass):
+ idx = TimedeltaIndex(["1 day", "2 days", "3 days"])
+ result = idx.searchsorted(klass(idx))
+ expected = np.arange(len(idx), dtype=result.dtype)
+ tm.assert_numpy_array_equal(result, expected)
+
+ result = idx._data.searchsorted(klass(idx))
+ tm.assert_numpy_array_equal(result, expected)
+
+ @pytest.mark.parametrize(
+ "arg",
+ [[1, 2], ["a", "b"], [pd.Timestamp("2020-01-01", tz="Europe/London")] * 2],
+ )
+ def test_searchsorted_invalid_argument_dtype(self, arg):
+ idx = TimedeltaIndex(["1 day", "2 days", "3 days"])
+ msg = "searchsorted requires compatible dtype"
+ with pytest.raises(TypeError, match=msg):
+ idx.searchsorted(arg)
+
def test_argmin_argmax(self):
idx = TimedeltaIndex(["1 day 00:00:05", "1 day 00:00:01", "1 day 00:00:02"])
assert idx.argmin() == 1
| - [x] closes #32762
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
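A minimal before/after check (mirrors the new tests above):

```python
import pandas as pd

dti = pd.to_datetime(["2020-01-01", "2020-02-01"])
dti.searchsorted(list(dti))       # array([0, 1]); raised TypeError before this fix
dti.searchsorted(pd.Series(dti))  # array([0, 1])
```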
| https://api.github.com/repos/pandas-dev/pandas/pulls/32764 | 2020-03-16T23:34:20Z | 2020-03-26T01:22:58Z | 2020-03-26T01:22:58Z | 2020-03-26T01:25:32Z |
BUG: resample.agg with read-only data | diff --git a/doc/source/whatsnew/v1.0.3.rst b/doc/source/whatsnew/v1.0.3.rst
index 17f1bdc365518..482222fbddbb8 100644
--- a/doc/source/whatsnew/v1.0.3.rst
+++ b/doc/source/whatsnew/v1.0.3.rst
@@ -15,6 +15,7 @@ including other versions of pandas.
Fixed regressions
~~~~~~~~~~~~~~~~~
+- Fixed regression in ``resample.agg`` when the underlying data is non-writeable (:issue:`31710`)
.. _whatsnew_103.bug_fixes:
diff --git a/pandas/_libs/groupby.pyx b/pandas/_libs/groupby.pyx
index 27b3095d8cb4f..35a6963165194 100644
--- a/pandas/_libs/groupby.pyx
+++ b/pandas/_libs/groupby.pyx
@@ -848,11 +848,13 @@ cdef inline bint _treat_as_na(rank_t val, bint is_datetimelike) nogil:
return val != val
+# GH#31710 use memoryviews once cython 0.30 is released so we can
+# use `const rank_t[:, :] values`
@cython.wraparound(False)
@cython.boundscheck(False)
def group_last(rank_t[:, :] out,
int64_t[:] counts,
- rank_t[:, :] values,
+ ndarray[rank_t, ndim=2] values,
const int64_t[:] labels,
Py_ssize_t min_count=-1):
"""
@@ -937,11 +939,13 @@ def group_last(rank_t[:, :] out,
raise RuntimeError("empty group with uint64_t")
+# GH#31710 use memoryviews once cython 0.30 is released so we can
+# use `const rank_t[:, :] values`
@cython.wraparound(False)
@cython.boundscheck(False)
def group_nth(rank_t[:, :] out,
int64_t[:] counts,
- rank_t[:, :] values,
+ ndarray[rank_t, ndim=2] values,
const int64_t[:] labels, int64_t rank=1,
Py_ssize_t min_count=-1):
"""
@@ -1235,7 +1239,7 @@ ctypedef fused groupby_t:
@cython.boundscheck(False)
def group_max(groupby_t[:, :] out,
int64_t[:] counts,
- groupby_t[:, :] values,
+ ndarray[groupby_t, ndim=2] values,
const int64_t[:] labels,
Py_ssize_t min_count=-1):
"""
@@ -1308,7 +1312,7 @@ def group_max(groupby_t[:, :] out,
@cython.boundscheck(False)
def group_min(groupby_t[:, :] out,
int64_t[:] counts,
- groupby_t[:, :] values,
+ ndarray[groupby_t, ndim=2] values,
const int64_t[:] labels,
Py_ssize_t min_count=-1):
"""
diff --git a/pandas/tests/resample/test_resample_api.py b/pandas/tests/resample/test_resample_api.py
index d552241f9126f..6389c88c99f73 100644
--- a/pandas/tests/resample/test_resample_api.py
+++ b/pandas/tests/resample/test_resample_api.py
@@ -580,3 +580,27 @@ def test_agg_with_datetime_index_list_agg_func(col_name):
columns=pd.MultiIndex(levels=[[col_name], ["mean"]], codes=[[0], [0]]),
)
tm.assert_frame_equal(result, expected)
+
+
+def test_resample_agg_readonly():
+ # GH#31710 cython needs to allow readonly data
+ index = pd.date_range("2020-01-01", "2020-01-02", freq="1h")
+ arr = np.zeros_like(index)
+ arr.setflags(write=False)
+
+ ser = pd.Series(arr, index=index)
+ rs = ser.resample("1D")
+
+ expected = pd.Series([pd.Timestamp(0), pd.Timestamp(0)], index=index[::24])
+
+ result = rs.agg("last")
+ tm.assert_series_equal(result, expected)
+
+ result = rs.agg("first")
+ tm.assert_series_equal(result, expected)
+
+ result = rs.agg("max")
+ tm.assert_series_equal(result, expected)
+
+ result = rs.agg("min")
+ tm.assert_series_equal(result, expected)
| - [x] closes #31710
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
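A short repro of the regression (same setup as ``test_resample_agg_readonly`` above):

```python
import numpy as np
import pandas as pd

index = pd.date_range("2020-01-01", "2020-01-02", freq="1h")
arr = np.zeros_like(index)
arr.setflags(write=False)  # the read-only buffer is what tripped the cython signatures
pd.Series(arr, index=index).resample("1D").agg("last")  # raised ValueError before this fix
```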
| https://api.github.com/repos/pandas-dev/pandas/pulls/32758 | 2020-03-16T18:58:31Z | 2020-03-17T01:34:16Z | 2020-03-17T01:34:15Z | 2020-03-17T16:01:08Z |