# RxRx1: A Dataset for Evaluating Experimental Batch Correction Methods

## Description
High-throughput screening techniques are commonly used to obtain large quantities of data in many fields of biology. It is well known that artifacts arising from variability in the technical execution of different experimental batches within such screens confound these observations and can lead to invalid biological conclusions. It is therefore necessary to account for these batch effects when analyzing outcomes. In this paper we describe RxRx1, a biological dataset designed specifically for the systematic study of batch effect correction methods. The dataset consists of 125,510 high-resolution fluorescence microscopy images of human cells under 1,138 genetic perturbations in 51 experimental batches across 4 cell types. Visual inspection of the images alone clearly demonstrates significant batch effects. We propose a classification task designed to evaluate the effectiveness of experimental batch correction methods on these images and examine the performance of a number of correction methods on this task. Our goal in releasing RxRx1 is to encourage the development of effective experimental batch correction methods that generalize well to unseen experimental batches.
## Citation
```bibtex
@misc{sypetkowski2023rxrx1,
  title         = {RxRx1: A Dataset for Evaluating Experimental Batch Correction Methods},
  author        = {Maciej Sypetkowski and Morteza Rezanejad and Saber Saberian and Oren Kraus and John Urbanik and James Taylor and Ben Mabey and Mason Victors and Jason Yosinski and Alborz Rezazadeh Sereshkeh and Imran Haque and Berton Earnshaw},
  year          = {2023},
  eprint        = {2301.05768},
  archiveprefix = {arXiv},
  primaryclass  = {cs.CV}
}
```