url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | author_association (string) | active_lock_reason (null) | body (string) | reactions (dict) | timeline_url (string) | performed_via_github_app (null) | state_reason (string) | draft (bool) | pull_request (dict) | is_pull_request (bool) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/5231 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5231/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5231/comments | https://api.github.com/repos/huggingface/datasets/issues/5231/events | https://github.com/huggingface/datasets/issues/5231 | 1,445,883,267 | I_kwDODunzps5WLm2D | 5,231 | Using `set_format(type='torch', columns=columns)` makes Array2D/3D columns stop formatting correctly | {
"login": "plamb-viso",
"id": 99206017,
"node_id": "U_kgDOBenDgQ",
"avatar_url": "https://avatars.githubusercontent.com/u/99206017?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/plamb-viso",
"html_url": "https://github.com/plamb-viso",
"followers_url": "https://api.github.com/users/plamb-viso/followers",
"following_url": "https://api.github.com/users/plamb-viso/following{/other_user}",
"gists_url": "https://api.github.com/users/plamb-viso/gists{/gist_id}",
"starred_url": "https://api.github.com/users/plamb-viso/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/plamb-viso/subscriptions",
"organizations_url": "https://api.github.com/users/plamb-viso/orgs",
"repos_url": "https://api.github.com/users/plamb-viso/repos",
"events_url": "https://api.github.com/users/plamb-viso/events{/privacy}",
"received_events_url": "https://api.github.com/users/plamb-viso/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"In case others find this, the problem was not with set_format, but my usages of `to_pandas()` and `from_pandas()` which I was using during dataset splitting; somewhere in the chain of converting to and from pandas the `Array2D/Array3D` types get converted to series of `Sequence()` types"
] | 2022-11-11T18:54:36 | 2022-11-11T20:42:29 | 2022-11-11T18:59:50 | NONE | null | I have a Dataset with two Features defined as follows:
```python
'image': Array3D(dtype="int64", shape=(3, 224, 224)),
'bbox': Array2D(dtype="int64", shape=(512, 4)),
```
On said dataset, if I `dataset.set_format(type='torch')` and then use the dataset in a dataloader, these columns are correctly cast to Tensors of (batch_size, 3, 224, 224), for example.
However, if I `dataset.set_format(type='torch', columns=['image', 'bbox'])`, these columns are cast to lists of tensors and miss the batch size completely (the 3 dimension becomes the list length).
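For reference, a minimal, self-contained sketch of the comparison (the dummy data and sizes below are hypothetical; only the feature definitions come from the snippet above):
```python
# Hypothetical minimal repro sketch -- not the original training code.
import numpy as np
from datasets import Dataset, Features, Array2D, Array3D

features = Features({
    "image": Array3D(dtype="int64", shape=(3, 224, 224)),
    "bbox": Array2D(dtype="int64", shape=(512, 4)),
})
ds = Dataset.from_dict(
    {
        "image": [np.zeros((3, 224, 224), dtype="int64")] * 2,
        "bbox": [np.zeros((512, 4), dtype="int64")] * 2,
    },
    features=features,
)

ds.set_format(type="torch")                             # no `columns` argument
print(type(ds[0]["image"]), ds[0]["image"].shape)       # torch.Tensor, torch.Size([3, 224, 224])

ds.set_format(type="torch", columns=["image", "bbox"])  # with `columns`
print(type(ds[0]["image"]))                             # reportedly a list of tensors instead
```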
I'm currently digging through datasets formatting code to try and find out why, but was curious if someone knew an immediate solution for this. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5231/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5231/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5230 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5230/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5230/comments | https://api.github.com/repos/huggingface/datasets/issues/5230/events | https://github.com/huggingface/datasets/issues/5230 | 1,445,507,580 | I_kwDODunzps5WKLH8 | 5,230 | dataclasses error when importing the library in python 3.11 | {
"login": "yonikremer",
"id": 76044840,
"node_id": "MDQ6VXNlcjc2MDQ0ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/76044840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonikremer",
"html_url": "https://github.com/yonikremer",
"followers_url": "https://api.github.com/users/yonikremer/followers",
"following_url": "https://api.github.com/users/yonikremer/following{/other_user}",
"gists_url": "https://api.github.com/users/yonikremer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonikremer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonikremer/subscriptions",
"organizations_url": "https://api.github.com/users/yonikremer/orgs",
"repos_url": "https://api.github.com/users/yonikremer/repos",
"events_url": "https://api.github.com/users/yonikremer/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonikremer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I opened [this issue](https://github.com/python/cpython/issues/99401).\r\nPython's maintainers say that the issue is caused by [this change](https://docs.python.org/3.11/whatsnew/3.11.html#dataclasses).\r\nI believe adding a `__hash__` method to `datasets.utils.version.Version` should solve (at least partially) this issue.",
"Has this been fixed? I am running into this issue now. \r\n\r\nIf this has been fixed, could have a new release with this?\r\n",
"Hi, I am getting error while training \r\n\r\n(tensorflow) C:\\tensorflow\\models\\research\\object_detection>python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config\r\nTraceback (most recent call last):\r\n File \"C:\\tensorflow\\models\\research\\object_detection\\train.py\", line 54, in <module>\r\n from object_detection.legacy import trainer\r\n File \"C:\\tensorflow\\models\\research\\object_detection\\legacy\\trainer.py\", line 27, in <module>\r\n from object_detection.builders import optimizer_builder\r\n File \"C:\\tensorflow\\models\\research\\object_detection\\builders\\optimizer_builder.py\", line 25, in <module>\r\n from official.modeling.optimization import ema_optimizer\r\n File \"C:\\tensorflow\\models\\official\\modeling\\optimization\\__init__.py\", line 19, in <module>\r\n from official.modeling.optimization.configs.optimization_config import *\r\n File \"C:\\tensorflow\\models\\official\\modeling\\optimization\\configs\\optimization_config.py\", line 31, in <module>\r\n @dataclasses.dataclass\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\x0133252\\AppData\\Local\\anaconda3\\envs\\tensorflow\\Lib\\dataclasses.py\", line 1223, in dataclass\r\n return wrap(cls)\r\n ^^^^^^^^^\r\n File \"C:\\Users\\x0133252\\AppData\\Local\\anaconda3\\envs\\tensorflow\\Lib\\dataclasses.py\", line 1213, in wrap\r\n return _process_class(cls, init, repr, eq, order, unsafe_hash,\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\x0133252\\AppData\\Local\\anaconda3\\envs\\tensorflow\\Lib\\dataclasses.py\", line 958, in _process_class\r\n cls_fields.append(_get_field(cls, name, type, kw_only))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\x0133252\\AppData\\Local\\anaconda3\\envs\\tensorflow\\Lib\\dataclasses.py\", line 815, in _get_field\r\n raise ValueError(f'mutable default {type(f.default)} for field '\r\nValueError: mutable default <class 'official.modeling.optimization.configs.optimizer_config.SGDConfig'> for field sgd is not allowed: use default_factory"
] | 2022-11-11T13:53:49 | 2023-05-09T12:17:30 | 2022-11-14T15:27:37 | NONE | null | ### Describe the bug
When I import datasets using Python 3.11, the dataclasses standard library raises the following error:
`ValueError: mutable default <class 'datasets.utils.version.Version'> for field version is not allowed: use default_factory`
When I tried to import the library using the following jupyter notebook:
```bash
%%bash
# create python 3.11 conda env
conda create --yes --quiet -n myenv -c conda-forge python=3.11
# activate it
source activate myenv
# install pyarrow
/opt/conda/envs/myenv/bin/python -m pip install --quiet --extra-index-url https://pypi.fury.io/arrow-nightlies/ \
--prefer-binary --pre pyarrow
# install datasets
/opt/conda/envs/myenv/bin/python -m pip install --quiet datasets
```
```python
# create a python file that only imports datasets
with open("import_datasets.py", 'w') as f:
f.write("import datasets")
# run it with the env
!/opt/conda/envs/myenv/bin/python import_datasets.py
```
I get the following error:
```
Traceback (most recent call last):
File "/kaggle/working/import_datasets.py", line 1, in <module>
import datasets
File "/opt/conda/envs/myenv/lib/python3.11/site-packages/datasets/__init__.py", line 45, in <module>
from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
File "/opt/conda/envs/myenv/lib/python3.11/site-packages/datasets/builder.py", line 91, in <module>
@dataclass
^^^^^^^^^
File "/opt/conda/envs/myenv/lib/python3.11/dataclasses.py", line 1221, in dataclass
return wrap(cls)
^^^^^^^^^
File "/opt/conda/envs/myenv/lib/python3.11/dataclasses.py", line 1211, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/myenv/lib/python3.11/dataclasses.py", line 959, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/myenv/lib/python3.11/dataclasses.py", line 816, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'datasets.utils.version.Version'> for field version is not allowed: use default_factory
```
This is probably due to one of the following changes in the [dataclasses standard library](https://docs.python.org/3/library/dataclasses.html) in version 3.11:
1. Changed in version 3.11: Instead of looking for and disallowing objects of type list, dict, or set, unhashable objects are now not allowed as default values. Unhashability is used to approximate mutability. (A short illustration of this change is included after this list.)
2. fields may optionally specify a default value, using normal Python syntax:
```
@dataclass
class C:
a: int # 'a' has no default value
b: int = 0 # assign a default value for 'b'
In this example, both a and b will be included in the added __init__() method, which will be defined as:
def __init__(self, a: int, b: int = 0):
```
3. Changed in version 3.11: If a field name is already included in the __slots__ of a base class, it will not be included in the generated __slots__ to prevent [overriding them](https://docs.python.org/3/reference/datamodel.html#datamodel-note-slots). Therefore, do not use __slots__ to retrieve the field names of a dataclass. Use [fields()](https://docs.python.org/3/library/dataclasses.html#dataclasses.fields) instead. To be able to determine inherited slots, base class __slots__ may be any iterable, but not an iterator.
4. weakref_slot: If true (the default is False), add a slot named “__weakref__”, which is required to make an instance weakref-able. It is an error to specify weakref_slot=True without also specifying slots=True.
[TypeError](https://docs.python.org/3/library/exceptions.html#TypeError) will be raised if a field without a default value follows a field with a default value. This is true whether this occurs in a single class, or as a result of class inheritance.
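To make the change in item 1 concrete, here is a hedged, hypothetical illustration; the `Version` and `BuilderConfig` classes below are simplified stand-ins, not the actual `datasets` implementations:
```python
import dataclasses

class Version:
    """Stand-in for a class that defines __eq__ but no __hash__, so instances are unhashable."""
    def __init__(self, version_str: str):
        self.version_str = version_str
    def __eq__(self, other):
        return self.version_str == other.version_str
    # Defining __eq__ without __hash__ sets __hash__ to None (unhashable).
    # On Python 3.11+, an unhashable default is treated as mutable and rejected:
    #   ValueError: mutable default <class 'Version'> for field version is not allowed
    # Adding a __hash__ method (as suggested in the comments above) makes the default accepted again.

@dataclasses.dataclass
class BuilderConfig:
    version: Version = Version("0.0.0")  # raises ValueError at class-definition time on 3.11+
```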
### Steps to reproduce the bug
Steps to reproduce the behavior:
1. go to [the notebook in kaggle](https://www.kaggle.com/yonikremer/repreducing-issue)
2. run both of the cells
### Expected behavior
I'm expecting no issues.
This error should not occur.
### Environment info
kaggle kernels, with default settings:
pin to original environment, no accelerator. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5230/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5230/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5229 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5229/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5229/comments | https://api.github.com/repos/huggingface/datasets/issues/5229/events | https://github.com/huggingface/datasets/issues/5229 | 1,445,121,028 | I_kwDODunzps5WIswE | 5,229 | Type error when calling `map` over dataset containing 0-d tensors | {
"login": "phipsgabler",
"id": 7878215,
"node_id": "MDQ6VXNlcjc4NzgyMTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7878215?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phipsgabler",
"html_url": "https://github.com/phipsgabler",
"followers_url": "https://api.github.com/users/phipsgabler/followers",
"following_url": "https://api.github.com/users/phipsgabler/following{/other_user}",
"gists_url": "https://api.github.com/users/phipsgabler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phipsgabler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phipsgabler/subscriptions",
"organizations_url": "https://api.github.com/users/phipsgabler/orgs",
"repos_url": "https://api.github.com/users/phipsgabler/repos",
"events_url": "https://api.github.com/users/phipsgabler/events{/privacy}",
"received_events_url": "https://api.github.com/users/phipsgabler/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! \r\n\r\nWe could address this by calling `.item()` on such tensors to extract the value, but this would lose us the type, which could lead to storing the generated dataset in a suboptimal format. Considering this, I think the only proper fix would be implementing support for 0-D tensors on Apache Arrow's side (Arrow is the underlying format we use to store datasets on disk/in memory). WDYT @lhoestq?",
"I think we can just convert the item to a numpy typed scalar using `.numpy()` ?\r\n\r\nFor example this works:\r\n```python\r\nimport numpy as np\r\nimport pyarrow as pa\r\n\r\nassert pa.array([np.float64(1.0)]).type == pa.float64()\r\nassert pa.array([np.float32(1.0)]).type == pa.float32()\r\nassert pa.array([np.int32(1)]).type == pa.int32()\r\nassert pa.array([np.int64(1)]).type == pa.int64()\r\n```\r\n\r\nAnd therefore it would work the same as for PyTorch N-D Tensors: convert to Numpy Array to keep the type in `_cast_to_python_objects`, then convert to Arrow"
] | 2022-11-11T08:27:28 | 2023-01-13T16:00:53 | 2023-01-13T16:00:53 | NONE | null | ### Describe the bug
0-dimensional tensors in a dataset lead to `TypeError: iteration over a 0-d array` when calling `map`. It is easy to generate such tensors by using `.with_format("...")` on the whole dataset.
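For context, a hedged illustration of where the error message itself comes from (plain NumPy, independent of `datasets`):
```python
import numpy as np

x = np.array(1)      # a 0-dimensional array, analogous to a 0-d tensor
try:
    iter(x)
except TypeError as err:
    print(err)       # "iteration over a 0-d array"
```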
### Steps to reproduce the bug
```python
import datasets

ds = datasets.Dataset.from_list([{"a": 1}, {"a": 1}]).with_format("torch")
ds.map(None)
```
### Expected behavior
Getting back `ds` without errors.
### Environment info
Python 3.10.8
datasets 2.6.
torch 1.13.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5229/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5229/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5228 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5228/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5228/comments | https://api.github.com/repos/huggingface/datasets/issues/5228/events | https://github.com/huggingface/datasets/issues/5228 | 1,444,763,105 | I_kwDODunzps5WHVXh | 5,228 | Loading a dataset from the hub fails if you happen to have a folder of the same name | {
"login": "dakinggg",
"id": 43149077,
"node_id": "MDQ6VXNlcjQzMTQ5MDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/43149077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dakinggg",
"html_url": "https://github.com/dakinggg",
"followers_url": "https://api.github.com/users/dakinggg/followers",
"following_url": "https://api.github.com/users/dakinggg/following{/other_user}",
"gists_url": "https://api.github.com/users/dakinggg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dakinggg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dakinggg/subscriptions",
"organizations_url": "https://api.github.com/users/dakinggg/orgs",
"repos_url": "https://api.github.com/users/dakinggg/repos",
"events_url": "https://api.github.com/users/dakinggg/events{/privacy}",
"received_events_url": "https://api.github.com/users/dakinggg/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"`load_dataset` first checks for a local directory before checking for the Hub.\r\n\r\nTo make it explicit that it has to fetch the Hub, we could support the `hffs` syntax:\r\n```python\r\nload_dataset(\"hf://datasets/glue\")\r\n```\r\n\r\nwould that work for you ? Also cc @mariosasko who's leading the `hffs` project",
"yeah, that would be a fine solution.",
"This still has no proper solution in 2.11\r\n\r\nperhaps have a `download_config=\"force_remote\"` or just backtrack once you reach `EmptyDatasetError` locally and then try to load it from the hub (or a local cache, as that only gets checked if there is no local folder...?)"
] | 2022-11-11T00:51:54 | 2023-05-03T23:23:04 | null | NONE | null | ### Describe the bug
I'm not 100% sure this should be considered a bug, but it was certainly annoying to figure out the cause of. And perhaps I am just missing a specific argument needed to avoid this conflict. Basically I had a situation where multiple workers were downloading different parts of the glue dataset and then training on them. Additionally, they were writing their checkpoints to a folder called `glue`. This meant that once one worker had created the `glue` folder to write checkpoints to, the next worker to try to load a glue dataset would fail as shown in the minimal repro below. I'm not sure what the solution would be since I'm not super familiar with the `datasets` code, but I would expect `load_dataset` to not crash just because I have a local folder with the same name as a dataset from the hub.
### Steps to reproduce the bug
```
In [1]: import datasets
In [2]: rte = datasets.load_dataset('glue', 'rte')
Downloading and preparing dataset glue/rte to /Users/danielking/.cache/huggingface/datasets/glue/rte/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad...
Downloading data: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 697k/697k [00:00<00:00, 6.08MB/s]
Dataset glue downloaded and prepared to /Users/danielking/.cache/huggingface/datasets/glue/rte/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad. Subsequent calls will reuse this data.
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 773.81it/s]
In [3]: import os
In [4]: os.mkdir('glue')
In [5]: rte = datasets.load_dataset('glue', 'rte')
---------------------------------------------------------------------------
EmptyDatasetError Traceback (most recent call last)
<ipython-input-5-0d6b9ad8bbd0> in <cell line: 1>()
----> 1 rte = datasets.load_dataset('glue', 'rte')
~/miniconda3/envs/composer/lib/python3.9/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1717
1718 # Create a dataset builder
-> 1719 builder_instance = load_dataset_builder(
1720 path=path,
1721 name=name,
~/miniconda3/envs/composer/lib/python3.9/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)
1495 download_config = download_config.copy() if download_config else DownloadConfig()
1496 download_config.use_auth_token = use_auth_token
-> 1497 dataset_module = dataset_module_factory(
1498 path,
1499 revision=revision,
~/miniconda3/envs/composer/lib/python3.9/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1152 ).get_module()
1153 elif os.path.isdir(path):
-> 1154 return LocalDatasetModuleFactoryWithoutScript(
1155 path, data_dir=data_dir, data_files=data_files, download_mode=download_mode
1156 ).get_module()
~/miniconda3/envs/composer/lib/python3.9/site-packages/datasets/load.py in get_module(self)
624 base_path = os.path.join(self.path, self.data_dir) if self.data_dir else self.path
625 patterns = (
--> 626 sanitize_patterns(self.data_files) if self.data_files is not None else get_data_patterns_locally(base_path)
627 )
628 data_files = DataFilesDict.from_local_or_remote(
~/miniconda3/envs/composer/lib/python3.9/site-packages/datasets/data_files.py in get_data_patterns_locally(base_path)
458 return _get_data_files_patterns(resolver)
459 except FileNotFoundError:
--> 460 raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files") from None
461
462
EmptyDatasetError: The directory at glue doesn't contain any data files
```
### Expected behavior
Dataset is still able to be loaded from the hub even if I have a local folder with the same name.
### Environment info
datasets version: 2.6.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5228/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5228/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5227 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5227/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5227/comments | https://api.github.com/repos/huggingface/datasets/issues/5227/events | https://github.com/huggingface/datasets/issues/5227 | 1,444,620,094 | I_kwDODunzps5WGyc- | 5,227 | datasets.data_files.EmptyDatasetError: The directory at wikisql doesn't contain any data files | {
"login": "ScottM-wizard",
"id": 102275116,
"node_id": "U_kgDOBhiYLA",
"avatar_url": "https://avatars.githubusercontent.com/u/102275116?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ScottM-wizard",
"html_url": "https://github.com/ScottM-wizard",
"followers_url": "https://api.github.com/users/ScottM-wizard/followers",
"following_url": "https://api.github.com/users/ScottM-wizard/following{/other_user}",
"gists_url": "https://api.github.com/users/ScottM-wizard/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ScottM-wizard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ScottM-wizard/subscriptions",
"organizations_url": "https://api.github.com/users/ScottM-wizard/orgs",
"repos_url": "https://api.github.com/users/ScottM-wizard/repos",
"events_url": "https://api.github.com/users/ScottM-wizard/events{/privacy}",
"received_events_url": "https://api.github.com/users/ScottM-wizard/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Fixed. Please close."
] | 2022-11-10T21:57:06 | 2022-11-10T22:05:43 | 2022-11-10T22:05:43 | NONE | null | ### Describe the bug
From these lines:
from datasets import list_datasets, load_dataset
dataset = load_dataset("wikisql","binary")
I get error message:
datasets.data_files.EmptyDatasetError: The directory at wikisql doesn't contain any data files
And yet 'wikisql' is reported to exist by `list_datasets()`.
Any help appreciated.
### Steps to reproduce the bug
From these lines:
from datasets import list_datasets, load_dataset
dataset = load_dataset("wikisql","binary")
I get error message:
datasets.data_files.EmptyDatasetError: The directory at wikisql doesn't contain any data files
And yet 'wikisql' is reported to exist by `list_datasets()`.
Any help appreciated.
### Expected behavior
Dataset should load. This same code used to work.
### Environment info
Mac OS | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5227/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5227/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5226 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5226/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5226/comments | https://api.github.com/repos/huggingface/datasets/issues/5226/events | https://github.com/huggingface/datasets/issues/5226 | 1,444,385,148 | I_kwDODunzps5WF5F8 | 5,226 | Q: Memory release when removing the column? | {
"login": "bayartsogt-ya",
"id": 43239645,
"node_id": "MDQ6VXNlcjQzMjM5NjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/43239645?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bayartsogt-ya",
"html_url": "https://github.com/bayartsogt-ya",
"followers_url": "https://api.github.com/users/bayartsogt-ya/followers",
"following_url": "https://api.github.com/users/bayartsogt-ya/following{/other_user}",
"gists_url": "https://api.github.com/users/bayartsogt-ya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bayartsogt-ya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bayartsogt-ya/subscriptions",
"organizations_url": "https://api.github.com/users/bayartsogt-ya/orgs",
"repos_url": "https://api.github.com/users/bayartsogt-ya/repos",
"events_url": "https://api.github.com/users/bayartsogt-ya/events{/privacy}",
"received_events_url": "https://api.github.com/users/bayartsogt-ya/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Datasets are memory mapped from your disk, i.e. they're not loaded in RAM. This is possible thanks to the Arrow data format.\r\n\r\nTherefore the column you remove is not in RAM, so removing it doesn't cause the RAM to decrease.",
"Thanks for the explanation! @lhoestq \r\nI wonder since it is memory mapped, can we reduce or remove this memory map?",
"Yes you can `del common_voice` for example or wait for it to be garbage collected"
] | 2022-11-10T18:35:27 | 2022-11-29T15:10:10 | 2022-11-29T15:10:10 | NONE | null | ### Describe the bug
How do I release memory when I use methods like `.remove_columns()` or `clear()` in notebooks?
```python
from datasets import load_dataset
common_voice = load_dataset("mozilla-foundation/common_voice_11_0", "ja", use_auth_token=True)
# check memory -> RAM Used (GB): 0.704 / Total (GB) 33.670
common_voice = common_voice.remove_columns(column_names=common_voice.column_names['train'])
common_voice.clear()
# check memory -> RAM Used (GB): 0.705 / Total (GB) 33.670
```
I tried `gc.collect()` but it did not help.
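For reference, a hedged sketch, assuming the goal is to close the memory map rather than free copied data: drop all references to the dataset object (the variable name reuses the snippet above).
```python
import gc

del common_voice   # drop the (last) reference to the memory-mapped dataset
gc.collect()       # the mapping is released once the object is garbage collected
```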
### Steps to reproduce the bug
1. load dataset
2. remove all the columns
3. check memory is reduced or not
[link to reproduce](https://www.kaggle.com/code/bayartsogtya/huggingface-dataset-memory-issue/notebook?scriptVersionId=110630567)
### Expected behavior
Memory released when I remove the column
### Environment info
- `datasets` version: 2.1.0
- Platform: Linux-5.15.65+-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- PyArrow version: 8.0.0
- Pandas version: 1.3.5 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5226/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5226/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5225 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5225/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5225/comments | https://api.github.com/repos/huggingface/datasets/issues/5225/events | https://github.com/huggingface/datasets/issues/5225 | 1,444,305,183 | I_kwDODunzps5WFlkf | 5,225 | Add video feature | {
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892884,
"node_id": "MDU6TGFiZWwxOTM1ODkyODg0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/help%20wanted",
"name": "help wanted",
"color": "008672",
"default": true,
"description": "Extra attention is needed"
},
{
"id": 3608941089,
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision",
"name": "vision",
"color": "bfdadc",
"default": false,
"description": "Vision datasets"
}
] | open | false | null | [] | null | [
"@NielsRogge @rwightman may have additional requirements regarding this feature.\r\n\r\nWhen adding a new (decodable) type, the hardest part is choosing the right decoding library. What I mean by \"right\" here is that it has all the features we need and is easy to install (with GPU support?).\r\n\r\nSome candidates/options:\r\n* [`decord`](https://github.com/dmlc/decord): no longer [maintained](https://github.com/dmlc/decord/issues/214), not trivial to install with GPU support\r\n* [`pyAV`](https://github.com/PyAV-Org/PyAV): used for CPU decoding in `torchvision`, GPU decoding not supported if I'm not mistaken, otherwise the best candidate probably\r\n* [`video_reader`](https://github.com/pytorch/vision/blob/de350bc01ad2193ea2888f0ce8a6a346d3cba5a9/torchvision/csrc/io/video_reader/video_reader.cpp): used for GPU decoding in `torchvision`, depends on `torch'\r\n* OpenCV: uses `ffmpeg` for video decoding under the hood\r\n* ...\r\n\r\nAnd the last resort is building our own library, which is the most flexible solution but also requires the most work.\r\n\r\nPS: I'm adding a link to an article that compares various video decoding libraries: https://towardsdatascience.com/lightning-fast-video-reading-in-python-c1438771c4e6",
"@mariosasko is GPU decoding a hard requirement here? Do we really need it? (I don't know)\r\n\r\nSomething to consider with `decord` is that it doesn't (AFAIK) support writing videos, so you'd still need something else for that. also I've noticed [issues](https://github.com/dmlc/decord/issues/242) with decord's ability to decode stereo audio streams along side the video (which you don't run into with PyAV).\r\n\r\n---\r\n\r\nI think PyAV should be able to do the job just fine to start. If we write the video io utilities as their own functions, we can hot swap them later if we find/write a different solution that's faster/better.",
"Video is still a bit of a mess, but I'd say pyAV is likely the best approach (or supporting all three via pytorchvideo, but that adds a middle man dependency).\r\n\r\nBeing able to decode on the GPU, into memory that could be passed off to a Tensor in whatever framework is being used would be the dream, I don't think there is any interop of that nature working right now. Number of decoder instances per GPU is limited so it's not clear if balancing load btw GPU decoders and CPUs would be needed in say large scale video training.\r\n\r\nAny of these solutions is less than ideal due to the nature of video, having a simple Python interface video / start -> end results in lots of extra memory (you need to decode whole range of the clips into a buffer before using anything). Any scalable video system would be streaming on the fly (issuing frames via callbacks as soon as the stream is far enough along to have re-ordered the frames and synced audio+video+other metadata (sensors, CC, etc).\r\n\r\n",
"For standalone usage, decoding on GPU could be ideal but isn't async processing of inputs on CPUs while letting the accelerator busy for training the de-facto? Of course, I am aware of other advanced mechanisms such as CPU offloading, but I think my point is conveyed. ",
"Here's a minimal implementation of the helper functions we'd need from PyAV, a lot of which I borrowed from `pytorchvideo`, stripping out the `torch` specific stuff:\r\n\r\n[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/nateraw/c327cb6ff6b074e6ddc8068d19c0367d/pyav-io.ipynb)\r\n \r\nIt's not too much code...@mariosasko we could probably just maintain these helper fns within the `datasets` library, right? ",
"Also wanted to note I added a PR for video classification in `transformers` here, which uses `decord`. It's still open...should we make a decision now to align the libraries we are using between `datasets` and `transformers`? (CC @Narsil )\r\n\r\nhttps://github.com/huggingface/transformers/pull/20151",
"Fully agree on at least trying to unite things.\r\n\r\nMaking clear function boundaries to help us change dependency if needed seems like a good idea since there doesn't seem to be a clear winner.\r\n\r\nI also happen to like directly calling ffmpeg. For some reason it was a lot faster than pyav. "
] | 2022-11-10T17:36:11 | 2022-12-02T15:13:15 | null | CONTRIBUTOR | null | ### Feature request
Add a `Video` feature to the library so folks can include videos in their datasets.
### Motivation
Being able to load Video data would be quite helpful. However, there are some challenges when it comes to videos:
1. Videos, unlike images, can end up being extremely large files
2. Oftentimes when training video models, you need to do some very specific sampling. Videos might end up needing to be broken down into X number of clips used for training/inference
3. Videos have an additional audio stream, which must be accounted for
4. The feature needs to be able to encode/decode videos (with the right video settings) from bytes.
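A hedged, illustrative sketch of point 4 using PyAV (one of the candidate libraries; this is not a proposed API, and the file path below is hypothetical):
```python
import io

import av
import numpy as np

def decode_video_bytes(video_bytes: bytes) -> np.ndarray:
    """Decode an in-memory video into a (num_frames, height, width, 3) uint8 RGB array."""
    with av.open(io.BytesIO(video_bytes)) as container:
        frames = [frame.to_ndarray(format="rgb24") for frame in container.decode(video=0)]
    return np.stack(frames)

with open("clip.mp4", "rb") as f:  # hypothetical file
    video = decode_video_bytes(f.read())
print(video.shape)
```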
### Your contribution
I did work on this a while back in [this (now closed) PR](https://github.com/huggingface/datasets/pull/4532). It used a library I made called [encoded_video](https://github.com/nateraw/encoded-video), which is basically the utils from [pytorchvideo](https://github.com/facebookresearch/pytorchvideo), but without the `torch` dep. It included the ability to read/write from bytes, as we need to do here. We don't want to be using a sketchy library that I made as a dependency in this repo, though.
Would love to use this issue as a place to:
- brainstorm ideas on how to do this right
- list ways/examples to work around it for now
CC @sayakpaul @mariosasko @fcakyon | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5225/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5225/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5224 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5224/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5224/comments | https://api.github.com/repos/huggingface/datasets/issues/5224/events | https://github.com/huggingface/datasets/issues/5224 | 1,443,640,867 | I_kwDODunzps5WDDYj | 5,224 | Seems to freeze when loading audio dataset with wav files from local folder | {
"login": "uriii3",
"id": 45894267,
"node_id": "MDQ6VXNlcjQ1ODk0MjY3",
"avatar_url": "https://avatars.githubusercontent.com/u/45894267?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/uriii3",
"html_url": "https://github.com/uriii3",
"followers_url": "https://api.github.com/users/uriii3/followers",
"following_url": "https://api.github.com/users/uriii3/following{/other_user}",
"gists_url": "https://api.github.com/users/uriii3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/uriii3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/uriii3/subscriptions",
"organizations_url": "https://api.github.com/users/uriii3/orgs",
"repos_url": "https://api.github.com/users/uriii3/repos",
"events_url": "https://api.github.com/users/uriii3/events{/privacy}",
"received_events_url": "https://api.github.com/users/uriii3/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I just tried to do the same but changing the `.wav` files to `.mp3` files and that doesn't fix it.",
"I don't know if anyone will ever read this but I've tried to upload the same dataset with google colab and the output seems more clarifying. I didn't specify the train/test split so the dataset wasn't fully uploaded (or that is what I understood, might be wrong!!).\r\n\r\nNow, including the `drop_metadata` flag I can load the dataset normally (at least with colab notebook):\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"audiofolder\", data_dir=\"../archive/Dataset\", , drop_metadata=True)\r\n```\r\n\r\nI'll close the issue.",
"@uriii3 Hello, I understand correctly that you converted your wav files to mp3?",
"Yes but it didn't matter. I don't remember which of them I ended up working with."
] | 2022-11-10T10:29:31 | 2023-04-25T09:54:05 | 2022-11-22T11:24:19 | NONE | null | ### Describe the bug
I'm following the instructions in https://huggingface.co/docs/datasets/audio_load#audiofolder-with-metadata to load a dataset from a local folder.
Everything is in one folder, with a train subfolder containing the audio files and the CSV. When I try to load the dataset and run it from the terminal, it seems to work but then freezes for no apparent reason.
The metadata.csv file contains a few columns, but the important ones, `file_name` (the filename) and `transcription` (the transcription), are okay.
The audio files are `.wav` files; I don't know if that might be the problem (I will try changing them all to `.mp3` and try again).
### Steps to reproduce the bug
The code I'm using:
```python
from datasets import load_dataset
dataset = load_dataset("audiofolder", data_dir="../archive/Dataset")
dataset[0]["audio"]
```
The output I obtain:
```
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 311135.43it/s]
Using custom data configuration default-38d4546ffd010f3e
Downloading and preparing dataset audiofolder/default to /Users/mine/.cache/huggingface/datasets/audiofolder/default-38d4546ffd010f3e/0.0.0/6cbdd16f8688354c63b4e2a36e1585d05de285023ee6443ffd71c4182055c0fc...
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 166467.72it/s]
Using custom data configuration default-38d4546ffd010f3e
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 187772.74it/s]
Using custom data configuration default-38d4546ffd010f3e
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 59623.71it/s]
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 138090.55it/s]
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 106065.64it/s]
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 56036.38it/s]
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 74004.24it/s]
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 162343.45it/s]
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 101881.23it/s]
Using custom data configuration default-38d4546ffd010f3e
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 60145.67it/s]
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 80890.02it/s]
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 54036.67it/s]
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 95851.09it/s]
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 155897.00it/s]
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 137656.96it/s]
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 131230.81it/s]
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
```
And then here it just freezes and nothing more happens.
### Expected behavior
Load the dataset.
### Environment info
Datasets version:
datasets 2.6.1 pypi_0 pypi
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5224/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5224/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5223 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5223/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5223/comments | https://api.github.com/repos/huggingface/datasets/issues/5223/events | https://github.com/huggingface/datasets/pull/5223 | 1,442,610,658 | PR_kwDODunzps5CjT9Z | 5,223 | Add SQL guide | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5223). All of your documentation changes will be reflected on that endpoint.",
"I think we may want more content on this page that's not SQL related. Some of that content probably already lives in the main `load` docs page, but might be bad to remove major things like csv/pandas from there...WDYT we should do @lhoestq ?",
"Maybe the main load page can only show one example and redirect to this page for more details ?\r\n\r\nWe can do the same for pandas stuff: have one example in load, and redirect to this page for more details",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5223). All of your documentation changes will be reflected on that endpoint."
] | 2022-11-09T19:10:27 | 2022-11-15T17:40:25 | 2022-11-15T17:40:21 | MEMBER | null | This PR adapts @nateraw's awesome SQL notebook as a guide for the docs! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5223/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5223/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5223",
"html_url": "https://github.com/huggingface/datasets/pull/5223",
"diff_url": "https://github.com/huggingface/datasets/pull/5223.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5223.patch",
"merged_at": "2022-11-15T17:40:21"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5222 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5222/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5222/comments | https://api.github.com/repos/huggingface/datasets/issues/5222/events | https://github.com/huggingface/datasets/issues/5222 | 1,442,412,507 | I_kwDODunzps5V-Xfb | 5,222 | HuggingFace website is incorrectly reporting that my datasets are pickled | {
"login": "ProGamerGov",
"id": 10626398,
"node_id": "MDQ6VXNlcjEwNjI2Mzk4",
"avatar_url": "https://avatars.githubusercontent.com/u/10626398?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ProGamerGov",
"html_url": "https://github.com/ProGamerGov",
"followers_url": "https://api.github.com/users/ProGamerGov/followers",
"following_url": "https://api.github.com/users/ProGamerGov/following{/other_user}",
"gists_url": "https://api.github.com/users/ProGamerGov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ProGamerGov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ProGamerGov/subscriptions",
"organizations_url": "https://api.github.com/users/ProGamerGov/orgs",
"repos_url": "https://api.github.com/users/ProGamerGov/repos",
"events_url": "https://api.github.com/users/ProGamerGov/events{/privacy}",
"received_events_url": "https://api.github.com/users/ProGamerGov/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"cc @McPatate maybe you know what's happening ?",
"Yes I think I know what is happening. We check in zips for pickles, and the UI must display the pickle jar when a scan has an associated list of imports, even when empty.\r\n~I'll fix ASAP !~",
"> I'll fix ASAP !\r\n\r\nActually I'd rather leave it like that for now, as it indicates that we checked for pickles and nothing dangerous appeared :)",
"Closing the issue with the typical \"feature not a bug\" "
] | 2022-11-09T16:41:16 | 2022-11-09T18:10:46 | 2022-11-09T18:06:57 | NONE | null | ### Describe the bug
HuggingFace is incorrectly reporting that my datasets are pickled. They are not pickled; they are simple ZIP files containing PNG images.
Hopefully this is the right location to report this bug.
### Steps to reproduce the bug
Inspect my dataset repository here: https://huggingface.co/datasets/ProGamerGov/StableDiffusion-v1-5-Regularization-Images
### Expected behavior
They should not be reported as being pickled.
### Environment info
N/A | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5222/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5222/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5221 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5221/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5221/comments | https://api.github.com/repos/huggingface/datasets/issues/5221/events | https://github.com/huggingface/datasets/issues/5221 | 1,442,309,094 | I_kwDODunzps5V9-Pm | 5,221 | Cannot push | {
"login": "bayartsogt-ya",
"id": 43239645,
"node_id": "MDQ6VXNlcjQzMjM5NjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/43239645?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bayartsogt-ya",
"html_url": "https://github.com/bayartsogt-ya",
"followers_url": "https://api.github.com/users/bayartsogt-ya/followers",
"following_url": "https://api.github.com/users/bayartsogt-ya/following{/other_user}",
"gists_url": "https://api.github.com/users/bayartsogt-ya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bayartsogt-ya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bayartsogt-ya/subscriptions",
"organizations_url": "https://api.github.com/users/bayartsogt-ya/orgs",
"repos_url": "https://api.github.com/users/bayartsogt-ya/repos",
"events_url": "https://api.github.com/users/bayartsogt-ya/events{/privacy}",
"received_events_url": "https://api.github.com/users/bayartsogt-ya/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Did you run `huggingface-cli lfs-enable-largefiles` before committing or before adding ? Maybe you can try before adding\r\n\r\nAnyway I'd encourage you to split your data into several TAR archives if possible, this way the dataset can loaded faster using multiprocessing (by giving each process a subset of shards to process)",
"@lhoestq \r\nThanks for the help!\r\n> Maybe you can try before adding\r\n\r\nIt did not help\r\n\r\nBut I totally got your point about split into multiple TAR archives. It really helped!"
] | 2022-11-09T15:32:05 | 2022-11-10T18:11:21 | 2022-11-10T18:11:11 | NONE | null | ### Describe the bug
I am facing this issue when I try to push a tar.gz file of around 11 GB to the Hub.
```
(venv) ╭─laptop@laptop ~/PersonalProjects/data/ulaanbal_v0 ‹main●›
╰─$ du -sh *
4.0K README.md
13G data
516K test.jsonl
18M train.jsonl
4.0K ulaanbal_v0.py
11G ulaanbal_v0.tar.gz
452K validation.jsonl
(venv) ╭─laptop@laptop~/PersonalProjects/data/ulaanbal_v0 ‹main●›
╰─$ git add ulaanbal_v0.tar.gz && git commit -m 'large version'
(venv) ╭─laptop@laptop ~/PersonalProjects/data/ulaanbal_v0 ‹main●›
╰─$ git push
EOFoading LFS objects: 0% (0/1), 0 B | 0 B/s
Uploading LFS objects: 0% (0/1), 0 B | 0 B/s, done.
error: failed to push some refs to 'https://huggingface.co/datasets/bayartsogt/ulaanbal_v0'
```
I have already tried pushing a small version of this and it worked fine, so my guess is that the problem is the big file.
The following is what I ran before the commit:
```
╰─$ git lfs install
╰─$ huggingface-cli lfs-enable-largefiles .
```
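For completeness, a hedged sketch of the sharding suggested in the comments: write several smaller TAR archives instead of one ~11 GB file (paths and shard size below are hypothetical):
```python
import os
import tarfile

files = sorted(os.listdir("data"))
shard_size = 500  # files per archive (hypothetical)
for start in range(0, len(files), shard_size):
    shard_name = f"ulaanbal_v0-{start // shard_size:05d}.tar.gz"
    with tarfile.open(shard_name, "w:gz") as tar:
        for name in files[start : start + shard_size]:
            tar.add(os.path.join("data", name), arcname=name)
```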
### Steps to reproduce the bug
Create a private dataset on Hugging Face and push a 12 GB tar.gz file.
### Expected behavior
To be pushed with no issue
### Environment info
- `datasets` version: 2.6.1
- Platform: Darwin-21.6.0-x86_64-i386-64bit
- Python version: 3.7.11
- PyArrow version: 10.0.0
- Pandas version: 1.3.5
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5221/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5221/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5220 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5220/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5220/comments | https://api.github.com/repos/huggingface/datasets/issues/5220/events | https://github.com/huggingface/datasets/issues/5220 | 1,441,664,377 | I_kwDODunzps5V7g15 | 5,220 | Implicit type conversion of lists in to_pandas | {
"login": "sanderland",
"id": 48946947,
"node_id": "MDQ6VXNlcjQ4OTQ2OTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/48946947?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanderland",
"html_url": "https://github.com/sanderland",
"followers_url": "https://api.github.com/users/sanderland/followers",
"following_url": "https://api.github.com/users/sanderland/following{/other_user}",
"gists_url": "https://api.github.com/users/sanderland/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanderland/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanderland/subscriptions",
"organizations_url": "https://api.github.com/users/sanderland/orgs",
"repos_url": "https://api.github.com/users/sanderland/repos",
"events_url": "https://api.github.com/users/sanderland/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanderland/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I think this behavior comes from PyArrow:\r\n```python\r\nimport pyarrow as pa\r\nt = pa.table({\"a\": [[0]]})\r\nt.to_pandas().a.values[0]\r\n# array([0])\r\n```\r\n\r\nI believe this has to do with zero-copy: you can get a pandas DataFrame without copying the buffers from arrow, and therefore end up with numpy arrays.",
"That's interesting, I guess not much to do here then."
] | 2022-11-09T08:40:18 | 2022-11-10T16:12:26 | 2022-11-10T16:12:26 | CONTRIBUTOR | null | ### Describe the bug
```
from datasets import Dataset

ds = Dataset.from_list([{'a':[1,2,3]}])
ds.to_pandas().a.values[0]
```
This results in `array([1, 2, 3])` -- a rather unexpected type conversion that breaks downstream tools which expect plain Python lists.
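One possible workaround (building on the snippet above) is to map the numpy arrays back to plain lists after the conversion, though it would be nicer not to need it:
```python
df = ds.to_pandas()
df["a"] = df["a"].apply(list)  # convert numpy arrays back to Python lists
df.a.values[0]  # [1, 2, 3]
```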
### Steps to reproduce the bug
See snippet
### Expected behavior
Keep the original type
### Environment info
datasets 2.6.1
python 3.8.10 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5220/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5220/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5219 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5219/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5219/comments | https://api.github.com/repos/huggingface/datasets/issues/5219/events | https://github.com/huggingface/datasets/issues/5219 | 1,441,255,910 | I_kwDODunzps5V59Hm | 5,219 | Delta Tables usage using Datasets Library | {
"login": "reichenbch",
"id": 23002137,
"node_id": "MDQ6VXNlcjIzMDAyMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/23002137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/reichenbch",
"html_url": "https://github.com/reichenbch",
"followers_url": "https://api.github.com/users/reichenbch/followers",
"following_url": "https://api.github.com/users/reichenbch/following{/other_user}",
"gists_url": "https://api.github.com/users/reichenbch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/reichenbch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/reichenbch/subscriptions",
"organizations_url": "https://api.github.com/users/reichenbch/orgs",
"repos_url": "https://api.github.com/users/reichenbch/repos",
"events_url": "https://api.github.com/users/reichenbch/events{/privacy}",
"received_events_url": "https://api.github.com/users/reichenbch/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi ! Interesting :) Can you provide concrete examples of cases where it can be useful ?",
"Few example blogs and posts that might help on this - \r\n\r\n1. https://hevodata.com/learn/databricks-delta-tables/\r\n2. https://docs.databricks.com/delta/index.html\r\n\r\nBasically, we are looking at utility of Datasets library with Delta Lake Tables.\r\n",
"`datasets` can already read/write from parquet from/to a cloud storage using fsspec, if I understand correctly it's should be possible to load parquet files as delat lake tables no ? :) Or is there someting missing ?",
"@lhoestq Per my understanding, delta lake table is a bunch of paruqet files together with the meta to support ACID. For example file 1 contains v0.1 of record A while file 2 contains v0.2 of record A. I am assuming the Hugging face dataset would delegate the read/write delta table to 3rd party lib, maybe pyarrow. Correct me if I was wrong @reichenbch \r\n\r\nAnd I am assuming, people are asking the versioning of Hugging face datasets. But I am assuming Hugging face delegate this function to github and it is not the key requirement for Public Data set. It actually the key function of ML Ops, I am not sure whether hugging face would like expand to that area."
] | 2022-11-09T02:43:56 | 2023-03-02T19:29:12 | null | NONE | null | ### Feature request
Add compatibility between the Datasets library and the Delta format, elevating the Datasets library from a Machine Learning scope to a Data Engineering scope as well.
### Motivation
The datasets library can already absorb CSV, JSON, Parquet, etc. file formats, but it would be great if it could also work with Delta Tables (the Delta format), which offer features such as time travel, layout optimization, and better query performance that aid Data Engineering.
This would help extend the Datasets library from a Machine Learning utility to a Data Engineering utility and expand its horizons. I use the Datasets library in all my use cases, and as my role expands so does the work; compatibility with the Datasets library is something I don't want to lose.
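For context, the closest workaround I am aware of today is to go through PyArrow using the third-party `deltalake` bindings (a rough sketch; the path is a placeholder and Delta-specific features such as time travel are lost):
```python
from deltalake import DeltaTable  # third-party delta-rs bindings, not part of datasets
from datasets import Dataset

delta_table = DeltaTable("path/to/delta_table")   # placeholder path
ds = Dataset(delta_table.to_pyarrow_table())      # wrap the Arrow table as a Dataset
```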
### Your contribution
I would love to work on this feature, even if it has to be picked up from scratch, including design paradigms and patterns.
I have a basic idea about Delta Live Tables and could brush up on it easily for this feature. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5219/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5219/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5218 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5218/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5218/comments | https://api.github.com/repos/huggingface/datasets/issues/5218/events | https://github.com/huggingface/datasets/issues/5218 | 1,441,254,194 | I_kwDODunzps5V58sy | 5,218 | Delta Tables usage using Datasets Library | {
"login": "rcv-koo",
"id": 103188035,
"node_id": "U_kgDOBiaGQw",
"avatar_url": "https://avatars.githubusercontent.com/u/103188035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rcv-koo",
"html_url": "https://github.com/rcv-koo",
"followers_url": "https://api.github.com/users/rcv-koo/followers",
"following_url": "https://api.github.com/users/rcv-koo/following{/other_user}",
"gists_url": "https://api.github.com/users/rcv-koo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rcv-koo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rcv-koo/subscriptions",
"organizations_url": "https://api.github.com/users/rcv-koo/orgs",
"repos_url": "https://api.github.com/users/rcv-koo/repos",
"events_url": "https://api.github.com/users/rcv-koo/events{/privacy}",
"received_events_url": "https://api.github.com/users/rcv-koo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [] | 2022-11-09T02:42:18 | 2022-11-09T02:42:36 | 2022-11-09T02:42:36 | NONE | null | ### Feature request
Add compatibility between the Datasets library and the Delta format, elevating the Datasets library from a Machine Learning scope to a Data Engineering scope as well.
### Motivation
The datasets library can already absorb CSV, JSON, Parquet, etc. file formats, but it would be great if it could also work with Delta Tables (the Delta format), which offer features such as time travel, layout optimization, and better query performance that aid Data Engineering.
This would help extend the Datasets library from a Machine Learning utility to a Data Engineering utility and expand its horizons. I use the Datasets library in all my use cases, and as my role expands so does the work; compatibility with the Datasets library is something I don't want to lose.
### Your contribution
I would love to work on this feature, even if it has to be picked up from scratch, including design paradigms and patterns.
I have a basic idea about Delta Live Tables and could brush up on it easily for this feature. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5218/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5218/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5217 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5217/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5217/comments | https://api.github.com/repos/huggingface/datasets/issues/5217/events | https://github.com/huggingface/datasets/pull/5217 | 1,441,252,740 | PR_kwDODunzps5CetXs | 5,217 | Reword E2E training and inference tips in the vision guides | {
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-09T02:40:01 | 2022-11-10T01:38:09 | 2022-11-10T01:36:09 | MEMBER | null | Reference: https://github.com/huggingface/datasets/pull/5188#discussion_r1012148730 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5217/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5217/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5217",
"html_url": "https://github.com/huggingface/datasets/pull/5217",
"diff_url": "https://github.com/huggingface/datasets/pull/5217.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5217.patch",
"merged_at": "2022-11-10T01:36:08"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5216 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5216/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5216/comments | https://api.github.com/repos/huggingface/datasets/issues/5216/events | https://github.com/huggingface/datasets/issues/5216 | 1,441,041,947 | I_kwDODunzps5V5I4b | 5,216 | save_elasticsearch_index | {
"login": "amobash2",
"id": 12739718,
"node_id": "MDQ6VXNlcjEyNzM5NzE4",
"avatar_url": "https://avatars.githubusercontent.com/u/12739718?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amobash2",
"html_url": "https://github.com/amobash2",
"followers_url": "https://api.github.com/users/amobash2/followers",
"following_url": "https://api.github.com/users/amobash2/following{/other_user}",
"gists_url": "https://api.github.com/users/amobash2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amobash2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amobash2/subscriptions",
"organizations_url": "https://api.github.com/users/amobash2/orgs",
"repos_url": "https://api.github.com/users/amobash2/repos",
"events_url": "https://api.github.com/users/amobash2/events{/privacy}",
"received_events_url": "https://api.github.com/users/amobash2/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi ! I think there exist tools to dump and reload an index in your elastic search but I'm not super familiar with it.\r\n\r\nAnyway after reloading an index in elastic search you can call `ds.load_elasticsearch_index` which will connect the index to the dataset without re-indexing"
] | 2022-11-08T23:06:52 | 2022-11-09T13:16:45 | null | NONE | null | Hi,
I am new to Datasets and Elasticsearch. I was wondering whether there is any approach, equivalent to save_faiss_index, to save an Elasticsearch index locally for later use, to remove the need to re-index a dataset? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5216/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5216/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5214 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5214/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5214/comments | https://api.github.com/repos/huggingface/datasets/issues/5214/events | https://github.com/huggingface/datasets/pull/5214 | 1,440,334,978 | PR_kwDODunzps5CbmWE | 5,214 | Update github pr docs actions | {
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5214). All of your documentation changes will be reflected on that endpoint."
] | 2022-11-08T14:43:37 | 2022-11-08T15:39:58 | 2022-11-08T15:39:57 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5214/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5214/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5214",
"html_url": "https://github.com/huggingface/datasets/pull/5214",
"diff_url": "https://github.com/huggingface/datasets/pull/5214.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5214.patch",
"merged_at": "2022-11-08T15:39:57"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5213 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5213/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5213/comments | https://api.github.com/repos/huggingface/datasets/issues/5213/events | https://github.com/huggingface/datasets/pull/5213 | 1,440,037,534 | PR_kwDODunzps5CalQ_ | 5,213 | Add support for different configs with `push_to_hub` | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5213). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5213). All of your documentation changes will be reflected on that endpoint.",
"Nice thanks !\r\n\r\nWould it be possible to have the new folders at the same level as \"data\" ? This way they're all separated\r\n```\r\n├─ config-v1/\r\n│ ├── train-00000-00002-...-.parquet\r\n│ └── train-00001-00002-...-.parquet\r\n└ config-v2/\r\n ├── train-00000-00002-...-.parquet\r\n └── train-00001-00002-...-.parquet\r\n```\r\nand if you don't provide a config name, it goes in a folder named \"default\" instead, that would be loaded by default.\r\n\r\nWe could also write in the YAML something like\r\n```yaml\r\nconfigs:\r\n- name: config-v1\r\n data_dir: config-v1\r\n- name: config-v2\r\n data_dir: config-v2\r\n```\r\nand loading `config-v1` would be equivalent to run `load_dataset(ds_name, \"config-v1\", data_dir=\"config-v1\")`\r\n\r\nDo you think it would make sense ?\r\n\r\nFor backward compatibility we can just keep the \"data/*\" pattern. It's ok to expect users to have an updated version of `datasets` to be able to load datasets with configurations.",
"@lhoestq thank you for the feedback! i'll reflect on this on Moday, my mind just melted because of the fever.\r\n\r\n@mariosasko @albertvillanova what do you think?",
"Thanks for addressing this, @polinaeterna. It is good:\r\n- we support configs for datasets without scripts\r\n- we align the behavior to datasets with scripts as much as possible\r\n\r\nMaybe adding some tests will help clarify what is the expected behavior...",
"After some discussion with @lhoestq we decided that it's better to rely on metadata file than on data files patterns. \r\n\r\nSo we decided to introduce a new field to yaml (like `configs` or smth like that) that would contain arbitrary configs kwargs to be passed to loader, including `data_dir` and `data_files`. \r\nThis is more aligned with datasets with custom scripts where we explicitly write all the supported configs and config parameters in the code and is extendable to all packaged modules.\r\nThis would solve https://github.com/huggingface/datasets/issues/5209\r\n\r\n(@lhoestq was right 21 days ago, this is a more general solution idk why i ignored this...)",
"closed in favor of https://github.com/huggingface/datasets/pull/5331"
] | 2022-11-08T11:45:47 | 2022-12-02T16:48:23 | 2022-12-02T16:44:07 | CONTRIBUTOR | null | will solve #5151
@lhoestq @albertvillanova @mariosasko
This is still very much a draft, so please ignore code issues, but I want to discuss some conceptually important things.
I suggest a way to do `.push_to_hub("repo_id", "config_name")` with pushing parquet files to directories named as `config_name` (inside `data/` dir as it is now), for example:
```
data
|__config-v1
train-00000-00002-...-.parquet
train-00001-00002-...-.parquet
...
|__config-v2
....
```
When loading a dataset, I parse these configs from repository data files (only for `"data/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*"` pattern that is used for parquet datasets pushed with `.push_to_hub`).
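From the user's side, the intended flow would look roughly like this (just a sketch, the repo name is a placeholder and the exact API is still up for discussion):
```python
from datasets import Dataset, load_dataset

ds_v1 = Dataset.from_dict({"a": [1, 2, 3]})
ds_v1.push_to_hub("user/repo", "config-v1")   # parquet files go to data/config-v1/

ds = load_dataset("user/repo", "config-v1")   # loads only the files under data/config-v1/
```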
Therefore,
- when a user tries to load a dataset that has configs parsed from data files dir names without providing a config (like `load_dataset("repo")` instead of `load_dataset("repo", "config-v1")`) - raise an error and ask for a config - to be aligned with how it works in datasets with scripts.
- for backward compatibility: if a user tries to `.push_to_hub("repo", "config_name")` to an existing parquet repo with no configurations (all parquet files are directly in the `data/` dir) - raise an error. My initial idea was to raise a warning and move these files to another dir named after a config (like "default" or something) in a PR and suggest the user merge it on the Hub. But there is no support for renaming (moving) files via `HfApi` yet, so it would require deleting and pushing again if I understand it right.
This parsing approach can be extended to other Hub packaged modules, and to local packaged modules and other data files patterns
(except for cases when splits are in dir names `KEYWORDS_IN_DIR_NAME_BASE_PATTERNS` because we allow for arbitrary depth of directory hierarchy).
Do you think this is reasonable? I am not sure how to provide the flexibility (and backward compatibility) of not parsing configs and loading all the data in a single config as it is now.
I also thought about getting information about configs from the README.md `dataset_info` ([example](https://huggingface.co/datasets/polinaeterna/test_push_two_configs/blob/main/README.md)). But that way we are dependent on whether it exists (it is created automatically with `.push_to_hub`, but what if it is accidentally deleted?).
Also, what I don't like is that this parsing is part of the Module/DataFiles logic, not the Builder's, which is not aligned with datasets with custom scripts. But I don't know how to implement the second approach in the current library's logic.
What do you think about all this? Am I missing something?
TODO:
- [ ] save cache in the same dir for configs of the same datasets
- [ ] fix verification errors
- [ ] correctly update `dataset_infos.json` too
- [ ] ...
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5213/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5213/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5213",
"html_url": "https://github.com/huggingface/datasets/pull/5213",
"diff_url": "https://github.com/huggingface/datasets/pull/5213.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5213.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5212 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5212/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5212/comments | https://api.github.com/repos/huggingface/datasets/issues/5212/events | https://github.com/huggingface/datasets/pull/5212 | 1,439,642,483 | PR_kwDODunzps5CZPI2 | 5,212 | Fix CI require_beam maximum compatible dill version | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5212). All of your documentation changes will be reflected on that endpoint."
] | 2022-11-08T07:30:01 | 2022-11-15T06:32:27 | 2022-11-15T06:32:26 | MEMBER | null | A previous commit to main branch introduced an additional requirement on maximum compatible `dill` version with `apache-beam` in our CI `require_beam`:
- d7c942228b8dcf4de64b00a3053dce59b335f618
- ec222b220b79f10c8d7b015769f0999b15959feb
This PR fixes the maximum compatible `dill` version with `apache-beam`, which is <0.3.2 (and not 0.3.6): https://github.com/apache/beam/blob/v2.42.0/sdks/python/setup.py#L219 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5212/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5212/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5212",
"html_url": "https://github.com/huggingface/datasets/pull/5212",
"diff_url": "https://github.com/huggingface/datasets/pull/5212.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5212.patch",
"merged_at": "2022-11-15T06:32:26"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5211 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5211/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5211/comments | https://api.github.com/repos/huggingface/datasets/issues/5211/events | https://github.com/huggingface/datasets/pull/5211 | 1,438,544,617 | PR_kwDODunzps5CVgBx | 5,211 | Update Overview.ipynb google colab | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"WDYT @albertvillanova ?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5211). All of your documentation changes will be reflected on that endpoint."
] | 2022-11-07T15:23:52 | 2022-11-29T15:59:48 | 2022-11-29T15:54:17 | MEMBER | null | - removed metrics stuff
- added image example
- added audio example (with ffmpeg instructions)
- updated the "add a new dataset" section | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5211/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5211/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5211",
"html_url": "https://github.com/huggingface/datasets/pull/5211",
"diff_url": "https://github.com/huggingface/datasets/pull/5211.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5211.patch",
"merged_at": "2022-11-29T15:54:17"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5210 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5210/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5210/comments | https://api.github.com/repos/huggingface/datasets/issues/5210/events | https://github.com/huggingface/datasets/pull/5210 | 1,438,492,507 | PR_kwDODunzps5CVUzx | 5,210 | Tweak readme | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Nit: We should also update the `Disclaimers` section to let the dataset owners know they should use Hub discussions rather than GH issues for removal requests/updates",
"Updated the disclaimers section, thanks !\r\n\r\nDoes it sound good to you @albertvillanova ?"
] | 2022-11-07T14:51:23 | 2022-11-24T11:35:07 | 2022-11-24T11:26:16 | MEMBER | null | Tweaked some paragraphs mentioning the modalities we support + added a paragraph on security | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5210/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5210/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5210",
"html_url": "https://github.com/huggingface/datasets/pull/5210",
"diff_url": "https://github.com/huggingface/datasets/pull/5210.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5210.patch",
"merged_at": "2022-11-24T11:26:16"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5209 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5209/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5209/comments | https://api.github.com/repos/huggingface/datasets/issues/5209/events | https://github.com/huggingface/datasets/issues/5209 | 1,438,367,678 | I_kwDODunzps5Vu7-- | 5,209 | Implement ability to define splits in metadata section of dataset card | {
"login": "merveenoyan",
"id": 53175384,
"node_id": "MDQ6VXNlcjUzMTc1Mzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/merveenoyan",
"html_url": "https://github.com/merveenoyan",
"followers_url": "https://api.github.com/users/merveenoyan/followers",
"following_url": "https://api.github.com/users/merveenoyan/following{/other_user}",
"gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions",
"organizations_url": "https://api.github.com/users/merveenoyan/orgs",
"repos_url": "https://api.github.com/users/merveenoyan/repos",
"events_url": "https://api.github.com/users/merveenoyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/merveenoyan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"@merveenoyan Do you want different files to be splits or configurations?\r\n\r\nFrom [what you specified in `Readme.md`](https://huggingface.co/datasets/inria-soda/tabular-benchmark/commit/fb4575853772c62a20203bdd6cc0202f5db4ce4e) I hypothesize that you want to have 4 **configs** corresponding to directories: `\"clf_cat\", \"clf_num\", \"reg_cat\", \"reg_num\"`. And inside each config you require to have as many splits as there are `csv` files\r\nso if you run \r\n```python\r\nload_dataset(\"inria-soda/tabular-benchmark\", \"clf_cat\", split=\"compass\")\r\n```\r\nyou will generate the data only from `compass.csv` file.\r\nIn this case, running `load_dataset(\"inria-soda/tabular-benchmark\", \"clf_cat\"`) without split parameter will return `DatasetDict` object with `\"KDDCup09_upselling\", \"cat_compass\", \"cat_covertype\", ... \"road_safety\"` keys (which values are splits - `Dataset` objects)\r\n\r\n**or**\r\ndo you want each file to be a separate config? Like:\r\n```python\r\nload_dataset(\"inria-soda/tabular-benchmark\", \"clf_cat_compass\") # returns DatasetDict with a single \"train\" split\r\n```\r\n**or**\r\nmaybe smth completely different? :smile: \r\n\r\nAnyway, now I have an impression that this is probably rather a matter of automatically inferring configs from repository structure rather than providing parameters in metadata yaml.\r\n",
"@polinaeterna I want the latter where you can think of every CSV file as a config, like MNLI from GLUE.",
"@merveenoyan @lhoestq I see two solutions to this case. \r\n1. Parse configurations automatically from directories names. That is, if you have data structure like:\r\n```\r\ntabular-benchmark\r\n └─clf_cat_compass\r\n └─compass.csv\r\n └─clf_cat_cat_covertype\r\n └─covertype.csv\r\n ...\r\n └─reg_cat_house_sales\r\n └─house_sales.csv\r\n```\r\nyou'll get \"clf_cat_compass\", \"clf_cat_cat_covertype\", ... \"reg_cat_house_sales\" configurations that would contain **only files from corresponding directories**. \r\n**\\+** this is a requested change and needed in general and would solve other problems, see https://github.com/huggingface/datasets/issues/4578, would also help with https://github.com/huggingface/datasets/pull/5213 which I'm working on currently\r\n**\\+** would allow users to do just `load_dataset(“inria-soda/tabular-benchmark”, “clf_cat_compass”)`, no `data_files` param required\r\n**\\-** in this specific case it would require restructuring of the data - putting each file in a directory named as a config name (to me personally it doesn't seem to be a big deal) \r\n\r\n2. More or less what we discussed before - add support for manually specifying parameters in the metadata. We can add new metadata yaml field (say, `\"custom_configs_info\"`), so that we can provide smth like:\r\n```yaml\r\n---\r\n...\r\ndataset_info:\r\n ... \r\ncustom_configs_info:\r\n- config_name: reg_cat_house_sales\r\n data_files:\r\n - reg_cat/house_sales.csv\r\n- config_name: clf_cat_compass\r\n data_files:\r\n - clf_cat/compass.csv\r\n...\r\n---\r\n```\r\n**\\+** Would be useful not only for tabular data and not only for `data_files` parameter - any packaged dataset’s viewer can be customized to use specific, non-default parameters. @merveenoyan do you maybe have any other examples/use cases in mind where you want to provide any specific parameters to the viewer? \r\n**\\-** I'm not sure here but assume that it might require changes in interaction with the viewer on the hub side - to parse these configurations, as they not default configurations (not in `BUILDER_CONFIGS` list). cc @severo But probably this can be solved on the `datasets` side too.\r\n\r\nOverall, I would start from implementing the first solution since it's related to what I'm doing now and is super useful for `datasets` in general. And then if we agree that having more flexibility in providing parameters to the viewer is required, I can implement the second one. Let me know what you think :) ",
"> We can add new metadata yaml field (say, \"custom_configs_info\"), so that we can provide smth like:\r\n\r\nLove it ! Some other ideas to name the \"custom_configs_info\" field: \"configs\", \"parameters\", \"config_args\", \"configurations\"\r\n\r\n> it might require changes in interaction with the viewer on the hub side - to parse these configurations, as they not default configurations (not in BUILDER_CONFIGS list)\r\n\r\nIf we update the `get_dataset_config_names()` function in `datasets` in inspect.py we should be fine - that's what the viewer is using\r\n\r\n> Overall, I would start from implementing the first solution since it's related to what I'm doing now and is super useful for datasets in general. And then if we agree that having more flexibility in providing parameters to the viewer is required, I can implement the second one. Let me know what you think :)\r\n\r\nActually I feel like the second solution includes the first use case you mentioned. If you implement the second solution, then users would just have to add a few lines of YAML and their directories would be considered configurations no ? Maybe there's no need to implement two different logics to do the same thing",
"is there any update on this? 🕵🏻",
"@merveenoyan I haven't started working on this yet, working on adding configs to packaged datasets instead: https://github.com/huggingface/datasets/pull/5213 because this both would allow you to solve your issue and is a frequently requested feature.\r\n\r\nadding arbitrary parameters to yaml would be my next task i think!",
"@merveenoyan ignore my comment above, I'm switching to this task now :D",
"I want to be able to create folders in a model."
] | 2022-11-07T13:27:16 | 2022-12-21T13:22:29 | null | CONTRIBUTOR | null | ### Feature request
If you go here: https://huggingface.co/datasets/inria-soda/tabular-benchmark/tree/main you will see a bunch of folders that contain various CSV files. I’d like the dataset viewer to show these files instead of only one dataset like it currently does (and also let people load them as splits instead of loading through `data_files`).
e.g. GLUE has various splits in the viewer, but it's overkill to ask people to implement a loading script, so it would be better to let them define these in the README file instead.
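For reference, right now each file has to be pulled in manually through `data_files`, roughly like this:
```python
from datasets import load_dataset

ds = load_dataset(
    "inria-soda/tabular-benchmark",
    data_files={"train": "clf_cat/compass.csv"},
)
```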
Also pinging @polinaeterna @lhoestq @adrinjalali
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5209/reactions",
"total_count": 3,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5209/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5208 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5208/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5208/comments | https://api.github.com/repos/huggingface/datasets/issues/5208/events | https://github.com/huggingface/datasets/pull/5208 | 1,438,035,707 | PR_kwDODunzps5CTyxu | 5,208 | Refactor CI hub fixtures to use monkeypatch instead of patch | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-07T09:25:05 | 2022-11-08T06:51:20 | 2022-11-08T06:49:17 | MEMBER | null | Minor refactoring of CI to use `pytest` `monkeypatch` instead of `unittest` `patch`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5208/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5208/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5208",
"html_url": "https://github.com/huggingface/datasets/pull/5208",
"diff_url": "https://github.com/huggingface/datasets/pull/5208.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5208.patch",
"merged_at": "2022-11-08T06:49:17"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5207 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5207/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5207/comments | https://api.github.com/repos/huggingface/datasets/issues/5207/events | https://github.com/huggingface/datasets/issues/5207 | 1,437,858,506 | I_kwDODunzps5Vs_rK | 5,207 | Connection error of the HuggingFace's dataset Hub due to SSLError with proxy | {
"login": "leemgs",
"id": 82404,
"node_id": "MDQ6VXNlcjgyNDA0",
"avatar_url": "https://avatars.githubusercontent.com/u/82404?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leemgs",
"html_url": "https://github.com/leemgs",
"followers_url": "https://api.github.com/users/leemgs/followers",
"following_url": "https://api.github.com/users/leemgs/following{/other_user}",
"gists_url": "https://api.github.com/users/leemgs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leemgs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leemgs/subscriptions",
"organizations_url": "https://api.github.com/users/leemgs/orgs",
"repos_url": "https://api.github.com/users/leemgs/repos",
"events_url": "https://api.github.com/users/leemgs/events{/privacy}",
"received_events_url": "https://api.github.com/users/leemgs/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi ! It looks like an issue with your python environment, can you make sure you're able to run GET requests to https://huggingface.co using `requests` in python ?",
"\r\nThanks for your reply. Does this mean that I have to use the `do_dataset `function and the `requests `function to download the dataset from the company's proxy environment?\r\n\r\n\r\n* Reference: \r\n```\r\n### How to load this dataset directly with the [datasets](https://github.com/huggingface/datasets) library\r\n\r\n\r\n* https://huggingface.co/datasets/moyix/debian_csrc\r\n\r\n\r\n* \r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"moyix/debian_csrc\")\r\n\r\n\r\n\r\n### Or just clone the dataset repo\r\n\r\n\r\ngit lfs install\r\ngit clone https://huggingface.co/datasets/moyix/debian_csrc\r\n# if you want to clone without large files – just their pointers\r\n# prepend your git clone with the following env var:\r\nGIT_LFS_SKIP_SMUDGE=1\r\n```",
"You can use `requests` to see if downloading a file from the Hugging Face Hub works. If so, then `datasets` should work as well. If not, then you have to find another way using an internet connection that works"
] | 2022-11-07T06:56:23 | 2022-11-12T15:31:58 | null | NONE | null | ### Describe the bug
It's weird. I cannot connect to the HuggingFace dataset Hub due to an SSLError in my office.
Even when I try to connect using my company's proxy address (e.g., http_proxy and https_proxy),
I'm still getting the SSLError. What should I do to download the dataset stored on HuggingFace normally?
I welcome any comments; they will be very helpful to me.
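As a basic connectivity check, one can verify whether a plain `requests` call reaches the Hub through the proxy (the proxy URL below is just a placeholder for the company proxy):
```python
import requests

proxies = {
    "http": "http://proxy.example.com:8080",
    "https": "http://proxy.example.com:8080",
}
response = requests.get("https://huggingface.co", proxies=proxies)
print(response.status_code)
```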
* Dataset address - https://huggingface.co/datasets/moyix/debian_csrc/viewer/moyix--debian_csrc
* Log message
```
............ OMISSION ..............
Traceback (most recent call last):
File "/data/home/geunsik-lim/qtlab/./transformers/examples/pytorch/language-modeling/run_clm.py", line 587, in <module>
main()
File "/data/home/geunsik-lim/qtlab/./transformers/examples/pytorch/language-modeling/run_clm.py", line 278, in main
raw_datasets = load_dataset(
File "/home/geunsik-lim/anaconda3/envs/deepspeed/lib/python3.10/site-packages/datasets/load.py", line 1719, in load_dataset
builder_instance = load_dataset_builder(
File "/home/geunsik-lim/anaconda3/envs/deepspeed/lib/python3.10/site-packages/datasets/load.py", line 1497, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/geunsik-lim/anaconda3/envs/deepspeed/lib/python3.10/site-packages/datasets/load.py", line 1222, in dataset_module_factory
raise e1 from None
File "/home/geunsik-lim/anaconda3/envs/deepspeed/lib/python3.10/site-packages/datasets/load.py", line 1179, in dataset_module_factory
raise ConnectionError(f"Couldn't reach '{path}' on the Hub ({type(e).__name__})")
ConnectionError: Couldn't reach 'moyix/debian_csrc' on the Hub (SSLError)
[2022-11-07 15:23:38,476] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 6760
[2022-11-07 15:23:38,476] [ERROR] [launch.py:324:sigkill_handler] ['/home/geunsik-lim/anaconda3/envs/deepspeed/bin/python', '-u', './transformers/examples/pytorch/language-modeling/run_clm.py', '--local_rank=0', '--model_name_or_path=Salesforce/codegen-350M-multi', '--per_device_train_batch_size=1', '--learning_rate', '2e-5', '--num_train_epochs', '1', '--output_dir=./codegen-350M-finetuned', '--overwrite_output_dir', '--dataset_name', 'moyix/debian_csrc', '--cache_dir', '/data/home/geunsik-lim/.cache', '--tokenizer_name', 'Salesforce/codegen-350M-multi', '--block_size', '2048', '--gradient_accumulation_steps', '32', '--do_train', '--fp16', '--deepspeed', 'ds_config_zero2.json'] exits with return code = 1
real 0m7.742s
user 0m4.930s
```
### Steps to reproduce the bug
Steps to reproduce this behavior.
```
(deepspeed) geunsik-lim@ai02:~/qtlab$ ./test_debian_csrc_dataset.py
Traceback (most recent call last):
File "/data/home/geunsik-lim/qtlab/./test_debian_csrc_dataset.py", line 6, in <module>
dataset = load_dataset("moyix/debian_csrc")
File "/home/geunsik-lim/anaconda3/envs/deepspeed/lib/python3.10/site-packages/datasets/load.py", line 1719, in load_dataset
builder_instance = load_dataset_builder(
File "/home/geunsik-lim/anaconda3/envs/deepspeed/lib/python3.10/site-packages/datasets/load.py", line 1497, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/geunsik-lim/anaconda3/envs/deepspeed/lib/python3.10/site-packages/datasets/load.py", line 1222, in dataset_module_factory
raise e1 from None
File "/home/geunsik-lim/anaconda3/envs/deepspeed/lib/python3.10/site-packages/datasets/load.py", line 1179, in dataset_module_factory
raise ConnectionError(f"Couldn't reach '{path}' on the Hub ({type(e).__name__})")
ConnectionError: Couldn't reach 'moyix/debian_csrc' on the Hub (SSLError)
(deepspeed) geunsik-lim@ai02:~/qtlab$
(deepspeed) geunsik-lim@ai02:~/qtlab$
(deepspeed) geunsik-lim@ai02:~/qtlab$
(deepspeed) geunsik-lim@ai02:~/qtlab$ cat ./test_debian_csrc_dataset.py
#!/usr/bin/env python
from datasets import load_dataset
dataset = load_dataset("moyix/debian_csrc")
```
1. Add the company's proxy address in /etc/profile
2. Download the dataset with the load_dataset() function of the datasets package provided by HuggingFace.
3. In this case, the address would be "moyix--debian_csrc".
4. I get the "`ConnectionError: Couldn't reach 'moyix/debian_csrc' on the Hub (SSLError`)" error message.
### Expected behavior
* error message:
ConnectionError: Couldn't reach 'moyix/debian_csrc' on the Hub (SSLError)
### Environment info
* software version information:
```
(deepspeed) geunsik-lim@ai02:~$
(deepspeed) geunsik-lim@ai02:~$ conda list -f pytorch
# packages in environment at /home/geunsik-lim/anaconda3/envs/deepspeed:
#
# Name Version Build Channel
pytorch 1.13.0 py3.10_cuda11.7_cudnn8.5.0_0 pytorch
(deepspeed) geunsik-lim@ai02:~$ conda list -f python
# packages in environment at /home/geunsik-lim/anaconda3/envs/deepspeed:
#
# Name Version Build Channel
python 3.10.6 haa1d7c7_1
(deepspeed) geunsik-lim@ai02:~$ conda list -f datasets
# packages in environment at /home/geunsik-lim/anaconda3/envs/deepspeed:
#
# Name Version Build Channel
datasets 2.6.1 py_0 huggingface
(deepspeed) geunsik-lim@ai02:~$ uname -a
Linux ai02 5.4.0-131-generic #147-Ubuntu SMP Fri Oct 14 17:07:22 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
(deepspeed) geunsik-lim@ai02:~$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.5 LTS"
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5207/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5207/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5206 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5206/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5206/comments | https://api.github.com/repos/huggingface/datasets/issues/5206/events | https://github.com/huggingface/datasets/issues/5206 | 1,437,223,894 | I_kwDODunzps5VqkvW | 5,206 | Use logging instead of printing to console | {
"login": "bilelomrani1",
"id": 16692099,
"node_id": "MDQ6VXNlcjE2NjkyMDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/16692099?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bilelomrani1",
"html_url": "https://github.com/bilelomrani1",
"followers_url": "https://api.github.com/users/bilelomrani1/followers",
"following_url": "https://api.github.com/users/bilelomrani1/following{/other_user}",
"gists_url": "https://api.github.com/users/bilelomrani1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bilelomrani1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bilelomrani1/subscriptions",
"organizations_url": "https://api.github.com/users/bilelomrani1/orgs",
"repos_url": "https://api.github.com/users/bilelomrani1/repos",
"events_url": "https://api.github.com/users/bilelomrani1/events{/privacy}",
"received_events_url": "https://api.github.com/users/bilelomrani1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Actually upon closer inspection, it is documented in the code that this behavior is intentional, so I'll close this."
] | 2022-11-05T23:48:02 | 2022-11-06T00:06:00 | 2022-11-06T00:05:59 | NONE | null | ### Describe the bug
Some logs ([here](https://github.com/huggingface/datasets/blob/4a6e1fe2735505efc7e3a3dbd3e1835da0702575/src/datasets/builder.py#L778), [here](https://github.com/huggingface/datasets/blob/4a6e1fe2735505efc7e3a3dbd3e1835da0702575/src/datasets/builder.py#L786), and [here](https://github.com/huggingface/datasets/blob/4a6e1fe2735505efc7e3a3dbd3e1835da0702575/src/datasets/builder.py#L830)) generated by the `DatasetBuilder` are printed to the console instead of being passed to the `datasets` logger.
### Steps to reproduce the bug
```python
>> import datasets
>> datasets.load_dataset("some-dataset")
Downloading and preparing dataset csv/data to <path>...
Downloading data files: 100%|██████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 7729.06it/s]
Extracting data files: 100%|████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 527.23it/s]
Dataset csv downloaded and prepared to <path>. Subsequent calls will reuse this data.
```
### Expected behavior
The logs should not be printed to the console directly but passed to the logger so that users can redirect them wherever they want.
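For example, emitting these messages through the library's own logger (a rough sketch of what this could look like) would let users control them with the standard logging configuration:
```python
from datasets.utils.logging import get_logger

logger = get_logger(__name__)
logger.info("Dataset csv downloaded and prepared. Subsequent calls will reuse this data.")
```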
### Environment info
- `datasets` version: 2.6.1
- Platform: macOS-13.0-x86_64-i386-64bit
- Python version: 3.9.15
- PyArrow version: 10.0.0
- Pandas version: 1.5.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5206/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5206/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5205 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5205/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5205/comments | https://api.github.com/repos/huggingface/datasets/issues/5205/events | https://github.com/huggingface/datasets/pull/5205 | 1,437,221,987 | PR_kwDODunzps5CRO33 | 5,205 | Add missing `DownloadConfig.use_auth_token` value | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-05T23:36:36 | 2022-11-08T08:13:00 | 2022-11-07T16:20:24 | CONTRIBUTOR | null | This PR solves https://github.com/huggingface/datasets/issues/5204
Now the `token` is propagated so that the `DownloadConfig.use_auth_token` value is set before trying to download private files from existing datasets on the Hub.
"url": "https://api.github.com/repos/huggingface/datasets/issues/5205/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5205/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5205",
"html_url": "https://github.com/huggingface/datasets/pull/5205",
"diff_url": "https://github.com/huggingface/datasets/pull/5205.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5205.patch",
"merged_at": "2022-11-07T16:20:24"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5204 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5204/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5204/comments | https://api.github.com/repos/huggingface/datasets/issues/5204/events | https://github.com/huggingface/datasets/issues/5204 | 1,437,221,259 | I_kwDODunzps5VqkGL | 5,204 | `push_to_hub` not propagating `token` through `DownloadConfig` | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"#self-assign",
"@lhoestq can you close this issue as part of the recent #5205 merge? Thanks 🤗 ",
"Thank you :)"
] | 2022-11-05T23:32:20 | 2022-11-08T10:12:09 | 2022-11-08T10:12:08 | CONTRIBUTOR | null | ### Describe the bug
When trying to upload a new 🤗 Dataset to the Hub via Python and providing the `token` as a parameter to the `Dataset.push_to_hub` function, it only works the first time, i.e. when the dataset didn't exist before.
But when running `Dataset.push_to_hub` again over the same dataset to update it, it throws a `ConnectionError` while trying to retrieve the `README.md` that may contain some metadata to update as well. Since the `token` is not propagated, the `DownloadConfig` provided to the `datasets.utils.file_utils.get_from_cache` function doesn't have `use_auth_token` set to `token`; it just uses the default value, which is None/False.
In short, when uploading a dataset via Python with `push_to_hub` and passing the HuggingFace API token as the `token` parameter, the upload only succeeds when the dataset is new; otherwise it fails with a `ConnectionError` because the `token` is not propagated as `use_auth_token`.
### Steps to reproduce the bug
Let's create a new dataset in our HF account via Python as:
```python
from datasets import Dataset
data = {"a": [1, 2, 3], "b": [4, 5, 6]}
ds = Dataset.from_dict(data)
ds.push_to_hub(repo_id=<HF_USERNAME>/<HF_DATASET>, private=private, token=<HF_TOKEN_HERE>)
```
When we create the `Dataset` for the first time it works and there are no issues, but when trying to actually upload a new version of the same dataset (same name under the same username), we encounter the following issue:
```python
from datasets import Dataset
data = {"a": [1, 2, 3], "b": [4, 5, 6]}
ds = Dataset.from_dict(data)
ds.push_to_hub(repo_id=<HF_USERNAME>/<HF_DATASET>, private=private, token=<HF_TOKEN_HERE>)
>>> ConnectionError: Couldn't reach https://huggingface.co/datasets/alvarobartt/demo/resolve/main/README.md (ConnectionError('Unauthorized for URL https://huggingface.co/datasets/<HF_USERNAME>/<HF_DATASET>/resolve/main/README.md. Please use the parameter `use_auth_token=True` after logging in with `huggingface-cli login`'))
```
### Expected behavior
Ideally, the `token` parameter provided to `push_to_hub` should be propagated and used to download the `README.md` when trying to update a `Dataset`, instead of throwing that exception, so that the authentication can be done directly through code without running `huggingface-cli login`, as mentioned at https://huggingface.co/docs/datasets/upload_dataset#upload-with-python.
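For illustration, a hedged sketch of the kind of call the library could make internally once the token is forwarded (the URL placeholders and the token value are hypothetical, matching the ones above):
```python
from datasets import DownloadConfig
from datasets.utils.file_utils import cached_path

# Forward the token received by push_to_hub to the dataset card download:
download_config = DownloadConfig(use_auth_token="<HF_TOKEN_HERE>")
readme_path = cached_path(
    "https://huggingface.co/datasets/<HF_USERNAME>/<HF_DATASET>/resolve/main/README.md",
    download_config=download_config,
)
```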
### Environment info
- `datasets` version: 2.6.1
- Platform: macOS-13.0-arm64-arm-64bit
- Python version: 3.10.8
- PyArrow version: 10.0.0
- Pandas version: 1.5.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5204/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5204/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5203 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5203/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5203/comments | https://api.github.com/repos/huggingface/datasets/issues/5203/events | https://github.com/huggingface/datasets/pull/5203 | 1,436,710,518 | PR_kwDODunzps5CPlnW | 5,203 | Update canonical links to Hub links | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-04T22:50:50 | 2022-11-07T18:43:05 | 2022-11-07T18:40:19 | MEMBER | null | This PR updates some of the canonical dataset links to their corresponding links on the Hub; closes #5200. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5203/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5203/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5203",
"html_url": "https://github.com/huggingface/datasets/pull/5203",
"diff_url": "https://github.com/huggingface/datasets/pull/5203.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5203.patch",
"merged_at": "2022-11-07T18:40:19"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5202 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5202/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5202/comments | https://api.github.com/repos/huggingface/datasets/issues/5202/events | https://github.com/huggingface/datasets/issues/5202 | 1,435,886,090 | I_kwDODunzps5VleIK | 5,202 | CI fails after bulk edit of canonical datasets | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Fixed by: https://huggingface.co/datasets/paws/discussions/1"
] | 2022-11-04T10:51:20 | 2023-02-16T09:11:10 | 2023-02-16T09:11:10 | MEMBER | null | ```
______ test_get_dataset_config_info[paws-labeled_final-expected_splits2] _______
[gw0] linux -- Python 3.7.15 /opt/hostedtoolcache/Python/3.7.15/x64/bin/python
path = 'paws', config_name = 'labeled_final'
expected_splits = ['train', 'test', 'validation']
@pytest.mark.parametrize(
"path, config_name, expected_splits",
[
("squad", "plain_text", ["train", "validation"]),
("dalle-mini/wit", "dalle-mini--wit", ["train"]),
("paws", "labeled_final", ["train", "test", "validation"]),
],
)
def test_get_dataset_config_info(path, config_name, expected_splits):
info = get_dataset_config_info(path, config_name=config_name)
assert info.config_name == config_name
> assert list(info.splits.keys()) == expected_splits
E AssertionError: assert ['test', 'tra... 'validation'] == ['train', 'te... 'validation']
E At index 0 diff: 'test' != 'train'
E Full diff:
E - ['train', 'test', 'validation']
E + ['test', 'train', 'validation']
tests/test_inspect.py:45: AssertionError
_ test_get_dataset_info[paws-expected_configs2-expected_splits_in_first_config2] _
[gw0] linux -- Python 3.7.15 /opt/hostedtoolcache/Python/3.7.15/x64/bin/python
path = 'paws'
expected_configs = ['labeled_final', 'labeled_swap', 'unlabeled_final']
expected_splits_in_first_config = ['train', 'test', 'validation']
@pytest.mark.parametrize(
"path, expected_configs, expected_splits_in_first_config",
[
("squad", ["plain_text"], ["train", "validation"]),
("dalle-mini/wit", ["dalle-mini--wit"], ["train"]),
("paws", ["labeled_final", "labeled_swap", "unlabeled_final"], ["train", "test", "validation"]),
],
)
def test_get_dataset_info(path, expected_configs, expected_splits_in_first_config):
infos = get_dataset_infos(path)
assert list(infos.keys()) == expected_configs
expected_config = expected_configs[0]
assert expected_config in infos
info = infos[expected_config]
assert info.config_name == expected_config
> assert list(info.splits.keys()) == expected_splits_in_first_config
E AssertionError: assert ['test', 'tra... 'validation'] == ['train', 'te... 'validation']
E At index 0 diff: 'test' != 'train'
E Full diff:
E - ['train', 'test', 'validation']
E + ['test', 'train', 'validation']
tests/test_inspect.py:90: AssertionError
______ test_get_dataset_split_names[paws-labeled_final-expected_splits2] _______
[gw0] linux -- Python 3.7.15 /opt/hostedtoolcache/Python/3.7.15/x64/bin/python
path = 'paws', expected_config = 'labeled_final'
expected_splits = ['train', 'test', 'validation']
@pytest.mark.parametrize(
"path, expected_config, expected_splits",
[
("squad", "plain_text", ["train", "validation"]),
("dalle-mini/wit", "dalle-mini--wit", ["train"]),
("paws", "labeled_final", ["train", "test", "validation"]),
],
)
def test_get_dataset_split_names(path, expected_config, expected_splits):
infos = get_dataset_infos(path)
assert expected_config in infos
info = infos[expected_config]
assert info.config_name == expected_config
> assert list(info.splits.keys()) == expected_splits
E AssertionError: assert ['test', 'tra... 'validation'] == ['train', 'te... 'validation']
E At index 0 diff: 'test' != 'train'
E Full diff:
E - ['train', 'test', 'validation']
E + ['test', 'train', 'validation']
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5202/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5202/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5201 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5201/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5201/comments | https://api.github.com/repos/huggingface/datasets/issues/5201/events | https://github.com/huggingface/datasets/pull/5201 | 1,435,881,554 | PR_kwDODunzps5CM0zn | 5,201 | Do not sort splits in dataset info | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"It would be coherent with https://github.com/huggingface/datasets-server/issues/614#issuecomment-1290534153",
"I think we started working on this issue nearly at the same time... :sweat_smile: \r\n- CI was fixed with this: https://huggingface.co/datasets/paws/discussions/1\r\n\r\nRelated issue:\r\n- #5202",
"@albertvillanova yeah I noticed it right after the PR :smile: thank you! the fix of the dataset info yaml fixes tests on CI, but in general order of splits in yaml influences the order in which they are displayed in the viewer, if I understand it correctly. So I suggest not to sort splits in yaml initially to avoid this for other datasets in the future. I think [this change](https://github.com/huggingface/datasets/pull/5201/files#diff-198ba4fdf2f94cb3e1aba8a0170a43b08d4ab5636d682374321c5a383a8be24dR571) should work for it. \r\n\r\nChanges to tests here maybe can be reverted considering that order in yaml now corresponds to the one in tests, thanks to your change in the dataset info.",
"Hehe, @polinaeterna, we make comments nearly at the same time as well... :laughing: "
] | 2022-11-04T10:47:21 | 2022-11-04T14:47:37 | 2022-11-04T14:45:09 | CONTRIBUTOR | null | I suggest not sorting splits by name in `dataset_info` in the README so that they are displayed in the order specified in the loading script. Otherwise the `test` split is displayed first, see this repo: https://huggingface.co/datasets/paws
What do you think?
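For illustration only (a hedged sketch; the dataset and the expected order are taken from the CI tests mentioned below), keeping the loading-script order would mean:
```python
from datasets import get_dataset_split_names

# With splits kept in loading-script order, `train` comes first:
print(get_dataset_split_names("paws", "labeled_final"))
# ['train', 'test', 'validation'] rather than the alphabetical ['test', 'train', 'validation']
```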
But I added sorting in tests to fix CI (for the same dataset). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5201/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5201/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5201",
"html_url": "https://github.com/huggingface/datasets/pull/5201",
"diff_url": "https://github.com/huggingface/datasets/pull/5201.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5201.patch",
"merged_at": "2022-11-04T14:45:09"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5200 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5200/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5200/comments | https://api.github.com/repos/huggingface/datasets/issues/5200/events | https://github.com/huggingface/datasets/issues/5200 | 1,435,831,559 | I_kwDODunzps5VlQ0H | 5,200 | Some links to canonical datasets in the docs are outdated | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"Thanks for catching this, I can go through the docs and replace the links to their corresponding datasets on the Hub!"
] | 2022-11-04T10:06:21 | 2022-11-07T18:40:20 | 2022-11-07T18:40:20 | CONTRIBUTOR | null | As we don't have canonical datasets in the GitHub repo anymore, some old links to them don't work. I don't know how many of them there are; I found a link to SuperGlue here: https://huggingface.co/docs/datasets/dataset_script#multiple-configurations, and there are probably more. These links should be replaced by links to the corresponding datasets on the Hub. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5200/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5200/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5199 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5199/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5199/comments | https://api.github.com/repos/huggingface/datasets/issues/5199/events | https://github.com/huggingface/datasets/pull/5199 | 1,434,818,836 | PR_kwDODunzps5CJSv1 | 5,199 | Deprecate dummy data generation command | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-03T15:05:54 | 2022-11-04T14:01:50 | 2022-11-04T13:59:47 | CONTRIBUTOR | null | Deprecate the `dummy_data` CLI command. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5199/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5199/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5199",
"html_url": "https://github.com/huggingface/datasets/pull/5199",
"diff_url": "https://github.com/huggingface/datasets/pull/5199.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5199.patch",
"merged_at": "2022-11-04T13:59:47"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5198 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5198/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5198/comments | https://api.github.com/repos/huggingface/datasets/issues/5198/events | https://github.com/huggingface/datasets/pull/5198 | 1,434,699,165 | PR_kwDODunzps5CI49J | 5,198 | Add note about the name of a dataset script | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-03T13:51:32 | 2022-11-04T12:47:59 | 2022-11-04T12:46:01 | CONTRIBUTOR | null | Add note that a dataset script should has the same name as a repo/dir, a bit related to this issue https://github.com/huggingface/datasets/issues/5193
also fixed two minor issues in audio docs (broken links) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5198/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5198/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5198",
"html_url": "https://github.com/huggingface/datasets/pull/5198",
"diff_url": "https://github.com/huggingface/datasets/pull/5198.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5198.patch",
"merged_at": "2022-11-04T12:46:01"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5197 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5197/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5197/comments | https://api.github.com/repos/huggingface/datasets/issues/5197/events | https://github.com/huggingface/datasets/pull/5197 | 1,434,676,150 | PR_kwDODunzps5CI0Ac | 5,197 | [zstd] Use max window log size | {
"login": "reyoung",
"id": 728699,
"node_id": "MDQ6VXNlcjcyODY5OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/728699?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/reyoung",
"html_url": "https://github.com/reyoung",
"followers_url": "https://api.github.com/users/reyoung/followers",
"following_url": "https://api.github.com/users/reyoung/following{/other_user}",
"gists_url": "https://api.github.com/users/reyoung/gists{/gist_id}",
"starred_url": "https://api.github.com/users/reyoung/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/reyoung/subscriptions",
"organizations_url": "https://api.github.com/users/reyoung/orgs",
"repos_url": "https://api.github.com/users/reyoung/repos",
"events_url": "https://api.github.com/users/reyoung/events{/privacy}",
"received_events_url": "https://api.github.com/users/reyoung/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"@albertvillanova Please take a review.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5197). All of your documentation changes will be reflected on that endpoint."
] | 2022-11-03T13:35:58 | 2022-11-03T13:45:19 | null | NONE | null | ZstdDecompressor has a `max_window_size` parameter to limit the maximum memory usage when decompressing zstd files. The default `max_window_size` is not enough when files are compressed with the `zstd --ultra` flag.
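For illustration, a minimal decompression sketch with this setting (the archive name is hypothetical; note that `max_window_size` is expressed in bytes, while `WINDOWLOG_MAX` is its log2):
```python
import zstandard as zstd

# Accept frames produced with `zstd --ultra --long=...` by allowing the
# largest window zstd supports (2 ** WINDOWLOG_MAX bytes):
dctx = zstd.ZstdDecompressor(max_window_size=2 ** zstd.WINDOWLOG_MAX)
with open("data.jsonl.zst", "rb") as src, open("data.jsonl", "wb") as dst:
    dctx.copy_stream(src, dst)
```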
Change `max_window_size` to zstd's maximum window size. Note that `zstd.WINDOWLOG_MAX` is the log2 of the maximum window size. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5197/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5197/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5197",
"html_url": "https://github.com/huggingface/datasets/pull/5197",
"diff_url": "https://github.com/huggingface/datasets/pull/5197.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5197.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5196 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5196/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5196/comments | https://api.github.com/repos/huggingface/datasets/issues/5196/events | https://github.com/huggingface/datasets/pull/5196 | 1,434,401,646 | PR_kwDODunzps5CH439 | 5,196 | Use hfh hf_hub_url function | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5196). All of your documentation changes will be reflected on that endpoint.",
"@lhoestq I think we should first agree if `datasets` can introduce the breaking change of ignoring `config.HUB_DATASETS_URL`: some users may have override this.\r\n\r\nIf so, I then would suggest to initiate a deprecation cycle.",
"After a discussion with the rest of the datasets team, we agreed we can introduce the breaking change of ignoring `config.HUB_DATASETS_URL`: this will have minimal impact, only for **private Hubs**. We will address eventual possible impacts in the future.\r\n\r\nAdditionally, we also ignore `config.HUB_DEFAULT_VERSION`.\r\n\r\nSee explanation in this PR description: https://github.com/huggingface/datasets/pull/5196#issue-1434401646",
"I'm trying to upgrade datasets to 2.7.0 in https://github.com/huggingface/datasets-server, and the tests fail due to this change. I think it's a breaking change (that was not listed in https://github.com/huggingface/datasets/releases/tag/2.7.0) since code that previously worked (by setting `datasets.config.HUB_DATASETS_URL = CI_HUB_DATASETS_URL` for example) does not work anymore.\r\n\r\nI'm not sure what is the correct way to set up the tests; besides setting the env var \"HF_ENDPOINT\" before launching the tests (which, I think, is not a good way to do: the tests should not depend on the environment).",
"OK, I re-read this thread, and https://github.com/huggingface/datasets/pull/5196#issuecomment-1307430175 explicitely states that `config.HUB_DATASETS_URL` (as well as `config.HUB_DEFAULT_VERSION`) is now ignored. I was expecting the breaking changes to be listed in the release notes: https://github.com/huggingface/datasets/releases/tag/2.7.0.",
"> I'm not sure what is the correct way to set up the tests; besides setting the env var \"HF_ENDPOINT\" before launching the tests (which, I think, is not a good way to do: the tests should not depend on the environment).\r\n\r\nI think the current workaround of settings an env variable before launching the tests is \"not so bad\" when considering the fact that env variables are evaluated at import time in `huggingface_hub` (and most probable `datasets` as well). I think that when refactoring this in huggingface_hub (https://github.com/huggingface/huggingface_hub/issues/1172) I'll opt for instantiating a `Settings` object (or `Constants`) that contains all the settings variables. This way it will not be possible to import attributes individually + tests would be easier. As I see it, it would be similar to [what `Pydantic` does](https://pydantic-docs.helpmanual.io/usage/settings/) even though we most probably don't want Pydantic as a root dependency just for that. ",
"You can use fixtures in your tests:\r\n```python\r\nCI_HUB_ENDPOINT = \"https://hub-ci.huggingface.co\"\r\nCI_HUB_DATASETS_URL = CI_HUB_ENDPOINT + \"/datasets/{repo_id}/resolve/{revision}/{path}\"\r\nCI_HFH_HUGGINGFACE_CO_URL_TEMPLATE = CI_HUB_ENDPOINT + \"/{repo_id}/resolve/{revision}/{filename}\"\r\n\r\n@pytest.fixture\r\ndef ci_hfh_hf_hub_url(monkeypatch):\r\n monkeypatch.setattr(\r\n \"huggingface_hub.file_download.HUGGINGFACE_CO_URL_TEMPLATE\", CI_HFH_HUGGINGFACE_CO_URL_TEMPLATE\r\n )\r\n\r\n@pytest.fixture\r\ndef ci_hub_config(monkeypatch):\r\n monkeypatch.setattr(\"datasets.config.HF_ENDPOINT\", CI_HUB_ENDPOINT)\r\n monkeypatch.setattr(\"datasets.config.HUB_DATASETS_URL\", CI_HUB_DATASETS_URL)\r\n```\r\n\r\nand use `@pytest.fixture(autouse=True)` if you want to always use the CI endpoints.\r\n\r\nAnd when `huggingface-hub` and `datasets` change the way we can set the endpoint, we'll just need to update the fixtures.\r\nI think ultimately you'll only have to change the `huggingface-hub` endpoint settings\r\n",
"OK.\r\n\r\nIn fact, in datasets-server we set `config.HUB_DATASETS_URL` (https://github.com/huggingface/datasets-server/blob/35a30dbcd687b26db1f02502ea8305f70c064473/workers/splits/src/splits/config.py#L26) at config time, before starting the workers. It's not an issue with how to launch the tests, but with the app in itself.\r\n\r\nI understand that for now, the only way to fix this is to setup `HF_ENDPOINT` in the env when launching the app (currently, we set the endpoint with `COMMON_HF_ENDPOINT`, a custom env var I set to be sure not to have side-effects)",
"> You can use fixtures in your tests:\r\n\r\nThanks, used in https://github.com/huggingface/datasets-server/pull/644."
] | 2022-11-03T10:08:09 | 2022-12-06T11:38:17 | 2022-11-09T07:15:12 | MEMBER | null | Small refactoring to use `hf_hub_url` function from `huggingface_hub`.
This PR also creates the `hub` module that will contain all `huggingface_hub` functionalities relevant to `datasets`.
This is a necessary stage before implementing the use of the `hfh` caching system (which uses its `hf_hub_url` under the hood).
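For illustration, a hedged sketch of resolving a file URL with `hfh`'s `hf_hub_url` (the repo and file names are hypothetical):
```python
from huggingface_hub import hf_hub_url

url = hf_hub_url(
    repo_id="user/my-dataset",
    filename="data/train.csv",
    repo_type="dataset",
    revision="main",
)
# -> https://huggingface.co/datasets/user/my-dataset/resolve/main/data/train.csv
```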
EDIT:
~~Finally, we use our `config.HUB_DATASETS_URL` when using `hfh.hf_hub_url`~~
There is a breaking change: the `hfh` `hf_hub_url` function uses
- `hfh` `HUGGINGFACE_CO_URL_TEMPLATE` URL template, different from the `datasets` `config.HUB_DATASETS_URL`
- also, `hfh` `DEFAULT_REVISION`, instead of `datasets` `config.HUB_DEFAULT_VERSION` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5196/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5196/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5196",
"html_url": "https://github.com/huggingface/datasets/pull/5196",
"diff_url": "https://github.com/huggingface/datasets/pull/5196.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5196.patch",
"merged_at": "2022-11-09T07:15:12"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5195 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5195/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5195/comments | https://api.github.com/repos/huggingface/datasets/issues/5195/events | https://github.com/huggingface/datasets/pull/5195 | 1,434,290,689 | PR_kwDODunzps5CHhF2 | 5,195 | [wip testing docs] | {
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5195). All of your documentation changes will be reflected on that endpoint."
] | 2022-11-03T08:37:34 | 2023-04-04T15:10:37 | 2023-04-04T15:10:33 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5195/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5195/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5195",
"html_url": "https://github.com/huggingface/datasets/pull/5195",
"diff_url": "https://github.com/huggingface/datasets/pull/5195.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5195.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5194 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5194/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5194/comments | https://api.github.com/repos/huggingface/datasets/issues/5194/events | https://github.com/huggingface/datasets/pull/5194 | 1,434,206,951 | PR_kwDODunzps5CHPNY | 5,194 | Fix docs about dataset_info in YAML | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-03T07:10:23 | 2022-11-03T13:31:27 | 2022-11-03T13:29:21 | MEMBER | null | This PR fixes some misalignment in the docs after we transferred the dataset_info from `dataset_infos.json` to YAML in the dataset card:
- #4926
Related to:
- #5193 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5194/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5194/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5194",
"html_url": "https://github.com/huggingface/datasets/pull/5194",
"diff_url": "https://github.com/huggingface/datasets/pull/5194.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5194.patch",
"merged_at": "2022-11-03T13:29:21"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5193 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5193/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5193/comments | https://api.github.com/repos/huggingface/datasets/issues/5193/events | https://github.com/huggingface/datasets/issues/5193 | 1,433,883,780 | I_kwDODunzps5Vd1SE | 5,193 | "One or several metadata. were found, but not in the same directory or in a parent directory" | {
"login": "lambda-science",
"id": 20109584,
"node_id": "MDQ6VXNlcjIwMTA5NTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/20109584?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lambda-science",
"html_url": "https://github.com/lambda-science",
"followers_url": "https://api.github.com/users/lambda-science/followers",
"following_url": "https://api.github.com/users/lambda-science/following{/other_user}",
"gists_url": "https://api.github.com/users/lambda-science/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lambda-science/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lambda-science/subscriptions",
"organizations_url": "https://api.github.com/users/lambda-science/orgs",
"repos_url": "https://api.github.com/users/lambda-science/repos",
"events_url": "https://api.github.com/users/lambda-science/events{/privacy}",
"received_events_url": "https://api.github.com/users/lambda-science/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Also unrelated but still: https://huggingface.co/docs/datasets/image_dataset#generate-the-dataset\r\n```If your loading script passed the test, you should now have a dataset_infos.json file in your dataset folder.```\r\nIt's not the case anymore as it's now in the readme.md, it was confusing to me",
"And here is my data loader script: https://huggingface.co/datasets/corentinm7/MyoQuant-SDH-Data/blob/main/SDH_16k.py\r\nI have one file archive to download that contains the images for all splits and one `metadata.jsonl` to download that contains the informations about what image goes into what split.",
"Hi @lambda-science! It seems that your repo is recognized as a packaged module [ImageFolder](https://huggingface.co/docs/datasets/main/en/image_dataset#imagefolder), not as a dataset with the custom loading script, because loader looks for a script that has the same name as the dataset repo. So please try to rename your script to `MyoQuant-SDH-Data.py`, this should help.",
"> Hi @lambda-science! It seems that your repo is recognized as a packaged module [ImageFolder](https://huggingface.co/docs/datasets/main/en/image_dataset#imagefolder), not as a dataset with the custom loading script, because loader looks for a script that has the same name as the dataset repo. So please try to rename your script to `MyoQuant-SDH-Data.py`, this should help.\r\n\r\nHi !\r\n\r\nThank you for your answer. That was... embarrassingly easy, sorry for this issue, everything is fixed now ! \r\n\r\nHave a nice day ! :)",
"@lambda-science that's not embarrassing at all! it's actually not clear from the documentation that the script should have the same name, so thank you for the issue, we'll add this information to the docs :) "
] | 2022-11-02T22:46:25 | 2022-11-03T13:39:16 | 2022-11-03T13:35:44 | NONE | null | ### Describe the bug
When loading my own dataset, I get an error.
Here is my dataset link: https://huggingface.co/datasets/corentinm7/MyoQuant-SDH-Data
And the error after loading with:
```python
from datasets import load_dataset
load_dataset("corentinm7/MyoQuant-SDH-Data")
```
```python
Downloading readme: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3.34k/3.34k [00:00<00:00, 4.45MB/s]
Using custom data configuration SDH_16k-53e7301a92ab0025
Downloading and preparing dataset None/SDH_16k to /home/corentin/.cache/huggingface/datasets/corentinm7___imagefolder/SDH_16k-53e7301a92ab0025/0.0.0/37fbb85cc714a338bea574ac6c7d0b5be5aff46c1862c1989b20e0771199e93f...
Downloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3.28M/3.28M [00:00<00:00, 4.31MB/s]
Downloading data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00, 1.75s/it]
Downloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.13G/1.13G [00:15<00:00, 74.3MB/s]
Downloading data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:16<00:00, 16.09s/it]
Extracting data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:13<00:00, 13.16s/it]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/datasets/load.py", line 1742, in load_dataset
builder_instance.download_and_prepare(
File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/datasets/builder.py", line 814, in download_and_prepare
self._download_and_prepare(
File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1423, in _download_and_prepare
super()._download_and_prepare(
File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/datasets/builder.py", line 905, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1374, in _prepare_split
for key, record in logging.tqdm(
File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/datasets/packaged_modules/folder_based_builder/folder_based_builder.py", line 394, in _generate_examples
raise ValueError(
ValueError: One or several metadata. were found, but not in the same directory or in a parent directory of /home/corentin/.cache/huggingface/datasets/downloads/extracted/60c4aa8d4da3065bb3d310de4373dffd73bd4dc331aedcb4ee867febe4fdb7cd/validation/sick/2_CG_SDH_TAM_Bin1cKO_ko_pla_4_1640.tif.
```
However, the test command works fine: ```datasets-cli test hugging_face_play/ds_test/SDH_16k.py --save_info --all_configs --force_redownload```
```
Using custom data configuration SDH_16k
Testing builder 'SDH_16k' (1/1)
Downloading and preparing dataset sdh_16k/SDH_16k to /home/corentin/.cache/huggingface/datasets/sdh_16k/SDH_16k/1.0.0/21b584239a638aeeda33cba1ac2ca4869d48e4b4f20fb22274d5a5ddc487659d...
Downloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.13G/1.13G [00:14<00:00, 76.5MB/s]
Downloading data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:15<00:00, 15.66s/it]
Downloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3.28M/3.28M [00:02<00:00, 1.44MB/s]
Downloading data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:03<00:00, 3.21s/it]
Downloading data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 11586.48it/s]
Extracting data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:13<00:00, 13.42s/it]
Dataset sdh_16k downloaded and prepared to /home/corentin/.cache/huggingface/datasets/sdh_16k/SDH_16k/1.0.0/21b584239a638aeeda33cba1ac2ca4869d48e4b4f20fb22274d5a5ddc487659d. Subsequent calls will reuse this data.
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 605.27it/s]
Dataset card saved at hugging_face_play/ds_test/README.md
Test successful.
```
### Steps to reproduce the bug
Simply run in Python:
```python
from datasets import load_dataset
load_dataset("corentinm7/MyoQuant-SDH-Data")
```
### Expected behavior
As the test command worked, this error should not appear
### Environment info
- `datasets` version: 2.6.1
- Platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.6
- PyArrow version: 10.0.0
- Pandas version: 1.5.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5193/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5193/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5192 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5192/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5192/comments | https://api.github.com/repos/huggingface/datasets/issues/5192/events | https://github.com/huggingface/datasets/pull/5192 | 1,433,199,790 | PR_kwDODunzps5CD2BQ | 5,192 | Drop labels in Image and Audio folders if files are on different levels in directory or if there is only one label | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5192). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5192). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5192). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5192). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5192). All of your documentation changes will be reflected on that endpoint.",
"> Nit: maybe we can use the count_path_segments function from this file for counting (updated with your logic to make it faster).\r\n\r\n@mariosasko just to make sure I understood you correctly - are you okay with this change? (actually `os.path.normpath` is redundant here as paths from `data_files` should be already normalized but just in case)\r\nhttps://github.com/huggingface/datasets/pull/5192/files#diff-1f09f7a178211f7539b1499b64b69793bd53b30c8b7b34cfcc5835e25d31929fR33\r\nIf you are, we can merge.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5192). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5192). All of your documentation changes will be reflected on that endpoint.",
"awesome ! :D"
] | 2022-11-02T14:01:41 | 2022-11-15T16:32:53 | 2022-11-15T16:31:07 | CONTRIBUTOR | null | Will close https://github.com/huggingface/datasets/issues/5153
Drop labels by default (`drop_labels=None`) when:
* there are files on different levels of directory hierarchy by checking their path depth
* all files are in the same directory (=only one label was inferred)
First one fixes cases like this:
```
repo
image3.jpg
image4.jpg
data
image1.jpg
image2.jpg
```
Second one fixes cases like this:
```
repo
image1.jpg
image2.jpg
image3.jpg
```
This is mostly to fix the viewer for people who just drop images in the Hub interface into the root dir.
I added tests for both of these cases on local and remote files. **I also changed the data files for the old `drop_labels` test** (`test_generate_examples_drop_labels`). The files I provide to `test_generate_examples_drop_labels` now have a "canonical" classification structure (two dirs) so that the logic of the test does not change (i.e. it does not check the two cases addressed in this PR).
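For illustration, here is a minimal sketch of the two checks described above (the helper names and exact logic are assumptions for readability, not the code of this PR):
```python
import os

def count_path_segments(path: str) -> int:
    # How deep a file sits in the directory tree.
    return os.path.normpath(path).count(os.sep)

def should_drop_labels(data_files) -> bool:
    depths = {count_path_segments(f) for f in data_files}
    parents = {os.path.dirname(f) for f in data_files}
    # Drop labels if files live on different levels of the hierarchy,
    # or if they all share a single parent dir (only one label could be inferred).
    return len(depths) > 1 or len(parents) == 1
```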
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5192/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5192/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5192",
"html_url": "https://github.com/huggingface/datasets/pull/5192",
"diff_url": "https://github.com/huggingface/datasets/pull/5192.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5192.patch",
"merged_at": "2022-11-15T16:31:07"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5191 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5191/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5191/comments | https://api.github.com/repos/huggingface/datasets/issues/5191/events | https://github.com/huggingface/datasets/pull/5191 | 1,433,191,658 | PR_kwDODunzps5CD0Qp | 5,191 | Make torch.Tensor and spacy models cacheable | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-02T13:56:18 | 2022-11-02T17:20:48 | 2022-11-02T17:18:42 | CONTRIBUTOR | null | Override `Pickler.save` to implement deterministic, lazily registered reduction functions (inspired by https://github.com/uqfoundation/dill/blob/master/dill/_dill.py#L343) for `torch.Tensor` and spaCy models.
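As a rough illustration of the idea (this is not the actual implementation, just the shape of a deterministic reduction that always produces the same bytes for the same tensor):
```python
import numpy as np

def _reduce_torch_tensor(tensor):
    # Deterministic, device-independent representation: dtype string, shape, raw bytes.
    array = tensor.detach().cpu().numpy()
    return _rebuild_torch_tensor, (array.dtype.str, array.shape, array.tobytes())

def _rebuild_torch_tensor(dtype, shape, data):
    import torch
    return torch.from_numpy(np.frombuffer(data, dtype=dtype).reshape(shape).copy())
```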
Fix https://github.com/huggingface/datasets/issues/5170, fix https://github.com/huggingface/datasets/issues/3178
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5191/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5191/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5191",
"html_url": "https://github.com/huggingface/datasets/pull/5191",
"diff_url": "https://github.com/huggingface/datasets/pull/5191.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5191.patch",
"merged_at": "2022-11-02T17:18:42"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5190 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5190/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5190/comments | https://api.github.com/repos/huggingface/datasets/issues/5190/events | https://github.com/huggingface/datasets/issues/5190 | 1,433,014,626 | I_kwDODunzps5VahFi | 5,190 | `path` is `None` when downloading a custom audio dataset from the Hub | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! Yes, this is expected behavior - we do this as a security measure to not leak local paths (this info would be useless on other users' machines anyways) and only push audio bytes. \r\n"
] | 2022-11-02T11:51:25 | 2022-11-02T12:55:02 | 2022-11-02T12:55:02 | MEMBER | null | ### Describe the bug
I've created an [audio dataset](https://huggingface.co/datasets/lewtun/audio-test-push) using the `audiofolder` feature described in the [docs](https://huggingface.co/docs/datasets/audio_dataset#audiofolder) and then pushed it to the Hub.
Locally, I can see the `audio.path` feature is of the expected form `path/to/data_dir`, but when I download the dataset from the Hub, I see `audio.path` is `None`
Here's an example:
```python
from datasets import load_dataset
ds = load_dataset("lewtun/audio-test-push")
ds["train"][0]
# {
# "audio": {
# "path": None, <-- Is this expected?
# "array": array(
# [
# 3.97140226e-07,
# 7.30310290e-07,
# 7.56406735e-07,
# ...,
# -1.19636677e-01,
# -1.16811886e-01,
# -1.12441722e-01,
# ]
# ),
# "sampling_rate": 44100,
# },
# "song_id": 0,
# "genre_id": 0,
# "genre": "Electronic",
# }
```
Is this expected behaviour? If yes, feel free to close this issue as it's not a true bug then :)
### Steps to reproduce the bug
1. Create an audio dataset with the `audiofolder` feature
2. Push the dataset to the Hub with `push_to_hub()`
3. Download the Hub dataset and inspect the `audio.path` feature
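For step 3, one way to inspect what was actually stored is to disable decoding on the audio column (illustrative snippet; `Audio(decode=False)` exposes the underlying `bytes`/`path` pair):
```python
from datasets import Audio, load_dataset

ds = load_dataset("lewtun/audio-test-push")
raw = ds["train"].cast_column("audio", Audio(decode=False))
example = raw[0]["audio"]  # plain dict with "bytes" and "path"
print(example["path"], len(example["bytes"]))  # path is None since only the raw bytes were pushed
```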
### Expected behavior
`audio.path` points to the file associated with the audio data
### Environment info
- `datasets` version: 2.6.2.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.13
- PyArrow version: 9.0.0
- Pandas version: 1.5.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5190/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5190/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5189 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5189/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5189/comments | https://api.github.com/repos/huggingface/datasets/issues/5189/events | https://github.com/huggingface/datasets/issues/5189 | 1,432,769,143 | I_kwDODunzps5VZlJ3 | 5,189 | Reduce friction in tabular dataset workflow by eliminating having splits when dataset is loaded | {
"login": "merveenoyan",
"id": 53175384,
"node_id": "MDQ6VXNlcjUzMTc1Mzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/merveenoyan",
"html_url": "https://github.com/merveenoyan",
"followers_url": "https://api.github.com/users/merveenoyan/followers",
"following_url": "https://api.github.com/users/merveenoyan/following{/other_user}",
"gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions",
"organizations_url": "https://api.github.com/users/merveenoyan/orgs",
"repos_url": "https://api.github.com/users/merveenoyan/repos",
"events_url": "https://api.github.com/users/merveenoyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/merveenoyan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I have to admit I'm not a fan of this idea, as this would result in a non-consistent behavior between tabular and non-tabular datasets, which is confusing if done without the context you provided. Instead, we could consider returning a `Dataset` object rather than `DatasetDict` if there is only one split in the generated dataset. But then again, I think this lib is a bit too old to make such changes. @lhoestq @albertvillanova WDYT?\r\n\r\n",
"We can brainstorm here to see how we could make it happen ? And then depending on the options we see if it's a change we can do.\r\n\r\nI'm starting with a first reasoning\r\n\r\nCurrently not passing `split=` in `load_dataset` means \"return a dict with each split\".\r\n\r\nNow what would happen if a dataset has no split ? Ideally it should return one Dataset. And passing `split=` would have no sense. So depending on the dataset content, not passing `split=` should return a dict or a Dataset. In particular, those two cases should work:\r\n```python\r\n# case 1: dataset without split\r\nds = load_dataset(\"dataset_without_split\")\r\nds[0], ds[\"column_name\"], list(ds) # we want this\r\n\r\n# case 2: dataset with splits\r\nds = load_dataset(\"dataset_with_splits\")\r\nds[\"train\"] # this works and can't be changed\r\nds = load_dataset(\"dataset_with_splits\", split=\"train\")\r\nds[0], ds[\"column_name\"], list(ds) # this works and can't be changed\r\n```\r\n\r\nI can see several ideas:\r\n1. allowing `load_dataset` to return a different object based on the dataset content - either a Dataset or a DatasetDict\r\n - we can update `get_dataset_split_names` to return None or a list if users want to know in advance what object will be returned. They can also use `isinstance` _a posteriori_\r\n - but in this case we expect users to be careful when loading datasets and always to extra steps to check if they got a Dataset or DatasetDict\r\n2. merge Dataset and DatasetDict objects\r\n - they already share many functions: map, filter, push_to_hub etc.\r\n - we can define `ds[0]` to be the first item of the first split, and consider that the uses accesses rows from the full table of all the splits concatenated\r\n - however there is a collision when doing `ds[\"column_name\"]` or `ds[\"train\"]` that we need to address: the first returns a list, while the other returns a Dataset.\r\n\r\nWhat are your opinions on those two ideas ? Do you have other ideas in mind ?",
"I like the first idea more (concatenating splits doesn't seem useful, no?). This is a significant breaking change, so I think we should do a poll (or something similar) to gather more info on the actual \"expected behavior\" and wait for Datasets 3.0 if we decide to implement it.\r\n\r\nPS: @thomwolf also suggested the same thing a while ago (https://github.com/huggingface/datasets/issues/743#issuecomment-746074641).",
"I think it's an interesting improvement to the user experience for a case that comes often (no split) so I would definitively support it.\r\n\r\nI would be more in favor of option 2 rather than returning various types of objects from load_dataset and handling carefully the possible collisions indeed",
"Related: if a dataset only has one split, we don't show the splits select control in the dataset viewer on the Hub, eg. compare https://huggingface.co/datasets/hf-internal-testing/fixtures_image_utils/viewer/image/test with https://huggingface.co/datasets/glue/viewer/mnli/test.\r\n\r\nSee https://github.com/huggingface/moon-landing/pull/3858 for more details (internal)",
"I feel like the second idea is a bit more overkill. \r\n@severo I would say it's a bit irrelevant to the problem we have but is a separate problem @polinaeterna is solving at the moment. 😅 (also discussed on slack)",
"OK, sorry for polluting the thread. The relation I saw with the dataset viewer is that from a UX point of view, we hide the concepts of split and configuration whenever possible -> this issue feels like doing the same in the datasets library.",
"I would agree that returning different types based on the content of the dataset might be confusing.\r\n\r\nWe can do something similar to what `fetch_*` or `load_*` from `sklearn.datasets` do, which is to have an arg which changes the type of the returned type. For instance, `load_iris` would return a dict, but `load_iris(..., return_X_y=True)` would return a tuple.\r\n\r\nHere we can have a similar arg such as `return_X` which would then only return a single `DataSet` or an array.",
"> I feel like the second idea is a bit more overkill.\r\n\r\nOverkill in what sense ?\r\n\r\n> Here we can have a similar arg such as return_X which would then only return a single DataSet or an array.\r\n\r\nRight now one can already pass `split=\"all\"` to get one `Dataset` object with all the data in it (unsplit). We could also have something like `return_all=True` so make the API clearer.\r\n\r\n> I would be more in favor of option 2 rather than returning various types of objects from load_dataset and handling carefully the possible collisions indeed\r\n\r\nI think it would be ok to handle the collision by allowing both `ds[\"train\"]` and `ds[\"column_name\"]` (and maybe adding something like `ds.splits` for those who want to iterate over the splits or add new ones)",
"Would it make sense to remove the notion of \"split\" in `load_dataset`? I feel a lof of it comes from the want to have some sort of group of more or less similar dataset. \"train\"/\"test\"/\"validation\" are the traditional ones, but there are some datasets that have much more splits.\r\n\r\nWould it make sense to force `load_dataset` to only load a single `Dataset` object, and fail if it doesn't point to one. And have another method that's like `load_dataset_group_info` that can return a very arbitrary info class (Dict, List whatever), but you need to pass individual infos to `load_dataset` to run anything? Typically I don't think `DatasetDict.map` is really that helpful, but that's my personal opinion. This would help make things more readable (typically knowing if an object is a `Dataset` or a `DatasetDict`)",
"> Would it make sense to remove the notion of \"split\" in load_dataset?\r\n\r\nI think we need to keep it - though in practice people can name the splits whatever they want anyway.\r\n\r\n> Would it make sense to force load_dataset to only load a single Dataset object, and fail if it doesn't point to one.\r\n\r\nWe need to keep backward compatibility ideally - in particular the load_dataset + ds[\"train\"] one",
"> I think we need to keep it - though in practice people can name the splits whatever they want anyway.\r\n\r\nIt was my understanding that the whole issue was that `load_dataset` returned multiple types of objects.\r\n\r\n> We need to keep backward compatibility ideally - in particular the load_dataset + ds[\"train\"] one\r\n\r\nYeah sorry I meant ideally. One can always start developing `load_dataset_v2` can deprecate the first one and remove it in the longer term.",
"> It was my understanding that the whole issue was that load_dataset returned multiple types of objects.\r\n\r\nYes indeed, but we still want to keep a way to load the train/val/test/whatever splits alone ;)",
"@thomasw21's solution is good but it will break backwards compatibility. 😅",
"Started to experiment with merging Dataset and DatasetDict. My plan is to define the splits of a Dataset in Dataset.info.splits (already exists, but never used). A Dataset would then be the concatenation of its splits if they exist.\r\n\r\nNot sure yet this is the way to go. My plan is to play with it and see and share it with you, so we can see if it makes sense from a UX point of view.",
"So just to make sure that I understand the current direction, people will have to be extra careful when handling splits right?\r\nImagine \"potato\" a dataset containing train/validation split:\r\n```\r\nload_dataset(\"potato\") # returns the concatenation of all the splits\r\n```\r\nPreviously the design would force you to choose a split (it would raise otherwise), or manually concat them if you really wanted to play with concatenated splits. Now it would potentially run without raising for a bit of time until you figure out that you've been training on both train and validation split.\r\n\r\nWould it make sense to use a dataset specific default instead of using the concatenation, typically \"potato\" dataset's default would be train?\r\n```\r\nload_dataset(\"potato\") # returns \"train\" split\r\nload_dataset(\"potato\", split=\"train\") # returns \"train\" split\r\nload_dataset(\"potato\", split=\"validation\") # returns \"validation\" split\r\nconcatenate_datasets([load_dataset(\"potato\", split=\"train\"), load_dataset(\"potato\", split=\"validation\")]) # returns concatenation\r\n```",
"> load_dataset(\"potato\") # returns \"train\" split\r\n\r\nTo avoid a breaking change we need to be able to do `load_dataset(\"potato\")[\"validation\"]` as well.\r\n\r\nIn that case I'd wonder where the validation split comes from, since the rows of the dataset wouldn't contain the validation split according to your example. That's why I'm more in favor of concatenating.\r\n\r\nA dataset is one table, that optionally has some split info about subsets (e.g. for training an evaluation)\r\n\r\nThis also allows anyone to re-split the dataset the way they want if they're not happy with the default:\r\n\r\n```python\r\nds = load_dataset(\"potato\").train_test_split(test_size=0.2)\r\ntrain_ds = ds[\"train\"]\r\ntest_ds = ds[\"test\"]\r\n```",
"Just thinking about this, we could just have `to_dataframe()` as `load_dataset(\"blah\").to_dataframe()` to get the whole dataset, and not change anything else.",
"I have a first implementation of option 2 (merging Dataset and DatasetDict) in this PR: https://github.com/huggingface/datasets/pull/5301/\r\n\r\nFeel free to play with it if you're interested, and let me know what you think. In this PR, a dataset is one table that optionally has some split info about subsets.",
"@adrinjalali we already have [to_pandas](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.to_pandas) AFAIK that essentially does the same thing (for a dataset, not for a dataset dict), I was wondering if it makes sense to have this as I don't know portion of people who load non-tabular datasets into dataframes. @lhoestq I saw your PR and it will break a lot of things imo, WDYT of this option? ",
"> we already have [to_pandas](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.to_pandas) AFAIK that essentially does the same thing (for a dataset, not for a dataset dict)\r\n\r\nyes correct :)\r\n\r\n> I saw your PR and it will break a lot of things imo\r\n\r\nDo you have concrete examples you can share ?\r\n\r\n> WDYT of this option?\r\n\r\nThe to_dataframe option ? I think it not enough, since you'd still get a `DatasetDict({\"train\": Dataset()})` if you load a dataset with no splits (e.g. one CSV), and this doesn't really make sense.\r\n\r\nNote that in the PR I opened you can do\r\n```python\r\nds = load_dataset(\"dataset_with_just_one_csv\") # Dataset type\r\ndf = load_dataset(\"dataset_with_just_one_csv\").to_pandas() # DataFrame type\r\n```",
"@lhoestq no I think @adrinjalali and I meant when user calls `to_dataframe` if there's only train split in `DatasetDict` we could directly load that into dataframe. This might cause a confusion given there's to_pandas but I think it's more intuitive and least breaking change. (given people -who use `datasets` for tabular workflows- will eventually call `to_pandas` anyway) ",
"So in that case it would be fine to still end up with a dataset dict with a \"train\" split ?",
"yeah what I mean is this:\r\n\r\n```py\r\ndataset = load_dataset(\"blah\")\r\n\r\n# deal with a split of the dataset\r\ntrain = dataset[\"train\"]\r\ntrain_df = dataset[\"train\"].to_dataframe()\r\n\r\n# deal with the whole dataset\r\ndataset_df = dataset.to_dataframe()\r\n```\r\n\r\nSo we do two things to improve tabular experience:\r\n- allow datasets to have a single split\r\n- add `to_dataframe` to the root dict level so that users can simply call `df = load_dataset(\"blah\").to_dataframe()` and have it in their `pandas.DataFrame` object.",
"Ok ! Note that we already have `Dataset.to_pandas()` so for consistency I'd call it `DatasetDict.to_pandas()` as well, does it sound good to you ? This is something we can add pretty easily",
"yeah that sounds perfect @lhoestq !",
"> So just to make sure that I understand the current direction, people will have to be extra careful when handling splits right?\r\n\r\nWe can raise an error if someone does `load_dataset(...)[0]` if the dataset is made of several splits, and return the first example if there's one or zero splits (i.e. when it's not ambiguous). Had this idea from the dicussions in #5312 WDYT @thomasw21 ?",
"> We can raise an error if someone does load_dataset(...)[0] if the dataset is made of several splits,\r\n\r\nBut then how is that different to have the distinction between DatasetDict and Dataset then? Is it just that \"default behaviour when there are no splits or single split, it returns directly the split when there's no ambiguity\".\r\n\r\nAlso I was wondering how the concatenation could have heavy impacts when running mapping functions/filtering in batch? Typically can batch be somehow mixed?",
"> But then how is that different to have the distinction between DatasetDict and Dataset then?\r\n\r\nBecause it doesn't make sense to be able to do `example = ds[0]` or `examples = list(ds)` on a class named `DatasetDict` of type `Dict[str, Dataset]`.\r\n\r\n> Also I was wondering how the concatenation could have heavy impacts when running mapping functions/filtering in batch? Typically can batch be somehow mixed?\r\n\r\nNo, we run each function on each split separated",
"> Because it doesn't make sense to be able to do example = ds[0] or examples = list(ds) on a class named DatasetDict of type Dict[str, Dataset].\r\n\r\nHum but you're still going to raise an exception in both those cases with your current change no? (actually list(ds) would return the name of the splits no?)\r\n\r\n> No, we run each function on each split separated\r\n\r\nNice!"
] | 2022-11-02T09:15:02 | 2022-12-06T12:13:17 | null | CONTRIBUTOR | null | ### Feature request
Sorry for the cryptic name, but I'd like to explain using the code itself. When I want to load a specific dataset from a repository (for instance, this one: https://huggingface.co/datasets/inria-soda/tabular-benchmark)
```python
from datasets import load_dataset
dataset = load_dataset("inria-soda/tabular-benchmark", data_files=["reg_cat/house_sales.csv"], streaming=True)
print(next(iter(dataset["train"])))
```
The `datasets` library is essentially designed for people who'd like to use benchmark datasets across various modalities to fine-tune their models, and these benchmark datasets usually have pre-defined train and test splits. However, for tabular workflows, relying on fixed train and test splits usually ends up with the model overfitting to the validation split, so users typically prefer validation techniques like `StratifiedKFoldCrossValidation`, and when they tune hyperparameters they use `GridSearchCrossValidation`; in other words, the usual behavior is to create their own splits. Even [in this paper](https://hal.archives-ouvertes.fr/hal-03723551) a benchmark is introduced, but the split is done by the authors.
It's a bit confusing for the average tabular user to load a dataset and see `"train"`, so it would be nice if we did not load the dataset into a split called `train` by default.
```diff
from datasets import load_dataset
dataset = load_dataset("inria-soda/tabular-benchmark", data_files=["reg_cat/house_sales.csv"], streaming=True)
-print(next(iter(dataset["train"])))
+print(next(iter(dataset)))
```
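For context, a typical tabular cross-validation workflow today has to go through the implicit `train` split first (illustrative sketch reusing the file from the example above):
```python
from datasets import load_dataset
from sklearn.model_selection import KFold

ds = load_dataset("inria-soda/tabular-benchmark", data_files=["reg_cat/house_sales.csv"])
df = ds["train"].to_pandas()  # the "train" key is the friction point for tabular users

for train_idx, valid_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(df):
    train_df, valid_df = df.iloc[train_idx], df.iloc[valid_idx]
    # fit and evaluate a model on this fold
```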
### Motivation
I explained it above 😅
### Your contribution
I think this is quite a big change that seems small (e.g. how to determine datasets that will not be load to train split?), it's best if we discuss first! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5189/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5188 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5188/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5188/comments | https://api.github.com/repos/huggingface/datasets/issues/5188/events | https://github.com/huggingface/datasets/pull/5188 | 1,432,477,139 | PR_kwDODunzps5CBaoQ | 5,188 | add: segmentation guide. | {
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | {
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks @osanseviero. Am I good to merge? ",
"I would wait for a second approval just in case :) ",
"Sure :) ",
"Merging since the images have been pushed as LFS files ([PR](https://huggingface.co/datasets/huggingface/documentation-images/discussions/8)). "
] | 2022-11-02T04:34:36 | 2022-11-04T18:25:57 | 2022-11-04T18:23:34 | MEMBER | null | Closes #5181
I have opened a PR on Hub (https://huggingface.co/datasets/huggingface/documentation-images/discussions/5) to include the images in our central Hub repository. Once the PR is merged I will edit the image links.
I have also prepared a [Colab Notebook](https://colab.research.google.com/drive/1BMDCfOTBnyshoME5RSxn5iQy-TWeFbOA?usp=sharing) in case anyone wants to play.
- [x] Replace the image links | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5188/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5188/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5188",
"html_url": "https://github.com/huggingface/datasets/pull/5188",
"diff_url": "https://github.com/huggingface/datasets/pull/5188.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5188.patch",
"merged_at": "2022-11-04T18:23:34"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5187 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5187/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5187/comments | https://api.github.com/repos/huggingface/datasets/issues/5187/events | https://github.com/huggingface/datasets/pull/5187 | 1,432,375,375 | PR_kwDODunzps5CBE08 | 5,187 | chore: add notebook links to img cls and obj det. | {
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@nateraw I guess the failing test is unrelated. ",
"@sayakpaul Yea failures are unrelated. ",
"Alright. Will wait for @osanseviero's take and then merge. ",
"FYI @stevhliu ",
"@osanseviero @stevhliu @nateraw thank you for your comments. Acted on them.",
"Thanks! Can I merge? Or should we wait for approvals from the others?",
"Since @stevhliu approved as well, I think you're good to go",
"Alright!\r\n\r\nMerging as a Member for the first time 🫀"
] | 2022-11-02T02:30:09 | 2022-11-03T01:52:24 | 2022-11-03T01:49:56 | MEMBER | null | Closes https://github.com/huggingface/datasets/issues/5182 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5187/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5187/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5187",
"html_url": "https://github.com/huggingface/datasets/pull/5187",
"diff_url": "https://github.com/huggingface/datasets/pull/5187.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5187.patch",
"merged_at": "2022-11-03T01:49:56"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5186 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5186/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5186/comments | https://api.github.com/repos/huggingface/datasets/issues/5186/events | https://github.com/huggingface/datasets/issues/5186 | 1,432,045,011 | I_kwDODunzps5VW0XT | 5,186 | Incorrect error message when Dataset.from_sql fails and sqlalchemy not installed | {
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi! The first `Dataset.from_sql` call also outputs the \"ImportError: Using URI string without sqlalchemy installed.\" message, but you also get \"During handling of the above exception another exception occurred: ...\" after which the ValueError is printed. I agree that this behavior makes it easy to miss the original error. \r\n\r\nI think we can improve this by not throwing the writer's ValueError if the error from a dataset script is already being handled to make debugging easier. @lhoestq @albertvillanova wdyt?",
"Yup ! Alternatively the error can be raised in sql.py before generating the examples ? In `_info` for example",
"yea @lhoestq that would probably be good. The 2nd error is useless if the 1st error is the real reason it failed. "
] | 2022-11-01T20:25:51 | 2022-11-15T18:24:39 | 2022-11-15T18:24:39 | CONTRIBUTOR | null | ### Describe the bug
When calling `Dataset.from_sql` (in my case, with sqlite3), it fails with a message ```ValueError: Please pass `features` or at least one example when writing data``` when I don't have `sqlalchemy` installed.
### Steps to reproduce the bug
Make a new sqlite db with `sqlite3` and `pandas` from a remote [URL](https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-states.csv).
```python
import sqlite3
import pandas as pd
from datasets import Dataset
conn = sqlite3.connect('us_covid_data.db')
df = pd.read_csv('https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-states.csv')
df.to_sql('states', conn, if_exists='replace')
```
Then if you try to query this DB like this:
```python
ds = Dataset.from_sql('''SELECT * from states WHERE state=="New York";''', "sqlite:///us_covid_data.db")
```
You run into the error I described above:
```ValueError: Please pass `features` or at least one example when writing data```
However, if you try to pass features, as the error suggests, then you get an error that tells you the underlying problem...
```python
from datasets import Dataset, Features, Value
features = Features({
'date': Value('date32'),
'label': Value('string'),
'fips': Value('int32'),
'cases': Value('int32'),
'deaths': Value('int32')
})
ds = Dataset.from_sql(
'''SELECT * from states WHERE state=="New York";''',
"sqlite:///us_covid_data.db",
features=features
)
```
Which results in the actual underlying error: `ImportError: Using URI string without sqlalchemy installed.`
### Expected behavior
Instead of `ValueError` about needing to pass features, we should provide the actual underlying error about not having SQLAlchemy installed when it isn't found in the environment.
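A rough sketch of the kind of eager check that would surface the real problem before any rows are written (the function name is illustrative, not the library's internals):
```python
def _require_sqlalchemy_for_uri(con):
    # If `con` is a URI string, pandas needs sqlalchemy; fail fast with the real reason.
    if isinstance(con, str):
        try:
            import sqlalchemy  # noqa: F401
        except ImportError as e:
            raise ImportError("Using URI string without sqlalchemy installed.") from e
```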
### Environment info
- `datasets` version: 2.6.1
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.10
- PyArrow version: 10.0.0
- Pandas version: 1.2.5 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5186/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5186/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5185 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5185/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5185/comments | https://api.github.com/repos/huggingface/datasets/issues/5185/events | https://github.com/huggingface/datasets/issues/5185 | 1,432,021,611 | I_kwDODunzps5VWupr | 5,185 | Allow passing a subset of output features to Dataset.map | {
"login": "sanderland",
"id": 48946947,
"node_id": "MDQ6VXNlcjQ4OTQ2OTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/48946947?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanderland",
"html_url": "https://github.com/sanderland",
"followers_url": "https://api.github.com/users/sanderland/followers",
"following_url": "https://api.github.com/users/sanderland/following{/other_user}",
"gists_url": "https://api.github.com/users/sanderland/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanderland/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanderland/subscriptions",
"organizations_url": "https://api.github.com/users/sanderland/orgs",
"repos_url": "https://api.github.com/users/sanderland/repos",
"events_url": "https://api.github.com/users/sanderland/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanderland/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 2022-11-01T20:07:20 | 2022-11-01T20:07:34 | null | CONTRIBUTOR | null | ### Feature request
Currently, map does one of two things to the features (if I'm not mistaken):
* when you do not pass features, types are assumed to be equal to the input if they can be cast, and inferred otherwise
* when you pass a full specification of features, output features are set to this
However, sometimes you want to just pass some of the output types, particularly when the first of these modes makes an incorrect type. This currently crashes.
### Motivation
To give a little background: this problem appears in converting labels to ids, where the labels happen to be floats rather than strings
Consider the following use of map to convert from float to int
```python
data = Dataset.from_dict({'y':[1.0,2.0,3.0]})
mapped = data.map(lambda r: {'y': int(r['y'])})
mapped['y'] # is floats, not ints
```
The result is a float again, since after the mapping operation it forces the old datatypes back on the data.
Passing `features=Features({"y": Value(dtype="int64")})` to map works in principle, but then extending it a little to e.g.
```python
def format_data(r):
return {**tokenizer(r["text"]), "y": int(r["y"])}
data = Dataset.from_dict({"y": [1.0, 2.0, 3.0], "text": ["one", "two", "three"]})
mapped = data.map(
format_data,
features=Features({'y': Value(dtype="int64")}),
remove_columns=["text"],
)
```
Results in a crash in dataset internals, as it expects either all or no output features to be specified.
Of course one can pass a full feature specification, but this becomes tokenizer specific and very awkward.
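For reference, one workaround under the current behavior is to fix just the offending dtype after mapping (illustrative; `cast_column` is an existing `Dataset` method, and this assumes the float values are integral so the Arrow cast succeeds):
```python
from datasets import Value

mapped = data.map(format_data, remove_columns=["text"])
mapped = mapped.cast_column("y", Value("int64"))  # override only the "y" dtype
```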
### Your contribution
I've looked at `write_batch` and particularly `col_type = features[col] if features else None`, but checking for `col in features` here makes it fail elsewhere, and the structure makes it hard to understand how and why. I do not think I would have the time myself to get to the bottom of this anytime soon. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5185/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5185/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5183 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5183/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5183/comments | https://api.github.com/repos/huggingface/datasets/issues/5183/events | https://github.com/huggingface/datasets/issues/5183 | 1,431,418,066 | I_kwDODunzps5VUbTS | 5,183 | Loading an external dataset in a format similar to conll2003 | {
"login": "Taghreed7878",
"id": 112555442,
"node_id": "U_kgDOBrV1sg",
"avatar_url": "https://avatars.githubusercontent.com/u/112555442?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Taghreed7878",
"html_url": "https://github.com/Taghreed7878",
"followers_url": "https://api.github.com/users/Taghreed7878/followers",
"following_url": "https://api.github.com/users/Taghreed7878/following{/other_user}",
"gists_url": "https://api.github.com/users/Taghreed7878/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Taghreed7878/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Taghreed7878/subscriptions",
"organizations_url": "https://api.github.com/users/Taghreed7878/orgs",
"repos_url": "https://api.github.com/users/Taghreed7878/repos",
"events_url": "https://api.github.com/users/Taghreed7878/events{/privacy}",
"received_events_url": "https://api.github.com/users/Taghreed7878/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-11-01T13:18:29 | 2022-11-02T11:57:50 | 2022-11-02T11:57:50 | NONE | null | I'm trying to load a custom dataset into a Dataset object. It's similar to conll2003, but with only 2 columns (word, entity). I used the following script:
```python
features = datasets.Features(
    {"tokens": datasets.Sequence(datasets.Value("string")),
     "ner_tags": datasets.Sequence(
         datasets.features.ClassLabel(
             names=["B-PER", .... etc.]))}
)

from datasets import Dataset

INPUT_COLUMNS = "tokens ner_tags".split(" ")

def read_conll(file):
    #all_labels = []
    example = {col: [] for col in INPUT_COLUMNS}
    idx = 0
    with open(file) as f:
        for line in f:
            if line:
                if line.startswith("-DOCSTART-") and example["tokens"] != []:
                    print(idx, example)
                    yield idx, example
                    idx += 1
                    example = {col: [] for col in INPUT_COLUMNS}
                elif line == "\n" or (line.startswith("-DOCSTART-") and example["tokens"] == []):
                    continue
                else:
                    row_cols = line.split(" ")
                    for i, col in enumerate(example):
                        example[col] = row_cols[i].rstrip()

dset = Dataset.from_generator(read_conll, gen_kwargs={"file": "/content/new_train.txt"}, features=features)
```
The following error happened:
```
[/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in <genexpr>(.0)
    285 for key in unique_values(itertools.chain(*dicts)): # set merge all keys
    286 # Will raise KeyError if the dict don't have the same keys
--> 287 yield key, tuple(d[key] for d in dicts)
    288
TypeError: tuple indices must be integers or slices, not str
```
What does this mean and what should I modify? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5183/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5183/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5182 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5182/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5182/comments | https://api.github.com/repos/huggingface/datasets/issues/5182/events | https://github.com/huggingface/datasets/issues/5182 | 1,431,029,547 | I_kwDODunzps5VS8cr | 5,182 | Add notebook / other resource links to the task-specific data loading guides | {
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Yea this would be great! We would need an object detection tutorial notebook too if it doesn't already exist there. ",
"There is one: https://huggingface.co/docs/datasets/object_detection.\r\n\r\nI will start the work. "
] | 2022-11-01T07:57:26 | 2022-11-03T01:49:57 | 2022-11-03T01:49:57 | MEMBER | null | Does it make sense to include links to notebooks / scripts that show how to use a dataset for training / fine-tuning a model?
For example, here in [https://huggingface.co/docs/datasets/image_classification] we could include a mention of https://github.com/huggingface/notebooks/blob/main/examples/image_classification.ipynb.
Applies to https://huggingface.co/docs/datasets/object_detection as well.
Cc: @osanseviero @nateraw | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5182/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5182/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5181 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5181/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5181/comments | https://api.github.com/repos/huggingface/datasets/issues/5181/events | https://github.com/huggingface/datasets/issues/5181 | 1,431,027,102 | I_kwDODunzps5VS72e | 5,181 | Add a guide for semantic segmentation | {
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | {
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Sure this sounds great! Would this be pure torchvision, albumentations, or something else?",
"I am considering `torchvision` and `albumentations`. Also [works with TensorFlow](https://github.com/deep-diver/segformer-tf-transformers/blob/main/notebooks/TFSegFormer_Finetune.ipynb). \r\n\r\nI am assigning the issue to myself then. "
] | 2022-11-01T07:54:50 | 2022-11-04T18:23:36 | 2022-11-04T18:23:36 | MEMBER | null | Currently, we have these guides for object detection and image classification:
* https://huggingface.co/docs/datasets/object_detection
* https://huggingface.co/docs/datasets/image_classification
I am proposing adding a similar guide for semantic segmentation.
I am happy to contribute a PR for it.
Cc: @osanseviero @nateraw | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5181/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5181/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5180 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5180/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5180/comments | https://api.github.com/repos/huggingface/datasets/issues/5180/events | https://github.com/huggingface/datasets/issues/5180 | 1,431,012,438 | I_kwDODunzps5VS4RW | 5,180 | An example or recommendations for creating large image datasets? | {
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The beam utilities allow to prepare a dataset as parquet in your cloud storage. From my perspective this CLI is not super easy to use, but we've been working on a new python API to prepare a dataset in your cloud storage:\r\n```python\r\nfrom datasets import load_dataset_builder\r\n\r\nbuilder = load_dataset_builder(\"c4\", \"en\")\r\nbuilder.download_and_prepapre(\"s3://my-bucket/c4\", file_format=\"parquet\")\r\n```\r\n\r\nAnd to use Beam you can do:\r\n```python\r\nbeam_runner = ... # one of \"SparkRunner\", \"DataFlowRunner\", \"DirectRunner\", etc.\r\nbeam_options = ...\r\n\r\nbuilder.download_and_prepapre(\r\n \"s3://my-bucket/c4\",\r\n file_format=\"parquet\",\r\n beam_runner=beam_runner,\r\n beam_options=beam_options\r\n)\r\n```\r\n\r\nThough Beam can be used ONLY if there is a dataset script based on the `BeamBasedBuilder` right now - it doesn't work on an arbitrary dataset (see [wikipedia.py](https://huggingface.co/datasets/wikipedia/blob/main/wikipedia.py) for example).",
"Thanks! \r\n\r\nWould be nice to have something similar for creating large image datasets. "
] | 2022-11-01T07:38:38 | 2022-11-02T10:17:11 | null | MEMBER | null | I know that Apache Beam and `datasets` have [some connector utilities](https://huggingface.co/docs/datasets/beam). But it's a little unclear what we mean by "But if you want to run your own Beam pipeline with Dataflow, here is how:". What does that pipeline do?
As a user, I was wondering if we have this support for creating large image datasets. If so, we should mention that [here](https://huggingface.co/docs/datasets/image_dataset).
Cc @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5180/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5180/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5179 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5179/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5179/comments | https://api.github.com/repos/huggingface/datasets/issues/5179/events | https://github.com/huggingface/datasets/issues/5179 | 1,430,826,100 | I_kwDODunzps5VSKx0 | 5,179 | `map()` fails midway due to format incompatibility | {
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Cc: @lhoestq ",
"You can end up with a list instead of a tensor if all the tensors inside the list can't be stacked together - can you make sure all your inputs are tensors with the same shape ?",
"Is there an easy way to ensure it?",
"You can make sure your `tokenize` function always return tensors of the same shape",
"I modified my `tokenize()` function to be like so:\r\n\r\n```py\r\ndef tokenize(batch):\r\n return tokenizer(batch[\"text\"], padding=\"longest\")\r\n```\r\n\r\nso that the padding always happens w.r.t to the length of the longest sequence in a batch. The issue still persists. Is there any other way? ",
"tbh I though your first implementation was fine\r\n```python\r\ndef tokenize(batch):\r\n return tokenizer(batch[\"text\"], padding=True, truncation=True)\r\n```\r\n\r\nMaybe you can try to see what the erroring data looks like by adding a try/except in `get_test_accuracy` ?",
"This is what I got. \r\n\r\nFor the non-erroring data, it looks like (without the labels):\r\n\r\n```\r\ntensor([[ 101, 10047, 3110, ..., 0, 0, 0],\r\n [ 101, 1045, 2514, ..., 0, 0, 0],\r\n [ 101, 1045, 2514, ..., 0, 0, 0],\r\n ...,\r\n [ 101, 1045, 2005, ..., 0, 0, 0],\r\n [ 101, 1045, 2572, ..., 0, 0, 0],\r\n [ 101, 10047, 7481, ..., 0, 0, 0]]) 128\r\ntensor([[1, 1, 1, ..., 0, 0, 0],\r\n [1, 1, 1, ..., 0, 0, 0],\r\n [1, 1, 1, ..., 0, 0, 0],\r\n ...,\r\n [1, 1, 1, ..., 0, 0, 0],\r\n [1, 1, 1, ..., 0, 0, 0],\r\n [1, 1, 1, ..., 0, 0, 0]]) 128\r\n```\r\n\r\nFor the erroring part:\r\n\r\n```\r\n[tensor([ 101, 1045, 2064, 2102, 2393, 3110, 2066, 2242, 6355, 3047, 2004, 2574,\r\n 2004, 1996, 8629, 2357, 2125, 4299, 1045, 2071, 2424, 2009, 2006, 7858,\r\n 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0]), tensor([ 101, 10047, 5458, 1997, 3110, 11654, 1998, 11055, 102, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0]), tensor([ 101, 1045, 2074, 2064, 2102, 6073, 1996, 3110, 2008, 2026,\r\n 14982, 2000, 5587, 2203, 16650, 29563, 2030, 2569, 4506, 2052,\r\n 2191, 1037, 2738, 11552, 2208, 17044, 14540, 2100, 3375, 102,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0]),\r\n...\r\n\r\n[tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]), tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),\r\n...\r\n```\r\n\r\nI also tried investigating the shapes of the individual entries within a `batch` without the labels:\r\n\r\n```py\r\ndef get_test_accuracy(model):\r\n def fn(batch): \r\n try:\r\n inputs = {k:v.to(device) for k,v in batch.items() \r\n if k in tokenizer.model_input_names}\r\n with torch.no_grad():\r\n output = model(**inputs)\r\n pred_label = torch.argmax(output.logits, axis=-1)\r\n return {\"predicted_label\": pred_label.cpu().numpy()}\r\n except:\r\n for k in batch:\r\n if k != \"label\":\r\n for i in range(len(batch[k])):\r\n print(batch[k][i].shape)\r\n return fn\r\n```\r\n\r\nThey are:\r\n\r\n```\r\n...\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\n```\r\n\r\nThere are differing shapes. I understand if I set `batch_size=None` in `emotions_encoded = emotions.map(tokenize, batched=True)` the problem should be fixed as the whole dataset would be treated as a single batch. 
But is there a way to do that in batches? ",
"If you use the same batch_size for your two maps, you should get the exact same batches - therefore all containing the same shapes",
"Oh I see. Thanks. Closing this issue. "
] | 2022-11-01T03:57:59 | 2022-11-08T11:35:26 | 2022-11-08T11:35:26 | MEMBER | null | ### Describe the bug
I am using the `emotion` dataset from the Hub for sequence classification. After training the model, I am using it to generate predictions for all the entries present in the `validation` split of the dataset.
```py
def get_test_accuracy(model):
    def fn(batch):
        inputs = {k:v.to(device) for k,v in batch.items()
                  if k in tokenizer.model_input_names}
        with torch.no_grad():
            output = model(**inputs)
        pred_label = torch.argmax(output.logits, axis=-1)
        return {"predicted_label": pred_label.cpu().numpy()}
    return fn
```
This is how `get_test_accuracy()` is being used:
```py
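# note: accuracy_fn below is assumed to be get_test_accuracy(model), as defined in the Colab notebook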
emotions = load_dataset("emotion")

def tokenize(batch):
    return tokenizer(batch["text"], padding=True, truncation=True)

emotions_encoded = emotions.map(tokenize, batched=True)
emotions_encoded.set_format("torch",
                            columns=["input_ids", "attention_mask", "label"])

new_dataset = emotions_encoded["validation"].map(
    accuracy_fn, batched=True, batch_size=128
)
```
Complete code is available in the Colab Notebook provided below.
The `map()` process fails midway giving:
```shell
AttributeError Traceback (most recent call last)
<ipython-input-8-ad24ac288eb4> in <module>
2
3 new_dataset = emotions_encoded["validation"].map(
----> 4 accuracy_fn, batched=True, batch_size=128
5 )
7 frames
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2588 new_fingerprint=new_fingerprint,
2589 disable_tqdm=disable_tqdm,
-> 2590 desc=desc,
2591 )
2592 else:
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
582 self: "Dataset" = kwargs.pop("self")
583 # apply actual function
--> 584 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
585 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
586 for dataset in datasets:
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
549 }
550 # apply actual function
--> 551 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
552 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
553 # re-apply format to the output
/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
478 # Call actual function
479
--> 480 out = func(self, *args, **kwargs)
481
482 # Update fingerprint of in-place transforms + update in-place history of transforms
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2970 indices,
2971 check_same_num_examples=len(input_dataset.list_indexes()) > 0,
-> 2972 offset=offset,
2973 )
2974 except NumExamplesMismatchError:
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)
2850 if with_rank:
2851 additional_args += (rank,)
-> 2852 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
2853 if update_data is None:
2854 # Check if the function returns updated examples
<ipython-input-6-4e0d280426f6> in fn(batch)
1 def get_test_accuracy(model):
2 def fn(batch):
----> 3 inputs = {k:v.to(device) for k,v in batch.items()
4 if k in tokenizer.model_input_names}
5 with torch.no_grad():
<ipython-input-6-4e0d280426f6> in <dictcomp>(.0)
2 def fn(batch):
3 inputs = {k:v.to(device) for k,v in batch.items()
----> 4 if k in tokenizer.model_input_names}
5 with torch.no_grad():
6 output = model(**inputs)
AttributeError: 'list' object has no attribute 'to'
```
As you'd notice in the notebook, the process fails _midway_ and not at the beginning.
Is this expected?
### Steps to reproduce the bug
Colab Notebook:
https://colab.research.google.com/gist/sayakpaul/d1570d537faf39040d02d77b1ed7de07/scratchpad.ipynb
### Expected behavior
The mapping process should complete as is. If you switch the `split` to `test` it works as expected.
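Based on the discussion in the issue comments (batches padded with `padding=True`/`padding="longest"` end up with different tensor shapes when the two `map` calls use different batch sizes), one workaround sketch is to pad every example to a fixed length so any batch size stacks cleanly; the `max_length` value below is an arbitrary assumption:

```py
def tokenize(batch):
    # fixed-length padding so every batch yields identically shaped tensors
    return tokenizer(batch["text"], padding="max_length", truncation=True, max_length=128)

emotions_encoded = emotions.map(tokenize, batched=True)
```

Reusing the same `batch_size` in both `map` calls, as suggested in the comments, should work as well.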
### Environment info
Colab | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5179/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5179/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5178 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5178/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5178/comments | https://api.github.com/repos/huggingface/datasets/issues/5178/events | https://github.com/huggingface/datasets/issues/5178 | 1,430,800,810 | I_kwDODunzps5VSEmq | 5,178 | Unable to download the Chinese `wikipedia`, the dumpstatus.json not found! | {
"login": "beyondguo",
"id": 37113676,
"node_id": "MDQ6VXNlcjM3MTEzNjc2",
"avatar_url": "https://avatars.githubusercontent.com/u/37113676?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/beyondguo",
"html_url": "https://github.com/beyondguo",
"followers_url": "https://api.github.com/users/beyondguo/followers",
"following_url": "https://api.github.com/users/beyondguo/following{/other_user}",
"gists_url": "https://api.github.com/users/beyondguo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/beyondguo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/beyondguo/subscriptions",
"organizations_url": "https://api.github.com/users/beyondguo/orgs",
"repos_url": "https://api.github.com/users/beyondguo/repos",
"events_url": "https://api.github.com/users/beyondguo/events{/privacy}",
"received_events_url": "https://api.github.com/users/beyondguo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"In the dumps page of the wiki (https://dumps.wikimedia.org/zhwiki/), I found the following dumps:\r\n```\r\nIndex of /zhwiki/\r\n[../](https://dumps.wikimedia.org/)\r\n[20220701/](https://dumps.wikimedia.org/zhwiki/20220701/) 21-Aug-2022 01:48 -\r\n[20220720/](https://dumps.wikimedia.org/zhwiki/20220720/) 02-Sep-2022 01:48 -\r\n[20220801/](https://dumps.wikimedia.org/zhwiki/20220801/) 21-Sep-2022 01:44 -\r\n[20220820/](https://dumps.wikimedia.org/zhwiki/20220820/) 01-Oct-2022 09:39 -\r\n[20220901/](https://dumps.wikimedia.org/zhwiki/20220901/) 20-Oct-2022 09:44 -\r\n[20220920/](https://dumps.wikimedia.org/zhwiki/20220920/) 23-Sep-2022 12:06 -\r\n[20221001/](https://dumps.wikimedia.org/zhwiki/20221001/) 04-Oct-2022 15:10 -\r\n[20221020/](https://dumps.wikimedia.org/zhwiki/20221020/) 01-Nov-2022 03:15 -\r\n[latest/](https://dumps.wikimedia.org/zhwiki/latest/) 01-Nov-2022 03:15 -\r\n```\r\n\r\nMaybe the older dumps are not available which caused the downloading failure? \r\n\r\nHowever, when I changed to the newer version:\r\n```\r\ndata = load_dataset('wikipedia', '20220701.zh', beam_runner='DirectRunner')\r\n```\r\n\r\nit shows:\r\n```\r\nValueError: BuilderConfig 20220701.zh not found. Available: ['20220301.aa', '20220301.ab', '20220301.ace', '20220301.ady', '20220301.af', '20220301.ak', '20220301.als', '20220301.am', '20220301.an', '20220301.ang', '20220301.ar', '20220301.arc', '20220301.arz', '20220301.as', '20220301.ast', '20220301.atj', '20220301.av', '20220301.ay', '20220301.az', '20220301.azb', '20220301.ba', '20220301.bar', '20220301.bat-smg', '20220301.bcl', '20220301.be', '20220301.be-x-old', '20220301.bg', '20220301.bh', '20220301.bi', '20220301.bjn', '20220301.bm', '20220301.bn', '20220301.bo', '20220301.bpy', '20220301.br', '20220301.bs', '20220301.bug', '20220301.bxr', '20220301.ca', '20220301.cbk-zam', '20220301.cdo', '20220301.ce', '20220301.ceb', '20220301.ch', '20220301.cho', '20220301.chr', '20220301.chy', '20220301.ckb', '20220301.co', '20220301.cr', '20220301.crh', '20220301.cs', '20220301.csb', '20220301.cu', '20220301.cv', '20220301.cy', '20220301.da', '20220301.de', '20220301.din', '20220301.diq', '20220301.dsb', '20220301.dty', '20220301.dv', '20220301.dz', '20220301.ee', '20220301.el', '20220301.eml', '20220301.en', '20220301.eo', '20220301.es', '20220301.et', '20220301.eu', '20220301.ext', '20220301.fa', '20220301.ff', '20220301.fi', '20220301.fiu-vro', '20220301.fj', '20220301.fo', '20220301.fr', '20220301.frp', '20220301.frr', '20220301.fur', '20220301.fy', '20220301.ga', '20220301.gag', '20220301.gan', '20220301.gd', '20220301.gl', '20220301.glk', '20220301.gn', '20220301.gom', '20220301.gor', '20220301.got', '20220301.gu', '20220301.gv', '20220301.ha', '20220301.hak', '20220301.haw', '20220301.he', '20220301.hi', '20220301.hif', '20220301.ho', '20220301.hr', '20220301.hsb', '20220301.ht', '20220301.hu', '20220301.hy', '20220301.ia', '20220301.id', '20220301.ie', '20220301.ig', '20220301.ii', '20220301.ik', '20220301.ilo', '20220301.inh', '20220301.io', '20220301.is', '20220301.it', '20220301.iu', '20220301.ja', '20220301.jam', '20220301.jbo', '20220301.jv', '20220301.ka', '20220301.kaa', '20220301.kab', '20220301.kbd', '20220301.kbp', '20220301.kg', '20220301.ki', '20220301.kj', '20220301.kk', '20220301.kl', '20220301.km', '20220301.kn', '20220301.ko', '20220301.koi', '20220301.krc', '20220301.ks', '20220301.ksh', '20220301.ku', '20220301.kv', '20220301.kw', '20220301.ky', '20220301.la', '20220301.lad', '20220301.lb', '20220301.lbe', '20220301.lez', 
'20220301.lfn', '20220301.lg', '20220301.li', '20220301.lij', '20220301.lmo', '20220301.ln', '20220301.lo', '20220301.lrc', '20220301.lt', '20220301.ltg', '20220301.lv', '20220301.mai', '20220301.map-bms', '20220301.mdf', '20220301.mg', '20220301.mh', '20220301.mhr', '20220301.mi', '20220301.min', '20220301.mk', '20220301.ml', '20220301.mn', '20220301.mr', '20220301.mrj', '20220301.ms', '20220301.mt', '20220301.mus', '20220301.mwl', '20220301.my', '20220301.myv', '20220301.mzn', '20220301.na', '20220301.nah', '20220301.nap', '20220301.nds', '20220301.nds-nl', '20220301.ne', '20220301.new', '20220301.ng', '20220301.nl', '20220301.nn', '20220301.no', '20220301.nov', '20220301.nrm', '20220301.nso', '20220301.nv', '20220301.ny', '20220301.oc', '20220301.olo', '20220301.om', '20220301.or', '20220301.os', '20220301.pa', '20220301.pag', '20220301.pam', '20220301.pap', '20220301.pcd', '20220301.pdc', '20220301.pfl', '20220301.pi', '20220301.pih', '20220301.pl', '20220301.pms', '20220301.pnb', '20220301.pnt', '20220301.ps', '20220301.pt', '20220301.qu', '20220301.rm', '20220301.rmy', '20220301.rn', '20220301.ro', '20220301.roa-rup', '20220301.roa-tara', '20220301.ru', '20220301.rue', '20220301.rw', '20220301.sa', '20220301.sah', '20220301.sat', '20220301.sc', '20220301.scn', '20220301.sco', '20220301.sd', '20220301.se', '20220301.sg', '20220301.sh', '20220301.si', '20220301.simple', '20220301.sk', '20220301.sl', '20220301.sm', '20220301.sn', '20220301.so', '20220301.sq', '20220301.sr', '20220301.srn', '20220301.ss', '20220301.st', '20220301.stq', '20220301.su', '20220301.sv', '20220301.sw', '20220301.szl', '20220301.ta', '20220301.tcy', '20220301.te', '20220301.tet', '20220301.tg', '20220301.th', '20220301.ti', '20220301.tk', '20220301.tl', '20220301.tn', '20220301.to', '20220301.tpi', '20220301.tr', '20220301.ts', '20220301.tt', '20220301.tum', '20220301.tw', '20220301.ty', '20220301.tyv', '20220301.udm', '20220301.ug', '20220301.uk', '20220301.ur', '20220301.uz', '20220301.ve', '20220301.vec', '20220301.vep', '20220301.vi', '20220301.vls', '20220301.vo', '20220301.wa', '20220301.war', '20220301.wo', '20220301.wuu', '20220301.xal', '20220301.xh', '20220301.xmf', '20220301.yi', '20220301.yo', '20220301.za', '20220301.zea', '20220301.zh', '20220301.zh-classical', '20220301.zh-min-nan', '20220301.zh-yue', '20220301.zu']\r\n```\r\n\r\nSo I guess adding the latest dumps versions to the `BuilderConfig` may solve the problem? But how to add it?",
"Hi, @beyondguo, thanks for reporting.\r\n\r\nYou have all the information in the dataset card: https://huggingface.co/datasets/wikipedia\r\n\r\n> Then, you can load any subset of Wikipedia per language and per date this way:\r\n> ```python\r\n> from datasets import load_dataset\r\n> \r\n> load_dataset(\"wikipedia\", language=\"sw\", date=\"20220120\", beam_runner=...) \r\n> ```\r\n> where you can pass as beam_runner any Apache Beam supported runner for (distributed) data processing (see [here](https://beam.apache.org/documentation/runners/capability-matrix/)). Pass \"DirectRunner\" to run it on your machine.\r\n> \r\n> You can find the full list of languages and dates [here](https://dumps.wikimedia.org/backup-index.html).\r\n\r\nNote that you have to pass the language and date as keyword arguments, and the available dates depend on the language and can be found on Wikimedia website.",
"Also:\r\n> Some subsets of Wikipedia have already been processed by HuggingFace, and you can load them just with:\r\n> ```python\r\n> load_dataset(\"wikipedia\", \"20220301.en\")\r\n> ```\r\n> The list of pre-processed subsets is:\r\n> - \"20220301.de\"\r\n> - \"20220301.en\"\r\n> - \"20220301.fr\"\r\n> - \"20220301.frr\"\r\n> - \"20220301.it\"\r\n> - \"20220301.simple\""
] | 2022-11-01T03:17:55 | 2022-11-02T08:27:15 | 2022-11-02T08:24:29 | NONE | null | ### Describe the bug
I tried:
`data = load_dataset('wikipedia', '20220301.zh', beam_runner='DirectRunner')`
and
`data = load_dataset("wikipedia", language="zh", date="20220301", beam_runner='DirectRunner')`
but both got:
`FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/zhwiki/20220301/dumpstatus.json`
the full report is:
```
FileNotFoundError Traceback (most recent call last)
<ipython-input-13-d07c5021090c> in <module>
1 from datasets import load_dataset
2
----> 3 data = load_dataset("wikipedia", language="zh", date="20220301", beam_runner='DirectRunner')<?, ?it/s]
/opt/conda/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1740
1741 # Download and prepare data
-> 1742 builder_instance.download_and_prepare(
1743 download_config=download_config,
1744 download_mode=download_mode,
/opt/conda/lib/python3.8/site-packages/datasets/builder.py in download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, storage_options, **download_and_prepare_kwargs)
812 **download_and_prepare_kwargs,
813 }
--> 814 self._download_and_prepare(
815 dl_manager=dl_manager,
816 verify_infos=verify_infos,
/opt/conda/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs)
1645 options=beam_options,
1646 )
-> 1647 super()._download_and_prepare(
1648 dl_manager, verify_infos=False, pipeline=pipeline, **prepare_splits_kwargs
1649 ) # TODO handle verify_infos in beam datasets
/opt/conda/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
881 split_dict = SplitDict(dataset_name=self.name)
882 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 883 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
884
885 # Checksums verification
~/.cache/huggingface/modules/datasets_modules/datasets/wikipedia/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559/wikipedia.py in _split_generators(self, dl_manager, pipeline)
943 info_url = _base_url(lang) + _INFO_FILE
944 # Use dictionary since testing mock always returns the same result.
--> 945 downloaded_files = dl_manager.download_and_extract({"info": info_url})
946
947 xml_urls = []
/opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py in download_and_extract(self, url_or_urls)
431 extracted_path(s): `str`, extracted paths of given URL(s).
432 """
--> 433 return self.extract(self.download(url_or_urls))
434
435 def get_recorded_sizes_checksums(self):
/opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py in download(self, url_or_urls)
308
309 start_time = datetime.now()
--> 310 downloaded_path_or_paths = map_nested(
311 download_func,
312 url_or_urls,
/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, types, disable_tqdm, desc)
427 num_proc = 1
428 if num_proc <= 1 or len(iterable) < parallel_min_length:
--> 429 mapped = [
430 _single_map_nested((function, obj, types, None, True, None))
431 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0)
428 if num_proc <= 1 or len(iterable) < parallel_min_length:
429 mapped = [
--> 430 _single_map_nested((function, obj, types, None, True, None))
431 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
432 ]
/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args)
329 # Singleton first to spare some computation
330 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 331 return function(data_struct)
332
333 # Reduce logging to keep things readable in multiprocessing with tqdm
/opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py in _download(self, url_or_filename, download_config)
335 # append the relative path to the base_path
336 url_or_filename = url_or_path_join(self._base_path, url_or_filename)
--> 337 return cached_path(url_or_filename, download_config=download_config)
338
339 def iter_archive(self, path_or_buf: Union[str, io.BufferedReader]):
/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
186 if is_remote_url(url_or_filename):
187 # URL, so get it from the cache (downloading if necessary)
--> 188 output_path = get_from_cache(
189 url_or_filename,
190 cache_dir=cache_dir,
/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params, download_desc)
533 )
534 elif response is not None and response.status_code == 404:
--> 535 raise FileNotFoundError(f"Couldn't find file at {url}")
536 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
537 if head_error is not None:
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/zhwiki/20220301/dumpstatus.json
```
### Steps to reproduce the bug
`data = load_dataset('wikipedia', '20220301.zh', beam_runner='DirectRunner')`
### Expected behavior
download the data
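As pointed out in the comments, the date has to be passed as a keyword argument and must exist on the Wikimedia dumps page for that language; for example (the date below is one listed for zhwiki at the time and may since have been removed):

```python
from datasets import load_dataset

# check https://dumps.wikimedia.org/zhwiki/ for currently available dates
data = load_dataset("wikipedia", language="zh", date="20221020", beam_runner="DirectRunner")
```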
### Environment info
python3.6
latest datasets/transformers version | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5178/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5178/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5177 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5177/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5177/comments | https://api.github.com/repos/huggingface/datasets/issues/5177/events | https://github.com/huggingface/datasets/pull/5177 | 1,430,238,556 | PR_kwDODunzps5B55iV | 5,177 | Update create image dataset docs | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-31T17:45:56 | 2022-11-02T17:15:22 | 2022-11-02T17:13:02 | MEMBER | null | Based on @osanseviero and community feedback, it wasn't super clear how to upload a dataset to the Hub after creating something like an image captioning dataset. This PR adds a brief section on how to upload the dataset with `push_to_hub`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5177/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5177/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5177",
"html_url": "https://github.com/huggingface/datasets/pull/5177",
"diff_url": "https://github.com/huggingface/datasets/pull/5177.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5177.patch",
"merged_at": "2022-11-02T17:13:02"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5176 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5176/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5176/comments | https://api.github.com/repos/huggingface/datasets/issues/5176/events | https://github.com/huggingface/datasets/issues/5176 | 1,430,214,539 | I_kwDODunzps5VP1eL | 5,176 | prepare dataset for cloud storage doesn't work | {
"login": "largenn",
"id": 27285078,
"node_id": "MDQ6VXNlcjI3Mjg1MDc4",
"avatar_url": "https://avatars.githubusercontent.com/u/27285078?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/largenn",
"html_url": "https://github.com/largenn",
"followers_url": "https://api.github.com/users/largenn/followers",
"following_url": "https://api.github.com/users/largenn/following{/other_user}",
"gists_url": "https://api.github.com/users/largenn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/largenn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/largenn/subscriptions",
"organizations_url": "https://api.github.com/users/largenn/orgs",
"repos_url": "https://api.github.com/users/largenn/repos",
"events_url": "https://api.github.com/users/largenn/events{/privacy}",
"received_events_url": "https://api.github.com/users/largenn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"It looks like an issue with `gcsfs`, are you able to instantiate a `GCSFileSystem` manually ?",
"closing since it was probably due to gcsfs"
] | 2022-10-31T17:28:57 | 2023-03-28T09:11:46 | 2023-03-28T09:11:45 | NONE | null | ### Describe the bug
Following the [documentation](https://huggingface.co/docs/datasets/filesystems#load-and-save-your-datasets-using-your-cloud-storage-filesystem) and [this PR](https://github.com/huggingface/datasets/pull/4724), I was downloading a Hugging Face dataset and storing it in cloud storage.
```
from datasets import load_dataset, load_dataset_builder
dataset = load_dataset_builder("wikipedia", "20220301.en", cache_dir='LOCAL_PATH')
dataset.download_and_prepare("gs://Bucket_NAME", file_format="parquet")
```
The above code successfully downloaded the dataset; however, `download_and_prepare` returns an error:
> Traceback (most recent call last):
> File "/shared/zhuiai/research/wiki/wiki/gcsfs.py", line 12, in <module>
> dataset.download_and_prepare("gs://upgen/dataset/wiki", file_format="parquet")
> File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/datasets/builder.py", line 671, in download_and_prepare
> fs_token_paths = fsspec.get_fs_token_paths(output_dir, storage_options=storage_options)
> File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/fsspec/core.py", line 635, in get_fs_token_paths
> cls = get_filesystem_class(protocol)
> File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/fsspec/registry.py", line 234, in get_filesystem_class
> register_implementation(protocol, _import_class(bit["class"]))
> File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/fsspec/registry.py", line 257, in _import_class
> mod = importlib.import_module(mod)
> File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/importlib/__init__.py", line 127, in import_module
> return _bootstrap._gcd_import(name[level:], package, level)
> File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
> File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
> File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
> File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
> File "<frozen importlib._bootstrap_external>", line 850, in exec_module
> File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
> File "/shared/zhuiai/research/wiki/wiki/gcsfs.py", line 12, in <module>
> dataset.download_and_prepare("gs://upgen/dataset/wiki", file_format="parquet")
> File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/datasets/builder.py", line 671, in download_and_prepare
> fs_token_paths = fsspec.get_fs_token_paths(output_dir, storage_options=storage_options)
> File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/fsspec/core.py", line 635, in get_fs_token_paths
> cls = get_filesystem_class(protocol)
> File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/fsspec/registry.py", line 234, in get_filesystem_class
> register_implementation(protocol, _import_class(bit["class"]))
> File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/fsspec/registry.py", line 258, in _import_class
> return getattr(mod, name)
> AttributeError: partially initialized module 'gcsfs' has no attribute 'GCSFileSystem' (most likely due to a circular import)
### Steps to reproduce the bug
1. pip install datasets==2.6.1 gcsfs==2022.8.2
2. Run the following code to reproduce the issue (change `LOCAL_PATH` and `Bucket_NAME` accordingly)
```
from datasets import load_dataset, load_dataset_builder
dataset = load_dataset_builder("wikipedia", "20220301.en", cache_dir='LOCAL_PATH')
dataset.download_and_prepare("gs://Bucket_NAME", file_format="parquet")
```
### Expected behavior
Expecting the dataset to download successfully and upload to cloud storage.
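Judging from the traceback paths (`/shared/zhuiai/research/wiki/wiki/gcsfs.py`), the reproducing script itself seems to be named `gcsfs.py`, which would shadow the `gcsfs` package and explain the "partially initialized module" circular-import error. A sketch of the same calls from a script with a non-colliding name (the filename below is made up):

```python
# prepare_wiki.py - any name that does not collide with the gcsfs package
from datasets import load_dataset_builder

builder = load_dataset_builder("wikipedia", "20220301.en", cache_dir="LOCAL_PATH")
builder.download_and_prepare("gs://Bucket_NAME", file_format="parquet")
```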
### Environment info
- `datasets` version: 2.6.1
- Platform: Linux-5.15.0-25-generic-x86_64-with-glibc2.35
- Python version: 3.9.12
- PyArrow version: 7.0.0
- Pandas version: 1.5.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5176/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5176/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5175 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5175/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5175/comments | https://api.github.com/repos/huggingface/datasets/issues/5175/events | https://github.com/huggingface/datasets/issues/5175 | 1,428,696,231 | I_kwDODunzps5VKCyn | 5,175 | Loading an external NER dataset | {
"login": "Taghreed7878",
"id": 112555442,
"node_id": "U_kgDOBrV1sg",
"avatar_url": "https://avatars.githubusercontent.com/u/112555442?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Taghreed7878",
"html_url": "https://github.com/Taghreed7878",
"followers_url": "https://api.github.com/users/Taghreed7878/followers",
"following_url": "https://api.github.com/users/Taghreed7878/following{/other_user}",
"gists_url": "https://api.github.com/users/Taghreed7878/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Taghreed7878/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Taghreed7878/subscriptions",
"organizations_url": "https://api.github.com/users/Taghreed7878/orgs",
"repos_url": "https://api.github.com/users/Taghreed7878/repos",
"events_url": "https://api.github.com/users/Taghreed7878/events{/privacy}",
"received_events_url": "https://api.github.com/users/Taghreed7878/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-10-30T09:31:55 | 2022-11-01T13:15:49 | 2022-11-01T13:15:49 | NONE | null | I need to use huggingface datasets to load a custom dataset similar to conll2003, but with more entities, and each of the files contains only two columns: word and NER tag.
I tried this code snippet that I found here as an answer to a similar issue:
```python
from datasets import Dataset

INPUT_COLUMNS = "ID Text NER".split()

def read_conll(file):
    example = {col: [] for col in INPUT_COLUMNS}
    idx = 0
    with open(file) as f:
        for line in f:
            if line.startswith("-DOCSTART-") or line == "\n" or not line:
                if example[next(iter(example))]:
                    yield idx, example
                    idx += 1
                    example = {col: [] for col in INPUT_COLUMNS}
            else:
                row_cols = line.split()
                for i, col in enumerate(example):
                    example[col] = row_cols[i].rstrip()

train = Dataset.from_generator(read_conll, gen_kwargs={"file": "some_path"})
```
But the following error happened:
ValueError: Please pass `features` or at least one example when writing data | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5175/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5175/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5174 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5174/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5174/comments | https://api.github.com/repos/huggingface/datasets/issues/5174/events | https://github.com/huggingface/datasets/pull/5174 | 1,427,216,416 | PR_kwDODunzps5Bv3rh | 5,174 | Preserve None in list type cast in PyArrow 10 | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-28T12:48:30 | 2022-10-28T13:15:33 | 2022-10-28T13:13:18 | CONTRIBUTOR | null | The `ListArray` type in PyArrow 10.0.0 supports the `mask` parameter, which allows us to preserve Nones in nested lists in `cast` instead of replacing them with empty lists.
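For illustration, a rough sketch of the idea (treat the exact constructor and parameter names here as assumptions to be checked against the PyArrow 10 docs):

```python
import pyarrow as pa

offsets = pa.array([0, 2, 2, 3], type=pa.int32())
values = pa.array([1, 2, 3])
mask = pa.array([False, True, False])  # True marks entries that should stay null

# with a mask, the null entry is preserved instead of becoming an empty list
arr = pa.ListArray.from_arrays(offsets, values, mask=mask)
print(arr.to_pylist())  # expected: [[1, 2], None, [3]]
```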
Fix https://github.com/huggingface/datasets/issues/3676 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5174/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5174/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5174",
"html_url": "https://github.com/huggingface/datasets/pull/5174",
"diff_url": "https://github.com/huggingface/datasets/pull/5174.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5174.patch",
"merged_at": "2022-10-28T13:13:18"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5173 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5173/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5173/comments | https://api.github.com/repos/huggingface/datasets/issues/5173/events | https://github.com/huggingface/datasets/pull/5173 | 1,425,880,441 | PR_kwDODunzps5BreEm | 5,173 | Raise ffmpeg warnings only once | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-27T15:58:33 | 2022-10-28T16:03:05 | 2022-10-28T16:00:51 | CONTRIBUTOR | null | Our warnings look nice now.
`librosa` warning that was raised at each decoding:
```
/usr/local/lib/python3.7/dist-packages/librosa/core/audio.py:165: UserWarning: PySoundFile failed. Trying audioread instead.
warnings.warn("PySoundFile failed. Trying audioread instead.")
```
is suppressed with `filterwarnings("ignore")` in a context manager. That means the first warning is also ignored (setting `filterwarnings("once")` didn't work!), so I added a note to our own message that audioread is used for decoding. Hope it's enough.
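For reference, a minimal sketch of this suppression approach (hypothetical helper, not the exact implementation in this PR):
```python
import warnings

def decode_with_librosa(path):
    # Assumes librosa is installed; the message matches the warning quoted above,
    # so it is ignored inside this block only and not at every decoding call.
    import librosa
    with warnings.catch_warnings():
        warnings.filterwarnings("ignore", message="PySoundFile failed. Trying audioread instead.")
        return librosa.load(path, sr=None)
```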
Tests failed at first because they used to check that the warning was raised at (each) decoding in the `librosa` case, but now we throw only one warning (at the first decoding). I removed this check for warnings - do you think that's fine? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5173/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5173/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5173",
"html_url": "https://github.com/huggingface/datasets/pull/5173",
"diff_url": "https://github.com/huggingface/datasets/pull/5173.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5173.patch",
"merged_at": "2022-10-28T16:00:51"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5172 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5172/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5172/comments | https://api.github.com/repos/huggingface/datasets/issues/5172/events | https://github.com/huggingface/datasets/issues/5172 | 1,425,523,114 | I_kwDODunzps5U98Gq | 5,172 | Inconsistency behavior between handling local file protocol and other FS protocols | {
"login": "leoleoasd",
"id": 37735580,
"node_id": "MDQ6VXNlcjM3NzM1NTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/37735580?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leoleoasd",
"html_url": "https://github.com/leoleoasd",
"followers_url": "https://api.github.com/users/leoleoasd/followers",
"following_url": "https://api.github.com/users/leoleoasd/following{/other_user}",
"gists_url": "https://api.github.com/users/leoleoasd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leoleoasd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leoleoasd/subscriptions",
"organizations_url": "https://api.github.com/users/leoleoasd/orgs",
"repos_url": "https://api.github.com/users/leoleoasd/repos",
"events_url": "https://api.github.com/users/leoleoasd/events{/privacy}",
"received_events_url": "https://api.github.com/users/leoleoasd/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2022-10-27T12:03:20 | 2022-10-27T12:05:19 | null | NONE | null | ### Describe the bug
These lines are used during `load_from_disk`:
```
if is_remote_filesystem(fs):
dest_dataset_dict_path = extract_path_from_uri(dataset_dict_path)
else:
fs = fsspec.filesystem("file")
dest_dataset_dict_path = dataset_dict_path
```
If a local FS is given, it will use the URL as the path name. If a remote FS is given, it will use the path part of the URL. This is inconsistent behavior when handling files: with a remote FS you must pass a URL, but with a local FS, even if you pass `LocalFileSystem` as `fs`, you still can't use a `file://` URL - it will be recognized as a directory named `file:`.
### Steps to reproduce the bug
```
import fsspec.core
url = "hdfs:///somewhere/MNIST"
# url = "file:///somewhere/MNIST"
fs, path = fsspec.core.url_to_fs(url)
fs.ls(path) # this will always work
load_from_disk(path, fs) # only works for local FS
load_from_disk(url, fs) # only works for remote FS
```
### Expected behavior
one of `url` or `path` should always work
I think extracting the path from the given URL with `fsspec.core.url_to_fs`, instead of using `is_remote_filesystem` and `extract_path_from_uri`, will fix this, since:
```
fsspec.core.url_to_fs("/somewhere/MNIST") -> LocalFs, '/somewhere/MNIST'
fsspec.core.url_to_fs("file:///somewhere/MNIST") -> LocalFs, '/somewhere/MNIST'
fsspec.core.url_to_fs("hdfs:///somewhere/MNIST") -> HDFS, '/somewhere/MNIST'
```
and
```
fsspec.core.url_to_fs("file:///somewhere/MNIST") == fsspec.core.url_to_fs("/somewhere/MNIST")
```
In theory, this wouldn't break anything, since passing a local path or a remote URI still works. It would only affect local URIs (making them work too).
### Environment info
- `datasets` version: 2.5.1
- Platform: Linux-5.4.205.1**HIDDEN**
- Python version: 3.7.10
- PyArrow version: 8.0.0
- Pandas version: 1.2.4
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5172/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5172/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5171 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5171/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5171/comments | https://api.github.com/repos/huggingface/datasets/issues/5171/events | https://github.com/huggingface/datasets/pull/5171 | 1,425,355,111 | PR_kwDODunzps5BpsXf | 5,171 | Add PB and TB in convert_file_size_to_int | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-27T09:50:31 | 2022-10-27T12:14:27 | 2022-10-27T12:12:30 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5171/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5171/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5171",
"html_url": "https://github.com/huggingface/datasets/pull/5171",
"diff_url": "https://github.com/huggingface/datasets/pull/5171.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5171.patch",
"merged_at": "2022-10-27T12:12:30"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5170 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5170/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5170/comments | https://api.github.com/repos/huggingface/datasets/issues/5170/events | https://github.com/huggingface/datasets/issues/5170 | 1,425,301,835 | I_kwDODunzps5U9GFL | 5,170 | [Caching] Deterministic hashing of torch tensors | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [] | 2022-10-27T09:15:15 | 2022-11-02T17:18:43 | 2022-11-02T17:18:43 | MEMBER | null | Currently this fails
```python
import torch
from datasets.fingerprint import Hasher
t = torch.tensor([1.])
def func(x):
return t + x
hash1 = Hasher.hash(func)
t = torch.tensor([1.])
hash2 = Hasher.hash(func)
assert hash1 == hash2
```
Also as noticed in https://discuss.huggingface.co/t/dataset-cant-cache-models-outputs/24945, using a model in a `map` function doesn't work well with caching. Indeed the `bert-base-uncased` model has a different hash every time you reload it. Supporting torch tensors may also help in this case.
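For illustration, a rough sketch of what such support could look like (the `pklregister` import path and usage below are assumptions, not the actual `datasets` internals):
```python
import torch
from datasets.utils.py_utils import pklregister  # assumed import path

@pklregister(torch.Tensor)
def _save_torch_tensor(pickler, obj):
    # Reduce the tensor to its numpy data so that equal tensors always pickle to
    # the same bytes, making the fingerprint deterministic across sessions.
    def _create_tensor(np_array):
        return torch.from_numpy(np_array)
    pickler.save_reduce(_create_tensor, (obj.detach().cpu().numpy(),), obj=obj)
```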
This can be fixed by registering a custom pickling function for torch tensors - as we did for other objects such as CodeType, FunctionType and Regex in `py_utils.py` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5170/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5170/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5169 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5169/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5169/comments | https://api.github.com/repos/huggingface/datasets/issues/5169/events | https://github.com/huggingface/datasets/pull/5169 | 1,425,075,254 | PR_kwDODunzps5Bow1Q | 5,169 | Add "ipykernel" to list of `co_filename`s to remove | {
"login": "gpucce",
"id": 32967787,
"node_id": "MDQ6VXNlcjMyOTY3Nzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/32967787?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gpucce",
"html_url": "https://github.com/gpucce",
"followers_url": "https://api.github.com/users/gpucce/followers",
"following_url": "https://api.github.com/users/gpucce/following{/other_user}",
"gists_url": "https://api.github.com/users/gpucce/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gpucce/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gpucce/subscriptions",
"organizations_url": "https://api.github.com/users/gpucce/orgs",
"repos_url": "https://api.github.com/users/gpucce/repos",
"events_url": "https://api.github.com/users/gpucce/events{/privacy}",
"received_events_url": "https://api.github.com/users/gpucce/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I don't know how I could add some tests for this, although jupyter is not among the dependencies so at least that would need to be added. If someone can tell a recommended way I will try to do it!",
"So testing by myself and looking around the jupyter codebase it looks like the `co_filename` of objects created within jupyter is of the form `f\"{tempfile.tempdir}/ipykernel_{id1}/{id2}.py\"` however I can't find the exact command setting it so I [asked in discourse](https://discourse.jupyter.org/t/co-filename-within-notebooks/16538). For now adapted the `co_filename` filter and added tests according to this I hope to get an answer and possibly fix based on that.",
"Ok ! I think it's fine to just check if the parent folder is named like `ipykernel_*` then\r\n\r\nsee the source code of how it's created:\r\n\r\nhttps://github.com/ipython/ipykernel/blob/7f73ff705510b35d1e2faad7f5a676c620ce08d4/ipykernel/compiler.py#L72-L75",
"Should look better now didn't notice the duplicated tests",
"_The documentation is not available anymore as the PR was closed or merged._",
"Should work now on windows too",
"I did the changes you suggested and tried to rebase, the first part went fine, the second less so :( \r\n\r\nIf you have time to spare, can you tell me what should I do now to fix this? thanks",
"Instead of rebasing you can just merge `main` into your branch, otherwise the GitHub preview of your PR shows changes of from `main`.\r\n\r\nFeel free to close this PR and create a new one. Or alternatively your can force push to this PR with a new clean git history.",
"I have force-pushed and merged main, only shows the right changes, if you can run CI one more time it should be ok now",
"Hi, sorry I have been busy, the thing is I can't really understand why the test fail, besides the ugly thing I had done in the last commit to check if within CI smth stange happened with `os`, locally tests pass",
"The CI wasn't passing when using the latest version `dill==0.3.6`. We have a separate function to dump CodeType objects for 0.3.6\r\n\r\nI applied the same changes you did to this other function as well - it should be all good now",
"> The CI wasn't passing when using the latest version `dill==0.3.6`. We have a separate function to dump CodeType objects for 0.3.6\r\n> \r\n> I applied the same changes you did to this other function as well - it should be all good now\r\n\r\nThanks, it would have taken a long time to figure out :)"
] | 2022-10-27T05:56:17 | 2022-11-02T15:46:00 | 2022-11-02T15:43:20 | CONTRIBUTOR | null | Should resolve #5157 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5169/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5169/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5169",
"html_url": "https://github.com/huggingface/datasets/pull/5169",
"diff_url": "https://github.com/huggingface/datasets/pull/5169.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5169.patch",
"merged_at": "2022-11-02T15:43:20"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5168 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5168/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5168/comments | https://api.github.com/repos/huggingface/datasets/issues/5168/events | https://github.com/huggingface/datasets/pull/5168 | 1,424,368,572 | PR_kwDODunzps5BmYnq | 5,168 | Fix CI require beam | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I'm merging this PR because it is quite a trivial fix and this is required by:\r\n- #5166"
] | 2022-10-26T16:49:33 | 2022-10-27T09:25:19 | 2022-10-27T09:23:26 | MEMBER | null | This PR:
- Fixes the CI `require_beam`: before it was requiring PyTorch instead
```python
def require_beam(test_case):
if not config.TORCH_AVAILABLE:
test_case = unittest.skip("test requires PyTorch")(test_case)
return test_case
```
- Fixes a missing `require_beam` in `test_beam_based_builder_download_and_prepare_as_parquet`
- Refactors `require_beam` to use `pytest` (`skipif`) instead | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5168/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5168/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5168",
"html_url": "https://github.com/huggingface/datasets/pull/5168",
"diff_url": "https://github.com/huggingface/datasets/pull/5168.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5168.patch",
"merged_at": "2022-10-27T09:23:26"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5167 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5167/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5167/comments | https://api.github.com/repos/huggingface/datasets/issues/5167/events | https://github.com/huggingface/datasets/pull/5167 | 1,424,124,477 | PR_kwDODunzps5BljPw | 5,167 | Add ffmpeg4 installation instructions in warnings | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"To make it warn only once, feel free to use a global counter in python - and if the warning has already been done, you don't do it again",
"> Added the same formatting for the error message :)\r\n\r\nnice!! thank you! \r\n\r\n> Oh and regarding the warning counter, you can do it in another PR maybe ?\r\n\r\nYes, more warnings is better then no warnings.... I'll merge when the CI passes"
] | 2022-10-26T14:21:14 | 2022-10-27T09:01:12 | 2022-10-27T08:58:58 | CONTRIBUTOR | null | Adds instructions on how to install `ffmpeg=4` on Linux (relevant for Colab users).
Looks pretty ugly because I didn't find a way to check the `ffmpeg` version from Python (without `subprocess.call()`; `ctypes.util.find_library` doesn't work), so the warning is raised on each decoding. Any suggestions on how to make it look nice are welcome!
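For reference, a rough sketch of the `subprocess`-based version check mentioned above (hypothetical helper, not part of this PR):
```python
import subprocess

def ffmpeg_major_version():
    # Returns the installed ffmpeg major version, or None if ffmpeg is missing
    # or its version string cannot be parsed.
    try:
        out = subprocess.run(["ffmpeg", "-version"], capture_output=True, text=True).stdout
        return int(out.split("ffmpeg version ")[1].split(".")[0].lstrip("n"))
    except (FileNotFoundError, IndexError, ValueError):
        return None
```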
This is how it looks on Colab:
![image](https://user-images.githubusercontent.com/16348744/198052412-d48018d1-4416-4aa5-9114-f7f9b4af031f.png)
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5167/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5167/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5167",
"html_url": "https://github.com/huggingface/datasets/pull/5167",
"diff_url": "https://github.com/huggingface/datasets/pull/5167.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5167.patch",
"merged_at": "2022-10-27T08:58:58"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5166 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5166/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5166/comments | https://api.github.com/repos/huggingface/datasets/issues/5166/events | https://github.com/huggingface/datasets/pull/5166 | 1,423,629,582 | PR_kwDODunzps5Bj5IQ | 5,166 | Support dill 0.3.6 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I think it hasn't been merged ? https://github.com/uqfoundation/dill/pull/501\r\n\r\nThough I can see that the CI is green because it uses dill 0.3.1.1 - we should probably fix the dill version in both CIs:\r\n- use 0.3.1.1 for the CI with the minimum requirements\r\n- use latest for the CI with the latest requirements",
"I have noticed our CI uses `dill-0.3.1.1`, so not really testing dill 0.3.6...",
"The dill version in our CI is due to `apache-beam`...",
"I've tested locally: we need a specific fix for 0.3.6 (different from the previous ones)...",
"I think we can force the version of dill to be whatever we want in the CI - no matter what beam says. The alternative would be to run beam tests separately but it's more work",
"@lhoestq I tried the easiest solution: force dill==0.3.6 ignoring the requirement of apache-beam. But it doesn't work:\r\n- For example, for `tests/test_builder.py::test_beam_based_builder_download_and_prepare_as_parquet`:\r\n```\r\n @dill.dill.register(dill.dill.ModuleType)\r\n def save_module(pickler, obj):\r\n if dill.dill.is_dill(pickler) and obj is pickler._main:\r\n return old_save_module(pickler, obj)\r\n else:\r\n> dill.dill.log.info('M2: %s' % obj)\r\nE AttributeError: module 'dill._dill' has no attribute 'log'\r\n\r\nvenv/lib/python3.9/site-packages/apache_beam/internal/dill_pickler.py:170: AttributeError\r\n```\r\n - Apache Beam registers some dill functions (`save_module`) which are incompatible with dill 0.3.6 (in 0.3.6 'dill._dill' has no attribute 'log' but 'logger')\r\n - This has an impact in CI tests using either Apache Beam or `multiprocess` (even without using Apache Beam!):\r\n```\r\nFAILED tests/test_beam.py::BeamBuilderTest::test_download_and_prepare - AttributeError: module 'dill._dill' has no attribute 'log'\r\nFAILED tests/test_beam.py::BeamBuilderTest::test_nested_features - AttributeError: module 'dill._dill' has no attribute 'log'\r\nFAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_filter_multiprocessing_in_memory - AttributeError: module 'dill._dill' has no attribute 'log'\r\nFAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_filter_multiprocessing_on_disk - AttributeError: module 'dill._dill' has no attribute 'log'\r\nFAILED tests/test_builder.py::test_beam_based_download_and_prepare - AttributeError: module 'dill._dill' has no attribute 'log'\r\nFAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_caching_in_memory - AttributeError: module 'dill._dill' has no attribute 'log'\r\nFAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_caching_on_disk - AttributeError: module 'dill._dill' has no attribute 'log'\r\nFAILED tests/test_builder.py::test_beam_based_as_dataset - AttributeError: module 'dill._dill' has no attribute 'log'\r\nFAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_multiprocessing_in_memory - AttributeError: module 'dill._dill' has no attribute 'log'\r\nFAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_multiprocessing_on_disk - AttributeError: module 'dill._dill' has no attribute 'log'\r\nFAILED tests/test_builder.py::test_beam_based_builder_download_and_prepare_as_parquet - AttributeError: module 'dill._dill' has no attribute 'log'\r\n```\r\n\r\nI guess we should implement the other option: run beam tests separately.\r\n\r\nI'm opening another PR for the CI refactoring.",
"Ah crap >< maybe only install apache_beam for the \"minimum requirements\" CI",
"@lhoestq if we install apache-beam only in the \"minimum requirements\" CI, then this other PR should be merged first:\r\n- #5168 \r\n\r\nOtherwise, our CI for \"latest\" will fail because it will try to run the beam tests (because PyTorch is installed but indeed apache-beam is not installed).",
"One of the test is failing because we set \r\n```python\r\n# google colab doesn't allow to pickle loggers\r\n# so we want to make sure each tests passes without pickling the logger\r\ndef reduce_ex(self):\r\n raise pickle.PicklingError()\r\n\r\ndatasets.arrow_dataset.logger.__reduce_ex__ = reduce_ex\r\n```\r\nin `test_arrow_dataset.py` to avoid pickling the logger because it used to fail on google colab.\r\n\r\nNow pickling the logger seems to be working on google colab again - so you can remove it, and it should fix some tests",
"For the other 2 errors:\r\n- FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_caching_in_memory - _pickle.PicklingError: Can't pickle <class 'unittest.mock.MagicMock'>: it's not the same object as unittest.mock.MagicMock\r\n- FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_caching_on_disk - _pickle.PicklingError: Can't pickle <class 'unittest.mock.MagicMock'>: it's not the same object as unittest.mock.MagicMock\r\n\r\nI have implemented a pickable MagicMock."
] | 2022-10-26T08:24:59 | 2022-10-28T05:41:05 | 2022-10-28T05:38:14 | MEMBER | null | This PR:
- ~~Unpins dill to allow installing dill>=0.3.6~~
- ~~Removes the fix on dill for >=0.3.6 because they implemented a deterministic mode (to be confirmed by @anivegesana)~~
- Pins dill<0.3.7 to allow latest dill 0.3.6
- Implements a fix for dill `save_function` for dill 0.3.6
- Additionally had to implement a fix for dill `save_code` and `_save_regex` for dill 0.3.6
- Fixes the CI so that the latest dill version is tested (besides the minimum 0.3.1.1 required by apache-beam 2.42.0)
Fix #5162. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5166/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5166/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5166",
"html_url": "https://github.com/huggingface/datasets/pull/5166",
"diff_url": "https://github.com/huggingface/datasets/pull/5166.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5166.patch",
"merged_at": "2022-10-28T05:38:14"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5165 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5165/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5165/comments | https://api.github.com/repos/huggingface/datasets/issues/5165/events | https://github.com/huggingface/datasets/issues/5165 | 1,423,616,677 | I_kwDODunzps5U2qql | 5,165 | Memory explosion when trying to access 4d tensors in datasets cast to torch or np | {
"login": "clefourrier",
"id": 22726840,
"node_id": "MDQ6VXNlcjIyNzI2ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clefourrier",
"html_url": "https://github.com/clefourrier",
"followers_url": "https://api.github.com/users/clefourrier/followers",
"following_url": "https://api.github.com/users/clefourrier/following{/other_user}",
"gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions",
"organizations_url": "https://api.github.com/users/clefourrier/orgs",
"repos_url": "https://api.github.com/users/clefourrier/repos",
"events_url": "https://api.github.com/users/clefourrier/events{/privacy}",
"received_events_url": "https://api.github.com/users/clefourrier/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2022-10-26T08:14:47 | 2022-10-26T08:14:47 | null | CONTRIBUTOR | null | ### Describe the bug
When trying to access an item by index in a `datasets.Dataset` cast to torch/np using `set_format` or `with_format`, we get a memory explosion if the item contains 4d (or higher-dimensional) tensors.
### Steps to reproduce the bug
MWE:
```python
from datasets import load_dataset
import numpy as np
def create_4d_tensor(item):
i = item["num_nodes"]
item["x_big"] = np.random.rand(i, 2*i, int(i/2), 1) + 1 # we create a big 4d tensor
return item
if __name__ == "__main__":
dataset = load_dataset(path=f"graphs-datasets/PROTEINS")
# This works
print(dataset["train"].format)
print(dataset["train"][0].keys())
dataset = dataset.map(
create_4d_tensor,
batched=False,
writer_batch_size=100,
)
# This works
print(dataset["train"].format)
print(dataset["train"][0].keys())
dataset.set_format("torch")
print(dataset["train"].format)
# This gets killed :(
print(dataset["train"][0].keys())
```
The problem likely comes from `format_table` [here](https://cs.github.com/huggingface/datasets/blob/f09f781be3278156ce3aa6ec90c1926b1846a78f/src/datasets/arrow_dataset.py#L2328)
### Expected behavior
No memory explosion when trying to access dataset items after cast.
### Environment info
- `datasets` version: 2.3.2
- Platform: Linux-5.14.0-1054-oem-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5165/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5165/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5164 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5164/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5164/comments | https://api.github.com/repos/huggingface/datasets/issues/5164/events | https://github.com/huggingface/datasets/pull/5164 | 1,422,813,247 | PR_kwDODunzps5BhL4J | 5,164 | WIP: drop labels in Image and Audio folders by default | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"close in favor of https://github.com/huggingface/datasets/pull/5192"
] | 2022-10-25T17:21:49 | 2022-11-16T14:21:16 | 2022-11-02T14:03:02 | CONTRIBUTOR | null | will fix https://github.com/huggingface/datasets/issues/5153 and redundant labels displaying for most of the images datasets on the Hub (which are used just to store files)
TODO: discuss adding `drop_labels` (and `drop_metadata`) params to yaml | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5164/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5164/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5164",
"html_url": "https://github.com/huggingface/datasets/pull/5164",
"diff_url": "https://github.com/huggingface/datasets/pull/5164.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5164.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5163 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5163/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5163/comments | https://api.github.com/repos/huggingface/datasets/issues/5163/events | https://github.com/huggingface/datasets/pull/5163 | 1,422,540,337 | PR_kwDODunzps5BgQxp | 5,163 | Reduce default max `writer_batch_size` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-25T14:14:52 | 2022-10-27T12:19:27 | 2022-10-27T12:16:47 | CONTRIBUTOR | null | Reduce the default writer_batch_size from 10k to 1k examples. Additionally, align the default values of `batch_size` and `writer_batch_size` in `Dataset.cast` with the values from the corresponding docstring. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5163/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5163/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5163",
"html_url": "https://github.com/huggingface/datasets/pull/5163",
"diff_url": "https://github.com/huggingface/datasets/pull/5163.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5163.patch",
"merged_at": "2022-10-27T12:16:47"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5162 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5162/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5162/comments | https://api.github.com/repos/huggingface/datasets/issues/5162/events | https://github.com/huggingface/datasets/issues/5162 | 1,422,461,112 | I_kwDODunzps5UyQi4 | 5,162 | Pip-compile: Could not find a version that matches dill<0.3.6,>=0.3.6 | {
"login": "Rijgersberg",
"id": 8604946,
"node_id": "MDQ6VXNlcjg2MDQ5NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8604946?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rijgersberg",
"html_url": "https://github.com/Rijgersberg",
"followers_url": "https://api.github.com/users/Rijgersberg/followers",
"following_url": "https://api.github.com/users/Rijgersberg/following{/other_user}",
"gists_url": "https://api.github.com/users/Rijgersberg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rijgersberg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rijgersberg/subscriptions",
"organizations_url": "https://api.github.com/users/Rijgersberg/orgs",
"repos_url": "https://api.github.com/users/Rijgersberg/repos",
"events_url": "https://api.github.com/users/Rijgersberg/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rijgersberg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @Rijgersberg.\r\n\r\nWe were waiting for the release of `dill` 0.3.6, that happened 2 days ago (24 Oct 2022): https://github.com/uqfoundation/dill/releases/tag/dill-0.3.6\r\n- See comment: https://github.com/huggingface/datasets/pull/4397#discussion_r880629543\r\n\r\nAlso `multiprocess` 0.70.14 was released 2 days ago: https://github.com/uqfoundation/multiprocess/releases/tag/multiprocess-0.70.14\r\n\r\nWe are addressing this issue to align dependencies.",
"In your specific setup, I guess the compatible configuration is with `multiprocess` 0.70.13 (instead of 0.70.14).",
"@Rijgersberg this issue is fixed. It will be available in our next `datasets` release.",
"Thanks!",
"> @Rijgersberg this issue is fixed. It will be available in our next `datasets` release.\n\nAny chance you have a eta? ",
"@StefanSamba we are disussing about making a release early this week.",
"@Rijgersberg, please also that you can make `pip-compile` work by using the backtracking resolver (instead of the legacy one): https://pip-tools.readthedocs.io/en/latest/#a-note-on-resolvers\r\n```\r\npip-compile --resolver=backtracking requirements.in\r\n```\r\nThis resolver will automatically use `multiprocess` 0.70.13 version. "
] | 2022-10-25T13:23:50 | 2022-11-14T08:25:37 | 2022-10-28T05:38:15 | NONE | null | ### Describe the bug
When using `pip-compile` (part of `pip-tools`) to generate a pinned requirements file that includes `datasets`, a version conflict of `dill` appears.
It is caused by a transitive dependency conflict between `datasets` and `multiprocess`.
### Steps to reproduce the bug
```bash
$ echo "datasets" > requirements.in
$ pip install pip-tools
$ pip-compile requirements.in
Could not find a version that matches dill<0.3.6,>=0.3.6 (from datasets==2.6.1->-r requirements.in (line 1))
Tried: 0.2, 0.2, 0.2.1, 0.2.1, 0.2.2, 0.2.2, 0.2.3, 0.2.3, 0.2.4, 0.2.4, 0.2.5, 0.2.5, 0.2.6, 0.2.7, 0.2.7.1, 0.2.8, 0.2.8.1, 0.2.8.2, 0.2.9, 0.3.0, 0.3.1, 0.3.1.1, 0.3.2, 0.3.3, 0.3.3, 0.3.4, 0.3.4, 0.3.5, 0.3.5, 0.3.5.1, 0.3.5.1, 0.3.6, 0.3.6
Skipped pre-versions: 0.1a1, 0.2a1, 0.2a1, 0.2b1, 0.2b1
There are incompatible versions in the resolved dependencies:
dill<0.3.6 (from datasets==2.6.1->-r requirements.in (line 1))
dill>=0.3.6 (from multiprocess==0.70.14->datasets==2.6.1->-r requirements.in (line 1))
```
### Expected behavior
A correctly generated file `requirements.txt` with pinned dependencies
### Environment info
Tested with versions `2.6.1, 2.6.0, 2.5.2` on Python 3.8 and 3.10 on Ubuntu 20.04LTS and Python 3.10 on MacOS 12.6 (M1). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5162/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5162/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5161 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5161/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5161/comments | https://api.github.com/repos/huggingface/datasets/issues/5161/events | https://github.com/huggingface/datasets/issues/5161 | 1,422,371,748 | I_kwDODunzps5Ux6uk | 5,161 | Dataset can’t cache model’s outputs | {
"login": "jongjyh",
"id": 37979232,
"node_id": "MDQ6VXNlcjM3OTc5MjMy",
"avatar_url": "https://avatars.githubusercontent.com/u/37979232?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jongjyh",
"html_url": "https://github.com/jongjyh",
"followers_url": "https://api.github.com/users/jongjyh/followers",
"following_url": "https://api.github.com/users/jongjyh/following{/other_user}",
"gists_url": "https://api.github.com/users/jongjyh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jongjyh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jongjyh/subscriptions",
"organizations_url": "https://api.github.com/users/jongjyh/orgs",
"repos_url": "https://api.github.com/users/jongjyh/repos",
"events_url": "https://api.github.com/users/jongjyh/events{/privacy}",
"received_events_url": "https://api.github.com/users/jongjyh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Addressed in https://github.com/huggingface/datasets/pull/5191 (torch.Tensor objects now produce deterministic hashes)"
] | 2022-10-25T12:19:00 | 2022-11-03T16:12:52 | 2022-11-03T16:12:51 | NONE | null | ### Describe the bug
Hi,
I am trying to cache some outputs of a teacher model (knowledge distillation) using the `map` function of the Datasets library, but every time I run my code, all the sequences are recomputed. I tested a BERT model like this and got a different hash on every single run - any idea how to deal with this?
### Steps to reproduce the bug
1. run the code below
2. get a different hash
```
from transformers import BertModel
from transformers import AutoTokenizer
import torch
token = ['hello']
model = BertModel.from_pretrained("bert-base-uncased").eval()
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
def abcd():
with torch.no_grad():
out = model(**tok(token,return_tensors='pt'))[0]
# out = tok(token)
return out
from datasets.fingerprint import Hasher
my_func = abcd
print(Hasher.hash(my_func))
print(abcd())
```
### Expected behavior
I want to cache all the model outputs
### Environment info
datasets:2.5.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5161/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5161/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5160 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5160/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5160/comments | https://api.github.com/repos/huggingface/datasets/issues/5160/events | https://github.com/huggingface/datasets/issues/5160 | 1,422,193,938 | I_kwDODunzps5UxPUS | 5,160 | Automatically add filename for image/audio folder | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
},
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
},
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Also cc @anton-l ",
"BTW the exact same holds true for the audio folder",
"I'm fine with adding a new column with the file name personally. Not sure how breaking this is though",
"@patrickvonplaten do you mean just filename or full relative path inside the repo?\r\nI think it shouldn't be breaking, at least I cannot come up with any case where it is. Maybe @mariosasko can?\r\n\r\nalso I think that the problem here and in general is that Image/AudioFolder has default configuration which implies automatic label creation if there is not metadata file. It can be changed when you load the dataset with `load_dataset` but not on it's Hub page. \r\n\r\n",
"> also I think that the problem here and in general Image/AudioFolder has default configuration which implies automatic label creation if there is not metadata file\r\n\r\nYea I agree it's often the wrong default. We can also imagine adding the builder's parameters as YAML in the repo.",
"@lhoestq yes I also got the idea of some YAML config! not sure of what priority it is though.",
"but it would actually also solve this issue: https://github.com/huggingface/datasets/issues/5153",
"I meant just the file name (no path) that would already be super helpful IMO :-) (maybe dir+filename if there are dirs in the folder)",
"@patrickvonplaten one more time, to be sure I understand you.\r\nFor example, we have data structure like this:\r\n```\r\n├─ data/\r\n│ └─ subdir/\r\n│ └── cats/\r\n│ ├── 0.jpg\r\n│ ├── 1.jpg\r\n│ └── 2.jpg\r\n│ └── dogs/\r\n│ ├── 0.jpg\r\n│ ├── 1.jpg\r\n│ └── 2.jpg\r\n└── another_subdir/\r\n ├── 10.jpg\r\n ├── 11.jpg\r\n └── 12.jpg\r\n```\r\nIs it okay to provide `\"data/subdir/cats/0.jpg\"`, `\"data/subdir/dogs/0.jpg\"`, `\"data/another_subdir/10.jpg\"`?\r\nI think providing just filenames might be confusing if they are not unique, as in this example. ",
"Yes I think the relative path as you proposed makes a lot of sense :-) "
] | 2022-10-25T09:56:49 | 2022-10-26T16:51:46 | null | MEMBER | null | ### Feature request
When creating a custom audio or image dataset, it would be great to automatically have access to the filename. It should be both:
a) Automatically displayed in the viewer
b) Automatically added as a column to the dataset when doing `load_dataset`
In `diffusers` our tests rely quite heavily on images and audio files now, and it's a bit tedious at the moment to download specific images from a datasets repo.
E.g. we have a dataset of images for tests in `diffusers`: https://huggingface.co/datasets/hf-internal-testing/diffusers-images
where it would be extremely nice to have direct access to the filename both visually on the datasets page (@severo) as well as via the `load_dataset` function. We currently have some awkward functionality to download images by path name: https://github.com/huggingface/diffusers/blob/2fb8fafa4b761f6fc144cf75a6f6f0ea6af3a1c1/src/diffusers/utils/testing_utils.py#L131
It would be much nicer to just go through `load_dataset(...)`.
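As a possible interim workaround (a sketch, not the requested feature; it assumes the repo loads via the standard image loader), casting the column with `decode=False` exposes the underlying file path and bytes instead of a decoded `PIL` image:
```python
from datasets import load_dataset, Image

ds = load_dataset("hf-internal-testing/diffusers-images", split="train")
# With decode=False, each example holds {"path": ..., "bytes": ...} for the image,
# so the original file name can be recovered from "path".
ds = ds.cast_column("image", Image(decode=False))
print(ds[0]["image"]["path"])
```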
### Motivation
Intuitively, the filename is something people understand directly. E.g. if you upload a folder of images online, it's nice to see each image with its filename right next to it and to be able to use it right away.
The label, on the other hand, is less intuitive to understand, as you haven't added it yourself.
### Your contribution
Not sure if I have the time to add it myself anytime soon, but it would help us a lot for `diffusers`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5160/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5160/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5159 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5159/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5159/comments | https://api.github.com/repos/huggingface/datasets/issues/5159/events | https://github.com/huggingface/datasets/pull/5159 | 1,422,172,080 | PR_kwDODunzps5BfBN9 | 5,159 | fsspec lock reset in multiprocessing | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-25T09:41:59 | 2022-11-03T20:51:15 | 2022-11-03T20:48:53 | MEMBER | null | `fsspec` added a clean way of resetting its lock - instead of doing it manually | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5159/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5159/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5159",
"html_url": "https://github.com/huggingface/datasets/pull/5159",
"diff_url": "https://github.com/huggingface/datasets/pull/5159.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5159.patch",
"merged_at": "2022-11-03T20:48:53"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5158 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5158/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5158/comments | https://api.github.com/repos/huggingface/datasets/issues/5158/events | https://github.com/huggingface/datasets/issues/5158 | 1,422,059,287 | I_kwDODunzps5UwucX | 5,158 | Fix language and license tag names in all Hub datasets | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"There are currently 402 datasets with deprecated \"languages\" or \"licenses\".",
"hey @albertvillanova ,i would love to work on this issue if you like.",
"Hi @ayushthe1, thanks for your offer.\r\n\r\nBut as you can see, I self-assigned this issue.\r\n\r\nI have already fixed 200 out of the 402 datasets. My script is still running and fixing the rest.\r\n\r\nFor example: https://huggingface.co/datasets/fhamborg/news_sentiment_newsmtsc/discussions/2/files",
"Thanks for your time. Will try next time. 😇",
"@ayushthe1 feel free to take one of the non-assigned open issues: https://github.com/huggingface/datasets/issues",
"This is done."
] | 2022-10-25T08:19:29 | 2022-10-25T11:27:26 | 2022-10-25T10:42:19 | MEMBER | null | While working on this:
- #5137
we realized there are still many datasets with deprecated "languages" and "licenses" tag names (instead of "language" and "license").
This is a blocking issue: no subsequent PR can be opened to modify their metadata, because a ValueError will be thrown.
We should fix the "language" and "license" tag names in all Hub datasets.
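For reference, the fix is a rename inside each dataset card's YAML metadata block; a minimal before/after sketch (the values shown are illustrative):
```yaml
# deprecated tag names (to be replaced)
languages:
- en
licenses:
- mit

# fixed tag names
language:
- en
license:
- mit
```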
TODO:
- [x] Fix language and license tag names in 402 Hub datasets
CC: @julien-c | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5158/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5158/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5157 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5157/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5157/comments | https://api.github.com/repos/huggingface/datasets/issues/5157/events | https://github.com/huggingface/datasets/issues/5157 | 1,421,703,577 | I_kwDODunzps5UvXmZ | 5,157 | Consistent caching between python and jupyter | {
"login": "gpucce",
"id": 32967787,
"node_id": "MDQ6VXNlcjMyOTY3Nzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/32967787?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gpucce",
"html_url": "https://github.com/gpucce",
"followers_url": "https://api.github.com/users/gpucce/followers",
"following_url": "https://api.github.com/users/gpucce/following{/other_user}",
"gists_url": "https://api.github.com/users/gpucce/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gpucce/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gpucce/subscriptions",
"organizations_url": "https://api.github.com/users/gpucce/orgs",
"repos_url": "https://api.github.com/users/gpucce/repos",
"events_url": "https://api.github.com/users/gpucce/events{/privacy}",
"received_events_url": "https://api.github.com/users/gpucce/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "gpucce",
"id": 32967787,
"node_id": "MDQ6VXNlcjMyOTY3Nzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/32967787?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gpucce",
"html_url": "https://github.com/gpucce",
"followers_url": "https://api.github.com/users/gpucce/followers",
"following_url": "https://api.github.com/users/gpucce/following{/other_user}",
"gists_url": "https://api.github.com/users/gpucce/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gpucce/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gpucce/subscriptions",
"organizations_url": "https://api.github.com/users/gpucce/orgs",
"repos_url": "https://api.github.com/users/gpucce/repos",
"events_url": "https://api.github.com/users/gpucce/events{/privacy}",
"received_events_url": "https://api.github.com/users/gpucce/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "gpucce",
"id": 32967787,
"node_id": "MDQ6VXNlcjMyOTY3Nzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/32967787?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gpucce",
"html_url": "https://github.com/gpucce",
"followers_url": "https://api.github.com/users/gpucce/followers",
"following_url": "https://api.github.com/users/gpucce/following{/other_user}",
"gists_url": "https://api.github.com/users/gpucce/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gpucce/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gpucce/subscriptions",
"organizations_url": "https://api.github.com/users/gpucce/orgs",
"repos_url": "https://api.github.com/users/gpucce/repos",
"events_url": "https://api.github.com/users/gpucce/events{/privacy}",
"received_events_url": "https://api.github.com/users/gpucce/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi ! Maybe it's possible to have a consistent hash for a function defined in `__main__` and a function define in a notebook.\r\n\r\nHowever for functions imported from another location, pickle uses the location to identify the code, so in that case we can't do much I believe.\r\n\r\nWould it be ok for you if we only try to do this for functions in `__main__` / jupyter ?\r\n\r\nIf you'd like to contribute, you can read this part of the code and let me know if you have questions:\r\n\r\nhttps://github.com/huggingface/datasets/blob/7feeb5648a63b6135a8259dedc3b1e19185ee4c7/src/datasets/utils/py_utils.py#L617-L643\r\n\r\nI think the key here would be to also ignore the \"co_filename\" of functions defined in `__main__`",
"Seems like a good solution, I will start a PR and see if I understood the changes needed. Thanks!"
] | 2022-10-25T01:34:33 | 2022-11-02T15:43:22 | 2022-11-02T15:43:22 | CONTRIBUTOR | null | ### Feature request
I hope this is not my mistake. Currently, if I use `load_dataset` from a Python session on a custom dataset to do the preprocessing, the result is saved in the cache, and other Python sessions load it from the cache. However, calling the same code from a Jupyter notebook does not work, meaning the preprocessing starts from scratch.
If adjusting the hashes is impossible, is there a way to manually set the dataset fingerprint to "force" this behaviour?
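For reference, a minimal sketch of the manual-fingerprint route, assuming the existing `new_fingerprint` argument of `Dataset.map` is an acceptable way to pin the cache key across sessions (the dataset name and preprocessing function are placeholders):
```python
from datasets import load_dataset

def preprocess(example):
    example["text"] = example["text"].lower()  # placeholder preprocessing
    return example

ds = load_dataset("my_custom_dataset", split="train")  # hypothetical dataset
# Pinning the fingerprint makes the cache file deterministic, so a Jupyter
# session can reuse what a plain Python session already computed.
ds = ds.map(preprocess, new_fingerprint="preprocess-v1")
```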
### Motivation
If this is not already the case and I am doing something wrong, it would be useful to have the two fingerprints consistent, so one can create the dataset once and then try small things in Jupyter without preprocessing everything again.
### Your contribution
I am happy to try a PR if you give me some pointers where the changes should happen | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5157/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5157/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5156 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5156/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5156/comments | https://api.github.com/repos/huggingface/datasets/issues/5156/events | https://github.com/huggingface/datasets/issues/5156 | 1,421,667,125 | I_kwDODunzps5UvOs1 | 5,156 | Unable to download dataset using Azure Data Lake Gen 2 | {
"login": "clarissesimoes",
"id": 87379512,
"node_id": "MDQ6VXNlcjg3Mzc5NTEy",
"avatar_url": "https://avatars.githubusercontent.com/u/87379512?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clarissesimoes",
"html_url": "https://github.com/clarissesimoes",
"followers_url": "https://api.github.com/users/clarissesimoes/followers",
"following_url": "https://api.github.com/users/clarissesimoes/following{/other_user}",
"gists_url": "https://api.github.com/users/clarissesimoes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clarissesimoes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clarissesimoes/subscriptions",
"organizations_url": "https://api.github.com/users/clarissesimoes/orgs",
"repos_url": "https://api.github.com/users/clarissesimoes/repos",
"events_url": "https://api.github.com/users/clarissesimoes/events{/privacy}",
"received_events_url": "https://api.github.com/users/clarissesimoes/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! From the `adlfs` docs, there are two filesystems you can use:\r\n> To use the Gen1 filesystem:\r\n> - known_implementations[‘adl’] = {‘class’: ‘adlfs.AzureDatalakeFileSystem’}\r\n> \r\n> To use the Gen2 filesystem:\r\n> - known_implementations[‘abfs’] = {‘class’: ‘adlfs.AzureBlobFileSystem’}\r\n\r\nIf I'm not mistaken you're using the second one - so you should use `abfs://` instead of `adl://`, and also run this at the beginning of your script:\r\n```python\r\nfrom fsspec.registry import known_implementations\r\nknown_implementations['abfs'] = {'class': 'adlfs.AzureDatalakeFileSystem'}\r\n```\r\n\r\n",
"Thank you @lhoestq . Great call.\r\nUsing the default class from `known_implementations` dict solved my problem\r\n```\r\nknown_implementations[‘abfs’] = {‘class’: ‘adlfs.AzureBlobFileSystem’}\r\n```\r\nI'm closing this issue."
] | 2022-10-25T00:43:18 | 2022-11-17T23:37:09 | 2022-11-17T23:37:08 | NONE | null | ### Describe the bug
When using the DatasetBuilder method with the credentials for the cloud storage Azure Data Lake (adl) Gen2, the following error is shown:
```
Traceback (most recent call last):
File "download_hf_dataset.py", line 143, in <module>
main()
File "download_hf_dataset.py", line 102, in main
builder.download_and_prepare(save_dir, storage_options=storage_options, max_shard_size="250MB", file_format="parquet")
File "/home/clarisses/miniconda3/envs/hf_datasets_env/lib/python3.8/site-packages/datasets/builder.py", line 671, in download_and_prepare
fs_token_paths = fsspec.get_fs_token_paths(output_dir, storage_options=storage_options)
File "/home/clarisses/miniconda3/envs/hf_datasets_env/lib/python3.8/site-packages/fsspec/core.py", line 639, in get_fs_token_paths
fs = cls(**options)
File "/home/clarisses/miniconda3/envs/hf_datasets_env/lib/python3.8/site-packages/fsspec/spec.py", line 76, in __call__
obj = super().__call__(*args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'account_name'
```
If I don't pass the storage_options argument (leave it as None), it requires the credentials used in ADL Gen 1:
`TypeError: __init__() missing 3 required positional arguments: 'tenant_id', 'client_id', and 'client_secret'`
Thus, it is not possible to download a dataset from the cloud using Azure Data Lake (adl) Gen2.
### Steps to reproduce the bug
Assuming that you have an Azure account and a Storage Account that can be used to reproduce the issue:
1. Create a dict with the format to connect to Azure Data Lake Gen 2
```
storage_options = {"account_name": ACCOUNT_NAME, "account_key": ACCOUNT_KEY) # gen 2 filesystem
```
2. Create a dataset builder for any HF hosted dataset
```
builder = load_dataset_builder(dataset_name)
```
3. Try to download the dataset passing the storage_options as an argument
```
save_dir = 'adl://my_save_dir'
builder.download_and_prepare(save_dir, storage_options=storage_options, max_shard_size="250MB", file_format="parquet")
```
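For completeness, a hedged sketch of the Gen2-style call suggested by the `adlfs` docs, where the `abfs://` protocol maps to `adlfs.AzureBlobFileSystem` and accepts `account_name`/`account_key` (container and path names are illustrative):
```python
storage_options = {"account_name": ACCOUNT_NAME, "account_key": ACCOUNT_KEY}
save_dir = "abfs://my-container/my_save_dir"
builder.download_and_prepare(
    save_dir, storage_options=storage_options, max_shard_size="250MB", file_format="parquet"
)
```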
### Expected behavior
Not seeing the error mentioned above and being able to download the dataset to the provided path on ADL
### Environment info
- `datasets` version: 2.6.1
- Platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- PyArrow version: 9.0.0
- Pandas version: 1.5.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5156/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5156/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5155 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5155/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5155/comments | https://api.github.com/repos/huggingface/datasets/issues/5155/events | https://github.com/huggingface/datasets/pull/5155 | 1,421,278,748 | PR_kwDODunzps5BcCYr | 5,155 | TextConfig: added "errors" | {
"login": "NightMachinery",
"id": 36224762,
"node_id": "MDQ6VXNlcjM2MjI0NzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/36224762?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NightMachinery",
"html_url": "https://github.com/NightMachinery",
"followers_url": "https://api.github.com/users/NightMachinery/followers",
"following_url": "https://api.github.com/users/NightMachinery/following{/other_user}",
"gists_url": "https://api.github.com/users/NightMachinery/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NightMachinery/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NightMachinery/subscriptions",
"organizations_url": "https://api.github.com/users/NightMachinery/orgs",
"repos_url": "https://api.github.com/users/NightMachinery/repos",
"events_url": "https://api.github.com/users/NightMachinery/events{/privacy}",
"received_events_url": "https://api.github.com/users/NightMachinery/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for adding this ! You can fix the CI by formatting your code using the `make style` command :)",
"[**@lhoestq**](https://github.com/lhoestq) commented on [Oct 27, 2022, 4:08 PM GMT+3:30](https://github.com/huggingface/datasets/pull/5155#issuecomment-1293464680 \"2022-10-27T12:38:04Z - Replied by Github Reply Comments\"):\r\n> Thanks for adding this ! You can fix the CI by formatting your code using the `make style` command :)\r\n\r\nI ran this and force pushed the changes."
] | 2022-10-24T18:56:52 | 2022-11-03T13:38:13 | 2022-11-03T13:35:35 | CONTRIBUTOR | null | This patch adds the ability to set the `errors` option of `open` for loading text datasets. I needed it because some data I had scraped had bad bytes in it, so I needed `errors='ignore'`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5155/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5155/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5155",
"html_url": "https://github.com/huggingface/datasets/pull/5155",
"diff_url": "https://github.com/huggingface/datasets/pull/5155.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5155.patch",
"merged_at": "2022-11-03T13:35:35"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5154 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5154/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5154/comments | https://api.github.com/repos/huggingface/datasets/issues/5154/events | https://github.com/huggingface/datasets/pull/5154 | 1,421,161,992 | PR_kwDODunzps5BbpQZ | 5,154 | Test latest fsspec in CI | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"actually the latest fsspec is already installed "
] | 2022-10-24T17:18:13 | 2022-10-25T09:32:51 | 2022-10-25T09:30:45 | MEMBER | null | Following the discussion in https://discuss.huggingface.co/t/attributeerror-module-fsspec-has-no-attribute-asyn/19255 I think we need to test the latest fsspec in the CI | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5154/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5154/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5154",
"html_url": "https://github.com/huggingface/datasets/pull/5154",
"diff_url": "https://github.com/huggingface/datasets/pull/5154.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5154.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5153 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5153/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5153/comments | https://api.github.com/repos/huggingface/datasets/issues/5153/events | https://github.com/huggingface/datasets/issues/5153 | 1,420,833,457 | I_kwDODunzps5UsDKx | 5,153 | default Image/AudioFolder infers labels when there is no metadata files even if there is only one dir | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Makes sense! For the last structure, we could count the path segments (delimited by \"/\" for URLs and `os.sep` for local paths) to ensure all inferred labels are on the same level. Otherwise, I think it's safe to assume they are meaningless and ignore them.\r\n"
] | 2022-10-24T13:28:18 | 2022-11-15T16:31:10 | 2022-11-15T16:31:09 | CONTRIBUTOR | null | ### Describe the bug
By default, FolderBasedBuilder infers labels if there are no metadata files, even if it's meaningless (for example, when all the files are in a single directory or in the root folder; see this repo as an example: https://huggingface.co/datasets/patrickvonplaten/audios).
This is a corner case that comes up during quick exploration of images or audios on the Hub.
### Steps to reproduce the bug
If you have a directory like this:
```
repo
image1.jpg
image2.jpg
image3.jpg
```
or
```
repo
data
image1.jpg
image2.jpg
image3.jpg
```
doing `ds = load_dataset(repo)` would create a `label` feature:
```python
print(ds["train"][0])
>> {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x375 at 0x7FB5326468E0>, 'label': 0}
```
Also, if you have the following structure:
```
repo
data
image1.jpg
image2.jpg
image3.jpg
image4.jpg
image5.jpg
image6.jpg
```
it will infer two labels:
```python
print(ds["train"][0])
print(ds["train"][-1])
>> {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x375 at 0x7FB5326468E0>, 'label': 1}
>> {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x415 at 0x7FB5326555B0>, 'label': 0}
```
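A possible current workaround (a sketch, assuming the loader's `drop_labels` parameter is available in the installed version; the local path is illustrative) is to disable label inference explicitly:
```python
from datasets import load_dataset

# Explicitly disable label inference so only the base "image" feature is created.
ds = load_dataset("imagefolder", data_dir="path/to/repo", drop_labels=True)
print(ds["train"].features)  # expected: only an 'image' column, no 'label'
```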
### Expected behavior
We should have only one base feature (Image/Audio) in such cases.
### Environment info
all versions of `datasets` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5153/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5153/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5152 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5152/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5152/comments | https://api.github.com/repos/huggingface/datasets/issues/5152/events | https://github.com/huggingface/datasets/issues/5152 | 1,420,808,919 | I_kwDODunzps5Ur9LX | 5,152 | refactor FolderBasedBuilder and Image/AudioFolder tests | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2851292821,
"node_id": "MDU6TGFiZWwyODUxMjkyODIx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/refactoring",
"name": "refactoring",
"color": "B67A40",
"default": false,
"description": "Restructuring existing code without changing its external behavior"
}
] | open | false | null | [] | null | [] | 2022-10-24T13:11:52 | 2022-10-24T13:11:52 | null | CONTRIBUTOR | null | Tests for FolderBasedBuilder, ImageFolder and AudioFolder are mostly duplicating each other. They need to be refactored and Audio/ImageFolder should have only tests specific to the loader. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5152/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5152/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5151 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5151/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5151/comments | https://api.github.com/repos/huggingface/datasets/issues/5151/events | https://github.com/huggingface/datasets/issues/5151 | 1,420,791,163 | I_kwDODunzps5Ur417 | 5,151 | Add support to create different configs with `push_to_hub` (+ inferring configs from directories with package managers?) | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"also asked in https://discuss.huggingface.co/t/create-multiple-dataset-configs-with-push-to-hub-method/25480"
] | 2022-10-24T12:59:18 | 2022-11-04T14:55:20 | null | CONTRIBUTOR | null | Now one can push only different splits within one default config of a dataset.
It would be nice to allow something like:
```
ds.push_to_hub(repo_name, config=config_name)
```
I'm not sure, but this will probably require changes in `data_files.py` patterns. If so, it would also allow creating different configs for packaged module datasets.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5151/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/5151/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5150 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5150/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5150/comments | https://api.github.com/repos/huggingface/datasets/issues/5150/events | https://github.com/huggingface/datasets/issues/5150 | 1,420,684,999 | I_kwDODunzps5Ure7H | 5,150 | Problems after upgrading to 2.6.1 | {
"login": "pietrolesci",
"id": 61748653,
"node_id": "MDQ6VXNlcjYxNzQ4NjUz",
"avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pietrolesci",
"html_url": "https://github.com/pietrolesci",
"followers_url": "https://api.github.com/users/pietrolesci/followers",
"following_url": "https://api.github.com/users/pietrolesci/following{/other_user}",
"gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions",
"organizations_url": "https://api.github.com/users/pietrolesci/orgs",
"repos_url": "https://api.github.com/users/pietrolesci/repos",
"events_url": "https://api.github.com/users/pietrolesci/events{/privacy}",
"received_events_url": "https://api.github.com/users/pietrolesci/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi! I can't reproduce the error following these steps. Can you please provide a reproducible example?",
"I faced the same issue:\r\n\r\n### Repro\r\n```\r\n!pip install datasets==2.6.1\r\nimport datasets as Dataset\r\ndataset = Dataset.from_pandas(dataframe)\r\ndataset.save_to_disk(local)\r\n\r\n!pip install datasets==2.5.2\r\nimport datasets as Dataset\r\ndataset = Dataset.load_from_disk(local)\r\n```\r\n\r\n",
"@Lokiiiiii And what are the contents of the \"dataframe\" in your example?",
"I bumped into the issue too. @Lokiiiiii thanks for steps. I \"solved\" if for now by `pip install datasets>=2.6.1` everywhere.",
"Hi all, \r\nI experienced the same issue. \r\nPlease note that the pull request is related to the IMDB example provided in the doc, and is a fix for that, in that context, to make sure that people can follow the doc example and have a working system. \r\nIt does not provide a fix for Datasets itself. ",
"im getting the same error.\r\n- using the base AWS HF container that uses a datasets <2.\r\n- updating the AWS HF container to use dataset 2.4\r\n",
"Same here, running on our SageMaker pipelines. It's only happening for some but not all of our saved Datasets.",
"I am also receiving this error on Sagemaker but not locally, I have noticed that this occurs when the `.dataset/` folder does not contain a single file like:\r\n\r\n`dataset.arrow`\r\n\r\nbut instead contains multiple files like:\r\n\r\n`data-00000-of-00002.arrow`\r\n`data-00001-of-00002.arrow`\r\n\r\nI think that it may have something to do with this recent PR that updated the behaviour of `dataset.save_to_disk` by introducing sharding: https://github.com/huggingface/datasets/pull/5268\r\n\r\nFor now I can get around this by forcing datasets==2.8.0 on machine that creates dataset and in the huggingface instance for training (by running this at the start of training script `os.system(\"pip install datasets==2.8.0\")`)\r\n\r\nTo ensure the dataset is a single shard when saving the dataset locally:\r\n\r\n```python3\r\ndataset.flatten_indices().save_to_disk('path/to/dataset', num_shards=1)\r\n```\r\n\r\n and then manually changing the name afterwards from `path/to/dataset/data-00000-of-00001.arrow` to `path/to/dataset/dataset.arrow` and updating the `path/to/dataset/state.json` to reflect this name change. i.e. by changing `state.json` to this:\r\n\r\n```javascript\r\n{\r\n \"_data_files\": [\r\n {\r\n \"filename\": \"dataset.arrow\"\r\n }\r\n ],\r\n \"_fingerprint\": \"420086f0636f8727\",\r\n \"_format_columns\": null,\r\n \"_format_kwargs\": {},\r\n \"_format_type\": null,\r\n \"_output_all_columns\": false,\r\n \"_split\": null\r\n}\r\n```"
] | 2022-10-24T11:32:36 | 2023-01-03T15:26:00 | null | NONE | null | ### Describe the bug
Loading a dataset_dict from disk with `load_from_disk` now raises a `KeyError: "length"` that was not occurring in v2.5.2.
Context:
- Each individual dataset in the dict is created with `Dataset.from_pandas`
- The dataset_dict is created from a dict of `Dataset`s, e.g., `DatasetDict({"train": train_ds, "validation": val_ds})`
- The pandas dataframe, besides text columns, has a column with a dictionary inside and potentially different keys in each row. The `Dataset.from_pandas` function correctly adds `key: None` to all dictionaries in each row so that the schema can be correctly inferred.
### Steps to reproduce the bug
Steps to reproduce:
- Upgrade to datasets==2.6.1
- Create a dataset from pandas dataframe with `Dataset.from_pandas`
- Create a dataset_dict from a dict of `Dataset`s, e.g., `DatasetDict({"train": train_ds, "validation": val_ds})`
- Save to disk with the `save_to_disk` function (a minimal end-to-end sketch follows this list)
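A minimal end-to-end sketch of these steps (the dataframe contents and paths are illustrative):
```python
import pandas as pd
from datasets import Dataset, DatasetDict

# "meta" holds dictionaries with potentially different keys per row.
df = pd.DataFrame({"text": ["a", "b"], "meta": [{"x": 1}, {"y": 2}]})

train_ds = Dataset.from_pandas(df)
val_ds = Dataset.from_pandas(df)
dsets = DatasetDict({"train": train_ds, "validation": val_ds})
dsets.save_to_disk("my_dataset_dict")  # saved with datasets==2.6.1

# Loading it back is what reportedly raises KeyError: "length":
# from datasets import load_from_disk
# dsets = load_from_disk("my_dataset_dict")
```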
### Expected behavior
Same as in v2.5.2, that is, loading from disk without errors
### Environment info
- `datasets` version: 2.6.1
- Platform: Linux-5.4.209-129.367.amzn2int.x86_64-x86_64-with-glibc2.26
- Python version: 3.9.13
- PyArrow version: 9.0.0
- Pandas version: 1.5.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5150/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5150/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5149 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5149/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5149/comments | https://api.github.com/repos/huggingface/datasets/issues/5149/events | https://github.com/huggingface/datasets/pull/5149 | 1,420,415,639 | PR_kwDODunzps5BZJab | 5,149 | Make iter_files deterministic | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-24T08:16:27 | 2022-10-27T09:53:23 | 2022-10-27T09:51:09 | MEMBER | null | Fix #5145. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5149/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5149/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5149",
"html_url": "https://github.com/huggingface/datasets/pull/5149",
"diff_url": "https://github.com/huggingface/datasets/pull/5149.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5149.patch",
"merged_at": "2022-10-27T09:51:09"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5148 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5148/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5148/comments | https://api.github.com/repos/huggingface/datasets/issues/5148/events | https://github.com/huggingface/datasets/issues/5148 | 1,420,219,222 | I_kwDODunzps5UptNW | 5,148 | Cannot find the rvl_cdip dataset | {
"login": "santule",
"id": 20509836,
"node_id": "MDQ6VXNlcjIwNTA5ODM2",
"avatar_url": "https://avatars.githubusercontent.com/u/20509836?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/santule",
"html_url": "https://github.com/santule",
"followers_url": "https://api.github.com/users/santule/followers",
"following_url": "https://api.github.com/users/santule/following{/other_user}",
"gists_url": "https://api.github.com/users/santule/gists{/gist_id}",
"starred_url": "https://api.github.com/users/santule/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/santule/subscriptions",
"organizations_url": "https://api.github.com/users/santule/orgs",
"repos_url": "https://api.github.com/users/santule/repos",
"events_url": "https://api.github.com/users/santule/events{/privacy}",
"received_events_url": "https://api.github.com/users/santule/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi, @santule.\r\n\r\nWe have transferred all dataset scripts from GitHub to the Hugging Face Hub: https://huggingface.co/datasets\r\n- Concretely, you have \"rvl_cdip\" here: https://huggingface.co/datasets/rvl_cdip\r\n\r\nTo be able to load them, you should update your `datasets` library:\r\n```\r\npip install -U datasets\r\n```",
"thank you, it worked"
] | 2022-10-24T04:57:42 | 2022-10-24T12:23:47 | 2022-10-24T06:25:28 | NONE | null | Hi,
I am trying to use `load_dataset` to load the official "rvl_cdip" dataset, but I am getting an error.
dataset = load_dataset("rvl_cdip")
Couldn't find 'rvl_cdip' on the Hugging Face Hub either: FileNotFoundError: Couldn't find the file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/rvl_cdip/rvl_cdip.py
Regards,
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5148/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5147 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5147/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5147/comments | https://api.github.com/repos/huggingface/datasets/issues/5147/events | https://github.com/huggingface/datasets/issues/5147 | 1,419,522,275 | I_kwDODunzps5UnDDj | 5,147 | Allow ignoring kwargs inside fn_kwargs during dataset.map's fingerprinting | {
"login": "falcaopetri",
"id": 8387736,
"node_id": "MDQ6VXNlcjgzODc3MzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8387736?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/falcaopetri",
"html_url": "https://github.com/falcaopetri",
"followers_url": "https://api.github.com/users/falcaopetri/followers",
"following_url": "https://api.github.com/users/falcaopetri/following{/other_user}",
"gists_url": "https://api.github.com/users/falcaopetri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/falcaopetri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/falcaopetri/subscriptions",
"organizations_url": "https://api.github.com/users/falcaopetri/orgs",
"repos_url": "https://api.github.com/users/falcaopetri/repos",
"events_url": "https://api.github.com/users/falcaopetri/events{/privacy}",
"received_events_url": "https://api.github.com/users/falcaopetri/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi ! In the `transformers` issue the object to not hash is a `Pool` - I think you can instantiate it inside your function instead of passing it as a parameter. It's good practice that your function and all its fn_kwargs are picklable, in case you want to parallelize `map` using `num_proc>1`\r\n\r\nFor the other case `def fn(example, verbose=False):` however, I agree it would be nice to let the user specify that \"verbose\" needs to be ignored.\r\n\r\nDo you think providing a decorator could help ? Maybe\r\n```python\r\n@datasets.hashing.register(ignore_kwargs=[\"verbose\"])\r\ndef func(example, verbose=False):\r\n ...\r\n```",
"Hi @lhoestq! Thanks for your response.\r\n\r\nA `Pool` shouldn't be instantiated within the function, because there's a huge overhead in doing so. The main idea is that the same `Pool` should be used across all function calls. Parallel `map` is not helpful/desired in that specific scenario, because the heavy parallel computation is done by another lib (`pyctcdecode`, called within `transformer`'s model inference code).\r\n\r\nBut yes, it makes sense to be able to leverage parallel processing by just doing `num_proc>1` when possible.\r\n\r\nYour decorator suggestions seems like a pretty clean API to me. I didn't find a `datasets.hashing` module though. Would it be created for this specific purpose? Any downsides in just using `datasets.fingerprint`?\r\n\r\nAnd would `datasets.hashing.register` just add some metadata to `func` in your approach (so it could be inspected from `fingerprint_transform`)?\r\n\r\nAnd looking to the `datasets.Dataset` API, `.filter` would also benefited from this.",
"> Would it be created for this specific purpose? Any downsides in just using datasets.fingerprint?\r\n\r\nThis can also go in datasets.fingerprint indeed - but maybe datasets.hashing tells more about what the register function does (i.e. register this function to have a custom hashing) ?\r\n\r\n> And would datasets.hashing.register just add some metadata to func in your approach (so it could be inspected from fingerprint_transform)?\r\n\r\nYup that's the idea :)\r\n\r\n> And looking to the datasets.Dataset API, .filter would also benefited from this.\r\n\r\nIndeed !\r\n\r\n-----\r\n\r\nIf you would like to contribute this you can assign yourself to this issue by posting #self-assign\r\nAnd of course if you have questions or if I can help, feel free to ping me !",
"> This can also go in datasets.fingerprint indeed - but maybe datasets.hashing tells more about what the register function does (i.e. register this function to have a custom hashing) ?\r\n\r\nSure, it makes sense.\r\n\r\n---\r\n\r\nI don't plan to work on it right now, so I'll let it unassigned in case somebody wants to join. I'll get back at it as soon as possible though.\r\n"
] | 2022-10-22T21:46:38 | 2022-11-01T22:19:07 | null | NONE | null | ### Feature request
`dataset.map` accepts a `fn_kwargs` that is passed to `fn`. Currently, the whole `fn_kwargs` is used by `fingerprint_transform` to calculate the new fingerprint.
I'd like to be able to inform `fingerprint_transform` which `fn_kwargs` should/shouldn't be taken into account during hashing.
Of course, users should know how to use this new feature properly, just like the internal usages of `fingerprint_transform` [do](https://github.com/huggingface/datasets/blob/2699593b33ee63d17aad2a2bfddedd38a8df57b8/src/datasets/arrow_dataset.py#L2700).
### Motivation
This is originally motivated by https://github.com/huggingface/transformers/pull/18351#issuecomment-1263588680.
Nonetheless, consider a more general processing function that accepts a kwarg that does not influence its output:
```python
def fn(example, verbose=False):
    ...
```
Then `dataset.map(fn, fn_kwargs={"verbose": True})` would not benefit from dataset caching.
I'm not sure if other methods in the `Dataset` API could benefit from this feature.
### Your contribution
Based on `fingerprint_transform`'s `wrapper` function [here](https://github.com/huggingface/datasets/blob/c59cc34fcd2a369d27b77cc678017f5976a926a9/src/datasets/fingerprint.py#L443), it seems to me that it should be possible to make `.map`/`._map_single` accept something like `fn_use_fingerprint_kwargs`/`fn_ignore_fingerprint_kwargs` (probably under another arg name). This would then be used by `fingerprint_transform.wrapper` to hash the transformation more flexibly.
I could contribute with a PR if this feature and approach look good to you. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5147/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5147/timeline | null | null | null | null | false |
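Below is a minimal, self-contained sketch of the decorator API floated in the comments above. `register` and the `_fingerprint_ignore_kwargs` attribute are hypothetical names (no such API exists in `datasets` as of this thread); the sketch only illustrates how a transform's hashing step could be told which `fn_kwargs` to skip.

```python
# Hypothetical sketch: `register` and `_fingerprint_ignore_kwargs` are invented
# names, not part of the `datasets` library.
def register(ignore_kwargs=None):
    ignore = frozenset(ignore_kwargs or [])

    def decorator(func):
        # fingerprint_transform's wrapper could read this attribute and drop the
        # listed keys from fn_kwargs before computing the new fingerprint.
        func._fingerprint_ignore_kwargs = ignore
        return func

    return decorator


@register(ignore_kwargs=["verbose"])
def fn(example, verbose=False):
    if verbose:
        print(example)
    return example


print(fn._fingerprint_ignore_kwargs)  # frozenset({'verbose'})
```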
https://api.github.com/repos/huggingface/datasets/issues/5146 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5146/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5146/comments | https://api.github.com/repos/huggingface/datasets/issues/5146/events | https://github.com/huggingface/datasets/pull/5146 | 1,418,331,282 | PR_kwDODunzps5BSUWW | 5,146 | Delete duplicate issue template file | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-21T13:18:46 | 2022-10-21T13:52:30 | 2022-10-21T13:50:04 | MEMBER | null | A conflict between two PRs:
- #5116
- #5136
was not properly resolved, resulting in a duplicate issue template.
This PR removes the duplicate template. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5146/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5146/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5146",
"html_url": "https://github.com/huggingface/datasets/pull/5146",
"diff_url": "https://github.com/huggingface/datasets/pull/5146.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5146.patch",
"merged_at": "2022-10-21T13:50:04"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5145 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5145/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5145/comments | https://api.github.com/repos/huggingface/datasets/issues/5145/events | https://github.com/huggingface/datasets/issues/5145 | 1,418,005,452 | I_kwDODunzps5UhQvM | 5,145 | Dataset order is not deterministic with ZIP archives and `iter_files` | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting ! The issue doesn't come from shuffling, but from `beans` row order not being deterministic:\r\n\r\nhttps://huggingface.co/datasets/beans/blob/main/beans.py uses `dl_manager.iter_files` on ZIP archives and the file order doesn't seen to be deterministic and changes across machines",
"Thank you for noticing indeed!",
"This is still a bug, so I'd keep this one open if you don't mind ;)",
"Besides the linked PR, to make the loading process fully deterministic, I believe we should also sort the data files [here](https://github.com/huggingface/datasets/blob/df4bdd365f2abb695f113cbf8856a925bc70901b/src/datasets/data_files.py#L276) and [here](https://github.com/huggingface/datasets/blob/df4bdd365f2abb695f113cbf8856a925bc70901b/src/datasets/data_files.py#L485) (e.g. fsspec's `LocalFileSystem.glob` relies on `os.scandir`, which yields the contents in arbitrary order). My concern is the overhead of these sorts... Maybe we could introduce a new flag to `load_dataset` similar to TFDS' [`shuffle_files`](https://www.tensorflow.org/datasets/determinism#determinism_when_reading) or sort only if the number of data files is small?",
"We already return the result sorted at the end of `_resolve_single_pattern_locally` and `_resolve_single_pattern_in_dataset_repository` if I'm not mistaken",
"@lhoestq Oh, you are right. Feel free to ignore my comment.",
"I think the corresponding PR is ready to be merged :hugs: ",
"@albertvillanova Thanks for the fix!"
] | 2022-10-21T09:00:03 | 2022-10-27T09:51:49 | 2022-10-27T09:51:10 | CONTRIBUTOR | null | ### Describe the bug
For the `beans` dataset (did not try others), the order of samples is not the same on different machines. Tested on my local laptop, a GitHub Actions machine, and an EC2 instance. The three yield different orders.
### Steps to reproduce the bug
In a clean docker container or conda environment with datasets==2.6.1, run
```python
from datasets import load_dataset
from pprint import pprint
data = load_dataset("beans", split="validation")
pprint(data["image_file_path"])
```
### Expected behavior
The order of the images is the same on all machines.
### Environment info
On the EC2 instance:
```
- `datasets` version: 2.6.1
- Platform: Linux-4.14.291-218.527.amzn2.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.7.10
- PyArrow version: 9.0.0
- Pandas version: 1.3.5
- Numpy version: not checked
```
On my local laptop:
```
- `datasets` version: 2.6.1
- Platform: Linux-5.15.0-50-generic-x86_64-with-glibc2.35
- Python version: 3.9.12
- PyArrow version: 7.0.0
- Pandas version: 1.3.5
- Numpy version: 1.23.1
```
On github actions:
```
- `datasets` version: 2.6.1
- Platform: Linux-5.15.0-1022-azure-x86_64-with-glibc2.2.5
- Python version: 3.8.14
- PyArrow version: 9.0.0
- Pandas version: 1.5.1
- Numpy version: 1.23.4
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5145/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5145/timeline | null | completed | null | null | false |
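As a reference for the fix discussed in the comments, here is a minimal sketch of how a loading script can make the file order deterministic; `dl_manager` (a `DownloadManager`) and `archive_dir` are assumed to come from the surrounding builder code.

```python
# Sketch under stated assumptions: dl_manager and archive_dir come from the
# loading script's _split_generators / _generate_examples machinery.
def deterministic_file_list(dl_manager, archive_dir):
    # sorted() pins the order regardless of how the filesystem lists directory
    # entries, so every machine sees the same row order.
    return sorted(dl_manager.iter_files(archive_dir))
```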
https://api.github.com/repos/huggingface/datasets/issues/5144 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5144/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5144/comments | https://api.github.com/repos/huggingface/datasets/issues/5144/events | https://github.com/huggingface/datasets/issues/5144 | 1,417,974,731 | I_kwDODunzps5UhJPL | 5,144 | Inconsistent documentation on map remove_columns | {
"login": "zhaowei-wang-nlp",
"id": 22047467,
"node_id": "MDQ6VXNlcjIyMDQ3NDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/22047467?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhaowei-wang-nlp",
"html_url": "https://github.com/zhaowei-wang-nlp",
"followers_url": "https://api.github.com/users/zhaowei-wang-nlp/followers",
"following_url": "https://api.github.com/users/zhaowei-wang-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zhaowei-wang-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhaowei-wang-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhaowei-wang-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zhaowei-wang-nlp/orgs",
"repos_url": "https://api.github.com/users/zhaowei-wang-nlp/repos",
"events_url": "https://api.github.com/users/zhaowei-wang-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhaowei-wang-nlp/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
},
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
},
{
"id": 4614514401,
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest",
"name": "hacktoberfest",
"color": "DF8D62",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"Thanks for reporting, @zhaowei-wang-nlp.\r\n\r\nYou are right, the documentation is confusing on the behavior of `remove_columns`. We should better explain it. ",
"This is a duplicate of https://github.com/huggingface/datasets/issues/2343.",
"I'm closing this issue because as @mariosasko pointed out, it is a duplicate of:\r\n- #2343"
] | 2022-10-21T08:37:53 | 2022-11-15T14:15:10 | 2022-11-15T14:15:10 | NONE | null | ### Describe the bug
The page [process](https://huggingface.co/docs/datasets/process) says this about the parameter `remove_columns` of the function `map`:
When you remove a column, it is only removed after the example has been provided to the mapped function.
So it seems that the `remove_columns` parameter removes columns after the mapped function is applied.
However, another page, [the documentation of the function map](https://huggingface.co/docs/datasets/v2.6.1/en/package_reference/main_classes#datasets.Dataset.map.remove_columns) says:
Columns will be removed before updating the examples with the output of `function`, i.e. if `function` is adding columns with names in remove_columns, these columns will be kept.
So one page says "after the mapped function" and another says "before the mapped function."
Is there something wrong?
### Steps to reproduce the bug
Not about code.
### Expected behavior
Consistent descriptions of the behavior of the `remove_columns` parameter in the `map` function.
### Environment info
datasets V2.6.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5144/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5144/timeline | null | completed | null | null | false |
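For reference, a small check of the behavior both documentation pages describe, assuming a local `datasets` install: the mapped function still receives the original "text" column, and because the function returns a new "text" column, that returned column is kept even though "text" is listed in `remove_columns`. The expected outputs in the comments follow from the `map` docstring rather than from output captured here.

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1]})

# "text" is removed from the input columns *before* the function's output is
# merged back in, so the "text" returned by the function survives.
out = ds.map(lambda ex: {"text": ex["text"].upper()}, remove_columns=["text"])

print(out.column_names)  # expected: ['label', 'text']
print(out["text"])       # expected: ['A', 'B']
```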
https://api.github.com/repos/huggingface/datasets/issues/5143 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5143/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5143/comments | https://api.github.com/repos/huggingface/datasets/issues/5143/events | https://github.com/huggingface/datasets/issues/5143 | 1,416,837,186 | I_kwDODunzps5UczhC | 5,143 | DownloadManager Git LFS support | {
"login": "Muennighoff",
"id": 62820084,
"node_id": "MDQ6VXNlcjYyODIwMDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/62820084?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Muennighoff",
"html_url": "https://github.com/Muennighoff",
"followers_url": "https://api.github.com/users/Muennighoff/followers",
"following_url": "https://api.github.com/users/Muennighoff/following{/other_user}",
"gists_url": "https://api.github.com/users/Muennighoff/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Muennighoff/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Muennighoff/subscriptions",
"organizations_url": "https://api.github.com/users/Muennighoff/orgs",
"repos_url": "https://api.github.com/users/Muennighoff/repos",
"events_url": "https://api.github.com/users/Muennighoff/events{/privacy}",
"received_events_url": "https://api.github.com/users/Muennighoff/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hey ! Actually it works, just pass the right URL ;)\r\nThe URL must be the one with “/resolve/”\r\n\r\ne.g. https://huggingface.co/datasets/imagenet-1k/resolve/main/data/test_images.tar.gz\r\n\r\nYou can even pass a relative path to the dl_manager instead, like `dl_manager.download(\"data/test_images.tar.gz\")`",
"Amazing it works, thanks!"
] | 2022-10-20T15:29:29 | 2022-10-20T17:17:10 | 2022-10-20T17:17:10 | CONTRIBUTOR | null | ### Feature request
Maybe I'm mistaken, but the `DownloadManager` does not support extracting Git LFS files out of the box, right?
Using `dl_manager.download()` or `dl_manager.download_and_extract()` still returns the LFS pointer files, as far as I can tell.
Is there a good way to write a dataset loading script for a repo with LFS files?
### Motivation
/
### Your contribution
/ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5143/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5143/timeline | null | completed | null | null | false |
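Following the answer in the comments, here is a hedged sketch of a loading script for a repository that stores its data as Git LFS files; the file name `data/test_images.tar.gz`, the class name, and the minimal feature set are assumptions used for illustration only.

```python
import datasets


class MyLfsDataset(datasets.GeneratorBasedBuilder):
    """Sketch of a loading script for a repo whose data files are stored with Git LFS."""

    def _info(self):
        return datasets.DatasetInfo(features=datasets.Features({"path": datasets.Value("string")}))

    def _split_generators(self, dl_manager):
        # A repo-relative path (or a full ".../resolve/..." URL) downloads the real
        # LFS object instead of the pointer file.
        archive_dir = dl_manager.download_and_extract("data/test_images.tar.gz")
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"files": dl_manager.iter_files(archive_dir)},
            )
        ]

    def _generate_examples(self, files):
        for idx, path in enumerate(files):
            yield idx, {"path": path}
```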
https://api.github.com/repos/huggingface/datasets/issues/5142 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5142/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5142/comments | https://api.github.com/repos/huggingface/datasets/issues/5142/events | https://github.com/huggingface/datasets/pull/5142 | 1,416,317,678 | PR_kwDODunzps5BLd90 | 5,142 | Deprecate num_proc parameter in DownloadManager.extract | {
"login": "ayushthe1",
"id": 114604338,
"node_id": "U_kgDOBtS5Mg",
"avatar_url": "https://avatars.githubusercontent.com/u/114604338?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayushthe1",
"html_url": "https://github.com/ayushthe1",
"followers_url": "https://api.github.com/users/ayushthe1/followers",
"following_url": "https://api.github.com/users/ayushthe1/following{/other_user}",
"gists_url": "https://api.github.com/users/ayushthe1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayushthe1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayushthe1/subscriptions",
"organizations_url": "https://api.github.com/users/ayushthe1/orgs",
"repos_url": "https://api.github.com/users/ayushthe1/repos",
"events_url": "https://api.github.com/users/ayushthe1/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayushthe1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey @mariosasko . Can you please help me with why the tests keep failing. I have reviewed the code changes multiple times but can't spot any mistakes. ",
"You can fix this failure by formatting your code with the `make style` command (run it from the root of the cloned repo).",
"hey @mariosasko ,i cant understand how to use the `make style` command .I searched for it on the internet but cant find any results. \r\nSo i formatted the code using vs-code document formatter. Hope this helps.",
"`make style` runs the \"style\" target defined here: https://github.com/huggingface/datasets/blob/f09f781be3278156ce3aa6ec90c1926b1846a78f/Makefile#L12\r\n\r\nThis seems to be a good tutorial on Makefiles: https://opensource.com/article/18/8/what-how-makefile",
"\r\n\r\n\r\n\r\n> `make style` runs the \"style\" target defined here:\r\n> \r\n> https://github.com/huggingface/datasets/blob/f09f781be3278156ce3aa6ec90c1926b1846a78f/Makefile#L12\r\n> \r\n> This seems to be a good tutorial on Makefiles: https://opensource.com/article/18/8/what-how-makefile\r\n\r\nThanks! I will look into this :relaxed: "
] | 2022-10-20T09:52:52 | 2022-10-25T18:06:56 | 2022-10-25T15:56:45 | CONTRIBUTOR | null | Fixes #5132: deprecated the `num_proc` parameter in `DownloadManager.extract` by passing the `num_proc` parameter to `map_nested`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5142/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5142/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5142",
"html_url": "https://github.com/huggingface/datasets/pull/5142",
"diff_url": "https://github.com/huggingface/datasets/pull/5142.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5142.patch",
"merged_at": "2022-10-25T15:56:45"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5141 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5141/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5141/comments | https://api.github.com/repos/huggingface/datasets/issues/5141/events | https://github.com/huggingface/datasets/pull/5141 | 1,415,479,438 | PR_kwDODunzps5BIp1l | 5,141 | Raise ImportError instead of OSError | {
"login": "ayushthe1",
"id": 114604338,
"node_id": "U_kgDOBtS5Mg",
"avatar_url": "https://avatars.githubusercontent.com/u/114604338?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayushthe1",
"html_url": "https://github.com/ayushthe1",
"followers_url": "https://api.github.com/users/ayushthe1/followers",
"following_url": "https://api.github.com/users/ayushthe1/following{/other_user}",
"gists_url": "https://api.github.com/users/ayushthe1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayushthe1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayushthe1/subscriptions",
"organizations_url": "https://api.github.com/users/ayushthe1/orgs",
"repos_url": "https://api.github.com/users/ayushthe1/repos",
"events_url": "https://api.github.com/users/ayushthe1/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayushthe1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks @mariosasko ,i commited the changes as you said.\r\n\r\n"
] | 2022-10-19T19:30:05 | 2022-10-25T15:59:25 | 2022-10-25T15:56:58 | CONTRIBUTOR | null | Fixes #5134: replaced OSError with ImportError if the required extraction library is not installed. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5141/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5141/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5141",
"html_url": "https://github.com/huggingface/datasets/pull/5141",
"diff_url": "https://github.com/huggingface/datasets/pull/5141.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5141.patch",
"merged_at": "2022-10-25T15:56:58"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5140 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5140/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5140/comments | https://api.github.com/repos/huggingface/datasets/issues/5140/events | https://github.com/huggingface/datasets/pull/5140 | 1,415,075,530 | PR_kwDODunzps5BHTNq | 5,140 | Make the KeyHasher FIPS compliant | {
"login": "vvalouch",
"id": 22592860,
"node_id": "MDQ6VXNlcjIyNTkyODYw",
"avatar_url": "https://avatars.githubusercontent.com/u/22592860?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vvalouch",
"html_url": "https://github.com/vvalouch",
"followers_url": "https://api.github.com/users/vvalouch/followers",
"following_url": "https://api.github.com/users/vvalouch/following{/other_user}",
"gists_url": "https://api.github.com/users/vvalouch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vvalouch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vvalouch/subscriptions",
"organizations_url": "https://api.github.com/users/vvalouch/orgs",
"repos_url": "https://api.github.com/users/vvalouch/repos",
"events_url": "https://api.github.com/users/vvalouch/events{/privacy}",
"received_events_url": "https://api.github.com/users/vvalouch/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-10-19T14:25:52 | 2022-11-07T16:20:43 | 2022-11-07T16:20:43 | NONE | null | MD5 is not FIPS compliant, so I am proposing this minimal change to make the datasets package FIPS compliant | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5140/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5140/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5140",
"html_url": "https://github.com/huggingface/datasets/pull/5140",
"diff_url": "https://github.com/huggingface/datasets/pull/5140.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5140.patch",
"merged_at": null
} | true |
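For context, a hedged illustration of the two common ways such FIPS complaints are addressed in Python code; this is the general pattern, not the actual diff from this pull request.

```python
import hashlib

# Option 1: use a FIPS-approved algorithm for non-cryptographic fingerprinting.
print(hashlib.sha256(b"some bytes to fingerprint").hexdigest())

# Option 2 (Python 3.9+): keep MD5 but declare it is not used for security,
# which FIPS-enabled OpenSSL builds typically allow.
print(hashlib.md5(b"some bytes to fingerprint", usedforsecurity=False).hexdigest())
```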
https://api.github.com/repos/huggingface/datasets/issues/5137 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5137/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5137/comments | https://api.github.com/repos/huggingface/datasets/issues/5137/events | https://github.com/huggingface/datasets/issues/5137 | 1,414,642,723 | I_kwDODunzps5UUbwj | 5,137 | Align task tags in dataset metadata | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I removed all the invalid task_ids in datasts without namespace, based on the <s>(internal)</s> types.ts",
"(Types.ts is not internal it's public)",
"I have opened PRs to fix the task_ids in all datasets within a namespace as well.\r\n\r\nWorking on task_categories...",
"For future reference: this fix had some complications\r\n\r\nWhen trying to open a PR to fix the task tags, an exception was thrown if:\r\n- the metadata contained \"languages\" or \"licenses\" (instead of \"language\" or \"license\")\r\n- the metadata contained a non-valid language: `en-US` (instead of `en`), `no` (instead of `'no'`),...\r\n- the metadata contained a non-valid license\r\n- either `task_categories` or `task_ids` was not an array (a dict for each config)\r\n- the metadata contained non-valid tag names\r\n\r\nErrors:\r\n```\r\nValueError: - Error: \"languages\" is deprecated. Use \"language\" instead.\r\n```\r\n```\r\nValueError: - Error: \"licenses\" is deprecated. Use \"license\" instead.\r\n```\r\n```\r\nValueError: - Error: \"language[17]\" must only contain lowercase characters\r\n```\r\n```\r\nValueError: - Error: \"language[0]\" with value \"cz, de, it\" is not valid. It must be an ISO 639-1, 639-2 or 639-3 code (two/three letters), or a special value like \"code\", \"multilingual\". If you want to use BCP-47 identifiers, you can specify them in language_bcp47.\r\n```\r\n```\r\nValueError: - Error: \"task_ids\" must be an array\r\n```",
"All Hub datasets are done.",
"great job! did you have feedback from Hub users/i.E. repo authors?",
"Yes, @julien-c. These are some of the feedbacks:\r\n- Most people just thank for the fix: [cahya/librivox-indonesia](https://huggingface.co/datasets/cahya/librivox-indonesia/discussions/1#6357cd8a292a050ebd705f84), [TurkuNLP/xlsum-fi](https://huggingface.co/datasets/TurkuNLP/xlsum-fi/discussions/1#6357828aa1f8ad1c31bcbe46), [coastalcph/fairlex](https://huggingface.co/datasets/coastalcph/fairlex/discussions/4#6351a527a8e595171ab1aef2)\r\n- Why are we changing their task names? [joelito/lextreme](https://huggingface.co/datasets/joelito/lextreme/discussions/1#6351b576fe367c0d9b12041b)\r\n - I take note of this for the next bulk operation; besides the PR title, we should also add a description to explain the reason for the change and also maybe putting a link to some pertinent GH Issue page\r\n- Some of them ask where to find the list of the supported task values is: [dennlinger/klexikon](https://huggingface.co/datasets/dennlinger/klexikon/discussions/3#6356b3ea80f8cb3ab777ac5c), [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad/discussions/1#635262467e4cc3135fd09f58)\r\n - Currently, the list is here: https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts#L85\r\n - Maybe we could made them more easily accessible\r\n- Some people do not agree about current \"hierarchy\":\r\n - text-scoring: [emrecan/nli_tr_for_simcse](https://huggingface.co/datasets/emrecan/nli_tr_for_simcse/discussions/1#6357c1b128792d8cdd51e9f9) (but referring to [emrecan/nli_tr_for_simcse](https://huggingface.co/datasets/emrecan/nli_tr_for_simcse/discussions/2/files))\r\n - Before \"text-scoring\" was a task_category, with task_ids [\"semantic-similarity-scoring\", \"sentiment-scoring\"]\r\n - Now all three are task_ids [\"text-scoring\", \"semantic-similarity-scoring\", \"sentiment-scoring\"] under the task_category \"text-classification\"\r\n - People complain that their scoring tasks are not classification task\r\n - binary-classification: why don't we have binary-classification? We have multi-class-classification, multi-label-classification and sentiment-classification, but not binary-classification\r\n - symbolic-regression: [yoshitomo-matsubara/srsd-feynman_hard](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_hard/discussions/2#63614194c12a09b8a31457cc), [yoshitomo-matsubara/srsd-feynman_medium](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_medium/discussions/2#6361418aeee0d27f04379e43), [yoshitomo-matsubara/srsd-feynman_easy](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_easy/discussions/2#6361416e00905b1ffb8d0112)\r\n - Why don't we have symbolic-regression task?\r\n\r\nNOTE: I'm editing this comment to add more feedback",
"As someone with feedback on the updates (which I highly appreciate seeing included here :D), a few comments from a \"user perspective\": \r\n\r\n* I think the general confusion for me was also surrounding the hierarchy; it doesn't really become super clear (even when using the tagger space) that one is a subset of the other, especially since it seems to be still possible to include fine-grained tasks without the \"parent category\"?\r\n* The datasets explorer still shows tags that are no longer valid (e.g., super specific ones such as `summarization-other-paper-abstract-generation`, but also ones that should be `task_categories`, such as `summarization`). I'm assuming this will be fixed soon, but until then it can confuse people who don't understand why they suddenly can't use seemingly still valid tags anymore.\r\n* As I mentioned to @albertvillanova, having a dedicated page in the docs with explanations (especially wrt the difference between `task_categories` and `task_ids`) would be super helpful. However, I think it would have been sufficient to just include some description in the dataset PRs where you can link to the Github/other discussion on the topic :) That way, I can check myself what changes are expected to happen.\r\n\r\nThanks again for the streamlining process, I personally learned a fair bit about the tagging structure in the meantime!\r\nBest,\r\nDennis",
"Thanks to you both for your feedback! super useful! cc'ing @osanseviero too 🙂\r\n\r\n> The datasets explorer still shows tags that are no longer valid\r\n\r\nwait which explorer is that? is it https://huggingface.co/datasets/viewer/ ?\r\n",
"Sorry, this one: https://huggingface.co/datasets \r\nAnd then selecting the \"Fine-Grained Tasks\".",
"good feedback! we'll improve this",
"Super useful feedback, thanks a lot!",
"- Some people do not agree about current \"hierarchy\":\r\n - symbolic-regression: [yoshitomo-matsubara/srsd-feynman_hard](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_hard/discussions/2#63614194c12a09b8a31457cc), [yoshitomo-matsubara/srsd-feynman_medium](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_medium/discussions/2#6361418aeee0d27f04379e43), [yoshitomo-matsubara/srsd-feynman_easy](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_easy/discussions/2#6361416e00905b1ffb8d0112)\r\n - Why don't we have symbolic-regression task?",
"@albertvillanova \r\nThank you for sharing our voice here!\r\n\r\nYes, we want `symbolic-regression` to be listed as a task. This task has been attracting attention from the machine learning/deep learning community, and unfortunately existing symbolic regression datasets are de-centralized in the community (hosted at individual platforms like author website, github, etc).\r\nIt would be great for the community if Hugging Face can support the task."
] | 2022-10-19T09:41:42 | 2022-11-10T05:25:58 | 2022-10-25T06:17:00 | MEMBER | null | ## Describe
Once we have agreed on a common naming for task tags for all open source projects, we should align on them.
## Steps
- [x] Align task tags in canonical datasets
- [x] task_categories: 4 datasets
- [x] task_ids (by @lhoestq)
- [x] Open PRs in community datasets
- [x] task_categories: 451 datasets
- [x] task_ids: 556 datasets
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5137/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5137/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5136 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5136/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5136/comments | https://api.github.com/repos/huggingface/datasets/issues/5136/events | https://github.com/huggingface/datasets/pull/5136 | 1,414,492,139 | PR_kwDODunzps5BFWMG | 5,136 | Update docs once dataset scripts transferred to the Hub | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-19T07:58:27 | 2022-10-20T08:12:21 | 2022-10-20T08:10:00 | MEMBER | null | Todo:
- [x] Update docs:
- [x] Datasets on GitHub (legacy)
- [x] Load: offline
- [x] About dataset load:
- [x] Maintaining integrity
- [x] Security
- [x] Update docstrings:
- [x] Inspect:
- [x] get_dataset_config_info
- [x] get_dataset_split_names
- [x] Load:
- [x] dataset_module_factory
- [x] load_dataset_builder
- [x] load_dataset
- [x] Remove `ADD_NEW_DATASET.md`
- [x] Update `.github/ISSUE_TEMPLATE/config.yml`
Fix #5135. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5136/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5136/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5136",
"html_url": "https://github.com/huggingface/datasets/pull/5136",
"diff_url": "https://github.com/huggingface/datasets/pull/5136.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5136.patch",
"merged_at": "2022-10-20T08:10:00"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5135 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5135/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5135/comments | https://api.github.com/repos/huggingface/datasets/issues/5135/events | https://github.com/huggingface/datasets/issues/5135 | 1,414,413,519 | I_kwDODunzps5UTjzP | 5,135 | Update docs once dataset scripts transferred to the Hub | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2022-10-19T06:58:19 | 2022-10-20T08:10:01 | 2022-10-20T08:10:01 | MEMBER | null | ## Describe the bug
As discussed in:
- https://github.com/huggingface/hub-docs/pull/423#pullrequestreview-1146083701
we should update our docs once dataset scripts have been transferred to the Hub (and removed from GitHub):
- #4974
Concretely:
- [x] Datasets on GitHub (legacy): https://huggingface.co/docs/datasets/main/en/share#datasets-on-github-legacy
- [x] ADD_NEW_DATASET: https://github.com/huggingface/datasets/blob/main/ADD_NEW_DATASET.md
- ...
This PR complements the work of:
- #5067
This PR is a follow-up of PRs:
- #3777
CC: @julien-c | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5135/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5135/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5134 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5134/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5134/comments | https://api.github.com/repos/huggingface/datasets/issues/5134/events | https://github.com/huggingface/datasets/issues/5134 | 1,413,623,687 | I_kwDODunzps5UQi-H | 5,134 | Raise ImportError instead of OSError if required extraction library is not installed | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
},
{
"id": 4614514401,
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest",
"name": "hacktoberfest",
"color": "DF8D62",
"default": false,
"description": ""
}
] | closed | false | {
"login": "ayushthe1",
"id": 114604338,
"node_id": "U_kgDOBtS5Mg",
"avatar_url": "https://avatars.githubusercontent.com/u/114604338?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayushthe1",
"html_url": "https://github.com/ayushthe1",
"followers_url": "https://api.github.com/users/ayushthe1/followers",
"following_url": "https://api.github.com/users/ayushthe1/following{/other_user}",
"gists_url": "https://api.github.com/users/ayushthe1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayushthe1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayushthe1/subscriptions",
"organizations_url": "https://api.github.com/users/ayushthe1/orgs",
"repos_url": "https://api.github.com/users/ayushthe1/repos",
"events_url": "https://api.github.com/users/ayushthe1/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayushthe1/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "ayushthe1",
"id": 114604338,
"node_id": "U_kgDOBtS5Mg",
"avatar_url": "https://avatars.githubusercontent.com/u/114604338?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayushthe1",
"html_url": "https://github.com/ayushthe1",
"followers_url": "https://api.github.com/users/ayushthe1/followers",
"following_url": "https://api.github.com/users/ayushthe1/following{/other_user}",
"gists_url": "https://api.github.com/users/ayushthe1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayushthe1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayushthe1/subscriptions",
"organizations_url": "https://api.github.com/users/ayushthe1/orgs",
"repos_url": "https://api.github.com/users/ayushthe1/repos",
"events_url": "https://api.github.com/users/ayushthe1/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayushthe1/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"hey ,i would like to work on this issue . Please assign it to me.",
"hey @mariosasko , i made a pr for this issue. Could you please review it.\r\nAlso i found multiple `OSError` in `extract.py` file which i thought could be replaced too but wasn't sure about them.\r\nPlease do tell if that also needs to be done."
] | 2022-10-18T17:53:46 | 2022-10-25T15:56:59 | 2022-10-25T15:56:59 | CONTRIBUTOR | null | According to the official Python docs, `OSError` should be thrown in the following situations:
> This exception is raised when a system function returns a system-related error, including I/O failures such as “file not found” or “disk full” (not for illegal argument types or other incidental errors).
Hence, it makes more sense to raise `ImportError` instead of `OSError` when the required extraction/decompression library is not installed. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5134/timeline | null | completed | null | null | false |
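A minimal illustration of the change being requested; the helper name and the `zstandard` example are illustrative, not the library's actual extraction code.

```python
import importlib.util


def require_zstandard():
    # A missing optional dependency is an import problem, not an OS-level failure,
    # so ImportError (rather than OSError) is the appropriate exception type.
    if importlib.util.find_spec("zstandard") is None:
        raise ImportError("Please `pip install zstandard` to extract .zst files.")


require_zstandard()
```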
https://api.github.com/repos/huggingface/datasets/issues/5133 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5133/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5133/comments | https://api.github.com/repos/huggingface/datasets/issues/5133/events | https://github.com/huggingface/datasets/issues/5133 | 1,413,623,462 | I_kwDODunzps5UQi6m | 5,133 | Tensor operation not functioning in dataset mapping | {
"login": "xinghaow99",
"id": 50691954,
"node_id": "MDQ6VXNlcjUwNjkxOTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/50691954?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xinghaow99",
"html_url": "https://github.com/xinghaow99",
"followers_url": "https://api.github.com/users/xinghaow99/followers",
"following_url": "https://api.github.com/users/xinghaow99/following{/other_user}",
"gists_url": "https://api.github.com/users/xinghaow99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xinghaow99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xinghaow99/subscriptions",
"organizations_url": "https://api.github.com/users/xinghaow99/orgs",
"repos_url": "https://api.github.com/users/xinghaow99/repos",
"events_url": "https://api.github.com/users/xinghaow99/events{/privacy}",
"received_events_url": "https://api.github.com/users/xinghaow99/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi! The Torch ops in your snippet are not equivalent to the NumPy ones, hence the difference. You can get the same behavior by replacing the line `feature = torch.mean(feature, dim=1)` with `feature = feature.squeeze().mean(1)` .",
"> Hi! The Torch ops in your snippet are not equivalent to the NumPy ones, hence the difference. You can get the same behavior by replacing the line `feature = torch.mean(feature, dim=1)` with `feature = feature.squeeze().mean(1)` .\r\n\r\nThank you. "
] | 2022-10-18T17:53:35 | 2022-10-19T04:15:45 | 2022-10-19T04:15:44 | NONE | null | ## Describe the bug
I'm doing a `torch.mean()` operation in data preprocessing, and it does not reduce the sequence dimension as expected.
## Steps to reproduce the bug
```
from transformers import pipeline
import torch
import numpy as np
from datasets import load_dataset
device = 'cuda:0'
raw_dataset = load_dataset("glue", "sst2")
feature_extraction = pipeline('feature-extraction', 'bert-base-uncased', device=device)
def extracted_data(examples):
    # feature = torch.tensor(feature_extraction(examples['sentence'], batch_size=16), device=device)
    # feature = torch.mean(feature, dim=1)
    feature = np.asarray(feature_extraction(examples['sentence'], batch_size=16)).squeeze().mean(1)
    print(feature.shape)
    return {'feature': feature}
extracted_dataset = raw_dataset.map(extracted_data, batched=True, batch_size=16)
```
## Results
When running with `torch.mean()`, the printed shape is [16, seq_len, 768], which is exactly the same as before the operation, while the NumPy version works fine and gives [16, 768].
## Environment info
- `datasets` version: 2.6.1
- Platform: Linux-4.4.0-142-generic-x86_64-with-glibc2.31
- Python version: 3.10.6
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5133/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5133/timeline | null | completed | null | null | false |
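For reference, a hedged rewrite of the Torch branch from the snippet above, following the suggestion in the comments; `feature_extraction` is the pipeline defined in that snippet, and the shapes noted in the comments are assumptions based on the issue.

```python
import torch


def extracted_data_torch(examples):
    # feature_extraction is the 'feature-extraction' pipeline from the snippet above;
    # its output is assumed to have shape [batch, 1, seq_len, 768] before squeezing.
    feature = torch.tensor(feature_extraction(examples["sentence"], batch_size=16))
    feature = feature.squeeze().mean(1)  # -> [batch, 768], matching the NumPy branch
    return {"feature": feature}
```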
https://api.github.com/repos/huggingface/datasets/issues/5132 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5132/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5132/comments | https://api.github.com/repos/huggingface/datasets/issues/5132/events | https://github.com/huggingface/datasets/issues/5132 | 1,413,607,306 | I_kwDODunzps5UQe-K | 5,132 | Deprecate `num_proc` parameter in `DownloadManager.extract` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
},
{
"id": 4614514401,
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest",
"name": "hacktoberfest",
"color": "DF8D62",
"default": false,
"description": ""
}
] | closed | false | {
"login": "ayushthe1",
"id": 114604338,
"node_id": "U_kgDOBtS5Mg",
"avatar_url": "https://avatars.githubusercontent.com/u/114604338?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayushthe1",
"html_url": "https://github.com/ayushthe1",
"followers_url": "https://api.github.com/users/ayushthe1/followers",
"following_url": "https://api.github.com/users/ayushthe1/following{/other_user}",
"gists_url": "https://api.github.com/users/ayushthe1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayushthe1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayushthe1/subscriptions",
"organizations_url": "https://api.github.com/users/ayushthe1/orgs",
"repos_url": "https://api.github.com/users/ayushthe1/repos",
"events_url": "https://api.github.com/users/ayushthe1/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayushthe1/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "ayushthe1",
"id": 114604338,
"node_id": "U_kgDOBtS5Mg",
"avatar_url": "https://avatars.githubusercontent.com/u/114604338?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayushthe1",
"html_url": "https://github.com/ayushthe1",
"followers_url": "https://api.github.com/users/ayushthe1/followers",
"following_url": "https://api.github.com/users/ayushthe1/following{/other_user}",
"gists_url": "https://api.github.com/users/ayushthe1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayushthe1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayushthe1/subscriptions",
"organizations_url": "https://api.github.com/users/ayushthe1/orgs",
"repos_url": "https://api.github.com/users/ayushthe1/repos",
"events_url": "https://api.github.com/users/ayushthe1/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayushthe1/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I can take this! #self-assign",
"#self-assign",
"@lazarust i'm already working on this issue :smile: ",
"#self-assign",
"hey @mariosasko , i made a pr for this issue. Could you please review it."
] | 2022-10-18T17:41:05 | 2022-10-25T15:56:46 | 2022-10-25T15:56:46 | CONTRIBUTOR | null | The `num_proc` parameter is only present in `DownloadManager.extract` but not in `StreamingDownloadManager.extract`, making it impossible to support streaming in the dataset scripts that use it (`openwebtext` and `the_pile_stack_exchange`). We can avoid this situation by deprecating this parameter and passing `DownloadConfig`'s `num_proc` to `map_nested` instead, as it's done in `DownloadManager.download`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5132/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5131 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5131/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5131/comments | https://api.github.com/repos/huggingface/datasets/issues/5131/events | https://github.com/huggingface/datasets/issues/5131 | 1,413,534,863 | I_kwDODunzps5UQNSP | 5,131 | WikiText 103 tokenizer hangs | {
"login": "TrentBrick",
"id": 12433427,
"node_id": "MDQ6VXNlcjEyNDMzNDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/12433427?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TrentBrick",
"html_url": "https://github.com/TrentBrick",
"followers_url": "https://api.github.com/users/TrentBrick/followers",
"following_url": "https://api.github.com/users/TrentBrick/following{/other_user}",
"gists_url": "https://api.github.com/users/TrentBrick/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TrentBrick/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TrentBrick/subscriptions",
"organizations_url": "https://api.github.com/users/TrentBrick/orgs",
"repos_url": "https://api.github.com/users/TrentBrick/repos",
"events_url": "https://api.github.com/users/TrentBrick/events{/privacy}",
"received_events_url": "https://api.github.com/users/TrentBrick/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 2022-10-18T16:44:00 | 2022-10-18T16:44:00 | null | NONE | null | See issue here: https://github.com/huggingface/transformers/issues/19702 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5131/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5131/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5130 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5130/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5130/comments | https://api.github.com/repos/huggingface/datasets/issues/5130/events | https://github.com/huggingface/datasets/pull/5130 | 1,413,435,000 | PR_kwDODunzps5BBxXX | 5,130 | Avoid extra cast in `class_encode_column` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-18T15:31:24 | 2022-10-19T11:53:02 | 2022-10-19T11:50:46 | CONTRIBUTOR | null | Pass the updated features to `map` to avoid the `cast` in `class_encode_column`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5130/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5130",
"html_url": "https://github.com/huggingface/datasets/pull/5130",
"diff_url": "https://github.com/huggingface/datasets/pull/5130.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5130.patch",
"merged_at": "2022-10-19T11:50:46"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5129 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5129/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5129/comments | https://api.github.com/repos/huggingface/datasets/issues/5129/events | https://github.com/huggingface/datasets/issues/5129 | 1,413,031,664 | I_kwDODunzps5UOSbw | 5,129 | unexpected `cast` or `class_encode_column` result after `rename_column` | {
"login": "quaeast",
"id": 35144675,
"node_id": "MDQ6VXNlcjM1MTQ0Njc1",
"avatar_url": "https://avatars.githubusercontent.com/u/35144675?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/quaeast",
"html_url": "https://github.com/quaeast",
"followers_url": "https://api.github.com/users/quaeast/followers",
"following_url": "https://api.github.com/users/quaeast/following{/other_user}",
"gists_url": "https://api.github.com/users/quaeast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/quaeast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/quaeast/subscriptions",
"organizations_url": "https://api.github.com/users/quaeast/orgs",
"repos_url": "https://api.github.com/users/quaeast/repos",
"events_url": "https://api.github.com/users/quaeast/events{/privacy}",
"received_events_url": "https://api.github.com/users/quaeast/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi! Unfortunately, I can't reproduce this issue locally (in Python 3.7/3.10) or in Colab. I would assume this is due to a bug we fixed in the latest release, but your version is up-to-date, so I'm not sure if there is something we can do to help...",
"Hi, 方子东. I tried running the code with exact the same configuration (both datasets 2.5.2 and 2.6.1, python, pyarrow, pandas), but on Linux. The results seem to be the expected `{<pyarrow.Int64Scalar: 4>, <pyarrow.Int64Scalar: 2>, <pyarrow.Int64Scalar: 3>, <pyarrow.Int64Scalar: 0>, <pyarrow.Int64Scalar: 1>}`.\r\nI don't have a Mac device. I can't verify whether this is a M1 chip-specific problem.",
"I've just tested the code on my M1 Mac, and it behaves as expected.",
"> Hi! Unfortunately, I can't reproduce this issue locally (in Python 3.7/3.10) or in Colab. I would assume this is due to a bug we fixed in the latest release, but your version is up-to-date, so I'm not sure if there is something we can do to help...\r\n\r\nThank you for your attention and feel sorry to take your time. Since this is a bug of old version, I think mybe my problem is because `cast` operation directaly used cached data generated by older verion of `datasets`. I tried to deleted the cached data and I got expected result.\r\n"
] | 2022-10-18T11:15:24 | 2022-10-19T03:02:26 | 2022-10-19T03:02:26 | NONE | null | ## Describe the bug
When invoking `cast` or `class_encode_column` on a column renamed by `rename_column`, it converts all the values in that column into a single value. I also ran this script with version 2.5.2, where this bug does not appear, so I switched back to the older version.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("amazon_reviews_multi", "en")
data = dataset['train']
data = data.remove_columns(
[
"review_id",
"product_id",
"reviewer_id",
"review_title",
"language",
"product_category",
]
)
data = data.rename_column("review_body", "text")
data1 = data.class_encode_column("stars")
print(set(data1.data.columns[0]))
# output: {<pyarrow.Int64Scalar: 4>, <pyarrow.Int64Scalar: 2>, <pyarrow.Int64Scalar: 3>, <pyarrow.Int64Scalar: 0>, <pyarrow.Int64Scalar: 1>}
data = data.rename_column("stars", "label")
print(set(data.data.columns[0]))
# output: {<pyarrow.Int32Scalar: 5>, <pyarrow.Int32Scalar: 4>, <pyarrow.Int32Scalar: 1>, <pyarrow.Int32Scalar: 3>, <pyarrow.Int32Scalar: 2>}
data2 = data.class_encode_column("label")
print(set(data2.data.columns[0]))
# output: {<pyarrow.Int64Scalar: 0>}
```
## Expected results
the last print should be:
{<pyarrow.Int64Scalar: 4>, <pyarrow.Int64Scalar: 2>, <pyarrow.Int64Scalar: 3>, <pyarrow.Int64Scalar: 0>, <pyarrow.Int64Scalar: 1>}
## Actual results
but it outputs:
{<pyarrow.Int64Scalar: 0>}
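A plausible explanation, consistent with the follow-up in the comments, is that `class_encode_column` reused cache files written by an older `datasets` version. A minimal sketch for ruling this out (it assumes the `data` object from the snippet above):
```python
# Sketch: force recomputation instead of reusing possibly stale cached Arrow files.
from datasets import disable_caching

disable_caching()  # or delete the cache dir, by default ~/.cache/huggingface/datasets
data = data.rename_column("stars", "label")
data2 = data.class_encode_column("label")
print(set(data2.data.columns[0]))
```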
## Environment info
- `datasets` version: 2.6.1
- Platform: macOS-12.5.1-arm64-arm-64bit
- Python version: 3.10.6
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5129/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5129/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5128 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5128/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5128/comments | https://api.github.com/repos/huggingface/datasets/issues/5128/events | https://github.com/huggingface/datasets/pull/5128 | 1,412,783,855 | PR_kwDODunzps5A_k9s | 5,128 | Make filename matching more robust | {
"login": "riccardobucco",
"id": 9295277,
"node_id": "MDQ6VXNlcjkyOTUyNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/riccardobucco",
"html_url": "https://github.com/riccardobucco",
"followers_url": "https://api.github.com/users/riccardobucco/followers",
"following_url": "https://api.github.com/users/riccardobucco/following{/other_user}",
"gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions",
"organizations_url": "https://api.github.com/users/riccardobucco/orgs",
"repos_url": "https://api.github.com/users/riccardobucco/repos",
"events_url": "https://api.github.com/users/riccardobucco/events{/privacy}",
"received_events_url": "https://api.github.com/users/riccardobucco/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> I think we should also modify one of the metadata files in the `folder_based_builder` tests to make sure \"./\" is ignored now in the `file_name`\r\n\r\n@mariosasko what do you mean here? I'm not sure which metadata file I should modify here",
"You can modify this line for instance: https://github.com/huggingface/datasets/blob/2699593b33ee63d17aad2a2bfddedd38a8df57b8/tests/packaged_modules/test_folder_based_builder.py#L135"
] | 2022-10-18T08:22:48 | 2022-10-28T13:07:38 | 2022-10-28T13:05:06 | CONTRIBUTOR | null | Fix #5046 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5128/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5128/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5128",
"html_url": "https://github.com/huggingface/datasets/pull/5128",
"diff_url": "https://github.com/huggingface/datasets/pull/5128.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5128.patch",
"merged_at": "2022-10-28T13:05:06"
} | true |