url (stringlengths 61-61) | repository_url (stringclasses 1 value) | labels_url (stringlengths 75-75) | comments_url (stringlengths 70-70) | events_url (stringlengths 68-68) | html_url (stringlengths 49-51) | id (int64 758M-1.95B) | node_id (stringlengths 18-32) | number (int64 1.2k-6.31k) | title (stringlengths 1-290) | user (dict) | labels (listlengths 0-3) | state (stringclasses 2 values) | locked (bool 1 class) | assignee (dict) | assignees (listlengths 0-4) | milestone (dict) | comments (sequencelengths 0-30) | created_at (unknown) | updated_at (unknown) | closed_at (unknown) | author_association (stringclasses 3 values) | active_lock_reason (float64) | draft (float64 0, 1, ⌀) | pull_request (dict) | body (stringlengths 0-36.2k, ⌀) | reactions (dict) | timeline_url (stringlengths 70-70) | performed_via_github_app (float64) | state_reason (stringclasses 3 values) | is_pull_request (bool 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/2741 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2741/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2741/comments | https://api.github.com/repos/huggingface/datasets/issues/2741/events | https://github.com/huggingface/datasets/issues/2741 | 957,979,559 | MDU6SXNzdWU5NTc5Nzk1NTk= | 2,741 | Add Hypersim dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/osanseviero",
"id": 7246357,
"login": "osanseviero",
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"type": "User",
"url": "https://api.github.com/users/osanseviero"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",
"default": false,
"description": "Vision datasets",
"id": 3608941089,
"name": "vision",
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision"
}
] | open | false | null | [] | null | [] | "2021-08-02T10:06:50Z" | "2021-12-08T12:06:51Z" | null | MEMBER | null | null | null | ## Adding a Dataset
- **Name:** Hypersim
- **Description:** photorealistic synthetic dataset for holistic indoor scene understanding
- **Paper:** *link to the dataset paper if available*
- **Data:** https://github.com/apple/ml-hypersim
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2741/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2741/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5231 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5231/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5231/comments | https://api.github.com/repos/huggingface/datasets/issues/5231/events | https://github.com/huggingface/datasets/issues/5231 | 1,445,883,267 | I_kwDODunzps5WLm2D | 5,231 | Using `set_format(type='torch', columns=columns)` makes Array2D/3D columns stop formatting correctly | {
"avatar_url": "https://avatars.githubusercontent.com/u/99206017?v=4",
"events_url": "https://api.github.com/users/plamb-viso/events{/privacy}",
"followers_url": "https://api.github.com/users/plamb-viso/followers",
"following_url": "https://api.github.com/users/plamb-viso/following{/other_user}",
"gists_url": "https://api.github.com/users/plamb-viso/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/plamb-viso",
"id": 99206017,
"login": "plamb-viso",
"node_id": "U_kgDOBenDgQ",
"organizations_url": "https://api.github.com/users/plamb-viso/orgs",
"received_events_url": "https://api.github.com/users/plamb-viso/received_events",
"repos_url": "https://api.github.com/users/plamb-viso/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/plamb-viso/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/plamb-viso/subscriptions",
"type": "User",
"url": "https://api.github.com/users/plamb-viso"
} | [] | closed | false | null | [] | null | [
"In case others find this, the problem was not with set_format, but my usages of `to_pandas()` and `from_pandas()` which I was using during dataset splitting; somewhere in the chain of converting to and from pandas the `Array2D/Array3D` types get converted to series of `Sequence()` types"
] | "2022-11-11T18:54:36Z" | "2022-11-11T20:42:29Z" | "2022-11-11T18:59:50Z" | NONE | null | null | null | I have a Dataset with two Features defined as follows:
```
'image': Array3D(dtype="int64", shape=(3, 224, 224)),
'bbox': Array2D(dtype="int64", shape=(512, 4)),
```
On said dataset, if I `dataset.set_format(type='torch')` and then use the dataset in a dataloader, these columns are correctly cast to Tensors of (batch_size, 3, 224, 224) for example.
However, if I `dataset.set_format(type='torch', columns=['image', 'bbox'])` these columns are cast to Lists of tensors and miss the batch size completely (the 3 dimension is the list length).
I'm currently digging through datasets formatting code to try and find out why, but was curious if someone knew an immediate solution for this. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5231/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5231/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5776 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5776/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5776/comments | https://api.github.com/repos/huggingface/datasets/issues/5776/events | https://github.com/huggingface/datasets/issues/5776 | 1,677,116,100 | I_kwDODunzps5j9sLE | 5,776 | Use Pandas' `read_json` in the JSON builder | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
] | null | [] | "2023-04-20T17:15:49Z" | "2023-04-20T17:15:49Z" | null | CONTRIBUTOR | null | null | null | Instead of PyArrow's `read_json`, we should use `pd.read_json` in the JSON builder for consistency with the CSV and SQL builders (e.g., to address https://github.com/huggingface/datasets/issues/5725).
In Pandas 2.0, to get the same performance, we can set the `engine` to "pyarrow". The issue is that Colab still doesn't install Pandas 2.0 by default, so I think it's best to wait for this to be resolved on their side to avoid downgrading decoding performance in scenarios when Pandas 2.0 is not installed. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5776/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5776/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2087 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2087/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2087/comments | https://api.github.com/repos/huggingface/datasets/issues/2087/events | https://github.com/huggingface/datasets/pull/2087 | 836,587,392 | MDExOlB1bGxSZXF1ZXN0NTk3MDg4NTk2 | 2,087 | Update metadata if dataset features are modified | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"@lhoestq I'll try to add a test later if you think this approach with the wrapper is good.",
"Awesome thank you !\r\nYes this approach with a wrapper is good :)",
"@lhoestq Added a test. To verify that this change fixes the problem, replace:\r\n```\r\n!pip install datasets==1.5\r\n```\r\nwith:\r\n```\r\n!pip install git+https://github.com/mariosasko/datasets-1.git@update-metadata\r\n```\r\nin the first cell of the notebook that is attached to the linked issue.\r\n\r\nThe CI failure is unrelated I think (building the docs locally doesn't throw an error).",
"The CI fail for the docs has been fixed on master.\r\nMerging :)"
] | "2021-03-20T02:05:23Z" | "2021-04-09T09:25:33Z" | "2021-04-09T09:25:33Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2087.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2087",
"merged_at": "2021-04-09T09:25:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2087.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2087"
} | This PR adds a decorator that updates the dataset metadata if a previously executed transform modifies its features.
Fixes #2083
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2087/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2087/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1936 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1936/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1936/comments | https://api.github.com/repos/huggingface/datasets/issues/1936/events | https://github.com/huggingface/datasets/pull/1936 | 814,726,512 | MDExOlB1bGxSZXF1ZXN0NTc4NjY3NTQ4 | 1,936 | [WIP] Adding Support for Reading Pandas Category | {
"avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4",
"events_url": "https://api.github.com/users/justin-yan/events{/privacy}",
"followers_url": "https://api.github.com/users/justin-yan/followers",
"following_url": "https://api.github.com/users/justin-yan/following{/other_user}",
"gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/justin-yan",
"id": 7731709,
"login": "justin-yan",
"node_id": "MDQ6VXNlcjc3MzE3MDk=",
"organizations_url": "https://api.github.com/users/justin-yan/orgs",
"received_events_url": "https://api.github.com/users/justin-yan/received_events",
"repos_url": "https://api.github.com/users/justin-yan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/justin-yan"
} | [] | closed | false | null | [] | null | [
"Thanks ! could you maybe add a few tests in test_arrow_dataset.py to make sure from_pandas works as expected with categorical types ?\r\n\r\nIn particular I'm pretty sure that if you now try to `cast` the dataset to the same features at its current features, it will break instead of just being a no-op.\r\nThis is because `features.type` returns an arrow int64 type for the classlabel column instead of the arrow dictionary type that you have in the arrow table. There are two issues in this case:\r\n- it will try to replace the arrow type from dictionary to int64 instead of being a no-op\r\n- it will crash because pyarrow is not able to cast a dictionary to int64 (even if it's actually possible do cast the column by hand by accessing the sub-array of the dictionary array containing the indices/integers)\r\n\r\nIt would be awesome to fix this case ! Ideally the arrow `pa_type` of classlabel ([here](https://github.com/huggingface/datasets/blob/7072e1becd69d421d863374b825e3da4c6551798/src/datasets/features.py#L558)) should be an arrow dictionary type. This should fix the issue. Then we can start working on backward compatibility.\r\n\r\nLet me know if you have questions or if I can help.\r\nIn particular if there is some glue-ing to do I can take care of that if you want ;)\r\n\r\n--------------\r\n\r\nAlso just a few information regarding the functions you mentioned\r\n\r\n`int2str` and `str2int` are used by users to transforms the labels if they want to. Here sine ClassLabel is instantiated without the class names, they would crash. I was about to make a PR to disallow the creation of an empty ClassLabel feature type.\r\nTherefore can you provide class_names= when creating the ClassLabel ?\r\n\r\n`encode_example` is mostly used with a dataset builder (e.g. squad.py) so it's not used when using .from_pandas.\r\n\r\n\r\n",
"Got it - that's super helpful, I was trying to figure out what would break!\r\n\r\nI think there are two issues we're discussing here:\r\n\r\n1. modifying the pa_type of ClassLabel: totally agree with you on that one if that's OK from a back-compat perspective. (i.e. are users of `datasets` not supposed to access or use the .pa_type attribute of ClassLabel?)\r\n2. creating a ClassLabel requires information that's not present on the pa.DictionaryType object: I think the crux of the problem is that at this line (https://github.com/huggingface/datasets/pull/1936/files#diff-54081ede051fd0a7ef65748c481cc06f90209f01bb89968747089d13a2ca052bR933) - you only have access to the `pa_type`, which is `DictionaryType[int8, string]`. I've unpacked it and looked at all of the available methods, and I don't believe that any of the actual values (\"names\") are present - those are stored on the `pyarrow.DictArray.dictionary` attribute (i.e. as data, not on the pyarrow.DataType) - so in order to actually be able to instantiate the ClassLabel with the names= parameter, we need to pass in more information to this method.\r\n\r\nWe *could* mostly accomplish this by modifying https://github.com/huggingface/datasets/pull/1936/files#diff-54081ede051fd0a7ef65748c481cc06f90209f01bb89968747089d13a2ca052bR909 to accept a pyarrow Table in addition to the type, and it's not too difficult to do, but it feels a little bit off to me:\r\n\r\n- It feels a bit off that a \"schema\" definition will change depending on what data gets added to the dataset. In particular, if someone adds rows or concatenates two datasets, the ClassLabel \"names\" will also need to change, right? I think maybe we're getting around this because a Dataset is immutable (I think?) and so any new dataset is freshly constructed, but for example - I think this check wouldn't work for `ClassLabel`s if we were to compare the `Dataset.features` instead of the underlying pyarrow type https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L2664\r\n- To that end I wonder if ClassLabel should actually just be the \"type\" akin to Category, and the \"names\" should be considered \"data\" and not part of the \"type\"? Similar to how pyarrow maintains two data objects - the array of indices and the array of string values.\r\n\r\nWith that in mind, I'm wondering if you *should* allow an empty ClassLabel (and`int2str`, etc. can be updated to have more descriptive error messages if labels aren't provided or inferred), and if the underlying data is a pa.DictionaryType, then the names can be inferred and applied at these points in the code:\r\n- https://github.com/huggingface/datasets/blob/96578adface7e4bc1f3e8bafbac920d72ca1ca60/src/datasets/arrow_dataset.py#L274\r\n- https://github.com/huggingface/datasets/blob/96578adface7e4bc1f3e8bafbac920d72ca1ca60/src/datasets/arrow_dataset.py#L686\r\n- https://github.com/huggingface/datasets/blob/96578adface7e4bc1f3e8bafbac920d72ca1ca60/src/datasets/arrow_dataset.py#L673\r\n\r\nI think perhaps the mismatch here is when the data is stored on disk as an int there should be a convenient way of saying \"this is a dictionary and here are some explicitly provided labels\", whereas when it's stored as a string, we'd ideally like to say \"this is a Category and please condense the representation and automatically infer the labels\".\r\n\r\nSorry for the long comment! Hopefully my thoughts make sense - thanks for taking the time to discuss!",
"Yes that makes sense. I completely forgot that the label names of an arrow Dictionary type were not stored in the type but in the DictionaryArray.\r\n\r\nThis is made me realize that it's actually pretty unpractical and I feel that handling this can add unnecessary complexity in the handling of dtypes.\r\nMore specifically:\r\n- it's not possible to create a DictionaryArray from a call to pyarrow.array with python objects, which is the function we use to convert python objects to pyarrow objects (or we would need to convert the python objects to pandas categorical series beforehand but it doesn't work for nested types)\r\n- casting nested types containing Dictionary types would require a lot of array manipulations since it's not compatible with pyarrow.array.cast\r\n\r\nI feel like the original feature request (support of pandas Categorical) should be addressable without adding so much complexity to the library.\r\n\r\nIf we admit that we don't want to deal with arrow Dictionary type, maybe we can simply convert the pandas categorical series to an int64 series and set the feature type to the right ClassLabel in `from_pandas`. We can have the reverse operation in `to_pandas`. This way we don't need to support the arrow DictionaryType and so we can keep simple/accessible code for conversion from python to arrow and also for type casting. Let me know what you think.\r\n\r\nIn the future depending on the usage of the ClassLabel types with pandas/pyarrow we might reconsider this but for now I believe this simple solution is enough.",
"I like that idea! Let me try working up a PR for this",
"OK! I just whipped up the `from_pandas()` portion of this PR, and it works, though I'm not *super* familiar with the available APIs so I'm not sure if there's a more \"vectorized\" way of doing all of these updates - so happy to get some feedback and iterate!\r\n\r\nApologies for multiple commits - I realized how to solve a few different problems right after I gave up and pushed with the intent to ask for help :-)\r\n\r\nI wanted to get some guidance on how to handle the reverse direction: I think there are two main areas to look at, `.to_pandas()` and also `.set_format('pandas')` and then pulling out a dataframe like so: `dataset[:]`. Is there a single place where I can handle both of these cases at once or do these need to be handled independently?",
"Thanks ! This is awesome :) \r\nCould you also add a test ? There is already `test_to_pandas` in test_arrow_dataset.py\r\nFeel free to complete this test to make sure it works for Categorical :)\r\n\r\nTo make it work with the \"pandas\" formating (when you do `set_format(\"pandas\")` and then query `dataset[0]`, `dataset[:]`, etc.), you can take a look and the `PandasFormatter` in formatting.py\r\nIt takes a pyarrow table as input of its formatting methods (one method for rows, one for columns and one for batches) and returns a pandas DataFrame (or a Series for the method for formatting a column). You can cast to Categorical in each one of the formatter methods and it should work directly when you use a pandas-formatted dataset.\r\n\r\nThis formatter can then also be used in `to_pandas` (currently it does `pa_table.to_pandas()` but `PandasFormatter().format_batch(pa_table)` can be used instead)."
] | "2021-02-23T18:32:54Z" | "2022-03-09T18:46:22Z" | "2022-03-09T18:46:22Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1936.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1936",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1936.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1936"
} | @lhoestq - continuing our conversation from https://github.com/huggingface/datasets/issues/1906#issuecomment-784247014
The goal of this PR is to support `Dataset.from_pandas(df)` where the dataframe contains a Category.
Just the 4 line change below actually does seem to work:
```
>>> from datasets import Dataset
>>> import pandas as pd
>>> df = pd.DataFrame(pd.Series(["a", "b", "c", "a"], dtype="category"))
>>> ds = Dataset.from_pandas(df)
>>> ds.to_pandas()
0
0 a
1 b
2 c
3 a
>>> ds.to_pandas().dtypes
0 category
dtype: object
```
save_to_disk, etc. all seem to work as well. The main things that are theoretically "incorrect" if we leave this are:
```
>>> ds.features.type
StructType(struct<0: int64>)
```
there are a decent number of references to this property in the library, but I can't find anything that seems to actually break as a result of this being int64 vs. dictionary? I think the gist of my question is: a) do we *need* to change the dtype of Classlabel and have get_nested_type return a pyarrow.DictionaryType instead of int64? and b) do you *want* it to change? The biggest challenge I see to implementing this correctly is that the data will need to be passed in along with the pyarrow schema when instantiating the Classlabel (I *think* this is unavoidable, since the type itself doesn't contain the actual label values) which could be a fairly intrusive change - e.g. `from_arrow_schema`'s interface would need to change to include optional arrow data? Once we start going down this path of modifying the public interfaces I am admittedly feeling a little bit outside of my comfort zone
Additionally I think `int2str`, `str2int`, and `encode_example` probably won't work - but I can't find any usages of them in the library itself. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1936/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1936/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4048 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4048/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4048/comments | https://api.github.com/repos/huggingface/datasets/issues/4048/events | https://github.com/huggingface/datasets/issues/4048 | 1,183,804,576 | I_kwDODunzps5Gj2yg | 4,048 | Split size error on `amazon_us_reviews` / `PC_v1_00` dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/191985?v=4",
"events_url": "https://api.github.com/users/trentonstrong/events{/privacy}",
"followers_url": "https://api.github.com/users/trentonstrong/followers",
"following_url": "https://api.github.com/users/trentonstrong/following{/other_user}",
"gists_url": "https://api.github.com/users/trentonstrong/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/trentonstrong",
"id": 191985,
"login": "trentonstrong",
"node_id": "MDQ6VXNlcjE5MTk4NQ==",
"organizations_url": "https://api.github.com/users/trentonstrong/orgs",
"received_events_url": "https://api.github.com/users/trentonstrong/received_events",
"repos_url": "https://api.github.com/users/trentonstrong/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/trentonstrong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trentonstrong/subscriptions",
"type": "User",
"url": "https://api.github.com/users/trentonstrong"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/191985?v=4",
"events_url": "https://api.github.com/users/trentonstrong/events{/privacy}",
"followers_url": "https://api.github.com/users/trentonstrong/followers",
"following_url": "https://api.github.com/users/trentonstrong/following{/other_user}",
"gists_url": "https://api.github.com/users/trentonstrong/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/trentonstrong",
"id": 191985,
"login": "trentonstrong",
"node_id": "MDQ6VXNlcjE5MTk4NQ==",
"organizations_url": "https://api.github.com/users/trentonstrong/orgs",
"received_events_url": "https://api.github.com/users/trentonstrong/received_events",
"repos_url": "https://api.github.com/users/trentonstrong/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/trentonstrong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trentonstrong/subscriptions",
"type": "User",
"url": "https://api.github.com/users/trentonstrong"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/191985?v=4",
"events_url": "https://api.github.com/users/trentonstrong/events{/privacy}",
"followers_url": "https://api.github.com/users/trentonstrong/followers",
"following_url": "https://api.github.com/users/trentonstrong/following{/other_user}",
"gists_url": "https://api.github.com/users/trentonstrong/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/trentonstrong",
"id": 191985,
"login": "trentonstrong",
"node_id": "MDQ6VXNlcjE5MTk4NQ==",
"organizations_url": "https://api.github.com/users/trentonstrong/orgs",
"received_events_url": "https://api.github.com/users/trentonstrong/received_events",
"repos_url": "https://api.github.com/users/trentonstrong/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/trentonstrong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trentonstrong/subscriptions",
"type": "User",
"url": "https://api.github.com/users/trentonstrong"
}
] | null | [
"Follow-up: I have confirmed there are no duplicate lines via `sort amazon_reviews_us_PC_v1_00.tsv | uniq -cd` after extracting the raw file.",
"Hi @trentonstrong, thanks for reporting!\r\n\r\nI confirm that loading this dataset configuration throws a `NonMatchingSplitsSizesError`:\r\n```\r\nNonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=350242049, num_examples=785730, dataset_name='amazon_us_reviews'), 'recorded': SplitInfo(name='train', num_bytes=3982712078, num_examples=6908554, dataset_name='amazon_us_reviews')}]\r\n```\r\n\r\nAlso thank you for your offer to fix this. You can find information about how to update the metadata JSON file here: https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#automatically-add-code-metadata\r\n```shell\r\ndatasets-cli test datasets/amazon_us_reviews --save_infos --all_configs\r\n```\r\nPlease, feel free to open a PR with this fix. And do not hesitate to ping me if you need any help.",
"No sweat. Will get it patched up ASAP."
] | "2022-03-28T18:12:04Z" | "2022-04-08T12:29:30Z" | "2022-04-08T12:29:30Z" | CONTRIBUTOR | null | null | null | ## Describe the bug
When downloading this subset as of 3-28-2022 you will encounter a split size error after the dataset is extracted. The extracted dataset has roughly ~6m rows while the split expects <1m.
Upon digging a little deeper, I downloaded the raw files from `https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_PC_v1_00.tsv.gz` and extracted them. A line count via `wc -l` confirms the ~6m number that we see and the data looks valid at a glance (I did not check for duplicate rows). My guess is this file has either been updated in place or there is a bug in the dataset metadata.
Happy to submit a PR and fix this up if it turns out to be a metadata issue, but wanted to get some other :eyes: on it first.
## Steps to reproduce the bug
```python
load_dataset('amazon_us_reviews', 'PC_v1_00')
```
## Expected results
Dataset is downloaded and extracted successfully.
## Actual results
A split size exception is thrown.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4048/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4048/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5252 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5252/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5252/comments | https://api.github.com/repos/huggingface/datasets/issues/5252/events | https://github.com/huggingface/datasets/pull/5252 | 1,451,765,838 | PR_kwDODunzps5DCI1U | 5,252 | Support for decoding Image/Audio types in map when format type is not default one | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5252). All of your documentation changes will be reflected on that endpoint.",
"Yes, if the image column is the first in the batch keys, it will decode the images because it reads the actual values. We could avoid this by checking the batch type, and if it's `LazyDict`, `num_examples` is equal to `len(batch.pa_table)`, which doesn't lead to decoding.",
"Good idea. This can be done in a subsequent PR btw, since it's out of scope of the original goal of this PR",
"Just fixed a small bug where it would show the pyarrow 10 warning about None -> empty lists conversions even with an Array2D with no nulls",
"Fixed another bug when your map function returns a mix of LazyDict or regular dict and added some tests"
] | "2022-11-16T15:02:13Z" | "2022-12-13T17:01:54Z" | "2022-12-13T16:59:04Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5252.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5252",
"merged_at": "2022-12-13T16:59:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5252.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5252"
} | Add support for decoding the `Image`/`Audio` types in `map` for the formats (Numpy, TF, Jax, PyTorch) other than the default one (Python).
Additional improvements:
* make `Dataset`'s "iter" API cleaner by removing `_iter` and replacing `_iter_batches` with `iter(batch_size)` (also implemented for `IterableDataset`)
* iterate over arrow tables in `map` to avoid `_getitem` calls, which are much slower than `__iter__`/`iter(batch_size)`, when the `format_type` is not Python
* fix `_iter_batches` (now named `iter`) when `drop_last_batch=True` and `pyarrow<=8.0.0` is installed
* lazily extract and decode arrow data in the default format
TODO:
* [x] update the `iter` benchmark in the docs (the `BeamBuilder` cannot load the preprocessed datasets from our bucket, so wait for this to be fixed (cc @lhoestq))
Fix https://github.com/huggingface/datasets/issues/3992, fix https://github.com/huggingface/datasets/issues/3756 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5252/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5252/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2673 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2673/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2673/comments | https://api.github.com/repos/huggingface/datasets/issues/2673/events | https://github.com/huggingface/datasets/pull/2673 | 947,300,008 | MDExOlB1bGxSZXF1ZXN0NjkyMzAxMTgw | 2,673 | Fix potential DuplicatedKeysError in SQuAD | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | "2021-07-19T06:08:00Z" | "2021-07-19T07:08:03Z" | "2021-07-19T07:08:03Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2673.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2673",
"merged_at": "2021-07-19T07:08:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2673.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2673"
} | DONE:
- Fix potential DuplicatedKeysError by ensuring keys are unique.
- Align examples in the docs with SQuAD code.
We should promote as a good practice that the keys should be programmatically generated as unique, instead of read from data (which might not be unique). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2673/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2673/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4629 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4629/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4629/comments | https://api.github.com/repos/huggingface/datasets/issues/4629/events | https://github.com/huggingface/datasets/issues/4629 | 1,293,418,800 | I_kwDODunzps5NGAEw | 4,629 | Rename repo default branch to main | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [] | "2022-07-04T17:16:10Z" | "2022-07-06T15:49:57Z" | "2022-07-06T15:49:57Z" | MEMBER | null | null | null | Rename repository default branch to `main` (instead of current `master`).
Once renamed, users will have to manually update their local repos:
- [ ] Upstream:
```
git branch -m master main
git fetch upstream main
git branch -u upstream/main main
git remote set-head upstream -a
```
- [ ] Origin:
Rename fork default branch as well at: https://github.com/USERNAME/lam/settings/branches
Then:
```
git fetch origin main
git remote set-head origin -a
```
CC: @sgugger | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4629/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4629/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3080 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3080/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3080/comments | https://api.github.com/repos/huggingface/datasets/issues/3080/events | https://github.com/huggingface/datasets/issues/3080 | 1,026,380,626 | I_kwDODunzps49LVNS | 3,080 | Error related to timeout keyword argument | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [] | "2021-10-14T13:10:58Z" | "2021-10-14T14:39:51Z" | "2021-10-14T14:39:51Z" | MEMBER | null | null | null | ## Describe the bug
As reported by @patrickvonplaten, a TypeError is raised when trying to load a dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean")
```
## Actual results
```
TypeError: dataset_info() got an unexpected keyword argument 'timeout'
```
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3080/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3080/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4446 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4446/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4446/comments | https://api.github.com/repos/huggingface/datasets/issues/4446/events | https://github.com/huggingface/datasets/pull/4446 | 1,260,028,995 | PR_kwDODunzps45E1Qb | 4,446 | Add missing kwargs to docstrings | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-06-03T15:10:27Z" | "2022-06-03T16:10:09Z" | "2022-06-03T16:01:29Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4446.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4446",
"merged_at": "2022-06-03T16:01:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4446.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4446"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4446/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4446/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4570 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4570/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4570/comments | https://api.github.com/repos/huggingface/datasets/issues/4570/events | https://github.com/huggingface/datasets/issues/4570 | 1,284,846,168 | I_kwDODunzps5MlTJY | 4,570 | Dataset sharding non-contiguous? | {
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cakiki",
"id": 3664563,
"login": "cakiki",
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"repos_url": "https://api.github.com/users/cakiki/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cakiki"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"This was silly; I was sure I'd looked for a `contiguous` argument, and was certain there wasn't one the first time I looked :smile:\r\n\r\nSorry about that.",
"Hi! You can pass `contiguous=True` to `.shard()` get contiguous shards. More info on this and the default behavior can be found in the [docs](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#datasets.Dataset.shard).\r\n\r\nEDIT: Answered as you closed the thread 😄 ",
"Hahaha I'm sorry; my excuse is: it's Sunday. (Which makes me all the more grateful for your response :smiley: ",
"@mariosasko Sorry for reviving this, but I was curious as to why `contiguous=False` was the default. This might be a personal bias, but I feel that a user would expect the opposite to be the default. :thinking: ",
"This project started as a fork of TFDS, and `contiguous=False` is the default behavior [there](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shard)."
] | "2022-06-26T08:34:05Z" | "2022-06-30T11:00:47Z" | "2022-06-26T14:36:20Z" | CONTRIBUTOR | null | null | null | ## Describe the bug
I'm not sure if this is a bug; more likely normal behavior, but I wanted to double check.
Is it normal that `datasets.shard` does not produce chunks that, when concatenated, produce the original ordering of the sharded dataset?
This might be related to this pull request (https://github.com/huggingface/datasets/pull/4466) but I have to admit I did not properly look into the changes made.
## Steps to reproduce the bug
```python
max_shard_size = convert_file_size_to_int('300MB')
dataset_nbytes = dataset.data.nbytes
num_shards = int(dataset_nbytes / max_shard_size) + 1
num_shards = max(num_shards, 1)
print(f"{num_shards=}")
for shard_index in range(num_shards):
shard = dataset.shard(num_shards=num_shards, index=shard_index)
shard.to_parquet(f"tokenized/tokenized-{shard_index:03d}.parquet")
os.listdir('tokenized/')
```
## Expected results
I expected the shards to match the order of the data of the original dataset; i.e. `dataset[10]` being the same as `shard_1[10]` for example
## Actual results
Only the first element is the same; i.e. `dataset[0]` is the same as `shard_1[0]`
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Linux-4.15.0-176-generic-x86_64-with-glibc2.31
- Python version: 3.10.4
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4570/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4570/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2159 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2159/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2159/comments | https://api.github.com/repos/huggingface/datasets/issues/2159/events | https://github.com/huggingface/datasets/issues/2159 | 848,851,962 | MDU6SXNzdWU4NDg4NTE5NjI= | 2,159 | adding ccnet dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | [
"closing since I think this is cc100, just the name has been changed. thanks "
] | "2021-04-01T23:28:36Z" | "2021-04-02T10:05:19Z" | "2021-04-02T10:05:19Z" | NONE | null | null | null | ## Adding a Dataset
- **Name:** ccnet
- **Description:**
Common Crawl
- **Paper:**
https://arxiv.org/abs/1911.00359
- **Data:**
https://github.com/facebookresearch/cc_net
- **Motivation:**
This is one of the most comprehensive clean monolingual datasets across a variety of languages. It is quite important for cross-lingual research.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
thanks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2159/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2159/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5585 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5585/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5585/comments | https://api.github.com/repos/huggingface/datasets/issues/5585/events | https://github.com/huggingface/datasets/issues/5585 | 1,602,190,030 | I_kwDODunzps5ff3rO | 5,585 | Cache is not transportable | {
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/davidgilbertson",
"id": 4443482,
"login": "davidgilbertson",
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/davidgilbertson"
} | [] | closed | false | null | [] | null | [
"Hi ! No the cache is not transportable in general. It will work on a shared filesystem if you use the same python environment, but not across machines/os/environments.\r\n\r\nIn particular, reloading cached datasets does work, but reloading cached processed datasets (e.g. from `map`) may not work. This is because some hashes used by caching are based on pickle dumps of the function you pass to `map`.\r\n\r\nFinally you may copy the cache to another machine, but all the `cached-*.arrow` files are unlikely to be reloaded.",
"OK good to know. Thanks @lhoestq !"
] | "2023-02-28T00:53:06Z" | "2023-02-28T21:26:52Z" | "2023-02-28T21:26:52Z" | NONE | null | null | null | ### Describe the bug
I would like to share the cache between two machines (a Windows host machine and a WSL instance).
I run most of my code in WSL. I have just run out of space on the virtual drive. Rather than expand the drive size, I plan to move the cache to the host Windows machine, thereby sharing the downloads.
I'm hoping that I can just copy/paste the cache files, but I notice that a lot of the file names start with the path name, e.g. `_home_davidg_.cache_huggingface_datasets_conll2003_default-451...98.lock` where `home/davidg` is where the cache is in WSL.
This seems to suggest that the cache is not portable/cannot be centralised or shared. Is this the case, or are the files that start with path names not integral to the caching mechanism? Because copying the cache files _seems_ to work, but I'm not filled with confidence that something isn't going to break.
A related issue: when trying to load a dataset that should come from the cache (running in WSL, pointing to the cache on the Windows host), it seemed to work fine, but it still uses a WSL directory for `.cache\huggingface\modules\datasets_modules`. I see nothing in the docs about this, or about how to point it to a different place.
I have asked a related question on the forum: https://discuss.huggingface.co/t/is-datasets-cache-operating-system-agnostic/32656
### Steps to reproduce the bug
View the cache directory in WSL/Windows.
### Expected behavior
The cache can be shared between (virtual) machines and is transportable.
It would be nice to have a simple way to say "Dear Hugging Face packages, please put ALL your cache in `blah/de/blah`" and have all the Hugging Face packages respect that single location.
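For reference, a minimal sketch of how a single cache location can be selected today, assuming the standard `HF_HOME` / `HF_DATASETS_CACHE` environment variables and the `cache_dir` argument; the paths below are hypothetical Windows-host mounts:
```python
import os

# These variables are typically read when the libraries are imported, so set them first.
os.environ["HF_HOME"] = "/mnt/c/hf_cache"                     # hypothetical shared location on the Windows host
os.environ["HF_DATASETS_CACHE"] = "/mnt/c/hf_cache/datasets"

from datasets import load_dataset

# A per-call override is also possible.
ds = load_dataset("conll2003", cache_dir="/mnt/c/hf_cache/datasets")
```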
### Environment info
```
- `datasets` version: 2.9.0
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.8
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5585/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5585/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3482 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3482/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3482/comments | https://api.github.com/repos/huggingface/datasets/issues/3482/events | https://github.com/huggingface/datasets/pull/3482 | 1,088,317,921 | PR_kwDODunzps4wQqE1 | 3,482 | Fix duplicate keys in NewsQA | {
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bryant1410",
"id": 3905501,
"login": "bryant1410",
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bryant1410"
} | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | [] | null | [
"Flaky tests?",
"Thanks for your contribution, @bryant1410.\r\n\r\nI think the fix of the duplicate key in this PR was superseded by:\r\n- #3696\r\n\r\nI'm closing this because we are moving all dataset scripts from GitHub to the Hugging Face Hub."
] | "2021-12-24T11:01:59Z" | "2022-09-23T12:57:10Z" | "2022-09-23T12:57:10Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3482.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3482",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3482.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3482"
} | * Fix duplicate keys in NewsQA when loading from CSV files.
* Fix s/narqa/newsqa/ in the manual download error message.
* Make the manual download error message display nicely when printed. Otherwise, it is hard to read due to spacing issues.
* Fix the format of the license text.
* Reformat the code to make it simpler. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3482/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3482/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3209 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3209/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3209/comments | https://api.github.com/repos/huggingface/datasets/issues/3209/events | https://github.com/huggingface/datasets/issues/3209 | 1,044,505,771 | I_kwDODunzps4-QeSr | 3,209 | Unpin keras once TF fixes its release | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | "2021-11-04T09:15:32Z" | "2021-11-05T10:57:37Z" | "2021-11-05T10:57:37Z" | MEMBER | null | null | null | Related to:
- #3208 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3209/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3209/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3151 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3151/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3151/comments | https://api.github.com/repos/huggingface/datasets/issues/3151/events | https://github.com/huggingface/datasets/pull/3151 | 1,033,890,501 | PR_kwDODunzps4tkL7t | 3,151 | Re-add faiss to windows testing suite | {
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BramVanroy",
"id": 2779410,
"login": "BramVanroy",
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BramVanroy"
} | [] | closed | false | null | [] | null | [] | "2021-10-22T19:34:29Z" | "2021-11-02T10:47:34Z" | "2021-11-02T10:06:03Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3151.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3151",
"merged_at": "2021-11-02T10:06:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3151.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3151"
} | In recent versions, `faiss-cpu` seems to be available for Windows as well. See the [PyPi page](https://pypi.org/project/faiss-cpu/#files) to confirm. We can therefore include it for Windows in the setup file.
At first, tests didn't pass due to permission problems caused by `NamedTemporaryFile` on Windows. This built-in library is notoriously bad at playing nice on Windows. The required change isn't pretty, but it works. First, set `delete=False` so it does not automatically try to delete the file on exit. Then, manually delete the file with `unlink`. It's weird, I know, but it works.
```python
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as tmp_file:
    ...  # do stuff; delete=False avoids the Windows permission error on auto-delete
os.unlink(tmp_file.name)  # delete the file manually once the handle is closed
```
closes #3150 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3151/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3151/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6273 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6273/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6273/comments | https://api.github.com/repos/huggingface/datasets/issues/6273/events | https://github.com/huggingface/datasets/issues/6273 | 1,920,922,260 | I_kwDODunzps5yfvKU | 6,273 | Broken Link to PubMed Abstracts dataset . | {
"avatar_url": "https://avatars.githubusercontent.com/u/100606327?v=4",
"events_url": "https://api.github.com/users/sameemqureshi/events{/privacy}",
"followers_url": "https://api.github.com/users/sameemqureshi/followers",
"following_url": "https://api.github.com/users/sameemqureshi/following{/other_user}",
"gists_url": "https://api.github.com/users/sameemqureshi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sameemqureshi",
"id": 100606327,
"login": "sameemqureshi",
"node_id": "U_kgDOBf8hdw",
"organizations_url": "https://api.github.com/users/sameemqureshi/orgs",
"received_events_url": "https://api.github.com/users/sameemqureshi/received_events",
"repos_url": "https://api.github.com/users/sameemqureshi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sameemqureshi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sameemqureshi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sameemqureshi"
} | [] | open | false | null | [] | null | [
"This has already been reported in the HF Course repo (https://github.com/huggingface/course/issues/623).",
"@lhoestq @albertvillanova @lewtun I don't think we are allowed to host these data files on the Hub (due to DMCA), which means the only option is to use a different dataset in the course (and to re-record the video 🙂), no?",
"Keeping the video is maybe fine, we can add a note on youtube to suggest to load a dataset with a different name. Maybe C4 ? And update the code snippets on the website ?"
] | "2023-10-01T19:08:48Z" | "2023-10-02T16:40:18Z" | null | NONE | null | null | null | ### Describe the bug
The link provided for the dataset is broken:
`data_files = "https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst"`
### Steps to reproduce the bug
Steps to reproduce:
1) Head over to https://huggingface.co/learn/nlp-course/chapter5/4?fw=pt#big-data-datasets-to-the-rescue
2) In the Section "What is the Pile?", you can see a code snippet that contains the broken link.
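For context, the course snippet is presumably along these lines (a sketch, not the exact course code); it fails because the `data_files` URL above no longer resolves:
```python
from datasets import load_dataset

# URL taken from the course page; it currently returns an error instead of the JSONL file.
# Reading .zst files also requires the `zstandard` package to be installed.
data_files = "https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst"
pubmed_dataset = load_dataset("json", data_files=data_files, split="train")
```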
### Expected behavior
The link should redirect to the "PubMed Abstracts dataset" as expected.
### Environment info
. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6273/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6273/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1335 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1335/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1335/comments | https://api.github.com/repos/huggingface/datasets/issues/1335/events | https://github.com/huggingface/datasets/pull/1335 | 759,705,835 | MDExOlB1bGxSZXF1ZXN0NTM0NjYzNzQ2 | 1,335 | Added Bianet dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/26374564?v=4",
"events_url": "https://api.github.com/users/param087/events{/privacy}",
"followers_url": "https://api.github.com/users/param087/followers",
"following_url": "https://api.github.com/users/param087/following{/other_user}",
"gists_url": "https://api.github.com/users/param087/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/param087",
"id": 26374564,
"login": "param087",
"node_id": "MDQ6VXNlcjI2Mzc0NTY0",
"organizations_url": "https://api.github.com/users/param087/orgs",
"received_events_url": "https://api.github.com/users/param087/received_events",
"repos_url": "https://api.github.com/users/param087/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/param087/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/param087/subscriptions",
"type": "User",
"url": "https://api.github.com/users/param087"
} | [] | closed | false | null | [] | null | [
"merging since the Ci is fixed on master"
] | "2020-12-08T19:10:32Z" | "2020-12-14T10:00:56Z" | "2020-12-14T10:00:56Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1335.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1335",
"merged_at": "2020-12-14T10:00:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1335.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1335"
} | Hi :hugs:, This is a PR for [Bianet: A parallel news corpus in Turkish, Kurdish and English; Source](http://opus.nlpl.eu/Bianet.php) dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1335/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1335/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6242 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6242/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6242/comments | https://api.github.com/repos/huggingface/datasets/issues/6242/events | https://github.com/huggingface/datasets/issues/6242 | 1,896,899,123 | I_kwDODunzps5xEGIz | 6,242 | Data alteration when loading dataset with unspecified inner sequence length | {
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/qgallouedec",
"id": 45557362,
"login": "qgallouedec",
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"type": "User",
"url": "https://api.github.com/users/qgallouedec"
} | [] | closed | false | null | [] | null | [
"While this issue may seem specific, it led to a silent problem in my workflow that took days to diagnose. If this feature is not intended to be supported, an error should be raised when encountering this configuration to prevent such issues.",
"Thanks for reporting! This is a MRE:\r\n\r\n```python\r\nimport pyarrow as pa\r\nfrom datasets.table import cast_array_to_feature\r\nfrom datasets import Sequence, Value\r\ndata = [\r\n [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]],\r\n [[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]],\r\n]\r\narr = pa.array(data, pa.list_(pa.list_(pa.float32(), 3)))\r\ncast_array_to_feature(arr, Sequence(Sequence(Value(\"float32\"))))\r\n```\r\n\r\nI've opened a PR with a fix."
] | "2023-09-14T16:12:45Z" | "2023-09-19T17:53:18Z" | "2023-09-19T17:53:18Z" | CONTRIBUTOR | null | null | null | ### Describe the bug
When a dataset saved with a specified inner sequence length is loaded without specifying that length, the original data is altered and becomes inconsistent.
### Steps to reproduce the bug
```python
from datasets import Dataset, Features, Value, Sequence, load_dataset
# Repository ID
repo_id = "my_repo_id"
# Define features with a specific length of 3 for each inner sequence
specified_features = Features({"key": Sequence(Sequence(Value("float32"), length=3))})
# Create a dataset with the specified features
data = [
[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]],
[[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]],
]
dataset = Dataset.from_dict({"key": data}, features=specified_features)
# Push the dataset to the hub
dataset.push_to_hub(repo_id)
# Define features without specifying the length
unspecified_features = Features({"key": Sequence(Sequence(Value("float32")))})
# Load the dataset from the hub with this new feature definition
dataset = load_dataset(f"qgallouedec/{repo_id}", split="train", features=unspecified_features)
# The obtained data is altered
print(dataset.to_dict()) # {'key': [[[1.0], [2.0]], [[3.0], [4.0]]]}
```
### Expected behavior
```python
print(dataset.to_dict()) # {'key': [[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], [[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]]]}
```
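As a side note, here is a minimal sketch (not part of the original report) of the workaround of keeping the inner length specified when reloading, which avoids the silent reshaping (`repo_id` as defined in the snippet above):
```python
from datasets import Features, Sequence, Value, load_dataset

# Reload with the same fixed inner length that was used when pushing the data.
features_with_length = Features({"key": Sequence(Sequence(Value("float32"), length=3))})
dataset = load_dataset(repo_id, split="train", features=features_with_length)
print(dataset.to_dict())  # the inner lists keep their length of 3
```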
### Environment info
- `datasets` version: 2.14.4
- Platform: Linux-6.2.0-32-generic-x86_64-with-glibc2.35
- Python version: 3.9.12
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6242/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6242/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5389 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5389/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5389/comments | https://api.github.com/repos/huggingface/datasets/issues/5389/events | https://github.com/huggingface/datasets/pull/5389 | 1,509,348,626 | PR_kwDODunzps5GHsOo | 5,389 | Fix link in `load_dataset` docstring | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008935 / 0.011353 (-0.002417) | 0.004582 / 0.011008 (-0.006426) | 0.100950 / 0.038508 (0.062442) | 0.030305 / 0.023109 (0.007196) | 0.299759 / 0.275898 (0.023861) | 0.378577 / 0.323480 (0.055097) | 0.007834 / 0.007986 (-0.000152) | 0.003399 / 0.004328 (-0.000930) | 0.078568 / 0.004250 (0.074318) | 0.037990 / 0.037052 (0.000938) | 0.313025 / 0.258489 (0.054536) | 0.359543 / 0.293841 (0.065702) | 0.033631 / 0.128546 (-0.094916) | 0.011681 / 0.075646 (-0.063966) | 0.324542 / 0.419271 (-0.094729) | 0.041014 / 0.043533 (-0.002519) | 0.302884 / 0.255139 (0.047745) | 0.337059 / 0.283200 (0.053859) | 0.089403 / 0.141683 (-0.052280) | 1.491262 / 1.452155 (0.039108) | 1.521626 / 1.492716 (0.028910) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.172627 / 0.018006 (0.154621) | 0.419406 / 0.000490 (0.418917) | 0.001974 / 0.000200 (0.001775) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023598 / 0.037411 (-0.013814) | 0.098127 / 0.014526 (0.083601) | 0.105611 / 0.176557 (-0.070946) | 0.142612 / 0.737135 (-0.594523) | 0.121687 / 0.296338 (-0.174651) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418512 / 0.215209 (0.203303) | 4.173099 / 2.077655 (2.095444) | 1.865900 / 1.504120 (0.361780) | 1.664053 / 1.541195 (0.122858) | 1.726289 / 1.468490 
(0.257799) | 0.693214 / 4.584777 (-3.891563) | 3.499982 / 3.745712 (-0.245730) | 1.894278 / 5.269862 (-3.375583) | 1.178214 / 4.565676 (-3.387463) | 0.082391 / 0.424275 (-0.341884) | 0.012486 / 0.007607 (0.004878) | 0.532190 / 0.226044 (0.306145) | 5.286612 / 2.268929 (3.017684) | 2.316680 / 55.444624 (-53.127944) | 1.964020 / 6.876477 (-4.912457) | 2.016457 / 2.142072 (-0.125616) | 0.812290 / 4.805227 (-3.992937) | 0.149102 / 6.500664 (-6.351562) | 0.064215 / 0.075469 (-0.011254) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.281919 / 1.841788 (-0.559869) | 14.107509 / 8.074308 (6.033201) | 13.892369 / 10.191392 (3.700977) | 0.146164 / 0.680424 (-0.534260) | 0.028740 / 0.534201 (-0.505460) | 0.395218 / 0.579283 (-0.184066) | 0.406321 / 0.434364 (-0.028043) | 0.460880 / 0.540337 (-0.079458) | 0.545975 / 1.386936 (-0.840961) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006797 / 0.011353 (-0.004556) | 0.004522 / 0.011008 (-0.006486) | 0.098440 / 0.038508 (0.059932) | 0.027722 / 0.023109 (0.004613) | 0.423995 / 0.275898 (0.148097) | 0.456164 / 0.323480 (0.132684) | 0.005156 / 0.007986 (-0.002830) | 0.003439 / 0.004328 (-0.000889) | 0.075307 / 0.004250 (0.071057) | 0.039599 / 0.037052 (0.002547) | 0.423671 / 0.258489 (0.165181) | 0.463841 / 0.293841 (0.170001) | 0.032473 / 0.128546 (-0.096073) | 0.011674 / 0.075646 (-0.063972) | 0.320548 / 0.419271 (-0.098723) | 0.041618 / 0.043533 (-0.001915) | 0.426133 / 0.255139 (0.170994) | 0.443018 / 0.283200 (0.159819) | 0.091103 / 0.141683 (-0.050579) | 1.468758 / 1.452155 (0.016604) | 1.532695 / 1.492716 (0.039978) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.255314 / 0.018006 (0.237308) | 0.422982 / 0.000490 (0.422492) | 0.015405 / 0.000200 (0.015205) | 0.000103 / 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025260 / 0.037411 (-0.012152) | 0.102062 / 0.014526 (0.087537) | 0.108161 / 0.176557 (-0.068395) | 0.144205 / 0.737135 (-0.592930) | 0.111686 / 0.296338 (-0.184653) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.482633 / 0.215209 (0.267424) | 4.824777 / 2.077655 (2.747123) | 2.488626 / 1.504120 (0.984506) | 2.285410 / 1.541195 (0.744215) | 2.336793 / 1.468490 (0.868303) | 0.701894 / 4.584777 (-3.882883) | 3.506908 / 3.745712 (-0.238804) | 3.399789 / 5.269862 (-1.870072) | 1.536359 / 4.565676 (-3.029317) | 0.083621 / 0.424275 (-0.340655) | 0.012702 / 0.007607 (0.005094) | 0.581259 / 0.226044 (0.355215) | 5.829640 / 2.268929 (3.560711) | 2.932201 / 55.444624 (-52.512424) | 2.577175 / 6.876477 (-4.299301) | 2.621782 / 2.142072 (0.479710) | 0.812074 / 4.805227 (-3.993153) | 0.152840 / 6.500664 (-6.347824) | 0.067982 / 0.075469 (-0.007487) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.274915 / 1.841788 (-0.566873) | 14.345800 / 8.074308 (6.271492) | 14.242475 / 10.191392 (4.051083) | 0.143636 / 0.680424 (-0.536788) | 0.016824 / 0.534201 (-0.517377) | 0.376449 / 0.579283 (-0.202834) | 0.394219 / 0.434364 (-0.040145) | 0.435368 / 0.540337 (-0.104969) | 0.518393 / 1.386936 (-0.868544) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#187e4faa978fef267a055f6988564f922e51eaa4 \"CML watermark\")\n",
"I also fixed the rest of the links that point to the markdown files. \r\n\r\nPS: the CI failures are unrelated ",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008641 / 0.011353 (-0.002712) | 0.004560 / 0.011008 (-0.006448) | 0.100559 / 0.038508 (0.062051) | 0.029744 / 0.023109 (0.006635) | 0.300580 / 0.275898 (0.024682) | 0.359100 / 0.323480 (0.035620) | 0.007016 / 0.007986 (-0.000970) | 0.003393 / 0.004328 (-0.000936) | 0.078649 / 0.004250 (0.074399) | 0.038138 / 0.037052 (0.001086) | 0.307730 / 0.258489 (0.049241) | 0.347678 / 0.293841 (0.053837) | 0.033630 / 0.128546 (-0.094917) | 0.011452 / 0.075646 (-0.064194) | 0.320903 / 0.419271 (-0.098369) | 0.042659 / 0.043533 (-0.000874) | 0.298886 / 0.255139 (0.043747) | 0.324371 / 0.283200 (0.041171) | 0.092582 / 0.141683 (-0.049101) | 1.490017 / 1.452155 (0.037863) | 1.512825 / 1.492716 (0.020109) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.178965 / 0.018006 (0.160958) | 0.420001 / 0.000490 (0.419512) | 0.002686 / 0.000200 (0.002486) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023568 / 0.037411 (-0.013843) | 0.097027 / 0.014526 (0.082502) | 0.104721 / 0.176557 (-0.071836) | 0.148757 / 0.737135 (-0.588378) | 0.110849 / 0.296338 (-0.185489) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415034 / 0.215209 (0.199825) | 4.155249 / 2.077655 (2.077594) | 1.837027 / 1.504120 (0.332907) | 1.627754 / 1.541195 (0.086559) | 1.687958 / 1.468490 
(0.219468) | 0.699542 / 4.584777 (-3.885235) | 3.376707 / 3.745712 (-0.369005) | 2.900778 / 5.269862 (-2.369083) | 1.556168 / 4.565676 (-3.009508) | 0.082438 / 0.424275 (-0.341837) | 0.012339 / 0.007607 (0.004732) | 0.524952 / 0.226044 (0.298907) | 5.269852 / 2.268929 (3.000924) | 2.278770 / 55.444624 (-53.165854) | 1.917987 / 6.876477 (-4.958490) | 1.955000 / 2.142072 (-0.187072) | 0.821169 / 4.805227 (-3.984058) | 0.149019 / 6.500664 (-6.351645) | 0.064604 / 0.075469 (-0.010865) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.199768 / 1.841788 (-0.642020) | 13.760897 / 8.074308 (5.686589) | 13.911550 / 10.191392 (3.720158) | 0.161727 / 0.680424 (-0.518697) | 0.028615 / 0.534201 (-0.505586) | 0.393917 / 0.579283 (-0.185366) | 0.392524 / 0.434364 (-0.041840) | 0.451763 / 0.540337 (-0.088574) | 0.536880 / 1.386936 (-0.850056) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006407 / 0.011353 (-0.004946) | 0.004420 / 0.011008 (-0.006588) | 0.097244 / 0.038508 (0.058736) | 0.027114 / 0.023109 (0.004005) | 0.412512 / 0.275898 (0.136614) | 0.448189 / 0.323480 (0.124709) | 0.005831 / 0.007986 (-0.002155) | 0.005423 / 0.004328 (0.001095) | 0.076051 / 0.004250 (0.071801) | 0.038828 / 0.037052 (0.001776) | 0.414586 / 0.258489 (0.156097) | 0.457196 / 0.293841 (0.163355) | 0.031615 / 0.128546 (-0.096931) | 0.011542 / 0.075646 (-0.064104) | 0.316967 / 0.419271 (-0.102304) | 0.041278 / 0.043533 (-0.002254) | 0.411371 / 0.255139 (0.156232) | 0.436376 / 0.283200 (0.153177) | 0.090212 / 0.141683 (-0.051471) | 1.461831 / 1.452155 (0.009677) | 1.606515 / 1.492716 (0.113799) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221453 / 0.018006 (0.203447) | 0.404140 / 0.000490 (0.403650) | 0.000422 / 0.000200 (0.000222) | 0.000060 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024588 / 0.037411 (-0.012824) | 0.098604 / 0.014526 (0.084078) | 0.113682 / 0.176557 (-0.062874) | 0.141141 / 0.737135 (-0.595994) | 0.110069 / 0.296338 (-0.186270) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.477267 / 0.215209 (0.262058) | 4.775086 / 2.077655 (2.697431) | 2.445449 / 1.504120 (0.941329) | 2.242220 / 1.541195 (0.701025) | 2.303542 / 1.468490 (0.835051) | 0.693448 / 4.584777 (-3.891329) | 3.413319 / 3.745712 (-0.332393) | 3.052734 / 5.269862 (-2.217127) | 1.434075 / 4.565676 (-3.131602) | 0.082429 / 0.424275 (-0.341846) | 0.012594 / 0.007607 (0.004987) | 0.584259 / 0.226044 (0.358214) | 5.865098 / 2.268929 (3.596169) | 2.926301 / 55.444624 (-52.518324) | 2.572555 / 6.876477 (-4.303921) | 2.608584 / 2.142072 (0.466512) | 0.805029 / 4.805227 (-4.000198) | 0.151247 / 6.500664 (-6.349417) | 0.067142 / 0.075469 (-0.008327) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.285454 / 1.841788 (-0.556334) | 14.296425 / 8.074308 (6.222117) | 14.147278 / 10.191392 (3.955886) | 0.151698 / 0.680424 (-0.528726) | 0.016876 / 0.534201 (-0.517325) | 0.383302 / 0.579283 (-0.195981) | 0.388461 / 0.434364 (-0.045902) | 0.438286 / 0.540337 (-0.102051) | 0.525249 / 1.386936 (-0.861687) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2a3b2f04f1fd62249ac43c534761ce151ad5c269 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008677 / 0.011353 (-0.002676) | 0.004863 / 0.011008 (-0.006145) | 0.096606 / 0.038508 (0.058098) | 0.034004 / 0.023109 (0.010895) | 0.296362 / 0.275898 (0.020464) | 0.323445 / 0.323480 (-0.000035) | 0.007341 / 0.007986 (-0.000644) | 0.005518 / 0.004328 (0.001189) | 0.073584 / 0.004250 (0.069334) | 0.041471 / 0.037052 (0.004419) | 0.302183 / 0.258489 (0.043694) | 0.339369 / 0.293841 (0.045528) | 0.037375 / 0.128546 (-0.091171) | 0.011827 / 0.075646 (-0.063819) | 0.330723 / 0.419271 (-0.088549) | 0.048751 / 0.043533 (0.005218) | 0.298370 / 0.255139 (0.043231) | 0.317781 / 0.283200 (0.034582) | 0.097488 / 0.141683 (-0.044195) | 1.456242 / 1.452155 (0.004088) | 1.530149 / 1.492716 (0.037433) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207053 / 0.018006 (0.189046) | 0.438165 / 0.000490 (0.437675) | 0.001161 / 0.000200 (0.000961) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025353 / 0.037411 (-0.012059) | 0.105536 / 0.014526 (0.091010) | 0.116122 / 0.176557 (-0.060434) | 0.151605 / 0.737135 (-0.585530) | 0.121777 / 0.296338 (-0.174561) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.402780 / 0.215209 (0.187571) | 4.017882 / 2.077655 (1.940227) | 1.813111 / 1.504120 (0.308991) | 1.620000 / 1.541195 (0.078805) | 1.649186 / 1.468490 
(0.180696) | 0.687523 / 4.584777 (-3.897254) | 3.712595 / 3.745712 (-0.033117) | 2.038535 / 5.269862 (-3.231326) | 1.414794 / 4.565676 (-3.150882) | 0.083357 / 0.424275 (-0.340918) | 0.012032 / 0.007607 (0.004425) | 0.502899 / 0.226044 (0.276854) | 5.038914 / 2.268929 (2.769985) | 2.250476 / 55.444624 (-53.194148) | 1.919954 / 6.876477 (-4.956523) | 1.930928 / 2.142072 (-0.211144) | 0.826634 / 4.805227 (-3.978593) | 0.161599 / 6.500664 (-6.339066) | 0.061356 / 0.075469 (-0.014113) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.228998 / 1.841788 (-0.612790) | 14.587914 / 8.074308 (6.513606) | 14.237514 / 10.191392 (4.046122) | 0.190913 / 0.680424 (-0.489510) | 0.029104 / 0.534201 (-0.505097) | 0.436160 / 0.579283 (-0.143123) | 0.431464 / 0.434364 (-0.002900) | 0.511670 / 0.540337 (-0.028668) | 0.609046 / 1.386936 (-0.777890) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006980 / 0.011353 (-0.004373) | 0.005260 / 0.011008 (-0.005748) | 0.095288 / 0.038508 (0.056780) | 0.032465 / 0.023109 (0.009356) | 0.410799 / 0.275898 (0.134901) | 0.423814 / 0.323480 (0.100334) | 0.005533 / 0.007986 (-0.002452) | 0.005764 / 0.004328 (0.001436) | 0.070713 / 0.004250 (0.066462) | 0.048193 / 0.037052 (0.011141) | 0.405742 / 0.258489 (0.147253) | 0.458773 / 0.293841 (0.164932) | 0.036415 / 0.128546 (-0.092131) | 0.012192 / 0.075646 (-0.063454) | 0.330655 / 0.419271 (-0.088617) | 0.055945 / 0.043533 (0.012412) | 0.407497 / 0.255139 (0.152358) | 0.421496 / 0.283200 (0.138296) | 0.106285 / 0.141683 (-0.035398) | 1.459837 / 1.452155 (0.007683) | 1.573147 / 1.492716 (0.080431) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.205776 / 0.018006 (0.187770) | 0.441523 / 0.000490 (0.441033) | 0.003073 / 0.000200 (0.002873) | 0.000092 / 0.000054 (0.000037) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029207 / 0.037411 (-0.008205) | 0.110295 / 0.014526 (0.095770) | 0.130233 / 0.176557 (-0.046324) | 0.157489 / 0.737135 (-0.579647) | 0.125374 / 0.296338 (-0.170965) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440942 / 0.215209 (0.225733) | 4.389647 / 2.077655 (2.311992) | 2.234883 / 1.504120 (0.730763) | 2.029510 / 1.541195 (0.488315) | 2.082503 / 1.468490 (0.614013) | 0.698046 / 4.584777 (-3.886731) | 3.769127 / 3.745712 (0.023415) | 2.058511 / 5.269862 (-3.211351) | 1.324302 / 4.565676 (-3.241375) | 0.085695 / 0.424275 (-0.338580) | 0.012122 / 0.007607 (0.004515) | 0.552406 / 0.226044 (0.326362) | 5.527073 / 2.268929 (3.258145) | 2.711354 / 55.444624 (-52.733270) | 2.328848 / 6.876477 (-4.547629) | 2.340750 / 2.142072 (0.198678) | 0.846300 / 4.805227 (-3.958927) | 0.167465 / 6.500664 (-6.333199) | 0.063419 / 0.075469 (-0.012050) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.262452 / 1.841788 (-0.579336) | 15.043537 / 8.074308 (6.969229) | 14.212563 / 10.191392 (4.021171) | 0.170229 / 0.680424 (-0.510194) | 0.017696 / 0.534201 (-0.516505) | 0.423194 / 0.579283 (-0.156089) | 0.430908 / 0.434364 (-0.003456) | 0.491733 / 0.540337 (-0.048604) | 0.599267 / 1.386936 (-0.787669) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2a3b2f04f1fd62249ac43c534761ce151ad5c269 \"CML watermark\")\n",
"Program enthusiastic "
] | "2022-12-23T13:26:31Z" | "2023-01-25T19:00:43Z" | "2023-01-24T16:33:38Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5389.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5389",
"merged_at": "2023-01-24T16:33:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5389.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5389"
} | Fix https://github.com/huggingface/datasets/issues/5387, fix https://github.com/huggingface/datasets/issues/4566 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5389/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5389/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4023 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4023/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4023/comments | https://api.github.com/repos/huggingface/datasets/issues/4023/events | https://github.com/huggingface/datasets/pull/4023 | 1,180,840,399 | PR_kwDODunzps41BSZT | 4,023 | Replace yahoo_answers_topics data url | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The CI is failing because of issues in the dataset cards that are unrelated to this PR - merging"
] | "2022-03-25T14:08:57Z" | "2022-03-28T10:12:56Z" | "2022-03-28T10:07:52Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4023.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4023",
"merged_at": "2022-03-28T10:07:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4023.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4023"
} | I replaced the Google Drive URL of the dataset with the FastAI one, since we've had some issues with Google Drive. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4023/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4023/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3271 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3271/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3271/comments | https://api.github.com/repos/huggingface/datasets/issues/3271/events | https://github.com/huggingface/datasets/pull/3271 | 1,053,482,919 | PR_kwDODunzps4uhgi1 | 3,271 | Decode audio from remote | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-11-15T10:25:56Z" | "2021-11-16T11:35:58Z" | "2021-11-16T11:35:58Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3271.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3271",
"merged_at": "2021-11-16T11:35:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3271.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3271"
} | Currently the Audio feature type can only decode local audio files, not remote files.
To fix this I replaced `open` with our `xopen` function that is compatible with remote files in audio.py
cc @albertvillanova @mariosasko | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3271/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3271/timeline | null | null | true |
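For reference, the fix described in this pull request swaps the built-in `open` for a filesystem-aware helper so that the `Audio` feature can decode files addressed by URL as well as by local path. The sketch below illustrates the general idea only; it assumes `fsspec` and `soundfile` for the sake of the example and is not the actual `xopen` implementation used in `audio.py`.

```python
# Illustrative sketch: decode audio from a local path or a remote URL by reading
# the bytes through a protocol-aware open. The real fix relies on the library's
# internal `xopen` helper; `fsspec` and `soundfile` are assumed here.
import io

import fsspec
import soundfile as sf


def decode_audio(path_or_url: str) -> dict:
    with fsspec.open(path_or_url, "rb") as f:  # handles local paths, http(s)://, s3://, ...
        data = f.read()
    array, sampling_rate = sf.read(io.BytesIO(data))
    return {"array": array, "sampling_rate": sampling_rate}
```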
https://api.github.com/repos/huggingface/datasets/issues/4936 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4936/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4936/comments | https://api.github.com/repos/huggingface/datasets/issues/4936/events | https://github.com/huggingface/datasets/issues/4936 | 1,363,274,907 | I_kwDODunzps5RQeyb | 4,936 | vivos (Vietnamese speech corpus) dataset not accessible | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"If you need an example of a small audio datasets, I just created few hours ago a speech dataset with only 300MB of compressed audio files https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia. It works also with streaming (@albertvillanova helped me adding this functionality) :-)",
"@cahya-wirawan omg this is awesome!! thank you! ",
"We have contacted the authors to ask them."
] | "2022-09-06T13:17:55Z" | "2022-09-21T06:06:02Z" | "2022-09-12T07:14:20Z" | CONTRIBUTOR | null | null | null | ## Describe the bug
VIVOS data is not accessible anymore, neither of these links work (at least from France):
* https://ailab.hcmus.edu.vn/assets/vivos.tar.gz (data)
* https://ailab.hcmus.edu.vn/vivos (dataset page)
Therefore `load_dataset` doesn't work.
## Steps to reproduce the bug
```python
ds = load_dataset("vivos")
```
## Expected results
dataset loaded
## Actual results
```
ConnectionError: Couldn't reach https://ailab.hcmus.edu.vn/assets/vivos.tar.gz (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='ailab.hcmus.edu.vn', port=443): Max retries exceeded with url: /assets/vivos.tar.gz (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f9d8a27d190>: Failed to establish a new connection: [Errno -5] No address associated with hostname'))")))
```
Will try to contact the authors, as we wanted to use Vivos as an example in documentation on how to create scripts for audio datasets (https://github.com/huggingface/datasets/pull/4872), because it's small and straightforward and uses tar archives. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4936/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4936/timeline | null | completed | false |
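As a side note to the thread above, the replacement corpus suggested in the comments can be pulled with streaming enabled, which avoids downloading the full archive up front. The snippet below is only a sketch based on that comment; the `train` split name is an assumption.

```python
# Sketch based on the comment above: stream the suggested small speech corpus
# instead of the unreachable VIVOS mirror. The "train" split name is assumed.
from datasets import load_dataset

ds = load_dataset("indonesian-nlp/librivox-indonesia", split="train", streaming=True)
first_sample = next(iter(ds))  # records are fetched lazily over HTTP
print(first_sample.keys())
```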
https://api.github.com/repos/huggingface/datasets/issues/3778 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3778/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3778/comments | https://api.github.com/repos/huggingface/datasets/issues/3778/events | https://github.com/huggingface/datasets/issues/3778 | 1,147,898,946 | I_kwDODunzps5Ea4xC | 3,778 | Not be able to download dataset - "Newsroom" | {
"avatar_url": "https://avatars.githubusercontent.com/u/61326242?v=4",
"events_url": "https://api.github.com/users/Darshan2104/events{/privacy}",
"followers_url": "https://api.github.com/users/Darshan2104/followers",
"following_url": "https://api.github.com/users/Darshan2104/following{/other_user}",
"gists_url": "https://api.github.com/users/Darshan2104/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Darshan2104",
"id": 61326242,
"login": "Darshan2104",
"node_id": "MDQ6VXNlcjYxMzI2MjQy",
"organizations_url": "https://api.github.com/users/Darshan2104/orgs",
"received_events_url": "https://api.github.com/users/Darshan2104/received_events",
"repos_url": "https://api.github.com/users/Darshan2104/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Darshan2104/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Darshan2104/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Darshan2104"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Hi @Darshan2104, thanks for reporting.\r\n\r\nPlease note that at Hugging Face we do not host the data of this dataset, but just a loading script pointing to the host of the data owners.\r\n\r\nApparently the data owners changed their data host server. After googling it, I found their new website at: https://lil.nlp.cornell.edu/newsroom/index.html\r\n- Download page: https://lil.nlp.cornell.edu/newsroom/download/index.html\r\n\r\nI'm fixing the link in our Datasets library.",
"@albertvillanova Thanks for the solution and link you made my day!"
] | "2022-02-23T10:15:50Z" | "2022-02-23T17:05:04Z" | "2022-02-23T13:26:40Z" | NONE | null | null | null | Hello,
I tried to download the **newsroom** dataset but it didn't work out for me. It told me to **download it manually**!
For the manual download, the link also didn't work! It is showing some ad or something!
If anybody has solved this issue, please help me out, or if somebody has this dataset, please share your Google Drive link; it would be a great help!
Thanks
Darshan Tank | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3778/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3778/timeline | null | completed | false |
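Since Newsroom requires a manual download (now available from the Cornell page linked in the comments above), loading it locally looks roughly like the sketch below; the extraction path is a placeholder.

```python
# Rough usage sketch for a manual-download dataset such as Newsroom: fetch the
# archives from the download page mentioned above, extract them, then point
# `data_dir` at the extracted files. The path below is a placeholder.
from datasets import load_dataset

newsroom = load_dataset("newsroom", data_dir="/path/to/extracted/newsroom")
print(newsroom)
```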
https://api.github.com/repos/huggingface/datasets/issues/1523 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1523/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1523/comments | https://api.github.com/repos/huggingface/datasets/issues/1523/events | https://github.com/huggingface/datasets/pull/1523 | 764,359,524 | MDExOlB1bGxSZXF1ZXN0NTM4NDYyMTE4 | 1,523 | Add eHealth Knowledge Discovery dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/57645283?v=4",
"events_url": "https://api.github.com/users/mariagrandury/events{/privacy}",
"followers_url": "https://api.github.com/users/mariagrandury/followers",
"following_url": "https://api.github.com/users/mariagrandury/following{/other_user}",
"gists_url": "https://api.github.com/users/mariagrandury/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariagrandury",
"id": 57645283,
"login": "mariagrandury",
"node_id": "MDQ6VXNlcjU3NjQ1Mjgz",
"organizations_url": "https://api.github.com/users/mariagrandury/orgs",
"received_events_url": "https://api.github.com/users/mariagrandury/received_events",
"repos_url": "https://api.github.com/users/mariagrandury/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariagrandury/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariagrandury/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariagrandury"
} | [] | closed | false | null | [] | null | [
"Thank you very much for your review @lewtun ! \r\n\r\nI've updated the script metadata, created the README and fixed the two details you commented.\r\n\r\nReady for another review! 🤗 ",
"I've updated the task tag as we discussed and also added a couple of lines of code to make sure I include all the available examples.\r\n\r\nThank you again!"
] | "2020-12-12T20:44:18Z" | "2020-12-17T17:02:41Z" | "2020-12-17T16:48:56Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1523.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1523",
"merged_at": "2020-12-17T16:48:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1523.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1523"
} | This Spanish dataset can be used to mine knowledge from unstructured health texts.
In particular, for:
- Entity recognition
- Relation extraction
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1523/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1523/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2035 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2035/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2035/comments | https://api.github.com/repos/huggingface/datasets/issues/2035/events | https://github.com/huggingface/datasets/issues/2035 | 829,475,544 | MDU6SXNzdWU4Mjk0NzU1NDQ= | 2,035 | wiki40b/wikipedia for almost all languages cannot be downloaded | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [] | open | false | null | [] | null | [
"Dear @lhoestq for wikipedia dataset I also get the same error, I greatly appreciate if you could have a look into this dataset as well. Below please find the command to reproduce the error:\r\n\r\n```\r\ndataset = load_dataset(\"wikipedia\", \"20200501.bg\")\r\nprint(dataset)\r\n```\r\n\r\nYour library is my only chance to be able training the models at scale and I am grateful for your help.\r\n\r\n",
"Hi @dorost1234,\r\nTry installing this library first, `pip install 'apache-beam[gcp]' --use-feature=2020-resolver` followed by loading dataset like this using beam runner.\r\n\r\n`dataset = load_dataset(\"wiki40b\", \"cs\", beam_runner='DirectRunner')`\r\n\r\n I also read in error stack trace that:\r\n\r\n> Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc.\r\n\r\nWorked perfectly fine after this (Ignore these warnings)\r\n\r\n![image](https://user-images.githubusercontent.com/19718818/110908410-c7e2ce00-8334-11eb-8d10-7354359e9ec3.png)\r\n\r\n",
"For wikipedia dataset, looks like the files it's looking for are no longer available. For `bg`, I checked [here](https://dumps.wikimedia.org/bgwiki/). For this I think `dataset_infos.json` for this dataset has to made again? You'll have to load this dataset also using beam runner.\r\n\r\n",
"Hello @dorost1234,\r\n\r\nIndeed, Wikipedia datasets need a lot of preprocessing and this is done using Apache Beam. That is the reason why it is required that you install Apache Beam in order to preform this preprocessing.\r\n\r\nFor some specific default parameters (English Wikipedia), Hugging Face has already preprocessed the dataset for you (and it is stored in the cloud). That is the reason why you do not get the error for English: the preprocessing is already done by HF and you just get the preprocessed dataset; Apache Beam is not required in that case.",
"Hi\nI really appreciate if huggingface can kindly provide preprocessed\ndatasets, processing these datasets require sufficiently large resources\nand I do not have unfortunately access to, and perhaps many others too.\nthanks\n\nOn Fri, Mar 12, 2021 at 9:04 AM Albert Villanova del Moral <\n***@***.***> wrote:\n\n> Hello @dorost1234 <https://github.com/dorost1234>,\n>\n> Indeed, Wikipedia datasets need a lot of preprocessing and this is done\n> using Apache Beam. That is the reason why it is required that you install\n> Apache Beam in order to preform this preprocessing.\n>\n> For some specific default parameters (English Wikipedia), Hugging Face has\n> already preprocessed the dataset for you (and it is stored in the cloud).\n> That is the reason why you do not get the error for English: the\n> preprocessing is already done by HF and you just get the preprocessed\n> dataset; Apache Beam is not required in that case.\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/2035#issuecomment-797310899>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AS37NMXACFQZAGMK4VGXRETTDHDI3ANCNFSM4ZA5R2UA>\n> .\n>\n",
"Hi everyone\r\nthanks for the helpful pointers, I did it as @bhavitvyamalik suggested, for me this freezes on this command for several hours, \r\n\r\n`Downloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /users/dara/cache/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...\r\n`\r\n\r\nDo you know how long this takes? Any specific requirements the machine should have? like very large memory or so? @lhoestq \r\n\r\nthanks \r\n\r\n\r\n",
"HI @dorost1234, \r\nThe dataset size is 631.84 MiB so depending on your internet speed it'll take some time. You can monitor your internet speed meanwhile to see if it's downloading the dataset or not (use `nload` if you're using linux/mac to monitor the same). In my case it took around 3-4 mins. Since they haven't used `download_and_extract` here that's why there's no download progress bar.",
"Hi\r\nthanks, my internet speed should be good, but this really freezes for me, this is how I try to get this dataset:\r\n\r\n`from datasets import load_dataset\r\ndataset = load_dataset(\"wiki40b\", \"cs\", beam_runner='DirectRunner')`\r\n\r\nthe output I see if different also from what you see after writing this command:\r\n\r\n`Downloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /users/dara/cache/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...`\r\n\r\ndo you have any idea why it might get freezed? anything I am missing @lhoestq @bhavitvyamalik. Do I need maybe to set anything special for apache-beam? \r\n\r\nthanks a lot \r\n\r\nOn Tue, Mar 16, 2021 at 9:03 AM Bhavitvya Malik ***@***.***>\r\nwrote:\r\n\r\n> HI @dorost1234 <https://github.com/dorost1234>,\r\n> The dataset size is 631.84 MiB so depending on your internet speed it'll\r\n> take some time. You can monitor your internet speed meanwhile to see if\r\n> it's downloading the dataset or not (use nload if you're using linux/mac\r\n> to monitor the same). In my case it took around 3-4 mins. Since they\r\n> haven't used download_and_extract here that's why there's no download\r\n> progress bar.\r\n>\r\n> —\r\n> You are receiving this because you were mentioned.\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/2035#issuecomment-800044303>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AS37NMQIHNNLM2LGG6QKZ73TD4GDJANCNFSM4ZA5R2UA>\r\n> .\r\n>\r\n",
"I tried this on another machine (followed the same procedure I've mentioned above). This is what it shows (during the freeze period) for me:\r\n```\r\n>>> dataset = load_dataset(\"wiki40b\", \"cs\", beam_runner='DirectRunner')\r\nDownloading: 5.26kB [00:00, 1.23MB/s] \r\nDownloading: 1.40kB [00:00, 327kB/s] \r\nDownloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/bhavitvya/.cache/huggingface/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...\r\nWARNING:apache_beam.internal.gcp.auth:Unable to find default credentials to use: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.\r\nConnecting anonymously.\r\nWARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.\r\n```\r\nAfter around 10 minutes, here's the loading of dataset:\r\n```\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:16<00:00, 16.42s/sources]\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.12sources/s]\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.14sources/s]\r\nDataset wiki40b downloaded and prepared to /home/bhavitvya/.cache/huggingface/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f. Subsequent calls will reuse this data.\r\n```",
"Hi\r\nI honestly also now tried on another machine and nothing shows up after\r\nhours of waiting. Are you sure you have not set any specific setting? maybe\r\ngoogle cloud which seems it is used here, needs some credential setting?\r\nthanks for any suggestions on this\r\n\r\nOn Tue, Mar 16, 2021 at 10:02 AM Bhavitvya Malik ***@***.***>\r\nwrote:\r\n\r\n> I tried this on another machine (followed the same procedure I've\r\n> mentioned above). This is what it shows (during the freeze period) for me:\r\n>\r\n> >>> dataset = load_dataset(\"wiki40b\", \"cs\", beam_runner='DirectRunner')\r\n> Downloading: 5.26kB [00:00, 1.23MB/s]\r\n> Downloading: 1.40kB [00:00, 327kB/s]\r\n> Downloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/bhavitvya/.cache/huggingface/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...\r\n> WARNING:apache_beam.internal.gcp.auth:Unable to find default credentials to use: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.\r\n> Connecting anonymously.\r\n> WARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.\r\n>\r\n> After around 10 minutes, here's the loading of dataset:\r\n>\r\n> 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:16<00:00, 16.42s/sources]\r\n> 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.12sources/s]\r\n> 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.14sources/s]\r\n> Dataset wiki40b downloaded and prepared to /home/bhavitvya/.cache/huggingface/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f. Subsequent calls will reuse this data.\r\n>\r\n> —\r\n> You are receiving this because you were mentioned.\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/2035#issuecomment-800081772>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AS37NMX6A2ZTRZUIIZVFRCDTD4NC3ANCNFSM4ZA5R2UA>\r\n> .\r\n>\r\n"
] | "2021-03-11T19:54:54Z" | "2021-03-16T14:53:37Z" | null | NONE | null | null | null | Hi
I am trying to download the data as below:
```
from datasets import load_dataset
dataset = load_dataset("wiki40b", "cs")
print(dataset)
```
I am getting this error. @lhoestq I would be grateful if you could assist me with it; I get this error for almost all languages except English.
I really need the majority of languages in this dataset to be able to train my models for a deadline, and your great, scalable, well-written library is my only hope to train the models at scale while being low on resources.
thank you very much.
```
(fast) dara@vgne046:/user/dara/dev/codes/seq2seq$ python test_data.py
Downloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to temp/dara/cache_home_2/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...
Traceback (most recent call last):
File "test_data.py", line 3, in <module>
dataset = load_dataset("wiki40b", "cs")
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/load.py", line 746, in load_dataset
use_auth_token=use_auth_token,
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/builder.py", line 579, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/builder.py", line 1105, in _download_and_prepare
import apache_beam as beam
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/__init__.py", line 96, in <module>
from apache_beam import io
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/io/__init__.py", line 23, in <module>
from apache_beam.io.avroio import *
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/io/avroio.py", line 55, in <module>
import avro
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 668, in _load_unlocked
File "<frozen importlib._bootstrap>", line 638, in _load_backward_compatible
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/__init__.py", line 34, in <module>
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/__init__.py", line 30, in LoadResource
NotADirectoryError: [Errno 20] Not a directory: '/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/VERSION.txt'
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2035/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2035/timeline | null | null | false |
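For reference, the workaround spelled out in the comments above boils down to installing Apache Beam and passing a local runner explicitly, roughly as follows.

```python
# Workaround from the discussion above: non-English wiki40b configs are prepared
# with Apache Beam, so a runner has to be supplied explicitly.
# Prerequisite (from the comments): pip install 'apache-beam[gcp]'
from datasets import load_dataset

# DirectRunner runs the preprocessing on the local machine; expect a long wait
# and a sizeable download for some languages.
dataset = load_dataset("wiki40b", "cs", beam_runner="DirectRunner")
print(dataset)
```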
https://api.github.com/repos/huggingface/datasets/issues/1295 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1295/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1295/comments | https://api.github.com/repos/huggingface/datasets/issues/1295/events | https://github.com/huggingface/datasets/pull/1295 | 759,375,251 | MDExOlB1bGxSZXF1ZXN0NTM0MzkxNzE1 | 1,295 | add hrenwac_para | {
"avatar_url": "https://avatars.githubusercontent.com/u/11391118?v=4",
"events_url": "https://api.github.com/users/IvanZidov/events{/privacy}",
"followers_url": "https://api.github.com/users/IvanZidov/followers",
"following_url": "https://api.github.com/users/IvanZidov/following{/other_user}",
"gists_url": "https://api.github.com/users/IvanZidov/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/IvanZidov",
"id": 11391118,
"login": "IvanZidov",
"node_id": "MDQ6VXNlcjExMzkxMTE4",
"organizations_url": "https://api.github.com/users/IvanZidov/orgs",
"received_events_url": "https://api.github.com/users/IvanZidov/received_events",
"repos_url": "https://api.github.com/users/IvanZidov/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/IvanZidov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IvanZidov/subscriptions",
"type": "User",
"url": "https://api.github.com/users/IvanZidov"
} | [] | closed | false | null | [] | null | [] | "2020-12-08T11:40:06Z" | "2020-12-11T17:42:20Z" | "2020-12-11T17:42:20Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1295.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1295",
"merged_at": "2020-12-11T17:42:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1295.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1295"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1295/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1295/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/5059 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5059/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5059/comments | https://api.github.com/repos/huggingface/datasets/issues/5059/events | https://github.com/huggingface/datasets/pull/5059 | 1,395,050,876 | PR_kwDODunzps5AEoX7 | 5,059 | Fix typo | {
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-10-03T17:05:25Z" | "2022-10-03T17:34:40Z" | "2022-10-03T17:32:27Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5059.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5059",
"merged_at": "2022-10-03T17:32:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5059.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5059"
} | Fixes a small typo :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5059/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5059/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3569 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3569/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3569/comments | https://api.github.com/repos/huggingface/datasets/issues/3569/events | https://github.com/huggingface/datasets/pull/3569 | 1,100,478,994 | PR_kwDODunzps4w3XGo | 3,569 | Add the DKTC dataset (Extension of #3564) | {
"avatar_url": "https://avatars.githubusercontent.com/u/42150335?v=4",
"events_url": "https://api.github.com/users/sooftware/events{/privacy}",
"followers_url": "https://api.github.com/users/sooftware/followers",
"following_url": "https://api.github.com/users/sooftware/following{/other_user}",
"gists_url": "https://api.github.com/users/sooftware/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sooftware",
"id": 42150335,
"login": "sooftware",
"node_id": "MDQ6VXNlcjQyMTUwMzM1",
"organizations_url": "https://api.github.com/users/sooftware/orgs",
"received_events_url": "https://api.github.com/users/sooftware/received_events",
"repos_url": "https://api.github.com/users/sooftware/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sooftware/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sooftware/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sooftware"
} | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | [] | null | [
"I reflect your comment! @lhoestq ",
"Wait, the format of the data just changed, so I'll take it into consideration and commit it.",
"I update the code according to the dataset structure change.",
"Thanks ! I think the dummy data are not valid yet - the dummy train.csv file only contains a partial example (the quotes `\"` start but never end).",
"> Thanks ! I think the dummy data are not valid yet - the dummy train.csv file only contains a partial example (the quotes `\"` start but never end).\r\n\r\nHi! @lhoestq There is a problem. \r\n<img src=\"https://user-images.githubusercontent.com/42150335/149804142-3800e635-f5a0-44d9-9694-0c2b0c05f16b.png\" width=500>\r\n \r\nAs shown in the picture above, the conversation is divided into \"\\n\" in the \"conversion\" column. \r\nThat's why there's an error in the file path that only saved only five lines like below. \r\n\r\n```\r\n'idx', 'class', 'conversation'\r\n'0', '협박 대화', '\"지금 너 스스로를 죽여달라고 애원하는 것인가?'\r\n아닙니다. 죄송합니다.'\r\n죽을 거면 혼자 죽지 우리까지 사건에 휘말리게 해? 진짜 죽여버리고 싶게.'\r\n정말 잘못했습니다.\r\n```\r\n \r\nIn fact, these five lines are all one line. \r\n \r\n\r\n",
"Hi ! I see, in this case ca you make sure that the dummy data has a full sample ?\r\n\r\nFeel free to open the dummy train.csv in the dummy_data.zip file and add the missing lines",
"Sorry, I'm late to check! I'll send it to you soon!",
"Thanks for your contribution, @sooftware. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there, under this organization namespace: https://huggingface.co/tunib\r\n\r\nPlease, feel free to tell us if you need some help.",
"Close this PR. Thanks!"
] | "2022-01-12T15:31:29Z" | "2022-10-01T06:43:05Z" | "2022-10-01T06:43:04Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3569.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3569",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3569.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3569"
} | New pull request of #3564. (for DKTC)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3569/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3569/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2093 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2093/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2093/comments | https://api.github.com/repos/huggingface/datasets/issues/2093/events | https://github.com/huggingface/datasets/pull/2093 | 837,209,211 | MDExOlB1bGxSZXF1ZXN0NTk3NTgyNjUx | 2,093 | Fix: Allows a feature to be named "_type" | {
"avatar_url": "https://avatars.githubusercontent.com/u/15979778?v=4",
"events_url": "https://api.github.com/users/dcfidalgo/events{/privacy}",
"followers_url": "https://api.github.com/users/dcfidalgo/followers",
"following_url": "https://api.github.com/users/dcfidalgo/following{/other_user}",
"gists_url": "https://api.github.com/users/dcfidalgo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dcfidalgo",
"id": 15979778,
"login": "dcfidalgo",
"node_id": "MDQ6VXNlcjE1OTc5Nzc4",
"organizations_url": "https://api.github.com/users/dcfidalgo/orgs",
"received_events_url": "https://api.github.com/users/dcfidalgo/received_events",
"repos_url": "https://api.github.com/users/dcfidalgo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dcfidalgo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dcfidalgo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dcfidalgo"
} | [] | closed | false | null | [] | null | [
"Nice thank you !\r\nThis looks like a pretty simple yet effective fix ;)\r\nCould you just add a test in `test_features.py` to make sure that you can create `features` with a `_type` field and that it is possible to convert it as a dict and reload it ?\r\n```python\r\nfrom datasets import Features, Value\r\n\r\n# We usually use `asdict` on a `DatasetInfo` object which is a dataclass instance that contains the features.\r\n# So we need the conversion of features to dict to work.\r\n# You can test that using `dataclasses._asdict_inner`.\r\n# This is the function used by `dataclasses.asdict` to convert a dataclass instance attribute to a dict\r\nfrom dataclasses import _asdict_inner \r\n\r\nf = Features({\"_type\": Value(\"string\")})\r\nreloaded_f = Features.from_dict(_asdict_inner(f, dict))\r\nassert reloaded_f == f\r\n```",
"Sure, i will add a test. \r\nOne question: are the posted benchmarks reliable? The extra type check seems to add quite some overhead judging by the relative differences. Do you think this is an issue?",
"The benchmark has a bit of noise, the values are fine ;)\r\nespecially in the change you did since the overhead added is negligible.",
"Ok, i added the test you described above. \r\n\r\nI avoided importing the private `_asdict_inner` method and directly used the `DatasetInfo` class, if this is ok with you. Thanks a lot for your support during this PR!"
] | "2021-03-21T23:21:57Z" | "2021-03-25T14:35:54Z" | "2021-03-25T14:35:54Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2093.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2093",
"merged_at": "2021-03-25T14:35:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2093.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2093"
} | This PR tries to fix issue #1110. Sorry for taking so long to come back to this.
It's a simple fix, but I am not sure if it works for all possible types of `obj`. Let me know what you think @lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2093/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2093/timeline | null | null | true |
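The round-trip check requested in the review comments can be gathered into a single snippet. Note that `dataclasses._asdict_inner` is a private standard-library helper; it is used here only because the review comment suggests it for testing, not because it is a supported API.

```python
# Round-trip check from the review discussion above: a column literally named
# "_type" must survive the dict conversion used when serializing DatasetInfo.
from dataclasses import _asdict_inner  # private stdlib helper, test use only

from datasets import Features, Value

f = Features({"_type": Value("string")})
reloaded_f = Features.from_dict(_asdict_inner(f, dict))
assert reloaded_f == f
```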
https://api.github.com/repos/huggingface/datasets/issues/4645 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4645/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4645/comments | https://api.github.com/repos/huggingface/datasets/issues/4645/events | https://github.com/huggingface/datasets/pull/4645 | 1,296,027,785 | PR_kwDODunzps468oZ6 | 4,645 | Set HF_SCRIPTS_VERSION to main | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-07-06T15:43:21Z" | "2022-07-06T15:56:21Z" | "2022-07-06T15:45:05Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4645.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4645",
"merged_at": "2022-07-06T15:45:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4645.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4645"
} | After renaming "master" to "main", the CI fails with
```
AssertionError: 'https://raw.githubusercontent.com/huggingface/datasets/main/datasets/_dummy/_dummy.py' not found in "Couldn't find a dataset script at /home/circleci/datasets/_dummy/_dummy.py or any data file in the same directory. Couldn't find '_dummy' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/_dummy/_dummy.py"
```
This is because in the CI we were still using `HF_SCRIPTS_VERSION=master`. I changed it to "main" | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4645/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4645/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5499 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5499/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5499/comments | https://api.github.com/repos/huggingface/datasets/issues/5499/events | https://github.com/huggingface/datasets/issues/5499 | 1,568,937,026 | I_kwDODunzps5dhBRC | 5,499 | `load_dataset` has ~4 seconds of overhead for cached data | {
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/davidgilbertson",
"id": 4443482,
"login": "davidgilbertson",
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/davidgilbertson"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Hi ! To skip the verification step that checks if newer data exist, you can enable offline mode with `HF_DATASETS_OFFLINE=1`.\r\n\r\nAlthough I agree this step should be much faster for datasets hosted on the HF Hub - we could just compare the commit hash from the local data and the remote git repository. We're not been leveraging the git commit hashes, since the library was built before we even had git repositories for each dataset on HF.",
"Thanks @lhoestq, for memory when I recorded those times I had `HF_DATASETS_OFFLINE` set."
] | "2023-02-02T23:34:50Z" | "2023-02-07T19:35:11Z" | null | NONE | null | null | null | ### Feature request
When loading a dataset that has been cached locally, the `load_dataset` function takes a lot longer than it should to fetch the dataset from disk (or memory).
This is particularly noticeable for smaller datasets. For example, with wikitext-2, comparing `load_dataset` (once cached) and `load_from_disk`, the `load_dataset` method takes 40 times longer.
⏱ 4.84s ⮜ load_dataset
⏱ 119ms ⮜ load_from_disk
### Motivation
I assume this is doing something like checking for a newer version.
If so, that's an age-old problem: do you make the user wait _every single time they load from cache_, or do you load from cache always, _then_ check for a newer version and alert if they have stale data? The decision usually revolves around what percentage of the time the data will have been updated, and how dangerous old data is.
For most datasets it's extremely unlikely that there will be a newer version on any given run, so 99% of the time this is just wasted time.
Maybe you don't want to make that decision for all users, but at least having the _option_ to not wait for checks would be an improvement.
### Your contribution
. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5499/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5499/timeline | null | null | false |
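The two mitigations that surface in this thread are skipping the online freshness check and reloading a saved copy with `load_from_disk`. The sketch below combines both; the wikitext config name is assumed for illustration.

```python
# Sketch of the mitigations discussed above: disable the online check for cached
# datasets and persist the prepared dataset so later runs can bypass `load_dataset`.
import os

os.environ["HF_DATASETS_OFFLINE"] = "1"  # must be set before `datasets` is imported

from datasets import load_dataset, load_from_disk

ds = load_dataset("wikitext", "wikitext-2-raw-v1")  # config name assumed for illustration
ds.save_to_disk("wikitext-2-local")

# Subsequent runs reload directly from disk, which the issue body measured at ~119ms.
ds = load_from_disk("wikitext-2-local")
```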
https://api.github.com/repos/huggingface/datasets/issues/6029 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6029/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6029/comments | https://api.github.com/repos/huggingface/datasets/issues/6029/events | https://github.com/huggingface/datasets/pull/6029 | 1,803,460,046 | PR_kwDODunzps5VcbPW | 6,029 | [docs] Fix link | {
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007039 / 0.011353 (-0.004314) | 0.004175 / 0.011008 (-0.006833) | 0.085426 / 0.038508 (0.046918) | 0.079818 / 0.023109 (0.056709) | 0.321924 / 0.275898 (0.046026) | 0.345482 / 0.323480 (0.022002) | 0.005510 / 0.007986 (-0.002475) | 0.003452 / 0.004328 (-0.000877) | 0.065158 / 0.004250 (0.060907) | 0.058843 / 0.037052 (0.021791) | 0.316280 / 0.258489 (0.057791) | 0.351666 / 0.293841 (0.057825) | 0.031190 / 0.128546 (-0.097357) | 0.008500 / 0.075646 (-0.067147) | 0.289595 / 0.419271 (-0.129676) | 0.053798 / 0.043533 (0.010265) | 0.315804 / 0.255139 (0.060665) | 0.334957 / 0.283200 (0.051757) | 0.024350 / 0.141683 (-0.117332) | 1.515753 / 1.452155 (0.063599) | 1.556215 / 1.492716 (0.063499) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210378 / 0.018006 (0.192372) | 0.469309 / 0.000490 (0.468820) | 0.002890 / 0.000200 (0.002690) | 0.000086 / 0.000054 (0.000031) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030214 / 0.037411 (-0.007197) | 0.088492 / 0.014526 (0.073966) | 0.098684 / 0.176557 (-0.077873) | 0.156077 / 0.737135 (-0.581058) | 0.098814 / 0.296338 (-0.197525) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.404548 / 0.215209 (0.189339) | 4.026173 / 2.077655 (1.948518) | 2.043216 / 1.504120 (0.539096) | 1.880997 / 1.541195 (0.339802) | 1.975205 / 1.468490 
(0.506715) | 0.489395 / 4.584777 (-4.095382) | 3.684097 / 3.745712 (-0.061615) | 5.126934 / 5.269862 (-0.142928) | 3.092153 / 4.565676 (-1.473524) | 0.057668 / 0.424275 (-0.366607) | 0.007372 / 0.007607 (-0.000235) | 0.479647 / 0.226044 (0.253603) | 4.780207 / 2.268929 (2.511278) | 2.533457 / 55.444624 (-52.911168) | 2.182126 / 6.876477 (-4.694351) | 2.431834 / 2.142072 (0.289761) | 0.591760 / 4.805227 (-4.213467) | 0.135450 / 6.500664 (-6.365214) | 0.063218 / 0.075469 (-0.012251) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.262053 / 1.841788 (-0.579734) | 20.246992 / 8.074308 (12.172684) | 14.638222 / 10.191392 (4.446830) | 0.150021 / 0.680424 (-0.530403) | 0.018680 / 0.534201 (-0.515521) | 0.395215 / 0.579283 (-0.184068) | 0.421270 / 0.434364 (-0.013094) | 0.458845 / 0.540337 (-0.081492) | 0.634488 / 1.386936 (-0.752448) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007080 / 0.011353 (-0.004273) | 0.004112 / 0.011008 (-0.006896) | 0.066426 / 0.038508 (0.027918) | 0.090088 / 0.023109 (0.066978) | 0.400191 / 0.275898 (0.124293) | 0.429614 / 0.323480 (0.106134) | 0.005428 / 0.007986 (-0.002558) | 0.003501 / 0.004328 (-0.000827) | 0.065056 / 0.004250 (0.060806) | 0.061643 / 0.037052 (0.024590) | 0.398619 / 0.258489 (0.140130) | 0.445497 / 0.293841 (0.151657) | 0.031703 / 0.128546 (-0.096843) | 0.008708 / 0.075646 (-0.066938) | 0.071561 / 0.419271 (-0.347711) | 0.050684 / 0.043533 (0.007151) | 0.385361 / 0.255139 (0.130222) | 0.409349 / 0.283200 (0.126149) | 0.027388 / 0.141683 (-0.114295) | 1.473021 / 1.452155 (0.020866) | 1.525246 / 1.492716 (0.032529) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237710 / 0.018006 (0.219704) | 0.468719 / 0.000490 (0.468230) | 0.000385 / 0.000200 (0.000185) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032539 / 0.037411 (-0.004872) | 0.095324 / 0.014526 (0.080798) | 0.102248 / 0.176557 (-0.074308) | 0.156096 / 0.737135 (-0.581039) | 0.103458 / 0.296338 (-0.192881) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416226 / 0.215209 (0.201017) | 4.141044 / 2.077655 (2.063389) | 2.143732 / 1.504120 (0.639612) | 2.001020 / 1.541195 (0.459825) | 2.091194 / 1.468490 (0.622704) | 0.489977 / 4.584777 (-4.094800) | 3.579615 / 3.745712 (-0.166097) | 3.438082 / 5.269862 (-1.831780) | 2.069031 / 4.565676 (-2.496645) | 0.056994 / 0.424275 (-0.367281) | 0.007362 / 0.007607 (-0.000245) | 0.493077 / 0.226044 (0.267033) | 4.922622 / 2.268929 (2.653694) | 2.627083 / 55.444624 (-52.817541) | 2.301141 / 6.876477 (-4.575336) | 2.356794 / 2.142072 (0.214722) | 0.583792 / 4.805227 (-4.221436) | 0.133707 / 6.500664 (-6.366958) | 0.062892 / 0.075469 (-0.012577) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.364908 / 1.841788 (-0.476880) | 20.641219 / 8.074308 (12.566911) | 14.848528 / 10.191392 (4.657136) | 0.174207 / 0.680424 (-0.506217) | 0.018206 / 0.534201 (-0.515995) | 0.413742 / 0.579283 (-0.165541) | 0.419940 / 0.434364 (-0.014424) | 0.458543 / 0.540337 (-0.081794) | 0.616518 / 1.386936 (-0.770418) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#18b2202c3e7cdde05920078f01864964556427da \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006875 / 0.011353 (-0.004478) | 0.003489 / 0.011008 (-0.007519) | 0.082077 / 0.038508 (0.043569) | 0.103011 / 0.023109 (0.079902) | 0.370572 / 0.275898 (0.094674) | 0.416400 / 0.323480 (0.092920) | 0.004048 / 0.007986 (-0.003938) | 0.003563 / 0.004328 (-0.000765) | 0.062666 / 0.004250 (0.058416) | 0.063664 / 0.037052 (0.026612) | 0.374206 / 0.258489 (0.115717) | 0.425590 / 0.293841 (0.131749) | 0.028174 / 0.128546 (-0.100373) | 0.007906 / 0.075646 (-0.067741) | 0.266251 / 0.419271 (-0.153020) | 0.045923 / 0.043533 (0.002390) | 0.376746 / 0.255139 (0.121607) | 0.401950 / 0.283200 (0.118750) | 0.024628 / 0.141683 (-0.117054) | 1.441903 / 1.452155 (-0.010252) | 1.537494 / 1.492716 (0.044777) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.214696 / 0.018006 (0.196690) | 0.425626 / 0.000490 (0.425137) | 0.003370 / 0.000200 (0.003170) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023133 / 0.037411 (-0.014279) | 0.072374 / 0.014526 (0.057848) | 0.081255 / 0.176557 (-0.095301) | 0.146960 / 0.737135 (-0.590175) | 0.081748 / 0.296338 (-0.214590) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.390683 / 0.215209 (0.175473) | 3.893166 / 2.077655 (1.815511) | 1.884321 / 1.504120 (0.380201) | 1.701899 / 1.541195 (0.160704) | 1.737839 / 1.468490 
(0.269349) | 0.497008 / 4.584777 (-4.087769) | 3.041211 / 3.745712 (-0.704501) | 3.519947 / 5.269862 (-1.749915) | 2.015085 / 4.565676 (-2.550592) | 0.057685 / 0.424275 (-0.366590) | 0.006415 / 0.007607 (-0.001192) | 0.465565 / 0.226044 (0.239520) | 4.635224 / 2.268929 (2.366295) | 2.297941 / 55.444624 (-53.146683) | 1.946670 / 6.876477 (-4.929807) | 2.078527 / 2.142072 (-0.063546) | 0.584101 / 4.805227 (-4.221126) | 0.126488 / 6.500664 (-6.374176) | 0.060819 / 0.075469 (-0.014650) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.223400 / 1.841788 (-0.618388) | 17.960923 / 8.074308 (9.886615) | 13.187683 / 10.191392 (2.996291) | 0.129258 / 0.680424 (-0.551166) | 0.016601 / 0.534201 (-0.517600) | 0.330028 / 0.579283 (-0.249255) | 0.353861 / 0.434364 (-0.080503) | 0.376022 / 0.540337 (-0.164315) | 0.518145 / 1.386936 (-0.868791) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006015 / 0.011353 (-0.005338) | 0.003605 / 0.011008 (-0.007403) | 0.062169 / 0.038508 (0.023661) | 0.056094 / 0.023109 (0.032985) | 0.353085 / 0.275898 (0.077187) | 0.393744 / 0.323480 (0.070265) | 0.004672 / 0.007986 (-0.003313) | 0.002859 / 0.004328 (-0.001469) | 0.062992 / 0.004250 (0.058742) | 0.049767 / 0.037052 (0.012714) | 0.356850 / 0.258489 (0.098361) | 0.403731 / 0.293841 (0.109890) | 0.026664 / 0.128546 (-0.101882) | 0.008026 / 0.075646 (-0.067621) | 0.067944 / 0.419271 (-0.351327) | 0.042133 / 0.043533 (-0.001400) | 0.353865 / 0.255139 (0.098726) | 0.383461 / 0.283200 (0.100261) | 0.021250 / 0.141683 (-0.120433) | 1.428102 / 1.452155 (-0.024053) | 1.481061 / 1.492716 (-0.011655) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223552 / 0.018006 (0.205546) | 0.402390 / 0.000490 (0.401900) | 0.000721 / 0.000200 (0.000521) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025065 / 0.037411 (-0.012347) | 0.075537 / 0.014526 (0.061011) | 0.083519 / 0.176557 (-0.093037) | 0.137068 / 0.737135 (-0.600068) | 0.084165 / 0.296338 (-0.212173) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420176 / 0.215209 (0.204967) | 4.206226 / 2.077655 (2.128571) | 2.168089 / 1.504120 (0.663969) | 1.987299 / 1.541195 (0.446104) | 2.029489 / 1.468490 (0.560999) | 0.495822 / 4.584777 (-4.088955) | 3.106580 / 3.745712 (-0.639132) | 3.833215 / 5.269862 (-1.436647) | 2.450450 / 4.565676 (-2.115226) | 0.056979 / 0.424275 (-0.367296) | 0.006514 / 0.007607 (-0.001093) | 0.503646 / 0.226044 (0.277601) | 5.035035 / 2.268929 (2.766106) | 2.608245 / 55.444624 (-52.836379) | 2.245492 / 6.876477 (-4.630985) | 2.262868 / 2.142072 (0.120795) | 0.590736 / 4.805227 (-4.214491) | 0.124637 / 6.500664 (-6.376027) | 0.061442 / 0.075469 (-0.014027) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.316736 / 1.841788 (-0.525052) | 17.948635 / 8.074308 (9.874327) | 13.752442 / 10.191392 (3.561050) | 0.144107 / 0.680424 (-0.536317) | 0.017112 / 0.534201 (-0.517089) | 0.336537 / 0.579283 (-0.242746) | 0.347832 / 0.434364 (-0.086532) | 0.392944 / 0.540337 (-0.147393) | 0.534455 / 1.386936 (-0.852481) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#406b2212263c0d33f267e35b917f410ff6b3bc00 \"CML watermark\")\n"
] | "2023-07-13T17:24:12Z" | "2023-07-13T17:47:41Z" | "2023-07-13T17:38:59Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6029.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6029",
"merged_at": "2023-07-13T17:38:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6029.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6029"
} | Fixes link to the builder classes :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6029/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6029/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3277 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3277/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3277/comments | https://api.github.com/repos/huggingface/datasets/issues/3277/events | https://github.com/huggingface/datasets/pull/3277 | 1,054,122,656 | PR_kwDODunzps4ujk11 | 3,277 | f-string formatting | {
"avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4",
"events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}",
"followers_url": "https://api.github.com/users/Mehdi2402/followers",
"following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}",
"gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Mehdi2402",
"id": 56029953,
"login": "Mehdi2402",
"node_id": "MDQ6VXNlcjU2MDI5OTUz",
"organizations_url": "https://api.github.com/users/Mehdi2402/orgs",
"received_events_url": "https://api.github.com/users/Mehdi2402/received_events",
"repos_url": "https://api.github.com/users/Mehdi2402/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Mehdi2402"
} | [] | closed | false | null | [] | null | [
"Hello @lhoestq, ```make style``` is applied as asked. :)"
] | "2021-11-15T21:37:05Z" | "2021-11-19T20:40:08Z" | "2021-11-17T16:18:38Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3277.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3277",
"merged_at": "2021-11-17T16:18:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3277.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3277"
} | **Fix #3257**
Replaced _.format()_ and _%_ with f-strings in the following modules:
- [x] **tests**
- [x] **metrics**
- [x] **benchmarks**
- [x] **utils**
- [x] **templates**
- [x] **src/Datasets/\*.py**
Modules in **_src/Datasets/_**:
- [x] **commands**
- [x] **features**
- [x] **formatting**
- [x] **io**
- [x] **tasks**
- [x] **utils**
The **datasets** module will not be edited, as requested by @mariosasko
-A correction of the first PR (#3267)-
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3277/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3277/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1745 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1745/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1745/comments | https://api.github.com/repos/huggingface/datasets/issues/1745/events | https://github.com/huggingface/datasets/issues/1745 | 787,838,256 | MDU6SXNzdWU3ODc4MzgyNTY= | 1,745 | difference between wsc and wsc.fixed for superglue | {
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghost",
"id": 10137,
"login": "ghost",
"node_id": "MDQ6VXNlcjEwMTM3",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"repos_url": "https://api.github.com/users/ghost/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghost"
} | [] | closed | false | null | [] | null | [
"From the description given in the dataset script for `wsc.fixed`:\r\n```\r\nThis version fixes issues where the spans are not actually substrings of the text.\r\n```"
] | "2021-01-18T00:50:19Z" | "2021-01-18T11:02:43Z" | "2021-01-18T00:59:34Z" | NONE | null | null | null | Hi
I see two versions of wsc in superglue, and I am not sure what the differences are or which one is the original. Could you clarify the differences? Thanks @lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1745/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1745/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1705 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1705/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1705/comments | https://api.github.com/repos/huggingface/datasets/issues/1705/events | https://github.com/huggingface/datasets/pull/1705 | 781,474,949 | MDExOlB1bGxSZXF1ZXN0NTUxMTkyMTc4 | 1,705 | Add information about caching and verifications in "Load a Dataset" docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SBrandeis",
"id": 33657802,
"login": "SBrandeis",
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SBrandeis"
} | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | [] | null | [] | "2021-01-07T17:18:44Z" | "2021-01-12T14:08:01Z" | "2021-01-12T14:08:01Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1705.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1705",
"merged_at": "2021-01-12T14:08:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1705.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1705"
} | Related to #215.
Missing improvements from @lhoestq's #1703. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1705/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1705/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3184 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3184/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3184/comments | https://api.github.com/repos/huggingface/datasets/issues/3184/events | https://github.com/huggingface/datasets/pull/3184 | 1,040,114,102 | PR_kwDODunzps4t4J61 | 3,184 | RONEC v2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/22746816?v=4",
"events_url": "https://api.github.com/users/dumitrescustefan/events{/privacy}",
"followers_url": "https://api.github.com/users/dumitrescustefan/followers",
"following_url": "https://api.github.com/users/dumitrescustefan/following{/other_user}",
"gists_url": "https://api.github.com/users/dumitrescustefan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dumitrescustefan",
"id": 22746816,
"login": "dumitrescustefan",
"node_id": "MDQ6VXNlcjIyNzQ2ODE2",
"organizations_url": "https://api.github.com/users/dumitrescustefan/orgs",
"received_events_url": "https://api.github.com/users/dumitrescustefan/received_events",
"repos_url": "https://api.github.com/users/dumitrescustefan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dumitrescustefan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dumitrescustefan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dumitrescustefan"
} | [] | closed | false | null | [] | null | [
"@lhoestq Thanks for the review. I totally understand what you are saying. Normally, I would definitely agree with you, but in this particular case, the quality of v1 is poor, and the dataset itself is small (at the time we created v1 it was the only RO NER dataset, and its size was limited by the available resources). \r\n\r\nThis is why we worked to build a larger one, with much better inter-annotator agreement. Fact is, models trained on v1 will be of very low quality and I would not recommend to anybody to use/do that. That's why I'd strongly suggest we replace v1 with v2, and kindof make v1 vanish :) \r\n\r\nWhat do you think? If you insist on having v1 accessible, I'll add the required code. Thanks!\r\n\r\n",
"Ok I see ! I think it's fine then, no need to re-add V1"
] | "2021-10-30T10:50:03Z" | "2021-11-02T16:02:23Z" | "2021-11-02T16:02:22Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3184.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3184",
"merged_at": "2021-11-02T16:02:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3184.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3184"
} | Hi, as we've recently finished with the new RONEC (Romanian Named Entity Corpus), we'd like to update the dataset here as well. It's actually essential as links to V1 are no longer valid.
In reality we'd like to completely replace v1, as v2 is a full re-annotation of v1 with additional data (up to 2x the size of v1).
I've run `make style` and all the dummy and real data tests, and they passed.
I hope it's okay to merge the new RONEC v2 into datasets.
Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3184/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3184/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5168 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5168/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5168/comments | https://api.github.com/repos/huggingface/datasets/issues/5168/events | https://github.com/huggingface/datasets/pull/5168 | 1,424,368,572 | PR_kwDODunzps5BmYnq | 5,168 | Fix CI require beam | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I'm merging this PR because it is quite a trivial fix and this is required by:\r\n- #5166"
] | "2022-10-26T16:49:33Z" | "2022-10-27T09:25:19Z" | "2022-10-27T09:23:26Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5168.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5168",
"merged_at": "2022-10-27T09:23:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5168.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5168"
} | This PR:
- Fixes the CI `require_beam`: before, it required PyTorch instead
```python
def require_beam(test_case):
if not config.TORCH_AVAILABLE:
test_case = unittest.skip("test requires PyTorch")(test_case)
return test_case
```
- Fixes a missing `require_beam` in `test_beam_based_builder_download_and_prepare_as_parquet`
- Refactors `require_beam` to use `pytest` (`skipif`) instead | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5168/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5168/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3674 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3674/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3674/comments | https://api.github.com/repos/huggingface/datasets/issues/3674/events | https://github.com/huggingface/datasets/pull/3674 | 1,123,027,874 | PR_kwDODunzps4yBe17 | 3,674 | Add FrugalScore metric | {
"avatar_url": "https://avatars.githubusercontent.com/u/28675016?v=4",
"events_url": "https://api.github.com/users/moussaKam/events{/privacy}",
"followers_url": "https://api.github.com/users/moussaKam/followers",
"following_url": "https://api.github.com/users/moussaKam/following{/other_user}",
"gists_url": "https://api.github.com/users/moussaKam/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/moussaKam",
"id": 28675016,
"login": "moussaKam",
"node_id": "MDQ6VXNlcjI4Njc1MDE2",
"organizations_url": "https://api.github.com/users/moussaKam/orgs",
"received_events_url": "https://api.github.com/users/moussaKam/received_events",
"repos_url": "https://api.github.com/users/moussaKam/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/moussaKam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moussaKam/subscriptions",
"type": "User",
"url": "https://api.github.com/users/moussaKam"
} | [] | closed | false | null | [] | null | [
"@lhoestq \r\n\r\nThe model used by default (`moussaKam/frugalscore_tiny_bert-base_bert-score`) is a tiny model.\r\n\r\nI still want to make one modification before merging.\r\nI would like to load the model checkpoint once. Do you think it's a good idea if I load it in `_download_and_prepare`? In this case should the model name be the `self.config_name` or another variable say `self.model_name` ? ",
"OK, I added a commit that loads the checkpoint in `_download_and_prepare`. Please let me know if it looks good. ",
"@lhoestq is everything OK to merge? ",
"I triggered the CI and it's failing, can you merge the `master` branch into yours ? It should fix the issues.\r\n\r\nAlso the doctest apparently raises an error because it outputs `{'scores': [0.6307542, 0.6449357]}` instead of `{'scores': [0.631, 0.645]}` - feel free to edit the code example in the docstring to round the scores, that should fix it",
"@lhoestq hope it's OK now"
] | "2022-02-03T12:28:52Z" | "2022-02-21T15:58:44Z" | "2022-02-21T15:58:44Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3674.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3674",
"merged_at": "2022-02-21T15:58:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3674.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3674"
} | This pull request adds the FrugalScore metric for NLG system evaluation.
FrugalScore is a reference-based metric for NLG model evaluation. It is based on a distillation approach that allows learning a fixed, low-cost version of any expensive NLG metric while retaining most of its original performance.
Paper: https://arxiv.org/abs/2110.08559?context=cs
Github: https://github.com/moussaKam/FrugalScore
@lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3674/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3674/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2496 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2496/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2496/comments | https://api.github.com/repos/huggingface/datasets/issues/2496/events | https://github.com/huggingface/datasets/issues/2496 | 920,216,314 | MDU6SXNzdWU5MjAyMTYzMTQ= | 2,496 | Dataset fingerprint changes after moving the cache directory, which prevent cache reload when using `map` | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [] | "2021-06-14T09:20:26Z" | "2021-06-21T15:05:03Z" | "2021-06-21T15:05:03Z" | MEMBER | null | null | null | `Dataset.map` uses the dataset fingerprint (a hash) for caching.
However the fingerprint seems to change when someone moves the cache directory of the dataset.
This is because it uses the default fingerprint generation:
1. the dataset path is used to get the fingerprint
2. the modification times of the arrow file are also used to get the fingerprint
To fix that, we could set the fingerprint of the dataset to be a hash of (<dataset_name>, <config_name>, <version>, <script_hash>), i.e. a hash of the cache path relative to the cache directory. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2496/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2496/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3388 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3388/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3388/comments | https://api.github.com/repos/huggingface/datasets/issues/3388/events | https://github.com/huggingface/datasets/pull/3388 | 1,072,022,021 | PR_kwDODunzps4vbnyY | 3,388 | Fix flaky test of the temporary directory used by load_from_disk | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"CI failed because of a server error - merging"
] | "2021-12-06T11:09:31Z" | "2021-12-06T11:25:03Z" | "2021-12-06T11:24:49Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3388.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3388",
"merged_at": "2021-12-06T11:24:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3388.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3388"
} | The test is flaky; here is an example of a random CI failure:
https://github.com/huggingface/datasets/commit/73ed6615b4b3eb74d5311684f7b9e05cdb76c989
I fixed that by not checking the content of the random part of the temporary directory name. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3388/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3388/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4720 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4720/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4720/comments | https://api.github.com/repos/huggingface/datasets/issues/4720/events | https://github.com/huggingface/datasets/issues/4720 | 1,309,980,195 | I_kwDODunzps5OFLYj | 4,720 | Dataset Viewer issue for shamikbose89/lancaster_newsbooks | {
"avatar_url": "https://avatars.githubusercontent.com/u/50837285?v=4",
"events_url": "https://api.github.com/users/shamikbose/events{/privacy}",
"followers_url": "https://api.github.com/users/shamikbose/followers",
"following_url": "https://api.github.com/users/shamikbose/following{/other_user}",
"gists_url": "https://api.github.com/users/shamikbose/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shamikbose",
"id": 50837285,
"login": "shamikbose",
"node_id": "MDQ6VXNlcjUwODM3Mjg1",
"organizations_url": "https://api.github.com/users/shamikbose/orgs",
"received_events_url": "https://api.github.com/users/shamikbose/received_events",
"repos_url": "https://api.github.com/users/shamikbose/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shamikbose/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamikbose/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shamikbose"
} | [] | closed | false | null | [] | null | [
"It seems like the list of splits could not be obtained:\r\n\r\n```python\r\n>>> from datasets import get_dataset_split_names\r\n>>> get_dataset_split_names(\"shamikbose89/lancaster_newsbooks\", \"default\")\r\nUsing custom data configuration default\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 354, in get_dataset_config_info\r\n for split_generator in builder._split_generators(\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/shamikbose89--lancaster_newsbooks/2d1c63d269bf7b9342accce0a95960b1710ab4bc774248878bd80eb96c1afaf7/lancaster_newsbooks.py\", line 73, in _split_generators\r\n data_dir = dl_manager.download_and_extract(_URL)\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 916, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 879, in extract\r\n urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True)\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 348, in map_nested\r\n return function(data_struct)\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 884, in _extract\r\n protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 388, in _get_extraction_protocol\r\n return _get_extraction_protocol_with_magic_number(f)\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 354, in _get_extraction_protocol_with_magic_number\r\n f.seek(0)\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py\", line 684, in seek\r\n raise ValueError(\"Cannot seek streaming HTTP file\")\r\nValueError: Cannot seek streaming HTTP file\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 404, in get_dataset_split_names\r\n info = get_dataset_config_info(\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 359, in get_dataset_config_info\r\n raise SplitsNotFoundError(\"The split names could not be parsed from the dataset config.\") from err\r\ndatasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.\r\n```\r\n\r\nping @huggingface/datasets ",
"Oh, I removed the 'split' key from `kwargs`. I put it back in, but there's still the same error",
"It looks like the data host doesn't support http range requests, which is necessary to glob inside a ZIP archive in streaming mode. Can you try hosting the dataset elsewhere ? Or download each file separately from https://ota.bodleian.ox.ac.uk/repository/xmlui/handle/20.500.12024/2531 ?",
"@lhoestq Thanks! That seems to have solved it. I can get the splits with the `get_dataset_split_names()` function. The dataset viewer is still not loading properly, though. The new error is\r\n```\r\nStatus code: 400\r\nException: BadZipFile\r\nMessage: File is not a zip file\r\n```\r\n\r\nPS. The dataset loads properly and can be accessed"
] | "2022-07-19T20:00:07Z" | "2022-09-08T16:47:21Z" | "2022-09-08T16:47:21Z" | NONE | null | null | null | ### Link
https://huggingface.co/datasets/shamikbose89/lancaster_newsbooks
### Description
Status code: 400
Exception: ValueError
Message: Cannot seek streaming HTTP file
I am able to use the dataset loading script locally and it also runs when I'm using the one from the hub, but the viewer still doesn't load
### Owner
Yes | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4720/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4720/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5875 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5875/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5875/comments | https://api.github.com/repos/huggingface/datasets/issues/5875/events | https://github.com/huggingface/datasets/issues/5875 | 1,716,770,394 | I_kwDODunzps5mU9Za | 5,875 | Why split slicing doesn't behave like list slicing ? | {
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/astariul",
"id": 43774355,
"login": "astariul",
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"repos_url": "https://api.github.com/users/astariul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/astariul"
} | [
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
}
] | open | false | null | [] | null | [
"A duplicate of https://github.com/huggingface/datasets/issues/1774"
] | "2023-05-19T07:21:10Z" | "2023-05-23T16:02:14Z" | null | NONE | null | null | null | ### Describe the bug
If I want to get the first 10 samples of my dataset, I can do :
```
ds = datasets.load_dataset('mnist', split='train[:10]')
```
But if I exceed the number of samples in the dataset, an exception is raised:
```
ds = datasets.load_dataset('mnist', split='train[:999999999]')
```
> ValueError: Requested slice [:999999999] incompatible with 60000 examples.
### Steps to reproduce the bug
```
ds = datasets.load_dataset('mnist', split='train[:999999999]')
```
### Expected behavior
I would expect it to behave like Python lists (no exception raised, the whole list is kept):
```
d = list(range(1000))[:999999]
print(len(d)) # > 1000
```
### Environment info
- `datasets` version: 2.9.0
- Platform: macOS-12.6-arm64-arm-64bit
- Python version: 3.9.12
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5875/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5875/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1279 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1279/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1279/comments | https://api.github.com/repos/huggingface/datasets/issues/1279/events | https://github.com/huggingface/datasets/pull/1279 | 759,108,726 | MDExOlB1bGxSZXF1ZXN0NTM0MTU4OTY5 | 1,279 | added para_pat | {
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhavitvyamalik",
"id": 19718818,
"login": "bhavitvyamalik",
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhavitvyamalik"
} | [] | closed | false | null | [] | null | [
"Updated with Translation feature type. Working on dataset tags and README",
"merging since the CI is fixed on master"
] | "2020-12-08T06:28:47Z" | "2020-12-14T13:41:17Z" | "2020-12-14T13:41:17Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1279.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1279",
"merged_at": "2020-12-14T13:41:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1279.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1279"
} | Dataset link : https://figshare.com/articles/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts/12627632
Working on README.md currently | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1279/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1279/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2605 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2605/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2605/comments | https://api.github.com/repos/huggingface/datasets/issues/2605/events | https://github.com/huggingface/datasets/pull/2605 | 938,648,164 | MDExOlB1bGxSZXF1ZXN0Njg0OTkyODIz | 2,605 | Make any ClientError trigger retry in streaming mode (e.g. ClientOSError) | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | {
"closed_at": "2021-07-21T15:36:49Z",
"closed_issues": 29,
"created_at": "2021-06-08T18:48:33Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-08-05T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/6",
"id": 6836458,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels",
"node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==",
"number": 6,
"open_issues": 0,
"state": "closed",
"title": "1.10",
"updated_at": "2021-07-21T15:36:49Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/6"
} | [] | "2021-07-07T08:47:23Z" | "2021-07-12T14:10:27Z" | "2021-07-07T08:59:13Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2605.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2605",
"merged_at": "2021-07-07T08:59:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2605.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2605"
} | During the FLAX sprint some users have this error when streaming datasets:
```python
aiohttp.client_exceptions.ClientOSError: [Errno 104] Connection reset by peer
```
This error must trigger a retry instead of directly crashing.
Therefore, I extended the error type that triggers the retry to be the base aiohttp error type: `ClientError`.
In particular, both `ClientOSError` and `ServerDisconnectedError` inherit from `ClientError`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2605/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2605/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3410 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3410/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3410/comments | https://api.github.com/repos/huggingface/datasets/issues/3410/events | https://github.com/huggingface/datasets/pull/3410 | 1,075,815,415 | PR_kwDODunzps4voFG7 | 3,410 | Fix dependencies conflicts in Windows CI after conda update to 4.11 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-12-09T17:19:11Z" | "2021-12-09T17:36:20Z" | "2021-12-09T17:36:19Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3410.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3410",
"merged_at": "2021-12-09T17:36:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3410.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3410"
} | For some reason the CI wasn't using Python 3.6 but Python 3.7 after the update to conda 4.11. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3410/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3410/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5922 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5922/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5922/comments | https://api.github.com/repos/huggingface/datasets/issues/5922/events | https://github.com/huggingface/datasets/issues/5922 | 1,736,898,953 | I_kwDODunzps5nhvmJ | 5,922 | Length of table does not accurately reflect the split | {
"avatar_url": "https://avatars.githubusercontent.com/u/8068268?v=4",
"events_url": "https://api.github.com/users/amogkam/events{/privacy}",
"followers_url": "https://api.github.com/users/amogkam/followers",
"following_url": "https://api.github.com/users/amogkam/following{/other_user}",
"gists_url": "https://api.github.com/users/amogkam/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/amogkam",
"id": 8068268,
"login": "amogkam",
"node_id": "MDQ6VXNlcjgwNjgyNjg=",
"organizations_url": "https://api.github.com/users/amogkam/orgs",
"received_events_url": "https://api.github.com/users/amogkam/received_events",
"repos_url": "https://api.github.com/users/amogkam/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/amogkam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amogkam/subscriptions",
"type": "User",
"url": "https://api.github.com/users/amogkam"
} | [
{
"color": "ffffff",
"default": true,
"description": "This will not be worked on",
"id": 1935892913,
"name": "wontfix",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix"
}
] | closed | false | null | [] | null | [
"As already replied by @lhoestq (private channel):\r\n> `.train_test_split` (as well as `.shard`, `.select`) doesn't create a new arrow table to save time and disk space. Instead, it uses an indices mapping on top of the table that locate which examples are part of train or test.",
"This is an optimization that we don't plan to \"fix\", so I'm closing this issue."
] | "2023-06-01T18:56:26Z" | "2023-06-02T16:13:31Z" | "2023-06-02T16:13:31Z" | NONE | null | null | null | ### Describe the bug
I load a Huggingface Dataset and do `train_test_split`. I'm expecting the underlying table for the dataset to also be split, but it's not.
### Steps to reproduce the bug
![image](https://github.com/huggingface/datasets/assets/8068268/83e5768f-8b4c-422a-945c-832a7585afff)
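Since the reproduction above is only a screenshot, here is a minimal text sketch of the same behavior (the toy dataset and its sizes are assumptions, not the reporter's actual data):

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(100))})
splits = ds.train_test_split(test_size=0.2)

print(len(splits["train"]))       # 80  -> length of the split view
print(len(splits["train"].data))  # 100 -> the underlying Arrow table is still the full, unsplit dataset
# splits["train"].flatten_indices() would materialize the split into a new table
```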
### Expected behavior
The expected behavior is that `len(hf_dataset["train"].data)` matches the length of the train split, rather than the length of the entire unsplit dataset.
### Environment info
datasets 2.10.1
python 3.10.11 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5922/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5922/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1430 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1430/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1430/comments | https://api.github.com/repos/huggingface/datasets/issues/1430/events | https://github.com/huggingface/datasets/pull/1430 | 760,779,666 | MDExOlB1bGxSZXF1ZXN0NTM1NTU0Njg0 | 1,430 | Add 1.5 billion words Arabic corpus | {
"avatar_url": "https://avatars.githubusercontent.com/u/15667714?v=4",
"events_url": "https://api.github.com/users/zaidalyafeai/events{/privacy}",
"followers_url": "https://api.github.com/users/zaidalyafeai/followers",
"following_url": "https://api.github.com/users/zaidalyafeai/following{/other_user}",
"gists_url": "https://api.github.com/users/zaidalyafeai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zaidalyafeai",
"id": 15667714,
"login": "zaidalyafeai",
"node_id": "MDQ6VXNlcjE1NjY3NzE0",
"organizations_url": "https://api.github.com/users/zaidalyafeai/orgs",
"received_events_url": "https://api.github.com/users/zaidalyafeai/received_events",
"repos_url": "https://api.github.com/users/zaidalyafeai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zaidalyafeai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zaidalyafeai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zaidalyafeai"
} | [] | closed | false | null | [] | null | [
"Can't pass dummy data tests. For the instructions, it asks me to generate the following file `dummy_data/Youm7_XML_utf_8.rar/Youm7_utf_8.xml` which is strange, any ideas @lhoestq ?\r\n\r\ncc: I tested the data locally and it works, maybe the dummy tests doesn't support `rar` ? ",
"In the dummy_data.zip files you must include the rar file as if is was already extracted.\r\nIn particular here `Youm7_XML_utf_8.rar` is a directory (not an archive).",
"Also I'm getting `BadRarFile: Failed the read enough data: req=16384 got=51` while trying to download and extract the `Alittihad_XML_utf_8.rar` file. Do you have this issue as well ?\r\n\r\nI have rarfile 4.0",
"Sorry it was my mistake, I missed up the directories, it works now. Not sure why you got that error. I have the same version of `rarfile`. Between, there were some suggestions to change the dataset from `1bn_words_arabic` to `arabic_billion_words` like https://github.com/huggingface/datasets/tree/master/datasets/spanish_billion_words. \r\n",
"I'm ok with renaming the dataset `arabic_billion_words` if you want.\r\nNote that you will need to rename class name `ArabicBillionWords` instead of `BillionWords`\r\n(though `BillionWords` was not matching `1bn_words_arabic` anyway)\r\n\r\nYou will need to regenerate the dataset_infos.json file after this change.\r\nOR alternatively just replace all mentions of `billion_words` with `arabic_billion_words` in dataset_infos.json <- this trick should save you some time :)",
"Hmmm I'm still not able to run it on my side because of the rar error (I'm running macos)\r\nI just tried with rarfile 3.1 and it didn't work either.\r\nI would like to be able to run it end-to-end on my side before merging if you don't mind. Let me investigate this issue a little bit",
"No worries, I will investigate it as well. ",
"I created a minimal example in [colab ](https://colab.research.google.com/drive/11ijesuGbrQylANka0VdsZ5vXwIuxkheY?usp=sharing).",
"Nice thanks, maybe it's just an issue on my side then",
"Ok I managed to solve the BadRarFile issue on my side :) \r\nTo fix it I had to install the `unrar` tool for macos (though it seems it's not available with `brew install` anymore, I had to install it from elsewhere)."
] | "2020-12-10T00:32:18Z" | "2020-12-22T10:03:59Z" | "2020-12-22T10:03:59Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1430.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1430",
"merged_at": "2020-12-22T10:03:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1430.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1430"
} | Needs https://github.com/huggingface/datasets/pull/1429 to work. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1430/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1430/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4078 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4078/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4078/comments | https://api.github.com/repos/huggingface/datasets/issues/4078/events | https://github.com/huggingface/datasets/pull/4078 | 1,189,513,572 | PR_kwDODunzps41eWnl | 4,078 | Fix GithubMetricModuleFactory instantiation with None download_config | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-04-01T09:26:58Z" | "2022-04-01T14:44:51Z" | "2022-04-01T14:39:27Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4078.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4078",
"merged_at": "2022-04-01T14:39:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4078.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4078"
} | Recent PR:
- #4063
introduced a potential bug when `GithubMetricModuleFactory` is instantiated with `download_config=None`.
This PR adds instantiation tests and fixes that potential issue.
CC: @lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4078/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4078/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5369 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5369/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5369/comments | https://api.github.com/repos/huggingface/datasets/issues/5369/events | https://github.com/huggingface/datasets/pull/5369 | 1,500,622,276 | PR_kwDODunzps5Fqaj- | 5,369 | Distributed support | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Alright all the tests are passing - this is ready for review",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.015146 / 0.011353 (0.003793) | 0.006683 / 0.011008 (-0.004326) | 0.125994 / 0.038508 (0.087486) | 0.041345 / 0.023109 (0.018235) | 0.378609 / 0.275898 (0.102711) | 0.483139 / 0.323480 (0.159659) | 0.009669 / 0.007986 (0.001684) | 0.005143 / 0.004328 (0.000814) | 0.092015 / 0.004250 (0.087765) | 0.052728 / 0.037052 (0.015676) | 0.397166 / 0.258489 (0.138677) | 0.465820 / 0.293841 (0.171979) | 0.051025 / 0.128546 (-0.077521) | 0.018451 / 0.075646 (-0.057196) | 0.397311 / 0.419271 (-0.021960) | 0.054842 / 0.043533 (0.011309) | 0.391203 / 0.255139 (0.136064) | 0.412743 / 0.283200 (0.129543) | 0.111356 / 0.141683 (-0.030327) | 1.697526 / 1.452155 (0.245372) | 1.795017 / 1.492716 (0.302301) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.253737 / 0.018006 (0.235731) | 0.583071 / 0.000490 (0.582581) | 0.005958 / 0.000200 (0.005758) | 0.000110 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030397 / 0.037411 (-0.007014) | 0.112242 / 0.014526 (0.097716) | 0.138807 / 0.176557 (-0.037749) | 0.209820 / 0.737135 (-0.527316) | 0.139530 / 0.296338 (-0.156808) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.574111 / 0.215209 (0.358902) | 5.623713 / 2.077655 (3.546058) | 2.416880 / 1.504120 (0.912760) | 1.951013 / 1.541195 (0.409819) | 2.124565 / 1.468490 
(0.656075) | 1.268854 / 4.584777 (-3.315923) | 5.942368 / 3.745712 (2.196656) | 5.413814 / 5.269862 (0.143952) | 2.931638 / 4.565676 (-1.634038) | 0.135070 / 0.424275 (-0.289205) | 0.014290 / 0.007607 (0.006683) | 0.708384 / 0.226044 (0.482340) | 7.487994 / 2.268929 (5.219065) | 3.074210 / 55.444624 (-52.370414) | 2.380583 / 6.876477 (-4.495893) | 2.522298 / 2.142072 (0.380226) | 1.336741 / 4.805227 (-3.468486) | 0.236761 / 6.500664 (-6.263903) | 0.076592 / 0.075469 (0.001123) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.629415 / 1.841788 (-0.212373) | 19.000640 / 8.074308 (10.926332) | 21.474058 / 10.191392 (11.282666) | 0.231227 / 0.680424 (-0.449197) | 0.046213 / 0.534201 (-0.487988) | 0.565703 / 0.579283 (-0.013580) | 0.662956 / 0.434364 (0.228592) | 0.656475 / 0.540337 (0.116137) | 0.762534 / 1.386936 (-0.624402) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010952 / 0.011353 (-0.000400) | 0.006259 / 0.011008 (-0.004749) | 0.132430 / 0.038508 (0.093922) | 0.037920 / 0.023109 (0.014811) | 0.483565 / 0.275898 (0.207667) | 0.528190 / 0.323480 (0.204710) | 0.008116 / 0.007986 (0.000130) | 0.006768 / 0.004328 (0.002440) | 0.100520 / 0.004250 (0.096270) | 0.055208 / 0.037052 (0.018155) | 0.484672 / 0.258489 (0.226183) | 0.556937 / 0.293841 (0.263096) | 0.057938 / 0.128546 (-0.070609) | 0.020821 / 0.075646 (-0.054826) | 0.430735 / 0.419271 (0.011464) | 0.066317 / 0.043533 (0.022785) | 0.496652 / 0.255139 (0.241513) | 0.502004 / 0.283200 (0.218804) | 0.125403 / 0.141683 (-0.016280) | 1.833396 / 1.452155 (0.381241) | 1.974517 / 1.492716 (0.481800) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.269198 / 0.018006 (0.251191) | 0.620314 / 0.000490 (0.619824) | 0.000535 / 0.000200 (0.000335) | 0.000083 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032373 / 0.037411 (-0.005039) | 0.130043 / 0.014526 (0.115517) | 0.146217 / 0.176557 (-0.030339) | 0.200187 / 0.737135 (-0.536948) | 0.152839 / 0.296338 (-0.143499) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.677478 / 0.215209 (0.462268) | 6.678856 / 2.077655 (4.601201) | 3.025870 / 1.504120 (1.521750) | 2.678196 / 1.541195 (1.137001) | 2.740640 / 1.468490 (1.272150) | 1.237163 / 4.584777 (-3.347614) | 5.752621 / 3.745712 (2.006908) | 3.170435 / 5.269862 (-2.099427) | 2.049174 / 4.565676 (-2.516502) | 0.147663 / 0.424275 (-0.276612) | 0.016107 / 0.007607 (0.008500) | 0.849666 / 0.226044 (0.623621) | 8.395212 / 2.268929 (6.126283) | 3.741120 / 55.444624 (-51.703505) | 3.102926 / 6.876477 (-3.773550) | 3.233655 / 2.142072 (1.091583) | 1.520349 / 4.805227 (-3.284878) | 0.267159 / 6.500664 (-6.233505) | 0.083646 / 0.075469 (0.008177) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.640458 / 1.841788 (-0.201330) | 19.043169 / 8.074308 (10.968861) | 22.786126 / 10.191392 (12.594734) | 0.218040 / 0.680424 (-0.462384) | 0.032948 / 0.534201 (-0.501253) | 0.569574 / 0.579283 (-0.009710) | 0.658746 / 0.434364 (0.224382) | 0.650501 / 0.540337 (0.110164) | 0.730588 / 1.386936 (-0.656348) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png \"CML watermark\")\n",
"just added a note :)",
"Hi @lhoestq ,\r\nCan you please throw some light on the following statement\r\n`If the dataset has a number of shards that is a factor of world_size (i.e. if dataset.n_shards % world_size == 0), then the shards are evenly assigned across the nodes, which is the most optimized. Otherwise, each node keeps 1 example out of world_size, skipping the other examples.`\r\n\r\nLet's assume I have 127 parquet files and world_size is 4. I was not able to fully comprehend the above statement\r\nWhat does this statement mean?\r\n`each node keeps 1 example out of world_size, skipping the other examples.`\r\nThank you!",
"If you have 128 parquet files, then `dataset.n_shards % world_size == 0`. In this case each worker can take care of 32 parquet files.\r\n\r\nOn the other hand if you have `dataset.n_shards % world_size != 0` (in your case 127 files), then we can't assign the same number of files to each worker. This is an issue because it may under-utilize your GPU at the end of your training since some workers will take longer to iterate on the dataset than others.\r\n\r\nTherefore in this case, all the workers take care of the 127 parquet files but workers will skip examples to not end up with duplicates. That's what \"each node keeps 1 example out of world_size, skipping the other examples\" means, and in your case it implies:\r\n- rank=0 will read the samples with idx=0, 4, 8 etc.\r\n- rank=1 will read the samples with idx=1, 5, 9 etc.\r\n- rank=2 will read the samples with idx=2, 6, 10 etc.\r\n- rank=3 will read the samples with idx=3, 7, 11 etc.",
"Thanks a lot @lhoestq , this helps!",
"Hi, in the case above, if we use `keep_in_memory=True` for `Dataset`, then we still need to read in n times the dataset if we use DDP on n GPUs (1 node), right? That means we need n times the memory. Is there any way to only load the data once, to save memory?",
"`Dataset` objects are memory mapped from disk so they use almost no RAM (only the current batch)\r\n\r\nAlso they are perfectly sharded using `split_dataset_by_node` so it's going to be read exactly once in total using DDP.\r\nYou can also achieve the same thing using a DistributedSampler in pytorch for DDP instead of using `split_dataset_by_node`.",
"Hi, please correct if I mistake anything: \r\n1. `Dataset` with `keep_in_memory=True` would explicitly pre-load the data into memory, instead of reading from disk via the memory map for every batch. The former way should be faster than the latter.\r\n2. When using DDP, before sending the `Dataset` object into `split_dataset_by_node` or incorporate it with `DistributedSampler`, every process still needs to pre-load the entire data into memory (when `keep_in_memory=True`) and then select the chunked indices from the loaded data. \r\n\r\nGenerally, the dilemma I'm facing is:\r\nSuppose we have a data around 120GB, and we want to use `DistributedLengthGroupedSampler` to optimize batching. When using DDP and `keep_in_memory=True`, every process loads 120GB which is not acceptable. For now, I turned off `keep_in_memory` and try to increase the number of workers for `DataLoader` to get better pipelining. \r\n\r\n**But is it possible to load 120GB once into 4 * A100 (which has around 4*120GB memory) and make each process read from this shared data from memory? Theoretically, maybe it should be faster?** ",
"Feel free to ask your questions on the [forum](https://discuss.huggingface.co/c/datasets/10) if you don't mind, this way the discussions may be useful to other people ;) "
] | "2022-12-16T17:43:47Z" | "2023-07-25T12:00:31Z" | "2023-01-16T13:33:32Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5369.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5369",
"merged_at": "2023-01-16T13:33:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5369.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5369"
} | To split your dataset across your training nodes, you can use the new [`datasets.distributed.split_dataset_by_node`]:
```python
import os
from datasets.distributed import split_dataset_by_node
ds = split_dataset_by_node(ds, rank=int(os.environ["RANK"]), world_size=int(os.environ["WORLD_SIZE"]))
```
This works for both map-style datasets and iterable datasets.
The dataset is split for the node at rank `rank` in a pool of nodes of size `world_size`.
For map-style datasets:
Each node is assigned a chunk of data, e.g. rank 0 is given the first chunk of the dataset.
For iterable datasets:
If the dataset has a number of shards that is a multiple of `world_size` (i.e. if `dataset.n_shards % world_size == 0`),
then the shards are evenly assigned across the nodes, which is the most optimized.
Otherwise, each node keeps 1 example out of `world_size`, skipping the other examples.
This can also be combined with a `torch.utils.data.DataLoader` if you want each node to use multiple workers to load the data.
This also supports shuffling. At each epoch, the iterable dataset shards are reshuffled across all the nodes - you just have to call `iterable_ds.set_epoch(epoch_number)`.
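A rough end-to-end usage sketch for the iterable case (the dataset name, buffer size, batch size and number of epochs are placeholders, and the training step is omitted):

```python
import os
from torch.utils.data import DataLoader
from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

ds = load_dataset("c4", "en", split="train", streaming=True)
ds = ds.shuffle(seed=42, buffer_size=1_000)
ds = split_dataset_by_node(ds, rank=int(os.environ["RANK"]), world_size=int(os.environ["WORLD_SIZE"]))

dataloader = DataLoader(ds, batch_size=32, num_workers=4)
for epoch in range(3):
    ds.set_epoch(epoch)  # reshuffles the shard assignment across nodes at each epoch
    for batch in dataloader:
        ...  # training step
```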
TODO:
- [x] docs for usage in PyTorch
- [x] unit tests
- [x] integration tests with torch.distributed.launch
Related to https://github.com/huggingface/transformers/issues/20770
Close https://github.com/huggingface/datasets/issues/5360 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5369/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5369/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2689 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2689/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2689/comments | https://api.github.com/repos/huggingface/datasets/issues/2689/events | https://github.com/huggingface/datasets/issues/2689 | 949,447,104 | MDU6SXNzdWU5NDk0NDcxMDQ= | 2,689 | cannot save the dataset to disk after rename_column | {
"avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4",
"events_url": "https://api.github.com/users/PaulLerner/events{/privacy}",
"followers_url": "https://api.github.com/users/PaulLerner/followers",
"following_url": "https://api.github.com/users/PaulLerner/following{/other_user}",
"gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PaulLerner",
"id": 25532159,
"login": "PaulLerner",
"node_id": "MDQ6VXNlcjI1NTMyMTU5",
"organizations_url": "https://api.github.com/users/PaulLerner/orgs",
"received_events_url": "https://api.github.com/users/PaulLerner/received_events",
"repos_url": "https://api.github.com/users/PaulLerner/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PaulLerner"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi ! That's because you are trying to overwrite a file that is already open and being used.\r\nIndeed `foo/dataset.arrow` is open and used by your `dataset` object.\r\n\r\nWhen you do `rename_column`, the resulting dataset reads the data from the same arrow file.\r\nIn other cases like when using `map` on the other hand, the resulting dataset reads the data from another arrow file that is the result of the map transform.\r\n\r\nTherefore overwriting a dataset after `rename_column` is not possible, but it is possible after `map`, since `rename_column` doesn't switch to using another arrow file (the actual data stay the same).",
"Ok, thanks for clearing it up :)",
"so what would be the right way to read a dataset, then change something and then overwrite it with the new version?"
] | "2021-07-21T08:13:40Z" | "2021-07-21T13:11:04Z" | "2021-07-21T13:11:04Z" | CONTRIBUTOR | null | null | null | ## Describe the bug
If you use `rename_column` and make no other modification, you will be unable to save the dataset using `save_to_disk`.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
In [1]: from datasets import Dataset, load_from_disk
In [5]: dataset=Dataset.from_dict({'foo': [0]})
In [7]: dataset.save_to_disk('foo')
In [8]: dataset=load_from_disk('foo')
In [10]: dataset=dataset.rename_column('foo', 'bar')
In [11]: dataset.save_to_disk('foo')
---------------------------------------------------------------------------
PermissionError Traceback (most recent call last)
<ipython-input-11-a3bc0d4fc339> in <module>
----> 1 dataset.save_to_disk('foo')
/mnt/beegfs/projects/meerqat/anaconda3/envs/meerqat/lib/python3.7/site-packages/datasets/arrow_dataset.py in save_to_disk(self, dataset_path
, fs)
597 if Path(dataset_path, config.DATASET_ARROW_FILENAME) in cache_files_paths:
598 raise PermissionError(
--> 599 f"Tried to overwrite {Path(dataset_path, config.DATASET_ARROW_FILENAME)} but a dataset can't overwrite itself."
600 )
601 if Path(dataset_path, config.DATASET_INDICES_FILENAME) in cache_files_paths:
PermissionError: Tried to overwrite foo/dataset.arrow but a dataset can't overwrite itself.
```
N. B. I created the dataset from dict to enable easy reproduction but the same happens if you load an existing dataset (e.g. starting from `In [8]`)
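Based on the maintainer's explanation in the comments above, a minimal workaround sketch (the directory names are arbitrary):

```python
from datasets import load_from_disk

dataset = load_from_disk("foo")
dataset = dataset.rename_column("foo", "bar")

# Write to a different directory: `rename_column` keeps reading from the original
# arrow file, so it cannot overwrite that file in place.
dataset.save_to_disk("foo_renamed")

# Per the comment above, transforms like `map` that write a new arrow file
# do not hit this error, since the data no longer comes from the original file.
```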
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.8.0
- Platform: Linux-3.10.0-1160.11.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core
- Python version: 3.7.10
- PyArrow version: 3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2689/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2689/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2441 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2441/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2441/comments | https://api.github.com/repos/huggingface/datasets/issues/2441/events | https://github.com/huggingface/datasets/issues/2441 | 908,554,713 | MDU6SXNzdWU5MDg1NTQ3MTM= | 2,441 | DuplicatedKeysError on personal dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/22605313?v=4",
"events_url": "https://api.github.com/users/lucaguarro/events{/privacy}",
"followers_url": "https://api.github.com/users/lucaguarro/followers",
"following_url": "https://api.github.com/users/lucaguarro/following{/other_user}",
"gists_url": "https://api.github.com/users/lucaguarro/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lucaguarro",
"id": 22605313,
"login": "lucaguarro",
"node_id": "MDQ6VXNlcjIyNjA1MzEz",
"organizations_url": "https://api.github.com/users/lucaguarro/orgs",
"received_events_url": "https://api.github.com/users/lucaguarro/received_events",
"repos_url": "https://api.github.com/users/lucaguarro/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lucaguarro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucaguarro/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lucaguarro"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi ! In your dataset script you must be yielding examples like\r\n```python\r\nfor line in file:\r\n ...\r\n yield key, {...}\r\n```\r\n\r\nSince `datasets` 1.7.0 we enforce the keys to be unique.\r\nHowever it looks like your examples generator creates duplicate keys: at least two examples have key 0.\r\n\r\nYou can fix that by making sure that your keys are unique.\r\n\r\nFor example if you use a counter to define the key of each example, make sure that your counter is not reset to 0 in during examples generation (between two open files for examples).\r\n\r\nLet me know if you have other questions :)",
"Yup, I indeed was generating duplicate keys. Fixed it and now it's working."
] | "2021-06-01T17:59:41Z" | "2021-06-04T23:50:03Z" | "2021-06-04T23:50:03Z" | NONE | null | null | null | ## Describe the bug
Ever since today, I have been getting a DuplicatedKeysError while trying to load my dataset from my own script.
Error returned when running this line: `dataset = load_dataset('/content/drive/MyDrive/Thesis/Datasets/book_preprocessing/goodreads_maharjan_trimmed_and_nered/goodreadsnered.py')`
Note that my script was working fine with earlier versions of the Datasets library. I cannot say with certainty whether I have been doing something wrong with my dataset script this whole time or whether this is simply a bug in the new version of datasets.
## Steps to reproduce the bug
I cannot provide code to reproduce the error as I am working with my own dataset. I can however provide my script if requested.
## Expected results
For my data to be loaded.
## Actual results
**DuplicatedKeysError** exception is raised
```
Downloading and preparing dataset good_reads_practice_dataset/main_domain (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/good_reads_practice_dataset/main_domain/1.1.0/64ff7c3fee2693afdddea75002eb6887d4fedc3d812ae3622128c8504ab21655...
---------------------------------------------------------------------------
DuplicatedKeysError Traceback (most recent call last)
<ipython-input-6-c342ea0dae9d> in <module>()
----> 1 dataset = load_dataset('/content/drive/MyDrive/Thesis/Datasets/book_preprocessing/goodreads_maharjan_trimmed_and_nered/goodreadsnered.py')
5 frames
/usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, **config_kwargs)
749 try_from_hf_gcs=try_from_hf_gcs,
750 base_path=base_path,
--> 751 use_auth_token=use_auth_token,
752 )
753
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
573 if not downloaded_from_gcs:
574 self._download_and_prepare(
--> 575 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
576 )
577 # Sync info
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
650 try:
651 # Prepare split will record examples associated to the split
--> 652 self._prepare_split(split_generator, **prepare_split_kwargs)
653 except OSError as e:
654 raise OSError(
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _prepare_split(self, split_generator)
990 writer.write(example, key)
991 finally:
--> 992 num_examples, num_bytes = writer.finalize()
993
994 split_generator.split_info.num_examples = num_examples
/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in finalize(self, close_stream)
407 # In case current_examples < writer_batch_size, but user uses finalize()
408 if self._check_duplicates:
--> 409 self.check_duplicate_keys()
410 # Re-intializing to empty list for next batch
411 self.hkey_record = []
/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in check_duplicate_keys(self)
347 for hash, key in self.hkey_record:
348 if hash in tmp_record:
--> 349 raise DuplicatedKeysError(key)
350 else:
351 tmp_record.add(hash)
DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 0
Keys should be unique and deterministic in nature
```
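A minimal sketch of the fix suggested in the comments above: keep the example key increasing across all files instead of resetting it (the field names and file handling here are assumptions about the reporter's script):

```python
def _generate_examples(self, filepaths):
    key = 0  # never reset between files, so every yielded key is unique
    for filepath in filepaths:
        with open(filepath, encoding="utf-8") as f:
            for line in f:
                yield key, {"text": line.strip()}
                key += 1
```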
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.7.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.9
- PyArrow version: 3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2441/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2441/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4799 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4799/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4799/comments | https://api.github.com/repos/huggingface/datasets/issues/4799/events | https://github.com/huggingface/datasets/issues/4799 | 1,330,889,854 | I_kwDODunzps5PU8R- | 4,799 | video dataset loader/parser | {
"avatar_url": "https://avatars.githubusercontent.com/u/26421036?v=4",
"events_url": "https://api.github.com/users/nollied/events{/privacy}",
"followers_url": "https://api.github.com/users/nollied/followers",
"following_url": "https://api.github.com/users/nollied/following{/other_user}",
"gists_url": "https://api.github.com/users/nollied/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nollied",
"id": 26421036,
"login": "nollied",
"node_id": "MDQ6VXNlcjI2NDIxMDM2",
"organizations_url": "https://api.github.com/users/nollied/orgs",
"received_events_url": "https://api.github.com/users/nollied/received_events",
"repos_url": "https://api.github.com/users/nollied/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nollied/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nollied/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nollied"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"Hi! We've just started discussing the video support in `datasets` (decoding backends, video feature type, etc.), so I believe we should have something tangible by the end of this year.\r\n\r\nAlso, if you have additional video features in mind that you would like to see, feel free to let us know",
"Coool thanks @mariosasko ",
"Hey @mariosasko, I was wondering if there's a way to load video data currently in the library? \r\nAlternatively is there a way I could hack it through the dataset.from_dict() method? I tried to hack it, but the issue I run into is that earlier I was doing a `cast_column()` call for the `Image` feature, but now I'm not sure about to do if I want the dataset to have the following keys when I call from_dict on it:\r\n`{\"caption\":[list of text captions], \"video_frames\": [list of image lists with one image list corresponding to one video]}`\r\n\r\nMaybe something like `cast_column(\"video_frames\", List(Image))` ..\r\n(This is assuming I have already extracted frames from video)"
] | "2022-08-07T01:54:12Z" | "2023-10-01T00:08:31Z" | "2022-08-09T16:42:51Z" | CONTRIBUTOR | null | null | null | you know how you can [use `load_dataset` with any arbitrary csv file](https://huggingface.co/docs/datasets/loading#csv)? and you can also [use it to load a local image dataset](https://huggingface.co/docs/datasets/image_load#local-files)?
Could you please add functionality to load a video dataset? It would be really cool if I could point it at a bunch of video files and use PyTorch to start looping through batches of videos. For example, if my batch size is 16, each sample in the batch is a frame from a video. I'm competing in the [MineRL challenge](https://www.aicrowd.com/challenges/neurips-2022-minerl-basalt-competition), and it would be awesome to use the HF ecosystem. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4799/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4799/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2434 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2434/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2434/comments | https://api.github.com/repos/huggingface/datasets/issues/2434/events | https://github.com/huggingface/datasets/issues/2434 | 907,503,557 | MDU6SXNzdWU5MDc1MDM1NTc= | 2,434 | Extend QuestionAnsweringExtractive template to handle nested columns | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"this is also the case for the following datasets and configurations:\r\n\r\n* `mlqa` with config `mlqa-translate-train.ar`\r\n\r\n",
"The current task API is somewhat deprecated (we plan to align it with `train eval index` at some point), so I think we can close this issue."
] | "2021-05-31T14:06:51Z" | "2022-10-05T17:06:28Z" | "2022-10-05T17:06:28Z" | MEMBER | null | null | null | Currently the `QuestionAnsweringExtractive` task template and `preprare_for_task` only support "flat" features. We should extend the functionality to cover QA datasets like:
* `iapp_wiki_qa_squad`
* `parsinlu_reading_comprehension`
where the nested features differ from those in `squad` and trigger an `ArrowNotImplementedError`:
```
---------------------------------------------------------------------------
ArrowNotImplementedError Traceback (most recent call last)
<ipython-input-12-50e5b8f69c20> in <module>
----> 1 ds.prepare_for_task("question-answering-extractive")[0]
~/git/datasets/src/datasets/arrow_dataset.py in prepare_for_task(self, task)
1436 # We found a template so now flush `DatasetInfo` to skip the template update in `DatasetInfo.__post_init__`
1437 dataset.info.task_templates = None
-> 1438 dataset = dataset.cast(features=template.features)
1439 return dataset
1440
~/git/datasets/src/datasets/arrow_dataset.py in cast(self, features, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, num_proc)
977 format = self.format
978 dataset = self.with_format("arrow")
--> 979 dataset = dataset.map(
980 lambda t: t.cast(schema),
981 batched=True,
~/git/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
1600
1601 if num_proc is None or num_proc == 1:
-> 1602 return self._map_single(
1603 function=function,
1604 with_indices=with_indices,
~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
176 }
177 # apply actual function
--> 178 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
179 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
180 # re-apply format to the output
~/git/datasets/src/datasets/fingerprint.py in wrapper(*args, **kwargs)
395 # Call actual function
396
--> 397 out = func(self, *args, **kwargs)
398
399 # Update fingerprint of in-place transforms + update in-place history of transforms
~/git/datasets/src/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, desc)
1940 ) # Something simpler?
1941 try:
-> 1942 batch = apply_function_on_filtered_inputs(
1943 batch,
1944 indices,
~/git/datasets/src/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)
1836 effective_indices = [i + offset for i in indices] if isinstance(indices, list) else indices + offset
1837 processed_inputs = (
-> 1838 function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
1839 )
1840 if update_data is None:
~/git/datasets/src/datasets/arrow_dataset.py in <lambda>(t)
978 dataset = self.with_format("arrow")
979 dataset = dataset.map(
--> 980 lambda t: t.cast(schema),
981 batched=True,
982 batch_size=batch_size,
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.cast()
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.ChunkedArray.cast()
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/compute.py in cast(arr, target_type, safe)
241 else:
242 options = CastOptions.unsafe(target_type)
--> 243 return call_function("cast", [arr], options)
244
245
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/_compute.pyx in pyarrow._compute.call_function()
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/_compute.pyx in pyarrow._compute.Function.call()
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowNotImplementedError: Unsupported cast from struct<answer_end: list<item: int32>, answer_start: list<item: int32>, text: list<item: string>> to struct using function cast_struct
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2434/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2434/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2135 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2135/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2135/comments | https://api.github.com/repos/huggingface/datasets/issues/2135/events | https://github.com/huggingface/datasets/issues/2135 | 843,246,344 | MDU6SXNzdWU4NDMyNDYzNDQ= | 2,135 | en language data from MLQA dataset is missing | {
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rabeehk",
"id": 6278280,
"login": "rabeehk",
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rabeehk"
} | [] | closed | false | null | [] | null | [
"Hi ! Indeed only the languages of the `translate-train` data are included...\r\nI can't find a link to download the english train set on https://github.com/facebookresearch/MLQA though, do you know where we can download it ?",
"Hi @lhoestq \r\nthank you very much for coming back to me, now I see, you are right, in the link you sent I see split of {split}-context-{context_language}-question-{question_language}.json with context_language=question_language=en, TFDS most probably has extracted english ones from these files as en language files, but translate-train/test do not have en indeed. thanks a lot for the great explanations",
"I close the ticket, since I do not see any en existing, they have trained on \"SQuAD V1.1\" instead. Thanks. "
] | "2021-03-29T10:47:50Z" | "2021-03-30T10:20:23Z" | "2021-03-30T10:20:23Z" | CONTRIBUTOR | null | null | null | Hi
I need the mlqa-translate-train.en dataset, but it is missing from the MLQA dataset. Could you have a look please? @lhoestq, thank you for your help to fix this issue. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2135/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2135/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3180 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3180/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3180/comments | https://api.github.com/repos/huggingface/datasets/issues/3180/events | https://github.com/huggingface/datasets/pull/3180 | 1,039,641,316 | PR_kwDODunzps4t2qQn | 3,180 | fix label mapping | {
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/VictorSanh",
"id": 16107619,
"login": "VictorSanh",
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/VictorSanh"
} | [] | closed | false | null | [] | null | [
"heck, test failings. moving to draft. will come back to this later today hopefully",
"Thanks for fixing this :)\r\nI just updated the dataset_infos.json and added the missing `pretty_name` tag to the dataset card",
"thank you @lhoestq! running around as always it felt through as a lower priority..."
] | "2021-10-29T14:42:24Z" | "2021-11-02T13:41:07Z" | "2021-11-02T10:37:12Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3180.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3180",
"merged_at": "2021-11-02T10:37:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3180.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3180"
} | Fixing label mapping for hlgd.
0 corresponds to the same event and 1 corresponds to a different event.
<img width="642" alt="Capture d’écran 2021-10-29 à 10 39 58 AM" src="https://user-images.githubusercontent.com/16107619/139454810-1f225e3d-ad48-44a8-b8b1-9205c9533839.png">
<img width="638" alt="Capture d’écran 2021-10-29 à 10 40 09 AM" src="https://user-images.githubusercontent.com/16107619/139454813-93066a3c-7d33-4f56-b133-2f1a7661e438.png">
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3180/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3180/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4774 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4774/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4774/comments | https://api.github.com/repos/huggingface/datasets/issues/4774/events | https://github.com/huggingface/datasets/issues/4774 | 1,323,375,844 | I_kwDODunzps5O4Rzk | 4,774 | Training hangs at the end of epoch, with set_transform/with_transform+multiple workers | {
"avatar_url": "https://avatars.githubusercontent.com/u/4197249?v=4",
"events_url": "https://api.github.com/users/memray/events{/privacy}",
"followers_url": "https://api.github.com/users/memray/followers",
"following_url": "https://api.github.com/users/memray/following{/other_user}",
"gists_url": "https://api.github.com/users/memray/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/memray",
"id": 4197249,
"login": "memray",
"node_id": "MDQ6VXNlcjQxOTcyNDk=",
"organizations_url": "https://api.github.com/users/memray/orgs",
"received_events_url": "https://api.github.com/users/memray/received_events",
"repos_url": "https://api.github.com/users/memray/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/memray/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/memray/subscriptions",
"type": "User",
"url": "https://api.github.com/users/memray"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [] | "2022-07-31T06:32:28Z" | "2022-07-31T06:36:43Z" | null | NONE | null | null | null | ## Describe the bug
I use load_dataset() (I tried with [wiki](https://huggingface.co/datasets/wikipedia) and my own json data) and use set_transform/with_transform for preprocessing. But it hangs at the end of the 1st epoch if dataloader_num_workers>=1. No problem with single worker.
## Steps to reproduce the bug
```python
import datasets
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

# `model_args`, `args`, `psg_parse_fn` and `data_collator` come from the reporter's training script
train_dataset = datasets.load_dataset(
    "wikipedia",
    "20220301.en",
    split="train",
    cache_dir=model_args.cache_dir,
    streaming=False,
)
train_dataset.set_transform(psg_parse_fn)
train_dataloader = DataLoader(
    train_dataset,
    batch_size=args.train_batch_size,
    sampler=DistributedSampler(train_dataset),
    collate_fn=data_collator,
    drop_last=args.dataloader_drop_last,
    num_workers=args.dataloader_num_workers,
)
```
## Expected results
## Actual results
It simply hangs. The ending step is num_example/batch_size (one epoch).
## Environment info
- `datasets` version: 2.4.1.dev0
- Platform: Linux-5.4.170+-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyArrow version: 8.0.0
- Pandas version: 1.4.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4774/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4774/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3801 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3801/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3801/comments | https://api.github.com/repos/huggingface/datasets/issues/3801/events | https://github.com/huggingface/datasets/pull/3801 | 1,155,649,279 | PR_kwDODunzps4zvqjN | 3,801 | [Breaking] Align `map` when streaming: update instead of overwrite + add missing parameters | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"Right ! Will add it in another PR :)"
] | "2022-03-01T18:06:43Z" | "2022-03-07T16:30:30Z" | "2022-03-07T16:30:29Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3801.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3801",
"merged_at": "2022-03-07T16:30:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3801.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3801"
} | Currently, datasets in streaming mode and in non-streaming mode have two distinct APIs for `map` processing.
In this PR I'm aligning the two by changing `map` in streaming mode. This includes a **major breaking change** and will require a major release of the library: **Datasets 2.0**.
In particular, `Dataset.map` adds new columns (with dict.update), BUT `IterableDataset.map` used to discard previous columns (it overwrote the dict). In this PR I'm changing `IterableDataset.map` to behave the same way as `Dataset.map`: it will update the examples instead of overwriting them.
I'm also adding the missing parameters to streaming `map`: `with_indices`, `input_columns`, `remove_columns` (see the sketch below).
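For illustration, here is a minimal sketch of the aligned behavior (not taken from this PR's diff; the data file and column names are illustrative assumptions):

```python
from datasets import load_dataset

# Assumed streaming dataset with columns "id" and "text".
ds = load_dataset("json", data_files="data.json", split="train", streaming=True)

def add_length(example, idx):
    # The returned dict is merged into the example (dict.update semantics),
    # so the existing columns are kept unless listed in remove_columns.
    return {"length": len(example["text"]), "row_idx": idx}

ds = ds.map(add_length, with_indices=True, remove_columns=["id"])
```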
### TODO
- [x] tests
- [x] docs
Related to https://github.com/huggingface/datasets/issues/3444 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3801/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3801/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5819 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5819/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5819/comments | https://api.github.com/repos/huggingface/datasets/issues/5819/events | https://github.com/huggingface/datasets/issues/5819 | 1,695,536,738 | I_kwDODunzps5lD9Zi | 5,819 | Cannot pickle error in Dataset.from_generator() | {
"avatar_url": "https://avatars.githubusercontent.com/u/50691954?v=4",
"events_url": "https://api.github.com/users/xinghaow99/events{/privacy}",
"followers_url": "https://api.github.com/users/xinghaow99/followers",
"following_url": "https://api.github.com/users/xinghaow99/following{/other_user}",
"gists_url": "https://api.github.com/users/xinghaow99/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xinghaow99",
"id": 50691954,
"login": "xinghaow99",
"node_id": "MDQ6VXNlcjUwNjkxOTU0",
"organizations_url": "https://api.github.com/users/xinghaow99/orgs",
"received_events_url": "https://api.github.com/users/xinghaow99/received_events",
"repos_url": "https://api.github.com/users/xinghaow99/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xinghaow99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xinghaow99/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xinghaow99"
} | [] | closed | false | null | [] | null | [
"Hi! It should work if you put `model = torch.compile(model)` inside the `generate_data` function. If a referenced object is outside, it needs to be pickable, and that's not the case for the compiled models (or functions). ",
"> Hi! It should work if you put `model = torch.compile(model)` inside the `generate_data` function. If a referenced object is outside, it needs to be pickable, and that's not the case for the compiled models (or functions).\r\n\r\nHi! Thank you for your reply! Everything works perfectly with your suggestion!\r\n\r\nClosing the issue.\r\n"
] | "2023-05-04T08:39:09Z" | "2023-05-05T19:20:59Z" | "2023-05-05T19:20:58Z" | NONE | null | null | null | ### Describe the bug
I'm trying to use Dataset.from_generator() to generate a large dataset.
### Steps to reproduce the bug
Code to reproduce:
```
from transformers import T5Tokenizer, T5ForConditionalGeneration, GenerationConfig
import torch
from tqdm import tqdm
from datasets import load_dataset
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-small")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-small", device_map="auto")
model = torch.compile(model)
def generate_data(data_loader):
    model.eval()
    for batch in tqdm(data_loader):
        input_ids = tokenizer(batch['instruction'], return_tensors='pt', padding=True, truncation=True).input_ids.to("cuda:0")
        with torch.no_grad():
            outputs = model.generate(input_ids, generation_config=generation_config)
        decoder_hidden_states = outputs.decoder_hidden_states
        for i, h in zip(batch['instruction'], decoder_hidden_states):
            yield {"instruction": i, "decoder_hidden_states": h}

generation_config = GenerationConfig(
    temperature=1,
    max_new_tokens=1024,
    do_sample=False,
    num_return_sequences=1,
    return_dict_in_generate=True,
    output_scores=True,
    output_hidden_states=True,
)
from datasets import Dataset, load_dataset
from torch.utils.data import DataLoader
dataset = load_dataset("HuggingFaceH4/databricks_dolly_15k")
train_loader = DataLoader(dataset['train'], batch_size=2, shuffle=True)
dataset = Dataset.from_generator(generator=generate_data, gen_kwargs={"data_loader": train_loader})
dataset.save_to_disk("data/flant5_small_generation")
```
### Expected behavior
The dataset should be generated and saved.
But the following error occurred:
```
Traceback (most recent call last):
File "/remote-home/xhwang/alpaca-lora/data_collection_t5.py", line 46, in <module>
dataset = Dataset.from_generator(generator=generate_data, gen_kwargs={"data_loader": train_loader})
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1035, in from_generator
return GeneratorDatasetInputStream(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/io/generator.py", line 28, in __init__
self.builder = Generator(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/builder.py", line 336, in __init__
self.config, self.config_id = self._create_builder_config(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/builder.py", line 505, in _create_builder_config
config_id = builder_config.create_config_id(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/builder.py", line 179, in create_config_id
suffix = Hasher.hash(config_kwargs_to_add_to_suffix)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/fingerprint.py", line 236, in hash
return cls.hash_default(value)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/fingerprint.py", line 229, in hash_default
return cls.hash_bytes(dumps(value))
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 726, in dumps
dump(obj, file)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 701, in dump
Pickler(file, recurse=True).dump(obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 394, in dump
StockPickler.dump(self, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 487, in dump
self.save(obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1311, in save_function
dill._dill._save_with_postproc(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1084, in _save_with_postproc
pickler._batch_setitems(iter(source.items()))
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 603, in save
self.save_reduce(obj=obj, *rv)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 717, in save_reduce
save(state)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 603, in save
self.save_reduce(obj=obj, *rv)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 717, in save_reduce
save(state)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1311, in save_function
dill._dill._save_with_postproc(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1070, in _save_with_postproc
pickler.save_reduce(*reduction, obj=obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 717, in save_reduce
save(state)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 887, in save_tuple
save(element)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1311, in save_function
dill._dill._save_with_postproc(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1070, in _save_with_postproc
pickler.save_reduce(*reduction, obj=obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 717, in save_reduce
save(state)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 887, in save_tuple
save(element)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 1003, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1311, in save_function
dill._dill._save_with_postproc(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1084, in _save_with_postproc
pickler._batch_setitems(iter(source.items()))
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 578, in save
rv = reduce(self.proto)
TypeError: cannot pickle 'ConfigModuleInstance' object
```
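Based on the suggestion in the comments above, a minimal adjustment (sketch only, reusing the objects defined in the reproduction script) is to drop the top-level `model = torch.compile(model)` line and compile inside the generator instead, so that no compiled object has to be pickled when the generator function is hashed:

```python
def generate_data(data_loader):
    # Compiling here keeps the generator's globals limited to the plain
    # (picklable) model, tokenizer and generation_config.
    compiled_model = torch.compile(model)
    compiled_model.eval()
    for batch in tqdm(data_loader):
        input_ids = tokenizer(batch["instruction"], return_tensors="pt",
                              padding=True, truncation=True).input_ids.to("cuda:0")
        with torch.no_grad():
            outputs = compiled_model.generate(input_ids, generation_config=generation_config)
        for instruction, hidden in zip(batch["instruction"], outputs.decoder_hidden_states):
            yield {"instruction": instruction, "decoder_hidden_states": hidden}
```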
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-4.15.0-156-generic-x86_64-with-glibc2.31
- Python version: 3.10.10
- Huggingface_hub version: 0.13.2
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5819/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5819/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2182 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2182/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2182/comments | https://api.github.com/repos/huggingface/datasets/issues/2182/events | https://github.com/huggingface/datasets/pull/2182 | 852,384,872 | MDExOlB1bGxSZXF1ZXN0NjEwNjQ2MDIy | 2,182 | Set default in-memory value depending on the dataset size | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | {
"closed_at": "2021-04-20T16:50:46Z",
"closed_issues": 4,
"created_at": "2021-04-09T13:07:51Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-04-16T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/1",
"id": 6644198,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/1/labels",
"node_id": "MDk6TWlsZXN0b25lNjY0NDE5OA==",
"number": 1,
"open_issues": 0,
"state": "closed",
"title": "1.6",
"updated_at": "2021-04-20T16:50:46Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/1"
} | [
"I ping @krandiash to keep him up to date.",
"TODO:\r\n- [x] Add a section in the docs about this.\r\n- ~Add a warning if someone tries to specify `cache_file_name=` in `map`, `filter` etc. on a dataset that is in memory, since the computation is not going to be cached in this case.~",
"@lhoestq I have a question, regarding:\r\n> Also maybe we should add a warning if someone tries to specify cache_file_name= in map, filter etc. on a dataset that is in memory, since the computation is not going to be cached in this case.\r\n\r\n- It might be the case that the user has an in-memory dataset and might want to use `map` and cache it, by passing `cache_file_name=`\r\n- This is indeed allowed by the library and works as expected: the dataset is cached.\r\n\r\nWhy adding a warning?",
"Yes right, I meant if `load_from_cache_file` is set to True and `cache_file_name ` is None. my bad :p"
] | "2021-04-07T13:00:18Z" | "2021-04-20T14:20:12Z" | "2021-04-20T10:04:04Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2182.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2182",
"merged_at": "2021-04-20T10:04:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2182.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2182"
} | Set a default value for `in_memory` depending on the size of the dataset to be loaded.
Close #2179.
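A rough sketch of the intended default logic (the helper name and the config attribute below are illustrative assumptions, not the exact identifiers introduced by this PR):

```python
from typing import Optional

import datasets.config

def resolve_in_memory(in_memory: Optional[bool], dataset_size_in_bytes: int) -> bool:
    """Hypothetical helper: decide whether to copy the dataset into memory."""
    if in_memory is not None:
        # An explicit user choice always wins.
        return in_memory
    # Assumed config attribute; the real option name and threshold may differ.
    max_size = getattr(datasets.config, "IN_MEMORY_MAX_SIZE", 0)
    return 0 < dataset_size_in_bytes <= max_size
```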
TODO:
- [x] Add a section in the docs about this.
- ~Add a warning if someone tries to specify `cache_file_name=` in `map`, `filter` etc. on a dataset that is in memory, since the computation is not going to be cached in this case.~ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2182/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2182/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2430 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2430/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2430/comments | https://api.github.com/repos/huggingface/datasets/issues/2430/events | https://github.com/huggingface/datasets/pull/2430 | 907,322,595 | MDExOlB1bGxSZXF1ZXN0NjU4MTg3Njkw | 2,430 | Add version-specific BibTeX | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"Maybe we should only keep one citation ?\r\ncc @thomwolf @yjernite ",
"For info:\r\n- The one automatically generated by Zenodo is version-specific, and a new one will be generated after each release.\r\n- Zenodo has also generated a project-specific DOI (they call it *Concept DOI* as opposed to *Version DOI*), but currently this only redirects to the DOI page of the latest version.\r\n- All the information automatically generated by Zenodo can be corrected/customized if necessary.\r\n - If we decide to correct/update metadata, take into account that there are the following fields (among others): Authors, Contributors, Title, Description, Keywords, Additional Notes, License,...\r\n\r\nAccording to Zenodo: https://help.zenodo.org/#versioning\r\n> **Which DOI should I use in citations?**\r\n> \r\n> You should normally always use the DOI for the specific version of your record in citations. This is to ensure that other researchers can access the exact research artefact you used for reproducibility. By default, Zenodo uses the specific version to generate citations.\r\n> \r\n> You can use the Concept DOI representing all versions in citations when it is desirable to cite an evolving research artifact, without being specific about the version.",
"Thanks for the details ! As zenodo says we should probably just show the versioned DOI. And we can remove the old citation.",
"I have removed the old citation.\r\n\r\nWhat about the new one? Should we customize it? I have fixed some author names (replaced nickname with first and family names). Note that the list of authors is created automatically by Zenodo from this list: https://github.com/huggingface/datasets/graphs/contributors\r\nI do not know if this default automatic list of authors is what we want to show in the citation..."
] | "2021-05-31T10:05:42Z" | "2021-06-08T07:53:22Z" | "2021-06-08T07:53:22Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2430.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2430",
"merged_at": "2021-06-08T07:53:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2430.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2430"
} | As pointed out by @lhoestq in #2411, after the creation of the Zenodo DOI for Datasets, a new BibTeX entry is created with each release.
This PR adds a version-specific BibTeX entry, in addition to the existing one, which is generic for the project.
See version-specific BibTeX entry here: https://zenodo.org/record/4817769/export/hx#.YLSyd6j7RPY | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2430/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2430/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3802 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3802/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3802/comments | https://api.github.com/repos/huggingface/datasets/issues/3802/events | https://github.com/huggingface/datasets/pull/3802 | 1,157,009,964 | PR_kwDODunzps4z0biM | 3,802 | Release of FairLex dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4",
"events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}",
"followers_url": "https://api.github.com/users/iliaschalkidis/followers",
"following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}",
"gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/iliaschalkidis",
"id": 1626984,
"login": "iliaschalkidis",
"node_id": "MDQ6VXNlcjE2MjY5ODQ=",
"organizations_url": "https://api.github.com/users/iliaschalkidis/orgs",
"received_events_url": "https://api.github.com/users/iliaschalkidis/received_events",
"repos_url": "https://api.github.com/users/iliaschalkidis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/iliaschalkidis"
} | [] | closed | false | null | [] | null | [
"This is awesome ! The dataset card and the dataset script look amazing :)\r\n\r\nI wanted to ask you if you'd be interested to have this dataset under the namespace of you research group at https://huggingface.co/coastalcph ? If yes, then you can actually create a dataset repository under your research group name and upload the files from this PR there",
"Hi @lhoestq,\r\n\r\nYeah, I could do that. I see that people do that a lot of models, but not for datasets. \r\n\r\nIs there any good reason to have it under the organization domain instead of the general domain?\r\n\r\n Thanks!",
"It's nice to have it under your namespace:\r\n- it will appear on your research group page, along with your models\r\n- you can edit or create datasets at any time - you don't need to open PRs on GitHub\r\n\r\nAll the datasets that are not under a namespace are this way because we started adding datasets from GitHub. Now we encourage users to upload them directly to make things simpler, and aligned with the workflow for models\r\n\r\n(the documentation will be updated in the following days)\r\n\r\nNote that we will keep accepting PRs here though when there is no clear namespace under which a dataset should be, or for users that want a review from us",
"Ok, I'll do that. So, I'll just have to upload all the files under the `/fairlex` directory in my PR, right?",
"Yes exactly !",
"Ok, I uploaded most of them from the UI environment (https://huggingface.co/datasets/coastalcph/fairlex). Can I possibly upload the dummy data as well from the UI environment. I really want to avoid the CLI right now 😄 ",
"Yea sure, feel free to use the UI of the website, even for the dummy data ^^",
"Did you upload them yourself? Because I see the data preview, and I'm pretty sure, I didn't do that 😄 ",
"The preview is computed from the real data ;)\r\n\r\nThe dummy data are used for testing only",
"Haha, ok I was shocked! Cool, I close this PR, then. Thanks, again! ",
"Thank you 🤗"
] | "2022-03-02T10:40:18Z" | "2022-03-02T15:21:10Z" | "2022-03-02T15:18:54Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3802.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3802",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3802.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3802"
} |
**FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing**
We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Swiss, and Chinese), five languages (English, German, French, Italian, and Chinese), and fairness across five attributes (gender, age, nationality/region, language, and legal area). In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantee fairness, nor consistently mitigate group disparities. Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP.
*Ilias Chalkidis, Tommaso Pasini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, Anders Søgaard. FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing. 2022. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland.*
Note: Please review this initial commit, and I'll update the publication link once I have the arXived version. Thanks!
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3802/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3802/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2344 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2344/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2344/comments | https://api.github.com/repos/huggingface/datasets/issues/2344/events | https://github.com/huggingface/datasets/issues/2344 | 885,331,505 | MDU6SXNzdWU4ODUzMzE1MDU= | 2,344 | Is there a way to join multiple datasets in one? | {
"avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4",
"events_url": "https://api.github.com/users/avacaondata/events{/privacy}",
"followers_url": "https://api.github.com/users/avacaondata/followers",
"following_url": "https://api.github.com/users/avacaondata/following{/other_user}",
"gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/avacaondata",
"id": 35173563,
"login": "avacaondata",
"node_id": "MDQ6VXNlcjM1MTczNTYz",
"organizations_url": "https://api.github.com/users/avacaondata/orgs",
"received_events_url": "https://api.github.com/users/avacaondata/received_events",
"repos_url": "https://api.github.com/users/avacaondata/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions",
"type": "User",
"url": "https://api.github.com/users/avacaondata"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Hi ! We don't have `join`/`merge` on a certain column as in pandas.\r\nMaybe you can just use the [concatenate_datasets](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate#datasets.concatenate_datasets) function.\r\n",
"Hi! You can use `datasets_sql` for that now. As of recently, PyArrow also supports querying tables via Substrait, so I think we can start adding these methods to the API soon."
] | "2021-05-10T23:16:10Z" | "2022-10-05T17:27:05Z" | null | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
I need to join two datasets: one that is on the Hub and another I've created from my own files. Is there an easy way to join these two?
**Describe the solution you'd like**
I'd like to join them with a merge or join method, just like pandas DataFrames (see the sketch below).
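For the case where both datasets share the same columns, a minimal sketch of the closest operation available today, `concatenate_datasets` (mentioned in the comments above), might look like this (the Hub dataset name and local file are illustrative assumptions):

```python
from datasets import load_dataset, concatenate_datasets

hub_ds = load_dataset("imdb", split="train")                            # dataset from the Hub
own_ds = load_dataset("csv", data_files="my_texts.csv", split="train")  # dataset built from my files

# Row-wise concatenation requires identical features (column names and types),
# so cast the local dataset to the Hub dataset's schema first.
own_ds = own_ds.cast(hub_ds.features)
combined = concatenate_datasets([hub_ds, own_ds])
```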
**Additional context**
If you want to extend an existing dataset with more data, for example for training a language model, you need that functionality. I've not found it in the documentation. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2344/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2344/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2625 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2625/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2625/comments | https://api.github.com/repos/huggingface/datasets/issues/2625/events | https://github.com/huggingface/datasets/issues/2625 | 941,439,922 | MDU6SXNzdWU5NDE0Mzk5MjI= | 2,625 | ⚛️😇⚙️🔑 | {
"avatar_url": "https://avatars.githubusercontent.com/u/50596661?v=4",
"events_url": "https://api.github.com/users/hustlen0mics/events{/privacy}",
"followers_url": "https://api.github.com/users/hustlen0mics/followers",
"following_url": "https://api.github.com/users/hustlen0mics/following{/other_user}",
"gists_url": "https://api.github.com/users/hustlen0mics/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hustlen0mics",
"id": 50596661,
"login": "hustlen0mics",
"node_id": "MDQ6VXNlcjUwNTk2NjYx",
"organizations_url": "https://api.github.com/users/hustlen0mics/orgs",
"received_events_url": "https://api.github.com/users/hustlen0mics/received_events",
"repos_url": "https://api.github.com/users/hustlen0mics/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hustlen0mics/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hustlen0mics/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hustlen0mics"
} | [] | closed | false | null | [] | null | [] | "2021-07-11T12:14:34Z" | "2021-07-12T05:55:59Z" | "2021-07-12T05:55:59Z" | NONE | null | null | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2625/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2625/timeline | null | completed | false |
|
https://api.github.com/repos/huggingface/datasets/issues/3698 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3698/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3698/comments | https://api.github.com/repos/huggingface/datasets/issues/3698/events | https://github.com/huggingface/datasets/pull/3698 | 1,129,864,282 | PR_kwDODunzps4yXtyQ | 3,698 | Add finetune-data CodeFill | {
"avatar_url": "https://avatars.githubusercontent.com/u/49989029?v=4",
"events_url": "https://api.github.com/users/rgismondi/events{/privacy}",
"followers_url": "https://api.github.com/users/rgismondi/followers",
"following_url": "https://api.github.com/users/rgismondi/following{/other_user}",
"gists_url": "https://api.github.com/users/rgismondi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rgismondi",
"id": 49989029,
"login": "rgismondi",
"node_id": "MDQ6VXNlcjQ5OTg5MDI5",
"organizations_url": "https://api.github.com/users/rgismondi/orgs",
"received_events_url": "https://api.github.com/users/rgismondi/received_events",
"repos_url": "https://api.github.com/users/rgismondi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rgismondi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rgismondi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rgismondi"
} | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | [] | null | [
"Thanks for your contribution, @rgismondi. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help."
] | "2022-02-10T11:12:51Z" | "2022-10-03T09:36:18Z" | "2022-10-03T09:36:18Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3698.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3698",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3698.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3698"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3698/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3698/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3421 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3421/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3421/comments | https://api.github.com/repos/huggingface/datasets/issues/3421/events | https://github.com/huggingface/datasets/pull/3421 | 1,077,966,571 | PR_kwDODunzps4vuvJK | 3,421 | Adding mMARCO dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/17603035?v=4",
"events_url": "https://api.github.com/users/lhbonifacio/events{/privacy}",
"followers_url": "https://api.github.com/users/lhbonifacio/followers",
"following_url": "https://api.github.com/users/lhbonifacio/following{/other_user}",
"gists_url": "https://api.github.com/users/lhbonifacio/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhbonifacio",
"id": 17603035,
"login": "lhbonifacio",
"node_id": "MDQ6VXNlcjE3NjAzMDM1",
"organizations_url": "https://api.github.com/users/lhbonifacio/orgs",
"received_events_url": "https://api.github.com/users/lhbonifacio/received_events",
"repos_url": "https://api.github.com/users/lhbonifacio/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhbonifacio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhbonifacio/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhbonifacio"
} | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | [] | null | [
"Hi @albertvillanova we've made a major overhaul of the loading script including all configurations we're making available. Could you please review it again?",
"@albertvillanova :ping_pong: ",
"Thanks @lhbonifacio for adding this dataset.\r\nHi there, i got an error about mmarco:\r\nConnectionError: Couldn't reach 'unicamp-dl/mmarco' on the Hub (ConnectionError)\r\ncode:\r\n`from datasets import list_datasets, load_dataset\r\ndataset = load_dataset('unicamp-dl/mmarco', language='portuguese')`\r\n\r\nAny help will be appreciated!",
"Hi @catqaq, we updated the loading script. Now you can load the datasets with:\r\n\r\n```python\r\ndataset = load_dataset('unicamp-dl/mmarco', 'portuguese')\r\n```\r\n\r\nYou can check the list of supported languages and usage examples in [this link](https://huggingface.co/datasets/unicamp-dl/mmarco). Feel free to contact us if you have any issues.",
"\r\n\r\n\r\n> \r\n\r\n\r\n\r\n> Hi @catqaq, we updated the loading script. Now you can load the datasets with:\r\n> \r\n> ```python\r\n> dataset = load_dataset('unicamp-dl/mmarco', 'portuguese')\r\n> ```\r\n> \r\n> You can check the list of supported languages and usage examples in [this link](https://huggingface.co/datasets/unicamp-dl/mmarco). Feel free to contact us if you have any issues.\r\n\r\nThanks for your quick updates. So, how can i get the fixed version, install from the source? It seems that the merging is blocked.",
"@catqaq you can load mMARCO using the namespace `unicamp-dl/mmarco` while this PR remains under review.",
"Thanks for your contribution, @lhbonifacio and @hugoabonizio. And sorry for the late response.\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nAs you already created this dataset under your organization namespace (https://huggingface.co/datasets/unicamp-dl/mmarco), I think we can safely close this PR.\r\n\r\nWe would suggest you complete your dataset card with the YAML tags, to make it searchable and discoverable.\r\n\r\nPlease, feel free to tell us if you need some help."
] | "2021-12-13T00:56:43Z" | "2022-10-03T09:37:15Z" | "2022-10-03T09:37:15Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3421.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3421",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3421.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3421"
} | Adding mMARCO (v1.1) to HF datasets. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3421/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3421/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2275 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2275/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2275/comments | https://api.github.com/repos/huggingface/datasets/issues/2275/events | https://github.com/huggingface/datasets/issues/2275 | 869,378,311 | MDU6SXNzdWU4NjkzNzgzMTE= | 2,275 | SNLI dataset has labels of -1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/17426779?v=4",
"events_url": "https://api.github.com/users/puzzler10/events{/privacy}",
"followers_url": "https://api.github.com/users/puzzler10/followers",
"following_url": "https://api.github.com/users/puzzler10/following{/other_user}",
"gists_url": "https://api.github.com/users/puzzler10/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/puzzler10",
"id": 17426779,
"login": "puzzler10",
"node_id": "MDQ6VXNlcjE3NDI2Nzc5",
"organizations_url": "https://api.github.com/users/puzzler10/orgs",
"received_events_url": "https://api.github.com/users/puzzler10/received_events",
"repos_url": "https://api.github.com/users/puzzler10/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/puzzler10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/puzzler10/subscriptions",
"type": "User",
"url": "https://api.github.com/users/puzzler10"
} | [] | closed | false | null | [] | null | [
"Hi @puzzler10, \r\nThose examples where `gold_label` field was empty, -1 label was alloted to it. In order to remove it you can filter the samples from train/val/test splits. Here's how you can drop those rows from the dataset:\r\n`dataset = load_dataset(\"snli\")`\r\n`dataset_test_filter = dataset['test'].filter(lambda example: example['label'] != -1)`\r\n\r\nI agree it should have been mentioned in the documentation. I'll raise a PR regarding the same. Thanks for pointing out!"
] | "2021-04-28T00:32:25Z" | "2021-05-17T13:34:18Z" | "2021-05-17T13:34:18Z" | NONE | null | null | null | There are a number of rows with a label of -1 in the SNLI dataset. The dataset descriptions [here](https://nlp.stanford.edu/projects/snli/) and [here](https://github.com/huggingface/datasets/tree/master/datasets/snli) don't list -1 as a label possibility, and neither does the dataset viewer. As examples, see index 107 or 124 of the test set.
It isn't clear what these labels mean. I found a [line of code](https://github.com/huggingface/datasets/blob/80e59ef178d3bb2090d091bc32315c655eb0633d/datasets/snli/snli.py#L94) that seems to put them in, but it is still unclear why they are there. The current workaround is to just drop these rows before training any model (see the sketch below).
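A minimal sketch of that workaround (dropping the -1 rows across all splits before training) could be:

```python
from datasets import load_dataset

snli = load_dataset("snli")
# -1 marks examples whose gold_label field was empty (no annotator consensus),
# so keep only examples with a valid label (0, 1 or 2).
snli = snli.filter(lambda example: example["label"] != -1)
```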
Perhaps the documentation should be updated. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2275/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2275/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3169 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3169/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3169/comments | https://api.github.com/repos/huggingface/datasets/issues/3169/events | https://github.com/huggingface/datasets/pull/3169 | 1,036,773,357 | PR_kwDODunzps4ttYmZ | 3,169 | Configurable max filename length in file locks | {
"avatar_url": "https://avatars.githubusercontent.com/u/2979452?v=4",
"events_url": "https://api.github.com/users/lmmx/events{/privacy}",
"followers_url": "https://api.github.com/users/lmmx/followers",
"following_url": "https://api.github.com/users/lmmx/following{/other_user}",
"gists_url": "https://api.github.com/users/lmmx/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lmmx",
"id": 2979452,
"login": "lmmx",
"node_id": "MDQ6VXNlcjI5Nzk0NTI=",
"organizations_url": "https://api.github.com/users/lmmx/orgs",
"received_events_url": "https://api.github.com/users/lmmx/received_events",
"repos_url": "https://api.github.com/users/lmmx/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lmmx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lmmx/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lmmx"
} | [] | closed | false | null | [] | null | [
"I've also added environment variable configuration so that this can be configured once per machine (e.g. in a `.bashrc` file), as is already done for a few other config variables here.",
"Cancelling PR in favour of @mariosasko's in #3173"
] | "2021-10-26T21:52:55Z" | "2021-10-28T16:14:14Z" | "2021-10-28T16:14:13Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3169.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3169",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3169.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3169"
} | Resolve #2924 (https://github.com/huggingface/datasets/issues/2924#issuecomment-952330956), wherein the assumption that the maximum file-lock filename length is 255 raises an OSError on encrypted drives (eCryptfs on Linux uses part of the lower filename, reducing the maximum filename length to 143). Exposing this limit in the config module lets users modify it. Windows users are unaffected, as their class explicitly passes 255 on init.
Reproduced with the following example ([the first few lines of a script from Lightning Flash](https://lightning-flash.readthedocs.io/en/latest/reference/speech_recognition.html), fine-tuning a HF model):
```py
import torch
import flash
from flash.audio import SpeechRecognition, SpeechRecognitionData
from flash.core.data.utils import download_data
# 1. Create the DataModule
download_data("https://pl-flash-data.s3.amazonaws.com/timit_data.zip", "./data")
datamodule = SpeechRecognitionData.from_json(
    input_fields="file",
    target_fields="text",
    train_file="data/timit/train.json",
    test_file="data/timit/test.json",
)
```
Which gave this traceback:
```py
Traceback (most recent call last):
File "lf_ft.py", line 10, in <module>
datamodule = SpeechRecognitionData.from_json(
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_module.py", line 1005, in from_json
return cls.from_data_source(
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_module.py", line 571, in from_data_source
train_dataset, val_dataset, test_dataset, predict_dataset = data_source.to_datasets(
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_source.py", line 307, in to_datasets
train_dataset = self.generate_dataset(train_data, RunningStage.TRAINING)
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_source.py", line 344, in generate_dataset
data = load_data(data, mock_dataset)
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/audio/speech_recognition/data.py", line 103, in load_data
dataset_dict = load_dataset(self.filetype, data_files={stage: str(file)})
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/load.py", line 1599, in load_dataset
builder_instance = load_dataset_builder(
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/load.py", line 1457, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/builder.py", line 285, in __init__
with FileLock(lock_path):
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/utils/filelock.py", line 323, in __enter__
self.acquire()
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/utils/filelock.py", line 272, in acquire
self._acquire()
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/utils/filelock.py", line 403, in _acquire
fd = os.open(self._lock_file, open_mode)
OSError: [Errno 36] File name too long: '/home/louis/.cache/huggingface/datasets/_home_louis_.cache_huggingface_datasets_json_default-98e6813a547f72fa_0.0.0_c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426.lock'
```
Note the filename is 145 chars long:
```
>>> len("_home_louis_.cache_huggingface_datasets_json_default-98e6813a547f72fa_0.0.0_c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426.lock")
145
```
After installing datasets as an editable local package and modifying the script I was running to first include:
```py
import datasets
datasets.config.MAX_DATASET_CONFIG_ID_READABLE_LENGTH = 143
```
The error goes away.
If I instead deliberately set the value incorrectly, to 144, the OSError comes back:
```
Traceback (most recent call last):
File "lf_ft.py", line 14, in <module>
datamodule = SpeechRecognitionData.from_json(
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_module.py", line 1005, in from_json
return cls.from_data_source(
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_module.py", line 571, in from_data_source
train_dataset, val_dataset, test_dataset, predict_dataset = data_source.to_datasets(
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_source.py", line 307, in to_datasets
train_dataset = self.generate_dataset(train_data, RunningStage.TRAINING)
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_source.py", line 344, in generate_dataset
data = load_data(data, mock_dataset)
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/audio/speech_recognition/data.py", line 103, in load_data
dataset_dict = load_dataset(self.filetype, data_files={stage: str(file)})
File "/home/louis/dev/hf_datasets/src/datasets/load.py", line 1605, in load_dataset
builder_instance = load_dataset_builder(
File "/home/louis/dev/hf_datasets/src/datasets/load.py", line 1463, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
File "/home/louis/dev/hf_datasets/src/datasets/builder.py", line 285, in __init__
with FileLock(lock_path):
File "/home/louis/dev/hf_datasets/src/datasets/utils/filelock.py", line 326, in __enter__
self.acquire()
File "/home/louis/dev/hf_datasets/src/datasets/utils/filelock.py", line 275, in acquire
self._acquire()
File "/home/louis/dev/hf_datasets/src/datasets/utils/filelock.py", line 406, in _acquire
fd = os.open(self._lock_file, open_mode)
OSError: [Errno 36] File name too long: '/home/louis/.cache/huggingface/datasets/_home_louis_.cache_huggingface_datasets_json_default-32c812b5c1272d64_0.0.0_c2d554c3377ea79c7664b93dc65d0803b45e3279...-5794079643713042223.lock'
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3169/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3169/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2083 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2083/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2083/comments | https://api.github.com/repos/huggingface/datasets/issues/2083/events | https://github.com/huggingface/datasets/issues/2083 | 835,695,425 | MDU6SXNzdWU4MzU2OTU0MjU= | 2,083 | `concatenate_datasets` throws error when changing the order of datasets to concatenate | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | [
"Hi,\r\n\r\nthis bug is related to `Dataset.{remove_columns, rename_column, flatten}` not propagating the change to the schema metadata when the info features are updated, so this line is the culprit:\r\n```python\r\ncommon_voice_train = common_voice_train.remove_columns(['client_id', 'up_votes', 'down_votes', 'age', 'gender', 'accent', 'locale', 'segment'])\r\n\r\n``` \r\nThe order is important because the resulting dataset inherits the schema metadata of the first dataset passed to the `concatenate_datasets(...)` function (`pa.concat_tables` [docs](https://arrow.apache.org/docs/python/generated/pyarrow.concat_tables.html)). I'll try to fix this ASAP."
] | "2021-03-19T08:29:48Z" | "2021-04-09T09:25:33Z" | "2021-04-09T09:25:33Z" | MEMBER | null | null | null | Hey,
I played around with the `concatenate_datasets(...)` function: https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate_datasets#datasets.concatenate_datasets
and noticed that when the order in which the datasets are concatenated changes, an error is thrown where it should not be, IMO.
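A minimal sketch of the kind of order-dependent call (toy data, not the exact setup from the colab below):
```python
from datasets import Dataset, concatenate_datasets

# Toy stand-ins: `a` has had a column removed, `b` never had it
a = Dataset.from_dict({"text": ["x", "y"], "extra": [0, 1]}).remove_columns(["extra"])
b = Dataset.from_dict({"text": ["z"]})

# On the affected datasets version, the ordering of datasets like these could determine
# whether a schema/features mismatch error was raised; later versions fixed this.
concatenate_datasets([a, b])
concatenate_datasets([b, a])
```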
Here is a google colab to reproduce the error: https://colab.research.google.com/drive/17VTFU4KQ735-waWZJjeOHS6yDTfV5ekK?usp=sharing | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2083/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2083/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5080 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5080/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5080/comments | https://api.github.com/repos/huggingface/datasets/issues/5080/events | https://github.com/huggingface/datasets/issues/5080 | 1,398,849,565 | I_kwDODunzps5TYMAd | 5,080 | Use hfh for caching | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"There is some discussion in https://github.com/huggingface/huggingface_hub/pull/1088 if it can help :)"
] | "2022-10-06T05:51:58Z" | "2022-10-06T14:26:05Z" | null | MEMBER | null | null | null | ## Is your feature request related to a problem?
As previously discussed in our meeting with @Wauplin and agreed in our last datasets team sync meeting, I'm investigating how `datasets` can use `hfh` for caching.
## Describe the solution you'd like
Due to the peculiarities of the `datasets` cache, I would propose adopting the `hfh` caching system in stages.
First, we could easily start using `hfh` caching for:
- dataset Python scripts
- dataset READMEs
- dataset infos JSON files (now deprecated)
Second, we could also use `hfh` caching for data files downloaded from the Hub.
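As a rough illustration of the first two stages, fetching a dataset script or another Hub-hosted file through `hfh` goes through its cache automatically (the repo id and filenames below are just examples):
```python
from huggingface_hub import hf_hub_download

# Both calls go through (and on subsequent runs hit) hfh's local cache
script_path = hf_hub_download(repo_id="squad", filename="squad.py", repo_type="dataset")
readme_path = hf_hub_download(repo_id="squad", filename="README.md", repo_type="dataset")
```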
Further investigation is needed for:
- files downloaded from non-Hub hosts
- extracted files from downloaded archive/compressed files
- generated Arrow files
## Additional context
Docs about the `hfh` caching system:
- [Manage huggingface_hub cache-system](https://huggingface.co/docs/huggingface_hub/main/en/how-to-cache)
- [Cache-system reference](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/cache)
The `transformers` library has already adopted `hfh` for caching. See:
- huggingface/transformers#18438
- huggingface/transformers#18857
- huggingface/transformers#18966
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5080/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5080/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2408 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2408/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2408/comments | https://api.github.com/repos/huggingface/datasets/issues/2408/events | https://github.com/huggingface/datasets/pull/2408 | 903,422,648 | MDExOlB1bGxSZXF1ZXN0NjU0NjgxMzE4 | 2,408 | Fix head_qa keys | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-05-27T08:50:19Z" | "2021-05-27T09:05:37Z" | "2021-05-27T09:05:36Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2408.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2408",
"merged_at": "2021-05-27T09:05:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2408.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2408"
} | There were duplicate in the keys, as mentioned in #2382 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2408/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2408/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2232 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2232/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2232/comments | https://api.github.com/repos/huggingface/datasets/issues/2232/events | https://github.com/huggingface/datasets/pull/2232 | 860,075,931 | MDExOlB1bGxSZXF1ZXN0NjE3MDQyNTI4 | 2,232 | Start filling GLUE dataset card | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"I replaced all the \"we\" and applied your suggestion",
"Merging this for now, we can continue improving this card in other PRs :)"
] | "2021-04-16T18:37:37Z" | "2021-04-21T09:33:09Z" | "2021-04-21T09:33:08Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2232.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2232",
"merged_at": "2021-04-21T09:33:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2232.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2232"
} | The dataset card was pretty much empty.
I added the descriptions (mainly from TFDS since the script is the same), and I also added the task tags as well as examples for a subset of the tasks.
cc @sgugger | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2232/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2232/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2619 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2619/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2619/comments | https://api.github.com/repos/huggingface/datasets/issues/2619/events | https://github.com/huggingface/datasets/pull/2619 | 940,858,236 | MDExOlB1bGxSZXF1ZXN0Njg2ODY3NDA4 | 2,619 | Add ASR task for SUPERB | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
} | [] | closed | false | null | [] | {
"closed_at": "2021-07-21T15:36:49Z",
"closed_issues": 29,
"created_at": "2021-06-08T18:48:33Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-08-05T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/6",
"id": 6836458,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels",
"node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==",
"number": 6,
"open_issues": 0,
"state": "closed",
"title": "1.10",
"updated_at": "2021-07-21T15:36:49Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/6"
} | [
"Wait until #2620 is merged before pushing the README tags in this PR",
"> Thanks!\r\n> \r\n> One question: aren't you adding `task_templates` to the `_info` method (and to the `dataset_infos.json`?\r\n\r\ngreat catch! i've now added the asr task template (along with a mapping from superb task -> template) and updated the `dataset_infos.json` :) ",
"> Good!\r\n> \r\n> I have a suggested refactoring... Tell me what you think! :)\r\n\r\nyour approach is much more elegant - i've included your suggestions 🙏 "
] | "2021-07-09T15:19:45Z" | "2021-07-15T08:55:58Z" | "2021-07-13T12:40:18Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2619.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2619",
"merged_at": "2021-07-13T12:40:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2619.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2619"
} | This PR starts building up the SUPERB benchmark by including the ASR task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/v0.2.0/downstream#asr-automatic-speech-recognition).
Usage:
```python
from datasets import load_dataset
asr = load_dataset("superb", "asr")
# DatasetDict({
# train: Dataset({
# features: ['file', 'text', 'speaker_id', 'chapter_id', 'id'],
# num_rows: 28539
# })
# validation: Dataset({
# features: ['file', 'text', 'speaker_id', 'chapter_id', 'id'],
# num_rows: 2703
# })
# test: Dataset({
# features: ['file', 'text', 'speaker_id', 'chapter_id', 'id'],
# num_rows: 2620
# })
# })
```
I've used the GLUE benchmark as a guide for filling out the README.
To move fast during the evaluation PoC I propose to merge one task at a time, so we can continue building the training / evaluation framework in parallel.
Note: codewise this PR is ready for review - I'll add the missing YAML tags once #2620 is merged :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2619/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2619/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2931 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2931/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2931/comments | https://api.github.com/repos/huggingface/datasets/issues/2931/events | https://github.com/huggingface/datasets/pull/2931 | 998,326,359 | PR_kwDODunzps4r1-JH | 2,931 | Fix bug in to_tf_dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Rocketknight1",
"id": 12866554,
"login": "Rocketknight1",
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Rocketknight1"
} | [] | closed | false | null | [] | null | [
"I'm going to merge it, but yeah - hopefully the CI runner just cleans that up automatically and few other people run the tests on Windows anyway!"
] | "2021-09-16T15:08:03Z" | "2021-09-16T17:01:38Z" | "2021-09-16T17:01:37Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2931.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2931",
"merged_at": "2021-09-16T17:01:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2931.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2931"
} | Replace `set_format()` with `with_format()` so that we don't alter the original dataset in `to_tf_dataset()` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2931/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2931/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1546 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1546/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1546/comments | https://api.github.com/repos/huggingface/datasets/issues/1546/events | https://github.com/huggingface/datasets/pull/1546 | 765,559,923 | MDExOlB1bGxSZXF1ZXN0NTM4OTkwMjgw | 1,546 | Add persian ner dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4",
"events_url": "https://api.github.com/users/KMFODA/events{/privacy}",
"followers_url": "https://api.github.com/users/KMFODA/followers",
"following_url": "https://api.github.com/users/KMFODA/following{/other_user}",
"gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/KMFODA",
"id": 35491698,
"login": "KMFODA",
"node_id": "MDQ6VXNlcjM1NDkxNjk4",
"organizations_url": "https://api.github.com/users/KMFODA/orgs",
"received_events_url": "https://api.github.com/users/KMFODA/received_events",
"repos_url": "https://api.github.com/users/KMFODA/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions",
"type": "User",
"url": "https://api.github.com/users/KMFODA"
} | [] | closed | false | null | [] | null | [
"HI @SBrandeis. Thanks for all the comments - very helpful. I realised that the tests had failed and had been trying to figure out what was causing them to do so. All the tests pass when I run the load_real_dataset test however when I run `RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_persian_ner` I get the below error. One thing to note is that the automated dummy data file generation failed when I tried to run it so I manually created the dummy data and ensured that the last line in the file was an empty line as per your comments. Would appreciate your thoughts on what might be causing this:\r\n\r\n```\r\n__________________________________________________ LocalDatasetTest.test_load_dataset_all_configs_persian_ner __________________________________________________\r\n\r\nself = <tests.test_dataset_common.LocalDatasetTest testMethod=test_load_dataset_all_configs_persian_ner>, dataset_name = 'persian_ner'\r\n\r\n @slow\r\n def test_load_dataset_all_configs(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True)\r\n\r\ntests/test_dataset_common.py:237: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:198: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n--------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------\r\nDownloading and preparing dataset persian_ner/fold1 (download: 1.00 MiB, generated: 1.00 MiB, post-processed: Unknown size, total: 2.00 MiB) to /var/folders/nk/yp5_m5c95cnc0cm_vbd7h7g80000gn/T/tmpzh495aac/persian_ner/fold1/1.1.0...\r\nDataset persian_ner downloaded and prepared to /var/folders/nk/yp5_m5c95cnc0cm_vbd7h7g80000gn/T/tmpzh495aac/persian_ner/fold1/1.1.0. 
Subsequent calls will reuse this data.\r\n--------------------------------------------------------------------- Captured stderr call ---------------------------------------------------------------------\r\n \r\n======================================================================= warnings summary =======================================================================\r\nenv/lib/python3.7/site-packages/tensorflow/python/autograph/utils/testing.py:21\r\n /Users/karimfoda/Documents/STUDIES/PYTHON/DATASETS/env/lib/python3.7/site-packages/tensorflow/python/autograph/utils/testing.py:21: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses\r\n import imp\r\n\r\nenv/lib/python3.7/site-packages/apache_beam/typehints/typehints.py:693\r\n /Users/karimfoda/Documents/STUDIES/PYTHON/DATASETS/env/lib/python3.7/site-packages/apache_beam/typehints/typehints.py:693: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working\r\n if not isinstance(type_params, collections.Iterable):\r\n\r\nenv/lib/python3.7/site-packages/apache_beam/typehints/typehints.py:532\r\n /Users/karimfoda/Documents/STUDIES/PYTHON/DATASETS/env/lib/python3.7/site-packages/apache_beam/typehints/typehints.py:532: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working\r\n if not isinstance(type_params, (collections.Sequence, set)):\r\n\r\nenv/lib/python3.7/site-packages/elasticsearch/compat.py:38\r\n /Users/karimfoda/Documents/STUDIES/PYTHON/DATASETS/env/lib/python3.7/site-packages/elasticsearch/compat.py:38: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working\r\n from collections import Mapping\r\n\r\n-- Docs: https://docs.pytest.org/en/stable/warnings.html\r\n=================================================================== short test summary info ====================================================================\r\nFAILED tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_persian_ner - AssertionError: False is not true\r\n```",
"Thanks @SBrandeis. It turns out the error was because I had to manually increase the n_lines variable to get the dummy data generation to cover at least one example. Should all be working okay now.",
"Great, thanks!\r\nIt looks good to me, I'll let @lhoestq take over"
] | "2020-12-13T17:45:48Z" | "2020-12-23T09:53:03Z" | "2020-12-23T09:53:03Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1546.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1546",
"merged_at": "2020-12-23T09:53:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1546.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1546"
} | Adding the following dataset:
https://github.com/HaniehP/PersianNER
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1546/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1546/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2084 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2084/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2084/comments | https://api.github.com/repos/huggingface/datasets/issues/2084/events | https://github.com/huggingface/datasets/issues/2084 | 835,750,671 | MDU6SXNzdWU4MzU3NTA2NzE= | 2,084 | CUAD - Contract Understanding Atticus Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/theo-m",
"id": 17948980,
"login": "theo-m",
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"repos_url": "https://api.github.com/users/theo-m/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"type": "User",
"url": "https://api.github.com/users/theo-m"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | [
"+1 on this request"
] | "2021-03-19T09:27:43Z" | "2021-04-16T08:50:44Z" | "2021-04-16T08:50:44Z" | CONTRIBUTOR | null | null | null | ## Adding a Dataset
- **Name:** CUAD - Contract Understanding Atticus Dataset
- **Description:** As one of the only large, specialized NLP benchmarks annotated by experts, CUAD can serve as a challenging research benchmark for the broader NLP community.
- **Paper:** https://arxiv.org/abs/2103.06268
- **Data:** https://github.com/TheAtticusProject/cuad/
- **Motivation:** good domain specific datasets are valuable
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2084/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2084/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2371 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2371/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2371/comments | https://api.github.com/repos/huggingface/datasets/issues/2371/events | https://github.com/huggingface/datasets/issues/2371 | 894,193,403 | MDU6SXNzdWU4OTQxOTM0MDM= | 2,371 | Align question answering tasks with sub-domains | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
}
] | null | [
"Closing this issue as the `task_templates` API has been deprecated."
] | "2021-05-18T09:47:59Z" | "2023-07-25T16:52:05Z" | "2023-07-25T16:52:04Z" | MEMBER | null | null | null | As pointed out by @thomwolf in #2255 we should consider breaking with the pipeline taxonomy of `transformers` to account for the various types of question-answering domains:
> `question-answering` exists in two forms: abstractive and extractive question answering.
>
> we can keep a generic `question-answering` but then it will probably mean diferrent schema of input/output for both (abstractive will have text for both while extractive can use spans indication as well as text).
>
> Or we can also propose to use `abstractive-question-answering` and `extractive-question-answering` for instance.
> Maybe we could have `question-answering-abstractive` and `question-answering-extractive` if somehow we can use a for a completion or search in the future (detail).
> Actually I see that people are more organizing in terms of general and sub-tasks, for instance on paperwithcode: https://paperswithcode.com/area/natural-language-processing and on nlpprogress: https://github.com/sebastianruder/NLP-progress/blob/master/english/question_answering.md#squad
>
> Probably the best is to align with one of these in terms of denomination, PaperWithCode is probably the most active and maintained and we work with them as well.
> Maybe you want to check with a few QA datasets that this schema make sense. Typically NaturalQuestions, TriviaQA and can be good second datasets to compare to and be sure of the generality of the schema.
>
> A good recent list of QA datasets to compare the schemas among, is for instance in the UnitedQA paper: https://arxiv.org/abs/2101.00178
Investigate which grouping of QA is best suited for `datasets` and adapt / extend the QA task template accordingly. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2371/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2371/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5390 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5390/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5390/comments | https://api.github.com/repos/huggingface/datasets/issues/5390/events | https://github.com/huggingface/datasets/issues/5390 | 1,509,357,553 | I_kwDODunzps5Z9vfx | 5,390 | Error when pushing to the CI hub | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [] | closed | false | null | [] | null | [
"Hmmm, git bisect tells me that the behavior is the same since https://github.com/huggingface/datasets/commit/67e65c90e9490810b89ee140da11fdd13c356c9c (3 Oct), i.e. https://github.com/huggingface/datasets/pull/4926",
"Maybe related to the discussions in https://github.com/huggingface/datasets/pull/5196",
"Maybe the current version of moonlanding in Hub CI is the issue.\r\n\r\nI relaunched tests that were working two days ago: now they are failing. https://github.com/huggingface/datasets-server/commit/746414449cae4b311733f8a76e5b3b4ca73b38a9 for example\r\n\r\ncc @huggingface/moon-landing ",
"Hi! I don't think this has anything to do with `datasets`. Hub CI seems to be the culprit - the identical failure can be found in [this](https://github.com/huggingface/datasets/pull/5389) PR (with unrelated changes) opened today.",
"OK! Thanks for looking at it. Closing then."
] | "2022-12-23T13:36:37Z" | "2022-12-23T20:29:02Z" | "2022-12-23T20:29:02Z" | CONTRIBUTOR | null | null | null | ### Describe the bug
Note that this is a special case where the Hub URL is "https://hub-ci.huggingface.co": the error does not appear if we do the same on the Hub (https://huggingface.co).
The call to `dataset.push_to_hub(...)` fails:
```
Pushing dataset shards to the dataset hub: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00, 1.93s/it]
Traceback (most recent call last):
File "reproduce_hubci.py", line 16, in <module>
dataset.push_to_hub(repo_id=repo_id, private=False, token=USER_TOKEN, embed_external_files=True)
File "/home/slesage/hf/datasets/src/datasets/arrow_dataset.py", line 5025, in push_to_hub
HfApi(endpoint=config.HF_ENDPOINT).upload_file(
File "/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1346, in upload_file
raise err
File "/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1337, in upload_file
r.raise_for_status()
File "/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/requests/models.py", line 953, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://hub-ci.huggingface.co/api/datasets/__DUMMY_DATASETS_SERVER_USER__/bug-16718047265472/upload/main/README.md
```
### Steps to reproduce the bug
```python
# reproduce.py
from datasets import Dataset
import time
USER = "__DUMMY_DATASETS_SERVER_USER__"
USER_TOKEN = "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"
dataset = Dataset.from_dict({"a": [1, 2, 3]})
repo_id = f"{USER}/bug-{int(time.time() * 10e3)}"
dataset.push_to_hub(repo_id=repo_id, private=False, token=USER_TOKEN, embed_external_files=True)
```
```bash
$ HF_ENDPOINT="https://hub-ci.huggingface.co" python reproduce.py
```
### Expected behavior
No error and the dataset should be uploaded to the Hub with the README file (which generates the error).
### Environment info
- `datasets` version: 2.8.0
- Platform: Linux-5.15.0-1026-aws-x86_64-with-glibc2.35
- Python version: 3.9.15
- PyArrow version: 7.0.0
- Pandas version: 1.5.2
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5390/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5390/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5849 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5849/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5849/comments | https://api.github.com/repos/huggingface/datasets/issues/5849/events | https://github.com/huggingface/datasets/issues/5849 | 1,707,551,511 | I_kwDODunzps5lxysX | 5,849 | CSV datasets should only read the CSV data files in the repo | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [] | "2023-05-12T12:29:53Z" | "2023-06-22T14:16:27Z" | "2023-06-22T14:16:27Z" | MEMBER | null | null | null | When a no-script dataset has many CSV files and a JPG file, the library infers that it should use the Csv builder, but then tries to read every file in the repo as CSV, including the JPG file.
I think the Csv builder should filter out non-CSV files when reading.
An analogous solution should be implemented for other packaged builders.
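A sketch of the kind of filtering this could mean (illustrative only, not the actual patch):
```python
import os

CSV_EXTENSIONS = {".csv", ".tsv"}

def keep_only_csv_files(data_files):
    # Drop files the Csv builder cannot parse (e.g. a stray JPG) before reading
    return [f for f in data_files if os.path.splitext(str(f))[1].lower() in CSV_EXTENSIONS]
```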
Related to:
- https://huggingface.co/datasets/abidlabs/img2text/discussions/1
- https://github.com/gradio-app/gradio/pull/3973#issuecomment-1545409061
CC: @abidlabs @severo | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5849/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5849/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4908 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4908/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4908/comments | https://api.github.com/repos/huggingface/datasets/issues/4908/events | https://github.com/huggingface/datasets/pull/4908 | 1,353,995,574 | PR_kwDODunzps499FDS | 4,908 | Fix missing tags in dataset cards | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-08-29T09:41:53Z" | "2022-09-22T14:35:56Z" | "2022-08-29T16:13:07Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4908.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4908",
"merged_at": "2022-08-29T16:13:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4908.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4908"
} | Fix missing tags in dataset cards:
- asnq
- clue
- common_gen
- cosmos_qa
- guardian_authorship
- hindi_discourse
- py_ast
- x_stance
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
Related to:
- #4833
- #4891
- #4896 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4908/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4908/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3825 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3825/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3825/comments | https://api.github.com/repos/huggingface/datasets/issues/3825/events | https://github.com/huggingface/datasets/pull/3825 | 1,159,802,345 | PR_kwDODunzps4z9p4b | 3,825 | Update version and date in Wikipedia dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3825). All of your documentation changes will be reflected on that endpoint."
] | "2022-03-04T16:05:27Z" | "2022-03-04T17:24:37Z" | "2022-03-04T17:24:36Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3825.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3825",
"merged_at": "2022-03-04T17:24:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3825.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3825"
} | CC: @geohci | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3825/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3825/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4511 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4511/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4511/comments | https://api.github.com/repos/huggingface/datasets/issues/4511/events | https://github.com/huggingface/datasets/pull/4511 | 1,273,336,874 | PR_kwDODunzps45w7RN | 4,511 | Support all negative values in ClassLabel | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for this fix! I'm not sure what the release timeline is, but FYI #4508 is a breaking issue for transformer token classification using Trainer and PyTorch. PyTorch defaults to -100 as the ignored label for [negative log loss](https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html?highlight=nllloss#torch.nn.NLLLoss), so switching labels to -1 leads to index errors using Trainer defaults.\r\n\r\nAs a workaround, I'm using master branch directly (`pip install git+https://github.com/huggingface/datasets.git@master` for anyone who needs to do the same) until this gets released.",
"The new release `2.4` fixes the issue, feel free to update `datasets` :) \r\n```\r\npip install -U datasets\r\n```"
] | "2022-06-16T09:59:39Z" | "2022-07-28T16:03:27Z" | "2022-06-16T13:54:07Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4511.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4511",
"merged_at": "2022-06-16T13:54:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4511.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4511"
} | We usually use -1 to represent a missing label, but we should also support any negative values (some users use -100 for example). This is a regression from `datasets` 2.3
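A toy sketch of the behaviour this restores (values below are illustrative):
```python
from datasets import ClassLabel, Dataset, Features

features = Features({"label": ClassLabel(names=["neg", "pos"])})
# -100 (e.g. PyTorch's default ignore_index) should be accepted as a missing label, just like -1
ds = Dataset.from_dict({"label": [0, 1, -1, -100]}, features=features)
```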
Fix https://github.com/huggingface/datasets/issues/4508 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4511/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4511/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3381 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3381/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3381/comments | https://api.github.com/repos/huggingface/datasets/issues/3381/events | https://github.com/huggingface/datasets/issues/3381 | 1,071,283,879 | I_kwDODunzps4_2n6n | 3,381 | Unable to load audio_features from common_voice dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8268102?v=4",
"events_url": "https://api.github.com/users/ashu5644/events{/privacy}",
"followers_url": "https://api.github.com/users/ashu5644/followers",
"following_url": "https://api.github.com/users/ashu5644/following{/other_user}",
"gists_url": "https://api.github.com/users/ashu5644/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ashu5644",
"id": 8268102,
"login": "ashu5644",
"node_id": "MDQ6VXNlcjgyNjgxMDI=",
"organizations_url": "https://api.github.com/users/ashu5644/orgs",
"received_events_url": "https://api.github.com/users/ashu5644/received_events",
"repos_url": "https://api.github.com/users/ashu5644/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ashu5644/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashu5644/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ashu5644"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi ! Feel free to access `batch[\"audio\"][\"array\"]` and `batch[\"audio\"][\"sampling_rate\"]` instead\r\n\r\n`datasets` 1.16 introduced some changes in `common_voice` and now the `path` field is no longer a path to a local file (but rather the path to the file in the archive it's extracted from)",
"Thanks for the information. It works.",
"Cool ! Closing this issue then"
] | "2021-12-04T19:59:11Z" | "2021-12-06T17:52:42Z" | "2021-12-06T17:52:42Z" | NONE | null | null | null | ## Describe the bug
I am not able to load audio features from the common_voice dataset.
## Steps to reproduce the bug
```
from datasets import load_dataset
import torchaudio
test_dataset = load_dataset("common_voice", "hi", split="test[:2%]")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
```
## Expected results
This piece of code should return test_dataset after loading audio features.
## Actual results
Reusing dataset common_voice (/home/jovyan/.cache/huggingface/datasets/common_voice/hi/6.1.0/b879a355caa529b11f2249400b61cadd0d9433f334d5c60f8c7216ccedfecfe1)
/opt/conda/lib/python3.7/site-packages/transformers/configuration_utils.py:341: UserWarning: Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 Transformers. Using `model.gradient_checkpointing_enable()` instead, or if you are using the `Trainer` API, pass `gradient_checkpointing=True` in your `TrainingArguments`.
"Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 "
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
0%| | 0/3 [00:00<?, ?ex/s]formats: can't open input file `common_voice_hi_23795358.mp3': No such file or directory
0%| | 0/3 [00:00<?, ?ex/s]
Traceback (most recent call last):
File "demo_file.py", line 23, in <module>
test_dataset = test_dataset.map(speech_file_to_array_fn)
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2036, in map
desc=desc,
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 518, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 485, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py", line 411, in wrapper
out = func(self, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2368, in _map_single
example = apply_function_on_filtered_inputs(example, i, offset=offset)
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2277, in apply_function_on_filtered_inputs
processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1978, in decorated
result = f(decorated_item, *args, **kwargs)
File "demo_file.py", line 19, in speech_file_to_array_fn
speech_array, sampling_rate = torchaudio.load(batch["path"])
File "/opt/conda/lib/python3.7/site-packages/torchaudio/backend/sox_io_backend.py", line 154, in load
filepath, frame_offset, num_frames, normalize, channels_first, format)
RuntimeError: Error loading audio file: failed to open file common_voice_hi_23795358.mp3
## Environment info
- `datasets` version: 1.16.1
- Platform: Linux-4.14.243 with-debian-bullseye-sid
- Python version: 3.7.9
- PyArrow version: 6.0.1
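For reference, a minimal sketch of the workaround suggested in the comments (assuming `datasets>=1.16`, where `common_voice` exposes a decoded `audio` column; the resampling step from the original snippet is omitted here):
```python
from datasets import load_dataset

test_dataset = load_dataset("common_voice", "hi", split="test[:2%]")

def speech_file_to_array_fn(batch):
    # the "audio" column already carries the decoded waveform and its sampling rate
    batch["speech"] = batch["audio"]["array"]
    batch["sampling_rate"] = batch["audio"]["sampling_rate"]
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
```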
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3381/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3381/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5700 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5700/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5700/comments | https://api.github.com/repos/huggingface/datasets/issues/5700/events | https://github.com/huggingface/datasets/pull/5700 | 1,652,527,530 | PR_kwDODunzps5Ng6g_ | 5,700 | fix: fix wrong modification of the 'cache_file_name' -related paramet… | {
"avatar_url": "https://avatars.githubusercontent.com/u/47528215?v=4",
"events_url": "https://api.github.com/users/FrancoisNoyez/events{/privacy}",
"followers_url": "https://api.github.com/users/FrancoisNoyez/followers",
"following_url": "https://api.github.com/users/FrancoisNoyez/following{/other_user}",
"gists_url": "https://api.github.com/users/FrancoisNoyez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/FrancoisNoyez",
"id": 47528215,
"login": "FrancoisNoyez",
"node_id": "MDQ6VXNlcjQ3NTI4MjE1",
"organizations_url": "https://api.github.com/users/FrancoisNoyez/orgs",
"received_events_url": "https://api.github.com/users/FrancoisNoyez/received_events",
"repos_url": "https://api.github.com/users/FrancoisNoyez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/FrancoisNoyez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FrancoisNoyez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/FrancoisNoyez"
} | [] | open | false | null | [] | null | [
"Have you tried to set the cache file names if `keep_in_memory`is True ?\r\n\r\n```diff\r\n- if self.cache_files:\r\n+ if self.cache_files and not keep_in_memory:\r\n```\r\n\r\nThis way it doesn't change the indice cache arguments and leave them as `None`",
"@lhoestq \r\nRegarding what you suggest:\r\nThe thing is, if cached files already exist and do correspond to the split that we are currently trying to perform, then it would be a shame not to use them, would it not? So I don't think that we should necessarily bypass this step in the method (corresponding to the reading of already existing data), if 'keep_in_memory' = True. For me, 'keep_in_memory' = True is supposed to mean \"don't cache the output of this method\", but it should say nothing regarding what to do with potentially already existing cached data, should it?\r\nBesides, even if we do what you suggest, and do only that (so, not the modifs that I suggested), then, assuming that 'keep_in_memory' = False and that there exist cached files, if the following check on the existence of cached files with specific name fails, we will still have ended up modifying an input value which will be then used in the remaining of the method, potentially altering the behavior that the user intended the method's call to have. Basically, the issue with what you suggest is that we can't guaranty that we won't continue with the remaining of the method even if this condition is met. Because of that, in my opinion, the best way to not have to worry about potential, unwanted side effects in the rest of the code is to not modify those variables in place, and so, here, to use other variables.\r\nSo, I'm sorry, but for those two reasons, I don't think that what you are suggesting addresses the problems which are described in the opened issue.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5700). All of your documentation changes will be reflected on that endpoint.",
"Makes sense ! Therefore removing the ValueError messages sounds good to me, thanks for detailing.\r\n\r\nThen I think it's fine to keep using the same variables for the cache file names is enough instead of defining new ones - it doesn't alter the behavior of the function. Otherwise it would feel a bit confusing to have similar variables with slightly modified names just for that",
"Ok for the removing the ValueError exceptions, thanks.\r\n\r\nThat said, it seems to me like we should still find a way not to modify the values input by the user, insofar as they can be used elsewhere down the line in the program. Sure, here, by removing the raising of those ValueError exceptions, we have fixed one use cases were allowing this modification actually caused an issue, but maybe there are other use cases where this would also caused an issue? Also, maybe in the future we will add other functionalities which will depend on the values of those input parameters, with then new risks of such an issue occurring?\r\nThat's why, in order not to have to worry about that, and in order to make the code a bit more future -proof, I suggest that make sure those input values are not modified.\r\n\r\nOne way that I did this is to create different but similar looking variable names. If you find this confusing, we can always add a comment.\r\nAnother way would be to not store the result of the conditional definition of the values (the '\\_cache_file_name = (... if condition else ...)' in my proposition of code), and to use it every time we need. But since we use those new variables at least twice, that creates code redundancy, which is not great either.\r\nFinally, a third way that I can imagine would be to put all this logic into its own method, which would then encapsulate it, and protect the remaining of the 'train_test_split' code from all unintended side effect that this logic can currently cause. This one is probably best. Also, maybe it could be used to remove some code redundancy elsewhere in the definition of the Dataset class? I have not checked if such a code redundancy exists.",
"We're already replacing the user's input by default values automatically in other methods, it's fine to do it here as well and actually fits the library's style.\r\n\r\nNote that the case where it would reload the cache even if `keep_in_memory=True` is not implemented though, but it should be easy to add in `_select_with_indices_mapping`:\r\n- add keep_in_memory in `_new_dataset_with_indices` that uses InMemoryTable.from_file\r\n- inside `_select_with_indices_mapping` return the dataset from `_new_dataset_with_indices` if:\r\n - `keep_in_memory=True`\r\n - and `indices_cache_file_name` is not None and exists \r\n - and `is_caching_enabled()`\r\n\r\nBecause if we let it this way it would recreate the cache file unfortunately",
"> We're already replacing the user's input by default values automatically in other methods, it's fine to do it here as well and actually fits the library's style.\r\n\r\nI think the fact that it's a style of the library is not really an argument in itself; however, after thinking through it several times, I think I know see why your solution is acceptable: as soon as the user specifies that 'keep_in_memory=True', they should not care anymore about the value of the '\\_indices_cache_file_name' variables, since from their point of view those are now irrelevant. So it's \"fine\" if we allow ourselves to modify the value of those variables, if it helps the internal code being more concise.\r\nStill, I find that it's a bit unintuitive, and a risk as far as future evolution of the method / of the code is concerned; someone tasked with doing that would need to have the knowledge of a lot of, if not all, the other methods of the class, in order to understand the potentially far-reaching impact of some modifications made to this portion of the code. But I guess that's a choice which is the library's owners to make. Also, if we use your proposed solution, as I explained, we can't get the benefit of potentially reusing possibly already existing cached data.\r\nOn that note...\r\n\r\n> Note that the case where it would reload the cache even if `keep_in_memory=True` is not implemented though\r\n\r\nI'm not sure what you mean here:\r\nWithin the current code trying to load up the potentially already existing split data, there is no trace of the 'keep_in_memory' variable. So why do you say that 'the case where it would reload the cache even if keep_in_memory=True is not implemented' (I assume that you mean 'currently implemented')? Surely, currently, this bit of code works regardless of the value of the 'keep_in_memory' variable', does it not?"
] | "2023-04-03T18:05:26Z" | "2023-04-06T17:17:27Z" | null | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5700.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5700",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5700.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5700"
} | …ers values in 'train_test_split' + fix bad interaction between 'keep_in_memory' and 'cache_file_name' -related parameters (#5699) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5700/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5700/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2923 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2923/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2923/comments | https://api.github.com/repos/huggingface/datasets/issues/2923/events | https://github.com/huggingface/datasets/issues/2923 | 997,351,590 | I_kwDODunzps47cmCm | 2,923 | Loading an autonlp dataset raises in normal mode but not in streaming mode | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | [] | null | [
"Closing since autonlp dataset are now supported"
] | "2021-09-15T17:44:38Z" | "2022-04-12T10:09:40Z" | "2022-04-12T10:09:39Z" | CONTRIBUTOR | null | null | null | ## Describe the bug
The same dataset (from autonlp) raises an error in normal mode, but does not raise in streaming mode
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("severo/autonlp-data-sentiment_detection-3c8bcd36", split="train", streaming=False)
## raises an error
load_dataset("severo/autonlp-data-sentiment_detection-3c8bcd36", split="train", streaming=True)
## does not raise an error
```
## Expected results
Both calls should raise the same error
## Actual results
Call with streaming=False:
```
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 5825.42it/s]
Using custom data configuration autonlp-data-sentiment_detection-3c8bcd36-fe30267462d1d42b
Downloading and preparing dataset json/autonlp-data-sentiment_detection-3c8bcd36 to /home/slesage/.cache/huggingface/datasets/json/autonlp-data-sentiment_detection-3c8bcd36-fe30267462d1d42b/0.0.0/d75ead8d5cfcbe67495df0f89bd262f0023257fbbbd94a730313295f3d756d50...
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 15923.71it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 3346.88it/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1187, in _prepare_split
writer.write_table(table)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/arrow_writer.py", line 418, in write_table
pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/arrow_writer.py", line 418, in <listcomp>
pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
File "pyarrow/table.pxi", line 1249, in pyarrow.lib.Table.__getitem__
File "pyarrow/table.pxi", line 1825, in pyarrow.lib.Table.column
File "pyarrow/table.pxi", line 1800, in pyarrow.lib.Table._ensure_integer_index
KeyError: 'Field "splits" does not exist in table schema'
```
Call with `streaming=True`:
```
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 6000.43it/s]
Using custom data configuration autonlp-data-sentiment_detection-3c8bcd36-fe30267462d1d42b
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 46916.15it/s]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 148734.18it/s]
```
## Environment info
- `datasets` version: 1.12.1.dev0
- Platform: Linux-5.11.0-1017-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2923/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2923/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6303 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6303/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6303/comments | https://api.github.com/repos/huggingface/datasets/issues/6303/events | https://github.com/huggingface/datasets/issues/6303 | 1,943,466,532 | I_kwDODunzps5z1vIk | 6,303 | Parquet uploads off-by-one naming scheme | {
"avatar_url": "https://avatars.githubusercontent.com/u/1981179?v=4",
"events_url": "https://api.github.com/users/ZachNagengast/events{/privacy}",
"followers_url": "https://api.github.com/users/ZachNagengast/followers",
"following_url": "https://api.github.com/users/ZachNagengast/following{/other_user}",
"gists_url": "https://api.github.com/users/ZachNagengast/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ZachNagengast",
"id": 1981179,
"login": "ZachNagengast",
"node_id": "MDQ6VXNlcjE5ODExNzk=",
"organizations_url": "https://api.github.com/users/ZachNagengast/orgs",
"received_events_url": "https://api.github.com/users/ZachNagengast/received_events",
"repos_url": "https://api.github.com/users/ZachNagengast/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ZachNagengast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZachNagengast/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ZachNagengast"
} | [] | open | false | null | [] | null | [
"You can find the reasoning behind this naming scheme [here](https://github.com/huggingface/transformers/pull/16343#discussion_r931182168).\r\n\r\nThis point has been raised several times, so I'd be okay with starting with `00001-` (also to be consistent with the `transformers` sharding), but I'm not sure @lhoestq agrees.",
"We start at 0 in `datasets` for consistency with Apache Spark, Apache Beam, Dask and others.\r\n\r\nAlso note `transformers` isn't a good reference on this topic. I talked with the maintainers when they added shards but it was already released this way. Though we found that there is a backward-compatible way in `transformers` to start at 0, but no request from `transformers` users to changes this AFAIK.",
"not sure it would be a good idea to break the consistency now, IMO",
"Makes sense to start at 0 for plenty of good reasons so I'm on board.\r\n\r\nWhat about the second part `-of-0000X`? With single commit PR #6269 just getting merged, there was a note about issues with 100+ file edits https://github.com/huggingface/datasets/pull/6269#issuecomment-1755428581.\r\n\r\nThat would be my last remaining concern in the context of the `push_to_hub(..., append=True)` work to be done, where appending a single file to the full dataset will require renaming every other existing file in the dataset. If it doesn't seem like a big issue for this work then all the better 👍"
] | "2023-10-14T18:31:03Z" | "2023-10-16T16:33:21Z" | null | NONE | null | null | null | ### Describe the bug
I noticed this numbering scheme not matching up in a different project and wanted to raise it for discussion: what is the actual proper way to store these files?
<img width="425" alt="image" src="https://github.com/huggingface/datasets/assets/1981179/3ffa2144-7c9a-446f-b521-a5e9db71e7ce">
The `-SSSSS-of-NNNNN` seems to be used widely across the codebase. The section that creates the part in my screenshot is here https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L5287
There are also some edits to this section in the single commit branch.
### Steps to reproduce the bug
1. Upload a dataset that requires at least two parquet files in it
2. Observe the naming scheme
### Expected behavior
The options here are, of course: **1. keeping it as is**
**2. Starting the index at 1:**
train-00001-of-00002-{hash}.parquet
train-00002-of-00002-{hash}.parquet
**3. My preferred option** (which would solve my specific issue), dropping the total entirely:
train-00000-{hash}.parquet
train-00001-{hash}.parquet
This also solves an issue that will occur with an `append` variable for `push_to_hub` (see https://github.com/huggingface/datasets/issues/6290), where, as you add a new parquet file, you need to rename everything in the repo as well.
However, I know there are parts of the repo that use 0 as the starting file or may require the total, so raising the question for discussion.
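For illustration only, the three options written out as hypothetical format strings (the variable names below are assumptions for the sketch, not the library's actual code):
```python
split, index, total, shard_hash = "train", 0, 2, "abc123"  # hypothetical values

option_1 = f"{split}-{index:05d}-of-{total:05d}-{shard_hash}.parquet"      # current zero-based scheme
option_2 = f"{split}-{index + 1:05d}-of-{total:05d}-{shard_hash}.parquet"  # start the index at 1
option_3 = f"{split}-{index:05d}-{shard_hash}.parquet"                     # drop the total entirely
```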
### Environment info
- `datasets` version: 2.14.6.dev0
- Platform: macOS-14.0-arm64-arm-64bit
- Python version: 3.10.12
- Huggingface_hub version: 0.18.0
- PyArrow version: 12.0.1
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6303/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6303/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2479 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2479/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2479/comments | https://api.github.com/repos/huggingface/datasets/issues/2479/events | https://github.com/huggingface/datasets/pull/2479 | 918,672,431 | MDExOlB1bGxSZXF1ZXN0NjY4MDc3NTI4 | 2,479 | ❌ load_datasets ❌ | {
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/julien-c",
"id": 326577,
"login": "julien-c",
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"repos_url": "https://api.github.com/users/julien-c/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"type": "User",
"url": "https://api.github.com/users/julien-c"
} | [] | closed | false | null | [] | null | [] | "2021-06-11T12:14:36Z" | "2021-06-11T14:46:25Z" | "2021-06-11T14:46:25Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2479.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2479",
"merged_at": "2021-06-11T14:46:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2479.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2479"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2479/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2479/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/1779 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1779/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1779/comments | https://api.github.com/repos/huggingface/datasets/issues/1779/events | https://github.com/huggingface/datasets/pull/1779 | 793,539,703 | MDExOlB1bGxSZXF1ZXN0NTYxMjEwNjI5 | 1,779 | Ignore definition line number of functions for caching | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-01-25T16:42:29Z" | "2021-01-26T10:20:20Z" | "2021-01-26T10:20:19Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1779.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1779",
"merged_at": "2021-01-26T10:20:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1779.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1779"
} | As noticed in #1718, when a function used for processing with `map` is moved within its python file, the change of its definition line number causes the caching mechanism to consider it a different function. Therefore, in this case, it recomputes everything.
This is because we were not ignoring the definition line number for such functions (even though we were already doing so for lambda functions).
For example this code currently prints False:
```python
from datasets.fingerprint import Hasher
# define once
def foo(x):
return x
h = Hasher.hash(foo)
# define a second time elsewhere
def foo(x):
return x
print(h == Hasher.hash(foo))
```
I changed this by ignoring the definition line number for all functions, so the snippet above now prints True. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1779/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1779/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1700 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1700/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1700/comments | https://api.github.com/repos/huggingface/datasets/issues/1700/events | https://github.com/huggingface/datasets/pull/1700 | 781,333,589 | MDExOlB1bGxSZXF1ZXN0NTUxMDc1NTg2 | 1,700 | Update Curiosity dialogs DatasetCard | {
"avatar_url": "https://avatars.githubusercontent.com/u/50873201?v=4",
"events_url": "https://api.github.com/users/vineeths96/events{/privacy}",
"followers_url": "https://api.github.com/users/vineeths96/followers",
"following_url": "https://api.github.com/users/vineeths96/following{/other_user}",
"gists_url": "https://api.github.com/users/vineeths96/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vineeths96",
"id": 50873201,
"login": "vineeths96",
"node_id": "MDQ6VXNlcjUwODczMjAx",
"organizations_url": "https://api.github.com/users/vineeths96/orgs",
"received_events_url": "https://api.github.com/users/vineeths96/received_events",
"repos_url": "https://api.github.com/users/vineeths96/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vineeths96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vineeths96/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vineeths96"
} | [] | closed | false | null | [] | null | [] | "2021-01-07T13:59:27Z" | "2021-01-12T18:51:32Z" | "2021-01-12T18:51:32Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1700.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1700",
"merged_at": "2021-01-12T18:51:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1700.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1700"
} | Update Curiosity dialogs DatasetCard
There are some entries in the data fields section yet to be filled. There is little information regarding those fields. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1700/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1700/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5837 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5837/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5837/comments | https://api.github.com/repos/huggingface/datasets/issues/5837/events | https://github.com/huggingface/datasets/issues/5837 | 1,703,019,816 | I_kwDODunzps5lggUo | 5,837 | Use DeepSpeed load myself " .csv " dataset. | {
"avatar_url": "https://avatars.githubusercontent.com/u/58167546?v=4",
"events_url": "https://api.github.com/users/LanShanPi/events{/privacy}",
"followers_url": "https://api.github.com/users/LanShanPi/followers",
"following_url": "https://api.github.com/users/LanShanPi/following{/other_user}",
"gists_url": "https://api.github.com/users/LanShanPi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LanShanPi",
"id": 58167546,
"login": "LanShanPi",
"node_id": "MDQ6VXNlcjU4MTY3NTQ2",
"organizations_url": "https://api.github.com/users/LanShanPi/orgs",
"received_events_url": "https://api.github.com/users/LanShanPi/received_events",
"repos_url": "https://api.github.com/users/LanShanPi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LanShanPi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LanShanPi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LanShanPi"
} | [] | open | false | null | [] | null | [
"Hi ! Doing `load_dataset(\"path/to/data.csv\")` is not supported yet, but you can do\r\n\r\n```python\r\nds = load_dataset(\"csv\", data_files=[\"path/to/data.csv\"])\r\n```",
"@lhoestq thank you.",
"The other question: \r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py\", line 1767, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py\", line 1498, in load_dataset_builder\r\n dataset_module = dataset_module_factory(\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py\", line 1127, in dataset_module_factory\r\n return PackagedDatasetModuleFactory(\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py\", line 708, in get_module\r\n data_files = DataFilesDict.from_local_or_remote(\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/data_files.py\", line 796, in from_local_or_remote\r\n DataFilesList.from_local_or_remote(\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/data_files.py\", line 764, in from_local_or_remote\r\n data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/data_files.py\", line 362, in resolve_patterns_locally_or_by_urls\r\n for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/data_files.py\", line 306, in _resolve_single_pattern_locally\r\n raise FileNotFoundError(error_msg)\r\nFileNotFoundError: Unable to find '/home/fm001/hzl/Data/qa/' at /\r\n>>> mydata = load_dataset(\"/home/fm001/hzl/Data/qa/\")\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py\", line 1767, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py\", line 1508, in load_dataset_builder\r\n builder_cls = import_main_class(dataset_module.module_path)\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py\", line 115, in import_main_class\r\n module = importlib.import_module(module_path)\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/importlib/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1014, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 991, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 975, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 671, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 783, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"/home/fm001/.cache/huggingface/modules/datasets_modules/datasets/qa/b8b9f481eff9d17b769b4b50f30a51da32b47c94d1af4d2bdffb9fc2c589513a/qa.py\", line 2, in <module>\r\n mydata = load_dataset(\"/home/fm001/hzl/Data/qa/\")\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py\", line 1767, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py\", line 1524, in load_dataset_builder\r\n builder_instance: DatasetBuilder = builder_cls(\r\nTypeError: 'NoneType' object is not callable\r\n\r\nAnd I follow the setting with 
https://huggingface.co/docs/datasets/dataset_script"
] | "2023-05-10T02:39:28Z" | "2023-05-15T03:51:36Z" | null | NONE | null | null | null | ### Describe the bug
When I use DeepSpeed to train a model with my own "XXX.csv" dataset, I get the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py", line 1767, in load_dataset
builder_instance = load_dataset_builder(
File "/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py", line 1498, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py", line 1217, in dataset_module_factory
raise FileNotFoundError(
FileNotFoundError: Couldn't find a dataset script at /home/fm001/hzl/Data/qa.csv/qa.csv.py or any data file in the same directory.
### Steps to reproduce the bug
My code is:
from datasets import load_dataset
mydata = load_dataset("/home/fm001/hzl/Data/qa.csv")
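For reference, a minimal sketch of the loading call suggested in the comments (the file path is just the example from above):
```python
from datasets import load_dataset

# use the generic "csv" builder and point data_files at the local file
mydata = load_dataset("csv", data_files=["/home/fm001/hzl/Data/qa.csv"])
```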
### Expected behavior
。。。
### Environment info
。。。 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5837/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5837/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1205 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1205/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1205/comments | https://api.github.com/repos/huggingface/datasets/issues/1205/events | https://github.com/huggingface/datasets/pull/1205 | 757,942,403 | MDExOlB1bGxSZXF1ZXN0NTMzMjA4NDI1 | 1,205 | add lst20 with manual download | {
"avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4",
"events_url": "https://api.github.com/users/cstorm125/events{/privacy}",
"followers_url": "https://api.github.com/users/cstorm125/followers",
"following_url": "https://api.github.com/users/cstorm125/following{/other_user}",
"gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cstorm125",
"id": 15519308,
"login": "cstorm125",
"node_id": "MDQ6VXNlcjE1NTE5MzA4",
"organizations_url": "https://api.github.com/users/cstorm125/orgs",
"received_events_url": "https://api.github.com/users/cstorm125/received_events",
"repos_url": "https://api.github.com/users/cstorm125/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cstorm125"
} | [] | closed | false | null | [] | null | [
"The pytest suite doesn't allow manual downloads so we just make sure that the `datasets-cli test` command to run without errors instead",
"@lhoestq Changes made. Thank you for the review. I've made some same mistakes for https://github.com/huggingface/datasets/pull/1253 too. Will fix them before review."
] | "2020-12-06T14:49:10Z" | "2020-12-09T16:33:10Z" | "2020-12-09T16:33:10Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1205.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1205",
"merged_at": "2020-12-09T16:33:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1205.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1205"
} | passed on local:
```
RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_lst20
```
Not sure how to test:
```
RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_lst20
```
```
LST20 Corpus is a dataset for Thai language processing developed by National Electronics and Computer Technology Center (NECTEC), Thailand.
It offers five layers of linguistic annotation: word boundaries, POS tagging, named entities, clause boundaries, and sentence boundaries.
At a large scale, it consists of 3,164,002 words, 288,020 named entities, 248,181 clauses, and 74,180 sentences, while it is annotated with
16 distinct POS tags. All 3,745 documents are also annotated with one of 15 news genres. Regarding its sheer size, this dataset is
considered large enough for developing joint neural models for NLP.
Manually download at https://aiforthai.in.th/corpus.php
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1205/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1205/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1797 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1797/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1797/comments | https://api.github.com/repos/huggingface/datasets/issues/1797/events | https://github.com/huggingface/datasets/issues/1797 | 797,357,901 | MDU6SXNzdWU3OTczNTc5MDE= | 1,797 | Connection error | {
"avatar_url": "https://avatars.githubusercontent.com/u/46243662?v=4",
"events_url": "https://api.github.com/users/smile0925/events{/privacy}",
"followers_url": "https://api.github.com/users/smile0925/followers",
"following_url": "https://api.github.com/users/smile0925/following{/other_user}",
"gists_url": "https://api.github.com/users/smile0925/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/smile0925",
"id": 46243662,
"login": "smile0925",
"node_id": "MDQ6VXNlcjQ2MjQzNjYy",
"organizations_url": "https://api.github.com/users/smile0925/orgs",
"received_events_url": "https://api.github.com/users/smile0925/received_events",
"repos_url": "https://api.github.com/users/smile0925/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/smile0925/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/smile0925/subscriptions",
"type": "User",
"url": "https://api.github.com/users/smile0925"
} | [] | closed | false | null | [] | null | [
"Hi ! For future references let me add a link to our discussion here : https://github.com/huggingface/datasets/issues/759#issuecomment-770684693\r\n\r\nLet me know if you manage to fix your proxy issue or if we can do something on our end to help you :)"
] | "2021-01-30T07:32:45Z" | "2021-08-04T18:09:37Z" | "2021-08-04T18:09:37Z" | NONE | null | null | null | Hi
I am hitting the error below; please help, and thanks.
`train_data = datasets.load_dataset("xsum", split="train")`
`ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.0.2/datasets/xsum/xsum.py` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1797/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1797/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1851 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1851/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1851/comments | https://api.github.com/repos/huggingface/datasets/issues/1851/events | https://github.com/huggingface/datasets/pull/1851 | 804,523,174 | MDExOlB1bGxSZXF1ZXN0NTcwMjc2MTk5 | 1,851 | set bert_score version dependency | {
"avatar_url": "https://avatars.githubusercontent.com/u/3596?v=4",
"events_url": "https://api.github.com/users/pvl/events{/privacy}",
"followers_url": "https://api.github.com/users/pvl/followers",
"following_url": "https://api.github.com/users/pvl/following{/other_user}",
"gists_url": "https://api.github.com/users/pvl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pvl",
"id": 3596,
"login": "pvl",
"node_id": "MDQ6VXNlcjM1OTY=",
"organizations_url": "https://api.github.com/users/pvl/orgs",
"received_events_url": "https://api.github.com/users/pvl/received_events",
"repos_url": "https://api.github.com/users/pvl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pvl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pvl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pvl"
} | [] | closed | false | null | [] | null | [] | "2021-02-09T12:51:07Z" | "2021-02-09T14:21:48Z" | "2021-02-09T14:21:48Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1851.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1851",
"merged_at": "2021-02-09T14:21:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1851.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1851"
} | Set the bert_score version in requirements since previous versions of bert_score will fail with datasets (closes #843) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1851/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1851/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1708 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1708/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1708/comments | https://api.github.com/repos/huggingface/datasets/issues/1708/events | https://github.com/huggingface/datasets/issues/1708 | 781,631,455 | MDU6SXNzdWU3ODE2MzE0NTU= | 1,708 | <html dir="ltr" lang="en" class="focus-outline-visible"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> | {
"avatar_url": "https://avatars.githubusercontent.com/u/77126849?v=4",
"events_url": "https://api.github.com/users/Louiejay54/events{/privacy}",
"followers_url": "https://api.github.com/users/Louiejay54/followers",
"following_url": "https://api.github.com/users/Louiejay54/following{/other_user}",
"gists_url": "https://api.github.com/users/Louiejay54/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Louiejay54",
"id": 77126849,
"login": "Louiejay54",
"node_id": "MDQ6VXNlcjc3MTI2ODQ5",
"organizations_url": "https://api.github.com/users/Louiejay54/orgs",
"received_events_url": "https://api.github.com/users/Louiejay54/received_events",
"repos_url": "https://api.github.com/users/Louiejay54/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Louiejay54/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Louiejay54/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Louiejay54"
} | [] | closed | false | null | [] | null | [] | "2021-01-07T21:45:24Z" | "2021-01-08T09:00:01Z" | "2021-01-08T09:00:01Z" | NONE | null | null | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1708/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1708/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5441 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5441/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5441/comments | https://api.github.com/repos/huggingface/datasets/issues/5441/events | https://github.com/huggingface/datasets/pull/5441 | 1,548,417,594 | PR_kwDODunzps5IFeCW | 5,441 | resolving a weird tar extract issue | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00"
} | [] | open | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011815 / 0.011353 (0.000463) | 0.006407 / 0.011008 (-0.004601) | 0.132937 / 0.038508 (0.094429) | 0.040634 / 0.023109 (0.017525) | 0.398049 / 0.275898 (0.122151) | 0.498207 / 0.323480 (0.174727) | 0.010111 / 0.007986 (0.002126) | 0.007282 / 0.004328 (0.002954) | 0.103661 / 0.004250 (0.099411) | 0.046223 / 0.037052 (0.009171) | 0.411490 / 0.258489 (0.153001) | 0.480973 / 0.293841 (0.187132) | 0.058397 / 0.128546 (-0.070149) | 0.019952 / 0.075646 (-0.055695) | 0.440734 / 0.419271 (0.021463) | 0.064585 / 0.043533 (0.021052) | 0.392556 / 0.255139 (0.137417) | 0.437842 / 0.283200 (0.154643) | 0.130684 / 0.141683 (-0.010999) | 1.910552 / 1.452155 (0.458397) | 1.984644 / 1.492716 (0.491927) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.264417 / 0.018006 (0.246411) | 0.676519 / 0.000490 (0.676030) | 0.003369 / 0.000200 (0.003169) | 0.000125 / 0.000054 (0.000071) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034558 / 0.037411 (-0.002854) | 0.126561 / 0.014526 (0.112035) | 0.134478 / 0.176557 (-0.042079) | 0.202125 / 0.737135 (-0.535010) | 0.143273 / 0.296338 (-0.153066) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.618592 / 0.215209 (0.403383) | 6.224435 / 2.077655 (4.146780) | 2.636689 / 1.504120 (1.132569) | 2.243507 / 1.541195 (0.702313) | 2.312449 / 1.468490 
(0.843959) | 1.188499 / 4.584777 (-3.396277) | 5.738347 / 3.745712 (1.992635) | 4.891933 / 5.269862 (-0.377929) | 2.697631 / 4.565676 (-1.868046) | 0.140200 / 0.424275 (-0.284076) | 0.015484 / 0.007607 (0.007877) | 0.781947 / 0.226044 (0.555903) | 7.946600 / 2.268929 (5.677671) | 3.365574 / 55.444624 (-52.079050) | 2.783443 / 6.876477 (-4.093034) | 2.738634 / 2.142072 (0.596561) | 1.487247 / 4.805227 (-3.317980) | 0.255681 / 6.500664 (-6.244983) | 0.084607 / 0.075469 (0.009138) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.717846 / 1.841788 (-0.123941) | 18.405566 / 8.074308 (10.331258) | 20.508578 / 10.191392 (10.317186) | 0.262364 / 0.680424 (-0.418060) | 0.050881 / 0.534201 (-0.483319) | 0.587516 / 0.579283 (0.008232) | 0.650900 / 0.434364 (0.216536) | 0.656168 / 0.540337 (0.115830) | 0.778876 / 1.386936 (-0.608061) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010817 / 0.011353 (-0.000536) | 0.007338 / 0.011008 (-0.003670) | 0.131949 / 0.038508 (0.093441) | 0.037244 / 0.023109 (0.014135) | 0.565994 / 0.275898 (0.290096) | 0.567434 / 0.323480 (0.243954) | 0.007733 / 0.007986 (-0.000252) | 0.005216 / 0.004328 (0.000887) | 0.096578 / 0.004250 (0.092328) | 0.056001 / 0.037052 (0.018949) | 0.538209 / 0.258489 (0.279720) | 0.580385 / 0.293841 (0.286544) | 0.053654 / 0.128546 (-0.074892) | 0.019471 / 0.075646 (-0.056176) | 0.448781 / 0.419271 (0.029509) | 0.064774 / 0.043533 (0.021241) | 0.540222 / 0.255139 (0.285083) | 0.563058 / 0.283200 (0.279858) | 0.122716 / 0.141683 (-0.018967) | 1.839402 / 1.452155 (0.387247) | 1.915523 / 1.492716 (0.422806) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.310448 / 0.018006 (0.292442) | 0.603664 / 0.000490 (0.603175) | 0.004833 / 0.000200 (0.004633) | 0.000145 / 0.000054 (0.000090) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032340 / 0.037411 (-0.005072) | 0.130115 / 0.014526 (0.115589) | 0.154192 / 0.176557 (-0.022364) | 0.200655 / 0.737135 (-0.536480) | 0.144961 / 0.296338 (-0.151377) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.671588 / 0.215209 (0.456379) | 6.691642 / 2.077655 (4.613988) | 2.915230 / 1.504120 (1.411110) | 2.573337 / 1.541195 (1.032143) | 2.578204 / 1.468490 (1.109714) | 1.249028 / 4.584777 (-3.335749) | 5.808539 / 3.745712 (2.062827) | 3.079317 / 5.269862 (-2.190545) | 2.033308 / 4.565676 (-2.532369) | 0.142411 / 0.424275 (-0.281864) | 0.015525 / 0.007607 (0.007918) | 0.800389 / 0.226044 (0.574345) | 8.228236 / 2.268929 (5.959308) | 3.660207 / 55.444624 (-51.784417) | 3.021033 / 6.876477 (-3.855444) | 3.088335 / 2.142072 (0.946263) | 1.380137 / 4.805227 (-3.425091) | 0.252065 / 6.500664 (-6.248599) | 0.084302 / 0.075469 (0.008833) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.709429 / 1.841788 (-0.132359) | 18.358770 / 8.074308 (10.284462) | 21.109844 / 10.191392 (10.918452) | 0.231549 / 0.680424 (-0.448875) | 0.029251 / 0.534201 (-0.504950) | 0.560719 / 0.579283 (-0.018564) | 0.610125 / 0.434364 (0.175761) | 0.630015 / 0.540337 (0.089678) | 0.751656 / 1.386936 (-0.635280) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#18baf4eebf71c0db1d9980f7ee164f1272ff8f26 \"CML watermark\")\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5441). All of your documentation changes will be reflected on that endpoint.",
"I think I managed to reproduce it:\r\n\r\n```\r\nrm -rf ~/.cache/huggingface/datasets/HuggingFaceM4___cm4-synthetic-testing\r\nmkdir -p /tmp/xxx/hf-data\r\nsudo ln -s /tmp/xxx /test\r\nmkdir -p /tmp/yyy\r\nln -sf /test/hf-data /tmp/yyy/data\r\ncd /tmp/yyy\r\npython -c 'import sys; from datasets import load_dataset; ds=load_dataset(sys.argv[1])' HuggingFaceM4/cm4-synthetic-testing\r\n```\r\n\r\nPlease note it includes a creation of a symlink from the `/` (so `sudo`) - may be there is a simpler way but I'm just trying to replicate the real setup. Of course please be careful - it's mostly under `/tmp` not to destroy anything if you try to run this.\r\n\r\nthis fails with:\r\n\r\n```\r\nNo config specified, defaulting to: cm4-synthetic-testing/100.unique\r\nDownloading and preparing dataset cm4-synthetic-testing/100.unique (download: 20.71 KiB, generated: 49.99 MiB, post-processed: Unknown size, total: 50.01 MiB) to /home/stas/.cache/huggingface/datasets/HuggingFaceM4___cm4-synthetic-testing/100.unique/1.1.1/2e33dcc086c7209b8ccff4b19e44f1d41b5be53262e7d793142b96c2e984602b...\r\nExtraction of data is blocked (illegal path: /tmp/yyy)\r\n[...]\r\nExtraction of data/115/texts_03.txt is blocked (illegal path: /tmp/yyy)\r\nGenerating 100.unique split: 0%| | 0/100 [00:00<?, ? examples/s]Generating 100-long unique records split\r\n\r\nTraceback (most recent call last):\r\n File \"/mnt/nvme0/code/huggingface/datasets-master/src/datasets/builder.py\", line 1571, in _prepare_split_single\r\n for key, record in generator:\r\n File \"/home/stas/.cache/huggingface/modules/datasets_modules/datasets/HuggingFaceM4--cm4-synthetic-testing/2e33dcc086c7209b8ccff4b19e44f1d41b5be53262e7d793142b96c2e984602b/cm4-synthetic-testing.py\", line 190, in _generate_examples\r\n raise ValueError(f\"can't find any data - check {data_path}\")\r\nValueError: can't find any data - check /home/stas/.cache/huggingface/datasets/downloads/extracted/134227b9b94c4eccf19b205bf3021d4492d0227b9be6c2ddb6bf517d8d55a8cb/data\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/mnt/nvme0/code/huggingface/datasets-master/src/datasets/load.py\", line 1757, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/mnt/nvme0/code/huggingface/datasets-master/src/datasets/builder.py\", line 860, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/mnt/nvme0/code/huggingface/datasets-master/src/datasets/builder.py\", line 1612, in _download_and_prepare\r\n super()._download_and_prepare(\r\n File \"/mnt/nvme0/code/huggingface/datasets-master/src/datasets/builder.py\", line 953, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/mnt/nvme0/code/huggingface/datasets-master/src/datasets/builder.py\", line 1450, in _prepare_split\r\n for job_id, done, content in self._prepare_split_single(\r\n File \"/mnt/nvme0/code/huggingface/datasets-master/src/datasets/builder.py\", line 1607, in _prepare_split_single\r\n raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\ndatasets.builder.DatasetGenerationError: An error occurred while generating the dataset\r\n```\r\n\r\nnote that `illegal path: /tmp/yyy` is now with the mods of this PR.\r\n\r\n----------------------\r\n\r\nAlso I think the whole thing should have failed at the first `illegal path` and not continue running. 
But as it continued and gave:\r\n\r\n\r\n> ValueError: can't find any data - check /home/stas/.cache/huggingface/datasets/downloads/extracted/134227b9b94c4eccf19b205bf3021d4492d0227b9be6c2ddb6bf517d8d55a8cb/data\r\n\r\nwhat can a user do with that other than confirming that that dir is indeed empty, but no clue is given to why and it's far from obvious that one needs to scroll up and discover earlier issues. Most users won't do that.\r\n\r\n(my apologies for writing out so much - was trying to make the situation clear)",
"Thank you, Albert, for the explanation.\r\n\r\nTo summarize I think what's needed is:\r\n\r\n1. add a comment in the code to why this is done for someone being puzzled over the odd code\r\n2. and to use an actionable by the user error message\r\n3. perform an untrapped assert on that tar extract error and not continue, so that the user will not get a later misleading error that the folder is empty and is completely not actionable and it's is far from obvious that one needs to scroll up to find earlier errors, which were trapped.\r\n\r\nAfter reading the advisory I'm still not sure why `cwd` is used and not a designated `~/.cache/huggingface/datasets/downloads/extracted`, I can't see what difference does it make since I could `chdir` to the designated directory and it would be `cwd`. The security solution is trying to ensure that `/etc/passwd` won't get overriden. So why is the check done in `.` and not the real target base directory, since the extraction isn't done in the current working dir. By not using `.` you lower the chances that the user will have all sorts of local symlinks that could trigger the issue since `datasets` typically is the only one managing it's `~/.cache/huggingface/datasets` domain and 99.9% of the time the user won't manually create files in it.\r\n\r\nthank you!\r\n"
] | "2023-01-19T02:17:21Z" | "2023-01-20T16:49:22Z" | null | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5441.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5441",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5441.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5441"
} | ok, every so often, I have been getting a strange failure on dataset install:
```
$ python -c 'import sys; from datasets import load_dataset; ds=load_dataset(sys.argv[1])' HuggingFaceM4/general-pmd-synthetic-testing
No config specified, defaulting to: general-pmd-synthetic-testing/100.unique
Downloading and preparing dataset general-pmd-synthetic-testing/100.unique (download: 3.21 KiB, generated: 16.01 MiB, post-processed: Unknown size, total: 16.02 MiB) to /home/stas/.cache/huggingface/datasets/HuggingFaceM4___general-pmd-synthetic-testing/100.unique/1.1.1/86bc445e3e48cb5ef79de109eb4e54ff85b318cd55c3835c4ee8f86eae33d9d2...
Extraction of data is blocked (illegal path)
Extraction of data/1 is blocked (illegal path)
Extraction of data/1/text.null is blocked (illegal path)
[...]
```
I had no idea what to do with that - what in the world does **illegal path** mean?
I started looking at the code in `TarExtractor` and added a debug print of `base`, which told me that there was a problem with the current directory - which was a clone of one of the hf repos.
This particular dataset extracts into a directory `data`, and the current dir I was running the tests from already contained `data` - a symbolic link to another partition - and somehow all that `badpath` code was blowing up there.
https://github.com/huggingface/datasets/blob/80eb8db74f49b7ee9c0f73a819c22177fabd61db/src/datasets/utils/extract.py#L113-L114
I tried hard to come up with a repro, but no matter what I tried it only fails in that particular clone directory that has a `data` symlink and not anywhere else.
In any case, in this PR I'm proposing to at least give a user a hint of what seems to be an issue.
I'm not at all happy with the info I got with this proposed change, but at least it gave me a hint that `TarExtractor` tries to extract into the current directory without any respect to pre-existing files. Say what?
https://github.com/huggingface/datasets/blob/80eb8db74f49b7ee9c0f73a819c22177fabd61db/src/datasets/utils/extract.py#L110
why won't it use the `datasets` designated directory for that? There would never be a problem if it were to do that.
I had to look at all those `resolved` and `badpath` calls to see what they did and why they failed, since it was far from obvious. It appeared that the code resolved the symlink and compared the result to the original path, which of course didn't match.
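For illustration, here is a rough sketch of the kind of resolved-path check described above (this is an assumption about the general pattern, not the actual `extract.py` code - the real helper names differ):

```python
import os

def badpath(member_path: str, base: str) -> bool:
    # Sketch: a tar member is treated as "illegal" if its resolved location
    # escapes the base directory. Resolving follows symlinks, so a
    # pre-existing `data` symlink inside `base` that points to another
    # partition makes harmless members like "data/1/text.null" resolve
    # outside `base` and get blocked.
    resolved_base = os.path.realpath(base)
    resolved = os.path.realpath(os.path.join(base, member_path))
    return not resolved.startswith(resolved_base + os.sep)

# e.g. with base="." and `./data` being a symlink to another partition:
# badpath("data/1/text.null", ".") -> True -> "Extraction of data/1/text.null is blocked"
```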
So perhaps you have a better solution than what I proposed in this PR. I think that code line I quoted is the one that should be fixed instead.
But if you can't think of a better solution let's merge this at least so that the user will have a clue that the current dir is somehow involved.
p.s. I double checked that if I remove the pre-existing `data` symlink in the current dir I'm running the dataset install command from, the problem goes away too.
Thanks.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5441/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5441/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6151 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6151/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6151/comments | https://api.github.com/repos/huggingface/datasets/issues/6151/events | https://github.com/huggingface/datasets/issues/6151 | 1,851,497,818 | I_kwDODunzps5uW51a | 6,151 | Faster sorting for single key items | {
"avatar_url": "https://avatars.githubusercontent.com/u/47942453?v=4",
"events_url": "https://api.github.com/users/jackapbutler/events{/privacy}",
"followers_url": "https://api.github.com/users/jackapbutler/followers",
"following_url": "https://api.github.com/users/jackapbutler/following{/other_user}",
"gists_url": "https://api.github.com/users/jackapbutler/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jackapbutler",
"id": 47942453,
"login": "jackapbutler",
"node_id": "MDQ6VXNlcjQ3OTQyNDUz",
"organizations_url": "https://api.github.com/users/jackapbutler/orgs",
"received_events_url": "https://api.github.com/users/jackapbutler/received_events",
"repos_url": "https://api.github.com/users/jackapbutler/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jackapbutler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jackapbutler/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jackapbutler"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"`Dataset.sort` essentially does the same thing except it uses `pyarrow.compute.sort_indices` which doesn't involve copying the data into python objects (saving memory)\r\n\r\n```python\r\nsort_keys = [(col, \"ascending\") for col in column_names]\r\nindices = pc.sort_indices(self.data, sort_keys=sort_keys)\r\nreturn self.select(indices)\r\n```",
"Ok interesting, I'll continue debugging to see what is going wrong on my end."
] | "2023-08-15T14:02:31Z" | "2023-08-21T14:38:26Z" | "2023-08-21T14:38:25Z" | NONE | null | null | null | ### Feature request
A faster way to sort a dataset which contains a large number of rows.
### Motivation
The current sorting implementations took significantly longer than expected when I was running on a dataset trying to sort by timestamps.
**Code snippet:**
```python
import os
import datasets

ds = datasets.load_dataset("json", **{"data_files": {"train": "path-to-jsonlines"}, "split": "train"}, num_proc=os.cpu_count(), keep_in_memory=True)
sorted_ds = ds.sort("pubDate", keep_in_memory=True)
```
However, once I switched to a different method which
1. unpacked to a list of tuples
2. sorted tuples by key
3. ran `.select` with the sorted list of indices
It was significantly faster (orders of magnitude, especially with millions of rows)
### Your contribution
I'd be happy to implement a crude single-key sorting algorithm so that other users can benefit from this trick. Broadly, this would take a `Dataset` and perform:
```python
# key_name is the name of the single column to sort by
class Dataset:
    ...
    def _sort(self, key_name: str) -> "Dataset":
        # pair each row index with its key value
        index_keys = [(i, x) for i, x in enumerate(self[key_name])]
        # sort the (index, key) pairs by the key value
        sorted_rows = sorted(index_keys, key=lambda x: x[1])
        sorted_indices = [x[0] for x in sorted_rows]
        return self.select(sorted_indices)
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6151/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6151/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3710 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3710/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3710/comments | https://api.github.com/repos/huggingface/datasets/issues/3710/events | https://github.com/huggingface/datasets/pull/3710 | 1,133,955,393 | PR_kwDODunzps4ymQMQ | 3,710 | Fix CI code quality issue | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | "2022-02-12T12:05:39Z" | "2022-02-12T12:58:05Z" | "2022-02-12T12:58:04Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3710.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3710",
"merged_at": "2022-02-12T12:58:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3710.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3710"
} | Fix CI code quality issue introduced by #3695. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3710/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3710/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3213 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3213/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3213/comments | https://api.github.com/repos/huggingface/datasets/issues/3213/events | https://github.com/huggingface/datasets/pull/3213 | 1,044,745,313 | PR_kwDODunzps4uF6W9 | 3,213 | Fix tuple_ie download url | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [] | "2021-11-04T13:09:07Z" | "2021-11-05T14:16:06Z" | "2021-11-05T14:16:05Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3213.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3213",
"merged_at": "2021-11-05T14:16:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3213.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3213"
} | Fix #3204 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3213/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3213/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2199 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2199/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2199/comments | https://api.github.com/repos/huggingface/datasets/issues/2199/events | https://github.com/huggingface/datasets/pull/2199 | 854,417,318 | MDExOlB1bGxSZXF1ZXN0NjEyMzY0ODU3 | 2,199 | Fix backward compatibility in Dataset.load_from_disk | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"Hi @lhoestq, could you please check if this makes sense? Thanks.",
"What about using `_indices_data_files` field in save_to_disk instead of `_indices_files` ?\r\nThis way future datasets can also be reloaded from older versions of the lib\r\n\r\n`_indices_files` was introduced in a recent PR and was not released",
"Yes, I have seen it is not released yet...\r\n\r\nYou are right! It was your awesome PR on Tables which renamed this. If there is no particular reason for this renaming, yes, we could switch it back to the previous `_indices_data_files`. ;)"
] | "2021-04-09T11:01:10Z" | "2021-04-09T15:57:05Z" | "2021-04-09T15:57:05Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2199.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2199",
"merged_at": "2021-04-09T15:57:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2199.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2199"
Fix backward compatibility when loading from disk an old dataset that was saved with indices under the key "_indices_data_files".
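For illustration, a hypothetical sketch of the key fallback such a fix typically involves (the helper name and state layout here are assumptions, not the actual `datasets` implementation):

```python
from typing import Optional

def get_indices_files(state: dict) -> Optional[list]:
    # Sketch: prefer the current key, but fall back to the legacy
    # "_indices_data_files" key written by older versions of the library.
    if "_indices_files" in state:
        return state["_indices_files"]
    if "_indices_data_files" in state:  # backward compatibility
        return state["_indices_data_files"]
    return None
```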
Related to #2195. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2199/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2199/timeline | null | null | true |