url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | performed_via_github_app | state_reason | draft | pull_request | is_pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/4351 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4351/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4351/comments | https://api.github.com/repos/huggingface/datasets/issues/4351/events | https://github.com/huggingface/datasets/issues/4351 | 1,235,950,209 | I_kwDODunzps5JqxqB | 4,351 | Add optional progress bar for .save_to_disk(..) and .load_from_disk(..) when working with remote filesystems | {
"login": "Rexhaif",
"id": 5154447,
"node_id": "MDQ6VXNlcjUxNTQ0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5154447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rexhaif",
"html_url": "https://github.com/Rexhaif",
"followers_url": "https://api.github.com/users/Rexhaif/followers",
"following_url": "https://api.github.com/users/Rexhaif/following{/other_user}",
"gists_url": "https://api.github.com/users/Rexhaif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rexhaif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rexhaif/subscriptions",
"organizations_url": "https://api.github.com/users/Rexhaif/orgs",
"repos_url": "https://api.github.com/users/Rexhaif/repos",
"events_url": "https://api.github.com/users/Rexhaif/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rexhaif/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi! I like this idea. For consistency with `load_dataset`, we can use `fsspec`'s `TqdmCallback` in `.load_from_disk` to monitor the number of bytes downloaded, and in `.save_to_disk`, we can track the number of saved shards for consistency with `push_to_hub` (after we implement https://github.com/huggingface/datasets/issues/4196)."
] | "2022-05-14T11:30:42" | "2022-12-14T18:22:59" | "2022-12-14T18:22:59" | NONE | null | **Is your feature request related to a problem? Please describe.**
When working with large datasets stored on remote filesystems (such as S3), uploading a dataset can take a really long time. For instance: I was uploading a re-processed version of wmt17 en-ru to my S3 bucket and it took about 35 minutes (and that's with a fiber-optic connection). The only output during that process was a progress bar for flattening indices, and then ~35 minutes of complete silence.
**Describe the solution you'd like**
I want to be able to enable a progress bar when calling .save_to_disk(..) and .load_from_disk(..). It would track either the number of bytes sent/received or the number of records written/loaded, and give some ETA. Basically just tqdm.
**Describe alternatives you've considered**
- Save the dataset to a temporary folder on disk and then upload it using a custom wrapper over botocore that supports a progress bar, like [this](https://alexwlchan.net/2021/04/s3-progress-bars/). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4351/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4351/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4350 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4350/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4350/comments | https://api.github.com/repos/huggingface/datasets/issues/4350/events | https://github.com/huggingface/datasets/pull/4350 | 1,235,505,104 | PR_kwDODunzps43zKIV | 4,350 | Add a new metric: CTC_Consistency | {
"login": "YEdenZ",
"id": 92551194,
"node_id": "U_kgDOBYQ4Gg",
"avatar_url": "https://avatars.githubusercontent.com/u/92551194?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YEdenZ",
"html_url": "https://github.com/YEdenZ",
"followers_url": "https://api.github.com/users/YEdenZ/followers",
"following_url": "https://api.github.com/users/YEdenZ/following{/other_user}",
"gists_url": "https://api.github.com/users/YEdenZ/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YEdenZ/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YEdenZ/subscriptions",
"organizations_url": "https://api.github.com/users/YEdenZ/orgs",
"repos_url": "https://api.github.com/users/YEdenZ/repos",
"events_url": "https://api.github.com/users/YEdenZ/events{/privacy}",
"received_events_url": "https://api.github.com/users/YEdenZ/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-13T17:31:19" | "2022-05-19T10:23:04" | "2022-05-19T10:23:03" | NONE | null | Add CTC_Consistency metric
Do I also need to modify the `test_metric_common.py` file to make it run in the tests? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4350/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4350/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4350",
"html_url": "https://github.com/huggingface/datasets/pull/4350",
"diff_url": "https://github.com/huggingface/datasets/pull/4350.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4350.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4349 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4349/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4349/comments | https://api.github.com/repos/huggingface/datasets/issues/4349/events | https://github.com/huggingface/datasets/issues/4349 | 1,235,474,765 | I_kwDODunzps5Jo9lN | 4,349 | Dataset.map()'s fails at any value of parameter writer_batch_size | {
"login": "plamb-viso",
"id": 99206017,
"node_id": "U_kgDOBenDgQ",
"avatar_url": "https://avatars.githubusercontent.com/u/99206017?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/plamb-viso",
"html_url": "https://github.com/plamb-viso",
"followers_url": "https://api.github.com/users/plamb-viso/followers",
"following_url": "https://api.github.com/users/plamb-viso/following{/other_user}",
"gists_url": "https://api.github.com/users/plamb-viso/gists{/gist_id}",
"starred_url": "https://api.github.com/users/plamb-viso/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/plamb-viso/subscriptions",
"organizations_url": "https://api.github.com/users/plamb-viso/orgs",
"repos_url": "https://api.github.com/users/plamb-viso/repos",
"events_url": "https://api.github.com/users/plamb-viso/events{/privacy}",
"received_events_url": "https://api.github.com/users/plamb-viso/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Note that this same issue occurs even if i preprocess with the more default way of tokenizing that uses LayoutLMv2Processor's internal OCR:\r\n\r\n```python\r\n feature_extractor = LayoutLMv2FeatureExtractor()\r\n tokenizer = LayoutLMv2Tokenizer.from_pretrained(\"microsoft/layoutlmv2-base-uncased\")\r\n processor = LayoutLMv2Processor(feature_extractor, tokenizer)\r\n encoded_inputs = processor(images, padding=\"max_length\", truncation=True)\r\n encoded_inputs[\"image\"] = np.array(encoded_inputs[\"image\"])\r\n encoded_inputs[\"label\"] = examples['label_id']\r\n```",
"Wanted to make sure anyone that finds this also finds my other report: https://github.com/huggingface/datasets/issues/4352",
"Did you close it because you found that it was due to the incorrect Feature types ?",
"Yeah-- my analysis of the issue was wrong in this one so I just closed it while linking to the new issue",
"I met with the same problem when doing some experiments about layoutlm. I tried to set the writer_batch_size to 1, and the error still exists. Is there any solutions to this problem?",
"The problem lies in how your Features are defined. It's erroring out when it actually goes to write them to disk"
] | "2022-05-13T16:55:12" | "2022-06-02T12:51:11" | "2022-05-14T15:08:08" | NONE | null | ## Describe the bug
If the value of `writer_batch_size` is less than the total number of instances in the dataset, it will fail at that same number of instances. If it is greater than the total number of instances, it fails on the last instance.
Context:
I am attempting to fine-tune a pre-trained HuggingFace transformers model called LayoutLMv2. This model takes three inputs: document images, words and word bounding boxes. [The Processor for this model has two options](https://huggingface.co/docs/transformers/model_doc/layoutlmv2#usage-layoutlmv2processor): the default is to pass a document to the Processor and let it create images of the document and use PyTesseract to perform OCR and generate words/bounding boxes. The other option is to provide `revision="no_ocr"` to the pre-trained model, which lets you use your own OCR results (in my case, Amazon Textract), so you have to provide the image, words and bounding boxes yourself. I am using this second option, which might be good context for the bug.
I am using the Dataset.map() paradigm to create these three inputs, encode them and save the dataset. Note that my documents (data instances) on average are fairly large and can range from 1 page up to 300 pages.
Code I am using is provided below
## Steps to reproduce the bug
I do not have explicit sample code, but I will paste the code I'm using in case reading it helps. When `.map()` is called, the dataset has 2933 rows, many of which represent large PDF documents.
```python
def get_encoded_data(data):
dataset = Dataset.from_pandas(data)
unique_labels = data['label'].unique()
features = Features({
'image': Array3D(dtype="int64", shape=(3, 224, 224)),
'input_ids': Sequence(feature=Value(dtype='int64')),
'attention_mask': Sequence(Value(dtype='int64')),
'token_type_ids': Sequence(Value(dtype='int64')),
'bbox': Array2D(dtype="int64", shape=(512, 4)),
'label': ClassLabel(num_classes=len(unique_labels), names=unique_labels),
})
encoded_dataset = dataset.map(preprocess_data, features=features, remove_columns=dataset.column_names, writer_batch_size=dataset.num_rows+1)
encoded_dataset.save_to_disk(TRAINING_DATA_PATH + ENCODED_DATASET_NAME)
encoded_dataset.set_format(type="torch")
return encoded_dataset
```
```python
PROCESSOR = LayoutLMv2Processor.from_pretrained(MODEL_PATH, revision="no_ocr", use_fast=False)
def preprocess_data(examples):
directory = os.path.join(FILES_PATH, examples['file_location'])
images_dir = os.path.join(directory, PDF_IMAGE_DIR)
textract_response_path = os.path.join(directory, 'textract.json')
doc_meta_path = os.path.join(directory, 'doc_meta.json')
textract_document = get_textract_document(textract_response_path, doc_meta_path)
images, words, bboxes = get_doc_training_data(images_dir, textract_document)
encoded_inputs = PROCESSOR(images, words, boxes=bboxes, padding="max_length", truncation=True)
# https://github.com/NielsRogge/Transformers-Tutorials/issues/36
encoded_inputs["image"] = np.array(encoded_inputs["image"])
encoded_inputs["label"] = examples['label_id']
return encoded_inputs
```
## Expected results
My expectation is that `writer_batch_size` allows one to simply trade off performance and memory requirements, not that it must be a specific number for `.map()` to function correctly.
## Actual results
If writer_batch_size is set to a value less than the number of rows, I get either:
```
OverflowError: There was an overflow with type <class 'list'>. Try to reduce writer_batch_size to have batches smaller than 2GB.
(offset overflow while concatenating arrays)
```
or simply
```
zsh: killed python doc_classification.py
UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
```
If it is greater than the number of rows, I get the `zsh: killed` error above.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.1.0
- Platform: macOS-12.2.1-arm64-arm-64bit
- Python version: 3.9.12
- PyArrow version: 6.0.1
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4349/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4349/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4348 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4348/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4348/comments | https://api.github.com/repos/huggingface/datasets/issues/4348/events | https://github.com/huggingface/datasets/issues/4348 | 1,235,432,976 | I_kwDODunzps5JozYQ | 4,348 | `inspect` functions can't fetch dataset script from the Hub | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi, thanks for reporting! `git bisect` points to #2986 as the PR that introduced the bug. Since then, there have been some additional changes to the loading logic, and in the current state, `force_local_path` (set via `local_path`) forbids pulling a script from the internet instead of downloading it: https://github.com/huggingface/datasets/blob/cfae0545b2ba05452e16136cacc7d370b4b186a1/src/datasets/inspect.py#L89-L91\r\n\r\ncc @lhoestq: `force_local_path` is only used in `inspect_dataset` and `inspect_metric`. Is it OK if we revert the behavior to match the old one?",
"Good catch ! Yea I think it's fine :)"
] | "2022-05-13T16:08:26" | "2022-06-09T10:26:06" | "2022-06-09T10:26:06" | MEMBER | null | The `inspect_dataset` and `inspect_metric` functions are unable to retrieve a dataset or metric script from the Hub and store it locally at the specified `local_path`:
```py
>>> from datasets import inspect_dataset
>>> inspect_dataset('rotten_tomatoes', local_path='path/to/my/local/folder')
FileNotFoundError: Couldn't find a dataset script at /content/rotten_tomatoes/rotten_tomatoes.py or any data file in the same directory.
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4348/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4348/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4347 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4347/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4347/comments | https://api.github.com/repos/huggingface/datasets/issues/4347/events | https://github.com/huggingface/datasets/pull/4347 | 1,235,318,064 | PR_kwDODunzps43yihq | 4,347 | Support remote cache_dir | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-13T14:26:35" | "2022-05-25T16:35:23" | "2022-05-25T16:27:03" | MEMBER | null | This PR implements complete support for remote `cache_dir`. Before, the support was just partial.
This is useful to create datasets using the Apache Beam (parallel data processing) builder with `cache_dir` in a remote bucket, e.g., for the Wikipedia dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4347/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4347/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4347",
"html_url": "https://github.com/huggingface/datasets/pull/4347",
"diff_url": "https://github.com/huggingface/datasets/pull/4347.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4347.patch",
"merged_at": "2022-05-25T16:27:03"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4346 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4346/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4346/comments | https://api.github.com/repos/huggingface/datasets/issues/4346/events | https://github.com/huggingface/datasets/issues/4346 | 1,235,067,062 | I_kwDODunzps5JnaC2 | 4,346 | GH Action to build documentation never ends | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | "2022-05-13T10:44:44" | "2022-05-13T11:22:00" | "2022-05-13T11:22:00" | MEMBER | null | ## Describe the bug
See: https://github.com/huggingface/datasets/runs/6418035586?check_suite_focus=true
I finally forced the cancel of the workflow. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4346/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4346/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4345 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4345/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4345/comments | https://api.github.com/repos/huggingface/datasets/issues/4345/events | https://github.com/huggingface/datasets/pull/4345 | 1,235,062,787 | PR_kwDODunzps43xrky | 4,345 | Fix never ending GH Action to build documentation | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-13T10:40:10" | "2022-05-13T11:29:43" | "2022-05-13T11:22:00" | MEMBER | null | There was an unclosed code block introduced by:
- #4313
https://github.com/huggingface/datasets/pull/4313/files#diff-f933ce41f71c6c0d1ce658e27de62cbe0b45d777e9e68056dd012ac3eb9324f7R538
This causes the "Make documentation" step in the "Build documentation" workflow to never finish.
- I think this issue should also be addressed in the `doc-builder` lib.
Fix #4346. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4345/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4345/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4345",
"html_url": "https://github.com/huggingface/datasets/pull/4345",
"diff_url": "https://github.com/huggingface/datasets/pull/4345.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4345.patch",
"merged_at": "2022-05-13T11:22:00"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4344 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4344/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4344/comments | https://api.github.com/repos/huggingface/datasets/issues/4344/events | https://github.com/huggingface/datasets/pull/4344 | 1,234,882,542 | PR_kwDODunzps43xFEn | 4,344 | Fix docstring in DatasetDict::shuffle | {
"login": "felixdivo",
"id": 4403130,
"node_id": "MDQ6VXNlcjQ0MDMxMzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4403130?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/felixdivo",
"html_url": "https://github.com/felixdivo",
"followers_url": "https://api.github.com/users/felixdivo/followers",
"following_url": "https://api.github.com/users/felixdivo/following{/other_user}",
"gists_url": "https://api.github.com/users/felixdivo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/felixdivo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/felixdivo/subscriptions",
"organizations_url": "https://api.github.com/users/felixdivo/orgs",
"repos_url": "https://api.github.com/users/felixdivo/repos",
"events_url": "https://api.github.com/users/felixdivo/events{/privacy}",
"received_events_url": "https://api.github.com/users/felixdivo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-13T08:06:00" | "2022-05-25T09:23:43" | "2022-05-24T15:35:21" | CONTRIBUTOR | null | I think due to #1626, the docstring contained this error ever since `seed` was added. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4344/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4344",
"html_url": "https://github.com/huggingface/datasets/pull/4344",
"diff_url": "https://github.com/huggingface/datasets/pull/4344.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4344.patch",
"merged_at": "2022-05-24T15:35:21"
} | true |
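For readers unfamiliar with what the `seed` argument in `shuffle` buys you, a stdlib-only sketch (not the `datasets` implementation) shows the key property its docstring should describe — the same seed reproduces the same order:

```python
import random

def shuffled(items, seed):
    """Return a new list with the items shuffled deterministically by `seed`."""
    rng = random.Random(seed)  # independent generator; global random state untouched
    out = list(items)
    rng.shuffle(out)
    return out
```

Calling this twice with the same seed yields identical orders, which is exactly the reproducibility guarantee users expect from `DatasetDict.shuffle(seed=...)`.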
https://api.github.com/repos/huggingface/datasets/issues/4343 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4343/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4343/comments | https://api.github.com/repos/huggingface/datasets/issues/4343/events | https://github.com/huggingface/datasets/issues/4343 | 1,234,864,168 | I_kwDODunzps5Jmogo | 4,343 | Metrics documentation is not accessible in the datasets doc UI | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067400959,
"node_id": "MDU6TGFiZWwyMDY3NDAwOTU5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Metric%20discussion",
"name": "Metric discussion",
"color": "d722e8",
"default": false,
"description": "Discussions on the metrics"
}
] | closed | false | null | [] | null | [
"Hey @fxmarty :) Yes we are working on showing the docs of all the metrics on the Hugging face website. If you want to follow the advancements you can check the [evaluate](https://github.com/huggingface/evaluate) repository cc @lvwerra @sashavor "
] | "2022-05-13T07:46:30" | "2022-06-03T08:50:25" | "2022-06-03T08:50:25" | CONTRIBUTOR | null | **Is your feature request related to a problem? Please describe.**
Searching for a metric name like "seqeval" yields no results on https://huggingface.co/docs/datasets/master/en/index . One needs to look in `datasets/metrics/README.md` to find the doc. Even in the `README.md`, it can be hard to understand what a metric expects as input; for example, for `squad` there is a [key `id`](https://github.com/huggingface/datasets/blob/1a4c185663a6958f48ec69624473fdc154a36a9d/metrics/squad/squad.py#L42) documented only in the function doc but not in the `README.md`, and one needs to look into the code to understand what the metric expects.
**Describe the solution you'd like**
Have the documentation for metrics appear as well in the doc UI, e.g. this https://github.com/huggingface/datasets/blob/1a4c185663a6958f48ec69624473fdc154a36a9d/metrics/squad/squad.py#L21-L63
I know there are plans to migrate metrics to the evaluate library, but just pointing this out.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4343/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4343/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4342 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4342/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4342/comments | https://api.github.com/repos/huggingface/datasets/issues/4342/events | https://github.com/huggingface/datasets/pull/4342 | 1,234,743,765 | PR_kwDODunzps43woHm | 4,342 | Fix failing CI on Windows for sari and wiki_split metrics | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-13T05:03:38" | "2022-05-13T05:47:42" | "2022-05-13T05:47:42" | MEMBER | null | This PR adds `sacremoses` as explicit tests dependency (required by sari and wiki_split metrics).
Before, this library was installed as a third-party dependency, but this is no longer the case for Windows.
Fix #4341. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4342/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4342/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4342",
"html_url": "https://github.com/huggingface/datasets/pull/4342",
"diff_url": "https://github.com/huggingface/datasets/pull/4342.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4342.patch",
"merged_at": "2022-05-13T05:47:41"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4341 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4341/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4341/comments | https://api.github.com/repos/huggingface/datasets/issues/4341/events | https://github.com/huggingface/datasets/issues/4341 | 1,234,739,703 | I_kwDODunzps5JmKH3 | 4,341 | Failing CI on Windows for sari and wiki_split metrics | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | "2022-05-13T04:55:17" | "2022-05-13T05:47:41" | "2022-05-13T05:47:41" | MEMBER | null | ## Describe the bug
Our CI has been failing since yesterday on Windows for metrics: sari and wiki_split
```
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_sari - ...
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_wiki_split
```
See: https://app.circleci.com/pipelines/github/huggingface/datasets/11928/workflows/79daa5e7-65c9-4e85-829b-00d2bfbd076a/jobs/71594 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4341/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4341/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4340 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4340/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4340/comments | https://api.github.com/repos/huggingface/datasets/issues/4340/events | https://github.com/huggingface/datasets/pull/4340 | 1,234,671,025 | PR_kwDODunzps43wY1U | 4,340 | Fix irc_disentangle dataset script | {
"login": "i-am-pad",
"id": 32005017,
"node_id": "MDQ6VXNlcjMyMDA1MDE3",
"avatar_url": "https://avatars.githubusercontent.com/u/32005017?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/i-am-pad",
"html_url": "https://github.com/i-am-pad",
"followers_url": "https://api.github.com/users/i-am-pad/followers",
"following_url": "https://api.github.com/users/i-am-pad/following{/other_user}",
"gists_url": "https://api.github.com/users/i-am-pad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/i-am-pad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/i-am-pad/subscriptions",
"organizations_url": "https://api.github.com/users/i-am-pad/orgs",
"repos_url": "https://api.github.com/users/i-am-pad/repos",
"events_url": "https://api.github.com/users/i-am-pad/events{/privacy}",
"received_events_url": "https://api.github.com/users/i-am-pad/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-13T02:37:57" | "2022-05-24T15:37:30" | "2022-05-24T15:37:29" | NONE | null | updated extracted dataset's repo's latest commit hash (included in tarball's name), and updated the related data_infos.json | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4340/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4340/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4340",
"html_url": "https://github.com/huggingface/datasets/pull/4340",
"diff_url": "https://github.com/huggingface/datasets/pull/4340.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4340.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4339 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4339/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4339/comments | https://api.github.com/repos/huggingface/datasets/issues/4339/events | https://github.com/huggingface/datasets/pull/4339 | 1,234,496,289 | PR_kwDODunzps43v0WT | 4,339 | Dataset loader for the MSLR2022 shared task | {
"login": "JohnGiorgi",
"id": 8917831,
"node_id": "MDQ6VXNlcjg5MTc4MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JohnGiorgi",
"html_url": "https://github.com/JohnGiorgi",
"followers_url": "https://api.github.com/users/JohnGiorgi/followers",
"following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}",
"gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions",
"organizations_url": "https://api.github.com/users/JohnGiorgi/orgs",
"repos_url": "https://api.github.com/users/JohnGiorgi/repos",
"events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}",
"received_events_url": "https://api.github.com/users/JohnGiorgi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-12T21:23:41" | "2022-07-18T17:19:27" | "2022-07-18T16:58:34" | CONTRIBUTOR | null | This PR adds a dataset loader for the [MSLR2022 Shared Task](https://github.com/allenai/mslr-shared-task). Both the MS^2 and Cochrane datasets can be loaded with this dataloader:
```python
from datasets import load_dataset
ms2 = load_dataset("mslr2022", "ms2")
cochrane = load_dataset("mslr2022", "cochrane")
```
Usage looks like:
```python
>>> ms2 = load_dataset("mslr2022", "ms2", split="validation")
>>> ms2.keys()
dict_keys(['review_id', 'pmid', 'title', 'abstract', 'target', 'background', 'reviews_info'])
>>> ms2[0].target
'Conclusions SC therapy is effective for PAH in pre clinical studies .\nThese results may help to st and ardise pre clinical animal studies and provide a theoretical basis for clinical trial design in the future .'
```
I have tested this works with the following command:
```bash
datasets-cli test datasets/mslr2022 --save_infos --all_configs
```
However I am having a little trouble generating the dummy data
```bash
datasets-cli dummy_data datasets/mslr2022 --auto_generate
```
errors out with the following stack trace:
```
Couldn't generate dummy file 'datasets/mslr2022/dummy/ms2/1.0.0/dummy_data/mslr_data.tar.gz/mslr_data/ms2/convert_to_cochrane.py'. Ignore that if this file is not useful for dummy data.
Traceback (most recent call last):
File "/Users/johngiorgi/.pyenv/versions/datasets/bin/datasets-cli", line 11, in <module>
load_entry_point('datasets', 'console_scripts', 'datasets-cli')()
File "/Users/johngiorgi/Documents/dev/datasets/src/datasets/commands/datasets_cli.py", line 39, in main
service.run()
File "/Users/johngiorgi/Documents/dev/datasets/src/datasets/commands/dummy_data.py", line 319, in run
keep_uncompressed=self._keep_uncompressed,
File "/Users/johngiorgi/Documents/dev/datasets/src/datasets/commands/dummy_data.py", line 361, in _autogenerate_dummy_data
dataset_builder._prepare_split(split_generator, check_duplicate_keys=False)
File "/Users/johngiorgi/Documents/dev/datasets/src/datasets/builder.py", line 1146, in _prepare_split
desc=f"Generating {split_info.name} split",
File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/Users/johngiorgi/.cache/huggingface/modules/datasets_modules/datasets/mslr2022/b4becd2f52cf18255d4934d7154c2a1127fb393371b87b3c1fc2c8b35a777cea/mslr2022.py", line 149, in _generate_examples
reviews_info_df = pd.read_csv(reviews_info_filepath, index_col=0)
File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/util/_decorators.py", line 311, in wrapper
return func(*args, **kwargs)
File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/io/parsers/readers.py", line 586, in read_csv
return _read(filepath_or_buffer, kwds)
File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/io/parsers/readers.py", line 488, in _read
return parser.read(nrows)
File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/io/parsers/readers.py", line 1047, in read
index, columns, col_dict = self._engine.read(nrows)
File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/io/parsers/c_parser_wrapper.py", line 224, in read
chunks = self._reader.read_low_memory(nrows)
File "pandas/_libs/parsers.pyx", line 801, in pandas._libs.parsers.TextReader.read_low_memory
File "pandas/_libs/parsers.pyx", line 857, in pandas._libs.parsers.TextReader._read_rows
File "pandas/_libs/parsers.pyx", line 843, in pandas._libs.parsers.TextReader._tokenize_rows
File "pandas/_libs/parsers.pyx", line 1925, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: EOF inside string starting at row 2
```
I think this may have to do with unusual line terminators in the original data. When I open it in VSCode, it complains:
```
The file 'dev-inputs.csv' contains one or more unusual line terminator characters, like Line Separator (LS) or Paragraph Separator (PS).
It is recommended to remove them from the file. This can be configured via `editor.unusualLineTerminators`.
```
Tagging the organizers of the shared task in case they want to sanity check this or add any info to the model card :) @lucylw @jayded
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4339/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4339",
"html_url": "https://github.com/huggingface/datasets/pull/4339",
"diff_url": "https://github.com/huggingface/datasets/pull/4339.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4339.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4338 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4338/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4338/comments | https://api.github.com/repos/huggingface/datasets/issues/4338/events | https://github.com/huggingface/datasets/pull/4338 | 1,234,478,851 | PR_kwDODunzps43vwsm | 4,338 | Eval metadata Batch 4: Tweet Eval, Tweets Hate Speech Detection, VCTK, Weibo NER, Wisesight Sentiment, XSum, Yahoo Answers Topics, Yelp Polarity, Yelp Review Full | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-12T21:02:08" | "2022-05-16T15:51:02" | "2022-05-16T15:42:59" | CONTRIBUTOR | null | Adding evaluation metadata for:
- Tweet Eval
- Tweets Hate Speech Detection
- VCTK
- Weibo NER
- Wisesight Sentiment
- XSum
- Yahoo Answers Topics
- Yelp Polarity
- Yelp Review Full | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4338/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4338/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4338",
"html_url": "https://github.com/huggingface/datasets/pull/4338",
"diff_url": "https://github.com/huggingface/datasets/pull/4338.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4338.patch",
"merged_at": "2022-05-16T15:42:59"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4337 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4337/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4337/comments | https://api.github.com/repos/huggingface/datasets/issues/4337/events | https://github.com/huggingface/datasets/pull/4337 | 1,234,470,083 | PR_kwDODunzps43vuzF | 4,337 | Eval metadata batch 3: Reddit, Rotten Tomatoes, SemEval 2010, Sentiment 140, SMS Spam, Snips, SQuAD, SQuAD v2, Timit ASR | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-12T20:52:02" | "2022-05-16T16:26:19" | "2022-05-16T16:18:30" | CONTRIBUTOR | null | Adding evaluation metadata for:
- Reddit
- Rotten Tomatoes
- SemEval 2010
- Sentiment 140
- SMS Spam
- Snips
- SQuAD
- SQuAD v2
- Timit ASR | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4337/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4337",
"html_url": "https://github.com/huggingface/datasets/pull/4337",
"diff_url": "https://github.com/huggingface/datasets/pull/4337.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4337.patch",
"merged_at": "2022-05-16T16:18:30"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4336 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4336/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4336/comments | https://api.github.com/repos/huggingface/datasets/issues/4336/events | https://github.com/huggingface/datasets/pull/4336 | 1,234,446,174 | PR_kwDODunzps43vpqG | 4,336 | Eval metadata batch 2 : Health Fact, Jigsaw Toxicity, LIAR, LJ Speech, MSRA NER, Multi News, NCBI Disease, Poem Sentiment | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-12T20:24:45" | "2022-05-16T16:25:00" | "2022-05-16T16:24:59" | CONTRIBUTOR | null | Adding evaluation metadata for :
- Health Fact
- Jigsaw Toxicity
- LIAR
- LJ Speech
- MSRA NER
- Multi News
- NCBI Disease
- Poem Sentiment | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4336/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4336",
"html_url": "https://github.com/huggingface/datasets/pull/4336",
"diff_url": "https://github.com/huggingface/datasets/pull/4336.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4336.patch",
"merged_at": "2022-05-16T16:24:59"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4335 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4335/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4335/comments | https://api.github.com/repos/huggingface/datasets/issues/4335/events | https://github.com/huggingface/datasets/pull/4335 | 1,234,157,123 | PR_kwDODunzps43usJP | 4,335 | Eval metadata batch 1: BillSum, CoNLL2003, CoNLLPP, CUAD, Emotion, GigaWord, GLUE, Hate Speech 18, Hate Speech | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-12T15:28:16" | "2022-05-16T16:31:10" | "2022-05-16T16:23:09" | CONTRIBUTOR | null | Adding evaluation metadata for:
- BillSum
- CoNLL2003
- CoNLLPP
- CUAD
- Emotion
- GigaWord
- GLUE
- Hate Speech 18
- Hate Speech Offensive | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4335/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4335/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4335",
"html_url": "https://github.com/huggingface/datasets/pull/4335",
"diff_url": "https://github.com/huggingface/datasets/pull/4335.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4335.patch",
"merged_at": "2022-05-16T16:23:08"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4334 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4334/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4334/comments | https://api.github.com/repos/huggingface/datasets/issues/4334/events | https://github.com/huggingface/datasets/pull/4334 | 1,234,103,477 | PR_kwDODunzps43uguB | 4,334 | Adding eval metadata for billsum | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-12T14:49:08" | "2022-05-12T14:49:24" | "2022-05-12T14:49:24" | CONTRIBUTOR | null | Adding eval metadata for billsum | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4334/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4334/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4334",
"html_url": "https://github.com/huggingface/datasets/pull/4334",
"diff_url": "https://github.com/huggingface/datasets/pull/4334.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4334.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4333 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4333/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4333/comments | https://api.github.com/repos/huggingface/datasets/issues/4333/events | https://github.com/huggingface/datasets/pull/4333 | 1,234,038,705 | PR_kwDODunzps43uSuj | 4,333 | Adding eval metadata for Banking 77 | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-12T14:05:05" | "2022-05-12T21:03:32" | "2022-05-12T21:03:31" | CONTRIBUTOR | null | Adding eval metadata for Banking 77 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4333/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4333/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4333",
"html_url": "https://github.com/huggingface/datasets/pull/4333",
"diff_url": "https://github.com/huggingface/datasets/pull/4333.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4333.patch",
"merged_at": "2022-05-12T21:03:31"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4332 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4332/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4332/comments | https://api.github.com/repos/huggingface/datasets/issues/4332/events | https://github.com/huggingface/datasets/pull/4332 | 1,234,021,188 | PR_kwDODunzps43uO8S | 4,332 | Adding eval metadata for arabic speech corpus | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-12T13:51:38" | "2022-05-12T21:03:21" | "2022-05-12T21:03:20" | CONTRIBUTOR | null | Adding eval metadata for arabic speech corpus | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4332/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4332/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4332",
"html_url": "https://github.com/huggingface/datasets/pull/4332",
"diff_url": "https://github.com/huggingface/datasets/pull/4332.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4332.patch",
"merged_at": "2022-05-12T21:03:20"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4331 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4331/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4331/comments | https://api.github.com/repos/huggingface/datasets/issues/4331/events | https://github.com/huggingface/datasets/pull/4331 | 1,234,016,110 | PR_kwDODunzps43uN2R | 4,331 | Adding eval metadata to Amazon Polarity | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-12T13:47:59" | "2022-05-12T21:03:14" | "2022-05-12T21:03:13" | CONTRIBUTOR | null | Adding eval metadata to Amazon Polarity | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4331/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4331/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4331",
"html_url": "https://github.com/huggingface/datasets/pull/4331",
"diff_url": "https://github.com/huggingface/datasets/pull/4331.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4331.patch",
"merged_at": "2022-05-12T21:03:13"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4330 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4330/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4330/comments | https://api.github.com/repos/huggingface/datasets/issues/4330/events | https://github.com/huggingface/datasets/pull/4330 | 1,233,992,681 | PR_kwDODunzps43uIwm | 4,330 | Adding eval metadata to Allociné dataset | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-12T13:31:39" | "2022-05-12T21:03:05" | "2022-05-12T21:03:05" | CONTRIBUTOR | null | Adding eval metadata to Allociné dataset | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4330/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4330/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4330",
"html_url": "https://github.com/huggingface/datasets/pull/4330",
"diff_url": "https://github.com/huggingface/datasets/pull/4330.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4330.patch",
"merged_at": "2022-05-12T21:03:05"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4329 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4329/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4329/comments | https://api.github.com/repos/huggingface/datasets/issues/4329/events | https://github.com/huggingface/datasets/pull/4329 | 1,233,991,207 | PR_kwDODunzps43uIcF | 4,329 | Adding eval metadata for AG News | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-12T13:30:32" | "2022-05-12T21:02:41" | "2022-05-12T21:02:40" | CONTRIBUTOR | null | Adding eval metadata for AG News | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4329/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4329/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4329",
"html_url": "https://github.com/huggingface/datasets/pull/4329",
"diff_url": "https://github.com/huggingface/datasets/pull/4329.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4329.patch",
"merged_at": "2022-05-12T21:02:40"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4328 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4328/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4328/comments | https://api.github.com/repos/huggingface/datasets/issues/4328/events | https://github.com/huggingface/datasets/pull/4328 | 1,233,856,690 | PR_kwDODunzps43trrd | 4,328 | Fix and clean Apache Beam functionality | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-12T11:41:07" | "2022-05-24T13:43:11" | "2022-05-24T13:34:32" | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4328/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4328/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4328",
"html_url": "https://github.com/huggingface/datasets/pull/4328",
"diff_url": "https://github.com/huggingface/datasets/pull/4328.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4328.patch",
"merged_at": "2022-05-24T13:34:32"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4327 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4327/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4327/comments | https://api.github.com/repos/huggingface/datasets/issues/4327/events | https://github.com/huggingface/datasets/issues/4327 | 1,233,840,020 | I_kwDODunzps5JiueU | 4,327 | `wikipedia` pre-processed datasets | {
"login": "vpj",
"id": 81152,
"node_id": "MDQ6VXNlcjgxMTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/81152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vpj",
"html_url": "https://github.com/vpj",
"followers_url": "https://api.github.com/users/vpj/followers",
"following_url": "https://api.github.com/users/vpj/following{/other_user}",
"gists_url": "https://api.github.com/users/vpj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vpj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vpj/subscriptions",
"organizations_url": "https://api.github.com/users/vpj/orgs",
"repos_url": "https://api.github.com/users/vpj/repos",
"events_url": "https://api.github.com/users/vpj/events{/privacy}",
"received_events_url": "https://api.github.com/users/vpj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @vpj, thanks for reporting.\r\n\r\nI'm sorry, but I can't reproduce your bug: I load \"20220301.simple\"in 9 seconds:\r\n```shell\r\ntime python -c \"from datasets import load_dataset; load_dataset('wikipedia', '20220301.simple')\"\r\n\r\nDownloading and preparing dataset wikipedia/20220301.simple (download: 228.58 MiB, generated: 224.18 MiB, post-processed: Unknown size, total: 452.76 MiB) to .../.cache/huggingface/datasets/wikipedia/20220301.simple/2.0.0/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559...\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.66k/1.66k [00:00<00:00, 1.02MB/s]\r\nDownloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 235M/235M [00:02<00:00, 82.8MB/s]\r\nDataset wikipedia downloaded and prepared to .../.cache/huggingface/datasets/wikipedia/20220301.simple/2.0.0/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 290.75it/s]\r\n\r\nreal\t0m9.693s\r\nuser\t0m6.002s\r\nsys\t0m3.260s\r\n```\r\n\r\nCould you please check your environment info, as requested when opening this issue?\r\n```\r\n## Environment info\r\n<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->\r\n- `datasets` version:\r\n- Platform:\r\n- Python version:\r\n- PyArrow version:\r\n```\r\nMaybe you are using an old version of `datasets`...",
"Downloading and processing `wikipedia simple` dataset completed in under 11sec on M1 Mac. Could you please check `dataset` version as mentioned by @albertvillanova? Also check system specs, if system is under load processing could take some time I guess."
] | "2022-05-12T11:25:42" | "2022-08-31T08:26:57" | "2022-08-31T08:26:57" | NONE | null | ## Describe the bug
[Wikipedia](https://huggingface.co/datasets/wikipedia) dataset readme says that certain subsets are preprocessed. However it seems like they are not available. When I try to load them it takes a really long time, and it seems like it's processing them.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("wikipedia", "20220301.en")
```
## Expected results
To load the dataset
## Actual results
Takes a very long time to load (after downloading)
After `Downloading data files: 100%`. It takes hours and gets killed.
Tried `wikipedia.simple` and it got processed after ~30mins. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4327/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4327/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4326 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4326/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4326/comments | https://api.github.com/repos/huggingface/datasets/issues/4326/events | https://github.com/huggingface/datasets/pull/4326 | 1,233,818,489 | PR_kwDODunzps43tjWy | 4,326 | Fix type hint and documentation for `new_fingerprint` | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-12T11:05:08" | "2022-06-01T13:04:45" | "2022-06-01T12:56:18" | CONTRIBUTOR | null | Currently, there are no type hints nor `Optional` for the argument `new_fingerprint` in several methods of `datasets.arrow_dataset.Dataset`.
There was some documentation missing as well.
Note that pylance is happy with the type hints, but pyright does not detect that `new_fingerprint` is set within the decorator.
The modifications in this PR are fine since here https://github.com/huggingface/datasets/blob/aa743886221d76afb409d263e1b136e7a71fe2b4/src/datasets/fingerprint.py#L446-L454
for the non-inplace case we make sure to auto-generate a new fingerprint (as indicated in the doc). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4326/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4326/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4326",
"html_url": "https://github.com/huggingface/datasets/pull/4326",
"diff_url": "https://github.com/huggingface/datasets/pull/4326.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4326.patch",
"merged_at": "2022-06-01T12:56:18"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4325 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4325/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4325/comments | https://api.github.com/repos/huggingface/datasets/issues/4325/events | https://github.com/huggingface/datasets/issues/4325 | 1,233,812,191 | I_kwDODunzps5Jinrf | 4,325 | Dataset Viewer issue for strombergnlp/offenseval_2020, strombergnlp/polstance | {
"login": "leondz",
"id": 121934,
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leondz",
"html_url": "https://github.com/leondz",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"repos_url": "https://api.github.com/users/leondz/repos",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Not sure if it's related... I was going to raise an issue for https://huggingface.co/datasets/domenicrosati/TruthfulQA which also has the same issue... https://huggingface.co/datasets/domenicrosati/TruthfulQA/viewer/domenicrosati--TruthfulQA/train \r\n\r\n",
"Yes, it's related. The backend behind the dataset viewer is currently under too much load, and these datasets are still in the jobs queue. We're actively working on this issue, and we expect to fix the issue permanently soon. Thanks for your patience 🙏 ",
"Thanks @severo and no worries! - a suggestion for a UI usability thing maybe is to indicate that the dataset processing is in the job queue (rather than no data?)",
"Thanks, these are working great now (including @domenicrosati 's, afaics!)"
] | "2022-05-12T10:59:08" | "2022-05-13T10:57:15" | "2022-05-13T10:57:02" | CONTRIBUTOR | null | ### Link
https://huggingface.co/datasets/strombergnlp/offenseval_2020/viewer/ar/train
### Description
The viewer isn't running for these two datasets. I left it overnight because a wait sometimes helps things get loaded, and the error messages have all gone, but the datasets are still turning up blank in viewer. Maybe it needs a bit more time.
* https://huggingface.co/datasets/strombergnlp/polstance/viewer/PolStance/train
* https://huggingface.co/datasets/strombergnlp/offenseval_2020/viewer/ar/train
While offenseval_2020 is gated w. prompt, the other gated previews I have run fine in Viewer, e.g. https://huggingface.co/datasets/strombergnlp/shaj , so I'm a bit stumped!
### Owner
Yes | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4325/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4325/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4323 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4323/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4323/comments | https://api.github.com/repos/huggingface/datasets/issues/4323/events | https://github.com/huggingface/datasets/issues/4323 | 1,233,634,928 | I_kwDODunzps5Jh8Zw | 4,323 | Audio can not find value["bytes"] | {
"login": "YooSungHyun",
"id": 34292279,
"node_id": "MDQ6VXNlcjM0MjkyMjc5",
"avatar_url": "https://avatars.githubusercontent.com/u/34292279?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YooSungHyun",
"html_url": "https://github.com/YooSungHyun",
"followers_url": "https://api.github.com/users/YooSungHyun/followers",
"following_url": "https://api.github.com/users/YooSungHyun/following{/other_user}",
"gists_url": "https://api.github.com/users/YooSungHyun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YooSungHyun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YooSungHyun/subscriptions",
"organizations_url": "https://api.github.com/users/YooSungHyun/orgs",
"repos_url": "https://api.github.com/users/YooSungHyun/repos",
"events_url": "https://api.github.com/users/YooSungHyun/events{/privacy}",
"received_events_url": "https://api.github.com/users/YooSungHyun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"![image](https://user-images.githubusercontent.com/34292279/168063684-fff5c12a-8b1e-4c65-b18b-36100ab8a1af.png)\r\n\r\nthat is reason my bytes`s empty\r\nbut i have some confused why path prior is higher than bytes?\r\n\r\nif you can make bytes in _generate_examples , you don`t have to make bytes to path?\r\nbecause we have path and bytes already",
"> but i have some confused why path prior is higher than bytes?\r\n\r\nIf the audio file is already available locally, we don't need to store the bytes again.\r\n\r\nIf you don't specify a \"path\" to a local file, then the bytes are stored. You can set \"path\" to None for example.\r\n\r\n> if you can make bytes in _generate_examples , you don`t have to make bytes to path?\r\n> because we have path and bytes already\r\n\r\nIt's useful to pass both \"path\" and \"bytes\" in `_generate_examples`:\r\n- when the dataset has been downloaded, then the \"path\" to the audio files are stored and we can ignore \"bytes\" in order to save disk space.\r\n- when the dataset is loaded in streaming mode, the audio files are not available on your disk and therefore we use the \"bytes\" ",
"@lhoestq \r\nFirst of all, thx for reply\r\n\r\nbut, if i put in \"bytes\" and \"path\"\r\nex) {\"bytes\":\"blah blah~\", \"path\":\"blah blah~\"}\r\n\r\nthat source working that my bytes to empty first,\r\nand then, re-calculate my bytes!\r\n![image](https://user-images.githubusercontent.com/34292279/168534687-1fb60d8c-d369-47d2-a4bb-db68f95194b4.png)\r\n\r\nif you have some pcm file, pcm is can read bytes.\r\nso, i put in bytes and paths.\r\nbut bytes is been None why encode_example func make None\r\nand then, on decode_example func, we no have bytes. so, calculate bytes to path.\r\npcm is not support librosa or soundfile, error occured!\r\n\r\nthe most important thing is not announced anywhere this situation can be reproduced\r\n\r\nis that truly right process flow?",
"I don't think we support PCM files, feel free to convert your data to WAV for now.\r\n\r\nIt would be awesome to support PCM files though, let me know if you'd like to contribute this feature, I'd be happy to help",
"@lhoestq oh, how can i contribute?",
"You can clone the repository (see the guide on [how to contribute](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-create-a-pull-request)) and see how we can make the `Image.encode_example` method work with PCM data.\r\n\r\nThere might be other ways to approach this problem, but here is what I think is a reasonable one:\r\n\r\nI think `Image.encode_example` should be able to take PCM bytes as input and the sampling rate, and return the WAV bytes (built by combining the PCM bytes and the sampling rate info), so that `Image.decode_example` can read it.\r\n\r\nTo check if the input bytes are PCM data, you can just check if the extension of the `path` is \".pcm\".\r\n",
"maybe i can start to contribute on this sunday!\r\n@lhoestq ",
"@lhoestq plz check my pr #4409 \r\n\r\nam i wrong somting?",
"Thanks, I reviewed your PR :)"
] | "2022-05-12T08:31:58" | "2022-07-07T13:16:08" | "2022-07-07T13:16:08" | CONTRIBUTOR | null | ## Describe the bug
I wrote down _generate_examples like:
![image](https://user-images.githubusercontent.com/34292279/168027186-2fe8b255-2cd8-4b9b-ab1e-8d5a7182979b.png)
but where is the bytes?
![image](https://user-images.githubusercontent.com/34292279/168027330-f2496dd0-1d99-464c-b15c-bc57eee0415a.png)
## Expected results
value["bytes"] is not None, so i can make datasets with bytes, not path
## bytes looks like:
blah blah~~
\xfe\x03\x00\xfb\x06\x1c\x0bo\x074\x03\xaf\x01\x13\x04\xbc\x06\x8c\x05y\x05,\t7\x08\xaf\x03\xc0\xfe\xe8\xfc\x94\xfe\xb7\xfd\xea\xfa\xd5\xf9$\xf9>\xf9\x1f\xf8\r\xf5F\xf49\xf4\xda\xf5-\xf8\n\xf8k\xf8\x07\xfb\x18\xfd\xd9\xfdv\xfd"\xfe\xcc\x01\x1c\x04\x08\x04@\x04{\x06^\tf\t\x1e\x07\x8b\x06\x02\x08\x13\t\x07\x08 \x06g\x06"\x06\xa0\x03\xc6\x002\xff \xff\x1d\xff\x19\xfd?\xfb\xdb\xfa\xfc\xfa$\xfb}\xf9\xe5\xf7\xf9\xf7\xce\xf8.\xf9b\xf9\xc5\xf9\xc0\xfb\xfa\xfcP\xfc\xba\xfbQ\xfc1\xfe\x9f\xff\x12\x00\xa2\x00\x18\x02Z\x03\x02\x04\xb1\x03\xc5\x03W\x04\x82\x04\x8f\x04U\x04\xb6\x04\x10\x05{\x04\x83\x02\x17\x01\x1d\x00\xa0\xff\xec\xfe\x03\xfe#\xfe\xc2\xfe2\xff\xe6\xfe\x9a\xfe~\x01\x91\x08\xb3\tU\x05\x10\x024\x02\xe4\x05\xa8\x07\xa7\x053\x07I\n\x91\x07v\x02\x95\xfd\xbb\xfd\x96\xff\x01\xfe\x1e\xfb\xbb\xf9S\xf8!\xf8\xf4\xf5\xd6\xf3\xf7\xf3l\xf4d\xf6l\xf7d\xf6b\xf7\xc1\xfa(\xfd\xcf\xfd*\xfdq\xfe\xe9\x01\xa8\x03t\x03\x17\x04B\x07\xce\t\t\t\xeb\x06\x0c\x07\x95\x08\x92\t\xbc\x07O\x06\xfb\x06\xd2\x06U\x04\x00\x02\x92\x00\xdc\x00\x84\x00 \xfeT\xfc\xf1\xfb\x82\xfc\x97\xfb}\xf9\x00\xf8_\xf8\x0b\xf9\xe5\xf8\xe2\xf7\xaa\xf8\xb2\xfa\x10\xfbl\xfa\xf5\xf9Y\xfb\xc0\xfd\xe8\xfe\xec\xfe1\x00\xad\x01\xec\x02E\x03\x13\x03\x9b\x03o\x04\xce\x04\xa8\x04\xb2\x04\x1b\x05\xc0\x05\xd2\x04\xe8\x02z\x01\xbe\x00\xae\x00\x07\x00$\xff|\xff\x8e\x00\x13\x00\x10\xff\x98\xff0\x05{\x0b\x05\t\xaa\x03\x82\x01n\x03
blah blah~~
that function not return None
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:2.2.1
- Platform:ubuntu 18.04
- Python version:3.6.9
- PyArrow version:6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4323/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4323/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4322 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4322/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4322/comments | https://api.github.com/repos/huggingface/datasets/issues/4322/events | https://github.com/huggingface/datasets/pull/4322 | 1,233,596,947 | PR_kwDODunzps43s1wy | 4,322 | Added stratify option to train_test_split function. | {
"login": "nandwalritik",
"id": 48522685,
"node_id": "MDQ6VXNlcjQ4NTIyNjg1",
"avatar_url": "https://avatars.githubusercontent.com/u/48522685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nandwalritik",
"html_url": "https://github.com/nandwalritik",
"followers_url": "https://api.github.com/users/nandwalritik/followers",
"following_url": "https://api.github.com/users/nandwalritik/following{/other_user}",
"gists_url": "https://api.github.com/users/nandwalritik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nandwalritik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nandwalritik/subscriptions",
"organizations_url": "https://api.github.com/users/nandwalritik/orgs",
"repos_url": "https://api.github.com/users/nandwalritik/repos",
"events_url": "https://api.github.com/users/nandwalritik/events{/privacy}",
"received_events_url": "https://api.github.com/users/nandwalritik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-12T08:00:31" | "2022-11-22T14:53:55" | "2022-05-25T20:43:51" | CONTRIBUTOR | null | This PR adds `stratify` option to `train_test_split` method. I took reference from scikit-learn's `StratifiedShuffleSplit` class for implementing stratified split and integrated the changes as were suggested by @lhoestq.
It fixes #3452.
@lhoestq Please review and let me know, if any changes are required.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4322/reactions",
"total_count": 5,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4322/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4322",
"html_url": "https://github.com/huggingface/datasets/pull/4322",
"diff_url": "https://github.com/huggingface/datasets/pull/4322.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4322.patch",
"merged_at": "2022-05-25T20:43:51"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4321 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4321/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4321/comments | https://api.github.com/repos/huggingface/datasets/issues/4321/events | https://github.com/huggingface/datasets/pull/4321 | 1,233,273,351 | PR_kwDODunzps43ryW7 | 4,321 | Adding dataset enwik8 | {
"login": "HallerPatrick",
"id": 22773355,
"node_id": "MDQ6VXNlcjIyNzczMzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/22773355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HallerPatrick",
"html_url": "https://github.com/HallerPatrick",
"followers_url": "https://api.github.com/users/HallerPatrick/followers",
"following_url": "https://api.github.com/users/HallerPatrick/following{/other_user}",
"gists_url": "https://api.github.com/users/HallerPatrick/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HallerPatrick/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HallerPatrick/subscriptions",
"organizations_url": "https://api.github.com/users/HallerPatrick/orgs",
"repos_url": "https://api.github.com/users/HallerPatrick/repos",
"events_url": "https://api.github.com/users/HallerPatrick/events{/privacy}",
"received_events_url": "https://api.github.com/users/HallerPatrick/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-11T23:25:02" | "2022-06-01T14:27:30" | "2022-06-01T14:04:06" | CONTRIBUTOR | null | Because I regularly work with enwik8, I would like to contribute the dataset loader 🤗 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4321/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4321/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4321",
"html_url": "https://github.com/huggingface/datasets/pull/4321",
"diff_url": "https://github.com/huggingface/datasets/pull/4321.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4321.patch",
"merged_at": "2022-06-01T14:04:06"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4320 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4320/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4320/comments | https://api.github.com/repos/huggingface/datasets/issues/4320/events | https://github.com/huggingface/datasets/issues/4320 | 1,233,208,864 | I_kwDODunzps5JgUYg | 4,320 | Multi-news dataset loader attempts to strip wrong character from beginning of summaries | {
"login": "JohnGiorgi",
"id": 8917831,
"node_id": "MDQ6VXNlcjg5MTc4MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JohnGiorgi",
"html_url": "https://github.com/JohnGiorgi",
"followers_url": "https://api.github.com/users/JohnGiorgi/followers",
"following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}",
"gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions",
"organizations_url": "https://api.github.com/users/JohnGiorgi/orgs",
"repos_url": "https://api.github.com/users/JohnGiorgi/repos",
"events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}",
"received_events_url": "https://api.github.com/users/JohnGiorgi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi ! Thanks for reporting :)\r\n\r\nThis dataset was simply converted from [tensorflow datasets](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/summarization/multi_news.py)\r\n\r\nI think we can just remove the `.strip(\"- \")` and keep this character",
"Cool! I made a PR."
] | "2022-05-11T21:36:41" | "2022-05-16T13:52:10" | "2022-05-16T13:52:10" | CONTRIBUTOR | null | ## Describe the bug
The `multi_news.py` data loader has [a line which attempts to strip `"- "` from the beginning of summaries](https://github.com/huggingface/datasets/blob/aa743886221d76afb409d263e1b136e7a71fe2b4/datasets/multi_news/multi_news.py#L97). The actual character in the multi-news dataset, however, is `"– "`, which is different, e.g. `"– " != "- "`.
I would have just opened a PR to fix the mistake, but I am wondering what the motivation for stripping this character is? AFAICT most approaches just leave it in, e.g. the current SOTA on this dataset, [PRIMERA](https://huggingface.co/allenai/PRIMERA-multinews) (you can see it in the generated summaries of the model in their [example notebook](https://github.com/allenai/PRIMER/blob/main/Evaluation_Example.ipynb)).
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4320/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4320/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4319 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4319/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4319/comments | https://api.github.com/repos/huggingface/datasets/issues/4319/events | https://github.com/huggingface/datasets/pull/4319 | 1,232,982,023 | PR_kwDODunzps43q0UY | 4,319 | Adding eval metadata for ade v2 | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-11T17:36:20" | "2022-05-12T13:29:51" | "2022-05-12T13:22:19" | CONTRIBUTOR | null | Adding metadata to allow evaluation | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4319/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4319/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4319",
"html_url": "https://github.com/huggingface/datasets/pull/4319",
"diff_url": "https://github.com/huggingface/datasets/pull/4319.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4319.patch",
"merged_at": "2022-05-12T13:22:19"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4318 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4318/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4318/comments | https://api.github.com/repos/huggingface/datasets/issues/4318/events | https://github.com/huggingface/datasets/pull/4318 | 1,232,905,488 | PR_kwDODunzps43qkkQ | 4,318 | Don't check f.loc in _get_extraction_protocol_with_magic_number | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-11T16:27:09" | "2022-05-11T16:57:02" | "2022-05-11T16:46:31" | MEMBER | null | `f.loc` doesn't always exist for file-like objects in python. I removed it since it was not necessary anyway (we always seek the file to 0 after reading the magic number)
Fix https://github.com/huggingface/datasets/issues/4310 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4318/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4318/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4318",
"html_url": "https://github.com/huggingface/datasets/pull/4318",
"diff_url": "https://github.com/huggingface/datasets/pull/4318.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4318.patch",
"merged_at": "2022-05-11T16:46:31"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4317 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4317/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4317/comments | https://api.github.com/repos/huggingface/datasets/issues/4317/events | https://github.com/huggingface/datasets/pull/4317 | 1,232,737,401 | PR_kwDODunzps43qBzh | 4,317 | Fix cnn_dailymail (dm stories were ignored) | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-11T14:25:25" | "2022-05-11T16:00:09" | "2022-05-11T15:52:37" | MEMBER | null | https://github.com/huggingface/datasets/pull/4188 introduced a bug in `datasets` 2.2.0: DailyMail stories are ignored when generating the dataset.
I fixed that, and removed the google drive link (it has annoying quota limitation issues)
We can do a patch release after this is merged | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4317/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4317/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4317",
"html_url": "https://github.com/huggingface/datasets/pull/4317",
"diff_url": "https://github.com/huggingface/datasets/pull/4317.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4317.patch",
"merged_at": "2022-05-11T15:52:37"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4316 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4316/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4316/comments | https://api.github.com/repos/huggingface/datasets/issues/4316/events | https://github.com/huggingface/datasets/pull/4316 | 1,232,681,207 | PR_kwDODunzps43p1Za | 4,316 | Support passing config_kwargs to CLI run_beam | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-11T13:53:37" | "2022-05-11T14:36:49" | "2022-05-11T14:28:31" | MEMBER | null | This PR supports passing `config_kwargs` to CLI run_beam, so that for example for "wikipedia" dataset, we can pass:
```
--date 20220501 --language ca
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4316/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4316/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4316",
"html_url": "https://github.com/huggingface/datasets/pull/4316",
"diff_url": "https://github.com/huggingface/datasets/pull/4316.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4316.patch",
"merged_at": "2022-05-11T14:28:31"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4315 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4315/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4315/comments | https://api.github.com/repos/huggingface/datasets/issues/4315/events | https://github.com/huggingface/datasets/pull/4315 | 1,232,549,330 | PR_kwDODunzps43pZ6p | 4,315 | Fix CLI run_beam namespace | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-11T12:21:00" | "2022-05-11T13:13:00" | "2022-05-11T13:05:08" | MEMBER | null | Currently, it raises TypeError:
```
TypeError: __init__() got an unexpected keyword argument 'namespace'
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4315/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4315/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4315",
"html_url": "https://github.com/huggingface/datasets/pull/4315",
"diff_url": "https://github.com/huggingface/datasets/pull/4315.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4315.patch",
"merged_at": "2022-05-11T13:05:08"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4314 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4314/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4314/comments | https://api.github.com/repos/huggingface/datasets/issues/4314/events | https://github.com/huggingface/datasets/pull/4314 | 1,232,326,726 | PR_kwDODunzps43oqXD | 4,314 | Catch pull error when mirroring | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-11T09:38:35" | "2022-05-11T12:54:07" | "2022-05-11T12:46:42" | MEMBER | null | Catch pull errors when mirroring so that the script continues to update the other datasets.
The error will still be printed at the end of the job. In this case the job also fails, and asks to manually update the datasets that failed. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4314/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4314/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4314",
"html_url": "https://github.com/huggingface/datasets/pull/4314",
"diff_url": "https://github.com/huggingface/datasets/pull/4314.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4314.patch",
"merged_at": "2022-05-11T12:46:42"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4313 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4313/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4313/comments | https://api.github.com/repos/huggingface/datasets/issues/4313/events | https://github.com/huggingface/datasets/pull/4313 | 1,231,764,100 | PR_kwDODunzps43m4qB | 4,313 | Add API code examples for Builder classes | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [] | "2022-05-10T22:22:32" | "2022-05-12T17:02:43" | "2022-05-12T12:36:57" | MEMBER | null | This PR adds API code examples for the Builder classes. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4313/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4313/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4313",
"html_url": "https://github.com/huggingface/datasets/pull/4313",
"diff_url": "https://github.com/huggingface/datasets/pull/4313.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4313.patch",
"merged_at": "2022-05-12T12:36:57"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4312 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4312/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4312/comments | https://api.github.com/repos/huggingface/datasets/issues/4312/events | https://github.com/huggingface/datasets/pull/4312 | 1,231,662,775 | PR_kwDODunzps43mlug | 4,312 | added TR-News dataset | {
"login": "batubayk",
"id": 25901065,
"node_id": "MDQ6VXNlcjI1OTAxMDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/25901065?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/batubayk",
"html_url": "https://github.com/batubayk",
"followers_url": "https://api.github.com/users/batubayk/followers",
"following_url": "https://api.github.com/users/batubayk/following{/other_user}",
"gists_url": "https://api.github.com/users/batubayk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/batubayk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/batubayk/subscriptions",
"organizations_url": "https://api.github.com/users/batubayk/orgs",
"repos_url": "https://api.github.com/users/batubayk/repos",
"events_url": "https://api.github.com/users/batubayk/events{/privacy}",
"received_events_url": "https://api.github.com/users/batubayk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | null | [] | null | [] | "2022-05-10T20:33:00" | "2022-10-03T09:36:45" | "2022-10-03T09:36:45" | NONE | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4312/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4312/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4312",
"html_url": "https://github.com/huggingface/datasets/pull/4312",
"diff_url": "https://github.com/huggingface/datasets/pull/4312.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4312.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4311 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4311/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4311/comments | https://api.github.com/repos/huggingface/datasets/issues/4311/events | https://github.com/huggingface/datasets/pull/4311 | 1,231,369,438 | PR_kwDODunzps43ln8- | 4,311 | [Imagefolder] Docs + Don't infer labels from file names when there are metadata + Error messages when metadata and images aren't linked correctly | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-10T15:52:15" | "2022-05-10T17:19:42" | "2022-05-10T17:11:47" | MEMBER | null | I updated the `docs/source/image_process.mdx` documentation and added an example for image captioning and object detection using `ImageFolder`.
While doing so I also improved a few aspects:
- we don't need to infer labels from file names when there are metadata - they can just be in the metadata if necessary
- raise informative error messages when metadata and images aren't linked correctly:
- when an image is missing a metadata file
- when a metadata file is missing an image
I added some tests for these changes as well
cc @mariosasko | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4311/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4311/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4311",
"html_url": "https://github.com/huggingface/datasets/pull/4311",
"diff_url": "https://github.com/huggingface/datasets/pull/4311.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4311.patch",
"merged_at": "2022-05-10T17:11:47"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4310 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4310/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4310/comments | https://api.github.com/repos/huggingface/datasets/issues/4310/events | https://github.com/huggingface/datasets/issues/4310 | 1,231,319,815 | I_kwDODunzps5JZHMH | 4,310 | Loading dataset with streaming: '_io.BufferedReader' object has no attribute 'loc' | {
"login": "milmin",
"id": 72745467,
"node_id": "MDQ6VXNlcjcyNzQ1NDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/72745467?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/milmin",
"html_url": "https://github.com/milmin",
"followers_url": "https://api.github.com/users/milmin/followers",
"following_url": "https://api.github.com/users/milmin/following{/other_user}",
"gists_url": "https://api.github.com/users/milmin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/milmin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/milmin/subscriptions",
"organizations_url": "https://api.github.com/users/milmin/orgs",
"repos_url": "https://api.github.com/users/milmin/repos",
"events_url": "https://api.github.com/users/milmin/events{/privacy}",
"received_events_url": "https://api.github.com/users/milmin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | "2022-05-10T15:12:53" | "2022-05-11T16:46:31" | "2022-05-11T16:46:31" | NONE | null | ## Describe the bug
Loading a dataset with `load_dataset` and `streaming=True` returns `AttributeError: '_io.BufferedReader' object has no attribute 'loc'`. Notice that loading with `streaming=False` works fine.
In the following steps we load parquet files, but the same happens with pickle files. The problem seems to come from the `fsspec` lib; I also put the `s3fs` and `fsspec` versions in the environment info since I'm loading from an s3 bucket.
## Steps to reproduce the bug
```python
from datasets import load_dataset
# path is the path to parquet files
data_files = {"train": path + "meta_train.parquet.gzip", "test": path + "meta_test.parquet.gzip"}
dataset = load_dataset("parquet", data_files=data_files, streaming=True)
```
## Expected results
A dataset object `datasets.dataset_dict.DatasetDict`
## Actual results
```
AttributeError Traceback (most recent call last)
<command-562086> in <module>
11
12 data_files = {"train": path + "meta_train.parquet.gzip", "test": path + "meta_test.parquet.gzip"}
---> 13 dataset = load_dataset("parquet", data_files=data_files, streaming=True)
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1679 if streaming:
1680 extend_dataset_builder_for_streaming(builder_instance, use_auth_token=use_auth_token)
-> 1681 return builder_instance.as_streaming_dataset(
1682 split=split,
1683 use_auth_token=use_auth_token,
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/builder.py in as_streaming_dataset(self, split, base_path, use_auth_token)
904 )
905 self._check_manual_download(dl_manager)
--> 906 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}
907 # By default, return all splits
908 if split is None:
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/packaged_modules/parquet/parquet.py in _split_generators(self, dl_manager)
30 if not self.config.data_files:
31 raise ValueError(f"At least one data file must be specified, but got data_files={self.config.data_files}")
---> 32 data_files = dl_manager.download_and_extract(self.config.data_files)
33 if isinstance(data_files, (str, list, tuple)):
34 files = data_files
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in download_and_extract(self, url_or_urls)
798
799 def download_and_extract(self, url_or_urls):
--> 800 return self.extract(self.download(url_or_urls))
801
802 def iter_archive(self, urlpath_or_buf: Union[str, io.BufferedReader]) -> Iterable[Tuple]:
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in extract(self, path_or_paths)
776
777 def extract(self, path_or_paths):
--> 778 urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True)
779 return urlpaths
780
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm, desc)
312 num_proc = 1
313 if num_proc <= 1 or len(iterable) <= num_proc:
--> 314 mapped = [
315 _single_map_nested((function, obj, types, None, True, None))
316 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0)
313 if num_proc <= 1 or len(iterable) <= num_proc:
314 mapped = [
--> 315 _single_map_nested((function, obj, types, None, True, None))
316 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
317 ]
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args)
267 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
268 else:
--> 269 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
270 if isinstance(data_struct, list):
271 return mapped
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0)
267 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
268 else:
--> 269 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
270 if isinstance(data_struct, list):
271 return mapped
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args)
249 # Singleton first to spare some computation
250 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 251 return function(data_struct)
252
253 # Reduce logging to keep things readable in multiprocessing with tqdm
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _extract(self, urlpath)
781 def _extract(self, urlpath: str) -> str:
782 urlpath = str(urlpath)
--> 783 protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)
784 if protocol is None:
785 # no extraction
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _get_extraction_protocol(urlpath, use_auth_token)
371 urlpath, kwargs = urlpath, {}
372 with fsspec.open(urlpath, **kwargs) as f:
--> 373 return _get_extraction_protocol_with_magic_number(f)
374
375
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _get_extraction_protocol_with_magic_number(f)
335 def _get_extraction_protocol_with_magic_number(f) -> Optional[str]:
336 """read the magic number from a file-like object and return the compression protocol"""
--> 337 prev_loc = f.loc
338 magic_number = f.read(MAGIC_NUMBER_MAX_LENGTH)
339 f.seek(prev_loc)
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/fsspec/implementations/local.py in __getattr__(self, item)
337
338 def __getattr__(self, item):
--> 339 return getattr(self.f, item)
340
341 def __enter__(self):
AttributeError: '_io.BufferedReader' object has no attribute 'loc'
```
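For reference, the failing helper only needs to peek at the file's leading bytes and restore the read position; plain `_io.BufferedReader` objects (as in the traceback above) expose the standard `tell()`/`seek()` API rather than fsspec's `.loc`. Below is a minimal, position-safe sketch of that probe — the helper name and the constant's value are assumptions for illustration, not the library's actual fix:

```python
import gzip
import os
import tempfile

MAGIC_NUMBER_MAX_LENGTH = 8  # assumed; mirrors the constant used in streaming_download_manager


def read_magic_number(f):
    # Save and restore the position with tell()/seek(), which any standard
    # binary file object supports, instead of fsspec's `.loc` attribute.
    pos = f.tell()
    magic = f.read(MAGIC_NUMBER_MAX_LENGTH)
    f.seek(pos)
    return magic


# Build a small gzip file so the probe has a real magic number to find.
path = os.path.join(tempfile.mkdtemp(), "data.json.gz")
with gzip.open(path, "wt") as g:
    g.write('{"data": []}')

with open(path, "rb") as f:  # this is an _io.BufferedReader, as in the traceback
    magic = read_magic_number(f)
    restored = f.tell()

print(magic[:2])  # b'\x1f\x8b', the gzip magic number
print(restored)   # 0: the read position was restored
```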
## Environment info
- `datasets` version: 2.1.0
- Platform: Linux-5.4.0-1071-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
- `fsspec` version: 2021.08.1
- `s3fs` version: 2021.08.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4310/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4310/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4309 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4309/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4309/comments | https://api.github.com/repos/huggingface/datasets/issues/4309/events | https://github.com/huggingface/datasets/pull/4309 | 1,231,232,935 | PR_kwDODunzps43lKpm | 4,309 | [WIP] Add TEDLIUM dataset | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 2725241052,
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech",
"name": "speech",
"color": "d93f0b",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | "2022-05-10T14:12:47" | "2022-06-17T12:54:40" | "2022-06-17T11:44:01" | CONTRIBUTOR | null | Adds the TED-LIUM dataset https://www.tensorflow.org/datasets/catalog/tedlium#tedliumrelease3
TODO:
- [x] Port `tedium.py` from TF datasets using `convert_dataset.sh` script
- [x] Make `load_dataset` work
- [ ] ~~Run `datasets-cli` command to generate `dataset_infos.json`~~
- [ ] ~~Create dummy data for continuous testing~~
- [ ] ~~Dummy data tests~~
- [ ] ~~Real data tests~~
- [ ] Create the metadata JSON
- [ ] Close PR and add directly to the Hub under LIUM org | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4309/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4309/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4309",
"html_url": "https://github.com/huggingface/datasets/pull/4309",
"diff_url": "https://github.com/huggingface/datasets/pull/4309.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4309.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4308 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4308/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4308/comments | https://api.github.com/repos/huggingface/datasets/issues/4308/events | https://github.com/huggingface/datasets/pull/4308 | 1,231,217,783 | PR_kwDODunzps43lHdP | 4,308 | Remove unused multiprocessing args from test CLI | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-10T14:02:15" | "2022-05-11T12:58:25" | "2022-05-11T12:50:43" | MEMBER | null | Multiprocessing is not used in the test CLI. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4308/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4308/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4308",
"html_url": "https://github.com/huggingface/datasets/pull/4308",
"diff_url": "https://github.com/huggingface/datasets/pull/4308.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4308.patch",
"merged_at": "2022-05-11T12:50:42"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4307 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4307/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4307/comments | https://api.github.com/repos/huggingface/datasets/issues/4307/events | https://github.com/huggingface/datasets/pull/4307 | 1,231,175,639 | PR_kwDODunzps43k-Wo | 4,307 | Add packaged builder configs to the documentation | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-10T13:34:19" | "2022-05-10T14:03:50" | "2022-05-10T13:55:54" | MEMBER | null | Adding the packaged builders' configurations to the docs reference is useful to show the list of all parameters one can use when loading data in many formats: CSV, JSON, etc. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4307/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4307/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4307",
"html_url": "https://github.com/huggingface/datasets/pull/4307",
"diff_url": "https://github.com/huggingface/datasets/pull/4307.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4307.patch",
"merged_at": "2022-05-10T13:55:54"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4306 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4306/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4306/comments | https://api.github.com/repos/huggingface/datasets/issues/4306/events | https://github.com/huggingface/datasets/issues/4306 | 1,231,137,204 | I_kwDODunzps5JYam0 | 4,306 | `load_dataset` does not work with certain filename. | {
"login": "whatever60",
"id": 57242693,
"node_id": "MDQ6VXNlcjU3MjQyNjkz",
"avatar_url": "https://avatars.githubusercontent.com/u/57242693?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/whatever60",
"html_url": "https://github.com/whatever60",
"followers_url": "https://api.github.com/users/whatever60/followers",
"following_url": "https://api.github.com/users/whatever60/following{/other_user}",
"gists_url": "https://api.github.com/users/whatever60/gists{/gist_id}",
"starred_url": "https://api.github.com/users/whatever60/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/whatever60/subscriptions",
"organizations_url": "https://api.github.com/users/whatever60/orgs",
"repos_url": "https://api.github.com/users/whatever60/repos",
"events_url": "https://api.github.com/users/whatever60/events{/privacy}",
"received_events_url": "https://api.github.com/users/whatever60/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Never mind. It is because of the caching of datasets..."
] | "2022-05-10T13:14:04" | "2022-05-10T18:58:36" | "2022-05-10T18:58:09" | NONE | null | ## Describe the bug
This is a weird bug that took me some time to track down.
I have a JSON dataset that I want to load with `load_dataset` like this:
```
data_files = dict(train="train.json.zip", val="val.json.zip")
dataset = load_dataset("json", data_files=data_files, field="data")
```
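Since the issue turned out to be stale caching (see the resolution comment), a quick stdlib round-trip like the one below can confirm that the archive itself decodes cleanly outside `datasets` — the filenames and payload here are made up for illustration. Passing `download_mode="force_redownload"` to `load_dataset` is another way to rule out the cache.

```python
import io
import json
import zipfile

# Made-up payload mirroring the report's train.json.zip layout.
payload = {"data": [{"text": "hello"}, {"text": "world"}]}

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("train.json", json.dumps(payload))

# If this round-trip succeeds, the archive holds well-formed JSON, which
# points at a stale cache rather than the file when load_dataset still fails.
buf.seek(0)
with zipfile.ZipFile(buf) as zf, zf.open("train.json") as f:
    loaded = json.load(f)

print(loaded == payload)  # True
```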
## Expected results
No error.
## Actual results
The val file is loaded as expected, but the train file throws a JSON decoding error:
```
╭──────────────────────────── Traceback (most recent call last) ────────────────────────────╮
│ <ipython-input-74-97947e92c100>:5 in <module> │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/load.py:1687 in │
│ load_dataset │
│ │
│ 1684 │ try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES │
│ 1685 │ │
│ 1686 │ # Download and prepare data │
│ ❱ 1687 │ builder_instance.download_and_prepare( │
│ 1688 │ │ download_config=download_config, │
│ 1689 │ │ download_mode=download_mode, │
│ 1690 │ │ ignore_verifications=ignore_verifications, │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/builder.py:605 in │
│ download_and_prepare │
│ │
│ 602 │ │ │ │ │ │ except ConnectionError: │
│ 603 │ │ │ │ │ │ │ logger.warning("HF google storage unreachable. Downloa │
│ 604 │ │ │ │ │ if not downloaded_from_gcs: │
│ ❱ 605 │ │ │ │ │ │ self._download_and_prepare( │
│ 606 │ │ │ │ │ │ │ dl_manager=dl_manager, verify_infos=verify_infos, **do │
│ 607 │ │ │ │ │ │ ) │
│ 608 │ │ │ │ │ # Sync info │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/builder.py:694 in │
│ _download_and_prepare │
│ │
│ 691 │ │ │ │
│ 692 │ │ │ try: │
│ 693 │ │ │ │ # Prepare split will record examples associated to the split │
│ ❱ 694 │ │ │ │ self._prepare_split(split_generator, **prepare_split_kwargs) │
│ 695 │ │ │ except OSError as e: │
│ 696 │ │ │ │ raise OSError( │
│ 697 │ │ │ │ │ "Cannot find data file. " │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/builder.py:1151 in │
│ _prepare_split │
│ │
│ 1148 │ │ │
│ 1149 │ │ generator = self._generate_tables(**split_generator.gen_kwargs) │
│ 1150 │ │ with ArrowWriter(features=self.info.features, path=fpath) as writer: │
│ ❱ 1151 │ │ │ for key, table in logging.tqdm( │
│ 1152 │ │ │ │ generator, unit=" tables", leave=False, disable=True # not loggin │
│ 1153 │ │ │ ): │
│ 1154 │ │ │ │ writer.write_table(table) │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/tqdm/notebook.py:257 in │
│ __iter__ │
│ │
│ 254 │ │
│ 255 │ def __iter__(self): │
│ 256 │ │ try: │
│ ❱ 257 │ │ │ for obj in super(tqdm_notebook, self).__iter__(): │
│ 258 │ │ │ │ # return super(tqdm...) will not catch exception │
│ 259 │ │ │ │ yield obj │
│ 260 │ │ # NB: except ... [ as ...] breaks IPython async KeyboardInterrupt │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/tqdm/std.py:1183 in │
│ __iter__ │
│ │
│ 1180 │ │ # If the bar is disabled, then just walk the iterable │
│ 1181 │ │ # (note: keep this check outside the loop for performance) │
│ 1182 │ │ if self.disable: │
│ ❱ 1183 │ │ │ for obj in iterable: │
│ 1184 │ │ │ │ yield obj │
│ 1185 │ │ │ return │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/packaged_modules/j │
│ son/json.py:90 in _generate_tables │
│ │
│ 87 │ │ │ # If the file is one json object and if we need to look at the list of │
│ 88 │ │ │ if self.config.field is not None: │
│ 89 │ │ │ │ with open(file, encoding="utf-8") as f: │
│ ❱ 90 │ │ │ │ │ dataset = json.load(f) │
│ 91 │ │ │ │ │
│ 92 │ │ │ │ # We keep only the field we are interested in │
│ 93 │ │ │ │ dataset = dataset[self.config.field] │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/json/__init__.py:293 in load │
│ │
│ 290 │ To use a custom ``JSONDecoder`` subclass, specify it with the ``cls`` │
│ 291 │ kwarg; otherwise ``JSONDecoder`` is used. │
│ 292 │ """ │
│ ❱ 293 │ return loads(fp.read(), │
│ 294 │ │ cls=cls, object_hook=object_hook, │
│ 295 │ │ parse_float=parse_float, parse_int=parse_int, │
│ 296 │ │ parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw) │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/json/__init__.py:357 in loads │
│ │
│ 354 │ if (cls is None and object_hook is None and │
│ 355 │ │ │ parse_int is None and parse_float is None and │
│ 356 │ │ │ parse_constant is None and object_pairs_hook is None and not kw): │
│ ❱ 357 │ │ return _default_decoder.decode(s) │
│ 358 │ if cls is None: │
│ 359 │ │ cls = JSONDecoder │
│ 360 │ if object_hook is not None: │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/json/decoder.py:337 in decode │
│ │
│ 334 │ │ containing a JSON document). │
│ 335 │ │ │
│ 336 │ │ """ │
│ ❱ 337 │ │ obj, end = self.raw_decode(s, idx=_w(s, 0).end()) │
│ 338 │ │ end = _w(s, end).end() │
│ 339 │ │ if end != len(s): │
│ 340 │ │ │ raise JSONDecodeError("Extra data", s, end) │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/json/decoder.py:353 in raw_decode │
│ │
│ 350 │ │ │
│ 351 │ │ """ │
│ 352 │ │ try: │
│ ❱ 353 │ │ │ obj, end = self.scan_once(s, idx) │
│ 354 │ │ except StopIteration as err: │
│ 355 │ │ │ raise JSONDecodeError("Expecting value", s, err.value) from None │
│ 356 │ │ return obj, end │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
JSONDecodeError: Unterminated string starting at: line 85 column 20 (char 60051)
```
However, when I rename `train.json.zip` to another name (like `training.json.zip`, or even to `train.json`), everything works fine; when I unzip the file to `train.json`, it works as well.
## Environment info
```
- `datasets` version: 2.1.0
- Platform: Linux-4.4.0-131-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 7.0.0
- Pandas version: 1.4.2
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4306/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4306/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4303 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4303/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4303/comments | https://api.github.com/repos/huggingface/datasets/issues/4303/events | https://github.com/huggingface/datasets/pull/4303 | 1,230,867,728 | PR_kwDODunzps43j8cH | 4,303 | Fix: Add missing comma | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-10T09:21:38" | "2022-05-11T08:50:15" | "2022-05-11T08:50:14" | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4303/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4303/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4303",
"html_url": "https://github.com/huggingface/datasets/pull/4303",
"diff_url": "https://github.com/huggingface/datasets/pull/4303.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4303.patch",
"merged_at": "2022-05-11T08:50:14"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4302 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4302/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4302/comments | https://api.github.com/repos/huggingface/datasets/issues/4302/events | https://github.com/huggingface/datasets/pull/4302 | 1,230,651,117 | PR_kwDODunzps43jPE5 | 4,302 | Remove hacking license tags when mirroring datasets on the Hub | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-10T05:52:46" | "2022-05-20T09:48:30" | "2022-05-20T09:40:20" | MEMBER | null | Currently, when mirroring datasets on the Hub, the license tags are hacked: the characters "." and "$" are removed. By contrast, this hacking is not applied to community datasets on the Hub. This generates multiple variants of the same tag on the Hub.
I guess this hacking is no longer necessary:
- it is not applied to community datasets
- all canonical datasets are validated by maintainers before being merged: CI + maintainers make sure license tags are the right ones
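For context, a sketch of the normalization being removed, reconstructed from the description above (the helper name and example tag are hypothetical):

```python
def hack_license_tag(tag: str) -> str:
    # Reconstructed from the PR description: the mirroring script stripped
    # "." and "$" from canonical datasets' license tags.
    return tag.replace(".", "").replace("$", "")


# Community datasets keep the raw tag, so one license forks into two variants:
print(hack_license_tag("cc-by-4.0"))  # "cc-by-40", diverging from the raw "cc-by-4.0"
```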
Fix #4298. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4302/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4302/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4302",
"html_url": "https://github.com/huggingface/datasets/pull/4302",
"diff_url": "https://github.com/huggingface/datasets/pull/4302.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4302.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4301 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4301/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4301/comments | https://api.github.com/repos/huggingface/datasets/issues/4301/events | https://github.com/huggingface/datasets/pull/4301 | 1,230,401,256 | PR_kwDODunzps43idlE | 4,301 | Add ImageNet-Sketch dataset | {
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-09T23:38:45" | "2022-05-23T18:14:14" | "2022-05-23T18:05:29" | CONTRIBUTOR | null | This PR adds the ImageNet-Sketch dataset and resolves #3953. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4301/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4301/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4301",
"html_url": "https://github.com/huggingface/datasets/pull/4301",
"diff_url": "https://github.com/huggingface/datasets/pull/4301.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4301.patch",
"merged_at": "2022-05-23T18:05:29"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4300 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4300/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4300/comments | https://api.github.com/repos/huggingface/datasets/issues/4300/events | https://github.com/huggingface/datasets/pull/4300 | 1,230,272,761 | PR_kwDODunzps43iA86 | 4,300 | Add API code examples for loading methods | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [] | "2022-05-09T21:30:26" | "2022-05-25T16:23:15" | "2022-05-25T09:20:13" | MEMBER | null | This PR adds API code examples for loading methods; let me know if I've missed any important parameters we should showcase :)
I was a bit confused about `inspect_dataset` and `inspect_metric`. The `path` parameter says it will accept a dataset identifier from the Hub. But when I try the identifier `rotten_tomatoes`, it gives me:
```py
from datasets import inspect_dataset
inspect_dataset('rotten_tomatoes', local_path='/content/rotten_tomatoes')
FileNotFoundError: Couldn't find a dataset script at /content/rotten_tomatoes/rotten_tomatoes.py or any data file in the same directory.
```
Does the user need to have an existing copy of `rotten_tomatoes.py` on their local drive (in which case, it seems like the same option as the first option in `path`)? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4300/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4300/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4300",
"html_url": "https://github.com/huggingface/datasets/pull/4300",
"diff_url": "https://github.com/huggingface/datasets/pull/4300.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4300.patch",
"merged_at": "2022-05-25T09:20:12"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4299 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4299/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4299/comments | https://api.github.com/repos/huggingface/datasets/issues/4299/events | https://github.com/huggingface/datasets/pull/4299 | 1,230,236,782 | PR_kwDODunzps43h5RP | 4,299 | Remove manual download from imagenet-1k | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-09T20:49:18" | "2022-05-25T14:54:59" | "2022-05-25T14:46:16" | CONTRIBUTOR | null | Remove the manual download code from `imagenet-1k` to make it a regular dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4299/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4299/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4299",
"html_url": "https://github.com/huggingface/datasets/pull/4299",
"diff_url": "https://github.com/huggingface/datasets/pull/4299.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4299.patch",
"merged_at": "2022-05-25T14:46:16"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4298 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4298/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4298/comments | https://api.github.com/repos/huggingface/datasets/issues/4298/events | https://github.com/huggingface/datasets/issues/4298 | 1,229,748,006 | I_kwDODunzps5JTHcm | 4,298 | Normalise license names | {
"login": "leondz",
"id": 121934,
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leondz",
"html_url": "https://github.com/leondz",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"repos_url": "https://api.github.com/users/leondz/repos",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"we'll add the same server-side metadata validation system as for hf.co/models soon-ish\r\n\r\n(you can check on hf.co/models that licenses are \"clean\")",
"Fixed by #4367."
] | "2022-05-09T13:51:32" | "2022-05-20T09:51:50" | "2022-05-20T09:51:50" | CONTRIBUTOR | null | **Is your feature request related to a problem? Please describe.**
When browsing datasets, the Licenses tag cloud (bottom left of e.g. https://huggingface.co/datasets) has multiple variants of the same license. This means the options exclude datasets arbitrarily, giving users artificially low recall. The cause of the dupes is probably due to a bit of variation in metadata.
**Describe the solution you'd like**
I'd like the licenses in metadata to follow the same standard as much as possible, to remove this problem. I'd like to go ahead and normalise the dataset metadata to follow the format & values given in [src/datasets/utils/resources/licenses.json](https://github.com/huggingface/datasets/blob/master/src/datasets/utils/resources/licenses.json) .
**Describe alternatives you've considered**
None
**Additional context**
None
**Priority**
Low
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4298/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4298/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4297 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4297/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4297/comments | https://api.github.com/repos/huggingface/datasets/issues/4297/events | https://github.com/huggingface/datasets/issues/4297 | 1,229,735,498 | I_kwDODunzps5JTEZK | 4,297 | Datasets YAML tagging space is down | {
"login": "leondz",
"id": 121934,
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leondz",
"html_url": "https://github.com/leondz",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"repos_url": "https://api.github.com/users/leondz/repos",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@lhoestq @albertvillanova `update-task-list` branch does not exist anymore, should point to `main` now i guess",
"Thanks for reporting, fixing it now",
"It's up again :)"
] | "2022-05-09T13:45:05" | "2022-05-09T14:44:25" | "2022-05-09T14:44:25" | CONTRIBUTOR | null | ## Describe the bug
The neat hf spaces app for generating YAML tags for dataset `README.md`s is down
## Steps to reproduce the bug
1. Visit https://huggingface.co/spaces/huggingface/datasets-tagging
## Expected results
There'll be a HF spaces web app for generating dataset metadata YAML
## Actual results
There's an error message; here's the step where it breaks:
```
Step 18/29 : RUN pip install -r requirements.txt
---> Running in e88bfe7e7e0c
Defaulting to user installation because normal site-packages is not writeable
Collecting git+https://github.com/huggingface/datasets.git@update-task-list (from -r requirements.txt (line 4))
Cloning https://github.com/huggingface/datasets.git (to revision update-task-list) to /tmp/pip-req-build-bm8t0r0k
Running command git clone --filter=blob:none --quiet https://github.com/huggingface/datasets.git /tmp/pip-req-build-bm8t0r0k
WARNING: Did not find branch or tag 'update-task-list', assuming revision or ref.
Running command git checkout -q update-task-list
error: pathspec 'update-task-list' did not match any file(s) known to git
error: subprocess-exited-with-error
× git checkout -q update-task-list did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× git checkout -q update-task-list did not run successfully.
│ exit code: 1
╰─> See above for output.
```
## Environment info
- Platform: Linux / Brave
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4297/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4297/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4295 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4295/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4295/comments | https://api.github.com/repos/huggingface/datasets/issues/4295/events | https://github.com/huggingface/datasets/pull/4295 | 1,229,527,283 | PR_kwDODunzps43fieR | 4,295 | Fix missing lz4 dependency for tests | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-09T10:53:20" | "2022-05-09T11:21:22" | "2022-05-09T11:13:44" | MEMBER | null | Currently, `lz4` is not defined as a dependency for tests. Therefore, all tests marked with `@require_lz4` are skipped. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4295/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4295/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4295",
"html_url": "https://github.com/huggingface/datasets/pull/4295",
"diff_url": "https://github.com/huggingface/datasets/pull/4295.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4295.patch",
"merged_at": "2022-05-09T11:13:44"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4294 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4294/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4294/comments | https://api.github.com/repos/huggingface/datasets/issues/4294/events | https://github.com/huggingface/datasets/pull/4294 | 1,229,455,582 | PR_kwDODunzps43fTXA | 4,294 | Fix CLI run_beam save_infos | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-09T09:47:43" | "2022-05-10T07:04:04" | "2022-05-10T06:56:10" | MEMBER | null | Currently, it raises TypeError:
```
TypeError: _download_and_prepare() got an unexpected keyword argument 'save_infos'
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4294/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4294/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4294",
"html_url": "https://github.com/huggingface/datasets/pull/4294",
"diff_url": "https://github.com/huggingface/datasets/pull/4294.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4294.patch",
"merged_at": "2022-05-10T06:56:10"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4293 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4293/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4293/comments | https://api.github.com/repos/huggingface/datasets/issues/4293/events | https://github.com/huggingface/datasets/pull/4293 | 1,228,815,477 | PR_kwDODunzps43dRt9 | 4,293 | Fix wrong map parameter name in cache docs | {
"login": "h4iku",
"id": 3812788,
"node_id": "MDQ6VXNlcjM4MTI3ODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3812788?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h4iku",
"html_url": "https://github.com/h4iku",
"followers_url": "https://api.github.com/users/h4iku/followers",
"following_url": "https://api.github.com/users/h4iku/following{/other_user}",
"gists_url": "https://api.github.com/users/h4iku/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h4iku/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h4iku/subscriptions",
"organizations_url": "https://api.github.com/users/h4iku/orgs",
"repos_url": "https://api.github.com/users/h4iku/repos",
"events_url": "https://api.github.com/users/h4iku/events{/privacy}",
"received_events_url": "https://api.github.com/users/h4iku/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-08T07:27:46" | "2022-06-14T16:49:00" | "2022-06-14T16:07:00" | CONTRIBUTOR | null | The `load_from_cache` parameter of `map` should be `load_from_cache_file`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4293/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4293/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4293",
"html_url": "https://github.com/huggingface/datasets/pull/4293",
"diff_url": "https://github.com/huggingface/datasets/pull/4293.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4293.patch",
"merged_at": "2022-06-14T16:07:00"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4292 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4292/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4292/comments | https://api.github.com/repos/huggingface/datasets/issues/4292/events | https://github.com/huggingface/datasets/pull/4292 | 1,228,216,788 | PR_kwDODunzps43bhrp | 4,292 | Add API code examples for remaining main classes | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [] | "2022-05-06T18:15:31" | "2022-05-25T18:05:13" | "2022-05-25T17:56:36" | MEMBER | null | This PR adds API code examples for the remaining functions in the Main classes. I wasn't too familiar with some of the functions (`decode_batch`, `decode_column`, `decode_example`, etc.) so please feel free to add an example of usage and I can fill in the rest :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4292/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4292/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4292",
"html_url": "https://github.com/huggingface/datasets/pull/4292",
"diff_url": "https://github.com/huggingface/datasets/pull/4292.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4292.patch",
"merged_at": "2022-05-25T17:56:36"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4291 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4291/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4291/comments | https://api.github.com/repos/huggingface/datasets/issues/4291/events | https://github.com/huggingface/datasets/issues/4291 | 1,227,777,500 | I_kwDODunzps5JLmXc | 4,291 | Dataset Viewer issue for strombergnlp/ipm_nel : preview is empty, no error message | {
"login": "leondz",
"id": 121934,
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leondz",
"html_url": "https://github.com/leondz",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"repos_url": "https://api.github.com/users/leondz/repos",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @leondz, thanks for reporting.\r\n\r\nIndeed, the dataset viewer relies on the dataset being streamable (passing `streaming=True` to `load_dataset`). Whereas most of the datastes are streamable out of the box (thanks to our implementation of streaming), there are still some exceptions.\r\n\r\nIn particular, in your case, that is due to the data file being TAR. This format is not streamable out of the box (it does not allow random access to the archived files), but we use a trick to allow streaming: using `dl_manager.iter_archive`.\r\n\r\nLet me know if you need some help: I could push a commit to your repo with the fix.",
"Ah, right! The preview is working now, but this explanation is good to know, thank you. I'll prefer formats with random file access supported in datasets.utils.extract in future, and try out this fix for the tarfiles :)"
] | "2022-05-06T12:03:27" | "2022-05-09T08:25:58" | "2022-05-09T08:25:58" | CONTRIBUTOR | null | ### Link
https://huggingface.co/datasets/strombergnlp/ipm_nel/viewer/ipm_nel/train
### Description
The viewer is blank. I tried my best to emulate a dataset with a working viewer, but this one just doesn't seem to want to come up. What did I miss?
### Owner
Yes | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4291/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4291/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4290 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4290/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4290/comments | https://api.github.com/repos/huggingface/datasets/issues/4290/events | https://github.com/huggingface/datasets/pull/4290 | 1,227,592,826 | PR_kwDODunzps43Zr08 | 4,290 | Update paper link in medmcqa dataset card | {
"login": "monk1337",
"id": 17107749,
"node_id": "MDQ6VXNlcjE3MTA3NzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/17107749?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/monk1337",
"html_url": "https://github.com/monk1337",
"followers_url": "https://api.github.com/users/monk1337/followers",
"following_url": "https://api.github.com/users/monk1337/following{/other_user}",
"gists_url": "https://api.github.com/users/monk1337/gists{/gist_id}",
"starred_url": "https://api.github.com/users/monk1337/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/monk1337/subscriptions",
"organizations_url": "https://api.github.com/users/monk1337/orgs",
"repos_url": "https://api.github.com/users/monk1337/repos",
"events_url": "https://api.github.com/users/monk1337/events{/privacy}",
"received_events_url": "https://api.github.com/users/monk1337/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | null | [] | null | [] | "2022-05-06T08:52:51" | "2022-09-30T11:51:28" | "2022-09-30T11:49:07" | CONTRIBUTOR | null | Updating readme in medmcqa dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4290/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4290/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4290",
"html_url": "https://github.com/huggingface/datasets/pull/4290",
"diff_url": "https://github.com/huggingface/datasets/pull/4290.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4290.patch",
"merged_at": "2022-09-30T11:49:07"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4288 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4288/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4288/comments | https://api.github.com/repos/huggingface/datasets/issues/4288/events | https://github.com/huggingface/datasets/pull/4288 | 1,226,821,732 | PR_kwDODunzps43XLKi | 4,288 | Add missing `faiss` import to fix https://github.com/huggingface/datasets/issues/4287 | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-05T15:21:49" | "2022-05-10T12:55:06" | "2022-05-10T12:09:48" | CONTRIBUTOR | null | This PR fixes the issue recently mentioned in https://github.com/huggingface/datasets/issues/4287 🤗 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4288/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4288/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4288",
"html_url": "https://github.com/huggingface/datasets/pull/4288",
"diff_url": "https://github.com/huggingface/datasets/pull/4288.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4288.patch",
"merged_at": "2022-05-10T12:09:48"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4287 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4287/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4287/comments | https://api.github.com/repos/huggingface/datasets/issues/4287/events | https://github.com/huggingface/datasets/issues/4287 | 1,226,806,652 | I_kwDODunzps5JH5V8 | 4,287 | "NameError: name 'faiss' is not defined" on `.add_faiss_index` when `device` is not None | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"So I managed to solve this by adding a missing `import faiss` in the `@staticmethod` defined in https://github.com/huggingface/datasets/blob/f51b6994db27ea69261ef919fb7775928f9ec10b/src/datasets/search.py#L305, triggered from https://github.com/huggingface/datasets/blob/f51b6994db27ea69261ef919fb7775928f9ec10b/src/datasets/search.py#L249 when trying to `ds_with_embeddings.add_faiss_index(column='embeddings', device=0)` with the code above.\r\n\r\nAs it seems that the `@staticmethod` doesn't recognize the `import faiss` defined in https://github.com/huggingface/datasets/blob/f51b6994db27ea69261ef919fb7775928f9ec10b/src/datasets/search.py#L261, so whenever the value of `device` is not None in https://github.com/huggingface/datasets/blob/71f76e0bdeaddadedc4f9c8d15cfff5a36d62f66/src/datasets/search.py#L438, that exception is triggered.\r\n\r\nSo on, adding `import faiss` inside https://github.com/huggingface/datasets/blob/71f76e0bdeaddadedc4f9c8d15cfff5a36d62f66/src/datasets/search.py#L305 right after the check of `device`'s value, solves the issue and lets you calculate the indices in GPU.\r\n\r\nI'll add the code in a PR linked to this issue in case you want to merge it!",
"Adding here the complete error traceback!\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/alvarobartt/lol.py\", line 12, in <module>\r\n ds_with_embeddings.add_faiss_index(column='embeddings', device=0) # default `device=None`\r\n File \"/home/alvarobartt/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 3656, in add_faiss_index\r\n super().add_faiss_index(\r\n File \"/home/alvarobartt/.local/lib/python3.9/site-packages/datasets/search.py\", line 478, in add_faiss_index\r\n faiss_index.add_vectors(self, column=column, train_size=train_size, faiss_verbose=True)\r\n File \"/home/alvarobartt/.local/lib/python3.9/site-packages/datasets/search.py\", line 281, in add_vectors\r\n self.faiss_index = self._faiss_index_to_device(index, self.device)\r\n File \"/home/alvarobartt/.local/lib/python3.9/site-packages/datasets/search.py\", line 327, in _faiss_index_to_device\r\n faiss_res = faiss.StandardGpuResources()\r\nNameError: name 'faiss' is not defined\r\n```",
"Closed as https://github.com/huggingface/datasets/pull/4288 already merged! :hugs:"
] | "2022-05-05T15:09:45" | "2022-05-10T13:53:19" | "2022-05-10T13:53:19" | CONTRIBUTOR | null | ## Describe the bug
When using `datasets` to calculate the FAISS indices of a dataset, the exception `NameError: name 'faiss' is not defined` is raised when trying to build the index on a GPU device, so `.add_faiss_index(..., device=0)` fails with that exception.
This assumes that `datasets` and `faiss-gpu` are properly installed, as well as all the required CUDA drivers.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer
import torch
torch.set_grad_enabled(False)
ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
from datasets import load_dataset
ds = load_dataset('crime_and_punish', split='train[:100]')
ds_with_embeddings = ds.map(lambda example: {'embeddings': ctx_encoder(**ctx_tokenizer(example["line"], return_tensors="pt"))[0][0].numpy()})
ds_with_embeddings.add_faiss_index(column='embeddings', device=0) # default `device=None`
```
## Expected results
A FAISS index built over the `embeddings` column and added to the dataset.
## Actual results
An exception is triggered with the following message `NameError: name 'faiss' is not defined`.
## Environment info
- `datasets` version: 2.1.0
- Platform: Linux-5.13.0-1022-azure-x86_64-with-glibc2.31
- Python version: 3.9.12
- PyArrow version: 7.0.0
- Pandas version: 1.4.2
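The comments above trace the error to a lazy `import faiss` that is local to one method, so the name is unbound inside the static helper. Below is a minimal, self-contained sketch of that failure mode (using the standard-library `json` module as a stand-in for `faiss`, since the exact `datasets.search` internals are assumptions here), together with the one-line local-import fix:

```python
# Stand-in reproduction: "json" plays the role of "faiss"; class and
# method names are hypothetical, not copied from datasets.search.
class Index:
    def add_vectors(self):
        import json  # lazy import: "json" is bound only in THIS scope
        self.payload = json.dumps({"ok": True})

    @staticmethod
    def to_device_broken():
        # The lazy import above does not make "json" a module global,
        # so referencing it here raises NameError.
        return json.dumps({})

    @staticmethod
    def to_device_fixed():
        import json  # the fix: re-import inside the static method
        return json.dumps({})

idx = Index()
idx.add_vectors()
try:
    Index.to_device_broken()
except NameError as err:
    print(err)  # name 'json' is not defined
print(Index.to_device_fixed())  # {}
```

The same scoping rule explains why adding `import faiss` inside the static device-transfer helper (as done in #4288) resolves the `NameError`.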
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4287/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4287/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4286 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4286/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4286/comments | https://api.github.com/repos/huggingface/datasets/issues/4286/events | https://github.com/huggingface/datasets/pull/4286 | 1,226,758,621 | PR_kwDODunzps43W-DI | 4,286 | Add Lahnda language tag | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-05T14:34:20" | "2022-05-10T12:10:04" | "2022-05-10T12:02:38" | CONTRIBUTOR | null | This language is present in [Wikimedia's WIT](https://huggingface.co/datasets/wikimedia/wit_base) dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4286/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4286/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4286",
"html_url": "https://github.com/huggingface/datasets/pull/4286",
"diff_url": "https://github.com/huggingface/datasets/pull/4286.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4286.patch",
"merged_at": "2022-05-10T12:02:37"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4285 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4285/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4285/comments | https://api.github.com/repos/huggingface/datasets/issues/4285/events | https://github.com/huggingface/datasets/pull/4285 | 1,226,374,831 | PR_kwDODunzps43VtEa | 4,285 | Update LexGLUE README.md | {
"login": "iliaschalkidis",
"id": 1626984,
"node_id": "MDQ6VXNlcjE2MjY5ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iliaschalkidis",
"html_url": "https://github.com/iliaschalkidis",
"followers_url": "https://api.github.com/users/iliaschalkidis/followers",
"following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}",
"gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions",
"organizations_url": "https://api.github.com/users/iliaschalkidis/orgs",
"repos_url": "https://api.github.com/users/iliaschalkidis/repos",
"events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}",
"received_events_url": "https://api.github.com/users/iliaschalkidis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-05T08:36:50" | "2022-05-05T13:39:04" | "2022-05-05T13:33:35" | CONTRIBUTOR | null | Update the leaderboard based on the latest results presented in the ACL 2022 version of the article. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4285/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4285/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4285",
"html_url": "https://github.com/huggingface/datasets/pull/4285",
"diff_url": "https://github.com/huggingface/datasets/pull/4285.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4285.patch",
"merged_at": "2022-05-05T13:33:35"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4283 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4283/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4283/comments | https://api.github.com/repos/huggingface/datasets/issues/4283/events | https://github.com/huggingface/datasets/pull/4283 | 1,225,686,988 | PR_kwDODunzps43Tnxo | 4,283 | Fix filesystem docstring | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-04T17:42:42" | "2022-05-06T16:32:02" | "2022-05-06T06:22:17" | MEMBER | null | This PR untangles the `S3FileSystem` docstring so the [parameters](https://huggingface.co/docs/datasets/master/en/package_reference/main_classes#parameters) are properly displayed. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4283/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4283/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4283",
"html_url": "https://github.com/huggingface/datasets/pull/4283",
"diff_url": "https://github.com/huggingface/datasets/pull/4283.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4283.patch",
"merged_at": "2022-05-06T06:22:17"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4282 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4282/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4282/comments | https://api.github.com/repos/huggingface/datasets/issues/4282/events | https://github.com/huggingface/datasets/pull/4282 | 1,225,616,545 | PR_kwDODunzps43TZYL | 4,282 | Don't do unnecessary list type casting to avoid replacing None values by empty lists | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-04T16:37:01" | "2022-05-06T10:43:58" | "2022-05-06T10:37:00" | MEMBER | null | In certain cases, `None` values are replaced by empty lists when casting feature types.
It happens every time you cast an array of nested lists like [None, [0, 1, 2, 3]] to a different type (e.g. to change the integer precision): in that case you'd get [[], [0, 1, 2, 3]]. This issue comes from PyArrow; see the discussion in https://github.com/huggingface/datasets/issues/3676
This issue also happens when no type casting is needed, because casting is supposed to be a no-op in this case. But as https://github.com/huggingface/datasets/issues/3676 showed, that's not the case: `None` values are replaced by empty lists even when casting to the exact same type.
In this PR I work around this bug in the case where no type casting is needed. In particular, I call `pa.ListArray.from_arrays` only when necessary.
I also added a warning for when some `None` values are effectively replaced by empty lists. I wanted to raise an error in this case, but maybe we should wait for a major update to do so.
This PR fixes this particular case, which occurs in `run_qa.py` in `transformers`:
```python
from datasets import Dataset
ds = Dataset.from_dict({"a": range(4)})
ds = ds.map(lambda x: {"b": [[None, [0]]]}, batched=True, batch_size=1, remove_columns=["a"])
print(ds.to_pandas())
# before:
# b
# 0 [None, [0]]
# 1 [[], [0]]
# 2 [[], [0]]
# 3 [[], [0]]
#
# now:
# b
# 0 [None, [0]]
# 1 [None, [0]]
# 2 [None, [0]]
# 3 [None, [0]]
```
cc @sgugger | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4282/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4282/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4282",
"html_url": "https://github.com/huggingface/datasets/pull/4282",
"diff_url": "https://github.com/huggingface/datasets/pull/4282.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4282.patch",
"merged_at": "2022-05-06T10:37:00"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4281 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4281/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4281/comments | https://api.github.com/repos/huggingface/datasets/issues/4281/events | https://github.com/huggingface/datasets/pull/4281 | 1,225,556,939 | PR_kwDODunzps43TNBm | 4,281 | Remove a copy-paste sentence in dataset cards | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-04T15:41:55" | "2022-05-06T08:38:03" | "2022-05-04T18:33:16" | MEMBER | null | Remove the following copy-paste sentence from dataset cards:
```
We show detailed information for up to 5 configurations of the dataset.
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4281/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4281/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4281",
"html_url": "https://github.com/huggingface/datasets/pull/4281",
"diff_url": "https://github.com/huggingface/datasets/pull/4281.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4281.patch",
"merged_at": "2022-05-04T18:33:16"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4280 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4280/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4280/comments | https://api.github.com/repos/huggingface/datasets/issues/4280/events | https://github.com/huggingface/datasets/pull/4280 | 1,225,446,844 | PR_kwDODunzps43S2xg | 4,280 | Add missing features to commonsense_qa dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-04T14:24:26" | "2022-05-06T14:23:57" | "2022-05-06T14:16:46" | MEMBER | null | Fix partially #4275. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4280/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4280/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4280",
"html_url": "https://github.com/huggingface/datasets/pull/4280",
"diff_url": "https://github.com/huggingface/datasets/pull/4280.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4280.patch",
"merged_at": "2022-05-06T14:16:46"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4279 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4279/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4279/comments | https://api.github.com/repos/huggingface/datasets/issues/4279/events | https://github.com/huggingface/datasets/pull/4279 | 1,225,300,273 | PR_kwDODunzps43SXw5 | 4,279 | Update minimal PyArrow version warning | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-04T12:26:09" | "2022-05-05T08:50:58" | "2022-05-05T08:43:47" | CONTRIBUTOR | null | Update the minimal PyArrow version warning (should've been part of #4250). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4279/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4279/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4279",
"html_url": "https://github.com/huggingface/datasets/pull/4279",
"diff_url": "https://github.com/huggingface/datasets/pull/4279.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4279.patch",
"merged_at": "2022-05-05T08:43:47"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4278 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4278/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4278/comments | https://api.github.com/repos/huggingface/datasets/issues/4278/events | https://github.com/huggingface/datasets/pull/4278 | 1,225,122,123 | PR_kwDODunzps43RyTs | 4,278 | Add missing features to openbookqa dataset for additional config | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-04T09:22:50" | "2022-05-06T13:13:20" | "2022-05-06T13:06:01" | MEMBER | null | Fix partially #4276. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4278/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4278/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4278",
"html_url": "https://github.com/huggingface/datasets/pull/4278",
"diff_url": "https://github.com/huggingface/datasets/pull/4278.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4278.patch",
"merged_at": "2022-05-06T13:06:01"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4277 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4277/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4277/comments | https://api.github.com/repos/huggingface/datasets/issues/4277/events | https://github.com/huggingface/datasets/pull/4277 | 1,225,002,286 | PR_kwDODunzps43RZV9 | 4,277 | Enable label alignment for token classification datasets | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-04T07:15:16" | "2022-05-06T15:42:15" | "2022-05-06T15:36:31" | MEMBER | null | This PR extends the `Dataset.align_labels_with_mapping()` method to support alignment of label mappings between datasets and models for token classification (e.g. NER).
Example of usage:
```python
from datasets import load_dataset
ner_ds = load_dataset("conll2003", split="train")
# returns [3, 0, 7, 0, 0, 0, 7, 0, 0]
ner_ds[0]["ner_tags"]
# hypothetical model mapping with O <--> B-LOC
label2id = {
"B-LOC": "0",
"B-MISC": "7",
"B-ORG": "3",
"B-PER": "1",
"I-LOC": "6",
"I-MISC": "8",
"I-ORG": "4",
"I-PER": "2",
"O": "5"
}
ner_aligned_ds = ner_ds.align_labels_with_mapping(label2id, "ner_tags")
# returns [3, 5, 7, 5, 5, 5, 7, 5, 5]
ner_aligned_ds[0]["ner_tags"]
```
Context: we need this in AutoTrain to automatically align datasets / models during evaluation. cc @abhishekkrthakur | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4277/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4277/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4277",
"html_url": "https://github.com/huggingface/datasets/pull/4277",
"diff_url": "https://github.com/huggingface/datasets/pull/4277.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4277.patch",
"merged_at": "2022-05-06T15:36:31"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4276 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4276/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4276/comments | https://api.github.com/repos/huggingface/datasets/issues/4276/events | https://github.com/huggingface/datasets/issues/4276 | 1,224,949,252 | I_kwDODunzps5JAz4E | 4,276 | OpenBookQA has missing and inconsistent field names | {
"login": "vblagoje",
"id": 458335,
"node_id": "MDQ6VXNlcjQ1ODMzNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/458335?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vblagoje",
"html_url": "https://github.com/vblagoje",
"followers_url": "https://api.github.com/users/vblagoje/followers",
"following_url": "https://api.github.com/users/vblagoje/following{/other_user}",
"gists_url": "https://api.github.com/users/vblagoje/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vblagoje/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vblagoje/subscriptions",
"organizations_url": "https://api.github.com/users/vblagoje/orgs",
"repos_url": "https://api.github.com/users/vblagoje/repos",
"events_url": "https://api.github.com/users/vblagoje/events{/privacy}",
"received_events_url": "https://api.github.com/users/vblagoje/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @vblagoje.\r\n\r\nIndeed, I noticed some of these issues while reviewing this PR:\r\n- #4259 \r\n\r\nThis is in my TODO list. ",
"Ok, awesome @albertvillanova How about #4275 ?",
"On the other hand, I am not sure if we should always preserve the original nested structure. I think we should also consider other factors as convenience or consistency.\r\n\r\nFor example, other datasets also flatten \"question.stem\" into \"question\":\r\n- ai2_arc:\r\n ```python\r\n question = data[\"question\"][\"stem\"]\r\n choices = data[\"question\"][\"choices\"]\r\n text_choices = [choice[\"text\"] for choice in choices]\r\n label_choices = [choice[\"label\"] for choice in choices]\r\n yield id_, {\r\n \"id\": id_,\r\n \"answerKey\": answerkey,\r\n \"question\": question,\r\n \"choices\": {\"text\": text_choices, \"label\": label_choices},\r\n }\r\n ```\r\n- commonsense_qa:\r\n ```python\r\n question = data[\"question\"]\r\n stem = question[\"stem\"]\r\n yield id_, {\r\n \"answerKey\": answerkey,\r\n \"question\": stem,\r\n \"choices\": {\"label\": labels, \"text\": texts},\r\n }\r\n ```\r\n- cos_e:\r\n ```python\r\n \"question\": cqa[\"question\"][\"stem\"],\r\n ```\r\n- qasc\r\n- quartz\r\n- wiqa\r\n\r\nExceptions:\r\n- exams\r\n\r\nI think we should agree on a CONVENIENT format for QA and use always CONSISTENTLY the same.",
"@albertvillanova I agree that we should be consistent. In the last month, I have come across tons of code that deals with OpenBookQA and CommonSenseQA and all of that code relies on the original data format structure. We can't expect users to adopt HF Datasets if we arbitrarily change the structure of the format just because we think something makes more sense. I am in that position now (downloading original data rather than using HF Datasets) and undoubtedly it hinders HF Datasets' widespread use and adoption. Missing fields like in the case of #4275 is definitely bad and not even up for a discussion IMHO! cc @lhoestq ",
"I'm opening a PR that adds the missing fields.\r\n\r\nLet's agree on the feature structure: @lhoestq @mariosasko @polinaeterna ",
"IMO we should always try to preserve the original structure unless there is a good reason not to (and I don't see one in this case).",
"I agree with @mariosasko . The transition to the original format could be done in one PR for the next minor release, clearly documenting all dataset changes just as @albertvillanova outlined them above and perhaps even providing a per dataset util method to convert the new valid format to the old for backward compatibility. Users who relied on the old format will update their code with either the util method for a quick fix or slightly more elaborate for the new. ",
"I don't have a strong opinion on this, besides the fact that whatever decision we agree on, should be applied to all datasets.\r\n\r\nThere is always the tension between:\r\n- preserving each dataset original structure (which has the advantage of not forcing users to learn other structure for the same dataset),\r\n- and on the other hand performing some kind of standardization/harmonization depending on the task (this has the advantage that once learnt, the same structure applies to all datasets; this has been done for e.g. POS tagging: all datasets have been adapted to a certain \"standard\" structure).\r\n - Another advantage: datasets can easily be interchanged (or joined) to be used by the same model\r\n\r\nRecently, in the BigScience BioMedical hackathon, they adopted a different approach:\r\n- they implement a \"source\" config, respecting the original structure as much as possible\r\n- they implement additional config for each task, with a \"standard\" nested structure per task, which is most useful for users.",
"@albertvillanova, thanks for the detailed answer and the new perspectives. I understand the friction for the best design approach much better now. Ultimately, it is essential to include all the missing fields and the correct data first. Whatever approach is determined to be optimal is important but not as crucial once all the data is there, and users can create lambda functions to create whatever structure serves them best. ",
"Datasets are not tracked in this repository anymore. I think we must move this thread to the [discussions tab of the dataset](https://huggingface.co/datasets/openbookqa/discussions)",
"Indeed @osbm thanks. I'm closing this issue if it's fine for you all then"
] | "2022-05-04T05:51:52" | "2022-10-11T17:11:53" | "2022-10-05T13:50:03" | CONTRIBUTOR | null | ## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'fact1': row['fact1'],
- 'humanScore': row['humanScore'],
- 'clarity': row['clarity'],
- 'turkIdAnonymized': row['turkIdAnonymized']
3. Ensure the structure and every data item in the original OpenBookQA matches our OpenBookQA version.
## Expected results
The structure and every data item in the original OpenBookQA matches our OpenBookQA version.
## Actual results
TBD
## Environment info
- `datasets` version: 2.1.0
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.13
- PyArrow version: 7.0.0
- Pandas version: 1.4.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4276/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4276/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4274 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4274/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4274/comments | https://api.github.com/repos/huggingface/datasets/issues/4274/events | https://github.com/huggingface/datasets/pull/4274 | 1,224,740,303 | PR_kwDODunzps43Qm2w | 4,274 | Add API code examples for IterableDataset | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [] | "2022-05-03T22:44:17" | "2022-05-04T16:29:32" | "2022-05-04T16:22:04" | MEMBER | null | This PR adds API code examples for `IterableDataset` and `IterableDatasetDicts`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4274/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4274/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4274",
"html_url": "https://github.com/huggingface/datasets/pull/4274",
"diff_url": "https://github.com/huggingface/datasets/pull/4274.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4274.patch",
"merged_at": "2022-05-04T16:22:04"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4273 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4273/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4273/comments | https://api.github.com/repos/huggingface/datasets/issues/4273/events | https://github.com/huggingface/datasets/pull/4273 | 1,224,681,036 | PR_kwDODunzps43QaA6 | 4,273 | leadboard info added for TNE | {
"login": "yanaiela",
"id": 8031035,
"node_id": "MDQ6VXNlcjgwMzEwMzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8031035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yanaiela",
"html_url": "https://github.com/yanaiela",
"followers_url": "https://api.github.com/users/yanaiela/followers",
"following_url": "https://api.github.com/users/yanaiela/following{/other_user}",
"gists_url": "https://api.github.com/users/yanaiela/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yanaiela/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanaiela/subscriptions",
"organizations_url": "https://api.github.com/users/yanaiela/orgs",
"repos_url": "https://api.github.com/users/yanaiela/repos",
"events_url": "https://api.github.com/users/yanaiela/events{/privacy}",
"received_events_url": "https://api.github.com/users/yanaiela/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-03T21:35:41" | "2022-05-05T13:25:24" | "2022-05-05T13:18:13" | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4273/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4273/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4273",
"html_url": "https://github.com/huggingface/datasets/pull/4273",
"diff_url": "https://github.com/huggingface/datasets/pull/4273.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4273.patch",
"merged_at": "2022-05-05T13:18:13"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4272 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4272/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4272/comments | https://api.github.com/repos/huggingface/datasets/issues/4272/events | https://github.com/huggingface/datasets/pull/4272 | 1,224,635,660 | PR_kwDODunzps43QQQt | 4,272 | Fix typo in logging docs | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-03T20:47:57" | "2022-05-04T15:42:27" | "2022-05-04T06:58:36" | MEMBER | null | This PR fixes #4271. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4272/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4272/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4272",
"html_url": "https://github.com/huggingface/datasets/pull/4272",
"diff_url": "https://github.com/huggingface/datasets/pull/4272.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4272.patch",
"merged_at": "2022-05-04T06:58:35"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4271 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4271/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4271/comments | https://api.github.com/repos/huggingface/datasets/issues/4271/events | https://github.com/huggingface/datasets/issues/4271 | 1,224,404,403 | I_kwDODunzps5I-u2z | 4,271 | A typo in docs of datasets.disable_progress_bar | {
"login": "jiangwy99",
"id": 39762734,
"node_id": "MDQ6VXNlcjM5NzYyNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/39762734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiangwy99",
"html_url": "https://github.com/jiangwy99",
"followers_url": "https://api.github.com/users/jiangwy99/followers",
"following_url": "https://api.github.com/users/jiangwy99/following{/other_user}",
"gists_url": "https://api.github.com/users/jiangwy99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiangwy99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiangwy99/subscriptions",
"organizations_url": "https://api.github.com/users/jiangwy99/orgs",
"repos_url": "https://api.github.com/users/jiangwy99/repos",
"events_url": "https://api.github.com/users/jiangwy99/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiangwy99/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi! Thanks for catching and reporting the typo, a PR has been opened to fix it :)"
] | "2022-05-03T17:44:56" | "2022-05-04T06:58:35" | "2022-05-04T06:58:35" | NONE | null | ## Describe the bug
in the docs of V2.1.0 datasets.disable_progress_bar, we should replace "enable" with "disable". | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4271/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4271/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4270 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4270/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4270/comments | https://api.github.com/repos/huggingface/datasets/issues/4270/events | https://github.com/huggingface/datasets/pull/4270 | 1,224,244,460 | PR_kwDODunzps43PC5V | 4,270 | Fix style in openbookqa dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-03T15:21:34" | "2022-05-06T08:38:06" | "2022-05-03T16:20:52" | MEMBER | null | CI in PR:
- #4259
was green, but after merging it to master, a code quality error appeared. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4270/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4270/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4270",
"html_url": "https://github.com/huggingface/datasets/pull/4270",
"diff_url": "https://github.com/huggingface/datasets/pull/4270.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4270.patch",
"merged_at": "2022-05-03T16:20:52"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4269 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4269/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4269/comments | https://api.github.com/repos/huggingface/datasets/issues/4269/events | https://github.com/huggingface/datasets/pull/4269 | 1,223,865,145 | PR_kwDODunzps43Nzwh | 4,269 | Add license and point of contact to big_patent dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-03T09:24:07" | "2022-05-06T08:38:09" | "2022-05-03T11:16:19" | MEMBER | null | Update metadata of big_patent dataset with:
- license
- point of contact | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4269/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4269/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4269",
"html_url": "https://github.com/huggingface/datasets/pull/4269",
"diff_url": "https://github.com/huggingface/datasets/pull/4269.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4269.patch",
"merged_at": "2022-05-03T11:16:19"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4268 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4268/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4268/comments | https://api.github.com/repos/huggingface/datasets/issues/4268/events | https://github.com/huggingface/datasets/issues/4268 | 1,223,331,964 | I_kwDODunzps5I6pB8 | 4,268 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered | {
"login": "i-am-neo",
"id": 102043285,
"node_id": "U_kgDOBhUOlQ",
"avatar_url": "https://avatars.githubusercontent.com/u/102043285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/i-am-neo",
"html_url": "https://github.com/i-am-neo",
"followers_url": "https://api.github.com/users/i-am-neo/followers",
"following_url": "https://api.github.com/users/i-am-neo/following{/other_user}",
"gists_url": "https://api.github.com/users/i-am-neo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/i-am-neo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/i-am-neo/subscriptions",
"organizations_url": "https://api.github.com/users/i-am-neo/orgs",
"repos_url": "https://api.github.com/users/i-am-neo/repos",
"events_url": "https://api.github.com/users/i-am-neo/events{/privacy}",
"received_events_url": "https://api.github.com/users/i-am-neo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"It would help a lot to be able to preview the dataset - I'd like to see if the pronunciations are in the dataset, eg. for [\"word\"](https://en.wiktionary.org/wiki/word),\r\n\r\nPronunciation\r\n([Received Pronunciation](https://en.wikipedia.org/wiki/Received_Pronunciation)) [IPA](https://en.wiktionary.org/wiki/Wiktionary:International_Phonetic_Alphabet)([key](https://en.wiktionary.org/wiki/Appendix:English_pronunciation)): /wɜːd/\r\n([General American](https://en.wikipedia.org/wiki/General_American)) [enPR](https://en.wiktionary.org/wiki/Appendix:English_pronunciation): wûrd, [IPA](https://en.wiktionary.org/wiki/Wiktionary:International_Phonetic_Alphabet)([key](https://en.wiktionary.org/wiki/Appendix:English_pronunciation)): /wɝd/",
"Hi @i-am-neo, thanks for reporting.\r\n\r\nNormally this dataset should be private and not accessible for public use. @cakiki, @lvwerra, any reason why is it public? I see many other Wikimedia datasets are also public.\r\n\r\nAlso note that last commit \"Add metadata\" (https://huggingface.co/datasets/bigscience-catalogue-lm-data/lm_en_wiktionary_filtered/commit/dc2f458dab50e00f35c94efb3cd4009996858609) introduced buggy data files (`data/file-01.jsonl.gz.lock`, `data/file-01.jsonl.gz.lock.lock`). The same bug appears in other datasets as well.\r\n\r\n@i-am-neo, please note that in the near future we are planning to make public all datasets used for the BigScience project (at least all of them whose license allows to do that). Once public, they will be accessible for all the NLP community.",
"Ah this must be a bug introduced at creation time since the repos were created programmatically; I'll go ahead and make them private; sorry about that!",
"All datasets are private now. \r\n\r\nRe:that bug I think we're currently avoiding it by avoiding verifications. (i.e. `ignore_verifications=True`)",
"Thanks a lot, @cakiki.\r\n\r\n@i-am-neo, I'm closing this issue for now because the dataset is not publicly available yet. Just stay tuned, as we will soon release all the BigScience open-license datasets. ",
"Thanks for letting me know, @albertvillanova @cakiki.\r\nAny chance of having a subset alpha version in the meantime? \r\nI only need two dicts out of wiktionary: 1) phoneme(as key): word, and 2) word(as key): its phonemes.\r\n\r\nWould like to use it for a mini-poc [Robust ASR](https://github.com/huggingface/transformers/issues/13162#issuecomment-1096881290) decoding, cc @patrickvonplaten. \r\n\r\n(Patrick, possible to email you so as not to litter github with comments? I have some observations after experiments training hubert on some YT AMI-like data (11.44% wer). Also wonder if a robust ASR is on your/HG's roadmap). Thanks!",
"Hey @i-am-neo,\r\n\r\nCool to hear that you're working on Robust ASR! Feel free to drop me a mail :-)",
"@i-am-neo This particular subset of the dataset was taken from the [CirrusSearch dumps](https://dumps.wikimedia.org/other/cirrussearch/current/)\r\nYou're specifically after the [enwiktionary-20220425-cirrussearch-content.json.gz](https://dumps.wikimedia.org/other/cirrussearch/current/enwiktionary-20220425-cirrussearch-content.json.gz) file",
"thanks @cakiki ! <del>I could access the gz file yesterday (but neglected to tuck it away somewhere safe), and today the link is throwing a 404. Can you help? </del> Never mind, got it!",
"thanks @patrickvonplaten. will do - getting my observations together."
] | "2022-05-02T20:34:25" | "2022-05-06T15:53:30" | "2022-05-03T11:23:48" | NONE | null | ## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
## Expected results
A clear and concise description of the expected results.
## Actual results
```
ExpectedMoreDownloadedFiles Traceback (most recent call last)
[<ipython-input-62-4ac5cf959477>](https://localhost:8080/#) in <module>()
1 from datasets import load_dataset
2
----> 3 dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
3 frames
[/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py](https://localhost:8080/#) in verify_checksums(expected_checksums, recorded_checksums, verification_name)
31 return
32 if len(set(expected_checksums) - set(recorded_checksums)) > 0:
---> 33 raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))
34 if len(set(recorded_checksums) - set(expected_checksums)) > 0:
35 raise UnexpectedDownloadedFile(str(set(recorded_checksums) - set(expected_checksums)))
ExpectedMoreDownloadedFiles: {'/home/leandro/catalogue_data/datasets/lm_en_wiktionary_filtered/data/file-01.jsonl.gz', '/home/leandro/catalogue_data/datasets/lm_en_wiktionary_filtered/data/file-01.jsonl.gz.lock'}
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4268/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4268/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4267 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4267/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4267/comments | https://api.github.com/repos/huggingface/datasets/issues/4267/events | https://github.com/huggingface/datasets/pull/4267 | 1,223,214,275 | PR_kwDODunzps43LzOR | 4,267 | Replace data URL in SAMSum dataset within the same repository | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-02T18:38:08" | "2022-05-06T08:38:13" | "2022-05-02T19:03:49" | MEMBER | null | Replace data URL with one in the same repository. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4267/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4267/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4267",
"html_url": "https://github.com/huggingface/datasets/pull/4267",
"diff_url": "https://github.com/huggingface/datasets/pull/4267.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4267.patch",
"merged_at": "2022-05-02T19:03:49"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4266 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4266/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4266/comments | https://api.github.com/repos/huggingface/datasets/issues/4266/events | https://github.com/huggingface/datasets/pull/4266 | 1,223,116,436 | PR_kwDODunzps43LeXK | 4,266 | Add HF Speech Bench to Librispeech Dataset Card | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-02T16:59:31" | "2022-05-05T08:47:20" | "2022-05-05T08:40:09" | CONTRIBUTOR | null | Adds the HF Speech Bench to Librispeech Dataset Card in place of the Papers With Code Leaderboard. Should improve usage and visibility of this leaderboard! Wondering whether this can also be done for [Common Voice 7](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) and [8](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) through someone with permissions?
cc @patrickvonplaten: more leaderboard promotion! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4266/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4266/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4266",
"html_url": "https://github.com/huggingface/datasets/pull/4266",
"diff_url": "https://github.com/huggingface/datasets/pull/4266.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4266.patch",
"merged_at": "2022-05-05T08:40:09"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4263 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4263/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4263/comments | https://api.github.com/repos/huggingface/datasets/issues/4263/events | https://github.com/huggingface/datasets/pull/4263 | 1,222,723,083 | PR_kwDODunzps43KLnD | 4,263 | Rename imagenet2012 -> imagenet-1k | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-02T10:26:21" | "2022-05-02T17:50:46" | "2022-05-02T16:32:57" | MEMBER | null | On the Hugging Face Hub, users refer to imagenet2012 (from #4178 ) as imagenet-1k in their model tags.
To correctly link models to imagenet, we should rename this dataset `imagenet-1k`.
Later we can add `imagenet-21k` as a new dataset if we want.
Once this one is merged we can delete the `imagenet2012` dataset repository on the Hub.
EDIT: to complete the rationale on why we should name it `imagenet-1k`:
If users specifically added the tag `imagenet-1k` , then it could be for two reasons (not sure which one is predominant), either they
- wanted to make it explicit that it’s not 21k -> the distinction is important for the community
- or they have been following this convention from other models -> the convention implicitly exists already | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4263/reactions",
"total_count": 4,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4263/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4263",
"html_url": "https://github.com/huggingface/datasets/pull/4263",
"diff_url": "https://github.com/huggingface/datasets/pull/4263.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4263.patch",
"merged_at": "2022-05-02T16:32:57"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4262 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4262/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4262/comments | https://api.github.com/repos/huggingface/datasets/issues/4262/events | https://github.com/huggingface/datasets/pull/4262 | 1,222,130,749 | PR_kwDODunzps43IOye | 4,262 | Add YAML tags to Dataset Card rotten tomatoes | {
"login": "mo6zes",
"id": 10004251,
"node_id": "MDQ6VXNlcjEwMDA0MjUx",
"avatar_url": "https://avatars.githubusercontent.com/u/10004251?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mo6zes",
"html_url": "https://github.com/mo6zes",
"followers_url": "https://api.github.com/users/mo6zes/followers",
"following_url": "https://api.github.com/users/mo6zes/following{/other_user}",
"gists_url": "https://api.github.com/users/mo6zes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mo6zes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mo6zes/subscriptions",
"organizations_url": "https://api.github.com/users/mo6zes/orgs",
"repos_url": "https://api.github.com/users/mo6zes/repos",
"events_url": "https://api.github.com/users/mo6zes/events{/privacy}",
"received_events_url": "https://api.github.com/users/mo6zes/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-05-01T11:59:08" | "2022-05-03T14:27:33" | "2022-05-03T14:20:35" | CONTRIBUTOR | null | The dataset card for the rotten tomatoes / MR movie review dataset had some missing YAML tags. Hopefully, this also improves the visibility of this dataset now that paperswithcode and huggingface link to each other. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4262/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4262/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4262",
"html_url": "https://github.com/huggingface/datasets/pull/4262",
"diff_url": "https://github.com/huggingface/datasets/pull/4262.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4262.patch",
"merged_at": "2022-05-03T14:20:35"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4261 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4261/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4261/comments | https://api.github.com/repos/huggingface/datasets/issues/4261/events | https://github.com/huggingface/datasets/issues/4261 | 1,221,883,779 | I_kwDODunzps5I1HeD | 4,261 | data leakage in `webis/conclugen` dataset | {
"login": "xflashxx",
"id": 54585776,
"node_id": "MDQ6VXNlcjU0NTg1Nzc2",
"avatar_url": "https://avatars.githubusercontent.com/u/54585776?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xflashxx",
"html_url": "https://github.com/xflashxx",
"followers_url": "https://api.github.com/users/xflashxx/followers",
"following_url": "https://api.github.com/users/xflashxx/following{/other_user}",
"gists_url": "https://api.github.com/users/xflashxx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xflashxx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xflashxx/subscriptions",
"organizations_url": "https://api.github.com/users/xflashxx/orgs",
"repos_url": "https://api.github.com/users/xflashxx/repos",
"events_url": "https://api.github.com/users/xflashxx/events{/privacy}",
"received_events_url": "https://api.github.com/users/xflashxx/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @xflashxx, thanks for reporting.\r\n\r\nPlease note that this dataset was generated and shared by Webis Group: https://huggingface.co/webis\r\n\r\nWe are contacting the dataset owners to inform them about the issue you found. We'll keep you updated of their reply.",
"i'd suggest just pinging the authors here in the issue if possible?",
"Thanks for reporting this @xflashxx. I'll have a look and get back to you on this.",
"Hi @xflashxx and @albertvillanova,\r\n\r\nI have updated the files with de-duplicated splits. Apparently the debate portals from which part of the examples were sourced had unique timestamps for some examples (up to 6%; updated counts in the README) without any actual content updated that lead to \"new\" items. The length of `ids_validation` and `ids_testing` is zero.\r\n\r\nRegarding impact on scores:\r\n1. We employed automatic evaluation (on a separate set of 1000 examples) only to justify the exclusion of the smaller models for manual evaluation (due to budget constraints). I am confident the ranking still stands (unsurprisingly, the bigger models doing better than those trained on the smaller splits). We also highlight this in the paper. \r\n\r\n2. The examples used for manual evaluation have no overlap with any splits (also because they do not have any ground truth as we applied the trained models on an unlabeled sample to test its practical usage). I've added these two files to the dataset repository.\r\n\r\nHope this helps!",
"Thanks @shahbazsyed for your fast fix.\r\n\r\nAs a side note:\r\n- Your email appearing as Point of Contact in the dataset README has a typo: @uni.leipzig.de instead of @uni-leipzig.de\r\n- Your commits on the Hub are not linked to your profile on the Hub: this is because we use the email address to make this link; the email address used in your commit author and the email address set on your Hub account settings."
] | "2022-04-30T17:43:37" | "2022-05-03T06:04:26" | "2022-05-03T06:04:26" | NONE | null | ## Describe the bug
Some samples (argument-conclusion pairs) in the *training* split of the `webis/conclugen` dataset are present in both the *validation* and *test* splits, creating data leakage and distorting model results.
Furthermore, all splits contain duplicate samples.
## Steps to reproduce the bug
```python
from datasets import load_dataset
training = load_dataset("webis/conclugen", "base", split="train")
validation = load_dataset("webis/conclugen", "base", split="validation")
testing = load_dataset("webis/conclugen", "base", split="test")
# collect which sample id's are present in the training split
ids_validation = list()
ids_testing = list()
for train_sample in training:
train_argument = train_sample["argument"]
train_conclusion = train_sample["conclusion"]
train_id = train_sample["id"]
# test if current sample is in validation split
if train_argument in validation["argument"]:
for validation_sample in validation:
validation_argument = validation_sample["argument"]
validation_conclusion = validation_sample["conclusion"]
validation_id = validation_sample["id"]
if train_argument == validation_argument and train_conclusion == validation_conclusion:
ids_validation.append(validation_id)
# test if current sample is in test split
if train_argument in testing["argument"]:
for testing_sample in testing:
testing_argument = testing_sample["argument"]
testing_conclusion = testing_sample["conclusion"]
testing_id = testing_sample["id"]
if train_argument == testing_argument and train_conclusion == testing_conclusion:
ids_testing.append(testing_id)
```
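The nested scans above grow quadratically with split size. An equivalent but much faster check can be sketched with a set of `(argument, conclusion)` pairs; the toy splits below are hypothetical stand-ins for the real data loaded via `load_dataset`, used here only to illustrate the technique:

```python
# Toy splits standing in for the real dataset splits (contents are made up
# purely for illustration -- the real data comes from load_dataset above).
training = [
    {"id": "t1", "argument": "arg A", "conclusion": "concl 1"},
    {"id": "t2", "argument": "arg B", "conclusion": "concl 2"},
]
validation = [
    {"id": "v1", "argument": "arg A", "conclusion": "concl 1"},  # overlaps train
    {"id": "v2", "argument": "arg C", "conclusion": "concl 3"},
]

# One pass over train builds a set of (argument, conclusion) pairs,
# turning each membership test into O(1) instead of a nested scan.
train_pairs = {(s["argument"], s["conclusion"]) for s in training}
ids_validation = [s["id"] for s in validation
                  if (s["argument"], s["conclusion"]) in train_pairs]
print(ids_validation)  # ['v1']
```

The same set can be reused for the test split, so the whole leakage check costs a single pass over each split.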
## Expected results
Length of both lists `ids_validation` and `ids_testing` should be zero.
## Actual results
Length of `ids_validation` = `2556`
Length of `ids_testing` = `287`
Furthermore, there seem to be duplicate samples in (at least) the *training* split, since:
`print(len(set(ids_validation)))` = `950`
`print(len(set(ids_testing)))` = `101`
All in all, around 7% of the samples in each of the *validation* and *test* splits seem to be present in the *training* split.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.4
- Platform: macOS-12.3.1-arm64-arm-64bit
- Python version: 3.9.10
- PyArrow version: 7.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4261/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4261/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4260 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4260/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4260/comments | https://api.github.com/repos/huggingface/datasets/issues/4260/events | https://github.com/huggingface/datasets/pull/4260 | 1,221,830,292 | PR_kwDODunzps43HSfs | 4,260 | Add mr_polarity movie review sentiment classification | {
"login": "mo6zes",
"id": 10004251,
"node_id": "MDQ6VXNlcjEwMDA0MjUx",
"avatar_url": "https://avatars.githubusercontent.com/u/10004251?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mo6zes",
"html_url": "https://github.com/mo6zes",
"followers_url": "https://api.github.com/users/mo6zes/followers",
"following_url": "https://api.github.com/users/mo6zes/following{/other_user}",
"gists_url": "https://api.github.com/users/mo6zes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mo6zes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mo6zes/subscriptions",
"organizations_url": "https://api.github.com/users/mo6zes/orgs",
"repos_url": "https://api.github.com/users/mo6zes/repos",
"events_url": "https://api.github.com/users/mo6zes/events{/privacy}",
"received_events_url": "https://api.github.com/users/mo6zes/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-04-30T13:19:33" | "2022-04-30T14:16:25" | "2022-04-30T14:16:25" | CONTRIBUTOR | null | Add the MR (Movie Review) dataset. The original dataset contains sentences from Rotten Tomatoes labeled as either "positive" or "negative".
Homepage: [https://www.cs.cornell.edu/people/pabo/movie-review-data/](https://www.cs.cornell.edu/people/pabo/movie-review-data/)
paperswithcode: [https://paperswithcode.com/dataset/mr](https://paperswithcode.com/dataset/mr)
- [ ] I was not able to generate dummy data: the original dataset files have ".pos" and ".neg" as file extensions, so the auto-generator does not work. Is it fine like this, or should dummy data be added?
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4260/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4260/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4260",
"html_url": "https://github.com/huggingface/datasets/pull/4260",
"diff_url": "https://github.com/huggingface/datasets/pull/4260.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4260.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4259 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4259/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4259/comments | https://api.github.com/repos/huggingface/datasets/issues/4259/events | https://github.com/huggingface/datasets/pull/4259 | 1,221,768,025 | PR_kwDODunzps43HHGc | 4,259 | Fix bug in choices labels in openbookqa dataset | {
"login": "manandey",
"id": 6687858,
"node_id": "MDQ6VXNlcjY2ODc4NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6687858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manandey",
"html_url": "https://github.com/manandey",
"followers_url": "https://api.github.com/users/manandey/followers",
"following_url": "https://api.github.com/users/manandey/following{/other_user}",
"gists_url": "https://api.github.com/users/manandey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manandey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manandey/subscriptions",
"organizations_url": "https://api.github.com/users/manandey/orgs",
"repos_url": "https://api.github.com/users/manandey/repos",
"events_url": "https://api.github.com/users/manandey/events{/privacy}",
"received_events_url": "https://api.github.com/users/manandey/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-04-30T07:41:39" | "2022-05-04T06:31:31" | "2022-05-03T15:14:21" | CONTRIBUTOR | null | This PR fixes the Bug in the openbookqa dataset as mentioned in this issue #3550.
Fix #3550.
cc. @lhoestq @mariosasko | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4259/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4259/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4259",
"html_url": "https://github.com/huggingface/datasets/pull/4259",
"diff_url": "https://github.com/huggingface/datasets/pull/4259.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4259.patch",
"merged_at": "2022-05-03T15:14:21"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4258 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4258/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4258/comments | https://api.github.com/repos/huggingface/datasets/issues/4258/events | https://github.com/huggingface/datasets/pull/4258 | 1,221,637,727 | PR_kwDODunzps43Gstg | 4,258 | Fix/start token mask issue and update documentation | {
"login": "TristanThrush",
"id": 20826878,
"node_id": "MDQ6VXNlcjIwODI2ODc4",
"avatar_url": "https://avatars.githubusercontent.com/u/20826878?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TristanThrush",
"html_url": "https://github.com/TristanThrush",
"followers_url": "https://api.github.com/users/TristanThrush/followers",
"following_url": "https://api.github.com/users/TristanThrush/following{/other_user}",
"gists_url": "https://api.github.com/users/TristanThrush/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TristanThrush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TristanThrush/subscriptions",
"organizations_url": "https://api.github.com/users/TristanThrush/orgs",
"repos_url": "https://api.github.com/users/TristanThrush/repos",
"events_url": "https://api.github.com/users/TristanThrush/events{/privacy}",
"received_events_url": "https://api.github.com/users/TristanThrush/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-04-29T22:42:44" | "2022-05-02T16:33:20" | "2022-05-02T16:26:12" | MEMBER | null | This pr fixes a couple bugs:
1) the perplexity was calculated with a 0 in the attention mask for the start token, which was causing high perplexity scores that were not correct
2) the documentation was not updated | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4258/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4258/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4258",
"html_url": "https://github.com/huggingface/datasets/pull/4258",
"diff_url": "https://github.com/huggingface/datasets/pull/4258.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4258.patch",
"merged_at": "2022-05-02T16:26:12"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4257 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4257/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4257/comments | https://api.github.com/repos/huggingface/datasets/issues/4257/events | https://github.com/huggingface/datasets/pull/4257 | 1,221,393,137 | PR_kwDODunzps43GATC | 4,257 | Create metric card for Mahalanobis Distance | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-04-29T18:37:27" | "2022-05-02T14:50:18" | "2022-05-02T14:43:24" | CONTRIBUTOR | null | proposing a metric card to better explain how Mahalanobis distance works (last one for now :sweat_smile: | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4257/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4257/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4257",
"html_url": "https://github.com/huggingface/datasets/pull/4257",
"diff_url": "https://github.com/huggingface/datasets/pull/4257.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4257.patch",
"merged_at": "2022-05-02T14:43:24"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4256 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4256/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4256/comments | https://api.github.com/repos/huggingface/datasets/issues/4256/events | https://github.com/huggingface/datasets/pull/4256 | 1,221,379,625 | PR_kwDODunzps43F9Zw | 4,256 | Create metric card for MSE | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-04-29T18:21:22" | "2022-05-02T14:55:42" | "2022-05-02T14:48:47" | CONTRIBUTOR | null | Proposing a metric card for Mean Squared Error | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4256/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4256/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4256",
"html_url": "https://github.com/huggingface/datasets/pull/4256",
"diff_url": "https://github.com/huggingface/datasets/pull/4256.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4256.patch",
"merged_at": "2022-05-02T14:48:47"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4255 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4255/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4255/comments | https://api.github.com/repos/huggingface/datasets/issues/4255/events | https://github.com/huggingface/datasets/pull/4255 | 1,221,142,899 | PR_kwDODunzps43FHgR | 4,255 | No google drive URL for pubmed_qa | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-04-29T15:55:46" | "2022-04-29T16:24:55" | "2022-04-29T16:18:56" | MEMBER | null | I hosted the data files in https://huggingface.co/datasets/pubmed_qa. This is allowed because the data is under the MIT license.
cc @stas00 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4255/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4255/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4255",
"html_url": "https://github.com/huggingface/datasets/pull/4255",
"diff_url": "https://github.com/huggingface/datasets/pull/4255.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4255.patch",
"merged_at": "2022-04-29T16:18:56"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4254 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4254/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4254/comments | https://api.github.com/repos/huggingface/datasets/issues/4254/events | https://github.com/huggingface/datasets/pull/4254 | 1,220,204,395 | PR_kwDODunzps43Bwnj | 4,254 | Replace data URL in SAMSum dataset and support streaming | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-04-29T08:21:43" | "2022-05-06T08:38:16" | "2022-04-29T16:26:09" | MEMBER | null | This PR replaces the data URL in the SAMSum dataset:
- original host (arxiv.org) does not allow HTTP Range requests
- we have hosted the data on the Hub (license: CC BY-NC-ND 4.0)
Moreover, it implements support for streaming.
Fix #4146.
Related to: #4236.
CC: @severo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4254/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4254/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4254",
"html_url": "https://github.com/huggingface/datasets/pull/4254",
"diff_url": "https://github.com/huggingface/datasets/pull/4254.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4254.patch",
"merged_at": "2022-04-29T16:26:08"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4253 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4253/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4253/comments | https://api.github.com/repos/huggingface/datasets/issues/4253/events | https://github.com/huggingface/datasets/pull/4253 | 1,219,286,408 | PR_kwDODunzps42-c8Q | 4,253 | Create metric cards for mean IOU | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-04-28T20:58:27" | "2022-04-29T17:44:47" | "2022-04-29T17:38:06" | CONTRIBUTOR | null | Proposing a metric card for mIoU :rocket:
sorry for spamming you with review requests, @albertvillanova ! :hugs: | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4253/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4253/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4253",
"html_url": "https://github.com/huggingface/datasets/pull/4253",
"diff_url": "https://github.com/huggingface/datasets/pull/4253.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4253.patch",
"merged_at": "2022-04-29T17:38:06"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4252 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4252/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4252/comments | https://api.github.com/repos/huggingface/datasets/issues/4252/events | https://github.com/huggingface/datasets/pull/4252 | 1,219,151,100 | PR_kwDODunzps429--I | 4,252 | Creating metric card for MAE | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-04-28T19:04:33" | "2022-04-29T16:59:11" | "2022-04-29T16:52:30" | CONTRIBUTOR | null | Initial proposal for MAE metric card | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4252/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4252/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4252",
"html_url": "https://github.com/huggingface/datasets/pull/4252",
"diff_url": "https://github.com/huggingface/datasets/pull/4252.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4252.patch",
"merged_at": "2022-04-29T16:52:30"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4251 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4251/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4251/comments | https://api.github.com/repos/huggingface/datasets/issues/4251/events | https://github.com/huggingface/datasets/pull/4251 | 1,219,116,354 | PR_kwDODunzps4293dB | 4,251 | Metric card for the XTREME-S dataset | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-04-28T18:32:19" | "2022-04-29T16:46:11" | "2022-04-29T16:38:46" | CONTRIBUTOR | null | Proposing a metric card for the XTREME-S dataset :hugs: | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4251/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4251/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4251",
"html_url": "https://github.com/huggingface/datasets/pull/4251",
"diff_url": "https://github.com/huggingface/datasets/pull/4251.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4251.patch",
"merged_at": "2022-04-29T16:38:46"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4250 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4250/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4250/comments | https://api.github.com/repos/huggingface/datasets/issues/4250/events | https://github.com/huggingface/datasets/pull/4250 | 1,219,093,830 | PR_kwDODunzps429yjN | 4,250 | Bump PyArrow Version to 6 | {
"login": "dnaveenr",
"id": 17746528,
"node_id": "MDQ6VXNlcjE3NzQ2NTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dnaveenr",
"html_url": "https://github.com/dnaveenr",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions",
"organizations_url": "https://api.github.com/users/dnaveenr/orgs",
"repos_url": "https://api.github.com/users/dnaveenr/repos",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"received_events_url": "https://api.github.com/users/dnaveenr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-04-28T18:10:50" | "2022-05-04T09:36:52" | "2022-05-04T09:29:46" | CONTRIBUTOR | null | Fixes #4152
This PR updates the PyArrow version to 6 in setup.py and in the CI job files .circleci/config.yaml and .github/workflows/benchmarks.yaml.
This will fix the ArrayND error which exists in pyarrow 5. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4250/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4250/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4250",
"html_url": "https://github.com/huggingface/datasets/pull/4250",
"diff_url": "https://github.com/huggingface/datasets/pull/4250.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4250.patch",
"merged_at": "2022-05-04T09:29:46"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4249 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4249/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4249/comments | https://api.github.com/repos/huggingface/datasets/issues/4249/events | https://github.com/huggingface/datasets/pull/4249 | 1,218,524,424 | PR_kwDODunzps42742y | 4,249 | Support streaming XGLUE dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-04-28T10:27:23" | "2022-05-06T08:38:21" | "2022-04-28T16:08:03" | MEMBER | null | Support streaming XGLUE dataset.
Fix #4247.
CC: @severo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4249/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4249/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4249",
"html_url": "https://github.com/huggingface/datasets/pull/4249",
"diff_url": "https://github.com/huggingface/datasets/pull/4249.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4249.patch",
"merged_at": "2022-04-28T16:08:03"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4248 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4248/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4248/comments | https://api.github.com/repos/huggingface/datasets/issues/4248/events | https://github.com/huggingface/datasets/issues/4248 | 1,218,460,444 | I_kwDODunzps5IoDsc | 4,248 | conll2003 dataset loads original data. | {
"login": "sue991",
"id": 26458611,
"node_id": "MDQ6VXNlcjI2NDU4NjEx",
"avatar_url": "https://avatars.githubusercontent.com/u/26458611?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sue991",
"html_url": "https://github.com/sue991",
"followers_url": "https://api.github.com/users/sue991/followers",
"following_url": "https://api.github.com/users/sue991/following{/other_user}",
"gists_url": "https://api.github.com/users/sue991/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sue991/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sue991/subscriptions",
"organizations_url": "https://api.github.com/users/sue991/orgs",
"repos_url": "https://api.github.com/users/sue991/repos",
"events_url": "https://api.github.com/users/sue991/events{/privacy}",
"received_events_url": "https://api.github.com/users/sue991/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting @sue99.\r\n\r\nUnfortunately. I'm not able to reproduce your problem:\r\n```python\r\nIn [1]: import datasets\r\n ...: from datasets import load_dataset\r\n ...: dataset = load_dataset(\"conll2003\")\r\n\r\nIn [2]: dataset\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],\r\n num_rows: 14042\r\n })\r\n validation: Dataset({\r\n features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],\r\n num_rows: 3251\r\n })\r\n test: Dataset({\r\n features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],\r\n num_rows: 3454\r\n })\r\n})\r\n\r\nIn [3]: dataset[\"train\"][0]\r\nOut[3]: \r\n{'id': '0',\r\n 'tokens': ['EU',\r\n 'rejects',\r\n 'German',\r\n 'call',\r\n 'to',\r\n 'boycott',\r\n 'British',\r\n 'lamb',\r\n '.'],\r\n 'pos_tags': [22, 42, 16, 21, 35, 37, 16, 21, 7],\r\n 'chunk_tags': [11, 21, 11, 12, 21, 22, 11, 12, 0],\r\n 'ner_tags': [3, 0, 7, 0, 0, 0, 7, 0, 0]}\r\n```\r\n\r\nJust guessing: might be the case that you are calling `load_dataset` from a working directory that contains a local folder named `conll2003` (containing the raw data files)? If that is the case, `datasets` library gives precedence to the local folder over the dataset on the Hub. "
] | "2022-04-28T09:33:31" | "2022-07-18T07:15:48" | "2022-07-18T07:15:48" | NONE | null | ## Describe the bug
I load the `conll2003` dataset to use refined data like [this](https://huggingface.co/datasets/conll2003/viewer/conll2003/train) preview, but it loads the original raw data that contains `'-DOCSTART- -X- -X- O'` text.
Is this a bug, or should I use another dataset_name like `lhoestq/conll2003`?
## Steps to reproduce the bug
```python
import datasets
from datasets import load_dataset
dataset = load_dataset("conll2003")
```
## Expected results
{
"chunk_tags": [11, 12, 12, 21, 13, 11, 11, 21, 13, 11, 12, 13, 11, 21, 22, 11, 12, 17, 11, 21, 17, 11, 12, 12, 21, 22, 22, 13, 11, 0],
"id": "0",
"ner_tags": [0, 3, 4, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
"pos_tags": [12, 22, 22, 38, 15, 22, 28, 38, 15, 16, 21, 35, 24, 35, 37, 16, 21, 15, 24, 41, 15, 16, 21, 21, 20, 37, 40, 35, 21, 7],
"tokens": ["The", "European", "Commission", "said", "on", "Thursday", "it", "disagreed", "with", "German", "advice", "to", "consumers", "to", "shun", "British", "lamb", "until", "scientists", "determine", "whether", "mad", "cow", "disease", "can", "be", "transmitted", "to", "sheep", "."]
}
## Actual results
```python
print(dataset)
DatasetDict({
train: Dataset({
features: ['text'],
num_rows: 219554
})
test: Dataset({
features: ['text'],
num_rows: 50350
})
validation: Dataset({
features: ['text'],
num_rows: 55044
})
})
```
```python
for i in range(20):
print(dataset['train'][i])
{'text': '-DOCSTART- -X- -X- O'}
{'text': ''}
{'text': 'EU NNP B-NP B-ORG'}
{'text': 'rejects VBZ B-VP O'}
{'text': 'German JJ B-NP B-MISC'}
{'text': 'call NN I-NP O'}
{'text': 'to TO B-VP O'}
{'text': 'boycott VB I-VP O'}
{'text': 'British JJ B-NP B-MISC'}
{'text': 'lamb NN I-NP O'}
{'text': '. . O O'}
{'text': ''}
{'text': 'Peter NNP B-NP B-PER'}
{'text': 'Blackburn NNP I-NP I-PER'}
{'text': ''}
{'text': 'BRUSSELS NNP B-NP B-LOC'}
{'text': '1996-08-22 CD I-NP O'}
{'text': ''}
{'text': 'The DT B-NP O'}
{'text': 'European NNP I-NP B-ORG'}
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4248/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4248/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4247 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4247/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4247/comments | https://api.github.com/repos/huggingface/datasets/issues/4247/events | https://github.com/huggingface/datasets/issues/4247 | 1,218,320,882 | I_kwDODunzps5Inhny | 4,247 | The data preview of XGLUE | {
"login": "czq1999",
"id": 49108847,
"node_id": "MDQ6VXNlcjQ5MTA4ODQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/49108847?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/czq1999",
"html_url": "https://github.com/czq1999",
"followers_url": "https://api.github.com/users/czq1999/followers",
"following_url": "https://api.github.com/users/czq1999/following{/other_user}",
"gists_url": "https://api.github.com/users/czq1999/gists{/gist_id}",
"starred_url": "https://api.github.com/users/czq1999/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/czq1999/subscriptions",
"organizations_url": "https://api.github.com/users/czq1999/orgs",
"repos_url": "https://api.github.com/users/czq1999/repos",
"events_url": "https://api.github.com/users/czq1999/events{/privacy}",
"received_events_url": "https://api.github.com/users/czq1999/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"![image](https://user-images.githubusercontent.com/49108847/165700611-915b4343-766f-4b81-bdaa-b31950250f06.png)\r\n",
"Thanks for reporting @czq1999.\r\n\r\nNote that the dataset viewer uses the dataset in streaming mode and that not all datasets support streaming yet.\r\n\r\nThat is the case for XGLUE dataset (as the error message points out): this must be refactored to support streaming. ",
"Fixed, thanks @albertvillanova !\r\n\r\nhttps://huggingface.co/datasets/xglue\r\n\r\n<img width=\"824\" alt=\"Capture d’écran 2022-04-29 à 10 23 14\" src=\"https://user-images.githubusercontent.com/1676121/165909391-9f98d98a-665a-4e57-822d-8baa2dc9b7c9.png\">\r\n"
] | "2022-04-28T07:30:50" | "2022-04-29T08:23:28" | "2022-04-28T16:08:03" | NONE | null | It seems that something is wrong with the data preview of XGLUE | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4247/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4247/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4246 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4246/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4246/comments | https://api.github.com/repos/huggingface/datasets/issues/4246/events | https://github.com/huggingface/datasets/pull/4246 | 1,218,320,293 | PR_kwDODunzps427NiD | 4,246 | Support to load dataset with TSV files by passing only dataset name | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-04-28T07:30:15" | "2022-05-06T08:38:28" | "2022-05-06T08:14:07" | MEMBER | null | This PR implements support for loading a dataset (w/o script) containing TSV files by passing only the dataset name (no need to pass `sep='\t'`):
```python
ds = load_dataset("dataset/name")
```
The refactoring allows for future builder kwargs customizations based on file extension.
Related to #4238. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4246/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4246/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4246",
"html_url": "https://github.com/huggingface/datasets/pull/4246",
"diff_url": "https://github.com/huggingface/datasets/pull/4246.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4246.patch",
"merged_at": "2022-05-06T08:14:07"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4245 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4245/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4245/comments | https://api.github.com/repos/huggingface/datasets/issues/4245/events | https://github.com/huggingface/datasets/pull/4245 | 1,217,959,400 | PR_kwDODunzps426AUR | 4,245 | Add code examples for DatasetDict | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [] | "2022-04-27T22:52:22" | "2022-04-29T18:19:34" | "2022-04-29T18:13:03" | MEMBER | null | This PR adds code examples for `DatasetDict` in the API reference :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4245/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4245/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4245",
"html_url": "https://github.com/huggingface/datasets/pull/4245",
"diff_url": "https://github.com/huggingface/datasets/pull/4245.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4245.patch",
"merged_at": "2022-04-29T18:13:03"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4244 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4244/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4244/comments | https://api.github.com/repos/huggingface/datasets/issues/4244/events | https://github.com/huggingface/datasets/pull/4244 | 1,217,732,221 | PR_kwDODunzps425Po6 | 4,244 | task id update | {
"login": "nazneenrajani",
"id": 3278583,
"node_id": "MDQ6VXNlcjMyNzg1ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3278583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nazneenrajani",
"html_url": "https://github.com/nazneenrajani",
"followers_url": "https://api.github.com/users/nazneenrajani/followers",
"following_url": "https://api.github.com/users/nazneenrajani/following{/other_user}",
"gists_url": "https://api.github.com/users/nazneenrajani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nazneenrajani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nazneenrajani/subscriptions",
"organizations_url": "https://api.github.com/users/nazneenrajani/orgs",
"repos_url": "https://api.github.com/users/nazneenrajani/repos",
"events_url": "https://api.github.com/users/nazneenrajani/events{/privacy}",
"received_events_url": "https://api.github.com/users/nazneenrajani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | "2022-04-27T18:28:14" | "2022-05-04T10:43:53" | "2022-05-04T10:36:37" | CONTRIBUTOR | null | changed multi input text classification as task id instead of category | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4244/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4244/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4244",
"html_url": "https://github.com/huggingface/datasets/pull/4244",
"diff_url": "https://github.com/huggingface/datasets/pull/4244.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4244.patch",
"merged_at": "2022-05-04T10:36:37"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4243 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4243/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4243/comments | https://api.github.com/repos/huggingface/datasets/issues/4243/events | https://github.com/huggingface/datasets/pull/4243 | 1,217,689,909 | PR_kwDODunzps425Gkn | 4,243 | WIP: Initial shades loading script and readme | {
"login": "shayne-longpre",
"id": 69018523,
"node_id": "MDQ6VXNlcjY5MDE4NTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/69018523?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shayne-longpre",
"html_url": "https://github.com/shayne-longpre",
"followers_url": "https://api.github.com/users/shayne-longpre/followers",
"following_url": "https://api.github.com/users/shayne-longpre/following{/other_user}",
"gists_url": "https://api.github.com/users/shayne-longpre/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shayne-longpre/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shayne-longpre/subscriptions",
"organizations_url": "https://api.github.com/users/shayne-longpre/orgs",
"repos_url": "https://api.github.com/users/shayne-longpre/repos",
"events_url": "https://api.github.com/users/shayne-longpre/events{/privacy}",
"received_events_url": "https://api.github.com/users/shayne-longpre/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | null | [] | null | [] | "2022-04-27T17:45:43" | "2022-10-03T09:36:35" | "2022-10-03T09:36:35" | NONE | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4243/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4243/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4243",
"html_url": "https://github.com/huggingface/datasets/pull/4243",
"diff_url": "https://github.com/huggingface/datasets/pull/4243.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4243.patch",
"merged_at": null
} | true |