Column schema (name: type, observed range or classes):
url: stringlengths (58 to 61)
repository_url: stringclasses (1 value)
labels_url: stringlengths (72 to 75)
comments_url: stringlengths (67 to 70)
events_url: stringlengths (65 to 68)
html_url: stringlengths (46 to 51)
id: int64 (599M to 1.07B)
node_id: stringlengths (18 to 32)
number: int64 (1 to 3.39k)
title: stringlengths (1 to 276)
user: dict
labels: list
state: stringclasses (1 value)
locked: bool (1 class)
assignee: dict
assignees: list
milestone: dict
comments: sequence
created_at: int64 (1,587B to 1,639B)
updated_at: int64 (1,587B to 1,639B)
closed_at: int64 (1,587B to 1,639B)
author_association: stringclasses (3 values)
active_lock_reason: null
body: stringlengths (0 to 228k)
reactions: dict
timeline_url: stringlengths (67 to 70)
performed_via_github_app: null
draft: bool (2 classes)
pull_request: dict
is_pull_request: bool (2 classes)
https://api.github.com/repos/huggingface/datasets/issues/2050
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2050/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2050/comments
https://api.github.com/repos/huggingface/datasets/issues/2050/events
https://github.com/huggingface/datasets/issues/2050
831,006,551
MDU6SXNzdWU4MzEwMDY1NTE=
2,050
Build custom dataset to fine-tune Wav2Vec2
{ "login": "Omarnabk", "id": 72882909, "node_id": "MDQ6VXNlcjcyODgyOTA5", "avatar_url": "https://avatars.githubusercontent.com/u/72882909?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Omarnabk", "html_url": "https://github.com/Omarnabk", "followers_url": "https://api.github.com/users/Omarnabk/followers", "following_url": "https://api.github.com/users/Omarnabk/following{/other_user}", "gists_url": "https://api.github.com/users/Omarnabk/gists{/gist_id}", "starred_url": "https://api.github.com/users/Omarnabk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Omarnabk/subscriptions", "organizations_url": "https://api.github.com/users/Omarnabk/orgs", "repos_url": "https://api.github.com/users/Omarnabk/repos", "events_url": "https://api.github.com/users/Omarnabk/events{/privacy}", "received_events_url": "https://api.github.com/users/Omarnabk/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "@lhoestq - We could simply use the \"general\" json dataset for this no? ", "Sure you can use the json loader\r\n```python\r\ndata_files = {\"train\": \"path/to/your/train_data.json\", \"test\": \"path/to/your/test_data.json\"}\r\ntrain_dataset = load_dataset(\"json\", data_files=data_files, split=\"train\")\r\ntest_dataset = load_dataset(\"json\", data_files=data_files, split=\"test\")\r\n```\r\n\r\nYou just need to make sure that the data contain the paths to the audio files.\r\nIf not, feel free to use `.map()` to add them.", "Many thanks! that was what I was looking for. " ]
1,615,672,870,000
1,615,800,448,000
1,615,800,448,000
NONE
null
Thank you for your recent tutorial on how to finetune Wav2Vec2 on a custom dataset. The example you gave here (https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) was on the CommonVoice dataset. However, what if I want to load my own dataset? I have a manifest (transcript and their audio files) in a JSON file.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2050/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2050/timeline
null
null
null
false
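As an illustration of the json-loader approach suggested in the comments of the issue above, here is a minimal hedged sketch; the manifest file names, the column names and the audio root directory are placeholders, not details from the issue.

```python
from datasets import load_dataset

# Hypothetical JSON-lines manifests: each line is an object such as
# {"file": "clips/sample_0001.wav", "text": "transcription goes here"}
data_files = {"train": "train_manifest.json", "test": "test_manifest.json"}

train_dataset = load_dataset("json", data_files=data_files, split="train")
test_dataset = load_dataset("json", data_files=data_files, split="test")

# If the manifest only stores relative paths, .map() can prepend the audio root
# so each row points at a loadable file before feature extraction.
train_dataset = train_dataset.map(lambda ex: {"file": "/data/audio/" + ex["file"]})
```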
https://api.github.com/repos/huggingface/datasets/issues/2049
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2049/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2049/comments
https://api.github.com/repos/huggingface/datasets/issues/2049/events
https://github.com/huggingface/datasets/pull/2049
830,978,687
MDExOlB1bGxSZXF1ZXN0NTkyNDE2MzQ0
2,049
Fix text-classification tags
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "LGTM, thanks for fixing." ]
1,615,665,102,000
1,615,909,666,000
1,615,909,666,000
CONTRIBUTOR
null
There are different tags for text classification right now: `text-classification` and `text_classification`: ![image](https://user-images.githubusercontent.com/29076344/111042457-856bdf00-8463-11eb-93c9-50a30106a1a1.png). This PR fixes it.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2049/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2049/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2049", "html_url": "https://github.com/huggingface/datasets/pull/2049", "diff_url": "https://github.com/huggingface/datasets/pull/2049.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2049.patch", "merged_at": 1615909666000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2047
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2047/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2047/comments
https://api.github.com/repos/huggingface/datasets/issues/2047/events
https://github.com/huggingface/datasets/pull/2047
830,626,430
MDExOlB1bGxSZXF1ZXN0NTkyMTI2NzQ3
2,047
Multilingual dIalogAct benchMark (miam)
{ "login": "eusip", "id": 1551356, "node_id": "MDQ6VXNlcjE1NTEzNTY=", "avatar_url": "https://avatars.githubusercontent.com/u/1551356?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eusip", "html_url": "https://github.com/eusip", "followers_url": "https://api.github.com/users/eusip/followers", "following_url": "https://api.github.com/users/eusip/following{/other_user}", "gists_url": "https://api.github.com/users/eusip/gists{/gist_id}", "starred_url": "https://api.github.com/users/eusip/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eusip/subscriptions", "organizations_url": "https://api.github.com/users/eusip/orgs", "repos_url": "https://api.github.com/users/eusip/repos", "events_url": "https://api.github.com/users/eusip/events{/privacy}", "received_events_url": "https://api.github.com/users/eusip/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hello. All aforementioned changes have been made. I've also re-run black on miam.py. :-)", "I will run isort again. Hopefully it resolves the current check_code_quality test failure.", "Once the review period is over, feel free to open a PR to add all the missing information ;)", "Hi! I will follow up right now with one more pull request as I have new anonymous citation information to include." ]
1,615,590,175,000
1,616,495,794,000
1,616,150,833,000
CONTRIBUTOR
null
My collaborators (@EmileChapuis, @PierreColombo) and I within the Affective Computing team at Telecom Paris would like to anonymously publish the miam dataset. It is associated with a publication currently under review. We will update the dataset with full citations once the review period is over.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2047/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2047/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2047", "html_url": "https://github.com/huggingface/datasets/pull/2047", "diff_url": "https://github.com/huggingface/datasets/pull/2047.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2047.patch", "merged_at": 1616150833000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2046
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2046/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2046/comments
https://api.github.com/repos/huggingface/datasets/issues/2046/events
https://github.com/huggingface/datasets/issues/2046
830,423,033
MDU6SXNzdWU4MzA0MjMwMzM=
2,046
add_faiss_index gets very slow when doing it iteratively
{ "login": "shamanez", "id": 16892570, "node_id": "MDQ6VXNlcjE2ODkyNTcw", "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shamanez", "html_url": "https://github.com/shamanez", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "organizations_url": "https://api.github.com/users/shamanez/orgs", "repos_url": "https://api.github.com/users/shamanez/repos", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "received_events_url": "https://api.github.com/users/shamanez/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I think faiss automatically sets the number of threads to use to build the index.\r\nCan you check how many CPU cores are being used when you build the index in `use_own_knowleldge_dataset` as compared to this script ? Are there other programs running (maybe for rank>0) ?", "Hi,\r\n I am running the add_faiss_index during the training process of the RAG from the master process (rank 0). But at the exact moment, I do not run any other process since I do it in every 5000 training steps. \r\n \r\n I think what you say is correct. It depends on the number of CPU cores. I did an experiment to compare the time taken to finish the add_faiss_index process on use_own_knowleldge_dataset.py vs the training loop thing. The training loop thing takes 40 mins more. It might be natural right? \r\n \r\n \r\n at the moment it uses around 40 cores of a 96 core machine (I am fine-tuning the entire process). ", "Can you try to set the number of threads manually ?\r\nIf you set the same number of threads for both the `use_own_knowledge_dataset.py` and RAG training, it should take the same amount of time.\r\nYou can see how to set the number of thread in the faiss wiki: https://github.com/facebookresearch/faiss/wiki/Threads-and-asynchronous-calls", "Ok, I will report the details too soon. I am the first one on the list and currently add_index being computed for the 3rd time in the loop. Actually seems like the time is taken to complete each interaction is the same, but around 1 hour more compared to running it without the training loop. A the moment this takes 5hrs and 30 mins. If there is any way to faster the process, an end-to-end rag will be perfect. So I will also try out with different thread numbers too. \r\n\r\n![image](https://user-images.githubusercontent.com/16892570/111453464-798c5f80-8778-11eb-86d0-19d212f58e38.png)\r\n", "@lhoestq on a different note, I read about using Faiss-GPU, but the documentation says we should use it when the dataset has the ability to fit into the GPU memory. Although this might work, in the long-term this is not that practical for me.\r\n\r\nhttps://github.com/matsui528/faiss_tips", "@lhoestq \r\n\r\nHi, I executed the **use_own_dataset.py** script independently and ask a few of my friends to run their programs in the HPC machine at the same time. \r\n\r\n Once there are so many other processes are running the add_index function gets slows down naturally. So basically the speed of the add_index depends entirely on the number of CPU processes. Then I set the number of threads as you have mentioned and got actually the same time for RAG training and independat running. So you are correct! :) \r\n\r\n \r\n Then I added this [issue in Faiss repostiary](https://github.com/facebookresearch/faiss/issues/1767). I got an answer saying our current **IndexHNSWFlat** can get slow for 30 million vectors and it would be better to use alternatives. What do you think?", "It's a matter of tradeoffs.\r\nHSNW is fast at query time but takes some time to build.\r\nA flat index is flat to build but is \"slow\" at query time.\r\nAn IVF index is probably a good choice for you: fast building and fast queries (but still slower queries than HSNW).\r\n\r\nNote that for an IVF index you would need to have an `nprobe` parameter (number of cells to visit for one query, there are `nlist` in total) that is not too small in order to have good retrieval accuracy, but not too big otherwise the queries will take too much time. 
From the faiss documentation:\r\n> The nprobe parameter is always a way of adjusting the tradeoff between speed and accuracy of the result. Setting nprobe = nlist gives the same result as the brute-force search (but slower).\r\n\r\nFrom my experience with indexes on DPR embeddings, setting nprobe around 1/4 of nlist gives really good retrieval accuracy and there's no need to have a value higher than that (or you would need to brute-force in order to see a difference).", "@lhoestq \r\n\r\nThanks a lot for sharing all this prior knowledge. \r\n\r\nJust asking what would be a good nlist of parameters for 30 million embeddings?", "When IVF is used alone, nlist should be between `4*sqrt(n)` and `16*sqrt(n)`.\r\nFor more details take a look at [this section of the Faiss wiki](https://github.com/facebookresearch/faiss/wiki/Guidelines-to-choose-an-index#how-big-is-the-dataset)", "Thanks a lot. I was lost with calling the index from class and using faiss_index_factory. ", "@lhoestq Thanks a lot for the help you have given to solve this issue. As per my experiments, IVF index suits well for my case and it is a lot faster. The use of this can make the entire RAG end-to-end trainable lot faster. So I will close this issue. Will do the final PR soon. " ]
1,615,580,838,000
1,616,624,951,000
1,616,624,951,000
NONE
null
As the below code suggests, I want to run add_faisis_index in every nth interaction from the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run an as a separate process similar to the script given in rag/use_own_knowleldge_dataset.py). Now, this takes usually 5hrs. Is this normal? Any way to make this process faster? @lhoestq ``` def training_step(self, batch, batch_idx) -> Dict: if (not batch_idx==0) and (batch_idx%5==0): print("******************************************************") ctx_encoder=self.trainer.model.module.module.model.rag.ctx_encoder model_copy =type(ctx_encoder)(self.config_dpr) # get a new instance #this will be load in the CPU model_copy.load_state_dict(ctx_encoder.state_dict()) # copy weights and stuff list_of_gpus = ['cuda:2','cuda:3'] c_dir='/custom/cache/dir' kb_dataset = load_dataset("csv", data_files=[self.custom_config.csv_path], split="train", delimiter="\t", column_names=["title", "text"],cache_dir=c_dir) print(kb_dataset) n=len(list_of_gpus) #nunber of dedicated GPUs kb_list=[kb_dataset.shard(n, i, contiguous=True) for i in range(n)] #kb_dataset.save_to_disk('/hpc/gsir059/MY-Test/RAY/transformers/examples/research_projects/rag/haha-dir') print(self.trainer.global_rank) dataset_shards = self.re_encode_kb(model_copy.to(device=list_of_gpus[self.trainer.global_rank]),kb_list[self.trainer.global_rank]) output = [None for _ in list_of_gpus] #self.trainer.accelerator_connector.accelerator.barrier("embedding_process") dist.all_gather_object(output, dataset_shards) #This creation and re-initlaization of the new index if (self.trainer.global_rank==0): #saving will be done in the main process combined_dataset = concatenate_datasets(output) passages_path =self.config.passages_path logger.info("saving the dataset with ") #combined_dataset.save_to_disk('/hpc/gsir059/MY-Test/RAY/transformers/examples/research_projects/rag/MY-Passage') combined_dataset.save_to_disk(passages_path) logger.info("Add faiss index to the dataset that consist of embeddings") embedding_dataset=combined_dataset index = faiss.IndexHNSWFlat(768, 128, faiss.METRIC_INNER_PRODUCT) embedding_dataset.add_faiss_index("embeddings", custom_index=index) embedding_dataset.get_index("embeddings").save(self.config.index_path)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2046/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2046/timeline
null
null
null
false
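The index advice in the comments of the issue above can be made concrete with a hedged sketch: nlist is picked inside the recommended 4*sqrt(n) to 16*sqrt(n) band for roughly 30 million vectors, nprobe is set to about a quarter of nlist, and the faiss thread count is pinned explicitly so timings are comparable. The embedding dimension, the thread count and the `embedding_dataset` variable are assumptions, not values from the issue.

```python
import math

import faiss

n_vectors = 30_000_000  # approximate corpus size mentioned in the discussion
dim = 768               # assumed DPR embedding dimension

# nlist between 4*sqrt(n) and 16*sqrt(n); nprobe ~ nlist / 4 per the comments above
nlist = int(8 * math.sqrt(n_vectors))  # ~43,800, inside the recommended band
nprobe = nlist // 4

faiss.omp_set_num_threads(32)  # pin the number of threads used to build the index

quantizer = faiss.IndexFlatIP(dim)
index = faiss.IndexIVFFlat(quantizer, dim, nlist, faiss.METRIC_INNER_PRODUCT)
index.nprobe = nprobe

# `embedding_dataset` is assumed to be a datasets.Dataset with an "embeddings"
# column, as in the training loop above; train_size controls how many vectors
# are used to train the IVF coarse quantizer before the rest are added.
embedding_dataset.add_faiss_index(
    "embeddings", custom_index=index, train_size=len(embedding_dataset)
)
```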
https://api.github.com/repos/huggingface/datasets/issues/2045
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2045/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2045/comments
https://api.github.com/repos/huggingface/datasets/issues/2045/events
https://github.com/huggingface/datasets/pull/2045
830,351,527
MDExOlB1bGxSZXF1ZXN0NTkxODc2Mjcz
2,045
Preserve column ordering in Dataset.rename_column
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Not sure why CI isn't triggered.\r\n\r\n@lhoestq Can you please help me with this? ", "I don't know how to trigger it manually, but an empty commit should do the job" ]
1,615,573,607,000
1,615,906,085,000
1,615,905,305,000
CONTRIBUTOR
null
Currently `Dataset.rename_column` doesn't necessarily preserve the order of the columns: ```python >>> from datasets import Dataset >>> d = Dataset.from_dict({'sentences': ["s1", "s2"], 'label': [0, 1]}) >>> d Dataset({ features: ['sentences', 'label'], num_rows: 2 }) >>> d.rename_column('sentences', 'text') Dataset({ features: ['label', 'text'], num_rows: 2 }) ``` This PR fixes this.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2045/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2045/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2045", "html_url": "https://github.com/huggingface/datasets/pull/2045", "diff_url": "https://github.com/huggingface/datasets/pull/2045.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2045.patch", "merged_at": 1615905305000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2044
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2044/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2044/comments
https://api.github.com/repos/huggingface/datasets/issues/2044/events
https://github.com/huggingface/datasets/pull/2044
830,339,905
MDExOlB1bGxSZXF1ZXN0NTkxODY2NzM1
2,044
Add CBT dataset
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @lhoestq,\r\n\r\nI have added changes from the review.", "Thanks for approving @lhoestq " ]
1,615,572,259,000
1,616,152,213,000
1,616,149,755,000
CONTRIBUTOR
null
This PR adds the [CBT Dataset](https://arxiv.org/abs/1511.02301). Note that I have also added the `raw` dataset as a separate configuration. I couldn't find a suitable "task" for it in YAML tags. The dummy files have one example each, as the examples are slightly big. For `raw` dataset, I just used top few lines, because they are entire books and would take up a lot of space. Let me know in case of any issues.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2044/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2044/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2044", "html_url": "https://github.com/huggingface/datasets/pull/2044", "diff_url": "https://github.com/huggingface/datasets/pull/2044.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2044.patch", "merged_at": 1616149755000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2043
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2043/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2043/comments
https://api.github.com/repos/huggingface/datasets/issues/2043/events
https://github.com/huggingface/datasets/pull/2043
830,279,098
MDExOlB1bGxSZXF1ZXN0NTkxODE1ODAz
2,043
Support pickle protocol for dataset splits defined as ReadInstruction
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq But we don't perform conversion to a `NamedSplit` if `_split` is not a string which means it **will** be a `ReadInstruction` after reloading.", "Yes right ! I read it wrong.\r\nPerfect then" ]
1,615,566,911,000
1,615,904,738,000
1,615,903,505,000
CONTRIBUTOR
null
Fixes #2022 (+ some style fixes)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2043/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2043/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2043", "html_url": "https://github.com/huggingface/datasets/pull/2043", "diff_url": "https://github.com/huggingface/datasets/pull/2043.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2043.patch", "merged_at": 1615903505000 }
true
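A hedged usage sketch of what the fix above enables: a dataset whose split was defined as a `ReadInstruction` can be pickled and restored. The dataset name "squad" is only an arbitrary example here.

```python
import pickle

from datasets import ReadInstruction, load_dataset

# Take the first 1% of the train split via a ReadInstruction object.
ds = load_dataset("squad", split=ReadInstruction("train", to=1, unit="%"))

# With the fix, the split definition survives a pickle round-trip.
restored = pickle.loads(pickle.dumps(ds))
print(len(restored) == len(ds))
```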
https://api.github.com/repos/huggingface/datasets/issues/2042
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2042/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2042/comments
https://api.github.com/repos/huggingface/datasets/issues/2042/events
https://github.com/huggingface/datasets/pull/2042
830,190,276
MDExOlB1bGxSZXF1ZXN0NTkxNzQwNzQ3
2,042
Fix arrow memory checks issue in tests
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,560,592,000
1,615,561,463,000
1,615,561,462,000
MEMBER
null
The tests currently fail on `master` because the arrow memory verification doesn't return the expected memory evolution when loading an arrow table in memory. From my experiments, the tests fail only when the full test suite is ran. This made me think that maybe some arrow objects from other tests were not freeing their memory until they do and cause the memory verifications to fail in other tests. Collecting the garbage collector before checking the arrow memory usage seems to fix this issue. I added a context manager `assert_arrow_memory_increases` that we can use in tests and that deals with the gc.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2042/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2042/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2042", "html_url": "https://github.com/huggingface/datasets/pull/2042", "diff_url": "https://github.com/huggingface/datasets/pull/2042.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2042.patch", "merged_at": 1615561462000 }
true
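A hedged sketch of the idea behind the fix described above, collecting garbage before reading Arrow's allocation counter, written as a context manager like the `assert_arrow_memory_increases` helper the PR mentions; this is an illustration of the pattern, not the exact code that was merged.

```python
import gc
from contextlib import contextmanager

import pyarrow as pa


@contextmanager
def assert_arrow_memory_increases():
    # Free Arrow objects left over from earlier tests first, so their delayed
    # deallocation does not distort the measurement taken inside the block.
    gc.collect()
    before = pa.total_allocated_bytes()
    yield
    assert pa.total_allocated_bytes() > before


# Usage: wrap code that is expected to load an Arrow table into memory.
with assert_arrow_memory_increases():
    table = pa.table({"col": list(range(1000))})
```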
https://api.github.com/repos/huggingface/datasets/issues/2041
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2041/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2041/comments
https://api.github.com/repos/huggingface/datasets/issues/2041/events
https://github.com/huggingface/datasets/pull/2041
830,180,803
MDExOlB1bGxSZXF1ZXN0NTkxNzMyNzMw
2,041
Doc2dial update data_infos and data_loaders
{ "login": "songfeng", "id": 2062185, "node_id": "MDQ6VXNlcjIwNjIxODU=", "avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4", "gravatar_id": "", "url": "https://api.github.com/users/songfeng", "html_url": "https://github.com/songfeng", "followers_url": "https://api.github.com/users/songfeng/followers", "following_url": "https://api.github.com/users/songfeng/following{/other_user}", "gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}", "starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/songfeng/subscriptions", "organizations_url": "https://api.github.com/users/songfeng/orgs", "repos_url": "https://api.github.com/users/songfeng/repos", "events_url": "https://api.github.com/users/songfeng/events{/privacy}", "received_events_url": "https://api.github.com/users/songfeng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,559,969,000
1,615,892,960,000
1,615,892,960,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2041/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2041/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2041", "html_url": "https://github.com/huggingface/datasets/pull/2041", "diff_url": "https://github.com/huggingface/datasets/pull/2041.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2041.patch", "merged_at": 1615892960000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2040
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2040/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2040/comments
https://api.github.com/repos/huggingface/datasets/issues/2040/events
https://github.com/huggingface/datasets/issues/2040
830,169,387
MDU6SXNzdWU4MzAxNjkzODc=
2,040
ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk
{ "login": "simonschoe", "id": 53626067, "node_id": "MDQ6VXNlcjUzNjI2MDY3", "avatar_url": "https://avatars.githubusercontent.com/u/53626067?v=4", "gravatar_id": "", "url": "https://api.github.com/users/simonschoe", "html_url": "https://github.com/simonschoe", "followers_url": "https://api.github.com/users/simonschoe/followers", "following_url": "https://api.github.com/users/simonschoe/following{/other_user}", "gists_url": "https://api.github.com/users/simonschoe/gists{/gist_id}", "starred_url": "https://api.github.com/users/simonschoe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/simonschoe/subscriptions", "organizations_url": "https://api.github.com/users/simonschoe/orgs", "repos_url": "https://api.github.com/users/simonschoe/repos", "events_url": "https://api.github.com/users/simonschoe/events{/privacy}", "received_events_url": "https://api.github.com/users/simonschoe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! To help me understand the situation, can you print the values of `load_from_disk(PATH_DATA_CLS_A)['train']._indices_data_files` and `load_from_disk(PATH_DATA_CLS_B)['train']._indices_data_files` ?\r\nThey should both have a path to an arrow file\r\n\r\nAlso note that from #2025 concatenating datasets will no longer have such restrictions.", "Sure, thanks for the fast reply!\r\n\r\nFor dataset A: `[{'filename': 'drive/MyDrive/data_target_task/dataset_a/train/cache-4797266bf4db1eb7.arrow'}]`\r\nFor dataset B: `[]`\r\n\r\nNo clue why for B it returns nothing. `PATH_DATA_CLS_B` is exactly the same in `save_to_disk` and `load_from_disk`... Also I can verify that the folder physically exists under 'drive/MyDrive/data_target_task/dataset_b/'", "In the next release you'll be able to concatenate any kinds of dataset (either from memory or from disk).\r\n\r\nFor now I'd suggest you to flatten the indices of the A and B datasets. This will remove the indices mapping and you will be able to concatenate them. You can flatten the indices with\r\n```python\r\ndataset = dataset.flatten_indices()\r\n```", "Indeed this works. Not the most elegant solution, but it does the trick. Thanks a lot! " ]
1,615,559,220,000
1,628,100,043,000
1,628,100,043,000
NONE
null
Hi there, I am trying to concat two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DataDict`, `PATH_DATA_CLS_*` are `Path`-objects): ```python concatenate_datasets([load_from_disk(PATH_DATA_CLS_A)['train'], load_from_disk(PATH_DATA_CLS_B)['train']]) ``` Yielding the following error: ```python ValueError: Datasets' indices should ALL come from memory, or should ALL come from disk. However datasets' indices [1] come from memory and datasets' indices [0] come from disk. ``` Been trying to solve this for quite some time now. Both `DataDict` have been created by reading in a `csv` via `load_dataset` and subsequently processed using the various `datasets` methods (i.e. filter, map, remove col, rename col). Can't figure out tho... `load_from_disk(PATH_DATA_CLS_A)['train']` yields: ```python Dataset({ features: ['labels', 'text'], num_rows: 785 }) ``` `load_from_disk(PATH_DATA_CLS_B)['train']` yields: ```python Dataset({ features: ['labels', 'text'], num_rows: 3341 }) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2040/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2040/timeline
null
null
null
false
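A hedged sketch of the workaround suggested in the comments above: flattening the indices mapping of each split removes the in-memory/on-disk mismatch so the two datasets can be concatenated on datasets 1.4.x. The paths are placeholders standing in for PATH_DATA_CLS_A and PATH_DATA_CLS_B.

```python
from datasets import concatenate_datasets, load_from_disk

ds_a = load_from_disk("data/dataset_a")["train"]
ds_b = load_from_disk("data/dataset_b")["train"]

# flatten_indices() materializes any indices mapping left over from filter/select,
# after which both datasets can be concatenated regardless of where they came from.
combined = concatenate_datasets([ds_a.flatten_indices(), ds_b.flatten_indices()])
print(combined)
```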
https://api.github.com/repos/huggingface/datasets/issues/2039
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2039/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2039/comments
https://api.github.com/repos/huggingface/datasets/issues/2039/events
https://github.com/huggingface/datasets/pull/2039
830,047,652
MDExOlB1bGxSZXF1ZXN0NTkxNjE3ODY3
2,039
Doc2dial rc
{ "login": "songfeng", "id": 2062185, "node_id": "MDQ6VXNlcjIwNjIxODU=", "avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4", "gravatar_id": "", "url": "https://api.github.com/users/songfeng", "html_url": "https://github.com/songfeng", "followers_url": "https://api.github.com/users/songfeng/followers", "following_url": "https://api.github.com/users/songfeng/following{/other_user}", "gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}", "starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/songfeng/subscriptions", "organizations_url": "https://api.github.com/users/songfeng/orgs", "repos_url": "https://api.github.com/users/songfeng/repos", "events_url": "https://api.github.com/users/songfeng/events{/privacy}", "received_events_url": "https://api.github.com/users/songfeng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,550,188,000
1,615,563,156,000
1,615,563,156,000
CONTRIBUTOR
null
Added fix to handle the last turn that is a user turn.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2039/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2039/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2039", "html_url": "https://github.com/huggingface/datasets/pull/2039", "diff_url": "https://github.com/huggingface/datasets/pull/2039.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2039.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/2038
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2038/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2038/comments
https://api.github.com/repos/huggingface/datasets/issues/2038/events
https://github.com/huggingface/datasets/issues/2038
830,036,875
MDU6SXNzdWU4MzAwMzY4NzU=
2,038
outdated dataset_infos.json might fail verifications
{ "login": "songfeng", "id": 2062185, "node_id": "MDQ6VXNlcjIwNjIxODU=", "avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4", "gravatar_id": "", "url": "https://api.github.com/users/songfeng", "html_url": "https://github.com/songfeng", "followers_url": "https://api.github.com/users/songfeng/followers", "following_url": "https://api.github.com/users/songfeng/following{/other_user}", "gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}", "starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/songfeng/subscriptions", "organizations_url": "https://api.github.com/users/songfeng/orgs", "repos_url": "https://api.github.com/users/songfeng/repos", "events_url": "https://api.github.com/users/songfeng/events{/privacy}", "received_events_url": "https://api.github.com/users/songfeng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Thanks for reporting.\r\n\r\nTo update the dataset_infos.json you can run:\r\n```\r\ndatasets-cli test ./datasets/doc2dial --all_configs --save_infos --ignore_verifications\r\n```", "Fixed by #2041, thanks again @songfeng !" ]
1,615,549,314,000
1,615,912,060,000
1,615,912,060,000
CONTRIBUTOR
null
The [doc2dial/dataset_infos.json](https://github.com/huggingface/datasets/blob/master/datasets/doc2dial/dataset_infos.json) is outdated. It would fail data_loader when verifying download checksum etc.. Could you please update this file or point me how to update this file? Thank you.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2038/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2038/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2037
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2037/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2037/comments
https://api.github.com/repos/huggingface/datasets/issues/2037/events
https://github.com/huggingface/datasets/pull/2037
829,919,685
MDExOlB1bGxSZXF1ZXN0NTkxNTA4MTQz
2,037
Fix: Wikipedia - save memory by replacing root.clear with elem.clear
{ "login": "miyamonz", "id": 6331508, "node_id": "MDQ6VXNlcjYzMzE1MDg=", "avatar_url": "https://avatars.githubusercontent.com/u/6331508?v=4", "gravatar_id": "", "url": "https://api.github.com/users/miyamonz", "html_url": "https://github.com/miyamonz", "followers_url": "https://api.github.com/users/miyamonz/followers", "following_url": "https://api.github.com/users/miyamonz/following{/other_user}", "gists_url": "https://api.github.com/users/miyamonz/gists{/gist_id}", "starred_url": "https://api.github.com/users/miyamonz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/miyamonz/subscriptions", "organizations_url": "https://api.github.com/users/miyamonz/orgs", "repos_url": "https://api.github.com/users/miyamonz/repos", "events_url": "https://api.github.com/users/miyamonz/events{/privacy}", "received_events_url": "https://api.github.com/users/miyamonz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The error you got is minor and appeared in the last version of pyarrow, we'll fix the CI to take this into account. You can ignore it" ]
1,615,540,920,000
1,616,479,696,000
1,615,892,482,000
CONTRIBUTOR
null
see: https://github.com/huggingface/datasets/issues/2031 What I did: - replace root.clear with elem.clear - remove lines to get root element - $ make style - $ make test - some tests required some pip packages, I installed them. test results on origin/master and my branch are same. I think it's not related on my modification, isn't it? ``` ==================================================================================== short test summary info ==================================================================================== FAILED tests/test_arrow_writer.py::TypedSequenceTest::test_catch_overflow - AssertionError: OverflowError not raised ============================================================= 1 failed, 2332 passed, 5138 skipped, 70 warnings in 91.75s (0:01:31) ============================================================== make: *** [Makefile:19: test] Error 1 ``` Is there anything else I should do?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2037/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2037/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2037", "html_url": "https://github.com/huggingface/datasets/pull/2037", "diff_url": "https://github.com/huggingface/datasets/pull/2037.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2037.patch", "merged_at": 1615892482000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2036
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2036/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2036/comments
https://api.github.com/repos/huggingface/datasets/issues/2036/events
https://github.com/huggingface/datasets/issues/2036
829,909,258
MDU6SXNzdWU4Mjk5MDkyNTg=
2,036
Cannot load wikitext
{ "login": "Gpwner", "id": 19349207, "node_id": "MDQ6VXNlcjE5MzQ5MjA3", "avatar_url": "https://avatars.githubusercontent.com/u/19349207?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Gpwner", "html_url": "https://github.com/Gpwner", "followers_url": "https://api.github.com/users/Gpwner/followers", "following_url": "https://api.github.com/users/Gpwner/following{/other_user}", "gists_url": "https://api.github.com/users/Gpwner/gists{/gist_id}", "starred_url": "https://api.github.com/users/Gpwner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Gpwner/subscriptions", "organizations_url": "https://api.github.com/users/Gpwner/orgs", "repos_url": "https://api.github.com/users/Gpwner/repos", "events_url": "https://api.github.com/users/Gpwner/events{/privacy}", "received_events_url": "https://api.github.com/users/Gpwner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Solved!" ]
1,615,540,179,000
1,615,797,902,000
1,615,797,884,000
NONE
null
when I execute these codes ``` >>> from datasets import load_dataset >>> test_dataset = load_dataset("wikitext") ``` I got an error,any help? ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/load.py", line 589, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 487, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/wikitext/wikitext.py ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2036/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2036/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2034
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2034/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2034/comments
https://api.github.com/repos/huggingface/datasets/issues/2034/events
https://github.com/huggingface/datasets/pull/2034
829,381,388
MDExOlB1bGxSZXF1ZXN0NTkxMDU2MTEw
2,034
Fix typo
{ "login": "pcyin", "id": 3413464, "node_id": "MDQ6VXNlcjM0MTM0NjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/3413464?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pcyin", "html_url": "https://github.com/pcyin", "followers_url": "https://api.github.com/users/pcyin/followers", "following_url": "https://api.github.com/users/pcyin/following{/other_user}", "gists_url": "https://api.github.com/users/pcyin/gists{/gist_id}", "starred_url": "https://api.github.com/users/pcyin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pcyin/subscriptions", "organizations_url": "https://api.github.com/users/pcyin/orgs", "repos_url": "https://api.github.com/users/pcyin/repos", "events_url": "https://api.github.com/users/pcyin/events{/privacy}", "received_events_url": "https://api.github.com/users/pcyin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,484,773,000
1,615,485,985,000
1,615,485,985,000
CONTRIBUTOR
null
Change `ENV_XDG_CACHE_HOME ` to `XDG_CACHE_HOME `
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2034/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2034/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2034", "html_url": "https://github.com/huggingface/datasets/pull/2034", "diff_url": "https://github.com/huggingface/datasets/pull/2034.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2034.patch", "merged_at": 1615485985000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2033
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2033/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2033/comments
https://api.github.com/repos/huggingface/datasets/issues/2033/events
https://github.com/huggingface/datasets/pull/2033
829,295,339
MDExOlB1bGxSZXF1ZXN0NTkwOTgzMDAy
2,033
Raise an error for outdated sacrebleu versions
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,478,880,000
1,615,485,492,000
1,615,485,492,000
MEMBER
null
The `sacrebleu` metric seem to only work for sacrecleu>=1.4.12 For example using sacrebleu==1.2.10, an error is raised (from metric/sacrebleu/sacrebleu.py): ```python def _compute( self, predictions, references, smooth_method="exp", smooth_value=None, force=False, lowercase=False, tokenize=scb.DEFAULT_TOKENIZER, use_effective_order=False, ): references_per_prediction = len(references[0]) if any(len(refs) != references_per_prediction for refs in references): raise ValueError("Sacrebleu requires the same number of references for each prediction") transformed_references = [[refs[i] for refs in references] for i in range(references_per_prediction)] > output = scb.corpus_bleu( sys_stream=predictions, ref_streams=transformed_references, smooth_method=smooth_method, smooth_value=smooth_value, force=force, lowercase=lowercase, tokenize=tokenize, use_effective_order=use_effective_order, ) E TypeError: corpus_bleu() got an unexpected keyword argument 'smooth_method' /mnt/cache/modules/datasets_modules/metrics/sacrebleu/b390045b3d1dd4abf6a95c4a2a11ee3bcc2b7620b076204d0ddc353fa649fd86/sacrebleu.py:114: TypeError ``` I improved the error message when users have an outdated version of sacrebleu. The new error message tells the user to update sacrebleu. cc @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2033/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2033/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2033", "html_url": "https://github.com/huggingface/datasets/pull/2033", "diff_url": "https://github.com/huggingface/datasets/pull/2033.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2033.patch", "merged_at": 1615485492000 }
true
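As an illustration of the kind of guard the PR above describes (not the exact code that was merged), a metric could check the installed sacrebleu version before calling into it:

```python
from packaging import version

import sacrebleu as scb

MIN_SACREBLEU = "1.4.12"  # minimum version the metric needs, per the PR description

if version.parse(scb.__version__) < version.parse(MIN_SACREBLEU):
    raise ImportError(
        f"sacrebleu>={MIN_SACREBLEU} is required, but sacrebleu=={scb.__version__} "
        'is installed. Please update it with: pip install "sacrebleu>=1.4.12"'
    )
```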
https://api.github.com/repos/huggingface/datasets/issues/2031
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2031/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2031/comments
https://api.github.com/repos/huggingface/datasets/issues/2031/events
https://github.com/huggingface/datasets/issues/2031
829,122,778
MDU6SXNzdWU4MjkxMjI3Nzg=
2,031
wikipedia.py generator that extracts XML doesn't release memory
{ "login": "miyamonz", "id": 6331508, "node_id": "MDQ6VXNlcjYzMzE1MDg=", "avatar_url": "https://avatars.githubusercontent.com/u/6331508?v=4", "gravatar_id": "", "url": "https://api.github.com/users/miyamonz", "html_url": "https://github.com/miyamonz", "followers_url": "https://api.github.com/users/miyamonz/followers", "following_url": "https://api.github.com/users/miyamonz/following{/other_user}", "gists_url": "https://api.github.com/users/miyamonz/gists{/gist_id}", "starred_url": "https://api.github.com/users/miyamonz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/miyamonz/subscriptions", "organizations_url": "https://api.github.com/users/miyamonz/orgs", "repos_url": "https://api.github.com/users/miyamonz/repos", "events_url": "https://api.github.com/users/miyamonz/events{/privacy}", "received_events_url": "https://api.github.com/users/miyamonz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @miyamonz \r\nThanks for investigating this issue, good job !\r\nIt would be awesome to integrate your fix in the library, could you open a pull request ?", "OK! I'll send it later." ]
1,615,467,084,000
1,616,402,032,000
1,616,402,032,000
CONTRIBUTOR
null
I tried downloading Japanese wikipedia, but it always failed because of out of memory maybe. I found that the generator function that extracts XML data in wikipedia.py doesn't release memory in the loop. https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L464-L502 `root.clear()` intend to clear memory, but it doesn't. https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L490 https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L494 I replaced them with `elem.clear()`, then it seems to work correctly. here is the notebook to reproduce it. https://gist.github.com/miyamonz/dc06117302b6e85fa51cbf46dde6bb51#file-xtract_content-ipynb
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2031/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2031/timeline
null
null
null
false
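A small, self-contained sketch of the memory pattern discussed above, using a synthetic XML string instead of a real Wikipedia dump: clearing each page element as soon as it has been processed lets its subtree be garbage-collected instead of accumulating for the whole file.

```python
import io
import xml.etree.ElementTree as ET

xml_data = "<pages>" + "".join(
    f"<page><title>t{i}</title><text>body {i}</text></page>" for i in range(3)
) + "</pages>"

for _event, elem in ET.iterparse(io.StringIO(xml_data), events=("end",)):
    if elem.tag == "page":
        title = elem.findtext("title")
        text = elem.findtext("text")
        print(title, text)
        # Drop this page's children and attributes once consumed; this is the
        # per-element clear() the issue proposes instead of clearing the root.
        elem.clear()
```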
https://api.github.com/repos/huggingface/datasets/issues/2030
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2030/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2030/comments
https://api.github.com/repos/huggingface/datasets/issues/2030/events
https://github.com/huggingface/datasets/pull/2030
829,110,803
MDExOlB1bGxSZXF1ZXN0NTkwODI4NzQ4
2,030
Implement Dataset from text
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I am wondering why only one test of \"keep_in_memory=True\" fails, when there are many other tests that test the same and it happens only in pyarrow_1..." ]
1,615,466,090,000
1,616,074,169,000
1,616,074,169,000
MEMBER
null
Implement `Dataset.from_text`. Analogue to #1943, #1946.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2030/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2030/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2030", "html_url": "https://github.com/huggingface/datasets/pull/2030", "diff_url": "https://github.com/huggingface/datasets/pull/2030.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2030.patch", "merged_at": 1616074169000 }
true
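A brief hedged usage sketch of the method this PR adds (the file written below is just a placeholder so the example is self-contained): each line of the input file becomes one row in a single "text" column.

```python
from datasets import Dataset

with open("example.txt", "w", encoding="utf-8") as f:
    f.write("first line\nsecond line\n")

ds = Dataset.from_text("example.txt")
print(ds.column_names)  # ['text']
print(ds.num_rows)      # 2
```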
https://api.github.com/repos/huggingface/datasets/issues/2029
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2029/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2029/comments
https://api.github.com/repos/huggingface/datasets/issues/2029/events
https://github.com/huggingface/datasets/issues/2029
829,097,290
MDU6SXNzdWU4MjkwOTcyOTA=
2,029
Loading a faiss index KeyError
{ "login": "nbroad1881", "id": 24982805, "node_id": "MDQ6VXNlcjI0OTgyODA1", "avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nbroad1881", "html_url": "https://github.com/nbroad1881", "followers_url": "https://api.github.com/users/nbroad1881/followers", "following_url": "https://api.github.com/users/nbroad1881/following{/other_user}", "gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}", "starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions", "organizations_url": "https://api.github.com/users/nbroad1881/orgs", "repos_url": "https://api.github.com/users/nbroad1881/repos", "events_url": "https://api.github.com/users/nbroad1881/events{/privacy}", "received_events_url": "https://api.github.com/users/nbroad1881/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[ "In your code `dataset2` doesn't contain the \"embeddings\" column, since it is created from the pandas DataFrame with columns \"text\" and \"label\".\r\n\r\nTherefore when you call `dataset2[embeddings_name]`, you get a `KeyError`.\r\n\r\nIf you want the \"embeddings\" column back, you can create `dataset2` with\r\n```python\r\ndataset2 = load_from_disk(dataset_filename)\r\n```\r\nwhere `dataset_filename` is the place where you saved you dataset with the embeddings in the first place.", "Ok in that case HF should fix their misleading example at https://huggingface.co/docs/datasets/faiss_and_ea.html#adding-a-faiss-index \r\n\r\nI copy-pasted it here.\r\n\r\n> When you are done with your queries you can save your index on disk:\r\n> \r\n> ```python\r\n> ds_with_embeddings.save_faiss_index('embeddings', 'my_index.faiss')\r\n> ```\r\n> Then reload it later:\r\n> \r\n> ```python\r\n> ds = load_dataset('crime_and_punish', split='train[:100]')\r\n> ds.load_faiss_index('embeddings', 'my_index.faiss')\r\n> ```", "Hi !\r\n\r\nThe code of the example is valid.\r\nAn index is a search engine, it's not considered a column of a dataset.\r\nWhen you do `ds.load_faiss_index(\"embeddings\", 'my_index.faiss')`, it attaches an index named \"embeddings\" to the dataset but it doesn't re-add the \"embeddings\" column. You can list the indexes of a dataset by using `ds.list_indexes()`.\r\n\r\nIf I understand correctly by reading this example you thought that it was re-adding the \"embeddings\" column.\r\nThis looks misleading indeed, and we should add a note to make it more explicit that it doesn't store the column that was used to build the index.\r\n\r\nFeel free to open a PR to suggest an improvement on the documentation if you want to contribute :)", "> If I understand correctly by reading this example you thought that it was re-adding the \"embeddings\" column.\r\nYes. I was trying to use the dataset in RAG and it complained that the dataset didn't have the right columns. No problems when loading the dataset with `load_from_disk` and then doing `load_faiss_index`\r\n\r\nWhat I learned was\r\n1. column and index are different\r\n2. loading the index does not create a column\r\n3. the column is not needed to be able to use the index\r\n4. RAG needs both the embeddings column and the index\r\n\r\nIf I can come up with a way to articulate this in the right spot in the docs, I'll open a PR" ]
1,615,464,973,000
1,615,508,469,000
1,615,508,469,000
NONE
null
I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation. The basic steps are: 1. Create a dataset (dataset1) 2. Create an embeddings column using DPR 3. Add a faiss index to the dataset 4. Save faiss index to a file 5. Create a new dataset (dataset2) with the same text and label information as dataset1 6. Try to load the faiss index from file to dataset2 7. Get `KeyError: "Column embeddings not in the dataset"` I've made a colab notebook that should show exactly what I did. Please switch to GPU runtime; I didn't check on CPU. https://colab.research.google.com/drive/1X0S9ZuZ8k0ybcoei4w7so6dS_WrABmIx?usp=sharing Ubuntu Version VERSION="18.04.5 LTS (Bionic Beaver)" datasets==1.4.1 faiss==1.5.3 faiss-gpu==1.7.0 torch==1.8.0+cu101 transformers==4.3.3 NVIDIA-SMI 460.56 Driver Version: 460.32.03 CUDA Version: 11.2 Tesla K80 I was basically following the steps here: https://huggingface.co/docs/datasets/faiss_and_ea.html#adding-a-faiss-index I included the exact code from the documentation at the end of the notebook to show that they don't work either.
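A minimal sketch of the working flow described in the comments above, assuming `ds_with_embeddings` is the dataset that already contains the "embeddings" column; paths are illustrative. The key point from the discussion is that `load_faiss_index` attaches an index but never re-adds the column, so the dataset that still holds the column must be reloaded with `load_from_disk` rather than rebuilt from a DataFrame.

```python
from datasets import load_from_disk

# Save both the dataset (which still holds the "embeddings" column) and the index.
ds_with_embeddings.save_to_disk("my_dataset")
ds_with_embeddings.save_faiss_index("embeddings", "my_index.faiss")

# Reload the dataset that contains the column, then attach the index to it.
ds = load_from_disk("my_dataset")
ds.load_faiss_index("embeddings", "my_index.faiss")
print(ds.list_indexes())  # ['embeddings'] -- an index, not a column
```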
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2029/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2029/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2028
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2028/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2028/comments
https://api.github.com/repos/huggingface/datasets/issues/2028/events
https://github.com/huggingface/datasets/pull/2028
828,721,393
MDExOlB1bGxSZXF1ZXN0NTkwNDk1NzEx
2,028
Adding PersiNLU reading-comprehension
{ "login": "danyaljj", "id": 2441454, "node_id": "MDQ6VXNlcjI0NDE0NTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4", "gravatar_id": "", "url": "https://api.github.com/users/danyaljj", "html_url": "https://github.com/danyaljj", "followers_url": "https://api.github.com/users/danyaljj/followers", "following_url": "https://api.github.com/users/danyaljj/following{/other_user}", "gists_url": "https://api.github.com/users/danyaljj/gists{/gist_id}", "starred_url": "https://api.github.com/users/danyaljj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danyaljj/subscriptions", "organizations_url": "https://api.github.com/users/danyaljj/orgs", "repos_url": "https://api.github.com/users/danyaljj/repos", "events_url": "https://api.github.com/users/danyaljj/events{/privacy}", "received_events_url": "https://api.github.com/users/danyaljj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq I think I have addressed all your comments. ", "Thanks! @lhoestq Let me know if you want me to address anything to get this merged. ", "It's all good thanks ;)\r\nmerging" ]
1,615,437,673,000
1,615,801,197,000
1,615,801,197,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2028/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2028/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2028", "html_url": "https://github.com/huggingface/datasets/pull/2028", "diff_url": "https://github.com/huggingface/datasets/pull/2028.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2028.patch", "merged_at": 1615801197000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2027
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2027/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2027/comments
https://api.github.com/repos/huggingface/datasets/issues/2027/events
https://github.com/huggingface/datasets/pull/2027
828,490,444
MDExOlB1bGxSZXF1ZXN0NTkwMjkzNDA1
2,027
Update format columns in Dataset.rename_columns
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,420,259,000
1,615,473,520,000
1,615,473,520,000
CONTRIBUTOR
null
Fixes #2026
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2027/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2027/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2027", "html_url": "https://github.com/huggingface/datasets/pull/2027", "diff_url": "https://github.com/huggingface/datasets/pull/2027.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2027.patch", "merged_at": 1615473520000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2026
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2026/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2026/comments
https://api.github.com/repos/huggingface/datasets/issues/2026/events
https://github.com/huggingface/datasets/issues/2026
828,194,467
MDU6SXNzdWU4MjgxOTQ0Njc=
2,026
KeyError on using map after renaming a column
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nActually, the error occurs due to these two lines:\r\n```python\r\nraw_dataset.set_format('torch',columns=['img','label'])\r\nraw_dataset = raw_dataset.rename_column('img','image')\r\n```\r\n`Dataset.rename_column` doesn't update the `_format_columns` attribute, previously defined by `Dataset.set_format`, with a new column name which is why this new column is missing in the output.", "Hi @mariosasko,\n\nThanks for opening a PR on this :)\nWhy does the old name also disappear?", "I just merged a @mariosasko 's PR that fixes this issue.\r\nIf it happens again, feel free to re-open :)" ]
1,615,402,457,000
1,615,473,574,000
1,615,473,520,000
CONTRIBUTOR
null
Hi, I'm trying to use `cifar10` dataset. I want to rename the `img` feature to `image` in order to make it consistent with `mnist`, which I'm also planning to use. By doing this, I was trying to avoid modifying `prepare_train_features` function. Here is what I try: ```python transform = Compose([ToPILImage(),ToTensor(),Normalize([0.0,0.0,0.0],[1.0,1.0,1.0])]) def prepare_features(examples): images = [] labels = [] print(examples) for example_idx, example in enumerate(examples["image"]): if transform is not None: images.append(transform(examples["image"][example_idx].permute(2,0,1))) else: images.append(examples["image"][example_idx].permute(2,0,1)) labels.append(examples["label"][example_idx]) output = {"label":labels, "image":images} return output raw_dataset = load_dataset('cifar10') raw_dataset.set_format('torch',columns=['img','label']) raw_dataset = raw_dataset.rename_column('img','image') features = datasets.Features({ "image": datasets.Array3D(shape=(3,32,32),dtype="float32"), "label": datasets.features.ClassLabel(names=[ "airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck", ]), }) train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000) ``` The error: ```python --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-54-bf29672c53ee> in <module>() 14 ]), 15 }) ---> 16 train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000) 2 frames /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint) 1287 test_inputs = self[:2] if batched else self[0] 1288 test_indices = [0, 1] if batched else 0 -> 1289 update_data = does_function_return_dict(test_inputs, test_indices) 1290 logger.info("Testing finished, running the mapping function on the dataset") 1291 /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in does_function_return_dict(inputs, indices) 1258 fn_args = [inputs] if input_columns is None else [inputs[col] for col in input_columns] 1259 processed_inputs = ( -> 1260 function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) 1261 ) 1262 does_return_dict = isinstance(processed_inputs, Mapping) <ipython-input-52-b4dccbafb70d> in prepare_features(examples) 3 labels = [] 4 print(examples) ----> 5 for example_idx, example in enumerate(examples["image"]): 6 if transform is not None: 7 images.append(transform(examples["image"][example_idx].permute(2,0,1))) KeyError: 'image' ``` The print statement inside returns this: ```python {'label': tensor([6, 9])} ``` Apparently, both `img` and `image` do not exist after renaming. Note that this code works fine with `img` everywhere. Notebook: https://colab.research.google.com/drive/1SzESAlz3BnVYrgQeJ838vbMp1OsukiA2?usp=sharing
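A minimal workaround sketch consistent with the explanation in the comments above (before the fix in #2027, `rename_column` did not update the columns registered by `set_format`): rename first, then apply the format with the new column name.

```python
from datasets import load_dataset

raw_dataset = load_dataset("cifar10")
# Rename before setting the format (or re-run set_format afterwards),
# so the format columns refer to the new column name.
raw_dataset = raw_dataset.rename_column("img", "image")
raw_dataset.set_format("torch", columns=["image", "label"])
```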
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2026/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2026/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2025
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2025/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2025/comments
https://api.github.com/repos/huggingface/datasets/issues/2025/events
https://github.com/huggingface/datasets/pull/2025
828,047,476
MDExOlB1bGxSZXF1ZXN0NTg5ODk2NjMz
2,025
[Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "There is one more thing I would love to see. Let's say we iteratively keep updating a data source that loaded from **load_dataset** or **load_from_disk**. Now we need to save it to the same location by overriding the previous file inorder to save the disk space. At the moment **save_to_disk** can not assign a name. So I do not see an easy way to override the previous files. @lhoestq is this possible?\r\n\r\n\r\n\r\np.s one last thing?\r\n\r\nIs there a way to flush out any connection to a data source loaded from **load_from_disk** or **load_dataset** methods? At the moment I suspect when we use any of those functions, it will always keep a pointer although we override it again with a new version of the dataset source. This is really useful in an iterative process. \r\n\r\n", "> There is one more thing I would love to see. Let's say we iteratively keep updating a data source that loaded from **load_dataset** or **load_from_disk**. Now we need to save it to the same location by overriding the previous file inorder to save the disk space. At the moment **save_to_disk** can not assign a name. So I do not see an easy way to override the previous files. @lhoestq is this possible?\r\n\r\nIn the new save_to_disk, the filename of the arrow file is fixed: `dataset.arrow`.\r\nThis way is will be overwritten if you save your dataset again\r\n\r\n> Is there a way to flush out any connection to a data source loaded from **load_from_disk** or **load_dataset** methods? At the moment I suspect when we use any of those functions, it will always keep a pointer although we override it again with a new version of the dataset source. This is really useful in an iterative process.\r\n\r\nIf you update an arrow file, then you must reload it with `load_from_disk` for example in order to have the updated data.\r\nDoes that answer the question ? How does this \"pointer\" behavior manifest exactly on your side ?", "Apparently the usage of the compute layer of pyarrow requires pyarrow>=1.0.0 (otherwise there are some issues on windows with file permissions when doing dataset concatenation).\r\n\r\nI'll bump the pyarrow requirement from, 0.17.1 to 1.0.0", "\r\n> If you update an arrow file, then you must reload it with `load_from_disk` for example in order to have the updated data.\r\n> Does that answer the question? How does this \"pointer\" behavior manifest exactly on your side?\r\n\r\nYes, I checked this behavior.. if we update the .arrow file it kind of flushes out the previous one. So your solution is perfect <3. ", "Sorry for spamming, there's a a bug that only happens on the CI so I have to re-run it several times", "Alright I finally added all the tests I wanted !\r\nI also fixed all the bugs and now all the tests are passing :)\r\n\r\nLet me know if you have comments.\r\n\r\nI also noticed that two methods in pyarrow seem to bring some data in memory even for a memory mapped table: filter and cast:\r\n- for filter I took a look at the C++ code on the arrow's side and found [this part](https://github.com/apache/arrow/blob/55c8d74d5556b25238fb2028e9fb97290ea24684/cpp/src/arrow/compute/kernels/vector_selection.cc#L93-L160) that \"builds\" the array during filter. It seems to indicate that it allocates new memory for the filtered array but not 100% sure.\r\n- regarding cast I noticed that it happens when changing the precision of an array of integers. 
Not sure if there are other cases.\r\n\r\n\r\nMaybe we'll need to investigate this a bit for your PR on improving `filter` @theo-m , since we don't want to fill the users memory.", "> Maybe we'll need to investigate this a bit for your PR on improving `filter` @theo-m , since we don't want to fill the users memory.\r\n\r\nI'm a bit unclear on this, I thought the point of the refactor was to use `Table.filter` to speed up our own `.filter` and stop using `.map` that offloaded too much stuff on disk. \r\nAt some point I recall we decided to use `keep_in_memory=True` as the expectations were that it would be hard to fill the memory?", "> I'm a bit unclear on this, I thought the point of the refactor was to use Table.filter to speed up our own .filter and stop using .map that offloaded too much stuff on disk.\r\n> At some point I recall we decided to use keep_in_memory=True as the expectations were that it would be hard to fill the memory?\r\n\r\nYes it's ok to have the mask in memory, but not the full table. I was not aware that the table returned by filter could actually be in memory (it's not part of the pyarrow documentation afaik).\r\nTo be more specific I noticed that every time you call `filter`, the pyarrow total allocated memory increases.\r\nI haven't checked on a big dataset though, but it would be nice to see how much memory it uses with respect to the size of the dataset.", "I have addressed your comments @theo-m @albertvillanova ! Thanks for the suggestions", "I totally agree with you. I would have loved to use inheritance instead.\r\nHowever because `pa.Table` is a cython class without proper initialization methods (you can't call `__init__` for example): you can't instantiate a subclass of `pa.Table` in python.\r\nTo be more specific, you actually can try to instantiate a subclass of `pa.Table` with no data BUT this is not a valid table so you get an error.\r\nAnd since `pa.Table` objects are immutable you can't even set the data in `__new__` or `__init__`.\r\n\r\nEDIT: one could make a new cython class that inherits from `pa.Table` with proper initialization methods, so that we can inherit from this class instead in python. We can do that in the future if we plan to use cython in `datasets`.\r\n(see: https://arrow.apache.org/docs/python/extending.html)", "@lhoestq, but in which cases you would like to instantiate directly either `InMemoryTable` or `MemoryMappedTable`? You normally use one of their `from_xxx` class methods...", "Yes I was thinking of these cases. The issue is that they return `pa.Table` objects even from a subclass of `pa.Table`", "That is indeed a weird behavior...", "I guess that in this case, the best approach is as you did, using composition over inheritance...\r\n\r\nhttps://github.com/apache/arrow/pull/5322", "@lhoestq I think you forgot to add the new classes to the docs?", "Yes you're right, let me add them" ]
1,615,395,647,000
1,617,115,613,000
1,616,777,519,000
MEMBER
null
## Intro Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files). This assumption is used for pickling for example: - in-memory dataset can just be pickled/unpickled in-memory - on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling ## Issues Because of this assumption, we can't easily implement methods like `Dataset.add_item` to append more rows to a dataset, or `dataset.add_column` to add a column, since we can't mix data from memory and data from the disk. Moreover, `concatenate_datasets` doesn't work if the datasets to concatenate are not all from memory, or all form the disk. ## Solution provided in this PR I changed this by allowing several types of Table to be used in the Dataset object. More specifically I added three pyarrow Table wrappers: InMemoryTable, MemoryMappedTable and ConcatenationTable. The in-memory and memory-mapped tables implement the pickling behavior described above. The ConcatenationTable can be made from several tables (either in-memory or memory mapped) called "blocks". Pickling a ConcatenationTable simply pickles the underlying blocks. ## Implementation details The three tables classes mentioned above all inherit from a `Table` class defined in `table.py`, which is a wrapper of a pyarrow table. The `Table` wrapper implements all the attributes and methods of the underlying pyarrow table. Regarding the MemoryMappedTable: Reloading a pyarrow table from the disk makes you lose all the changes you may have applied (slice, rename_columns, drop, cast etc.). Therefore the MemoryMappedTable implements a "replay" mechanism to re-apply the changes when reloading the pyarrow table from the disk. ## Checklist - [x] add InMemoryTable - [x] add MemoryMappedTable - [x] add ConcatenationTable - [x] Update the ArrowReader to use these new tables depending on the `in_memory` parameter - [x] Update Dataset.from_xxx methods - [x] Update load_from_disk and save_to_disk - [x] Backward compatibility of load_from_disk - [x] Add tests for the new tables - [x] Update current tests - [ ] Documentation ---------- I would be happy to discuss the design of this PR :) Close #1877
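An illustrative sketch of the user-facing behavior this refactor unlocks, assuming both datasets share the same features; the paths and data are examples, not part of the PR.

```python
from datasets import Dataset, concatenate_datasets, load_from_disk

on_disk = load_from_disk("saved_text_dataset")          # backed by a MemoryMappedTable
in_memory = Dataset.from_dict({"text": ["a new row"]})  # backed by an InMemoryTable

# Previously all datasets to concatenate had to be either in memory or on disk;
# with this PR the result is backed by a ConcatenationTable wrapping both blocks.
combined = concatenate_datasets([on_disk, in_memory])
```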
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2025/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2025/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2025", "html_url": "https://github.com/huggingface/datasets/pull/2025", "diff_url": "https://github.com/huggingface/datasets/pull/2025.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2025.patch", "merged_at": 1616777518000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2024
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2024/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2024/comments
https://api.github.com/repos/huggingface/datasets/issues/2024/events
https://github.com/huggingface/datasets/pull/2024
827,842,962
MDExOlB1bGxSZXF1ZXN0NTg5NzEzNDAy
2,024
Remove print statement from mnist.py
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for noticing !\r\n#2020 fixed this earlier today though ^^'\r\n\r\nClosing this one" ]
1,615,387,198,000
1,615,485,832,000
1,615,485,831,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2024/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2024/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2024", "html_url": "https://github.com/huggingface/datasets/pull/2024", "diff_url": "https://github.com/huggingface/datasets/pull/2024.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2024.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/2023
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2023/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2023/comments
https://api.github.com/repos/huggingface/datasets/issues/2023/events
https://github.com/huggingface/datasets/pull/2023
827,819,608
MDExOlB1bGxSZXF1ZXN0NTg5NjkyNDU2
2,023
Add Romanian to XQuAD
{ "login": "M-Salti", "id": 9285264, "node_id": "MDQ6VXNlcjkyODUyNjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/M-Salti", "html_url": "https://github.com/M-Salti", "followers_url": "https://api.github.com/users/M-Salti/followers", "following_url": "https://api.github.com/users/M-Salti/following{/other_user}", "gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}", "starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions", "organizations_url": "https://api.github.com/users/M-Salti/orgs", "repos_url": "https://api.github.com/users/M-Salti/repos", "events_url": "https://api.github.com/users/M-Salti/events{/privacy}", "received_events_url": "https://api.github.com/users/M-Salti/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Thanks for updating XQUAD :)\r\n\r\nThe slow test is failing though since there's no dummy data nor metadata in dataset_infos.json for the romanian configuration.\r\n\r\nCould you please generate the dummy data with\r\n```\r\ndatasets-cli dummy_data ./datasets/xquad --auto_generate --json_field data\r\n```\r\nThis will update all the dummy data files, and also add the new one for the romanian configuration.\r\n\r\n\r\nYou can also update the metadata with\r\n```\r\ndatasets-cli test ./datasets/xquad --name xquad.ro --save_infos\r\n```\r\nThis will update the dataset_infos.json file with the metadata of the romanian config :)\r\n\r\nThanks in advance !", "Hello Quentin, and thanks for your help.\r\n\r\nI found that running\r\n\r\n```python\r\ndatasets-cli test ./datasets/xquad --name xquad.ro --save_infos\r\n```\r\n\r\nwas not enough to pass the slow tests, because it was not adding the new `xquad.ro.json` checksum to the other configs infos and becuase of that an `UnexpectedDownloadedFile` error was being thrown, so instead I used:\r\n\r\n```python\r\ndatasets-cli test ./datasets/xquad --save_infos --all_configs --ignore_verifications\r\n```\r\n\r\n`--ignore_verifications` was necessary to bypass the same `UnexpectedDownloadedFile` error.\r\n\r\nAdditionally, I deleted `dummy_data_copy.zip` and the `copy.sh` script because they both seem now unnecessary.\r\n\r\nThe slow tests for both the real and dummy data now pass successfully, so I hope that I didn't mess anything up :)\r\n", "You're right, you needed the `--ignore_verifications` flag !\r\nThanks for updating them :)\r\n\r\nAlthough I just noticed that the new dummy_data.zip files are quite big (170KB each) because they contain the json files of all the languages, while only one json file per language is necessary. Could you remove the unnecessary json files to reduce the size of the dummy_data.zip files if you don't mind ?", "Done. I created a script (`remove_unnecessary_langs.sh`) to automate the process.\r\n" ]
1,615,386,272,000
1,615,802,897,000
1,615,802,897,000
CONTRIBUTOR
null
On Jan 18, XQuAD was updated with a new Romanian validation file ([xquad commit link](https://github.com/deepmind/xquad/commit/60cac411649156efb6aab9dd4c9cde787a2c0345))
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2023/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2023/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2023", "html_url": "https://github.com/huggingface/datasets/pull/2023", "diff_url": "https://github.com/huggingface/datasets/pull/2023.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2023.patch", "merged_at": 1615802897000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2022
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2022/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2022/comments
https://api.github.com/repos/huggingface/datasets/issues/2022/events
https://github.com/huggingface/datasets/issues/2022
827,435,033
MDU6SXNzdWU4Mjc0MzUwMzM=
2,022
ValueError when rename_column on splitted dataset
{ "login": "simonschoe", "id": 53626067, "node_id": "MDQ6VXNlcjUzNjI2MDY3", "avatar_url": "https://avatars.githubusercontent.com/u/53626067?v=4", "gravatar_id": "", "url": "https://api.github.com/users/simonschoe", "html_url": "https://github.com/simonschoe", "followers_url": "https://api.github.com/users/simonschoe/followers", "following_url": "https://api.github.com/users/simonschoe/following{/other_user}", "gists_url": "https://api.github.com/users/simonschoe/gists{/gist_id}", "starred_url": "https://api.github.com/users/simonschoe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/simonschoe/subscriptions", "organizations_url": "https://api.github.com/users/simonschoe/orgs", "repos_url": "https://api.github.com/users/simonschoe/repos", "events_url": "https://api.github.com/users/simonschoe/events{/privacy}", "received_events_url": "https://api.github.com/users/simonschoe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nThis is a bug so thanks for reporting it. `Dataset.__setstate__` is the problem, which is called when `Dataset.rename_column` tries to copy the dataset with `copy.deepcopy(self)`. This only happens if the `split` arg in `load_dataset` was defined as `ReadInstruction`.\r\n\r\nTo overcome this issue, use the named splits API (for now):\r\n```python\r\ntrain_ds, test_ds = load_dataset(\r\n path='csv', \r\n delimiter='\\t', \r\n data_files=text_files, \r\n split=['train[:90%]', 'train[-10%:]'],\r\n)\r\n\r\ntrain_ds = train_ds.rename_column('sentence', 'text')\r\n```", "This has been fixed in #2043 , thanks @mariosasko \r\nThe fix is available on master and we'll do a new release soon :)\r\n\r\nfeel free to re-open if you still have issues" ]
1,615,369,238,000
1,615,903,568,000
1,615,903,505,000
NONE
null
Hi there, I am loading `.tsv` file via `load_dataset` and subsequently split the rows into training and test set via the `ReadInstruction` API like so: ```python split = { 'train': ReadInstruction('train', to=90, unit='%'), 'test': ReadInstruction('train', from_=-10, unit='%') } dataset = load_dataset( path='csv', # use 'text' loading script to load from local txt-files delimiter='\t', # xxx data_files=text_files, # list of paths to local text files split=split, # xxx ) dataset ``` Part of output: ```python DatasetDict({ train: Dataset({ features: ['sentence', 'sentiment'], num_rows: 900 }) test: Dataset({ features: ['sentence', 'sentiment'], num_rows: 100 }) }) ``` Afterwards I'd like to rename the 'sentence' column to 'text' in order to be compatible with my modelin pipeline. If I run the following code I experience a `ValueError` however: ```python dataset['train'].rename_column('sentence', 'text') ``` ```python /usr/local/lib/python3.7/dist-packages/datasets/splits.py in __init__(self, name) 353 for split_name in split_names_from_instruction: 354 if not re.match(_split_re, split_name): --> 355 raise ValueError(f"Split name should match '{_split_re}'' but got '{split_name}'.") 356 357 def __str__(self): ValueError: Split name should match '^\w+(\.\w+)*$'' but got 'ReadInstruction('. ``` In particular, these behavior does not arise if I use the deprecated `rename_column_` method. Any idea what causes the error? Would assume something in the way I defined the split. Thanks in advance! :)
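A minimal sketch of the named-splits workaround suggested in the comments above (the underlying bug was fixed in #2043); `text_files` is the same list of local `.tsv` paths used in the snippet above.

```python
from datasets import load_dataset

# Passing the splits as strings avoids the ReadInstruction objects that
# triggered the ValueError in Dataset.rename_column.
train_ds, test_ds = load_dataset(
    "csv",
    delimiter="\t",
    data_files=text_files,
    split=["train[:90%]", "train[-10%:]"],
)
train_ds = train_ds.rename_column("sentence", "text")
```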
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2022/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2022/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2021
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2021/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2021/comments
https://api.github.com/repos/huggingface/datasets/issues/2021/events
https://github.com/huggingface/datasets/issues/2021
826,988,016
MDU6SXNzdWU4MjY5ODgwMTY=
2,021
Interactively doing save_to_disk and load_from_disk corrupts the datasets object?
{ "login": "shamanez", "id": 16892570, "node_id": "MDQ6VXNlcjE2ODkyNTcw", "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shamanez", "html_url": "https://github.com/shamanez", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "organizations_url": "https://api.github.com/users/shamanez/orgs", "repos_url": "https://api.github.com/users/shamanez/repos", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "received_events_url": "https://api.github.com/users/shamanez/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nCan you give us a minimal reproducible example? This [part](https://huggingface.co/docs/datasets/master/processing.html#controling-the-cache-behavior) of the docs explains how to control caching." ]
1,615,344,514,000
1,615,630,061,000
1,615,630,061,000
NONE
null
The dataset_info.json file saved after using save_to_disk gets corrupted as follows. ![image](https://user-images.githubusercontent.com/16892570/110568474-ed969880-81b7-11eb-832f-2e5129656016.png) Is there a way to disable the cache that will save to /tmp/huggingface/datasets? I have a feeling there is a serious issue with caching.
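An illustrative sketch of the cache controls referenced in the comment above; the exact option names depend on the installed `datasets` version, so treat this as an assumption rather than a definitive recipe.

```python
import datasets

datasets.set_caching_enabled(False)      # stop writing transformed copies to the default cache
ds = datasets.load_dataset(
    "csv", data_files="data.tsv", delimiter="\t", cache_dir="./my_cache"
)["train"]                               # keep any cache files in a chosen folder

ds.save_to_disk("my_dataset")            # explicit on-disk copy, independent of the cache
reloaded = datasets.load_from_disk("my_dataset")
```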
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2021/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2021/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2020
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2020/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2020/comments
https://api.github.com/repos/huggingface/datasets/issues/2020/events
https://github.com/huggingface/datasets/pull/2020
826,961,126
MDExOlB1bGxSZXF1ZXN0NTg4OTE3MjYx
2,020
Remove unnecessary docstart check in conll-like datasets
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,342,816,000
1,615,469,617,000
1,615,469,617,000
CONTRIBUTOR
null
Related to this PR: #1998. Additionally, this PR adds the docstart note to the conll2002 dataset card ([link](https://raw.githubusercontent.com/teropa/nlp/master/resources/corpora/conll2002/ned.train) to the raw data with `DOCSTART` lines).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2020/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2020/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2020", "html_url": "https://github.com/huggingface/datasets/pull/2020", "diff_url": "https://github.com/huggingface/datasets/pull/2020.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2020.patch", "merged_at": 1615469617000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2019
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2019/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2019/comments
https://api.github.com/repos/huggingface/datasets/issues/2019/events
https://github.com/huggingface/datasets/pull/2019
826,625,706
MDExOlB1bGxSZXF1ZXN0NTg4NjEyODgy
2,019
Replace print with logging in dataset scripts
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq Maybe a script or even a test in `test_dataset_common.py` that verifies that a dataset script meets some set of quality standards (print calls and todos from the dataset script template are not present, etc.) could be added?", "Yes definitely !" ]
1,615,323,574,000
1,615,543,741,000
1,615,479,259,000
CONTRIBUTOR
null
Replaces `print(...)` in the dataset scripts with the library logger.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2019/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2019/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2019", "html_url": "https://github.com/huggingface/datasets/pull/2019", "diff_url": "https://github.com/huggingface/datasets/pull/2019.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2019.patch", "merged_at": 1615479258000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2018
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2018/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2018/comments
https://api.github.com/repos/huggingface/datasets/issues/2018/events
https://github.com/huggingface/datasets/pull/2018
826,473,764
MDExOlB1bGxSZXF1ZXN0NTg4NDc0NTQz
2,018
Md gender card update
{ "login": "mcmillanmajora", "id": 26722925, "node_id": "MDQ6VXNlcjI2NzIyOTI1", "avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mcmillanmajora", "html_url": "https://github.com/mcmillanmajora", "followers_url": "https://api.github.com/users/mcmillanmajora/followers", "following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}", "gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}", "starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions", "organizations_url": "https://api.github.com/users/mcmillanmajora/orgs", "repos_url": "https://api.github.com/users/mcmillanmajora/repos", "events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}", "received_events_url": "https://api.github.com/users/mcmillanmajora/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Link to the card: https://github.com/mcmillanmajora/datasets/blob/md-gender-card/datasets/md_gender_bias/README.md", "dataset card* @sgugger :p ", "Ahah that's what I wanted to say @lhoestq, thanks for fixing. Not used to review the Datasets side ;-)" ]
1,615,316,240,000
1,615,570,260,000
1,615,570,260,000
CONTRIBUTOR
null
I updated the descriptions of the datasets as they appear in the HF repo and the descriptions of the source datasets according to what I could find from the paper and the references. I'm still a little unclear about some of the fields of the different configs, and there was little info on the word list and name list. I'll contact the authors to see if they have any additional information or suggested changes.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2018/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2018/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2018", "html_url": "https://github.com/huggingface/datasets/pull/2018", "diff_url": "https://github.com/huggingface/datasets/pull/2018.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2018.patch", "merged_at": 1615570260000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2017
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2017/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2017/comments
https://api.github.com/repos/huggingface/datasets/issues/2017/events
https://github.com/huggingface/datasets/pull/2017
826,428,578
MDExOlB1bGxSZXF1ZXN0NTg4NDMyNDc2
2,017
Add TF-based Features to handle different modes of data
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,314,592,000
1,615,984,328,000
1,615,984,327,000
CONTRIBUTOR
null
Hi, I am creating this draft PR to work on adding features similar to [TF datasets](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/core/features). I'll start with the `Tensor` and `FeatureConnector` classes and build upon them to add other features as well. This is a work in progress.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2017/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2017/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2017", "html_url": "https://github.com/huggingface/datasets/pull/2017", "diff_url": "https://github.com/huggingface/datasets/pull/2017.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2017.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/2016
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2016/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2016/comments
https://api.github.com/repos/huggingface/datasets/issues/2016/events
https://github.com/huggingface/datasets/pull/2016
825,965,493
MDExOlB1bGxSZXF1ZXN0NTg4MDA5NjEz
2,016
Not all languages have 2 digit codes.
{ "login": "asiddhant", "id": 13891775, "node_id": "MDQ6VXNlcjEzODkxNzc1", "avatar_url": "https://avatars.githubusercontent.com/u/13891775?v=4", "gravatar_id": "", "url": "https://api.github.com/users/asiddhant", "html_url": "https://github.com/asiddhant", "followers_url": "https://api.github.com/users/asiddhant/followers", "following_url": "https://api.github.com/users/asiddhant/following{/other_user}", "gists_url": "https://api.github.com/users/asiddhant/gists{/gist_id}", "starred_url": "https://api.github.com/users/asiddhant/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/asiddhant/subscriptions", "organizations_url": "https://api.github.com/users/asiddhant/orgs", "repos_url": "https://api.github.com/users/asiddhant/repos", "events_url": "https://api.github.com/users/asiddhant/events{/privacy}", "received_events_url": "https://api.github.com/users/asiddhant/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,298,019,000
1,615,485,663,000
1,615,485,663,000
CONTRIBUTOR
null
.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2016/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2016/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2016", "html_url": "https://github.com/huggingface/datasets/pull/2016", "diff_url": "https://github.com/huggingface/datasets/pull/2016.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2016.patch", "merged_at": 1615485663000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2015
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2015/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2015/comments
https://api.github.com/repos/huggingface/datasets/issues/2015/events
https://github.com/huggingface/datasets/pull/2015
825,942,108
MDExOlB1bGxSZXF1ZXN0NTg3OTg4NTQ0
2,015
Fix ipython function creation in tests
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,297,019,000
1,615,298,764,000
1,615,298,763,000
MEMBER
null
The test at `tests/test_caching.py::RecurseDumpTest::test_dump_ipython_function` was failing on Python 3.8 because the IPython function was not created properly. Fix #2010
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2015/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2015/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2015", "html_url": "https://github.com/huggingface/datasets/pull/2015", "diff_url": "https://github.com/huggingface/datasets/pull/2015.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2015.patch", "merged_at": 1615298763000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2014
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2014/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2014/comments
https://api.github.com/repos/huggingface/datasets/issues/2014/events
https://github.com/huggingface/datasets/pull/2014
825,916,531
MDExOlB1bGxSZXF1ZXN0NTg3OTY1NDg3
2,014
more explicit method parameters
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,295,909,000
1,615,370,917,000
1,615,370,916,000
CONTRIBUTOR
null
Re: #2009. Not super convinced this is better, and while I usually fight against kwargs, here it seems to me that it better conveys the relationship to the `_split_generator` method.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2014/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2014/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2014", "html_url": "https://github.com/huggingface/datasets/pull/2014", "diff_url": "https://github.com/huggingface/datasets/pull/2014.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2014.patch", "merged_at": 1615370916000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2013
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2013/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2013/comments
https://api.github.com/repos/huggingface/datasets/issues/2013/events
https://github.com/huggingface/datasets/pull/2013
825,694,305
MDExOlB1bGxSZXF1ZXN0NTg3NzYzMTgx
2,013
Add Cryptonite dataset
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,285,931,000
1,615,318,027,000
1,615,318,026,000
CONTRIBUTOR
null
cc @aviaefrat, who is the original author of the dataset and paper; see https://github.com/aviaefrat/cryptonite
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2013/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2013/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2013", "html_url": "https://github.com/huggingface/datasets/pull/2013", "diff_url": "https://github.com/huggingface/datasets/pull/2013.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2013.patch", "merged_at": 1615318026000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2012
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2012/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2012/comments
https://api.github.com/repos/huggingface/datasets/issues/2012/events
https://github.com/huggingface/datasets/issues/2012
825,634,064
MDU6SXNzdWU4MjU2MzQwNjQ=
2,012
No upstream branch
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "What's the issue exactly ?\r\n\r\nGiven an `upstream` remote repository with url `https://github.com/huggingface/datasets.git`, you can totally rebase from `upstream/master`.\r\n\r\nIt's mentioned at the beginning how to add the `upstream` remote repository\r\n\r\nhttps://github.com/huggingface/datasets/blob/987df6b4e9e20fc0c92bc9df48137d170756fd7b/ADD_NEW_DATASET.md#L10-L14", "~~What difference is there with the default `origin` remote that is set when you clone the repo?~~ I've just understood that this applies to **forks** of the repo 🤡 " ]
1,615,283,335,000
1,615,289,611,000
1,615,289,611,000
CONTRIBUTOR
null
Feels like the documentation on adding a new dataset is outdated? https://github.com/huggingface/datasets/blob/987df6b4e9e20fc0c92bc9df48137d170756fd7b/ADD_NEW_DATASET.md#L49-L54 There is no `upstream` branch on the remote.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2012/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2012/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2011
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2011/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2011/comments
https://api.github.com/repos/huggingface/datasets/issues/2011/events
https://github.com/huggingface/datasets/pull/2011
825,621,952
MDExOlB1bGxSZXF1ZXN0NTg3Njk4MTAx
2,011
Add RoSent Dataset
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,282,808,000
1,615,485,652,000
1,615,485,652,000
CONTRIBUTOR
null
This PR adds a Romanian sentiment analysis dataset. This PR also closes pending PR #1529. I had to add an `original_id` feature because the dataset files have repeated IDs. I can remove it if needed. I have also added an `id` feature which is unique. Let me know in case of any issues.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2011/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2011/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2011", "html_url": "https://github.com/huggingface/datasets/pull/2011", "diff_url": "https://github.com/huggingface/datasets/pull/2011.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2011.patch", "merged_at": 1615485652000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2010
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2010/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2010/comments
https://api.github.com/repos/huggingface/datasets/issues/2010/events
https://github.com/huggingface/datasets/issues/2010
825,567,635
MDU6SXNzdWU4MjU1Njc2MzU=
2,010
Local testing fails
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "I'm not able to reproduce on my side.\r\nCan you provide the full stacktrace please ?\r\nWhat version of `python` and `dill` do you have ? Which OS are you using ?", "```\r\nco_filename = '<ipython-input-2-e0383a102aae>', returned_obj = [0]\r\n \r\n def create_ipython_func(co_filename, returned_obj):\r\n def func():\r\n return returned_obj\r\n \r\n code = func.__code__\r\n> code = CodeType(*[getattr(code, k) if k != \"co_filename\" else co_filename for k in code_args])\r\nE TypeError: an integer is required (got type bytes)\r\n\r\ntests/test_caching.py:152: TypeError\r\n```\r\n\r\nPython 3.8.8 \r\ndill==0.3.1.1\r\n", "I managed to reproduce. This comes from the CodeType init signature that is different in python 3.8.8\r\nI opened a PR to fix this test\r\nThanks !" ]
1,615,280,498,000
1,615,298,763,000
1,615,298,763,000
CONTRIBUTOR
null
I'm following the CI setup as described in https://github.com/huggingface/datasets/blob/8eee4fa9e133fe873a7993ba746d32ca2b687551/.circleci/config.yml#L16-L19 in a new conda environment, at commit https://github.com/huggingface/datasets/commit/4de6dbf84e93dad97e1000120d6628c88954e5d4 and getting ``` FAILED tests/test_caching.py::RecurseDumpTest::test_dump_ipython_function - TypeError: an integer is required (got type bytes) 1 failed, 2321 passed, 5109 skipped, 10 warnings in 124.32s (0:02:04) ``` Seems like a discrepancy with CI, perhaps a lib version that's not controlled? Tried with `pyarrow=={1.0.0,0.17.1,2.0.0}`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2010/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2010/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2009
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2009/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2009/comments
https://api.github.com/repos/huggingface/datasets/issues/2009/events
https://github.com/huggingface/datasets/issues/2009
825,541,366
MDU6SXNzdWU4MjU1NDEzNjY=
2,009
Ambiguous documentation
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[ { "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @theo-m !\r\n\r\nA few lines above this line, you'll find that the `_split_generators` method returns a list of `SplitGenerator`s objects:\r\n\r\n```python\r\ndatasets.SplitGenerator(\r\n name=datasets.Split.VALIDATION,\r\n # These kwargs will be passed to _generate_examples\r\n gen_kwargs={\r\n \"filepath\": os.path.join(data_dir, \"dev.jsonl\"),\r\n \"split\": \"dev\",\r\n },\r\n),\r\n```\r\n\r\nNotice the `gen_kwargs` argument passed to the constructor of `SplitGenerator`: this dict will be unpacked as keyword arguments to pass to the `_generat_examples` method (in this case the `filepath` and `split` arguments).\r\n\r\nLet me know if that helps!", "Oh ok I hadn't made the connection between those two, will offer a tweak to the comment and the template then - thanks!" ]
1,615,279,331,000
1,615,561,294,000
1,615,561,294,000
CONTRIBUTOR
null
https://github.com/huggingface/datasets/blob/2ac9a0d24a091989f869af55f9f6411b37ff5188/templates/new_dataset_script.py#L156-L158 Looking at the template, I find this documentation line confusing: the method parameters don't include the `gen_kwargs`, so I'm unclear where they're coming from. Happy to push a PR with a clearer statement when I understand the meaning.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2009/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2009/timeline
null
null
null
false
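To make the `gen_kwargs` relationship discussed in the issue above concrete, here is a minimal sketch of how the two builder methods line up. The builder name, the feature schema, and the `train.jsonl` path are illustrative assumptions and not part of the original report; the pattern follows the `SplitGenerator` snippet quoted in the comments.

```python
import json

import datasets


class IllustrativeDataset(datasets.GeneratorBasedBuilder):
    """Hypothetical builder showing how gen_kwargs reach _generate_examples."""

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        # Every key in gen_kwargs becomes a keyword argument of _generate_examples.
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"filepath": "train.jsonl", "split": "train"},
            )
        ]

    def _generate_examples(self, filepath, split):
        # Parameter names must match the gen_kwargs keys declared above.
        with open(filepath, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                yield f"{split}-{idx}", {"text": json.loads(line)["text"]}
```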
https://api.github.com/repos/huggingface/datasets/issues/2008
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2008/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2008/comments
https://api.github.com/repos/huggingface/datasets/issues/2008/events
https://github.com/huggingface/datasets/pull/2008
825,153,804
MDExOlB1bGxSZXF1ZXN0NTg3Mjc1Njk4
2,008
Fix various typos/grammar in the docs
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "What do yo think of the documentation btw ?\r\nWhat parts would you like to see improved ?", "I like how concise and straightforward the docs are.\r\n\r\nFew things that would further improve the docs IMO:\r\n* the usage example of `Dataset.formatted_as` in https://huggingface.co/docs/datasets/master/processing.html\r\n* the \"Open in Colab\" button would be nice where it makes sense (we can borrow this from the transformers project + link to HF Forum)" ]
1,615,253,968,000
1,615,833,769,000
1,615,285,292,000
CONTRIBUTOR
null
This PR: * fixes various typos/grammar issues I came across while reading the docs * adds the "Install with conda" installation instructions Closes #1959
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2008/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2008/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2008", "html_url": "https://github.com/huggingface/datasets/pull/2008", "diff_url": "https://github.com/huggingface/datasets/pull/2008.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2008.patch", "merged_at": 1615285292000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2007
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2007/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2007/comments
https://api.github.com/repos/huggingface/datasets/issues/2007/events
https://github.com/huggingface/datasets/issues/2007
824,518,158
MDU6SXNzdWU4MjQ1MTgxNTg=
2,007
How to not load huggingface datasets into memory
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "So maybe a summary here: \r\nIf I could fit a large model with batch_size = X into memory, is there a way I could train this model for huge datasets with keeping setting the same? thanks ", "The `datastets` library doesn't load datasets into memory. Therefore you can load a dataset that is terabytes big without filling up your RAM.\r\n\r\nThe only thing that's loaded into memory during training is the batch used in the training step.\r\nSo as long as your model works with batch_size = X, then you can load an even bigger dataset and it will work as well with the same batch_size.\r\n\r\nNote that you still have to take into account that some batches take more memory than others, depending on the texts lengths. If it works for a batch with batch_size = X and with texts of maximum length, then it will work for all batches.\r\n\r\nIn your case I guess that there are a few long sentences in the dataset. For those long sentences you get a memory error on your GPU because they're too long. By passing `max_train_samples` you may have taken a subset of the dataset that only contain short sentences. That's probably why in your case it worked only when you set `max_train_samples`.\r\nI'd suggest you to reduce the batch size so that the batches with long sentences can be loaded on the GPU.\r\n\r\nLet me know if that helps or if you have other questions" ]
1,615,206,926,000
1,628,100,145,000
1,628,100,145,000
NONE
null
Hi I am running this example from transformers library version 4.3.3: (Here is the full documentation https://github.com/huggingface/transformers/issues/8771 but the running command should work out of the box) USE_TF=0 deepspeed run_seq2seq.py --model_name_or_path google/mt5-base --dataset_name wmt16 --dataset_config_name ro-en --source_prefix "translate English to Romanian: " --task translation_en_to_ro --output_dir /test/test_large --do_train --do_eval --predict_with_generate --max_train_samples 500 --max_val_samples 500 --max_source_length 128 --max_target_length 128 --sortish_sampler --per_device_train_batch_size 8 --val_max_target_length 128 --deepspeed ds_config.json --num_train_epochs 1 --eval_steps 25000 --warmup_steps 500 --overwrite_output_dir (Here please find the script: https://github.com/huggingface/transformers/blob/master/examples/seq2seq/run_seq2seq.py) If you do not pass max_train_samples in above command to load the full dataset, then I get memory issue on a gpu with 24 GigBytes of memory. I need to train large-scale mt5 model on large-scale datasets of wikipedia (multiple of them concatenated or other datasets in multiple languages like OPUS), could you help me how I can avoid loading the full data into memory? to make the scripts not related to data size? In above example, I was hoping the script could work without relying on dataset size, so I can still train the model without subsampling training set. thank you so much @lhoestq for your great help in advance
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2007/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2007/timeline
null
null
null
false
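As a rough illustration of the memory-mapping behaviour described in the reply above, the sketch below loads the same `wmt16` `ro-en` split mentioned in the issue and reads it in small slices. The batch size of 8 is an arbitrary placeholder and this is not the actual training loop of `run_seq2seq.py`.

```python
from datasets import load_dataset

# The dataset is backed by Arrow files on disk and memory-mapped,
# so loading it does not pull the full split into RAM.
dataset = load_dataset("wmt16", "ro-en", split="train")
print(dataset.cache_files)  # location of the on-disk Arrow files

# Only the rows that are actually indexed get materialized in memory.
for start in range(0, len(dataset), 8):
    batch = dataset[start : start + 8]
    # tokenize / feed `batch` to the model here, one small batch at a time
    break
```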
https://api.github.com/repos/huggingface/datasets/issues/2006
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2006/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2006/comments
https://api.github.com/repos/huggingface/datasets/issues/2006/events
https://github.com/huggingface/datasets/pull/2006
824,457,794
MDExOlB1bGxSZXF1ZXN0NTg2Njg5Nzk2
2,006
Don't gitignore dvc.lock
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,201,988,000
1,615,202,915,000
1,615,202,914,000
MEMBER
null
The benchmark runs are [failing](https://github.com/huggingface/datasets/runs/2055534629?check_suite_focus=true) because of ``` ERROR: 'dvc.lock' is git-ignored. ``` I removed the dvc.lock file from the gitignore to fix that.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2006/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2006/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2006", "html_url": "https://github.com/huggingface/datasets/pull/2006", "diff_url": "https://github.com/huggingface/datasets/pull/2006.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2006.patch", "merged_at": 1615202914000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2005
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2005/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2005/comments
https://api.github.com/repos/huggingface/datasets/issues/2005/events
https://github.com/huggingface/datasets/issues/2005
824,275,035
MDU6SXNzdWU4MjQyNzUwMzU=
2,005
Setting to torch format not working with torchvision and MNIST
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Adding to the previous information, I think `torch.utils.data.DataLoader` is doing some conversion. \r\nWhat I tried:\r\n```python\r\ntrain_dataset = load_dataset('mnist')\r\n```\r\nI don't use any `map` or `set_format` or any `transform`. I use this directly, and try to load batches using the `DataLoader` with batch size 2, I get an output like this for the `image`:\r\n\r\n```\r\n[[tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor...\r\n```\r\nFor `label`, it works fine:\r\n```\r\ntensor([7, 6])\r\n```\r\nNote that I didn't specify conversion to torch tensors anywhere.\r\n\r\nBasically, there are two problems here:\r\n1. `dataset.map` doesn't return tensor type objects, even though it uses the transforms, the grayscale conversion in transform was done, but the output was lists only.\r\n2. The `DataLoader` performs its own conversion, which may be not desired.\r\n\r\nI understand that we can't change `DataLoader` because it is a torch functionality, however, is there a way we can handle image data to allow using it with torch `DataLoader` and `torchvision` properly?\r\n\r\nI think if the `image` was a torch tensor (N,H,W,C), or a list of torch tensors (H,W,C), before it is passed to `DataLoader`, then we might not face this issue. ", "What's the feature types of your new dataset after `.map` ?\r\n\r\nCan you try with adding `features=` in the `.map` call in order to set the \"image\" feature type to `Array2D` ?\r\nThe default feature type is lists of lists, we've not implemented shape verification to use ArrayXD instead of nested lists yet", "Hi @lhoestq\r\n\r\nRaw feature types are like this:\r\n```\r\nImage:\r\n<class 'list'> 60000 #(type, len)\r\n<class 'list'> 28\r\n<class 'list'> 28\r\n<class 'int'>\r\nLabel:\r\n<class 'list'> 60000\r\n<class 'int'>\r\n```\r\nInside the `prepare_feature` method with batch size 100000 , after processing, they are like this:\r\n\r\nInside Prepare Train Features\r\n```\r\nImage:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'> 1\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nAfter map, the feature type are like this:\r\n```\r\nImage:\r\n<class 'list'> 60000\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'list'> 28\r\n<class 'float'>\r\nLabel:\r\n<class 'list'> 60000\r\n<class 'int'>\r\n```\r\n\r\nAfter dataloader with batch size 2, the batch features are like this:\r\n```\r\nImage:\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\n```\r\n<hr>\r\n\r\nWhen I was setting the format of `train_dataset` to 'torch' after mapping - \r\n```\r\nImage:\r\n<class 'list'> 60000\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nCorresponding DataLoader batch:\r\n```\r\nFrom DataLoader batch features\r\nImage:\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nI will check with features and get back.\r\n\r\n\r\n\r\n", "Hi @lhoestq\r\n\r\n# Using Array3D\r\nI tried this:\r\n```python\r\nfeatures = datasets.Features({\r\n \"image\": 
datasets.Array3D(shape=(1,28,28),dtype=\"float32\"),\r\n \"label\": datasets.features.ClassLabel(names=[\"0\", \"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\"]),\r\n })\r\ntrain_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000)\r\n```\r\nand it didn't fix the issue.\r\n\r\nDuring the `prepare_train_features:\r\n```\r\nImage:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'> 1\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nAfter the `map`:\r\n\r\n```\r\nImage:\r\n<class 'list'> 60000\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'list'> 28\r\n<class 'float'>\r\nLabel:\r\n<class 'list'> 60000\r\n<class 'int'>\r\n```\r\nFrom the DataLoader batch:\r\n```\r\nImage:\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\n```\r\nIt is the same as before.\r\n\r\n---\r\n\r\nUsing `datasets.Sequence(datasets.Array2D(shape=(28,28),dtype=\"float32\"))` gave an error during `map`:\r\n\r\n```python\r\nArrowNotImplementedError Traceback (most recent call last)\r\n<ipython-input-95-d28e69289084> in <module>()\r\n 3 \"label\": datasets.features.ClassLabel(names=[\"0\", \"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\"]),\r\n 4 })\r\n----> 5 train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000)\r\n\r\n15 frames\r\n/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py in map(self, function, with_indices, input_columns, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc)\r\n 446 num_proc=num_proc,\r\n 447 )\r\n--> 448 for k, dataset in self.items()\r\n 449 }\r\n 450 )\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py in <dictcomp>(.0)\r\n 446 num_proc=num_proc,\r\n 447 )\r\n--> 448 for k, dataset in self.items()\r\n 449 }\r\n 450 )\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)\r\n 1307 fn_kwargs=fn_kwargs,\r\n 1308 new_fingerprint=new_fingerprint,\r\n-> 1309 update_data=update_data,\r\n 1310 )\r\n 1311 else:\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)\r\n 202 }\r\n 203 # apply actual function\r\n--> 204 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n 205 datasets: List[\"Dataset\"] = list(out.values()) if isinstance(out, dict) else [out]\r\n 206 # re-apply format to the output\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)\r\n 335 # Call actual function\r\n 336 \r\n--> 337 out = func(self, *args, **kwargs)\r\n 338 \r\n 339 # Update fingerprint of in-place transforms + update in-place history of transforms\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, 
new_fingerprint, rank, offset, update_data)\r\n 1580 if update_data:\r\n 1581 batch = cast_to_python_objects(batch)\r\n-> 1582 writer.write_batch(batch)\r\n 1583 if update_data:\r\n 1584 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)\r\n 274 typed_sequence = TypedSequence(batch_examples[col], type=col_type, try_type=col_try_type)\r\n 275 typed_sequence_examples[col] = typed_sequence\r\n--> 276 pa_table = pa.Table.from_pydict(typed_sequence_examples)\r\n 277 self.write_table(pa_table, writer_batch_size)\r\n 278 \r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_pydict()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.asarray()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol()\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in __arrow_array__(self, type)\r\n 95 out = pa.ExtensionArray.from_storage(type, pa.array(self.data, type.storage_dtype))\r\n 96 else:\r\n---> 97 out = pa.array(self.data, type=type)\r\n 98 if trying_type and out[0].as_py() != self.data[0]:\r\n 99 raise TypeError(\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowNotImplementedError: extension\r\n```", "# Convert raw tensors to torch format\r\nStrangely, converting to torch tensors works perfectly on `raw_dataset`:\r\n```python\r\nraw_dataset.set_format('torch',columns=['image','label'])\r\n```\r\nTypes:\r\n```\r\nImage:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nUsing this for transforms:\r\n```python\r\ndef prepare_features(examples):\r\n images = []\r\n labels = []\r\n for example_idx, example in enumerate(examples[\"image\"]):\r\n if transform is not None:\r\n images.append(transform(\r\n examples[\"image\"][example_idx].numpy()\r\n ))\r\n else:\r\n images.append(examples[\"image\"][example_idx].numpy())\r\n labels.append(examples[\"label\"][example_idx])\r\n output = {\"label\":labels, \"image\":images}\r\n return output\r\n```\r\n\r\nInside `prepare_train_features`:\r\n```\r\nImage:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'> 1\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nAfter `map`:\r\n```\r\nImage:\r\n<class 'list'> 60000\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'>\r\n```\r\nDataLoader batch:\r\n\r\n```\r\nImage:\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\n```\r\n\r\n---\r\n\r\n## Using `torch` 
format:\r\n```\r\nImage:\r\n<class 'list'> 60000\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'>\r\n```\r\nDataLoader batches:\r\n\r\n```\r\nImage:\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\n```\r\n\r\n---\r\n## Using the features - `Array3D`:\r\n\r\n```\r\nImage:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'> 1\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nAfter `map`:\r\n```\r\nImage:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'> 1\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nAfter DataLoader `batch`:\r\n```\r\nImage:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'> 1\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nThe last one works perfectly.\r\n\r\n![image](https://user-images.githubusercontent.com/29076344/110491452-4cf09c00-8117-11eb-8a47-73bf3fc0c3dc.png)\r\n\r\nI wonder why this worked, and others didn't.\r\n\r\n\r\n\r\n\r\n\r\n\r\n", "Concluding, the way it works right now is:\r\n\r\n1. Converting raw dataset to `torch` format.\r\n2. Use the transform and apply using `map`, ensure the returned values are tensors. \r\n3. When mapping, use `features` with `image` being `Array3D` type.", "What the dataset returns depends on the feature type.\r\nFor a feature type that is Sequence(Sequence(Sequence(Value(\"uint8\")))), a dataset formatted as \"torch\" return lists of lists of tensors. This is because the lists lengths may vary.\r\nFor a feature type that is Array3D on the other hand it returns one tensor. This is because the size of the tensor is fixed and defined bu the Array3D type.", "Okay, that makes sense.\r\nRaw images are list of Array2D, hence we get a single tensor when `set_format` is used. But, why should I need to convert the raw images to `torch` format when `map` does this internally?\r\n\r\nUsing `Array3D` did not work with `map` when raw images weren't `set_format`ted to torch type.", "I understand that `map` needs to know what kind of output tensors are expected, and thus converting the raw dataset to `torch` format is necessary. Closing the issue since it is resolved." ]
1,615,189,091,000
1,615,312,693,000
1,615,312,693,000
CONTRIBUTOR
null
Hi I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object. A snippet of what I am trying to do: ```python def prepare_features(examples): images = [] labels = [] for example_idx, example in enumerate(examples["image"]): if transform is not None: images.append(transform( np.array(examples["image"][example_idx], dtype=np.uint8) )) else: images.append(torch.tensor(np.array(examples["image"][example_idx], dtype=np.uint8))) labels.append(torch.tensor(examples["label"][example_idx])) output = {"label":labels, "image":images} return output raw_dataset = load_dataset('mnist') train_dataset = raw_dataset.map(prepare_features, batched=True, batch_size=10000) train_dataset.set_format("torch",columns=["image","label"]) ``` After this, I check the type of the following: ```python print(type(train_dataset["train"]["label"])) print(type(train_dataset["train"]["image"][0])) ``` This leads to the following output: ```python <class 'torch.Tensor'> <class 'list'> ``` I use `torch.utils.DataLoader` for batches, the type of `batch["train"]["image"]` is also `<class 'list'>`. I don't understand why only the `label` is converted to a torch tensor, why does the image not get converted? How can I fix this issue? Thanks, Gunjan EDIT: I just checked the shapes, and the types, `batch[image]` is a actually a list of list of tensors. Shape is (1,28,2,28), where `batch_size` is 2. I don't understand why this is happening. Ideally it should be a tensor of shape (2,1,28,28). EDIT 2: Inside `prepare_train_features`, the shape of `images[0]` is `torch.Size([1,28,28])`, the conversion is working. However, the output of the `map` is a list of list of list of list.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2005/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2005/timeline
null
null
null
false
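Condensing the resolution reached in the thread above (fixed-shape `Array3D` features plus the `torch` format), a sketch of the working pattern might look as follows; the normalisation step and the batch size are illustrative stand-ins for the original `torchvision` transform.

```python
import numpy as np
import datasets
from torch.utils.data import DataLoader

features = datasets.Features(
    {
        "image": datasets.Array3D(shape=(1, 28, 28), dtype="float32"),
        "label": datasets.ClassLabel(names=[str(i) for i in range(10)]),
    }
)

def prepare_features(examples):
    # Add a channel dimension and scale to [0, 1]; a torchvision transform
    # could be applied here instead.
    images = [
        np.array(img, dtype=np.float32)[None, :, :] / 255.0
        for img in examples["image"]
    ]
    return {"image": images, "label": examples["label"]}

raw = datasets.load_dataset("mnist")
prepared = raw.map(prepare_features, features=features, batched=True, batch_size=10000)
prepared.set_format("torch", columns=["image", "label"])

batch = next(iter(DataLoader(prepared["train"], batch_size=2)))
# With Array3D the images come out stacked: torch.Size([2, 1, 28, 28])
```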
https://api.github.com/repos/huggingface/datasets/issues/2004
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2004/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2004/comments
https://api.github.com/repos/huggingface/datasets/issues/2004/events
https://github.com/huggingface/datasets/pull/2004
824,080,760
MDExOlB1bGxSZXF1ZXN0NTg2MzcyODY1
2,004
LaRoSeDa
{ "login": "MihaelaGaman", "id": 6823177, "node_id": "MDQ6VXNlcjY4MjMxNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/6823177?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MihaelaGaman", "html_url": "https://github.com/MihaelaGaman", "followers_url": "https://api.github.com/users/MihaelaGaman/followers", "following_url": "https://api.github.com/users/MihaelaGaman/following{/other_user}", "gists_url": "https://api.github.com/users/MihaelaGaman/gists{/gist_id}", "starred_url": "https://api.github.com/users/MihaelaGaman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MihaelaGaman/subscriptions", "organizations_url": "https://api.github.com/users/MihaelaGaman/orgs", "repos_url": "https://api.github.com/users/MihaelaGaman/repos", "events_url": "https://api.github.com/users/MihaelaGaman/events{/privacy}", "received_events_url": "https://api.github.com/users/MihaelaGaman/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq all the changes requested are implemented. Thank you for your time and feedback :)" ]
1,615,165,592,000
1,615,977,800,000
1,615,977,800,000
CONTRIBUTOR
null
Add LaRoSeDa to huggingface datasets.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2004/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2004/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2004", "html_url": "https://github.com/huggingface/datasets/pull/2004", "diff_url": "https://github.com/huggingface/datasets/pull/2004.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2004.patch", "merged_at": 1615977800000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2002
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2002/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2002/comments
https://api.github.com/repos/huggingface/datasets/issues/2002/events
https://github.com/huggingface/datasets/pull/2002
823,955,744
MDExOlB1bGxSZXF1ZXN0NTg2MjgwNzE3
2,002
MOROCO
{ "login": "MihaelaGaman", "id": 6823177, "node_id": "MDQ6VXNlcjY4MjMxNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/6823177?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MihaelaGaman", "html_url": "https://github.com/MihaelaGaman", "followers_url": "https://api.github.com/users/MihaelaGaman/followers", "following_url": "https://api.github.com/users/MihaelaGaman/following{/other_user}", "gists_url": "https://api.github.com/users/MihaelaGaman/gists{/gist_id}", "starred_url": "https://api.github.com/users/MihaelaGaman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MihaelaGaman/subscriptions", "organizations_url": "https://api.github.com/users/MihaelaGaman/orgs", "repos_url": "https://api.github.com/users/MihaelaGaman/repos", "events_url": "https://api.github.com/users/MihaelaGaman/events{/privacy}", "received_events_url": "https://api.github.com/users/MihaelaGaman/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq Thank you for all the feedback. I've added the suggested changes in my last commit." ]
1,615,134,137,000
1,616,147,526,000
1,616,147,526,000
CONTRIBUTOR
null
Add MOROCO to huggingface datasets.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2002/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2002/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2002", "html_url": "https://github.com/huggingface/datasets/pull/2002", "diff_url": "https://github.com/huggingface/datasets/pull/2002.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2002.patch", "merged_at": 1616147526000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2001
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2001/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2001/comments
https://api.github.com/repos/huggingface/datasets/issues/2001/events
https://github.com/huggingface/datasets/issues/2001
823,946,706
MDU6SXNzdWU4MjM5NDY3MDY=
2,001
Empty evidence document ("provenance") in KILT ELI5 dataset
{ "login": "donggyukimc", "id": 16605764, "node_id": "MDQ6VXNlcjE2NjA1NzY0", "avatar_url": "https://avatars.githubusercontent.com/u/16605764?v=4", "gravatar_id": "", "url": "https://api.github.com/users/donggyukimc", "html_url": "https://github.com/donggyukimc", "followers_url": "https://api.github.com/users/donggyukimc/followers", "following_url": "https://api.github.com/users/donggyukimc/following{/other_user}", "gists_url": "https://api.github.com/users/donggyukimc/gists{/gist_id}", "starred_url": "https://api.github.com/users/donggyukimc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/donggyukimc/subscriptions", "organizations_url": "https://api.github.com/users/donggyukimc/orgs", "repos_url": "https://api.github.com/users/donggyukimc/repos", "events_url": "https://api.github.com/users/donggyukimc/events{/privacy}", "received_events_url": "https://api.github.com/users/donggyukimc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,131,695,000
1,615,960,261,000
1,615,960,261,000
NONE
null
In the original KILT benchmark(https://github.com/facebookresearch/KILT), all samples has its evidence document (i.e. wikipedia page id) for prediction. For example, a sample in ELI5 dataset has the format including provenance (=evidence document) like this `{"id": "1kiwfx", "input": "In Trading Places (1983, Akroyd/Murphy) how does the scheme at the end of the movie work? Why would buying a lot of OJ at a high price ruin the Duke Brothers?", "output": [{"answer": "I feel so old. People have been askinbg what happened at the end of this movie for what must be the last 15 years of my life. It never stops. Every year/month/fortnight, I see someone asking what happened, and someone explaining. Andf it will keep on happening, until I am 90yrs old, in a home, with nothing but the Internet and my bladder to keep me going. And there it will be: \"what happens at the end of Trading Places?\""}, {"provenance": [{"wikipedia_id": "242855", "title": "Futures contract", "section": "Section::::Abstract.", "start_paragraph_id": 1, "start_character": 14, "end_paragraph_id": 1, "end_character": 612, "bleu_score": 0.9232808519770748}]}], "meta": {"partial_evidence": [{"wikipedia_id": "520990", "title": "Trading Places", "section": "Section::::Plot.\n", "start_paragraph_id": 7, "end_paragraph_id": 7, "meta": {"evidence_span": ["On television, they learn that Clarence Beeks is transporting a secret USDA report on orange crop forecasts.", "On television, they learn that Clarence Beeks is transporting a secret USDA report on orange crop forecasts. Winthorpe and Valentine recall large payments made to Beeks by the Dukes and realize that the Dukes plan to obtain the report to corner the market on frozen orange juice.", "Winthorpe and Valentine recall large payments made to Beeks by the Dukes and realize that the Dukes plan to obtain the report to corner the market on frozen orange juice."]}}]}}` However, KILT ELI5 dataset from huggingface datasets library only contain empty list of provenance. `{'id': '1oy5tc', 'input': 'in football whats the point of wasting the first two plays with a rush - up the middle - not regular rush plays i get those', 'meta': {'left_context': '', 'mention': '', 'obj_surface': [], 'partial_evidence': [], 'right_context': '', 'sub_surface': [], 'subj_aliases': [], 'template_questions': []}, 'output': [{'answer': 'In most cases the O-Line is supposed to make a hole for the running back to go through. If you run too many plays to the outside/throws the defense will catch on.\n\nAlso, 2 5 yard plays gets you a new set of downs.', 'meta': {'score': 2}, 'provenance': []}, {'answer': "I you don't like those type of plays, watch CFL. We only get 3 downs so you can't afford to waste one. Lots more passing.", 'meta': {'score': 2}, 'provenance': []}]} ` should i perform other procedure to obtain evidence documents?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2001/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2001/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2000
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2000/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2000/comments
https://api.github.com/repos/huggingface/datasets/issues/2000/events
https://github.com/huggingface/datasets/issues/2000
823,899,910
MDU6SXNzdWU4MjM4OTk5MTA=
2,000
Windows Permission Error (most recent version of datasets)
{ "login": "itsLuisa", "id": 73881148, "node_id": "MDQ6VXNlcjczODgxMTQ4", "avatar_url": "https://avatars.githubusercontent.com/u/73881148?v=4", "gravatar_id": "", "url": "https://api.github.com/users/itsLuisa", "html_url": "https://github.com/itsLuisa", "followers_url": "https://api.github.com/users/itsLuisa/followers", "following_url": "https://api.github.com/users/itsLuisa/following{/other_user}", "gists_url": "https://api.github.com/users/itsLuisa/gists{/gist_id}", "starred_url": "https://api.github.com/users/itsLuisa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/itsLuisa/subscriptions", "organizations_url": "https://api.github.com/users/itsLuisa/orgs", "repos_url": "https://api.github.com/users/itsLuisa/repos", "events_url": "https://api.github.com/users/itsLuisa/events{/privacy}", "received_events_url": "https://api.github.com/users/itsLuisa/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @itsLuisa !\r\n\r\nCould you give us more information about the error you're getting, please?\r\nA copy-paste of the Traceback would be nice to get a better understanding of what is wrong :) ", "Hello @SBrandeis , this is it:\r\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 537, in incomplete_dir\r\n yield tmp_dir\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 578, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 656, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 982, in _prepare_split\r\n num_examples, num_bytes = writer.finalize()\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\arrow_writer.py\", line 297, in finalize\r\n self.write_on_file()\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\arrow_writer.py\", line 230, in write_on_file\r\n pa_array = pa.array(typed_sequence)\r\n File \"pyarrow\\array.pxi\", line 222, in pyarrow.lib.array\r\n File \"pyarrow\\array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\arrow_writer.py\", line 97, in __arrow_array__\r\n out = pa.array(self.data, type=type)\r\n File \"pyarrow\\array.pxi\", line 305, in pyarrow.lib.array\r\n File \"pyarrow\\array.pxi\", line 39, in pyarrow.lib._sequence_to_array\r\n File \"pyarrow\\error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow\\error.pxi\", line 107, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowTypeError: Expected bytes, got a 'list' object\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"C:/Users/Luisa/Documents/Uni/WS 2020,21/Neural Networks/Final_Project/NN_Project/data_loading.py\", line 122, in <module>\r\n main()\r\n File \"C:/Users/Luisa/Documents/Uni/WS 2020,21/Neural Networks/Final_Project/NN_Project/data_loading.py\", line 111, in main\r\n dataset = datasets.load_dataset(\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\load.py\", line 740, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 586, in download_and_prepare\r\n self._save_info()\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\contextlib.py\", line 131, in __exit__\r\n self.gen.throw(type, value, traceback)\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 543, in incomplete_dir\r\n shutil.rmtree(tmp_dir)\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\shutil.py\", line 740, in rmtree\r\n return _rmtree_unsafe(path, onerror)\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\shutil.py\", line 618, in _rmtree_unsafe\r\n onerror(os.unlink, fullname, sys.exc_info())\r\n File 
\"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\shutil.py\", line 616, in _rmtree_unsafe\r\n os.unlink(fullname)\r\nPermissionError: [WinError 32] Der Prozess kann nicht auf die Datei zugreifen, da sie von einem anderen Prozess verwendet wird: 'C:\\\\Users\\\\Luisa\\\\.cache\\\\huggingface\\\\datasets\\\\sample\\\\default-20ee7d51a6a9454f\\\\0.0.0\\\\5fc4c3a355ea77ab446bd31fca5082437600b8364d29b2b95264048bd1f398b1.incomplete\\\\sample-train.arrow'\r\n\r\nProcess finished with exit code 1\r\n```", "Hi @itsLuisa, thanks for sharing the Traceback.\r\n\r\nYou are defining the \"id\" field as a `string` feature:\r\n```python\r\nclass Sample(datasets.GeneratorBasedBuilder):\r\n ...\r\n\r\n def _info(self):\r\n return datasets.DatasetInfo(\r\n features=datasets.Features(\r\n {\r\n \"id\": datasets.Value(\"string\"),\r\n # ^^ here\r\n \"tokens\": datasets.Sequence(datasets.Value(\"string\")),\r\n \"pos_tags\": datasets.Sequence(datasets.features.ClassLabel(names=[...])),\r\n[...]\r\n```\r\n\r\nBut in the `_generate_examples`, the \"id\" field is a list:\r\n```python\r\nids = list()\r\n```\r\n\r\nChanging:\r\n```python\r\n\"id\": datasets.Value(\"string\"),\r\n```\r\nInto:\r\n```python\r\n\"id\": datasets.Sequence(datasets.Value(\"string\")),\r\n```\r\n\r\nShould fix your issue.\r\n\r\nLet me know if this helps!", "It seems to be working now, thanks a lot for the help, @SBrandeis !", "Glad to hear it!\r\nI'm closing the issue" ]
1,615,118,128,000
1,615,293,777,000
1,615,293,777,000
NONE
null
Hi everyone, Can anyone help me with why the dataset loading script below raises a Windows Permission Error? I stuck quite closely to https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py , only I want to load the data from three local three-column tsv-files (id\ttokens\tpos_tags\n). I am using the most recent version of datasets. Thank you in advance! Luisa My script: ``` import datasets import csv logger = datasets.logging.get_logger(__name__) class SampleConfig(datasets.BuilderConfig): def __init__(self, **kwargs): super(SampleConfig, self).__init__(**kwargs) class Sample(datasets.GeneratorBasedBuilder): BUILDER_CONFIGS = [ SampleConfig(name="conll2003", version=datasets.Version("1.0.0"), description="Conll2003 dataset"), ] def _info(self): return datasets.DatasetInfo( description="Dataset with words and their POS-Tags", features=datasets.Features( { "id": datasets.Value("string"), "tokens": datasets.Sequence(datasets.Value("string")), "pos_tags": datasets.Sequence( datasets.features.ClassLabel( names=[ "''", ",", "-LRB-", "-RRB-", ".", ":", "CC", "CD", "DT", "EX", "FW", "HYPH", "IN", "JJ", "JJR", "JJS", "MD", "NN", "NNP", "NNPS", "NNS", "PDT", "POS", "PRP", "PRP$", "RB", "RBR", "RBS", "RP", "TO", "UH", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "WDT", "WP", "WRB", "``" ] ) ), } ), supervised_keys=None, homepage="https://catalog.ldc.upenn.edu/LDC2011T03", citation="Weischedel, Ralph, et al. OntoNotes Release 4.0 LDC2011T03. Web Download. Philadelphia: Linguistic Data Consortium, 2011.", ) def _split_generators(self, dl_manager): loaded_files = dl_manager.download_and_extract(self.config.data_files) return [ datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": loaded_files["train"]}), datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": loaded_files["test"]}), datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": loaded_files["val"]}) ] def _generate_examples(self, filepath): logger.info("generating examples from = %s", filepath) with open(filepath, encoding="cp1252") as f: data = csv.reader(f, delimiter="\t") ids = list() tokens = list() pos_tags = list() for id_, line in enumerate(data): #print(line) if len(line) == 1: if tokens: yield id_, {"id": ids, "tokens": tokens, "pos_tags": pos_tags} ids = list() tokens = list() pos_tags = list() else: ids.append(line[0]) tokens.append(line[1]) pos_tags.append(line[2]) # last example yield id_, {"id": ids, "tokens": tokens, "pos_tags": pos_tags} def main(): dataset = datasets.load_dataset( "data_loading.py", data_files={ "train": "train.tsv", "test": "test.tsv", "val": "val.tsv" } ) #print(dataset) if __name__=="__main__": main() ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2000/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2000/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1999
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1999/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1999/comments
https://api.github.com/repos/huggingface/datasets/issues/1999/events
https://github.com/huggingface/datasets/pull/1999
823,753,591
MDExOlB1bGxSZXF1ZXN0NTg2MTM5ODMy
1,999
Add FashionMNIST dataset
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @lhoestq,\r\n\r\nI have added the changes from the review." ]
1,615,066,617,000
1,615,283,531,000
1,615,283,531,000
CONTRIBUTOR
null
This PR adds [FashionMNIST](https://github.com/zalandoresearch/fashion-mnist) dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1999/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1999/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1999", "html_url": "https://github.com/huggingface/datasets/pull/1999", "diff_url": "https://github.com/huggingface/datasets/pull/1999.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1999.patch", "merged_at": 1615283531000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1998
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1998/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1998/comments
https://api.github.com/repos/huggingface/datasets/issues/1998/events
https://github.com/huggingface/datasets/pull/1998
823,723,960
MDExOlB1bGxSZXF1ZXN0NTg2MTE4NTQ4
1,998
Add -DOCSTART- note to dataset card of conll-like datasets
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Nice catch! Yes I didn't check the actual data, instead I was just looking for the `if line.startswith(\"-DOCSTART-\")` pattern." ]
1,615,057,709,000
1,615,429,207,000
1,615,429,207,000
CONTRIBUTOR
null
Closes #1983
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1998/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1998/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1998", "html_url": "https://github.com/huggingface/datasets/pull/1998", "diff_url": "https://github.com/huggingface/datasets/pull/1998.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1998.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1997
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1997/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1997/comments
https://api.github.com/repos/huggingface/datasets/issues/1997/events
https://github.com/huggingface/datasets/issues/1997
823,679,465
MDU6SXNzdWU4MjM2Nzk0NjU=
1,997
from datasets import MoleculeDataset, GEOMDataset
{ "login": "futianfan", "id": 5087210, "node_id": "MDQ6VXNlcjUwODcyMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/5087210?v=4", "gravatar_id": "", "url": "https://api.github.com/users/futianfan", "html_url": "https://github.com/futianfan", "followers_url": "https://api.github.com/users/futianfan/followers", "following_url": "https://api.github.com/users/futianfan/following{/other_user}", "gists_url": "https://api.github.com/users/futianfan/gists{/gist_id}", "starred_url": "https://api.github.com/users/futianfan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/futianfan/subscriptions", "organizations_url": "https://api.github.com/users/futianfan/orgs", "repos_url": "https://api.github.com/users/futianfan/repos", "events_url": "https://api.github.com/users/futianfan/events{/privacy}", "received_events_url": "https://api.github.com/users/futianfan/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[]
1,615,045,819,000
1,615,047,206,000
1,615,047,206,000
NONE
null
I got the ImportError: cannot import name 'MoleculeDataset' from 'datasets'. Has anyone met similar issues? Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1997/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1997/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1995
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1995/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1995/comments
https://api.github.com/repos/huggingface/datasets/issues/1995/events
https://github.com/huggingface/datasets/pull/1995
822,878,431
MDExOlB1bGxSZXF1ZXN0NTg1NDI5NTg0
1,995
[Timit_asr] Make sure not only the first sample is used
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "cc @lhoestq @vrindaprabhu", "Failing `run (push)` is unrelated -> merging", "Thanks for fixing this, it was affecting my runs for https://github.com/huggingface/transformers/pull/10581/", "I am seeing this very late! Sorry for the blunder everyone! :(" ]
1,614,933,771,000
1,625,034,353,000
1,614,934,739,000
MEMBER
null
When playing around with timit I noticed that only the first sample is used for all indices. I corrected this typo so that the dataset is correctly loaded.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1995/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1995/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1995", "html_url": "https://github.com/huggingface/datasets/pull/1995", "diff_url": "https://github.com/huggingface/datasets/pull/1995.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1995.patch", "merged_at": 1614934739000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1993
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1993/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1993/comments
https://api.github.com/repos/huggingface/datasets/issues/1993/events
https://github.com/huggingface/datasets/issues/1993
822,758,387
MDU6SXNzdWU4MjI3NTgzODc=
1,993
How to load a dataset with load_from_disk and save it again after doing transformations without changing the original?
{ "login": "shamanez", "id": 16892570, "node_id": "MDQ6VXNlcjE2ODkyNTcw", "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shamanez", "html_url": "https://github.com/shamanez", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "organizations_url": "https://api.github.com/users/shamanez/orgs", "repos_url": "https://api.github.com/users/shamanez/repos", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "received_events_url": "https://api.github.com/users/shamanez/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! That looks like a bug, can you provide some code so that we can reproduce ?\r\nIt's not supposed to update the original dataset", "Hi, I experimented with RAG. \r\n\r\nActually, you can run the [use_own_knowldge_dataset.py](https://github.com/shamanez/transformers/blob/rag-end-to-end-retrieval/examples/research_projects/rag/use_own_knowledge_dataset.py#L80). In the 80 you can save the dataset object to the disk with save_to_disk. Then in order to compute the embeddings in this use **load_from_disk**. \r\n\r\nThen finally save it. You can see the original dataset object (CSV after splitting also will be changed)\r\n\r\nOne more thing- when I save the dataset object with **save_to_disk** it name the arrow file with cache.... rather than using dataset. arrow. Can you add a variable that we can feed a name to save_to_disk function?", "@lhoestq I also found that cache in tmp directory gets updated after transformations. This is really problematic when using datasets interactively. Let's say we use the shards function to a dataset loaded with csv, atm when we do transformations to shards and combine them it updates the original csv cache. ", "I plan to update the save_to_disk method in #2025 so I can make sure the new save_to_disk doesn't corrupt your cache files.\r\nBut from your last message it looks like save_to_disk isn't the root cause right ?", "ok, one more thing. When we use save_to_disk there are two files other than .arrow. dataset_info.json and state.json. Sometimes most of the fields in the dataset_infor.json are null, especially when saving dataset objects. Anyways I think load_from_disk uses the arrow files mentioned in state.json right? ", "> Anyways I think load_from_disk uses the arrow files mentioned in state.json right?\r\n\r\nYes exactly", "Perfect. For now, I am loading the dataset from CSV in my interactive process and will wait until you make the PR!" ]
1,614,921,950,000
1,616,385,950,000
1,616,385,950,000
NONE
null
I am using the latest datasets library. In my work, I first use **load_from_disk** to load a dataset that contains 3.8 GB of data. Then, during my training process, I update that dataset object, add new elements, and save it in a different place. When I save the dataset with **save_to_disk**, the original dataset that is already on disk also gets updated. I do not want to update it. How can I prevent this?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1993/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1993/timeline
null
null
null
false
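For context on the workflow discussed in the record above, a minimal sketch of loading a dataset from disk, transforming it, and saving the result to a separate directory; the paths, the "text" column, and the transformation are placeholder assumptions, not taken from the issue.

```python
from datasets import load_from_disk

# Load a dataset previously written with save_to_disk (placeholder path).
original = load_from_disk("/data/original_dataset")

# Hypothetical transformation: add a derived column; the "text" column is an assumption.
updated = original.map(lambda example: {"n_chars": len(example["text"])})

# Write the transformed dataset to a different directory so the original stays untouched.
updated.save_to_disk("/data/updated_dataset")
```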
https://api.github.com/repos/huggingface/datasets/issues/1991
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1991/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1991/comments
https://api.github.com/repos/huggingface/datasets/issues/1991/events
https://github.com/huggingface/datasets/pull/1991
822,554,473
MDExOlB1bGxSZXF1ZXN0NTg1MTYwNDkx
1,991
Adding the conllpp dataset
{ "login": "ZihanWangKi", "id": 21319243, "node_id": "MDQ6VXNlcjIxMzE5MjQz", "avatar_url": "https://avatars.githubusercontent.com/u/21319243?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZihanWangKi", "html_url": "https://github.com/ZihanWangKi", "followers_url": "https://api.github.com/users/ZihanWangKi/followers", "following_url": "https://api.github.com/users/ZihanWangKi/following{/other_user}", "gists_url": "https://api.github.com/users/ZihanWangKi/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZihanWangKi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZihanWangKi/subscriptions", "organizations_url": "https://api.github.com/users/ZihanWangKi/orgs", "repos_url": "https://api.github.com/users/ZihanWangKi/repos", "events_url": "https://api.github.com/users/ZihanWangKi/events{/privacy}", "received_events_url": "https://api.github.com/users/ZihanWangKi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for the reviews! A note that I have addressed the comments, and waiting for a further review." ]
1,614,896,383,000
1,615,977,459,000
1,615,977,459,000
CONTRIBUTOR
null
Adding the conllpp dataset. This is a revision of https://github.com/huggingface/datasets/pull/1910.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1991/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1991/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1991", "html_url": "https://github.com/huggingface/datasets/pull/1991", "diff_url": "https://github.com/huggingface/datasets/pull/1991.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1991.patch", "merged_at": 1615977459000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1990
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1990/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1990/comments
https://api.github.com/repos/huggingface/datasets/issues/1990/events
https://github.com/huggingface/datasets/issues/1990
822,384,502
MDU6SXNzdWU4MjIzODQ1MDI=
1,990
OSError: Memory mapping file failed: Cannot allocate memory
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Do you think this is trying to bring the dataset into memory and if I can avoid it to save on memory so it only brings a batch into memory? @lhoestq thank you", "It's not trying to bring the dataset into memory.\r\n\r\nActually, it's trying to memory map the dataset file, which is different. It allows to load large dataset files without filling up memory.\r\n\r\nWhat dataset did you use to get this error ?\r\nOn what OS are you running ? What's your python and pyarrow version ?", "Dear @lhoestq \r\nthank you so much for coming back to me. Please find info below:\r\n1) Dataset name: I used wikipedia with config 20200501.en\r\n2) I got these pyarrow in my environment:\r\npyarrow 2.0.0 <pip>\r\npyarrow 3.0.0 <pip>\r\n\r\n3) python version 3.7.10\r\n4) OS version \r\n\r\nlsb_release -a\r\nNo LSB modules are available.\r\nDistributor ID:\tDebian\r\nDescription:\tDebian GNU/Linux 10 (buster)\r\nRelease:\t10\r\nCodename:\tbuster\r\n\r\n\r\nIs there a way I could solve the memory issue and if I could run this model, I am using GeForce GTX 108, \r\nthanks \r\n", "I noticed that the error happens when loading the validation dataset.\r\nWhat value of `data_args.validation_split_percentage` did you use ?", "Dear @lhoestq \r\n\r\nthank you very much for the very sharp observation, indeed, this happens there, I use the default value of 5, I basically plan to subsample a part of the large dataset and choose it as validation set. Do you think this is bringing the data into memory during subsampling? Is there a way I could avoid this?\r\n\r\nThank you very much for the great help.\r\n\r\n\r\nOn Mon, Mar 8, 2021 at 11:28 AM Quentin Lhoest ***@***.***>\r\nwrote:\r\n\r\n> I noticed that the error happens when loading the validation dataset.\r\n> What value of data_args.validation_split_percentage did you use ?\r\n>\r\n> —\r\n> You are receiving this because you authored the thread.\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/1990#issuecomment-792655644>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AS37NMS337ZUJ7HGGVVCCR3TCSREFANCNFSM4YTYAQ2A>\r\n> .\r\n>\r\n", "Methods like `dataset.shard`, `dataset.train_test_split`, `dataset.select` etc. don't bring the dataset in memory. \r\nThe only time when samples are brought to memory is when you access elements via `dataset[0]`, `dataset[:10]`, `dataset[\"my_column_names\"]`.\r\n\r\nBut it's possible that trying to use those methods to build your validation set doesn't fix the issue since, if I understand correctly, the error happens when when the dataset arrow file is opened (just before the 5% percentage is applied).\r\n\r\nDid you try to reproduce this issue in a google colab ? This would be super helpful to investigate why this happened.\r\n\r\nAlso maybe you can try clearing your cache at `~/.cache/huggingface/datasets` and try again. If the arrow file was corrupted somehow, removing it and rebuilding may fix the issue." ]
1,614,882,118,000
1,628,100,265,000
1,628,100,265,000
NONE
null
Hi, I am trying to run a code with a wikipedia dataset, here is the command to reproduce the error. You can find the codes for run_mlm.py in huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py ``` python run_mlm.py --model_name_or_path bert-base-multilingual-cased --dataset_name wikipedia --dataset_config_name 20200501.en --do_train --do_eval --output_dir /dara/test --max_seq_length 128 ``` I am using transformer version: 4.3.2 But I got memory erorr using this dataset, is there a way I could save on memory with dataset library with wikipedia dataset? Specially I need to train a model with multiple of wikipedia datasets concatenated. thank you very much @lhoestq for your help and suggestions: ``` File "run_mlm.py", line 441, in <module> main() File "run_mlm.py", line 233, in main split=f"train[{data_args.validation_split_percentage}%:]", File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/load.py", line 750, in load_dataset ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory) File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 740, in as_dataset map_tuple=True, File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/utils/py_utils.py", line 225, in map_nested return function(data_struct) File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 757, in _build_single_dataset in_memory=in_memory, File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 829, in _as_dataset in_memory=in_memory, File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 215, in read return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory) File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 236, in read_files pa_table = self._read_files(files, in_memory=in_memory) File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 171, in _read_files pa_table: pa.Table = self._get_dataset_from_filename(f_dict, in_memory=in_memory) File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 302, in _get_dataset_from_filename pa_table = ArrowReader.read_table(filename, in_memory=in_memory) File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 322, in read_table stream = stream_from(filename) File "pyarrow/io.pxi", line 782, in pyarrow.lib.memory_map File "pyarrow/io.pxi", line 743, in pyarrow.lib.MemoryMappedFile._open File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status OSError: Memory mapping file failed: Cannot allocate memory ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1990/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1990/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1988
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1988/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1988/comments
https://api.github.com/repos/huggingface/datasets/issues/1988/events
https://github.com/huggingface/datasets/issues/1988
822,324,605
MDU6SXNzdWU4MjIzMjQ2MDU=
1,988
Readme.md is misleading about kinds of datasets?
{ "login": "surak", "id": 878399, "node_id": "MDQ6VXNlcjg3ODM5OQ==", "avatar_url": "https://avatars.githubusercontent.com/u/878399?v=4", "gravatar_id": "", "url": "https://api.github.com/users/surak", "html_url": "https://github.com/surak", "followers_url": "https://api.github.com/users/surak/followers", "following_url": "https://api.github.com/users/surak/following{/other_user}", "gists_url": "https://api.github.com/users/surak/gists{/gist_id}", "starred_url": "https://api.github.com/users/surak/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/surak/subscriptions", "organizations_url": "https://api.github.com/users/surak/orgs", "repos_url": "https://api.github.com/users/surak/repos", "events_url": "https://api.github.com/users/surak/events{/privacy}", "received_events_url": "https://api.github.com/users/surak/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Yes it's possible to use image data. There are already a few of them available (MNIST, CIFAR..)" ]
1,614,877,460,000
1,628,100,323,000
1,628,100,323,000
NONE
null
Hi! In the README.md, you say: "efficient data pre-processing: simple, fast and reproducible data pre-processing for the above public datasets as well as your own local datasets in CSV/JSON/text." But here: https://github.com/huggingface/datasets/blob/master/templates/new_dataset_script.py#L82-L117 you mention other kinds of datasets, with images and so on. I'm confused. Is it possible to use it to store, say, ImageNet locally?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1988/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1988/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1986
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1986/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1986/comments
https://api.github.com/repos/huggingface/datasets/issues/1986/events
https://github.com/huggingface/datasets/issues/1986
822,176,290
MDU6SXNzdWU4MjIxNzYyOTA=
1,986
wmt datasets fail to load
{ "login": "sabania", "id": 32322564, "node_id": "MDQ6VXNlcjMyMzIyNTY0", "avatar_url": "https://avatars.githubusercontent.com/u/32322564?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sabania", "html_url": "https://github.com/sabania", "followers_url": "https://api.github.com/users/sabania/followers", "following_url": "https://api.github.com/users/sabania/following{/other_user}", "gists_url": "https://api.github.com/users/sabania/gists{/gist_id}", "starred_url": "https://api.github.com/users/sabania/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sabania/subscriptions", "organizations_url": "https://api.github.com/users/sabania/orgs", "repos_url": "https://api.github.com/users/sabania/repos", "events_url": "https://api.github.com/users/sabania/events{/privacy}", "received_events_url": "https://api.github.com/users/sabania/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "caching issue, seems to work again.." ]
1,614,867,535,000
1,614,868,267,000
1,614,868,267,000
NONE
null
~\.cache\huggingface\modules\datasets_modules\datasets\wmt14\43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e\wmt_utils.py in _split_generators(self, dl_manager) 758 # Extract manually downloaded files. 759 manual_files = dl_manager.extract(manual_paths_dict) --> 760 extraction_map = dict(downloaded_files, **manual_files) 761 762 for language in self.config.language_pair: TypeError: type object argument after ** must be a mapping, not list
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1986/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1986/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1985
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1985/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1985/comments
https://api.github.com/repos/huggingface/datasets/issues/1985/events
https://github.com/huggingface/datasets/pull/1985
822,170,651
MDExOlB1bGxSZXF1ZXN0NTg0ODM4NjIw
1,985
Optimize int precision
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq, are the tests OK? Some other cases I missed? Do you agree with this approach?", "I just tested this and it works like a charm :) \r\n\r\nHowever tokenizing and then setting the format to \"torch\" to feed the tokens into a model doesn't seem to work anymore, since the pytorch tensors have the int32/int8 precisions instead of int64 that is required as model inputs.\r\n\r\nFor example:\r\n\r\n```python\r\nimport torch\r\nfrom datasets import Dataset\r\nfrom transformers import BertModel, BertTokenizer\r\n\r\ntorch.set_grad_enabled(False)\r\n\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\nmodel = BertModel.from_pretrained(\"bert-base-uncased\")\r\n\r\ndataset = Dataset.from_dict({\"text\": [\"hello there !\"]})\r\ndataset = dataset.map(tokenizer, input_columns=\"text\", remove_columns=dataset.column_names)\r\ndataset = dataset.with_format(\"torch\")\r\n\r\nprint(dataset.features)\r\n# {'attention_mask': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None),\r\n# 'input_ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), # this should be int32 though\r\n# 'token_type_ids': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None)}\r\n\r\nmodel(**dataset[:1])\r\n# RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.CharTensor instead (while checking arguments for embedding)\r\n\r\ndataset = dataset.with_format(\"torch\", dtype=torch.int64)\r\n\r\nmodel(**dataset[:1])\r\n# works as expected\r\n```\r\n\r\nPinging @sgugger here to make sure we take the right decision here.\r\n\r\nDo we want the \"torch\" format to always return int64 ? Or does it have to keep the precision defined by the `dataset.features` \r\n and therefore we would need to specify \"torch\" with `dtype=torch.int64` ?", "From a user perspective, I think it's fine if the \"torch\" format converts all ints types to `torch.int64` by default since it's what the model will need almost all the time. I don't see a case where you would want to keep the low precision at the top of my head, and one can always write a custom transform for an edge case.", "Sounds good to me !\r\nFor consistency maybe we should make the float precision fixed as well (float32, I guess)", "Yes, that would be the one used by default.", "Do we have the same requirements for TensorFlow?", "Yes I we should do the same for tensorflow as well since tf models would have the same issue\r\n\r\nThanks for adding this :)", "@lhoestq I think this PR is ready... :)" ]
1,614,867,143,000
1,616,414,680,000
1,615,887,840,000
MEMBER
null
Optimize int precision to reduce dataset file size. Close #1973, close #1825, close #861.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1985/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1985/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1985", "html_url": "https://github.com/huggingface/datasets/pull/1985", "diff_url": "https://github.com/huggingface/datasets/pull/1985.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1985.patch", "merged_at": 1615887840000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1982
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1982/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1982/comments
https://api.github.com/repos/huggingface/datasets/issues/1982/events
https://github.com/huggingface/datasets/pull/1982
821,448,791
MDExOlB1bGxSZXF1ZXN0NTg0MjM2NzQ0
1,982
Fix NestedDataStructure.data for empty dict
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I validated that this fixed the problem, thank you, @albertvillanova!\r\n", "still facing the same issue or similar:\r\nfrom datasets import load_dataset\r\nwtm14_test = load_dataset('wmt14',\"de-en\",cache_dir='./datasets')\r\n\r\n~\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\wmt14\\43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e\\wmt_utils.py in _split_generators(self, dl_manager)\r\n 758 # Extract manually downloaded files.\r\n 759 manual_files = dl_manager.extract(manual_paths_dict)\r\n--> 760 extraction_map = dict(downloaded_files, **manual_files)\r\n 761 \r\n 762 for language in self.config.language_pair:\r\n\r\nTypeError: type object argument after ** must be a mapping, not list", "Hi @sabania \r\nWe released a patch version that fixes this issue (1.4.1), can you try with the new version please ?\r\n```\r\npip install --upgrade datasets\r\n```", "I re-validated with the hotfix and the problem is no more.", "It's working. thanks a lot." ]
1,614,802,611,000
1,614,876,364,000
1,614,811,716,000
MEMBER
null
Fix #1981
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1982/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1982/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1982", "html_url": "https://github.com/huggingface/datasets/pull/1982", "diff_url": "https://github.com/huggingface/datasets/pull/1982.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1982.patch", "merged_at": 1614811716000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1981
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1981/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1981/comments
https://api.github.com/repos/huggingface/datasets/issues/1981/events
https://github.com/huggingface/datasets/issues/1981
821,411,109
MDU6SXNzdWU4MjE0MTExMDk=
1,981
wmt datasets fail to load
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "@stas00 Mea culpa... May I fix this tomorrow morning?", "yes, of course, I reverted to the version before that and it works ;)\r\n\r\nbut since a new release was just made you will probably need to make a hotfix.\r\n\r\nand add the wmt to the tests?", "Sure, I will implement a regression test!", "@stas00 it is fixed. @lhoestq are you releasing the hot fix or would you prefer me to do it?", "I'll do a patch release for this issue early tomorrow.\r\n\r\nAnd yes we absolutly need tests for the wmt datasets: The missing tests for wmt are an artifact from the early development of the lib but now we have tools to generate automatically the dummy data used for tests :)", "still facing the same issue or similar:\r\nfrom datasets import load_dataset\r\nwtm14_test = load_dataset('wmt14',\"de-en\",cache_dir='./datasets')\r\n\r\n~.cache\\huggingface\\modules\\datasets_modules\\datasets\\wmt14\\43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e\\wmt_utils.py in _split_generators(self, dl_manager)\r\n758 # Extract manually downloaded files.\r\n759 manual_files = dl_manager.extract(manual_paths_dict)\r\n--> 760 extraction_map = dict(downloaded_files, **manual_files)\r\n761\r\n762 for language in self.config.language_pair:\r\n\r\nTypeError: type object argument after ** must be a mapping, not list" ]
1,614,799,299,000
1,614,867,407,000
1,614,811,716,000
CONTRIBUTOR
null
on master: ``` python -c 'from datasets import load_dataset; load_dataset("wmt14", "de-en")' Downloading and preparing dataset wmt14/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/stas/.cache/huggingface/datasets/wmt14/de-en/1.0.0/43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e... Traceback (most recent call last): File "<string>", line 1, in <module> File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 740, in load_dataset builder_instance.download_and_prepare( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 578, in download_and_prepare self._download_and_prepare( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 634, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/stas/.cache/huggingface/modules/datasets_modules/datasets/wmt14/43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e/wmt_utils.py", line 760, in _split_generators extraction_map = dict(downloaded_files, **manual_files) ``` it worked fine recently. same problem if I try wmt16. git bisect points to this commit from Feb 25 as the culprit https://github.com/huggingface/datasets/commit/792f1d9bb1c5361908f73e2ef7f0181b2be409fa @albertvillanova
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1981/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 1, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1981/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1980
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1980/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1980/comments
https://api.github.com/repos/huggingface/datasets/issues/1980/events
https://github.com/huggingface/datasets/pull/1980
821,312,810
MDExOlB1bGxSZXF1ZXN0NTg0MTI1OTUy
1,980
Loading all answers from drop
{ "login": "KaijuML", "id": 25499439, "node_id": "MDQ6VXNlcjI1NDk5NDM5", "avatar_url": "https://avatars.githubusercontent.com/u/25499439?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KaijuML", "html_url": "https://github.com/KaijuML", "followers_url": "https://api.github.com/users/KaijuML/followers", "following_url": "https://api.github.com/users/KaijuML/following{/other_user}", "gists_url": "https://api.github.com/users/KaijuML/gists{/gist_id}", "starred_url": "https://api.github.com/users/KaijuML/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KaijuML/subscriptions", "organizations_url": "https://api.github.com/users/KaijuML/orgs", "repos_url": "https://api.github.com/users/KaijuML/repos", "events_url": "https://api.github.com/users/KaijuML/events{/privacy}", "received_events_url": "https://api.github.com/users/KaijuML/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Nice thanks for the change !\r\nThis looks all good to me\r\n\r\nBefore we merge can you just update the dataset_infos.json file of drop ? You can do it by running\r\n```\r\ndatasets-cli test ./datasets/drop --all_configs --save_infos --ignore_verifications\r\n```", "Done!" ]
1,614,791,587,000
1,615,807,646,000
1,615,807,646,000
CONTRIBUTOR
null
Hello all, I propose this change to the DROP loading script so that all answers are loaded, no matter their type. Currently, only "span" answers are loaded, which excludes a significant number of answers from DROP (i.e. "number" and "date"). I updated the script with the version I use for my work. However, I couldn't find a way to verify that everything works when integrated with the datasets repo, since the `load_dataset` method seems to always download the script from GitHub and not local files. Note that 9 items from the train set have no answers, as well as 1 from the validation set. The script I propose simply does not load them. Let me know if there is anything else I can do, Clément
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1980/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1980/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1980", "html_url": "https://github.com/huggingface/datasets/pull/1980", "diff_url": "https://github.com/huggingface/datasets/pull/1980.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1980.patch", "merged_at": 1615807646000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1979
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1979/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1979/comments
https://api.github.com/repos/huggingface/datasets/issues/1979/events
https://github.com/huggingface/datasets/pull/1979
820,977,853
MDExOlB1bGxSZXF1ZXN0NTgzODQ3MTk3
1,979
Add article_id and process test set template for semeval 2020 task 11…
{ "login": "hemildesai", "id": 8195444, "node_id": "MDQ6VXNlcjgxOTU0NDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/8195444?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hemildesai", "html_url": "https://github.com/hemildesai", "followers_url": "https://api.github.com/users/hemildesai/followers", "following_url": "https://api.github.com/users/hemildesai/following{/other_user}", "gists_url": "https://api.github.com/users/hemildesai/gists{/gist_id}", "starred_url": "https://api.github.com/users/hemildesai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hemildesai/subscriptions", "organizations_url": "https://api.github.com/users/hemildesai/orgs", "repos_url": "https://api.github.com/users/hemildesai/repos", "events_url": "https://api.github.com/users/hemildesai/events{/privacy}", "received_events_url": "https://api.github.com/users/hemildesai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks !\r\nNow to fix the CI the only thing left is to add a dummy `test-task-tc-template.out` file inside the `dummy_data.zip` at `./datasets/sem_eval_2020_task_11/dummy/1.1.0`\r\nIt must contain the labels template for each dummy article of the test set included in `dummy_data.zip`\r\n\r\nAfter that we should be good to merge this one :)", "@lhoestq Made the changes! The failure now seems to be unrelated to the changes. Any idea what's going on?", "This is a bug on master that we're investigating. You can ignore it" ]
1,614,767,672,000
1,615,633,180,000
1,615,554,650,000
CONTRIBUTOR
null
… dataset - `article_id` is needed to create the submission file for the task at https://propaganda.qcri.org/semeval2020-task11/ - The `technique classification` task provides the span indices in a template for the test set that is necessary to complete the task. This PR implements processing of that template for the dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1979/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1979/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1979", "html_url": "https://github.com/huggingface/datasets/pull/1979", "diff_url": "https://github.com/huggingface/datasets/pull/1979.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1979.patch", "merged_at": 1615554650000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1978
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1978/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1978/comments
https://api.github.com/repos/huggingface/datasets/issues/1978/events
https://github.com/huggingface/datasets/pull/1978
820,956,806
MDExOlB1bGxSZXF1ZXN0NTgzODI5Njgz
1,978
Adding ro sts dataset
{ "login": "lorinczb", "id": 36982089, "node_id": "MDQ6VXNlcjM2OTgyMDg5", "avatar_url": "https://avatars.githubusercontent.com/u/36982089?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lorinczb", "html_url": "https://github.com/lorinczb", "followers_url": "https://api.github.com/users/lorinczb/followers", "following_url": "https://api.github.com/users/lorinczb/following{/other_user}", "gists_url": "https://api.github.com/users/lorinczb/gists{/gist_id}", "starred_url": "https://api.github.com/users/lorinczb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lorinczb/subscriptions", "organizations_url": "https://api.github.com/users/lorinczb/orgs", "repos_url": "https://api.github.com/users/lorinczb/repos", "events_url": "https://api.github.com/users/lorinczb/events{/privacy}", "received_events_url": "https://api.github.com/users/lorinczb/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq thank you very much for the quick review and useful comments! \r\n\r\nI have tried to address them all, and a few comments that you left for ro_sts I have applied to the ro_sts_parallel as well (in read-me: fixed source_datasets, links to homepage, repository, leaderboard, thanks to me message, in ro_sts_parallel.py changed to camel case as well). In the ro_sts_parallel I have changed the order on the languages, also in the example, as you said order doesn't matter, but just to have them listed in the readme in the same order.\r\n\r\nI have commented above on why we would like to keep them as separate datasets, hope it makes sense.\r\n\r\nIf there is anything else I should change please let me know.\r\n\r\nThanks again!", "@lhoestq I tried to adjust the ro_sts_parallel, locally when I run the tests they are passing, but somewhere it has the old name of rosts-parallel-ro-en which I am trying to change to ro_sts_parallel. I don't think I have left anything related to rosts-parallel-ro-en, but when the dataset_infos.json is regenerated it adds it. Could you please help me out, how can I fix this? Thanks in advance!", "Great, thanks for all your help! " ]
1,614,766,133,000
1,614,938,414,000
1,614,936,835,000
CONTRIBUTOR
null
Adding [RO-STS](https://github.com/dumitrescustefan/RO-STS) dataset
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1978/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1978/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1978", "html_url": "https://github.com/huggingface/datasets/pull/1978", "diff_url": "https://github.com/huggingface/datasets/pull/1978.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1978.patch", "merged_at": 1614936835000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1976
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1976/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1976/comments
https://api.github.com/repos/huggingface/datasets/issues/1976/events
https://github.com/huggingface/datasets/pull/1976
820,228,538
MDExOlB1bGxSZXF1ZXN0NTgzMjA3NDI4
1,976
Add datasets full offline mode with HF_DATASETS_OFFLINE
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,614,706,019,000
1,614,786,331,000
1,614,786,330,000
MEMBER
null
Add the HF_DATASETS_OFFLINE environment variable for users who want to use `datasets` offline without having to wait for the network timeouts/retries to happen. This was requested in https://github.com/huggingface/datasets/issues/1939 cc @stas00
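A minimal usage sketch of the environment variable described above, assuming the datasets you need are already in the local cache; the variable has to be set before `datasets` is imported so that it is picked up at configuration time.

```python
# Hedged sketch: force full offline mode before importing `datasets`.
import os

os.environ["HF_DATASETS_OFFLINE"] = "1"  # must be set before the import below

from datasets import load_dataset

# Served from the local cache; with offline mode on, no network timeouts/retries occur.
squad = load_dataset("squad", split="train")
```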
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1976/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1976/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1976", "html_url": "https://github.com/huggingface/datasets/pull/1976", "diff_url": "https://github.com/huggingface/datasets/pull/1976.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1976.patch", "merged_at": 1614786330000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1975
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1975/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1975/comments
https://api.github.com/repos/huggingface/datasets/issues/1975/events
https://github.com/huggingface/datasets/pull/1975
820,205,485
MDExOlB1bGxSZXF1ZXN0NTgzMTg4NjM3
1,975
Fix flake8
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,614,704,353,000
1,614,854,602,000
1,614,854,602,000
MEMBER
null
Fix flake8 style.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1975/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1975/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1975", "html_url": "https://github.com/huggingface/datasets/pull/1975", "diff_url": "https://github.com/huggingface/datasets/pull/1975.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1975.patch", "merged_at": 1614854602000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1974
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1974/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1974/comments
https://api.github.com/repos/huggingface/datasets/issues/1974/events
https://github.com/huggingface/datasets/pull/1974
820,122,223
MDExOlB1bGxSZXF1ZXN0NTgzMTE5MDI0
1,974
feat(docs): navigate with left/right arrow keys
{ "login": "ydcjeff", "id": 32727188, "node_id": "MDQ6VXNlcjMyNzI3MTg4", "avatar_url": "https://avatars.githubusercontent.com/u/32727188?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydcjeff", "html_url": "https://github.com/ydcjeff", "followers_url": "https://api.github.com/users/ydcjeff/followers", "following_url": "https://api.github.com/users/ydcjeff/following{/other_user}", "gists_url": "https://api.github.com/users/ydcjeff/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydcjeff/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydcjeff/subscriptions", "organizations_url": "https://api.github.com/users/ydcjeff/orgs", "repos_url": "https://api.github.com/users/ydcjeff/repos", "events_url": "https://api.github.com/users/ydcjeff/events{/privacy}", "received_events_url": "https://api.github.com/users/ydcjeff/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,614,698,690,000
1,614,854,652,000
1,614,854,568,000
CONTRIBUTOR
null
Enables docs navigation with the left/right arrow keys. It can be useful for those who navigate with the keyboard a lot. More info: https://github.com/sphinx-doc/sphinx/pull/2064. You can try it here: https://29353-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1974/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1974/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1974", "html_url": "https://github.com/huggingface/datasets/pull/1974", "diff_url": "https://github.com/huggingface/datasets/pull/1974.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1974.patch", "merged_at": 1614854568000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1973
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1973/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1973/comments
https://api.github.com/repos/huggingface/datasets/issues/1973/events
https://github.com/huggingface/datasets/issues/1973
820,077,312
MDU6SXNzdWU4MjAwNzczMTI=
1,973
Question: what gets stored in the datasets cache and why is it so huge?
{ "login": "ioana-blue", "id": 17202292, "node_id": "MDQ6VXNlcjE3MjAyMjky", "avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ioana-blue", "html_url": "https://github.com/ioana-blue", "followers_url": "https://api.github.com/users/ioana-blue/followers", "following_url": "https://api.github.com/users/ioana-blue/following{/other_user}", "gists_url": "https://api.github.com/users/ioana-blue/gists{/gist_id}", "starred_url": "https://api.github.com/users/ioana-blue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ioana-blue/subscriptions", "organizations_url": "https://api.github.com/users/ioana-blue/orgs", "repos_url": "https://api.github.com/users/ioana-blue/repos", "events_url": "https://api.github.com/users/ioana-blue/events{/privacy}", "received_events_url": "https://api.github.com/users/ioana-blue/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Echo'ing this observation: I have a few datasets in the neighborhood of 2GB CSVs uncompressed, and when I use something like `Dataset.save_to_disk()` it's ~18GB on disk.\r\n\r\nIf this is unexpected behavior, would be happy to help run debugging as needed.", "Thanks @ioana-blue for pointing out this problem (and thanks also @justin-yan). You are right that current implementation of the datasets caching files take too much memory. We are definitely changing this and optimizing the defaults, so that the file sizes are considerably reduced. I will come back to you as soon as this is fixed.", "Thank you! Also I noticed that the files don't seem to be cleaned after the jobs finish. Last night I had only 3 jobs running, but the cache was still at 180GB. ", "And to clarify, it's not memory, it's disk space. Thank you!", "Hi ! As Albert said they can sometimes take more space that expected but we'll fix that soon.\r\n\r\nAlso, to give more details about caching: computations on a dataset are cached by default so that you don't have to recompute them the next time you run them.\r\n\r\nSo by default the cache files stay on your disk when you job is finished (so that if you re-execute it, it will be reloaded from the cache).\r\nFeel free to clear your cache after your job has finished, or disable caching using\r\n```python\r\nimport datasets\r\n\r\ndatasets.set_caching_enabled(False)\r\n```", "Thanks for the tip, this is useful. ", "Hi @ioana-blue, we have optimized Datasets' disk usage in the latest release v1.5.\r\n\r\nFeel free to update your Datasets version\r\n```shell\r\npip install -U datasets\r\n```\r\nand see if it better suits your needs.", "Thank you!" ]
1,614,695,753,000
1,617,113,039,000
1,615,887,840,000
NONE
null
I'm running several training jobs (around 10) with a relatively large dataset (3M samples). The datasets cache reached 178G, which seems really large. What is stored in there and why is it so large? I don't think I noticed this problem before, and it seems to be related to the new version of the datasets library. Any insight? Thank you!
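For readers hitting the same disk-usage question, here is a small sketch of the knobs mentioned in the replies, plus a cleanup helper; treat the exact availability of these functions as version-dependent.

```python
# Hedged sketch: disable caching of transforms and clean up leftover cache files.
import datasets

datasets.set_caching_enabled(False)  # .map()/.filter() results won't be written to disk

dset = datasets.load_dataset("imdb", split="train")
dset = dset.map(lambda ex: {"text_len": len(ex["text"])})

removed = dset.cleanup_cache_files()  # delete unused cache files in this dataset's cache dir
print(f"Removed {removed} cache file(s)")
```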
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1973/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1973/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1971
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1971/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1971/comments
https://api.github.com/repos/huggingface/datasets/issues/1971/events
https://github.com/huggingface/datasets/pull/1971
819,714,231
MDExOlB1bGxSZXF1ZXN0NTgyNzgyNTU0
1,971
Fix ArrowWriter closes stream at exit
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Oh nice thanks for adding the context manager ! All the streams and RecordBatchWriter will be properly closed now. Hopefully this gives a better experience on windows on which it's super important to close stuff.\r\n\r\nNot sure about the error, it looks like a process crashed silently.\r\nLet me take a look", "> Hopefully this gives a better experience on windows on which it's super important to close stuff.\r\n\r\nExactly! On Windows, you got:\r\n> PermissionError: [WinError 32] The process cannot access the file because it is being used by another process\r\n\r\nwhen trying to access the unclosed `stream` file, e.g. by `with incomplete_dir(self._cache_dir) as tmp_data_dir`: `shutil.rmtree(tmp_dir)`\r\n\r\nThe reason is: https://docs.python.org/3/library/os.html#os.remove\r\n\r\n> On Windows, attempting to remove a file that is in use causes an exception to be raised; on Unix, the directory entry is removed but the storage allocated to the file is not made available until the original file is no longer in use.\r\n\r\n\r\n", "The test passes on my windows. This was probably a circleCI issue. I re-ran the circleCI tests", "NICE! It passed!", "Maybe you can merge master into this branch and check the CI before merging ?", "@lhoestq done! ;)", "Thanks ! merging" ]
1,614,669,154,000
1,615,394,217,000
1,615,394,217,000
MEMBER
null
The current implementation of ArrowWriter does not properly release the `stream` resource (by closing it) if its `finalize()` method is not called and/or an exception is raised before/during the call to `finalize()`. Therefore, ArrowWriter should be used as a context manager that properly closes its `stream` resource at exit.
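A simplified illustration of the pattern this PR adopts; it assumes nothing about the real ArrowWriter API beyond what is described above and only shows why wrapping the writer in a context manager guarantees the stream is released.

```python
# Illustrative sketch only -- not the actual ArrowWriter implementation.
class Writer:
    def __init__(self, path):
        self.stream = open(path, "wb")  # resource that must always be released

    def write(self, data: bytes):
        self.stream.write(data)

    def finalize(self):
        self.stream.flush()

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        if not self.stream.closed:
            self.stream.close()  # released even if finalize() was never reached

with Writer("dummy.arrow") as writer:
    writer.write(b"some bytes")
    writer.finalize()
```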
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1971/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1971/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1971", "html_url": "https://github.com/huggingface/datasets/pull/1971", "diff_url": "https://github.com/huggingface/datasets/pull/1971.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1971.patch", "merged_at": 1615394216000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1970
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1970/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1970/comments
https://api.github.com/repos/huggingface/datasets/issues/1970/events
https://github.com/huggingface/datasets/pull/1970
819,500,620
MDExOlB1bGxSZXF1ZXN0NTgyNjAzMzEw
1,970
Fixing the URL filtering for bad MLSUM examples in GEM
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,614,648,178,000
1,614,655,146,000
1,614,650,493,000
MEMBER
null
This updates the code and metadata to use the updated `gem_mlsum_bad_ids_fixed.json` file provided by @juand-r cc @sebastianGehrmann
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1970/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1970/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1970", "html_url": "https://github.com/huggingface/datasets/pull/1970", "diff_url": "https://github.com/huggingface/datasets/pull/1970.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1970.patch", "merged_at": 1614650493000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1967
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1967/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1967/comments
https://api.github.com/repos/huggingface/datasets/issues/1967/events
https://github.com/huggingface/datasets/pull/1967
819,129,568
MDExOlB1bGxSZXF1ZXN0NTgyMjc5OTEx
1,967
Add Turkish News Category Dataset - 270K - Lite Version
{ "login": "yavuzKomecoglu", "id": 5150963, "node_id": "MDQ6VXNlcjUxNTA5NjM=", "avatar_url": "https://avatars.githubusercontent.com/u/5150963?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yavuzKomecoglu", "html_url": "https://github.com/yavuzKomecoglu", "followers_url": "https://api.github.com/users/yavuzKomecoglu/followers", "following_url": "https://api.github.com/users/yavuzKomecoglu/following{/other_user}", "gists_url": "https://api.github.com/users/yavuzKomecoglu/gists{/gist_id}", "starred_url": "https://api.github.com/users/yavuzKomecoglu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yavuzKomecoglu/subscriptions", "organizations_url": "https://api.github.com/users/yavuzKomecoglu/orgs", "repos_url": "https://api.github.com/users/yavuzKomecoglu/repos", "events_url": "https://api.github.com/users/yavuzKomecoglu/events{/privacy}", "received_events_url": "https://api.github.com/users/yavuzKomecoglu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for the change, merging now !" ]
1,614,622,919,000
1,614,705,900,000
1,614,705,900,000
CONTRIBUTOR
null
This PR adds the Turkish News Categories Dataset (270K - Lite Version), a text classification dataset by me, @basakbuluz and @serdarakyol. This dataset contains the same news as the current [interpress_news_category_tr dataset](https://huggingface.co/datasets/interpress_news_category_tr), but it carries less information, its OCR errors are reduced, it can be easily separated, and its samples were rearranged into 10 classes ("kültürsanat", "ekonomi", "siyaset", "eğitim", "dünya", "spor", "teknoloji", "magazin", "sağlık", "gündem").
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1967/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1967/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1967", "html_url": "https://github.com/huggingface/datasets/pull/1967", "diff_url": "https://github.com/huggingface/datasets/pull/1967.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1967.patch", "merged_at": 1614705900000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1966
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1966/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1966/comments
https://api.github.com/repos/huggingface/datasets/issues/1966/events
https://github.com/huggingface/datasets/pull/1966
819,101,253
MDExOlB1bGxSZXF1ZXN0NTgyMjU2MzE0
1,966
Fix metrics collision in separate multiprocessed experiments
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Since the failure was originally intermittent, there is no 100% telling that the problem is gone. \r\nBut if my artificial race condition setup https://github.com/huggingface/datasets/issues/1942#issuecomment-787124529 is to be the litmus test then the problem has been fixed, as with this PR branch that particular race condition is taken care of correctly.\r\n\r\nThank you for taking care of this, @lhoestq - locking can be very tricky to do right!" ]
1,614,620,718,000
1,614,690,345,000
1,614,690,344,000
MEMBER
null
As noticed in #1942, there's an issue with locks if you run multiple separate evaluation experiments in a multiprocessed setup. Indeed, there is a time span in Metric._finalize() where process 0 loses its lock before re-acquiring it. This is bad since the lock of process 0 tells the other processes that the corresponding cache file is available for writing/reading/deleting: we end up with one metric cache that collides with another one. This can raise FileNotFound errors when a metric tries to read a cache file that the second, conflicting metric has already deleted. To fix that, I made sure that the lock file of process 0 stays acquired from the cache file creation to the end of the metric computation. This way the other metrics can simply sample a new hashing name in order to avoid the collision. Finally, I added missing tests for separate experiments in a distributed setup.
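The following is a hedged, stand-alone sketch of the locking idea described above (it is not the library's code): the winning process keeps its lock for the whole computation, so a concurrent experiment that fails to acquire it simply samples another hashed cache name. The helper and file names are hypothetical.

```python
# Hypothetical sketch of "hold the lock, or pick a new cache name on collision".
from filelock import FileLock, Timeout

def acquire_metric_cache(base_path: str, max_attempts: int = 100):
    for attempt in range(max_attempts):
        candidate = f"{base_path}-{attempt}.arrow"
        lock = FileLock(candidate + ".lock")
        try:
            lock.acquire(timeout=0)   # non-blocking: a held lock means the file is in use
            return candidate, lock    # caller keeps the lock until the computation ends
        except Timeout:
            continue                  # collision with another experiment -> try a new name
    raise RuntimeError("Could not find a free metric cache file name")
```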
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1966/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1966/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1966", "html_url": "https://github.com/huggingface/datasets/pull/1966", "diff_url": "https://github.com/huggingface/datasets/pull/1966.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1966.patch", "merged_at": 1614690344000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1965
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1965/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1965/comments
https://api.github.com/repos/huggingface/datasets/issues/1965/events
https://github.com/huggingface/datasets/issues/1965
818,833,460
MDU6SXNzdWU4MTg4MzM0NjA=
1,965
Can we parallelized the add_faiss_index process over dataset shards ?
{ "login": "shamanez", "id": 16892570, "node_id": "MDQ6VXNlcjE2ODkyNTcw", "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shamanez", "html_url": "https://github.com/shamanez", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "organizations_url": "https://api.github.com/users/shamanez/orgs", "repos_url": "https://api.github.com/users/shamanez/repos", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "received_events_url": "https://api.github.com/users/shamanez/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi !\r\nAs far as I know not all faiss indexes can be computed in parallel and then merged. \r\nFor example [here](https://github.com/facebookresearch/faiss/wiki/Special-operations-on-indexes#splitting-and-merging-indexes) is is mentioned that only IndexIVF indexes can be merged.\r\nMoreover faiss already works using multithreading to parallelize the workload over your different CPU cores. You can find more info [here](https://github.com/facebookresearch/faiss/wiki/Threads-and-asynchronous-calls#internal-threading)\r\nSo I feel like the gains we would get by implementing a parallel `add_faiss_index` would not be that important, but let me know what you think.\r\n", "Actually, you are right. I also had the same idea. I am trying this in the context of end-ton-end retrieval training in RAG. So far I have parallelized the embedding re-computation within the training loop by using datasets shards. \r\n\r\nThen I was thinking of can I calculate the indexes for each shard and combined them with **concatenate** before I save.", "@lhoestq As you mentioned faiss is already using multiprocessing. I tried to do the add_index with faiss for a dataset object inside a RAY actor and the process became very slow... if fact it takes so much time. It is because a ray actor comes with a single CPU core unless we assign it more. I also tried assigning more cores but still running add_index in the main process is very fast. " ]
1,614,602,854,000
1,614,886,856,000
1,614,886,842,000
NONE
null
I am thinking of making the **add_faiss_index** process faster. What if we run the add_faiss_index process on separate dataset shards and then combine them (with dataset.concatenate) before saving the faiss.index file? I feel that, theoretically, this will reduce retrieval accuracy, since it affects the indexing process. @lhoestq
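To make the proposal concrete, here is a toy sketch (random embeddings, hypothetical sizes) of building one FAISS index per dataset shard; as the replies point out, faiss already multithreads internally and only some index types (e.g. IndexIVF) can be merged afterwards, so this is illustrative rather than a recommended approach.

```python
# Toy sketch of per-shard indexing; the "embeddings" column is random stand-in data.
import numpy as np
from datasets import Dataset

dset = Dataset.from_dict({"text": [f"doc {i}" for i in range(1000)]})
dset = dset.map(lambda ex: {"embeddings": np.random.rand(32).astype("float32")})

num_shards = 4
shards = [dset.shard(num_shards=num_shards, index=i) for i in range(num_shards)]
for shard in shards:
    shard.add_faiss_index(column="embeddings")  # one independent FAISS index per shard
```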
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1965/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1965/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1962
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1962/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1962/comments
https://api.github.com/repos/huggingface/datasets/issues/1962/events
https://github.com/huggingface/datasets/pull/1962
818,089,156
MDExOlB1bGxSZXF1ZXN0NTgxNDQwNzM4
1,962
Fix unused arguments
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq Re-added the arg. The ConnectionError in CI seems unrelated to this PR (the same test fails on master as well).", "Thanks !\r\nI'm re-running the CI, maybe this was an issue with circleCI", "Looks all good now, merged :)" ]
1,614,480,427,000
1,615,429,097,000
1,614,789,470,000
CONTRIBUTOR
null
I noticed that some args in the codebase are not used, so I used Pylance to find all such occurrences and fix them.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1962/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1962/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1962", "html_url": "https://github.com/huggingface/datasets/pull/1962", "diff_url": "https://github.com/huggingface/datasets/pull/1962.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1962.patch", "merged_at": 1614789470000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1961
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1961/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1961/comments
https://api.github.com/repos/huggingface/datasets/issues/1961/events
https://github.com/huggingface/datasets/pull/1961
818,077,947
MDExOlB1bGxSZXF1ZXN0NTgxNDM3NDI0
1,961
Add sst dataset
{ "login": "patpizio", "id": 15801338, "node_id": "MDQ6VXNlcjE1ODAxMzM4", "avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patpizio", "html_url": "https://github.com/patpizio", "followers_url": "https://api.github.com/users/patpizio/followers", "following_url": "https://api.github.com/users/patpizio/following{/other_user}", "gists_url": "https://api.github.com/users/patpizio/gists{/gist_id}", "starred_url": "https://api.github.com/users/patpizio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patpizio/subscriptions", "organizations_url": "https://api.github.com/users/patpizio/orgs", "repos_url": "https://api.github.com/users/patpizio/repos", "events_url": "https://api.github.com/users/patpizio/events{/privacy}", "received_events_url": "https://api.github.com/users/patpizio/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,614,478,109,000
1,614,854,333,000
1,614,854,333,000
CONTRIBUTOR
null
Related to #1934: Add the Stanford Sentiment Treebank dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1961/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1961/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1961", "html_url": "https://github.com/huggingface/datasets/pull/1961", "diff_url": "https://github.com/huggingface/datasets/pull/1961.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1961.patch", "merged_at": 1614854333000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1960
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1960/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1960/comments
https://api.github.com/repos/huggingface/datasets/issues/1960/events
https://github.com/huggingface/datasets/pull/1960
818,073,154
MDExOlB1bGxSZXF1ZXN0NTgxNDMzOTY4
1,960
Allow stateful function in dataset.map
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq Added a test. If you can come up with a better stateful callable, I'm all ears 😄. ", "Sorry I said earlier that it was good to have it inside the loop, my mistake !", "@lhoestq Okay, did some refactoring and now the \"cache\" part comes before the for loop. Thanks for the guidance.\r\n\r\nThink this is ready for the final review." ]
1,614,475,745,000
1,616,513,209,000
1,616,513,209,000
CONTRIBUTOR
null
Removes the "test type" section in Dataset.map, which would modify the state of the stateful function. Now, the return type of the map function is inferred after processing the first example. Fixes #1940. @lhoestq I'm not very happy with the usage of `nonlocal`; I would like to hear your opinion on this.
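A minimal, self-contained sketch of the idea (it is not Dataset.map itself): rather than probing the function on a throwaway "test" example, which mutates stateful callables, the output format is inferred lazily from the first real example, which is where `nonlocal` comes in.

```python
# Simplified stand-in for the approach described above.
def map_examples(function, examples):
    update_data = None  # decided after the first real output

    def apply(example):
        nonlocal update_data
        out = function(example)
        if update_data is None:
            update_data = isinstance(out, dict)  # does the function return new columns?
        return out if update_data else example

    return [apply(example) for example in examples]

# Example: a stateful callable whose internal counter is no longer perturbed
# by a separate "test type" call before the real loop.
class Counter:
    def __init__(self):
        self.n = 0
    def __call__(self, example):
        self.n += 1
        return {"idx": self.n, **example}

print(map_examples(Counter(), [{"x": 1}, {"x": 2}]))
```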
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1960/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1960/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1960", "html_url": "https://github.com/huggingface/datasets/pull/1960", "diff_url": "https://github.com/huggingface/datasets/pull/1960.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1960.patch", "merged_at": 1616513209000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1959
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1959/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1959/comments
https://api.github.com/repos/huggingface/datasets/issues/1959/events
https://github.com/huggingface/datasets/issues/1959
818,055,644
MDU6SXNzdWU4MTgwNTU2NDQ=
1,959
Bug in skip_rows argument of load_dataset function ?
{ "login": "LedaguenelArthur", "id": 73159756, "node_id": "MDQ6VXNlcjczMTU5NzU2", "avatar_url": "https://avatars.githubusercontent.com/u/73159756?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LedaguenelArthur", "html_url": "https://github.com/LedaguenelArthur", "followers_url": "https://api.github.com/users/LedaguenelArthur/followers", "following_url": "https://api.github.com/users/LedaguenelArthur/following{/other_user}", "gists_url": "https://api.github.com/users/LedaguenelArthur/gists{/gist_id}", "starred_url": "https://api.github.com/users/LedaguenelArthur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LedaguenelArthur/subscriptions", "organizations_url": "https://api.github.com/users/LedaguenelArthur/orgs", "repos_url": "https://api.github.com/users/LedaguenelArthur/repos", "events_url": "https://api.github.com/users/LedaguenelArthur/events{/privacy}", "received_events_url": "https://api.github.com/users/LedaguenelArthur/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\ntry `skiprows` instead. This part is not properly documented in the docs it seems.\r\n\r\n@lhoestq I'll fix this as part of a bigger PR that fixes typos in the docs." ]
1,614,468,774,000
1,615,285,292,000
1,615,285,292,000
NONE
null
Hello everyone, I'm quite new to Git, so sorry in advance if I'm breaking some ground rules of issue posting... :/ I tried to use the load_dataset function, from the Huggingface datasets library, on a csv file, using the skip_rows argument described on the Huggingface page to skip the first row containing column names: `test_dataset = load_dataset('csv', data_files=['test_wLabel.tsv'], delimiter='\t', column_names=["id", "sentence", "label"], skip_rows=1)` But I got the following error message: `__init__() got an unexpected keyword argument 'skip_rows'` Have I used the wrong argument? Am I missing something or is this a bug? Thank you very much for your time, Best regards, Arthur
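Following the reply above, the accepted keyword appears to be `skiprows` (no underscore between "skip" and "rows"); this is a hedged sketch reusing the file path from the report, and the exact pass-through behaviour of CSV reader options may vary between library versions.

```python
# Hedged sketch: use `skiprows` instead of `skip_rows` to drop the header row.
from datasets import load_dataset

test_dataset = load_dataset(
    "csv",
    data_files=["test_wLabel.tsv"],            # path taken from the report above
    delimiter="\t",
    column_names=["id", "sentence", "label"],
    skiprows=1,                                # skip the first row (column names)
)
```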
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1959/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1959/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1958
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1958/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1958/comments
https://api.github.com/repos/huggingface/datasets/issues/1958/events
https://github.com/huggingface/datasets/issues/1958
818,037,548
MDU6SXNzdWU4MTgwMzc1NDg=
1,958
XSum dataset download link broken
{ "login": "himat", "id": 1156974, "node_id": "MDQ6VXNlcjExNTY5NzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1156974?v=4", "gravatar_id": "", "url": "https://api.github.com/users/himat", "html_url": "https://github.com/himat", "followers_url": "https://api.github.com/users/himat/followers", "following_url": "https://api.github.com/users/himat/following{/other_user}", "gists_url": "https://api.github.com/users/himat/gists{/gist_id}", "starred_url": "https://api.github.com/users/himat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/himat/subscriptions", "organizations_url": "https://api.github.com/users/himat/orgs", "repos_url": "https://api.github.com/users/himat/repos", "events_url": "https://api.github.com/users/himat/events{/privacy}", "received_events_url": "https://api.github.com/users/himat/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Never mind, I ran it again and it worked this time. Strange." ]
1,614,462,476,000
1,614,462,616,000
1,614,462,616,000
NONE
null
I did ``` from datasets import load_dataset dataset = load_dataset("xsum") ``` This returns `ConnectionError: Couldn't reach http://bollin.inf.ed.ac.uk/public/direct/XSUM-EMNLP18-Summary-Data-Original.tar.gz`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1958/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1958/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1956
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1956/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1956/comments
https://api.github.com/repos/huggingface/datasets/issues/1956/events
https://github.com/huggingface/datasets/issues/1956
818,013,741
MDU6SXNzdWU4MTgwMTM3NDE=
1,956
[distributed env] potentially unsafe parallel execution
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "You can pass the same `experiment_id` for all the metrics of the same group, and use another `experiment_id` for the other groups.\r\nMaybe we can add an environment variable that sets the default value for `experiment_id` ? What do you think ?", "Ah, you're absolutely correct, @lhoestq - it's exactly the equivalent of the shared secret. Thank you!" ]
1,614,458,325,000
1,614,619,482,000
1,614,619,482,000
CONTRIBUTOR
null
``` metric = load_metric('glue', 'mrpc', num_process=num_process, process_id=rank) ``` presumes that there is only one set of parallel processes running - and it will intermittently fail if you have multiple sets running, as they will surely overwrite each other. Similar to https://github.com/huggingface/datasets/issues/1942 (but for a different reason). That's why distributed environments use an identifier unique to each group, so that each group is dealt with separately; e.g. the env-var way of pytorch distributed syncing is done with a `MASTER_ADDRESS+MASTER_PORT` pair unique to each set. So ideally this interface should ask for a shared secret to do the right thing. I'm not reporting an immediate need, but am only flagging that this will hit someone down the road. This problem can be remedied by adding a new optional `shared_secret` option, which can then be used to differentiate different groups of processes, and this secret should be part of the file lock name and the experiment. Thank you
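As suggested in the reply above, the existing `experiment_id` argument can play the role of the shared secret; a hedged sketch, with placeholder values for the distributed setup:

```python
# Hedged sketch: one distinct experiment_id per group of processes.
from datasets import load_metric

num_process, rank = 2, 0                 # placeholders for this experiment's world size / rank
metric = load_metric(
    "glue", "mrpc",
    num_process=num_process,
    process_id=rank,
    experiment_id="experiment-group-A",  # hypothetical identifier, unique per group
)
```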
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1956/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1956/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1955
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1955/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1955/comments
https://api.github.com/repos/huggingface/datasets/issues/1955/events
https://github.com/huggingface/datasets/pull/1955
818,010,664
MDExOlB1bGxSZXF1ZXN0NTgxMzk2OTA5
1,955
typos + grammar
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,614,457,303,000
1,614,619,238,000
1,614,609,799,000
CONTRIBUTOR
null
This PR proposes a few typo + grammar fixes, and rewrites some sentences in an attempt to improve readability. N.B. When referring to the library `datasets` in the docs it is typically used as a singular, and it definitely is a singular when written as "`datasets` library", that is "`datasets` library is ..." and not "are ...".
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1955/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1955/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1955", "html_url": "https://github.com/huggingface/datasets/pull/1955", "diff_url": "https://github.com/huggingface/datasets/pull/1955.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1955.patch", "merged_at": 1614609799000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1954
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1954/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1954/comments
https://api.github.com/repos/huggingface/datasets/issues/1954/events
https://github.com/huggingface/datasets/issues/1954
817,565,563
MDU6SXNzdWU4MTc1NjU1NjM=
1,954
add a new column
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi\r\nnot sure how change the lable after creation, but this is an issue not dataset request. thanks ", "Hi ! Currently you have to use `map` . You can see an example of how to do it in this comment: https://github.com/huggingface/datasets/issues/853#issuecomment-727872188\r\n\r\nIn the future we'll add support for a more native way of adding a new column ;)" ]
1,614,363,447,000
1,619,707,843,000
1,619,707,843,000
NONE
null
Hi, I'd need to add a new column to the dataset; I was wondering how this can be done? Thanks @lhoestq
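A short sketch of the `.map()` workaround pointed to in the replies, using a toy dataset and hypothetical label values:

```python
# Hedged sketch: add a "label" column via .map() with indices.
from datasets import Dataset

dset = Dataset.from_dict({"text": ["a", "b", "c"]})
new_labels = [0, 1, 0]  # hypothetical values, one per row

dset = dset.map(lambda example, idx: {"label": new_labels[idx]}, with_indices=True)
print(dset.column_names)  # ['text', 'label']
```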
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1954/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1954/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1953
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1953/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1953/comments
https://api.github.com/repos/huggingface/datasets/issues/1953/events
https://github.com/huggingface/datasets/pull/1953
817,498,869
MDExOlB1bGxSZXF1ZXN0NTgwOTgyMDMz
1,953
Documentation for to_csv, to_pandas and to_dict
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,614,357,349,000
1,614,607,428,000
1,614,607,427,000
MEMBER
null
I added these methods to the documentation with a small paragraph. I also fixed some formatting issues in the docstrings
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1953/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1953/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1953", "html_url": "https://github.com/huggingface/datasets/pull/1953", "diff_url": "https://github.com/huggingface/datasets/pull/1953.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1953.patch", "merged_at": 1614607427000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1952
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1952/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1952/comments
https://api.github.com/repos/huggingface/datasets/issues/1952/events
https://github.com/huggingface/datasets/pull/1952
817,428,160
MDExOlB1bGxSZXF1ZXN0NTgwOTIyNjQw
1,952
Handle timeouts
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I never said the calls were hanging indefinitely, what we need is quite different - in the firewalled env with a network, there should be no network calls or they should fail instantly.\r\n\r\nTo make this work I suppose on top of this PR we need:\r\n1. `DATASETS_OFFLINE` env var to force set timeout to 0 globally (or to 0.0001 if 0 has a special meaning of no timeout)\r\n2. `DATASETS_OFFLINE` should guard against failing network calls and not fail the program if it has all the data it needs locally.\r\n\r\nBottom line - if the logic wants to check online if the local file matches online dataset name, let it go wild, but it should fail instantly, recover and use the local file - if one is specified explicitly or cache if there is one. And only if neither was found only then assert.\r\n\r\nI hope this makes sense and is doable.\r\n\r\nI have started on the same approach for transformers https://github.com/huggingface/transformers/pull/10407\r\n\r\nThank you, @lhoestq ", "Yes that was the first step to add DATASETS_OFFLINE :)\r\n\r\nWith this PR, if a request times out (which couldn't happen before because no time out was set), it falls back on the local files with no error.\r\n\r\nAs you said, setting the timeout to something like 1e-16 makes the requests fail instantly, which is one step forward. One last thing left is to disable request retries and everything will be instant !", "Ah, fantastic. Thank you for elucidating that this PR is part of a bigger master plan! ", "Merging this one, then I'll open a new PR for the `DATASETS_OFFLINE` env var :)" ]
1,614,351,727,000
1,614,608,964,000
1,614,608,964,000
MEMBER
null
As noticed in https://github.com/huggingface/datasets/issues/1939, timeouts were not properly handled when loading a dataset. This caused the connection to hang indefinitely when working in a firewalled environment (cc @stas00). I added a default timeout, and included an option to our offline environment for tests to be able to simulate both connection errors and timeout errors (previously it was simulating connection errors only). Now network calls don't hang indefinitely. The default timeout is set to 10sec (we might reduce it).
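An illustrative stand-alone sketch (not the library's actual implementation) of the pattern this PR describes: bound every HTTP call with a timeout and fall back to the locally cached file when the network is unreachable or too slow. Function and file names are hypothetical.

```python
# Hypothetical sketch of "timeout, then fall back to the local cache".
import requests

def fetch_or_fallback(url: str, cached_path: str, timeout: float = 10.0) -> str:
    try:
        resp = requests.get(url, timeout=timeout)  # fail fast instead of hanging
        resp.raise_for_status()
        return resp.text
    except (requests.ConnectionError, requests.Timeout):
        with open(cached_path, encoding="utf-8") as f:  # reuse the local copy
            return f.read()
```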
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1952/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1952/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1952", "html_url": "https://github.com/huggingface/datasets/pull/1952", "diff_url": "https://github.com/huggingface/datasets/pull/1952.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1952.patch", "merged_at": 1614608964000 }
true
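A minimal sketch of the fallback pattern discussed in the record above: probe the remote file with a short timeout and fall back to the local cache when the call times out or cannot connect. The function name and cache-path argument are illustrative assumptions, not the actual `datasets` internals.

```python
import requests

def fetch_or_use_cache(url: str, cached_path: str, timeout: float = 10.0) -> str:
    """Try a quick online check; on timeout or connection error, reuse the cached file."""
    try:
        # A short timeout makes firewalled environments fail fast instead of hanging.
        response = requests.head(url, timeout=timeout, allow_redirects=True)
        response.raise_for_status()
        return url  # remote file reachable; the caller may refresh its copy
    except (requests.exceptions.Timeout, requests.exceptions.ConnectionError):
        return cached_path  # offline or firewalled: fall back to the local copy
```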
https://api.github.com/repos/huggingface/datasets/issues/1951
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1951/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1951/comments
https://api.github.com/repos/huggingface/datasets/issues/1951/events
https://github.com/huggingface/datasets/pull/1951
817,423,573
MDExOlB1bGxSZXF1ZXN0NTgwOTE4ODE2
1,951
Add cross-platform support for datasets-cli
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@mariosasko This is kinda cool! " ]
1,614,351,385,000
1,615,429,106,000
1,614,353,426,000
CONTRIBUTOR
null
One thing I've noticed while going through the codebase is the usage of `scripts` in `setup.py`. This [answer](https://stackoverflow.com/a/28119736/14095927) on SO explains nicely why it's better to use `entry_points` instead of `scripts`. To add cross-platform support to the CLI, this PR replaces `scripts` with `entry_points` in `setup.py` and moves datasets-cli to src/datasets/commands/datasets_cli.py. All *.md and *.rst files are updated accordingly. The same changes were made in the transformers repo to add cross-platform support ([link to PR](https://github.com/huggingface/transformers/pull/4131)).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1951/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1951/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1951", "html_url": "https://github.com/huggingface/datasets/pull/1951", "diff_url": "https://github.com/huggingface/datasets/pull/1951.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1951.patch", "merged_at": 1614353426000 }
true
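For reference, the `entry_points` pattern described in the PR above looks roughly like this in a `setup.py`. The module path matches the one mentioned in the PR description, but the `main` function name is an assumption.

```python
from setuptools import setup

setup(
    name="datasets",
    # ... other arguments ...
    # Instead of scripts=["datasets-cli"], declare a console entry point so that
    # setuptools generates a proper executable (including a .exe shim on Windows).
    entry_points={
        "console_scripts": [
            "datasets-cli=datasets.commands.datasets_cli:main",
        ]
    },
)
```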
https://api.github.com/repos/huggingface/datasets/issues/1950
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1950/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1950/comments
https://api.github.com/repos/huggingface/datasets/issues/1950/events
https://github.com/huggingface/datasets/pull/1950
817,295,235
MDExOlB1bGxSZXF1ZXN0NTgwODExMjMz
1,950
updated multi_nli dataset with missing fields
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,614,340,476,000
1,614,596,910,000
1,614,596,909,000
CONTRIBUTOR
null
1) updated fields which were missing earlier 2) added tags to README 3) updated a few fields of README 4) new dataset_infos.json and dummy files
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1950/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1950/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1950", "html_url": "https://github.com/huggingface/datasets/pull/1950", "diff_url": "https://github.com/huggingface/datasets/pull/1950.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1950.patch", "merged_at": 1614596909000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1947
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1947/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1947/comments
https://api.github.com/repos/huggingface/datasets/issues/1947/events
https://github.com/huggingface/datasets/pull/1947
816,590,299
MDExOlB1bGxSZXF1ZXN0NTgwMjI2MDk5
1,947
Update documentation with not in place transforms and update DatasetDict
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,614,270,198,000
1,614,609,414,000
1,614,609,413,000
MEMBER
null
The not-in-place transforms `flatten`, `remove_columns`, `rename_column` and `cast` were added in #1883. I added them to the documentation, along with a paragraph on how to use them. You can preview the documentation [here](https://28862-250213286-gh.circle-artifacts.com/0/docs/_build/html/processing.html#renaming-removing-casting-and-flattening-columns). I also added these methods to the DatasetDict class.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1947/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1947/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1947", "html_url": "https://github.com/huggingface/datasets/pull/1947", "diff_url": "https://github.com/huggingface/datasets/pull/1947.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1947.patch", "merged_at": 1614609413000 }
true
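A short sketch of the not-in-place transforms documented in the PR above, applied to a toy dataset; the column names here are made up for illustration.

```python
from datasets import Dataset, Value

# A toy dataset standing in for a real one (column names are hypothetical).
ds = Dataset.from_dict({"sentence": ["good", "bad"], "unused_id": [0, 1], "label": [1, 0]})

ds = ds.rename_column("sentence", "text")   # returns a new dataset, the original is unchanged
ds = ds.remove_columns(["unused_id"])       # drops columns without mutating in place
ds = ds.flatten()                           # flattens nested (struct) columns, a no-op here

new_features = ds.features.copy()
new_features["label"] = Value("float32")
ds = ds.cast(new_features)                  # casts to a new Features schema
print(ds.features)
```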
https://api.github.com/repos/huggingface/datasets/issues/1946
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1946/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1946/comments
https://api.github.com/repos/huggingface/datasets/issues/1946/events
https://github.com/huggingface/datasets/pull/1946
816,526,294
MDExOlB1bGxSZXF1ZXN0NTgwMTcyNzI2
1,946
Implement Dataset from CSV
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq question about public API: `keep_in_memory` or just `in_memory`?", "For consistence I'd say `keep_in_memory`, but no strong opinion.", "@lhoestq done!" ]
1,614,265,813,000
1,615,542,168,000
1,615,542,168,000
MEMBER
null
Implement `Dataset.from_csv`, analogous to #1943. If in the end the loading scripts should be used instead, at least we can reuse the tests here.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1946/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1946/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1946", "html_url": "https://github.com/huggingface/datasets/pull/1946", "diff_url": "https://github.com/huggingface/datasets/pull/1946.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1946.patch", "merged_at": 1615542168000 }
true
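A minimal usage sketch of the method implemented in the PR above. The CSV is created on the fly so the snippet is self-contained; the file name and columns are assumptions.

```python
import csv
from datasets import Dataset

# Write a tiny CSV so the example is self-contained (file name is arbitrary).
with open("train.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["text", "label"])
    writer.writerow(["hello world", 0])
    writer.writerow(["goodbye world", 1])

# Build a Dataset directly from the CSV; keep_in_memory avoids writing a cache file.
ds = Dataset.from_csv("train.csv", keep_in_memory=True)
print(ds.column_names, ds.num_rows)
```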
https://api.github.com/repos/huggingface/datasets/issues/1945
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1945/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1945/comments
https://api.github.com/repos/huggingface/datasets/issues/1945/events
https://github.com/huggingface/datasets/issues/1945
816,421,966
MDU6SXNzdWU4MTY0MjE5NjY=
1,945
AttributeError: 'DatasetDict' object has no attribute 'concatenate_datasets'
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "sorry my mistake, datasets were overwritten closing now, thanks a lot" ]
1,614,258,585,000
1,614,259,235,000
1,614,259,226,000
NONE
null
Hi I am trying to concatenate a list of huggingface datastes as: ` train_dataset = datasets.concatenate_datasets(train_datasets) ` Here is the `train_datasets` when I print: ``` [Dataset({ features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'], num_rows: 120361 }), Dataset({ features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'], num_rows: 2670 }), Dataset({ features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'], num_rows: 6944 }), Dataset({ features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'], num_rows: 38140 }), Dataset({ features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'], num_rows: 173711 }), Dataset({ features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'], num_rows: 1655 }), Dataset({ features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'], num_rows: 4274 }), Dataset({ features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'], num_rows: 2019 }), Dataset({ features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'], num_rows: 2109 }), Dataset({ features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'], num_rows: 11963 })] ``` I am getting the following error: `AttributeError: 'DatasetDict' object has no attribute 'concatenate_datasets' ` I was wondering if you could help me with this issue, thanks a lot
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1945/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1945/timeline
null
null
null
false
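For reference, the function that the issue above was looking for lives at the module level, not on `DatasetDict`. A minimal sketch with two toy datasets standing in for the tokenized splits listed in the report:

```python
from datasets import Dataset, concatenate_datasets

# Two toy datasets with identical features; in the report above these would be
# the entries of `train_datasets`.
part_a = Dataset.from_dict({"label": [0, 1], "input_ids": [[1, 2], [3, 4]]})
part_b = Dataset.from_dict({"label": [1], "input_ids": [[5, 6]]})

# `concatenate_datasets` is a module-level function (also reachable as
# `datasets.concatenate_datasets`), not a method of DatasetDict.
train_dataset = concatenate_datasets([part_a, part_b])
print(train_dataset.num_rows)  # 3
```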
https://api.github.com/repos/huggingface/datasets/issues/1944
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1944/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1944/comments
https://api.github.com/repos/huggingface/datasets/issues/1944/events
https://github.com/huggingface/datasets/pull/1944
816,267,216
MDExOlB1bGxSZXF1ZXN0NTc5OTU2Nzc3
1,944
Add Turkish News Category Dataset (270K - Lite Version)
{ "login": "yavuzKomecoglu", "id": 5150963, "node_id": "MDQ6VXNlcjUxNTA5NjM=", "avatar_url": "https://avatars.githubusercontent.com/u/5150963?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yavuzKomecoglu", "html_url": "https://github.com/yavuzKomecoglu", "followers_url": "https://api.github.com/users/yavuzKomecoglu/followers", "following_url": "https://api.github.com/users/yavuzKomecoglu/following{/other_user}", "gists_url": "https://api.github.com/users/yavuzKomecoglu/gists{/gist_id}", "starred_url": "https://api.github.com/users/yavuzKomecoglu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yavuzKomecoglu/subscriptions", "organizations_url": "https://api.github.com/users/yavuzKomecoglu/orgs", "repos_url": "https://api.github.com/users/yavuzKomecoglu/repos", "events_url": "https://api.github.com/users/yavuzKomecoglu/events{/privacy}", "received_events_url": "https://api.github.com/users/yavuzKomecoglu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I updated your suggestions. Thank you very much for your support. @lhoestq ", "> Thanks for changing to ClassLabel :)\r\n> This is all good now !\r\n> \r\n> However I can see changes in other files than the ones for interpress_news_category_tr_lite, can you please fix that ?\r\n> To do so you can create another branch and another PR to only include the interpress_news_category_tr_lite files.\r\n> \r\n> Maybe this happened because of a git rebase ? Once you've already pushed your code, please use git merge instead of rebase in order to avoid this.\r\n\r\nThanks for the feedback.\r\nNew PR https://github.com/huggingface/datasets/pull/1967" ]
1,614,246,322,000
1,614,707,201,000
1,614,623,001,000
CONTRIBUTOR
null
This PR adds the Turkish News Category Dataset (270K - Lite Version), a text classification dataset by me, @basakbuluz and @serdarakyol. It contains the same news as the current [interpress_news_category_tr dataset](https://huggingface.co/datasets/interpress_news_category_tr) but with less information; OCR errors are reduced, the text can be easily separated, and the news were rearranged into 10 classes ("kültürsanat", "ekonomi", "siyaset", "eğitim", "dünya", "spor", "teknoloji", "magazin", "sağlık", "gündem"). @SBrandeis @lhoestq, can you please review this PR?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1944/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1944/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1944", "html_url": "https://github.com/huggingface/datasets/pull/1944", "diff_url": "https://github.com/huggingface/datasets/pull/1944.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1944.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1943
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1943/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1943/comments
https://api.github.com/repos/huggingface/datasets/issues/1943/events
https://github.com/huggingface/datasets/pull/1943
816,160,453
MDExOlB1bGxSZXF1ZXN0NTc5ODY5NTk0
1,943
Implement Dataset from JSON and JSON Lines
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks @lhoestq. I was trying to follow @thomwolf suggestion about integrating that script but as `from_json` method...\r\n> Note that I don't think this is necessary a breaking change, we can still keep the old scripts around\r\n\r\nDo you think there is a better way of doing it?\r\n\r\nI was trying to implement more or less the same logic as in the script, but I confess I assumed the target was in-memory only...", "Basically, I was trying to reimplement `Json(datasets.ArrowBasedBuilder)._generate_tables`, and no writing to arrow file (I assumed only in-memory usage). I started with the first \"else\" clause... \r\n\r\nI was planning to remove my `_cast_table_to_info_features` and use `paj.read_json(parse_options=...)` instead (like in the script).", "@lhoestq I am wondering why `keep_in_memory` has no effect for JSON...", "What's the issue exactly ? Apparently it's correctly passed to as_dataset so I don't find the issue", "Nevermind @lhoestq, I found where the problem was in my code... I push!", "<s>merging master into this branch should fix the CI issue :)</s>\r\n\r\nOops I didn't refresh the page sorry ^^'\r\n\r\nLooks all good !", "Good job ! I think we can merge after the last changes regarding the error message and the docstring above :)", "@lhoestq Done! And I have also added some tests for the `field` parameter.", "Let me add some more tests for dict of lists JSON file, please.", "@lhoestq done! ;)", "We can merge. Additional work will be done in another PR. ;)" ]
1,614,237,453,000
1,616,060,528,000
1,616,060,528,000
MEMBER
null
Implement `Dataset.from_jsonl`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1943/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1943/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1943", "html_url": "https://github.com/huggingface/datasets/pull/1943", "diff_url": "https://github.com/huggingface/datasets/pull/1943.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1943.patch", "merged_at": 1616060528000 }
true
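A usage sketch of what the PR above implements (the public method ended up as `Dataset.from_json`). The file is written on the fly so the snippet runs as-is; the `field` value in the comment is an assumption about the JSON layout.

```python
import json
from datasets import Dataset

# Write a tiny JSON Lines file so the example is self-contained (file name is arbitrary).
with open("data.jsonl", "w") as f:
    for row in [{"text": "hello", "label": 0}, {"text": "bye", "label": 1}]:
        f.write(json.dumps(row) + "\n")

ds = Dataset.from_json("data.jsonl")
print(ds.num_rows)

# For a regular JSON file whose records sit under a top-level key, e.g. {"data": [...]},
# the `field` argument discussed in this PR selects that key:
# ds = Dataset.from_json("data.json", field="data")
```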
https://api.github.com/repos/huggingface/datasets/issues/1941
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1941/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1941/comments
https://api.github.com/repos/huggingface/datasets/issues/1941/events
https://github.com/huggingface/datasets/issues/1941
815,985,167
MDU6SXNzdWU4MTU5ODUxNjc=
1,941
Loading of FAISS index fails for index_name = 'exact'
{ "login": "mkserge", "id": 2992022, "node_id": "MDQ6VXNlcjI5OTIwMjI=", "avatar_url": "https://avatars.githubusercontent.com/u/2992022?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mkserge", "html_url": "https://github.com/mkserge", "followers_url": "https://api.github.com/users/mkserge/followers", "following_url": "https://api.github.com/users/mkserge/following{/other_user}", "gists_url": "https://api.github.com/users/mkserge/gists{/gist_id}", "starred_url": "https://api.github.com/users/mkserge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mkserge/subscriptions", "organizations_url": "https://api.github.com/users/mkserge/orgs", "repos_url": "https://api.github.com/users/mkserge/repos", "events_url": "https://api.github.com/users/mkserge/events{/privacy}", "received_events_url": "https://api.github.com/users/mkserge/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting ! I'm taking a look", "Index training was missing, I fixed it here: https://github.com/huggingface/datasets/commit/f5986c46323583989f6ed1dabaf267854424a521\r\n\r\nCan you try again please ?", "Works great 👍 I just put a minor comment on the commit, I think you meant to pass the `train_size` from the one obtained from the config.\r\n\r\nThanks for a quick response!" ]
1,614,216,654,000
1,614,263,326,000
1,614,263,326,000
CONTRIBUTOR
null
Hi, It looks like loading of FAISS index now fails when using index_name = 'exact'. For example, from the RAG [model card](https://huggingface.co/facebook/rag-token-nq?fbclid=IwAR3bTfhls5U_t9DqsX2Vzb7NhtRHxJxfQ-uwFT7VuCPMZUM2AdAlKF_qkI8#usage). Running `transformers==4.3.2` and datasets installed from source on latest `master` branch. ```bash (venv) sergey_mkrtchyan datasets (master) $ python Python 3.8.6 (v3.8.6:db455296be, Sep 23 2020, 13:31:39) [Clang 6.0 (clang-600.0.57)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration >>> tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq") >>> retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True) Using custom data configuration dummy.psgs_w100.nq.no_index-dummy=True,with_index=False Reusing dataset wiki_dpr (/Users/sergey_mkrtchyan/.cache/huggingface/datasets/wiki_dpr/dummy.psgs_w100.nq.no_index-dummy=True,with_index=False/0.0.0/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb) Using custom data configuration dummy.psgs_w100.nq.exact-50b6cda57ff32ab4 Reusing dataset wiki_dpr (/Users/sergey_mkrtchyan/.cache/huggingface/datasets/wiki_dpr/dummy.psgs_w100.nq.exact-50b6cda57ff32ab4/0.0.0/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb) 0%| | 0/10 [00:00<?, ?it/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py", line 425, in from_pretrained return cls( File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py", line 387, in __init__ self.init_retrieval() File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py", line 458, in init_retrieval self.index.init_index() File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py", line 284, in init_index self.dataset = load_dataset( File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/load.py", line 750, in load_dataset ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory) File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/builder.py", line 734, in as_dataset datasets = utils.map_nested( File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/utils/py_utils.py", line 195, in map_nested return function(data_struct) File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/builder.py", line 769, in _build_single_dataset post_processed = self._post_process(ds, resources_paths) File "/Users/sergey_mkrtchyan/.cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb/wiki_dpr.py", line 205, in _post_process dataset.add_faiss_index("embeddings", custom_index=index) File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/arrow_dataset.py", line 2516, in add_faiss_index super().add_faiss_index( File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/search.py", line 416, in add_faiss_index faiss_index.add_vectors(self, column=column, train_size=train_size, faiss_verbose=faiss_verbose) File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/search.py", line 281, in add_vectors self.faiss_index.add(vecs) File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/faiss/__init__.py", line 104, in replacement_add self.add_c(n, swig_ptr(x)) File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/faiss/swigfaiss.py", line 3263, in add return _swigfaiss.IndexHNSW_add(self, n, x) RuntimeError: Error in virtual void faiss::IndexHNSW::add(faiss::Index::idx_t, const float *) at /Users/runner/work/faiss-wheels/faiss-wheels/faiss/faiss/IndexHNSW.cpp:356: Error: 'is_trained' failed >>> ``` The issue seems to be related to the scalar quantization in faiss added in this commit: 8c5220307c33f00e01c3bf7b8. Reverting it fixes the issue.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1941/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1941/timeline
null
null
null
false
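A sketch of the pattern behind the fix referenced above: a quantized FAISS index must be trained before vectors are added, otherwise FAISS raises the "'is_trained' failed" error from the traceback. The dimensions, index parameters and random data are illustrative assumptions.

```python
import numpy as np
import faiss

dim = 32
vectors = np.random.rand(1000, dim).astype("float32")

# An HNSW index with 8-bit scalar quantization, similar in spirit to the wiki_dpr index.
index = faiss.IndexHNSWSQ(dim, faiss.ScalarQuantizer.QT_8bit, 32)
assert not index.is_trained

index.train(vectors)   # the training step that was missing in the post-processing code
index.add(vectors)     # now add() succeeds instead of failing on 'is_trained'
print(index.ntotal)
```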
https://api.github.com/repos/huggingface/datasets/issues/1940
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1940/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1940/comments
https://api.github.com/repos/huggingface/datasets/issues/1940/events
https://github.com/huggingface/datasets/issues/1940
815,770,012
MDU6SXNzdWU4MTU3NzAwMTI=
1,940
Side effect when filtering data due to `does_function_return_dict` call in `Dataset.map()`
{ "login": "francisco-perez-sorrosal", "id": 918006, "node_id": "MDQ6VXNlcjkxODAwNg==", "avatar_url": "https://avatars.githubusercontent.com/u/918006?v=4", "gravatar_id": "", "url": "https://api.github.com/users/francisco-perez-sorrosal", "html_url": "https://github.com/francisco-perez-sorrosal", "followers_url": "https://api.github.com/users/francisco-perez-sorrosal/followers", "following_url": "https://api.github.com/users/francisco-perez-sorrosal/following{/other_user}", "gists_url": "https://api.github.com/users/francisco-perez-sorrosal/gists{/gist_id}", "starred_url": "https://api.github.com/users/francisco-perez-sorrosal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/francisco-perez-sorrosal/subscriptions", "organizations_url": "https://api.github.com/users/francisco-perez-sorrosal/orgs", "repos_url": "https://api.github.com/users/francisco-perez-sorrosal/repos", "events_url": "https://api.github.com/users/francisco-perez-sorrosal/events{/privacy}", "received_events_url": "https://api.github.com/users/francisco-perez-sorrosal/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Thanks for the report !\r\n\r\nCurrently we don't have a way to let the user easily disable this behavior.\r\nHowever I agree that we should support stateful processing functions, ideally by removing `does_function_return_dict`.\r\n\r\nWe needed this function in order to know whether the `map` functions needs to write data or not. if `does_function_return_dict` returns False then we don't write anything.\r\n\r\nInstead of checking the output of the processing function outside of the for loop that iterates through the dataset to process it, we can check the output of the first processed example and at that point decide if we need to write data or not.\r\n\r\nTherefore it's definitely possible to fix this unwanted behavior, any contribution going into this direction is welcome :)", "Thanks @mariosasko for the PR!" ]
1,614,194,336,000
1,616,513,209,000
1,616,513,209,000
CONTRIBUTOR
null
Hi there! In my codebase I have a function to filter rows in a dataset, selecting only a certain number of examples per class. The function passes an extra argument to maintain a counter of the number of dataset rows/examples already selected per class, which are the ones I want to keep in the end: ```python def fill_train_examples_per_class(example, per_class_limit: int, counter: collections.Counter): label = int(example['label']) current_counter = counter.get(label, 0) if current_counter < per_class_limit: counter[label] = current_counter + 1 return True return False ``` At some point I invoke it through the `Dataset.filter()` method in the `arrow_dataset.py` module like this: ```python ... kwargs = {"per_class_limit": train_examples_per_class_limit, "counter": Counter()} datasets['train'] = datasets['train'].filter(fill_train_examples_per_class, num_proc=1, fn_kwargs=kwargs) ... ``` The problem is that passing a stateful container (the counter) provokes a side effect in the new filtered dataset obtained. This is due to the fact that at some point in `filter()`, the `map()` helper `does_function_return_dict` is invoked in line [1290](https://github.com/huggingface/datasets/blob/96578adface7e4bc1f3e8bafbac920d72ca1ca60/src/datasets/arrow_dataset.py#L1290). When this occurs, the state of the counter is modified by the effects of the function call on the 1 or 2 rows selected in lines 1288 and 1289 of the same file (which are marked as `test_inputs` & `test_indices` respectively). This happens out of the control of the user (who, for example, can't reset the state of the counter before continuing the execution), provoking in the end an undesired side effect in the results obtained. In my case, the resulting dataset (although the counter results are OK) lacks an instance of classes 0 and 1 (which happen to be the classes of the first two examples of my dataset). The rest of the classes in my dataset contain the right number of examples, as they were not affected by the `does_function_return_dict` call. I've debugged my code extensively and made a workaround myself hardcoding the necessary stuff (basically putting `update_data=True` in line 1290), and then I obtain the results I expected without the side effect. Is there a way to avoid that call to `does_function_return_dict` in map()'s line 1290? (e.g. extracting the required information that `does_function_return_dict` returns without making the testing calls to the user function on dataset rows 0 & 1) Thanks in advance, Francisco Perez-Sorrosal
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1940/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1940/timeline
null
null
null
false
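Until stateful processing functions are fully supported, a side-effect-free sketch of the same per-class capping is to compute the indices to keep first and then `select()` them, so the dry-run call inside `map`/`filter` never touches the counter. The column name, limit and toy data are assumptions.

```python
import collections
from datasets import Dataset

ds = Dataset.from_dict({"label": [0, 0, 0, 1, 1, 2], "text": ["a", "b", "c", "d", "e", "f"]})

per_class_limit = 2
counter = collections.Counter()
keep_indices = []
# The counter lives outside of `filter`, so the test call on the first rows
# cannot corrupt its state.
for i, label in enumerate(ds["label"]):
    if counter[label] < per_class_limit:
        counter[label] += 1
        keep_indices.append(i)

capped = ds.select(keep_indices)
print(capped["label"])  # [0, 0, 1, 1, 2]
```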
https://api.github.com/repos/huggingface/datasets/issues/1939
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1939/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1939/comments
https://api.github.com/repos/huggingface/datasets/issues/1939/events
https://github.com/huggingface/datasets/issues/1939
815,680,510
MDU6SXNzdWU4MTU2ODA1MTA=
1,939
[firewalled env] OFFLINE mode
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting and for all the details and suggestions.\r\n\r\nI'm totally in favor of having a HF_DATASETS_OFFLINE env variable to disable manually all the connection checks, remove retries etc.\r\n\r\nMoreover you may know that the use case that you are mentioning is already supported from `datasets` 1.3.0, i.e. you already can:\r\n- first load datasets and metrics from an instance with internet connection\r\n- then be able to reload datasets and metrics from another instance without connection (as long as the filesystem is shared)\r\n\r\nThis is already implemented, but currently it only works if the requests return a `ConnectionError` (or any error actually). Not sure why it would hang instead of returning an error.\r\n\r\nMaybe this is just a issue with the timeout value being not set or too high ?\r\nIs there a way I can have access to one of the instances on which there's this issue (we can discuss this offline) ?\r\n", "I'm on master, so using all the available bells and whistles already.\r\n\r\nIf you look at the common issues - it for example tries to look up files if they appear in `_PACKAGED_DATASETS_MODULES` which it shouldn't do.\r\n\r\n--------------\r\n\r\nYes, there is a nuance to it. As I mentioned it's firewalled - that is it has a network but making any calls outside - it just hangs in:\r\n\r\n```\r\nsin_addr=inet_addr(\"xx.xx.xx.xx\")}, [28->16]) = 0\r\nclose(5) = 0\r\nsocket(AF_INET, SOCK_STREAM|SOCK_CLOEXEC, IPPROTO_TCP) = 5\r\nconnect(5, {sa_family=AF_INET, sin_port=htons(3128), sin_addr=inet_addr(\"yy.yy.yy.yy\")}, 16^C) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)\r\n```\r\nuntil it times out.\r\n\r\nThat's why we need to be able to tell the software that there is no network to rely on even if there is one (good for testing too).\r\n\r\nSo what I'm thinking is that this is a simple matter of pre-ambling any network call wrappers with:\r\n\r\n```\r\nif HF_DATASETS_OFFLINE:\r\n assert \"Attempting to make a network call under Offline mode\"\r\n```\r\n\r\nand then fixing up if there is anything else to fix to make it work.\r\n\r\n--------------\r\n\r\nOtherwise I think the only other problem I encountered is that we need to find a way to pre-cache metrics, for some reason it's not caching it and wanting to fetch it from online.\r\n\r\nWhich is extra strange since it already has those files in the `datasets` repo itself that is on the filesystem.\r\n\r\nThe workaround I had to do is to copy `rouge/rouge.py` (with the parent folder) from the datasets repo to the current dir - and then it proceeded.", "Ok understand better the hanging issue.\r\nI guess catching connection errors is not enough, we should also avoid all the hangings.\r\nCurrently the offline mode tests are only done by simulating an instant connection fail that returns an error, let's have another connection mock that hangs instead.\r\n\r\nI'll also take a look at why you had to do this for `rouge`.\r\n", "FWIW, I think instant failure on the behalf of a network call is the simplest solution to correctly represent the environment and having the caller to sort it out is the next thing to do, since here it is the case of having no functional network, it's just that the software doesn't know this is the case, because there is some network. So we just need to help it to bail out instantly rather than hang waiting for it to time out. 
And afterwards everything else you said.", "Update on this: \r\n\r\nI managed to create a mock environment for tests that makes the connections hang until timeout.\r\nI managed to reproduce the issue you're having in this environment.\r\n\r\nI'll update the offline test cases to also test the robustness to connection hangings, and make sure we set proper timeouts where it's needed in the code. This should cover the _automatic_ section you mentioned.", "Fabulous! I'm glad you were able to reproduce the issues, @lhoestq!", "I lost access to the firewalled setup, but I emulated it with:\r\n\r\n```\r\nsudo ufw enable\r\nsudo ufw default deny outgoing\r\n```\r\n(thanks @mfuntowicz)\r\n\r\nI was able to test `HF_DATASETS_OFFLINE=1` and it worked great - i.e. didn't try to reach out with it and used the cached files instead.\r\n\r\nThank you!" ]
1,614,186,822,000
1,614,920,994,000
1,614,920,994,000
CONTRIBUTOR
null
This issue comes from a need to be able to run `datasets` in a firewalled env, which currently makes the software hang until it times out, as it's unable to complete the network calls. I propose the following approach to solving this problem, using the example of `run_seq2seq.py` as a sample program. There are 2 possible ways to going about it. ## 1. Manual manually prepare data and metrics files, that is transfer to the firewalled instance the dataset and the metrics and run: ``` DATASETS_OFFLINE=1 run_seq2seq.py --train_file xyz.csv --validation_file xyz.csv ... ``` `datasets` must not make any network calls and if there is a logic to do that and something is missing it should assert that this or that action requires network and therefore it can't proceed. ## 2. Automatic In some clouds one can prepare a datastorage ahead of time with a normal networked environment but which doesn't have gpus and then one switches to the gpu instance which is firewalled, but it can access all the cached data. This is the ideal situation, since in this scenario we don't have to do anything manually, but simply run the same application twice: 1. on the non-firewalled instance: ``` run_seq2seq.py --dataset_name wmt16 --dataset_config ro-en ... ``` which should download and cached everything. 2. and then immediately after on the firewalled instance, which shares the same filesystem ``` DATASETS_OFFLINE=1 run_seq2seq.py --dataset_name wmt16 --dataset_config ro-en ... ``` and the metrics and datasets should be cached by the invocation number 1 and any network calls be skipped and if the logic is missing data it should assert and not try to fetch any data from online. ## Common Issues 1. for example currently `datasets` tries to look up online datasets if the files contain json or csv, despite the paths already provided ``` if dataset and path in _PACKAGED_DATASETS_MODULES: ``` 2. it has an issue with metrics. e.g. I had to manually copy `rouge/rouge.py` from the `datasets` repo to the current dir - or it was hanging. I had to comment out `head_hf_s3(...)` calls to make things work. So all those `try: head_hf_s3(...)` shouldn't be tried with `DATASETS_OFFLINE=1` Here is the corresponding issue for `transformers`: https://github.com/huggingface/transformers/issues/10379 Thanks.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1939/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1939/timeline
null
null
null
false
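The environment variable that came out of the issue above can be exercised roughly as below; a minimal sketch assuming the dataset was already cached by a previous run on a connected machine sharing the same filesystem.

```python
import os

# Must be set before `datasets` is imported
# (equivalently, from the shell: HF_DATASETS_OFFLINE=1 python run_seq2seq.py ...).
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset

# With the variable set, no network calls are attempted; this only works if the
# dataset and metrics were cached earlier by a connected run.
ds = load_dataset("wmt16", "ro-en", split="train")
print(ds.num_rows)
```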
https://api.github.com/repos/huggingface/datasets/issues/1938
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1938/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1938/comments
https://api.github.com/repos/huggingface/datasets/issues/1938/events
https://github.com/huggingface/datasets/pull/1938
815,647,774
MDExOlB1bGxSZXF1ZXN0NTc5NDQyNDkw
1,938
Disallow ClassLabel with no names
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,614,184,677,000
1,614,252,449,000
1,614,252,449,000
MEMBER
null
It was possible to create a ClassLabel without specifying the names or the number of classes. This was causing silent issues as in #1936 and breaking the conversion methods str2int and int2str. cc @justin-yan
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1938/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1938/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1938", "html_url": "https://github.com/huggingface/datasets/pull/1938", "diff_url": "https://github.com/huggingface/datasets/pull/1938.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1938.patch", "merged_at": 1614252449000 }
true
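A short sketch of the constraint this PR enforces: a `ClassLabel` has to be given its classes (via `names` or `num_classes`) for the `str2int`/`int2str` conversions to work. The label names are made up for illustration.

```python
from datasets import ClassLabel

label = ClassLabel(names=["negative", "positive"])
print(label.str2int("positive"))  # 1
print(label.int2str(0))           # "negative"

# ClassLabel() with neither `names` nor `num_classes` is now rejected instead of
# silently producing a feature whose conversion methods are broken.
```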
https://api.github.com/repos/huggingface/datasets/issues/1937
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1937/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1937/comments
https://api.github.com/repos/huggingface/datasets/issues/1937/events
https://github.com/huggingface/datasets/issues/1937
815,163,943
MDU6SXNzdWU4MTUxNjM5NDM=
1,937
CommonGen dataset page shows an error OSError: [Errno 28] No space left on device
{ "login": "yuchenlin", "id": 10104354, "node_id": "MDQ6VXNlcjEwMTA0MzU0", "avatar_url": "https://avatars.githubusercontent.com/u/10104354?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yuchenlin", "html_url": "https://github.com/yuchenlin", "followers_url": "https://api.github.com/users/yuchenlin/followers", "following_url": "https://api.github.com/users/yuchenlin/following{/other_user}", "gists_url": "https://api.github.com/users/yuchenlin/gists{/gist_id}", "starred_url": "https://api.github.com/users/yuchenlin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuchenlin/subscriptions", "organizations_url": "https://api.github.com/users/yuchenlin/orgs", "repos_url": "https://api.github.com/users/yuchenlin/repos", "events_url": "https://api.github.com/users/yuchenlin/events{/privacy}", "received_events_url": "https://api.github.com/users/yuchenlin/received_events", "type": "User", "site_admin": false }
[ { "id": 2107841032, "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer", "name": "nlp-viewer", "color": "94203D", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Facing the same issue for [Squad](https://huggingface.co/datasets/viewer/?dataset=squad) and [TriviaQA](https://huggingface.co/datasets/viewer/?dataset=trivia_qa) datasets as well.", "We just fixed the issue, thanks for reporting !" ]
1,614,149,253,000
1,614,337,806,000
1,614,337,806,000
CONTRIBUTOR
null
The viewer page for the CommonGen dataset, https://huggingface.co/datasets/viewer/?dataset=common_gen, shows the following error: ![image](https://user-images.githubusercontent.com/10104354/108959311-1865e600-7629-11eb-868c-cf4cb27034ea.png)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1937/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1937/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1935
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1935/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1935/comments
https://api.github.com/repos/huggingface/datasets/issues/1935/events
https://github.com/huggingface/datasets/pull/1935
814,623,827
MDExOlB1bGxSZXF1ZXN0NTc4NTgyMzk1
1,935
add CoVoST2
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@patrickvonplaten \r\nI removed the mp3 files, dummy_data is much smaller now!" ]
1,614,097,696,000
1,614,190,172,000
1,614,189,909,000
MEMBER
null
This PR adds the CoVoST2 dataset for speech translation and ASR. https://github.com/facebookresearch/covost#covost-2 The dataset requires manual download as the download page requests an email address and the URLs are temporary. The dummy data is a bit bigger because of the mp3 files and 36 configs.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1935/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1935/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1935", "html_url": "https://github.com/huggingface/datasets/pull/1935", "diff_url": "https://github.com/huggingface/datasets/pull/1935.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1935.patch", "merged_at": 1614189909000 }
true
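Loading a manual-download dataset like the one added above roughly follows the pattern below; the config name and local path are assumptions made for illustration, since the real archives have to be fetched by hand first.

```python
from datasets import load_dataset

# The archives must be downloaded manually (the download page asks for an email
# address and the URLs are temporary); `data_dir` then points at the extracted files.
covost = load_dataset(
    "covost2",
    "en_de",                                        # assumed config: English -> German
    data_dir="/path/to/manually/downloaded/covost2",  # hypothetical local path
    split="train",
)
```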
https://api.github.com/repos/huggingface/datasets/issues/1934
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1934/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1934/comments
https://api.github.com/repos/huggingface/datasets/issues/1934/events
https://github.com/huggingface/datasets/issues/1934
814,437,190
MDU6SXNzdWU4MTQ0MzcxOTA=
1,934
Add Stanford Sentiment Treebank (SST)
{ "login": "patpizio", "id": 15801338, "node_id": "MDQ6VXNlcjE1ODAxMzM4", "avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patpizio", "html_url": "https://github.com/patpizio", "followers_url": "https://api.github.com/users/patpizio/followers", "following_url": "https://api.github.com/users/patpizio/following{/other_user}", "gists_url": "https://api.github.com/users/patpizio/gists{/gist_id}", "starred_url": "https://api.github.com/users/patpizio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patpizio/subscriptions", "organizations_url": "https://api.github.com/users/patpizio/orgs", "repos_url": "https://api.github.com/users/patpizio/repos", "events_url": "https://api.github.com/users/patpizio/events{/privacy}", "received_events_url": "https://api.github.com/users/patpizio/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "Dataset added in release [1.5.0](https://github.com/huggingface/datasets/releases/tag/1.5.0), I think I can close this." ]
1,614,084,796,000
1,616,089,904,000
1,616,089,904,000
CONTRIBUTOR
null
I am going to add SST: - **Name:** The Stanford Sentiment Treebank - **Description:** The first corpus with fully labeled parse trees that allows for a complete analysis of the compositional effects of sentiment in language - **Paper:** [Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf) - **Data:** https://nlp.stanford.edu/sentiment/index.html - **Motivation:** Already requested in #353, SST is a popular dataset for Sentiment Classification What's the difference with the [_SST-2_](https://huggingface.co/datasets/viewer/?dataset=glue&config=sst2) dataset included in GLUE? Essentially, SST-2 is a version of SST where: - the labels were mapped from real numbers in [0.0, 1.0] to a binary label: {0, 1} - the labels of the *sub-sentences* were included only in the training set - the labels in the test set are obfuscated So there is a lot more information in the original SST. The tricky bit is, the data is scattered into many text files and, for one in particular, I couldn't find the original encoding ([*but I'm not the only one*](https://groups.google.com/g/word2vec-toolkit/c/QIUjLw6RqFk/m/_iEeyt428wkJ) 🎵). The only solution I found was to manually replace all the è, ë, ç and so on into an `utf-8` copy of the text file. I uploaded the result in my Dropbox and I am using that as the main repo for the dataset. Also, the _sub-sentences_ are built at run-time from the information encoded in several text files, so generating the examples is a bit more cumbersome than usual. Luckily, the dataset is not enormous. I plan to divide the dataset in 2 configs: one with just whole sentences with their labels, the other with sentences _and their sub-sentences_ with their labels. Each config will be split in train, validation and test. Hopefully this makes sense, we may discuss it in the PR I'm going to submit.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1934/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1934/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1932
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1932/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1932/comments
https://api.github.com/repos/huggingface/datasets/issues/1932/events
https://github.com/huggingface/datasets/pull/1932
814,326,116
MDExOlB1bGxSZXF1ZXN0NTc4MzMyMTQy
1,932
Fix builder config creation with data_dir
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,614,075,962,000
1,614,077,128,000
1,614,077,127,000
MEMBER
null
The data_dir parameter wasn't taken into account when creating the config_id, therefore the resulting builder config was considered not custom. However, a builder config that is non-custom must not have a name that collides with the predefined builder config names, so it resulted in a `ValueError("Cannot name a custom BuilderConfig the same as an available...")`. I fixed that by commenting out the line that used to ignore the data_dir when creating the config. The data_dir was ignored before the introduction of the config id because we didn't want to change the config name; now it's fine to take it into account for the config id. Creating a config with a data_dir works again. @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1932/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1932/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1932", "html_url": "https://github.com/huggingface/datasets/pull/1932", "diff_url": "https://github.com/huggingface/datasets/pull/1932.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1932.patch", "merged_at": 1614077127000 }
true
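The fix described in the record above restores the call pattern sketched below; the script path and data directory are placeholders, and the quoted error is the one reported in the description rather than a verbatim traceback.

```python
from datasets import load_dataset

# Placeholder paths, for illustration only.
# Before the fix, data_dir was ignored when computing the config id, so the
# resulting config could collide with a predefined BuilderConfig name and raise
# ValueError("Cannot name a custom BuilderConfig the same as an available ...").
# With the fix, data_dir is folded into the config id and this call works again:
dataset = load_dataset("path/to/my_dataset_script.py", data_dir="path/to/data")
```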
https://api.github.com/repos/huggingface/datasets/issues/1931
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1931/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1931/comments
https://api.github.com/repos/huggingface/datasets/issues/1931/events
https://github.com/huggingface/datasets/pull/1931
814,225,074
MDExOlB1bGxSZXF1ZXN0NTc4MjQ4NTA5
1,931
add m_lama (multilingual lama) dataset
{ "login": "pdufter", "id": 13961899, "node_id": "MDQ6VXNlcjEzOTYxODk5", "avatar_url": "https://avatars.githubusercontent.com/u/13961899?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdufter", "html_url": "https://github.com/pdufter", "followers_url": "https://api.github.com/users/pdufter/followers", "following_url": "https://api.github.com/users/pdufter/following{/other_user}", "gists_url": "https://api.github.com/users/pdufter/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdufter/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdufter/subscriptions", "organizations_url": "https://api.github.com/users/pdufter/orgs", "repos_url": "https://api.github.com/users/pdufter/repos", "events_url": "https://api.github.com/users/pdufter/events{/privacy}", "received_events_url": "https://api.github.com/users/pdufter/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi, it seems I am somewhat stuck here. The failed test `ci/circleci: run_dataset_script_tests_pyarrow_1_WIN` seems to be caused by some broken connection (`ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host`). Any help on this is appreciated. \r\n\r\nEdit: Seems to be resolved now.", "I guess the `dummy_data.zip` is too large. I can reduce the languages that are contained there, but when testing it, it obviously throws an error, as not all files can be found. I guess I can either i) change the default value regarding which languages are loaded or ii) let the `_generate_examples` silently skip any language for which it cannot find files. Both solutions are not really pretty - is there another way around this?", "Thanks for the review and the constructive comments :) ! I tried to address them, and reduced the number of lines in the dummy data to 1 to reduce its size. " ]
1,614,067,917,000
1,614,592,863,000
1,614,592,863,000
CONTRIBUTOR
null
Add a multilingual (machine translated and automatically generated) version of the LAMA benchmark. For details see the paper https://arxiv.org/pdf/2102.00894.pdf
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1931/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1931/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1931", "html_url": "https://github.com/huggingface/datasets/pull/1931", "diff_url": "https://github.com/huggingface/datasets/pull/1931.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1931.patch", "merged_at": 1614592863000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1930
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1930/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1930/comments
https://api.github.com/repos/huggingface/datasets/issues/1930/events
https://github.com/huggingface/datasets/pull/1930
814,055,198
MDExOlB1bGxSZXF1ZXN0NTc4MTAwNzI0
1,930
updated the wino_bias dataset
{ "login": "JieyuZhao", "id": 22306304, "node_id": "MDQ6VXNlcjIyMzA2MzA0", "avatar_url": "https://avatars.githubusercontent.com/u/22306304?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JieyuZhao", "html_url": "https://github.com/JieyuZhao", "followers_url": "https://api.github.com/users/JieyuZhao/followers", "following_url": "https://api.github.com/users/JieyuZhao/following{/other_user}", "gists_url": "https://api.github.com/users/JieyuZhao/gists{/gist_id}", "starred_url": "https://api.github.com/users/JieyuZhao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JieyuZhao/subscriptions", "organizations_url": "https://api.github.com/users/JieyuZhao/orgs", "repos_url": "https://api.github.com/users/JieyuZhao/repos", "events_url": "https://api.github.com/users/JieyuZhao/events{/privacy}", "received_events_url": "https://api.github.com/users/JieyuZhao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @JieyuZhao ! Have you had a chance to add the different configurations ?\r\nThanks again for your help on this !", "> Hi @JieyuZhao ! Have you had a chance to add the different configurations ?\r\n> Thanks again for your help on this !\r\n\r\nHi @lhoestq Yes, I've updated the code. Now the configuration will have dev/test splits.", "> Cool thanks !\r\n> This looks perfect this way.\r\n> \r\n> Now we just need to update the dataset_infos.json (it contains the metadata of the dataset) and add dummy data to be able to test this script automatically.\r\n> \r\n> To update the dataset_infos.json you just need delete the current one at `./datasets/wino_biais/dataset_infos.json`, and then run this command:\r\n> \r\n> ```\r\n> datasets-cli test ./datasets/wino_biais --save_infos --all_configs --ignore_verifications\r\n> ```\r\n> \r\n> To add the dummy data there's also a tool to add them automatically.\r\n> First delete the folder at `./datasets/wino_biais/dummy` and then run\r\n> \r\n> ```\r\n> datasets-cli dummy_data ./datasets/wino_biais --auto_generate --match_text_files \"*conll\" --n_lines 15\r\n> ```\r\n> \r\n> Let me know if you have questions :)\r\n> Also don't forget to run `make style` to format the code properly.\r\n\r\nThanks for the instruction! I've updated the metadata and the dummy data and also do the formatting. Please let me know if more is needed. :)" ]
1,614,049,660,000
1,617,809,096,000
1,617,809,096,000
CONTRIBUTOR
null
Updated the wino_bias.py script. - updated the data_url - added different configurations for different data splits - added the coreference_cluster to the data features
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1930/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1930/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1930", "html_url": "https://github.com/huggingface/datasets/pull/1930", "diff_url": "https://github.com/huggingface/datasets/pull/1930.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1930.patch", "merged_at": 1617809096000 }
true
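A loading sketch consistent with the wino_bias update described above (per-split configurations and a coreference cluster feature). The configuration name `type1_pro` and the feature key `coreference_clusters` are assumptions based on the WinoBias type1/type2, pro-/anti-stereotypical naming scheme; verify both against the dataset card.

```python
from datasets import load_dataset

# "type1_pro" is an assumed configuration name; check the card for exact names.
wino = load_dataset("wino_bias", "type1_pro")

print(wino)  # the updated script is expected to expose dev/test style splits
example = wino["validation"][0]
print(example.get("coreference_clusters"))  # feature added by the PR; key name may differ
```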
https://api.github.com/repos/huggingface/datasets/issues/1929
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1929/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1929/comments
https://api.github.com/repos/huggingface/datasets/issues/1929/events
https://github.com/huggingface/datasets/pull/1929
813,929,669
MDExOlB1bGxSZXF1ZXN0NTc3OTk1MTE4
1,929
Improve typing and style and fix some inconsistencies
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq Thanks for the quick review.", "I merged master to this branch to re-run the CI before merging :)" ]
1,614,034,061,000
1,614,183,374,000
1,614,175,434,000
CONTRIBUTOR
null
This PR: * improves typing (mostly more consistent use of `typing.Optional`) * `DatasetDict.cleanup_cache_files` now correctly returns a dict * replaces `dict()` with the corresponding literal * uses `dict_to_copy.copy()` instead of `dict(dict_to_copy)` for shallow copying
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1929/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1929/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1929", "html_url": "https://github.com/huggingface/datasets/pull/1929", "diff_url": "https://github.com/huggingface/datasets/pull/1929.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1929.patch", "merged_at": 1614175433000 }
true
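The changes listed in the record above are consistency tweaks rather than new behavior; the short sketch below illustrates the three idioms it mentions, using invented names purely for demonstration.

```python
from typing import Optional


# Explicit Optional[...] for parameters that default to None,
# instead of the implicit `cache_dir: str = None` style.
def build_options(cache_dir: Optional[str] = None) -> dict:
    defaults = {"keep_in_memory": False}   # dict literal rather than dict()
    options = defaults.copy()              # .copy() rather than dict(defaults)
    options["cache_dir"] = cache_dir
    return options


print(build_options())
print(build_options("/tmp/hf_cache"))
```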
https://api.github.com/repos/huggingface/datasets/issues/1928
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1928/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1928/comments
https://api.github.com/repos/huggingface/datasets/issues/1928/events
https://github.com/huggingface/datasets/pull/1928
813,793,434
MDExOlB1bGxSZXF1ZXN0NTc3ODgyMDM4
1,928
Updating old cards
{ "login": "mcmillanmajora", "id": 26722925, "node_id": "MDQ6VXNlcjI2NzIyOTI1", "avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mcmillanmajora", "html_url": "https://github.com/mcmillanmajora", "followers_url": "https://api.github.com/users/mcmillanmajora/followers", "following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}", "gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}", "starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions", "organizations_url": "https://api.github.com/users/mcmillanmajora/orgs", "repos_url": "https://api.github.com/users/mcmillanmajora/repos", "events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}", "received_events_url": "https://api.github.com/users/mcmillanmajora/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,614,021,964,000
1,614,104,365,000
1,614,104,365,000
CONTRIBUTOR
null
Updated the cards for [Allocine](https://github.com/mcmillanmajora/datasets/tree/updating-old-cards/datasets/allocine), [CNN/DailyMail](https://github.com/mcmillanmajora/datasets/tree/updating-old-cards/datasets/cnn_dailymail), and [SNLI](https://github.com/mcmillanmajora/datasets/tree/updating-old-cards/datasets/snli). For the most part, the information was just rearranged or rephrased, but the social impact statements are new.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1928/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1928/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1928", "html_url": "https://github.com/huggingface/datasets/pull/1928", "diff_url": "https://github.com/huggingface/datasets/pull/1928.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1928.patch", "merged_at": 1614104365000 }
true