| Column | Type | Range / values |
| --- | --- | --- |
| url | stringlengths | 58 – 61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 72 – 75 |
| comments_url | stringlengths | 67 – 70 |
| events_url | stringlengths | 65 – 68 |
| html_url | stringlengths | 46 – 51 |
| id | int64 | 599M – 1B |
| node_id | stringlengths | 18 – 32 |
| number | int64 | 1 – 2.96k |
| title | stringlengths | 1 – 268 |
| user | dict | |
| labels | listlengths | 0 – 3 |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | listlengths | 0 – 3 |
| milestone | dict | |
| comments | listlengths | 0 – 30 |
| created_at | int64 | 1,587B – 1,632B |
| updated_at | int64 | 1,587B – 1,632B |
| closed_at | int64 | 1,587B – 1,632B |
| author_association | stringclasses | 4 values |
| active_lock_reason | null | |
| pull_request | dict | |
| body | stringlengths | 0 – 228k |
| timeline_url | stringlengths | 67 – 70 |
| performed_via_github_app | null | |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/1729
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1729/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1729/comments
https://api.github.com/repos/huggingface/datasets/issues/1729/events
https://github.com/huggingface/datasets/issues/1729
784,565,898
MDU6SXNzdWU3ODQ1NjU4OTg=
1,729
Is there support for Deep learning datasets?
{ "login": "pablodz", "id": 28235457, "node_id": "MDQ6VXNlcjI4MjM1NDU3", "avatar_url": "https://avatars.githubusercontent.com/u/28235457?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pablodz", "html_url": "https://github.com/pablodz", "followers_url": "https://api.github.com/users/pablod...
[]
closed
false
null
[]
null
[ "Hi @ZurMaD!\r\nThanks for your interest in 🤗 `datasets`. Support for image datasets is at an early stage, with CIFAR-10 added in #1617 \r\nMNIST is also on the way: #1730 \r\n\r\nIf you feel like adding another image dataset, I would advise starting by reading the [ADD_NEW_DATASET.md](https://github.com/huggingfa...
1,610,482,961,000
1,617,164,647,000
1,617,164,647,000
NONE
null
null
I looked around this repository and looking the datasets I think that there's no support for images-datasets. Or am I missing something? For example to add a repo like this https://github.com/DZPeru/fish-datasets
https://api.github.com/repos/huggingface/datasets/issues/1729/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1728
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1728/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1728/comments
https://api.github.com/repos/huggingface/datasets/issues/1728/events
https://github.com/huggingface/datasets/issues/1728
784,458,342
MDU6SXNzdWU3ODQ0NTgzNDI=
1,728
Add an entry to an arrow dataset
{ "login": "ameet-1997", "id": 18645407, "node_id": "MDQ6VXNlcjE4NjQ1NDA3", "avatar_url": "https://avatars.githubusercontent.com/u/18645407?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ameet-1997", "html_url": "https://github.com/ameet-1997", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "Hi @ameet-1997,\r\nI think what you are looking for is the `concatenate_datasets` function: https://huggingface.co/docs/datasets/processing.html?highlight=concatenate#concatenate-several-datasets\r\n\r\nFor your use case, I would use the [`map` method](https://huggingface.co/docs/datasets/processing.html?highlight...
1,610,474,507,000
1,610,997,332,000
1,610,997,332,000
NONE
null
null
Is it possible to add an entry to a dataset object? **Motivation: I want to transform the sentences in the dataset and add them to the original dataset** For example, say we have the following code: ``` python from datasets import load_dataset # Load a dataset and print the first examples in the training s...
https://api.github.com/repos/huggingface/datasets/issues/1728/timeline
null
false
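Issue 1728 above asks how to append transformed examples to a dataset, and the reply points to `Dataset.map` and `concatenate_datasets`. A minimal sketch of that pattern, using a toy in-memory dataset and a hypothetical `augment` transform rather than the reporter's actual data:

```python
from datasets import Dataset, concatenate_datasets

# Toy dataset standing in for the one loaded with load_dataset() in the issue.
original = Dataset.from_dict({"sentence": ["hello world", "good morning"]})

def augment(example):
    # Hypothetical transformation; the issue's actual transform is not shown.
    example["sentence"] = example["sentence"].upper()
    return example

# Transform the sentences with map(), then append them to the original dataset.
transformed = original.map(augment)
combined = concatenate_datasets([original, transformed])
print(combined.num_rows)  # 4
```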
https://api.github.com/repos/huggingface/datasets/issues/1727
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1727/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1727/comments
https://api.github.com/repos/huggingface/datasets/issues/1727/events
https://github.com/huggingface/datasets/issues/1727
784,435,131
MDU6SXNzdWU3ODQ0MzUxMzE=
1,727
BLEURT score calculation raises UnrecognizedFlagError
{ "login": "nadavo", "id": 6603920, "node_id": "MDQ6VXNlcjY2MDM5MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/6603920?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nadavo", "html_url": "https://github.com/nadavo", "followers_url": "https://api.github.com/users/nadavo/foll...
[]
open
false
null
[]
null
[ "Upgrading tensorflow to version 2.4.0 solved the issue.", "I still have the same error even with TF 2.4.0.", "And I have the same error with TF 2.4.1. I believe this issue should be reopened. Any ideas?!", "I'm seeing the same issue with TF 2.4.1 when running the following in https://colab.research.google.co...
1,610,472,422,000
1,618,266,101,000
null
NONE
null
null
Calling the `compute` method for **bleurt** metric fails with an `UnrecognizedFlagError` for `FLAGS.bleurt_batch_size`. My environment: ``` python==3.8.5 datasets==1.2.0 tensorflow==2.3.1 cudatoolkit==11.0.221 ``` Test code for reproducing the error: ``` from datasets import load_metric bleurt = load_me...
https://api.github.com/repos/huggingface/datasets/issues/1727/timeline
null
false
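Issue 1727 above reports that `compute` fails for the **bleurt** metric with an `UnrecognizedFlagError`. For reference, a minimal sketch of the call being made (it assumes the `bleurt` package and TensorFlow are installed; loading the metric downloads a BLEURT checkpoint):

```python
from datasets import load_metric

# Load the BLEURT metric script and score one prediction/reference pair.
bleurt = load_metric("bleurt")
results = bleurt.compute(
    predictions=["the cat sat on the mat"],
    references=["a cat was sitting on the mat"],
)
print(results)
```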
https://api.github.com/repos/huggingface/datasets/issues/1726
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1726/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1726/comments
https://api.github.com/repos/huggingface/datasets/issues/1726/events
https://github.com/huggingface/datasets/pull/1726
784,336,370
MDExOlB1bGxSZXF1ZXN0NTUzNTQ0ODg4
1,726
Offline loading
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[ "It's maybe a bit annoying to add but could we maybe have as well a version of the local data loading scripts in the package?\r\nThe `text`, `json`, `csv`. Thinking about people like in #1725 who are expecting to be able to work with local data without downloading anything.\r\n\r\nMaybe we can add them to package_d...
1,610,464,917,000
1,611,857,122,000
1,611,074,552,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1726", "html_url": "https://github.com/huggingface/datasets/pull/1726", "diff_url": "https://github.com/huggingface/datasets/pull/1726.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1726.patch" }
As discussed in #824 it would be cool to make the library work in offline mode. Currently if there's not internet connection then modules (datasets or metrics) that have already been loaded in the past can't be loaded and it raises a ConnectionError. This is because `prepare_module` fetches online for the latest vers...
https://api.github.com/repos/huggingface/datasets/issues/1726/timeline
null
true
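PR 1726 above adds an offline mode so that already-cached modules can be reloaded without a connection. A sketch of how such a mode is typically enabled, assuming the `HF_DATASETS_OFFLINE` environment variable associated with this line of work is available in the installed version:

```python
import os

# Enable offline mode before importing datasets (assumed variable name, see above).
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset

# A dataset that was already downloaded and cached is reloaded from the local cache
# instead of raising a ConnectionError when there is no internet connection.
squad = load_dataset("squad", split="train")
print(squad[0]["question"])
```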
https://api.github.com/repos/huggingface/datasets/issues/1725
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1725/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1725/comments
https://api.github.com/repos/huggingface/datasets/issues/1725/events
https://github.com/huggingface/datasets/issues/1725
784,182,273
MDU6SXNzdWU3ODQxODIyNzM=
1,725
load the local dataset
{ "login": "xinjicong", "id": 41193842, "node_id": "MDQ6VXNlcjQxMTkzODQy", "avatar_url": "https://avatars.githubusercontent.com/u/41193842?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xinjicong", "html_url": "https://github.com/xinjicong", "followers_url": "https://api.github.com/users/...
[]
open
false
null
[]
null
[ "You should rephrase your question or give more examples and details on what you want to do.\r\n\r\nit’s not possible to understand it and help you with only this information.", "sorry for that.\r\ni want to know how could i load the train set and the test set from the local ,which api or function should i use .\...
1,610,453,575,000
1,614,768,943,000
null
NONE
null
null
your guidebook's example is like >>>from datasets import load_dataset >>> dataset = load_dataset('json', data_files='my_file.json') but the first arg is path... so how should i do if i want to load the local dataset for model training? i will be grateful if you can help me handle this problem! thanks a lot!
https://api.github.com/repos/huggingface/datasets/issues/1725/timeline
null
false
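Issue 1725 above asks how to load local training and test files. A small sketch using the generic `json` loader with a `data_files` mapping (the file names are placeholders):

```python
from datasets import load_dataset

# The first argument selects a generic loading script ("json", "csv", "text"),
# and data_files maps split names to local files; file names here are placeholders.
dataset = load_dataset(
    "json",
    data_files={"train": "train.json", "test": "test.json"},
)
print(dataset["train"][0])
print(dataset["test"].num_rows)
```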
https://api.github.com/repos/huggingface/datasets/issues/1723
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1723/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1723/comments
https://api.github.com/repos/huggingface/datasets/issues/1723/events
https://github.com/huggingface/datasets/pull/1723
783,982,100
MDExOlB1bGxSZXF1ZXN0NTUzMjQ4MzU1
1,723
ADD S3 support for downloading and uploading processed datasets
{ "login": "philschmid", "id": 32632186, "node_id": "MDQ6VXNlcjMyNjMyMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/philschmid", "html_url": "https://github.com/philschmid", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "I created the documentation for `FileSystem Integration for cloud storage` with loading and saving datasets to/from a filesystem with an example of using `datasets.filesystem.S3Filesystem`. I added a note on the `Saving a processed dataset on disk and reload` saying that it is also possible to use other filesystem...
1,610,435,854,000
1,611,680,528,000
1,611,680,528,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1723", "html_url": "https://github.com/huggingface/datasets/pull/1723", "diff_url": "https://github.com/huggingface/datasets/pull/1723.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1723.patch" }
# What does this PR do? This PR adds the functionality to load and save `datasets` from and to s3. You can save `datasets` with either `Dataset.save_to_disk()` or `DatasetDict.save_to_disk`. You can load `datasets` with either `load_from_disk` or `Dataset.load_from_disk()`, `DatasetDict.load_from_disk()`. Lo...
https://api.github.com/repos/huggingface/datasets/issues/1723/timeline
null
true
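PR 1723 above adds saving and loading of processed datasets to and from S3. A rough sketch of the workflow it describes; the filesystem class and the `fs` argument follow the PR discussion and may differ from the released API, and the bucket name and credentials are placeholders:

```python
from datasets import load_dataset, load_from_disk
from datasets.filesystems import S3FileSystem  # class name taken from the PR discussion

# Placeholder credentials; in practice these come from your AWS configuration.
s3 = S3FileSystem(key="<aws_access_key_id>", secret="<aws_secret_access_key>")

# Save a processed dataset to a bucket, then reload it later from the same path.
dataset = load_dataset("imdb", split="train")
dataset.save_to_disk("s3://my-bucket/imdb/train", fs=s3)

reloaded = load_from_disk("s3://my-bucket/imdb/train", fs=s3)
print(reloaded.num_rows)
```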
https://api.github.com/repos/huggingface/datasets/issues/1724
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1724/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1724/comments
https://api.github.com/repos/huggingface/datasets/issues/1724/events
https://github.com/huggingface/datasets/issues/1724
784,023,338
MDU6SXNzdWU3ODQwMjMzMzg=
1,724
could not run models on a offline server successfully
{ "login": "lkcao", "id": 49967236, "node_id": "MDQ6VXNlcjQ5OTY3MjM2", "avatar_url": "https://avatars.githubusercontent.com/u/49967236?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lkcao", "html_url": "https://github.com/lkcao", "followers_url": "https://api.github.com/users/lkcao/follow...
[]
open
false
null
[]
null
[ "Transferred to `datasets` based on the stack trace.", "Hi @lkcao !\r\nYour issue is indeed related to `datasets`. In addition to installing the package manually, you will need to download the `text.py` script on your server. You'll find it (under `datasets/datasets/text`: https://github.com/huggingface/datasets/...
1,610,431,686,000
1,614,785,549,000
null
NONE
null
null
Hi, I really need your help about this. I am trying to fine-tuning a RoBERTa on a remote server, which is strictly banning internet. I try to install all the packages by hand and try to run run_mlm.py on the server. It works well on colab, but when I try to run it on this offline server, it shows: ![image](https://us...
https://api.github.com/repos/huggingface/datasets/issues/1724/timeline
null
false
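Issue 1724 above is about running `run_mlm.py` on a server with no internet access; the reply suggests downloading the `text.py` loading script and pointing `load_dataset` at the local copy. A sketch of that call, with placeholder paths:

```python
from datasets import load_dataset

# On the offline machine, pass the path to a local copy of the "text" loading script
# instead of the short name "text"; all paths below are placeholders.
dataset = load_dataset(
    "/path/to/text.py",
    data_files={"train": "train.txt", "validation": "valid.txt"},
)
print(dataset["train"][0])
```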
https://api.github.com/repos/huggingface/datasets/issues/1722
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1722/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1722/comments
https://api.github.com/repos/huggingface/datasets/issues/1722/events
https://github.com/huggingface/datasets/pull/1722
783,921,679
MDExOlB1bGxSZXF1ZXN0NTUzMTk3MTg4
1,722
Added unfiltered versions of the Wiki-Auto training data for the GEM simplification task.
{ "login": "mounicam", "id": 11708999, "node_id": "MDQ6VXNlcjExNzA4OTk5", "avatar_url": "https://avatars.githubusercontent.com/u/11708999?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mounicam", "html_url": "https://github.com/mounicam", "followers_url": "https://api.github.com/users/mou...
[]
closed
false
null
[]
null
[ "The current version of Wiki-Auto dataset contains a filtered version of the aligned dataset. The commit adds unfiltered versions of the data that can be useful the GEM task participants." ]
1,610,429,164,000
1,610,475,293,000
1,610,472,957,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1722", "html_url": "https://github.com/huggingface/datasets/pull/1722", "diff_url": "https://github.com/huggingface/datasets/pull/1722.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1722.patch" }
https://api.github.com/repos/huggingface/datasets/issues/1722/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1721
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1721/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1721/comments
https://api.github.com/repos/huggingface/datasets/issues/1721/events
https://github.com/huggingface/datasets/pull/1721
783,828,428
MDExOlB1bGxSZXF1ZXN0NTUzMTIyODQ5
1,721
[Scientific papers] Mirror datasets zip
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[]
closed
false
null
[]
null
[ "> Nice !\r\n> \r\n> Could you try to reduce the size of the dummy_data.zip files ? they're quite big (300KB)\r\n\r\nYes, I think it might make sense to enhance the tool a tiny bit to prevent this automatically", "That's the lightest I can make it...it's long-range summarization so a single sample has ~11000 toke...
1,610,414,140,000
1,610,452,155,000
1,610,451,707,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1721", "html_url": "https://github.com/huggingface/datasets/pull/1721", "diff_url": "https://github.com/huggingface/datasets/pull/1721.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1721.patch" }
Datasets were uploading to https://s3.amazonaws.com/datasets.huggingface.co/scientific_papers/1.1.1/arxiv-dataset.zip and https://s3.amazonaws.com/datasets.huggingface.co/scientific_papers/1.1.1/pubmed-dataset.zip respectively to escape google drive quota and enable faster download.
https://api.github.com/repos/huggingface/datasets/issues/1721/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1720
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1720/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1720/comments
https://api.github.com/repos/huggingface/datasets/issues/1720/events
https://github.com/huggingface/datasets/pull/1720
783,721,833
MDExOlB1bGxSZXF1ZXN0NTUzMDM0MzYx
1,720
Adding the NorNE dataset for NER
{ "login": "versae", "id": 173537, "node_id": "MDQ6VXNlcjE3MzUzNw==", "avatar_url": "https://avatars.githubusercontent.com/u/173537?v=4", "gravatar_id": "", "url": "https://api.github.com/users/versae", "html_url": "https://github.com/versae", "followers_url": "https://api.github.com/users/versae/follow...
[]
closed
false
null
[]
null
[ "Quick question, @lhoestq. In this specific dataset, two special types `GPE_LOC` and `GPE_ORG` can easily be altered depending on the task, choosing either the more general `GPE` tag or the more specific `LOC`/`ORG` tags, conflating them with the other annotations of the same type. However, I have not found an easy...
1,610,400,853,000
1,617,200,629,000
1,617,199,997,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1720", "html_url": "https://github.com/huggingface/datasets/pull/1720", "diff_url": "https://github.com/huggingface/datasets/pull/1720.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1720.patch" }
NorNE is a manually annotated corpus of named entities which extends the annotation of the existing Norwegian Dependency Treebank. Comprising both of the official standards of written Norwegian (Bokmål and Nynorsk), the corpus contains around 600,000 tokens and annotates a rich set of entity types including persons, or...
https://api.github.com/repos/huggingface/datasets/issues/1720/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1719
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1719/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1719/comments
https://api.github.com/repos/huggingface/datasets/issues/1719/events
https://github.com/huggingface/datasets/pull/1719
783,557,542
MDExOlB1bGxSZXF1ZXN0NTUyODk3MzY4
1,719
Fix column list comparison in transmit format
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,610,385,836,000
1,610,390,703,000
1,610,390,702,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1719", "html_url": "https://github.com/huggingface/datasets/pull/1719", "diff_url": "https://github.com/huggingface/datasets/pull/1719.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1719.patch" }
As noticed in #1718 the cache might not reload the cache files when new columns were added. This is because of an issue in `transmit_format` where the column list comparison fails because the order was not deterministic. This causes the `transmit_format` to apply an unnecessary `set_format` transform with shuffled col...
https://api.github.com/repos/huggingface/datasets/issues/1719/timeline
null
true
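PR 1719 above fixes a column-list comparison whose order was not deterministic because the list came from a `set`. A short illustration of why that matters, separate from the actual patch:

```python
# The iteration order of a set of strings depends on string hashing, which is
# randomized between interpreter sessions, so a list built from it can come out
# in a different order on each run; anything derived from that list (such as a
# fingerprint) then changes as well. Sorting gives a stable, comparable order.
columns = list({"input_ids", "attention_mask", "labels"})
print(columns)          # order may differ between runs of this script
print(sorted(columns))  # deterministic order
```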
https://api.github.com/repos/huggingface/datasets/issues/1718
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1718/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1718/comments
https://api.github.com/repos/huggingface/datasets/issues/1718/events
https://github.com/huggingface/datasets/issues/1718
783,474,753
MDU6SXNzdWU3ODM0NzQ3NTM=
1,718
Possible cache miss in datasets
{ "login": "ofirzaf", "id": 18296312, "node_id": "MDQ6VXNlcjE4Mjk2MzEy", "avatar_url": "https://avatars.githubusercontent.com/u/18296312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ofirzaf", "html_url": "https://github.com/ofirzaf", "followers_url": "https://api.github.com/users/ofirza...
[]
closed
false
null
[]
null
[ "Thanks for reporting !\r\nI was able to reproduce thanks to your code and find the origin of the bug.\r\nThe cache was not reusing the same file because one object was not deterministic. It comes from a conversion from `set` to `list` in the `datasets.arrrow_dataset.transmit_format` function, where the resulting l...
1,610,379,451,000
1,619,591,723,000
1,611,629,279,000
NONE
null
null
Hi, I am using the datasets package and even though I run the same data processing functions, datasets always recomputes the function instead of using cache. I have attached an example script that for me reproduces the problem. In the attached example the second map function always recomputes instead of loading fr...
https://api.github.com/repos/huggingface/datasets/issues/1718/timeline
null
false
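Issue 1718 above reports that identical `map` calls are recomputed instead of reusing the cache, traced to the fingerprinting bug fixed in #1719. For context, a toy sketch of the behaviour the reporter expects, not the reporter's original script:

```python
import json
from datasets import load_dataset

# Build a tiny local JSON-lines file so the example is self-contained.
with open("toy.jsonl", "w") as f:
    for text in ["a", "bb", "ccc"]:
        f.write(json.dumps({"text": text}) + "\n")

ds = load_dataset("json", data_files="toy.jsonl", split="train")

def add_length(example):
    example["length"] = len(example["text"])
    return example

# The first call computes the transform and writes a cache file keyed by the
# dataset fingerprint; an identical second call should reload that cache file
# instead of recomputing, provided the fingerprint is deterministic.
ds1 = ds.map(add_length)
ds2 = ds.map(add_length)
```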
https://api.github.com/repos/huggingface/datasets/issues/1717
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1717/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1717/comments
https://api.github.com/repos/huggingface/datasets/issues/1717/events
https://github.com/huggingface/datasets/issues/1717
783,074,255
MDU6SXNzdWU3ODMwNzQyNTU=
1,717
SciFact dataset - minor changes
{ "login": "dwadden", "id": 3091916, "node_id": "MDQ6VXNlcjMwOTE5MTY=", "avatar_url": "https://avatars.githubusercontent.com/u/3091916?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dwadden", "html_url": "https://github.com/dwadden", "followers_url": "https://api.github.com/users/dwadden/...
[]
closed
false
null
[]
null
[ "Hi Dave,\r\nYou are more than welcome to open a PR to make these changes! 🤗\r\nYou will find the relevant information about opening a PR in the [contributing guide](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md) and in the [dataset addition guide](https://github.com/huggingface/datasets/blob...
1,610,342,800,000
1,611,629,537,000
1,611,629,537,000
CONTRIBUTOR
null
null
Hi, SciFact dataset creator here. First of all, thanks for adding the dataset to Huggingface, much appreciated! I'd like to make a few minor changes, including the citation information and the `_URL` from which to download the dataset. Can I submit a PR for this? It also looks like the dataset is being downloa...
https://api.github.com/repos/huggingface/datasets/issues/1717/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1716
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1716/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1716/comments
https://api.github.com/repos/huggingface/datasets/issues/1716/events
https://github.com/huggingface/datasets/pull/1716
782,819,006
MDExOlB1bGxSZXF1ZXN0NTUyMjgzNzE5
1,716
Add Hatexplain Dataset
{ "login": "kushal2000", "id": 48222101, "node_id": "MDQ6VXNlcjQ4MjIyMTAx", "avatar_url": "https://avatars.githubusercontent.com/u/48222101?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kushal2000", "html_url": "https://github.com/kushal2000", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[]
1,610,285,401,000
1,610,979,702,000
1,610,979,702,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1716", "html_url": "https://github.com/huggingface/datasets/pull/1716", "diff_url": "https://github.com/huggingface/datasets/pull/1716.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1716.patch" }
Adding Hatexplain - the first benchmark hate speech dataset covering multiple aspects of the issue
https://api.github.com/repos/huggingface/datasets/issues/1716/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1715
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1715/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1715/comments
https://api.github.com/repos/huggingface/datasets/issues/1715/events
https://github.com/huggingface/datasets/pull/1715
782,754,441
MDExOlB1bGxSZXF1ZXN0NTUyMjM2NDA5
1,715
add Korean intonation-aided intention identification dataset
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/ste...
[]
closed
false
null
[]
null
[]
1,610,260,144,000
1,631,897,653,000
1,610,471,673,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1715", "html_url": "https://github.com/huggingface/datasets/pull/1715", "diff_url": "https://github.com/huggingface/datasets/pull/1715.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1715.patch" }
https://api.github.com/repos/huggingface/datasets/issues/1715/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1714
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1714/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1714/comments
https://api.github.com/repos/huggingface/datasets/issues/1714/events
https://github.com/huggingface/datasets/pull/1714
782,416,276
MDExOlB1bGxSZXF1ZXN0NTUxOTc3MDA0
1,714
Adding adversarialQA dataset
{ "login": "maxbartolo", "id": 15869827, "node_id": "MDQ6VXNlcjE1ODY5ODI3", "avatar_url": "https://avatars.githubusercontent.com/u/15869827?v=4", "gravatar_id": "", "url": "https://api.github.com/users/maxbartolo", "html_url": "https://github.com/maxbartolo", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "Oh that's a really cool one, we'll review/merge it soon!\r\n\r\nIn the meantime, do you have any specific positive/negative feedback on the process of adding a datasets Max?\r\nDid you follow the instruction in the [detailed step-by-step](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md)?", ...
1,610,142,369,000
1,610,553,924,000
1,610,553,924,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1714", "html_url": "https://github.com/huggingface/datasets/pull/1714", "diff_url": "https://github.com/huggingface/datasets/pull/1714.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1714.patch" }
Adding the adversarialQA dataset (https://adversarialqa.github.io/) from Beat the AI (https://arxiv.org/abs/2002.00293)
https://api.github.com/repos/huggingface/datasets/issues/1714/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1713
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1713/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1713/comments
https://api.github.com/repos/huggingface/datasets/issues/1713/events
https://github.com/huggingface/datasets/issues/1713
782,337,723
MDU6SXNzdWU3ODIzMzc3MjM=
1,713
Installation using conda
{ "login": "pranav-s", "id": 9393002, "node_id": "MDQ6VXNlcjkzOTMwMDI=", "avatar_url": "https://avatars.githubusercontent.com/u/9393002?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pranav-s", "html_url": "https://github.com/pranav-s", "followers_url": "https://api.github.com/users/prana...
[]
closed
false
null
[]
null
[ "Yes indeed the idea is to have the next release on conda cc @LysandreJik ", "Great! Did you guys have a timeframe in mind for the next release?\r\n\r\nThank you for all the great work in developing this library.", "I think we can have `datasets` on conda by next week. Will see what I can do!", "Thank you. Lo...
1,610,133,135,000
1,631,882,860,000
1,631,882,860,000
NONE
null
null
Will a conda package for installing datasets be added to the huggingface conda channel? I have installed transformers using conda and would like to use the datasets library to use some of the scripts in the transformers/examples folder but am unable to do so at the moment as datasets can only be installed using pip and...
https://api.github.com/repos/huggingface/datasets/issues/1713/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1712
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1712/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1712/comments
https://api.github.com/repos/huggingface/datasets/issues/1712/events
https://github.com/huggingface/datasets/pull/1712
782,313,097
MDExOlB1bGxSZXF1ZXN0NTUxODkxMDk4
1,712
Silicone
{ "login": "eusip", "id": 1551356, "node_id": "MDQ6VXNlcjE1NTEzNTY=", "avatar_url": "https://avatars.githubusercontent.com/u/1551356?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eusip", "html_url": "https://github.com/eusip", "followers_url": "https://api.github.com/users/eusip/follower...
[]
closed
false
null
[]
null
[ "When should we expect to see our dataset appear in the search dropdown at huggingface.co?", "Hi @eusip,\r\n\r\n> When should we expect to see our dataset appear in the search dropdown at huggingface.co?\r\n\r\nwhen this PR is merged.", "Thanks!", "I've implemented all the changes requested by @lhoestq but I ...
1,610,130,258,000
1,611,238,357,000
1,611,225,071,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1712", "html_url": "https://github.com/huggingface/datasets/pull/1712", "diff_url": "https://github.com/huggingface/datasets/pull/1712.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1712.patch" }
My collaborators and I within the Affective Computing team at Telecom Paris would like to push our spoken dialogue dataset for publication.
https://api.github.com/repos/huggingface/datasets/issues/1712/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1711
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1711/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1711/comments
https://api.github.com/repos/huggingface/datasets/issues/1711/events
https://github.com/huggingface/datasets/pull/1711
782,129,083
MDExOlB1bGxSZXF1ZXN0NTUxNzQxODA2
1,711
Fix windows path scheme in cached path
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,610,113,556,000
1,610,357,000,000
1,610,356,999,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1711", "html_url": "https://github.com/huggingface/datasets/pull/1711", "diff_url": "https://github.com/huggingface/datasets/pull/1711.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1711.patch" }
As noticed in #807 there's currently an issue with `cached_path` not raising `FileNotFoundError` on windows for absolute paths. This is due to the way we check for a path to be local or not. The check on the scheme using urlparse was incomplete. I fixed this and added tests
https://api.github.com/repos/huggingface/datasets/issues/1711/timeline
null
true
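PR 1711 above fixes the scheme check used by `cached_path` to decide whether a path is local. A quick illustration of the underlying pitfall: `urlparse` treats the drive letter of a Windows absolute path as a URL scheme:

```python
from urllib.parse import urlparse

# A Windows absolute path parses with its drive letter as the "scheme", so a check
# that only looks for an empty scheme misclassifies such paths as remote URLs.
print(urlparse("C:\\Users\\me\\data\\file.txt").scheme)  # "c"
print(urlparse("https://example.com/file.txt").scheme)   # "https"
print(urlparse("/home/me/data/file.txt").scheme)         # ""
```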
https://api.github.com/repos/huggingface/datasets/issues/1710
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1710/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1710/comments
https://api.github.com/repos/huggingface/datasets/issues/1710/events
https://github.com/huggingface/datasets/issues/1710
781,914,951
MDU6SXNzdWU3ODE5MTQ5NTE=
1,710
IsADirectoryError when trying to download C4
{ "login": "fredriko", "id": 5771366, "node_id": "MDQ6VXNlcjU3NzEzNjY=", "avatar_url": "https://avatars.githubusercontent.com/u/5771366?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fredriko", "html_url": "https://github.com/fredriko", "followers_url": "https://api.github.com/users/fredr...
[]
open
false
null
[]
null
[ "I haven't tested C4 on my side so there so there may be a few bugs in the code/adjustments to make.\r\nHere it looks like in c4.py, line 190 one of the `files_to_download` is `'/'` which is invalid.\r\nValid files are paths to local files or URLs to remote files." ]
1,610,091,090,000
1,610,531,053,000
null
NONE
null
null
**TLDR**: I fail to download C4 and see a stacktrace originating in `IsADirectoryError` as an explanation for failure. How can the problem be fixed? **VERBOSE**: I use Python version 3.7 and have the following dependencies listed in my project: ``` datasets==1.2.0 apache-beam==2.26.0 ``` When runn...
https://api.github.com/repos/huggingface/datasets/issues/1710/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1709
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1709/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1709/comments
https://api.github.com/repos/huggingface/datasets/issues/1709/events
https://github.com/huggingface/datasets/issues/1709
781,875,640
MDU6SXNzdWU3ODE4NzU2NDA=
1,709
Databases
{ "login": "JimmyJim1", "id": 68724553, "node_id": "MDQ6VXNlcjY4NzI0NTUz", "avatar_url": "https://avatars.githubusercontent.com/u/68724553?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JimmyJim1", "html_url": "https://github.com/JimmyJim1", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
[]
1,610,086,443,000
1,610,096,408,000
1,610,096,408,000
NONE
null
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...
https://api.github.com/repos/huggingface/datasets/issues/1709/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1708
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1708/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1708/comments
https://api.github.com/repos/huggingface/datasets/issues/1708/events
https://github.com/huggingface/datasets/issues/1708
781,631,455
MDU6SXNzdWU3ODE2MzE0NTU=
1,708
<html dir="ltr" lang="en" class="focus-outline-visible"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
{ "login": "Louiejay54", "id": 77126849, "node_id": "MDQ6VXNlcjc3MTI2ODQ5", "avatar_url": "https://avatars.githubusercontent.com/u/77126849?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Louiejay54", "html_url": "https://github.com/Louiejay54", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[]
1,610,055,924,000
1,610,096,401,000
1,610,096,401,000
NONE
null
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...
https://api.github.com/repos/huggingface/datasets/issues/1708/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1707
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1707/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1707/comments
https://api.github.com/repos/huggingface/datasets/issues/1707/events
https://github.com/huggingface/datasets/pull/1707
781,507,545
MDExOlB1bGxSZXF1ZXN0NTUxMjE5MDk2
1,707
Added generated READMEs for datasets that were missing one.
{ "login": "madlag", "id": 272253, "node_id": "MDQ6VXNlcjI3MjI1Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/272253?v=4", "gravatar_id": "", "url": "https://api.github.com/users/madlag", "html_url": "https://github.com/madlag", "followers_url": "https://api.github.com/users/madlag/follow...
[]
closed
false
null
[]
null
[ "Looks like we need to trim the ones with too many configs, will look into it tomorrow!" ]
1,610,043,006,000
1,610,980,353,000
1,610,980,353,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1707", "html_url": "https://github.com/huggingface/datasets/pull/1707", "diff_url": "https://github.com/huggingface/datasets/pull/1707.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1707.patch" }
This is it: we worked on a generator with Yacine @yjernite , and we generated dataset cards for all missing ones (161), with all the information we could gather from datasets repository, and using dummy_data to generate examples when possible. Code is available here for the moment: https://github.com/madlag/datasets...
https://api.github.com/repos/huggingface/datasets/issues/1707/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1706
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1706/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1706/comments
https://api.github.com/repos/huggingface/datasets/issues/1706/events
https://github.com/huggingface/datasets/issues/1706
781,494,476
MDU6SXNzdWU3ODE0OTQ0NzY=
1,706
Error when downloading a large dataset on slow connection.
{ "login": "lucadiliello", "id": 23355969, "node_id": "MDQ6VXNlcjIzMzU1OTY5", "avatar_url": "https://avatars.githubusercontent.com/u/23355969?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lucadiliello", "html_url": "https://github.com/lucadiliello", "followers_url": "https://api.github.c...
[]
open
false
null
[]
null
[ "Hi ! Is this an issue you have with `openwebtext` specifically or also with other datasets ?\r\n\r\nIt looks like the downloaded file is corrupted and can't be extracted using `tarfile`.\r\nCould you try loading it again with \r\n```python\r\nimport datasets\r\ndatasets.load_dataset(\"openwebtext\", download_mode=...
1,610,041,695,000
1,610,534,102,000
null
CONTRIBUTOR
null
null
I receive the following error after about an hour trying to download the `openwebtext` dataset. The code used is: ```python import datasets datasets.load_dataset("openwebtext") ``` > Traceback (most recent call last): ...
https://api.github.com/repos/huggingface/datasets/issues/1706/timeline
null
false
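Issue 1706 above fails while extracting `openwebtext` after a long download, and the reply suspects a corrupted archive and suggests forcing a fresh download. A sketch of that retry:

```python
from datasets import load_dataset

# Discard the possibly corrupted archive and download it again from scratch,
# as suggested in the reply above.
dataset = load_dataset("openwebtext", download_mode="force_redownload")
```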
https://api.github.com/repos/huggingface/datasets/issues/1705
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1705/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1705/comments
https://api.github.com/repos/huggingface/datasets/issues/1705/events
https://github.com/huggingface/datasets/pull/1705
781,474,949
MDExOlB1bGxSZXF1ZXN0NTUxMTkyMTc4
1,705
Add information about caching and verifications in "Load a Dataset" docs
{ "login": "SBrandeis", "id": 33657802, "node_id": "MDQ6VXNlcjMzNjU3ODAy", "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SBrandeis", "html_url": "https://github.com/SBrandeis", "followers_url": "https://api.github.com/users/...
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[]
1,610,039,924,000
1,610,460,481,000
1,610,460,481,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1705", "html_url": "https://github.com/huggingface/datasets/pull/1705", "diff_url": "https://github.com/huggingface/datasets/pull/1705.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1705.patch" }
Related to #215. Missing improvements from @lhoestq's #1703.
https://api.github.com/repos/huggingface/datasets/issues/1705/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1704
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1704/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1704/comments
https://api.github.com/repos/huggingface/datasets/issues/1704/events
https://github.com/huggingface/datasets/pull/1704
781,402,757
MDExOlB1bGxSZXF1ZXN0NTUxMTMyNDI1
1,704
Update XSUM Factuality DatasetCard
{ "login": "vineeths96", "id": 50873201, "node_id": "MDQ6VXNlcjUwODczMjAx", "avatar_url": "https://avatars.githubusercontent.com/u/50873201?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vineeths96", "html_url": "https://github.com/vineeths96", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[]
1,610,033,834,000
1,610,458,204,000
1,610,458,204,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1704", "html_url": "https://github.com/huggingface/datasets/pull/1704", "diff_url": "https://github.com/huggingface/datasets/pull/1704.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1704.patch" }
Update XSUM Factuality DatasetCard
https://api.github.com/repos/huggingface/datasets/issues/1704/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1703
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1703/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1703/comments
https://api.github.com/repos/huggingface/datasets/issues/1703/events
https://github.com/huggingface/datasets/pull/1703
781,395,146
MDExOlB1bGxSZXF1ZXN0NTUxMTI2MjA5
1,703
Improvements regarding caching and fingerprinting
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[ "I few comments here for discussion:\r\n- I'm not convinced yet the end user should really have to understand the difference between \"caching\" and 'fingerprinting\", what do you think? I think fingerprinting should probably stay as an internal thing. Is there a case where we want cahing without fingerprinting or ...
1,610,033,189,000
1,611,077,531,000
1,611,077,530,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1703", "html_url": "https://github.com/huggingface/datasets/pull/1703", "diff_url": "https://github.com/huggingface/datasets/pull/1703.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1703.patch" }
This PR adds these features: - Enable/disable caching If disabled, the library will no longer reload cached datasets files when applying transforms to the datasets. It is equivalent to setting `load_from_cache` to `False` in dataset transforms. ```python from datasets import set_caching_enabled set_cach...
https://api.github.com/repos/huggingface/datasets/issues/1703/timeline
null
true
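The usage example in the body of PR 1703 above is cut off. A short sketch of the global caching switch it describes, assuming the `set_caching_enabled` function shown there:

```python
from datasets import Dataset, set_caching_enabled

# Globally disable caching: dataset transforms no longer write or reload cache
# files, which is equivalent to disabling load_from_cache_file on each transform.
set_caching_enabled(False)

ds = Dataset.from_dict({"x": [1, 2, 3]})
ds = ds.map(lambda example: {"y": example["x"] * 2})  # recomputed on every run

# Restore the default behaviour afterwards.
set_caching_enabled(True)
```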
https://api.github.com/repos/huggingface/datasets/issues/1702
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1702/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1702/comments
https://api.github.com/repos/huggingface/datasets/issues/1702/events
https://github.com/huggingface/datasets/pull/1702
781,383,277
MDExOlB1bGxSZXF1ZXN0NTUxMTE2NDc0
1,702
Fix importlib metdata import in py38
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,610,032,230,000
1,610,102,835,000
1,610,102,835,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1702", "html_url": "https://github.com/huggingface/datasets/pull/1702", "diff_url": "https://github.com/huggingface/datasets/pull/1702.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1702.patch" }
In Python 3.8 there's no need to install `importlib_metadata` since it already exists as `importlib.metadata` in the standard lib.
https://api.github.com/repos/huggingface/datasets/issues/1702/timeline
null
true
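PR 1702 above notes that on Python 3.8 the standard library already provides `importlib.metadata`, so the `importlib_metadata` backport is only needed on older interpreters. A common version-guarded import pattern along those lines (a sketch, not necessarily the exact change in the PR):

```python
import sys

# Prefer the standard-library module on Python 3.8+, fall back to the backport.
if sys.version_info >= (3, 8):
    import importlib.metadata as importlib_metadata
else:
    import importlib_metadata  # requires the importlib_metadata package

print(importlib_metadata.version("datasets"))
```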
https://api.github.com/repos/huggingface/datasets/issues/1701
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1701/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1701/comments
https://api.github.com/repos/huggingface/datasets/issues/1701/events
https://github.com/huggingface/datasets/issues/1701
781,345,717
MDU6SXNzdWU3ODEzNDU3MTc=
1,701
Some datasets miss dataset_infos.json or dummy_data.zip
{ "login": "madlag", "id": 272253, "node_id": "MDQ6VXNlcjI3MjI1Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/272253?v=4", "gravatar_id": "", "url": "https://api.github.com/users/madlag", "html_url": "https://github.com/madlag", "followers_url": "https://api.github.com/users/madlag/follow...
[]
open
false
null
[]
null
[ "Thanks for reporting.\r\nWe should indeed add all the missing dummy_data.zip and also the dataset_infos.json at least for lm1b, reclor and wikihow.\r\n\r\nFor c4 I haven't tested the script and I think we'll require some optimizations regarding beam datasets before processing it.\r\n" ]
1,610,029,033,000
1,610,458,846,000
null
MEMBER
null
null
While working on dataset REAME generation script at https://github.com/madlag/datasets_readme_generator , I noticed that some datasets miss a dataset_infos.json : ``` c4 lm1b reclor wikihow ``` And some does not have a dummy_data.zip : ``` kor_nli math_dataset mlqa ms_marco newsgroup qa4mre qanga...
https://api.github.com/repos/huggingface/datasets/issues/1701/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1700
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1700/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1700/comments
https://api.github.com/repos/huggingface/datasets/issues/1700/events
https://github.com/huggingface/datasets/pull/1700
781,333,589
MDExOlB1bGxSZXF1ZXN0NTUxMDc1NTg2
1,700
Update Curiosity dialogs DatasetCard
{ "login": "vineeths96", "id": 50873201, "node_id": "MDQ6VXNlcjUwODczMjAx", "avatar_url": "https://avatars.githubusercontent.com/u/50873201?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vineeths96", "html_url": "https://github.com/vineeths96", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[]
1,610,027,967,000
1,610,477,492,000
1,610,477,492,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1700", "html_url": "https://github.com/huggingface/datasets/pull/1700", "diff_url": "https://github.com/huggingface/datasets/pull/1700.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1700.patch" }
Update Curiosity dialogs DatasetCard There are some entries in the data fields section yet to be filled. There is little information regarding those fields.
https://api.github.com/repos/huggingface/datasets/issues/1700/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1699
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1699/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1699/comments
https://api.github.com/repos/huggingface/datasets/issues/1699/events
https://github.com/huggingface/datasets/pull/1699
781,271,558
MDExOlB1bGxSZXF1ZXN0NTUxMDIzODE5
1,699
Update DBRD dataset card and download URL
{ "login": "benjaminvdb", "id": 8875786, "node_id": "MDQ6VXNlcjg4NzU3ODY=", "avatar_url": "https://avatars.githubusercontent.com/u/8875786?v=4", "gravatar_id": "", "url": "https://api.github.com/users/benjaminvdb", "html_url": "https://github.com/benjaminvdb", "followers_url": "https://api.github.com/us...
[]
closed
false
null
[]
null
[ "not sure why the CI was not triggered though" ]
1,610,021,803,000
1,610,026,899,000
1,610,026,859,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1699", "html_url": "https://github.com/huggingface/datasets/pull/1699", "diff_url": "https://github.com/huggingface/datasets/pull/1699.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1699.patch" }
I've added the Dutch Bood Review Dataset (DBRD) during the recent sprint. This pull request makes two minor changes: 1. I'm changing the download URL from Google Drive to the dataset's GitHub release package. This is now possible because of PR #1316. 2. I've updated the dataset card. Cheers! 😄
https://api.github.com/repos/huggingface/datasets/issues/1699/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1698
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1698/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1698/comments
https://api.github.com/repos/huggingface/datasets/issues/1698/events
https://github.com/huggingface/datasets/pull/1698
781,152,561
MDExOlB1bGxSZXF1ZXN0NTUwOTI0ODQ3
1,698
Update Coached Conv Pref DatasetCard
{ "login": "vineeths96", "id": 50873201, "node_id": "MDQ6VXNlcjUwODczMjAx", "avatar_url": "https://avatars.githubusercontent.com/u/50873201?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vineeths96", "html_url": "https://github.com/vineeths96", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "Really cool!\r\n\r\nCan you add some task tags for `dialogue-modeling` (under `sequence-modeling`) and `parsing` (under `structured-prediction`)?" ]
1,610,010,436,000
1,610,125,473,000
1,610,125,472,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1698", "html_url": "https://github.com/huggingface/datasets/pull/1698", "diff_url": "https://github.com/huggingface/datasets/pull/1698.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1698.patch" }
Update Coached Conversation Preferance DatasetCard
https://api.github.com/repos/huggingface/datasets/issues/1698/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1697
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1697/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1697/comments
https://api.github.com/repos/huggingface/datasets/issues/1697/events
https://github.com/huggingface/datasets/pull/1697
781,126,579
MDExOlB1bGxSZXF1ZXN0NTUwOTAzNzI5
1,697
Update DialogRE DatasetCard
{ "login": "vineeths96", "id": 50873201, "node_id": "MDQ6VXNlcjUwODczMjAx", "avatar_url": "https://avatars.githubusercontent.com/u/50873201?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vineeths96", "html_url": "https://github.com/vineeths96", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "Same as #1698, can you add a task tag for dialogue-modeling (under sequence-modeling) :) ?" ]
1,610,007,753,000
1,610,026,468,000
1,610,026,468,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1697", "html_url": "https://github.com/huggingface/datasets/pull/1697", "diff_url": "https://github.com/huggingface/datasets/pull/1697.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1697.patch" }
Update the information in the dataset card for the Dialog RE dataset.
https://api.github.com/repos/huggingface/datasets/issues/1697/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1696
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1696/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1696/comments
https://api.github.com/repos/huggingface/datasets/issues/1696/events
https://github.com/huggingface/datasets/issues/1696
781,096,918
MDU6SXNzdWU3ODEwOTY5MTg=
1,696
Unable to install datasets
{ "login": "glee2429", "id": 12635475, "node_id": "MDQ6VXNlcjEyNjM1NDc1", "avatar_url": "https://avatars.githubusercontent.com/u/12635475?v=4", "gravatar_id": "", "url": "https://api.github.com/users/glee2429", "html_url": "https://github.com/glee2429", "followers_url": "https://api.github.com/users/gle...
[]
closed
false
null
[]
null
[ "Maybe try to create a virtual env with python 3.8 or 3.7", "Thanks, @thomwolf! I fixed the issue by downgrading python to 3.7. ", "Damn sorry", "Damn sorry" ]
1,610,004,277,000
1,610,065,985,000
1,610,057,165,000
NONE
null
null
** Edit ** I believe there's a bug with the package when you're installing it with Python 3.9. I recommend sticking with previous versions. Thanks, @thomwolf for the insight! **Short description** I followed the instructions for installing datasets (https://huggingface.co/docs/datasets/installation.html). Howev...
https://api.github.com/repos/huggingface/datasets/issues/1696/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1695
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1695/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1695/comments
https://api.github.com/repos/huggingface/datasets/issues/1695/events
https://github.com/huggingface/datasets/pull/1695
780,971,987
MDExOlB1bGxSZXF1ZXN0NTUwNzc1OTU4
1,695
fix ner_tag bugs in thainer
{ "login": "cstorm125", "id": 15519308, "node_id": "MDQ6VXNlcjE1NTE5MzA4", "avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cstorm125", "html_url": "https://github.com/cstorm125", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
[ "> Thanks :)\r\n> \r\n> Apparently the dummy_data.zip got removed. Is this expected ?\r\n> Also can you remove the `data-pos.conll` file that you added ?\r\n\r\nNot expected. I forgot to remove the `dummy_data` folder used to create `dummy_data.zip`. \r\nChanged to only `dummy_data.zip`." ]
1,609,985,553,000
1,610,030,625,000
1,610,030,608,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1695", "html_url": "https://github.com/huggingface/datasets/pull/1695", "diff_url": "https://github.com/huggingface/datasets/pull/1695.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1695.patch" }
fix bug that results in `ner_tag` always equal to 'O'.
https://api.github.com/repos/huggingface/datasets/issues/1695/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1694
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1694/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1694/comments
https://api.github.com/repos/huggingface/datasets/issues/1694/events
https://github.com/huggingface/datasets/pull/1694
780,429,080
MDExOlB1bGxSZXF1ZXN0NTUwMzI0Mjcx
1,694
Add OSCAR
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[ "Hi @lhoestq, on the OSCAR dataset, the document boundaries are defined by an empty line. Are there any chances to keep this empty line or explicitly group the sentences of a document? I'm asking for this 'cause I need to know if some sentences belong to the same document on my current OSCAR dataset usage.", "Ind...
1,609,928,468,000
1,611,565,833,000
1,611,565,832,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1694", "html_url": "https://github.com/huggingface/datasets/pull/1694", "diff_url": "https://github.com/huggingface/datasets/pull/1694.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1694.patch" }
Continuation of #348 The files have been moved to S3 and only the unshuffled version is available. Both original and deduplicated versions of each language are available. Example of usage: ```python from datasets import load_dataset oscar_dedup_en = load_dataset("oscar", "unshuffled_deduplicated_en", split="...
https://api.github.com/repos/huggingface/datasets/issues/1694/timeline
null
true
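The usage example in the body of PR 1694 above is truncated. A minimal sketch of loading one OSCAR configuration named there (note that even this single-language, deduplicated config is a very large download):

```python
from datasets import load_dataset

# Load the unshuffled, deduplicated English portion of OSCAR, as in the PR body.
oscar_dedup_en = load_dataset("oscar", "unshuffled_deduplicated_en", split="train")
print(oscar_dedup_en[0]["text"][:200])
```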
https://api.github.com/repos/huggingface/datasets/issues/1693
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1693/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1693/comments
https://api.github.com/repos/huggingface/datasets/issues/1693/events
https://github.com/huggingface/datasets/pull/1693
780,268,595
MDExOlB1bGxSZXF1ZXN0NTUwMTc3MDEx
1,693
Fix reuters metadata parsing errors
{ "login": "jbragg", "id": 2238344, "node_id": "MDQ6VXNlcjIyMzgzNDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jbragg", "html_url": "https://github.com/jbragg", "followers_url": "https://api.github.com/users/jbragg/foll...
[]
closed
false
null
[]
null
[]
1,609,921,563,000
1,610,063,627,000
1,610,028,082,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1693", "html_url": "https://github.com/huggingface/datasets/pull/1693", "diff_url": "https://github.com/huggingface/datasets/pull/1693.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1693.patch" }
Was missing the last entry in each metadata category
https://api.github.com/repos/huggingface/datasets/issues/1693/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1691
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1691/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1691/comments
https://api.github.com/repos/huggingface/datasets/issues/1691/events
https://github.com/huggingface/datasets/pull/1691
779,882,271
MDExOlB1bGxSZXF1ZXN0NTQ5ODE3NTM0
1,691
Updated HuggingFace Datasets README (fix typos)
{ "login": "8bitmp3", "id": 19637339, "node_id": "MDQ6VXNlcjE5NjM3MzM5", "avatar_url": "https://avatars.githubusercontent.com/u/19637339?v=4", "gravatar_id": "", "url": "https://api.github.com/users/8bitmp3", "html_url": "https://github.com/8bitmp3", "followers_url": "https://api.github.com/users/8bitmp...
[]
closed
false
null
[]
null
[]
1,609,899,278,000
1,610,839,847,000
1,610,013,992,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1691", "html_url": "https://github.com/huggingface/datasets/pull/1691", "diff_url": "https://github.com/huggingface/datasets/pull/1691.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1691.patch" }
Awesome work on 🤗 Datasets. I found a couple of small typos in the README. Hope this helps. ![](https://emojipedia-us.s3.dualstack.us-west-1.amazonaws.com/thumbs/160/google/56/hugging-face_1f917.png)
https://api.github.com/repos/huggingface/datasets/issues/1691/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1690
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1690/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1690/comments
https://api.github.com/repos/huggingface/datasets/issues/1690/events
https://github.com/huggingface/datasets/pull/1690
779,441,631
MDExOlB1bGxSZXF1ZXN0NTQ5NDEwOTgw
1,690
Fast start up
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,609,873,673,000
1,609,942,859,000
1,609,942,858,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1690", "html_url": "https://github.com/huggingface/datasets/pull/1690", "diff_url": "https://github.com/huggingface/datasets/pull/1690.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1690.patch" }
Currently, if optional dependencies such as tensorflow, torch, apache_beam, faiss and elasticsearch are installed, it takes a long time to do `import datasets` since it imports all of these heavy dependencies. To make `datasets` start up fast, I changed that so that they are not imported when `datasets` is ...
https://api.github.com/repos/huggingface/datasets/issues/1690/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1689
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1689/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1689/comments
https://api.github.com/repos/huggingface/datasets/issues/1689/events
https://github.com/huggingface/datasets/pull/1689
779,107,313
MDExOlB1bGxSZXF1ZXN0NTQ5MTEwMDgw
1,689
Fix ade_corpus_v2 config names
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,609,857,208,000
1,609,858,509,000
1,609,858,508,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1689", "html_url": "https://github.com/huggingface/datasets/pull/1689", "diff_url": "https://github.com/huggingface/datasets/pull/1689.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1689.patch" }
There are currently some typos in the config names of the `ade_corpus_v2` dataset, I fixed them: - Ade_corpos_v2_classificaion -> Ade_corpus_v2_classification - Ade_corpos_v2_drug_ade_relation -> Ade_corpus_v2_drug_ade_relation - Ade_corpos_v2_drug_dosage_relation -> Ade_corpus_v2_drug_dosage_relation
https://api.github.com/repos/huggingface/datasets/issues/1689/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1688
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1688/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1688/comments
https://api.github.com/repos/huggingface/datasets/issues/1688/events
https://github.com/huggingface/datasets/pull/1688
779,029,685
MDExOlB1bGxSZXF1ZXN0NTQ5MDM5ODg0
1,688
Fix DaNE last example
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,609,853,377,000
1,609,855,215,000
1,609,855,213,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1688", "html_url": "https://github.com/huggingface/datasets/pull/1688", "diff_url": "https://github.com/huggingface/datasets/pull/1688.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1688.patch" }
The last example from the DaNE dataset is empty. Fix #1686
https://api.github.com/repos/huggingface/datasets/issues/1688/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1687
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1687/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1687/comments
https://api.github.com/repos/huggingface/datasets/issues/1687/events
https://github.com/huggingface/datasets/issues/1687
779,004,894
MDU6SXNzdWU3NzkwMDQ4OTQ=
1,687
Question: Shouldn't .info be a part of DatasetDict?
{ "login": "KennethEnevoldsen", "id": 23721977, "node_id": "MDQ6VXNlcjIzNzIxOTc3", "avatar_url": "https://avatars.githubusercontent.com/u/23721977?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KennethEnevoldsen", "html_url": "https://github.com/KennethEnevoldsen", "followers_url": "https...
[]
open
false
null
[]
null
[ "We could do something. There is a part of `.info` which is split specific (cache files, split instructions) but maybe if could be made to work.", "Yes this was kinda the idea I was going for. DatasetDict.info would be the shared info amongs the datasets (maybe even some info on how they differ). " ]
1,609,852,121,000
1,610,014,686,000
null
CONTRIBUTOR
null
null
Currently, only `Dataset` contains the .info or .features, but many datasets contain standard splits (train, test) and thus the underlying information is the same (or at least should be) across the datasets. For instance: ``` >>> ds = datasets.load_dataset("conll2002", "es") >>> ds.info Traceback (most rece...
https://api.github.com/repos/huggingface/datasets/issues/1687/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1686
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1686/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1686/comments
https://api.github.com/repos/huggingface/datasets/issues/1686/events
https://github.com/huggingface/datasets/issues/1686
778,921,684
MDU6SXNzdWU3Nzg5MjE2ODQ=
1,686
Dataset Error: DaNE contains empty samples at the end
{ "login": "KennethEnevoldsen", "id": 23721977, "node_id": "MDQ6VXNlcjIzNzIxOTc3", "avatar_url": "https://avatars.githubusercontent.com/u/23721977?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KennethEnevoldsen", "html_url": "https://github.com/KennethEnevoldsen", "followers_url": "https...
[]
closed
false
null
[]
null
[ "Thanks for reporting, I opened a PR to fix that", "One the PR is merged the fix will be available in the next release of `datasets`.\r\n\r\nIf you don't want to wait the next release you can still load the script from the master branch with\r\n\r\n```python\r\nload_dataset(\"dane\", script_version=\"master\")\r\...
1,609,847,666,000
1,609,855,269,000
1,609,855,213,000
CONTRIBUTOR
null
null
The dataset DaNE contains empty samples at the end. They are naturally easy to remove using a filter, but they should probably not be there to begin with, as they can cause errors. ```python >>> import datasets [...] >>> dataset = datasets.load_dataset("dane") [...] >>> dataset["test"][-1] {'dep_ids': [], 'dep_labels': ...
https://api.github.com/repos/huggingface/datasets/issues/1686/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1685
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1685/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1685/comments
https://api.github.com/repos/huggingface/datasets/issues/1685/events
https://github.com/huggingface/datasets/pull/1685
778,914,431
MDExOlB1bGxSZXF1ZXN0NTQ4OTM1MzY2
1,685
Update README.md of covid-tweets-japanese
{ "login": "forest1988", "id": 2755894, "node_id": "MDQ6VXNlcjI3NTU4OTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4", "gravatar_id": "", "url": "https://api.github.com/users/forest1988", "html_url": "https://github.com/forest1988", "followers_url": "https://api.github.com/users...
[]
closed
false
null
[]
null
[ "Thanks for reviewing and merging!" ]
1,609,847,247,000
1,609,928,832,000
1,609,925,470,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1685", "html_url": "https://github.com/huggingface/datasets/pull/1685", "diff_url": "https://github.com/huggingface/datasets/pull/1685.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1685.patch" }
Update README.md of covid-tweets-japanese added by PR https://github.com/huggingface/datasets/pull/1367 and https://github.com/huggingface/datasets/pull/1402. - Update "Data Splits" to be more precise that no information is provided for now. - old: [More Information Needed] - new: No information about data spl...
https://api.github.com/repos/huggingface/datasets/issues/1685/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1684
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1684/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1684/comments
https://api.github.com/repos/huggingface/datasets/issues/1684/events
https://github.com/huggingface/datasets/pull/1684
778,356,196
MDExOlB1bGxSZXF1ZXN0NTQ4NDU3NDY1
1,684
Add CANER Corpus
{ "login": "KMFODA", "id": 35491698, "node_id": "MDQ6VXNlcjM1NDkxNjk4", "avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KMFODA", "html_url": "https://github.com/KMFODA", "followers_url": "https://api.github.com/users/KMFODA/fo...
[]
closed
false
null
[]
null
[]
1,609,793,351,000
1,611,565,760,000
1,611,565,760,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1684", "html_url": "https://github.com/huggingface/datasets/pull/1684", "diff_url": "https://github.com/huggingface/datasets/pull/1684.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1684.patch" }
What does this PR do? Adds the following dataset: https://github.com/RamziSalah/Classical-Arabic-Named-Entity-Recognition-Corpus Who can review? @lhoestq
https://api.github.com/repos/huggingface/datasets/issues/1684/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1683
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1683/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1683/comments
https://api.github.com/repos/huggingface/datasets/issues/1683/events
https://github.com/huggingface/datasets/issues/1683
778,287,612
MDU6SXNzdWU3NzgyODc2MTI=
1,683
`ArrowInvalid` occurs while running `Dataset.map()` function for DPRContext
{ "login": "abarbosa94", "id": 6608232, "node_id": "MDQ6VXNlcjY2MDgyMzI=", "avatar_url": "https://avatars.githubusercontent.com/u/6608232?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abarbosa94", "html_url": "https://github.com/abarbosa94", "followers_url": "https://api.github.com/users...
[]
closed
false
null
[]
null
[ "Looks like the mapping function returns a dictionary with a 768-dim array in the `embeddings` field. Since the map is batched, we actually expect the `embeddings` field to be an array of shape (batch_size, 768) to have one embedding per example in the batch.\r\n\r\nTo fix that can you try to remove one of the `[0]...
1,609,786,073,000
1,609,787,085,000
1,609,787,085,000
CONTRIBUTOR
null
null
It seems to fail the final batch ): steps to reproduce: ``` from datasets import load_dataset from elasticsearch import Elasticsearch import torch from transformers import file_utils, set_seed from transformers import DPRContextEncoder, DPRContextEncoderTokenizerFast MAX_SEQ_LENGTH = 256 ctx_encoder = DPRCon...
https://api.github.com/repos/huggingface/datasets/issues/1683/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1682
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1682/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1682/comments
https://api.github.com/repos/huggingface/datasets/issues/1682/events
https://github.com/huggingface/datasets/pull/1682
778,268,156
MDExOlB1bGxSZXF1ZXN0NTQ4Mzg1NTk1
1,682
Don't use xlrd for xlsx files
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,609,783,910,000
1,609,783,994,000
1,609,783,993,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1682", "html_url": "https://github.com/huggingface/datasets/pull/1682", "diff_url": "https://github.com/huggingface/datasets/pull/1682.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1682.patch" }
Since the latest release of `xlrd` (2.0), the support for xlsx files stopped. Therefore we needed to use something else. A good alternative is `openpyxl`, which also has an integration with pandas, so we can still call `pd.read_excel`. I left the unused import of `openpyxl` in the dataset scripts to show users that ...
https://api.github.com/repos/huggingface/datasets/issues/1682/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1681
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1681/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1681/comments
https://api.github.com/repos/huggingface/datasets/issues/1681/events
https://github.com/huggingface/datasets/issues/1681
777,644,163
MDU6SXNzdWU3Nzc2NDQxNjM=
1,681
Dataset "dane" missing
{ "login": "KennethEnevoldsen", "id": 23721977, "node_id": "MDQ6VXNlcjIzNzIxOTc3", "avatar_url": "https://avatars.githubusercontent.com/u/23721977?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KennethEnevoldsen", "html_url": "https://github.com/KennethEnevoldsen", "followers_url": "https...
[]
closed
false
null
[]
null
[ "Hi @KennethEnevoldsen ,\r\nI think the issue might be that this dataset was added during the community sprint and has not been released yet. It will be available with the v2 of datasets.\r\nFor now, you should be able to load the datasets after installing the latest (master) version of datasets using pip:\r\npip i...
1,609,682,583,000
1,609,835,735,000
1,609,835,713,000
CONTRIBUTOR
null
null
The `dane` dataset appears to be missing in the latest version (1.1.3). ```python >>> import datasets >>> datasets.__version__ '1.1.3' >>> "dane" in datasets.list_datasets() True ``` As we can see it should be present, but it doesn't seem to be findable when using `load_dataset`. ```python >>> datasets.load...
https://api.github.com/repos/huggingface/datasets/issues/1681/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1680
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1680/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1680/comments
https://api.github.com/repos/huggingface/datasets/issues/1680/events
https://github.com/huggingface/datasets/pull/1680
777,623,053
MDExOlB1bGxSZXF1ZXN0NTQ3ODY4MjEw
1,680
added TurkishProductReviews dataset
{ "login": "basakbuluz", "id": 41359672, "node_id": "MDQ6VXNlcjQxMzU5Njcy", "avatar_url": "https://avatars.githubusercontent.com/u/41359672?v=4", "gravatar_id": "", "url": "https://api.github.com/users/basakbuluz", "html_url": "https://github.com/basakbuluz", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "@lhoestq, can you please review this PR?", "Thanks for the suggestions. Updates were made and dataset_infos.json file was created again." ]
1,609,674,779,000
1,609,784,135,000
1,609,784,135,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1680", "html_url": "https://github.com/huggingface/datasets/pull/1680", "diff_url": "https://github.com/huggingface/datasets/pull/1680.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1680.patch" }
This PR added the **Turkish Product Reviews Dataset, which contains 235.165 product reviews collected online: 220.284 positive and 14.881 negative reviews**. - **Repository:** [turkish-text-data](https://github.com/fthbrmnby/turkish-text-data) - **Point of Contact:** Fatih Barmanbay - @fthbrmnby
https://api.github.com/repos/huggingface/datasets/issues/1680/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1679
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1679/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1679/comments
https://api.github.com/repos/huggingface/datasets/issues/1679/events
https://github.com/huggingface/datasets/issues/1679
777,587,792
MDU6SXNzdWU3Nzc1ODc3OTI=
1,679
Can't import cc100 dataset
{ "login": "alighofrani95", "id": 14968123, "node_id": "MDQ6VXNlcjE0OTY4MTIz", "avatar_url": "https://avatars.githubusercontent.com/u/14968123?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alighofrani95", "html_url": "https://github.com/alighofrani95", "followers_url": "https://api.githu...
[]
open
false
null
[]
null
[ "cc100 was added recently, that's why it wasn't available yet.\r\n\r\nTo load it you can just update `datasets`\r\n```\r\npip install --upgrade datasets\r\n```\r\n\r\nand then you can load `cc100` with\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nlang = \"en\"\r\ndataset = load_dataset(\"cc100\", la...
1,609,657,976,000
1,609,785,698,000
null
NONE
null
null
There is some issue importing the cc100 dataset. ``` from datasets import load_dataset dataset = load_dataset("cc100") ``` FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/cc100/cc100.py During handling of the above exception, another exception occur...
https://api.github.com/repos/huggingface/datasets/issues/1679/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1678
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1678/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1678/comments
https://api.github.com/repos/huggingface/datasets/issues/1678/events
https://github.com/huggingface/datasets/pull/1678
777,567,920
MDExOlB1bGxSZXF1ZXN0NTQ3ODI4MTMy
1,678
Switchboard Dialog Act Corpus added under `datasets/swda`
{ "login": "gmihaila", "id": 22454783, "node_id": "MDQ6VXNlcjIyNDU0Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/22454783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gmihaila", "html_url": "https://github.com/gmihaila", "followers_url": "https://api.github.com/users/gmi...
[]
closed
false
null
[]
null
[ "@lhoestq Thank you for your detailed comments! I fixed everything you suggested.\r\n\r\nPlease let me know if I'm missing anything else.", "It looks like the Transcript and Utterance objects are missing, maybe we can mention it in the README ? Or just add them ? @gmihaila @bhavitvyamalik ", "Hi @lhoestq,\r\nI'...
1,609,646,021,000
1,610,129,361,000
1,609,841,195,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1678", "html_url": "https://github.com/huggingface/datasets/pull/1678", "diff_url": "https://github.com/huggingface/datasets/pull/1678.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1678.patch" }
Switchboard Dialog Act Corpus Intro: The Switchboard Dialog Act Corpus (SwDA) extends the Switchboard-1 Telephone Speech Corpus, Release 2, with turn/utterance-level dialog-act tags. The tags summarize syntactic, semantic, and pragmatic information about the associated turn. The SwDA project was undertaken at UC ...
https://api.github.com/repos/huggingface/datasets/issues/1678/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1677
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1677/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1677/comments
https://api.github.com/repos/huggingface/datasets/issues/1677/events
https://github.com/huggingface/datasets/pull/1677
777,553,383
MDExOlB1bGxSZXF1ZXN0NTQ3ODE3ODI1
1,677
Switchboard Dialog Act Corpus added under `datasets/swda`
{ "login": "gmihaila", "id": 22454783, "node_id": "MDQ6VXNlcjIyNDU0Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/22454783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gmihaila", "html_url": "https://github.com/gmihaila", "followers_url": "https://api.github.com/users/gmi...
[]
closed
false
null
[]
null
[ "Need to fix code formatting." ]
1,609,636,602,000
1,609,642,557,000
1,609,642,556,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1677", "html_url": "https://github.com/huggingface/datasets/pull/1677", "diff_url": "https://github.com/huggingface/datasets/pull/1677.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1677.patch" }
Pleased to announce that I added my first dataset, the **Switchboard Dialog Act Corpus**. I think this is an important dataset to be added since it is the only one related to dialogue act classification. Hope the pull request is ok. Wasn't able to see any special formatting for the pull request form. The Swi...
https://api.github.com/repos/huggingface/datasets/issues/1677/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1676
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1676/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1676/comments
https://api.github.com/repos/huggingface/datasets/issues/1676/events
https://github.com/huggingface/datasets/pull/1676
777,477,645
MDExOlB1bGxSZXF1ZXN0NTQ3NzY1OTY3
1,676
new version of Ted Talks IWSLT (WIT3)
{ "login": "skyprince999", "id": 9033954, "node_id": "MDQ6VXNlcjkwMzM5NTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9033954?v=4", "gravatar_id": "", "url": "https://api.github.com/users/skyprince999", "html_url": "https://github.com/skyprince999", "followers_url": "https://api.github.com...
[]
closed
false
null
[]
null
[ "> Nice thank you ! Actually as it is a translation dataset we should probably have one configuration = one language pair no ?\r\n> \r\n> Could you use the same trick for this dataset ?\r\n\r\nI was looking for this input, infact I had written a long post on the Slack channel,...(_but unfortunately due to the holid...
1,609,601,403,000
1,610,619,019,000
1,610,619,019,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1676", "html_url": "https://github.com/huggingface/datasets/pull/1676", "diff_url": "https://github.com/huggingface/datasets/pull/1676.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1676.patch" }
In the previous iteration #1608 I had used language pairs, which created 21,582 configs (109*108)! Now, the TED talks in _each language_ are a separate config, so it's cleaner with _just 109 configs_ (one for each language). Dummy files were created manually. Locally I was able to clear the `python dataset...
https://api.github.com/repos/huggingface/datasets/issues/1676/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1675
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1675/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1675/comments
https://api.github.com/repos/huggingface/datasets/issues/1675/events
https://github.com/huggingface/datasets/issues/1675
777,367,320
MDU6SXNzdWU3NzczNjczMjA=
1,675
Add the 800GB Pile dataset?
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/fo...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[ "The pile dataset would be very nice.\r\nBenchmarks show that pile trained models achieve better results than most of actually trained models", "The pile can very easily be added and adapted using this [tfds implementation](https://github.com/EleutherAI/The-Pile/blob/master/the_pile/tfds_pile.py) from the repo. \...
1,609,541,892,000
1,629,392,205,000
null
MEMBER
null
null
## Adding a Dataset - **Name:** The Pile - **Description:** The Pile is a 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality datasets combined together. See [here](https://twitter.com/nabla_theta/status/1345130408170541056?s=20) for the Twitter announcement - **Paper:*...
https://api.github.com/repos/huggingface/datasets/issues/1675/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1674
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1674/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1674/comments
https://api.github.com/repos/huggingface/datasets/issues/1674/events
https://github.com/huggingface/datasets/issues/1674
777,321,840
MDU6SXNzdWU3NzczMjE4NDA=
1,674
dutch_social can't be loaded
{ "login": "koenvandenberge", "id": 10134844, "node_id": "MDQ6VXNlcjEwMTM0ODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/10134844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/koenvandenberge", "html_url": "https://github.com/koenvandenberge", "followers_url": "https://api...
[]
open
false
null
[]
null
[ "exactly the same issue in some other datasets.\r\nDid you find any solution??\r\n", "Hi @koenvandenberge and @alighofrani95!\r\nThe datasets you're experiencing issues with were most likely added recently to the `datasets` library, meaning they have not been released yet. They will be released with the v2 of the...
1,609,522,628,000
1,609,841,821,000
null
NONE
null
null
Hi all, I'm trying to import the `dutch_social` dataset described [here](https://huggingface.co/datasets/dutch_social). However, the code that should load the data doesn't seem to be working, in particular because the corresponding files can't be found at the provided links. ``` (base) Koens-MacBook-Pro:~ koe...
https://api.github.com/repos/huggingface/datasets/issues/1674/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1673
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1673/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1673/comments
https://api.github.com/repos/huggingface/datasets/issues/1673/events
https://github.com/huggingface/datasets/issues/1673
777,263,651
MDU6SXNzdWU3NzcyNjM2NTE=
1,673
Unable to Download Hindi Wikipedia Dataset
{ "login": "aditya3498", "id": 30871963, "node_id": "MDQ6VXNlcjMwODcxOTYz", "avatar_url": "https://avatars.githubusercontent.com/u/30871963?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aditya3498", "html_url": "https://github.com/aditya3498", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "Currently this dataset is only available when the library is installed from source since it was added after the last release.\r\n\r\nWe pin the dataset version with the library version so that people can have a reproducible dataset and processing when pinning the library.\r\n\r\nWe'll see if we can provide access ...
1,609,498,373,000
1,609,842,132,000
1,609,842,132,000
NONE
null
null
I used the Dataset Library in Python to load the wikipedia dataset with the Hindi Config 20200501.hi along with something called beam_runner='DirectRunner' and it keeps giving me the error that the file is not found. I have attached the screenshot of the error and the code both. Please help me to understand how to reso...
https://api.github.com/repos/huggingface/datasets/issues/1673/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1672
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1672/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1672/comments
https://api.github.com/repos/huggingface/datasets/issues/1672/events
https://github.com/huggingface/datasets/issues/1672
777,258,941
MDU6SXNzdWU3NzcyNTg5NDE=
1,672
load_dataset hang on file_lock
{ "login": "tomacai", "id": 69860107, "node_id": "MDQ6VXNlcjY5ODYwMTA3", "avatar_url": "https://avatars.githubusercontent.com/u/69860107?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomacai", "html_url": "https://github.com/tomacai", "followers_url": "https://api.github.com/users/tomaca...
[]
closed
false
null
[]
null
[ "Can you try to upgrade to a more recent version of datasets?", "Thank, upgrading to 1.1.3 resolved the issue.", "Having the same issue with `datasets 1.1.3` of `1.5.0` (both tracebacks look the same) and `kilt_wikipedia`, Ubuntu 20.04\r\n\r\n```py\r\nIn [1]: from datasets import load_dataset ...
1,609,496,707,000
1,617,207,853,000
1,609,501,656,000
NONE
null
null
I am trying to load the squad dataset. Fails on Windows 10 but succeeds in Colab. Transformers: 3.3.1 Datasets: 1.0.2 Windows 10 (also tested in WSL) ``` datasets.logging.set_verbosity_debug() datasets. train_dataset = load_dataset('squad', split='train') valid_dataset = load_dataset('squad', split='validat...
https://api.github.com/repos/huggingface/datasets/issues/1672/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1671
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1671/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1671/comments
https://api.github.com/repos/huggingface/datasets/issues/1671/events
https://github.com/huggingface/datasets/issues/1671
776,652,193
MDU6SXNzdWU3NzY2NTIxOTM=
1,671
connection issue
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url...
[]
open
false
null
[]
null
[ "Also, mayjor issue for me is the format issue, even if I go through changing the whole code to use load_from_disk, then if I do \r\n\r\nd = datasets.load_from_disk(\"imdb\")\r\nd = d[\"train\"][:10] => the format of this is no more in datasets format\r\nthis is different from you call load_datasets(\"train[10]\")\...
1,609,365,380,000
1,609,754,391,000
null
NONE
null
null
Hi, I am getting this connection issue, resulting in large failures on cloud, @lhoestq I appreciate your help on this. If I want to keep the code the same, so not using save_to_disk, load_from_disk, but save the datasets in the way load_dataset reads from and copy the files in the same folder the datasets library r...
https://api.github.com/repos/huggingface/datasets/issues/1671/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1670
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1670/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1670/comments
https://api.github.com/repos/huggingface/datasets/issues/1670/events
https://github.com/huggingface/datasets/issues/1670
776,608,579
MDU6SXNzdWU3NzY2MDg1Nzk=
1,670
wiki_dpr pre-processing performance
{ "login": "dbarnhart", "id": 753898, "node_id": "MDQ6VXNlcjc1Mzg5OA==", "avatar_url": "https://avatars.githubusercontent.com/u/753898?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dbarnhart", "html_url": "https://github.com/dbarnhart", "followers_url": "https://api.github.com/users/dbar...
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 2067401494, "node_id": "MDU6...
open
false
null
[]
null
[ "Hi ! And thanks for the tips :) \r\n\r\nIndeed currently `wiki_dpr` takes some time to be processed.\r\nMultiprocessing for dataset generation is definitely going to speed up things.\r\n\r\nRegarding the index note that for the default configurations, the index is downloaded instead of being built, which avoid spe...
1,609,357,303,000
1,611,826,896,000
null
NONE
null
null
I've been working with wiki_dpr and noticed that the dataset processing is seriously impaired in performance [1]. It takes about 12h to process the entire dataset. Most of this time is simply loading and processing the data, but the actual indexing is also quite slow (3h). I won't repeat the concerns around multipro...
https://api.github.com/repos/huggingface/datasets/issues/1670/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1669
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1669/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1669/comments
https://api.github.com/repos/huggingface/datasets/issues/1669/events
https://github.com/huggingface/datasets/issues/1669
776,608,386
MDU6SXNzdWU3NzY2MDgzODY=
1,669
wiki_dpr dataset pre-processesing performance
{ "login": "dbarnhart", "id": 753898, "node_id": "MDQ6VXNlcjc1Mzg5OA==", "avatar_url": "https://avatars.githubusercontent.com/u/753898?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dbarnhart", "html_url": "https://github.com/dbarnhart", "followers_url": "https://api.github.com/users/dbar...
[]
closed
false
null
[]
null
[ "Sorry, double posted." ]
1,609,357,269,000
1,609,357,345,000
1,609,357,345,000
NONE
null
null
I've been working with wiki_dpr and noticed that the dataset processing is seriously impaired in performance [1]. It takes about 12h to process the entire dataset. Most of this time is simply loading and processing the data, but the actual indexing is also quite slow (3h). I won't repeat the concerns around multipro...
https://api.github.com/repos/huggingface/datasets/issues/1669/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1668
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1668/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1668/comments
https://api.github.com/repos/huggingface/datasets/issues/1668/events
https://github.com/huggingface/datasets/pull/1668
776,552,854
MDExOlB1bGxSZXF1ZXN0NTQ3MDIxODI0
1,668
xed_en_fi dataset Cleanup
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,609,348,278,000
1,609,348,964,000
1,609,348,963,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1668", "html_url": "https://github.com/huggingface/datasets/pull/1668", "diff_url": "https://github.com/huggingface/datasets/pull/1668.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1668.patch" }
Fix ClassLabel feature type and minor mistakes in the dataset card
https://api.github.com/repos/huggingface/datasets/issues/1668/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1667
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1667/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1667/comments
https://api.github.com/repos/huggingface/datasets/issues/1667/events
https://github.com/huggingface/datasets/pull/1667
776,446,658
MDExOlB1bGxSZXF1ZXN0NTQ2OTM4MjAy
1,667
Fix NER metric example in Overview notebook
{ "login": "jungwhank", "id": 53588015, "node_id": "MDQ6VXNlcjUzNTg4MDE1", "avatar_url": "https://avatars.githubusercontent.com/u/53588015?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jungwhank", "html_url": "https://github.com/jungwhank", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
[]
1,609,333,519,000
1,609,377,128,000
1,609,348,911,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1667", "html_url": "https://github.com/huggingface/datasets/pull/1667", "diff_url": "https://github.com/huggingface/datasets/pull/1667.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1667.patch" }
Fix errors in `NER metric example` section in `Overview.ipynb`. ``` --------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-37-ee559b166e25> in <module>() ----> 1 ner_metric = load_metric('seqeval') ...
https://api.github.com/repos/huggingface/datasets/issues/1667/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1666
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1666/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1666/comments
https://api.github.com/repos/huggingface/datasets/issues/1666/events
https://github.com/huggingface/datasets/pull/1666
776,432,006
MDExOlB1bGxSZXF1ZXN0NTQ2OTI2MzQw
1,666
Add language to dataset card for Makhzan dataset.
{ "login": "arkhalid", "id": 14899066, "node_id": "MDQ6VXNlcjE0ODk5MDY2", "avatar_url": "https://avatars.githubusercontent.com/u/14899066?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arkhalid", "html_url": "https://github.com/arkhalid", "followers_url": "https://api.github.com/users/ark...
[]
closed
false
null
[]
null
[]
1,609,331,152,000
1,609,348,835,000
1,609,348,835,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1666", "html_url": "https://github.com/huggingface/datasets/pull/1666", "diff_url": "https://github.com/huggingface/datasets/pull/1666.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1666.patch" }
Add language to dataset card.
https://api.github.com/repos/huggingface/datasets/issues/1666/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1665
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1665/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1665/comments
https://api.github.com/repos/huggingface/datasets/issues/1665/events
https://github.com/huggingface/datasets/pull/1665
776,431,087
MDExOlB1bGxSZXF1ZXN0NTQ2OTI1NTgw
1,665
Add language to dataset card for Counter dataset.
{ "login": "arkhalid", "id": 14899066, "node_id": "MDQ6VXNlcjE0ODk5MDY2", "avatar_url": "https://avatars.githubusercontent.com/u/14899066?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arkhalid", "html_url": "https://github.com/arkhalid", "followers_url": "https://api.github.com/users/ark...
[]
closed
false
null
[]
null
[]
1,609,331,000,000
1,609,348,820,000
1,609,348,820,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1665", "html_url": "https://github.com/huggingface/datasets/pull/1665", "diff_url": "https://github.com/huggingface/datasets/pull/1665.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1665.patch" }
Add language.
https://api.github.com/repos/huggingface/datasets/issues/1665/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1664
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1664/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1664/comments
https://api.github.com/repos/huggingface/datasets/issues/1664/events
https://github.com/huggingface/datasets/pull/1664
775,956,441
MDExOlB1bGxSZXF1ZXN0NTQ2NTM1NDcy
1,664
removed \n in labels
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.gi...
[]
closed
false
null
[]
null
[]
1,609,256,503,000
1,609,348,729,000
1,609,348,729,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1664", "html_url": "https://github.com/huggingface/datasets/pull/1664", "diff_url": "https://github.com/huggingface/datasets/pull/1664.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1664.patch" }
updated social_i_qa labels as per #1633
https://api.github.com/repos/huggingface/datasets/issues/1664/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1663
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1663/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1663/comments
https://api.github.com/repos/huggingface/datasets/issues/1663/events
https://github.com/huggingface/datasets/pull/1663
775,914,320
MDExOlB1bGxSZXF1ZXN0NTQ2NTAzMjg5
1,663
update saving and loading methods for faiss index so to accept path l…
{ "login": "tslott", "id": 11614798, "node_id": "MDQ6VXNlcjExNjE0Nzk4", "avatar_url": "https://avatars.githubusercontent.com/u/11614798?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tslott", "html_url": "https://github.com/tslott", "followers_url": "https://api.github.com/users/tslott/fo...
[]
closed
false
null
[]
null
[ "Seems ok for me, what do you think @lhoestq ?" ]
1,609,251,337,000
1,610,962,043,000
1,610,962,043,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1663", "html_url": "https://github.com/huggingface/datasets/pull/1663", "diff_url": "https://github.com/huggingface/datasets/pull/1663.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1663.patch" }
- Update saving and loading methods for the faiss index so they accept path-like objects from pathlib. The current code only supports using a string type to save and load a faiss index. This change makes it possible to use a string type OR a Path from [pathlib](https://docs.python.org/3/library/pathlib.html). The codes bec...
https://api.github.com/repos/huggingface/datasets/issues/1663/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1662
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1662/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1662/comments
https://api.github.com/repos/huggingface/datasets/issues/1662/events
https://github.com/huggingface/datasets/issues/1662
775,890,154
MDU6SXNzdWU3NzU4OTAxNTQ=
1,662
Arrow file is too large when saving vector data
{ "login": "weiwangthu", "id": 22360336, "node_id": "MDQ6VXNlcjIyMzYwMzM2", "avatar_url": "https://avatars.githubusercontent.com/u/22360336?v=4", "gravatar_id": "", "url": "https://api.github.com/users/weiwangthu", "html_url": "https://github.com/weiwangthu", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "Hi !\r\nThe arrow file size is due to the embeddings. Indeed if they're stored as float32 then the total size of the embeddings is\r\n\r\n20 000 000 vectors * 768 dimensions * 4 bytes per dimension ~= 60GB\r\n\r\nIf you want to reduce the size you can consider using quantization for example, or maybe using dimensi...
1,609,248,192,000
1,611,238,359,000
1,611,238,359,000
NONE
null
null
I computed the sentence embedding of each sentence of bookcorpus data using bert base and saved them to disk. I used 20M sentences and the obtained arrow file is about 59GB while the original text file is only about 1.3GB. Are there any ways to reduce the size of the arrow file?
https://api.github.com/repos/huggingface/datasets/issues/1662/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1661
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1661/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1661/comments
https://api.github.com/repos/huggingface/datasets/issues/1661/events
https://github.com/huggingface/datasets/pull/1661
775,840,801
MDExOlB1bGxSZXF1ZXN0NTQ2NDQzNjYx
1,661
updated dataset cards
{ "login": "Nilanshrajput", "id": 28673745, "node_id": "MDQ6VXNlcjI4NjczNzQ1", "avatar_url": "https://avatars.githubusercontent.com/u/28673745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Nilanshrajput", "html_url": "https://github.com/Nilanshrajput", "followers_url": "https://api.githu...
[]
closed
false
null
[]
null
[]
1,609,240,840,000
1,609,348,516,000
1,609,348,516,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1661", "html_url": "https://github.com/huggingface/datasets/pull/1661", "diff_url": "https://github.com/huggingface/datasets/pull/1661.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1661.patch" }
added dataset instance in the card.
https://api.github.com/repos/huggingface/datasets/issues/1661/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1660
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1660/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1660/comments
https://api.github.com/repos/huggingface/datasets/issues/1660/events
https://github.com/huggingface/datasets/pull/1660
775,831,423
MDExOlB1bGxSZXF1ZXN0NTQ2NDM2MDg1
1,660
add dataset info
{ "login": "harshalmittal4", "id": 24206326, "node_id": "MDQ6VXNlcjI0MjA2MzI2", "avatar_url": "https://avatars.githubusercontent.com/u/24206326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/harshalmittal4", "html_url": "https://github.com/harshalmittal4", "followers_url": "https://api.gi...
[]
closed
false
null
[]
null
[]
1,609,239,499,000
1,609,347,870,000
1,609,347,870,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1660", "html_url": "https://github.com/huggingface/datasets/pull/1660", "diff_url": "https://github.com/huggingface/datasets/pull/1660.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1660.patch" }
https://api.github.com/repos/huggingface/datasets/issues/1660/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1659
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1659/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1659/comments
https://api.github.com/repos/huggingface/datasets/issues/1659/events
https://github.com/huggingface/datasets/pull/1659
775,831,288
MDExOlB1bGxSZXF1ZXN0NTQ2NDM1OTcy
1,659
update dataset info
{ "login": "harshalmittal4", "id": 24206326, "node_id": "MDQ6VXNlcjI0MjA2MzI2", "avatar_url": "https://avatars.githubusercontent.com/u/24206326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/harshalmittal4", "html_url": "https://github.com/harshalmittal4", "followers_url": "https://api.gi...
[]
closed
false
null
[]
null
[]
1,609,239,481,000
1,609,347,307,000
1,609,347,307,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1659", "html_url": "https://github.com/huggingface/datasets/pull/1659", "diff_url": "https://github.com/huggingface/datasets/pull/1659.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1659.patch" }
https://api.github.com/repos/huggingface/datasets/issues/1659/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1658
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1658/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1658/comments
https://api.github.com/repos/huggingface/datasets/issues/1658/events
https://github.com/huggingface/datasets/pull/1658
775,651,085
MDExOlB1bGxSZXF1ZXN0NTQ2Mjg4Njg4
1,658
brwac dataset: add instances and data splits info
{ "login": "jonatasgrosman", "id": 5097052, "node_id": "MDQ6VXNlcjUwOTcwNTI=", "avatar_url": "https://avatars.githubusercontent.com/u/5097052?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jonatasgrosman", "html_url": "https://github.com/jonatasgrosman", "followers_url": "https://api.gith...
[]
closed
false
null
[]
null
[]
1,609,205,085,000
1,609,347,266,000
1,609,347,266,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1658", "html_url": "https://github.com/huggingface/datasets/pull/1658", "diff_url": "https://github.com/huggingface/datasets/pull/1658.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1658.patch" }
https://api.github.com/repos/huggingface/datasets/issues/1658/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1657
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1657/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1657/comments
https://api.github.com/repos/huggingface/datasets/issues/1657/events
https://github.com/huggingface/datasets/pull/1657
775,647,000
MDExOlB1bGxSZXF1ZXN0NTQ2Mjg1NjU2
1,657
mac_morpho dataset: add data splits info
{ "login": "jonatasgrosman", "id": 5097052, "node_id": "MDQ6VXNlcjUwOTcwNTI=", "avatar_url": "https://avatars.githubusercontent.com/u/5097052?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jonatasgrosman", "html_url": "https://github.com/jonatasgrosman", "followers_url": "https://api.gith...
[]
closed
false
null
[]
null
[]
1,609,203,921,000
1,609,347,084,000
1,609,347,084,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1657", "html_url": "https://github.com/huggingface/datasets/pull/1657", "diff_url": "https://github.com/huggingface/datasets/pull/1657.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1657.patch" }
https://api.github.com/repos/huggingface/datasets/issues/1657/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1656
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1656/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1656/comments
https://api.github.com/repos/huggingface/datasets/issues/1656/events
https://github.com/huggingface/datasets/pull/1656
775,645,356
MDExOlB1bGxSZXF1ZXN0NTQ2Mjg0NDI3
1,656
assin 2 dataset: add instances and data splits info
{ "login": "jonatasgrosman", "id": 5097052, "node_id": "MDQ6VXNlcjUwOTcwNTI=", "avatar_url": "https://avatars.githubusercontent.com/u/5097052?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jonatasgrosman", "html_url": "https://github.com/jonatasgrosman", "followers_url": "https://api.gith...
[]
closed
false
null
[]
null
[]
1,609,203,471,000
1,609,347,056,000
1,609,347,056,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1656", "html_url": "https://github.com/huggingface/datasets/pull/1656", "diff_url": "https://github.com/huggingface/datasets/pull/1656.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1656.patch" }
https://api.github.com/repos/huggingface/datasets/issues/1656/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1655
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1655/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1655/comments
https://api.github.com/repos/huggingface/datasets/issues/1655/events
https://github.com/huggingface/datasets/pull/1655
775,643,418
MDExOlB1bGxSZXF1ZXN0NTQ2MjgyOTM4
1,655
assin dataset: add instances and data splits info
{ "login": "jonatasgrosman", "id": 5097052, "node_id": "MDQ6VXNlcjUwOTcwNTI=", "avatar_url": "https://avatars.githubusercontent.com/u/5097052?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jonatasgrosman", "html_url": "https://github.com/jonatasgrosman", "followers_url": "https://api.gith...
[]
closed
false
null
[]
null
[]
1,609,202,876,000
1,609,347,023,000
1,609,347,023,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1655", "html_url": "https://github.com/huggingface/datasets/pull/1655", "diff_url": "https://github.com/huggingface/datasets/pull/1655.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1655.patch" }
https://api.github.com/repos/huggingface/datasets/issues/1655/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1654
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1654/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1654/comments
https://api.github.com/repos/huggingface/datasets/issues/1654/events
https://github.com/huggingface/datasets/pull/1654
775,640,729
MDExOlB1bGxSZXF1ZXN0NTQ2MjgwODIy
1,654
lener_br dataset: add instances and data splits info
{ "login": "jonatasgrosman", "id": 5097052, "node_id": "MDQ6VXNlcjUwOTcwNTI=", "avatar_url": "https://avatars.githubusercontent.com/u/5097052?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jonatasgrosman", "html_url": "https://github.com/jonatasgrosman", "followers_url": "https://api.gith...
[]
closed
false
null
[]
null
[]
1,609,202,112,000
1,609,346,972,000
1,609,346,972,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1654", "html_url": "https://github.com/huggingface/datasets/pull/1654", "diff_url": "https://github.com/huggingface/datasets/pull/1654.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1654.patch" }
https://api.github.com/repos/huggingface/datasets/issues/1654/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1653
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1653/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1653/comments
https://api.github.com/repos/huggingface/datasets/issues/1653/events
https://github.com/huggingface/datasets/pull/1653
775,632,945
MDExOlB1bGxSZXF1ZXN0NTQ2Mjc0Njc0
1,653
harem dataset: add data splits info
{ "login": "jonatasgrosman", "id": 5097052, "node_id": "MDQ6VXNlcjUwOTcwNTI=", "avatar_url": "https://avatars.githubusercontent.com/u/5097052?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jonatasgrosman", "html_url": "https://github.com/jonatasgrosman", "followers_url": "https://api.gith...
[]
closed
false
null
[]
null
[]
1,609,199,900,000
1,609,346,943,000
1,609,346,943,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1653", "html_url": "https://github.com/huggingface/datasets/pull/1653", "diff_url": "https://github.com/huggingface/datasets/pull/1653.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1653.patch" }
https://api.github.com/repos/huggingface/datasets/issues/1653/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1652
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1652/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1652/comments
https://api.github.com/repos/huggingface/datasets/issues/1652/events
https://github.com/huggingface/datasets/pull/1652
775,571,813
MDExOlB1bGxSZXF1ZXN0NTQ2MjI1NTM1
1,652
Update dataset cards from previous sprint
{ "login": "j-chim", "id": 22435209, "node_id": "MDQ6VXNlcjIyNDM1MjA5", "avatar_url": "https://avatars.githubusercontent.com/u/22435209?v=4", "gravatar_id": "", "url": "https://api.github.com/users/j-chim", "html_url": "https://github.com/j-chim", "followers_url": "https://api.github.com/users/j-chim/fo...
[]
closed
false
null
[]
null
[]
1,609,186,847,000
1,609,346,884,000
1,609,346,884,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1652", "html_url": "https://github.com/huggingface/datasets/pull/1652", "diff_url": "https://github.com/huggingface/datasets/pull/1652.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1652.patch" }
This PR updates the dataset cards/readmes for the 4 approved PRs I submitted in the previous sprint.
https://api.github.com/repos/huggingface/datasets/issues/1652/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1651
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1651/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1651/comments
https://api.github.com/repos/huggingface/datasets/issues/1651/events
https://github.com/huggingface/datasets/pull/1651
775,554,319
MDExOlB1bGxSZXF1ZXN0NTQ2MjExMjQw
1,651
Add twi wordsim353
{ "login": "dadelani", "id": 23586676, "node_id": "MDQ6VXNlcjIzNTg2Njc2", "avatar_url": "https://avatars.githubusercontent.com/u/23586676?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dadelani", "html_url": "https://github.com/dadelani", "followers_url": "https://api.github.com/users/dad...
[]
closed
false
null
[]
null
[ "Well actually it looks like it was already added in #1428 \r\n\r\nMaybe we can close this one ? Or you wanted to make changes to this dataset ?", "Thank you, it's just a modification of Readme. I added the missing citation.", "Indeed thanks" ]
1,609,183,915,000
1,609,753,179,000
1,609,753,178,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1651", "html_url": "https://github.com/huggingface/datasets/pull/1651", "diff_url": "https://github.com/huggingface/datasets/pull/1651.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1651.patch" }
Added the citation information to the README file
https://api.github.com/repos/huggingface/datasets/issues/1651/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1650
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1650/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1650/comments
https://api.github.com/repos/huggingface/datasets/issues/1650/events
https://github.com/huggingface/datasets/pull/1650
775,545,912
MDExOlB1bGxSZXF1ZXN0NTQ2MjA0MzYy
1,650
Update README.md
{ "login": "MisbahKhan789", "id": 15351802, "node_id": "MDQ6VXNlcjE1MzUxODAy", "avatar_url": "https://avatars.githubusercontent.com/u/15351802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MisbahKhan789", "html_url": "https://github.com/MisbahKhan789", "followers_url": "https://api.githu...
[]
closed
false
null
[]
null
[]
1,609,182,545,000
1,609,238,594,000
1,609,238,594,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1650", "html_url": "https://github.com/huggingface/datasets/pull/1650", "diff_url": "https://github.com/huggingface/datasets/pull/1650.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1650.patch" }
added dataset summary
https://api.github.com/repos/huggingface/datasets/issues/1650/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1649
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1649/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1649/comments
https://api.github.com/repos/huggingface/datasets/issues/1649/events
https://github.com/huggingface/datasets/pull/1649
775,544,487
MDExOlB1bGxSZXF1ZXN0NTQ2MjAzMjE1
1,649
Update README.md
{ "login": "MisbahKhan789", "id": 15351802, "node_id": "MDQ6VXNlcjE1MzUxODAy", "avatar_url": "https://avatars.githubusercontent.com/u/15351802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MisbahKhan789", "html_url": "https://github.com/MisbahKhan789", "followers_url": "https://api.githu...
[]
closed
false
null
[]
null
[]
1,609,182,300,000
1,609,239,058,000
1,609,238,583,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1649", "html_url": "https://github.com/huggingface/datasets/pull/1649", "diff_url": "https://github.com/huggingface/datasets/pull/1649.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1649.patch" }
Added information in the dataset card
https://api.github.com/repos/huggingface/datasets/issues/1649/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1648
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1648/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1648/comments
https://api.github.com/repos/huggingface/datasets/issues/1648/events
https://github.com/huggingface/datasets/pull/1648
775,542,360
MDExOlB1bGxSZXF1ZXN0NTQ2MjAxNTQ0
1,648
Update README.md
{ "login": "MisbahKhan789", "id": 15351802, "node_id": "MDQ6VXNlcjE1MzUxODAy", "avatar_url": "https://avatars.githubusercontent.com/u/15351802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MisbahKhan789", "html_url": "https://github.com/MisbahKhan789", "followers_url": "https://api.githu...
[]
closed
false
null
[]
null
[]
1,609,181,946,000
1,609,238,354,000
1,609,238,354,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1648", "html_url": "https://github.com/huggingface/datasets/pull/1648", "diff_url": "https://github.com/huggingface/datasets/pull/1648.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1648.patch" }
added dataset summary
https://api.github.com/repos/huggingface/datasets/issues/1648/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1647
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1647/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1647/comments
https://api.github.com/repos/huggingface/datasets/issues/1647/events
https://github.com/huggingface/datasets/issues/1647
775,525,799
MDU6SXNzdWU3NzU1MjU3OTk=
1,647
NarrativeQA fails to load with `load_dataset`
{ "login": "eric-mitchell", "id": 56408839, "node_id": "MDQ6VXNlcjU2NDA4ODM5", "avatar_url": "https://avatars.githubusercontent.com/u/56408839?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eric-mitchell", "html_url": "https://github.com/eric-mitchell", "followers_url": "https://api.githu...
[]
closed
false
null
[]
null
[ "Hi @eric-mitchell,\r\nI think the issue might be that this dataset was added during the community sprint and has not been released yet. It will be available with the v2 of `datasets`.\r\nFor now, you should be able to load the datasets after installing the latest (master) version of `datasets` using pip:\r\n`pip i...
1,609,179,369,000
1,609,848,308,000
1,609,696,685,000
NONE
null
null
When loading the NarrativeQA dataset with `load_dataset('narrativeqa')` as given in the documentation [here](https://huggingface.co/datasets/narrativeqa), I receive a cascade of exceptions, ending with FileNotFoundError: Couldn't find file locally at narrativeqa/narrativeqa.py, or remotely at https://r...
https://api.github.com/repos/huggingface/datasets/issues/1647/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1646
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1646/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1646/comments
https://api.github.com/repos/huggingface/datasets/issues/1646/events
https://github.com/huggingface/datasets/pull/1646
775,499,344
MDExOlB1bGxSZXF1ZXN0NTQ2MTY4MTk3
1,646
Add missing homepage in some dataset cards
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,609,175,388,000
1,609,769,337,000
1,609,769,336,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1646", "html_url": "https://github.com/huggingface/datasets/pull/1646", "diff_url": "https://github.com/huggingface/datasets/pull/1646.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1646.patch" }
In some dataset cards the homepage field in the `Dataset Description` section was missing/empty
https://api.github.com/repos/huggingface/datasets/issues/1646/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1645
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1645/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1645/comments
https://api.github.com/repos/huggingface/datasets/issues/1645/events
https://github.com/huggingface/datasets/pull/1645
775,473,106
MDExOlB1bGxSZXF1ZXN0NTQ2MTQ4OTUx
1,645
Rename "part-of-speech-tagging" tag in some dataset cards
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,609,171,749,000
1,610,014,094,000
1,610,014,093,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1645", "html_url": "https://github.com/huggingface/datasets/pull/1645", "diff_url": "https://github.com/huggingface/datasets/pull/1645.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1645.patch" }
`part-of-speech-tagging` was not part of the tagging taxonomy under `structure-prediction`
https://api.github.com/repos/huggingface/datasets/issues/1645/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1644
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1644/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1644/comments
https://api.github.com/repos/huggingface/datasets/issues/1644/events
https://github.com/huggingface/datasets/issues/1644
775,375,880
MDU6SXNzdWU3NzUzNzU4ODA=
1,644
HoVeR dataset fails to load
{ "login": "urikz", "id": 1473778, "node_id": "MDQ6VXNlcjE0NzM3Nzg=", "avatar_url": "https://avatars.githubusercontent.com/u/1473778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/urikz", "html_url": "https://github.com/urikz", "followers_url": "https://api.github.com/users/urikz/follower...
[]
open
false
null
[]
null
[ "Hover was added recently, that's why it wasn't available yet.\r\n\r\nTo load it you can just update `datasets`\r\n```\r\npip install --upgrade datasets\r\n```\r\n\r\nand then you can load `hover` with\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"hover\")\r\n```" ]
1,609,158,427,000
1,609,785,991,000
null
NONE
null
null
Hi! I'm getting an error when trying to load **HoVeR** dataset. Another one (**SQuAD**) does work for me. I'm using the latest (1.1.3) version of the library. Steps to reproduce the error: ```python >>> from datasets import load_dataset >>> dataset = load_dataset("hover") Traceback (most recent call last): ...
https://api.github.com/repos/huggingface/datasets/issues/1644/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1643
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1643/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1643/comments
https://api.github.com/repos/huggingface/datasets/issues/1643/events
https://github.com/huggingface/datasets/issues/1643
775,280,046
MDU6SXNzdWU3NzUyODAwNDY=
1,643
Dataset social_bias_frames 404
{ "login": "atemate", "id": 7501517, "node_id": "MDQ6VXNlcjc1MDE1MTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7501517?v=4", "gravatar_id": "", "url": "https://api.github.com/users/atemate", "html_url": "https://github.com/atemate", "followers_url": "https://api.github.com/users/atemate/...
[]
closed
false
null
[]
null
[ "I see, master is already fixed in https://github.com/huggingface/datasets/commit/9e058f098a0919efd03a136b9b9c3dec5076f626" ]
1,609,144,534,000
1,609,144,687,000
1,609,144,687,000
NONE
null
null
``` >>> from datasets import load_dataset >>> dataset = load_dataset("social_bias_frames") ... Downloading and preparing dataset social_bias_frames/default ... ~/.pyenv/versions/3.7.6/lib/python3.7/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, ...
https://api.github.com/repos/huggingface/datasets/issues/1643/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1642
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1642/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1642/comments
https://api.github.com/repos/huggingface/datasets/issues/1642/events
https://github.com/huggingface/datasets/pull/1642
775,159,568
MDExOlB1bGxSZXF1ZXN0NTQ1ODk1MzY1
1,642
Ollie dataset
{ "login": "ontocord", "id": 8900094, "node_id": "MDQ6VXNlcjg5MDAwOTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/8900094?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ontocord", "html_url": "https://github.com/ontocord", "followers_url": "https://api.github.com/users/ontoc...
[]
closed
false
null
[]
null
[]
1,609,123,417,000
1,609,767,325,000
1,609,767,324,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1642", "html_url": "https://github.com/huggingface/datasets/pull/1642", "diff_url": "https://github.com/huggingface/datasets/pull/1642.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1642.patch" }
This is the dataset used to train the Ollie open information extraction algorithm. It has over 21M sentences. See http://knowitall.github.io/ollie/ for more details.
https://api.github.com/repos/huggingface/datasets/issues/1642/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1641
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1641/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1641/comments
https://api.github.com/repos/huggingface/datasets/issues/1641/events
https://github.com/huggingface/datasets/issues/1641
775,110,872
MDU6SXNzdWU3NzUxMTA4NzI=
1,641
muchocine dataset cannot be dowloaded
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/...
[ { "id": 1935892913, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEz", "url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": "This will not be worked on" }, { "id": 2067388877, "node_id": "MDU6TGFi...
closed
false
null
[]
null
[ "I have encountered the same error with `v1.0.1` and `v1.0.2` on both Windows and Linux environments. However, cloning the repo and using the path to the dataset's root directory worked for me. Even after having the dataset cached - passing the path is the only way (for now) to load the dataset.\r\n\r\n```python\r\...
1,609,104,388,000
1,627,967,249,000
1,627,967,249,000
NONE
null
null
```python --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) /usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, ...
https://api.github.com/repos/huggingface/datasets/issues/1641/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1640
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1640/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1640/comments
https://api.github.com/repos/huggingface/datasets/issues/1640/events
https://github.com/huggingface/datasets/pull/1640
774,921,836
MDExOlB1bGxSZXF1ZXN0NTQ1NzI2NzY4
1,640
Fix "'BertTokenizerFast' object has no attribute 'max_len'"
{ "login": "mflis", "id": 15031715, "node_id": "MDQ6VXNlcjE1MDMxNzE1", "avatar_url": "https://avatars.githubusercontent.com/u/15031715?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mflis", "html_url": "https://github.com/mflis", "followers_url": "https://api.github.com/users/mflis/follow...
[]
closed
false
null
[]
null
[]
1,609,010,741,000
1,609,176,395,000
1,609,176,395,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1640", "html_url": "https://github.com/huggingface/datasets/pull/1640", "diff_url": "https://github.com/huggingface/datasets/pull/1640.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1640.patch" }
Tensorflow 2.3.0 gives: FutureWarning: The `max_len` attribute has been deprecated and will be removed in a future version, use `model_max_length` instead. Tensorflow 2.4.0 gives: AttributeError 'BertTokenizerFast' object has no attribute 'max_len'
https://api.github.com/repos/huggingface/datasets/issues/1640/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1639
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1639/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1639/comments
https://api.github.com/repos/huggingface/datasets/issues/1639/events
https://github.com/huggingface/datasets/issues/1639
774,903,472
MDU6SXNzdWU3NzQ5MDM0NzI=
1,639
bug with sst2 in glue
{ "login": "ghost", "id": 10137, "node_id": "MDQ6VXNlcjEwMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghost", "html_url": "https://github.com/ghost", "followers_url": "https://api.github.com/users/ghost/followers", "f...
[]
open
false
null
[]
null
[ "Maybe you can use nltk's treebank detokenizer ?\r\n```python\r\nfrom nltk.tokenize.treebank import TreebankWordDetokenizer\r\n\r\nTreebankWordDetokenizer().detokenize(\"it 's a charming and often affecting journey . \".split())\r\n# \"it's a charming and often affecting journey.\"\r\n```", "I am looking for alte...
1,609,001,843,000
1,630,076,603,000
null
NONE
null
null
Hi I am getting very low accuracy on SST2 I investigate this and observe that for this dataset sentences are tokenized, while this is correct for the other datasets in GLUE, please see below. Is there any alternatives I could get untokenized sentences? I am unfortunately under time pressure to report some results on ...
https://api.github.com/repos/huggingface/datasets/issues/1639/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1638
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1638/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1638/comments
https://api.github.com/repos/huggingface/datasets/issues/1638/events
https://github.com/huggingface/datasets/pull/1638
774,869,184
MDExOlB1bGxSZXF1ZXN0NTQ1Njg5ODQ5
1,638
Add id_puisi dataset
{ "login": "ilhamfp", "id": 31740013, "node_id": "MDQ6VXNlcjMxNzQwMDEz", "avatar_url": "https://avatars.githubusercontent.com/u/31740013?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ilhamfp", "html_url": "https://github.com/ilhamfp", "followers_url": "https://api.github.com/users/ilhamf...
[]
closed
false
null
[]
null
[]
1,608,986,515,000
1,609,346,057,000
1,609,346,057,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1638", "html_url": "https://github.com/huggingface/datasets/pull/1638", "diff_url": "https://github.com/huggingface/datasets/pull/1638.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1638.patch" }
Puisi (poem) is an Indonesian poetic form. The dataset contains 7223 Indonesian puisi with its title and author. :)
https://api.github.com/repos/huggingface/datasets/issues/1638/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1637
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1637/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1637/comments
https://api.github.com/repos/huggingface/datasets/issues/1637/events
https://github.com/huggingface/datasets/pull/1637
774,710,014
MDExOlB1bGxSZXF1ZXN0NTQ1NTc1NTMw
1,637
Added `pn_summary` dataset
{ "login": "m3hrdadfi", "id": 2601833, "node_id": "MDQ6VXNlcjI2MDE4MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/2601833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/m3hrdadfi", "html_url": "https://github.com/m3hrdadfi", "followers_url": "https://api.github.com/users/m3...
[]
closed
false
null
[]
null
[ "As always, I got stuck in the correct order of imports 😅\r\n@lhoestq, It's finished!", "@lhoestq, It's done! Is there anything else that needs changes?" ]
1,608,894,084,000
1,609,767,799,000
1,609,767,799,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1637", "html_url": "https://github.com/huggingface/datasets/pull/1637", "diff_url": "https://github.com/huggingface/datasets/pull/1637.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1637.patch" }
#1635 You did a great job with the fluent procedure regarding adding a dataset. I took the chance to add the dataset on my own. Thank you for your awesome job, and I hope this dataset found the researchers happy, specifically those interested in Persian Language (Farsi)!
https://api.github.com/repos/huggingface/datasets/issues/1637/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1636
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1636/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1636/comments
https://api.github.com/repos/huggingface/datasets/issues/1636/events
https://github.com/huggingface/datasets/issues/1636
774,574,378
MDU6SXNzdWU3NzQ1NzQzNzg=
1,636
winogrande cannot be dowloaded
{ "login": "ghost", "id": 10137, "node_id": "MDQ6VXNlcjEwMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghost", "html_url": "https://github.com/ghost", "followers_url": "https://api.github.com/users/ghost/followers", "f...
[]
open
false
null
[]
null
[ "I have same issue for other datasets (`myanmar_news` in my case).\r\n\r\nA version of `datasets` runs correctly on my local machine (**without GPU**) which looking for the dataset at \r\n```\r\nhttps://raw.githubusercontent.com/huggingface/datasets/master/datasets/myanmar_news/myanmar_news.py\r\n```\r\n\r\nMeanwhi...
1,608,848,902,000
1,609,163,629,000
null
NONE
null
null
Hi, I am getting this error when trying to run the codes on the cloud. Thank you for any suggestion and help on this @lhoestq ``` File "./finetune_trainer.py", line 318, in <module> main() File "./finetune_trainer.py", line 148, in main for task in data_args.tasks] File "./finetune_trainer.py", ...
https://api.github.com/repos/huggingface/datasets/issues/1636/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1635
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1635/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1635/comments
https://api.github.com/repos/huggingface/datasets/issues/1635/events
https://github.com/huggingface/datasets/issues/1635
774,524,492
MDU6SXNzdWU3NzQ1MjQ0OTI=
1,635
Persian Abstractive/Extractive Text Summarization
{ "login": "m3hrdadfi", "id": 2601833, "node_id": "MDQ6VXNlcjI2MDE4MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/2601833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/m3hrdadfi", "html_url": "https://github.com/m3hrdadfi", "followers_url": "https://api.github.com/users/m3...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[]
1,608,832,032,000
1,609,773,064,000
1,609,773,064,000
CONTRIBUTOR
null
null
Assembling datasets tailored to different tasks and languages is a precious target. This would be great to have this dataset included. ## Adding a Dataset - **Name:** *pn-summary* - **Description:** *A well-structured summarization dataset for the Persian language consists of 93,207 records. It is prepared for Abs...
https://api.github.com/repos/huggingface/datasets/issues/1635/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1634
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1634/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1634/comments
https://api.github.com/repos/huggingface/datasets/issues/1634/events
https://github.com/huggingface/datasets/issues/1634
774,487,934
MDU6SXNzdWU3NzQ0ODc5MzQ=
1,634
Inspecting datasets per category
{ "login": "ghost", "id": 10137, "node_id": "MDQ6VXNlcjEwMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghost", "html_url": "https://github.com/ghost", "followers_url": "https://api.github.com/users/ghost/followers", "f...
[]
open
false
null
[]
null
[ "That's interesting, can you tell me what you think would be useful to access to inspect a dataset?\r\n\r\nYou can filter them in the hub with the search by the way: https://huggingface.co/datasets have you seen it?", "Hi @thomwolf \r\nthank you, I was not aware of this, I was looking into the data viewer linked ...
1,608,823,594,000
1,610,098,084,000
null
NONE
null
null
Hi Is there a way I could get all NLI datasets/all QA datasets to get some understanding of available datasets per category? this is hard for me to inspect the datasets one by one in the webpage, thanks for the suggestions @lhoestq
https://api.github.com/repos/huggingface/datasets/issues/1634/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1633
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1633/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1633/comments
https://api.github.com/repos/huggingface/datasets/issues/1633/events
https://github.com/huggingface/datasets/issues/1633
774,422,603
MDU6SXNzdWU3NzQ0MjI2MDM=
1,633
social_i_qa wrong format of labels
{ "login": "ghost", "id": 10137, "node_id": "MDQ6VXNlcjEwMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghost", "html_url": "https://github.com/ghost", "followers_url": "https://api.github.com/users/ghost/followers", "f...
[]
closed
false
null
[]
null
[ "@lhoestq, should I raise a PR for this? Just a minor change while reading labels text file", "Sure feel free to open a PR thanks !" ]
1,608,815,514,000
1,609,348,729,000
1,609,348,729,000
NONE
null
null
Hi, there is extra "\n" in labels of social_i_qa datasets, no big deal, but I was wondering if you could remove it to make it consistent. so label is 'label': '1\n', not '1' thanks ``` >>> import datasets >>> from datasets import load_dataset >>> dataset = load_dataset( ... 'social_i_qa') cahce dir /jul...
https://api.github.com/repos/huggingface/datasets/issues/1633/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1632
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1632/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1632/comments
https://api.github.com/repos/huggingface/datasets/issues/1632/events
https://github.com/huggingface/datasets/issues/1632
774,388,625
MDU6SXNzdWU3NzQzODg2MjU=
1,632
SICK dataset
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[]
1,608,813,614,000
1,612,540,165,000
1,612,540,165,000
CONTRIBUTOR
null
null
Hi, this would be great to have this dataset included. I might be missing something, but I could not find it in the list of already included datasets. Thank you. ## Adding a Dataset - **Name:** SICK - **Description:** SICK consists of about 10,000 English sentence pairs that include many examples of the lexical,...
https://api.github.com/repos/huggingface/datasets/issues/1632/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1631
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1631/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1631/comments
https://api.github.com/repos/huggingface/datasets/issues/1631/events
https://github.com/huggingface/datasets/pull/1631
774,349,222
MDExOlB1bGxSZXF1ZXN0NTQ1Mjc5MTE2
1,631
Update README.md
{ "login": "savasy", "id": 6584825, "node_id": "MDQ6VXNlcjY1ODQ4MjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6584825?v=4", "gravatar_id": "", "url": "https://api.github.com/users/savasy", "html_url": "https://github.com/savasy", "followers_url": "https://api.github.com/users/savasy/foll...
[]
closed
false
null
[]
null
[]
1,608,810,352,000
1,609,176,941,000
1,609,175,764,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1631", "html_url": "https://github.com/huggingface/datasets/pull/1631", "diff_url": "https://github.com/huggingface/datasets/pull/1631.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1631.patch" }
I made small change for citation
https://api.github.com/repos/huggingface/datasets/issues/1631/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1630
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1630/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1630/comments
https://api.github.com/repos/huggingface/datasets/issues/1630/events
https://github.com/huggingface/datasets/issues/1630
774,332,129
MDU6SXNzdWU3NzQzMzIxMjk=
1,630
Adding UKP Argument Aspect Similarity Corpus
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[ "Adding a link to the guide on adding a dataset if someone want to give it a try: https://github.com/huggingface/datasets#add-a-new-dataset-to-the-hub\r\n\r\nwe should add this guide to the issue template @lhoestq ", "thanks @thomwolf , this is added now. The template is correct, sorry my mistake not to include i...
1,608,807,691,000
1,608,809,418,000
null
CONTRIBUTOR
null
null
Hi, this would be great to have this dataset included. ## Adding a Dataset - **Name:** UKP Argument Aspect Similarity Corpus - **Description:** The UKP Argument Aspect Similarity Corpus (UKP ASPECT) includes 3,595 sentence pairs over 28 controversial topics. Each sentence pair was annotated via crowdsourcing as ei...
https://api.github.com/repos/huggingface/datasets/issues/1630/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1629
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1629/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1629/comments
https://api.github.com/repos/huggingface/datasets/issues/1629/events
https://github.com/huggingface/datasets/pull/1629
774,255,716
MDExOlB1bGxSZXF1ZXN0NTQ1MjAwNTQ3
1,629
add wongnai_reviews test set labels
{ "login": "cstorm125", "id": 15519308, "node_id": "MDQ6VXNlcjE1NTE5MzA4", "avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cstorm125", "html_url": "https://github.com/cstorm125", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
[]
1,608,796,951,000
1,609,176,219,000
1,609,176,219,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1629", "html_url": "https://github.com/huggingface/datasets/pull/1629", "diff_url": "https://github.com/huggingface/datasets/pull/1629.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1629.patch" }
- add test set labels provided by @ekapolc - refactor `star_rating` to a `datasets.features.ClassLabel` field
https://api.github.com/repos/huggingface/datasets/issues/1629/timeline
null
true