| column | type | values |
| --- | --- | --- |
| url | string | lengths 58–61 |
| repository_url | string | 1 value |
| labels_url | string | lengths 72–75 |
| comments_url | string | lengths 67–70 |
| events_url | string | lengths 65–68 |
| html_url | string | lengths 46–51 |
| id | int64 | 599M–1.11B |
| node_id | string | lengths 18–32 |
| number | int64 | 1–3.59k |
| title | string | lengths 1–276 |
| user | dict | |
| labels | list | |
| state | string | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | dict | |
| comments | sequence | |
| created_at | int64 | 1,587B–1,643B |
| updated_at | int64 | 1,587B–1,643B |
| closed_at | int64 | 1,587B–1,643B |
| author_association | string | 3 values |
| active_lock_reason | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| body | string | lengths 0–228k |
| reactions | dict | |
| timeline_url | string | lengths 67–70 |
| performed_via_github_app | null | |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/2375
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2375/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2375/comments
https://api.github.com/repos/huggingface/datasets/issues/2375/events
https://github.com/huggingface/datasets/pull/2375
894,655,157
MDExOlB1bGxSZXF1ZXN0NjQ2OTg2NTcw
2,375
Dataset Streaming
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,621,362,000,000
1,624,466,102,000
1,624,466,101,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2375", "html_url": "https://github.com/huggingface/datasets/pull/2375", "diff_url": "https://github.com/huggingface/datasets/pull/2375.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2375.patch", "merged_at": 1624466101000 }
# Dataset Streaming

## API

The current API is:

```python
from datasets import load_dataset

# Load an IterableDataset without downloading data
snli = load_dataset("snli", streaming=True)

# Access examples by streaming data
print(next(iter(snli["train"])))
# {'premise': 'A person on a horse jumps over a broken down airplane.',
#  'hypothesis': 'A person is training his horse for a competition.',
#  'label': 1}
```

I already implemented a few methods:
- `IterableDataset.map`: apply transforms on the fly to the examples
- `IterableDataset.shuffle`: shuffle the data _a la_ TFDS, i.e. with a shuffling buffer
- `IterableDataset.with_format`: set the format to `"torch"` to get a `torch.utils.data.IterableDataset`
- `merge_datasets`: merge two iterable datasets by alternating between one or the other (you can specify the probabilities)

I would love to have your opinion on the API design :)

## Implementation details

### Streaming

Data streaming is done using `fsspec`, which has nice caching features.

To make dataset streaming work, I extend the `open` function of dataset scripts to support opening remote files without downloading them entirely. It also works with remote compressed archives (currently only zip is supported):

```python
# Get a file-like object by streaming data from a remote file
open("https://github.com/davidsbatista/NER-datasets/raw/master/CONLL2003/train.txt")

# Get a file-like object by streaming data from a remote compressed archive by using the hop separator "::"
open("zip://snli_1.0_train.txt::https://nlp.stanford.edu/projects/snli/snli_1.0.zip")
```

I also extend the `os.path.join` function to support navigation in remote compressed archives, since it has to deal with the `"::"` separator. This separator is used by `fsspec`.

Finally, I also added a retry mechanism in case the connection fails during data streaming.

### Transforms

An IterableDataset wraps an ExamplesIterable instance. There are different subclasses depending on the transforms we want to apply:
- ExamplesIterable: the basic one
- MappedExamplesIterable: an iterable with a `map` function applied on the fly
- BufferShuffledExamplesIterable: an iterable with a shuffling buffer
- CyclingMultiSourcesExamplesIterable: alternates between several ExamplesIterable
- RandomlyCyclingMultiSourcesExamplesIterable: randomly alternates between several ExamplesIterable

### DatasetBuilder

I use the same builders as usual. I just added a new method `_get_examples_iterable_for_split` to get an ExamplesIterable for a given split. Currently only the GeneratorBasedBuilder and the ArrowBasedBuilder implement it. The BeamBasedBuilder doesn't implement it yet, which means that datasets like wikipedia and natural_questions can't be loaded as an IterableDataset for now.

## Other details

<s>I may have to make some changes in many dataset scripts to use `download` instead of `download_and_extract` when extraction is not needed. This will avoid errors for streaming.</s>
EDIT: Actually, I just check the extension of the file and do extraction only if needed.
EDIT2: It's not possible to stream from .tar.gz files without downloading the file completely. For now I raise an error if one wants to get a streaming dataset based on .tar.gz files.

## TODO

Usual stuff:
- [x] make the streaming dependency `aiohttp` optional: `pip install datasets[streaming]`
- [x] tests
- [x] docs
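For reference, the streaming behavior described above can be reproduced directly with `fsspec`. This is a minimal sketch independent of `datasets`, assuming `aiohttp` is installed so that fsspec's HTTP filesystem is available:

```python
import fsspec

# Stream a remote text file without downloading it entirely
url = "https://github.com/davidsbatista/NER-datasets/raw/master/CONLL2003/train.txt"
with fsspec.open(url, "rt") as f:
    print(f.readline())

# Stream a single member of a remote zip archive, chained with the "::" hop separator
zipped = "zip://snli_1.0_train.txt::https://nlp.stanford.edu/projects/snli/snli_1.0.zip"
with fsspec.open(zipped, "rt") as f:
    print(f.readline())
```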
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2375/reactions", "total_count": 6, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 6, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2375/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2374
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2374/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2374/comments
https://api.github.com/repos/huggingface/datasets/issues/2374/events
https://github.com/huggingface/datasets/pull/2374
894,579,364
MDExOlB1bGxSZXF1ZXN0NjQ2OTIyMjkw
2,374
add `desc` to `tqdm` in `Dataset.map()`
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Once this is merged, let's update `transformers` examples to use this new code. As currently all those tqdm bars are who knows what they are....\r\n\r\nhttps://github.com/huggingface/transformers/issues/11797", "Sure @stas00! Once this is merged let's discuss what all changes can be done on `transformers` side", "@bhavitvyamalik, as it has been merged would you like to tackle https://github.com/huggingface/transformers/issues/11797?\r\n", "Definitely @stas00. From what I could gather, you guys want more meaningful `.map` calls for all examples [here](https://github.com/huggingface/transformers/tree/master/examples/pytorch)?", "That's exactly right, @bhavitvyamalik \r\n\r\nPerhaps the best approach is to do one example, see that other maintainers agree on it. and then replicate to other." ]
1,621,356,269,000
1,622,130,244,000
1,622,041,161,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2374", "html_url": "https://github.com/huggingface/datasets/pull/2374", "diff_url": "https://github.com/huggingface/datasets/pull/2374.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2374.patch", "merged_at": 1622041161000 }
Fixes #2330. Please let me know if anything else is also required in this PR.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2374/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2374/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2373
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2373/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2373/comments
https://api.github.com/repos/huggingface/datasets/issues/2373/events
https://github.com/huggingface/datasets/issues/2373
894,499,909
MDU6SXNzdWU4OTQ0OTk5MDk=
2,373
Loading dataset from local path
{ "login": "kolakows", "id": 34172905, "node_id": "MDQ6VXNlcjM0MTcyOTA1", "avatar_url": "https://avatars.githubusercontent.com/u/34172905?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kolakows", "html_url": "https://github.com/kolakows", "followers_url": "https://api.github.com/users/kolakows/followers", "following_url": "https://api.github.com/users/kolakows/following{/other_user}", "gists_url": "https://api.github.com/users/kolakows/gists{/gist_id}", "starred_url": "https://api.github.com/users/kolakows/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kolakows/subscriptions", "organizations_url": "https://api.github.com/users/kolakows/orgs", "repos_url": "https://api.github.com/users/kolakows/repos", "events_url": "https://api.github.com/users/kolakows/events{/privacy}", "received_events_url": "https://api.github.com/users/kolakows/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Version below works, checked again in the docs, and data_files should be a path.\r\n```\r\nds = datasets.load_dataset('my_script.py', \r\n data_files='/data/dir/corpus.txt', \r\n cache_dir='.')\r\n```" ]
1,621,351,250,000
1,621,352,196,000
1,621,352,195,000
NONE
null
null
null
I'm trying to load a local dataset with the code below:

```
ds = datasets.load_dataset('my_script.py', 
  data_files='corpus.txt', 
  data_dir='/data/dir', 
  cache_dir='.')
```

But internally a BuilderConfig is created, which tries to use getmtime on the data_files string, without using data_dir. Is this a bug or am I not using load_dataset correctly?

https://github.com/huggingface/datasets/blob/bc61954083f74e6460688202e9f77dde2475319c/src/datasets/builder.py#L153
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2373/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2373/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2372
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2372/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2372/comments
https://api.github.com/repos/huggingface/datasets/issues/2372/events
https://github.com/huggingface/datasets/pull/2372
894,496,064
MDExOlB1bGxSZXF1ZXN0NjQ2ODUxODc2
2,372
ConvQuestions benchmark added
{ "login": "PhilippChr", "id": 24608689, "node_id": "MDQ6VXNlcjI0NjA4Njg5", "avatar_url": "https://avatars.githubusercontent.com/u/24608689?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PhilippChr", "html_url": "https://github.com/PhilippChr", "followers_url": "https://api.github.com/users/PhilippChr/followers", "following_url": "https://api.github.com/users/PhilippChr/following{/other_user}", "gists_url": "https://api.github.com/users/PhilippChr/gists{/gist_id}", "starred_url": "https://api.github.com/users/PhilippChr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PhilippChr/subscriptions", "organizations_url": "https://api.github.com/users/PhilippChr/orgs", "repos_url": "https://api.github.com/users/PhilippChr/repos", "events_url": "https://api.github.com/users/PhilippChr/events{/privacy}", "received_events_url": "https://api.github.com/users/PhilippChr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for your helpful comments and suggestions! :)\r\nI integrated the additional fields, and extended some of the README/dataset card.\r\nAnd I actually realized that we had the cc-by-4.0 for the dataset, so this was also changed.", "I added the answers to the test set actually :)", "Oh great ! Let me revert my change then" ]
1,621,351,010,000
1,622,025,105,000
1,622,025,105,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2372", "html_url": "https://github.com/huggingface/datasets/pull/2372", "diff_url": "https://github.com/huggingface/datasets/pull/2372.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2372.patch", "merged_at": 1622025105000 }
Hello, I would like to integrate our dataset on conversational QA. The answers are grounded in a knowledge graph (KG). The work was published in CIKM 2019 (https://dl.acm.org/doi/10.1145/3357384.3358016). We hope for further research on how to deal with the challenges of factoid conversational QA. Thanks! :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2372/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2372/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2371
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2371/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2371/comments
https://api.github.com/repos/huggingface/datasets/issues/2371/events
https://github.com/huggingface/datasets/issues/2371
894,193,403
MDU6SXNzdWU4OTQxOTM0MDM=
2,371
Align question answering tasks with sub-domains
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false } ]
null
[]
1,621,331,279,000
1,621,331,362,000
null
MEMBER
null
null
null
As pointed out by @thomwolf in #2255, we should consider breaking with the pipeline taxonomy of `transformers` to account for the various types of question-answering domains:

> `question-answering` exists in two forms: abstractive and extractive question answering.
>
> We can keep a generic `question-answering`, but then it will probably mean different schemas of input/output for both (abstractive will have text for both, while extractive can use span indications as well as text).
>
> Or we can also propose to use `abstractive-question-answering` and `extractive-question-answering`, for instance. Maybe we could have `question-answering-abstractive` and `question-answering-extractive` if somehow we can use a prefix for completion or search in the future (detail).
>
> Actually I see that people are more often organizing in terms of general tasks and sub-tasks, for instance on Papers with Code: https://paperswithcode.com/area/natural-language-processing and on nlpprogress: https://github.com/sebastianruder/NLP-progress/blob/master/english/question_answering.md#squad
>
> Probably the best is to align with one of these in terms of denomination. Papers with Code is probably the most active and maintained, and we work with them as well. Maybe you want to check with a few QA datasets that this schema makes sense. Typically NaturalQuestions and TriviaQA can be good second datasets to compare to, to be sure of the generality of the schema.
>
> A good recent list of QA datasets to compare the schemas among is, for instance, in the UnitedQA paper: https://arxiv.org/abs/2101.00178

Investigate which grouping of QA is best suited for `datasets` and adapt / extend the QA task template accordingly.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2371/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2371/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2370
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2370/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2370/comments
https://api.github.com/repos/huggingface/datasets/issues/2370/events
https://github.com/huggingface/datasets/pull/2370
893,606,432
MDExOlB1bGxSZXF1ZXN0NjQ2MDkyNDQy
2,370
Adding HendrycksTest dataset
{ "login": "andyzoujm", "id": 43451571, "node_id": "MDQ6VXNlcjQzNDUxNTcx", "avatar_url": "https://avatars.githubusercontent.com/u/43451571?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andyzoujm", "html_url": "https://github.com/andyzoujm", "followers_url": "https://api.github.com/users/andyzoujm/followers", "following_url": "https://api.github.com/users/andyzoujm/following{/other_user}", "gists_url": "https://api.github.com/users/andyzoujm/gists{/gist_id}", "starred_url": "https://api.github.com/users/andyzoujm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andyzoujm/subscriptions", "organizations_url": "https://api.github.com/users/andyzoujm/orgs", "repos_url": "https://api.github.com/users/andyzoujm/repos", "events_url": "https://api.github.com/users/andyzoujm/events{/privacy}", "received_events_url": "https://api.github.com/users/andyzoujm/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq Thank you for the review. I've made the suggested changes. There still might be some problems with dummy data though due to some csv loading issues (which I haven't found the cause to).", "I took a look at the dummy data and some csv lines were cropped. I fixed them :)" ]
1,621,277,585,000
1,622,479,033,000
1,622,479,033,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2370", "html_url": "https://github.com/huggingface/datasets/pull/2370", "diff_url": "https://github.com/huggingface/datasets/pull/2370.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2370.patch", "merged_at": 1622479033000 }
Adding the Hendrycks test from https://arxiv.org/abs/2009.03300.

I'm having a bit of trouble with dummy data creation because some lines in the csv files aren't being loaded properly (only the first entry is loaded in a row of length 6). The dataset itself is loading just fine. Hope you can kindly help! Thank you!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2370/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2370/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2369
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2369/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2369/comments
https://api.github.com/repos/huggingface/datasets/issues/2369/events
https://github.com/huggingface/datasets/pull/2369
893,554,153
MDExOlB1bGxSZXF1ZXN0NjQ2MDQ5NDM1
2,369
correct labels of conll2003
{ "login": "philschmid", "id": 32632186, "node_id": "MDQ6VXNlcjMyNjMyMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/philschmid", "html_url": "https://github.com/philschmid", "followers_url": "https://api.github.com/users/philschmid/followers", "following_url": "https://api.github.com/users/philschmid/following{/other_user}", "gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}", "starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/philschmid/subscriptions", "organizations_url": "https://api.github.com/users/philschmid/orgs", "repos_url": "https://api.github.com/users/philschmid/repos", "events_url": "https://api.github.com/users/philschmid/events{/privacy}", "received_events_url": "https://api.github.com/users/philschmid/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,621,273,074,000
1,621,326,462,000
1,621,326,462,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2369", "html_url": "https://github.com/huggingface/datasets/pull/2369", "diff_url": "https://github.com/huggingface/datasets/pull/2369.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2369.patch", "merged_at": 1621326462000 }
# What does this PR do?

It fixes/extends the `ner_tags` for conll2003 to include all of them.

Paper reference: https://arxiv.org/pdf/cs/0306050v1.pdf
Model reference: https://huggingface.co/elastic/distilbert-base-cased-finetuned-conll03-english/blob/main/config.json
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2369/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2369/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2368
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2368/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2368/comments
https://api.github.com/repos/huggingface/datasets/issues/2368/events
https://github.com/huggingface/datasets/pull/2368
893,411,076
MDExOlB1bGxSZXF1ZXN0NjQ1OTI5NzM0
2,368
Allow "other-X" in licenses
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,621,262,874,000
1,621,269,387,000
1,621,269,387,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2368", "html_url": "https://github.com/huggingface/datasets/pull/2368", "diff_url": "https://github.com/huggingface/datasets/pull/2368.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2368.patch", "merged_at": 1621269387000 }
This PR allows "other-X" licenses during metadata validation. @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2368/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2368/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2367
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2367/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2367/comments
https://api.github.com/repos/huggingface/datasets/issues/2367/events
https://github.com/huggingface/datasets/pull/2367
893,317,427
MDExOlB1bGxSZXF1ZXN0NjQ1ODUxNTE0
2,367
Remove getchildren from hyperpartisan news detection
{ "login": "ghomasHudson", "id": 13795113, "node_id": "MDQ6VXNlcjEzNzk1MTEz", "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghomasHudson", "html_url": "https://github.com/ghomasHudson", "followers_url": "https://api.github.com/users/ghomasHudson/followers", "following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}", "gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions", "organizations_url": "https://api.github.com/users/ghomasHudson/orgs", "repos_url": "https://api.github.com/users/ghomasHudson/repos", "events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}", "received_events_url": "https://api.github.com/users/ghomasHudson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,621,257,037,000
1,621,260,433,000
1,621,260,433,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2367", "html_url": "https://github.com/huggingface/datasets/pull/2367", "diff_url": "https://github.com/huggingface/datasets/pull/2367.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2367.patch", "merged_at": 1621260432000 }
`Element.getchildren()` had long been deprecated in the ElementTree library and was removed in Python 3.9, so the code still passes the automated tests (which use 3.6), but for those of us on bleeding-edge distros it now fails. https://bugs.python.org/issue29209
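As a rough illustration of the replacement (a sketch; the actual diff in this PR may differ), iterating an `Element` directly yields the same children that `getchildren()` used to return:

```python
import xml.etree.ElementTree as ET

root = ET.fromstring("<doc><p>a</p><p>b</p></doc>")

# Element.getchildren() was removed in Python 3.9;
# iterating the element (or calling list() on it) is the drop-in replacement.
children = list(root)
print([child.tag for child in children])  # ['p', 'p']
```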
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2367/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2367/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2366
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2366/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2366/comments
https://api.github.com/repos/huggingface/datasets/issues/2366/events
https://github.com/huggingface/datasets/issues/2366
893,185,266
MDU6SXNzdWU4OTMxODUyNjY=
2,366
Json loader fails if user-specified features don't match the json data fields order
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
1,621,247,168,000
1,623,840,469,000
1,623,840,469,000
MEMBER
null
null
null
If you do

```python
dataset = load_dataset("json", data_files=data_files, features=features)
```

then depending on the order of the features in the json data fields it fails:

```python
[...]
~/Desktop/hf/datasets/src/datasets/packaged_modules/json/json.py in _generate_tables(self, files)
     94         if self.config.schema:
     95             # Cast allows str <-> int/float, while parse_option explicit_schema does NOT
---> 96             pa_table = pa_table.cast(self.config.schema)
     97             yield i, pa_table
[...]
ValueError: Target schema's field names are not matching the table's field names: ['tokens', 'ner_tags'], ['ner_tags', 'tokens']
```

This is because one must first re-order the columns of the table to match `self.config.schema` before calling cast.

One way to fix the `cast` would be to replace it with:

```python
# reorder the arrays if necessary + cast to schema
# we can't simply use .cast here because we may need to change the order of the columns
pa_table = pa.Table.from_arrays([pa_table[name] for name in schema.names], schema=schema)
```
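A standalone sketch of the underlying problem and the proposed fix (the names here are illustrative, not the actual `datasets` internals): reorder the table's columns to match the target schema before casting.

```python
import pyarrow as pa

schema = pa.schema([("tokens", pa.list_(pa.string())), ("ner_tags", pa.list_(pa.int64()))])

# A table whose columns arrive in a different order than the target schema
table = pa.table({"ner_tags": [[0, 1]], "tokens": [["hello", "world"]]})

# table.cast(schema) raises ValueError because the field names don't match positionally.
# Selecting the columns in schema order first makes the cast succeed.
reordered = table.select(schema.names).cast(schema)
print(reordered.schema.names)  # ['tokens', 'ner_tags']
```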
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2366/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2366/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2365
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2365/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2365/comments
https://api.github.com/repos/huggingface/datasets/issues/2365/events
https://github.com/huggingface/datasets/issues/2365
893,179,697
MDU6SXNzdWU4OTMxNzk2OTc=
2,365
Missing ClassLabel encoding in Json loader
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/5", "html_url": "https://github.com/huggingface/datasets/milestone/5", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels", "id": 6808903, "node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==", "number": 5, "title": "1.9", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 12, "state": "closed", "created_at": 1622477586000, "updated_at": 1626099120000, "due_on": 1625727600000, "closed_at": 1625809807000 }
[]
1,621,246,750,000
1,624,892,734,000
1,624,892,734,000
MEMBER
null
null
null
Currently, if you want to load a json dataset this way:

```python
dataset = load_dataset("json", data_files=data_files, features=features)
```

then if your features have ClassLabel types and your json data needs class label encoding (i.e. the labels in the json files are strings, not integers), it fails:

```python
[...]
~/Desktop/hf/datasets/src/datasets/packaged_modules/json/json.py in _generate_tables(self, files)
     94         if self.config.schema:
     95             # Cast allows str <-> int/float, while parse_option explicit_schema does NOT
---> 96             pa_table = pa_table.cast(self.config.schema)
     97             yield i, pa_table
[...]
ArrowInvalid: Failed to parse string: 'O' as a scalar of type int64
```

This is because it just tries to cast the string data to integers, without applying the str->int mapping first.

The current workaround is to do instead:

```python
dataset = load_dataset("json", data_files=data_files)
dataset = dataset.map(features.encode_example, features=features)
```
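The workaround relies on `Features.encode_example` to apply the str->int mapping that the cast skips. A minimal sketch (the label names below are just an example):

```python
from datasets import ClassLabel, Features, Sequence, Value

features = Features({
    "tokens": Sequence(Value("string")),
    "ner_tags": Sequence(ClassLabel(names=["O", "B-PER", "I-PER"])),
})

# encode_example maps the string labels to their integer ids
encoded = features.encode_example({"tokens": ["John"], "ner_tags": ["B-PER"]})
print(encoded["ner_tags"])  # [1]
```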
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2365/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2365/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2364
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2364/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2364/comments
https://api.github.com/repos/huggingface/datasets/issues/2364/events
https://github.com/huggingface/datasets/pull/2364
892,420,500
MDExOlB1bGxSZXF1ZXN0NjQ1MTI4MDYx
2,364
README updated for SNLI, MNLI
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Regarding the license issue, I think we should allow it since it starts with `other-`. Cc @gchhablani what do you think ?", "@lhoestq I agree, I'll look into it." ]
1,621,078,679,000
1,621,260,867,000
1,621,258,459,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2364", "html_url": "https://github.com/huggingface/datasets/pull/2364", "diff_url": "https://github.com/huggingface/datasets/pull/2364.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2364.patch", "merged_at": 1621258458000 }
Closes #2275. Mentioned the -1 labels in MNLI and SNLI and how they should be removed before training. @lhoestq the `check_code_quality` test might fail for MNLI as the license name `other-Open Portion of the American National Corpus` is not a registered tag for 'licenses'.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2364/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2364/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2363
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2363/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2363/comments
https://api.github.com/repos/huggingface/datasets/issues/2363/events
https://github.com/huggingface/datasets/issues/2363
892,391,232
MDU6SXNzdWU4OTIzOTEyMzI=
2,363
Trying to use metric.compute but get OSError
{ "login": "hyusterr", "id": 52968111, "node_id": "MDQ6VXNlcjUyOTY4MTEx", "avatar_url": "https://avatars.githubusercontent.com/u/52968111?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hyusterr", "html_url": "https://github.com/hyusterr", "followers_url": "https://api.github.com/users/hyusterr/followers", "following_url": "https://api.github.com/users/hyusterr/following{/other_user}", "gists_url": "https://api.github.com/users/hyusterr/gists{/gist_id}", "starred_url": "https://api.github.com/users/hyusterr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hyusterr/subscriptions", "organizations_url": "https://api.github.com/users/hyusterr/orgs", "repos_url": "https://api.github.com/users/hyusterr/repos", "events_url": "https://api.github.com/users/hyusterr/events{/privacy}", "received_events_url": "https://api.github.com/users/hyusterr/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "also, I test the function on some little data , get the same message:\r\n\r\n```\r\nPython 3.8.5 (default, Jan 27 2021, 15:41:15)\r\n[GCC 9.3.0] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> from datasets import load_metric\r\n>>> metric = load_metric('accuracy')\r\n>>> metric.add_batch(predictions=[1, 1, 1, 1], references=[1, 1, 0, 0])\r\n2021-05-15 16:39:17.240991: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0\r\n>>> metric.compute()\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/yshuang/.local/lib/python3.8/site-packages/datasets/metric.py\", line 391, in compute\r\n self._finalize()\r\n File \"/home/yshuang/.local/lib/python3.8/site-packages/datasets/metric.py\", line 342, in _finalize\r\n self.writer.finalize()\r\n File \"/home/yshuang/.local/lib/python3.8/site-packages/datasets/arrow_writer.py\", line 370, in finalize\r\n self.stream.close()\r\n File \"pyarrow/io.pxi\", line 132, in pyarrow.lib.NativeFile.close\r\n File \"pyarrow/error.pxi\", line 112, in pyarrow.lib.check_status\r\nOSError: error closing file\r\n```", "Hi @hyusterr,\r\nIf you look at the example provided in `metrics/accuracy.py`, it only does `metric.compute()` to calculate the accuracy. Here's an example:\r\n```\r\nfrom datasets import load_metric\r\nmetric = load_metric('accuracy')\r\noutput = metric.compute(predictions=[1, 1, 1, 1], references=[1, 1, 0, 0])\r\nprint(output['accuracy']) # 0.5\r\n```\r\n", "I thought I can use Metric to collect predictions and references, this follows the step from huggingface's sample colab.\r\nBTW, I fix the problem by setting other cache_dir in load_metric, but I'm still wondering about the mechanism.", "I tried this code on a colab notebook and it worked fine (with gpu enabled):\r\n```\r\nfrom datasets import load_metric\r\nmetric = load_metric('accuracy')\r\noutput = metric.add_batch(predictions=[1, 1, 1, 1], references=[1, 1, 0, 0])\r\nfinal_score = metric.compute()\r\nprint(final_score) # 0.5\r\n```\r\nAlso, in `load_metric`, I saw `cache_dir` is optional and it defaults to `~/.datasets/`", "Hi ! By default it caches the predictions and references used to compute the metric in `~/.cache/huggingface/datasets/metrics` (not `~/.datasets/`). Let me update the documentation @bhavitvyamalik .\r\n\r\nThe cache is used to store all the predictions and references passed to `add_batch` for example in order to compute the metric later when `compute` is called.\r\n\r\nI think the issue might come from the cache directory that is used by default. Can you check that you have the right permissions ? Otherwise feel free to set `cache_dir` to another location." ]
1,621,067,946,000
1,630,936,866,000
null
NONE
null
null
null
I want to use `metric.compute` from `load_metric('accuracy')` to get the training accuracy, but I receive an OSError. I am wondering what the mechanism behind the metric calculation is: why would it report an OSError?

```python
195     for epoch in range(num_train_epochs):
196         model.train()
197         for step, batch in enumerate(train_loader):
198             # print(batch['input_ids'].shape)
199             outputs = model(**batch)
200
201             loss = outputs.loss
202             loss /= gradient_accumulation_steps
203             accelerator.backward(loss)
204
205             predictions = outputs.logits.argmax(dim=-1)
206             metric.add_batch(
207                 predictions=accelerator.gather(predictions),
208                 references=accelerator.gather(batch['labels'])
209             )
210             progress_bar.set_postfix({'loss': loss.item(), 'train batch acc.': train_metrics})
211
212             if (step + 1) % 50 == 0 or step == len(train_loader) - 1:
213                 train_metrics = metric.compute()
```

The error message is as below:

```
Traceback (most recent call last):
  File "run_multi.py", line 273, in <module>
    main()
  File "/home/yshuang/.local/lib/python3.8/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/home/yshuang/.local/lib/python3.8/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/home/yshuang/.local/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/yshuang/.local/lib/python3.8/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "run_multi.py", line 213, in main
    train_metrics = metric.compute()
  File "/home/yshuang/.local/lib/python3.8/site-packages/datasets/metric.py", line 391, in compute
    self._finalize()
  File "/home/yshuang/.local/lib/python3.8/site-packages/datasets/metric.py", line 342, in _finalize
    self.writer.finalize()
  File "/home/yshuang/.local/lib/python3.8/site-packages/datasets/arrow_writer.py", line 370, in finalize
    self.stream.close()
  File "pyarrow/io.pxi", line 132, in pyarrow.lib.NativeFile.close
  File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status
OSError: error closing file
```

## Environment info
- `datasets` version: 1.6.1
- Platform: Linux, Ubuntu 20.04.1 LTS (Focal Fossa)
- Python version: 3.8.5
- PyArrow version: 4.0.0
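A condensed version of the workaround from the thread (the author reports that setting `cache_dir` in `load_metric` fixed it; the path below is just an example): point the metric cache at a directory you can write to.

```python
from datasets import load_metric

# cache_dir is where add_batch stores predictions/references until compute() is called
metric = load_metric("accuracy", cache_dir="/tmp/hf_metrics_cache")
metric.add_batch(predictions=[1, 1, 1, 1], references=[1, 1, 0, 0])
print(metric.compute())  # {'accuracy': 0.5}
```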
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2363/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2363/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2362
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2362/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2362/comments
https://api.github.com/repos/huggingface/datasets/issues/2362/events
https://github.com/huggingface/datasets/pull/2362
892,100,749
MDExOlB1bGxSZXF1ZXN0NjQ0ODYzOTQw
2,362
Fix web_nlg metadata
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! `release_v2.1` and the others are dataset configuration names.\r\n\r\nThe configuration names are used to show the right code snippet in the UI to load the dataset.\r\nFor example if the parsing of the web_nlg tags worked correctly we would have:\r\n![image](https://user-images.githubusercontent.com/42851186/118475444-8d1e5d00-b70c-11eb-98e9-844d4daf6139.png)\r\n\r\nTherefore I don't think it's a good idea to rename the configurations from `release_v2.1` to `release_v2_1` as the code snippet would be wrong in this case.\r\n\r\nMoreover we can't really disallow dots in configuration names and rename the configurations since it would be a big breaking change. It's commonly used, especially with multilingual datasets. For example `load_dataset(\"indic_glue\", \"sna.bn\")`.\r\n\r\nIs this something that can be fixed on the moonlanding side instead ?", "> Is this something that can be fixed on the moonlanding side instead ?\r\n\r\nNot really unless we change database:)\r\n\r\nWe'll maybe try to find another workaround, but super low-prio given that it's the only dataset that has those dotted keys in the YAML metadata", "Ok, should we close this PR then ?" ]
1,621,012,507,000
1,621,259,057,000
1,621,258,948,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2362", "html_url": "https://github.com/huggingface/datasets/pull/2362", "diff_url": "https://github.com/huggingface/datasets/pull/2362.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2362.patch", "merged_at": null }
Our metadata storage system does not support `.` inside keys. cc @Pierrci
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2362/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2362/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2361
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2361/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2361/comments
https://api.github.com/repos/huggingface/datasets/issues/2361/events
https://github.com/huggingface/datasets/pull/2361
891,982,808
MDExOlB1bGxSZXF1ZXN0NjQ0NzYzNTU4
2,361
Preserve dtype for numpy/torch/tf/jax arrays
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @lhoestq, \r\nIt turns out that pyarrow `ListArray` are not recognized as list-like when we get output from `numpy_to_pyarrow_listarray`. This might cause tests to fail. If possible can we convert that `ListArray` output to list inorder for tests to pass? Under the hood it'll maintain the dtype as that of numpy array passed during input only", "Brought down the failing tests from 7 to 4. Let me know if that part looks good. Failing tests are looking quite similar. In `test_map_torch` https://github.com/huggingface/datasets/blob/3d46bc384f811435e59e3916faa3aa20a1cf87bc/tests/test_arrow_dataset.py#L1039 and `test_map_tf`https://github.com/huggingface/datasets/blob/3d46bc384f811435e59e3916faa3aa20a1cf87bc/tests/test_arrow_dataset.py#L1056 \r\nthey're expecting `float64`. Shouldn't that be `float32` now?", "It's normal: pytorch and tensorflow use `float32` by default, unlike numpy which uses `float64`.\r\n\r\nI think that we should always keep the precision of the original tensor (torch/tf/numpy).\r\nIt means that as it is in this PR it's fine (the precision is conserved when doing the torch/tf -> numpy conversion).\r\n\r\nThis is a breaking change but in my opinion the fact that we had Value(\"float64\") for torch.float32 tensors was an issue already.\r\n\r\nLet me know what you think. Cc @albertvillanova if you have an opinion on this\r\n\r\nIf we agree on doing this breaking change, we can just change the test. ", "Hi @lhoestq, \r\nMerged master into this branch. Only changing the test is left for now (mentioned below) after which all tests should pass.\r\n\r\n> Brought down the failing tests from 7 to 4. Let me know if that part looks good. Failing tests are looking quite similar. In `test_map_torch`\r\n> \r\n> https://github.com/huggingface/datasets/blob/3d46bc384f811435e59e3916faa3aa20a1cf87bc/tests/test_arrow_dataset.py#L1039\r\n> \r\n> and `test_map_tf`\r\n> https://github.com/huggingface/datasets/blob/3d46bc384f811435e59e3916faa3aa20a1cf87bc/tests/test_arrow_dataset.py#L1056\r\n> \r\n> \r\n> they're expecting `float64`. Shouldn't that be `float32` now?\r\n\r\n", "> they're expecting float64. Shouldn't that be float32 now?\r\n\r\nYes feel free to update those tests :)\r\n\r\nIt would be nice to have the same test for JAX as well", "Added same test for for JAX too. Also, I saw that I missed changing `test_cast_to_python_objects_jax` like I did for TF and PyTorch. Finished that as well" ]
1,621,003,523,000
1,629,189,004,000
1,629,189,004,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2361", "html_url": "https://github.com/huggingface/datasets/pull/2361", "diff_url": "https://github.com/huggingface/datasets/pull/2361.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2361.patch", "merged_at": 1629189004000 }
Fixes #625. This lets the user preserve the dtype of numpy array to pyarrow array which was getting lost due to conversion of numpy array -> list -> pyarrow array.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2361/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2361/timeline
null
true
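A quick illustration of the dtype loss that PR #2361 above fixes (my own sketch, not code from the PR; the printed types are what recent pyarrow versions infer):

```python
import numpy as np
import pyarrow as pa

arr = np.ones((2, 3), dtype=np.float32)

# numpy -> Python list -> pyarrow: plain Python floats are inferred as float64
print(pa.array(arr.tolist()).type)  # list<item: double> -- dtype lost

# building from the numpy arrays directly keeps the original dtype
print(pa.array(list(arr)).type)     # list<item: float>  -- float32 preserved
```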
https://api.github.com/repos/huggingface/datasets/issues/2360
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2360/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2360/comments
https://api.github.com/repos/huggingface/datasets/issues/2360/events
https://github.com/huggingface/datasets/issues/2360
891,965,964
MDU6SXNzdWU4OTE5NjU5NjQ=
2,360
Automatically detect datasets with compatible task schemas
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false } ]
null
[]
1,621,002,220,000
1,621,002,220,000
null
MEMBER
null
null
null
See description of #2255 for details.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2360/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2360/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2359
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2359/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2359/comments
https://api.github.com/repos/huggingface/datasets/issues/2359/events
https://github.com/huggingface/datasets/issues/2359
891,946,017
MDU6SXNzdWU4OTE5NDYwMTc=
2,359
Allow model labels to be passed during task preparation
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,621,000,708,000
1,621,000,708,000
null
MEMBER
null
null
null
Models have a config with label2id. And we have the same for datasets with the ClassLabel feature type. At one point either the model or the dataset must sync with the other. It would be great to do that on the dataset side. For example, for sentiment classification on amazon reviews you could have these labels: - "1 star", "2 stars", "3 stars", "4 stars", "5 stars" - "1", "2", "3", "4", "5" Some models may use the first set, while other models use the second set. Here in the `TextClassification` class, the user can only specify one set of labels, while many models could actually be compatible but have different sets of labels. Should we allow users to pass a list of compatible label sets ? Then in terms of API, users could use `dataset.prepare_for_task("text-classification", labels=model.labels)` or something like that. The label set could also be the same but not in the same order. For NLI for example, some models use `["neutral", "entailment", "contradiction"]` and some others use `["neutral", "contradiction", "entailment"]`, so we should take care of updating the order of the labels in the dataset to match the label order of the model. Let me know what you think ! This can be done in a future PR _Originally posted by @lhoestq in https://github.com/huggingface/datasets/pull/2255#discussion_r632412792_
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2359/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2359/timeline
null
false
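A minimal sketch of the label alignment discussed in issue #2359 above, assuming the model's label order is available (e.g. via `model.config.id2label`); the dataset name is a placeholder and the remapping helper is hypothetical — the proposed `prepare_for_task(..., labels=...)` itself does not exist yet:

```python
from datasets import ClassLabel, Features, load_dataset

dataset = load_dataset("some_nli_dataset", split="train")  # placeholder name
model_labels = ["neutral", "contradiction", "entailment"]  # e.g. from model.config.id2label

# translate the dataset's label ids into the model's label order
old_names = dataset.features["label"].names
old2new = {old_names.index(name): new_id for new_id, name in enumerate(model_labels)}

def remap(example):
    example["label"] = old2new[example["label"]]
    return example

# update both the values and the ClassLabel metadata
new_features = Features({**dataset.features, "label": ClassLabel(names=model_labels)})
dataset = dataset.map(remap, features=new_features)
```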
https://api.github.com/repos/huggingface/datasets/issues/2358
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2358/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2358/comments
https://api.github.com/repos/huggingface/datasets/issues/2358/events
https://github.com/huggingface/datasets/pull/2358
891,269,577
MDExOlB1bGxSZXF1ZXN0NjQ0MTYyOTY2
2,358
Roman Urdu Stopwords List
{ "login": "devzohaib", "id": 58664161, "node_id": "MDQ6VXNlcjU4NjY0MTYx", "avatar_url": "https://avatars.githubusercontent.com/u/58664161?v=4", "gravatar_id": "", "url": "https://api.github.com/users/devzohaib", "html_url": "https://github.com/devzohaib", "followers_url": "https://api.github.com/users/devzohaib/followers", "following_url": "https://api.github.com/users/devzohaib/following{/other_user}", "gists_url": "https://api.github.com/users/devzohaib/gists{/gist_id}", "starred_url": "https://api.github.com/users/devzohaib/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/devzohaib/subscriptions", "organizations_url": "https://api.github.com/users/devzohaib/orgs", "repos_url": "https://api.github.com/users/devzohaib/repos", "events_url": "https://api.github.com/users/devzohaib/events{/privacy}", "received_events_url": "https://api.github.com/users/devzohaib/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Thanks for sharing :)\r\nI think the best place to share this is probably the `Languages at Hugging Face` section of the forum:\r\nhttps://discuss.huggingface.co/c/languages-at-hugging-face/15\r\n\r\nSince this is not a dataset, I'm closing this PR if you don't mind", "Thank you I will look into the link that you have shared with me.\n\n\n\n\nOn Mon, May 17, 2021 at 7:05 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> Closed #2358 <https://github.com/huggingface/datasets/pull/2358>.\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/pull/2358#event-4754836267>, or\n> unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AN7SJYJVY4C5XQRDNET743DTOEPC7ANCNFSM443AZ3MA>\n> .\n>\n" ]
1,620,930,567,000
1,621,414,243,000
1,621,260,310,000
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2358", "html_url": "https://github.com/huggingface/datasets/pull/2358", "diff_url": "https://github.com/huggingface/datasets/pull/2358.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2358.patch", "merged_at": null }
A list of most frequently used Roman Urdu words with different spellings and usages. This is a very basic effort to collect some basic stopwords for Roman Urdu to help efforts of analyzing text data in roman Urdu which makes up a huge part of daily internet interaction of Roman-Urdu users.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2358/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2358/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2357
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2357/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2357/comments
https://api.github.com/repos/huggingface/datasets/issues/2357/events
https://github.com/huggingface/datasets/pull/2357
890,595,693
MDExOlB1bGxSZXF1ZXN0NjQzNTk0NDcz
2,357
Adding Microsoft CodeXGlue Datasets
{ "login": "ncoop57", "id": 7613470, "node_id": "MDQ6VXNlcjc2MTM0NzA=", "avatar_url": "https://avatars.githubusercontent.com/u/7613470?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ncoop57", "html_url": "https://github.com/ncoop57", "followers_url": "https://api.github.com/users/ncoop57/followers", "following_url": "https://api.github.com/users/ncoop57/following{/other_user}", "gists_url": "https://api.github.com/users/ncoop57/gists{/gist_id}", "starred_url": "https://api.github.com/users/ncoop57/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ncoop57/subscriptions", "organizations_url": "https://api.github.com/users/ncoop57/orgs", "repos_url": "https://api.github.com/users/ncoop57/repos", "events_url": "https://api.github.com/users/ncoop57/events{/privacy}", "received_events_url": "https://api.github.com/users/ncoop57/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Oh one other thing. Mentioned in the PR was that I would need to regenerate the dataset_infos.json once the camel casing was done. However, I am unsure why this is the case since there is no reference to any object names in the dataset_infos.json file.\r\n\r\nIf it needs to be reran, I can try it do it on my own machine, but I've had a memory issues with a previous dataset due to my compute constraints so I'd prefer to hopefully avoid it all together if not necessary to regenerate.", "Was just reviewing the `builder_name`s of each dataset and it seems like it is already following this format:\r\n\r\n`CodeXGlueCcCloneDetectionBigCloneBenchMain -> code_x_glue_cc_clone_detection_big_clone_bench_main` Is there a location I am missing?", "> Was just reviewing the `builder_name`s of each dataset and it seems like it is already following this format:\r\n> \r\n> `CodeXGlueCcCloneDetectionBigCloneBenchMain -> code_x_glue_cc_clone_detection_big_clone_bench_main` Is there a location I am missing?\r\n\r\nIf it's already in this format then it's fine thanks ! It's all good then\r\n\r\nTo fix the CI you just need to add the `encoding=` parameters to the `open()` calls", "@lhoestq I think everything should be good to go besides the code styling, which seem to be due to missing or unsupported metadata tags for the READMEs, is this something I should worry about since all the other datasets seem to be failing as well?", "Awesome! Just committed your changes and I will begin on adding the TOCs and filling in the content for the new sections/subsections.\r\n\r\nAlso, I see that we are having to only use the `code` tag instead of individual langs and I get that is required for indexing or showing available tags on the datasets hub. However, as a future feature, it might be good to add tags for individual programming languages to make it easier to search.", "> Also, I see that we are having to only use the code tag instead of individual langs and I get that is required for indexing or showing available tags on the datasets hub. However, as a future feature, it might be good to add tags for individual programming languages to make it easier to search.\r\n\r\nYes I agree. We'll be able to reuse the tags per programming language from this PR when we allow this feature\r\n\r\ncc @yjernite what do you think about extending our languages taxonomy to programming languages ?", "Hey @lhoestq, just finalizing the READMEs and testing them against the automated test. For the non, WIN tests, it seems like there is some dependency issue that doesn't have to do with the new datasets. For the WIN tests, it looks like some of the headings are mislabeled such as \"Supported Tasks and Leaderboards\" -> \"Supported Tasks\" in the TOC you posted. Should I base my TOC on the one you posted or on the one that the test script is using? Also, it throws errors for some of the fields being empty, such as \"Source Data\" in the `code_x_glue_tt_text_to_text` dataset. However, I am not familiar with this dataset, so I put the `[More Information Needed]` stub, similar to the other sections I couldn't easily answer. 
For some of the sections like \"Source Data\", is this info required?", "Yes you're right, it is `Supported Tasks and Leaderboards` that we need to use, sorry about that\r\n\r\nI also noticed the same for the splits section: we have to use `Data Splits` (not Data Splits Sample Size)\r\n", "Some subsections are also missing: `Initial Data Collection and Normalization`, `Who are the source language producers?`.\r\nIf you are interested you can fill those sections as well, or leave them empty for now.\r\nThis will also fix the error regarding \"Source Data\"\r\n\r\nYou can see the template of the readme here:\r\nhttps://github.com/huggingface/datasets/blob/9d8bf36fdb861d9b2922d7c782fb58f9f542997c/templates/README.md", "> > Also, I see that we are having to only use the code tag instead of individual langs and I get that is required for indexing or showing available tags on the datasets hub. However, as a future feature, it might be good to add tags for individual programming languages to make it easier to search.\r\n> \r\n> Yes I agree. We'll be able to reuse the tags per programming language from this PR when we allow this feature\r\n> \r\n> cc @yjernite what do you think about extending our languages taxonomy to programming languages ?\r\n\r\nSounds good, as long as they all share a prefix! maybe `code_cpp`, `code_java`, etc. ? \r\n\r\nI don't think we currently have `_` in language codes/names, but also don't see what it would break *a priori*", "We don't use `_` but there are some languages that use `-` though like `en-US`. Let's use `-` maybe, to match the same hierarchy pattern ?", "Hi guys, I just started working on https://github.com/huggingface/datasets/pull/997 this morning and I just realized that you were finishing it... You may want to get the dataset cards from https://github.com/madlag/datasets, and maybe some code too, as I did a few things like moving _CITATION and _DESCRIPTION to globals.\r\n\r\n", "I am renaming the main classes to match the dataset names, for example : CodeXGlueTcTextToCodeMain -> CodeXGlueTcTextToCode . And I am regenerating the dataset_infos.json accordingly.", "Thanks for renaming the classes and updating the dataset_infos.json ! This looks all clean now :)\r\n\r\nThis PR looks all good to me :) One just needs to merge master into this branch to make sure the CI is green with the latest changes. It should also fix the current CI issues that are not related to this PR", "Woot woot :rocket:! All green, looks like it is ready for showtime. Thank you both @lhoestq and especially @madlag, I think these datasets are going to be a great new addition to :hugs: datasets and I can't wait to use them in my research :nerd_face:.", "Thanks @ncoop57 for you contribution! It will be really cool to see those datasets used as soon as they are released !" ]
1,620,866,581,000
1,623,144,597,000
1,623,144,597,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2357", "html_url": "https://github.com/huggingface/datasets/pull/2357", "diff_url": "https://github.com/huggingface/datasets/pull/2357.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2357.patch", "merged_at": 1623144597000 }
Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:. I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2357/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2357/timeline
null
true
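The CI fix mentioned in the thread above ("add the `encoding=` parameters to the `open()` calls"), sketched — the utf-8 choice and the variable name are assumptions, since the thread doesn't spell them out:

```python
# without an explicit encoding, Windows CI runners fall back to the locale
# codec (e.g. cp1252) and non-ASCII characters in the data files break parsing
with open(filepath, encoding="utf-8") as f:
    data = f.read()
```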
https://api.github.com/repos/huggingface/datasets/issues/2356
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2356/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2356/comments
https://api.github.com/repos/huggingface/datasets/issues/2356/events
https://github.com/huggingface/datasets/issues/2356
890,511,019
MDU6SXNzdWU4OTA1MTEwMTk=
2,356
How to Add New Metrics Guide
{ "login": "ncoop57", "id": 7613470, "node_id": "MDQ6VXNlcjc2MTM0NzA=", "avatar_url": "https://avatars.githubusercontent.com/u/7613470?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ncoop57", "html_url": "https://github.com/ncoop57", "followers_url": "https://api.github.com/users/ncoop57/followers", "following_url": "https://api.github.com/users/ncoop57/following{/other_user}", "gists_url": "https://api.github.com/users/ncoop57/gists{/gist_id}", "starred_url": "https://api.github.com/users/ncoop57/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ncoop57/subscriptions", "organizations_url": "https://api.github.com/users/ncoop57/orgs", "repos_url": "https://api.github.com/users/ncoop57/repos", "events_url": "https://api.github.com/users/ncoop57/events{/privacy}", "received_events_url": "https://api.github.com/users/ncoop57/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi ! sorry for the late response \r\n\r\nIt would be fantastic to have a guide for adding metrics as well ! Currently we only have this template here:\r\nhttps://github.com/huggingface/datasets/blob/master/templates/new_metric_script.py\r\n\r\nWe can also include test utilities for metrics in the guide.\r\n\r\nWe have a pytest suite with commands that you can use to make sure your metric works as expected.\r\nIt has two useful commands:\r\n\r\n1. This commands tests the code in the `Examples:` desction of the docstring of the metric:\r\n```\r\npytest tests/test_metric_common.py::LocalMetricTest::test_load_metric_<metric_name>\r\n```\r\nThis will run this code for example:\r\n\r\nhttps://github.com/huggingface/datasets/blob/e0787aa2a781cc15a80f7597f56d1f12e23df4c9/metrics/accuracy/accuracy.py#L40-L45\r\n\r\nMoreover this test is meant to be fast so users are free to add patches to the metric to avoid intensive computations.\r\nAnd example of intensive call patch can be found here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/e0787aa2a781cc15a80f7597f56d1f12e23df4c9/tests/test_metric_common.py#L138-L151\r\n\r\n2. This test runs the same thing as 1. except that it doesn't use patches (the real metric is used):\r\n```\r\nRUN_SLOW=1 pytest tests/test_metric_common.py::LocalMetricTest::test_load_metric_<metric_name>\r\n```\r\n\r\nFinally additional metric-specific tests can be added to `test_metric_common.py`.\r\n\r\nVoila :) Feel free to ping me if you have any question or if I can help\r\n" ]
1,620,855,726,000
1,622,486,975,000
null
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.** Currently there is an absolutely fantastic guide for how to contribute a new dataset to the library. However, there isn't one for adding new metrics. **Describe the solution you'd like** I'd like a guide in a similar style to the dataset guide for adding metrics. I believe much of the content in the dataset guide such as setup can be easily copied over with minimal changes. Also, from what I've seen with existing metrics, it shouldn't be as complicated, especially in documentation of the metric, mainly just citation and usage. The most complicated part I see would be in automated tests that run the new metrics, but y'all's test suite seems pretty comprehensive, so it might not be that hard. **Describe alternatives you've considered** One alternative would be just not having the metrics be community generated and so would not need a step by step guide. New metrics would just be proposed as issues and the internal team would take care of them. However, I think it makes more sense to have a step by step guide for contributors to follow. **Additional context** I'd be happy to help with creating this guide as I am very interested in adding software engineering metrics to the library :nerd_face:, the part I would need guidance on would be testing. P.S. Love the library and community y'all have built! :hugs:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2356/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2356/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2355
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2355/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2355/comments
https://api.github.com/repos/huggingface/datasets/issues/2355/events
https://github.com/huggingface/datasets/pull/2355
890,484,408
MDExOlB1bGxSZXF1ZXN0NjQzNDk5NTIz
2,355
normalized TOCs and titles in data cards
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Oh right! I'd be in favor of still having the same TOC across the board, we can either leave it as is or add a `[More Info Needed]` `Contributions` Section wherever it's currently missing, wdyt?", "(I thought those were programmatically updated based on git history :D )", "Merging for now to avoid conflict since there are so many changes but let's figure out the contributions section next ;) " ]
1,620,853,199,000
1,620,998,592,000
1,620,998,592,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2355", "html_url": "https://github.com/huggingface/datasets/pull/2355", "diff_url": "https://github.com/huggingface/datasets/pull/2355.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2355.patch", "merged_at": 1620998592000 }
I started fixing some of the READMEs that were failing the tests introduced by @gchhablani but then realized that there were some consistent differences between earlier and newer versions of some of the titles (e.g. Data Splits vs Data Splits Sample Size, Supported Tasks vs Supported Tasks and Leaderboards). We also had different versions of the Table of Contents. This PR normalizes all of them to the newer version.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2355/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2355/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2354
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2354/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2354/comments
https://api.github.com/repos/huggingface/datasets/issues/2354/events
https://github.com/huggingface/datasets/issues/2354
890,439,523
MDU6SXNzdWU4OTA0Mzk1MjM=
2,354
Document DatasetInfo attributes
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false } ]
null
[]
1,620,849,689,000
1,621,675,574,000
1,621,675,574,000
MEMBER
null
null
null
**Is your feature request related to a problem? Please describe.** As noted in PR #2255, the attributes of `DatasetInfo` are not documented in the [docs](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=datasetinfo#datasetinfo). It would be nice to do so :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2354/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2354/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2353
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2353/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2353/comments
https://api.github.com/repos/huggingface/datasets/issues/2353/events
https://github.com/huggingface/datasets/pull/2353
890,296,262
MDExOlB1bGxSZXF1ZXN0NjQzMzM4MDcz
2,353
Update README validation rules
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,620,838,646,000
1,620,982,566,000
1,620,982,566,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2353", "html_url": "https://github.com/huggingface/datasets/pull/2353", "diff_url": "https://github.com/huggingface/datasets/pull/2353.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2353.patch", "merged_at": 1620982566000 }
This PR allows unexpected subsections under third-level headings. All except `Contributions`. @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2353/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2353/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2352
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2352/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2352/comments
https://api.github.com/repos/huggingface/datasets/issues/2352/events
https://github.com/huggingface/datasets/pull/2352
889,810,100
MDExOlB1bGxSZXF1ZXN0NjQyOTI4NTgz
2,352
Set to_json default to JSON lines
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "This is perfect, @albertvillanova - thank you! Tested it to work.\r\n\r\nMight it be a good idea to document the args to `to_json`?\r\n\r\nand also even a very basic progress bar? took 10min for 8M large records for `openwebtext` so perhaps some indication of it's being alive every min or so?", "@lhoestq I added tests for both `lines` and `orient`." ]
1,620,807,565,000
1,621,587,674,000
1,621,587,673,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2352", "html_url": "https://github.com/huggingface/datasets/pull/2352", "diff_url": "https://github.com/huggingface/datasets/pull/2352.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2352.patch", "merged_at": 1621587673000 }
With this PR, the method `Dataset.to_json`: - is added to the docs - defaults to JSON lines
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2352/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2352/timeline
null
true
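A usage sketch of the behavior PR #2352 above describes; `lines` and `orient` are the keywords the tests in the thread mention, and the file names are illustrative:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1]})

# JSON Lines by default: one JSON object per line
ds.to_json("out.jsonl")

# opt out to get a single JSON array instead
ds.to_json("out.json", lines=False, orient="records")
```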
https://api.github.com/repos/huggingface/datasets/issues/2351
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2351/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2351/comments
https://api.github.com/repos/huggingface/datasets/issues/2351/events
https://github.com/huggingface/datasets/pull/2351
889,584,953
MDExOlB1bGxSZXF1ZXN0NjQyNzI5NDIz
2,351
Simplify faiss index save
{ "login": "Guitaricet", "id": 2821124, "node_id": "MDQ6VXNlcjI4MjExMjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2821124?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Guitaricet", "html_url": "https://github.com/Guitaricet", "followers_url": "https://api.github.com/users/Guitaricet/followers", "following_url": "https://api.github.com/users/Guitaricet/following{/other_user}", "gists_url": "https://api.github.com/users/Guitaricet/gists{/gist_id}", "starred_url": "https://api.github.com/users/Guitaricet/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Guitaricet/subscriptions", "organizations_url": "https://api.github.com/users/Guitaricet/orgs", "repos_url": "https://api.github.com/users/Guitaricet/repos", "events_url": "https://api.github.com/users/Guitaricet/events{/privacy}", "received_events_url": "https://api.github.com/users/Guitaricet/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,620,791,650,000
1,621,258,901,000
1,621,258,901,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2351", "html_url": "https://github.com/huggingface/datasets/pull/2351", "diff_url": "https://github.com/huggingface/datasets/pull/2351.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2351.patch", "merged_at": 1621258901000 }
Fixes #2350 In some cases, Faiss GPU index objects have neither "device" nor "getDevice". Possibly this happens when some part of the index is computed on CPU. In particular, this would happen with the index `OPQ16_128,IVF512,PQ32` (issue #2350). I did check it, but it is likely that `OPQ` or `PQ` transforms cause it. I propose, instead of using the index object to get the device, to infer it from the `FaissIndex.device` field as it is done in `.add_vectors`. Here we assume that `.device` always corresponds to the index placement, which seems reasonable.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2351/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2351/timeline
null
true
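A hypothetical reconstruction of the fix PR #2351 above describes (not the merged diff): rely on the stored `FaissIndex.device` field rather than probing the faiss object for its device before serializing:

```python
import faiss

def save(self, file):
    index = self.faiss_index
    if self.device is not None and self.device > -1:
        # the index lives on a GPU: faiss can only serialize CPU indexes,
        # so write a CPU copy (same call as the workaround in issue #2350)
        index = faiss.index_gpu_to_cpu(index)
    faiss.write_index(index, str(file))
```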
https://api.github.com/repos/huggingface/datasets/issues/2350
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2350/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2350/comments
https://api.github.com/repos/huggingface/datasets/issues/2350/events
https://github.com/huggingface/datasets/issues/2350
889,580,247
MDU6SXNzdWU4ODk1ODAyNDc=
2,350
`FaissIndex.save` throws error on GPU
{ "login": "Guitaricet", "id": 2821124, "node_id": "MDQ6VXNlcjI4MjExMjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2821124?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Guitaricet", "html_url": "https://github.com/Guitaricet", "followers_url": "https://api.github.com/users/Guitaricet/followers", "following_url": "https://api.github.com/users/Guitaricet/following{/other_user}", "gists_url": "https://api.github.com/users/Guitaricet/gists{/gist_id}", "starred_url": "https://api.github.com/users/Guitaricet/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Guitaricet/subscriptions", "organizations_url": "https://api.github.com/users/Guitaricet/orgs", "repos_url": "https://api.github.com/users/Guitaricet/repos", "events_url": "https://api.github.com/users/Guitaricet/events{/privacy}", "received_events_url": "https://api.github.com/users/Guitaricet/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Just in case, this is a workaround that I use in my code and it seems to do the job.\r\n\r\n```python\r\nif use_gpu_index:\r\n data[\"train\"]._indexes[\"text_emb\"].faiss_index = faiss.index_gpu_to_cpu(data[\"train\"]._indexes[\"text_emb\"].faiss_index)\r\n```" ]
1,620,790,916,000
1,621,258,901,000
1,621,258,901,000
CONTRIBUTOR
null
null
null
## Describe the bug After training an index with a factory string `OPQ16_128,IVF512,PQ32` on GPU, `.save_faiss_index` throws this error. ``` File "index_wikipedia.py", line 119, in <module> data["train"].save_faiss_index("text_emb", index_save_path) File "/home/vlialin/miniconda3/envs/cat/lib/python3.8/site-packages/datasets/search.py", line 470, in save_faiss_index index.save(file) File "/home/vlialin/miniconda3/envs/cat/lib/python3.8/site-packages/datasets/search.py", line 334, in save faiss.write_index(index, str(file)) File "/home/vlialin/miniconda3/envs/cat/lib/python3.8/site-packages/faiss/swigfaiss_avx2.py", line 5654, in write_index return _swigfaiss.write_index(*args) RuntimeError: Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /root/miniconda3/conda-bld/faiss-pkg_1613235005464/work/faiss/impl/index_write.cpp:453: don't know how to serialize this type of index ``` ## Steps to reproduce the bug Any dataset will do, I just selected a familiar one. ```python import numpy as np import datasets INDEX_STR = "OPQ16_128,IVF512,PQ32" INDEX_SAVE_PATH = "will_not_save.faiss" data = datasets.load_dataset("Fraser/news-category-dataset", split=f"train[:10000]") def encode(item): return {"text_emb": np.random.randn(768).astype(np.float32)} data = data.map(encode) data.add_faiss_index(column="text_emb", string_factory=INDEX_STR, train_size=10_000, device=0) data.save_faiss_index("text_emb", INDEX_SAVE_PATH) ``` ## Expected results Saving the index ## Actual results Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) ... don't know how to serialize this type of index ## Environment info - `datasets` version: 1.6.2 - Platform: Linux-4.15.0-142-generic-x86_64-with-glibc2.10 - Python version: 3.8.8 - PyTorch version (GPU?): 1.8.1+cu111 (True) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No I will be proposing a fix in a couple of minutes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2350/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2350/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2349
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2349/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2349/comments
https://api.github.com/repos/huggingface/datasets/issues/2349/events
https://github.com/huggingface/datasets/pull/2349
888,586,018
MDExOlB1bGxSZXF1ZXN0NjQxNzYzNzg3
2,349
Update task_ids for Ascent KB
{ "login": "phongnt570", "id": 6749421, "node_id": "MDQ6VXNlcjY3NDk0MjE=", "avatar_url": "https://avatars.githubusercontent.com/u/6749421?v=4", "gravatar_id": "", "url": "https://api.github.com/users/phongnt570", "html_url": "https://github.com/phongnt570", "followers_url": "https://api.github.com/users/phongnt570/followers", "following_url": "https://api.github.com/users/phongnt570/following{/other_user}", "gists_url": "https://api.github.com/users/phongnt570/gists{/gist_id}", "starred_url": "https://api.github.com/users/phongnt570/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phongnt570/subscriptions", "organizations_url": "https://api.github.com/users/phongnt570/orgs", "repos_url": "https://api.github.com/users/phongnt570/repos", "events_url": "https://api.github.com/users/phongnt570/events{/privacy}", "received_events_url": "https://api.github.com/users/phongnt570/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,620,765,873,000
1,621,248,794,000
1,621,248,514,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2349", "html_url": "https://github.com/huggingface/datasets/pull/2349", "diff_url": "https://github.com/huggingface/datasets/pull/2349.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2349.patch", "merged_at": 1621248514000 }
This "other-other-knowledge-base" task is better suited for the dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2349/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2349/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2348
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2348/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2348/comments
https://api.github.com/repos/huggingface/datasets/issues/2348/events
https://github.com/huggingface/datasets/pull/2348
887,927,737
MDExOlB1bGxSZXF1ZXN0NjQxMTMwOTM4
2,348
Add tests for dataset cards
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq\r\n\r\nShould I remove the scripts? or atleast remove running them from the CircleCI config?\r\n\r\nAlso, I hope it is okay that the combined method (metadata+content) is only a slow test, and for the Circle CI, I assume only non-slow tests are run? If yes, this would mean separate tests for content and metadata.", "Also feel free to remove the scripts from the CI and also remove the scripts files :)" ]
1,620,753,267,000
1,621,599,047,000
1,621,599,047,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2348", "html_url": "https://github.com/huggingface/datasets/pull/2348", "diff_url": "https://github.com/huggingface/datasets/pull/2348.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2348.patch", "merged_at": 1621599047000 }
Adding tests for dataset cards. This PR will potentially remove the scripts being used for dataset tags and readme validation. Additionally, this will allow testing dataset readmes by providing the name as follows: ```bash pytest tests/test_dataset_cards.py::test_dataset_tags[fashion_mnist] ``` and ```bash pytest tests/test_dataset_cards.py::test_readme_content[fashion_mnist] ``` or a combined test as: ```bash pytest tests/test_dataset_cards.py::test_dataset_card[fashion_mnist] ``` @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2348/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2348/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2347
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2347/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2347/comments
https://api.github.com/repos/huggingface/datasets/issues/2347/events
https://github.com/huggingface/datasets/issues/2347
887,404,868
MDU6SXNzdWU4ODc0MDQ4Njg=
2,347
Add an API to access the language and pretty name of a dataset
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi ! With @bhavitvyamalik we discussed about having something like\r\n```python\r\nfrom datasets import load_dataset_card\r\n\r\ndataset_card = load_dataset_card(\"squad\")\r\nprint(dataset_card.metadata.pretty_name)\r\n# Stanford Question Answering Dataset (SQuAD)\r\nprint(dataset_card.metadata.languages)\r\n# [\"en\"]\r\n\r\n```\r\nWhat do you think ?\r\n\r\nI don't know if you already have a way to load the model tags in `transformers` but we can agree on the API to have something consistent.\r\n\r\nAlso note that the pretty name would only be used to show users something prettier than a dataset id, but in the end the source of truth will stay the dataset id (here `squad`).", "That works for me!", "maybe use the hub-backed dataset_info method? (so there's only one parser of README.md metadata)?", "What dataset_info method are you talking about @julien-c ? In `huggingface_hub` I can only see `model_info`.", "hmm the equivalent method in `datasets` (which could go into `huggingface_hub` at some point)" ]
1,620,742,208,000
1,621,589,206,000
null
MEMBER
null
null
null
It would be super nice to have an API to get some metadata of the dataset from the name and args passed to `load_dataset`. This way we could programmatically infer the language and the name of a dataset when creating model cards automatically in the Transformers examples scripts.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2347/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2347/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2346
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2346/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2346/comments
https://api.github.com/repos/huggingface/datasets/issues/2346/events
https://github.com/huggingface/datasets/pull/2346
886,632,114
MDExOlB1bGxSZXF1ZXN0NjM5OTAzMjk3
2,346
Add Qasper Dataset
{ "login": "cceyda", "id": 15624271, "node_id": "MDQ6VXNlcjE1NjI0Mjcx", "avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cceyda", "html_url": "https://github.com/cceyda", "followers_url": "https://api.github.com/users/cceyda/followers", "following_url": "https://api.github.com/users/cceyda/following{/other_user}", "gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}", "starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cceyda/subscriptions", "organizations_url": "https://api.github.com/users/cceyda/orgs", "repos_url": "https://api.github.com/users/cceyda/repos", "events_url": "https://api.github.com/users/cceyda/events{/privacy}", "received_events_url": "https://api.github.com/users/cceyda/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I saw that the README [template](https://github.com/huggingface/datasets/blob/master/templates/README.md) changed while I was working on this 😅 Some TOC titles may be different but I filled it to the best of my knowledge & readme quality check passes now.\r\nready for review @lhoestq " ]
1,620,725,144,000
1,621,340,908,000
1,621,340,908,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2346", "html_url": "https://github.com/huggingface/datasets/pull/2346", "diff_url": "https://github.com/huggingface/datasets/pull/2346.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2346.patch", "merged_at": 1621340907000 }
[Question Answering on Scientific Research Papers](https://allenai.org/project/qasper/home) Doing NLP on NLP papers to do NLP ♻️ I had to add it~ - [x] Add README (just gotta fill out some more ) - [x] Dataloader code - [x] Make dummy dataset - [x] generate dataset infos - [x] Tests
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2346/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2346/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2345
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2345/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2345/comments
https://api.github.com/repos/huggingface/datasets/issues/2345/events
https://github.com/huggingface/datasets/issues/2345
886,586,872
MDU6SXNzdWU4ODY1ODY4NzI=
2,345
[Question] How to move and reuse preprocessed dataset?
{ "login": "AtmaHou", "id": 15045402, "node_id": "MDQ6VXNlcjE1MDQ1NDAy", "avatar_url": "https://avatars.githubusercontent.com/u/15045402?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AtmaHou", "html_url": "https://github.com/AtmaHou", "followers_url": "https://api.github.com/users/AtmaHou/followers", "following_url": "https://api.github.com/users/AtmaHou/following{/other_user}", "gists_url": "https://api.github.com/users/AtmaHou/gists{/gist_id}", "starred_url": "https://api.github.com/users/AtmaHou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AtmaHou/subscriptions", "organizations_url": "https://api.github.com/users/AtmaHou/orgs", "repos_url": "https://api.github.com/users/AtmaHou/repos", "events_url": "https://api.github.com/users/AtmaHou/events{/privacy}", "received_events_url": "https://api.github.com/users/AtmaHou/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq @LysandreJik", "<s>Hi :) Can you share with us the code you used ?</s>\r\n\r\nEDIT: from https://github.com/huggingface/transformers/issues/11665#issuecomment-838348291 I understand you're using the run_clm.py script. Can you share your logs ?\r\n", "Also note that for the caching to work, you must reuse the exact same parameters as in the first run. Did you change any parameter ? The `preprocessing_num_workers` should also stay the same", "> Also note that for the caching to work, you must reuse the exact same parameters as in the first run. Did you change any parameter ? The `preprocessing_num_workers` should also stay the same\r\n\r\nI only changed the `preprocessing_num_workers` maybe it is the problem~ I will try again~" ]
1,620,724,157,000
1,623,386,351,000
1,623,386,351,000
NONE
null
null
null
Hi, I am training a GPT-2 from scratch using run_clm.py. I want to move and reuse the preprocessed dataset (it takes 2 hours to preprocess). I tried to: copy path_to_cache_dir/datasets to new_cache_dir/datasets and set export HF_DATASETS_CACHE="new_cache_dir/", but the program still re-preprocesses the whole dataset without loading the cache. I also tried torch.save(lm_datasets, fw), but the saved file is only 14M. What is the proper way to do this?
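One portable alternative, sketched below with illustrative paths: explicitly save the processed dataset with `save_to_disk` and reload it with `load_from_disk`, instead of relying on the cache directory layout. Note also that `HF_DATASETS_CACHE` is read when `datasets` is imported, so it must be set before launching the script.

```python
from datasets import Dataset, load_from_disk

# Stand-in for the preprocessed lm_datasets; paths are illustrative.
ds = Dataset.from_dict({"text": ["hello", "world"]})
ds.save_to_disk("preprocessed/lm_datasets")    # writes Arrow files + metadata

# Later, possibly on another machine after copying the folder:
reloaded = load_from_disk("preprocessed/lm_datasets")
print(reloaded[0])  # {'text': 'hello'}
```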
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2345/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2345/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2344
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2344/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2344/comments
https://api.github.com/repos/huggingface/datasets/issues/2344/events
https://github.com/huggingface/datasets/issues/2344
885,331,505
MDU6SXNzdWU4ODUzMzE1MDU=
2,344
Is there a way to join multiple datasets in one?
{ "login": "alexvaca0", "id": 35173563, "node_id": "MDQ6VXNlcjM1MTczNTYz", "avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexvaca0", "html_url": "https://github.com/alexvaca0", "followers_url": "https://api.github.com/users/alexvaca0/followers", "following_url": "https://api.github.com/users/alexvaca0/following{/other_user}", "gists_url": "https://api.github.com/users/alexvaca0/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexvaca0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexvaca0/subscriptions", "organizations_url": "https://api.github.com/users/alexvaca0/orgs", "repos_url": "https://api.github.com/users/alexvaca0/repos", "events_url": "https://api.github.com/users/alexvaca0/events{/privacy}", "received_events_url": "https://api.github.com/users/alexvaca0/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi ! We don't have `join`/`merge` on a certain column as in pandas.\r\nMaybe you can just use the [concatenate_datasets](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate#datasets.concatenate_datasets) function.\r\n" ]
1,620,688,570,000
1,620,721,488,000
null
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** I need to join 2 datasets, one that is in the hub and another I've created from my files. Is there an easy way to join these two? **Describe the solution you'd like** I'd like to join them with a merge or join method, just like pandas DataFrames. **Additional context** If you want to extend an existing dataset with more data, for example for training a language model, you need that functionality. I haven't found it in the documentation.
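A minimal sketch of the `concatenate_datasets` approach suggested in the comment above; the toy datasets stand in for a hub dataset and a locally built one, and their features must match.

```python
from datasets import Dataset, concatenate_datasets

hub_ds = Dataset.from_dict({"text": ["a", "b"]})     # stand-in for a dataset from the hub
local_ds = Dataset.from_dict({"text": ["c"]})        # stand-in for a dataset built from local files

combined = concatenate_datasets([hub_ds, local_ds])  # row-wise concatenation; schemas must match
print(len(combined))  # 3
```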
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2344/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2344/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2343
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2343/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2343/comments
https://api.github.com/repos/huggingface/datasets/issues/2343/events
https://github.com/huggingface/datasets/issues/2343
883,208,539
MDU6SXNzdWU4ODMyMDg1Mzk=
2,343
Columns are removed before or after map function applied?
{ "login": "taghizad3h", "id": 8199406, "node_id": "MDQ6VXNlcjgxOTk0MDY=", "avatar_url": "https://avatars.githubusercontent.com/u/8199406?v=4", "gravatar_id": "", "url": "https://api.github.com/users/taghizad3h", "html_url": "https://github.com/taghizad3h", "followers_url": "https://api.github.com/users/taghizad3h/followers", "following_url": "https://api.github.com/users/taghizad3h/following{/other_user}", "gists_url": "https://api.github.com/users/taghizad3h/gists{/gist_id}", "starred_url": "https://api.github.com/users/taghizad3h/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/taghizad3h/subscriptions", "organizations_url": "https://api.github.com/users/taghizad3h/orgs", "repos_url": "https://api.github.com/users/taghizad3h/repos", "events_url": "https://api.github.com/users/taghizad3h/events{/privacy}", "received_events_url": "https://api.github.com/users/taghizad3h/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[]
1,620,614,180,000
1,620,614,180,000
null
NONE
null
null
null
## Describe the bug According to the documentation, when applying the map function the columns listed in [remove_columns](https://huggingface.co/docs/datasets/processing.html#removing-columns) will be removed after they are passed to the function, but in the [source code](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map) it's documented that they are removed before applying the function. I think the source code doc is more accurate, right?
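A small sketch illustrating the behavior in question: a column listed in `remove_columns` is still visible inside the mapped function, but is dropped from the output.

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello", "world"], "label": [0, 1]})

def add_length(example):
    # "text" is still accessible here, even though it is being removed.
    return {"length": len(example["text"])}

out = ds.map(add_length, remove_columns=["text"])
print(out.column_names)  # ['label', 'length'] -- "text" is gone from the output
```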
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2343/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2343/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2342
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2342/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2342/comments
https://api.github.com/repos/huggingface/datasets/issues/2342/events
https://github.com/huggingface/datasets/pull/2342
882,981,420
MDExOlB1bGxSZXF1ZXN0NjM2NDg0MzM3
2,342
Docs - CER above 1
{ "login": "borisdayma", "id": 715491, "node_id": "MDQ6VXNlcjcxNTQ5MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/borisdayma", "html_url": "https://github.com/borisdayma", "followers_url": "https://api.github.com/users/borisdayma/followers", "following_url": "https://api.github.com/users/borisdayma/following{/other_user}", "gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}", "starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions", "organizations_url": "https://api.github.com/users/borisdayma/orgs", "repos_url": "https://api.github.com/users/borisdayma/repos", "events_url": "https://api.github.com/users/borisdayma/events{/privacy}", "received_events_url": "https://api.github.com/users/borisdayma/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,620,603,660,000
1,620,653,640,000
1,620,653,640,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2342", "html_url": "https://github.com/huggingface/datasets/pull/2342", "diff_url": "https://github.com/huggingface/datasets/pull/2342.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2342.patch", "merged_at": 1620653640000 }
CER can actually be greater than 1.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2342/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2342/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2341
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2341/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2341/comments
https://api.github.com/repos/huggingface/datasets/issues/2341/events
https://github.com/huggingface/datasets/pull/2341
882,370,933
MDExOlB1bGxSZXF1ZXN0NjM1OTExODI2
2,341
Added the Ascent KB
{ "login": "phongnt570", "id": 6749421, "node_id": "MDQ6VXNlcjY3NDk0MjE=", "avatar_url": "https://avatars.githubusercontent.com/u/6749421?v=4", "gravatar_id": "", "url": "https://api.github.com/users/phongnt570", "html_url": "https://github.com/phongnt570", "followers_url": "https://api.github.com/users/phongnt570/followers", "following_url": "https://api.github.com/users/phongnt570/following{/other_user}", "gists_url": "https://api.github.com/users/phongnt570/gists{/gist_id}", "starred_url": "https://api.github.com/users/phongnt570/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phongnt570/subscriptions", "organizations_url": "https://api.github.com/users/phongnt570/orgs", "repos_url": "https://api.github.com/users/phongnt570/repos", "events_url": "https://api.github.com/users/phongnt570/events{/privacy}", "received_events_url": "https://api.github.com/users/phongnt570/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for approving it!" ]
1,620,569,859,000
1,620,724,619,000
1,620,724,619,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2341", "html_url": "https://github.com/huggingface/datasets/pull/2341", "diff_url": "https://github.com/huggingface/datasets/pull/2341.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2341.patch", "merged_at": 1620724618000 }
Added the Ascent Commonsense KB of 8.9M assertions. - Paper: [Advanced Semantics for Commonsense Knowledge Extraction (WWW'21)](https://arxiv.org/abs/2011.00905) - Website: https://ascent.mpi-inf.mpg.de/ (I am the author of the dataset)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2341/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2341/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2340
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2340/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2340/comments
https://api.github.com/repos/huggingface/datasets/issues/2340/events
https://github.com/huggingface/datasets/pull/2340
882,370,824
MDExOlB1bGxSZXF1ZXN0NjM1OTExNzIx
2,340
More consistent copy logic
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,620,569,853,000
1,620,723,513,000
1,620,723,513,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2340", "html_url": "https://github.com/huggingface/datasets/pull/2340", "diff_url": "https://github.com/huggingface/datasets/pull/2340.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2340.patch", "merged_at": 1620723513000 }
Use `info.copy()` instead of `copy.deepcopy(info)`. `Features.copy` now creates a deep copy.
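A hypothetical illustration of the pattern (names are made up): a `copy()` method that deep-copies its mutable fields, so callers get the same safety as `copy.deepcopy` from the cheaper, clearer call.

```python
import copy
from dataclasses import dataclass, field

@dataclass
class Info:  # hypothetical stand-in for DatasetInfo
    features: dict = field(default_factory=dict)

    def copy(self) -> "Info":
        # Deep-copy the mutable field so the copies are independent.
        return Info(features=copy.deepcopy(self.features))

a = Info(features={"text": "string"})
b = a.copy()
b.features["text"] = "int64"
print(a.features)  # {'text': 'string'} -- the original is unaffected
```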
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2340/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2340/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2338
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2338/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2338/comments
https://api.github.com/repos/huggingface/datasets/issues/2338/events
https://github.com/huggingface/datasets/pull/2338
882,046,077
MDExOlB1bGxSZXF1ZXN0NjM1NjA3NzQx
2,338
fixed download link for web_science
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,620,551,540,000
1,620,653,753,000
1,620,653,753,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2338", "html_url": "https://github.com/huggingface/datasets/pull/2338", "diff_url": "https://github.com/huggingface/datasets/pull/2338.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2338.patch", "merged_at": 1620653753000 }
Fixes #2337. Should work with: `dataset = load_dataset("web_of_science", "WOS11967", ignore_verifications=True)`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2338/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2338/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2337
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2337/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2337/comments
https://api.github.com/repos/huggingface/datasets/issues/2337/events
https://github.com/huggingface/datasets/issues/2337
881,610,567
MDU6SXNzdWU4ODE2MTA1Njc=
2,337
NonMatchingChecksumError for web_of_science dataset
{ "login": "nbroad1881", "id": 24982805, "node_id": "MDQ6VXNlcjI0OTgyODA1", "avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nbroad1881", "html_url": "https://github.com/nbroad1881", "followers_url": "https://api.github.com/users/nbroad1881/followers", "following_url": "https://api.github.com/users/nbroad1881/following{/other_user}", "gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}", "starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions", "organizations_url": "https://api.github.com/users/nbroad1881/orgs", "repos_url": "https://api.github.com/users/nbroad1881/repos", "events_url": "https://api.github.com/users/nbroad1881/events{/privacy}", "received_events_url": "https://api.github.com/users/nbroad1881/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "I've raised a PR for this. Should work with `dataset = load_dataset(\"web_of_science\", \"WOS11967\", ignore_verifications=True)`once it gets merged into the main branch. Thanks for reporting this! " ]
1,620,525,722,000
1,620,653,753,000
1,620,653,753,000
NONE
null
null
null
NonMatchingChecksumError when trying to download the web_of_science dataset. >NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://data.mendeley.com/datasets/9rw3vkcfy4/6/files/c9ea673d-5542-44c0-ab7b-f1311f7d61df/WebOfScience.zip?dl=1'] Setting `ignore_verifications=True` results in OSError. >OSError: Cannot find data file. Original error: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/37ab2c42f50d553c1d0ea432baca3e9e11fedea4aeec63a81e6b7e25dd10d4e7/WOS5736/X.txt' ```python dataset = load_dataset('web_of_science', 'WOS5736') ``` There are 3 configs ('WOS5736', 'WOS11967', 'WOS46985') and none of them works. datasets 1.6.2 python 3.7.10 Ubuntu 18.04.5 LTS
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2337/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2337/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2336
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2336/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2336/comments
https://api.github.com/repos/huggingface/datasets/issues/2336/events
https://github.com/huggingface/datasets/pull/2336
881,298,783
MDExOlB1bGxSZXF1ZXN0NjM0ODk1OTU5
2,336
Fix overflow issue in interpolation search
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "~~Seems like the CI failure is unrelated to this PR~~ (fixed with the merge). \r\n\r\n@lhoestq Can you please verify that everything is OK in terms of speed? Another solution is to change the offsets array dtype to np.int64 (but this doesn't scale in theory compared to Python integer which is unbound). I'm not sure why on my 64-bit machine the default numpy dtype is np.int32 tho.", "Hi ! Thanks for the fix.\r\nUnfortunately in terms of speed this is not acceptable :/\r\nThe `get_batch_of_1024_random_rows` metric or the `benchmark_getitem_100B ` benchmark is almost at 1sec instead of a few milliseconds.\r\n\r\nWould it be possible to avoid the overflow by simply passing `dtype=np.int64` to `np.cumsum` ?\r\nOn windows machines the default is int32 unfortunately so we have to force the dtype to be int64\r\n\r\n", "Yes, casting the array to np.int64 should work as well. Another option would be to cast the array elements (`arr[i], arr[j]`) in interpolation search to Python integers (bound only with memory) before multiplication (the error stems from this part: `(j - i) * (x - arr[i])`) when working with big values. But for now, the first option is OK for the sake of simplicity." ]
1,620,507,096,000
1,620,653,347,000
1,620,653,172,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2336", "html_url": "https://github.com/huggingface/datasets/pull/2336", "diff_url": "https://github.com/huggingface/datasets/pull/2336.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2336.patch", "merged_at": 1620653172000 }
Fixes #2335. More info about this error can be found [here](https://stackoverflow.com/questions/53239890/why-do-i-keep-getting-this-error-runtimewarning-overflow-encountered-in-int-sc/53240100).
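A sketch of the fix direction settled on in the comments: build the offsets with an explicit 64-bit dtype so the interpolation-search arithmetic `(j - i) * (x - arr[i])` runs in int64 even where the default integer is int32 (e.g. Windows). Values are illustrative.

```python
import numpy as np

row_counts = np.ones(1000, dtype=np.int32) * 74_000  # toy stand-in for table chunk sizes

# Without the dtype argument, cumsum keeps the input's int32 dtype, and
# downstream products like (j - i) * (x - arr[i]) can overflow.
offsets = np.cumsum(row_counts, dtype=np.int64)
print(offsets.dtype, offsets[-1])  # int64 74000000
```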
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2336/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2336/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2335
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2335/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2335/comments
https://api.github.com/repos/huggingface/datasets/issues/2335/events
https://github.com/huggingface/datasets/issues/2335
881,291,887
MDU6SXNzdWU4ODEyOTE4ODc=
2,335
Index error in Dataset.map
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[]
1,620,506,697,000
1,620,653,172,000
1,620,653,172,000
CONTRIBUTOR
null
null
null
The following code, if executed on master, raises an IndexError (due to overflow): ```python >>> from datasets import * >>> d = load_dataset("bookcorpus", split="train") Reusing dataset bookcorpus (C:\Users\Mario\.cache\huggingface\datasets\bookcorpus\plain_text\1.0.0\44662c4a114441c35200992bea923b170e6f13f2f0beb7c14e43759cec498700) 2021-05-08 21:23:46.859818: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll >>> d.map(lambda ex: ex) 0%|▎ | 289430/74004228 [00:13<58:41, 20935.33ex/s]c:\users\mario\desktop\projects\datasets-1\src\datasets\table.py:84: RuntimeWarning: overflow encountered in int_scalars k = i + ((j - i) * (x - arr[i]) // (arr[j] - arr[i])) 0%|▎ | 290162/74004228 [00:13<59:11, 20757.23ex/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 1498, in map new_fingerprint=new_fingerprint, File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 174, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "c:\users\mario\desktop\projects\datasets-1\src\datasets\fingerprint.py", line 340, in wrapper out = func(self, *args, **kwargs) File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 1799, in _map_single for i, example in enumerate(pbar): File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\site-packages\tqdm\std.py", line 1133, in __iter__ for obj in iterable: File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 1145, in __iter__ format_kwargs=format_kwargs, File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 1337, in _getitem pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) File "c:\users\mario\desktop\projects\datasets-1\src\datasets\formatting\formatting.py", line 368, in query_table pa_subtable = _query_table(table, key) File "c:\users\mario\desktop\projects\datasets-1\src\datasets\formatting\formatting.py", line 79, in _query_table return table.fast_slice(key % table.num_rows, 1) File "c:\users\mario\desktop\projects\datasets-1\src\datasets\table.py", line 128, in fast_slice i = _interpolation_search(self._offsets, offset) File "c:\users\mario\desktop\projects\datasets-1\src\datasets\table.py", line 91, in _interpolation_search raise IndexError(f"Invalid query '{x}' for size {arr[-1] if len(arr) else 'none'}.") IndexError: Invalid query '290162' for size 74004228. ``` Tested on Windows, can run on Linux if needed. EDIT: It seems like for this to happen, the default NumPy dtype has to be np.int32.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2335/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2335/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2334
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2334/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2334/comments
https://api.github.com/repos/huggingface/datasets/issues/2334/events
https://github.com/huggingface/datasets/pull/2334
879,810,107
MDExOlB1bGxSZXF1ZXN0NjMzNTAzNTEw
2,334
Updating the DART file checksums in GEM
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@sebastianGehrmann " ]
1,620,424,424,000
1,620,425,890,000
1,620,425,890,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2334", "html_url": "https://github.com/huggingface/datasets/pull/2334", "diff_url": "https://github.com/huggingface/datasets/pull/2334.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2334.patch", "merged_at": 1620425890000 }
The DART files were just updated on the source GitHub https://github.com/Yale-LILY/dart/commit/34b3c872da4811523e334f1631e54ca8105dffab
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2334/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2334/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2333
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2333/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2333/comments
https://api.github.com/repos/huggingface/datasets/issues/2333/events
https://github.com/huggingface/datasets/pull/2333
879,214,067
MDExOlB1bGxSZXF1ZXN0NjMyOTUwNzIy
2,333
Fix duplicate keys
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "- @jplu " ]
1,620,401,288,000
1,620,510,451,000
1,620,403,028,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2333", "html_url": "https://github.com/huggingface/datasets/pull/2333", "diff_url": "https://github.com/huggingface/datasets/pull/2333.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2333.patch", "merged_at": 1620403028000 }
As noticed in https://github.com/huggingface/datasets/pull/2245, many datasets yield duplicate keys. Most of the time it was because the counter used for ids was reset at each new data file.
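A minimal sketch of the fix pattern, with hypothetical names: keep one running key across all data files instead of resetting the counter per file.

```python
def _generate_examples(filepaths):
    # Hypothetical generator illustrating the fix: `key` is never reset
    # between files, so every yielded key is unique across the split.
    key = 0
    for filepath in filepaths:
        with open(filepath, encoding="utf-8") as f:
            for line in f:
                yield key, {"text": line.strip()}
                key += 1
```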
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2333/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2333/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2332
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2332/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2332/comments
https://api.github.com/repos/huggingface/datasets/issues/2332/events
https://github.com/huggingface/datasets/pull/2332
879,041,608
MDExOlB1bGxSZXF1ZXN0NjMyNzk1NDE4
2,332
Add note about indices mapping in save_to_disk docstring
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,620,395,382,000
1,620,408,048,000
1,620,408,048,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2332", "html_url": "https://github.com/huggingface/datasets/pull/2332", "diff_url": "https://github.com/huggingface/datasets/pull/2332.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2332.patch", "merged_at": 1620408048000 }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2332/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2332/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2331
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2331/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2331/comments
https://api.github.com/repos/huggingface/datasets/issues/2331/events
https://github.com/huggingface/datasets/issues/2331
879,031,427
MDU6SXNzdWU4NzkwMzE0Mjc=
2,331
Add Topical-Chat
{ "login": "ktangri", "id": 22266659, "node_id": "MDQ6VXNlcjIyMjY2NjU5", "avatar_url": "https://avatars.githubusercontent.com/u/22266659?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ktangri", "html_url": "https://github.com/ktangri", "followers_url": "https://api.github.com/users/ktangri/followers", "following_url": "https://api.github.com/users/ktangri/following{/other_user}", "gists_url": "https://api.github.com/users/ktangri/gists{/gist_id}", "starred_url": "https://api.github.com/users/ktangri/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ktangri/subscriptions", "organizations_url": "https://api.github.com/users/ktangri/orgs", "repos_url": "https://api.github.com/users/ktangri/repos", "events_url": "https://api.github.com/users/ktangri/events{/privacy}", "received_events_url": "https://api.github.com/users/ktangri/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[]
1,620,395,039,000
1,620,395,039,000
null
NONE
null
null
null
## Adding a Dataset - **Name:** Topical-Chat - **Description:** a knowledge-grounded human-human conversation dataset where the underlying knowledge spans 8 broad topics and conversation partners don’t have explicitly defined roles - **Paper:** https://www.isca-speech.org/archive/Interspeech_2019/pdfs/3079.pdf - **Data:** https://github.com/alexa/Topical-Chat - **Motivation:** Good quality, knowledge-grounded dataset that spans a broad range of topics Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2331/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2331/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2330
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2330/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2330/comments
https://api.github.com/repos/huggingface/datasets/issues/2330/events
https://github.com/huggingface/datasets/issues/2330
878,490,927
MDU6SXNzdWU4Nzg0OTA5Mjc=
2,330
Allow passing `desc` to `tqdm` in `Dataset.map()`
{ "login": "cccntu", "id": 31893406, "node_id": "MDQ6VXNlcjMxODkzNDA2", "avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cccntu", "html_url": "https://github.com/cccntu", "followers_url": "https://api.github.com/users/cccntu/followers", "following_url": "https://api.github.com/users/cccntu/following{/other_user}", "gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}", "starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cccntu/subscriptions", "organizations_url": "https://api.github.com/users/cccntu/orgs", "repos_url": "https://api.github.com/users/cccntu/repos", "events_url": "https://api.github.com/users/cccntu/events{/privacy}", "received_events_url": "https://api.github.com/users/cccntu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
null
[]
null
[ "Hi @lhoestq,\r\nShould we change `desc` in [pbar](https://github.com/huggingface/datasets/blob/81fcf88172ed5e3026ef68aed4c0ec6980372333/src/datasets/arrow_dataset.py#L1860) to something meaningful?", "I think the user could pass the `desc` parameter to `map` so that it can be displayed in the tqdm progress bar, as suggested by @cccntu.\r\n\r\nWhen there's no multiprocessing, the `desc` of the progress bar could be the `desc` passed by the user.\r\nIn multiprocessing, we were already using a `desc` equal to `\"#\" + str(rank)`.\r\nWe can change it to be `(desc or \"\") + \"#\" + str(rank)` instead.\r\n\r\nIn the end, since both `desc` and `rank` could be None, we can have:\r\n```python\r\npbar_desc = (desc or \"\") + \"#\" + str(rank) if rank is not None else desc\r\n```\r\n\r\nFinally let's remember that if we add `desc` as a new parameter to `map`, we should add it to the `ignore_kwargs` list of the `@fingerprint_transform` decorator of `Dataset._map_single` since we don't want this parameter to affect the fingerprint of the resulting dataset." ]
1,620,366,774,000
1,622,041,161,000
1,622,041,161,000
CONTRIBUTOR
null
null
null
It's normal to have many `map()` calls, and some of them can take a few minutes, so it would be nice to have a description on the progress bar. Alternative solution: print the description before/after the `map()` call.
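A sketch of the requested usage, assuming the parameter is added to `map` as proposed in the comments (hypothetical until the feature is merged):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello", "world"]})

# `desc` would be forwarded to tqdm so the progress bar
# reads "Computing lengths: ...".
ds = ds.map(lambda ex: {"length": len(ex["text"])}, desc="Computing lengths")
```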
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2330/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2330/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2329
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2329/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2329/comments
https://api.github.com/repos/huggingface/datasets/issues/2329/events
https://github.com/huggingface/datasets/pull/2329
877,924,198
MDExOlB1bGxSZXF1ZXN0NjMxODA3MTk0
2,329
Add cache dir for in-memory datasets
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Yes, having `cache_dir` as an attribute looks cleaner.\r\n\r\n\r\n\r\n", "Good job! Looking forward to this new feature! 🥂", "@lhoestq Sorry for the late reply. Yes, I'll start working on tests. Thanks for the detailed explanation of the current issues with caching (like the idea of adding the `use_caching` parameter to `load_dataset`) ", "@lhoestq Sure. I'm aware this is a high-priority issue to some extent, so feel free to take over.\r\n\r\nFew suggestions I have:\r\n* there is a slight difference between setting `use_caching` to `False` in `load_dataset` and disabling caching globally with `set_caching_enabled(False)` because the former will never execute the following code (`self._cache_dir` is always `False`): \r\nhttps://github.com/huggingface/datasets/blob/c231abdb174987419bbde3360b5b9d6a4672c736/src/datasets/arrow_dataset.py#L1807-L1824\r\n, so I'm just checking whether this is intended (if yes, maybe the docs should mention this) or not?\r\n* think we should add the `use_caching` parameter to every method that has the `keep_in_memory` (and `in_memory` 😃) parameter in its signature for better consistency, but I say let's address this in a separate PR. IMO we need one more PR that will deal exclusively with consistency in the caching logic.", "Hi @mariosasko \r\nWe discussed internally and we think that this feature might not be the direction we're doing to take for these reasons:\r\n\r\n- it goes against our simple definition of caching: on-disk == uses file cache, and in-memory == nothing is written to disk. I think it adds too much complexity just for a minimal flexibility addition\r\n- there are a few edge cases which are really confusing:\r\n - map on an in memory dataset with a cache_file_name specified by the user -> should the result be in memory or from disk ?\r\n - it would require a special cache directory just for in memory datasets, since they don’t have a preferred directory for caching\r\n- it would break a lot of stuff and would require to rewrite significant parts of the core code and the tests\r\n\r\n\r\nSo in the end we're probably going to close this PR.\r\nLet me know what you think, and thank you anyway for your help on this !", "Hi,\r\n\r\nI'm fine with that. I agree this adds too much complexity. Btw, I like the idea of reverting default in-memory for small datasets that led to this PR.", "Superseded by #2460 (to close issue #2458)." ]
1,620,329,732,000
1,623,181,608,000
1,623,179,206,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2329", "html_url": "https://github.com/huggingface/datasets/pull/2329", "diff_url": "https://github.com/huggingface/datasets/pull/2329.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2329.patch", "merged_at": null }
Adds the cache dir attribute to DatasetInfo as suggested by @lhoestq. Should fix #2322
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2329/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2329/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2328
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2328/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2328/comments
https://api.github.com/repos/huggingface/datasets/issues/2328/events
https://github.com/huggingface/datasets/pull/2328
877,673,896
MDExOlB1bGxSZXF1ZXN0NjMxNTg2MzU2
2,328
Add Matthews/Pearson/Spearman correlation metrics
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,620,317,367,000
1,620,320,290,000
1,620,320,290,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2328", "html_url": "https://github.com/huggingface/datasets/pull/2328", "diff_url": "https://github.com/huggingface/datasets/pull/2328.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2328.patch", "merged_at": 1620320290000 }
Added three metrics: - The Matthews correlation coefficient (from sklearn) - The Pearson correlation coefficient (from scipy) - The Spearman correlation coefficient (from scipy) cc @sgugger
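A minimal usage sketch for one of the new metrics; the other two follow the same `compute` pattern.

```python
from datasets import load_metric

metric = load_metric("matthews_correlation")
result = metric.compute(predictions=[0, 1, 1, 0], references=[0, 1, 0, 0])
print(result)  # {'matthews_correlation': 0.57...}
```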
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2328/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2328/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2327
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2327/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2327/comments
https://api.github.com/repos/huggingface/datasets/issues/2327/events
https://github.com/huggingface/datasets/issues/2327
877,565,831
MDU6SXNzdWU4Nzc1NjU4MzE=
2,327
A syntax error in example
{ "login": "mymusise", "id": 6883957, "node_id": "MDQ6VXNlcjY4ODM5NTc=", "avatar_url": "https://avatars.githubusercontent.com/u/6883957?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mymusise", "html_url": "https://github.com/mymusise", "followers_url": "https://api.github.com/users/mymusise/followers", "following_url": "https://api.github.com/users/mymusise/following{/other_user}", "gists_url": "https://api.github.com/users/mymusise/gists{/gist_id}", "starred_url": "https://api.github.com/users/mymusise/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mymusise/subscriptions", "organizations_url": "https://api.github.com/users/mymusise/orgs", "repos_url": "https://api.github.com/users/mymusise/repos", "events_url": "https://api.github.com/users/mymusise/events{/privacy}", "received_events_url": "https://api.github.com/users/mymusise/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "cc @beurkinger but I think this has been fixed internally and will soon be updated right ?", "This issue has been fixed." ]
1,620,311,684,000
1,621,479,859,000
1,621,479,859,000
NONE
null
null
null
![image](https://user-images.githubusercontent.com/6883957/117315905-b47a5c00-aeba-11eb-91eb-b2a4a0212a56.png) Sorry to report with an image; I can't find the template source code of this snippet.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2327/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2327/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2326
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2326/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2326/comments
https://api.github.com/repos/huggingface/datasets/issues/2326/events
https://github.com/huggingface/datasets/pull/2326
876,829,254
MDExOlB1bGxSZXF1ZXN0NjMwODk3MjI4
2,326
Enable auto-download for PAN-X / Wikiann domain in XTREME
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,620,248,318,000
1,620,376,870,000
1,620,376,870,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2326", "html_url": "https://github.com/huggingface/datasets/pull/2326", "diff_url": "https://github.com/huggingface/datasets/pull/2326.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2326.patch", "merged_at": 1620376870000 }
This PR replaces the manual download of the `PAN-X.lang` domains with an auto-download from a Dropbox link provided by the Wikiann author. We also add the relevant dummy data for these domains. While re-generating `dataset_infos.json` I ran into a `KeyError` in the `udpos.Arabic` domain, so I have included a fix for this as well.
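After this change, the PAN-X configs should load without the manual download step; a sketch, with the config name assumed from the XTREME naming scheme:

```python
from datasets import load_dataset

# "PAN-X.en" is the English Wikiann config; other languages follow the same pattern.
wikiann_en = load_dataset("xtreme", "PAN-X.en", split="train")
print(wikiann_en[0])
```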
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2326/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2326/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2325
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2325/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2325/comments
https://api.github.com/repos/huggingface/datasets/issues/2325/events
https://github.com/huggingface/datasets/pull/2325
876,653,121
MDExOlB1bGxSZXF1ZXN0NjMwNzU1MzIx
2,325
Added the HLGD dataset
{ "login": "tingofurro", "id": 2609265, "node_id": "MDQ6VXNlcjI2MDkyNjU=", "avatar_url": "https://avatars.githubusercontent.com/u/2609265?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tingofurro", "html_url": "https://github.com/tingofurro", "followers_url": "https://api.github.com/users/tingofurro/followers", "following_url": "https://api.github.com/users/tingofurro/following{/other_user}", "gists_url": "https://api.github.com/users/tingofurro/gists{/gist_id}", "starred_url": "https://api.github.com/users/tingofurro/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tingofurro/subscriptions", "organizations_url": "https://api.github.com/users/tingofurro/orgs", "repos_url": "https://api.github.com/users/tingofurro/repos", "events_url": "https://api.github.com/users/tingofurro/events{/privacy}", "received_events_url": "https://api.github.com/users/tingofurro/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Is there anything else needed from my end?", "Thanks Bhavitvya and Quentin, this was very streamlined!" ]
1,620,233,609,000
1,620,831,313,000
1,620,828,998,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2325", "html_url": "https://github.com/huggingface/datasets/pull/2325", "diff_url": "https://github.com/huggingface/datasets/pull/2325.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2325.patch", "merged_at": 1620828998000 }
Added the Headline Grouping Dataset (HLGD), from the NAACL2021 paper: News Headline Grouping as a Challenging NLU Task Dataset Link: https://github.com/tingofurro/headline_grouping Paper link: https://people.eecs.berkeley.edu/~phillab/pdfs/NAACL2021_HLG.pdf
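A hedged loading sketch, assuming the dataset script is registered under the acronym `hlgd`:

```python
from datasets import load_dataset

hlgd = load_dataset("hlgd", split="train")  # name assumed; check the hub listing
print(hlgd[0])
```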
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2325/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2325/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2324
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2324/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2324/comments
https://api.github.com/repos/huggingface/datasets/issues/2324/events
https://github.com/huggingface/datasets/pull/2324
876,602,064
MDExOlB1bGxSZXF1ZXN0NjMwNzE1NTQz
2,324
Create Audio feature
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/8", "html_url": "https://github.com/huggingface/datasets/milestone/8", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/8/labels", "id": 6968069, "node_id": "MI_kwDODunzps4AalMF", "number": 8, "title": "1.12", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 4, "closed_issues": 2, "state": "open", "created_at": 1626881696000, "updated_at": 1634120793000, "due_on": 1630306800000, "closed_at": null }
[ "For optimal storage, it would be better to:\r\n- store only the audio file path in the cache Arrow file\r\n- perform decoding of the audio file (into audio array and sample rate) on the fly, while loading the dataset from cache (or by adding a convenient `load_audio` function)", "Thanks a lot @lhoestq for your helpful insights! 🤗 ", "Just one step before having a first running example to benchmark.\r\n\r\nDecision to make: how to call the function `dataset.features.decode_example`:\r\n- The usual approach until now in speech applications: call it in a subsequent `.map` function\r\n - Pros: multiprocessing can be used out of the box\r\n - Cons: large disk storage required for caching decoded audio files, although having it cached will enhance speed for further usage\r\n- Approach suggested by @lhoestq (see above: https://github.com/huggingface/datasets/pull/2324#discussion_r660758683): doing it in formatting\r\n - Pros: no large disk storage required, as it will be done on the fly while iterating on the dataset\r\n - Cons: it is not cached; need to implement multiprocessing for this case\r\n- Other pros/cons for the previous options?\r\n- Other options?\r\n\r\ncc: @lhoestq @patrickvonplaten @anton-l ", "@albertvillanova I'm in two minds about this, to be honest. For example, if we consider CommonVoice, which is encoded in lossy `mp3`:\n\n- If we decompress `mp3` into raw `wav` arrays, loading a batch will speed up about 40x.\n- However, a 60gb English mp3 dataset will blow up to about 600gb raw (iirc), which is why loading on-the-fly (optionally?) could be very beneficial as well.", "Users can do the conversion from mp3 to wav by themselves if they want to using `map`.\r\n\r\nIMO it's better if we can keep the decoding part with the minimal features to be both easy to understand and flexible, i.e. just having the on-the-fly decoding of the audio data (with the sampling rate parameter)\r\n\r\nDecompressing from mp3 to wav sounds like an optimization that depends on the problem that the user wants to solve, the constrains from its environment (disk space, IO speed), and other parameters (optimal training speed for example). Therefore I would leave this to the user to decide whether it has to do it or not.\r\n\r\nLet me know what you think about this", "@albertvillanova, In my opinion the pros strongly outweigh the cons in the @lhoestq's suggestion which is why I think we should go forward with it. \r\n\r\nThe cons:\r\n- \"the operation won't be cached\" is not to important as the user will most likely access just a couple of audio array to see how it looks like and then for the \"full\" feature extraction she/he will make use of `.map(...)` anyways which means that the result will be cached. 
\r\n- Regarding the multi-processing - if I understand correctly it'll follow the same logic here -> the user will only access some audio arrays for testing playing around with the model but use `.map(...)` for larger operations where multi-processing would still work as before.\r\n\r\nThe advantages mostly solve the main poinpoints being:\r\n- exploding disk space\r\n- bad user experience since the audio is not loaded on the go\r\n\r\n=> So I'm very much in favor of the \"direct-access\" feature", "Update: I've retaken this issue.\r\n\r\nIf the decoding logic is implemented when \"examples are accessed\", then if afterwards we use the `.map`, it tries to apply the decoding twice (as maps iterates over the examples, thus \"accessing them\", before trying to apply the map function)...\r\n\r\nI'm thinking on some other approach...", "I have reimplemented the previous approach, so that we can discuss about it: examples are decoded when accessed.", "What about creating a new specific formatting, just for decoding? This would be only active within a context manager.", "Hi @lhoestq, as we discussed, I've followed your suggestion of implementing the decoding step within the formatting logic: extract-decode-format. Feel free to tell me what you think.\r\n\r\n@patrickvonplaten and @anton-l, could you have a look at the use case in the test (https://github.com/huggingface/datasets/pull/2324/files#diff-58e348f6e4deaa5f3119e420a5d48ebb82875a78c28628831748fb54f59b2c78R34-R50) and tell me if this is aligned with your needs? Thanks.", "Hi @lhoestq, if you validate this approach, we could merge the Audio feature this (or early next) week.", "Sure it looks nice this way :) Feel free to continue !", "As discussed, we should pay attention when applying `map` to a dataset with `Audio` feature, in order to avoid decoding the audio data twice.\r\n\r\nOne proposed solution is to pass `input_columns` to `map`. Just, note that the field containing the Audio feature should not be passed in `input_columns` (not possible, for example, to map the audio file path to a new directory).\r\n\r\nI suggest again (3rd time, sorry, lol) using a formatting context manager (as we already use for PyTorch/TensorFlow: https://huggingface.co/docs/datasets/torch_tensorflow.html).\r\n\r\nAbove (https://github.com/huggingface/datasets/pull/2324#issuecomment-915244003), I suggested to define a formatting just for decoding: the decoding of the audio data is only performed if this specific formatting is set (`ds.set_format(\"decoding\")`) or within a context manager (`with ds.formatted_as(\"decoding\"): ...`)\r\n\r\nNow, I would like also to suggest an alternative formatting for **non-decoding** (if decoding is the default behavior), for a use case like this:\r\n```python\r\ndef change_dir(example):\r\n example[\"audio\"] = \"dir/\" + example[\"audio\"]\r\n\r\n\r\nwith ds.formatted_as(\"no_decoding\"):\r\n print(ds[0]) # {\"audio\": \"path/to/file.wav\"}\r\n ds.map(change_dir)\r\n print(ds[0]) # {\"audio\": \"dir/path/to/file.wav\"}\r\n\r\nprint(ds[0]) # {\"audio\": {\"path\": \"dir/path/to/file.wav\", \"array\": np.array([1., 2., 3...]), \"sampling_rate\": 44100}}\r\n```\r\n\r\nPlease, just tell me what you think.\r\nCC: @lhoestq @patrickvonplaten @anton-l ", "> As discussed, we should pay attention when applying `map` to a dataset with `Audio` feature, in order to avoid decoding the audio data twice.\r\n> \r\n> One proposed solution is to pass `input_columns` to `map`. 
Just, note that the field containing the Audio feature should not be passed in `input_columns` (not possible, for example, to map the audio file path to a new directory).\r\n> \r\n> I suggest again (3rd time, sorry, lol) using a formatting context manager (as we already use for PyTorch/TensorFlow: https://huggingface.co/docs/datasets/torch_tensorflow.html).\r\n> \r\n> Above ([#2324 (comment)](https://github.com/huggingface/datasets/pull/2324#issuecomment-915244003)), I suggested to define a formatting just for decoding: the decoding of the audio data is only performed if this specific formatting is set (`ds.set_format(\"decoding\")`) or within a context manager (`with ds.formatted_as(\"decoding\"): ...`)\r\n> \r\n> Now, I would like also to suggest an alternative formatting for **non-decoding** (if decoding is the default behavior), for a use case like this:\r\n> \r\n> ```python\r\n> def change_dir(example):\r\n> example[\"audio\"] = \"dir/\" + example[\"audio\"]\r\n> \r\n> \r\n> with ds.formatted_as(\"no_decoding\"):\r\n> print(ds[0]) # {\"audio\": \"path/to/file.wav\"}\r\n> ds.map(change_dir)\r\n> print(ds[0]) # {\"audio\": \"dir/path/to/file.wav\"}\r\n> \r\n> print(ds[0]) # {\"audio\": {\"path\": \"dir/path/to/file.wav\", \"array\": np.array([1., 2., 3...]), \"sampling_rate\": 44100}}\r\n> ```\r\n> \r\n> Please, just tell me what you think.\r\n> CC: @lhoestq @patrickvonplaten @anton-l\r\n\r\nI'm fine with a context manager! There is no way to **not** decode the audio if its key is not accessed no?\r\n\r\nE.g.\r\n\r\n```python\r\ndef load(batch):\r\n batch[\"speech_array\"] = torchaudio.load(batch[\"file\"])\r\n return batch\r\n\r\nds.map(load)\r\n```\r\n\r\ndoes *e.g.* not access the \"audio\" key `batch[\"audio\"}` but there is no way to not decode it without major changes no? \r\n\r\n=> I'm happy with both the context manager and using `input_colmuns`. Both of those solutions are equally good to me if a \"not-access-key-no-decoding\" solution is just not feasible. I let you guys decide :-)", "> \r\n> There is no way to **not** decode the audio if its key is not accessed no?\r\n> \r\n> E.g...\r\n> \r\n> does _e.g._ not access the \"audio\" key `batch[\"audio\"}` but there is no way to not decode it without major changes no?\r\n\r\n@patrickvonplaten I think therefore we should rethink the implementation of the Audio feature: its goal is to enrich/simplify the user experience when working with audio files. If on the other hand, you see that the current implementation may be problematic/unsatisfying/not-optimal, then we miss the point of creating this feature. This feature should be useful to users, not inconvenient.", "> > There is no way to **not** decode the audio if its key is not accessed no?\r\n> > E.g...\r\n> > does _e.g._ not access the \"audio\" key `batch[\"audio\"}` but there is no way to not decode it without major changes no?\r\n> \r\n> @patrickvonplaten I think therefore we should rethink the implementation of the Audio feature: its goal is to enrich/simplify the user experience when working with audio files. If on the other hand, you see that the current implementation may be problematic/unsatisfying/not-optimal, then we miss the point of creating this feature. This feature should be useful to users, not inconvenient.\r\n\r\nThanks a lot for the message! 
I'm discussing a bit with @anton-l at the moment - will share our results as soon as possible", "Current implementation: see use cases in file https://github.com/huggingface/datasets/blob/0f80e6eaa6f596ff6287eb33587e2d9c69af0e73/tests/features/test_audio.py\r\n\r\nAutomatic decoding:\r\n- when directly accessing an example or a batch\r\n  ```python\r\n  dset[0]\r\n  dset[:2]\r\n  ```\r\n- during map, only if the audio field is accessed:\r\n  ```python\r\n  def process_audio_sampling_rate(example):\r\n      example[\"double_sampling_rate\"] = 2 * example[\"audio\"][\"sampling_rate\"]\r\n      return example\r\n\r\n  decoded_dset = dset.map(process_audio_sampling_rate)\r\n  ```\r\n\r\nNo automatic decoding:\r\n- during map if the audio field is not accessed:\r\n  ```python\r\n  def process_text(example):\r\n      example[\"text\"] = example[\"text\"] + \" World!\"\r\n      return example\r\n\r\n  decoded_dset = dset.map(process_text)\r\n  ```\r\n\r\nThe types of example and batch are kept as usual, `dict[str, Any]` and `dict[str, list[Any]]` respectively.\r\n\r\nCC: @patrickvonplaten @anton-l @lhoestq ", "That's awesome! Thanks so much for your work on this @albertvillanova!", "Oh and maybe have a test to make sure that casting the Audio feature to change the sampling rate works as expected ?", "@lhoestq the test for the resampling is already in place in `test_audio_resampling`: \r\nhttps://github.com/huggingface/datasets/pull/2324/files#diff-58e348f6e4deaa5f3119e420a5d48ebb82875a78c28628831748fb54f59b2c78R48-R56", "Please note that we should agree on the API: see 53d6d73\r\n\r\nThis is just a proposed implementation:\r\n- Create a new method named `cast_column`, which performs a shallow kind of cast (without using `map()` or caching)\r\n\r\nWe should agree on the name, because as it is, it might be confused with `cast` (and users might think `cast_column` caches the result like `cast` does)\r\n\r\nCC: @lhoestq @patrickvonplaten @anton-l ", "IMO cast and cast_column should have the exact same behavior, to make the experience simple for the user (no distinction between shallow or deep cast).\r\n\r\nMaybe we should change `cast` to use `cast_column` on every column and make `cast_column` use `map` if and only if it's necessary. For Audio, for example, `map` is not needed.\r\n\r\nWe just need to do some tests to know which casts always need map and which ones don't. This implies either looking at the PyArrow source code (the documentation doesn't mention all these details) or playing with PyArrow to figure it out.\r\n\r\nI guess for now we can just have the simplest `cast_column` which always uses map unless it's an Audio feature type.\r\n\r\nLet me know what you think !", "@lhoestq I totally agree: `cast` and `cast_column` should be analogous to each other.\r\n\r\nFor the implementation, let me try something simpler than the one suggested by you...", "@lhoestq what do you think of an approach like this 633ef09?\r\n\r\nIf it's OK, then we should implement passing parameters to `cast`.", "@lhoestq maybe for now we could make a simple implementation and finish this PR. Then we could make a follow-up PR to deal specifically with the optimal implementation of `cast_column` and `cast`, as this issue is not specific to the Audio feature.", "> @lhoestq what do you think of an approach like this 633ef09?\r\n\r\nYea that's good enough for the time being :)\r\n\r\nI think the last thing we need to do is make sure that `cast_column` changes the fingerprint of the dataset. 
Feel free to use the `fingerprint_transform` decorator, as for `remove_columns`.\r\n\r\n(note that cast currently doesn't use the decorator since it's based on `map`, which already updates the fingerprint).", "> \r\n> I think the last thing we need to do is make sure that `cast_column` changes the fingerprint of the dataset. Feel free to use the `fingerprint_transform` decorator, as for `remove_columns`.\r\n> \r\n> (note that cast currently doesn't use the decorator since it's based on `map`, which already updates the fingerprint).\r\n\r\n@lhoestq note that `cast_column` may call `cast` in some cases, and the decorator would not be necessary for these cases...\r\n- I did it by setting `inplace=False`, and updating the fingerprint explicitly only when `cast` is not called.", "I think the current state of this PR could be included in our next release, as an experimental feature, for stress testing it and trying to find all potential issues. What do you think?\r\n\r\nCC: @lhoestq @patrickvonplaten @anton-l ", "Looks great! Ready to try it out on the transformers examples after the release :)", "Think we are good to merge here, no? :-)" ]
1,620,230,122,000
1,634,120,793,000
1,634,120,793,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2324", "html_url": "https://github.com/huggingface/datasets/pull/2324", "diff_url": "https://github.com/huggingface/datasets/pull/2324.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2324.patch", "merged_at": 1634120793000 }
Create `Audio` feature to handle raw audio files. Some decisions to be further discussed: - I have chosen `soundfile` as the audio library; another interesting library is `librosa`, but this requires `soundfile` (see [here](https://github.com/librosa/librosa/blob/main/setup.cfg#L53)). If we require some more advanced functionalities, we could eventually switch the library. - I have implemented the audio feature as an extra: `pip install datasets[audio]`. For the moment, the typical datasets user uses only text datasets, and there is no need to impose additional package requirements for audio/image on them if they do not need them. - For tests, I require audio dependencies (so that all audio functionalities are checked with our CI test suite); I exclude Linux platforms, which require an additional library to be installed with the distribution package manager - I also require `pytest-datadir`, which allows having (audio) data files for tests - The audio data contains: array and sample_rate. - The array is reshaped as a 1D array (expected input for `Wav2Vec2`). Note that to install `soundfile` on Linux, you need to install `libsndfile` using your distribution’s package manager, for example `sudo apt-get install libsndfile1`. ## Requirements Specification - Access example with audio loading and resampling: ```python ds[0]["audio"] ``` - Map with audio loading & resampling: ```python def preprocess(batch): batch["input_values"] = processor(batch["audio"]).input_values return batch ds = ds.map(preprocess) ``` - Map without audio loading and resampling: ```python def preprocess(batch): batch["labels"] = processor(batch["target_text"]).input_values return batch ds = ds.map(preprocess) ``` - Additional requirement specification (see https://github.com/huggingface/datasets/pull/2324#pullrequestreview-768864998): Cast audio column to change sampling rate: ```python ds = ds.cast_column("audio", Audio(sampling_rate=16_000)) ```
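A minimal usage sketch tying the requirements above together. Only `Audio`, `cast_column`, and the decode-on-access behavior come from this PR; the dataset name and column layout are placeholder assumptions:

```python
from datasets import load_dataset, Audio

# Hypothetical speech dataset with an "audio" column holding file paths
ds = load_dataset("some_speech_dataset", split="train")  # assumed name

# Shallow cast: change the target sampling rate without running map() or caching
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

# Decoding happens on access: the value is a dict with path, array and sampling_rate
sample = ds[0]["audio"]
print(sample["path"], sample["sampling_rate"], sample["array"].shape)
```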
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2324/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2324/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2323
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2323/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2323/comments
https://api.github.com/repos/huggingface/datasets/issues/2323/events
https://github.com/huggingface/datasets/issues/2323
876,438,507
MDU6SXNzdWU4NzY0Mzg1MDc=
2,323
load_dataset("timit_asr") gives back duplicates of just one sample text
{ "login": "ekeleshian", "id": 33647474, "node_id": "MDQ6VXNlcjMzNjQ3NDc0", "avatar_url": "https://avatars.githubusercontent.com/u/33647474?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ekeleshian", "html_url": "https://github.com/ekeleshian", "followers_url": "https://api.github.com/users/ekeleshian/followers", "following_url": "https://api.github.com/users/ekeleshian/following{/other_user}", "gists_url": "https://api.github.com/users/ekeleshian/gists{/gist_id}", "starred_url": "https://api.github.com/users/ekeleshian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ekeleshian/subscriptions", "organizations_url": "https://api.github.com/users/ekeleshian/orgs", "repos_url": "https://api.github.com/users/ekeleshian/repos", "events_url": "https://api.github.com/users/ekeleshian/events{/privacy}", "received_events_url": "https://api.github.com/users/ekeleshian/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Upgrading datasets to version 1.6 fixes the issue", "This bug was fixed in #1995. Upgrading the `datasets` should work! ", "Thanks @ekeleshian for having reported.\r\n\r\nI am closing this issue once that you updated `datasets`. Feel free to reopen it if the problem persists." ]
1,620,220,488,000
1,620,383,550,000
1,620,383,550,000
NONE
null
null
null
## Describe the bug When you look up the key ["train"] and then ['text'], you get back a list with just one sentence duplicated 4620 times, namely the sentence "Would such an act of refusal be useful?". Similarly, when you look up ['test'] and then ['text'], the list is one sentence, "The bungalow was pleasantly situated near the shore.", repeated 1680 times. I tried to work around the issue by downgrading to datasets version 1.3.0, inspired by [this post](https://www.gitmemory.com/issue/huggingface/datasets/2052/798904836), and removing the entire huggingface directory from ~/.cache, but I still get the same issue. ## Steps to reproduce the bug ```python from datasets import load_dataset timit = load_dataset("timit_asr") print(timit['train']['text']) print(timit['test']['text']) ``` ## Expected Result Rows of diverse text, like what is shown in the [wav2vec2.0 tutorial](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tuning_Wav2Vec2_for_English_ASR.ipynb) <img width="485" alt="Screen Shot 2021-05-05 at 9 09 57 AM" src="https://user-images.githubusercontent.com/33647474/117146094-d9b77f00-ad81-11eb-8306-f281850c127a.png"> ## Actual results Rows of repeated text. <img width="319" alt="Screen Shot 2021-05-05 at 9 11 53 AM" src="https://user-images.githubusercontent.com/33647474/117146231-f8b61100-ad81-11eb-834a-fc10410b0c9c.png"> ## Versions - Datasets: 1.3.0 - Python: 3.9.1 - Platform: macOS-11.2.1-x86_64-i386-64bit
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2323/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2323/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2322
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2322/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2322/comments
https://api.github.com/repos/huggingface/datasets/issues/2322/events
https://github.com/huggingface/datasets/issues/2322
876,383,853
MDU6SXNzdWU4NzYzODM4NTM=
2,322
Calls to map are not cached.
{ "login": "villmow", "id": 2743060, "node_id": "MDQ6VXNlcjI3NDMwNjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2743060?v=4", "gravatar_id": "", "url": "https://api.github.com/users/villmow", "html_url": "https://github.com/villmow", "followers_url": "https://api.github.com/users/villmow/followers", "following_url": "https://api.github.com/users/villmow/following{/other_user}", "gists_url": "https://api.github.com/users/villmow/gists{/gist_id}", "starred_url": "https://api.github.com/users/villmow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/villmow/subscriptions", "organizations_url": "https://api.github.com/users/villmow/orgs", "repos_url": "https://api.github.com/users/villmow/repos", "events_url": "https://api.github.com/users/villmow/events{/privacy}", "received_events_url": "https://api.github.com/users/villmow/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "I tried upgrading to `datasets==1.6.2` and downgrading to `1.6.0`. Both versions produce the same output.\r\n\r\nDowngrading to `1.5.0` works and produces the following output for me:\r\n\r\n```bash\r\nDownloading: 9.20kB [00:00, 3.94MB/s] \r\nDownloading: 5.99kB [00:00, 3.29MB/s] \r\nNo config specified, defaulting to: sst/default\r\nDownloading and preparing dataset sst/default (download: 6.83 MiB, generated: 3.73 MiB, post-processed: Unknown size, total: 10.56 MiB) to /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b...\r\n Dataset sst downloaded and prepared to /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b. Subsequent calls will reuse this data.\r\nexecuted [0, 1]\r\n#0: 0%| | 0/5 [00:00<?, ?ba/s]\r\n#1: 0%| | 0/5 [00:00<?, ?ba/s]\r\nexecuted [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\r\nexecuted [4272, 4273, 4274, 4275, 4276, 4277, 4278, 4279, 4280, 4281]\r\nexecuted [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009]\r\nexecuted [5272, 5273, 5274, 5275, 5276, 5277, 5278, 5279, 5280, 5281]\r\nexecuted [2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009]\r\nexecuted [6272, 6273, 6274, 6275, 6276, 6277, 6278, 6279, 6280, 6281]\r\nexecuted [3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, 3008, 3009]\r\nexecuted [7272, 7273, 7274, 7275, 7276, 7277, 7278, 7279, 7280, 7281]\r\nexecuted [4000, 4001, 4002, 4003, 4004, 4005, 4006, 4007, 4008, 4009]\r\n#0: 100%|██████████| 5/5 [00:00<00:00, 94.83ba/s]\r\nexecuted [8272, 8273, 8274, 8275, 8276, 8277, 8278, 8279, 8280, 8281]\r\n#1: 100%|██████████| 5/5 [00:00<00:00, 92.75ba/s]\r\nexecuted [0, 1]\r\n#0: 0%| | 0/1 [00:00<?, ?ba/s]\r\n#1: 0%| | 0/1 [00:00<?, ?ba/s]\r\nexecuted [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\r\nexecuted [551, 552, 553, 554, 555, 556, 557, 558, 559, 560]\r\n#0: 100%|██████████| 1/1 [00:00<00:00, 118.81ba/s]\r\n#1: 100%|██████████| 1/1 [00:00<00:00, 123.06ba/s]\r\nexecuted [0, 1]\r\n#0: 0%| | 0/2 [00:00<?, ?ba/s]\r\n#1: 0%| | 0/2 [00:00<?, ?ba/s]\r\nexecuted [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\r\nexecuted [1105, 1106, 1107, 1108, 1109, 1110, 1111, 1112, 1113, 1114]\r\nexecuted [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009]\r\n#0: 100%|██████████| 2/2 [00:00<00:00, 119.42ba/s]\r\nexecuted [2105, 2106, 2107, 2108, 2109, 2110, 2111, 2112, 2113, 2114]\r\n#1: 100%|██████████| 2/2 [00:00<00:00, 123.33ba/s]\r\n\r\n\r\n\r\n ############################## \r\n\r\n\r\n\r\nexecuted [0, 1]\r\nLoading cached processed dataset at /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b/cache-6079777aa097c8f8.arrow\r\nLoading cached processed dataset at /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b/cache-2dc05c46f68eda6e.arrow\r\nexecuted [0, 1]\r\nLoading cached processed dataset at /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b/cache-1ca347e7430b98f1.arrow\r\nLoading cached processed dataset at /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b/cache-c0f1a73ce3ba40cd.arrow\r\nexecuted [0, 1]\r\nLoading cached processed dataset at 
/home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b/cache-832a1407bf1ac5b7.arrow\r\nLoading cached processed dataset at /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b/cache-036316a259b773c4.arrow\r\n- Datasets: 1.5.0\r\n- Python: 3.8.3 (default, May 19 2020, 18:47:26) \r\n[GCC 7.3.0]\r\n- Platform: Linux-5.4.0-72-generic-x86_64-with-glibc2.10\r\n```", "Hi,\r\n\r\nset `keep_in_memory` to False when loading a dataset (`sst = load_dataset(\"sst\", keep_in_memory=False)`) to prevent it from being loaded in memory. Currently, in-memory datasets fail to find cached files due to this check (always False for them):\r\n\r\nhttps://github.com/huggingface/datasets/blob/241a0b4a3a868778ee91e767ad406f9da7610df2/src/datasets/arrow_dataset.py#L1718\r\n\r\n@albertvillanova It seems like this behavior was overlooked in #2182.\r\n\r\n", "Hi @villmow, thanks for reporting. \r\n\r\nAs @mariosasko has pointed out, we did not consider this case when introducing the feature of automatic in-memory for small datasets. This needs to be fixed.", "Hi ! Currently a dataset that is in memory doesn't know in which directory it has to read/write cache files.\r\nOn the other hand, a dataset that is loaded from disk (via memory mapping) uses the directory where the dataset is located to read/write cache files.\r\n\r\nBecause of that, currently in-memory datasets simply don't use caching.\r\n\r\nMaybe a Dataset object could have a `cache_dir` that is set to the directory where the arrow files are created during `load_dataset` ?", "Fixed once the default in-memory feature was reverted:\r\nClosed by #2460 (to close issue #2458).", "Please @villmow, feel free to update to the latest `datasets` version (1.8)." ]
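To make the workaround from the first comment concrete, a small sketch reusing the script from the issue below (same `sst` setup as reported there):

```python
from datasets import load_dataset

# keep_in_memory=False keeps the dataset memory-mapped from disk,
# so map() can find and reuse its cache files on the second call
sst = load_dataset("sst", keep_in_memory=False)

def foo(samples, i):
    return samples

sst.map(foo, batched=True, with_indices=True, num_proc=2)  # computes and writes cache files
sst.map(foo, batched=True, with_indices=True, num_proc=2)  # reuses the cached results
```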
1,620,216,687,000
1,623,179,402,000
1,623,179,301,000
NONE
null
null
null
## Describe the bug Somehow caching does not work for me anymore. Am I doing something wrong, or is there anything that I missed? ## Steps to reproduce the bug ```python import datasets datasets.set_caching_enabled(True) sst = datasets.load_dataset("sst") def foo(samples, i): print("executed", i[:10]) return samples # first call x = sst.map(foo, batched=True, with_indices=True, num_proc=2) print('\n'*3, "#" * 30, '\n'*3) # second call y = sst.map(foo, batched=True, with_indices=True, num_proc=2) # print version import sys import platform print(f""" - Datasets: {datasets.__version__} - Python: {sys.version} - Platform: {platform.platform()} """) ``` ## Actual results This code prints the following output for me: ```bash No config specified, defaulting to: sst/default Reusing dataset sst (/home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/b8a7889ef01c5d3ae8c379b84cc4080f8aad3ac2bc538701cbe0ac6416fb76ff) #0: 0%| | 0/5 [00:00<?, ?ba/s] #1: 0%| | 0/5 [00:00<?, ?ba/s] executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] executed [4272, 4273, 4274, 4275, 4276, 4277, 4278, 4279, 4280, 4281] executed [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009] executed [5272, 5273, 5274, 5275, 5276, 5277, 5278, 5279, 5280, 5281] executed [2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009] executed [6272, 6273, 6274, 6275, 6276, 6277, 6278, 6279, 6280, 6281] executed [3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, 3008, 3009] executed [7272, 7273, 7274, 7275, 7276, 7277, 7278, 7279, 7280, 7281] executed [4000, 4001, 4002, 4003, 4004, 4005, 4006, 4007, 4008, 4009] #0: 100%|██████████| 5/5 [00:00<00:00, 59.85ba/s] executed [8272, 8273, 8274, 8275, 8276, 8277, 8278, 8279, 8280, 8281] #1: 100%|██████████| 5/5 [00:00<00:00, 60.85ba/s] #0: 0%| | 0/1 [00:00<?, ?ba/s] #1: 0%| | 0/1 [00:00<?, ?ba/s]executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] #0: 100%|██████████| 1/1 [00:00<00:00, 69.32ba/s] executed [551, 552, 553, 554, 555, 556, 557, 558, 559, 560] #1: 100%|██████████| 1/1 [00:00<00:00, 70.93ba/s] #0: 0%| | 0/2 [00:00<?, ?ba/s] #1: 0%| | 0/2 [00:00<?, ?ba/s]executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] executed [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009] #0: 100%|██████████| 2/2 [00:00<00:00, 63.25ba/s] executed [1105, 1106, 1107, 1108, 1109, 1110, 1111, 1112, 1113, 1114] executed [2105, 2106, 2107, 2108, 2109, 2110, 2111, 2112, 2113, 2114] #1: 100%|██████████| 2/2 [00:00<00:00, 57.69ba/s] ############################## #0: 0%| | 0/5 [00:00<?, ?ba/s] #1: 0%| | 0/5 [00:00<?, ?ba/s] executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] executed [4272, 4273, 4274, 4275, 4276, 4277, 4278, 4279, 4280, 4281] executed [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009] executed [5272, 5273, 5274, 5275, 5276, 5277, 5278, 5279, 5280, 5281] executed [2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009] executed [6272, 6273, 6274, 6275, 6276, 6277, 6278, 6279, 6280, 6281] executed [3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, 3008, 3009] executed [4000, 4001, 4002, 4003, 4004, 4005, 4006, 4007, 4008, 4009] #0: 100%|██████████| 5/5 [00:00<00:00, 58.10ba/s] executed [7272, 7273, 7274, 7275, 7276, 7277, 7278, 7279, 7280, 7281] executed [8272, 8273, 8274, 8275, 8276, 8277, 8278, 8279, 8280, 8281] #1: 100%|██████████| 5/5 [00:00<00:00, 57.19ba/s] #0: 0%| | 0/1 [00:00<?, ?ba/s] #1: 0%| | 0/1 [00:00<?, ?ba/s] executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] #0: 100%|██████████| 1/1 [00:00<00:00, 60.10ba/s] executed [551, 552, 553, 554, 555, 556, 557, 558, 559, 560] #1: 100%|██████████| 1/1 [00:00<00:00, 53.82ba/s] 
#0: 0%| | 0/2 [00:00<?, ?ba/s] #1: 0%| | 0/2 [00:00<?, ?ba/s] executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] executed [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009] executed [1105, 1106, 1107, 1108, 1109, 1110, 1111, 1112, 1113, 1114] #0: 100%|██████████| 2/2 [00:00<00:00, 72.76ba/s] executed [2105, 2106, 2107, 2108, 2109, 2110, 2111, 2112, 2113, 2114] #1: 100%|██████████| 2/2 [00:00<00:00, 71.55ba/s] - Datasets: 1.6.1 - Python: 3.8.3 (default, May 19 2020, 18:47:26) [GCC 7.3.0] - Platform: Linux-5.4.0-72-generic-x86_64-with-glibc2.10 ``` ## Expected results Caching should work.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2322/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2322/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2321
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2321/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2321/comments
https://api.github.com/repos/huggingface/datasets/issues/2321/events
https://github.com/huggingface/datasets/pull/2321
876,304,364
MDExOlB1bGxSZXF1ZXN0NjMwNDc3NDUy
2,321
Set encoding in OSCAR dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,620,210,423,000
1,620,211,855,000
1,620,211,855,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2321", "html_url": "https://github.com/huggingface/datasets/pull/2321", "diff_url": "https://github.com/huggingface/datasets/pull/2321.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2321.patch", "merged_at": 1620211854000 }
Set explicit `utf-8` encoding in OSCAR dataset, to avoid using the system default `cp1252` on Windows platforms. Fix #2319.
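A minimal sketch of the kind of change this PR describes, assuming a file-reading loop like the one in the traceback of #2319 (`filepath` is a placeholder, not the actual OSCAR script code):

```python
# Before: the encoding falls back to the platform default (cp1252 on Windows)
# with open(filepath) as f: ...

# After: the encoding is pinned explicitly, so behavior is the same on all platforms
with open(filepath, encoding="utf-8") as f:
    for line in f:
        ...
```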
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2321/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2321/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2320
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2320/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2320/comments
https://api.github.com/repos/huggingface/datasets/issues/2320/events
https://github.com/huggingface/datasets/pull/2320
876,257,026
MDExOlB1bGxSZXF1ZXN0NjMwNDM5NjI5
2,320
Set default name in init_dynamic_modules
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,620,207,003,000
1,620,287,874,000
1,620,287,874,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2320", "html_url": "https://github.com/huggingface/datasets/pull/2320", "diff_url": "https://github.com/huggingface/datasets/pull/2320.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2320.patch", "merged_at": 1620287874000 }
Set default value for the name of dynamic modules. Close #2318.
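Based on the snippet quoted in #2318, callers should then be able to obtain the dynamic modules path without importing the module-name constant (a sketch; the import path is as quoted in that issue, not a guaranteed public API):

```python
import datasets

# With the default name in place, no constant needs to be passed explicitly
dynamic_modules_path = datasets.load.init_dynamic_modules()
print(dynamic_modules_path)
```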
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2320/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2320/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2319
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2319/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2319/comments
https://api.github.com/repos/huggingface/datasets/issues/2319/events
https://github.com/huggingface/datasets/issues/2319
876,251,376
MDU6SXNzdWU4NzYyNTEzNzY=
2,319
UnicodeDecodeError for OSCAR (Afrikaans)
{ "login": "sgraaf", "id": 8904453, "node_id": "MDQ6VXNlcjg5MDQ0NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/8904453?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgraaf", "html_url": "https://github.com/sgraaf", "followers_url": "https://api.github.com/users/sgraaf/followers", "following_url": "https://api.github.com/users/sgraaf/following{/other_user}", "gists_url": "https://api.github.com/users/sgraaf/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgraaf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgraaf/subscriptions", "organizations_url": "https://api.github.com/users/sgraaf/orgs", "repos_url": "https://api.github.com/users/sgraaf/repos", "events_url": "https://api.github.com/users/sgraaf/events{/privacy}", "received_events_url": "https://api.github.com/users/sgraaf/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @sgraaf.\r\n\r\nI am going to have a look at it. \r\n\r\nI guess the expected codec is \"UTF-8\". Normally, when no explicitly codec is passed, Python uses one which is platform-dependent. For Linux machines, the default codec is `utf_8`, which is OK. However for Windows machine, the default codec is `cp1252`, which causes the problem.", "Awesome, thank you. 😃 ", "@sgraaf, I have just merged the fix in the master branch.\r\n\r\nYou can either:\r\n- install `datasets` from source code\r\n- wait until we make the next release of `datasets`\r\n- set the `utf-8` codec as your default instead of `cp1252`. This can be done by activating the Python [UTF-8 mode](https://www.python.org/dev/peps/pep-0540) either by passing the command-line option `-X utf8` or by setting the environment variable `PYTHONUTF8=1`." ]
1,620,206,572,000
1,620,212,251,000
1,620,211,855,000
NONE
null
null
null
## Describe the bug When loading the [OSCAR dataset](https://huggingface.co/datasets/oscar) (specifically `unshuffled_deduplicated_af`), I encounter a `UnicodeDecodeError`. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("oscar", "unshuffled_deduplicated_af") ``` ## Expected results Anything but an error, really. ## Actual results ```python >>> from datasets import load_dataset >>> dataset = load_dataset("oscar", "unshuffled_deduplicated_af") Downloading: 14.7kB [00:00, 4.91MB/s] Downloading: 3.07MB [00:00, 32.6MB/s] Downloading and preparing dataset oscar/unshuffled_deduplicated_af (download: 62.93 MiB, generated: 163.38 MiB, post-processed: Unknown size, total: 226.32 MiB) to C:\Users\sgraaf\.cache\huggingface\datasets\oscar\unshuffled_deduplicated_af\1.0.0\bd4f96df5b4512007ef9fd17bbc1ecde459fa53d2fc0049cf99392ba2efcc464... Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 81.0/81.0 [00:00<00:00, 40.5kB/s] Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 66.0M/66.0M [00:18<00:00, 3.50MB/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\load.py", line 745, in load_dataset builder_instance.download_and_prepare( File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\builder.py", line 574, in download_and_prepare self._download_and_prepare( File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\builder.py", line 652, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\builder.py", line 979, in _prepare_split for key, record in utils.tqdm( File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\tqdm\std.py", line 1133, in __iter__ for obj in iterable: File "C:\Users\sgraaf\.cache\huggingface\modules\datasets_modules\datasets\oscar\bd4f96df5b4512007ef9fd17bbc1ecde459fa53d2fc0049cf99392ba2efcc464\oscar.py", line 359, in _generate_examples for line in f: File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\encodings\cp1252.py", line 23, in decode return codecs.charmap_decode(input,self.errors,decoding_table)[0] UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 7454: character maps to <undefined> ``` ## Versions Paste the output of the following code: ```python import datasets import sys import platform print(f""" - Datasets: {datasets.__version__} - Python: {sys.version} - Platform: {platform.platform()} """) ``` - Datasets: 1.6.2 - Python: 3.9.4 (tags/v3.9.4:1f2e308, Apr 6 2021, 13:40:21) [MSC v.1928 64 bit (AMD64)] - Platform: Windows-10-10.0.19041-SP0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2319/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2319/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2318
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2318/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2318/comments
https://api.github.com/repos/huggingface/datasets/issues/2318/events
https://github.com/huggingface/datasets/issues/2318
876,212,460
MDU6SXNzdWU4NzYyMTI0NjA=
2,318
[api request] API to obtain "dataset_module" dynamic path?
{ "login": "richardliaw", "id": 4529381, "node_id": "MDQ6VXNlcjQ1MjkzODE=", "avatar_url": "https://avatars.githubusercontent.com/u/4529381?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richardliaw", "html_url": "https://github.com/richardliaw", "followers_url": "https://api.github.com/users/richardliaw/followers", "following_url": "https://api.github.com/users/richardliaw/following{/other_user}", "gists_url": "https://api.github.com/users/richardliaw/gists{/gist_id}", "starred_url": "https://api.github.com/users/richardliaw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richardliaw/subscriptions", "organizations_url": "https://api.github.com/users/richardliaw/orgs", "repos_url": "https://api.github.com/users/richardliaw/repos", "events_url": "https://api.github.com/users/richardliaw/events{/privacy}", "received_events_url": "https://api.github.com/users/richardliaw/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @richardliaw, \r\n\r\nFirst, thanks for the compliments.\r\n\r\nIn relation with your request, currently, the dynamic modules path is obtained this way:\r\n```python\r\nfrom datasets.load import init_dynamic_modules, MODULE_NAME_FOR_DYNAMIC_MODULES\r\n\r\ndynamic_modules_path = init_dynamic_modules(MODULE_NAME_FOR_DYNAMIC_MODULES)\r\n```\r\n\r\nLet me know if it is OK for you this way. \r\n\r\nI could set `MODULE_NAME_FOR_DYNAMIC_MODULES` as default value, so that you could instead obtain the path with:\r\n```\r\ndynamic_modules_path = datasets.load.init_dynamic_modules()\r\n```", "Hi @albertvillanova, the default value proposal seems great :) Looking forward to this!", "I like the idea as well ! thanks @albertvillanova ", "Hi @richardliaw, the feature is on the master branch and will be included in the next release in a couple of weeks.", "awesome work @albertvillanova !" ]
1,620,204,048,000
1,620,290,745,000
1,620,287,874,000
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** This is an awesome library. It seems like the dynamic module path in this library has broken some of the hyperparameter tuning functionality: https://discuss.huggingface.co/t/using-hyperparameter-search-in-trainer/785/34 This is because Ray will spawn new processes, and each process will load modules by path. However, we need to explicitly inform Ray to load the right modules, or else it will error upon import. I'd like an API to obtain the dynamic paths. This will allow us to support this functionality in this awesome library while being future-proof. **Describe the solution you'd like** `datasets.get_dynamic_paths -> List[str]` will be sufficient for my use case. By offering this API, we will be able to address the following issues (by patching the ray integration sufficiently): https://github.com/huggingface/blog/issues/106 https://github.com/huggingface/transformers/issues/11565 https://discuss.huggingface.co/t/using-hyperparameter-search-in-trainer/785/34 https://discuss.huggingface.co/t/using-hyperparameter-search-in-trainer/785/35
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2318/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2318/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2317
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2317/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2317/comments
https://api.github.com/repos/huggingface/datasets/issues/2317/events
https://github.com/huggingface/datasets/pull/2317
875,767,318
MDExOlB1bGxSZXF1ZXN0NjMwMDQxNzc4
2,317
Fix incorrect version specification for the pyarrow package
{ "login": "cemilcengiz", "id": 32267027, "node_id": "MDQ6VXNlcjMyMjY3MDI3", "avatar_url": "https://avatars.githubusercontent.com/u/32267027?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cemilcengiz", "html_url": "https://github.com/cemilcengiz", "followers_url": "https://api.github.com/users/cemilcengiz/followers", "following_url": "https://api.github.com/users/cemilcengiz/following{/other_user}", "gists_url": "https://api.github.com/users/cemilcengiz/gists{/gist_id}", "starred_url": "https://api.github.com/users/cemilcengiz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cemilcengiz/subscriptions", "organizations_url": "https://api.github.com/users/cemilcengiz/orgs", "repos_url": "https://api.github.com/users/cemilcengiz/repos", "events_url": "https://api.github.com/users/cemilcengiz/events{/privacy}", "received_events_url": "https://api.github.com/users/cemilcengiz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,620,156,620,000
1,620,209,356,000
1,620,206,518,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2317", "html_url": "https://github.com/huggingface/datasets/pull/2317", "diff_url": "https://github.com/huggingface/datasets/pull/2317.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2317.patch", "merged_at": 1620206518000 }
This PR addresses the bug in the pyarrow version specification, which is detailed in #2316. The fix simply puts a comma between the version bounds. Fix #2316.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2317/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2317/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2316
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2316/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2316/comments
https://api.github.com/repos/huggingface/datasets/issues/2316/events
https://github.com/huggingface/datasets/issues/2316
875,756,353
MDU6SXNzdWU4NzU3NTYzNTM=
2,316
Incorrect version specification for pyarrow
{ "login": "cemilcengiz", "id": 32267027, "node_id": "MDQ6VXNlcjMyMjY3MDI3", "avatar_url": "https://avatars.githubusercontent.com/u/32267027?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cemilcengiz", "html_url": "https://github.com/cemilcengiz", "followers_url": "https://api.github.com/users/cemilcengiz/followers", "following_url": "https://api.github.com/users/cemilcengiz/following{/other_user}", "gists_url": "https://api.github.com/users/cemilcengiz/gists{/gist_id}", "starred_url": "https://api.github.com/users/cemilcengiz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cemilcengiz/subscriptions", "organizations_url": "https://api.github.com/users/cemilcengiz/orgs", "repos_url": "https://api.github.com/users/cemilcengiz/repos", "events_url": "https://api.github.com/users/cemilcengiz/events{/privacy}", "received_events_url": "https://api.github.com/users/cemilcengiz/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Fixed by #2317." ]
1,620,155,711,000
1,620,209,403,000
1,620,209,403,000
CONTRIBUTOR
null
null
null
## Describe the bug The pyarrow dependency is incorrectly specified in setup.py file, in [this line](https://github.com/huggingface/datasets/blob/3a3e5a4da20bfcd75f8b6a6869b240af8feccc12/setup.py#L77). Also as a snippet: ```python "pyarrow>=1.0.0<4.0.0", ``` ## Steps to reproduce the bug ```bash pip install "pyarrow>=1.0.0<4.0.0" ``` ## Expected results It is expected to get a pyarrow version between 1.0.0 (inclusive) and 4.0.0 (exclusive). ## Actual results pip ignores the specified versions since there is a missing comma between the lower and upper limits. Therefore, pip installs the latest pyarrow version from PYPI, which is 4.0.0. This is especially problematic since "conda env export" fails due to incorrect version specification. Here is the conda error as well: ```bash conda env export InvalidVersionSpec: Invalid version '1.0.0<4.0.0': invalid character(s) ``` ## Fix suggestion Put a comma between the version limits which means replacing the line in setup.py file with the following: ```python "pyarrow>=1.0.0,<4.0.0", ``` ## Versions Paste the output of the following code: ```python - Datasets: 1.6.2 - Python: 3.7.10 (default, Feb 26 2021, 18:47:35) [GCC 7.3.0] - Platform: Linux-5.4.0-42-generic-x86_64-with-debian-buster-sid ```
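The effect of the missing comma can be checked with the `packaging` library (a sketch; `packaging` implements the same specifier grammar that pip uses):

```python
from packaging.specifiers import SpecifierSet

fixed = SpecifierSet(">=1.0.0,<4.0.0")
print("3.0.0" in fixed)  # True: inside the intended range
print("4.0.0" in fixed)  # False: the upper bound is exclusive

# The original string ">=1.0.0<4.0.0" is not two constraints but one
# malformed specifier, which is why tools either reject or misread it.
```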
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2316/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2316/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2315
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2315/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2315/comments
https://api.github.com/repos/huggingface/datasets/issues/2315/events
https://github.com/huggingface/datasets/pull/2315
875,742,200
MDExOlB1bGxSZXF1ZXN0NjMwMDIyMDYy
2,315
Datasets cli improvements
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Additionally, I've deleted the points that are not very relevant for this repo (I guess the deleted points originate from the transformers repo). With this change, running `datasets-cli` is identical to copy-pasting the code from `bug_report.md`, but is more elegant because it doesn't require launching the REPL and copy-pasting the code. " ]
1,620,154,511,000
1,620,664,611,000
1,620,664,610,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2315", "html_url": "https://github.com/huggingface/datasets/pull/2315", "diff_url": "https://github.com/huggingface/datasets/pull/2315.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2315.patch", "merged_at": 1620664610000 }
This PR: * replaces the code from the `bug_report.md` that was used to get relevant system info with a dedicated command (a more elegant approach than copy-pasting the code IMO) * removes the `download` command (copied from the transformers repo?) * adds missing help messages to the cli commands
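A sketch of the intended workflow after this PR, assuming the new subcommand is named `env` (mirroring `transformers-cli env`; the exact name is an assumption):

```python
import subprocess

# One command instead of launching a REPL and copy-pasting the snippet
# from bug_report.md
subprocess.run(["datasets-cli", "env"], check=True)
```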
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2315/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2315/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2314
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2314/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2314/comments
https://api.github.com/repos/huggingface/datasets/issues/2314/events
https://github.com/huggingface/datasets/pull/2314
875,729,271
MDExOlB1bGxSZXF1ZXN0NjMwMDExODc4
2,314
Minor refactor prepare_module
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq this is the PR that I mentioned to you, which can be considered as a first step in refactoring `prepare_module`.", "closing in favor of #2986 " ]
1,620,153,446,000
1,634,116,054,000
1,634,116,054,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2314", "html_url": "https://github.com/huggingface/datasets/pull/2314", "diff_url": "https://github.com/huggingface/datasets/pull/2314.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2314.patch", "merged_at": null }
Start to refactor `prepare_module` to try to decouple functionality. This PR does:
- extract function `_initialize_dynamic_modules_namespace_package`
- extract function `_find_module_in_github_or_s3`
- some renaming of variables
- use of f-strings
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2314/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2314/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2313
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2313/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2313/comments
https://api.github.com/repos/huggingface/datasets/issues/2313/events
https://github.com/huggingface/datasets/pull/2313
875,475,367
MDExOlB1bGxSZXF1ZXN0NjI5ODEwNTc4
2,313
Remove unused head_hf_s3 function
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,620,135,726,000
1,620,379,902,000
1,620,379,902,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2313", "html_url": "https://github.com/huggingface/datasets/pull/2313", "diff_url": "https://github.com/huggingface/datasets/pull/2313.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2313.patch", "merged_at": null }
Currently, the function `head_hf_s3` is not used:
- neither is its returned result used,
- nor does it raise any exception, as exceptions are caught and returned (not raised).

This PR removes it.
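To illustrate the pattern being removed, here is a minimal sketch (with a hypothetical `head_url` helper, not the actual implementation) of a function whose exceptions are caught and returned rather than raised, so a caller that discards the result never observes a failure:

```python
import requests


def head_url(url, timeout=10):
    # Anti-pattern: the exception is returned instead of raised, so a caller
    # that ignores the return value (or never checks its type) misses errors.
    try:
        return requests.head(url, timeout=timeout)
    except requests.exceptions.RequestException as err:
        return err


result = head_url("https://huggingface.co")
print(isinstance(result, Exception))  # the only way to detect a failure
```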
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2313/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2313/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2312
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2312/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2312/comments
https://api.github.com/repos/huggingface/datasets/issues/2312/events
https://github.com/huggingface/datasets/pull/2312
875,435,726
MDExOlB1bGxSZXF1ZXN0NjI5Nzc4NjUz
2,312
Add rename_columnS method
{ "login": "SBrandeis", "id": 33657802, "node_id": "MDQ6VXNlcjMzNjU3ODAy", "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SBrandeis", "html_url": "https://github.com/SBrandeis", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "repos_url": "https://api.github.com/users/SBrandeis/repos", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Merging then 😄 " ]
1,620,133,073,000
1,620,135,793,000
1,620,135,792,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2312", "html_url": "https://github.com/huggingface/datasets/pull/2312", "diff_url": "https://github.com/huggingface/datasets/pull/2312.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2312.patch", "merged_at": 1620135792000 }
Cherry-picked from #2255
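A quick usage sketch of the new method, assuming it takes a mapping from old to new column names (the single-column `rename_column` counterpart already works this way per name):

```python
from datasets import Dataset

ds = Dataset.from_dict({"premise": ["a"], "hypothesis": ["b"]})
ds = ds.rename_columns({"premise": "text_a", "hypothesis": "text_b"})
print(ds.column_names)  # ['text_a', 'text_b']
```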
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2312/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2312/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2311
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2311/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2311/comments
https://api.github.com/repos/huggingface/datasets/issues/2311/events
https://github.com/huggingface/datasets/pull/2311
875,262,208
MDExOlB1bGxSZXF1ZXN0NjI5NjQwNTMx
2,311
Add SLR52, SLR53 and SLR54 to OpenSLR
{ "login": "cahya-wirawan", "id": 7669893, "node_id": "MDQ6VXNlcjc2Njk4OTM=", "avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cahya-wirawan", "html_url": "https://github.com/cahya-wirawan", "followers_url": "https://api.github.com/users/cahya-wirawan/followers", "following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}", "gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}", "starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions", "organizations_url": "https://api.github.com/users/cahya-wirawan/orgs", "repos_url": "https://api.github.com/users/cahya-wirawan/repos", "events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}", "received_events_url": "https://api.github.com/users/cahya-wirawan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @lhoestq , I am not sure about the error message:\r\n```\r\n#!/bin/bash -eo pipefail\r\n./scripts/datasets_metadata_validator.py\r\nWARNING:root:❌ Failed to validate 'datasets/openslr/README.md':\r\n__init__() got an unexpected keyword argument 'SLR32'\r\nINFO:root:❌ Failed on 1 files.\r\n\r\nExited with code exit status 1\r\nCircleCI received exit code 1 \r\n```\r\nCould you have a look please? Thanks.", "Hi ! The error is unrelated to your PR and has been fixed on master\r\nNext time feel free to merge master into your branch to fix the CI error ;)" ]
1,620,119,283,000
1,620,381,055,000
1,620,381,055,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2311", "html_url": "https://github.com/huggingface/datasets/pull/2311", "diff_url": "https://github.com/huggingface/datasets/pull/2311.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2311.patch", "merged_at": 1620381055000 }
Add large speech datasets for Sinhala, Bengali and Nepali.
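Once merged, the new subsets should be loadable like the existing OpenSLR configurations — a sketch assuming the config names follow the usual `SLR<id>` pattern:

```python
from datasets import load_dataset

# "SLR52" (Sinhala), "SLR53" (Bengali) and "SLR54" (Nepali) are assumed to
# follow the naming convention of the other OpenSLR configurations
sinhala = load_dataset("openslr", "SLR52", split="train")
print(sinhala[0])
```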
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2311/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2311/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2310
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2310/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2310/comments
https://api.github.com/repos/huggingface/datasets/issues/2310/events
https://github.com/huggingface/datasets/pull/2310
875,096,051
MDExOlB1bGxSZXF1ZXN0NjI5NTEwNTg5
2,310
Update README.md
{ "login": "cryoff", "id": 15029054, "node_id": "MDQ6VXNlcjE1MDI5MDU0", "avatar_url": "https://avatars.githubusercontent.com/u/15029054?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cryoff", "html_url": "https://github.com/cryoff", "followers_url": "https://api.github.com/users/cryoff/followers", "following_url": "https://api.github.com/users/cryoff/following{/other_user}", "gists_url": "https://api.github.com/users/cryoff/gists{/gist_id}", "starred_url": "https://api.github.com/users/cryoff/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cryoff/subscriptions", "organizations_url": "https://api.github.com/users/cryoff/orgs", "repos_url": "https://api.github.com/users/cryoff/repos", "events_url": "https://api.github.com/users/cryoff/events{/privacy}", "received_events_url": "https://api.github.com/users/cryoff/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi @cryoff, thanks for completing the dataset card.\r\n\r\nNow there is an automatic validation tool to assure that all dataset cards contain all the relevant information. This is the cause of the non-passing test on your Pull Request:\r\n```\r\nFound fields that are not non-empty list of strings: {'annotations_creators': [], 'language_creators': []}\r\n```" ]
1,620,103,081,000
1,620,110,159,000
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2310", "html_url": "https://github.com/huggingface/datasets/pull/2310", "diff_url": "https://github.com/huggingface/datasets/pull/2310.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2310.patch", "merged_at": null }
Provides a description of the data instances and dataset features
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2310/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2310/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2309
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2309/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2309/comments
https://api.github.com/repos/huggingface/datasets/issues/2309/events
https://github.com/huggingface/datasets/pull/2309
874,644,990
MDExOlB1bGxSZXF1ZXN0NjI5MTU4NjQx
2,309
Fix conda release
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,620,053,579,000
1,620,057,677,000
1,620,057,677,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2309", "html_url": "https://github.com/huggingface/datasets/pull/2309", "diff_url": "https://github.com/huggingface/datasets/pull/2309.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2309.patch", "merged_at": 1620057677000 }
There were a few issues with conda releases (they've been failing for a while now). To fix this I had to:
- add the --single-version-externally-managed flag to the build stage (suggestion from [here](https://stackoverflow.com/a/64825075))
- set the python version of the conda build stage to 3.8 since 3.9 isn't supported
- sync the version requirement of `huggingface_hub`

With these changes I'm working on uploading all missing versions until 1.6.2 to conda

EDIT: I managed to build and upload all missing versions until 1.6.2 to conda :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2309/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2309/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2308
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2308/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2308/comments
https://api.github.com/repos/huggingface/datasets/issues/2308/events
https://github.com/huggingface/datasets/issues/2308
874,559,846
MDU6SXNzdWU4NzQ1NTk4NDY=
2,308
Add COCO evaluation metrics
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi @NielsRogge, \r\nI'd like to contribute these metrics to datasets. Let's start with `CocoEvaluator` first? Currently how are are you sending the ground truths and predictions in coco_evaluator?\r\n", "Great!\r\n\r\nHere's a notebook that illustrates how I'm using `CocoEvaluator`: https://drive.google.com/file/d/1VV92IlaUiuPOORXULIuAdtNbBWCTCnaj/view?usp=sharing\r\n\r\nThe evaluation is near the end of the notebook.\r\n\r\n", "I went through the code you've [mentioned](https://github.com/facebookresearch/detr/blob/a54b77800eb8e64e3ad0d8237789fcbf2f8350c5/datasets/coco_eval.py) and I think there are 2 options on how we can go ahead:\r\n\r\n1) Implement how DETR people have done this (they're relying very heavily on the official implementation and they're focussing on torch dataset here. I feel ours should be something generic instead of pytorch specific.\r\n2) Do this [implementation](https://github.com/cocodataset/cocoapi/blob/ed842bffd41f6ff38707c4f0968d2cfd91088688/PythonAPI/pycocoEvalDemo.ipynb) where user can convert its output and ground truth annotation to pre-defined format and then feed it into our function to calculate metrics (looks very similar to you wanted above)\r\n\r\nIn my opinion, 2nd option looks very clean but I'm still figuring out how's it transforming the box co-ordinates of `coco_gt` which you've passed to `CocoEvaluator` (ground truth for evaluation). Since your model output was already converted to COCO api, I faced little problems there.", "Ok, thanks for the update.\r\n\r\nIndeed, the metrics API of Datasets is framework agnostic, so we can't rely on a PyTorch-only implementation.\r\n\r\n[This file](https://github.com/cocodataset/cocoapi/blob/ed842bffd41f6ff38707c4f0968d2cfd91088688/PythonAPI/pycocotools/cocoeval.py) is probably want we need to implement.\r\n\r\n" ]
1,620,047,285,000
1,622,790,687,000
null
NONE
null
null
null
I'm currently working on adding Facebook AI's DETR model (end-to-end object detection with Transformers) to HuggingFace Transformers. The model is working fine, but regarding evaluation, I'm currently relying on external `CocoEvaluator` and `PanopticEvaluator` objects which are defined in the original repository ([here](https://github.com/facebookresearch/detr/blob/a54b77800eb8e64e3ad0d8237789fcbf2f8350c5/datasets/coco_eval.py#L22) and [here](https://github.com/facebookresearch/detr/blob/a54b77800eb8e64e3ad0d8237789fcbf2f8350c5/datasets/panoptic_eval.py#L13) respectively). Running these in a notebook gives you nice summaries like this: ![image](https://user-images.githubusercontent.com/48327001/116878842-326f0680-ac20-11eb-9061-d6da02193694.png) It would be great if we could import these metrics from the Datasets library, something like this:

```python
import datasets

metric = datasets.load_metric('coco')

for model_input, gold_references in evaluation_dataset:
    model_predictions = model(model_input)
    metric.add_batch(predictions=model_predictions, references=gold_references)

final_score = metric.compute()
```

I think this would be great for object detection and semantic/panoptic segmentation in general, not just for DETR. Reproducing results of object detection papers would be way easier. However, object detection and panoptic segmentation evaluation is a bit more complex than accuracy (it's more like a summary of metrics at different thresholds rather than a single one). I'm not sure how to proceed here, but happy to help make this possible.
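For reference, the official `pycocotools` evaluation that such a metric would likely wrap — a sketch assuming ground-truth annotations and predictions saved as COCO-format JSON files (the file names are placeholders):

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Load ground-truth annotations and model predictions in COCO format
coco_gt = COCO("instances_val2017.json")
coco_dt = coco_gt.loadRes("predictions.json")

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints the AP/AR table at different IoU thresholds
```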
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2308/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2308/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2302
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2302/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2302/comments
https://api.github.com/repos/huggingface/datasets/issues/2302/events
https://github.com/huggingface/datasets/pull/2302
873,961,435
MDExOlB1bGxSZXF1ZXN0NjI4NjIzMDQ3
2,302
Add SubjQA dataset
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I'm not sure why the windows test fails, but looking at the logs it looks like some caching issue on one of the metrics ... maybe re-run and 🤞 ?", "Hi @lewtun, thanks for adding this dataset!\r\n\r\nIf the dataset is going to be referenced heavily, I think it's worth spending some time to make the dataset card really great :) To start, the information that is currently in the `Data collection` paragraph should probably be organized in the `Dataset Creation` section.\r\n\r\nHere's a link to the [relevant section of the guide](https://github.com/huggingface/datasets/blob/master/templates/README_guide.md#dataset-creation), let me know if you have any questions!", "> If the dataset is going to be referenced heavily, I think it's worth spending some time to make the dataset card really great :) To start, the information that is currently in the `Data collection` paragraph should probably be organized in the `Dataset Creation` section.\r\n\r\ngreat idea @yjernite! i've added some extra information / moved things as you suggest and will wrap up the rest tomorrow :)", "hi @yjernite and @lhoestq, i've fleshed out the dataset card and think this is now ready for another round of review!" ]
1,619,967,080,000
1,620,638,479,000
1,620,638,479,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2302", "html_url": "https://github.com/huggingface/datasets/pull/2302", "diff_url": "https://github.com/huggingface/datasets/pull/2302.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2302.patch", "merged_at": 1620638479000 }
Hello datasetters 🙂! Here's an interesting dataset about extractive question-answering on _subjective_ product / restaurant reviews. It's quite challenging for models fine-tuned on SQuAD and provides a nice example of domain adaptation (i.e. fine-tuning a SQuAD model on this domain gives better performance). I found a bug in the start/end indices that I've proposed a fix for here: https://github.com/megagonlabs/SubjQA/pull/2 Unfortunately, the dataset creators are unresponsive, so for now I am using my fork as the source. Will update the URL if/when the creators respond.
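Once the PR is merged, loading should work like this — a sketch assuming the domain names from the paper (e.g. `books`) are used as configuration names:

```python
from datasets import load_dataset

subjqa = load_dataset("subjqa", "books")
print(subjqa["train"][0]["question"])
```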
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2302/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2302/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2301
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2301/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2301/comments
https://api.github.com/repos/huggingface/datasets/issues/2301/events
https://github.com/huggingface/datasets/issues/2301
873,941,266
MDU6SXNzdWU4NzM5NDEyNjY=
2,301
Unable to set up dev env on Windows
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @gchhablani, \r\n\r\nThere are some 3rd-party dependencies that require to build code in C. In this case, it is the library `python-Levenshtein`.\r\n\r\nOn Windows, in order to be able to build C code, you need to install at least `Microsoft C++ Build Tools` version 14. You can find more info here: https://visualstudio.microsoft.com/visual-cpp-build-tools/", "Hi @albertvillanova \r\n\r\nSorry for such a trivial issue ;-; \r\n\r\nThanks a lot." ]
1,619,961,642,000
1,620,055,081,000
1,620,055,054,000
CONTRIBUTOR
null
null
null
Hi I tried installing the `".[dev]"` version on Windows 10 after cloning. Here is the error I'm facing: ```bat (env) C:\testing\datasets>pip install -e ".[dev]" Obtaining file:///C:/testing/datasets Requirement already satisfied: numpy>=1.17 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.19.5) Collecting pyarrow>=0.17.1 Using cached pyarrow-4.0.0-cp37-cp37m-win_amd64.whl (13.3 MB) Requirement already satisfied: dill in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (0.3.1.1) Collecting pandas Using cached pandas-1.2.4-cp37-cp37m-win_amd64.whl (9.1 MB) Requirement already satisfied: requests>=2.19.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (2.25.1) Requirement already satisfied: tqdm<4.50.0,>=4.27 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (4.49.0) Requirement already satisfied: xxhash in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (2.0.2) Collecting multiprocess Using cached multiprocess-0.70.11.1-py37-none-any.whl (108 kB) Requirement already satisfied: fsspec in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (2021.4.0) Collecting huggingface_hub<0.1.0 Using cached huggingface_hub-0.0.8-py3-none-any.whl (34 kB) Requirement already satisfied: importlib_metadata in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (4.0.1) Requirement already satisfied: absl-py in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (0.12.0) Requirement already satisfied: pytest in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (6.2.3) Collecting pytest-xdist Using cached pytest_xdist-2.2.1-py3-none-any.whl (37 kB) Collecting apache-beam>=2.24.0 Using cached apache_beam-2.29.0-cp37-cp37m-win_amd64.whl (3.7 MB) Collecting elasticsearch Using cached elasticsearch-7.12.1-py2.py3-none-any.whl (339 kB) Requirement already satisfied: boto3==1.16.43 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.16.43) Requirement already satisfied: botocore==1.19.43 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.19.43) Collecting moto[s3]==1.3.16 Using cached moto-1.3.16-py2.py3-none-any.whl (879 kB) Collecting rarfile>=4.0 Using cached rarfile-4.0-py3-none-any.whl (28 kB) Collecting tensorflow>=2.3 Using cached tensorflow-2.4.1-cp37-cp37m-win_amd64.whl (370.7 MB) Requirement already satisfied: torch in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.8.1) Requirement already satisfied: transformers in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (4.5.1) Collecting bs4 Using cached bs4-0.0.1-py3-none-any.whl Collecting conllu Using cached conllu-4.4-py2.py3-none-any.whl (15 kB) Collecting langdetect Using cached langdetect-1.0.8-py3-none-any.whl Collecting lxml Using cached lxml-4.6.3-cp37-cp37m-win_amd64.whl (3.5 MB) Collecting mwparserfromhell Using cached mwparserfromhell-0.6-cp37-cp37m-win_amd64.whl (101 kB) Collecting nltk Using cached nltk-3.6.2-py3-none-any.whl (1.5 MB) Collecting openpyxl Using cached openpyxl-3.0.7-py2.py3-none-any.whl (243 kB) Collecting py7zr Using cached py7zr-0.15.2-py3-none-any.whl (66 kB) Collecting tldextract Using cached tldextract-3.1.0-py2.py3-none-any.whl (87 kB) Collecting zstandard Using cached zstandard-0.15.2-cp37-cp37m-win_amd64.whl (582 
kB) Collecting bert_score>=0.3.6 Using cached bert_score-0.3.9-py3-none-any.whl (59 kB) Collecting rouge_score Using cached rouge_score-0.0.4-py2.py3-none-any.whl (22 kB) Collecting sacrebleu Using cached sacrebleu-1.5.1-py3-none-any.whl (54 kB) Requirement already satisfied: scipy in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.6.3) Collecting seqeval Using cached seqeval-1.2.2-py3-none-any.whl Collecting sklearn Using cached sklearn-0.0-py2.py3-none-any.whl Collecting jiwer Using cached jiwer-2.2.0-py3-none-any.whl (13 kB) Requirement already satisfied: toml>=0.10.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (0.10.2) Requirement already satisfied: requests_file>=1.5.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.5.1) Requirement already satisfied: texttable>=1.6.3 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.6.3) Requirement already satisfied: s3fs>=0.4.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (0.4.2) Requirement already satisfied: Werkzeug>=1.0.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.0.1) Collecting black Using cached black-21.4b2-py3-none-any.whl (130 kB) Collecting isort Using cached isort-5.8.0-py3-none-any.whl (103 kB) Collecting flake8==3.7.9 Using cached flake8-3.7.9-py2.py3-none-any.whl (69 kB) Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from boto3==1.16.43->datasets==1.5.0.dev0) (0.10.0) Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from boto3==1.16.43->datasets==1.5.0.dev0) (0.3.7) Requirement already satisfied: urllib3<1.27,>=1.25.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from botocore==1.19.43->datasets==1.5.0.dev0) (1.26.4) Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from botocore==1.19.43->datasets==1.5.0.dev0) (2.8.1) Collecting entrypoints<0.4.0,>=0.3.0 Using cached entrypoints-0.3-py2.py3-none-any.whl (11 kB) Collecting pyflakes<2.2.0,>=2.1.0 Using cached pyflakes-2.1.1-py2.py3-none-any.whl (59 kB) Collecting pycodestyle<2.6.0,>=2.5.0 Using cached pycodestyle-2.5.0-py2.py3-none-any.whl (51 kB) Collecting mccabe<0.7.0,>=0.6.0 Using cached mccabe-0.6.1-py2.py3-none-any.whl (8.6 kB) Requirement already satisfied: jsondiff>=1.1.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.3.0) Requirement already satisfied: pytz in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2021.1) Requirement already satisfied: mock in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (4.0.3) Requirement already satisfied: MarkupSafe<2.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.1.1) Requirement already satisfied: python-jose[cryptography]<4.0.0,>=3.1.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.2.0) Requirement already satisfied: aws-xray-sdk!=0.96,>=0.93 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.8.0) Requirement already satisfied: cryptography>=2.3.0 in 
c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.4.7) Requirement already satisfied: more-itertools in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (8.7.0) Requirement already satisfied: PyYAML>=5.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (5.4.1) Requirement already satisfied: boto>=2.36.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.49.0) Requirement already satisfied: idna<3,>=2.5 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.10) Requirement already satisfied: sshpubkeys>=3.1.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.3.1) Requirement already satisfied: responses>=0.9.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.13.3) Requirement already satisfied: xmltodict in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.12.0) Requirement already satisfied: setuptools in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (52.0.0.post20210125) Requirement already satisfied: Jinja2>=2.10.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.11.3) Requirement already satisfied: zipp in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.4.1) Requirement already satisfied: six>1.9 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.15.0) Requirement already satisfied: ecdsa<0.15 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.14.1) Requirement already satisfied: docker>=2.5.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (5.0.0) Requirement already satisfied: cfn-lint>=0.4.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.49.0) Requirement already satisfied: grpcio<2,>=1.29.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (1.32.0) Collecting hdfs<3.0.0,>=2.1.0 Using cached hdfs-2.6.0-py3-none-any.whl (33 kB) Collecting pyarrow>=0.17.1 Using cached pyarrow-3.0.0-cp37-cp37m-win_amd64.whl (12.6 MB) Collecting fastavro<2,>=0.21.4 Using cached fastavro-1.4.0-cp37-cp37m-win_amd64.whl (394 kB) Requirement already satisfied: httplib2<0.18.0,>=0.8 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.17.4) Collecting pymongo<4.0.0,>=3.8.0 Using cached pymongo-3.11.3-cp37-cp37m-win_amd64.whl (382 kB) Collecting crcmod<2.0,>=1.7 Using cached crcmod-1.7-py3-none-any.whl Collecting avro-python3!=1.9.2,<1.10.0,>=1.8.1 Using cached avro_python3-1.9.2.1-py3-none-any.whl Requirement already satisfied: typing-extensions<3.8.0,>=3.7.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (3.7.4.3) Requirement already satisfied: future<1.0.0,>=0.18.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.18.2) Collecting oauth2client<5,>=2.0.1 Using cached oauth2client-4.1.3-py2.py3-none-any.whl (98 kB) 
Collecting pydot<2,>=1.2.0 Using cached pydot-1.4.2-py2.py3-none-any.whl (21 kB) Requirement already satisfied: protobuf<4,>=3.12.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (3.15.8) Requirement already satisfied: wrapt in c:\programdata\anaconda3\envs\env\lib\site-packages (from aws-xray-sdk!=0.96,>=0.93->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.12.1) Collecting matplotlib Using cached matplotlib-3.4.1-cp37-cp37m-win_amd64.whl (7.1 MB) Requirement already satisfied: junit-xml~=1.9 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.9) Requirement already satisfied: jsonpatch in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.32) Requirement already satisfied: jsonschema~=3.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.2.0) Requirement already satisfied: networkx~=2.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.5.1) Requirement already satisfied: aws-sam-translator>=1.35.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.35.0) Requirement already satisfied: cffi>=1.12 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cryptography>=2.3.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.14.5) Requirement already satisfied: pycparser in c:\programdata\anaconda3\envs\env\lib\site-packages (from cffi>=1.12->cryptography>=2.3.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.20) Requirement already satisfied: pywin32==227 in c:\programdata\anaconda3\envs\env\lib\site-packages (from docker>=2.5.1->moto[s3]==1.3.16->datasets==1.5.0.dev0) (227) Requirement already satisfied: websocket-client>=0.32.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from docker>=2.5.1->moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.58.0) Requirement already satisfied: docopt in c:\programdata\anaconda3\envs\env\lib\site-packages (from hdfs<3.0.0,>=2.1.0->apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.6.2) Requirement already satisfied: filelock in c:\programdata\anaconda3\envs\env\lib\site-packages (from huggingface_hub<0.1.0->datasets==1.5.0.dev0) (3.0.12) Requirement already satisfied: pyrsistent>=0.14.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from jsonschema~=3.0->cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.17.3) Requirement already satisfied: attrs>=17.4.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from jsonschema~=3.0->cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (20.3.0) Requirement already satisfied: decorator<5,>=4.3 in c:\programdata\anaconda3\envs\env\lib\site-packages (from networkx~=2.4->cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (4.4.2) Requirement already satisfied: rsa>=3.1.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from oauth2client<5,>=2.0.1->apache-beam>=2.24.0->datasets==1.5.0.dev0) (4.7.2) Requirement already satisfied: pyasn1-modules>=0.0.5 in c:\programdata\anaconda3\envs\env\lib\site-packages (from oauth2client<5,>=2.0.1->apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.2.8) Requirement already satisfied: pyasn1>=0.1.7 in c:\programdata\anaconda3\envs\env\lib\site-packages (from oauth2client<5,>=2.0.1->apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.4.8) Requirement already satisfied: 
pyparsing>=2.1.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from pydot<2,>=1.2.0->apache-beam>=2.24.0->datasets==1.5.0.dev0) (2.4.7) Requirement already satisfied: certifi>=2017.4.17 in c:\programdata\anaconda3\envs\env\lib\site-packages (from requests>=2.19.0->datasets==1.5.0.dev0) (2020.12.5) Requirement already satisfied: chardet<5,>=3.0.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from requests>=2.19.0->datasets==1.5.0.dev0) (4.0.0) Collecting keras-preprocessing~=1.1.2 Using cached Keras_Preprocessing-1.1.2-py2.py3-none-any.whl (42 kB) Requirement already satisfied: termcolor~=1.1.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorflow>=2.3->datasets==1.5.0.dev0) (1.1.0) Requirement already satisfied: tensorboard~=2.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorflow>=2.3->datasets==1.5.0.dev0) (2.5.0) Requirement already satisfied: wheel~=0.35 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorflow>=2.3->datasets==1.5.0.dev0) (0.36.2) Collecting opt-einsum~=3.3.0 Using cached opt_einsum-3.3.0-py3-none-any.whl (65 kB) Collecting gast==0.3.3 Using cached gast-0.3.3-py2.py3-none-any.whl (9.7 kB) Collecting google-pasta~=0.2 Using cached google_pasta-0.2.0-py3-none-any.whl (57 kB) Requirement already satisfied: tensorflow-estimator<2.5.0,>=2.4.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorflow>=2.3->datasets==1.5.0.dev0) (2.4.0) Collecting astunparse~=1.6.3 Using cached astunparse-1.6.3-py2.py3-none-any.whl (12 kB) Collecting flatbuffers~=1.12.0 Using cached flatbuffers-1.12-py2.py3-none-any.whl (15 kB) Collecting h5py~=2.10.0 Using cached h5py-2.10.0-cp37-cp37m-win_amd64.whl (2.5 MB) Requirement already satisfied: markdown>=2.6.8 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (3.3.4) Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (1.8.0) Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (0.4.4) Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (0.6.0) Requirement already satisfied: google-auth<2,>=1.6.3 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (1.30.0) Requirement already satisfied: cachetools<5.0,>=2.0.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from google-auth<2,>=1.6.3->tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (4.2.2) Requirement already satisfied: requests-oauthlib>=0.7.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (1.3.0) Requirement already satisfied: oauthlib>=3.0.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (3.1.0) Requirement already satisfied: regex!=2019.12.17 in c:\programdata\anaconda3\envs\env\lib\site-packages (from transformers->datasets==1.5.0.dev0) (2021.4.4) Requirement already satisfied: tokenizers<0.11,>=0.10.1 in 
c:\programdata\anaconda3\envs\env\lib\site-packages (from transformers->datasets==1.5.0.dev0) (0.10.2) Requirement already satisfied: sacremoses in c:\programdata\anaconda3\envs\env\lib\site-packages (from transformers->datasets==1.5.0.dev0) (0.0.45) Requirement already satisfied: packaging in c:\programdata\anaconda3\envs\env\lib\site-packages (from transformers->datasets==1.5.0.dev0) (20.9) Collecting pathspec<1,>=0.8.1 Using cached pathspec-0.8.1-py2.py3-none-any.whl (28 kB) Requirement already satisfied: click>=7.1.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from black->datasets==1.5.0.dev0) (7.1.2) Collecting appdirs Using cached appdirs-1.4.4-py2.py3-none-any.whl (9.6 kB) Collecting mypy-extensions>=0.4.3 Using cached mypy_extensions-0.4.3-py2.py3-none-any.whl (4.5 kB) Requirement already satisfied: typed-ast>=1.4.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from black->datasets==1.5.0.dev0) (1.4.3) Collecting beautifulsoup4 Using cached beautifulsoup4-4.9.3-py3-none-any.whl (115 kB) Requirement already satisfied: soupsieve>1.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from beautifulsoup4->bs4->datasets==1.5.0.dev0) (2.2.1) Collecting python-Levenshtein Using cached python-Levenshtein-0.12.2.tar.gz (50 kB) Requirement already satisfied: jsonpointer>=1.9 in c:\programdata\anaconda3\envs\env\lib\site-packages (from jsonpatch->cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.1) Requirement already satisfied: pillow>=6.2.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from matplotlib->bert_score>=0.3.6->datasets==1.5.0.dev0) (8.2.0) Requirement already satisfied: cycler>=0.10 in c:\programdata\anaconda3\envs\env\lib\site-packages (from matplotlib->bert_score>=0.3.6->datasets==1.5.0.dev0) (0.10.0) Requirement already satisfied: kiwisolver>=1.0.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from matplotlib->bert_score>=0.3.6->datasets==1.5.0.dev0) (1.3.1) Collecting multiprocess Using cached multiprocess-0.70.11-py3-none-any.whl (98 kB) Using cached multiprocess-0.70.10.zip (2.4 MB) Using cached multiprocess-0.70.9-py3-none-any.whl Requirement already satisfied: joblib in c:\programdata\anaconda3\envs\env\lib\site-packages (from nltk->datasets==1.5.0.dev0) (1.0.1) Collecting et-xmlfile Using cached et_xmlfile-1.1.0-py3-none-any.whl (4.7 kB) Requirement already satisfied: pyzstd<0.15.0,>=0.14.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from py7zr->datasets==1.5.0.dev0) (0.14.4) Collecting pyppmd<0.13.0,>=0.12.1 Using cached pyppmd-0.12.1-cp37-cp37m-win_amd64.whl (32 kB) Collecting pycryptodome>=3.6.6 Using cached pycryptodome-3.10.1-cp35-abi3-win_amd64.whl (1.6 MB) Collecting bcj-cffi<0.6.0,>=0.5.1 Using cached bcj_cffi-0.5.1-cp37-cp37m-win_amd64.whl (21 kB) Collecting multivolumefile<0.3.0,>=0.2.0 Using cached multivolumefile-0.2.3-py3-none-any.whl (17 kB) Requirement already satisfied: iniconfig in c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (1.1.1) Requirement already satisfied: py>=1.8.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (1.10.0) Requirement already satisfied: pluggy<1.0.0a1,>=0.12 in c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (0.13.1) Requirement already satisfied: atomicwrites>=1.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (1.4.0) Requirement already satisfied: colorama in 
c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (0.4.4) Collecting pytest-forked Using cached pytest_forked-1.3.0-py2.py3-none-any.whl (4.7 kB) Collecting execnet>=1.1 Using cached execnet-1.8.0-py2.py3-none-any.whl (39 kB) Requirement already satisfied: apipkg>=1.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from execnet>=1.1->pytest-xdist->datasets==1.5.0.dev0) (1.5) Collecting portalocker==2.0.0 Using cached portalocker-2.0.0-py2.py3-none-any.whl (11 kB) Requirement already satisfied: scikit-learn>=0.21.3 in c:\programdata\anaconda3\envs\env\lib\site-packages (from seqeval->datasets==1.5.0.dev0) (0.24.2) Requirement already satisfied: threadpoolctl>=2.0.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from scikit-learn>=0.21.3->seqeval->datasets==1.5.0.dev0) (2.1.0) Building wheels for collected packages: python-Levenshtein Building wheel for python-Levenshtein (setup.py) ... error ERROR: Command errored out with exit status 1: command: 'C:\ProgramData\Anaconda3\envs\env\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"'; __file__='"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\VKC~1\AppData\Local\Temp\pip-wheel-8jh7fm18' cwd: C:\Users\VKC~1\AppData\Local\Temp\pip-install-ynt_dbm4\python-levenshtein_c02e7e6f9def4629a475349654670ae9\ Complete output (27 lines): running bdist_wheel running build running build_py creating build creating build\lib.win-amd64-3.7 creating build\lib.win-amd64-3.7\Levenshtein copying Levenshtein\StringMatcher.py -> build\lib.win-amd64-3.7\Levenshtein copying Levenshtein\__init__.py -> build\lib.win-amd64-3.7\Levenshtein running egg_info writing python_Levenshtein.egg-info\PKG-INFO writing dependency_links to python_Levenshtein.egg-info\dependency_links.txt writing entry points to python_Levenshtein.egg-info\entry_points.txt writing namespace_packages to python_Levenshtein.egg-info\namespace_packages.txt writing requirements to python_Levenshtein.egg-info\requires.txt writing top-level names to python_Levenshtein.egg-info\top_level.txt reading manifest file 'python_Levenshtein.egg-info\SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no previously-included files matching '*pyc' found anywhere in distribution warning: no previously-included files matching '*so' found anywhere in distribution warning: no previously-included files matching '.project' found anywhere in distribution warning: no previously-included files matching '.pydevproject' found anywhere in distribution writing manifest file 'python_Levenshtein.egg-info\SOURCES.txt' copying Levenshtein\_levenshtein.c -> build\lib.win-amd64-3.7\Levenshtein copying Levenshtein\_levenshtein.h -> build\lib.win-amd64-3.7\Levenshtein running build_ext building 'Levenshtein._levenshtein' extension error: Microsoft Visual C++ 14.0 or greater is required. 
Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/ ---------------------------------------- ERROR: Failed building wheel for python-Levenshtein Running setup.py clean for python-Levenshtein Failed to build python-Levenshtein Installing collected packages: python-Levenshtein, pytest-forked, pyppmd, pymongo, pyflakes, pydot, pycryptodome, pycodestyle, pyarrow, portalocker, pathspec, pandas, opt-einsum, oauth2client, nltk, mypy-extensions, multivolumefile, multiprocess, moto, mccabe, matplotlib, keras-preprocessing, huggingface-hub, hdfs, h5py, google-pasta, gast, flatbuffers, fastavro, execnet, et-xmlfile, entrypoints, crcmod, beautifulsoup4, bcj-cffi, avro-python3, astunparse, appdirs, zstandard, tldextract, tensorflow, sklearn, seqeval, sacrebleu, rouge-score, rarfile, pytest-xdist, py7zr, openpyxl, mwparserfromhell, lxml, langdetect, jiwer, isort, flake8, elasticsearch, datasets, conllu, bs4, black, bert-score, apache-beam Running setup.py install for python-Levenshtein ... error ERROR: Command errored out with exit status 1: command: 'C:\ProgramData\Anaconda3\envs\env\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"'; __file__='"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\VKC~1\AppData\Local\Temp\pip-record-v7l7zitb\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\ProgramData\Anaconda3\envs\env\Include\python-Levenshtein' cwd: C:\Users\VKC~1\AppData\Local\Temp\pip-install-ynt_dbm4\python-levenshtein_c02e7e6f9def4629a475349654670ae9\ Complete output (27 lines): running install running build running build_py creating build creating build\lib.win-amd64-3.7 creating build\lib.win-amd64-3.7\Levenshtein copying Levenshtein\StringMatcher.py -> build\lib.win-amd64-3.7\Levenshtein copying Levenshtein\__init__.py -> build\lib.win-amd64-3.7\Levenshtein running egg_info writing python_Levenshtein.egg-info\PKG-INFO writing dependency_links to python_Levenshtein.egg-info\dependency_links.txt writing entry points to python_Levenshtein.egg-info\entry_points.txt writing namespace_packages to python_Levenshtein.egg-info\namespace_packages.txt writing requirements to python_Levenshtein.egg-info\requires.txt writing top-level names to python_Levenshtein.egg-info\top_level.txt reading manifest file 'python_Levenshtein.egg-info\SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no previously-included files matching '*pyc' found anywhere in distribution warning: no previously-included files matching '*so' found anywhere in distribution warning: no previously-included files matching '.project' found anywhere in distribution warning: no previously-included files matching '.pydevproject' found anywhere in distribution writing manifest file 'python_Levenshtein.egg-info\SOURCES.txt' copying Levenshtein\_levenshtein.c -> build\lib.win-amd64-3.7\Levenshtein copying Levenshtein\_levenshtein.h -> build\lib.win-amd64-3.7\Levenshtein running build_ext building 'Levenshtein._levenshtein' extension error: Microsoft Visual C++ 14.0 or greater is required. 
Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/ ---------------------------------------- ERROR: Command errored out with exit status 1: 'C:\ProgramData\Anaconda3\envs\env\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"'; __file__='"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\VKC~1\AppData\Local\Temp\pip-record-v7l7zitb\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\ProgramData\Anaconda3\envs\env\Include\python-Levenshtein' Check the logs for full command output. ``` Here are conda and python versions: ```bat (env) C:\testing\datasets>conda --version conda 4.9.2 (env) C:\testing\datasets>python --version Python 3.7.10 ``` Please help me out. Thanks.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2301/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2301/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2300
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2300/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2300/comments
https://api.github.com/repos/huggingface/datasets/issues/2300/events
https://github.com/huggingface/datasets/issues/2300
873,928,169
MDU6SXNzdWU4NzM5MjgxNjk=
2,300
Add VoxPopuli
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 2725241052, "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech", "name": "speech", "color": "d93f0b", "default": false, "description": "" } ]
open
false
null
[]
null
[ "I'm happy to take this on:) One question: The original unlabelled data is stored unsegmented (see e.g. https://github.com/facebookresearch/voxpopuli/blob/main/voxpopuli/get_unlabelled_data.py#L30), but segmenting the audio in the dataset would require a dependency on something like soundfile or torchaudio. An alternative could be to provide the segments start and end times as a Sequence and then it's up to the user to perform the segmentation on-the-fly if they wish?", "Hey @jfainberg,\r\n\r\nThis sounds great! I think adding a dependency would not be a big problem, however automatically segmenting the data probably means that it would take a very long time to do:\r\n\r\n```python\r\ndataset = load_dataset(\"voxpopuli\", \"french\")\r\n```\r\n\r\n=> so as a start I think your option 2 is the way to go!" ]
1,619,957,860,000
1,620,901,912,000
null
MEMBER
null
null
null
## Adding a Dataset - **Name:** Voxpopuli - **Description:** VoxPopuli is a speech corpus whose raw data is collected from 2009-2020 European Parliament event recordings - **Paper:** https://arxiv.org/abs/2101.00390 - **Data:** https://github.com/facebookresearch/voxpopuli - **Motivation:** the biggest unlabeled speech dataset **Note**: Since the dataset is so huge, we should only add the config `10k` in the beginning. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2300/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2300/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2299
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2299/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2299/comments
https://api.github.com/repos/huggingface/datasets/issues/2299/events
https://github.com/huggingface/datasets/issues/2299
873,914,717
MDU6SXNzdWU4NzM5MTQ3MTc=
2,299
My iPhone
{ "login": "Jasonbuchanan1983", "id": 82856229, "node_id": "MDQ6VXNlcjgyODU2MjI5", "avatar_url": "https://avatars.githubusercontent.com/u/82856229?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Jasonbuchanan1983", "html_url": "https://github.com/Jasonbuchanan1983", "followers_url": "https://api.github.com/users/Jasonbuchanan1983/followers", "following_url": "https://api.github.com/users/Jasonbuchanan1983/following{/other_user}", "gists_url": "https://api.github.com/users/Jasonbuchanan1983/gists{/gist_id}", "starred_url": "https://api.github.com/users/Jasonbuchanan1983/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jasonbuchanan1983/subscriptions", "organizations_url": "https://api.github.com/users/Jasonbuchanan1983/orgs", "repos_url": "https://api.github.com/users/Jasonbuchanan1983/repos", "events_url": "https://api.github.com/users/Jasonbuchanan1983/events{/privacy}", "received_events_url": "https://api.github.com/users/Jasonbuchanan1983/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,619,953,871,000
1,627,032,256,000
1,620,029,858,000
NONE
null
null
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2299/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2299/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2298
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2298/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2298/comments
https://api.github.com/repos/huggingface/datasets/issues/2298/events
https://github.com/huggingface/datasets/pull/2298
873,771,942
MDExOlB1bGxSZXF1ZXN0NjI4NDk2NjM2
2,298
Mapping in the distributed setting
{ "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,619,904,185,000
1,620,050,093,000
1,620,050,093,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2298", "html_url": "https://github.com/huggingface/datasets/pull/2298", "diff_url": "https://github.com/huggingface/datasets/pull/2298.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2298.patch", "merged_at": 1620050093000 }
The barrier trick for distributed mapping as discussed on Thursday with @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2298/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2298/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2296
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2296/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2296/comments
https://api.github.com/repos/huggingface/datasets/issues/2296/events
https://github.com/huggingface/datasets/issues/2296
872,974,907
MDU6SXNzdWU4NzI5NzQ5MDc=
2,296
1
{ "login": "zinnyi", "id": 82880142, "node_id": "MDQ6VXNlcjgyODgwMTQy", "avatar_url": "https://avatars.githubusercontent.com/u/82880142?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zinnyi", "html_url": "https://github.com/zinnyi", "followers_url": "https://api.github.com/users/zinnyi/followers", "following_url": "https://api.github.com/users/zinnyi/following{/other_user}", "gists_url": "https://api.github.com/users/zinnyi/gists{/gist_id}", "starred_url": "https://api.github.com/users/zinnyi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zinnyi/subscriptions", "organizations_url": "https://api.github.com/users/zinnyi/orgs", "repos_url": "https://api.github.com/users/zinnyi/repos", "events_url": "https://api.github.com/users/zinnyi/events{/privacy}", "received_events_url": "https://api.github.com/users/zinnyi/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[]
1,619,805,229,000
1,620,029,851,000
1,620,029,851,000
NONE
null
null
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2296/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2295
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2295/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2295/comments
https://api.github.com/repos/huggingface/datasets/issues/2295/events
https://github.com/huggingface/datasets/pull/2295
872,902,867
MDExOlB1bGxSZXF1ZXN0NjI3NzY0NDk3
2,295
Create ExtractManager
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 2851292821, "node_id": "MDU6TGFiZWwyODUxMjkyODIx", "url": "https://api.github.com/repos/huggingface/datasets/labels/refactoring", "name": "refactoring", "color": "B67A40", "default": false, "description": "Restructuring existing code without changing its external behavior" } ]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/6", "html_url": "https://github.com/huggingface/datasets/milestone/6", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "id": 6836458, "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "title": "1.10", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 29, "state": "closed", "created_at": 1623178113000, "updated_at": 1626881809000, "due_on": 1628146800000, "closed_at": 1626881809000 }
[ "Hi @lhoestq,\r\n\r\nOnce that #2578 has been merged, I would like to ask you to have a look at this PR: it implements the same logic as the one in #2578 but for all the other file compression formats.\r\n\r\nThanks.", "I think all is done @lhoestq ;)" ]
1,619,802,814,000
1,626,099,123,000
1,625,731,909,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2295", "html_url": "https://github.com/huggingface/datasets/pull/2295", "diff_url": "https://github.com/huggingface/datasets/pull/2295.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2295.patch", "merged_at": 1625731909000 }
Perform refactoring to decouple extract functionality.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2295/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2295/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2294
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2294/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2294/comments
https://api.github.com/repos/huggingface/datasets/issues/2294/events
https://github.com/huggingface/datasets/issues/2294
872,136,075
MDU6SXNzdWU4NzIxMzYwNzU=
2,294
Slow #0 when using map to tokenize.
{ "login": "VerdureChen", "id": 31714566, "node_id": "MDQ6VXNlcjMxNzE0NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/31714566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VerdureChen", "html_url": "https://github.com/VerdureChen", "followers_url": "https://api.github.com/users/VerdureChen/followers", "following_url": "https://api.github.com/users/VerdureChen/following{/other_user}", "gists_url": "https://api.github.com/users/VerdureChen/gists{/gist_id}", "starred_url": "https://api.github.com/users/VerdureChen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VerdureChen/subscriptions", "organizations_url": "https://api.github.com/users/VerdureChen/orgs", "repos_url": "https://api.github.com/users/VerdureChen/repos", "events_url": "https://api.github.com/users/VerdureChen/events{/privacy}", "received_events_url": "https://api.github.com/users/VerdureChen/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! Have you tried other values for `preprocessing_num_workers` ? Is it always process 0 that is slower ?\r\nThere are no difference between process 0 and the others except that it processes the first shard of the dataset.", "Hi, I have found the reason of it. Before using the map function to tokenize the data, I concatenate the wikipedia and bookcorpus first, like this:\r\n```if args.dataset_name1 is not None:\r\n dataset1 = load_dataset(args.dataset_name1, args.dataset_config_name1, split=\"train\")\r\n dataset1 = dataset1.remove_columns('title')\r\n if args.dataset_name2 is not None:\r\n dataset2 = load_dataset(args.dataset_name2, args.dataset_config_name2,split=\"train\")\r\n assert dataset1.features.type == dataset2.features.type, str(dataset1.features.type)+';'+str(dataset2.features.type)\r\n datasets12 = concatenate_datasets([dataset1, dataset2], split='train')\r\n```\r\nWhen I just use one datasets, e.g. wikipedia, the problem seems no longer exist:\r\n![image](https://user-images.githubusercontent.com/31714566/116967059-13d24380-ace4-11eb-8d14-b7b9c9a275cc.png)\r\n\r\nBookcorpus has more row numbers than Wikipedia, however, it takes much more time to process each batch of wiki than that of bookcorpus. When we first concatenate two datasets and then use _map_ to process the concatenated datasets, e.g. `num_proc=5`, process 0 has to process all of the wikipedia data, causing the problem that #0 takes a longer time to finish the job. \r\n\r\nThe problem is caused by the different characteristic of different datasets. One solution might be using _map_ first to process two datasets seperately, then concatenate the tokenized and processed datasets before input to the `Dataloader`.\r\n\r\n", "That makes sense ! You can indeed use `map` on both datasets separately and then concatenate.\r\nAnother option is to concatenate, then shuffle, and then `map`." ]
1,619,769,633,000
1,620,126,011,000
null
NONE
null
null
null
Hi, _datasets_ is really amazing! I am following [run_mlm_no_trainer.py](url) to pre-train BERT, and it uses `tokenized_datasets = raw_datasets.map( tokenize_function, batched=True, num_proc=args.preprocessing_num_workers, remove_columns=column_names, load_from_cache_file=not args.overwrite_cache, )` to tokenize by multiprocessing. However, I have found that when `num_proc`>1, the process _#0_ is much slower than the others. It looks like this: ![image](https://user-images.githubusercontent.com/31714566/116665555-81246280-a9cc-11eb-8a37-6e608ab310d0.png) It takes more than 12 hours for #0, while the others take just about half an hour. Could anyone tell me whether this is normal, and whether there are any methods to speed it up?
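A sketch of the two workarounds pointed out in the comments field above, hedged: it assumes the two sources are the `wikipedia` and `bookcorpus` datasets, and the tokenizer checkpoint and config name are placeholders standing in for the user's actual setup.

```python
from datasets import concatenate_datasets, load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize_function(examples):
    return tokenizer(examples["text"])

wiki = load_dataset("wikipedia", "20200501.en", split="train").remove_columns("title")
books = load_dataset("bookcorpus", split="train")

# Option A: tokenize each dataset separately, then concatenate the results,
# so no single worker is handed a shard made only of the slow-to-process source.
wiki_tok = wiki.map(tokenize_function, batched=True, num_proc=5, remove_columns=["text"])
books_tok = books.map(tokenize_function, batched=True, num_proc=5, remove_columns=["text"])
tokenized = concatenate_datasets([wiki_tok, books_tok])

# Option B: concatenate first, shuffle, then map, so every shard mixes both sources.
combined = concatenate_datasets([wiki, books]).shuffle(seed=42)
tokenized = combined.map(tokenize_function, batched=True, num_proc=5, remove_columns=["text"])
```

Option B keeps a single `map` call but evens out the per-shard cost by mixing the two sources before sharding.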
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2294/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2294/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2293
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2293/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2293/comments
https://api.github.com/repos/huggingface/datasets/issues/2293/events
https://github.com/huggingface/datasets/pull/2293
872,079,385
MDExOlB1bGxSZXF1ZXN0NjI3MDQzNzQ3
2,293
imdb dataset from Don't Stop Pretraining Paper
{ "login": "BobbyManion", "id": 52530809, "node_id": "MDQ6VXNlcjUyNTMwODA5", "avatar_url": "https://avatars.githubusercontent.com/u/52530809?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BobbyManion", "html_url": "https://github.com/BobbyManion", "followers_url": "https://api.github.com/users/BobbyManion/followers", "following_url": "https://api.github.com/users/BobbyManion/following{/other_user}", "gists_url": "https://api.github.com/users/BobbyManion/gists{/gist_id}", "starred_url": "https://api.github.com/users/BobbyManion/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BobbyManion/subscriptions", "organizations_url": "https://api.github.com/users/BobbyManion/orgs", "repos_url": "https://api.github.com/users/BobbyManion/repos", "events_url": "https://api.github.com/users/BobbyManion/events{/privacy}", "received_events_url": "https://api.github.com/users/BobbyManion/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,619,764,848,000
1,619,765,665,000
1,619,765,665,000
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2293", "html_url": "https://github.com/huggingface/datasets/pull/2293", "diff_url": "https://github.com/huggingface/datasets/pull/2293.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2293.patch", "merged_at": null }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2293/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2293/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2292
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2292/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2292/comments
https://api.github.com/repos/huggingface/datasets/issues/2292/events
https://github.com/huggingface/datasets/pull/2292
871,230,183
MDExOlB1bGxSZXF1ZXN0NjI2MjgzNTYy
2,292
Fixed typo seperate->separate
{ "login": "laksh9950", "id": 32505743, "node_id": "MDQ6VXNlcjMyNTA1NzQz", "avatar_url": "https://avatars.githubusercontent.com/u/32505743?v=4", "gravatar_id": "", "url": "https://api.github.com/users/laksh9950", "html_url": "https://github.com/laksh9950", "followers_url": "https://api.github.com/users/laksh9950/followers", "following_url": "https://api.github.com/users/laksh9950/following{/other_user}", "gists_url": "https://api.github.com/users/laksh9950/gists{/gist_id}", "starred_url": "https://api.github.com/users/laksh9950/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/laksh9950/subscriptions", "organizations_url": "https://api.github.com/users/laksh9950/orgs", "repos_url": "https://api.github.com/users/laksh9950/repos", "events_url": "https://api.github.com/users/laksh9950/events{/privacy}", "received_events_url": "https://api.github.com/users/laksh9950/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,619,714,453,000
1,619,789,358,000
1,619,787,792,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2292", "html_url": "https://github.com/huggingface/datasets/pull/2292", "diff_url": "https://github.com/huggingface/datasets/pull/2292.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2292.patch", "merged_at": 1619787792000 }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2292/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2292/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2291
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2291/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2291/comments
https://api.github.com/repos/huggingface/datasets/issues/2291/events
https://github.com/huggingface/datasets/pull/2291
871,216,757
MDExOlB1bGxSZXF1ZXN0NjI2MjcyNzE5
2,291
Don't copy recordbatches in memory during a table deepcopy
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,619,713,565,000
1,619,714,075,000
1,619,714,074,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2291", "html_url": "https://github.com/huggingface/datasets/pull/2291", "diff_url": "https://github.com/huggingface/datasets/pull/2291.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2291.patch", "merged_at": 1619714073000 }
Fix issue #2276 and hopefully #2134. The recordbatches of the `IndexedTableMixin` used to speed up queries to the table were copied in memory during a table deepcopy. This resulted in `concatenate_datasets`, `load_from_disk` and other methods always bringing the data into memory. I fixed the copy similarly to #2287 and updated the test to make sure it doesn't happen again (added a test for deepcopy + made sure that the immutable arrow objects are passed to the copied table without being copied). The issue was not caught by our tests because the total allocated bytes value in PyArrow isn't updated when deepcopying recordbatches: the copy in memory wasn't detected. This behavior looks like a bug in PyArrow, so I'll open a ticket on JIRA. Thanks @samsontmr, @TaskManager91 and @mariosasko for the help
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2291/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2291/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2290
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2290/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2290/comments
https://api.github.com/repos/huggingface/datasets/issues/2290/events
https://github.com/huggingface/datasets/pull/2290
871,145,817
MDExOlB1bGxSZXF1ZXN0NjI2MjEyNTIz
2,290
Bbaw egyptian
{ "login": "phiwi", "id": 54144149, "node_id": "MDQ6VXNlcjU0MTQ0MTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/54144149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/phiwi", "html_url": "https://github.com/phiwi", "followers_url": "https://api.github.com/users/phiwi/followers", "following_url": "https://api.github.com/users/phiwi/following{/other_user}", "gists_url": "https://api.github.com/users/phiwi/gists{/gist_id}", "starred_url": "https://api.github.com/users/phiwi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phiwi/subscriptions", "organizations_url": "https://api.github.com/users/phiwi/orgs", "repos_url": "https://api.github.com/users/phiwi/repos", "events_url": "https://api.github.com/users/phiwi/events{/privacy}", "received_events_url": "https://api.github.com/users/phiwi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @phiwi,\r\n\r\nThanks for contributing this nice dataset. If you have any blocking problem or question, do not hesitate to ask here. We are pleased to help you.\r\n\r\nCould you please first synchronize with our master branch? From your branch `bbaw_egyptian`, type:\r\n```\r\ngit fetch upstream master\r\ngit merge upstream/master\r\n```", "Thanks ! Can you check that you have `black==21.4b0` and run `make style` again ? This should fix the \"check_code_quality\" CI issue", "Reformatted with black.", "Hi @phiwi, there are still some minor problems in relation with the tags you used in the dataset card (README.md).\r\n\r\nHere you can find the output of the metadata validator:\r\n```\r\nWARNING:root:❌ Failed to validate 'datasets/bbaw_egyptian/README.md':\r\nCould not validate the metada, found the following errors:\r\n* field 'size_categories':\r\n\t['100K<n<1000K'] are not registered tags for 'size_categories', reference at https://github.com/huggingface/datasets/tree/master/src/datasets/utils/resources/size_categories.json\r\n* field 'task_ids':\r\n\t['machine translation'] are not registered tags for 'task_ids', reference at https://github.com/huggingface/datasets/tree/master/src/datasets/utils/resources/tasks.json\r\n* field 'languages':\r\n\t['eg'] are not registered tags for 'languages', reference at https://github.com/huggingface/datasets/tree/master/src/datasets/utils/resources/languages.json\r\n\r\n``` ", "@albertvillanova corrected :-)", "Thanks, @phiwi. Now all tests should pass green.\r\n\r\nHowever, I think there is still an issue with the language code:\r\n- the code for the Ancient Egyptian is not `ar-EG`\r\n- there is no ISO 639-1 code for the Ancient Egyptian\r\n- there is an ISO 639-2 code: `egy`; but this code will not pass the validation test because it is not in the list of valid codes\r\n\r\nI am not sure what to do in this case... Maybe @lhoestq has an idea? Maybe adding the code to the list? https://github.com/huggingface/datasets/blob/master/src/datasets/utils/resources/languages.json", "I have just checked that in the [list of valid codes](https://github.com/huggingface/datasets/blob/master/src/datasets/utils/resources/languages.json) there are already ISO 639-2 codes. Therefore, I would suggest you to add it to the list:\r\n```\r\n\"egy\": \"Egyptian (Ancient)\",\r\n```\r\nand change it in the dataset card.", "Done.", "Hope, everything is okay right now." ]
1,619,710,078,000
1,620,321,925,000
1,620,321,925,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2290", "html_url": "https://github.com/huggingface/datasets/pull/2290", "diff_url": "https://github.com/huggingface/datasets/pull/2290.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2290.patch", "merged_at": 1620321925000 }
This is the "hieroglyph corpus" that I could unfortunately not contribute during the marathon. I re-extracted it again now, so that it is in the state as used in my paper (seee documentation). I hope it satiesfies your requirements and wish every scientist out their loads of fun deciphering a 5.000 years old language :-)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2290/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2290/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2289
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2289/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2289/comments
https://api.github.com/repos/huggingface/datasets/issues/2289/events
https://github.com/huggingface/datasets/pull/2289
871,118,573
MDExOlB1bGxSZXF1ZXN0NjI2MTg5MDU3
2,289
Allow collaborators to self-assign issues
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "What do you think, @lhoestq? 😉 \r\n\r\nI think this could be another step to facilitate community contributions.", "@lhoestq, it doesn't exist in `transformers`... I picked the idea from `scikit-learn`, where I have previously collaborated...\r\n\r\nAnd sure, this must be documented! I just wanted first to know your opinion..." ]
1,619,708,826,000
1,619,807,296,000
1,619,807,296,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2289", "html_url": "https://github.com/huggingface/datasets/pull/2289", "diff_url": "https://github.com/huggingface/datasets/pull/2289.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2289.patch", "merged_at": 1619807296000 }
Allow collaborators (without write access to the repository) to self-assign issues. In order to self-assign an issue, they have to comment on it with one of the keywords: `#take` or `#self-assign`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2289/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2289/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2288
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2288/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2288/comments
https://api.github.com/repos/huggingface/datasets/issues/2288/events
https://github.com/huggingface/datasets/issues/2288
871,111,235
MDU6SXNzdWU4NzExMTEyMzU=
2,288
Load_dataset for local CSV files
{ "login": "sstojanoska", "id": 17052700, "node_id": "MDQ6VXNlcjE3MDUyNzAw", "avatar_url": "https://avatars.githubusercontent.com/u/17052700?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sstojanoska", "html_url": "https://github.com/sstojanoska", "followers_url": "https://api.github.com/users/sstojanoska/followers", "following_url": "https://api.github.com/users/sstojanoska/following{/other_user}", "gists_url": "https://api.github.com/users/sstojanoska/gists{/gist_id}", "starred_url": "https://api.github.com/users/sstojanoska/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sstojanoska/subscriptions", "organizations_url": "https://api.github.com/users/sstojanoska/orgs", "repos_url": "https://api.github.com/users/sstojanoska/repos", "events_url": "https://api.github.com/users/sstojanoska/events{/privacy}", "received_events_url": "https://api.github.com/users/sstojanoska/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi,\r\n\r\nthis is not a standard CSV file (requires additional preprocessing) so I wouldn't label this as s bug. You could parse the examples with the regex module or the string API to extract the data, but the following approach is probably the easiest (once you load the data):\r\n```python\r\nimport ast\r\n# load the dataset and copy the features\r\ndef process(ex):\r\n return {\"tokens\": ast.literal_eval(ex[\"tokens\"]), \"labels\": ast.literal_eval(ex[\"labels\"])}\r\ndataset = dataset.map(process, features=new_features)\r\n```\r\n", "Hi,\r\n\r\nThanks for the reply.\r\nI have already used ```ast.literal_eval``` to evaluate the string into list, but I was getting another error:\r\n```\r\nArrowInvalid: Could not convert X with type str: tried to convert to int\r\n```\r\nWhy this happens ? Should labels be mapped to their ids and use int instead of str ?", "Yes, just map the labels to their ids." ]
1,619,708,470,000
1,623,764,966,000
1,623,764,966,000
NONE
null
null
null
The method load_dataset fails to correctly load a dataset from CSV. I am working on a token-classification task (POS tagging), where each row in my CSV contains two columns, each of them holding a list of strings. Row example: ```tokens | labels ['I', 'am', 'John'] | ['PRON', 'AUX', 'PROPN'] ``` The method loads each list as a string (e.g. "['I', 'am', 'John']"). To solve this issue, I copied the Datasets.Features, created Sequence types (instead of Value) and tried to cast the features type ``` new_features['tokens'] = Sequence(feature=Value(dtype='string', id=None)) new_features['labels'] = Sequence(feature=ClassLabel(num_classes=len(tag2idx), names=list(unique_tags))) dataset = dataset.cast(new_features) ``` but I got the following error ``` ArrowNotImplementedError: Unsupported cast from string to list using function cast_list ``` I also tried to set the features parameter of the load_dataset method to my new_features, but this fails as well. How can this be solved?
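A minimal sketch of the fix suggested in the comments field above: parse the stringified lists with `ast.literal_eval` and convert the label strings to class ids before attaching the target features. The file name `data.csv` and the tag inventory are hypothetical placeholders.

```python
import ast

from datasets import ClassLabel, Features, Sequence, Value, load_dataset

unique_tags = ["PRON", "AUX", "PROPN"]  # hypothetical label set
dataset = load_dataset("csv", data_files="data.csv", split="train")

new_features = Features(
    {
        "tokens": Sequence(Value("string")),
        "labels": Sequence(ClassLabel(names=unique_tags)),
    }
)

def process(ex):
    # '["I", "am", "John"]' (a string) -> ["I", "am", "John"] (a list)
    tokens = ast.literal_eval(ex["tokens"])
    labels = ast.literal_eval(ex["labels"])
    # ClassLabel expects integer ids, not label strings
    return {"tokens": tokens, "labels": [unique_tags.index(t) for t in labels]}

dataset = dataset.map(process, features=new_features)
```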
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2288/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2288/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2287
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2287/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2287/comments
https://api.github.com/repos/huggingface/datasets/issues/2287/events
https://github.com/huggingface/datasets/pull/2287
871,063,374
MDExOlB1bGxSZXF1ZXN0NjI2MTQ0MTQ3
2,287
Avoid copying table's record batches
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for fixing it. I actually included a similar fix in #2291 along with some updates in tests\r\nI'm closing this one in favor of #2291 if you don't mind.\r\n\r\nThanks again !" ]
1,619,705,701,000
1,619,714,063,000
1,619,714,062,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2287", "html_url": "https://github.com/huggingface/datasets/pull/2287", "diff_url": "https://github.com/huggingface/datasets/pull/2287.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2287.patch", "merged_at": null }
Fixes #2276
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2287/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2287/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2286
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2286/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2286/comments
https://api.github.com/repos/huggingface/datasets/issues/2286/events
https://github.com/huggingface/datasets/pull/2286
871,032,393
MDExOlB1bGxSZXF1ZXN0NjI2MTE5MTE2
2,286
Fix metadata validation with config names
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,619,703,872,000
1,619,705,249,000
1,619,705,248,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2286", "html_url": "https://github.com/huggingface/datasets/pull/2286", "diff_url": "https://github.com/huggingface/datasets/pull/2286.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2286.patch", "merged_at": 1619705248000 }
I noticed in https://github.com/huggingface/datasets/pull/2280 that the metadata validator doesn't parse the tags in the readme properly when they contain the tags per config.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2286/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2286/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2285
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2285/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2285/comments
https://api.github.com/repos/huggingface/datasets/issues/2285/events
https://github.com/huggingface/datasets/issues/2285
871,005,236
MDU6SXNzdWU4NzEwMDUyMzY=
2,285
Help understanding how to build a dataset for language modeling as with the old TextDataset
{ "login": "danieldiezmallo", "id": 46021411, "node_id": "MDQ6VXNlcjQ2MDIxNDEx", "avatar_url": "https://avatars.githubusercontent.com/u/46021411?v=4", "gravatar_id": "", "url": "https://api.github.com/users/danieldiezmallo", "html_url": "https://github.com/danieldiezmallo", "followers_url": "https://api.github.com/users/danieldiezmallo/followers", "following_url": "https://api.github.com/users/danieldiezmallo/following{/other_user}", "gists_url": "https://api.github.com/users/danieldiezmallo/gists{/gist_id}", "starred_url": "https://api.github.com/users/danieldiezmallo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danieldiezmallo/subscriptions", "organizations_url": "https://api.github.com/users/danieldiezmallo/orgs", "repos_url": "https://api.github.com/users/danieldiezmallo/repos", "events_url": "https://api.github.com/users/danieldiezmallo/events{/privacy}", "received_events_url": "https://api.github.com/users/danieldiezmallo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "\r\nI received an answer for this question on the HuggingFace Datasets forum by @lhoestq\r\n\r\nHi !\r\n\r\nIf you want to tokenize line by line, you can use this:\r\n\r\n```\r\nmax_seq_length = 512\r\nnum_proc = 4\r\n\r\ndef tokenize_function(examples):\r\n# Remove empty lines\r\nexamples[\"text\"] = [line for line in examples[\"text\"] if len(line) > 0 and not line.isspace()]\r\nreturn tokenizer(\r\n examples[\"text\"],\r\n truncation=True,\r\n max_length=max_seq_length,\r\n)\r\n\r\ntokenized_dataset = dataset.map(\r\ntokenize_function,\r\nbatched=True,\r\nnum_proc=num_proc,\r\nremove_columns=[\"text\"],\r\n)\r\n```\r\n\r\nThough the TextDataset was doing a different processing by concatenating all the texts and building blocks of size 512. If you need this behavior, then you must apply an additional map function after the tokenization:\r\n\r\n```\r\n# Main data processing function that will concatenate all texts from\r\n# our dataset and generate chunks of max_seq_length.\r\ndef group_texts(examples):\r\n# Concatenate all texts.\r\nconcatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}\r\ntotal_length = len(concatenated_examples[list(examples.keys())[0]])\r\n# We drop the small remainder, we could add padding if the model supported it instead of this drop,\r\n# you can customize this part to your needs.\r\ntotal_length = (total_length // max_seq_length) * max_seq_length\r\n# Split by chunks of max_len.\r\nresult = {\r\n k: [t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)]\r\n for k, t in concatenated_examples.items()\r\n}\r\nreturn result\r\n\r\n# Note that with `batched=True`, this map processes 1,000 texts together,\r\n# so group_texts throws away a remainder for each of those groups of 1,000 texts.\r\n# You can adjust that batch_size here but a higher value might be slower to preprocess.\r\n\r\ntokenized_dataset = tokenized_dataset.map(\r\ngroup_texts,\r\nbatched=True,\r\nnum_proc=num_proc,\r\n)\r\n```\r\n\r\nThis code comes from the processing of the run_mlm.py example script of transformers\r\n\r\n", "Resolved" ]
1,619,702,205,000
1,621,408,965,000
1,621,408,959,000
NONE
null
null
null
Hello, I am trying to load a custom dataset that I will then use for language modeling. The dataset consists of a text file that has a whole document in each line, meaning that each line exceeds the normal 512-token limit of most tokenizers. I would like to understand the process of building a text dataset that tokenizes each line, having previously split the documents in the dataset into lines of a "tokenizable" size, as the old TextDataset class would do. With TextDataset you only had to do the following, and a tokenized dataset without text loss would be available to pass to a DataCollator: ``` model_checkpoint = 'distilbert-base-uncased' from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) from transformers import TextDataset dataset = TextDataset( tokenizer=tokenizer, file_path="path/to/text_file.txt", block_size=512, ) ``` For now, what I have is the following, which, of course, throws an error because each line is longer than the maximum block size in the tokenizer: ``` import datasets dataset = datasets.load_dataset('path/to/text_file.txt') model_checkpoint = 'distilbert-base-uncased' tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) def tokenize_function(examples): return tokenizer(examples["text"]) tokenized_datasets = dataset.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"]) tokenized_datasets ``` So what would be the "standard" way of creating a dataset in the way it was done before? Thank you very much for the help :))
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2285/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2285/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2284
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2284/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2284/comments
https://api.github.com/repos/huggingface/datasets/issues/2284/events
https://github.com/huggingface/datasets/pull/2284
870,932,710
MDExOlB1bGxSZXF1ZXN0NjI2MDM5MDc5
2,284
Initialize Imdb dataset as used in Don't Stop Pretraining Paper
{ "login": "BobbyManion", "id": 52530809, "node_id": "MDQ6VXNlcjUyNTMwODA5", "avatar_url": "https://avatars.githubusercontent.com/u/52530809?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BobbyManion", "html_url": "https://github.com/BobbyManion", "followers_url": "https://api.github.com/users/BobbyManion/followers", "following_url": "https://api.github.com/users/BobbyManion/following{/other_user}", "gists_url": "https://api.github.com/users/BobbyManion/gists{/gist_id}", "starred_url": "https://api.github.com/users/BobbyManion/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BobbyManion/subscriptions", "organizations_url": "https://api.github.com/users/BobbyManion/orgs", "repos_url": "https://api.github.com/users/BobbyManion/repos", "events_url": "https://api.github.com/users/BobbyManion/events{/privacy}", "received_events_url": "https://api.github.com/users/BobbyManion/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,619,697,158,000
1,619,700,874,000
1,619,700,874,000
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2284", "html_url": "https://github.com/huggingface/datasets/pull/2284", "diff_url": "https://github.com/huggingface/datasets/pull/2284.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2284.patch", "merged_at": null }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2284/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2284/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2283
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2283/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2283/comments
https://api.github.com/repos/huggingface/datasets/issues/2283/events
https://github.com/huggingface/datasets/pull/2283
870,926,475
MDExOlB1bGxSZXF1ZXN0NjI2MDM0MDk5
2,283
Initialize imdb dataset from don't stop pretraining paper
{ "login": "BobbyManion", "id": 52530809, "node_id": "MDQ6VXNlcjUyNTMwODA5", "avatar_url": "https://avatars.githubusercontent.com/u/52530809?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BobbyManion", "html_url": "https://github.com/BobbyManion", "followers_url": "https://api.github.com/users/BobbyManion/followers", "following_url": "https://api.github.com/users/BobbyManion/following{/other_user}", "gists_url": "https://api.github.com/users/BobbyManion/gists{/gist_id}", "starred_url": "https://api.github.com/users/BobbyManion/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BobbyManion/subscriptions", "organizations_url": "https://api.github.com/users/BobbyManion/orgs", "repos_url": "https://api.github.com/users/BobbyManion/repos", "events_url": "https://api.github.com/users/BobbyManion/events{/privacy}", "received_events_url": "https://api.github.com/users/BobbyManion/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,619,696,694,000
1,619,697,024,000
1,619,697,024,000
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2283", "html_url": "https://github.com/huggingface/datasets/pull/2283", "diff_url": "https://github.com/huggingface/datasets/pull/2283.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2283.patch", "merged_at": null }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2283/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2283/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2282
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2282/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2282/comments
https://api.github.com/repos/huggingface/datasets/issues/2282/events
https://github.com/huggingface/datasets/pull/2282
870,900,332
MDExOlB1bGxSZXF1ZXN0NjI2MDEyMzM3
2,282
Initialize imdb dataset from don't stop pretraining paper
{ "login": "BobbyManion", "id": 52530809, "node_id": "MDQ6VXNlcjUyNTMwODA5", "avatar_url": "https://avatars.githubusercontent.com/u/52530809?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BobbyManion", "html_url": "https://github.com/BobbyManion", "followers_url": "https://api.github.com/users/BobbyManion/followers", "following_url": "https://api.github.com/users/BobbyManion/following{/other_user}", "gists_url": "https://api.github.com/users/BobbyManion/gists{/gist_id}", "starred_url": "https://api.github.com/users/BobbyManion/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BobbyManion/subscriptions", "organizations_url": "https://api.github.com/users/BobbyManion/orgs", "repos_url": "https://api.github.com/users/BobbyManion/repos", "events_url": "https://api.github.com/users/BobbyManion/events{/privacy}", "received_events_url": "https://api.github.com/users/BobbyManion/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,619,695,076,000
1,619,696,631,000
1,619,696,631,000
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2282", "html_url": "https://github.com/huggingface/datasets/pull/2282", "diff_url": "https://github.com/huggingface/datasets/pull/2282.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2282.patch", "merged_at": null }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2282/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2282/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2281
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2281/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2281/comments
https://api.github.com/repos/huggingface/datasets/issues/2281/events
https://github.com/huggingface/datasets/pull/2281
870,792,784
MDExOlB1bGxSZXF1ZXN0NjI1OTI2MjAw
2,281
Update multi_woz_v22 checksum
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,619,687,351,000
1,619,703,695,000
1,619,703,694,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2281", "html_url": "https://github.com/huggingface/datasets/pull/2281", "diff_url": "https://github.com/huggingface/datasets/pull/2281.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2281.patch", "merged_at": 1619703694000 }
Fix issue https://github.com/huggingface/datasets/issues/1876. The files were changed in https://github.com/budzianowski/multiwoz/pull/72
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2281/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2281/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2280
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2280/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2280/comments
https://api.github.com/repos/huggingface/datasets/issues/2280/events
https://github.com/huggingface/datasets/pull/2280
870,780,431
MDExOlB1bGxSZXF1ZXN0NjI1OTE2Mzcy
2,280
Fixed typo seperate->separate
{ "login": "laksh9950", "id": 32505743, "node_id": "MDQ6VXNlcjMyNTA1NzQz", "avatar_url": "https://avatars.githubusercontent.com/u/32505743?v=4", "gravatar_id": "", "url": "https://api.github.com/users/laksh9950", "html_url": "https://github.com/laksh9950", "followers_url": "https://api.github.com/users/laksh9950/followers", "following_url": "https://api.github.com/users/laksh9950/following{/other_user}", "gists_url": "https://api.github.com/users/laksh9950/gists{/gist_id}", "starred_url": "https://api.github.com/users/laksh9950/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/laksh9950/subscriptions", "organizations_url": "https://api.github.com/users/laksh9950/orgs", "repos_url": "https://api.github.com/users/laksh9950/repos", "events_url": "https://api.github.com/users/laksh9950/events{/privacy}", "received_events_url": "https://api.github.com/users/laksh9950/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Thanks for the fix :)\r\nThe CI fail isn't related to your PR. I opened a PR #2286 to fix the CI.\r\nWe'll wait for #2286 to be merged to master first if you don't mind", "The PR has been merged ! Feel free to merge master into your branch to fix the CI" ]
1,619,686,546,000
1,619,714,482,000
1,619,714,476,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2280", "html_url": "https://github.com/huggingface/datasets/pull/2280", "diff_url": "https://github.com/huggingface/datasets/pull/2280.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2280.patch", "merged_at": null }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2280/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2280/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2279
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2279/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2279/comments
https://api.github.com/repos/huggingface/datasets/issues/2279/events
https://github.com/huggingface/datasets/issues/2279
870,431,662
MDU6SXNzdWU4NzA0MzE2NjI=
2,279
Compatibility with Ubuntu 18 and GLIBC 2.27?
{ "login": "tginart", "id": 11379648, "node_id": "MDQ6VXNlcjExMzc5NjQ4", "avatar_url": "https://avatars.githubusercontent.com/u/11379648?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tginart", "html_url": "https://github.com/tginart", "followers_url": "https://api.github.com/users/tginart/followers", "following_url": "https://api.github.com/users/tginart/following{/other_user}", "gists_url": "https://api.github.com/users/tginart/gists{/gist_id}", "starred_url": "https://api.github.com/users/tginart/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tginart/subscriptions", "organizations_url": "https://api.github.com/users/tginart/orgs", "repos_url": "https://api.github.com/users/tginart/repos", "events_url": "https://api.github.com/users/tginart/events{/privacy}", "received_events_url": "https://api.github.com/users/tginart/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "From the trace this seems like an error in the tokenizer library instead.\r\n\r\nDo you mind opening an issue at https://github.com/huggingface/tokenizers instead?", "Hi @tginart, thanks for reporting.\r\n\r\nI think this issue is already open at `tokenizers` library: https://github.com/huggingface/tokenizers/issues/685" ]
1,619,647,687,000
1,619,682,162,000
1,619,682,162,000
NONE
null
null
null
## Describe the bug For use on Ubuntu systems, it seems that datasets requires GLIBC 2.29. However, Ubuntu 18 runs with GLIBC 2.27 and it seems [non-trivial to upgrade GLIBC to 2.29 for Ubuntu 18 users](https://www.digitalocean.com/community/questions/how-install-glibc-2-29-or-higher-in-ubuntu-18-04). I'm not sure if there is anything that can be done about this, but I'd like to confirm that using huggingface/datasets requires either an upgrade to Ubuntu 19/20 or a hand-rolled install of a higher version of GLIBC. ## Steps to reproduce the bug 1. clone the transformers repo 2. move to examples/pytorch/language-modeling 3. run example command: ```python run_clm.py --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir /tmp/test-clm``` ## Expected results As described in the transformers repo. ## Actual results ```Traceback (most recent call last): File "run_clm.py", line 34, in <module> from transformers import ( File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/__init__.py", line 2487, in __getattr__ return super().__getattr__(name) File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/file_utils.py", line 1699, in __getattr__ module = self._get_module(self._class_to_module[name]) File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/__init__.py", line 2481, in _get_module return importlib.import_module("." + module_name, self.__name__) File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/models/__init__.py", line 19, in <module> from . import ( File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/models/layoutlm/__init__.py", line 23, in <module> from .tokenization_layoutlm import LayoutLMTokenizer File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/models/layoutlm/tokenization_layoutlm.py", line 19, in <module> from ..bert.tokenization_bert import BertTokenizer File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/models/bert/tokenization_bert.py", line 23, in <module> from ...tokenization_utils import PreTrainedTokenizer, _is_control, _is_punctuation, _is_whitespace File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 26, in <module> from .tokenization_utils_base import ( File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 68, in <module> from tokenizers import AddedToken File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/tokenizers/__init__.py", line 79, in <module> from .tokenizers import ( ImportError: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/tokenizers/tokenizers.cpython-37m-x86_64-linux-gnu.so) ``` ## Versions Paste the output of the following code: ``` - Datasets: 1.6.1 - Python: 3.7.10 (default, Feb 26 2021, 18:47:35) [GCC 7.3.0] - Platform: Linux-4.15.0-128-generic-x86_64-with-debian-buster-sid ```
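A quick way to confirm the system's C library version before attempting an upgrade (a minimal sketch; assumes a glibc-based Linux distribution):

```python
# Minimal sketch: report the C library flavor and version on Linux.
import platform

lib, version = platform.libc_ver()
print(lib, version)  # e.g. "glibc 2.27" on Ubuntu 18.04
```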
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2279/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2279/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2278
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2278/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2278/comments
https://api.github.com/repos/huggingface/datasets/issues/2278/events
https://github.com/huggingface/datasets/issues/2278
870,088,059
MDU6SXNzdWU4NzAwODgwNTk=
2,278
Loss result in GPTNeoForCausalLM
{ "login": "Yossillamm", "id": 51174606, "node_id": "MDQ6VXNlcjUxMTc0NjA2", "avatar_url": "https://avatars.githubusercontent.com/u/51174606?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Yossillamm", "html_url": "https://github.com/Yossillamm", "followers_url": "https://api.github.com/users/Yossillamm/followers", "following_url": "https://api.github.com/users/Yossillamm/following{/other_user}", "gists_url": "https://api.github.com/users/Yossillamm/gists{/gist_id}", "starred_url": "https://api.github.com/users/Yossillamm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Yossillamm/subscriptions", "organizations_url": "https://api.github.com/users/Yossillamm/orgs", "repos_url": "https://api.github.com/users/Yossillamm/repos", "events_url": "https://api.github.com/users/Yossillamm/events{/privacy}", "received_events_url": "https://api.github.com/users/Yossillamm/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Hi ! I think you might have to ask on the `transformers` repo on or the forum at https://discuss.huggingface.co/\r\n\r\nClosing since it's not related to this library" ]
1,619,624,392,000
1,620,317,663,000
1,620,317,663,000
NONE
null
null
null
Is there any way to get the "loss" and "logits" results in the GPT-Neo API?
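For reference, a hedged sketch of how both values are typically obtained from a causal LM in `transformers`; the checkpoint name is illustrative, and passing `labels` is what makes the model return a loss alongside the logits:

```python
# Sketch: a causal LM returns loss (when labels are given) and logits.
import torch
from transformers import AutoTokenizer, GPTNeoForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")

inputs = tokenizer("Hello, world!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, labels=inputs["input_ids"])

print(outputs.loss)          # scalar language-modeling loss
print(outputs.logits.shape)  # (batch_size, sequence_length, vocab_size)
```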
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2278/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2278/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2277
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2277/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2277/comments
https://api.github.com/repos/huggingface/datasets/issues/2277/events
https://github.com/huggingface/datasets/pull/2277
870,071,994
MDExOlB1bGxSZXF1ZXN0NjI1MzI5NjIz
2,277
Create CacheManager
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 2851292821, "node_id": "MDU6TGFiZWwyODUxMjkyODIx", "url": "https://api.github.com/repos/huggingface/datasets/labels/refactoring", "name": "refactoring", "color": "B67A40", "default": false, "description": "Restructuring existing code without changing its external behavior" } ]
open
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/8", "html_url": "https://github.com/huggingface/datasets/milestone/8", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/8/labels", "id": 6968069, "node_id": "MI_kwDODunzps4AalMF", "number": 8, "title": "1.12", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 4, "closed_issues": 2, "state": "open", "created_at": 1626881696000, "updated_at": 1634120793000, "due_on": 1630306800000, "closed_at": null }
[]
1,619,623,422,000
1,630,560,811,000
null
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2277", "html_url": "https://github.com/huggingface/datasets/pull/2277", "diff_url": "https://github.com/huggingface/datasets/pull/2277.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2277.patch", "merged_at": null }
Perform refactoring to decouple cache functionality (method `as_dataset`).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2277/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2277/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2276
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2276/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2276/comments
https://api.github.com/repos/huggingface/datasets/issues/2276/events
https://github.com/huggingface/datasets/issues/2276
870,010,511
MDU6SXNzdWU4NzAwMTA1MTE=
2,276
concatenate_datasets loads all the data into memory
{ "login": "TaskManager91", "id": 7063207, "node_id": "MDQ6VXNlcjcwNjMyMDc=", "avatar_url": "https://avatars.githubusercontent.com/u/7063207?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TaskManager91", "html_url": "https://github.com/TaskManager91", "followers_url": "https://api.github.com/users/TaskManager91/followers", "following_url": "https://api.github.com/users/TaskManager91/following{/other_user}", "gists_url": "https://api.github.com/users/TaskManager91/gists{/gist_id}", "starred_url": "https://api.github.com/users/TaskManager91/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TaskManager91/subscriptions", "organizations_url": "https://api.github.com/users/TaskManager91/orgs", "repos_url": "https://api.github.com/users/TaskManager91/repos", "events_url": "https://api.github.com/users/TaskManager91/events{/privacy}", "received_events_url": "https://api.github.com/users/TaskManager91/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Therefore, when I try to concatenate larger datasets (5x 35GB data sets) I also get an out of memory error, since over 90GB of swap space was used at the time of the crash:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nMemoryError Traceback (most recent call last)\r\n<ipython-input-6-9766d77530b9> in <module>\r\n 20 print(file_name)\r\n 21 cv_batch = load_from_disk(file_name)\r\n---> 22 cv_sampled_train = concatenate_datasets([cv_sampled_train, cv_batch])\r\n 23 \r\n 24 print(\"Saving to disk!\")\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\arrow_dataset.py in concatenate_datasets(dsets, info, split, axis)\r\n 2891 \r\n 2892 # Concatenate tables\r\n-> 2893 table = concat_tables([dset._data for dset in dsets if len(dset._data) > 0], axis=axis)\r\n 2894 table = update_metadata_with_features(table, None)\r\n 2895 \r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\table.py in concat_tables(tables, axis)\r\n 837 if len(tables) == 1:\r\n 838 return tables[0]\r\n--> 839 return ConcatenationTable.from_tables(tables, axis=axis)\r\n 840 \r\n 841 \r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\table.py in from_tables(cls, tables, axis)\r\n 697 return result\r\n 698 \r\n--> 699 blocks = to_blocks(tables[0])\r\n 700 for table in tables[1:]:\r\n 701 table_blocks = to_blocks(table)\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\table.py in to_blocks(table)\r\n 669 return [[InMemoryTable(table)]]\r\n 670 elif isinstance(table, ConcatenationTable):\r\n--> 671 return copy.deepcopy(table.blocks)\r\n 672 else:\r\n 673 return [[table]]\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 151 copier = getattr(x, \"__deepcopy__\", None)\r\n 152 if copier is not None:\r\n--> 153 y = copier(memo)\r\n 154 else:\r\n 155 reductor = dispatch_table.get(cls)\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\table.py in __deepcopy__(self, memo)\r\n 143 # by adding it to the memo, self.table won't be copied\r\n 144 memo[id(self.table)] = self.table\r\n--> 145 return _deepcopy(self, memo)\r\n 146 \r\n 147 def __getstate__(self):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\table.py in _deepcopy(x, memo)\r\n 62 memo[id(x)] = result\r\n 63 for k, v in x.__dict__.items():\r\n---> 64 setattr(result, k, copy.deepcopy(v, memo))\r\n 65 return result\r\n 66 \r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if 
issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 170 y = x\r\n 171 else:\r\n--> 172 y = _reconstruct(x, memo, *rv)\r\n 173 \r\n 174 # If is its own copy, don't memoize.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)\r\n 262 if deep and args:\r\n 263 args = (deepcopy(arg, memo) for arg in args)\r\n--> 264 y = func(*args)\r\n 265 if deep:\r\n 266 memo[id(x)] = y\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in <genexpr>(.0)\r\n 261 deep = memo is not None\r\n 262 if deep and args:\r\n--> 263 args = (deepcopy(arg, memo) for arg in args)\r\n 264 y = func(*args)\r\n 265 if deep:\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 170 y = x\r\n 171 else:\r\n--> 172 y = _reconstruct(x, memo, *rv)\r\n 173 \r\n 174 # If is its own copy, don't memoize.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)\r\n 262 if deep and args:\r\n 263 args = (deepcopy(arg, memo) for arg in args)\r\n--> 264 y = func(*args)\r\n 265 if deep:\r\n 266 memo[id(x)] = y\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in <genexpr>(.0)\r\n 261 deep = memo is not None\r\n 262 if deep and args:\r\n--> 263 args = (deepcopy(arg, memo) for arg in args)\r\n 264 y = func(*args)\r\n 265 if deep:\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_tuple(x, memo, deepcopy)\r\n 208 \r\n 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):\r\n--> 210 y = [deepcopy(a, memo) for a in x]\r\n 211 # We're not going to put the tuple in the memo, but it's still important we\r\n 212 # check for it, in case the tuple contains recursive mutable structures.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in <listcomp>(.0)\r\n 208 \r\n 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):\r\n--> 210 y = [deepcopy(a, memo) for a in x]\r\n 211 # We're not going to put the tuple in the memo, but it's still important we\r\n 212 # check for it, in case the tuple contains recursive mutable structures.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = 
_deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_tuple(x, memo, deepcopy)\r\n 208 \r\n 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):\r\n--> 210 y = [deepcopy(a, memo) for a in x]\r\n 211 # We're not going to put the tuple in the memo, but it's still important we\r\n 212 # check for it, in case the tuple contains recursive mutable structures.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in <listcomp>(.0)\r\n 208 \r\n 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):\r\n--> 210 y = [deepcopy(a, memo) for a in x]\r\n 211 # We're not going to put the tuple in the memo, but it's still important we\r\n 212 # check for it, in case the tuple contains recursive mutable structures.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 159 reductor = getattr(x, \"__reduce_ex__\", None)\r\n 160 if reductor is not None:\r\n--> 161 rv = reductor(4)\r\n 162 else:\r\n 163 reductor = getattr(x, \"__reduce__\", None)\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\pyarrow\\io.pxi in pyarrow.lib.Buffer.__reduce_ex__()\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\pyarrow\\io.pxi in pyarrow.lib.Buffer.to_pybytes()\r\n\r\nMemoryError: \r\n\r\n```", "Hi ! this looks like an important issue. Let me try to reproduce this.\r\nCc @samsontmr this might be related to the memory issue you have in #2134 ", "@lhoestq Just went to open a similar issue.\r\n\r\nIt seems like deep copying (tested on master) the dataset object writes the table's record batches (`dset._data._batches`) into RAM.\r\n\r\nTo find the bug, I modified the `_deepcopy` function in `table.py` as follows:\r\n```python\r\ndef _deepcopy(x, memo: dict):\r\n \"\"\"deepcopy a regular class instance\"\"\"\r\n import psutil # pip install this package\r\n import time\r\n cls = x.__class__\r\n result = cls.__new__(cls)\r\n memo[id(x)] = result\r\n for k, v in x.__dict__.items():\r\n print(\"=\"* 50)\r\n print(\"Current memory:\", psutil.virtual_memory().percent)\r\n print(f\"Saving object {k} with value {v}\")\r\n setattr(result, k, copy.deepcopy(v, memo))\r\n time.sleep(5)\r\n print(\"Memory after copy:\", psutil.virtual_memory().percent)\r\n return result\r\n```\r\nTest script:\r\n```python\r\nimport copy\r\nfrom datasets import load_dataset\r\nbk = load_dataset(\"bookcorpus\", split=\"train\")\r\nbk_copy = copy.deepcopy(bk)\r\n```", "Thanks for the insights @mariosasko ! I'm working on a fix.\r\nSince this is a big issue I'll make a patch release as soon as this is fixed", "Hi @samsontmr @TaskManager91 the fix is on the master branch, feel free to install `datasets` from source and let us know if you still have issues", "We just released `datasets` 1.6.2 that includes the fix :)", "thanks it works like a charm! :)" ]
1,619,620,041,000
1,620,031,315,000
1,620,031,315,000
NONE
null
null
null
## Describe the bug When I try to concatenate 2 datasets (10GB each), the entire data is loaded into memory instead of being written directly to disk. Interestingly, this happens when trying to save the new dataset to disk or when concatenating it again. ![image](https://user-images.githubusercontent.com/7063207/116420321-2b21b480-a83e-11eb-9006-8f6ca729fb6f.png) ## Steps to reproduce the bug ```python from datasets import concatenate_datasets, load_from_disk test_sampled_pro = load_from_disk("test_sampled_pro") val_sampled_pro = load_from_disk("val_sampled_pro") big_set = concatenate_datasets([test_sampled_pro, val_sampled_pro]) # Loaded to memory big_set.save_to_disk("big_set") # Loaded to memory big_set = concatenate_datasets([big_set, val_sampled_pro]) ``` ## Expected results The data should be loaded into memory in batches and then saved directly to disk. ## Actual results The entire dataset is loaded into memory and then saved to the hard disk. ## Versions Paste the output of the following code: ```python - Datasets: 1.6.1 - Python: 3.8.8 (default, Apr 13 2021, 19:58:26) [GCC 7.3.0] - Platform: Linux-5.4.72-microsoft-standard-WSL2-x86_64-with-glibc2.10 ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2276/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2276/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2275
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2275/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2275/comments
https://api.github.com/repos/huggingface/datasets/issues/2275/events
https://github.com/huggingface/datasets/issues/2275
869,378,311
MDU6SXNzdWU4NjkzNzgzMTE=
2,275
SNLI dataset has labels of -1
{ "login": "puzzler10", "id": 17426779, "node_id": "MDQ6VXNlcjE3NDI2Nzc5", "avatar_url": "https://avatars.githubusercontent.com/u/17426779?v=4", "gravatar_id": "", "url": "https://api.github.com/users/puzzler10", "html_url": "https://github.com/puzzler10", "followers_url": "https://api.github.com/users/puzzler10/followers", "following_url": "https://api.github.com/users/puzzler10/following{/other_user}", "gists_url": "https://api.github.com/users/puzzler10/gists{/gist_id}", "starred_url": "https://api.github.com/users/puzzler10/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/puzzler10/subscriptions", "organizations_url": "https://api.github.com/users/puzzler10/orgs", "repos_url": "https://api.github.com/users/puzzler10/repos", "events_url": "https://api.github.com/users/puzzler10/events{/privacy}", "received_events_url": "https://api.github.com/users/puzzler10/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @puzzler10, \r\nThose examples where `gold_label` field was empty, -1 label was alloted to it. In order to remove it you can filter the samples from train/val/test splits. Here's how you can drop those rows from the dataset:\r\n`dataset = load_dataset(\"snli\")`\r\n`dataset_test_filter = dataset['test'].filter(lambda example: example['label'] != -1)`\r\n\r\nI agree it should have been mentioned in the documentation. I'll raise a PR regarding the same. Thanks for pointing out!" ]
1,619,569,945,000
1,621,258,458,000
1,621,258,458,000
NONE
null
null
null
There are a number of rows with a label of -1 in the SNLI dataset. The dataset descriptions [here](https://nlp.stanford.edu/projects/snli/) and [here](https://github.com/huggingface/datasets/tree/master/datasets/snli) don't list -1 as a label possibility, and neither does the dataset viewer. As examples, see index 107 or 124 of the test set. It isn't clear what these labels mean. I found a [line of code](https://github.com/huggingface/datasets/blob/80e59ef178d3bb2090d091bc32315c655eb0633d/datasets/snli/snli.py#L94) that seems to put them in, but it is still unclear why they are there. The current workaround is to just drop the rows from any model being trained. Perhaps the documentation should be updated.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2275/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2275/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2274
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2274/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2274/comments
https://api.github.com/repos/huggingface/datasets/issues/2274/events
https://github.com/huggingface/datasets/pull/2274
869,186,276
MDExOlB1bGxSZXF1ZXN0NjI0NTkyMjQx
2,274
Always update metadata in arrow schema
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,619,551,317,000
1,619,690,271,000
1,619,690,270,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2274", "html_url": "https://github.com/huggingface/datasets/pull/2274", "diff_url": "https://github.com/huggingface/datasets/pull/2274.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2274.patch", "merged_at": 1619690270000 }
We store a redundant copy of the features in the metadata of the schema of the arrow table. This is used to recover the features when doing `Dataset.from_file`. These metadata are updated after each transform that changes the feature types. For each function that transforms the feature types of the dataset, I added a step in the tests to make sure the metadata in the arrow schema are up to date. I also added a line to update the metadata directly in the `Dataset.__init__` method. This way even a dataset instantiated with `__init__` will have a table with the right metadata. cc @mariosasko
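A minimal sketch of what this keeps in sync (the `"huggingface"` metadata key and its JSON layout reflect how the library stores this information; treat the exact layout as illustrative):

```python
# Sketch: the feature types are mirrored as JSON in the arrow schema
# metadata so that Dataset.from_file can recover them later.
import json
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})
metadata = ds.data.schema.metadata  # dict of bytes keys -> bytes values
features_in_schema = json.loads(metadata[b"huggingface"])["info"]["features"]
print(features_in_schema)  # should always match ds.features
```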
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2274/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2274/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2273
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2273/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2273/comments
https://api.github.com/repos/huggingface/datasets/issues/2273/events
https://github.com/huggingface/datasets/pull/2273
869,046,290
MDExOlB1bGxSZXF1ZXN0NjI0NDcxODc1
2,273
Added CUAD metrics
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,619,542,152,000
1,619,704,787,000
1,619,704,787,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2273", "html_url": "https://github.com/huggingface/datasets/pull/2273", "diff_url": "https://github.com/huggingface/datasets/pull/2273.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2273.patch", "merged_at": 1619704787000 }
`EM`, `F1`, `AUPR`, `Precision@80%Recall`, and `Precision@90%Recall` metrics are supported for CUAD.
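A hedged usage sketch; the exact input format below is an assumption modeled on the SQuAD-style metrics (with `prediction_text` as a list of candidate spans so that `AUPR` can be computed):

```python
# Sketch: computing the CUAD metrics; field names are assumptions.
from datasets import load_metric

cuad_metric = load_metric("cuad")
predictions = [{"id": "doc_1", "prediction_text": ["The term is 5 years."]}]
references = [{
    "id": "doc_1",
    "answers": {"text": ["The term is 5 years."], "answer_start": [100]},
}]
results = cuad_metric.compute(predictions=predictions, references=references)
print(results)  # e.g. exact_match, f1, aupr, prec_at_80_recall, prec_at_90_recall
```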
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2273/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2273/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2272
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2272/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2272/comments
https://api.github.com/repos/huggingface/datasets/issues/2272/events
https://github.com/huggingface/datasets/issues/2272
869,017,977
MDU6SXNzdWU4NjkwMTc5Nzc=
2,272
Bug in Dataset.class_encode_column
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "This has been fixed in this commit: https://github.com/huggingface/datasets/pull/2254/commits/88676c930216cd4cc31741b99827b477d2b46cb6\r\n\r\nIt was introduced in #2246 : using map with `input_columns` doesn't return the other columns anymore" ]
1,619,539,998,000
1,619,787,267,000
1,619,787,267,000
MEMBER
null
null
null
## Describe the bug All columns except the one passed to `Dataset.class_encode_column` are discarded. ## Expected results All the original columns should be kept. This needs regression tests.
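A minimal repro sketch of the expected behavior (column names are illustrative):

```python
# Sketch: class-encoding one column must keep all the other columns.
from datasets import Dataset

ds = Dataset.from_dict({"text": ["good", "bad"], "label": ["pos", "neg"]})
ds = ds.class_encode_column("label")
assert ds.column_names == ["text", "label"]  # the bug dropped "text"
print(ds.features["label"])  # a ClassLabel feature, e.g. names=['neg', 'pos']
```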
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2272/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2272/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2271
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2271/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2271/comments
https://api.github.com/repos/huggingface/datasets/issues/2271/events
https://github.com/huggingface/datasets/issues/2271
869,002,141
MDU6SXNzdWU4NjkwMDIxNDE=
2,271
Synchronize table metadata with features
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "See PR #2274 " ]
1,619,538,913,000
1,619,614,105,000
null
MEMBER
null
null
null
**Is your feature request related to a problem? Please describe.** As pointed out in this [comment](https://github.com/huggingface/datasets/pull/2145#discussion_r621326767): > Metadata stored in the schema is just redundant information regarding the feature types. It is used when calling Dataset.from_file to know which feature types to use. These metadata are stored in the schema of the pyarrow table by using `update_metadata_with_features`. However, this is something that's almost never tested properly. **Describe the solution you'd like** We should find a way to always make sure that the metadata (in `self.data.schema.metadata`) are synced with the actual feature types (in `self.info.features`).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2271/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2271/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2270
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2270/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2270/comments
https://api.github.com/repos/huggingface/datasets/issues/2270/events
https://github.com/huggingface/datasets/pull/2270
868,913,660
MDExOlB1bGxSZXF1ZXN0NjI0MzU5Njky
2,270
Fix iterable interface expected by numpy
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "It's been fixed in this commit: https://github.com/huggingface/datasets/commit/549110e08238b3716a5904667095fb003acda54e\r\n\r\nBasically #2246 broke querying an index with a simple iterable.\r\nWith the fix, it's again possible to use iterables and we can keep RandIter as it is.\r\n\r\nClosing since the fix is already on master" ]
1,619,534,156,000
1,619,631,567,000
1,619,631,567,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2270", "html_url": "https://github.com/huggingface/datasets/pull/2270", "diff_url": "https://github.com/huggingface/datasets/pull/2270.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2270.patch", "merged_at": null }
NumPy expects the old iterable interface with `__getitem__` instead of `__iter__`.
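A minimal sketch of the distinction (class name is illustrative): an object exposing `__len__` and `__getitem__` is consumed by `np.array` as a sequence, while a bare iterator is not.

```python
# Sketch: np.array handles the old sequence protocol, but wraps a bare
# iterator in a 0-d object array instead of materializing its values.
import numpy as np

class Indices:  # old-style sequence protocol: __len__ + __getitem__
    def __init__(self, values):
        self._values = values
    def __len__(self):
        return len(self._values)
    def __getitem__(self, i):
        return self._values[i]

print(np.array(Indices([4, 0, 2])))  # array([4, 0, 2])
print(np.array(iter([4, 0, 2])))     # 0-d object array wrapping the iterator
```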
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2270/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2270/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2269
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2269/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2269/comments
https://api.github.com/repos/huggingface/datasets/issues/2269/events
https://github.com/huggingface/datasets/pull/2269
868,878,468
MDExOlB1bGxSZXF1ZXN0NjI0MzMwNDA3
2,269
Fix query table with iterable
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,619,531,978,000
1,619,533,317,000
1,619,533,316,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2269", "html_url": "https://github.com/huggingface/datasets/pull/2269", "diff_url": "https://github.com/huggingface/datasets/pull/2269.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2269.patch", "merged_at": 1619533316000 }
The benchmark runs are failing on master because they try to use an iterable to query the dataset. However, there's currently an issue caused by the use of `np.array` instead of `np.fromiter` on the iterable. This PR fixes it.
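For context, a minimal sketch of the working call (values are illustrative):

```python
# Sketch: np.fromiter consumes a plain iterator correctly, which is why
# it replaces np.array for iterables of row indices here.
import numpy as np

indices = iter([4, 0, 2])
print(np.fromiter(indices, dtype=np.int64))  # array([4, 0, 2])
```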
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2269/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2269/timeline
null
true